author     Patrick Williams <patrick@stwcx.xyz>  2023-05-04 05:37:45 +0300
committer  Patrick Williams <patrick@stwcx.xyz>  2023-05-04 05:38:27 +0300
commit     841583d6ba5918b60868b708ff0b89cf0409efa7 (patch)
tree       49e155d7d6c2ea5a7081fc4dcbc51cb0a522e120
parent     61a2d43a172b70aa34fd7ec33fc048a211fa5c4c (diff)
download   openbmc-841583d6ba5918b60868b708ff0b89cf0409efa7.tar.xz
subtree updates

poky: 90a6f6a110..a631bfc3a3:
  Alban Bedel (1): systemd: Fix systemd when used with busybox less
  Alex Kiernan (1): openssl: upgrade 1.1.1q to 1.1.1s
  Alexander Kanavin (12): tzdata: update to 2022d linux-firmware: upgrade 20220913 -> 20221012 tzdata: update 2022d -> 2022g linux-firmware: upgrade 20221109 -> 20221214 selftest/virgl: use pkg-config from the host oeqa/qemurunner: do not use Popen.poll() when terminating runqemu with a signal vim: update 9.0.1211 -> 9.0.1293 to resolve open CVEs linux-firmware: upgrade 20221214 -> 20230117 linux-firmware: upgrade 20230117 -> 20230210 wireless-regdb: upgrade 2022.08.12 -> 2023.02.13 apr: update 1.7.0 -> 1.7.2 apr-util: update 1.6.1 -> 1.6.3
  Alexey Smirnov (1): classes: make TOOLCHAIN more permissive for kernel
  Andrej Valek (1): libarchive: fix CVE-2022-26280
  Antonin Godard (2): busybox: always start do_compile with orig config files busybox: rm temporary files if do_compile was interrupted
  Bartosz Golaszewski (1): bluez5: add dbus to RDEPENDS
  Benoît Mauduit (1): lib/oe/reproducible: Use git log without gpg signature
  Bhabu Bindu (4): libxml2: Fix CVE-2022-40303 libxml2: Fix CVE-2022-40304 ffmpeg: Fix CVE-2022-3109 ffmpeg: fix for CVE-2022-3341
  Bruce Ashfield (12): linux-yocto/5.4: update to v5.4.216 linux-yocto/5.4: update to v5.4.219 linux-yocto/5.4: update to v5.4.221 linux-yocto/5.4: update to v5.4.224 linux-yocto/5.4: update to v5.4.225 linux-yocto/5.4: update to v5.4.228 linux-yocto/5.4: update to v5.4.229 linux-yocto/5.4: update to v5.4.230 linux-yocto/5.4: update to v5.4.231 linux-yocto/5.4: update to v5.4.233 linux-yocto/5.4: update to v5.4.234 linux-yocto/5.4: update to v5.4.237
  Changqing Li (1): base.bbclass: Fix way to check ccache path
  Charlie Davies (1): bitbake: bitbake: fetch/git: use shlex.quote() to support spaces in SRC_URI url
  Chee Yang Lee (6): libksba: fix CVE-2022-47629 tiff: fix multiple CVEs ghostscript: add CVE tag for check-stack-limits-after-function-evalution.patch libksba: fix CVE-2022-3515 qemu: fix multple CVEs git: ignore CVE-2023-22743
  Chen Qi (3): kernel.bbclass: make KERNEL_DEBUG_TIMESTAMPS work at rebuild psplash: consider the situation of psplash not exist for systemd bc: extend to nativesdk
  Christoph Lauer (1): populate_sdk_base: add zip options
  Daniel McGregor (1): coreutils: add openssl PACKAGECONFIG
  Dmitry Baryshkov (3): linux-firmware: upgrade 20221012 -> 20221109 linux-firmware: properly set license for all Qualcomm firmware linux-firmware: add yamato fw files to qcom-adreno-a2xx package
  Frank de Brabander (1): cve-update-db-native: add timeout to urlopen() calls
  Gaurav Gupta (1): qemu: fix build error introduced by CVE-2021-3929 fix
  Geoffrey GIRY (1): cve-check: Fix false negative version issue
  Harald Seiler (1): opkg: Set correct info_dir and status_file in opkg.conf
  Hitendra Prajapati (21): dhcp: Fix CVE-2022-2928 & CVE-2022-2929 qemu: CVE-2021-3750 hcd-ehci: DMA reentrancy issue leads to use-after-free golang: CVE-2022-2880 ReverseProxy should not forward unparseable query parameters libX11: CVE-2022-3554 Fix memory leak bluez: CVE-2022-3637 A DoS exists in monitor/jlink.c sudo: CVE-2022-43995 heap-based overflow with very small passwords libarchive: CVE-2022-36227 NULL pointer dereference in archive_write.c sysstat: fix CVE-2022-39377 golang: CVE-2022-41715 regexp/syntax: limit memory used by parsing regexps grub2: CVE-2022-28735 shim_lock verifier allows non-kernel files to be loaded grub2: Fix CVE-2022-2601 & CVE-2022-3775 xserver-xorg: Fix Multiple CVEs git: CVE-2022-23521 gitattributes parsing integer overflow curl: fix CVE-2022-43552 Use-after-free triggered by an HTTP proxy deny response QEMU: CVE-2022-4144 QXL: qxl_phys2virt unsafe address translation can lead to out-of-bounds read curl: CVE-2023-23916 HTTP multi-header compression denial of service qemu: fix compile error which imported by CVE-2022-4144 ruby: CVE-2023-28756 ReDoS vulnerability in Time curl: CVE-2023-27534 SFTP path ~ resolving discrepancy curl: CVE-2023-27538 fix SSH connection too eager reuse screen: CVE-2023-24626 allows sending SIGHUP to arbitrary PIDs
  Hugo SIMELIERE (2): bluez5: Exclude CVE-2022-39177 from cve-check openssl: upgrade 1.1.1s to 1.1.1t
  Jagadeesh Krishnanjanappa (1): qemuboot.bbclass: make sure runqemu boots bundled initramfs kernel image
  Jan Kircher (1): toolchain-scripts: compatibility with unbound variable protection
  Jermain Horsman (1): cve-check: write the cve manifest to IMGDEPLOYDIR
  John Edward Broadbent (1): externalsrc: git submodule--helper list unsupported
  Joshua Watt (6): sudo: Use specific BSD license variant classes/create-spdx: Backport classes/package: Add extended packaged data licenses: Add GPL+ licenses to map create-spdx: Use gzip for compression classes/package: Use gzip for extended package data
  Kenfe-Mickael Laventure (3): buildtools-tarball: Handle spaces within user $PATH toolchain-scripts: Handle spaces within user $PATH populate_sdk_ext: Handle spaces within user $PATH
  Khem Raj (3): libtirpc: Check if file exists before operating on it apr: Use correct strerror_r implementation based on libc type apr: Cache configure tests which use AC_TRY_RUN
  Lee Chee Yang (1): dropbear: fix CVE-2021-36369
  Luis (1): rm_work.bbclass: use HOSTTOOLS 'rm' binary exclusively
  Manuel Leonhardt (1): sstate: Account for reserved characters when shortening sstate filenames
  Marek Vasut (2): bitbake: fetch2/git: Prevent git fetcher from fetching gitlab repository metadata bitbake: fetch2/git: Clarify the meaning of namespace
  Marta Rybczynska (1): cve-update-db-native: avoid incomplete updates
  Martin Jansa (3): externalsrc.bbclass: fix git repo detection meta: remove True option to getVar and getVarFlag calls (again) bmap-tools: switch to main branch
  Mathieu Dubois-Briand (1): curl: Fix CVE CVE-2022-35260
  Mauro Queiros (1): image.bbclass: print all QA functions exceptions
  Michael Halstead (1): uninative: Upgrade to 3.7 to work with glibc 2.36
  Michael Opdenacker (4): dev-manual: update session about multiconfig ref-manual: document SSTATE_EXCLUDEDEPS_SYSROOT profile-manual: update WireShark hyperlinks overview-manual: update patchwork instance URL
  Mike Crowe (1): kernel: improve transformation from KERNEL_IMAGETYPE_FOR_MAKE
  Mikko Rapeli (2): oeqa context.py: fix --target-ip comment to include ssh port number oeqa rtc.py: skip if read-only-rootfs
  Ming Liu (1): linux: inherit pkgconfig in kernel.bbclass
  Minjae Kim (2): xserver-xorg: backport fixes for CVE-2022-3550, CVE-2022-3551 and CVE-2022-3553 ppp: fix CVE-2022-4603
  Nikhil R (1): openssl: Fix CVE-2023-0464
  Niko Mauno (2): systemd: Consider PACKAGECONFIG in RRECOMMENDS Fix missing leading whitespace with ':append'
  Omkar (2): dbus: upgrade 1.12.22 -> 1.12.24 python3: Fix CVE-2022-45061
  Omkar Patil (3): sudo: Fix CVE-2023-22809 openssl: Fix CVE-2023-0465 openssl: Fix CVE-2023-0466
  Paul Eggleton (1): classes/kernel-fitimage: add ability to add additional signing options
  Pavel Zhukov (1): oeqa/rpm.py: Increase timeout and add debug output
  Pawan Badganchi (1): python3: Fix CVE-2022-37454
  Pawel Zalewski (1): classes/fs-uuid: Fix command output decoding issue
  Peter Kjellerstedt (2): externalsrc.bbclass: Remove a trailing slash from ${B} devshell: Do not add scripts/git-intercept to PATH
  Peter Marko (2): externalsrc: fix lookup for .gitmodules go: ignore CVE-2022-41716
  Piotr Łobacz (1): systemd: fix wrong nobody-group assignment
  Qiu, Zheng (1): vim: upgrade 9.0.0820 -> 9.0.0947
  Quentin Schulz (2): cairo: update patch for CVE-2019-6461 with upstream solution cairo: fix CVE patches assigned wrong CVE number
  Ralph Siemsen (11): golang: fix CVE-2021-33195 golang: fix CVE-2021-33198 golang: fix CVE-2021-44716 golang: fix CVE-2022-24291 golang: fix CVE-2022-28131 golang: fix CVE-2022-28327 golang: ignore CVE-2022-29804 golang: ignore CVE-2021-33194 golang: ignore CVE-2021-41772 golang: ignore CVE-2022-30580 golang: ignore CVE-2022-30630
  Randy MacLeod (2): vim: upgrade 9.0.0947 -> 9.0.1211 vim: upgrade 9.0.1403 -> 9.0.1429
  Ranjitsinh Rathod (3): expat: Fix CVE-2022-43680 for expat systemd: Fix CVE-2022-3821 issue libsdl2: Add fix for CVE-2022-4743
  Ravula Adhitya Siddartha (1): linux-yocto/5.4: update genericx86* machines to v5.4.219
  Richard Purdie (28): bitbake: tests/fetch: Allow handling of a file:// url within a submodule qemu: Avoid accidental librdmacm linkage build-appliance-image: Update to dunfell head revision bitbake: utils: Handle lockfile filenames that are too long for filesystems bitbake: utils: Fix lockfile path length issues build-appliance-image: Update to dunfell head revision oeqa/selftest/tinfoil: Add test for separate config_data with recipe_parse_file() build-appliance-image: Update to dunfell head revision build-appliance-image: Update to dunfell head revision bitbake: runqueue: Fix multiconfig deferred task sstate validity caching issue bitbake: runqueue: Handle deferred task rehashing in multiconfig builds bitbake: runqueue: Improve multiconfig deferred task issues bitbake: runqueue: Avoid deadlock avoidance task graph corruption bitbake: runqueue: Fix issues with multiconfig deferred task deadlock messages bitbake: runqueue: Ensure deferred tasks are sorted by multiconfig bitbake: cooker: Drop sre_constants usage nativesdk: Handle chown/chgrp calls in nativesdk do_install tasks make-mod-scripts: Ensure kernel build output is deterministic libc-locale: Fix on target locale generation apr: Fix to work with autoconf 2.70 apr-util: Fix CFLAGS used in build oeqa/selftest/prservice: Improve debug output for failure build-appliance-image: Update to dunfell head revision staging: Separate out different multiconfig manifests staging/multilib: Fix manifest corruption glibc: Add missing binutils dependency base-files: Drop localhost.localdomain from hosts file pybootchartui: Fix python syntax issue
  Riyaz Khan (1): rpm: Fix rpm CVE CVE-2021-3521
  Robert Andersson (1): go-crosssdk: avoid host contamination by GOCACHE
  Rodolfo Quesada Zumbado (1): tar: CVE-2022-48303
  Ross Burton (14): sanity: check for GNU tar specifically pixman: backport fix for CVE-2022-44638 lib/buildstats: fix parsing of trees with reduced_proc_pressure directories bitbake: bb/utils: include SSL certificate paths in export_proxies cve-update-db-native: add more logging when fetching cve-update-db-native: show IP on failure quilt: fix intermittent failure in faildiff.test quilt: use upstreamed faildiff.test fix git: ignore CVE-2022-41953 shadow: ignore CVE-2016-15024 vim: add missing pkgconfig inherit vim: upgrade to 9.0.1403 vim: set modified-by to the recipe MAINTAINER lib/resulttool: fix typo breaking resulttool log --ptest
  Shubham Kulkarni (5): glibc: Security fix for CVE-2023-0687 go-runtime: Security fix for CVE-2022-41723 go-runtime: Security fix for CVE-2022-41722 go: Security fix for CVE-2020-29510 go: Ignore CVE-2022-1705
  Siddharth Doshi (1): harfbuzz: Security fix for CVE-2023-25193
  Steve Sakoman (30): selftest: skip virgl test on ubuntu 22.04 qemu: Avoid accidental libvdeplug linkage qemu: Add PACKAGECONFIG for rbd devtool: add HostKeyAlgorithms option to ssh and scp commands selftest: skip virgl test on all Alma Linux documentation: update for 3.1.21 poky.conf: bump version for 3.1.21 maintainers: update gcc version to 9.5 documentation: update for 3.1.22 poky.conf: bump version for 3.1.22 ovmf: fix gcc12 warning in GenFfs ovmf: fix gcc12 warning in LzmaEnc ovmf: fix gcc12 warning for device path handling documentation: update for 3.1.23 python3: fix packaging of Windows distutils installer stubs lttng-modules: update 2.11.6 -> 2.11.7 lttng-modules: update 2.11.7 -> 2.11.8 lttng-modules: update 2.11.8 -> 2.11.9 lttng-modules: fix build with 5.4.229 kernel poky.conf: bump version for 3.1.23 poky.conf: Update SANITY_TESTED_DISTROS to match autobuilder ref-system-requirements.rst: add Fedora 35, Fedora 36, and Ubuntu 22.04 to list of supported distros ref-system-requirements.rst: add AlmaLinux 8.7 to list of supported distros qemu: Fix slirp determinism issue documentation: update for 3.1.24 poky.conf: bump version for 3.1.24 bitbake: tests/fetch.py: fix link to project documentation documentation: update for 3.1.25 poky.conf: bump version for 3.1.25 build-appliance-image: Update to dunfell head revision
  Sundeep KOKKONDA (3): binutils: stable 2.34 branch updates glibc : stable 2.31 branch updates. gcc: upgrade to v9.5
  Sunil Kumar (1): go: Security Fix for CVE-2022-2879
  Teoh Jay Shen (1): vim: Upgrade 9.0.0598 -> 9.0.0614
  Thomas Roos (1): devtool: fix devtool finish when gitmodules file is empty
  Tim Orling (2): python3: upgrade 3.8.13 -> 3.8.14 vim: upgrade 9.0.0614 -> 9.0.0820
  Ulrich Ölmann (1): kernel-yocto: fix kernel-meta data detection
  Vijay Anusuri (4): git: Security fix for CVE-2022-41903 git: Security fix for CVE-2023-22490 and CVE-2023-23946 sudo: Security fix for CVE-2023-28486 and CVE-2023-28487 curl: Security fix CVE-2023-27533, CVE-2023-27535 and CVE-2023-27536
  Virendra Thakur (2): gcc: Fix inconsistent noexcept specifier for valarray in libstdc++ qemu: Whitelist CVE-2023-0664
  Vivek Kumbhar (13): curl: fix CVE-2022-32221 POST following PUT qemu: fix CVE-2021-3638 ati-vga: inconsistent check in ati_2d_blt() may lead to out-of-bounds write libtasn1: fix CVE-2021-46848 off-by-one in asn1_encode_simple_der qemu: fix CVE-2021-20196 block fdc null pointer dereference may lead to guest crash go: fix CVE-2022-41717 Excessive memory use in got server rsync: fix CVE-2022-29154 remote arbitrary files write inside the directories of connecting peers libx11: fix CVE-2022-3555 memory leak in _XFreeX11XCBStructure() of xcb_disp.c qemu: fix CVE-2021-3507 fdc heap buffer overflow in DMA read data transfers go: fix CVE-2022-1962 go/parser stack exhaustion in all Parse* functions qemu: fix CVE-2021-3929 nvme DMA reentrancy issue leads to use-after-free gnutls: fix CVE-2023-0361 timing side-channel in the TLS RSA key exchange code go: fix CVE-2023-24537 Infinite loop in parsing go: fix CVE-2023-24534 denial of service from excessive memory allocation
  Wang Mingyu (1): mobile-broadband-provider-info: upgrade 20220725 -> 20221107
  Xiaobing Luo (1): devtool: Fix _copy_file() TypeError
  ciarancourtney (1): wic: swap partitions are not added to fstab
  jan (1): cve-update-db-native: Allow to overrule the URL in a bbappend.
  rajmohan r (1): systemd: Fix CVE-2023-26604
  wangmy (1): dbus: upgrade 1.12.20 -> 1.12.22

meta-openembedded: 6792ebdd96..7007d14c25:
  Armin Kuster (1): mariadb: Update to latest lts 10.4.28
  Chris Rogers (1): xterm: Remove undeclared variables introduced by backport
  Colin Finck (1): [dunfell] wireguard: Upgrade to 1.0.20220627 (module) and 1.0.20210914 (tools)
  Hitendra Prajapati (9): postgresql: CVE-2022-1552 Autovacuum, REINDEX, and others omit "security restricted operation" sandbox dnsmasq: CVE-2022-0934 Heap use after free in dhcp6_no_relay nginx: CVE-2022-41741, CVE-2022-41742 Memory corruption in the ngx_http_mp4_module postgresql: Fix CVE-2022-2625 proftpd: CVE-2021-46854 memory disclosure to radius server net-snmp: CVE-2022-44792 & CVE-2022-44793 Fix NULL Pointer Exception krb5: CVE-2022-42898 integer overflow vulnerabilities in PAC parsing postgresql: CVE-2022-41862 Client memory disclosure when connecting with Kerberos to modified server syslog-ng: CVE-2022-38725 An integer overflow in the RFC3164 parser
  Ivan Stepic (1): flatbuffers: adapt for cross-compilation environments
  Mathieu Dubois-Briand (4): networkmanager: Update to 1.22.16 nss: Add missing CVE product nss: Whitelist CVEs related to libnssdbm nss: Fix CVE-2020-25648
  Omkar Patil (1): ntfs-3g-ntfsprogs: Upgrade 2022.5.17 to 2022.10.3
  Poonam Jadhav (4): nodejs: Fix CVE-2022-32212 nodejs: Fix CVE-2022-35255 nodejs: Fix CVE-2022-43548 nodejs: Fix CVEs for nodejs
  Priyal Doshi (1): open-vm-tools: Security fix for CVE-2022-31676
  Ranjitsinh Rathod (1): strongswan: Fix CVE-2022-40617
  Roger Knecht (1): zeromq: 4.3.2 -> 4.3.4
  Shubham Kulkarni (1): python3-pillow: Security fix for CVE-2022-45198
  Siddharth Doshi (1): xterm : Fix CVE-2022-45063 code execution via OSC 50 input sequences] CVE-2022-45063
  Valeria Petrov (1): php: update 7.4.28 -> 7.4.33
  Virendra Thakur (2): capnproto: Fix CVE-2022-46149 nss: Fix CVE CVE-2023-0767
  Wang Mingyu (2): apache2: upgrade 2.4.54 -> 2.4.55 apache2: upgrade 2.4.55 -> 2.4.56
  Yi Zhao (1): postfix: upgrade 3.4.23 -> 3.4.27
  vkumbhar (2): dnsmasq: fix CVE-2023-28450 default maximum EDNS.0 UDP packet size was set to 4096 but should be 1232 mariadb: fix CVE-2022-47015 NULL pointer dereference in spider_db_mbase::print_warnings()
  wangmy (1): apache2: upgrade 2.4.53 -> 2.4.54

meta-security: c62970fda8..eb631c12be:
  Hitendra Prajapati (1): sssd: CVE-2022-4254 libsss_certmap fails to sanitise certificate data used in LDAP filters

Signed-off-by: Patrick Williams <patrick@stwcx.xyz>
Change-Id: I0ebec73eb7e68d1ca95866bc758e49990731c8bf
-rw-r--r--meta-openembedded/meta-filesystems/recipes-filesystems/ntfs-3g-ntfsprogs/ntfs-3g-ntfsprogs_2022.10.3.bb (renamed from meta-openembedded/meta-filesystems/recipes-filesystems/ntfs-3g-ntfsprogs/ntfs-3g-ntfsprogs_2022.5.17.bb)2
-rw-r--r--meta-openembedded/meta-networking/recipes-connectivity/networkmanager/networkmanager_1.22.16.bb (renamed from meta-openembedded/meta-networking/recipes-connectivity/networkmanager/networkmanager_1.22.10.bb)3
-rw-r--r--meta-openembedded/meta-networking/recipes-daemons/postfix/postfix_3.4.27.bb (renamed from meta-openembedded/meta-networking/recipes-daemons/postfix/postfix_3.4.23.bb)2
-rw-r--r--meta-openembedded/meta-networking/recipes-daemons/proftpd/files/CVE-2021-46854.patch51
-rw-r--r--meta-openembedded/meta-networking/recipes-daemons/proftpd/proftpd_1.3.6.bb1
-rw-r--r--meta-openembedded/meta-networking/recipes-kernel/wireguard/files/0001-compat-SYM_FUNC_-START-END-were-backported-to-5.4.patch29
-rw-r--r--meta-openembedded/meta-networking/recipes-kernel/wireguard/files/0001-compat-icmp_ndo_send-functions-were-backported-exten.patch32
-rw-r--r--meta-openembedded/meta-networking/recipes-kernel/wireguard/wireguard-module_1.0.20200401.bb30
-rw-r--r--meta-openembedded/meta-networking/recipes-kernel/wireguard/wireguard-module_1.0.20220627.bb23
-rw-r--r--meta-openembedded/meta-networking/recipes-kernel/wireguard/wireguard-tools_1.0.20210914.bb (renamed from meta-openembedded/meta-networking/recipes-kernel/wireguard/wireguard-tools_1.0.20200319.bb)4
-rw-r--r--meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/CVE-2022-44792-CVE-2022-44793.patch116
-rw-r--r--meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp_5.8.bb1
-rw-r--r--meta-openembedded/meta-networking/recipes-support/dnsmasq/dnsmasq/CVE-2022-0934.patch188
-rw-r--r--meta-openembedded/meta-networking/recipes-support/dnsmasq/dnsmasq/CVE-2023-28450.patch63
-rw-r--r--meta-openembedded/meta-networking/recipes-support/dnsmasq/dnsmasq_2.81.bb2
-rw-r--r--meta-openembedded/meta-networking/recipes-support/strongswan/files/CVE-2022-40617.patch210
-rw-r--r--meta-openembedded/meta-networking/recipes-support/strongswan/strongswan_5.8.4.bb1
-rw-r--r--meta-openembedded/meta-oe/recipes-connectivity/krb5/krb5/CVE-2022-42898.patch110
-rw-r--r--meta-openembedded/meta-oe/recipes-connectivity/krb5/krb5_1.17.1.bb1
-rw-r--r--meta-openembedded/meta-oe/recipes-connectivity/zeromq/files/0001-CMakeLists-txt-Avoid-host-specific-path-to-libsodium.patch8
-rw-r--r--meta-openembedded/meta-oe/recipes-connectivity/zeromq/zeromq_4.3.4.bb (renamed from meta-openembedded/meta-oe/recipes-connectivity/zeromq/zeromq_4.3.2.bb)4
-rw-r--r--meta-openembedded/meta-oe/recipes-dbs/mysql/mariadb-native_10.4.28.bb (renamed from meta-openembedded/meta-oe/recipes-dbs/mysql/mariadb-native_10.4.25.bb)0
-rw-r--r--meta-openembedded/meta-oe/recipes-dbs/mysql/mariadb.inc3
-rw-r--r--meta-openembedded/meta-oe/recipes-dbs/mysql/mariadb/CVE-2022-47015.patch269
-rw-r--r--meta-openembedded/meta-oe/recipes-dbs/mysql/mariadb_10.4.28.bb (renamed from meta-openembedded/meta-oe/recipes-dbs/mysql/mariadb_10.4.25.bb)0
-rw-r--r--meta-openembedded/meta-oe/recipes-dbs/postgresql/files/CVE-2022-1552.patch947
-rw-r--r--meta-openembedded/meta-oe/recipes-dbs/postgresql/files/CVE-2022-2625.patch904
-rw-r--r--meta-openembedded/meta-oe/recipes-dbs/postgresql/files/CVE-2022-41862.patch48
-rw-r--r--meta-openembedded/meta-oe/recipes-dbs/postgresql/postgresql_12.9.bb3
-rw-r--r--meta-openembedded/meta-oe/recipes-devtools/capnproto/capnproto_0.7.0.bb4
-rw-r--r--meta-openembedded/meta-oe/recipes-devtools/capnproto/files/CVE-2022-46149.patch49
-rw-r--r--meta-openembedded/meta-oe/recipes-devtools/flatbuffers/flatbuffers_1.12.0.bb7
-rw-r--r--meta-openembedded/meta-oe/recipes-devtools/nodejs/nodejs/CVE-2022-32212.patch133
-rw-r--r--meta-openembedded/meta-oe/recipes-devtools/nodejs/nodejs/CVE-2022-35255.patch237
-rw-r--r--meta-openembedded/meta-oe/recipes-devtools/nodejs/nodejs/CVE-2022-43548.patch214
-rw-r--r--meta-openembedded/meta-oe/recipes-devtools/nodejs/nodejs/CVE-llhttp.patch4348
-rw-r--r--meta-openembedded/meta-oe/recipes-devtools/nodejs/nodejs_12.22.12.bb4
-rw-r--r--meta-openembedded/meta-oe/recipes-devtools/php/php_7.4.33.bb (renamed from meta-openembedded/meta-oe/recipes-devtools/php/php_7.4.28.bb)2
-rw-r--r--meta-openembedded/meta-oe/recipes-graphics/xorg-app/xterm/CVE-2022-45063.patch776
-rw-r--r--meta-openembedded/meta-oe/recipes-graphics/xorg-app/xterm_353.bb1
-rw-r--r--meta-openembedded/meta-oe/recipes-support/nss/nss/CVE-2020-25648.patch163
-rw-r--r--meta-openembedded/meta-oe/recipes-support/nss/nss/CVE-2023-0767.patch124
-rw-r--r--meta-openembedded/meta-oe/recipes-support/nss/nss_3.51.1.bb8
-rw-r--r--meta-openembedded/meta-oe/recipes-support/open-vm-tools/open-vm-tools/0001-Properly-check-authorization-on-incoming-guestOps-re.patch39
-rw-r--r--meta-openembedded/meta-oe/recipes-support/open-vm-tools/open-vm-tools_11.0.1.bb1
-rw-r--r--meta-openembedded/meta-oe/recipes-support/syslog-ng/files/CVE-2022-38725.patch629
-rw-r--r--meta-openembedded/meta-oe/recipes-support/syslog-ng/syslog-ng_3.24.1.bb1
-rw-r--r--meta-openembedded/meta-python/recipes-devtools/python/python3-pillow/0001-CVE-2022-45198.patch26
-rw-r--r--meta-openembedded/meta-python/recipes-devtools/python/python3-pillow_6.2.1.bb1
-rw-r--r--meta-openembedded/meta-webserver/recipes-httpd/apache2/apache2/0004-apache2-log-the-SELinux-context-at-startup.patch12
-rw-r--r--meta-openembedded/meta-webserver/recipes-httpd/apache2/apache2_2.4.56.bb (renamed from meta-openembedded/meta-webserver/recipes-httpd/apache2/apache2_2.4.53.bb)2
-rw-r--r--meta-openembedded/meta-webserver/recipes-httpd/nginx/files/CVE-2022-41741-CVE-2022-41742.patch319
-rw-r--r--meta-openembedded/meta-webserver/recipes-httpd/nginx/nginx_1.16.1.bb4
-rw-r--r--meta-security/recipes-security/sssd/files/CVE-2022-4254-1.patch515
-rw-r--r--meta-security/recipes-security/sssd/files/CVE-2022-4254-2.patch655
-rw-r--r--meta-security/recipes-security/sssd/sssd_1.16.4.bb2
-rw-r--r--poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.rst4
-rw-r--r--poky/bitbake/lib/bb/cooker.py5
-rw-r--r--poky/bitbake/lib/bb/fetch2/git.py20
-rw-r--r--poky/bitbake/lib/bb/runqueue.py88
-rw-r--r--poky/bitbake/lib/bb/tests/fetch.py6
-rw-r--r--poky/bitbake/lib/bb/utils.py28
-rw-r--r--poky/documentation/conf.py1
-rw-r--r--poky/documentation/dev-manual/dev-manual-common-tasks.rst68
-rw-r--r--poky/documentation/overview-manual/overview-manual-yp-intro.rst2
-rw-r--r--poky/documentation/poky.yaml10
-rw-r--r--poky/documentation/profile-manual/profile-manual-usage.rst6
-rw-r--r--poky/documentation/ref-manual/ref-system-requirements.rst6
-rw-r--r--poky/documentation/ref-manual/ref-variables.rst26
-rw-r--r--poky/meta-poky/conf/distro/poky.conf5
-rw-r--r--poky/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_5.4.bbappend8
-rw-r--r--poky/meta/classes/base.bbclass2
-rw-r--r--poky/meta/classes/create-spdx-2.2.bbclass1067
-rw-r--r--poky/meta/classes/create-spdx.bbclass8
-rw-r--r--poky/meta/classes/cve-check.bbclass11
-rw-r--r--poky/meta/classes/devshell.bbclass2
-rw-r--r--poky/meta/classes/externalsrc.bbclass25
-rw-r--r--poky/meta/classes/fs-uuid.bbclass2
-rw-r--r--poky/meta/classes/image.bbclass4
-rw-r--r--poky/meta/classes/kernel-arch.bbclass2
-rw-r--r--poky/meta/classes/kernel-fitimage.bbclass6
-rw-r--r--poky/meta/classes/kernel-yocto.bbclass2
-rw-r--r--poky/meta/classes/kernel.bbclass31
-rw-r--r--poky/meta/classes/libc-package.bbclass1
-rw-r--r--poky/meta/classes/license_image.bbclass2
-rw-r--r--poky/meta/classes/multilib.bbclass1
-rw-r--r--poky/meta/classes/nativesdk.bbclass2
-rw-r--r--poky/meta/classes/package.bbclass39
-rw-r--r--poky/meta/classes/populate_sdk_base.bbclass4
-rw-r--r--poky/meta/classes/populate_sdk_ext.bbclass4
-rw-r--r--poky/meta/classes/qemuboot.bbclass3
-rw-r--r--poky/meta/classes/rm_work.bbclass15
-rw-r--r--poky/meta/classes/sanity.bbclass8
-rw-r--r--poky/meta/classes/sstate.bbclass2
-rw-r--r--poky/meta/classes/staging.bbclass4
-rw-r--r--poky/meta/classes/toolchain-scripts.bbclass4
-rw-r--r--poky/meta/conf/distro/include/maintainers.inc2
-rw-r--r--poky/meta/conf/distro/include/yocto-uninative.inc10
-rw-r--r--poky/meta/conf/licenses.conf7
-rw-r--r--poky/meta/files/spdx-licenses.json5937
-rw-r--r--poky/meta/lib/oe/cve_check.py37
-rw-r--r--poky/meta/lib/oe/packagedata.py11
-rw-r--r--poky/meta/lib/oe/reproducible.py3
-rw-r--r--poky/meta/lib/oe/sbom.py84
-rw-r--r--poky/meta/lib/oe/spdx.py357
-rw-r--r--poky/meta/lib/oeqa/runtime/cases/rpm.py23
-rw-r--r--poky/meta/lib/oeqa/runtime/cases/rtc.py8
-rw-r--r--poky/meta/lib/oeqa/runtime/context.py4
-rw-r--r--poky/meta/lib/oeqa/selftest/cases/cve_check.py19
-rw-r--r--poky/meta/lib/oeqa/selftest/cases/devtool.py2
-rw-r--r--poky/meta/lib/oeqa/selftest/cases/prservice.py2
-rw-r--r--poky/meta/lib/oeqa/selftest/cases/reproducible.py1
-rw-r--r--poky/meta/lib/oeqa/selftest/cases/runtime_test.py8
-rw-r--r--poky/meta/lib/oeqa/selftest/cases/tinfoil.py14
-rw-r--r--poky/meta/lib/oeqa/utils/qemurunner.py11
-rw-r--r--poky/meta/recipes-bsp/grub/files/CVE-2022-2601.patch87
-rw-r--r--poky/meta/recipes-bsp/grub/files/CVE-2022-28735.patch271
-rw-r--r--poky/meta/recipes-bsp/grub/files/CVE-2022-3775.patch97
-rw-r--r--poky/meta/recipes-bsp/grub/files/font-Fix-size-overflow-in-grub_font_get_glyph_intern.patch117
-rw-r--r--poky/meta/recipes-bsp/grub/grub2.inc4
-rw-r--r--poky/meta/recipes-connectivity/bluez5/bluez5.inc2
-rw-r--r--poky/meta/recipes-connectivity/bluez5/bluez5/CVE-2022-3637.patch39
-rw-r--r--poky/meta/recipes-connectivity/bluez5/bluez5_5.55.bb7
-rw-r--r--poky/meta/recipes-connectivity/dhcp/dhcp/CVE-2022-2928.patch120
-rw-r--r--poky/meta/recipes-connectivity/dhcp/dhcp/CVE-2022-2929.patch40
-rw-r--r--poky/meta/recipes-connectivity/dhcp/dhcp_4.4.2.bb2
-rw-r--r--poky/meta/recipes-connectivity/mobile-broadband-provider-info/mobile-broadband-provider-info_git.bb4
-rw-r--r--poky/meta/recipes-connectivity/openssl/openssl/CVE-2023-0464.patch226
-rw-r--r--poky/meta/recipes-connectivity/openssl/openssl/CVE-2023-0465.patch60
-rw-r--r--poky/meta/recipes-connectivity/openssl/openssl/CVE-2023-0466.patch82
-rw-r--r--poky/meta/recipes-connectivity/openssl/openssl_1.1.1t.bb (renamed from poky/meta/recipes-connectivity/openssl/openssl_1.1.1q.bb)5
-rw-r--r--poky/meta/recipes-connectivity/ppp/ppp/CVE-2022-4603.patch50
-rw-r--r--poky/meta/recipes-connectivity/ppp/ppp_2.4.7.bb1
-rw-r--r--poky/meta/recipes-core/base-files/base-files/hosts2
-rw-r--r--poky/meta/recipes-core/busybox/busybox.inc27
-rw-r--r--poky/meta/recipes-core/coreutils/coreutils_8.31.bb1
-rw-r--r--poky/meta/recipes-core/dbus/dbus-test_1.12.24.bb (renamed from poky/meta/recipes-core/dbus/dbus-test_1.12.20.bb)0
-rw-r--r--poky/meta/recipes-core/dbus/dbus.inc3
-rw-r--r--poky/meta/recipes-core/dbus/dbus_1.12.24.bb (renamed from poky/meta/recipes-core/dbus/dbus_1.12.20.bb)0
-rw-r--r--poky/meta/recipes-core/dropbear/dropbear.inc1
-rw-r--r--poky/meta/recipes-core/dropbear/dropbear/CVE-2021-36369.patch145
-rw-r--r--poky/meta/recipes-core/expat/expat/CVE-2022-43680.patch33
-rw-r--r--poky/meta/recipes-core/expat/expat_2.2.9.bb1
-rw-r--r--poky/meta/recipes-core/glibc/glibc-version.inc2
-rw-r--r--poky/meta/recipes-core/glibc/glibc.inc4
-rw-r--r--poky/meta/recipes-core/glibc/glibc/CVE-2021-33574_1.patch26
-rw-r--r--poky/meta/recipes-core/glibc/glibc/CVE-2023-0687.patch82
-rw-r--r--poky/meta/recipes-core/glibc/glibc_2.31.bb1
-rw-r--r--poky/meta/recipes-core/images/build-appliance-image_15.0.0.bb2
-rw-r--r--poky/meta/recipes-core/libxml/libxml2/CVE-2022-40303.patch623
-rw-r--r--poky/meta/recipes-core/libxml/libxml2/CVE-2022-40304.patch104
-rw-r--r--poky/meta/recipes-core/libxml/libxml2_2.9.10.bb2
-rw-r--r--poky/meta/recipes-core/meta/buildtools-tarball.bb2
-rw-r--r--poky/meta/recipes-core/meta/cve-update-db-native.bb102
-rw-r--r--poky/meta/recipes-core/ovmf/ovmf/0001-Basetools-genffs-fix-gcc12-warning.patch49
-rw-r--r--poky/meta/recipes-core/ovmf/ovmf/0001-Basetools-lzmaenc-fix-gcc12-warning.patch53
-rw-r--r--poky/meta/recipes-core/ovmf/ovmf/0001-Basetools-turn-off-gcc12-warning.patch41
-rw-r--r--poky/meta/recipes-core/ovmf/ovmf_git.bb3
-rw-r--r--poky/meta/recipes-core/psplash/files/psplash-start.service1
-rw-r--r--poky/meta/recipes-core/psplash/files/psplash-systemd.service1
-rw-r--r--poky/meta/recipes-core/systemd/systemd/CVE-2022-3821.patch47
-rw-r--r--poky/meta/recipes-core/systemd/systemd/CVE-2023-26604-1.patch115
-rw-r--r--poky/meta/recipes-core/systemd/systemd/CVE-2023-26604-2.patch264
-rw-r--r--poky/meta/recipes-core/systemd/systemd/CVE-2023-26604-3.patch182
-rw-r--r--poky/meta/recipes-core/systemd/systemd/CVE-2023-26604-4.patch32
-rw-r--r--poky/meta/recipes-core/systemd/systemd/systemd-pager.sh7
-rw-r--r--poky/meta/recipes-core/systemd/systemd_244.5.bb16
-rw-r--r--poky/meta/recipes-devtools/binutils/binutils-2.34.inc2
-rw-r--r--poky/meta/recipes-devtools/binutils/binutils/CVE-2020-16593.patch4
-rw-r--r--poky/meta/recipes-devtools/binutils/binutils/CVE-2021-3549.patch80
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.3/0001-Backport-fix-for-PR-tree-optimization-97236-fix-bad-.patch119
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.3/0001-aarch64-New-Straight-Line-Speculation-SLS-mitigation.patch204
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.3/0002-aarch64-Introduce-SLS-mitigation-for-RET-and-BR-inst.patch600
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.3/0003-aarch64-Mitigate-SLS-for-BLR-instruction.patch659
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.3/0040-fix-missing-dependencies-for-selftests.patch45
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5.inc (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3.inc)14
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0001-gcc-4.3.1-ARCH_FLAGS_FOR_TARGET.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0001-gcc-4.3.1-ARCH_FLAGS_FOR_TARGET.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0002-gcc-poison-system-directories.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0002-gcc-poison-system-directories.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0002-libstdc-Fix-inconsistent-noexcept-specific-for-valar.patch44
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0003-gcc-4.3.3-SYSROOT_CFLAGS_FOR_TARGET.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0003-gcc-4.3.3-SYSROOT_CFLAGS_FOR_TARGET.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0004-64-bit-multilib-hack.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0004-64-bit-multilib-hack.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0005-optional-libstdc.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0005-optional-libstdc.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0006-COLLECT_GCC_OPTIONS.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0006-COLLECT_GCC_OPTIONS.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0007-Use-the-defaults.h-in-B-instead-of-S-and-t-oe-in-B.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0007-Use-the-defaults.h-in-B-instead-of-S-and-t-oe-in-B.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0008-fortran-cross-compile-hack.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0008-fortran-cross-compile-hack.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0009-cpp-honor-sysroot.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0009-cpp-honor-sysroot.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0010-MIPS64-Default-to-N64-ABI.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0010-MIPS64-Default-to-N64-ABI.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0011-Define-GLIBC_DYNAMIC_LINKER-and-UCLIBC_DYNAMIC_LINKE.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0011-Define-GLIBC_DYNAMIC_LINKER-and-UCLIBC_DYNAMIC_LINKE.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0012-gcc-Fix-argument-list-too-long-error.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0012-gcc-Fix-argument-list-too-long-error.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0013-Disable-sdt.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0013-Disable-sdt.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0014-libtool.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0014-libtool.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0015-gcc-armv4-pass-fix-v4bx-to-linker-to-support-EABI.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0015-gcc-armv4-pass-fix-v4bx-to-linker-to-support-EABI.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0016-Use-the-multilib-config-files-from-B-instead-of-usin.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0016-Use-the-multilib-config-files-from-B-instead-of-usin.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0017-Avoid-using-libdir-from-.la-which-usually-points-to-.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0017-Avoid-using-libdir-from-.la-which-usually-points-to-.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0018-export-CPP.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0018-export-CPP.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0019-Ensure-target-gcc-headers-can-be-included.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0019-Ensure-target-gcc-headers-can-be-included.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0020-gcc-4.8-won-t-build-with-disable-dependency-tracking.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0020-gcc-4.8-won-t-build-with-disable-dependency-tracking.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0021-Don-t-search-host-directory-during-relink-if-inst_pr.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0021-Don-t-search-host-directory-during-relink-if-inst_pr.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0022-Use-SYSTEMLIBS_DIR-replacement-instead-of-hardcoding.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0022-Use-SYSTEMLIBS_DIR-replacement-instead-of-hardcoding.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0023-aarch64-Add-support-for-musl-ldso.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0023-aarch64-Add-support-for-musl-ldso.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0024-libcc1-fix-libcc1-s-install-path-and-rpath.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0024-libcc1-fix-libcc1-s-install-path-and-rpath.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0025-handle-sysroot-support-for-nativesdk-gcc.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0025-handle-sysroot-support-for-nativesdk-gcc.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0026-Search-target-sysroot-gcc-version-specific-dirs-with.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0026-Search-target-sysroot-gcc-version-specific-dirs-with.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0027-Fix-various-_FOR_BUILD-and-related-variables.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0027-Fix-various-_FOR_BUILD-and-related-variables.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0028-nios2-Define-MUSL_DYNAMIC_LINKER.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0028-nios2-Define-MUSL_DYNAMIC_LINKER.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0029-Add-ssp_nonshared-to-link-commandline-for-musl-targe.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0029-Add-ssp_nonshared-to-link-commandline-for-musl-targe.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0030-ldbl128-config.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0030-ldbl128-config.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0031-Link-libgcc-using-LDFLAGS-not-just-SHLIB_LDFLAGS.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0031-Link-libgcc-using-LDFLAGS-not-just-SHLIB_LDFLAGS.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0032-libgcc_s-Use-alias-for-__cpu_indicator_init-instead-.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0032-libgcc_s-Use-alias-for-__cpu_indicator_init-instead-.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0033-sync-gcc-stddef.h-with-musl.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0033-sync-gcc-stddef.h-with-musl.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0034-fix-segmentation-fault-in-precompiled-header-generat.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0034-fix-segmentation-fault-in-precompiled-header-generat.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0035-Fix-for-testsuite-failure.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0035-Fix-for-testsuite-failure.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0036-Re-introduce-spe-commandline-options.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0036-Re-introduce-spe-commandline-options.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0037-CVE-2019-14250-Check-zero-value-in-simple_object_elf.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0037-CVE-2019-14250-Check-zero-value-in-simple_object_elf.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0038-gentypes-genmodes-Do-not-use-__LINE__-for-maintainin.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0038-gentypes-genmodes-Do-not-use-__LINE__-for-maintainin.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-9.5/0039-process_alt_operands-Don-t-match-user-defined-regs-o.patch (renamed from poky/meta/recipes-devtools/gcc/gcc-9.3/0039-process_alt_operands-Don-t-match-user-defined-regs-o.patch)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-cross-canadian_9.5.bb (renamed from poky/meta/recipes-devtools/gcc/gcc-cross-canadian_9.3.bb)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-cross_9.5.bb (renamed from poky/meta/recipes-devtools/gcc/gcc-cross_9.3.bb)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-crosssdk_9.5.bb (renamed from poky/meta/recipes-devtools/gcc/gcc-crosssdk_9.3.bb)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-runtime_9.5.bb (renamed from poky/meta/recipes-devtools/gcc/gcc-runtime_9.3.bb)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-sanitizers_9.5.bb (renamed from poky/meta/recipes-devtools/gcc/gcc-sanitizers_9.3.bb)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc-source_9.5.bb (renamed from poky/meta/recipes-devtools/gcc/gcc-source_9.3.bb)0
-rw-r--r--poky/meta/recipes-devtools/gcc/gcc_9.5.bb (renamed from poky/meta/recipes-devtools/gcc/gcc_9.3.bb)0
-rw-r--r--poky/meta/recipes-devtools/gcc/libgcc-initial_9.5.bb (renamed from poky/meta/recipes-devtools/gcc/libgcc-initial_9.3.bb)0
-rw-r--r--poky/meta/recipes-devtools/gcc/libgcc_9.5.bb (renamed from poky/meta/recipes-devtools/gcc/libgcc_9.3.bb)0
-rw-r--r--poky/meta/recipes-devtools/gcc/libgfortran_9.5.bb (renamed from poky/meta/recipes-devtools/gcc/libgfortran_9.3.bb)0
-rw-r--r--poky/meta/recipes-devtools/git/files/CVE-2022-23521.patch367
-rw-r--r--poky/meta/recipes-devtools/git/files/CVE-2022-41903-01.patch39
-rw-r--r--poky/meta/recipes-devtools/git/files/CVE-2022-41903-02.patch187
-rw-r--r--poky/meta/recipes-devtools/git/files/CVE-2022-41903-03.patch146
-rw-r--r--poky/meta/recipes-devtools/git/files/CVE-2022-41903-04.patch150
-rw-r--r--poky/meta/recipes-devtools/git/files/CVE-2022-41903-05.patch98
-rw-r--r--poky/meta/recipes-devtools/git/files/CVE-2022-41903-06.patch90
-rw-r--r--poky/meta/recipes-devtools/git/files/CVE-2022-41903-07.patch123
-rw-r--r--poky/meta/recipes-devtools/git/files/CVE-2022-41903-08.patch67
-rw-r--r--poky/meta/recipes-devtools/git/files/CVE-2022-41903-09.patch162
-rw-r--r--poky/meta/recipes-devtools/git/files/CVE-2022-41903-10.patch99
-rw-r--r--poky/meta/recipes-devtools/git/files/CVE-2022-41903-11.patch90
-rw-r--r--poky/meta/recipes-devtools/git/files/CVE-2022-41903-12.patch124
-rw-r--r--poky/meta/recipes-devtools/git/files/CVE-2023-22490-1.patch179
-rw-r--r--poky/meta/recipes-devtools/git/files/CVE-2023-22490-2.patch122
-rw-r--r--poky/meta/recipes-devtools/git/files/CVE-2023-22490-3.patch154
-rw-r--r--poky/meta/recipes-devtools/git/files/CVE-2023-23946.patch184
-rw-r--r--poky/meta/recipes-devtools/git/git.inc22
-rw-r--r--poky/meta/recipes-devtools/go/go-1.14.inc34
-rw-r--r--poky/meta/recipes-devtools/go/go-1.14/CVE-2020-29510.patch65
-rw-r--r--poky/meta/recipes-devtools/go/go-1.14/CVE-2021-33195.patch373
-rw-r--r--poky/meta/recipes-devtools/go/go-1.14/CVE-2021-33198.patch113
-rw-r--r--poky/meta/recipes-devtools/go/go-1.14/CVE-2021-44716.patch93
-rw-r--r--poky/meta/recipes-devtools/go/go-1.14/CVE-2022-1962.patch357
-rw-r--r--poky/meta/recipes-devtools/go/go-1.14/CVE-2022-24921.patch198
-rw-r--r--poky/meta/recipes-devtools/go/go-1.14/CVE-2022-28131.patch104
-rw-r--r--poky/meta/recipes-devtools/go/go-1.14/CVE-2022-28327.patch36
-rw-r--r--poky/meta/recipes-devtools/go/go-1.14/CVE-2022-2879.patch111
-rw-r--r--poky/meta/recipes-devtools/go/go-1.14/CVE-2022-2880.patch164
-rw-r--r--poky/meta/recipes-devtools/go/go-1.14/CVE-2022-41715.patch271
-rw-r--r--poky/meta/recipes-devtools/go/go-1.14/CVE-2022-41717.patch75
-rw-r--r--poky/meta/recipes-devtools/go/go-1.14/CVE-2022-41722-1.patch53
-rw-r--r--poky/meta/recipes-devtools/go/go-1.14/CVE-2022-41722-2.patch104
-rw-r--r--poky/meta/recipes-devtools/go/go-1.14/CVE-2022-41723.patch156
-rw-r--r--poky/meta/recipes-devtools/go/go-1.14/CVE-2023-24534.patch200
-rw-r--r--poky/meta/recipes-devtools/go/go-1.14/CVE-2023-24537.patch76
-rw-r--r--poky/meta/recipes-devtools/go/go-crosssdk.inc2
-rw-r--r--poky/meta/recipes-devtools/go/go_1.14.bb4
-rw-r--r--poky/meta/recipes-devtools/opkg/opkg_0.4.2.bb4
-rw-r--r--poky/meta/recipes-devtools/python/files/CVE-2022-45061.patch100
-rw-r--r--poky/meta/recipes-devtools/python/python3/CVE-2021-28861.patch135
-rw-r--r--poky/meta/recipes-devtools/python/python3/CVE-2022-37454.patch105
-rw-r--r--poky/meta/recipes-devtools/python/python3/python3-manifest.json4
-rw-r--r--poky/meta/recipes-devtools/python/python3_3.8.14.bb (renamed from poky/meta/recipes-devtools/python/python3_3.8.13.bb)7
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu-system-native_4.2.0.bb2
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu.inc36
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-1.patch50
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-2.patch69
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-3.patch49
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-4.patch53
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-5.patch53
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-6.patch61
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-7.patch50
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-8.patch44
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15859.patch39
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu/CVE-2020-35504.patch51
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu/CVE-2020-35505.patch42
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu/CVE-2021-20196.patch62
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3409-1.patch85
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3409-2.patch103
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3409-3.patch71
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3409-4.patch52
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3409-5.patch93
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3507.patch87
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3638.patch80
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3750.patch180
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3929.patch81
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu/CVE-2022-26354.patch57
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu/CVE-2022-4144.patch103
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu/hw-block-nvme-handle-dma-errors.patch146
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu/hw-block-nvme-refactor-nvme_addr_read.patch55
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu/hw-display-qxl-Pass-requested-buffer-size-to-qxl_phy.patch236
-rw-r--r--poky/meta/recipes-devtools/qemu/qemu_4.2.0.bb4
-rw-r--r--poky/meta/recipes-devtools/quilt/quilt.inc1
-rw-r--r--poky/meta/recipes-devtools/quilt/quilt/faildiff-order.patch41
-rw-r--r--poky/meta/recipes-devtools/rpm/files/CVE-2021-3521-01.patch60
-rw-r--r--poky/meta/recipes-devtools/rpm/files/CVE-2021-3521-02.patch55
-rw-r--r--poky/meta/recipes-devtools/rpm/files/CVE-2021-3521-03.patch34
-rw-r--r--poky/meta/recipes-devtools/rpm/files/CVE-2021-3521.patch330
-rw-r--r--poky/meta/recipes-devtools/rpm/rpm_4.14.2.1.bb4
-rw-r--r--poky/meta/recipes-devtools/rsync/files/CVE-2022-29154.patch334
-rw-r--r--poky/meta/recipes-devtools/rsync/rsync_3.1.3.bb1
-rw-r--r--poky/meta/recipes-devtools/ruby/ruby/CVE-2023-28756.patch61
-rw-r--r--poky/meta/recipes-devtools/ruby/ruby_2.7.6.bb1
-rw-r--r--poky/meta/recipes-extended/bc/bc_1.07.1.bb2
-rw-r--r--poky/meta/recipes-extended/ghostscript/ghostscript/check-stack-limits-after-function-evalution.patch2
-rw-r--r--poky/meta/recipes-extended/libarchive/libarchive/CVE-2022-26280.patch29
-rw-r--r--poky/meta/recipes-extended/libarchive/libarchive/CVE-2022-36227.patch43
-rw-r--r--poky/meta/recipes-extended/libarchive/libarchive_3.4.2.bb2
-rw-r--r--poky/meta/recipes-extended/libtirpc/libtirpc_1.2.6.bb2
-rw-r--r--poky/meta/recipes-extended/screen/screen/CVE-2023-24626.patch40
-rw-r--r--poky/meta/recipes-extended/screen/screen_4.8.0.bb1
-rw-r--r--poky/meta/recipes-extended/shadow/shadow_4.8.1.bb4
-rw-r--r--poky/meta/recipes-extended/sudo/files/CVE-2023-22809.patch113
-rw-r--r--poky/meta/recipes-extended/sudo/sudo.inc2
-rw-r--r--poky/meta/recipes-extended/sudo/sudo/CVE-2022-43995.patch59
-rw-r--r--poky/meta/recipes-extended/sudo/sudo/CVE-2023-28486_CVE-2023-28487-1.patch646
-rw-r--r--poky/meta/recipes-extended/sudo/sudo/CVE-2023-28486_CVE-2023-28487-2.patch26
-rw-r--r--poky/meta/recipes-extended/sudo/sudo_1.8.32.bb4
-rw-r--r--poky/meta/recipes-extended/sysstat/sysstat/CVE-2022-39377.patch92
-rw-r--r--poky/meta/recipes-extended/sysstat/sysstat_12.2.1.bb4
-rw-r--r--poky/meta/recipes-extended/tar/tar/CVE-2022-48303.patch43
-rw-r--r--poky/meta/recipes-extended/tar/tar_1.32.bb1
-rw-r--r--poky/meta/recipes-extended/timezone/timezone.inc7
-rw-r--r--poky/meta/recipes-graphics/cairo/cairo/CVE-2019-6461.patch21
-rw-r--r--poky/meta/recipes-graphics/cairo/cairo/CVE-2019-6462.patch46
-rw-r--r--poky/meta/recipes-graphics/harfbuzz/harfbuzz/CVE-2023-25193-pre0.patch335
-rw-r--r--poky/meta/recipes-graphics/harfbuzz/harfbuzz/CVE-2023-25193-pre1.patch135
-rw-r--r--poky/meta/recipes-graphics/harfbuzz/harfbuzz/CVE-2023-25193.patch179
-rw-r--r--poky/meta/recipes-graphics/harfbuzz/harfbuzz_2.6.4.bb5
-rw-r--r--poky/meta/recipes-graphics/libsdl2/libsdl2/CVE-2022-4743.patch38
-rw-r--r--poky/meta/recipes-graphics/libsdl2/libsdl2_2.0.12.bb1
-rw-r--r--poky/meta/recipes-graphics/xorg-lib/libx11/CVE-2022-3554.patch58
-rw-r--r--poky/meta/recipes-graphics/xorg-lib/libx11/CVE-2022-3555.patch38
-rw-r--r--poky/meta/recipes-graphics/xorg-lib/libx11_1.6.9.bb2
-rw-r--r--poky/meta/recipes-graphics/xorg-lib/pixman/CVE-2022-44638.patch34
-rw-r--r--poky/meta/recipes-graphics/xorg-lib/pixman_0.38.4.bb1
-rw-r--r--poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-3550.patch40
-rw-r--r--poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-3551.patch64
-rw-r--r--poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-3553.patch49
-rw-r--r--poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-4283.patch39
-rw-r--r--poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-46340.patch55
-rw-r--r--poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-46341.patch86
-rw-r--r--poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-46342.patch78
-rw-r--r--poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-46343.patch51
-rw-r--r--poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-46344.patch75
-rw-r--r--poky/meta/recipes-graphics/xorg-xserver/xserver-xorg_1.20.14.bb11
-rw-r--r--poky/meta/recipes-kernel/linux-firmware/linux-firmware_20230210.bb (renamed from poky/meta/recipes-kernel/linux-firmware/linux-firmware_20220913.bb)44
-rw-r--r--poky/meta/recipes-kernel/linux/linux-yocto-dev.bb2
-rw-r--r--poky/meta/recipes-kernel/linux/linux-yocto-rt_5.4.bb6
-rw-r--r--poky/meta/recipes-kernel/linux/linux-yocto-tiny_5.4.bb8
-rw-r--r--poky/meta/recipes-kernel/linux/linux-yocto_5.4.bb22
-rw-r--r--poky/meta/recipes-kernel/lttng/lttng-modules/0001-fix-strncpy-equals-destination-size-warning.patch42
-rw-r--r--poky/meta/recipes-kernel/lttng/lttng-modules/0002-fix-objtool-Rename-frame.h-objtool.h-v5.10.patch88
-rw-r--r--poky/meta/recipes-kernel/lttng/lttng-modules/0003-fix-btrfs-tracepoints-output-proper-root-owner-for-t.patch316
-rw-r--r--poky/meta/recipes-kernel/lttng/lttng-modules/0004-fix-btrfs-make-ordered-extent-tracepoint-take-btrfs_.patch179
-rw-r--r--poky/meta/recipes-kernel/lttng/lttng-modules/0005-fix-ext4-fast-commit-recovery-path-v5.10.patch91
-rw-r--r--poky/meta/recipes-kernel/lttng/lttng-modules/0006-fix-KVM-x86-Add-intr-vectoring-info-and-error-code-t.patch124
-rw-r--r--poky/meta/recipes-kernel/lttng/lttng-modules/0007-fix-kvm-x86-mmu-Add-TDP-MMU-PF-handler-v5.10.patch82
-rw-r--r--poky/meta/recipes-kernel/lttng/lttng-modules/0008-fix-KVM-x86-mmu-Return-unique-RET_PF_-values-if-the-.patch71
-rw-r--r--poky/meta/recipes-kernel/lttng/lttng-modules/0009-fix-tracepoint-Optimize-using-static_call-v5.10.patch155
-rw-r--r--poky/meta/recipes-kernel/lttng/lttng-modules/0010-fix-include-order-for-older-kernels.patch31
-rw-r--r--poky/meta/recipes-kernel/lttng/lttng-modules/0011-Add-release-maintainer-script.patch59
-rw-r--r--poky/meta/recipes-kernel/lttng/lttng-modules/0012-Improve-the-release-script.patch173
-rw-r--r--poky/meta/recipes-kernel/lttng/lttng-modules/0013-fix-backport-of-fix-ext4-fast-commit-recovery-path-v.patch32
-rw-r--r--poky/meta/recipes-kernel/lttng/lttng-modules/0014-Revert-fix-include-order-for-older-kernels.patch32
-rw-r--r--poky/meta/recipes-kernel/lttng/lttng-modules/0015-fix-backport-of-fix-tracepoint-Optimize-using-static.patch46
-rw-r--r--poky/meta/recipes-kernel/lttng/lttng-modules/0016-fix-adjust-version-range-for-trace_find_free_extent.patch30
-rw-r--r--poky/meta/recipes-kernel/lttng/lttng-modules/fix-jbd2-use-the-correct-print-format.patch147
-rw-r--r--poky/meta/recipes-kernel/lttng/lttng-modules_2.11.9.bb (renamed from poky/meta/recipes-kernel/lttng/lttng-modules_2.11.6.bb)21
-rw-r--r--poky/meta/recipes-kernel/make-mod-scripts/make-mod-scripts_1.0.bb2
-rw-r--r--poky/meta/recipes-kernel/wireless-regdb/wireless-regdb_2023.02.13.bb (renamed from poky/meta/recipes-kernel/wireless-regdb/wireless-regdb_2022.08.12.bb)2
-rw-r--r--poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2022-3109.patch41
-rw-r--r--poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2022-3341.patch67
-rw-r--r--poky/meta/recipes-multimedia/ffmpeg/ffmpeg_4.2.2.bb2
-rw-r--r--poky/meta/recipes-multimedia/libtiff/files/CVE-2022-3570_3598.patch659
-rw-r--r--poky/meta/recipes-multimedia/libtiff/files/CVE-2022-3597_3626_3627.patch123
-rw-r--r--poky/meta/recipes-multimedia/libtiff/files/CVE-2022-3599.patch277
-rw-r--r--poky/meta/recipes-multimedia/libtiff/files/CVE-2022-3970.patch45
-rw-r--r--poky/meta/recipes-multimedia/libtiff/files/CVE-2022-48281.patch26
-rw-r--r--poky/meta/recipes-multimedia/libtiff/files/CVE-2023-0795_0796_0797_0798_0799.patch157
-rw-r--r--poky/meta/recipes-multimedia/libtiff/files/CVE-2023-0800_0801_0802_0803_0804.patch135
-rw-r--r--poky/meta/recipes-multimedia/libtiff/tiff_4.1.0.bb7
-rw-r--r--poky/meta/recipes-support/apr/apr-util/0001-Fix-error-handling-in-gdbm.patch135
-rw-r--r--poky/meta/recipes-support/apr/apr-util_1.6.3.bb (renamed from poky/meta/recipes-support/apr/apr-util_1.6.1.bb)8
-rw-r--r--poky/meta/recipes-support/apr/apr/0001-Add-option-to-disable-timed-dependant-tests.patch20
-rw-r--r--poky/meta/recipes-support/apr/apr/0001-configure-Remove-runtime-test-for-mmap-that-can-map-.patch58
-rw-r--r--poky/meta/recipes-support/apr/apr/0002-apr-Remove-workdir-path-references-from-installed-ap.patch25
-rw-r--r--poky/meta/recipes-support/apr/apr/0003-Makefile.in-configure.in-support-cross-compiling.patch63
-rw-r--r--poky/meta/recipes-support/apr/apr/0006-apr-fix-off_t-size-doesn-t-match-in-glibc-when-cross.patch76
-rw-r--r--poky/meta/recipes-support/apr/apr/CVE-2021-35940.patch58
-rw-r--r--poky/meta/recipes-support/apr/apr/libtoolize_check.patch21
-rw-r--r--poky/meta/recipes-support/apr/apr_1.7.2.bb (renamed from poky/meta/recipes-support/apr/apr_1.7.0.bb)24
-rw-r--r--poky/meta/recipes-support/bmap-tools/bmap-tools_3.5.bb2
-rw-r--r--poky/meta/recipes-support/curl/curl/CVE-2022-32221.patch29
-rw-r--r--poky/meta/recipes-support/curl/curl/CVE-2022-35260.patch68
-rw-r--r--poky/meta/recipes-support/curl/curl/CVE-2022-43552.patch82
-rw-r--r--poky/meta/recipes-support/curl/curl/CVE-2023-23916.patch231
-rw-r--r--poky/meta/recipes-support/curl/curl/CVE-2023-27533.patch59
-rw-r--r--poky/meta/recipes-support/curl/curl/CVE-2023-27534.patch123
-rw-r--r--poky/meta/recipes-support/curl/curl/CVE-2023-27535-pre1.patch236
-rw-r--r--poky/meta/recipes-support/curl/curl/CVE-2023-27535.patch170
-rw-r--r--poky/meta/recipes-support/curl/curl/CVE-2023-27536.patch55
-rw-r--r--poky/meta/recipes-support/curl/curl/CVE-2023-27538.patch31
-rw-r--r--poky/meta/recipes-support/curl/curl_7.69.1.bb10
-rw-r--r--poky/meta/recipes-support/gnutls/gnutls/CVE-2023-0361.patch85
-rw-r--r--poky/meta/recipes-support/gnutls/gnutls_3.6.14.bb1
-rw-r--r--poky/meta/recipes-support/gnutls/libtasn1/CVE-2021-46848.patch45
-rw-r--r--poky/meta/recipes-support/gnutls/libtasn1_4.16.0.bb1
-rw-r--r--poky/meta/recipes-support/libksba/libksba/CVE-2022-3515.patch47
-rw-r--r--poky/meta/recipes-support/libksba/libksba/CVE-2022-47629.patch69
-rw-r--r--poky/meta/recipes-support/libksba/libksba_1.3.5.bb5
-rw-r--r--poky/meta/recipes-support/vim/vim.inc10
-rw-r--r--poky/scripts/lib/buildstats.py4
-rw-r--r--poky/scripts/lib/devtool/deploy.py8
-rw-r--r--poky/scripts/lib/devtool/menuconfig.py2
-rw-r--r--poky/scripts/lib/devtool/standard.py2
-rw-r--r--poky/scripts/lib/resulttool/resultutils.py2
-rw-r--r--poky/scripts/lib/wic/plugins/imager/direct.py2
-rwxr-xr-xpoky/scripts/nativesdk-intercept/chgrp27
-rwxr-xr-xpoky/scripts/nativesdk-intercept/chown27
-rw-r--r--poky/scripts/pybootchartgui/pybootchartgui/parsing.py2
428 files changed, 36346 insertions, 4211 deletions
diff --git a/meta-openembedded/meta-filesystems/recipes-filesystems/ntfs-3g-ntfsprogs/ntfs-3g-ntfsprogs_2022.5.17.bb b/meta-openembedded/meta-filesystems/recipes-filesystems/ntfs-3g-ntfsprogs/ntfs-3g-ntfsprogs_2022.10.3.bb
index cb52c55676..efb331d7b2 100644
--- a/meta-openembedded/meta-filesystems/recipes-filesystems/ntfs-3g-ntfsprogs/ntfs-3g-ntfsprogs_2022.5.17.bb
+++ b/meta-openembedded/meta-filesystems/recipes-filesystems/ntfs-3g-ntfsprogs/ntfs-3g-ntfsprogs_2022.10.3.bb
@@ -10,7 +10,7 @@ SRC_URI = "http://tuxera.com/opensource/ntfs-3g_ntfsprogs-${PV}.tgz \
file://0001-libntfs-3g-Makefile.am-fix-install-failed-while-host.patch \
"
S = "${WORKDIR}/ntfs-3g_ntfsprogs-${PV}"
-SRC_URI[sha256sum] = "0489fbb6972581e1b417ab578d543f6ae522e7fa648c3c9b49c789510fd5eb93"
+SRC_URI[sha256sum] = "f20e36ee68074b845e3629e6bced4706ad053804cbaf062fbae60738f854170c"
UPSTREAM_CHECK_URI = "https://www.tuxera.com/community/open-source-ntfs-3g/"
UPSTREAM_CHECK_REGEX = "ntfs-3g_ntfsprogs-(?P<pver>\d+(\.\d+)+)\.tgz"
diff --git a/meta-openembedded/meta-networking/recipes-connectivity/networkmanager/networkmanager_1.22.10.bb b/meta-openembedded/meta-networking/recipes-connectivity/networkmanager/networkmanager_1.22.16.bb
index 33a2b7c0ce..a28372dd1f 100644
--- a/meta-openembedded/meta-networking/recipes-connectivity/networkmanager/networkmanager_1.22.10.bb
+++ b/meta-openembedded/meta-networking/recipes-connectivity/networkmanager/networkmanager_1.22.16.bb
@@ -33,11 +33,12 @@ SRC_URI_append_libc-musl = " \
file://musl/0003-Fix-build-with-musl-for-n-dhcp4.patch \
file://musl/0004-Fix-build-with-musl-systemd-specific.patch \
"
-SRC_URI[sha256sum] = "2b29ccc1531ba7ebba95a97f40c22b963838e8b6833745efe8e6fb71fd8fca77"
+SRC_URI[sha256sum] = "377aa053752eaa304b72c9906f9efcd9fbd5f7f6cb4cd4ad72425a68982cffc6"
S = "${WORKDIR}/NetworkManager-${PV}"
EXTRA_OECONF = " \
+ --disable-firewalld-zone \
--disable-ifcfg-rh \
--disable-more-warnings \
--with-iptables=${sbindir}/iptables \
diff --git a/meta-openembedded/meta-networking/recipes-daemons/postfix/postfix_3.4.23.bb b/meta-openembedded/meta-networking/recipes-daemons/postfix/postfix_3.4.27.bb
index bb66345805..2612e12be4 100644
--- a/meta-openembedded/meta-networking/recipes-daemons/postfix/postfix_3.4.23.bb
+++ b/meta-openembedded/meta-networking/recipes-daemons/postfix/postfix_3.4.27.bb
@@ -15,5 +15,5 @@ SRC_URI += "ftp://ftp.porcupine.org/mirrors/postfix-release/official/postfix-${P
file://0001-makedefs-add-lnsl-and-lresolv-to-SYSLIBS-by-default.patch \
file://0001-fix-build-with-glibc-2.34.patch \
"
-SRC_URI[sha256sum] = "1759e953bf7baccb533899845c17753bf57a99ebac9c21717626262966a122f9"
+SRC_URI[sha256sum] = "5f71658546d9b65863249dec3a189d084ea0596e23dc4613c579ad3ae75b10d2"
UPSTREAM_CHECK_REGEX = "postfix\-(?P<pver>3\.4(\.\d+)+).tar.gz"
diff --git a/meta-openembedded/meta-networking/recipes-daemons/proftpd/files/CVE-2021-46854.patch b/meta-openembedded/meta-networking/recipes-daemons/proftpd/files/CVE-2021-46854.patch
new file mode 100644
index 0000000000..712d5db07d
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-daemons/proftpd/files/CVE-2021-46854.patch
@@ -0,0 +1,51 @@
+From ed31fe2cbd5b8b1148b467f84f7acea66fa43bb8 Mon Sep 17 00:00:00 2001
+From: Chris Hofstaedtler <chris.hofstaedtler@deduktiva.com>
+Date: Tue, 3 Aug 2021 21:53:28 +0200
+Subject: [PATCH] CVE-2021-46854
+
+mod_radius: copy _only_ the password
+
+Upstream-Status: Backport [https://github.com/proftpd/proftpd/commit/10a227b4d50e0a2cd2faf87926f58d865da44e43]
+CVE: CVE-2021-46854
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ contrib/mod_radius.c | 11 ++++++++---
+ 1 file changed, 8 insertions(+), 3 deletions(-)
+
+diff --git a/contrib/mod_radius.c b/contrib/mod_radius.c
+index b56cdfe..f234dd5 100644
+--- a/contrib/mod_radius.c
++++ b/contrib/mod_radius.c
+@@ -2319,21 +2319,26 @@ static void radius_add_passwd(radius_packet_t *packet, unsigned char type,
+
+ pwlen = strlen((const char *) passwd);
+
++ /* Clear the buffers. */
++ memset(pwhash, '\0', sizeof(pwhash));
++
+ if (pwlen == 0) {
+ pwlen = RADIUS_PASSWD_LEN;
+
+ } if ((pwlen & (RADIUS_PASSWD_LEN - 1)) != 0) {
++ /* pwlen is not a multiple of RADIUS_PASSWD_LEN, need to prepare a proper buffer */
++ memcpy(pwhash, passwd, pwlen);
+
+ /* Round up the length. */
+ pwlen += (RADIUS_PASSWD_LEN - 1);
+
+ /* Truncate the length, as necessary. */
+ pwlen &= ~(RADIUS_PASSWD_LEN - 1);
++ } else {
++ /* pwlen is a multiple of RADIUS_PASSWD_LEN, we can just use it. */
++ memcpy(pwhash, passwd, pwlen);
+ }
+
+- /* Clear the buffers. */
+- memset(pwhash, '\0', sizeof(pwhash));
+- memcpy(pwhash, passwd, pwlen);
+
+ /* Find the password attribute. */
+ attrib = radius_get_attrib(packet, RADIUS_PASSWORD);
+--
+2.25.1
+
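
Editor's note: the backported mod_radius fix above boils down to one rule: zero the fixed-size hash buffer first, then copy only strlen(password) bytes, even though the length used afterwards is rounded up to the RADIUS block size. A minimal sketch of that pattern, with hypothetical names (BLOCK, pad_secret) rather than the proftpd ones:

    #include <string.h>

    #define BLOCK 16                       /* stand-in for RADIUS_PASSWD_LEN */

    /* Pad a secret into a zeroed buffer without reading past its end. */
    static size_t pad_secret(unsigned char *out, size_t outcap, const char *secret)
    {
        size_t len = strlen(secret);

        memset(out, 0, outcap);            /* zero padding comes from the memset */
        if (len > outcap)
            len = outcap;                  /* cap for the sketch; real code would error out */
        memcpy(out, secret, len);          /* copy only the real secret bytes */

        /* round the *length* up to a BLOCK multiple; the copy above never does */
        return (len + BLOCK - 1) & ~(size_t)(BLOCK - 1);
    }

The CVE was exactly the reversed order in the old code: pwlen was rounded up first and the memcpy() then read that rounded length out of the caller's string.
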
diff --git a/meta-openembedded/meta-networking/recipes-daemons/proftpd/proftpd_1.3.6.bb b/meta-openembedded/meta-networking/recipes-daemons/proftpd/proftpd_1.3.6.bb
index 1e4697a633..9ec97b9237 100644
--- a/meta-openembedded/meta-networking/recipes-daemons/proftpd/proftpd_1.3.6.bb
+++ b/meta-openembedded/meta-networking/recipes-daemons/proftpd/proftpd_1.3.6.bb
@@ -12,6 +12,7 @@ SRC_URI = "ftp://ftp.proftpd.org/distrib/source/${BPN}-${PV}.tar.gz \
file://contrib.patch \
file://build_fixup.patch \
file://proftpd.service \
+ file://CVE-2021-46854.patch \
"
SRC_URI[md5sum] = "13270911c42aac842435f18205546a1b"
SRC_URI[sha256sum] = "91ef74b143495d5ff97c4d4770c6804072a8c8eb1ad1ecc8cc541b40e152ecaf"
diff --git a/meta-openembedded/meta-networking/recipes-kernel/wireguard/files/0001-compat-SYM_FUNC_-START-END-were-backported-to-5.4.patch b/meta-openembedded/meta-networking/recipes-kernel/wireguard/files/0001-compat-SYM_FUNC_-START-END-were-backported-to-5.4.patch
deleted file mode 100644
index a9dc9dc2b7..0000000000
--- a/meta-openembedded/meta-networking/recipes-kernel/wireguard/files/0001-compat-SYM_FUNC_-START-END-were-backported-to-5.4.patch
+++ /dev/null
@@ -1,29 +0,0 @@
-From ce8faa3ee266ea69431805e6ed4bd7102d982508 Mon Sep 17 00:00:00 2001
-From: "Jason A. Donenfeld" <Jason@zx2c4.com>
-Date: Thu, 12 Nov 2020 09:43:38 +0100
-Subject: [PATCH] compat: SYM_FUNC_{START,END} were backported to 5.4
-
-Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
-
-Upstream-Status: Backport
-Fixes build failure in Dunfell.
-
-Signed-off-by: Armin Kuster <akuster808@gmail.com>
-
----
- compat/compat-asm.h | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-Index: src/compat/compat-asm.h
-===================================================================
---- src.orig/compat/compat-asm.h
-+++ src/compat/compat-asm.h
-@@ -40,7 +40,7 @@
- #undef pull
- #endif
-
--#if LINUX_VERSION_CODE < KERNEL_VERSION(5, 5, 0)
-+#if LINUX_VERSION_CODE < KERNEL_VERSION(5, 4, 76)
- #define SYM_FUNC_START ENTRY
- #define SYM_FUNC_END ENDPROC
- #endif
diff --git a/meta-openembedded/meta-networking/recipes-kernel/wireguard/files/0001-compat-icmp_ndo_send-functions-were-backported-exten.patch b/meta-openembedded/meta-networking/recipes-kernel/wireguard/files/0001-compat-icmp_ndo_send-functions-were-backported-exten.patch
deleted file mode 100644
index f01cfe4e1c..0000000000
--- a/meta-openembedded/meta-networking/recipes-kernel/wireguard/files/0001-compat-icmp_ndo_send-functions-were-backported-exten.patch
+++ /dev/null
@@ -1,32 +0,0 @@
-From 122f06bfd8fc7b06a0899fa9adc4ce8e06900d98 Mon Sep 17 00:00:00 2001
-From: "Jason A. Donenfeld" <Jason@zx2c4.com>
-Date: Sun, 7 Mar 2021 08:14:33 -0700
-Subject: [PATCH] compat: icmp_ndo_send functions were backported extensively
-
-Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
-
-Upstream-Status: Backport
-
-Fixes build with 5.4.103 update.
-/include/linux/icmpv6.h:56:6: note: previous declaration of 'icmpv6_ndo_send' was here
-| 56 | void icmpv6_ndo_send(struct sk_buff *skb_in, u8 type, u8 code, __u32 info);
-
-Signed-of-by: Armin Kuster <akuster808@gmail.com>
-
----
- src/compat/compat.h | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-Index: src/compat/compat.h
-===================================================================
---- src.orig/compat/compat.h
-+++ src/compat/compat.h
-@@ -946,7 +946,7 @@ static inline int skb_ensure_writable(st
- }
- #endif
-
--#if LINUX_VERSION_CODE < KERNEL_VERSION(5, 6, 0)
-+#if (LINUX_VERSION_CODE < KERNEL_VERSION(5, 6, 0) && LINUX_VERSION_CODE >= KERNEL_VERSION(5, 5, 0)) || (LINUX_VERSION_CODE < KERNEL_VERSION(5, 4, 102) && LINUX_VERSION_CODE >= KERNEL_VERSION(4, 20, 0)) || (LINUX_VERSION_CODE < KERNEL_VERSION(4, 19, 178) && LINUX_VERSION_CODE >= KERNEL_VERSION(4, 15, 0)) || (LINUX_VERSION_CODE < KERNEL_VERSION(4, 14, 223) && LINUX_VERSION_CODE > KERNEL_VERSION(4, 10, 0)) || LINUX_VERSION_CODE < KERNEL_VERSION(4, 9, 259) || defined(ISRHEL8) || defined(ISUBUNTU1804)
- #if IS_ENABLED(CONFIG_NF_NAT)
- #include <linux/ip.h>
- #include <linux/icmpv6.h>
diff --git a/meta-openembedded/meta-networking/recipes-kernel/wireguard/wireguard-module_1.0.20200401.bb b/meta-openembedded/meta-networking/recipes-kernel/wireguard/wireguard-module_1.0.20200401.bb
deleted file mode 100644
index 9215f4a6d8..0000000000
--- a/meta-openembedded/meta-networking/recipes-kernel/wireguard/wireguard-module_1.0.20200401.bb
+++ /dev/null
@@ -1,30 +0,0 @@
-require wireguard.inc
-
-SRCREV = "43f57dac7b8305024f83addc533c9eede6509129"
-
-SRC_URI = "git://git.zx2c4.com/wireguard-linux-compat;branch=master \
- file://0001-compat-SYM_FUNC_-START-END-were-backported-to-5.4.patch \
- file://0001-compat-icmp_ndo_send-functions-were-backported-exten.patch "
-
-inherit module kernel-module-split
-
-DEPENDS = "virtual/kernel libmnl"
-
-# This module requires Linux 3.10 higher and several networking related
-# configuration options. For exact kernel requirements visit:
-# https://www.wireguard.io/install/#kernel-requirements
-
-EXTRA_OEMAKE_append = " \
- KERNELDIR=${STAGING_KERNEL_DIR} \
- "
-
-MAKE_TARGETS = "module"
-
-RRECOMMENDS_${PN} = "kernel-module-xt-hashlimit"
-MODULE_NAME = "wireguard"
-
-module_do_install() {
- install -d ${D}${nonarch_base_libdir}/modules/${KERNEL_VERSION}/kernel/${MODULE_NAME}
- install -m 0644 ${MODULE_NAME}.ko \
- ${D}${nonarch_base_libdir}/modules/${KERNEL_VERSION}/kernel/${MODULE_NAME}/${MODULE_NAME}.ko
-}
diff --git a/meta-openembedded/meta-networking/recipes-kernel/wireguard/wireguard-module_1.0.20220627.bb b/meta-openembedded/meta-networking/recipes-kernel/wireguard/wireguard-module_1.0.20220627.bb
new file mode 100644
index 0000000000..df2db15349
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-kernel/wireguard/wireguard-module_1.0.20220627.bb
@@ -0,0 +1,23 @@
+require wireguard.inc
+
+SRCREV = "18fbcd68a35a892527345dc5679d0b2d860ee004"
+
+SRC_URI = "git://git.zx2c4.com/wireguard-linux-compat;protocol=https;branch=master"
+
+inherit module kernel-module-split
+
+DEPENDS = "virtual/kernel libmnl"
+
+# This module requires Linux 3.10 higher and several networking related
+# configuration options. For exact kernel requirements visit:
+# https://www.wireguard.io/install/#kernel-requirements
+
+EXTRA_OEMAKE_append = " \
+ KERNELDIR=${STAGING_KERNEL_DIR} \
+ "
+
+MAKE_TARGETS = "module"
+MODULES_INSTALL_TARGET = "module-install"
+
+RRECOMMENDS_${PN} = "kernel-module-xt-hashlimit"
+MODULE_NAME = "wireguard"
diff --git a/meta-openembedded/meta-networking/recipes-kernel/wireguard/wireguard-tools_1.0.20200319.bb b/meta-openembedded/meta-networking/recipes-kernel/wireguard/wireguard-tools_1.0.20210914.bb
index 9e486ecc34..b63ef88182 100644
--- a/meta-openembedded/meta-networking/recipes-kernel/wireguard/wireguard-tools_1.0.20200319.bb
+++ b/meta-openembedded/meta-networking/recipes-kernel/wireguard/wireguard-tools_1.0.20210914.bb
@@ -1,6 +1,6 @@
require wireguard.inc
-SRCREV = "a8063adc8ae9b4fc9848500e93f94bee8ad2e585"
+SRCREV = "3ba6527130c502144e7388b900138bca6260f4e8"
SRC_URI = "git://git.zx2c4.com/wireguard-tools;branch=master"
inherit bash-completion systemd pkgconfig
@@ -9,7 +9,7 @@ DEPENDS += "wireguard-module libmnl"
do_install () {
oe_runmake DESTDIR="${D}" PREFIX="${prefix}" SYSCONFDIR="${sysconfdir}" \
- SYSTEMDUNITDIR="${systemd_unitdir}" \
+ SYSTEMDUNITDIR="${systemd_system_unitdir}" \
WITH_SYSTEMDUNITS=${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'yes', '', d)} \
WITH_BASHCOMPLETION=yes \
WITH_WGQUICK=yes \
diff --git a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/CVE-2022-44792-CVE-2022-44793.patch b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/CVE-2022-44792-CVE-2022-44793.patch
new file mode 100644
index 0000000000..4e537c8859
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp/CVE-2022-44792-CVE-2022-44793.patch
@@ -0,0 +1,116 @@
+From 4589352dac3ae111c7621298cf231742209efd9b Mon Sep 17 00:00:00 2001
+From: Bill Fenner <fenner@gmail.com>
+Date: Fri, 25 Nov 2022 08:41:24 -0800
+Subject: [PATCH ] snmp_agent: disallow SET with NULL varbind
+
+Upstream-Status: Backport [https://github.com/net-snmp/net-snmp/commit/be804106fd0771a7d05236cff36e199af077af57]
+CVE: CVE-2022-44792 & CVE-2022-44793
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ agent/snmp_agent.c | 32 +++++++++++++++++++
+ apps/snmpset.c | 1 +
+ .../default/T0142snmpv2csetnull_simple | 31 ++++++++++++++++++
+ 3 files changed, 64 insertions(+)
+ create mode 100644 testing/fulltests/default/T0142snmpv2csetnull_simple
+
+diff --git a/agent/snmp_agent.c b/agent/snmp_agent.c
+index 26653f4..eba5b4e 100644
+--- a/agent/snmp_agent.c
++++ b/agent/snmp_agent.c
+@@ -3708,12 +3708,44 @@ netsnmp_handle_request(netsnmp_agent_session *asp, int status)
+ return 1;
+ }
+
++static int
++check_set_pdu_for_null_varbind(netsnmp_agent_session *asp)
++{
++ int i;
++ netsnmp_variable_list *v = NULL;
++
++ for (i = 1, v = asp->pdu->variables; v != NULL; i++, v = v->next_variable) {
++ if (v->type == ASN_NULL) {
++ /*
++ * Protect SET implementations that do not protect themselves
++ * against wrong type.
++ */
++ DEBUGMSGTL(("snmp_agent", "disallowing SET with NULL var for varbind %d\n", i));
++ asp->index = i;
++ return SNMP_ERR_WRONGTYPE;
++ }
++ }
++ return SNMP_ERR_NOERROR;
++}
++
+ int
+ handle_pdu(netsnmp_agent_session *asp)
+ {
+ int status, inclusives = 0;
+ netsnmp_variable_list *v = NULL;
+
++#ifndef NETSNMP_NO_WRITE_SUPPORT
++ /*
++ * Check for ASN_NULL in SET request
++ */
++ if (asp->pdu->command == SNMP_MSG_SET) {
++ status = check_set_pdu_for_null_varbind(asp);
++ if (status != SNMP_ERR_NOERROR) {
++ return status;
++ }
++ }
++#endif /* NETSNMP_NO_WRITE_SUPPORT */
++
+ /*
+ * for illegal requests, mark all nodes as ASN_NULL
+ */
+diff --git a/apps/snmpset.c b/apps/snmpset.c
+index a2374bc..cd01b9a 100644
+--- a/apps/snmpset.c
++++ b/apps/snmpset.c
+@@ -182,6 +182,7 @@ main(int argc, char *argv[])
+ case 'x':
+ case 'd':
+ case 'b':
++ case 'n': /* undocumented */
+ #ifdef NETSNMP_WITH_OPAQUE_SPECIAL_TYPES
+ case 'I':
+ case 'U':
+diff --git a/testing/fulltests/default/T0142snmpv2csetnull_simple b/testing/fulltests/default/T0142snmpv2csetnull_simple
+new file mode 100644
+index 0000000..0f1b8f3
+--- /dev/null
++++ b/testing/fulltests/default/T0142snmpv2csetnull_simple
+@@ -0,0 +1,31 @@
++#!/bin/sh
++
++. ../support/simple_eval_tools.sh
++
++HEADER SNMPv2c set of system.sysContact.0 with NULL varbind
++
++SKIPIF NETSNMP_DISABLE_SET_SUPPORT
++SKIPIF NETSNMP_NO_WRITE_SUPPORT
++SKIPIF NETSNMP_DISABLE_SNMPV2C
++SKIPIFNOT USING_MIBII_SYSTEM_MIB_MODULE
++
++#
++# Begin test
++#
++
++# standard V2C configuration: testcomunnity
++snmp_write_access='all'
++. ./Sv2cconfig
++STARTAGENT
++
++CAPTURE "snmpget -On $SNMP_FLAGS -c testcommunity -v 2c $SNMP_TRANSPORT_SPEC:$SNMP_TEST_DEST$SNMP_SNMPD_PORT .1.3.6.1.2.1.1.4.0"
++
++CHECK ".1.3.6.1.2.1.1.4.0 = STRING:"
++
++CAPTURE "snmpset -On $SNMP_FLAGS -c testcommunity -v 2c $SNMP_TRANSPORT_SPEC:$SNMP_TEST_DEST$SNMP_SNMPD_PORT .1.3.6.1.2.1.1.4.0 n x"
++
++CHECK "Reason: wrongType"
++
++STOPAGENT
++
++FINISHED
+--
+2.25.1
+
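
Editor's note: the net-snmp backport above rejects a SET whose varbind list contains an ASN_NULL value before any handler runs. A rough sketch of that pre-dispatch scan, using hypothetical types rather than net-snmp's netsnmp_variable_list:

    #include <stddef.h>

    struct varbind { int type; struct varbind *next; };
    #define TYPE_NULL 0x05                 /* placeholder for ASN_NULL */

    /* Return the 1-based index of the first NULL varbind, or 0 if none. */
    static int first_null_varbind(const struct varbind *v)
    {
        int i = 1;
        for (; v != NULL; v = v->next, i++)
            if (v->type == TYPE_NULL)
                return i;                  /* caller reports wrongType at index i */
        return 0;
    }

Returning the index matters because the SNMP error response carries the position of the offending varbind, which is what the patch stores in asp->index.
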
diff --git a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp_5.8.bb b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp_5.8.bb
index 6b4b6ce8ed..79f2c1d89d 100644
--- a/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp_5.8.bb
+++ b/meta-openembedded/meta-networking/recipes-protocols/net-snmp/net-snmp_5.8.bb
@@ -35,6 +35,7 @@ SRC_URI = "${SOURCEFORGE_MIRROR}/net-snmp/net-snmp-${PV}.tar.gz \
file://CVE-2020-15861-0004.patch \
file://CVE-2020-15861-0005.patch \
file://CVE-2020-15862.patch \
+ file://CVE-2022-44792-CVE-2022-44793.patch \
"
SRC_URI[md5sum] = "63bfc65fbb86cdb616598df1aff6458a"
SRC_URI[sha256sum] = "b2fc3500840ebe532734c4786b0da4ef0a5f67e51ef4c86b3345d697e4976adf"
diff --git a/meta-openembedded/meta-networking/recipes-support/dnsmasq/dnsmasq/CVE-2022-0934.patch b/meta-openembedded/meta-networking/recipes-support/dnsmasq/dnsmasq/CVE-2022-0934.patch
new file mode 100644
index 0000000000..b2ef22c06f
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-support/dnsmasq/dnsmasq/CVE-2022-0934.patch
@@ -0,0 +1,188 @@
+From 70df9f9104c8f0661966298b58caf794b99e26e1 Mon Sep 17 00:00:00 2001
+From: Hitendra Prajapati <hprajapati@mvista.com>
+Date: Thu, 22 Sep 2022 17:39:21 +0530
+Subject: [PATCH] CVE-2022-0934
+
+Upstream-Status: Backport [https://thekelleys.org.uk/gitweb/?p=dnsmasq.git;a=commit;h=03345ecefeb0d82e3c3a4c28f27c3554f0611b39]
+CVE: CVE-2022-0934
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ CHANGELOG | 2 ++
+ src/rfc3315.c | 48 +++++++++++++++++++++++++++---------------------
+ 2 files changed, 29 insertions(+), 21 deletions(-)
+
+diff --git a/CHANGELOG b/CHANGELOG
+index 60b08d0..d1d7e41 100644
+--- a/CHANGELOG
++++ b/CHANGELOG
+@@ -88,6 +88,8 @@ version 2.81
+
+ Add --script-on-renewal option.
+
++ Fix write-after-free error in DHCPv6 server code.
++ CVE-2022-0934 refers.
+
+ version 2.80
+ Add support for RFC 4039 DHCP rapid commit. Thanks to Ashram Method
+diff --git a/src/rfc3315.c b/src/rfc3315.c
+index b3f0a0a..eef1360 100644
+--- a/src/rfc3315.c
++++ b/src/rfc3315.c
+@@ -33,9 +33,9 @@ struct state {
+ unsigned int mac_len, mac_type;
+ };
+
+-static int dhcp6_maybe_relay(struct state *state, void *inbuff, size_t sz,
++static int dhcp6_maybe_relay(struct state *state, unsigned char *inbuff, size_t sz,
+ struct in6_addr *client_addr, int is_unicast, time_t now);
+-static int dhcp6_no_relay(struct state *state, int msg_type, void *inbuff, size_t sz, int is_unicast, time_t now);
++static int dhcp6_no_relay(struct state *state, int msg_type, unsigned char *inbuff, size_t sz, int is_unicast, time_t now);
+ static void log6_opts(int nest, unsigned int xid, void *start_opts, void *end_opts);
+ static void log6_packet(struct state *state, char *type, struct in6_addr *addr, char *string);
+ static void log6_quiet(struct state *state, char *type, struct in6_addr *addr, char *string);
+@@ -104,12 +104,12 @@ unsigned short dhcp6_reply(struct dhcp_context *context, int interface, char *if
+ }
+
+ /* This cost me blood to write, it will probably cost you blood to understand - srk. */
+-static int dhcp6_maybe_relay(struct state *state, void *inbuff, size_t sz,
++static int dhcp6_maybe_relay(struct state *state, unsigned char *inbuff, size_t sz,
+ struct in6_addr *client_addr, int is_unicast, time_t now)
+ {
+ void *end = inbuff + sz;
+ void *opts = inbuff + 34;
+- int msg_type = *((unsigned char *)inbuff);
++ int msg_type = *inbuff;
+ unsigned char *outmsgtypep;
+ void *opt;
+ struct dhcp_vendor *vendor;
+@@ -259,15 +259,15 @@ static int dhcp6_maybe_relay(struct state *state, void *inbuff, size_t sz,
+ return 1;
+ }
+
+-static int dhcp6_no_relay(struct state *state, int msg_type, void *inbuff, size_t sz, int is_unicast, time_t now)
++static int dhcp6_no_relay(struct state *state, int msg_type, unsigned char *inbuff, size_t sz, int is_unicast, time_t now)
+ {
+ void *opt;
+- int i, o, o1, start_opts;
++ int i, o, o1, start_opts, start_msg;
+ struct dhcp_opt *opt_cfg;
+ struct dhcp_netid *tagif;
+ struct dhcp_config *config = NULL;
+ struct dhcp_netid known_id, iface_id, v6_id;
+- unsigned char *outmsgtypep;
++ unsigned char outmsgtype;
+ struct dhcp_vendor *vendor;
+ struct dhcp_context *context_tmp;
+ struct dhcp_mac *mac_opt;
+@@ -296,12 +296,13 @@ static int dhcp6_no_relay(struct state *state, int msg_type, void *inbuff, size_
+ v6_id.next = state->tags;
+ state->tags = &v6_id;
+
+- /* copy over transaction-id, and save pointer to message type */
+- if (!(outmsgtypep = put_opt6(inbuff, 4)))
++ start_msg = save_counter(-1);
++ /* copy over transaction-id */
++ if (!put_opt6(inbuff, 4))
+ return 0;
+ start_opts = save_counter(-1);
+- state->xid = outmsgtypep[3] | outmsgtypep[2] << 8 | outmsgtypep[1] << 16;
+-
++ state->xid = inbuff[3] | inbuff[2] << 8 | inbuff[1] << 16;
++
+ /* We're going to be linking tags from all context we use.
+ mark them as unused so we don't link one twice and break the list */
+ for (context_tmp = state->context; context_tmp; context_tmp = context_tmp->current)
+@@ -347,7 +348,7 @@ static int dhcp6_no_relay(struct state *state, int msg_type, void *inbuff, size_
+ (msg_type == DHCP6REQUEST || msg_type == DHCP6RENEW || msg_type == DHCP6RELEASE || msg_type == DHCP6DECLINE))
+
+ {
+- *outmsgtypep = DHCP6REPLY;
++ outmsgtype = DHCP6REPLY;
+ o1 = new_opt6(OPTION6_STATUS_CODE);
+ put_opt6_short(DHCP6USEMULTI);
+ put_opt6_string("Use multicast");
+@@ -619,11 +620,11 @@ static int dhcp6_no_relay(struct state *state, int msg_type, void *inbuff, size_
+ struct dhcp_netid *solicit_tags;
+ struct dhcp_context *c;
+
+- *outmsgtypep = DHCP6ADVERTISE;
++ outmsgtype = DHCP6ADVERTISE;
+
+ if (opt6_find(state->packet_options, state->end, OPTION6_RAPID_COMMIT, 0))
+ {
+- *outmsgtypep = DHCP6REPLY;
++ outmsgtype = DHCP6REPLY;
+ state->lease_allocate = 1;
+ o = new_opt6(OPTION6_RAPID_COMMIT);
+ end_opt6(o);
+@@ -809,7 +810,7 @@ static int dhcp6_no_relay(struct state *state, int msg_type, void *inbuff, size_
+ int start = save_counter(-1);
+
+ /* set reply message type */
+- *outmsgtypep = DHCP6REPLY;
++ outmsgtype = DHCP6REPLY;
+ state->lease_allocate = 1;
+
+ log6_quiet(state, "DHCPREQUEST", NULL, ignore ? _("ignored") : NULL);
+@@ -921,7 +922,7 @@ static int dhcp6_no_relay(struct state *state, int msg_type, void *inbuff, size_
+ case DHCP6RENEW:
+ {
+ /* set reply message type */
+- *outmsgtypep = DHCP6REPLY;
++ outmsgtype = DHCP6REPLY;
+
+ log6_quiet(state, "DHCPRENEW", NULL, NULL);
+
+@@ -1033,7 +1034,7 @@ static int dhcp6_no_relay(struct state *state, int msg_type, void *inbuff, size_
+ int good_addr = 0;
+
+ /* set reply message type */
+- *outmsgtypep = DHCP6REPLY;
++ outmsgtype = DHCP6REPLY;
+
+ log6_quiet(state, "DHCPCONFIRM", NULL, NULL);
+
+@@ -1097,7 +1098,7 @@ static int dhcp6_no_relay(struct state *state, int msg_type, void *inbuff, size_
+ log6_quiet(state, "DHCPINFORMATION-REQUEST", NULL, ignore ? _("ignored") : state->hostname);
+ if (ignore)
+ return 0;
+- *outmsgtypep = DHCP6REPLY;
++ outmsgtype = DHCP6REPLY;
+ tagif = add_options(state, 1);
+ break;
+ }
+@@ -1106,7 +1107,7 @@ static int dhcp6_no_relay(struct state *state, int msg_type, void *inbuff, size_
+ case DHCP6RELEASE:
+ {
+ /* set reply message type */
+- *outmsgtypep = DHCP6REPLY;
++ outmsgtype = DHCP6REPLY;
+
+ log6_quiet(state, "DHCPRELEASE", NULL, NULL);
+
+@@ -1171,7 +1172,7 @@ static int dhcp6_no_relay(struct state *state, int msg_type, void *inbuff, size_
+ case DHCP6DECLINE:
+ {
+ /* set reply message type */
+- *outmsgtypep = DHCP6REPLY;
++ outmsgtype = DHCP6REPLY;
+
+ log6_quiet(state, "DHCPDECLINE", NULL, NULL);
+
+@@ -1251,7 +1252,12 @@ static int dhcp6_no_relay(struct state *state, int msg_type, void *inbuff, size_
+ }
+
+ }
+-
++
++ /* Fill in the message type. Note that we store the offset,
++ not a direct pointer, since the packet memory may have been
++ reallocated. */
++ ((unsigned char *)(daemon->outpacket.iov_base))[start_msg] = outmsgtype;
++
+ log_tags(tagif, state->xid);
+ log6_opts(0, state->xid, daemon->outpacket.iov_base + start_opts, daemon->outpacket.iov_base + save_counter(-1));
+
+--
+2.25.1
+
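
Editor's note: the dnsmasq fix above replaces a saved pointer into the reply packet with a saved offset, because appending further options may realloc the packet buffer. A small sketch of the general pattern, with invented names (not the dnsmasq API):

    #include <stdlib.h>

    struct pkt { unsigned char *base; size_t len, cap; };

    /* Grow the packet and return the OFFSET of the reserved bytes. */
    static size_t pkt_reserve(struct pkt *p, size_t n)
    {
        size_t off = p->len;
        if (p->len + n > p->cap) {
            size_t newcap = (p->len + n) * 2;
            unsigned char *nb = realloc(p->base, newcap);
            if (nb == NULL)
                abort();                   /* sketch only; real code handles OOM */
            p->base = nb;                  /* any old pointer into base now dangles */
            p->cap = newcap;
        }
        p->len += n;
        return off;                        /* offsets survive the realloc */
    }

    /* usage: size_t msg = pkt_reserve(&p, 1); ... ; p.base[msg] = reply_type; */
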
diff --git a/meta-openembedded/meta-networking/recipes-support/dnsmasq/dnsmasq/CVE-2023-28450.patch b/meta-openembedded/meta-networking/recipes-support/dnsmasq/dnsmasq/CVE-2023-28450.patch
new file mode 100644
index 0000000000..dd3bd27408
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-support/dnsmasq/dnsmasq/CVE-2023-28450.patch
@@ -0,0 +1,63 @@
+From eb92fb32b746f2104b0f370b5b295bb8dd4bd5e5 Mon Sep 17 00:00:00 2001
+From: Simon Kelley <simon@thekelleys.org.uk>
+Date: Tue, 7 Mar 2023 22:07:46 +0000
+Subject: [PATCH] Set the default maximum DNS UDP packet size to 1232.
+
+Upstream-Status: Backport [https://thekelleys.org.uk/gitweb/?p=dnsmasq.git;a=commit;h=eb92fb32b746f2104b0f370b5b295bb8dd4bd5e5]
+CVE: CVE-2023-28450
+Signed-off-by: Vivek Kumbhar <vkumbhar@mvista.com>
+---
+ CHANGELOG | 8 ++++++++
+ man/dnsmasq.8 | 3 ++-
+ src/config.h | 2 +-
+ 3 files changed, 11 insertions(+), 2 deletions(-)
+
+diff --git a/CHANGELOG b/CHANGELOG
+index d1d7e41..7a560d3 100644
+--- a/CHANGELOG
++++ b/CHANGELOG
+@@ -91,6 +91,14 @@ version 2.81
+ Fix write-after-free error in DHCPv6 server code.
+ CVE-2022-0934 refers.
+
++ Set the default maximum DNS UDP packet sice to 1232. This
++ has been the recommended value since 2020 because it's the
++ largest value that avoid fragmentation, and fragmentation
++ is just not reliable on the modern internet, especially
++ for IPv6. It's still possible to override this with
++ --edns-packet-max for special circumstances.
++
++
+ version 2.80
+ Add support for RFC 4039 DHCP rapid commit. Thanks to Ashram Method
+ for the initial patch and motivation.
+diff --git a/man/dnsmasq.8 b/man/dnsmasq.8
+index f2803f9..3cca4bc 100644
+--- a/man/dnsmasq.8
++++ b/man/dnsmasq.8
+@@ -168,7 +168,8 @@ to zero completely disables DNS function, leaving only DHCP and/or TFTP.
+ .TP
+ .B \-P, --edns-packet-max=<size>
+ Specify the largest EDNS.0 UDP packet which is supported by the DNS
+-forwarder. Defaults to 4096, which is the RFC5625-recommended size.
++forwarder. Defaults to 1232, which is the recommended size following the
++DNS flag day in 2020. Only increase if you know what you are doing.
+ .TP
+ .B \-Q, --query-port=<query_port>
+ Send outbound DNS queries from, and listen for their replies on, the
+diff --git a/src/config.h b/src/config.h
+index 54f6f48..29ac3e7 100644
+--- a/src/config.h
++++ b/src/config.h
+@@ -19,7 +19,7 @@
+ #define CHILD_LIFETIME 150 /* secs 'till terminated (RFC1035 suggests > 120s) */
+ #define TCP_MAX_QUERIES 100 /* Maximum number of queries per incoming TCP connection */
+ #define TCP_BACKLOG 32 /* kernel backlog limit for TCP connections */
+-#define EDNS_PKTSZ 4096 /* default max EDNS.0 UDP packet from RFC5625 */
++#define EDNS_PKTSZ 1232 /* default max EDNS.0 UDP packet from from /dnsflagday.net/2020 */
+ #define SAFE_PKTSZ 1280 /* "go anywhere" UDP packet size */
+ #define KEYBLOCK_LEN 40 /* choose to minimise fragmentation when storing DNSSEC keys */
+ #define DNSSEC_WORK 50 /* Max number of queries to validate one question */
+--
+2.18.2
+
diff --git a/meta-openembedded/meta-networking/recipes-support/dnsmasq/dnsmasq_2.81.bb b/meta-openembedded/meta-networking/recipes-support/dnsmasq/dnsmasq_2.81.bb
index 2fb389915b..f2b8feac56 100644
--- a/meta-openembedded/meta-networking/recipes-support/dnsmasq/dnsmasq_2.81.bb
+++ b/meta-openembedded/meta-networking/recipes-support/dnsmasq/dnsmasq_2.81.bb
@@ -11,4 +11,6 @@ SRC_URI += "\
file://CVE-2020-25686-1.patch \
file://CVE-2020-25686-2.patch \
file://CVE-2021-3448.patch \
+ file://CVE-2022-0934.patch \
+ file://CVE-2023-28450.patch \
"
diff --git a/meta-openembedded/meta-networking/recipes-support/strongswan/files/CVE-2022-40617.patch b/meta-openembedded/meta-networking/recipes-support/strongswan/files/CVE-2022-40617.patch
new file mode 100644
index 0000000000..66e5047125
--- /dev/null
+++ b/meta-openembedded/meta-networking/recipes-support/strongswan/files/CVE-2022-40617.patch
@@ -0,0 +1,210 @@
+From 66d3b2e0e596a6eac1ebcd15c83a8d9368fe7b34 Mon Sep 17 00:00:00 2001
+From: Tobias Brunner <tobias@strongswan.org>
+Date: Fri, 22 Jul 2022 15:37:43 +0200
+Subject: [PATCH] credential-manager: Do online revocation checks only after
+ basic trust chain validation
+
+This avoids querying URLs of potentially untrusted certificates, e.g. if
+an attacker sends a specially crafted end-entity and intermediate CA
+certificate with a CDP that points to a server that completes the
+TCP handshake but then does not send any further data, which will block
+the fetcher thread (depending on the plugin) for as long as the default
+timeout for TCP. Doing that multiple times will block all worker threads,
+leading to a DoS attack.
+
+The logging during the certificate verification obviously changes. The
+following example shows the output of `pki --verify` for the current
+strongswan.org certificate:
+
+new:
+
+ using certificate "CN=www.strongswan.org"
+ using trusted intermediate ca certificate "C=US, O=Let's Encrypt, CN=R3"
+ using trusted ca certificate "C=US, O=Internet Security Research Group, CN=ISRG Root X1"
+ reached self-signed root ca with a path length of 1
+checking certificate status of "CN=www.strongswan.org"
+ requesting ocsp status from 'http://r3.o.lencr.org' ...
+ ocsp response correctly signed by "C=US, O=Let's Encrypt, CN=R3"
+ ocsp response is valid: until Jul 27 12:59:58 2022
+certificate status is good
+checking certificate status of "C=US, O=Let's Encrypt, CN=R3"
+ocsp response verification failed, no signer certificate 'C=US, O=Let's Encrypt, CN=R3' found
+ fetching crl from 'http://x1.c.lencr.org/' ...
+ using trusted certificate "C=US, O=Internet Security Research Group, CN=ISRG Root X1"
+ crl correctly signed by "C=US, O=Internet Security Research Group, CN=ISRG Root X1"
+ crl is valid: until Apr 18 01:59:59 2023
+certificate status is good
+certificate trusted, lifetimes valid, certificate not revoked
+
+old:
+
+ using certificate "CN=www.strongswan.org"
+ using trusted intermediate ca certificate "C=US, O=Let's Encrypt, CN=R3"
+checking certificate status of "CN=www.strongswan.org"
+ requesting ocsp status from 'http://r3.o.lencr.org' ...
+ ocsp response correctly signed by "C=US, O=Let's Encrypt, CN=R3"
+ ocsp response is valid: until Jul 27 12:59:58 2022
+certificate status is good
+ using trusted ca certificate "C=US, O=Internet Security Research Group, CN=ISRG Root X1"
+checking certificate status of "C=US, O=Let's Encrypt, CN=R3"
+ocsp response verification failed, no signer certificate 'C=US, O=Let's Encrypt, CN=R3' found
+ fetching crl from 'http://x1.c.lencr.org/' ...
+ using trusted certificate "C=US, O=Internet Security Research Group, CN=ISRG Root X1"
+ crl correctly signed by "C=US, O=Internet Security Research Group, CN=ISRG Root X1"
+ crl is valid: until Apr 18 01:59:59 2023
+certificate status is good
+ reached self-signed root ca with a path length of 1
+certificate trusted, lifetimes valid, certificate not revoked
+
+Note that this also fixes an issue with the previous dual-use of the
+`trusted` flag. It not only indicated whether the chain is trusted but
+also whether the current issuer is the root anchor (the corresponding
+flag in the `cert_validator_t` interface is called `anchor`). This was
+a problem when building multi-level trust chains for pre-trusted
+end-entity certificates (i.e. where `trusted` is TRUE from the start).
+This caused the main loop to get aborted after the first intermediate CA
+certificate and the mentioned `anchor` flag wasn't correct in any calls
+to `cert_validator_t` implementations.
+
+Fixes: CVE-2022-40617
+
+CVE: CVE-2022-40617
+Upstream-Status: Backport [https://download.strongswan.org/security/CVE-2022-40617/strongswan-5.1.0-5.9.7_cert_online_validate.patch]
+Signed-off-by: Ranjitsinh Rathod <ranjitsinh.rathod@kpit.com>
+
+---
+ .../credentials/credential_manager.c | 54 +++++++++++++++----
+ 1 file changed, 45 insertions(+), 9 deletions(-)
+
+diff --git a/src/libstrongswan/credentials/credential_manager.c b/src/libstrongswan/credentials/credential_manager.c
+index e93b5943a3a7..798785544e41 100644
+--- a/src/libstrongswan/credentials/credential_manager.c
++++ b/src/libstrongswan/credentials/credential_manager.c
+@@ -556,7 +556,7 @@ static void cache_queue(private_credential_manager_t *this)
+ */
+ static bool check_lifetime(private_credential_manager_t *this,
+ certificate_t *cert, char *label,
+- int pathlen, bool trusted, auth_cfg_t *auth)
++ int pathlen, bool anchor, auth_cfg_t *auth)
+ {
+ time_t not_before, not_after;
+ cert_validator_t *validator;
+@@ -571,7 +571,7 @@ static bool check_lifetime(private_credential_manager_t *this,
+ continue;
+ }
+ status = validator->check_lifetime(validator, cert,
+- pathlen, trusted, auth);
++ pathlen, anchor, auth);
+ if (status != NEED_MORE)
+ {
+ break;
+@@ -604,13 +604,13 @@ static bool check_lifetime(private_credential_manager_t *this,
+ */
+ static bool check_certificate(private_credential_manager_t *this,
+ certificate_t *subject, certificate_t *issuer, bool online,
+- int pathlen, bool trusted, auth_cfg_t *auth)
++ int pathlen, bool anchor, auth_cfg_t *auth)
+ {
+ cert_validator_t *validator;
+ enumerator_t *enumerator;
+
+ if (!check_lifetime(this, subject, "subject", pathlen, FALSE, auth) ||
+- !check_lifetime(this, issuer, "issuer", pathlen + 1, trusted, auth))
++ !check_lifetime(this, issuer, "issuer", pathlen + 1, anchor, auth))
+ {
+ return FALSE;
+ }
+@@ -623,7 +623,7 @@ static bool check_certificate(private_credential_manager_t *this,
+ continue;
+ }
+ if (!validator->validate(validator, subject, issuer,
+- online, pathlen, trusted, auth))
++ online, pathlen, anchor, auth))
+ {
+ enumerator->destroy(enumerator);
+ return FALSE;
+@@ -726,6 +726,7 @@ static bool verify_trust_chain(private_credential_manager_t *this,
+ auth_cfg_t *auth;
+ signature_params_t *scheme;
+ int pathlen;
++ bool is_anchor = FALSE;
+
+ auth = auth_cfg_create();
+ get_key_strength(subject, auth);
+@@ -743,7 +744,7 @@ static bool verify_trust_chain(private_credential_manager_t *this,
+ auth->add(auth, AUTH_RULE_CA_CERT, issuer->get_ref(issuer));
+ DBG1(DBG_CFG, " using trusted ca certificate \"%Y\"",
+ issuer->get_subject(issuer));
+- trusted = TRUE;
++ trusted = is_anchor = TRUE;
+ }
+ else
+ {
+@@ -778,11 +779,18 @@ static bool verify_trust_chain(private_credential_manager_t *this,
+ DBG1(DBG_CFG, " issuer is \"%Y\"",
+ current->get_issuer(current));
+ call_hook(this, CRED_HOOK_NO_ISSUER, current);
++ if (trusted)
++ {
++ DBG1(DBG_CFG, " reached end of incomplete trust chain for "
++ "trusted certificate \"%Y\"",
++ subject->get_subject(subject));
++ }
+ break;
+ }
+ }
+- if (!check_certificate(this, current, issuer, online,
+- pathlen, trusted, auth))
++ /* don't do online verification here */
++ if (!check_certificate(this, current, issuer, FALSE,
++ pathlen, is_anchor, auth))
+ {
+ trusted = FALSE;
+ issuer->destroy(issuer);
+@@ -794,7 +802,7 @@ static bool verify_trust_chain(private_credential_manager_t *this,
+ }
+ current->destroy(current);
+ current = issuer;
+- if (trusted)
++ if (is_anchor)
+ {
+ DBG1(DBG_CFG, " reached self-signed root ca with a "
+ "path length of %d", pathlen);
+@@ -807,6 +815,34 @@ static bool verify_trust_chain(private_credential_manager_t *this,
+ DBG1(DBG_CFG, "maximum path length of %d exceeded", MAX_TRUST_PATH_LEN);
+ call_hook(this, CRED_HOOK_EXCEEDED_PATH_LEN, subject);
+ }
++ else if (trusted && online)
++ {
++ enumerator_t *enumerator;
++ auth_rule_t rule;
++
++ /* do online revocation checks after basic validation of the chain */
++ pathlen = 0;
++ current = subject;
++ enumerator = auth->create_enumerator(auth);
++ while (enumerator->enumerate(enumerator, &rule, &issuer))
++ {
++ if (rule == AUTH_RULE_CA_CERT || rule == AUTH_RULE_IM_CERT)
++ {
++ if (!check_certificate(this, current, issuer, TRUE, pathlen++,
++ rule == AUTH_RULE_CA_CERT, auth))
++ {
++ trusted = FALSE;
++ break;
++ }
++ else if (rule == AUTH_RULE_CA_CERT)
++ {
++ break;
++ }
++ current = issuer;
++ }
++ }
++ enumerator->destroy(enumerator);
++ }
+ if (trusted)
+ {
+ result->merge(result, auth, FALSE);
+--
+2.25.1
+
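
Editor's note: the strongswan patch above reorders validation so that OCSP/CRL fetches happen only for chains that already passed offline checks; attacker-supplied certificates can no longer make the daemon open outbound connections. A compressed sketch of that two-pass shape, with hypothetical helpers (check_offline, check_revocation_online) standing in for the cert_validator_t plugins:

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct cert cert_t;            /* opaque certificate handle (sketch) */

    bool check_offline(cert_t *subject, cert_t *issuer);           /* sigs, lifetimes */
    bool check_revocation_online(cert_t *subject, cert_t *issuer); /* OCSP / CRL */

    bool verify_chain(cert_t *chain[], size_t n)   /* chain[0] = end entity */
    {
        /* pass 1: purely local checks over the whole chain */
        for (size_t i = 0; i + 1 < n; i++)
            if (!check_offline(chain[i], chain[i + 1]))
                return false;

        /* pass 2: network lookups run only once the chain is already trusted */
        for (size_t i = 0; i + 1 < n; i++)
            if (!check_revocation_online(chain[i], chain[i + 1]))
                return false;

        return true;
    }
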
diff --git a/meta-openembedded/meta-networking/recipes-support/strongswan/strongswan_5.8.4.bb b/meta-openembedded/meta-networking/recipes-support/strongswan/strongswan_5.8.4.bb
index 8a5855fb87..c11748645c 100644
--- a/meta-openembedded/meta-networking/recipes-support/strongswan/strongswan_5.8.4.bb
+++ b/meta-openembedded/meta-networking/recipes-support/strongswan/strongswan_5.8.4.bb
@@ -14,6 +14,7 @@ SRC_URI = "http://download.strongswan.org/strongswan-${PV}.tar.bz2 \
file://CVE-2021-41990.patch \
file://CVE-2021-41991.patch \
file://CVE-2021-45079.patch \
+ file://CVE-2022-40617.patch \
"
SRC_URI[md5sum] = "0634e7f40591bd3f6770e583c3f27d29"
diff --git a/meta-openembedded/meta-oe/recipes-connectivity/krb5/krb5/CVE-2022-42898.patch b/meta-openembedded/meta-oe/recipes-connectivity/krb5/krb5/CVE-2022-42898.patch
new file mode 100644
index 0000000000..6d04bf8980
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-connectivity/krb5/krb5/CVE-2022-42898.patch
@@ -0,0 +1,110 @@
+From 4e661f0085ec5f969c76c0896a34322c6c432de4 Mon Sep 17 00:00:00 2001
+From: Greg Hudson <ghudson@mit.edu>
+Date: Mon, 17 Oct 2022 20:25:11 -0400
+Subject: [PATCH] Fix integer overflows in PAC parsing
+
+In krb5_parse_pac(), check for buffer counts large enough to threaten
+integer overflow in the header length and memory length calculations.
+Avoid potential integer overflows when checking the length of each
+buffer. Credit to OSS-Fuzz for discovering one of the issues.
+
+CVE-2022-42898:
+
+In MIT krb5 releases 1.8 and later, an authenticated attacker may be
+able to cause a KDC or kadmind process to crash by reading beyond the
+bounds of allocated memory, creating a denial of service. A
+privileged attacker may similarly be able to cause a Kerberos or GSS
+application service to crash. On 32-bit platforms, an attacker can
+also cause insufficient memory to be allocated for the result,
+potentially leading to remote code execution in a KDC, kadmind, or GSS
+or Kerberos application server process. An attacker with the
+privileges of a cross-realm KDC may be able to extract secrets from a
+KDC process's memory by having them copied into the PAC of a new
+ticket.
+
+(cherry picked from commit ea92d2f0fcceb54a70910fa32e9a0d7a5afc3583)
+
+ticket: 9074
+version_fixed: 1.19.4
+
+Upstream-Status: Backport [https://github.com/krb5/krb5/commit/4e661f0085ec5f969c76c0896a34322c6c432de4]
+CVE: CVE-2022-42898
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ src/lib/krb5/krb/pac.c | 9 +++++++--
+ src/lib/krb5/krb/t_pac.c | 18 ++++++++++++++++++
+ 2 files changed, 25 insertions(+), 2 deletions(-)
+
+diff --git a/src/lib/krb5/krb/pac.c b/src/lib/krb5/krb/pac.c
+index cc74f37..70428a1 100644
+--- a/src/lib/krb5/krb/pac.c
++++ b/src/lib/krb5/krb/pac.c
+@@ -27,6 +27,8 @@
+ #include "k5-int.h"
+ #include "authdata.h"
+
++#define MAX_BUFFERS 4096
++
+ /* draft-brezak-win2k-krb-authz-00 */
+
+ /*
+@@ -316,6 +318,9 @@ krb5_pac_parse(krb5_context context,
+ if (version != 0)
+ return EINVAL;
+
++ if (cbuffers < 1 || cbuffers > MAX_BUFFERS)
++ return ERANGE;
++
+ header_len = PACTYPE_LENGTH + (cbuffers * PAC_INFO_BUFFER_LENGTH);
+ if (len < header_len)
+ return ERANGE;
+@@ -348,8 +353,8 @@ krb5_pac_parse(krb5_context context,
+ krb5_pac_free(context, pac);
+ return EINVAL;
+ }
+- if (buffer->Offset < header_len ||
+- buffer->Offset + buffer->cbBufferSize > len) {
++ if (buffer->Offset < header_len || buffer->Offset > len ||
++ buffer->cbBufferSize > len - buffer->Offset) {
+ krb5_pac_free(context, pac);
+ return ERANGE;
+ }
+diff --git a/src/lib/krb5/krb/t_pac.c b/src/lib/krb5/krb/t_pac.c
+index 7b756a2..2353e9f 100644
+--- a/src/lib/krb5/krb/t_pac.c
++++ b/src/lib/krb5/krb/t_pac.c
+@@ -431,6 +431,16 @@ static const unsigned char s4u_pac_ent_xrealm[] = {
+ 0x8a, 0x81, 0x9c, 0x9c, 0x00, 0x00, 0x00, 0x00
+ };
+
++static const unsigned char fuzz1[] = {
++ 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00, 0x00,
++ 0x06, 0xff, 0xff, 0xff, 0x00, 0x00, 0xf5
++};
++
++static const unsigned char fuzz2[] = {
++ 0x00, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00, 0x00,
++ 0x20, 0x20
++};
++
+ static const char *s4u_principal = "w2k8u@ACME.COM";
+ static const char *s4u_enterprise = "w2k8u@abc@ACME.COM";
+
+@@ -646,6 +656,14 @@ main(int argc, char **argv)
+ krb5_free_principal(context, sep);
+ }
+
++ /* Check problematic PACs found by fuzzing. */
++ ret = krb5_pac_parse(context, fuzz1, sizeof(fuzz1), &pac);
++ if (!ret)
++ err(context, ret, "krb5_pac_parse should have failed");
++ ret = krb5_pac_parse(context, fuzz2, sizeof(fuzz2), &pac);
++ if (!ret)
++ err(context, ret, "krb5_pac_parse should have failed");
++
+ /*
+ * Test empty free
+ */
+--
+2.25.1
+
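
Editor's note: the krb5 fix above is a textbook overflow-safe bounds check: instead of testing offset + size > len, which can wrap, it tests offset > len and then size > len - offset. A one-function sketch of the same idea:

    #include <stdbool.h>
    #include <stdint.h>

    /* True if [offset, offset + size) fits inside a buffer of length len,
     * with no intermediate sum that could wrap around. */
    static bool range_fits(uint32_t offset, uint32_t size, uint32_t len)
    {
        return offset <= len && size <= len - offset;
    }

The patch also caps cbuffers before computing header_len for the same reason: the multiplication with PAC_INFO_BUFFER_LENGTH must not overflow either.
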
diff --git a/meta-openembedded/meta-oe/recipes-connectivity/krb5/krb5_1.17.1.bb b/meta-openembedded/meta-oe/recipes-connectivity/krb5/krb5_1.17.1.bb
index ae58e2df35..ebcfbc524c 100644
--- a/meta-openembedded/meta-oe/recipes-connectivity/krb5/krb5_1.17.1.bb
+++ b/meta-openembedded/meta-oe/recipes-connectivity/krb5/krb5_1.17.1.bb
@@ -31,6 +31,7 @@ SRC_URI = "http://web.mit.edu/kerberos/dist/${BPN}/${SHRT_VER}/${BP}.tar.gz \
file://krb5-kdc.service \
file://krb5-admin-server.service \
file://CVE-2021-36222.patch \
+ file://CVE-2022-42898.patch;striplevel=2 \
"
SRC_URI[md5sum] = "417d654c72526ac51466e7fe84608878"
SRC_URI[sha256sum] = "3706d7ec2eaa773e0e32d3a87bf742ebaecae7d064e190443a3acddfd8afb181"
diff --git a/meta-openembedded/meta-oe/recipes-connectivity/zeromq/files/0001-CMakeLists-txt-Avoid-host-specific-path-to-libsodium.patch b/meta-openembedded/meta-oe/recipes-connectivity/zeromq/files/0001-CMakeLists-txt-Avoid-host-specific-path-to-libsodium.patch
index eb3dee4d31..31f6529225 100644
--- a/meta-openembedded/meta-oe/recipes-connectivity/zeromq/files/0001-CMakeLists-txt-Avoid-host-specific-path-to-libsodium.patch
+++ b/meta-openembedded/meta-oe/recipes-connectivity/zeromq/files/0001-CMakeLists-txt-Avoid-host-specific-path-to-libsodium.patch
@@ -19,8 +19,8 @@ Signed-off-by: Niko Mauno <niko.mauno@vaisala.com>
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
-@@ -1210,7 +1210,7 @@
- target_link_libraries(libzmq ${OPTIONAL_LIBRARIES} ${CMAKE_THREAD_LIBS_INIT})
+@@ -1440,7 +1440,7 @@ if(BUILD_SHARED)
+ endif()
if(SODIUM_FOUND)
- target_link_libraries(libzmq ${SODIUM_LIBRARIES})
@@ -28,8 +28,8 @@ Signed-off-by: Niko Mauno <niko.mauno@vaisala.com>
# On Solaris, libsodium depends on libssp
if(${CMAKE_SYSTEM_NAME} MATCHES "SunOS")
target_link_libraries(libzmq ssp)
-@@ -1240,7 +1240,7 @@
- target_link_libraries(libzmq-static ${OPTIONAL_LIBRARIES} ${CMAKE_THREAD_LIBS_INIT})
+@@ -1485,7 +1485,7 @@ if(BUILD_STATIC)
+ endif()
if(SODIUM_FOUND)
- target_link_libraries(libzmq-static ${SODIUM_LIBRARIES})
diff --git a/meta-openembedded/meta-oe/recipes-connectivity/zeromq/zeromq_4.3.2.bb b/meta-openembedded/meta-oe/recipes-connectivity/zeromq/zeromq_4.3.4.bb
index 02a4c04fd7..4381f2d6d6 100644
--- a/meta-openembedded/meta-oe/recipes-connectivity/zeromq/zeromq_4.3.2.bb
+++ b/meta-openembedded/meta-oe/recipes-connectivity/zeromq/zeromq_4.3.4.bb
@@ -10,8 +10,8 @@ SRC_URI = "http://github.com/zeromq/libzmq/releases/download/v${PV}/zeromq-${PV}
file://0001-CMakeLists-txt-Avoid-host-specific-path-to-libsodium.patch \
file://run-ptest \
"
-SRC_URI[md5sum] = "2047e917c2cc93505e2579bcba67a573"
-SRC_URI[sha256sum] = "ebd7b5c830d6428956b67a0454a7f8cbed1de74b3b01e5c33c5378e22740f763"
+SRC_URI[md5sum] = "c897d4005a3f0b8276b00b7921412379"
+SRC_URI[sha256sum] = "c593001a89f5a85dd2ddf564805deb860e02471171b3f204944857336295c3e5"
UPSTREAM_CHECK_URI = "https://github.com/${BPN}/libzmq/releases"
diff --git a/meta-openembedded/meta-oe/recipes-dbs/mysql/mariadb-native_10.4.25.bb b/meta-openembedded/meta-oe/recipes-dbs/mysql/mariadb-native_10.4.28.bb
index e1a038dfa3..e1a038dfa3 100644
--- a/meta-openembedded/meta-oe/recipes-dbs/mysql/mariadb-native_10.4.25.bb
+++ b/meta-openembedded/meta-oe/recipes-dbs/mysql/mariadb-native_10.4.28.bb
diff --git a/meta-openembedded/meta-oe/recipes-dbs/mysql/mariadb.inc b/meta-openembedded/meta-oe/recipes-dbs/mysql/mariadb.inc
index 565f4d5613..e4eb48492a 100644
--- a/meta-openembedded/meta-oe/recipes-dbs/mysql/mariadb.inc
+++ b/meta-openembedded/meta-oe/recipes-dbs/mysql/mariadb.inc
@@ -16,9 +16,10 @@ SRC_URI = "https://downloads.mariadb.org/interstitial/${BP}/source/${BP}.tar.gz
file://sql-CMakeLists.txt-fix-gen_lex_hash-not-found.patch \
file://0001-disable-ucontext-on-musl.patch \
file://fix-arm-atomic.patch \
+ file://CVE-2022-47015.patch \
"
-SRC_URI[sha256sum] = "ff963c4e11bc06b775f66f2b1ddef184996208fb4b23cfdb50d95fb02eaa7ef8"
+SRC_URI[sha256sum] = "003fd23f3c6ee516176e1b62b0b43cdb6cdd3dcd4e30f855c1c5ab2baaf5a86c"
UPSTREAM_CHECK_URI = "https://github.com/MariaDB/server/releases"
diff --git a/meta-openembedded/meta-oe/recipes-dbs/mysql/mariadb/CVE-2022-47015.patch b/meta-openembedded/meta-oe/recipes-dbs/mysql/mariadb/CVE-2022-47015.patch
new file mode 100644
index 0000000000..0ddcdc028c
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-dbs/mysql/mariadb/CVE-2022-47015.patch
@@ -0,0 +1,269 @@
+From be0a46b3d52b58956fd0d47d040b9f4514406954 Mon Sep 17 00:00:00 2001
+From: Nayuta Yanagisawa <nayuta.yanagisawa@hey.com>
+Date: Tue, 27 Sep 2022 15:22:57 +0900
+Subject: [PATCH] MDEV-29644 a potential bug of null pointer dereference in
+ spider_db_mbase::print_warnings()
+
+Upstream-Status: Backport [https://github.com/MariaDB/server/commit/be0a46b3d52b58956fd0d47d040b9f4514406954]
+CVE: CVE-2022-47015
+Signed-off-by: Vivek Kumbhar <vkumbhar@mvista.com>
+---
+ .../spider/bugfix/r/mdev_29644.result | 44 ++++++++++
+ .../mysql-test/spider/bugfix/t/mdev_29644.cnf | 3 +
+ .../spider/bugfix/t/mdev_29644.test | 58 ++++++++++++
+ storage/spider/spd_db_mysql.cc | 88 ++++++++-----------
+ storage/spider/spd_db_mysql.h | 4 +-
+ 5 files changed, 141 insertions(+), 56 deletions(-)
+ create mode 100644 spider/mysql-test/spider/bugfix/r/mdev_29644.result
+ create mode 100644 spider/mysql-test/spider/bugfix/t/mdev_29644.cnf
+ create mode 100644 spider/mysql-test/spider/bugfix/t/mdev_29644.test
+
+diff --git a/spider/mysql-test/spider/bugfix/r/mdev_29644.result b/spider/mysql-test/spider/bugfix/r/mdev_29644.result
+new file mode 100644
+index 00000000..eb725602
+--- /dev/null
++++ b/spider/mysql-test/spider/bugfix/r/mdev_29644.result
+@@ -0,0 +1,44 @@
++#
++# MDEV-29644 a potential bug of null pointer dereference in spider_db_mbase::print_warnings()
++#
++for master_1
++for child2
++child2_1
++child2_2
++child2_3
++for child3
++connection child2_1;
++CREATE DATABASE auto_test_remote;
++USE auto_test_remote;
++CREATE TABLE tbl_a (
++a CHAR(5)
++) ENGINE=InnoDB DEFAULT CHARSET=utf8;
++set @orig_sql_mode=@@global.sql_mode;
++SET GLOBAL sql_mode='';
++connection master_1;
++CREATE DATABASE auto_test_local;
++USE auto_test_local;
++CREATE TABLE tbl_a (
++a CHAR(255)
++) ENGINE=Spider DEFAULT CHARSET=utf8 COMMENT='table "tbl_a", srv "s_2_1"';
++SET @orig_sql_mode=@@global.sql_mode;
++SET GLOBAL sql_mode='';
++INSERT INTO tbl_a VALUES ("this will be truncated");
++NOT FOUND /\[WARN SPIDER RESULT\].* Warning 1265 Data truncated for column 'a' at row 1.*/ in mysqld.1.1.err
++SET @orig_log_result_errors=@@global.spider_log_result_errors;
++SET GLOBAL spider_log_result_errors=4;
++INSERT INTO tbl_a VALUES ("this will be truncated");
++FOUND 1 /\[WARN SPIDER RESULT\].* Warning 1265 Data truncated for column 'a' at row 1.*/ in mysqld.1.1.err
++connection master_1;
++SET GLOBAL spider_log_result_errors=@orig_log_result_errors;
++SET GLOBAL sql_mode=@orig_sql_mode;
++DROP DATABASE IF EXISTS auto_test_local;
++connection child2_1;
++SET GLOBAL sql_mode=@orig_sql_mode;
++DROP DATABASE IF EXISTS auto_test_remote;
++for master_1
++for child2
++child2_1
++child2_2
++child2_3
++for child3
+diff --git a/spider/mysql-test/spider/bugfix/t/mdev_29644.cnf b/spider/mysql-test/spider/bugfix/t/mdev_29644.cnf
+new file mode 100644
+index 00000000..05dfd8a0
+--- /dev/null
++++ b/spider/mysql-test/spider/bugfix/t/mdev_29644.cnf
+@@ -0,0 +1,3 @@
++!include include/default_mysqld.cnf
++!include ../my_1_1.cnf
++!include ../my_2_1.cnf
+diff --git a/spider/mysql-test/spider/bugfix/t/mdev_29644.test b/spider/mysql-test/spider/bugfix/t/mdev_29644.test
+new file mode 100644
+index 00000000..4ebdf317
+--- /dev/null
++++ b/spider/mysql-test/spider/bugfix/t/mdev_29644.test
+@@ -0,0 +1,58 @@
++--echo #
++--echo # MDEV-29644 a potential bug of null pointer dereference in spider_db_mbase::print_warnings()
++--echo #
++
++# The test case below does not cause the potential null pointer dereference.
++# It is just for checking spider_db_mbase::fetch_and_print_warnings() works.
++
++--disable_query_log
++--disable_result_log
++--source ../../t/test_init.inc
++--enable_result_log
++--enable_query_log
++
++--connection child2_1
++CREATE DATABASE auto_test_remote;
++USE auto_test_remote;
++eval CREATE TABLE tbl_a (
++ a CHAR(5)
++) $CHILD2_1_ENGINE $CHILD2_1_CHARSET;
++set @orig_sql_mode=@@global.sql_mode;
++SET GLOBAL sql_mode='';
++
++--connection master_1
++CREATE DATABASE auto_test_local;
++USE auto_test_local;
++eval CREATE TABLE tbl_a (
++ a CHAR(255)
++) $MASTER_1_ENGINE $MASTER_1_CHARSET COMMENT='table "tbl_a", srv "s_2_1"';
++
++SET @orig_sql_mode=@@global.sql_mode;
++SET GLOBAL sql_mode='';
++
++let SEARCH_FILE= $MYSQLTEST_VARDIR/log/mysqld.1.1.err;
++let SEARCH_PATTERN= \[WARN SPIDER RESULT\].* Warning 1265 Data truncated for column 'a' at row 1.*;
++
++INSERT INTO tbl_a VALUES ("this will be truncated");
++--source include/search_pattern_in_file.inc # should not find
++
++SET @orig_log_result_errors=@@global.spider_log_result_errors;
++SET GLOBAL spider_log_result_errors=4;
++
++INSERT INTO tbl_a VALUES ("this will be truncated");
++--source include/search_pattern_in_file.inc # should find
++
++--connection master_1
++SET GLOBAL spider_log_result_errors=@orig_log_result_errors;
++SET GLOBAL sql_mode=@orig_sql_mode;
++DROP DATABASE IF EXISTS auto_test_local;
++
++--connection child2_1
++SET GLOBAL sql_mode=@orig_sql_mode;
++DROP DATABASE IF EXISTS auto_test_remote;
++
++--disable_query_log
++--disable_result_log
++--source ../t/test_deinit.inc
++--enable_query_log
++--enable_result_log
+diff --git a/storage/spider/spd_db_mysql.cc b/storage/spider/spd_db_mysql.cc
+index 85f910aa..7d6bd599 100644
+--- a/storage/spider/spd_db_mysql.cc
++++ b/storage/spider/spd_db_mysql.cc
+@@ -2197,7 +2197,7 @@ int spider_db_mbase::exec_query(
+ db_conn->affected_rows, db_conn->insert_id,
+ db_conn->server_status, db_conn->warning_count);
+ if (spider_param_log_result_errors() >= 3)
+- print_warnings(l_time);
++ fetch_and_print_warnings(l_time);
+ } else if (log_result_errors >= 4)
+ {
+ time_t cur_time = (time_t) time((time_t*) 0);
+@@ -2279,61 +2279,43 @@ bool spider_db_mbase::is_xa_nota_error(
+ DBUG_RETURN(xa_nota);
+ }
+
+-void spider_db_mbase::print_warnings(
+- struct tm *l_time
+-) {
+- DBUG_ENTER("spider_db_mbase::print_warnings");
+- DBUG_PRINT("info",("spider this=%p", this));
+- if (db_conn->status == MYSQL_STATUS_READY)
++void spider_db_mbase::fetch_and_print_warnings(struct tm *l_time)
++{
++ DBUG_ENTER("spider_db_mbase::fetch_and_print_warnings");
++
++ if (spider_param_dry_access() || db_conn->status != MYSQL_STATUS_READY ||
++ db_conn->server_status & SERVER_MORE_RESULTS_EXISTS)
++ DBUG_VOID_RETURN;
++
++ if (mysql_real_query(db_conn, SPIDER_SQL_SHOW_WARNINGS_STR,
++ SPIDER_SQL_SHOW_WARNINGS_LEN))
++ DBUG_VOID_RETURN;
++
++ MYSQL_RES *res= mysql_store_result(db_conn);
++ if (!res)
++ DBUG_VOID_RETURN;
++
++ uint num_fields= mysql_num_fields(res);
++ if (num_fields != 3)
+ {
+-#if MYSQL_VERSION_ID < 50500
+- if (!(db_conn->last_used_con->server_status & SERVER_MORE_RESULTS_EXISTS))
+-#else
+- if (!(db_conn->server_status & SERVER_MORE_RESULTS_EXISTS))
+-#endif
+- {
+- if (
+- spider_param_dry_access() ||
+- !mysql_real_query(db_conn, SPIDER_SQL_SHOW_WARNINGS_STR,
+- SPIDER_SQL_SHOW_WARNINGS_LEN)
+- ) {
+- MYSQL_RES *res = NULL;
+- MYSQL_ROW row = NULL;
+- uint num_fields;
+- if (
+- spider_param_dry_access() ||
+- !(res = mysql_store_result(db_conn)) ||
+- !(row = mysql_fetch_row(res))
+- ) {
+- if (mysql_errno(db_conn))
+- {
+- if (res)
+- mysql_free_result(res);
+- DBUG_VOID_RETURN;
+- }
+- /* no record is ok */
+- }
+- num_fields = mysql_num_fields(res);
+- if (num_fields != 3)
+- {
+- mysql_free_result(res);
+- DBUG_VOID_RETURN;
+- }
+- while (row)
+- {
+- fprintf(stderr, "%04d%02d%02d %02d:%02d:%02d [WARN SPIDER RESULT] "
+- "from [%s] %ld to %ld: %s %s %s\n",
++ mysql_free_result(res);
++ DBUG_VOID_RETURN;
++ }
++
++ MYSQL_ROW row= mysql_fetch_row(res);
++ while (row)
++ {
++ fprintf(stderr,
++ "%04d%02d%02d %02d:%02d:%02d [WARN SPIDER RESULT] from [%s] %ld "
++ "to %ld: %s %s %s\n",
+ l_time->tm_year + 1900, l_time->tm_mon + 1, l_time->tm_mday,
+- l_time->tm_hour, l_time->tm_min, l_time->tm_sec,
+- conn->tgt_host, (ulong) db_conn->thread_id,
+- (ulong) current_thd->thread_id, row[0], row[1], row[2]);
+- row = mysql_fetch_row(res);
+- }
+- if (res)
+- mysql_free_result(res);
+- }
+- }
++ l_time->tm_hour, l_time->tm_min, l_time->tm_sec, conn->tgt_host,
++ (ulong) db_conn->thread_id, (ulong) current_thd->thread_id, row[0],
++ row[1], row[2]);
++ row= mysql_fetch_row(res);
+ }
++ mysql_free_result(res);
++
+ DBUG_VOID_RETURN;
+ }
+
+diff --git a/storage/spider/spd_db_mysql.h b/storage/spider/spd_db_mysql.h
+index 626bb4d5..82c7c0ec 100644
+--- a/storage/spider/spd_db_mysql.h
++++ b/storage/spider/spd_db_mysql.h
+@@ -439,9 +439,7 @@ class spider_db_mbase: public spider_db_conn
+ bool is_xa_nota_error(
+ int error_num
+ );
+- void print_warnings(
+- struct tm *l_time
+- );
++ void fetch_and_print_warnings(struct tm *l_time);
+ spider_db_result *store_result(
+ spider_db_result_buffer **spider_res_buf,
+ st_spider_db_request_key *request_key,
+--
+2.25.1
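
Editor's note: the spider fix above flattens the old deeply nested print_warnings() into early returns, so a NULL result from mysql_store_result() can never reach mysql_num_fields(). A minimal sketch of that shape against the MySQL C API (the surrounding helper name is invented):

    #include <mysql.h>
    #include <stdio.h>

    static void print_show_warnings(MYSQL *conn)
    {
        if (mysql_real_query(conn, "SHOW WARNINGS", 13))
            return;                                /* query failed: nothing to log */

        MYSQL_RES *res = mysql_store_result(conn);
        if (res == NULL)
            return;                                /* no result set (or OOM) */

        if (mysql_num_fields(res) != 3) {          /* Level | Code | Message */
            mysql_free_result(res);
            return;
        }

        for (MYSQL_ROW row = mysql_fetch_row(res); row; row = mysql_fetch_row(res))
            fprintf(stderr, "[WARN] %s %s %s\n", row[0], row[1], row[2]);

        mysql_free_result(res);
    }
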
diff --git a/meta-openembedded/meta-oe/recipes-dbs/mysql/mariadb_10.4.25.bb b/meta-openembedded/meta-oe/recipes-dbs/mysql/mariadb_10.4.28.bb
index c0b53379d9..c0b53379d9 100644
--- a/meta-openembedded/meta-oe/recipes-dbs/mysql/mariadb_10.4.25.bb
+++ b/meta-openembedded/meta-oe/recipes-dbs/mysql/mariadb_10.4.28.bb
diff --git a/meta-openembedded/meta-oe/recipes-dbs/postgresql/files/CVE-2022-1552.patch b/meta-openembedded/meta-oe/recipes-dbs/postgresql/files/CVE-2022-1552.patch
new file mode 100644
index 0000000000..6f0d5ac06f
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-dbs/postgresql/files/CVE-2022-1552.patch
@@ -0,0 +1,947 @@
+From 31eefa1efc8eecb6ab91c8835d2952d44a3b1ae1 Mon Sep 17 00:00:00 2001
+From: Hitendra Prajapati <hprajapati@mvista.com>
+Date: Thu, 22 Sep 2022 11:20:41 +0530
+Subject: [PATCH] CVE-2022-1552
+
+Upstream-Status: Backport [https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=ab49ce7c3414ac19e4afb386d7843ce2d2fb8bda && https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=677a494789062ca88e0142a17bedd5415f6ab0aa]
+
+CVE: CVE-2022-1552
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ contrib/amcheck/expected/check_btree.out | 23 ++++++
+ contrib/amcheck/sql/check_btree.sql | 21 +++++
+ contrib/amcheck/verify_nbtree.c | 27 +++++++
+ src/backend/access/brin/brin.c | 29 ++++++-
+ src/backend/catalog/index.c | 65 ++++++++++++----
+ src/backend/commands/cluster.c | 37 ++++++---
+ src/backend/commands/indexcmds.c | 98 ++++++++++++++++++++----
+ src/backend/commands/matview.c | 30 +++-----
+ src/backend/utils/init/miscinit.c | 24 +++---
+ src/test/regress/expected/privileges.out | 71 +++++++++++++++++
+ src/test/regress/sql/privileges.sql | 64 ++++++++++++++++
+ 11 files changed, 422 insertions(+), 67 deletions(-)
+
+diff --git a/contrib/amcheck/expected/check_btree.out b/contrib/amcheck/expected/check_btree.out
+index 59a805d..0fd6ea0 100644
+--- a/contrib/amcheck/expected/check_btree.out
++++ b/contrib/amcheck/expected/check_btree.out
+@@ -168,11 +168,34 @@ SELECT bt_index_check('toasty', true);
+
+ (1 row)
+
++--
++-- Check that index expressions and predicates are run as the table's owner
++--
++TRUNCATE bttest_a;
++INSERT INTO bttest_a SELECT * FROM generate_series(1, 1000);
++ALTER TABLE bttest_a OWNER TO regress_bttest_role;
++-- A dummy index function checking current_user
++CREATE FUNCTION ifun(int8) RETURNS int8 AS $$
++BEGIN
++ ASSERT current_user = 'regress_bttest_role',
++ format('ifun(%s) called by %s', $1, current_user);
++ RETURN $1;
++END;
++$$ LANGUAGE plpgsql IMMUTABLE;
++CREATE INDEX bttest_a_expr_idx ON bttest_a ((ifun(id) + ifun(0)))
++ WHERE ifun(id + 10) > ifun(10);
++SELECT bt_index_check('bttest_a_expr_idx', true);
++ bt_index_check
++----------------
++
++(1 row)
++
+ -- cleanup
+ DROP TABLE bttest_a;
+ DROP TABLE bttest_b;
+ DROP TABLE bttest_multi;
+ DROP TABLE delete_test_table;
+ DROP TABLE toast_bug;
++DROP FUNCTION ifun(int8);
+ DROP OWNED BY regress_bttest_role; -- permissions
+ DROP ROLE regress_bttest_role;
+diff --git a/contrib/amcheck/sql/check_btree.sql b/contrib/amcheck/sql/check_btree.sql
+index 99acbc8..3248187 100644
+--- a/contrib/amcheck/sql/check_btree.sql
++++ b/contrib/amcheck/sql/check_btree.sql
+@@ -110,11 +110,32 @@ INSERT INTO toast_bug SELECT repeat('a', 2200);
+ -- Should not get false positive report of corruption:
+ SELECT bt_index_check('toasty', true);
+
++--
++-- Check that index expressions and predicates are run as the table's owner
++--
++TRUNCATE bttest_a;
++INSERT INTO bttest_a SELECT * FROM generate_series(1, 1000);
++ALTER TABLE bttest_a OWNER TO regress_bttest_role;
++-- A dummy index function checking current_user
++CREATE FUNCTION ifun(int8) RETURNS int8 AS $$
++BEGIN
++ ASSERT current_user = 'regress_bttest_role',
++ format('ifun(%s) called by %s', $1, current_user);
++ RETURN $1;
++END;
++$$ LANGUAGE plpgsql IMMUTABLE;
++
++CREATE INDEX bttest_a_expr_idx ON bttest_a ((ifun(id) + ifun(0)))
++ WHERE ifun(id + 10) > ifun(10);
++
++SELECT bt_index_check('bttest_a_expr_idx', true);
++
+ -- cleanup
+ DROP TABLE bttest_a;
+ DROP TABLE bttest_b;
+ DROP TABLE bttest_multi;
+ DROP TABLE delete_test_table;
+ DROP TABLE toast_bug;
++DROP FUNCTION ifun(int8);
+ DROP OWNED BY regress_bttest_role; -- permissions
+ DROP ROLE regress_bttest_role;
+diff --git a/contrib/amcheck/verify_nbtree.c b/contrib/amcheck/verify_nbtree.c
+index 700a02f..cb6475d 100644
+--- a/contrib/amcheck/verify_nbtree.c
++++ b/contrib/amcheck/verify_nbtree.c
+@@ -228,6 +228,9 @@ bt_index_check_internal(Oid indrelid, bool parentcheck, bool heapallindexed,
+ Relation indrel;
+ Relation heaprel;
+ LOCKMODE lockmode;
++ Oid save_userid;
++ int save_sec_context;
++ int save_nestlevel;
+
+ if (parentcheck)
+ lockmode = ShareLock;
+@@ -244,9 +247,27 @@ bt_index_check_internal(Oid indrelid, bool parentcheck, bool heapallindexed,
+ */
+ heapid = IndexGetRelation(indrelid, true);
+ if (OidIsValid(heapid))
++ {
+ heaprel = table_open(heapid, lockmode);
++
++ /*
++ * Switch to the table owner's userid, so that any index functions are
++ * run as that user. Also lock down security-restricted operations
++ * and arrange to make GUC variable changes local to this command.
++ */
++ GetUserIdAndSecContext(&save_userid, &save_sec_context);
++ SetUserIdAndSecContext(heaprel->rd_rel->relowner,
++ save_sec_context | SECURITY_RESTRICTED_OPERATION);
++ save_nestlevel = NewGUCNestLevel();
++ }
+ else
++ {
+ heaprel = NULL;
++ /* for "gcc -Og" https://gcc.gnu.org/bugzilla/show_bug.cgi?id=78394 */
++ save_userid = InvalidOid;
++ save_sec_context = -1;
++ save_nestlevel = -1;
++ }
+
+ /*
+ * Open the target index relations separately (like relation_openrv(), but
+@@ -293,6 +314,12 @@ bt_index_check_internal(Oid indrelid, bool parentcheck, bool heapallindexed,
+ heapallindexed, rootdescend);
+ }
+
++ /* Roll back any GUC changes executed by index functions */
++ AtEOXact_GUC(false, save_nestlevel);
++
++ /* Restore userid and security context */
++ SetUserIdAndSecContext(save_userid, save_sec_context);
++
+ /*
+ * Release locks early. That's ok here because nothing in the called
+ * routines will trigger shared cache invalidations to be sent, so we can
+diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c
+index c7b403b..781cac2 100644
+--- a/src/backend/access/brin/brin.c
++++ b/src/backend/access/brin/brin.c
+@@ -873,6 +873,9 @@ brin_summarize_range(PG_FUNCTION_ARGS)
+ Oid heapoid;
+ Relation indexRel;
+ Relation heapRel;
++ Oid save_userid;
++ int save_sec_context;
++ int save_nestlevel;
+ double numSummarized = 0;
+
+ if (RecoveryInProgress())
+@@ -899,7 +902,22 @@ brin_summarize_range(PG_FUNCTION_ARGS)
+ */
+ heapoid = IndexGetRelation(indexoid, true);
+ if (OidIsValid(heapoid))
++ {
+ heapRel = table_open(heapoid, ShareUpdateExclusiveLock);
++
++ /*
++ * Autovacuum calls us. For its benefit, switch to the table owner's
++ * userid, so that any index functions are run as that user. Also
++ * lock down security-restricted operations and arrange to make GUC
++ * variable changes local to this command. This is harmless, albeit
++ * unnecessary, when called from SQL, because we fail shortly if the
++ * user does not own the index.
++ */
++ GetUserIdAndSecContext(&save_userid, &save_sec_context);
++ SetUserIdAndSecContext(heapRel->rd_rel->relowner,
++ save_sec_context | SECURITY_RESTRICTED_OPERATION);
++ save_nestlevel = NewGUCNestLevel();
++ }
+ else
+ heapRel = NULL;
+
+@@ -914,7 +932,7 @@ brin_summarize_range(PG_FUNCTION_ARGS)
+ RelationGetRelationName(indexRel))));
+
+ /* User must own the index (comparable to privileges needed for VACUUM) */
+- if (!pg_class_ownercheck(indexoid, GetUserId()))
++ if (heapRel != NULL && !pg_class_ownercheck(indexoid, save_userid))
+ aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_INDEX,
+ RelationGetRelationName(indexRel));
+
+@@ -932,6 +950,12 @@ brin_summarize_range(PG_FUNCTION_ARGS)
+ /* OK, do it */
+ brinsummarize(indexRel, heapRel, heapBlk, true, &numSummarized, NULL);
+
++ /* Roll back any GUC changes executed by index functions */
++ AtEOXact_GUC(false, save_nestlevel);
++
++ /* Restore userid and security context */
++ SetUserIdAndSecContext(save_userid, save_sec_context);
++
+ relation_close(indexRel, ShareUpdateExclusiveLock);
+ relation_close(heapRel, ShareUpdateExclusiveLock);
+
+@@ -973,6 +997,9 @@ brin_desummarize_range(PG_FUNCTION_ARGS)
+ * passed indexoid isn't an index then IndexGetRelation() will fail.
+ * Rather than emitting a not-very-helpful error message, postpone
+ * complaining, expecting that the is-it-an-index test below will fail.
++ *
++ * Unlike brin_summarize_range(), autovacuum never calls this. Hence, we
++ * don't switch userid.
+ */
+ heapoid = IndexGetRelation(indexoid, true);
+ if (OidIsValid(heapoid))
+diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
+index 3ece136..0333bfd 100644
+--- a/src/backend/catalog/index.c
++++ b/src/backend/catalog/index.c
+@@ -1400,6 +1400,9 @@ index_concurrently_build(Oid heapRelationId,
+ Oid indexRelationId)
+ {
+ Relation heapRel;
++ Oid save_userid;
++ int save_sec_context;
++ int save_nestlevel;
+ Relation indexRelation;
+ IndexInfo *indexInfo;
+
+@@ -1409,7 +1412,16 @@ index_concurrently_build(Oid heapRelationId,
+ /* Open and lock the parent heap relation */
+ heapRel = table_open(heapRelationId, ShareUpdateExclusiveLock);
+
+- /* And the target index relation */
++ /*
++ * Switch to the table owner's userid, so that any index functions are run
++ * as that user. Also lock down security-restricted operations and
++ * arrange to make GUC variable changes local to this command.
++ */
++ GetUserIdAndSecContext(&save_userid, &save_sec_context);
++ SetUserIdAndSecContext(heapRel->rd_rel->relowner,
++ save_sec_context | SECURITY_RESTRICTED_OPERATION);
++ save_nestlevel = NewGUCNestLevel();
++
+ indexRelation = index_open(indexRelationId, RowExclusiveLock);
+
+ /*
+@@ -1425,6 +1437,12 @@ index_concurrently_build(Oid heapRelationId,
+ /* Now build the index */
+ index_build(heapRel, indexRelation, indexInfo, false, true);
+
++ /* Roll back any GUC changes executed by index functions */
++ AtEOXact_GUC(false, save_nestlevel);
++
++ /* Restore userid and security context */
++ SetUserIdAndSecContext(save_userid, save_sec_context);
++
+ /* Close both the relations, but keep the locks */
+ table_close(heapRel, NoLock);
+ index_close(indexRelation, NoLock);
+@@ -3271,7 +3289,17 @@ validate_index(Oid heapId, Oid indexId, Snapshot snapshot)
+
+ /* Open and lock the parent heap relation */
+ heapRelation = table_open(heapId, ShareUpdateExclusiveLock);
+- /* And the target index relation */
++
++ /*
++ * Switch to the table owner's userid, so that any index functions are run
++ * as that user. Also lock down security-restricted operations and
++ * arrange to make GUC variable changes local to this command.
++ */
++ GetUserIdAndSecContext(&save_userid, &save_sec_context);
++ SetUserIdAndSecContext(heapRelation->rd_rel->relowner,
++ save_sec_context | SECURITY_RESTRICTED_OPERATION);
++ save_nestlevel = NewGUCNestLevel();
++
+ indexRelation = index_open(indexId, RowExclusiveLock);
+
+ /*
+@@ -3284,16 +3312,6 @@ validate_index(Oid heapId, Oid indexId, Snapshot snapshot)
+ /* mark build is concurrent just for consistency */
+ indexInfo->ii_Concurrent = true;
+
+- /*
+- * Switch to the table owner's userid, so that any index functions are run
+- * as that user. Also lock down security-restricted operations and
+- * arrange to make GUC variable changes local to this command.
+- */
+- GetUserIdAndSecContext(&save_userid, &save_sec_context);
+- SetUserIdAndSecContext(heapRelation->rd_rel->relowner,
+- save_sec_context | SECURITY_RESTRICTED_OPERATION);
+- save_nestlevel = NewGUCNestLevel();
+-
+ /*
+ * Scan the index and gather up all the TIDs into a tuplesort object.
+ */
+@@ -3497,6 +3515,9 @@ reindex_index(Oid indexId, bool skip_constraint_checks, char persistence,
+ Relation iRel,
+ heapRelation;
+ Oid heapId;
++ Oid save_userid;
++ int save_sec_context;
++ int save_nestlevel;
+ IndexInfo *indexInfo;
+ volatile bool skipped_constraint = false;
+ PGRUsage ru0;
+@@ -3527,6 +3548,16 @@ reindex_index(Oid indexId, bool skip_constraint_checks, char persistence,
+ */
+ iRel = index_open(indexId, AccessExclusiveLock);
+
++ /*
++ * Switch to the table owner's userid, so that any index functions are run
++ * as that user. Also lock down security-restricted operations and
++ * arrange to make GUC variable changes local to this command.
++ */
++ GetUserIdAndSecContext(&save_userid, &save_sec_context);
++ SetUserIdAndSecContext(heapRelation->rd_rel->relowner,
++ save_sec_context | SECURITY_RESTRICTED_OPERATION);
++ save_nestlevel = NewGUCNestLevel();
++
+ if (progress)
+ pgstat_progress_update_param(PROGRESS_CREATEIDX_ACCESS_METHOD_OID,
+ iRel->rd_rel->relam);
+@@ -3684,12 +3715,18 @@ reindex_index(Oid indexId, bool skip_constraint_checks, char persistence,
+ errdetail_internal("%s",
+ pg_rusage_show(&ru0))));
+
+- if (progress)
+- pgstat_progress_end_command();
++ /* Roll back any GUC changes executed by index functions */
++ AtEOXact_GUC(false, save_nestlevel);
++
++ /* Restore userid and security context */
++ SetUserIdAndSecContext(save_userid, save_sec_context);
+
+ /* Close rels, but keep locks */
+ index_close(iRel, NoLock);
+ table_close(heapRelation, NoLock);
++
++ if (progress)
++ pgstat_progress_end_command();
+ }
+
+ /*
+diff --git a/src/backend/commands/cluster.c b/src/backend/commands/cluster.c
+index bd6f408..74db03e 100644
+--- a/src/backend/commands/cluster.c
++++ b/src/backend/commands/cluster.c
+@@ -266,6 +266,9 @@ void
+ cluster_rel(Oid tableOid, Oid indexOid, int options)
+ {
+ Relation OldHeap;
++ Oid save_userid;
++ int save_sec_context;
++ int save_nestlevel;
+ bool verbose = ((options & CLUOPT_VERBOSE) != 0);
+ bool recheck = ((options & CLUOPT_RECHECK) != 0);
+
+@@ -295,6 +298,16 @@ cluster_rel(Oid tableOid, Oid indexOid, int options)
+ return;
+ }
+
++ /*
++ * Switch to the table owner's userid, so that any index functions are run
++ * as that user. Also lock down security-restricted operations and
++ * arrange to make GUC variable changes local to this command.
++ */
++ GetUserIdAndSecContext(&save_userid, &save_sec_context);
++ SetUserIdAndSecContext(OldHeap->rd_rel->relowner,
++ save_sec_context | SECURITY_RESTRICTED_OPERATION);
++ save_nestlevel = NewGUCNestLevel();
++
+ /*
+ * Since we may open a new transaction for each relation, we have to check
+ * that the relation still is what we think it is.
+@@ -309,11 +322,10 @@ cluster_rel(Oid tableOid, Oid indexOid, int options)
+ Form_pg_index indexForm;
+
+ /* Check that the user still owns the relation */
+- if (!pg_class_ownercheck(tableOid, GetUserId()))
++ if (!pg_class_ownercheck(tableOid, save_userid))
+ {
+ relation_close(OldHeap, AccessExclusiveLock);
+- pgstat_progress_end_command();
+- return;
++ goto out;
+ }
+
+ /*
+@@ -327,8 +339,7 @@ cluster_rel(Oid tableOid, Oid indexOid, int options)
+ if (RELATION_IS_OTHER_TEMP(OldHeap))
+ {
+ relation_close(OldHeap, AccessExclusiveLock);
+- pgstat_progress_end_command();
+- return;
++ goto out;
+ }
+
+ if (OidIsValid(indexOid))
+@@ -339,8 +350,7 @@ cluster_rel(Oid tableOid, Oid indexOid, int options)
+ if (!SearchSysCacheExists1(RELOID, ObjectIdGetDatum(indexOid)))
+ {
+ relation_close(OldHeap, AccessExclusiveLock);
+- pgstat_progress_end_command();
+- return;
++ goto out;
+ }
+
+ /*
+@@ -350,8 +360,7 @@ cluster_rel(Oid tableOid, Oid indexOid, int options)
+ if (!HeapTupleIsValid(tuple)) /* probably can't happen */
+ {
+ relation_close(OldHeap, AccessExclusiveLock);
+- pgstat_progress_end_command();
+- return;
++ goto out;
+ }
+ indexForm = (Form_pg_index) GETSTRUCT(tuple);
+ if (!indexForm->indisclustered)
+@@ -413,8 +422,7 @@ cluster_rel(Oid tableOid, Oid indexOid, int options)
+ !RelationIsPopulated(OldHeap))
+ {
+ relation_close(OldHeap, AccessExclusiveLock);
+- pgstat_progress_end_command();
+- return;
++ goto out;
+ }
+
+ /*
+@@ -430,6 +438,13 @@ cluster_rel(Oid tableOid, Oid indexOid, int options)
+
+ /* NB: rebuild_relation does table_close() on OldHeap */
+
++out:
++ /* Roll back any GUC changes executed by index functions */
++ AtEOXact_GUC(false, save_nestlevel);
++
++ /* Restore userid and security context */
++ SetUserIdAndSecContext(save_userid, save_sec_context);
++
+ pgstat_progress_end_command();
+ }
+
+diff --git a/src/backend/commands/indexcmds.c b/src/backend/commands/indexcmds.c
+index be1cf8c..167b377 100644
+--- a/src/backend/commands/indexcmds.c
++++ b/src/backend/commands/indexcmds.c
+@@ -470,21 +470,22 @@ DefineIndex(Oid relationId,
+ LOCKTAG heaplocktag;
+ LOCKMODE lockmode;
+ Snapshot snapshot;
+- int save_nestlevel = -1;
++ Oid root_save_userid;
++ int root_save_sec_context;
++ int root_save_nestlevel;
+ int i;
+
++ root_save_nestlevel = NewGUCNestLevel();
++
+ /*
+ * Some callers need us to run with an empty default_tablespace; this is a
+ * necessary hack to be able to reproduce catalog state accurately when
+ * recreating indexes after table-rewriting ALTER TABLE.
+ */
+ if (stmt->reset_default_tblspc)
+- {
+- save_nestlevel = NewGUCNestLevel();
+ (void) set_config_option("default_tablespace", "",
+ PGC_USERSET, PGC_S_SESSION,
+ GUC_ACTION_SAVE, true, 0, false);
+- }
+
+ /*
+ * Force non-concurrent build on temporary relations, even if CONCURRENTLY
+@@ -563,6 +564,15 @@ DefineIndex(Oid relationId,
+ lockmode = concurrent ? ShareUpdateExclusiveLock : ShareLock;
+ rel = table_open(relationId, lockmode);
+
++ /*
++ * Switch to the table owner's userid, so that any index functions are run
++ * as that user. Also lock down security-restricted operations. We
++ * already arranged to make GUC variable changes local to this command.
++ */
++ GetUserIdAndSecContext(&root_save_userid, &root_save_sec_context);
++ SetUserIdAndSecContext(rel->rd_rel->relowner,
++ root_save_sec_context | SECURITY_RESTRICTED_OPERATION);
++
+ namespaceId = RelationGetNamespace(rel);
+
+ /* Ensure that it makes sense to index this kind of relation */
+@@ -648,7 +658,7 @@ DefineIndex(Oid relationId,
+ {
+ AclResult aclresult;
+
+- aclresult = pg_namespace_aclcheck(namespaceId, GetUserId(),
++ aclresult = pg_namespace_aclcheck(namespaceId, root_save_userid,
+ ACL_CREATE);
+ if (aclresult != ACLCHECK_OK)
+ aclcheck_error(aclresult, OBJECT_SCHEMA,
+@@ -680,7 +690,7 @@ DefineIndex(Oid relationId,
+ {
+ AclResult aclresult;
+
+- aclresult = pg_tablespace_aclcheck(tablespaceId, GetUserId(),
++ aclresult = pg_tablespace_aclcheck(tablespaceId, root_save_userid,
+ ACL_CREATE);
+ if (aclresult != ACLCHECK_OK)
+ aclcheck_error(aclresult, OBJECT_TABLESPACE,
+@@ -1066,15 +1076,17 @@ DefineIndex(Oid relationId,
+
+ ObjectAddressSet(address, RelationRelationId, indexRelationId);
+
+- /*
+- * Revert to original default_tablespace. Must do this before any return
+- * from this function, but after index_create, so this is a good time.
+- */
+- if (save_nestlevel >= 0)
+- AtEOXact_GUC(true, save_nestlevel);
+-
+ if (!OidIsValid(indexRelationId))
+ {
++ /*
++ * Roll back any GUC changes executed by index functions. Also revert
++ * to original default_tablespace if we changed it above.
++ */
++ AtEOXact_GUC(false, root_save_nestlevel);
++
++ /* Restore userid and security context */
++ SetUserIdAndSecContext(root_save_userid, root_save_sec_context);
++
+ table_close(rel, NoLock);
+
+ /* If this is the top-level index, we're done */
+@@ -1084,6 +1096,17 @@ DefineIndex(Oid relationId,
+ return address;
+ }
+
++ /*
++ * Roll back any GUC changes executed by index functions, and keep
++ * subsequent changes local to this command. It's barely possible that
++ * some index function changed a behavior-affecting GUC, e.g. xmloption,
++ * that affects subsequent steps. This improves bug-compatibility with
++ * older PostgreSQL versions. They did the AtEOXact_GUC() here for the
++ * purpose of clearing the above default_tablespace change.
++ */
++ AtEOXact_GUC(false, root_save_nestlevel);
++ root_save_nestlevel = NewGUCNestLevel();
++
+ /* Add any requested comment */
+ if (stmt->idxcomment != NULL)
+ CreateComments(indexRelationId, RelationRelationId, 0,
+@@ -1130,6 +1153,9 @@ DefineIndex(Oid relationId,
+ {
+ Oid childRelid = part_oids[i];
+ Relation childrel;
++ Oid child_save_userid;
++ int child_save_sec_context;
++ int child_save_nestlevel;
+ List *childidxs;
+ ListCell *cell;
+ AttrNumber *attmap;
+@@ -1138,6 +1164,12 @@ DefineIndex(Oid relationId,
+
+ childrel = table_open(childRelid, lockmode);
+
++ GetUserIdAndSecContext(&child_save_userid,
++ &child_save_sec_context);
++ SetUserIdAndSecContext(childrel->rd_rel->relowner,
++ child_save_sec_context | SECURITY_RESTRICTED_OPERATION);
++ child_save_nestlevel = NewGUCNestLevel();
++
+ /*
+ * Don't try to create indexes on foreign tables, though. Skip
+ * those if a regular index, or fail if trying to create a
+@@ -1153,6 +1185,9 @@ DefineIndex(Oid relationId,
+ errdetail("Table \"%s\" contains partitions that are foreign tables.",
+ RelationGetRelationName(rel))));
+
++ AtEOXact_GUC(false, child_save_nestlevel);
++ SetUserIdAndSecContext(child_save_userid,
++ child_save_sec_context);
+ table_close(childrel, lockmode);
+ continue;
+ }
+@@ -1226,6 +1261,9 @@ DefineIndex(Oid relationId,
+ }
+
+ list_free(childidxs);
++ AtEOXact_GUC(false, child_save_nestlevel);
++ SetUserIdAndSecContext(child_save_userid,
++ child_save_sec_context);
+ table_close(childrel, NoLock);
+
+ /*
+@@ -1280,12 +1318,21 @@ DefineIndex(Oid relationId,
+ if (found_whole_row)
+ elog(ERROR, "cannot convert whole-row table reference");
+
++ /*
++ * Recurse as the starting user ID. Callee will use that
++ * for permission checks, then switch again.
++ */
++ Assert(GetUserId() == child_save_userid);
++ SetUserIdAndSecContext(root_save_userid,
++ root_save_sec_context);
+ DefineIndex(childRelid, childStmt,
+ InvalidOid, /* no predefined OID */
+ indexRelationId, /* this is our child */
+ createdConstraintId,
+ is_alter_table, check_rights, check_not_in_use,
+ skip_build, quiet);
++ SetUserIdAndSecContext(child_save_userid,
++ child_save_sec_context);
+ }
+
+ pgstat_progress_update_param(PROGRESS_CREATEIDX_PARTITIONS_DONE,
+@@ -1322,12 +1369,17 @@ DefineIndex(Oid relationId,
+ * Indexes on partitioned tables are not themselves built, so we're
+ * done here.
+ */
++ AtEOXact_GUC(false, root_save_nestlevel);
++ SetUserIdAndSecContext(root_save_userid, root_save_sec_context);
+ table_close(rel, NoLock);
+ if (!OidIsValid(parentIndexId))
+ pgstat_progress_end_command();
+ return address;
+ }
+
++ AtEOXact_GUC(false, root_save_nestlevel);
++ SetUserIdAndSecContext(root_save_userid, root_save_sec_context);
++
+ if (!concurrent)
+ {
+ /* Close the heap and we're done, in the non-concurrent case */
+@@ -3040,6 +3092,9 @@ ReindexRelationConcurrently(Oid relationOid, int options)
+ Oid newIndexId;
+ Relation indexRel;
+ Relation heapRel;
++ Oid save_userid;
++ int save_sec_context;
++ int save_nestlevel;
+ Relation newIndexRel;
+ LockRelId *lockrelid;
+
+@@ -3047,6 +3102,16 @@ ReindexRelationConcurrently(Oid relationOid, int options)
+ heapRel = table_open(indexRel->rd_index->indrelid,
+ ShareUpdateExclusiveLock);
+
++ /*
++ * Switch to the table owner's userid, so that any index functions are
++ * run as that user. Also lock down security-restricted operations
++ * and arrange to make GUC variable changes local to this command.
++ */
++ GetUserIdAndSecContext(&save_userid, &save_sec_context);
++ SetUserIdAndSecContext(heapRel->rd_rel->relowner,
++ save_sec_context | SECURITY_RESTRICTED_OPERATION);
++ save_nestlevel = NewGUCNestLevel();
++
+ /* This function shouldn't be called for temporary relations. */
+ if (indexRel->rd_rel->relpersistence == RELPERSISTENCE_TEMP)
+ elog(ERROR, "cannot reindex a temporary table concurrently");
+@@ -3101,6 +3166,13 @@ ReindexRelationConcurrently(Oid relationOid, int options)
+
+ index_close(indexRel, NoLock);
+ index_close(newIndexRel, NoLock);
++
++ /* Roll back any GUC changes executed by index functions */
++ AtEOXact_GUC(false, save_nestlevel);
++
++ /* Restore userid and security context */
++ SetUserIdAndSecContext(save_userid, save_sec_context);
++
+ table_close(heapRel, NoLock);
+ }
+
+diff --git a/src/backend/commands/matview.c b/src/backend/commands/matview.c
+index 80e9ec0..e485661 100644
+--- a/src/backend/commands/matview.c
++++ b/src/backend/commands/matview.c
+@@ -167,6 +167,17 @@ ExecRefreshMatView(RefreshMatViewStmt *stmt, const char *queryString,
+ lockmode, 0,
+ RangeVarCallbackOwnsTable, NULL);
+ matviewRel = table_open(matviewOid, NoLock);
++ relowner = matviewRel->rd_rel->relowner;
++
++ /*
++ * Switch to the owner's userid, so that any functions are run as that
++ * user. Also lock down security-restricted operations and arrange to
++ * make GUC variable changes local to this command.
++ */
++ GetUserIdAndSecContext(&save_userid, &save_sec_context);
++ SetUserIdAndSecContext(relowner,
++ save_sec_context | SECURITY_RESTRICTED_OPERATION);
++ save_nestlevel = NewGUCNestLevel();
+
+ /* Make sure it is a materialized view. */
+ if (matviewRel->rd_rel->relkind != RELKIND_MATVIEW)
+@@ -268,19 +279,6 @@ ExecRefreshMatView(RefreshMatViewStmt *stmt, const char *queryString,
+ */
+ SetMatViewPopulatedState(matviewRel, !stmt->skipData);
+
+- relowner = matviewRel->rd_rel->relowner;
+-
+- /*
+- * Switch to the owner's userid, so that any functions are run as that
+- * user. Also arrange to make GUC variable changes local to this command.
+- * Don't lock it down too tight to create a temporary table just yet. We
+- * will switch modes when we are about to execute user code.
+- */
+- GetUserIdAndSecContext(&save_userid, &save_sec_context);
+- SetUserIdAndSecContext(relowner,
+- save_sec_context | SECURITY_LOCAL_USERID_CHANGE);
+- save_nestlevel = NewGUCNestLevel();
+-
+ /* Concurrent refresh builds new data in temp tablespace, and does diff. */
+ if (concurrent)
+ {
+@@ -303,12 +301,6 @@ ExecRefreshMatView(RefreshMatViewStmt *stmt, const char *queryString,
+ LockRelationOid(OIDNewHeap, AccessExclusiveLock);
+ dest = CreateTransientRelDestReceiver(OIDNewHeap);
+
+- /*
+- * Now lock down security-restricted operations.
+- */
+- SetUserIdAndSecContext(relowner,
+- save_sec_context | SECURITY_RESTRICTED_OPERATION);
+-
+ /* Generate the data, if wanted. */
+ if (!stmt->skipData)
+ processed = refresh_matview_datafill(dest, dataQuery, queryString);
+diff --git a/src/backend/utils/init/miscinit.c b/src/backend/utils/init/miscinit.c
+index de554e2..c9f858e 100644
+--- a/src/backend/utils/init/miscinit.c
++++ b/src/backend/utils/init/miscinit.c
+@@ -455,15 +455,21 @@ GetAuthenticatedUserId(void)
+ * with guc.c's internal state, so SET ROLE has to be disallowed.
+ *
+ * SECURITY_RESTRICTED_OPERATION indicates that we are inside an operation
+- * that does not wish to trust called user-defined functions at all. This
+- * bit prevents not only SET ROLE, but various other changes of session state
+- * that normally is unprotected but might possibly be used to subvert the
+- * calling session later. An example is replacing an existing prepared
+- * statement with new code, which will then be executed with the outer
+- * session's permissions when the prepared statement is next used. Since
+- * these restrictions are fairly draconian, we apply them only in contexts
+- * where the called functions are really supposed to be side-effect-free
+- * anyway, such as VACUUM/ANALYZE/REINDEX.
++ * that does not wish to trust called user-defined functions at all. The
++ * policy is to use this before operations, e.g. autovacuum and REINDEX, that
++ * enumerate relations of a database or schema and run functions associated
++ * with each found relation. The relation owner is the new user ID. Set this
++ * as soon as possible after locking the relation. Restore the old user ID as
++ * late as possible before closing the relation; restoring it shortly after
++ * close is also tolerable. If a command has both relation-enumerating and
++ * non-enumerating modes, e.g. ANALYZE, both modes set this bit. This bit
++ * prevents not only SET ROLE, but various other changes of session state that
++ * normally is unprotected but might possibly be used to subvert the calling
++ * session later. An example is replacing an existing prepared statement with
++ * new code, which will then be executed with the outer session's permissions
++ * when the prepared statement is next used. These restrictions are fairly
++ * draconian, but the functions called in relation-enumerating operations are
++ * really supposed to be side-effect-free anyway.
+ *
+ * SECURITY_NOFORCE_RLS indicates that we are inside an operation which should
+ * ignore the FORCE ROW LEVEL SECURITY per-table indication. This is used to
+diff --git a/src/test/regress/expected/privileges.out b/src/test/regress/expected/privileges.out
+index 186d2fb..0f0c1b3 100644
+--- a/src/test/regress/expected/privileges.out
++++ b/src/test/regress/expected/privileges.out
+@@ -1336,6 +1336,61 @@ SELECT has_table_privilege('regress_priv_user1', 'atest4', 'SELECT WITH GRANT OP
+ -- security-restricted operations
+ \c -
+ CREATE ROLE regress_sro_user;
++-- Check that index expressions and predicates are run as the table's owner
++-- A dummy index function checking current_user
++CREATE FUNCTION sro_ifun(int) RETURNS int AS $$
++BEGIN
++ -- Below we set the table's owner to regress_sro_user
++ ASSERT current_user = 'regress_sro_user',
++ format('sro_ifun(%s) called by %s', $1, current_user);
++ RETURN $1;
++END;
++$$ LANGUAGE plpgsql IMMUTABLE;
++-- Create a table owned by regress_sro_user
++CREATE TABLE sro_tab (a int);
++ALTER TABLE sro_tab OWNER TO regress_sro_user;
++INSERT INTO sro_tab VALUES (1), (2), (3);
++-- Create an expression index with a predicate
++CREATE INDEX sro_idx ON sro_tab ((sro_ifun(a) + sro_ifun(0)))
++ WHERE sro_ifun(a + 10) > sro_ifun(10);
++DROP INDEX sro_idx;
++-- Do the same concurrently
++CREATE INDEX CONCURRENTLY sro_idx ON sro_tab ((sro_ifun(a) + sro_ifun(0)))
++ WHERE sro_ifun(a + 10) > sro_ifun(10);
++-- REINDEX
++REINDEX TABLE sro_tab;
++REINDEX INDEX sro_idx;
++REINDEX TABLE CONCURRENTLY sro_tab;
++DROP INDEX sro_idx;
++-- CLUSTER
++CREATE INDEX sro_cluster_idx ON sro_tab ((sro_ifun(a) + sro_ifun(0)));
++CLUSTER sro_tab USING sro_cluster_idx;
++DROP INDEX sro_cluster_idx;
++-- BRIN index
++CREATE INDEX sro_brin ON sro_tab USING brin ((sro_ifun(a) + sro_ifun(0)));
++SELECT brin_desummarize_range('sro_brin', 0);
++ brin_desummarize_range
++------------------------
++
++(1 row)
++
++SELECT brin_summarize_range('sro_brin', 0);
++ brin_summarize_range
++----------------------
++ 1
++(1 row)
++
++DROP TABLE sro_tab;
++-- Check with a partitioned table
++CREATE TABLE sro_ptab (a int) PARTITION BY RANGE (a);
++ALTER TABLE sro_ptab OWNER TO regress_sro_user;
++CREATE TABLE sro_part PARTITION OF sro_ptab FOR VALUES FROM (1) TO (10);
++ALTER TABLE sro_part OWNER TO regress_sro_user;
++INSERT INTO sro_ptab VALUES (1), (2), (3);
++CREATE INDEX sro_pidx ON sro_ptab ((sro_ifun(a) + sro_ifun(0)))
++ WHERE sro_ifun(a + 10) > sro_ifun(10);
++REINDEX TABLE sro_ptab;
++REINDEX INDEX CONCURRENTLY sro_pidx;
+ SET SESSION AUTHORIZATION regress_sro_user;
+ CREATE FUNCTION unwanted_grant() RETURNS void LANGUAGE sql AS
+ 'GRANT regress_priv_group2 TO regress_sro_user';
+@@ -1373,6 +1428,22 @@ CONTEXT: SQL function "unwanted_grant" statement 1
+ SQL statement "SELECT unwanted_grant()"
+ PL/pgSQL function sro_trojan() line 1 at PERFORM
+ SQL function "mv_action" statement 1
++-- REFRESH MATERIALIZED VIEW CONCURRENTLY use of eval_const_expressions()
++SET SESSION AUTHORIZATION regress_sro_user;
++CREATE FUNCTION unwanted_grant_nofail(int) RETURNS int
++ IMMUTABLE LANGUAGE plpgsql AS $$
++BEGIN
++ PERFORM unwanted_grant();
++ RAISE WARNING 'owned';
++ RETURN 1;
++EXCEPTION WHEN OTHERS THEN
++ RETURN 2;
++END$$;
++CREATE MATERIALIZED VIEW sro_index_mv AS SELECT 1 AS c;
++CREATE UNIQUE INDEX ON sro_index_mv (c) WHERE unwanted_grant_nofail(1) > 0;
++\c -
++REFRESH MATERIALIZED VIEW CONCURRENTLY sro_index_mv;
++REFRESH MATERIALIZED VIEW sro_index_mv;
+ DROP OWNED BY regress_sro_user;
+ DROP ROLE regress_sro_user;
+ -- Admin options
+diff --git a/src/test/regress/sql/privileges.sql b/src/test/regress/sql/privileges.sql
+index 34fbf0e..c0b88a6 100644
+--- a/src/test/regress/sql/privileges.sql
++++ b/src/test/regress/sql/privileges.sql
+@@ -826,6 +826,53 @@ SELECT has_table_privilege('regress_priv_user1', 'atest4', 'SELECT WITH GRANT OP
+ \c -
+ CREATE ROLE regress_sro_user;
+
++-- Check that index expressions and predicates are run as the table's owner
++
++-- A dummy index function checking current_user
++CREATE FUNCTION sro_ifun(int) RETURNS int AS $$
++BEGIN
++ -- Below we set the table's owner to regress_sro_user
++ ASSERT current_user = 'regress_sro_user',
++ format('sro_ifun(%s) called by %s', $1, current_user);
++ RETURN $1;
++END;
++$$ LANGUAGE plpgsql IMMUTABLE;
++-- Create a table owned by regress_sro_user
++CREATE TABLE sro_tab (a int);
++ALTER TABLE sro_tab OWNER TO regress_sro_user;
++INSERT INTO sro_tab VALUES (1), (2), (3);
++-- Create an expression index with a predicate
++CREATE INDEX sro_idx ON sro_tab ((sro_ifun(a) + sro_ifun(0)))
++ WHERE sro_ifun(a + 10) > sro_ifun(10);
++DROP INDEX sro_idx;
++-- Do the same concurrently
++CREATE INDEX CONCURRENTLY sro_idx ON sro_tab ((sro_ifun(a) + sro_ifun(0)))
++ WHERE sro_ifun(a + 10) > sro_ifun(10);
++-- REINDEX
++REINDEX TABLE sro_tab;
++REINDEX INDEX sro_idx;
++REINDEX TABLE CONCURRENTLY sro_tab;
++DROP INDEX sro_idx;
++-- CLUSTER
++CREATE INDEX sro_cluster_idx ON sro_tab ((sro_ifun(a) + sro_ifun(0)));
++CLUSTER sro_tab USING sro_cluster_idx;
++DROP INDEX sro_cluster_idx;
++-- BRIN index
++CREATE INDEX sro_brin ON sro_tab USING brin ((sro_ifun(a) + sro_ifun(0)));
++SELECT brin_desummarize_range('sro_brin', 0);
++SELECT brin_summarize_range('sro_brin', 0);
++DROP TABLE sro_tab;
++-- Check with a partitioned table
++CREATE TABLE sro_ptab (a int) PARTITION BY RANGE (a);
++ALTER TABLE sro_ptab OWNER TO regress_sro_user;
++CREATE TABLE sro_part PARTITION OF sro_ptab FOR VALUES FROM (1) TO (10);
++ALTER TABLE sro_part OWNER TO regress_sro_user;
++INSERT INTO sro_ptab VALUES (1), (2), (3);
++CREATE INDEX sro_pidx ON sro_ptab ((sro_ifun(a) + sro_ifun(0)))
++ WHERE sro_ifun(a + 10) > sro_ifun(10);
++REINDEX TABLE sro_ptab;
++REINDEX INDEX CONCURRENTLY sro_pidx;
++
+ SET SESSION AUTHORIZATION regress_sro_user;
+ CREATE FUNCTION unwanted_grant() RETURNS void LANGUAGE sql AS
+ 'GRANT regress_priv_group2 TO regress_sro_user';
+@@ -852,6 +899,23 @@ REFRESH MATERIALIZED VIEW sro_mv;
+ REFRESH MATERIALIZED VIEW sro_mv;
+ BEGIN; SET CONSTRAINTS ALL IMMEDIATE; REFRESH MATERIALIZED VIEW sro_mv; COMMIT;
+
++-- REFRESH MATERIALIZED VIEW CONCURRENTLY use of eval_const_expressions()
++SET SESSION AUTHORIZATION regress_sro_user;
++CREATE FUNCTION unwanted_grant_nofail(int) RETURNS int
++ IMMUTABLE LANGUAGE plpgsql AS $$
++BEGIN
++ PERFORM unwanted_grant();
++ RAISE WARNING 'owned';
++ RETURN 1;
++EXCEPTION WHEN OTHERS THEN
++ RETURN 2;
++END$$;
++CREATE MATERIALIZED VIEW sro_index_mv AS SELECT 1 AS c;
++CREATE UNIQUE INDEX ON sro_index_mv (c) WHERE unwanted_grant_nofail(1) > 0;
++\c -
++REFRESH MATERIALIZED VIEW CONCURRENTLY sro_index_mv;
++REFRESH MATERIALIZED VIEW sro_index_mv;
++
+ DROP OWNED BY regress_sro_user;
+ DROP ROLE regress_sro_user;
+
+--
+2.25.1
+
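Note: every hunk in the CVE-2022-1552 backport above applies the same defence around code paths that evaluate index expressions and predicates (amcheck, BRIN summarization, CREATE/REINDEX INDEX, CLUSTER, REFRESH MATERIALIZED VIEW): switch to the table owner's user ID with SECURITY_RESTRICTED_OPERATION set and open a new GUC nest level before running any user-defined functions, then roll back their GUC changes and restore the caller's identity afterwards. The following is a minimal sketch of that pattern for orientation only; run_as_table_owner() and the elided body are placeholders, not functions added by the patch, and nothing here is part of the recipe or the upstream commit.

    /* Sketch of the owner-switch pattern used throughout the backport. */
    #include "postgres.h"
    #include "access/table.h"
    #include "miscadmin.h"
    #include "storage/lockdefs.h"
    #include "utils/guc.h"
    #include "utils/rel.h"

    static void
    run_as_table_owner(Oid heapOid)
    {
        Relation heapRel = table_open(heapOid, ShareUpdateExclusiveLock);
        Oid      save_userid;
        int      save_sec_context;
        int      save_nestlevel;

        /* Become the table owner and lock down session-state changes */
        GetUserIdAndSecContext(&save_userid, &save_sec_context);
        SetUserIdAndSecContext(heapRel->rd_rel->relowner,
                               save_sec_context | SECURITY_RESTRICTED_OPERATION);
        save_nestlevel = NewGUCNestLevel();

        /* ... evaluate index expressions / predicates here (placeholder) ... */

        /* Discard any GUC changes made by untrusted functions */
        AtEOXact_GUC(false, save_nestlevel);

        /* Restore the original user ID and security context */
        SetUserIdAndSecContext(save_userid, save_sec_context);

        table_close(heapRel, NoLock);
    }

Passing false to AtEOXact_GUC() discards, rather than commits, any SET executed at that nest level, so a hostile index function cannot leave behind session state that would affect the calling superuser or autovacuum worker.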
diff --git a/meta-openembedded/meta-oe/recipes-dbs/postgresql/files/CVE-2022-2625.patch b/meta-openembedded/meta-oe/recipes-dbs/postgresql/files/CVE-2022-2625.patch
new file mode 100644
index 0000000000..6417d8a2b7
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-dbs/postgresql/files/CVE-2022-2625.patch
@@ -0,0 +1,904 @@
+From 84375c1db25ef650902cf80712495fc514b0ff63 Mon Sep 17 00:00:00 2001
+From: Hitendra Prajapati <hprajapati@mvista.com>
+Date: Thu, 13 Oct 2022 10:35:32 +0530
+Subject: [PATCH] CVE-2022-2625
+
+Upstream-Status: Backport [https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=5579726bd60a6e7afb04a3548bced348cd5ffd89]
+CVE: CVE-2022-2625
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ doc/src/sgml/extend.sgml | 11 --
+ src/backend/catalog/pg_collation.c | 49 ++++--
+ src/backend/catalog/pg_depend.c | 74 ++++++++-
+ src/backend/catalog/pg_operator.c | 2 +-
+ src/backend/catalog/pg_type.c | 7 +-
+ src/backend/commands/createas.c | 18 ++-
+ src/backend/commands/foreigncmds.c | 19 ++-
+ src/backend/commands/schemacmds.c | 25 ++-
+ src/backend/commands/sequence.c | 8 +
+ src/backend/commands/statscmds.c | 4 +
+ src/backend/commands/view.c | 16 +-
+ src/backend/parser/parse_utilcmd.c | 10 ++
+ src/include/catalog/dependency.h | 2 +
+ src/test/modules/test_extensions/Makefile | 5 +-
+ .../expected/test_extensions.out | 153 ++++++++++++++++++
+ .../test_extensions/sql/test_extensions.sql | 110 +++++++++++++
+ .../test_ext_cine--1.0--1.1.sql | 26 +++
+ .../test_extensions/test_ext_cine--1.0.sql | 25 +++
+ .../test_extensions/test_ext_cine.control | 3 +
+ .../test_extensions/test_ext_cor--1.0.sql | 20 +++
+ .../test_extensions/test_ext_cor.control | 3 +
+ 21 files changed, 540 insertions(+), 50 deletions(-)
+ create mode 100644 src/test/modules/test_extensions/test_ext_cine--1.0--1.1.sql
+ create mode 100644 src/test/modules/test_extensions/test_ext_cine--1.0.sql
+ create mode 100644 src/test/modules/test_extensions/test_ext_cine.control
+ create mode 100644 src/test/modules/test_extensions/test_ext_cor--1.0.sql
+ create mode 100644 src/test/modules/test_extensions/test_ext_cor.control
+
+diff --git a/doc/src/sgml/extend.sgml b/doc/src/sgml/extend.sgml
+index 53f2638..bcc7a80 100644
+--- a/doc/src/sgml/extend.sgml
++++ b/doc/src/sgml/extend.sgml
+@@ -1109,17 +1109,6 @@ SELECT * FROM pg_extension_update_paths('<replaceable>extension_name</replaceabl
+ <varname>search_path</varname>. However, no mechanism currently exists
+ to require that.
+ </para>
+-
+- <para>
+- Do <emphasis>not</emphasis> use <command>CREATE OR REPLACE
+- FUNCTION</command>, except in an update script that must change the
+- definition of a function that is known to be an extension member
+- already. (Likewise for other <literal>OR REPLACE</literal> options.)
+- Using <literal>OR REPLACE</literal> unnecessarily not only has a risk
+- of accidentally overwriting someone else's function, but it creates a
+- security hazard since the overwritten function would still be owned by
+- its original owner, who could modify it.
+- </para>
+ </sect3>
+ </sect2>
+
+diff --git a/src/backend/catalog/pg_collation.c b/src/backend/catalog/pg_collation.c
+index dd99d53..ba4c3ef 100644
+--- a/src/backend/catalog/pg_collation.c
++++ b/src/backend/catalog/pg_collation.c
+@@ -78,15 +78,25 @@ CollationCreate(const char *collname, Oid collnamespace,
+ * friendlier error message. The unique index provides a backstop against
+ * race conditions.
+ */
+- if (SearchSysCacheExists3(COLLNAMEENCNSP,
+- PointerGetDatum(collname),
+- Int32GetDatum(collencoding),
+- ObjectIdGetDatum(collnamespace)))
++ oid = GetSysCacheOid3(COLLNAMEENCNSP,
++ Anum_pg_collation_oid,
++ PointerGetDatum(collname),
++ Int32GetDatum(collencoding),
++ ObjectIdGetDatum(collnamespace));
++ if (OidIsValid(oid))
+ {
+ if (quiet)
+ return InvalidOid;
+ else if (if_not_exists)
+ {
++ /*
++ * If we are in an extension script, insist that the pre-existing
++ * object be a member of the extension, to avoid security risks.
++ */
++ ObjectAddressSet(myself, CollationRelationId, oid);
++ checkMembershipInCurrentExtension(&myself);
++
++ /* OK to skip */
+ ereport(NOTICE,
+ (errcode(ERRCODE_DUPLICATE_OBJECT),
+ collencoding == -1
+@@ -116,16 +126,19 @@ CollationCreate(const char *collname, Oid collnamespace,
+ * so we take a ShareRowExclusiveLock earlier, to protect against
+ * concurrent changes fooling this check.
+ */
+- if ((collencoding == -1 &&
+- SearchSysCacheExists3(COLLNAMEENCNSP,
+- PointerGetDatum(collname),
+- Int32GetDatum(GetDatabaseEncoding()),
+- ObjectIdGetDatum(collnamespace))) ||
+- (collencoding != -1 &&
+- SearchSysCacheExists3(COLLNAMEENCNSP,
+- PointerGetDatum(collname),
+- Int32GetDatum(-1),
+- ObjectIdGetDatum(collnamespace))))
++ if (collencoding == -1)
++ oid = GetSysCacheOid3(COLLNAMEENCNSP,
++ Anum_pg_collation_oid,
++ PointerGetDatum(collname),
++ Int32GetDatum(GetDatabaseEncoding()),
++ ObjectIdGetDatum(collnamespace));
++ else
++ oid = GetSysCacheOid3(COLLNAMEENCNSP,
++ Anum_pg_collation_oid,
++ PointerGetDatum(collname),
++ Int32GetDatum(-1),
++ ObjectIdGetDatum(collnamespace));
++ if (OidIsValid(oid))
+ {
+ if (quiet)
+ {
+@@ -134,6 +147,14 @@ CollationCreate(const char *collname, Oid collnamespace,
+ }
+ else if (if_not_exists)
+ {
++ /*
++ * If we are in an extension script, insist that the pre-existing
++ * object be a member of the extension, to avoid security risks.
++ */
++ ObjectAddressSet(myself, CollationRelationId, oid);
++ checkMembershipInCurrentExtension(&myself);
++
++ /* OK to skip */
+ table_close(rel, NoLock);
+ ereport(NOTICE,
+ (errcode(ERRCODE_DUPLICATE_OBJECT),
+diff --git a/src/backend/catalog/pg_depend.c b/src/backend/catalog/pg_depend.c
+index 9ffadbb..71c7cef 100644
+--- a/src/backend/catalog/pg_depend.c
++++ b/src/backend/catalog/pg_depend.c
+@@ -124,15 +124,23 @@ recordMultipleDependencies(const ObjectAddress *depender,
+
+ /*
+ * If we are executing a CREATE EXTENSION operation, mark the given object
+- * as being a member of the extension. Otherwise, do nothing.
++ * as being a member of the extension, or check that it already is one.
++ * Otherwise, do nothing.
+ *
+ * This must be called during creation of any user-definable object type
+ * that could be a member of an extension.
+ *
+- * If isReplace is true, the object already existed (or might have already
+- * existed), so we must check for a pre-existing extension membership entry.
+- * Passing false is a guarantee that the object is newly created, and so
+- * could not already be a member of any extension.
++ * isReplace must be true if the object already existed, and false if it is
++ * newly created. In the former case we insist that it already be a member
++ * of the current extension. In the latter case we can skip checking whether
++ * it is already a member of any extension.
++ *
+ * Note: isReplace = true is typically used when updating an object in
++ * CREATE OR REPLACE and similar commands. We used to allow the target
++ * object to not already be an extension member, instead silently absorbing
++ * it into the current extension. However, this was both error-prone
++ * (extensions might accidentally overwrite free-standing objects) and
++ * a security hazard (since the object would retain its previous ownership).
+ */
+ void
+ recordDependencyOnCurrentExtension(const ObjectAddress *object,
+@@ -150,6 +158,12 @@ recordDependencyOnCurrentExtension(const ObjectAddress *object,
+ {
+ Oid oldext;
+
++ /*
++ * Side note: these catalog lookups are safe only because the
++ * object is a pre-existing one. In the not-isReplace case, the
++ * caller has most likely not yet done a CommandCounterIncrement
++ * that would make the new object visible.
++ */
+ oldext = getExtensionOfObject(object->classId, object->objectId);
+ if (OidIsValid(oldext))
+ {
+@@ -163,6 +177,13 @@ recordDependencyOnCurrentExtension(const ObjectAddress *object,
+ getObjectDescription(object),
+ get_extension_name(oldext))));
+ }
++ /* It's a free-standing object, so reject */
++ ereport(ERROR,
++ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
++ errmsg("%s is not a member of extension \"%s\"",
++ getObjectDescription(object),
++ get_extension_name(CurrentExtensionObject)),
++ errdetail("An extension is not allowed to replace an object that it does not own.")));
+ }
+
+ /* OK, record it as a member of CurrentExtensionObject */
+@@ -174,6 +195,49 @@ recordDependencyOnCurrentExtension(const ObjectAddress *object,
+ }
+ }
+
++/*
++ * If we are executing a CREATE EXTENSION operation, check that the given
++ * object is a member of the extension, and throw an error if it isn't.
++ * Otherwise, do nothing.
++ *
++ * This must be called whenever a CREATE IF NOT EXISTS operation (for an
++ * object type that can be an extension member) has found that an object of
++ * the desired name already exists. It is insecure for an extension to use
++ * IF NOT EXISTS except when the conflicting object is already an extension
++ * member; otherwise a hostile user could substitute an object with arbitrary
++ * properties.
++ */
++void
++checkMembershipInCurrentExtension(const ObjectAddress *object)
++{
++ /*
++ * This is actually the same condition tested in
++ * recordDependencyOnCurrentExtension; but we want to issue a
++ * differently-worded error, and anyway it would be pretty confusing to
++ * call recordDependencyOnCurrentExtension in these circumstances.
++ */
++
++ /* Only whole objects can be extension members */
++ Assert(object->objectSubId == 0);
++
++ if (creating_extension)
++ {
++ Oid oldext;
++
++ oldext = getExtensionOfObject(object->classId, object->objectId);
++ /* If already a member of this extension, OK */
++ if (oldext == CurrentExtensionObject)
++ return;
++ /* Else complain */
++ ereport(ERROR,
++ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
++ errmsg("%s is not a member of extension \"%s\"",
++ getObjectDescription(object),
++ get_extension_name(CurrentExtensionObject)),
++ errdetail("An extension may only use CREATE ... IF NOT EXISTS to skip object creation if the conflicting object is one that it already owns.")));
++ }
++}
++
+ /*
+ * deleteDependencyRecordsFor -- delete all records with given depender
+ * classId/objectId. Returns the number of records deleted.
+diff --git a/src/backend/catalog/pg_operator.c b/src/backend/catalog/pg_operator.c
+index bcaa26c..84784e6 100644
+--- a/src/backend/catalog/pg_operator.c
++++ b/src/backend/catalog/pg_operator.c
+@@ -867,7 +867,7 @@ makeOperatorDependencies(HeapTuple tuple, bool isUpdate)
+ oper->oprowner);
+
+ /* Dependency on extension */
+- recordDependencyOnCurrentExtension(&myself, true);
++ recordDependencyOnCurrentExtension(&myself, isUpdate);
+
+ return myself;
+ }
+diff --git a/src/backend/catalog/pg_type.c b/src/backend/catalog/pg_type.c
+index 2a51501..3ff017f 100644
+--- a/src/backend/catalog/pg_type.c
++++ b/src/backend/catalog/pg_type.c
+@@ -528,10 +528,9 @@ TypeCreate(Oid newTypeOid,
+ * If rebuild is true, we remove existing dependencies and rebuild them
+ * from scratch. This is needed for ALTER TYPE, and also when replacing
+ * a shell type. We don't remove an existing extension dependency, though.
+- * (That means an extension can't absorb a shell type created in another
+- * extension, nor ALTER a type created by another extension. Also, if it
+- * replaces a free-standing shell type or ALTERs a free-standing type,
+- * that type will become a member of the extension.)
++ * That means an extension can't absorb a shell type that is free-standing
++ * or belongs to another extension, nor ALTER a type that is free-standing or
++ * belongs to another extension.
+ */
+ void
+ GenerateTypeDependencies(Oid typeObjectId,
+diff --git a/src/backend/commands/createas.c b/src/backend/commands/createas.c
+index 4c1d909..a68d945 100644
+--- a/src/backend/commands/createas.c
++++ b/src/backend/commands/createas.c
+@@ -243,15 +243,27 @@ ExecCreateTableAs(CreateTableAsStmt *stmt, const char *queryString,
+ if (stmt->if_not_exists)
+ {
+ Oid nspid;
++ Oid oldrelid;
+
+- nspid = RangeVarGetCreationNamespace(stmt->into->rel);
++ nspid = RangeVarGetCreationNamespace(into->rel);
+
+- if (get_relname_relid(stmt->into->rel->relname, nspid))
++ oldrelid = get_relname_relid(into->rel->relname, nspid);
++ if (OidIsValid(oldrelid))
+ {
++ /*
++ * The relation exists and IF NOT EXISTS has been specified.
++ *
++ * If we are in an extension script, insist that the pre-existing
++ * object be a member of the extension, to avoid security risks.
++ */
++ ObjectAddressSet(address, RelationRelationId, oldrelid);
++ checkMembershipInCurrentExtension(&address);
++
++ /* OK to skip */
+ ereport(NOTICE,
+ (errcode(ERRCODE_DUPLICATE_TABLE),
+ errmsg("relation \"%s\" already exists, skipping",
+- stmt->into->rel->relname)));
++ into->rel->relname)));
+ return InvalidObjectAddress;
+ }
+ }
+diff --git a/src/backend/commands/foreigncmds.c b/src/backend/commands/foreigncmds.c
+index d7bc6e3..bc583c6 100644
+--- a/src/backend/commands/foreigncmds.c
++++ b/src/backend/commands/foreigncmds.c
+@@ -887,13 +887,22 @@ CreateForeignServer(CreateForeignServerStmt *stmt)
+ ownerId = GetUserId();
+
+ /*
+- * Check that there is no other foreign server by this name. Do nothing if
+- * IF NOT EXISTS was enforced.
++ * Check that there is no other foreign server by this name. If there is
++ * one, do nothing if IF NOT EXISTS was specified.
+ */
+- if (GetForeignServerByName(stmt->servername, true) != NULL)
++ srvId = get_foreign_server_oid(stmt->servername, true);
++ if (OidIsValid(srvId))
+ {
+ if (stmt->if_not_exists)
+ {
++ /*
++ * If we are in an extension script, insist that the pre-existing
++ * object be a member of the extension, to avoid security risks.
++ */
++ ObjectAddressSet(myself, ForeignServerRelationId, srvId);
++ checkMembershipInCurrentExtension(&myself);
++
++ /* OK to skip */
+ ereport(NOTICE,
+ (errcode(ERRCODE_DUPLICATE_OBJECT),
+ errmsg("server \"%s\" already exists, skipping",
+@@ -1182,6 +1191,10 @@ CreateUserMapping(CreateUserMappingStmt *stmt)
+ {
+ if (stmt->if_not_exists)
+ {
++ /*
++ * Since user mappings aren't members of extensions (see comments
++ * below), no need for checkMembershipInCurrentExtension here.
++ */
+ ereport(NOTICE,
+ (errcode(ERRCODE_DUPLICATE_OBJECT),
+ errmsg("user mapping for \"%s\" already exists for server \"%s\", skipping",
+diff --git a/src/backend/commands/schemacmds.c b/src/backend/commands/schemacmds.c
+index 6cf94a3..6bc4edc 100644
+--- a/src/backend/commands/schemacmds.c
++++ b/src/backend/commands/schemacmds.c
+@@ -113,14 +113,25 @@ CreateSchemaCommand(CreateSchemaStmt *stmt, const char *queryString,
+ * the permissions checks, but since CREATE TABLE IF NOT EXISTS makes its
+ * creation-permission check first, we do likewise.
+ */
+- if (stmt->if_not_exists &&
+- SearchSysCacheExists1(NAMESPACENAME, PointerGetDatum(schemaName)))
++ if (stmt->if_not_exists)
+ {
+- ereport(NOTICE,
+- (errcode(ERRCODE_DUPLICATE_SCHEMA),
+- errmsg("schema \"%s\" already exists, skipping",
+- schemaName)));
+- return InvalidOid;
++ namespaceId = get_namespace_oid(schemaName, true);
++ if (OidIsValid(namespaceId))
++ {
++ /*
++ * If we are in an extension script, insist that the pre-existing
++ * object be a member of the extension, to avoid security risks.
++ */
++ ObjectAddressSet(address, NamespaceRelationId, namespaceId);
++ checkMembershipInCurrentExtension(&address);
++
++ /* OK to skip */
++ ereport(NOTICE,
++ (errcode(ERRCODE_DUPLICATE_SCHEMA),
++ errmsg("schema \"%s\" already exists, skipping",
++ schemaName)));
++ return InvalidOid;
++ }
+ }
+
+ /*
+diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
+index 0960b33..0577184 100644
+--- a/src/backend/commands/sequence.c
++++ b/src/backend/commands/sequence.c
+@@ -149,6 +149,14 @@ DefineSequence(ParseState *pstate, CreateSeqStmt *seq)
+ RangeVarGetAndCheckCreationNamespace(seq->sequence, NoLock, &seqoid);
+ if (OidIsValid(seqoid))
+ {
++ /*
++ * If we are in an extension script, insist that the pre-existing
++ * object be a member of the extension, to avoid security risks.
++ */
++ ObjectAddressSet(address, RelationRelationId, seqoid);
++ checkMembershipInCurrentExtension(&address);
++
++ /* OK to skip */
+ ereport(NOTICE,
+ (errcode(ERRCODE_DUPLICATE_TABLE),
+ errmsg("relation \"%s\" already exists, skipping",
+diff --git a/src/backend/commands/statscmds.c b/src/backend/commands/statscmds.c
+index 5678d31..409cf28 100644
+--- a/src/backend/commands/statscmds.c
++++ b/src/backend/commands/statscmds.c
+@@ -173,6 +173,10 @@ CreateStatistics(CreateStatsStmt *stmt)
+ {
+ if (stmt->if_not_exists)
+ {
++ /*
++ * Since stats objects aren't members of extensions (see comments
++ * below), no need for checkMembershipInCurrentExtension here.
++ */
+ ereport(NOTICE,
+ (errcode(ERRCODE_DUPLICATE_OBJECT),
+ errmsg("statistics object \"%s\" already exists, skipping",
+diff --git a/src/backend/commands/view.c b/src/backend/commands/view.c
+index 87ed453..dd7cc97 100644
+--- a/src/backend/commands/view.c
++++ b/src/backend/commands/view.c
+@@ -205,7 +205,7 @@ DefineVirtualRelation(RangeVar *relation, List *tlist, bool replace,
+ CommandCounterIncrement();
+
+ /*
+- * Finally update the view options.
++ * Update the view's options.
+ *
+ * The new options list replaces the existing options list, even if
+ * it's empty.
+@@ -218,8 +218,22 @@ DefineVirtualRelation(RangeVar *relation, List *tlist, bool replace,
+ /* EventTriggerAlterTableStart called by ProcessUtilitySlow */
+ AlterTableInternal(viewOid, atcmds, true);
+
++ /*
++ * There is very little to do here to update the view's dependencies.
++ * Most view-level dependency relationships, such as those on the
++ * owner, schema, and associated composite type, aren't changing.
++ * Because we don't allow changing type or collation of an existing
++ * view column, those dependencies of the existing columns don't
++ * change either, while the AT_AddColumnToView machinery took care of
++ * adding such dependencies for new view columns. The dependencies of
++ * the view's query could have changed arbitrarily, but that was dealt
++ * with inside StoreViewQuery. What remains is only to check that
++ * view replacement is allowed when we're creating an extension.
++ */
+ ObjectAddressSet(address, RelationRelationId, viewOid);
+
++ recordDependencyOnCurrentExtension(&address, true);
++
+ /*
+ * Seems okay, so return the OID of the pre-existing view.
+ */
+diff --git a/src/backend/parser/parse_utilcmd.c b/src/backend/parser/parse_utilcmd.c
+index 44aa38a..8f4d940 100644
+--- a/src/backend/parser/parse_utilcmd.c
++++ b/src/backend/parser/parse_utilcmd.c
+@@ -206,6 +206,16 @@ transformCreateStmt(CreateStmt *stmt, const char *queryString)
+ */
+ if (stmt->if_not_exists && OidIsValid(existing_relid))
+ {
++ /*
++ * If we are in an extension script, insist that the pre-existing
++ * object be a member of the extension, to avoid security risks.
++ */
++ ObjectAddress address;
++
++ ObjectAddressSet(address, RelationRelationId, existing_relid);
++ checkMembershipInCurrentExtension(&address);
++
++ /* OK to skip */
+ ereport(NOTICE,
+ (errcode(ERRCODE_DUPLICATE_TABLE),
+ errmsg("relation \"%s\" already exists, skipping",
+diff --git a/src/include/catalog/dependency.h b/src/include/catalog/dependency.h
+index 8b1e3aa..27c7509 100644
+--- a/src/include/catalog/dependency.h
++++ b/src/include/catalog/dependency.h
+@@ -201,6 +201,8 @@ extern void recordMultipleDependencies(const ObjectAddress *depender,
+ extern void recordDependencyOnCurrentExtension(const ObjectAddress *object,
+ bool isReplace);
+
++extern void checkMembershipInCurrentExtension(const ObjectAddress *object);
++
+ extern long deleteDependencyRecordsFor(Oid classId, Oid objectId,
+ bool skipExtensionDeps);
+
+diff --git a/src/test/modules/test_extensions/Makefile b/src/test/modules/test_extensions/Makefile
+index d18108e..7428f15 100644
+--- a/src/test/modules/test_extensions/Makefile
++++ b/src/test/modules/test_extensions/Makefile
+@@ -4,10 +4,13 @@ MODULE = test_extensions
+ PGFILEDESC = "test_extensions - regression testing for EXTENSION support"
+
+ EXTENSION = test_ext1 test_ext2 test_ext3 test_ext4 test_ext5 test_ext6 \
+- test_ext7 test_ext8 test_ext_cyclic1 test_ext_cyclic2
++ test_ext7 test_ext8 test_ext_cine test_ext_cor \
++ test_ext_cyclic1 test_ext_cyclic2
+ DATA = test_ext1--1.0.sql test_ext2--1.0.sql test_ext3--1.0.sql \
+ test_ext4--1.0.sql test_ext5--1.0.sql test_ext6--1.0.sql \
+ test_ext7--1.0.sql test_ext7--1.0--2.0.sql test_ext8--1.0.sql \
++ test_ext_cine--1.0.sql test_ext_cine--1.0--1.1.sql \
++ test_ext_cor--1.0.sql \
+ test_ext_cyclic1--1.0.sql test_ext_cyclic2--1.0.sql
+
+ REGRESS = test_extensions test_extdepend
+diff --git a/src/test/modules/test_extensions/expected/test_extensions.out b/src/test/modules/test_extensions/expected/test_extensions.out
+index b5cbdfc..1e91640 100644
+--- a/src/test/modules/test_extensions/expected/test_extensions.out
++++ b/src/test/modules/test_extensions/expected/test_extensions.out
+@@ -154,3 +154,156 @@ DROP TABLE test_ext4_tab;
+ DROP FUNCTION create_extension_with_temp_schema();
+ RESET client_min_messages;
+ \unset SHOW_CONTEXT
++-- It's generally bad style to use CREATE OR REPLACE unnecessarily.
++-- Test what happens if an extension does it anyway.
++-- Replacing a shell type or operator is sort of like CREATE OR REPLACE;
++-- check that too.
++CREATE FUNCTION ext_cor_func() RETURNS text
++ AS $$ SELECT 'ext_cor_func: original'::text $$ LANGUAGE sql;
++CREATE EXTENSION test_ext_cor; -- fail
++ERROR: function ext_cor_func() is not a member of extension "test_ext_cor"
++DETAIL: An extension is not allowed to replace an object that it does not own.
++SELECT ext_cor_func();
++ ext_cor_func
++------------------------
++ ext_cor_func: original
++(1 row)
++
++DROP FUNCTION ext_cor_func();
++CREATE VIEW ext_cor_view AS
++ SELECT 'ext_cor_view: original'::text AS col;
++CREATE EXTENSION test_ext_cor; -- fail
++ERROR: view ext_cor_view is not a member of extension "test_ext_cor"
++DETAIL: An extension is not allowed to replace an object that it does not own.
++SELECT ext_cor_func();
++ERROR: function ext_cor_func() does not exist
++LINE 1: SELECT ext_cor_func();
++ ^
++HINT: No function matches the given name and argument types. You might need to add explicit type casts.
++SELECT * FROM ext_cor_view;
++ col
++------------------------
++ ext_cor_view: original
++(1 row)
++
++DROP VIEW ext_cor_view;
++CREATE TYPE test_ext_type;
++CREATE EXTENSION test_ext_cor; -- fail
++ERROR: type test_ext_type is not a member of extension "test_ext_cor"
++DETAIL: An extension is not allowed to replace an object that it does not own.
++DROP TYPE test_ext_type;
++-- this makes a shell "point <<@@ polygon" operator too
++CREATE OPERATOR @@>> ( PROCEDURE = poly_contain_pt,
++ LEFTARG = polygon, RIGHTARG = point,
++ COMMUTATOR = <<@@ );
++CREATE EXTENSION test_ext_cor; -- fail
++ERROR: operator <<@@(point,polygon) is not a member of extension "test_ext_cor"
++DETAIL: An extension is not allowed to replace an object that it does not own.
++DROP OPERATOR <<@@ (point, polygon);
++CREATE EXTENSION test_ext_cor; -- now it should work
++SELECT ext_cor_func();
++ ext_cor_func
++------------------------------
++ ext_cor_func: from extension
++(1 row)
++
++SELECT * FROM ext_cor_view;
++ col
++------------------------------
++ ext_cor_view: from extension
++(1 row)
++
++SELECT 'x'::test_ext_type;
++ test_ext_type
++---------------
++ x
++(1 row)
++
++SELECT point(0,0) <<@@ polygon(circle(point(0,0),1));
++ ?column?
++----------
++ t
++(1 row)
++
++\dx+ test_ext_cor
++Objects in extension "test_ext_cor"
++ Object description
++------------------------------
++ function ext_cor_func()
++ operator <<@@(point,polygon)
++ type test_ext_type
++ view ext_cor_view
++(4 rows)
++
++--
++-- CREATE IF NOT EXISTS is an entirely unsound thing for an extension
++-- to be doing, but let's at least plug the major security hole in it.
++--
++CREATE COLLATION ext_cine_coll
++ ( LC_COLLATE = "C", LC_CTYPE = "C" );
++CREATE EXTENSION test_ext_cine; -- fail
++ERROR: collation ext_cine_coll is not a member of extension "test_ext_cine"
++DETAIL: An extension may only use CREATE ... IF NOT EXISTS to skip object creation if the conflicting object is one that it already owns.
++DROP COLLATION ext_cine_coll;
++CREATE MATERIALIZED VIEW ext_cine_mv AS SELECT 11 AS f1;
++CREATE EXTENSION test_ext_cine; -- fail
++ERROR: materialized view ext_cine_mv is not a member of extension "test_ext_cine"
++DETAIL: An extension may only use CREATE ... IF NOT EXISTS to skip object creation if the conflicting object is one that it already owns.
++DROP MATERIALIZED VIEW ext_cine_mv;
++CREATE FOREIGN DATA WRAPPER dummy;
++CREATE SERVER ext_cine_srv FOREIGN DATA WRAPPER dummy;
++CREATE EXTENSION test_ext_cine; -- fail
++ERROR: server ext_cine_srv is not a member of extension "test_ext_cine"
++DETAIL: An extension may only use CREATE ... IF NOT EXISTS to skip object creation if the conflicting object is one that it already owns.
++DROP SERVER ext_cine_srv;
++CREATE SCHEMA ext_cine_schema;
++CREATE EXTENSION test_ext_cine; -- fail
++ERROR: schema ext_cine_schema is not a member of extension "test_ext_cine"
++DETAIL: An extension may only use CREATE ... IF NOT EXISTS to skip object creation if the conflicting object is one that it already owns.
++DROP SCHEMA ext_cine_schema;
++CREATE SEQUENCE ext_cine_seq;
++CREATE EXTENSION test_ext_cine; -- fail
++ERROR: sequence ext_cine_seq is not a member of extension "test_ext_cine"
++DETAIL: An extension may only use CREATE ... IF NOT EXISTS to skip object creation if the conflicting object is one that it already owns.
++DROP SEQUENCE ext_cine_seq;
++CREATE TABLE ext_cine_tab1 (x int);
++CREATE EXTENSION test_ext_cine; -- fail
++ERROR: table ext_cine_tab1 is not a member of extension "test_ext_cine"
++DETAIL: An extension may only use CREATE ... IF NOT EXISTS to skip object creation if the conflicting object is one that it already owns.
++DROP TABLE ext_cine_tab1;
++CREATE TABLE ext_cine_tab2 AS SELECT 42 AS y;
++CREATE EXTENSION test_ext_cine; -- fail
++ERROR: table ext_cine_tab2 is not a member of extension "test_ext_cine"
++DETAIL: An extension may only use CREATE ... IF NOT EXISTS to skip object creation if the conflicting object is one that it already owns.
++DROP TABLE ext_cine_tab2;
++CREATE EXTENSION test_ext_cine;
++\dx+ test_ext_cine
++Objects in extension "test_ext_cine"
++ Object description
++-----------------------------------
++ collation ext_cine_coll
++ foreign-data wrapper ext_cine_fdw
++ materialized view ext_cine_mv
++ schema ext_cine_schema
++ sequence ext_cine_seq
++ server ext_cine_srv
++ table ext_cine_tab1
++ table ext_cine_tab2
++(8 rows)
++
++ALTER EXTENSION test_ext_cine UPDATE TO '1.1';
++\dx+ test_ext_cine
++Objects in extension "test_ext_cine"
++ Object description
++-----------------------------------
++ collation ext_cine_coll
++ foreign-data wrapper ext_cine_fdw
++ materialized view ext_cine_mv
++ schema ext_cine_schema
++ sequence ext_cine_seq
++ server ext_cine_srv
++ table ext_cine_tab1
++ table ext_cine_tab2
++ table ext_cine_tab3
++(9 rows)
++
+diff --git a/src/test/modules/test_extensions/sql/test_extensions.sql b/src/test/modules/test_extensions/sql/test_extensions.sql
+index f505466..b3d4579 100644
+--- a/src/test/modules/test_extensions/sql/test_extensions.sql
++++ b/src/test/modules/test_extensions/sql/test_extensions.sql
+@@ -93,3 +93,113 @@ DROP TABLE test_ext4_tab;
+ DROP FUNCTION create_extension_with_temp_schema();
+ RESET client_min_messages;
+ \unset SHOW_CONTEXT
++
++-- It's generally bad style to use CREATE OR REPLACE unnecessarily.
++-- Test what happens if an extension does it anyway.
++-- Replacing a shell type or operator is sort of like CREATE OR REPLACE;
++-- check that too.
++
++CREATE FUNCTION ext_cor_func() RETURNS text
++ AS $$ SELECT 'ext_cor_func: original'::text $$ LANGUAGE sql;
++
++CREATE EXTENSION test_ext_cor; -- fail
++
++SELECT ext_cor_func();
++
++DROP FUNCTION ext_cor_func();
++
++CREATE VIEW ext_cor_view AS
++ SELECT 'ext_cor_view: original'::text AS col;
++
++CREATE EXTENSION test_ext_cor; -- fail
++
++SELECT ext_cor_func();
++
++SELECT * FROM ext_cor_view;
++
++DROP VIEW ext_cor_view;
++
++CREATE TYPE test_ext_type;
++
++CREATE EXTENSION test_ext_cor; -- fail
++
++DROP TYPE test_ext_type;
++
++-- this makes a shell "point <<@@ polygon" operator too
++CREATE OPERATOR @@>> ( PROCEDURE = poly_contain_pt,
++ LEFTARG = polygon, RIGHTARG = point,
++ COMMUTATOR = <<@@ );
++
++CREATE EXTENSION test_ext_cor; -- fail
++
++DROP OPERATOR <<@@ (point, polygon);
++
++CREATE EXTENSION test_ext_cor; -- now it should work
++
++SELECT ext_cor_func();
++
++SELECT * FROM ext_cor_view;
++
++SELECT 'x'::test_ext_type;
++
++SELECT point(0,0) <<@@ polygon(circle(point(0,0),1));
++
++\dx+ test_ext_cor
++
++--
++-- CREATE IF NOT EXISTS is an entirely unsound thing for an extension
++-- to be doing, but let's at least plug the major security hole in it.
++--
++
++CREATE COLLATION ext_cine_coll
++ ( LC_COLLATE = "C", LC_CTYPE = "C" );
++
++CREATE EXTENSION test_ext_cine; -- fail
++
++DROP COLLATION ext_cine_coll;
++
++CREATE MATERIALIZED VIEW ext_cine_mv AS SELECT 11 AS f1;
++
++CREATE EXTENSION test_ext_cine; -- fail
++
++DROP MATERIALIZED VIEW ext_cine_mv;
++
++CREATE FOREIGN DATA WRAPPER dummy;
++
++CREATE SERVER ext_cine_srv FOREIGN DATA WRAPPER dummy;
++
++CREATE EXTENSION test_ext_cine; -- fail
++
++DROP SERVER ext_cine_srv;
++
++CREATE SCHEMA ext_cine_schema;
++
++CREATE EXTENSION test_ext_cine; -- fail
++
++DROP SCHEMA ext_cine_schema;
++
++CREATE SEQUENCE ext_cine_seq;
++
++CREATE EXTENSION test_ext_cine; -- fail
++
++DROP SEQUENCE ext_cine_seq;
++
++CREATE TABLE ext_cine_tab1 (x int);
++
++CREATE EXTENSION test_ext_cine; -- fail
++
++DROP TABLE ext_cine_tab1;
++
++CREATE TABLE ext_cine_tab2 AS SELECT 42 AS y;
++
++CREATE EXTENSION test_ext_cine; -- fail
++
++DROP TABLE ext_cine_tab2;
++
++CREATE EXTENSION test_ext_cine;
++
++\dx+ test_ext_cine
++
++ALTER EXTENSION test_ext_cine UPDATE TO '1.1';
++
++\dx+ test_ext_cine
+diff --git a/src/test/modules/test_extensions/test_ext_cine--1.0--1.1.sql b/src/test/modules/test_extensions/test_ext_cine--1.0--1.1.sql
+new file mode 100644
+index 0000000..6dadfd2
+--- /dev/null
++++ b/src/test/modules/test_extensions/test_ext_cine--1.0--1.1.sql
+@@ -0,0 +1,26 @@
++/* src/test/modules/test_extensions/test_ext_cine--1.0--1.1.sql */
++-- complain if script is sourced in psql, rather than via ALTER EXTENSION
++\echo Use "ALTER EXTENSION test_ext_cine UPDATE TO '1.1'" to load this file. \quit
++
++--
++-- These are the same commands as in the 1.0 script; we expect them
++-- to do nothing.
++--
++
++CREATE COLLATION IF NOT EXISTS ext_cine_coll
++ ( LC_COLLATE = "POSIX", LC_CTYPE = "POSIX" );
++
++CREATE MATERIALIZED VIEW IF NOT EXISTS ext_cine_mv AS SELECT 42 AS f1;
++
++CREATE SERVER IF NOT EXISTS ext_cine_srv FOREIGN DATA WRAPPER ext_cine_fdw;
++
++CREATE SCHEMA IF NOT EXISTS ext_cine_schema;
++
++CREATE SEQUENCE IF NOT EXISTS ext_cine_seq;
++
++CREATE TABLE IF NOT EXISTS ext_cine_tab1 (x int);
++
++CREATE TABLE IF NOT EXISTS ext_cine_tab2 AS SELECT 42 AS y;
++
++-- just to verify the script ran
++CREATE TABLE ext_cine_tab3 (z int);
+diff --git a/src/test/modules/test_extensions/test_ext_cine--1.0.sql b/src/test/modules/test_extensions/test_ext_cine--1.0.sql
+new file mode 100644
+index 0000000..01408ff
+--- /dev/null
++++ b/src/test/modules/test_extensions/test_ext_cine--1.0.sql
+@@ -0,0 +1,25 @@
++/* src/test/modules/test_extensions/test_ext_cine--1.0.sql */
++-- complain if script is sourced in psql, rather than via CREATE EXTENSION
++\echo Use "CREATE EXTENSION test_ext_cine" to load this file. \quit
++
++--
++-- CREATE IF NOT EXISTS is an entirely unsound thing for an extension
++-- to be doing, but let's at least plug the major security hole in it.
++--
++
++CREATE COLLATION IF NOT EXISTS ext_cine_coll
++ ( LC_COLLATE = "POSIX", LC_CTYPE = "POSIX" );
++
++CREATE MATERIALIZED VIEW IF NOT EXISTS ext_cine_mv AS SELECT 42 AS f1;
++
++CREATE FOREIGN DATA WRAPPER ext_cine_fdw;
++
++CREATE SERVER IF NOT EXISTS ext_cine_srv FOREIGN DATA WRAPPER ext_cine_fdw;
++
++CREATE SCHEMA IF NOT EXISTS ext_cine_schema;
++
++CREATE SEQUENCE IF NOT EXISTS ext_cine_seq;
++
++CREATE TABLE IF NOT EXISTS ext_cine_tab1 (x int);
++
++CREATE TABLE IF NOT EXISTS ext_cine_tab2 AS SELECT 42 AS y;
+diff --git a/src/test/modules/test_extensions/test_ext_cine.control b/src/test/modules/test_extensions/test_ext_cine.control
+new file mode 100644
+index 0000000..ced713b
+--- /dev/null
++++ b/src/test/modules/test_extensions/test_ext_cine.control
+@@ -0,0 +1,3 @@
++comment = 'Test extension using CREATE IF NOT EXISTS'
++default_version = '1.0'
++relocatable = true
+diff --git a/src/test/modules/test_extensions/test_ext_cor--1.0.sql b/src/test/modules/test_extensions/test_ext_cor--1.0.sql
+new file mode 100644
+index 0000000..2e8d89c
+--- /dev/null
++++ b/src/test/modules/test_extensions/test_ext_cor--1.0.sql
+@@ -0,0 +1,20 @@
++/* src/test/modules/test_extensions/test_ext_cor--1.0.sql */
++-- complain if script is sourced in psql, rather than via CREATE EXTENSION
++\echo Use "CREATE EXTENSION test_ext_cor" to load this file. \quit
++
++-- It's generally bad style to use CREATE OR REPLACE unnecessarily.
++-- Test what happens if an extension does it anyway.
++
++CREATE OR REPLACE FUNCTION ext_cor_func() RETURNS text
++ AS $$ SELECT 'ext_cor_func: from extension'::text $$ LANGUAGE sql;
++
++CREATE OR REPLACE VIEW ext_cor_view AS
++ SELECT 'ext_cor_view: from extension'::text AS col;
++
++-- These are for testing replacement of a shell type/operator, which works
++-- enough like an implicit OR REPLACE to be important to check.
++
++CREATE TYPE test_ext_type AS ENUM('x', 'y');
++
++CREATE OPERATOR <<@@ ( PROCEDURE = pt_contained_poly,
++ LEFTARG = point, RIGHTARG = polygon );
+diff --git a/src/test/modules/test_extensions/test_ext_cor.control b/src/test/modules/test_extensions/test_ext_cor.control
+new file mode 100644
+index 0000000..0e972e5
+--- /dev/null
++++ b/src/test/modules/test_extensions/test_ext_cor.control
+@@ -0,0 +1,3 @@
++comment = 'Test extension using CREATE OR REPLACE'
++default_version = '1.0'
++relocatable = true
+--
+2.25.1
+
diff --git a/meta-openembedded/meta-oe/recipes-dbs/postgresql/files/CVE-2022-41862.patch b/meta-openembedded/meta-oe/recipes-dbs/postgresql/files/CVE-2022-41862.patch
new file mode 100644
index 0000000000..f4093f4ba7
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-dbs/postgresql/files/CVE-2022-41862.patch
@@ -0,0 +1,48 @@
+From 3f7342671341a7a137f2d8b06ab3461cdb0e1d88 Mon Sep 17 00:00:00 2001
+From: Michael Paquier <michael@paquier.xyz>
+Date: Mon, 6 Feb 2023 11:20:31 +0900
+Subject: [PATCH] Properly NULL-terminate GSS receive buffer on error packet
+ reception
+
+pqsecure_open_gss() includes a code path handling error messages with
+v2-style protocol messages coming from the server. The client-side
+buffer holding the error message does not force a NULL-termination, with
+the data of the server getting copied to the errorMessage of the
+connection. Hence, it would be possible for a server to send an
+unterminated string and copy arbitrary bytes in the buffer receiving the
+error message in the client, opening the door to a crash or even data
+exposure.
+
+As at this stage of the authentication process the exchange has not been
+completed yet, this could be abused by an attacker without Kerberos
+credentials. Clients that have a valid Kerberos cache are vulnerable,
+as libpq opportunistically requests GSS encryption unless gssencmode
+is disabled.
+
+Author: Jacob Champion
+Backpatch-through: 12
+Security: CVE-2022-41862
+
+CVE: CVE-2022-41862
+Upstream-Status: Backport [https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=3f7342671341a7a137f2d8b06ab3461cdb0e1d88]
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ src/interfaces/libpq/fe-secure-gssapi.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/src/interfaces/libpq/fe-secure-gssapi.c b/src/interfaces/libpq/fe-secure-gssapi.c
+index 7b5e383..aef201b 100644
+--- a/src/interfaces/libpq/fe-secure-gssapi.c
++++ b/src/interfaces/libpq/fe-secure-gssapi.c
+@@ -578,6 +578,8 @@ pqsecure_open_gss(PGconn *conn)
+
+ PqGSSRecvLength += ret;
+
++ Assert(PqGSSRecvLength < PQ_GSS_RECV_BUFFER_SIZE);
++ PqGSSRecvBuffer[PqGSSRecvLength] = '\0';
+ printfPQExpBuffer(&conn->errorMessage, "%s\n", PqGSSRecvBuffer + 1);
+
+ return PGRES_POLLING_FAILED;
+--
+2.25.1
+
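The fix above follows a general rule: a buffer filled from the network is not a C string until its length has been checked against the buffer size and a terminating NUL has been written. Below is a minimal, self-contained C++ sketch of that rule; the names (kRecvBufferSize, fake_network_read, receive_error_packet) are illustrative only and are not libpq internals.

// Illustrative sketch only; these names are hypothetical, not libpq code.
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <cstring>
#include <string>

constexpr std::size_t kRecvBufferSize = 16384;

// Stand-in for data read off the wire; note it is not NUL-terminated.
static std::size_t fake_network_read(char *buf, std::size_t cap) {
    const char msg[] = "Efatal: authentication failed";
    std::size_t n = std::min(cap, sizeof(msg) - 1);   // copy without the '\0'
    std::memcpy(buf, msg, n);
    return n;
}

static std::string receive_error_packet() {
    char recv_buffer[kRecvBufferSize];
    std::size_t recv_length = fake_network_read(recv_buffer, sizeof(recv_buffer));

    // The essence of the fix: bound the length, then terminate explicitly
    // before any printf-style code treats the buffer as a C string.
    if (recv_length >= sizeof(recv_buffer))
        recv_length = sizeof(recv_buffer) - 1;
    recv_buffer[recv_length] = '\0';

    // Skip the leading message-type byte, mirroring "PqGSSRecvBuffer + 1".
    return std::string(recv_buffer + 1);
}

int main() {
    std::printf("server error: %s\n", receive_error_packet().c_str());
    return 0;
}

The Assert() plus explicit termination in the real patch play the same role as the bounds check and recv_buffer[recv_length] = '\0' here.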
diff --git a/meta-openembedded/meta-oe/recipes-dbs/postgresql/postgresql_12.9.bb b/meta-openembedded/meta-oe/recipes-dbs/postgresql/postgresql_12.9.bb
index 67bf2b9604..808c5d6e77 100644
--- a/meta-openembedded/meta-oe/recipes-dbs/postgresql/postgresql_12.9.bb
+++ b/meta-openembedded/meta-oe/recipes-dbs/postgresql/postgresql_12.9.bb
@@ -7,6 +7,9 @@ SRC_URI += "\
file://0001-Add-support-for-RISC-V.patch \
file://0001-Improve-reproducibility.patch \
file://remove_duplicate.patch \
+ file://CVE-2022-1552.patch \
+ file://CVE-2022-2625.patch \
+ file://CVE-2022-41862.patch \
"
SRC_URI[sha256sum] = "89fda2de33ed04a98548e43f3ee5f15b882be17505d631fe0dd1a540a2b56dce"
diff --git a/meta-openembedded/meta-oe/recipes-devtools/capnproto/capnproto_0.7.0.bb b/meta-openembedded/meta-oe/recipes-devtools/capnproto/capnproto_0.7.0.bb
index cb748d3cb6..fa1751e566 100644
--- a/meta-openembedded/meta-oe/recipes-devtools/capnproto/capnproto_0.7.0.bb
+++ b/meta-openembedded/meta-oe/recipes-devtools/capnproto/capnproto_0.7.0.bb
@@ -5,7 +5,9 @@ SECTION = "console/tools"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://../LICENSE;md5=a05663ae6cca874123bf667a60dca8c9"
-SRC_URI = "git://github.com/sandstorm-io/capnproto.git;branch=release-${PV};protocol=https"
+SRC_URI = "git://github.com/sandstorm-io/capnproto.git;branch=release-${PV};protocol=https \
+ file://CVE-2022-46149.patch \
+"
SRCREV = "3f44c6db0f0f6c0cab0633f15f15d0a2acd01d19"
S = "${WORKDIR}/git/c++"
diff --git a/meta-openembedded/meta-oe/recipes-devtools/capnproto/files/CVE-2022-46149.patch b/meta-openembedded/meta-oe/recipes-devtools/capnproto/files/CVE-2022-46149.patch
new file mode 100644
index 0000000000..b6b1fa6514
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-devtools/capnproto/files/CVE-2022-46149.patch
@@ -0,0 +1,49 @@
+From 25d34c67863fd960af34fc4f82a7ca3362ee74b9 Mon Sep 17 00:00:00 2001
+From: Kenton Varda <kenton@cloudflare.com>
+Date: Wed, 23 Nov 2022 12:02:29 -0600
+Subject: [PATCH] Apply data offset for list-of-pointers at access time rather
+ than ListReader creation time.
+
+Baking this offset into `ptr` reduced ops needed at access time but made the interpretation of `ptr` inconsistent depending on what type of list was expected.
+
+CVE: CVE-2022-46149
+Upstream-Status: Backport [https://github.com/capnproto/capnproto/commit/25d34c67863fd960af34fc4f82a7ca3362ee74b9]
+Signed-off-by: Virendra Thakur <virendrak@kpit.com>
+---
+ c++/src/capnp/layout.c++ | 4 ----
+ c++/src/capnp/layout.h | 6 +++++-
+ 2 files changed, 5 insertions(+), 5 deletions(-)
+
+Index: c++/src/capnp/layout.c++
+===================================================================
+--- c++.orig/src/capnp/layout.c++
++++ c++/src/capnp/layout.c++
+@@ -2322,10 +2322,6 @@ struct WireHelpers {
+ break;
+
+ case ElementSize::POINTER:
+- // We expected a list of pointers but got a list of structs. Assuming the first field
+- // in the struct is the pointer we were looking for, we want to munge the pointer to
+- // point at the first element's pointer section.
+- ptr += tag->structRef.dataSize.get();
+ KJ_REQUIRE(tag->structRef.ptrCount.get() > ZERO * POINTERS,
+ "Expected a pointer list, but got a list of data-only structs.") {
+ goto useDefault;
+Index: c++/src/capnp/layout.h
+===================================================================
+--- c++.orig/src/capnp/layout.h
++++ c++/src/capnp/layout.h
+@@ -1235,8 +1235,12 @@ inline Void ListReader::getDataElement<V
+ }
+
+ inline PointerReader ListReader::getPointerElement(ElementCount index) const {
++ // If the list elements have data sections we need to skip those. Note that for pointers to be
++ // present at all (which already must be true if we get here), then `structDataSize` must be a
++ // whole number of words, so we don't have to worry about unaligned reads here.
++ auto offset = structDataSize / BITS_PER_BYTE;
+ return PointerReader(segment, capTable, reinterpret_cast<const WirePointer*>(
+- ptr + upgradeBound<uint64_t>(index) * step / BITS_PER_BYTE), nestingLimit);
++ ptr + offset + upgradeBound<uint64_t>(index) * step / BITS_PER_BYTE), nestingLimit);
+ }
+
+ // -------------------------------------------------------------------
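The reasoning in the commit message can be illustrated outside Cap'n Proto: keep the list reader's base pointer unadjusted and apply the element's data-section size only when a pointer field is actually read, so the base pointer means the same thing for every accessor. The sketch below uses a deliberately simplified, hypothetical element layout; it is not Cap'n Proto's wire format or API.

// Hypothetical layout for illustration; not Cap'n Proto's wire format.
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

struct Element {
    uint64_t data[2];      // "data section": two words
    uint64_t pointers[1];  // "pointer section": one word
};

struct ListReader {
    const uint8_t *base;    // start of the first element, never pre-adjusted
    std::size_t step;       // bytes per element
    std::size_t data_size;  // bytes of data section per element

    // Offset applied at access time, so `base` keeps one consistent meaning
    // whether callers read data words or pointer words.
    const uint64_t *pointer_element(std::size_t index) const {
        return reinterpret_cast<const uint64_t *>(base + index * step + data_size);
    }
    uint64_t data_element(std::size_t index, std::size_t word) const {
        return *reinterpret_cast<const uint64_t *>(
            base + index * step + word * sizeof(uint64_t));
    }
};

int main() {
    std::vector<Element> elems(3);
    for (std::size_t i = 0; i < elems.size(); ++i) {
        elems[i].data[0] = 100 + i;
        elems[i].pointers[0] = 200 + i;
    }
    ListReader r{reinterpret_cast<const uint8_t *>(elems.data()),
                 sizeof(Element), offsetof(Element, pointers)};
    for (std::size_t i = 0; i < elems.size(); ++i)
        std::printf("data=%llu ptr=%llu\n",
                    (unsigned long long)r.data_element(i, 0),
                    (unsigned long long)*r.pointer_element(i));
    return 0;
}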
diff --git a/meta-openembedded/meta-oe/recipes-devtools/flatbuffers/flatbuffers_1.12.0.bb b/meta-openembedded/meta-oe/recipes-devtools/flatbuffers/flatbuffers_1.12.0.bb
index 859d6a0b05..c4f3594f36 100644
--- a/meta-openembedded/meta-oe/recipes-devtools/flatbuffers/flatbuffers_1.12.0.bb
+++ b/meta-openembedded/meta-oe/recipes-devtools/flatbuffers/flatbuffers_1.12.0.bb
@@ -24,12 +24,17 @@ BUILD_CXXFLAGS += "-std=c++11 -fPIC"
# BUILD_TYPE=Release is required, otherwise flatc is not installed
EXTRA_OECMAKE += "\
-DCMAKE_BUILD_TYPE=Release \
- -DFLATBUFFERS_BUILD_TESTS=OFF \
+ -DFLATBUFFERS_BUILD_TESTS=OFF \
-DFLATBUFFERS_BUILD_SHAREDLIB=ON \
"
inherit cmake
+rm_flatc_cmaketarget_for_target() {
+ rm -f "${SYSROOT_DESTDIR}/${libdir}/cmake/flatbuffers/FlatcTargets.cmake"
+}
+SYSROOT_PREPROCESS_FUNCS:class-target += "rm_flatc_cmaketarget_for_target"
+
S = "${WORKDIR}/git"
FILES_${PN}-compiler = "${bindir}"
diff --git a/meta-openembedded/meta-oe/recipes-devtools/nodejs/nodejs/CVE-2022-32212.patch b/meta-openembedded/meta-oe/recipes-devtools/nodejs/nodejs/CVE-2022-32212.patch
new file mode 100644
index 0000000000..f7b4b61f47
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-devtools/nodejs/nodejs/CVE-2022-32212.patch
@@ -0,0 +1,133 @@
+commit 48c5aa5cab718d04473fa2761d532657c84b8131
+Author: Tobias Nießen <tniessen@tnie.de>
+Date: Fri May 27 21:18:49 2022 +0000
+
+ src: fix IPv4 validation in inspector_socket
+
+ Co-authored-by: RafaelGSS <rafael.nunu@hotmail.com>
+ PR-URL: https://github.com/nodejs-private/node-private/pull/320
+ Backport-PR-URL: https://github.com/nodejs-private/node-private/pull/325
+ Reviewed-By: Matteo Collina <matteo.collina@gmail.com>
+ Reviewed-By: RafaelGSS <rafael.nunu@hotmail.com>
+ CVE-ID: CVE-2022-32212
+
+CVE: CVE-2022-32212
+Upstream-Status: Backport [https://sources.debian.org/src/nodejs/12.22.12~dfsg-1~deb11u3/debian/patches/cve-2022-32212.patch]
+Comment: No hunks refreshed
+Signed-off-by: Poonam Jadhav <Poonam.Jadhav@kpit.com>
+
+Index: nodejs-12.22.12~dfsg/src/inspector_socket.cc
+===================================================================
+--- nodejs-12.22.12~dfsg.orig/src/inspector_socket.cc
++++ nodejs-12.22.12~dfsg/src/inspector_socket.cc
+@@ -168,14 +168,22 @@ static std::string TrimPort(const std::s
+ static bool IsIPAddress(const std::string& host) {
+ if (host.length() >= 4 && host.front() == '[' && host.back() == ']')
+ return true;
+- int quads = 0;
++ uint_fast16_t accum = 0;
++ uint_fast8_t quads = 0;
++ bool empty = true;
++ auto endOctet = [&accum, &quads, &empty](bool final = false) {
++ return !empty && accum <= 0xff && ++quads <= 4 && final == (quads == 4) &&
++ (empty = true) && !(accum = 0);
++ };
+ for (char c : host) {
+- if (c == '.')
+- quads++;
+- else if (!isdigit(c))
++ if (isdigit(c)) {
++ if ((accum = (accum * 10) + (c - '0')) > 0xff) return false;
++ empty = false;
++ } else if (c != '.' || !endOctet()) {
+ return false;
++ }
+ }
+- return quads == 3;
++ return endOctet(true);
+ }
+
+ // Constants for hybi-10 frame format.
+Index: nodejs-12.22.12~dfsg/test/cctest/test_inspector_socket.cc
+===================================================================
+--- nodejs-12.22.12~dfsg.orig/test/cctest/test_inspector_socket.cc
++++ nodejs-12.22.12~dfsg/test/cctest/test_inspector_socket.cc
+@@ -851,4 +851,78 @@ TEST_F(InspectorSocketTest, HostCheckedF
+ expect_failure_no_delegate(UPGRADE_REQUEST);
+ }
+
++TEST_F(InspectorSocketTest, HostIPChecked) {
++ const std::string INVALID_HOST_IP_REQUEST = "GET /json HTTP/1.1\r\n"
++ "Host: 10.0.2.555:9229\r\n\r\n";
++ send_in_chunks(INVALID_HOST_IP_REQUEST.c_str(),
++ INVALID_HOST_IP_REQUEST.length());
++ expect_handshake_failure();
++}
++
++TEST_F(InspectorSocketTest, HostNegativeIPChecked) {
++ const std::string INVALID_HOST_IP_REQUEST = "GET /json HTTP/1.1\r\n"
++ "Host: 10.0.-23.255:9229\r\n\r\n";
++ send_in_chunks(INVALID_HOST_IP_REQUEST.c_str(),
++ INVALID_HOST_IP_REQUEST.length());
++ expect_handshake_failure();
++}
++
++TEST_F(InspectorSocketTest, HostIpOctetOutOfIntRangeChecked) {
++ const std::string INVALID_HOST_IP_REQUEST =
++ "GET /json HTTP/1.1\r\n"
++ "Host: 127.0.0.4294967296:9229\r\n\r\n";
++ send_in_chunks(INVALID_HOST_IP_REQUEST.c_str(),
++ INVALID_HOST_IP_REQUEST.length());
++ expect_handshake_failure();
++}
++
++TEST_F(InspectorSocketTest, HostIpOctetFarOutOfIntRangeChecked) {
++ const std::string INVALID_HOST_IP_REQUEST =
++ "GET /json HTTP/1.1\r\n"
++ "Host: 127.0.0.18446744073709552000:9229\r\n\r\n";
++ send_in_chunks(INVALID_HOST_IP_REQUEST.c_str(),
++ INVALID_HOST_IP_REQUEST.length());
++ expect_handshake_failure();
++}
++
++TEST_F(InspectorSocketTest, HostIpEmptyOctetStartChecked) {
++ const std::string INVALID_HOST_IP_REQUEST = "GET /json HTTP/1.1\r\n"
++ "Host: .0.0.1:9229\r\n\r\n";
++ send_in_chunks(INVALID_HOST_IP_REQUEST.c_str(),
++ INVALID_HOST_IP_REQUEST.length());
++ expect_handshake_failure();
++}
++
++TEST_F(InspectorSocketTest, HostIpEmptyOctetMidChecked) {
++ const std::string INVALID_HOST_IP_REQUEST = "GET /json HTTP/1.1\r\n"
++ "Host: 127..0.1:9229\r\n\r\n";
++ send_in_chunks(INVALID_HOST_IP_REQUEST.c_str(),
++ INVALID_HOST_IP_REQUEST.length());
++ expect_handshake_failure();
++}
++
++TEST_F(InspectorSocketTest, HostIpEmptyOctetEndChecked) {
++ const std::string INVALID_HOST_IP_REQUEST = "GET /json HTTP/1.1\r\n"
++ "Host: 127.0.0.:9229\r\n\r\n";
++ send_in_chunks(INVALID_HOST_IP_REQUEST.c_str(),
++ INVALID_HOST_IP_REQUEST.length());
++ expect_handshake_failure();
++}
++
++TEST_F(InspectorSocketTest, HostIpTooFewOctetsChecked) {
++ const std::string INVALID_HOST_IP_REQUEST = "GET /json HTTP/1.1\r\n"
++ "Host: 127.0.1:9229\r\n\r\n";
++ send_in_chunks(INVALID_HOST_IP_REQUEST.c_str(),
++ INVALID_HOST_IP_REQUEST.length());
++ expect_handshake_failure();
++}
++
++TEST_F(InspectorSocketTest, HostIpTooManyOctetsChecked) {
++ const std::string INVALID_HOST_IP_REQUEST = "GET /json HTTP/1.1\r\n"
++ "Host: 127.0.0.0.1:9229\r\n\r\n";
++ send_in_chunks(INVALID_HOST_IP_REQUEST.c_str(),
++ INVALID_HOST_IP_REQUEST.length());
++ expect_handshake_failure();
++}
++
+ } // anonymous namespace
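For experimenting with the validation logic outside the inspector test harness, here is a standalone sketch in the same spirit as the patched IsIPAddress(): accumulate each octet, reject anything above 255, and require exactly four non-empty octets. It is not the Node.js implementation itself, and, like this intermediate fix, it still accepts leading zeros, which the later CVE-2022-43548 change tightens further.

// Standalone sketch of strict dotted-quad validation; hypothetical helper name.
#include <cassert>
#include <cctype>
#include <string>

static bool is_strict_ipv4(const std::string &host) {
    unsigned accum = 0;  // value of the octet currently being parsed
    int quads = 0;       // number of completed octets
    bool empty = true;   // no digit seen yet in the current octet

    auto end_octet = [&](bool final_octet) {
        if (empty || accum > 0xff) return false;
        ++quads;
        if (final_octet != (quads == 4)) return false;  // exactly four octets
        accum = 0;
        empty = true;
        return true;
    };

    for (char c : host) {
        if (std::isdigit(static_cast<unsigned char>(c))) {
            accum = accum * 10 + static_cast<unsigned>(c - '0');
            if (accum > 0xff) return false;  // e.g. "10.0.2.555"
            empty = false;
        } else if (c != '.' || !end_octet(false)) {
            return false;                    // stray character, "..", or a 5th octet
        }
    }
    return end_octet(true);
}

int main() {
    assert(is_strict_ipv4("127.0.0.1"));
    assert(!is_strict_ipv4("10.0.2.555"));
    assert(!is_strict_ipv4("10.0.-23.255"));
    assert(!is_strict_ipv4("127.0.0."));
    assert(!is_strict_ipv4("127.0.1"));
    assert(!is_strict_ipv4("127.0.0.0.1"));
    return 0;
}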
diff --git a/meta-openembedded/meta-oe/recipes-devtools/nodejs/nodejs/CVE-2022-35255.patch b/meta-openembedded/meta-oe/recipes-devtools/nodejs/nodejs/CVE-2022-35255.patch
new file mode 100644
index 0000000000..e9c2e7404a
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-devtools/nodejs/nodejs/CVE-2022-35255.patch
@@ -0,0 +1,237 @@
+Origin: https://github.com/nodejs/node/commit/0c2a5723beff39d1f62daec96b5389da3d427e79
+Reviewed-by: Aron Xu <aron@debian.org>
+Last-Update: 2022-01-05
+Comment:
+ Although WebCrypto is not implemented in the 12.x series, this fix introduces
+ an enhancement to the crypto setup of V8:EntropySource().
+
+commit 0c2a5723beff39d1f62daec96b5389da3d427e79
+Author: Ben Noordhuis <info@bnoordhuis.nl>
+Date: Sun Sep 11 10:48:34 2022 +0200
+
+ crypto: fix weak randomness in WebCrypto keygen
+
+ Commit dae283d96f from August 2020 introduced a call to EntropySource()
+ in SecretKeyGenTraits::DoKeyGen() in src/crypto/crypto_keygen.cc. There
+ are two problems with that:
+
+ 1. It does not check the return value, it assumes EntropySource() always
+ succeeds, but it can (and sometimes will) fail.
+
+ 2. The random data returned by EntropySource() may not be
+ cryptographically strong and therefore not suitable as keying
+ material.
+
+ An example is a freshly booted system or a system without /dev/random or
+ getrandom(2).
+
+ EntropySource() calls out to openssl's RAND_poll() and RAND_bytes() in a
+ best-effort attempt to obtain random data. OpenSSL has a built-in CSPRNG
+ but that can fail to initialize, in which case it's possible either:
+
+ 1. No random data gets written to the output buffer, i.e., the output is
+ unmodified, or
+
+ 2. Weak random data is written. It's theoretically possible for the
+ output to be fully predictable because the CSPRNG starts from a
+ predictable state.
+
+ Replace EntropySource() and CheckEntropy() with new function CSPRNG()
+ that enforces checking of the return value. Abort on startup when the
+ entropy pool fails to initialize because that makes it too easy to
+ compromise the security of the process.
+
+ Refs: https://hackerone.com/bugs?report_id=1690000
+ Refs: https://github.com/nodejs/node/pull/35093
+
+ Reviewed-By: Rafael Gonzaga <rafael.nunu@hotmail.com>
+ Reviewed-By: Tobias Nießen <tniessen@tnie.de>
+ PR-URL: #346
+ Backport-PR-URL: #351
+ CVE-ID: CVE-2022-35255
+
+CVE: CVE-2022-35255
+Upstream-Status: Backport [https://sources.debian.org/src/nodejs/12.22.12~dfsg-1~deb11u3/debian/patches/cve-2022-35255.patch]
+Comment: No hunks refreshed
+Signed-off-by: Poonam Jadhav <Poonam.Jadhav@kpit.com>
+
+Index: nodejs-12.22.12~dfsg/node.gyp
+===================================================================
+--- nodejs-12.22.12~dfsg.orig/node.gyp
++++ nodejs-12.22.12~dfsg/node.gyp
+@@ -743,6 +743,8 @@
+ 'openssl_default_cipher_list%': '',
+ },
+
++ 'cflags': ['-Werror=unused-result'],
++
+ 'defines': [
+ 'NODE_ARCH="<(target_arch)"',
+ 'NODE_PLATFORM="<(OS)"',
+Index: nodejs-12.22.12~dfsg/src/node_crypto.cc
+===================================================================
+--- nodejs-12.22.12~dfsg.orig/src/node_crypto.cc
++++ nodejs-12.22.12~dfsg/src/node_crypto.cc
+@@ -386,48 +386,14 @@ void ThrowCryptoError(Environment* env,
+ env->isolate()->ThrowException(exception);
+ }
+
++MUST_USE_RESULT CSPRNGResult CSPRNG(void* buffer, size_t length) {
++ do {
++ if (1 == RAND_status())
++ if (1 == RAND_bytes(static_cast<unsigned char*>(buffer), length))
++ return {true};
++ } while (1 == RAND_poll());
+
+-// Ensure that OpenSSL has enough entropy (at least 256 bits) for its PRNG.
+-// The entropy pool starts out empty and needs to fill up before the PRNG
+-// can be used securely. Once the pool is filled, it never dries up again;
+-// its contents is stirred and reused when necessary.
+-//
+-// OpenSSL normally fills the pool automatically but not when someone starts
+-// generating random numbers before the pool is full: in that case OpenSSL
+-// keeps lowering the entropy estimate to thwart attackers trying to guess
+-// the initial state of the PRNG.
+-//
+-// When that happens, we will have to wait until enough entropy is available.
+-// That should normally never take longer than a few milliseconds.
+-//
+-// OpenSSL draws from /dev/random and /dev/urandom. While /dev/random may
+-// block pending "true" randomness, /dev/urandom is a CSPRNG that doesn't
+-// block under normal circumstances.
+-//
+-// The only time when /dev/urandom may conceivably block is right after boot,
+-// when the whole system is still low on entropy. That's not something we can
+-// do anything about.
+-inline void CheckEntropy() {
+- for (;;) {
+- int status = RAND_status();
+- CHECK_GE(status, 0); // Cannot fail.
+- if (status != 0)
+- break;
+-
+- // Give up, RAND_poll() not supported.
+- if (RAND_poll() == 0)
+- break;
+- }
+-}
+-
+-
+-bool EntropySource(unsigned char* buffer, size_t length) {
+- // Ensure that OpenSSL's PRNG is properly seeded.
+- CheckEntropy();
+- // RAND_bytes() can return 0 to indicate that the entropy data is not truly
+- // random. That's okay, it's still better than V8's stock source of entropy,
+- // which is /dev/urandom on UNIX platforms and the current time on Windows.
+- return RAND_bytes(buffer, length) != -1;
++ return {false};
+ }
+
+ void SecureContext::Initialize(Environment* env, Local<Object> target) {
+@@ -649,9 +615,9 @@ void SecureContext::Init(const FunctionC
+ // OpenSSL 1.1.0 changed the ticket key size, but the OpenSSL 1.0.x size was
+ // exposed in the public API. To retain compatibility, install a callback
+ // which restores the old algorithm.
+- if (RAND_bytes(sc->ticket_key_name_, sizeof(sc->ticket_key_name_)) <= 0 ||
+- RAND_bytes(sc->ticket_key_hmac_, sizeof(sc->ticket_key_hmac_)) <= 0 ||
+- RAND_bytes(sc->ticket_key_aes_, sizeof(sc->ticket_key_aes_)) <= 0) {
++ if (CSPRNG(sc->ticket_key_name_, sizeof(sc->ticket_key_name_)).is_err() ||
++ CSPRNG(sc->ticket_key_hmac_, sizeof(sc->ticket_key_hmac_)).is_err() ||
++ CSPRNG(sc->ticket_key_aes_, sizeof(sc->ticket_key_aes_)).is_err()) {
+ return env->ThrowError("Error generating ticket keys");
+ }
+ SSL_CTX_set_tlsext_ticket_key_cb(sc->ctx_.get(), TicketCompatibilityCallback);
+@@ -1643,7 +1609,7 @@ int SecureContext::TicketCompatibilityCa
+
+ if (enc) {
+ memcpy(name, sc->ticket_key_name_, sizeof(sc->ticket_key_name_));
+- if (RAND_bytes(iv, 16) <= 0 ||
++ if (CSPRNG(iv, 16).is_err() ||
+ EVP_EncryptInit_ex(ectx, EVP_aes_128_cbc(), nullptr,
+ sc->ticket_key_aes_, iv) <= 0 ||
+ HMAC_Init_ex(hctx, sc->ticket_key_hmac_, sizeof(sc->ticket_key_hmac_),
+@@ -5867,8 +5833,7 @@ struct RandomBytesJob : public CryptoJob
+ : CryptoJob(env), rc(Nothing<int>()) {}
+
+ inline void DoThreadPoolWork() override {
+- CheckEntropy(); // Ensure that OpenSSL's PRNG is properly seeded.
+- rc = Just(RAND_bytes(data, size));
++ rc = Just(int(CSPRNG(data, size).is_ok()));
+ if (0 == rc.FromJust()) errors.Capture();
+ }
+
+@@ -6318,8 +6283,8 @@ class GenerateKeyPairJob : public Crypto
+ }
+
+ inline bool GenerateKey() {
+- // Make sure that the CSPRNG is properly seeded so the results are secure.
+- CheckEntropy();
++ // Make sure that the CSPRNG is properly seeded.
++ CHECK(CSPRNG(nullptr, 0).is_ok());
+
+ // Create the key generation context.
+ EVPKeyCtxPointer ctx = config_->Setup();
+Index: nodejs-12.22.12~dfsg/src/node_crypto.h
+===================================================================
+--- nodejs-12.22.12~dfsg.orig/src/node_crypto.h
++++ nodejs-12.22.12~dfsg/src/node_crypto.h
+@@ -840,7 +840,19 @@ class ECDH final : public BaseObject {
+ const EC_GROUP* group_;
+ };
+
+-bool EntropySource(unsigned char* buffer, size_t length);
++struct CSPRNGResult {
++ const bool ok;
++ MUST_USE_RESULT bool is_ok() const { return ok; }
++ MUST_USE_RESULT bool is_err() const { return !ok; }
++};
++
++// Either succeeds with exactly |length| bytes of cryptographically
++// strong pseudo-random data, or fails. This function may block.
++// Don't assume anything about the contents of |buffer| on error.
++// As a special case, |length == 0| can be used to check if the CSPRNG
++// is properly seeded without consuming entropy.
++MUST_USE_RESULT CSPRNGResult CSPRNG(void* buffer, size_t length);
++
+ #ifndef OPENSSL_NO_ENGINE
+ void SetEngine(const v8::FunctionCallbackInfo<v8::Value>& args);
+ #endif // !OPENSSL_NO_ENGINE
+Index: nodejs-12.22.12~dfsg/src/inspector_io.cc
+===================================================================
+--- nodejs-12.22.12~dfsg.orig/src/inspector_io.cc
++++ nodejs-12.22.12~dfsg/src/inspector_io.cc
+@@ -46,8 +46,7 @@ std::string ScriptPath(uv_loop_t* loop,
+ // Used ver 4 - with numbers
+ std::string GenerateID() {
+ uint16_t buffer[8];
+- CHECK(crypto::EntropySource(reinterpret_cast<unsigned char*>(buffer),
+- sizeof(buffer)));
++ CHECK(crypto::CSPRNG(buffer, sizeof(buffer)).is_ok());
+
+ char uuid[256];
+ snprintf(uuid, sizeof(uuid), "%04x%04x-%04x-%04x-%04x-%04x%04x%04x",
+Index: nodejs-12.22.12~dfsg/src/node.cc
+===================================================================
+--- nodejs-12.22.12~dfsg.orig/src/node.cc
++++ nodejs-12.22.12~dfsg/src/node.cc
+@@ -969,9 +969,17 @@ InitializationResult InitializeOncePerPr
+ // the random source is properly initialized first.
+ OPENSSL_init();
+ #endif // NODE_FIPS_MODE
+- // V8 on Windows doesn't have a good source of entropy. Seed it from
+- // OpenSSL's pool.
+- V8::SetEntropySource(crypto::EntropySource);
++ // Ensure CSPRNG is properly seeded.
++ CHECK(crypto::CSPRNG(nullptr, 0).is_ok());
++
++ V8::SetEntropySource([](unsigned char* buffer, size_t length) {
++ // V8 falls back to very weak entropy when this function fails
++ // and /dev/urandom isn't available. That wouldn't be so bad if
++ // the entropy was only used for Math.random() but it's also used for
++ // hash table and address space layout randomization. Better to abort.
++ CHECK(crypto::CSPRNG(buffer, length).is_ok());
++ return true;
++ });
+ #endif // HAVE_OPENSSL
+
+ per_process::v8_platform.Initialize(
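A condensed, compilable sketch of the CSPRNG() pattern this patch introduces is shown below: wrap RAND_status()/RAND_bytes()/RAND_poll() so that the result cannot be silently ignored, and bail out early when the pool cannot be seeded. It assumes an OpenSSL development environment and uses [[nodiscard]] in place of the tree's MUST_USE_RESULT macro; it is a sketch of the idea, not Node's node_crypto.cc.

// Sketch only; assumes OpenSSL headers/libs are available (link with -lcrypto).
#include <openssl/rand.h>

#include <cstddef>
#include <cstdio>
#include <cstdlib>

struct CSPRNGResult {
    bool ok;
    [[nodiscard]] bool is_ok() const { return ok; }
    [[nodiscard]] bool is_err() const { return !ok; }
};

[[nodiscard]] static CSPRNGResult CSPRNG(void *buffer, std::size_t length) {
    do {
        // Only draw bytes once the pool reports it is seeded, and only report
        // success if RAND_bytes() itself succeeds.
        if (RAND_status() == 1 &&
            RAND_bytes(static_cast<unsigned char *>(buffer),
                       static_cast<int>(length)) == 1)
            return {true};
    } while (RAND_poll() == 1);  // try to reseed; give up if that fails too
    return {false};
}

int main() {
    // length == 0 doubles as an "is the pool seeded?" probe, as in the patch.
    if (CSPRNG(nullptr, 0).is_err()) {
        std::fprintf(stderr, "CSPRNG failed to seed; aborting\n");
        std::abort();
    }
    unsigned char key[32];
    if (CSPRNG(key, sizeof(key)).is_err()) {
        std::fprintf(stderr, "could not generate key material\n");
        return 1;
    }
    std::printf("generated %zu random bytes\n", sizeof(key));
    return 0;
}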
diff --git a/meta-openembedded/meta-oe/recipes-devtools/nodejs/nodejs/CVE-2022-43548.patch b/meta-openembedded/meta-oe/recipes-devtools/nodejs/nodejs/CVE-2022-43548.patch
new file mode 100644
index 0000000000..54da1fba99
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-devtools/nodejs/nodejs/CVE-2022-43548.patch
@@ -0,0 +1,214 @@
+commit 2b433af094fb79cf80f086038b7f36342cb6826f
+Author: Tobias Nießen <tniessen@tnie.de>
+Date: Sun Sep 25 12:34:05 2022 +0000
+
+ inspector: harden IP address validation again
+
+ Use inet_pton() to parse IP addresses, which restricts IP addresses
+ to a small number of well-defined formats. In particular, octal and
+ hexadecimal number formats are not allowed, and neither are leading
+ zeros. Also explicitly reject 0.0.0.0/8 and ::/128 as non-routable.
+
+ Refs: https://hackerone.com/reports/1710652
+ CVE-ID: CVE-2022-43548
+ PR-URL: https://github.com/nodejs-private/node-private/pull/354
+ Reviewed-by: Michael Dawson <midawson@redhat.com>
+ Reviewed-by: Rafael Gonzaga <rafael.nunu@hotmail.com>
+ Reviewed-by: Rich Trott <rtrott@gmail.com>
+
+CVE: CVE-2022-43548
+Upstream-Status: Backport [https://sources.debian.org/src/nodejs/12.22.12~dfsg-1~deb11u3/debian/patches/cve-2022-43548.patch]
+Comment: No hunks refreshed
+Signed-off-by: Poonam Jadhav <Poonam.Jadhav@kpit.com>
+
+Index: nodejs-12.22.12~dfsg/src/inspector_socket.cc
+===================================================================
+--- nodejs-12.22.12~dfsg.orig/src/inspector_socket.cc
++++ nodejs-12.22.12~dfsg/src/inspector_socket.cc
+@@ -10,6 +10,7 @@
+
+ #include "openssl/sha.h" // Sha-1 hash
+
++#include <algorithm>
+ #include <cstring>
+ #include <map>
+
+@@ -166,25 +167,71 @@ static std::string TrimPort(const std::s
+ }
+
+ static bool IsIPAddress(const std::string& host) {
+- if (host.length() >= 4 && host.front() == '[' && host.back() == ']')
++ // TODO(tniessen): add CVEs to the following bullet points
++ // To avoid DNS rebinding attacks, we are aware of the following requirements:
++ // * the host name must be an IP address,
++ // * the IP address must be routable, and
++ // * the IP address must be formatted unambiguously.
++
++ // The logic below assumes that the string is null-terminated, so ensure that
++ // we did not somehow end up with null characters within the string.
++ if (host.find('\0') != std::string::npos) return false;
++
++ // All IPv6 addresses must be enclosed in square brackets, and anything
++ // enclosed in square brackets must be an IPv6 address.
++ if (host.length() >= 4 && host.front() == '[' && host.back() == ']') {
++ // INET6_ADDRSTRLEN is the maximum length of the dual format (including the
++ // terminating null character), which is the longest possible representation
++ // of an IPv6 address: xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:ddd.ddd.ddd.ddd
++ if (host.length() - 2 >= INET6_ADDRSTRLEN) return false;
++
++ // Annoyingly, libuv's implementation of inet_pton() deviates from other
++ // implementations of the function in that it allows '%' in IPv6 addresses.
++ if (host.find('%') != std::string::npos) return false;
++
++ // Parse the IPv6 address to ensure it is syntactically valid.
++ char ipv6_str[INET6_ADDRSTRLEN];
++ std::copy(host.begin() + 1, host.end() - 1, ipv6_str);
++ ipv6_str[host.length()] = '\0';
++ unsigned char ipv6[sizeof(struct in6_addr)];
++ if (uv_inet_pton(AF_INET6, ipv6_str, ipv6) != 0) return false;
++
++ // The only non-routable IPv6 address is ::/128. It should not be necessary
++ // to explicitly reject it because it will still be enclosed in square
++ // brackets and not even macOS should make DNS requests in that case, but
++ // history has taught us that we cannot be careful enough.
++ // Note that RFC 4291 defines both "IPv4-Compatible IPv6 Addresses" and
++ // "IPv4-Mapped IPv6 Addresses", which means that there are IPv6 addresses
++ // (other than ::/128) that represent non-routable IPv4 addresses. However,
++ // this translation assumes that the host is interpreted as an IPv6 address
++ // in the first place, at which point DNS rebinding should not be an issue.
++ if (std::all_of(ipv6, ipv6 + sizeof(ipv6), [](auto b) { return b == 0; })) {
++ return false;
++ }
++
++ // It is a syntactically valid and routable IPv6 address enclosed in square
++ // brackets. No client should be able to misinterpret this.
+ return true;
+- uint_fast16_t accum = 0;
+- uint_fast8_t quads = 0;
+- bool empty = true;
+- auto endOctet = [&accum, &quads, &empty](bool final = false) {
+- return !empty && accum <= 0xff && ++quads <= 4 && final == (quads == 4) &&
+- (empty = true) && !(accum = 0);
+- };
+- for (char c : host) {
+- if (isdigit(c)) {
+- if ((accum = (accum * 10) + (c - '0')) > 0xff) return false;
+- empty = false;
+- } else if (c != '.' || !endOctet()) {
+- return false;
+- }
+- }
+- return endOctet(true);
+-}
++ }
++
++ // Anything not enclosed in square brackets must be an IPv4 address. It is
++ // important here that inet_pton() accepts only the so-called dotted-decimal
++ // notation, which is a strict subset of the so-called numbers-and-dots
++ // notation that is allowed by inet_aton() and inet_addr(). This subset does
++ // not allow hexadecimal or octal number formats.
++ unsigned char ipv4[sizeof(struct in_addr)];
++ if (uv_inet_pton(AF_INET, host.c_str(), ipv4) != 0) return false;
++
++ // The only strictly non-routable IPv4 address is 0.0.0.0, and macOS will make
++ // DNS requests for this IP address, so we need to explicitly reject it. In
++ // fact, we can safely reject all of 0.0.0.0/8 (see Section 3.2 of RFC 791 and
++ // Section 3.2.1.3 of RFC 1122).
++ // Note that inet_pton() stores the IPv4 address in network byte order.
++ if (ipv4[0] == 0) return false;
++
++ // It is a routable IPv4 address in dotted-decimal notation.
++ return true;
++ }
+
+ // Constants for hybi-10 frame format.
+
+Index: nodejs-12.22.12~dfsg/test/cctest/test_inspector_socket.cc
+===================================================================
+--- nodejs-12.22.12~dfsg.orig/test/cctest/test_inspector_socket.cc
++++ nodejs-12.22.12~dfsg/test/cctest/test_inspector_socket.cc
+@@ -925,4 +925,84 @@ TEST_F(InspectorSocketTest, HostIpTooMan
+ expect_handshake_failure();
+ }
+
++TEST_F(InspectorSocketTest, HostIpInvalidOctalOctetStartChecked) {
++ const std::string INVALID_HOST_IP_REQUEST = "GET /json HTTP/1.1\r\n"
++ "Host: 08.1.1.1:9229\r\n\r\n";
++ send_in_chunks(INVALID_HOST_IP_REQUEST.c_str(),
++ INVALID_HOST_IP_REQUEST.length());
++ expect_handshake_failure();
++}
++
++TEST_F(InspectorSocketTest, HostIpInvalidOctalOctetMidChecked) {
++ const std::string INVALID_HOST_IP_REQUEST = "GET /json HTTP/1.1\r\n"
++ "Host: 1.09.1.1:9229\r\n\r\n";
++ send_in_chunks(INVALID_HOST_IP_REQUEST.c_str(),
++ INVALID_HOST_IP_REQUEST.length());
++ expect_handshake_failure();
++}
++
++TEST_F(InspectorSocketTest, HostIpInvalidOctalOctetEndChecked) {
++ const std::string INVALID_HOST_IP_REQUEST = "GET /json HTTP/1.1\r\n"
++ "Host: 1.1.1.009:9229\r\n\r\n";
++ send_in_chunks(INVALID_HOST_IP_REQUEST.c_str(),
++ INVALID_HOST_IP_REQUEST.length());
++ expect_handshake_failure();
++}
++
++TEST_F(InspectorSocketTest, HostIpLeadingZeroStartChecked) {
++ const std::string INVALID_HOST_IP_REQUEST = "GET /json HTTP/1.1\r\n"
++ "Host: 01.1.1.1:9229\r\n\r\n";
++ send_in_chunks(INVALID_HOST_IP_REQUEST.c_str(),
++ INVALID_HOST_IP_REQUEST.length());
++ expect_handshake_failure();
++}
++
++TEST_F(InspectorSocketTest, HostIpLeadingZeroMidChecked) {
++ const std::string INVALID_HOST_IP_REQUEST = "GET /json HTTP/1.1\r\n"
++ "Host: 1.1.001.1:9229\r\n\r\n";
++ send_in_chunks(INVALID_HOST_IP_REQUEST.c_str(),
++ INVALID_HOST_IP_REQUEST.length());
++ expect_handshake_failure();
++}
++
++TEST_F(InspectorSocketTest, HostIpLeadingZeroEndChecked) {
++ const std::string INVALID_HOST_IP_REQUEST = "GET /json HTTP/1.1\r\n"
++ "Host: 1.1.1.01:9229\r\n\r\n";
++ send_in_chunks(INVALID_HOST_IP_REQUEST.c_str(),
++ INVALID_HOST_IP_REQUEST.length());
++ expect_handshake_failure();
++}
++
++TEST_F(InspectorSocketTest, HostIPv6NonRoutable) {
++ const std::string INVALID_HOST_IP_REQUEST = "GET /json HTTP/1.1\r\n"
++ "Host: [::]:9229\r\n\r\n";
++ send_in_chunks(INVALID_HOST_IP_REQUEST.c_str(),
++ INVALID_HOST_IP_REQUEST.length());
++ expect_handshake_failure();
++}
++
++TEST_F(InspectorSocketTest, HostIPv6NonRoutableDual) {
++ const std::string INVALID_HOST_IP_REQUEST = "GET /json HTTP/1.1\r\n"
++ "Host: [::0.0.0.0]:9229\r\n\r\n";
++ send_in_chunks(INVALID_HOST_IP_REQUEST.c_str(),
++ INVALID_HOST_IP_REQUEST.length());
++ expect_handshake_failure();
++}
++
++TEST_F(InspectorSocketTest, HostIPv4InSquareBrackets) {
++ const std::string INVALID_HOST_IP_REQUEST = "GET /json HTTP/1.1\r\n"
++ "Host: [127.0.0.1]:9229\r\n\r\n";
++ send_in_chunks(INVALID_HOST_IP_REQUEST.c_str(),
++ INVALID_HOST_IP_REQUEST.length());
++ expect_handshake_failure();
++}
++
++TEST_F(InspectorSocketTest, HostIPv6InvalidAbbreviation) {
++ const std::string INVALID_HOST_IP_REQUEST = "GET /json HTTP/1.1\r\n"
++ "Host: [:::1]:9229\r\n\r\n";
++ send_in_chunks(INVALID_HOST_IP_REQUEST.c_str(),
++ INVALID_HOST_IP_REQUEST.length());
++ expect_handshake_failure();
++}
++
+ } // anonymous namespace
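The same "parse strictly, then reject non-routable addresses" approach can be sketched with POSIX inet_pton() in place of libuv's uv_inet_pton(); the helper names below are illustrative and this is not the inspector code itself.

// Illustrative helpers only; assumes a POSIX system providing <arpa/inet.h>.
#include <arpa/inet.h>

#include <algorithm>
#include <string>

// Accept only a routable IPv4 address in strict dotted-decimal notation
// (inet_pton() rejects octal/hex forms, leading zeros, and short forms).
static bool is_routable_ipv4(const std::string &host) {
    struct in_addr ipv4;
    if (inet_pton(AF_INET, host.c_str(), &ipv4) != 1)
        return false;
    const unsigned char *b = reinterpret_cast<const unsigned char *>(&ipv4);
    return b[0] != 0;  // reject 0.0.0.0/8; address is in network byte order
}

// Accept only "[<valid IPv6>]" where the address is not ::/128.
static bool is_routable_bracketed_ipv6(const std::string &host) {
    if (host.size() < 4 || host.front() != '[' || host.back() != ']')
        return false;
    std::string inner = host.substr(1, host.size() - 2);
    if (inner.size() >= INET6_ADDRSTRLEN || inner.find('%') != std::string::npos)
        return false;
    struct in6_addr ipv6;
    if (inet_pton(AF_INET6, inner.c_str(), &ipv6) != 1)
        return false;
    const unsigned char *b = reinterpret_cast<const unsigned char *>(&ipv6);
    return !std::all_of(b, b + sizeof(ipv6),
                        [](unsigned char x) { return x == 0; });
}

int main() {
    return (is_routable_ipv4("127.0.0.1") &&
            !is_routable_ipv4("01.1.1.1") &&
            !is_routable_ipv4("0.0.0.1") &&
            is_routable_bracketed_ipv6("[::1]") &&
            !is_routable_bracketed_ipv6("[::]"))
               ? 0
               : 1;
}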
diff --git a/meta-openembedded/meta-oe/recipes-devtools/nodejs/nodejs/CVE-llhttp.patch b/meta-openembedded/meta-oe/recipes-devtools/nodejs/nodejs/CVE-llhttp.patch
new file mode 100644
index 0000000000..790cf92d2e
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-devtools/nodejs/nodejs/CVE-llhttp.patch
@@ -0,0 +1,4348 @@
+Reviewed-by: Aron Xu <aron@debian.org>
+Last-Update: 2023-01-05
+Comment:
+ This patch updates the embedded copy of llhttp from version 2.1.4 to 2.1.6,
+ which is upstream's actual fix for CVE-2022-32213, CVE-2022-32214, CVE-2022-32215,
+ CVE-2022-35256.
+ Test cases are ported to use mustCall() in place of the later-introduced
+ mustSucceed(), to avoid pulling in too much dependent new test code.
+References:
+ * https://github.com/nodejs/node/commit/da0fda0fe81d372e24c0cb11aec37534985708dd
+ * https://github.com/nodejs/node/commit/a9f1146b8827855e342834458a71f2367346ace0
+
+CVE: CVE-2022-32213 CVE-2022-32214 CVE-2022-32215 CVE-2022-35256
+Upstream-Status: Backport [https://sources.debian.org/src/nodejs/12.22.12~dfsg-1~deb11u3/debian/patches/cve-llhttp.patch]
+Comment: No hunks refreshed
+Signed-off-by: Poonam Jadhav <Poonam.Jadhav@kpit.com>
+
+--- nodejs-12.22.12~dfsg/deps/llhttp/include/llhttp.h
++++ nodejs-12.22.12~dfsg/deps/llhttp/include/llhttp.h
+@@ -3,7 +3,7 @@
+
+ #define LLHTTP_VERSION_MAJOR 2
+ #define LLHTTP_VERSION_MINOR 1
+-#define LLHTTP_VERSION_PATCH 4
++#define LLHTTP_VERSION_PATCH 6
+
+ #ifndef LLHTTP_STRICT_MODE
+ # define LLHTTP_STRICT_MODE 0
+@@ -58,6 +58,7 @@
+ HPE_OK = 0,
+ HPE_INTERNAL = 1,
+ HPE_STRICT = 2,
++ HPE_CR_EXPECTED = 25,
+ HPE_LF_EXPECTED = 3,
+ HPE_UNEXPECTED_CONTENT_LENGTH = 4,
+ HPE_CLOSED_CONNECTION = 5,
+@@ -78,7 +79,7 @@
+ HPE_CB_CHUNK_COMPLETE = 20,
+ HPE_PAUSED = 21,
+ HPE_PAUSED_UPGRADE = 22,
+- HPE_USER = 23
++ HPE_USER = 24
+ };
+ typedef enum llhttp_errno llhttp_errno_t;
+
+@@ -153,6 +154,7 @@
+ XX(0, OK, OK) \
+ XX(1, INTERNAL, INTERNAL) \
+ XX(2, STRICT, STRICT) \
++ XX(25, CR_EXPECTED, CR_EXPECTED) \
+ XX(3, LF_EXPECTED, LF_EXPECTED) \
+ XX(4, UNEXPECTED_CONTENT_LENGTH, UNEXPECTED_CONTENT_LENGTH) \
+ XX(5, CLOSED_CONNECTION, CLOSED_CONNECTION) \
+@@ -173,7 +175,7 @@
+ XX(20, CB_CHUNK_COMPLETE, CB_CHUNK_COMPLETE) \
+ XX(21, PAUSED, PAUSED) \
+ XX(22, PAUSED_UPGRADE, PAUSED_UPGRADE) \
+- XX(23, USER, USER) \
++ XX(24, USER, USER) \
+
+
+ #define HTTP_METHOD_MAP(XX) \
+--- nodejs-12.22.12~dfsg/deps/llhttp/src/llhttp.c
++++ nodejs-12.22.12~dfsg/deps/llhttp/src/llhttp.c
+@@ -325,6 +325,7 @@
+ s_n_llhttp__internal__n_header_value_lws,
+ s_n_llhttp__internal__n_header_value_almost_done,
+ s_n_llhttp__internal__n_header_value_lenient,
++ s_n_llhttp__internal__n_error_25,
+ s_n_llhttp__internal__n_header_value_otherwise,
+ s_n_llhttp__internal__n_header_value_connection_token,
+ s_n_llhttp__internal__n_header_value_connection_ws,
+@@ -332,14 +333,16 @@
+ s_n_llhttp__internal__n_header_value_connection_2,
+ s_n_llhttp__internal__n_header_value_connection_3,
+ s_n_llhttp__internal__n_header_value_connection,
+- s_n_llhttp__internal__n_error_26,
+ s_n_llhttp__internal__n_error_27,
++ s_n_llhttp__internal__n_error_28,
+ s_n_llhttp__internal__n_header_value_content_length_ws,
+ s_n_llhttp__internal__n_header_value_content_length,
+- s_n_llhttp__internal__n_header_value_te_chunked_last,
++ s_n_llhttp__internal__n_error_30,
++ s_n_llhttp__internal__n_error_29,
+ s_n_llhttp__internal__n_header_value_te_token_ows,
+ s_n_llhttp__internal__n_header_value,
+ s_n_llhttp__internal__n_header_value_te_token,
++ s_n_llhttp__internal__n_header_value_te_chunked_last,
+ s_n_llhttp__internal__n_header_value_te_chunked,
+ s_n_llhttp__internal__n_span_start_llhttp__on_header_value_1,
+ s_n_llhttp__internal__n_header_value_discard_ws,
+@@ -734,7 +737,7 @@
+ return 0;
+ }
+
+-int llhttp__internal__c_update_header_state_2(
++int llhttp__internal__c_update_header_state_3(
+ llhttp__internal_t* state,
+ const unsigned char* p,
+ const unsigned char* endp) {
+@@ -742,7 +745,7 @@
+ return 0;
+ }
+
+-int llhttp__internal__c_update_header_state_4(
++int llhttp__internal__c_update_header_state_1(
+ llhttp__internal_t* state,
+ const unsigned char* p,
+ const unsigned char* endp) {
+@@ -750,7 +753,7 @@
+ return 0;
+ }
+
+-int llhttp__internal__c_update_header_state_5(
++int llhttp__internal__c_update_header_state_6(
+ llhttp__internal_t* state,
+ const unsigned char* p,
+ const unsigned char* endp) {
+@@ -758,7 +761,7 @@
+ return 0;
+ }
+
+-int llhttp__internal__c_update_header_state_6(
++int llhttp__internal__c_update_header_state_7(
+ llhttp__internal_t* state,
+ const unsigned char* p,
+ const unsigned char* endp) {
+@@ -766,7 +769,7 @@
+ return 0;
+ }
+
+-int llhttp__internal__c_test_flags_6(
++int llhttp__internal__c_test_flags_7(
+ llhttp__internal_t* state,
+ const unsigned char* p,
+ const unsigned char* endp) {
+@@ -807,6 +810,13 @@
+ return 0;
+ }
+
++int llhttp__internal__c_test_flags_8(
++ llhttp__internal_t* state,
++ const unsigned char* p,
++ const unsigned char* endp) {
++ return (state->flags & 8) == 8;
++}
++
+ int llhttp__internal__c_or_flags_16(
+ llhttp__internal_t* state,
+ const unsigned char* p,
+@@ -823,7 +833,7 @@
+ return 0;
+ }
+
+-int llhttp__internal__c_update_header_state_7(
++int llhttp__internal__c_update_header_state_8(
+ llhttp__internal_t* state,
+ const unsigned char* p,
+ const unsigned char* endp) {
+@@ -831,7 +841,7 @@
+ return 0;
+ }
+
+-int llhttp__internal__c_or_flags_17(
++int llhttp__internal__c_or_flags_18(
+ llhttp__internal_t* state,
+ const unsigned char* p,
+ const unsigned char* endp) {
+@@ -1554,7 +1564,7 @@
+ goto s_n_llhttp__internal__n_header_value_discard_lws;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_22;
++ goto s_n_llhttp__internal__n_error_23;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -1567,13 +1577,13 @@
+ }
+ switch (*p) {
+ case 9: {
+- goto s_n_llhttp__internal__n_span_start_llhttp__on_header_value_1;
++ goto s_n_llhttp__internal__n_invoke_load_header_state_3;
+ }
+ case ' ': {
+- goto s_n_llhttp__internal__n_span_start_llhttp__on_header_value_1;
++ goto s_n_llhttp__internal__n_invoke_load_header_state_3;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_invoke_load_header_state_3;
++ goto s_n_llhttp__internal__n_invoke_load_header_state_4;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -1590,7 +1600,7 @@
+ goto s_n_llhttp__internal__n_header_value_lws;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_23;
++ goto s_n_llhttp__internal__n_error_24;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -1603,10 +1613,10 @@
+ }
+ switch (*p) {
+ case 10: {
+- goto s_n_llhttp__internal__n_span_end_llhttp__on_header_value_1;
++ goto s_n_llhttp__internal__n_span_end_llhttp__on_header_value_3;
+ }
+ case 13: {
+- goto s_n_llhttp__internal__n_span_end_llhttp__on_header_value_3;
++ goto s_n_llhttp__internal__n_span_end_llhttp__on_header_value_4;
+ }
+ default: {
+ p++;
+@@ -1616,20 +1626,27 @@
+ /* UNREACHABLE */;
+ abort();
+ }
++ case s_n_llhttp__internal__n_error_25:
++ s_n_llhttp__internal__n_error_25: {
++ state->error = 0xa;
++ state->reason = "Invalid header value char";
++ state->error_pos = (const char*) p;
++ state->_current = (void*) (intptr_t) s_error;
++ return s_error;
++ /* UNREACHABLE */;
++ abort();
++ }
+ case s_n_llhttp__internal__n_header_value_otherwise:
+ s_n_llhttp__internal__n_header_value_otherwise: {
+ if (p == endp) {
+ return s_n_llhttp__internal__n_header_value_otherwise;
+ }
+ switch (*p) {
+- case 10: {
+- goto s_n_llhttp__internal__n_span_end_llhttp__on_header_value_1;
+- }
+ case 13: {
+- goto s_n_llhttp__internal__n_span_end_llhttp__on_header_value_2;
++ goto s_n_llhttp__internal__n_span_end_llhttp__on_header_value_1;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_invoke_test_flags_5;
++ goto s_n_llhttp__internal__n_invoke_test_flags_6;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -1692,10 +1709,10 @@
+ }
+ case ',': {
+ p++;
+- goto s_n_llhttp__internal__n_invoke_load_header_state_4;
++ goto s_n_llhttp__internal__n_invoke_load_header_state_5;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_invoke_update_header_state_4;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_5;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -1713,7 +1730,7 @@
+ switch (match_seq.status) {
+ case kMatchComplete: {
+ p++;
+- goto s_n_llhttp__internal__n_invoke_update_header_state_2;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_3;
+ }
+ case kMatchPause: {
+ return s_n_llhttp__internal__n_header_value_connection_1;
+@@ -1737,7 +1754,7 @@
+ switch (match_seq.status) {
+ case kMatchComplete: {
+ p++;
+- goto s_n_llhttp__internal__n_invoke_update_header_state_5;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_6;
+ }
+ case kMatchPause: {
+ return s_n_llhttp__internal__n_header_value_connection_2;
+@@ -1761,7 +1778,7 @@
+ switch (match_seq.status) {
+ case kMatchComplete: {
+ p++;
+- goto s_n_llhttp__internal__n_invoke_update_header_state_6;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_7;
+ }
+ case kMatchPause: {
+ return s_n_llhttp__internal__n_header_value_connection_3;
+@@ -1806,8 +1823,8 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- case s_n_llhttp__internal__n_error_26:
+- s_n_llhttp__internal__n_error_26: {
++ case s_n_llhttp__internal__n_error_27:
++ s_n_llhttp__internal__n_error_27: {
+ state->error = 0xb;
+ state->reason = "Content-Length overflow";
+ state->error_pos = (const char*) p;
+@@ -1816,8 +1833,8 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- case s_n_llhttp__internal__n_error_27:
+- s_n_llhttp__internal__n_error_27: {
++ case s_n_llhttp__internal__n_error_28:
++ s_n_llhttp__internal__n_error_28: {
+ state->error = 0xb;
+ state->reason = "Invalid character in Content-Length";
+ state->error_pos = (const char*) p;
+@@ -1843,7 +1860,7 @@
+ goto s_n_llhttp__internal__n_header_value_content_length_ws;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_span_end_llhttp__on_header_value_5;
++ goto s_n_llhttp__internal__n_span_end_llhttp__on_header_value_6;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -1912,26 +1929,23 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- case s_n_llhttp__internal__n_header_value_te_chunked_last:
+- s_n_llhttp__internal__n_header_value_te_chunked_last: {
+- if (p == endp) {
+- return s_n_llhttp__internal__n_header_value_te_chunked_last;
+- }
+- switch (*p) {
+- case 10: {
+- goto s_n_llhttp__internal__n_invoke_update_header_state_7;
+- }
+- case 13: {
+- goto s_n_llhttp__internal__n_invoke_update_header_state_7;
+- }
+- case ' ': {
+- p++;
+- goto s_n_llhttp__internal__n_header_value_te_chunked_last;
+- }
+- default: {
+- goto s_n_llhttp__internal__n_header_value_te_chunked;
+- }
+- }
++ case s_n_llhttp__internal__n_error_30:
++ s_n_llhttp__internal__n_error_30: {
++ state->error = 0xf;
++ state->reason = "Invalid `Transfer-Encoding` header value";
++ state->error_pos = (const char*) p;
++ state->_current = (void*) (intptr_t) s_error;
++ return s_error;
++ /* UNREACHABLE */;
++ abort();
++ }
++ case s_n_llhttp__internal__n_error_29:
++ s_n_llhttp__internal__n_error_29: {
++ state->error = 0xf;
++ state->reason = "Invalid `Transfer-Encoding` header value";
++ state->error_pos = (const char*) p;
++ state->_current = (void*) (intptr_t) s_error;
++ return s_error;
+ /* UNREACHABLE */;
+ abort();
+ }
+@@ -2048,8 +2062,34 @@
+ goto s_n_llhttp__internal__n_header_value_te_token_ows;
+ }
+ default: {
++ goto s_n_llhttp__internal__n_invoke_update_header_state_9;
++ }
++ }
++ /* UNREACHABLE */;
++ abort();
++ }
++ case s_n_llhttp__internal__n_header_value_te_chunked_last:
++ s_n_llhttp__internal__n_header_value_te_chunked_last: {
++ if (p == endp) {
++ return s_n_llhttp__internal__n_header_value_te_chunked_last;
++ }
++ switch (*p) {
++ case 10: {
++ goto s_n_llhttp__internal__n_invoke_update_header_state_8;
++ }
++ case 13: {
+ goto s_n_llhttp__internal__n_invoke_update_header_state_8;
+ }
++ case ' ': {
++ p++;
++ goto s_n_llhttp__internal__n_header_value_te_chunked_last;
++ }
++ case ',': {
++ goto s_n_llhttp__internal__n_invoke_load_type_1;
++ }
++ default: {
++ goto s_n_llhttp__internal__n_header_value_te_token;
++ }
+ }
+ /* UNREACHABLE */;
+ abort();
+@@ -2101,7 +2141,7 @@
+ }
+ case 10: {
+ p++;
+- goto s_n_llhttp__internal__n_header_value_discard_lws;
++ goto s_n_llhttp__internal__n_invoke_test_flags_5;
+ }
+ case 13: {
+ p++;
+@@ -2128,7 +2168,7 @@
+ goto s_n_llhttp__internal__n_span_end_llhttp__on_header_field_2;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_28;
++ goto s_n_llhttp__internal__n_error_31;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -2218,7 +2258,7 @@
+ goto s_n_llhttp__internal__n_span_end_llhttp__on_header_field_1;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_invoke_update_header_state_9;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_10;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -2243,7 +2283,7 @@
+ return s_n_llhttp__internal__n_header_field_3;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_invoke_update_header_state_10;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_11;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -2268,7 +2308,7 @@
+ return s_n_llhttp__internal__n_header_field_4;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_invoke_update_header_state_10;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_11;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -2289,7 +2329,7 @@
+ goto s_n_llhttp__internal__n_header_field_4;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_invoke_update_header_state_10;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_11;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -2313,7 +2353,7 @@
+ return s_n_llhttp__internal__n_header_field_1;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_invoke_update_header_state_10;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_11;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -2338,7 +2378,7 @@
+ return s_n_llhttp__internal__n_header_field_5;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_invoke_update_header_state_10;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_11;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -2363,7 +2403,7 @@
+ return s_n_llhttp__internal__n_header_field_6;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_invoke_update_header_state_10;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_11;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -2388,7 +2428,7 @@
+ return s_n_llhttp__internal__n_header_field_7;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_invoke_update_header_state_10;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_11;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -2417,7 +2457,7 @@
+ goto s_n_llhttp__internal__n_header_field_7;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_invoke_update_header_state_10;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_11;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -2508,7 +2548,7 @@
+ goto s_n_llhttp__internal__n_url_to_http_09;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_29;
++ goto s_n_llhttp__internal__n_error_32;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -2533,7 +2573,7 @@
+ goto s_n_llhttp__internal__n_url_skip_lf_to_http09_1;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_29;
++ goto s_n_llhttp__internal__n_error_32;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -2550,7 +2590,7 @@
+ goto s_n_llhttp__internal__n_header_field_start;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_30;
++ goto s_n_llhttp__internal__n_error_33;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -2571,7 +2611,7 @@
+ goto s_n_llhttp__internal__n_req_http_end_1;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_30;
++ goto s_n_llhttp__internal__n_error_33;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -2634,7 +2674,7 @@
+ goto s_n_llhttp__internal__n_invoke_store_http_minor;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_31;
++ goto s_n_llhttp__internal__n_error_34;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -2651,7 +2691,7 @@
+ goto s_n_llhttp__internal__n_req_http_minor;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_32;
++ goto s_n_llhttp__internal__n_error_35;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -2714,7 +2754,7 @@
+ goto s_n_llhttp__internal__n_invoke_store_http_major;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_33;
++ goto s_n_llhttp__internal__n_error_36;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -2738,7 +2778,7 @@
+ return s_n_llhttp__internal__n_req_http_start_1;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_35;
++ goto s_n_llhttp__internal__n_error_38;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -2762,7 +2802,7 @@
+ return s_n_llhttp__internal__n_req_http_start_2;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_35;
++ goto s_n_llhttp__internal__n_error_38;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -2787,7 +2827,7 @@
+ goto s_n_llhttp__internal__n_req_http_start_2;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_35;
++ goto s_n_llhttp__internal__n_error_38;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -2878,7 +2918,7 @@
+ goto s_n_llhttp__internal__n_url_fragment;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_36;
++ goto s_n_llhttp__internal__n_error_39;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -2939,7 +2979,7 @@
+ goto s_n_llhttp__internal__n_span_end_stub_query_3;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_37;
++ goto s_n_llhttp__internal__n_error_40;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -2977,7 +3017,7 @@
+ goto s_n_llhttp__internal__n_url_query;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_38;
++ goto s_n_llhttp__internal__n_error_41;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3102,10 +3142,10 @@
+ }
+ case 8: {
+ p++;
+- goto s_n_llhttp__internal__n_error_39;
++ goto s_n_llhttp__internal__n_error_42;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_40;
++ goto s_n_llhttp__internal__n_error_43;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3164,7 +3204,7 @@
+ goto s_n_llhttp__internal__n_url_server_with_at;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_41;
++ goto s_n_llhttp__internal__n_error_44;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3181,7 +3221,7 @@
+ goto s_n_llhttp__internal__n_url_server;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_43;
++ goto s_n_llhttp__internal__n_error_46;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3199,7 +3239,7 @@
+ }
+ case 10: {
+ p++;
+- goto s_n_llhttp__internal__n_error_42;
++ goto s_n_llhttp__internal__n_error_45;
+ }
+ case 12: {
+ p++;
+@@ -3207,18 +3247,18 @@
+ }
+ case 13: {
+ p++;
+- goto s_n_llhttp__internal__n_error_42;
++ goto s_n_llhttp__internal__n_error_45;
+ }
+ case ' ': {
+ p++;
+- goto s_n_llhttp__internal__n_error_42;
++ goto s_n_llhttp__internal__n_error_45;
+ }
+ case '/': {
+ p++;
+ goto s_n_llhttp__internal__n_url_schema_delim_1;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_43;
++ goto s_n_llhttp__internal__n_error_46;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3264,7 +3304,7 @@
+ }
+ case 2: {
+ p++;
+- goto s_n_llhttp__internal__n_error_42;
++ goto s_n_llhttp__internal__n_error_45;
+ }
+ case 3: {
+ goto s_n_llhttp__internal__n_span_end_stub_schema;
+@@ -3274,7 +3314,7 @@
+ goto s_n_llhttp__internal__n_url_schema;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_44;
++ goto s_n_llhttp__internal__n_error_47;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3310,7 +3350,7 @@
+ }
+ case 2: {
+ p++;
+- goto s_n_llhttp__internal__n_error_42;
++ goto s_n_llhttp__internal__n_error_45;
+ }
+ case 3: {
+ goto s_n_llhttp__internal__n_span_start_stub_path_2;
+@@ -3319,7 +3359,7 @@
+ goto s_n_llhttp__internal__n_url_schema;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_45;
++ goto s_n_llhttp__internal__n_error_48;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3417,7 +3457,7 @@
+ goto s_n_llhttp__internal__n_req_spaces_before_url;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_46;
++ goto s_n_llhttp__internal__n_error_49;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3442,7 +3482,7 @@
+ return s_n_llhttp__internal__n_start_req_1;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3467,7 +3507,7 @@
+ return s_n_llhttp__internal__n_start_req_2;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3492,7 +3532,7 @@
+ return s_n_llhttp__internal__n_start_req_4;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3517,7 +3557,7 @@
+ return s_n_llhttp__internal__n_start_req_6;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3535,7 +3575,7 @@
+ goto s_n_llhttp__internal__n_invoke_store_method_1;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3556,7 +3596,7 @@
+ goto s_n_llhttp__internal__n_start_req_7;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3577,7 +3617,7 @@
+ goto s_n_llhttp__internal__n_start_req_5;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3602,7 +3642,7 @@
+ return s_n_llhttp__internal__n_start_req_8;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3627,7 +3667,7 @@
+ return s_n_llhttp__internal__n_start_req_9;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3652,7 +3692,7 @@
+ return s_n_llhttp__internal__n_start_req_10;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3677,7 +3717,7 @@
+ return s_n_llhttp__internal__n_start_req_12;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3702,7 +3742,7 @@
+ return s_n_llhttp__internal__n_start_req_13;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3723,7 +3763,7 @@
+ goto s_n_llhttp__internal__n_start_req_13;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3748,7 +3788,7 @@
+ return s_n_llhttp__internal__n_start_req_15;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3773,7 +3813,7 @@
+ return s_n_llhttp__internal__n_start_req_16;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3798,7 +3838,7 @@
+ return s_n_llhttp__internal__n_start_req_18;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3823,7 +3863,7 @@
+ return s_n_llhttp__internal__n_start_req_20;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3841,7 +3881,7 @@
+ goto s_n_llhttp__internal__n_invoke_store_method_1;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3862,7 +3902,7 @@
+ goto s_n_llhttp__internal__n_start_req_21;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3883,7 +3923,7 @@
+ goto s_n_llhttp__internal__n_start_req_19;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3908,7 +3948,7 @@
+ return s_n_llhttp__internal__n_start_req_22;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3937,7 +3977,7 @@
+ goto s_n_llhttp__internal__n_start_req_22;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3962,7 +4002,7 @@
+ return s_n_llhttp__internal__n_start_req_23;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -3987,7 +4027,7 @@
+ return s_n_llhttp__internal__n_start_req_24;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4012,7 +4052,7 @@
+ return s_n_llhttp__internal__n_start_req_26;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4037,7 +4077,7 @@
+ return s_n_llhttp__internal__n_start_req_27;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4062,7 +4102,7 @@
+ return s_n_llhttp__internal__n_start_req_31;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4087,7 +4127,7 @@
+ return s_n_llhttp__internal__n_start_req_32;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4108,7 +4148,7 @@
+ goto s_n_llhttp__internal__n_start_req_32;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4125,7 +4165,7 @@
+ goto s_n_llhttp__internal__n_start_req_30;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4147,7 +4187,7 @@
+ goto s_n_llhttp__internal__n_start_req_29;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4172,7 +4212,7 @@
+ return s_n_llhttp__internal__n_start_req_34;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4194,7 +4234,7 @@
+ goto s_n_llhttp__internal__n_invoke_store_method_1;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4223,7 +4263,7 @@
+ goto s_n_llhttp__internal__n_start_req_33;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4248,7 +4288,7 @@
+ return s_n_llhttp__internal__n_start_req_37;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4273,7 +4313,7 @@
+ return s_n_llhttp__internal__n_start_req_38;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4294,7 +4334,7 @@
+ goto s_n_llhttp__internal__n_start_req_38;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4311,7 +4351,7 @@
+ goto s_n_llhttp__internal__n_start_req_36;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4336,7 +4376,7 @@
+ return s_n_llhttp__internal__n_start_req_40;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4361,7 +4401,7 @@
+ return s_n_llhttp__internal__n_start_req_41;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4386,7 +4426,7 @@
+ return s_n_llhttp__internal__n_start_req_42;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4411,7 +4451,7 @@
+ goto s_n_llhttp__internal__n_start_req_42;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4436,7 +4476,7 @@
+ return s_n_llhttp__internal__n_start_req_43;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4461,7 +4501,7 @@
+ return s_n_llhttp__internal__n_start_req_46;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4486,7 +4526,7 @@
+ return s_n_llhttp__internal__n_start_req_48;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4511,7 +4551,7 @@
+ return s_n_llhttp__internal__n_start_req_49;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4532,7 +4572,7 @@
+ goto s_n_llhttp__internal__n_start_req_49;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4557,7 +4597,7 @@
+ return s_n_llhttp__internal__n_start_req_50;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4582,7 +4622,7 @@
+ goto s_n_llhttp__internal__n_start_req_50;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4599,7 +4639,7 @@
+ goto s_n_llhttp__internal__n_start_req_45;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4672,7 +4712,7 @@
+ goto s_n_llhttp__internal__n_start_req_44;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_55;
++ goto s_n_llhttp__internal__n_error_58;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4689,7 +4729,7 @@
+ goto s_n_llhttp__internal__n_header_field_start;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4764,7 +4804,7 @@
+ goto s_n_llhttp__internal__n_res_status_start;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_49;
++ goto s_n_llhttp__internal__n_error_52;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4844,7 +4884,7 @@
+ goto s_n_llhttp__internal__n_invoke_update_status_code;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_50;
++ goto s_n_llhttp__internal__n_error_53;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4907,7 +4947,7 @@
+ goto s_n_llhttp__internal__n_invoke_store_http_minor_1;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_51;
++ goto s_n_llhttp__internal__n_error_54;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4924,7 +4964,7 @@
+ goto s_n_llhttp__internal__n_res_http_minor;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_52;
++ goto s_n_llhttp__internal__n_error_55;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -4987,7 +5027,7 @@
+ goto s_n_llhttp__internal__n_invoke_store_http_major_1;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_53;
++ goto s_n_llhttp__internal__n_error_56;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -5011,7 +5051,7 @@
+ return s_n_llhttp__internal__n_start_res;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_56;
++ goto s_n_llhttp__internal__n_error_59;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -5036,7 +5076,7 @@
+ return s_n_llhttp__internal__n_req_or_res_method_2;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_54;
++ goto s_n_llhttp__internal__n_error_57;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -5060,7 +5100,7 @@
+ return s_n_llhttp__internal__n_req_or_res_method_3;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_54;
++ goto s_n_llhttp__internal__n_error_57;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -5081,7 +5121,7 @@
+ goto s_n_llhttp__internal__n_req_or_res_method_3;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_54;
++ goto s_n_llhttp__internal__n_error_57;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -5098,7 +5138,7 @@
+ goto s_n_llhttp__internal__n_req_or_res_method_1;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_54;
++ goto s_n_llhttp__internal__n_error_57;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -5167,7 +5207,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_42: {
++ s_n_llhttp__internal__n_error_45: {
+ state->error = 0x7;
+ state->reason = "Invalid characters in url";
+ state->error_pos = (const char*) p;
+@@ -5655,7 +5695,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_21: {
++ s_n_llhttp__internal__n_error_22: {
+ state->error = 0xb;
+ state->reason = "Empty Content-Length";
+ state->error_pos = (const char*) p;
+@@ -5740,14 +5780,33 @@
+ s_n_llhttp__internal__n_invoke_load_header_state: {
+ switch (llhttp__internal__c_load_header_state(state, p, endp)) {
+ case 2:
+- goto s_n_llhttp__internal__n_error_21;
++ goto s_n_llhttp__internal__n_error_22;
+ default:
+ goto s_n_llhttp__internal__n_invoke_load_header_state_1;
+ }
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_22: {
++ s_n_llhttp__internal__n_error_21: {
++ state->error = 0xa;
++ state->reason = "Invalid header value char";
++ state->error_pos = (const char*) p;
++ state->_current = (void*) (intptr_t) s_error;
++ return s_error;
++ /* UNREACHABLE */;
++ abort();
++ }
++ s_n_llhttp__internal__n_invoke_test_flags_5: {
++ switch (llhttp__internal__c_test_flags_2(state, p, endp)) {
++ case 1:
++ goto s_n_llhttp__internal__n_header_value_discard_lws;
++ default:
++ goto s_n_llhttp__internal__n_error_21;
++ }
++ /* UNREACHABLE */;
++ abort();
++ }
++ s_n_llhttp__internal__n_error_23: {
+ state->error = 0x2;
+ state->reason = "Expected LF after CR";
+ state->error_pos = (const char*) p;
+@@ -5757,6 +5816,24 @@
+ abort();
+ }
+ s_n_llhttp__internal__n_invoke_update_header_state_1: {
++ switch (llhttp__internal__c_update_header_state_1(state, p, endp)) {
++ default:
++ goto s_n_llhttp__internal__n_span_start_llhttp__on_header_value_1;
++ }
++ /* UNREACHABLE */;
++ abort();
++ }
++ s_n_llhttp__internal__n_invoke_load_header_state_3: {
++ switch (llhttp__internal__c_load_header_state(state, p, endp)) {
++ case 8:
++ goto s_n_llhttp__internal__n_invoke_update_header_state_1;
++ default:
++ goto s_n_llhttp__internal__n_span_start_llhttp__on_header_value_1;
++ }
++ /* UNREACHABLE */;
++ abort();
++ }
++ s_n_llhttp__internal__n_invoke_update_header_state_2: {
+ switch (llhttp__internal__c_update_header_state(state, p, endp)) {
+ default:
+ goto s_n_llhttp__internal__n_header_field_start;
+@@ -5767,7 +5844,7 @@
+ s_n_llhttp__internal__n_invoke_or_flags_7: {
+ switch (llhttp__internal__c_or_flags_3(state, p, endp)) {
+ default:
+- goto s_n_llhttp__internal__n_invoke_update_header_state_1;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_2;
+ }
+ /* UNREACHABLE */;
+ abort();
+@@ -5775,7 +5852,7 @@
+ s_n_llhttp__internal__n_invoke_or_flags_8: {
+ switch (llhttp__internal__c_or_flags_4(state, p, endp)) {
+ default:
+- goto s_n_llhttp__internal__n_invoke_update_header_state_1;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_2;
+ }
+ /* UNREACHABLE */;
+ abort();
+@@ -5783,7 +5860,7 @@
+ s_n_llhttp__internal__n_invoke_or_flags_9: {
+ switch (llhttp__internal__c_or_flags_5(state, p, endp)) {
+ default:
+- goto s_n_llhttp__internal__n_invoke_update_header_state_1;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_2;
+ }
+ /* UNREACHABLE */;
+ abort();
+@@ -5796,7 +5873,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_invoke_load_header_state_3: {
++ s_n_llhttp__internal__n_invoke_load_header_state_4: {
+ switch (llhttp__internal__c_load_header_state(state, p, endp)) {
+ case 5:
+ goto s_n_llhttp__internal__n_invoke_or_flags_7;
+@@ -5812,7 +5889,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_23: {
++ s_n_llhttp__internal__n_error_24: {
+ state->error = 0x3;
+ state->reason = "Missing expected LF after header value";
+ state->error_pos = (const char*) p;
+@@ -5830,6 +5907,24 @@
+ err = llhttp__on_header_value(state, start, p);
+ if (err != 0) {
+ state->error = err;
++ state->error_pos = (const char*) (p + 1);
++ state->_current = (void*) (intptr_t) s_n_llhttp__internal__n_header_value_almost_done;
++ return s_error;
++ }
++ p++;
++ goto s_n_llhttp__internal__n_header_value_almost_done;
++ /* UNREACHABLE */;
++ abort();
++ }
++ s_n_llhttp__internal__n_span_end_llhttp__on_header_value_3: {
++ const unsigned char* start;
++ int err;
++
++ start = state->_span_pos0;
++ state->_span_pos0 = NULL;
++ err = llhttp__on_header_value(state, start, p);
++ if (err != 0) {
++ state->error = err;
+ state->error_pos = (const char*) p;
+ state->_current = (void*) (intptr_t) s_n_llhttp__internal__n_header_value_almost_done;
+ return s_error;
+@@ -5838,7 +5933,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_span_end_llhttp__on_header_value_2: {
++ s_n_llhttp__internal__n_span_end_llhttp__on_header_value_4: {
+ const unsigned char* start;
+ int err;
+
+@@ -5856,7 +5951,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_span_end_llhttp__on_header_value_3: {
++ s_n_llhttp__internal__n_span_end_llhttp__on_header_value_2: {
+ const unsigned char* start;
+ int err;
+
+@@ -5865,35 +5960,25 @@
+ err = llhttp__on_header_value(state, start, p);
+ if (err != 0) {
+ state->error = err;
+- state->error_pos = (const char*) (p + 1);
+- state->_current = (void*) (intptr_t) s_n_llhttp__internal__n_header_value_almost_done;
++ state->error_pos = (const char*) p;
++ state->_current = (void*) (intptr_t) s_n_llhttp__internal__n_error_25;
+ return s_error;
+ }
+- p++;
+- goto s_n_llhttp__internal__n_header_value_almost_done;
+- /* UNREACHABLE */;
+- abort();
+- }
+- s_n_llhttp__internal__n_error_24: {
+- state->error = 0xa;
+- state->reason = "Invalid header value char";
+- state->error_pos = (const char*) p;
+- state->_current = (void*) (intptr_t) s_error;
+- return s_error;
++ goto s_n_llhttp__internal__n_error_25;
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_invoke_test_flags_5: {
++ s_n_llhttp__internal__n_invoke_test_flags_6: {
+ switch (llhttp__internal__c_test_flags_2(state, p, endp)) {
+ case 1:
+ goto s_n_llhttp__internal__n_header_value_lenient;
+ default:
+- goto s_n_llhttp__internal__n_error_24;
++ goto s_n_llhttp__internal__n_span_end_llhttp__on_header_value_2;
+ }
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_invoke_update_header_state_3: {
++ s_n_llhttp__internal__n_invoke_update_header_state_4: {
+ switch (llhttp__internal__c_update_header_state(state, p, endp)) {
+ default:
+ goto s_n_llhttp__internal__n_header_value_connection;
+@@ -5904,7 +5989,7 @@
+ s_n_llhttp__internal__n_invoke_or_flags_11: {
+ switch (llhttp__internal__c_or_flags_3(state, p, endp)) {
+ default:
+- goto s_n_llhttp__internal__n_invoke_update_header_state_3;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_4;
+ }
+ /* UNREACHABLE */;
+ abort();
+@@ -5912,7 +5997,7 @@
+ s_n_llhttp__internal__n_invoke_or_flags_12: {
+ switch (llhttp__internal__c_or_flags_4(state, p, endp)) {
+ default:
+- goto s_n_llhttp__internal__n_invoke_update_header_state_3;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_4;
+ }
+ /* UNREACHABLE */;
+ abort();
+@@ -5920,7 +6005,7 @@
+ s_n_llhttp__internal__n_invoke_or_flags_13: {
+ switch (llhttp__internal__c_or_flags_5(state, p, endp)) {
+ default:
+- goto s_n_llhttp__internal__n_invoke_update_header_state_3;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_4;
+ }
+ /* UNREACHABLE */;
+ abort();
+@@ -5933,7 +6018,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_invoke_load_header_state_4: {
++ s_n_llhttp__internal__n_invoke_load_header_state_5: {
+ switch (llhttp__internal__c_load_header_state(state, p, endp)) {
+ case 5:
+ goto s_n_llhttp__internal__n_invoke_or_flags_11;
+@@ -5949,39 +6034,39 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_invoke_update_header_state_4: {
+- switch (llhttp__internal__c_update_header_state_4(state, p, endp)) {
++ s_n_llhttp__internal__n_invoke_update_header_state_5: {
++ switch (llhttp__internal__c_update_header_state_1(state, p, endp)) {
+ default:
+ goto s_n_llhttp__internal__n_header_value_connection_token;
+ }
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_invoke_update_header_state_2: {
+- switch (llhttp__internal__c_update_header_state_2(state, p, endp)) {
++ s_n_llhttp__internal__n_invoke_update_header_state_3: {
++ switch (llhttp__internal__c_update_header_state_3(state, p, endp)) {
+ default:
+ goto s_n_llhttp__internal__n_header_value_connection_ws;
+ }
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_invoke_update_header_state_5: {
+- switch (llhttp__internal__c_update_header_state_5(state, p, endp)) {
++ s_n_llhttp__internal__n_invoke_update_header_state_6: {
++ switch (llhttp__internal__c_update_header_state_6(state, p, endp)) {
+ default:
+ goto s_n_llhttp__internal__n_header_value_connection_ws;
+ }
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_invoke_update_header_state_6: {
+- switch (llhttp__internal__c_update_header_state_6(state, p, endp)) {
++ s_n_llhttp__internal__n_invoke_update_header_state_7: {
++ switch (llhttp__internal__c_update_header_state_7(state, p, endp)) {
+ default:
+ goto s_n_llhttp__internal__n_header_value_connection_ws;
+ }
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_span_end_llhttp__on_header_value_4: {
++ s_n_llhttp__internal__n_span_end_llhttp__on_header_value_5: {
+ const unsigned char* start;
+ int err;
+
+@@ -5991,17 +6076,17 @@
+ if (err != 0) {
+ state->error = err;
+ state->error_pos = (const char*) p;
+- state->_current = (void*) (intptr_t) s_n_llhttp__internal__n_error_26;
++ state->_current = (void*) (intptr_t) s_n_llhttp__internal__n_error_27;
+ return s_error;
+ }
+- goto s_n_llhttp__internal__n_error_26;
++ goto s_n_llhttp__internal__n_error_27;
+ /* UNREACHABLE */;
+ abort();
+ }
+ s_n_llhttp__internal__n_invoke_mul_add_content_length_1: {
+ switch (llhttp__internal__c_mul_add_content_length_1(state, p, endp, match)) {
+ case 1:
+- goto s_n_llhttp__internal__n_span_end_llhttp__on_header_value_4;
++ goto s_n_llhttp__internal__n_span_end_llhttp__on_header_value_5;
+ default:
+ goto s_n_llhttp__internal__n_header_value_content_length;
+ }
+@@ -6016,7 +6101,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_span_end_llhttp__on_header_value_5: {
++ s_n_llhttp__internal__n_span_end_llhttp__on_header_value_6: {
+ const unsigned char* start;
+ int err;
+
+@@ -6026,14 +6111,14 @@
+ if (err != 0) {
+ state->error = err;
+ state->error_pos = (const char*) p;
+- state->_current = (void*) (intptr_t) s_n_llhttp__internal__n_error_27;
++ state->_current = (void*) (intptr_t) s_n_llhttp__internal__n_error_28;
+ return s_error;
+ }
+- goto s_n_llhttp__internal__n_error_27;
++ goto s_n_llhttp__internal__n_error_28;
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_25: {
++ s_n_llhttp__internal__n_error_26: {
+ state->error = 0x4;
+ state->reason = "Duplicate Content-Length";
+ state->error_pos = (const char*) p;
+@@ -6042,26 +6127,82 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_invoke_test_flags_6: {
+- switch (llhttp__internal__c_test_flags_6(state, p, endp)) {
++ s_n_llhttp__internal__n_invoke_test_flags_7: {
++ switch (llhttp__internal__c_test_flags_7(state, p, endp)) {
+ case 0:
+ goto s_n_llhttp__internal__n_header_value_content_length;
+ default:
+- goto s_n_llhttp__internal__n_error_25;
++ goto s_n_llhttp__internal__n_error_26;
+ }
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_invoke_update_header_state_7: {
+- switch (llhttp__internal__c_update_header_state_7(state, p, endp)) {
++ s_n_llhttp__internal__n_span_end_llhttp__on_header_value_8: {
++ const unsigned char* start;
++ int err;
++
++ start = state->_span_pos0;
++ state->_span_pos0 = NULL;
++ err = llhttp__on_header_value(state, start, p);
++ if (err != 0) {
++ state->error = err;
++ state->error_pos = (const char*) (p + 1);
++ state->_current = (void*) (intptr_t) s_n_llhttp__internal__n_error_30;
++ return s_error;
++ }
++ p++;
++ goto s_n_llhttp__internal__n_error_30;
++ /* UNREACHABLE */;
++ abort();
++ }
++ s_n_llhttp__internal__n_invoke_update_header_state_8: {
++ switch (llhttp__internal__c_update_header_state_8(state, p, endp)) {
+ default:
+ goto s_n_llhttp__internal__n_header_value_otherwise;
+ }
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_invoke_update_header_state_8: {
+- switch (llhttp__internal__c_update_header_state_4(state, p, endp)) {
++ s_n_llhttp__internal__n_span_end_llhttp__on_header_value_7: {
++ const unsigned char* start;
++ int err;
++
++ start = state->_span_pos0;
++ state->_span_pos0 = NULL;
++ err = llhttp__on_header_value(state, start, p);
++ if (err != 0) {
++ state->error = err;
++ state->error_pos = (const char*) (p + 1);
++ state->_current = (void*) (intptr_t) s_n_llhttp__internal__n_error_29;
++ return s_error;
++ }
++ p++;
++ goto s_n_llhttp__internal__n_error_29;
++ /* UNREACHABLE */;
++ abort();
++ }
++ s_n_llhttp__internal__n_invoke_test_flags_9: {
++ switch (llhttp__internal__c_test_flags_2(state, p, endp)) {
++ case 0:
++ goto s_n_llhttp__internal__n_span_end_llhttp__on_header_value_7;
++ default:
++ goto s_n_llhttp__internal__n_header_value_te_chunked;
++ }
++ /* UNREACHABLE */;
++ abort();
++ }
++ s_n_llhttp__internal__n_invoke_load_type_1: {
++ switch (llhttp__internal__c_load_type(state, p, endp)) {
++ case 1:
++ goto s_n_llhttp__internal__n_invoke_test_flags_9;
++ default:
++ goto s_n_llhttp__internal__n_header_value_te_chunked;
++ }
++ /* UNREACHABLE */;
++ abort();
++ }
++ s_n_llhttp__internal__n_invoke_update_header_state_9: {
++ switch (llhttp__internal__c_update_header_state_1(state, p, endp)) {
+ default:
+ goto s_n_llhttp__internal__n_header_value;
+ }
+@@ -6076,6 +6217,34 @@
+ /* UNREACHABLE */;
+ abort();
+ }
++ s_n_llhttp__internal__n_invoke_or_flags_17: {
++ switch (llhttp__internal__c_or_flags_16(state, p, endp)) {
++ default:
++ goto s_n_llhttp__internal__n_invoke_and_flags;
++ }
++ /* UNREACHABLE */;
++ abort();
++ }
++ s_n_llhttp__internal__n_invoke_test_flags_10: {
++ switch (llhttp__internal__c_test_flags_2(state, p, endp)) {
++ case 0:
++ goto s_n_llhttp__internal__n_span_end_llhttp__on_header_value_8;
++ default:
++ goto s_n_llhttp__internal__n_invoke_or_flags_17;
++ }
++ /* UNREACHABLE */;
++ abort();
++ }
++ s_n_llhttp__internal__n_invoke_load_type_2: {
++ switch (llhttp__internal__c_load_type(state, p, endp)) {
++ case 1:
++ goto s_n_llhttp__internal__n_invoke_test_flags_10;
++ default:
++ goto s_n_llhttp__internal__n_invoke_or_flags_17;
++ }
++ /* UNREACHABLE */;
++ abort();
++ }
+ s_n_llhttp__internal__n_invoke_or_flags_16: {
+ switch (llhttp__internal__c_or_flags_16(state, p, endp)) {
+ default:
+@@ -6084,10 +6253,20 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_invoke_or_flags_17: {
+- switch (llhttp__internal__c_or_flags_17(state, p, endp)) {
++ s_n_llhttp__internal__n_invoke_test_flags_8: {
++ switch (llhttp__internal__c_test_flags_8(state, p, endp)) {
++ case 1:
++ goto s_n_llhttp__internal__n_invoke_load_type_2;
+ default:
+- goto s_n_llhttp__internal__n_invoke_update_header_state_8;
++ goto s_n_llhttp__internal__n_invoke_or_flags_16;
++ }
++ /* UNREACHABLE */;
++ abort();
++ }
++ s_n_llhttp__internal__n_invoke_or_flags_18: {
++ switch (llhttp__internal__c_or_flags_18(state, p, endp)) {
++ default:
++ goto s_n_llhttp__internal__n_invoke_update_header_state_9;
+ }
+ /* UNREACHABLE */;
+ abort();
+@@ -6097,11 +6276,11 @@
+ case 1:
+ goto s_n_llhttp__internal__n_header_value_connection;
+ case 2:
+- goto s_n_llhttp__internal__n_invoke_test_flags_6;
++ goto s_n_llhttp__internal__n_invoke_test_flags_7;
+ case 3:
+- goto s_n_llhttp__internal__n_invoke_or_flags_16;
++ goto s_n_llhttp__internal__n_invoke_test_flags_8;
+ case 4:
+- goto s_n_llhttp__internal__n_invoke_or_flags_17;
++ goto s_n_llhttp__internal__n_invoke_or_flags_18;
+ default:
+ goto s_n_llhttp__internal__n_header_value;
+ }
+@@ -6144,7 +6323,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_28: {
++ s_n_llhttp__internal__n_error_31: {
+ state->error = 0xa;
+ state->reason = "Invalid header token";
+ state->error_pos = (const char*) p;
+@@ -6153,8 +6332,8 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_invoke_update_header_state_9: {
+- switch (llhttp__internal__c_update_header_state_4(state, p, endp)) {
++ s_n_llhttp__internal__n_invoke_update_header_state_10: {
++ switch (llhttp__internal__c_update_header_state_1(state, p, endp)) {
+ default:
+ goto s_n_llhttp__internal__n_header_field_general;
+ }
+@@ -6169,8 +6348,8 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_invoke_update_header_state_10: {
+- switch (llhttp__internal__c_update_header_state_4(state, p, endp)) {
++ s_n_llhttp__internal__n_invoke_update_header_state_11: {
++ switch (llhttp__internal__c_update_header_state_1(state, p, endp)) {
+ default:
+ goto s_n_llhttp__internal__n_header_field_general;
+ }
+@@ -6210,7 +6389,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_29: {
++ s_n_llhttp__internal__n_error_32: {
+ state->error = 0x7;
+ state->reason = "Expected CRLF";
+ state->error_pos = (const char*) p;
+@@ -6236,7 +6415,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_30: {
++ s_n_llhttp__internal__n_error_33: {
+ state->error = 0x9;
+ state->reason = "Expected CRLF after version";
+ state->error_pos = (const char*) p;
+@@ -6253,7 +6432,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_31: {
++ s_n_llhttp__internal__n_error_34: {
+ state->error = 0x9;
+ state->reason = "Invalid minor version";
+ state->error_pos = (const char*) p;
+@@ -6262,7 +6441,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_32: {
++ s_n_llhttp__internal__n_error_35: {
+ state->error = 0x9;
+ state->reason = "Expected dot";
+ state->error_pos = (const char*) p;
+@@ -6279,7 +6458,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_33: {
++ s_n_llhttp__internal__n_error_36: {
+ state->error = 0x9;
+ state->reason = "Invalid major version";
+ state->error_pos = (const char*) p;
+@@ -6288,7 +6467,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_35: {
++ s_n_llhttp__internal__n_error_38: {
+ state->error = 0x8;
+ state->reason = "Expected HTTP/";
+ state->error_pos = (const char*) p;
+@@ -6297,7 +6476,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_34: {
++ s_n_llhttp__internal__n_error_37: {
+ state->error = 0x8;
+ state->reason = "Expected SOURCE method for ICE/x.x request";
+ state->error_pos = (const char*) p;
+@@ -6309,7 +6488,7 @@
+ s_n_llhttp__internal__n_invoke_is_equal_method_1: {
+ switch (llhttp__internal__c_is_equal_method_1(state, p, endp)) {
+ case 0:
+- goto s_n_llhttp__internal__n_error_34;
++ goto s_n_llhttp__internal__n_error_37;
+ default:
+ goto s_n_llhttp__internal__n_req_http_major;
+ }
+@@ -6384,7 +6563,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_36: {
++ s_n_llhttp__internal__n_error_39: {
+ state->error = 0x7;
+ state->reason = "Invalid char in url fragment start";
+ state->error_pos = (const char*) p;
+@@ -6444,7 +6623,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_37: {
++ s_n_llhttp__internal__n_error_40: {
+ state->error = 0x7;
+ state->reason = "Invalid char in url query";
+ state->error_pos = (const char*) p;
+@@ -6453,7 +6632,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_38: {
++ s_n_llhttp__internal__n_error_41: {
+ state->error = 0x7;
+ state->reason = "Invalid char in url path";
+ state->error_pos = (const char*) p;
+@@ -6564,7 +6743,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_39: {
++ s_n_llhttp__internal__n_error_42: {
+ state->error = 0x7;
+ state->reason = "Double @ in url";
+ state->error_pos = (const char*) p;
+@@ -6573,7 +6752,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_40: {
++ s_n_llhttp__internal__n_error_43: {
+ state->error = 0x7;
+ state->reason = "Unexpected char in url server";
+ state->error_pos = (const char*) p;
+@@ -6582,7 +6761,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_41: {
++ s_n_llhttp__internal__n_error_44: {
+ state->error = 0x7;
+ state->reason = "Unexpected char in url server";
+ state->error_pos = (const char*) p;
+@@ -6591,7 +6770,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_43: {
++ s_n_llhttp__internal__n_error_46: {
+ state->error = 0x7;
+ state->reason = "Unexpected char in url schema";
+ state->error_pos = (const char*) p;
+@@ -6600,7 +6779,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_44: {
++ s_n_llhttp__internal__n_error_47: {
+ state->error = 0x7;
+ state->reason = "Unexpected char in url schema";
+ state->error_pos = (const char*) p;
+@@ -6609,7 +6788,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_45: {
++ s_n_llhttp__internal__n_error_48: {
+ state->error = 0x7;
+ state->reason = "Unexpected start char in url";
+ state->error_pos = (const char*) p;
+@@ -6628,7 +6807,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_46: {
++ s_n_llhttp__internal__n_error_49: {
+ state->error = 0x6;
+ state->reason = "Expected space after method";
+ state->error_pos = (const char*) p;
+@@ -6645,7 +6824,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_55: {
++ s_n_llhttp__internal__n_error_58: {
+ state->error = 0x6;
+ state->reason = "Invalid method encountered";
+ state->error_pos = (const char*) p;
+@@ -6654,7 +6833,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_47: {
++ s_n_llhttp__internal__n_error_50: {
+ state->error = 0xd;
+ state->reason = "Response overflow";
+ state->error_pos = (const char*) p;
+@@ -6666,14 +6845,14 @@
+ s_n_llhttp__internal__n_invoke_mul_add_status_code: {
+ switch (llhttp__internal__c_mul_add_status_code(state, p, endp, match)) {
+ case 1:
+- goto s_n_llhttp__internal__n_error_47;
++ goto s_n_llhttp__internal__n_error_50;
+ default:
+ goto s_n_llhttp__internal__n_res_status_code;
+ }
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_48: {
++ s_n_llhttp__internal__n_error_51: {
+ state->error = 0x2;
+ state->reason = "Expected LF after CR";
+ state->error_pos = (const char*) p;
+@@ -6718,7 +6897,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_49: {
++ s_n_llhttp__internal__n_error_52: {
+ state->error = 0xd;
+ state->reason = "Invalid response status";
+ state->error_pos = (const char*) p;
+@@ -6735,7 +6914,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_50: {
++ s_n_llhttp__internal__n_error_53: {
+ state->error = 0x9;
+ state->reason = "Expected space after version";
+ state->error_pos = (const char*) p;
+@@ -6752,7 +6931,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_51: {
++ s_n_llhttp__internal__n_error_54: {
+ state->error = 0x9;
+ state->reason = "Invalid minor version";
+ state->error_pos = (const char*) p;
+@@ -6761,7 +6940,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_52: {
++ s_n_llhttp__internal__n_error_55: {
+ state->error = 0x9;
+ state->reason = "Expected dot";
+ state->error_pos = (const char*) p;
+@@ -6778,7 +6957,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_53: {
++ s_n_llhttp__internal__n_error_56: {
+ state->error = 0x9;
+ state->reason = "Invalid major version";
+ state->error_pos = (const char*) p;
+@@ -6787,7 +6966,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_56: {
++ s_n_llhttp__internal__n_error_59: {
+ state->error = 0x8;
+ state->reason = "Expected HTTP/";
+ state->error_pos = (const char*) p;
+@@ -6812,7 +6991,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_54: {
++ s_n_llhttp__internal__n_error_57: {
+ state->error = 0x8;
+ state->reason = "Invalid word encountered";
+ state->error_pos = (const char*) p;
+@@ -7244,6 +7423,7 @@
+ s_n_llhttp__internal__n_header_value_lws,
+ s_n_llhttp__internal__n_header_value_almost_done,
+ s_n_llhttp__internal__n_header_value_lenient,
++ s_n_llhttp__internal__n_error_19,
+ s_n_llhttp__internal__n_header_value_otherwise,
+ s_n_llhttp__internal__n_header_value_connection_token,
+ s_n_llhttp__internal__n_header_value_connection_ws,
+@@ -7251,14 +7431,16 @@
+ s_n_llhttp__internal__n_header_value_connection_2,
+ s_n_llhttp__internal__n_header_value_connection_3,
+ s_n_llhttp__internal__n_header_value_connection,
+- s_n_llhttp__internal__n_error_20,
+ s_n_llhttp__internal__n_error_21,
++ s_n_llhttp__internal__n_error_22,
+ s_n_llhttp__internal__n_header_value_content_length_ws,
+ s_n_llhttp__internal__n_header_value_content_length,
+- s_n_llhttp__internal__n_header_value_te_chunked_last,
++ s_n_llhttp__internal__n_error_24,
++ s_n_llhttp__internal__n_error_23,
+ s_n_llhttp__internal__n_header_value_te_token_ows,
+ s_n_llhttp__internal__n_header_value,
+ s_n_llhttp__internal__n_header_value_te_token,
++ s_n_llhttp__internal__n_header_value_te_chunked_last,
+ s_n_llhttp__internal__n_header_value_te_chunked,
+ s_n_llhttp__internal__n_span_start_llhttp__on_header_value_1,
+ s_n_llhttp__internal__n_header_value_discard_ws,
+@@ -7648,7 +7830,7 @@
+ return 0;
+ }
+
+-int llhttp__internal__c_update_header_state_2(
++int llhttp__internal__c_update_header_state_3(
+ llhttp__internal_t* state,
+ const unsigned char* p,
+ const unsigned char* endp) {
+@@ -7656,7 +7838,7 @@
+ return 0;
+ }
+
+-int llhttp__internal__c_update_header_state_4(
++int llhttp__internal__c_update_header_state_1(
+ llhttp__internal_t* state,
+ const unsigned char* p,
+ const unsigned char* endp) {
+@@ -7664,7 +7846,7 @@
+ return 0;
+ }
+
+-int llhttp__internal__c_update_header_state_5(
++int llhttp__internal__c_update_header_state_6(
+ llhttp__internal_t* state,
+ const unsigned char* p,
+ const unsigned char* endp) {
+@@ -7672,7 +7854,7 @@
+ return 0;
+ }
+
+-int llhttp__internal__c_update_header_state_6(
++int llhttp__internal__c_update_header_state_7(
+ llhttp__internal_t* state,
+ const unsigned char* p,
+ const unsigned char* endp) {
+@@ -7680,7 +7862,7 @@
+ return 0;
+ }
+
+-int llhttp__internal__c_test_flags_6(
++int llhttp__internal__c_test_flags_7(
+ llhttp__internal_t* state,
+ const unsigned char* p,
+ const unsigned char* endp) {
+@@ -7721,6 +7903,13 @@
+ return 0;
+ }
+
++int llhttp__internal__c_test_flags_8(
++ llhttp__internal_t* state,
++ const unsigned char* p,
++ const unsigned char* endp) {
++ return (state->flags & 8) == 8;
++}
++
+ int llhttp__internal__c_or_flags_16(
+ llhttp__internal_t* state,
+ const unsigned char* p,
+@@ -7737,7 +7926,7 @@
+ return 0;
+ }
+
+-int llhttp__internal__c_update_header_state_7(
++int llhttp__internal__c_update_header_state_8(
+ llhttp__internal_t* state,
+ const unsigned char* p,
+ const unsigned char* endp) {
+@@ -7745,7 +7934,7 @@
+ return 0;
+ }
+
+-int llhttp__internal__c_or_flags_17(
++int llhttp__internal__c_or_flags_18(
+ llhttp__internal_t* state,
+ const unsigned char* p,
+ const unsigned char* endp) {
+@@ -8432,13 +8621,13 @@
+ }
+ switch (*p) {
+ case 9: {
+- goto s_n_llhttp__internal__n_span_start_llhttp__on_header_value_1;
++ goto s_n_llhttp__internal__n_invoke_load_header_state_3;
+ }
+ case ' ': {
+- goto s_n_llhttp__internal__n_span_start_llhttp__on_header_value_1;
++ goto s_n_llhttp__internal__n_invoke_load_header_state_3;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_invoke_load_header_state_3;
++ goto s_n_llhttp__internal__n_invoke_load_header_state_4;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -8455,7 +8644,7 @@
+ goto s_n_llhttp__internal__n_header_value_lws;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_17;
++ goto s_n_llhttp__internal__n_error_18;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -8468,10 +8657,10 @@
+ }
+ switch (*p) {
+ case 10: {
+- goto s_n_llhttp__internal__n_span_end_llhttp__on_header_value_1;
++ goto s_n_llhttp__internal__n_span_end_llhttp__on_header_value_3;
+ }
+ case 13: {
+- goto s_n_llhttp__internal__n_span_end_llhttp__on_header_value_3;
++ goto s_n_llhttp__internal__n_span_end_llhttp__on_header_value_4;
+ }
+ default: {
+ p++;
+@@ -8481,20 +8670,27 @@
+ /* UNREACHABLE */;
+ abort();
+ }
++ case s_n_llhttp__internal__n_error_19:
++ s_n_llhttp__internal__n_error_19: {
++ state->error = 0xa;
++ state->reason = "Invalid header value char";
++ state->error_pos = (const char*) p;
++ state->_current = (void*) (intptr_t) s_error;
++ return s_error;
++ /* UNREACHABLE */;
++ abort();
++ }
+ case s_n_llhttp__internal__n_header_value_otherwise:
+ s_n_llhttp__internal__n_header_value_otherwise: {
+ if (p == endp) {
+ return s_n_llhttp__internal__n_header_value_otherwise;
+ }
+ switch (*p) {
+- case 10: {
+- goto s_n_llhttp__internal__n_span_end_llhttp__on_header_value_1;
+- }
+ case 13: {
+- goto s_n_llhttp__internal__n_span_end_llhttp__on_header_value_2;
++ goto s_n_llhttp__internal__n_span_end_llhttp__on_header_value_1;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_invoke_test_flags_5;
++ goto s_n_llhttp__internal__n_invoke_test_flags_6;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -8557,10 +8753,10 @@
+ }
+ case ',': {
+ p++;
+- goto s_n_llhttp__internal__n_invoke_load_header_state_4;
++ goto s_n_llhttp__internal__n_invoke_load_header_state_5;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_invoke_update_header_state_4;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_5;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -8578,7 +8774,7 @@
+ switch (match_seq.status) {
+ case kMatchComplete: {
+ p++;
+- goto s_n_llhttp__internal__n_invoke_update_header_state_2;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_3;
+ }
+ case kMatchPause: {
+ return s_n_llhttp__internal__n_header_value_connection_1;
+@@ -8602,7 +8798,7 @@
+ switch (match_seq.status) {
+ case kMatchComplete: {
+ p++;
+- goto s_n_llhttp__internal__n_invoke_update_header_state_5;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_6;
+ }
+ case kMatchPause: {
+ return s_n_llhttp__internal__n_header_value_connection_2;
+@@ -8626,7 +8822,7 @@
+ switch (match_seq.status) {
+ case kMatchComplete: {
+ p++;
+- goto s_n_llhttp__internal__n_invoke_update_header_state_6;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_7;
+ }
+ case kMatchPause: {
+ return s_n_llhttp__internal__n_header_value_connection_3;
+@@ -8671,8 +8867,8 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- case s_n_llhttp__internal__n_error_20:
+- s_n_llhttp__internal__n_error_20: {
++ case s_n_llhttp__internal__n_error_21:
++ s_n_llhttp__internal__n_error_21: {
+ state->error = 0xb;
+ state->reason = "Content-Length overflow";
+ state->error_pos = (const char*) p;
+@@ -8681,8 +8877,8 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- case s_n_llhttp__internal__n_error_21:
+- s_n_llhttp__internal__n_error_21: {
++ case s_n_llhttp__internal__n_error_22:
++ s_n_llhttp__internal__n_error_22: {
+ state->error = 0xb;
+ state->reason = "Invalid character in Content-Length";
+ state->error_pos = (const char*) p;
+@@ -8708,7 +8904,7 @@
+ goto s_n_llhttp__internal__n_header_value_content_length_ws;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_span_end_llhttp__on_header_value_5;
++ goto s_n_llhttp__internal__n_span_end_llhttp__on_header_value_6;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -8777,26 +8973,23 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- case s_n_llhttp__internal__n_header_value_te_chunked_last:
+- s_n_llhttp__internal__n_header_value_te_chunked_last: {
+- if (p == endp) {
+- return s_n_llhttp__internal__n_header_value_te_chunked_last;
+- }
+- switch (*p) {
+- case 10: {
+- goto s_n_llhttp__internal__n_invoke_update_header_state_7;
+- }
+- case 13: {
+- goto s_n_llhttp__internal__n_invoke_update_header_state_7;
+- }
+- case ' ': {
+- p++;
+- goto s_n_llhttp__internal__n_header_value_te_chunked_last;
+- }
+- default: {
+- goto s_n_llhttp__internal__n_header_value_te_chunked;
+- }
+- }
++ case s_n_llhttp__internal__n_error_24:
++ s_n_llhttp__internal__n_error_24: {
++ state->error = 0xf;
++ state->reason = "Invalid `Transfer-Encoding` header value";
++ state->error_pos = (const char*) p;
++ state->_current = (void*) (intptr_t) s_error;
++ return s_error;
++ /* UNREACHABLE */;
++ abort();
++ }
++ case s_n_llhttp__internal__n_error_23:
++ s_n_llhttp__internal__n_error_23: {
++ state->error = 0xf;
++ state->reason = "Invalid `Transfer-Encoding` header value";
++ state->error_pos = (const char*) p;
++ state->_current = (void*) (intptr_t) s_error;
++ return s_error;
+ /* UNREACHABLE */;
+ abort();
+ }
+@@ -8913,8 +9106,34 @@
+ goto s_n_llhttp__internal__n_header_value_te_token_ows;
+ }
+ default: {
++ goto s_n_llhttp__internal__n_invoke_update_header_state_9;
++ }
++ }
++ /* UNREACHABLE */;
++ abort();
++ }
++ case s_n_llhttp__internal__n_header_value_te_chunked_last:
++ s_n_llhttp__internal__n_header_value_te_chunked_last: {
++ if (p == endp) {
++ return s_n_llhttp__internal__n_header_value_te_chunked_last;
++ }
++ switch (*p) {
++ case 10: {
+ goto s_n_llhttp__internal__n_invoke_update_header_state_8;
+ }
++ case 13: {
++ goto s_n_llhttp__internal__n_invoke_update_header_state_8;
++ }
++ case ' ': {
++ p++;
++ goto s_n_llhttp__internal__n_header_value_te_chunked_last;
++ }
++ case ',': {
++ goto s_n_llhttp__internal__n_invoke_load_type_1;
++ }
++ default: {
++ goto s_n_llhttp__internal__n_header_value_te_token;
++ }
+ }
+ /* UNREACHABLE */;
+ abort();
+@@ -8966,7 +9185,7 @@
+ }
+ case 10: {
+ p++;
+- goto s_n_llhttp__internal__n_header_value_discard_lws;
++ goto s_n_llhttp__internal__n_invoke_test_flags_5;
+ }
+ case 13: {
+ p++;
+@@ -8993,7 +9212,7 @@
+ goto s_n_llhttp__internal__n_span_end_llhttp__on_header_field_2;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_22;
++ goto s_n_llhttp__internal__n_error_25;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -9083,7 +9302,7 @@
+ goto s_n_llhttp__internal__n_span_end_llhttp__on_header_field_1;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_invoke_update_header_state_9;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_10;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -9108,7 +9327,7 @@
+ return s_n_llhttp__internal__n_header_field_3;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_invoke_update_header_state_10;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_11;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -9133,7 +9352,7 @@
+ return s_n_llhttp__internal__n_header_field_4;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_invoke_update_header_state_10;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_11;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -9154,7 +9373,7 @@
+ goto s_n_llhttp__internal__n_header_field_4;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_invoke_update_header_state_10;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_11;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -9178,7 +9397,7 @@
+ return s_n_llhttp__internal__n_header_field_1;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_invoke_update_header_state_10;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_11;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -9203,7 +9422,7 @@
+ return s_n_llhttp__internal__n_header_field_5;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_invoke_update_header_state_10;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_11;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -9228,7 +9447,7 @@
+ return s_n_llhttp__internal__n_header_field_6;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_invoke_update_header_state_10;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_11;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -9253,7 +9472,7 @@
+ return s_n_llhttp__internal__n_header_field_7;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_invoke_update_header_state_10;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_11;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -9282,7 +9501,7 @@
+ goto s_n_llhttp__internal__n_header_field_7;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_invoke_update_header_state_10;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_11;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -9347,7 +9566,7 @@
+ return s_n_llhttp__internal__n_url_skip_lf_to_http09;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_23;
++ goto s_n_llhttp__internal__n_error_26;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -9364,7 +9583,7 @@
+ goto s_n_llhttp__internal__n_header_field_start;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_24;
++ goto s_n_llhttp__internal__n_error_27;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -9385,7 +9604,7 @@
+ goto s_n_llhttp__internal__n_req_http_end_1;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_24;
++ goto s_n_llhttp__internal__n_error_27;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -9448,7 +9667,7 @@
+ goto s_n_llhttp__internal__n_invoke_store_http_minor;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_25;
++ goto s_n_llhttp__internal__n_error_28;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -9465,7 +9684,7 @@
+ goto s_n_llhttp__internal__n_req_http_minor;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_26;
++ goto s_n_llhttp__internal__n_error_29;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -9528,7 +9747,7 @@
+ goto s_n_llhttp__internal__n_invoke_store_http_major;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_27;
++ goto s_n_llhttp__internal__n_error_30;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -9552,7 +9771,7 @@
+ return s_n_llhttp__internal__n_req_http_start_1;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_29;
++ goto s_n_llhttp__internal__n_error_32;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -9576,7 +9795,7 @@
+ return s_n_llhttp__internal__n_req_http_start_2;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_29;
++ goto s_n_llhttp__internal__n_error_32;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -9601,7 +9820,7 @@
+ goto s_n_llhttp__internal__n_req_http_start_2;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_29;
++ goto s_n_llhttp__internal__n_error_32;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -9655,7 +9874,7 @@
+ goto s_n_llhttp__internal__n_span_end_llhttp__on_url_8;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_30;
++ goto s_n_llhttp__internal__n_error_33;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -9712,7 +9931,7 @@
+ goto s_n_llhttp__internal__n_span_end_stub_query_3;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_31;
++ goto s_n_llhttp__internal__n_error_34;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -9742,7 +9961,7 @@
+ goto s_n_llhttp__internal__n_url_query;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_32;
++ goto s_n_llhttp__internal__n_error_35;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -9883,10 +10102,10 @@
+ }
+ case 7: {
+ p++;
+- goto s_n_llhttp__internal__n_error_33;
++ goto s_n_llhttp__internal__n_error_36;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_34;
++ goto s_n_llhttp__internal__n_error_37;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -9941,7 +10160,7 @@
+ goto s_n_llhttp__internal__n_url_server_with_at;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_35;
++ goto s_n_llhttp__internal__n_error_38;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -9958,7 +10177,7 @@
+ goto s_n_llhttp__internal__n_url_server;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_37;
++ goto s_n_llhttp__internal__n_error_40;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -9972,22 +10191,22 @@
+ switch (*p) {
+ case 10: {
+ p++;
+- goto s_n_llhttp__internal__n_error_36;
++ goto s_n_llhttp__internal__n_error_39;
+ }
+ case 13: {
+ p++;
+- goto s_n_llhttp__internal__n_error_36;
++ goto s_n_llhttp__internal__n_error_39;
+ }
+ case ' ': {
+ p++;
+- goto s_n_llhttp__internal__n_error_36;
++ goto s_n_llhttp__internal__n_error_39;
+ }
+ case '/': {
+ p++;
+ goto s_n_llhttp__internal__n_url_schema_delim_1;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_37;
++ goto s_n_llhttp__internal__n_error_40;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10029,7 +10248,7 @@
+ switch (lookup_table[(uint8_t) *p]) {
+ case 1: {
+ p++;
+- goto s_n_llhttp__internal__n_error_36;
++ goto s_n_llhttp__internal__n_error_39;
+ }
+ case 2: {
+ goto s_n_llhttp__internal__n_span_end_stub_schema;
+@@ -10039,7 +10258,7 @@
+ goto s_n_llhttp__internal__n_url_schema;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_38;
++ goto s_n_llhttp__internal__n_error_41;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10071,7 +10290,7 @@
+ switch (lookup_table[(uint8_t) *p]) {
+ case 1: {
+ p++;
+- goto s_n_llhttp__internal__n_error_36;
++ goto s_n_llhttp__internal__n_error_39;
+ }
+ case 2: {
+ goto s_n_llhttp__internal__n_span_start_stub_path_2;
+@@ -10080,7 +10299,7 @@
+ goto s_n_llhttp__internal__n_url_schema;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_39;
++ goto s_n_llhttp__internal__n_error_42;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10136,7 +10355,7 @@
+ goto s_n_llhttp__internal__n_req_spaces_before_url;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_40;
++ goto s_n_llhttp__internal__n_error_43;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10161,7 +10380,7 @@
+ return s_n_llhttp__internal__n_start_req_1;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10186,7 +10405,7 @@
+ return s_n_llhttp__internal__n_start_req_2;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10211,7 +10430,7 @@
+ return s_n_llhttp__internal__n_start_req_4;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10236,7 +10455,7 @@
+ return s_n_llhttp__internal__n_start_req_6;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10254,7 +10473,7 @@
+ goto s_n_llhttp__internal__n_invoke_store_method_1;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10275,7 +10494,7 @@
+ goto s_n_llhttp__internal__n_start_req_7;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10296,7 +10515,7 @@
+ goto s_n_llhttp__internal__n_start_req_5;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10321,7 +10540,7 @@
+ return s_n_llhttp__internal__n_start_req_8;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10346,7 +10565,7 @@
+ return s_n_llhttp__internal__n_start_req_9;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10371,7 +10590,7 @@
+ return s_n_llhttp__internal__n_start_req_10;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10396,7 +10615,7 @@
+ return s_n_llhttp__internal__n_start_req_12;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10421,7 +10640,7 @@
+ return s_n_llhttp__internal__n_start_req_13;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10442,7 +10661,7 @@
+ goto s_n_llhttp__internal__n_start_req_13;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10467,7 +10686,7 @@
+ return s_n_llhttp__internal__n_start_req_15;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10492,7 +10711,7 @@
+ return s_n_llhttp__internal__n_start_req_16;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10517,7 +10736,7 @@
+ return s_n_llhttp__internal__n_start_req_18;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10542,7 +10761,7 @@
+ return s_n_llhttp__internal__n_start_req_20;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10560,7 +10779,7 @@
+ goto s_n_llhttp__internal__n_invoke_store_method_1;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10581,7 +10800,7 @@
+ goto s_n_llhttp__internal__n_start_req_21;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10602,7 +10821,7 @@
+ goto s_n_llhttp__internal__n_start_req_19;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10627,7 +10846,7 @@
+ return s_n_llhttp__internal__n_start_req_22;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10656,7 +10875,7 @@
+ goto s_n_llhttp__internal__n_start_req_22;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10681,7 +10900,7 @@
+ return s_n_llhttp__internal__n_start_req_23;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10706,7 +10925,7 @@
+ return s_n_llhttp__internal__n_start_req_24;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10731,7 +10950,7 @@
+ return s_n_llhttp__internal__n_start_req_26;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10756,7 +10975,7 @@
+ return s_n_llhttp__internal__n_start_req_27;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10781,7 +11000,7 @@
+ return s_n_llhttp__internal__n_start_req_31;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10806,7 +11025,7 @@
+ return s_n_llhttp__internal__n_start_req_32;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10827,7 +11046,7 @@
+ goto s_n_llhttp__internal__n_start_req_32;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10844,7 +11063,7 @@
+ goto s_n_llhttp__internal__n_start_req_30;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10866,7 +11085,7 @@
+ goto s_n_llhttp__internal__n_start_req_29;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10891,7 +11110,7 @@
+ return s_n_llhttp__internal__n_start_req_34;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10913,7 +11132,7 @@
+ goto s_n_llhttp__internal__n_invoke_store_method_1;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10942,7 +11161,7 @@
+ goto s_n_llhttp__internal__n_start_req_33;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10967,7 +11186,7 @@
+ return s_n_llhttp__internal__n_start_req_37;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -10992,7 +11211,7 @@
+ return s_n_llhttp__internal__n_start_req_38;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -11013,7 +11232,7 @@
+ goto s_n_llhttp__internal__n_start_req_38;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -11030,7 +11249,7 @@
+ goto s_n_llhttp__internal__n_start_req_36;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -11055,7 +11274,7 @@
+ return s_n_llhttp__internal__n_start_req_40;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -11080,7 +11299,7 @@
+ return s_n_llhttp__internal__n_start_req_41;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -11105,7 +11324,7 @@
+ return s_n_llhttp__internal__n_start_req_42;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -11130,7 +11349,7 @@
+ goto s_n_llhttp__internal__n_start_req_42;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -11155,7 +11374,7 @@
+ return s_n_llhttp__internal__n_start_req_43;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -11180,7 +11399,7 @@
+ return s_n_llhttp__internal__n_start_req_46;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -11205,7 +11424,7 @@
+ return s_n_llhttp__internal__n_start_req_48;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -11230,7 +11449,7 @@
+ return s_n_llhttp__internal__n_start_req_49;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -11251,7 +11470,7 @@
+ goto s_n_llhttp__internal__n_start_req_49;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -11276,7 +11495,7 @@
+ return s_n_llhttp__internal__n_start_req_50;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -11301,7 +11520,7 @@
+ goto s_n_llhttp__internal__n_start_req_50;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -11318,7 +11537,7 @@
+ goto s_n_llhttp__internal__n_start_req_45;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -11391,7 +11610,7 @@
+ goto s_n_llhttp__internal__n_start_req_44;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_48;
++ goto s_n_llhttp__internal__n_error_51;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -11476,7 +11695,7 @@
+ goto s_n_llhttp__internal__n_res_status_start;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_42;
++ goto s_n_llhttp__internal__n_error_45;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -11556,7 +11775,7 @@
+ goto s_n_llhttp__internal__n_invoke_update_status_code;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_43;
++ goto s_n_llhttp__internal__n_error_46;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -11619,7 +11838,7 @@
+ goto s_n_llhttp__internal__n_invoke_store_http_minor_1;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_44;
++ goto s_n_llhttp__internal__n_error_47;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -11636,7 +11855,7 @@
+ goto s_n_llhttp__internal__n_res_http_minor;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_45;
++ goto s_n_llhttp__internal__n_error_48;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -11699,7 +11918,7 @@
+ goto s_n_llhttp__internal__n_invoke_store_http_major_1;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_46;
++ goto s_n_llhttp__internal__n_error_49;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -11723,7 +11942,7 @@
+ return s_n_llhttp__internal__n_start_res;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_49;
++ goto s_n_llhttp__internal__n_error_52;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -11748,7 +11967,7 @@
+ return s_n_llhttp__internal__n_req_or_res_method_2;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_47;
++ goto s_n_llhttp__internal__n_error_50;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -11772,7 +11991,7 @@
+ return s_n_llhttp__internal__n_req_or_res_method_3;
+ }
+ case kMatchMismatch: {
+- goto s_n_llhttp__internal__n_error_47;
++ goto s_n_llhttp__internal__n_error_50;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -11793,7 +12012,7 @@
+ goto s_n_llhttp__internal__n_req_or_res_method_3;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_47;
++ goto s_n_llhttp__internal__n_error_50;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -11810,7 +12029,7 @@
+ goto s_n_llhttp__internal__n_req_or_res_method_1;
+ }
+ default: {
+- goto s_n_llhttp__internal__n_error_47;
++ goto s_n_llhttp__internal__n_error_50;
+ }
+ }
+ /* UNREACHABLE */;
+@@ -11870,7 +12089,7 @@
+ /* UNREACHABLE */
+ abort();
+ }
+- s_n_llhttp__internal__n_error_36: {
++ s_n_llhttp__internal__n_error_39: {
+ state->error = 0x7;
+ state->reason = "Invalid characters in url";
+ state->error_pos = (const char*) p;
+@@ -12314,7 +12533,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_16: {
++ s_n_llhttp__internal__n_error_17: {
+ state->error = 0xb;
+ state->reason = "Empty Content-Length";
+ state->error_pos = (const char*) p;
+@@ -12399,14 +12618,51 @@
+ s_n_llhttp__internal__n_invoke_load_header_state: {
+ switch (llhttp__internal__c_load_header_state(state, p, endp)) {
+ case 2:
+- goto s_n_llhttp__internal__n_error_16;
++ goto s_n_llhttp__internal__n_error_17;
+ default:
+ goto s_n_llhttp__internal__n_invoke_load_header_state_1;
+ }
+ /* UNREACHABLE */;
+ abort();
+ }
++ s_n_llhttp__internal__n_error_16: {
++ state->error = 0xa;
++ state->reason = "Invalid header value char";
++ state->error_pos = (const char*) p;
++ state->_current = (void*) (intptr_t) s_error;
++ return s_error;
++ /* UNREACHABLE */;
++ abort();
++ }
++ s_n_llhttp__internal__n_invoke_test_flags_5: {
++ switch (llhttp__internal__c_test_flags_2(state, p, endp)) {
++ case 1:
++ goto s_n_llhttp__internal__n_header_value_discard_lws;
++ default:
++ goto s_n_llhttp__internal__n_error_16;
++ }
++ /* UNREACHABLE */;
++ abort();
++ }
+ s_n_llhttp__internal__n_invoke_update_header_state_1: {
++ switch (llhttp__internal__c_update_header_state_1(state, p, endp)) {
++ default:
++ goto s_n_llhttp__internal__n_span_start_llhttp__on_header_value_1;
++ }
++ /* UNREACHABLE */;
++ abort();
++ }
++ s_n_llhttp__internal__n_invoke_load_header_state_3: {
++ switch (llhttp__internal__c_load_header_state(state, p, endp)) {
++ case 8:
++ goto s_n_llhttp__internal__n_invoke_update_header_state_1;
++ default:
++ goto s_n_llhttp__internal__n_span_start_llhttp__on_header_value_1;
++ }
++ /* UNREACHABLE */;
++ abort();
++ }
++ s_n_llhttp__internal__n_invoke_update_header_state_2: {
+ switch (llhttp__internal__c_update_header_state(state, p, endp)) {
+ default:
+ goto s_n_llhttp__internal__n_header_field_start;
+@@ -12417,7 +12673,7 @@
+ s_n_llhttp__internal__n_invoke_or_flags_7: {
+ switch (llhttp__internal__c_or_flags_3(state, p, endp)) {
+ default:
+- goto s_n_llhttp__internal__n_invoke_update_header_state_1;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_2;
+ }
+ /* UNREACHABLE */;
+ abort();
+@@ -12425,7 +12681,7 @@
+ s_n_llhttp__internal__n_invoke_or_flags_8: {
+ switch (llhttp__internal__c_or_flags_4(state, p, endp)) {
+ default:
+- goto s_n_llhttp__internal__n_invoke_update_header_state_1;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_2;
+ }
+ /* UNREACHABLE */;
+ abort();
+@@ -12433,7 +12689,7 @@
+ s_n_llhttp__internal__n_invoke_or_flags_9: {
+ switch (llhttp__internal__c_or_flags_5(state, p, endp)) {
+ default:
+- goto s_n_llhttp__internal__n_invoke_update_header_state_1;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_2;
+ }
+ /* UNREACHABLE */;
+ abort();
+@@ -12446,7 +12702,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_invoke_load_header_state_3: {
++ s_n_llhttp__internal__n_invoke_load_header_state_4: {
+ switch (llhttp__internal__c_load_header_state(state, p, endp)) {
+ case 5:
+ goto s_n_llhttp__internal__n_invoke_or_flags_7;
+@@ -12462,7 +12718,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_17: {
++ s_n_llhttp__internal__n_error_18: {
+ state->error = 0x3;
+ state->reason = "Missing expected LF after header value";
+ state->error_pos = (const char*) p;
+@@ -12480,6 +12736,24 @@
+ err = llhttp__on_header_value(state, start, p);
+ if (err != 0) {
+ state->error = err;
++ state->error_pos = (const char*) (p + 1);
++ state->_current = (void*) (intptr_t) s_n_llhttp__internal__n_header_value_almost_done;
++ return s_error;
++ }
++ p++;
++ goto s_n_llhttp__internal__n_header_value_almost_done;
++ /* UNREACHABLE */;
++ abort();
++ }
++ s_n_llhttp__internal__n_span_end_llhttp__on_header_value_3: {
++ const unsigned char* start;
++ int err;
++
++ start = state->_span_pos0;
++ state->_span_pos0 = NULL;
++ err = llhttp__on_header_value(state, start, p);
++ if (err != 0) {
++ state->error = err;
+ state->error_pos = (const char*) p;
+ state->_current = (void*) (intptr_t) s_n_llhttp__internal__n_header_value_almost_done;
+ return s_error;
+@@ -12488,7 +12762,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_span_end_llhttp__on_header_value_2: {
++ s_n_llhttp__internal__n_span_end_llhttp__on_header_value_4: {
+ const unsigned char* start;
+ int err;
+
+@@ -12506,7 +12780,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_span_end_llhttp__on_header_value_3: {
++ s_n_llhttp__internal__n_span_end_llhttp__on_header_value_2: {
+ const unsigned char* start;
+ int err;
+
+@@ -12515,35 +12789,25 @@
+ err = llhttp__on_header_value(state, start, p);
+ if (err != 0) {
+ state->error = err;
+- state->error_pos = (const char*) (p + 1);
+- state->_current = (void*) (intptr_t) s_n_llhttp__internal__n_header_value_almost_done;
++ state->error_pos = (const char*) p;
++ state->_current = (void*) (intptr_t) s_n_llhttp__internal__n_error_19;
+ return s_error;
+ }
+- p++;
+- goto s_n_llhttp__internal__n_header_value_almost_done;
++ goto s_n_llhttp__internal__n_error_19;
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_18: {
+- state->error = 0xa;
+- state->reason = "Invalid header value char";
+- state->error_pos = (const char*) p;
+- state->_current = (void*) (intptr_t) s_error;
+- return s_error;
+- /* UNREACHABLE */;
+- abort();
+- }
+- s_n_llhttp__internal__n_invoke_test_flags_5: {
++ s_n_llhttp__internal__n_invoke_test_flags_6: {
+ switch (llhttp__internal__c_test_flags_2(state, p, endp)) {
+ case 1:
+ goto s_n_llhttp__internal__n_header_value_lenient;
+ default:
+- goto s_n_llhttp__internal__n_error_18;
++ goto s_n_llhttp__internal__n_span_end_llhttp__on_header_value_2;
+ }
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_invoke_update_header_state_3: {
++ s_n_llhttp__internal__n_invoke_update_header_state_4: {
+ switch (llhttp__internal__c_update_header_state(state, p, endp)) {
+ default:
+ goto s_n_llhttp__internal__n_header_value_connection;
+@@ -12554,7 +12818,7 @@
+ s_n_llhttp__internal__n_invoke_or_flags_11: {
+ switch (llhttp__internal__c_or_flags_3(state, p, endp)) {
+ default:
+- goto s_n_llhttp__internal__n_invoke_update_header_state_3;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_4;
+ }
+ /* UNREACHABLE */;
+ abort();
+@@ -12562,7 +12826,7 @@
+ s_n_llhttp__internal__n_invoke_or_flags_12: {
+ switch (llhttp__internal__c_or_flags_4(state, p, endp)) {
+ default:
+- goto s_n_llhttp__internal__n_invoke_update_header_state_3;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_4;
+ }
+ /* UNREACHABLE */;
+ abort();
+@@ -12570,7 +12834,7 @@
+ s_n_llhttp__internal__n_invoke_or_flags_13: {
+ switch (llhttp__internal__c_or_flags_5(state, p, endp)) {
+ default:
+- goto s_n_llhttp__internal__n_invoke_update_header_state_3;
++ goto s_n_llhttp__internal__n_invoke_update_header_state_4;
+ }
+ /* UNREACHABLE */;
+ abort();
+@@ -12583,7 +12847,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_invoke_load_header_state_4: {
++ s_n_llhttp__internal__n_invoke_load_header_state_5: {
+ switch (llhttp__internal__c_load_header_state(state, p, endp)) {
+ case 5:
+ goto s_n_llhttp__internal__n_invoke_or_flags_11;
+@@ -12599,39 +12863,39 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_invoke_update_header_state_4: {
+- switch (llhttp__internal__c_update_header_state_4(state, p, endp)) {
++ s_n_llhttp__internal__n_invoke_update_header_state_5: {
++ switch (llhttp__internal__c_update_header_state_1(state, p, endp)) {
+ default:
+ goto s_n_llhttp__internal__n_header_value_connection_token;
+ }
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_invoke_update_header_state_2: {
+- switch (llhttp__internal__c_update_header_state_2(state, p, endp)) {
++ s_n_llhttp__internal__n_invoke_update_header_state_3: {
++ switch (llhttp__internal__c_update_header_state_3(state, p, endp)) {
+ default:
+ goto s_n_llhttp__internal__n_header_value_connection_ws;
+ }
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_invoke_update_header_state_5: {
+- switch (llhttp__internal__c_update_header_state_5(state, p, endp)) {
++ s_n_llhttp__internal__n_invoke_update_header_state_6: {
++ switch (llhttp__internal__c_update_header_state_6(state, p, endp)) {
+ default:
+ goto s_n_llhttp__internal__n_header_value_connection_ws;
+ }
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_invoke_update_header_state_6: {
+- switch (llhttp__internal__c_update_header_state_6(state, p, endp)) {
++ s_n_llhttp__internal__n_invoke_update_header_state_7: {
++ switch (llhttp__internal__c_update_header_state_7(state, p, endp)) {
+ default:
+ goto s_n_llhttp__internal__n_header_value_connection_ws;
+ }
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_span_end_llhttp__on_header_value_4: {
++ s_n_llhttp__internal__n_span_end_llhttp__on_header_value_5: {
+ const unsigned char* start;
+ int err;
+
+@@ -12641,17 +12905,17 @@
+ if (err != 0) {
+ state->error = err;
+ state->error_pos = (const char*) p;
+- state->_current = (void*) (intptr_t) s_n_llhttp__internal__n_error_20;
++ state->_current = (void*) (intptr_t) s_n_llhttp__internal__n_error_21;
+ return s_error;
+ }
+- goto s_n_llhttp__internal__n_error_20;
++ goto s_n_llhttp__internal__n_error_21;
+ /* UNREACHABLE */;
+ abort();
+ }
+ s_n_llhttp__internal__n_invoke_mul_add_content_length_1: {
+ switch (llhttp__internal__c_mul_add_content_length_1(state, p, endp, match)) {
+ case 1:
+- goto s_n_llhttp__internal__n_span_end_llhttp__on_header_value_4;
++ goto s_n_llhttp__internal__n_span_end_llhttp__on_header_value_5;
+ default:
+ goto s_n_llhttp__internal__n_header_value_content_length;
+ }
+@@ -12666,7 +12930,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_span_end_llhttp__on_header_value_5: {
++ s_n_llhttp__internal__n_span_end_llhttp__on_header_value_6: {
+ const unsigned char* start;
+ int err;
+
+@@ -12676,14 +12940,14 @@
+ if (err != 0) {
+ state->error = err;
+ state->error_pos = (const char*) p;
+- state->_current = (void*) (intptr_t) s_n_llhttp__internal__n_error_21;
++ state->_current = (void*) (intptr_t) s_n_llhttp__internal__n_error_22;
+ return s_error;
+ }
+- goto s_n_llhttp__internal__n_error_21;
++ goto s_n_llhttp__internal__n_error_22;
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_19: {
++ s_n_llhttp__internal__n_error_20: {
+ state->error = 0x4;
+ state->reason = "Duplicate Content-Length";
+ state->error_pos = (const char*) p;
+@@ -12692,26 +12956,82 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_invoke_test_flags_6: {
+- switch (llhttp__internal__c_test_flags_6(state, p, endp)) {
++ s_n_llhttp__internal__n_invoke_test_flags_7: {
++ switch (llhttp__internal__c_test_flags_7(state, p, endp)) {
+ case 0:
+ goto s_n_llhttp__internal__n_header_value_content_length;
+ default:
+- goto s_n_llhttp__internal__n_error_19;
++ goto s_n_llhttp__internal__n_error_20;
+ }
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_invoke_update_header_state_7: {
+- switch (llhttp__internal__c_update_header_state_7(state, p, endp)) {
++ s_n_llhttp__internal__n_span_end_llhttp__on_header_value_8: {
++ const unsigned char* start;
++ int err;
++
++ start = state->_span_pos0;
++ state->_span_pos0 = NULL;
++ err = llhttp__on_header_value(state, start, p);
++ if (err != 0) {
++ state->error = err;
++ state->error_pos = (const char*) (p + 1);
++ state->_current = (void*) (intptr_t) s_n_llhttp__internal__n_error_24;
++ return s_error;
++ }
++ p++;
++ goto s_n_llhttp__internal__n_error_24;
++ /* UNREACHABLE */;
++ abort();
++ }
++ s_n_llhttp__internal__n_invoke_update_header_state_8: {
++ switch (llhttp__internal__c_update_header_state_8(state, p, endp)) {
+ default:
+ goto s_n_llhttp__internal__n_header_value_otherwise;
+ }
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_invoke_update_header_state_8: {
+- switch (llhttp__internal__c_update_header_state_4(state, p, endp)) {
++ s_n_llhttp__internal__n_span_end_llhttp__on_header_value_7: {
++ const unsigned char* start;
++ int err;
++
++ start = state->_span_pos0;
++ state->_span_pos0 = NULL;
++ err = llhttp__on_header_value(state, start, p);
++ if (err != 0) {
++ state->error = err;
++ state->error_pos = (const char*) (p + 1);
++ state->_current = (void*) (intptr_t) s_n_llhttp__internal__n_error_23;
++ return s_error;
++ }
++ p++;
++ goto s_n_llhttp__internal__n_error_23;
++ /* UNREACHABLE */;
++ abort();
++ }
++ s_n_llhttp__internal__n_invoke_test_flags_9: {
++ switch (llhttp__internal__c_test_flags_2(state, p, endp)) {
++ case 0:
++ goto s_n_llhttp__internal__n_span_end_llhttp__on_header_value_7;
++ default:
++ goto s_n_llhttp__internal__n_header_value_te_chunked;
++ }
++ /* UNREACHABLE */;
++ abort();
++ }
++ s_n_llhttp__internal__n_invoke_load_type_1: {
++ switch (llhttp__internal__c_load_type(state, p, endp)) {
++ case 1:
++ goto s_n_llhttp__internal__n_invoke_test_flags_9;
++ default:
++ goto s_n_llhttp__internal__n_header_value_te_chunked;
++ }
++ /* UNREACHABLE */;
++ abort();
++ }
++ s_n_llhttp__internal__n_invoke_update_header_state_9: {
++ switch (llhttp__internal__c_update_header_state_1(state, p, endp)) {
+ default:
+ goto s_n_llhttp__internal__n_header_value;
+ }
+@@ -12726,6 +13046,34 @@
+ /* UNREACHABLE */;
+ abort();
+ }
++ s_n_llhttp__internal__n_invoke_or_flags_17: {
++ switch (llhttp__internal__c_or_flags_16(state, p, endp)) {
++ default:
++ goto s_n_llhttp__internal__n_invoke_and_flags;
++ }
++ /* UNREACHABLE */;
++ abort();
++ }
++ s_n_llhttp__internal__n_invoke_test_flags_10: {
++ switch (llhttp__internal__c_test_flags_2(state, p, endp)) {
++ case 0:
++ goto s_n_llhttp__internal__n_span_end_llhttp__on_header_value_8;
++ default:
++ goto s_n_llhttp__internal__n_invoke_or_flags_17;
++ }
++ /* UNREACHABLE */;
++ abort();
++ }
++ s_n_llhttp__internal__n_invoke_load_type_2: {
++ switch (llhttp__internal__c_load_type(state, p, endp)) {
++ case 1:
++ goto s_n_llhttp__internal__n_invoke_test_flags_10;
++ default:
++ goto s_n_llhttp__internal__n_invoke_or_flags_17;
++ }
++ /* UNREACHABLE */;
++ abort();
++ }
+ s_n_llhttp__internal__n_invoke_or_flags_16: {
+ switch (llhttp__internal__c_or_flags_16(state, p, endp)) {
+ default:
+@@ -12734,10 +13082,20 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_invoke_or_flags_17: {
+- switch (llhttp__internal__c_or_flags_17(state, p, endp)) {
++ s_n_llhttp__internal__n_invoke_test_flags_8: {
++ switch (llhttp__internal__c_test_flags_8(state, p, endp)) {
++ case 1:
++ goto s_n_llhttp__internal__n_invoke_load_type_2;
+ default:
+- goto s_n_llhttp__internal__n_invoke_update_header_state_8;
++ goto s_n_llhttp__internal__n_invoke_or_flags_16;
++ }
++ /* UNREACHABLE */;
++ abort();
++ }
++ s_n_llhttp__internal__n_invoke_or_flags_18: {
++ switch (llhttp__internal__c_or_flags_18(state, p, endp)) {
++ default:
++ goto s_n_llhttp__internal__n_invoke_update_header_state_9;
+ }
+ /* UNREACHABLE */;
+ abort();
+@@ -12747,11 +13105,11 @@
+ case 1:
+ goto s_n_llhttp__internal__n_header_value_connection;
+ case 2:
+- goto s_n_llhttp__internal__n_invoke_test_flags_6;
++ goto s_n_llhttp__internal__n_invoke_test_flags_7;
+ case 3:
+- goto s_n_llhttp__internal__n_invoke_or_flags_16;
++ goto s_n_llhttp__internal__n_invoke_test_flags_8;
+ case 4:
+- goto s_n_llhttp__internal__n_invoke_or_flags_17;
++ goto s_n_llhttp__internal__n_invoke_or_flags_18;
+ default:
+ goto s_n_llhttp__internal__n_header_value;
+ }
+@@ -12794,7 +13152,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_22: {
++ s_n_llhttp__internal__n_error_25: {
+ state->error = 0xa;
+ state->reason = "Invalid header token";
+ state->error_pos = (const char*) p;
+@@ -12803,8 +13161,8 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_invoke_update_header_state_9: {
+- switch (llhttp__internal__c_update_header_state_4(state, p, endp)) {
++ s_n_llhttp__internal__n_invoke_update_header_state_10: {
++ switch (llhttp__internal__c_update_header_state_1(state, p, endp)) {
+ default:
+ goto s_n_llhttp__internal__n_header_field_general;
+ }
+@@ -12819,8 +13177,8 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_invoke_update_header_state_10: {
+- switch (llhttp__internal__c_update_header_state_4(state, p, endp)) {
++ s_n_llhttp__internal__n_invoke_update_header_state_11: {
++ switch (llhttp__internal__c_update_header_state_1(state, p, endp)) {
+ default:
+ goto s_n_llhttp__internal__n_header_field_general;
+ }
+@@ -12860,7 +13218,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_23: {
++ s_n_llhttp__internal__n_error_26: {
+ state->error = 0x7;
+ state->reason = "Expected CRLF";
+ state->error_pos = (const char*) p;
+@@ -12886,7 +13244,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_24: {
++ s_n_llhttp__internal__n_error_27: {
+ state->error = 0x9;
+ state->reason = "Expected CRLF after version";
+ state->error_pos = (const char*) p;
+@@ -12903,7 +13261,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_25: {
++ s_n_llhttp__internal__n_error_28: {
+ state->error = 0x9;
+ state->reason = "Invalid minor version";
+ state->error_pos = (const char*) p;
+@@ -12912,7 +13270,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_26: {
++ s_n_llhttp__internal__n_error_29: {
+ state->error = 0x9;
+ state->reason = "Expected dot";
+ state->error_pos = (const char*) p;
+@@ -12929,7 +13287,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_27: {
++ s_n_llhttp__internal__n_error_30: {
+ state->error = 0x9;
+ state->reason = "Invalid major version";
+ state->error_pos = (const char*) p;
+@@ -12938,7 +13296,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_29: {
++ s_n_llhttp__internal__n_error_32: {
+ state->error = 0x8;
+ state->reason = "Expected HTTP/";
+ state->error_pos = (const char*) p;
+@@ -12947,7 +13305,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_28: {
++ s_n_llhttp__internal__n_error_31: {
+ state->error = 0x8;
+ state->reason = "Expected SOURCE method for ICE/x.x request";
+ state->error_pos = (const char*) p;
+@@ -12959,7 +13317,7 @@
+ s_n_llhttp__internal__n_invoke_is_equal_method_1: {
+ switch (llhttp__internal__c_is_equal_method_1(state, p, endp)) {
+ case 0:
+- goto s_n_llhttp__internal__n_error_28;
++ goto s_n_llhttp__internal__n_error_31;
+ default:
+ goto s_n_llhttp__internal__n_req_http_major;
+ }
+@@ -13034,7 +13392,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_30: {
++ s_n_llhttp__internal__n_error_33: {
+ state->error = 0x7;
+ state->reason = "Invalid char in url fragment start";
+ state->error_pos = (const char*) p;
+@@ -13094,7 +13452,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_31: {
++ s_n_llhttp__internal__n_error_34: {
+ state->error = 0x7;
+ state->reason = "Invalid char in url query";
+ state->error_pos = (const char*) p;
+@@ -13103,7 +13461,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_32: {
++ s_n_llhttp__internal__n_error_35: {
+ state->error = 0x7;
+ state->reason = "Invalid char in url path";
+ state->error_pos = (const char*) p;
+@@ -13214,7 +13572,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_33: {
++ s_n_llhttp__internal__n_error_36: {
+ state->error = 0x7;
+ state->reason = "Double @ in url";
+ state->error_pos = (const char*) p;
+@@ -13223,7 +13581,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_34: {
++ s_n_llhttp__internal__n_error_37: {
+ state->error = 0x7;
+ state->reason = "Unexpected char in url server";
+ state->error_pos = (const char*) p;
+@@ -13232,7 +13590,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_35: {
++ s_n_llhttp__internal__n_error_38: {
+ state->error = 0x7;
+ state->reason = "Unexpected char in url server";
+ state->error_pos = (const char*) p;
+@@ -13241,7 +13599,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_37: {
++ s_n_llhttp__internal__n_error_40: {
+ state->error = 0x7;
+ state->reason = "Unexpected char in url schema";
+ state->error_pos = (const char*) p;
+@@ -13250,7 +13608,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_38: {
++ s_n_llhttp__internal__n_error_41: {
+ state->error = 0x7;
+ state->reason = "Unexpected char in url schema";
+ state->error_pos = (const char*) p;
+@@ -13259,7 +13617,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_39: {
++ s_n_llhttp__internal__n_error_42: {
+ state->error = 0x7;
+ state->reason = "Unexpected start char in url";
+ state->error_pos = (const char*) p;
+@@ -13278,7 +13636,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_40: {
++ s_n_llhttp__internal__n_error_43: {
+ state->error = 0x6;
+ state->reason = "Expected space after method";
+ state->error_pos = (const char*) p;
+@@ -13295,7 +13653,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_48: {
++ s_n_llhttp__internal__n_error_51: {
+ state->error = 0x6;
+ state->reason = "Invalid method encountered";
+ state->error_pos = (const char*) p;
+@@ -13304,7 +13662,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_41: {
++ s_n_llhttp__internal__n_error_44: {
+ state->error = 0xd;
+ state->reason = "Response overflow";
+ state->error_pos = (const char*) p;
+@@ -13316,7 +13674,7 @@
+ s_n_llhttp__internal__n_invoke_mul_add_status_code: {
+ switch (llhttp__internal__c_mul_add_status_code(state, p, endp, match)) {
+ case 1:
+- goto s_n_llhttp__internal__n_error_41;
++ goto s_n_llhttp__internal__n_error_44;
+ default:
+ goto s_n_llhttp__internal__n_res_status_code;
+ }
+@@ -13359,7 +13717,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_42: {
++ s_n_llhttp__internal__n_error_45: {
+ state->error = 0xd;
+ state->reason = "Invalid response status";
+ state->error_pos = (const char*) p;
+@@ -13376,7 +13734,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_43: {
++ s_n_llhttp__internal__n_error_46: {
+ state->error = 0x9;
+ state->reason = "Expected space after version";
+ state->error_pos = (const char*) p;
+@@ -13393,7 +13751,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_44: {
++ s_n_llhttp__internal__n_error_47: {
+ state->error = 0x9;
+ state->reason = "Invalid minor version";
+ state->error_pos = (const char*) p;
+@@ -13402,7 +13760,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_45: {
++ s_n_llhttp__internal__n_error_48: {
+ state->error = 0x9;
+ state->reason = "Expected dot";
+ state->error_pos = (const char*) p;
+@@ -13419,7 +13777,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_46: {
++ s_n_llhttp__internal__n_error_49: {
+ state->error = 0x9;
+ state->reason = "Invalid major version";
+ state->error_pos = (const char*) p;
+@@ -13428,7 +13786,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_49: {
++ s_n_llhttp__internal__n_error_52: {
+ state->error = 0x8;
+ state->reason = "Expected HTTP/";
+ state->error_pos = (const char*) p;
+@@ -13453,7 +13811,7 @@
+ /* UNREACHABLE */;
+ abort();
+ }
+- s_n_llhttp__internal__n_error_47: {
++ s_n_llhttp__internal__n_error_50: {
+ state->error = 0x8;
+ state->reason = "Invalid word encountered";
+ state->error_pos = (const char*) p;
+--- nodejs-12.22.12~dfsg/test/parallel/test-http-invalid-te.js
++++ nodejs-12.22.12~dfsg/test/parallel/test-http-invalid-te.js
+@@ -13,7 +13,7 @@ Content-Type: text/plain; charset=utf-8
+ Host: hacker.exploit.com
+ Connection: keep-alive
+ Content-Length: 10
+-Transfer-Encoding: chunked, eee
++Transfer-Encoding: eee, chunked
+
+ HELLOWORLDPOST / HTTP/1.1
+ Content-Type: text/plain; charset=utf-8
+--- nodejs-12.22.12~dfsg/test/parallel/test-http-missing-header-separator-cr.js
++++ nodejs-12.22.12~dfsg/test/parallel/test-http-missing-header-separator-cr.js
+@@ -0,0 +1,83 @@
++'use strict';
++
++const common = require('../common');
++const assert = require('assert');
++
++const http = require('http');
++const net = require('net');
++
++function serverHandler(server, msg) {
++ const client = net.connect(server.address().port, 'localhost');
++
++ let response = '';
++
++ client.on('data', common.mustCall((chunk) => {
++ response += chunk.toString('utf-8');
++ }));
++
++ client.setEncoding('utf8');
++ client.on('error', common.mustNotCall());
++ client.on('end', common.mustCall(() => {
++ assert.strictEqual(
++ response,
++ 'HTTP/1.1 400 Bad Request\r\nConnection: close\r\n\r\n'
++ );
++ server.close();
++ }));
++ client.write(msg);
++ client.resume();
++}
++
++{
++ const msg = [
++ 'GET / HTTP/1.1',
++ 'Host: localhost',
++ 'Dummy: x\nContent-Length: 23',
++ '',
++ 'GET / HTTP/1.1',
++ 'Dummy: GET /admin HTTP/1.1',
++ 'Host: localhost',
++ '',
++ '',
++ ].join('\r\n');
++
++ const server = http.createServer(common.mustNotCall());
++
++ server.listen(0, common.mustCall(serverHandler.bind(null, server, msg)));
++}
++
++{
++ const msg = [
++ 'POST / HTTP/1.1',
++ 'Host: localhost',
++ 'x:x\nTransfer-Encoding: chunked',
++ '',
++ '1',
++ 'A',
++ '0',
++ '',
++ '',
++ ].join('\r\n');
++
++ const server = http.createServer(common.mustNotCall());
++
++ server.listen(0, common.mustCall(serverHandler.bind(null, server, msg)));
++}
++
++{
++ const msg = [
++ 'POST / HTTP/1.1',
++ 'Host: localhost',
++ 'x:\nTransfer-Encoding: chunked',
++ '',
++ '1',
++ 'A',
++ '0',
++ '',
++ '',
++ ].join('\r\n');
++
++ const server = http.createServer(common.mustNotCall());
++
++ server.listen(0, common.mustCall(serverHandler.bind(null, server, msg)));
++}
+--- /dev/null
++++ nodejs-12.22.12~dfsg/test/parallel/test-http-transfer-encoding-repeated-chunked.js
+@@ -0,0 +1,51 @@
++'use strict';
++
++const common = require('../common');
++const assert = require('assert');
++
++const http = require('http');
++const net = require('net');
++
++const msg = [
++ 'POST / HTTP/1.1',
++ 'Host: 127.0.0.1',
++ 'Transfer-Encoding: chunkedchunked',
++ '',
++ '1',
++ 'A',
++ '0',
++ '',
++].join('\r\n');
++
++const server = http.createServer(common.mustCall((req, res) => {
++ // Verify that no data is received
++
++ req.on('data', common.mustNotCall());
++
++ req.on('end', common.mustNotCall(() => {
++ res.writeHead(200, { 'Content-Type': 'text/plain' });
++ res.end();
++ }));
++}, 1));
++
++server.listen(0, common.mustCall(() => {
++ const client = net.connect(server.address().port, 'localhost');
++
++ let response = '';
++
++ client.on('data', common.mustCall((chunk) => {
++ response += chunk.toString('utf-8');
++ }));
++
++ client.setEncoding('utf8');
++ client.on('error', common.mustNotCall());
++ client.on('end', common.mustCall(() => {
++ assert.strictEqual(
++ response,
++ 'HTTP/1.1 400 Bad Request\r\nConnection: close\r\n\r\n'
++ );
++ server.close();
++ }));
++ client.write(msg);
++ client.resume();
++}));
+--- nodejs-12.22.12~dfsg/test/parallel/test-http-transfer-encoding-smuggling.js
++++ nodejs-12.22.12~dfsg/test/parallel/test-http-transfer-encoding-smuggling.js
+@@ -1,46 +1,89 @@
+ 'use strict';
+
+ const common = require('../common');
+-
+ const assert = require('assert');
++
+ const http = require('http');
+ const net = require('net');
+
+-const msg = [
+- 'POST / HTTP/1.1',
+- 'Host: 127.0.0.1',
+- 'Transfer-Encoding: chunked',
+- 'Transfer-Encoding: chunked-false',
+- 'Connection: upgrade',
+- '',
+- '1',
+- 'A',
+- '0',
+- '',
+- 'GET /flag HTTP/1.1',
+- 'Host: 127.0.0.1',
+- '',
+- '',
+-].join('\r\n');
+-
+-// Verify that the server is called only once even with a smuggled request.
+-
+-const server = http.createServer(common.mustCall((req, res) => {
+- res.end();
+-}, 1));
+-
+-function send(next) {
+- const client = net.connect(server.address().port, 'localhost');
+- client.setEncoding('utf8');
+- client.on('error', common.mustNotCall());
+- client.on('end', next);
+- client.write(msg);
+- client.resume();
++{
++ const msg = [
++ 'POST / HTTP/1.1',
++ 'Host: 127.0.0.1',
++ 'Transfer-Encoding: chunked',
++ 'Transfer-Encoding: chunked-false',
++ 'Connection: upgrade',
++ '',
++ '1',
++ 'A',
++ '0',
++ '',
++ 'GET /flag HTTP/1.1',
++ 'Host: 127.0.0.1',
++ '',
++ '',
++ ].join('\r\n');
++
++ const server = http.createServer(common.mustNotCall((req, res) => {
++ res.end();
++ }, 1));
++
++ server.listen(0, common.mustCall(() => {
++ const client = net.connect(server.address().port, 'localhost');
++
++ let response = '';
++
++ // Verify that the server listener is never called
++
++ client.on('data', common.mustCall((chunk) => {
++ response += chunk.toString('utf-8');
++ }));
++
++ client.setEncoding('utf8');
++ client.on('error', common.mustNotCall());
++ client.on('end', common.mustCall(() => {
++ assert.strictEqual(
++ response,
++ 'HTTP/1.1 400 Bad Request\r\nConnection: close\r\n\r\n'
++ );
++ server.close();
++ }));
++ client.write(msg);
++ client.resume();
++ }));
+ }
+
+-server.listen(0, common.mustCall((err) => {
+- assert.ifError(err);
+- send(common.mustCall(() => {
+- server.close();
++{
++ const msg = [
++ 'POST / HTTP/1.1',
++ 'Host: 127.0.0.1',
++ 'Transfer-Encoding: chunked',
++ ' , chunked-false',
++ 'Connection: upgrade',
++ '',
++ '1',
++ 'A',
++ '0',
++ '',
++ 'GET /flag HTTP/1.1',
++ 'Host: 127.0.0.1',
++ '',
++ '',
++ ].join('\r\n');
++
++ const server = http.createServer(common.mustCall((request, response) => {
++ assert.notStrictEqual(request.url, '/admin');
++ response.end('hello world');
++ }), 1);
++
++ server.listen(0, common.mustCall(() => {
++ const client = net.connect(server.address().port, 'localhost');
++
++ client.on('end', common.mustCall(function() {
++ server.close();
++ }));
++
++ client.write(msg);
++ client.resume();
+ }));
+-}));
++}
+--- nodejs-12.22.12~dfsg/test/parallel/test-http-header-overflow.js
++++ nodejs-12.22.12~dfsg/test/parallel/test-http-header-overflow.js
+@@ -1,3 +1,5 @@
++// Flags: --expose-internals
++
+ 'use strict';
+ const { expectsError, mustCall } = require('../common');
+ const assert = require('assert');
+@@ -8,7 +10,7 @@ const CRLF = '\r\n';
+ const DUMMY_HEADER_NAME = 'Cookie: ';
+ const DUMMY_HEADER_VALUE = 'a'.repeat(
+ // Plus one is to make it 1 byte too big
+- maxHeaderSize - DUMMY_HEADER_NAME.length - (2 * CRLF.length) + 1
++ maxHeaderSize - DUMMY_HEADER_NAME.length + 2
+ );
+ const PAYLOAD_GET = 'GET /blah HTTP/1.1';
+ const PAYLOAD = PAYLOAD_GET + CRLF +
+@@ -21,7 +23,7 @@ server.on('connection', mustCall((socket
+ name: 'Error',
+ message: 'Parse Error: Header overflow',
+ code: 'HPE_HEADER_OVERFLOW',
+- bytesParsed: maxHeaderSize + PAYLOAD_GET.length,
++ bytesParsed: maxHeaderSize + PAYLOAD_GET.length + (CRLF.length * 2) + 1,
+ rawPacket: Buffer.from(PAYLOAD)
+ }));
+ }));
diff --git a/meta-openembedded/meta-oe/recipes-devtools/nodejs/nodejs_12.22.12.bb b/meta-openembedded/meta-oe/recipes-devtools/nodejs/nodejs_12.22.12.bb
index 8dbdd088e9..3ededae562 100644
--- a/meta-openembedded/meta-oe/recipes-devtools/nodejs/nodejs_12.22.12.bb
+++ b/meta-openembedded/meta-oe/recipes-devtools/nodejs/nodejs_12.22.12.bb
@@ -22,6 +22,10 @@ SRC_URI = "http://nodejs.org/dist/v${PV}/node-v${PV}.tar.xz \
file://big-endian.patch \
file://mips-warnings.patch \
file://0001-Remove-use-of-register-r7-because-llvm-now-issues-an.patch \
+ file://CVE-2022-32212.patch \
+ file://CVE-2022-35255.patch \
+ file://CVE-2022-43548.patch \
+ file://CVE-llhttp.patch \
"
SRC_URI_append_class-target = " \
file://0002-Using-native-binaries.patch \
diff --git a/meta-openembedded/meta-oe/recipes-devtools/php/php_7.4.28.bb b/meta-openembedded/meta-oe/recipes-devtools/php/php_7.4.33.bb
index 3970ce097a..caaaa23426 100644
--- a/meta-openembedded/meta-oe/recipes-devtools/php/php_7.4.28.bb
+++ b/meta-openembedded/meta-oe/recipes-devtools/php/php_7.4.33.bb
@@ -33,7 +33,7 @@ SRC_URI_append_class-target = " \
"
S = "${WORKDIR}/php-${PV}"
-SRC_URI[sha256sum] = "2085086a863444b0e39547de1a4969fd1c40a0c188eb58fab2938b649b0c4b58"
+SRC_URI[sha256sum] = "4e8117458fe5a475bf203128726b71bcbba61c42ad463dffadee5667a198a98a"
inherit autotools pkgconfig python3native gettext
diff --git a/meta-openembedded/meta-oe/recipes-graphics/xorg-app/xterm/CVE-2022-45063.patch b/meta-openembedded/meta-oe/recipes-graphics/xorg-app/xterm/CVE-2022-45063.patch
new file mode 100644
index 0000000000..e63169a209
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-graphics/xorg-app/xterm/CVE-2022-45063.patch
@@ -0,0 +1,776 @@
+From 787636674918873a091e7a4ef5977263ba982322 Mon Sep 17 00:00:00 2001
+From: "Thomas E. Dickey" <dickey@invisible-island.net>
+Date: Sun, 23 Oct 2022 22:59:52 +0000
+Subject: [PATCH] snapshot of project "xterm", label xterm-374c
+
+Upstream-Status: Backport [https://github.com/ThomasDickey/xterm-snapshots/commit/787636674918873a091e7a4ef5977263ba982322]
+CVE: CVE-2022-45063
+
+Signed-off-by: Siddharth Doshi <sdoshi@mvista.com>
+---
+ button.c | 16 +--
+ charproc.c | 9 +-
+ doublechr.c | 4 +-
+ fontutils.c | 266 ++++++++++++++++++++++++++-----------------------
+ fontutils.h | 4 +-
+ misc.c | 7 +-
+ screen.c | 2 +-
+ xterm.h | 2 +-
+ xterm.log.html | 6 ++
+ 9 files changed, 164 insertions(+), 152 deletions(-)
+
+diff --git a/button.c b/button.c
+index 66a6181..e05ca50 100644
+--- a/button.c
++++ b/button.c
+@@ -1619,14 +1619,9 @@ static void
+ UnmapSelections(XtermWidget xw)
+ {
+ TScreen *screen = TScreenOf(xw);
+- Cardinal n;
+
+- if (screen->mappedSelect) {
+- for (n = 0; screen->mappedSelect[n] != 0; ++n)
+- free((void *) screen->mappedSelect[n]);
+- free(screen->mappedSelect);
+- screen->mappedSelect = 0;
+- }
++ free(screen->mappedSelect);
++ screen->mappedSelect = 0;
+ }
+
+ /*
+@@ -1662,14 +1657,11 @@ MapSelections(XtermWidget xw, String *params, Cardinal num_params)
+ if ((result = TypeMallocN(String, num_params + 1)) != 0) {
+ result[num_params] = 0;
+ for (j = 0; j < num_params; ++j) {
+- result[j] = x_strdup((isSELECT(params[j])
++ result[j] = (String) (isSELECT(params[j])
+ ? mapTo
+- : params[j]));
++ : params[j]);
+ if (result[j] == 0) {
+ UnmapSelections(xw);
+- while (j != 0) {
+- free((void *) result[--j]);
+- }
+ free(result);
+ result = 0;
+ break;
+diff --git a/charproc.c b/charproc.c
+index 55f0108..b07de4c 100644
+--- a/charproc.c
++++ b/charproc.c
+@@ -12548,7 +12548,6 @@ DoSetSelectedFont(Widget w,
+ Bell(xw, XkbBI_MinorError, 0);
+ } else {
+ Boolean failed = False;
+- int oldFont = TScreenOf(xw)->menu_font_number;
+ char *save = TScreenOf(xw)->SelectFontName();
+ char *val;
+ char *test;
+@@ -12593,10 +12592,6 @@ DoSetSelectedFont(Widget w,
+ failed = True;
+ }
+ if (failed) {
+- (void) xtermLoadFont(xw,
+- xtermFontName(TScreenOf(xw)->MenuFontName(oldFont)),
+- True,
+- oldFont);
+ Bell(xw, XkbBI_MinorError, 0);
+ }
+ free(used);
+@@ -12605,7 +12600,7 @@ DoSetSelectedFont(Widget w,
+ }
+ }
+
+-void
++Bool
+ FindFontSelection(XtermWidget xw, const char *atom_name, Bool justprobe)
+ {
+ TScreen *screen = TScreenOf(xw);
+@@ -12645,7 +12640,7 @@ FindFontSelection(XtermWidget xw, const char *atom_name, Bool justprobe)
+ DoSetSelectedFont, NULL,
+ XtLastTimestampProcessed(XtDisplay(xw)));
+ }
+- return;
++ return (screen->SelectFontName() != NULL) ? True : False;
+ }
+
+ Bool
+diff --git a/doublechr.c b/doublechr.c
+index a60f5bd..f7b6bae 100644
+--- a/doublechr.c
++++ b/doublechr.c
+@@ -294,7 +294,7 @@ xterm_DoubleGC(XTermDraw * params, GC old_gc, int *inxp)
+ temp.flags = (params->attr_flags & BOLD);
+ temp.warn = fwResource;
+
+- if (!xtermOpenFont(params->xw, name, &temp, False)) {
++ if (!xtermOpenFont(params->xw, name, &temp, NULL, False)) {
+ XTermDraw local = *params;
+ char *nname;
+
+@@ -303,7 +303,7 @@ xterm_DoubleGC(XTermDraw * params, GC old_gc, int *inxp)
+ nname = xtermSpecialFont(&local);
+ if (nname != 0) {
+ found = (Boolean) xtermOpenFont(params->xw, nname, &temp,
+- False);
++ NULL, False);
+ free(nname);
+ }
+ } else {
+diff --git a/fontutils.c b/fontutils.c
+index 4b0ef85..d9bfaf8 100644
+--- a/fontutils.c
++++ b/fontutils.c
+@@ -92,9 +92,9 @@
+ }
+
+ #define FREE_FNAME(field) \
+- if (fonts == 0 || myfonts.field != fonts->field) { \
+- FREE_STRING(myfonts.field); \
+- myfonts.field = 0; \
++ if (fonts == 0 || new_fnames.field != fonts->field) { \
++ FREE_STRING(new_fnames.field); \
++ new_fnames.field = 0; \
+ }
+
+ /*
+@@ -573,7 +573,7 @@ open_italic_font(XtermWidget xw, int n, FontNameProperties *fp, XTermFonts * dat
+ if ((name = italic_font_name(fp, slant[pass])) != 0) {
+ TRACE(("open_italic_font %s %s\n",
+ whichFontEnum((VTFontEnum) n), name));
+- if (xtermOpenFont(xw, name, data, False)) {
++ if (xtermOpenFont(xw, name, data, NULL, False)) {
+ result = (data->fs != 0);
+ #if OPT_REPORT_FONTS
+ if (resource.reportFonts) {
+@@ -1006,13 +1006,14 @@ cannotFont(XtermWidget xw, const char *who, const char *tag, const char *name)
+ }
+
+ /*
+- * Open the given font and verify that it is non-empty. Return a null on
++ * Open the given font and verify that it is non-empty. Return false on
+ * failure.
+ */
+ Bool
+ xtermOpenFont(XtermWidget xw,
+ const char *name,
+ XTermFonts * result,
++ XTermFonts * current,
+ Bool force)
+ {
+ Bool code = False;
+@@ -1020,7 +1021,12 @@ xtermOpenFont(XtermWidget xw,
+
+ TRACE(("xtermOpenFont %d:%d '%s'\n",
+ result->warn, xw->misc.fontWarnings, NonNull(name)));
++
+ if (!IsEmpty(name)) {
++ Bool existing = (current != NULL
++ && current->fs != NULL
++ && current->fn != NULL);
++
+ if ((result->fs = XLoadQueryFont(screen->display, name)) != 0) {
+ code = True;
+ if (EmptyFont(result->fs)) {
+@@ -1039,9 +1045,13 @@ xtermOpenFont(XtermWidget xw,
+ } else {
+ TRACE(("xtermOpenFont: cannot load font '%s'\n", name));
+ }
+- if (force) {
++ if (existing) {
++ TRACE(("...continue using font '%s'\n", current->fn));
++ result->fn = x_strdup(current->fn);
++ result->fs = current->fs;
++ } else if (force) {
+ NoFontWarning(result);
+- code = xtermOpenFont(xw, DEFFONT, result, True);
++ code = xtermOpenFont(xw, DEFFONT, result, NULL, True);
+ }
+ }
+ }
+@@ -1289,6 +1299,7 @@ static Bool
+ loadNormFP(XtermWidget xw,
+ char **nameOutP,
+ XTermFonts * infoOut,
++ XTermFonts * current,
+ int fontnum)
+ {
+ Bool status = True;
+@@ -1298,7 +1309,7 @@ loadNormFP(XtermWidget xw,
+ if (!xtermOpenFont(xw,
+ *nameOutP,
+ infoOut,
+- (fontnum == fontMenu_default))) {
++ current, (fontnum == fontMenu_default))) {
+ /*
+ * If we are opening the default font, and it happens to be missing,
+ * force that to the compiled-in default font, e.g., "fixed". If we
+@@ -1333,10 +1344,10 @@ loadBoldFP(XtermWidget xw,
+ if (fp != 0) {
+ NoFontWarning(infoOut);
+ *nameOutP = bold_font_name(fp, fp->average_width);
+- if (!xtermOpenFont(xw, *nameOutP, infoOut, False)) {
++ if (!xtermOpenFont(xw, *nameOutP, infoOut, NULL, False)) {
+ free(*nameOutP);
+ *nameOutP = bold_font_name(fp, -1);
+- xtermOpenFont(xw, *nameOutP, infoOut, False);
++ xtermOpenFont(xw, *nameOutP, infoOut, NULL, False);
+ }
+ TRACE(("...derived bold '%s'\n", NonNull(*nameOutP)));
+ }
+@@ -1354,7 +1365,7 @@ loadBoldFP(XtermWidget xw,
+ TRACE(("...did not get a matching bold font\n"));
+ }
+ free(normal);
+- } else if (!xtermOpenFont(xw, *nameOutP, infoOut, False)) {
++ } else if (!xtermOpenFont(xw, *nameOutP, infoOut, NULL, False)) {
+ xtermCopyFontInfo(infoOut, infoRef);
+ TRACE(("...cannot load bold font '%s'\n", NonNull(*nameOutP)));
+ } else {
+@@ -1408,7 +1419,7 @@ loadWideFP(XtermWidget xw,
+ }
+
+ if (check_fontname(*nameOutP)) {
+- if (xtermOpenFont(xw, *nameOutP, infoOut, False)
++ if (xtermOpenFont(xw, *nameOutP, infoOut, NULL, False)
+ && is_derived_font_name(*nameOutP)
+ && EmptyFont(infoOut->fs)) {
+ xtermCloseFont2(xw, infoOut - fWide, fWide);
+@@ -1452,7 +1463,7 @@ loadWBoldFP(XtermWidget xw,
+
+ if (check_fontname(*nameOutP)) {
+
+- if (xtermOpenFont(xw, *nameOutP, infoOut, False)
++ if (xtermOpenFont(xw, *nameOutP, infoOut, NULL, False)
+ && is_derived_font_name(*nameOutP)
+ && !compatibleWideCounts(wideInfoRef->fs, infoOut->fs)) {
+ xtermCloseFont2(xw, infoOut - fWBold, fWBold);
+@@ -1505,6 +1516,10 @@ loadWBoldFP(XtermWidget xw,
+ }
+ #endif
+
++/*
++ * Load a given bitmap font, along with the bold/wide variants.
++ * Returns nonzero on success.
++ */
+ int
+ xtermLoadFont(XtermWidget xw,
+ const VTFontNames * fonts,
+@@ -1514,33 +1529,37 @@ xtermLoadFont(XtermWidget xw,
+ TScreen *screen = TScreenOf(xw);
+ VTwin *win = WhichVWin(screen);
+
+- VTFontNames myfonts;
+- XTermFonts fnts[fMAX];
++ VTFontNames new_fnames;
++ XTermFonts new_fonts[fMAX];
++ XTermFonts old_fonts[fMAX];
+ char *tmpname = NULL;
+ Boolean proportional = False;
++ Boolean recovered;
++ int code = 0;
+
+- memset(&myfonts, 0, sizeof(myfonts));
+- memset(fnts, 0, sizeof(fnts));
++ memset(&new_fnames, 0, sizeof(new_fnames));
++ memset(new_fonts, 0, sizeof(new_fonts));
++ memcpy(&old_fonts, screen->fnts, sizeof(old_fonts));
+
+ if (fonts != 0)
+- myfonts = *fonts;
+- if (!check_fontname(myfonts.f_n))
+- return 0;
++ new_fnames = *fonts;
++ if (!check_fontname(new_fnames.f_n))
++ return code;
+
+ if (fontnum == fontMenu_fontescape
+- && myfonts.f_n != screen->MenuFontName(fontnum)) {
+- if ((tmpname = x_strdup(myfonts.f_n)) == 0)
+- return 0;
++ && new_fnames.f_n != screen->MenuFontName(fontnum)) {
++ if ((tmpname = x_strdup(new_fnames.f_n)) == 0)
++ return code;
+ }
+
+- TRACE(("Begin Cgs - xtermLoadFont(%s)\n", myfonts.f_n));
++ TRACE(("Begin Cgs - xtermLoadFont(%s)\n", new_fnames.f_n));
+ releaseWindowGCs(xw, win);
+
+ #define DbgResource(name, field, index) \
+ TRACE(("xtermLoadFont #%d "name" %s%s\n", \
+ fontnum, \
+- (fnts[index].warn == fwResource) ? "*" : " ", \
+- NonNull(myfonts.field)))
++ (new_fonts[index].warn == fwResource) ? "*" : " ", \
++ NonNull(new_fnames.field)))
+ DbgResource("normal", f_n, fNorm);
+ DbgResource("bold ", f_b, fBold);
+ #if OPT_WIDE_CHARS
+@@ -1549,16 +1568,17 @@ xtermLoadFont(XtermWidget xw,
+ #endif
+
+ if (!loadNormFP(xw,
+- &myfonts.f_n,
+- &fnts[fNorm],
++ &new_fnames.f_n,
++ &new_fonts[fNorm],
++ &old_fonts[fNorm],
+ fontnum))
+ goto bad;
+
+ if (!loadBoldFP(xw,
+- &myfonts.f_b,
+- &fnts[fBold],
+- myfonts.f_n,
+- &fnts[fNorm],
++ &new_fnames.f_b,
++ &new_fonts[fBold],
++ new_fnames.f_n,
++ &new_fonts[fNorm],
+ fontnum))
+ goto bad;
+
+@@ -1570,20 +1590,20 @@ xtermLoadFont(XtermWidget xw,
+ if_OPT_WIDE_CHARS(screen, {
+
+ if (!loadWideFP(xw,
+- &myfonts.f_w,
+- &fnts[fWide],
+- myfonts.f_n,
+- &fnts[fNorm],
++ &new_fnames.f_w,
++ &new_fonts[fWide],
++ new_fnames.f_n,
++ &new_fonts[fNorm],
+ fontnum))
+ goto bad;
+
+ if (!loadWBoldFP(xw,
+- &myfonts.f_wb,
+- &fnts[fWBold],
+- myfonts.f_w,
+- &fnts[fWide],
+- myfonts.f_b,
+- &fnts[fBold],
++ &new_fnames.f_wb,
++ &new_fonts[fWBold],
++ new_fnames.f_w,
++ &new_fonts[fWide],
++ new_fnames.f_b,
++ &new_fonts[fBold],
+ fontnum))
+ goto bad;
+
+@@ -1593,30 +1613,30 @@ xtermLoadFont(XtermWidget xw,
+ * Normal/bold fonts should be the same width. Also, the min/max
+ * values should be the same.
+ */
+- if (fnts[fNorm].fs != 0
+- && fnts[fBold].fs != 0
+- && (!is_fixed_font(fnts[fNorm].fs)
+- || !is_fixed_font(fnts[fBold].fs)
+- || differing_widths(fnts[fNorm].fs, fnts[fBold].fs))) {
++ if (new_fonts[fNorm].fs != 0
++ && new_fonts[fBold].fs != 0
++ && (!is_fixed_font(new_fonts[fNorm].fs)
++ || !is_fixed_font(new_fonts[fBold].fs)
++ || differing_widths(new_fonts[fNorm].fs, new_fonts[fBold].fs))) {
+ TRACE(("Proportional font! normal %d/%d, bold %d/%d\n",
+- fnts[fNorm].fs->min_bounds.width,
+- fnts[fNorm].fs->max_bounds.width,
+- fnts[fBold].fs->min_bounds.width,
+- fnts[fBold].fs->max_bounds.width));
++ new_fonts[fNorm].fs->min_bounds.width,
++ new_fonts[fNorm].fs->max_bounds.width,
++ new_fonts[fBold].fs->min_bounds.width,
++ new_fonts[fBold].fs->max_bounds.width));
+ proportional = True;
+ }
+
+ if_OPT_WIDE_CHARS(screen, {
+- if (fnts[fWide].fs != 0
+- && fnts[fWBold].fs != 0
+- && (!is_fixed_font(fnts[fWide].fs)
+- || !is_fixed_font(fnts[fWBold].fs)
+- || differing_widths(fnts[fWide].fs, fnts[fWBold].fs))) {
++ if (new_fonts[fWide].fs != 0
++ && new_fonts[fWBold].fs != 0
++ && (!is_fixed_font(new_fonts[fWide].fs)
++ || !is_fixed_font(new_fonts[fWBold].fs)
++ || differing_widths(new_fonts[fWide].fs, new_fonts[fWBold].fs))) {
+ TRACE(("Proportional font! wide %d/%d, wide bold %d/%d\n",
+- fnts[fWide].fs->min_bounds.width,
+- fnts[fWide].fs->max_bounds.width,
+- fnts[fWBold].fs->min_bounds.width,
+- fnts[fWBold].fs->max_bounds.width));
++ new_fonts[fWide].fs->min_bounds.width,
++ new_fonts[fWide].fs->max_bounds.width,
++ new_fonts[fWBold].fs->min_bounds.width,
++ new_fonts[fWBold].fs->max_bounds.width));
+ proportional = True;
+ }
+ });
+@@ -1635,13 +1655,13 @@ xtermLoadFont(XtermWidget xw,
+ screen->ifnts_ok = False;
+ #endif
+
+- xtermCopyFontInfo(GetNormalFont(screen, fNorm), &fnts[fNorm]);
+- xtermCopyFontInfo(GetNormalFont(screen, fBold), &fnts[fBold]);
++ xtermCopyFontInfo(GetNormalFont(screen, fNorm), &new_fonts[fNorm]);
++ xtermCopyFontInfo(GetNormalFont(screen, fBold), &new_fonts[fBold]);
+ #if OPT_WIDE_CHARS
+- xtermCopyFontInfo(GetNormalFont(screen, fWide), &fnts[fWide]);
+- if (fnts[fWBold].fs == NULL)
+- xtermCopyFontInfo(GetNormalFont(screen, fWide), &fnts[fWide]);
+- xtermCopyFontInfo(GetNormalFont(screen, fWBold), &fnts[fWBold]);
++ xtermCopyFontInfo(GetNormalFont(screen, fWide), &new_fonts[fWide]);
++ if (new_fonts[fWBold].fs == NULL)
++ xtermCopyFontInfo(GetNormalFont(screen, fWide), &new_fonts[fWide]);
++ xtermCopyFontInfo(GetNormalFont(screen, fWBold), &new_fonts[fWBold]);
+ #endif
+
+ xtermUpdateFontGCs(xw, getNormalFont);
+@@ -1672,7 +1692,7 @@ xtermLoadFont(XtermWidget xw,
+ unsigned ch;
+
+ #if OPT_TRACE
+-#define TRACE_MISS(index) show_font_misses(#index, &fnts[index])
++#define TRACE_MISS(index) show_font_misses(#index, &new_fonts[index])
+ TRACE_MISS(fNorm);
+ TRACE_MISS(fBold);
+ #if OPT_WIDE_CHARS
+@@ -1689,8 +1709,8 @@ xtermLoadFont(XtermWidget xw,
+ if ((n != UCS_REPL)
+ && (n != ch)
+ && (screen->fnt_boxes & 2)) {
+- if (xtermMissingChar(n, &fnts[fNorm]) ||
+- xtermMissingChar(n, &fnts[fBold])) {
++ if (xtermMissingChar(n, &new_fonts[fNorm]) ||
++ xtermMissingChar(n, &new_fonts[fBold])) {
+ UIntClr(screen->fnt_boxes, 2);
+ TRACE(("missing graphics character #%d, U+%04X\n",
+ ch, n));
+@@ -1702,12 +1722,12 @@ xtermLoadFont(XtermWidget xw,
+ #endif
+
+ for (ch = 1; ch < 32; ch++) {
+- if (xtermMissingChar(ch, &fnts[fNorm])) {
++ if (xtermMissingChar(ch, &new_fonts[fNorm])) {
+ TRACE(("missing normal char #%d\n", ch));
+ UIntClr(screen->fnt_boxes, 1);
+ break;
+ }
+- if (xtermMissingChar(ch, &fnts[fBold])) {
++ if (xtermMissingChar(ch, &new_fonts[fBold])) {
+ TRACE(("missing bold char #%d\n", ch));
+ UIntClr(screen->fnt_boxes, 1);
+ break;
+@@ -1724,8 +1744,8 @@ xtermLoadFont(XtermWidget xw,
+ screen->enbolden = screen->bold_mode;
+ } else {
+ screen->enbolden = screen->bold_mode
+- && ((fnts[fNorm].fs == fnts[fBold].fs)
+- || same_font_name(myfonts.f_n, myfonts.f_b));
++ && ((new_fonts[fNorm].fs == new_fonts[fBold].fs)
++ || same_font_name(new_fnames.f_n, new_fnames.f_b));
+ }
+ TRACE(("Will %suse 1-pixel offset/overstrike to simulate bold\n",
+ screen->enbolden ? "" : "not "));
+@@ -1741,7 +1761,7 @@ xtermLoadFont(XtermWidget xw,
+ update_font_escape();
+ }
+ #if OPT_SHIFT_FONTS
+- screen->menu_font_sizes[fontnum] = FontSize(fnts[fNorm].fs);
++ screen->menu_font_sizes[fontnum] = FontSize(new_fonts[fNorm].fs);
+ #endif
+ }
+ set_cursor_gcs(xw);
+@@ -1756,20 +1776,21 @@ xtermLoadFont(XtermWidget xw,
+ FREE_FNAME(f_w);
+ FREE_FNAME(f_wb);
+ #endif
+- if (fnts[fNorm].fn == fnts[fBold].fn) {
+- free(fnts[fNorm].fn);
++ if (new_fonts[fNorm].fn == new_fonts[fBold].fn) {
++ free(new_fonts[fNorm].fn);
+ } else {
+- free(fnts[fNorm].fn);
+- free(fnts[fBold].fn);
++ free(new_fonts[fNorm].fn);
++ free(new_fonts[fBold].fn);
+ }
+ #if OPT_WIDE_CHARS
+- free(fnts[fWide].fn);
+- free(fnts[fWBold].fn);
++ free(new_fonts[fWide].fn);
++ free(new_fonts[fWBold].fn);
+ #endif
+ xtermSetWinSize(xw);
+ return 1;
+
+ bad:
++ recovered = False;
+ if (tmpname)
+ free(tmpname);
+
+@@ -1780,15 +1801,15 @@ xtermLoadFont(XtermWidget xw,
+ SetItemSensitivity(fontMenuEntries[fontnum].widget, True);
+ #endif
+ Bell(xw, XkbBI_MinorError, 0);
+- myfonts.f_n = screen->MenuFontName(old_fontnum);
+- return xtermLoadFont(xw, &myfonts, doresize, old_fontnum);
+- } else if (x_strcasecmp(myfonts.f_n, DEFFONT)) {
+- int code;
+-
+- myfonts.f_n = x_strdup(DEFFONT);
+- TRACE(("...recovering for TrueType fonts\n"));
+- code = xtermLoadFont(xw, &myfonts, doresize, fontnum);
+- if (code) {
++ new_fnames.f_n = screen->MenuFontName(old_fontnum);
++ if (xtermLoadFont(xw, &new_fnames, doresize, old_fontnum))
++ recovered = True;
++ } else if (x_strcasecmp(new_fnames.f_n, DEFFONT)
++ && x_strcasecmp(new_fnames.f_n, old_fonts[fNorm].fn)) {
++ new_fnames.f_n = x_strdup(old_fonts[fNorm].fn);
++ TRACE(("...recovering from failed font-load\n"));
++ if (xtermLoadFont(xw, &new_fnames, doresize, fontnum)) {
++ recovered = True;
+ if (fontnum != fontMenu_fontsel) {
+ SetItemSensitivity(fontMenuEntries[fontnum].widget,
+ UsingRenderFont(xw));
+@@ -1797,15 +1818,15 @@ xtermLoadFont(XtermWidget xw,
+ FontHeight(screen),
+ FontWidth(screen)));
+ }
+- return code;
+ }
+ #endif
+-
+- releaseWindowGCs(xw, win);
+-
+- xtermCloseFonts(xw, fnts);
+- TRACE(("Fail Cgs - xtermLoadFont\n"));
+- return 0;
++ if (!recovered) {
++ releaseWindowGCs(xw, win);
++ xtermCloseFonts(xw, new_fonts);
++ TRACE(("Fail Cgs - xtermLoadFont\n"));
++ code = 0;
++ }
++ return code;
+ }
+
+ #if OPT_WIDE_ATTRS
+@@ -1853,7 +1874,7 @@ xtermLoadItalics(XtermWidget xw)
+ } else {
+ xtermOpenFont(xw,
+ getNormalFont(screen, n)->fn,
+- data, False);
++ data, NULL, False);
+ }
+ }
+ }
+@@ -4317,7 +4338,7 @@ lookupOneFontSize(XtermWidget xw, int fontnum)
+
+ memset(&fnt, 0, sizeof(fnt));
+ screen->menu_font_sizes[fontnum] = -1;
+- if (xtermOpenFont(xw, screen->MenuFontName(fontnum), &fnt, True)) {
++ if (xtermOpenFont(xw, screen->MenuFontName(fontnum), &fnt, NULL, True)) {
+ if (fontnum <= fontMenu_lastBuiltin
+ || strcmp(fnt.fn, DEFFONT)) {
+ screen->menu_font_sizes[fontnum] = FontSize(fnt.fs);
+@@ -4722,13 +4743,14 @@ HandleSetFont(Widget w GCC_UNUSED,
+ }
+ }
+
+-void
++Bool
+ SetVTFont(XtermWidget xw,
+ int which,
+ Bool doresize,
+ const VTFontNames * fonts)
+ {
+ TScreen *screen = TScreenOf(xw);
++ Bool result = False;
+
+ TRACE(("SetVTFont(which=%d, f_n=%s, f_b=%s)\n", which,
+ (fonts && fonts->f_n) ? fonts->f_n : "<null>",
+@@ -4737,34 +4759,31 @@ SetVTFont(XtermWidget xw,
+ if (IsIcon(screen)) {
+ Bell(xw, XkbBI_MinorError, 0);
+ } else if (which >= 0 && which < NMENUFONTS) {
+- VTFontNames myfonts;
++ VTFontNames new_fnames;
+
+- memset(&myfonts, 0, sizeof(myfonts));
++ memset(&new_fnames, 0, sizeof(new_fnames));
+ if (fonts != 0)
+- myfonts = *fonts;
++ new_fnames = *fonts;
+
+ if (which == fontMenu_fontsel) { /* go get the selection */
+- FindFontSelection(xw, myfonts.f_n, False);
++ result = FindFontSelection(xw, new_fnames.f_n, False);
+ } else {
+- int oldFont = screen->menu_font_number;
+-
+ #define USE_CACHED(field, name) \
+- if (myfonts.field == 0) { \
+- myfonts.field = x_strdup(screen->menu_font_names[which][name]); \
+- TRACE(("set myfonts." #field " from menu_font_names[%d][" #name "] %s\n", \
+- which, NonNull(myfonts.field))); \
++ if (new_fnames.field == NULL) { \
++ new_fnames.field = x_strdup(screen->menu_font_names[which][name]); \
++ TRACE(("set new_fnames." #field " from menu_font_names[%d][" #name "] %s\n", \
++ which, NonNull(new_fnames.field))); \
+ } else { \
+- TRACE(("set myfonts." #field " reused\n")); \
++ TRACE(("set new_fnames." #field " reused\n")); \
+ }
+ #define SAVE_FNAME(field, name) \
+- if (myfonts.field != 0) { \
+- if (screen->menu_font_names[which][name] == 0 \
+- || strcmp(screen->menu_font_names[which][name], myfonts.field)) { \
+- TRACE(("updating menu_font_names[%d][" #name "] to %s\n", \
+- which, myfonts.field)); \
+- FREE_STRING(screen->menu_font_names[which][name]); \
+- screen->menu_font_names[which][name] = x_strdup(myfonts.field); \
+- } \
++ if (new_fnames.field != NULL \
++ && (screen->menu_font_names[which][name] == NULL \
++ || strcmp(screen->menu_font_names[which][name], new_fnames.field))) { \
++ TRACE(("updating menu_font_names[%d][" #name "] to \"%s\"\n", \
++ which, new_fnames.field)); \
++ FREE_STRING(screen->menu_font_names[which][name]); \
++ screen->menu_font_names[which][name] = x_strdup(new_fnames.field); \
+ }
+
+ USE_CACHED(f_n, fNorm);
+@@ -4774,7 +4793,7 @@ SetVTFont(XtermWidget xw,
+ USE_CACHED(f_wb, fWBold);
+ #endif
+ if (xtermLoadFont(xw,
+- &myfonts,
++ &new_fnames,
+ doresize, which)) {
+ /*
+ * If successful, save the data so that a subsequent query via
+@@ -4786,10 +4805,8 @@ SetVTFont(XtermWidget xw,
+ SAVE_FNAME(f_w, fWide);
+ SAVE_FNAME(f_wb, fWBold);
+ #endif
++ result = True;
+ } else {
+- (void) xtermLoadFont(xw,
+- xtermFontName(screen->MenuFontName(oldFont)),
+- doresize, oldFont);
+ Bell(xw, XkbBI_MinorError, 0);
+ }
+ FREE_FNAME(f_n);
+@@ -4802,7 +4819,8 @@ SetVTFont(XtermWidget xw,
+ } else {
+ Bell(xw, XkbBI_MinorError, 0);
+ }
+- return;
++ TRACE(("...SetVTFont: %d\n", result));
++ return result;
+ }
+
+ #if OPT_RENDERFONT
+diff --git a/fontutils.h b/fontutils.h
+index 9d530c5..ceaf44a 100644
+--- a/fontutils.h
++++ b/fontutils.h
+@@ -37,7 +37,7 @@
+ /* *INDENT-OFF* */
+
+ extern Bool xtermLoadDefaultFonts (XtermWidget /* xw */);
+-extern Bool xtermOpenFont (XtermWidget /* xw */, const char */* name */, XTermFonts * /* result */, Bool /* force */);
++extern Bool xtermOpenFont (XtermWidget /* xw */, const char */* name */, XTermFonts * /* result */, XTermFonts * /* current */, Bool /* force */);
+ extern XTermFonts * getDoubleFont (TScreen * /* screen */, int /* which */);
+ extern XTermFonts * getItalicFont (TScreen * /* screen */, int /* which */);
+ extern XTermFonts * getNormalFont (TScreen * /* screen */, int /* which */);
+@@ -50,7 +50,7 @@ extern int lookupRelativeFontSize (XtermWidget /* xw */, int /* old */, int /* r
+ extern int xtermGetFont (const char * /* param */);
+ extern int xtermLoadFont (XtermWidget /* xw */, const VTFontNames */* fonts */, Bool /* doresize */, int /* fontnum */);
+ extern void HandleSetFont PROTO_XT_ACTIONS_ARGS;
+-extern void SetVTFont (XtermWidget /* xw */, int /* i */, Bool /* doresize */, const VTFontNames */* fonts */);
++extern Bool SetVTFont (XtermWidget /* xw */, int /* i */, Bool /* doresize */, const VTFontNames */* fonts */);
+ extern void allocFontList (XtermWidget /* xw */, const char * /* name */, XtermFontNames * /* target */, VTFontEnum /* which */, const char * /* source */, Bool /* ttf */);
+ extern void copyFontList (char *** /* targetp */, char ** /* source */);
+ extern void initFontLists (XtermWidget /* xw */);
+diff --git a/misc.c b/misc.c
+index cc323f8..6c5e938 100644
+--- a/misc.c
++++ b/misc.c
+@@ -3787,9 +3787,9 @@ ChangeFontRequest(XtermWidget xw, String buf)
+ {
+ memset(&fonts, 0, sizeof(fonts));
+ fonts.f_n = name;
+- SetVTFont(xw, num, True, &fonts);
+- if (num == screen->menu_font_number &&
+- num != fontMenu_fontescape) {
++ if (SetVTFont(xw, num, True, &fonts)
++ && num == screen->menu_font_number
++ && num != fontMenu_fontescape) {
+ screen->EscapeFontName() = x_strdup(name);
+ }
+ }
+@@ -6237,7 +6237,6 @@ xtermSetenv(const char *var, const char *value)
+
+ found = envindex;
+ environ[found + 1] = NULL;
+- environ = environ;
+ }
+
+ environ[found] = TextAlloc(1 + len + strlen(value));
+diff --git a/screen.c b/screen.c
+index 690e3e2..f84254f 100644
+--- a/screen.c
++++ b/screen.c
+@@ -1497,7 +1497,7 @@ ScrnRefresh(XtermWidget xw,
+ screen->topline, toprow, leftcol,
+ nrows, ncols,
+ force ? " force" : ""));
+-
++ (void) recurse;
+ ++recurse;
+
+ if (screen->cursorp.col >= leftcol
+diff --git a/xterm.h b/xterm.h
+index ec70e43..aa71f96 100644
+--- a/xterm.h
++++ b/xterm.h
+@@ -967,7 +967,7 @@ extern Bool CheckBufPtrs (TScreen * /* screen */);
+ extern Bool set_cursor_gcs (XtermWidget /* xw */);
+ extern char * vt100ResourceToString (XtermWidget /* xw */, const char * /* name */);
+ extern int VTInit (XtermWidget /* xw */);
+-extern void FindFontSelection (XtermWidget /* xw */, const char * /* atom_name */, Bool /* justprobe */);
++extern Bool FindFontSelection (XtermWidget /* xw */, const char * /* atom_name */, Bool /* justprobe */);
+ extern void HideCursor (void);
+ extern void RestartBlinking(XtermWidget /* xw */);
+ extern void ShowCursor (void);
+diff --git a/xterm.log.html b/xterm.log.html
+index 47d590b..e27dc31 100644
+--- a/xterm.log.html
++++ b/xterm.log.html
+@@ -991,6 +991,12 @@
+ 2020/02/01</a></h1>
+
+ <ul>
++ <li>improve error-recovery when setting a bitmap font for the
++ VT100 window, e.g., in case <em>OSC&nbsp;50</em> failed,
++ restoring the most recent valid font so that a subsequent
++ <em>OSC&nbsp;50</em> reports this correctly (report by David
++ Leadbeater).</li>
++
+ <li>amend change in <a href="#xterm_352">patch #352</a> for
+ button-events to fix a case where some followup events were not
+ processed soon enough (report/patch by Jimmy Aguilar
+--
+2.24.4
+
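The xterm patch above makes xtermLoadFont keep a copy of the currently loaded bitmap fonts and fall back to them when a newly requested font fails to load, instead of leaving the VT100 window without a usable font. A minimal, self-contained C sketch of that fallback pattern follows; the names (FontState, try_load, select_font) are hypothetical and are not xterm's actual API.

/*
 * Sketch of the fallback pattern, assuming a pretend loader: remember the
 * font that is currently in use, try the requested one, and if that fails
 * fall back to the previous (known-good) name.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    char *current;              /* name of the font that is in use */
} FontState;

/* Pretend loader: accepts any name that is not "broken". */
static bool try_load(const char *name)
{
    return name != NULL && strcmp(name, "broken") != 0;
}

/* Returns true if some usable font is loaded afterwards. */
static bool select_font(FontState *st, const char *requested)
{
    if (try_load(requested)) {
        free(st->current);
        st->current = strdup(requested);
        return true;
    }
    /* Recovery: retry the most recent valid font, if it differs. */
    if (st->current != NULL && strcmp(st->current, requested) != 0
        && try_load(st->current)) {
        fprintf(stderr, "falling back to %s\n", st->current);
        return true;
    }
    return false;
}

int main(void)
{
    FontState st = { strdup("fixed") };
    select_font(&st, "broken");          /* fails, keeps "fixed" loaded */
    printf("in use: %s\n", st.current);  /* -> fixed */
    free(st.current);
    return 0;
}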
diff --git a/meta-openembedded/meta-oe/recipes-graphics/xorg-app/xterm_353.bb b/meta-openembedded/meta-oe/recipes-graphics/xorg-app/xterm_353.bb
index 1862b250ef..4e2b0c9d53 100644
--- a/meta-openembedded/meta-oe/recipes-graphics/xorg-app/xterm_353.bb
+++ b/meta-openembedded/meta-oe/recipes-graphics/xorg-app/xterm_353.bb
@@ -8,6 +8,7 @@ SRC_URI = "http://invisible-mirror.net/archives/${BPN}/${BP}.tgz \
file://0001-Add-configure-time-check-for-setsid.patch \
file://CVE-2021-27135.patch \
file://CVE-2022-24130.patch \
+ file://CVE-2022-45063.patch \
"
SRC_URI[md5sum] = "247c30ebfa44623f3a2d100e0cae5c7f"
SRC_URI[sha256sum] = "e521d3ee9def61f5d5c911afc74dd5c3a56ce147c7071c74023ea24cac9bb768"
diff --git a/meta-openembedded/meta-oe/recipes-support/nss/nss/CVE-2020-25648.patch b/meta-openembedded/meta-oe/recipes-support/nss/nss/CVE-2020-25648.patch
new file mode 100644
index 0000000000..f30d4d32cd
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/nss/nss/CVE-2020-25648.patch
@@ -0,0 +1,163 @@
+# HG changeset patch
+# User Daiki Ueno <dueno@redhat.com>
+# Date 1602524521 0
+# Node ID 57bbefa793232586d27cee83e74411171e128361
+# Parent 6e3bc17f05086854ffd2b06f7fae9371f7a0c174
+Bug 1641480, TLS 1.3: tighten CCS handling in compatibility mode, r=mt
+
+This makes the server reject CCS when the client doesn't indicate the
+use of the middlebox compatibility mode with a non-empty
+ClientHello.legacy_session_id, or it sends multiple CCS in a row.
+
+Differential Revision: https://phabricator.services.mozilla.com/D79994
+
+Upstream-Status: Backport
+CVE: CVE-2020-25648
+Reference to upstream patch: https://hg.mozilla.org/projects/nss/rev/57bbefa793232586d27cee83e74411171e128361
+Signed-off-by: Mathieu Dubois-Briand <mbriand@witekio.com>
+
+diff --color -Naur nss-3.51.1_old/nss/gtests/ssl_gtest/ssl_tls13compat_unittest.cc nss-3.51.1/nss/gtests/ssl_gtest/ssl_tls13compat_unittest.cc
+--- nss-3.51.1_old/nss/gtests/ssl_gtest/ssl_tls13compat_unittest.cc 2022-12-08 16:05:47.447142660 +0100
++++ nss-3.51.1/nss/gtests/ssl_gtest/ssl_tls13compat_unittest.cc 2022-12-08 16:12:32.645932052 +0100
+@@ -348,6 +348,85 @@
+ client_->CheckErrorCode(SSL_ERROR_HANDSHAKE_UNEXPECTED_ALERT);
+ }
+
++// The server rejects a ChangeCipherSpec if the client advertises an
++// empty session ID.
++TEST_F(TlsConnectStreamTls13, ChangeCipherSpecAfterClientHelloEmptySid) {
++ EnsureTlsSetup();
++ ConfigureVersion(SSL_LIBRARY_VERSION_TLS_1_3);
++
++ StartConnect();
++ client_->Handshake(); // Send ClientHello
++ client_->SendDirect(DataBuffer(kCannedCcs, sizeof(kCannedCcs))); // Send CCS
++
++ server_->ExpectSendAlert(kTlsAlertUnexpectedMessage);
++ server_->Handshake(); // Consume ClientHello and CCS
++ server_->CheckErrorCode(SSL_ERROR_RX_MALFORMED_CHANGE_CIPHER);
++}
++
++// The server rejects multiple ChangeCipherSpec even if the client
++// indicates compatibility mode with non-empty session ID.
++TEST_F(Tls13CompatTest, ChangeCipherSpecAfterClientHelloTwice) {
++ EnsureTlsSetup();
++ ConfigureVersion(SSL_LIBRARY_VERSION_TLS_1_3);
++ EnableCompatMode();
++
++ StartConnect();
++ client_->Handshake(); // Send ClientHello
++ // Send CCS twice in a row
++ client_->SendDirect(DataBuffer(kCannedCcs, sizeof(kCannedCcs)));
++ client_->SendDirect(DataBuffer(kCannedCcs, sizeof(kCannedCcs)));
++
++ server_->ExpectSendAlert(kTlsAlertUnexpectedMessage);
++ server_->Handshake(); // Consume ClientHello and CCS.
++ server_->CheckErrorCode(SSL_ERROR_RX_MALFORMED_CHANGE_CIPHER);
++}
++
++// The client rejects a ChangeCipherSpec if it advertises an empty
++// session ID.
++TEST_F(TlsConnectStreamTls13, ChangeCipherSpecAfterServerHelloEmptySid) {
++ EnsureTlsSetup();
++ ConfigureVersion(SSL_LIBRARY_VERSION_TLS_1_3);
++
++ // To replace Finished with a CCS below
++ auto filter = MakeTlsFilter<TlsHandshakeDropper>(server_);
++ filter->SetHandshakeTypes({kTlsHandshakeFinished});
++ filter->EnableDecryption();
++
++ StartConnect();
++ client_->Handshake(); // Send ClientHello
++ server_->Handshake(); // Consume ClientHello, and
++ // send ServerHello..CertificateVerify
++ // Send CCS
++ server_->SendDirect(DataBuffer(kCannedCcs, sizeof(kCannedCcs)));
++ client_->ExpectSendAlert(kTlsAlertUnexpectedMessage);
++ client_->Handshake(); // Consume ClientHello and CCS
++ client_->CheckErrorCode(SSL_ERROR_RX_MALFORMED_CHANGE_CIPHER);
++}
++
++// The client rejects multiple ChangeCipherSpec in a row even if the
++// client indicates compatibility mode with non-empty session ID.
++TEST_F(Tls13CompatTest, ChangeCipherSpecAfterServerHelloTwice) {
++ EnsureTlsSetup();
++ ConfigureVersion(SSL_LIBRARY_VERSION_TLS_1_3);
++ EnableCompatMode();
++
++ // To replace Finished with a CCS below
++ auto filter = MakeTlsFilter<TlsHandshakeDropper>(server_);
++ filter->SetHandshakeTypes({kTlsHandshakeFinished});
++ filter->EnableDecryption();
++
++ StartConnect();
++ client_->Handshake(); // Send ClientHello
++ server_->Handshake(); // Consume ClientHello, and
++ // send ServerHello..CertificateVerify
++ // the ServerHello is followed by CCS
++ // Send another CCS
++ server_->SendDirect(DataBuffer(kCannedCcs, sizeof(kCannedCcs)));
++ client_->ExpectSendAlert(kTlsAlertUnexpectedMessage);
++ client_->Handshake(); // Consume ClientHello and CCS
++ client_->CheckErrorCode(SSL_ERROR_RX_MALFORMED_CHANGE_CIPHER);
++}
++
+ // If we negotiate 1.2, we abort.
+ TEST_F(TlsConnectStreamTls13, ChangeCipherSpecBeforeClientHello12) {
+ EnsureTlsSetup();
+diff --color -Naur nss-3.51.1_old/nss/lib/ssl/ssl3con.c nss-3.51.1/nss/lib/ssl/ssl3con.c
+--- nss-3.51.1_old/nss/lib/ssl/ssl3con.c 2022-12-08 16:05:47.471142833 +0100
++++ nss-3.51.1/nss/lib/ssl/ssl3con.c 2022-12-08 16:12:42.037994262 +0100
+@@ -6711,7 +6711,11 @@
+
+ /* TLS 1.3: We sent a session ID. The server's should match. */
+ if (!IS_DTLS(ss) && (sentRealSid || sentFakeSid)) {
+- return sidMatch;
++ if (sidMatch) {
++ ss->ssl3.hs.allowCcs = PR_TRUE;
++ return PR_TRUE;
++ }
++ return PR_FALSE;
+ }
+
+ /* TLS 1.3 (no SID)/DTLS 1.3: The server shouldn't send a session ID. */
+@@ -8730,6 +8734,7 @@
+ errCode = PORT_GetError();
+ goto alert_loser;
+ }
++ ss->ssl3.hs.allowCcs = PR_TRUE;
+ }
+
+ /* TLS 1.3 requires that compression include only null. */
+@@ -13058,8 +13063,15 @@
+ ss->ssl3.hs.ws != idle_handshake &&
+ cText->buf->len == 1 &&
+ cText->buf->buf[0] == change_cipher_spec_choice) {
+- /* Ignore the CCS. */
+- return SECSuccess;
++ if (ss->ssl3.hs.allowCcs) {
++ /* Ignore the first CCS. */
++ ss->ssl3.hs.allowCcs = PR_FALSE;
++ return SECSuccess;
++ }
++
++ /* Compatibility mode is not negotiated. */
++ alert = unexpected_message;
++ PORT_SetError(SSL_ERROR_RX_MALFORMED_CHANGE_CIPHER);
+ }
+
+ if (IS_DTLS(ss) ||
+diff --color -Naur nss-3.51.1_old/nss/lib/ssl/sslimpl.h nss-3.51.1/nss/lib/ssl/sslimpl.h
+--- nss-3.51.1_old/nss/lib/ssl/sslimpl.h 2022-12-08 16:05:47.471142833 +0100
++++ nss-3.51.1/nss/lib/ssl/sslimpl.h 2022-12-08 16:12:45.106014567 +0100
+@@ -711,6 +711,10 @@
+ * or received. */
+ PRBool receivedCcs; /* A server received ChangeCipherSpec
+ * before the handshake started. */
++ PRBool allowCcs; /* A server allows ChangeCipherSpec
++ * as the middlebox compatibility mode
++ * is explicitly indicated by
++ * legacy_session_id in TLS 1.3 ClientHello. */
+ PRBool clientCertRequested; /* True if CertificateRequest received. */
+ ssl3KEADef kea_def_mutable; /* Used to hold the writable kea_def
+ * we use for TLS 1.3 */
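The NSS patch above tightens TLS 1.3 middlebox-compatibility handling: a ChangeCipherSpec record is ignored only once, and only when the peer signalled compatibility mode via a non-empty legacy_session_id; any other CCS triggers an unexpected_message alert. The C sketch below illustrates that rule in isolation; HandshakeState, handle_ccs and the enum values are illustrative, not NSS's types.

/*
 * Hedged sketch of the CCS rule enforced by the patch: a single CCS is
 * tolerated only if compatibility mode was signalled; everything else is
 * rejected with an unexpected_message alert.
 */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool compat_mode;   /* client sent a non-empty legacy_session_id */
    bool ccs_allowed;   /* at most one CCS may still be ignored      */
} HandshakeState;

typedef enum { CCS_IGNORE, CCS_REJECT } CcsAction;

static CcsAction handle_ccs(HandshakeState *hs)
{
    if (hs->compat_mode && hs->ccs_allowed) {
        hs->ccs_allowed = false;   /* only the first CCS is tolerated */
        return CCS_IGNORE;
    }
    return CCS_REJECT;             /* -> unexpected_message alert */
}

int main(void)
{
    HandshakeState hs = { .compat_mode = true, .ccs_allowed = true };
    printf("%d\n", handle_ccs(&hs));  /* 0: first CCS ignored */
    printf("%d\n", handle_ccs(&hs));  /* 1: second CCS rejected */
    return 0;
}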
diff --git a/meta-openembedded/meta-oe/recipes-support/nss/nss/CVE-2023-0767.patch b/meta-openembedded/meta-oe/recipes-support/nss/nss/CVE-2023-0767.patch
new file mode 100644
index 0000000000..ec3b4a092a
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/nss/nss/CVE-2023-0767.patch
@@ -0,0 +1,124 @@
+
+# HG changeset patch
+# User John M. Schanck <jschanck@mozilla.com>
+# Date 1675974326 0
+# Node ID 62f6b3e9024dd72ba3af9ce23848d7573b934f18
+# Parent 52b4b7d3d3ebdb25fbf2cf1c101bfad3721680f4
+Bug 1804640 - improve handling of unknown PKCS#12 safe bag types. r=rrelyea
+
+Differential Revision: https://phabricator.services.mozilla.com/D167443
+
+CVE: CVE-2023-0767
+Upstream-Status: Backport [https://launchpad.net/ubuntu/+archive/primary/+sourcefiles/nss/2:3.35-2ubuntu2.16/nss_3.35-2ubuntu2.16.debian.tar.xz]
+Signed-off-by: Virendra Thakur <virendra.thakur@kpit.com>
+
+diff --git a/nss/lib/pkcs12/p12d.c b/nss/lib/pkcs12/p12d.c
+--- a/nss/lib/pkcs12/p12d.c
++++ b/nss/lib/pkcs12/p12d.c
+@@ -332,41 +332,48 @@ sec_pkcs12_decoder_safe_bag_update(void
+ unsigned long len, int depth,
+ SEC_ASN1EncodingPart data_kind)
+ {
+ sec_PKCS12SafeContentsContext *safeContentsCtx =
+ (sec_PKCS12SafeContentsContext *)arg;
+ SEC_PKCS12DecoderContext *p12dcx;
+ SECStatus rv;
+
+- /* make sure that we are not skipping the current safeBag,
+- * and that there are no errors. If so, just return rather
+- * than continuing to process.
+- */
+- if (!safeContentsCtx || !safeContentsCtx->p12dcx ||
+- safeContentsCtx->p12dcx->error || safeContentsCtx->skipCurrentSafeBag) {
++ if (!safeContentsCtx || !safeContentsCtx->p12dcx || !safeContentsCtx->currentSafeBagA1Dcx) {
+ return;
+ }
+ p12dcx = safeContentsCtx->p12dcx;
+
++ /* make sure that there are no errors and we are not skipping the current safeBag */
++ if (p12dcx->error || safeContentsCtx->skipCurrentSafeBag) {
++ goto loser;
++ }
++
+ rv = SEC_ASN1DecoderUpdate(safeContentsCtx->currentSafeBagA1Dcx, data, len);
+ if (rv != SECSuccess) {
+ p12dcx->errorValue = PORT_GetError();
++ p12dcx->error = PR_TRUE;
++ goto loser;
++ }
++
++ /* The update may have set safeContentsCtx->skipCurrentSafeBag, and we
++ * may not get another opportunity to clean up the decoder context.
++ */
++ if (safeContentsCtx->skipCurrentSafeBag) {
+ goto loser;
+ }
+
+ return;
+
+ loser:
+- /* set the error, and finish the decoder context. because there
++ /* Finish the decoder context. Because there
+ * is not a way of returning an error message, it may be worth
+ * while to do a check higher up and finish any decoding contexts
+ * that are still open.
+ */
+- p12dcx->error = PR_TRUE;
+ SEC_ASN1DecoderFinish(safeContentsCtx->currentSafeBagA1Dcx);
+ safeContentsCtx->currentSafeBagA1Dcx = NULL;
+ return;
+ }
+
+ /* notify function for decoding safeBags. This function is
+ * used to filter safeBag types which are not supported,
+ * initiate the decoding of nested safe contents, and decode
+diff --git a/nss/lib/pkcs12/p12t.h b/nss/lib/pkcs12/p12t.h
+--- a/nss/lib/pkcs12/p12t.h
++++ b/nss/lib/pkcs12/p12t.h
+@@ -68,16 +68,17 @@ struct sec_PKCS12SafeBagStr {
+ /* Dependent upon the type of bag being used. */
+ union {
+ SECKEYPrivateKeyInfo *pkcs8KeyBag;
+ SECKEYEncryptedPrivateKeyInfo *pkcs8ShroudedKeyBag;
+ sec_PKCS12CertBag *certBag;
+ sec_PKCS12CRLBag *crlBag;
+ sec_PKCS12SecretBag *secretBag;
+ sec_PKCS12SafeContents *safeContents;
++ SECItem *unknownBag;
+ } safeBagContent;
+
+ sec_PKCS12Attribute **attribs;
+
+ /* used locally */
+ SECOidData *bagTypeTag;
+ PLArenaPool *arena;
+ unsigned int nAttribs;
+diff --git a/nss/lib/pkcs12/p12tmpl.c b/nss/lib/pkcs12/p12tmpl.c
+--- a/nss/lib/pkcs12/p12tmpl.c
++++ b/nss/lib/pkcs12/p12tmpl.c
+@@ -25,22 +25,22 @@ sec_pkcs12_choose_safe_bag_type(void *sr
+ if (src_or_dest == NULL) {
+ return NULL;
+ }
+
+ safeBag = (sec_PKCS12SafeBag *)src_or_dest;
+
+ oiddata = SECOID_FindOID(&safeBag->safeBagType);
+ if (oiddata == NULL) {
+- return SEC_ASN1_GET(SEC_AnyTemplate);
++ return SEC_ASN1_GET(SEC_PointerToAnyTemplate);
+ }
+
+ switch (oiddata->offset) {
+ default:
+- theTemplate = SEC_ASN1_GET(SEC_AnyTemplate);
++ theTemplate = SEC_ASN1_GET(SEC_PointerToAnyTemplate);
+ break;
+ case SEC_OID_PKCS12_V1_KEY_BAG_ID:
+ theTemplate = SEC_ASN1_GET(SECKEY_PointerToPrivateKeyInfoTemplate);
+ break;
+ case SEC_OID_PKCS12_V1_CERT_BAG_ID:
+ theTemplate = sec_PKCS12PointerToCertBagTemplate;
+ break;
+ case SEC_OID_PKCS12_V1_CRL_BAG_ID:
+
diff --git a/meta-openembedded/meta-oe/recipes-support/nss/nss_3.51.1.bb b/meta-openembedded/meta-oe/recipes-support/nss/nss_3.51.1.bb
index 8b59f7ea8f..1de2a40094 100644
--- a/meta-openembedded/meta-oe/recipes-support/nss/nss_3.51.1.bb
+++ b/meta-openembedded/meta-oe/recipes-support/nss/nss_3.51.1.bb
@@ -39,8 +39,10 @@ SRC_URI = "http://ftp.mozilla.org/pub/mozilla.org/security/nss/releases/${VERSIO
file://CVE-2020-6829_12400.patch \
file://CVE-2020-12403_1.patch \
file://CVE-2020-12403_2.patch \
+ file://CVE-2020-25648.patch \
file://CVE-2021-43527.patch \
file://CVE-2022-22747.patch \
+ file://CVE-2023-0767.patch \
"
SRC_URI[md5sum] = "6acaf1ddff69306ae30a908881c6f233"
@@ -291,5 +293,11 @@ RDEPENDS_${PN}-smime = "perl"
BBCLASSEXTEND = "native nativesdk"
+CVE_PRODUCT += "network_security_services"
+
# CVE-2006-5201 affects only Sun Solaris
CVE_CHECK_WHITELIST += "CVE-2006-5201"
+
+# CVEs CVE-2017-11695 CVE-2017-11696 CVE-2017-11697 CVE-2017-11698 only affect
+# the legacy db (libnssdbm), only compiled with --enable-legacy-db.
+CVE_CHECK_WHITELIST += "CVE-2017-11695 CVE-2017-11696 CVE-2017-11697 CVE-2017-11698"
diff --git a/meta-openembedded/meta-oe/recipes-support/open-vm-tools/open-vm-tools/0001-Properly-check-authorization-on-incoming-guestOps-re.patch b/meta-openembedded/meta-oe/recipes-support/open-vm-tools/open-vm-tools/0001-Properly-check-authorization-on-incoming-guestOps-re.patch
new file mode 100644
index 0000000000..1c6657ae9f
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/open-vm-tools/open-vm-tools/0001-Properly-check-authorization-on-incoming-guestOps-re.patch
@@ -0,0 +1,39 @@
+From d16eda269413bdb04e85c242fa28db264697c45f Mon Sep 17 00:00:00 2001
+From: John Wolfe <jwolfe@vmware.com>
+Date: Sun, 21 Aug 2022 07:56:49 -0700
+Subject: [PATCH] Properly check authorization on incoming guestOps requests.
+
+Fix public pipe request checks. Only a SessionRequest type should
+be accepted on the public pipe.
+
+Upstream-Status: Backport from https://github.com/vmware/open-vm-tools/commit/70a74758bfe0042c27f15ce590fb21a2bc54d745
+CVE: CVE-2022-31676
+Signed-off-by: Priyal Doshi <pdoshi@mvista.com>
+---
+ open-vm-tools/vgauth/serviceImpl/proto.c | 6 +++++-
+ 1 file changed, 5 insertions(+), 1 deletion(-)
+
+diff --git a/open-vm-tools/vgauth/serviceImpl/proto.c b/open-vm-tools/vgauth/serviceImpl/proto.c
+index f097fb6..0ebaa7b 100644
+--- a/open-vm-tools/vgauth/serviceImpl/proto.c
++++ b/open-vm-tools/vgauth/serviceImpl/proto.c
+@@ -1,5 +1,5 @@
+ /*********************************************************
+- * Copyright (C) 2011-2016,2019 VMware, Inc. All rights reserved.
++ * Copyright (C) 2011-2016,2019-2022 VMware, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU Lesser General Public License as published
+@@ -1202,6 +1202,10 @@ Proto_SecurityCheckRequest(ServiceConnection *conn,
+ VGAuthError err;
+ gboolean isSecure = ServiceNetworkIsConnectionPrivateSuperUser(conn);
+
++ if (conn->isPublic && req->reqType != PROTO_REQUEST_SESSION_REQ) {
++ return VGAUTH_E_PERMISSION_DENIED;
++ }
++
+ switch (req->reqType) {
+ /*
+ * This comes over the public connection; alwsys let it through.
+--
+2.7.4
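The open-vm-tools patch above rejects any request other than a session request when it arrives on the public (unauthenticated) vgauth pipe. A hedged C sketch of that authorization gate follows; the Connection/RequestType names and the check_request helper are made up for illustration and are not vgauth's API.

/*
 * Minimal sketch of the rule: on the public pipe only a session-request
 * message is acceptable; every other operation must come in on a private,
 * authenticated connection.
 */
#include <stdbool.h>
#include <stdio.h>

typedef enum { REQ_SESSION, REQ_ADD_ALIAS, REQ_QUERY_ALIASES } RequestType;

typedef struct {
    bool is_public;         /* connection came in on the public pipe */
} Connection;

static bool check_request(const Connection *conn, RequestType type)
{
    if (conn->is_public && type != REQ_SESSION)
        return false;       /* permission denied */
    return true;            /* further per-type checks would follow */
}

int main(void)
{
    Connection pub = { .is_public = true };
    printf("session on public pipe:  %d\n", check_request(&pub, REQ_SESSION));
    printf("alias op on public pipe: %d\n", check_request(&pub, REQ_ADD_ALIAS));
    return 0;
}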
diff --git a/meta-openembedded/meta-oe/recipes-support/open-vm-tools/open-vm-tools_11.0.1.bb b/meta-openembedded/meta-oe/recipes-support/open-vm-tools/open-vm-tools_11.0.1.bb
index 3cf0aa8292..9a1b3f4c80 100644
--- a/meta-openembedded/meta-oe/recipes-support/open-vm-tools/open-vm-tools_11.0.1.bb
+++ b/meta-openembedded/meta-oe/recipes-support/open-vm-tools/open-vm-tools_11.0.1.bb
@@ -43,6 +43,7 @@ SRC_URI = "git://github.com/vmware/open-vm-tools.git;protocol=https;branch=maste
file://0002-hgfsServerLinux-Consider-64bit-time_t-possibility.patch;patchdir=.. \
file://0001-utilBacktrace-Ignore-Warray-bounds.patch;patchdir=.. \
file://0001-hgfsmounter-Makefile.am-support-usrmerge.patch;patchdir=.. \
+ file://0001-Properly-check-authorization-on-incoming-guestOps-re.patch;patchdir=.. \
"
SRCREV = "d3edfd142a81096f9f58aff17d84219b457f4987"
diff --git a/meta-openembedded/meta-oe/recipes-support/syslog-ng/files/CVE-2022-38725.patch b/meta-openembedded/meta-oe/recipes-support/syslog-ng/files/CVE-2022-38725.patch
new file mode 100644
index 0000000000..4a09c8c7fa
--- /dev/null
+++ b/meta-openembedded/meta-oe/recipes-support/syslog-ng/files/CVE-2022-38725.patch
@@ -0,0 +1,629 @@
+From 73b5c300b8fde5e7a4824baa83a04931279abb37 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?L=C3=A1szl=C3=B3=20V=C3=A1rady?=
+ <laszlo.varady@protonmail.com>
+Date: Sat, 20 Aug 2022 12:42:38 +0200
+Subject: [PATCH] CVE-2022-38725
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Signed-off-by: László Várady <laszlo.varady@protonmail.com>
+Signed-off-by: Balazs Scheidler <bazsi77@gmail.com>
+
+Upstream-Status: Backport from [https://github.com/syslog-ng/syslog-ng/commit/b5a060f2ebb8d794f508436a12e4d4163f94b1b8 && https://github.com/syslog-ng/syslog-ng/commit/81a07263f1e522a376d3a30f96f51df3f2879f8a && https://github.com/syslog-ng/syslog-ng/commit/4b8dc56ca8eaeac4c8751a305eb7eeefab8dc89d && https://github.com/syslog-ng/syslog-ng/commit/73b5c300b8fde5e7a4824baa83a04931279abb37 && https://github.com/syslog-ng/syslog-ng/commit/45f051239312e43bd4f92b9339fe67c6798a0321 && https://github.com/syslog-ng/syslog-ng/commit/09f489c89c826293ff8cbd282cfc866ab56054c4 && https://github.com/syslog-ng/syslog-ng/commit/8c6e2c1c41b0fcc5fbd464c35f4dac7102235396 && https://github.com/syslog-ng/syslog-ng/commit/56f881c5eaa3d8c02c96607c4b9e4eaf959a044d]
+CVE: CVE-2022-38725
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ lib/timeutils/scan-timestamp.c | 68 +++++----
+ lib/timeutils/tests/test_scan-timestamp.c | 133 ++++++++++++++++--
+ modules/syslogformat/CMakeLists.txt | 2 +
+ modules/syslogformat/Makefile.am | 2 +
+ modules/syslogformat/syslog-format.c | 12 +-
+ modules/syslogformat/tests/CMakeLists.txt | 1 +
+ modules/syslogformat/tests/Makefile.am | 9 ++
+ .../syslogformat/tests/test_syslog_format.c | 104 ++++++++++++++
+ 8 files changed, 284 insertions(+), 47 deletions(-)
+ create mode 100644 modules/syslogformat/tests/CMakeLists.txt
+ create mode 100644 modules/syslogformat/tests/Makefile.am
+ create mode 100644 modules/syslogformat/tests/test_syslog_format.c
+
+diff --git a/lib/timeutils/scan-timestamp.c b/lib/timeutils/scan-timestamp.c
+index 41ead1a..ec9746b 100644
+--- a/lib/timeutils/scan-timestamp.c
++++ b/lib/timeutils/scan-timestamp.c
+@@ -34,41 +34,43 @@ scan_day_abbrev(const gchar **buf, gint *left, gint *wday)
+ {
+ *wday = -1;
+
+- if (*left < 3)
++ const gsize abbrev_length = 3;
++
++ if (*left < abbrev_length)
+ return FALSE;
+
+ switch (**buf)
+ {
+ case 'S':
+- if (strncasecmp(*buf, "Sun", 3) == 0)
++ if (strncasecmp(*buf, "Sun", abbrev_length) == 0)
+ *wday = 0;
+- else if (strncasecmp(*buf, "Sat", 3) == 0)
++ else if (strncasecmp(*buf, "Sat", abbrev_length) == 0)
+ *wday = 6;
+ break;
+ case 'M':
+- if (strncasecmp(*buf, "Mon", 3) == 0)
++ if (strncasecmp(*buf, "Mon", abbrev_length) == 0)
+ *wday = 1;
+ break;
+ case 'T':
+- if (strncasecmp(*buf, "Tue", 3) == 0)
++ if (strncasecmp(*buf, "Tue", abbrev_length) == 0)
+ *wday = 2;
+- else if (strncasecmp(*buf, "Thu", 3) == 0)
++ else if (strncasecmp(*buf, "Thu", abbrev_length) == 0)
+ *wday = 4;
+ break;
+ case 'W':
+- if (strncasecmp(*buf, "Wed", 3) == 0)
++ if (strncasecmp(*buf, "Wed", abbrev_length) == 0)
+ *wday = 3;
+ break;
+ case 'F':
+- if (strncasecmp(*buf, "Fri", 3) == 0)
++ if (strncasecmp(*buf, "Fri", abbrev_length) == 0)
+ *wday = 5;
+ break;
+ default:
+ return FALSE;
+ }
+
+- (*buf) += 3;
+- (*left) -= 3;
++ (*buf) += abbrev_length;
++ (*left) -= abbrev_length;
+ return TRUE;
+ }
+
+@@ -77,57 +79,59 @@ scan_month_abbrev(const gchar **buf, gint *left, gint *mon)
+ {
+ *mon = -1;
+
+- if (*left < 3)
++ const gsize abbrev_length = 3;
++
++ if (*left < abbrev_length)
+ return FALSE;
+
+ switch (**buf)
+ {
+ case 'J':
+- if (strncasecmp(*buf, "Jan", 3) == 0)
++ if (strncasecmp(*buf, "Jan", abbrev_length) == 0)
+ *mon = 0;
+- else if (strncasecmp(*buf, "Jun", 3) == 0)
++ else if (strncasecmp(*buf, "Jun", abbrev_length) == 0)
+ *mon = 5;
+- else if (strncasecmp(*buf, "Jul", 3) == 0)
++ else if (strncasecmp(*buf, "Jul", abbrev_length) == 0)
+ *mon = 6;
+ break;
+ case 'F':
+- if (strncasecmp(*buf, "Feb", 3) == 0)
++ if (strncasecmp(*buf, "Feb", abbrev_length) == 0)
+ *mon = 1;
+ break;
+ case 'M':
+- if (strncasecmp(*buf, "Mar", 3) == 0)
++ if (strncasecmp(*buf, "Mar", abbrev_length) == 0)
+ *mon = 2;
+- else if (strncasecmp(*buf, "May", 3) == 0)
++ else if (strncasecmp(*buf, "May", abbrev_length) == 0)
+ *mon = 4;
+ break;
+ case 'A':
+- if (strncasecmp(*buf, "Apr", 3) == 0)
++ if (strncasecmp(*buf, "Apr", abbrev_length) == 0)
+ *mon = 3;
+- else if (strncasecmp(*buf, "Aug", 3) == 0)
++ else if (strncasecmp(*buf, "Aug", abbrev_length) == 0)
+ *mon = 7;
+ break;
+ case 'S':
+- if (strncasecmp(*buf, "Sep", 3) == 0)
++ if (strncasecmp(*buf, "Sep", abbrev_length) == 0)
+ *mon = 8;
+ break;
+ case 'O':
+- if (strncasecmp(*buf, "Oct", 3) == 0)
++ if (strncasecmp(*buf, "Oct", abbrev_length) == 0)
+ *mon = 9;
+ break;
+ case 'N':
+- if (strncasecmp(*buf, "Nov", 3) == 0)
++ if (strncasecmp(*buf, "Nov", abbrev_length) == 0)
+ *mon = 10;
+ break;
+ case 'D':
+- if (strncasecmp(*buf, "Dec", 3) == 0)
++ if (strncasecmp(*buf, "Dec", abbrev_length) == 0)
+ *mon = 11;
+ break;
+ default:
+ return FALSE;
+ }
+
+- (*buf) += 3;
+- (*left) -= 3;
++ (*buf) += abbrev_length;
++ (*left) -= abbrev_length;
+ return TRUE;
+ }
+
+@@ -302,7 +306,7 @@ __parse_usec(const guchar **data, gint *length)
+ src++;
+ (*length)--;
+ }
+- while (isdigit(*src))
++ while (*length > 0 && isdigit(*src))
+ {
+ src++;
+ (*length)--;
+@@ -316,19 +320,21 @@ __parse_usec(const guchar **data, gint *length)
+ static gboolean
+ __has_iso_timezone(const guchar *src, gint length)
+ {
+- return (length >= 5) &&
++ return (length >= 6) &&
+ (*src == '+' || *src == '-') &&
+ isdigit(*(src+1)) &&
+ isdigit(*(src+2)) &&
+ *(src+3) == ':' &&
+ isdigit(*(src+4)) &&
+ isdigit(*(src+5)) &&
+- !isdigit(*(src+6));
++ (length < 7 || !isdigit(*(src+6)));
+ }
+
+ static guint32
+ __parse_iso_timezone(const guchar **data, gint *length)
+ {
++ g_assert(*length >= 6);
++
+ gint hours, mins;
+ const guchar *src = *data;
+ guint32 tz = 0;
+@@ -338,8 +344,10 @@ __parse_iso_timezone(const guchar **data, gint *length)
+ hours = (*(src + 1) - '0') * 10 + *(src + 2) - '0';
+ mins = (*(src + 4) - '0') * 10 + *(src + 5) - '0';
+ tz = sign * (hours * 3600 + mins * 60);
++
+ src += 6;
+ (*length) -= 6;
++
+ *data = src;
+ return tz;
+ }
+@@ -393,7 +401,7 @@ __parse_bsd_timestamp(const guchar **data, gint *length, WallClockTime *wct)
+ if (!scan_pix_timestamp((const gchar **) &src, &left, wct))
+ return FALSE;
+
+- if (*src == ':')
++ if (left && *src == ':')
+ {
+ src++;
+ left--;
+@@ -444,7 +452,7 @@ scan_rfc3164_timestamp(const guchar **data, gint *length, WallClockTime *wct)
+ * looking at you, skip that as well, so we can reliably detect IPv6
+ * addresses as hostnames, which would be using ":" as well. */
+
+- if (*src == ':')
++ if (left && *src == ':')
+ {
+ ++src;
+ --left;
+diff --git a/lib/timeutils/tests/test_scan-timestamp.c b/lib/timeutils/tests/test_scan-timestamp.c
+index 4508139..ad657c6 100644
+--- a/lib/timeutils/tests/test_scan-timestamp.c
++++ b/lib/timeutils/tests/test_scan-timestamp.c
+@@ -49,17 +49,21 @@ fake_time_add(time_t diff)
+ }
+
+ static gboolean
+-_parse_rfc3164(const gchar *ts, gchar isotimestamp[32])
++_parse_rfc3164(const gchar *ts, gint len, gchar isotimestamp[32])
+ {
+ UnixTime stamp;
+- const guchar *data = (const guchar *) ts;
+- gint length = strlen(ts);
++ const guchar *tsu = (const guchar *) ts;
++ gint tsu_len = len < 0 ? strlen(ts) : len;
+ GString *result = g_string_new("");
+ WallClockTime wct = WALL_CLOCK_TIME_INIT;
+
+-
++ const guchar *data = tsu;
++ gint length = tsu_len;
+ gboolean success = scan_rfc3164_timestamp(&data, &length, &wct);
+
++ cr_assert(length >= 0);
++ cr_assert(data == &tsu[tsu_len - length]);
++
+ unix_time_unset(&stamp);
+ convert_wall_clock_time_to_unix_time(&wct, &stamp);
+
+@@ -70,16 +74,21 @@ _parse_rfc3164(const gchar *ts, gchar isotimestamp[32])
+ }
+
+ static gboolean
+-_parse_rfc5424(const gchar *ts, gchar isotimestamp[32])
++_parse_rfc5424(const gchar *ts, gint len, gchar isotimestamp[32])
+ {
+ UnixTime stamp;
+- const guchar *data = (const guchar *) ts;
+- gint length = strlen(ts);
++ const guchar *tsu = (const guchar *) ts;
++ gint tsu_len = len < 0 ? strlen(ts) : len;
+ GString *result = g_string_new("");
+ WallClockTime wct = WALL_CLOCK_TIME_INIT;
+
++ const guchar *data = tsu;
++ gint length = tsu_len;
+ gboolean success = scan_rfc5424_timestamp(&data, &length, &wct);
+
++ cr_assert(length >= 0);
++ cr_assert(data == &tsu[tsu_len - length]);
++
+ unix_time_unset(&stamp);
+ convert_wall_clock_time_to_unix_time(&wct, &stamp);
+
+@@ -90,31 +99,60 @@ _parse_rfc5424(const gchar *ts, gchar isotimestamp[32])
+ }
+
+ static gboolean
+-_rfc3164_timestamp_eq(const gchar *ts, const gchar *expected, gchar converted[32])
++_rfc3164_timestamp_eq(const gchar *ts, gint len, const gchar *expected, gchar converted[32])
+ {
+- cr_assert(_parse_rfc3164(ts, converted));
++ cr_assert(_parse_rfc3164(ts, len, converted));
+ return strcmp(converted, expected) == 0;
+ }
+
+ static gboolean
+-_rfc5424_timestamp_eq(const gchar *ts, const gchar *expected, gchar converted[32])
++_rfc5424_timestamp_eq(const gchar *ts, gint len, const gchar *expected, gchar converted[32])
+ {
+- cr_assert(_parse_rfc5424(ts, converted));
++ cr_assert(_parse_rfc5424(ts, len, converted));
+ return strcmp(converted, expected) == 0;
+ }
+
+ #define _expect_rfc3164_timestamp_eq(ts, expected) \
+ ({ \
+ gchar converted[32]; \
+- cr_expect(_rfc3164_timestamp_eq(ts, expected, converted), "Parsed RFC3164 timestamp does not equal expected, ts=%s, converted=%s, expected=%s", ts, converted, expected); \
++ cr_expect(_rfc3164_timestamp_eq(ts, -1, expected, converted), "Parsed RFC3164 timestamp does not equal expected, ts=%s, converted=%s, expected=%s", ts, converted, expected); \
++ })
++
++#define _expect_rfc3164_timestamp_len_eq(ts, len, expected) \
++ ({ \
++ gchar converted[32]; \
++ cr_expect(_rfc3164_timestamp_eq(ts, len, expected, converted), "Parsed RFC3164 timestamp does not equal expected, ts=%s, converted=%s, expected=%s", ts, converted, expected); \
++ })
++
++#define _expect_rfc3164_fails(ts, len) \
++ ({ \
++ WallClockTime wct = WALL_CLOCK_TIME_INIT; \
++ const guchar *data = (guchar *) ts; \
++ gint length = len < 0 ? strlen(ts) : len; \
++ cr_assert_not(scan_rfc3164_timestamp(&data, &length, &wct)); \
+ })
+
+ #define _expect_rfc5424_timestamp_eq(ts, expected) \
+ ({ \
+ gchar converted[32]; \
+- cr_expect(_rfc5424_timestamp_eq(ts, expected, converted), "Parsed RFC5424 timestamp does not equal expected, ts=%s, converted=%s, expected=%s", ts, converted, expected); \
++ cr_expect(_rfc5424_timestamp_eq(ts, -1, expected, converted), "Parsed RFC5424 timestamp does not equal expected, ts=%s, converted=%s, expected=%s", ts, converted, expected); \
++ })
++
++#define _expect_rfc5424_timestamp_len_eq(ts, len, expected) \
++ ({ \
++ gchar converted[32]; \
++ cr_expect(_rfc5424_timestamp_eq(ts, len, expected, converted), "Parsed RFC5424 timestamp does not equal expected, ts=%s, converted=%s, expected=%s", ts, converted, expected); \
+ })
+
++#define _expect_rfc5424_fails(ts, len) \
++ ({ \
++ WallClockTime wct = WALL_CLOCK_TIME_INIT; \
++ const guchar *data = (guchar *) ts; \
++ gint length = len < 0 ? strlen(ts) : len; \
++ cr_assert_not(scan_rfc5424_timestamp(&data, &length, &wct)); \
++ })
++
++
+ Test(parse_timestamp, standard_bsd_format)
+ {
+ _expect_rfc3164_timestamp_eq("Oct 1 17:46:12", "2017-10-01T17:46:12.000+02:00");
+@@ -148,6 +186,75 @@ Test(parse_timestamp, standard_bsd_format_year_in_the_past)
+ _expect_rfc3164_timestamp_eq("Dec 31 17:46:12", "2017-12-31T17:46:12.000+01:00");
+ }
+
++Test(parse_timestamp, non_zero_terminated_rfc3164_iso_input_is_handled_properly)
++{
++ gchar *ts = "2022-08-17T05:02:28.417Z whatever";
++ gint ts_len = 24;
++
++ _expect_rfc3164_timestamp_len_eq(ts, strlen(ts), "2022-08-17T05:02:28.417+00:00");
++ _expect_rfc3164_timestamp_len_eq(ts, ts_len + 5, "2022-08-17T05:02:28.417+00:00");
++ _expect_rfc3164_timestamp_len_eq(ts, ts_len, "2022-08-17T05:02:28.417+00:00");
++
++ /* no "Z" parsed, timezone defaults to local, forced CET */
++ _expect_rfc3164_timestamp_len_eq(ts, ts_len - 1, "2022-08-17T05:02:28.417+02:00");
++
++ /* msec is partially parsed as we trim the string from the right */
++ _expect_rfc3164_timestamp_len_eq(ts, ts_len - 2, "2022-08-17T05:02:28.410+02:00");
++ _expect_rfc3164_timestamp_len_eq(ts, ts_len - 3, "2022-08-17T05:02:28.400+02:00");
++ _expect_rfc3164_timestamp_len_eq(ts, ts_len - 4, "2022-08-17T05:02:28.000+02:00");
++ _expect_rfc3164_timestamp_len_eq(ts, ts_len - 5, "2022-08-17T05:02:28.000+02:00");
++
++ for (gint i = 6; i < ts_len; i++)
++ _expect_rfc3164_fails(ts, ts_len - i);
++
++}
++
++Test(parse_timestamp, non_zero_terminated_rfc3164_bsd_pix_or_asa_input_is_handled_properly)
++{
++ gchar *ts = "Aug 17 2022 05:02:28: whatever";
++ gint ts_len = 21;
++
++ _expect_rfc3164_timestamp_len_eq(ts, strlen(ts), "2022-08-17T05:02:28.000+02:00");
++ _expect_rfc3164_timestamp_len_eq(ts, ts_len + 5, "2022-08-17T05:02:28.000+02:00");
++ _expect_rfc3164_timestamp_len_eq(ts, ts_len, "2022-08-17T05:02:28.000+02:00");
++
++ /* no ":" at the end, that's a problem, unrecognized */
++ _expect_rfc3164_fails(ts, ts_len - 1);
++
++ for (gint i = 1; i < ts_len; i++)
++ _expect_rfc3164_fails(ts, ts_len - i);
++}
++
++Test(parse_timestamp, non_zero_terminated_rfc5424_input_is_handled_properly)
++{
++ gchar *ts = "2022-08-17T05:02:28.417Z whatever";
++ gint ts_len = 24;
++
++ _expect_rfc5424_timestamp_len_eq(ts, strlen(ts), "2022-08-17T05:02:28.417+00:00");
++ _expect_rfc5424_timestamp_len_eq(ts, ts_len + 5, "2022-08-17T05:02:28.417+00:00");
++ _expect_rfc5424_timestamp_len_eq(ts, ts_len, "2022-08-17T05:02:28.417+00:00");
++
++ /* no "Z" parsed, timezone defaults to local, forced CET */
++ _expect_rfc5424_timestamp_len_eq(ts, ts_len - 1, "2022-08-17T05:02:28.417+02:00");
++
++ /* msec is partially parsed as we trim the string from the right */
++ _expect_rfc5424_timestamp_len_eq(ts, ts_len - 2, "2022-08-17T05:02:28.410+02:00");
++ _expect_rfc5424_timestamp_len_eq(ts, ts_len - 3, "2022-08-17T05:02:28.400+02:00");
++ _expect_rfc5424_timestamp_len_eq(ts, ts_len - 4, "2022-08-17T05:02:28.000+02:00");
++ _expect_rfc5424_timestamp_len_eq(ts, ts_len - 5, "2022-08-17T05:02:28.000+02:00");
++
++ for (gint i = 6; i < ts_len; i++)
++ _expect_rfc5424_fails(ts, ts_len - i);
++
++}
++
++Test(parse_timestamp, non_zero_terminated_rfc5424_timestamp_only)
++{
++ const gchar *ts = "2022-08-17T05:02:28.417+03:00";
++ gint ts_len = strlen(ts);
++ _expect_rfc5424_timestamp_len_eq(ts, ts_len, ts);
++}
++
+
+ Test(parse_timestamp, daylight_saving_behavior_at_spring_with_explicit_timezones)
+ {
+diff --git a/modules/syslogformat/CMakeLists.txt b/modules/syslogformat/CMakeLists.txt
+index fb55ea4..a2a92bb 100644
+--- a/modules/syslogformat/CMakeLists.txt
++++ b/modules/syslogformat/CMakeLists.txt
+@@ -24,4 +24,6 @@ target_include_directories(syslogformat
+ )
+ target_link_libraries(syslogformat PRIVATE syslog-ng)
+
++add_test_subdirectory(tests)
++
+ install(TARGETS syslogformat LIBRARY DESTINATION lib/syslog-ng/)
+diff --git a/modules/syslogformat/Makefile.am b/modules/syslogformat/Makefile.am
+index f13f88c..14cdf58 100644
+--- a/modules/syslogformat/Makefile.am
++++ b/modules/syslogformat/Makefile.am
+@@ -31,3 +31,5 @@ modules_syslogformat_libsyslogformat_la_DEPENDENCIES = \
+ modules/syslogformat modules/syslogformat/ mod-syslogformat: \
+ modules/syslogformat/libsyslogformat.la
+ .PHONY: modules/syslogformat/ mod-syslogformat
++
++include modules/syslogformat/tests/Makefile.am
+diff --git a/modules/syslogformat/syslog-format.c b/modules/syslogformat/syslog-format.c
+index 6d53a32..a69f39f 100644
+--- a/modules/syslogformat/syslog-format.c
++++ b/modules/syslogformat/syslog-format.c
+@@ -200,7 +200,7 @@ log_msg_parse_cisco_sequence_id(LogMessage *self, const guchar **data, gint *len
+
+ /* if the next char is not space, then we may try to read a date */
+
+- if (*src != ' ')
++ if (!left || *src != ' ')
+ return;
+
+ log_msg_set_value(self, handles.cisco_seqid, (gchar *) *data, *length - left - 1);
+@@ -216,6 +216,9 @@ log_msg_parse_cisco_timestamp_attributes(LogMessage *self, const guchar **data,
+ const guchar *src = *data;
+ gint left = *length;
+
++ if (!left)
++ return;
++
+ /* Cisco timestamp extensions, the first '*' indicates that the clock is
+ * unsynced, '.' if it is known to be synced */
+ if (G_UNLIKELY(src[0] == '*'))
+@@ -564,7 +567,7 @@ log_msg_parse_sd(LogMessage *self, const guchar **data, gint *length, const MsgF
+ open_sd++;
+ do
+ {
+- if (!isascii(*src) || *src == '=' || *src == ' ' || *src == ']' || *src == '"')
++ if (!left || !isascii(*src) || *src == '=' || *src == ' ' || *src == ']' || *src == '"')
+ goto error;
+ /* read sd_id */
+ pos = 0;
+@@ -598,7 +601,8 @@ log_msg_parse_sd(LogMessage *self, const guchar **data, gint *length, const MsgF
+ strcpy(sd_value_name, logmsg_sd_prefix);
+ /* this strcat is safe, as sd_id_name is at most 32 chars */
+ strncpy(sd_value_name + logmsg_sd_prefix_len, sd_id_name, sizeof(sd_value_name) - logmsg_sd_prefix_len);
+- if (*src == ']')
++
++ if (left && *src == ']')
+ {
+ log_msg_set_value_by_name(self, sd_value_name, "", 0);
+ }
+@@ -615,7 +619,7 @@ log_msg_parse_sd(LogMessage *self, const guchar **data, gint *length, const MsgF
+ else
+ goto error;
+
+- if (!isascii(*src) || *src == '=' || *src == ' ' || *src == ']' || *src == '"')
++ if (!left || !isascii(*src) || *src == '=' || *src == ' ' || *src == ']' || *src == '"')
+ goto error;
+
+ /* read sd-param */
+diff --git a/modules/syslogformat/tests/CMakeLists.txt b/modules/syslogformat/tests/CMakeLists.txt
+new file mode 100644
+index 0000000..2e45b71
+--- /dev/null
++++ b/modules/syslogformat/tests/CMakeLists.txt
+@@ -0,0 +1 @@
++add_unit_test(CRITERION TARGET test_syslog_format DEPENDS syslogformat)
+diff --git a/modules/syslogformat/tests/Makefile.am b/modules/syslogformat/tests/Makefile.am
+new file mode 100644
+index 0000000..7ee66a5
+--- /dev/null
++++ b/modules/syslogformat/tests/Makefile.am
+@@ -0,0 +1,9 @@
++modules_syslogformat_tests_TESTS = \
++ modules/syslogformat/tests/test_syslog_format
++
++check_PROGRAMS += ${modules_syslogformat_tests_TESTS}
++
++EXTRA_DIST += modules/syslogformat/tests/CMakeLists.txt
++
++modules_syslogformat_tests_test_syslog_format_CFLAGS = $(TEST_CFLAGS) -I$(top_srcdir)/modules/syslogformat
++modules_syslogformat_tests_test_syslog_format_LDADD = $(TEST_LDADD) $(PREOPEN_SYSLOGFORMAT)
+diff --git a/modules/syslogformat/tests/test_syslog_format.c b/modules/syslogformat/tests/test_syslog_format.c
+new file mode 100644
+index 0000000..d0f5b40
+--- /dev/null
++++ b/modules/syslogformat/tests/test_syslog_format.c
+@@ -0,0 +1,104 @@
++/*
++ * Copyright (c) 2022 One Identity
++ * Copyright (c) 2022 László Várady
++ *
++ * This program is free software; you can redistribute it and/or modify it
++ * under the terms of the GNU General Public License version 2 as published
++ * by the Free Software Foundation, or (at your option) any later version.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program; if not, write to the Free Software
++ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
++ *
++ * As an additional exemption you are allowed to compile & link against the
++ * OpenSSL libraries as published by the OpenSSL project. See the file
++ * COPYING for details.
++ *
++ */
++
++#include <criterion/criterion.h>
++
++#include "apphook.h"
++#include "cfg.h"
++#include "syslog-format.h"
++#include "logmsg/logmsg.h"
++#include "msg-format.h"
++#include "scratch-buffers.h"
++
++#include <string.h>
++
++GlobalConfig *cfg;
++MsgFormatOptions parse_options;
++
++static void
++setup(void)
++{
++ app_startup();
++ syslog_format_init();
++
++ cfg = cfg_new_snippet();
++ msg_format_options_defaults(&parse_options);
++}
++
++static void
++teardown(void)
++{
++ scratch_buffers_explicit_gc();
++ app_shutdown();
++ cfg_free(cfg);
++}
++
++TestSuite(syslog_format, .init = setup, .fini = teardown);
++
++Test(syslog_format, parser_should_not_spin_on_non_zero_terminated_input, .timeout = 10)
++{
++ const gchar *data = "<182>2022-08-17T05:02:28.217 mymachine su: 'su root' failed for lonvick on /dev/pts/8";
++ /* chosen carefully to reproduce a bug */
++ gsize data_length = 27;
++
++ msg_format_options_init(&parse_options, cfg);
++ LogMessage *msg = msg_format_construct_message(&parse_options, (const guchar *) data, data_length);
++
++ gsize problem_position;
++ cr_assert(syslog_format_handler(&parse_options, msg, (const guchar *) data, data_length, &problem_position));
++
++ msg_format_options_destroy(&parse_options);
++ log_msg_unref(msg);
++}
++
++Test(syslog_format, cisco_sequence_id_non_zero_termination)
++{
++ const gchar *data = "<189>65536: ";
++ gsize data_length = strlen(data);
++
++ msg_format_options_init(&parse_options, cfg);
++ LogMessage *msg = msg_format_construct_message(&parse_options, (const guchar *) data, data_length);
++
++ gsize problem_position;
++ cr_assert(syslog_format_handler(&parse_options, msg, (const guchar *) data, data_length, &problem_position));
++ cr_assert_str_eq(log_msg_get_value_by_name(msg, ".SDATA.meta.sequenceId", NULL), "65536");
++
++ msg_format_options_destroy(&parse_options);
++ log_msg_unref(msg);
++}
++
++Test(syslog_format, minimal_non_zero_terminated_numeric_message_is_parsed_as_program_name)
++{
++ const gchar *data = "<189>65536";
++ gsize data_length = strlen(data);
++
++ msg_format_options_init(&parse_options, cfg);
++ LogMessage *msg = msg_format_construct_message(&parse_options, (const guchar *) data, data_length);
++
++ gsize problem_position;
++ cr_assert(syslog_format_handler(&parse_options, msg, (const guchar *) data, data_length, &problem_position));
++ cr_assert_str_eq(log_msg_get_value_by_name(msg, "PROGRAM", NULL), "65536");
++
++ msg_format_options_destroy(&parse_options);
++ log_msg_unref(msg);
++}
+--
+2.25.1
+
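The syslog-ng backport above adds remaining-length guards so the RFC3164/RFC5424 timestamp and structured-data parsers never dereference past the end of input that is not NUL-terminated. The short C sketch below shows the underlying pattern, checking the remaining length before every read; skip_digits is a hypothetical helper, not part of syslog-ng.

/*
 * Length-bounded scanning: the buffer carries an explicit length and may
 * lack a terminating NUL, so the remaining count is checked before each
 * dereference.
 */
#include <ctype.h>
#include <stdio.h>

/* Advance over leading digits, never reading past buf[len - 1]. */
static int skip_digits(const char **buf, int *left)
{
    int consumed = 0;

    while (*left > 0 && isdigit((unsigned char) **buf)) {
        (*buf)++;
        (*left)--;
        consumed++;
    }
    return consumed;
}

int main(void)
{
    const char data[] = { '1', '2', '3' };   /* no terminating NUL */
    const char *p = data;
    int left = (int) sizeof(data);

    printf("digits: %d, remaining: %d\n", skip_digits(&p, &left), left);
    return 0;
}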
diff --git a/meta-openembedded/meta-oe/recipes-support/syslog-ng/syslog-ng_3.24.1.bb b/meta-openembedded/meta-oe/recipes-support/syslog-ng/syslog-ng_3.24.1.bb
index 10bf00fdce..6e90dabd14 100644
--- a/meta-openembedded/meta-oe/recipes-support/syslog-ng/syslog-ng_3.24.1.bb
+++ b/meta-openembedded/meta-oe/recipes-support/syslog-ng/syslog-ng_3.24.1.bb
@@ -9,6 +9,7 @@ SRC_URI += " \
file://0001-syslog-ng-fix-segment-fault-during-service-start.patch \
file://shebang.patch \
file://syslog-ng-tmp.conf \
+ file://CVE-2022-38725.patch \
"
SRC_URI[md5sum] = "ef9de066793f7358af7312b964ac0450"
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-pillow/0001-CVE-2022-45198.patch b/meta-openembedded/meta-python/recipes-devtools/python/python3-pillow/0001-CVE-2022-45198.patch
new file mode 100644
index 0000000000..0f0cfa7804
--- /dev/null
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-pillow/0001-CVE-2022-45198.patch
@@ -0,0 +1,26 @@
+From 7df88fc2319852ace202a650703d631200080e3b Mon Sep 17 00:00:00 2001
+From: Andrew Murray <radarhere@users.noreply.github.com>
+Date: Thu, 30 Jun 2022 12:47:35 +1000
+Subject: [PATCH] Added GIF decompression bomb check
+
+Upstream-Status: Backport [https://github.com/python-pillow/Pillow/commit/884437f8a2b953a0abd2a3b130a87fcfb438092e]
+CVE: CVE-2022-45198
+Signed-off-by: Shubham Kulkarni <skulkarni@mvista.com>
+---
+ src/PIL/GifImagePlugin.py | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/src/PIL/GifImagePlugin.py b/src/PIL/GifImagePlugin.py
+index 9d8e96f..c477fdd 100644
+--- a/src/PIL/GifImagePlugin.py
++++ b/src/PIL/GifImagePlugin.py
+@@ -238,6 +238,7 @@ class GifImageFile(ImageFile.ImageFile):
+ x1, y1 = x0 + i16(s[4:]), y0 + i16(s[6:])
+ if x1 > self.size[0] or y1 > self.size[1]:
+ self._size = max(x1, self.size[0]), max(y1, self.size[1])
++ Image._decompression_bomb_check(self._size)
+ self.dispose_extent = x0, y0, x1, y1
+ flags = i8(s[8])
+
+--
+2.7.4
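The Pillow patch above runs the decompression-bomb check when a GIF frame enlarges the logical screen, so a tiny file cannot force a huge allocation. As a language-neutral illustration, the C sketch below shows such a pixel-count guard; MAX_PIXELS and bomb_check are illustrative names and an assumed limit, not Pillow's actual API.

/*
 * Decompression-bomb guard: refuse to proceed when the decoded pixel count
 * would exceed a configured ceiling.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_PIXELS 89478485ULL   /* assumed ceiling, roughly 256 MiB of RGB */

static bool bomb_check(uint32_t width, uint32_t height)
{
    uint64_t pixels = (uint64_t) width * (uint64_t) height;
    return pixels <= MAX_PIXELS;   /* false -> reject the image */
}

int main(void)
{
    printf("1920x1080 ok:   %d\n", bomb_check(1920, 1080));
    printf("65535x65535 ok: %d\n", bomb_check(65535, 65535));
    return 0;
}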
diff --git a/meta-openembedded/meta-python/recipes-devtools/python/python3-pillow_6.2.1.bb b/meta-openembedded/meta-python/recipes-devtools/python/python3-pillow_6.2.1.bb
index 80b7e941ae..35330cac6d 100644
--- a/meta-openembedded/meta-python/recipes-devtools/python/python3-pillow_6.2.1.bb
+++ b/meta-openembedded/meta-python/recipes-devtools/python/python3-pillow_6.2.1.bb
@@ -8,6 +8,7 @@ LIC_FILES_CHKSUM = "file://LICENSE;md5=55c0f320370091249c1755c0d2b48e89"
SRC_URI = "git://github.com/python-pillow/Pillow.git;branch=6.2.x;protocol=https \
file://0001-support-cross-compiling.patch \
file://0001-explicitly-set-compile-options.patch \
+ file://0001-CVE-2022-45198.patch \
"
SRCREV ?= "6e0f07bbe38def22d36ee176b2efd9ea74b453a6"
diff --git a/meta-openembedded/meta-webserver/recipes-httpd/apache2/apache2/0004-apache2-log-the-SELinux-context-at-startup.patch b/meta-openembedded/meta-webserver/recipes-httpd/apache2/apache2/0004-apache2-log-the-SELinux-context-at-startup.patch
index 5d82919685..3b080f54f6 100644
--- a/meta-openembedded/meta-webserver/recipes-httpd/apache2/apache2/0004-apache2-log-the-SELinux-context-at-startup.patch
+++ b/meta-openembedded/meta-webserver/recipes-httpd/apache2/apache2/0004-apache2-log-the-SELinux-context-at-startup.patch
@@ -1,4 +1,4 @@
-From 37699e9be04d83c5923644e298f400e077f76e85 Mon Sep 17 00:00:00 2001
+From e47cc405eadcbe37a579c375e824e20a5c53bfad Mon Sep 17 00:00:00 2001
From: Paul Eggleton <paul.eggleton@linux.intel.com>
Date: Tue, 17 Jul 2012 11:27:39 +0100
Subject: [PATCH] Log the SELinux context at startup.
@@ -8,13 +8,14 @@ Log the SELinux context at startup.
Upstream-Status: Inappropriate [other]
Note: unlikely to be any interest in this upstream
+
---
configure.in | 5 +++++
server/core.c | 26 ++++++++++++++++++++++++++
2 files changed, 31 insertions(+)
diff --git a/configure.in b/configure.in
-index c799aec..76811e7 100644
+index ea6cec3..92b74b7 100644
--- a/configure.in
+++ b/configure.in
@@ -491,6 +491,11 @@ getloadavg
@@ -30,7 +31,7 @@ index c799aec..76811e7 100644
[AC_TRY_RUN(#define _GNU_SOURCE
#include <unistd.h>
diff --git a/server/core.c b/server/core.c
-index 3020090..8fef5fd 100644
+index 4da7209..d3ca25b 100644
--- a/server/core.c
+++ b/server/core.c
@@ -65,6 +65,10 @@
@@ -43,7 +44,7 @@ index 3020090..8fef5fd 100644
+
/* LimitRequestBody handling */
#define AP_LIMIT_REQ_BODY_UNSET ((apr_off_t) -1)
- #define AP_DEFAULT_LIMIT_REQ_BODY ((apr_off_t) 0)
+ #define AP_DEFAULT_LIMIT_REQ_BODY ((apr_off_t) 1<<30) /* 1GB */
@@ -5126,6 +5130,28 @@ static int core_post_config(apr_pool_t *pconf, apr_pool_t *plog, apr_pool_t *pte
}
#endif
@@ -73,6 +74,3 @@ index 3020090..8fef5fd 100644
return OK;
}
---
-2.25.1
-
diff --git a/meta-openembedded/meta-webserver/recipes-httpd/apache2/apache2_2.4.53.bb b/meta-openembedded/meta-webserver/recipes-httpd/apache2/apache2_2.4.56.bb
index 225f6fc4f6..ed5690a4ab 100644
--- a/meta-openembedded/meta-webserver/recipes-httpd/apache2/apache2_2.4.53.bb
+++ b/meta-openembedded/meta-webserver/recipes-httpd/apache2/apache2_2.4.56.bb
@@ -26,7 +26,7 @@ SRC_URI:append:class-target = " \
"
LIC_FILES_CHKSUM = "file://LICENSE;md5=bddeddfac80b2c9a882241d008bb41c3"
-SRC_URI[sha256sum] = "d0bbd1121a57b5f2a6ff92d7b96f8050c5a45d3f14db118f64979d525858db63"
+SRC_URI[sha256sum] = "d8d45f1398ba84edd05bb33ca7593ac2989b17cb9c7a0cafe5442d41afdb2d7c"
S = "${WORKDIR}/httpd-${PV}"
diff --git a/meta-openembedded/meta-webserver/recipes-httpd/nginx/files/CVE-2022-41741-CVE-2022-41742.patch b/meta-openembedded/meta-webserver/recipes-httpd/nginx/files/CVE-2022-41741-CVE-2022-41742.patch
new file mode 100644
index 0000000000..8a8a35b2dd
--- /dev/null
+++ b/meta-openembedded/meta-webserver/recipes-httpd/nginx/files/CVE-2022-41741-CVE-2022-41742.patch
@@ -0,0 +1,319 @@
+From 9563a2a08c007d78a6796b0232201bf7dc4a8103 Mon Sep 17 00:00:00 2001
+From: Hitendra Prajapati <hprajapati@mvista.com>
+Date: Wed, 16 Nov 2022 10:28:24 +0530
+Subject: [PATCH] CVE-2022-41741, CVE-2022-41742
+
+Upstream-Status: Backport [https://github.com/nginx/nginx/commit/6b022a5556af22b6e18532e547a6ae46b0d8c6ea]
+CVE: CVE-2022-41741, CVE-2022-41742
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+
+Mp4: disabled duplicate atoms.
+
+Most atoms should not appear more than once in a container. Previously,
+this was not enforced by the module, which could result in worker process
+crash, memory corruption and disclosure.
+---
+ src/http/modules/ngx_http_mp4_module.c | 147 +++++++++++++++++++++++++
+ 1 file changed, 147 insertions(+)
+
+diff --git a/src/http/modules/ngx_http_mp4_module.c b/src/http/modules/ngx_http_mp4_module.c
+index 618bf78..7b7184d 100644
+--- a/src/http/modules/ngx_http_mp4_module.c
++++ b/src/http/modules/ngx_http_mp4_module.c
+@@ -1076,6 +1076,12 @@ ngx_http_mp4_read_ftyp_atom(ngx_http_mp4_file_t *mp4, uint64_t atom_data_size)
+ return NGX_ERROR;
+ }
+
++ if (mp4->ftyp_atom.buf) {
++ ngx_log_error(NGX_LOG_ERR, mp4->file.log, 0,
++ "duplicate mp4 ftyp atom in \"%s\"", mp4->file.name.data);
++ return NGX_ERROR;
++ }
++
+ atom_size = sizeof(ngx_mp4_atom_header_t) + (size_t) atom_data_size;
+
+ ftyp_atom = ngx_palloc(mp4->request->pool, atom_size);
+@@ -1134,6 +1140,12 @@ ngx_http_mp4_read_moov_atom(ngx_http_mp4_file_t *mp4, uint64_t atom_data_size)
+ return NGX_DECLINED;
+ }
+
++ if (mp4->moov_atom.buf) {
++ ngx_log_error(NGX_LOG_ERR, mp4->file.log, 0,
++ "duplicate mp4 moov atom in \"%s\"", mp4->file.name.data);
++ return NGX_ERROR;
++ }
++
+ conf = ngx_http_get_module_loc_conf(mp4->request, ngx_http_mp4_module);
+
+ if (atom_data_size > mp4->buffer_size) {
+@@ -1201,6 +1213,12 @@ ngx_http_mp4_read_mdat_atom(ngx_http_mp4_file_t *mp4, uint64_t atom_data_size)
+
+ ngx_log_debug0(NGX_LOG_DEBUG_HTTP, mp4->file.log, 0, "mp4 mdat atom");
+
++ if (mp4->mdat_atom.buf) {
++ ngx_log_error(NGX_LOG_ERR, mp4->file.log, 0,
++ "duplicate mp4 mdat atom in \"%s\"", mp4->file.name.data);
++ return NGX_ERROR;
++ }
++
+ data = &mp4->mdat_data_buf;
+ data->file = &mp4->file;
+ data->in_file = 1;
+@@ -1327,6 +1345,12 @@ ngx_http_mp4_read_mvhd_atom(ngx_http_mp4_file_t *mp4, uint64_t atom_data_size)
+
+ ngx_log_debug0(NGX_LOG_DEBUG_HTTP, mp4->file.log, 0, "mp4 mvhd atom");
+
++ if (mp4->mvhd_atom.buf) {
++ ngx_log_error(NGX_LOG_ERR, mp4->file.log, 0,
++ "duplicate mp4 mvhd atom in \"%s\"", mp4->file.name.data);
++ return NGX_ERROR;
++ }
++
+ atom_header = ngx_mp4_atom_header(mp4);
+ mvhd_atom = (ngx_mp4_mvhd_atom_t *) atom_header;
+ mvhd64_atom = (ngx_mp4_mvhd64_atom_t *) atom_header;
+@@ -1592,6 +1616,13 @@ ngx_http_mp4_read_tkhd_atom(ngx_http_mp4_file_t *mp4, uint64_t atom_data_size)
+ atom_size = sizeof(ngx_mp4_atom_header_t) + (size_t) atom_data_size;
+
+ trak = ngx_mp4_last_trak(mp4);
++
++ if (trak->out[NGX_HTTP_MP4_TKHD_ATOM].buf) {
++ ngx_log_error(NGX_LOG_ERR, mp4->file.log, 0,
++ "duplicate mp4 tkhd atom in \"%s\"", mp4->file.name.data);
++ return NGX_ERROR;
++ }
++
+ trak->tkhd_size = atom_size;
+
+ ngx_mp4_set_32value(tkhd_atom->size, atom_size);
+@@ -1630,6 +1661,12 @@ ngx_http_mp4_read_mdia_atom(ngx_http_mp4_file_t *mp4, uint64_t atom_data_size)
+
+ trak = ngx_mp4_last_trak(mp4);
+
++ if (trak->out[NGX_HTTP_MP4_MDIA_ATOM].buf) {
++ ngx_log_error(NGX_LOG_ERR, mp4->file.log, 0,
++ "duplicate mp4 mdia atom in \"%s\"", mp4->file.name.data);
++ return NGX_ERROR;
++ }
++
+ atom = &trak->mdia_atom_buf;
+ atom->temporary = 1;
+ atom->pos = atom_header;
+@@ -1753,6 +1790,13 @@ ngx_http_mp4_read_mdhd_atom(ngx_http_mp4_file_t *mp4, uint64_t atom_data_size)
+ atom_size = sizeof(ngx_mp4_atom_header_t) + (size_t) atom_data_size;
+
+ trak = ngx_mp4_last_trak(mp4);
++
++ if (trak->out[NGX_HTTP_MP4_MDHD_ATOM].buf) {
++ ngx_log_error(NGX_LOG_ERR, mp4->file.log, 0,
++ "duplicate mp4 mdhd atom in \"%s\"", mp4->file.name.data);
++ return NGX_ERROR;
++ }
++
+ trak->mdhd_size = atom_size;
+ trak->timescale = timescale;
+
+@@ -1795,6 +1839,12 @@ ngx_http_mp4_read_hdlr_atom(ngx_http_mp4_file_t *mp4, uint64_t atom_data_size)
+
+ trak = ngx_mp4_last_trak(mp4);
+
++ if (trak->out[NGX_HTTP_MP4_HDLR_ATOM].buf) {
++ ngx_log_error(NGX_LOG_ERR, mp4->file.log, 0,
++ "duplicate mp4 hdlr atom in \"%s\"", mp4->file.name.data);
++ return NGX_ERROR;
++ }
++
+ atom = &trak->hdlr_atom_buf;
+ atom->temporary = 1;
+ atom->pos = atom_header;
+@@ -1823,6 +1873,12 @@ ngx_http_mp4_read_minf_atom(ngx_http_mp4_file_t *mp4, uint64_t atom_data_size)
+
+ trak = ngx_mp4_last_trak(mp4);
+
++ if (trak->out[NGX_HTTP_MP4_MINF_ATOM].buf) {
++ ngx_log_error(NGX_LOG_ERR, mp4->file.log, 0,
++ "duplicate mp4 minf atom in \"%s\"", mp4->file.name.data);
++ return NGX_ERROR;
++ }
++
+ atom = &trak->minf_atom_buf;
+ atom->temporary = 1;
+ atom->pos = atom_header;
+@@ -1866,6 +1922,15 @@ ngx_http_mp4_read_vmhd_atom(ngx_http_mp4_file_t *mp4, uint64_t atom_data_size)
+
+ trak = ngx_mp4_last_trak(mp4);
+
++ if (trak->out[NGX_HTTP_MP4_VMHD_ATOM].buf
++ || trak->out[NGX_HTTP_MP4_SMHD_ATOM].buf)
++ {
++ ngx_log_error(NGX_LOG_ERR, mp4->file.log, 0,
++ "duplicate mp4 vmhd/smhd atom in \"%s\"",
++ mp4->file.name.data);
++ return NGX_ERROR;
++ }
++
+ atom = &trak->vmhd_atom_buf;
+ atom->temporary = 1;
+ atom->pos = atom_header;
+@@ -1897,6 +1962,15 @@ ngx_http_mp4_read_smhd_atom(ngx_http_mp4_file_t *mp4, uint64_t atom_data_size)
+
+ trak = ngx_mp4_last_trak(mp4);
+
++ if (trak->out[NGX_HTTP_MP4_VMHD_ATOM].buf
++ || trak->out[NGX_HTTP_MP4_SMHD_ATOM].buf)
++ {
++ ngx_log_error(NGX_LOG_ERR, mp4->file.log, 0,
++ "duplicate mp4 vmhd/smhd atom in \"%s\"",
++ mp4->file.name.data);
++ return NGX_ERROR;
++ }
++
+ atom = &trak->smhd_atom_buf;
+ atom->temporary = 1;
+ atom->pos = atom_header;
+@@ -1928,6 +2002,12 @@ ngx_http_mp4_read_dinf_atom(ngx_http_mp4_file_t *mp4, uint64_t atom_data_size)
+
+ trak = ngx_mp4_last_trak(mp4);
+
++ if (trak->out[NGX_HTTP_MP4_DINF_ATOM].buf) {
++ ngx_log_error(NGX_LOG_ERR, mp4->file.log, 0,
++ "duplicate mp4 dinf atom in \"%s\"", mp4->file.name.data);
++ return NGX_ERROR;
++ }
++
+ atom = &trak->dinf_atom_buf;
+ atom->temporary = 1;
+ atom->pos = atom_header;
+@@ -1956,6 +2036,12 @@ ngx_http_mp4_read_stbl_atom(ngx_http_mp4_file_t *mp4, uint64_t atom_data_size)
+
+ trak = ngx_mp4_last_trak(mp4);
+
++ if (trak->out[NGX_HTTP_MP4_STBL_ATOM].buf) {
++ ngx_log_error(NGX_LOG_ERR, mp4->file.log, 0,
++ "duplicate mp4 stbl atom in \"%s\"", mp4->file.name.data);
++ return NGX_ERROR;
++ }
++
+ atom = &trak->stbl_atom_buf;
+ atom->temporary = 1;
+ atom->pos = atom_header;
+@@ -2024,6 +2110,12 @@ ngx_http_mp4_read_stsd_atom(ngx_http_mp4_file_t *mp4, uint64_t atom_data_size)
+
+ trak = ngx_mp4_last_trak(mp4);
+
++ if (trak->out[NGX_HTTP_MP4_STSD_ATOM].buf) {
++ ngx_log_error(NGX_LOG_ERR, mp4->file.log, 0,
++ "duplicate mp4 stsd atom in \"%s\"", mp4->file.name.data);
++ return NGX_ERROR;
++ }
++
+ atom = &trak->stsd_atom_buf;
+ atom->temporary = 1;
+ atom->pos = atom_header;
+@@ -2092,6 +2184,13 @@ ngx_http_mp4_read_stts_atom(ngx_http_mp4_file_t *mp4, uint64_t atom_data_size)
+ atom_end = atom_table + entries * sizeof(ngx_mp4_stts_entry_t);
+
+ trak = ngx_mp4_last_trak(mp4);
++
++ if (trak->out[NGX_HTTP_MP4_STTS_ATOM].buf) {
++ ngx_log_error(NGX_LOG_ERR, mp4->file.log, 0,
++ "duplicate mp4 stts atom in \"%s\"", mp4->file.name.data);
++ return NGX_ERROR;
++ }
++
+ trak->time_to_sample_entries = entries;
+
+ atom = &trak->stts_atom_buf;
+@@ -2297,6 +2396,13 @@ ngx_http_mp4_read_stss_atom(ngx_http_mp4_file_t *mp4, uint64_t atom_data_size)
+ "sync sample entries:%uD", entries);
+
+ trak = ngx_mp4_last_trak(mp4);
++
++ if (trak->out[NGX_HTTP_MP4_STSS_ATOM].buf) {
++ ngx_log_error(NGX_LOG_ERR, mp4->file.log, 0,
++ "duplicate mp4 stss atom in \"%s\"", mp4->file.name.data);
++ return NGX_ERROR;
++ }
++
+ trak->sync_samples_entries = entries;
+
+ atom_table = atom_header + sizeof(ngx_http_mp4_stss_atom_t);
+@@ -2495,6 +2601,13 @@ ngx_http_mp4_read_ctts_atom(ngx_http_mp4_file_t *mp4, uint64_t atom_data_size)
+ "composition offset entries:%uD", entries);
+
+ trak = ngx_mp4_last_trak(mp4);
++
++ if (trak->out[NGX_HTTP_MP4_CTTS_ATOM].buf) {
++ ngx_log_error(NGX_LOG_ERR, mp4->file.log, 0,
++ "duplicate mp4 ctts atom in \"%s\"", mp4->file.name.data);
++ return NGX_ERROR;
++ }
++
+ trak->composition_offset_entries = entries;
+
+ atom_table = atom_header + sizeof(ngx_mp4_ctts_atom_t);
+@@ -2698,6 +2811,13 @@ ngx_http_mp4_read_stsc_atom(ngx_http_mp4_file_t *mp4, uint64_t atom_data_size)
+ atom_end = atom_table + entries * sizeof(ngx_mp4_stsc_entry_t);
+
+ trak = ngx_mp4_last_trak(mp4);
++
++ if (trak->out[NGX_HTTP_MP4_STSC_ATOM].buf) {
++ ngx_log_error(NGX_LOG_ERR, mp4->file.log, 0,
++ "duplicate mp4 stsc atom in \"%s\"", mp4->file.name.data);
++ return NGX_ERROR;
++ }
++
+ trak->sample_to_chunk_entries = entries;
+
+ atom = &trak->stsc_atom_buf;
+@@ -3030,6 +3150,13 @@ ngx_http_mp4_read_stsz_atom(ngx_http_mp4_file_t *mp4, uint64_t atom_data_size)
+ "sample uniform size:%uD, entries:%uD", size, entries);
+
+ trak = ngx_mp4_last_trak(mp4);
++
++ if (trak->out[NGX_HTTP_MP4_STSZ_ATOM].buf) {
++ ngx_log_error(NGX_LOG_ERR, mp4->file.log, 0,
++ "duplicate mp4 stsz atom in \"%s\"", mp4->file.name.data);
++ return NGX_ERROR;
++ }
++
+ trak->sample_sizes_entries = entries;
+
+ atom_table = atom_header + sizeof(ngx_mp4_stsz_atom_t);
+@@ -3199,6 +3326,16 @@ ngx_http_mp4_read_stco_atom(ngx_http_mp4_file_t *mp4, uint64_t atom_data_size)
+ atom_end = atom_table + entries * sizeof(uint32_t);
+
+ trak = ngx_mp4_last_trak(mp4);
++
++ if (trak->out[NGX_HTTP_MP4_STCO_ATOM].buf
++ || trak->out[NGX_HTTP_MP4_CO64_ATOM].buf)
++ {
++ ngx_log_error(NGX_LOG_ERR, mp4->file.log, 0,
++ "duplicate mp4 stco/co64 atom in \"%s\"",
++ mp4->file.name.data);
++ return NGX_ERROR;
++ }
++
+ trak->chunks = entries;
+
+ atom = &trak->stco_atom_buf;
+@@ -3383,6 +3520,16 @@ ngx_http_mp4_read_co64_atom(ngx_http_mp4_file_t *mp4, uint64_t atom_data_size)
+ atom_end = atom_table + entries * sizeof(uint64_t);
+
+ trak = ngx_mp4_last_trak(mp4);
++
++ if (trak->out[NGX_HTTP_MP4_STCO_ATOM].buf
++ || trak->out[NGX_HTTP_MP4_CO64_ATOM].buf)
++ {
++ ngx_log_error(NGX_LOG_ERR, mp4->file.log, 0,
++ "duplicate mp4 stco/co64 atom in \"%s\"",
++ mp4->file.name.data);
++ return NGX_ERROR;
++ }
++
+ trak->chunks = entries;
+
+ atom = &trak->co64_atom_buf;
+--
+2.25.1
+
diff --git a/meta-openembedded/meta-webserver/recipes-httpd/nginx/nginx_1.16.1.bb b/meta-openembedded/meta-webserver/recipes-httpd/nginx/nginx_1.16.1.bb
index 09d58b8fb9..07e9f6ddbc 100644
--- a/meta-openembedded/meta-webserver/recipes-httpd/nginx/nginx_1.16.1.bb
+++ b/meta-openembedded/meta-webserver/recipes-httpd/nginx/nginx_1.16.1.bb
@@ -5,4 +5,6 @@ LIC_FILES_CHKSUM = "file://LICENSE;md5=52e384aaac868b755b93ad5535e2d075"
SRC_URI[md5sum] = "45a80f75336c980d240987badc3dcf60"
SRC_URI[sha256sum] = "f11c2a6dd1d3515736f0324857957db2de98be862461b5a542a3ac6188dbe32b"
-SRC_URI += "file://CVE-2019-20372.patch"
+SRC_URI += "file://CVE-2019-20372.patch \
+ file://CVE-2022-41741-CVE-2022-41742.patch \
+ "
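
The nginx backport above repeats one guard in every atom reader: if the output buffer (or the per-track out[...] slot) for that atom is already populated, the file contains a duplicate atom and the request fails with NGX_ERROR instead of letting the second copy overwrite state that the first one initialised. The toy sketch below — Python rather than nginx C, with an invented parse_atoms() helper — shows the same idea in isolation:

    # Toy illustration of the "reject duplicate top-level atoms" guard used
    # by the nginx mp4 module fix above; this is not the nginx implementation.
    def parse_atoms(atoms):
        """atoms: iterable of (atom_type, payload) pairs from an MP4 container."""
        seen = {}
        for atom_type, payload in atoms:
            if atom_type in seen:
                # Same effect as the added buf checks: bail out instead of
                # overwriting state already set up by the first occurrence.
                raise ValueError(f"duplicate mp4 {atom_type} atom")
            seen[atom_type] = payload
        return seen

    parse_atoms([("ftyp", b"..."), ("moov", b"..."), ("mdat", b"...")])   # ok
    try:
        parse_atoms([("moov", b"..."), ("moov", b"...")])                 # rejected
    except ValueError as exc:
        print(exc)
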
diff --git a/meta-security/recipes-security/sssd/files/CVE-2022-4254-1.patch b/meta-security/recipes-security/sssd/files/CVE-2022-4254-1.patch
new file mode 100644
index 0000000000..a52ce1aaa7
--- /dev/null
+++ b/meta-security/recipes-security/sssd/files/CVE-2022-4254-1.patch
@@ -0,0 +1,515 @@
+From 1c40208aa1e0f9a17cc4f336c99bcaa6977592d3 Mon Sep 17 00:00:00 2001
+From: Sumit Bose <sbose@redhat.com>
+Date: Tue, 27 Nov 2018 16:40:01 +0100
+Subject: [PATCH] certmap: add sss_certmap_display_cert_content()
+
+To make debugging and writing certificate mapping and matching rules
+more easy a new function is added to libsss_certmap to display the
+certificate content as seen by libsss_certmap. Please note that the
+actual output might change in future.
+
+Reviewed-by: Jakub Hrozek <jhrozek@redhat.com>
+
+CVE: CVE-2022-4254
+Upstream-Status: Backport [https://github.com/SSSD/sssd/commit/1c40208aa1e0f9a17cc4f336c99bcaa6977592d3]
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ Makefile.am | 2 +-
+ src/lib/certmap/sss_certmap.c | 142 ++++++++++++++++++++++
+ src/lib/certmap/sss_certmap.exports | 5 +
+ src/lib/certmap/sss_certmap.h | 18 +++
+ src/lib/certmap/sss_certmap_int.h | 31 ++++-
+ src/lib/certmap/sss_certmap_krb5_match.c | 145 +++++++++++------------
+ 6 files changed, 261 insertions(+), 82 deletions(-)
+
+diff --git a/Makefile.am b/Makefile.am
+index 4475b3d..29cd93c 100644
+--- a/Makefile.am
++++ b/Makefile.am
+@@ -1835,7 +1835,7 @@ libsss_certmap_la_LIBADD = \
+ $(NULL)
+ libsss_certmap_la_LDFLAGS = \
+ -Wl,--version-script,$(srcdir)/src/lib/certmap/sss_certmap.exports \
+- -version-info 0:0:0
++ -version-info 1:0:1
+
+ if HAVE_NSS
+ libsss_certmap_la_SOURCES += \
+diff --git a/src/lib/certmap/sss_certmap.c b/src/lib/certmap/sss_certmap.c
+index f6f6f98..c60ac24 100644
+--- a/src/lib/certmap/sss_certmap.c
++++ b/src/lib/certmap/sss_certmap.c
+@@ -914,3 +914,145 @@ void sss_certmap_free_filter_and_domains(char *filter, char **domains)
+ talloc_free(filter);
+ talloc_free(domains);
+ }
++
++static const char *sss_eku_oid2name(const char *oid)
++{
++ size_t c;
++
++ for (c = 0; sss_ext_key_usage[c].name != NULL; c++) {
++ if (strcmp(sss_ext_key_usage[c].oid, oid) == 0) {
++ return sss_ext_key_usage[c].name;
++ }
++ }
++
++ return NULL;
++}
++
++struct parsed_template san_parsed_template[] = {
++ { NULL, NULL, NULL }, /* SAN_OTHER_NAME handled separately */
++ { "subject_rfc822_name", NULL, NULL},
++ { "subject_dns_name", NULL, NULL},
++ { "subject_x400_address", NULL, NULL},
++ { "subject_directory_name", NULL, NULL},
++ { "subject_ediparty_name", NULL, NULL},
++ { "subject_uri", NULL, NULL},
++ { "subject_ip_address", NULL, NULL},
++ { "subject_registered_id", NULL, NULL},
++ { "subject_pkinit_principal", NULL, NULL},
++ { "subject_nt_principal", NULL, NULL},
++ { "subject_principal", NULL, NULL},
++ { NULL, NULL, NULL }, /* SAN_STRING_OTHER_NAME handled separately */
++ { NULL, NULL, NULL } /* SAN_END */
++};
++
++int sss_cert_dump_content(TALLOC_CTX *mem_ctx, struct sss_cert_content *c,
++ char **content_str)
++{
++ char *out = NULL;
++ size_t o;
++ struct san_list *s;
++ struct sss_certmap_ctx *ctx = NULL;
++ char *expanded = NULL;
++ int ret;
++ char *b64 = NULL;
++ const char *eku_str = NULL;
++
++ ret = sss_certmap_init(mem_ctx, NULL, NULL, &ctx);
++ if (ret != EOK) {
++ return ret;
++ }
++
++ out = talloc_strdup(mem_ctx, "sss cert content (format might change):\n");
++ if (out == NULL) return ENOMEM;
++
++ out = talloc_asprintf_append(out, "Issuer: %s\n", c->issuer_str != NULL
++ ? c->issuer_str
++ : "- not available -");
++ if (out == NULL) return ENOMEM;
++ out = talloc_asprintf_append(out, "Subject: %s\n", c->subject_str != NULL
++ ? c->subject_str
++ : "- not available -");
++ if (out == NULL) return ENOMEM;
++
++ out = talloc_asprintf_append(out, "Key Usage: %u(0x%04x)", c->key_usage,
++ c->key_usage);
++ if (out == NULL) return ENOMEM;
++
++ if (c->key_usage != 0) {
++ out = talloc_asprintf_append(out, " (");
++ if (out == NULL) return ENOMEM;
++ for (o = 0; sss_key_usage[o].name != NULL; o++) {
++ if ((c->key_usage & sss_key_usage[o].flag) != 0) {
++ out = talloc_asprintf_append(out, "%s%s",
++ o == 0 ? "" : ",",
++ sss_key_usage[o].name);
++ if (out == NULL) return ENOMEM;
++ }
++ }
++ out = talloc_asprintf_append(out, ")");
++ if (out == NULL) return ENOMEM;
++ }
++ out = talloc_asprintf_append(out, "\n");
++ if (out == NULL) return ENOMEM;
++
++ for (o = 0; c->extended_key_usage_oids[o] != NULL; o++) {
++ eku_str = sss_eku_oid2name(c->extended_key_usage_oids[o]);
++ out = talloc_asprintf_append(out, "Extended Key Usage #%zu: %s%s%s%s\n",
++ o, c->extended_key_usage_oids[o],
++ eku_str == NULL ? "" : " (",
++ eku_str == NULL ? "" : eku_str,
++ eku_str == NULL ? "" : ")");
++ if (out == NULL) return ENOMEM;
++ }
++
++ DLIST_FOR_EACH(s, c->san_list) {
++ out = talloc_asprintf_append(out, "SAN type: %s\n",
++ s->san_opt < SAN_END
++ ? sss_san_names[s->san_opt].name
++ : "- unsupported -");
++ if (out == NULL) return ENOMEM;
++
++ if (san_parsed_template[s->san_opt].name != NULL) {
++ ret = expand_san(ctx, &san_parsed_template[s->san_opt], c->san_list,
++ &expanded);
++ if (ret != EOK) {
++ return ret;
++ }
++ out = talloc_asprintf_append(out, " %s=%s\n\n",
++ san_parsed_template[s->san_opt].name,
++ expanded);
++ talloc_free(expanded);
++ if (out == NULL) return ENOMEM;
++ } else if (s->san_opt == SAN_STRING_OTHER_NAME) {
++ b64 = sss_base64_encode(mem_ctx, s->bin_val, s->bin_val_len);
++ out = talloc_asprintf_append(out, " %s=%s\n\n", s->other_name_oid,
++ b64 != NULL ? b64
++ : "- cannot encode -");
++ talloc_free(b64);
++ }
++ }
++
++ *content_str = out;
++
++ return EOK;
++}
++
++int sss_certmap_display_cert_content(TALLOC_CTX *mem_cxt,
++ const uint8_t *der_cert, size_t der_size,
++ char **desc)
++{
++ int ret;
++ struct sss_cert_content *content;
++
++ ret = sss_cert_get_content(mem_cxt, der_cert, der_size, &content);
++ if (ret != EOK) {
++ return ret;
++ }
++
++ ret = sss_cert_dump_content(mem_cxt, content, desc);
++ if (ret != EOK) {
++ return ret;
++ }
++
++ return 0;
++}
+diff --git a/src/lib/certmap/sss_certmap.exports b/src/lib/certmap/sss_certmap.exports
+index 8b5d536..a9e48d6 100644
+--- a/src/lib/certmap/sss_certmap.exports
++++ b/src/lib/certmap/sss_certmap.exports
+@@ -11,3 +11,8 @@ SSS_CERTMAP_0.0 {
+ local:
+ *;
+ };
++
++SSS_CERTMAP_0.1 {
++ global:
++ sss_certmap_display_cert_content;
++} SSS_CERTMAP_0.0;
+diff --git a/src/lib/certmap/sss_certmap.h b/src/lib/certmap/sss_certmap.h
+index 646e0f3..7da2d1c 100644
+--- a/src/lib/certmap/sss_certmap.h
++++ b/src/lib/certmap/sss_certmap.h
+@@ -146,6 +146,24 @@ int sss_certmap_get_search_filter(struct sss_certmap_ctx *ctx,
+ */
+ void sss_certmap_free_filter_and_domains(char *filter, char **domains);
+
++/**
++ * @brief Get a string with the content of the certificate used by the library
++ *
++ * @param[in] mem_ctx Talloc memory context, may be NULL
++ * @param[in] der_cert binary blog with the DER encoded certificate
++ * @param[in] der_size size of the certificate blob
++ * @param[out] desc Multiline string showing the certificate content
++ * which is used by libsss_certmap
++ *
++ * @return
++ * - 0: success
++ * - EINVAL: certificate cannot be parsed
++ * - ENOMEM: memory allocation failure
++ */
++int sss_certmap_display_cert_content(TALLOC_CTX *mem_cxt,
++ const uint8_t *der_cert, size_t der_size,
++ char **desc);
++
+ /**
+ * @}
+ */
+diff --git a/src/lib/certmap/sss_certmap_int.h b/src/lib/certmap/sss_certmap_int.h
+index 479cc16..b1155e2 100644
+--- a/src/lib/certmap/sss_certmap_int.h
++++ b/src/lib/certmap/sss_certmap_int.h
+@@ -101,9 +101,9 @@ enum comp_type {
+ };
+
+ struct parsed_template {
+- char *name;
+- char *attr_name;
+- char *conversion;
++ const char *name;
++ const char *attr_name;
++ const char *conversion;
+ };
+
+ struct ldap_mapping_rule_comp {
+@@ -166,6 +166,28 @@ struct san_list {
+ #define SSS_KU_ENCIPHER_ONLY 0x0001
+ #define SSS_KU_DECIPHER_ONLY 0x8000
+
++struct sss_key_usage {
++ const char *name;
++ uint32_t flag;
++};
++
++extern const struct sss_key_usage sss_key_usage[];
++
++struct sss_ext_key_usage {
++ const char *name;
++ const char *oid;
++};
++
++extern const struct sss_ext_key_usage sss_ext_key_usage[];
++
++struct sss_san_name {
++ const char *name;
++ enum san_opt san_opt;
++ bool is_string;
++};
++
++extern const struct sss_san_name sss_san_names[];
++
+ struct sss_cert_content {
+ char *issuer_str;
+ const char **issuer_rdn_list;
+@@ -183,6 +205,9 @@ int sss_cert_get_content(TALLOC_CTX *mem_ctx,
+ const uint8_t *der_blob, size_t der_size,
+ struct sss_cert_content **content);
+
++int sss_cert_dump_content(TALLOC_CTX *mem_ctx, struct sss_cert_content *c,
++ char **content_str);
++
+ char *check_ad_attr_name(TALLOC_CTX *mem_ctx, const char *rdn);
+
+ char *openssl_2_nss_attr_name(const char *attr);
+diff --git a/src/lib/certmap/sss_certmap_krb5_match.c b/src/lib/certmap/sss_certmap_krb5_match.c
+index 125e925..398d3d2 100644
+--- a/src/lib/certmap/sss_certmap_krb5_match.c
++++ b/src/lib/certmap/sss_certmap_krb5_match.c
+@@ -29,6 +29,59 @@
+ #include "lib/certmap/sss_certmap.h"
+ #include "lib/certmap/sss_certmap_int.h"
+
++const struct sss_key_usage sss_key_usage[] = {
++ {"digitalSignature" , SSS_KU_DIGITAL_SIGNATURE},
++ {"nonRepudiation" , SSS_KU_NON_REPUDIATION},
++ {"keyEncipherment" , SSS_KU_KEY_ENCIPHERMENT},
++ {"dataEncipherment" , SSS_KU_DATA_ENCIPHERMENT},
++ {"keyAgreement" , SSS_KU_KEY_AGREEMENT},
++ {"keyCertSign" , SSS_KU_KEY_CERT_SIGN},
++ {"cRLSign" , SSS_KU_CRL_SIGN},
++ {"encipherOnly" , SSS_KU_ENCIPHER_ONLY},
++ {"decipherOnly" , SSS_KU_DECIPHER_ONLY},
++ {NULL ,0}
++};
++
++const struct sss_ext_key_usage sss_ext_key_usage[] = {
++ /* RFC 3280 section 4.2.1.13 */
++ {"serverAuth", "1.3.6.1.5.5.7.3.1"},
++ {"clientAuth", "1.3.6.1.5.5.7.3.2"},
++ {"codeSigning", "1.3.6.1.5.5.7.3.3"},
++ {"emailProtection", "1.3.6.1.5.5.7.3.4"},
++ {"timeStamping", "1.3.6.1.5.5.7.3.8"},
++ {"OCSPSigning", "1.3.6.1.5.5.7.3.9"},
++
++ /* RFC 4556 section 3.2.2 */
++ {"KPClientAuth", "1.3.6.1.5.2.3.4"},
++ {"pkinit", "1.3.6.1.5.2.3.4"},
++
++ /* https://support.microsoft.com/en-us/help/287547/object-ids-associated-with-microsoft-cryptography*/
++ {"msScLogin", "1.3.6.1.4.1.311.20.2.2"},
++
++ {NULL ,0}
++};
++
++const struct sss_san_name sss_san_names[] = {
++ /* https://www.ietf.org/rfc/rfc3280.txt section 4.2.1.7 */
++ {"otherName", SAN_OTHER_NAME, false},
++ {"rfc822Name", SAN_RFC822_NAME, true},
++ {"dNSName", SAN_DNS_NAME, true},
++ {"x400Address", SAN_X400_ADDRESS, false},
++ {"directoryName", SAN_DIRECTORY_NAME, true},
++ {"ediPartyName", SAN_EDIPART_NAME, false},
++ {"uniformResourceIdentifier", SAN_URI, true},
++ {"iPAddress", SAN_IP_ADDRESS, true},
++ {"registeredID", SAN_REGISTERED_ID, true},
++ /* https://www.ietf.org/rfc/rfc4556.txt section 3.2.2 */
++ {"pkinitSAN", SAN_PKINIT, true},
++ /* https://support.microsoft.com/en-us/help/287547/object-ids-associated-with-microsoft-cryptography */
++ {"ntPrincipalName", SAN_NT, true},
++ /* both previous principal types */
++ {"Principal", SAN_PRINCIPAL, true},
++ {"stringOtherName", SAN_STRING_OTHER_NAME, true},
++ {NULL, SAN_END, false}
++};
++
+ static bool is_dotted_decimal(const char *s, size_t len)
+ {
+ size_t c = 0;
+@@ -145,28 +198,6 @@ static int parse_krb5_get_eku_value(TALLOC_CTX *mem_ctx,
+ size_t e = 0;
+ int eku_list_size;
+
+- struct ext_key_usage {
+- const char *name;
+- const char *oid;
+- } ext_key_usage[] = {
+- /* RFC 3280 section 4.2.1.13 */
+- {"serverAuth", "1.3.6.1.5.5.7.3.1"},
+- {"clientAuth", "1.3.6.1.5.5.7.3.2"},
+- {"codeSigning", "1.3.6.1.5.5.7.3.3"},
+- {"emailProtection", "1.3.6.1.5.5.7.3.4"},
+- {"timeStamping", "1.3.6.1.5.5.7.3.8"},
+- {"OCSPSigning", "1.3.6.1.5.5.7.3.9"},
+-
+- /* RFC 4556 section 3.2.2 */
+- {"KPClientAuth", "1.3.6.1.5.2.3.4"},
+- {"pkinit", "1.3.6.1.5.2.3.4"},
+-
+- /* https://support.microsoft.com/en-us/help/287547/object-ids-associated-with-microsoft-cryptography*/
+- {"msScLogin", "1.3.6.1.4.1.311.20.2.2"},
+-
+- {NULL ,0}
+- };
+-
+ ret = get_comp_value(mem_ctx, ctx, cur, &comp);
+ if (ret != 0) {
+ CM_DEBUG(ctx, "Failed to parse regexp.");
+@@ -188,11 +219,11 @@ static int parse_krb5_get_eku_value(TALLOC_CTX *mem_ctx,
+ }
+
+ for (c = 0; eku_list[c] != NULL; c++) {
+- for (k = 0; ext_key_usage[k].name != NULL; k++) {
+-CM_DEBUG(ctx, "[%s][%s].", eku_list[c], ext_key_usage[k].name);
+- if (strcasecmp(eku_list[c], ext_key_usage[k].name) == 0) {
++ for (k = 0; sss_ext_key_usage[k].name != NULL; k++) {
++CM_DEBUG(ctx, "[%s][%s].", eku_list[c], sss_ext_key_usage[k].name);
++ if (strcasecmp(eku_list[c], sss_ext_key_usage[k].name) == 0) {
+ comp->eku_oid_list[e] = talloc_strdup(comp->eku_oid_list,
+- ext_key_usage[k].oid);
++ sss_ext_key_usage[k].oid);
+ if (comp->eku_oid_list[e] == NULL) {
+ ret = ENOMEM;
+ goto done;
+@@ -202,7 +233,7 @@ CM_DEBUG(ctx, "[%s][%s].", eku_list[c], ext_key_usage[k].name);
+ }
+ }
+
+- if (ext_key_usage[k].name == NULL) {
++ if (sss_ext_key_usage[k].name == NULL) {
+ /* check for an dotted-decimal OID */
+ if (*(eku_list[c]) != '.') {
+ o = eku_list[c];
+@@ -252,23 +283,6 @@ static int parse_krb5_get_ku_value(TALLOC_CTX *mem_ctx,
+ size_t c;
+ size_t k;
+
+- struct key_usage {
+- const char *name;
+- uint32_t flag;
+- } key_usage[] = {
+- {"digitalSignature" , SSS_KU_DIGITAL_SIGNATURE},
+- {"nonRepudiation" , SSS_KU_NON_REPUDIATION},
+- {"keyEncipherment" , SSS_KU_KEY_ENCIPHERMENT},
+- {"dataEncipherment" , SSS_KU_DATA_ENCIPHERMENT},
+- {"keyAgreement" , SSS_KU_KEY_AGREEMENT},
+- {"keyCertSign" , SSS_KU_KEY_CERT_SIGN},
+- {"cRLSign" , SSS_KU_CRL_SIGN},
+- {"encipherOnly" , SSS_KU_ENCIPHER_ONLY},
+- {"decipherOnly" , SSS_KU_DECIPHER_ONLY},
+- {NULL ,0}
+- };
+-
+-
+ ret = get_comp_value(mem_ctx, ctx, cur, &comp);
+ if (ret != 0) {
+ CM_DEBUG(ctx, "Failed to get value.");
+@@ -283,14 +297,14 @@ static int parse_krb5_get_ku_value(TALLOC_CTX *mem_ctx,
+ }
+
+ for (c = 0; ku_list[c] != NULL; c++) {
+- for (k = 0; key_usage[k].name != NULL; k++) {
+- if (strcasecmp(ku_list[c], key_usage[k].name) == 0) {
+- comp->ku |= key_usage[k].flag;
++ for (k = 0; sss_key_usage[k].name != NULL; k++) {
++ if (strcasecmp(ku_list[c], sss_key_usage[k].name) == 0) {
++ comp->ku |= sss_key_usage[k].flag;
+ break;
+ }
+ }
+
+- if (key_usage[k].name == NULL) {
++ if (sss_key_usage[k].name == NULL) {
+ /* FIXME: add check for numerical ku */
+ CM_DEBUG(ctx, "No matching key usage found.");
+ ret = EINVAL;
+@@ -342,31 +356,6 @@ done:
+ return ret;
+ }
+
+-struct san_name {
+- const char *name;
+- enum san_opt san_opt;
+- bool is_string;
+-} san_names[] = {
+- /* https://www.ietf.org/rfc/rfc3280.txt section 4.2.1.7 */
+- {"otherName", SAN_OTHER_NAME, false},
+- {"rfc822Name", SAN_RFC822_NAME,true},
+- {"dNSName", SAN_DNS_NAME, true},
+- {"x400Address", SAN_X400_ADDRESS, false},
+- {"directoryName", SAN_DIRECTORY_NAME, true},
+- {"ediPartyName", SAN_EDIPART_NAME, false},
+- {"uniformResourceIdentifier", SAN_URI, true},
+- {"iPAddress", SAN_IP_ADDRESS, true},
+- {"registeredID", SAN_REGISTERED_ID, true},
+- /* https://www.ietf.org/rfc/rfc4556.txt section 3.2.2 */
+- {"pkinitSAN", SAN_PKINIT, true},
+- /* https://support.microsoft.com/en-us/help/287547/object-ids-associated-with-microsoft-cryptography */
+- {"ntPrincipalName", SAN_NT, true},
+- /* both previous principal types */
+- {"Principal", SAN_PRINCIPAL, true},
+- {"stringOtherName", SAN_STRING_OTHER_NAME, true},
+- {NULL, SAN_END, false}
+-};
+-
+ static int parse_krb5_get_san_option(TALLOC_CTX *mem_ctx,
+ struct sss_certmap_ctx *ctx,
+ const char **cur,
+@@ -388,12 +377,12 @@ static int parse_krb5_get_san_option(TALLOC_CTX *mem_ctx,
+ if (len == 0) {
+ c= SAN_PRINCIPAL;
+ } else {
+- for (c = 0; san_names[c].name != NULL; c++) {
+- if (strncasecmp(*cur, san_names[c].name, len) == 0) {
++ for (c = 0; sss_san_names[c].name != NULL; c++) {
++ if (strncasecmp(*cur, sss_san_names[c].name, len) == 0) {
+ break;
+ }
+ }
+- if (san_names[c].name == NULL) {
++ if (sss_san_names[c].name == NULL) {
+ if (is_dotted_decimal(*cur, len)) {
+ c = SAN_STRING_OTHER_NAME;
+ *str_other_name_oid = talloc_strndup(mem_ctx, *cur, len);
+@@ -408,7 +397,7 @@ static int parse_krb5_get_san_option(TALLOC_CTX *mem_ctx,
+ }
+ }
+
+- *option = san_names[c].san_opt;
++ *option = sss_san_names[c].san_opt;
+ *cur = end + 1;
+
+ return 0;
+@@ -432,7 +421,7 @@ static int parse_krb5_get_san_value(TALLOC_CTX *mem_ctx,
+ }
+ }
+
+- if (san_names[san_opt].is_string) {
++ if (sss_san_names[san_opt].is_string) {
+ ret = parse_krb5_get_component_value(mem_ctx, ctx, cur, &comp);
+ if (ret != 0) {
+ goto done;
+--
+2.25.1
+
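
The new sss_certmap_display_cert_content() entry point takes a DER blob plus its size and returns a multiline string with the issuer, subject, key usage, extended key usage and SAN entries — the fields the dump code added to sss_certmap.c assembles. As a rough analogue outside SSSD — assuming the Python cryptography package (a reasonably recent release) is available; none of this is the libsss_certmap API — the same fields can be pulled from a DER certificate like this:

    # Rough analogue of sss_certmap_display_cert_content() for debugging
    # mapping rules, using the 'cryptography' package, not libsss_certmap.
    from cryptography import x509

    def display_cert_content(der_cert: bytes) -> str:
        cert = x509.load_der_x509_certificate(der_cert)
        lines = [
            f"Issuer: {cert.issuer.rfc4514_string()}",
            f"Subject: {cert.subject.rfc4514_string()}",
        ]
        try:
            eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage)
            for oid in eku.value:
                lines.append(f"Extended Key Usage: {oid.dotted_string}")
        except x509.ExtensionNotFound:
            pass
        try:
            san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
            for name in san.value:
                lines.append(f"SAN: {name}")
        except x509.ExtensionNotFound:
            pass
        return "\n".join(lines)
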
diff --git a/meta-security/recipes-security/sssd/files/CVE-2022-4254-2.patch b/meta-security/recipes-security/sssd/files/CVE-2022-4254-2.patch
new file mode 100644
index 0000000000..018b95c42f
--- /dev/null
+++ b/meta-security/recipes-security/sssd/files/CVE-2022-4254-2.patch
@@ -0,0 +1,655 @@
+From a2b9a84460429181f2a4fa7e2bb5ab49fd561274 Mon Sep 17 00:00:00 2001
+From: Sumit Bose <sbose@redhat.com>
+Date: Mon, 9 Dec 2019 11:31:14 +0100
+Subject: [PATCH] certmap: sanitize LDAP search filter
+
+The sss_certmap_get_search_filter() will now sanitize the values read
+from the certificates before adding them to a search filter. To be able
+to get the plain values as well sss_certmap_expand_mapping_rule() is
+added.
+
+Resolves:
+https://github.com/SSSD/sssd/issues/5135
+
+Reviewed-by: Alexey Tikhonov <atikhono@redhat.com>
+
+CVE: CVE-2022-4254
+Upstream-Status: Backport [https://github.com/SSSD/sssd/commit/a2b9a84460429181f2a4fa7e2bb5ab49fd561274]
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ Makefile.am | 2 +-
+ src/lib/certmap/sss_certmap.c | 42 ++++++++++--
+ src/lib/certmap/sss_certmap.exports | 5 ++
+ src/lib/certmap/sss_certmap.h | 35 ++++++++--
+ src/responder/pam/pamsrv_p11.c | 5 +-
+ src/tests/cmocka/test_certmap.c | 98 +++++++++++++++++++++++++++-
+ src/util/util.c | 94 ---------------------------
+ src/util/util_ext.c | 99 +++++++++++++++++++++++++++++
+ 8 files changed, 272 insertions(+), 108 deletions(-)
+
+diff --git a/Makefile.am b/Makefile.am
+index 29cd93c..dd6add2 100644
+--- a/Makefile.am
++++ b/Makefile.am
+@@ -1835,7 +1835,7 @@ libsss_certmap_la_LIBADD = \
+ $(NULL)
+ libsss_certmap_la_LDFLAGS = \
+ -Wl,--version-script,$(srcdir)/src/lib/certmap/sss_certmap.exports \
+- -version-info 1:0:1
++ -version-info 2:0:2
+
+ if HAVE_NSS
+ libsss_certmap_la_SOURCES += \
+diff --git a/src/lib/certmap/sss_certmap.c b/src/lib/certmap/sss_certmap.c
+index c60ac24..d7bc992 100644
+--- a/src/lib/certmap/sss_certmap.c
++++ b/src/lib/certmap/sss_certmap.c
+@@ -441,10 +441,12 @@ static int expand_san(struct sss_certmap_ctx *ctx,
+ static int expand_template(struct sss_certmap_ctx *ctx,
+ struct parsed_template *parsed_template,
+ struct sss_cert_content *cert_content,
++ bool sanitize,
+ char **expanded)
+ {
+ int ret;
+ char *exp = NULL;
++ char *exp_sanitized = NULL;
+
+ if (strcmp("issuer_dn", parsed_template->name) == 0) {
+ ret = rdn_list_2_dn_str(ctx, parsed_template->conversion,
+@@ -455,6 +457,8 @@ static int expand_template(struct sss_certmap_ctx *ctx,
+ } else if (strncmp("subject_", parsed_template->name, 8) == 0) {
+ ret = expand_san(ctx, parsed_template, cert_content->san_list, &exp);
+ } else if (strcmp("cert", parsed_template->name) == 0) {
++ /* cert blob is already sanitized */
++ sanitize = false;
+ ret = expand_cert(ctx, parsed_template, cert_content, &exp);
+ } else {
+ CM_DEBUG(ctx, "Unsupported template name.");
+@@ -471,6 +475,16 @@ static int expand_template(struct sss_certmap_ctx *ctx,
+ goto done;
+ }
+
++ if (sanitize) {
++ ret = sss_filter_sanitize(ctx, exp, &exp_sanitized);
++ if (ret != EOK) {
++ CM_DEBUG(ctx, "Failed to sanitize expanded template.");
++ goto done;
++ }
++ talloc_free(exp);
++ exp = exp_sanitized;
++ }
++
+ ret = 0;
+
+ done:
+@@ -485,7 +499,7 @@ done:
+
+ static int get_filter(struct sss_certmap_ctx *ctx,
+ struct ldap_mapping_rule *parsed_mapping_rule,
+- struct sss_cert_content *cert_content,
++ struct sss_cert_content *cert_content, bool sanitize,
+ char **filter)
+ {
+ struct ldap_mapping_rule_comp *comp;
+@@ -503,7 +517,7 @@ static int get_filter(struct sss_certmap_ctx *ctx,
+ result = talloc_strdup_append(result, comp->val);
+ } else if (comp->type == comp_template) {
+ ret = expand_template(ctx, comp->parsed_template, cert_content,
+- &expanded);
++ sanitize, &expanded);
+ if (ret != 0) {
+ CM_DEBUG(ctx, "Failed to expanded template.");
+ goto done;
+@@ -791,8 +805,9 @@ done:
+ return ret;
+ }
+
+-int sss_certmap_get_search_filter(struct sss_certmap_ctx *ctx,
++static int expand_mapping_rule_ex(struct sss_certmap_ctx *ctx,
+ const uint8_t *der_cert, size_t der_size,
++ bool sanitize,
+ char **_filter, char ***_domains)
+ {
+ int ret;
+@@ -819,7 +834,8 @@ int sss_certmap_get_search_filter(struct sss_certmap_ctx *ctx,
+ return EINVAL;
+ }
+
+- ret = get_filter(ctx, ctx->default_mapping_rule, cert_content, &filter);
++ ret = get_filter(ctx, ctx->default_mapping_rule, cert_content, sanitize,
++ &filter);
+ goto done;
+ }
+
+@@ -829,7 +845,7 @@ int sss_certmap_get_search_filter(struct sss_certmap_ctx *ctx,
+ if (ret == 0) {
+ /* match */
+ ret = get_filter(ctx, r->parsed_mapping_rule, cert_content,
+- &filter);
++ sanitize, &filter);
+ if (ret != 0) {
+ CM_DEBUG(ctx, "Failed to get filter");
+ goto done;
+@@ -873,6 +889,22 @@ done:
+ return ret;
+ }
+
++int sss_certmap_get_search_filter(struct sss_certmap_ctx *ctx,
++ const uint8_t *der_cert, size_t der_size,
++ char **_filter, char ***_domains)
++{
++ return expand_mapping_rule_ex(ctx, der_cert, der_size, true,
++ _filter, _domains);
++}
++
++int sss_certmap_expand_mapping_rule(struct sss_certmap_ctx *ctx,
++ const uint8_t *der_cert, size_t der_size,
++ char **_expanded, char ***_domains)
++{
++ return expand_mapping_rule_ex(ctx, der_cert, der_size, false,
++ _expanded, _domains);
++}
++
+ int sss_certmap_init(TALLOC_CTX *mem_ctx,
+ sss_certmap_ext_debug *debug, void *debug_priv,
+ struct sss_certmap_ctx **ctx)
+diff --git a/src/lib/certmap/sss_certmap.exports b/src/lib/certmap/sss_certmap.exports
+index a9e48d6..7d76677 100644
+--- a/src/lib/certmap/sss_certmap.exports
++++ b/src/lib/certmap/sss_certmap.exports
+@@ -16,3 +16,8 @@ SSS_CERTMAP_0.1 {
+ global:
+ sss_certmap_display_cert_content;
+ } SSS_CERTMAP_0.0;
++
++SSS_CERTMAP_0.2 {
++ global:
++ sss_certmap_expand_mapping_rule;
++} SSS_CERTMAP_0.1;
+diff --git a/src/lib/certmap/sss_certmap.h b/src/lib/certmap/sss_certmap.h
+index 7da2d1c..058d4f9 100644
+--- a/src/lib/certmap/sss_certmap.h
++++ b/src/lib/certmap/sss_certmap.h
+@@ -103,7 +103,7 @@ int sss_certmap_add_rule(struct sss_certmap_ctx *ctx,
+ *
+ * @param[in] ctx certmap context previously initialized with
+ * @ref sss_certmap_init
+- * @param[in] der_cert binary blog with the DER encoded certificate
++ * @param[in] der_cert binary blob with the DER encoded certificate
+ * @param[in] der_size size of the certificate blob
+ *
+ * @return
+@@ -119,10 +119,11 @@ int sss_certmap_match_cert(struct sss_certmap_ctx *ctx,
+ *
+ * @param[in] ctx certmap context previously initialized with
+ * @ref sss_certmap_init
+- * @param[in] der_cert binary blog with the DER encoded certificate
++ * @param[in] der_cert binary blob with the DER encoded certificate
+ * @param[in] der_size size of the certificate blob
+- * @param[out] filter LDAP filter string, caller should free the data by
+- * calling sss_certmap_free_filter_and_domains
++ * @param[out] filter LDAP filter string, expanded templates are sanitized,
++ * caller should free the data by calling
++ * sss_certmap_free_filter_and_domains
+ * @param[out] domains NULL-terminated array of strings with the domains the
+ * rule applies, caller should free the data by calling
+ * sss_certmap_free_filter_and_domains
+@@ -136,8 +137,32 @@ int sss_certmap_get_search_filter(struct sss_certmap_ctx *ctx,
+ const uint8_t *der_cert, size_t der_size,
+ char **filter, char ***domains);
+
++/**
++ * @brief Expand the mapping rule by replacing the templates
++ *
++ * @param[in] ctx certmap context previously initialized with
++ * @ref sss_certmap_init
++ * @param[in] der_cert binary blob with the DER encoded certificate
++ * @param[in] der_size size of the certificate blob
++ * @param[out] expanded expanded mapping rule, templates are filled in
++ * verbatim in contrast to sss_certmap_get_search_filter,
++ * caller should free the data by
++ * calling sss_certmap_free_filter_and_domains
++ * @param[out] domains NULL-terminated array of strings with the domains the
++ * rule applies, caller should free the data by calling
++ * sss_certmap_free_filter_and_domains
++ *
++ * @return
++ * - 0: certificate matches a rule
++ * - ENOENT: certificate does not match
++ * - EINVAL: internal error
++ */
++int sss_certmap_expand_mapping_rule(struct sss_certmap_ctx *ctx,
++ const uint8_t *der_cert, size_t der_size,
++ char **_expanded, char ***_domains);
+ /**
+ * @brief Free data returned by @ref sss_certmap_get_search_filter
++ * and @ref sss_certmap_expand_mapping_rule
+ *
+ * @param[in] filter LDAP filter strings returned by
+ * sss_certmap_get_search_filter
+@@ -150,7 +175,7 @@ void sss_certmap_free_filter_and_domains(char *filter, char **domains);
+ * @brief Get a string with the content of the certificate used by the library
+ *
+ * @param[in] mem_ctx Talloc memory context, may be NULL
+- * @param[in] der_cert binary blog with the DER encoded certificate
++ * @param[in] der_cert binary blob with the DER encoded certificate
+ * @param[in] der_size size of the certificate blob
+ * @param[out] desc Multiline string showing the certificate content
+ * which is used by libsss_certmap
+diff --git a/src/responder/pam/pamsrv_p11.c b/src/responder/pam/pamsrv_p11.c
+index c7e57be..b9f6787 100644
+--- a/src/responder/pam/pamsrv_p11.c
++++ b/src/responder/pam/pamsrv_p11.c
+@@ -1023,9 +1023,10 @@ static char *get_cert_prompt(TALLOC_CTX *mem_ctx,
+ goto done;
+ }
+
+- ret = sss_certmap_get_search_filter(ctx, der, der_size, &filter, &domains);
++ ret = sss_certmap_expand_mapping_rule(ctx, der, der_size,
++ &filter, &domains);
+ if (ret != 0) {
+- DEBUG(SSSDBG_OP_FAILURE, "sss_certmap_get_search_filter failed.\n");
++ DEBUG(SSSDBG_OP_FAILURE, "sss_certmap_expand_mapping_rule failed.\n");
+ goto done;
+ }
+
+diff --git a/src/tests/cmocka/test_certmap.c b/src/tests/cmocka/test_certmap.c
+index 3091e1a..abf1dba 100644
+--- a/src/tests/cmocka/test_certmap.c
++++ b/src/tests/cmocka/test_certmap.c
+@@ -1387,6 +1387,15 @@ static void test_sss_certmap_get_search_filter(void **state)
+ &filter, &domains);
+ assert_int_equal(ret, 0);
+ assert_non_null(filter);
++ assert_string_equal(filter, "rule100=<I>CN=Certificate\\20Authority,O=IPA.DEVEL"
++ "<S>CN=ipa-devel.ipa.devel,O=IPA.DEVEL");
++ assert_null(domains);
++
++ ret = sss_certmap_expand_mapping_rule(ctx, discard_const(test_cert_der),
++ sizeof(test_cert_der),
++ &filter, &domains);
++ assert_int_equal(ret, 0);
++ assert_non_null(filter);
+ assert_string_equal(filter, "rule100=<I>CN=Certificate Authority,O=IPA.DEVEL"
+ "<S>CN=ipa-devel.ipa.devel,O=IPA.DEVEL");
+ assert_null(domains);
+@@ -1401,6 +1410,17 @@ static void test_sss_certmap_get_search_filter(void **state)
+ &filter, &domains);
+ assert_int_equal(ret, 0);
+ assert_non_null(filter);
++ assert_string_equal(filter, "rule99=<I>CN=Certificate\\20Authority,O=IPA.DEVEL"
++ "<S>CN=ipa-devel.ipa.devel,O=IPA.DEVEL");
++ assert_non_null(domains);
++ assert_string_equal(domains[0], "test.dom");
++ assert_null(domains[1]);
++
++ ret = sss_certmap_expand_mapping_rule(ctx, discard_const(test_cert_der),
++ sizeof(test_cert_der),
++ &filter, &domains);
++ assert_int_equal(ret, 0);
++ assert_non_null(filter);
+ assert_string_equal(filter, "rule99=<I>CN=Certificate Authority,O=IPA.DEVEL"
+ "<S>CN=ipa-devel.ipa.devel,O=IPA.DEVEL");
+ assert_non_null(domains);
+@@ -1422,6 +1442,16 @@ static void test_sss_certmap_get_search_filter(void **state)
+ assert_string_equal(domains[0], "test.dom");
+ assert_null(domains[1]);
+
++ ret = sss_certmap_expand_mapping_rule(ctx, discard_const(test_cert_der),
++ sizeof(test_cert_der),
++ &filter, &domains);
++ assert_int_equal(ret, 0);
++ assert_non_null(filter);
++ assert_string_equal(filter, "rule98=userCertificate;binary=" TEST_CERT_BIN);
++ assert_non_null(domains);
++ assert_string_equal(domains[0], "test.dom");
++ assert_null(domains[1]);
++
+ ret = sss_certmap_add_rule(ctx, 97,
+ "KRB5:<ISSUER>CN=Certificate Authority,O=IPA.DEVEL",
+ "LDAP:rule97=<I>{issuer_dn!nss_x500}<S>{subject_dn}",
+@@ -1432,6 +1462,17 @@ static void test_sss_certmap_get_search_filter(void **state)
+ &filter, &domains);
+ assert_int_equal(ret, 0);
+ assert_non_null(filter);
++ assert_string_equal(filter, "rule97=<I>O=IPA.DEVEL,CN=Certificate\\20Authority"
++ "<S>CN=ipa-devel.ipa.devel,O=IPA.DEVEL");
++ assert_non_null(domains);
++ assert_string_equal(domains[0], "test.dom");
++ assert_null(domains[1]);
++
++ ret = sss_certmap_expand_mapping_rule(ctx, discard_const(test_cert_der),
++ sizeof(test_cert_der),
++ &filter, &domains);
++ assert_int_equal(ret, 0);
++ assert_non_null(filter);
+ assert_string_equal(filter, "rule97=<I>O=IPA.DEVEL,CN=Certificate Authority"
+ "<S>CN=ipa-devel.ipa.devel,O=IPA.DEVEL");
+ assert_non_null(domains);
+@@ -1448,6 +1489,17 @@ static void test_sss_certmap_get_search_filter(void **state)
+ &filter, &domains);
+ assert_int_equal(ret, 0);
+ assert_non_null(filter);
++ assert_string_equal(filter, "rule96=<I>O=IPA.DEVEL,CN=Certificate\\20Authority"
++ "<S>O=IPA.DEVEL,CN=ipa-devel.ipa.devel");
++ assert_non_null(domains);
++ assert_string_equal(domains[0], "test.dom");
++ assert_null(domains[1]);
++
++ ret = sss_certmap_expand_mapping_rule(ctx, discard_const(test_cert_der),
++ sizeof(test_cert_der),
++ &filter, &domains);
++ assert_int_equal(ret, 0);
++ assert_non_null(filter);
+ assert_string_equal(filter, "rule96=<I>O=IPA.DEVEL,CN=Certificate Authority"
+ "<S>O=IPA.DEVEL,CN=ipa-devel.ipa.devel");
+ assert_non_null(domains);
+@@ -1466,6 +1518,14 @@ static void test_sss_certmap_get_search_filter(void **state)
+ assert_string_equal(filter, "(userCertificate;binary=" TEST_CERT_BIN ")");
+ assert_null(domains);
+
++ ret = sss_certmap_expand_mapping_rule(ctx, discard_const(test_cert_der),
++ sizeof(test_cert_der),
++ &filter, &domains);
++ assert_int_equal(ret, 0);
++ assert_non_null(filter);
++ assert_string_equal(filter, "(userCertificate;binary=" TEST_CERT_BIN ")");
++ assert_null(domains);
++
+ ret = sss_certmap_add_rule(ctx, 94,
+ "KRB5:<ISSUER>CN=Certificate Authority,O=IPA.DEVEL",
+ "LDAP:rule94=<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500}",
+@@ -1476,12 +1536,22 @@ static void test_sss_certmap_get_search_filter(void **state)
+ &filter, &domains);
+ assert_int_equal(ret, 0);
+ assert_non_null(filter);
+- assert_string_equal(filter, "rule94=<I>O=IPA.DEVEL,CN=Certificate Authority"
++ assert_string_equal(filter, "rule94=<I>O=IPA.DEVEL,CN=Certificate\\20Authority"
+ "<S>O=IPA.DEVEL,CN=ipa-devel.ipa.devel");
+ assert_non_null(domains);
+ assert_string_equal(domains[0], "test.dom");
+ assert_null(domains[1]);
+
++ ret = sss_certmap_expand_mapping_rule(ctx, discard_const(test_cert_der),
++ sizeof(test_cert_der),
++ &filter, &domains);
++ assert_int_equal(ret, 0);
++ assert_non_null(filter);
++ assert_string_equal(filter, "rule94=<I>O=IPA.DEVEL,CN=Certificate Authority"
++ "<S>O=IPA.DEVEL,CN=ipa-devel.ipa.devel");
++ assert_non_null(domains);
++ assert_string_equal(domains[0], "test.dom");
++ assert_null(domains[1]);
+
+ ret = sss_certmap_add_rule(ctx, 89, NULL,
+ "(rule89={subject_nt_principal})",
+@@ -1495,6 +1565,14 @@ static void test_sss_certmap_get_search_filter(void **state)
+ assert_string_equal(filter, "(rule89=tu1@ad.devel)");
+ assert_null(domains);
+
++ ret = sss_certmap_expand_mapping_rule(ctx, discard_const(test_cert2_der),
++ sizeof(test_cert2_der),
++ &filter, &domains);
++ assert_int_equal(ret, 0);
++ assert_non_null(filter);
++ assert_string_equal(filter, "(rule89=tu1@ad.devel)");
++ assert_null(domains);
++
+ ret = sss_certmap_add_rule(ctx, 88, NULL,
+ "(rule88={subject_nt_principal.short_name})",
+ NULL);
+@@ -1516,6 +1594,15 @@ static void test_sss_certmap_get_search_filter(void **state)
+ &filter, &domains);
+ assert_int_equal(ret, 0);
+ assert_non_null(filter);
++ assert_string_equal(filter, "rule87=<I>DC=devel,DC=ad,CN=ad-AD-SERVER-CA"
++ "<S>DC=devel,DC=ad,CN=Users,CN=t\\20u,E=test.user@email.domain");
++ assert_null(domains);
++
++ ret = sss_certmap_expand_mapping_rule(ctx, discard_const(test_cert2_der),
++ sizeof(test_cert2_der),
++ &filter, &domains);
++ assert_int_equal(ret, 0);
++ assert_non_null(filter);
+ assert_string_equal(filter, "rule87=<I>DC=devel,DC=ad,CN=ad-AD-SERVER-CA"
+ "<S>DC=devel,DC=ad,CN=Users,CN=t u,E=test.user@email.domain");
+ assert_null(domains);
+@@ -1529,6 +1616,15 @@ static void test_sss_certmap_get_search_filter(void **state)
+ &filter, &domains);
+ assert_int_equal(ret, 0);
+ assert_non_null(filter);
++ assert_string_equal(filter, "rule86=<I>DC=devel,DC=ad,CN=ad-AD-SERVER-CA"
++ "<S>DC=devel,DC=ad,CN=Users,CN=t\\20u,E=test.user@email.domain");
++ assert_null(domains);
++
++ ret = sss_certmap_expand_mapping_rule(ctx, discard_const(test_cert2_der),
++ sizeof(test_cert2_der),
++ &filter, &domains);
++ assert_int_equal(ret, 0);
++ assert_non_null(filter);
+ assert_string_equal(filter, "rule86=<I>DC=devel,DC=ad,CN=ad-AD-SERVER-CA"
+ "<S>DC=devel,DC=ad,CN=Users,CN=t u,E=test.user@email.domain");
+ assert_null(domains);
+diff --git a/src/util/util.c b/src/util/util.c
+index e3efa7f..0653638 100644
+--- a/src/util/util.c
++++ b/src/util/util.c
+@@ -436,100 +436,6 @@ errno_t sss_hash_create(TALLOC_CTX *mem_ctx, unsigned long count,
+ return sss_hash_create_ex(mem_ctx, count, tbl, 0, 0, 0, 0, NULL, NULL);
+ }
+
+-errno_t sss_filter_sanitize_ex(TALLOC_CTX *mem_ctx,
+- const char *input,
+- char **sanitized,
+- const char *ignore)
+-{
+- char *output;
+- size_t i = 0;
+- size_t j = 0;
+- char *allowed;
+-
+- /* Assume the worst-case. We'll resize it later, once */
+- output = talloc_array(mem_ctx, char, strlen(input) * 3 + 1);
+- if (!output) {
+- return ENOMEM;
+- }
+-
+- while (input[i]) {
+- /* Even though this character might have a special meaning, if it's
+- * expliticly allowed, just copy it and move on
+- */
+- if (ignore == NULL) {
+- allowed = NULL;
+- } else {
+- allowed = strchr(ignore, input[i]);
+- }
+- if (allowed) {
+- output[j++] = input[i++];
+- continue;
+- }
+-
+- switch(input[i]) {
+- case '\t':
+- output[j++] = '\\';
+- output[j++] = '0';
+- output[j++] = '9';
+- break;
+- case ' ':
+- output[j++] = '\\';
+- output[j++] = '2';
+- output[j++] = '0';
+- break;
+- case '*':
+- output[j++] = '\\';
+- output[j++] = '2';
+- output[j++] = 'a';
+- break;
+- case '(':
+- output[j++] = '\\';
+- output[j++] = '2';
+- output[j++] = '8';
+- break;
+- case ')':
+- output[j++] = '\\';
+- output[j++] = '2';
+- output[j++] = '9';
+- break;
+- case '\\':
+- output[j++] = '\\';
+- output[j++] = '5';
+- output[j++] = 'c';
+- break;
+- case '\r':
+- output[j++] = '\\';
+- output[j++] = '0';
+- output[j++] = 'd';
+- break;
+- case '\n':
+- output[j++] = '\\';
+- output[j++] = '0';
+- output[j++] = 'a';
+- break;
+- default:
+- output[j++] = input[i];
+- }
+-
+- i++;
+- }
+- output[j] = '\0';
+- *sanitized = talloc_realloc(mem_ctx, output, char, j+1);
+- if (!*sanitized) {
+- talloc_free(output);
+- return ENOMEM;
+- }
+-
+- return EOK;
+-}
+-
+-errno_t sss_filter_sanitize(TALLOC_CTX *mem_ctx,
+- const char *input,
+- char **sanitized)
+-{
+- return sss_filter_sanitize_ex(mem_ctx, input, sanitized, NULL);
+-}
+-
+ char *
+ sss_escape_ip_address(TALLOC_CTX *mem_ctx, int family, const char *addr)
+ {
+diff --git a/src/util/util_ext.c b/src/util/util_ext.c
+index 04dc02a..a89b60f 100644
+--- a/src/util/util_ext.c
++++ b/src/util/util_ext.c
+@@ -29,6 +29,11 @@
+
+ #define EOK 0
+
++#ifndef HAVE_ERRNO_T
++#define HAVE_ERRNO_T
++typedef int errno_t;
++#endif
++
+ int split_on_separator(TALLOC_CTX *mem_ctx, const char *str,
+ const char sep, bool trim, bool skip_empty,
+ char ***_list, int *size)
+@@ -141,3 +146,97 @@ bool string_in_list(const char *string, char **list, bool case_sensitive)
+
+ return false;
+ }
++
++errno_t sss_filter_sanitize_ex(TALLOC_CTX *mem_ctx,
++ const char *input,
++ char **sanitized,
++ const char *ignore)
++{
++ char *output;
++ size_t i = 0;
++ size_t j = 0;
++ char *allowed;
++
++ /* Assume the worst-case. We'll resize it later, once */
++ output = talloc_array(mem_ctx, char, strlen(input) * 3 + 1);
++ if (!output) {
++ return ENOMEM;
++ }
++
++ while (input[i]) {
++ /* Even though this character might have a special meaning, if it's
++ * explicitly allowed, just copy it and move on
++ */
++ if (ignore == NULL) {
++ allowed = NULL;
++ } else {
++ allowed = strchr(ignore, input[i]);
++ }
++ if (allowed) {
++ output[j++] = input[i++];
++ continue;
++ }
++
++ switch(input[i]) {
++ case '\t':
++ output[j++] = '\\';
++ output[j++] = '0';
++ output[j++] = '9';
++ break;
++ case ' ':
++ output[j++] = '\\';
++ output[j++] = '2';
++ output[j++] = '0';
++ break;
++ case '*':
++ output[j++] = '\\';
++ output[j++] = '2';
++ output[j++] = 'a';
++ break;
++ case '(':
++ output[j++] = '\\';
++ output[j++] = '2';
++ output[j++] = '8';
++ break;
++ case ')':
++ output[j++] = '\\';
++ output[j++] = '2';
++ output[j++] = '9';
++ break;
++ case '\\':
++ output[j++] = '\\';
++ output[j++] = '5';
++ output[j++] = 'c';
++ break;
++ case '\r':
++ output[j++] = '\\';
++ output[j++] = '0';
++ output[j++] = 'd';
++ break;
++ case '\n':
++ output[j++] = '\\';
++ output[j++] = '0';
++ output[j++] = 'a';
++ break;
++ default:
++ output[j++] = input[i];
++ }
++
++ i++;
++ }
++ output[j] = '\0';
++ *sanitized = talloc_realloc(mem_ctx, output, char, j+1);
++ if (!*sanitized) {
++ talloc_free(output);
++ return ENOMEM;
++ }
++
++ return EOK;
++}
++
++errno_t sss_filter_sanitize(TALLOC_CTX *mem_ctx,
++ const char *input,
++ char **sanitized)
++{
++ return sss_filter_sanitize_ex(mem_ctx, input, sanitized, NULL);
++}
+--
+2.25.1
+
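
The security-relevant part of this second patch is in expand_template(): every value taken from the certificate is now run through sss_filter_sanitize() before being spliced into the LDAP search filter (the already-encoded cert blob is exempt), so filter metacharacters are escaped — space becomes \20, '*' \2a, '(' \28, ')' \29, '\' \5c, plus tab, CR and LF — exactly as the updated test_certmap.c expectations show. A compact Python rendering of that escape table, for illustration only:

    # Same escape table as sss_filter_sanitize_ex() in the patch above,
    # expressed in Python purely for illustration.
    _LDAP_ESCAPES = {
        "\t": r"\09", " ": r"\20", "*": r"\2a", "(": r"\28",
        ")": r"\29", "\\": r"\5c", "\r": r"\0d", "\n": r"\0a",
    }

    def filter_sanitize(value: str, ignore: str = "") -> str:
        """Escape LDAP filter metacharacters, except those listed in 'ignore'."""
        return "".join(
            c if c in ignore else _LDAP_ESCAPES.get(c, c)
            for c in value
        )

    # "CN=Certificate Authority" -> "CN=Certificate\20Authority", matching the
    # sanitized filters asserted in the updated certmap tests.
    print(filter_sanitize("CN=Certificate Authority"))
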
diff --git a/meta-security/recipes-security/sssd/sssd_1.16.4.bb b/meta-security/recipes-security/sssd/sssd_1.16.4.bb
index 186c9e0b9e..e512dbf8b7 100644
--- a/meta-security/recipes-security/sssd/sssd_1.16.4.bb
+++ b/meta-security/recipes-security/sssd/sssd_1.16.4.bb
@@ -18,6 +18,8 @@ SRC_URI = "https://releases.pagure.org/SSSD/${BPN}/${BP}.tar.gz \
file://volatiles.99_sssd \
file://fix-ldblibdir.patch \
file://0001-build-Don-t-use-AC_CHECK_FILE-when-building-manpages.patch \
+ file://CVE-2022-4254-1.patch \
+ file://CVE-2022-4254-2.patch \
"
SRC_URI[md5sum] = "757bbb6f15409d8d075f4f06cb678d50"
diff --git a/poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.rst b/poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.rst
index 93ac18b78a..75e8dd69d9 100644
--- a/poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.rst
+++ b/poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.rst
@@ -405,8 +405,8 @@ This fetcher supports the following parameters:
- *"nobranch":* Tells the fetcher to not check the SHA validation for
the branch when set to "1". The default is "0". Set this option for
- the recipe that refers to the commit that is valid for a tag instead
- of the branch.
+ the recipe that refers to the commit that is valid for any namespace
+ (branch, tag, ...) instead of the branch.
- *"bareclone":* Tells the fetcher to clone a bare clone into the
destination directory without checking out a working tree. Only the
diff --git a/poky/bitbake/lib/bb/cooker.py b/poky/bitbake/lib/bb/cooker.py
index ac54d4378d..6743bce585 100644
--- a/poky/bitbake/lib/bb/cooker.py
+++ b/poky/bitbake/lib/bb/cooker.py
@@ -13,7 +13,6 @@ import sys, os, glob, os.path, re, time
import itertools
import logging
import multiprocessing
-import sre_constants
import threading
from io import StringIO, UnsupportedOperation
from contextlib import closing
@@ -1795,7 +1794,7 @@ class CookerCollectFiles(object):
try:
re.compile(mask)
bbmasks.append(mask)
- except sre_constants.error:
+ except re.error:
collectlog.critical("BBMASK contains an invalid regular expression, ignoring: %s" % mask)
# Then validate the combined regular expressions. This should never
@@ -1803,7 +1802,7 @@ class CookerCollectFiles(object):
bbmask = "|".join(bbmasks)
try:
bbmask_compiled = re.compile(bbmask)
- except sre_constants.error:
+ except re.error:
collectlog.critical("BBMASK is not a valid regular expression, ignoring: %s" % bbmask)
bbmask = None
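
sre_constants is a private implementation module of the re package; the exception it defines has long been exposed as re.error, which is what re.compile() documents for invalid patterns, so the cooker now catches that instead. A minimal illustration of the behaviour the BBMASK validation relies on:

    # re.compile() raises re.error for an invalid BBMASK entry, which is what
    # CookerCollectFiles now catches instead of the internal sre_constants.error.
    import re

    for mask in [r"meta-foo/.*", r"broken(["]:
        try:
            re.compile(mask)
        except re.error as exc:
            print(f"invalid BBMASK entry {mask!r}: {exc}")
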
diff --git a/poky/bitbake/lib/bb/fetch2/git.py b/poky/bitbake/lib/bb/fetch2/git.py
index 63a9f92b0a..cad1ae8207 100644
--- a/poky/bitbake/lib/bb/fetch2/git.py
+++ b/poky/bitbake/lib/bb/fetch2/git.py
@@ -44,7 +44,8 @@ Supported SRC_URI options are:
- nobranch
Don't check the SHA validation for branch. set this option for the recipe
- referring to commit which is valid in tag instead of branch.
+ referring to commit which is valid in any namespace (branch, tag, ...)
+ instead of branch.
The default is "0", set nobranch=1 if needed.
- usehead
@@ -63,6 +64,7 @@ import errno
import fnmatch
import os
import re
+import shlex
import subprocess
import tempfile
import bb
@@ -352,7 +354,7 @@ class Git(FetchMethod):
# We do this since git will use a "-l" option automatically for local urls where possible
if repourl.startswith("file://"):
repourl = repourl[7:]
- clone_cmd = "LANG=C %s clone --bare --mirror \"%s\" %s --progress" % (ud.basecmd, repourl, ud.clonedir)
+ clone_cmd = "LANG=C %s clone --bare --mirror %s %s --progress" % (ud.basecmd, shlex.quote(repourl), ud.clonedir)
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, clone_cmd, ud.url)
progresshandler = GitProgressHandler(d)
@@ -364,8 +366,12 @@ class Git(FetchMethod):
if "origin" in output:
runfetchcmd("%s remote rm origin" % ud.basecmd, d, workdir=ud.clonedir)
- runfetchcmd("%s remote add --mirror=fetch origin \"%s\"" % (ud.basecmd, repourl), d, workdir=ud.clonedir)
- fetch_cmd = "LANG=C %s fetch -f --progress \"%s\" refs/*:refs/*" % (ud.basecmd, repourl)
+ runfetchcmd("%s remote add --mirror=fetch origin %s" % (ud.basecmd, shlex.quote(repourl)), d, workdir=ud.clonedir)
+
+ if ud.nobranch:
+ fetch_cmd = "LANG=C %s fetch -f --progress %s refs/*:refs/*" % (ud.basecmd, shlex.quote(repourl))
+ else:
+ fetch_cmd = "LANG=C %s fetch -f --progress %s refs/heads/*:refs/heads/* refs/tags/*:refs/tags/*" % (ud.basecmd, shlex.quote(repourl))
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, fetch_cmd, ud.url)
progresshandler = GitProgressHandler(d)
@@ -559,7 +565,7 @@ class Git(FetchMethod):
raise bb.fetch2.UnpackError("No up to date source found: " + "; ".join(source_error), ud.url)
repourl = self._get_repo_url(ud)
- runfetchcmd("%s remote set-url origin \"%s\"" % (ud.basecmd, repourl), d, workdir=destdir)
+ runfetchcmd("%s remote set-url origin %s" % (ud.basecmd, shlex.quote(repourl)), d, workdir=destdir)
if self._contains_lfs(ud, d, destdir):
if need_lfs and not self._find_git_lfs(d):
@@ -687,8 +693,8 @@ class Git(FetchMethod):
d.setVar('_BB_GIT_IN_LSREMOTE', '1')
try:
repourl = self._get_repo_url(ud)
- cmd = "%s ls-remote \"%s\" %s" % \
- (ud.basecmd, repourl, search)
+ cmd = "%s ls-remote %s %s" % \
+ (ud.basecmd, shlex.quote(repourl), search)
if ud.proto.lower() != 'file':
bb.fetch2.check_network_access(d, cmd, repourl)
output = runfetchcmd(cmd, d, True)
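
The fetcher changes above replace hand-placed double quotes around repourl with shlex.quote(), so a repository URL or local path containing spaces or other shell metacharacters still reaches git as a single argument; the mirror fetch also now limits itself to refs/heads and refs/tags unless nobranch is set. A small sketch of the quoting difference, using a made-up local path:

    # Why shlex.quote() is preferred over manual quoting when assembling the
    # git command line; the path below is a made-up example.
    import shlex

    repourl = "/mnt/build area/my repo.git"   # contains spaces

    unsafe = 'git clone --bare --mirror "%s" /tmp/mirror' % repourl
    safe = "git clone --bare --mirror %s /tmp/mirror" % shlex.quote(repourl)

    print(unsafe)  # breaks if the URL ever contains quotes or backslashes
    print(safe)    # -> git clone --bare --mirror '/mnt/build area/my repo.git' /tmp/mirror
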
diff --git a/poky/bitbake/lib/bb/runqueue.py b/poky/bitbake/lib/bb/runqueue.py
index 6cdc72a85b..2a1299db39 100644
--- a/poky/bitbake/lib/bb/runqueue.py
+++ b/poky/bitbake/lib/bb/runqueue.py
@@ -1975,6 +1975,12 @@ class RunQueueExecute:
self.setbuildable(revdep)
logger.debug(1, "Marking task %s as buildable", revdep)
+ for t in self.sq_deferred.copy():
+ if self.sq_deferred[t] == task:
+ logger.debug(2, "Deferred task %s now buildable" % t)
+ del self.sq_deferred[t]
+ update_scenequeue_data([t], self.sqdata, self.rqdata, self.rq, self.cooker, self.stampcache, self, summary=False)
+
def task_complete(self, task):
self.stats.taskCompleted()
bb.event.fire(runQueueTaskCompleted(task, self.stats, self.rq), self.cfgData)
@@ -2084,8 +2090,6 @@ class RunQueueExecute:
logger.debug(1, "%s didn't become valid, skipping setscene" % nexttask)
self.sq_task_failoutright(nexttask)
return True
- else:
- self.sqdata.outrightfail.remove(nexttask)
if nexttask in self.sqdata.outrightfail:
logger.debug(2, 'No package found, so skipping setscene task %s', nexttask)
self.sq_task_failoutright(nexttask)
@@ -2236,7 +2240,8 @@ class RunQueueExecute:
if self.sq_deferred:
tid = self.sq_deferred.pop(list(self.sq_deferred.keys())[0])
logger.warning("Runqeueue deadlocked on deferred tasks, forcing task %s" % tid)
- self.sq_task_failoutright(tid)
+ if tid not in self.runq_complete:
+ self.sq_task_failoutright(tid)
return True
if len(self.failed_tids) != 0:
@@ -2350,10 +2355,16 @@ class RunQueueExecute:
self.updated_taskhash_queue.remove((tid, unihash))
if unihash != self.rqdata.runtaskentries[tid].unihash:
- hashequiv_logger.verbose("Task %s unihash changed to %s" % (tid, unihash))
- self.rqdata.runtaskentries[tid].unihash = unihash
- bb.parse.siggen.set_unihash(tid, unihash)
- toprocess.add(tid)
+ # Make sure we rehash any other tasks with the same task hash that we're deferred against.
+ torehash = [tid]
+ for deftid in self.sq_deferred:
+ if self.sq_deferred[deftid] == tid:
+ torehash.append(deftid)
+ for hashtid in torehash:
+ hashequiv_logger.verbose("Task %s unihash changed to %s" % (hashtid, unihash))
+ self.rqdata.runtaskentries[hashtid].unihash = unihash
+ bb.parse.siggen.set_unihash(hashtid, unihash)
+ toprocess.add(hashtid)
# Work out all tasks which depend upon these
total = set()
@@ -2492,6 +2503,14 @@ class RunQueueExecute:
if update_tasks:
self.sqdone = False
+ for mc in sorted(self.sqdata.multiconfigs):
+ for tid in sorted([t[0] for t in update_tasks]):
+ if mc_from_tid(tid) != mc:
+ continue
+ h = pending_hash_index(tid, self.rqdata)
+ if h in self.sqdata.hashes and tid != self.sqdata.hashes[h]:
+ self.sq_deferred[tid] = self.sqdata.hashes[h]
+ bb.note("Deferring %s after %s" % (tid, self.sqdata.hashes[h]))
update_scenequeue_data([t[0] for t in update_tasks], self.sqdata, self.rqdata, self.rq, self.cooker, self.stampcache, self, summary=False)
for (tid, harddepfail, origvalid) in update_tasks:
@@ -2832,6 +2851,19 @@ def build_scenequeue_data(sqdata, rqdata, rq, cooker, stampcache, sqrq):
sqdata.stamppresent = set()
sqdata.valid = set()
+ sqdata.hashes = {}
+ sqrq.sq_deferred = {}
+ for mc in sorted(sqdata.multiconfigs):
+ for tid in sorted(sqdata.sq_revdeps):
+ if mc_from_tid(tid) != mc:
+ continue
+ h = pending_hash_index(tid, rqdata)
+ if h not in sqdata.hashes:
+ sqdata.hashes[h] = tid
+ else:
+ sqrq.sq_deferred[tid] = sqdata.hashes[h]
+ bb.note("Deferring %s after %s" % (tid, sqdata.hashes[h]))
+
update_scenequeue_data(sqdata.sq_revdeps, sqdata, rqdata, rq, cooker, stampcache, sqrq, summary=True)
def update_scenequeue_data(tids, sqdata, rqdata, rq, cooker, stampcache, sqrq, summary=True):
@@ -2843,6 +2875,8 @@ def update_scenequeue_data(tids, sqdata, rqdata, rq, cooker, stampcache, sqrq, s
sqdata.stamppresent.remove(tid)
if tid in sqdata.valid:
sqdata.valid.remove(tid)
+ if tid in sqdata.outrightfail:
+ sqdata.outrightfail.remove(tid)
(mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
@@ -2870,32 +2904,20 @@ def update_scenequeue_data(tids, sqdata, rqdata, rq, cooker, stampcache, sqrq, s
sqdata.valid |= rq.validate_hashes(tocheck, cooker.data, len(sqdata.stamppresent), False, summary=summary)
- sqdata.hashes = {}
- sqrq.sq_deferred = {}
- for mc in sorted(sqdata.multiconfigs):
- for tid in sorted(sqdata.sq_revdeps):
- if mc_from_tid(tid) != mc:
- continue
- if tid in sqdata.stamppresent:
- continue
- if tid in sqdata.valid:
- continue
- if tid in sqdata.noexec:
- continue
- if tid in sqrq.scenequeue_notcovered:
- continue
- if tid in sqrq.scenequeue_covered:
- continue
-
- sqdata.outrightfail.add(tid)
-
- h = pending_hash_index(tid, rqdata)
- if h not in sqdata.hashes:
- sqdata.hashes[h] = tid
- else:
- sqrq.sq_deferred[tid] = sqdata.hashes[h]
- bb.note("Deferring %s after %s" % (tid, sqdata.hashes[h]))
-
+ for tid in tids:
+ if tid in sqdata.stamppresent:
+ continue
+ if tid in sqdata.valid:
+ continue
+ if tid in sqdata.noexec:
+ continue
+ if tid in sqrq.scenequeue_covered:
+ continue
+ if tid in sqrq.scenequeue_notcovered:
+ continue
+ if tid in sqrq.sq_deferred:
+ continue
+ sqdata.outrightfail.add(tid)
class TaskFailure(Exception):
"""
diff --git a/poky/bitbake/lib/bb/tests/fetch.py b/poky/bitbake/lib/bb/tests/fetch.py
index 484fa58295..61dd5cccaf 100644
--- a/poky/bitbake/lib/bb/tests/fetch.py
+++ b/poky/bitbake/lib/bb/tests/fetch.py
@@ -1338,7 +1338,7 @@ class FetchCheckStatusTest(FetcherTest):
"http://downloads.yoctoproject.org/releases/sato/sato-engine-0.2.tar.gz",
"http://downloads.yoctoproject.org/releases/sato/sato-engine-0.3.tar.gz",
"https://yoctoproject.org/",
- "https://yoctoproject.org/documentation",
+ "https://docs.yoctoproject.org/",
"http://downloads.yoctoproject.org/releases/opkg/opkg-0.1.7.tar.gz",
"http://downloads.yoctoproject.org/releases/opkg/opkg-0.3.0.tar.gz",
"ftp://sourceware.org/pub/libffi/libffi-1.20.tar.gz",
@@ -1750,7 +1750,7 @@ class GitShallowTest(FetcherTest):
self.add_empty_file('bsub', cwd=smdir)
self.git('submodule init', cwd=self.srcdir)
- self.git('submodule add file://%s' % smdir, cwd=self.srcdir)
+ self.git('-c protocol.file.allow=always submodule add file://%s' % smdir, cwd=self.srcdir)
self.git('submodule update', cwd=self.srcdir)
self.git('commit -m submodule -a', cwd=self.srcdir)
@@ -1782,7 +1782,7 @@ class GitShallowTest(FetcherTest):
self.add_empty_file('bsub', cwd=smdir)
self.git('submodule init', cwd=self.srcdir)
- self.git('submodule add file://%s' % smdir, cwd=self.srcdir)
+ self.git('-c protocol.file.allow=always submodule add file://%s' % smdir, cwd=self.srcdir)
self.git('submodule update', cwd=self.srcdir)
self.git('commit -m submodule -a', cwd=self.srcdir)
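For illustration only (not part of the patch): a minimal sketch of what the
'-c protocol.file.allow=always' flag does for the test helper above, since
newer Git releases refuse file:// submodule URLs by default. The paths in the
usage comment are made up.

    import subprocess

    def git(args, cwd):
        # Allow file:// submodule transport, which recent Git versions
        # disable by default for security reasons.
        return subprocess.check_output(
            ["git", "-c", "protocol.file.allow=always"] + args,
            cwd=cwd, text=True)

    # git(["submodule", "add", "file:///tmp/submodule-repo"], cwd="/tmp/parent-repo")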
diff --git a/poky/bitbake/lib/bb/utils.py b/poky/bitbake/lib/bb/utils.py
index 6592eb00dd..1a5a0aae6c 100644
--- a/poky/bitbake/lib/bb/utils.py
+++ b/poky/bitbake/lib/bb/utils.py
@@ -461,9 +461,16 @@ def lockfile(name, shared=False, retry=True, block=False):
consider the possibility of sending a signal to the process to break
out - at which point you want block=True rather than retry=True.
"""
+ basename = os.path.basename(name)
+ if len(basename) > 255:
+ root, ext = os.path.splitext(basename)
+ basename = root[:255 - len(ext)] + ext
+
dirname = os.path.dirname(name)
mkdirhier(dirname)
+ name = os.path.join(dirname, basename)
+
if not os.access(dirname, os.W_OK):
logger.error("Unable to acquire lock '%s', directory is not writable",
name)
@@ -497,7 +504,7 @@ def lockfile(name, shared=False, retry=True, block=False):
return lf
lf.close()
except OSError as e:
- if e.errno == errno.EACCES:
+ if e.errno == errno.EACCES or e.errno == errno.ENAMETOOLONG:
logger.error("Unable to acquire lock '%s', %s",
e.strerror, name)
sys.exit(1)
@@ -1563,21 +1570,22 @@ def set_process_name(name):
# export common proxies variables from datastore to environment
def export_proxies(d):
- import os
+ """ export common proxies variables from datastore to environment """
variables = ['http_proxy', 'HTTP_PROXY', 'https_proxy', 'HTTPS_PROXY',
'ftp_proxy', 'FTP_PROXY', 'no_proxy', 'NO_PROXY',
- 'GIT_PROXY_COMMAND']
+ 'GIT_PROXY_COMMAND', 'SSL_CERT_FILE', 'SSL_CERT_DIR']
exported = False
- for v in variables:
- if v in os.environ.keys():
+ origenv = d.getVar("BB_ORIGENV")
+
+ for name in variables:
+ value = d.getVar(name)
+ if not value and origenv:
+ value = origenv.getVar(name)
+ if value:
+ os.environ[name] = value
exported = True
- else:
- v_proxy = d.getVar(v)
- if v_proxy is not None:
- os.environ[v] = v_proxy
- exported = True
return exported
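For illustration only (not part of the patch): a standalone sketch of the
lock-file name clamping added to lockfile() above, where an over-long basename
is trimmed to 255 characters while keeping the extension, avoiding
ENAMETOOLONG on common filesystems.

    import os

    def clamp_lock_name(name, limit=255):
        # Trim only the basename; keep the directory part and the extension.
        dirname = os.path.dirname(name)
        basename = os.path.basename(name)
        if len(basename) > limit:
            root, ext = os.path.splitext(basename)
            basename = root[:limit - len(ext)] + ext
        return os.path.join(dirname, basename)

    lock = clamp_lock_name("/tmp/locks/" + "x" * 300 + ".lock")
    print(len(os.path.basename(lock)))  # 255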
diff --git a/poky/documentation/conf.py b/poky/documentation/conf.py
index df67a5cdf2..e9078e054e 100644
--- a/poky/documentation/conf.py
+++ b/poky/documentation/conf.py
@@ -97,6 +97,7 @@ extlinks = {
'yocto_git': ('https://git.yoctoproject.org%s', None),
'oe_home': ('https://www.openembedded.org%s', None),
'oe_lists': ('https://lists.openembedded.org%s', None),
+ 'oe_git': ('https://git.openembedded.org%s', None),
}
# Intersphinx config to use cross reference with Bitbake user manual
diff --git a/poky/documentation/dev-manual/dev-manual-common-tasks.rst b/poky/documentation/dev-manual/dev-manual-common-tasks.rst
index 9dcafb2783..8ee386a678 100644
--- a/poky/documentation/dev-manual/dev-manual-common-tasks.rst
+++ b/poky/documentation/dev-manual/dev-manual-common-tasks.rst
@@ -3854,7 +3854,7 @@ Setting Up and Running a Multiple Configuration Build
To accomplish a multiple configuration build, you must define each
target's configuration separately using a parallel configuration file in
-the :term:`Build Directory`, and you
+the :term:`Build Directory` or configuration directory within a layer, and you
must follow a required file hierarchy. Additionally, you must enable the
multiple configuration builds in your ``local.conf`` file.
@@ -3862,47 +3862,47 @@ Follow these steps to set up and execute multiple configuration builds:
- *Create Separate Configuration Files*: You need to create a single
configuration file for each build target (each multiconfig).
- Minimally, each configuration file must define the machine and the
- temporary directory BitBake uses for the build. Suggested practice
- dictates that you do not overlap the temporary directories used
- during the builds. However, it is possible that you can share the
- temporary directory
- (:term:`TMPDIR`). For example,
- consider a scenario with two different multiconfigs for the same
+ The configuration definitions are implementation dependent but often
+ each configuration file will define the machine and the
+ temporary directory BitBake uses for the build. Whether the same
+ temporary directory (:term:`TMPDIR`) can be shared will depend on what is
+ similar and what is different between the configurations. Multiple
+ :term:`MACHINE` targets can share the same :term:`TMPDIR` as long as the
+ rest of the configuration is the same; multiple :term:`DISTRO` settings
+ would need separate :term:`TMPDIR` directories.
+
+ For example, consider a scenario with two different multiconfigs for the same
:term:`MACHINE`: "qemux86" built
for two distributions such as "poky" and "poky-lsb". In this case,
- you might want to use the same ``TMPDIR``.
+ you would need to use different :term:`TMPDIR` directories.
Here is an example showing the minimal statements needed in a
configuration file for a "qemux86" target whose temporary build
- directory is ``tmpmultix86``:
- ::
+ directory is ``tmpmultix86``::
MACHINE = "qemux86"
TMPDIR = "${TOPDIR}/tmpmultix86"
The location for these multiconfig configuration files is specific.
- They must reside in the current build directory in a sub-directory of
- ``conf`` named ``multiconfig``. Following is an example that defines
+ They must reside in the current :term:`Build Directory` in a sub-directory of
+ ``conf`` named ``multiconfig`` or within a layer's ``conf`` directory
+ under a directory named ``multiconfig``. Following is an example that defines
two configuration files for the "x86" and "arm" multiconfigs:
.. image:: figures/multiconfig_files.png
:align: center
+ :width: 50%
- The reason for this required file hierarchy is because the ``BBPATH``
- variable is not constructed until the layers are parsed.
- Consequently, using the configuration file as a pre-configuration
- file is not possible unless it is located in the current working
- directory.
+ The usual :term:`BBPATH` search path is used to locate multiconfig files in
+ a similar way to other conf files.
- *Add the BitBake Multi-configuration Variable to the Local
Configuration File*: Use the
:term:`BBMULTICONFIG`
variable in your ``conf/local.conf`` configuration file to specify
each multiconfig. Continuing with the example from the previous
- figure, the ``BBMULTICONFIG`` variable needs to enable two
- multiconfigs: "x86" and "arm" by specifying each configuration file:
- ::
+ figure, the :term:`BBMULTICONFIG` variable needs to enable two
+ multiconfigs: "x86" and "arm" by specifying each configuration file::
BBMULTICONFIG = "x86 arm"
@@ -3916,13 +3916,11 @@ Follow these steps to set up and execute multiple configuration builds:
with "".
- *Launch BitBake*: Use the following BitBake command form to launch
- the multiple configuration build:
- ::
+ the multiple configuration build::
$ bitbake [mc:multiconfigname:]target [[[mc:multiconfigname:]target] ... ]
- For the example in this section, the following command applies:
- ::
+ For the example in this section, the following command applies::
$ bitbake mc:x86:core-image-minimal mc:arm:core-image-sato mc::core-image-base
@@ -3937,7 +3935,7 @@ Follow these steps to set up and execute multiple configuration builds:
Support for multiple configuration builds in the Yocto Project &DISTRO;
(&DISTRO_NAME;) Release does not include Shared State (sstate)
optimizations. Consequently, if a build uses the same object twice
- in, for example, two different ``TMPDIR``
+ in, for example, two different :term:`TMPDIR`
directories, the build either loads from an existing sstate cache for
that build at the start or builds the object fresh.
@@ -3958,38 +3956,34 @@ essentially that the
To enable dependencies in a multiple configuration build, you must
declare the dependencies in the recipe using the following statement
-form:
-::
+form::
task_or_package[mcdepends] = "mc:from_multiconfig:to_multiconfig:recipe_name:task_on_which_to_depend"
To better show how to use this statement, consider the example scenario
from the first paragraph of this section. The following statement needs
-to be added to the recipe that builds the ``core-image-sato`` image:
-::
+to be added to the recipe that builds the ``core-image-sato`` image::
do_image[mcdepends] = "mc:x86:arm:core-image-minimal:do_rootfs"
In this example, the `from_multiconfig` is "x86". The `to_multiconfig` is "arm". The
-task on which the ``do_image`` task in the recipe depends is the
-``do_rootfs`` task from the ``core-image-minimal`` recipe associated
+task on which the :ref:`ref-tasks-image` task in the recipe depends is the
+:ref:`ref-tasks-rootfs` task from the ``core-image-minimal`` recipe associated
with the "arm" multiconfig.
Once you set up this dependency, you can build the "x86" multiconfig
-using a BitBake command as follows:
-::
+using a BitBake command as follows::
$ bitbake mc:x86:core-image-sato
This command executes all the tasks needed to create the
``core-image-sato`` image for the "x86" multiconfig. Because of the
-dependency, BitBake also executes through the ``do_rootfs`` task for the
+dependency, BitBake also executes through the :ref:`ref-tasks-rootfs` task for the
"arm" multiconfig build.
Having a recipe depend on the root filesystem of another build might not
seem that useful. Consider this change to the statement in the
-``core-image-sato`` recipe:
-::
+``core-image-sato`` recipe::
do_image[mcdepends] = "mc:x86:arm:core-image-minimal:do_image"
diff --git a/poky/documentation/overview-manual/overview-manual-yp-intro.rst b/poky/documentation/overview-manual/overview-manual-yp-intro.rst
index 6dd10f2187..2675074f14 100644
--- a/poky/documentation/overview-manual/overview-manual-yp-intro.rst
+++ b/poky/documentation/overview-manual/overview-manual-yp-intro.rst
@@ -377,7 +377,7 @@ activities using the Yocto Project:
Index <http://layers.openembedded.org/layerindex/layers/>`__, which
is a website that indexes OpenEmbedded-Core layers.
-- *Patchwork:* `Patchwork <http://jk.ozlabs.org/projects/patchwork/>`__
+- *Patchwork:* `Patchwork <https://patchwork.yoctoproject.org/>`__
is a fork of a project originally started by
`OzLabs <http://ozlabs.org/>`__. The project is a web-based tracking
system designed to streamline the process of bringing contributions
diff --git a/poky/documentation/poky.yaml b/poky/documentation/poky.yaml
index b6baac7d0d..62d69f9c86 100644
--- a/poky/documentation/poky.yaml
+++ b/poky/documentation/poky.yaml
@@ -1,13 +1,13 @@
-DISTRO : "3.1.20"
+DISTRO : "3.1.25"
DISTRO_NAME_NO_CAP : "dunfell"
DISTRO_NAME : "Dunfell"
DISTRO_NAME_NO_CAP_MINUS_ONE : "zeus"
-YOCTO_DOC_VERSION : "3.1.20"
+YOCTO_DOC_VERSION : "3.1.25"
YOCTO_DOC_VERSION_MINUS_ONE : "3.0.4"
-DISTRO_REL_TAG : "yocto-3.1.20"
-DOCCONF_VERSION : "3.1.20"
+DISTRO_REL_TAG : "yocto-3.1.25"
+DOCCONF_VERSION : "3.1.25"
BITBAKE_SERIES : "1.46"
-POKYVERSION : "23.0.20"
+POKYVERSION : "23.0.25"
YOCTO_POKY : "poky-&DISTRO_NAME_NO_CAP;-&POKYVERSION;"
YOCTO_DL_URL : "https://downloads.yoctoproject.org"
YOCTO_AB_URL : "https://autobuilder.yoctoproject.org"
diff --git a/poky/documentation/profile-manual/profile-manual-usage.rst b/poky/documentation/profile-manual/profile-manual-usage.rst
index 15cf1efe1c..e389a13fc0 100644
--- a/poky/documentation/profile-manual/profile-manual-usage.rst
+++ b/poky/documentation/profile-manual/profile-manual-usage.rst
@@ -1734,7 +1734,7 @@ events':
The tool is pretty self-explanatory, but for more detailed information
on navigating through the data, see the `kernelshark
-website <http://rostedt.homelinux.com/kernelshark/>`__.
+website <https://kernelshark.org/Documentation.html>`__.
.. _ftrace-documentation:
@@ -1765,8 +1765,8 @@ There is a nice series of articles on using ftrace and trace-cmd at LWN:
- `trace-cmd: A front-end for
Ftrace <https://lwn.net/Articles/410200/>`__
-There's more detailed documentation kernelshark usage here:
-`KernelShark <http://rostedt.homelinux.com/kernelshark/>`__
+See also `KernelShark's documentation <https://kernelshark.org/Documentation.html>`__
+for further usage details.
An amusing yet useful README (a tracing mini-HOWTO) can be found in
``/sys/kernel/debug/tracing/README``.
diff --git a/poky/documentation/ref-manual/ref-system-requirements.rst b/poky/documentation/ref-manual/ref-system-requirements.rst
index 109aa60d05..ac963dcdb9 100644
--- a/poky/documentation/ref-manual/ref-system-requirements.rst
+++ b/poky/documentation/ref-manual/ref-system-requirements.rst
@@ -45,6 +45,8 @@ distributions:
- Ubuntu 20.04
+- Ubuntu 22.04
+
- Fedora 28
- Fedora 29
@@ -61,6 +63,8 @@ distributions:
- Fedora 35
+- Fedora 36
+
- CentOS 7.x
- Debian GNU/Linux 8.x (Jessie)
@@ -79,6 +83,8 @@ distributions:
- AlmaLinux 8.5
+- AlmaLinux 8.7
+
.. note::
- While the Yocto Project Team attempts to ensure all Yocto Project
diff --git a/poky/documentation/ref-manual/ref-variables.rst b/poky/documentation/ref-manual/ref-variables.rst
index b8d56a082b..f582bc72ea 100644
--- a/poky/documentation/ref-manual/ref-variables.rst
+++ b/poky/documentation/ref-manual/ref-variables.rst
@@ -7147,6 +7147,32 @@ system and gives an overview of their function and contents.
:term:`SSTATE_DIR`
The directory for the shared state cache.
+ :term:`SSTATE_EXCLUDEDEPS_SYSROOT`
+ This variable allows you to specify indirect dependencies to exclude
+ from sysroots, for example to avoid situations where a dependency on
+ any ``-native`` recipe pulls in all dependencies of that recipe
+ into the recipe sysroot. This behaviour might not always be wanted,
+ for example when that ``-native`` recipe depends on build tools
+ that are not relevant for the current recipe.
+
+ This way, irrelevant dependencies are ignored; they could otherwise
+ prevent the reuse of prebuilt artifacts stored in the Shared
+ State Cache.
+
+ :term:`SSTATE_EXCLUDEDEPS_SYSROOT` is evaluated as two regular
+ expressions of recipe and dependency to ignore. An example
+ is the rule in :oe_git:`meta/conf/layer.conf </openembedded-core/tree/meta/conf/layer.conf>`::
+
+ # Nothing needs to depend on libc-initial
+ # base-passwd/shadow-sysroot don't need their dependencies
+ SSTATE_EXCLUDEDEPS_SYSROOT += "\
+ .*->.*-initial.* \
+ .*(base-passwd|shadow-sysroot)->.* \
+ "
+
+ The ``->`` substring represents the dependency between
+ the two regular expressions.
+
:term:`SSTATE_MIRROR_ALLOW_NETWORK`
If set to "1", allows fetches from mirrors that are specified in
:term:`SSTATE_MIRRORS` to work even when
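For illustration only (not part of the patch): a rough sketch of how the
SSTATE_EXCLUDEDEPS_SYSROOT rules documented above could be evaluated. This is
an approximation based on the description (two regular expressions joined by
"->"), not the actual sstate code, and the recipe/dependency names are made up.

    import re

    rules = [
        ".*->.*-initial.*",
        ".*(base-passwd|shadow-sysroot)->.*",
    ]

    def is_excluded(recipe, dep):
        # A recipe->dependency pair matching both sides of a rule is
        # excluded from the recipe sysroot.
        for rule in rules:
            recipe_re, dep_re = rule.split("->", 1)
            if re.match(recipe_re, recipe) and re.match(dep_re, dep):
                return True
        return False

    print(is_excluded("base-passwd", "perl-native"))  # True
    print(is_excluded("curl", "zlib-native"))         # False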
diff --git a/poky/meta-poky/conf/distro/poky.conf b/poky/meta-poky/conf/distro/poky.conf
index ffea526dd0..91f4cbe2fc 100644
--- a/poky/meta-poky/conf/distro/poky.conf
+++ b/poky/meta-poky/conf/distro/poky.conf
@@ -1,6 +1,6 @@
DISTRO = "poky"
DISTRO_NAME = "Poky (Yocto Project Reference Distro)"
-DISTRO_VERSION = "3.1.20"
+DISTRO_VERSION = "3.1.25"
DISTRO_CODENAME = "dunfell"
SDK_VENDOR = "-pokysdk"
SDK_VERSION = "${@d.getVar('DISTRO_VERSION').replace('snapshot-${DATE}', 'snapshot')}"
@@ -47,12 +47,14 @@ SANITY_TESTED_DISTROS ?= " \
ubuntu-18.04 \n \
ubuntu-19.04 \n \
ubuntu-20.04 \n \
+ ubuntu-22.04 \n \
fedora-30 \n \
fedora-31 \n \
fedora-32 \n \
fedora-33 \n \
fedora-34 \n \
fedora-35 \n \
+ fedora-36 \n \
centos-7 \n \
centos-8 \n \
debian-8 \n \
@@ -63,6 +65,7 @@ SANITY_TESTED_DISTROS ?= " \
opensuseleap-15.2 \n \
opensuseleap-15.3 \n \
almalinux-8.5 \n \
+ almalinux-8.7 \n \
"
# add poky sanity bbclass
INHERIT += "poky-sanity"
diff --git a/poky/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_5.4.bbappend b/poky/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_5.4.bbappend
index 219e788f47..fbe039aa95 100644
--- a/poky/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_5.4.bbappend
+++ b/poky/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_5.4.bbappend
@@ -7,8 +7,8 @@ KMACHINE_genericx86 ?= "common-pc"
KMACHINE_genericx86-64 ?= "common-pc-64"
KMACHINE_beaglebone-yocto ?= "beaglebone"
-SRCREV_machine_genericx86 ?= "8a59dfded81659402005acfb06fbb00b71c8ce86"
-SRCREV_machine_genericx86-64 ?= "8a59dfded81659402005acfb06fbb00b71c8ce86"
+SRCREV_machine_genericx86 ?= "35826e154ee014b64ccfa0d1f12d36b8f8a75939"
+SRCREV_machine_genericx86-64 ?= "35826e154ee014b64ccfa0d1f12d36b8f8a75939"
SRCREV_machine_edgerouter ?= "706efec4c1e270ec5dda92275898cd465dfdc7dd"
SRCREV_machine_beaglebone-yocto ?= "706efec4c1e270ec5dda92275898cd465dfdc7dd"
@@ -17,7 +17,7 @@ COMPATIBLE_MACHINE_genericx86-64 = "genericx86-64"
COMPATIBLE_MACHINE_edgerouter = "edgerouter"
COMPATIBLE_MACHINE_beaglebone-yocto = "beaglebone-yocto"
-LINUX_VERSION_genericx86 = "5.4.205"
-LINUX_VERSION_genericx86-64 = "5.4.205"
+LINUX_VERSION_genericx86 = "5.4.219"
+LINUX_VERSION_genericx86-64 = "5.4.219"
LINUX_VERSION_edgerouter = "5.4.58"
LINUX_VERSION_beaglebone-yocto = "5.4.58"
diff --git a/poky/meta/classes/base.bbclass b/poky/meta/classes/base.bbclass
index 19604a4646..3cae577a0e 100644
--- a/poky/meta/classes/base.bbclass
+++ b/poky/meta/classes/base.bbclass
@@ -139,7 +139,7 @@ def setup_hosttools_dir(dest, toolsvar, d, fatal=True):
# /usr/local/bin/ccache/gcc -> /usr/bin/ccache, then which(gcc)
# would return /usr/local/bin/ccache/gcc, but what we need is
# /usr/bin/gcc, this code can check and fix that.
- if "ccache" in srctool:
+ if os.path.islink(srctool) and os.path.basename(os.readlink(srctool)) == 'ccache':
srctool = bb.utils.which(path, tool, executable=True, direction=1)
if srctool:
os.symlink(srctool, desttool)
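For illustration only (not part of the patch): a standalone sketch of the
stricter ccache check above, which only triggers when the tool really is a
symlink whose target's basename is "ccache", instead of matching any path
that merely contains the substring "ccache".

    import os

    def is_ccache_shim(srctool):
        # True for e.g. /usr/local/bin/ccache/gcc -> /usr/bin/ccache,
        # False for unrelated paths that just contain "ccache".
        return (os.path.islink(srctool)
                and os.path.basename(os.readlink(srctool)) == "ccache")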
diff --git a/poky/meta/classes/create-spdx-2.2.bbclass b/poky/meta/classes/create-spdx-2.2.bbclass
new file mode 100644
index 0000000000..42b693d586
--- /dev/null
+++ b/poky/meta/classes/create-spdx-2.2.bbclass
@@ -0,0 +1,1067 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: GPL-2.0-only
+#
+
+DEPLOY_DIR_SPDX ??= "${DEPLOY_DIR}/spdx/${MACHINE}"
+
+# The product name that the CVE database uses. Defaults to BPN, but may need to
+# be overridden per recipe (for example tiff.bb sets CVE_PRODUCT=libtiff).
+CVE_PRODUCT ??= "${BPN}"
+CVE_VERSION ??= "${PV}"
+
+SPDXDIR ??= "${WORKDIR}/spdx"
+SPDXDEPLOY = "${SPDXDIR}/deploy"
+SPDXWORK = "${SPDXDIR}/work"
+SPDXIMAGEWORK = "${SPDXDIR}/image-work"
+SPDXSDKWORK = "${SPDXDIR}/sdk-work"
+
+SPDX_TOOL_NAME ??= "oe-spdx-creator"
+SPDX_TOOL_VERSION ??= "1.0"
+
+SPDXRUNTIMEDEPLOY = "${SPDXDIR}/runtime-deploy"
+
+SPDX_INCLUDE_SOURCES ??= "0"
+SPDX_ARCHIVE_SOURCES ??= "0"
+SPDX_ARCHIVE_PACKAGED ??= "0"
+
+SPDX_UUID_NAMESPACE ??= "sbom.openembedded.org"
+SPDX_NAMESPACE_PREFIX ??= "http://spdx.org/spdxdoc"
+SPDX_PRETTY ??= "0"
+
+SPDX_LICENSES ??= "${COREBASE}/meta/files/spdx-licenses.json"
+
+SPDX_CUSTOM_ANNOTATION_VARS ??= ""
+
+SPDX_ORG ??= "OpenEmbedded ()"
+SPDX_SUPPLIER ??= "Organization: ${SPDX_ORG}"
+SPDX_SUPPLIER[doc] = "The SPDX PackageSupplier field for SPDX packages created from \
+ this recipe. For SPDX documents create using this class during the build, this \
+ is the contact information for the person or organization who is doing the \
+ build."
+
+def extract_licenses(filename):
+ import re
+
+ lic_regex = re.compile(rb'^\W*SPDX-License-Identifier:\s*([ \w\d.()+-]+?)(?:\s+\W*)?$', re.MULTILINE)
+
+ try:
+ with open(filename, 'rb') as f:
+ size = min(15000, os.stat(filename).st_size)
+ txt = f.read(size)
+ licenses = re.findall(lic_regex, txt)
+ if licenses:
+ ascii_licenses = [lic.decode('ascii') for lic in licenses]
+ return ascii_licenses
+ except Exception as e:
+ bb.warn(f"Exception reading {filename}: {e}")
+ return None
+
+def get_doc_namespace(d, doc):
+ import uuid
+ namespace_uuid = uuid.uuid5(uuid.NAMESPACE_DNS, d.getVar("SPDX_UUID_NAMESPACE"))
+ return "%s/%s-%s" % (d.getVar("SPDX_NAMESPACE_PREFIX"), doc.name, str(uuid.uuid5(namespace_uuid, doc.name)))
+
+def create_annotation(d, comment):
+ from datetime import datetime, timezone
+
+ creation_time = datetime.now(tz=timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
+ annotation = oe.spdx.SPDXAnnotation()
+ annotation.annotationDate = creation_time
+ annotation.annotationType = "OTHER"
+ annotation.annotator = "Tool: %s - %s" % (d.getVar("SPDX_TOOL_NAME"), d.getVar("SPDX_TOOL_VERSION"))
+ annotation.comment = comment
+ return annotation
+
+def recipe_spdx_is_native(d, recipe):
+ return any(a.annotationType == "OTHER" and
+ a.annotator == "Tool: %s - %s" % (d.getVar("SPDX_TOOL_NAME"), d.getVar("SPDX_TOOL_VERSION")) and
+ a.comment == "isNative" for a in recipe.annotations)
+
+def is_work_shared_spdx(d):
+ return bb.data.inherits_class('kernel', d) or ('work-shared' in d.getVar('WORKDIR'))
+
+def get_json_indent(d):
+ if d.getVar("SPDX_PRETTY") == "1":
+ return 2
+ return None
+
+python() {
+ import json
+ if d.getVar("SPDX_LICENSE_DATA"):
+ return
+
+ with open(d.getVar("SPDX_LICENSES"), "r") as f:
+ data = json.load(f)
+ # Transform the license array to a dictionary
+ data["licenses"] = {l["licenseId"]: l for l in data["licenses"]}
+ d.setVar("SPDX_LICENSE_DATA", data)
+}
+
+def convert_license_to_spdx(lic, document, d, existing={}):
+ from pathlib import Path
+ import oe.spdx
+
+ license_data = d.getVar("SPDX_LICENSE_DATA")
+ extracted = {}
+
+ def add_extracted_license(ident, name):
+ nonlocal document
+
+ if name in extracted:
+ return
+
+ extracted_info = oe.spdx.SPDXExtractedLicensingInfo()
+ extracted_info.name = name
+ extracted_info.licenseId = ident
+ extracted_info.extractedText = None
+
+ if name == "PD":
+ # Special-case this.
+ extracted_info.extractedText = "Software released to the public domain"
+ else:
+ # Search for the license in COMMON_LICENSE_DIR and LICENSE_PATH
+ for directory in [d.getVar('COMMON_LICENSE_DIR')] + (d.getVar('LICENSE_PATH') or '').split():
+ try:
+ with (Path(directory) / name).open(errors="replace") as f:
+ extracted_info.extractedText = f.read()
+ break
+ except FileNotFoundError:
+ pass
+ if extracted_info.extractedText is None:
+ # If it's not SPDX or PD, then NO_GENERIC_LICENSE must be set
+ filename = d.getVarFlag('NO_GENERIC_LICENSE', name)
+ if filename:
+ filename = d.expand("${S}/" + filename)
+ with open(filename, errors="replace") as f:
+ extracted_info.extractedText = f.read()
+ else:
+ bb.error("Cannot find any text for license %s" % name)
+
+ extracted[name] = extracted_info
+ document.hasExtractedLicensingInfos.append(extracted_info)
+
+ def convert(l):
+ if l == "(" or l == ")":
+ return l
+
+ if l == "&":
+ return "AND"
+
+ if l == "|":
+ return "OR"
+
+ if l == "CLOSED":
+ return "NONE"
+
+ spdx_license = d.getVarFlag("SPDXLICENSEMAP", l) or l
+ if spdx_license in license_data["licenses"]:
+ return spdx_license
+
+ try:
+ spdx_license = existing[l]
+ except KeyError:
+ spdx_license = "LicenseRef-" + l
+ add_extracted_license(spdx_license, l)
+
+ return spdx_license
+
+ lic_split = lic.replace("(", " ( ").replace(")", " ) ").split()
+
+ return ' '.join(convert(l) for l in lic_split)
+
+def process_sources(d):
+ pn = d.getVar('PN')
+ assume_provided = (d.getVar("ASSUME_PROVIDED") or "").split()
+ if pn in assume_provided:
+ for p in d.getVar("PROVIDES").split():
+ if p != pn:
+ pn = p
+ break
+
+ # glibc-locale: do_fetch, do_unpack and do_patch tasks have been deleted,
+ # so avoid archiving source here.
+ if pn.startswith('glibc-locale'):
+ return False
+ if d.getVar('PN') == "libtool-cross":
+ return False
+ if d.getVar('PN') == "libgcc-initial":
+ return False
+ if d.getVar('PN') == "shadow-sysroot":
+ return False
+
+ # We just archive gcc-source for all the gcc related recipes
+ if d.getVar('BPN') in ['gcc', 'libgcc']:
+ bb.debug(1, 'spdx: There is a bug in the scan of %s, do nothing' % pn)
+ return False
+
+ return True
+
+
+def add_package_files(d, doc, spdx_pkg, topdir, get_spdxid, get_types, *, archive=None, ignore_dirs=[], ignore_top_level_dirs=[]):
+ from pathlib import Path
+ import oe.spdx
+ import hashlib
+
+ source_date_epoch = d.getVar("SOURCE_DATE_EPOCH")
+ if source_date_epoch:
+ source_date_epoch = int(source_date_epoch)
+
+ sha1s = []
+ spdx_files = []
+
+ file_counter = 1
+ for subdir, dirs, files in os.walk(topdir):
+ dirs[:] = [d for d in dirs if d not in ignore_dirs]
+ if subdir == str(topdir):
+ dirs[:] = [d for d in dirs if d not in ignore_top_level_dirs]
+
+ for file in files:
+ filepath = Path(subdir) / file
+ filename = str(filepath.relative_to(topdir))
+
+ if not filepath.is_symlink() and filepath.is_file():
+ spdx_file = oe.spdx.SPDXFile()
+ spdx_file.SPDXID = get_spdxid(file_counter)
+ for t in get_types(filepath):
+ spdx_file.fileTypes.append(t)
+ spdx_file.fileName = filename
+
+ if archive is not None:
+ with filepath.open("rb") as f:
+ info = archive.gettarinfo(fileobj=f)
+ info.name = filename
+ info.uid = 0
+ info.gid = 0
+ info.uname = "root"
+ info.gname = "root"
+
+ if source_date_epoch is not None and info.mtime > source_date_epoch:
+ info.mtime = source_date_epoch
+
+ archive.addfile(info, f)
+
+ sha1 = bb.utils.sha1_file(filepath)
+ sha1s.append(sha1)
+ spdx_file.checksums.append(oe.spdx.SPDXChecksum(
+ algorithm="SHA1",
+ checksumValue=sha1,
+ ))
+ spdx_file.checksums.append(oe.spdx.SPDXChecksum(
+ algorithm="SHA256",
+ checksumValue=bb.utils.sha256_file(filepath),
+ ))
+
+ if "SOURCE" in spdx_file.fileTypes:
+ extracted_lics = extract_licenses(filepath)
+ if extracted_lics:
+ spdx_file.licenseInfoInFiles = extracted_lics
+
+ doc.files.append(spdx_file)
+ doc.add_relationship(spdx_pkg, "CONTAINS", spdx_file)
+ spdx_pkg.hasFiles.append(spdx_file.SPDXID)
+
+ spdx_files.append(spdx_file)
+
+ file_counter += 1
+
+ sha1s.sort()
+ verifier = hashlib.sha1()
+ for v in sha1s:
+ verifier.update(v.encode("utf-8"))
+ spdx_pkg.packageVerificationCode.packageVerificationCodeValue = verifier.hexdigest()
+
+ return spdx_files
+
+
+def add_package_sources_from_debug(d, package_doc, spdx_package, package, package_files, sources):
+ from pathlib import Path
+ import hashlib
+ import oe.packagedata
+ import oe.spdx
+
+ debug_search_paths = [
+ Path(d.getVar('PKGD')),
+ Path(d.getVar('STAGING_DIR_TARGET')),
+ Path(d.getVar('STAGING_DIR_NATIVE')),
+ Path(d.getVar('STAGING_KERNEL_DIR')),
+ ]
+
+ pkg_data = oe.packagedata.read_subpkgdata_extended(package, d)
+
+ if pkg_data is None:
+ return
+
+ for file_path, file_data in pkg_data["files_info"].items():
+ if not "debugsrc" in file_data:
+ continue
+
+ for pkg_file in package_files:
+ if file_path.lstrip("/") == pkg_file.fileName.lstrip("/"):
+ break
+ else:
+ bb.fatal("No package file found for %s" % str(file_path))
+ continue
+
+ for debugsrc in file_data["debugsrc"]:
+ ref_id = "NOASSERTION"
+ for search in debug_search_paths:
+ if debugsrc.startswith("/usr/src/kernel"):
+ debugsrc_path = search / debugsrc.replace('/usr/src/kernel/', '')
+ else:
+ debugsrc_path = search / debugsrc.lstrip("/")
+ if not debugsrc_path.exists():
+ continue
+
+ file_sha256 = bb.utils.sha256_file(debugsrc_path)
+
+ if file_sha256 in sources:
+ source_file = sources[file_sha256]
+
+ doc_ref = package_doc.find_external_document_ref(source_file.doc.documentNamespace)
+ if doc_ref is None:
+ doc_ref = oe.spdx.SPDXExternalDocumentRef()
+ doc_ref.externalDocumentId = "DocumentRef-dependency-" + source_file.doc.name
+ doc_ref.spdxDocument = source_file.doc.documentNamespace
+ doc_ref.checksum.algorithm = "SHA1"
+ doc_ref.checksum.checksumValue = source_file.doc_sha1
+ package_doc.externalDocumentRefs.append(doc_ref)
+
+ ref_id = "%s:%s" % (doc_ref.externalDocumentId, source_file.file.SPDXID)
+ else:
+ bb.debug(1, "Debug source %s with SHA256 %s not found in any dependency" % (str(debugsrc_path), file_sha256))
+ break
+ else:
+ bb.debug(1, "Debug source %s not found" % debugsrc)
+
+ package_doc.add_relationship(pkg_file, "GENERATED_FROM", ref_id, comment=debugsrc)
+
+def collect_dep_recipes(d, doc, spdx_recipe):
+ from pathlib import Path
+ import oe.sbom
+ import oe.spdx
+
+ deploy_dir_spdx = Path(d.getVar("DEPLOY_DIR_SPDX"))
+
+ dep_recipes = []
+ taskdepdata = d.getVar("BB_TASKDEPDATA", False)
+ deps = sorted(set(
+ dep[0] for dep in taskdepdata.values() if
+ dep[1] == "do_create_spdx" and dep[0] != d.getVar("PN")
+ ))
+ for dep_pn in deps:
+ dep_recipe_path = deploy_dir_spdx / "recipes" / ("recipe-%s.spdx.json" % dep_pn)
+
+ spdx_dep_doc, spdx_dep_sha1 = oe.sbom.read_doc(dep_recipe_path)
+
+ for pkg in spdx_dep_doc.packages:
+ if pkg.name == dep_pn:
+ spdx_dep_recipe = pkg
+ break
+ else:
+ continue
+
+ dep_recipes.append(oe.sbom.DepRecipe(spdx_dep_doc, spdx_dep_sha1, spdx_dep_recipe))
+
+ dep_recipe_ref = oe.spdx.SPDXExternalDocumentRef()
+ dep_recipe_ref.externalDocumentId = "DocumentRef-dependency-" + spdx_dep_doc.name
+ dep_recipe_ref.spdxDocument = spdx_dep_doc.documentNamespace
+ dep_recipe_ref.checksum.algorithm = "SHA1"
+ dep_recipe_ref.checksum.checksumValue = spdx_dep_sha1
+
+ doc.externalDocumentRefs.append(dep_recipe_ref)
+
+ doc.add_relationship(
+ "%s:%s" % (dep_recipe_ref.externalDocumentId, spdx_dep_recipe.SPDXID),
+ "BUILD_DEPENDENCY_OF",
+ spdx_recipe
+ )
+
+ return dep_recipes
+
+collect_dep_recipes[vardepsexclude] += "BB_TASKDEPDATA"
+collect_dep_recipes[vardeps] += "DEPENDS"
+
+def collect_dep_sources(d, dep_recipes):
+ import oe.sbom
+
+ sources = {}
+ for dep in dep_recipes:
+ # Don't collect sources from native recipes as they
+ # match non-native sources also.
+ if recipe_spdx_is_native(d, dep.recipe):
+ continue
+ recipe_files = set(dep.recipe.hasFiles)
+
+ for spdx_file in dep.doc.files:
+ if spdx_file.SPDXID not in recipe_files:
+ continue
+
+ if "SOURCE" in spdx_file.fileTypes:
+ for checksum in spdx_file.checksums:
+ if checksum.algorithm == "SHA256":
+ sources[checksum.checksumValue] = oe.sbom.DepSource(dep.doc, dep.doc_sha1, dep.recipe, spdx_file)
+ break
+
+ return sources
+
+def add_download_packages(d, doc, recipe):
+ import os.path
+ from bb.fetch2 import decodeurl, CHECKSUM_LIST
+ import bb.process
+ import oe.spdx
+ import oe.sbom
+
+ for download_idx, src_uri in enumerate(d.getVar('SRC_URI').split()):
+ f = bb.fetch2.FetchData(src_uri, d)
+
+ for name in f.names:
+ package = oe.spdx.SPDXPackage()
+ package.name = "%s-source-%d" % (d.getVar("PN"), download_idx + 1)
+ package.SPDXID = oe.sbom.get_download_spdxid(d, download_idx + 1)
+
+ if f.type == "file":
+ continue
+
+ uri = f.type
+ proto = getattr(f, "proto", None)
+ if proto is not None:
+ uri = uri + "+" + proto
+ uri = uri + "://" + f.host + f.path
+
+ if f.method.supports_srcrev():
+ uri = uri + "@" + f.revisions[name]
+
+ if f.method.supports_checksum(f):
+ for checksum_id in CHECKSUM_LIST:
+ if checksum_id.upper() not in oe.spdx.SPDXPackage.ALLOWED_CHECKSUMS:
+ continue
+
+ expected_checksum = getattr(f, "%s_expected" % checksum_id)
+ if expected_checksum is None:
+ continue
+
+ c = oe.spdx.SPDXChecksum()
+ c.algorithm = checksum_id.upper()
+ c.checksumValue = expected_checksum
+ package.checksums.append(c)
+
+ package.downloadLocation = uri
+ doc.packages.append(package)
+ doc.add_relationship(doc, "DESCRIBES", package)
+ # In the future, we might be able to do more fancy dependencies,
+ # but this should be sufficient for now
+ doc.add_relationship(package, "BUILD_DEPENDENCY_OF", recipe)
+
+python do_create_spdx() {
+ from datetime import datetime, timezone
+ import oe.sbom
+ import oe.spdx
+ import uuid
+ from pathlib import Path
+ from contextlib import contextmanager
+ import oe.cve_check
+
+ @contextmanager
+ def optional_tarfile(name, guard, mode="w"):
+ import tarfile
+ import gzip
+
+ if guard:
+ name.parent.mkdir(parents=True, exist_ok=True)
+ with gzip.open(name, mode=mode + "b") as f:
+ with tarfile.open(fileobj=f, mode=mode + "|") as tf:
+ yield tf
+ else:
+ yield None
+
+
+ deploy_dir_spdx = Path(d.getVar("DEPLOY_DIR_SPDX"))
+ spdx_workdir = Path(d.getVar("SPDXWORK"))
+ include_sources = d.getVar("SPDX_INCLUDE_SOURCES") == "1"
+ archive_sources = d.getVar("SPDX_ARCHIVE_SOURCES") == "1"
+ archive_packaged = d.getVar("SPDX_ARCHIVE_PACKAGED") == "1"
+
+ creation_time = datetime.now(tz=timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
+
+ doc = oe.spdx.SPDXDocument()
+
+ doc.name = "recipe-" + d.getVar("PN")
+ doc.documentNamespace = get_doc_namespace(d, doc)
+ doc.creationInfo.created = creation_time
+ doc.creationInfo.comment = "This document was created by analyzing recipe files during the build."
+ doc.creationInfo.licenseListVersion = d.getVar("SPDX_LICENSE_DATA")["licenseListVersion"]
+ doc.creationInfo.creators.append("Tool: OpenEmbedded Core create-spdx.bbclass")
+ doc.creationInfo.creators.append("Organization: %s" % d.getVar("SPDX_ORG"))
+ doc.creationInfo.creators.append("Person: N/A ()")
+
+ recipe = oe.spdx.SPDXPackage()
+ recipe.name = d.getVar("PN")
+ recipe.versionInfo = d.getVar("PV")
+ recipe.SPDXID = oe.sbom.get_recipe_spdxid(d)
+ recipe.supplier = d.getVar("SPDX_SUPPLIER")
+ if bb.data.inherits_class("native", d) or bb.data.inherits_class("cross", d):
+ recipe.annotations.append(create_annotation(d, "isNative"))
+
+ homepage = d.getVar("HOMEPAGE")
+ if homepage:
+ recipe.homepage = homepage
+
+ license = d.getVar("LICENSE")
+ if license:
+ recipe.licenseDeclared = convert_license_to_spdx(license, doc, d)
+
+ summary = d.getVar("SUMMARY")
+ if summary:
+ recipe.summary = summary
+
+ description = d.getVar("DESCRIPTION")
+ if description:
+ recipe.description = description
+
+ if d.getVar("SPDX_CUSTOM_ANNOTATION_VARS"):
+ for var in d.getVar('SPDX_CUSTOM_ANNOTATION_VARS').split():
+ recipe.annotations.append(create_annotation(d, var + "=" + d.getVar(var)))
+
+ # Some CVEs may be patched during the build process without incrementing the version number,
+ # so querying for CVEs based on the CPE id can lead to false positives. To account for this,
+ # save the CVEs fixed by patches to source information field in the SPDX.
+ patched_cves = oe.cve_check.get_patched_cves(d)
+ patched_cves = list(patched_cves)
+ patched_cves = ' '.join(patched_cves)
+ if patched_cves:
+ recipe.sourceInfo = "CVEs fixed: " + patched_cves
+
+ cpe_ids = oe.cve_check.get_cpe_ids(d.getVar("CVE_PRODUCT"), d.getVar("CVE_VERSION"))
+ if cpe_ids:
+ for cpe_id in cpe_ids:
+ cpe = oe.spdx.SPDXExternalReference()
+ cpe.referenceCategory = "SECURITY"
+ cpe.referenceType = "http://spdx.org/rdf/references/cpe23Type"
+ cpe.referenceLocator = cpe_id
+ recipe.externalRefs.append(cpe)
+
+ doc.packages.append(recipe)
+ doc.add_relationship(doc, "DESCRIBES", recipe)
+
+ add_download_packages(d, doc, recipe)
+
+ if process_sources(d) and include_sources:
+ recipe_archive = deploy_dir_spdx / "recipes" / (doc.name + ".tar.gz")
+ with optional_tarfile(recipe_archive, archive_sources) as archive:
+ spdx_get_src(d)
+
+ add_package_files(
+ d,
+ doc,
+ recipe,
+ spdx_workdir,
+ lambda file_counter: "SPDXRef-SourceFile-%s-%d" % (d.getVar("PN"), file_counter),
+ lambda filepath: ["SOURCE"],
+ ignore_dirs=[".git"],
+ ignore_top_level_dirs=["temp"],
+ archive=archive,
+ )
+
+ if archive is not None:
+ recipe.packageFileName = str(recipe_archive.name)
+
+ dep_recipes = collect_dep_recipes(d, doc, recipe)
+
+ doc_sha1 = oe.sbom.write_doc(d, doc, "recipes", indent=get_json_indent(d))
+ dep_recipes.append(oe.sbom.DepRecipe(doc, doc_sha1, recipe))
+
+ recipe_ref = oe.spdx.SPDXExternalDocumentRef()
+ recipe_ref.externalDocumentId = "DocumentRef-recipe-" + recipe.name
+ recipe_ref.spdxDocument = doc.documentNamespace
+ recipe_ref.checksum.algorithm = "SHA1"
+ recipe_ref.checksum.checksumValue = doc_sha1
+
+ sources = collect_dep_sources(d, dep_recipes)
+ found_licenses = {license.name:recipe_ref.externalDocumentId + ":" + license.licenseId for license in doc.hasExtractedLicensingInfos}
+
+ if not recipe_spdx_is_native(d, recipe):
+ bb.build.exec_func("read_subpackage_metadata", d)
+
+ pkgdest = Path(d.getVar("PKGDEST"))
+ for package in d.getVar("PACKAGES").split():
+ if not oe.packagedata.packaged(package, d):
+ continue
+
+ package_doc = oe.spdx.SPDXDocument()
+ pkg_name = d.getVar("PKG:%s" % package) or package
+ package_doc.name = pkg_name
+ package_doc.documentNamespace = get_doc_namespace(d, package_doc)
+ package_doc.creationInfo.created = creation_time
+ package_doc.creationInfo.comment = "This document was created by analyzing packages created during the build."
+ package_doc.creationInfo.licenseListVersion = d.getVar("SPDX_LICENSE_DATA")["licenseListVersion"]
+ package_doc.creationInfo.creators.append("Tool: OpenEmbedded Core create-spdx.bbclass")
+ package_doc.creationInfo.creators.append("Organization: %s" % d.getVar("SPDX_ORG"))
+ package_doc.creationInfo.creators.append("Person: N/A ()")
+ package_doc.externalDocumentRefs.append(recipe_ref)
+
+ package_license = d.getVar("LICENSE:%s" % package) or d.getVar("LICENSE")
+
+ spdx_package = oe.spdx.SPDXPackage()
+
+ spdx_package.SPDXID = oe.sbom.get_package_spdxid(pkg_name)
+ spdx_package.name = pkg_name
+ spdx_package.versionInfo = d.getVar("PV")
+ spdx_package.licenseDeclared = convert_license_to_spdx(package_license, package_doc, d, found_licenses)
+ spdx_package.supplier = d.getVar("SPDX_SUPPLIER")
+
+ package_doc.packages.append(spdx_package)
+
+ package_doc.add_relationship(spdx_package, "GENERATED_FROM", "%s:%s" % (recipe_ref.externalDocumentId, recipe.SPDXID))
+ package_doc.add_relationship(package_doc, "DESCRIBES", spdx_package)
+
+ package_archive = deploy_dir_spdx / "packages" / (package_doc.name + ".tar.gz")
+ with optional_tarfile(package_archive, archive_packaged) as archive:
+ package_files = add_package_files(
+ d,
+ package_doc,
+ spdx_package,
+ pkgdest / package,
+ lambda file_counter: oe.sbom.get_packaged_file_spdxid(pkg_name, file_counter),
+ lambda filepath: ["BINARY"],
+ ignore_top_level_dirs=['CONTROL', 'DEBIAN'],
+ archive=archive,
+ )
+
+ if archive is not None:
+ spdx_package.packageFileName = str(package_archive.name)
+
+ add_package_sources_from_debug(d, package_doc, spdx_package, package, package_files, sources)
+
+ oe.sbom.write_doc(d, package_doc, "packages", indent=get_json_indent(d))
+}
+# NOTE: depending on do_unpack is a hack that is necessary to get its dependencies in order to archive the source
+addtask do_create_spdx after do_package do_packagedata do_unpack before do_populate_sdk do_build do_rm_work
+
+SSTATETASKS += "do_create_spdx"
+do_create_spdx[sstate-inputdirs] = "${SPDXDEPLOY}"
+do_create_spdx[sstate-outputdirs] = "${DEPLOY_DIR_SPDX}"
+
+python do_create_spdx_setscene () {
+ sstate_setscene(d)
+}
+addtask do_create_spdx_setscene
+
+do_create_spdx[dirs] = "${SPDXWORK}"
+do_create_spdx[cleandirs] = "${SPDXDEPLOY} ${SPDXWORK}"
+do_create_spdx[depends] += "${PATCHDEPENDENCY}"
+do_create_spdx[deptask] = "do_create_spdx"
+
+def collect_package_providers(d):
+ from pathlib import Path
+ import oe.sbom
+ import oe.spdx
+ import json
+
+ deploy_dir_spdx = Path(d.getVar("DEPLOY_DIR_SPDX"))
+
+ providers = {}
+
+ taskdepdata = d.getVar("BB_TASKDEPDATA", False)
+ deps = sorted(set(
+ dep[0] for dep in taskdepdata.values() if dep[0] != d.getVar("PN")
+ ))
+ deps.append(d.getVar("PN"))
+
+ for dep_pn in deps:
+ recipe_data = oe.packagedata.read_pkgdata(dep_pn, d)
+
+ for pkg in recipe_data.get("PACKAGES", "").split():
+
+ pkg_data = oe.packagedata.read_subpkgdata_dict(pkg, d)
+ rprovides = set(n for n, _ in bb.utils.explode_dep_versions2(pkg_data.get("RPROVIDES", "")).items())
+ rprovides.add(pkg)
+
+ for r in rprovides:
+ providers[r] = pkg
+
+ return providers
+
+collect_package_providers[vardepsexclude] += "BB_TASKDEPDATA"
+
+python do_create_runtime_spdx() {
+ from datetime import datetime, timezone
+ import oe.sbom
+ import oe.spdx
+ import oe.packagedata
+ from pathlib import Path
+
+ deploy_dir_spdx = Path(d.getVar("DEPLOY_DIR_SPDX"))
+ spdx_deploy = Path(d.getVar("SPDXRUNTIMEDEPLOY"))
+ is_native = bb.data.inherits_class("native", d) or bb.data.inherits_class("cross", d)
+
+ creation_time = datetime.now(tz=timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
+
+ providers = collect_package_providers(d)
+
+ if not is_native:
+ bb.build.exec_func("read_subpackage_metadata", d)
+
+ dep_package_cache = {}
+
+ pkgdest = Path(d.getVar("PKGDEST"))
+ for package in d.getVar("PACKAGES").split():
+ localdata = bb.data.createCopy(d)
+ pkg_name = d.getVar("PKG:%s" % package) or package
+ localdata.setVar("PKG", pkg_name)
+ localdata.setVar('OVERRIDES', d.getVar("OVERRIDES", False) + ":" + package)
+
+ if not oe.packagedata.packaged(package, localdata):
+ continue
+
+ pkg_spdx_path = deploy_dir_spdx / "packages" / (pkg_name + ".spdx.json")
+
+ package_doc, package_doc_sha1 = oe.sbom.read_doc(pkg_spdx_path)
+
+ for p in package_doc.packages:
+ if p.name == pkg_name:
+ spdx_package = p
+ break
+ else:
+ bb.fatal("Package '%s' not found in %s" % (pkg_name, pkg_spdx_path))
+
+ runtime_doc = oe.spdx.SPDXDocument()
+ runtime_doc.name = "runtime-" + pkg_name
+ runtime_doc.documentNamespace = get_doc_namespace(localdata, runtime_doc)
+ runtime_doc.creationInfo.created = creation_time
+ runtime_doc.creationInfo.comment = "This document was created by analyzing package runtime dependencies."
+ runtime_doc.creationInfo.licenseListVersion = d.getVar("SPDX_LICENSE_DATA")["licenseListVersion"]
+ runtime_doc.creationInfo.creators.append("Tool: OpenEmbedded Core create-spdx.bbclass")
+ runtime_doc.creationInfo.creators.append("Organization: %s" % d.getVar("SPDX_ORG"))
+ runtime_doc.creationInfo.creators.append("Person: N/A ()")
+
+ package_ref = oe.spdx.SPDXExternalDocumentRef()
+ package_ref.externalDocumentId = "DocumentRef-package-" + package
+ package_ref.spdxDocument = package_doc.documentNamespace
+ package_ref.checksum.algorithm = "SHA1"
+ package_ref.checksum.checksumValue = package_doc_sha1
+
+ runtime_doc.externalDocumentRefs.append(package_ref)
+
+ runtime_doc.add_relationship(
+ runtime_doc.SPDXID,
+ "AMENDS",
+ "%s:%s" % (package_ref.externalDocumentId, package_doc.SPDXID)
+ )
+
+ deps = bb.utils.explode_dep_versions2(localdata.getVar("RDEPENDS") or "")
+ seen_deps = set()
+ for dep, _ in deps.items():
+ if dep in seen_deps:
+ continue
+
+ if dep not in providers:
+ continue
+
+ dep = providers[dep]
+
+ if not oe.packagedata.packaged(dep, localdata):
+ continue
+
+ dep_pkg_data = oe.packagedata.read_subpkgdata_dict(dep, d)
+ dep_pkg = dep_pkg_data["PKG"]
+
+ if dep in dep_package_cache:
+ (dep_spdx_package, dep_package_ref) = dep_package_cache[dep]
+ else:
+ dep_path = deploy_dir_spdx / "packages" / ("%s.spdx.json" % dep_pkg)
+
+ spdx_dep_doc, spdx_dep_sha1 = oe.sbom.read_doc(dep_path)
+
+ for pkg in spdx_dep_doc.packages:
+ if pkg.name == dep_pkg:
+ dep_spdx_package = pkg
+ break
+ else:
+ bb.fatal("Package '%s' not found in %s" % (dep_pkg, dep_path))
+
+ dep_package_ref = oe.spdx.SPDXExternalDocumentRef()
+ dep_package_ref.externalDocumentId = "DocumentRef-runtime-dependency-" + spdx_dep_doc.name
+ dep_package_ref.spdxDocument = spdx_dep_doc.documentNamespace
+ dep_package_ref.checksum.algorithm = "SHA1"
+ dep_package_ref.checksum.checksumValue = spdx_dep_sha1
+
+ dep_package_cache[dep] = (dep_spdx_package, dep_package_ref)
+
+ runtime_doc.externalDocumentRefs.append(dep_package_ref)
+
+ runtime_doc.add_relationship(
+ "%s:%s" % (dep_package_ref.externalDocumentId, dep_spdx_package.SPDXID),
+ "RUNTIME_DEPENDENCY_OF",
+ "%s:%s" % (package_ref.externalDocumentId, spdx_package.SPDXID)
+ )
+ seen_deps.add(dep)
+
+ oe.sbom.write_doc(d, runtime_doc, "runtime", spdx_deploy, indent=get_json_indent(d))
+}
+
+addtask do_create_runtime_spdx after do_create_spdx before do_build do_rm_work
+SSTATETASKS += "do_create_runtime_spdx"
+do_create_runtime_spdx[sstate-inputdirs] = "${SPDXRUNTIMEDEPLOY}"
+do_create_runtime_spdx[sstate-outputdirs] = "${DEPLOY_DIR_SPDX}"
+
+python do_create_runtime_spdx_setscene () {
+ sstate_setscene(d)
+}
+addtask do_create_runtime_spdx_setscene
+
+do_create_runtime_spdx[dirs] = "${SPDXRUNTIMEDEPLOY}"
+do_create_runtime_spdx[cleandirs] = "${SPDXRUNTIMEDEPLOY}"
+do_create_runtime_spdx[rdeptask] = "do_create_spdx"
+
+def spdx_get_src(d):
+ """
+ Save the patched source of the recipe in SPDXWORK.
+ """
+ import shutil
+ spdx_workdir = d.getVar('SPDXWORK')
+ spdx_sysroot_native = d.getVar('STAGING_DIR_NATIVE')
+ pn = d.getVar('PN')
+
+ workdir = d.getVar("WORKDIR")
+
+ try:
+ # The kernel class functions require it to be on work-shared, so we don't change WORKDIR
+ if not is_work_shared_spdx(d):
+ # Change the WORKDIR to make do_unpack do_patch run in another dir.
+ d.setVar('WORKDIR', spdx_workdir)
+ # Restore the original path to recipe's native sysroot (it's relative to WORKDIR).
+ d.setVar('STAGING_DIR_NATIVE', spdx_sysroot_native)
+
+ # Changing 'WORKDIR' also changes 'B', so create the 'B' directory
+ # since some of the following tasks may require it to exist (for
+ # example, some recipes' do_patch requires 'B' to exist).
+ bb.utils.mkdirhier(d.getVar('B'))
+
+ bb.build.exec_func('do_unpack', d)
+ # Copy source of kernel to spdx_workdir
+ if is_work_shared_spdx(d):
+ share_src = d.getVar('WORKDIR')
+ d.setVar('WORKDIR', spdx_workdir)
+ d.setVar('STAGING_DIR_NATIVE', spdx_sysroot_native)
+ src_dir = spdx_workdir + "/" + d.getVar('PN')+ "-" + d.getVar('PV') + "-" + d.getVar('PR')
+ bb.utils.mkdirhier(src_dir)
+ if bb.data.inherits_class('kernel',d):
+ share_src = d.getVar('STAGING_KERNEL_DIR')
+ cmd_copy_share = "cp -rf " + share_src + "/* " + src_dir + "/"
+ cmd_copy_shared_res = os.popen(cmd_copy_share).read()
+ bb.note("cmd_copy_shared_result = " + cmd_copy_shared_res)
+
+ git_path = src_dir + "/.git"
+ if os.path.exists(git_path):
+ shutil.rmtree(git_path)
+
+ # Make sure gcc and kernel sources are patched only once
+ if not (d.getVar('SRC_URI') == "" or is_work_shared_spdx(d)):
+ bb.build.exec_func('do_patch', d)
+
+ # Some userland has no source.
+ if not os.path.exists( spdx_workdir ):
+ bb.utils.mkdirhier(spdx_workdir)
+ finally:
+ d.setVar("WORKDIR", workdir)
+
+do_rootfs[recrdeptask] += "do_create_spdx do_create_runtime_spdx"
+do_rootfs[cleandirs] += "${SPDXIMAGEWORK}"
+
+ROOTFS_POSTUNINSTALL_COMMAND =+ "image_combine_spdx ; "
+
+do_populate_sdk[recrdeptask] += "do_create_spdx do_create_runtime_spdx"
+do_populate_sdk[cleandirs] += "${SPDXSDKWORK}"
+POPULATE_SDK_POST_HOST_COMMAND:append:task-populate-sdk = " sdk_host_combine_spdx; "
+POPULATE_SDK_POST_TARGET_COMMAND:append:task-populate-sdk = " sdk_target_combine_spdx; "
+
+python image_combine_spdx() {
+ import os
+ import oe.sbom
+ from pathlib import Path
+ from oe.rootfs import image_list_installed_packages
+
+ image_name = d.getVar("IMAGE_NAME")
+ image_link_name = d.getVar("IMAGE_LINK_NAME")
+ imgdeploydir = Path(d.getVar("IMGDEPLOYDIR"))
+ img_spdxid = oe.sbom.get_image_spdxid(image_name)
+ packages = image_list_installed_packages(d)
+
+ combine_spdx(d, image_name, imgdeploydir, img_spdxid, packages, Path(d.getVar("SPDXIMAGEWORK")))
+
+ def make_image_link(target_path, suffix):
+ if image_link_name:
+ link = imgdeploydir / (image_link_name + suffix)
+ if link != target_path:
+ link.symlink_to(os.path.relpath(target_path, link.parent))
+
+ spdx_tar_path = imgdeploydir / (image_name + ".spdx.tar.gz")
+ make_image_link(spdx_tar_path, ".spdx.tar.gz")
+}
+
+python sdk_host_combine_spdx() {
+ sdk_combine_spdx(d, "host")
+}
+
+python sdk_target_combine_spdx() {
+ sdk_combine_spdx(d, "target")
+}
+
+def sdk_combine_spdx(d, sdk_type):
+ import oe.sbom
+ from pathlib import Path
+ from oe.sdk import sdk_list_installed_packages
+
+ sdk_name = d.getVar("SDK_NAME") + "-" + sdk_type
+ sdk_deploydir = Path(d.getVar("SDKDEPLOYDIR"))
+ sdk_spdxid = oe.sbom.get_sdk_spdxid(sdk_name)
+ sdk_packages = sdk_list_installed_packages(d, sdk_type == "target")
+ combine_spdx(d, sdk_name, sdk_deploydir, sdk_spdxid, sdk_packages, Path(d.getVar('SPDXSDKWORK')))
+
+def combine_spdx(d, rootfs_name, rootfs_deploydir, rootfs_spdxid, packages, spdx_workdir):
+ import os
+ import oe.spdx
+ import oe.sbom
+ import io
+ import json
+ from datetime import timezone, datetime
+ from pathlib import Path
+ import tarfile
+ import gzip
+
+ creation_time = datetime.now(tz=timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
+ deploy_dir_spdx = Path(d.getVar("DEPLOY_DIR_SPDX"))
+ source_date_epoch = d.getVar("SOURCE_DATE_EPOCH")
+
+ doc = oe.spdx.SPDXDocument()
+ doc.name = rootfs_name
+ doc.documentNamespace = get_doc_namespace(d, doc)
+ doc.creationInfo.created = creation_time
+ doc.creationInfo.comment = "This document was created by analyzing the source of the Yocto recipe during the build."
+ doc.creationInfo.licenseListVersion = d.getVar("SPDX_LICENSE_DATA")["licenseListVersion"]
+ doc.creationInfo.creators.append("Tool: OpenEmbedded Core create-spdx.bbclass")
+ doc.creationInfo.creators.append("Organization: %s" % d.getVar("SPDX_ORG"))
+ doc.creationInfo.creators.append("Person: N/A ()")
+
+ image = oe.spdx.SPDXPackage()
+ image.name = d.getVar("PN")
+ image.versionInfo = d.getVar("PV")
+ image.SPDXID = rootfs_spdxid
+ image.supplier = d.getVar("SPDX_SUPPLIER")
+
+ doc.packages.append(image)
+
+ for name in sorted(packages.keys()):
+ pkg_spdx_path = deploy_dir_spdx / "packages" / (name + ".spdx.json")
+ pkg_doc, pkg_doc_sha1 = oe.sbom.read_doc(pkg_spdx_path)
+
+ for p in pkg_doc.packages:
+ if p.name == name:
+ pkg_ref = oe.spdx.SPDXExternalDocumentRef()
+ pkg_ref.externalDocumentId = "DocumentRef-%s" % pkg_doc.name
+ pkg_ref.spdxDocument = pkg_doc.documentNamespace
+ pkg_ref.checksum.algorithm = "SHA1"
+ pkg_ref.checksum.checksumValue = pkg_doc_sha1
+
+ doc.externalDocumentRefs.append(pkg_ref)
+ doc.add_relationship(image, "CONTAINS", "%s:%s" % (pkg_ref.externalDocumentId, p.SPDXID))
+ break
+ else:
+ bb.fatal("Unable to find package with name '%s' in SPDX file %s" % (name, pkg_spdx_path))
+
+ runtime_spdx_path = deploy_dir_spdx / "runtime" / ("runtime-" + name + ".spdx.json")
+ runtime_doc, runtime_doc_sha1 = oe.sbom.read_doc(runtime_spdx_path)
+
+ runtime_ref = oe.spdx.SPDXExternalDocumentRef()
+ runtime_ref.externalDocumentId = "DocumentRef-%s" % runtime_doc.name
+ runtime_ref.spdxDocument = runtime_doc.documentNamespace
+ runtime_ref.checksum.algorithm = "SHA1"
+ runtime_ref.checksum.checksumValue = runtime_doc_sha1
+
+ # "OTHER" isn't ideal here, but I can't find a relationship that makes sense
+ doc.externalDocumentRefs.append(runtime_ref)
+ doc.add_relationship(
+ image,
+ "OTHER",
+ "%s:%s" % (runtime_ref.externalDocumentId, runtime_doc.SPDXID),
+ comment="Runtime dependencies for %s" % name
+ )
+
+ image_spdx_path = spdx_workdir / (rootfs_name + ".spdx.json")
+
+ with image_spdx_path.open("wb") as f:
+ doc.to_json(f, sort_keys=True, indent=get_json_indent(d))
+
+ num_threads = int(d.getVar("BB_NUMBER_THREADS"))
+
+ visited_docs = set()
+
+ index = {"documents": []}
+
+ spdx_tar_path = rootfs_deploydir / (rootfs_name + ".spdx.tar.gz")
+ with gzip.open(spdx_tar_path, "w") as f:
+ with tarfile.open(fileobj=f, mode="w|") as tar:
+ def collect_spdx_document(path):
+ nonlocal tar
+ nonlocal deploy_dir_spdx
+ nonlocal source_date_epoch
+ nonlocal index
+
+ if path in visited_docs:
+ return
+
+ visited_docs.add(path)
+
+ with path.open("rb") as f:
+ doc, sha1 = oe.sbom.read_doc(f)
+ f.seek(0)
+
+ if doc.documentNamespace in visited_docs:
+ return
+
+ bb.note("Adding SPDX document %s" % path)
+ visited_docs.add(doc.documentNamespace)
+ info = tar.gettarinfo(fileobj=f)
+
+ info.name = doc.name + ".spdx.json"
+ info.uid = 0
+ info.gid = 0
+ info.uname = "root"
+ info.gname = "root"
+
+ if source_date_epoch is not None and info.mtime > int(source_date_epoch):
+ info.mtime = int(source_date_epoch)
+
+ tar.addfile(info, f)
+
+ index["documents"].append({
+ "filename": info.name,
+ "documentNamespace": doc.documentNamespace,
+ "sha1": sha1,
+ })
+
+ for ref in doc.externalDocumentRefs:
+ ref_path = deploy_dir_spdx / "by-namespace" / ref.spdxDocument.replace("/", "_")
+ collect_spdx_document(ref_path)
+
+ collect_spdx_document(image_spdx_path)
+
+ index["documents"].sort(key=lambda x: x["filename"])
+
+ index_str = io.BytesIO(json.dumps(
+ index,
+ sort_keys=True,
+ indent=get_json_indent(d),
+ ).encode("utf-8"))
+
+ info = tarfile.TarInfo()
+ info.name = "index.json"
+ info.size = len(index_str.getvalue())
+ info.uid = 0
+ info.gid = 0
+ info.uname = "root"
+ info.gname = "root"
+
+ tar.addfile(info, fileobj=index_str)
diff --git a/poky/meta/classes/create-spdx.bbclass b/poky/meta/classes/create-spdx.bbclass
new file mode 100644
index 0000000000..19c6c0ff0b
--- /dev/null
+++ b/poky/meta/classes/create-spdx.bbclass
@@ -0,0 +1,8 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Include this class when you don't care what version of SPDX you get; it will
+# be updated to the latest stable version that is supported
+inherit create-spdx-2.2
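As a usage sketch (not part of the patch itself), SBOM generation with the class above is normally switched on from a distro or local.conf; SPDX_PRETTY is assumed here to be the pretty-printing knob read by get_json_indent() in the class code:
INHERIT += "create-spdx"
SPDX_PRETTY = "1"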
diff --git a/poky/meta/classes/cve-check.bbclass b/poky/meta/classes/cve-check.bbclass
index 4fc4e545e4..05b9cb47dc 100644
--- a/poky/meta/classes/cve-check.bbclass
+++ b/poky/meta/classes/cve-check.bbclass
@@ -42,8 +42,8 @@ CVE_CHECK_LOG_JSON ?= "${T}/cve.json"
CVE_CHECK_DIR ??= "${DEPLOY_DIR}/cve"
CVE_CHECK_RECIPE_FILE ?= "${CVE_CHECK_DIR}/${PN}"
CVE_CHECK_RECIPE_FILE_JSON ?= "${CVE_CHECK_DIR}/${PN}_cve.json"
-CVE_CHECK_MANIFEST ?= "${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.cve"
-CVE_CHECK_MANIFEST_JSON ?= "${DEPLOY_DIR_IMAGE}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.json"
+CVE_CHECK_MANIFEST ?= "${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.cve"
+CVE_CHECK_MANIFEST_JSON ?= "${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.json"
CVE_CHECK_COPY_FILES ??= "1"
CVE_CHECK_CREATE_MANIFEST ??= "1"
@@ -195,7 +195,7 @@ python cve_check_write_rootfs_manifest () {
recipies.add(pkg_data["PN"])
bb.note("Writing rootfs CVE manifest")
- deploy_dir = d.getVar("DEPLOY_DIR_IMAGE")
+ deploy_dir = d.getVar("IMGDEPLOYDIR")
link_name = d.getVar("IMAGE_LINK_NAME")
json_data = {"version":"1", "package": []}
@@ -253,7 +253,7 @@ def check_cves(d, patched_cves):
"""
Connect to the NVD database and find unpatched cves.
"""
- from oe.cve_check import Version
+ from oe.cve_check import Version, convert_cve_version
pn = d.getVar("PN")
real_pv = d.getVar("PV")
@@ -317,6 +317,9 @@ def check_cves(d, patched_cves):
if cve in cve_whitelist:
ignored = True
+ version_start = convert_cve_version(version_start)
+ version_end = convert_cve_version(version_end)
+
if (operator_start == '=' and pv == version_start) or version_start == '-':
vulnerable = True
else:
diff --git a/poky/meta/classes/devshell.bbclass b/poky/meta/classes/devshell.bbclass
index b6212ebd89..76dd0b42ee 100644
--- a/poky/meta/classes/devshell.bbclass
+++ b/poky/meta/classes/devshell.bbclass
@@ -2,8 +2,6 @@ inherit terminal
DEVSHELL = "${SHELL}"
-PATH:prepend:task-devshell = "${COREBASE}/scripts/git-intercept:"
-
python do_devshell () {
if d.getVarFlag("do_devshell", "manualfakeroot"):
d.prependVar("DEVSHELL", "pseudo ")
diff --git a/poky/meta/classes/externalsrc.bbclass b/poky/meta/classes/externalsrc.bbclass
index 0e0a3ae89c..9c9451e528 100644
--- a/poky/meta/classes/externalsrc.bbclass
+++ b/poky/meta/classes/externalsrc.bbclass
@@ -60,7 +60,7 @@ python () {
if externalsrcbuild:
d.setVar('B', externalsrcbuild)
else:
- d.setVar('B', '${WORKDIR}/${BPN}-${PV}/')
+ d.setVar('B', '${WORKDIR}/${BPN}-${PV}')
local_srcuri = []
fetch = bb.fetch2.Fetch((d.getVar('SRC_URI') or '').split(), d)
@@ -207,8 +207,8 @@ def srctree_hash_files(d, srcdir=None):
try:
git_dir = os.path.join(s_dir,
subprocess.check_output(['git', '-C', s_dir, 'rev-parse', '--git-dir'], stderr=subprocess.DEVNULL).decode("utf-8").rstrip())
- top_git_dir = os.path.join(s_dir, subprocess.check_output(['git', '-C', d.getVar("TOPDIR"), 'rev-parse', '--git-dir'],
- stderr=subprocess.DEVNULL).decode("utf-8").rstrip())
+ top_git_dir = os.path.join(d.getVar("TOPDIR"),
+ subprocess.check_output(['git', '-C', d.getVar("TOPDIR"), 'rev-parse', '--git-dir'], stderr=subprocess.DEVNULL).decode("utf-8").rstrip())
if git_dir == top_git_dir:
git_dir = None
except subprocess.CalledProcessError:
@@ -225,15 +225,16 @@ def srctree_hash_files(d, srcdir=None):
env['GIT_INDEX_FILE'] = tmp_index.name
subprocess.check_output(['git', 'add', '-A', '.'], cwd=s_dir, env=env)
git_sha1 = subprocess.check_output(['git', 'write-tree'], cwd=s_dir, env=env).decode("utf-8")
- submodule_helper = subprocess.check_output(['git', 'submodule--helper', 'list'], cwd=s_dir, env=env).decode("utf-8")
- for line in submodule_helper.splitlines():
- module_dir = os.path.join(s_dir, line.rsplit(maxsplit=1)[1])
- if os.path.isdir(module_dir):
- proc = subprocess.Popen(['git', 'add', '-A', '.'], cwd=module_dir, env=env, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
- proc.communicate()
- proc = subprocess.Popen(['git', 'write-tree'], cwd=module_dir, env=env, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
- stdout, _ = proc.communicate()
- git_sha1 += stdout.decode("utf-8")
+ if os.path.exists(os.path.join(s_dir, ".gitmodules")) and os.path.getsize(os.path.join(s_dir, ".gitmodules")) > 0:
+ submodule_helper = subprocess.check_output(["git", "config", "--file", ".gitmodules", "--get-regexp", "path"], cwd=s_dir, env=env).decode("utf-8")
+ for line in submodule_helper.splitlines():
+ module_dir = os.path.join(s_dir, line.rsplit(maxsplit=1)[1])
+ if os.path.isdir(module_dir):
+ proc = subprocess.Popen(['git', 'add', '-A', '.'], cwd=module_dir, env=env, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
+ proc.communicate()
+ proc = subprocess.Popen(['git', 'write-tree'], cwd=module_dir, env=env, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
+ stdout, _ = proc.communicate()
+ git_sha1 += stdout.decode("utf-8")
sha1 = hashlib.sha1(git_sha1.encode("utf-8")).hexdigest()
with open(oe_hash_file, 'w') as fobj:
fobj.write(sha1)
diff --git a/poky/meta/classes/fs-uuid.bbclass b/poky/meta/classes/fs-uuid.bbclass
index 9b53dfba7a..731ea575bd 100644
--- a/poky/meta/classes/fs-uuid.bbclass
+++ b/poky/meta/classes/fs-uuid.bbclass
@@ -4,7 +4,7 @@
def get_rootfs_uuid(d):
import subprocess
rootfs = d.getVar('ROOTFS')
- output = subprocess.check_output(['tune2fs', '-l', rootfs])
+ output = subprocess.check_output(['tune2fs', '-l', rootfs], text=True)
for line in output.split('\n'):
if line.startswith('Filesystem UUID:'):
uuid = line.split()[-1]
diff --git a/poky/meta/classes/image.bbclass b/poky/meta/classes/image.bbclass
index 0d77d2f676..fbf7206d04 100644
--- a/poky/meta/classes/image.bbclass
+++ b/poky/meta/classes/image.bbclass
@@ -311,7 +311,7 @@ fakeroot python do_image_qa () {
except oe.utils.ImageQAFailed as e:
qamsg = qamsg + '\tImage QA function %s failed: %s\n' % (e.name, e.description)
except Exception as e:
- qamsg = qamsg + '\tImage QA function %s failed\n' % cmd
+ qamsg = qamsg + '\tImage QA function %s failed: %s\n' % (cmd, e)
if qamsg:
imgname = d.getVar('IMAGE_NAME')
@@ -437,7 +437,7 @@ python () {
localdata.delVar('DATETIME')
localdata.delVar('DATE')
localdata.delVar('TMPDIR')
- vardepsexclude = (d.getVarFlag('IMAGE_CMD_' + realt, 'vardepsexclude', True) or '').split()
+ vardepsexclude = (d.getVarFlag('IMAGE_CMD_' + realt, 'vardepsexclude') or '').split()
for dep in vardepsexclude:
localdata.delVar(dep)
diff --git a/poky/meta/classes/kernel-arch.bbclass b/poky/meta/classes/kernel-arch.bbclass
index 348a3adf22..4cd08b96fb 100644
--- a/poky/meta/classes/kernel-arch.bbclass
+++ b/poky/meta/classes/kernel-arch.bbclass
@@ -64,5 +64,5 @@ HOST_AR_KERNEL_ARCH ?= "${TARGET_AR_KERNEL_ARCH}"
KERNEL_CC = "${CCACHE}${HOST_PREFIX}gcc ${HOST_CC_KERNEL_ARCH} -fuse-ld=bfd ${DEBUG_PREFIX_MAP} -fdebug-prefix-map=${STAGING_KERNEL_DIR}=${KERNEL_SRC_PATH} -fdebug-prefix-map=${STAGING_KERNEL_BUILDDIR}=${KERNEL_SRC_PATH}"
KERNEL_LD = "${CCACHE}${HOST_PREFIX}ld.bfd ${HOST_LD_KERNEL_ARCH}"
KERNEL_AR = "${CCACHE}${HOST_PREFIX}ar ${HOST_AR_KERNEL_ARCH}"
-TOOLCHAIN = "gcc"
+TOOLCHAIN ?= "gcc"
diff --git a/poky/meta/classes/kernel-fitimage.bbclass b/poky/meta/classes/kernel-fitimage.bbclass
index 7c0d93625b..e0dd215167 100644
--- a/poky/meta/classes/kernel-fitimage.bbclass
+++ b/poky/meta/classes/kernel-fitimage.bbclass
@@ -59,6 +59,9 @@ FIT_SIGN_ALG ?= "rsa2048"
# fitImage Padding Algo
FIT_PAD_ALG ?= "pkcs-1.5"
+# Arguments passed to mkimage for signing
+UBOOT_MKIMAGE_SIGN_ARGS ?= ""
+
#
# Emit the fitImage ITS header
#
@@ -479,7 +482,8 @@ fitimage_assemble() {
${@'-D "${UBOOT_MKIMAGE_DTCOPTS}"' if len('${UBOOT_MKIMAGE_DTCOPTS}') else ''} \
-F -k "${UBOOT_SIGN_KEYDIR}" \
$add_key_to_u_boot \
- -r arch/${ARCH}/boot/${2}
+ -r arch/${ARCH}/boot/${2} \
+ ${UBOOT_MKIMAGE_SIGN_ARGS}
fi
}
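A hedged sketch of the new hook added above: extra signing options can be passed straight through to mkimage, with the exact string depending on the local key and mkimage setup (the comment flag below is purely illustrative):
UBOOT_MKIMAGE_SIGN_ARGS = "-c 'FIT image signed during the build'"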
diff --git a/poky/meta/classes/kernel-yocto.bbclass b/poky/meta/classes/kernel-yocto.bbclass
index 2a6231803b..2abbc2ff66 100644
--- a/poky/meta/classes/kernel-yocto.bbclass
+++ b/poky/meta/classes/kernel-yocto.bbclass
@@ -194,7 +194,7 @@ do_kernel_metadata() {
# SRC_URI. If they were supplied, we convert them into include directives
# for the update part of the process
for f in ${feat_dirs}; do
- if [ -d "${WORKDIR}/$f/meta" ]; then
+ if [ -d "${WORKDIR}/$f/kernel-meta" ]; then
includes="$includes -I${WORKDIR}/$f/kernel-meta"
elif [ -d "${WORKDIR}/../oe-local-files/$f" ]; then
includes="$includes -I${WORKDIR}/../oe-local-files/$f"
diff --git a/poky/meta/classes/kernel.bbclass b/poky/meta/classes/kernel.bbclass
index 2a3cb21fc0..c6310d8de7 100644
--- a/poky/meta/classes/kernel.bbclass
+++ b/poky/meta/classes/kernel.bbclass
@@ -75,7 +75,7 @@ python __anonymous () {
# KERNEL_IMAGETYPES may contain a mixture of image types supported directly
# by the kernel build system and types which are created by post-processing
# the output of the kernel build system (e.g. compressing vmlinux ->
- # vmlinux.gz in kernel_do_compile()).
+ # vmlinux.gz in kernel_do_transform_kernel()).
# KERNEL_IMAGETYPE_FOR_MAKE should contain only image types supported
# directly by the kernel build system.
if not d.getVar('KERNEL_IMAGETYPE_FOR_MAKE'):
@@ -106,6 +106,8 @@ python __anonymous () {
# standalone for use by wic and other tools.
if image:
d.appendVarFlag('do_bundle_initramfs', 'depends', ' ${INITRAMFS_IMAGE}:do_image_complete')
+ if image and bb.utils.to_boolean(d.getVar('INITRAMFS_IMAGE_BUNDLE')):
+ bb.build.addtask('do_transform_bundled_initramfs', 'do_deploy', 'do_bundle_initramfs', d)
# NOTE: setting INITRAMFS_TASK is for backward compatibility
# The preferred method is to set INITRAMFS_IMAGE, because
@@ -280,6 +282,14 @@ do_bundle_initramfs () {
}
do_bundle_initramfs[dirs] = "${B}"
+kernel_do_transform_bundled_initramfs() {
+ # vmlinux.gz is not built by kernel
+ if (echo "${KERNEL_IMAGETYPES}" | grep -wq "vmlinux\.gz"); then
+ gzip -9cn < ${KERNEL_OUTPUT_DIR}/vmlinux.initramfs > ${KERNEL_OUTPUT_DIR}/vmlinux.gz.initramfs
+ fi
+}
+do_transform_bundled_initramfs[dirs] = "${B}"
+
python do_devshell_prepend () {
os.environ["LDFLAGS"] = ''
}
@@ -311,6 +321,10 @@ kernel_do_compile() {
export KBUILD_BUILD_TIMESTAMP="$ts"
export KCONFIG_NOTIMESTAMP=1
bbnote "KBUILD_BUILD_TIMESTAMP: $ts"
+ else
+ ts=`LC_ALL=C date`
+ export KBUILD_BUILD_TIMESTAMP="$ts"
+ bbnote "KBUILD_BUILD_TIMESTAMP: $ts"
fi
# The $use_alternate_initrd is only set from
# do_bundle_initramfs(). This variable is specifically for the
@@ -329,12 +343,17 @@ kernel_do_compile() {
for typeformake in ${KERNEL_IMAGETYPE_FOR_MAKE} ; do
oe_runmake ${typeformake} CC="${KERNEL_CC} $cc_extra " LD="${KERNEL_LD}" ${KERNEL_EXTRA_ARGS} $use_alternate_initrd
done
+}
+
+kernel_do_transform_kernel() {
# vmlinux.gz is not built by kernel
if (echo "${KERNEL_IMAGETYPES}" | grep -wq "vmlinux\.gz"); then
mkdir -p "${KERNEL_OUTPUT_DIR}"
gzip -9cn < ${B}/vmlinux > "${KERNEL_OUTPUT_DIR}/vmlinux.gz"
fi
}
+do_transform_kernel[dirs] = "${B}"
+addtask transform_kernel after do_compile before do_install
do_compile_kernelmodules() {
unset CFLAGS CPPFLAGS CXXFLAGS LDFLAGS MACHINE
@@ -352,6 +371,10 @@ do_compile_kernelmodules() {
export KBUILD_BUILD_TIMESTAMP="$ts"
export KCONFIG_NOTIMESTAMP=1
bbnote "KBUILD_BUILD_TIMESTAMP: $ts"
+ else
+ ts=`LC_ALL=C date`
+ export KBUILD_BUILD_TIMESTAMP="$ts"
+ bbnote "KBUILD_BUILD_TIMESTAMP: $ts"
fi
if (grep -q -i -e '^CONFIG_MODULES=y$' ${B}/.config); then
cc_extra=$(get_cc_option)
@@ -572,11 +595,11 @@ do_savedefconfig() {
do_savedefconfig[nostamp] = "1"
addtask savedefconfig after do_configure
-inherit cml1
+inherit cml1 pkgconfig
KCONFIG_CONFIG_COMMAND_append = " LD='${KERNEL_LD}' HOSTLDFLAGS='${BUILD_LDFLAGS}'"
-EXPORT_FUNCTIONS do_compile do_install do_configure
+EXPORT_FUNCTIONS do_compile do_transform_kernel do_transform_bundled_initramfs do_install do_configure
# kernel-base becomes kernel-${KERNEL_VERSION}
# kernel-image becomes kernel-image-${KERNEL_VERSION}
@@ -721,7 +744,7 @@ kernel_do_deploy() {
fi
if [ ! -z "${INITRAMFS_IMAGE}" -a x"${INITRAMFS_IMAGE_BUNDLE}" = x1 ]; then
- for imageType in ${KERNEL_IMAGETYPE_FOR_MAKE} ; do
+ for imageType in ${KERNEL_IMAGETYPES} ; do
if [ "$imageType" = "fitImage" ] ; then
continue
fi
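A local.conf sketch that exercises the bundled-initramfs path introduced above (the initramfs recipe name is only an example; vmlinux.gz triggers the new transform tasks):
INITRAMFS_IMAGE = "core-image-minimal-initramfs"
INITRAMFS_IMAGE_BUNDLE = "1"
KERNEL_IMAGETYPES += "vmlinux.gz"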
diff --git a/poky/meta/classes/libc-package.bbclass b/poky/meta/classes/libc-package.bbclass
index 1143f538d6..72f489d673 100644
--- a/poky/meta/classes/libc-package.bbclass
+++ b/poky/meta/classes/libc-package.bbclass
@@ -45,6 +45,7 @@ PACKAGE_NO_GCONV ?= "0"
OVERRIDES_append = ":${TARGET_ARCH}-${TARGET_OS}"
locale_base_postinst_ontarget() {
+mkdir ${libdir}/locale
localedef --inputfile=${datadir}/i18n/locales/%s --charmap=%s %s
}
diff --git a/poky/meta/classes/license_image.bbclass b/poky/meta/classes/license_image.bbclass
index 9f3a0c3727..325b3cbba7 100644
--- a/poky/meta/classes/license_image.bbclass
+++ b/poky/meta/classes/license_image.bbclass
@@ -211,7 +211,7 @@ def get_deployed_dependencies(d):
deploy = {}
# Get all the dependencies for the current task (rootfs).
taskdata = d.getVar("BB_TASKDEPDATA", False)
- pn = d.getVar("PN", True)
+ pn = d.getVar("PN")
depends = list(set([dep[0] for dep
in list(taskdata.values())
if not dep[0].endswith("-native") and not dep[0] == pn]))
diff --git a/poky/meta/classes/multilib.bbclass b/poky/meta/classes/multilib.bbclass
index 9a8b02d4f6..b5c59ac593 100644
--- a/poky/meta/classes/multilib.bbclass
+++ b/poky/meta/classes/multilib.bbclass
@@ -45,6 +45,7 @@ python multilib_virtclass_handler () {
e.data.setVar("RECIPE_SYSROOT", "${WORKDIR}/recipe-sysroot")
e.data.setVar("STAGING_DIR_TARGET", "${WORKDIR}/recipe-sysroot")
e.data.setVar("STAGING_DIR_HOST", "${WORKDIR}/recipe-sysroot")
+ e.data.setVar("RECIPE_SYSROOT_MANIFEST_SUBDIR", "nativesdk-" + variant)
e.data.setVar("MLPREFIX", variant + "-")
override = ":virtclass-multilib-" + variant
e.data.setVar("OVERRIDES", e.data.getVar("OVERRIDES", False) + override)
diff --git a/poky/meta/classes/nativesdk.bbclass b/poky/meta/classes/nativesdk.bbclass
index 7f2692c51a..dc5a9756b6 100644
--- a/poky/meta/classes/nativesdk.bbclass
+++ b/poky/meta/classes/nativesdk.bbclass
@@ -113,3 +113,5 @@ do_packagedata[stamp-extra-info] = ""
USE_NLS = "${SDKUSE_NLS}"
OLDEST_KERNEL = "${SDK_OLDEST_KERNEL}"
+
+PATH_prepend = "${COREBASE}/scripts/nativesdk-intercept:"
diff --git a/poky/meta/classes/package.bbclass b/poky/meta/classes/package.bbclass
index 702427fecc..49d30caef7 100644
--- a/poky/meta/classes/package.bbclass
+++ b/poky/meta/classes/package.bbclass
@@ -1140,6 +1140,14 @@ python split_and_strip_files () {
# Modified the file so clear the cache
cpath.updatecache(file)
+ def strip_pkgd_prefix(f):
+ nonlocal dvar
+
+ if f.startswith(dvar):
+ return f[len(dvar):]
+
+ return f
+
#
# First lets process debug splitting
#
@@ -1153,6 +1161,8 @@ python split_and_strip_files () {
for file in staticlibs:
results.append( (file,source_info(file, d)) )
+ d.setVar("PKGDEBUGSOURCES", {strip_pkgd_prefix(f): sorted(s) for f, s in results})
+
sources = set()
for r in results:
sources.update(r[1])
@@ -1460,6 +1470,7 @@ PKGDATA_VARS = "PN PE PV PR PKGE PKGV PKGR LICENSE DESCRIPTION SUMMARY RDEPENDS
python emit_pkgdata() {
from glob import glob
import json
+ import gzip
def process_postinst_on_target(pkg, mlprefix):
pkgval = d.getVar('PKG_%s' % pkg)
@@ -1532,6 +1543,8 @@ fi
with open(data_file, 'w') as fd:
fd.write("PACKAGES: %s\n" % packages)
+ pkgdebugsource = d.getVar("PKGDEBUGSOURCES") or []
+
pn = d.getVar('PN')
global_variants = (d.getVar('MULTILIB_GLOBAL_VARIANTS') or "").split()
variants = (d.getVar('MULTILIB_VARIANTS') or "").split()
@@ -1551,17 +1564,32 @@ fi
pkgval = pkg
d.setVar('PKG_%s' % pkg, pkg)
+ extended_data = {
+ "files_info": {}
+ }
+
pkgdestpkg = os.path.join(pkgdest, pkg)
files = {}
+ files_extra = {}
total_size = 0
seen = set()
for f in pkgfiles[pkg]:
- relpth = os.path.relpath(f, pkgdestpkg)
+ fpath = os.sep + os.path.relpath(f, pkgdestpkg)
+
fstat = os.lstat(f)
- files[os.sep + relpth] = fstat.st_size
+ files[fpath] = fstat.st_size
+
+ extended_data["files_info"].setdefault(fpath, {})
+ extended_data["files_info"][fpath]['size'] = fstat.st_size
+
if fstat.st_ino not in seen:
seen.add(fstat.st_ino)
total_size += fstat.st_size
+
+ if fpath in pkgdebugsource:
+ extended_data["files_info"][fpath]['debugsrc'] = pkgdebugsource[fpath]
+ del pkgdebugsource[fpath]
+
d.setVar('FILES_INFO', json.dumps(files, sort_keys=True))
process_postinst_on_target(pkg, d.getVar("MLPREFIX"))
@@ -1582,6 +1610,10 @@ fi
sf.write('%s_%s: %d\n' % ('PKGSIZE', pkg, total_size))
+ subdata_extended_file = pkgdatadir + "/extended/%s.json.gz" % pkg
+ with gzip.open(subdata_extended_file, "wt", encoding="utf-8") as f:
+ json.dump(extended_data, f, sort_keys=True, separators=(",", ":"))
+
# Symlinks needed for rprovides lookup
rprov = d.getVar('RPROVIDES_%s' % pkg) or d.getVar('RPROVIDES')
if rprov:
@@ -1612,7 +1644,8 @@ fi
write_extra_runtime_pkgs(global_variants, packages, pkgdatadir)
}
-emit_pkgdata[dirs] = "${PKGDESTWORK}/runtime ${PKGDESTWORK}/runtime-reverse ${PKGDESTWORK}/runtime-rprovides"
+emit_pkgdata[dirs] = "${PKGDESTWORK}/runtime ${PKGDESTWORK}/runtime-reverse ${PKGDESTWORK}/runtime-rprovides ${PKGDESTWORK}/extended"
+emit_pkgdata[vardepsexclude] = "BB_NUMBER_THREADS"
ldconfig_postinst_fragment() {
if [ x"$D" = "x" ]; then
diff --git a/poky/meta/classes/populate_sdk_base.bbclass b/poky/meta/classes/populate_sdk_base.bbclass
index 396792f0f7..49fdfaa93d 100644
--- a/poky/meta/classes/populate_sdk_base.bbclass
+++ b/poky/meta/classes/populate_sdk_base.bbclass
@@ -51,6 +51,8 @@ TOOLCHAIN_OUTPUTNAME ?= "${SDK_NAME}-toolchain-${SDK_VERSION}"
SDK_ARCHIVE_TYPE ?= "tar.xz"
SDK_XZ_COMPRESSION_LEVEL ?= "-9"
SDK_XZ_OPTIONS ?= "${XZ_DEFAULTS} ${SDK_XZ_COMPRESSION_LEVEL}"
+SDK_ZIP_OPTIONS ?= "-y"
+
# Support different SDK archive types according to SDK_ARCHIVE_TYPE; currently zip and tar.xz are supported
python () {
@@ -58,7 +60,7 @@ python () {
d.setVar('SDK_ARCHIVE_DEPENDS', 'zip-native')
# SDK_ARCHIVE_CMD is used to generate the archived sdk ${TOOLCHAIN_OUTPUTNAME}.${SDK_ARCHIVE_TYPE} from the input dir ${SDK_OUTPUT}/${SDKPATH} into the output dir ${SDKDEPLOYDIR}
# it is recommended to cd into the input dir first to avoid archiving the buildpath
- d.setVar('SDK_ARCHIVE_CMD', 'cd ${SDK_OUTPUT}/${SDKPATH}; zip -r -y ${SDKDEPLOYDIR}/${TOOLCHAIN_OUTPUTNAME}.${SDK_ARCHIVE_TYPE} .')
+ d.setVar('SDK_ARCHIVE_CMD', 'cd ${SDK_OUTPUT}/${SDKPATH}; zip -r ${SDK_ZIP_OPTIONS} ${SDKDEPLOYDIR}/${TOOLCHAIN_OUTPUTNAME}.${SDK_ARCHIVE_TYPE} .')
else:
d.setVar('SDK_ARCHIVE_DEPENDS', 'xz-native')
d.setVar('SDK_ARCHIVE_CMD', 'cd ${SDK_OUTPUT}/${SDKPATH}; tar ${SDKTAROPTS} -cf - . | xz ${SDK_XZ_OPTIONS} > ${SDKDEPLOYDIR}/${TOOLCHAIN_OUTPUTNAME}.${SDK_ARCHIVE_TYPE}')
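A sketch of how the new SDK_ZIP_OPTIONS hook can be used; the extra zip flags here are only an example (-y keeps symlinks, -9 selects maximum compression):
SDK_ARCHIVE_TYPE = "zip"
SDK_ZIP_OPTIONS = "-y -9"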
diff --git a/poky/meta/classes/populate_sdk_ext.bbclass b/poky/meta/classes/populate_sdk_ext.bbclass
index aa00d5397c..a43ff3fb32 100644
--- a/poky/meta/classes/populate_sdk_ext.bbclass
+++ b/poky/meta/classes/populate_sdk_ext.bbclass
@@ -117,7 +117,7 @@ python write_host_sdk_ext_manifest () {
f.write("%s %s %s\n" % (info[1], info[2], info[3]))
}
-SDK_POSTPROCESS_COMMAND_append_task-populate-sdk-ext = "write_target_sdk_ext_manifest; write_host_sdk_ext_manifest; "
+SDK_POSTPROCESS_COMMAND_append_task-populate-sdk-ext = " write_target_sdk_ext_manifest; write_host_sdk_ext_manifest; "
SDK_TITLE_task-populate-sdk-ext = "${@d.getVar('DISTRO_NAME') or d.getVar('DISTRO')} Extensible SDK"
@@ -669,7 +669,7 @@ sdk_ext_postinst() {
# A bit of another hack, but we need this in the path only for devtool
# so put it at the end of $PATH.
- echo "export PATH=$target_sdk_dir/sysroots/${SDK_SYS}${bindir_nativesdk}:\$PATH" >> $env_setup_script
+ echo "export PATH=\"$target_sdk_dir/sysroots/${SDK_SYS}${bindir_nativesdk}:\$PATH\"" >> $env_setup_script
echo "printf 'SDK environment now set up; additionally you may now run devtool to perform development tasks.\nRun devtool --help for further details.\n'" >> $env_setup_script
diff --git a/poky/meta/classes/qemuboot.bbclass b/poky/meta/classes/qemuboot.bbclass
index 648af09b6e..92ae69d9f2 100644
--- a/poky/meta/classes/qemuboot.bbclass
+++ b/poky/meta/classes/qemuboot.bbclass
@@ -7,6 +7,7 @@
# QB_OPT_APPEND: options to append to qemu, e.g., "-show-cursor"
#
# QB_DEFAULT_KERNEL: default kernel to boot, e.g., "bzImage"
+# e.g., "bzImage-initramfs-qemux86-64.bin" if INITRAMFS_IMAGE_BUNDLE is set to 1.
#
# QB_DEFAULT_FSTYPE: default FSTYPE to boot, e.g., "ext4"
#
@@ -75,7 +76,7 @@
QB_MEM ?= "-m 256"
QB_SERIAL_OPT ?= "-serial mon:stdio -serial null"
-QB_DEFAULT_KERNEL ?= "${KERNEL_IMAGETYPE}"
+QB_DEFAULT_KERNEL ?= "${@bb.utils.contains("INITRAMFS_IMAGE_BUNDLE", "1", "${KERNEL_IMAGETYPE}-${INITRAMFS_LINK_NAME}.bin", "${KERNEL_IMAGETYPE}", d)}"
QB_DEFAULT_FSTYPE ?= "ext4"
QB_OPT_APPEND ?= "-show-cursor"
QB_NETWORK_DEVICE ?= "-device virtio-net-pci,netdev=net0,mac=@MAC@"
diff --git a/poky/meta/classes/rm_work.bbclass b/poky/meta/classes/rm_work.bbclass
index 2d5a56c238..24051aa378 100644
--- a/poky/meta/classes/rm_work.bbclass
+++ b/poky/meta/classes/rm_work.bbclass
@@ -27,6 +27,13 @@ BB_SCHEDULER ?= "completion"
BB_TASK_IONICE_LEVEL_task-rm_work = "3.0"
do_rm_work () {
+ # Force using the HOSTTOOLS 'rm' - otherwise the SYSROOT_NATIVE 'rm' can be selected depending on PATH
# Avoids a race condition when accessing 'rm' while deleting WORKDIR folders at the end of this function
+ RM_BIN="$(PATH=${HOSTTOOLS_DIR} command -v rm)"
+ if [ -z "${RM_BIN}" ]; then
+ bbfatal "Binary 'rm' not found in HOSTTOOLS_DIR, cannot remove WORKDIR data."
+ fi
+
# If the recipe name is in the RM_WORK_EXCLUDE, skip the recipe.
for p in ${RM_WORK_EXCLUDE}; do
if [ "$p" = "${PN}" ]; then
@@ -73,7 +80,7 @@ do_rm_work () {
# sstate version since otherwise we'd need to leave 'plaindirs' around
# such as 'packages' and 'packages-split' and these can be large. No end
# of chain tasks depend directly on do_package anymore.
- rm -f -- $i;
+ "${RM_BIN}" -f -- $i;
;;
*_setscene*)
# Skip stamps which are already setscene versions
@@ -90,7 +97,7 @@ do_rm_work () {
;;
esac
done
- rm -f -- $i
+ "${RM_BIN}" -f -- $i
esac
done
@@ -100,9 +107,9 @@ do_rm_work () {
# Retain only logs and other files in temp, safely ignore
# failures when removing pseudo folders on NFS2/3 servers.
if [ $dir = 'pseudo' ]; then
- rm -rf -- $dir 2> /dev/null || true
+ "${RM_BIN}" -rf -- $dir 2> /dev/null || true
elif ! echo "$excludes" | grep -q -w "$dir"; then
- rm -rf -- $dir
+ "${RM_BIN}" -rf -- $dir
fi
done
}
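For reference, a recipe can be exempted from this cleanup entirely via RM_WORK_EXCLUDE, as already checked at the top of do_rm_work (the recipe name below is illustrative):
RM_WORK_EXCLUDE += "linux-yocto"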
diff --git a/poky/meta/classes/sanity.bbclass b/poky/meta/classes/sanity.bbclass
index 37354af9d5..33e5e5952f 100644
--- a/poky/meta/classes/sanity.bbclass
+++ b/poky/meta/classes/sanity.bbclass
@@ -561,6 +561,14 @@ def check_tar_version(sanity_data):
version = result.split()[3]
if LooseVersion(version) < LooseVersion("1.28"):
return "Your version of tar is older than 1.28 and does not have the support needed to enable reproducible builds. Please install a newer version of tar (you could use the project's buildtools-tarball from our last release or use scripts/install-buildtools).\n"
+
+ try:
+ result = subprocess.check_output(["tar", "--help"], stderr=subprocess.STDOUT).decode('utf-8')
+ if "--xattrs" not in result:
+ return "Your tar doesn't support --xattrs, please use GNU tar.\n"
+ except subprocess.CalledProcessError as e:
+ return "Unable to execute tar --help, exit code %d\n%s\n" % (e.returncode, e.output)
+
return None
# We use git parameters and functionality only found in 1.7.8 or later
diff --git a/poky/meta/classes/sstate.bbclass b/poky/meta/classes/sstate.bbclass
index 3d6fb84d63..1058778980 100644
--- a/poky/meta/classes/sstate.bbclass
+++ b/poky/meta/classes/sstate.bbclass
@@ -20,7 +20,7 @@ def generate_sstatefn(spec, hash, taskname, siginfo, d):
components = spec.split(":")
# Fields 0,5,6 are mandatory, 1 is most useful, 2,3,4 are just for information
# 7 is for the separators
- avail = (254 - len(hash + "_" + taskname + extension) - len(components[0]) - len(components[1]) - len(components[5]) - len(components[6]) - 7) // 3
+ avail = (limit - len(hash + "_" + taskname + extension) - len(components[0]) - len(components[1]) - len(components[5]) - len(components[6]) - 7) // 3
components[2] = components[2][:avail]
components[3] = components[3][:avail]
components[4] = components[4][:avail]
diff --git a/poky/meta/classes/staging.bbclass b/poky/meta/classes/staging.bbclass
index 78eb914921..21523c8f75 100644
--- a/poky/meta/classes/staging.bbclass
+++ b/poky/meta/classes/staging.bbclass
@@ -267,6 +267,10 @@ python extend_recipe_sysroot() {
pn = d.getVar("PN")
stagingdir = d.getVar("STAGING_DIR")
sharedmanifests = d.getVar("COMPONENTS_DIR") + "/manifests"
+ # only needed by multilib cross-canadian since it redefines RECIPE_SYSROOT
+ manifestprefix = d.getVar("RECIPE_SYSROOT_MANIFEST_SUBDIR")
+ if manifestprefix:
+ sharedmanifests = sharedmanifests + "/" + manifestprefix
recipesysroot = d.getVar("RECIPE_SYSROOT")
recipesysrootnative = d.getVar("RECIPE_SYSROOT_NATIVE")
diff --git a/poky/meta/classes/toolchain-scripts.bbclass b/poky/meta/classes/toolchain-scripts.bbclass
index db1d3215ef..21762b803b 100644
--- a/poky/meta/classes/toolchain-scripts.bbclass
+++ b/poky/meta/classes/toolchain-scripts.bbclass
@@ -29,7 +29,7 @@ toolchain_create_sdk_env_script () {
echo '# http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html#AEN80' >> $script
echo '# http://xahlee.info/UnixResource_dir/_/ldpath.html' >> $script
echo '# Only disable this check if you absolutely know what you are doing!' >> $script
- echo 'if [ ! -z "$LD_LIBRARY_PATH" ]; then' >> $script
+ echo 'if [ ! -z "${LD_LIBRARY_PATH:-}" ]; then' >> $script
echo " echo \"Your environment is misconfigured, you probably need to 'unset LD_LIBRARY_PATH'\"" >> $script
echo " echo \"but please check why this was set in the first place and that it's safe to unset.\"" >> $script
echo ' echo "The SDK will not operate correctly in most cases when LD_LIBRARY_PATH is set."' >> $script
@@ -44,7 +44,7 @@ toolchain_create_sdk_env_script () {
for i in ${CANADIANEXTRAOS}; do
EXTRAPATH="$EXTRAPATH:$sdkpathnative$bindir/${TARGET_ARCH}${TARGET_VENDOR}-$i"
done
- echo "export PATH=$sdkpathnative$bindir:$sdkpathnative$sbindir:$sdkpathnative$base_bindir:$sdkpathnative$base_sbindir:$sdkpathnative$bindir/../${HOST_SYS}/bin:$sdkpathnative$bindir/${TARGET_SYS}"$EXTRAPATH':$PATH' >> $script
+ echo "export PATH=$sdkpathnative$bindir:$sdkpathnative$sbindir:$sdkpathnative$base_bindir:$sdkpathnative$base_sbindir:$sdkpathnative$bindir/../${HOST_SYS}/bin:$sdkpathnative$bindir/${TARGET_SYS}"$EXTRAPATH':"$PATH"' >> $script
echo 'export PKG_CONFIG_SYSROOT_DIR=$SDKTARGETSYSROOT' >> $script
echo 'export PKG_CONFIG_PATH=$SDKTARGETSYSROOT'"$libdir"'/pkgconfig:$SDKTARGETSYSROOT'"$prefix"'/share/pkgconfig' >> $script
echo 'export CONFIG_SITE=${SDKPATH}/site-config-'"${multimach_target_sys}" >> $script
diff --git a/poky/meta/conf/distro/include/maintainers.inc b/poky/meta/conf/distro/include/maintainers.inc
index 1575fce8c7..11a35a2c59 100644
--- a/poky/meta/conf/distro/include/maintainers.inc
+++ b/poky/meta/conf/distro/include/maintainers.inc
@@ -194,7 +194,7 @@ RECIPE_MAINTAINER_pn-gcc-cross-canadian-${TRANSLATED_TARGET_ARCH} = "Khem Raj <r
RECIPE_MAINTAINER_pn-gcc-crosssdk-${SDK_SYS} = "Khem Raj <raj.khem@gmail.com>"
RECIPE_MAINTAINER_pn-gcc-runtime = "Khem Raj <raj.khem@gmail.com>"
RECIPE_MAINTAINER_pn-gcc-sanitizers = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-gcc-source-9.3.0 = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-gcc-source-9.5.0 = "Khem Raj <raj.khem@gmail.com>"
RECIPE_MAINTAINER_pn-gconf = "Ross Burton <ross.burton@arm.com>"
RECIPE_MAINTAINER_pn-gcr = "Alexander Kanavin <alex.kanavin@gmail.com>"
RECIPE_MAINTAINER_pn-gdb = "Khem Raj <raj.khem@gmail.com>"
diff --git a/poky/meta/conf/distro/include/yocto-uninative.inc b/poky/meta/conf/distro/include/yocto-uninative.inc
index 411fe45a24..7012db441b 100644
--- a/poky/meta/conf/distro/include/yocto-uninative.inc
+++ b/poky/meta/conf/distro/include/yocto-uninative.inc
@@ -6,10 +6,10 @@
# to the distro running on the build machine.
#
-UNINATIVE_MAXGLIBCVERSION = "2.35"
-UNINATIVE_VERSION = "3.6"
+UNINATIVE_MAXGLIBCVERSION = "2.36"
+UNINATIVE_VERSION = "3.7"
UNINATIVE_URL ?= "http://downloads.yoctoproject.org/releases/uninative/${UNINATIVE_VERSION}/"
-UNINATIVE_CHECKSUM[aarch64] ?= "d64831cf2792c8e470c2e42230660e1a8e5de56a579cdd59978791f663c2f3ed"
-UNINATIVE_CHECKSUM[i686] ?= "2f0ee9b66b1bb2c85e2b592fb3c9c7f5d77399fa638d74961330cdb8de34ca3b"
-UNINATIVE_CHECKSUM[x86_64] ?= "9bfc4c970495b3716b2f9e52c4df9f968c02463a9a95000f6657fbc3fde1f098"
+UNINATIVE_CHECKSUM[aarch64] ?= "6a29bcae4b5b716d2d520e18800b33943b65f8a835eac1ff8793fc5ee65b4be6"
+UNINATIVE_CHECKSUM[i686] ?= "3f6d52e64996570c716108d49f8108baccf499a283bbefae438c7266b7a93305"
+UNINATIVE_CHECKSUM[x86_64] ?= "b110bf2e10fe420f5ca2f3ec55f048ee5f0a54c7e34856a3594e51eb2aea0570"
diff --git a/poky/meta/conf/licenses.conf b/poky/meta/conf/licenses.conf
index 0149b1dc44..d14c365977 100644
--- a/poky/meta/conf/licenses.conf
+++ b/poky/meta/conf/licenses.conf
@@ -22,21 +22,28 @@ SPDXLICENSEMAP[GPLv1.0] = "GPL-1.0"
SPDXLICENSEMAP[GPL-1.0-only] = "GPL-1.0"
SPDXLICENSEMAP[GPL-2] = "GPL-2.0"
SPDXLICENSEMAP[GPLv2] = "GPL-2.0"
+SPDXLICENSEMAP[GPLv2+] = "GPL-2.0+"
SPDXLICENSEMAP[GPLv2.0] = "GPL-2.0"
+SPDXLICENSEMAP[GPLv2.0+] = "GPL-2.0+"
SPDXLICENSEMAP[GPL-2.0-only] = "GPL-2.0"
SPDXLICENSEMAP[GPL-3] = "GPL-3.0"
SPDXLICENSEMAP[GPLv3] = "GPL-3.0"
+SPDXLICENSEMAP[GPLv3+] = "GPL-3.0+"
SPDXLICENSEMAP[GPLv3.0] = "GPL-3.0"
+SPDXLICENSEMAP[GPLv3.0+] = "GPL-3.0+"
SPDXLICENSEMAP[GPL-3.0-only] = "GPL-3.0"
#LGPL variations
SPDXLICENSEMAP[LGPLv2] = "LGPL-2.0"
+SPDXLICENSEMAP[LGPLv2+] = "LGPL-2.0+"
SPDXLICENSEMAP[LGPLv2.0] = "LGPL-2.0"
SPDXLICENSEMAP[LGPL-2.0-only] = "LGPL-2.0"
SPDXLICENSEMAP[LGPL2.1] = "LGPL-2.1"
SPDXLICENSEMAP[LGPLv2.1] = "LGPL-2.1"
+SPDXLICENSEMAP[LGPLv2.1+] = "LGPL-2.1+"
SPDXLICENSEMAP[LGPL-2.1-only] = "LGPL-2.1"
SPDXLICENSEMAP[LGPLv3] = "LGPL-3.0"
+SPDXLICENSEMAP[LGPLv3+] = "LGPL-3.0+"
SPDXLICENSEMAP[LGPL-3.0-only] = "LGPL-3.0"
#MPL variations
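The added mappings mean recipes that still use the legacy "+"-suffixed license names resolve to the corresponding SPDX identifiers; a hypothetical recipe fragment:
LICENSE = "GPLv2+"
# resolved via SPDXLICENSEMAP[GPLv2+] to "GPL-2.0+" by SPDX-aware tooling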
diff --git a/poky/meta/files/spdx-licenses.json b/poky/meta/files/spdx-licenses.json
new file mode 100644
index 0000000000..ef926164ec
--- /dev/null
+++ b/poky/meta/files/spdx-licenses.json
@@ -0,0 +1,5937 @@
+{
+ "licenseListVersion": "3.14",
+ "licenses": [
+ {
+ "reference": "https://spdx.org/licenses/GPL-1.0.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/GPL-1.0.json",
+ "referenceNumber": 0,
+ "name": "GNU General Public License v1.0 only",
+ "licenseId": "GPL-1.0",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/gpl-1.0-standalone.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/bzip2-1.0.6.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/bzip2-1.0.6.json",
+ "referenceNumber": 1,
+ "name": "bzip2 and libbzip2 License v1.0.6",
+ "licenseId": "bzip2-1.0.6",
+ "seeAlso": [
+ "https://sourceware.org/git/?p\u003dbzip2.git;a\u003dblob;f\u003dLICENSE;hb\u003dbzip2-1.0.6",
+ "http://bzip.org/1.0.5/bzip2-manual-1.0.5.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Intel-ACPI.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Intel-ACPI.json",
+ "referenceNumber": 2,
+ "name": "Intel ACPI Software License Agreement",
+ "licenseId": "Intel-ACPI",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Intel_ACPI_Software_License_Agreement"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/XSkat.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/XSkat.json",
+ "referenceNumber": 3,
+ "name": "XSkat License",
+ "licenseId": "XSkat",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/XSkat_License"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-NC-SA-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-NC-SA-2.0.json",
+ "referenceNumber": 4,
+ "name": "Creative Commons Attribution Non Commercial Share Alike 2.0 Generic",
+ "licenseId": "CC-BY-NC-SA-2.0",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-nc-sa/2.0/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Plexus.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Plexus.json",
+ "referenceNumber": 5,
+ "name": "Plexus Classworlds License",
+ "licenseId": "Plexus",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Plexus_Classworlds_License"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Giftware.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Giftware.json",
+ "referenceNumber": 6,
+ "name": "Giftware License",
+ "licenseId": "Giftware",
+ "seeAlso": [
+ "http://liballeg.org/license.html#allegro-4-the-giftware-license"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/BitTorrent-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/BitTorrent-1.0.json",
+ "referenceNumber": 7,
+ "name": "BitTorrent Open Source License v1.0",
+ "licenseId": "BitTorrent-1.0",
+ "seeAlso": [
+ "http://sources.gentoo.org/cgi-bin/viewvc.cgi/gentoo-x86/licenses/BitTorrent?r1\u003d1.1\u0026r2\u003d1.1.1.1\u0026diff_format\u003ds"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/APSL-1.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/APSL-1.1.json",
+ "referenceNumber": 8,
+ "name": "Apple Public Source License 1.1",
+ "licenseId": "APSL-1.1",
+ "seeAlso": [
+ "http://www.opensource.apple.com/source/IOSerialFamily/IOSerialFamily-7/APPLE_LICENSE"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/GPL-2.0-with-GCC-exception.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/GPL-2.0-with-GCC-exception.json",
+ "referenceNumber": 9,
+ "name": "GNU General Public License v2.0 w/GCC Runtime Library exception",
+ "licenseId": "GPL-2.0-with-GCC-exception",
+ "seeAlso": [
+ "https://gcc.gnu.org/git/?p\u003dgcc.git;a\u003dblob;f\u003dgcc/libgcc1.c;h\u003d762f5143fc6eed57b6797c82710f3538aa52b40b;hb\u003dcb143a3ce4fb417c68f5fa2691a1b1b1053dfba9#l10"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/UPL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/UPL-1.0.json",
+ "referenceNumber": 10,
+ "name": "Universal Permissive License v1.0",
+ "licenseId": "UPL-1.0",
+ "seeAlso": [
+ "https://opensource.org/licenses/UPL"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/wxWindows.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/wxWindows.json",
+ "referenceNumber": 11,
+ "name": "wxWindows Library License",
+ "licenseId": "wxWindows",
+ "seeAlso": [
+ "https://opensource.org/licenses/WXwindows"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Caldera.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Caldera.json",
+ "referenceNumber": 12,
+ "name": "Caldera License",
+ "licenseId": "Caldera",
+ "seeAlso": [
+ "http://www.lemis.com/grog/UNIX/ancient-source-all.pdf"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Zend-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Zend-2.0.json",
+ "referenceNumber": 13,
+ "name": "Zend License v2.0",
+ "licenseId": "Zend-2.0",
+ "seeAlso": [
+ "https://web.archive.org/web/20130517195954/http://www.zend.com/license/2_00.txt"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/CUA-OPL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CUA-OPL-1.0.json",
+ "referenceNumber": 14,
+ "name": "CUA Office Public License v1.0",
+ "licenseId": "CUA-OPL-1.0",
+ "seeAlso": [
+ "https://opensource.org/licenses/CUA-OPL-1.0"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/JPNIC.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/JPNIC.json",
+ "referenceNumber": 15,
+ "name": "Japan Network Information Center License",
+ "licenseId": "JPNIC",
+ "seeAlso": [
+ "https://gitlab.isc.org/isc-projects/bind9/blob/master/COPYRIGHT#L366"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/SAX-PD.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/SAX-PD.json",
+ "referenceNumber": 16,
+ "name": "Sax Public Domain Notice",
+ "licenseId": "SAX-PD",
+ "seeAlso": [
+ "http://www.saxproject.org/copying.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-ND-2.5.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-ND-2.5.json",
+ "referenceNumber": 17,
+ "name": "Creative Commons Attribution No Derivatives 2.5 Generic",
+ "licenseId": "CC-BY-ND-2.5",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-nd/2.5/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/eGenix.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/eGenix.json",
+ "referenceNumber": 18,
+ "name": "eGenix.com Public License 1.1.0",
+ "licenseId": "eGenix",
+ "seeAlso": [
+ "http://www.egenix.com/products/eGenix.com-Public-License-1.1.0.pdf",
+ "https://fedoraproject.org/wiki/Licensing/eGenix.com_Public_License_1.1.0"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/LGPLLR.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/LGPLLR.json",
+ "referenceNumber": 19,
+ "name": "Lesser General Public License For Linguistic Resources",
+ "licenseId": "LGPLLR",
+ "seeAlso": [
+ "http://www-igm.univ-mlv.fr/~unitex/lgpllr.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/OLDAP-2.2.2.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OLDAP-2.2.2.json",
+ "referenceNumber": 20,
+ "name": "Open LDAP Public License 2.2.2",
+ "licenseId": "OLDAP-2.2.2",
+ "seeAlso": [
+ "http://www.openldap.org/devel/gitweb.cgi?p\u003dopenldap.git;a\u003dblob;f\u003dLICENSE;hb\u003ddf2cc1e21eb7c160695f5b7cffd6296c151ba188"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-ND-3.0-DE.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-ND-3.0-DE.json",
+ "referenceNumber": 21,
+ "name": "Creative Commons Attribution No Derivatives 3.0 Germany",
+ "licenseId": "CC-BY-ND-3.0-DE",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-nd/3.0/de/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/IPA.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/IPA.json",
+ "referenceNumber": 22,
+ "name": "IPA Font License",
+ "licenseId": "IPA",
+ "seeAlso": [
+ "https://opensource.org/licenses/IPA"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/NCSA.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/NCSA.json",
+ "referenceNumber": 23,
+ "name": "University of Illinois/NCSA Open Source License",
+ "licenseId": "NCSA",
+ "seeAlso": [
+ "http://otm.illinois.edu/uiuc_openSource",
+ "https://opensource.org/licenses/NCSA"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/W3C.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/W3C.json",
+ "referenceNumber": 24,
+ "name": "W3C Software Notice and License (2002-12-31)",
+ "licenseId": "W3C",
+ "seeAlso": [
+ "http://www.w3.org/Consortium/Legal/2002/copyright-software-20021231.html",
+ "https://opensource.org/licenses/W3C"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Adobe-2006.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Adobe-2006.json",
+ "referenceNumber": 25,
+ "name": "Adobe Systems Incorporated Source Code License Agreement",
+ "licenseId": "Adobe-2006",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/AdobeLicense"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Net-SNMP.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Net-SNMP.json",
+ "referenceNumber": 26,
+ "name": "Net-SNMP License",
+ "licenseId": "Net-SNMP",
+ "seeAlso": [
+ "http://net-snmp.sourceforge.net/about/license.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-SA-4.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-SA-4.0.json",
+ "referenceNumber": 27,
+ "name": "Creative Commons Attribution Share Alike 4.0 International",
+ "licenseId": "CC-BY-SA-4.0",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-sa/4.0/legalcode"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/YPL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/YPL-1.0.json",
+ "referenceNumber": 28,
+ "name": "Yahoo! Public License v1.0",
+ "licenseId": "YPL-1.0",
+ "seeAlso": [
+ "http://www.zimbra.com/license/yahoo_public_license_1.0.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Nunit.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/Nunit.json",
+ "referenceNumber": 29,
+ "name": "Nunit License",
+ "licenseId": "Nunit",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Nunit"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/MITNFA.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/MITNFA.json",
+ "referenceNumber": 30,
+ "name": "MIT +no-false-attribs license",
+ "licenseId": "MITNFA",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/MITNFA"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/PHP-3.01.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/PHP-3.01.json",
+ "referenceNumber": 31,
+ "name": "PHP License v3.01",
+ "licenseId": "PHP-3.01",
+ "seeAlso": [
+ "http://www.php.net/license/3_01.txt"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/BSD-Source-Code.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/BSD-Source-Code.json",
+ "referenceNumber": 32,
+ "name": "BSD Source Code Attribution",
+ "licenseId": "BSD-Source-Code",
+ "seeAlso": [
+ "https://github.com/robbiehanson/CocoaHTTPServer/blob/master/LICENSE.txt"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-SA-2.5.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-SA-2.5.json",
+ "referenceNumber": 33,
+ "name": "Creative Commons Attribution Share Alike 2.5 Generic",
+ "licenseId": "CC-BY-SA-2.5",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-sa/2.5/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Motosoto.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Motosoto.json",
+ "referenceNumber": 34,
+ "name": "Motosoto License",
+ "licenseId": "Motosoto",
+ "seeAlso": [
+ "https://opensource.org/licenses/Motosoto"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/OSL-1.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OSL-1.1.json",
+ "referenceNumber": 35,
+ "name": "Open Software License 1.1",
+ "licenseId": "OSL-1.1",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/OSL1.1"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/NGPL.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/NGPL.json",
+ "referenceNumber": 36,
+ "name": "Nethack General Public License",
+ "licenseId": "NGPL",
+ "seeAlso": [
+ "https://opensource.org/licenses/NGPL"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-2.5-AU.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-2.5-AU.json",
+ "referenceNumber": 37,
+ "name": "Creative Commons Attribution 2.5 Australia",
+ "licenseId": "CC-BY-2.5-AU",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by/2.5/au/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Unicode-TOU.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Unicode-TOU.json",
+ "referenceNumber": 38,
+ "name": "Unicode Terms of Use",
+ "licenseId": "Unicode-TOU",
+ "seeAlso": [
+ "http://www.unicode.org/copyright.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/BSD-3-Clause-No-Nuclear-License.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/BSD-3-Clause-No-Nuclear-License.json",
+ "referenceNumber": 39,
+ "name": "BSD 3-Clause No Nuclear License",
+ "licenseId": "BSD-3-Clause-No-Nuclear-License",
+ "seeAlso": [
+ "http://download.oracle.com/otn-pub/java/licenses/bsd.txt?AuthParam\u003d1467140197_43d516ce1776bd08a58235a7785be1cc"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/OPUBL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OPUBL-1.0.json",
+ "referenceNumber": 40,
+ "name": "Open Publication License v1.0",
+ "licenseId": "OPUBL-1.0",
+ "seeAlso": [
+ "http://opencontent.org/openpub/",
+ "https://www.debian.org/opl",
+ "https://www.ctan.org/license/opl"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-NC-SA-2.0-UK.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-NC-SA-2.0-UK.json",
+ "referenceNumber": 41,
+ "name": "Creative Commons Attribution Non Commercial Share Alike 2.0 England and Wales",
+ "licenseId": "CC-BY-NC-SA-2.0-UK",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-nc-sa/2.0/uk/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/NLOD-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/NLOD-2.0.json",
+ "referenceNumber": 42,
+ "name": "Norwegian Licence for Open Government Data (NLOD) 2.0",
+ "licenseId": "NLOD-2.0",
+ "seeAlso": [
+ "http://data.norge.no/nlod/en/2.0"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/gnuplot.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/gnuplot.json",
+ "referenceNumber": 43,
+ "name": "gnuplot License",
+ "licenseId": "gnuplot",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Gnuplot"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/EPICS.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/EPICS.json",
+ "referenceNumber": 44,
+ "name": "EPICS Open License",
+ "licenseId": "EPICS",
+ "seeAlso": [
+ "https://epics.anl.gov/license/open.php"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Info-ZIP.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Info-ZIP.json",
+ "referenceNumber": 45,
+ "name": "Info-ZIP License",
+ "licenseId": "Info-ZIP",
+ "seeAlso": [
+ "http://www.info-zip.org/license.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/OLDAP-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OLDAP-2.0.json",
+ "referenceNumber": 46,
+ "name": "Open LDAP Public License v2.0 (or possibly 2.0A and 2.0B)",
+ "licenseId": "OLDAP-2.0",
+ "seeAlso": [
+ "http://www.openldap.org/devel/gitweb.cgi?p\u003dopenldap.git;a\u003dblob;f\u003dLICENSE;hb\u003dcbf50f4e1185a21abd4c0a54d3f4341fe28f36ea"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CERN-OHL-P-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CERN-OHL-P-2.0.json",
+ "referenceNumber": 47,
+ "name": "CERN Open Hardware Licence Version 2 - Permissive",
+ "licenseId": "CERN-OHL-P-2.0",
+ "seeAlso": [
+ "https://www.ohwr.org/project/cernohl/wikis/Documents/CERN-OHL-version-2"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/BSD-3-Clause-No-Nuclear-Warranty.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/BSD-3-Clause-No-Nuclear-Warranty.json",
+ "referenceNumber": 48,
+ "name": "BSD 3-Clause No Nuclear Warranty",
+ "licenseId": "BSD-3-Clause-No-Nuclear-Warranty",
+ "seeAlso": [
+ "https://jogamp.org/git/?p\u003dgluegen.git;a\u003dblob_plain;f\u003dLICENSE.txt"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/AML.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/AML.json",
+ "referenceNumber": 49,
+ "name": "Apple MIT License",
+ "licenseId": "AML",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Apple_MIT_License"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/MulanPSL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/MulanPSL-1.0.json",
+ "referenceNumber": 50,
+ "name": "Mulan Permissive Software License, Version 1",
+ "licenseId": "MulanPSL-1.0",
+ "seeAlso": [
+ "https://license.coscl.org.cn/MulanPSL/",
+ "https://github.com/yuwenlong/longphp/blob/25dfb70cc2a466dc4bb55ba30901cbce08d164b5/LICENSE"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Multics.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Multics.json",
+ "referenceNumber": 51,
+ "name": "Multics License",
+ "licenseId": "Multics",
+ "seeAlso": [
+ "https://opensource.org/licenses/Multics"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/VSL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/VSL-1.0.json",
+ "referenceNumber": 52,
+ "name": "Vovida Software License v1.0",
+ "licenseId": "VSL-1.0",
+ "seeAlso": [
+ "https://opensource.org/licenses/VSL-1.0"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/RSA-MD.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/RSA-MD.json",
+ "referenceNumber": 53,
+ "name": "RSA Message-Digest License",
+ "licenseId": "RSA-MD",
+ "seeAlso": [
+ "http://www.faqs.org/rfcs/rfc1321.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-PDDC.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-PDDC.json",
+ "referenceNumber": 54,
+ "name": "Creative Commons Public Domain Dedication and Certification",
+ "licenseId": "CC-PDDC",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/publicdomain/"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-SA-2.1-JP.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-SA-2.1-JP.json",
+ "referenceNumber": 55,
+ "name": "Creative Commons Attribution Share Alike 2.1 Japan",
+ "licenseId": "CC-BY-SA-2.1-JP",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-sa/2.1/jp/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/LPPL-1.2.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/LPPL-1.2.json",
+ "referenceNumber": 56,
+ "name": "LaTeX Project Public License v1.2",
+ "licenseId": "LPPL-1.2",
+ "seeAlso": [
+ "http://www.latex-project.org/lppl/lppl-1-2.txt"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Spencer-94.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Spencer-94.json",
+ "referenceNumber": 57,
+ "name": "Spencer License 94",
+ "licenseId": "Spencer-94",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Henry_Spencer_Reg-Ex_Library_License"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/OLDAP-1.2.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OLDAP-1.2.json",
+ "referenceNumber": 58,
+ "name": "Open LDAP Public License v1.2",
+ "licenseId": "OLDAP-1.2",
+ "seeAlso": [
+ "http://www.openldap.org/devel/gitweb.cgi?p\u003dopenldap.git;a\u003dblob;f\u003dLICENSE;hb\u003d42b0383c50c299977b5893ee695cf4e486fb0dc7"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/O-UDA-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/O-UDA-1.0.json",
+ "referenceNumber": 59,
+ "name": "Open Use of Data Agreement v1.0",
+ "licenseId": "O-UDA-1.0",
+ "seeAlso": [
+ "https://github.com/microsoft/Open-Use-of-Data-Agreement/blob/v1.0/O-UDA-1.0.md",
+ "https://cdla.dev/open-use-of-data-agreement-v1-0/"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/OLDAP-2.7.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OLDAP-2.7.json",
+ "referenceNumber": 60,
+ "name": "Open LDAP Public License v2.7",
+ "licenseId": "OLDAP-2.7",
+ "seeAlso": [
+ "http://www.openldap.org/devel/gitweb.cgi?p\u003dopenldap.git;a\u003dblob;f\u003dLICENSE;hb\u003d47c2415c1df81556eeb39be6cad458ef87c534a2"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Glulxe.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Glulxe.json",
+ "referenceNumber": 61,
+ "name": "Glulxe License",
+ "licenseId": "Glulxe",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Glulxe"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/iMatix.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/iMatix.json",
+ "referenceNumber": 62,
+ "name": "iMatix Standard Function Library Agreement",
+ "licenseId": "iMatix",
+ "seeAlso": [
+ "http://legacy.imatix.com/html/sfl/sfl4.htm#license"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/TAPR-OHL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/TAPR-OHL-1.0.json",
+ "referenceNumber": 63,
+ "name": "TAPR Open Hardware License v1.0",
+ "licenseId": "TAPR-OHL-1.0",
+ "seeAlso": [
+ "https://www.tapr.org/OHL"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/NBPL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/NBPL-1.0.json",
+ "referenceNumber": 64,
+ "name": "Net Boolean Public License v1",
+ "licenseId": "NBPL-1.0",
+ "seeAlso": [
+ "http://www.openldap.org/devel/gitweb.cgi?p\u003dopenldap.git;a\u003dblob;f\u003dLICENSE;hb\u003d37b4b3f6cc4bf34e1d3dec61e69914b9819d8894"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/LiLiQ-R-1.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/LiLiQ-R-1.1.json",
+ "referenceNumber": 65,
+ "name": "Licence Libre du Québec – Réciprocité version 1.1",
+ "licenseId": "LiLiQ-R-1.1",
+ "seeAlso": [
+ "https://www.forge.gouv.qc.ca/participez/licence-logicielle/licence-libre-du-quebec-liliq-en-francais/licence-libre-du-quebec-reciprocite-liliq-r-v1-1/",
+ "http://opensource.org/licenses/LiLiQ-R-1.1"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Noweb.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Noweb.json",
+ "referenceNumber": 66,
+ "name": "Noweb License",
+ "licenseId": "Noweb",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Noweb"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC0-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC0-1.0.json",
+ "referenceNumber": 67,
+ "name": "Creative Commons Zero v1.0 Universal",
+ "licenseId": "CC0-1.0",
+ "seeAlso": [
+ "https://creativecommons.org/publicdomain/zero/1.0/legalcode"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/BSD-Protection.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/BSD-Protection.json",
+ "referenceNumber": 68,
+ "name": "BSD Protection License",
+ "licenseId": "BSD-Protection",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/BSD_Protection_License"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-NC-2.5.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-NC-2.5.json",
+ "referenceNumber": 69,
+ "name": "Creative Commons Attribution Non Commercial 2.5 Generic",
+ "licenseId": "CC-BY-NC-2.5",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-nc/2.5/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Zlib.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Zlib.json",
+ "referenceNumber": 70,
+ "name": "zlib License",
+ "licenseId": "Zlib",
+ "seeAlso": [
+ "http://www.zlib.net/zlib_license.html",
+ "https://opensource.org/licenses/Zlib"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/GFDL-1.3-invariants-or-later.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/GFDL-1.3-invariants-or-later.json",
+ "referenceNumber": 71,
+ "name": "GNU Free Documentation License v1.3 or later - invariants",
+ "licenseId": "GFDL-1.3-invariants-or-later",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/fdl-1.3.txt"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-3.0-AT.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-3.0-AT.json",
+ "referenceNumber": 72,
+ "name": "Creative Commons Attribution 3.0 Austria",
+ "licenseId": "CC-BY-3.0-AT",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by/3.0/at/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/LPPL-1.3c.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/LPPL-1.3c.json",
+ "referenceNumber": 73,
+ "name": "LaTeX Project Public License v1.3c",
+ "licenseId": "LPPL-1.3c",
+ "seeAlso": [
+ "http://www.latex-project.org/lppl/lppl-1-3c.txt",
+ "https://opensource.org/licenses/LPPL-1.3c"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/EPL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/EPL-1.0.json",
+ "referenceNumber": 74,
+ "name": "Eclipse Public License 1.0",
+ "licenseId": "EPL-1.0",
+ "seeAlso": [
+ "http://www.eclipse.org/legal/epl-v10.html",
+ "https://opensource.org/licenses/EPL-1.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/GFDL-1.1-invariants-or-later.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/GFDL-1.1-invariants-or-later.json",
+ "referenceNumber": 75,
+ "name": "GNU Free Documentation License v1.1 or later - invariants",
+ "licenseId": "GFDL-1.1-invariants-or-later",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/fdl-1.1.txt"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/ANTLR-PD-fallback.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/ANTLR-PD-fallback.json",
+ "referenceNumber": 76,
+ "name": "ANTLR Software Rights Notice with license fallback",
+ "licenseId": "ANTLR-PD-fallback",
+ "seeAlso": [
+ "http://www.antlr2.org/license.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/OLDAP-2.4.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OLDAP-2.4.json",
+ "referenceNumber": 77,
+ "name": "Open LDAP Public License v2.4",
+ "licenseId": "OLDAP-2.4",
+ "seeAlso": [
+ "http://www.openldap.org/devel/gitweb.cgi?p\u003dopenldap.git;a\u003dblob;f\u003dLICENSE;hb\u003dcd1284c4a91a8a380d904eee68d1583f989ed386"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/OLDAP-2.3.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OLDAP-2.3.json",
+ "referenceNumber": 78,
+ "name": "Open LDAP Public License v2.3",
+ "licenseId": "OLDAP-2.3",
+ "seeAlso": [
+ "http://www.openldap.org/devel/gitweb.cgi?p\u003dopenldap.git;a\u003dblob;f\u003dLICENSE;hb\u003dd32cf54a32d581ab475d23c810b0a7fbaf8d63c3"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/ZPL-2.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/ZPL-2.1.json",
+ "referenceNumber": 79,
+ "name": "Zope Public License 2.1",
+ "licenseId": "ZPL-2.1",
+ "seeAlso": [
+ "http://old.zope.org/Resources/ZPL/"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Apache-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Apache-2.0.json",
+ "referenceNumber": 80,
+ "name": "Apache License 2.0",
+ "licenseId": "Apache-2.0",
+ "seeAlso": [
+ "https://www.apache.org/licenses/LICENSE-2.0",
+ "https://opensource.org/licenses/Apache-2.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/SGI-B-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/SGI-B-2.0.json",
+ "referenceNumber": 81,
+ "name": "SGI Free Software License B v2.0",
+ "licenseId": "SGI-B-2.0",
+ "seeAlso": [
+ "http://oss.sgi.com/projects/FreeB/SGIFreeSWLicB.2.0.pdf"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Hippocratic-2.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Hippocratic-2.1.json",
+ "referenceNumber": 82,
+ "name": "Hippocratic License 2.1",
+ "licenseId": "Hippocratic-2.1",
+ "seeAlso": [
+ "https://firstdonoharm.dev/version/2/1/license.html",
+ "https://github.com/EthicalSource/hippocratic-license/blob/58c0e646d64ff6fbee275bfe2b9492f914e3ab2a/LICENSE.txt"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-SA-3.0-DE.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-SA-3.0-DE.json",
+ "referenceNumber": 83,
+ "name": "Creative Commons Attribution Share Alike 3.0 Germany",
+ "licenseId": "CC-BY-SA-3.0-DE",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-sa/3.0/de/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-NC-SA-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-NC-SA-1.0.json",
+ "referenceNumber": 84,
+ "name": "Creative Commons Attribution Non Commercial Share Alike 1.0 Generic",
+ "licenseId": "CC-BY-NC-SA-1.0",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-nc-sa/1.0/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/LGPL-2.1-or-later.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/LGPL-2.1-or-later.json",
+ "referenceNumber": 85,
+ "name": "GNU Lesser General Public License v2.1 or later",
+ "licenseId": "LGPL-2.1-or-later",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/lgpl-2.1-standalone.html",
+ "https://opensource.org/licenses/LGPL-2.1"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-3.0-US.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-3.0-US.json",
+ "referenceNumber": 86,
+ "name": "Creative Commons Attribution 3.0 United States",
+ "licenseId": "CC-BY-3.0-US",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by/3.0/us/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/TCP-wrappers.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/TCP-wrappers.json",
+ "referenceNumber": 87,
+ "name": "TCP Wrappers License",
+ "licenseId": "TCP-wrappers",
+ "seeAlso": [
+ "http://rc.quest.com/topics/openssh/license.php#tcpwrappers"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/GFDL-1.2-invariants-or-later.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/GFDL-1.2-invariants-or-later.json",
+ "referenceNumber": 88,
+ "name": "GNU Free Documentation License v1.2 or later - invariants",
+ "licenseId": "GFDL-1.2-invariants-or-later",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/fdl-1.2.txt"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Eurosym.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Eurosym.json",
+ "referenceNumber": 89,
+ "name": "Eurosym License",
+ "licenseId": "Eurosym",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Eurosym"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/GFDL-1.1.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/GFDL-1.1.json",
+ "referenceNumber": 90,
+ "name": "GNU Free Documentation License v1.1",
+ "licenseId": "GFDL-1.1",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/fdl-1.1.txt"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/LPPL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/LPPL-1.0.json",
+ "referenceNumber": 91,
+ "name": "LaTeX Project Public License v1.0",
+ "licenseId": "LPPL-1.0",
+ "seeAlso": [
+ "http://www.latex-project.org/lppl/lppl-1-0.txt"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/LGPL-2.0+.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/LGPL-2.0+.json",
+ "referenceNumber": 92,
+ "name": "GNU Library General Public License v2 or later",
+ "licenseId": "LGPL-2.0+",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/lgpl-2.0-standalone.html"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/SGI-B-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/SGI-B-1.0.json",
+ "referenceNumber": 93,
+ "name": "SGI Free Software License B v1.0",
+ "licenseId": "SGI-B-1.0",
+ "seeAlso": [
+ "http://oss.sgi.com/projects/FreeB/SGIFreeSWLicB.1.0.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/APL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/APL-1.0.json",
+ "referenceNumber": 94,
+ "name": "Adaptive Public License 1.0",
+ "licenseId": "APL-1.0",
+ "seeAlso": [
+ "https://opensource.org/licenses/APL-1.0"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/libtiff.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/libtiff.json",
+ "referenceNumber": 95,
+ "name": "libtiff License",
+ "licenseId": "libtiff",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/libtiff"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/AFL-2.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/AFL-2.1.json",
+ "referenceNumber": 96,
+ "name": "Academic Free License v2.1",
+ "licenseId": "AFL-2.1",
+ "seeAlso": [
+ "http://opensource.linux-mirror.org/licenses/afl-2.1.txt"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-NC-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-NC-1.0.json",
+ "referenceNumber": 97,
+ "name": "Creative Commons Attribution Non Commercial 1.0 Generic",
+ "licenseId": "CC-BY-NC-1.0",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-nc/1.0/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/GD.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/GD.json",
+ "referenceNumber": 98,
+ "name": "GD License",
+ "licenseId": "GD",
+ "seeAlso": [
+ "https://libgd.github.io/manuals/2.3.0/files/license-txt.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/AFL-1.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/AFL-1.1.json",
+ "referenceNumber": 99,
+ "name": "Academic Free License v1.1",
+ "licenseId": "AFL-1.1",
+ "seeAlso": [
+ "http://opensource.linux-mirror.org/licenses/afl-1.1.txt",
+ "http://wayback.archive.org/web/20021004124254/http://www.opensource.org/licenses/academic.php"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-NC-ND-3.0-IGO.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-NC-ND-3.0-IGO.json",
+ "referenceNumber": 100,
+ "name": "Creative Commons Attribution Non Commercial No Derivatives 3.0 IGO",
+ "licenseId": "CC-BY-NC-ND-3.0-IGO",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-nc-nd/3.0/igo/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Unicode-DFS-2015.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Unicode-DFS-2015.json",
+ "referenceNumber": 101,
+ "name": "Unicode License Agreement - Data Files and Software (2015)",
+ "licenseId": "Unicode-DFS-2015",
+ "seeAlso": [
+ "https://web.archive.org/web/20151224134844/http://unicode.org/copyright.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/GFDL-1.2-only.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/GFDL-1.2-only.json",
+ "referenceNumber": 102,
+ "name": "GNU Free Documentation License v1.2 only",
+ "licenseId": "GFDL-1.2-only",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/fdl-1.2.txt"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/MPL-1.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/MPL-1.1.json",
+ "referenceNumber": 103,
+ "name": "Mozilla Public License 1.1",
+ "licenseId": "MPL-1.1",
+ "seeAlso": [
+ "http://www.mozilla.org/MPL/MPL-1.1.html",
+ "https://opensource.org/licenses/MPL-1.1"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/GPL-2.0-only.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/GPL-2.0-only.json",
+ "referenceNumber": 104,
+ "name": "GNU General Public License v2.0 only",
+ "licenseId": "GPL-2.0-only",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/gpl-2.0-standalone.html",
+ "https://opensource.org/licenses/GPL-2.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-NC-4.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-NC-4.0.json",
+ "referenceNumber": 105,
+ "name": "Creative Commons Attribution Non Commercial 4.0 International",
+ "licenseId": "CC-BY-NC-4.0",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-nc/4.0/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/FreeImage.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/FreeImage.json",
+ "referenceNumber": 106,
+ "name": "FreeImage Public License v1.0",
+ "licenseId": "FreeImage",
+ "seeAlso": [
+ "http://freeimage.sourceforge.net/freeimage-license.txt"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/SHL-0.51.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/SHL-0.51.json",
+ "referenceNumber": 107,
+ "name": "Solderpad Hardware License, Version 0.51",
+ "licenseId": "SHL-0.51",
+ "seeAlso": [
+ "https://solderpad.org/licenses/SHL-0.51/"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CNRI-Jython.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CNRI-Jython.json",
+ "referenceNumber": 108,
+ "name": "CNRI Jython License",
+ "licenseId": "CNRI-Jython",
+ "seeAlso": [
+ "http://www.jython.org/license.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/ZPL-1.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/ZPL-1.1.json",
+ "referenceNumber": 109,
+ "name": "Zope Public License 1.1",
+ "licenseId": "ZPL-1.1",
+ "seeAlso": [
+ "http://old.zope.org/Resources/License/ZPL-1.1"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Afmparse.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Afmparse.json",
+ "referenceNumber": 110,
+ "name": "Afmparse License",
+ "licenseId": "Afmparse",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Afmparse"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/OLDAP-2.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OLDAP-2.1.json",
+ "referenceNumber": 111,
+ "name": "Open LDAP Public License v2.1",
+ "licenseId": "OLDAP-2.1",
+ "seeAlso": [
+ "http://www.openldap.org/devel/gitweb.cgi?p\u003dopenldap.git;a\u003dblob;f\u003dLICENSE;hb\u003db0d176738e96a0d3b9f85cb51e140a86f21be715"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Rdisc.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Rdisc.json",
+ "referenceNumber": 112,
+ "name": "Rdisc License",
+ "licenseId": "Rdisc",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Rdisc_License"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Imlib2.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Imlib2.json",
+ "referenceNumber": 113,
+ "name": "Imlib2 License",
+ "licenseId": "Imlib2",
+ "seeAlso": [
+ "http://trac.enlightenment.org/e/browser/trunk/imlib2/COPYING",
+ "https://git.enlightenment.org/legacy/imlib2.git/tree/COPYING"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/BSD-4-Clause-Shortened.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/BSD-4-Clause-Shortened.json",
+ "referenceNumber": 114,
+ "name": "BSD 4 Clause Shortened",
+ "licenseId": "BSD-4-Clause-Shortened",
+ "seeAlso": [
+ "https://metadata.ftp-master.debian.org/changelogs//main/a/arpwatch/arpwatch_2.1a15-7_copyright"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Sendmail.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Sendmail.json",
+ "referenceNumber": 115,
+ "name": "Sendmail License",
+ "licenseId": "Sendmail",
+ "seeAlso": [
+ "http://www.sendmail.com/pdfs/open_source/sendmail_license.pdf",
+ "https://web.archive.org/web/20160322142305/https://www.sendmail.com/pdfs/open_source/sendmail_license.pdf"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-2.5.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-2.5.json",
+ "referenceNumber": 116,
+ "name": "Creative Commons Attribution 2.5 Generic",
+ "licenseId": "CC-BY-2.5",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by/2.5/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/AAL.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/AAL.json",
+ "referenceNumber": 117,
+ "name": "Attribution Assurance License",
+ "licenseId": "AAL",
+ "seeAlso": [
+ "https://opensource.org/licenses/attribution"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/MPL-2.0-no-copyleft-exception.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/MPL-2.0-no-copyleft-exception.json",
+ "referenceNumber": 118,
+ "name": "Mozilla Public License 2.0 (no copyleft exception)",
+ "licenseId": "MPL-2.0-no-copyleft-exception",
+ "seeAlso": [
+ "http://www.mozilla.org/MPL/2.0/",
+ "https://opensource.org/licenses/MPL-2.0"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-NC-ND-2.5.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-NC-ND-2.5.json",
+ "referenceNumber": 119,
+ "name": "Creative Commons Attribution Non Commercial No Derivatives 2.5 Generic",
+ "licenseId": "CC-BY-NC-ND-2.5",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-nc-nd/2.5/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-3.0-NL.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-3.0-NL.json",
+ "referenceNumber": 120,
+ "name": "Creative Commons Attribution 3.0 Netherlands",
+ "licenseId": "CC-BY-3.0-NL",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by/3.0/nl/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/LPL-1.02.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/LPL-1.02.json",
+ "referenceNumber": 121,
+ "name": "Lucent Public License v1.02",
+ "licenseId": "LPL-1.02",
+ "seeAlso": [
+ "http://plan9.bell-labs.com/plan9/license.html",
+ "https://opensource.org/licenses/LPL-1.02"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/ECL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/ECL-1.0.json",
+ "referenceNumber": 122,
+ "name": "Educational Community License v1.0",
+ "licenseId": "ECL-1.0",
+ "seeAlso": [
+ "https://opensource.org/licenses/ECL-1.0"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/OFL-1.0-no-RFN.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OFL-1.0-no-RFN.json",
+ "referenceNumber": 123,
+ "name": "SIL Open Font License 1.0 with no Reserved Font Name",
+ "licenseId": "OFL-1.0-no-RFN",
+ "seeAlso": [
+ "http://scripts.sil.org/cms/scripts/page.php?item_id\u003dOFL10_web"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-NC-SA-3.0-DE.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-NC-SA-3.0-DE.json",
+ "referenceNumber": 124,
+ "name": "Creative Commons Attribution Non Commercial Share Alike 3.0 Germany",
+ "licenseId": "CC-BY-NC-SA-3.0-DE",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-nc-sa/3.0/de/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-SA-3.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-SA-3.0.json",
+ "referenceNumber": 125,
+ "name": "Creative Commons Attribution Share Alike 3.0 Unported",
+ "licenseId": "CC-BY-SA-3.0",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-sa/3.0/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/NTP.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/NTP.json",
+ "referenceNumber": 126,
+ "name": "NTP License",
+ "licenseId": "NTP",
+ "seeAlso": [
+ "https://opensource.org/licenses/NTP"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/MPL-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/MPL-2.0.json",
+ "referenceNumber": 127,
+ "name": "Mozilla Public License 2.0",
+ "licenseId": "MPL-2.0",
+ "seeAlso": [
+ "https://www.mozilla.org/MPL/2.0/",
+ "https://opensource.org/licenses/MPL-2.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/APSL-1.2.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/APSL-1.2.json",
+ "referenceNumber": 128,
+ "name": "Apple Public Source License 1.2",
+ "licenseId": "APSL-1.2",
+ "seeAlso": [
+ "http://www.samurajdata.se/opensource/mirror/licenses/apsl.php"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/GFDL-1.2-no-invariants-only.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/GFDL-1.2-no-invariants-only.json",
+ "referenceNumber": 129,
+ "name": "GNU Free Documentation License v1.2 only - no invariants",
+ "licenseId": "GFDL-1.2-no-invariants-only",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/fdl-1.2.txt"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Artistic-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Artistic-2.0.json",
+ "referenceNumber": 130,
+ "name": "Artistic License 2.0",
+ "licenseId": "Artistic-2.0",
+ "seeAlso": [
+ "http://www.perlfoundation.org/artistic_license_2_0",
+ "https://www.perlfoundation.org/artistic-license-20.html",
+ "https://opensource.org/licenses/artistic-license-2.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/GPL-2.0.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/GPL-2.0.json",
+ "referenceNumber": 131,
+ "name": "GNU General Public License v2.0 only",
+ "licenseId": "GPL-2.0",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/gpl-2.0-standalone.html",
+ "https://opensource.org/licenses/GPL-2.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/RSCPL.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/RSCPL.json",
+ "referenceNumber": 132,
+ "name": "Ricoh Source Code Public License",
+ "licenseId": "RSCPL",
+ "seeAlso": [
+ "http://wayback.archive.org/web/20060715140826/http://www.risource.org/RPL/RPL-1.0A.shtml",
+ "https://opensource.org/licenses/RSCPL"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Sleepycat.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Sleepycat.json",
+ "referenceNumber": 133,
+ "name": "Sleepycat License",
+ "licenseId": "Sleepycat",
+ "seeAlso": [
+ "https://opensource.org/licenses/Sleepycat"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/xpp.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/xpp.json",
+ "referenceNumber": 134,
+ "name": "XPP License",
+ "licenseId": "xpp",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/xpp"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CDLA-Sharing-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CDLA-Sharing-1.0.json",
+ "referenceNumber": 135,
+ "name": "Community Data License Agreement Sharing 1.0",
+ "licenseId": "CDLA-Sharing-1.0",
+ "seeAlso": [
+ "https://cdla.io/sharing-1-0"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/ClArtistic.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/ClArtistic.json",
+ "referenceNumber": 136,
+ "name": "Clarified Artistic License",
+ "licenseId": "ClArtistic",
+ "seeAlso": [
+ "http://gianluca.dellavedova.org/2011/01/03/clarified-artistic-license/",
+ "http://www.ncftp.com/ncftp/doc/LICENSE.txt"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/AGPL-1.0-only.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/AGPL-1.0-only.json",
+ "referenceNumber": 137,
+ "name": "Affero General Public License v1.0 only",
+ "licenseId": "AGPL-1.0-only",
+ "seeAlso": [
+ "http://www.affero.org/oagpl.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-3.0-DE.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-3.0-DE.json",
+ "referenceNumber": 138,
+ "name": "Creative Commons Attribution 3.0 Germany",
+ "licenseId": "CC-BY-3.0-DE",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by/3.0/de/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/AFL-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/AFL-2.0.json",
+ "referenceNumber": 139,
+ "name": "Academic Free License v2.0",
+ "licenseId": "AFL-2.0",
+ "seeAlso": [
+ "http://wayback.archive.org/web/20060924134533/http://www.opensource.org/licenses/afl-2.0.txt"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Intel.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Intel.json",
+ "referenceNumber": 140,
+ "name": "Intel Open Source License",
+ "licenseId": "Intel",
+ "seeAlso": [
+ "https://opensource.org/licenses/Intel"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/GFDL-1.1-no-invariants-or-later.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/GFDL-1.1-no-invariants-or-later.json",
+ "referenceNumber": 141,
+ "name": "GNU Free Documentation License v1.1 or later - no invariants",
+ "licenseId": "GFDL-1.1-no-invariants-or-later",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/fdl-1.1.txt"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/APAFML.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/APAFML.json",
+ "referenceNumber": 142,
+ "name": "Adobe Postscript AFM License",
+ "licenseId": "APAFML",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/AdobePostscriptAFM"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/GFDL-1.2.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/GFDL-1.2.json",
+ "referenceNumber": 143,
+ "name": "GNU Free Documentation License v1.2",
+ "licenseId": "GFDL-1.2",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/fdl-1.2.txt"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/SISSL.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/SISSL.json",
+ "referenceNumber": 144,
+ "name": "Sun Industry Standards Source License v1.1",
+ "licenseId": "SISSL",
+ "seeAlso": [
+ "http://www.openoffice.org/licenses/sissl_license.html",
+ "https://opensource.org/licenses/SISSL"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Naumen.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Naumen.json",
+ "referenceNumber": 145,
+ "name": "Naumen Public License",
+ "licenseId": "Naumen",
+ "seeAlso": [
+ "https://opensource.org/licenses/Naumen"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/HTMLTIDY.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/HTMLTIDY.json",
+ "referenceNumber": 146,
+ "name": "HTML Tidy License",
+ "licenseId": "HTMLTIDY",
+ "seeAlso": [
+ "https://github.com/htacg/tidy-html5/blob/next/README/LICENSE.md"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/OLDAP-2.8.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OLDAP-2.8.json",
+ "referenceNumber": 147,
+ "name": "Open LDAP Public License v2.8",
+ "licenseId": "OLDAP-2.8",
+ "seeAlso": [
+ "http://www.openldap.org/software/release/license.html"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/blessing.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/blessing.json",
+ "referenceNumber": 148,
+ "name": "SQLite Blessing",
+ "licenseId": "blessing",
+ "seeAlso": [
+ "https://www.sqlite.org/src/artifact/e33a4df7e32d742a?ln\u003d4-9",
+ "https://sqlite.org/src/artifact/df5091916dbb40e6"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-ND-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-ND-2.0.json",
+ "referenceNumber": 149,
+ "name": "Creative Commons Attribution No Derivatives 2.0 Generic",
+ "licenseId": "CC-BY-ND-2.0",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-nd/2.0/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/OGTSL.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OGTSL.json",
+ "referenceNumber": 150,
+ "name": "Open Group Test Suite License",
+ "licenseId": "OGTSL",
+ "seeAlso": [
+ "http://www.opengroup.org/testing/downloads/The_Open_Group_TSL.txt",
+ "https://opensource.org/licenses/OGTSL"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/LGPL-2.0-or-later.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/LGPL-2.0-or-later.json",
+ "referenceNumber": 151,
+ "name": "GNU Library General Public License v2 or later",
+ "licenseId": "LGPL-2.0-or-later",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/lgpl-2.0-standalone.html"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Parity-7.0.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Parity-7.0.0.json",
+ "referenceNumber": 152,
+ "name": "The Parity Public License 7.0.0",
+ "licenseId": "Parity-7.0.0",
+ "seeAlso": [
+ "https://paritylicense.com/versions/7.0.0.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-ND-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-ND-1.0.json",
+ "referenceNumber": 153,
+ "name": "Creative Commons Attribution No Derivatives 1.0 Generic",
+ "licenseId": "CC-BY-ND-1.0",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-nd/1.0/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/dvipdfm.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/dvipdfm.json",
+ "referenceNumber": 154,
+ "name": "dvipdfm License",
+ "licenseId": "dvipdfm",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/dvipdfm"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CNRI-Python.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CNRI-Python.json",
+ "referenceNumber": 155,
+ "name": "CNRI Python License",
+ "licenseId": "CNRI-Python",
+ "seeAlso": [
+ "https://opensource.org/licenses/CNRI-Python"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/BSD-4-Clause-UC.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/BSD-4-Clause-UC.json",
+ "referenceNumber": 156,
+ "name": "BSD-4-Clause (University of California-Specific)",
+ "licenseId": "BSD-4-Clause-UC",
+ "seeAlso": [
+ "http://www.freebsd.org/copyright/license.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/NLOD-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/NLOD-1.0.json",
+ "referenceNumber": 157,
+ "name": "Norwegian Licence for Open Government Data (NLOD) 1.0",
+ "licenseId": "NLOD-1.0",
+ "seeAlso": [
+ "http://data.norge.no/nlod/en/1.0"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/MS-RL.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/MS-RL.json",
+ "referenceNumber": 158,
+ "name": "Microsoft Reciprocal License",
+ "licenseId": "MS-RL",
+ "seeAlso": [
+ "http://www.microsoft.com/opensource/licenses.mspx",
+ "https://opensource.org/licenses/MS-RL"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-NC-SA-4.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-NC-SA-4.0.json",
+ "referenceNumber": 159,
+ "name": "Creative Commons Attribution Non Commercial Share Alike 4.0 International",
+ "licenseId": "CC-BY-NC-SA-4.0",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/HaskellReport.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/HaskellReport.json",
+ "referenceNumber": 160,
+ "name": "Haskell Language Report License",
+ "licenseId": "HaskellReport",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Haskell_Language_Report_License"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-1.0.json",
+ "referenceNumber": 161,
+ "name": "Creative Commons Attribution 1.0 Generic",
+ "licenseId": "CC-BY-1.0",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by/1.0/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/UCL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/UCL-1.0.json",
+ "referenceNumber": 162,
+ "name": "Upstream Compatibility License v1.0",
+ "licenseId": "UCL-1.0",
+ "seeAlso": [
+ "https://opensource.org/licenses/UCL-1.0"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Mup.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Mup.json",
+ "referenceNumber": 163,
+ "name": "Mup License",
+ "licenseId": "Mup",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Mup"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/SMPPL.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/SMPPL.json",
+ "referenceNumber": 164,
+ "name": "Secure Messaging Protocol Public License",
+ "licenseId": "SMPPL",
+ "seeAlso": [
+ "https://github.com/dcblake/SMP/blob/master/Documentation/License.txt"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/PHP-3.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/PHP-3.0.json",
+ "referenceNumber": 165,
+ "name": "PHP License v3.0",
+ "licenseId": "PHP-3.0",
+ "seeAlso": [
+ "http://www.php.net/license/3_0.txt",
+ "https://opensource.org/licenses/PHP-3.0"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/GL2PS.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/GL2PS.json",
+ "referenceNumber": 166,
+ "name": "GL2PS License",
+ "licenseId": "GL2PS",
+ "seeAlso": [
+ "http://www.geuz.org/gl2ps/COPYING.GL2PS"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CrystalStacker.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CrystalStacker.json",
+ "referenceNumber": 167,
+ "name": "CrystalStacker License",
+ "licenseId": "CrystalStacker",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing:CrystalStacker?rd\u003dLicensing/CrystalStacker"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/W3C-20150513.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/W3C-20150513.json",
+ "referenceNumber": 168,
+ "name": "W3C Software Notice and Document License (2015-05-13)",
+ "licenseId": "W3C-20150513",
+ "seeAlso": [
+ "https://www.w3.org/Consortium/Legal/2015/copyright-software-and-document"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/NIST-PD-fallback.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/NIST-PD-fallback.json",
+ "referenceNumber": 169,
+ "name": "NIST Public Domain Notice with license fallback",
+ "licenseId": "NIST-PD-fallback",
+ "seeAlso": [
+ "https://github.com/usnistgov/jsip/blob/59700e6926cbe96c5cdae897d9a7d2656b42abe3/LICENSE",
+ "https://github.com/usnistgov/fipy/blob/86aaa5c2ba2c6f1be19593c5986071cf6568cc34/LICENSE.rst"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/OGL-UK-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OGL-UK-1.0.json",
+ "referenceNumber": 170,
+ "name": "Open Government Licence v1.0",
+ "licenseId": "OGL-UK-1.0",
+ "seeAlso": [
+ "http://www.nationalarchives.gov.uk/doc/open-government-licence/version/1/"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CPL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CPL-1.0.json",
+ "referenceNumber": 171,
+ "name": "Common Public License 1.0",
+ "licenseId": "CPL-1.0",
+ "seeAlso": [
+ "https://opensource.org/licenses/CPL-1.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/LGPL-2.1-only.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/LGPL-2.1-only.json",
+ "referenceNumber": 172,
+ "name": "GNU Lesser General Public License v2.1 only",
+ "licenseId": "LGPL-2.1-only",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/lgpl-2.1-standalone.html",
+ "https://opensource.org/licenses/LGPL-2.1"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/ZPL-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/ZPL-2.0.json",
+ "referenceNumber": 173,
+ "name": "Zope Public License 2.0",
+ "licenseId": "ZPL-2.0",
+ "seeAlso": [
+ "http://old.zope.org/Resources/License/ZPL-2.0",
+ "https://opensource.org/licenses/ZPL-2.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Frameworx-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Frameworx-1.0.json",
+ "referenceNumber": 174,
+ "name": "Frameworx Open License 1.0",
+ "licenseId": "Frameworx-1.0",
+ "seeAlso": [
+ "https://opensource.org/licenses/Frameworx-1.0"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/AGPL-3.0-only.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/AGPL-3.0-only.json",
+ "referenceNumber": 175,
+ "name": "GNU Affero General Public License v3.0 only",
+ "licenseId": "AGPL-3.0-only",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/agpl.txt",
+ "https://opensource.org/licenses/AGPL-3.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/DRL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/DRL-1.0.json",
+ "referenceNumber": 176,
+ "name": "Detection Rule License 1.0",
+ "licenseId": "DRL-1.0",
+ "seeAlso": [
+ "https://github.com/Neo23x0/sigma/blob/master/LICENSE.Detection.Rules.md"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/EFL-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/EFL-2.0.json",
+ "referenceNumber": 177,
+ "name": "Eiffel Forum License v2.0",
+ "licenseId": "EFL-2.0",
+ "seeAlso": [
+ "http://www.eiffel-nice.org/license/eiffel-forum-license-2.html",
+ "https://opensource.org/licenses/EFL-2.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Spencer-99.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Spencer-99.json",
+ "referenceNumber": 178,
+ "name": "Spencer License 99",
+ "licenseId": "Spencer-99",
+ "seeAlso": [
+ "http://www.opensource.apple.com/source/tcl/tcl-5/tcl/generic/regfronts.c"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CAL-1.0-Combined-Work-Exception.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CAL-1.0-Combined-Work-Exception.json",
+ "referenceNumber": 179,
+ "name": "Cryptographic Autonomy License 1.0 (Combined Work Exception)",
+ "licenseId": "CAL-1.0-Combined-Work-Exception",
+ "seeAlso": [
+ "http://cryptographicautonomylicense.com/license-text.html",
+ "https://opensource.org/licenses/CAL-1.0"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/GFDL-1.1-invariants-only.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/GFDL-1.1-invariants-only.json",
+ "referenceNumber": 180,
+ "name": "GNU Free Documentation License v1.1 only - invariants",
+ "licenseId": "GFDL-1.1-invariants-only",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/fdl-1.1.txt"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/TCL.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/TCL.json",
+ "referenceNumber": 181,
+ "name": "TCL/TK License",
+ "licenseId": "TCL",
+ "seeAlso": [
+ "http://www.tcl.tk/software/tcltk/license.html",
+ "https://fedoraproject.org/wiki/Licensing/TCL"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/SHL-0.5.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/SHL-0.5.json",
+ "referenceNumber": 182,
+ "name": "Solderpad Hardware License v0.5",
+ "licenseId": "SHL-0.5",
+ "seeAlso": [
+ "https://solderpad.org/licenses/SHL-0.5/"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/OFL-1.0-RFN.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OFL-1.0-RFN.json",
+ "referenceNumber": 183,
+ "name": "SIL Open Font License 1.0 with Reserved Font Name",
+ "licenseId": "OFL-1.0-RFN",
+ "seeAlso": [
+ "http://scripts.sil.org/cms/scripts/page.php?item_id\u003dOFL10_web"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/LGPL-2.0.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/LGPL-2.0.json",
+ "referenceNumber": 184,
+ "name": "GNU Library General Public License v2 only",
+ "licenseId": "LGPL-2.0",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/lgpl-2.0-standalone.html"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/CERN-OHL-W-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CERN-OHL-W-2.0.json",
+ "referenceNumber": 185,
+ "name": "CERN Open Hardware Licence Version 2 - Weakly Reciprocal",
+ "licenseId": "CERN-OHL-W-2.0",
+ "seeAlso": [
+ "https://www.ohwr.org/project/cernohl/wikis/Documents/CERN-OHL-version-2"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Glide.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Glide.json",
+ "referenceNumber": 186,
+ "name": "3dfx Glide License",
+ "licenseId": "Glide",
+ "seeAlso": [
+ "http://www.users.on.net/~triforce/glidexp/COPYING.txt"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/mpich2.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/mpich2.json",
+ "referenceNumber": 187,
+ "name": "mpich2 License",
+ "licenseId": "mpich2",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/MIT"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/psutils.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/psutils.json",
+ "referenceNumber": 188,
+ "name": "psutils License",
+ "licenseId": "psutils",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/psutils"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/SPL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/SPL-1.0.json",
+ "referenceNumber": 189,
+ "name": "Sun Public License v1.0",
+ "licenseId": "SPL-1.0",
+ "seeAlso": [
+ "https://opensource.org/licenses/SPL-1.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Apache-1.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Apache-1.1.json",
+ "referenceNumber": 190,
+ "name": "Apache License 1.1",
+ "licenseId": "Apache-1.1",
+ "seeAlso": [
+ "http://apache.org/licenses/LICENSE-1.1",
+ "https://opensource.org/licenses/Apache-1.1"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-ND-4.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-ND-4.0.json",
+ "referenceNumber": 191,
+ "name": "Creative Commons Attribution No Derivatives 4.0 International",
+ "licenseId": "CC-BY-ND-4.0",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-nd/4.0/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/FreeBSD-DOC.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/FreeBSD-DOC.json",
+ "referenceNumber": 192,
+ "name": "FreeBSD Documentation License",
+ "licenseId": "FreeBSD-DOC",
+ "seeAlso": [
+ "https://www.freebsd.org/copyright/freebsd-doc-license/"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/SCEA.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/SCEA.json",
+ "referenceNumber": 193,
+ "name": "SCEA Shared Source License",
+ "licenseId": "SCEA",
+ "seeAlso": [
+ "http://research.scea.com/scea_shared_source_license.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Latex2e.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Latex2e.json",
+ "referenceNumber": 194,
+ "name": "Latex2e License",
+ "licenseId": "Latex2e",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Latex2e"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Artistic-1.0-cl8.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Artistic-1.0-cl8.json",
+ "referenceNumber": 195,
+ "name": "Artistic License 1.0 w/clause 8",
+ "licenseId": "Artistic-1.0-cl8",
+ "seeAlso": [
+ "https://opensource.org/licenses/Artistic-1.0"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/SGI-B-1.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/SGI-B-1.1.json",
+ "referenceNumber": 196,
+ "name": "SGI Free Software License B v1.1",
+ "licenseId": "SGI-B-1.1",
+ "seeAlso": [
+ "http://oss.sgi.com/projects/FreeB/"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/NRL.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/NRL.json",
+ "referenceNumber": 197,
+ "name": "NRL License",
+ "licenseId": "NRL",
+ "seeAlso": [
+ "http://web.mit.edu/network/isakmp/nrllicense.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/SWL.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/SWL.json",
+ "referenceNumber": 198,
+ "name": "Scheme Widget Library (SWL) Software License Agreement",
+ "licenseId": "SWL",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/SWL"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Zed.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Zed.json",
+ "referenceNumber": 199,
+ "name": "Zed License",
+ "licenseId": "Zed",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Zed"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CERN-OHL-1.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CERN-OHL-1.1.json",
+ "referenceNumber": 200,
+ "name": "CERN Open Hardware Licence v1.1",
+ "licenseId": "CERN-OHL-1.1",
+ "seeAlso": [
+ "https://www.ohwr.org/project/licenses/wikis/cern-ohl-v1.1"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/RHeCos-1.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/RHeCos-1.1.json",
+ "referenceNumber": 201,
+ "name": "Red Hat eCos Public License v1.1",
+ "licenseId": "RHeCos-1.1",
+ "seeAlso": [
+ "http://ecos.sourceware.org/old-license.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/JasPer-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/JasPer-2.0.json",
+ "referenceNumber": 202,
+ "name": "JasPer License",
+ "licenseId": "JasPer-2.0",
+ "seeAlso": [
+ "http://www.ece.uvic.ca/~mdadams/jasper/LICENSE"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/SSPL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/SSPL-1.0.json",
+ "referenceNumber": 203,
+ "name": "Server Side Public License, v 1",
+ "licenseId": "SSPL-1.0",
+ "seeAlso": [
+ "https://www.mongodb.com/licensing/server-side-public-license"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/GPL-2.0+.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/GPL-2.0+.json",
+ "referenceNumber": 204,
+ "name": "GNU General Public License v2.0 or later",
+ "licenseId": "GPL-2.0+",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/gpl-2.0-standalone.html",
+ "https://opensource.org/licenses/GPL-2.0"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/OLDAP-1.4.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OLDAP-1.4.json",
+ "referenceNumber": 205,
+ "name": "Open LDAP Public License v1.4",
+ "licenseId": "OLDAP-1.4",
+ "seeAlso": [
+ "http://www.openldap.org/devel/gitweb.cgi?p\u003dopenldap.git;a\u003dblob;f\u003dLICENSE;hb\u003dc9f95c2f3f2ffb5e0ae55fe7388af75547660941"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/libpng-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/libpng-2.0.json",
+ "referenceNumber": 206,
+ "name": "PNG Reference Library version 2",
+ "licenseId": "libpng-2.0",
+ "seeAlso": [
+ "http://www.libpng.org/pub/png/src/libpng-LICENSE.txt"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CNRI-Python-GPL-Compatible.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CNRI-Python-GPL-Compatible.json",
+ "referenceNumber": 207,
+ "name": "CNRI Python Open Source GPL Compatible License Agreement",
+ "licenseId": "CNRI-Python-GPL-Compatible",
+ "seeAlso": [
+ "http://www.python.org/download/releases/1.6.1/download_win/"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Aladdin.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Aladdin.json",
+ "referenceNumber": 208,
+ "name": "Aladdin Free Public License",
+ "licenseId": "Aladdin",
+ "seeAlso": [
+ "http://pages.cs.wisc.edu/~ghost/doc/AFPL/6.01/Public.htm"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CECILL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CECILL-1.0.json",
+ "referenceNumber": 209,
+ "name": "CeCILL Free Software License Agreement v1.0",
+ "licenseId": "CECILL-1.0",
+ "seeAlso": [
+ "http://www.cecill.info/licences/Licence_CeCILL_V1-fr.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Ruby.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Ruby.json",
+ "referenceNumber": 210,
+ "name": "Ruby License",
+ "licenseId": "Ruby",
+ "seeAlso": [
+ "http://www.ruby-lang.org/en/LICENSE.txt"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/NPL-1.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/NPL-1.1.json",
+ "referenceNumber": 211,
+ "name": "Netscape Public License v1.1",
+ "licenseId": "NPL-1.1",
+ "seeAlso": [
+ "http://www.mozilla.org/MPL/NPL/1.1/"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/ImageMagick.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/ImageMagick.json",
+ "referenceNumber": 212,
+ "name": "ImageMagick License",
+ "licenseId": "ImageMagick",
+ "seeAlso": [
+ "http://www.imagemagick.org/script/license.php"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Cube.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Cube.json",
+ "referenceNumber": 213,
+ "name": "Cube License",
+ "licenseId": "Cube",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Cube"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/GFDL-1.1-only.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/GFDL-1.1-only.json",
+ "referenceNumber": 214,
+ "name": "GNU Free Documentation License v1.1 only",
+ "licenseId": "GFDL-1.1-only",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/fdl-1.1.txt"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-2.0.json",
+ "referenceNumber": 215,
+ "name": "Creative Commons Attribution 2.0 Generic",
+ "licenseId": "CC-BY-2.0",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by/2.0/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/AFL-1.2.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/AFL-1.2.json",
+ "referenceNumber": 216,
+ "name": "Academic Free License v1.2",
+ "licenseId": "AFL-1.2",
+ "seeAlso": [
+ "http://opensource.linux-mirror.org/licenses/afl-1.2.txt",
+ "http://wayback.archive.org/web/20021204204652/http://www.opensource.org/licenses/academic.php"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-SA-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-SA-2.0.json",
+ "referenceNumber": 217,
+ "name": "Creative Commons Attribution Share Alike 2.0 Generic",
+ "licenseId": "CC-BY-SA-2.0",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-sa/2.0/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CECILL-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CECILL-2.0.json",
+ "referenceNumber": 218,
+ "name": "CeCILL Free Software License Agreement v2.0",
+ "licenseId": "CECILL-2.0",
+ "seeAlso": [
+ "http://www.cecill.info/licences/Licence_CeCILL_V2-en.html"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/MIT-advertising.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/MIT-advertising.json",
+ "referenceNumber": 219,
+ "name": "Enlightenment License (e16)",
+ "licenseId": "MIT-advertising",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/MIT_With_Advertising"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-NC-SA-2.5.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-NC-SA-2.5.json",
+ "referenceNumber": 220,
+ "name": "Creative Commons Attribution Non Commercial Share Alike 2.5 Generic",
+ "licenseId": "CC-BY-NC-SA-2.5",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-nc-sa/2.5/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Artistic-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Artistic-1.0.json",
+ "referenceNumber": 221,
+ "name": "Artistic License 1.0",
+ "licenseId": "Artistic-1.0",
+ "seeAlso": [
+ "https://opensource.org/licenses/Artistic-1.0"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/OSL-3.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OSL-3.0.json",
+ "referenceNumber": 222,
+ "name": "Open Software License 3.0",
+ "licenseId": "OSL-3.0",
+ "seeAlso": [
+ "https://web.archive.org/web/20120101081418/http://rosenlaw.com:80/OSL3.0.htm",
+ "https://opensource.org/licenses/OSL-3.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/X11.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/X11.json",
+ "referenceNumber": 223,
+ "name": "X11 License",
+ "licenseId": "X11",
+ "seeAlso": [
+ "http://www.xfree86.org/3.3.6/COPYRIGHT2.html#3"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Bahyph.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Bahyph.json",
+ "referenceNumber": 224,
+ "name": "Bahyph License",
+ "licenseId": "Bahyph",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Bahyph"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/OLDAP-2.0.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OLDAP-2.0.1.json",
+ "referenceNumber": 225,
+ "name": "Open LDAP Public License v2.0.1",
+ "licenseId": "OLDAP-2.0.1",
+ "seeAlso": [
+ "http://www.openldap.org/devel/gitweb.cgi?p\u003dopenldap.git;a\u003dblob;f\u003dLICENSE;hb\u003db6d68acd14e51ca3aab4428bf26522aa74873f0e"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/EUDatagrid.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/EUDatagrid.json",
+ "referenceNumber": 226,
+ "name": "EU DataGrid Software License",
+ "licenseId": "EUDatagrid",
+ "seeAlso": [
+ "http://eu-datagrid.web.cern.ch/eu-datagrid/license.html",
+ "https://opensource.org/licenses/EUDatagrid"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/MTLL.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/MTLL.json",
+ "referenceNumber": 227,
+ "name": "Matrix Template Library License",
+ "licenseId": "MTLL",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Matrix_Template_Library_License"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/GFDL-1.2-invariants-only.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/GFDL-1.2-invariants-only.json",
+ "referenceNumber": 228,
+ "name": "GNU Free Documentation License v1.2 only - invariants",
+ "licenseId": "GFDL-1.2-invariants-only",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/fdl-1.2.txt"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/GFDL-1.3-no-invariants-or-later.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/GFDL-1.3-no-invariants-or-later.json",
+ "referenceNumber": 229,
+ "name": "GNU Free Documentation License v1.3 or later - no invariants",
+ "licenseId": "GFDL-1.3-no-invariants-or-later",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/fdl-1.3.txt"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/curl.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/curl.json",
+ "referenceNumber": 230,
+ "name": "curl License",
+ "licenseId": "curl",
+ "seeAlso": [
+ "https://github.com/bagder/curl/blob/master/COPYING"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/LAL-1.3.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/LAL-1.3.json",
+ "referenceNumber": 231,
+ "name": "Licence Art Libre 1.3",
+ "licenseId": "LAL-1.3",
+ "seeAlso": [
+ "https://artlibre.org/"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/DSDP.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/DSDP.json",
+ "referenceNumber": 232,
+ "name": "DSDP License",
+ "licenseId": "DSDP",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/DSDP"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CERN-OHL-1.2.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CERN-OHL-1.2.json",
+ "referenceNumber": 233,
+ "name": "CERN Open Hardware Licence v1.2",
+ "licenseId": "CERN-OHL-1.2",
+ "seeAlso": [
+ "https://www.ohwr.org/project/licenses/wikis/cern-ohl-v1.2"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/TOSL.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/TOSL.json",
+ "referenceNumber": 234,
+ "name": "Trusster Open Source License",
+ "licenseId": "TOSL",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/TOSL"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/GPL-3.0-with-autoconf-exception.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/GPL-3.0-with-autoconf-exception.json",
+ "referenceNumber": 235,
+ "name": "GNU General Public License v3.0 w/Autoconf exception",
+ "licenseId": "GPL-3.0-with-autoconf-exception",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/autoconf-exception-3.0.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-3.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-3.0.json",
+ "referenceNumber": 236,
+ "name": "Creative Commons Attribution 3.0 Unported",
+ "licenseId": "CC-BY-3.0",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by/3.0/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Qhull.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Qhull.json",
+ "referenceNumber": 237,
+ "name": "Qhull License",
+ "licenseId": "Qhull",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Qhull"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/GFDL-1.3-no-invariants-only.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/GFDL-1.3-no-invariants-only.json",
+ "referenceNumber": 238,
+ "name": "GNU Free Documentation License v1.3 only - no invariants",
+ "licenseId": "GFDL-1.3-no-invariants-only",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/fdl-1.3.txt"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/TORQUE-1.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/TORQUE-1.1.json",
+ "referenceNumber": 239,
+ "name": "TORQUE v2.5+ Software License v1.1",
+ "licenseId": "TORQUE-1.1",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/TORQUEv1.1"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/MS-PL.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/MS-PL.json",
+ "referenceNumber": 240,
+ "name": "Microsoft Public License",
+ "licenseId": "MS-PL",
+ "seeAlso": [
+ "http://www.microsoft.com/opensource/licenses.mspx",
+ "https://opensource.org/licenses/MS-PL"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Apache-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Apache-1.0.json",
+ "referenceNumber": 241,
+ "name": "Apache License 1.0",
+ "licenseId": "Apache-1.0",
+ "seeAlso": [
+ "http://www.apache.org/licenses/LICENSE-1.0"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/copyleft-next-0.3.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/copyleft-next-0.3.1.json",
+ "referenceNumber": 242,
+ "name": "copyleft-next 0.3.1",
+ "licenseId": "copyleft-next-0.3.1",
+ "seeAlso": [
+ "https://github.com/copyleft-next/copyleft-next/blob/master/Releases/copyleft-next-0.3.1"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/GFDL-1.2-or-later.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/GFDL-1.2-or-later.json",
+ "referenceNumber": 243,
+ "name": "GNU Free Documentation License v1.2 or later",
+ "licenseId": "GFDL-1.2-or-later",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/fdl-1.2.txt"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/GPL-3.0+.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/GPL-3.0+.json",
+ "referenceNumber": 244,
+ "name": "GNU General Public License v3.0 or later",
+ "licenseId": "GPL-3.0+",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/gpl-3.0-standalone.html",
+ "https://opensource.org/licenses/GPL-3.0"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/MulanPSL-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/MulanPSL-2.0.json",
+ "referenceNumber": 245,
+ "name": "Mulan Permissive Software License, Version 2",
+ "licenseId": "MulanPSL-2.0",
+ "seeAlso": [
+ "https://license.coscl.org.cn/MulanPSL2/"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/FSFAP.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/FSFAP.json",
+ "referenceNumber": 246,
+ "name": "FSF All Permissive License",
+ "licenseId": "FSFAP",
+ "seeAlso": [
+ "https://www.gnu.org/prep/maintain/html_node/License-Notices-for-Other-Files.html"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Xerox.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Xerox.json",
+ "referenceNumber": 247,
+ "name": "Xerox License",
+ "licenseId": "Xerox",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Xerox"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CDDL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CDDL-1.0.json",
+ "referenceNumber": 248,
+ "name": "Common Development and Distribution License 1.0",
+ "licenseId": "CDDL-1.0",
+ "seeAlso": [
+ "https://opensource.org/licenses/cddl1"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/GFDL-1.3-invariants-only.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/GFDL-1.3-invariants-only.json",
+ "referenceNumber": 249,
+ "name": "GNU Free Documentation License v1.3 only - invariants",
+ "licenseId": "GFDL-1.3-invariants-only",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/fdl-1.3.txt"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/etalab-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/etalab-2.0.json",
+ "referenceNumber": 250,
+ "name": "Etalab Open License 2.0",
+ "licenseId": "etalab-2.0",
+ "seeAlso": [
+ "https://github.com/DISIC/politique-de-contribution-open-source/blob/master/LICENSE.pdf",
+ "https://raw.githubusercontent.com/DISIC/politique-de-contribution-open-source/master/LICENSE"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/XFree86-1.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/XFree86-1.1.json",
+ "referenceNumber": 251,
+ "name": "XFree86 License 1.1",
+ "licenseId": "XFree86-1.1",
+ "seeAlso": [
+ "http://www.xfree86.org/current/LICENSE4.html"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/SNIA.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/SNIA.json",
+ "referenceNumber": 252,
+ "name": "SNIA Public License 1.1",
+ "licenseId": "SNIA",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/SNIA_Public_License"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/LPPL-1.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/LPPL-1.1.json",
+ "referenceNumber": 253,
+ "name": "LaTeX Project Public License v1.1",
+ "licenseId": "LPPL-1.1",
+ "seeAlso": [
+ "http://www.latex-project.org/lppl/lppl-1-1.txt"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CATOSL-1.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CATOSL-1.1.json",
+ "referenceNumber": 254,
+ "name": "Computer Associates Trusted Open Source License 1.1",
+ "licenseId": "CATOSL-1.1",
+ "seeAlso": [
+ "https://opensource.org/licenses/CATOSL-1.1"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/TU-Berlin-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/TU-Berlin-2.0.json",
+ "referenceNumber": 255,
+ "name": "Technische Universitaet Berlin License 2.0",
+ "licenseId": "TU-Berlin-2.0",
+ "seeAlso": [
+ "https://github.com/CorsixTH/deps/blob/fd339a9f526d1d9c9f01ccf39e438a015da50035/licences/libgsm.txt"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/GFDL-1.3.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/GFDL-1.3.json",
+ "referenceNumber": 256,
+ "name": "GNU Free Documentation License v1.3",
+ "licenseId": "GFDL-1.3",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/fdl-1.3.txt"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/GFDL-1.3-or-later.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/GFDL-1.3-or-later.json",
+ "referenceNumber": 257,
+ "name": "GNU Free Documentation License v1.3 or later",
+ "licenseId": "GFDL-1.3-or-later",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/fdl-1.3.txt"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/LAL-1.2.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/LAL-1.2.json",
+ "referenceNumber": 258,
+ "name": "Licence Art Libre 1.2",
+ "licenseId": "LAL-1.2",
+ "seeAlso": [
+ "http://artlibre.org/licence/lal/licence-art-libre-12/"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/ICU.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/ICU.json",
+ "referenceNumber": 259,
+ "name": "ICU License",
+ "licenseId": "ICU",
+ "seeAlso": [
+ "http://source.icu-project.org/repos/icu/icu/trunk/license.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/FTL.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/FTL.json",
+ "referenceNumber": 260,
+ "name": "Freetype Project License",
+ "licenseId": "FTL",
+ "seeAlso": [
+ "http://freetype.fis.uniroma2.it/FTL.TXT",
+ "http://git.savannah.gnu.org/cgit/freetype/freetype2.git/tree/docs/FTL.TXT",
+ "http://gitlab.freedesktop.org/freetype/freetype/-/raw/master/docs/FTL.TXT"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/MirOS.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/MirOS.json",
+ "referenceNumber": 261,
+ "name": "The MirOS Licence",
+ "licenseId": "MirOS",
+ "seeAlso": [
+ "https://opensource.org/licenses/MirOS"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/BSD-2-Clause-NetBSD.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/BSD-2-Clause-NetBSD.json",
+ "referenceNumber": 262,
+ "name": "BSD 2-Clause NetBSD License",
+ "licenseId": "BSD-2-Clause-NetBSD",
+ "seeAlso": [
+ "http://www.netbsd.org/about/redistribution.html#default"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-NC-ND-3.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-NC-ND-3.0.json",
+ "referenceNumber": 263,
+ "name": "Creative Commons Attribution Non Commercial No Derivatives 3.0 Unported",
+ "licenseId": "CC-BY-NC-ND-3.0",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-nc-nd/3.0/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/OSET-PL-2.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OSET-PL-2.1.json",
+ "referenceNumber": 264,
+ "name": "OSET Public License version 2.1",
+ "licenseId": "OSET-PL-2.1",
+ "seeAlso": [
+ "http://www.osetfoundation.org/public-license",
+ "https://opensource.org/licenses/OPL-2.1"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-NC-ND-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-NC-ND-2.0.json",
+ "referenceNumber": 265,
+ "name": "Creative Commons Attribution Non Commercial No Derivatives 2.0 Generic",
+ "licenseId": "CC-BY-NC-ND-2.0",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-nc-nd/2.0/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/SISSL-1.2.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/SISSL-1.2.json",
+ "referenceNumber": 266,
+ "name": "Sun Industry Standards Source License v1.2",
+ "licenseId": "SISSL-1.2",
+ "seeAlso": [
+ "http://gridscheduler.sourceforge.net/Gridengine_SISSL_license.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Wsuipa.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Wsuipa.json",
+ "referenceNumber": 267,
+ "name": "Wsuipa License",
+ "licenseId": "Wsuipa",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Wsuipa"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Zimbra-1.4.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Zimbra-1.4.json",
+ "referenceNumber": 268,
+ "name": "Zimbra Public License v1.4",
+ "licenseId": "Zimbra-1.4",
+ "seeAlso": [
+ "http://www.zimbra.com/legal/zimbra-public-license-1-4"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Linux-OpenIB.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Linux-OpenIB.json",
+ "referenceNumber": 269,
+ "name": "Linux Kernel Variant of OpenIB.org license",
+ "licenseId": "Linux-OpenIB",
+ "seeAlso": [
+ "https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/infiniband/core/sa.h"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/LGPL-3.0.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/LGPL-3.0.json",
+ "referenceNumber": 270,
+ "name": "GNU Lesser General Public License v3.0 only",
+ "licenseId": "LGPL-3.0",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/lgpl-3.0-standalone.html",
+ "https://opensource.org/licenses/LGPL-3.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/OLDAP-2.5.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OLDAP-2.5.json",
+ "referenceNumber": 271,
+ "name": "Open LDAP Public License v2.5",
+ "licenseId": "OLDAP-2.5",
+ "seeAlso": [
+ "http://www.openldap.org/devel/gitweb.cgi?p\u003dopenldap.git;a\u003dblob;f\u003dLICENSE;hb\u003d6852b9d90022e8593c98205413380536b1b5a7cf"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/AMPAS.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/AMPAS.json",
+ "referenceNumber": 272,
+ "name": "Academy of Motion Picture Arts and Sciences BSD",
+ "licenseId": "AMPAS",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/BSD#AMPASBSD"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/GPL-1.0-or-later.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/GPL-1.0-or-later.json",
+ "referenceNumber": 273,
+ "name": "GNU General Public License v1.0 or later",
+ "licenseId": "GPL-1.0-or-later",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/gpl-1.0-standalone.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/BUSL-1.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/BUSL-1.1.json",
+ "referenceNumber": 274,
+ "name": "Business Source License 1.1",
+ "licenseId": "BUSL-1.1",
+ "seeAlso": [
+ "https://mariadb.com/bsl11/"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Adobe-Glyph.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Adobe-Glyph.json",
+ "referenceNumber": 275,
+ "name": "Adobe Glyph List License",
+ "licenseId": "Adobe-Glyph",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/MIT#AdobeGlyph"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/0BSD.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/0BSD.json",
+ "referenceNumber": 276,
+ "name": "BSD Zero Clause License",
+ "licenseId": "0BSD",
+ "seeAlso": [
+ "http://landley.net/toybox/license.html"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/W3C-19980720.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/W3C-19980720.json",
+ "referenceNumber": 277,
+ "name": "W3C Software Notice and License (1998-07-20)",
+ "licenseId": "W3C-19980720",
+ "seeAlso": [
+ "http://www.w3.org/Consortium/Legal/copyright-software-19980720.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/FSFUL.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/FSFUL.json",
+ "referenceNumber": 278,
+ "name": "FSF Unlimited License",
+ "licenseId": "FSFUL",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/FSF_Unlimited_License"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-NC-SA-3.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-NC-SA-3.0.json",
+ "referenceNumber": 279,
+ "name": "Creative Commons Attribution Non Commercial Share Alike 3.0 Unported",
+ "licenseId": "CC-BY-NC-SA-3.0",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-nc-sa/3.0/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/DOC.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/DOC.json",
+ "referenceNumber": 280,
+ "name": "DOC License",
+ "licenseId": "DOC",
+ "seeAlso": [
+ "http://www.cs.wustl.edu/~schmidt/ACE-copying.html",
+ "https://www.dre.vanderbilt.edu/~schmidt/ACE-copying.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/TMate.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/TMate.json",
+ "referenceNumber": 281,
+ "name": "TMate Open Source License",
+ "licenseId": "TMate",
+ "seeAlso": [
+ "http://svnkit.com/license.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/MIT-open-group.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/MIT-open-group.json",
+ "referenceNumber": 282,
+ "name": "MIT Open Group variant",
+ "licenseId": "MIT-open-group",
+ "seeAlso": [
+ "https://gitlab.freedesktop.org/xorg/app/iceauth/-/blob/master/COPYING",
+ "https://gitlab.freedesktop.org/xorg/app/xvinfo/-/blob/master/COPYING",
+ "https://gitlab.freedesktop.org/xorg/app/xsetroot/-/blob/master/COPYING",
+ "https://gitlab.freedesktop.org/xorg/app/xauth/-/blob/master/COPYING"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/AMDPLPA.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/AMDPLPA.json",
+ "referenceNumber": 283,
+ "name": "AMD\u0027s plpa_map.c License",
+ "licenseId": "AMDPLPA",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/AMD_plpa_map_License"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Condor-1.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Condor-1.1.json",
+ "referenceNumber": 284,
+ "name": "Condor Public License v1.1",
+ "licenseId": "Condor-1.1",
+ "seeAlso": [
+ "http://research.cs.wisc.edu/condor/license.html#condor",
+ "http://web.archive.org/web/20111123062036/http://research.cs.wisc.edu/condor/license.html#condor"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/PolyForm-Noncommercial-1.0.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/PolyForm-Noncommercial-1.0.0.json",
+ "referenceNumber": 285,
+ "name": "PolyForm Noncommercial License 1.0.0",
+ "licenseId": "PolyForm-Noncommercial-1.0.0",
+ "seeAlso": [
+ "https://polyformproject.org/licenses/noncommercial/1.0.0"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/BSD-3-Clause-No-Military-License.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/BSD-3-Clause-No-Military-License.json",
+ "referenceNumber": 286,
+ "name": "BSD 3-Clause No Military License",
+ "licenseId": "BSD-3-Clause-No-Military-License",
+ "seeAlso": [
+ "https://gitlab.syncad.com/hive/dhive/-/blob/master/LICENSE",
+ "https://github.com/greymass/swift-eosio/blob/master/LICENSE"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-4.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-4.0.json",
+ "referenceNumber": 287,
+ "name": "Creative Commons Attribution 4.0 International",
+ "licenseId": "CC-BY-4.0",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by/4.0/legalcode"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/OGL-Canada-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OGL-Canada-2.0.json",
+ "referenceNumber": 288,
+ "name": "Open Government Licence - Canada",
+ "licenseId": "OGL-Canada-2.0",
+ "seeAlso": [
+ "https://open.canada.ca/en/open-government-licence-canada"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-NC-SA-3.0-IGO.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-NC-SA-3.0-IGO.json",
+ "referenceNumber": 289,
+ "name": "Creative Commons Attribution Non Commercial Share Alike 3.0 IGO",
+ "licenseId": "CC-BY-NC-SA-3.0-IGO",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-nc-sa/3.0/igo/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/EFL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/EFL-1.0.json",
+ "referenceNumber": 290,
+ "name": "Eiffel Forum License v1.0",
+ "licenseId": "EFL-1.0",
+ "seeAlso": [
+ "http://www.eiffel-nice.org/license/forum.txt",
+ "https://opensource.org/licenses/EFL-1.0"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Newsletr.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Newsletr.json",
+ "referenceNumber": 291,
+ "name": "Newsletr License",
+ "licenseId": "Newsletr",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Newsletr"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/copyleft-next-0.3.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/copyleft-next-0.3.0.json",
+ "referenceNumber": 292,
+ "name": "copyleft-next 0.3.0",
+ "licenseId": "copyleft-next-0.3.0",
+ "seeAlso": [
+ "https://github.com/copyleft-next/copyleft-next/blob/master/Releases/copyleft-next-0.3.0"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/GPL-3.0-or-later.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/GPL-3.0-or-later.json",
+ "referenceNumber": 293,
+ "name": "GNU General Public License v3.0 or later",
+ "licenseId": "GPL-3.0-or-later",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/gpl-3.0-standalone.html",
+ "https://opensource.org/licenses/GPL-3.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/CDLA-Permissive-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CDLA-Permissive-2.0.json",
+ "referenceNumber": 294,
+ "name": "Community Data License Agreement Permissive 2.0",
+ "licenseId": "CDLA-Permissive-2.0",
+ "seeAlso": [
+ "https://cdla.dev/permissive-2-0"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-ND-3.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-ND-3.0.json",
+ "referenceNumber": 295,
+ "name": "Creative Commons Attribution No Derivatives 3.0 Unported",
+ "licenseId": "CC-BY-ND-3.0",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-nd/3.0/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/C-UDA-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/C-UDA-1.0.json",
+ "referenceNumber": 296,
+ "name": "Computational Use of Data Agreement v1.0",
+ "licenseId": "C-UDA-1.0",
+ "seeAlso": [
+ "https://github.com/microsoft/Computational-Use-of-Data-Agreement/blob/master/C-UDA-1.0.md",
+ "https://cdla.dev/computational-use-of-data-agreement-v1-0/"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Barr.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Barr.json",
+ "referenceNumber": 297,
+ "name": "Barr License",
+ "licenseId": "Barr",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Barr"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Vim.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Vim.json",
+ "referenceNumber": 298,
+ "name": "Vim License",
+ "licenseId": "Vim",
+ "seeAlso": [
+ "http://vimdoc.sourceforge.net/htmldoc/uganda.html"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/GPL-2.0-with-classpath-exception.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/GPL-2.0-with-classpath-exception.json",
+ "referenceNumber": 299,
+ "name": "GNU General Public License v2.0 w/Classpath exception",
+ "licenseId": "GPL-2.0-with-classpath-exception",
+ "seeAlso": [
+ "https://www.gnu.org/software/classpath/license.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/BitTorrent-1.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/BitTorrent-1.1.json",
+ "referenceNumber": 300,
+ "name": "BitTorrent Open Source License v1.1",
+ "licenseId": "BitTorrent-1.1",
+ "seeAlso": [
+ "http://directory.fsf.org/wiki/License:BitTorrentOSL1.1"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/CDL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CDL-1.0.json",
+ "referenceNumber": 301,
+ "name": "Common Documentation License 1.0",
+ "licenseId": "CDL-1.0",
+ "seeAlso": [
+ "http://www.opensource.apple.com/cdl/",
+ "https://fedoraproject.org/wiki/Licensing/Common_Documentation_License",
+ "https://www.gnu.org/licenses/license-list.html#ACDL"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-SA-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-SA-1.0.json",
+ "referenceNumber": 302,
+ "name": "Creative Commons Attribution Share Alike 1.0 Generic",
+ "licenseId": "CC-BY-SA-1.0",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-sa/1.0/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/ADSL.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/ADSL.json",
+ "referenceNumber": 303,
+ "name": "Amazon Digital Services License",
+ "licenseId": "ADSL",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/AmazonDigitalServicesLicense"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/PostgreSQL.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/PostgreSQL.json",
+ "referenceNumber": 304,
+ "name": "PostgreSQL License",
+ "licenseId": "PostgreSQL",
+ "seeAlso": [
+ "http://www.postgresql.org/about/licence",
+ "https://opensource.org/licenses/PostgreSQL"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/OFL-1.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OFL-1.1.json",
+ "referenceNumber": 305,
+ "name": "SIL Open Font License 1.1",
+ "licenseId": "OFL-1.1",
+ "seeAlso": [
+ "http://scripts.sil.org/cms/scripts/page.php?item_id\u003dOFL_web",
+ "https://opensource.org/licenses/OFL-1.1"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/NPL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/NPL-1.0.json",
+ "referenceNumber": 306,
+ "name": "Netscape Public License v1.0",
+ "licenseId": "NPL-1.0",
+ "seeAlso": [
+ "http://www.mozilla.org/MPL/NPL/1.0/"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/xinetd.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/xinetd.json",
+ "referenceNumber": 307,
+ "name": "xinetd License",
+ "licenseId": "xinetd",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Xinetd_License"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/LGPL-2.0-only.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/LGPL-2.0-only.json",
+ "referenceNumber": 308,
+ "name": "GNU Library General Public License v2 only",
+ "licenseId": "LGPL-2.0-only",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/lgpl-2.0-standalone.html"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/zlib-acknowledgement.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/zlib-acknowledgement.json",
+ "referenceNumber": 309,
+ "name": "zlib/libpng License with Acknowledgement",
+ "licenseId": "zlib-acknowledgement",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/ZlibWithAcknowledgement"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/OLDAP-2.2.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OLDAP-2.2.1.json",
+ "referenceNumber": 310,
+ "name": "Open LDAP Public License v2.2.1",
+ "licenseId": "OLDAP-2.2.1",
+ "seeAlso": [
+ "http://www.openldap.org/devel/gitweb.cgi?p\u003dopenldap.git;a\u003dblob;f\u003dLICENSE;hb\u003d4bc786f34b50aa301be6f5600f58a980070f481e"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/APSL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/APSL-1.0.json",
+ "referenceNumber": 311,
+ "name": "Apple Public Source License 1.0",
+ "licenseId": "APSL-1.0",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Apple_Public_Source_License_1.0"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/BSD-3-Clause-LBNL.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/BSD-3-Clause-LBNL.json",
+ "referenceNumber": 312,
+ "name": "Lawrence Berkeley National Labs BSD variant license",
+ "licenseId": "BSD-3-Clause-LBNL",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/LBNLBSD"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/GLWTPL.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/GLWTPL.json",
+ "referenceNumber": 313,
+ "name": "Good Luck With That Public License",
+ "licenseId": "GLWTPL",
+ "seeAlso": [
+ "https://github.com/me-shaon/GLWTPL/commit/da5f6bc734095efbacb442c0b31e33a65b9d6e85"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/LGPL-3.0-only.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/LGPL-3.0-only.json",
+ "referenceNumber": 314,
+ "name": "GNU Lesser General Public License v3.0 only",
+ "licenseId": "LGPL-3.0-only",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/lgpl-3.0-standalone.html",
+ "https://opensource.org/licenses/LGPL-3.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/OGC-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OGC-1.0.json",
+ "referenceNumber": 315,
+ "name": "OGC Software License, Version 1.0",
+ "licenseId": "OGC-1.0",
+ "seeAlso": [
+ "https://www.ogc.org/ogc/software/1.0"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Dotseqn.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Dotseqn.json",
+ "referenceNumber": 316,
+ "name": "Dotseqn License",
+ "licenseId": "Dotseqn",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Dotseqn"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/MakeIndex.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/MakeIndex.json",
+ "referenceNumber": 317,
+ "name": "MakeIndex License",
+ "licenseId": "MakeIndex",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/MakeIndex"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/GPL-3.0-only.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/GPL-3.0-only.json",
+ "referenceNumber": 318,
+ "name": "GNU General Public License v3.0 only",
+ "licenseId": "GPL-3.0-only",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/gpl-3.0-standalone.html",
+ "https://opensource.org/licenses/GPL-3.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/BSD-3-Clause-No-Nuclear-License-2014.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/BSD-3-Clause-No-Nuclear-License-2014.json",
+ "referenceNumber": 319,
+ "name": "BSD 3-Clause No Nuclear License 2014",
+ "licenseId": "BSD-3-Clause-No-Nuclear-License-2014",
+ "seeAlso": [
+ "https://java.net/projects/javaeetutorial/pages/BerkeleyLicense"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/GPL-1.0-only.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/GPL-1.0-only.json",
+ "referenceNumber": 320,
+ "name": "GNU General Public License v1.0 only",
+ "licenseId": "GPL-1.0-only",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/gpl-1.0-standalone.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/IJG.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/IJG.json",
+ "referenceNumber": 321,
+ "name": "Independent JPEG Group License",
+ "licenseId": "IJG",
+ "seeAlso": [
+ "http://dev.w3.org/cvsweb/Amaya/libjpeg/Attic/README?rev\u003d1.2"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/AGPL-1.0-or-later.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/AGPL-1.0-or-later.json",
+ "referenceNumber": 322,
+ "name": "Affero General Public License v1.0 or later",
+ "licenseId": "AGPL-1.0-or-later",
+ "seeAlso": [
+ "http://www.affero.org/oagpl.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/OFL-1.1-no-RFN.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OFL-1.1-no-RFN.json",
+ "referenceNumber": 323,
+ "name": "SIL Open Font License 1.1 with no Reserved Font Name",
+ "licenseId": "OFL-1.1-no-RFN",
+ "seeAlso": [
+ "http://scripts.sil.org/cms/scripts/page.php?item_id\u003dOFL_web",
+ "https://opensource.org/licenses/OFL-1.1"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/BSL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/BSL-1.0.json",
+ "referenceNumber": 324,
+ "name": "Boost Software License 1.0",
+ "licenseId": "BSL-1.0",
+ "seeAlso": [
+ "http://www.boost.org/LICENSE_1_0.txt",
+ "https://opensource.org/licenses/BSL-1.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Libpng.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Libpng.json",
+ "referenceNumber": 325,
+ "name": "libpng License",
+ "licenseId": "Libpng",
+ "seeAlso": [
+ "http://www.libpng.org/pub/png/src/libpng-LICENSE.txt"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-NC-3.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-NC-3.0.json",
+ "referenceNumber": 326,
+ "name": "Creative Commons Attribution Non Commercial 3.0 Unported",
+ "licenseId": "CC-BY-NC-3.0",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-nc/3.0/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-NC-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-NC-2.0.json",
+ "referenceNumber": 327,
+ "name": "Creative Commons Attribution Non Commercial 2.0 Generic",
+ "licenseId": "CC-BY-NC-2.0",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-nc/2.0/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Unlicense.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Unlicense.json",
+ "referenceNumber": 328,
+ "name": "The Unlicense",
+ "licenseId": "Unlicense",
+ "seeAlso": [
+ "https://unlicense.org/"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/LPL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/LPL-1.0.json",
+ "referenceNumber": 329,
+ "name": "Lucent Public License Version 1.0",
+ "licenseId": "LPL-1.0",
+ "seeAlso": [
+ "https://opensource.org/licenses/LPL-1.0"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/bzip2-1.0.5.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/bzip2-1.0.5.json",
+ "referenceNumber": 330,
+ "name": "bzip2 and libbzip2 License v1.0.5",
+ "licenseId": "bzip2-1.0.5",
+ "seeAlso": [
+ "https://sourceware.org/bzip2/1.0.5/bzip2-manual-1.0.5.html",
+ "http://bzip.org/1.0.5/bzip2-manual-1.0.5.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Entessa.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Entessa.json",
+ "referenceNumber": 331,
+ "name": "Entessa Public License v1.0",
+ "licenseId": "Entessa",
+ "seeAlso": [
+ "https://opensource.org/licenses/Entessa"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/BSD-2-Clause-Patent.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/BSD-2-Clause-Patent.json",
+ "referenceNumber": 332,
+ "name": "BSD-2-Clause Plus Patent License",
+ "licenseId": "BSD-2-Clause-Patent",
+ "seeAlso": [
+ "https://opensource.org/licenses/BSDplusPatent"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/ECL-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/ECL-2.0.json",
+ "referenceNumber": 333,
+ "name": "Educational Community License v2.0",
+ "licenseId": "ECL-2.0",
+ "seeAlso": [
+ "https://opensource.org/licenses/ECL-2.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Crossword.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Crossword.json",
+ "referenceNumber": 334,
+ "name": "Crossword License",
+ "licenseId": "Crossword",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Crossword"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-NC-ND-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-NC-ND-1.0.json",
+ "referenceNumber": 335,
+ "name": "Creative Commons Attribution Non Commercial No Derivatives 1.0 Generic",
+ "licenseId": "CC-BY-NC-ND-1.0",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-nd-nc/1.0/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/OCLC-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OCLC-2.0.json",
+ "referenceNumber": 336,
+ "name": "OCLC Research Public License 2.0",
+ "licenseId": "OCLC-2.0",
+ "seeAlso": [
+ "http://www.oclc.org/research/activities/software/license/v2final.htm",
+ "https://opensource.org/licenses/OCLC-2.0"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/CECILL-1.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CECILL-1.1.json",
+ "referenceNumber": 337,
+ "name": "CeCILL Free Software License Agreement v1.1",
+ "licenseId": "CECILL-1.1",
+ "seeAlso": [
+ "http://www.cecill.info/licences/Licence_CeCILL_V1.1-US.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CECILL-2.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CECILL-2.1.json",
+ "referenceNumber": 338,
+ "name": "CeCILL Free Software License Agreement v2.1",
+ "licenseId": "CECILL-2.1",
+ "seeAlso": [
+ "http://www.cecill.info/licences/Licence_CeCILL_V2.1-en.html"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/OGDL-Taiwan-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OGDL-Taiwan-1.0.json",
+ "referenceNumber": 339,
+ "name": "Taiwan Open Government Data License, version 1.0",
+ "licenseId": "OGDL-Taiwan-1.0",
+ "seeAlso": [
+ "https://data.gov.tw/license"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Abstyles.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Abstyles.json",
+ "referenceNumber": 340,
+ "name": "Abstyles License",
+ "licenseId": "Abstyles",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Abstyles"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/libselinux-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/libselinux-1.0.json",
+ "referenceNumber": 341,
+ "name": "libselinux public domain notice",
+ "licenseId": "libselinux-1.0",
+ "seeAlso": [
+ "https://github.com/SELinuxProject/selinux/blob/master/libselinux/LICENSE"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/ANTLR-PD.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/ANTLR-PD.json",
+ "referenceNumber": 342,
+ "name": "ANTLR Software Rights Notice",
+ "licenseId": "ANTLR-PD",
+ "seeAlso": [
+ "http://www.antlr2.org/license.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/GPL-2.0-or-later.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/GPL-2.0-or-later.json",
+ "referenceNumber": 343,
+ "name": "GNU General Public License v2.0 or later",
+ "licenseId": "GPL-2.0-or-later",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/gpl-2.0-standalone.html",
+ "https://opensource.org/licenses/GPL-2.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/IPL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/IPL-1.0.json",
+ "referenceNumber": 344,
+ "name": "IBM Public License v1.0",
+ "licenseId": "IPL-1.0",
+ "seeAlso": [
+ "https://opensource.org/licenses/IPL-1.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/MIT-enna.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/MIT-enna.json",
+ "referenceNumber": 345,
+ "name": "enna License",
+ "licenseId": "MIT-enna",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/MIT#enna"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CPOL-1.02.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CPOL-1.02.json",
+ "referenceNumber": 346,
+ "name": "Code Project Open License 1.02",
+ "licenseId": "CPOL-1.02",
+ "seeAlso": [
+ "http://www.codeproject.com/info/cpol10.aspx"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-SA-3.0-AT.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-SA-3.0-AT.json",
+ "referenceNumber": 347,
+ "name": "Creative Commons Attribution Share Alike 3.0 Austria",
+ "licenseId": "CC-BY-SA-3.0-AT",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-sa/3.0/at/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/GPL-3.0-with-GCC-exception.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/GPL-3.0-with-GCC-exception.json",
+ "referenceNumber": 348,
+ "name": "GNU General Public License v3.0 w/GCC Runtime Library exception",
+ "licenseId": "GPL-3.0-with-GCC-exception",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/gcc-exception-3.1.html"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/BSD-1-Clause.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/BSD-1-Clause.json",
+ "referenceNumber": 349,
+ "name": "BSD 1-Clause License",
+ "licenseId": "BSD-1-Clause",
+ "seeAlso": [
+ "https://svnweb.freebsd.org/base/head/include/ifaddrs.h?revision\u003d326823"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/NTP-0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/NTP-0.json",
+ "referenceNumber": 350,
+ "name": "NTP No Attribution",
+ "licenseId": "NTP-0",
+ "seeAlso": [
+ "https://github.com/tytso/e2fsprogs/blob/master/lib/et/et_name.c"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/SugarCRM-1.1.3.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/SugarCRM-1.1.3.json",
+ "referenceNumber": 351,
+ "name": "SugarCRM Public License v1.1.3",
+ "licenseId": "SugarCRM-1.1.3",
+ "seeAlso": [
+ "http://www.sugarcrm.com/crm/SPL"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/MIT.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/MIT.json",
+ "referenceNumber": 352,
+ "name": "MIT License",
+ "licenseId": "MIT",
+ "seeAlso": [
+ "https://opensource.org/licenses/MIT"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/OFL-1.1-RFN.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OFL-1.1-RFN.json",
+ "referenceNumber": 353,
+ "name": "SIL Open Font License 1.1 with Reserved Font Name",
+ "licenseId": "OFL-1.1-RFN",
+ "seeAlso": [
+ "http://scripts.sil.org/cms/scripts/page.php?item_id\u003dOFL_web",
+ "https://opensource.org/licenses/OFL-1.1"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Watcom-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Watcom-1.0.json",
+ "referenceNumber": 354,
+ "name": "Sybase Open Watcom Public License 1.0",
+ "licenseId": "Watcom-1.0",
+ "seeAlso": [
+ "https://opensource.org/licenses/Watcom-1.0"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-NC-SA-2.0-FR.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-NC-SA-2.0-FR.json",
+ "referenceNumber": 355,
+ "name": "Creative Commons Attribution-NonCommercial-ShareAlike 2.0 France",
+ "licenseId": "CC-BY-NC-SA-2.0-FR",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-nc-sa/2.0/fr/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/ODbL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/ODbL-1.0.json",
+ "referenceNumber": 356,
+ "name": "Open Data Commons Open Database License v1.0",
+ "licenseId": "ODbL-1.0",
+ "seeAlso": [
+ "http://www.opendatacommons.org/licenses/odbl/1.0/",
+ "https://opendatacommons.org/licenses/odbl/1-0/"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/FSFULLR.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/FSFULLR.json",
+ "referenceNumber": 357,
+ "name": "FSF Unlimited License (with License Retention)",
+ "licenseId": "FSFULLR",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/FSF_Unlimited_License#License_Retention_Variant"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/OLDAP-1.3.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OLDAP-1.3.json",
+ "referenceNumber": 358,
+ "name": "Open LDAP Public License v1.3",
+ "licenseId": "OLDAP-1.3",
+ "seeAlso": [
+ "http://www.openldap.org/devel/gitweb.cgi?p\u003dopenldap.git;a\u003dblob;f\u003dLICENSE;hb\u003de5f8117f0ce088d0bd7a8e18ddf37eaa40eb09b1"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/SSH-OpenSSH.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/SSH-OpenSSH.json",
+ "referenceNumber": 359,
+ "name": "SSH OpenSSH license",
+ "licenseId": "SSH-OpenSSH",
+ "seeAlso": [
+ "https://github.com/openssh/openssh-portable/blob/1b11ea7c58cd5c59838b5fa574cd456d6047b2d4/LICENCE#L10"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/BSD-2-Clause.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/BSD-2-Clause.json",
+ "referenceNumber": 360,
+ "name": "BSD 2-Clause \"Simplified\" License",
+ "licenseId": "BSD-2-Clause",
+ "seeAlso": [
+ "https://opensource.org/licenses/BSD-2-Clause"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/HPND.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/HPND.json",
+ "referenceNumber": 361,
+ "name": "Historical Permission Notice and Disclaimer",
+ "licenseId": "HPND",
+ "seeAlso": [
+ "https://opensource.org/licenses/HPND"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Zimbra-1.3.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Zimbra-1.3.json",
+ "referenceNumber": 362,
+ "name": "Zimbra Public License v1.3",
+ "licenseId": "Zimbra-1.3",
+ "seeAlso": [
+ "http://web.archive.org/web/20100302225219/http://www.zimbra.com/license/zimbra-public-license-1-3.html"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Borceux.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Borceux.json",
+ "referenceNumber": 363,
+ "name": "Borceux license",
+ "licenseId": "Borceux",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Borceux"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/OLDAP-1.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OLDAP-1.1.json",
+ "referenceNumber": 364,
+ "name": "Open LDAP Public License v1.1",
+ "licenseId": "OLDAP-1.1",
+ "seeAlso": [
+ "http://www.openldap.org/devel/gitweb.cgi?p\u003dopenldap.git;a\u003dblob;f\u003dLICENSE;hb\u003d806557a5ad59804ef3a44d5abfbe91d706b0791f"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/OFL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OFL-1.0.json",
+ "referenceNumber": 365,
+ "name": "SIL Open Font License 1.0",
+ "licenseId": "OFL-1.0",
+ "seeAlso": [
+ "http://scripts.sil.org/cms/scripts/page.php?item_id\u003dOFL10_web"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/NASA-1.3.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/NASA-1.3.json",
+ "referenceNumber": 366,
+ "name": "NASA Open Source Agreement 1.3",
+ "licenseId": "NASA-1.3",
+ "seeAlso": [
+ "http://ti.arc.nasa.gov/opensource/nosa/",
+ "https://opensource.org/licenses/NASA-1.3"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/VOSTROM.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/VOSTROM.json",
+ "referenceNumber": 367,
+ "name": "VOSTROM Public License for Open Source",
+ "licenseId": "VOSTROM",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/VOSTROM"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/MIT-0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/MIT-0.json",
+ "referenceNumber": 368,
+ "name": "MIT No Attribution",
+ "licenseId": "MIT-0",
+ "seeAlso": [
+ "https://github.com/aws/mit-0",
+ "https://romanrm.net/mit-zero",
+ "https://github.com/awsdocs/aws-cloud9-user-guide/blob/master/LICENSE-SAMPLECODE"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/ISC.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/ISC.json",
+ "referenceNumber": 369,
+ "name": "ISC License",
+ "licenseId": "ISC",
+ "seeAlso": [
+ "https://www.isc.org/licenses/",
+ "https://www.isc.org/downloads/software-support-policy/isc-license/",
+ "https://opensource.org/licenses/ISC"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Unicode-DFS-2016.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Unicode-DFS-2016.json",
+ "referenceNumber": 370,
+ "name": "Unicode License Agreement - Data Files and Software (2016)",
+ "licenseId": "Unicode-DFS-2016",
+ "seeAlso": [
+ "http://www.unicode.org/copyright.html"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/BlueOak-1.0.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/BlueOak-1.0.0.json",
+ "referenceNumber": 371,
+ "name": "Blue Oak Model License 1.0.0",
+ "licenseId": "BlueOak-1.0.0",
+ "seeAlso": [
+ "https://blueoakcouncil.org/license/1.0.0"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/LiLiQ-Rplus-1.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/LiLiQ-Rplus-1.1.json",
+ "referenceNumber": 372,
+ "name": "Licence Libre du Québec – Réciprocité forte version 1.1",
+ "licenseId": "LiLiQ-Rplus-1.1",
+ "seeAlso": [
+ "https://www.forge.gouv.qc.ca/participez/licence-logicielle/licence-libre-du-quebec-liliq-en-francais/licence-libre-du-quebec-reciprocite-forte-liliq-r-v1-1/",
+ "http://opensource.org/licenses/LiLiQ-Rplus-1.1"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/NOSL.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/NOSL.json",
+ "referenceNumber": 373,
+ "name": "Netizen Open Source License",
+ "licenseId": "NOSL",
+ "seeAlso": [
+ "http://bits.netizen.com.au/licenses/NOSL/nosl.txt"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/SMLNJ.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/SMLNJ.json",
+ "referenceNumber": 374,
+ "name": "Standard ML of New Jersey License",
+ "licenseId": "SMLNJ",
+ "seeAlso": [
+ "https://www.smlnj.org/license.html"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/LGPL-3.0+.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/LGPL-3.0+.json",
+ "referenceNumber": 375,
+ "name": "GNU Lesser General Public License v3.0 or later",
+ "licenseId": "LGPL-3.0+",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/lgpl-3.0-standalone.html",
+ "https://opensource.org/licenses/LGPL-3.0"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/CPAL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CPAL-1.0.json",
+ "referenceNumber": 376,
+ "name": "Common Public Attribution License 1.0",
+ "licenseId": "CPAL-1.0",
+ "seeAlso": [
+ "https://opensource.org/licenses/CPAL-1.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/PSF-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/PSF-2.0.json",
+ "referenceNumber": 377,
+ "name": "Python Software Foundation License 2.0",
+ "licenseId": "PSF-2.0",
+ "seeAlso": [
+ "https://opensource.org/licenses/Python-2.0"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/RPL-1.5.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/RPL-1.5.json",
+ "referenceNumber": 378,
+ "name": "Reciprocal Public License 1.5",
+ "licenseId": "RPL-1.5",
+ "seeAlso": [
+ "https://opensource.org/licenses/RPL-1.5"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/BSD-2-Clause-FreeBSD.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/BSD-2-Clause-FreeBSD.json",
+ "referenceNumber": 379,
+ "name": "BSD 2-Clause FreeBSD License",
+ "licenseId": "BSD-2-Clause-FreeBSD",
+ "seeAlso": [
+ "http://www.freebsd.org/copyright/freebsd-license.html"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/MIT-Modern-Variant.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/MIT-Modern-Variant.json",
+ "referenceNumber": 380,
+ "name": "MIT License Modern Variant",
+ "licenseId": "MIT-Modern-Variant",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing:MIT#Modern_Variants",
+ "https://ptolemy.berkeley.edu/copyright.htm",
+ "https://pirlwww.lpl.arizona.edu/resources/guide/software/PerlTk/Tixlic.html"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Nokia.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Nokia.json",
+ "referenceNumber": 381,
+ "name": "Nokia Open Source License",
+ "licenseId": "Nokia",
+ "seeAlso": [
+ "https://opensource.org/licenses/nokia"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/GFDL-1.1-no-invariants-only.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/GFDL-1.1-no-invariants-only.json",
+ "referenceNumber": 382,
+ "name": "GNU Free Documentation License v1.1 only - no invariants",
+ "licenseId": "GFDL-1.1-no-invariants-only",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/fdl-1.1.txt"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/PDDL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/PDDL-1.0.json",
+ "referenceNumber": 383,
+ "name": "Open Data Commons Public Domain Dedication \u0026 License 1.0",
+ "licenseId": "PDDL-1.0",
+ "seeAlso": [
+ "http://opendatacommons.org/licenses/pddl/1.0/",
+ "https://opendatacommons.org/licenses/pddl/"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/EUPL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/EUPL-1.0.json",
+ "referenceNumber": 384,
+ "name": "European Union Public License 1.0",
+ "licenseId": "EUPL-1.0",
+ "seeAlso": [
+ "http://ec.europa.eu/idabc/en/document/7330.html",
+ "http://ec.europa.eu/idabc/servlets/Doc027f.pdf?id\u003d31096"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CDDL-1.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CDDL-1.1.json",
+ "referenceNumber": 385,
+ "name": "Common Development and Distribution License 1.1",
+ "licenseId": "CDDL-1.1",
+ "seeAlso": [
+ "http://glassfish.java.net/public/CDDL+GPL_1_1.html",
+ "https://javaee.github.io/glassfish/LICENSE"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/GFDL-1.3-only.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/GFDL-1.3-only.json",
+ "referenceNumber": 386,
+ "name": "GNU Free Documentation License v1.3 only",
+ "licenseId": "GFDL-1.3-only",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/fdl-1.3.txt"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/OLDAP-2.6.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OLDAP-2.6.json",
+ "referenceNumber": 387,
+ "name": "Open LDAP Public License v2.6",
+ "licenseId": "OLDAP-2.6",
+ "seeAlso": [
+ "http://www.openldap.org/devel/gitweb.cgi?p\u003dopenldap.git;a\u003dblob;f\u003dLICENSE;hb\u003d1cae062821881f41b73012ba816434897abf4205"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/JSON.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/JSON.json",
+ "referenceNumber": 388,
+ "name": "JSON License",
+ "licenseId": "JSON",
+ "seeAlso": [
+ "http://www.json.org/license.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/LGPL-3.0-or-later.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/LGPL-3.0-or-later.json",
+ "referenceNumber": 389,
+ "name": "GNU Lesser General Public License v3.0 or later",
+ "licenseId": "LGPL-3.0-or-later",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/lgpl-3.0-standalone.html",
+ "https://opensource.org/licenses/LGPL-3.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/GPL-3.0.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/GPL-3.0.json",
+ "referenceNumber": 390,
+ "name": "GNU General Public License v3.0 only",
+ "licenseId": "GPL-3.0",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/gpl-3.0-standalone.html",
+ "https://opensource.org/licenses/GPL-3.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Fair.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Fair.json",
+ "referenceNumber": 391,
+ "name": "Fair License",
+ "licenseId": "Fair",
+ "seeAlso": [
+ "http://fairlicense.org/",
+ "https://opensource.org/licenses/Fair"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/GPL-2.0-with-font-exception.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/GPL-2.0-with-font-exception.json",
+ "referenceNumber": 392,
+ "name": "GNU General Public License v2.0 w/Font exception",
+ "licenseId": "GPL-2.0-with-font-exception",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/gpl-faq.html#FontException"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/OSL-2.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OSL-2.1.json",
+ "referenceNumber": 393,
+ "name": "Open Software License 2.1",
+ "licenseId": "OSL-2.1",
+ "seeAlso": [
+ "http://web.archive.org/web/20050212003940/http://www.rosenlaw.com/osl21.htm",
+ "https://opensource.org/licenses/OSL-2.1"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/LPPL-1.3a.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/LPPL-1.3a.json",
+ "referenceNumber": 394,
+ "name": "LaTeX Project Public License v1.3a",
+ "licenseId": "LPPL-1.3a",
+ "seeAlso": [
+ "http://www.latex-project.org/lppl/lppl-1-3a.txt"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/NAIST-2003.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/NAIST-2003.json",
+ "referenceNumber": 395,
+ "name": "Nara Institute of Science and Technology License (2003)",
+ "licenseId": "NAIST-2003",
+ "seeAlso": [
+ "https://enterprise.dejacode.com/licenses/public/naist-2003/#license-text",
+ "https://github.com/nodejs/node/blob/4a19cc8947b1bba2b2d27816ec3d0edf9b28e503/LICENSE#L343"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-NC-ND-4.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-NC-ND-4.0.json",
+ "referenceNumber": 396,
+ "name": "Creative Commons Attribution Non Commercial No Derivatives 4.0 International",
+ "licenseId": "CC-BY-NC-ND-4.0",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-NC-3.0-DE.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-NC-3.0-DE.json",
+ "referenceNumber": 397,
+ "name": "Creative Commons Attribution Non Commercial 3.0 Germany",
+ "licenseId": "CC-BY-NC-3.0-DE",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-nc/3.0/de/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/LGPL-2.1+.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/LGPL-2.1+.json",
+ "referenceNumber": 398,
+ "name": "GNU Library General Public License v2.1 or later",
+ "licenseId": "LGPL-2.1+",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/lgpl-2.1-standalone.html",
+ "https://opensource.org/licenses/LGPL-2.1"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/OPL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OPL-1.0.json",
+ "referenceNumber": 399,
+ "name": "Open Public License v1.0",
+ "licenseId": "OPL-1.0",
+ "seeAlso": [
+ "http://old.koalateam.com/jackaroo/OPL_1_0.TXT",
+ "https://fedoraproject.org/wiki/Licensing/Open_Public_License"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/HPND-sell-variant.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/HPND-sell-variant.json",
+ "referenceNumber": 400,
+ "name": "Historical Permission Notice and Disclaimer - sell variant",
+ "licenseId": "HPND-sell-variant",
+ "seeAlso": [
+ "https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/net/sunrpc/auth_gss/gss_generic_token.c?h\u003dv4.19"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/QPL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/QPL-1.0.json",
+ "referenceNumber": 401,
+ "name": "Q Public License 1.0",
+ "licenseId": "QPL-1.0",
+ "seeAlso": [
+ "http://doc.qt.nokia.com/3.3/license.html",
+ "https://opensource.org/licenses/QPL-1.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/EUPL-1.2.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/EUPL-1.2.json",
+ "referenceNumber": 402,
+ "name": "European Union Public License 1.2",
+ "licenseId": "EUPL-1.2",
+ "seeAlso": [
+ "https://joinup.ec.europa.eu/page/eupl-text-11-12",
+ "https://joinup.ec.europa.eu/sites/default/files/custom-page/attachment/eupl_v1.2_en.pdf",
+ "https://joinup.ec.europa.eu/sites/default/files/custom-page/attachment/2020-03/EUPL-1.2%20EN.txt",
+ "https://joinup.ec.europa.eu/sites/default/files/inline-files/EUPL%20v1_2%20EN(1).txt",
+ "http://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri\u003dCELEX:32017D0863",
+ "https://opensource.org/licenses/EUPL-1.2"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/GFDL-1.2-no-invariants-or-later.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/GFDL-1.2-no-invariants-or-later.json",
+ "referenceNumber": 403,
+ "name": "GNU Free Documentation License v1.2 or later - no invariants",
+ "licenseId": "GFDL-1.2-no-invariants-or-later",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/fdl-1.2.txt"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/eCos-2.0.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/eCos-2.0.json",
+ "referenceNumber": 404,
+ "name": "eCos license version 2.0",
+ "licenseId": "eCos-2.0",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/ecos-license.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/NCGL-UK-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/NCGL-UK-2.0.json",
+ "referenceNumber": 405,
+ "name": "Non-Commercial Government Licence",
+ "licenseId": "NCGL-UK-2.0",
+ "seeAlso": [
+ "http://www.nationalarchives.gov.uk/doc/non-commercial-government-licence/version/2/"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Beerware.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Beerware.json",
+ "referenceNumber": 406,
+ "name": "Beerware License",
+ "licenseId": "Beerware",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Beerware",
+ "https://people.freebsd.org/~phk/"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/BSD-3-Clause-Open-MPI.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/BSD-3-Clause-Open-MPI.json",
+ "referenceNumber": 407,
+ "name": "BSD 3-Clause Open MPI variant",
+ "licenseId": "BSD-3-Clause-Open-MPI",
+ "seeAlso": [
+ "https://www.open-mpi.org/community/license.php",
+ "http://www.netlib.org/lapack/LICENSE.txt"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/GPL-2.0-with-bison-exception.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/GPL-2.0-with-bison-exception.json",
+ "referenceNumber": 408,
+ "name": "GNU General Public License v2.0 w/Bison exception",
+ "licenseId": "GPL-2.0-with-bison-exception",
+ "seeAlso": [
+ "http://git.savannah.gnu.org/cgit/bison.git/tree/data/yacc.c?id\u003d193d7c7054ba7197b0789e14965b739162319b5e#n141"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CECILL-B.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CECILL-B.json",
+ "referenceNumber": 409,
+ "name": "CeCILL-B Free Software License Agreement",
+ "licenseId": "CECILL-B",
+ "seeAlso": [
+ "http://www.cecill.info/licences/Licence_CeCILL-B_V1-en.html"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/GPL-2.0-with-autoconf-exception.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/GPL-2.0-with-autoconf-exception.json",
+ "referenceNumber": 410,
+ "name": "GNU General Public License v2.0 w/Autoconf exception",
+ "licenseId": "GPL-2.0-with-autoconf-exception",
+ "seeAlso": [
+ "http://ac-archive.sourceforge.net/doc/copyright.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/EPL-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/EPL-2.0.json",
+ "referenceNumber": 411,
+ "name": "Eclipse Public License 2.0",
+ "licenseId": "EPL-2.0",
+ "seeAlso": [
+ "https://www.eclipse.org/legal/epl-2.0",
+ "https://www.opensource.org/licenses/EPL-2.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/MIT-feh.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/MIT-feh.json",
+ "referenceNumber": 412,
+ "name": "feh License",
+ "licenseId": "MIT-feh",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/MIT#feh"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/RPL-1.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/RPL-1.1.json",
+ "referenceNumber": 413,
+ "name": "Reciprocal Public License 1.1",
+ "licenseId": "RPL-1.1",
+ "seeAlso": [
+ "https://opensource.org/licenses/RPL-1.1"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/CDLA-Permissive-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CDLA-Permissive-1.0.json",
+ "referenceNumber": 414,
+ "name": "Community Data License Agreement Permissive 1.0",
+ "licenseId": "CDLA-Permissive-1.0",
+ "seeAlso": [
+ "https://cdla.io/permissive-1-0"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Python-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Python-2.0.json",
+ "referenceNumber": 415,
+ "name": "Python License 2.0",
+ "licenseId": "Python-2.0",
+ "seeAlso": [
+ "https://opensource.org/licenses/Python-2.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/MPL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/MPL-1.0.json",
+ "referenceNumber": 416,
+ "name": "Mozilla Public License 1.0",
+ "licenseId": "MPL-1.0",
+ "seeAlso": [
+ "http://www.mozilla.org/MPL/MPL-1.0.html",
+ "https://opensource.org/licenses/MPL-1.0"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/GFDL-1.1-or-later.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/GFDL-1.1-or-later.json",
+ "referenceNumber": 417,
+ "name": "GNU Free Documentation License v1.1 or later",
+ "licenseId": "GFDL-1.1-or-later",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/fdl-1.1.txt"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/diffmark.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/diffmark.json",
+ "referenceNumber": 418,
+ "name": "diffmark license",
+ "licenseId": "diffmark",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/diffmark"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/GPL-1.0+.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/GPL-1.0+.json",
+ "referenceNumber": 419,
+ "name": "GNU General Public License v1.0 or later",
+ "licenseId": "GPL-1.0+",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/gpl-1.0-standalone.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/OpenSSL.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OpenSSL.json",
+ "referenceNumber": 420,
+ "name": "OpenSSL License",
+ "licenseId": "OpenSSL",
+ "seeAlso": [
+ "http://www.openssl.org/source/license.html"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/OSL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OSL-1.0.json",
+ "referenceNumber": 421,
+ "name": "Open Software License 1.0",
+ "licenseId": "OSL-1.0",
+ "seeAlso": [
+ "https://opensource.org/licenses/OSL-1.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Parity-6.0.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Parity-6.0.0.json",
+ "referenceNumber": 422,
+ "name": "The Parity Public License 6.0.0",
+ "licenseId": "Parity-6.0.0",
+ "seeAlso": [
+ "https://paritylicense.com/versions/6.0.0.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/AGPL-1.0.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/AGPL-1.0.json",
+ "referenceNumber": 423,
+ "name": "Affero General Public License v1.0",
+ "licenseId": "AGPL-1.0",
+ "seeAlso": [
+ "http://www.affero.org/oagpl.html"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/YPL-1.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/YPL-1.1.json",
+ "referenceNumber": 424,
+ "name": "Yahoo! Public License v1.1",
+ "licenseId": "YPL-1.1",
+ "seeAlso": [
+ "http://www.zimbra.com/license/yahoo_public_license_1.1.html"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/SSH-short.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/SSH-short.json",
+ "referenceNumber": 425,
+ "name": "SSH short notice",
+ "licenseId": "SSH-short",
+ "seeAlso": [
+ "https://github.com/openssh/openssh-portable/blob/1b11ea7c58cd5c59838b5fa574cd456d6047b2d4/pathnames.h",
+ "http://web.mit.edu/kolya/.f/root/athena.mit.edu/sipb.mit.edu/project/openssh/OldFiles/src/openssh-2.9.9p2/ssh-add.1",
+ "https://joinup.ec.europa.eu/svn/lesoll/trunk/italc/lib/src/dsa_key.cpp"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/IBM-pibs.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/IBM-pibs.json",
+ "referenceNumber": 426,
+ "name": "IBM PowerPC Initialization and Boot Software",
+ "licenseId": "IBM-pibs",
+ "seeAlso": [
+ "http://git.denx.de/?p\u003du-boot.git;a\u003dblob;f\u003darch/powerpc/cpu/ppc4xx/miiphy.c;h\u003d297155fdafa064b955e53e9832de93bfb0cfb85b;hb\u003d9fab4bf4cc077c21e43941866f3f2c196f28670d"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Xnet.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Xnet.json",
+ "referenceNumber": 427,
+ "name": "X.Net License",
+ "licenseId": "Xnet",
+ "seeAlso": [
+ "https://opensource.org/licenses/Xnet"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/TU-Berlin-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/TU-Berlin-1.0.json",
+ "referenceNumber": 428,
+ "name": "Technische Universitaet Berlin License 1.0",
+ "licenseId": "TU-Berlin-1.0",
+ "seeAlso": [
+ "https://github.com/swh/ladspa/blob/7bf6f3799fdba70fda297c2d8fd9f526803d9680/gsm/COPYRIGHT"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/AGPL-3.0.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/AGPL-3.0.json",
+ "referenceNumber": 429,
+ "name": "GNU Affero General Public License v3.0",
+ "licenseId": "AGPL-3.0",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/agpl.txt",
+ "https://opensource.org/licenses/AGPL-3.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/CAL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CAL-1.0.json",
+ "referenceNumber": 430,
+ "name": "Cryptographic Autonomy License 1.0",
+ "licenseId": "CAL-1.0",
+ "seeAlso": [
+ "http://cryptographicautonomylicense.com/license-text.html",
+ "https://opensource.org/licenses/CAL-1.0"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/AFL-3.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/AFL-3.0.json",
+ "referenceNumber": 431,
+ "name": "Academic Free License v3.0",
+ "licenseId": "AFL-3.0",
+ "seeAlso": [
+ "http://www.rosenlaw.com/AFL3.0.htm",
+ "https://opensource.org/licenses/afl-3.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/CECILL-C.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CECILL-C.json",
+ "referenceNumber": 432,
+ "name": "CeCILL-C Free Software License Agreement",
+ "licenseId": "CECILL-C",
+ "seeAlso": [
+ "http://www.cecill.info/licences/Licence_CeCILL-C_V1-en.html"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/OGL-UK-3.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OGL-UK-3.0.json",
+ "referenceNumber": 433,
+ "name": "Open Government Licence v3.0",
+ "licenseId": "OGL-UK-3.0",
+ "seeAlso": [
+ "http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/BSD-3-Clause-Clear.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/BSD-3-Clause-Clear.json",
+ "referenceNumber": 434,
+ "name": "BSD 3-Clause Clear License",
+ "licenseId": "BSD-3-Clause-Clear",
+ "seeAlso": [
+ "http://labs.metacarta.com/license-explanation.html#license"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/BSD-3-Clause-Modification.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/BSD-3-Clause-Modification.json",
+ "referenceNumber": 435,
+ "name": "BSD 3-Clause Modification",
+ "licenseId": "BSD-3-Clause-Modification",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing:BSD#Modification_Variant"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-SA-2.0-UK.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-SA-2.0-UK.json",
+ "referenceNumber": 436,
+ "name": "Creative Commons Attribution Share Alike 2.0 England and Wales",
+ "licenseId": "CC-BY-SA-2.0-UK",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-sa/2.0/uk/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Saxpath.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Saxpath.json",
+ "referenceNumber": 437,
+ "name": "Saxpath License",
+ "licenseId": "Saxpath",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Saxpath_License"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/NLPL.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/NLPL.json",
+ "referenceNumber": 438,
+ "name": "No Limit Public License",
+ "licenseId": "NLPL",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/NLPL"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/SimPL-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/SimPL-2.0.json",
+ "referenceNumber": 439,
+ "name": "Simple Public License 2.0",
+ "licenseId": "SimPL-2.0",
+ "seeAlso": [
+ "https://opensource.org/licenses/SimPL-2.0"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/psfrag.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/psfrag.json",
+ "referenceNumber": 440,
+ "name": "psfrag License",
+ "licenseId": "psfrag",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/psfrag"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Spencer-86.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Spencer-86.json",
+ "referenceNumber": 441,
+ "name": "Spencer License 86",
+ "licenseId": "Spencer-86",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Henry_Spencer_Reg-Ex_Library_License"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/OCCT-PL.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OCCT-PL.json",
+ "referenceNumber": 442,
+ "name": "Open CASCADE Technology Public License",
+ "licenseId": "OCCT-PL",
+ "seeAlso": [
+ "http://www.opencascade.com/content/occt-public-license"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/CERN-OHL-S-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CERN-OHL-S-2.0.json",
+ "referenceNumber": 443,
+ "name": "CERN Open Hardware Licence Version 2 - Strongly Reciprocal",
+ "licenseId": "CERN-OHL-S-2.0",
+ "seeAlso": [
+ "https://www.ohwr.org/project/cernohl/wikis/Documents/CERN-OHL-version-2"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/ErlPL-1.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/ErlPL-1.1.json",
+ "referenceNumber": 444,
+ "name": "Erlang Public License v1.1",
+ "licenseId": "ErlPL-1.1",
+ "seeAlso": [
+ "http://www.erlang.org/EPLICENSE"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/MIT-CMU.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/MIT-CMU.json",
+ "referenceNumber": 445,
+ "name": "CMU License",
+ "licenseId": "MIT-CMU",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing:MIT?rd\u003dLicensing/MIT#CMU_Style",
+ "https://github.com/python-pillow/Pillow/blob/fffb426092c8db24a5f4b6df243a8a3c01fb63cd/LICENSE"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/NIST-PD.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/NIST-PD.json",
+ "referenceNumber": 446,
+ "name": "NIST Public Domain Notice",
+ "licenseId": "NIST-PD",
+ "seeAlso": [
+ "https://github.com/tcheneau/simpleRPL/blob/e645e69e38dd4e3ccfeceb2db8cba05b7c2e0cd3/LICENSE.txt",
+ "https://github.com/tcheneau/Routing/blob/f09f46fcfe636107f22f2c98348188a65a135d98/README.md"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/OSL-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OSL-2.0.json",
+ "referenceNumber": 447,
+ "name": "Open Software License 2.0",
+ "licenseId": "OSL-2.0",
+ "seeAlso": [
+ "http://web.archive.org/web/20041020171434/http://www.rosenlaw.com/osl2.0.html"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/APSL-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/APSL-2.0.json",
+ "referenceNumber": 448,
+ "name": "Apple Public Source License 2.0",
+ "licenseId": "APSL-2.0",
+ "seeAlso": [
+ "http://www.opensource.apple.com/license/apsl/"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Leptonica.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Leptonica.json",
+ "referenceNumber": 449,
+ "name": "Leptonica License",
+ "licenseId": "Leptonica",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Leptonica"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/PolyForm-Small-Business-1.0.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/PolyForm-Small-Business-1.0.0.json",
+ "referenceNumber": 450,
+ "name": "PolyForm Small Business License 1.0.0",
+ "licenseId": "PolyForm-Small-Business-1.0.0",
+ "seeAlso": [
+ "https://polyformproject.org/licenses/small-business/1.0.0"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/LiLiQ-P-1.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/LiLiQ-P-1.1.json",
+ "referenceNumber": 451,
+ "name": "Licence Libre du Québec – Permissive version 1.1",
+ "licenseId": "LiLiQ-P-1.1",
+ "seeAlso": [
+ "https://forge.gouv.qc.ca/licence/fr/liliq-v1-1/",
+ "http://opensource.org/licenses/LiLiQ-P-1.1"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/NetCDF.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/NetCDF.json",
+ "referenceNumber": 452,
+ "name": "NetCDF license",
+ "licenseId": "NetCDF",
+ "seeAlso": [
+ "http://www.unidata.ucar.edu/software/netcdf/copyright.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/OML.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OML.json",
+ "referenceNumber": 453,
+ "name": "Open Market License",
+ "licenseId": "OML",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/Open_Market_License"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/AGPL-3.0-or-later.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/AGPL-3.0-or-later.json",
+ "referenceNumber": 454,
+ "name": "GNU Affero General Public License v3.0 or later",
+ "licenseId": "AGPL-3.0-or-later",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/agpl.txt",
+ "https://opensource.org/licenses/AGPL-3.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/OLDAP-2.2.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OLDAP-2.2.json",
+ "referenceNumber": 455,
+ "name": "Open LDAP Public License v2.2",
+ "licenseId": "OLDAP-2.2",
+ "seeAlso": [
+ "http://www.openldap.org/devel/gitweb.cgi?p\u003dopenldap.git;a\u003dblob;f\u003dLICENSE;hb\u003d470b0c18ec67621c85881b2733057fecf4a1acc3"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/BSD-3-Clause.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/BSD-3-Clause.json",
+ "referenceNumber": 456,
+ "name": "BSD 3-Clause \"New\" or \"Revised\" License",
+ "licenseId": "BSD-3-Clause",
+ "seeAlso": [
+ "https://opensource.org/licenses/BSD-3-Clause"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/WTFPL.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/WTFPL.json",
+ "referenceNumber": 457,
+ "name": "Do What The F*ck You Want To Public License",
+ "licenseId": "WTFPL",
+ "seeAlso": [
+ "http://www.wtfpl.net/about/",
+ "http://sam.zoy.org/wtfpl/COPYING"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/OGL-UK-2.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/OGL-UK-2.0.json",
+ "referenceNumber": 458,
+ "name": "Open Government Licence v2.0",
+ "licenseId": "OGL-UK-2.0",
+ "seeAlso": [
+ "http://www.nationalarchives.gov.uk/doc/open-government-licence/version/2/"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/BSD-3-Clause-Attribution.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/BSD-3-Clause-Attribution.json",
+ "referenceNumber": 459,
+ "name": "BSD with attribution",
+ "licenseId": "BSD-3-Clause-Attribution",
+ "seeAlso": [
+ "https://fedoraproject.org/wiki/Licensing/BSD_with_Attribution"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/RPSL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/RPSL-1.0.json",
+ "referenceNumber": 460,
+ "name": "RealNetworks Public Source License v1.0",
+ "licenseId": "RPSL-1.0",
+ "seeAlso": [
+ "https://helixcommunity.org/content/rpsl",
+ "https://opensource.org/licenses/RPSL-1.0"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/CC-BY-NC-ND-3.0-DE.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/CC-BY-NC-ND-3.0-DE.json",
+ "referenceNumber": 461,
+ "name": "Creative Commons Attribution Non Commercial No Derivatives 3.0 Germany",
+ "licenseId": "CC-BY-NC-ND-3.0-DE",
+ "seeAlso": [
+ "https://creativecommons.org/licenses/by-nc-nd/3.0/de/legalcode"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/EUPL-1.1.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/EUPL-1.1.json",
+ "referenceNumber": 462,
+ "name": "European Union Public License 1.1",
+ "licenseId": "EUPL-1.1",
+ "seeAlso": [
+ "https://joinup.ec.europa.eu/software/page/eupl/licence-eupl",
+ "https://joinup.ec.europa.eu/sites/default/files/custom-page/attachment/eupl1.1.-licence-en_0.pdf",
+ "https://opensource.org/licenses/EUPL-1.1"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/Sendmail-8.23.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Sendmail-8.23.json",
+ "referenceNumber": 463,
+ "name": "Sendmail License 8.23",
+ "licenseId": "Sendmail-8.23",
+ "seeAlso": [
+ "https://www.proofpoint.com/sites/default/files/sendmail-license.pdf",
+ "https://web.archive.org/web/20181003101040/https://www.proofpoint.com/sites/default/files/sendmail-license.pdf"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/ODC-By-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/ODC-By-1.0.json",
+ "referenceNumber": 464,
+ "name": "Open Data Commons Attribution License v1.0",
+ "licenseId": "ODC-By-1.0",
+ "seeAlso": [
+ "https://opendatacommons.org/licenses/by/1.0/"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/D-FSL-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/D-FSL-1.0.json",
+ "referenceNumber": 465,
+ "name": "Deutsche Freie Software Lizenz",
+ "licenseId": "D-FSL-1.0",
+ "seeAlso": [
+ "http://www.dipp.nrw.de/d-fsl/lizenzen/",
+ "http://www.dipp.nrw.de/d-fsl/index_html/lizenzen/de/D-FSL-1_0_de.txt",
+ "http://www.dipp.nrw.de/d-fsl/index_html/lizenzen/en/D-FSL-1_0_en.txt",
+ "https://www.hbz-nrw.de/produkte/open-access/lizenzen/dfsl",
+ "https://www.hbz-nrw.de/produkte/open-access/lizenzen/dfsl/deutsche-freie-software-lizenz",
+ "https://www.hbz-nrw.de/produkte/open-access/lizenzen/dfsl/german-free-software-license",
+ "https://www.hbz-nrw.de/produkte/open-access/lizenzen/dfsl/D-FSL-1_0_de.txt/at_download/file",
+ "https://www.hbz-nrw.de/produkte/open-access/lizenzen/dfsl/D-FSL-1_0_en.txt/at_download/file"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/BSD-4-Clause.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/BSD-4-Clause.json",
+ "referenceNumber": 466,
+ "name": "BSD 4-Clause \"Original\" or \"Old\" License",
+ "licenseId": "BSD-4-Clause",
+ "seeAlso": [
+ "http://directory.fsf.org/wiki/License:BSD_4Clause"
+ ],
+ "isOsiApproved": false,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/LGPL-2.1.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/LGPL-2.1.json",
+ "referenceNumber": 467,
+ "name": "GNU Lesser General Public License v2.1 only",
+ "licenseId": "LGPL-2.1",
+ "seeAlso": [
+ "https://www.gnu.org/licenses/old-licenses/lgpl-2.1-standalone.html",
+ "https://opensource.org/licenses/LGPL-2.1"
+ ],
+ "isOsiApproved": true,
+ "isFsfLibre": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/BSD-2-Clause-Views.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/BSD-2-Clause-Views.json",
+ "referenceNumber": 468,
+ "name": "BSD 2-Clause with views sentence",
+ "licenseId": "BSD-2-Clause-Views",
+ "seeAlso": [
+ "http://www.freebsd.org/copyright/freebsd-license.html",
+ "https://people.freebsd.org/~ivoras/wine/patch-wine-nvidia.sh",
+ "https://github.com/protegeproject/protege/blob/master/license.txt"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Artistic-1.0-Perl.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Artistic-1.0-Perl.json",
+ "referenceNumber": 469,
+ "name": "Artistic License 1.0 (Perl)",
+ "licenseId": "Artistic-1.0-Perl",
+ "seeAlso": [
+ "http://dev.perl.org/licenses/artistic.html"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/NPOSL-3.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/NPOSL-3.0.json",
+ "referenceNumber": 470,
+ "name": "Non-Profit Open Software License 3.0",
+ "licenseId": "NPOSL-3.0",
+ "seeAlso": [
+ "https://opensource.org/licenses/NOSL3.0"
+ ],
+ "isOsiApproved": true
+ },
+ {
+ "reference": "https://spdx.org/licenses/gSOAP-1.3b.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/gSOAP-1.3b.json",
+ "referenceNumber": 471,
+ "name": "gSOAP Public License v1.3b",
+ "licenseId": "gSOAP-1.3b",
+ "seeAlso": [
+ "http://www.cs.fsu.edu/~engelen/license.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/Interbase-1.0.html",
+ "isDeprecatedLicenseId": false,
+ "detailsUrl": "https://spdx.org/licenses/Interbase-1.0.json",
+ "referenceNumber": 472,
+ "name": "Interbase Public License v1.0",
+ "licenseId": "Interbase-1.0",
+ "seeAlso": [
+ "https://web.archive.org/web/20060319014854/http://info.borland.com/devsupport/interbase/opensource/IPL.html"
+ ],
+ "isOsiApproved": false
+ },
+ {
+ "reference": "https://spdx.org/licenses/StandardML-NJ.html",
+ "isDeprecatedLicenseId": true,
+ "detailsUrl": "https://spdx.org/licenses/StandardML-NJ.json",
+ "referenceNumber": 473,
+ "name": "Standard ML of New Jersey License",
+ "licenseId": "StandardML-NJ",
+ "seeAlso": [
+ "http://www.smlnj.org//license.html"
+ ],
+ "isOsiApproved": false
+ }
+ ],
+ "releaseDate": "2021-08-08"
+}
\ No newline at end of file
diff --git a/poky/meta/lib/oe/cve_check.py b/poky/meta/lib/oe/cve_check.py
index 67f0644889..c508865738 100644
--- a/poky/meta/lib/oe/cve_check.py
+++ b/poky/meta/lib/oe/cve_check.py
@@ -172,3 +172,40 @@ def get_cpe_ids(cve_product, version):
cpe_ids.append(cpe_id)
return cpe_ids
+
+def convert_cve_version(version):
+ """
+ This function converts from CVE format to Yocto version format.
+ e.g. 8.3_p1 -> 8.3p1, 6.2_rc1 -> 6.2-rc1
+
+ Unless it is redefined using CVE_VERSION in the recipe,
+ cve_check uses the version in the name of the recipe (${PV})
+ to check vulnerabilities against a CVE in the database downloaded from NVD.
+
+ When the version has an update, e.g.
+ "p1" in OpenSSH 8.3p1,
+ "-rc1" in linux kernel 6.2-rc1,
+ the database stores the version as version_update (8.3_p1, 6.2_rc1).
+ Therefore, we must transform this version before comparing to the
+ recipe version.
+
+ In this case, the parameter of the function is 8.3_p1.
+ If the version uses the Release Candidate format, "rc",
+ this function replaces the '_' by '-'.
+ If the version uses the Update format, "p",
+ this function removes the '_' completely.
+ """
+ import re
+
+ matches = re.match('^([0-9.]+)_((p|rc)[0-9]+)$', version)
+
+ if not matches:
+ return version
+
+ version = matches.group(1)
+ update = matches.group(2)
+
+ if matches.group(3) == "rc":
+ return version + '-' + update
+
+ return version + update
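+
+# Illustrative behaviour of convert_cve_version(), following the docstring
+# above (comments only, not used by the code):
+#
+#   convert_cve_version("8.3_p1")   # -> "8.3p1"   (update: '_' removed)
+#   convert_cve_version("6.2_rc1")  # -> "6.2-rc1" (release candidate: '_' -> '-')
+#   convert_cve_version("1.2.3")    # -> "1.2.3"   (no match: returned unchanged)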
diff --git a/poky/meta/lib/oe/packagedata.py b/poky/meta/lib/oe/packagedata.py
index a82085a792..feb834c0e3 100644
--- a/poky/meta/lib/oe/packagedata.py
+++ b/poky/meta/lib/oe/packagedata.py
@@ -57,6 +57,17 @@ def read_subpkgdata_dict(pkg, d):
ret[newvar] = subd[var]
return ret
+def read_subpkgdata_extended(pkg, d):
+ import json
+ import gzip
+
+ fn = d.expand("${PKGDATA_DIR}/extended/%s.json.gz" % pkg)
+ try:
+ with gzip.open(fn, "rt", encoding="utf-8") as f:
+ return json.load(f)
+ except FileNotFoundError:
+ return None
+
def _pkgmap(d):
"""Return a dictionary mapping package to recipe name."""
diff --git a/poky/meta/lib/oe/reproducible.py b/poky/meta/lib/oe/reproducible.py
index 0938e4cb39..1ed79b18ca 100644
--- a/poky/meta/lib/oe/reproducible.py
+++ b/poky/meta/lib/oe/reproducible.py
@@ -62,7 +62,8 @@ def get_source_date_epoch_from_git(d, sourcedir):
return None
bb.debug(1, "git repository: %s" % gitpath)
- p = subprocess.run(['git', '--git-dir', gitpath, 'log', '-1', '--pretty=%ct'], check=True, stdout=subprocess.PIPE)
+ p = subprocess.run(['git', '-c', 'log.showSignature=false', '--git-dir', gitpath, 'log', '-1', '--pretty=%ct'],
+ check=True, stdout=subprocess.PIPE)
return int(p.stdout.decode('utf-8'))
def get_source_date_epoch_from_youngest_file(d, sourcedir):
diff --git a/poky/meta/lib/oe/sbom.py b/poky/meta/lib/oe/sbom.py
new file mode 100644
index 0000000000..22ed5070ea
--- /dev/null
+++ b/poky/meta/lib/oe/sbom.py
@@ -0,0 +1,84 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: GPL-2.0-only
+#
+
+import collections
+import os
+
+DepRecipe = collections.namedtuple("DepRecipe", ("doc", "doc_sha1", "recipe"))
+DepSource = collections.namedtuple("DepSource", ("doc", "doc_sha1", "recipe", "file"))
+
+
+def get_recipe_spdxid(d):
+ return "SPDXRef-%s-%s" % ("Recipe", d.getVar("PN"))
+
+
+def get_download_spdxid(d, idx):
+ return "SPDXRef-Download-%s-%d" % (d.getVar("PN"), idx)
+
+
+def get_package_spdxid(pkg):
+ return "SPDXRef-Package-%s" % pkg
+
+
+def get_source_file_spdxid(d, idx):
+ return "SPDXRef-SourceFile-%s-%d" % (d.getVar("PN"), idx)
+
+
+def get_packaged_file_spdxid(pkg, idx):
+ return "SPDXRef-PackagedFile-%s-%d" % (pkg, idx)
+
+
+def get_image_spdxid(img):
+ return "SPDXRef-Image-%s" % img
+
+
+def get_sdk_spdxid(sdk):
+ return "SPDXRef-SDK-%s" % sdk
+
+
+def write_doc(d, spdx_doc, subdir, spdx_deploy=None, indent=None):
+ from pathlib import Path
+
+ if spdx_deploy is None:
+ spdx_deploy = Path(d.getVar("SPDXDEPLOY"))
+
+ dest = spdx_deploy / subdir / (spdx_doc.name + ".spdx.json")
+ dest.parent.mkdir(exist_ok=True, parents=True)
+ with dest.open("wb") as f:
+ doc_sha1 = spdx_doc.to_json(f, sort_keys=True, indent=indent)
+
+ l = spdx_deploy / "by-namespace" / spdx_doc.documentNamespace.replace("/", "_")
+ l.parent.mkdir(exist_ok=True, parents=True)
+ l.symlink_to(os.path.relpath(dest, l.parent))
+
+ return doc_sha1
+
+
+def read_doc(fn):
+ import hashlib
+ import oe.spdx
+ import io
+ import contextlib
+
+ @contextlib.contextmanager
+ def get_file():
+ if isinstance(fn, io.IOBase):
+ yield fn
+ else:
+ with fn.open("rb") as f:
+ yield f
+
+ with get_file() as f:
+ sha1 = hashlib.sha1()
+ while True:
+ chunk = f.read(4096)
+ if not chunk:
+ break
+ sha1.update(chunk)
+
+ f.seek(0)
+ doc = oe.spdx.SPDXDocument.from_json(f)
+
+ return (doc, sha1.hexdigest())
diff --git a/poky/meta/lib/oe/spdx.py b/poky/meta/lib/oe/spdx.py
new file mode 100644
index 0000000000..7aaf2af5ed
--- /dev/null
+++ b/poky/meta/lib/oe/spdx.py
@@ -0,0 +1,357 @@
+#
+# Copyright OpenEmbedded Contributors
+#
+# SPDX-License-Identifier: GPL-2.0-only
+#
+
+#
+# This library is intended to capture the JSON SPDX specification in a type
+# safe manner. It is not intended to encode any particular OE-specific
+# behaviors; see sbom.py for that.
+#
+# The SPDX specification document doesn't cover the JSON syntax for every
+# particular configuration, which can make it hard to determine what the JSON
+# syntax should be. I've found it is actually much simpler to read the official
+# SPDX JSON schema, which can be found at https://github.com/spdx/spdx-spec
+# under schemas/spdx-schema.json
+#
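+# A rough usage sketch (illustrative only: the names and field values below
+# are invented; only the classes defined in this file are assumed):
+#
+#   doc = SPDXDocument()
+#   doc.name = "example-doc"
+#   pkg = SPDXPackage(name="example-pkg", versionInfo="1.0")
+#   doc.packages.append(pkg)
+#
+# Serialization is handled by doc.to_json(f) and SPDXDocument.from_json(f),
+# as used by oe.sbom.write_doc() and oe.sbom.read_doc().
+#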
+
+import hashlib
+import itertools
+import json
+
+SPDX_VERSION = "2.2"
+
+
+#
+# The following support classes are used to implement SPDX objects
+#
+
+class _Property(object):
+ """
+ A generic SPDX object property. The different types will derive from this
+ class
+ """
+
+ def __init__(self, *, default=None):
+ self.default = default
+
+ def setdefault(self, dest, name):
+ if self.default is not None:
+ dest.setdefault(name, self.default)
+
+
+class _String(_Property):
+ """
+ A scalar string property for an SPDX object
+ """
+
+ def __init__(self, **kwargs):
+ super().__init__(**kwargs)
+
+ def set_property(self, attrs, name):
+ def get_helper(obj):
+ return obj._spdx[name]
+
+ def set_helper(obj, value):
+ obj._spdx[name] = value
+
+ def del_helper(obj):
+ del obj._spdx[name]
+
+ attrs[name] = property(get_helper, set_helper, del_helper)
+
+ def init(self, source):
+ return source
+
+
+class _Object(_Property):
+ """
+ A scalar property of an SPDX object whose value is another SPDX object
+ """
+
+ def __init__(self, cls, **kwargs):
+ super().__init__(**kwargs)
+ self.cls = cls
+
+ def set_property(self, attrs, name):
+ def get_helper(obj):
+ if name not in obj._spdx:
+ obj._spdx[name] = self.cls()
+ return obj._spdx[name]
+
+ def set_helper(obj, value):
+ obj._spdx[name] = value
+
+ def del_helper(obj):
+ del obj._spdx[name]
+
+ attrs[name] = property(get_helper, set_helper)
+
+ def init(self, source):
+ return self.cls(**source)
+
+
+class _ListProperty(_Property):
+ """
+ A list of SPDX properties
+ """
+
+ def __init__(self, prop, **kwargs):
+ super().__init__(**kwargs)
+ self.prop = prop
+
+ def set_property(self, attrs, name):
+ def get_helper(obj):
+ if name not in obj._spdx:
+ obj._spdx[name] = []
+ return obj._spdx[name]
+
+ def set_helper(obj, value):
+ obj._spdx[name] = list(value)
+
+ def del_helper(obj):
+ del obj._spdx[name]
+
+ attrs[name] = property(get_helper, set_helper, del_helper)
+
+ def init(self, source):
+ return [self.prop.init(o) for o in source]
+
+
+class _StringList(_ListProperty):
+ """
+ A list of strings as a property for an SPDX object
+ """
+
+ def __init__(self, **kwargs):
+ super().__init__(_String(), **kwargs)
+
+
+class _ObjectList(_ListProperty):
+ """
+ A list of SPDX objects as a property for an SPDX object
+ """
+
+ def __init__(self, cls, **kwargs):
+ super().__init__(_Object(cls), **kwargs)
+
+
+class MetaSPDXObject(type):
+ """
+ A metaclass that allows properties (anything derived from a _Property
+ class) to be defined for a SPDX object
+ """
+ def __new__(mcls, name, bases, attrs):
+ attrs["_properties"] = {}
+
+ for key in attrs.keys():
+ if isinstance(attrs[key], _Property):
+ prop = attrs[key]
+ attrs["_properties"][key] = prop
+ prop.set_property(attrs, key)
+
+ return super().__new__(mcls, name, bases, attrs)
+
+
+class SPDXObject(metaclass=MetaSPDXObject):
+ """
+ The base SPDX object; all SPDX spec classes must derive from this class
+ """
+ def __init__(self, **d):
+ self._spdx = {}
+
+ for name, prop in self._properties.items():
+ prop.setdefault(self._spdx, name)
+ if name in d:
+ self._spdx[name] = prop.init(d[name])
+
+ def serializer(self):
+ return self._spdx
+
+ def __setattr__(self, name, value):
+ if name in self._properties or name == "_spdx":
+ super().__setattr__(name, value)
+ return
+ raise KeyError("%r is not a valid SPDX property" % name)
+
+#
+# These are the SPDX objects implemented from the spec. The *only* properties
+# that can be added to these objects are ones directly specified in the SPDX
+# spec; however, you may add helper functions to make operations easier.
+#
+# Defaults should *only* be specified if the SPDX spec says there is a certain
+# required value for a field (e.g. dataLicense), or if the field is mandatory
+# and has a sane "this field is unknown" value (e.g. "NOASSERTION")
+#
+
+class SPDXAnnotation(SPDXObject):
+ annotationDate = _String()
+ annotationType = _String()
+ annotator = _String()
+ comment = _String()
+
+class SPDXChecksum(SPDXObject):
+ algorithm = _String()
+ checksumValue = _String()
+
+
+class SPDXRelationship(SPDXObject):
+ spdxElementId = _String()
+ relatedSpdxElement = _String()
+ relationshipType = _String()
+ comment = _String()
+ annotations = _ObjectList(SPDXAnnotation)
+
+
+class SPDXExternalReference(SPDXObject):
+ referenceCategory = _String()
+ referenceType = _String()
+ referenceLocator = _String()
+
+
+class SPDXPackageVerificationCode(SPDXObject):
+ packageVerificationCodeValue = _String()
+ packageVerificationCodeExcludedFiles = _StringList()
+
+
+class SPDXPackage(SPDXObject):
+ ALLOWED_CHECKSUMS = [
+ "SHA1",
+ "SHA224",
+ "SHA256",
+ "SHA384",
+ "SHA512",
+ "MD2",
+ "MD4",
+ "MD5",
+ "MD6",
+ ]
+
+ name = _String()
+ SPDXID = _String()
+ versionInfo = _String()
+ downloadLocation = _String(default="NOASSERTION")
+ supplier = _String(default="NOASSERTION")
+ homepage = _String()
+ licenseConcluded = _String(default="NOASSERTION")
+ licenseDeclared = _String(default="NOASSERTION")
+ summary = _String()
+ description = _String()
+ sourceInfo = _String()
+ copyrightText = _String(default="NOASSERTION")
+ licenseInfoFromFiles = _StringList(default=["NOASSERTION"])
+ externalRefs = _ObjectList(SPDXExternalReference)
+ packageVerificationCode = _Object(SPDXPackageVerificationCode)
+ hasFiles = _StringList()
+ packageFileName = _String()
+ annotations = _ObjectList(SPDXAnnotation)
+ checksums = _ObjectList(SPDXChecksum)
+
+
+class SPDXFile(SPDXObject):
+ SPDXID = _String()
+ fileName = _String()
+ licenseConcluded = _String(default="NOASSERTION")
+ copyrightText = _String(default="NOASSERTION")
+ licenseInfoInFiles = _StringList(default=["NOASSERTION"])
+ checksums = _ObjectList(SPDXChecksum)
+ fileTypes = _StringList()
+
+
+class SPDXCreationInfo(SPDXObject):
+ created = _String()
+ licenseListVersion = _String()
+ comment = _String()
+ creators = _StringList()
+
+
+class SPDXExternalDocumentRef(SPDXObject):
+ externalDocumentId = _String()
+ spdxDocument = _String()
+ checksum = _Object(SPDXChecksum)
+
+
+class SPDXExtractedLicensingInfo(SPDXObject):
+ name = _String()
+ comment = _String()
+ licenseId = _String()
+ extractedText = _String()
+
+
+class SPDXDocument(SPDXObject):
+ spdxVersion = _String(default="SPDX-" + SPDX_VERSION)
+ dataLicense = _String(default="CC0-1.0")
+ SPDXID = _String(default="SPDXRef-DOCUMENT")
+ name = _String()
+ documentNamespace = _String()
+ creationInfo = _Object(SPDXCreationInfo)
+ packages = _ObjectList(SPDXPackage)
+ files = _ObjectList(SPDXFile)
+ relationships = _ObjectList(SPDXRelationship)
+ externalDocumentRefs = _ObjectList(SPDXExternalDocumentRef)
+ hasExtractedLicensingInfos = _ObjectList(SPDXExtractedLicensingInfo)
+
+ def __init__(self, **d):
+ super().__init__(**d)
+
+ def to_json(self, f, *, sort_keys=False, indent=None, separators=None):
+ class Encoder(json.JSONEncoder):
+ def default(self, o):
+ if isinstance(o, SPDXObject):
+ return o.serializer()
+
+ return super().default(o)
+
+ sha1 = hashlib.sha1()
+ for chunk in Encoder(
+ sort_keys=sort_keys,
+ indent=indent,
+ separators=separators,
+ ).iterencode(self):
+ chunk = chunk.encode("utf-8")
+ f.write(chunk)
+ sha1.update(chunk)
+
+ return sha1.hexdigest()
+
+ @classmethod
+ def from_json(cls, f):
+ return cls(**json.load(f))
+
+ def add_relationship(self, _from, relationship, _to, *, comment=None, annotation=None):
+ if isinstance(_from, SPDXObject):
+ from_spdxid = _from.SPDXID
+ else:
+ from_spdxid = _from
+
+ if isinstance(_to, SPDXObject):
+ to_spdxid = _to.SPDXID
+ else:
+ to_spdxid = _to
+
+ r = SPDXRelationship(
+ spdxElementId=from_spdxid,
+ relatedSpdxElement=to_spdxid,
+ relationshipType=relationship,
+ )
+
+ if comment is not None:
+ r.comment = comment
+
+ if annotation is not None:
+ r.annotations.append(annotation)
+
+ self.relationships.append(r)
+
+ def find_by_spdxid(self, spdxid):
+ for o in itertools.chain(self.packages, self.files):
+ if o.SPDXID == spdxid:
+ return o
+ return None
+
+ def find_external_document_ref(self, namespace):
+ for r in self.externalDocumentRefs:
+ if r.spdxDocument == namespace:
+ return r
+ return None
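
The classes above are thin wrappers around a plain dict (_spdx), so assembling a document is mostly attribute assignment. A minimal usage sketch follows, assuming the module is importable as oe.spdx from bitbake's Python path; the recipe name, namespace and SPDXID values are made up for illustration:

    import sys
    import oe.spdx

    doc = oe.spdx.SPDXDocument()
    doc.name = "example-recipe"
    doc.documentNamespace = "http://example.com/spdxdocs/example-recipe"
    doc.creationInfo.created = "2023-05-04T00:00:00Z"
    doc.creationInfo.creators.append("Tool: example")

    pkg = oe.spdx.SPDXPackage()
    pkg.name = "example"
    pkg.versionInfo = "1.0"
    pkg.SPDXID = "SPDXRef-Package-example"
    doc.packages.append(pkg)

    # add_relationship() accepts either SPDXObject instances or raw SPDXID strings.
    doc.add_relationship(doc, "DESCRIBES", pkg)

    # to_json() streams UTF-8 to a binary file object and returns the SHA-1 of
    # the serialized output, so pass something like sys.stdout.buffer or a file
    # opened in "wb" mode.
    digest = doc.to_json(sys.stdout.buffer, indent=2)

Defaults such as dataLicense and the various "NOASSERTION" fields are filled in by _Property.setdefault(), so only recipe-specific data needs to be set explicitly.
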
diff --git a/poky/meta/lib/oeqa/runtime/cases/rpm.py b/poky/meta/lib/oeqa/runtime/cases/rpm.py
index 7a9d62c003..2b6cfe5ff2 100644
--- a/poky/meta/lib/oeqa/runtime/cases/rpm.py
+++ b/poky/meta/lib/oeqa/runtime/cases/rpm.py
@@ -49,21 +49,20 @@ class RpmBasicTest(OERuntimeTestCase):
msg = 'status: %s. Cannot run rpm -qa: %s' % (status, output)
self.assertEqual(status, 0, msg=msg)
- def check_no_process_for_user(u):
- _, output = self.target.run(self.tc.target_cmds['ps'])
- if u + ' ' in output:
- return False
- else:
- return True
+ def wait_for_no_process_for_user(u, timeout = 120):
+ timeout_at = time.time() + timeout
+ while time.time() < timeout_at:
+ _, output = self.target.run(self.tc.target_cmds['ps'])
+ if u + ' ' not in output:
+ return
+ time.sleep(1)
+ user_pss = [ps for ps in output.split("\n") if u + ' ' in ps]
+ msg = "There are still processes running for user %s: %s" % (u, "\n".join(user_pss))
+ self.fail(msg)
def unset_up_test_user(u):
# ensure no test1 process is running
- timeout = time.time() + 30
- while time.time() < timeout:
- if check_no_process_for_user(u):
- break
- else:
- time.sleep(1)
+ wait_for_no_process_for_user(u)
status, output = self.target.run('userdel -r %s' % u)
msg = 'Failed to erase user: %s' % output
self.assertTrue(status == 0, msg=msg)
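
The change above replaces a fixed 30-second poll with a deadline-based wait so slow targets get up to two minutes for the test user's processes to exit. The same pattern, factored into a generic helper purely for illustration (the condition callable and defaults are hypothetical, not part of the change):

    import time

    def wait_for(condition, timeout=120, interval=1):
        # Poll condition() until it returns True or the timeout expires.
        deadline = time.time() + timeout
        while time.time() < deadline:
            if condition():
                return True
            time.sleep(interval)
        return False

A caller would then express the wait as wait_for(lambda: user + ' ' not in ps_output(), timeout=120) and fail the test only when the helper returns False.
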
diff --git a/poky/meta/lib/oeqa/runtime/cases/rtc.py b/poky/meta/lib/oeqa/runtime/cases/rtc.py
index c4e6681324..39f4d29f23 100644
--- a/poky/meta/lib/oeqa/runtime/cases/rtc.py
+++ b/poky/meta/lib/oeqa/runtime/cases/rtc.py
@@ -1,5 +1,6 @@
from oeqa.runtime.case import OERuntimeTestCase
from oeqa.core.decorator.depends import OETestDepends
+from oeqa.core.decorator.data import skipIfFeature
from oeqa.runtime.decorator.package import OEHasPackage
import re
@@ -16,12 +17,14 @@ class RTCTest(OERuntimeTestCase):
self.logger.debug('Starting systemd-timesyncd daemon')
self.target.run('systemctl enable --now --runtime systemd-timesyncd')
+ @skipIfFeature('read-only-rootfs',
+ 'Test does not work with read-only-rootfs in IMAGE_FEATURES')
@OETestDepends(['ssh.SSHTest.test_ssh'])
@OEHasPackage(['coreutils', 'busybox'])
def test_rtc(self):
(status, output) = self.target.run('hwclock -r')
self.assertEqual(status, 0, msg='Failed to get RTC time, output: %s' % output)
-
+
(status, current_datetime) = self.target.run('date +"%m%d%H%M%Y"')
self.assertEqual(status, 0, msg='Failed to get system current date & time, output: %s' % current_datetime)
@@ -32,7 +35,6 @@ class RTCTest(OERuntimeTestCase):
(status, output) = self.target.run('date %s' % current_datetime)
self.assertEqual(status, 0, msg='Failed to reset system date & time, output: %s' % output)
-
+
(status, output) = self.target.run('hwclock -w')
self.assertEqual(status, 0, msg='Failed to reset RTC time, output: %s' % output)
-
diff --git a/poky/meta/lib/oeqa/runtime/context.py b/poky/meta/lib/oeqa/runtime/context.py
index d707ab263a..8a0dbd0736 100644
--- a/poky/meta/lib/oeqa/runtime/context.py
+++ b/poky/meta/lib/oeqa/runtime/context.py
@@ -67,11 +67,11 @@ class OERuntimeTestContextExecutor(OETestContextExecutor):
% self.default_target_type)
runtime_group.add_argument('--target-ip', action='store',
default=self.default_target_ip,
- help="IP address of device under test, default: %s" \
+ help="IP address and optionally ssh port (default 22) of device under test, for example '192.168.0.7:22'. Default: %s" \
% self.default_target_ip)
runtime_group.add_argument('--server-ip', action='store',
default=self.default_target_ip,
- help="IP address of device under test, default: %s" \
+ help="IP address of the test host from test target machine, default: %s" \
% self.default_server_ip)
runtime_group.add_argument('--host-dumper-dir', action='store',
diff --git a/poky/meta/lib/oeqa/selftest/cases/cve_check.py b/poky/meta/lib/oeqa/selftest/cases/cve_check.py
index d0b2213703..22ffeffd29 100644
--- a/poky/meta/lib/oeqa/selftest/cases/cve_check.py
+++ b/poky/meta/lib/oeqa/selftest/cases/cve_check.py
@@ -48,6 +48,25 @@ class CVECheck(OESelftestTestCase):
self.assertTrue( result ,msg="Failed to compare version with suffix '1.0_patch2' < '1.0_patch3'")
+ def test_convert_cve_version(self):
+ from oe.cve_check import convert_cve_version
+
+ # Default format
+ self.assertEqual(convert_cve_version("8.3"), "8.3")
+ self.assertEqual(convert_cve_version(""), "")
+
+ # OpenSSL format version
+ self.assertEqual(convert_cve_version("1.1.1t"), "1.1.1t")
+
+ # OpenSSH format
+ self.assertEqual(convert_cve_version("8.3_p1"), "8.3p1")
+ self.assertEqual(convert_cve_version("8.3_p22"), "8.3p22")
+
+ # Linux kernel format
+ self.assertEqual(convert_cve_version("6.2_rc8"), "6.2-rc8")
+ self.assertEqual(convert_cve_version("6.2_rc31"), "6.2-rc31")
+
+
def test_recipe_report_json(self):
config = """
INHERIT += "cve-check"
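
The new test cases pin down how oe.cve_check.convert_cve_version() is expected to normalize recipe version strings before they are compared against CVE data. Below is a behaviour-only sketch that satisfies the assertions above; the real implementation in meta/lib/oe/cve_check.py may be structured differently:

    def convert_cve_version(version):
        # Empty and plain versions pass through unchanged ("", "8.3", "1.1.1t").
        if not version:
            return version
        # OpenSSH style: 8.3_p1 -> 8.3p1
        if "_p" in version:
            return version.replace("_p", "p", 1)
        # Linux kernel style: 6.2_rc8 -> 6.2-rc8
        if "_rc" in version:
            return version.replace("_rc", "-rc", 1)
        return version
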
diff --git a/poky/meta/lib/oeqa/selftest/cases/devtool.py b/poky/meta/lib/oeqa/selftest/cases/devtool.py
index 87e71632ab..5febdde28e 100644
--- a/poky/meta/lib/oeqa/selftest/cases/devtool.py
+++ b/poky/meta/lib/oeqa/selftest/cases/devtool.py
@@ -1323,7 +1323,7 @@ class DevtoolExtractTests(DevtoolBase):
# Now really test deploy-target
result = runCmd('devtool deploy-target -c %s root@%s' % (testrecipe, qemu.ip))
# Run a test command to see if it was installed properly
- sshargs = '-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no'
+ sshargs = '-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o HostKeyAlgorithms=+ssh-rsa'
result = runCmd('ssh %s root@%s %s' % (sshargs, qemu.ip, testcommand))
# Check if it deployed all of the files with the right ownership/perms
# First look on the host - need to do this under pseudo to get the correct ownership/perms
diff --git a/poky/meta/lib/oeqa/selftest/cases/prservice.py b/poky/meta/lib/oeqa/selftest/cases/prservice.py
index 578b2b4dd9..fdc1e40058 100644
--- a/poky/meta/lib/oeqa/selftest/cases/prservice.py
+++ b/poky/meta/lib/oeqa/selftest/cases/prservice.py
@@ -75,7 +75,7 @@ class BitbakePrTests(OESelftestTestCase):
exported_db_path = os.path.join(self.builddir, 'export.inc')
export_result = runCmd("bitbake-prserv-tool export %s" % exported_db_path, ignore_status=True)
self.assertEqual(export_result.status, 0, msg="PR Service database export failed: %s" % export_result.output)
- self.assertTrue(os.path.exists(exported_db_path))
+ self.assertTrue(os.path.exists(exported_db_path), msg="%s didn't exist, tool output %s" % (exported_db_path, export_result.output))
if replace_current_db:
current_db_path = os.path.join(get_bb_var('PERSISTENT_DIR'), 'prserv.sqlite3')
diff --git a/poky/meta/lib/oeqa/selftest/cases/reproducible.py b/poky/meta/lib/oeqa/selftest/cases/reproducible.py
index 4b606e7e64..adaabee47b 100644
--- a/poky/meta/lib/oeqa/selftest/cases/reproducible.py
+++ b/poky/meta/lib/oeqa/selftest/cases/reproducible.py
@@ -39,7 +39,6 @@ exclude_packages = [
'gstreamer1.0-python',
'hwlatdetect',
'kernel-devsrc',
- 'libaprutil',
'libcap-ng',
'libjson',
'libproxy',
diff --git a/poky/meta/lib/oeqa/selftest/cases/runtime_test.py b/poky/meta/lib/oeqa/selftest/cases/runtime_test.py
index df11984713..5439bd426b 100644
--- a/poky/meta/lib/oeqa/selftest/cases/runtime_test.py
+++ b/poky/meta/lib/oeqa/selftest/cases/runtime_test.py
@@ -175,8 +175,8 @@ class TestImage(OESelftestTestCase):
if "DISPLAY" not in os.environ:
self.skipTest("virgl gtk test must be run inside a X session")
distro = oe.lsb.distro_identifier()
- if distro and distro == 'almalinux-8.6':
- self.skipTest('virgl isn\'t working with Alma 8')
+ if distro and distro.startswith('almalinux'):
+ self.skipTest('virgl isn\'t working with Alma Linux')
if distro and distro == 'debian-8':
self.skipTest('virgl isn\'t working with Debian 8')
if distro and distro == 'centos-7':
@@ -191,6 +191,8 @@ class TestImage(OESelftestTestCase):
self.skipTest('virgl isn\'t working with Fedora 36')
if distro and distro == 'opensuseleap-15.0':
self.skipTest('virgl isn\'t working with Opensuse 15.0')
+ if distro and distro == 'ubuntu-22.04':
+ self.skipTest('virgl isn\'t working with Ubuntu 22.04')
qemu_packageconfig = get_bb_var('PACKAGECONFIG', 'qemu-system-native')
sdl_packageconfig = get_bb_var('PACKAGECONFIG', 'libsdl2-native')
@@ -234,7 +236,7 @@ class TestImage(OESelftestTestCase):
except FileNotFoundError:
self.skipTest("/dev/dri directory does not exist; no render nodes available on this machine.")
try:
- dripath = subprocess.check_output("pkg-config --variable=dridriverdir dri", shell=True)
+ dripath = subprocess.check_output("PATH=/bin:/usr/bin:$PATH pkg-config --variable=dridriverdir dri", shell=True)
except subprocess.CalledProcessError as e:
self.skipTest("Could not determine the path to dri drivers on the host via pkg-config.\nPlease install Mesa development files (particularly, dri.pc) on the host machine.")
qemu_packageconfig = get_bb_var('PACKAGECONFIG', 'qemu-system-native')
diff --git a/poky/meta/lib/oeqa/selftest/cases/tinfoil.py b/poky/meta/lib/oeqa/selftest/cases/tinfoil.py
index 686ce7e6b9..6668d7cdc8 100644
--- a/poky/meta/lib/oeqa/selftest/cases/tinfoil.py
+++ b/poky/meta/lib/oeqa/selftest/cases/tinfoil.py
@@ -65,6 +65,20 @@ class TinfoilTests(OESelftestTestCase):
localdata.setVar('PN', 'hello')
self.assertEqual('hello', localdata.getVar('BPN'))
+ # The config_data API to parse_recipe_file is used by:
+ # layerindex-web layerindex/update_layer.py
+ def test_parse_recipe_custom_data(self):
+ with bb.tinfoil.Tinfoil() as tinfoil:
+ tinfoil.prepare(config_only=False, quiet=2)
+ localdata = bb.data.createCopy(tinfoil.config_data)
+ localdata.setVar("TESTVAR", "testval")
+ testrecipe = 'mdadm'
+ best = tinfoil.find_best_provider(testrecipe)
+ if not best:
+ self.fail('Unable to find recipe providing %s' % testrecipe)
+ rd = tinfoil.parse_recipe_file(best[3], config_data=localdata)
+ self.assertEqual("testval", rd.getVar('TESTVAR'))
+
def test_list_recipes(self):
with bb.tinfoil.Tinfoil() as tinfoil:
tinfoil.prepare(config_only=False, quiet=2)
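
Besides guarding against regressions, the new test doubles as a usage example for the config_data keyword of parse_recipe_file() that layerindex-web depends on. Outside the selftest harness the same flow looks roughly like this; it assumes a bitbake-aware environment where the bb modules are importable, and the recipe and variable names are only illustrative:

    import bb.tinfoil
    import bb.data

    with bb.tinfoil.Tinfoil() as tinfoil:
        tinfoil.prepare(config_only=False, quiet=2)
        # Parse a recipe against a customized copy of the global configuration.
        localdata = bb.data.createCopy(tinfoil.config_data)
        localdata.setVar("TESTVAR", "testval")
        best = tinfoil.find_best_provider("mdadm")
        rd = tinfoil.parse_recipe_file(best[3], config_data=localdata)
        print(rd.getVar("TESTVAR"))   # -> testval
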
diff --git a/poky/meta/lib/oeqa/utils/qemurunner.py b/poky/meta/lib/oeqa/utils/qemurunner.py
index de0dff3ff0..c84d299a80 100644
--- a/poky/meta/lib/oeqa/utils/qemurunner.py
+++ b/poky/meta/lib/oeqa/utils/qemurunner.py
@@ -432,10 +432,13 @@ class QemuRunner:
except OSError as e:
if e.errno != errno.ESRCH:
raise
- endtime = time.time() + self.runqemutime
- while self.runqemu.poll() is None and time.time() < endtime:
- time.sleep(1)
- if self.runqemu.poll() is None:
+ try:
+ outs, errs = self.runqemu.communicate(timeout = self.runqemutime)
+ if outs:
+ self.logger.info("Output from runqemu:\n%s", outs.decode("utf-8"))
+ if errs:
+ self.logger.info("Stderr from runqemu:\n%s", errs.decode("utf-8"))
+ except TimeoutExpired:
self.logger.debug("Sending SIGKILL to runqemu")
os.killpg(os.getpgid(self.runqemu.pid), signal.SIGKILL)
if not self.runqemu.stdout.closed:
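
The rewrite above drops the manual poll()/sleep loop in favour of Popen.communicate() with a timeout, which both waits for runqemu to exit and drains its stdout/stderr so the output can be logged. Stripped of the qemu specifics, the underlying standard-library pattern looks like this (command and timeout values are placeholders):

    import subprocess

    proc = subprocess.Popen(["sleep", "300"],
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    proc.terminate()
    try:
        outs, errs = proc.communicate(timeout=120)
    except subprocess.TimeoutExpired:
        proc.kill()                    # escalate if it ignored the first signal
        outs, errs = proc.communicate()
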
diff --git a/poky/meta/recipes-bsp/grub/files/CVE-2022-2601.patch b/poky/meta/recipes-bsp/grub/files/CVE-2022-2601.patch
new file mode 100644
index 0000000000..090f693be3
--- /dev/null
+++ b/poky/meta/recipes-bsp/grub/files/CVE-2022-2601.patch
@@ -0,0 +1,87 @@
+From e8060722acf0bcca037982d7fb29472363ccdfd4 Mon Sep 17 00:00:00 2001
+From: Zhang Boyang <zhangboyang.id@gmail.com>
+Date: Fri, 5 Aug 2022 01:58:27 +0800
+Subject: [PATCH] font: Fix several integer overflows in
+ grub_font_construct_glyph()
+
+This patch fixes several integer overflows in grub_font_construct_glyph().
+Glyphs of invalid size, zero or leading to an overflow, are rejected.
+The inconsistency between "glyph" and "max_glyph_size" when grub_malloc()
+returns NULL is fixed too.
+
+Fixes: CVE-2022-2601
+
+Reported-by: Zhang Boyang <zhangboyang.id@gmail.com>
+Signed-off-by: Zhang Boyang <zhangboyang.id@gmail.com>
+Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
+
+Signed-off-by: Xiangyu Chen <xiangyu.chen@windriver.com>
+
+Upstream-Status: Backport [https://git.savannah.gnu.org/cgit/grub.git/commit/?id=768e1ef2fc159f6e14e7246e4be09363708ac39e]
+CVE: CVE-2022-2601
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ grub-core/font/font.c | 29 +++++++++++++++++------------
+ 1 file changed, 17 insertions(+), 12 deletions(-)
+
+diff --git a/grub-core/font/font.c b/grub-core/font/font.c
+index df17dba..f110db9 100644
+--- a/grub-core/font/font.c
++++ b/grub-core/font/font.c
+@@ -1509,6 +1509,7 @@ grub_font_construct_glyph (grub_font_t hinted_font,
+ struct grub_video_signed_rect bounds;
+ static struct grub_font_glyph *glyph = 0;
+ static grub_size_t max_glyph_size = 0;
++ grub_size_t cur_glyph_size;
+
+ ensure_comb_space (glyph_id);
+
+@@ -1525,29 +1526,33 @@ grub_font_construct_glyph (grub_font_t hinted_font,
+ if (!glyph_id->ncomb && !glyph_id->attributes)
+ return main_glyph;
+
+- if (max_glyph_size < sizeof (*glyph) + (bounds.width * bounds.height + GRUB_CHAR_BIT - 1) / GRUB_CHAR_BIT)
++ if (grub_video_bitmap_calc_1bpp_bufsz (bounds.width, bounds.height, &cur_glyph_size) ||
++ grub_add (sizeof (*glyph), cur_glyph_size, &cur_glyph_size))
++ return main_glyph;
++
++ if (max_glyph_size < cur_glyph_size)
+ {
+ grub_free (glyph);
+- max_glyph_size = (sizeof (*glyph) + (bounds.width * bounds.height + GRUB_CHAR_BIT - 1) / GRUB_CHAR_BIT) * 2;
+- if (max_glyph_size < 8)
+- max_glyph_size = 8;
+- glyph = grub_malloc (max_glyph_size);
++ if (grub_mul (cur_glyph_size, 2, &max_glyph_size))
++ max_glyph_size = 0;
++ glyph = max_glyph_size > 0 ? grub_malloc (max_glyph_size) : NULL;
+ }
+ if (!glyph)
+ {
++ max_glyph_size = 0;
+ grub_errno = GRUB_ERR_NONE;
+ return main_glyph;
+ }
+
+- grub_memset (glyph, 0, sizeof (*glyph)
+- + (bounds.width * bounds.height
+- + GRUB_CHAR_BIT - 1) / GRUB_CHAR_BIT);
++ grub_memset (glyph, 0, cur_glyph_size);
+
+ glyph->font = main_glyph->font;
+- glyph->width = bounds.width;
+- glyph->height = bounds.height;
+- glyph->offset_x = bounds.x;
+- glyph->offset_y = bounds.y;
++ if (bounds.width == 0 || bounds.height == 0 ||
++ grub_cast (bounds.width, &glyph->width) ||
++ grub_cast (bounds.height, &glyph->height) ||
++ grub_cast (bounds.x, &glyph->offset_x) ||
++ grub_cast (bounds.y, &glyph->offset_y))
++ return main_glyph;
+
+ if (glyph_id->attributes & GRUB_UNICODE_GLYPH_ATTRIBUTE_MIRROR)
+ grub_font_blit_glyph_mirror (glyph, main_glyph,
+--
+2.25.1
+
diff --git a/poky/meta/recipes-bsp/grub/files/CVE-2022-28735.patch b/poky/meta/recipes-bsp/grub/files/CVE-2022-28735.patch
new file mode 100644
index 0000000000..89b653a8da
--- /dev/null
+++ b/poky/meta/recipes-bsp/grub/files/CVE-2022-28735.patch
@@ -0,0 +1,271 @@
+From 6fe755c5c07bb386fda58306bfd19e4a1c974c53 Mon Sep 17 00:00:00 2001
+From: Julian Andres Klode <julian.klode@canonical.com>
+Date: Thu, 2 Dec 2021 15:03:53 +0100
+Subject: kern/efi/sb: Reject non-kernel files in the shim_lock verifier
+
+Upstream-Status: Backport [https://git.savannah.gnu.org/cgit/grub.git/commit/?id=6fe755c5c07bb386fda58306bfd19e4a1c974c53]
+CVE: CVE-2022-28735
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+
+We must not allow other verifiers to pass things like the GRUB modules.
+Instead of maintaining a blocklist, maintain an allowlist of things
+that we do not care about.
+
+This allowlist really should be made reusable, and shared by the
+lockdown verifier, but this is the minimal patch addressing
+security concerns where the TPM verifier was able to mark modules
+as verified (or the OpenPGP verifier for that matter), when it
+should not do so on shim-powered secure boot systems.
+
+Fixes: CVE-2022-28735
+
+Signed-off-by: Julian Andres Klode <julian.klode@canonical.com>
+Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
+---
+ grub-core/kern/efi/sb.c | 221 ++++++++++++++++++++++++++++++++++++++++
+ include/grub/verify.h | 1 +
+ 2 files changed, 222 insertions(+)
+ create mode 100644 grub-core/kern/efi/sb.c
+
+diff --git a/grub-core/kern/efi/sb.c b/grub-core/kern/efi/sb.c
+new file mode 100644
+index 0000000..89c4bb3
+--- /dev/null
++++ b/grub-core/kern/efi/sb.c
+@@ -0,0 +1,221 @@
++/*
++ * GRUB -- GRand Unified Bootloader
++ * Copyright (C) 2020 Free Software Foundation, Inc.
++ *
++ * GRUB is free software: you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License as published by
++ * the Free Software Foundation, either version 3 of the License, or
++ * (at your option) any later version.
++ *
++ * GRUB is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with GRUB. If not, see <http://www.gnu.org/licenses/>.
++ *
++ * UEFI Secure Boot related checkings.
++ */
++
++#include <grub/efi/efi.h>
++#include <grub/efi/pe32.h>
++#include <grub/efi/sb.h>
++#include <grub/env.h>
++#include <grub/err.h>
++#include <grub/file.h>
++#include <grub/i386/linux.h>
++#include <grub/kernel.h>
++#include <grub/mm.h>
++#include <grub/types.h>
++#include <grub/verify.h>
++
++static grub_efi_guid_t shim_lock_guid = GRUB_EFI_SHIM_LOCK_GUID;
++
++/*
++ * Determine whether we're in secure boot mode.
++ *
++ * Please keep the logic in sync with the Linux kernel,
++ * drivers/firmware/efi/libstub/secureboot.c:efi_get_secureboot().
++ */
++grub_uint8_t
++grub_efi_get_secureboot (void)
++{
++ static grub_efi_guid_t efi_variable_guid = GRUB_EFI_GLOBAL_VARIABLE_GUID;
++ grub_efi_status_t status;
++ grub_efi_uint32_t attr = 0;
++ grub_size_t size = 0;
++ grub_uint8_t *secboot = NULL;
++ grub_uint8_t *setupmode = NULL;
++ grub_uint8_t *moksbstate = NULL;
++ grub_uint8_t secureboot = GRUB_EFI_SECUREBOOT_MODE_UNKNOWN;
++ const char *secureboot_str = "UNKNOWN";
++
++ status = grub_efi_get_variable ("SecureBoot", &efi_variable_guid,
++ &size, (void **) &secboot);
++
++ if (status == GRUB_EFI_NOT_FOUND)
++ {
++ secureboot = GRUB_EFI_SECUREBOOT_MODE_DISABLED;
++ goto out;
++ }
++
++ if (status != GRUB_EFI_SUCCESS)
++ goto out;
++
++ status = grub_efi_get_variable ("SetupMode", &efi_variable_guid,
++ &size, (void **) &setupmode);
++
++ if (status != GRUB_EFI_SUCCESS)
++ goto out;
++
++ if ((*secboot == 0) || (*setupmode == 1))
++ {
++ secureboot = GRUB_EFI_SECUREBOOT_MODE_DISABLED;
++ goto out;
++ }
++
++ /*
++ * See if a user has put the shim into insecure mode. If so, and if the
++ * variable doesn't have the runtime attribute set, we might as well
++ * honor that.
++ */
++ status = grub_efi_get_variable_with_attributes ("MokSBState", &shim_lock_guid,
++ &size, (void **) &moksbstate, &attr);
++
++ /* If it fails, we don't care why. Default to secure. */
++ if (status != GRUB_EFI_SUCCESS)
++ {
++ secureboot = GRUB_EFI_SECUREBOOT_MODE_ENABLED;
++ goto out;
++ }
++
++ if (!(attr & GRUB_EFI_VARIABLE_RUNTIME_ACCESS) && *moksbstate == 1)
++ {
++ secureboot = GRUB_EFI_SECUREBOOT_MODE_DISABLED;
++ goto out;
++ }
++
++ secureboot = GRUB_EFI_SECUREBOOT_MODE_ENABLED;
++
++ out:
++ grub_free (moksbstate);
++ grub_free (setupmode);
++ grub_free (secboot);
++
++ if (secureboot == GRUB_EFI_SECUREBOOT_MODE_DISABLED)
++ secureboot_str = "Disabled";
++ else if (secureboot == GRUB_EFI_SECUREBOOT_MODE_ENABLED)
++ secureboot_str = "Enabled";
++
++ grub_dprintf ("efi", "UEFI Secure Boot state: %s\n", secureboot_str);
++
++ return secureboot;
++}
++
++static grub_err_t
++shim_lock_verifier_init (grub_file_t io __attribute__ ((unused)),
++ enum grub_file_type type,
++ void **context __attribute__ ((unused)),
++ enum grub_verify_flags *flags)
++{
++ *flags = GRUB_VERIFY_FLAGS_NONE;
++
++ switch (type & GRUB_FILE_TYPE_MASK)
++ {
++ /* Files we check. */
++ case GRUB_FILE_TYPE_LINUX_KERNEL:
++ case GRUB_FILE_TYPE_MULTIBOOT_KERNEL:
++ case GRUB_FILE_TYPE_BSD_KERNEL:
++ case GRUB_FILE_TYPE_XNU_KERNEL:
++ case GRUB_FILE_TYPE_PLAN9_KERNEL:
++ case GRUB_FILE_TYPE_EFI_CHAINLOADED_IMAGE:
++ *flags = GRUB_VERIFY_FLAGS_SINGLE_CHUNK;
++ return GRUB_ERR_NONE;
++
++ /* Files that do not affect secureboot state. */
++ case GRUB_FILE_TYPE_NONE:
++ case GRUB_FILE_TYPE_LOOPBACK:
++ case GRUB_FILE_TYPE_LINUX_INITRD:
++ case GRUB_FILE_TYPE_OPENBSD_RAMDISK:
++ case GRUB_FILE_TYPE_XNU_RAMDISK:
++ case GRUB_FILE_TYPE_SIGNATURE:
++ case GRUB_FILE_TYPE_PUBLIC_KEY:
++ case GRUB_FILE_TYPE_PUBLIC_KEY_TRUST:
++ case GRUB_FILE_TYPE_PRINT_BLOCKLIST:
++ case GRUB_FILE_TYPE_TESTLOAD:
++ case GRUB_FILE_TYPE_GET_SIZE:
++ case GRUB_FILE_TYPE_FONT:
++ case GRUB_FILE_TYPE_ZFS_ENCRYPTION_KEY:
++ case GRUB_FILE_TYPE_CAT:
++ case GRUB_FILE_TYPE_HEXCAT:
++ case GRUB_FILE_TYPE_CMP:
++ case GRUB_FILE_TYPE_HASHLIST:
++ case GRUB_FILE_TYPE_TO_HASH:
++ case GRUB_FILE_TYPE_KEYBOARD_LAYOUT:
++ case GRUB_FILE_TYPE_PIXMAP:
++ case GRUB_FILE_TYPE_GRUB_MODULE_LIST:
++ case GRUB_FILE_TYPE_CONFIG:
++ case GRUB_FILE_TYPE_THEME:
++ case GRUB_FILE_TYPE_GETTEXT_CATALOG:
++ case GRUB_FILE_TYPE_FS_SEARCH:
++ case GRUB_FILE_TYPE_LOADENV:
++ case GRUB_FILE_TYPE_SAVEENV:
++ case GRUB_FILE_TYPE_VERIFY_SIGNATURE:
++ *flags = GRUB_VERIFY_FLAGS_SKIP_VERIFICATION;
++ return GRUB_ERR_NONE;
++
++ /* Other files. */
++ default:
++ return grub_error (GRUB_ERR_ACCESS_DENIED, N_("prohibited by secure boot policy"));
++ }
++}
++
++static grub_err_t
++shim_lock_verifier_write (void *context __attribute__ ((unused)), void *buf, grub_size_t size)
++{
++ grub_efi_shim_lock_protocol_t *sl = grub_efi_locate_protocol (&shim_lock_guid, 0);
++
++ if (!sl)
++ return grub_error (GRUB_ERR_ACCESS_DENIED, N_("shim_lock protocol not found"));
++
++ if (sl->verify (buf, size) != GRUB_EFI_SUCCESS)
++ return grub_error (GRUB_ERR_BAD_SIGNATURE, N_("bad shim signature"));
++
++ return GRUB_ERR_NONE;
++}
++
++struct grub_file_verifier shim_lock_verifier =
++ {
++ .name = "shim_lock_verifier",
++ .init = shim_lock_verifier_init,
++ .write = shim_lock_verifier_write
++ };
++
++void
++grub_shim_lock_verifier_setup (void)
++{
++ struct grub_module_header *header;
++ grub_efi_shim_lock_protocol_t *sl =
++ grub_efi_locate_protocol (&shim_lock_guid, 0);
++
++ /* shim_lock is missing, check if GRUB image is built with --disable-shim-lock. */
++ if (!sl)
++ {
++ FOR_MODULES (header)
++ {
++ if (header->type == OBJ_TYPE_DISABLE_SHIM_LOCK)
++ return;
++ }
++ }
++
++ /* Secure Boot is off. Do not load shim_lock. */
++ if (grub_efi_get_secureboot () != GRUB_EFI_SECUREBOOT_MODE_ENABLED)
++ return;
++
++ /* Enforce shim_lock_verifier. */
++ grub_verifier_register (&shim_lock_verifier);
++
++ grub_env_set ("shim_lock", "y");
++ grub_env_export ("shim_lock");
++}
+diff --git a/include/grub/verify.h b/include/grub/verify.h
+index cd129c3..672ae16 100644
+--- a/include/grub/verify.h
++++ b/include/grub/verify.h
+@@ -24,6 +24,7 @@
+
+ enum grub_verify_flags
+ {
++ GRUB_VERIFY_FLAGS_NONE = 0,
+ GRUB_VERIFY_FLAGS_SKIP_VERIFICATION = 1,
+ GRUB_VERIFY_FLAGS_SINGLE_CHUNK = 2,
+ /* Defer verification to another authority. */
+--
+2.25.1
+
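
The verifier in this backport is an allowlist keyed on file type: kernel-like images must be verified by shim in a single chunk, file types that cannot influence the secure-boot state skip verification, and anything else is rejected outright. Reduced to its control flow (a sketch only; the strings stand in for the GRUB_FILE_TYPE_* constants and the lists are abridged):

    VERIFIED = {"linux-kernel", "multiboot-kernel", "bsd-kernel",
                "xnu-kernel", "plan9-kernel", "efi-chainloaded-image"}
    SKIPPED = {"none", "loopback", "linux-initrd", "config", "theme", "font"}

    def shim_lock_init(file_type):
        if file_type in VERIFIED:
            return "single-chunk"       # buffer the file and hand it to shim_lock
        if file_type in SKIPPED:
            return "skip-verification"  # harmless to the secure boot state
        raise PermissionError("prohibited by secure boot policy")
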
diff --git a/poky/meta/recipes-bsp/grub/files/CVE-2022-3775.patch b/poky/meta/recipes-bsp/grub/files/CVE-2022-3775.patch
new file mode 100644
index 0000000000..e2e3f35584
--- /dev/null
+++ b/poky/meta/recipes-bsp/grub/files/CVE-2022-3775.patch
@@ -0,0 +1,97 @@
+From fdbe7209152ad6f09a1166f64f162017f2145ba3 Mon Sep 17 00:00:00 2001
+From: Zhang Boyang <zhangboyang.id@gmail.com>
+Date: Mon, 24 Oct 2022 08:05:35 +0800
+Subject: [PATCH] font: Fix an integer underflow in blit_comb()
+
+The expression (ctx.bounds.height - combining_glyphs[i]->height) / 2 may
+evaluate to a very big invalid value even if both ctx.bounds.height and
+combining_glyphs[i]->height are small integers. For example, if
+ctx.bounds.height is 10 and combining_glyphs[i]->height is 12, this
+expression evaluates to 2147483647 (expected -1). This is because
+coordinates are allowed to be negative but ctx.bounds.height is an
+unsigned int. So, the subtraction operates on unsigned ints and
+underflows to a very big value. The division makes things even worse.
+The quotient is still an invalid value even if converted back to int.
+
+This patch fixes the problem by casting ctx.bounds.height to int. As
+a result the subtraction will operate on int and grub_uint16_t which
+will be promoted to an int. So, the underflow will no longer happen. Other
+uses of ctx.bounds.height (and ctx.bounds.width) are also casted to int,
+to ensure coordinates are always calculated on signed integers.
+
+Fixes: CVE-2022-3775
+
+Reported-by: Daniel Axtens <dja@axtens.net>
+Signed-off-by: Zhang Boyang <zhangboyang.id@gmail.com>
+Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
+
+Signed-off-by: Xiangyu Chen <xiangyu.chen@windriver.com>
+
+Upstream-Status: Backport [https://git.savannah.gnu.org/cgit/grub.git/commit/?id=992c06191babc1e109caf40d6a07ec6fdef427af]
+CVE: CVE-2022-3775
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ grub-core/font/font.c | 16 ++++++++--------
+ 1 file changed, 8 insertions(+), 8 deletions(-)
+
+diff --git a/grub-core/font/font.c b/grub-core/font/font.c
+index f110db9..3b76b22 100644
+--- a/grub-core/font/font.c
++++ b/grub-core/font/font.c
+@@ -1200,12 +1200,12 @@ blit_comb (const struct grub_unicode_glyph *glyph_id,
+ ctx.bounds.height = main_glyph->height;
+
+ above_rightx = main_glyph->offset_x + main_glyph->width;
+- above_righty = ctx.bounds.y + ctx.bounds.height;
++ above_righty = ctx.bounds.y + (int) ctx.bounds.height;
+
+ above_leftx = main_glyph->offset_x;
+- above_lefty = ctx.bounds.y + ctx.bounds.height;
++ above_lefty = ctx.bounds.y + (int) ctx.bounds.height;
+
+- below_rightx = ctx.bounds.x + ctx.bounds.width;
++ below_rightx = ctx.bounds.x + (int) ctx.bounds.width;
+ below_righty = ctx.bounds.y;
+
+ comb = grub_unicode_get_comb (glyph_id);
+@@ -1218,7 +1218,7 @@ blit_comb (const struct grub_unicode_glyph *glyph_id,
+
+ if (!combining_glyphs[i])
+ continue;
+- targetx = (ctx.bounds.width - combining_glyphs[i]->width) / 2 + ctx.bounds.x;
++ targetx = ((int) ctx.bounds.width - combining_glyphs[i]->width) / 2 + ctx.bounds.x;
+ /* CGJ is to avoid diacritics reordering. */
+ if (comb[i].code
+ == GRUB_UNICODE_COMBINING_GRAPHEME_JOINER)
+@@ -1228,8 +1228,8 @@ blit_comb (const struct grub_unicode_glyph *glyph_id,
+ case GRUB_UNICODE_COMB_OVERLAY:
+ do_blit (combining_glyphs[i],
+ targetx,
+- (ctx.bounds.height - combining_glyphs[i]->height) / 2
+- - (ctx.bounds.height + ctx.bounds.y), &ctx);
++ ((int) ctx.bounds.height - combining_glyphs[i]->height) / 2
++ - ((int) ctx.bounds.height + ctx.bounds.y), &ctx);
+ if (min_devwidth < combining_glyphs[i]->width)
+ min_devwidth = combining_glyphs[i]->width;
+ break;
+@@ -1302,7 +1302,7 @@ blit_comb (const struct grub_unicode_glyph *glyph_id,
+ /* Fallthrough. */
+ case GRUB_UNICODE_STACK_ATTACHED_ABOVE:
+ do_blit (combining_glyphs[i], targetx,
+- -(ctx.bounds.height + ctx.bounds.y + space
++ -((int) ctx.bounds.height + ctx.bounds.y + space
+ + combining_glyphs[i]->height), &ctx);
+ if (min_devwidth < combining_glyphs[i]->width)
+ min_devwidth = combining_glyphs[i]->width;
+@@ -1310,7 +1310,7 @@ blit_comb (const struct grub_unicode_glyph *glyph_id,
+
+ case GRUB_UNICODE_COMB_HEBREW_DAGESH:
+ do_blit (combining_glyphs[i], targetx,
+- -(ctx.bounds.height / 2 + ctx.bounds.y
++ -((int) ctx.bounds.height / 2 + ctx.bounds.y
+ + combining_glyphs[i]->height / 2), &ctx);
+ if (min_devwidth < combining_glyphs[i]->width)
+ min_devwidth = combining_glyphs[i]->width;
+--
+2.25.1
+
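
The example in the commit message is easy to reproduce: because ctx.bounds.height is an unsigned int, the subtraction is performed in unsigned arithmetic and wraps around before the division. Python is used below only to emulate the 32-bit unsigned behaviour of the C code:

    bounds_height = 10      # ctx.bounds.height, an unsigned int
    glyph_height = 12       # combining_glyphs[i]->height, promoted to int

    # Pre-patch C behaviour: unsigned subtraction, then unsigned division.
    wrapped = (bounds_height - glyph_height) & 0xFFFFFFFF
    print(wrapped // 2)                           # 2147483647

    # Post-patch behaviour: the height is cast to int first.
    print((bounds_height - glyph_height) // 2)    # -1, the expected result
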
diff --git a/poky/meta/recipes-bsp/grub/files/font-Fix-size-overflow-in-grub_font_get_glyph_intern.patch b/poky/meta/recipes-bsp/grub/files/font-Fix-size-overflow-in-grub_font_get_glyph_intern.patch
new file mode 100644
index 0000000000..d4ba3cafc5
--- /dev/null
+++ b/poky/meta/recipes-bsp/grub/files/font-Fix-size-overflow-in-grub_font_get_glyph_intern.patch
@@ -0,0 +1,117 @@
+From 1f511ae054fe42dce7aedfbfe0f234fa1e0a7a3e Mon Sep 17 00:00:00 2001
+From: Zhang Boyang <zhangboyang.id@gmail.com>
+Date: Fri, 5 Aug 2022 00:51:20 +0800
+Subject: [PATCH] font: Fix size overflow in grub_font_get_glyph_internal()
+
+The length of memory allocation and file read may overflow. This patch
+fixes the problem by using safemath macros.
+
+There is a lot of code repetition like "(x * y + 7) / 8". It is unsafe
+if overflow happens. This patch introduces grub_video_bitmap_calc_1bpp_bufsz().
+It is safe replacement for such code. It has safemath-like prototype.
+
+This patch also introduces grub_cast(value, pointer), it casts value to
+typeof(*pointer) then store the value to *pointer. It returns true when
+overflow occurs or false if there is no overflow. The semantics of arguments
+and return value are designed to be consistent with other safemath macros.
+
+Signed-off-by: Zhang Boyang <zhangboyang.id@gmail.com>
+Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
+
+Upstream-Status: Backport [https://git.savannah.gnu.org/cgit/grub.git/commit/?id=9c76ec09ae08155df27cd237eaea150b4f02f532]
+
+Signed-off-by: Xiangyu Chen <xiangyu.chen@windriver.com>
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ grub-core/font/font.c | 17 +++++++++++++----
+ include/grub/bitmap.h | 18 ++++++++++++++++++
+ include/grub/safemath.h | 2 ++
+ 3 files changed, 33 insertions(+), 4 deletions(-)
+
+diff --git a/grub-core/font/font.c b/grub-core/font/font.c
+index 5edb477..df17dba 100644
+--- a/grub-core/font/font.c
++++ b/grub-core/font/font.c
+@@ -733,7 +733,8 @@ grub_font_get_glyph_internal (grub_font_t font, grub_uint32_t code)
+ grub_int16_t xoff;
+ grub_int16_t yoff;
+ grub_int16_t dwidth;
+- int len;
++ grub_ssize_t len;
++ grub_size_t sz;
+
+ if (index_entry->glyph)
+ /* Return cached glyph. */
+@@ -760,9 +761,17 @@ grub_font_get_glyph_internal (grub_font_t font, grub_uint32_t code)
+ return 0;
+ }
+
+- len = (width * height + 7) / 8;
+- glyph = grub_malloc (sizeof (struct grub_font_glyph) + len);
+- if (!glyph)
++ /* Calculate real struct size of current glyph. */
++ if (grub_video_bitmap_calc_1bpp_bufsz (width, height, &len) ||
++ grub_add (sizeof (struct grub_font_glyph), len, &sz))
++ {
++ remove_font (font);
++ return 0;
++ }
++
++ /* Allocate and initialize the glyph struct. */
++ glyph = grub_malloc (sz);
++ if (glyph == NULL)
+ {
+ remove_font (font);
+ return 0;
+diff --git a/include/grub/bitmap.h b/include/grub/bitmap.h
+index 5728f8c..0d9603f 100644
+--- a/include/grub/bitmap.h
++++ b/include/grub/bitmap.h
+@@ -23,6 +23,7 @@
+ #include <grub/symbol.h>
+ #include <grub/types.h>
+ #include <grub/video.h>
++#include <grub/safemath.h>
+
+ struct grub_video_bitmap
+ {
+@@ -79,6 +80,23 @@ grub_video_bitmap_get_height (struct grub_video_bitmap *bitmap)
+ return bitmap->mode_info.height;
+ }
+
++/*
++ * Calculate and store the size of data buffer of 1bit bitmap in result.
++ * Equivalent to "*result = (width * height + 7) / 8" if no overflow occurs.
++ * Return true when overflow occurs or false if there is no overflow.
++ * This function is intentionally implemented as a macro instead of
++ * an inline function. Although a bit awkward, it preserves data types for
++ * safemath macros and reduces macro side effects as much as possible.
++ *
++ * XXX: Will report false overflow if width * height > UINT64_MAX.
++ */
++#define grub_video_bitmap_calc_1bpp_bufsz(width, height, result) \
++({ \
++ grub_uint64_t _bitmap_pixels; \
++ grub_mul ((width), (height), &_bitmap_pixels) ? 1 : \
++ grub_cast (_bitmap_pixels / GRUB_CHAR_BIT + !!(_bitmap_pixels % GRUB_CHAR_BIT), (result)); \
++})
++
+ void EXPORT_FUNC (grub_video_bitmap_get_mode_info) (struct grub_video_bitmap *bitmap,
+ struct grub_video_mode_info *mode_info);
+
+diff --git a/include/grub/safemath.h b/include/grub/safemath.h
+index c17b89b..bb0f826 100644
+--- a/include/grub/safemath.h
++++ b/include/grub/safemath.h
+@@ -30,6 +30,8 @@
+ #define grub_sub(a, b, res) __builtin_sub_overflow(a, b, res)
+ #define grub_mul(a, b, res) __builtin_mul_overflow(a, b, res)
+
++#define grub_cast(a, res) grub_add ((a), 0, (res))
++
+ #else
+ #error gcc 5.1 or newer or clang 3.8 or newer is required
+ #endif
+--
+2.25.1
+
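
grub_video_bitmap_calc_1bpp_bufsz() is a checked replacement for the open-coded "(width * height + 7) / 8": the macro multiplies in a 64-bit type and then narrows to the caller's size type, failing if either step overflows. A loose model of those checks, assuming a 32-bit result type for illustration:

    UINT64_MAX = 2**64 - 1    # the pixel count is computed as grub_uint64_t
    SIZE_MAX_32 = 2**32 - 1   # assumed width of the caller's result variable
    CHAR_BIT = 8

    def calc_1bpp_bufsz(width, height):
        # Return the 1bpp buffer size in bytes, or None if a check trips.
        pixels = width * height
        if pixels > UINT64_MAX:              # grub_mul() reports this overflow
            return None
        size = pixels // CHAR_BIT + (1 if pixels % CHAR_BIT else 0)
        if size > SIZE_MAX_32:               # grub_cast() rejects the narrowing
            return None
        return size
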
diff --git a/poky/meta/recipes-bsp/grub/grub2.inc b/poky/meta/recipes-bsp/grub/grub2.inc
index a248af0073..d09eecd8ac 100644
--- a/poky/meta/recipes-bsp/grub/grub2.inc
+++ b/poky/meta/recipes-bsp/grub/grub2.inc
@@ -102,6 +102,10 @@ SRC_URI = "${GNU_MIRROR}/grub/grub-${PV}.tar.gz \
file://CVE-2022-28733.patch \
file://CVE-2022-28734.patch \
file://CVE-2022-28736.patch \
+ file://CVE-2022-28735.patch \
+ file://font-Fix-size-overflow-in-grub_font_get_glyph_intern.patch \
+ file://CVE-2022-2601.patch \
+ file://CVE-2022-3775.patch \
"
SRC_URI[md5sum] = "5ce674ca6b2612d8939b9e6abed32934"
SRC_URI[sha256sum] = "f10c85ae3e204dbaec39ae22fa3c5e99f0665417e91c2cb49b7e5031658ba6ea"
diff --git a/poky/meta/recipes-connectivity/bluez5/bluez5.inc b/poky/meta/recipes-connectivity/bluez5/bluez5.inc
index eaac9ee849..a71d339928 100644
--- a/poky/meta/recipes-connectivity/bluez5/bluez5.inc
+++ b/poky/meta/recipes-connectivity/bluez5/bluez5.inc
@@ -7,6 +7,7 @@ LIC_FILES_CHKSUM = "file://COPYING;md5=12f884d2ae1ff87c09e5b7ccc2c4ca7e \
file://COPYING.LIB;md5=fb504b67c50331fc78734fed90fb0e09 \
file://src/main.c;beginline=1;endline=24;md5=9bc54b93cd7e17bf03f52513f39f926e"
DEPENDS = "dbus glib-2.0"
+RDEPENDS:${PN} += "dbus"
PROVIDES += "bluez-hcidump"
RPROVIDES_${PN} += "bluez-hcidump"
@@ -57,6 +58,7 @@ SRC_URI = "${KERNELORG_MIRROR}/linux/bluetooth/bluez-${PV}.tar.xz \
file://CVE-2021-3658.patch \
file://CVE-2022-0204.patch \
file://CVE-2022-39176.patch \
+ file://CVE-2022-3637.patch \
"
S = "${WORKDIR}/bluez-${PV}"
diff --git a/poky/meta/recipes-connectivity/bluez5/bluez5/CVE-2022-3637.patch b/poky/meta/recipes-connectivity/bluez5/bluez5/CVE-2022-3637.patch
new file mode 100644
index 0000000000..4ca60f99d5
--- /dev/null
+++ b/poky/meta/recipes-connectivity/bluez5/bluez5/CVE-2022-3637.patch
@@ -0,0 +1,39 @@
+From b808b2852a0b48c6f9dbb038f932613cea3126c2 Mon Sep 17 00:00:00 2001
+From: Hitendra Prajapati <hprajapati@mvista.com>
+Date: Thu, 27 Oct 2022 09:51:27 +0530
+Subject: [PATCH] CVE-2022-3637
+
+Upstream-Status: Backport [https://git.kernel.org/pub/scm/bluetooth/bluez.git/commit/monitor/jlink.c?id=1d6cfb8e625a944010956714c1802bc1e1fc6c4f]
+CVE: CVE-2022-3637
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+
+monitor: Fix crash when using RTT backend
+
+This fix regression introduced by "monitor: Fix memory leaks".
+J-Link shared library is in use if jlink_init() returns 0 and thus
+handle shall not be closed.
+---
+ monitor/jlink.c | 5 ++++-
+ 1 file changed, 4 insertions(+), 1 deletion(-)
+
+diff --git a/monitor/jlink.c b/monitor/jlink.c
+index afa9d93..5bd4aed 100644
+--- a/monitor/jlink.c
++++ b/monitor/jlink.c
+@@ -120,9 +120,12 @@ int jlink_init(void)
+ !jlink.tif_select || !jlink.setspeed ||
+ !jlink.connect || !jlink.getsn ||
+ !jlink.emu_getproductname ||
+- !jlink.rtterminal_control || !jlink.rtterminal_read)
++ !jlink.rtterminal_control || !jlink.rtterminal_read) {
++ dlclose(so);
+ return -EIO;
++ }
+
++ /* don't dlclose(so) here cause symbols from it are in use now */
+ return 0;
+ }
+
+--
+2.25.1
+
diff --git a/poky/meta/recipes-connectivity/bluez5/bluez5_5.55.bb b/poky/meta/recipes-connectivity/bluez5/bluez5_5.55.bb
index e5353bd815..be74a35e0a 100644
--- a/poky/meta/recipes-connectivity/bluez5/bluez5_5.55.bb
+++ b/poky/meta/recipes-connectivity/bluez5/bluez5_5.55.bb
@@ -6,6 +6,13 @@ SRC_URI[sha256sum] = "8863717113c4897e2ad3271fc808ea245319e6fd95eed2e934fae8e089
# These issues have kernel fixes rather than bluez fixes so exclude here
CVE_CHECK_WHITELIST += "CVE-2020-12352 CVE-2020-24490"
+# Commit 7a80d2096f1b7125085e21448112aa02f49f5e9a, e2b0f0d8d63e1223bb714a9efb37e2257818268b
+# and 0388794dc5fdb73a4ea88bcf148de0a12b4364d4 to fix CVE-2022-39177
+# already backport in CVE-2022-39176.patch
+# https://bugs.launchpad.net/ubuntu/+source/bluez/+bug/1977968
+
+CVE_CHECK_WHITELIST += "CVE-2022-39177"
+
# noinst programs in Makefile.tools that are conditional on READLINE
# support
NOINST_TOOLS_READLINE ?= " \
diff --git a/poky/meta/recipes-connectivity/dhcp/dhcp/CVE-2022-2928.patch b/poky/meta/recipes-connectivity/dhcp/dhcp/CVE-2022-2928.patch
new file mode 100644
index 0000000000..11f162cbda
--- /dev/null
+++ b/poky/meta/recipes-connectivity/dhcp/dhcp/CVE-2022-2928.patch
@@ -0,0 +1,120 @@
+From 8a5d739eea10ee6e193f053b1662142d5657cbc6 Mon Sep 17 00:00:00 2001
+From: Hitendra Prajapati <hprajapati@mvista.com>
+Date: Thu, 6 Oct 2022 09:39:18 +0530
+Subject: [PATCH] CVE-2022-2928
+
+Upstream-Status: Backport [https://downloads.isc.org/isc/dhcp/4.4.3-P1/patches/]
+CVE: CVE-2022-2928
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ common/options.c | 7 +++++
+ common/tests/option_unittest.c | 54 ++++++++++++++++++++++++++++++++++
+ 2 files changed, 61 insertions(+)
+
+diff --git a/common/options.c b/common/options.c
+index a7ed84c..4e53bb4 100644
+--- a/common/options.c
++++ b/common/options.c
+@@ -4452,6 +4452,8 @@ add_option(struct option_state *options,
+ if (!option_cache_allocate(&oc, MDL)) {
+ log_error("No memory for option cache adding %s (option %d).",
+ option->name, option_num);
++ /* Get rid of reference created during hash lookup. */
++ option_dereference(&option, MDL);
+ return 0;
+ }
+
+@@ -4463,6 +4465,8 @@ add_option(struct option_state *options,
+ MDL)) {
+ log_error("No memory for constant data adding %s (option %d).",
+ option->name, option_num);
++ /* Get rid of reference created during hash lookup. */
++ option_dereference(&option, MDL);
+ option_cache_dereference(&oc, MDL);
+ return 0;
+ }
+@@ -4471,6 +4475,9 @@ add_option(struct option_state *options,
+ save_option(&dhcp_universe, options, oc);
+ option_cache_dereference(&oc, MDL);
+
++ /* Get rid of reference created during hash lookup. */
++ option_dereference(&option, MDL);
++
+ return 1;
+ }
+
+diff --git a/common/tests/option_unittest.c b/common/tests/option_unittest.c
+index cd52cfb..690704d 100644
+--- a/common/tests/option_unittest.c
++++ b/common/tests/option_unittest.c
+@@ -130,6 +130,59 @@ ATF_TC_BODY(pretty_print_option, tc)
+ }
+
+
++ATF_TC(add_option_ref_cnt);
++
++ATF_TC_HEAD(add_option_ref_cnt, tc)
++{
++ atf_tc_set_md_var(tc, "descr",
++ "Verify add_option() does not leak option ref counts.");
++}
++
++ATF_TC_BODY(add_option_ref_cnt, tc)
++{
++ struct option_state *options = NULL;
++ struct option *option = NULL;
++ unsigned int cid_code = DHO_DHCP_CLIENT_IDENTIFIER;
++ char *cid_str = "1234";
++ int refcnt_before = 0;
++
++ // Look up the option we're going to add.
++ initialize_common_option_spaces();
++ if (!option_code_hash_lookup(&option, dhcp_universe.code_hash,
++ &cid_code, 0, MDL)) {
++ atf_tc_fail("cannot find option definition?");
++ }
++
++ // Get the option's reference count before we call add_options.
++ refcnt_before = option->refcnt;
++
++ // Allocate a option_state to which to add an option.
++ if (!option_state_allocate(&options, MDL)) {
++ atf_tc_fail("cannot allocat options state");
++ }
++
++ // Call add_option() to add the option to the option state.
++ if (!add_option(options, cid_code, cid_str, strlen(cid_str))) {
++ atf_tc_fail("add_option returned 0");
++ }
++
++ // Verify that calling add_option() only adds 1 to the option ref count.
++ if (option->refcnt != (refcnt_before + 1)) {
++ atf_tc_fail("after add_option(), count is wrong, before %d, after: %d",
++ refcnt_before, option->refcnt);
++ }
++
++ // Derefrence the option_state, this should reduce the ref count to
++ // it's starting value.
++ option_state_dereference(&options, MDL);
++
++ // Verify that dereferencing option_state restores option ref count.
++ if (option->refcnt != refcnt_before) {
++ atf_tc_fail("after state deref, count is wrong, before %d, after: %d",
++ refcnt_before, option->refcnt);
++ }
++}
++
+ /* This macro defines main() method that will call specified
+ test cases. tp and simple_test_case names can be whatever you want
+ as long as it is a valid variable identifier. */
+@@ -137,6 +190,7 @@ ATF_TP_ADD_TCS(tp)
+ {
+ ATF_TP_ADD_TC(tp, option_refcnt);
+ ATF_TP_ADD_TC(tp, pretty_print_option);
++ ATF_TP_ADD_TC(tp, add_option_ref_cnt);
+
+ return (atf_no_error());
+ }
+--
+2.25.1
+
diff --git a/poky/meta/recipes-connectivity/dhcp/dhcp/CVE-2022-2929.patch b/poky/meta/recipes-connectivity/dhcp/dhcp/CVE-2022-2929.patch
new file mode 100644
index 0000000000..d605204f89
--- /dev/null
+++ b/poky/meta/recipes-connectivity/dhcp/dhcp/CVE-2022-2929.patch
@@ -0,0 +1,40 @@
+From 5c959166ebee7605e2048de573f2475b4d731ff7 Mon Sep 17 00:00:00 2001
+From: Hitendra Prajapati <hprajapati@mvista.com>
+Date: Thu, 6 Oct 2022 09:42:59 +0530
+Subject: [PATCH] CVE-2022-2929
+
+Upstream-Status: Backport [https://downloads.isc.org/isc/dhcp/4.4.3-P1/patches/]
+CVE: CVE-2022-2929
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ common/options.c | 8 ++++----
+ 1 file changed, 4 insertions(+), 4 deletions(-)
+
+diff --git a/common/options.c b/common/options.c
+index 4e53bb4..28800fc 100644
+--- a/common/options.c
++++ b/common/options.c
+@@ -454,16 +454,16 @@ int fqdn_universe_decode (struct option_state *options,
+ while (s < &bp -> data[0] + length + 2) {
+ len = *s;
+ if (len > 63) {
+- log_info ("fancy bits in fqdn option");
+- return 0;
++ log_info ("label length exceeds 63 in fqdn option");
++ goto bad;
+ }
+ if (len == 0) {
+ terminated = 1;
+ break;
+ }
+ if (s + len > &bp -> data [0] + length + 3) {
+- log_info ("fqdn tag longer than buffer");
+- return 0;
++ log_info ("fqdn label longer than buffer");
++ goto bad;
+ }
+
+ if (first_len == 0) {
+--
+2.25.1
+
diff --git a/poky/meta/recipes-connectivity/dhcp/dhcp_4.4.2.bb b/poky/meta/recipes-connectivity/dhcp/dhcp_4.4.2.bb
index 5609a350cc..d3c87d0d07 100644
--- a/poky/meta/recipes-connectivity/dhcp/dhcp_4.4.2.bb
+++ b/poky/meta/recipes-connectivity/dhcp/dhcp_4.4.2.bb
@@ -11,6 +11,8 @@ SRC_URI += "file://0001-define-macro-_PATH_DHCPD_CONF-and-_PATH_DHCLIENT_CON.pat
file://0013-fixup_use_libbind.patch \
file://0001-workaround-busybox-limitation-in-linux-dhclient-script.patch \
file://CVE-2021-25217.patch \
+ file://CVE-2022-2928.patch \
+ file://CVE-2022-2929.patch \
"
SRC_URI[md5sum] = "2afdaf8498dc1edaf3012efdd589b3e1"
diff --git a/poky/meta/recipes-connectivity/mobile-broadband-provider-info/mobile-broadband-provider-info_git.bb b/poky/meta/recipes-connectivity/mobile-broadband-provider-info/mobile-broadband-provider-info_git.bb
index 2cc92b7b47..e802bcee18 100644
--- a/poky/meta/recipes-connectivity/mobile-broadband-provider-info/mobile-broadband-provider-info_git.bb
+++ b/poky/meta/recipes-connectivity/mobile-broadband-provider-info/mobile-broadband-provider-info_git.bb
@@ -5,8 +5,8 @@ SECTION = "network"
LICENSE = "PD"
LIC_FILES_CHKSUM = "file://COPYING;md5=87964579b2a8ece4bc6744d2dc9a8b04"
-SRCREV = "fe19892a8168bf19d81e3bc4ee319bf7f9f058f5"
-PV = "20220725"
+SRCREV = "22a5de3ef637990ce03141f786fbdb327e9c5a3f"
+PV = "20221107"
PE = "1"
SRC_URI = "git://gitlab.gnome.org/GNOME/mobile-broadband-provider-info.git;protocol=https;branch=main"
diff --git a/poky/meta/recipes-connectivity/openssl/openssl/CVE-2023-0464.patch b/poky/meta/recipes-connectivity/openssl/openssl/CVE-2023-0464.patch
new file mode 100644
index 0000000000..cce5bad9f0
--- /dev/null
+++ b/poky/meta/recipes-connectivity/openssl/openssl/CVE-2023-0464.patch
@@ -0,0 +1,226 @@
+From 879f7080d7e141f415c79eaa3a8ac4a3dad0348b Mon Sep 17 00:00:00 2001
+From: Pauli <pauli@openssl.org>
+Date: Wed, 8 Mar 2023 15:28:20 +1100
+Subject: [PATCH] x509: excessive resource use verifying policy constraints
+
+A security vulnerability has been identified in all supported versions
+of OpenSSL related to the verification of X.509 certificate chains
+that include policy constraints. Attackers may be able to exploit this
+vulnerability by creating a malicious certificate chain that triggers
+exponential use of computational resources, leading to a denial-of-service
+(DoS) attack on affected systems.
+
+Fixes CVE-2023-0464
+
+Reviewed-by: Tomas Mraz <tomas@openssl.org>
+Reviewed-by: Shane Lontis <shane.lontis@oracle.com>
+(Merged from https://github.com/openssl/openssl/pull/20569)
+
+CVE: CVE-2023-0464
+Upstream-Status: Backport [https://git.openssl.org/gitweb/?p=openssl.git;a=patch;h=879f7080d7e141f415c79eaa3a8ac4a3dad0348b]
+Signed-off-by: Nikhil R <nikhil.r@kpit.com>
+
+---
+ crypto/x509v3/pcy_local.h | 8 +++++++-
+ crypto/x509v3/pcy_node.c | 12 +++++++++---
+ crypto/x509v3/pcy_tree.c | 37 +++++++++++++++++++++++++++----------
+ 3 files changed, 43 insertions(+), 14 deletions(-)
+
+diff --git a/crypto/x509v3/pcy_local.h b/crypto/x509v3/pcy_local.h
+index 5daf78de45..344aa06765 100644
+--- a/crypto/x509v3/pcy_local.h
++++ b/crypto/x509v3/pcy_local.h
+@@ -111,6 +111,11 @@ struct X509_POLICY_LEVEL_st {
+ };
+
+ struct X509_POLICY_TREE_st {
++ /* The number of nodes in the tree */
++ size_t node_count;
++ /* The maximum number of nodes in the tree */
++ size_t node_maximum;
++
+ /* This is the tree 'level' data */
+ X509_POLICY_LEVEL *levels;
+ int nlevel;
+@@ -159,7 +164,8 @@ X509_POLICY_NODE *tree_find_sk(STACK_OF(X509_POLICY_NODE) *sk,
+ X509_POLICY_NODE *level_add_node(X509_POLICY_LEVEL *level,
+ X509_POLICY_DATA *data,
+ X509_POLICY_NODE *parent,
+- X509_POLICY_TREE *tree);
++ X509_POLICY_TREE *tree,
++ int extra_data);
+ void policy_node_free(X509_POLICY_NODE *node);
+ int policy_node_match(const X509_POLICY_LEVEL *lvl,
+ const X509_POLICY_NODE *node, const ASN1_OBJECT *oid);
+diff --git a/crypto/x509v3/pcy_node.c b/crypto/x509v3/pcy_node.c
+index e2d7b15322..d574fb9d66 100644
+--- a/crypto/x509v3/pcy_node.c
++++ b/crypto/x509v3/pcy_node.c
+@@ -59,10 +59,15 @@ X509_POLICY_NODE *level_find_node(const X509_POLICY_LEVEL *level,
+ X509_POLICY_NODE *level_add_node(X509_POLICY_LEVEL *level,
+ X509_POLICY_DATA *data,
+ X509_POLICY_NODE *parent,
+- X509_POLICY_TREE *tree)
++ X509_POLICY_TREE *tree,
++ int extra_data)
+ {
+ X509_POLICY_NODE *node;
+
++ /* Verify that the tree isn't too large. This mitigates CVE-2023-0464 */
++ if (tree->node_maximum > 0 && tree->node_count >= tree->node_maximum)
++ return NULL;
++
+ node = OPENSSL_zalloc(sizeof(*node));
+ if (node == NULL) {
+ X509V3err(X509V3_F_LEVEL_ADD_NODE, ERR_R_MALLOC_FAILURE);
+@@ -70,7 +75,7 @@ X509_POLICY_NODE *level_add_node(X509_POLICY_LEVEL *level,
+ }
+ node->data = data;
+ node->parent = parent;
+- if (level) {
++ if (level != NULL) {
+ if (OBJ_obj2nid(data->valid_policy) == NID_any_policy) {
+ if (level->anyPolicy)
+ goto node_error;
+@@ -90,7 +95,7 @@ X509_POLICY_NODE *level_add_node(X509_POLICY_LEVEL *level,
+ }
+ }
+
+- if (tree) {
++ if (extra_data) {
+ if (tree->extra_data == NULL)
+ tree->extra_data = sk_X509_POLICY_DATA_new_null();
+ if (tree->extra_data == NULL){
+@@ -103,6 +108,7 @@ X509_POLICY_NODE *level_add_node(X509_POLICY_LEVEL *level,
+ }
+ }
+
++ tree->node_count++;
+ if (parent)
+ parent->nchild++;
+
+diff --git a/crypto/x509v3/pcy_tree.c b/crypto/x509v3/pcy_tree.c
+index 6e8322cbc5..6c7fd35405 100644
+--- a/crypto/x509v3/pcy_tree.c
++++ b/crypto/x509v3/pcy_tree.c
+@@ -13,6 +13,18 @@
+
+ #include "pcy_local.h"
+
++/*
++ * If the maximum number of nodes in the policy tree isn't defined, set it to
++ * a generous default of 1000 nodes.
++ *
++ * Defining this to be zero means unlimited policy tree growth which opens the
++ * door on CVE-2023-0464.
++ */
++
++#ifndef OPENSSL_POLICY_TREE_NODES_MAX
++# define OPENSSL_POLICY_TREE_NODES_MAX 1000
++#endif
++
+ /*
+ * Enable this to print out the complete policy tree at various point during
+ * evaluation.
+@@ -168,6 +180,9 @@ static int tree_init(X509_POLICY_TREE **ptree, STACK_OF(X509) *certs,
+ return X509_PCY_TREE_INTERNAL;
+ }
+
++ /* Limit the growth of the tree to mitigate CVE-2023-0464 */
++ tree->node_maximum = OPENSSL_POLICY_TREE_NODES_MAX;
++
+ /*
+ * http://tools.ietf.org/html/rfc5280#section-6.1.2, figure 3.
+ *
+@@ -184,7 +199,7 @@ static int tree_init(X509_POLICY_TREE **ptree, STACK_OF(X509) *certs,
+ level = tree->levels;
+ if ((data = policy_data_new(NULL, OBJ_nid2obj(NID_any_policy), 0)) == NULL)
+ goto bad_tree;
+- if (level_add_node(level, data, NULL, tree) == NULL) {
++ if (level_add_node(level, data, NULL, tree, 1) == NULL) {
+ policy_data_free(data);
+ goto bad_tree;
+ }
+@@ -243,7 +258,8 @@ static int tree_init(X509_POLICY_TREE **ptree, STACK_OF(X509) *certs,
+ * Return value: 1 on success, 0 otherwise
+ */
+ static int tree_link_matching_nodes(X509_POLICY_LEVEL *curr,
+- X509_POLICY_DATA *data)
++ X509_POLICY_DATA *data,
++ X509_POLICY_TREE *tree)
+ {
+ X509_POLICY_LEVEL *last = curr - 1;
+ int i, matched = 0;
+@@ -253,13 +269,13 @@ static int tree_link_matching_nodes(X509_POLICY_LEVEL *curr,
+ X509_POLICY_NODE *node = sk_X509_POLICY_NODE_value(last->nodes, i);
+
+ if (policy_node_match(last, node, data->valid_policy)) {
+- if (level_add_node(curr, data, node, NULL) == NULL)
++ if (level_add_node(curr, data, node, tree, 0) == NULL)
+ return 0;
+ matched = 1;
+ }
+ }
+ if (!matched && last->anyPolicy) {
+- if (level_add_node(curr, data, last->anyPolicy, NULL) == NULL)
++ if (level_add_node(curr, data, last->anyPolicy, tree, 0) == NULL)
+ return 0;
+ }
+ return 1;
+@@ -272,7 +288,8 @@ static int tree_link_matching_nodes(X509_POLICY_LEVEL *curr,
+ * Return value: 1 on success, 0 otherwise.
+ */
+ static int tree_link_nodes(X509_POLICY_LEVEL *curr,
+- const X509_POLICY_CACHE *cache)
++ const X509_POLICY_CACHE *cache,
++ X509_POLICY_TREE *tree)
+ {
+ int i;
+
+@@ -280,7 +297,7 @@ static int tree_link_nodes(X509_POLICY_LEVEL *curr,
+ X509_POLICY_DATA *data = sk_X509_POLICY_DATA_value(cache->data, i);
+
+ /* Look for matching nodes in previous level */
+- if (!tree_link_matching_nodes(curr, data))
++ if (!tree_link_matching_nodes(curr, data, tree))
+ return 0;
+ }
+ return 1;
+@@ -311,7 +328,7 @@ static int tree_add_unmatched(X509_POLICY_LEVEL *curr,
+ /* Curr may not have anyPolicy */
+ data->qualifier_set = cache->anyPolicy->qualifier_set;
+ data->flags |= POLICY_DATA_FLAG_SHARED_QUALIFIERS;
+- if (level_add_node(curr, data, node, tree) == NULL) {
++ if (level_add_node(curr, data, node, tree, 1) == NULL) {
+ policy_data_free(data);
+ return 0;
+ }
+@@ -373,7 +390,7 @@ static int tree_link_any(X509_POLICY_LEVEL *curr,
+ }
+ /* Finally add link to anyPolicy */
+ if (last->anyPolicy &&
+- level_add_node(curr, cache->anyPolicy, last->anyPolicy, NULL) == NULL)
++ level_add_node(curr, cache->anyPolicy, last->anyPolicy, tree, 0) == NULL)
+ return 0;
+ return 1;
+ }
+@@ -555,7 +572,7 @@ static int tree_calculate_user_set(X509_POLICY_TREE *tree,
+ extra->qualifier_set = anyPolicy->data->qualifier_set;
+ extra->flags = POLICY_DATA_FLAG_SHARED_QUALIFIERS
+ | POLICY_DATA_FLAG_EXTRA_NODE;
+- node = level_add_node(NULL, extra, anyPolicy->parent, tree);
++ node = level_add_node(NULL, extra, anyPolicy->parent, tree, 1);
+ }
+ if (!tree->user_policies) {
+ tree->user_policies = sk_X509_POLICY_NODE_new_null();
+@@ -582,7 +599,7 @@ static int tree_evaluate(X509_POLICY_TREE *tree)
+
+ for (i = 1; i < tree->nlevel; i++, curr++) {
+ cache = policy_cache_set(curr->cert);
+- if (!tree_link_nodes(curr, cache))
++ if (!tree_link_nodes(curr, cache, tree))
+ return X509_PCY_TREE_INTERNAL;
+
+ if (!(curr->flags & X509_V_FLAG_INHIBIT_ANY)
+--
+2.34.1
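
The mitigation itself is a straightforward cap: the tree now carries a node counter and a maximum (1000 by default, 0 meaning unlimited), and level_add_node() refuses to allocate once the counter reaches the maximum. Stripped of the X.509 machinery, the bookkeeping amounts to this sketch:

    POLICY_TREE_NODES_MAX = 1000    # default cap introduced by the patch

    class PolicyTree:
        def __init__(self, node_maximum=POLICY_TREE_NODES_MAX):
            self.node_count = 0
            self.node_maximum = node_maximum
            self.nodes = []

        def add_node(self, data, parent=None):
            # Mirrors level_add_node(): fail instead of growing past the cap.
            if self.node_maximum > 0 and self.node_count >= self.node_maximum:
                return None
            node = {"data": data, "parent": parent}
            self.nodes.append(node)
            self.node_count += 1
            return node
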
diff --git a/poky/meta/recipes-connectivity/openssl/openssl/CVE-2023-0465.patch b/poky/meta/recipes-connectivity/openssl/openssl/CVE-2023-0465.patch
new file mode 100644
index 0000000000..be5068074e
--- /dev/null
+++ b/poky/meta/recipes-connectivity/openssl/openssl/CVE-2023-0465.patch
@@ -0,0 +1,60 @@
+From b013765abfa80036dc779dd0e50602c57bb3bf95 Mon Sep 17 00:00:00 2001
+From: Matt Caswell <matt@openssl.org>
+Date: Tue, 7 Mar 2023 16:52:55 +0000
+Subject: [PATCH] Ensure that EXFLAG_INVALID_POLICY is checked even in leaf
+ certs
+
+Even though we check the leaf cert to confirm it is valid, we
+later ignored the invalid flag and did not notice that the leaf
+cert was bad.
+
+Fixes: CVE-2023-0465
+
+Reviewed-by: Hugo Landau <hlandau@openssl.org>
+Reviewed-by: Tomas Mraz <tomas@openssl.org>
+(Merged from https://github.com/openssl/openssl/pull/20588)
+
+CVE: CVE-2023-0465
+Upstream-Status: Backport [https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=b013765abfa80036dc779dd0e50602c57bb3bf95]
+Comment: Refreshed first hunk
+Signed-off-by: Omkar Patil <omkar.patil@kpit.com>
+
+---
+ crypto/x509/x509_vfy.c | 11 +++++++++--
+ 1 file changed, 9 insertions(+), 2 deletions(-)
+
+diff --git a/crypto/x509/x509_vfy.c b/crypto/x509/x509_vfy.c
+index 925fbb5412..1dfe4f9f31 100644
+--- a/crypto/x509/x509_vfy.c
++++ b/crypto/x509/x509_vfy.c
+@@ -1649,18 +1649,25 @@
+ }
+ /* Invalid or inconsistent extensions */
+ if (ret == X509_PCY_TREE_INVALID) {
+- int i;
++ int i, cbcalled = 0;
+
+ /* Locate certificates with bad extensions and notify callback. */
+- for (i = 1; i < sk_X509_num(ctx->chain); i++) {
++ for (i = 0; i < sk_X509_num(ctx->chain); i++) {
+ X509 *x = sk_X509_value(ctx->chain, i);
+
+ if (!(x->ex_flags & EXFLAG_INVALID_POLICY))
+ continue;
++ cbcalled = 1;
+ if (!verify_cb_cert(ctx, x, i,
+ X509_V_ERR_INVALID_POLICY_EXTENSION))
+ return 0;
+ }
++ if (!cbcalled) {
++ /* Should not be able to get here */
++ X509err(X509_F_CHECK_POLICY, ERR_R_INTERNAL_ERROR);
++ return 0;
++ }
++ /* The callback ignored the error so we return success */
+ return 1;
+ }
+ if (ret == X509_PCY_TREE_FAILURE) {
+--
+2.34.1
+
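As a rough illustration only (not part of the backport, and assuming a certificate whose extensions have already been cached), the hunk above widens the loop to index 0 so the leaf certificate's EXFLAG_INVALID_POLICY flag is no longer skipped; an application can query the same flag directly:

    #include <openssl/x509v3.h>

    /* Sketch: returns non-zero if the certificate carries the same
     * invalid-policy marker that the fixed loop above now reports for
     * leaf certificates as well. */
    static int has_invalid_policy(X509 *cert)
    {
        return (X509_get_extension_flags(cert) & EXFLAG_INVALID_POLICY) != 0;
    }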
diff --git a/poky/meta/recipes-connectivity/openssl/openssl/CVE-2023-0466.patch b/poky/meta/recipes-connectivity/openssl/openssl/CVE-2023-0466.patch
new file mode 100644
index 0000000000..f042aa5da1
--- /dev/null
+++ b/poky/meta/recipes-connectivity/openssl/openssl/CVE-2023-0466.patch
@@ -0,0 +1,82 @@
+From 0d16b7e99aafc0b4a6d729eec65a411a7e025f0a Mon Sep 17 00:00:00 2001
+From: Tomas Mraz <tomas@openssl.org>
+Date: Tue, 21 Mar 2023 16:15:47 +0100
+Subject: [PATCH] Fix documentation of X509_VERIFY_PARAM_add0_policy()
+
+The function was incorrectly documented as enabling policy checking.
+
+Fixes: CVE-2023-0466
+
+Reviewed-by: Matt Caswell <matt@openssl.org>
+Reviewed-by: Paul Dale <pauli@openssl.org>
+(Merged from https://github.com/openssl/openssl/pull/20564)
+
+CVE: CVE-2023-0466
+Upstream-Status: Backport [https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=0d16b7e99aafc0b4a6d729eec65a411a7e025f0a]
+Comment: Refreshed first hunk from CHANGES and NEWS
+Signed-off-by: Omkar Patil <omkar.patil@kpit.com>
+
+---
+ CHANGES | 5 +++++
+ NEWS | 1 +
+ doc/man3/X509_VERIFY_PARAM_set_flags.pod | 9 +++++++--
+ 3 files changed, 13 insertions(+), 2 deletions(-)
+
+diff --git a/CHANGES b/CHANGES
+index efccf7838e..b19f1429bb 100644
+--- a/CHANGES
++++ b/CHANGES
+@@ -9,6 +9,11 @@
+
+ Changes between 1.1.1s and 1.1.1t [7 Feb 2023]
+
++ *) Corrected documentation of X509_VERIFY_PARAM_add0_policy() to mention
++ that it does not enable policy checking. Thanks to
++ David Benjamin for discovering this issue. (CVE-2023-0466)
++ [Tomas Mraz]
++
+ *) Fixed X.400 address type confusion in X.509 GeneralName.
+
+ There is a type confusion vulnerability relating to X.400 address processing
+diff --git a/NEWS b/NEWS
+index 36a9bb6890..62615693fa 100644
+--- a/NEWS
++++ b/NEWS
+@@ -7,6 +7,7 @@
+
+ Major changes between OpenSSL 1.1.1s and OpenSSL 1.1.1t [7 Feb 2023]
+
++ o Fixed documentation of X509_VERIFY_PARAM_add0_policy() (CVE-2023-0466)
+ o Fixed X.400 address type confusion in X.509 GeneralName (CVE-2023-0286)
+ o Fixed Use-after-free following BIO_new_NDEF (CVE-2023-0215)
+ o Fixed Double free after calling PEM_read_bio_ex (CVE-2022-4450)
+diff --git a/doc/man3/X509_VERIFY_PARAM_set_flags.pod b/doc/man3/X509_VERIFY_PARAM_set_flags.pod
+index f6f304bf7b..aa292f9336 100644
+--- a/doc/man3/X509_VERIFY_PARAM_set_flags.pod
++++ b/doc/man3/X509_VERIFY_PARAM_set_flags.pod
+@@ -92,8 +92,9 @@ B<trust>.
+ X509_VERIFY_PARAM_set_time() sets the verification time in B<param> to
+ B<t>. Normally the current time is used.
+
+-X509_VERIFY_PARAM_add0_policy() enables policy checking (it is disabled
+-by default) and adds B<policy> to the acceptable policy set.
++X509_VERIFY_PARAM_add0_policy() adds B<policy> to the acceptable policy set.
++Contrary to preexisting documentation of this function it does not enable
++policy checking.
+
+ X509_VERIFY_PARAM_set1_policies() enables policy checking (it is disabled
+ by default) and sets the acceptable policy set to B<policies>. Any existing
+@@ -377,6 +378,10 @@ and has no effect.
+
+ The X509_VERIFY_PARAM_get_hostflags() function was added in OpenSSL 1.1.0i.
+
++The function X509_VERIFY_PARAM_add0_policy() was historically documented as
++enabling policy checking however the implementation has never done this.
++The documentation was changed to align with the implementation.
++
+ =head1 COPYRIGHT
+
+ Copyright 2009-2020 The OpenSSL Project Authors. All Rights Reserved.
+--
+2.34.1
+
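As a usage sketch only (not taken from the patch; the OID string is a made-up example), the corrected documentation means a caller who wants the added policy to actually be enforced must also turn on policy checking, for instance via X509_V_FLAG_POLICY_CHECK:

    #include <openssl/objects.h>
    #include <openssl/x509_vfy.h>

    /* Sketch: add a policy OID and explicitly enable policy checking,
     * since X509_VERIFY_PARAM_add0_policy() alone does not enable it. */
    static int require_policy(X509_VERIFY_PARAM *param, const char *oid_txt)
    {
        ASN1_OBJECT *oid = OBJ_txt2obj(oid_txt, 1); /* e.g. "1.3.6.1.4.1.99999.1" (made up) */

        if (oid == NULL)
            return 0;
        if (!X509_VERIFY_PARAM_add0_policy(param, oid)) { /* takes ownership on success */
            ASN1_OBJECT_free(oid);
            return 0;
        }
        return X509_VERIFY_PARAM_set_flags(param, X509_V_FLAG_POLICY_CHECK);
    }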
diff --git a/poky/meta/recipes-connectivity/openssl/openssl_1.1.1q.bb b/poky/meta/recipes-connectivity/openssl/openssl_1.1.1t.bb
index 139b7fe935..46875b525c 100644
--- a/poky/meta/recipes-connectivity/openssl/openssl_1.1.1q.bb
+++ b/poky/meta/recipes-connectivity/openssl/openssl_1.1.1t.bb
@@ -18,13 +18,16 @@ SRC_URI = "http://www.openssl.org/source/openssl-${PV}.tar.gz \
file://afalg.patch \
file://reproducible.patch \
file://reproducibility.patch \
+ file://CVE-2023-0464.patch \
+ file://CVE-2023-0465.patch \
+ file://CVE-2023-0466.patch \
"
SRC_URI_append_class-nativesdk = " \
file://environment.d-openssl.sh \
"
-SRC_URI[sha256sum] = "d7939ce614029cdff0b6c20f0e2e5703158a489a72b2507b8bd51bf8c8fd10ca"
+SRC_URI[sha256sum] = "8dee9b24bdb1dcbf0c3d1e9b02fb8f6bf22165e807f45adeb7c9677536859d3b"
inherit lib_package multilib_header multilib_script ptest
MULTILIB_SCRIPTS = "${PN}-bin:${bindir}/c_rehash"
diff --git a/poky/meta/recipes-connectivity/ppp/ppp/CVE-2022-4603.patch b/poky/meta/recipes-connectivity/ppp/ppp/CVE-2022-4603.patch
new file mode 100644
index 0000000000..27b8863a4e
--- /dev/null
+++ b/poky/meta/recipes-connectivity/ppp/ppp/CVE-2022-4603.patch
@@ -0,0 +1,50 @@
+From 2aeb41a9a3a43b11b1e46628d0bf98197ff9f141 Mon Sep 17 00:00:00 2001
+From: Paul Mackerras <paulus@ozlabs.org>
+Date: Thu, 29 Dec 2022 18:00:20 +0100
+Subject: [PATCH] pppdump: Avoid out-of-range access to packet buffer
+
+This fixes a potential vulnerability where data is written to spkt.buf
+and rpkt.buf without a check on the array index. To fix this, we
+check the array index (pkt->cnt) before storing the byte or
+incrementing the count. This also means we no longer have a potential
+signed integer overflow on the increment of pkt->cnt.
+
+Fortunately, pppdump is not used in the normal process of setting up a
+PPP connection, is not installed setuid-root, and is not invoked
+automatically in any scenario that I am aware of.
+
+Upstream-Status: Backport [https://github.com/ppp-project/ppp/commit/a75fb7b198eed50d769c80c36629f38346882cbf]
+CVE: CVE-2022-4603
+Signed-off-by: Minjae Kim <flowergom@gmail.com>
+---
+ pppdump/pppdump.c | 7 ++++++-
+ 1 file changed, 6 insertions(+), 1 deletion(-)
+
+diff --git a/pppdump/pppdump.c b/pppdump/pppdump.c
+index 87c2e8f..dec4def 100644
+--- a/pppdump/pppdump.c
++++ b/pppdump/pppdump.c
+@@ -296,6 +296,10 @@ dumpppp(f)
+ printf("%s aborted packet:\n ", dir);
+ q = " ";
+ }
++ if (pkt->cnt >= sizeof(pkt->buf)) {
++ printf("%s over-long packet truncated:\n ", dir);
++ q = " ";
++ }
+ nb = pkt->cnt;
+ p = pkt->buf;
+ pkt->cnt = 0;
+@@ -399,7 +403,8 @@ dumpppp(f)
+ c ^= 0x20;
+ pkt->esc = 0;
+ }
+- pkt->buf[pkt->cnt++] = c;
++ if (pkt->cnt < sizeof(pkt->buf))
++ pkt->buf[pkt->cnt++] = c;
+ break;
+ }
+ }
+--
+2.25.1
+
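The pattern applied above is the usual bounded append: check the count against the buffer size before every store. A standalone sketch of that pattern (hypothetical struct, not pppdump's actual types):

    #include <stddef.h>

    struct pktbuf {
        unsigned char buf[8192]; /* hypothetical capacity */
        size_t cnt;
    };

    /* Sketch: silently drop bytes once the buffer is full, so cnt can
     * never index past the end of buf or overflow. */
    static void pkt_append(struct pktbuf *pkt, unsigned char c)
    {
        if (pkt->cnt < sizeof(pkt->buf))
            pkt->buf[pkt->cnt++] = c;
    }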
diff --git a/poky/meta/recipes-connectivity/ppp/ppp_2.4.7.bb b/poky/meta/recipes-connectivity/ppp/ppp_2.4.7.bb
index 76c1cc62a7..51ec25e660 100644
--- a/poky/meta/recipes-connectivity/ppp/ppp_2.4.7.bb
+++ b/poky/meta/recipes-connectivity/ppp/ppp_2.4.7.bb
@@ -34,6 +34,7 @@ SRC_URI = "https://download.samba.org/pub/${BPN}/${BP}.tar.gz \
file://0001-ppp-Remove-unneeded-include.patch \
file://ppp-2.4.7-DES-openssl.patch \
file://0001-pppd-Fix-bounds-check-in-EAP-code.patch \
+ file://CVE-2022-4603.patch \
"
SRC_URI_append_libc-musl = "\
diff --git a/poky/meta/recipes-core/base-files/base-files/hosts b/poky/meta/recipes-core/base-files/base-files/hosts
index b94f414d5c..10a5b6c704 100644
--- a/poky/meta/recipes-core/base-files/base-files/hosts
+++ b/poky/meta/recipes-core/base-files/base-files/hosts
@@ -1,4 +1,4 @@
-127.0.0.1 localhost.localdomain localhost
+127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
diff --git a/poky/meta/recipes-core/busybox/busybox.inc b/poky/meta/recipes-core/busybox/busybox.inc
index 3553376582..f0c5666f47 100644
--- a/poky/meta/recipes-core/busybox/busybox.inc
+++ b/poky/meta/recipes-core/busybox/busybox.inc
@@ -139,6 +139,10 @@ do_configure () {
do_prepare_config
merge_config.sh -m .config ${@" ".join(find_cfgs(d))}
cml1_do_configure
+
+ # Save a copy of .config and autoconf.h.
+ cp .config .config.orig
+ cp include/autoconf.h include/autoconf.h.orig
}
do_compile() {
@@ -146,13 +150,17 @@ do_compile() {
if [ "${BUILD_REPRODUCIBLE_BINARIES}" = "1" ]; then
export KCONFIG_NOTIMESTAMP=1
fi
+
+ # Ensure we start do_compile with the original .config and autoconf.h.
+ # These files should always have matching timestamps.
+ cp .config.orig .config
+ cp include/autoconf.h.orig include/autoconf.h
+
if [ "${BUSYBOX_SPLIT_SUID}" = "1" -a x`grep "CONFIG_FEATURE_INDIVIDUAL=y" .config` = x ]; then
+ # Guard against an interrupted do_compile: clean temporary files.
+ rm -f .config.app.suid .config.app.nosuid .config.disable.apps .config.nonapps
+
# split the .config into two parts, and make two busybox binaries
- if [ -e .config.orig ]; then
- # Need to guard again an interrupted do_compile - restore any backup
- cp .config.orig .config
- fi
- cp .config .config.orig
oe_runmake busybox.cfg.suid
oe_runmake busybox.cfg.nosuid
@@ -189,15 +197,18 @@ do_compile() {
bbfatal "busybox suid binary incorrectly provides /bin/sh"
fi
- # copy .config.orig back to .config, because the install process may check this file
- cp .config.orig .config
# cleanup
- rm .config.orig .config.app.suid .config.app.nosuid .config.disable.apps .config.nonapps
+ rm .config.app.suid .config.app.nosuid .config.disable.apps .config.nonapps
else
oe_runmake busybox_unstripped
cp busybox_unstripped busybox
oe_runmake busybox.links
fi
+
+ # restore original .config and autoconf.h, because the install process
+ # may check these files
+ cp .config.orig .config
+ cp include/autoconf.h.orig include/autoconf.h
}
do_install () {
diff --git a/poky/meta/recipes-core/coreutils/coreutils_8.31.bb b/poky/meta/recipes-core/coreutils/coreutils_8.31.bb
index 3d569881e8..3841f71155 100644
--- a/poky/meta/recipes-core/coreutils/coreutils_8.31.bb
+++ b/poky/meta/recipes-core/coreutils/coreutils_8.31.bb
@@ -51,6 +51,7 @@ PACKAGECONFIG_class-nativesdk ??= "xattr"
PACKAGECONFIG[acl] = "--enable-acl,--disable-acl,acl,"
PACKAGECONFIG[xattr] = "--enable-xattr,--disable-xattr,attr,"
PACKAGECONFIG[single-binary] = "--enable-single-binary,--disable-single-binary,,"
+PACKAGECONFIG[openssl] = "--with-openssl=yes,--with-openssl=no,openssl"
# [ df mktemp nice printenv base64 gets a special treatment and is not included in this
bindir_progs = "arch basename chcon cksum comm csplit cut dir dircolors dirname du \
diff --git a/poky/meta/recipes-core/dbus/dbus-test_1.12.20.bb b/poky/meta/recipes-core/dbus/dbus-test_1.12.24.bb
index 755c841bad..755c841bad 100644
--- a/poky/meta/recipes-core/dbus/dbus-test_1.12.20.bb
+++ b/poky/meta/recipes-core/dbus/dbus-test_1.12.24.bb
diff --git a/poky/meta/recipes-core/dbus/dbus.inc b/poky/meta/recipes-core/dbus/dbus.inc
index dcbcc0a9d6..82e91c7b13 100644
--- a/poky/meta/recipes-core/dbus/dbus.inc
+++ b/poky/meta/recipes-core/dbus/dbus.inc
@@ -10,8 +10,7 @@ SRC_URI = "https://dbus.freedesktop.org/releases/dbus/dbus-${PV}.tar.gz \
file://clear-guid_from_server-if-send_negotiate_unix_f.patch \
"
-SRC_URI[md5sum] = "dfe8a71f412e0b53be26ed4fbfdc91c4"
-SRC_URI[sha256sum] = "f77620140ecb4cdc67f37fb444f8a6bea70b5b6461f12f1cbe2cec60fa7de5fe"
+SRC_URI[sha256sum] = "bc42d196c1756ac520d61bf3ccd6f42013617def45dd1e591a6091abf51dca38"
EXTRA_OECONF = "--disable-xml-docs \
--disable-doxygen-docs \
diff --git a/poky/meta/recipes-core/dbus/dbus_1.12.20.bb b/poky/meta/recipes-core/dbus/dbus_1.12.24.bb
index cf6f7dc0ef..cf6f7dc0ef 100644
--- a/poky/meta/recipes-core/dbus/dbus_1.12.20.bb
+++ b/poky/meta/recipes-core/dbus/dbus_1.12.24.bb
diff --git a/poky/meta/recipes-core/dropbear/dropbear.inc b/poky/meta/recipes-core/dropbear/dropbear.inc
index 026292230c..0f5e9ba4ac 100644
--- a/poky/meta/recipes-core/dropbear/dropbear.inc
+++ b/poky/meta/recipes-core/dropbear/dropbear.inc
@@ -29,6 +29,7 @@ SRC_URI = "http://matt.ucc.asn.au/dropbear/releases/dropbear-${PV}.tar.bz2 \
${@bb.utils.contains('DISTRO_FEATURES', 'pam', '${PAM_SRC_URI}', '', d)} \
${@bb.utils.contains('PACKAGECONFIG', 'disable-weak-ciphers', 'file://dropbear-disable-weak-ciphers.patch', '', d)} \
file://CVE-2020-36254.patch \
+ file://CVE-2021-36369.patch \
"
PAM_SRC_URI = "file://0005-dropbear-enable-pam.patch \
diff --git a/poky/meta/recipes-core/dropbear/dropbear/CVE-2021-36369.patch b/poky/meta/recipes-core/dropbear/dropbear/CVE-2021-36369.patch
new file mode 100644
index 0000000000..5cabe8339d
--- /dev/null
+++ b/poky/meta/recipes-core/dropbear/dropbear/CVE-2021-36369.patch
@@ -0,0 +1,145 @@
+From e10dec82930863e487b22978d3df107274f366b2 Mon Sep 17 00:00:00 2001
+From: Manfred Kaiser <37737811+manfred-kaiser@users.noreply.github.com>
+Date: Thu, 19 Aug 2021 17:37:14 +0200
+Subject: [PATCH] added option to disable trivial auth methods (#128)
+
+* added option to disable trivial auth methods
+
+* rename argument to match with other ssh clients
+
+* fixed trivial auth detection for pubkeys
+
+[https://github.com/mkj/dropbear/pull/128]
+Upstream-Status: Backport
+CVE: CVE-2021-36369
+Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
+
+---
+ cli-auth.c | 3 +++
+ cli-authinteract.c | 1 +
+ cli-authpasswd.c | 2 +-
+ cli-authpubkey.c | 1 +
+ cli-runopts.c | 7 +++++++
+ cli-session.c | 1 +
+ runopts.h | 1 +
+ session.h | 1 +
+ 8 files changed, 16 insertions(+), 1 deletion(-)
+
+diff --git a/cli-auth.c b/cli-auth.c
+index 2e509e5..6f04495 100644
+--- a/cli-auth.c
++++ b/cli-auth.c
+@@ -267,6 +267,9 @@ void recv_msg_userauth_success() {
+ if DROPBEAR_CLI_IMMEDIATE_AUTH is set */
+
+ TRACE(("received msg_userauth_success"))
++ if (cli_opts.disable_trivial_auth && cli_ses.is_trivial_auth) {
++ dropbear_exit("trivial authentication not allowed");
++ }
+ /* Note: in delayed-zlib mode, setting authdone here
+ * will enable compression in the transport layer */
+ ses.authstate.authdone = 1;
+diff --git a/cli-authinteract.c b/cli-authinteract.c
+index e1cc9a1..f7128ee 100644
+--- a/cli-authinteract.c
++++ b/cli-authinteract.c
+@@ -114,6 +114,7 @@ void recv_msg_userauth_info_request() {
+ m_free(instruction);
+
+ for (i = 0; i < num_prompts; i++) {
++ cli_ses.is_trivial_auth = 0;
+ unsigned int response_len = 0;
+ prompt = buf_getstring(ses.payload, NULL);
+ cleantext(prompt);
+diff --git a/cli-authpasswd.c b/cli-authpasswd.c
+index 00fdd8b..a24d43e 100644
+--- a/cli-authpasswd.c
++++ b/cli-authpasswd.c
+@@ -155,7 +155,7 @@ void cli_auth_password() {
+
+ encrypt_packet();
+ m_burn(password, strlen(password));
+-
++ cli_ses.is_trivial_auth = 0;
+ TRACE(("leave cli_auth_password"))
+ }
+ #endif /* DROPBEAR_CLI_PASSWORD_AUTH */
+diff --git a/cli-authpubkey.c b/cli-authpubkey.c
+index 7cee164..7da1a04 100644
+--- a/cli-authpubkey.c
++++ b/cli-authpubkey.c
+@@ -174,6 +174,7 @@ static void send_msg_userauth_pubkey(sign_key *key, int type, int realsign) {
+ buf_putbytes(sigbuf, ses.writepayload->data, ses.writepayload->len);
+ cli_buf_put_sign(ses.writepayload, key, type, sigbuf);
+ buf_free(sigbuf); /* Nothing confidential in the buffer */
++ cli_ses.is_trivial_auth = 0;
+ }
+
+ encrypt_packet();
+diff --git a/cli-runopts.c b/cli-runopts.c
+index 7d1fffe..6bf8b8e 100644
+--- a/cli-runopts.c
++++ b/cli-runopts.c
+@@ -152,6 +152,7 @@ void cli_getopts(int argc, char ** argv) {
+ #if DROPBEAR_CLI_ANYTCPFWD
+ cli_opts.exit_on_fwd_failure = 0;
+ #endif
++ cli_opts.disable_trivial_auth = 0;
+ #if DROPBEAR_CLI_LOCALTCPFWD
+ cli_opts.localfwds = list_new();
+ opts.listen_fwd_all = 0;
+@@ -888,6 +889,7 @@ static void add_extendedopt(const char* origstr) {
+ #if DROPBEAR_CLI_ANYTCPFWD
+ "\tExitOnForwardFailure\n"
+ #endif
++ "\tDisableTrivialAuth\n"
+ #ifndef DISABLE_SYSLOG
+ "\tUseSyslog\n"
+ #endif
+@@ -915,5 +917,10 @@ static void add_extendedopt(const char* origstr) {
+ return;
+ }
+
++ if (match_extendedopt(&optstr, "DisableTrivialAuth") == DROPBEAR_SUCCESS) {
++ cli_opts.disable_trivial_auth = parse_flag_value(optstr);
++ return;
++ }
++
+ dropbear_log(LOG_WARNING, "Ignoring unknown configuration option '%s'", origstr);
+ }
+diff --git a/cli-session.c b/cli-session.c
+index 56dd4af..73ef0db 100644
+--- a/cli-session.c
++++ b/cli-session.c
+@@ -164,6 +164,7 @@ static void cli_session_init(pid_t proxy_cmd_pid) {
+ /* Auth */
+ cli_ses.lastprivkey = NULL;
+ cli_ses.lastauthtype = 0;
++ cli_ses.is_trivial_auth = 1;
+
+ /* For printing "remote host closed" for the user */
+ ses.remoteclosed = cli_remoteclosed;
+diff --git a/runopts.h b/runopts.h
+index 31eae1f..8519626 100644
+--- a/runopts.h
++++ b/runopts.h
+@@ -154,6 +154,7 @@ typedef struct cli_runopts {
+ #if DROPBEAR_CLI_ANYTCPFWD
+ int exit_on_fwd_failure;
+ #endif
++ int disable_trivial_auth;
+ #if DROPBEAR_CLI_REMOTETCPFWD
+ m_list * remotefwds;
+ #endif
+diff --git a/session.h b/session.h
+index 0f77055..8676054 100644
+--- a/session.h
++++ b/session.h
+@@ -287,6 +287,7 @@ struct clientsession {
+
+ int lastauthtype; /* either AUTH_TYPE_PUBKEY or AUTH_TYPE_PASSWORD,
+ for the last type of auth we tried */
++ int is_trivial_auth;
+ int ignore_next_auth_response;
+ #if DROPBEAR_CLI_INTERACT_AUTH
+ int auth_interact_failed; /* flag whether interactive auth can still
diff --git a/poky/meta/recipes-core/expat/expat/CVE-2022-43680.patch b/poky/meta/recipes-core/expat/expat/CVE-2022-43680.patch
new file mode 100644
index 0000000000..6f93bc3ed7
--- /dev/null
+++ b/poky/meta/recipes-core/expat/expat/CVE-2022-43680.patch
@@ -0,0 +1,33 @@
+From 5290462a7ea1278a8d5c0d5b2860d4e244f997e4 Mon Sep 17 00:00:00 2001
+From: Sebastian Pipping <sebastian@pipping.org>
+Date: Tue, 20 Sep 2022 02:44:34 +0200
+Subject: [PATCH] lib: Fix overeager DTD destruction in
+ XML_ExternalEntityParserCreate
+
+CVE: CVE-2022-43680
+Upstream-Status: Backport [https://github.com/libexpat/libexpat/commit/5290462a7ea1278a8d5c0d5b2860d4e244f997e4.patch]
+Signed-off-by: Ranjitsinh Rathod <ranjitsinh.rathod@kpit.com>
+Comments: Hunk refreshed
+---
+ lib/xmlparse.c | 8 ++++++++
+ 1 file changed, 8 insertions(+)
+
+diff --git a/lib/xmlparse.c b/lib/xmlparse.c
+index aacd6e7fc..57bf103cc 100644
+--- a/lib/xmlparse.c
++++ b/lib/xmlparse.c
+@@ -1035,6 +1035,14 @@ parserCreate(const XML_Char *encodingNam
+ parserInit(parser, encodingName);
+
+ if (encodingName && ! parser->m_protocolEncodingName) {
++ if (dtd) {
++ // We need to stop the upcoming call to XML_ParserFree from happily
++ // destroying parser->m_dtd because the DTD is shared with the parent
++ // parser and the only guard that keeps XML_ParserFree from destroying
++ // parser->m_dtd is parser->m_isParamEntity but it will be set to
++ // XML_TRUE only later in XML_ExternalEntityParserCreate (or not at all).
++ parser->m_dtd = NULL;
++ }
+ XML_ParserFree(parser);
+ return NULL;
+ }
diff --git a/poky/meta/recipes-core/expat/expat_2.2.9.bb b/poky/meta/recipes-core/expat/expat_2.2.9.bb
index 578edfcbff..8a5006e59a 100644
--- a/poky/meta/recipes-core/expat/expat_2.2.9.bb
+++ b/poky/meta/recipes-core/expat/expat_2.2.9.bb
@@ -21,6 +21,7 @@ SRC_URI = "git://github.com/libexpat/libexpat.git;protocol=https;branch=master \
file://CVE-2022-25315.patch \
file://libtool-tag.patch \
file://CVE-2022-40674.patch \
+ file://CVE-2022-43680.patch \
"
SRCREV = "a7bc26b69768f7fb24f0c7976fae24b157b85b13"
diff --git a/poky/meta/recipes-core/glibc/glibc-version.inc b/poky/meta/recipes-core/glibc/glibc-version.inc
index 68efd09ece..5414297ba1 100644
--- a/poky/meta/recipes-core/glibc/glibc-version.inc
+++ b/poky/meta/recipes-core/glibc/glibc-version.inc
@@ -1,6 +1,6 @@
SRCBRANCH ?= "release/2.31/master"
PV = "2.31+git${SRCPV}"
-SRCREV_glibc ?= "3ef8be9b89ef98300951741f381eb79126ac029f"
+SRCREV_glibc ?= "d4b75594574ab8a9c2c41209cd8c62aac76b5a04"
SRCREV_localedef ?= "cd9f958c4c94a638fa7b2b4e21627364f1a1a655"
GLIBC_GIT_URI ?= "git://sourceware.org/git/glibc.git"
diff --git a/poky/meta/recipes-core/glibc/glibc.inc b/poky/meta/recipes-core/glibc/glibc.inc
index 23a6ca99ae..e42040f3dc 100644
--- a/poky/meta/recipes-core/glibc/glibc.inc
+++ b/poky/meta/recipes-core/glibc/glibc.inc
@@ -1,7 +1,9 @@
require glibc-common.inc
require glibc-ld.inc
-DEPENDS = "virtual/${TARGET_PREFIX}gcc libgcc-initial linux-libc-headers"
+DEPENDS = "virtual/${TARGET_PREFIX}gcc virtual/${TARGET_PREFIX}binutils${BUSUFFIX} libgcc-initial linux-libc-headers"
+BUSUFFIX= ""
+BUSUFFIX:class-nativesdk = "-crosssdk"
PROVIDES = "virtual/libc"
PROVIDES += "virtual/libintl virtual/libiconv"
diff --git a/poky/meta/recipes-core/glibc/glibc/CVE-2021-33574_1.patch b/poky/meta/recipes-core/glibc/glibc/CVE-2021-33574_1.patch
index cef0ce54ed..7561e87121 100644
--- a/poky/meta/recipes-core/glibc/glibc/CVE-2021-33574_1.patch
+++ b/poky/meta/recipes-core/glibc/glibc/CVE-2021-33574_1.patch
@@ -11,14 +11,10 @@ CVE: CVE-2021-33574 patch#1
Signed-off-by: Armin Kuster <akuster@mvista.com>
---
- NEWS | 4 ++++
- sysdeps/unix/sysv/linux/mq_notify.c | 15 ++++++++++-----
- 2 files changed, 14 insertions(+), 5 deletions(-)
-
-Index: git/NEWS
-===================================================================
---- git.orig/NEWS
-+++ git/NEWS
+diff --git a/NEWS b/NEWS
+index 8a20d3c4e3..be489243ac 100644
+--- a/NEWS
++++ b/NEWS
@@ -7,6 +7,10 @@ using `glibc' in the "product" field.
Version 2.31.1
@@ -28,12 +24,12 @@ Index: git/NEWS
+ attribute with a non-default affinity mask.
+
The following bugs are resolved with this release:
+ [14231] stdio-common tests memory requirements
[19519] iconv(1) with -c option hangs on illegal multi-byte sequences
- (CVE-2016-10228)
-Index: git/sysdeps/unix/sysv/linux/mq_notify.c
-===================================================================
---- git.orig/sysdeps/unix/sysv/linux/mq_notify.c
-+++ git/sysdeps/unix/sysv/linux/mq_notify.c
+diff --git a/sysdeps/unix/sysv/linux/mq_notify.c b/sysdeps/unix/sysv/linux/mq_notify.c
+index f288bac477..dd47f0b777 100644
+--- a/sysdeps/unix/sysv/linux/mq_notify.c
++++ b/sysdeps/unix/sysv/linux/mq_notify.c
@@ -135,8 +135,11 @@ helper_thread (void *arg)
(void) __pthread_barrier_wait (&notify_barrier);
}
@@ -48,7 +44,7 @@ Index: git/sysdeps/unix/sysv/linux/mq_notify.c
}
return NULL;
}
-@@ -257,8 +260,7 @@ mq_notify (mqd_t mqdes, const struct sig
+@@ -257,8 +260,7 @@ mq_notify (mqd_t mqdes, const struct sigevent *notification)
if (data.attr == NULL)
return -1;
@@ -58,7 +54,7 @@ Index: git/sysdeps/unix/sysv/linux/mq_notify.c
}
/* Construct the new request. */
-@@ -272,7 +274,10 @@ mq_notify (mqd_t mqdes, const struct sig
+@@ -272,7 +274,10 @@ mq_notify (mqd_t mqdes, const struct sigevent *notification)
/* If it failed, free the allocated memory. */
if (__glibc_unlikely (retval != 0))
diff --git a/poky/meta/recipes-core/glibc/glibc/CVE-2023-0687.patch b/poky/meta/recipes-core/glibc/glibc/CVE-2023-0687.patch
new file mode 100644
index 0000000000..10c7e5666d
--- /dev/null
+++ b/poky/meta/recipes-core/glibc/glibc/CVE-2023-0687.patch
@@ -0,0 +1,82 @@
+From 952aff5c00ad7c6b83c3f310f2643939538827f8 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?=D0=9B=D0=B5=D0=BE=D0=BD=D0=B8=D0=B4=20=D0=AE=D1=80=D1=8C?=
+ =?UTF-8?q?=D0=B5=D0=B2=20=28Leonid=20Yuriev=29?= <leo@yuriev.ru>
+Date: Sat, 4 Feb 2023 14:41:38 +0300
+Subject: [PATCH] gmon: Fix allocated buffer overflow (bug 29444)
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+The `__monstartup()` allocates a buffer used to store all the data
+accumulated by the monitor.
+
+The size of this buffer depends on the size of the internal structures
+used and the address range for which the monitor is activated, as well
+as on the maximum density of call instructions and/or callable functions
+that could be potentially on a segment of executable code.
+
+In particular a hash table of arcs is placed at the end of this buffer.
+The size of this hash table is calculated in bytes as
+ p->fromssize = p->textsize / HASHFRACTION;
+
+but actually should be
+ p->fromssize = ROUNDUP(p->textsize / HASHFRACTION, sizeof(*p->froms));
+
+This results in writing beyond the end of the allocated buffer when an
+added arc corresponds to a call near the end of the monitored address
+range, since `_mcount()` checks the incoming caller address against the
+monitored range but not the intermediate hash-like index that it uses to
+write into the table.
+
+It should be noted that when the results are output to `gmon.out`, the
+table is read to the last element calculated from the allocated size in
+bytes, so the arcs stored outside the buffer boundary did not fall into
+`gprof` for analysis. Thus this "feature" helped me find this bug while
+working with https://sourceware.org/bugzilla/show_bug.cgi?id=29438
+
+Just in case, I will explicitly note that the problem breaks the
+`make test t=gmon/tst-gmon-dso` added for Bug 29438.
+There, the arc of the `f3()` call disappears from the output, since in
+the DSO case, the call to `f3` is located close to the end of the
+monitored range.
+
+Signed-off-by: Леонид Юрьев (Leonid Yuriev) <leo@yuriev.ru>
+
+What seems like a related minor typo in the calculation of `kcountsize` is
+not actually an error: since kcounts are smaller than froms, the rounding
+there serves to align the p->froms data.
+
+Co-authored-by: DJ Delorie <dj@redhat.com>
+Reviewed-by: Carlos O'Donell <carlos@redhat.com>
+
+Upstream-Status: Backport [https://sourceware.org/git/?p=glibc.git;a=commit;h=801af9fafd4689337ebf27260aa115335a0cb2bc]
+CVE: CVE-2023-0687
+Signed-off-by: Shubham Kulkarni <skulkarni@mvista.com>
+---
+ gmon/gmon.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+diff --git a/gmon/gmon.c b/gmon/gmon.c
+index dee6480..bf76358 100644
+--- a/gmon/gmon.c
++++ b/gmon/gmon.c
+@@ -132,6 +132,8 @@ __monstartup (u_long lowpc, u_long highpc)
+ p->lowpc = ROUNDDOWN(lowpc, HISTFRACTION * sizeof(HISTCOUNTER));
+ p->highpc = ROUNDUP(highpc, HISTFRACTION * sizeof(HISTCOUNTER));
+ p->textsize = p->highpc - p->lowpc;
++ /* This looks like a typo, but it's here to align the p->froms
++ section. */
+ p->kcountsize = ROUNDUP(p->textsize / HISTFRACTION, sizeof(*p->froms));
+ p->hashfraction = HASHFRACTION;
+ p->log_hashfraction = -1;
+@@ -142,7 +144,7 @@ __monstartup (u_long lowpc, u_long highpc)
+ instead of integer division. Precompute shift amount. */
+ p->log_hashfraction = ffs(p->hashfraction * sizeof(*p->froms)) - 1;
+ }
+- p->fromssize = p->textsize / HASHFRACTION;
++ p->fromssize = ROUNDUP(p->textsize / HASHFRACTION, sizeof(*p->froms));
+ p->tolimit = p->textsize * ARCDENSITY / 100;
+ if (p->tolimit < MINARCS)
+ p->tolimit = MINARCS;
+--
+2.7.4
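A worked sketch of the sizing change, with made-up values for the text size, HASHFRACTION and entry size (the real values depend on the target):

    #include <stdio.h>

    #define ROUNDUP(x, y) ((((x) + (y) - 1) / (y)) * (y))

    int main(void)
    {
        unsigned long textsize = 1000003; /* hypothetical monitored range, bytes */
        unsigned long hashfraction = 2;   /* hypothetical HASHFRACTION */
        unsigned long entry = 4;          /* hypothetical sizeof(*p->froms) */

        /* Old formula: 500001 bytes, not a multiple of the entry size, so
         * the last hash slot straddles the end of the allocation. */
        unsigned long old_size = textsize / hashfraction;
        /* Fixed formula: rounded up to 500004 bytes, a whole number of entries. */
        unsigned long new_size = ROUNDUP(textsize / hashfraction, entry);

        printf("fromssize: %lu -> %lu\n", old_size, new_size);
        return 0;
    }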
diff --git a/poky/meta/recipes-core/glibc/glibc_2.31.bb b/poky/meta/recipes-core/glibc/glibc_2.31.bb
index 0c37467fe4..8d216f6ed1 100644
--- a/poky/meta/recipes-core/glibc/glibc_2.31.bb
+++ b/poky/meta/recipes-core/glibc/glibc_2.31.bb
@@ -79,6 +79,7 @@ SRC_URI = "${GLIBC_GIT_URI};branch=${SRCBRANCH};name=glibc \
file://0035-x86_64-Avoid-lazy-relocation-of-tlsdesc-BZ-27137.patch \
file://0036-i386-Avoid-lazy-relocation-of-tlsdesc-BZ-27137.patch \
file://0037-Avoid-deadlock-between-pthread_create-and-ctors.patch \
+ file://CVE-2023-0687.patch \
"
S = "${WORKDIR}/git"
B = "${WORKDIR}/build-${TARGET_SYS}"
diff --git a/poky/meta/recipes-core/images/build-appliance-image_15.0.0.bb b/poky/meta/recipes-core/images/build-appliance-image_15.0.0.bb
index 7426eb077a..f592158209 100644
--- a/poky/meta/recipes-core/images/build-appliance-image_15.0.0.bb
+++ b/poky/meta/recipes-core/images/build-appliance-image_15.0.0.bb
@@ -24,7 +24,7 @@ IMAGE_FSTYPES = "wic.vmdk"
inherit core-image setuptools3
-SRCREV ?= "9ae91384970637cd8880c07071fb44b7f5574012"
+SRCREV ?= "ee461b42358db458f39e558b8667fbcffb6d8044"
SRC_URI = "git://git.yoctoproject.org/poky;branch=dunfell \
file://Yocto_Build_Appliance.vmx \
file://Yocto_Build_Appliance.vmxf \
diff --git a/poky/meta/recipes-core/libxml/libxml2/CVE-2022-40303.patch b/poky/meta/recipes-core/libxml/libxml2/CVE-2022-40303.patch
new file mode 100644
index 0000000000..bdb9e9eb7a
--- /dev/null
+++ b/poky/meta/recipes-core/libxml/libxml2/CVE-2022-40303.patch
@@ -0,0 +1,623 @@
+From c846986356fc149915a74972bf198abc266bc2c0 Mon Sep 17 00:00:00 2001
+From: Nick Wellnhofer <wellnhofer@aevum.de>
+Date: Thu, 25 Aug 2022 17:43:08 +0200
+Subject: [PATCH] [CVE-2022-40303] Fix integer overflows with XML_PARSE_HUGE
+
+Also impose size limits when XML_PARSE_HUGE is set. Limit size of names
+to XML_MAX_TEXT_LENGTH (10 million bytes) and other content to
+XML_MAX_HUGE_LENGTH (1 billion bytes).
+
+Move some of the length checks to the end of the respective loop to make
+them strict.
+
+xmlParseEntityValue didn't have a length limitation at all. But without
+XML_PARSE_HUGE, this should eventually trigger an error in xmlGROW.
+
+Thanks to Maddie Stone working with Google Project Zero for the report!
+
+CVE: CVE-2022-40303
+Upstream-Status: Backport [https://gitlab.gnome.org/GNOME/libxml2/-/commit/c846986356fc149915a74972bf198abc266bc2c0]
+Comments: Refreshed hunk
+
+Signed-off-by: Bhabu Bindu <bhabu.bindu@kpit.com>
+---
+ parser.c | 233 +++++++++++++++++++++++++++++--------------------------
+ 1 file changed, 121 insertions(+), 112 deletions(-)
+
+diff --git a/parser.c b/parser.c
+index 93f031be..79479979 100644
+--- a/parser.c
++++ b/parser.c
+@@ -102,6 +102,8 @@ xmlParseElementEnd(xmlParserCtxtPtr ctxt);
+ * *
+ ************************************************************************/
+
++#define XML_MAX_HUGE_LENGTH 1000000000
++
+ #define XML_PARSER_BIG_ENTITY 1000
+ #define XML_PARSER_LOT_ENTITY 5000
+
+@@ -552,7 +554,7 @@ xmlFatalErr(xmlParserCtxtPtr ctxt, xmlParserErrors error, const char *info)
+ errmsg = "Malformed declaration expecting version";
+ break;
+ case XML_ERR_NAME_TOO_LONG:
+- errmsg = "Name too long use XML_PARSE_HUGE option";
++ errmsg = "Name too long";
+ break;
+ #if 0
+ case:
+@@ -3202,6 +3204,9 @@ xmlParseNameComplex(xmlParserCtxtPtr ctxt) {
+ int len = 0, l;
+ int c;
+ int count = 0;
++ int maxLength = (ctxt->options & XML_PARSE_HUGE) ?
++ XML_MAX_TEXT_LENGTH :
++ XML_MAX_NAME_LENGTH;
+
+ #ifdef DEBUG
+ nbParseNameComplex++;
+@@ -3267,7 +3272,8 @@ xmlParseNameComplex(xmlParserCtxtPtr ctxt) {
+ if (ctxt->instate == XML_PARSER_EOF)
+ return(NULL);
+ }
+- len += l;
++ if (len <= INT_MAX - l)
++ len += l;
+ NEXTL(l);
+ c = CUR_CHAR(l);
+ }
+@@ -3293,13 +3299,13 @@ xmlParseNameComplex(xmlParserCtxtPtr ctxt) {
+ if (ctxt->instate == XML_PARSER_EOF)
+ return(NULL);
+ }
+- len += l;
++ if (len <= INT_MAX - l)
++ len += l;
+ NEXTL(l);
+ c = CUR_CHAR(l);
+ }
+ }
+- if ((len > XML_MAX_NAME_LENGTH) &&
+- ((ctxt->options & XML_PARSE_HUGE) == 0)) {
++ if (len > maxLength) {
+ xmlFatalErr(ctxt, XML_ERR_NAME_TOO_LONG, "Name");
+ return(NULL);
+ }
+@@ -3338,7 +3344,10 @@ const xmlChar *
+ xmlParseName(xmlParserCtxtPtr ctxt) {
+ const xmlChar *in;
+ const xmlChar *ret;
+- int count = 0;
++ size_t count = 0;
++ size_t maxLength = (ctxt->options & XML_PARSE_HUGE) ?
++ XML_MAX_TEXT_LENGTH :
++ XML_MAX_NAME_LENGTH;
+
+ GROW;
+
+@@ -3362,8 +3371,7 @@ xmlParseName(xmlParserCtxtPtr ctxt) {
+ in++;
+ if ((*in > 0) && (*in < 0x80)) {
+ count = in - ctxt->input->cur;
+- if ((count > XML_MAX_NAME_LENGTH) &&
+- ((ctxt->options & XML_PARSE_HUGE) == 0)) {
++ if (count > maxLength) {
+ xmlFatalErr(ctxt, XML_ERR_NAME_TOO_LONG, "Name");
+ return(NULL);
+ }
+@@ -3384,6 +3392,9 @@ xmlParseNCNameComplex(xmlParserCtxtPtr ctxt) {
+ int len = 0, l;
+ int c;
+ int count = 0;
++ int maxLength = (ctxt->options & XML_PARSE_HUGE) ?
++ XML_MAX_TEXT_LENGTH :
++ XML_MAX_NAME_LENGTH;
+ size_t startPosition = 0;
+
+ #ifdef DEBUG
+@@ -3404,17 +3415,13 @@ xmlParseNCNameComplex(xmlParserCtxtPtr ctxt) {
+ while ((c != ' ') && (c != '>') && (c != '/') && /* test bigname.xml */
+ (xmlIsNameChar(ctxt, c) && (c != ':'))) {
+ if (count++ > XML_PARSER_CHUNK_SIZE) {
+- if ((len > XML_MAX_NAME_LENGTH) &&
+- ((ctxt->options & XML_PARSE_HUGE) == 0)) {
+- xmlFatalErr(ctxt, XML_ERR_NAME_TOO_LONG, "NCName");
+- return(NULL);
+- }
+ count = 0;
+ GROW;
+ if (ctxt->instate == XML_PARSER_EOF)
+ return(NULL);
+ }
+- len += l;
++ if (len <= INT_MAX - l)
++ len += l;
+ NEXTL(l);
+ c = CUR_CHAR(l);
+ if (c == 0) {
+@@ -3432,8 +3439,7 @@ xmlParseNCNameComplex(xmlParserCtxtPtr ctxt) {
+ c = CUR_CHAR(l);
+ }
+ }
+- if ((len > XML_MAX_NAME_LENGTH) &&
+- ((ctxt->options & XML_PARSE_HUGE) == 0)) {
++ if (len > maxLength) {
+ xmlFatalErr(ctxt, XML_ERR_NAME_TOO_LONG, "NCName");
+ return(NULL);
+ }
+@@ -3459,7 +3465,10 @@ static const xmlChar *
+ xmlParseNCName(xmlParserCtxtPtr ctxt) {
+ const xmlChar *in, *e;
+ const xmlChar *ret;
+- int count = 0;
++ size_t count = 0;
++ size_t maxLength = (ctxt->options & XML_PARSE_HUGE) ?
++ XML_MAX_TEXT_LENGTH :
++ XML_MAX_NAME_LENGTH;
+
+ #ifdef DEBUG
+ nbParseNCName++;
+@@ -3484,8 +3493,7 @@ xmlParseNCName(xmlParserCtxtPtr ctxt) {
+ goto complex;
+ if ((*in > 0) && (*in < 0x80)) {
+ count = in - ctxt->input->cur;
+- if ((count > XML_MAX_NAME_LENGTH) &&
+- ((ctxt->options & XML_PARSE_HUGE) == 0)) {
++ if (count > maxLength) {
+ xmlFatalErr(ctxt, XML_ERR_NAME_TOO_LONG, "NCName");
+ return(NULL);
+ }
+@@ -3567,6 +3575,9 @@ xmlParseStringName(xmlParserCtxtPtr ctxt, const xmlChar** str) {
+ const xmlChar *cur = *str;
+ int len = 0, l;
+ int c;
++ int maxLength = (ctxt->options & XML_PARSE_HUGE) ?
++ XML_MAX_TEXT_LENGTH :
++ XML_MAX_NAME_LENGTH;
+
+ #ifdef DEBUG
+ nbParseStringName++;
+@@ -3602,12 +3613,6 @@ xmlParseStringName(xmlParserCtxtPtr ctxt, const xmlChar** str) {
+ if (len + 10 > max) {
+ xmlChar *tmp;
+
+- if ((len > XML_MAX_NAME_LENGTH) &&
+- ((ctxt->options & XML_PARSE_HUGE) == 0)) {
+- xmlFatalErr(ctxt, XML_ERR_NAME_TOO_LONG, "NCName");
+- xmlFree(buffer);
+- return(NULL);
+- }
+ max *= 2;
+ tmp = (xmlChar *) xmlRealloc(buffer,
+ max * sizeof(xmlChar));
+@@ -3621,14 +3626,18 @@ xmlParseStringName(xmlParserCtxtPtr ctxt, const xmlChar** str) {
+ COPY_BUF(l,buffer,len,c);
+ cur += l;
+ c = CUR_SCHAR(cur, l);
++ if (len > maxLength) {
++ xmlFatalErr(ctxt, XML_ERR_NAME_TOO_LONG, "NCName");
++ xmlFree(buffer);
++ return(NULL);
++ }
+ }
+ buffer[len] = 0;
+ *str = cur;
+ return(buffer);
+ }
+ }
+- if ((len > XML_MAX_NAME_LENGTH) &&
+- ((ctxt->options & XML_PARSE_HUGE) == 0)) {
++ if (len > maxLength) {
+ xmlFatalErr(ctxt, XML_ERR_NAME_TOO_LONG, "NCName");
+ return(NULL);
+ }
+@@ -3655,6 +3664,9 @@ xmlParseNmtoken(xmlParserCtxtPtr ctxt) {
+ int len = 0, l;
+ int c;
+ int count = 0;
++ int maxLength = (ctxt->options & XML_PARSE_HUGE) ?
++ XML_MAX_TEXT_LENGTH :
++ XML_MAX_NAME_LENGTH;
+
+ #ifdef DEBUG
+ nbParseNmToken++;
+@@ -3706,12 +3718,6 @@ xmlParseNmtoken(xmlParserCtxtPtr ctxt) {
+ if (len + 10 > max) {
+ xmlChar *tmp;
+
+- if ((max > XML_MAX_NAME_LENGTH) &&
+- ((ctxt->options & XML_PARSE_HUGE) == 0)) {
+- xmlFatalErr(ctxt, XML_ERR_NAME_TOO_LONG, "NmToken");
+- xmlFree(buffer);
+- return(NULL);
+- }
+ max *= 2;
+ tmp = (xmlChar *) xmlRealloc(buffer,
+ max * sizeof(xmlChar));
+@@ -3725,6 +3731,11 @@ xmlParseNmtoken(xmlParserCtxtPtr ctxt) {
+ COPY_BUF(l,buffer,len,c);
+ NEXTL(l);
+ c = CUR_CHAR(l);
++ if (len > maxLength) {
++ xmlFatalErr(ctxt, XML_ERR_NAME_TOO_LONG, "NmToken");
++ xmlFree(buffer);
++ return(NULL);
++ }
+ }
+ buffer[len] = 0;
+ return(buffer);
+@@ -3732,8 +3743,7 @@ xmlParseNmtoken(xmlParserCtxtPtr ctxt) {
+ }
+ if (len == 0)
+ return(NULL);
+- if ((len > XML_MAX_NAME_LENGTH) &&
+- ((ctxt->options & XML_PARSE_HUGE) == 0)) {
++ if (len > maxLength) {
+ xmlFatalErr(ctxt, XML_ERR_NAME_TOO_LONG, "NmToken");
+ return(NULL);
+ }
+@@ -3759,6 +3769,9 @@ xmlParseEntityValue(xmlParserCtxtPtr ctxt, xmlChar **orig) {
+ int len = 0;
+ int size = XML_PARSER_BUFFER_SIZE;
+ int c, l;
++ int maxLength = (ctxt->options & XML_PARSE_HUGE) ?
++ XML_MAX_HUGE_LENGTH :
++ XML_MAX_TEXT_LENGTH;
+ xmlChar stop;
+ xmlChar *ret = NULL;
+ const xmlChar *cur = NULL;
+@@ -3818,6 +3831,12 @@ xmlParseEntityValue(xmlParserCtxtPtr ctxt, xmlChar **orig) {
+ GROW;
+ c = CUR_CHAR(l);
+ }
++
++ if (len > maxLength) {
++ xmlFatalErrMsg(ctxt, XML_ERR_ENTITY_NOT_FINISHED,
++ "entity value too long\n");
++ goto error;
++ }
+ }
+ buf[len] = 0;
+ if (ctxt->instate == XML_PARSER_EOF)
+@@ -3905,6 +3924,9 @@ xmlParseAttValueComplex(xmlParserCtxtPtr ctxt, int *attlen, int normalize) {
+ xmlChar *rep = NULL;
+ size_t len = 0;
+ size_t buf_size = 0;
++ size_t maxLength = (ctxt->options & XML_PARSE_HUGE) ?
++ XML_MAX_HUGE_LENGTH :
++ XML_MAX_TEXT_LENGTH;
+ int c, l, in_space = 0;
+ xmlChar *current = NULL;
+ xmlEntityPtr ent;
+@@ -3925,16 +3925,6 @@
+ while (((NXT(0) != limit) && /* checked */
+ (IS_CHAR(c)) && (c != '<')) &&
+ (ctxt->instate != XML_PARSER_EOF)) {
+- /*
+- * Impose a reasonable limit on attribute size, unless XML_PARSE_HUGE
+- * special option is given
+- */
+- if ((len > XML_MAX_TEXT_LENGTH) &&
+- ((ctxt->options & XML_PARSE_HUGE) == 0)) {
+- xmlFatalErrMsg(ctxt, XML_ERR_ATTRIBUTE_NOT_FINISHED,
+- "AttValue length too long\n");
+- goto mem_error;
+- }
+ if (c == 0) break;
+ if (c == '&') {
+ in_space = 0;
+@@ -4093,6 +4105,11 @@ xmlParseAttValueComplex(xmlParserCtxtPtr ctxt, int *attlen, int normalize) {
+ }
+ GROW;
+ c = CUR_CHAR(l);
++ if (len > maxLength) {
++ xmlFatalErrMsg(ctxt, XML_ERR_ATTRIBUTE_NOT_FINISHED,
++ "AttValue length too long\n");
++ goto mem_error;
++ }
+ }
+ if (ctxt->instate == XML_PARSER_EOF)
+ goto error;
+@@ -4114,16 +4131,6 @@ xmlParseAttValueComplex(xmlParserCtxtPtr ctxt, int *attlen, int normalize) {
+ } else
+ NEXT;
+
+- /*
+- * There we potentially risk an overflow, don't allow attribute value of
+- * length more than INT_MAX it is a very reasonable assumption !
+- */
+- if (len >= INT_MAX) {
+- xmlFatalErrMsg(ctxt, XML_ERR_ATTRIBUTE_NOT_FINISHED,
+- "AttValue length too long\n");
+- goto mem_error;
+- }
+-
+ if (attlen != NULL) *attlen = (int) len;
+ return(buf);
+
+@@ -4194,6 +4201,9 @@ xmlParseSystemLiteral(xmlParserCtxtPtr ctxt) {
+ int len = 0;
+ int size = XML_PARSER_BUFFER_SIZE;
+ int cur, l;
++ int maxLength = (ctxt->options & XML_PARSE_HUGE) ?
++ XML_MAX_TEXT_LENGTH :
++ XML_MAX_NAME_LENGTH;
+ xmlChar stop;
+ int state = ctxt->instate;
+ int count = 0;
+@@ -4221,13 +4231,6 @@ xmlParseSystemLiteral(xmlParserCtxtPtr ctxt) {
+ if (len + 5 >= size) {
+ xmlChar *tmp;
+
+- if ((size > XML_MAX_NAME_LENGTH) &&
+- ((ctxt->options & XML_PARSE_HUGE) == 0)) {
+- xmlFatalErr(ctxt, XML_ERR_NAME_TOO_LONG, "SystemLiteral");
+- xmlFree(buf);
+- ctxt->instate = (xmlParserInputState) state;
+- return(NULL);
+- }
+ size *= 2;
+ tmp = (xmlChar *) xmlRealloc(buf, size * sizeof(xmlChar));
+ if (tmp == NULL) {
+@@ -4256,6 +4259,12 @@ xmlParseSystemLiteral(xmlParserCtxtPtr ctxt) {
+ SHRINK;
+ cur = CUR_CHAR(l);
+ }
++ if (len > maxLength) {
++ xmlFatalErr(ctxt, XML_ERR_NAME_TOO_LONG, "SystemLiteral");
++ xmlFree(buf);
++ ctxt->instate = (xmlParserInputState) state;
++ return(NULL);
++ }
+ }
+ buf[len] = 0;
+ ctxt->instate = (xmlParserInputState) state;
+@@ -4283,6 +4292,9 @@ xmlParsePubidLiteral(xmlParserCtxtPtr ctxt) {
+ xmlChar *buf = NULL;
+ int len = 0;
+ int size = XML_PARSER_BUFFER_SIZE;
++ int maxLength = (ctxt->options & XML_PARSE_HUGE) ?
++ XML_MAX_TEXT_LENGTH :
++ XML_MAX_NAME_LENGTH;
+ xmlChar cur;
+ xmlChar stop;
+ int count = 0;
+@@ -4310,12 +4322,6 @@ xmlParsePubidLiteral(xmlParserCtxtPtr ctxt) {
+ if (len + 1 >= size) {
+ xmlChar *tmp;
+
+- if ((size > XML_MAX_NAME_LENGTH) &&
+- ((ctxt->options & XML_PARSE_HUGE) == 0)) {
+- xmlFatalErr(ctxt, XML_ERR_NAME_TOO_LONG, "Public ID");
+- xmlFree(buf);
+- return(NULL);
+- }
+ size *= 2;
+ tmp = (xmlChar *) xmlRealloc(buf, size * sizeof(xmlChar));
+ if (tmp == NULL) {
+@@ -4343,6 +4349,11 @@ xmlParsePubidLiteral(xmlParserCtxtPtr ctxt) {
+ SHRINK;
+ cur = CUR;
+ }
++ if (len > maxLength) {
++ xmlFatalErr(ctxt, XML_ERR_NAME_TOO_LONG, "Public ID");
++ xmlFree(buf);
++ return(NULL);
++ }
+ }
+ buf[len] = 0;
+ if (cur != stop) {
+@@ -4742,6 +4753,9 @@ xmlParseCommentComplex(xmlParserCtxtPtr ctxt, xmlChar *buf,
+ int r, rl;
+ int cur, l;
+ size_t count = 0;
++ size_t maxLength = (ctxt->options & XML_PARSE_HUGE) ?
++ XML_MAX_HUGE_LENGTH :
++ XML_MAX_TEXT_LENGTH;
+ int inputid;
+
+ inputid = ctxt->input->id;
+@@ -4787,13 +4801,6 @@ xmlParseCommentComplex(xmlParserCtxtPtr ctxt, xmlChar *buf,
+ if ((r == '-') && (q == '-')) {
+ xmlFatalErr(ctxt, XML_ERR_HYPHEN_IN_COMMENT, NULL);
+ }
+- if ((len > XML_MAX_TEXT_LENGTH) &&
+- ((ctxt->options & XML_PARSE_HUGE) == 0)) {
+- xmlFatalErrMsgStr(ctxt, XML_ERR_COMMENT_NOT_FINISHED,
+- "Comment too big found", NULL);
+- xmlFree (buf);
+- return;
+- }
+ if (len + 5 >= size) {
+ xmlChar *new_buf;
+ size_t new_size;
+@@ -4831,6 +4838,13 @@ xmlParseCommentComplex(xmlParserCtxtPtr ctxt, xmlChar *buf,
+ GROW;
+ cur = CUR_CHAR(l);
+ }
++
++ if (len > maxLength) {
++ xmlFatalErrMsgStr(ctxt, XML_ERR_COMMENT_NOT_FINISHED,
++ "Comment too big found", NULL);
++ xmlFree (buf);
++ return;
++ }
+ }
+ buf[len] = 0;
+ if (cur == 0) {
+@@ -4875,6 +4889,9 @@ xmlParseComment(xmlParserCtxtPtr ctxt) {
+ xmlChar *buf = NULL;
+ size_t size = XML_PARSER_BUFFER_SIZE;
+ size_t len = 0;
++ size_t maxLength = (ctxt->options & XML_PARSE_HUGE) ?
++ XML_MAX_HUGE_LENGTH :
++ XML_MAX_TEXT_LENGTH;
+ xmlParserInputState state;
+ const xmlChar *in;
+ size_t nbchar = 0;
+@@ -4958,8 +4975,7 @@ get_more:
+ buf[len] = 0;
+ }
+ }
+- if ((len > XML_MAX_TEXT_LENGTH) &&
+- ((ctxt->options & XML_PARSE_HUGE) == 0)) {
++ if (len > maxLength) {
+ xmlFatalErrMsgStr(ctxt, XML_ERR_COMMENT_NOT_FINISHED,
+ "Comment too big found", NULL);
+ xmlFree (buf);
+@@ -5159,6 +5175,9 @@ xmlParsePI(xmlParserCtxtPtr ctxt) {
+ xmlChar *buf = NULL;
+ size_t len = 0;
+ size_t size = XML_PARSER_BUFFER_SIZE;
++ size_t maxLength = (ctxt->options & XML_PARSE_HUGE) ?
++ XML_MAX_HUGE_LENGTH :
++ XML_MAX_TEXT_LENGTH;
+ int cur, l;
+ const xmlChar *target;
+ xmlParserInputState state;
+@@ -5234,14 +5253,6 @@ xmlParsePI(xmlParserCtxtPtr ctxt) {
+ return;
+ }
+ count = 0;
+- if ((len > XML_MAX_TEXT_LENGTH) &&
+- ((ctxt->options & XML_PARSE_HUGE) == 0)) {
+- xmlFatalErrMsgStr(ctxt, XML_ERR_PI_NOT_FINISHED,
+- "PI %s too big found", target);
+- xmlFree(buf);
+- ctxt->instate = state;
+- return;
+- }
+ }
+ COPY_BUF(l,buf,len,cur);
+ NEXTL(l);
+@@ -5251,15 +5262,14 @@ xmlParsePI(xmlParserCtxtPtr ctxt) {
+ GROW;
+ cur = CUR_CHAR(l);
+ }
++ if (len > maxLength) {
++ xmlFatalErrMsgStr(ctxt, XML_ERR_PI_NOT_FINISHED,
++ "PI %s too big found", target);
++ xmlFree(buf);
++ ctxt->instate = state;
++ return;
++ }
+ }
+- if ((len > XML_MAX_TEXT_LENGTH) &&
+- ((ctxt->options & XML_PARSE_HUGE) == 0)) {
+- xmlFatalErrMsgStr(ctxt, XML_ERR_PI_NOT_FINISHED,
+- "PI %s too big found", target);
+- xmlFree(buf);
+- ctxt->instate = state;
+- return;
+- }
+ buf[len] = 0;
+ if (cur != '?') {
+ xmlFatalErrMsgStr(ctxt, XML_ERR_PI_NOT_FINISHED,
+@@ -8954,6 +8964,9 @@ xmlParseAttValueInternal(xmlParserCtxtPtr ctxt, int *len, int *alloc,
+ const xmlChar *in = NULL, *start, *end, *last;
+ xmlChar *ret = NULL;
+ int line, col;
++ int maxLength = (ctxt->options & XML_PARSE_HUGE) ?
++ XML_MAX_HUGE_LENGTH :
++ XML_MAX_TEXT_LENGTH;
+
+ GROW;
+ in = (xmlChar *) CUR_PTR;
+@@ -8993,8 +9006,7 @@ xmlParseAttValueInternal(xmlParserCtxtPtr ctxt, int *len, int *alloc,
+ start = in;
+ if (in >= end) {
+ GROW_PARSE_ATT_VALUE_INTERNAL(ctxt, in, start, end)
+- if (((in - start) > XML_MAX_TEXT_LENGTH) &&
+- ((ctxt->options & XML_PARSE_HUGE) == 0)) {
++ if ((in - start) > maxLength) {
+ xmlFatalErrMsg(ctxt, XML_ERR_ATTRIBUTE_NOT_FINISHED,
+ "AttValue length too long\n");
+ return(NULL);
+@@ -9007,8 +9019,7 @@ xmlParseAttValueInternal(xmlParserCtxtPtr ctxt, int *len, int *alloc,
+ if ((*in++ == 0x20) && (*in == 0x20)) break;
+ if (in >= end) {
+ GROW_PARSE_ATT_VALUE_INTERNAL(ctxt, in, start, end)
+- if (((in - start) > XML_MAX_TEXT_LENGTH) &&
+- ((ctxt->options & XML_PARSE_HUGE) == 0)) {
++ if ((in - start) > maxLength) {
+ xmlFatalErrMsg(ctxt, XML_ERR_ATTRIBUTE_NOT_FINISHED,
+ "AttValue length too long\n");
+ return(NULL);
+@@ -9041,16 +9052,14 @@ xmlParseAttValueInternal(xmlParserCtxtPtr ctxt, int *len, int *alloc,
+ last = last + delta;
+ }
+ end = ctxt->input->end;
+- if (((in - start) > XML_MAX_TEXT_LENGTH) &&
+- ((ctxt->options & XML_PARSE_HUGE) == 0)) {
++ if ((in - start) > maxLength) {
+ xmlFatalErrMsg(ctxt, XML_ERR_ATTRIBUTE_NOT_FINISHED,
+ "AttValue length too long\n");
+ return(NULL);
+ }
+ }
+ }
+- if (((in - start) > XML_MAX_TEXT_LENGTH) &&
+- ((ctxt->options & XML_PARSE_HUGE) == 0)) {
++ if ((in - start) > maxLength) {
+ xmlFatalErrMsg(ctxt, XML_ERR_ATTRIBUTE_NOT_FINISHED,
+ "AttValue length too long\n");
+ return(NULL);
+@@ -9063,8 +9072,7 @@ xmlParseAttValueInternal(xmlParserCtxtPtr ctxt, int *len, int *alloc,
+ col++;
+ if (in >= end) {
+ GROW_PARSE_ATT_VALUE_INTERNAL(ctxt, in, start, end)
+- if (((in - start) > XML_MAX_TEXT_LENGTH) &&
+- ((ctxt->options & XML_PARSE_HUGE) == 0)) {
++ if ((in - start) > maxLength) {
+ xmlFatalErrMsg(ctxt, XML_ERR_ATTRIBUTE_NOT_FINISHED,
+ "AttValue length too long\n");
+ return(NULL);
+@@ -9072,8 +9080,7 @@ xmlParseAttValueInternal(xmlParserCtxtPtr ctxt, int *len, int *alloc,
+ }
+ }
+ last = in;
+- if (((in - start) > XML_MAX_TEXT_LENGTH) &&
+- ((ctxt->options & XML_PARSE_HUGE) == 0)) {
++ if ((in - start) > maxLength) {
+ xmlFatalErrMsg(ctxt, XML_ERR_ATTRIBUTE_NOT_FINISHED,
+ "AttValue length too long\n");
+ return(NULL);
+@@ -9763,6 +9770,9 @@ xmlParseCDSect(xmlParserCtxtPtr ctxt) {
+ int s, sl;
+ int cur, l;
+ int count = 0;
++ int maxLength = (ctxt->options & XML_PARSE_HUGE) ?
++ XML_MAX_HUGE_LENGTH :
++ XML_MAX_TEXT_LENGTH;
+
+ /* Check 2.6.0 was NXT(0) not RAW */
+ if (CMP9(CUR_PTR, '<', '!', '[', 'C', 'D', 'A', 'T', 'A', '[')) {
+@@ -9796,13 +9806,6 @@ xmlParseCDSect(xmlParserCtxtPtr ctxt) {
+ if (len + 5 >= size) {
+ xmlChar *tmp;
+
+- if ((size > XML_MAX_TEXT_LENGTH) &&
+- ((ctxt->options & XML_PARSE_HUGE) == 0)) {
+- xmlFatalErrMsgStr(ctxt, XML_ERR_CDATA_NOT_FINISHED,
+- "CData section too big found", NULL);
+- xmlFree (buf);
+- return;
+- }
+ tmp = (xmlChar *) xmlRealloc(buf, size * 2 * sizeof(xmlChar));
+ if (tmp == NULL) {
+ xmlFree(buf);
+@@ -9829,6 +9832,12 @@ xmlParseCDSect(xmlParserCtxtPtr ctxt) {
+ }
+ NEXTL(l);
+ cur = CUR_CHAR(l);
++ if (len > maxLength) {
++ xmlFatalErrMsg(ctxt, XML_ERR_CDATA_NOT_FINISHED,
++ "CData section too big found\n");
++ xmlFree(buf);
++ return;
++ }
+ }
+ buf[len] = 0;
+ ctxt->instate = XML_PARSER_CONTENT;
+--
+GitLab
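For context only (a usage sketch, not part of the patch): XML_PARSE_HUGE is a caller-supplied parser option, and after this change it raises rather than removes the length limits, keeping names capped at XML_MAX_TEXT_LENGTH and other content at the new XML_MAX_HUGE_LENGTH.

    #include <libxml/parser.h>

    /* Sketch: parse an in-memory document, optionally opting in to the
     * relaxed-but-still-bounded limits handled by the hunks above. */
    static xmlDocPtr parse_buffer(const char *buf, int len, int allow_huge)
    {
        int options = allow_huge ? XML_PARSE_HUGE : 0;

        /* "buffer.xml" is only a name used in error messages, not a file. */
        return xmlReadMemory(buf, len, "buffer.xml", NULL, options);
    }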
diff --git a/poky/meta/recipes-core/libxml/libxml2/CVE-2022-40304.patch b/poky/meta/recipes-core/libxml/libxml2/CVE-2022-40304.patch
new file mode 100644
index 0000000000..c19726fe9f
--- /dev/null
+++ b/poky/meta/recipes-core/libxml/libxml2/CVE-2022-40304.patch
@@ -0,0 +1,104 @@
+From 1b41ec4e9433b05bb0376be4725804c54ef1d80b Mon Sep 17 00:00:00 2001
+From: Nick Wellnhofer <wellnhofer@aevum.de>
+Date: Wed, 31 Aug 2022 22:11:25 +0200
+Subject: [PATCH] [CVE-2022-40304] Fix dict corruption caused by entity
+ reference cycles
+
+When an entity reference cycle is detected, the entity content is
+cleared by setting its first byte to zero. But the entity content might
+be allocated from a dict. In this case, the dict entry becomes corrupted
+leading to all kinds of logic errors, including memory errors like
+double-frees.
+
+Stop storing entity content, orig, ExternalID and SystemID in a dict.
+These values are unlikely to occur multiple times in a document, so they
+shouldn't have been stored in a dict in the first place.
+
+Thanks to Ned Williamson and Nathan Wachholz working with Google Project
+Zero for the report!
+
+CVE: CVE-2022-40304
+Upstream-Status: Backport [https://gitlab.gnome.org/GNOME/libxml2/-/commit/1b41ec4e9433b05bb0376be4725804c54ef1d80b]
+Signed-off-by: Bhabu Bindu <bhabu.bindu@kpit.com>
+---
+ entities.c | 55 ++++++++++++++++--------------------------------------
+ 1 file changed, 16 insertions(+), 39 deletions(-)
+
+diff --git a/entities.c b/entities.c
+index 84435515..d4e5412e 100644
+--- a/entities.c
++++ b/entities.c
+@@ -128,36 +128,19 @@ xmlFreeEntity(xmlEntityPtr entity)
+ if ((entity->children) && (entity->owner == 1) &&
+ (entity == (xmlEntityPtr) entity->children->parent))
+ xmlFreeNodeList(entity->children);
+- if (dict != NULL) {
+- if ((entity->name != NULL) && (!xmlDictOwns(dict, entity->name)))
+- xmlFree((char *) entity->name);
+- if ((entity->ExternalID != NULL) &&
+- (!xmlDictOwns(dict, entity->ExternalID)))
+- xmlFree((char *) entity->ExternalID);
+- if ((entity->SystemID != NULL) &&
+- (!xmlDictOwns(dict, entity->SystemID)))
+- xmlFree((char *) entity->SystemID);
+- if ((entity->URI != NULL) && (!xmlDictOwns(dict, entity->URI)))
+- xmlFree((char *) entity->URI);
+- if ((entity->content != NULL)
+- && (!xmlDictOwns(dict, entity->content)))
+- xmlFree((char *) entity->content);
+- if ((entity->orig != NULL) && (!xmlDictOwns(dict, entity->orig)))
+- xmlFree((char *) entity->orig);
+- } else {
+- if (entity->name != NULL)
+- xmlFree((char *) entity->name);
+- if (entity->ExternalID != NULL)
+- xmlFree((char *) entity->ExternalID);
+- if (entity->SystemID != NULL)
+- xmlFree((char *) entity->SystemID);
+- if (entity->URI != NULL)
+- xmlFree((char *) entity->URI);
+- if (entity->content != NULL)
+- xmlFree((char *) entity->content);
+- if (entity->orig != NULL)
+- xmlFree((char *) entity->orig);
+- }
++ if ((entity->name != NULL) &&
++ ((dict == NULL) || (!xmlDictOwns(dict, entity->name))))
++ xmlFree((char *) entity->name);
++ if (entity->ExternalID != NULL)
++ xmlFree((char *) entity->ExternalID);
++ if (entity->SystemID != NULL)
++ xmlFree((char *) entity->SystemID);
++ if (entity->URI != NULL)
++ xmlFree((char *) entity->URI);
++ if (entity->content != NULL)
++ xmlFree((char *) entity->content);
++ if (entity->orig != NULL)
++ xmlFree((char *) entity->orig);
+ xmlFree(entity);
+ }
+
+@@ -193,18 +176,12 @@ xmlCreateEntity(xmlDictPtr dict, const xmlChar *name, int type,
+ ret->SystemID = xmlStrdup(SystemID);
+ } else {
+ ret->name = xmlDictLookup(dict, name, -1);
+- if (ExternalID != NULL)
+- ret->ExternalID = xmlDictLookup(dict, ExternalID, -1);
+- if (SystemID != NULL)
+- ret->SystemID = xmlDictLookup(dict, SystemID, -1);
++ ret->ExternalID = xmlStrdup(ExternalID);
++ ret->SystemID = xmlStrdup(SystemID);
+ }
+ if (content != NULL) {
+ ret->length = xmlStrlen(content);
+- if ((dict != NULL) && (ret->length < 5))
+- ret->content = (xmlChar *)
+- xmlDictLookup(dict, content, ret->length);
+- else
+- ret->content = xmlStrndup(content, ret->length);
++ ret->content = xmlStrndup(content, ret->length);
+ } else {
+ ret->length = 0;
+ ret->content = NULL;
+--
+GitLab
diff --git a/poky/meta/recipes-core/libxml/libxml2_2.9.10.bb b/poky/meta/recipes-core/libxml/libxml2_2.9.10.bb
index dc62991739..40e3434ead 100644
--- a/poky/meta/recipes-core/libxml/libxml2_2.9.10.bb
+++ b/poky/meta/recipes-core/libxml/libxml2_2.9.10.bb
@@ -34,6 +34,8 @@ SRC_URI += "http://www.w3.org/XML/Test/xmlts20080827.tar.gz;subdir=${BP};name=te
file://CVE-2022-29824.patch \
file://0001-Port-gentest.py-to-Python-3.patch \
file://CVE-2016-3709.patch \
+ file://CVE-2022-40303.patch \
+ file://CVE-2022-40304.patch \
"
SRC_URI[archive.sha256sum] = "593b7b751dd18c2d6abcd0c4bcb29efc203d0b4373a6df98e3a455ea74ae2813"
diff --git a/poky/meta/recipes-core/meta/buildtools-tarball.bb b/poky/meta/recipes-core/meta/buildtools-tarball.bb
index faf7108a86..24f5f28589 100644
--- a/poky/meta/recipes-core/meta/buildtools-tarball.bb
+++ b/poky/meta/recipes-core/meta/buildtools-tarball.bb
@@ -66,7 +66,7 @@ create_sdk_files_append () {
# Generate new (mini) sdk-environment-setup file
script=${1:-${SDK_OUTPUT}/${SDKPATH}/environment-setup-${SDK_SYS}}
touch $script
- echo 'export PATH=${SDKPATHNATIVE}${bindir_nativesdk}:${SDKPATHNATIVE}${sbindir_nativesdk}:${SDKPATHNATIVE}${base_bindir_nativesdk}:${SDKPATHNATIVE}${base_sbindir_nativesdk}:$PATH' >> $script
+ echo 'export PATH="${SDKPATHNATIVE}${bindir_nativesdk}:${SDKPATHNATIVE}${sbindir_nativesdk}:${SDKPATHNATIVE}${base_bindir_nativesdk}:${SDKPATHNATIVE}${base_sbindir_nativesdk}:$PATH"' >> $script
echo 'export OECORE_NATIVE_SYSROOT="${SDKPATHNATIVE}"' >> $script
echo 'export GIT_SSL_CAINFO="${SDKPATHNATIVE}${sysconfdir}/ssl/certs/ca-certificates.crt"' >>$script
echo 'export SSL_CERT_FILE="${SDKPATHNATIVE}${sysconfdir}/ssl/certs/ca-certificates.crt"' >>$script
diff --git a/poky/meta/recipes-core/meta/cve-update-db-native.bb b/poky/meta/recipes-core/meta/cve-update-db-native.bb
index 85874ead01..efc32470d3 100644
--- a/poky/meta/recipes-core/meta/cve-update-db-native.bb
+++ b/poky/meta/recipes-core/meta/cve-update-db-native.bb
@@ -17,6 +17,12 @@ deltask do_populate_sysroot
# Use a negative value to skip the update
CVE_DB_UPDATE_INTERVAL ?= "86400"
+# Timeout for blocking socket operations, such as the connection attempt.
+CVE_SOCKET_TIMEOUT ?= "60"
+NVDCVE_URL ?= "https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-"
+
+CVE_DB_TEMP_FILE ?= "${CVE_CHECK_DB_DIR}/temp_nvdcve_1.1.db"
+
python () {
if not bb.data.inherits_class("cve-check", d):
raise bb.parse.SkipRecipe("Skip recipe when cve-check class is not loaded.")
@@ -28,24 +34,15 @@ python do_fetch() {
"""
import bb.utils
import bb.progress
- import sqlite3, urllib, urllib.parse, shutil, gzip
- from datetime import date
+ import shutil
bb.utils.export_proxies(d)
- BASE_URL = "https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-"
- YEAR_START = 2002
-
db_file = d.getVar("CVE_CHECK_DB_FILE")
db_dir = os.path.dirname(db_file)
+ db_tmp_file = d.getVar("CVE_DB_TEMP_FILE")
- if os.path.exists("{0}-journal".format(db_file)):
- # If a journal is present the last update might have been interrupted. In that case,
- # just wipe any leftovers and force the DB to be recreated.
- os.remove("{0}-journal".format(db_file))
-
- if os.path.exists(db_file):
- os.remove(db_file)
+ cleanup_db_download(db_file, db_tmp_file)
# The NVD database changes once a day, so no need to update more frequently
# Allow the user to force-update
@@ -62,26 +59,81 @@ python do_fetch() {
pass
bb.utils.mkdirhier(db_dir)
+ if os.path.exists(db_file):
+ shutil.copy2(db_file, db_tmp_file)
+
+ if update_db_file(db_tmp_file, d) == True:
+ # Update downloaded correctly, can swap files
+ shutil.move(db_tmp_file, db_file)
+ else:
+ # Update failed, do not modify the database
+ bb.note("CVE database update failed")
+ os.remove(db_tmp_file)
+}
+
+do_fetch[lockfiles] += "${CVE_CHECK_DB_FILE_LOCK}"
+do_fetch[file-checksums] = ""
+do_fetch[vardeps] = ""
+
+def cleanup_db_download(db_file, db_tmp_file):
+ """
+ Cleanup the download space from possible failed downloads
+ """
+
+ # Clean up the updates done on the main file
+ # Remove it only if a journal file exists - it means a complete re-download
+ if os.path.exists("{0}-journal".format(db_file)):
+ # If a journal is present the last update might have been interrupted. In that case,
+ # just wipe any leftovers and force the DB to be recreated.
+ os.remove("{0}-journal".format(db_file))
+
+ if os.path.exists(db_file):
+ os.remove(db_file)
+
+ # Clean up the temporary file downloads; we can remove both the journal
+ # and the temporary database
+ if os.path.exists("{0}-journal".format(db_tmp_file)):
+ # If a journal is present the last update might have been interrupted. In that case,
+ # just wipe any leftovers and force the DB to be recreated.
+ os.remove("{0}-journal".format(db_tmp_file))
+
+ if os.path.exists(db_tmp_file):
+ os.remove(db_tmp_file)
+
+def update_db_file(db_tmp_file, d):
+ """
+ Update the given database file
+ """
+ import bb.utils, bb.progress
+ from datetime import date
+ import urllib, gzip, sqlite3
+
+ YEAR_START = 2002
+ cve_socket_timeout = int(d.getVar("CVE_SOCKET_TIMEOUT"))
# Connect to database
- conn = sqlite3.connect(db_file)
+ conn = sqlite3.connect(db_tmp_file)
initialize_db(conn)
with bb.progress.ProgressHandler(d) as ph, open(os.path.join(d.getVar("TMPDIR"), 'cve_check'), 'a') as cve_f:
total_years = date.today().year + 1 - YEAR_START
for i, year in enumerate(range(YEAR_START, date.today().year + 1)):
+ bb.debug(2, "Updating %d" % year)
ph.update((float(i + 1) / total_years) * 100)
- year_url = BASE_URL + str(year)
+ year_url = (d.getVar('NVDCVE_URL')) + str(year)
meta_url = year_url + ".meta"
json_url = year_url + ".json.gz"
# Retrieve meta last modified date
try:
- response = urllib.request.urlopen(meta_url)
+ response = urllib.request.urlopen(meta_url, timeout=cve_socket_timeout)
except urllib.error.URLError as e:
cve_f.write('Warning: CVE db update error, Unable to fetch CVE data.\n\n')
- bb.warn("Failed to fetch CVE data (%s)" % e.reason)
- return
+ bb.warn("Failed to fetch CVE data (%s)" % e)
+ import socket
+ result = socket.getaddrinfo("nvd.nist.gov", 443, proto=socket.IPPROTO_TCP)
+ bb.warn("Host IPs are %s" % (", ".join(t[4][0] for t in result)))
+ return False
if response:
for l in response.read().decode("utf-8").splitlines():
@@ -91,7 +143,7 @@ python do_fetch() {
break
else:
bb.warn("Cannot parse CVE metadata, update failed")
- return
+ return False
# Compare with current db last modified date
cursor = conn.execute("select DATE from META where YEAR = ?", (year,))
@@ -99,31 +151,29 @@ python do_fetch() {
cursor.close()
if not meta or meta[0] != last_modified:
+ bb.debug(2, "Updating entries")
# Clear products table entries corresponding to current year
conn.execute("delete from PRODUCTS where ID like ?", ('CVE-%d%%' % year,)).close()
# Update db with current year json file
try:
- response = urllib.request.urlopen(json_url)
+ response = urllib.request.urlopen(json_url, timeout=cve_socket_timeout)
if response:
update_db(conn, gzip.decompress(response.read()).decode('utf-8'))
conn.execute("insert or replace into META values (?, ?)", [year, last_modified]).close()
except urllib.error.URLError as e:
cve_f.write('Warning: CVE db update error, CVE data is outdated.\n\n')
bb.warn("Cannot parse CVE data (%s), update failed" % e.reason)
- return
-
+ return False
+ else:
+ bb.debug(2, "Already up to date (last modified %s)" % last_modified)
# Update success, set the date to cve_check file.
if year == date.today().year:
cve_f.write('CVE database update : %s\n\n' % date.today())
conn.commit()
conn.close()
-}
-
-do_fetch[lockfiles] += "${CVE_CHECK_DB_FILE_LOCK}"
-do_fetch[file-checksums] = ""
-do_fetch[vardeps] = ""
+ return True
def initialize_db(conn):
with conn:
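
The reworked do_fetch above updates a scratch copy of the database and only swaps it over the live CVE_CHECK_DB_FILE when the whole download succeeded, and every urlopen() call now carries a socket timeout so a stalled NVD server can no longer hang the task indefinitely. A minimal standalone sketch of that download-to-temp-then-swap pattern, using only the Python standard library (the file names and the single feed URL below are illustrative; this is not the recipe code itself):

import os
import shutil
import socket
import urllib.error
import urllib.request

DB_FILE = "nvd.db"                 # stand-in for CVE_CHECK_DB_FILE
DB_TMP = DB_FILE + ".tmp"          # stand-in for CVE_DB_TEMP_FILE
FEED_URL = "https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-2023.meta"
TIMEOUT = 60                       # seconds, like CVE_SOCKET_TIMEOUT

def update(tmp_path):
    """Fetch one feed into the scratch database; return True only on success."""
    try:
        with urllib.request.urlopen(FEED_URL, timeout=TIMEOUT) as response:
            response.read()        # a real updater would parse this and fill sqlite here
        return True
    except urllib.error.URLError as exc:
        # Report the failure together with the resolved host addresses,
        # mirroring the extra diagnostic added above.
        try:
            infos = socket.getaddrinfo("nvd.nist.gov", 443, proto=socket.IPPROTO_TCP)
            ips = ", ".join(t[4][0] for t in infos)
        except OSError:
            ips = "unresolved"
        print("fetch failed: %s (host IPs: %s)" % (exc, ips))
        return False

if os.path.exists(DB_FILE):
    shutil.copy2(DB_FILE, DB_TMP)  # start the update from the current data
else:
    open(DB_TMP, "wb").close()     # no previous database: start from an empty scratch file

if update(DB_TMP):
    shutil.move(DB_TMP, DB_FILE)   # publish the refreshed database
else:
    os.remove(DB_TMP)              # failed update: leave the old database untouched
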
diff --git a/poky/meta/recipes-core/ovmf/ovmf/0001-Basetools-genffs-fix-gcc12-warning.patch b/poky/meta/recipes-core/ovmf/ovmf/0001-Basetools-genffs-fix-gcc12-warning.patch
new file mode 100644
index 0000000000..4418d52898
--- /dev/null
+++ b/poky/meta/recipes-core/ovmf/ovmf/0001-Basetools-genffs-fix-gcc12-warning.patch
@@ -0,0 +1,49 @@
+From 7b005f344e533cd913c3ca05b266f9872df886d1 Mon Sep 17 00:00:00 2001
+From: Gerd Hoffmann <kraxel@redhat.com>
+Date: Thu, 24 Mar 2022 20:04:34 +0800
+Subject: [PATCH] BaseTools: fix gcc12 warning
+
+GenFfs.c:545:5: error: pointer ‘InFileHandle’ used after ‘fclose’ [-Werror=use-after-free]
+ 545 | Error(NULL, 0, 4001, "Resource", "memory cannot be allocated of %s", InFileHandle);
+ | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+GenFfs.c:544:5: note: call to ‘fclose’ here
+ 544 | fclose (InFileHandle);
+ | ^~~~~~~~~~~~~~~~~~~~~
+
+Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
+Reviewed-by: Bob Feng <bob.c.feng@intel.com>
+
+Upstream-Status: Backport [https://github.com/tianocore/edk2/commit/7b005f344e533cd913c3ca05b266f9872df886d1]
+Signed-off-by: Steve Sakoman <steve@sakoman.com>
+
+---
+ BaseTools/Source/C/GenFfs/GenFfs.c | 2 +-
+ BaseTools/Source/C/GenSec/GenSec.c | 2 +-
+ 2 files changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/BaseTools/Source/C/GenFfs/GenFfs.c b/BaseTools/Source/C/GenFfs/GenFfs.c
+index 949025c33325..d78d62ab3689 100644
+--- a/BaseTools/Source/C/GenFfs/GenFfs.c
++++ b/BaseTools/Source/C/GenFfs/GenFfs.c
+@@ -542,7 +542,7 @@ GetAlignmentFromFile(char *InFile, UINT32 *Alignment)
+ PeFileBuffer = (UINT8 *) malloc (PeFileSize);
+ if (PeFileBuffer == NULL) {
+ fclose (InFileHandle);
+- Error(NULL, 0, 4001, "Resource", "memory cannot be allocated of %s", InFileHandle);
++ Error(NULL, 0, 4001, "Resource", "memory cannot be allocated for %s", InFile);
+ return EFI_OUT_OF_RESOURCES;
+ }
+ fread (PeFileBuffer, sizeof (UINT8), PeFileSize, InFileHandle);
+diff --git a/BaseTools/Source/C/GenSec/GenSec.c b/BaseTools/Source/C/GenSec/GenSec.c
+index d54a4f9e0a7d..b1d05367ec0b 100644
+--- a/BaseTools/Source/C/GenSec/GenSec.c
++++ b/BaseTools/Source/C/GenSec/GenSec.c
+@@ -1062,7 +1062,7 @@ GetAlignmentFromFile(char *InFile, UINT32 *Alignment)
+ PeFileBuffer = (UINT8 *) malloc (PeFileSize);
+ if (PeFileBuffer == NULL) {
+ fclose (InFileHandle);
+- Error(NULL, 0, 4001, "Resource", "memory cannot be allocated of %s", InFileHandle);
++ Error(NULL, 0, 4001, "Resource", "memory cannot be allocated for %s", InFile);
+ return EFI_OUT_OF_RESOURCES;
+ }
+ fread (PeFileBuffer, sizeof (UINT8), PeFileSize, InFileHandle);
diff --git a/poky/meta/recipes-core/ovmf/ovmf/0001-Basetools-lzmaenc-fix-gcc12-warning.patch b/poky/meta/recipes-core/ovmf/ovmf/0001-Basetools-lzmaenc-fix-gcc12-warning.patch
new file mode 100644
index 0000000000..a6ef87aa79
--- /dev/null
+++ b/poky/meta/recipes-core/ovmf/ovmf/0001-Basetools-lzmaenc-fix-gcc12-warning.patch
@@ -0,0 +1,53 @@
+From 24551a99d1f765c891a4dc21a36f18ccbf56e612 Mon Sep 17 00:00:00 2001
+From: Steve Sakoman <steve@sakoman.com>
+Date: Tue, 10 Jan 2023 06:15:00 -1000
+Subject: [PATCH] BaseTools: fix gcc12 warning
+
+Sdk/C/LzmaEnc.c: In function ‘LzmaEnc_CodeOneMemBlock’:
+Sdk/C/LzmaEnc.c:2828:19: error: storing the address of local variable ‘outStream’ in ‘*p.rc.outStream’ [-Werror=dangling-pointer=]
+ 2828 | p->rc.outStream = &outStream.vt;
+ | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~
+Sdk/C/LzmaEnc.c:2811:28: note: ‘outStream’ declared here
+ 2811 | CLzmaEnc_SeqOutStreamBuf outStream;
+ | ^~~~~~~~~
+Sdk/C/LzmaEnc.c:2811:28: note: ‘pp’ declared here
+Sdk/C/LzmaEnc.c:2828:19: error: storing the address of local variable ‘outStream’ in ‘*(CLzmaEnc *)pp.rc.outStream’ [-Werror=dangling-pointer=]
+ 2828 | p->rc.outStream = &outStream.vt;
+ | ~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~
+Sdk/C/LzmaEnc.c:2811:28: note: ‘outStream’ declared here
+ 2811 | CLzmaEnc_SeqOutStreamBuf outStream;
+ | ^~~~~~~~~
+Sdk/C/LzmaEnc.c:2811:28: note: ‘pp’ declared here
+cc1: all warnings being treated as errors
+
+Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
+Reviewed-by: Bob Feng <bob.c.feng@intel.com>
+
+Upstream-Status: Backport [https://github.com/tianocore/edk2/commit/85021f8cf22d1bd4114803c6c610dea5ef0059f1]
+Signed-off-by: Steve Sakoman <steve@sakoman.com>
+---
+ BaseTools/Source/C/LzmaCompress/Sdk/C/LzmaEnc.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/BaseTools/Source/C/LzmaCompress/Sdk/C/LzmaEnc.c b/BaseTools/Source/C/LzmaCompress/Sdk/C/LzmaEnc.c
+index e281716fee..b575c4f888 100644
+--- a/BaseTools/Source/C/LzmaCompress/Sdk/C/LzmaEnc.c
++++ b/BaseTools/Source/C/LzmaCompress/Sdk/C/LzmaEnc.c
+@@ -2638,12 +2638,13 @@ SRes LzmaEnc_CodeOneMemBlock(CLzmaEncHandle pp, Bool reInit,
+
+ nowPos64 = p->nowPos64;
+ RangeEnc_Init(&p->rc);
+- p->rc.outStream = &outStream.vt;
+
+ if (desiredPackSize == 0)
+ return SZ_ERROR_OUTPUT_EOF;
+
++ p->rc.outStream = &outStream.vt;
+ res = LzmaEnc_CodeOneBlock(p, desiredPackSize, *unpackSize);
++ p->rc.outStream = NULL;
+
+ *unpackSize = (UInt32)(p->nowPos64 - nowPos64);
+ *destLen -= outStream.rem;
+--
+2.25.1
+
diff --git a/poky/meta/recipes-core/ovmf/ovmf/0001-Basetools-turn-off-gcc12-warning.patch b/poky/meta/recipes-core/ovmf/ovmf/0001-Basetools-turn-off-gcc12-warning.patch
new file mode 100644
index 0000000000..73a432684c
--- /dev/null
+++ b/poky/meta/recipes-core/ovmf/ovmf/0001-Basetools-turn-off-gcc12-warning.patch
@@ -0,0 +1,41 @@
+From 22130dcd98b4d4b76ac8d922adb4a2dbc86fa52c Mon Sep 17 00:00:00 2001
+From: Gerd Hoffmann <kraxel@redhat.com>
+Date: Thu, 24 Mar 2022 20:04:36 +0800
+Subject: [PATCH] Basetools: turn off gcc12 warning
+
+In function ‘SetDevicePathEndNode’,
+ inlined from ‘FileDevicePath’ at DevicePathUtilities.c:857:5:
+DevicePathUtilities.c:321:3: error: writing 4 bytes into a region of size 1 [-Werror=stringop-overflow=]
+ 321 | memcpy (Node, &mUefiDevicePathLibEndDevicePath, sizeof (mUefiDevicePathLibEndDevicePath));
+ | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+In file included from UefiDevicePathLib.h:22,
+ from DevicePathUtilities.c:16:
+../Include/Protocol/DevicePath.h: In function ‘FileDevicePath’:
+../Include/Protocol/DevicePath.h:51:9: note: destination object ‘Type’ of size 1
+ 51 | UINT8 Type; ///< 0x01 Hardware Device Path.
+ | ^~~~
+
+Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
+Reviewed-by: Bob Feng <bob.c.feng@intel.com>
+
+Upstream-Status: Backport [https://github.com/tianocore/edk2/commit/22130dcd98b4d4b76ac8d922adb4a2dbc86fa52c]
+Signed-off-by: Steve Sakoman <steve@sakoman.com>
+
+---
+ BaseTools/Source/C/DevicePath/GNUmakefile | 3 +++
+ 1 file changed, 3 insertions(+)
+
+diff --git a/BaseTools/Source/C/DevicePath/GNUmakefile b/BaseTools/Source/C/DevicePath/GNUmakefile
+index 7ca08af9662d..b05d2bddfa68 100644
+--- a/BaseTools/Source/C/DevicePath/GNUmakefile
++++ b/BaseTools/Source/C/DevicePath/GNUmakefile
+@@ -13,6 +13,9 @@ OBJECTS = DevicePath.o UefiDevicePathLib.o DevicePathFromText.o DevicePathUtili
+
+ include $(MAKEROOT)/Makefiles/app.makefile
+
++# gcc 12 trips over device path handling
++BUILD_CFLAGS += -Wno-error=stringop-overflow
++
+ LIBS = -lCommon
+ ifeq ($(CYGWIN), CYGWIN)
+ LIBS += -L/lib/e2fsprogs -luuid
diff --git a/poky/meta/recipes-core/ovmf/ovmf_git.bb b/poky/meta/recipes-core/ovmf/ovmf_git.bb
index b00119313b..a487f77e3c 100644
--- a/poky/meta/recipes-core/ovmf/ovmf_git.bb
+++ b/poky/meta/recipes-core/ovmf/ovmf_git.bb
@@ -18,6 +18,9 @@ SRC_URI = "gitsm://github.com/tianocore/edk2.git;branch=master;protocol=https \
file://0003-ovmf-enable-long-path-file.patch \
file://0004-ovmf-Update-to-latest.patch \
file://0001-Fix-VLA-parameter-warning.patch \
+ file://0001-Basetools-genffs-fix-gcc12-warning.patch \
+ file://0001-Basetools-lzmaenc-fix-gcc12-warning.patch \
+ file://0001-Basetools-turn-off-gcc12-warning.patch \
"
PV = "edk2-stable202008"
diff --git a/poky/meta/recipes-core/psplash/files/psplash-start.service b/poky/meta/recipes-core/psplash/files/psplash-start.service
index 36c2bb38e0..bec9368427 100644
--- a/poky/meta/recipes-core/psplash/files/psplash-start.service
+++ b/poky/meta/recipes-core/psplash/files/psplash-start.service
@@ -2,6 +2,7 @@
Description=Start psplash boot splash screen
DefaultDependencies=no
RequiresMountsFor=/run
+ConditionFileIsExecutable=/usr/bin/psplash
[Service]
Type=notify
diff --git a/poky/meta/recipes-core/psplash/files/psplash-systemd.service b/poky/meta/recipes-core/psplash/files/psplash-systemd.service
index 082207f232..e93e3deb35 100644
--- a/poky/meta/recipes-core/psplash/files/psplash-systemd.service
+++ b/poky/meta/recipes-core/psplash/files/psplash-systemd.service
@@ -4,6 +4,7 @@ DefaultDependencies=no
After=psplash-start.service
Requires=psplash-start.service
RequiresMountsFor=/run
+ConditionFileIsExecutable=/usr/bin/psplash
[Service]
ExecStart=/usr/bin/psplash-systemd
diff --git a/poky/meta/recipes-core/systemd/systemd/CVE-2022-3821.patch b/poky/meta/recipes-core/systemd/systemd/CVE-2022-3821.patch
new file mode 100644
index 0000000000..f9c6704cfc
--- /dev/null
+++ b/poky/meta/recipes-core/systemd/systemd/CVE-2022-3821.patch
@@ -0,0 +1,47 @@
+From 9102c625a673a3246d7e73d8737f3494446bad4e Mon Sep 17 00:00:00 2001
+From: Yu Watanabe <watanabe.yu+github@gmail.com>
+Date: Thu, 7 Jul 2022 18:27:02 +0900
+Subject: [PATCH] time-util: fix buffer-over-run
+
+Fixes #23928.
+
+CVE: CVE-2022-3821
+Upstream-Status: Backport [https://github.com/systemd/systemd/commit/9102c625a673a3246d7e73d8737f3494446bad4e.patch]
+Signed-off-by: Ranjitsinh Rathod <ranjitsinh.rathod@kpit.com>
+Comment: Both the hunks refreshed to backport
+
+---
+ src/basic/time-util.c | 2 +-
+ src/test/test-time-util.c | 5 +++++
+ 2 files changed, 6 insertions(+), 1 deletion(-)
+
+diff --git a/src/basic/time-util.c b/src/basic/time-util.c
+index abbc4ad5cd70..26d59de12348 100644
+--- a/src/basic/time-util.c
++++ b/src/basic/time-util.c
+@@ -514,7 +514,7 @@ char *format_timespan(char *buf, size_t
+ t = b;
+ }
+
+- n = MIN((size_t) k, l);
++ n = MIN((size_t) k, l-1);
+
+ l -= n;
+ p += n;
+diff --git a/src/test/test-time-util.c b/src/test/test-time-util.c
+index e8e4e2a67bb1..58c5fa9be40c 100644
+--- a/src/test/test-time-util.c
++++ b/src/test/test-time-util.c
+@@ -501,6 +501,12 @@ int main(int argc, char *argv[]) {
+ test_format_timespan(1);
+ test_format_timespan(USEC_PER_MSEC);
+ test_format_timespan(USEC_PER_SEC);
++
++ /* See issue #23928. */
++ _cleanup_free_ char *buf;
++ assert_se(buf = new(char, 5));
++ assert_se(buf == format_timespan(buf, 5, 100005, 1000));
++
+ test_timezone_is_valid();
+ test_get_timezones();
+ test_usec_add();
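
The one-character change above reserves room for the trailing NUL: with n = MIN((size_t) k, l) a chunk as long as the remaining buffer left no space for the terminator written afterwards, overrunning the buffer by one byte. A tiny arithmetic sketch of the off-by-one (plain Python, not the systemd code; the 5-byte buffer matches the new test case, the sample string is only an illustration):

buf_len = 5                 # like new(char, 5) in the test added above
chunk = "100ms"             # 5 visible characters, 6 bytes once a NUL is appended

n_old = min(len(chunk), buf_len)      # 5 -> the NUL would land at index 5, past the end
n_new = min(len(chunk), buf_len - 1)  # 4 -> the text is truncated, the NUL fits at index 4

assert n_old + 1 > buf_len            # old computation overruns a full buffer by one
assert n_new + 1 <= buf_len           # fixed computation always leaves room
print(n_old, n_new)
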
diff --git a/poky/meta/recipes-core/systemd/systemd/CVE-2023-26604-1.patch b/poky/meta/recipes-core/systemd/systemd/CVE-2023-26604-1.patch
new file mode 100644
index 0000000000..39f9480cf8
--- /dev/null
+++ b/poky/meta/recipes-core/systemd/systemd/CVE-2023-26604-1.patch
@@ -0,0 +1,115 @@
+From 612ebf6c913dd0e4197c44909cb3157f5c51a2f0 Mon Sep 17 00:00:00 2001
+From: Lennart Poettering <lennart@poettering.net>
+Date: Mon, 31 Aug 2020 19:37:13 +0200
+Subject: [PATCH] pager: set $LESSSECURE whenver we invoke a pager
+
+Some extra safety when invoked via "sudo". With this we address a
+genuine design flaw of sudo, and we shouldn't need to deal with this.
+But it's still a good idea to disable this surface given how exotic it
+is.
+
+Prompted by #5666
+
+CVE: CVE-2023-26604
+Upstream-Status: Backport [https://github.com/systemd/systemd/pull/17270/commits/612ebf6c913dd0e4197c44909cb3157f5c51a2f0]
+Comments: Hunk not refreshed
+Signed-off-by: rajmohan r <rajmohan.r@kpit.com>
+---
+ man/less-variables.xml | 9 +++++++++
+ man/systemctl.xml | 1 +
+ man/systemd.xml | 1 +
+ src/shared/pager.c | 23 +++++++++++++++++++++--
+ 4 files changed, 32 insertions(+), 2 deletions(-)
+
+diff --git a/man/less-variables.xml b/man/less-variables.xml
+index 08e513c99f8e..c52511ca8e18 100644
+--- a/man/less-variables.xml
++++ b/man/less-variables.xml
+@@ -64,6 +64,15 @@
+ the invoking terminal is determined to be UTF-8 compatible).</para></listitem>
+ </varlistentry>
+
++ <varlistentry id='lesssecure'>
++ <term><varname>$SYSTEMD_LESSSECURE</varname></term>
++
++ <listitem><para>Takes a boolean argument. Overrides the <varname>$LESSSECURE</varname> environment
++ variable when invoking the pager, which controls the "secure" mode of less (which disables commands
++ such as <literal>|</literal> which allow to easily shell out to external command lines). By default
++ less secure mode is enabled, with this setting it may be disabled.</para></listitem>
++ </varlistentry>
++
+ <varlistentry id='colors'>
+ <term><varname>$SYSTEMD_COLORS</varname></term>
+
+diff --git a/man/systemctl.xml b/man/systemctl.xml
+index 1c5502883700..a3f0c3041a57 100644
+--- a/man/systemctl.xml
++++ b/man/systemctl.xml
+@@ -2240,6 +2240,7 @@ Jan 12 10:46:45 example.com bluetoothd[8900]: gatt-time-server: Input/output err
+ <xi:include href="less-variables.xml" xpointer="pager"/>
+ <xi:include href="less-variables.xml" xpointer="less"/>
+ <xi:include href="less-variables.xml" xpointer="lesscharset"/>
++ <xi:include href="less-variables.xml" xpointer="lesssecure"/>
+ <xi:include href="less-variables.xml" xpointer="colors"/>
+ <xi:include href="less-variables.xml" xpointer="urlify"/>
+ </refsect1>
+diff --git a/man/systemd.xml b/man/systemd.xml
+index a9040545c2ab..c92cfef77689 100644
+--- a/man/systemd.xml
++++ b/man/systemd.xml
+@@ -692,6 +692,7 @@
+ <xi:include href="less-variables.xml" xpointer="pager"/>
+ <xi:include href="less-variables.xml" xpointer="less"/>
+ <xi:include href="less-variables.xml" xpointer="lesscharset"/>
++ <xi:include href="less-variables.xml" xpointer="lesssecure"/>
+ <xi:include href="less-variables.xml" xpointer="colors"/>
+ <xi:include href="less-variables.xml" xpointer="urlify"/>
+
+diff --git a/src/shared/pager.c b/src/shared/pager.c
+index e03be6d23b2d..9c21881241f5 100644
+--- a/src/shared/pager.c
++++ b/src/shared/pager.c
+@@ -9,6 +9,7 @@
+ #include <unistd.h>
+
+ #include "copy.h"
++#include "env-util.h"
+ #include "fd-util.h"
+ #include "fileio.h"
+ #include "io-util.h"
+@@ -152,8 +153,7 @@ int pager_open(PagerFlags flags) {
+ _exit(EXIT_FAILURE);
+ }
+
+- /* Initialize a good charset for less. This is
+- * particularly important if we output UTF-8
++ /* Initialize a good charset for less. This is particularly important if we output UTF-8
+ * characters. */
+ less_charset = getenv("SYSTEMD_LESSCHARSET");
+ if (!less_charset && is_locale_utf8())
+@@ -164,6 +164,25 @@ int pager_open(PagerFlags flags) {
+ _exit(EXIT_FAILURE);
+ }
+
++ /* People might invoke us from sudo, don't needlessly allow less to be a way to shell out
++ * privileged stuff. */
++ r = getenv_bool("SYSTEMD_LESSSECURE");
++ if (r == 0) { /* Remove env var if off */
++ if (unsetenv("LESSSECURE") < 0) {
++ log_error_errno(errno, "Failed to uset environment variable LESSSECURE: %m");
++ _exit(EXIT_FAILURE);
++ }
++ } else {
++ /* Set env var otherwise */
++ if (r < 0)
++ log_warning_errno(r, "Unable to parse $SYSTEMD_LESSSECURE, ignoring: %m");
++
++ if (setenv("LESSSECURE", "1", 1) < 0) {
++ log_error_errno(errno, "Failed to set environment variable LESSSECURE: %m");
++ _exit(EXIT_FAILURE);
++ }
++ }
++
+ if (pager_args) {
+ r = loop_write(exe_name_pipe[1], pager_args[0], strlen(pager_args[0]) + 1, false);
+ if (r < 0) {
diff --git a/poky/meta/recipes-core/systemd/systemd/CVE-2023-26604-2.patch b/poky/meta/recipes-core/systemd/systemd/CVE-2023-26604-2.patch
new file mode 100644
index 0000000000..95da7cfad6
--- /dev/null
+++ b/poky/meta/recipes-core/systemd/systemd/CVE-2023-26604-2.patch
@@ -0,0 +1,264 @@
+From 1b5b507cd2d1d7a2b053151abb548475ad9c5c3b Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Zbigniew=20J=C4=99drzejewski-Szmek?= <zbyszek@in.waw.pl>
+Date: Mon, 12 Oct 2020 18:57:32 +0200
+Subject: [PATCH] test-login: always test sd_pid_get_owner_uid(), modernize
+
+A long time some function only worked when in a session, and the test
+didn't execute them when sd_pid_get_session() failed. Let's always call
+them to increase coverage.
+
+While at it, let's test for ==0 not >=0 where we don't expect the function
+to return anything except 0 or error.
+
+CVE: CVE-2023-26604
+Upstream-Status: Backport [https://github.com/systemd/systemd/pull/17270/commits/1b5b507cd2d1d7a2b053151abb548475ad9c5c3b.patch]
+Comments: Hunk not refreshed
+Signed-off-by: rajmohan r <rajmohan.r@kpit.com>
+---
+ src/libsystemd/sd-login/test-login.c | 131 ++++++++++++++-------------
+ 1 file changed, 70 insertions(+), 61 deletions(-)
+
+diff --git a/src/libsystemd/sd-login/test-login.c b/src/libsystemd/sd-login/test-login.c
+index c0c77e04714b..0494fc77ba18 100644
+--- a/src/libsystemd/sd-login/test-login.c
++++ b/src/libsystemd/sd-login/test-login.c
+@@ -5,21 +5,22 @@
+ #include "sd-login.h"
+
+ #include "alloc-util.h"
++#include "errno-list.h"
+ #include "fd-util.h"
+ #include "format-util.h"
+ #include "log.h"
+ #include "string-util.h"
+ #include "strv.h"
+ #include "time-util.h"
+-#include "util.h"
++#include "user-util.h"
+
+ static char* format_uids(char **buf, uid_t* uids, int count) {
+- int pos = 0, k, inc;
++ int pos = 0, inc;
+ size_t size = (DECIMAL_STR_MAX(uid_t) + 1) * count + 1;
+
+ assert_se(*buf = malloc(size));
+
+- for (k = 0; k < count; k++) {
++ for (int k = 0; k < count; k++) {
+ sprintf(*buf + pos, "%s"UID_FMT"%n", k > 0 ? " " : "", uids[k], &inc);
+ pos += inc;
+ }
+@@ -30,6 +31,10 @@ static char* format_uids(char **buf, uid_t* uids, int count) {
+ return *buf;
+ }
+
++static const char *e(int r) {
++ return r == 0 ? "OK" : errno_to_name(r);
++}
++
+ static void test_login(void) {
+ _cleanup_close_pair_ int pair[2] = { -1, -1 };
+ _cleanup_free_ char *pp = NULL, *qq = NULL,
+@@ -39,65 +44,71 @@ static void test_login(void) {
+ *seat = NULL, *session = NULL,
+ *unit = NULL, *user_unit = NULL, *slice = NULL;
+ int r;
+- uid_t u, u2;
+- char *t, **seats, **sessions;
++ uid_t u, u2 = UID_INVALID;
++ char *t, **seats = NULL, **sessions = NULL;
+
+ r = sd_pid_get_unit(0, &unit);
+- assert_se(r >= 0 || r == -ENODATA);
+- log_info("sd_pid_get_unit(0, …) → \"%s\"", strna(unit));
++ log_info("sd_pid_get_unit(0, …) → %s / \"%s\"", e(r), strnull(unit));
++ assert_se(IN_SET(r, 0, -ENODATA));
+
+ r = sd_pid_get_user_unit(0, &user_unit);
+- assert_se(r >= 0 || r == -ENODATA);
+- log_info("sd_pid_get_user_unit(0, …) → \"%s\"", strna(user_unit));
++ log_info("sd_pid_get_user_unit(0, …) → %s / \"%s\"", e(r), strnull(user_unit));
++ assert_se(IN_SET(r, 0, -ENODATA));
+
+ r = sd_pid_get_slice(0, &slice);
+- assert_se(r >= 0 || r == -ENODATA);
+- log_info("sd_pid_get_slice(0, …) → \"%s\"", strna(slice));
++ log_info("sd_pid_get_slice(0, …) → %s / \"%s\"", e(r), strnull(slice));
++ assert_se(IN_SET(r, 0, -ENODATA));
++
++ r = sd_pid_get_owner_uid(0, &u2);
++ log_info("sd_pid_get_owner_uid(0, …) → %s / "UID_FMT, e(r), u2);
++ assert_se(IN_SET(r, 0, -ENODATA));
+
+ r = sd_pid_get_session(0, &session);
+- if (r < 0) {
+- log_warning_errno(r, "sd_pid_get_session(0, …): %m");
+- if (r == -ENODATA)
+- log_info("Seems we are not running in a session, skipping some tests.");
+- } else {
+- log_info("sd_pid_get_session(0, …) → \"%s\"", session);
+-
+- assert_se(sd_pid_get_owner_uid(0, &u2) == 0);
+- log_info("sd_pid_get_owner_uid(0, …) → "UID_FMT, u2);
+-
+- assert_se(sd_pid_get_cgroup(0, &cgroup) == 0);
+- log_info("sd_pid_get_cgroup(0, …) → \"%s\"", cgroup);
+-
+- r = sd_uid_get_display(u2, &display_session);
+- assert_se(r >= 0 || r == -ENODATA);
+- log_info("sd_uid_get_display("UID_FMT", …) → \"%s\"",
+- u2, strnull(display_session));
+-
+- assert_se(socketpair(AF_UNIX, SOCK_STREAM, 0, pair) == 0);
+- sd_peer_get_session(pair[0], &pp);
+- sd_peer_get_session(pair[1], &qq);
+- assert_se(streq_ptr(pp, qq));
+-
+- r = sd_uid_get_sessions(u2, false, &sessions);
++ log_info("sd_pid_get_session(0, …) → %s / \"%s\"", e(r), strnull(session));
++
++ r = sd_pid_get_cgroup(0, &cgroup);
++ log_info("sd_pid_get_cgroup(0, …) → %s / \"%s\"", e(r), strnull(cgroup));
++ assert_se(r == 0);
++
++ r = sd_uid_get_display(u2, &display_session);
++ log_info("sd_uid_get_display("UID_FMT", …) → %s / \"%s\"", u2, e(r), strnull(display_session));
++ if (u2 == UID_INVALID)
++ assert_se(r == -EINVAL);
++ else
++ assert_se(IN_SET(r, 0, -ENODATA));
++
++ assert_se(socketpair(AF_UNIX, SOCK_STREAM, 0, pair) == 0);
++ sd_peer_get_session(pair[0], &pp);
++ sd_peer_get_session(pair[1], &qq);
++ assert_se(streq_ptr(pp, qq));
++
++ r = sd_uid_get_sessions(u2, false, &sessions);
++ assert_se(t = strv_join(sessions, " "));
++ log_info("sd_uid_get_sessions("UID_FMT", …) → %s \"%s\"", u2, e(r), t);
++ if (u2 == UID_INVALID)
++ assert_se(r == -EINVAL);
++ else {
+ assert_se(r >= 0);
+ assert_se(r == (int) strv_length(sessions));
+- assert_se(t = strv_join(sessions, " "));
+- strv_free(sessions);
+- log_info("sd_uid_get_sessions("UID_FMT", …) → [%i] \"%s\"", u2, r, t);
+- free(t);
++ }
++ sessions = strv_free(sessions);
++ free(t);
+
+- assert_se(r == sd_uid_get_sessions(u2, false, NULL));
++ assert_se(r == sd_uid_get_sessions(u2, false, NULL));
+
+- r = sd_uid_get_seats(u2, false, &seats);
++ r = sd_uid_get_seats(u2, false, &seats);
++ assert_se(t = strv_join(seats, " "));
++ log_info("sd_uid_get_seats("UID_FMT", …) → %s \"%s\"", u2, e(r), t);
++ if (u2 == UID_INVALID)
++ assert_se(r == -EINVAL);
++ else {
+ assert_se(r >= 0);
+ assert_se(r == (int) strv_length(seats));
+- assert_se(t = strv_join(seats, " "));
+- strv_free(seats);
+- log_info("sd_uid_get_seats("UID_FMT", …) → [%i] \"%s\"", u2, r, t);
+- free(t);
+-
+- assert_se(r == sd_uid_get_seats(u2, false, NULL));
+ }
++ seats = strv_free(seats);
++ free(t);
++
++ assert_se(r == sd_uid_get_seats(u2, false, NULL));
+
+ if (session) {
+ r = sd_session_is_active(session);
+@@ -109,7 +120,7 @@ static void test_login(void) {
+ log_info("sd_session_is_remote(\"%s\") → %s", session, yes_no(r));
+
+ r = sd_session_get_state(session, &state);
+- assert_se(r >= 0);
++ assert_se(r == 0);
+ log_info("sd_session_get_state(\"%s\") → \"%s\"", session, state);
+
+ assert_se(sd_session_get_uid(session, &u) >= 0);
+@@ -123,16 +134,16 @@ static void test_login(void) {
+ log_info("sd_session_get_class(\"%s\") → \"%s\"", session, class);
+
+ r = sd_session_get_display(session, &display);
+- assert_se(r >= 0 || r == -ENODATA);
++ assert_se(IN_SET(r, 0, -ENODATA));
+ log_info("sd_session_get_display(\"%s\") → \"%s\"", session, strna(display));
+
+ r = sd_session_get_remote_user(session, &remote_user);
+- assert_se(r >= 0 || r == -ENODATA);
++ assert_se(IN_SET(r, 0, -ENODATA));
+ log_info("sd_session_get_remote_user(\"%s\") → \"%s\"",
+ session, strna(remote_user));
+
+ r = sd_session_get_remote_host(session, &remote_host);
+- assert_se(r >= 0 || r == -ENODATA);
++ assert_se(IN_SET(r, 0, -ENODATA));
+ log_info("sd_session_get_remote_host(\"%s\") → \"%s\"",
+ session, strna(remote_host));
+
+@@ -161,7 +172,7 @@ static void test_login(void) {
+ assert_se(r == -ENODATA);
+ }
+
+- assert_se(sd_uid_get_state(u, &state2) >= 0);
++ assert_se(sd_uid_get_state(u, &state2) == 0);
+ log_info("sd_uid_get_state("UID_FMT", …) → %s", u, state2);
+ }
+
+@@ -173,11 +184,11 @@ static void test_login(void) {
+ assert_se(sd_uid_is_on_seat(u, 0, seat) > 0);
+
+ r = sd_seat_get_active(seat, &session2, &u2);
+- assert_se(r >= 0);
++ assert_se(r == 0);
+ log_info("sd_seat_get_active(\"%s\", …) → \"%s\", "UID_FMT, seat, session2, u2);
+
+ r = sd_uid_is_on_seat(u, 1, seat);
+- assert_se(r >= 0);
++ assert_se(IN_SET(r, 0, 1));
+ assert_se(!!r == streq(session, session2));
+
+ r = sd_seat_get_sessions(seat, &sessions, &uids, &n);
+@@ -185,8 +196,8 @@ static void test_login(void) {
+ assert_se(r == (int) strv_length(sessions));
+ assert_se(t = strv_join(sessions, " "));
+ strv_free(sessions);
+- log_info("sd_seat_get_sessions(\"%s\", …) → %i, \"%s\", [%i] {%s}",
+- seat, r, t, n, format_uids(&buf, uids, n));
++ log_info("sd_seat_get_sessions(\"%s\", …) → %s, \"%s\", [%u] {%s}",
++ seat, e(r), t, n, format_uids(&buf, uids, n));
+ free(t);
+
+ assert_se(sd_seat_get_sessions(seat, NULL, NULL, NULL) == r);
+@@ -204,7 +215,7 @@ static void test_login(void) {
+
+ r = sd_seat_get_active(NULL, &t, NULL);
+ assert_se(IN_SET(r, 0, -ENODATA));
+- log_info("sd_seat_get_active(NULL, …) (active session on current seat) → %s", strnull(t));
++ log_info("sd_seat_get_active(NULL, …) (active session on current seat) → %s / \"%s\"", e(r), strnull(t));
+ free(t);
+
+ r = sd_get_sessions(&sessions);
+@@ -244,13 +255,11 @@ static void test_login(void) {
+
+ static void test_monitor(void) {
+ sd_login_monitor *m = NULL;
+- unsigned n;
+ int r;
+
+- r = sd_login_monitor_new("session", &m);
+- assert_se(r >= 0);
++ assert_se(sd_login_monitor_new("session", &m) == 0);
+
+- for (n = 0; n < 5; n++) {
++ for (unsigned n = 0; n < 5; n++) {
+ struct pollfd pollfd = {};
+ usec_t timeout, nw;
diff --git a/poky/meta/recipes-core/systemd/systemd/CVE-2023-26604-3.patch b/poky/meta/recipes-core/systemd/systemd/CVE-2023-26604-3.patch
new file mode 100644
index 0000000000..f02f62b772
--- /dev/null
+++ b/poky/meta/recipes-core/systemd/systemd/CVE-2023-26604-3.patch
@@ -0,0 +1,182 @@
+From 0a42426d797406b4b01a0d9c13bb759c2629d108 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Zbigniew=20J=C4=99drzejewski-Szmek?= <zbyszek@in.waw.pl>
+Date: Wed, 7 Oct 2020 11:15:05 +0200
+Subject: [PATCH] pager: make pager secure when under euid is changed or
+ explicitly requested
+
+The variable is renamed to SYSTEMD_PAGERSECURE (because it's not just about
+less now), and we automatically enable secure mode in certain cases, but not
+otherwise.
+
+This approach is more nuanced, but should provide a better experience for
+users:
+
+- Previusly we would set LESSSECURE=1 and trust the pager to make use of
+ it. But this has an effect only on less. We need to not start pagers which
+ are insecure when in secure mode. In particular more is like that and is a
+ very popular pager.
+
+- We don't enable secure mode always, which means that those other pagers can
+ reasonably used.
+
+- We do the right thing by default, but the user has ultimate control by
+ setting SYSTEMD_PAGERSECURE.
+
+Fixes #5666.
+
+v2:
+- also check $PKEXEC_UID
+
+v3:
+- use 'sd_pid_get_owner_uid() != geteuid()' as the condition
+
+CVE: CVE-2023-26604
+Upstream-Status: Backport [https://github.com/systemd/systemd/pull/17270/commits/0a42426d797406b4b01a0d9c13bb759c2629d108]
+Comments: Hunk refreshed
+Signed-off-by: rajmohan r <rajmohan.r@kpit.com>
+---
+ man/less-variables.xml | 30 +++++++++++++++----
+ src/shared/pager.c | 63 ++++++++++++++++++++++++++-------------
+ 2 files changed, 66 insertions(+), 27 deletions(-)
+
+diff --git a/man/less-variables.xml b/man/less-variables.xml
+index c52511c..049e9f7 100644
+--- a/man/less-variables.xml
++++ b/man/less-variables.xml
+@@ -65,12 +65,30 @@
+ </varlistentry>
+
+ <varlistentry id='lesssecure'>
+- <term><varname>$SYSTEMD_LESSSECURE</varname></term>
+-
+- <listitem><para>Takes a boolean argument. Overrides the <varname>$LESSSECURE</varname> environment
+- variable when invoking the pager, which controls the "secure" mode of less (which disables commands
+- such as <literal>|</literal> which allow to easily shell out to external command lines). By default
+- less secure mode is enabled, with this setting it may be disabled.</para></listitem>
++ <term><varname>$SYSTEMD_PAGERSECURE</varname></term>
++
++ <listitem><para>Takes a boolean argument. When true, the "secure" mode of the pager is enabled; if
++ false, disabled. If <varname>$SYSTEMD_PAGERSECURE</varname> is not set at all, secure mode is enabled
++ if the effective UID is not the same as the owner of the login session, see <citerefentry
++ project='man-pages'><refentrytitle>geteuid</refentrytitle><manvolnum>2</manvolnum></citerefentry> and
++ <citerefentry><refentrytitle>sd_pid_get_owner_uid</refentrytitle><manvolnum>3</manvolnum></citerefentry>.
++ In secure mode, <option>LESSSECURE=1</option> will be set when invoking the pager, and the pager shall
++ disable commands that open or create new files or start new subprocesses. When
++ <varname>$SYSTEMD_PAGERSECURE</varname> is not set at all, pagers which are not known to implement
++ secure mode will not be used. (Currently only
++ <citerefentry><refentrytitle>less</refentrytitle><manvolnum>1</manvolnum></citerefentry> implements
++ secure mode.)</para>
++
++ <para>Note: when commands are invoked with elevated privileges, for example under <citerefentry
++ project='man-pages'><refentrytitle>sudo</refentrytitle><manvolnum>8</manvolnum></citerefentry> or
++ <citerefentry
++ project='die-net'><refentrytitle>pkexec</refentrytitle><manvolnum>1</manvolnum></citerefentry>, care
++ must be taken to ensure that unintended interactive features are not enabled. "Secure" mode for the
++ pager may be enabled automatically as describe above. Setting <varname>SYSTEMD_PAGERSECURE=0</varname>
++ or not removing it from the inherited environment allows the user to invoke arbitrary commands. Note
++ that if the <varname>$SYSTEMD_PAGER</varname> or <varname>$PAGER</varname> variables are to be
++ honoured, <varname>$SYSTEMD_PAGERSECURE</varname> must be set too. It might be reasonable to completly
++ disable the pager using <option>--no-pager</option> instead.</para></listitem>
+ </varlistentry>
+
+ <varlistentry id='colors'>
+diff --git a/src/shared/pager.c b/src/shared/pager.c
+index a3b6576..a72d9ea 100644
+--- a/src/shared/pager.c
++++ b/src/shared/pager.c
+@@ -8,6 +8,8 @@
+ #include <sys/prctl.h>
+ #include <unistd.h>
+
++#include "sd-login.h"
++
+ #include "copy.h"
+ #include "env-util.h"
+ #include "fd-util.h"
+@@ -164,25 +166,42 @@ int pager_open(PagerFlags flags) {
+ }
+
+ /* People might invoke us from sudo, don't needlessly allow less to be a way to shell out
+- * privileged stuff. */
+- r = getenv_bool("SYSTEMD_LESSSECURE");
+- if (r == 0) { /* Remove env var if off */
+- if (unsetenv("LESSSECURE") < 0) {
+- log_error_errno(errno, "Failed to uset environment variable LESSSECURE: %m");
+- _exit(EXIT_FAILURE);
+- }
+- } else {
+- /* Set env var otherwise */
++ * privileged stuff. If the user set $SYSTEMD_PAGERSECURE, trust their configuration of the
++ * pager. If they didn't, use secure mode when under euid is changed. If $SYSTEMD_PAGERSECURE
++ * wasn't explicitly set, and we autodetect the need for secure mode, only use the pager we
++ * know to be good. */
++ int use_secure_mode = getenv_bool("SYSTEMD_PAGERSECURE");
++ bool trust_pager = use_secure_mode >= 0;
++ if (use_secure_mode == -ENXIO) {
++ uid_t uid;
++
++ r = sd_pid_get_owner_uid(0, &uid);
+ if (r < 0)
+- log_warning_errno(r, "Unable to parse $SYSTEMD_LESSSECURE, ignoring: %m");
++ log_debug_errno(r, "sd_pid_get_owner_uid() failed, enabling pager secure mode: %m");
+
+- if (setenv("LESSSECURE", "1", 1) < 0) {
+- log_error_errno(errno, "Failed to set environment variable LESSSECURE: %m");
+- _exit(EXIT_FAILURE);
+- }
++ use_secure_mode = r < 0 || uid != geteuid();
++
++ } else if (use_secure_mode < 0) {
++ log_warning_errno(use_secure_mode, "Unable to parse $SYSTEMD_PAGERSECURE, assuming true: %m");
++ use_secure_mode = true;
+ }
+
+- if (pager_args) {
++ /* We generally always set variables used by less, even if we end up using a different pager.
++ * They shouldn't hurt in any case, and ideally other pagers would look at them too. */
++ if (use_secure_mode)
++ r = setenv("LESSSECURE", "1", 1);
++ else
++ r = unsetenv("LESSSECURE");
++ if (r < 0) {
++ log_error_errno(errno, "Failed to adjust environment variable LESSSECURE: %m");
++ _exit(EXIT_FAILURE);
++ }
++
++ if (trust_pager && pager_args) { /* The pager config might be set globally, and we cannot
++ * know if the user adjusted it to be appropriate for the
++ * secure mode. Thus, start the pager specified through
++ * envvars only when $SYSTEMD_PAGERSECURE was explicitly set
++ * as well. */
+ r = loop_write(exe_name_pipe[1], pager_args[0], strlen(pager_args[0]) + 1, false);
+ if (r < 0) {
+ log_error_errno(r, "Failed to write pager name to socket: %m");
+@@ -194,13 +213,14 @@ int pager_open(PagerFlags flags) {
+ "Failed to execute '%s', using fallback pagers: %m", pager_args[0]);
+ }
+
+- /* Debian's alternatives command for pagers is
+- * called 'pager'. Note that we do not call
+- * sensible-pagers here, since that is just a
+- * shell script that implements a logic that
+- * is similar to this one anyway, but is
+- * Debian-specific. */
++ /* Debian's alternatives command for pagers is called 'pager'. Note that we do not call
++ * sensible-pagers here, since that is just a shell script that implements a logic that is
++ * similar to this one anyway, but is Debian-specific. */
+ FOREACH_STRING(exe, "pager", "less", "more") {
++ /* Only less implements secure mode right now. */
++ if (use_secure_mode && !streq(exe, "less"))
++ continue;
++
+ r = loop_write(exe_name_pipe[1], exe, strlen(exe) + 1, false);
+ if (r < 0) {
+ log_error_errno(r, "Failed to write pager name to socket: %m");
+@@ -211,6 +231,7 @@ int pager_open(PagerFlags flags) {
+ "Failed to execute '%s', using next fallback pager: %m", exe);
+ }
+
++ /* Our builtin is also very secure. */
+ r = loop_write(exe_name_pipe[1], "(built-in)", strlen("(built-in)") + 1, false);
+ if (r < 0) {
+ log_error_errno(r, "Failed to write pager name to socket: %m");
diff --git a/poky/meta/recipes-core/systemd/systemd/CVE-2023-26604-4.patch b/poky/meta/recipes-core/systemd/systemd/CVE-2023-26604-4.patch
new file mode 100644
index 0000000000..bc6b0a91c2
--- /dev/null
+++ b/poky/meta/recipes-core/systemd/systemd/CVE-2023-26604-4.patch
@@ -0,0 +1,32 @@
+From b8f736b30e20a2b44e7c34bb4e43b0d97ae77e3c Mon Sep 17 00:00:00 2001
+From: Lennart Poettering <lennart@poettering.net>
+Date: Thu, 15 Oct 2020 10:54:48 +0200
+Subject: [PATCH] pager: lets check SYSTEMD_PAGERSECURE with secure_getenv()
+
+I can't think of any real vulnerability about this, but it still feels
+better to check a variable with "secure" in its name with
+secure_getenv() rather than plain getenv().
+
+Paranoia FTW!
+
+CVE: CVE-2023-26604
+Upstream-Status: Backport [https://github.com/systemd/systemd/pull/17359/commits/b8f736b30e20a2b44e7c34bb4e43b0d97ae77e3c]
+Comments: Hunk refreshed
+Signed-off-by: rajmohan r <rajmohan.r@kpit.com>
+---
+ src/shared/pager.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/src/shared/pager.c b/src/shared/pager.c
+index a72d9ea..250519c 100644
+--- a/src/shared/pager.c
++++ b/src/shared/pager.c
+@@ -170,7 +170,7 @@ int pager_open(PagerFlags flags) {
+ * pager. If they didn't, use secure mode when under euid is changed. If $SYSTEMD_PAGERSECURE
+ * wasn't explicitly set, and we autodetect the need for secure mode, only use the pager we
+ * know to be good. */
+- int use_secure_mode = getenv_bool("SYSTEMD_PAGERSECURE");
++ int use_secure_mode = getenv_bool_secure("SYSTEMD_PAGERSECURE");
+ bool trust_pager = use_secure_mode >= 0;
+ if (use_secure_mode == -ENXIO) {
+ uid_t uid;
diff --git a/poky/meta/recipes-core/systemd/systemd/systemd-pager.sh b/poky/meta/recipes-core/systemd/systemd/systemd-pager.sh
new file mode 100644
index 0000000000..86e3e0ab78
--- /dev/null
+++ b/poky/meta/recipes-core/systemd/systemd/systemd-pager.sh
@@ -0,0 +1,7 @@
+# Systemd expects a color-capable pager, but the less provided
+# by busybox is not one. This makes many interactions with systemd
+# pretty annoying. As a workaround we disable the systemd pager if
+# less is not the GNU version.
+if ! less -V > /dev/null 2>&1 ; then
+ export SYSTEMD_PAGER=
+fi
diff --git a/poky/meta/recipes-core/systemd/systemd_244.5.bb b/poky/meta/recipes-core/systemd/systemd_244.5.bb
index f3e5395465..bd66d82932 100644
--- a/poky/meta/recipes-core/systemd/systemd_244.5.bb
+++ b/poky/meta/recipes-core/systemd/systemd_244.5.bb
@@ -18,6 +18,7 @@ SRC_URI += "file://touchscreen.rules \
file://00-create-volatile.conf \
file://init \
file://99-default.preset \
+ file://systemd-pager.sh \
file://0001-binfmt-Don-t-install-dependency-links-at-install-tim.patch \
file://0003-implment-systemd-sysv-install-for-OE.patch \
file://CVE-2021-33910.patch \
@@ -33,6 +34,11 @@ SRC_URI += "file://touchscreen.rules \
file://CVE-2021-3997-1.patch \
file://CVE-2021-3997-2.patch \
file://CVE-2021-3997-3.patch \
+ file://CVE-2022-3821.patch \
+ file://CVE-2023-26604-1.patch \
+ file://CVE-2023-26604-2.patch \
+ file://CVE-2023-26604-3.patch \
+ file://CVE-2023-26604-4.patch \
"
# patches needed by musl
@@ -213,7 +219,7 @@ rootlibexecdir = "${rootprefix}/lib"
EXTRA_OEMESON += "-Dlink-udev-shared=false"
EXTRA_OEMESON += "-Dnobody-user=nobody \
- -Dnobody-group=nobody \
+ -Dnobody-group=nogroup \
-Drootlibdir=${rootlibdir} \
-Drootprefix=${rootprefix} \
-Ddefault-locale=C \
@@ -316,6 +322,9 @@ do_install() {
# install default policy for presets
# https://www.freedesktop.org/wiki/Software/systemd/Preset/#howto
install -Dm 0644 ${WORKDIR}/99-default.preset ${D}${systemd_unitdir}/system-preset/99-default.preset
+
+ # add a profile fragment to disable systemd pager with busybox less
+ install -Dm 0644 ${WORKDIR}/systemd-pager.sh ${D}${sysconfdir}/profile.d/systemd-pager.sh
}
python populate_packages_prepend (){
@@ -403,9 +412,9 @@ FILES_${PN}-binfmt = "${sysconfdir}/binfmt.d/ \
${rootlibexecdir}/systemd/systemd-binfmt \
${systemd_unitdir}/system/proc-sys-fs-binfmt_misc.* \
${systemd_unitdir}/system/systemd-binfmt.service"
-RRECOMMENDS_${PN}-binfmt = "kernel-module-binfmt-misc"
+RRECOMMENDS_${PN}-binfmt = "${@bb.utils.contains('PACKAGECONFIG', 'binfmt', 'kernel-module-binfmt-misc', '', d)}"
-RRECOMMENDS_${PN}-vconsole-setup = "kbd kbd-consolefonts kbd-keymaps"
+RRECOMMENDS_${PN}-vconsole-setup = "${@bb.utils.contains('PACKAGECONFIG', 'vconsole', 'kbd kbd-consolefonts kbd-keymaps', '', d)}"
FILES_${PN}-journal-gatewayd = "${rootlibexecdir}/systemd/systemd-journal-gatewayd \
@@ -538,6 +547,7 @@ FILES_${PN} = " ${base_bindir}/* \
${sysconfdir}/dbus-1/ \
${sysconfdir}/modules-load.d/ \
${sysconfdir}/pam.d/ \
+ ${sysconfdir}/profile.d/ \
${sysconfdir}/sysctl.d/ \
${sysconfdir}/systemd/ \
${sysconfdir}/tmpfiles.d/ \
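
The two RRECOMMENDS lines above use bb.utils.contains() so the extra runtime recommendations are only emitted when the matching feature is actually enabled in PACKAGECONFIG. The real helper takes a variable name plus the datastore d, but its behaviour boils down to a space-separated membership test, roughly as below (plain Python approximation, not BitBake's implementation; the PACKAGECONFIG value is made up):

def contains(variable_value, checkvalues, truevalue, falsevalue):
    # Every word of checkvalues must appear in the space-separated
    # variable value for truevalue to be returned.
    present = set(variable_value.split())
    wanted = set(checkvalues.split())
    return truevalue if wanted <= present else falsevalue

packageconfig = "binfmt pam ldconfig"   # hypothetical PACKAGECONFIG for one build
print(contains(packageconfig, "binfmt", "kernel-module-binfmt-misc", ""))
# -> kernel-module-binfmt-misc
print(contains(packageconfig, "vconsole", "kbd kbd-consolefonts kbd-keymaps", ""))
# -> "" (no vconsole feature enabled, so no kbd recommendations)
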
diff --git a/poky/meta/recipes-devtools/binutils/binutils-2.34.inc b/poky/meta/recipes-devtools/binutils/binutils-2.34.inc
index ff0d467132..713e428a3e 100644
--- a/poky/meta/recipes-devtools/binutils/binutils-2.34.inc
+++ b/poky/meta/recipes-devtools/binutils/binutils-2.34.inc
@@ -24,7 +24,7 @@ BRANCH ?= "binutils-2_34-branch"
UPSTREAM_CHECK_GITTAGREGEX = "binutils-(?P<pver>\d+_(\d_?)*)"
-SRCREV ?= "d4b50999b3b287b5f984ade2f8734aa8c9359440"
+SRCREV ?= "c4e78c0868a22971680217a41fdb73516a26813d"
BINUTILS_GIT_URI ?= "git://sourceware.org/git/binutils-gdb.git;branch=${BRANCH};protocol=git"
SRC_URI = "\
${BINUTILS_GIT_URI} \
diff --git a/poky/meta/recipes-devtools/binutils/binutils/CVE-2020-16593.patch b/poky/meta/recipes-devtools/binutils/binutils/CVE-2020-16593.patch
index cbe4a50507..c7c7829261 100644
--- a/poky/meta/recipes-devtools/binutils/binutils/CVE-2020-16593.patch
+++ b/poky/meta/recipes-devtools/binutils/binutils/CVE-2020-16593.patch
@@ -199,6 +199,6 @@ Index: git/bfd/ChangeLog
+ * dwarf2.c (scan_unit_for_symbols): Wrap overlong lines. Don't
+ strdup(0).
+
- 2020-02-19 H.J. Lu <hongjiu.lu@intel.com>
+ 2021-05-03 Alan Modra <amodra@gmail.com>
- PR binutils/25355
+ PR 27755
diff --git a/poky/meta/recipes-devtools/binutils/binutils/CVE-2021-3549.patch b/poky/meta/recipes-devtools/binutils/binutils/CVE-2021-3549.patch
index 4391db340a..5f56dd7696 100644
--- a/poky/meta/recipes-devtools/binutils/binutils/CVE-2021-3549.patch
+++ b/poky/meta/recipes-devtools/binutils/binutils/CVE-2021-3549.patch
@@ -7,31 +7,49 @@ Adds missing sanity checks for avr device info note, to avoid
potential buffer overflows. Uses bfd_malloc_and_get_section for
sanity checking section size.
- PR 27290
- PR 27293
- PR 27295
- * od-elf32_avr.c (elf32_avr_get_note_section_contents): Formatting.
- Use bfd_malloc_and_get_section.
- (elf32_avr_get_note_desc): Formatting. Return descsz. Sanity
- check namesz. Return NULL if descsz is too small. Ensure
- string table is terminated.
- (elf32_avr_get_device_info): Formatting. Add note_size param.
- Sanity check note.
- (elf32_avr_dump_mem_usage): Adjust to suit.
+ PR 27290
+ PR 27293
+ PR 27295
+ * od-elf32_avr.c (elf32_avr_get_note_section_contents): Formatting.
+ Use bfd_malloc_and_get_section.
+ (elf32_avr_get_note_desc): Formatting. Return descsz. Sanity
+ check namesz. Return NULL if descsz is too small. Ensure
+ string table is terminated.
+ (elf32_avr_get_device_info): Formatting. Add note_size param.
+ Sanity check note.
+ (elf32_avr_dump_mem_usage): Adjust to suit.
Upstream-Status: Backport
CVE: CVE-2021-3549
Signed-of-by: Armin Kuster <akuster@mvista.com>
---
- binutils/ChangeLog | 14 +++++++++
- binutils/od-elf32_avr.c | 66 ++++++++++++++++++++++++++---------------
- 2 files changed, 56 insertions(+), 24 deletions(-)
-
-Index: git/binutils/od-elf32_avr.c
-===================================================================
---- git.orig/binutils/od-elf32_avr.c
-+++ git/binutils/od-elf32_avr.c
+diff --git a/binutils/ChangeLog b/binutils/ChangeLog
+index 1e9a96c9bb6..02e5019204e 100644
+--- a/binutils/ChangeLog
++++ b/binutils/ChangeLog
+@@ -1,3 +1,17 @@
++2021-02-11 Alan Modra <amodra@gmail.com>
++
++ PR 27290
++ PR 27293
++ PR 27295
++ * od-elf32_avr.c (elf32_avr_get_note_section_contents): Formatting.
++ Use bfd_malloc_and_get_section.
++ (elf32_avr_get_note_desc): Formatting. Return descsz. Sanity
++ check namesz. Return NULL if descsz is too small. Ensure
++ string table is terminated.
++ (elf32_avr_get_device_info): Formatting. Add note_size param.
++ Sanity check note.
++ (elf32_avr_dump_mem_usage): Adjust to suit.
++
+ 2020-03-25 H.J. Lu <hongjiu.lu@intel.com>
+
+ * ar.c (main): Update bfd_plugin_set_program_name call.
+diff --git a/binutils/od-elf32_avr.c b/binutils/od-elf32_avr.c
+index 5ec99957fe9..1d32bce918e 100644
+--- a/binutils/od-elf32_avr.c
++++ b/binutils/od-elf32_avr.c
@@ -77,23 +77,29 @@ elf32_avr_filter (bfd *abfd)
return bfd_get_flavour (abfd) == bfd_target_elf_flavour;
}
@@ -70,7 +88,7 @@ Index: git/binutils/od-elf32_avr.c
{
Elf_External_Note *xnp = (Elf_External_Note *) contents;
Elf_Internal_Note in;
-@@ -107,42 +113,54 @@ static char* elf32_avr_get_note_desc (bf
+@@ -107,42 +113,54 @@ static char* elf32_avr_get_note_desc (bfd *abfd, char *contents,
if (in.namesz > contents - in.namedata + size)
return NULL;
@@ -163,25 +181,3 @@ Index: git/binutils/od-elf32_avr.c
}
elf32_avr_get_memory_usage (abfd, &text_usage, &data_usage,
-Index: git/binutils/ChangeLog
-===================================================================
---- git.orig/binutils/ChangeLog
-+++ git/binutils/ChangeLog
-@@ -1,3 +1,17 @@
-+2021-02-11 Alan Modra <amodra@gmail.com>
-+
-+ PR 27290
-+ PR 27293
-+ PR 27295
-+ * od-elf32_avr.c (elf32_avr_get_note_section_contents): Formatting.
-+ Use bfd_malloc_and_get_section.
-+ (elf32_avr_get_note_desc): Formatting. Return descsz. Sanity
-+ check namesz. Return NULL if descsz is too small. Ensure
-+ string table is terminated.
-+ (elf32_avr_get_device_info): Formatting. Add note_size param.
-+ Sanity check note.
-+ (elf32_avr_dump_mem_usage): Adjust to suit.
-+
- 2020-02-01 Nick Clifton <nickc@redhat.com>
-
- * configure: Regenerate.
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0001-Backport-fix-for-PR-tree-optimization-97236-fix-bad-.patch b/poky/meta/recipes-devtools/gcc/gcc-9.3/0001-Backport-fix-for-PR-tree-optimization-97236-fix-bad-.patch
deleted file mode 100644
index dc1039dcc8..0000000000
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0001-Backport-fix-for-PR-tree-optimization-97236-fix-bad-.patch
+++ /dev/null
@@ -1,119 +0,0 @@
-Upstream-Status: Backport [https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=97b668f9a8c6ec565c278a60e7d1492a6932e409]
-Signed-off-by: Jon Mason <jon.mason@arm.com>
-
-From 97b668f9a8c6ec565c278a60e7d1492a6932e409 Mon Sep 17 00:00:00 2001
-From: Matthias Klose <doko@ubuntu.com>
-Date: Tue, 6 Oct 2020 13:41:37 +0200
-Subject: [PATCH] Backport fix for PR/tree-optimization/97236 - fix bad use of
- VMAT_CONTIGUOUS
-
-This avoids using VMAT_CONTIGUOUS with single-element interleaving
-when using V1mode vectors. Instead keep VMAT_ELEMENTWISE but
-continue to avoid load-lanes and gathers.
-
-2020-10-01 Richard Biener <rguenther@suse.de>
-
- PR tree-optimization/97236
- * tree-vect-stmts.c (get_group_load_store_type): Keep
- VMAT_ELEMENTWISE for single-element vectors.
-
- * gcc.dg/vect/pr97236.c: New testcase.
-
-(cherry picked from commit 1ab88985631dd2c5a5e3b5c0dce47cf8b6ed2f82)
----
- gcc/testsuite/gcc.dg/vect/pr97236.c | 43 +++++++++++++++++++++++++++++
- gcc/tree-vect-stmts.c | 20 ++++++--------
- 2 files changed, 52 insertions(+), 11 deletions(-)
- create mode 100644 gcc/testsuite/gcc.dg/vect/pr97236.c
-
-diff --git a/gcc/testsuite/gcc.dg/vect/pr97236.c b/gcc/testsuite/gcc.dg/vect/pr97236.c
-new file mode 100644
-index 000000000000..9d3dc20d953d
---- /dev/null
-+++ b/gcc/testsuite/gcc.dg/vect/pr97236.c
-@@ -0,0 +1,43 @@
-+typedef unsigned char __uint8_t;
-+typedef __uint8_t uint8_t;
-+typedef struct plane_t {
-+ uint8_t *p_pixels;
-+ int i_lines;
-+ int i_pitch;
-+} plane_t;
-+
-+typedef struct {
-+ plane_t p[5];
-+} picture_t;
-+
-+#define N 4
-+
-+void __attribute__((noipa))
-+picture_Clone(picture_t *picture, picture_t *res)
-+{
-+ for (int i = 0; i < N; i++) {
-+ res->p[i].p_pixels = picture->p[i].p_pixels;
-+ res->p[i].i_lines = picture->p[i].i_lines;
-+ res->p[i].i_pitch = picture->p[i].i_pitch;
-+ }
-+}
-+
-+int
-+main()
-+{
-+ picture_t aaa, bbb;
-+ uint8_t pixels[10] = {1, 1, 1, 1, 1, 1, 1, 1};
-+
-+ for (unsigned i = 0; i < N; i++)
-+ aaa.p[i].p_pixels = pixels;
-+
-+ picture_Clone (&aaa, &bbb);
-+
-+ uint8_t c = 0;
-+ for (unsigned i = 0; i < N; i++)
-+ c += bbb.p[i].p_pixels[0];
-+
-+ if (c != N)
-+ __builtin_abort ();
-+ return 0;
-+}
-diff --git a/gcc/tree-vect-stmts.c b/gcc/tree-vect-stmts.c
-index 507f81b0a0e8..ffbba3441de2 100644
---- a/gcc/tree-vect-stmts.c
-+++ b/gcc/tree-vect-stmts.c
-@@ -2355,25 +2355,23 @@ get_group_load_store_type (stmt_vec_info stmt_info, tree vectype, bool slp,
- /* First cope with the degenerate case of a single-element
- vector. */
- if (known_eq (TYPE_VECTOR_SUBPARTS (vectype), 1U))
-- *memory_access_type = VMAT_CONTIGUOUS;
-+ ;
-
- /* Otherwise try using LOAD/STORE_LANES. */
-- if (*memory_access_type == VMAT_ELEMENTWISE
-- && (vls_type == VLS_LOAD
-- ? vect_load_lanes_supported (vectype, group_size, masked_p)
-- : vect_store_lanes_supported (vectype, group_size,
-- masked_p)))
-+ else if (vls_type == VLS_LOAD
-+ ? vect_load_lanes_supported (vectype, group_size, masked_p)
-+ : vect_store_lanes_supported (vectype, group_size,
-+ masked_p))
- {
- *memory_access_type = VMAT_LOAD_STORE_LANES;
- overrun_p = would_overrun_p;
- }
-
- /* If that fails, try using permuting loads. */
-- if (*memory_access_type == VMAT_ELEMENTWISE
-- && (vls_type == VLS_LOAD
-- ? vect_grouped_load_supported (vectype, single_element_p,
-- group_size)
-- : vect_grouped_store_supported (vectype, group_size)))
-+ else if (vls_type == VLS_LOAD
-+ ? vect_grouped_load_supported (vectype, single_element_p,
-+ group_size)
-+ : vect_grouped_store_supported (vectype, group_size))
- {
- *memory_access_type = VMAT_CONTIGUOUS_PERMUTE;
- overrun_p = would_overrun_p;
---
-2.20.1
-
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0001-aarch64-New-Straight-Line-Speculation-SLS-mitigation.patch b/poky/meta/recipes-devtools/gcc/gcc-9.3/0001-aarch64-New-Straight-Line-Speculation-SLS-mitigation.patch
deleted file mode 100644
index a7e29f4bd7..0000000000
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0001-aarch64-New-Straight-Line-Speculation-SLS-mitigation.patch
+++ /dev/null
@@ -1,204 +0,0 @@
-CVE: CVE-2020-13844
-Upstream-Status: Backport
-Signed-off-by: Ross Burton <ross.burton@arm.com>
-
-From 20da13e395bde597d8337167c712039c8f923c3b Mon Sep 17 00:00:00 2001
-From: Matthew Malcomson <matthew.malcomson@arm.com>
-Date: Thu, 9 Jul 2020 09:11:58 +0100
-Subject: [PATCH 1/3] aarch64: New Straight Line Speculation (SLS) mitigation
- flags
-
-Here we introduce the flags that will be used for straight line speculation.
-
-The new flag introduced is `-mharden-sls=`.
-This flag can take arguments of `none`, `all`, or a comma seperated list
-of one or more of `retbr` or `blr`.
-`none` indicates no special mitigation of the straight line speculation
-vulnerability.
-`all` requests all mitigations currently implemented.
-`retbr` requests that the RET and BR instructions have a speculation
-barrier inserted after them.
-`blr` requests that BLR instructions are replaced by a BL to a function
-stub using a BR with a speculation barrier after it.
-
-Setting this on a per-function basis using attributes or the like is not
-enabled, but may be in the future.
-
-(cherry picked from commit a9ba2a9b77bec7eacaf066801f22d1c366a2bc86)
-
-gcc/ChangeLog:
-
-2020-06-02 Matthew Malcomson <matthew.malcomson@arm.com>
-
- * config/aarch64/aarch64-protos.h (aarch64_harden_sls_retbr_p):
- New.
- (aarch64_harden_sls_blr_p): New.
- * config/aarch64/aarch64.c (enum aarch64_sls_hardening_type):
- New.
- (aarch64_harden_sls_retbr_p): New.
- (aarch64_harden_sls_blr_p): New.
- (aarch64_validate_sls_mitigation): New.
- (aarch64_override_options): Parse options for SLS mitigation.
- * config/aarch64/aarch64.opt (-mharden-sls): New option.
- * doc/invoke.texi: Document new option.
----
- gcc/config/aarch64/aarch64-protos.h | 3 ++
- gcc/config/aarch64/aarch64.c | 76 +++++++++++++++++++++++++++++
- gcc/config/aarch64/aarch64.opt | 4 ++
- gcc/doc/invoke.texi | 12 +++++
- 4 files changed, 95 insertions(+)
-
-diff --git a/gcc/config/aarch64/aarch64-protos.h b/gcc/config/aarch64/aarch64-protos.h
-index c083cad53..31493f412 100644
---- a/gcc/config/aarch64/aarch64-protos.h
-+++ b/gcc/config/aarch64/aarch64-protos.h
-@@ -644,4 +644,7 @@ poly_uint64 aarch64_regmode_natural_size (machine_mode);
-
- bool aarch64_high_bits_all_ones_p (HOST_WIDE_INT);
-
-+extern bool aarch64_harden_sls_retbr_p (void);
-+extern bool aarch64_harden_sls_blr_p (void);
-+
- #endif /* GCC_AARCH64_PROTOS_H */
-diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
-index b452a53af..269ff6c92 100644
---- a/gcc/config/aarch64/aarch64.c
-+++ b/gcc/config/aarch64/aarch64.c
-@@ -11734,6 +11734,79 @@ aarch64_validate_mcpu (const char *str, const struct processor **res,
- return false;
- }
-
-+/* Straight line speculation indicators. */
-+enum aarch64_sls_hardening_type
-+{
-+ SLS_NONE = 0,
-+ SLS_RETBR = 1,
-+ SLS_BLR = 2,
-+ SLS_ALL = 3,
-+};
-+static enum aarch64_sls_hardening_type aarch64_sls_hardening;
-+
-+/* Return whether we should mitigatate Straight Line Speculation for the RET
-+ and BR instructions. */
-+bool
-+aarch64_harden_sls_retbr_p (void)
-+{
-+ return aarch64_sls_hardening & SLS_RETBR;
-+}
-+
-+/* Return whether we should mitigatate Straight Line Speculation for the BLR
-+ instruction. */
-+bool
-+aarch64_harden_sls_blr_p (void)
-+{
-+ return aarch64_sls_hardening & SLS_BLR;
-+}
-+
-+/* As of yet we only allow setting these options globally, in the future we may
-+ allow setting them per function. */
-+static void
-+aarch64_validate_sls_mitigation (const char *const_str)
-+{
-+ char *token_save = NULL;
-+ char *str = NULL;
-+
-+ if (strcmp (const_str, "none") == 0)
-+ {
-+ aarch64_sls_hardening = SLS_NONE;
-+ return;
-+ }
-+ if (strcmp (const_str, "all") == 0)
-+ {
-+ aarch64_sls_hardening = SLS_ALL;
-+ return;
-+ }
-+
-+ char *str_root = xstrdup (const_str);
-+ str = strtok_r (str_root, ",", &token_save);
-+ if (!str)
-+ error ("invalid argument given to %<-mharden-sls=%>");
-+
-+ int temp = SLS_NONE;
-+ while (str)
-+ {
-+ if (strcmp (str, "blr") == 0)
-+ temp |= SLS_BLR;
-+ else if (strcmp (str, "retbr") == 0)
-+ temp |= SLS_RETBR;
-+ else if (strcmp (str, "none") == 0 || strcmp (str, "all") == 0)
-+ {
-+ error ("%<%s%> must be by itself for %<-mharden-sls=%>", str);
-+ break;
-+ }
-+ else
-+ {
-+ error ("invalid argument %<%s%> for %<-mharden-sls=%>", str);
-+ break;
-+ }
-+ str = strtok_r (NULL, ",", &token_save);
-+ }
-+ aarch64_sls_hardening = (aarch64_sls_hardening_type) temp;
-+ free (str_root);
-+}
-+
- /* Parses CONST_STR for branch protection features specified in
- aarch64_branch_protect_types, and set any global variables required. Returns
- the parsing result and assigns LAST_STR to the last processed token from
-@@ -11972,6 +12045,9 @@ aarch64_override_options (void)
- selected_arch = NULL;
- selected_tune = NULL;
-
-+ if (aarch64_harden_sls_string)
-+ aarch64_validate_sls_mitigation (aarch64_harden_sls_string);
-+
- if (aarch64_branch_protection_string)
- aarch64_validate_mbranch_protection (aarch64_branch_protection_string);
-
-diff --git a/gcc/config/aarch64/aarch64.opt b/gcc/config/aarch64/aarch64.opt
-index 3c6d1cc90..d27ab6df8 100644
---- a/gcc/config/aarch64/aarch64.opt
-+++ b/gcc/config/aarch64/aarch64.opt
-@@ -71,6 +71,10 @@ mgeneral-regs-only
- Target Report RejectNegative Mask(GENERAL_REGS_ONLY) Save
- Generate code which uses only the general registers.
-
-+mharden-sls=
-+Target RejectNegative Joined Var(aarch64_harden_sls_string)
-+Generate code to mitigate against straight line speculation.
-+
- mfix-cortex-a53-835769
- Target Report Var(aarch64_fix_a53_err835769) Init(2) Save
- Workaround for ARM Cortex-A53 Erratum number 835769.
-diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
-index 2f7ffe456..5f04a7d2b 100644
---- a/gcc/doc/invoke.texi
-+++ b/gcc/doc/invoke.texi
-@@ -638,6 +638,7 @@ Objective-C and Objective-C++ Dialects}.
- -mpc-relative-literal-loads @gol
- -msign-return-address=@var{scope} @gol
- -mbranch-protection=@var{none}|@var{standard}|@var{pac-ret}[+@var{leaf}]|@var{bti} @gol
-+-mharden-sls=@var{opts} @gol
- -march=@var{name} -mcpu=@var{name} -mtune=@var{name} @gol
- -moverride=@var{string} -mverbose-cost-dump @gol
- -mstack-protector-guard=@var{guard} -mstack-protector-guard-reg=@var{sysreg} @gol
-@@ -15955,6 +15956,17 @@ argument @samp{leaf} can be used to extend the signing to include leaf
- functions.
- @samp{bti} turns on branch target identification mechanism.
-
-+@item -mharden-sls=@var{opts}
-+@opindex mharden-sls
-+Enable compiler hardening against straight line speculation (SLS).
-+@var{opts} is a comma-separated list of the following options:
-+@table @samp
-+@item retbr
-+@item blr
-+@end table
-+In addition, @samp{-mharden-sls=all} enables all SLS hardening while
-+@samp{-mharden-sls=none} disables all SLS hardening.
-+
- @item -msve-vector-bits=@var{bits}
- @opindex msve-vector-bits
- Specify the number of bits in an SVE vector register. This option only has
---
-2.25.1
-
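For reference, the option grammar enforced by the deleted aarch64_validate_sls_mitigation() above can be restated as a small standalone sketch. This is not GCC code: parse_harden_sls is an invented name, and only the accepted strings are mirrored ("none" and "all" must stand alone, otherwise any comma-separated mix of "retbr" and "blr"), not GCC's diagnostics.

    #include <cstdio>
    #include <sstream>
    #include <string>

    enum sls_hardening { SLS_NONE = 0, SLS_RETBR = 1, SLS_BLR = 2,
                         SLS_ALL = SLS_RETBR | SLS_BLR };

    /* Sketch of the -mharden-sls= grammar from the deleted backport. */
    static bool parse_harden_sls (const std::string &arg, int &mask_out)
    {
      if (arg == "none") { mask_out = SLS_NONE; return true; }
      if (arg == "all")  { mask_out = SLS_ALL;  return true; }

      int mask = SLS_NONE;
      std::istringstream ss (arg);
      std::string tok;
      while (std::getline (ss, tok, ','))
        {
          if (tok == "retbr")    mask |= SLS_RETBR;
          else if (tok == "blr") mask |= SLS_BLR;
          else                   return false;  /* unknown, or "none"/"all" mixed in */
        }
      mask_out = mask;
      return true;
    }

    int main ()
    {
      int mask = SLS_NONE;
      const char *arg = "retbr,blr";
      std::printf ("-mharden-sls=%s -> %s (mask %d)\n", arg,
                   parse_harden_sls (arg, mask) ? "accepted" : "rejected", mask);
      return 0;
    }
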
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0002-aarch64-Introduce-SLS-mitigation-for-RET-and-BR-inst.patch b/poky/meta/recipes-devtools/gcc/gcc-9.3/0002-aarch64-Introduce-SLS-mitigation-for-RET-and-BR-inst.patch
deleted file mode 100644
index c972088d2b..0000000000
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0002-aarch64-Introduce-SLS-mitigation-for-RET-and-BR-inst.patch
+++ /dev/null
@@ -1,600 +0,0 @@
-CVE: CVE-2020-13844
-Upstream-Status: Backport
-Signed-off-by: Ross Burton <ross.burton@arm.com>
-
-From dc586a749228ecfb71f72ec2ca10e6f7b6874af3 Mon Sep 17 00:00:00 2001
-From: Matthew Malcomson <matthew.malcomson@arm.com>
-Date: Thu, 9 Jul 2020 09:11:59 +0100
-Subject: [PATCH 2/3] aarch64: Introduce SLS mitigation for RET and BR
- instructions
-
-Instructions following RET or BR are not necessarily executed. In order
-to avoid speculation past RET and BR we can simply append a speculation
-barrier.
-
-Since these speculation barriers will not be architecturally executed,
-they are not expected to add a high performance penalty.
-
-The speculation barrier is to be SB when targeting architectures which
-have this enabled, and DSB SY + ISB otherwise.
-
-We add tests for each of the cases where such an instruction was seen.
-
-This is implemented by modifying each machine description pattern that
-emits either a RET or a BR instruction. We choose not to use something
-like `TARGET_ASM_FUNCTION_EPILOGUE` since it does not affect the
-`indirect_jump`, `jump`, `sibcall_insn` and `sibcall_value_insn`
-patterns and we find it preferable to implement the functionality in the
-same way for every pattern.
-
-There is one particular case which is slightly tricky. The
-implementation of TARGET_ASM_TRAMPOLINE_TEMPLATE uses a BR which needs
-to be mitigated against. The trampoline template is used *once* per
-compilation unit, and the TRAMPOLINE_SIZE is exposed to the user via the
-builtin macro __LIBGCC_TRAMPOLINE_SIZE__.
-In the future we may implement function specific attributes to turn on
-and off hardening on a per-function basis.
-The fixed nature of the trampoline described above implies it will be
-safer to ensure this speculation barrier is always used.
-
-Testing:
- Bootstrap and regtest done on aarch64-none-linux
- Used a temporary hack(1) to use these options on every test in the
- testsuite and a script to check that the output never emitted an
- unmitigated RET or BR.
-
-1) Temporary hack was a change to the testsuite to always use
-`-save-temps` and run a script on the assembly output of those
-compilations which produced one to ensure every RET or BR is immediately
-followed by a speculation barrier.
-
-(cherry picked from be178ecd5ac1fe1510d960ff95c66d0ff831afe1)
-
-gcc/ChangeLog:
-
- * config/aarch64/aarch64-protos.h (aarch64_sls_barrier): New.
- * config/aarch64/aarch64.c (aarch64_output_casesi): Emit
- speculation barrier after BR instruction if needs be.
- (aarch64_trampoline_init): Handle ptr_mode value & adjust size
- of code copied.
- (aarch64_sls_barrier): New.
- (aarch64_asm_trampoline_template): Add needed barriers.
- * config/aarch64/aarch64.h (AARCH64_ISA_SB): New.
- (TARGET_SB): New.
- (TRAMPOLINE_SIZE): Account for barrier.
- * config/aarch64/aarch64.md (indirect_jump, *casesi_dispatch,
- simple_return, *do_return, *sibcall_insn, *sibcall_value_insn):
- Emit barrier if needs be, also account for possible barrier using
- "sls_length" attribute.
- (sls_length): New attribute.
- (length): Determine default using any non-default sls_length
- value.
-
-gcc/testsuite/ChangeLog:
-
- * gcc.target/aarch64/sls-mitigation/sls-miti-retbr.c: New test.
- * gcc.target/aarch64/sls-mitigation/sls-miti-retbr-pacret.c:
- New test.
- * gcc.target/aarch64/sls-mitigation/sls-mitigation.exp: New file.
- * lib/target-supports.exp (check_effective_target_aarch64_asm_sb_ok):
- New proc.
----
- gcc/config/aarch64/aarch64-protos.h | 1 +
- gcc/config/aarch64/aarch64.c | 41 +++++-
- gcc/config/aarch64/aarch64.h | 10 +-
- gcc/config/aarch64/aarch64.md | 75 ++++++++---
- .../sls-mitigation/sls-miti-retbr-pacret.c | 15 +++
- .../aarch64/sls-mitigation/sls-miti-retbr.c | 119 ++++++++++++++++++
- .../aarch64/sls-mitigation/sls-mitigation.exp | 73 +++++++++++
- gcc/testsuite/lib/target-supports.exp | 3 +-
- 8 files changed, 312 insertions(+), 25 deletions(-)
- create mode 100644 gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-miti-retbr-pacret.c
- create mode 100644 gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-miti-retbr.c
- create mode 100644 gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-mitigation.exp
-
-diff --git a/gcc/config/aarch64/aarch64-protos.h b/gcc/config/aarch64/aarch64-protos.h
-index 31493f412..885eae893 100644
---- a/gcc/config/aarch64/aarch64-protos.h
-+++ b/gcc/config/aarch64/aarch64-protos.h
-@@ -644,6 +644,7 @@ poly_uint64 aarch64_regmode_natural_size (machine_mode);
-
- bool aarch64_high_bits_all_ones_p (HOST_WIDE_INT);
-
-+const char *aarch64_sls_barrier (int);
- extern bool aarch64_harden_sls_retbr_p (void);
- extern bool aarch64_harden_sls_blr_p (void);
-
-diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
-index 269ff6c92..dff61105c 100644
---- a/gcc/config/aarch64/aarch64.c
-+++ b/gcc/config/aarch64/aarch64.c
-@@ -8412,8 +8412,8 @@ aarch64_return_addr (int count, rtx frame ATTRIBUTE_UNUSED)
- static void
- aarch64_asm_trampoline_template (FILE *f)
- {
-- int offset1 = 16;
-- int offset2 = 20;
-+ int offset1 = 24;
-+ int offset2 = 28;
-
- if (aarch64_bti_enabled ())
- {
-@@ -8436,6 +8436,17 @@ aarch64_asm_trampoline_template (FILE *f)
- }
- asm_fprintf (f, "\tbr\t%s\n", reg_names [IP1_REGNUM]);
-
-+ /* We always emit a speculation barrier.
-+ This is because the same trampoline template is used for every nested
-+ function. Since nested functions are not particularly common or
-+ performant we don't worry too much about the extra instructions to copy
-+ around.
-+ This is not yet a problem, since we have not yet implemented function
-+ specific attributes to choose between hardening against straight line
-+ speculation or not, but such function specific attributes are likely to
-+ happen in the future. */
-+ asm_fprintf (f, "\tdsb\tsy\n\tisb\n");
-+
- /* The trampoline needs an extra padding instruction. In case if BTI is
- enabled the padding instruction is replaced by the BTI instruction at
- the beginning. */
-@@ -8450,10 +8461,14 @@ static void
- aarch64_trampoline_init (rtx m_tramp, tree fndecl, rtx chain_value)
- {
- rtx fnaddr, mem, a_tramp;
-- const int tramp_code_sz = 16;
-+ const int tramp_code_sz = 24;
-
- /* Don't need to copy the trailing D-words, we fill those in below. */
-- emit_block_move (m_tramp, assemble_trampoline_template (),
-+ /* We create our own memory address in Pmode so that `emit_block_move` can
-+ use parts of the backend which expect Pmode addresses. */
-+ rtx temp = convert_memory_address (Pmode, XEXP (m_tramp, 0));
-+ emit_block_move (gen_rtx_MEM (BLKmode, temp),
-+ assemble_trampoline_template (),
- GEN_INT (tramp_code_sz), BLOCK_OP_NORMAL);
- mem = adjust_address (m_tramp, ptr_mode, tramp_code_sz);
- fnaddr = XEXP (DECL_RTL (fndecl), 0);
-@@ -8640,6 +8655,8 @@ aarch64_output_casesi (rtx *operands)
- output_asm_insn (buf, operands);
- output_asm_insn (patterns[index][1], operands);
- output_asm_insn ("br\t%3", operands);
-+ output_asm_insn (aarch64_sls_barrier (aarch64_harden_sls_retbr_p ()),
-+ operands);
- assemble_label (asm_out_file, label);
- return "";
- }
-@@ -18976,6 +18993,22 @@ aarch64_file_end_indicate_exec_stack ()
- #undef GNU_PROPERTY_AARCH64_FEATURE_1_BTI
- #undef GNU_PROPERTY_AARCH64_FEATURE_1_AND
-
-+/* Helper function for straight line speculation.
-+ Return what barrier should be emitted for straight line speculation
-+ mitigation.
-+ When not mitigating against straight line speculation this function returns
-+ an empty string.
-+ When mitigating against straight line speculation, use:
-+ * SB when the v8.5-A SB extension is enabled.
-+ * DSB+ISB otherwise. */
-+const char *
-+aarch64_sls_barrier (int mitigation_required)
-+{
-+ return mitigation_required
-+ ? (TARGET_SB ? "sb" : "dsb\tsy\n\tisb")
-+ : "";
-+}
-+
- /* Target-specific selftests. */
-
- #if CHECKING_P
-diff --git a/gcc/config/aarch64/aarch64.h b/gcc/config/aarch64/aarch64.h
-index 772a97296..72ddc6fd9 100644
---- a/gcc/config/aarch64/aarch64.h
-+++ b/gcc/config/aarch64/aarch64.h
-@@ -235,6 +235,7 @@ extern unsigned aarch64_architecture_version;
- #define AARCH64_ISA_F16FML (aarch64_isa_flags & AARCH64_FL_F16FML)
- #define AARCH64_ISA_RCPC8_4 (aarch64_isa_flags & AARCH64_FL_RCPC8_4)
- #define AARCH64_ISA_V8_5 (aarch64_isa_flags & AARCH64_FL_V8_5)
-+#define AARCH64_ISA_SB (aarch64_isa_flags & AARCH64_FL_SB)
-
- /* Crypto is an optional extension to AdvSIMD. */
- #define TARGET_CRYPTO (TARGET_SIMD && AARCH64_ISA_CRYPTO)
-@@ -285,6 +286,9 @@ extern unsigned aarch64_architecture_version;
- #define TARGET_FIX_ERR_A53_835769_DEFAULT 1
- #endif
-
-+/* SB instruction is enabled through +sb. */
-+#define TARGET_SB (AARCH64_ISA_SB)
-+
- /* Apply the workaround for Cortex-A53 erratum 835769. */
- #define TARGET_FIX_ERR_A53_835769 \
- ((aarch64_fix_a53_err835769 == 2) \
-@@ -931,8 +935,10 @@ typedef struct
-
- #define RETURN_ADDR_RTX aarch64_return_addr
-
--/* BTI c + 3 insns + 2 pointer-sized entries. */
--#define TRAMPOLINE_SIZE (TARGET_ILP32 ? 24 : 32)
-+/* BTI c + 3 insns
-+ + sls barrier of DSB + ISB.
-+ + 2 pointer-sized entries. */
-+#define TRAMPOLINE_SIZE (24 + (TARGET_ILP32 ? 8 : 16))
-
- /* Trampolines contain dwords, so must be dword aligned. */
- #define TRAMPOLINE_ALIGNMENT 64
-diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md
-index cc5a887d4..494aee964 100644
---- a/gcc/config/aarch64/aarch64.md
-+++ b/gcc/config/aarch64/aarch64.md
-@@ -331,10 +331,25 @@
- ;; Attribute that specifies whether the alternative uses MOVPRFX.
- (define_attr "movprfx" "no,yes" (const_string "no"))
-
-+;; Attribute to specify that an alternative has the length of a single
-+;; instruction plus a speculation barrier.
-+(define_attr "sls_length" "none,retbr,casesi" (const_string "none"))
-+
- (define_attr "length" ""
- (cond [(eq_attr "movprfx" "yes")
- (const_int 8)
-- ] (const_int 4)))
-+
-+ (eq_attr "sls_length" "retbr")
-+ (cond [(match_test "!aarch64_harden_sls_retbr_p ()") (const_int 4)
-+ (match_test "TARGET_SB") (const_int 8)]
-+ (const_int 12))
-+
-+ (eq_attr "sls_length" "casesi")
-+ (cond [(match_test "!aarch64_harden_sls_retbr_p ()") (const_int 16)
-+ (match_test "TARGET_SB") (const_int 20)]
-+ (const_int 24))
-+ ]
-+ (const_int 4)))
-
- ;; Strictly for compatibility with AArch32 in pipeline models, since AArch64 has
- ;; no predicated insns.
-@@ -370,8 +385,12 @@
- (define_insn "indirect_jump"
- [(set (pc) (match_operand:DI 0 "register_operand" "r"))]
- ""
-- "br\\t%0"
-- [(set_attr "type" "branch")]
-+ {
-+ output_asm_insn ("br\\t%0", operands);
-+ return aarch64_sls_barrier (aarch64_harden_sls_retbr_p ());
-+ }
-+ [(set_attr "type" "branch")
-+ (set_attr "sls_length" "retbr")]
- )
-
- (define_insn "jump"
-@@ -657,7 +676,7 @@
- "*
- return aarch64_output_casesi (operands);
- "
-- [(set_attr "length" "16")
-+ [(set_attr "sls_length" "casesi")
- (set_attr "type" "branch")]
- )
-
-@@ -736,14 +755,18 @@
- [(return)]
- ""
- {
-+ const char *ret = NULL;
- if (aarch64_return_address_signing_enabled ()
- && TARGET_ARMV8_3
- && !crtl->calls_eh_return)
-- return "retaa";
--
-- return "ret";
-+ ret = "retaa";
-+ else
-+ ret = "ret";
-+ output_asm_insn (ret, operands);
-+ return aarch64_sls_barrier (aarch64_harden_sls_retbr_p ());
- }
-- [(set_attr "type" "branch")]
-+ [(set_attr "type" "branch")
-+ (set_attr "sls_length" "retbr")]
- )
-
- (define_expand "return"
-@@ -755,8 +778,12 @@
- (define_insn "simple_return"
- [(simple_return)]
- "aarch64_use_simple_return_insn_p ()"
-- "ret"
-- [(set_attr "type" "branch")]
-+ {
-+ output_asm_insn ("ret", operands);
-+ return aarch64_sls_barrier (aarch64_harden_sls_retbr_p ());
-+ }
-+ [(set_attr "type" "branch")
-+ (set_attr "sls_length" "retbr")]
- )
-
- (define_insn "*cb<optab><mode>1"
-@@ -947,10 +974,16 @@
- (match_operand 1 "" ""))
- (return)]
- "SIBLING_CALL_P (insn)"
-- "@
-- br\\t%0
-- b\\t%c0"
-- [(set_attr "type" "branch, branch")]
-+ {
-+ if (which_alternative == 0)
-+ {
-+ output_asm_insn ("br\\t%0", operands);
-+ return aarch64_sls_barrier (aarch64_harden_sls_retbr_p ());
-+ }
-+ return "b\\t%c0";
-+ }
-+ [(set_attr "type" "branch, branch")
-+ (set_attr "sls_length" "retbr,none")]
- )
-
- (define_insn "*sibcall_value_insn"
-@@ -960,10 +993,16 @@
- (match_operand 2 "" "")))
- (return)]
- "SIBLING_CALL_P (insn)"
-- "@
-- br\\t%1
-- b\\t%c1"
-- [(set_attr "type" "branch, branch")]
-+ {
-+ if (which_alternative == 0)
-+ {
-+ output_asm_insn ("br\\t%1", operands);
-+ return aarch64_sls_barrier (aarch64_harden_sls_retbr_p ());
-+ }
-+ return "b\\t%c1";
-+ }
-+ [(set_attr "type" "branch, branch")
-+ (set_attr "sls_length" "retbr,none")]
- )
-
- ;; Call subroutine returning any type.
-diff --git a/gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-miti-retbr-pacret.c b/gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-miti-retbr-pacret.c
-new file mode 100644
-index 000000000..7656123ee
---- /dev/null
-+++ b/gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-miti-retbr-pacret.c
-@@ -0,0 +1,15 @@
-+/* Avoid ILP32 since pacret is only available for LP64 */
-+/* { dg-do compile { target { ! ilp32 } } } */
-+/* { dg-additional-options "-mharden-sls=retbr -mbranch-protection=pac-ret -march=armv8.3-a" } */
-+
-+/* Testing the do_return pattern for retaa. */
-+long retbr_subcall(void);
-+long retbr_do_return_retaa(void)
-+{
-+ return retbr_subcall()+1;
-+}
-+
-+/* Ensure there are no BR or RET instructions which are not directly followed
-+ by a speculation barrier. */
-+/* { dg-final { scan-assembler-not {\t(br|ret|retaa)\tx[0-9][0-9]?\n\t(?!dsb\tsy\n\tisb)} } } */
-+/* { dg-final { scan-assembler-not {ret\t} } } */
-diff --git a/gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-miti-retbr.c b/gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-miti-retbr.c
-new file mode 100644
-index 000000000..573b30cdc
---- /dev/null
-+++ b/gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-miti-retbr.c
-@@ -0,0 +1,119 @@
-+/* We ensure that -Wpedantic is off since it complains about the trampolines
-+ we explicitly want to test. */
-+/* { dg-additional-options "-mharden-sls=retbr -Wno-pedantic " } */
-+/*
-+ Ensure that the SLS hardening of RET and BR leaves no unprotected RET/BR
-+ instructions.
-+ */
-+typedef int (foo) (int, int);
-+typedef void (bar) (int, int);
-+struct sls_testclass {
-+ foo *x;
-+ bar *y;
-+ int left;
-+ int right;
-+};
-+
-+int
-+retbr_sibcall_value_insn (struct sls_testclass x)
-+{
-+ return x.x(x.left, x.right);
-+}
-+
-+void
-+retbr_sibcall_insn (struct sls_testclass x)
-+{
-+ x.y(x.left, x.right);
-+}
-+
-+/* Aim to test two different returns.
-+ One that introduces a tail call in the middle of the function, and one that
-+ has a normal return. */
-+int
-+retbr_multiple_returns (struct sls_testclass x)
-+{
-+ int temp;
-+ if (x.left % 10)
-+ return x.x(x.left, 100);
-+ else if (x.right % 20)
-+ {
-+ return x.x(x.left * x.right, 100);
-+ }
-+ temp = x.left % x.right;
-+ temp *= 100;
-+ temp /= 2;
-+ return temp % 3;
-+}
-+
-+void
-+retbr_multiple_returns_void (struct sls_testclass x)
-+{
-+ if (x.left % 10)
-+ {
-+ x.y(x.left, 100);
-+ }
-+ else if (x.right % 20)
-+ {
-+ x.y(x.left * x.right, 100);
-+ }
-+ return;
-+}
-+
-+/* Testing the casesi jump via register. */
-+__attribute__ ((optimize ("Os")))
-+int
-+retbr_casesi_dispatch (struct sls_testclass x)
-+{
-+ switch (x.left)
-+ {
-+ case -5:
-+ return -2;
-+ case -3:
-+ return -1;
-+ case 0:
-+ return 0;
-+ case 3:
-+ return 1;
-+ case 5:
-+ break;
-+ default:
-+ __builtin_unreachable ();
-+ }
-+ return x.right;
-+}
-+
-+/* Testing the BR in trampolines is mitigated against. */
-+void f1 (void *);
-+void f3 (void *, void (*)(void *));
-+void f2 (void *);
-+
-+int
-+retbr_trampolines (void *a, int b)
-+{
-+ if (!b)
-+ {
-+ f1 (a);
-+ return 1;
-+ }
-+ if (b)
-+ {
-+ void retbr_tramp_internal (void *c)
-+ {
-+ if (c == a)
-+ f2 (c);
-+ }
-+ f3 (a, retbr_tramp_internal);
-+ }
-+ return 0;
-+}
-+
-+/* Testing the indirect_jump pattern. */
-+void
-+retbr_indirect_jump (int *buf)
-+{
-+ __builtin_longjmp(buf, 1);
-+}
-+
-+/* Ensure there are no BR or RET instructions which are not directly followed
-+ by a speculation barrier. */
-+/* { dg-final { scan-assembler-not {\t(br|ret|retaa)\tx[0-9][0-9]?\n\t(?!dsb\tsy\n\tisb|sb)} } } */
-diff --git a/gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-mitigation.exp b/gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-mitigation.exp
-new file mode 100644
-index 000000000..812250379
---- /dev/null
-+++ b/gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-mitigation.exp
-@@ -0,0 +1,73 @@
-+# Regression driver for SLS mitigation on AArch64.
-+# Copyright (C) 2020 Free Software Foundation, Inc.
-+# Contributed by ARM Ltd.
-+#
-+# This file is part of GCC.
-+#
-+# GCC is free software; you can redistribute it and/or modify it
-+# under the terms of the GNU General Public License as published by
-+# the Free Software Foundation; either version 3, or (at your option)
-+# any later version.
-+#
-+# GCC is distributed in the hope that it will be useful, but
-+# WITHOUT ANY WARRANTY; without even the implied warranty of
-+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-+# General Public License for more details.
-+#
-+# You should have received a copy of the GNU General Public License
-+# along with GCC; see the file COPYING3. If not see
-+# <http://www.gnu.org/licenses/>. */
-+
-+# Exit immediately if this isn't an AArch64 target.
-+if {![istarget aarch64*-*-*] } then {
-+ return
-+}
-+
-+# Load support procs.
-+load_lib gcc-dg.exp
-+load_lib torture-options.exp
-+
-+# If a testcase doesn't have special options, use these.
-+global DEFAULT_CFLAGS
-+if ![info exists DEFAULT_CFLAGS] then {
-+ set DEFAULT_CFLAGS " "
-+}
-+
-+# Initialize `dg'.
-+dg-init
-+torture-init
-+
-+# Use different architectures as well as the normal optimisation options.
-+# (i.e. use both SB and DSB+ISB barriers).
-+
-+set save-dg-do-what-default ${dg-do-what-default}
-+# Main loop.
-+# Run with torture tests (i.e. a bunch of different optimisation levels) just
-+# to increase test coverage.
-+set dg-do-what-default assemble
-+gcc-dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/*.\[cCS\]]] \
-+ "-save-temps" $DEFAULT_CFLAGS
-+
-+# Run the same tests but this time with SB extension.
-+# Since not all supported assemblers will support that extension we decide
-+# whether to assemble or just compile based on whether the extension is
-+# supported for the available assembler.
-+
-+set templist {}
-+foreach x $DG_TORTURE_OPTIONS {
-+ lappend templist "$x -march=armv8.3-a+sb "
-+ lappend templist "$x -march=armv8-a+sb "
-+}
-+set-torture-options $templist
-+if { [check_effective_target_aarch64_asm_sb_ok] } {
-+ set dg-do-what-default assemble
-+} else {
-+ set dg-do-what-default compile
-+}
-+gcc-dg-runtest [lsort [glob -nocomplain $srcdir/$subdir/*.\[cCS\]]] \
-+ "-save-temps" $DEFAULT_CFLAGS
-+set dg-do-what-default ${save-dg-do-what-default}
-+
-+# All done.
-+torture-finish
-+dg-finish
-diff --git a/gcc/testsuite/lib/target-supports.exp b/gcc/testsuite/lib/target-supports.exp
-index ea9a50ccb..79482f9b6 100644
---- a/gcc/testsuite/lib/target-supports.exp
-+++ b/gcc/testsuite/lib/target-supports.exp
-@@ -8579,7 +8579,8 @@ proc check_effective_target_aarch64_tiny { } {
- # Create functions to check that the AArch64 assembler supports the
- # various architecture extensions via the .arch_extension pseudo-op.
-
--foreach { aarch64_ext } { "fp" "simd" "crypto" "crc" "lse" "dotprod" "sve"} {
-+foreach { aarch64_ext } { "fp" "simd" "crypto" "crc" "lse" "dotprod" "sve"
-+ "sb"} {
- eval [string map [list FUNC $aarch64_ext] {
- proc check_effective_target_aarch64_asm_FUNC_ok { } {
- if { [istarget aarch64*-*-*] } {
---
-2.25.1
-
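A minimal sketch of the effect of the deleted 0002 backport above, assuming an aarch64 toolchain carrying these patches and -mharden-sls=retbr. sub_call and do_return_example are placeholder names, and the assembly in the comment is illustrative, based on the aarch64_sls_barrier() helper and the *do_return pattern shown above.

    extern int sub_call (void);

    int do_return_example (void)
    {
      /* With -mharden-sls=retbr the epilogue is followed by a speculation
         barrier, roughly:
             ret
             dsb  sy
             isb
         or a single `sb` when the target provides the SB extension
         (e.g. -march=armv8.3-a+sb), per aarch64_sls_barrier() above.  */
      return sub_call () + 1;
    }
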
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0003-aarch64-Mitigate-SLS-for-BLR-instruction.patch b/poky/meta/recipes-devtools/gcc/gcc-9.3/0003-aarch64-Mitigate-SLS-for-BLR-instruction.patch
deleted file mode 100644
index 6dffef0a34..0000000000
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0003-aarch64-Mitigate-SLS-for-BLR-instruction.patch
+++ /dev/null
@@ -1,659 +0,0 @@
-CVE: CVE-2020-13844
-Upstream-Status: Backport
-Signed-off-by: Ross Burton <ross.burton@arm.com>
-
-From 2155170525f93093b90a1a065e7ed71a925566e9 Mon Sep 17 00:00:00 2001
-From: Matthew Malcomson <matthew.malcomson@arm.com>
-Date: Thu, 9 Jul 2020 09:11:59 +0100
-Subject: [PATCH 3/3] aarch64: Mitigate SLS for BLR instruction
-
-This patch introduces the mitigation for Straight Line Speculation past
-the BLR instruction.
-
-This mitigation replaces BLR instructions with a BL to a stub which uses
-a BR to jump to the original value. These function stubs are then
-appended with a speculation barrier to ensure no straight line
-speculation happens after these jumps.
-
-When optimising for speed we use a set of stubs for each function since
-this should help the branch predictor make more accurate predictions
-about where a stub should branch.
-
-When optimising for size we use one set of stubs for all functions.
-This set of stubs can have human readable names, and we are using
-`__call_indirect_x<N>` for register x<N>.
-
-When BTI branch protection is enabled the BLR instruction can jump to a
-`BTI c` instruction using any register, while the BR instruction can
-only jump to a `BTI c` instruction using the x16 or x17 registers.
-Hence, in order to ensure this transformation is safe we mov the value
-of the original register into x16 and use x16 for the BR.
-
-As an example when optimising for size:
-a
- BLR x0
-instruction would get transformed to something like
- BL __call_indirect_x0
-where __call_indirect_x0 labels a thunk that contains
-__call_indirect_x0:
- MOV X16, X0
- BR X16
- <speculation barrier>
-
-The first version of this patch used local symbols specific to a
-compilation unit to try and avoid relocations.
-This was mistaken since functions coming from the same compilation unit
-can still be in different sections, and the assembler will insert
-relocations at jumps between sections.
-
-On any relocation the linker is permitted to emit a veneer to handle
-jumps between symbols that are very far apart. The registers x16 and
-x17 may be clobbered by these veneers.
-Hence the function stubs cannot rely on the values of x16 and x17 being
-the same as just before the function stub is called.
-
-Similar can be said for the hot/cold partitioning of single functions,
-so function-local stubs have the same restriction.
-
-This updated version of the patch never emits function stubs for x16 and
-x17, and instead forces other registers to be used.
-
-Given the above, there is now no benefit to local symbols (since they
-are not enough to avoid dealing with linker intricacies). This patch
-now uses global symbols with hidden visibility each stored in their own
-COMDAT section. This means stubs can be shared between compilation
-units while still avoiding the PLT indirection.
-
-This patch also removes the `__call_indirect_x30` stub (and
-function-local equivalent) which would simply jump back to the original
-location.
-
-The function-local stubs are emitted to the assembly output file in one
-chunk, which means we need not add the speculation barrier directly
-after each one.
-This is because we know for certain that the instructions directly after
-the BR in all but the last function stub will be from another one of
-these stubs and hence will not contain a speculation gadget.
-Instead we add a speculation barrier at the end of the sequence of
-stubs.
-
-The global stubs are emitted in COMDAT/.linkonce sections by
-themselves so that the linker can remove duplicates from multiple object
-files. This means they are not emitted in one chunk, and each one must
-include the speculation barrier.
-
-Another difference is that since the global stubs are shared across
-compilation units we do not know that all functions will be targeting an
-architecture supporting the SB instruction.
-Rather than provide multiple stubs for each architecture, we provide a
-stub that will work for all architectures -- using the DSB+ISB barrier.
-
-This mitigation does not apply for BLR instructions in the following
-places:
-- Some accesses to thread-local variables use a code sequence with a BLR
- instruction. This code sequence is part of the binary interface between
- compiler and linker. If this BLR instruction needs to be mitigated, it'd
- probably be best to do so in the linker. It seems that the code sequence
- for thread-local variable access is unlikely to lead to a Spectre Revelation
- Gadget.
-- PLT stubs are produced by the linker and each contain a BLR instruction.
- It seems that at most only after the last PLT stub a Spectre Revelation
- Gadget might appear.
-
-Testing:
- Bootstrap and regtest on AArch64
- (with BOOT_CFLAGS="-mharden-sls=retbr,blr")
- Used a temporary hack(1) in gcc-dg.exp to use these options on every
- test in the testsuite, a slight modification to emit the speculation
- barrier after every function stub, and a script to check that the
- output never emitted a BLR, or unmitigated BR or RET instruction.
- Similar on an aarch64-none-elf cross-compiler.
-
-1) Temporary hack emitted a speculation barrier at the end of every stub
-function, and used a script to ensure that:
- a) Every RET or BR is immediately followed by a speculation barrier.
- b) No BLR instruction is emitted by compiler.
-
-(cherry picked from 96b7f495f9269d5448822e4fc28882edb35a58d7)
-
-gcc/ChangeLog:
-
- * config/aarch64/aarch64-protos.h (aarch64_indirect_call_asm):
- New declaration.
- * config/aarch64/aarch64.c (aarch64_regno_regclass): Handle new
- stub registers class.
- (aarch64_class_max_nregs): Likewise.
- (aarch64_register_move_cost): Likewise.
- (aarch64_sls_shared_thunks): Global array to store stub labels.
- (aarch64_sls_emit_function_stub): New.
- (aarch64_create_blr_label): New.
- (aarch64_sls_emit_blr_function_thunks): New.
- (aarch64_sls_emit_shared_blr_thunks): New.
- (aarch64_asm_file_end): New.
- (aarch64_indirect_call_asm): New.
- (TARGET_ASM_FILE_END): Use aarch64_asm_file_end.
- (TARGET_ASM_FUNCTION_EPILOGUE): Use
- aarch64_sls_emit_blr_function_thunks.
- * config/aarch64/aarch64.h (STB_REGNUM_P): New.
- (enum reg_class): Add STUB_REGS class.
- (machine_function): Introduce `call_via` array for
- function-local stub labels.
- * config/aarch64/aarch64.md (*call_insn, *call_value_insn): Use
- aarch64_indirect_call_asm to emit code when hardening BLR
- instructions.
- * config/aarch64/constraints.md (Ucr): New constraint
- representing registers for indirect calls. Is GENERAL_REGS
- usually, and STUB_REGS when hardening BLR instruction against
- SLS.
- * config/aarch64/predicates.md (aarch64_general_reg): STUB_REGS class
- is also a general register.
-
-gcc/testsuite/ChangeLog:
-
- * gcc.target/aarch64/sls-mitigation/sls-miti-blr-bti.c: New test.
- * gcc.target/aarch64/sls-mitigation/sls-miti-blr.c: New test.
----
- gcc/config/aarch64/aarch64-protos.h | 1 +
- gcc/config/aarch64/aarch64.c | 225 +++++++++++++++++-
- gcc/config/aarch64/aarch64.h | 15 ++
- gcc/config/aarch64/aarch64.md | 11 +-
- gcc/config/aarch64/constraints.md | 9 +
- gcc/config/aarch64/predicates.md | 3 +-
- .../aarch64/sls-mitigation/sls-miti-blr-bti.c | 40 ++++
- .../aarch64/sls-mitigation/sls-miti-blr.c | 33 +++
- 8 files changed, 328 insertions(+), 9 deletions(-)
- create mode 100644 gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-miti-blr-bti.c
- create mode 100644 gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-miti-blr.c
-
-diff --git a/gcc/config/aarch64/aarch64-protos.h b/gcc/config/aarch64/aarch64-protos.h
-index 885eae893..2676e43ae 100644
---- a/gcc/config/aarch64/aarch64-protos.h
-+++ b/gcc/config/aarch64/aarch64-protos.h
-@@ -645,6 +645,7 @@ poly_uint64 aarch64_regmode_natural_size (machine_mode);
- bool aarch64_high_bits_all_ones_p (HOST_WIDE_INT);
-
- const char *aarch64_sls_barrier (int);
-+const char *aarch64_indirect_call_asm (rtx);
- extern bool aarch64_harden_sls_retbr_p (void);
- extern bool aarch64_harden_sls_blr_p (void);
-
-diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
-index dff61105c..bc6c02c3a 100644
---- a/gcc/config/aarch64/aarch64.c
-+++ b/gcc/config/aarch64/aarch64.c
-@@ -8190,6 +8190,9 @@ aarch64_label_mentioned_p (rtx x)
- enum reg_class
- aarch64_regno_regclass (unsigned regno)
- {
-+ if (STUB_REGNUM_P (regno))
-+ return STUB_REGS;
-+
- if (GP_REGNUM_P (regno))
- return GENERAL_REGS;
-
-@@ -8499,6 +8502,7 @@ aarch64_class_max_nregs (reg_class_t regclass, machine_mode mode)
- unsigned int nregs;
- switch (regclass)
- {
-+ case STUB_REGS:
- case TAILCALL_ADDR_REGS:
- case POINTER_REGS:
- case GENERAL_REGS:
-@@ -10693,10 +10697,12 @@ aarch64_register_move_cost (machine_mode mode,
- = aarch64_tune_params.regmove_cost;
-
- /* Caller save and pointer regs are equivalent to GENERAL_REGS. */
-- if (to == TAILCALL_ADDR_REGS || to == POINTER_REGS)
-+ if (to == TAILCALL_ADDR_REGS || to == POINTER_REGS
-+ || to == STUB_REGS)
- to = GENERAL_REGS;
-
-- if (from == TAILCALL_ADDR_REGS || from == POINTER_REGS)
-+ if (from == TAILCALL_ADDR_REGS || from == POINTER_REGS
-+ || from == STUB_REGS)
- from = GENERAL_REGS;
-
- /* Moving between GPR and stack cost is the same as GP2GP. */
-@@ -19009,6 +19015,215 @@ aarch64_sls_barrier (int mitigation_required)
- : "";
- }
-
-+static GTY (()) tree aarch64_sls_shared_thunks[30];
-+static GTY (()) bool aarch64_sls_shared_thunks_needed = false;
-+const char *indirect_symbol_names[30] = {
-+ "__call_indirect_x0",
-+ "__call_indirect_x1",
-+ "__call_indirect_x2",
-+ "__call_indirect_x3",
-+ "__call_indirect_x4",
-+ "__call_indirect_x5",
-+ "__call_indirect_x6",
-+ "__call_indirect_x7",
-+ "__call_indirect_x8",
-+ "__call_indirect_x9",
-+ "__call_indirect_x10",
-+ "__call_indirect_x11",
-+ "__call_indirect_x12",
-+ "__call_indirect_x13",
-+ "__call_indirect_x14",
-+ "__call_indirect_x15",
-+ "", /* "__call_indirect_x16", */
-+ "", /* "__call_indirect_x17", */
-+ "__call_indirect_x18",
-+ "__call_indirect_x19",
-+ "__call_indirect_x20",
-+ "__call_indirect_x21",
-+ "__call_indirect_x22",
-+ "__call_indirect_x23",
-+ "__call_indirect_x24",
-+ "__call_indirect_x25",
-+ "__call_indirect_x26",
-+ "__call_indirect_x27",
-+ "__call_indirect_x28",
-+ "__call_indirect_x29",
-+};
-+
-+/* Function to create a BLR thunk. This thunk is used to mitigate straight
-+ line speculation. Instead of a simple BLR that can be speculated past,
-+ we emit a BL to this thunk, and this thunk contains a BR to the relevant
-+ register. These thunks have the relevant speculation barriers put after
-+ their indirect branch so that speculation is blocked.
-+
-+ We use such a thunk so the speculation barriers are kept off the
-+ architecturally executed path in order to reduce the performance overhead.
-+
-+ When optimizing for size we use stubs shared by the linked object.
-+ When optimizing for performance we emit stubs for each function in the hope
-+ that the branch predictor can better train on jumps specific for a given
-+ function. */
-+rtx
-+aarch64_sls_create_blr_label (int regnum)
-+{
-+ gcc_assert (STUB_REGNUM_P (regnum));
-+ if (optimize_function_for_size_p (cfun))
-+ {
-+ /* For the thunks shared between different functions in this compilation
-+ unit we use a named symbol -- this is just for users to more easily
-+ understand the generated assembly. */
-+ aarch64_sls_shared_thunks_needed = true;
-+ const char *thunk_name = indirect_symbol_names[regnum];
-+ if (aarch64_sls_shared_thunks[regnum] == NULL)
-+ {
-+ /* Build a decl representing this function stub and record it for
-+ later. We build a decl here so we can use the GCC machinery for
-+ handling sections automatically (through `get_named_section` and
-+ `make_decl_one_only`). That saves us a lot of trouble handling
-+ the specifics of different output file formats. */
-+ tree decl = build_decl (BUILTINS_LOCATION, FUNCTION_DECL,
-+ get_identifier (thunk_name),
-+ build_function_type_list (void_type_node,
-+ NULL_TREE));
-+ DECL_RESULT (decl) = build_decl (BUILTINS_LOCATION, RESULT_DECL,
-+ NULL_TREE, void_type_node);
-+ TREE_PUBLIC (decl) = 1;
-+ TREE_STATIC (decl) = 1;
-+ DECL_IGNORED_P (decl) = 1;
-+ DECL_ARTIFICIAL (decl) = 1;
-+ make_decl_one_only (decl, DECL_ASSEMBLER_NAME (decl));
-+ resolve_unique_section (decl, 0, false);
-+ aarch64_sls_shared_thunks[regnum] = decl;
-+ }
-+
-+ return gen_rtx_SYMBOL_REF (Pmode, thunk_name);
-+ }
-+
-+ if (cfun->machine->call_via[regnum] == NULL)
-+ cfun->machine->call_via[regnum]
-+ = gen_rtx_LABEL_REF (Pmode, gen_label_rtx ());
-+ return cfun->machine->call_via[regnum];
-+}
-+
-+/* Helper function for aarch64_sls_emit_blr_function_thunks and
-+ aarch64_sls_emit_shared_blr_thunks below. */
-+static void
-+aarch64_sls_emit_function_stub (FILE *out_file, int regnum)
-+{
-+ /* Save in x16 and branch to that function so this transformation does
-+ not prevent jumping to `BTI c` instructions. */
-+ asm_fprintf (out_file, "\tmov\tx16, x%d\n", regnum);
-+ asm_fprintf (out_file, "\tbr\tx16\n");
-+}
-+
-+/* Emit all BLR stubs for this particular function.
-+ Here we emit all the BLR stubs needed for the current function. Since we
-+ emit these stubs in a consecutive block we know there will be no speculation
-+ gadgets between each stub, and hence we only emit a speculation barrier at
-+ the end of the stub sequences.
-+
-+ This is called in the TARGET_ASM_FUNCTION_EPILOGUE hook. */
-+void
-+aarch64_sls_emit_blr_function_thunks (FILE *out_file)
-+{
-+ if (! aarch64_harden_sls_blr_p ())
-+ return;
-+
-+ bool any_functions_emitted = false;
-+ /* We must save and restore the current function section since this assembly
-+ is emitted at the end of the function. This means it can be emitted *just
-+ after* the cold section of a function. That cold part would be emitted in
-+ a different section. That switch would trigger a `.cfi_endproc` directive
-+ to be emitted in the original section and a `.cfi_startproc` directive to
-+ be emitted in the new section. Switching to the original section without
-+ restoring would mean that the `.cfi_endproc` emitted as a function ends
-+ would happen in a different section -- leaving an unmatched
-+ `.cfi_startproc` in the cold text section and an unmatched `.cfi_endproc`
-+ in the standard text section. */
-+ section *save_text_section = in_section;
-+ switch_to_section (function_section (current_function_decl));
-+ for (int regnum = 0; regnum < 30; ++regnum)
-+ {
-+ rtx specu_label = cfun->machine->call_via[regnum];
-+ if (specu_label == NULL)
-+ continue;
-+
-+ targetm.asm_out.print_operand (out_file, specu_label, 0);
-+ asm_fprintf (out_file, ":\n");
-+ aarch64_sls_emit_function_stub (out_file, regnum);
-+ any_functions_emitted = true;
-+ }
-+ if (any_functions_emitted)
-+ /* Can use the SB if needs be here, since this stub will only be used
-+ by the current function, and hence for the current target. */
-+ asm_fprintf (out_file, "\t%s\n", aarch64_sls_barrier (true));
-+ switch_to_section (save_text_section);
-+}
-+
-+/* Emit shared BLR stubs for the current compilation unit.
-+ Over the course of compiling this unit we may have converted some BLR
-+ instructions to a BL to a shared stub function. This is where we emit those
-+ stub functions.
-+ This function is for the stubs shared between different functions in this
-+ compilation unit. We share when optimizing for size instead of speed.
-+
-+ This function is called through the TARGET_ASM_FILE_END hook. */
-+void
-+aarch64_sls_emit_shared_blr_thunks (FILE *out_file)
-+{
-+ if (! aarch64_sls_shared_thunks_needed)
-+ return;
-+
-+ for (int regnum = 0; regnum < 30; ++regnum)
-+ {
-+ tree decl = aarch64_sls_shared_thunks[regnum];
-+ if (!decl)
-+ continue;
-+
-+ const char *name = indirect_symbol_names[regnum];
-+ switch_to_section (get_named_section (decl, NULL, 0));
-+ ASM_OUTPUT_ALIGN (out_file, 2);
-+ targetm.asm_out.globalize_label (out_file, name);
-+ /* Only emits if the compiler is configured for an assembler that can
-+ handle visibility directives. */
-+ targetm.asm_out.assemble_visibility (decl, VISIBILITY_HIDDEN);
-+ ASM_OUTPUT_TYPE_DIRECTIVE (out_file, name, "function");
-+ ASM_OUTPUT_LABEL (out_file, name);
-+ aarch64_sls_emit_function_stub (out_file, regnum);
-+ /* Use the most conservative target to ensure it can always be used by any
-+ function in the translation unit. */
-+ asm_fprintf (out_file, "\tdsb\tsy\n\tisb\n");
-+ ASM_DECLARE_FUNCTION_SIZE (out_file, name, decl);
-+ }
-+}
-+
-+/* Implement TARGET_ASM_FILE_END. */
-+void
-+aarch64_asm_file_end ()
-+{
-+ aarch64_sls_emit_shared_blr_thunks (asm_out_file);
-+ /* Since this function will be called for the ASM_FILE_END hook, we ensure
-+ that what would be called otherwise (e.g. `file_end_indicate_exec_stack`
-+ for FreeBSD) still gets called. */
-+#ifdef TARGET_ASM_FILE_END
-+ TARGET_ASM_FILE_END ();
-+#endif
-+}
-+
-+const char *
-+aarch64_indirect_call_asm (rtx addr)
-+{
-+ gcc_assert (REG_P (addr));
-+ if (aarch64_harden_sls_blr_p ())
-+ {
-+ rtx stub_label = aarch64_sls_create_blr_label (REGNO (addr));
-+ output_asm_insn ("bl\t%0", &stub_label);
-+ }
-+ else
-+ output_asm_insn ("blr\t%0", &addr);
-+ return "";
-+}
-+
- /* Target-specific selftests. */
-
- #if CHECKING_P
-@@ -19529,6 +19744,12 @@ aarch64_libgcc_floating_mode_supported_p
- #define TARGET_RUN_TARGET_SELFTESTS selftest::aarch64_run_selftests
- #endif /* #if CHECKING_P */
-
-+#undef TARGET_ASM_FILE_END
-+#define TARGET_ASM_FILE_END aarch64_asm_file_end
-+
-+#undef TARGET_ASM_FUNCTION_EPILOGUE
-+#define TARGET_ASM_FUNCTION_EPILOGUE aarch64_sls_emit_blr_function_thunks
-+
- struct gcc_target targetm = TARGET_INITIALIZER;
-
- #include "gt-aarch64.h"
-diff --git a/gcc/config/aarch64/aarch64.h b/gcc/config/aarch64/aarch64.h
-index 72ddc6fd9..60682a100 100644
---- a/gcc/config/aarch64/aarch64.h
-+++ b/gcc/config/aarch64/aarch64.h
-@@ -540,6 +540,16 @@ extern unsigned aarch64_architecture_version;
- #define GP_REGNUM_P(REGNO) \
- (((unsigned) (REGNO - R0_REGNUM)) <= (R30_REGNUM - R0_REGNUM))
-
-+/* Registers known to be preserved over a BL instruction. This consists of the
-+ GENERAL_REGS without x16, x17, and x30. The x30 register is changed by the
-+ BL instruction itself, while the x16 and x17 registers may be used by
-+ veneers which can be inserted by the linker. */
-+#define STUB_REGNUM_P(REGNO) \
-+ (GP_REGNUM_P (REGNO) \
-+ && (REGNO) != R16_REGNUM \
-+ && (REGNO) != R17_REGNUM \
-+ && (REGNO) != R30_REGNUM) \
-+
- #define FP_REGNUM_P(REGNO) \
- (((unsigned) (REGNO - V0_REGNUM)) <= (V31_REGNUM - V0_REGNUM))
-
-@@ -561,6 +571,7 @@ enum reg_class
- {
- NO_REGS,
- TAILCALL_ADDR_REGS,
-+ STUB_REGS,
- GENERAL_REGS,
- STACK_REG,
- POINTER_REGS,
-@@ -580,6 +591,7 @@ enum reg_class
- { \
- "NO_REGS", \
- "TAILCALL_ADDR_REGS", \
-+ "STUB_REGS", \
- "GENERAL_REGS", \
- "STACK_REG", \
- "POINTER_REGS", \
-@@ -596,6 +608,7 @@ enum reg_class
- { \
- { 0x00000000, 0x00000000, 0x00000000 }, /* NO_REGS */ \
- { 0x00030000, 0x00000000, 0x00000000 }, /* TAILCALL_ADDR_REGS */\
-+ { 0x3ffcffff, 0x00000000, 0x00000000 }, /* STUB_REGS */ \
- { 0x7fffffff, 0x00000000, 0x00000003 }, /* GENERAL_REGS */ \
- { 0x80000000, 0x00000000, 0x00000000 }, /* STACK_REG */ \
- { 0xffffffff, 0x00000000, 0x00000003 }, /* POINTER_REGS */ \
-@@ -735,6 +748,8 @@ typedef struct GTY (()) machine_function
- struct aarch64_frame frame;
- /* One entry for each hard register. */
- bool reg_is_wrapped_separately[LAST_SAVED_REGNUM];
-+ /* One entry for each general purpose register. */
-+ rtx call_via[SP_REGNUM];
- bool label_is_assembled;
- } machine_function;
- #endif
-diff --git a/gcc/config/aarch64/aarch64.md b/gcc/config/aarch64/aarch64.md
-index 494aee964..ed8cf8ece 100644
---- a/gcc/config/aarch64/aarch64.md
-+++ b/gcc/config/aarch64/aarch64.md
-@@ -908,15 +908,14 @@
- )
-
- (define_insn "*call_insn"
-- [(call (mem:DI (match_operand:DI 0 "aarch64_call_insn_operand" "r, Usf"))
-+ [(call (mem:DI (match_operand:DI 0 "aarch64_call_insn_operand" "Ucr, Usf"))
- (match_operand 1 "" ""))
- (clobber (reg:DI LR_REGNUM))]
- ""
- "@
-- blr\\t%0
-+ * return aarch64_indirect_call_asm (operands[0]);
- bl\\t%c0"
-- [(set_attr "type" "call, call")]
--)
-+ [(set_attr "type" "call, call")])
-
- (define_expand "call_value"
- [(parallel [(set (match_operand 0 "" "")
-@@ -934,12 +933,12 @@
-
- (define_insn "*call_value_insn"
- [(set (match_operand 0 "" "")
-- (call (mem:DI (match_operand:DI 1 "aarch64_call_insn_operand" "r, Usf"))
-+ (call (mem:DI (match_operand:DI 1 "aarch64_call_insn_operand" "Ucr, Usf"))
- (match_operand 2 "" "")))
- (clobber (reg:DI LR_REGNUM))]
- ""
- "@
-- blr\\t%1
-+ * return aarch64_indirect_call_asm (operands[1]);
- bl\\t%c1"
- [(set_attr "type" "call, call")]
- )
-diff --git a/gcc/config/aarch64/constraints.md b/gcc/config/aarch64/constraints.md
-index 21f9549e6..7756dbe83 100644
---- a/gcc/config/aarch64/constraints.md
-+++ b/gcc/config/aarch64/constraints.md
-@@ -24,6 +24,15 @@
- (define_register_constraint "Ucs" "TAILCALL_ADDR_REGS"
- "@internal Registers suitable for an indirect tail call")
-
-+(define_register_constraint "Ucr"
-+ "aarch64_harden_sls_blr_p () ? STUB_REGS : GENERAL_REGS"
-+ "@internal Registers to be used for an indirect call.
-+ This is usually the general registers, but when we are hardening against
-+ Straight Line Speculation we disallow x16, x17, and x30 so we can use
-+ indirection stubs. These indirection stubs cannot use the above registers
-+ since they will be reached by a BL that may have to go through a linker
-+ veneer.")
-+
- (define_register_constraint "w" "FP_REGS"
- "Floating point and SIMD vector registers.")
-
-diff --git a/gcc/config/aarch64/predicates.md b/gcc/config/aarch64/predicates.md
-index 8e1b78421..4250aecb3 100644
---- a/gcc/config/aarch64/predicates.md
-+++ b/gcc/config/aarch64/predicates.md
-@@ -32,7 +32,8 @@
-
- (define_predicate "aarch64_general_reg"
- (and (match_operand 0 "register_operand")
-- (match_test "REGNO_REG_CLASS (REGNO (op)) == GENERAL_REGS")))
-+ (match_test "REGNO_REG_CLASS (REGNO (op)) == STUB_REGS
-+ || REGNO_REG_CLASS (REGNO (op)) == GENERAL_REGS")))
-
- ;; Return true if OP a (const_int 0) operand.
- (define_predicate "const0_operand"
-diff --git a/gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-miti-blr-bti.c b/gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-miti-blr-bti.c
-new file mode 100644
-index 000000000..b1fb754c7
---- /dev/null
-+++ b/gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-miti-blr-bti.c
-@@ -0,0 +1,40 @@
-+/* { dg-do compile } */
-+/* { dg-additional-options "-mharden-sls=blr -mbranch-protection=bti" } */
-+/*
-+ Ensure that the SLS hardening of BLR leaves no BLR instructions.
-+ Here we also check that there are no BR instructions with anything except an
-+ x16 or x17 register. This is because a `BTI c` instruction can be branched
-+ to using a BLR instruction using any register, but can only be branched to
-+ with a BR using an x16 or x17 register.
-+ */
-+typedef int (foo) (int, int);
-+typedef void (bar) (int, int);
-+struct sls_testclass {
-+ foo *x;
-+ bar *y;
-+ int left;
-+ int right;
-+};
-+
-+/* We test both RTL patterns for a call which returns a value and a call which
-+ does not. */
-+int blr_call_value (struct sls_testclass x)
-+{
-+ int retval = x.x(x.left, x.right);
-+ if (retval % 10)
-+ return 100;
-+ return 9;
-+}
-+
-+int blr_call (struct sls_testclass x)
-+{
-+ x.y(x.left, x.right);
-+ if (x.left % 10)
-+ return 100;
-+ return 9;
-+}
-+
-+/* { dg-final { scan-assembler-not {\tblr\t} } } */
-+/* { dg-final { scan-assembler-not {\tbr\tx(?!16|17)} } } */
-+/* { dg-final { scan-assembler {\tbr\tx(16|17)} } } */
-+
-diff --git a/gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-miti-blr.c b/gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-miti-blr.c
-new file mode 100644
-index 000000000..88baffffe
---- /dev/null
-+++ b/gcc/testsuite/gcc.target/aarch64/sls-mitigation/sls-miti-blr.c
-@@ -0,0 +1,33 @@
-+/* { dg-additional-options "-mharden-sls=blr -save-temps" } */
-+/* Ensure that the SLS hardening of BLR leaves no BLR instructions.
-+ We only test that all BLR instructions have been removed, not that the
-+ resulting code makes sense. */
-+typedef int (foo) (int, int);
-+typedef void (bar) (int, int);
-+struct sls_testclass {
-+ foo *x;
-+ bar *y;
-+ int left;
-+ int right;
-+};
-+
-+/* We test both RTL patterns for a call which returns a value and a call which
-+ does not. */
-+int blr_call_value (struct sls_testclass x)
-+{
-+ int retval = x.x(x.left, x.right);
-+ if (retval % 10)
-+ return 100;
-+ return 9;
-+}
-+
-+int blr_call (struct sls_testclass x)
-+{
-+ x.y(x.left, x.right);
-+ if (x.left % 10)
-+ return 100;
-+ return 9;
-+}
-+
-+/* { dg-final { scan-assembler-not {\tblr\t} } } */
-+/* { dg-final { scan-assembler {\tbr\tx[0-9][0-9]?} } } */
---
-2.25.1
-
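Likewise, a sketch of the BLR transformation described in the deleted 0003 backport above, assuming -mharden-sls=blr. The register number and stub body in the comment are illustrative; they follow the __call_indirect_x<N> naming and the mov/br/barrier sequence given in that patch's commit message.

    typedef int (*indirect_fn) (int, int);

    int call_indirect_example (indirect_fn f, int a, int b)
    {
      /* Unhardened this is a plain `blr xN`.  With -mharden-sls=blr it
         becomes roughly:
             bl   __call_indirect_x2
         with the shared, size-optimised stub:
             __call_indirect_x2:
                 mov  x16, x2
                 br   x16
                 dsb  sy
                 isb
         x16 is used for the BR so the rewrite stays compatible with
         `BTI c` landing pads, as explained above.  */
      return f (a, b);
    }
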
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0040-fix-missing-dependencies-for-selftests.patch b/poky/meta/recipes-devtools/gcc/gcc-9.3/0040-fix-missing-dependencies-for-selftests.patch
deleted file mode 100644
index c8960c6098..0000000000
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0040-fix-missing-dependencies-for-selftests.patch
+++ /dev/null
@@ -1,45 +0,0 @@
-From b19d8aac15649f31a7588b2634411a1922906ea8 Mon Sep 17 00:00:00 2001
-From: Romain Naour <romain.naour@gmail.com>
-Date: Wed, 3 Jun 2020 12:30:57 -0600
-Subject: [PATCH] Fix missing dependencies for selftests which occasionally
- causes failed builds.
-
-gcc/
-
- * Makefile.in (SELFTEST_DEPS): Move before including language makefile
- fragments.
-
-Upstream-Status: Backport [https://gcc.gnu.org/git/?p=gcc.git;a=commitdiff;h=b19d8aac15649f31a7588b2634411a1922906ea8]
-Signed-off-by: Steve Sakoman <steve@sakoman.com>
-
----
- gcc/Makefile.in | 6 ++++--
- 1 file changed, 4 insertions(+), 2 deletions(-)
-
-diff --git a/gcc/Makefile.in b/gcc/Makefile.in
-index aab1dbba57b..be11311b60d 100644
---- a/gcc/Makefile.in
-+++ b/gcc/Makefile.in
-@@ -1735,6 +1735,10 @@ $(FULL_DRIVER_NAME): ./xgcc$(exeext)
- $(LN_S) $< $@
-
- #
-+# SELFTEST_DEPS need to be set before including language makefile fragments.
-+# Otherwise $(SELFTEST_DEPS) is empty when used from <LANG>/Make-lang.in.
-+SELFTEST_DEPS = $(GCC_PASSES) stmp-int-hdrs $(srcdir)/testsuite/selftests
-+
- # Language makefile fragments.
-
- # The following targets define the interface between us and the languages.
-@@ -2010,8 +2014,6 @@ DEVNULL=$(if $(findstring mingw,$(build)),nul,/dev/null)
- SELFTEST_FLAGS = -nostdinc $(DEVNULL) -S -o $(DEVNULL) \
- -fself-test=$(srcdir)/testsuite/selftests
-
--SELFTEST_DEPS = $(GCC_PASSES) stmp-int-hdrs $(srcdir)/testsuite/selftests
--
- # Run the selftests during the build once we have a driver and the frontend,
- # so that self-test failures are caught as early as possible.
- # Use "s-selftest-FE" to ensure that we only run the selftests if the
---
-2.27.0
-
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3.inc b/poky/meta/recipes-devtools/gcc/gcc-9.5.inc
index c171f673e9..ec28246bf3 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3.inc
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5.inc
@@ -2,13 +2,13 @@ require gcc-common.inc
# Third digit in PV should be incremented after a minor release
-PV = "9.3.0"
+PV = "9.5.0"
# BINV should be incremented to a revision after a minor gcc release
-BINV = "9.3.0"
+BINV = "9.5.0"
-FILESEXTRAPATHS =. "${FILE_DIRNAME}/gcc-9.3:${FILE_DIRNAME}/gcc-9.3/backport:"
+FILESEXTRAPATHS =. "${FILE_DIRNAME}/gcc-9.5:${FILE_DIRNAME}/gcc-9.5/backport:"
DEPENDS =+ "mpfr gmp libmpc zlib flex-native"
NATIVEDEPS = "mpfr-native gmp-native libmpc-native zlib-native flex-native"
@@ -69,14 +69,10 @@ SRC_URI = "\
file://0037-CVE-2019-14250-Check-zero-value-in-simple_object_elf.patch \
file://0038-gentypes-genmodes-Do-not-use-__LINE__-for-maintainin.patch \
file://0039-process_alt_operands-Don-t-match-user-defined-regs-o.patch \
- file://0040-fix-missing-dependencies-for-selftests.patch \
- file://0001-aarch64-New-Straight-Line-Speculation-SLS-mitigation.patch \
- file://0002-aarch64-Introduce-SLS-mitigation-for-RET-and-BR-inst.patch \
- file://0003-aarch64-Mitigate-SLS-for-BLR-instruction.patch \
- file://0001-Backport-fix-for-PR-tree-optimization-97236-fix-bad-.patch \
+ file://0002-libstdc-Fix-inconsistent-noexcept-specific-for-valar.patch \
"
S = "${TMPDIR}/work-shared/gcc-${PV}-${PR}/gcc-${PV}"
-SRC_URI[sha256sum] = "71e197867611f6054aa1119b13a0c0abac12834765fe2d81f35ac57f84f742d1"
+SRC_URI[sha256sum] = "27769f64ef1d4cd5e2be8682c0c93f9887983e6cfd1a927ce5a0a2915a95cf8f"
# For dev release snapshotting
#S = "${TMPDIR}/work-shared/gcc-${PV}-${PR}/official-gcc-${RELEASE}"
#B = "${WORKDIR}/gcc-${PV}/build.${HOST_SYS}.${TARGET_SYS}"
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0001-gcc-4.3.1-ARCH_FLAGS_FOR_TARGET.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0001-gcc-4.3.1-ARCH_FLAGS_FOR_TARGET.patch
index 0d9222df17..0d9222df17 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0001-gcc-4.3.1-ARCH_FLAGS_FOR_TARGET.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0001-gcc-4.3.1-ARCH_FLAGS_FOR_TARGET.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0002-gcc-poison-system-directories.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0002-gcc-poison-system-directories.patch
index f427ee67c1..f427ee67c1 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0002-gcc-poison-system-directories.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0002-gcc-poison-system-directories.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.5/0002-libstdc-Fix-inconsistent-noexcept-specific-for-valar.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0002-libstdc-Fix-inconsistent-noexcept-specific-for-valar.patch
new file mode 100644
index 0000000000..506064bfc2
--- /dev/null
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0002-libstdc-Fix-inconsistent-noexcept-specific-for-valar.patch
@@ -0,0 +1,44 @@
+From 60d966708d7cf105dccf128d2b7a38b0b2580a1a Mon Sep 17 00:00:00 2001
+From: Jonathan Wakely <jwakely@redhat.com>
+Date: Fri, 5 Nov 2021 21:42:20 +0000
+Subject: [PATCH] libstdc++: Fix inconsistent noexcept-specific for valarray
+ begin/end
+
+These declarations should be noexcept after I added it to the
+definitions in <valarray>.
+
+libstdc++-v3/ChangeLog:
+
+ * include/bits/range_access.h (begin(valarray), end(valarray)):
+ Add noexcept.
+
+(cherry picked from commit 2b2d97fc545635a0f6aa9c9ee3b017394bc494bf)
+
+Upstream-Status: Backport [https://github.com/hkaelber/gcc/commit/2b2d97fc545635a0f6aa9c9ee3b017394bc494bf]
+Signed-off-by: Virendra Thakur <virendrak@kpit.com>
+
+---
+ libstdc++-v3/include/bits/range_access.h | 8 ++++----
+ 1 file changed, 4 insertions(+), 4 deletions(-)
+
+diff --git a/libstdc++-v3/include/bits/range_access.h b/libstdc++-v3/include/bits/range_access.h
+index 3d99ea92027..4736e75fda1 100644
+--- a/libstdc++-v3/include/bits/range_access.h
++++ b/libstdc++-v3/include/bits/range_access.h
+@@ -101,10 +101,10 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
+
+ template<typename _Tp> class valarray;
+ // These overloads must be declared for cbegin and cend to use them.
+- template<typename _Tp> _Tp* begin(valarray<_Tp>&);
+- template<typename _Tp> const _Tp* begin(const valarray<_Tp>&);
+- template<typename _Tp> _Tp* end(valarray<_Tp>&);
+- template<typename _Tp> const _Tp* end(const valarray<_Tp>&);
++ template<typename _Tp> _Tp* begin(valarray<_Tp>&) noexcept;
++ template<typename _Tp> const _Tp* begin(const valarray<_Tp>&) noexcept;
++ template<typename _Tp> _Tp* end(valarray<_Tp>&) noexcept;
++ template<typename _Tp> const _Tp* end(const valarray<_Tp>&) noexcept;
+
+ /**
+ * @brief Return an iterator pointing to the first element of
+--
+2.25.1 \ No newline at end of file
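A small check of what the new valarray backport above addresses, assuming a libstdc++ that carries this fix (for example the gcc 9.5 the recipe now builds) and C++14 or later for std::cbegin. With the declarations in <bits/range_access.h> matching the noexcept definitions in <valarray>, the assertion below is expected to hold.

    #include <iterator>
    #include <valarray>

    int main ()
    {
      std::valarray<int> v (4);   /* zero-initialised */

      /* Declaration (range_access.h) and definition (<valarray>) now agree
         on the noexcept-specification, which is what the backport fixes.  */
      static_assert (noexcept (std::begin (v)),
                     "begin(valarray&) should be noexcept");

      return *std::cbegin (v);    /* 0 */
    }
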
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0003-gcc-4.3.3-SYSROOT_CFLAGS_FOR_TARGET.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0003-gcc-4.3.3-SYSROOT_CFLAGS_FOR_TARGET.patch
index 23ec5bce03..23ec5bce03 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0003-gcc-4.3.3-SYSROOT_CFLAGS_FOR_TARGET.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0003-gcc-4.3.3-SYSROOT_CFLAGS_FOR_TARGET.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0004-64-bit-multilib-hack.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0004-64-bit-multilib-hack.patch
index 17ec8986c1..17ec8986c1 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0004-64-bit-multilib-hack.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0004-64-bit-multilib-hack.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0005-optional-libstdc.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0005-optional-libstdc.patch
index 3c28aeac63..3c28aeac63 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0005-optional-libstdc.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0005-optional-libstdc.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0006-COLLECT_GCC_OPTIONS.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0006-COLLECT_GCC_OPTIONS.patch
index 906f3a7317..906f3a7317 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0006-COLLECT_GCC_OPTIONS.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0006-COLLECT_GCC_OPTIONS.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0007-Use-the-defaults.h-in-B-instead-of-S-and-t-oe-in-B.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0007-Use-the-defaults.h-in-B-instead-of-S-and-t-oe-in-B.patch
index 68a876cb95..68a876cb95 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0007-Use-the-defaults.h-in-B-instead-of-S-and-t-oe-in-B.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0007-Use-the-defaults.h-in-B-instead-of-S-and-t-oe-in-B.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0008-fortran-cross-compile-hack.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0008-fortran-cross-compile-hack.patch
index 6acd2b0cf9..6acd2b0cf9 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0008-fortran-cross-compile-hack.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0008-fortran-cross-compile-hack.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0009-cpp-honor-sysroot.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0009-cpp-honor-sysroot.patch
index 5a9e527606..5a9e527606 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0009-cpp-honor-sysroot.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0009-cpp-honor-sysroot.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0010-MIPS64-Default-to-N64-ABI.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0010-MIPS64-Default-to-N64-ABI.patch
index a8103b951e..a8103b951e 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0010-MIPS64-Default-to-N64-ABI.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0010-MIPS64-Default-to-N64-ABI.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0011-Define-GLIBC_DYNAMIC_LINKER-and-UCLIBC_DYNAMIC_LINKE.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0011-Define-GLIBC_DYNAMIC_LINKER-and-UCLIBC_DYNAMIC_LINKE.patch
index d9d563d0f7..d9d563d0f7 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0011-Define-GLIBC_DYNAMIC_LINKER-and-UCLIBC_DYNAMIC_LINKE.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0011-Define-GLIBC_DYNAMIC_LINKER-and-UCLIBC_DYNAMIC_LINKE.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0012-gcc-Fix-argument-list-too-long-error.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0012-gcc-Fix-argument-list-too-long-error.patch
index f0b79ee145..f0b79ee145 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0012-gcc-Fix-argument-list-too-long-error.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0012-gcc-Fix-argument-list-too-long-error.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0013-Disable-sdt.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0013-Disable-sdt.patch
index 455858354f..455858354f 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0013-Disable-sdt.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0013-Disable-sdt.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0014-libtool.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0014-libtool.patch
index 2953859238..2953859238 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0014-libtool.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0014-libtool.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0015-gcc-armv4-pass-fix-v4bx-to-linker-to-support-EABI.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0015-gcc-armv4-pass-fix-v4bx-to-linker-to-support-EABI.patch
index d4445244e2..d4445244e2 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0015-gcc-armv4-pass-fix-v4bx-to-linker-to-support-EABI.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0015-gcc-armv4-pass-fix-v4bx-to-linker-to-support-EABI.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0016-Use-the-multilib-config-files-from-B-instead-of-usin.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0016-Use-the-multilib-config-files-from-B-instead-of-usin.patch
index 6f0833ccda..6f0833ccda 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0016-Use-the-multilib-config-files-from-B-instead-of-usin.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0016-Use-the-multilib-config-files-from-B-instead-of-usin.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0017-Avoid-using-libdir-from-.la-which-usually-points-to-.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0017-Avoid-using-libdir-from-.la-which-usually-points-to-.patch
index 96da013bf2..96da013bf2 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0017-Avoid-using-libdir-from-.la-which-usually-points-to-.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0017-Avoid-using-libdir-from-.la-which-usually-points-to-.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0018-export-CPP.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0018-export-CPP.patch
index 2385099c25..2385099c25 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0018-export-CPP.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0018-export-CPP.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0019-Ensure-target-gcc-headers-can-be-included.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0019-Ensure-target-gcc-headers-can-be-included.patch
index e0129d1f96..e0129d1f96 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0019-Ensure-target-gcc-headers-can-be-included.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0019-Ensure-target-gcc-headers-can-be-included.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0020-gcc-4.8-won-t-build-with-disable-dependency-tracking.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0020-gcc-4.8-won-t-build-with-disable-dependency-tracking.patch
index 1d2182140f..1d2182140f 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0020-gcc-4.8-won-t-build-with-disable-dependency-tracking.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0020-gcc-4.8-won-t-build-with-disable-dependency-tracking.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0021-Don-t-search-host-directory-during-relink-if-inst_pr.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0021-Don-t-search-host-directory-during-relink-if-inst_pr.patch
index e363c7d445..e363c7d445 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0021-Don-t-search-host-directory-during-relink-if-inst_pr.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0021-Don-t-search-host-directory-during-relink-if-inst_pr.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0022-Use-SYSTEMLIBS_DIR-replacement-instead-of-hardcoding.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0022-Use-SYSTEMLIBS_DIR-replacement-instead-of-hardcoding.patch
index 846c0de5e8..846c0de5e8 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0022-Use-SYSTEMLIBS_DIR-replacement-instead-of-hardcoding.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0022-Use-SYSTEMLIBS_DIR-replacement-instead-of-hardcoding.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0023-aarch64-Add-support-for-musl-ldso.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0023-aarch64-Add-support-for-musl-ldso.patch
index 102d6fc742..102d6fc742 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0023-aarch64-Add-support-for-musl-ldso.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0023-aarch64-Add-support-for-musl-ldso.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0024-libcc1-fix-libcc1-s-install-path-and-rpath.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0024-libcc1-fix-libcc1-s-install-path-and-rpath.patch
index 443e0a2ca6..443e0a2ca6 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0024-libcc1-fix-libcc1-s-install-path-and-rpath.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0024-libcc1-fix-libcc1-s-install-path-and-rpath.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0025-handle-sysroot-support-for-nativesdk-gcc.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0025-handle-sysroot-support-for-nativesdk-gcc.patch
index 59ac97eaed..59ac97eaed 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0025-handle-sysroot-support-for-nativesdk-gcc.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0025-handle-sysroot-support-for-nativesdk-gcc.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0026-Search-target-sysroot-gcc-version-specific-dirs-with.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0026-Search-target-sysroot-gcc-version-specific-dirs-with.patch
index abfa7516da..abfa7516da 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0026-Search-target-sysroot-gcc-version-specific-dirs-with.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0026-Search-target-sysroot-gcc-version-specific-dirs-with.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0027-Fix-various-_FOR_BUILD-and-related-variables.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0027-Fix-various-_FOR_BUILD-and-related-variables.patch
index ae8acc7f13..ae8acc7f13 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0027-Fix-various-_FOR_BUILD-and-related-variables.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0027-Fix-various-_FOR_BUILD-and-related-variables.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0028-nios2-Define-MUSL_DYNAMIC_LINKER.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0028-nios2-Define-MUSL_DYNAMIC_LINKER.patch
index 52a5d97aef..52a5d97aef 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0028-nios2-Define-MUSL_DYNAMIC_LINKER.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0028-nios2-Define-MUSL_DYNAMIC_LINKER.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0029-Add-ssp_nonshared-to-link-commandline-for-musl-targe.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0029-Add-ssp_nonshared-to-link-commandline-for-musl-targe.patch
index bfa7e19dd0..bfa7e19dd0 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0029-Add-ssp_nonshared-to-link-commandline-for-musl-targe.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0029-Add-ssp_nonshared-to-link-commandline-for-musl-targe.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0030-ldbl128-config.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0030-ldbl128-config.patch
index f8e8c07f62..f8e8c07f62 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0030-ldbl128-config.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0030-ldbl128-config.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0031-Link-libgcc-using-LDFLAGS-not-just-SHLIB_LDFLAGS.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0031-Link-libgcc-using-LDFLAGS-not-just-SHLIB_LDFLAGS.patch
index 60a29fc94d..60a29fc94d 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0031-Link-libgcc-using-LDFLAGS-not-just-SHLIB_LDFLAGS.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0031-Link-libgcc-using-LDFLAGS-not-just-SHLIB_LDFLAGS.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0032-libgcc_s-Use-alias-for-__cpu_indicator_init-instead-.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0032-libgcc_s-Use-alias-for-__cpu_indicator_init-instead-.patch
index 6f048dab82..6f048dab82 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0032-libgcc_s-Use-alias-for-__cpu_indicator_init-instead-.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0032-libgcc_s-Use-alias-for-__cpu_indicator_init-instead-.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0033-sync-gcc-stddef.h-with-musl.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0033-sync-gcc-stddef.h-with-musl.patch
index f080b0596f..f080b0596f 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0033-sync-gcc-stddef.h-with-musl.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0033-sync-gcc-stddef.h-with-musl.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0034-fix-segmentation-fault-in-precompiled-header-generat.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0034-fix-segmentation-fault-in-precompiled-header-generat.patch
index 3b7ccb3e3d..3b7ccb3e3d 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0034-fix-segmentation-fault-in-precompiled-header-generat.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0034-fix-segmentation-fault-in-precompiled-header-generat.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0035-Fix-for-testsuite-failure.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0035-Fix-for-testsuite-failure.patch
index 5e199fbcfd..5e199fbcfd 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0035-Fix-for-testsuite-failure.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0035-Fix-for-testsuite-failure.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0036-Re-introduce-spe-commandline-options.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0036-Re-introduce-spe-commandline-options.patch
index 825e070aa3..825e070aa3 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0036-Re-introduce-spe-commandline-options.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0036-Re-introduce-spe-commandline-options.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0037-CVE-2019-14250-Check-zero-value-in-simple_object_elf.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0037-CVE-2019-14250-Check-zero-value-in-simple_object_elf.patch
index f268a4eb58..f268a4eb58 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0037-CVE-2019-14250-Check-zero-value-in-simple_object_elf.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0037-CVE-2019-14250-Check-zero-value-in-simple_object_elf.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0038-gentypes-genmodes-Do-not-use-__LINE__-for-maintainin.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0038-gentypes-genmodes-Do-not-use-__LINE__-for-maintainin.patch
index a79fc03d15..a79fc03d15 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0038-gentypes-genmodes-Do-not-use-__LINE__-for-maintainin.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0038-gentypes-genmodes-Do-not-use-__LINE__-for-maintainin.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-9.3/0039-process_alt_operands-Don-t-match-user-defined-regs-o.patch b/poky/meta/recipes-devtools/gcc/gcc-9.5/0039-process_alt_operands-Don-t-match-user-defined-regs-o.patch
index b69114d1e5..b69114d1e5 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-9.3/0039-process_alt_operands-Don-t-match-user-defined-regs-o.patch
+++ b/poky/meta/recipes-devtools/gcc/gcc-9.5/0039-process_alt_operands-Don-t-match-user-defined-regs-o.patch
diff --git a/poky/meta/recipes-devtools/gcc/gcc-cross-canadian_9.3.bb b/poky/meta/recipes-devtools/gcc/gcc-cross-canadian_9.5.bb
index bf53c5cd78..bf53c5cd78 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-cross-canadian_9.3.bb
+++ b/poky/meta/recipes-devtools/gcc/gcc-cross-canadian_9.5.bb
diff --git a/poky/meta/recipes-devtools/gcc/gcc-cross_9.3.bb b/poky/meta/recipes-devtools/gcc/gcc-cross_9.5.bb
index b43cca0c52..b43cca0c52 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-cross_9.3.bb
+++ b/poky/meta/recipes-devtools/gcc/gcc-cross_9.5.bb
diff --git a/poky/meta/recipes-devtools/gcc/gcc-crosssdk_9.3.bb b/poky/meta/recipes-devtools/gcc/gcc-crosssdk_9.5.bb
index 40a6c4feff..40a6c4feff 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-crosssdk_9.3.bb
+++ b/poky/meta/recipes-devtools/gcc/gcc-crosssdk_9.5.bb
diff --git a/poky/meta/recipes-devtools/gcc/gcc-runtime_9.3.bb b/poky/meta/recipes-devtools/gcc/gcc-runtime_9.5.bb
index dd430b57eb..dd430b57eb 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-runtime_9.3.bb
+++ b/poky/meta/recipes-devtools/gcc/gcc-runtime_9.5.bb
diff --git a/poky/meta/recipes-devtools/gcc/gcc-sanitizers_9.3.bb b/poky/meta/recipes-devtools/gcc/gcc-sanitizers_9.5.bb
index f3c7058114..f3c7058114 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-sanitizers_9.3.bb
+++ b/poky/meta/recipes-devtools/gcc/gcc-sanitizers_9.5.bb
diff --git a/poky/meta/recipes-devtools/gcc/gcc-source_9.3.bb b/poky/meta/recipes-devtools/gcc/gcc-source_9.5.bb
index b890fa33ea..b890fa33ea 100644
--- a/poky/meta/recipes-devtools/gcc/gcc-source_9.3.bb
+++ b/poky/meta/recipes-devtools/gcc/gcc-source_9.5.bb
diff --git a/poky/meta/recipes-devtools/gcc/gcc_9.3.bb b/poky/meta/recipes-devtools/gcc/gcc_9.5.bb
index 7d93590588..7d93590588 100644
--- a/poky/meta/recipes-devtools/gcc/gcc_9.3.bb
+++ b/poky/meta/recipes-devtools/gcc/gcc_9.5.bb
diff --git a/poky/meta/recipes-devtools/gcc/libgcc-initial_9.3.bb b/poky/meta/recipes-devtools/gcc/libgcc-initial_9.5.bb
index 0c698c26ec..0c698c26ec 100644
--- a/poky/meta/recipes-devtools/gcc/libgcc-initial_9.3.bb
+++ b/poky/meta/recipes-devtools/gcc/libgcc-initial_9.5.bb
diff --git a/poky/meta/recipes-devtools/gcc/libgcc_9.3.bb b/poky/meta/recipes-devtools/gcc/libgcc_9.5.bb
index ea210a1130..ea210a1130 100644
--- a/poky/meta/recipes-devtools/gcc/libgcc_9.3.bb
+++ b/poky/meta/recipes-devtools/gcc/libgcc_9.5.bb
diff --git a/poky/meta/recipes-devtools/gcc/libgfortran_9.3.bb b/poky/meta/recipes-devtools/gcc/libgfortran_9.5.bb
index 71dd8b4bdc..71dd8b4bdc 100644
--- a/poky/meta/recipes-devtools/gcc/libgfortran_9.3.bb
+++ b/poky/meta/recipes-devtools/gcc/libgfortran_9.5.bb
diff --git a/poky/meta/recipes-devtools/git/files/CVE-2022-23521.patch b/poky/meta/recipes-devtools/git/files/CVE-2022-23521.patch
new file mode 100644
index 0000000000..974546013d
--- /dev/null
+++ b/poky/meta/recipes-devtools/git/files/CVE-2022-23521.patch
@@ -0,0 +1,367 @@
+From eb22e7dfa23da6bd9aed9bd1dad69e1e8e167d24 Mon Sep 17 00:00:00 2001
+From: Patrick Steinhardt <ps@pks.im>
+Date: Thu, 1 Dec 2022 15:45:15 +0100
+Subject: [PATCH] CVE-2022-23521
+
+attr: fix overflow when upserting attribute with overly long name
+
+The function `git_attr_internal()` is called to upsert attributes into
+the global map. And while all callers pass a `size_t`, the function
+itself accepts an `int` as the attribute name's length. This can lead to
+an integer overflow in case the attribute name is longer than `INT_MAX`.
+
+Now this overflow seems harmless as the first thing we do is to call
+`attr_name_valid()`, and that function only succeeds in case all chars
+in the range of `namelen` match a certain small set of chars. We thus
+can't do an out-of-bounds read as NUL is not part of that set and all
+strings passed to this function are NUL-terminated. And furthermore, we
+wouldn't ever read past the current attribute name anyway due to the
+same reason. And if validation fails we will return early.
+
+On the other hand it feels fragile to rely on this behaviour, even more
+so given that we pass `namelen` to `FLEX_ALLOC_MEM()`. So let's instead
+just do the correct thing here and accept a `size_t` as the name length.
+
+Upstream-Status: Backport [https://github.com/git/git/commit/eb22e7dfa23da6bd9aed9bd1dad69e1e8e167d24 &https://github.com/git/git/commit/8d0d48cf2157cfb914db1f53b3fe40785b86f3aa & https://github.com/git/git/commit/24557209500e6ed618f04a8795a111a0c491a29c & https://github.com/git/git/commit/34ace8bad02bb14ecc5b631f7e3daaa7a9bba7d9 & https://github.com/git/git/commit/447ac906e189535e77dcb1f4bbe3f1bc917d4c12 & https://github.com/git/git/commit/e1e12e97ac73ded85f7d000da1063a774b3cc14f & https://github.com/git/git/commit/a60a66e409c265b2944f18bf43581c146812586d & https://github.com/git/git/commit/d74b1fd54fdbc45966d12ea907dece11e072fb2b & https://github.com/git/git/commit/dfa6b32b5e599d97448337ed4fc18dd50c90758f & https://github.com/git/git/commit/3c50032ff5289cc45659f21949c8d09e52164579
+
+CVE: CVE-2022-23521
+
+Reviewed-by: Sylvain Beucler <beuc@debian.org>
+Signed-off-by: Patrick Steinhardt <ps@pks.im>
+Signed-off-by: Junio C Hamano <gitster@pobox.com>
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ attr.c | 97 +++++++++++++++++++++++++++----------------
+ attr.h | 12 ++++++
+ t/t0003-attributes.sh | 59 ++++++++++++++++++++++++++
+ 3 files changed, 132 insertions(+), 36 deletions(-)
+
+diff --git a/attr.c b/attr.c
+index 11f19b5..63484ab 100644
+--- a/attr.c
++++ b/attr.c
+@@ -29,7 +29,7 @@ static const char git_attr__unknown[] = "(builtin)unknown";
+ #endif
+
+ struct git_attr {
+- int attr_nr; /* unique attribute number */
++ unsigned int attr_nr; /* unique attribute number */
+ char name[FLEX_ARRAY]; /* attribute name */
+ };
+
+@@ -221,7 +221,7 @@ static void report_invalid_attr(const char *name, size_t len,
+ * dictionary. If no entry is found, create a new attribute and store it in
+ * the dictionary.
+ */
+-static const struct git_attr *git_attr_internal(const char *name, int namelen)
++static const struct git_attr *git_attr_internal(const char *name, size_t namelen)
+ {
+ struct git_attr *a;
+
+@@ -237,8 +237,8 @@ static const struct git_attr *git_attr_internal(const char *name, int namelen)
+ a->attr_nr = hashmap_get_size(&g_attr_hashmap.map);
+
+ attr_hashmap_add(&g_attr_hashmap, a->name, namelen, a);
+- assert(a->attr_nr ==
+- (hashmap_get_size(&g_attr_hashmap.map) - 1));
++ if (a->attr_nr != hashmap_get_size(&g_attr_hashmap.map) - 1)
++ die(_("unable to add additional attribute"));
+ }
+
+ hashmap_unlock(&g_attr_hashmap);
+@@ -283,7 +283,7 @@ struct match_attr {
+ const struct git_attr *attr;
+ } u;
+ char is_macro;
+- unsigned num_attr;
++ size_t num_attr;
+ struct attr_state state[FLEX_ARRAY];
+ };
+
+@@ -300,7 +300,7 @@ static const char *parse_attr(const char *src, int lineno, const char *cp,
+ struct attr_state *e)
+ {
+ const char *ep, *equals;
+- int len;
++ size_t len;
+
+ ep = cp + strcspn(cp, blank);
+ equals = strchr(cp, '=');
+@@ -344,8 +344,7 @@ static const char *parse_attr(const char *src, int lineno, const char *cp,
+ static struct match_attr *parse_attr_line(const char *line, const char *src,
+ int lineno, int macro_ok)
+ {
+- int namelen;
+- int num_attr, i;
++ size_t namelen, num_attr, i;
+ const char *cp, *name, *states;
+ struct match_attr *res = NULL;
+ int is_macro;
+@@ -356,6 +355,11 @@ static struct match_attr *parse_attr_line(const char *line, const char *src,
+ return NULL;
+ name = cp;
+
++ if (strlen(line) >= ATTR_MAX_LINE_LENGTH) {
++ warning(_("ignoring overly long attributes line %d"), lineno);
++ return NULL;
++ }
++
+ if (*cp == '"' && !unquote_c_style(&pattern, name, &states)) {
+ name = pattern.buf;
+ namelen = pattern.len;
+@@ -392,10 +396,9 @@ static struct match_attr *parse_attr_line(const char *line, const char *src,
+ goto fail_return;
+ }
+
+- res = xcalloc(1,
+- sizeof(*res) +
+- sizeof(struct attr_state) * num_attr +
+- (is_macro ? 0 : namelen + 1));
++ res = xcalloc(1, st_add3(sizeof(*res),
++ st_mult(sizeof(struct attr_state), num_attr),
++ is_macro ? 0 : namelen + 1));
+ if (is_macro) {
+ res->u.attr = git_attr_internal(name, namelen);
+ } else {
+@@ -458,11 +461,12 @@ struct attr_stack {
+
+ static void attr_stack_free(struct attr_stack *e)
+ {
+- int i;
++ unsigned i;
+ free(e->origin);
+ for (i = 0; i < e->num_matches; i++) {
+ struct match_attr *a = e->attrs[i];
+- int j;
++ size_t j;
++
+ for (j = 0; j < a->num_attr; j++) {
+ const char *setto = a->state[j].setto;
+ if (setto == ATTR__TRUE ||
+@@ -671,8 +675,8 @@ static void handle_attr_line(struct attr_stack *res,
+ a = parse_attr_line(line, src, lineno, macro_ok);
+ if (!a)
+ return;
+- ALLOC_GROW(res->attrs, res->num_matches + 1, res->alloc);
+- res->attrs[res->num_matches++] = a;
++ ALLOC_GROW_BY(res->attrs, res->num_matches, 1, res->alloc);
++ res->attrs[res->num_matches - 1] = a;
+ }
+
+ static struct attr_stack *read_attr_from_array(const char **list)
+@@ -711,21 +715,37 @@ void git_attr_set_direction(enum git_attr_direction new_direction)
+
+ static struct attr_stack *read_attr_from_file(const char *path, int macro_ok)
+ {
++ struct strbuf buf = STRBUF_INIT;
+ FILE *fp = fopen_or_warn(path, "r");
+ struct attr_stack *res;
+- char buf[2048];
+ int lineno = 0;
++ int fd;
++ struct stat st;
+
+ if (!fp)
+ return NULL;
+- res = xcalloc(1, sizeof(*res));
+- while (fgets(buf, sizeof(buf), fp)) {
+- char *bufp = buf;
+- if (!lineno)
+- skip_utf8_bom(&bufp, strlen(bufp));
+- handle_attr_line(res, bufp, path, ++lineno, macro_ok);
++
++ fd = fileno(fp);
++ if (fstat(fd, &st)) {
++ warning_errno(_("cannot fstat gitattributes file '%s'"), path);
++ fclose(fp);
++ return NULL;
+ }
++ if (st.st_size >= ATTR_MAX_FILE_SIZE) {
++ warning(_("ignoring overly large gitattributes file '%s'"), path);
++ fclose(fp);
++ return NULL;
++ }
++
++ CALLOC_ARRAY(res, 1);
++ while (strbuf_getline(&buf, fp) != EOF) {
++ if (!lineno && starts_with(buf.buf, utf8_bom))
++ strbuf_remove(&buf, 0, strlen(utf8_bom));
++ handle_attr_line(res, buf.buf, path, ++lineno, macro_ok);
++ }
++
+ fclose(fp);
++ strbuf_release(&buf);
+ return res;
+ }
+
+@@ -736,13 +756,18 @@ static struct attr_stack *read_attr_from_index(const struct index_state *istate,
+ struct attr_stack *res;
+ char *buf, *sp;
+ int lineno = 0;
++ size_t size;
+
+ if (!istate)
+ return NULL;
+
+- buf = read_blob_data_from_index(istate, path, NULL);
++ buf = read_blob_data_from_index(istate, path, &size);
+ if (!buf)
+ return NULL;
++ if (size >= ATTR_MAX_FILE_SIZE) {
++ warning(_("ignoring overly large gitattributes blob '%s'"), path);
++ return NULL;
++ }
+
+ res = xcalloc(1, sizeof(*res));
+ for (sp = buf; *sp; ) {
+@@ -1012,12 +1037,12 @@ static int macroexpand_one(struct all_attrs_item *all_attrs, int nr, int rem);
+ static int fill_one(const char *what, struct all_attrs_item *all_attrs,
+ const struct match_attr *a, int rem)
+ {
+- int i;
++ size_t i;
+
+- for (i = a->num_attr - 1; rem > 0 && i >= 0; i--) {
+- const struct git_attr *attr = a->state[i].attr;
++ for (i = a->num_attr; rem > 0 && i > 0; i--) {
++ const struct git_attr *attr = a->state[i - 1].attr;
+ const char **n = &(all_attrs[attr->attr_nr].value);
+- const char *v = a->state[i].setto;
++ const char *v = a->state[i - 1].setto;
+
+ if (*n == ATTR__UNKNOWN) {
+ debug_set(what,
+@@ -1036,11 +1061,11 @@ static int fill(const char *path, int pathlen, int basename_offset,
+ struct all_attrs_item *all_attrs, int rem)
+ {
+ for (; rem > 0 && stack; stack = stack->prev) {
+- int i;
++ unsigned i;
+ const char *base = stack->origin ? stack->origin : "";
+
+- for (i = stack->num_matches - 1; 0 < rem && 0 <= i; i--) {
+- const struct match_attr *a = stack->attrs[i];
++ for (i = stack->num_matches; 0 < rem && 0 < i; i--) {
++ const struct match_attr *a = stack->attrs[i - 1];
+ if (a->is_macro)
+ continue;
+ if (path_matches(path, pathlen, basename_offset,
+@@ -1071,11 +1096,11 @@ static void determine_macros(struct all_attrs_item *all_attrs,
+ const struct attr_stack *stack)
+ {
+ for (; stack; stack = stack->prev) {
+- int i;
+- for (i = stack->num_matches - 1; i >= 0; i--) {
+- const struct match_attr *ma = stack->attrs[i];
++ unsigned i;
++ for (i = stack->num_matches; i > 0; i--) {
++ const struct match_attr *ma = stack->attrs[i - 1];
+ if (ma->is_macro) {
+- int n = ma->u.attr->attr_nr;
++ unsigned int n = ma->u.attr->attr_nr;
+ if (!all_attrs[n].macro) {
+ all_attrs[n].macro = ma;
+ }
+@@ -1127,7 +1152,7 @@ void git_check_attr(const struct index_state *istate,
+ collect_some_attrs(istate, path, check);
+
+ for (i = 0; i < check->nr; i++) {
+- size_t n = check->items[i].attr->attr_nr;
++ unsigned int n = check->items[i].attr->attr_nr;
+ const char *value = check->all_attrs[n].value;
+ if (value == ATTR__UNKNOWN)
+ value = ATTR__UNSET;
+diff --git a/attr.h b/attr.h
+index b0378bf..f424285 100644
+--- a/attr.h
++++ b/attr.h
+@@ -1,6 +1,18 @@
+ #ifndef ATTR_H
+ #define ATTR_H
+
++/**
++ * The maximum line length for a gitattributes file. If the line exceeds this
++ * length we will ignore it.
++ */
++#define ATTR_MAX_LINE_LENGTH 2048
++
++ /**
++ * The maximum size of the gitattributes file. If the file exceeds this size we
++ * will ignore it.
++ */
++#define ATTR_MAX_FILE_SIZE (100 * 1024 * 1024)
++
+ struct index_state;
+
+ /* An attribute is a pointer to this opaque structure */
+diff --git a/t/t0003-attributes.sh b/t/t0003-attributes.sh
+index 71e63d8..556245b 100755
+--- a/t/t0003-attributes.sh
++++ b/t/t0003-attributes.sh
+@@ -342,4 +342,63 @@ test_expect_success 'query binary macro directly' '
+ test_cmp expect actual
+ '
+
++test_expect_success 'large attributes line ignored in tree' '
++ test_when_finished "rm .gitattributes" &&
++ printf "path %02043d" 1 >.gitattributes &&
++ git check-attr --all path >actual 2>err &&
++ echo "warning: ignoring overly long attributes line 1" >expect &&
++ test_cmp expect err &&
++ test_must_be_empty actual
++'
++
++test_expect_success 'large attributes line ignores trailing content in tree' '
++ test_when_finished "rm .gitattributes" &&
++ # older versions of Git broke lines at 2048 bytes; the 2045 bytes
++ # of 0-padding here is accounting for the three bytes of "a 1", which
++ # would knock "trailing" to the "next" line, where it would be
++ # erroneously parsed.
++ printf "a %02045dtrailing attribute\n" 1 >.gitattributes &&
++ git check-attr --all trailing >actual 2>err &&
++ echo "warning: ignoring overly long attributes line 1" >expect &&
++ test_cmp expect err &&
++ test_must_be_empty actual
++'
++
++test_expect_success EXPENSIVE 'large attributes file ignored in tree' '
++ test_when_finished "rm .gitattributes" &&
++ dd if=/dev/zero of=.gitattributes bs=101M count=1 2>/dev/null &&
++ git check-attr --all path >/dev/null 2>err &&
++ echo "warning: ignoring overly large gitattributes file ${SQ}.gitattributes${SQ}" >expect &&
++ test_cmp expect err
++'
++
++test_expect_success 'large attributes line ignored in index' '
++ test_when_finished "git update-index --remove .gitattributes" &&
++ blob=$(printf "path %02043d" 1 | git hash-object -w --stdin) &&
++ git update-index --add --cacheinfo 100644,$blob,.gitattributes &&
++ git check-attr --cached --all path >actual 2>err &&
++ echo "warning: ignoring overly long attributes line 1" >expect &&
++ test_cmp expect err &&
++ test_must_be_empty actual
++'
++
++test_expect_success 'large attributes line ignores trailing content in index' '
++ test_when_finished "git update-index --remove .gitattributes" &&
++ blob=$(printf "a %02045dtrailing attribute\n" 1 | git hash-object -w --stdin) &&
++ git update-index --add --cacheinfo 100644,$blob,.gitattributes &&
++ git check-attr --cached --all trailing >actual 2>err &&
++ echo "warning: ignoring overly long attributes line 1" >expect &&
++ test_cmp expect err &&
++ test_must_be_empty actual
++'
++
++test_expect_success EXPENSIVE 'large attributes file ignored in index' '
++ test_when_finished "git update-index --remove .gitattributes" &&
++ blob=$(dd if=/dev/zero bs=101M count=1 2>/dev/null | git hash-object -w --stdin) &&
++ git update-index --add --cacheinfo 100644,$blob,.gitattributes &&
++ git check-attr --cached --all path >/dev/null 2>err &&
++ echo "warning: ignoring overly large gitattributes blob ${SQ}.gitattributes${SQ}" >expect &&
++ test_cmp expect err
++'
++
+ test_done
+--
+2.25.1
+
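
The attr.c changes above widen the name and line lengths from `int` to `size_t` and bound both the line length and the file size. A minimal, self-contained C sketch of the underlying hazard and the guard (hypothetical names such as `check_attr_line` and `MAX_LINE_LENGTH`; not the code from the patch):

    /* Illustrative only: a length computed as size_t but stored in an int
     * can wrap, while a hard upper bound on the line length avoids the
     * problem entirely. */
    #include <limits.h>
    #include <stdio.h>
    #include <string.h>

    #define MAX_LINE_LENGTH 2048                /* assumption: mirrors ATTR_MAX_LINE_LENGTH */

    /* Reject lines that are too long, before any length is narrowed. */
    static int check_attr_line(const char *line, int lineno)
    {
        size_t len = strlen(line);              /* keep the length in a size_t */

        if (len >= MAX_LINE_LENGTH) {
            fprintf(stderr, "ignoring overly long attributes line %d\n", lineno);
            return 0;
        }
        return 1;
    }

    int main(void)
    {
        size_t huge = (size_t)INT_MAX + 2;      /* a name longer than INT_MAX */
        int narrowed = (int)huge;               /* implementation-defined, typically negative */

        printf("size_t %zu squeezed into an int becomes %d\n", huge, narrowed);
        return !check_attr_line("path attr=value", 1);
    }
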
diff --git a/poky/meta/recipes-devtools/git/files/CVE-2022-41903-01.patch b/poky/meta/recipes-devtools/git/files/CVE-2022-41903-01.patch
new file mode 100644
index 0000000000..87091abd47
--- /dev/null
+++ b/poky/meta/recipes-devtools/git/files/CVE-2022-41903-01.patch
@@ -0,0 +1,39 @@
+From a244dc5b0a629290881641467c7a545de7508ab2 Mon Sep 17 00:00:00 2001
+From: Carlo Marcelo Arenas Belón <carenas@gmail.com>
+Date: Tue, 2 Nov 2021 15:46:06 +0000
+Subject: [PATCH 01/12] test-lib: add prerequisite for 64-bit platforms
+
+Allow tests that assume a 64-bit `size_t` to be skipped in 32-bit
+platforms and regardless of the size of `long`.
+
+This imitates the `LONG_IS_64BIT` prerequisite.
+
+Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
+Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
+Signed-off-by: Junio C Hamano <gitster@pobox.com>
+
+Upstream-Status: Backport [https://github.com/git/git/commit/a244dc5b0a629290881641467c7a545de7508ab2]
+CVE: CVE-2022-41903
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ t/test-lib.sh | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+diff --git a/t/test-lib.sh b/t/test-lib.sh
+index e06fa02..db5ec2f 100644
+--- a/t/test-lib.sh
++++ b/t/test-lib.sh
+@@ -1613,6 +1613,10 @@ build_option () {
+ sed -ne "s/^$1: //p"
+ }
+
++test_lazy_prereq SIZE_T_IS_64BIT '
++ test 8 -eq "$(build_option sizeof-size_t)"
++'
++
+ test_lazy_prereq LONG_IS_64BIT '
+ test 8 -le "$(build_option sizeof-long)"
+ '
+--
+2.25.1
+
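
The prerequisite above gates the 2GB+ tests on the width of `size_t` (the real check reads `sizeof-size_t` from git's build options). A tiny C sketch of the equivalent condition, for illustration only:

    /* Illustrative only: the prerequisite boils down to "is size_t 8 bytes wide?". */
    #include <stddef.h>
    #include <stdio.h>

    int main(void)
    {
        printf("SIZE_T_IS_64BIT: %s\n", sizeof(size_t) == 8 ? "yes" : "no");
        return 0;
    }
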
diff --git a/poky/meta/recipes-devtools/git/files/CVE-2022-41903-02.patch b/poky/meta/recipes-devtools/git/files/CVE-2022-41903-02.patch
new file mode 100644
index 0000000000..f35e55b585
--- /dev/null
+++ b/poky/meta/recipes-devtools/git/files/CVE-2022-41903-02.patch
@@ -0,0 +1,187 @@
+From 81dc898df9b4b4035534a927f3234a3839b698bf Mon Sep 17 00:00:00 2001
+From: Patrick Steinhardt <ps@pks.im>
+Date: Thu, 1 Dec 2022 15:46:25 +0100
+Subject: [PATCH 02/12] pretty: fix out-of-bounds write caused by integer overflow
+
+When using a padding specifier in the pretty format passed to git-log(1)
+we need to calculate the string length in several places. These string
+lengths are stored in `int`s though, which means that these can easily
+overflow when the input lengths exceeds 2GB. This can ultimately lead to
+an out-of-bounds write when these are used in a call to memcpy(3P):
+
+ ==8340==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x7f1ec62f97fe at pc 0x7f2127e5f427 bp 0x7ffd3bd63de0 sp 0x7ffd3bd63588
+ WRITE of size 1 at 0x7f1ec62f97fe thread T0
+ #0 0x7f2127e5f426 in __interceptor_memcpy /usr/src/debug/gcc/libsanitizer/sanitizer_common/sanitizer_common_interceptors.inc:827
+ #1 0x5628e96aa605 in format_and_pad_commit pretty.c:1762
+ #2 0x5628e96aa7f4 in format_commit_item pretty.c:1801
+ #3 0x5628e97cdb24 in strbuf_expand strbuf.c:429
+ #4 0x5628e96ab060 in repo_format_commit_message pretty.c:1869
+ #5 0x5628e96acd0f in pretty_print_commit pretty.c:2161
+ #6 0x5628e95a44c8 in show_log log-tree.c:781
+ #7 0x5628e95a76ba in log_tree_commit log-tree.c:1117
+ #8 0x5628e922bed5 in cmd_log_walk_no_free builtin/log.c:508
+ #9 0x5628e922c35b in cmd_log_walk builtin/log.c:549
+ #10 0x5628e922f1a2 in cmd_log builtin/log.c:883
+ #11 0x5628e9106993 in run_builtin git.c:466
+ #12 0x5628e9107397 in handle_builtin git.c:721
+ #13 0x5628e9107b07 in run_argv git.c:788
+ #14 0x5628e91088a7 in cmd_main git.c:923
+ #15 0x5628e939d682 in main common-main.c:57
+ #16 0x7f2127c3c28f (/usr/lib/libc.so.6+0x2328f)
+ #17 0x7f2127c3c349 in __libc_start_main (/usr/lib/libc.so.6+0x23349)
+ #18 0x5628e91020e4 in _start ../sysdeps/x86_64/start.S:115
+
+ 0x7f1ec62f97fe is located 2 bytes to the left of 4831838265-byte region [0x7f1ec62f9800,0x7f1fe62f9839)
+ allocated by thread T0 here:
+ #0 0x7f2127ebe7ea in __interceptor_realloc /usr/src/debug/gcc/libsanitizer/asan/asan_malloc_linux.cpp:85
+ #1 0x5628e98774d4 in xrealloc wrapper.c:136
+ #2 0x5628e97cb01c in strbuf_grow strbuf.c:99
+ #3 0x5628e97ccd42 in strbuf_addchars strbuf.c:327
+ #4 0x5628e96aa55c in format_and_pad_commit pretty.c:1761
+ #5 0x5628e96aa7f4 in format_commit_item pretty.c:1801
+ #6 0x5628e97cdb24 in strbuf_expand strbuf.c:429
+ #7 0x5628e96ab060 in repo_format_commit_message pretty.c:1869
+ #8 0x5628e96acd0f in pretty_print_commit pretty.c:2161
+ #9 0x5628e95a44c8 in show_log log-tree.c:781
+ #10 0x5628e95a76ba in log_tree_commit log-tree.c:1117
+ #11 0x5628e922bed5 in cmd_log_walk_no_free builtin/log.c:508
+ #12 0x5628e922c35b in cmd_log_walk builtin/log.c:549
+ #13 0x5628e922f1a2 in cmd_log builtin/log.c:883
+ #14 0x5628e9106993 in run_builtin git.c:466
+ #15 0x5628e9107397 in handle_builtin git.c:721
+ #16 0x5628e9107b07 in run_argv git.c:788
+ #17 0x5628e91088a7 in cmd_main git.c:923
+ #18 0x5628e939d682 in main common-main.c:57
+ #19 0x7f2127c3c28f (/usr/lib/libc.so.6+0x2328f)
+ #20 0x7f2127c3c349 in __libc_start_main (/usr/lib/libc.so.6+0x23349)
+ #21 0x5628e91020e4 in _start ../sysdeps/x86_64/start.S:115
+
+ SUMMARY: AddressSanitizer: heap-buffer-overflow /usr/src/debug/gcc/libsanitizer/sanitizer_common/sanitizer_common_interceptors.inc:827 in __interceptor_memcpy
+ Shadow bytes around the buggy address:
+ 0x0fe458c572a0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
+ 0x0fe458c572b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
+ 0x0fe458c572c0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
+ 0x0fe458c572d0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
+ 0x0fe458c572e0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
+ =>0x0fe458c572f0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa[fa]
+ 0x0fe458c57300: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
+ 0x0fe458c57310: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
+ 0x0fe458c57320: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
+ 0x0fe458c57330: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
+ 0x0fe458c57340: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
+ Shadow byte legend (one shadow byte represents 8 application bytes):
+ Addressable: 00
+ Partially addressable: 01 02 03 04 05 06 07
+ Heap left redzone: fa
+ Freed heap region: fd
+ Stack left redzone: f1
+ Stack mid redzone: f2
+ Stack right redzone: f3
+ Stack after return: f5
+ Stack use after scope: f8
+ Global redzone: f9
+ Global init order: f6
+ Poisoned by user: f7
+ Container overflow: fc
+ Array cookie: ac
+ Intra object redzone: bb
+ ASan internal: fe
+ Left alloca redzone: ca
+ Right alloca redzone: cb
+ ==8340==ABORTING
+
+The pretty format can also be used in `git archive` operations via the
+`export-subst` attribute. In our opinion this is what makes this a
+critical issue in the context of Git forges that allow users to download
+an archive of user-supplied Git repositories.
+
+Fix this vulnerability by using `size_t` instead of `int` to track the
+string lengths. Add tests which detect this vulnerability when Git is
+compiled with the address sanitizer.
+
+Reported-by: Joern Schneeweisz <jschneeweisz@gitlab.com>
+Original-patch-by: Joern Schneeweisz <jschneeweisz@gitlab.com>
+Modified-by: Taylor Blau <me@ttaylorr.com>
+Signed-off-by: Patrick Steinhardt <ps@pks.im>
+Signed-off-by: Junio C Hamano <gitster@pobox.com>
+
+Upstream-Status: Backport [https://github.com/git/git/commit/81dc898df9b4b4035534a927f3234a3839b698bf]
+CVE: CVE-2022-41903
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ pretty.c | 11 ++++++-----
+ t/t4205-log-pretty-formats.sh | 17 +++++++++++++++++
+ 2 files changed, 23 insertions(+), 5 deletions(-)
+
+diff --git a/pretty.c b/pretty.c
+index b32f036..637e344 100644
+--- a/pretty.c
++++ b/pretty.c
+@@ -1427,7 +1427,9 @@ static size_t format_and_pad_commit(struct strbuf *sb, /* in UTF-8 */
+ struct format_commit_context *c)
+ {
+ struct strbuf local_sb = STRBUF_INIT;
+- int total_consumed = 0, len, padding = c->padding;
++ size_t total_consumed = 0;
++ int len, padding = c->padding;
++
+ if (padding < 0) {
+ const char *start = strrchr(sb->buf, '\n');
+ int occupied;
+@@ -1439,7 +1441,7 @@ static size_t format_and_pad_commit(struct strbuf *sb, /* in UTF-8 */
+ }
+ while (1) {
+ int modifier = *placeholder == 'C';
+- int consumed = format_commit_one(&local_sb, placeholder, c);
++ size_t consumed = format_commit_one(&local_sb, placeholder, c);
+ total_consumed += consumed;
+
+ if (!modifier)
+@@ -1505,7 +1507,7 @@ static size_t format_and_pad_commit(struct strbuf *sb, /* in UTF-8 */
+ }
+ strbuf_addbuf(sb, &local_sb);
+ } else {
+- int sb_len = sb->len, offset = 0;
++ size_t sb_len = sb->len, offset = 0;
+ if (c->flush_type == flush_left)
+ offset = padding - len;
+ else if (c->flush_type == flush_both)
+@@ -1528,8 +1530,7 @@ static size_t format_commit_item(struct strbuf *sb, /* in UTF-8 */
+ const char *placeholder,
+ void *context)
+ {
+- int consumed;
+- size_t orig_len;
++ size_t consumed, orig_len;
+ enum {
+ NO_MAGIC,
+ ADD_LF_BEFORE_NON_EMPTY,
+diff --git a/t/t4205-log-pretty-formats.sh b/t/t4205-log-pretty-formats.sh
+index f42a69f..a2acee1 100755
+--- a/t/t4205-log-pretty-formats.sh
++++ b/t/t4205-log-pretty-formats.sh
+@@ -788,4 +788,21 @@ test_expect_success '%S in git log --format works with other placeholders (part
+ test_cmp expect actual
+ '
+
++test_expect_success EXPENSIVE,SIZE_T_IS_64BIT 'log --pretty with huge commit message' '
++ # We only assert that this command does not crash. This needs to be
++ # executed with the address sanitizer to demonstrate failure.
++ git log -1 --pretty="format:%>(2147483646)%x41%41%>(2147483646)%x41" >/dev/null
++'
++
++test_expect_success EXPENSIVE,SIZE_T_IS_64BIT 'set up huge commit' '
++ test-tool genzeros 2147483649 | tr "\000" "1" >expect &&
++ huge_commit=$(git commit-tree -F expect HEAD^{tree})
++'
++
++test_expect_success EXPENSIVE,SIZE_T_IS_64BIT 'log --pretty with huge commit message' '
++ git log -1 --format="%B%<(1)%x30" $huge_commit >actual &&
++ echo 0 >>expect &&
++ test_cmp expect actual
++'
++
+ test_done
+--
+2.25.1
+
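
The fix above widens the length and offset bookkeeping in `format_and_pad_commit()` from `int` to `size_t`. A small C sketch of why the narrower type is dangerous (hypothetical values; it only prints the truncated arithmetic instead of performing the overflowing copy):

    /* Illustrative only: once a buffer grows past INT_MAX, an int copy of
     * its length is garbage, and so is any padding math derived from it. */
    #include <limits.h>
    #include <stddef.h>
    #include <stdio.h>

    int main(void)
    {
        size_t sb_len = (size_t)INT_MAX + 100;  /* pretend the expanded message is ~2GB */
        int padding = 16;

        int bad_len = (int)sb_len;              /* implementation-defined truncation */
        long long bad_fill = (long long)padding - bad_len;

        size_t good_fill = sb_len >= (size_t)padding ? 0 : (size_t)padding - sb_len;

        printf("int length: %d, fill derived from it: %lld\n", bad_len, bad_fill);
        printf("size_t length: %zu, fill derived from it: %zu\n", sb_len, good_fill);
        return 0;
    }
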
diff --git a/poky/meta/recipes-devtools/git/files/CVE-2022-41903-03.patch b/poky/meta/recipes-devtools/git/files/CVE-2022-41903-03.patch
new file mode 100644
index 0000000000..d83d77eaf7
--- /dev/null
+++ b/poky/meta/recipes-devtools/git/files/CVE-2022-41903-03.patch
@@ -0,0 +1,146 @@
+From b49f309aa16febeddb65e82526640a91bbba3be3 Mon Sep 17 00:00:00 2001
+From: Patrick Steinhardt <ps@pks.im>
+Date: Thu, 1 Dec 2022 15:46:30 +0100
+Subject: [PATCH 03/12] pretty: fix out-of-bounds read when left-flushing with stealing
+
+With the `%>>(<N>)` pretty formatter, you can ask git-log(1) et al to
+steal spaces. To do so we need to look ahead of the next token to see
+whether there are spaces there. This loop takes into account ANSI
+sequences that end with an `m`, and if it finds any it will skip them
+until it finds the first space. While doing so it does not take into
+account the buffer's limits though and easily does an out-of-bounds
+read.
+
+Add a test that hits this behaviour. While we don't have an easy way to
+verify this, the test causes the following failure when run with
+`SANITIZE=address`:
+
+ ==37941==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x603000000baf at pc 0x55ba6f88e0d0 bp 0x7ffc84c50d20 sp 0x7ffc84c50d10
+ READ of size 1 at 0x603000000baf thread T0
+ #0 0x55ba6f88e0cf in format_and_pad_commit pretty.c:1712
+ #1 0x55ba6f88e7b4 in format_commit_item pretty.c:1801
+ #2 0x55ba6f9b1ae4 in strbuf_expand strbuf.c:429
+ #3 0x55ba6f88f020 in repo_format_commit_message pretty.c:1869
+ #4 0x55ba6f890ccf in pretty_print_commit pretty.c:2161
+ #5 0x55ba6f7884c8 in show_log log-tree.c:781
+ #6 0x55ba6f78b6ba in log_tree_commit log-tree.c:1117
+ #7 0x55ba6f40fed5 in cmd_log_walk_no_free builtin/log.c:508
+ #8 0x55ba6f41035b in cmd_log_walk builtin/log.c:549
+ #9 0x55ba6f4131a2 in cmd_log builtin/log.c:883
+ #10 0x55ba6f2ea993 in run_builtin git.c:466
+ #11 0x55ba6f2eb397 in handle_builtin git.c:721
+ #12 0x55ba6f2ebb07 in run_argv git.c:788
+ #13 0x55ba6f2ec8a7 in cmd_main git.c:923
+ #14 0x55ba6f581682 in main common-main.c:57
+ #15 0x7f2d08c3c28f (/usr/lib/libc.so.6+0x2328f)
+ #16 0x7f2d08c3c349 in __libc_start_main (/usr/lib/libc.so.6+0x23349)
+ #17 0x55ba6f2e60e4 in _start ../sysdeps/x86_64/start.S:115
+
+ 0x603000000baf is located 1 bytes to the left of 24-byte region [0x603000000bb0,0x603000000bc8)
+ allocated by thread T0 here:
+ #0 0x7f2d08ebe7ea in __interceptor_realloc /usr/src/debug/gcc/libsanitizer/asan/asan_malloc_linux.cpp:85
+ #1 0x55ba6fa5b494 in xrealloc wrapper.c:136
+ #2 0x55ba6f9aefdc in strbuf_grow strbuf.c:99
+ #3 0x55ba6f9b0a06 in strbuf_add strbuf.c:298
+ #4 0x55ba6f9b1a25 in strbuf_expand strbuf.c:418
+ #5 0x55ba6f88f020 in repo_format_commit_message pretty.c:1869
+ #6 0x55ba6f890ccf in pretty_print_commit pretty.c:2161
+ #7 0x55ba6f7884c8 in show_log log-tree.c:781
+ #8 0x55ba6f78b6ba in log_tree_commit log-tree.c:1117
+ #9 0x55ba6f40fed5 in cmd_log_walk_no_free builtin/log.c:508
+ #10 0x55ba6f41035b in cmd_log_walk builtin/log.c:549
+ #11 0x55ba6f4131a2 in cmd_log builtin/log.c:883
+ #12 0x55ba6f2ea993 in run_builtin git.c:466
+ #13 0x55ba6f2eb397 in handle_builtin git.c:721
+ #14 0x55ba6f2ebb07 in run_argv git.c:788
+ #15 0x55ba6f2ec8a7 in cmd_main git.c:923
+ #16 0x55ba6f581682 in main common-main.c:57
+ #17 0x7f2d08c3c28f (/usr/lib/libc.so.6+0x2328f)
+ #18 0x7f2d08c3c349 in __libc_start_main (/usr/lib/libc.so.6+0x23349)
+ #19 0x55ba6f2e60e4 in _start ../sysdeps/x86_64/start.S:115
+
+ SUMMARY: AddressSanitizer: heap-buffer-overflow pretty.c:1712 in format_and_pad_commit
+ Shadow bytes around the buggy address:
+ 0x0c067fff8120: fa fa fd fd fd fa fa fa fd fd fd fa fa fa fd fd
+ 0x0c067fff8130: fd fd fa fa fd fd fd fd fa fa fd fd fd fa fa fa
+ 0x0c067fff8140: fd fd fd fa fa fa fd fd fd fa fa fa fd fd fd fa
+ 0x0c067fff8150: fa fa fd fd fd fd fa fa 00 00 00 fa fa fa fd fd
+ 0x0c067fff8160: fd fa fa fa fd fd fd fa fa fa fd fd fd fa fa fa
+ =>0x0c067fff8170: fd fd fd fa fa[fa]00 00 00 fa fa fa 00 00 00 fa
+ 0x0c067fff8180: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
+ 0x0c067fff8190: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
+ 0x0c067fff81a0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
+ 0x0c067fff81b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
+ 0x0c067fff81c0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
+ Shadow byte legend (one shadow byte represents 8 application bytes):
+ Addressable: 00
+ Partially addressable: 01 02 03 04 05 06 07
+ Heap left redzone: fa
+ Freed heap region: fd
+ Stack left redzone: f1
+ Stack mid redzone: f2
+ Stack right redzone: f3
+ Stack after return: f5
+ Stack use after scope: f8
+ Global redzone: f9
+ Global init order: f6
+ Poisoned by user: f7
+ Container overflow: fc
+ Array cookie: ac
+ Intra object redzone: bb
+ ASan internal: fe
+ Left alloca redzone: ca
+ Right alloca redzone: cb
+
+Luckily enough, this would only cause us to copy the out-of-bounds data
+into the formatted commit in case we really had an ANSI sequence
+preceding our buffer. So this bug likely has no security consequences.
+
+Fix it regardless by not traversing past the buffer's start.
+
+Reported-by: Patrick Steinhardt <ps@pks.im>
+Reported-by: Eric Sesterhenn <eric.sesterhenn@x41-dsec.de>
+Signed-off-by: Patrick Steinhardt <ps@pks.im>
+Signed-off-by: Junio C Hamano <gitster@pobox.com>
+
+Upstream-Status: Backport [https://github.com/git/git/commit/b49f309aa16febeddb65e82526640a91bbba3be3]
+CVE: CVE-2022-41903
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ pretty.c | 2 +-
+ t/t4205-log-pretty-formats.sh | 6 ++++++
+ 2 files changed, 7 insertions(+), 1 deletion(-)
+
+diff --git a/pretty.c b/pretty.c
+index 637e344..4348a82 100644
+--- a/pretty.c
++++ b/pretty.c
+@@ -1468,7 +1468,7 @@ static size_t format_and_pad_commit(struct strbuf *sb, /* in UTF-8 */
+ if (*ch != 'm')
+ break;
+ p = ch - 1;
+- while (ch - p < 10 && *p != '\033')
++ while (p > sb->buf && ch - p < 10 && *p != '\033')
+ p--;
+ if (*p != '\033' ||
+ ch + 1 - p != display_mode_esc_sequence_len(p))
+diff --git a/t/t4205-log-pretty-formats.sh b/t/t4205-log-pretty-formats.sh
+index a2acee1..e69caba 100755
+--- a/t/t4205-log-pretty-formats.sh
++++ b/t/t4205-log-pretty-formats.sh
+@@ -788,6 +788,12 @@ test_expect_success '%S in git log --format works with other placeholders (part
+ test_cmp expect actual
+ '
+
++test_expect_success 'log --pretty with space stealing' '
++ printf mm0 >expect &&
++ git log -1 --pretty="format:mm%>>|(1)%x30" >actual &&
++ test_cmp expect actual
++'
++
+ test_expect_success EXPENSIVE,SIZE_T_IS_64BIT 'log --pretty with huge commit message' '
+ # We only assert that this command does not crash. This needs to be
+ # executed with the address sanitizer to demonstrate failure.
+--
+2.25.1
+
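
The one-line change above bounds the backward scan for the ANSI escape at the start of the buffer. A minimal sketch of the bounded scan (hypothetical `find_esc` helper, not the function from pretty.c):

    /* Illustrative only: scan backwards for the ESC that opens an ANSI
     * sequence, but never step in front of the buffer. */
    #include <stdio.h>

    static const char *find_esc(const char *buf, const char *ch)
    {
        const char *p;

        if (ch <= buf)
            return NULL;
        p = ch - 1;
        while (p > buf && ch - p < 10 && *p != '\033')
            p--;                                /* the "p > buf" check is the fix */
        return *p == '\033' ? p : NULL;
    }

    int main(void)
    {
        const char text[] = "abc\033[1m";
        const char *esc = find_esc(text, text + sizeof(text) - 2); /* the trailing 'm' */

        printf("escape found at offset %ld\n", esc ? (long)(esc - text) : -1L);
        return 0;
    }
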
diff --git a/poky/meta/recipes-devtools/git/files/CVE-2022-41903-04.patch b/poky/meta/recipes-devtools/git/files/CVE-2022-41903-04.patch
new file mode 100644
index 0000000000..9e3c74ff67
--- /dev/null
+++ b/poky/meta/recipes-devtools/git/files/CVE-2022-41903-04.patch
@@ -0,0 +1,150 @@
+From f6e0b9f38987ad5e47bab551f8760b70689a5905 Mon Sep 17 00:00:00 2001
+From: Patrick Steinhardt <ps@pks.im>
+Date: Thu, 1 Dec 2022 15:46:34 +0100
+Subject: [PATCH 04/12] pretty: fix out-of-bounds read when parsing invalid padding format
+
+An out-of-bounds read can be triggered when parsing an incomplete
+padding format string passed via `--pretty=format` or in Git archives
+when files are marked with the `export-subst` gitattribute.
+
+This bug exists since we have introduced support for truncating output
+via the `trunc` keyword a7f01c6 (pretty: support truncating in %>, %<
+and %><, 2013-04-19). Before this commit, we used to find the end of the
+formatting string by using strchr(3P). This function returns a `NULL`
+pointer in case the character in question wasn't found. The subsequent
+check whether any character was found thus simply checked the returned
+pointer. After the commit we switched to strcspn(3P) though, which only
+returns the offset to the first found character or to the trailing NUL
+byte. As the end pointer is now computed by adding the offset to the
+start pointer it won't be `NULL` anymore, and as a consequence the check
+doesn't do anything anymore.
+
+The out-of-bounds data that is being read can in fact end up in the
+formatted string. As a consequence, it is possible to leak memory
+contents either by calling git-log(1) or via git-archive(1) when any of
+the archived files is marked with the `export-subst` gitattribute.
+
+ ==10888==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x602000000398 at pc 0x7f0356047cb2 bp 0x7fff3ffb95d0 sp 0x7fff3ffb8d78
+ READ of size 1 at 0x602000000398 thread T0
+ #0 0x7f0356047cb1 in __interceptor_strchrnul /usr/src/debug/gcc/libsanitizer/sanitizer_common/sanitizer_common_interceptors.inc:725
+ #1 0x563b7cec9a43 in strbuf_expand strbuf.c:417
+ #2 0x563b7cda7060 in repo_format_commit_message pretty.c:1869
+ #3 0x563b7cda8d0f in pretty_print_commit pretty.c:2161
+ #4 0x563b7cca04c8 in show_log log-tree.c:781
+ #5 0x563b7cca36ba in log_tree_commit log-tree.c:1117
+ #6 0x563b7c927ed5 in cmd_log_walk_no_free builtin/log.c:508
+ #7 0x563b7c92835b in cmd_log_walk builtin/log.c:549
+ #8 0x563b7c92b1a2 in cmd_log builtin/log.c:883
+ #9 0x563b7c802993 in run_builtin git.c:466
+ #10 0x563b7c803397 in handle_builtin git.c:721
+ #11 0x563b7c803b07 in run_argv git.c:788
+ #12 0x563b7c8048a7 in cmd_main git.c:923
+ #13 0x563b7ca99682 in main common-main.c:57
+ #14 0x7f0355e3c28f (/usr/lib/libc.so.6+0x2328f)
+ #15 0x7f0355e3c349 in __libc_start_main (/usr/lib/libc.so.6+0x23349)
+ #16 0x563b7c7fe0e4 in _start ../sysdeps/x86_64/start.S:115
+
+ 0x602000000398 is located 0 bytes to the right of 8-byte region [0x602000000390,0x602000000398)
+ allocated by thread T0 here:
+ #0 0x7f0356072faa in __interceptor_strdup /usr/src/debug/gcc/libsanitizer/asan/asan_interceptors.cpp:439
+ #1 0x563b7cf7317c in xstrdup wrapper.c:39
+ #2 0x563b7cd9a06a in save_user_format pretty.c:40
+ #3 0x563b7cd9b3e5 in get_commit_format pretty.c:173
+ #4 0x563b7ce54ea0 in handle_revision_opt revision.c:2456
+ #5 0x563b7ce597c9 in setup_revisions revision.c:2850
+ #6 0x563b7c9269e0 in cmd_log_init_finish builtin/log.c:269
+ #7 0x563b7c927362 in cmd_log_init builtin/log.c:348
+ #8 0x563b7c92b193 in cmd_log builtin/log.c:882
+ #9 0x563b7c802993 in run_builtin git.c:466
+ #10 0x563b7c803397 in handle_builtin git.c:721
+ #11 0x563b7c803b07 in run_argv git.c:788
+ #12 0x563b7c8048a7 in cmd_main git.c:923
+ #13 0x563b7ca99682 in main common-main.c:57
+ #14 0x7f0355e3c28f (/usr/lib/libc.so.6+0x2328f)
+ #15 0x7f0355e3c349 in __libc_start_main (/usr/lib/libc.so.6+0x23349)
+ #16 0x563b7c7fe0e4 in _start ../sysdeps/x86_64/start.S:115
+
+ SUMMARY: AddressSanitizer: heap-buffer-overflow /usr/src/debug/gcc/libsanitizer/sanitizer_common/sanitizer_common_interceptors.inc:725 in __interceptor_strchrnul
+ Shadow bytes around the buggy address:
+ 0x0c047fff8020: fa fa fd fd fa fa 00 06 fa fa 05 fa fa fa fd fd
+ 0x0c047fff8030: fa fa 00 02 fa fa 06 fa fa fa 05 fa fa fa fd fd
+ 0x0c047fff8040: fa fa 00 07 fa fa 03 fa fa fa fd fd fa fa 00 00
+ 0x0c047fff8050: fa fa 00 01 fa fa fd fd fa fa 00 00 fa fa 00 01
+ 0x0c047fff8060: fa fa 00 06 fa fa 00 06 fa fa 05 fa fa fa 05 fa
+ =>0x0c047fff8070: fa fa 00[fa]fa fa fd fa fa fa fd fd fa fa fd fd
+ 0x0c047fff8080: fa fa fd fd fa fa 00 00 fa fa 00 fa fa fa fd fa
+ 0x0c047fff8090: fa fa fd fd fa fa 00 00 fa fa fa fa fa fa fa fa
+ 0x0c047fff80a0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
+ 0x0c047fff80b0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
+ 0x0c047fff80c0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
+ Shadow byte legend (one shadow byte represents 8 application bytes):
+ Addressable: 00
+ Partially addressable: 01 02 03 04 05 06 07
+ Heap left redzone: fa
+ Freed heap region: fd
+ Stack left redzone: f1
+ Stack mid redzone: f2
+ Stack right redzone: f3
+ Stack after return: f5
+ Stack use after scope: f8
+ Global redzone: f9
+ Global init order: f6
+ Poisoned by user: f7
+ Container overflow: fc
+ Array cookie: ac
+ Intra object redzone: bb
+ ASan internal: fe
+ Left alloca redzone: ca
+ Right alloca redzone: cb
+ ==10888==ABORTING
+
+Fix this bug by checking whether `end` points at the trailing NUL byte.
+Add a test which catches this out-of-bounds read and which demonstrates
+that we used to write out-of-bounds data into the formatted message.
+
+Reported-by: Markus Vervier <markus.vervier@x41-dsec.de>
+Original-patch-by: Markus Vervier <markus.vervier@x41-dsec.de>
+Signed-off-by: Patrick Steinhardt <ps@pks.im>
+Signed-off-by: Junio C Hamano <gitster@pobox.com>
+
+Upstream-Status: Backport [https://github.com/git/git/commit/f6e0b9f38987ad5e47bab551f8760b70689a5905]
+CVE: CVE-2022-41903
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ pretty.c | 2 +-
+ t/t4205-log-pretty-formats.sh | 6 ++++++
+ 2 files changed, 7 insertions(+), 1 deletion(-)
+
+diff --git a/pretty.c b/pretty.c
+index 4348a82..c49e818 100644
+--- a/pretty.c
++++ b/pretty.c
+@@ -1024,7 +1024,7 @@ static size_t parse_padding_placeholder(const char *placeholder,
+ const char *end = start + strcspn(start, ",)");
+ char *next;
+ int width;
+- if (!end || end == start)
++ if (!*end || end == start)
+ return 0;
+ width = strtol(start, &next, 10);
+ if (next == start || width == 0)
+diff --git a/t/t4205-log-pretty-formats.sh b/t/t4205-log-pretty-formats.sh
+index e69caba..8a349df 100755
+--- a/t/t4205-log-pretty-formats.sh
++++ b/t/t4205-log-pretty-formats.sh
+@@ -794,6 +794,12 @@ test_expect_success 'log --pretty with space stealing' '
+ test_cmp expect actual
+ '
+
++test_expect_success 'log --pretty with invalid padding format' '
++ printf "%s%%<(20" "$(git rev-parse HEAD)" >expect &&
++ git log -1 --pretty="format:%H%<(20" >actual &&
++ test_cmp expect actual
++'
++
+ test_expect_success EXPENSIVE,SIZE_T_IS_64BIT 'log --pretty with huge commit message' '
+ # We only assert that this command does not crash. This needs to be
+ # executed with the address sanitizer to demonstrate failure.
+--
+2.25.1
+
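
The parsing fix above hinges on the difference between strchr(3), which returns NULL when the character is missing, and strcspn(3), which returns an offset to the trailing NUL. A small sketch of the corrected check (hypothetical `parse_width` helper):

    /* Illustrative only: with strcspn(3) the "not found" case is a pointer
     * to the trailing NUL, not a NULL pointer, so that is what must be checked. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static long parse_width(const char *start)
    {
        const char *end = start + strcspn(start, ",)");
        char *next;
        long width;

        if (!*end || end == start)              /* no "," or ")" at all, or nothing before it */
            return 0;
        width = strtol(start, &next, 10);
        if (next == start || width == 0)
            return 0;
        return width;
    }

    int main(void)
    {
        printf("%ld\n", parse_width("20)"));    /* 20: well-formed */
        printf("%ld\n", parse_width("20"));     /* 0: truncated format is rejected */
        return 0;
    }
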
diff --git a/poky/meta/recipes-devtools/git/files/CVE-2022-41903-05.patch b/poky/meta/recipes-devtools/git/files/CVE-2022-41903-05.patch
new file mode 100644
index 0000000000..994f7a55b1
--- /dev/null
+++ b/poky/meta/recipes-devtools/git/files/CVE-2022-41903-05.patch
@@ -0,0 +1,98 @@
+From 1de69c0cdd388b0a5b7bdde0bfa0bda514a354b0 Mon Sep 17 00:00:00 2001
+From: Patrick Steinhardt <ps@pks.im>
+Date: Thu, 1 Dec 2022 15:46:39 +0100
+Subject: [PATCH 05/12] pretty: fix adding linefeed when placeholder is not expanded
+
+When a formatting directive has a `+` or ` ` after the `%`, then we add
+either a line feed or space if the placeholder expands to a non-empty
+string. In specific cases though this logic doesn't work as expected,
+and we try to add the character even in the case where the formatting
+directive is empty.
+
+One such pattern is `%w(1)%+d%+w(2)`. `%+d` expands to reference names
+pointing to a certain commit, like in `git log --decorate`. For a tagged
+commit this would for example expand to `\n (tag: v1.0.0)`, which has a
+leading newline due to the `+` modifier and a space added by `%d`. Now
+the second wrapping directive will cause us to rewrap the text to
+`\n(tag:\nv1.0.0)`, which is one byte shorter due to the missing leading
+space. The code that handles the `+` magic now notices that the length
+has changed and will thus try to insert a leading line feed at the
+original position. But as the string was shortened, the original
+position is past the buffer's boundary and thus we die with an error.
+
+Now there are two issues here:
+
+ 1. We check whether the buffer length has changed, not whether it
+ has been extended. This causes us to try and add the character
+ past the string boundary.
+
+ 2. The current logic does not make any sense whatsoever. When the
+ string got expanded due to the rewrap, putting the separator into
+ the original position is likely to put it somewhere into the
+ middle of the rewrapped contents.
+
+It is debatable whether `%+w()` makes any sense in the first place.
+Strictly speaking, the placeholder never expands to a non-empty string,
+and consequently we shouldn't ever accept this combination. We thus
+fix the bug by simply refusing `%+w()`.
+
+Signed-off-by: Patrick Steinhardt <ps@pks.im>
+Signed-off-by: Junio C Hamano <gitster@pobox.com>
+
+Upstream-Status: Backport [https://github.com/git/git/commit/1de69c0cdd388b0a5b7bdde0bfa0bda514a354b0]
+CVE: CVE-2022-41903
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ pretty.c | 14 +++++++++++++-
+ t/t4205-log-pretty-formats.sh | 8 ++++++++
+ 2 files changed, 21 insertions(+), 1 deletion(-)
+
+diff --git a/pretty.c b/pretty.c
+index c49e818..195d005 100644
+--- a/pretty.c
++++ b/pretty.c
+@@ -1551,9 +1551,21 @@ static size_t format_commit_item(struct strbuf *sb, /* in UTF-8 */
+ default:
+ break;
+ }
+- if (magic != NO_MAGIC)
++ if (magic != NO_MAGIC) {
+ placeholder++;
+
++ switch (placeholder[0]) {
++ case 'w':
++ /*
++ * `%+w()` cannot ever expand to a non-empty string,
++ * and it potentially changes the layout of preceding
++ * contents. We're thus not able to handle the magic in
++ * this combination and refuse the pattern.
++ */
++ return 0;
++ };
++ }
++
+ orig_len = sb->len;
+ if (((struct format_commit_context *)context)->flush_type != no_flush)
+ consumed = format_and_pad_commit(sb, placeholder, context);
+diff --git a/t/t4205-log-pretty-formats.sh b/t/t4205-log-pretty-formats.sh
+index 8a349df..fa1bc2b 100755
+--- a/t/t4205-log-pretty-formats.sh
++++ b/t/t4205-log-pretty-formats.sh
+@@ -800,6 +800,14 @@ test_expect_success 'log --pretty with invalid padding format' '
+ test_cmp expect actual
+ '
+
++test_expect_success 'log --pretty with magical wrapping directives' '
++ commit_id=$(git commit-tree HEAD^{tree} -m "describe me") &&
++ git tag describe-me $commit_id &&
++ printf "\n(tag:\ndescribe-me)%%+w(2)" >expect &&
++ git log -1 --pretty="format:%w(1)%+d%+w(2)" $commit_id >actual &&
++ test_cmp expect actual
++'
++
+ test_expect_success EXPENSIVE,SIZE_T_IS_64BIT 'log --pretty with huge commit message' '
+ # We only assert that this command does not crash. This needs to be
+ # executed with the address sanitizer to demonstrate failure.
+--
+2.25.1
+
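
The patch above refuses `%+w()` because an offset remembered before the rewrap can point past the end of the shortened buffer. A minimal sketch of that hazard (hypothetical buffer and values):

    /* Illustrative only: an offset saved before the buffer is rewritten can
     * end up past the new end once the rewrite shortens the contents. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char buf[64] = " (tag: v1.0.0)";
        size_t saved = strlen(buf);             /* end of the original expansion */

        strcpy(buf, "(tag:v1.0.0)");            /* a later rewrap shortens the text */

        if (saved > strlen(buf))
            printf("stale offset %zu is past the new length %zu; no separator inserted\n",
                   saved, strlen(buf));
        else
            printf("offset %zu is still within the %zu-byte string\n",
                   saved, strlen(buf));
        return 0;
    }
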
diff --git a/poky/meta/recipes-devtools/git/files/CVE-2022-41903-06.patch b/poky/meta/recipes-devtools/git/files/CVE-2022-41903-06.patch
new file mode 100644
index 0000000000..93fbe5c7fe
--- /dev/null
+++ b/poky/meta/recipes-devtools/git/files/CVE-2022-41903-06.patch
@@ -0,0 +1,90 @@
+From 48050c42c73c28b0c001d63d11dffac7e116847b Mon Sep 17 00:00:00 2001
+From: Patrick Steinhardt <ps@pks.im>
+Date: Thu, 1 Dec 2022 15:46:49 +0100
+Subject: [PATCH 06/12] pretty: fix integer overflow in wrapping format
+
+The `%w(width,indent1,indent2)` formatting directive can be used to
+rewrap text to a specific width and is designed after git-shortlog(1)'s
+`-w` parameter. While the three parameters are all stored as `size_t`
+internally, `strbuf_add_wrapped_text()` accepts integers as input. As a
+result, the casted integers may overflow. As these now-negative integers
+are later on passed to `strbuf_addchars()`, we will ultimately run into
+implementation-defined behaviour due to casting a negative number back
+to `size_t` again. On my platform, this results in trying to allocate
+9000 petabytes of memory.
+
+Fix this overflow by using `cast_size_t_to_int()` so that we reject
+inputs that cannot be represented as an integer.
+
+Signed-off-by: Patrick Steinhardt <ps@pks.im>
+Signed-off-by: Junio C Hamano <gitster@pobox.com>
+
+Upstream-Status: Backport [https://github.com/git/git/commit/48050c42c73c28b0c001d63d11dffac7e116847b]
+CVE: CVE-2022-41903
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ git-compat-util.h | 8 ++++++++
+ pretty.c | 4 +++-
+ t/t4205-log-pretty-formats.sh | 12 ++++++++++++
+ 3 files changed, 23 insertions(+), 1 deletion(-)
+
+diff --git a/git-compat-util.h b/git-compat-util.h
+index a1ecfd3..b0f3890 100644
+--- a/git-compat-util.h
++++ b/git-compat-util.h
+@@ -854,6 +854,14 @@ static inline size_t st_sub(size_t a, size_t b)
+ return a - b;
+ }
+
++static inline int cast_size_t_to_int(size_t a)
++{
++ if (a > INT_MAX)
++ die("number too large to represent as int on this platform: %"PRIuMAX,
++ (uintmax_t)a);
++ return (int)a;
++}
++
+ #ifdef HAVE_ALLOCA_H
+ # include <alloca.h>
+ # define xalloca(size) (alloca(size))
+diff --git a/pretty.c b/pretty.c
+index 195d005..ff9fc97 100644
+--- a/pretty.c
++++ b/pretty.c
+@@ -898,7 +898,9 @@ static void strbuf_wrap(struct strbuf *sb, size_t pos,
+ if (pos)
+ strbuf_add(&tmp, sb->buf, pos);
+ strbuf_add_wrapped_text(&tmp, sb->buf + pos,
+- (int) indent1, (int) indent2, (int) width);
++ cast_size_t_to_int(indent1),
++ cast_size_t_to_int(indent2),
++ cast_size_t_to_int(width));
+ strbuf_swap(&tmp, sb);
+ strbuf_release(&tmp);
+ }
+diff --git a/t/t4205-log-pretty-formats.sh b/t/t4205-log-pretty-formats.sh
+index fa1bc2b..23ac508 100755
+--- a/t/t4205-log-pretty-formats.sh
++++ b/t/t4205-log-pretty-formats.sh
+@@ -808,6 +808,18 @@ test_expect_success 'log --pretty with magical wrapping directives' '
+ test_cmp expect actual
+ '
+
++test_expect_success SIZE_T_IS_64BIT 'log --pretty with overflowing wrapping directive' '
++ cat >expect <<-EOF &&
++ fatal: number too large to represent as int on this platform: 2147483649
++ EOF
++ test_must_fail git log -1 --pretty="format:%w(2147483649,1,1)%d" 2>error &&
++ test_cmp expect error &&
++ test_must_fail git log -1 --pretty="format:%w(1,2147483649,1)%d" 2>error &&
++ test_cmp expect error &&
++ test_must_fail git log -1 --pretty="format:%w(1,1,2147483649)%d" 2>error &&
++ test_cmp expect error
++'
++
+ test_expect_success EXPENSIVE,SIZE_T_IS_64BIT 'log --pretty with huge commit message' '
+ # We only assert that this command does not crash. This needs to be
+ # executed with the address sanitizer to demonstrate failure.
+--
+2.25.1
+
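The guard added by this backport follows a common "reject instead of truncate" pattern: a size_t wider than INT_MAX is refused up front rather than silently narrowed into a negative int. A minimal, self-contained sketch of that pattern follows; checked_size_to_int() is a made-up name standing in for git's cast_size_t_to_int(), and the exit code is arbitrary.

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

/* Refuse, rather than silently truncate, values an int cannot hold. */
static int checked_size_to_int(size_t n)
{
    if (n > INT_MAX) {
        fprintf(stderr, "value too large for int: %zu\n", n);
        exit(128);
    }
    return (int)n;
}

int main(void)
{
    printf("%d\n", checked_size_to_int(4096));                 /* ok   */
    printf("%d\n", checked_size_to_int((size_t)INT_MAX + 1));  /* dies */
    return 0;
}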
diff --git a/poky/meta/recipes-devtools/git/files/CVE-2022-41903-07.patch b/poky/meta/recipes-devtools/git/files/CVE-2022-41903-07.patch
new file mode 100644
index 0000000000..ec248ad6c2
--- /dev/null
+++ b/poky/meta/recipes-devtools/git/files/CVE-2022-41903-07.patch
@@ -0,0 +1,123 @@
+From 522cc87fdc25449222a5894a428eebf4b8d5eaa9 Mon Sep 17 00:00:00 2001
+From: Patrick Steinhardt <ps@pks.im>
+Date: Thu, 1 Dec 2022 15:46:53 +0100
+Subject: [PATCH 07/12] utf8: fix truncated string lengths in utf8_strnwidth()
+
+The `utf8_strnwidth()` function accepts an optional string length as
+input parameter. This parameter can either be set to `-1`, in which case
+we call `strlen()` on the input. Or it can be set to a positive integer
+that indicates a precomputed length, which callers typically compute by
+calling `strlen()` at some point themselves.
+
+The input parameter is an `int`, though, whereas `strlen()` returns a
+`size_t`. This can lead to implementation-defined behaviour when the
+`size_t` cannot be represented by the `int`. In the general case this
+leads to wrap-around and thus to negative string sizes, which is
+certainly not going to result in well-defined behaviour.
+
+Fix this by accepting a `size_t` instead of an `int` as string length.
+While this takes away the ability of callers to simply pass in `-1` as
+the string length, it is trivial enough to convert them to pass in
+`strlen()` instead.
+
+Signed-off-by: Patrick Steinhardt <ps@pks.im>
+Signed-off-by: Junio C Hamano <gitster@pobox.com>
+
+Upstream-Status: Backport [https://github.com/git/git/commit/522cc87fdc25449222a5894a428eebf4b8d5eaa9]
+CVE: CVE-2022-41903
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ column.c | 2 +-
+ pretty.c | 4 ++--
+ utf8.c | 8 +++-----
+ utf8.h | 2 +-
+ 4 files changed, 7 insertions(+), 9 deletions(-)
+
+diff --git a/column.c b/column.c
+index 4a38eed..0c79850 100644
+--- a/column.c
++++ b/column.c
+@@ -23,7 +23,7 @@ struct column_data {
+ /* return length of 's' in letters, ANSI escapes stripped */
+ static int item_length(const char *s)
+ {
+- return utf8_strnwidth(s, -1, 1);
++ return utf8_strnwidth(s, strlen(s), 1);
+ }
+
+ /*
+diff --git a/pretty.c b/pretty.c
+index ff9fc97..c3c1443 100644
+--- a/pretty.c
++++ b/pretty.c
+@@ -1437,7 +1437,7 @@ static size_t format_and_pad_commit(struct strbuf *sb, /* in UTF-8 */
+ int occupied;
+ if (!start)
+ start = sb->buf;
+- occupied = utf8_strnwidth(start, -1, 1);
++ occupied = utf8_strnwidth(start, strlen(start), 1);
+ occupied += c->pretty_ctx->graph_width;
+ padding = (-padding) - occupied;
+ }
+@@ -1455,7 +1455,7 @@ static size_t format_and_pad_commit(struct strbuf *sb, /* in UTF-8 */
+ placeholder++;
+ total_consumed++;
+ }
+- len = utf8_strnwidth(local_sb.buf, -1, 1);
++ len = utf8_strnwidth(local_sb.buf, local_sb.len, 1);
+
+ if (c->flush_type == flush_left_and_steal) {
+ const char *ch = sb->buf + sb->len - 1;
+diff --git a/utf8.c b/utf8.c
+index 5c8f151..a66984b 100644
+--- a/utf8.c
++++ b/utf8.c
+@@ -206,13 +206,11 @@ int utf8_width(const char **start, size_t *remainder_p)
+ * string, assuming that the string is utf8. Returns strlen() instead
+ * if the string does not look like a valid utf8 string.
+ */
+-int utf8_strnwidth(const char *string, int len, int skip_ansi)
++int utf8_strnwidth(const char *string, size_t len, int skip_ansi)
+ {
+ int width = 0;
+ const char *orig = string;
+
+- if (len == -1)
+- len = strlen(string);
+ while (string && string < orig + len) {
+ int skip;
+ while (skip_ansi &&
+@@ -225,7 +223,7 @@ int utf8_strnwidth(const char *string, int len, int skip_ansi)
+
+ int utf8_strwidth(const char *string)
+ {
+- return utf8_strnwidth(string, -1, 0);
++ return utf8_strnwidth(string, strlen(string), 0);
+ }
+
+ int is_utf8(const char *text)
+@@ -792,7 +790,7 @@ int skip_utf8_bom(char **text, size_t len)
+ void strbuf_utf8_align(struct strbuf *buf, align_type position, unsigned int width,
+ const char *s)
+ {
+- int slen = strlen(s);
++ size_t slen = strlen(s);
+ int display_len = utf8_strnwidth(s, slen, 0);
+ int utf8_compensation = slen - display_len;
+
+diff --git a/utf8.h b/utf8.h
+index fcd5167..6da1b6d 100644
+--- a/utf8.h
++++ b/utf8.h
+@@ -7,7 +7,7 @@ typedef unsigned int ucs_char_t; /* assuming 32bit int */
+
+ size_t display_mode_esc_sequence_len(const char *s);
+ int utf8_width(const char **start, size_t *remainder_p);
+-int utf8_strnwidth(const char *string, int len, int skip_ansi);
++int utf8_strnwidth(const char *string, size_t len, int skip_ansi);
+ int utf8_strwidth(const char *string);
+ int is_utf8(const char *text);
+ int is_encoding_utf8(const char *name);
+--
+2.25.1
+
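The interface change above boils down to replacing a signed length with a -1 sentinel by an explicit size_t length supplied by the caller. A small sketch of the resulting calling convention, using a toy count_bytes() helper rather than utf8_strnwidth():

#include <stdio.h>
#include <string.h>

/* The caller always passes an explicit length; no -1 sentinel exists. */
static size_t count_bytes(const char *s, size_t len)
{
    size_t n = 0;

    while (n < len && s[n])
        n++;
    return n;
}

int main(void)
{
    const char *msg = "hello";

    /* The old style would have been count_bytes(msg, -1); now the
     * caller computes the length itself, so there is no int/size_t
     * mismatch to overflow. */
    printf("%zu\n", count_bytes(msg, strlen(msg)));
    return 0;
}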
diff --git a/poky/meta/recipes-devtools/git/files/CVE-2022-41903-08.patch b/poky/meta/recipes-devtools/git/files/CVE-2022-41903-08.patch
new file mode 100644
index 0000000000..3de6a5ba6a
--- /dev/null
+++ b/poky/meta/recipes-devtools/git/files/CVE-2022-41903-08.patch
@@ -0,0 +1,67 @@
+From 17d23e8a3812a5ca3dd6564e74d5250f22e5d76d Mon Sep 17 00:00:00 2001
+From: Patrick Steinhardt <ps@pks.im>
+Date: Thu, 1 Dec 2022 15:47:00 +0100
+Subject: [PATCH 08/12] utf8: fix returning negative string width
+
+The `utf8_strnwidth()` function calls `utf8_width()` in a loop and adds
+its returned width to the end result. `utf8_width()` can return `-1`
+though in case it reads a control character, which means that the
+computed string width is going to be wrong. In the worst case where
+there are more control characters than non-control characters, we may
+even return a negative string width.
+
+Fix this bug by treating control characters as having zero width.
+
+Signed-off-by: Patrick Steinhardt <ps@pks.im>
+Signed-off-by: Junio C Hamano <gitster@pobox.com>
+
+Upstream-Status: Backport [https://github.com/git/git/commit/17d23e8a3812a5ca3dd6564e74d5250f22e5d76d]
+CVE: CVE-2022-41903
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ t/t4205-log-pretty-formats.sh | 6 ++++++
+ utf8.c | 8 ++++++--
+ 2 files changed, 12 insertions(+), 2 deletions(-)
+
+diff --git a/t/t4205-log-pretty-formats.sh b/t/t4205-log-pretty-formats.sh
+index 23ac508..261a6f0 100755
+--- a/t/t4205-log-pretty-formats.sh
++++ b/t/t4205-log-pretty-formats.sh
+@@ -820,6 +820,12 @@ test_expect_success SIZE_T_IS_64BIT 'log --pretty with overflowing wrapping dire
+ test_cmp expect error
+ '
+
++test_expect_success 'log --pretty with padding and preceding control chars' '
++ printf "\20\20 0" >expect &&
++ git log -1 --pretty="format:%x10%x10%>|(4)%x30" >actual &&
++ test_cmp expect actual
++'
++
+ test_expect_success EXPENSIVE,SIZE_T_IS_64BIT 'log --pretty with huge commit message' '
+ # We only assert that this command does not crash. This needs to be
+ # executed with the address sanitizer to demonstrate failure.
+diff --git a/utf8.c b/utf8.c
+index a66984b..6632bd2 100644
+--- a/utf8.c
++++ b/utf8.c
+@@ -212,11 +212,15 @@ int utf8_strnwidth(const char *string, size_t len, int skip_ansi)
+ const char *orig = string;
+
+ while (string && string < orig + len) {
+- int skip;
++ int glyph_width, skip;
++
+ while (skip_ansi &&
+ (skip = display_mode_esc_sequence_len(string)) != 0)
+ string += skip;
+- width += utf8_width(&string, NULL);
++
++ glyph_width = utf8_width(&string, NULL);
++ if (glyph_width > 0)
++ width += glyph_width;
+ }
+ return string ? width : len;
+ }
+--
+2.25.1
+
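A compact illustration of the idea behind this fix, assuming a toy fake_glyph_width() in place of utf8_width(): widths reported as negative (control characters) are simply not added to the running total.

#include <stdio.h>

/* Pretend glyph-width function: control characters report -1. */
static int fake_glyph_width(unsigned char c)
{
    return c < 0x20 ? -1 : 1;
}

static int display_width(const char *s)
{
    int width = 0;

    for (; *s; s++) {
        int w = fake_glyph_width((unsigned char)*s);

        if (w > 0)              /* treat negative widths as zero */
            width += w;
    }
    return width;
}

int main(void)
{
    /* Two control characters followed by "abc": prints 3, not 1. */
    printf("%d\n", display_width("\020\020abc"));
    return 0;
}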
diff --git a/poky/meta/recipes-devtools/git/files/CVE-2022-41903-09.patch b/poky/meta/recipes-devtools/git/files/CVE-2022-41903-09.patch
new file mode 100644
index 0000000000..761d4c6a9f
--- /dev/null
+++ b/poky/meta/recipes-devtools/git/files/CVE-2022-41903-09.patch
@@ -0,0 +1,162 @@
+From 937b71cc8b5b998963a7f9a33312ba3549d55510 Mon Sep 17 00:00:00 2001
+From: Patrick Steinhardt <ps@pks.im>
+Date: Thu, 1 Dec 2022 15:47:04 +0100
+Subject: [PATCH 09/12] utf8: fix overflow when returning string width
+
+The return type of both `utf8_strwidth()` and `utf8_strnwidth()` is
+`int`, but we operate on string lengths which are typically of type
+`size_t`. This means that when the string is longer than `INT_MAX`, we
+will overflow and thus return a negative result.
+
+This can lead to an out-of-bounds write with `--pretty=format:%<1)%B`
+and a commit message that is 2^31+1 bytes long:
+
+ =================================================================
+ ==26009==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x603000001168 at pc 0x7f95c4e5f427 bp 0x7ffd8541c900 sp 0x7ffd8541c0a8
+ WRITE of size 2147483649 at 0x603000001168 thread T0
+ #0 0x7f95c4e5f426 in __interceptor_memcpy /usr/src/debug/gcc/libsanitizer/sanitizer_common/sanitizer_common_interceptors.inc:827
+ #1 0x5612bbb1068c in format_and_pad_commit pretty.c:1763
+ #2 0x5612bbb1087a in format_commit_item pretty.c:1801
+ #3 0x5612bbc33bab in strbuf_expand strbuf.c:429
+ #4 0x5612bbb110e7 in repo_format_commit_message pretty.c:1869
+ #5 0x5612bbb12d96 in pretty_print_commit pretty.c:2161
+ #6 0x5612bba0a4d5 in show_log log-tree.c:781
+ #7 0x5612bba0d6c7 in log_tree_commit log-tree.c:1117
+ #8 0x5612bb691ed5 in cmd_log_walk_no_free builtin/log.c:508
+ #9 0x5612bb69235b in cmd_log_walk builtin/log.c:549
+ #10 0x5612bb6951a2 in cmd_log builtin/log.c:883
+ #11 0x5612bb56c993 in run_builtin git.c:466
+ #12 0x5612bb56d397 in handle_builtin git.c:721
+ #13 0x5612bb56db07 in run_argv git.c:788
+ #14 0x5612bb56e8a7 in cmd_main git.c:923
+ #15 0x5612bb803682 in main common-main.c:57
+ #16 0x7f95c4c3c28f (/usr/lib/libc.so.6+0x2328f)
+ #17 0x7f95c4c3c349 in __libc_start_main (/usr/lib/libc.so.6+0x23349)
+ #18 0x5612bb5680e4 in _start ../sysdeps/x86_64/start.S:115
+
+ 0x603000001168 is located 0 bytes to the right of 24-byte region [0x603000001150,0x603000001168)
+ allocated by thread T0 here:
+ #0 0x7f95c4ebe7ea in __interceptor_realloc /usr/src/debug/gcc/libsanitizer/asan/asan_malloc_linux.cpp:85
+ #1 0x5612bbcdd556 in xrealloc wrapper.c:136
+ #2 0x5612bbc310a3 in strbuf_grow strbuf.c:99
+ #3 0x5612bbc32acd in strbuf_add strbuf.c:298
+ #4 0x5612bbc33aec in strbuf_expand strbuf.c:418
+ #5 0x5612bbb110e7 in repo_format_commit_message pretty.c:1869
+ #6 0x5612bbb12d96 in pretty_print_commit pretty.c:2161
+ #7 0x5612bba0a4d5 in show_log log-tree.c:781
+ #8 0x5612bba0d6c7 in log_tree_commit log-tree.c:1117
+ #9 0x5612bb691ed5 in cmd_log_walk_no_free builtin/log.c:508
+ #10 0x5612bb69235b in cmd_log_walk builtin/log.c:549
+ #11 0x5612bb6951a2 in cmd_log builtin/log.c:883
+ #12 0x5612bb56c993 in run_builtin git.c:466
+ #13 0x5612bb56d397 in handle_builtin git.c:721
+ #14 0x5612bb56db07 in run_argv git.c:788
+ #15 0x5612bb56e8a7 in cmd_main git.c:923
+ #16 0x5612bb803682 in main common-main.c:57
+ #17 0x7f95c4c3c28f (/usr/lib/libc.so.6+0x2328f)
+
+ SUMMARY: AddressSanitizer: heap-buffer-overflow /usr/src/debug/gcc/libsanitizer/sanitizer_common/sanitizer_common_interceptors.inc:827 in __interceptor_memcpy
+ Shadow bytes around the buggy address:
+ 0x0c067fff81d0: fd fd fd fa fa fa fd fd fd fa fa fa fd fd fd fa
+ 0x0c067fff81e0: fa fa fd fd fd fd fa fa fd fd fd fd fa fa fd fd
+ 0x0c067fff81f0: fd fa fa fa fd fd fd fa fa fa fd fd fd fa fa fa
+ 0x0c067fff8200: fd fd fd fa fa fa fd fd fd fd fa fa 00 00 00 fa
+ 0x0c067fff8210: fa fa fd fd fd fa fa fa fd fd fd fa fa fa fd fd
+ =>0x0c067fff8220: fd fa fa fa fd fd fd fa fa fa 00 00 00[fa]fa fa
+ 0x0c067fff8230: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
+ 0x0c067fff8240: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
+ 0x0c067fff8250: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
+ 0x0c067fff8260: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
+ 0x0c067fff8270: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
+ Shadow byte legend (one shadow byte represents 8 application bytes):
+ Addressable: 00
+ Partially addressable: 01 02 03 04 05 06 07
+ Heap left redzone: fa
+ Freed heap region: fd
+ Stack left redzone: f1
+ Stack mid redzone: f2
+ Stack right redzone: f3
+ Stack after return: f5
+ Stack use after scope: f8
+ Global redzone: f9
+ Global init order: f6
+ Poisoned by user: f7
+ Container overflow: fc
+ Array cookie: ac
+ Intra object redzone: bb
+ ASan internal: fe
+ Left alloca redzone: ca
+ Right alloca redzone: cb
+ ==26009==ABORTING
+
+Now the proper fix for this would be to convert both functions to return
+a `size_t` instead of an `int`. But given that this commit may be part
+of a security release, let's instead do the minimal viable fix and die
+in case we see an overflow.
+
+Add a test that would have previously caused us to crash.
+
+Signed-off-by: Patrick Steinhardt <ps@pks.im>
+Signed-off-by: Junio C Hamano <gitster@pobox.com>
+
+Upstream-Status: Backport [https://github.com/git/git/commit/937b71cc8b5b998963a7f9a33312ba3549d55510]
+CVE: CVE-2022-41903
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ t/t4205-log-pretty-formats.sh | 8 ++++++++
+ utf8.c | 12 +++++++++---
+ 2 files changed, 17 insertions(+), 3 deletions(-)
+
+diff --git a/t/t4205-log-pretty-formats.sh b/t/t4205-log-pretty-formats.sh
+index 261a6f0..de15007 100755
+--- a/t/t4205-log-pretty-formats.sh
++++ b/t/t4205-log-pretty-formats.sh
+@@ -843,4 +843,12 @@ test_expect_success EXPENSIVE,SIZE_T_IS_64BIT 'log --pretty with huge commit mes
+ test_cmp expect actual
+ '
+
++test_expect_success EXPENSIVE,SIZE_T_IS_64BIT 'log --pretty with huge commit message does not cause allocation failure' '
++ test_must_fail git log -1 --format="%<(1)%B" $huge_commit 2>error &&
++ cat >expect <<-EOF &&
++ fatal: number too large to represent as int on this platform: 2147483649
++ EOF
++ test_cmp expect error
++'
++
+ test_done
+diff --git a/utf8.c b/utf8.c
+index 6632bd2..03be475 100644
+--- a/utf8.c
++++ b/utf8.c
+@@ -208,11 +208,12 @@ int utf8_width(const char **start, size_t *remainder_p)
+ */
+ int utf8_strnwidth(const char *string, size_t len, int skip_ansi)
+ {
+- int width = 0;
+ const char *orig = string;
++ size_t width = 0;
+
+ while (string && string < orig + len) {
+- int glyph_width, skip;
++ int glyph_width;
++ size_t skip;
+
+ while (skip_ansi &&
+ (skip = display_mode_esc_sequence_len(string)) != 0)
+@@ -222,7 +223,12 @@ int utf8_strnwidth(const char *string, size_t len, int skip_ansi)
+ if (glyph_width > 0)
+ width += glyph_width;
+ }
+- return string ? width : len;
++
++ /*
++ * TODO: fix the interface of this function and `utf8_strwidth()` to
++ * return `size_t` instead of `int`.
++ */
++ return cast_size_t_to_int(string ? width : len);
+ }
+
+ int utf8_strwidth(const char *string)
+--
+2.25.1
+
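The pattern used here, accumulating in size_t and only narrowing to int at the very end under an explicit check, can be sketched in isolation as follows; narrow_or_die() is a stand-in, not git's helper.

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

/* Narrow a size_t to int, dying when the value cannot be represented. */
static int narrow_or_die(size_t n)
{
    if (n > INT_MAX) {
        fprintf(stderr, "width too large: %zu\n", n);
        exit(128);
    }
    return (int)n;
}

int main(void)
{
    size_t width = 0;

    /* Accumulate in size_t so the sum itself cannot wrap negative... */
    for (int i = 0; i < 1000; i++)
        width += 2;

    /* ...and only narrow at the very end, under an explicit check. */
    printf("%d\n", narrow_or_die(width));
    return 0;
}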
diff --git a/poky/meta/recipes-devtools/git/files/CVE-2022-41903-10.patch b/poky/meta/recipes-devtools/git/files/CVE-2022-41903-10.patch
new file mode 100644
index 0000000000..bbfc6e758f
--- /dev/null
+++ b/poky/meta/recipes-devtools/git/files/CVE-2022-41903-10.patch
@@ -0,0 +1,99 @@
+From 81c2d4c3a5ba0e6ab8c348708441fed170e63a82 Mon Sep 17 00:00:00 2001
+From: Patrick Steinhardt <ps@pks.im>
+Date: Thu, 1 Dec 2022 15:47:10 +0100
+Subject: [PATCH 10/12] utf8: fix checking for glyph width in strbuf_utf8_replace()
+
+In `strbuf_utf8_replace()`, we call `utf8_width()` to compute the width
+of the current glyph. If the glyph is a control character though it can
+be that `utf8_width()` returns `-1`, but because we assign this value to
+a `size_t` the conversion will cause us to underflow. This bug can
+easily be triggered with the following command:
+
+ $ git log --pretty='format:xxx%<|(1,trunc)%x10'
+
+From all I can see, though, this seems to be a benign underflow that has
+no security-related consequences.
+
+Fix the bug by using an `int` instead. When we see a control character,
+we now copy it into the target buffer but don't advance the current
+width of the string.
+
+Signed-off-by: Patrick Steinhardt <ps@pks.im>
+Signed-off-by: Junio C Hamano <gitster@pobox.com>
+
+Upstream-Status: Backport [https://github.com/git/git/commit/81c2d4c3a5ba0e6ab8c348708441fed170e63a82]
+CVE: CVE-2022-41903
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ t/t4205-log-pretty-formats.sh | 7 +++++++
+ utf8.c | 19 ++++++++++++++-----
+ 2 files changed, 21 insertions(+), 5 deletions(-)
+
+diff --git a/t/t4205-log-pretty-formats.sh b/t/t4205-log-pretty-formats.sh
+index de15007..52c8bc8 100755
+--- a/t/t4205-log-pretty-formats.sh
++++ b/t/t4205-log-pretty-formats.sh
+@@ -826,6 +826,13 @@ test_expect_success 'log --pretty with padding and preceding control chars' '
+ test_cmp expect actual
+ '
+
++test_expect_success 'log --pretty truncation with control chars' '
++ test_commit "$(printf "\20\20\20\20xxxx")" file contents commit-with-control-chars &&
++ printf "\20\20\20\20x.." >expect &&
++ git log -1 --pretty="format:%<(3,trunc)%s" commit-with-control-chars >actual &&
++ test_cmp expect actual
++'
++
+ test_expect_success EXPENSIVE,SIZE_T_IS_64BIT 'log --pretty with huge commit message' '
+ # We only assert that this command does not crash. This needs to be
+ # executed with the address sanitizer to demonstrate failure.
+diff --git a/utf8.c b/utf8.c
+index 03be475..ec03e69 100644
+--- a/utf8.c
++++ b/utf8.c
+@@ -377,6 +377,7 @@ void strbuf_utf8_replace(struct strbuf *sb_src, int pos, int width,
+ dst = sb_dst.buf;
+
+ while (src < end) {
++ int glyph_width;
+ char *old;
+ size_t n;
+
+@@ -390,21 +391,29 @@ void strbuf_utf8_replace(struct strbuf *sb_src, int pos, int width,
+ break;
+
+ old = src;
+- n = utf8_width((const char**)&src, NULL);
+- if (!src) /* broken utf-8, do nothing */
++ glyph_width = utf8_width((const char**)&src, NULL);
++ if (!src) /* broken utf-8, do nothing */
+ goto out;
+- if (n && w >= pos && w < pos + width) {
++
++ /*
++ * In case we see a control character we copy it into the
++ * buffer, but don't add it to the width.
++ */
++ if (glyph_width < 0)
++ glyph_width = 0;
++
++ if (glyph_width && w >= pos && w < pos + width) {
+ if (subst) {
+ memcpy(dst, subst, subst_len);
+ dst += subst_len;
+ subst = NULL;
+ }
+- w += n;
++ w += glyph_width;
+ continue;
+ }
+ memcpy(dst, old, src - old);
+ dst += src - old;
+- w += n;
++ w += glyph_width;
+ }
+ strbuf_setlen(&sb_dst, dst - sb_dst.buf);
+ strbuf_swap(sb_src, &sb_dst);
+--
+2.25.1
+
diff --git a/poky/meta/recipes-devtools/git/files/CVE-2022-41903-11.patch b/poky/meta/recipes-devtools/git/files/CVE-2022-41903-11.patch
new file mode 100644
index 0000000000..f339edfc8a
--- /dev/null
+++ b/poky/meta/recipes-devtools/git/files/CVE-2022-41903-11.patch
@@ -0,0 +1,90 @@
+From f930a2394303b902e2973f4308f96529f736b8bc Mon Sep 17 00:00:00 2001
+From: Patrick Steinhardt <ps@pks.im>
+Date: Thu, 1 Dec 2022 15:47:15 +0100
+Subject: [PATCH 11/12] utf8: refactor strbuf_utf8_replace to not rely on preallocated buffer
+
+In `strbuf_utf8_replace`, we preallocate the destination buffer and then
+use `memcpy` to copy bytes into it at computed offsets. This feels
+rather fragile and is hard to understand at times. Refactor the code to
+instead use `strbuf_add` and `strbuf_addstr` so that we can be sure that
+there is no possibility to perform an out-of-bounds write.
+
+Signed-off-by: Patrick Steinhardt <ps@pks.im>
+Signed-off-by: Junio C Hamano <gitster@pobox.com>
+
+Upstream-Status: Backport [https://github.com/git/git/commit/f930a2394303b902e2973f4308f96529f736b8bc]
+CVE: CVE-2022-41903
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ utf8.c | 34 +++++++++++++---------------------
+ 1 file changed, 13 insertions(+), 21 deletions(-)
+
+diff --git a/utf8.c b/utf8.c
+index ec03e69..a13f5e3 100644
+--- a/utf8.c
++++ b/utf8.c
+@@ -365,26 +365,20 @@ void strbuf_add_wrapped_bytes(struct strbuf *buf, const char *data, int len,
+ void strbuf_utf8_replace(struct strbuf *sb_src, int pos, int width,
+ const char *subst)
+ {
+- struct strbuf sb_dst = STRBUF_INIT;
+- char *src = sb_src->buf;
+- char *end = src + sb_src->len;
+- char *dst;
+- int w = 0, subst_len = 0;
++ const char *src = sb_src->buf, *end = sb_src->buf + sb_src->len;
++ struct strbuf dst;
++ int w = 0;
+
+- if (subst)
+- subst_len = strlen(subst);
+- strbuf_grow(&sb_dst, sb_src->len + subst_len);
+- dst = sb_dst.buf;
++ strbuf_init(&dst, sb_src->len);
+
+ while (src < end) {
++ const char *old;
+ int glyph_width;
+- char *old;
+ size_t n;
+
+ while ((n = display_mode_esc_sequence_len(src))) {
+- memcpy(dst, src, n);
++ strbuf_add(&dst, src, n);
+ src += n;
+- dst += n;
+ }
+
+ if (src >= end)
+@@ -404,21 +398,19 @@ void strbuf_utf8_replace(struct strbuf *sb_src, int pos, int width,
+
+ if (glyph_width && w >= pos && w < pos + width) {
+ if (subst) {
+- memcpy(dst, subst, subst_len);
+- dst += subst_len;
++ strbuf_addstr(&dst, subst);
+ subst = NULL;
+ }
+- w += glyph_width;
+- continue;
++ } else {
++ strbuf_add(&dst, old, src - old);
+ }
+- memcpy(dst, old, src - old);
+- dst += src - old;
++
+ w += glyph_width;
+ }
+- strbuf_setlen(&sb_dst, dst - sb_dst.buf);
+- strbuf_swap(sb_src, &sb_dst);
++
++ strbuf_swap(sb_src, &dst);
+ out:
+- strbuf_release(&sb_dst);
++ strbuf_release(&dst);
+ }
+
+ /*
+--
+2.25.1
+
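The refactoring replaces manual memcpy() into a hand-sized buffer with appends through a growable buffer API, so no offset arithmetic can write out of bounds. A toy version of that approach; buf_add() is only an illustration, not git's strbuf API.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct buf { char *p; size_t len, cap; };

/* Grow on demand and append; no caller-side offset arithmetic. */
static void buf_add(struct buf *b, const char *data, size_t n)
{
    if (b->len + n + 1 > b->cap) {
        b->cap = (b->len + n + 1) * 2;
        b->p = realloc(b->p, b->cap);
        if (!b->p)
            abort();
    }
    memcpy(b->p + b->len, data, n);
    b->len += n;
    b->p[b->len] = '\0';
}

int main(void)
{
    struct buf b = { NULL, 0, 0 };

    buf_add(&b, "abc", 3);
    buf_add(&b, "..", 2);
    puts(b.p);              /* abc.. */
    free(b.p);
    return 0;
}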
diff --git a/poky/meta/recipes-devtools/git/files/CVE-2022-41903-12.patch b/poky/meta/recipes-devtools/git/files/CVE-2022-41903-12.patch
new file mode 100644
index 0000000000..978865978d
--- /dev/null
+++ b/poky/meta/recipes-devtools/git/files/CVE-2022-41903-12.patch
@@ -0,0 +1,124 @@
+From 304a50adff6480ede46b68f7545baab542cbfb46 Mon Sep 17 00:00:00 2001
+From: Patrick Steinhardt <ps@pks.im>
+Date: Thu, 1 Dec 2022 15:47:23 +0100
+Subject: [PATCH 12/12] pretty: restrict input lengths for padding and wrapping formats
+
+Both the padding and wrapping formatting directives allow the caller to
+specify an integer that ultimately leads to us adding this many chars to
+the result buffer. As a consequence, it is trivial to e.g. allocate 2GB
+of RAM via a single formatting directive and cause resource exhaustion
+on the machine executing this logic. Furthermore, it is debatable
+whether there are any sane use cases that require the user to pad data to
+2GB boundaries or to indent wrapped data by 2GB.
+
+Restrict the input sizes to 16 kilobytes at a maximum to limit the
+number of bytes that can be requested by the user. This is not meant
+as a fix in itself, because there are ways to trivially amplify the
+amount of data we generate via formatting directives; the real
+protection is achieved by the changes in previous steps that catch and
+avoid the integer wraparound which causes us to under-allocate and
+access beyond the end of allocated memory regions. But having such a
+limit significantly helps when fuzzing the pretty format, because the
+fuzzer otherwise quickly runs out of memory as it discovers these
+formatters.
+
+Signed-off-by: Patrick Steinhardt <ps@pks.im>
+Signed-off-by: Junio C Hamano <gitster@pobox.com>
+
+Upstream-Status: Backport [https://github.com/git/git/commit/304a50adff6480ede46b68f7545baab542cbfb46]
+CVE: CVE-2022-41903
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ pretty.c | 26 ++++++++++++++++++++++++++
+ t/t4205-log-pretty-formats.sh | 24 +++++++++++++++---------
+ 2 files changed, 41 insertions(+), 9 deletions(-)
+
+diff --git a/pretty.c b/pretty.c
+index c3c1443..e9687f0 100644
+--- a/pretty.c
++++ b/pretty.c
+@@ -13,6 +13,13 @@
+ #include "gpg-interface.h"
+ #include "trailer.h"
+
++/*
++ * The limit for formatting directives, which enable the caller to append
++ * arbitrarily many bytes to the formatted buffer. This includes padding
++ * and wrapping formatters.
++ */
++#define FORMATTING_LIMIT (16 * 1024)
++
+ static char *user_format;
+ static struct cmt_fmt_map {
+ const char *name;
+@@ -1029,6 +1036,15 @@ static size_t parse_padding_placeholder(const char *placeholder,
+ if (!*end || end == start)
+ return 0;
+ width = strtol(start, &next, 10);
++
++ /*
++ * We need to limit the amount of padding, or otherwise this
++ * would allow the user to pad the buffer by arbitrarily many
++ * bytes and thus cause resource exhaustion.
++ */
++ if (width < -FORMATTING_LIMIT || width > FORMATTING_LIMIT)
++ return 0;
++
+ if (next == start || width == 0)
+ return 0;
+ if (width < 0) {
+@@ -1188,6 +1204,16 @@ static size_t format_commit_one(struct strbuf *sb, /* in UTF-8 */
+ if (*next != ')')
+ return 0;
+ }
++
++ /*
++ * We need to limit the format here as it allows the
++ * user to prepend arbitrarily many bytes to the buffer
++ * when rewrapping.
++ */
++ if (width > FORMATTING_LIMIT ||
++ indent1 > FORMATTING_LIMIT ||
++ indent2 > FORMATTING_LIMIT)
++ return 0;
+ rewrap_message_tail(sb, c, width, indent1, indent2);
+ return end - placeholder + 1;
+ } else
+diff --git a/t/t4205-log-pretty-formats.sh b/t/t4205-log-pretty-formats.sh
+index 52c8bc8..572d02f 100755
+--- a/t/t4205-log-pretty-formats.sh
++++ b/t/t4205-log-pretty-formats.sh
+@@ -809,15 +809,21 @@ test_expect_success 'log --pretty with magical wrapping directives' '
+ '
+
+ test_expect_success SIZE_T_IS_64BIT 'log --pretty with overflowing wrapping directive' '
+- cat >expect <<-EOF &&
+- fatal: number too large to represent as int on this platform: 2147483649
+- EOF
+- test_must_fail git log -1 --pretty="format:%w(2147483649,1,1)%d" 2>error &&
+- test_cmp expect error &&
+- test_must_fail git log -1 --pretty="format:%w(1,2147483649,1)%d" 2>error &&
+- test_cmp expect error &&
+- test_must_fail git log -1 --pretty="format:%w(1,1,2147483649)%d" 2>error &&
+- test_cmp expect error
++ printf "%%w(2147483649,1,1)0" >expect &&
++ git log -1 --pretty="format:%w(2147483649,1,1)%x30" >actual &&
++ test_cmp expect actual &&
++ printf "%%w(1,2147483649,1)0" >expect &&
++ git log -1 --pretty="format:%w(1,2147483649,1)%x30" >actual &&
++ test_cmp expect actual &&
++ printf "%%w(1,1,2147483649)0" >expect &&
++ git log -1 --pretty="format:%w(1,1,2147483649)%x30" >actual &&
++ test_cmp expect actual
++'
++
++test_expect_success SIZE_T_IS_64BIT 'log --pretty with overflowing padding directive' '
++ printf "%%<(2147483649)0" >expect &&
++ git log -1 --pretty="format:%<(2147483649)%x30" >actual &&
++ test_cmp expect actual
+ '
+
+ test_expect_success 'log --pretty with padding and preceding control chars' '
+--
+2.25.1
+
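A stand-alone sketch of the input cap, assuming a hypothetical parse_width() helper: the 16 KiB limit mirrors the FORMATTING_LIMIT constant introduced above, and anything beyond it is treated as an invalid directive rather than honoured.

#include <stdio.h>
#include <stdlib.h>

#define FORMATTING_LIMIT (16 * 1024)

/* Returns 1 and stores the width if valid, 0 for anything refused. */
static int parse_width(const char *s, long *out)
{
    char *end;
    long width = strtol(s, &end, 10);

    if (end == s || *end != '\0')
        return 0;                               /* not a number */
    if (width < -FORMATTING_LIMIT || width > FORMATTING_LIMIT)
        return 0;                               /* refuse huge padding */
    *out = width;
    return 1;
}

int main(void)
{
    long w;

    printf("%d\n", parse_width("40", &w));          /* 1: accepted */
    printf("%d\n", parse_width("2147483649", &w));  /* 0: refused  */
    return 0;
}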
diff --git a/poky/meta/recipes-devtools/git/files/CVE-2023-22490-1.patch b/poky/meta/recipes-devtools/git/files/CVE-2023-22490-1.patch
new file mode 100644
index 0000000000..cc9b448c5c
--- /dev/null
+++ b/poky/meta/recipes-devtools/git/files/CVE-2023-22490-1.patch
@@ -0,0 +1,179 @@
+From 58325b93c5b6212697b088371809e9948fee8052 Mon Sep 17 00:00:00 2001
+From: Taylor Blau <me@ttaylorr.com>
+Date: Tue, 24 Jan 2023 19:43:45 -0500
+Subject: [PATCH 1/3] t5619: demonstrate clone_local() with ambiguous transport
+
+When cloning a repository, Git must determine (a) what transport
+mechanism to use, and (b) whether or not the clone is local.
+
+Since f38aa83 (use local cloning if insteadOf makes a local URL,
+2014-07-17), the latter check happens after the remote has been
+initialized, and references the remote's URL instead of the local path.
+This is done to make it possible for a `url.<base>.insteadOf` rule to
+convert a remote URL into a local one, in which case the `clone_local()`
+mechanism should be used.
+
+However, with a specially crafted repository, Git can be tricked into
+using a non-local transport while still setting `is_local` to "1" and
+using the `clone_local()` optimization. The below test case
+demonstrates such an instance, and shows that it can be used to include
+arbitrary (known) paths in the working copy of a cloned repository on a
+victim's machine[^1], even if local file clones are forbidden by
+`protocol.file.allow`.
+
+This happens in a few parts:
+
+ 1. We first call `get_repo_path()` to see if the remote is a local
+ path. If it is, we replace the repo name with its absolute path.
+
+ 2. We then call `transport_get()` on the repo name and decide how to
+ access it. If it was turned into an absolute path in the previous
+ step, then we should always treat it like a file.
+
+ 3. We use `get_repo_path()` again, and set `is_local` as appropriate.
+ But it's already too late to rewrite the repo name as an absolute
+ path, since we've already fed it to the transport code.
+
+The attack works by including a submodule whose URL corresponds to a
+path on disk. In the below example, the repository "sub" is reachable
+via the dumb HTTP protocol at (something like):
+
+ http://127.0.0.1:NNNN/dumb/sub.git
+
+However, the path "http:/127.0.0.1:NNNN/dumb" (that is, a top-level
+directory called "http:", then nested directories "127.0.0.1:NNNN", and
+"dumb") exists within the repository, too.
+
+To determine this, it first picks the appropriate transport, which is
+dumb HTTP. It then uses the remote's URL in order to determine whether
+the repository exists locally on disk. However, the malicious repository
+also contains an embedded stub repository which is the target of a
+symbolic link at the local path corresponding to the "sub" repository on
+disk (i.e., there is a symbolic link at "http:/127.0.0.1/dumb/sub.git",
+pointing to the stub repository via ".git/modules/sub/../../../repo").
+
+This stub repository fools Git into thinking that a local repository
+exists at that URL and thus can be cloned locally. The affected call is
+in `get_repo_path()`, which in turn calls `get_repo_path_1()`, which
+locates a valid repository at that target.
+
+This then causes Git to set the `is_local` variable to "1", and in turn
+instructs Git to clone the repository using its local clone optimization
+via the `clone_local()` function.
+
+The exploit comes into play because the stub repository's top-level
+"$GIT_DIR/objects" directory is a symbolic link which can point to an
+arbitrary path on the victim's machine. `clone_local()` resolves the
+top-level "objects" directory through a `stat(2)` call, meaning that we
+read through the symbolic link and copy or hardlink the directory
+contents at the destination of the link.
+
+In other words, we can get steps (1) and (3) to disagree by leveraging
+the dangling symlink to pick a non-local transport in the first step,
+and then set is_local to "1" in the third step when cloning with
+`--separate-git-dir`, which makes the symlink non-dangling.
+
+This can result in data-exfiltration on the victim's machine when
+sensitive data is at a known path (e.g., "/home/$USER/.ssh").
+
+The appropriate fix is two-fold:
+
+ - Resolve the transport later on (to avoid using the local
+ clone optimization with a non-local transport).
+
+ - Avoid reading through the top-level "objects" directory when
+ (correctly) using the clone_local() optimization.
+
+This patch merely demonstrates the issue. The following two patches will
+implement each part of the above fix, respectively.
+
+[^1]: Provided that any target directory does not contain symbolic
+ links, in which case the changes from 6f054f9 (builtin/clone.c:
+ disallow `--local` clones with symlinks, 2022-07-28) will abort the
+ clone.
+
+Reported-by: yvvdwf <yvvdwf@gmail.com>
+Signed-off-by: Taylor Blau <me@ttaylorr.com>
+Signed-off-by: Junio C Hamano <gitster@pobox.com>
+
+Upstream-Status: Backport
+[https://github.com/git/git/commit/58325b93c5b6212697b088371809e9948fee8052]
+CVE: CVE-2023-22490
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ t/t5619-clone-local-ambiguous-transport.sh | 63 ++++++++++++++++++++++
+ 1 file changed, 63 insertions(+)
+ create mode 100644 t/t5619-clone-local-ambiguous-transport.sh
+
+diff --git a/t/t5619-clone-local-ambiguous-transport.sh b/t/t5619-clone-local-ambiguous-transport.sh
+new file mode 100644
+index 0000000..7ebd31a
+--- /dev/null
++++ b/t/t5619-clone-local-ambiguous-transport.sh
+@@ -0,0 +1,63 @@
++#!/bin/sh
++
++test_description='test local clone with ambiguous transport'
++
++. ./test-lib.sh
++. "$TEST_DIRECTORY/lib-httpd.sh"
++
++if ! test_have_prereq SYMLINKS
++then
++ skip_all='skipping test, symlink support unavailable'
++ test_done
++fi
++
++start_httpd
++
++REPO="$HTTPD_DOCUMENT_ROOT_PATH/sub.git"
++URI="$HTTPD_URL/dumb/sub.git"
++
++test_expect_success 'setup' '
++ mkdir -p sensitive &&
++ echo "secret" >sensitive/secret &&
++
++ git init --bare "$REPO" &&
++ test_commit_bulk -C "$REPO" --ref=main 1 &&
++
++ git -C "$REPO" update-ref HEAD main &&
++ git -C "$REPO" update-server-info &&
++
++ git init malicious &&
++ (
++ cd malicious &&
++
++ git submodule add "$URI" &&
++
++ mkdir -p repo/refs &&
++ touch repo/refs/.gitkeep &&
++ printf "ref: refs/heads/a" >repo/HEAD &&
++ ln -s "$(cd .. && pwd)/sensitive" repo/objects &&
++
++ mkdir -p "$HTTPD_URL/dumb" &&
++ ln -s "../../../.git/modules/sub/../../../repo/" "$URI" &&
++
++ git add . &&
++ git commit -m "initial commit"
++ ) &&
++
++ # Delete all of the references in our malicious submodule to
++ # avoid the client attempting to checkout any objects (which
++ # will be missing, and thus will cause the clone to fail before
++ # we can trigger the exploit).
++ git -C "$REPO" for-each-ref --format="delete %(refname)" >in &&
++ git -C "$REPO" update-ref --stdin <in &&
++ git -C "$REPO" update-server-info
++'
++
++test_expect_failure 'ambiguous transport does not lead to arbitrary file-inclusion' '
++ git clone malicious clone &&
++ git -C clone submodule update --init &&
++
++ test_path_is_missing clone/.git/modules/sub/objects/secret
++'
++
++test_done
+--
+2.25.1
+
diff --git a/poky/meta/recipes-devtools/git/files/CVE-2023-22490-2.patch b/poky/meta/recipes-devtools/git/files/CVE-2023-22490-2.patch
new file mode 100644
index 0000000000..0b5b40f827
--- /dev/null
+++ b/poky/meta/recipes-devtools/git/files/CVE-2023-22490-2.patch
@@ -0,0 +1,122 @@
+From cf8f6ce02a13f4d1979a53241afbee15a293fce9 Mon Sep 17 00:00:00 2001
+From: Taylor Blau <me@ttaylorr.com>
+Date: Tue, 24 Jan 2023 19:43:48 -0500
+Subject: [PATCH 2/3] clone: delay picking a transport until after get_repo_path()
+
+In the previous commit, t5619 demonstrates an issue where two calls to
+`get_repo_path()` could trick Git into using its local clone mechanism
+in conjunction with a non-local transport.
+
+That sequence is:
+
+ - the starting state is that the local path https:/example.com/foo is a
+ symlink that points to ../../../.git/modules/foo. So it's dangling.
+
+ - get_repo_path() sees that no such path exists (because it's
+ dangling), and thus we do not canonicalize it into an absolute path
+
+ - because we're using --separate-git-dir, we create .git/modules/foo.
+ Now our symlink is no longer dangling!
+
+ - we pass the url to transport_get(), which sees it as an https URL.
+
+ - we call get_repo_path() again, on the url. This second call was
+ introduced by f38aa83 (use local cloning if insteadOf makes a
+ local URL, 2014-07-17). The idea is that we want to pull the url
+ fresh from the remote.c API, because it will apply any aliases.
+
+And of course now it sees that there is a local file, which is a
+mismatch with the transport we already selected.
+
+The issue in the above sequence is calling `transport_get()` before
+deciding whether or not the repository is indeed local, and not passing
+in an absolute path if it is local.
+
+This is reminiscent of a similar bug report in [1], where it was
+suggested to perform the `insteadOf` lookup earlier. Taking that
+approach may not be as straightforward, since the intent is to store the
+original URL in the config, but to actually fetch from the insteadOf
+one, so conflating the two early on is a non-starter.
+
+Note: we pass the path returned by `get_repo_path(remote->url[0])`,
+which should be the same as `repo_name` (aside from any `insteadOf`
+rewrites).
+
+We *could* pass `absolute_pathdup()` of the same argument, which
+86521ac (Bring local clone's origin URL in line with that of a remote
+clone, 2008-09-01) indicates may differ depending on the presence of
+".git/" for a non-bare repo. That matters for forming relative submodule
+paths, but doesn't matter for the second call, since we're just feeding
+it to the transport code, which is fine either way.
+
+[1]: https://lore.kernel.org/git/CAMoD=Bi41mB3QRn3JdZL-FGHs4w3C2jGpnJB-CqSndO7FMtfzA@mail.gmail.com/
+
+Signed-off-by: Jeff King <peff@peff.net>
+Signed-off-by: Taylor Blau <me@ttaylorr.com>
+Signed-off-by: Junio C Hamano <gitster@pobox.com>
+
+Upstream-Status: Backport
+[https://github.com/git/git/commit/cf8f6ce02a13f4d1979a53241afbee15a293fce9]
+CVE: CVE-2023-22490
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ builtin/clone.c | 8 ++++----
+ t/t5619-clone-local-ambiguous-transport.sh | 15 +++++++++++----
+ 2 files changed, 15 insertions(+), 8 deletions(-)
+
+diff --git a/builtin/clone.c b/builtin/clone.c
+index 53e04b1..b57e703 100644
+--- a/builtin/clone.c
++++ b/builtin/clone.c
+@@ -1112,10 +1112,6 @@ int cmd_clone(int argc, const char **argv, const char *prefix)
+ branch_top.buf);
+ refspec_append(&remote->fetch, default_refspec.buf);
+
+- transport = transport_get(remote, remote->url[0]);
+- transport_set_verbosity(transport, option_verbosity, option_progress);
+- transport->family = family;
+-
+ path = get_repo_path(remote->url[0], &is_bundle);
+ is_local = option_local != 0 && path && !is_bundle;
+ if (is_local) {
+@@ -1135,6 +1131,10 @@ int cmd_clone(int argc, const char **argv, const char *prefix)
+ }
+ if (option_local > 0 && !is_local)
+ warning(_("--local is ignored"));
++
++ transport = transport_get(remote, path ? path : remote->url[0]);
++ transport_set_verbosity(transport, option_verbosity, option_progress);
++ transport->family = family;
+ transport->cloning = 1;
+
+ transport_set_option(transport, TRANS_OPT_KEEP, "yes");
+diff --git a/t/t5619-clone-local-ambiguous-transport.sh b/t/t5619-clone-local-ambiguous-transport.sh
+index 7ebd31a..cce62bf 100644
+--- a/t/t5619-clone-local-ambiguous-transport.sh
++++ b/t/t5619-clone-local-ambiguous-transport.sh
+@@ -53,11 +53,18 @@ test_expect_success 'setup' '
+ git -C "$REPO" update-server-info
+ '
+
+-test_expect_failure 'ambiguous transport does not lead to arbitrary file-inclusion' '
++test_expect_success 'ambiguous transport does not lead to arbitrary file-inclusion' '
+ git clone malicious clone &&
+- git -C clone submodule update --init &&
+-
+- test_path_is_missing clone/.git/modules/sub/objects/secret
++ test_must_fail git -C clone submodule update --init 2>err &&
++
++ test_path_is_missing clone/.git/modules/sub/objects/secret &&
++ # We would actually expect "transport .file. not allowed" here,
++ # but due to quirks of the URL detection in Git, we mis-parse
++ # the absolute path as a bogus URL and die before that step.
++ #
++ # This works for now, and if we ever fix the URL detection, it
++ # is OK to change this to detect the transport error.
++ grep "protocol .* is not supported" err
+ '
+
+ test_done
+--
+2.25.1
+
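The essence of the fix above is an ordering change: decide whether the source is a local path first, then pick the transport from the result, so the two steps cannot disagree. A heavily simplified sketch under that assumption; resolve_local_path() and pick_transport() are placeholders, not git's get_repo_path() and transport_get().

#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

/* Placeholder: a canonical path if the URL names something on disk. */
static char *resolve_local_path(const char *url)
{
    struct stat st;

    if (stat(url, &st) != 0)
        return NULL;
    return realpath(url, NULL);
}

/* Placeholder transport selection based on what we resolved. */
static const char *pick_transport(const char *name)
{
    return name[0] == '/' ? "local" : "remote";
}

int main(void)
{
    const char *url = "/tmp";
    char *path = resolve_local_path(url);                /* step 1: locality  */
    const char *t = pick_transport(path ? path : url);   /* step 2: transport */

    printf("transport=%s\n", t);
    free(path);
    return 0;
}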
diff --git a/poky/meta/recipes-devtools/git/files/CVE-2023-22490-3.patch b/poky/meta/recipes-devtools/git/files/CVE-2023-22490-3.patch
new file mode 100644
index 0000000000..08fb7f840b
--- /dev/null
+++ b/poky/meta/recipes-devtools/git/files/CVE-2023-22490-3.patch
@@ -0,0 +1,154 @@
+From bffc762f87ae8d18c6001bf0044a76004245754c Mon Sep 17 00:00:00 2001
+From: Taylor Blau <me@ttaylorr.com>
+Date: Tue, 24 Jan 2023 19:43:51 -0500
+Subject: [PATCH 3/3] dir-iterator: prevent top-level symlinks without FOLLOW_SYMLINKS
+
+When using the dir_iterator API, we first stat(2) the base path, and
+then use that as a starting point to enumerate the directory's contents.
+
+If the directory contains symbolic links, we will immediately die() upon
+encountering them without the `FOLLOW_SYMLINKS` flag. The same is not
+true when resolving the top-level directory, though.
+
+As explained in a previous commit, this oversight in 6f054f9
+(builtin/clone.c: disallow `--local` clones with symlinks, 2022-07-28)
+can be used as an attack vector to include arbitrary files on a victim's
+filesystem from outside of the repository.
+
+Prevent resolving top-level symlinks unless the FOLLOW_SYMLINKS flag is
+given, which will cause clones of a repository with a symlink'd
+"$GIT_DIR/objects" directory to fail.
+
+Signed-off-by: Taylor Blau <me@ttaylorr.com>
+Signed-off-by: Junio C Hamano <gitster@pobox.com>
+
+Upstream-Status: Backport
+[https://github.com/git/git/commit/bffc762f87ae8d18c6001bf0044a76004245754c]
+CVE: CVE-2023-22490
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ dir-iterator.c | 13 +++++++++----
+ dir-iterator.h | 5 +++++
+ t/t0066-dir-iterator.sh | 27 ++++++++++++++++++++++++++-
+ t/t5604-clone-reference.sh | 16 ++++++++++++++++
+ 4 files changed, 56 insertions(+), 5 deletions(-)
+
+diff --git a/dir-iterator.c b/dir-iterator.c
+index b17e9f9..3764dd8 100644
+--- a/dir-iterator.c
++++ b/dir-iterator.c
+@@ -203,7 +203,7 @@ struct dir_iterator *dir_iterator_begin(const char *path, unsigned int flags)
+ {
+ struct dir_iterator_int *iter = xcalloc(1, sizeof(*iter));
+ struct dir_iterator *dir_iterator = &iter->base;
+- int saved_errno;
++ int saved_errno, err;
+
+ strbuf_init(&iter->base.path, PATH_MAX);
+ strbuf_addstr(&iter->base.path, path);
+@@ -213,10 +213,15 @@ struct dir_iterator *dir_iterator_begin(const char *path, unsigned int flags)
+ iter->flags = flags;
+
+ /*
+- * Note: stat already checks for NULL or empty strings and
+- * inexistent paths.
++ * Note: stat/lstat already checks for NULL or empty strings and
++ * nonexistent paths.
+ */
+- if (stat(iter->base.path.buf, &iter->base.st) < 0) {
++ if (iter->flags & DIR_ITERATOR_FOLLOW_SYMLINKS)
++ err = stat(iter->base.path.buf, &iter->base.st);
++ else
++ err = lstat(iter->base.path.buf, &iter->base.st);
++
++ if (err < 0) {
+ saved_errno = errno;
+ goto error_out;
+ }
+diff --git a/dir-iterator.h b/dir-iterator.h
+index 0822915..e3b6ff2 100644
+--- a/dir-iterator.h
++++ b/dir-iterator.h
+@@ -61,6 +61,11 @@
+ * not the symlinks themselves, which is the default behavior. Broken
+ * symlinks are ignored.
+ *
++ * Note: setting DIR_ITERATOR_FOLLOW_SYMLINKS affects resolving the
++ * starting path as well (e.g., attempting to iterate starting at a
++ * symbolic link pointing to a directory without FOLLOW_SYMLINKS will
++ * result in an error).
++ *
+ * Warning: circular symlinks are also followed when
+ * DIR_ITERATOR_FOLLOW_SYMLINKS is set. The iteration may end up with
+ * an ELOOP if they happen and DIR_ITERATOR_PEDANTIC is set.
+diff --git a/t/t0066-dir-iterator.sh b/t/t0066-dir-iterator.sh
+index 92910e4..c826f60 100755
+--- a/t/t0066-dir-iterator.sh
++++ b/t/t0066-dir-iterator.sh
+@@ -109,7 +109,9 @@ test_expect_success SYMLINKS 'setup dirs with symlinks' '
+ mkdir -p dir5/a/c &&
+ ln -s ../c dir5/a/b/d &&
+ ln -s ../ dir5/a/b/e &&
+- ln -s ../../ dir5/a/b/f
++ ln -s ../../ dir5/a/b/f &&
++
++ ln -s dir4 dir6
+ '
+
+ test_expect_success SYMLINKS 'dir-iterator should not follow symlinks by default' '
+@@ -145,4 +147,27 @@ test_expect_success SYMLINKS 'dir-iterator should follow symlinks w/ follow flag
+ test_cmp expected-follow-sorted-output actual-follow-sorted-output
+ '
+
++test_expect_success SYMLINKS 'dir-iterator does not resolve top-level symlinks' '
++ test_must_fail test-tool dir-iterator ./dir6 >out &&
++
++ grep "ENOTDIR" out
++'
++
++test_expect_success SYMLINKS 'dir-iterator resolves top-level symlinks w/ follow flag' '
++ cat >expected-follow-sorted-output <<-EOF &&
++ [d] (a) [a] ./dir6/a
++ [d] (a/f) [f] ./dir6/a/f
++ [d] (a/f/c) [c] ./dir6/a/f/c
++ [d] (b) [b] ./dir6/b
++ [d] (b/c) [c] ./dir6/b/c
++ [f] (a/d) [d] ./dir6/a/d
++ [f] (a/e) [e] ./dir6/a/e
++ EOF
++
++ test-tool dir-iterator --follow-symlinks ./dir6 >out &&
++ sort out >actual-follow-sorted-output &&
++
++ test_cmp expected-follow-sorted-output actual-follow-sorted-output
++'
++
+ test_done
+diff --git a/t/t5604-clone-reference.sh b/t/t5604-clone-reference.sh
+index 4894237..615b981 100755
+--- a/t/t5604-clone-reference.sh
++++ b/t/t5604-clone-reference.sh
+@@ -354,4 +354,20 @@ test_expect_success SYMLINKS 'clone repo with symlinked or unknown files at obje
+ test_must_be_empty T--shared.objects-symlinks.raw
+ '
+
++test_expect_success SYMLINKS 'clone repo with symlinked objects directory' '
++ test_when_finished "rm -fr sensitive malicious" &&
++
++ mkdir -p sensitive &&
++ echo "secret" >sensitive/file &&
++
++ git init malicious &&
++ rm -fr malicious/.git/objects &&
++ ln -s "$(pwd)/sensitive" ./malicious/.git/objects &&
++
++ test_must_fail git clone --local malicious clone 2>err &&
++
++ test_path_is_missing clone &&
++ grep "failed to start iterator over" err
++'
++
+ test_done
+--
+2.25.1
+
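The core of this change is choosing lstat() over stat() for the starting path unless the caller explicitly asks for symlinks to be followed. A minimal sketch, where open_start() and the FOLLOW_SYMLINKS value are illustrative rather than git's dir-iterator API:

#include <stdio.h>
#include <sys/stat.h>

#define FOLLOW_SYMLINKS 0x1     /* illustrative flag value */

/* Examine the starting path itself unless following links was requested. */
static int open_start(const char *path, unsigned int flags, struct stat *st)
{
    if (flags & FOLLOW_SYMLINKS)
        return stat(path, st);      /* resolve the link */
    return lstat(path, st);         /* look at the link itself */
}

int main(void)
{
    struct stat st;

    /* Without FOLLOW_SYMLINKS a symlinked start path (e.g. /tmp on
     * macOS) is reported as a link, not a directory, and can be refused. */
    if (open_start("/tmp", 0, &st) == 0 && !S_ISDIR(st.st_mode))
        fprintf(stderr, "refusing to iterate a non-directory start path\n");
    return 0;
}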
diff --git a/poky/meta/recipes-devtools/git/files/CVE-2023-23946.patch b/poky/meta/recipes-devtools/git/files/CVE-2023-23946.patch
new file mode 100644
index 0000000000..3629ff57b2
--- /dev/null
+++ b/poky/meta/recipes-devtools/git/files/CVE-2023-23946.patch
@@ -0,0 +1,184 @@
+From fade728df1221598f42d391cf377e9e84a32053f Mon Sep 17 00:00:00 2001
+From: Patrick Steinhardt <ps@pks.im>
+Date: Thu, 2 Feb 2023 11:54:34 +0100
+Subject: [PATCH] apply: fix writing behind newly created symbolic links
+
+When writing files git-apply(1) initially makes sure that none of the
+files it is about to create are behind a symlink:
+
+```
+ $ git init repo
+ Initialized empty Git repository in /tmp/repo/.git/
+ $ cd repo/
+ $ ln -s dir symlink
+ $ git apply - <<EOF
+ diff --git a/symlink/file b/symlink/file
+ new file mode 100644
+ index 0000000..e69de29
+ EOF
+ error: affected file 'symlink/file' is beyond a symbolic link
+```
+
+This safety mechanism is crucial to ensure that we don't write outside
+of the repository's working directory. It can be fooled though when the
+patch that is being applied creates the symbolic link in the first
+place, which can lead to writing files in arbitrary locations.
+
+Fix this by checking whether the path we're about to create is
+beyond a symlink or not. Tightening these checks like this should be
+fine as we already have these precautions in Git as explained
+above. Ideally, we should update the check we do up-front before
+starting to reflect the computed changes to the working tree so that
+we catch this case as well, but as part of embargoed security work,
+adding an equivalent check just before we try to write out a file
+should serve us well as a reasonable first step.
+
+Digging back into history shows that this vulnerability has existed
+since at least Git v2.9.0. As Git v2.8.0 and older no longer build on
+my system, I cannot tell whether older versions are affected as
+well.
+
+Reported-by: Joern Schneeweisz <jschneeweisz@gitlab.com>
+Signed-off-by: Patrick Steinhardt <ps@pks.im>
+Signed-off-by: Junio C Hamano <gitster@pobox.com>
+
+Upstream-Status: Backport
+[https://github.com/git/git/commit/fade728df1221598f42d391cf377e9e84a32053f]
+CVE: CVE-2023-23946
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ apply.c | 27 ++++++++++++++
+ t/t4115-apply-symlink.sh | 81 ++++++++++++++++++++++++++++++++++++++++
+ 2 files changed, 108 insertions(+)
+
+diff --git a/apply.c b/apply.c
+index f8a046a..4f303bf 100644
+--- a/apply.c
++++ b/apply.c
+@@ -4373,6 +4373,33 @@ static int create_one_file(struct apply_state *state,
+ if (state->cached)
+ return 0;
+
++ /*
++ * We already try to detect whether files are beyond a symlink in our
++ * up-front checks. But in the case where symlinks are created by any
++ * of the intermediate hunks it can happen that our up-front checks
++ * didn't yet see the symlink, but at the point of arriving here there
++ * in fact is one. We thus repeat the check for symlinks here.
++ *
++ * Note that this does not make the up-front check obsolete as the
++ * failure mode is different:
++ *
++ * - The up-front checks cause us to abort before we have written
++ * anything into the working directory. So when we exit this way the
++ * working directory remains clean.
++ *
++ * - The checks here happen in the middle of the action where we have
++ * already started to apply the patch. The end result will be a dirty
++ * working directory.
++ *
++ * Ideally, we should update the up-front checks to catch what would
++ * happen when we apply the patch before we damage the working tree.
++ * We have all the information necessary to do so. But for now, as a
++ * part of embargoed security work, having this check would serve as a
++ * reasonable first step.
++ */
++ if (path_is_beyond_symlink(state, path))
++ return error(_("affected file '%s' is beyond a symbolic link"), path);
++
+ res = try_create_file(state, path, mode, buf, size);
+ if (res < 0)
+ return -1;
+diff --git a/t/t4115-apply-symlink.sh b/t/t4115-apply-symlink.sh
+index 872fcda..1acb7b2 100755
+--- a/t/t4115-apply-symlink.sh
++++ b/t/t4115-apply-symlink.sh
+@@ -44,4 +44,85 @@ test_expect_success 'apply --index symlink patch' '
+
+ '
+
++test_expect_success 'symlink setup' '
++ ln -s .git symlink &&
++ git add symlink &&
++ git commit -m "add symlink"
++'
++
++test_expect_success SYMLINKS 'symlink escape when creating new files' '
++ test_when_finished "git reset --hard && git clean -dfx" &&
++
++ cat >patch <<-EOF &&
++ diff --git a/symlink b/renamed-symlink
++ similarity index 100%
++ rename from symlink
++ rename to renamed-symlink
++ --
++ diff --git /dev/null b/renamed-symlink/create-me
++ new file mode 100644
++ index 0000000..039727e
++ --- /dev/null
++ +++ b/renamed-symlink/create-me
++ @@ -0,0 +1,1 @@
++ +busted
++ EOF
++
++ test_must_fail git apply patch 2>stderr &&
++ cat >expected_stderr <<-EOF &&
++ error: affected file ${SQ}renamed-symlink/create-me${SQ} is beyond a symbolic link
++ EOF
++ test_cmp expected_stderr stderr &&
++ ! test_path_exists .git/create-me
++'
++
++test_expect_success SYMLINKS 'symlink escape when modifying file' '
++ test_when_finished "git reset --hard && git clean -dfx" &&
++ touch .git/modify-me &&
++
++ cat >patch <<-EOF &&
++ diff --git a/symlink b/renamed-symlink
++ similarity index 100%
++ rename from symlink
++ rename to renamed-symlink
++ --
++ diff --git a/renamed-symlink/modify-me b/renamed-symlink/modify-me
++ index 1111111..2222222 100644
++ --- a/renamed-symlink/modify-me
++ +++ b/renamed-symlink/modify-me
++ @@ -0,0 +1,1 @@
++ +busted
++ EOF
++
++ test_must_fail git apply patch 2>stderr &&
++ cat >expected_stderr <<-EOF &&
++ error: renamed-symlink/modify-me: No such file or directory
++ EOF
++ test_cmp expected_stderr stderr &&
++ test_must_be_empty .git/modify-me
++'
++
++test_expect_success SYMLINKS 'symlink escape when deleting file' '
++ test_when_finished "git reset --hard && git clean -dfx && rm .git/delete-me" &&
++ touch .git/delete-me &&
++
++ cat >patch <<-EOF &&
++ diff --git a/symlink b/renamed-symlink
++ similarity index 100%
++ rename from symlink
++ rename to renamed-symlink
++ --
++ diff --git a/renamed-symlink/delete-me b/renamed-symlink/delete-me
++ deleted file mode 100644
++ index 1111111..0000000 100644
++ EOF
++
++ test_must_fail git apply patch 2>stderr &&
++ cat >expected_stderr <<-EOF &&
++ error: renamed-symlink/delete-me: No such file or directory
++ EOF
++ test_cmp expected_stderr stderr &&
++ test_path_is_file .git/delete-me
++'
++
+ test_done
+--
+2.25.1
+
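The late re-check added here amounts to walking the leading components of the target path and refusing to write if any of them is a symbolic link. The beyond_symlink() function below is a rough, hypothetical illustration of that idea, not git's path_is_beyond_symlink():

#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

/* Return 1 if any leading component of 'path' is a symbolic link. */
static int beyond_symlink(const char *path)
{
    char buf[4096];
    struct stat st;
    const char *slash = path;

    if (strlen(path) >= sizeof(buf))
        return 1;                       /* be conservative */
    while ((slash = strchr(slash, '/')) != NULL) {
        size_t n = (size_t)(slash - path);

        memcpy(buf, path, n);
        buf[n] = '\0';
        if (lstat(buf, &st) == 0 && S_ISLNK(st.st_mode))
            return 1;
        slash++;
    }
    return 0;
}

int main(void)
{
    const char *target = "renamed-symlink/create-me";

    if (beyond_symlink(target))
        fprintf(stderr, "refusing to write behind a symlink: %s\n", target);
    else
        printf("%s has no symlinked parent here\n", target);
    return 0;
}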
diff --git a/poky/meta/recipes-devtools/git/git.inc b/poky/meta/recipes-devtools/git/git.inc
index b5d0004712..36318eed20 100644
--- a/poky/meta/recipes-devtools/git/git.inc
+++ b/poky/meta/recipes-devtools/git/git.inc
@@ -11,8 +11,24 @@ SRC_URI = "${KERNELORG_MIRROR}/software/scm/git/git-${PV}.tar.gz;name=tarball \
${KERNELORG_MIRROR}/software/scm/git/git-manpages-${PV}.tar.gz;name=manpages \
file://fixsort.patch \
file://CVE-2021-40330.patch \
+ file://CVE-2022-23521.patch \
+ file://CVE-2022-41903-01.patch \
+ file://CVE-2022-41903-02.patch \
+ file://CVE-2022-41903-03.patch \
+ file://CVE-2022-41903-04.patch \
+ file://CVE-2022-41903-05.patch \
+ file://CVE-2022-41903-06.patch \
+ file://CVE-2022-41903-07.patch \
+ file://CVE-2022-41903-08.patch \
+ file://CVE-2022-41903-09.patch \
+ file://CVE-2022-41903-10.patch \
+ file://CVE-2022-41903-11.patch \
+ file://CVE-2022-41903-12.patch \
+ file://CVE-2023-22490-1.patch \
+ file://CVE-2023-22490-2.patch \
+ file://CVE-2023-22490-3.patch \
+ file://CVE-2023-23946.patch \
"
-
S = "${WORKDIR}/git-${PV}"
LIC_FILES_CHKSUM = "file://COPYING;md5=7c0d7ef03a7eb04ce795b0f60e68e7e1"
@@ -23,6 +39,10 @@ CVE_PRODUCT = "git-scm:git"
# in mirrored git repos. Most OE users wouldn't build the docs and
# we don't see this as a major issue for our general users/usecases.
CVE_CHECK_WHITELIST += "CVE-2022-24975"
+# This is specific to Git-for-Windows
+CVE_CHECK_WHITELIST += "CVE-2022-41953"
+# specific to Git for Windows
+CVE_CHECK_WHITELIST += "CVE-2023-22743"
PACKAGECONFIG ??= ""
PACKAGECONFIG[cvsserver] = ""
diff --git a/poky/meta/recipes-devtools/go/go-1.14.inc b/poky/meta/recipes-devtools/go/go-1.14.inc
index 2e1d8240f6..3b99b8fe7e 100644
--- a/poky/meta/recipes-devtools/go/go-1.14.inc
+++ b/poky/meta/recipes-devtools/go/go-1.14.inc
@@ -41,6 +41,23 @@ SRC_URI += "\
file://0002-CVE-2022-32190.patch \
file://0003-CVE-2022-32190.patch \
file://0004-CVE-2022-32190.patch \
+ file://CVE-2022-2880.patch \
+ file://CVE-2022-2879.patch \
+ file://CVE-2021-33195.patch \
+ file://CVE-2021-33198.patch \
+ file://CVE-2021-44716.patch \
+ file://CVE-2022-24921.patch \
+ file://CVE-2022-28131.patch \
+ file://CVE-2022-28327.patch \
+ file://CVE-2022-41715.patch \
+ file://CVE-2022-41717.patch \
+ file://CVE-2022-1962.patch \
+ file://CVE-2022-41723.patch \
+ file://CVE-2022-41722-1.patch \
+ file://CVE-2022-41722-2.patch \
+ file://CVE-2020-29510.patch \
+ file://CVE-2023-24537.patch \
+ file://CVE-2023-24534.patch \
"
SRC_URI_append_libc-musl = " file://0009-ld-replace-glibc-dynamic-linker-with-musl.patch"
@@ -56,4 +73,21 @@ CVE_CHECK_WHITELIST += "CVE-2021-29923"
CVE_CHECK_WHITELIST += "CVE-2022-29526"
# Issue only on windows
+CVE_CHECK_WHITELIST += "CVE-2022-29804"
+CVE_CHECK_WHITELIST += "CVE-2022-30580"
CVE_CHECK_WHITELIST += "CVE-2022-30634"
+
+# Issue is in golang.org/x/net/html/parse.go, not used in go compiler
+CVE_CHECK_WHITELIST += "CVE-2021-33194"
+
+# Issue introduced in go1.16, does not exist in 1.14
+CVE_CHECK_WHITELIST += "CVE-2021-41772"
+
+# Fixes code that was added in go1.16, does not exist in 1.14
+CVE_CHECK_WHITELIST += "CVE-2022-30630"
+
+# This is specific to Microsoft Windows
+CVE_CHECK_WHITELIST += "CVE-2022-41716"
+
+# Issue introduced in go1.15beta1, does not exist in 1.14
+CVE_CHECK_WHITELIST += "CVE-2022-1705"
diff --git a/poky/meta/recipes-devtools/go/go-1.14/CVE-2020-29510.patch b/poky/meta/recipes-devtools/go/go-1.14/CVE-2020-29510.patch
new file mode 100644
index 0000000000..e1c9e0bdb9
--- /dev/null
+++ b/poky/meta/recipes-devtools/go/go-1.14/CVE-2020-29510.patch
@@ -0,0 +1,65 @@
+From a0bf4d38dc2057d28396594264bbdd43d412de22 Mon Sep 17 00:00:00 2001
+From: Filippo Valsorda <filippo@golang.org>
+Date: Tue, 27 Oct 2020 00:21:30 +0100
+Subject: [PATCH] encoding/xml: replace comments inside directives with a space
+
+A Directive (like <!ENTITY xxx []>) can't have other nodes nested inside
+it (in our data structure representation), so there is no way to
+preserve comments. The previous behavior was to just elide them, which
+however might change the semantic meaning of the surrounding markup.
+Instead, replace them with a space which hopefully has the same semantic
+effect as the comment.
+
+Directives are not actually a node type in the XML spec, which instead
+specifies each of them separately (<!ENTITY, <!DOCTYPE, etc.), each with
+its own grammar. The rules for where and when the comments are allowed
+are not straightforward, and can't be implemented without implementing
+custom logic for each of the directives.
+
+Simply preserving the comments in the body of the directive would be
+problematic, as there can be unmatched quotes inside the comment.
+Whether those quotes are considered meaningful semantically or not,
+other parsers might disagree and interpret the output differently.
+
+This issue was reported by Juho Nurminen of Mattermost as it leads to
+round-trip mismatches. See #43168. It's not being fixed in a security
+release because round-trip stability is not a currently supported
+security property of encoding/xml, and we don't believe these fixes
+would be sufficient to reliably guarantee it in the future.
+
+Fixes CVE-2020-29510
+Updates #43168
+
+Change-Id: Icd86c75beff3e1e0689543efebdad10ed5178ce3
+Reviewed-on: https://go-review.googlesource.com/c/go/+/277893
+Run-TryBot: Filippo Valsorda <filippo@golang.org>
+TryBot-Result: Go Bot <gobot@golang.org>
+Trust: Filippo Valsorda <filippo@golang.org>
+Reviewed-by: Katie Hockman <katie@golang.org>
+
+Upstream-Status: Backport from https://github.com/golang/go/commit/a9cfd55e2b09735a25976d1b008a0a3c767494f8
+CVE: CVE-2020-29510
+Signed-off-by: Shubham Kulkarni <skulkarni@mvista.com>
+---
+ src/encoding/xml/xml.go | 6 ++++++
+ 1 file changed, 6 insertions(+)
+
+diff --git a/src/encoding/xml/xml.go b/src/encoding/xml/xml.go
+index 01a1460..98647b2 100644
+--- a/src/encoding/xml/xml.go
++++ b/src/encoding/xml/xml.go
+@@ -768,6 +768,12 @@ func (d *Decoder) rawToken() (Token, error) {
+ }
+ b0, b1 = b1, b
+ }
++
++ // Replace the comment with a space in the returned Directive
++ // body, so that markup parts that were separated by the comment
++ // (like a "<" and a "!") don't get joined when re-encoding the
++ // Directive, taking new semantic meaning.
++ d.buf.WriteByte(' ')
+ }
+ }
+ return Directive(d.buf.Bytes()), nil
+--
+2.7.4
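
As a quick, standalone illustration of the behaviour described in the commit message above (this program is not part of the patch, and the sample document is made up): encoding/xml reports <!...> blocks as Directive tokens, and with the rawToken change a comment inside the directive body is replaced by a single space rather than silently dropped, so markup parts that the comment separated are not joined when the directive is re-encoded.

package main

import (
	"encoding/xml"
	"fmt"
	"io"
	"strings"
)

func main() {
	// A DOCTYPE directive with a comment wedged between two markup parts.
	const doc = `<!DOCTYPE example [<!-- note --><!ENTITY x "y">]><root/>`
	dec := xml.NewDecoder(strings.NewReader(doc))
	for {
		tok, err := dec.Token()
		if err == io.EOF {
			break
		}
		if err != nil {
			fmt.Println("parse error:", err)
			return
		}
		// With the patched decoder, the printed body should contain a
		// space where the comment used to be, so the surrounding markup
		// parts are not fused together on re-encoding.
		if d, ok := tok.(xml.Directive); ok {
			fmt.Printf("directive: %q\n", string(d))
		}
	}
}
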
diff --git a/poky/meta/recipes-devtools/go/go-1.14/CVE-2021-33195.patch b/poky/meta/recipes-devtools/go/go-1.14/CVE-2021-33195.patch
new file mode 100644
index 0000000000..3d9de888ff
--- /dev/null
+++ b/poky/meta/recipes-devtools/go/go-1.14/CVE-2021-33195.patch
@@ -0,0 +1,373 @@
+From 9324d7e53151e9dfa4b25af994a28c2e0b11f729 Mon Sep 17 00:00:00 2001
+From: Roland Shoemaker <roland@golang.org>
+Date: Thu, 27 May 2021 10:40:06 -0700
+Subject: [PATCH] net: verify results from Lookup* are valid domain names
+
+Upstream-Status: Backport [https://github.com/golang/go/commit/31d60cda1f58b7558fc5725d2b9e4531655d980e]
+CVE: CVE-2021-33195
+Signed-off-by: Ralph Siemsen <ralph.siemsen@linaro.org>
+
+
+For the methods LookupCNAME, LookupSRV, LookupMX, LookupNS, and
+LookupAddr check that the returned domain names are in fact valid DNS
+names using the existing isDomainName function.
+
+Thanks to Philipp Jeitner and Haya Shulman from Fraunhofer SIT for
+reporting this issue.
+
+Updates #46241
+Fixes #46356
+Fixes CVE-2021-33195
+
+Change-Id: I47a4f58c031cb752f732e88bbdae7f819f0af4f3
+Reviewed-on: https://go-review.googlesource.com/c/go/+/323131
+Trust: Roland Shoemaker <roland@golang.org>
+Run-TryBot: Roland Shoemaker <roland@golang.org>
+TryBot-Result: Go Bot <gobot@golang.org>
+Reviewed-by: Filippo Valsorda <filippo@golang.org>
+Reviewed-by: Katie Hockman <katie@golang.org>
+(cherry picked from commit cdcd02842da7c004efd023881e3719105209c908)
+Reviewed-on: https://go-review.googlesource.com/c/go/+/323269
+---
+ src/net/dnsclient_unix_test.go | 157 +++++++++++++++++++++++++++++++++
+ src/net/lookup.go | 111 ++++++++++++++++++++---
+ 2 files changed, 255 insertions(+), 13 deletions(-)
+
+diff --git a/src/net/dnsclient_unix_test.go b/src/net/dnsclient_unix_test.go
+index 2ad40df..b8617d9 100644
+--- a/src/net/dnsclient_unix_test.go
++++ b/src/net/dnsclient_unix_test.go
+@@ -1800,3 +1800,160 @@ func TestPTRandNonPTR(t *testing.T) {
+ t.Errorf("names = %q; want %q", names, want)
+ }
+ }
++
++func TestCVE202133195(t *testing.T) {
++ fake := fakeDNSServer{
++ rh: func(n, _ string, q dnsmessage.Message, _ time.Time) (dnsmessage.Message, error) {
++ r := dnsmessage.Message{
++ Header: dnsmessage.Header{
++ ID: q.Header.ID,
++ Response: true,
++ RCode: dnsmessage.RCodeSuccess,
++ RecursionAvailable: true,
++ },
++ Questions: q.Questions,
++ }
++ switch q.Questions[0].Type {
++ case dnsmessage.TypeCNAME:
++ r.Answers = []dnsmessage.Resource{}
++ case dnsmessage.TypeA: // CNAME lookup uses a A/AAAA as a proxy
++ r.Answers = append(r.Answers,
++ dnsmessage.Resource{
++ Header: dnsmessage.ResourceHeader{
++ Name: dnsmessage.MustNewName("<html>.golang.org."),
++ Type: dnsmessage.TypeA,
++ Class: dnsmessage.ClassINET,
++ Length: 4,
++ },
++ Body: &dnsmessage.AResource{
++ A: TestAddr,
++ },
++ },
++ )
++ case dnsmessage.TypeSRV:
++ n := q.Questions[0].Name
++ if n.String() == "_hdr._tcp.golang.org." {
++ n = dnsmessage.MustNewName("<html>.golang.org.")
++ }
++ r.Answers = append(r.Answers,
++ dnsmessage.Resource{
++ Header: dnsmessage.ResourceHeader{
++ Name: n,
++ Type: dnsmessage.TypeSRV,
++ Class: dnsmessage.ClassINET,
++ Length: 4,
++ },
++ Body: &dnsmessage.SRVResource{
++ Target: dnsmessage.MustNewName("<html>.golang.org."),
++ },
++ },
++ )
++ case dnsmessage.TypeMX:
++ r.Answers = append(r.Answers,
++ dnsmessage.Resource{
++ Header: dnsmessage.ResourceHeader{
++ Name: dnsmessage.MustNewName("<html>.golang.org."),
++ Type: dnsmessage.TypeMX,
++ Class: dnsmessage.ClassINET,
++ Length: 4,
++ },
++ Body: &dnsmessage.MXResource{
++ MX: dnsmessage.MustNewName("<html>.golang.org."),
++ },
++ },
++ )
++ case dnsmessage.TypeNS:
++ r.Answers = append(r.Answers,
++ dnsmessage.Resource{
++ Header: dnsmessage.ResourceHeader{
++ Name: dnsmessage.MustNewName("<html>.golang.org."),
++ Type: dnsmessage.TypeNS,
++ Class: dnsmessage.ClassINET,
++ Length: 4,
++ },
++ Body: &dnsmessage.NSResource{
++ NS: dnsmessage.MustNewName("<html>.golang.org."),
++ },
++ },
++ )
++ case dnsmessage.TypePTR:
++ r.Answers = append(r.Answers,
++ dnsmessage.Resource{
++ Header: dnsmessage.ResourceHeader{
++ Name: dnsmessage.MustNewName("<html>.golang.org."),
++ Type: dnsmessage.TypePTR,
++ Class: dnsmessage.ClassINET,
++ Length: 4,
++ },
++ Body: &dnsmessage.PTRResource{
++ PTR: dnsmessage.MustNewName("<html>.golang.org."),
++ },
++ },
++ )
++ }
++ return r, nil
++ },
++ }
++
++ r := Resolver{PreferGo: true, Dial: fake.DialContext}
++ // Change the default resolver to match our manipulated resolver
++ originalDefault := DefaultResolver
++ DefaultResolver = &r
++ defer func() {
++ DefaultResolver = originalDefault
++ }()
++
++ _, err := r.LookupCNAME(context.Background(), "golang.org")
++ if expected := "lookup golang.org: CNAME target is invalid"; err == nil || err.Error() != expected {
++ t.Errorf("Resolver.LookupCNAME returned unexpected error, got %q, want %q", err.Error(), expected)
++ }
++ _, err = LookupCNAME("golang.org")
++ if expected := "lookup golang.org: CNAME target is invalid"; err == nil || err.Error() != expected {
++ t.Errorf("LookupCNAME returned unexpected error, got %q, want %q", err.Error(), expected)
++ }
++
++ _, _, err = r.LookupSRV(context.Background(), "target", "tcp", "golang.org")
++ if expected := "lookup golang.org: SRV target is invalid"; err == nil || err.Error() != expected {
++ t.Errorf("Resolver.LookupSRV returned unexpected error, got %q, want %q", err.Error(), expected)
++ }
++ _, _, err = LookupSRV("target", "tcp", "golang.org")
++ if expected := "lookup golang.org: SRV target is invalid"; err == nil || err.Error() != expected {
++ t.Errorf("LookupSRV returned unexpected error, got %q, want %q", err.Error(), expected)
++ }
++
++ _, _, err = r.LookupSRV(context.Background(), "hdr", "tcp", "golang.org")
++ if expected := "lookup golang.org: SRV header name is invalid"; err == nil || err.Error() != expected {
++ t.Errorf("Resolver.LookupSRV returned unexpected error, got %q, want %q", err.Error(), expected)
++ }
++ _, _, err = LookupSRV("hdr", "tcp", "golang.org")
++ if expected := "lookup golang.org: SRV header name is invalid"; err == nil || err.Error() != expected {
++ t.Errorf("LookupSRV returned unexpected error, got %q, want %q", err.Error(), expected)
++ }
++
++ _, err = r.LookupMX(context.Background(), "golang.org")
++ if expected := "lookup golang.org: MX target is invalid"; err == nil || err.Error() != expected {
++ t.Errorf("Resolver.LookupMX returned unexpected error, got %q, want %q", err.Error(), expected)
++ }
++ _, err = LookupMX("golang.org")
++ if expected := "lookup golang.org: MX target is invalid"; err == nil || err.Error() != expected {
++ t.Errorf("LookupMX returned unexpected error, got %q, want %q", err.Error(), expected)
++ }
++
++ _, err = r.LookupNS(context.Background(), "golang.org")
++ if expected := "lookup golang.org: NS target is invalid"; err == nil || err.Error() != expected {
++ t.Errorf("Resolver.LookupNS returned unexpected error, got %q, want %q", err.Error(), expected)
++ }
++ _, err = LookupNS("golang.org")
++ if expected := "lookup golang.org: NS target is invalid"; err == nil || err.Error() != expected {
++ t.Errorf("LookupNS returned unexpected error, got %q, want %q", err.Error(), expected)
++ }
++
++ _, err = r.LookupAddr(context.Background(), "1.2.3.4")
++ if expected := "lookup 1.2.3.4: PTR target is invalid"; err == nil || err.Error() != expected {
++ t.Errorf("Resolver.LookupAddr returned unexpected error, got %q, want %q", err.Error(), expected)
++ }
++ _, err = LookupAddr("1.2.3.4")
++ if expected := "lookup 1.2.3.4: PTR target is invalid"; err == nil || err.Error() != expected {
++ t.Errorf("LookupAddr returned unexpected error, got %q, want %q", err.Error(), expected)
++ }
++}
+diff --git a/src/net/lookup.go b/src/net/lookup.go
+index 9cebd10..05e88e4 100644
+--- a/src/net/lookup.go
++++ b/src/net/lookup.go
+@@ -364,8 +364,11 @@ func (r *Resolver) LookupPort(ctx context.Context, network, service string) (por
+ // LookupCNAME does not return an error if host does not
+ // contain DNS "CNAME" records, as long as host resolves to
+ // address records.
++//
++// The returned canonical name is validated to be a properly
++// formatted presentation-format domain name.
+ func LookupCNAME(host string) (cname string, err error) {
+- return DefaultResolver.lookupCNAME(context.Background(), host)
++ return DefaultResolver.LookupCNAME(context.Background(), host)
+ }
+
+ // LookupCNAME returns the canonical name for the given host.
+@@ -378,8 +381,18 @@ func LookupCNAME(host string) (cname string, err error) {
+ // LookupCNAME does not return an error if host does not
+ // contain DNS "CNAME" records, as long as host resolves to
+ // address records.
+-func (r *Resolver) LookupCNAME(ctx context.Context, host string) (cname string, err error) {
+- return r.lookupCNAME(ctx, host)
++//
++// The returned canonical name is validated to be a properly
++// formatted presentation-format domain name.
++func (r *Resolver) LookupCNAME(ctx context.Context, host string) (string, error) {
++ cname, err := r.lookupCNAME(ctx, host)
++ if err != nil {
++ return "", err
++ }
++ if !isDomainName(cname) {
++ return "", &DNSError{Err: "CNAME target is invalid", Name: host}
++ }
++ return cname, nil
+ }
+
+ // LookupSRV tries to resolve an SRV query of the given service,
+@@ -391,8 +404,11 @@ func (r *Resolver) LookupCNAME(ctx context.Context, host string) (cname string,
+ // That is, it looks up _service._proto.name. To accommodate services
+ // publishing SRV records under non-standard names, if both service
+ // and proto are empty strings, LookupSRV looks up name directly.
++//
++// The returned service names are validated to be properly
++// formatted presentation-format domain names.
+ func LookupSRV(service, proto, name string) (cname string, addrs []*SRV, err error) {
+- return DefaultResolver.lookupSRV(context.Background(), service, proto, name)
++ return DefaultResolver.LookupSRV(context.Background(), service, proto, name)
+ }
+
+ // LookupSRV tries to resolve an SRV query of the given service,
+@@ -404,28 +420,82 @@ func LookupSRV(service, proto, name string) (cname string, addrs []*SRV, err err
+ // That is, it looks up _service._proto.name. To accommodate services
+ // publishing SRV records under non-standard names, if both service
+ // and proto are empty strings, LookupSRV looks up name directly.
+-func (r *Resolver) LookupSRV(ctx context.Context, service, proto, name string) (cname string, addrs []*SRV, err error) {
+- return r.lookupSRV(ctx, service, proto, name)
++//
++// The returned service names are validated to be properly
++// formatted presentation-format domain names.
++func (r *Resolver) LookupSRV(ctx context.Context, service, proto, name string) (string, []*SRV, error) {
++ cname, addrs, err := r.lookupSRV(ctx, service, proto, name)
++ if err != nil {
++ return "", nil, err
++ }
++ if cname != "" && !isDomainName(cname) {
++ return "", nil, &DNSError{Err: "SRV header name is invalid", Name: name}
++ }
++ for _, addr := range addrs {
++ if addr == nil {
++ continue
++ }
++ if !isDomainName(addr.Target) {
++ return "", nil, &DNSError{Err: "SRV target is invalid", Name: name}
++ }
++ }
++ return cname, addrs, nil
+ }
+
+ // LookupMX returns the DNS MX records for the given domain name sorted by preference.
++//
++// The returned mail server names are validated to be properly
++// formatted presentation-format domain names.
+ func LookupMX(name string) ([]*MX, error) {
+- return DefaultResolver.lookupMX(context.Background(), name)
++ return DefaultResolver.LookupMX(context.Background(), name)
+ }
+
+ // LookupMX returns the DNS MX records for the given domain name sorted by preference.
++//
++// The returned mail server names are validated to be properly
++// formatted presentation-format domain names.
+ func (r *Resolver) LookupMX(ctx context.Context, name string) ([]*MX, error) {
+- return r.lookupMX(ctx, name)
++ records, err := r.lookupMX(ctx, name)
++ if err != nil {
++ return nil, err
++ }
++ for _, mx := range records {
++ if mx == nil {
++ continue
++ }
++ if !isDomainName(mx.Host) {
++ return nil, &DNSError{Err: "MX target is invalid", Name: name}
++ }
++ }
++ return records, nil
+ }
+
+ // LookupNS returns the DNS NS records for the given domain name.
++//
++// The returned name server names are validated to be properly
++// formatted presentation-format domain names.
+ func LookupNS(name string) ([]*NS, error) {
+- return DefaultResolver.lookupNS(context.Background(), name)
++ return DefaultResolver.LookupNS(context.Background(), name)
+ }
+
+ // LookupNS returns the DNS NS records for the given domain name.
++//
++// The returned name server names are validated to be properly
++// formatted presentation-format domain names.
+ func (r *Resolver) LookupNS(ctx context.Context, name string) ([]*NS, error) {
+- return r.lookupNS(ctx, name)
++ records, err := r.lookupNS(ctx, name)
++ if err != nil {
++ return nil, err
++ }
++ for _, ns := range records {
++ if ns == nil {
++ continue
++ }
++ if !isDomainName(ns.Host) {
++ return nil, &DNSError{Err: "NS target is invalid", Name: name}
++ }
++ }
++ return records, nil
+ }
+
+ // LookupTXT returns the DNS TXT records for the given domain name.
+@@ -441,14 +511,29 @@ func (r *Resolver) LookupTXT(ctx context.Context, name string) ([]string, error)
+ // LookupAddr performs a reverse lookup for the given address, returning a list
+ // of names mapping to that address.
+ //
++// The returned names are validated to be properly formatted presentation-format
++// domain names.
++//
+ // When using the host C library resolver, at most one result will be
+ // returned. To bypass the host resolver, use a custom Resolver.
+ func LookupAddr(addr string) (names []string, err error) {
+- return DefaultResolver.lookupAddr(context.Background(), addr)
++ return DefaultResolver.LookupAddr(context.Background(), addr)
+ }
+
+ // LookupAddr performs a reverse lookup for the given address, returning a list
+ // of names mapping to that address.
+-func (r *Resolver) LookupAddr(ctx context.Context, addr string) (names []string, err error) {
+- return r.lookupAddr(ctx, addr)
++//
++// The returned names are validated to be properly formatted presentation-format
++// domain names.
++func (r *Resolver) LookupAddr(ctx context.Context, addr string) ([]string, error) {
++ names, err := r.lookupAddr(ctx, addr)
++ if err != nil {
++ return nil, err
++ }
++ for _, name := range names {
++ if !isDomainName(name) {
++ return nil, &DNSError{Err: "PTR target is invalid", Name: addr}
++ }
++ }
++ return names, nil
+ }
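
To make the intent of those wrappers concrete, here is a small self-contained sketch (not the net package code: isDomainName is unexported, so looksLikeDomainName below is a deliberately simplified stand-in) showing the kind of check the patched Lookup* methods now apply to resolver results before handing them to callers.

package main

import (
	"fmt"
	"strings"
)

// looksLikeDomainName is a loose approximation of the validation the patch
// performs via isDomainName; it is not the stdlib implementation.
func looksLikeDomainName(s string) bool {
	if s == "" || len(s) > 254 {
		return false
	}
	for _, label := range strings.Split(strings.TrimSuffix(s, "."), ".") {
		if label == "" || len(label) > 63 {
			return false
		}
		for _, r := range label {
			ok := r == '-' || r == '_' ||
				(r >= 'a' && r <= 'z') || (r >= 'A' && r <= 'Z') ||
				(r >= '0' && r <= '9')
			if !ok {
				return false
			}
		}
	}
	return true
}

func main() {
	fmt.Println(looksLikeDomainName("golang.org."))       // true
	fmt.Println(looksLikeDomainName("<html>.golang.org")) // false: rejected like the test cases above
}
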
diff --git a/poky/meta/recipes-devtools/go/go-1.14/CVE-2021-33198.patch b/poky/meta/recipes-devtools/go/go-1.14/CVE-2021-33198.patch
new file mode 100644
index 0000000000..241c08dad7
--- /dev/null
+++ b/poky/meta/recipes-devtools/go/go-1.14/CVE-2021-33198.patch
@@ -0,0 +1,113 @@
+From c8866491ac424cdf39aedb325e6dec9e54418cfb Mon Sep 17 00:00:00 2001
+From: Robert Griesemer <gri@golang.org>
+Date: Sun, 2 May 2021 11:27:03 -0700
+Subject: [PATCH] math/big: check for excessive exponents in Rat.SetString
+
+CVE-2021-33198
+
+Upstream-Status: Backport [https://github.com/golang/go/commit/df9ce19db6df32d94eae8760927bdfbc595433c3]
+CVE: CVE-2021-33198
+Signed-off-by: Ralph Siemsen <ralph.siemsen@linaro.org>
+
+
+Found by OSS-Fuzz https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=33284
+
+Thanks to Emmanuel Odeke for reporting this issue.
+
+Updates #45910
+Fixes #46305
+Fixes CVE-2021-33198
+
+Change-Id: I61e7b04dbd80343420b57eede439e361c0f7b79c
+Reviewed-on: https://go-review.googlesource.com/c/go/+/316149
+Trust: Robert Griesemer <gri@golang.org>
+Trust: Katie Hockman <katie@golang.org>
+Run-TryBot: Robert Griesemer <gri@golang.org>
+TryBot-Result: Go Bot <gobot@golang.org>
+Reviewed-by: Katie Hockman <katie@golang.org>
+Reviewed-by: Emmanuel Odeke <emmanuel@orijtech.com>
+(cherry picked from commit 6c591f79b0b5327549bd4e94970f7a279efb4ab0)
+Reviewed-on: https://go-review.googlesource.com/c/go/+/321831
+Run-TryBot: Katie Hockman <katie@golang.org>
+Reviewed-by: Roland Shoemaker <roland@golang.org>
+---
+ src/math/big/ratconv.go | 15 ++++++++-------
+ src/math/big/ratconv_test.go | 25 +++++++++++++++++++++++++
+ 2 files changed, 33 insertions(+), 7 deletions(-)
+
+diff --git a/src/math/big/ratconv.go b/src/math/big/ratconv.go
+index e8cbdbe..90053a9 100644
+--- a/src/math/big/ratconv.go
++++ b/src/math/big/ratconv.go
+@@ -51,7 +51,8 @@ func (z *Rat) Scan(s fmt.ScanState, ch rune) error {
+ // An optional base-10 ``e'' or base-2 ``p'' (or their upper-case variants)
+ // exponent may be provided as well, except for hexadecimal floats which
+ // only accept an (optional) ``p'' exponent (because an ``e'' or ``E'' cannot
+-// be distinguished from a mantissa digit).
++// be distinguished from a mantissa digit). If the exponent's absolute value
++// is too large, the operation may fail.
+ // The entire string, not just a prefix, must be valid for success. If the
+ // operation failed, the value of z is undefined but the returned value is nil.
+ func (z *Rat) SetString(s string) (*Rat, bool) {
+@@ -174,6 +175,9 @@ func (z *Rat) SetString(s string) (*Rat, bool) {
+ return nil, false
+ }
+ }
++ if n > 1e6 {
++ return nil, false // avoid excessively large exponents
++ }
+ pow5 := z.b.abs.expNN(natFive, nat(nil).setWord(Word(n)), nil) // use underlying array of z.b.abs
+ if exp5 > 0 {
+ z.a.abs = z.a.abs.mul(z.a.abs, pow5)
+@@ -186,15 +190,12 @@ func (z *Rat) SetString(s string) (*Rat, bool) {
+ }
+
+ // apply exp2 contributions
++ if exp2 < -1e7 || exp2 > 1e7 {
++ return nil, false // avoid excessively large exponents
++ }
+ if exp2 > 0 {
+- if int64(uint(exp2)) != exp2 {
+- panic("exponent too large")
+- }
+ z.a.abs = z.a.abs.shl(z.a.abs, uint(exp2))
+ } else if exp2 < 0 {
+- if int64(uint(-exp2)) != -exp2 {
+- panic("exponent too large")
+- }
+ z.b.abs = z.b.abs.shl(z.b.abs, uint(-exp2))
+ }
+
+diff --git a/src/math/big/ratconv_test.go b/src/math/big/ratconv_test.go
+index b820df4..e55e655 100644
+--- a/src/math/big/ratconv_test.go
++++ b/src/math/big/ratconv_test.go
+@@ -590,3 +590,28 @@ func TestIssue31184(t *testing.T) {
+ }
+ }
+ }
++
++func TestIssue45910(t *testing.T) {
++ var x Rat
++ for _, test := range []struct {
++ input string
++ want bool
++ }{
++ {"1e-1000001", false},
++ {"1e-1000000", true},
++ {"1e+1000000", true},
++ {"1e+1000001", false},
++
++ {"0p1000000000000", true},
++ {"1p-10000001", false},
++ {"1p-10000000", true},
++ {"1p+10000000", true},
++ {"1p+10000001", false},
++ {"1.770p02041010010011001001", false}, // test case from issue
++ } {
++ _, got := x.SetString(test.input)
++ if got != test.want {
++ t.Errorf("SetString(%s) got ok = %v; want %v", test.input, got, test.want)
++ }
++ }
++}
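
A short usage sketch of the new limit (run against a patched math/big; the expected results mirror the test table above): SetString now reports failure for exponents whose absolute value exceeds the bound instead of trying to materialize them.

package main

import (
	"fmt"
	"math/big"
)

func main() {
	var x big.Rat
	_, ok := x.SetString("1e-1000001") // decimal exponent beyond the 1e6 bound
	fmt.Println("accepted:", ok)       // expected on patched math/big: false
	_, ok = x.SetString("1e-100")      // well within the bound
	fmt.Println("accepted:", ok)       // expected: true
}
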
diff --git a/poky/meta/recipes-devtools/go/go-1.14/CVE-2021-44716.patch b/poky/meta/recipes-devtools/go/go-1.14/CVE-2021-44716.patch
new file mode 100644
index 0000000000..9c4fee2db4
--- /dev/null
+++ b/poky/meta/recipes-devtools/go/go-1.14/CVE-2021-44716.patch
@@ -0,0 +1,93 @@
+From 9f1860075990e7bf908ca7cc329d1d3ef91741c8 Mon Sep 17 00:00:00 2001
+From: Filippo Valsorda <filippo@golang.org>
+Date: Thu, 9 Dec 2021 06:13:31 -0500
+Subject: [PATCH] net/http: update bundled golang.org/x/net/http2
+
+Upstream-Status: Backport [https://github.com/golang/go/commit/d0aebe3e74fe14799f97ddd3f01129697c6a290a]
+CVE: CVE-2021-44716
+Signed-off-by: Ralph Siemsen <ralph.siemsen@linaro.org>
+
+
+Pull in security fix
+
+ a5309b3 http2: cap the size of the server's canonical header cache
+
+Updates #50058
+Fixes CVE-2021-44716
+
+Change-Id: Ifdd13f97fce168de5fb4b2e74ef2060d059800b9
+Reviewed-on: https://go-review.googlesource.com/c/go/+/370575
+Trust: Filippo Valsorda <filippo@golang.org>
+Run-TryBot: Filippo Valsorda <filippo@golang.org>
+Reviewed-by: Alex Rakoczy <alex@golang.org>
+TryBot-Result: Gopher Robot <gobot@golang.org>
+(cherry picked from commit d0aebe3e74fe14799f97ddd3f01129697c6a290a)
+---
+ src/go.mod | 2 +-
+ src/go.sum | 4 ++--
+ src/net/http/h2_bundle.go | 10 +++++++++-
+ src/vendor/modules.txt | 2 +-
+ 4 files changed, 13 insertions(+), 5 deletions(-)
+
+diff --git a/src/go.mod b/src/go.mod
+index ec6bd98..56f2fbb 100644
+--- a/src/go.mod
++++ b/src/go.mod
+@@ -4,7 +4,7 @@ go 1.14
+
+ require (
+ golang.org/x/crypto v0.0.0-20200128174031-69ecbb4d6d5d
+- golang.org/x/net v0.0.0-20210129194117-4acb7895a057
++ golang.org/x/net v0.0.0-20211209100217-a5309b321dca
+ golang.org/x/sys v0.0.0-20200201011859-915c9c3d4ccf // indirect
+ golang.org/x/text v0.3.3-0.20191031172631-4b67af870c6f // indirect
+ )
+diff --git a/src/go.sum b/src/go.sum
+index 171e083..1ceba05 100644
+--- a/src/go.sum
++++ b/src/go.sum
+@@ -2,8 +2,8 @@ golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACk
+ golang.org/x/crypto v0.0.0-20200128174031-69ecbb4d6d5d h1:9FCpayM9Egr1baVnV1SX0H87m+XB0B8S0hAMi99X/3U=
+ golang.org/x/crypto v0.0.0-20200128174031-69ecbb4d6d5d/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+ golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+-golang.org/x/net v0.0.0-20210129194117-4acb7895a057 h1:HThQeV5c0Ab/Puir+q6mC97b7+3dfZdsLWMLoBrzo68=
+-golang.org/x/net v0.0.0-20210129194117-4acb7895a057/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
++golang.org/x/net v0.0.0-20211209100217-a5309b321dca h1:UmeWAm8AwB6NA/e4FSaGlK1EKTLXKX3utx4Si+6kfPg=
++golang.org/x/net v0.0.0-20211209100217-a5309b321dca/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
+ golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+ golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+ golang.org/x/sys v0.0.0-20200201011859-915c9c3d4ccf h1:+4j7oujXP478CVb/AFvHJmVX5+Pczx2NGts5yirA0oY=
+diff --git a/src/net/http/h2_bundle.go b/src/net/http/h2_bundle.go
+index 702fd5a..83f2a72 100644
+--- a/src/net/http/h2_bundle.go
++++ b/src/net/http/h2_bundle.go
+@@ -4293,7 +4293,15 @@ func (sc *http2serverConn) canonicalHeader(v string) string {
+ sc.canonHeader = make(map[string]string)
+ }
+ cv = CanonicalHeaderKey(v)
+- sc.canonHeader[v] = cv
++ // maxCachedCanonicalHeaders is an arbitrarily-chosen limit on the number of
++ // entries in the canonHeader cache. This should be larger than the number
++ // of unique, uncommon header keys likely to be sent by the peer, while not
++ // so high as to permit unreasonable memory usage if the peer sends an unbounded
++ // number of unique header keys.
++ const maxCachedCanonicalHeaders = 32
++ if len(sc.canonHeader) < maxCachedCanonicalHeaders {
++ sc.canonHeader[v] = cv
++ }
+ return cv
+ }
+
+diff --git a/src/vendor/modules.txt b/src/vendor/modules.txt
+index 669bd9b..1d67183 100644
+--- a/src/vendor/modules.txt
++++ b/src/vendor/modules.txt
+@@ -8,7 +8,7 @@ golang.org/x/crypto/curve25519
+ golang.org/x/crypto/hkdf
+ golang.org/x/crypto/internal/subtle
+ golang.org/x/crypto/poly1305
+-# golang.org/x/net v0.0.0-20210129194117-4acb7895a057
++# golang.org/x/net v0.0.0-20211209100217-a5309b321dca
+ ## explicit
+ golang.org/x/net/dns/dnsmessage
+ golang.org/x/net/http/httpguts
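
The general mitigation pattern in that hunk is worth spelling out on its own: cap an internal memoization map so attacker-controlled keys cannot grow it without bound. The sketch below is illustrative only (canonicalKey, canonCache and the 32-entry cap are stand-ins, not the net/http internals), but it uses the same idea: past the cap, keep answering correctly, just stop caching new entries.

package main

import (
	"fmt"
	"net/http"
)

const maxCachedKeys = 32 // same order of magnitude as the patch's limit

var canonCache = make(map[string]string)

func canonicalKey(v string) string {
	if cv, ok := canonCache[v]; ok {
		return cv
	}
	cv := http.CanonicalHeaderKey(v)
	// Only memoize while the cache is small; beyond the cap we still
	// return the correct answer, we just stop remembering new keys.
	if len(canonCache) < maxCachedKeys {
		canonCache[v] = cv
	}
	return cv
}

func main() {
	fmt.Println(canonicalKey("x-request-id")) // X-Request-Id
}
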
diff --git a/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-1962.patch b/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-1962.patch
new file mode 100644
index 0000000000..b2ab5d0669
--- /dev/null
+++ b/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-1962.patch
@@ -0,0 +1,357 @@
+From ba8788ebcead55e99e631c6a1157ad7b35535d11 Mon Sep 17 00:00:00 2001
+From: Roland Shoemaker <bracewell@google.com>
+Date: Wed, 15 Jun 2022 10:43:05 -0700
+Subject: [PATCH] [release-branch.go1.17] go/parser: limit recursion depth
+
+Limit nested parsing to 100,000, which prevents stack exhaustion when
+parsing deeply nested statements, types, and expressions. Also limit
+the scope depth to 1,000 during object resolution.
+
+Thanks to Juho Nurminen of Mattermost for reporting this issue.
+
+Fixes #53707
+Updates #53616
+Fixes CVE-2022-1962
+
+Change-Id: I4d7b86c1d75d0bf3c7af1fdea91582aa74272c64
+Reviewed-on: https://team-review.git.corp.google.com/c/golang/go-private/+/1491025
+Reviewed-by: Russ Cox <rsc@google.com>
+Reviewed-by: Damien Neil <dneil@google.com>
+(cherry picked from commit 6a856f08d58e4b6705c0c337d461c540c1235c83)
+Reviewed-on: https://go-review.googlesource.com/c/go/+/417070
+Reviewed-by: Heschi Kreinick <heschi@google.com>
+TryBot-Result: Gopher Robot <gobot@golang.org>
+Run-TryBot: Michael Knyszek <mknyszek@google.com>
+
+Upstream-Status: Backport [https://github.com/golang/go/commit/ba8788ebcead55e99e631c6a1157ad7b35535d11]
+CVE: CVE-2022-1962
+Signed-off-by: Vivek Kumbhar <vkumbhar@mvista.com>
+---
+ src/go/parser/interface.go | 10 ++-
+ src/go/parser/parser.go | 48 ++++++++--
+ src/go/parser/parser_test.go | 169 +++++++++++++++++++++++++++++++++++
+ 3 files changed, 220 insertions(+), 7 deletions(-)
+
+diff --git a/src/go/parser/interface.go b/src/go/parser/interface.go
+index 54f9d7b..537b327 100644
+--- a/src/go/parser/interface.go
++++ b/src/go/parser/interface.go
+@@ -92,8 +92,11 @@ func ParseFile(fset *token.FileSet, filename string, src interface{}, mode Mode)
+ defer func() {
+ if e := recover(); e != nil {
+ // resume same panic if it's not a bailout
+- if _, ok := e.(bailout); !ok {
++ bail, ok := e.(bailout)
++ if !ok {
+ panic(e)
++ } else if bail.msg != "" {
++ p.errors.Add(p.file.Position(bail.pos), bail.msg)
+ }
+ }
+
+@@ -188,8 +191,11 @@ func ParseExprFrom(fset *token.FileSet, filename string, src interface{}, mode M
+ defer func() {
+ if e := recover(); e != nil {
+ // resume same panic if it's not a bailout
+- if _, ok := e.(bailout); !ok {
++ bail, ok := e.(bailout)
++ if !ok {
+ panic(e)
++ } else if bail.msg != "" {
++ p.errors.Add(p.file.Position(bail.pos), bail.msg)
+ }
+ }
+ p.errors.Sort()
+diff --git a/src/go/parser/parser.go b/src/go/parser/parser.go
+index 31a7398..586fe90 100644
+--- a/src/go/parser/parser.go
++++ b/src/go/parser/parser.go
+@@ -64,6 +64,10 @@ type parser struct {
+ unresolved []*ast.Ident // unresolved identifiers
+ imports []*ast.ImportSpec // list of imports
+
++ // nestLev is used to track and limit the recursion depth
++ // during parsing.
++ nestLev int
++
+ // Label scopes
+ // (maintained by open/close LabelScope)
+ labelScope *ast.Scope // label scope for current function
+@@ -236,6 +240,24 @@ func un(p *parser) {
+ p.printTrace(")")
+ }
+
++// maxNestLev is the deepest we're willing to recurse during parsing
++const maxNestLev int = 1e5
++
++func incNestLev(p *parser) *parser {
++ p.nestLev++
++ if p.nestLev > maxNestLev {
++ p.error(p.pos, "exceeded max nesting depth")
++ panic(bailout{})
++ }
++ return p
++}
++
++// decNestLev is used to track nesting depth during parsing to prevent stack exhaustion.
++// It is used along with incNestLev in a similar fashion to how un and trace are used.
++func decNestLev(p *parser) {
++ p.nestLev--
++}
++
+ // Advance to the next token.
+ func (p *parser) next0() {
+ // Because of one-token look-ahead, print the previous token
+@@ -348,8 +370,12 @@ func (p *parser) next() {
+ }
+ }
+
+-// A bailout panic is raised to indicate early termination.
+-type bailout struct{}
++// A bailout panic is raised to indicate early termination. pos and msg are
++// only populated when bailing out of object resolution.
++type bailout struct {
++ pos token.Pos
++ msg string
++}
+
+ func (p *parser) error(pos token.Pos, msg string) {
+ epos := p.file.Position(pos)
+@@ -1030,6 +1056,8 @@ func (p *parser) parseChanType() *ast.ChanType {
+
+ // If the result is an identifier, it is not resolved.
+ func (p *parser) tryIdentOrType() ast.Expr {
++ defer decNestLev(incNestLev(p))
++
+ switch p.tok {
+ case token.IDENT:
+ return p.parseTypeName()
+@@ -1609,7 +1637,13 @@ func (p *parser) parseBinaryExpr(lhs bool, prec1 int) ast.Expr {
+ }
+
+ x := p.parseUnaryExpr(lhs)
+- for {
++ // We track the nesting here rather than at the entry for the function,
++ // since it can iteratively produce a nested output, and we want to
++ // limit how deep a structure we generate.
++ var n int
++ defer func() { p.nestLev -= n }()
++ for n = 1; ; n++ {
++ incNestLev(p)
+ op, oprec := p.tokPrec()
+ if oprec < prec1 {
+ return x
+@@ -1628,7 +1662,7 @@ func (p *parser) parseBinaryExpr(lhs bool, prec1 int) ast.Expr {
+ // The result may be a type or even a raw type ([...]int). Callers must
+ // check the result (using checkExpr or checkExprOrType), depending on
+ // context.
+-func (p *parser) parseExpr(lhs bool) ast.Expr {
++func (p *parser) parseExpr(lhs bool) ast.Expr {
+ if p.trace {
+ defer un(trace(p, "Expression"))
+ }
+@@ -1899,6 +1933,8 @@ func (p *parser) parseIfHeader() (init ast.Stmt, cond ast.Expr) {
+ }
+
+ func (p *parser) parseIfStmt() *ast.IfStmt {
++ defer decNestLev(incNestLev(p))
++
+ if p.trace {
+ defer un(trace(p, "IfStmt"))
+ }
+@@ -2214,6 +2250,8 @@ func (p *parser) parseForStmt() ast.Stmt {
+ }
+
+ func (p *parser) parseStmt() (s ast.Stmt) {
++ defer decNestLev(incNestLev(p))
++
+ if p.trace {
+ defer un(trace(p, "Statement"))
+ }
+diff --git a/src/go/parser/parser_test.go b/src/go/parser/parser_test.go
+index 25a374e..37a6a2b 100644
+--- a/src/go/parser/parser_test.go
++++ b/src/go/parser/parser_test.go
+@@ -10,6 +10,7 @@ import (
+ "go/ast"
+ "go/token"
+ "os"
++ "runtime"
+ "strings"
+ "testing"
+ )
+@@ -569,3 +570,171 @@ type x int // comment
+ t.Errorf("got %q, want %q", comment, "// comment")
+ }
+ }
++
++var parseDepthTests = []struct {
++ name string
++ format string
++ // multipler is used when a single statement may result in more than one
++ // multiplier is used when a single statement may result in more than one
++ // followed by a UnaryExpr, which increments the depth twice. The test
++ // case comment explains which nodes are triggering the multiple depth
++ // changes.
++ parseMultiplier int
++ // scope is true if we should also test the statement for the resolver scope
++ // depth limit.
++ scope bool
++ // scopeMultiplier does the same as parseMultiplier, but for the scope
++ // depths.
++ scopeMultiplier int
++}{
++ // The format expands the part inside « » many times.
++ // A second set of brackets nested inside the first stops the repetition,
++ // so that for example «(«1»)» expands to (((...((((1))))...))).
++ {name: "array", format: "package main; var x «[1]»int"},
++ {name: "slice", format: "package main; var x «[]»int"},
++ {name: "struct", format: "package main; var x «struct { X «int» }»", scope: true},
++ {name: "pointer", format: "package main; var x «*»int"},
++ {name: "func", format: "package main; var x «func()»int", scope: true},
++ {name: "chan", format: "package main; var x «chan »int"},
++ {name: "chan2", format: "package main; var x «<-chan »int"},
++ {name: "interface", format: "package main; var x «interface { M() «int» }»", scope: true, scopeMultiplier: 2}, // Scopes: InterfaceType, FuncType
++ {name: "map", format: "package main; var x «map[int]»int"},
++ {name: "slicelit", format: "package main; var x = «[]any{«»}»", parseMultiplier: 2}, // Parser nodes: UnaryExpr, CompositeLit
++ {name: "arraylit", format: "package main; var x = «[1]any{«nil»}»", parseMultiplier: 2}, // Parser nodes: UnaryExpr, CompositeLit
++ {name: "structlit", format: "package main; var x = «struct{x any}{«nil»}»", parseMultiplier: 2}, // Parser nodes: UnaryExpr, CompositeLit
++ {name: "maplit", format: "package main; var x = «map[int]any{1:«nil»}»", parseMultiplier: 2}, // Parser nodes: CompositeLit, KeyValueExpr
++ {name: "dot", format: "package main; var x = «x.»x"},
++ {name: "index", format: "package main; var x = x«[1]»"},
++ {name: "slice", format: "package main; var x = x«[1:2]»"},
++ {name: "slice3", format: "package main; var x = x«[1:2:3]»"},
++ {name: "dottype", format: "package main; var x = x«.(any)»"},
++ {name: "callseq", format: "package main; var x = x«()»"},
++ {name: "methseq", format: "package main; var x = x«.m()»", parseMultiplier: 2}, // Parser nodes: SelectorExpr, CallExpr
++ {name: "binary", format: "package main; var x = «1+»1"},
++ {name: "binaryparen", format: "package main; var x = «1+(«1»)»", parseMultiplier: 2}, // Parser nodes: BinaryExpr, ParenExpr
++ {name: "unary", format: "package main; var x = «^»1"},
++ {name: "addr", format: "package main; var x = «& »x"},
++ {name: "star", format: "package main; var x = «*»x"},
++ {name: "recv", format: "package main; var x = «<-»x"},
++ {name: "call", format: "package main; var x = «f(«1»)»", parseMultiplier: 2}, // Parser nodes: Ident, CallExpr
++ {name: "conv", format: "package main; var x = «(*T)(«1»)»", parseMultiplier: 2}, // Parser nodes: ParenExpr, CallExpr
++ {name: "label", format: "package main; func main() { «Label:» }"},
++ {name: "if", format: "package main; func main() { «if true { «» }»}", parseMultiplier: 2, scope: true, scopeMultiplier: 2}, // Parser nodes: IfStmt, BlockStmt. Scopes: IfStmt, BlockStmt
++ {name: "ifelse", format: "package main; func main() { «if true {} else » {} }", scope: true},
++ {name: "switch", format: "package main; func main() { «switch { default: «» }»}", scope: true, scopeMultiplier: 2}, // Scopes: TypeSwitchStmt, CaseClause
++ {name: "typeswitch", format: "package main; func main() { «switch x.(type) { default: «» }» }", scope: true, scopeMultiplier: 2}, // Scopes: TypeSwitchStmt, CaseClause
++ {name: "for0", format: "package main; func main() { «for { «» }» }", scope: true, scopeMultiplier: 2}, // Scopes: ForStmt, BlockStmt
++ {name: "for1", format: "package main; func main() { «for x { «» }» }", scope: true, scopeMultiplier: 2}, // Scopes: ForStmt, BlockStmt
++ {name: "for3", format: "package main; func main() { «for f(); g(); h() { «» }» }", scope: true, scopeMultiplier: 2}, // Scopes: ForStmt, BlockStmt
++ {name: "forrange0", format: "package main; func main() { «for range x { «» }» }", scope: true, scopeMultiplier: 2}, // Scopes: RangeStmt, BlockStmt
++ {name: "forrange1", format: "package main; func main() { «for x = range z { «» }» }", scope: true, scopeMultiplier: 2}, // Scopes: RangeStmt, BlockStmt
++ {name: "forrange2", format: "package main; func main() { «for x, y = range z { «» }» }", scope: true, scopeMultiplier: 2}, // Scopes: RangeStmt, BlockStmt
++ {name: "go", format: "package main; func main() { «go func() { «» }()» }", parseMultiplier: 2, scope: true}, // Parser nodes: GoStmt, FuncLit
++ {name: "defer", format: "package main; func main() { «defer func() { «» }()» }", parseMultiplier: 2, scope: true}, // Parser nodes: DeferStmt, FuncLit
++ {name: "select", format: "package main; func main() { «select { default: «» }» }", scope: true},
++}
++
++// split splits pre«mid»post into pre, mid, post.
++// If the string does not have that form, split returns x, "", "".
++func split(x string) (pre, mid, post string) {
++ start, end := strings.Index(x, "«"), strings.LastIndex(x, "»")
++ if start < 0 || end < 0 {
++ return x, "", ""
++ }
++ return x[:start], x[start+len("«") : end], x[end+len("»"):]
++}
++
++func TestParseDepthLimit(t *testing.T) {
++ if runtime.GOARCH == "wasm" {
++ t.Skip("causes call stack exhaustion on js/wasm")
++ }
++ for _, tt := range parseDepthTests {
++ for _, size := range []string{"small", "big"} {
++ t.Run(tt.name+"/"+size, func(t *testing.T) {
++ n := maxNestLev + 1
++ if tt.parseMultiplier > 0 {
++ n /= tt.parseMultiplier
++ }
++ if size == "small" {
++ // Decrease the number of statements by 10, in order to check
++ // that we do not fail when under the limit. 10 is used to
++ // provide some wiggle room for cases where the surrounding
++ // scaffolding syntax adds some noise to the depth that changes
++ // on a per testcase basis.
++ n -= 10
++ }
++
++ pre, mid, post := split(tt.format)
++ if strings.Contains(mid, "«") {
++ left, base, right := split(mid)
++ mid = strings.Repeat(left, n) + base + strings.Repeat(right, n)
++ } else {
++ mid = strings.Repeat(mid, n)
++ }
++ input := pre + mid + post
++
++ fset := token.NewFileSet()
++ _, err := ParseFile(fset, "", input, ParseComments|SkipObjectResolution)
++ if size == "small" {
++ if err != nil {
++ t.Errorf("ParseFile(...): %v (want success)", err)
++ }
++ } else {
++ expected := "exceeded max nesting depth"
++ if err == nil || !strings.HasSuffix(err.Error(), expected) {
++ t.Errorf("ParseFile(...) = _, %v, want %q", err, expected)
++ }
++ }
++ })
++ }
++ }
++}
++
++func TestScopeDepthLimit(t *testing.T) {
++ if runtime.GOARCH == "wasm" {
++ t.Skip("causes call stack exhaustion on js/wasm")
++ }
++ for _, tt := range parseDepthTests {
++ if !tt.scope {
++ continue
++ }
++ for _, size := range []string{"small", "big"} {
++ t.Run(tt.name+"/"+size, func(t *testing.T) {
++ n := maxScopeDepth + 1
++ if tt.scopeMultiplier > 0 {
++ n /= tt.scopeMultiplier
++ }
++ if size == "small" {
++ // Decrease the number of statements by 10, in order to check
++ // that we do not fail when under the limit. 10 is used to
++ // provide some wiggle room for cases where the surrounding
++ // scaffolding syntax adds some noise to the depth that changes
++ // on a per testcase basis.
++ n -= 10
++ }
++
++ pre, mid, post := split(tt.format)
++ if strings.Contains(mid, "«") {
++ left, base, right := split(mid)
++ mid = strings.Repeat(left, n) + base + strings.Repeat(right, n)
++ } else {
++ mid = strings.Repeat(mid, n)
++ }
++ input := pre + mid + post
++
++ fset := token.NewFileSet()
++ _, err := ParseFile(fset, "", input, DeclarationErrors)
++ if size == "small" {
++ if err != nil {
++ t.Errorf("ParseFile(...): %v (want success)", err)
++ }
++ } else {
++ expected := "exceeded max scope depth during object resolution"
++ if err == nil || !strings.HasSuffix(err.Error(), expected) {
++ t.Errorf("ParseFile(...) = _, %v, want %q", err, expected)
++ }
++ }
++ })
++ }
++ }
++}
+--
+2.30.2
+
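
Outside the patch, the nestLev/bailout technique reads more clearly in a toy recursive-descent parser. The sketch below is illustrative only (the grammar, the names and the 1000 limit are stand-ins for the real go/parser machinery): every recursive production bumps a depth counter, a sentinel panic fires once the limit is exceeded, and the entry point recovers it into an ordinary error rather than letting deeply nested input exhaust the stack.

package main

import (
	"errors"
	"fmt"
	"strings"
)

const maxDepth = 1000

type bailout struct{}

type parser struct {
	input string
	pos   int
	depth int
}

func (p *parser) enter() {
	p.depth++
	if p.depth > maxDepth {
		panic(bailout{})
	}
}

func (p *parser) leave() { p.depth-- }

// parseParens consumes a (possibly nested) run of balanced parentheses.
func (p *parser) parseParens() {
	p.enter()
	defer p.leave()
	for p.pos < len(p.input) && p.input[p.pos] == '(' {
		p.pos++
		p.parseParens()
		if p.pos < len(p.input) && p.input[p.pos] == ')' {
			p.pos++
		}
	}
}

func parse(s string) (err error) {
	defer func() {
		if r := recover(); r != nil {
			if _, ok := r.(bailout); ok {
				err = errors.New("exceeded max nesting depth")
				return
			}
			panic(r) // not our sentinel: re-raise
		}
	}()
	p := &parser{input: s}
	p.parseParens()
	return nil
}

func main() {
	fmt.Println(parse(strings.Repeat("(", 10) + strings.Repeat(")", 10)))         // <nil>
	fmt.Println(parse(strings.Repeat("(", 100000) + strings.Repeat(")", 100000))) // depth error
}
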
diff --git a/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-24921.patch b/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-24921.patch
new file mode 100644
index 0000000000..e4270d8a75
--- /dev/null
+++ b/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-24921.patch
@@ -0,0 +1,198 @@
+From ba99f699d26483ea1045f47c760e9be30799e311 Mon Sep 17 00:00:00 2001
+From: Russ Cox <rsc@golang.org>
+Date: Wed, 2 Feb 2022 16:41:32 -0500
+Subject: [PATCH] regexp/syntax: reject very deeply nested regexps in Parse
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Upstream-Status: Backport [https://github.com/golang/go/commit/2b65cde5868d8245ef8a0b8eba1e361440252d3b]
+CVE: CVE-2022-24921
+Signed-off-by: Ralph Siemsen <ralph.siemsen@linaro.org>
+
+
+The regexp code assumes it can recurse over the structure of
+a regexp safely. Go's growable stacks make that reasonable
+for all plausible regexps, but implausible ones can reach the
+“infinite recursion?” stack limit.
+
+This CL limits the depth of any parsed regexp to 1000.
+That is, the depth of the parse tree is required to be ≤ 1000.
+Regexps that require deeper parse trees will return ErrInternalError.
+A future CL will change the error to ErrInvalidDepth,
+but using ErrInternalError for now avoids introducing new API
+in point releases when this is backported.
+
+Fixes #51112.
+Fixes #51117.
+
+Change-Id: I97d2cd82195946eb43a4ea8561f5b95f91fb14c5
+Reviewed-on: https://go-review.googlesource.com/c/go/+/384616
+Trust: Russ Cox <rsc@golang.org>
+Run-TryBot: Russ Cox <rsc@golang.org>
+Reviewed-by: Ian Lance Taylor <iant@golang.org>
+Reviewed-on: https://go-review.googlesource.com/c/go/+/384855
+---
+ src/regexp/syntax/parse.go | 72 ++++++++++++++++++++++++++++++++-
+ src/regexp/syntax/parse_test.go | 7 ++++
+ 2 files changed, 77 insertions(+), 2 deletions(-)
+
+diff --git a/src/regexp/syntax/parse.go b/src/regexp/syntax/parse.go
+index 8c6d43a..55bd20d 100644
+--- a/src/regexp/syntax/parse.go
++++ b/src/regexp/syntax/parse.go
+@@ -76,13 +76,29 @@ const (
+ opVerticalBar
+ )
+
++// maxHeight is the maximum height of a regexp parse tree.
++// It is somewhat arbitrarily chosen, but the idea is to be large enough
++// that no one will actually hit in real use but at the same time small enough
++// that recursion on the Regexp tree will not hit the 1GB Go stack limit.
++// The maximum amount of stack for a single recursive frame is probably
++// closer to 1kB, so this could potentially be raised, but it seems unlikely
++// that people have regexps nested even this deeply.
++// We ran a test on Google's C++ code base and turned up only
++// a single use case with depth > 100; it had depth 128.
++// Using depth 1000 should be plenty of margin.
++// As an optimization, we don't even bother calculating heights
++// until we've allocated at least maxHeight Regexp structures.
++const maxHeight = 1000
++
+ type parser struct {
+ flags Flags // parse mode flags
+ stack []*Regexp // stack of parsed expressions
+ free *Regexp
+ numCap int // number of capturing groups seen
+ wholeRegexp string
+- tmpClass []rune // temporary char class work space
++ tmpClass []rune // temporary char class work space
++ numRegexp int // number of regexps allocated
++ height map[*Regexp]int // regexp height for height limit check
+ }
+
+ func (p *parser) newRegexp(op Op) *Regexp {
+@@ -92,16 +108,52 @@ func (p *parser) newRegexp(op Op) *Regexp {
+ *re = Regexp{}
+ } else {
+ re = new(Regexp)
++ p.numRegexp++
+ }
+ re.Op = op
+ return re
+ }
+
+ func (p *parser) reuse(re *Regexp) {
++ if p.height != nil {
++ delete(p.height, re)
++ }
+ re.Sub0[0] = p.free
+ p.free = re
+ }
+
++func (p *parser) checkHeight(re *Regexp) {
++ if p.numRegexp < maxHeight {
++ return
++ }
++ if p.height == nil {
++ p.height = make(map[*Regexp]int)
++ for _, re := range p.stack {
++ p.checkHeight(re)
++ }
++ }
++ if p.calcHeight(re, true) > maxHeight {
++ panic(ErrInternalError)
++ }
++}
++
++func (p *parser) calcHeight(re *Regexp, force bool) int {
++ if !force {
++ if h, ok := p.height[re]; ok {
++ return h
++ }
++ }
++ h := 1
++ for _, sub := range re.Sub {
++ hsub := p.calcHeight(sub, false)
++ if h < 1+hsub {
++ h = 1 + hsub
++ }
++ }
++ p.height[re] = h
++ return h
++}
++
+ // Parse stack manipulation.
+
+ // push pushes the regexp re onto the parse stack and returns the regexp.
+@@ -137,6 +189,7 @@ func (p *parser) push(re *Regexp) *Regexp {
+ }
+
+ p.stack = append(p.stack, re)
++ p.checkHeight(re)
+ return re
+ }
+
+@@ -252,6 +305,7 @@ func (p *parser) repeat(op Op, min, max int, before, after, lastRepeat string) (
+ re.Sub = re.Sub0[:1]
+ re.Sub[0] = sub
+ p.stack[n-1] = re
++ p.checkHeight(re)
+
+ if op == OpRepeat && (min >= 2 || max >= 2) && !repeatIsValid(re, 1000) {
+ return "", &Error{ErrInvalidRepeatSize, before[:len(before)-len(after)]}
+@@ -699,6 +753,21 @@ func literalRegexp(s string, flags Flags) *Regexp {
+ // Flags, and returns a regular expression parse tree. The syntax is
+ // described in the top-level comment.
+ func Parse(s string, flags Flags) (*Regexp, error) {
++ return parse(s, flags)
++}
++
++func parse(s string, flags Flags) (_ *Regexp, err error) {
++ defer func() {
++ switch r := recover(); r {
++ default:
++ panic(r)
++ case nil:
++ // ok
++ case ErrInternalError:
++ err = &Error{Code: ErrInternalError, Expr: s}
++ }
++ }()
++
+ if flags&Literal != 0 {
+ // Trivial parser for literal string.
+ if err := checkUTF8(s); err != nil {
+@@ -710,7 +779,6 @@ func Parse(s string, flags Flags) (*Regexp, error) {
+ // Otherwise, must do real work.
+ var (
+ p parser
+- err error
+ c rune
+ op Op
+ lastRepeat string
+diff --git a/src/regexp/syntax/parse_test.go b/src/regexp/syntax/parse_test.go
+index 5581ba1..1ef6d8a 100644
+--- a/src/regexp/syntax/parse_test.go
++++ b/src/regexp/syntax/parse_test.go
+@@ -207,6 +207,11 @@ var parseTests = []parseTest{
+ // Valid repetitions.
+ {`((((((((((x{2}){2}){2}){2}){2}){2}){2}){2}){2}))`, ``},
+ {`((((((((((x{1}){2}){2}){2}){2}){2}){2}){2}){2}){2})`, ``},
++
++ // Valid nesting.
++ {strings.Repeat("(", 999) + strings.Repeat(")", 999), ``},
++ {strings.Repeat("(?:", 999) + strings.Repeat(")*", 999), ``},
++ {"(" + strings.Repeat("|", 12345) + ")", ``}, // not nested at all
+ }
+
+ const testFlags = MatchNL | PerlX | UnicodeGroups
+@@ -482,6 +487,8 @@ var invalidRegexps = []string{
+ `a{100000}`,
+ `a{100000,}`,
+ "((((((((((x{2}){2}){2}){2}){2}){2}){2}){2}){2}){2})",
++ strings.Repeat("(", 1000) + strings.Repeat(")", 1000),
++ strings.Repeat("(?:", 1000) + strings.Repeat(")*", 1000),
+ `\Q\E*`,
+ }
+
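
A usage sketch of the effect of that limit (run against a patched regexp/syntax; the exact error differs across Go releases, ErrInternalError in this backport and a dedicated nesting error later): compiling an absurdly deeply nested pattern now fails cleanly instead of risking stack exhaustion.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

func main() {
	pattern := strings.Repeat("(", 2000) + strings.Repeat(")", 2000)
	_, err := regexp.Compile(pattern)
	fmt.Println(err) // expected: a non-nil parse error, not a crash
}
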
diff --git a/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-28131.patch b/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-28131.patch
new file mode 100644
index 0000000000..8afa292144
--- /dev/null
+++ b/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-28131.patch
@@ -0,0 +1,104 @@
+From 8136eb2e5c316a51d0da710fbd0504cbbefee526 Mon Sep 17 00:00:00 2001
+From: Roland Shoemaker <roland@golang.org>
+Date: Mon, 28 Mar 2022 18:41:26 -0700
+Subject: [PATCH] encoding/xml: use iterative Skip, rather than recursive
+
+Upstream-Status: Backport [https://github.com/golang/go/commit/58facfbe7db2fbb9afed794b281a70bdb12a60ae]
+CVE: CVE-2022-28131
+Signed-off-by: Ralph Siemsen <ralph.siemsen@linaro.org>
+
+
+Prevents exhausting the stack limit in _incredibly_ deeply nested
+structures.
+
+Fixes #53711
+Updates #53614
+Fixes CVE-2022-28131
+
+Change-Id: I47db4595ce10cecc29fbd06afce7b299868599e6
+Reviewed-on: https://team-review.git.corp.google.com/c/golang/go-private/+/1419912
+Reviewed-by: Julie Qiu <julieqiu@google.com>
+Reviewed-by: Damien Neil <dneil@google.com>
+(cherry picked from commit 9278cb78443d2b4deb24cbb5b61c9ba5ac688d49)
+Reviewed-on: https://go-review.googlesource.com/c/go/+/417068
+TryBot-Result: Gopher Robot <gobot@golang.org>
+Reviewed-by: Heschi Kreinick <heschi@google.com>
+Run-TryBot: Michael Knyszek <mknyszek@google.com>
+---
+ src/encoding/xml/read.go | 15 ++++++++-------
+ src/encoding/xml/read_test.go | 18 ++++++++++++++++++
+ 2 files changed, 26 insertions(+), 7 deletions(-)
+
+diff --git a/src/encoding/xml/read.go b/src/encoding/xml/read.go
+index 4ffed80..3fac859 100644
+--- a/src/encoding/xml/read.go
++++ b/src/encoding/xml/read.go
+@@ -743,12 +743,12 @@ Loop:
+ }
+
+ // Skip reads tokens until it has consumed the end element
+-// matching the most recent start element already consumed.
+-// It recurs if it encounters a start element, so it can be used to
+-// skip nested structures.
++// matching the most recent start element already consumed,
++// skipping nested structures.
+ // It returns nil if it finds an end element matching the start
+ // element; otherwise it returns an error describing the problem.
+ func (d *Decoder) Skip() error {
++ var depth int64
+ for {
+ tok, err := d.Token()
+ if err != nil {
+@@ -756,11 +756,12 @@ func (d *Decoder) Skip() error {
+ }
+ switch tok.(type) {
+ case StartElement:
+- if err := d.Skip(); err != nil {
+- return err
+- }
++ depth++
+ case EndElement:
+- return nil
++ if depth == 0 {
++ return nil
++ }
++ depth--
+ }
+ }
+ }
+diff --git a/src/encoding/xml/read_test.go b/src/encoding/xml/read_test.go
+index 6a20b1a..7a621a5 100644
+--- a/src/encoding/xml/read_test.go
++++ b/src/encoding/xml/read_test.go
+@@ -5,9 +5,11 @@
+ package xml
+
+ import (
++ "bytes"
+ "errors"
+ "io"
+ "reflect"
++ "runtime"
+ "strings"
+ "testing"
+ "time"
+@@ -1093,3 +1095,19 @@ func TestCVE202228131(t *testing.T) {
+ t.Fatalf("Unmarshal unexpected error: got %q, want %q", err, errExeceededMaxUnmarshalDepth)
+ }
+ }
++
++func TestCVE202230633(t *testing.T) {
++ if runtime.GOARCH == "wasm" {
++ t.Skip("causes memory exhaustion on js/wasm")
++ }
++ defer func() {
++ p := recover()
++ if p != nil {
++ t.Fatal("Unmarshal panicked")
++ }
++ }()
++ var example struct {
++ Things []string
++ }
++ Unmarshal(bytes.Repeat([]byte("<a>"), 17_000_000), &example)
++}
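
The shape of that fix generalizes: replace "recurse on every start token" with one loop and an explicit depth counter, so nesting depth in the input no longer becomes call-stack depth. The sketch below only illustrates the pattern (the token type and skip function are made up, not the encoding/xml API).

package main

import "fmt"

type kind int

const (
	start kind = iota
	end
	other
)

type token struct{ k kind }

// skip consumes tokens until the end token matching the start token that
// was already consumed, using a counter instead of recursion.
func skip(toks []token) (consumed int, ok bool) {
	depth := 0
	for i, t := range toks {
		switch t.k {
		case start:
			depth++
		case end:
			if depth == 0 {
				return i + 1, true
			}
			depth--
		}
	}
	return len(toks), false
}

func main() {
	// <a><b></b></a> with the leading <a> already consumed:
	toks := []token{{start}, {end}, {end}, {other}}
	n, ok := skip(toks)
	fmt.Println(n, ok) // 3 true
}
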
diff --git a/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-28327.patch b/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-28327.patch
new file mode 100644
index 0000000000..6361deec7d
--- /dev/null
+++ b/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-28327.patch
@@ -0,0 +1,36 @@
+From 34d9ab78568d63d8097911237897b188bdaba9c2 Mon Sep 17 00:00:00 2001
+From: Filippo Valsorda <filippo@golang.org>
+Date: Thu, 31 Mar 2022 12:31:58 -0400
+Subject: [PATCH] crypto/elliptic: tolerate zero-padded scalars in generic
+ P-256
+
+Upstream-Status: Backport [https://github.com/golang/go/commit/7139e8b024604ab168b51b99c6e8168257a5bf58]
+CVE: CVE-2022-28327
+Signed-off-by: Ralph Siemsen <ralph.siemsen@linaro.org>
+
+
+Updates #52075
+Fixes #52076
+Fixes CVE-2022-28327
+
+Change-Id: I595a7514c9a0aa1b9c76aedfc2307e1124271f27
+Reviewed-on: https://go-review.googlesource.com/c/go/+/397136
+Trust: Filippo Valsorda <filippo@golang.org>
+Reviewed-by: Julie Qiu <julie@golang.org>
+---
+ src/crypto/elliptic/p256.go | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/src/crypto/elliptic/p256.go b/src/crypto/elliptic/p256.go
+index c23e414..787e3e7 100644
+--- a/src/crypto/elliptic/p256.go
++++ b/src/crypto/elliptic/p256.go
+@@ -51,7 +51,7 @@ func p256GetScalar(out *[32]byte, in []byte) {
+ n := new(big.Int).SetBytes(in)
+ var scalarBytes []byte
+
+- if n.Cmp(p256Params.N) >= 0 {
++ if n.Cmp(p256Params.N) >= 0 || len(in) > len(out) {
+ n.Mod(n, p256Params.N)
+ scalarBytes = n.Bytes()
+ } else {
diff --git a/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-2879.patch b/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-2879.patch
new file mode 100644
index 0000000000..ea04a82d16
--- /dev/null
+++ b/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-2879.patch
@@ -0,0 +1,111 @@
+From 9d339f1d0f53c4116a7cb4acfa895f31a07212ee Mon Sep 17 00:00:00 2001
+From: Damien Neil <dneil@google.com>
+Date: Fri, 2 Sep 2022 20:45:18 -0700
+Subject: [PATCH] archive/tar: limit size of headers
+
+Set a 1MiB limit on special file blocks (PAX headers, GNU long names,
+GNU link names), to avoid reading arbitrarily large amounts of data
+into memory.
+
+Thanks to Adam Korczynski (ADA Logics) and OSS-Fuzz for reporting
+this issue.
+
+Fixes CVE-2022-2879
+Updates #54853
+Fixes #55926
+
+Change-Id: I85136d6ff1e0af101a112190e027987ab4335680
+Reviewed-on: https://team-review.git.corp.google.com/c/golang/go-private/+/1565555
+Reviewed-by: Tatiana Bradley <tatianabradley@google.com>
+Run-TryBot: Roland Shoemaker <bracewell@google.com>
+Reviewed-by: Roland Shoemaker <bracewell@google.com>
+(cherry picked from commit 6ee768cef6b82adf7a90dcf367a1699ef694f3b2)
+Reviewed-on: https://team-review.git.corp.google.com/c/golang/go-private/+/1591053
+Reviewed-by: Julie Qiu <julieqiu@google.com>
+Reviewed-by: Damien Neil <dneil@google.com>
+Reviewed-on: https://go-review.googlesource.com/c/go/+/438498
+TryBot-Result: Gopher Robot <gobot@golang.org>
+Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
+Reviewed-by: Carlos Amedee <carlos@golang.org>
+Reviewed-by: Dmitri Shuralyov <dmitshur@golang.org>
+Run-TryBot: Carlos Amedee <carlos@golang.org>
+
+Upstream-Status: Backport [https://github.com/golang/go/commit/0a723816cd2]
+CVE: CVE-2022-2879
+Signed-off-by: Sunil Kumar <sukumar@mvista.com>
+---
+ src/archive/tar/format.go | 4 ++++
+ src/archive/tar/reader.go | 14 ++++++++++++--
+ src/archive/tar/writer.go | 3 +++
+ 3 files changed, 19 insertions(+), 2 deletions(-)
+
+diff --git a/src/archive/tar/format.go b/src/archive/tar/format.go
+index cfe24a5..6642364 100644
+--- a/src/archive/tar/format.go
++++ b/src/archive/tar/format.go
+@@ -143,6 +143,10 @@ const (
+ blockSize = 512 // Size of each block in a tar stream
+ nameSize = 100 // Max length of the name field in USTAR format
+ prefixSize = 155 // Max length of the prefix field in USTAR format
++
++ // Max length of a special file (PAX header, GNU long name or link).
++ // This matches the limit used by libarchive.
++ maxSpecialFileSize = 1 << 20
+ )
+
+ // blockPadding computes the number of bytes needed to pad offset up to the
+diff --git a/src/archive/tar/reader.go b/src/archive/tar/reader.go
+index 4f9135b..e996595 100644
+--- a/src/archive/tar/reader.go
++++ b/src/archive/tar/reader.go
+@@ -104,7 +104,7 @@ func (tr *Reader) next() (*Header, error) {
+ continue // This is a meta header affecting the next header
+ case TypeGNULongName, TypeGNULongLink:
+ format.mayOnlyBe(FormatGNU)
+- realname, err := ioutil.ReadAll(tr)
++ realname, err := readSpecialFile(tr)
+ if err != nil {
+ return nil, err
+ }
+@@ -294,7 +294,7 @@ func mergePAX(hdr *Header, paxHdrs map[string]string) (err error) {
+ // parsePAX parses PAX headers.
+ // If an extended header (type 'x') is invalid, ErrHeader is returned
+ func parsePAX(r io.Reader) (map[string]string, error) {
+- buf, err := ioutil.ReadAll(r)
++ buf, err := readSpecialFile(r)
+ if err != nil {
+ return nil, err
+ }
+@@ -827,6 +827,16 @@ func tryReadFull(r io.Reader, b []byte) (n int, err error) {
+ return n, err
+ }
+
++// readSpecialFile is like ioutil.ReadAll except it returns
++// ErrFieldTooLong if more than maxSpecialFileSize is read.
++func readSpecialFile(r io.Reader) ([]byte, error) {
++ buf, err := ioutil.ReadAll(io.LimitReader(r, maxSpecialFileSize+1))
++ if len(buf) > maxSpecialFileSize {
++ return nil, ErrFieldTooLong
++ }
++ return buf, err
++}
++
+ // discard skips n bytes in r, reporting an error if unable to do so.
+ func discard(r io.Reader, n int64) error {
+ // If possible, Seek to the last byte before the end of the data section.
+diff --git a/src/archive/tar/writer.go b/src/archive/tar/writer.go
+index e80498d..893eac0 100644
+--- a/src/archive/tar/writer.go
++++ b/src/archive/tar/writer.go
+@@ -199,6 +199,9 @@ func (tw *Writer) writePAXHeader(hdr *Header, paxHdrs map[string]string) error {
+ flag = TypeXHeader
+ }
+ data := buf.String()
++ if len(data) > maxSpecialFileSize {
++ return ErrFieldTooLong
++ }
+ if err := tw.writeRawFile(name, data, flag, FormatPAX); err != nil || isGlobal {
+ return err // Global headers return here
+ }
+--
+2.7.4
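
The readSpecialFile helper added above is a small, reusable pattern: wrap the source in io.LimitReader with limit+1 and treat anything that exceeds the limit as an error, so a hostile header block cannot force an unbounded allocation. Below is a standalone sketch (readCapped and errTooLong are illustrative names, and io.ReadAll stands in for the deprecated ioutil.ReadAll the 1.14 code uses).

package main

import (
	"errors"
	"fmt"
	"io"
	"strings"
)

const maxSpecial = 1 << 20 // 1 MiB, mirroring the patch's maxSpecialFileSize

var errTooLong = errors.New("special file block too long")

func readCapped(r io.Reader) ([]byte, error) {
	// Reading limit+1 bytes distinguishes "exactly at the limit" from
	// "over the limit" without ever buffering more than limit+1 bytes.
	buf, err := io.ReadAll(io.LimitReader(r, maxSpecial+1))
	if len(buf) > maxSpecial {
		return nil, errTooLong
	}
	return buf, err
}

func main() {
	small, err := readCapped(strings.NewReader("path=/tmp/x"))
	fmt.Println(len(small), err) // 11 <nil>

	_, err = readCapped(strings.NewReader(strings.Repeat("A", maxSpecial+5)))
	fmt.Println(err) // special file block too long
}
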
diff --git a/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-2880.patch b/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-2880.patch
new file mode 100644
index 0000000000..8376dc45ba
--- /dev/null
+++ b/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-2880.patch
@@ -0,0 +1,164 @@
+From 753e3f8da191c2ac400407d83c70f46900769417 Mon Sep 17 00:00:00 2001
+From: Hitendra Prajapati <hprajapati@mvista.com>
+Date: Thu, 27 Oct 2022 12:22:41 +0530
+Subject: [PATCH] CVE-2022-2880
+
+Upstream-Status: Backport [https://github.com/golang/go/commit/9d2c73a9fd69e45876509bb3bdb2af99bf77da1e]
+CVE: CVE-2022-2880
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+
+net/http/httputil: avoid query parameter smuggling
+
+Query parameter smuggling occurs when a proxy's interpretation
+of query parameters differs from that of a downstream server.
+Change ReverseProxy to avoid forwarding ignored query parameters.
+
+Remove unparsable query parameters from the outbound request
+
+ * if req.Form != nil after calling ReverseProxy.Director; and
+ * before calling ReverseProxy.Rewrite.
+
+This change preserves the existing behavior of forwarding the
+raw query untouched if a Director hook does not parse the query
+by calling Request.ParseForm (possibly indirectly).
+---
+ src/net/http/httputil/reverseproxy.go | 36 +++++++++++
+ src/net/http/httputil/reverseproxy_test.go | 74 ++++++++++++++++++++++
+ 2 files changed, 110 insertions(+)
+
+diff --git a/src/net/http/httputil/reverseproxy.go b/src/net/http/httputil/reverseproxy.go
+index 2072a5f..c6fb873 100644
+--- a/src/net/http/httputil/reverseproxy.go
++++ b/src/net/http/httputil/reverseproxy.go
+@@ -212,6 +212,9 @@ func (p *ReverseProxy) ServeHTTP(rw http.ResponseWriter, req *http.Request) {
+ }
+
+ p.Director(outreq)
++ if outreq.Form != nil {
++ outreq.URL.RawQuery = cleanQueryParams(outreq.URL.RawQuery)
++ }
+ outreq.Close = false
+
+ reqUpType := upgradeType(outreq.Header)
+@@ -561,3 +564,36 @@ func (c switchProtocolCopier) copyToBackend(errc chan<- error) {
+ _, err := io.Copy(c.backend, c.user)
+ errc <- err
+ }
++
++func cleanQueryParams(s string) string {
++ reencode := func(s string) string {
++ v, _ := url.ParseQuery(s)
++ return v.Encode()
++ }
++ for i := 0; i < len(s); {
++ switch s[i] {
++ case ';':
++ return reencode(s)
++ case '%':
++ if i+2 >= len(s) || !ishex(s[i+1]) || !ishex(s[i+2]) {
++ return reencode(s)
++ }
++ i += 3
++ default:
++ i++
++ }
++ }
++ return s
++}
++
++func ishex(c byte) bool {
++ switch {
++ case '0' <= c && c <= '9':
++ return true
++ case 'a' <= c && c <= 'f':
++ return true
++ case 'A' <= c && c <= 'F':
++ return true
++ }
++ return false
++}
+diff --git a/src/net/http/httputil/reverseproxy_test.go b/src/net/http/httputil/reverseproxy_test.go
+index 9a7223a..bc87a3b 100644
+--- a/src/net/http/httputil/reverseproxy_test.go
++++ b/src/net/http/httputil/reverseproxy_test.go
+@@ -1269,3 +1269,77 @@ func TestSingleJoinSlash(t *testing.T) {
+ }
+ }
+ }
++
++const (
++ testWantsCleanQuery = true
++ testWantsRawQuery = false
++)
++
++func TestReverseProxyQueryParameterSmugglingDirectorDoesNotParseForm(t *testing.T) {
++ testReverseProxyQueryParameterSmuggling(t, testWantsRawQuery, func(u *url.URL) *ReverseProxy {
++ proxyHandler := NewSingleHostReverseProxy(u)
++ oldDirector := proxyHandler.Director
++ proxyHandler.Director = func(r *http.Request) {
++ oldDirector(r)
++ }
++ return proxyHandler
++ })
++}
++
++func TestReverseProxyQueryParameterSmugglingDirectorParsesForm(t *testing.T) {
++ testReverseProxyQueryParameterSmuggling(t, testWantsCleanQuery, func(u *url.URL) *ReverseProxy {
++ proxyHandler := NewSingleHostReverseProxy(u)
++ oldDirector := proxyHandler.Director
++ proxyHandler.Director = func(r *http.Request) {
++ // Parsing the form causes ReverseProxy to remove unparsable
++ // query parameters before forwarding.
++ r.FormValue("a")
++ oldDirector(r)
++ }
++ return proxyHandler
++ })
++}
++
++func testReverseProxyQueryParameterSmuggling(t *testing.T, wantCleanQuery bool, newProxy func(*url.URL) *ReverseProxy) {
++ const content = "response_content"
++ backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
++ w.Write([]byte(r.URL.RawQuery))
++ }))
++ defer backend.Close()
++ backendURL, err := url.Parse(backend.URL)
++ if err != nil {
++ t.Fatal(err)
++ }
++ proxyHandler := newProxy(backendURL)
++ frontend := httptest.NewServer(proxyHandler)
++ defer frontend.Close()
++
++ // Don't spam output with logs of queries containing semicolons.
++ backend.Config.ErrorLog = log.New(io.Discard, "", 0)
++ frontend.Config.ErrorLog = log.New(io.Discard, "", 0)
++
++ for _, test := range []struct {
++ rawQuery string
++ cleanQuery string
++ }{{
++ rawQuery: "a=1&a=2;b=3",
++ cleanQuery: "a=1",
++ }, {
++ rawQuery: "a=1&a=%zz&b=3",
++ cleanQuery: "a=1&b=3",
++ }} {
++ res, err := frontend.Client().Get(frontend.URL + "?" + test.rawQuery)
++ if err != nil {
++ t.Fatalf("Get: %v", err)
++ }
++ defer res.Body.Close()
++ body, _ := io.ReadAll(res.Body)
++ wantQuery := test.rawQuery
++ if wantCleanQuery {
++ wantQuery = test.cleanQuery
++ }
++ if got, want := string(body), wantQuery; got != want {
++ t.Errorf("proxy forwarded raw query %q as %q, want %q", test.rawQuery, got, want)
++ }
++ }
++}
+--
+2.25.1
+
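
As a side note on the behaviour being backported: re-encoding the query through net/url is what drops the unparsable parameters. A tiny sketch, assuming a Go toolchain (1.17+) whose url.ParseQuery rejects semicolons and invalid percent-escapes while keeping the pairs it could parse; the expected outputs match the test table above.

    package main

    import (
        "fmt"
        "net/url"
    )

    // reencode keeps only the parameters net/url can parse, which is the
    // fallback the proxy takes when it spots ';' or a bad escape in the query.
    func reencode(raw string) string {
        v, _ := url.ParseQuery(raw) // partial results survive the error
        return v.Encode()
    }

    func main() {
        fmt.Println(reencode("a=1&a=2;b=3"))   // a=1 (semicolon-joined pair dropped)
        fmt.Println(reencode("a=1&a=%zz&b=3")) // a=1&b=3 (bad escape dropped)
    }
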
diff --git a/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-41715.patch b/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-41715.patch
new file mode 100644
index 0000000000..fac0ebe94c
--- /dev/null
+++ b/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-41715.patch
@@ -0,0 +1,271 @@
+From e9017c2416ad0ef642f5e0c2eab2dbf3cba4d997 Mon Sep 17 00:00:00 2001
+From: Russ Cox <rsc@golang.org>
+Date: Wed, 28 Sep 2022 11:18:51 -0400
+Subject: [PATCH] [release-branch.go1.18] regexp: limit size of parsed regexps
+
+Set a 128 MB limit on the amount of space used by []syntax.Inst
+in the compiled form corresponding to a given regexp.
+
+Also set a 128 MB limit on the rune storage in the *syntax.Regexp
+tree itself.
+
+Thanks to Adam Korczynski (ADA Logics) and OSS-Fuzz for reporting this issue.
+
+Fixes CVE-2022-41715.
+Updates #55949.
+Fixes #55950.
+
+Change-Id: Ia656baed81564436368cf950e1c5409752f28e1b
+Reviewed-on: https://team-review.git.corp.google.com/c/golang/go-private/+/1592136
+TryBot-Result: Security TryBots <security-trybots@go-security-trybots.iam.gserviceaccount.com>
+Reviewed-by: Damien Neil <dneil@google.com>
+Run-TryBot: Roland Shoemaker <bracewell@google.com>
+Reviewed-by: Julie Qiu <julieqiu@google.com>
+Reviewed-on: https://go-review.googlesource.com/c/go/+/438501
+Run-TryBot: Carlos Amedee <carlos@golang.org>
+Reviewed-by: Carlos Amedee <carlos@golang.org>
+Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
+TryBot-Result: Gopher Robot <gobot@golang.org>
+Reviewed-by: Dmitri Shuralyov <dmitshur@golang.org>
+
+Upstream-Status: Backport [https://github.com/golang/go/commit/e9017c2416ad0ef642f5e0c2eab2dbf3cba4d997]
+CVE: CVE-2022-41715
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+
+---
+ src/regexp/syntax/parse.go | 145 ++++++++++++++++++++++++++++++--
+ src/regexp/syntax/parse_test.go | 13 +--
+ 2 files changed, 148 insertions(+), 10 deletions(-)
+
+diff --git a/src/regexp/syntax/parse.go b/src/regexp/syntax/parse.go
+index 55bd20d..60491d5 100644
+--- a/src/regexp/syntax/parse.go
++++ b/src/regexp/syntax/parse.go
+@@ -90,15 +90,49 @@ const (
+ // until we've allocated at least maxHeight Regexp structures.
+ const maxHeight = 1000
+
++// maxSize is the maximum size of a compiled regexp in Insts.
++// It too is somewhat arbitrarily chosen, but the idea is to be large enough
++// to allow significant regexps while at the same time small enough that
++// the compiled form will not take up too much memory.
++// 128 MB is enough for a 3.3 million Inst structures, which roughly
++// corresponds to a 3.3 MB regexp.
++const (
++ maxSize = 128 << 20 / instSize
++ instSize = 5 * 8 // byte, 2 uint32, slice is 5 64-bit words
++)
++
++// maxRunes is the maximum number of runes allowed in a regexp tree
++// counting the runes in all the nodes.
++// Ignoring character classes p.numRunes is always less than the length of the regexp.
++// Character classes can make it much larger: each \pL adds 1292 runes.
++// 128 MB is enough for 32M runes, which is over 26k \pL instances.
++// Note that repetitions do not make copies of the rune slices,
++// so \pL{1000} is only one rune slice, not 1000.
++// We could keep a cache of character classes we've seen,
++// so that all the \pL we see use the same rune list,
++// but that doesn't remove the problem entirely:
++// consider something like [\pL01234][\pL01235][\pL01236]...[\pL^&*()].
++// And because the Rune slice is exposed directly in the Regexp,
++// there is not an opportunity to change the representation to allow
++// partial sharing between different character classes.
++// So the limit is the best we can do.
++const (
++ maxRunes = 128 << 20 / runeSize
++ runeSize = 4 // rune is int32
++)
++
+ type parser struct {
+ flags Flags // parse mode flags
+ stack []*Regexp // stack of parsed expressions
+ free *Regexp
+ numCap int // number of capturing groups seen
+ wholeRegexp string
+- tmpClass []rune // temporary char class work space
+- numRegexp int // number of regexps allocated
+- height map[*Regexp]int // regexp height for height limit check
++ tmpClass []rune // temporary char class work space
++ numRegexp int // number of regexps allocated
++ numRunes int // number of runes in char classes
++ repeats int64 // product of all repetitions seen
++ height map[*Regexp]int // regexp height, for height limit check
++ size map[*Regexp]int64 // regexp compiled size, for size limit check
+ }
+
+ func (p *parser) newRegexp(op Op) *Regexp {
+@@ -122,6 +156,104 @@ func (p *parser) reuse(re *Regexp) {
+ p.free = re
+ }
+
++func (p *parser) checkLimits(re *Regexp) {
++ if p.numRunes > maxRunes {
++ panic(ErrInternalError)
++ }
++ p.checkSize(re)
++ p.checkHeight(re)
++}
++
++func (p *parser) checkSize(re *Regexp) {
++ if p.size == nil {
++ // We haven't started tracking size yet.
++ // Do a relatively cheap check to see if we need to start.
++ // Maintain the product of all the repeats we've seen
++ // and don't track if the total number of regexp nodes
++ // we've seen times the repeat product is in budget.
++ if p.repeats == 0 {
++ p.repeats = 1
++ }
++ if re.Op == OpRepeat {
++ n := re.Max
++ if n == -1 {
++ n = re.Min
++ }
++ if n <= 0 {
++ n = 1
++ }
++ if int64(n) > maxSize/p.repeats {
++ p.repeats = maxSize
++ } else {
++ p.repeats *= int64(n)
++ }
++ }
++ if int64(p.numRegexp) < maxSize/p.repeats {
++ return
++ }
++
++ // We need to start tracking size.
++ // Make the map and belatedly populate it
++ // with info about everything we've constructed so far.
++ p.size = make(map[*Regexp]int64)
++ for _, re := range p.stack {
++ p.checkSize(re)
++ }
++ }
++
++ if p.calcSize(re, true) > maxSize {
++ panic(ErrInternalError)
++ }
++}
++
++func (p *parser) calcSize(re *Regexp, force bool) int64 {
++ if !force {
++ if size, ok := p.size[re]; ok {
++ return size
++ }
++ }
++
++ var size int64
++ switch re.Op {
++ case OpLiteral:
++ size = int64(len(re.Rune))
++ case OpCapture, OpStar:
++ // star can be 1+ or 2+; assume 2 pessimistically
++ size = 2 + p.calcSize(re.Sub[0], false)
++ case OpPlus, OpQuest:
++ size = 1 + p.calcSize(re.Sub[0], false)
++ case OpConcat:
++ for _, sub := range re.Sub {
++ size += p.calcSize(sub, false)
++ }
++ case OpAlternate:
++ for _, sub := range re.Sub {
++ size += p.calcSize(sub, false)
++ }
++ if len(re.Sub) > 1 {
++ size += int64(len(re.Sub)) - 1
++ }
++ case OpRepeat:
++ sub := p.calcSize(re.Sub[0], false)
++ if re.Max == -1 {
++ if re.Min == 0 {
++ size = 2 + sub // x*
++ } else {
++ size = 1 + int64(re.Min)*sub // xxx+
++ }
++ break
++ }
++ // x{2,5} = xx(x(x(x)?)?)?
++ size = int64(re.Max)*sub + int64(re.Max-re.Min)
++ }
++
++ if size < 1 {
++ size = 1
++ }
++ p.size[re] = size
++ return size
++}
++
+ func (p *parser) checkHeight(re *Regexp) {
+ if p.numRegexp < maxHeight {
+ return
+@@ -158,6 +290,7 @@ func (p *parser) calcHeight(re *Regexp, force bool) int {
+
+ // push pushes the regexp re onto the parse stack and returns the regexp.
+ func (p *parser) push(re *Regexp) *Regexp {
++ p.numRunes += len(re.Rune)
+ if re.Op == OpCharClass && len(re.Rune) == 2 && re.Rune[0] == re.Rune[1] {
+ // Single rune.
+ if p.maybeConcat(re.Rune[0], p.flags&^FoldCase) {
+@@ -189,7 +322,7 @@ func (p *parser) push(re *Regexp) *Regexp {
+ }
+
+ p.stack = append(p.stack, re)
+- p.checkHeight(re)
++ p.checkLimits(re)
+ return re
+ }
+
+@@ -305,7 +438,7 @@ func (p *parser) repeat(op Op, min, max int, before, after, lastRepeat string) (
+ re.Sub = re.Sub0[:1]
+ re.Sub[0] = sub
+ p.stack[n-1] = re
+- p.checkHeight(re)
++ p.checkLimits(re)
+
+ if op == OpRepeat && (min >= 2 || max >= 2) && !repeatIsValid(re, 1000) {
+ return "", &Error{ErrInvalidRepeatSize, before[:len(before)-len(after)]}
+@@ -509,6 +642,7 @@ func (p *parser) factor(sub []*Regexp) []*Regexp {
+
+ for j := start; j < i; j++ {
+ sub[j] = p.removeLeadingString(sub[j], len(str))
++ p.checkLimits(sub[j])
+ }
+ suffix := p.collapse(sub[start:i], OpAlternate) // recurse
+
+@@ -566,6 +700,7 @@ func (p *parser) factor(sub []*Regexp) []*Regexp {
+ for j := start; j < i; j++ {
+ reuse := j != start // prefix came from sub[start]
+ sub[j] = p.removeLeadingRegexp(sub[j], reuse)
++ p.checkLimits(sub[j])
+ }
+ suffix := p.collapse(sub[start:i], OpAlternate) // recurse
+
+diff --git a/src/regexp/syntax/parse_test.go b/src/regexp/syntax/parse_test.go
+index 1ef6d8a..67e3c56 100644
+--- a/src/regexp/syntax/parse_test.go
++++ b/src/regexp/syntax/parse_test.go
+@@ -484,12 +484,15 @@ var invalidRegexps = []string{
+ `(?P<>a)`,
+ `[a-Z]`,
+ `(?i)[a-Z]`,
+- `a{100000}`,
+- `a{100000,}`,
+- "((((((((((x{2}){2}){2}){2}){2}){2}){2}){2}){2}){2})",
+- strings.Repeat("(", 1000) + strings.Repeat(")", 1000),
+- strings.Repeat("(?:", 1000) + strings.Repeat(")*", 1000),
+ `\Q\E*`,
++ `a{100000}`, // too much repetition
++ `a{100000,}`, // too much repetition
++ "((((((((((x{2}){2}){2}){2}){2}){2}){2}){2}){2}){2})", // too much repetition
++ strings.Repeat("(", 1000) + strings.Repeat(")", 1000), // too deep
++ strings.Repeat("(?:", 1000) + strings.Repeat(")*", 1000), // too deep
++ "(" + strings.Repeat("(xx?)", 1000) + "){1000}", // too long
++ strings.Repeat("(xx?){1000}", 1000), // too long
++ strings.Repeat(`\pL`, 27000), // too many runes
+ }
+
+ var onlyPerl = []string{
+--
+2.25.1
+
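
Some rough arithmetic behind the constants above, plus one of the new "too long" test inputs, as a standalone sketch; the exact error text depends on the toolchain, and an unpatched Go would instead attempt to build a very large compiled program.

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    func main() {
        const instSize = 5 * 8 // per-Inst byte estimate used by the patch
        const runeSize = 4
        fmt.Println("maxSize  =", 128<<20/instSize) // 3355443, i.e. ~3.3 million Insts
        fmt.Println("maxRunes =", 128<<20/runeSize) // 33554432, i.e. ~33.5 million runes

        // Rejected at parse time once the size limit is in place.
        _, err := regexp.Compile(strings.Repeat("(xx?){1000}", 1000))
        fmt.Println(err)
    }
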
diff --git a/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-41717.patch b/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-41717.patch
new file mode 100644
index 0000000000..8bf22ee4d4
--- /dev/null
+++ b/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-41717.patch
@@ -0,0 +1,75 @@
+From 618120c165669c00a1606505defea6ca755cdc27 Mon Sep 17 00:00:00 2001
+From: Damien Neil <dneil@google.com>
+Date: Wed, 30 Nov 2022 16:46:33 -0500
+Subject: [PATCH] [release-branch.go1.19] net/http: update bundled
+ golang.org/x/net/http2
+
+Disable cmd/internal/moddeps test, since this update includes PRIVATE
+track fixes.
+
+For #56350.
+For #57009.
+Fixes CVE-2022-41717.
+
+Change-Id: I5c6ce546add81f361dcf0d5123fa4eaaf8f0a03b
+Reviewed-on: https://team-review.git.corp.google.com/c/golang/go-private/+/1663835
+Reviewed-by: Tatiana Bradley <tatianabradley@google.com>
+Reviewed-by: Julie Qiu <julieqiu@google.com>
+Reviewed-on: https://go-review.googlesource.com/c/go/+/455363
+TryBot-Result: Gopher Robot <gobot@golang.org>
+Run-TryBot: Jenny Rakoczy <jenny@golang.org>
+Reviewed-by: Michael Pratt <mpratt@google.com>
+
+Upstream-Status: Backport [https://github.com/golang/go/commit/618120c165669c00a1606505defea6ca755cdc27]
+CVE: CVE-2022-41717
+Signed-off-by: Vivek Kumbhar <vkumbhar@mvista.com>
+---
+ src/net/http/h2_bundle.go | 18 +++++++++++-------
+ 1 file changed, 11 insertions(+), 7 deletions(-)
+
+diff --git a/src/net/http/h2_bundle.go b/src/net/http/h2_bundle.go
+index 83f2a72..cc03a62 100644
+--- a/src/net/http/h2_bundle.go
++++ b/src/net/http/h2_bundle.go
+@@ -4096,6 +4096,7 @@ type http2serverConn struct {
+ headerTableSize uint32
+ peerMaxHeaderListSize uint32 // zero means unknown (default)
+ canonHeader map[string]string // http2-lower-case -> Go-Canonical-Case
++ canonHeaderKeysSize int // canonHeader keys size in bytes
+ writingFrame bool // started writing a frame (on serve goroutine or separate)
+ writingFrameAsync bool // started a frame on its own goroutine but haven't heard back on wroteFrameCh
+ needsFrameFlush bool // last frame write wasn't a flush
+@@ -4278,6 +4279,13 @@ func (sc *http2serverConn) condlogf(err error, format string, args ...interface{
+ }
+ }
+
++// maxCachedCanonicalHeadersKeysSize is an arbitrarily-chosen limit on the size
++// of the entries in the canonHeader cache.
++// This should be larger than the size of unique, uncommon header keys likely to
++// be sent by the peer, while not so high as to permit unreasonable memory usage
++// if the peer sends an unbounded number of unique header keys.
++const http2maxCachedCanonicalHeadersKeysSize = 2048
++
+ func (sc *http2serverConn) canonicalHeader(v string) string {
+ sc.serveG.check()
+ http2buildCommonHeaderMapsOnce()
+@@ -4293,14 +4301,10 @@ func (sc *http2serverConn) canonicalHeader(v string) string {
+ sc.canonHeader = make(map[string]string)
+ }
+ cv = CanonicalHeaderKey(v)
+- // maxCachedCanonicalHeaders is an arbitrarily-chosen limit on the number of
+- // entries in the canonHeader cache. This should be larger than the number
+- // of unique, uncommon header keys likely to be sent by the peer, while not
+- // so high as to permit unreaasonable memory usage if the peer sends an unbounded
+- // number of unique header keys.
+- const maxCachedCanonicalHeaders = 32
+- if len(sc.canonHeader) < maxCachedCanonicalHeaders {
++ size := 100 + len(v)*2 // 100 bytes of map overhead + key + value
++ if sc.canonHeaderKeysSize+size <= http2maxCachedCanonicalHeadersKeysSize {
+ sc.canonHeader[v] = cv
++ sc.canonHeaderKeysSize += size
+ }
+ return cv
+ }
+--
+2.30.2
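
The hunk above replaces a fixed entry-count cap with a byte budget. A self-contained sketch of that accounting outside the http2 internals (headerCache and cacheBudget are illustrative names; the cost formula mirrors the patch):

    package main

    import (
        "fmt"
        "net/http"
    )

    const cacheBudget = 2048 // bytes, as in http2maxCachedCanonicalHeadersKeysSize

    type headerCache struct {
        m    map[string]string
        size int
    }

    func (c *headerCache) canonical(v string) string {
        if cv, ok := c.m[v]; ok {
            return cv
        }
        cv := http.CanonicalHeaderKey(v)
        cost := 100 + len(v)*2 // rough map overhead + key + value
        if c.size+cost <= cacheBudget {
            if c.m == nil {
                c.m = make(map[string]string)
            }
            c.m[v] = cv
            c.size += cost
        }
        return cv
    }

    func main() {
        var c headerCache
        for i := 0; i < 100; i++ {
            c.canonical(fmt.Sprintf("x-attacker-header-%d", i))
        }
        // Only a handful of entries fit the budget; the rest are computed but not cached.
        fmt.Println("cached entries:", len(c.m), "bytes accounted:", c.size)
    }
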
diff --git a/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-41722-1.patch b/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-41722-1.patch
new file mode 100644
index 0000000000..f5bffd7a0b
--- /dev/null
+++ b/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-41722-1.patch
@@ -0,0 +1,53 @@
+From 94e0c36694fb044e81381d112fef3692de7cdf52 Mon Sep 17 00:00:00 2001
+From: Yasuhiro Matsumoto <mattn.jp@gmail.com>
+Date: Fri, 22 Apr 2022 10:07:51 +0900
+Subject: [PATCH 1/2] path/filepath: do not remove prefix "." when following
+ path contains ":".
+
+Fixes #52476
+
+Change-Id: I9eb72ac7dbccd6322d060291f31831dc389eb9bb
+Reviewed-on: https://go-review.googlesource.com/c/go/+/401595
+Auto-Submit: Ian Lance Taylor <iant@google.com>
+Reviewed-by: Alex Brainman <alex.brainman@gmail.com>
+Run-TryBot: Ian Lance Taylor <iant@google.com>
+Reviewed-by: Ian Lance Taylor <iant@google.com>
+Reviewed-by: Damien Neil <dneil@google.com>
+TryBot-Result: Gopher Robot <gobot@golang.org>
+
+Upstream-Status: Backport from https://github.com/golang/go/commit/9cd1818a7d019c02fa4898b3e45a323e35033290
+CVE: CVE-2022-41722
+Signed-off-by: Shubham Kulkarni <skulkarni@mvista.com>
+---
+ src/path/filepath/path.go | 14 +++++++++++++-
+ 1 file changed, 13 insertions(+), 1 deletion(-)
+
+diff --git a/src/path/filepath/path.go b/src/path/filepath/path.go
+index 26f1833..92dc090 100644
+--- a/src/path/filepath/path.go
++++ b/src/path/filepath/path.go
+@@ -116,9 +116,21 @@ func Clean(path string) string {
+ case os.IsPathSeparator(path[r]):
+ // empty path element
+ r++
+- case path[r] == '.' && (r+1 == n || os.IsPathSeparator(path[r+1])):
++ case path[r] == '.' && r+1 == n:
+ // . element
+ r++
++ case path[r] == '.' && os.IsPathSeparator(path[r+1]):
++ // ./ element
++ r++
++
++ for r < len(path) && os.IsPathSeparator(path[r]) {
++ r++
++ }
++ if out.w == 0 && volumeNameLen(path[r:]) > 0 {
++ // When joining prefix "." and an absolute path on Windows,
++ // the prefix should not be removed.
++ out.append('.')
++ }
+ case path[r] == '.' && path[r+1] == '.' && (r+2 == n || os.IsPathSeparator(path[r+2])):
+ // .. element: remove to last separator
+ r += 2
+--
+2.7.4
diff --git a/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-41722-2.patch b/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-41722-2.patch
new file mode 100644
index 0000000000..e1f7a55581
--- /dev/null
+++ b/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-41722-2.patch
@@ -0,0 +1,104 @@
+From b8803cb711ae163b8e67897deb6cf8c49702227c Mon Sep 17 00:00:00 2001
+From: Damien Neil <dneil@google.com>
+Date: Mon, 12 Dec 2022 16:43:37 -0800
+Subject: [PATCH 2/2] path/filepath: do not Clean("a/../c:/b") into c:\b on
+ Windows
+
+Do not permit Clean to convert a relative path into one starting
+with a drive reference. This change causes Clean to insert a .
+path element at the start of a path when the original path does not
+start with a volume name, and the first path element would contain
+a colon.
+
+This may introduce a spurious but harmless . path element under
+some circumstances. For example, Clean("a/../b:/../c") becomes `.\c`.
+
+This reverts CL 401595, since the change here supersedes the one
+in that CL.
+
+Thanks to RyotaK (https://twitter.com/ryotkak) for reporting this issue.
+
+Updates #57274
+Fixes #57276
+Fixes CVE-2022-41722
+
+Change-Id: I837446285a03aa74c79d7642720e01f354c2ca17
+Reviewed-on: https://team-review.git.corp.google.com/c/golang/go-private/+/1675249
+Reviewed-by: Roland Shoemaker <bracewell@google.com>
+Run-TryBot: Damien Neil <dneil@google.com>
+Reviewed-by: Julie Qiu <julieqiu@google.com>
+TryBot-Result: Security TryBots <security-trybots@go-security-trybots.iam.gserviceaccount.com>
+(cherry picked from commit 8ca37f4813ef2f64600c92b83f17c9f3ca6c03a5)
+Reviewed-on: https://team-review.git.corp.google.com/c/golang/go-private/+/1728944
+Run-TryBot: Roland Shoemaker <bracewell@google.com>
+Reviewed-by: Tatiana Bradley <tatianabradley@google.com>
+Reviewed-by: Damien Neil <dneil@google.com>
+Reviewed-on: https://go-review.googlesource.com/c/go/+/468119
+Reviewed-by: Than McIntosh <thanm@google.com>
+Run-TryBot: Michael Pratt <mpratt@google.com>
+TryBot-Result: Gopher Robot <gobot@golang.org>
+Auto-Submit: Michael Pratt <mpratt@google.com>
+
+Upstream-Status: Backport from https://github.com/golang/go/commit/bdf07c2e168baf736e4c057279ca12a4d674f18c
+CVE: CVE-2022-41722
+Signed-off-by: Shubham Kulkarni <skulkarni@mvista.com>
+---
+ src/path/filepath/path.go | 27 ++++++++++++++-------------
+ 1 file changed, 14 insertions(+), 13 deletions(-)
+
+diff --git a/src/path/filepath/path.go b/src/path/filepath/path.go
+index 92dc090..f0f095e 100644
+--- a/src/path/filepath/path.go
++++ b/src/path/filepath/path.go
+@@ -14,6 +14,7 @@ package filepath
+ import (
+ "errors"
+ "os"
++ "runtime"
+ "sort"
+ "strings"
+ )
+@@ -116,21 +117,9 @@ func Clean(path string) string {
+ case os.IsPathSeparator(path[r]):
+ // empty path element
+ r++
+- case path[r] == '.' && r+1 == n:
++ case path[r] == '.' && (r+1 == n || os.IsPathSeparator(path[r+1])):
+ // . element
+ r++
+- case path[r] == '.' && os.IsPathSeparator(path[r+1]):
+- // ./ element
+- r++
+-
+- for r < len(path) && os.IsPathSeparator(path[r]) {
+- r++
+- }
+- if out.w == 0 && volumeNameLen(path[r:]) > 0 {
+- // When joining prefix "." and an absolute path on Windows,
+- // the prefix should not be removed.
+- out.append('.')
+- }
+ case path[r] == '.' && path[r+1] == '.' && (r+2 == n || os.IsPathSeparator(path[r+2])):
+ // .. element: remove to last separator
+ r += 2
+@@ -156,6 +145,18 @@ func Clean(path string) string {
+ if rooted && out.w != 1 || !rooted && out.w != 0 {
+ out.append(Separator)
+ }
++ // If a ':' appears in the path element at the start of a Windows path,
++ // insert a .\ at the beginning to avoid converting relative paths
++ // like a/../c: into c:.
++ if runtime.GOOS == "windows" && out.w == 0 && out.volLen == 0 && r != 0 {
++ for i := r; i < n && !os.IsPathSeparator(path[i]); i++ {
++ if path[i] == ':' {
++ out.append('.')
++ out.append(Separator)
++ break
++ }
++ }
++ }
+ // copy element
+ for ; r < n && !os.IsPathSeparator(path[r]); r++ {
+ out.append(path[r])
+--
+2.7.4
diff --git a/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-41723.patch b/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-41723.patch
new file mode 100644
index 0000000000..a93fa31dcd
--- /dev/null
+++ b/poky/meta/recipes-devtools/go/go-1.14/CVE-2022-41723.patch
@@ -0,0 +1,156 @@
+From 451766789f646617157c725e20c955d4a9a70d4e Mon Sep 17 00:00:00 2001
+From: Roland Shoemaker <bracewell@google.com>
+Date: Mon, 6 Feb 2023 10:03:44 -0800
+Subject: [PATCH] net/http: update bundled golang.org/x/net/http2
+
+Disable cmd/internal/moddeps test, since this update includes PRIVATE
+track fixes.
+
+Fixes CVE-2022-41723
+Fixes #58355
+Updates #57855
+
+Change-Id: Ie870562a6f6e44e4e8f57db6a0dde1a41a2b090c
+Reviewed-on: https://team-review.git.corp.google.com/c/golang/go-private/+/1728939
+Reviewed-by: Damien Neil <dneil@google.com>
+Reviewed-by: Julie Qiu <julieqiu@google.com>
+Reviewed-by: Tatiana Bradley <tatianabradley@google.com>
+Run-TryBot: Roland Shoemaker <bracewell@google.com>
+Reviewed-on: https://go-review.googlesource.com/c/go/+/468118
+TryBot-Result: Gopher Robot <gobot@golang.org>
+Run-TryBot: Michael Pratt <mpratt@google.com>
+Auto-Submit: Michael Pratt <mpratt@google.com>
+Reviewed-by: Than McIntosh <thanm@google.com>
+
+Upstream-Status: Backport [https://github.com/golang/go/commit/5c3e11bd0b5c0a86e5beffcd4339b86a902b21c3]
+CVE: CVE-2022-41723
+Signed-off-by: Shubham Kulkarni <skulkarni@mvista.com>
+---
+ src/vendor/golang.org/x/net/http2/hpack/hpack.go | 79 +++++++++++++++---------
+ 1 file changed, 49 insertions(+), 30 deletions(-)
+
+diff --git a/src/vendor/golang.org/x/net/http2/hpack/hpack.go b/src/vendor/golang.org/x/net/http2/hpack/hpack.go
+index 85f18a2..02e80e3 100644
+--- a/src/vendor/golang.org/x/net/http2/hpack/hpack.go
++++ b/src/vendor/golang.org/x/net/http2/hpack/hpack.go
+@@ -359,6 +359,7 @@ func (d *Decoder) parseFieldLiteral(n uint8, it indexType) error {
+
+ var hf HeaderField
+ wantStr := d.emitEnabled || it.indexed()
++ var undecodedName undecodedString
+ if nameIdx > 0 {
+ ihf, ok := d.at(nameIdx)
+ if !ok {
+@@ -366,15 +367,27 @@ func (d *Decoder) parseFieldLiteral(n uint8, it indexType) error {
+ }
+ hf.Name = ihf.Name
+ } else {
+- hf.Name, buf, err = d.readString(buf, wantStr)
++ undecodedName, buf, err = d.readString(buf)
+ if err != nil {
+ return err
+ }
+ }
+- hf.Value, buf, err = d.readString(buf, wantStr)
++ undecodedValue, buf, err := d.readString(buf)
+ if err != nil {
+ return err
+ }
++ if wantStr {
++ if nameIdx <= 0 {
++ hf.Name, err = d.decodeString(undecodedName)
++ if err != nil {
++ return err
++ }
++ }
++ hf.Value, err = d.decodeString(undecodedValue)
++ if err != nil {
++ return err
++ }
++ }
+ d.buf = buf
+ if it.indexed() {
+ d.dynTab.add(hf)
+@@ -459,46 +472,52 @@ func readVarInt(n byte, p []byte) (i uint64, remain []byte, err error) {
+ return 0, origP, errNeedMore
+ }
+
+-// readString decodes an hpack string from p.
++// readString reads an hpack string from p.
+ //
+-// wantStr is whether s will be used. If false, decompression and
+-// []byte->string garbage are skipped if s will be ignored
+-// anyway. This does mean that huffman decoding errors for non-indexed
+-// strings past the MAX_HEADER_LIST_SIZE are ignored, but the server
+-// is returning an error anyway, and because they're not indexed, the error
+-// won't affect the decoding state.
+-func (d *Decoder) readString(p []byte, wantStr bool) (s string, remain []byte, err error) {
++// It returns a reference to the encoded string data to permit deferring decode costs
++// until after the caller verifies all data is present.
++func (d *Decoder) readString(p []byte) (u undecodedString, remain []byte, err error) {
+ if len(p) == 0 {
+- return "", p, errNeedMore
++ return u, p, errNeedMore
+ }
+ isHuff := p[0]&128 != 0
+ strLen, p, err := readVarInt(7, p)
+ if err != nil {
+- return "", p, err
++ return u, p, err
+ }
+ if d.maxStrLen != 0 && strLen > uint64(d.maxStrLen) {
+- return "", nil, ErrStringLength
++ // Returning an error here means Huffman decoding errors
++ // for non-indexed strings past the maximum string length
++ // are ignored, but the server is returning an error anyway
++ // and because the string is not indexed the error will not
++ // affect the decoding state.
++ return u, nil, ErrStringLength
+ }
+ if uint64(len(p)) < strLen {
+- return "", p, errNeedMore
+- }
+- if !isHuff {
+- if wantStr {
+- s = string(p[:strLen])
+- }
+- return s, p[strLen:], nil
++ return u, p, errNeedMore
+ }
++ u.isHuff = isHuff
++ u.b = p[:strLen]
++ return u, p[strLen:], nil
++}
+
+- if wantStr {
+- buf := bufPool.Get().(*bytes.Buffer)
+- buf.Reset() // don't trust others
+- defer bufPool.Put(buf)
+- if err := huffmanDecode(buf, d.maxStrLen, p[:strLen]); err != nil {
+- buf.Reset()
+- return "", nil, err
+- }
++type undecodedString struct {
++ isHuff bool
++ b []byte
++}
++
++func (d *Decoder) decodeString(u undecodedString) (string, error) {
++ if !u.isHuff {
++ return string(u.b), nil
++ }
++ buf := bufPool.Get().(*bytes.Buffer)
++ buf.Reset() // don't trust others
++ var s string
++ err := huffmanDecode(buf, d.maxStrLen, u.b)
++ if err == nil {
+ s = buf.String()
+- buf.Reset() // be nice to GC
+ }
+- return s, p[strLen:], nil
++ buf.Reset() // be nice to GC
++ bufPool.Put(buf)
++ return s, err
+ }
+--
+2.7.4
diff --git a/poky/meta/recipes-devtools/go/go-1.14/CVE-2023-24534.patch b/poky/meta/recipes-devtools/go/go-1.14/CVE-2023-24534.patch
new file mode 100644
index 0000000000..d50db04bed
--- /dev/null
+++ b/poky/meta/recipes-devtools/go/go-1.14/CVE-2023-24534.patch
@@ -0,0 +1,200 @@
+From d6759e7a059f4208f07aa781402841d7ddaaef96 Mon Sep 17 00:00:00 2001
+From: Damien Neil <dneil@google.com>
+Date: Fri, 10 Mar 2023 14:21:05 -0800
+Subject: [PATCH] [release-branch.go1.19] net/textproto: avoid overpredicting
+ the number of MIME header keys
+
+Reviewed-on: https://team-review.git.corp.google.com/c/golang/go-private/+/1802452
+Run-TryBot: Damien Neil <dneil@google.com>
+Reviewed-by: Roland Shoemaker <bracewell@google.com>
+Reviewed-by: Julie Qiu <julieqiu@google.com>
+(cherry picked from commit f739f080a72fd5b06d35c8e244165159645e2ed6)
+Reviewed-on: https://team-review.git.corp.google.com/c/golang/go-private/+/1802393
+Reviewed-by: Damien Neil <dneil@google.com>
+Run-TryBot: Roland Shoemaker <bracewell@google.com>
+Change-Id: I675451438d619a9130360c56daf529559004903f
+Reviewed-on: https://go-review.googlesource.com/c/go/+/481982
+Run-TryBot: Michael Knyszek <mknyszek@google.com>
+TryBot-Result: Gopher Robot <gobot@golang.org>
+Reviewed-by: Matthew Dempsky <mdempsky@google.com>
+Auto-Submit: Michael Knyszek <mknyszek@google.com>
+
+Upstream-Status: Backport [https://github.com/golang/go/commit/d6759e7a059f4208f07aa781402841d7ddaaef96]
+CVE: CVE-2023-24534
+Signed-off-by: Vivek Kumbhar <vkumbhar@mvista.com>
+---
+ src/bytes/bytes.go | 13 +++++++
+ src/net/textproto/reader.go | 31 +++++++++++------
+ src/net/textproto/reader_test.go | 59 ++++++++++++++++++++++++++++++++
+ 3 files changed, 92 insertions(+), 11 deletions(-)
+
+diff --git a/src/bytes/bytes.go b/src/bytes/bytes.go
+index e872cc2..1f0d760 100644
+--- a/src/bytes/bytes.go
++++ b/src/bytes/bytes.go
+@@ -1078,6 +1078,19 @@ func Index(s, sep []byte) int {
+ return -1
+ }
+
++// Cut slices s around the first instance of sep,
++// returning the text before and after sep.
++// The found result reports whether sep appears in s.
++// If sep does not appear in s, cut returns s, nil, false.
++//
++// Cut returns slices of the original slice s, not copies.
++func Cut(s, sep []byte) (before, after []byte, found bool) {
++ if i := Index(s, sep); i >= 0 {
++ return s[:i], s[i+len(sep):], true
++ }
++ return s, nil, false
++}
++
+ func indexRabinKarp(s, sep []byte) int {
+ // Rabin-Karp search
+ hashsep, pow := hashStr(sep)
+diff --git a/src/net/textproto/reader.go b/src/net/textproto/reader.go
+index a505da9..8d547fe 100644
+--- a/src/net/textproto/reader.go
++++ b/src/net/textproto/reader.go
+@@ -486,8 +487,11 @@ func (r *Reader) ReadMIMEHeader() (MIMEHeader, error) {
+ // large one ahead of time which we'll cut up into smaller
+ // slices. If this isn't big enough later, we allocate small ones.
+ var strs []string
+- hint := r.upcomingHeaderNewlines()
++ hint := r.upcomingHeaderKeys()
+ if hint > 0 {
++ if hint > 1000 {
++ hint = 1000 // set a cap to avoid overallocation
++ }
+ strs = make([]string, hint)
+ }
+
+@@ -562,9 +566,11 @@ func mustHaveFieldNameColon(line []byte) error {
+ return nil
+ }
+
+-// upcomingHeaderNewlines returns an approximation of the number of newlines
++var nl = []byte("\n")
++
++// upcomingHeaderKeys returns an approximation of the number of keys
+ // that will be in this header. If it gets confused, it returns 0.
+-func (r *Reader) upcomingHeaderNewlines() (n int) {
++func (r *Reader) upcomingHeaderKeys() (n int) {
+ // Try to determine the 'hint' size.
+ r.R.Peek(1) // force a buffer load if empty
+ s := r.R.Buffered()
+@@ -572,17 +578,20 @@ func (r *Reader) upcomingHeaderNewlines() (n int) {
+ return
+ }
+ peek, _ := r.R.Peek(s)
+- for len(peek) > 0 {
+- i := bytes.IndexByte(peek, '\n')
+- if i < 3 {
+- // Not present (-1) or found within the next few bytes,
+- // implying we're at the end ("\r\n\r\n" or "\n\n")
+- return
++ for len(peek) > 0 && n < 1000 {
++ var line []byte
++ line, peek, _ = bytes.Cut(peek, nl)
++ if len(line) == 0 || (len(line) == 1 && line[0] == '\r') {
++ // Blank line separating headers from the body.
++ break
++ }
++ if line[0] == ' ' || line[0] == '\t' {
++ // Folded continuation of the previous line.
++ continue
+ }
+ n++
+- peek = peek[i+1:]
+ }
+- return
++ return n
+ }
+
+ // CanonicalMIMEHeaderKey returns the canonical format of the
+diff --git a/src/net/textproto/reader_test.go b/src/net/textproto/reader_test.go
+index 3124d43..3ae0de1 100644
+--- a/src/net/textproto/reader_test.go
++++ b/src/net/textproto/reader_test.go
+@@ -9,6 +9,7 @@ import (
+ "bytes"
+ "io"
+ "reflect"
++ "runtime"
+ "strings"
+ "testing"
+ )
+@@ -127,6 +128,42 @@ func TestReadMIMEHeaderSingle(t *testing.T) {
+ }
+ }
+
++// TestReaderUpcomingHeaderKeys is testing an internal function, but it's very
++// difficult to test well via the external API.
++func TestReaderUpcomingHeaderKeys(t *testing.T) {
++ for _, test := range []struct {
++ input string
++ want int
++ }{{
++ input: "",
++ want: 0,
++ }, {
++ input: "A: v",
++ want: 1,
++ }, {
++ input: "A: v\r\nB: v\r\n",
++ want: 2,
++ }, {
++ input: "A: v\nB: v\n",
++ want: 2,
++ }, {
++ input: "A: v\r\n continued\r\n still continued\r\nB: v\r\n\r\n",
++ want: 2,
++ }, {
++ input: "A: v\r\n\r\nB: v\r\nC: v\r\n",
++ want: 1,
++ }, {
++ input: "A: v" + strings.Repeat("\n", 1000),
++ want: 1,
++ }} {
++ r := reader(test.input)
++ got := r.upcomingHeaderKeys()
++ if test.want != got {
++ t.Fatalf("upcomingHeaderKeys(%q): %v; want %v", test.input, got, test.want)
++ }
++ }
++}
++
+ func TestReadMIMEHeaderNoKey(t *testing.T) {
+ r := reader(": bar\ntest-1: 1\n\n")
+ m, err := r.ReadMIMEHeader()
+@@ -223,6 +260,28 @@ func TestReadMIMEHeaderTrimContinued(t *testing.T) {
+ }
+ }
+
++// Test that reading a header doesn't overallocate. Issue 58975.
++func TestReadMIMEHeaderAllocations(t *testing.T) {
++ var totalAlloc uint64
++ const count = 200
++ for i := 0; i < count; i++ {
++ r := reader("A: b\r\n\r\n" + strings.Repeat("\n", 4096))
++ var m1, m2 runtime.MemStats
++ runtime.ReadMemStats(&m1)
++ _, err := r.ReadMIMEHeader()
++ if err != nil {
++ t.Fatalf("ReadMIMEHeader: %v", err)
++ }
++ runtime.ReadMemStats(&m2)
++ totalAlloc += m2.TotalAlloc - m1.TotalAlloc
++ }
++ // 32k is large and we actually allocate substantially less,
++ // but prior to the fix for #58975 we allocated ~400k in this case.
++ if got, want := totalAlloc/count, uint64(32768); got > want {
++ t.Fatalf("ReadMIMEHeader allocated %v bytes, want < %v", got, want)
++ }
++}
++
+ type readResponseTest struct {
+ in string
+ inCode int
+--
+2.25.1
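
The rewritten hint loop above relies on bytes.Cut, which this patch also backports into src/bytes. A standalone sketch of the same counting rules (needs Go 1.18+ or the backported Cut; countHeaderKeys is an illustrative stand-in for the unexported upcomingHeaderKeys):

    package main

    import (
        "bytes"
        "fmt"
    )

    var nl = []byte("\n")

    // countHeaderKeys approximates how many header keys a buffered block holds:
    // folded continuation lines don't count, a blank line ends the scan,
    // and the result is capped at 1000.
    func countHeaderKeys(peek []byte) int {
        n := 0
        for len(peek) > 0 && n < 1000 {
            var line []byte
            line, peek, _ = bytes.Cut(peek, nl)
            if len(line) == 0 || (len(line) == 1 && line[0] == '\r') {
                break // blank line separating headers from the body
            }
            if line[0] == ' ' || line[0] == '\t' {
                continue // folded continuation of the previous key
            }
            n++
        }
        return n
    }

    func main() {
        fmt.Println(countHeaderKeys([]byte("A: v\r\n continued\r\nB: v\r\n\r\nC: v\r\n"))) // 2
    }
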
diff --git a/poky/meta/recipes-devtools/go/go-1.14/CVE-2023-24537.patch b/poky/meta/recipes-devtools/go/go-1.14/CVE-2023-24537.patch
new file mode 100644
index 0000000000..e04b717fc1
--- /dev/null
+++ b/poky/meta/recipes-devtools/go/go-1.14/CVE-2023-24537.patch
@@ -0,0 +1,76 @@
+From bf8c7c575c8a552d9d79deb29e80854dc88528d0 Mon Sep 17 00:00:00 2001
+From: Damien Neil <dneil@google.com>
+Date: Mon, 20 Mar 2023 10:43:19 -0700
+Subject: [PATCH] [release-branch.go1.20] mime/multipart: limit parsed mime
+ message sizes
+
+Reviewed-on: https://team-review.git.corp.google.com/c/golang/go-private/+/1802456
+Reviewed-by: Julie Qiu <julieqiu@google.com>
+Reviewed-by: Roland Shoemaker <bracewell@google.com>
+Run-TryBot: Damien Neil <dneil@google.com>
+Reviewed-on: https://team-review.git.corp.google.com/c/golang/go-private/+/1802611
+Reviewed-by: Damien Neil <dneil@google.com>
+Change-Id: Ifdfa192d54f722d781a4d8c5f35b5fb72d122168
+Reviewed-on: https://go-review.googlesource.com/c/go/+/481986
+Reviewed-by: Matthew Dempsky <mdempsky@google.com>
+TryBot-Result: Gopher Robot <gobot@golang.org>
+Run-TryBot: Michael Knyszek <mknyszek@google.com>
+Auto-Submit: Michael Knyszek <mknyszek@google.com>
+
+Upstream-Status: Backport [https://github.com/golang/go/commit/126a1d02da82f93ede7ce0bd8d3c51ef627f2104]
+CVE: CVE-2023-24537
+Signed-off-by: Vivek Kumbhar <vkumbhar@mvista.com>
+---
+ src/go/parser/parser_test.go | 16 ++++++++++++++++
+ src/go/scanner/scanner.go | 5 ++++-
+ 2 files changed, 20 insertions(+), 1 deletion(-)
+
+diff --git a/src/go/parser/parser_test.go b/src/go/parser/parser_test.go
+index 37a6a2b..714557c 100644
+--- a/src/go/parser/parser_test.go
++++ b/src/go/parser/parser_test.go
+@@ -738,3 +738,19 @@ func TestScopeDepthLimit(t *testing.T) {
+ }
+ }
+ }
++
++// TestIssue59180 tests that line number overflow doesn't cause an infinite loop.
++func TestIssue59180(t *testing.T) {
++ testcases := []string{
++ "package p\n//line :9223372036854775806\n\n//",
++ "package p\n//line :1:9223372036854775806\n\n//",
++ "package p\n//line file:9223372036854775806\n\n//",
++ }
++
++ for _, src := range testcases {
++ _, err := ParseFile(token.NewFileSet(), "", src, ParseComments)
++ if err == nil {
++ t.Errorf("ParseFile(%s) succeeded unexpectedly", src)
++ }
++ }
++}
+diff --git a/src/go/scanner/scanner.go b/src/go/scanner/scanner.go
+index 00fe2dc..3159d25 100644
+--- a/src/go/scanner/scanner.go
++++ b/src/go/scanner/scanner.go
+@@ -246,13 +246,16 @@ func (s *Scanner) updateLineInfo(next, offs int, text []byte) {
+ return
+ }
+
++ // Put a cap on the maximum size of line and column numbers.
++ // 30 bits allows for some additional space before wrapping an int32.
++ const maxLineCol = 1<<30 - 1
+ var line, col int
+ i2, n2, ok2 := trailingDigits(text[:i-1])
+ if ok2 {
+ //line filename:line:col
+ i, i2 = i2, i
+ line, col = n2, n
+- if col == 0 {
++ if col == 0 || col > maxLineCol {
+ s.error(offs+i2, "invalid column number: "+string(text[i2:]))
+ return
+ }
+--
+2.25.1
+
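
A quick way to exercise the new cap, assuming a toolchain that already carries this fix (an unpatched go/scanner could loop instead of returning an error):

    package main

    import (
        "fmt"
        "go/parser"
        "go/token"
    )

    func main() {
        // One of the TestIssue59180 inputs: a //line directive whose number overflows.
        src := "package p\n//line :9223372036854775806\n\n//"
        _, err := parser.ParseFile(token.NewFileSet(), "", src, parser.ParseComments)
        fmt.Println(err) // non-nil on patched toolchains
    }
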
diff --git a/poky/meta/recipes-devtools/go/go-crosssdk.inc b/poky/meta/recipes-devtools/go/go-crosssdk.inc
index f0bec79719..36c9b12af8 100644
--- a/poky/meta/recipes-devtools/go/go-crosssdk.inc
+++ b/poky/meta/recipes-devtools/go/go-crosssdk.inc
@@ -4,6 +4,8 @@ DEPENDS = "go-native virtual/${TARGET_PREFIX}gcc-crosssdk virtual/nativesdk-${TA
PN = "go-crosssdk-${SDK_SYS}"
PROVIDES = "virtual/${TARGET_PREFIX}go-crosssdk"
+export GOCACHE = "${B}/.cache"
+
do_configure[noexec] = "1"
do_compile() {
diff --git a/poky/meta/recipes-devtools/go/go_1.14.bb b/poky/meta/recipes-devtools/go/go_1.14.bb
index c17527998b..76ff788238 100644
--- a/poky/meta/recipes-devtools/go/go_1.14.bb
+++ b/poky/meta/recipes-devtools/go/go_1.14.bb
@@ -7,8 +7,8 @@ export CGO_ENABLED_riscv64 = ""
# windows/mips/riscv doesn't support -buildmode=pie, so skip the QA checking
# for windows/mips/riscv and their variants.
python() {
- if 'mips' in d.getVar('TARGET_ARCH',True) or 'riscv' in d.getVar('TARGET_ARCH',True) or 'windows' in d.getVar('TARGET_GOOS', True):
- d.appendVar('INSANE_SKIP_%s' % d.getVar('PN',True), " textrel")
+ if 'mips' in d.getVar('TARGET_ARCH') or 'riscv' in d.getVar('TARGET_ARCH') or 'windows' in d.getVar('TARGET_GOOS'):
+ d.appendVar('INSANE_SKIP_%s' % d.getVar('PN'), " textrel")
else:
d.setVar('GOBUILDMODE', 'pie')
}
diff --git a/poky/meta/recipes-devtools/opkg/opkg_0.4.2.bb b/poky/meta/recipes-devtools/opkg/opkg_0.4.2.bb
index a813f7258b..55be6547c0 100644
--- a/poky/meta/recipes-devtools/opkg/opkg_0.4.2.bb
+++ b/poky/meta/recipes-devtools/opkg/opkg_0.4.2.bb
@@ -50,7 +50,9 @@ EXTRA_OECONF_class-native = "--localstatedir=/${@os.path.relpath('${localstatedi
do_install_append () {
install -d ${D}${sysconfdir}/opkg
install -m 0644 ${WORKDIR}/opkg.conf ${D}${sysconfdir}/opkg/opkg.conf
- echo "option lists_dir ${OPKGLIBDIR}/opkg/lists" >>${D}${sysconfdir}/opkg/opkg.conf
+ echo "option lists_dir ${OPKGLIBDIR}/opkg/lists" >>${D}${sysconfdir}/opkg/opkg.conf
+ echo "option info_dir ${OPKGLIBDIR}/opkg/info" >>${D}${sysconfdir}/opkg/opkg.conf
+ echo "option status_file ${OPKGLIBDIR}/opkg/status" >>${D}${sysconfdir}/opkg/opkg.conf
# We need to create the lock directory
install -d ${D}${OPKGLIBDIR}/opkg
diff --git a/poky/meta/recipes-devtools/python/files/CVE-2022-45061.patch b/poky/meta/recipes-devtools/python/files/CVE-2022-45061.patch
new file mode 100644
index 0000000000..647bf59908
--- /dev/null
+++ b/poky/meta/recipes-devtools/python/files/CVE-2022-45061.patch
@@ -0,0 +1,100 @@
+From 064ec20bf7a181ba5fa961aaa12973812aa6ca5d Mon Sep 17 00:00:00 2001
+From: "Miss Islington (bot)"
+ <31488909+miss-islington@users.noreply.github.com>
+Date: Mon, 7 Nov 2022 18:57:10 -0800
+Subject: [PATCH] [3.11] gh-98433: Fix quadratic time idna decoding. (GH-99092)
+ (GH-99222)
+
+There was an unnecessary quadratic loop in idna decoding. This restores
+the behavior to linear.
+
+(cherry picked from commit d315722564927c7202dd6e111dc79eaf14240b0d)
+
+(cherry picked from commit a6f6c3a3d6f2b580f2d87885c9b8a9350ad7bf15)
+
+Co-authored-by: Miss Islington (bot) <31488909+miss-islington@users.noreply.github.com>
+Co-authored-by: Gregory P. Smith <greg@krypto.org>
+
+CVE: CVE-2022-45061
+Upstream-Status: Backport [https://github.com/python/cpython/pull/99231/commits/064ec20bf7a181ba5fa961aaa12973812aa6ca5d]
+Signed-off-by: Omkar Patil <Omkar.Patil@kpit.com>
+
+---
+ Lib/encodings/idna.py | 32 +++++++++----------
+ Lib/test/test_codecs.py | 6 ++++
+ ...2-11-04-09-29-36.gh-issue-98433.l76c5G.rst | 6 ++++
+ 3 files changed, 27 insertions(+), 17 deletions(-)
+ create mode 100644 Misc/NEWS.d/next/Security/2022-11-04-09-29-36.gh-issue-98433.l76c5G.rst
+
+diff --git a/Lib/encodings/idna.py b/Lib/encodings/idna.py
+index ea4058512fe3..bf98f513366b 100644
+--- a/Lib/encodings/idna.py
++++ b/Lib/encodings/idna.py
+@@ -39,23 +39,21 @@ def nameprep(label):
+
+ # Check bidi
+ RandAL = [stringprep.in_table_d1(x) for x in label]
+- for c in RandAL:
+- if c:
+- # There is a RandAL char in the string. Must perform further
+- # tests:
+- # 1) The characters in section 5.8 MUST be prohibited.
+- # This is table C.8, which was already checked
+- # 2) If a string contains any RandALCat character, the string
+- # MUST NOT contain any LCat character.
+- if any(stringprep.in_table_d2(x) for x in label):
+- raise UnicodeError("Violation of BIDI requirement 2")
+-
+- # 3) If a string contains any RandALCat character, a
+- # RandALCat character MUST be the first character of the
+- # string, and a RandALCat character MUST be the last
+- # character of the string.
+- if not RandAL[0] or not RandAL[-1]:
+- raise UnicodeError("Violation of BIDI requirement 3")
++ if any(RandAL):
++ # There is a RandAL char in the string. Must perform further
++ # tests:
++ # 1) The characters in section 5.8 MUST be prohibited.
++ # This is table C.8, which was already checked
++ # 2) If a string contains any RandALCat character, the string
++ # MUST NOT contain any LCat character.
++ if any(stringprep.in_table_d2(x) for x in label):
++ raise UnicodeError("Violation of BIDI requirement 2")
++ # 3) If a string contains any RandALCat character, a
++ # RandALCat character MUST be the first character of the
++ # string, and a RandALCat character MUST be the last
++ # character of the string.
++ if not RandAL[0] or not RandAL[-1]:
++ raise UnicodeError("Violation of BIDI requirement 3")
+
+ return label
+
+diff --git a/Lib/test/test_codecs.py b/Lib/test/test_codecs.py
+index d1faf0126c1e..37ade7d80d02 100644
+--- a/Lib/test/test_codecs.py
++++ b/Lib/test/test_codecs.py
+@@ -1532,6 +1532,12 @@ def test_builtin_encode(self):
+ self.assertEqual("pyth\xf6n.org".encode("idna"), b"xn--pythn-mua.org")
+ self.assertEqual("pyth\xf6n.org.".encode("idna"), b"xn--pythn-mua.org.")
+
++ def test_builtin_decode_length_limit(self):
++ with self.assertRaisesRegex(UnicodeError, "too long"):
++ (b"xn--016c"+b"a"*1100).decode("idna")
++ with self.assertRaisesRegex(UnicodeError, "too long"):
++ (b"xn--016c"+b"a"*70).decode("idna")
++
+ def test_stream(self):
+ r = codecs.getreader("idna")(io.BytesIO(b"abc"))
+ r.read(3)
+diff --git a/Misc/NEWS.d/next/Security/2022-11-04-09-29-36.gh-issue-98433.l76c5G.rst b/Misc/NEWS.d/next/Security/2022-11-04-09-29-36.gh-issue-98433.l76c5G.rst
+new file mode 100644
+index 000000000000..5185fac2e29d
+--- /dev/null
++++ b/Misc/NEWS.d/next/Security/2022-11-04-09-29-36.gh-issue-98433.l76c5G.rst
+@@ -0,0 +1,6 @@
++The IDNA codec decoder used on DNS hostnames by :mod:`socket` or :mod:`asyncio`
++related name resolution functions no longer involves a quadratic algorithm.
++This prevents a potential CPU denial of service if an out-of-spec excessive
++length hostname involving bidirectional characters were decoded. Some protocols
++such as :mod:`urllib` http ``3xx`` redirects potentially allow for an attacker
++to supply such a name.
diff --git a/poky/meta/recipes-devtools/python/python3/CVE-2021-28861.patch b/poky/meta/recipes-devtools/python/python3/CVE-2021-28861.patch
deleted file mode 100644
index dc97c6b4eb..0000000000
--- a/poky/meta/recipes-devtools/python/python3/CVE-2021-28861.patch
+++ /dev/null
@@ -1,135 +0,0 @@
-From 4dc2cae3abd75f386374d0635d00443b897d0672 Mon Sep 17 00:00:00 2001
-From: "Miss Islington (bot)"
- <31488909+miss-islington@users.noreply.github.com>
-Date: Wed, 22 Jun 2022 01:42:52 -0700
-Subject: [PATCH] gh-87389: Fix an open redirection vulnerability in
- http.server. (GH-93879) (GH-94094)
-
-Fix an open redirection vulnerability in the `http.server` module when
-an URI path starts with `//` that could produce a 301 Location header
-with a misleading target. Vulnerability discovered, and logic fix
-proposed, by Hamza Avvan (@hamzaavvan).
-
-Test and comments authored by Gregory P. Smith [Google].
-(cherry picked from commit 4abab6b603dd38bec1168e9a37c40a48ec89508e)
-
-Co-authored-by: Gregory P. Smith <greg@krypto.org>
-
-Signed-off-by: Riyaz Khan <Riyaz.Khan@kpit.com>
-
-CVE: CVE-2021-28861
-
-Upstream-Status: Backport [https://github.com/python/cpython/commit/4dc2cae3abd75f386374d0635d00443b897d0672]
-
----
- Lib/http/server.py | 7 +++
- Lib/test/test_httpservers.py | 53 ++++++++++++++++++-
- ...2-06-15-20-09-23.gh-issue-87389.QVaC3f.rst | 3 ++
- 3 files changed, 61 insertions(+), 2 deletions(-)
- create mode 100644 Misc/NEWS.d/next/Security/2022-06-15-20-09-23.gh-issue-87389.QVaC3f.rst
-
-diff --git a/Lib/http/server.py b/Lib/http/server.py
-index 38f7accad7a3..39de35458c38 100644
---- a/Lib/http/server.py
-+++ b/Lib/http/server.py
-@@ -332,6 +332,13 @@ def parse_request(self):
- return False
- self.command, self.path = command, path
-
-+ # gh-87389: The purpose of replacing '//' with '/' is to protect
-+ # against open redirect attacks possibly triggered if the path starts
-+ # with '//' because http clients treat //path as an absolute URI
-+ # without scheme (similar to http://path) rather than a path.
-+ if self.path.startswith('//'):
-+ self.path = '/' + self.path.lstrip('/') # Reduce to a single /
-+
- # Examine the headers and look for a Connection directive.
- try:
- self.headers = http.client.parse_headers(self.rfile,
-diff --git a/Lib/test/test_httpservers.py b/Lib/test/test_httpservers.py
-index 87d4924a34b3..fb026188f0b4 100644
---- a/Lib/test/test_httpservers.py
-+++ b/Lib/test/test_httpservers.py
-@@ -330,7 +330,7 @@ class request_handler(NoLogRequestHandler, SimpleHTTPRequestHandler):
- pass
-
- def setUp(self):
-- BaseTestCase.setUp(self)
-+ super().setUp()
- self.cwd = os.getcwd()
- basetempdir = tempfile.gettempdir()
- os.chdir(basetempdir)
-@@ -358,7 +358,7 @@ def tearDown(self):
- except:
- pass
- finally:
-- BaseTestCase.tearDown(self)
-+ super().tearDown()
-
- def check_status_and_reason(self, response, status, data=None):
- def close_conn():
-@@ -414,6 +414,55 @@ def test_undecodable_filename(self):
- self.check_status_and_reason(response, HTTPStatus.OK,
- data=support.TESTFN_UNDECODABLE)
-
-+ def test_get_dir_redirect_location_domain_injection_bug(self):
-+ """Ensure //evil.co/..%2f../../X does not put //evil.co/ in Location.
-+
-+ //netloc/ in a Location header is a redirect to a new host.
-+ https://github.com/python/cpython/issues/87389
-+
-+ This checks that a path resolving to a directory on our server cannot
-+ resolve into a redirect to another server.
-+ """
-+ os.mkdir(os.path.join(self.tempdir, 'existing_directory'))
-+ url = f'/python.org/..%2f..%2f..%2f..%2f..%2f../%0a%0d/../{self.tempdir_name}/existing_directory'
-+ expected_location = f'{url}/' # /python.org.../ single slash single prefix, trailing slash
-+ # Canonicalizes to /tmp/tempdir_name/existing_directory which does
-+ # exist and is a dir, triggering the 301 redirect logic.
-+ response = self.request(url)
-+ self.check_status_and_reason(response, HTTPStatus.MOVED_PERMANENTLY)
-+ location = response.getheader('Location')
-+ self.assertEqual(location, expected_location, msg='non-attack failed!')
-+
-+ # //python.org... multi-slash prefix, no trailing slash
-+ attack_url = f'/{url}'
-+ response = self.request(attack_url)
-+ self.check_status_and_reason(response, HTTPStatus.MOVED_PERMANENTLY)
-+ location = response.getheader('Location')
-+ self.assertFalse(location.startswith('//'), msg=location)
-+ self.assertEqual(location, expected_location,
-+ msg='Expected Location header to start with a single / and '
-+ 'end with a / as this is a directory redirect.')
-+
-+ # ///python.org... triple-slash prefix, no trailing slash
-+ attack3_url = f'//{url}'
-+ response = self.request(attack3_url)
-+ self.check_status_and_reason(response, HTTPStatus.MOVED_PERMANENTLY)
-+ self.assertEqual(response.getheader('Location'), expected_location)
-+
-+ # If the second word in the http request (Request-URI for the http
-+ # method) is a full URI, we don't worry about it, as that'll be parsed
-+ # and reassembled as a full URI within BaseHTTPRequestHandler.send_head
-+ # so no errant scheme-less //netloc//evil.co/ domain mixup can happen.
-+ attack_scheme_netloc_2slash_url = f'https://pypi.org/{url}'
-+ expected_scheme_netloc_location = f'{attack_scheme_netloc_2slash_url}/'
-+ response = self.request(attack_scheme_netloc_2slash_url)
-+ self.check_status_and_reason(response, HTTPStatus.MOVED_PERMANENTLY)
-+ location = response.getheader('Location')
-+ # We're just ensuring that the scheme and domain make it through, if
-+ # there are or aren't multiple slashes at the start of the path that
-+ # follows that isn't important in this Location: header.
-+ self.assertTrue(location.startswith('https://pypi.org/'), msg=location)
-+
- def test_get(self):
- #constructs the path relative to the root directory of the HTTPServer
- response = self.request(self.base_url + '/test')
-diff --git a/Misc/NEWS.d/next/Security/2022-06-15-20-09-23.gh-issue-87389.QVaC3f.rst b/Misc/NEWS.d/next/Security/2022-06-15-20-09-23.gh-issue-87389.QVaC3f.rst
-new file mode 100644
-index 000000000000..029d437190de
---- /dev/null
-+++ b/Misc/NEWS.d/next/Security/2022-06-15-20-09-23.gh-issue-87389.QVaC3f.rst
-@@ -0,0 +1,3 @@
-+:mod:`http.server`: Fix an open redirection vulnerability in the HTTP server
-+when an URI path starts with ``//``. Vulnerability discovered, and initial
-+fix proposed, by Hamza Avvan.
diff --git a/poky/meta/recipes-devtools/python/python3/CVE-2022-37454.patch b/poky/meta/recipes-devtools/python/python3/CVE-2022-37454.patch
new file mode 100644
index 0000000000..a41cc301e2
--- /dev/null
+++ b/poky/meta/recipes-devtools/python/python3/CVE-2022-37454.patch
@@ -0,0 +1,105 @@
+From 948c6794711458fd148a3fa62296cadeeb2ed631 Mon Sep 17 00:00:00 2001
+From: "Miss Islington (bot)"
+ <31488909+miss-islington@users.noreply.github.com>
+Date: Fri, 28 Oct 2022 03:07:50 -0700
+Subject: [PATCH] [3.8] gh-98517: Fix buffer overflows in _sha3 module
+ (GH-98519) (#98527)
+
+This is a port of the applicable part of XKCP's fix [1] for
+CVE-2022-37454 and avoids the segmentation fault and the infinite
+loop in the test cases published in [2].
+
+[1]: https://github.com/XKCP/XKCP/commit/fdc6fef075f4e81d6b1bc38364248975e08e340a
+[2]: https://mouha.be/sha-3-buffer-overflow/
+
+Regression test added by: Gregory P. Smith [Google LLC] <greg@krypto.org>
+(cherry picked from commit 0e4e058602d93b88256ff90bbef501ba20be9dd3)
+
+Co-authored-by: Theo Buehler <botovq@users.noreply.github.com>
+
+CVE: CVE-2022-37454
+Upstream-Status: Backport [https://github.com/python/cpython/commit/948c6794711458fd148a3fa62296cadeeb2ed631]
+Signed-off-by: Pawan Badganchi <Pawan.Badganchi@kpit.com>
+---
+ Lib/test/test_hashlib.py | 9 +++++++++
+ .../2022-10-21-13-31-47.gh-issue-98517.SXXGfV.rst | 1 +
+ Modules/_sha3/kcp/KeccakSponge.inc | 15 ++++++++-------
+ 3 files changed, 18 insertions(+), 7 deletions(-)
+ create mode 100644 Misc/NEWS.d/next/Security/2022-10-21-13-31-47.gh-issue-98517.SXXGfV.rst
+
+diff --git a/Lib/test/test_hashlib.py b/Lib/test/test_hashlib.py
+index 8b53d23ef525..e6cec4e306e5 100644
+--- a/Lib/test/test_hashlib.py
++++ b/Lib/test/test_hashlib.py
+@@ -434,6 +434,15 @@ def test_case_md5_huge(self, size):
+ def test_case_md5_uintmax(self, size):
+ self.check('md5', b'A'*size, '28138d306ff1b8281f1a9067e1a1a2b3')
+
++ @unittest.skipIf(sys.maxsize < _4G - 1, 'test cannot run on 32-bit systems')
++ @bigmemtest(size=_4G - 1, memuse=1, dry_run=False)
++ def test_sha3_update_overflow(self, size):
++ """Regression test for gh-98517 CVE-2022-37454."""
++ h = hashlib.sha3_224()
++ h.update(b'\x01')
++ h.update(b'\x01'*0xffff_ffff)
++ self.assertEqual(h.hexdigest(), '80762e8ce6700f114fec0f621fd97c4b9c00147fa052215294cceeed')
++
+ # use the three examples from Federal Information Processing Standards
+ # Publication 180-1, Secure Hash Standard, 1995 April 17
+ # http://www.itl.nist.gov/div897/pubs/fip180-1.htm
+diff --git a/Misc/NEWS.d/next/Security/2022-10-21-13-31-47.gh-issue-98517.SXXGfV.rst b/Misc/NEWS.d/next/Security/2022-10-21-13-31-47.gh-issue-98517.SXXGfV.rst
+new file mode 100644
+index 000000000000..2d23a6ad93c7
+--- /dev/null
++++ b/Misc/NEWS.d/next/Security/2022-10-21-13-31-47.gh-issue-98517.SXXGfV.rst
+@@ -0,0 +1 @@
++Port XKCP's fix for the buffer overflows in SHA-3 (CVE-2022-37454).
+diff --git a/Modules/_sha3/kcp/KeccakSponge.inc b/Modules/_sha3/kcp/KeccakSponge.inc
+index e10739deafa8..cf92e4db4d36 100644
+--- a/Modules/_sha3/kcp/KeccakSponge.inc
++++ b/Modules/_sha3/kcp/KeccakSponge.inc
+@@ -171,7 +171,7 @@ int SpongeAbsorb(SpongeInstance *instance, const unsigned char *data, size_t dat
+ i = 0;
+ curData = data;
+ while(i < dataByteLen) {
+- if ((instance->byteIOIndex == 0) && (dataByteLen >= (i + rateInBytes))) {
++ if ((instance->byteIOIndex == 0) && (dataByteLen-i >= rateInBytes)) {
+ #ifdef SnP_FastLoop_Absorb
+ /* processing full blocks first */
+
+@@ -199,10 +199,10 @@ int SpongeAbsorb(SpongeInstance *instance, const unsigned char *data, size_t dat
+ }
+ else {
+ /* normal lane: using the message queue */
+-
+- partialBlock = (unsigned int)(dataByteLen - i);
+- if (partialBlock+instance->byteIOIndex > rateInBytes)
++ if (dataByteLen-i > rateInBytes-instance->byteIOIndex)
+ partialBlock = rateInBytes-instance->byteIOIndex;
++ else
++ partialBlock = (unsigned int)(dataByteLen - i);
+ #ifdef KeccakReference
+ displayBytes(1, "Block to be absorbed (part)", curData, partialBlock);
+ #endif
+@@ -281,7 +281,7 @@ int SpongeSqueeze(SpongeInstance *instance, unsigned char *data, size_t dataByte
+ i = 0;
+ curData = data;
+ while(i < dataByteLen) {
+- if ((instance->byteIOIndex == rateInBytes) && (dataByteLen >= (i + rateInBytes))) {
++ if ((instance->byteIOIndex == rateInBytes) && (dataByteLen-i >= rateInBytes)) {
+ for(j=dataByteLen-i; j>=rateInBytes; j-=rateInBytes) {
+ SnP_Permute(instance->state);
+ SnP_ExtractBytes(instance->state, curData, 0, rateInBytes);
+@@ -299,9 +299,10 @@ int SpongeSqueeze(SpongeInstance *instance, unsigned char *data, size_t dataByte
+ SnP_Permute(instance->state);
+ instance->byteIOIndex = 0;
+ }
+- partialBlock = (unsigned int)(dataByteLen - i);
+- if (partialBlock+instance->byteIOIndex > rateInBytes)
++ if (dataByteLen-i > rateInBytes-instance->byteIOIndex)
+ partialBlock = rateInBytes-instance->byteIOIndex;
++ else
++ partialBlock = (unsigned int)(dataByteLen - i);
+ i += partialBlock;
+
+ SnP_ExtractBytes(instance->state, curData, instance->byteIOIndex, partialBlock);
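
The rewritten comparisons above are the substance of the CVE-2022-37454 fix: the old form computed partialBlock + instance->byteIOIndex in 32-bit unsigned arithmetic, so with close to 4 GiB of pending input the addition wraps past zero, the clamp to the remaining queue space never fires, and far too many bytes are copied into the sponge's queue. A minimal, self-contained C sketch of that wraparound (illustrative values only, assuming a 64-bit size_t as in the regression test; not taken from XKCP):

    #include <stdio.h>
    #include <stddef.h>

    int main(void)
    {
        /* Values matching the regression test: 0xFFFFFFFF bytes of new input,
         * one byte already sitting in the SHA3-224 queue (rate = 144 bytes). */
        size_t dataByteLen = 0xFFFFFFFFULL;
        size_t i = 0;
        unsigned int byteIOIndex = 1;
        unsigned int rateInBytes = 144;

        /* Old check: 0xFFFFFFFF + 1 wraps to 0 in unsigned int arithmetic,
         * so the clamp to the remaining queue space is skipped. */
        unsigned int partialBlock = (unsigned int)(dataByteLen - i);
        printf("old check clamps: %s\n",
               partialBlock + byteIOIndex > rateInBytes ? "yes" : "no");

        /* New check: evaluated without the wrap, so it clamps as intended. */
        printf("new check clamps: %s\n",
               dataByteLen - i > rateInBytes - byteIOIndex ? "yes" : "no");
        return 0;
    }
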
diff --git a/poky/meta/recipes-devtools/python/python3/python3-manifest.json b/poky/meta/recipes-devtools/python/python3/python3-manifest.json
index 3bcc9b8662..0e87f91dd8 100644
--- a/poky/meta/recipes-devtools/python/python3/python3-manifest.json
+++ b/poky/meta/recipes-devtools/python/python3/python3-manifest.json
@@ -531,7 +531,9 @@
"rdepends": [
"core"
],
- "files": [],
+ "files": [
+ "${libdir}/python${PYTHON_MAJMIN}/distutils/command/wininst-*.exe"
+ ],
"cached": []
},
"distutils": {
diff --git a/poky/meta/recipes-devtools/python/python3_3.8.13.bb b/poky/meta/recipes-devtools/python/python3_3.8.14.bb
index d87abe2351..960e41aced 100644
--- a/poky/meta/recipes-devtools/python/python3_3.8.13.bb
+++ b/poky/meta/recipes-devtools/python/python3_3.8.14.bb
@@ -34,7 +34,8 @@ SRC_URI = "http://www.python.org/ftp/python/${PV}/Python-${PV}.tar.xz \
file://0001-python3-Do-not-hardcode-lib-for-distutils.patch \
file://0020-configure.ac-setup.py-do-not-add-a-curses-include-pa.patch \
file://makerace.patch \
- file://CVE-2021-28861.patch \
+ file://CVE-2022-45061.patch \
+ file://CVE-2022-37454.patch \
"
SRC_URI_append_class-native = " \
@@ -43,8 +44,8 @@ SRC_URI_append_class-native = " \
file://0001-Don-t-search-system-for-headers-libraries.patch \
"
-SRC_URI[md5sum] = "c4b7100dcaace9d33ab1fda9a3a038d6"
-SRC_URI[sha256sum] = "6f309077012040aa39fe8f0c61db8c0fa1c45136763299d375c9e5756f09cf57"
+SRC_URI[md5sum] = "78710eed185b71f4198d354502ff62c9"
+SRC_URI[sha256sum] = "5d77e278271ba803e9909a41a4f3baca006181c93ada682a5e5fe8dc4a24c5f3"
# exclude pre-releases for both python 2.x and 3.x
UPSTREAM_CHECK_REGEX = "[Pp]ython-(?P<pver>\d+(\.\d+)+).tar"
diff --git a/poky/meta/recipes-devtools/qemu/qemu-system-native_4.2.0.bb b/poky/meta/recipes-devtools/qemu/qemu-system-native_4.2.0.bb
index d83ee59375..5ae6a37f26 100644
--- a/poky/meta/recipes-devtools/qemu/qemu-system-native_4.2.0.bb
+++ b/poky/meta/recipes-devtools/qemu/qemu-system-native_4.2.0.bb
@@ -9,7 +9,7 @@ DEPENDS = "glib-2.0-native zlib-native pixman-native qemu-native bison-native"
EXTRA_OECONF_append = " --target-list=${@get_qemu_system_target_list(d)}"
-PACKAGECONFIG ??= "fdt alsa kvm"
+PACKAGECONFIG ??= "fdt alsa kvm slirp"
# Handle distros such as CentOS 5 32-bit that do not have kvm support
PACKAGECONFIG_remove = "${@'kvm' if not os.path.exists('/usr/include/linux/kvm.h') else ''}"
diff --git a/poky/meta/recipes-devtools/qemu/qemu.inc b/poky/meta/recipes-devtools/qemu/qemu.inc
index 368be9979a..8d6c4050f7 100644
--- a/poky/meta/recipes-devtools/qemu/qemu.inc
+++ b/poky/meta/recipes-devtools/qemu/qemu.inc
@@ -111,6 +111,32 @@ SRC_URI = "https://download.qemu.org/${BPN}-${PV}.tar.xz \
file://CVE-2021-4207.patch \
file://CVE-2022-0216-1.patch \
file://CVE-2022-0216-2.patch \
+ file://CVE-2021-3750.patch \
+ file://CVE-2021-3638.patch \
+ file://CVE-2021-20196.patch \
+ file://CVE-2021-3507.patch \
+ file://hw-block-nvme-refactor-nvme_addr_read.patch \
+ file://hw-block-nvme-handle-dma-errors.patch \
+ file://CVE-2021-3929.patch \
+ file://CVE-2022-4144.patch \
+ file://CVE-2020-15859.patch \
+ file://CVE-2020-15469-1.patch \
+ file://CVE-2020-15469-2.patch \
+ file://CVE-2020-15469-3.patch \
+ file://CVE-2020-15469-4.patch \
+ file://CVE-2020-15469-5.patch \
+ file://CVE-2020-15469-6.patch \
+ file://CVE-2020-15469-7.patch \
+ file://CVE-2020-15469-8.patch \
+ file://CVE-2020-35504.patch \
+ file://CVE-2020-35505.patch \
+ file://CVE-2022-26354.patch \
+ file://CVE-2021-3409-1.patch \
+ file://CVE-2021-3409-2.patch \
+ file://CVE-2021-3409-3.patch \
+ file://CVE-2021-3409-4.patch \
+ file://CVE-2021-3409-5.patch \
+ file://hw-display-qxl-Pass-requested-buffer-size-to-qxl_phy.patch \
"
UPSTREAM_CHECK_REGEX = "qemu-(?P<pver>\d+(\.\d+)+)\.tar"
@@ -131,6 +157,11 @@ CVE_CHECK_WHITELIST += "CVE-2018-18438"
# the issue introduced in v5.1.0-rc0
CVE_CHECK_WHITELIST += "CVE-2020-27661"
+# As per https://nvd.nist.gov/vuln/detail/CVE-2023-0664
+# https://bugzilla.redhat.com/show_bug.cgi?id=2167423
+# this issue is specific to Windows.
+CVE_CHECK_WHITELIST += "CVE-2023-0664"
+
COMPATIBLE_HOST_mipsarchn32 = "null"
COMPATIBLE_HOST_mipsarchn64 = "null"
@@ -274,6 +305,11 @@ PACKAGECONFIG[capstone] = "--enable-capstone,--disable-capstone"
# libnfs is currently provided by meta-kodi
PACKAGECONFIG[libnfs] = "--enable-libnfs,--disable-libnfs,libnfs"
PACKAGECONFIG[brlapi] = "--enable-brlapi,--disable-brlapi"
+PACKAGECONFIG[vde] = "--enable-vde,--disable-vde"
+# version 4.2.0 doesn't have an "internal" option for enable-slirp, so use "git" which uses the same configure code path
+PACKAGECONFIG[slirp] = "--enable-slirp=git,--disable-slirp"
+PACKAGECONFIG[rbd] = "--enable-rbd,--disable-rbd"
+PACKAGECONFIG[rdma] = "--enable-rdma,--disable-rdma"
INSANE_SKIP_${PN} = "arch"
diff --git a/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-1.patch b/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-1.patch
new file mode 100644
index 0000000000..20f39f0a26
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-1.patch
@@ -0,0 +1,50 @@
+From 520f26fc6d17b71a43eaf620e834b3bdf316f3d3 Mon Sep 17 00:00:00 2001
+From: Prasad J Pandit <pjp@fedoraproject.org>
+Date: Tue, 11 Aug 2020 17:11:25 +0530
+Subject: [PATCH] hw/pci-host: add pci-intack write method
+
+Add pci-intack mmio write method to avoid NULL pointer dereference
+issue.
+
+Reported-by: Lei Sun <slei.casper@gmail.com>
+Reviewed-by: Li Qiang <liq3ea@gmail.com>
+Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
+Signed-off-by: Prasad J Pandit <pjp@fedoraproject.org>
+Message-Id: <20200811114133.672647-2-ppandit@redhat.com>
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+
+CVE: CVE-2020-15469
+Upstream-Status: Backport [import from ubuntu
+https://git.launchpad.net/ubuntu/+source/qemu/tree/debian/patches/CVE-2020-15469-1.patch?h=ubuntu/focal-security
+Upstream commit https://github.com/qemu/qemu/commit/520f26fc6d17b71a43eaf620e834b3bdf316f3d3 ]
+Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
+---
+ hw/pci-host/prep.c | 8 ++++++++
+ 1 file changed, 8 insertions(+)
+
+--- a/hw/pci-host/prep.c
++++ b/hw/pci-host/prep.c
+@@ -26,6 +26,7 @@
+ #include "qemu/osdep.h"
+ #include "qemu-common.h"
+ #include "qemu/units.h"
++#include "qemu/log.h"
+ #include "qapi/error.h"
+ #include "hw/pci/pci.h"
+ #include "hw/pci/pci_bus.h"
+@@ -119,8 +120,15 @@ static uint64_t raven_intack_read(void *
+ return pic_read_irq(isa_pic);
+ }
+
++static void raven_intack_write(void *opaque, hwaddr addr,
++ uint64_t data, unsigned size)
++{
++ qemu_log_mask(LOG_UNIMP, "%s not implemented\n", __func__);
++}
++
+ static const MemoryRegionOps raven_intack_ops = {
+ .read = raven_intack_read,
++ .write = raven_intack_write,
+ .valid = {
+ .max_access_size = 1,
+ },
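
For context on why a missing callback matters here (a simplified model, not QEMU's actual dispatch code): guest MMIO is routed through a per-region ops structure of read/write function pointers, and before this series several regions registered only one of the two, so a guest access of the other kind called through a NULL pointer. A self-contained C sketch of the pattern and of the stub-handler style of fix used throughout these CVE-2020-15469 patches:

    #include <stdio.h>
    #include <stdint.h>

    /* Simplified stand-in for QEMU's MemoryRegionOps. */
    typedef struct {
        uint64_t (*read)(void *opaque, uint64_t addr, unsigned size);
        void (*write)(void *opaque, uint64_t addr, uint64_t data, unsigned size);
    } MmioOps;

    static uint64_t intack_read(void *opaque, uint64_t addr, unsigned size)
    {
        (void)opaque; (void)addr; (void)size;
        return 0;
    }

    /* The fix: register a write stub that only logs, instead of leaving .write NULL. */
    static void intack_write_stub(void *opaque, uint64_t addr, uint64_t data, unsigned size)
    {
        (void)opaque; (void)addr; (void)data; (void)size;
        fprintf(stderr, "intack: write not implemented\n");
    }

    static void dispatch_write(const MmioOps *ops, uint64_t addr, uint64_t data, unsigned size)
    {
        /* With a NULL .write this call is a NULL-pointer dereference. */
        ops->write(NULL, addr, data, size);
    }

    int main(void)
    {
        MmioOps unsafe = { .read = intack_read, .write = NULL };              /* pre-patch shape */
        MmioOps safe   = { .read = intack_read, .write = intack_write_stub }; /* post-patch shape */

        dispatch_write(&safe, 0x0, 0x1, 1);   /* logs and returns */
        (void)unsafe;                         /* dispatch_write(&unsafe, ...) would crash */
        return 0;
    }
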
diff --git a/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-2.patch b/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-2.patch
new file mode 100644
index 0000000000..d6715d337c
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-2.patch
@@ -0,0 +1,69 @@
+From 4f2a5202a05fc1612954804a2482f07bff105ea2 Mon Sep 17 00:00:00 2001
+From: Prasad J Pandit <pjp@fedoraproject.org>
+Date: Tue, 11 Aug 2020 17:11:26 +0530
+Subject: [PATCH] pci-host: designware: add pcie-msi read method
+
+Add pcie-msi mmio read method to avoid NULL pointer dereference
+issue.
+
+Reported-by: Lei Sun <slei.casper@gmail.com>
+Reviewed-by: Li Qiang <liq3ea@gmail.com>
+Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
+Signed-off-by: Prasad J Pandit <pjp@fedoraproject.org>
+Message-Id: <20200811114133.672647-3-ppandit@redhat.com>
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+
+CVE: CVE-2020-15469
+Upstream-Status: Backport [import from ubuntu https://git.launchpad.net/ubuntu/+source/qemu/tree/debian/patches/CVE-2020-15469-2.patch?h=ubuntu/focal-security Upstream Commit https://github.com/qemu/qemu/commit/4f2a5202a05fc1612954804a2482f07bff105ea2]
+Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
+---
+ hw/pci-host/designware.c | 19 +++++++++++++++++++
+ 1 file changed, 19 insertions(+)
+
+diff --git a/hw/pci-host/designware.c b/hw/pci-host/designware.c
+index f9fb97a..bde3a34 100644
+--- a/hw/pci-host/designware.c
++++ b/hw/pci-host/designware.c
+@@ -21,6 +21,7 @@
+ #include "qemu/osdep.h"
+ #include "qapi/error.h"
+ #include "qemu/module.h"
++#include "qemu/log.h"
+ #include "hw/pci/msi.h"
+ #include "hw/pci/pci_bridge.h"
+ #include "hw/pci/pci_host.h"
+@@ -63,6 +64,23 @@ designware_pcie_root_to_host(DesignwarePCIERoot *root)
+ return DESIGNWARE_PCIE_HOST(bus->parent);
+ }
+
++static uint64_t designware_pcie_root_msi_read(void *opaque, hwaddr addr,
++ unsigned size)
++{
++ /*
++ * Attempts to read from the MSI address are undefined in
++ * the PCI specifications. For this hardware, the datasheet
++ * specifies that a read from the magic address is simply not
++ * intercepted by the MSI controller, and will go out to the
++ * AHB/AXI bus like any other PCI-device-initiated DMA read.
++ * This is not trivial to implement in QEMU, so since
++ * well-behaved guests won't ever ask a PCI device to DMA from
++ * this address we just log the missing functionality.
++ */
++ qemu_log_mask(LOG_UNIMP, "%s not implemented\n", __func__);
++ return 0;
++}
++
+ static void designware_pcie_root_msi_write(void *opaque, hwaddr addr,
+ uint64_t val, unsigned len)
+ {
+@@ -77,6 +95,7 @@ static void designware_pcie_root_msi_write(void *opaque, hwaddr addr,
+ }
+
+ static const MemoryRegionOps designware_pci_host_msi_ops = {
++ .read = designware_pcie_root_msi_read,
+ .write = designware_pcie_root_msi_write,
+ .endianness = DEVICE_LITTLE_ENDIAN,
+ .valid = {
+--
+1.8.3.1
+
diff --git a/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-3.patch b/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-3.patch
new file mode 100644
index 0000000000..85abe8ff32
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-3.patch
@@ -0,0 +1,49 @@
+From 24202d2b561c3b4c48bd28383c8c34b4ac66c2bf Mon Sep 17 00:00:00 2001
+From: Prasad J Pandit <pjp@fedoraproject.org>
+Date: Tue, 11 Aug 2020 17:11:27 +0530
+Subject: [PATCH] vfio: add quirk device write method
+
+Add vfio quirk device mmio write method to avoid NULL pointer
+dereference issue.
+
+Reported-by: Lei Sun <slei.casper@gmail.com>
+Reviewed-by: Li Qiang <liq3ea@gmail.com>
+Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
+Acked-by: Alex Williamson <alex.williamson@redhat.com>
+Signed-off-by: Prasad J Pandit <pjp@fedoraproject.org>
+Message-Id: <20200811114133.672647-4-ppandit@redhat.com>
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+
+CVE: CVE-2020-15469
+Upstream-Status: Backport [import from ubuntu https://git.launchpad.net/ubuntu/+source/qemu/tree/debian/patches/CVE-2020-15469-3.patch?h=ubuntu/focal-security Upstream commit https://github.com/qemu/qemu/commit/24202d2b561c3b4c48bd28383c8c34b4ac66c2bf]
+Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
+---
+ hw/vfio/pci-quirks.c | 8 ++++++++
+ 1 file changed, 8 insertions(+)
+
+--- a/hw/vfio/pci-quirks.c
++++ b/hw/vfio/pci-quirks.c
+@@ -13,6 +13,7 @@
+ #include "qemu/osdep.h"
+ #include "exec/memop.h"
+ #include "qemu/units.h"
++#include "qemu/log.h"
+ #include "qemu/error-report.h"
+ #include "qemu/main-loop.h"
+ #include "qemu/module.h"
+@@ -278,8 +279,15 @@ static uint64_t vfio_ati_3c3_quirk_read(
+ return data;
+ }
+
++static void vfio_ati_3c3_quirk_write(void *opaque, hwaddr addr,
++ uint64_t data, unsigned size)
++{
++ qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid access\n", __func__);
++}
++
+ static const MemoryRegionOps vfio_ati_3c3_quirk = {
+ .read = vfio_ati_3c3_quirk_read,
++ .write = vfio_ati_3c3_quirk_write,
+ .endianness = DEVICE_LITTLE_ENDIAN,
+ };
+
diff --git a/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-4.patch b/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-4.patch
new file mode 100644
index 0000000000..52fac8a051
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-4.patch
@@ -0,0 +1,53 @@
+From f867cebaedbc9c43189f102e4cdfdff05e88df7f Mon Sep 17 00:00:00 2001
+From: Prasad J Pandit <pjp@fedoraproject.org>
+Date: Tue, 11 Aug 2020 17:11:28 +0530
+Subject: [PATCH] prep: add ppc-parity write method
+
+Add ppc-parity mmio write method to avoid NULL pointer dereference
+issue.
+
+Reported-by: Lei Sun <slei.casper@gmail.com>
+Acked-by: David Gibson <david@gibson.dropbear.id.au>
+Signed-off-by: Prasad J Pandit <pjp@fedoraproject.org>
+Reviewed-by: Li Qiang <liq3ea@gmail.com>
+Message-Id: <20200811114133.672647-5-ppandit@redhat.com>
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+
+CVE: CVE-2020-15469
+Upstream-Status: Backport [import from ubuntu https://git.launchpad.net/ubuntu/+source/qemu/tree/debian/patches/CVE-2020-15469-4.patch?h=ubuntu/focal-security Upstream commit https://github.com/qemu/qemu/commit/f867cebaedbc9c43189f102e4cdfdff05e88df7f]
+Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
+---
+ hw/ppc/prep_systemio.c | 8 ++++++++
+ 1 file changed, 8 insertions(+)
+
+diff --git a/hw/ppc/prep_systemio.c b/hw/ppc/prep_systemio.c
+index 4e48ef2..b2bd783 100644
+--- a/hw/ppc/prep_systemio.c
++++ b/hw/ppc/prep_systemio.c
+@@ -23,6 +23,7 @@
+ */
+
+ #include "qemu/osdep.h"
++#include "qemu/log.h"
+ #include "hw/irq.h"
+ #include "hw/isa/isa.h"
+ #include "hw/qdev-properties.h"
+@@ -235,8 +236,15 @@ static uint64_t ppc_parity_error_readl(void *opaque, hwaddr addr,
+ return val;
+ }
+
++static void ppc_parity_error_writel(void *opaque, hwaddr addr,
++ uint64_t data, unsigned size)
++{
++ qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid access\n", __func__);
++}
++
+ static const MemoryRegionOps ppc_parity_error_ops = {
+ .read = ppc_parity_error_readl,
++ .write = ppc_parity_error_writel,
+ .valid = {
+ .min_access_size = 4,
+ .max_access_size = 4,
+--
+1.8.3.1
+
diff --git a/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-5.patch b/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-5.patch
new file mode 100644
index 0000000000..49c6c5e3e2
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-5.patch
@@ -0,0 +1,53 @@
+From b5bf601f364e1a14ca4c3276f88dfec024acf613 Mon Sep 17 00:00:00 2001
+From: Prasad J Pandit <pjp@fedoraproject.org>
+Date: Tue, 11 Aug 2020 17:11:29 +0530
+Subject: [PATCH] nvram: add nrf51_soc flash read method
+
+Add nrf51_soc mmio read method to avoid NULL pointer dereference
+issue.
+
+Reported-by: Lei Sun <slei.casper@gmail.com>
+Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
+Signed-off-by: Prasad J Pandit <pjp@fedoraproject.org>
+Reviewed-by: Li Qiang <liq3ea@gmail.com>
+Message-Id: <20200811114133.672647-6-ppandit@redhat.com>
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+
+CVE: CVE-2020-15469
+Upstream-Status: Backport [import from ubuntu https://git.launchpad.net/ubuntu/+source/qemu/tree/debian/patches/CVE-2020-15469-5.patch?h=ubuntu/focal-security Upstream commit https://github.com/qemu/qemu/commit/b5bf601f364e1a14ca4c3276f88dfec024acf613 ]
+Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
+---
+ hw/nvram/nrf51_nvm.c | 10 ++++++++++
+ 1 file changed, 10 insertions(+)
+
+diff --git a/hw/nvram/nrf51_nvm.c b/hw/nvram/nrf51_nvm.c
+index f2283c1..7b3460d 100644
+--- a/hw/nvram/nrf51_nvm.c
++++ b/hw/nvram/nrf51_nvm.c
+@@ -273,6 +273,15 @@ static const MemoryRegionOps io_ops = {
+ .endianness = DEVICE_LITTLE_ENDIAN,
+ };
+
++static uint64_t flash_read(void *opaque, hwaddr offset, unsigned size)
++{
++ /*
++ * This is a rom_device MemoryRegion which is always in
++ * romd_mode (we never put it in MMIO mode), so reads always
++ * go directly to RAM and never come here.
++ */
++ g_assert_not_reached();
++}
+
+ static void flash_write(void *opaque, hwaddr offset, uint64_t value,
+ unsigned int size)
+@@ -300,6 +309,7 @@ static void flash_write(void *opaque, hwaddr offset, uint64_t value,
+
+
+ static const MemoryRegionOps flash_ops = {
++ .read = flash_read,
+ .write = flash_write,
+ .valid.min_access_size = 4,
+ .valid.max_access_size = 4,
+--
+1.8.3.1
+
diff --git a/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-6.patch b/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-6.patch
new file mode 100644
index 0000000000..115be68295
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-6.patch
@@ -0,0 +1,61 @@
+Backport of:
+
+From 921604e175b8ec06c39503310e7b3ec1e3eafe9e Mon Sep 17 00:00:00 2001
+From: Prasad J Pandit <pjp@fedoraproject.org>
+Date: Tue, 11 Aug 2020 17:11:30 +0530
+Subject: [PATCH] spapr_pci: add spapr msi read method
+
+Add spapr msi mmio read method to avoid NULL pointer dereference
+issue.
+
+Reported-by: Lei Sun <slei.casper@gmail.com>
+Acked-by: David Gibson <david@gibson.dropbear.id.au>
+Reviewed-by: Li Qiang <liq3ea@gmail.com>
+Signed-off-by: Prasad J Pandit <pjp@fedoraproject.org>
+Message-Id: <20200811114133.672647-7-ppandit@redhat.com>
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+
+CVE: CVE-2020-15469
+Upstream-Status: Backport [import from ubuntu https://git.launchpad.net/ubuntu/+source/qemu/tree/debian/patches/CVE-2020-15469-6.patch?h=ubuntu/focal-security Upstream commit https://github.com/qemu/qemu/commit/921604e175b8ec06c39503310e7b3ec1e3eafe9e]
+Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
+---
+ hw/ppc/spapr_pci.c | 14 ++++++++++++--
+ 1 file changed, 12 insertions(+), 2 deletions(-)
+
+--- a/hw/ppc/spapr_pci.c
++++ b/hw/ppc/spapr_pci.c
+@@ -52,6 +52,7 @@
+ #include "sysemu/kvm.h"
+ #include "sysemu/hostmem.h"
+ #include "sysemu/numa.h"
++#include "qemu/log.h"
+
+ /* Copied from the kernel arch/powerpc/platforms/pseries/msi.c */
+ #define RTAS_QUERY_FN 0
+@@ -738,6 +739,12 @@ static PCIINTxRoute spapr_route_intx_pin
+ return route;
+ }
+
++static uint64_t spapr_msi_read(void *opaque, hwaddr addr, unsigned size)
++{
++ qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid access\n", __func__);
++ return 0;
++}
++
+ /*
+ * MSI/MSIX memory region implementation.
+ * The handler handles both MSI and MSIX.
+@@ -755,8 +762,11 @@ static void spapr_msi_write(void *opaque
+ }
+
+ static const MemoryRegionOps spapr_msi_ops = {
+- /* There is no .read as the read result is undefined by PCI spec */
+- .read = NULL,
++ /*
++ * .read result is undefined by PCI spec.
++ * define .read method to avoid assert failure in memory_region_init_io
++ */
++ .read = spapr_msi_read,
+ .write = spapr_msi_write,
+ .endianness = DEVICE_LITTLE_ENDIAN
+ };
diff --git a/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-7.patch b/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-7.patch
new file mode 100644
index 0000000000..7d8ec32251
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-7.patch
@@ -0,0 +1,50 @@
+From 2c9fb3b784000c1df32231e1c2464bb2e3fc4620 Mon Sep 17 00:00:00 2001
+From: Prasad J Pandit <pjp@fedoraproject.org>
+Date: Tue, 11 Aug 2020 17:11:31 +0530
+Subject: [PATCH] tz-ppc: add dummy read/write methods
+
+Add tz-ppc-dummy mmio read/write methods to avoid assert failure
+during initialisation.
+
+Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
+Signed-off-by: Prasad J Pandit <pjp@fedoraproject.org>
+Reviewed-by: Li Qiang <liq3ea@gmail.com>
+Message-Id: <20200811114133.672647-8-ppandit@redhat.com>
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+
+CVE: CVE-2020-15469
+Upstream-Status: Backport [import from ubuntu https://git.launchpad.net/ubuntu/+source/qemu/tree/debian/patches/CVE-2020-15469-7.patch?h=ubuntu/focal-security Upstream commit https://github.com/qemu/qemu/commit/2c9fb3b784000c1df32231e1c2464bb2e3fc4620 ]
+Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
+---
+ hw/misc/tz-ppc.c | 14 ++++++++++++++
+ 1 file changed, 14 insertions(+)
+
+diff --git a/hw/misc/tz-ppc.c b/hw/misc/tz-ppc.c
+index 6431257..36495c6 100644
+--- a/hw/misc/tz-ppc.c
++++ b/hw/misc/tz-ppc.c
+@@ -196,7 +196,21 @@ static bool tz_ppc_dummy_accepts(void *opaque, hwaddr addr,
+ g_assert_not_reached();
+ }
+
++static uint64_t tz_ppc_dummy_read(void *opaque, hwaddr addr, unsigned size)
++{
++ g_assert_not_reached();
++}
++
++static void tz_ppc_dummy_write(void *opaque, hwaddr addr,
++ uint64_t data, unsigned size)
++{
++ g_assert_not_reached();
++}
++
+ static const MemoryRegionOps tz_ppc_dummy_ops = {
++ /* define r/w methods to avoid assert failure in memory_region_init_io */
++ .read = tz_ppc_dummy_read,
++ .write = tz_ppc_dummy_write,
+ .valid.accepts = tz_ppc_dummy_accepts,
+ };
+
+--
+1.8.3.1
+
diff --git a/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-8.patch b/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-8.patch
new file mode 100644
index 0000000000..7857ba266e
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15469-8.patch
@@ -0,0 +1,44 @@
+From 735754aaa15a6ed46db51fd731e88331c446ea54 Mon Sep 17 00:00:00 2001
+From: Prasad J Pandit <pjp@fedoraproject.org>
+Date: Tue, 11 Aug 2020 17:11:32 +0530
+Subject: [PATCH] imx7-ccm: add digprog mmio write method
+
+Add digprog mmio write method to avoid assert failure during
+initialisation.
+
+Reviewed-by: Li Qiang <liq3ea@gmail.com>
+Signed-off-by: Prasad J Pandit <pjp@fedoraproject.org>
+Message-Id: <20200811114133.672647-9-ppandit@redhat.com>
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+
+CVE: CVE-2020-15469
+Upstream-Status: Backport [import from ubuntu https://git.launchpad.net/ubuntu/+source/qemu/tree/debian/patches/CVE-2020-15469-8.patch?h=ubuntu/focal-security Upstream commit https://github.com/qemu/qemu/commit/735754aaa15a6ed46db51fd731e88331c446ea54]
+Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
+---
+ hw/misc/imx7_ccm.c | 8 ++++++++
+ 1 file changed, 8 insertions(+)
+
+diff --git a/hw/misc/imx7_ccm.c b/hw/misc/imx7_ccm.c
+index 02fc1ae..075159e 100644
+--- a/hw/misc/imx7_ccm.c
++++ b/hw/misc/imx7_ccm.c
+@@ -131,8 +131,16 @@ static const struct MemoryRegionOps imx7_set_clr_tog_ops = {
+ },
+ };
+
++static void imx7_digprog_write(void *opaque, hwaddr addr,
++ uint64_t data, unsigned size)
++{
++ qemu_log_mask(LOG_GUEST_ERROR,
++ "Guest write to read-only ANALOG_DIGPROG register\n");
++}
++
+ static const struct MemoryRegionOps imx7_digprog_ops = {
+ .read = imx7_set_clr_tog_read,
++ .write = imx7_digprog_write,
+ .endianness = DEVICE_NATIVE_ENDIAN,
+ .impl = {
+ .min_access_size = 4,
+--
+1.8.3.1
+
diff --git a/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15859.patch b/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15859.patch
new file mode 100644
index 0000000000..0f43adeea8
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-15859.patch
@@ -0,0 +1,39 @@
+From 22dc8663d9fc7baa22100544c600b6285a63c7a3 Mon Sep 17 00:00:00 2001
+From: Jason Wang <jasowang@redhat.com>
+Date: Wed, 22 Jul 2020 16:57:46 +0800
+Subject: [PATCH] net: forbid the reentrant RX
+
+The memory API allows DMA into NIC's MMIO area. This means the NIC's
+RX routine must be reentrant. Instead of auditing all the NICs, we can
+simply detect the reentrancy and return early. The queue->delivering
+is set and cleared by qemu_net_queue_deliver() for other queue helpers
+to know whether delivery is ongoing (the NIC's receive is being
+called). We can check it and return early in qemu_net_queue_flush() to
+forbid reentrant RX.
+
+Signed-off-by: Jason Wang <jasowang@redhat.com>
+
+CVE: CVE-2020-15859
+Upstream-Status: Backport [import from ubuntu https://git.launchpad.net/ubuntu/+source/qemu/tree/debian/patches/ubuntu/CVE-2020-15859.patch?h=ubuntu/focal-security Upstream commit https://github.com/qemu/qemu/commit/22dc8663d9fc7baa22100544c600b6285a63c7a3 ]
+Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
+---
+ net/queue.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+diff --git a/net/queue.c b/net/queue.c
+index 0164727..19e32c8 100644
+--- a/net/queue.c
++++ b/net/queue.c
+@@ -250,6 +250,9 @@ void qemu_net_queue_purge(NetQueue *queue, NetClientState *from)
+
+ bool qemu_net_queue_flush(NetQueue *queue)
+ {
++ if (queue->delivering)
++ return false;
++
+ while (!QTAILQ_EMPTY(&queue->packets)) {
+ NetPacket *packet;
+ int ret;
+--
+1.8.3.1
+
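
The three added lines are a standard reentrancy guard: qemu_net_queue_deliver() sets queue->delivering while a packet is being handed to the NIC, so a flush triggered from inside that delivery (for instance by the NIC DMA-ing into its own MMIO window) now returns early instead of recursing into the queue. A generic, self-contained C sketch of the idiom (not QEMU code):

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        bool delivering;
        int pending;
    } Queue;

    static bool queue_flush(Queue *q);

    /* Delivering one packet may, via the device model, try to flush the queue again. */
    static void deliver_one(Queue *q)
    {
        q->delivering = true;
        queue_flush(q);            /* reentrant call: rejected by the guard */
        q->delivering = false;
        q->pending--;
    }

    static bool queue_flush(Queue *q)
    {
        if (q->delivering)          /* the guard added by the patch */
            return false;
        while (q->pending > 0)
            deliver_one(q);
        return true;
    }

    int main(void)
    {
        Queue q = { .delivering = false, .pending = 3 };
        printf("flush ok: %d, left: %d\n", queue_flush(&q), q.pending);
        return 0;
    }
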
diff --git a/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-35504.patch b/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-35504.patch
new file mode 100644
index 0000000000..97d32589d8
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-35504.patch
@@ -0,0 +1,51 @@
+Backport of:
+
+From 0db895361b8a82e1114372ff9f4857abea605701 Mon Sep 17 00:00:00 2001
+From: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
+Date: Wed, 7 Apr 2021 20:57:50 +0100
+Subject: [PATCH] esp: always check current_req is not NULL before use in DMA
+ callbacks
+
+After issuing a SCSI command the SCSI layer can call the SCSIBusInfo .cancel
+callback which resets both current_req and current_dev to NULL. If any data
+is left in the transfer buffer (async_len != 0) then the next TI (Transfer
+Information) command will attempt to reference the NULL pointer causing a
+segfault.
+
+Buglink: https://bugs.launchpad.net/qemu/+bug/1910723
+Buglink: https://bugs.launchpad.net/qemu/+bug/1909247
+Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
+Tested-by: Alexander Bulekov <alxndr@bu.edu>
+Message-Id: <20210407195801.685-2-mark.cave-ayland@ilande.co.uk>
+
+CVE: CVE-2020-35504
+Upstream-Status: Backport [import from ubuntu https://git.launchpad.net/ubuntu/+source/qemu/tree/debian/patches/CVE-2020-35504.patch?h=ubuntu/focal-security Upstream commit https://github.com/qemu/qemu/commit/0db895361b8a82e1114372ff9f4857abea605701 ]
+Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
+---
+ hw/scsi/esp.c | 19 ++++++++++++++-----
+ 1 file changed, 14 insertions(+), 5 deletions(-)
+
+--- a/hw/scsi/esp.c
++++ b/hw/scsi/esp.c
+@@ -362,6 +362,11 @@ static void do_dma_pdma_cb(ESPState *s)
+ do_cmd(s, s->cmdbuf);
+ return;
+ }
++
++ if (!s->current_req) {
++ return;
++ }
++
+ s->dma_left -= len;
+ s->async_buf += len;
+ s->async_len -= len;
+@@ -415,6 +420,9 @@ static void esp_do_dma(ESPState *s)
+ do_cmd(s, s->cmdbuf);
+ return;
+ }
++ if (!s->current_req) {
++ return;
++ }
+ if (s->async_len == 0) {
+ /* Defer until data is available. */
+ return;
diff --git a/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-35505.patch b/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-35505.patch
new file mode 100644
index 0000000000..c5ff6e89ff
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/CVE-2020-35505.patch
@@ -0,0 +1,42 @@
+Backport of:
+
+From 99545751734035b76bd372c4e7215bb337428d89 Mon Sep 17 00:00:00 2001
+From: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
+Date: Wed, 7 Apr 2021 20:57:55 +0100
+Subject: [PATCH] esp: ensure cmdfifo is not empty and current_dev is non-NULL
+MIME-Version: 1.0
+Content-Type: text/plain; charset=utf8
+Content-Transfer-Encoding: 8bit
+
+When about to execute a SCSI command, ensure that cmdfifo is not empty and
+current_dev is non-NULL. This can happen if the guest tries to execute a TI
+(Transfer Information) command without issuing one of the select commands
+first.
+
+Buglink: https://bugs.launchpad.net/qemu/+bug/1910723
+Buglink: https://bugs.launchpad.net/qemu/+bug/1909247
+Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
+Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
+Tested-by: Alexander Bulekov <alxndr@bu.edu>
+Message-Id: <20210407195801.685-7-mark.cave-ayland@ilande.co.uk>
+
+CVE: CVE-2020-35505
+Upstream-Status: Backport [import from ubuntu https://git.launchpad.net/ubuntu/+source/qemu/tree/debian/patches/CVE-2020-35505.patch?h=ubuntu/focal-security Upstream commit https://github.com/qemu/qemu/commit/99545751734035b76bd372c4e7215bb337428d89 ]
+Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
+---
+ hw/scsi/esp.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+--- a/hw/scsi/esp.c
++++ b/hw/scsi/esp.c
+@@ -193,6 +193,10 @@ static void do_busid_cmd(ESPState *s, ui
+
+ trace_esp_do_busid_cmd(busid);
+ lun = busid & 7;
++
++ if (!s->current_dev) {
++ return;
++ }
+ current_lun = scsi_device_find(&s->bus, 0, s->current_dev->id, lun);
+ s->current_req = scsi_req_new(current_lun, 0, lun, buf, s);
+ datalen = scsi_req_enqueue(s->current_req);
diff --git a/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-20196.patch b/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-20196.patch
new file mode 100644
index 0000000000..e9b815740f
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-20196.patch
@@ -0,0 +1,62 @@
+From 94608c59045791dfd35102bc59b792e96f2cfa30 Mon Sep 17 00:00:00 2001
+From: Vivek Kumbhar <vkumbhar@mvista.com>
+Date: Tue, 29 Nov 2022 15:57:13 +0530
+Subject: [PATCH] CVE-2021-20196
+
+Upstream-Status: Backport [https://gitlab.com/qemu-project/qemu/-/commit/1ab95af033a419e7a64e2d58e67dd96b20af5233]
+CVE: CVE-2021-20196
+Signed-off-by: Vivek Kumbhar <vkumbhar@mvista.com>
+
+hw/block/fdc: Kludge missing floppy drive to fix CVE-2021-20196
+
+Guest might select another drive on the bus by setting the
+DRIVE_SEL bit of the DIGITAL OUTPUT REGISTER (DOR).
+The current controller model doesn't expect a BlockBackend
+to be NULL. A simple way to fix CVE-2021-20196 is to create
+an empty BlockBackend when it is missing. All further
+accesses will be safely handled, and the controller state
+machines keep behaving correctly.
+---
+ hw/block/fdc.c | 19 ++++++++++++++++++-
+ 1 file changed, 18 insertions(+), 1 deletion(-)
+
+diff --git a/hw/block/fdc.c b/hw/block/fdc.c
+index ac5d31e8..e128e975 100644
+--- a/hw/block/fdc.c
++++ b/hw/block/fdc.c
+@@ -58,6 +58,11 @@
+ } \
+ } while (0)
+
++/* Anonymous BlockBackend for empty drive */
++static BlockBackend *blk_create_empty_drive(void)
++{
++ return blk_new(qemu_get_aio_context(), 0, BLK_PERM_ALL);
++}
+
+ /********************************************************/
+ /* qdev floppy bus */
+@@ -1356,7 +1361,19 @@ static FDrive *get_drv(FDCtrl *fdctrl, int unit)
+
+ static FDrive *get_cur_drv(FDCtrl *fdctrl)
+ {
+- return get_drv(fdctrl, fdctrl->cur_drv);
++ FDrive *cur_drv = get_drv(fdctrl, fdctrl->cur_drv);
++
++ if (!cur_drv->blk) {
++ /*
++ * Kludge: empty drive line selected. Create an anonymous
++ * BlockBackend to avoid NULL deref with various BlockBackend
++ * API calls within this model (CVE-2021-20196).
++ * Due to the controller QOM model limitations, we don't
++ * attach the created to the controller device.
++ */
++ cur_drv->blk = blk_create_empty_drive();
++ }
++ return cur_drv;
+ }
+
+ /* Status A register : 0x00 (read-only) */
+--
+2.25.1
+
diff --git a/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3409-1.patch b/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3409-1.patch
new file mode 100644
index 0000000000..d53383247e
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3409-1.patch
@@ -0,0 +1,85 @@
+From b263d8f928001b5cfa2a993ea43b7a5b3a1811e8 Mon Sep 17 00:00:00 2001
+From: Bin Meng <bmeng.cn@gmail.com>
+Date: Wed, 3 Mar 2021 20:26:35 +0800
+Subject: [PATCH] hw/sd: sdhci: Don't transfer any data when command time out
+MIME-Version: 1.0
+Content-Type: text/plain; charset=utf8
+Content-Transfer-Encoding: 8bit
+
+At the end of sdhci_send_command(), it starts a data transfer if the
+command register indicates data is associated. But the data transfer
+should only be initiated when the command execution has succeeded.
+
+With this fix, the following reproducer:
+
+outl 0xcf8 0x80001810
+outl 0xcfc 0xe1068000
+outl 0xcf8 0x80001804
+outw 0xcfc 0x7
+write 0xe106802c 0x1 0x0f
+write 0xe1068004 0xc 0x2801d10101fffffbff28a384
+write 0xe106800c 0x1f 0x9dacbbcad9e8f7061524334251606f7e8d9cabbac9d8e7f60514233241505f
+write 0xe1068003 0x28 0x80d000251480d000252280d000253080d000253e80d000254c80d000255a80d000256880d0002576
+write 0xe1068003 0x1 0xfe
+
+cannot be reproduced with the following QEMU command line:
+
+$ qemu-system-x86_64 -nographic -M pc-q35-5.0 \
+ -device sdhci-pci,sd-spec-version=3 \
+ -drive if=sd,index=0,file=null-co://,format=raw,id=mydrive \
+ -device sd-card,drive=mydrive \
+ -monitor none -serial none -qtest stdio
+
+Cc: qemu-stable@nongnu.org
+Fixes: CVE-2020-17380
+Fixes: CVE-2020-25085
+Fixes: CVE-2021-3409
+Fixes: d7dfca0807a0 ("hw/sdhci: introduce standard SD host controller")
+Reported-by: Alexander Bulekov <alxndr@bu.edu>
+Reported-by: Cornelius Aschermann (Ruhr-Universität Bochum)
+Reported-by: Sergej Schumilo (Ruhr-Universität Bochum)
+Reported-by: Simon Wörner (Ruhr-Universität Bochum)
+Buglink: https://bugs.launchpad.net/qemu/+bug/1892960
+Buglink: https://bugs.launchpad.net/qemu/+bug/1909418
+Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1928146
+Acked-by: Alistair Francis <alistair.francis@wdc.com>
+Tested-by: Alexander Bulekov <alxndr@bu.edu>
+Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
+Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
+Message-Id: <20210303122639.20004-2-bmeng.cn@gmail.com>
+Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
+
+CVE: CVE-2021-3409 CVE-2020-17380
+Upstream-Status: Backport [import from ubuntu https://git.launchpad.net/ubuntu/+source/qemu/tree/debian/patches/CVE-2021-3409-1.patch?h=ubuntu/focal-security Upstream commit https://github.com/qemu/qemu/commit/b263d8f928001b5cfa2a993ea43b7a5b3a1811e8 ]
+Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
+---
+ hw/sd/sdhci.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+--- a/hw/sd/sdhci.c
++++ b/hw/sd/sdhci.c
+@@ -316,6 +316,7 @@ static void sdhci_send_command(SDHCIStat
+ SDRequest request;
+ uint8_t response[16];
+ int rlen;
++ bool timeout = false;
+
+ s->errintsts = 0;
+ s->acmd12errsts = 0;
+@@ -339,6 +340,7 @@ static void sdhci_send_command(SDHCIStat
+ trace_sdhci_response16(s->rspreg[3], s->rspreg[2],
+ s->rspreg[1], s->rspreg[0]);
+ } else {
++ timeout = true;
+ trace_sdhci_error("timeout waiting for command response");
+ if (s->errintstsen & SDHC_EISEN_CMDTIMEOUT) {
+ s->errintsts |= SDHC_EIS_CMDTIMEOUT;
+@@ -359,7 +361,7 @@ static void sdhci_send_command(SDHCIStat
+
+ sdhci_update_irq(s);
+
+- if (s->blksize && (s->cmdreg & SDHC_CMD_DATA_PRESENT)) {
++ if (!timeout && s->blksize && (s->cmdreg & SDHC_CMD_DATA_PRESENT)) {
+ s->data_count = 0;
+ sdhci_data_transfer(s);
+ }
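
The point of the new timeout flag is that the data phase may only start when the command itself succeeded; previously sdhci_data_transfer() was kicked off whenever the command register advertised data, even after a response timeout. A simplified, self-contained C sketch of that control flow (invented command numbers, not the real SDHCI model):

    #include <stdbool.h>
    #include <stdio.h>

    #define CMD_DATA_PRESENT 0x20

    static bool device_respond(unsigned cmd)
    {
        /* Stand-in for the SD card answering; pretend odd commands time out. */
        return (cmd % 2) == 0;
    }

    static void data_transfer(void)
    {
        puts("starting data transfer");
    }

    static void send_command(unsigned cmdreg, unsigned blksize)
    {
        bool timeout = false;

        if (!device_respond(cmdreg)) {
            timeout = true;                 /* command failed: no response */
            puts("timeout waiting for command response");
        }

        /* Pre-patch: the transfer started whenever DATA_PRESENT was set.
         * Post-patch: it additionally requires the command not to have timed out. */
        if (!timeout && blksize && (cmdreg & CMD_DATA_PRESENT))
            data_transfer();
    }

    int main(void)
    {
        send_command(CMD_DATA_PRESENT | 0x12, 512);   /* responds -> transfer starts */
        send_command(CMD_DATA_PRESENT | 0x13, 512);   /* times out -> no transfer */
        return 0;
    }
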
diff --git a/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3409-2.patch b/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3409-2.patch
new file mode 100644
index 0000000000..dc00f76ec9
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3409-2.patch
@@ -0,0 +1,103 @@
+From 8be45cc947832b3c02144c9d52921f499f2d77fe Mon Sep 17 00:00:00 2001
+From: Bin Meng <bmeng.cn@gmail.com>
+Date: Wed, 3 Mar 2021 20:26:36 +0800
+Subject: [PATCH] hw/sd: sdhci: Don't write to SDHC_SYSAD register when
+ transfer is in progress
+MIME-Version: 1.0
+Content-Type: text/plain; charset=utf8
+Content-Transfer-Encoding: 8bit
+
+Per "SD Host Controller Standard Specification Version 7.00"
+chapter 2.2.1 SDMA System Address Register:
+
+This register can be accessed only if no transaction is executing
+(i.e., after a transaction has stopped).
+
+With this fix, the following reproducer:
+
+outl 0xcf8 0x80001010
+outl 0xcfc 0xfbefff00
+outl 0xcf8 0x80001001
+outl 0xcfc 0x06000000
+write 0xfbefff2c 0x1 0x05
+write 0xfbefff0f 0x1 0x37
+write 0xfbefff0a 0x1 0x01
+write 0xfbefff0f 0x1 0x29
+write 0xfbefff0f 0x1 0x02
+write 0xfbefff0f 0x1 0x03
+write 0xfbefff04 0x1 0x01
+write 0xfbefff05 0x1 0x01
+write 0xfbefff07 0x1 0x02
+write 0xfbefff0c 0x1 0x33
+write 0xfbefff0e 0x1 0x20
+write 0xfbefff0f 0x1 0x00
+write 0xfbefff2a 0x1 0x01
+write 0xfbefff0c 0x1 0x00
+write 0xfbefff03 0x1 0x00
+write 0xfbefff05 0x1 0x00
+write 0xfbefff2a 0x1 0x02
+write 0xfbefff0c 0x1 0x32
+write 0xfbefff01 0x1 0x01
+write 0xfbefff02 0x1 0x01
+write 0xfbefff03 0x1 0x01
+
+cannot be reproduced with the following QEMU command line:
+
+$ qemu-system-x86_64 -nographic -machine accel=qtest -m 512M \
+ -nodefaults -device sdhci-pci,sd-spec-version=3 \
+ -drive if=sd,index=0,file=null-co://,format=raw,id=mydrive \
+ -device sd-card,drive=mydrive -qtest stdio
+
+Cc: qemu-stable@nongnu.org
+Fixes: CVE-2020-17380
+Fixes: CVE-2020-25085
+Fixes: CVE-2021-3409
+Fixes: d7dfca0807a0 ("hw/sdhci: introduce standard SD host controller")
+Reported-by: Alexander Bulekov <alxndr@bu.edu>
+Reported-by: Cornelius Aschermann (Ruhr-Universität Bochum)
+Reported-by: Sergej Schumilo (Ruhr-Universität Bochum)
+Reported-by: Simon Wörner (Ruhr-Universität Bochum)
+Buglink: https://bugs.launchpad.net/qemu/+bug/1892960
+Buglink: https://bugs.launchpad.net/qemu/+bug/1909418
+Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1928146
+Tested-by: Alexander Bulekov <alxndr@bu.edu>
+Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
+Message-Id: <20210303122639.20004-3-bmeng.cn@gmail.com>
+Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
+
+CVE: CVE-2021-3409 CVE-2020-17380
+Upstream-Status: Backport [import from ubuntu https://git.launchpad.net/ubuntu/+source/qemu/tree/debian/patches/CVE-2021-3409-2.patch?h=ubuntu/focal-security Upstream commit https://github.com/qemu/qemu/commit/8be45cc947832b3c02144c9d52921f499f2d77fe ]
+Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
+---
+ hw/sd/sdhci.c | 20 +++++++++++---------
+ 1 file changed, 11 insertions(+), 9 deletions(-)
+
+--- a/hw/sd/sdhci.c
++++ b/hw/sd/sdhci.c
+@@ -1117,15 +1117,17 @@ sdhci_write(void *opaque, hwaddr offset,
+
+ switch (offset & ~0x3) {
+ case SDHC_SYSAD:
+- s->sdmasysad = (s->sdmasysad & mask) | value;
+- MASKED_WRITE(s->sdmasysad, mask, value);
+- /* Writing to last byte of sdmasysad might trigger transfer */
+- if (!(mask & 0xFF000000) && TRANSFERRING_DATA(s->prnsts) && s->blkcnt &&
+- s->blksize && SDHC_DMA_TYPE(s->hostctl1) == SDHC_CTRL_SDMA) {
+- if (s->trnmod & SDHC_TRNS_MULTI) {
+- sdhci_sdma_transfer_multi_blocks(s);
+- } else {
+- sdhci_sdma_transfer_single_block(s);
++ if (!TRANSFERRING_DATA(s->prnsts)) {
++ s->sdmasysad = (s->sdmasysad & mask) | value;
++ MASKED_WRITE(s->sdmasysad, mask, value);
++ /* Writing to last byte of sdmasysad might trigger transfer */
++ if (!(mask & 0xFF000000) && s->blkcnt && s->blksize &&
++ SDHC_DMA_TYPE(s->hostctl1) == SDHC_CTRL_SDMA) {
++ if (s->trnmod & SDHC_TRNS_MULTI) {
++ sdhci_sdma_transfer_multi_blocks(s);
++ } else {
++ sdhci_sdma_transfer_single_block(s);
++ }
+ }
+ }
+ break;
diff --git a/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3409-3.patch b/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3409-3.patch
new file mode 100644
index 0000000000..d06ac0ed3c
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3409-3.patch
@@ -0,0 +1,71 @@
+Backport of:
+
+From bc6f28995ff88f5d82c38afcfd65406f0ae375aa Mon Sep 17 00:00:00 2001
+From: Bin Meng <bmeng.cn@gmail.com>
+Date: Wed, 3 Mar 2021 20:26:37 +0800
+Subject: [PATCH] hw/sd: sdhci: Correctly set the controller status for ADMA
+MIME-Version: 1.0
+Content-Type: text/plain; charset=utf8
+Content-Transfer-Encoding: 8bit
+
+When an ADMA transfer is started, the codes forget to set the
+controller status to indicate a transfer is in progress.
+
+With this fix, the following 2 reproducers:
+
+https://paste.debian.net/plain/1185136
+https://paste.debian.net/plain/1185141
+
+cannot be reproduced with the following QEMU command line:
+
+$ qemu-system-x86_64 -nographic -machine accel=qtest -m 512M \
+ -nodefaults -device sdhci-pci,sd-spec-version=3 \
+ -drive if=sd,index=0,file=null-co://,format=raw,id=mydrive \
+ -device sd-card,drive=mydrive -qtest stdio
+
+Cc: qemu-stable@nongnu.org
+Fixes: CVE-2020-17380
+Fixes: CVE-2020-25085
+Fixes: CVE-2021-3409
+Fixes: d7dfca0807a0 ("hw/sdhci: introduce standard SD host controller")
+Reported-by: Alexander Bulekov <alxndr@bu.edu>
+Reported-by: Cornelius Aschermann (Ruhr-Universität Bochum)
+Reported-by: Sergej Schumilo (Ruhr-Universität Bochum)
+Reported-by: Simon Wörner (Ruhr-Universität Bochum)
+Buglink: https://bugs.launchpad.net/qemu/+bug/1892960
+Buglink: https://bugs.launchpad.net/qemu/+bug/1909418
+Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1928146
+Tested-by: Alexander Bulekov <alxndr@bu.edu>
+Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
+Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
+Message-Id: <20210303122639.20004-4-bmeng.cn@gmail.com>
+Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
+
+CVE: CVE-2021-3409 CVE-2020-17380
+Upstream-Status: Backport [import from ubuntu https://git.launchpad.net/ubuntu/+source/qemu/tree/debian/patches/CVE-2021-3409-3.patch?h=ubuntu/focal-security Upstream commit https://github.com/qemu/qemu/commit/bc6f28995ff88f5d82c38afcfd65406f0ae375aa ]
+Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
+---
+ hw/sd/sdhci.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+--- a/hw/sd/sdhci.c
++++ b/hw/sd/sdhci.c
+@@ -776,8 +776,9 @@ static void sdhci_do_adma(SDHCIState *s)
+
+ switch (dscr.attr & SDHC_ADMA_ATTR_ACT_MASK) {
+ case SDHC_ADMA_ATTR_ACT_TRAN: /* data transfer */
+-
++ s->prnsts |= SDHC_DATA_INHIBIT | SDHC_DAT_LINE_ACTIVE;
+ if (s->trnmod & SDHC_TRNS_READ) {
++ s->prnsts |= SDHC_DOING_READ;
+ while (length) {
+ if (s->data_count == 0) {
+ for (n = 0; n < block_size; n++) {
+@@ -807,6 +808,7 @@ static void sdhci_do_adma(SDHCIState *s)
+ }
+ }
+ } else {
++ s->prnsts |= SDHC_DOING_WRITE;
+ while (length) {
+ begin = s->data_count;
+ if ((length + begin) < block_size) {
diff --git a/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3409-4.patch b/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3409-4.patch
new file mode 100644
index 0000000000..2e49e3bc18
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3409-4.patch
@@ -0,0 +1,52 @@
+Backport of:
+
+From 5cd7aa3451b76bb19c0f6adc2b931f091e5d7fcd Mon Sep 17 00:00:00 2001
+From: Bin Meng <bmeng.cn@gmail.com>
+Date: Wed, 3 Mar 2021 20:26:38 +0800
+Subject: [PATCH] hw/sd: sdhci: Limit block size only when SDHC_BLKSIZE
+ register is writable
+MIME-Version: 1.0
+Content-Type: text/plain; charset=utf8
+Content-Transfer-Encoding: 8bit
+
+The codes to limit the maximum block size is only necessary when
+SDHC_BLKSIZE register is writable.
+
+Tested-by: Alexander Bulekov <alxndr@bu.edu>
+Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
+Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
+Message-Id: <20210303122639.20004-5-bmeng.cn@gmail.com>
+Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
+
+CVE: CVE-2021-3409 CVE-2020-17380
+Upstream-Status: Backport [import from ubuntu https://git.launchpad.net/ubuntu/+source/qemu/tree/debian/patches/CVE-2021-3409-4.patch?h=ubuntu/focal-security Upstream commit https://github.com/qemu/qemu/commit/5cd7aa3451b76bb19c0f6adc2b931f091e5d7fcd ]
+Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
+---
+ hw/sd/sdhci.c | 14 +++++++-------
+ 1 file changed, 7 insertions(+), 7 deletions(-)
+
+--- a/hw/sd/sdhci.c
++++ b/hw/sd/sdhci.c
+@@ -1137,15 +1137,15 @@ sdhci_write(void *opaque, hwaddr offset,
+ if (!TRANSFERRING_DATA(s->prnsts)) {
+ MASKED_WRITE(s->blksize, mask, extract32(value, 0, 12));
+ MASKED_WRITE(s->blkcnt, mask >> 16, value >> 16);
+- }
+
+- /* Limit block size to the maximum buffer size */
+- if (extract32(s->blksize, 0, 12) > s->buf_maxsz) {
+- qemu_log_mask(LOG_GUEST_ERROR, "%s: Size 0x%x is larger than " \
+- "the maximum buffer 0x%x", __func__, s->blksize,
+- s->buf_maxsz);
++ /* Limit block size to the maximum buffer size */
++ if (extract32(s->blksize, 0, 12) > s->buf_maxsz) {
++ qemu_log_mask(LOG_GUEST_ERROR, "%s: Size 0x%x is larger than "
++ "the maximum buffer 0x%x\n", __func__, s->blksize,
++ s->buf_maxsz);
+
+- s->blksize = deposit32(s->blksize, 0, 12, s->buf_maxsz);
++ s->blksize = deposit32(s->blksize, 0, 12, s->buf_maxsz);
++ }
+ }
+
+ break;
diff --git a/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3409-5.patch b/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3409-5.patch
new file mode 100644
index 0000000000..7b436809e9
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3409-5.patch
@@ -0,0 +1,93 @@
+From cffb446e8fd19a14e1634c7a3a8b07be3f01d5c9 Mon Sep 17 00:00:00 2001
+From: Bin Meng <bmeng.cn@gmail.com>
+Date: Wed, 3 Mar 2021 20:26:39 +0800
+Subject: [PATCH] hw/sd: sdhci: Reset the data pointer of s->fifo_buffer[] when
+ a different block size is programmed
+MIME-Version: 1.0
+Content-Type: text/plain; charset=utf8
+Content-Transfer-Encoding: 8bit
+
+If the block size is programmed to a different value from the
+previous one, reset the data pointer of s->fifo_buffer[] so that
+s->fifo_buffer[] can be filled in using the new block size in
+the next transfer.
+
+With this fix, the following reproducer:
+
+outl 0xcf8 0x80001010
+outl 0xcfc 0xe0000000
+outl 0xcf8 0x80001001
+outl 0xcfc 0x06000000
+write 0xe000002c 0x1 0x05
+write 0xe0000005 0x1 0x02
+write 0xe0000007 0x1 0x01
+write 0xe0000028 0x1 0x10
+write 0x0 0x1 0x23
+write 0x2 0x1 0x08
+write 0xe000000c 0x1 0x01
+write 0xe000000e 0x1 0x20
+write 0xe000000f 0x1 0x00
+write 0xe000000c 0x1 0x32
+write 0xe0000004 0x2 0x0200
+write 0xe0000028 0x1 0x00
+write 0xe0000003 0x1 0x40
+
+cannot be reproduced with the following QEMU command line:
+
+$ qemu-system-x86_64 -nographic -machine accel=qtest -m 512M \
+ -nodefaults -device sdhci-pci,sd-spec-version=3 \
+ -drive if=sd,index=0,file=null-co://,format=raw,id=mydrive \
+ -device sd-card,drive=mydrive -qtest stdio
+
+Cc: qemu-stable@nongnu.org
+Fixes: CVE-2020-17380
+Fixes: CVE-2020-25085
+Fixes: CVE-2021-3409
+Fixes: d7dfca0807a0 ("hw/sdhci: introduce standard SD host controller")
+Reported-by: Alexander Bulekov <alxndr@bu.edu>
+Reported-by: Cornelius Aschermann (Ruhr-Universität Bochum)
+Reported-by: Sergej Schumilo (Ruhr-Universität Bochum)
+Reported-by: Simon Wörner (Ruhr-Universität Bochum)
+Buglink: https://bugs.launchpad.net/qemu/+bug/1892960
+Buglink: https://bugs.launchpad.net/qemu/+bug/1909418
+Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1928146
+Tested-by: Alexander Bulekov <alxndr@bu.edu>
+Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
+Message-Id: <20210303122639.20004-6-bmeng.cn@gmail.com>
+Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
+
+CVE: CVE-2021-3409 CVE-2020-17380
+Upstream-Status: Backport [import from ubuntu https://git.launchpad.net/ubuntu/+source/qemu/tree/debian/patches/CVE-2021-3409-5.patch?h=ubuntu/focal-security Upstream commit https://github.com/qemu/qemu/commit/cffb446e8fd19a14e1634c7a3a8b07be3f01d5c9 ]
+Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
+---
+ hw/sd/sdhci.c | 12 ++++++++++++
+ 1 file changed, 12 insertions(+)
+
+--- a/hw/sd/sdhci.c
++++ b/hw/sd/sdhci.c
+@@ -1135,6 +1135,8 @@ sdhci_write(void *opaque, hwaddr offset,
+ break;
+ case SDHC_BLKSIZE:
+ if (!TRANSFERRING_DATA(s->prnsts)) {
++ uint16_t blksize = s->blksize;
++
+ MASKED_WRITE(s->blksize, mask, extract32(value, 0, 12));
+ MASKED_WRITE(s->blkcnt, mask >> 16, value >> 16);
+
+@@ -1146,6 +1148,16 @@ sdhci_write(void *opaque, hwaddr offset,
+
+ s->blksize = deposit32(s->blksize, 0, 12, s->buf_maxsz);
+ }
++
++ /*
++ * If the block size is programmed to a different value from
++ * the previous one, reset the data pointer of s->fifo_buffer[]
++ * so that s->fifo_buffer[] can be filled in using the new block
++ * size in the next transfer.
++ */
++ if (blksize != s->blksize) {
++ s->data_count = 0;
++ }
+ }
+
+ break;
diff --git a/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3507.patch b/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3507.patch
new file mode 100644
index 0000000000..4ff3413f8e
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3507.patch
@@ -0,0 +1,87 @@
+From defac5e2fbddf8423a354ff0454283a2115e1367 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
+Date: Thu, 18 Nov 2021 12:57:32 +0100
+Subject: [PATCH] hw/block/fdc: Prevent end-of-track overrun (CVE-2021-3507)
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Per the 82078 datasheet, if the end-of-track (EOT byte in
+the FIFO) is more than the number of sectors per side, the
+command is terminated unsuccessfully:
+
+* 5.2.5 DATA TRANSFER TERMINATION
+
+ The 82078 supports terminal count explicitly through
+ the TC pin and implicitly through the underrun/over-
+ run and end-of-track (EOT) functions. For full sector
+ transfers, the EOT parameter can define the last
+ sector to be transferred in a single or multisector
+ transfer. If the last sector to be transferred is a par-
+ tial sector, the host can stop transferring the data in
+ mid-sector, and the 82078 will continue to complete
+ the sector as if a hardware TC was received. The
+ only difference between these implicit functions and
+ TC is that they return "abnormal termination" result
+ status. Such status indications can be ignored if they
+ were expected.
+
+* 6.1.3 READ TRACK
+
+ This command terminates when the EOT specified
+ number of sectors have been read. If the 82078
+ does not find an I D Address Mark on the diskette
+ after the second· occurrence of a pulse on the
+ INDX# pin, then it sets the IC code in Status Regis-
+ ter 0 to "01" (Abnormal termination), sets the MA bit
+ in Status Register 1 to "1", and terminates the com-
+ mand.
+
+* 6.1.6 VERIFY
+
+ Refer to Table 6-6 and Table 6-7 for information
+ concerning the values of MT and EC versus SC and
+ EOT value.
+
+* Table 6·6. Result Phase Table
+
+* Table 6-7. Verify Command Result Phase Table
+
+Fix by aborting the transfer when EOT > # Sectors Per Side.
+
+Cc: qemu-stable@nongnu.org
+Cc: Hervé Poussineau <hpoussin@reactos.org>
+Fixes: baca51faff0 ("floppy driver: disk geometry auto detect")
+Reported-by: Alexander Bulekov <alxndr@bu.edu>
+Resolves: https://gitlab.com/qemu-project/qemu/-/issues/339
+Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
+Message-Id: <20211118115733.4038610-2-philmd@redhat.com>
+Reviewed-by: Hanna Reitz <hreitz@redhat.com>
+Signed-off-by: Kevin Wolf <kwolf@redhat.com>
+
+Upstream-Status: Backport [https://github.com/qemu/qemu/commit/defac5e2fbddf8423a354ff0454283a2115e1367]
+CVE: CVE-2021-3507
+Signed-off-by: Vivek Kumbhar <vkumbhar@mvista.com>
+---
+ hw/block/fdc.c | 8 ++++++++
+ 1 file changed, 8 insertions(+)
+
+diff --git a/hw/block/fdc.c b/hw/block/fdc.c
+index 347875a0cdae..57bb355794a9 100644
+--- a/hw/block/fdc.c
++++ b/hw/block/fdc.c
+@@ -1530,6 +1530,14 @@ static void fdctrl_start_transfer(FDCtrl *fdctrl, int direction)
+ int tmp;
+ fdctrl->data_len = 128 << (fdctrl->fifo[5] > 7 ? 7 : fdctrl->fifo[5]);
+ tmp = (fdctrl->fifo[6] - ks + 1);
++ if (tmp < 0) {
++ FLOPPY_DPRINTF("invalid EOT: %d\n", tmp);
++ fdctrl_stop_transfer(fdctrl, FD_SR0_ABNTERM, FD_SR1_MA, 0x00);
++ fdctrl->fifo[3] = kt;
++ fdctrl->fifo[4] = kh;
++ fdctrl->fifo[5] = ks;
++ return;
++ }
+ if (fdctrl->fifo[0] & 0x80)
+ tmp += fdctrl->fifo[6];
+ fdctrl->data_len *= tmp;
diff --git a/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3638.patch b/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3638.patch
new file mode 100644
index 0000000000..6e7af8540a
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3638.patch
@@ -0,0 +1,80 @@
+From b68d13531d8882ba66994b9f767b6a8f822464f3 Mon Sep 17 00:00:00 2001
+From: Vivek Kumbhar <vkumbhar@mvista.com>
+Date: Fri, 11 Nov 2022 12:43:26 +0530
+Subject: [PATCH] CVE-2021-3638
+
+Upstream-Status: Backport [https://lists.nongnu.org/archive/html/qemu-devel/2021-09/msg01682.html]
+CVE: CVE-2021-3638
+Signed-off-by: Vivek Kumbhar <vkumbhar@mvista.com>
+
+When building QEMU with DEBUG_ATI defined then running with
+'-device ati-vga,romfile="" -d unimp,guest_errors -trace ati\*'
+we get:
+
+ ati_mm_write 4 0x16c0 DP_CNTL <- 0x1
+ ati_mm_write 4 0x146c DP_GUI_MASTER_CNTL <- 0x2
+ ati_mm_write 4 0x16c8 DP_MIX <- 0xff0000
+ ati_mm_write 4 0x16c4 DP_DATATYPE <- 0x2
+ ati_mm_write 4 0x224 CRTC_OFFSET <- 0x0
+ ati_mm_write 4 0x142c DST_PITCH_OFFSET <- 0xfe00000
+ ati_mm_write 4 0x1420 DST_Y <- 0x3fff
+ ati_mm_write 4 0x1410 DST_HEIGHT <- 0x3fff
+ ati_mm_write 4 0x1588 DST_WIDTH_X <- 0x3fff3fff
+ ati_2d_blt: vram:0x7fff5fa00000 addr:0 ds:0x7fff61273800 stride:2560 bpp:32
+rop:0xff
+ ati_2d_blt: 0 0 0, 0 127 0, (0,0) -> (16383,16383) 16383x16383 > ^
+ ati_2d_blt: pixman_fill(dst:0x7fff5fa00000, stride:254, bpp:8, x:16383,
+y:16383, w:16383, h:16383, xor:0xff000000)
+ Thread 3 "qemu-system-i38" received signal SIGSEGV, Segmentation fault.
+ (gdb) bt
+ #0 0x00007ffff7f62ce0 in sse2_fill.lto_priv () at /lib64/libpixman-1.so.0
+ #1 0x00007ffff7f09278 in pixman_fill () at /lib64/libpixman-1.so.0
+ #2 0x0000555557b5a9af in ati_2d_blt (s=0x631000028800) at
+hw/display/ati_2d.c:196
+ #3 0x0000555557b4b5a2 in ati_mm_write (opaque=0x631000028800, addr=5512,
+data=1073692671, size=4) at hw/display/ati.c:843
+ #4 0x0000555558b90ec4 in memory_region_write_accessor (mr=0x631000039cc0,
+addr=5512, ..., size=4, ...) at softmmu/memory.c:492
+
+Commit 584acf34cb0 ("ati-vga: Fix reverse bit blts") introduced
+the local dst_x and dst_y which adjust the (x, y) coordinates
+depending on the direction in the SRCCOPY ROP3 operation, but
+forgot to address the same issue for the PATCOPY, BLACKNESS and
+WHITENESS operations, which also call pixman_fill().
+
+Fix that now by using the adjusted coordinates in the pixman_fill
+call, and update the related debug printf().
+---
+ hw/display/ati_2d.c | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/hw/display/ati_2d.c b/hw/display/ati_2d.c
+index 4dc10ea7..692bec91 100644
+--- a/hw/display/ati_2d.c
++++ b/hw/display/ati_2d.c
+@@ -84,7 +84,7 @@ void ati_2d_blt(ATIVGAState *s)
+ DPRINTF("%d %d %d, %d %d %d, (%d,%d) -> (%d,%d) %dx%d %c %c\n",
+ s->regs.src_offset, s->regs.dst_offset, s->regs.default_offset,
+ s->regs.src_pitch, s->regs.dst_pitch, s->regs.default_pitch,
+- s->regs.src_x, s->regs.src_y, s->regs.dst_x, s->regs.dst_y,
++ s->regs.src_x, s->regs.src_y, dst_x, dst_y,
+ s->regs.dst_width, s->regs.dst_height,
+ (s->regs.dp_cntl & DST_X_LEFT_TO_RIGHT ? '>' : '<'),
+ (s->regs.dp_cntl & DST_Y_TOP_TO_BOTTOM ? 'v' : '^'));
+@@ -180,11 +180,11 @@ void ati_2d_blt(ATIVGAState *s)
+ dst_stride /= sizeof(uint32_t);
+ DPRINTF("pixman_fill(%p, %d, %d, %d, %d, %d, %d, %x)\n",
+ dst_bits, dst_stride, bpp,
+- s->regs.dst_x, s->regs.dst_y,
++ dst_x, dst_y,
+ s->regs.dst_width, s->regs.dst_height,
+ filler);
+ pixman_fill((uint32_t *)dst_bits, dst_stride, bpp,
+- s->regs.dst_x, s->regs.dst_y,
++ dst_x, dst_y,
+ s->regs.dst_width, s->regs.dst_height,
+ filler);
+ if (dst_bits >= s->vga.vram_ptr + s->vga.vbe_start_addr &&
+--
+2.25.1
+
diff --git a/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3750.patch b/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3750.patch
new file mode 100644
index 0000000000..43630e71fb
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3750.patch
@@ -0,0 +1,180 @@
+From 1938fbc7ec197e2612ab2ce36dd69bff19208aa5 Mon Sep 17 00:00:00 2001
+From: Hitendra Prajapati <hprajapati@mvista.com>
+Date: Mon, 10 Oct 2022 17:44:41 +0530
+Subject: [PATCH] CVE-2021-3750
+
+Upstream-Status: Backport [https://git.qemu.org/?p=qemu.git;a=commit;h=b9d383ab797f54ae5fa8746117770709921dc529 && https://git.qemu.org/?p=qemu.git;a=commit;h=3ab6fdc91b72e156da22848f0003ff4225690ced && https://git.qemu.org/?p=qemu.git;a=commit;h=58e74682baf4e1ad26b064d8c02e5bc99c75c5d9]
+CVE: CVE-2021-3750
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ exec.c | 55 +++++++++++++++++++++++++++++++-------
+ hw/intc/arm_gicv3_redist.c | 4 +--
+ include/exec/memattrs.h | 9 +++++++
+ 3 files changed, 56 insertions(+), 12 deletions(-)
+
+diff --git a/exec.c b/exec.c
+index 1360051a..10581d8d 100644
+--- a/exec.c
++++ b/exec.c
+@@ -39,6 +39,7 @@
+ #include "qemu/config-file.h"
+ #include "qemu/error-report.h"
+ #include "qemu/qemu-print.h"
++#include "qemu/log.h"
+ #if defined(CONFIG_USER_ONLY)
+ #include "qemu.h"
+ #else /* !CONFIG_USER_ONLY */
+@@ -3118,6 +3119,33 @@ static bool prepare_mmio_access(MemoryRegion *mr)
+ return release_lock;
+ }
+
++/**
+++ * flatview_access_allowed
+++ * @mr: #MemoryRegion to be accessed
+++ * @attrs: memory transaction attributes
+++ * @addr: address within that memory region
+++ * @len: the number of bytes to access
+++ *
+++ * Check if a memory transaction is allowed.
+++ *
+++ * Returns: true if transaction is allowed, false if denied.
+++ */
++static bool flatview_access_allowed(MemoryRegion *mr, MemTxAttrs attrs,
++ hwaddr addr, hwaddr len)
++{
++ if (likely(!attrs.memory)) {
++ return true;
++ }
++ if (memory_region_is_ram(mr)) {
++ return true;
++ }
++ qemu_log_mask(LOG_GUEST_ERROR,
++ "Invalid access to non-RAM device at "
++ "addr 0x%" HWADDR_PRIX ", size %" HWADDR_PRIu ", "
++ "region '%s'\n", addr, len, memory_region_name(mr));
++ return false;
++}
++
+ /* Called within RCU critical section. */
+ static MemTxResult flatview_write_continue(FlatView *fv, hwaddr addr,
+ MemTxAttrs attrs,
+@@ -3131,7 +3159,10 @@ static MemTxResult flatview_write_continue(FlatView *fv, hwaddr addr,
+ bool release_lock = false;
+
+ for (;;) {
+- if (!memory_access_is_direct(mr, true)) {
++ if (!flatview_access_allowed(mr, attrs, addr1, l)) {
++ result |= MEMTX_ACCESS_ERROR;
++ /* Keep going. */
++ } else if (!memory_access_is_direct(mr, true)) {
+ release_lock |= prepare_mmio_access(mr);
+ l = memory_access_size(mr, l, addr1);
+ /* XXX: could force current_cpu to NULL to avoid
+@@ -3173,14 +3204,14 @@ static MemTxResult flatview_write(FlatView *fv, hwaddr addr, MemTxAttrs attrs,
+ hwaddr l;
+ hwaddr addr1;
+ MemoryRegion *mr;
+- MemTxResult result = MEMTX_OK;
+
+ l = len;
+ mr = flatview_translate(fv, addr, &addr1, &l, true, attrs);
+- result = flatview_write_continue(fv, addr, attrs, buf, len,
+- addr1, l, mr);
+-
+- return result;
++ if (!flatview_access_allowed(mr, attrs, addr, len)) {
++ return MEMTX_ACCESS_ERROR;
++ }
++ return flatview_write_continue(fv, addr, attrs, buf, len,
++ addr1, l, mr);
+ }
+
+ /* Called within RCU critical section. */
+@@ -3195,7 +3226,10 @@ MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
+ bool release_lock = false;
+
+ for (;;) {
+- if (!memory_access_is_direct(mr, false)) {
++ if (!flatview_access_allowed(mr, attrs, addr1, l)) {
++ result |= MEMTX_ACCESS_ERROR;
++ /* Keep going. */
++ } else if (!memory_access_is_direct(mr, false)) {
+ /* I/O case */
+ release_lock |= prepare_mmio_access(mr);
+ l = memory_access_size(mr, l, addr1);
+@@ -3238,6 +3272,9 @@ static MemTxResult flatview_read(FlatView *fv, hwaddr addr,
+
+ l = len;
+ mr = flatview_translate(fv, addr, &addr1, &l, false, attrs);
++ if (!flatview_access_allowed(mr, attrs, addr, len)) {
++ return MEMTX_ACCESS_ERROR;
++ }
+ return flatview_read_continue(fv, addr, attrs, buf, len,
+ addr1, l, mr);
+ }
+@@ -3474,12 +3511,10 @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr,
+ MemTxAttrs attrs)
+ {
+ FlatView *fv;
+- bool result;
+
+ RCU_READ_LOCK_GUARD();
+ fv = address_space_to_flatview(as);
+- result = flatview_access_valid(fv, addr, len, is_write, attrs);
+- return result;
++ return flatview_access_valid(fv, addr, len, is_write, attrs);
+ }
+
+ static hwaddr
+diff --git a/hw/intc/arm_gicv3_redist.c b/hw/intc/arm_gicv3_redist.c
+index 8645220d..44368e28 100644
+--- a/hw/intc/arm_gicv3_redist.c
++++ b/hw/intc/arm_gicv3_redist.c
+@@ -450,7 +450,7 @@ MemTxResult gicv3_redist_read(void *opaque, hwaddr offset, uint64_t *data,
+ break;
+ }
+
+- if (r == MEMTX_ERROR) {
++ if (r != MEMTX_OK) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "%s: invalid guest read at offset " TARGET_FMT_plx
+ "size %u\n", __func__, offset, size);
+@@ -507,7 +507,7 @@ MemTxResult gicv3_redist_write(void *opaque, hwaddr offset, uint64_t data,
+ break;
+ }
+
+- if (r == MEMTX_ERROR) {
++ if (r != MEMTX_OK) {
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "%s: invalid guest write at offset " TARGET_FMT_plx
+ "size %u\n", __func__, offset, size);
+diff --git a/include/exec/memattrs.h b/include/exec/memattrs.h
+index 95f2d20d..9fb98bc1 100644
+--- a/include/exec/memattrs.h
++++ b/include/exec/memattrs.h
+@@ -35,6 +35,14 @@ typedef struct MemTxAttrs {
+ unsigned int secure:1;
+ /* Memory access is usermode (unprivileged) */
+ unsigned int user:1;
++ /*
++ * Bus interconnect and peripherals can access anything (memories,
++ * devices) by default. By setting the 'memory' bit, bus transaction
++ * are restricted to "normal" memories (per the AMBA documentation)
++ * versus devices. Access to devices will be logged and rejected
++ * (see MEMTX_ACCESS_ERROR).
++ */
++ unsigned int memory:1;
+ /* Requester ID (for MSI for example) */
+ unsigned int requester_id:16;
+ /* Invert endianness for this page */
+@@ -66,6 +74,7 @@ typedef struct MemTxAttrs {
+ #define MEMTX_OK 0
+ #define MEMTX_ERROR (1U << 0) /* device returned an error */
+ #define MEMTX_DECODE_ERROR (1U << 1) /* nothing at that address */
++#define MEMTX_ACCESS_ERROR (1U << 2) /* access denied */
+ typedef uint32_t MemTxResult;
+
+ #endif
+--
+2.25.1
+
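Editor's note: a self-contained sketch of the policy this backport adds, using invented stand-in types (struct mem_tx_attrs, struct mem_region) rather than QEMU's MemTxAttrs and MemoryRegion: an initiator that sets the new 'memory' attribute may only touch RAM; anything else is logged and refused.

    #include <stdbool.h>
    #include <stdio.h>

    struct mem_tx_attrs { unsigned int memory : 1; };          /* invented stand-in */
    struct mem_region   { bool is_ram; const char *name; };    /* invented stand-in */

    static bool access_allowed(const struct mem_region *mr, struct mem_tx_attrs attrs)
    {
        if (!attrs.memory) {
            return true;            /* unrestricted initiator: anything goes */
        }
        if (mr->is_ram) {
            return true;            /* restricted initiator, but the target is RAM */
        }
        fprintf(stderr, "denied access to non-RAM region '%s'\n", mr->name);
        return false;               /* restricted initiator hitting a device */
    }

    int main(void)
    {
        struct mem_region ram  = { true,  "pc.ram" };
        struct mem_region mmio = { false, "some-device-mmio" };
        struct mem_tx_attrs dma = { .memory = 1 };

        printf("%d %d\n", access_allowed(&ram, dma), access_allowed(&mmio, dma));
        return 0;
    }

In the real code the refusal surfaces as MEMTX_ACCESS_ERROR, which the GICv3 redistributor hunks above now treat like any other result that is not MEMTX_OK.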
diff --git a/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3929.patch b/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3929.patch
new file mode 100644
index 0000000000..a1862f1226
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/CVE-2021-3929.patch
@@ -0,0 +1,81 @@
+From 2c682b5975b41495f98cc34b8243042c446eec44 Mon Sep 17 00:00:00 2001
+From: Gaurav Gupta <gauragup@cisco.com>
+Date: Wed, 29 Mar 2023 14:36:16 -0700
+Subject: [PATCH] hw/nvme: fix CVE-2021-3929 MIME-Version: 1.0 Content-Type:
+ text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+This fixes CVE-2021-3929 "locally" by denying DMA to the iomem of the
+device itself. This still allows DMA to MMIO regions of other devices
+(e.g. doing P2P DMA to the controller memory buffer of another NVMe
+device).
+
+Fixes: CVE-2021-3929
+Reported-by: Qiuhao Li <Qiuhao.Li@outlook.com>
+Reviewed-by: Keith Busch <kbusch@kernel.org>
+Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
+Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
+
+Upstream-Status: Backport
+[https://gitlab.com/qemu-project/qemu/-/commit/736b01642d85be832385]
+CVE: CVE-2021-3929
+Signed-off-by: Vivek Kumbhar <vkumbhar@mvista.com>
+Signed-off-by: Gaurav Gupta <gauragup@cisco.com>
+---
+ hw/block/nvme.c | 23 +++++++++++++++++++++++
+ hw/block/nvme.h | 1 +
+ 2 files changed, 24 insertions(+)
+
+diff --git a/hw/block/nvme.c b/hw/block/nvme.c
+index bda446d..ae9b19f 100644
+--- a/hw/block/nvme.c
++++ b/hw/block/nvme.c
+@@ -60,8 +60,31 @@ static bool nvme_addr_is_cmb(NvmeCtrl *n, hwaddr addr)
+ return addr >= low && addr < hi;
+ }
+
++static inline bool nvme_addr_is_iomem(NvmeCtrl *n, hwaddr addr)
++{
++ hwaddr hi, lo;
++
++ /*
++ * The purpose of this check is to guard against invalid "local" access to
++ * the iomem (i.e. controller registers). Thus, we check against the range
++ * covered by the 'bar0' MemoryRegion since that is currently composed of
++ * two subregions (the NVMe "MBAR" and the MSI-X table/pba). Note, however,
++ * that if the device model is ever changed to allow the CMB to be located
++ * in BAR0 as well, then this must be changed.
++ */
++ lo = n->bar0.addr;
++ hi = lo + int128_get64(n->bar0.size);
++
++ return addr >= lo && addr < hi;
++}
++
+ static int nvme_addr_read(NvmeCtrl *n, hwaddr addr, void *buf, int size)
+ {
++
++ if (nvme_addr_is_iomem(n, addr)) {
++ return NVME_DATA_TRAS_ERROR;
++ }
++
+ if (n->cmbsz && nvme_addr_is_cmb(n, addr)) {
+ memcpy(buf, (void *)&n->cmbuf[addr - n->ctrl_mem.addr], size);
+ return 0;
+diff --git a/hw/block/nvme.h b/hw/block/nvme.h
+index 557194e..5a2b119 100644
+--- a/hw/block/nvme.h
++++ b/hw/block/nvme.h
+@@ -59,6 +59,7 @@ typedef struct NvmeNamespace {
+
+ typedef struct NvmeCtrl {
+ PCIDevice parent_obj;
++ MemoryRegion bar0;
+ MemoryRegion iomem;
+ MemoryRegion ctrl_mem;
+ NvmeBar bar;
+--
+1.8.3.1
+
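Editor's note: the core of this fix is a half-open range test against the controller's own BAR0 before any DMA is serviced. A minimal, compilable sketch with invented names (addr_in_range, dma_read_guarded, and DATA_TRANSFER_ERROR standing in for NVME_DATA_TRAS_ERROR):

    #include <stdint.h>
    #include <stdio.h>

    #define DATA_TRANSFER_ERROR 0x4   /* stand-in for NVME_DATA_TRAS_ERROR */

    static int addr_in_range(uint64_t addr, uint64_t base, uint64_t size)
    {
        return addr >= base && addr < base + size;   /* half-open [base, base+size) */
    }

    static int dma_read_guarded(uint64_t addr, uint64_t bar0_base, uint64_t bar0_size)
    {
        if (addr_in_range(addr, bar0_base, bar0_size)) {
            return DATA_TRANSFER_ERROR;   /* guest aimed a PRP at our own registers */
        }
        return 0;                         /* the normal DMA path would run here */
    }

    int main(void)
    {
        uint64_t bar0_base = 0xfebf0000, bar0_size = 0x4000;

        printf("%d %d\n",
               dma_read_guarded(0xfebf0010, bar0_base, bar0_size),   /* denied  */
               dma_read_guarded(0x40000000, bar0_base, bar0_size));  /* allowed */
        return 0;
    }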
diff --git a/poky/meta/recipes-devtools/qemu/qemu/CVE-2022-26354.patch b/poky/meta/recipes-devtools/qemu/qemu/CVE-2022-26354.patch
new file mode 100644
index 0000000000..fc4d6cf3df
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/CVE-2022-26354.patch
@@ -0,0 +1,57 @@
+Backport of:
+
+From 8d1b247f3748ac4078524130c6d7ae42b6140aaf Mon Sep 17 00:00:00 2001
+From: Stefano Garzarella <sgarzare@redhat.com>
+Date: Mon, 28 Feb 2022 10:50:58 +0100
+Subject: [PATCH] vhost-vsock: detach the virqueue element in case of error
+
+In vhost_vsock_common_send_transport_reset(), if an element popped from
+the virtqueue is invalid, we should call virtqueue_detach_element() to
+detach it from the virtqueue before freeing its memory.
+
+Fixes: fc0b9b0e1c ("vhost-vsock: add virtio sockets device")
+Fixes: CVE-2022-26354
+Cc: qemu-stable@nongnu.org
+Reported-by: VictorV <vv474172261@gmail.com>
+Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
+Message-Id: <20220228095058.27899-1-sgarzare@redhat.com>
+Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
+Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
+Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
+
+CVE: CVE-2022-26354
+Upstream-Status: Backport [import from ubuntu https://git.launchpad.net/ubuntu/+source/qemu/tree/debian/patches/CVE-2022-26354.patch?h=ubuntu/focal-security Upstream commit https://github.com/qemu/qemu/commit/8d1b247f3748ac4078524130c6d7ae42b6140aaf ]
+Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
+---
+ hw/virtio/vhost-vsock-common.c | 10 +++++++---
+ 1 file changed, 7 insertions(+), 3 deletions(-)
+
+--- a/hw/virtio/vhost-vsock.c
++++ b/hw/virtio/vhost-vsock.c
+@@ -221,19 +221,23 @@ static void vhost_vsock_send_transport_r
+ if (elem->out_num) {
+ error_report("invalid vhost-vsock event virtqueue element with "
+ "out buffers");
+- goto out;
++ goto err;
+ }
+
+ if (iov_from_buf(elem->in_sg, elem->in_num, 0,
+ &event, sizeof(event)) != sizeof(event)) {
+ error_report("vhost-vsock event virtqueue element is too short");
+- goto out;
++ goto err;
+ }
+
+ virtqueue_push(vq, elem, sizeof(event));
+ virtio_notify(VIRTIO_DEVICE(vsock), vq);
+
+-out:
++ g_free(elem);
++ return;
++
++err:
++ virtqueue_detach_element(vq, elem, 0);
+ g_free(elem);
+ }
+
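Editor's note: a toy model of the error-path change, not the QEMU virtqueue API. The element taken out of the queue has to be handed back (virtqueue_detach_element() in the real code) before its wrapper is freed, otherwise the queue's in-flight count never drains. All names below are invented.

    #include <stdio.h>
    #include <stdlib.h>

    struct queue { int inuse; };        /* how many popped elements are in flight */
    struct elem  { int dummy; };

    static struct elem *queue_pop(struct queue *q)
    {
        q->inuse++;
        return calloc(1, sizeof(struct elem));
    }

    static void queue_detach(struct queue *q, struct elem *e)
    {
        (void)e;
        q->inuse--;                     /* element handed back before it is freed */
    }

    static void handle_event(struct queue *q, int element_valid)
    {
        struct elem *e = queue_pop(q);

        if (!element_valid) {
            queue_detach(q, e);         /* the step the error path was missing */
            free(e);
            return;
        }
        /* ...push a reply and notify, then account for the element... */
        q->inuse--;
        free(e);
    }

    int main(void)
    {
        struct queue q = { 0 };

        handle_event(&q, 0);            /* take the error path */
        printf("in-flight after error path: %d\n", q.inuse);   /* 0 with the fix */
        return 0;
    }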
diff --git a/poky/meta/recipes-devtools/qemu/qemu/CVE-2022-4144.patch b/poky/meta/recipes-devtools/qemu/qemu/CVE-2022-4144.patch
new file mode 100644
index 0000000000..3f0d5fbd5c
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/CVE-2022-4144.patch
@@ -0,0 +1,103 @@
+From 6dbbf055148c6f1b7d8a3251a65bd6f3d1e1f622 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>
+Date: Mon, 28 Nov 2022 21:27:40 +0100
+Subject: [PATCH] hw/display/qxl: Avoid buffer overrun in qxl_phys2virt
+ (CVE-2022-4144)
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Have qxl_get_check_slot_offset() return false if the requested
+buffer size does not fit within the slot memory region.
+
+Similarly qxl_phys2virt() now returns NULL in such case, and
+qxl_dirty_one_surface() aborts.
+
+This avoids buffer overrun in the host pointer returned by
+memory_region_get_ram_ptr().
+
+Fixes: CVE-2022-4144 (out-of-bounds read)
+Reported-by: Wenxu Yin (@awxylitol)
+Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1336
+
+Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
+Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
+Message-Id: <20221128202741.4945-5-philmd@linaro.org>
+
+Upstream-Status: Backport [https://gitlab.com/qemu-project/qemu/-/commit/6dbbf055148c6f1b7d8a3251a65bd6f3d1e1f622]
+CVE: CVE-2022-4144
+Comments: Deleted patch hunk in qxl.h,as it contains change
+in comments which is not present in current version of qemu.
+
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ hw/display/qxl.c | 27 +++++++++++++++++++++++----
+ 1 file changed, 23 insertions(+), 4 deletions(-)
+
+diff --git a/hw/display/qxl.c b/hw/display/qxl.c
+index cd7eb39d..6bc8385b 100644
+--- a/hw/display/qxl.c
++++ b/hw/display/qxl.c
+@@ -1440,11 +1440,13 @@ static void qxl_reset_surfaces(PCIQXLDevice *d)
+
+ /* can be also called from spice server thread context */
+ static bool qxl_get_check_slot_offset(PCIQXLDevice *qxl, QXLPHYSICAL pqxl,
+- uint32_t *s, uint64_t *o)
++ uint32_t *s, uint64_t *o,
++ size_t size_requested)
+ {
+ uint64_t phys = le64_to_cpu(pqxl);
+ uint32_t slot = (phys >> (64 - 8)) & 0xff;
+ uint64_t offset = phys & 0xffffffffffff;
++ uint64_t size_available;
+
+ if (slot >= NUM_MEMSLOTS) {
+ qxl_set_guest_bug(qxl, "slot too large %d >= %d", slot,
+@@ -1468,6 +1470,23 @@ static bool qxl_get_check_slot_offset(PCIQXLDevice *qxl, QXLPHYSICAL pqxl,
+ slot, offset, qxl->guest_slots[slot].size);
+ return false;
+ }
++ size_available = memory_region_size(qxl->guest_slots[slot].mr);
++ if (qxl->guest_slots[slot].offset + offset >= size_available) {
++ qxl_set_guest_bug(qxl,
++ "slot %d offset %"PRIu64" > region size %"PRIu64"\n",
++ slot, qxl->guest_slots[slot].offset + offset,
++ size_available);
++ return false;
++ }
++ size_available -= qxl->guest_slots[slot].offset + offset;
++ if (size_requested > size_available) {
++ qxl_set_guest_bug(qxl,
++ "slot %d offset %"PRIu64" size %zu: "
++ "overrun by %"PRIu64" bytes\n",
++ slot, offset, size_requested,
++ size_requested - size_available);
++ return false;
++ }
+
+ *s = slot;
+ *o = offset;
+@@ -1486,7 +1505,7 @@ void *qxl_phys2virt(PCIQXLDevice *qxl, QXLPHYSICAL pqxl, int group_id)
+ offset = le64_to_cpu(pqxl) & 0xffffffffffff;
+ return (void *)(intptr_t)offset;
+ case MEMSLOT_GROUP_GUEST:
+- if (!qxl_get_check_slot_offset(qxl, pqxl, &slot, &offset)) {
++ if (!qxl_get_check_slot_offset(qxl, pqxl, &slot, &offset, size)) {
+ return NULL;
+ }
+ ptr = memory_region_get_ram_ptr(qxl->guest_slots[slot].mr);
+@@ -1944,9 +1963,9 @@ static void qxl_dirty_one_surface(PCIQXLDevice *qxl, QXLPHYSICAL pqxl,
+ uint32_t slot;
+ bool rc;
+
+- rc = qxl_get_check_slot_offset(qxl, pqxl, &slot, &offset);
+- assert(rc == true);
+ size = (uint64_t)height * abs(stride);
++ rc = qxl_get_check_slot_offset(qxl, pqxl, &slot, &offset, size);
++ assert(rc == true);
+ trace_qxl_surfaces_dirty(qxl->id, offset, size);
+ qxl_set_dirty(qxl->guest_slots[slot].mr,
+ qxl->guest_slots[slot].offset + offset,
+--
+2.25.1
+
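Editor's note: a simplified, self-contained sketch of the new bounds check (struct slot is invented; the real code additionally accounts for the slot's offset inside its MemoryRegion). The top byte of a QXL "physical" address selects a memory slot, the low 48 bits are an offset, and the requested buffer length must fit in what remains of that slot.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    struct slot { uint64_t size; };          /* size of the guest memory slot */

    static bool check_slot_offset(uint64_t pqxl, const struct slot *slots,
                                  uint32_t nslots, size_t size_requested)
    {
        uint32_t slot   = (pqxl >> 56) & 0xff;       /* top byte: slot id   */
        uint64_t offset = pqxl & 0xffffffffffffULL;  /* low 48 bits: offset */

        if (slot >= nslots || offset >= slots[slot].size) {
            return false;                    /* bad slot, or offset past the end */
        }
        if (size_requested > slots[slot].size - offset) {
            return false;                    /* request would overrun the slot   */
        }
        return true;
    }

    int main(void)
    {
        struct slot slots[1] = { { 0x1000 } };
        uint64_t addr = ((uint64_t)0 << 56) | 0xff0;   /* slot 0, offset 0xff0 */

        printf("%d %d\n",
               check_slot_offset(addr, slots, 1, 8),    /* 1: 8 bytes fit      */
               check_slot_offset(addr, slots, 1, 64));  /* 0: 64 bytes overrun */
        return 0;
    }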
diff --git a/poky/meta/recipes-devtools/qemu/qemu/hw-block-nvme-handle-dma-errors.patch b/poky/meta/recipes-devtools/qemu/qemu/hw-block-nvme-handle-dma-errors.patch
new file mode 100644
index 0000000000..0fdae8351a
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/hw-block-nvme-handle-dma-errors.patch
@@ -0,0 +1,146 @@
+From ea2a7c7676d8eb9d1458eaa4b717df46782dcb3a Mon Sep 17 00:00:00 2001
+From: Gaurav Gupta <gauragup@cisco.com>
+Date: Wed, 29 Mar 2023 14:07:17 -0700
+Subject: [PATCH 2/2] hw/block/nvme: handle dma errors
+
+Handling DMA errors gracefully is required for the device to pass the
+block/011 test ("disable PCI device while doing I/O") in the blktests
+suite.
+
+With this patch the device sets the Controller Fatal Status bit in the
+CSTS register when failing to read from a submission queue or writing to
+a completion queue; expecting the host to reset the controller.
+
+If DMA errors occur at any other point in the execution of the command
+(say, while mapping the PRPs), the command is aborted with a Data
+Transfer Error status code.
+
+Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
+Signed-off-by: Gaurav Gupta <gauragup@cisco.com>
+---
+ hw/block/nvme.c | 41 +++++++++++++++++++++++++++++++----------
+ hw/block/trace-events | 3 +++
+ 2 files changed, 34 insertions(+), 10 deletions(-)
+
+diff --git a/hw/block/nvme.c b/hw/block/nvme.c
+index e6f24a6..bda446d 100644
+--- a/hw/block/nvme.c
++++ b/hw/block/nvme.c
+@@ -60,14 +60,14 @@ static bool nvme_addr_is_cmb(NvmeCtrl *n, hwaddr addr)
+ return addr >= low && addr < hi;
+ }
+
+-static void nvme_addr_read(NvmeCtrl *n, hwaddr addr, void *buf, int size)
++static int nvme_addr_read(NvmeCtrl *n, hwaddr addr, void *buf, int size)
+ {
+ if (n->cmbsz && nvme_addr_is_cmb(n, addr)) {
+ memcpy(buf, (void *)&n->cmbuf[addr - n->ctrl_mem.addr], size);
+- return;
++ return 0;
+ }
+
+- pci_dma_read(&n->parent_obj, addr, buf, size);
++ return pci_dma_read(&n->parent_obj, addr, buf, size);
+ }
+
+ static int nvme_check_sqid(NvmeCtrl *n, uint16_t sqid)
+@@ -152,6 +152,7 @@ static uint16_t nvme_map_prp(QEMUSGList *qsg, QEMUIOVector *iov, uint64_t prp1,
+ hwaddr trans_len = n->page_size - (prp1 % n->page_size);
+ trans_len = MIN(len, trans_len);
+ int num_prps = (len >> n->page_bits) + 1;
++ int ret;
+
+ if (unlikely(!prp1)) {
+ trace_nvme_err_invalid_prp();
+@@ -178,7 +179,11 @@ static uint16_t nvme_map_prp(QEMUSGList *qsg, QEMUIOVector *iov, uint64_t prp1,
+
+ nents = (len + n->page_size - 1) >> n->page_bits;
+ prp_trans = MIN(n->max_prp_ents, nents) * sizeof(uint64_t);
+- nvme_addr_read(n, prp2, (void *)prp_list, prp_trans);
++ ret = nvme_addr_read(n, prp2, (void *)prp_list, prp_trans);
++ if (ret) {
++ trace_pci_nvme_err_addr_read(prp2);
++ return NVME_DATA_TRAS_ERROR;
++ }
+ while (len != 0) {
+ uint64_t prp_ent = le64_to_cpu(prp_list[i]);
+
+@@ -191,8 +196,12 @@ static uint16_t nvme_map_prp(QEMUSGList *qsg, QEMUIOVector *iov, uint64_t prp1,
+ i = 0;
+ nents = (len + n->page_size - 1) >> n->page_bits;
+ prp_trans = MIN(n->max_prp_ents, nents) * sizeof(uint64_t);
+- nvme_addr_read(n, prp_ent, (void *)prp_list,
+- prp_trans);
++ ret = nvme_addr_read(n, prp_ent, (void *)prp_list,
++ prp_trans);
++ if (ret) {
++ trace_pci_nvme_err_addr_read(prp_ent);
++ return NVME_DATA_TRAS_ERROR;
++ }
+ prp_ent = le64_to_cpu(prp_list[i]);
+ }
+
+@@ -286,6 +295,7 @@ static void nvme_post_cqes(void *opaque)
+ NvmeCQueue *cq = opaque;
+ NvmeCtrl *n = cq->ctrl;
+ NvmeRequest *req, *next;
++ int ret;
+
+ QTAILQ_FOREACH_SAFE(req, &cq->req_list, entry, next) {
+ NvmeSQueue *sq;
+@@ -295,15 +305,21 @@ static void nvme_post_cqes(void *opaque)
+ break;
+ }
+
+- QTAILQ_REMOVE(&cq->req_list, req, entry);
+ sq = req->sq;
+ req->cqe.status = cpu_to_le16((req->status << 1) | cq->phase);
+ req->cqe.sq_id = cpu_to_le16(sq->sqid);
+ req->cqe.sq_head = cpu_to_le16(sq->head);
+ addr = cq->dma_addr + cq->tail * n->cqe_size;
++ ret = pci_dma_write(&n->parent_obj, addr, (void *)&req->cqe,
++ sizeof(req->cqe));
++ if (ret) {
++ trace_pci_nvme_err_addr_write(addr);
++ trace_pci_nvme_err_cfs();
++ n->bar.csts = NVME_CSTS_FAILED;
++ break;
++ }
++ QTAILQ_REMOVE(&cq->req_list, req, entry);
+ nvme_inc_cq_tail(cq);
+- pci_dma_write(&n->parent_obj, addr, (void *)&req->cqe,
+- sizeof(req->cqe));
+ QTAILQ_INSERT_TAIL(&sq->req_list, req, entry);
+ }
+ if (cq->tail != cq->head) {
+@@ -888,7 +904,12 @@ static void nvme_process_sq(void *opaque)
+
+ while (!(nvme_sq_empty(sq) || QTAILQ_EMPTY(&sq->req_list))) {
+ addr = sq->dma_addr + sq->head * n->sqe_size;
+- nvme_addr_read(n, addr, (void *)&cmd, sizeof(cmd));
++ if (nvme_addr_read(n, addr, (void *)&cmd, sizeof(cmd))) {
++ trace_pci_nvme_err_addr_read(addr);
++ trace_pci_nvme_err_cfs();
++ n->bar.csts = NVME_CSTS_FAILED;
++ break;
++ }
+ nvme_inc_sq_head(sq);
+
+ req = QTAILQ_FIRST(&sq->req_list);
+diff --git a/hw/block/trace-events b/hw/block/trace-events
+index c03e80c..4e4ad4e 100644
+--- a/hw/block/trace-events
++++ b/hw/block/trace-events
+@@ -60,6 +60,9 @@ nvme_mmio_shutdown_set(void) "shutdown bit set"
+ nvme_mmio_shutdown_cleared(void) "shutdown bit cleared"
+
+ # nvme traces for error conditions
++pci_nvme_err_addr_read(uint64_t addr) "addr 0x%"PRIx64""
++pci_nvme_err_addr_write(uint64_t addr) "addr 0x%"PRIx64""
++pci_nvme_err_cfs(void) "controller fatal status"
+ nvme_err_invalid_dma(void) "PRP/SGL is too small for transfer size"
+ nvme_err_invalid_prplist_ent(uint64_t prplist) "PRP list entry is null or not page aligned: 0x%"PRIx64""
+ nvme_err_invalid_prp2_align(uint64_t prp2) "PRP2 is not page aligned: 0x%"PRIx64""
+--
+1.8.3.1
+
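Editor's note: a toy, standalone rendering of the reordering in nvme_post_cqes() above (invented names, not the QEMU code). A completion is only dequeued once its write-back succeeded; on a DMA error the controller latches a fatal status and stops, leaving the entry queued.

    #include <stdbool.h>
    #include <stdio.h>

    enum { CSTS_OK = 0, CSTS_FAILED = 1 };

    static int post_completions(int pending, bool dma_ok, int *csts)
    {
        while (pending > 0) {
            if (!dma_ok) {              /* pci_dma_write() failed in the real code  */
                *csts = CSTS_FAILED;    /* Controller Fatal Status: host must reset */
                break;                  /* entry stays queued instead of being lost */
            }
            pending--;                  /* dequeue only after the successful write  */
        }
        return pending;
    }

    int main(void)
    {
        int csts = CSTS_OK;
        int left = post_completions(3, false, &csts);

        printf("left=%d csts=%d\n", left, csts);   /* left=3 csts=1 */
        return 0;
    }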
diff --git a/poky/meta/recipes-devtools/qemu/qemu/hw-block-nvme-refactor-nvme_addr_read.patch b/poky/meta/recipes-devtools/qemu/qemu/hw-block-nvme-refactor-nvme_addr_read.patch
new file mode 100644
index 0000000000..66ada52efb
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/hw-block-nvme-refactor-nvme_addr_read.patch
@@ -0,0 +1,55 @@
+From 55428706d5b0b8889b8e009eac77137bb556a4f0 Mon Sep 17 00:00:00 2001
+From: Klaus Jensen <k.jensen@samsung.com>
+Date: Tue, 9 Jun 2020 21:03:17 +0200
+Subject: [PATCH 1/2] hw/block/nvme: refactor nvme_addr_read
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Pull the controller memory buffer check to its own function. The check
+will be used on its own in later patches.
+
+Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
+Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
+Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
+Reviewed-by: Keith Busch <kbusch@kernel.org>
+Message-Id: <20200609190333.59390-7-its@irrelevant.dk>
+Signed-off-by: Kevin Wolf <kwolf@redhat.com>
+---
+ hw/block/nvme.c | 16 ++++++++++++----
+ 1 file changed, 12 insertions(+), 4 deletions(-)
+
+diff --git a/hw/block/nvme.c b/hw/block/nvme.c
+index 12d8254..e6f24a6 100644
+--- a/hw/block/nvme.c
++++ b/hw/block/nvme.c
+@@ -52,14 +52,22 @@
+
+ static void nvme_process_sq(void *opaque);
+
++static bool nvme_addr_is_cmb(NvmeCtrl *n, hwaddr addr)
++{
++ hwaddr low = n->ctrl_mem.addr;
++ hwaddr hi = n->ctrl_mem.addr + int128_get64(n->ctrl_mem.size);
++
++ return addr >= low && addr < hi;
++}
++
+ static void nvme_addr_read(NvmeCtrl *n, hwaddr addr, void *buf, int size)
+ {
+- if (n->cmbsz && addr >= n->ctrl_mem.addr &&
+- addr < (n->ctrl_mem.addr + int128_get64(n->ctrl_mem.size))) {
++ if (n->cmbsz && nvme_addr_is_cmb(n, addr)) {
+ memcpy(buf, (void *)&n->cmbuf[addr - n->ctrl_mem.addr], size);
+- } else {
+- pci_dma_read(&n->parent_obj, addr, buf, size);
++ return;
+ }
++
++ pci_dma_read(&n->parent_obj, addr, buf, size);
+ }
+
+ static int nvme_check_sqid(NvmeCtrl *n, uint16_t sqid)
+--
+1.8.3.1
+
diff --git a/poky/meta/recipes-devtools/qemu/qemu/hw-display-qxl-Pass-requested-buffer-size-to-qxl_phy.patch b/poky/meta/recipes-devtools/qemu/qemu/hw-display-qxl-Pass-requested-buffer-size-to-qxl_phy.patch
new file mode 100644
index 0000000000..f380be486c
--- /dev/null
+++ b/poky/meta/recipes-devtools/qemu/qemu/hw-display-qxl-Pass-requested-buffer-size-to-qxl_phy.patch
@@ -0,0 +1,236 @@
+From 5a44a01c9eca6507be45d107c27377a3e8d0ee8c Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@linaro.org>
+Date: Mon, 28 Nov 2022 21:27:39 +0100
+Subject: [PATCH] hw/display/qxl: Pass requested buffer size to qxl_phys2virt()
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Currently qxl_phys2virt() doesn't check for buffer overrun.
+In order to do so in the next commit, pass the buffer size
+as argument.
+
+For QXLCursor in qxl_render_cursor() -> qxl_cursor() we
+verify the size of the chunked data ahead, checking we can
+access 'sizeof(QXLCursor) + chunk->data_size' bytes.
+Since in the SPICE_CURSOR_TYPE_MONO case the cursor is
+assumed to fit in one chunk, no change are required.
+In SPICE_CURSOR_TYPE_ALPHA the ahead read is handled in
+qxl_unpack_chunks().
+
+Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
+Acked-by: Gerd Hoffmann <kraxel@redhat.com>
+Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
+Message-Id: <20221128202741.4945-4-philmd@linaro.org>
+
+Backport and rebase patch to fix compile error which imported by CVE-2022-4144.patch:
+
+/qxl.c: In function 'qxl_phys2virt':
+| /home/hitendra/work/yocto-work/cgx-data/dunfell-3.1/x86-generic-64-5.4-3.1-cgx/project/tmp/work/i586-montavistamllib32-linux/lib32-qemu/4.2.0-r0.8/qemu-4.2.0/hw/display/qxl.c:1508:67: error: 'size' undeclared (first use in this function); did you mean 'gsize'?
+| 1508 | if (!qxl_get_check_slot_offset(qxl, pqxl, &slot, &offset, size)) {
+| | ^~~~
+| | gsize
+
+Upstream-Status: Backport [https://github.com/qemu/qemu/commit/61c34fc && https://gitlab.com/qemu-project/qemu/-/commit/8efec0ef8bbc1e75a7ebf6e325a35806ece9b39f]
+
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ hw/display/qxl-logger.c | 22 +++++++++++++++++++---
+ hw/display/qxl-render.c | 20 ++++++++++++++++----
+ hw/display/qxl.c | 17 +++++++++++------
+ hw/display/qxl.h | 3 ++-
+ 4 files changed, 48 insertions(+), 14 deletions(-)
+
+diff --git a/hw/display/qxl-logger.c b/hw/display/qxl-logger.c
+index 2ec6d8fa..031ddfec 100644
+--- a/hw/display/qxl-logger.c
++++ b/hw/display/qxl-logger.c
+@@ -106,7 +106,7 @@ static int qxl_log_image(PCIQXLDevice *qxl, QXLPHYSICAL addr, int group_id)
+ QXLImage *image;
+ QXLImageDescriptor *desc;
+
+- image = qxl_phys2virt(qxl, addr, group_id);
++ image = qxl_phys2virt(qxl, addr, group_id, sizeof(QXLImage));
+ if (!image) {
+ return 1;
+ }
+@@ -216,7 +216,8 @@ int qxl_log_cmd_cursor(PCIQXLDevice *qxl, QXLCursorCmd *cmd, int group_id)
+ cmd->u.set.position.y,
+ cmd->u.set.visible ? "yes" : "no",
+ cmd->u.set.shape);
+- cursor = qxl_phys2virt(qxl, cmd->u.set.shape, group_id);
++ cursor = qxl_phys2virt(qxl, cmd->u.set.shape, group_id,
++ sizeof(QXLCursor));
+ if (!cursor) {
+ return 1;
+ }
+@@ -238,6 +239,7 @@ int qxl_log_command(PCIQXLDevice *qxl, const char *ring, QXLCommandExt *ext)
+ {
+ bool compat = ext->flags & QXL_COMMAND_FLAG_COMPAT;
+ void *data;
++ size_t datasz;
+ int ret;
+
+ if (!qxl->cmdlog) {
+@@ -249,7 +251,20 @@ int qxl_log_command(PCIQXLDevice *qxl, const char *ring, QXLCommandExt *ext)
+ qxl_name(qxl_type, ext->cmd.type),
+ compat ? "(compat)" : "");
+
+- data = qxl_phys2virt(qxl, ext->cmd.data, ext->group_id);
++ switch (ext->cmd.type) {
++ case QXL_CMD_DRAW:
++ datasz = compat ? sizeof(QXLCompatDrawable) : sizeof(QXLDrawable);
++ break;
++ case QXL_CMD_SURFACE:
++ datasz = sizeof(QXLSurfaceCmd);
++ break;
++ case QXL_CMD_CURSOR:
++ datasz = sizeof(QXLCursorCmd);
++ break;
++ default:
++ goto out;
++ }
++ data = qxl_phys2virt(qxl, ext->cmd.data, ext->group_id, datasz);
+ if (!data) {
+ return 1;
+ }
+@@ -271,6 +286,7 @@ int qxl_log_command(PCIQXLDevice *qxl, const char *ring, QXLCommandExt *ext)
+ qxl_log_cmd_cursor(qxl, data, ext->group_id);
+ break;
+ }
++out:
+ fprintf(stderr, "\n");
+ return 0;
+ }
+diff --git a/hw/display/qxl-render.c b/hw/display/qxl-render.c
+index d532e157..a65a6d64 100644
+--- a/hw/display/qxl-render.c
++++ b/hw/display/qxl-render.c
+@@ -107,7 +107,9 @@ static void qxl_render_update_area_unlocked(PCIQXLDevice *qxl)
+ qxl->guest_primary.resized = 0;
+ qxl->guest_primary.data = qxl_phys2virt(qxl,
+ qxl->guest_primary.surface.mem,
+- MEMSLOT_GROUP_GUEST);
++ MEMSLOT_GROUP_GUEST,
++ qxl->guest_primary.abs_stride
++ * height);
+ if (!qxl->guest_primary.data) {
+ return;
+ }
+@@ -222,7 +224,8 @@ static void qxl_unpack_chunks(void *dest, size_t size, PCIQXLDevice *qxl,
+ if (offset == size) {
+ return;
+ }
+- chunk = qxl_phys2virt(qxl, chunk->next_chunk, group_id);
++ chunk = qxl_phys2virt(qxl, chunk->next_chunk, group_id,
++ sizeof(QXLDataChunk) + chunk->data_size);
+ if (!chunk) {
+ return;
+ }
+@@ -289,7 +292,8 @@ fail:
+ /* called from spice server thread context only */
+ int qxl_render_cursor(PCIQXLDevice *qxl, QXLCommandExt *ext)
+ {
+- QXLCursorCmd *cmd = qxl_phys2virt(qxl, ext->cmd.data, ext->group_id);
++ QXLCursorCmd *cmd = qxl_phys2virt(qxl, ext->cmd.data, ext->group_id,
++ sizeof(QXLCursorCmd));
+ QXLCursor *cursor;
+ QEMUCursor *c;
+
+@@ -308,7 +312,15 @@ int qxl_render_cursor(PCIQXLDevice *qxl, QXLCommandExt *ext)
+ }
+ switch (cmd->type) {
+ case QXL_CURSOR_SET:
+- cursor = qxl_phys2virt(qxl, cmd->u.set.shape, ext->group_id);
++ /* First read the QXLCursor to get QXLDataChunk::data_size ... */
++ cursor = qxl_phys2virt(qxl, cmd->u.set.shape, ext->group_id,
++ sizeof(QXLCursor));
++ if (!cursor) {
++ return 1;
++ }
++ /* Then read including the chunked data following QXLCursor. */
++ cursor = qxl_phys2virt(qxl, cmd->u.set.shape, ext->group_id,
++ sizeof(QXLCursor) + cursor->chunk.data_size);
+ if (!cursor) {
+ return 1;
+ }
+diff --git a/hw/display/qxl.c b/hw/display/qxl.c
+index 6bc8385b..858d3e93 100644
+--- a/hw/display/qxl.c
++++ b/hw/display/qxl.c
+@@ -275,7 +275,8 @@ static void qxl_spice_monitors_config_async(PCIQXLDevice *qxl, int replay)
+ QXL_IO_MONITORS_CONFIG_ASYNC));
+ }
+
+- cfg = qxl_phys2virt(qxl, qxl->guest_monitors_config, MEMSLOT_GROUP_GUEST);
++ cfg = qxl_phys2virt(qxl, qxl->guest_monitors_config, MEMSLOT_GROUP_GUEST,
++ sizeof(QXLMonitorsConfig));
+ if (cfg != NULL && cfg->count == 1) {
+ qxl->guest_primary.resized = 1;
+ qxl->guest_head0_width = cfg->heads[0].width;
+@@ -460,7 +461,8 @@ static int qxl_track_command(PCIQXLDevice *qxl, struct QXLCommandExt *ext)
+ switch (le32_to_cpu(ext->cmd.type)) {
+ case QXL_CMD_SURFACE:
+ {
+- QXLSurfaceCmd *cmd = qxl_phys2virt(qxl, ext->cmd.data, ext->group_id);
++ QXLSurfaceCmd *cmd = qxl_phys2virt(qxl, ext->cmd.data, ext->group_id,
++ sizeof(QXLSurfaceCmd));
+
+ if (!cmd) {
+ return 1;
+@@ -494,7 +496,8 @@ static int qxl_track_command(PCIQXLDevice *qxl, struct QXLCommandExt *ext)
+ }
+ case QXL_CMD_CURSOR:
+ {
+- QXLCursorCmd *cmd = qxl_phys2virt(qxl, ext->cmd.data, ext->group_id);
++ QXLCursorCmd *cmd = qxl_phys2virt(qxl, ext->cmd.data, ext->group_id,
++ sizeof(QXLCursorCmd));
+
+ if (!cmd) {
+ return 1;
+@@ -674,7 +677,8 @@ static int interface_get_command(QXLInstance *sin, struct QXLCommandExt *ext)
+ *
+ * https://cgit.freedesktop.org/spice/win32/qxl-wddm-dod/commit/?id=f6e099db39e7d0787f294d5fd0dce328b5210faa
+ */
+- void *msg = qxl_phys2virt(qxl, ext->cmd.data, ext->group_id);
++ void *msg = qxl_phys2virt(qxl, ext->cmd.data, ext->group_id,
++ sizeof(QXLCommandRing));
+ if (msg != NULL && (
+ msg < (void *)qxl->vga.vram_ptr ||
+ msg > ((void *)qxl->vga.vram_ptr + qxl->vga.vram_size))) {
+@@ -1494,7 +1498,8 @@ static bool qxl_get_check_slot_offset(PCIQXLDevice *qxl, QXLPHYSICAL pqxl,
+ }
+
+ /* can be also called from spice server thread context */
+-void *qxl_phys2virt(PCIQXLDevice *qxl, QXLPHYSICAL pqxl, int group_id)
++void *qxl_phys2virt(PCIQXLDevice *qxl, QXLPHYSICAL pqxl, int group_id,
++ size_t size)
+ {
+ uint64_t offset;
+ uint32_t slot;
+@@ -1994,7 +1999,7 @@ static void qxl_dirty_surfaces(PCIQXLDevice *qxl)
+ }
+
+ cmd = qxl_phys2virt(qxl, qxl->guest_surfaces.cmds[i],
+- MEMSLOT_GROUP_GUEST);
++ MEMSLOT_GROUP_GUEST, sizeof(QXLSurfaceCmd));
+ assert(cmd);
+ assert(cmd->type == QXL_SURFACE_CMD_CREATE);
+ qxl_dirty_one_surface(qxl, cmd->u.surface_create.data,
+diff --git a/hw/display/qxl.h b/hw/display/qxl.h
+index 80eb0d26..fcfd133a 100644
+--- a/hw/display/qxl.h
++++ b/hw/display/qxl.h
+@@ -147,7 +147,8 @@ typedef struct PCIQXLDevice {
+ #define QXL_DEFAULT_REVISION QXL_REVISION_STABLE_V12
+
+ /* qxl.c */
+-void *qxl_phys2virt(PCIQXLDevice *qxl, QXLPHYSICAL phys, int group_id);
++void *qxl_phys2virt(PCIQXLDevice *qxl, QXLPHYSICAL phys, int group_id,
++ size_t size);
+ void qxl_set_guest_bug(PCIQXLDevice *qxl, const char *msg, ...)
+ GCC_FMT_ATTR(2, 3);
+
+--
+2.25.1
+
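Editor's note: the interesting pattern in the cursor path above is the two-step read: map just the fixed-size QXLCursor header first, learn chunk.data_size from it, then re-map with the full length so the bounds check also covers the variable payload. A compilable sketch with an invented map_checked() helper standing in for qxl_phys2virt():

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>
    #include <stdio.h>

    struct chunk_hdr { uint32_t data_size; };   /* stand-in for the chunked header */

    /* Stand-in for qxl_phys2virt(): validate [off, off+len) against the buffer. */
    static void *map_checked(uint8_t *base, size_t total, size_t off, size_t len)
    {
        if (off > total || len > total - off) {
            return NULL;
        }
        return base + off;
    }

    int main(void)
    {
        uint8_t guest[64] = { 0 };
        struct chunk_hdr hdr = { .data_size = 16 };
        memcpy(guest, &hdr, sizeof(hdr));        /* pretend guest-written header */

        struct chunk_hdr local;
        void *hp = map_checked(guest, sizeof(guest), 0, sizeof(local));
        if (!hp) {
            return 1;
        }
        memcpy(&local, hp, sizeof(local));       /* step 1: read the fixed header */

        /* Step 2: re-map with header plus advertised payload, so the length
         * check now covers the variable-sized data as well. */
        void *full = map_checked(guest, sizeof(guest), 0,
                                 sizeof(local) + local.data_size);
        printf("full mapping %s\n", full ? "ok" : "rejected");
        return 0;
    }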
diff --git a/poky/meta/recipes-devtools/qemu/qemu_4.2.0.bb b/poky/meta/recipes-devtools/qemu/qemu_4.2.0.bb
index f9905e2812..05449afe4e 100644
--- a/poky/meta/recipes-devtools/qemu/qemu_4.2.0.bb
+++ b/poky/meta/recipes-devtools/qemu/qemu_4.2.0.bb
@@ -24,8 +24,8 @@ do_install_append_class-nativesdk() {
}
PACKAGECONFIG ??= " \
- fdt sdl kvm \
+ fdt sdl kvm slirp \
${@bb.utils.filter('DISTRO_FEATURES', 'alsa xen', d)} \
${@bb.utils.filter('DISTRO_FEATURES', 'seccomp', d)} \
"
-PACKAGECONFIG_class-nativesdk ??= "fdt sdl kvm"
+PACKAGECONFIG_class-nativesdk ??= "fdt sdl kvm slirp"
diff --git a/poky/meta/recipes-devtools/quilt/quilt.inc b/poky/meta/recipes-devtools/quilt/quilt.inc
index d7ecda7aaa..ad23b8d922 100644
--- a/poky/meta/recipes-devtools/quilt/quilt.inc
+++ b/poky/meta/recipes-devtools/quilt/quilt.inc
@@ -12,6 +12,7 @@ SRC_URI = "${SAVANNAH_GNU_MIRROR}/quilt/quilt-${PV}.tar.gz \
file://Makefile \
file://test.sh \
file://0001-tests-Allow-different-output-from-mv.patch \
+ file://faildiff-order.patch \
"
SRC_URI_append_class-target = " file://gnu_patch_test_fix_target.patch"
diff --git a/poky/meta/recipes-devtools/quilt/quilt/faildiff-order.patch b/poky/meta/recipes-devtools/quilt/quilt/faildiff-order.patch
new file mode 100644
index 0000000000..f22065a250
--- /dev/null
+++ b/poky/meta/recipes-devtools/quilt/quilt/faildiff-order.patch
@@ -0,0 +1,41 @@
+Upstream-Status: Backport
+Signed-off-by: Ross Burton <ross.burton@arm.com>
+
+From 4dfe7f9e702c85243a71e4de267a13e434b6d6c2 Mon Sep 17 00:00:00 2001
+From: Jean Delvare <jdelvare@suse.de>
+Date: Fri, 20 Jan 2023 12:56:08 +0100
+Subject: [PATCH] test: Fix a race condition
+
+The test suite does not differentiate between stdout and stderr. When
+messages are printed to both, the order in which they will reach us
+is apparently not guaranteed. Ideally this would be deterministic, but
+until then, explicitly test stdout and stderr separately in the test
+case itself. Otherwise the test suite fails randomly, which is a pain
+for distribution package maintainers.
+
+This fixes bug #63651 reported by Ross Burton:
+https://savannah.nongnu.org/bugs/index.php?63651
+
+Signed-off-by: Jean Delvare <jdelvare@suse.de>
+---
+ test/faildiff.test | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/test/faildiff.test b/test/faildiff.test
+index 5afb8e3..0444c15 100644
+--- a/test/faildiff.test
++++ b/test/faildiff.test
+@@ -27,8 +27,9 @@ What happens on binary files?
+ > File test.bin added to patch %{P}test.diff
+
+ $ printf "\\003\\000\\001" > test.bin
+- $ quilt diff -pab --no-index
++ $ quilt diff -pab --no-index 2>/dev/null
+ >~ (Files|Binary files) a/test\.bin and b/test\.bin differ
++ $ quilt diff -pab --no-index >/dev/null
+ > Diff failed on file 'test.bin', aborting
+ $ echo %{?}
+ > 1
+--
+2.34.1
+
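Editor's note: a generic, standalone demonstration of the effect the quilt test was tripping over (this is not the quilt code, just ordinary stdio buffering). When stdout is redirected to a file it becomes fully buffered while stderr stays unbuffered, so lines written to the two streams can land in a combined capture in either order.

    #include <stdio.h>

    int main(void)
    {
        /* With stdout redirected to a file it is fully buffered, while stderr
         * is unbuffered, so the stderr line can appear in a combined capture
         * before the stdout line even though it was printed second. */
        printf("Binary files a/test.bin and b/test.bin differ\n");
        fprintf(stderr, "Diff failed on file 'test.bin', aborting\n");
        fflush(stdout);
        return 0;
    }

Running something like ./a.out >log 2>&1 will typically show the stderr line first in log, which is exactly the kind of reordering the fixed test now sidesteps by checking stdout and stderr separately.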
diff --git a/poky/meta/recipes-devtools/rpm/files/CVE-2021-3521-01.patch b/poky/meta/recipes-devtools/rpm/files/CVE-2021-3521-01.patch
new file mode 100644
index 0000000000..0882d6f310
--- /dev/null
+++ b/poky/meta/recipes-devtools/rpm/files/CVE-2021-3521-01.patch
@@ -0,0 +1,60 @@
+From b5e8bc74b2b05aa557f663fe227b94d2bc64fbd8 Mon Sep 17 00:00:00 2001
+From: Panu Matilainen <pmatilai@redhat.com>
+Date: Thu, 30 Sep 2021 09:51:10 +0300
+Subject: [PATCH] Process MPI's from all kinds of signatures
+
+No immediate effect but needed by the following commits.
+
+Dependent patch:
+CVE: CVE-2021-3521
+Upstream-Status: Backport [https://github.com/rpm-software-management/rpm/commit/b5e8bc74b2b05aa557f663fe227b94d2bc64fbd8]
+Signed-off-by: Riyaz Khan <Riyaz.Khan@kpit.com>
+
+---
+ rpmio/rpmpgp.c | 12 +++++-------
+ 1 file changed, 5 insertions(+), 7 deletions(-)
+
+diff --git a/rpmio/rpmpgp.c b/rpmio/rpmpgp.c
+index ee5c81e246..340de5fc9a 100644
+--- a/rpmio/rpmpgp.c
++++ b/rpmio/rpmpgp.c
+@@ -511,7 +511,7 @@ pgpDigAlg pgpDigAlgFree(pgpDigAlg alg)
+ return NULL;
+ }
+
+-static int pgpPrtSigParams(pgpTag tag, uint8_t pubkey_algo, uint8_t sigtype,
++static int pgpPrtSigParams(pgpTag tag, uint8_t pubkey_algo,
+ const uint8_t *p, const uint8_t *h, size_t hlen,
+ pgpDigParams sigp)
+ {
+@@ -524,10 +524,8 @@ static int pgpPrtSigParams(pgpTag tag, uint8_t pubkey_algo, uint8_t sigtype,
+ int mpil = pgpMpiLen(p);
+ if (p + mpil > pend)
+ break;
+- if (sigtype == PGPSIGTYPE_BINARY || sigtype == PGPSIGTYPE_TEXT) {
+- if (sigalg->setmpi(sigalg, i, p))
+- break;
+- }
++ if (sigalg->setmpi(sigalg, i, p))
++ break;
+ p += mpil;
+ }
+
+@@ -600,7 +598,7 @@ static int pgpPrtSig(pgpTag tag, const uint8_t *h, size_t hlen,
+ }
+
+ p = ((uint8_t *)v) + sizeof(*v);
+- rc = pgpPrtSigParams(tag, v->pubkey_algo, v->sigtype, p, h, hlen, _digp);
++ rc = pgpPrtSigParams(tag, v->pubkey_algo, p, h, hlen, _digp);
+ } break;
+ case 4:
+ { pgpPktSigV4 v = (pgpPktSigV4)h;
+@@ -658,7 +656,7 @@ static int pgpPrtSig(pgpTag tag, const uint8_t *h, size_t hlen,
+ if (p > (h + hlen))
+ return 1;
+
+- rc = pgpPrtSigParams(tag, v->pubkey_algo, v->sigtype, p, h, hlen, _digp);
++ rc = pgpPrtSigParams(tag, v->pubkey_algo, p, h, hlen, _digp);
+ } break;
+ default:
+ rpmlog(RPMLOG_WARNING, _("Unsupported version of key: V%d\n"), version);
diff --git a/poky/meta/recipes-devtools/rpm/files/CVE-2021-3521-02.patch b/poky/meta/recipes-devtools/rpm/files/CVE-2021-3521-02.patch
new file mode 100644
index 0000000000..c5f88a8c72
--- /dev/null
+++ b/poky/meta/recipes-devtools/rpm/files/CVE-2021-3521-02.patch
@@ -0,0 +1,55 @@
+From 9f03f42e2614a68f589f9db8fe76287146522c0c Mon Sep 17 00:00:00 2001
+From: Panu Matilainen <pmatilai@redhat.com>
+Date: Thu, 30 Sep 2021 09:56:20 +0300
+Subject: [PATCH] Refactor pgpDigParams construction to helper function
+
+No functional changes, just to reduce code duplication and needed by
+the following commits.
+
+Dependent patch:
+CVE: CVE-2021-3521
+Upstream-Status: Backport [https://github.com/rpm-software-management/rpm/commit/9f03f42e2614a68f589f9db8fe76287146522c0c]
+Signed-off-by: Riyaz Khan <Riyaz.Khan@kpit.com>
+
+---
+ rpmio/rpmpgp.c | 13 +++++++++----
+ 1 file changed, 9 insertions(+), 4 deletions(-)
+
+diff --git a/rpmio/rpmpgp.c b/rpmio/rpmpgp.c
+index 340de5fc9a..aad7c275c9 100644
+--- a/rpmio/rpmpgp.c
++++ b/rpmio/rpmpgp.c
+@@ -1055,6 +1055,13 @@ unsigned int pgpDigParamsAlgo(pgpDigParams digp, unsigned int algotype)
+ return algo;
+ }
+
++static pgpDigParams pgpDigParamsNew(uint8_t tag)
++{
++ pgpDigParams digp = xcalloc(1, sizeof(*digp));
++ digp->tag = tag;
++ return digp;
++}
++
+ int pgpPrtParams(const uint8_t * pkts, size_t pktlen, unsigned int pkttype,
+ pgpDigParams * ret)
+ {
+@@ -1072,8 +1079,7 @@ int pgpPrtParams(const uint8_t * pkts, size_t pktlen, unsigned int pkttype,
+ if (pkttype && pkt.tag != pkttype) {
+ break;
+ } else {
+- digp = xcalloc(1, sizeof(*digp));
+- digp->tag = pkt.tag;
++ digp = pgpDigParamsNew(pkt.tag);
+ }
+ }
+
+@@ -1121,8 +1127,7 @@ int pgpPrtParamsSubkeys(const uint8_t *pkts, size_t pktlen,
+ digps = xrealloc(digps, alloced * sizeof(*digps));
+ }
+
+- digps[count] = xcalloc(1, sizeof(**digps));
+- digps[count]->tag = PGPTAG_PUBLIC_SUBKEY;
++ digps[count] = pgpDigParamsNew(PGPTAG_PUBLIC_SUBKEY);
+ /* Copy UID from main key to subkey */
+ digps[count]->userid = xstrdup(mainkey->userid);
+
diff --git a/poky/meta/recipes-devtools/rpm/files/CVE-2021-3521-03.patch b/poky/meta/recipes-devtools/rpm/files/CVE-2021-3521-03.patch
new file mode 100644
index 0000000000..fd31f11beb
--- /dev/null
+++ b/poky/meta/recipes-devtools/rpm/files/CVE-2021-3521-03.patch
@@ -0,0 +1,34 @@
+From 5ff86764b17f31535cb247543a90dd739076ec38 Mon Sep 17 00:00:00 2001
+From: Demi Marie Obenour <demi@invisiblethingslab.com>
+Date: Thu, 6 May 2021 18:34:45 -0400
+Subject: [PATCH] Do not allow extra packets to follow a signature
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+According to RFC 4880 § 11.4, a detached signature is “simply a
+Signature packet”. Therefore, extra packets following a detached
+signature are not allowed.
+
+Dependent patch:
+CVE: CVE-2021-3521
+Upstream-Status: Backport [https://github.com/rpm-software-management/rpm/commit/5ff86764b17f31535cb247543a90dd739076ec38]
+Signed-off-by: Riyaz Khan <Riyaz.Khan@kpit.com>
+
+---
+ rpmio/rpmpgp.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/rpmio/rpmpgp.c b/rpmio/rpmpgp.c
+index f1a99e7169..5b346a8253 100644
+--- a/rpmio/rpmpgp.c
++++ b/rpmio/rpmpgp.c
+@@ -1068,6 +1068,8 @@ int pgpPrtParams(const uint8_t * pkts, size_t pktlen, unsigned int pkttype,
+ break;
+
+ p += (pkt.body - pkt.head) + pkt.blen;
++ if (pkttype == PGPTAG_SIGNATURE)
++ break;
+ }
+
+ rc = (digp && (p == pend)) ? 0 : -1;
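Editor's note: a minimal sketch of the rule being enforced, using an invented toy packet format (one tag byte, one length byte, body) rather than real OpenPGP framing. Parsing stops right after the signature packet, and any trailing bytes make the whole input invalid.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    #define TAG_SIGNATURE 2

    static int parse_detached_sig(const uint8_t *p, size_t len)
    {
        size_t pos = 0;

        while (pos + 2 <= len) {
            uint8_t tag  = p[pos];
            uint8_t blen = p[pos + 1];

            if (pos + 2 + blen > len) {
                return -1;                   /* truncated packet */
            }
            pos += 2 + blen;
            if (tag == TAG_SIGNATURE) {
                break;                       /* stop right after the signature */
            }
        }
        return pos == len ? 0 : -1;          /* anything left over is rejected */
    }

    int main(void)
    {
        const uint8_t good[] = { TAG_SIGNATURE, 2, 0xaa, 0xbb };
        const uint8_t bad[]  = { TAG_SIGNATURE, 2, 0xaa, 0xbb, 6, 1, 0xcc };

        printf("%d %d\n",
               parse_detached_sig(good, sizeof(good)),   /*  0: just a signature */
               parse_detached_sig(bad, sizeof(bad)));    /* -1: trailing packet  */
        return 0;
    }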
diff --git a/poky/meta/recipes-devtools/rpm/files/CVE-2021-3521.patch b/poky/meta/recipes-devtools/rpm/files/CVE-2021-3521.patch
new file mode 100644
index 0000000000..cb9e9842fe
--- /dev/null
+++ b/poky/meta/recipes-devtools/rpm/files/CVE-2021-3521.patch
@@ -0,0 +1,330 @@
+From bd36c5dc9fb6d90c46fbfed8c2d67516fc571ec8 Mon Sep 17 00:00:00 2001
+From: Panu Matilainen <pmatilai@redhat.com>
+Date: Thu, 30 Sep 2021 09:59:30 +0300
+Subject: [PATCH] Validate and require subkey binding signatures on PGP public
+ keys
+
+All subkeys must be followed by a binding signature by the primary key
+as per the OpenPGP RFC, enforce the presence and validity in the parser.
+
+The implementation is as kludgey as they come to work around our
+simple-minded parser structure without touching API, to maximise
+backportability. Store all the raw packets internally as we decode them
+to be able to access previous elements at will, needed to validate ordering
+and access the actual data. Add testcases for manipulated keys whose
+import previously would succeed.
+
+Depends on the two previous commits:
+7b399fcb8f52566e6f3b4327197a85facd08db91 and
+236b802a4aa48711823a191d1b7f753c82a89ec5
+
+CVE: CVE-2021-3521
+Upstream-Status: Backport [https://github.com/rpm-software-management/rpm/commit/bd36c5dc9fb6d90c46fbfed8c2d67516fc571ec8]
+Comment: Hunk refreshed
+Signed-off-by: Riyaz Khan <Riyaz.Khan@kpit.com>
+
+Fixes CVE-2021-3521.
+---
+ rpmio/rpmpgp.c | 98 +++++++++++++++++--
+ tests/Makefile.am | 3 +
+ tests/data/keys/CVE-2021-3521-badbind.asc | 25 +++++
+ .../data/keys/CVE-2021-3521-nosubsig-last.asc | 25 +++++
+ tests/data/keys/CVE-2021-3521-nosubsig.asc | 37 +++++++
+ tests/rpmsigdig.at | 28 ++++++
+ 6 files changed, 209 insertions(+), 7 deletions(-)
+ create mode 100644 tests/data/keys/CVE-2021-3521-badbind.asc
+ create mode 100644 tests/data/keys/CVE-2021-3521-nosubsig-last.asc
+ create mode 100644 tests/data/keys/CVE-2021-3521-nosubsig.asc
+
+diff --git a/rpmio/rpmpgp.c b/rpmio/rpmpgp.c
+index aad7c275c9..d70802ae86 100644
+--- a/rpmio/rpmpgp.c
++++ b/rpmio/rpmpgp.c
+@@ -1004,37 +1004,121 @@ static pgpDigParams pgpDigParamsNew(uint8_t tag)
+ return digp;
+ }
+
++static int hashKey(DIGEST_CTX hash, const struct pgpPkt *pkt, int exptag)
++{
++ int rc = -1;
++ if (pkt->tag == exptag) {
++ uint8_t head[] = {
++ 0x99,
++ (pkt->blen >> 8),
++ (pkt->blen ),
++ };
++
++ rpmDigestUpdate(hash, head, 3);
++ rpmDigestUpdate(hash, pkt->body, pkt->blen);
++ rc = 0;
++ }
++ return rc;
++}
++
++static int pgpVerifySelf(pgpDigParams key, pgpDigParams selfsig,
++ const struct pgpPkt *all, int i)
++{
++ int rc = -1;
++ DIGEST_CTX hash = NULL;
++
++ switch (selfsig->sigtype) {
++ case PGPSIGTYPE_SUBKEY_BINDING:
++ hash = rpmDigestInit(selfsig->hash_algo, 0);
++ if (hash) {
++ rc = hashKey(hash, &all[0], PGPTAG_PUBLIC_KEY);
++ if (!rc)
++ rc = hashKey(hash, &all[i-1], PGPTAG_PUBLIC_SUBKEY);
++ }
++ break;
++ default:
++ /* ignore types we can't handle */
++ rc = 0;
++ break;
++ }
++
++ if (hash && rc == 0)
++ rc = pgpVerifySignature(key, selfsig, hash);
++
++ rpmDigestFinal(hash, NULL, NULL, 0);
++
++ return rc;
++}
++
+ int pgpPrtParams(const uint8_t * pkts, size_t pktlen, unsigned int pkttype,
+ pgpDigParams * ret)
+ {
+ const uint8_t *p = pkts;
+ const uint8_t *pend = pkts + pktlen;
+ pgpDigParams digp = NULL;
+- struct pgpPkt pkt;
++ pgpDigParams selfsig = NULL;
++ int i = 0;
++ int alloced = 16; /* plenty for normal cases */
++ struct pgpPkt *all = xmalloc(alloced * sizeof(*all));
+ int rc = -1; /* assume failure */
++ int expect = 0;
++ int prevtag = 0;
+
+ while (p < pend) {
+- if (decodePkt(p, (pend - p), &pkt))
++ struct pgpPkt *pkt = &all[i];
++ if (decodePkt(p, (pend - p), pkt))
+ break;
+
+ if (digp == NULL) {
+- if (pkttype && pkt.tag != pkttype) {
++ if (pkttype && pkt->tag != pkttype) {
+ break;
+ } else {
+- digp = pgpDigParamsNew(pkt.tag);
++ digp = pgpDigParamsNew(pkt->tag);
+ }
+ }
+
+- if (pgpPrtPkt(&pkt, digp))
++ if (expect) {
++ if (pkt->tag != expect)
++ break;
++ selfsig = pgpDigParamsNew(pkt->tag);
++ }
++
++ if (pgpPrtPkt(pkt, selfsig ? selfsig : digp))
+ break;
+
+- p += (pkt.body - pkt.head) + pkt.blen;
++ if (selfsig) {
++ /* subkeys must be followed by binding signature */
++ if (prevtag == PGPTAG_PUBLIC_SUBKEY) {
++ if (selfsig->sigtype != PGPSIGTYPE_SUBKEY_BINDING)
++ break;
++ }
++
++ int xx = pgpVerifySelf(digp, selfsig, all, i);
++
++ selfsig = pgpDigParamsFree(selfsig);
++ if (xx)
++ break;
++ expect = 0;
++ }
++
++ if (pkt->tag == PGPTAG_PUBLIC_SUBKEY)
++ expect = PGPTAG_SIGNATURE;
++ prevtag = pkt->tag;
++
++ i++;
++ p += (pkt->body - pkt->head) + pkt->blen;
+ if (pkttype == PGPTAG_SIGNATURE)
+ break;
++
++ if (alloced <= i) {
++ alloced *= 2;
++ all = xrealloc(all, alloced * sizeof(*all));
++ }
+ }
+
+- rc = (digp && (p == pend)) ? 0 : -1;
++ rc = (digp && (p == pend) && expect == 0) ? 0 : -1;
+
++ free(all);
+ if (ret && rc == 0) {
+ *ret = digp;
+ } else {
+diff --git a/tests/Makefile.am b/tests/Makefile.am
+index b4a2e2e1ce..bc535d2833 100644
+--- a/tests/Makefile.am
++++ b/tests/Makefile.am
+@@ -87,6 +87,9 @@ EXTRA_DIST += data/SPECS/hello-config-buildid.spec
+ EXTRA_DIST += data/SPECS/hello-cd.spec
+ EXTRA_DIST += data/keys/rpm.org-rsa-2048-test.pub
+ EXTRA_DIST += data/keys/rpm.org-rsa-2048-test.secret
++EXTRA_DIST += data/keys/CVE-2021-3521-badbind.asc
++EXTRA_DIST += data/keys/CVE-2021-3521-nosubsig.asc
++EXTRA_DIST += data/keys/CVE-2021-3521-nosubsig-last.asc
+ EXTRA_DIST += data/macros.testfile
+
+ # testsuite voodoo
+diff --git a/tests/data/keys/CVE-2021-3521-badbind.asc b/tests/data/keys/CVE-2021-3521-badbind.asc
+new file mode 100644
+index 0000000000..aea00f9d7a
+--- /dev/null
++++ b/tests/data/keys/CVE-2021-3521-badbind.asc
+@@ -0,0 +1,25 @@
++-----BEGIN PGP PUBLIC KEY BLOCK-----
++Version: rpm-4.17.90 (NSS-3)
++
++mQENBFjmORgBCAC7TMEk6wnjSs8Dr4yqSScWdU2pjcqrkTxuzdWvowcIUPZI0w/g
++HkRqGd4apjvY2V15kjL10gk3QhFP3pZ/9p7zh8o8NHX7aGdSGDK7NOq1eFaErPRY
++91LW9RiZ0lbOjXEzIL0KHxUiTQEmdXJT43DJMFPyW9fkCWg0OltiX618FUdWWfI8
++eySdLur1utnqBvdEbCUvWK2RX3vQZQdvEBODnNk2pxqTyV0w6VPQ96W++lF/5Aas
++7rUv3HIyIXxIggc8FRrnH+y9XvvHDonhTIlGnYZN4ubm9i4y3gOkrZlGTrEw7elQ
++1QeMyG2QQEbze8YjpTm4iLABCBrRfPRaQpwrABEBAAG0IXJwbS5vcmcgUlNBIHRl
++c3RrZXkgPHJzYUBycG0ub3JnPokBNwQTAQgAIQUCWOY5GAIbAwULCQgHAgYVCAkK
++CwIEFgIDAQIeAQIXgAAKCRBDRFkeGWTF/MxxCACnjqFL+MmPh9W9JQKT2DcLbBzf
++Cqo6wcEBoCOcwgRSk8dSikhARoteoa55JRJhuMyeKhhEAogE9HRmCPFdjezFTwgB
++BDVBpO2dZ023mLXDVCYX3S8pShOgCP6Tn4wqCnYeAdLcGg106N4xcmgtcssJE+Pr
++XzTZksbZsrTVEmL/Ym+R5w5jBfFnGk7Yw7ndwfQsfNXQb5AZynClFxnX546lcyZX
++fEx3/e6ezw57WNOUK6WT+8b+EGovPkbetK/rGxNXuWaP6X4A/QUm8O98nCuHYFQq
+++mvNdsCBqGf7mhaRGtpHk/JgCn5rFvArMDqLVrR9hX0LdCSsH7EGE+bR3r7wuQEN
++BFjmORgBCACk+vDZrIXQuFXEYToZVwb2attzbbJJCqD71vmZTLsW0QxuPKRgbcYY
++zp4K4lVBnHhFrF8MOUOxJ7kQWIJZMZFt+BDcptCYurbD2H4W2xvnWViiC+LzCMzz
++iMJT6165uefL4JHTDPxC2fFiM9yrc72LmylJNkM/vepT128J5Qv0gRUaQbHiQuS6
++Dm/+WRnUfx3i89SV4mnBxb/Ta93GVqoOciWwzWSnwEnWYAvOb95JL4U7c5J5f/+c
++KnQDHsW7sIiIdscsWzvgf6qs2Ra1Zrt7Fdk4+ZS2f/adagLhDO1C24sXf5XfMk5m
++L0OGwZSr9m5s17VXxfspgU5ugc8kBJfzABEBAAE=
++=WCfs
++-----END PGP PUBLIC KEY BLOCK-----
++
+diff --git a/tests/data/keys/CVE-2021-3521-nosubsig-last.asc b/tests/data/keys/CVE-2021-3521-nosubsig-last.asc
+new file mode 100644
+index 0000000000..aea00f9d7a
+--- /dev/null
++++ b/tests/data/keys/CVE-2021-3521-nosubsig-last.asc
+@@ -0,0 +1,25 @@
++-----BEGIN PGP PUBLIC KEY BLOCK-----
++Version: rpm-4.17.90 (NSS-3)
++
++mQENBFjmORgBCAC7TMEk6wnjSs8Dr4yqSScWdU2pjcqrkTxuzdWvowcIUPZI0w/g
++HkRqGd4apjvY2V15kjL10gk3QhFP3pZ/9p7zh8o8NHX7aGdSGDK7NOq1eFaErPRY
++91LW9RiZ0lbOjXEzIL0KHxUiTQEmdXJT43DJMFPyW9fkCWg0OltiX618FUdWWfI8
++eySdLur1utnqBvdEbCUvWK2RX3vQZQdvEBODnNk2pxqTyV0w6VPQ96W++lF/5Aas
++7rUv3HIyIXxIggc8FRrnH+y9XvvHDonhTIlGnYZN4ubm9i4y3gOkrZlGTrEw7elQ
++1QeMyG2QQEbze8YjpTm4iLABCBrRfPRaQpwrABEBAAG0IXJwbS5vcmcgUlNBIHRl
++c3RrZXkgPHJzYUBycG0ub3JnPokBNwQTAQgAIQUCWOY5GAIbAwULCQgHAgYVCAkK
++CwIEFgIDAQIeAQIXgAAKCRBDRFkeGWTF/MxxCACnjqFL+MmPh9W9JQKT2DcLbBzf
++Cqo6wcEBoCOcwgRSk8dSikhARoteoa55JRJhuMyeKhhEAogE9HRmCPFdjezFTwgB
++BDVBpO2dZ023mLXDVCYX3S8pShOgCP6Tn4wqCnYeAdLcGg106N4xcmgtcssJE+Pr
++XzTZksbZsrTVEmL/Ym+R5w5jBfFnGk7Yw7ndwfQsfNXQb5AZynClFxnX546lcyZX
++fEx3/e6ezw57WNOUK6WT+8b+EGovPkbetK/rGxNXuWaP6X4A/QUm8O98nCuHYFQq
+++mvNdsCBqGf7mhaRGtpHk/JgCn5rFvArMDqLVrR9hX0LdCSsH7EGE+bR3r7wuQEN
++BFjmORgBCACk+vDZrIXQuFXEYToZVwb2attzbbJJCqD71vmZTLsW0QxuPKRgbcYY
++zp4K4lVBnHhFrF8MOUOxJ7kQWIJZMZFt+BDcptCYurbD2H4W2xvnWViiC+LzCMzz
++iMJT6165uefL4JHTDPxC2fFiM9yrc72LmylJNkM/vepT128J5Qv0gRUaQbHiQuS6
++Dm/+WRnUfx3i89SV4mnBxb/Ta93GVqoOciWwzWSnwEnWYAvOb95JL4U7c5J5f/+c
++KnQDHsW7sIiIdscsWzvgf6qs2Ra1Zrt7Fdk4+ZS2f/adagLhDO1C24sXf5XfMk5m
++L0OGwZSr9m5s17VXxfspgU5ugc8kBJfzABEBAAE=
++=WCfs
++-----END PGP PUBLIC KEY BLOCK-----
++
+diff --git a/tests/data/keys/CVE-2021-3521-nosubsig.asc b/tests/data/keys/CVE-2021-3521-nosubsig.asc
+new file mode 100644
+index 0000000000..3a2e7417f8
+--- /dev/null
++++ b/tests/data/keys/CVE-2021-3521-nosubsig.asc
+@@ -0,0 +1,37 @@
++-----BEGIN PGP PUBLIC KEY BLOCK-----
++Version: rpm-4.17.90 (NSS-3)
++
++mQENBFjmORgBCAC7TMEk6wnjSs8Dr4yqSScWdU2pjcqrkTxuzdWvowcIUPZI0w/g
++HkRqGd4apjvY2V15kjL10gk3QhFP3pZ/9p7zh8o8NHX7aGdSGDK7NOq1eFaErPRY
++91LW9RiZ0lbOjXEzIL0KHxUiTQEmdXJT43DJMFPyW9fkCWg0OltiX618FUdWWfI8
++eySdLur1utnqBvdEbCUvWK2RX3vQZQdvEBODnNk2pxqTyV0w6VPQ96W++lF/5Aas
++7rUv3HIyIXxIggc8FRrnH+y9XvvHDonhTIlGnYZN4ubm9i4y3gOkrZlGTrEw7elQ
++1QeMyG2QQEbze8YjpTm4iLABCBrRfPRaQpwrABEBAAG0IXJwbS5vcmcgUlNBIHRl
++c3RrZXkgPHJzYUBycG0ub3JnPokBNwQTAQgAIQUCWOY5GAIbAwULCQgHAgYVCAkK
++CwIEFgIDAQIeAQIXgAAKCRBDRFkeGWTF/MxxCACnjqFL+MmPh9W9JQKT2DcLbBzf
++Cqo6wcEBoCOcwgRSk8dSikhARoteoa55JRJhuMyeKhhEAogE9HRmCPFdjezFTwgB
++BDVBpO2dZ023mLXDVCYX3S8pShOgCP6Tn4wqCnYeAdLcGg106N4xcmgtcssJE+Pr
++XzTZksbZsrTVEmL/Ym+R5w5jBfFnGk7Yw7ndwfQsfNXQb5AZynClFxnX546lcyZX
++fEx3/e6ezw57WNOUK6WT+8b+EGovPkbetK/rGxNXuWaP6X4A/QUm8O98nCuHYFQq
+++mvNdsCBqGf7mhaRGtpHk/JgCn5rFvArMDqLVrR9hX0LdCSsH7EGE+bR3r7wuQEN
++BFjmORgBCACk+vDZrIXQuFXEYToZVwb2attzbbJJCqD71vmZTLsW0QxuPKRgbcYY
++zp4K4lVBnHhFrF8MOUOxJ7kQWIJZMZFt+BDcptCYurbD2H4W2xvnWViiC+LzCMzz
++iMJT6165uefL4JHTDPxC2fFiM9yrc72LmylJNkM/vepT128J5Qv0gRUaQbHiQuS6
++Dm/+WRnUfx3i89SV4mnBxb/Ta93GVqoOciWwzWSnwEnWYAvOb95JL4U7c5J5f/+c
++KnQDHsW7sIiIdscsWzvgf6qs2Ra1Zrt7Fdk4+ZS2f/adagLhDO1C24sXf5XfMk5m
++L0OGwZSr9m5s17VXxfspgU5ugc8kBJfzABEBAAG5AQ0EWOY5GAEIAKT68NmshdC4
++VcRhOhlXBvZq23NtskkKoPvW+ZlMuxbRDG48pGBtxhjOngriVUGceEWsXww5Q7En
++uRBYglkxkW34ENym0Ji6tsPYfhbbG+dZWKIL4vMIzPOIwlPrXrm558vgkdMM/ELZ
++8WIz3KtzvYubKUk2Qz+96lPXbwnlC/SBFRpBseJC5LoOb/5ZGdR/HeLz1JXiacHF
++v9Nr3cZWqg5yJbDNZKfASdZgC85v3kkvhTtzknl//5wqdAMexbuwiIh2xyxbO+B/
++qqzZFrVmu3sV2Tj5lLZ/9p1qAuEM7ULbixd/ld8yTmYvQ4bBlKv2bmzXtVfF+ymB
++Tm6BzyQEl/MAEQEAAYkBHwQYAQgACQUCWOY5GAIbDAAKCRBDRFkeGWTF/PANB/9j
++mifmj6z/EPe0PJFhrpISt9PjiUQCt0IPtiL5zKAkWjHePIzyi+0kCTBF6DDLFxos
++3vN4bWnVKT1kBhZAQlPqpJTg+m74JUYeDGCdNx9SK7oRllATqyu+5rncgxjWVPnQ
++zu/HRPlWJwcVFYEVXYL8xzfantwQTqefjmcRmBRdA2XJITK+hGWwAmrqAWx+q5xX
++Pa8wkNMxVzNS2rUKO9SoVuJ/wlUvfoShkJ/VJ5HDp3qzUqncADfdGN35TDzscngQ
++gHvnMwVBfYfSCABV1hNByoZcc/kxkrWMmsd/EnIyLd1Q1baKqc3cEDuC6E6/o4yJ
++E4XX4jtDmdZPreZALsiB
++=rRop
++-----END PGP PUBLIC KEY BLOCK-----
++
+diff --git a/tests/rpmsigdig.at b/tests/rpmsigdig.at
+index 0f8f2b4884..c8b9f139e1 100644
+--- a/tests/rpmsigdig.at
++++ b/tests/rpmsigdig.at
+@@ -240,6 +240,34 @@ gpg(185e6146f00650f8) = 4:185e6146f00650f8-58e63918
+ [])
+ AT_CLEANUP
+
++AT_SETUP([rpmkeys --import invalid keys])
++AT_KEYWORDS([rpmkeys import])
++RPMDB_INIT
++
++AT_CHECK([
++runroot rpmkeys --import /data/keys/CVE-2021-3521-badbind.asc
++],
++[1],
++[],
++[error: /data/keys/CVE-2021-3521-badbind.asc: key 1 import failed.]
++)
++AT_CHECK([
++runroot rpmkeys --import /data/keys/CVE-2021-3521-nosubsig.asc
++],
++[1],
++[],
++[error: /data/keys/CVE-2021-3521-nosubsig.asc: key 1 import failed.]
++)
++
++AT_CHECK([
++runroot rpmkeys --import /data/keys/CVE-2021-3521-nosubsig-last.asc
++],
++[1],
++[],
++[error: /data/keys/CVE-2021-3521-nosubsig-last.asc: key 1 import failed.]
++)
++AT_CLEANUP
++
+ # ------------------------------
+ # Test pre-built package verification
+ AT_SETUP([rpmkeys -K <signed> 1])
+
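Editor's note: for reference, the framing hashKey() feeds to the digest follows the OpenPGP v4 convention: each key packet is hashed behind a 3-byte prefix of 0x99 plus a big-endian 16-bit body length, primary key first and then the subkey, before the binding signature is verified. A standalone sketch with an invented digest_update() byte counter in place of rpmDigestUpdate():

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Stand-in for rpmDigestUpdate(): here we only count the bytes fed in. */
    static void digest_update(size_t *fed, const uint8_t *buf, size_t len)
    {
        (void)buf;
        *fed += len;
    }

    static void hash_key_packet(size_t *fed, const uint8_t *body, uint16_t blen)
    {
        uint8_t head[3] = { 0x99, (uint8_t)(blen >> 8), (uint8_t)blen };

        digest_update(fed, head, sizeof(head));
        digest_update(fed, body, blen);
    }

    int main(void)
    {
        uint8_t primary[269] = { 0 };   /* pretend primary public key packet body */
        uint8_t subkey[269]  = { 0 };   /* pretend public subkey packet body      */
        size_t fed = 0;

        /* Subkey binding signatures hash the primary key, then the subkey. */
        hash_key_packet(&fed, primary, (uint16_t)sizeof(primary));
        hash_key_packet(&fed, subkey, (uint16_t)sizeof(subkey));
        printf("bytes hashed: %zu\n", fed);   /* 2 * (3 + 269) = 544 */
        return 0;
    }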
diff --git a/poky/meta/recipes-devtools/rpm/rpm_4.14.2.1.bb b/poky/meta/recipes-devtools/rpm/rpm_4.14.2.1.bb
index 376021d913..4d605c8501 100644
--- a/poky/meta/recipes-devtools/rpm/rpm_4.14.2.1.bb
+++ b/poky/meta/recipes-devtools/rpm/rpm_4.14.2.1.bb
@@ -47,6 +47,10 @@ SRC_URI = "git://github.com/rpm-software-management/rpm;branch=rpm-4.14.x;protoc
file://0001-rpmio-Fix-lzopen_internal-mode-parsing-when-Tn-is-us.patch \
file://CVE-2021-3421.patch \
file://CVE-2021-20266.patch \
+ file://CVE-2021-3521-01.patch \
+ file://CVE-2021-3521-02.patch \
+ file://CVE-2021-3521-03.patch \
+ file://CVE-2021-3521.patch \
"
PE = "1"
diff --git a/poky/meta/recipes-devtools/rsync/files/CVE-2022-29154.patch b/poky/meta/recipes-devtools/rsync/files/CVE-2022-29154.patch
new file mode 100644
index 0000000000..61e4e03254
--- /dev/null
+++ b/poky/meta/recipes-devtools/rsync/files/CVE-2022-29154.patch
@@ -0,0 +1,334 @@
+From b7231c7d02cfb65d291af74ff66e7d8c507ee871 Mon Sep 17 00:00:00 2001
+From: Wayne Davison <wayne@opencoder.net>
+Date: Sun, 31 Jul 2022 16:55:34 -0700
+Subject: [PATCH] Some extra file-list safety checks.
+
+CVE-2022-29154 rsync: remote arbitrary files write inside the
+
+Upstream-Status: Backport from [https://git.samba.org/?p=rsync.git;a=patch;h=b7231c7d02cfb65d291af74ff66e7d8c507ee871]
+CVE:CVE-2022-29154
+Signed-off-by: Vivek Kumbhar <vkumbhar@mvista.com>
+---
+ exclude.c | 127 ++++++++++++++++++++++++++++++++++++++++++++++++++++-
+ flist.c | 17 ++++++-
+ io.c | 4 ++
+ main.c | 7 ++-
+ receiver.c | 11 +++--
+ 5 files changed, 158 insertions(+), 8 deletions(-)
+
+diff --git a/exclude.c b/exclude.c
+index 7989fb3..e146e96 100644
+--- a/exclude.c
++++ b/exclude.c
+@@ -26,16 +26,21 @@ extern int am_server;
+ extern int am_sender;
+ extern int eol_nulls;
+ extern int io_error;
++extern int xfer_dirs;
++extern int recurse;
+ extern int local_server;
+ extern int prune_empty_dirs;
+ extern int ignore_perishable;
++extern int relative_paths;
+ extern int delete_mode;
+ extern int delete_excluded;
+ extern int cvs_exclude;
+ extern int sanitize_paths;
+ extern int protocol_version;
++extern int list_only;
+ extern int module_id;
+
++extern char *filesfrom_host;
+ extern char curr_dir[MAXPATHLEN];
+ extern unsigned int curr_dir_len;
+ extern unsigned int module_dirlen;
+@@ -43,8 +48,10 @@ extern unsigned int module_dirlen;
+ filter_rule_list filter_list = { .debug_type = "" };
+ filter_rule_list cvs_filter_list = { .debug_type = " [global CVS]" };
+ filter_rule_list daemon_filter_list = { .debug_type = " [daemon]" };
++filter_rule_list implied_filter_list = { .debug_type = " [implied]" };
+
+ int saw_xattr_filter = 0;
++int trust_sender_filter = 0;
+
+ /* Need room enough for ":MODS " prefix plus some room to grow. */
+ #define MAX_RULE_PREFIX (16)
+@@ -293,6 +300,123 @@ static void add_rule(filter_rule_list *listp, const char *pat, unsigned int pat_
+ }
+ }
+
++/* Each arg the client sends to the remote sender turns into an implied include
++ * that the receiver uses to validate the file list from the sender. */
++void add_implied_include(const char *arg)
++{
++ filter_rule *rule;
++ int arg_len, saw_wild = 0, backslash_cnt = 0;
++ int slash_cnt = 1; /* We know we're adding a leading slash. */
++ const char *cp;
++ char *p;
++ if (relative_paths) {
++ cp = strstr(arg, "/./");
++ if (cp)
++ arg = cp+3;
++ } else {
++ if ((cp = strrchr(arg, '/')) != NULL)
++ arg = cp + 1;
++ }
++ arg_len = strlen(arg);
++ if (arg_len) {
++ if (strpbrk(arg, "*[?")) {
++ /* We need to add room to escape backslashes if wildcard chars are present. */
++ cp = arg;
++ while ((cp = strchr(cp, '\\')) != NULL) {
++ arg_len++;
++ cp++;
++ }
++ saw_wild = 1;
++ }
++ arg_len++; /* Leave room for the prefixed slash */
++ rule = new0(filter_rule);
++ if (!implied_filter_list.head)
++ implied_filter_list.head = implied_filter_list.tail = rule;
++ else {
++ rule->next = implied_filter_list.head;
++ implied_filter_list.head = rule;
++ }
++ rule->rflags = FILTRULE_INCLUDE + (saw_wild ? FILTRULE_WILD : 0);
++ p = rule->pattern = new_array(char, arg_len + 1);
++ *p++ = '/';
++ cp = arg;
++ while (*cp) {
++ switch (*cp) {
++ case '\\':
++ backslash_cnt++;
++ if (saw_wild)
++ *p++ = '\\';
++ *p++ = *cp++;
++ break;
++ case '/':
++ if (p[-1] == '/') /* This is safe because of the initial slash. */
++ break;
++ if (relative_paths) {
++ filter_rule const *ent;
++ int found = 0;
++ *p = '\0';
++ for (ent = implied_filter_list.head; ent; ent = ent->next) {
++ if (ent != rule && strcmp(ent->pattern, rule->pattern) == 0)
++ found = 1;
++ }
++ if (!found) {
++ filter_rule *R_rule = new0(filter_rule);
++ R_rule->rflags = FILTRULE_INCLUDE + (saw_wild ? FILTRULE_WILD : 0);
++ R_rule->pattern = strdup(rule->pattern);
++ R_rule->u.slash_cnt = slash_cnt;
++ R_rule->next = implied_filter_list.head;
++ implied_filter_list.head = R_rule;
++ }
++ }
++ slash_cnt++;
++ *p++ = *cp++;
++ break;
++ default:
++ *p++ = *cp++;
++ break;
++ }
++ }
++ *p = '\0';
++ rule->u.slash_cnt = slash_cnt;
++ arg = (const char *)rule->pattern;
++ }
++
++ if (recurse || xfer_dirs) {
++ /* Now create a rule with an added "/" & "**" or "*" at the end */
++ rule = new0(filter_rule);
++ if (recurse)
++ rule->rflags = FILTRULE_INCLUDE | FILTRULE_WILD | FILTRULE_WILD2;
++ else
++ rule->rflags = FILTRULE_INCLUDE | FILTRULE_WILD;
++ /* A +4 in the len leaves enough room for / * * \0 or / * \0 \0 */
++ if (!saw_wild && backslash_cnt) {
++ /* We are appending a wildcard, so now the backslashes need to be escaped. */
++ p = rule->pattern = new_array(char, arg_len + backslash_cnt + 3 + 1);
++ cp = arg;
++ while (*cp) {
++ if (*cp == '\\')
++ *p++ = '\\';
++ *p++ = *cp++;
++ }
++ } else {
++ p = rule->pattern = new_array(char, arg_len + 3 + 1);
++ if (arg_len) {
++ memcpy(p, arg, arg_len);
++ p += arg_len;
++ }
++ }
++ if (p[-1] != '/')
++ *p++ = '/';
++ *p++ = '*';
++ if (recurse)
++ *p++ = '*';
++ *p = '\0';
++ rule->u.slash_cnt = slash_cnt + 1;
++ rule->next = implied_filter_list.head;
++ implied_filter_list.head = rule;
++ }
++}
++
+ /* This frees any non-inherited items, leaving just inherited items on the list. */
+ static void pop_filter_list(filter_rule_list *listp)
+ {
+@@ -721,7 +845,7 @@ static void report_filter_result(enum logcode code, char const *name,
+ : name_flags & NAME_IS_DIR ? "directory"
+ : "file";
+ rprintf(code, "[%s] %sing %s %s because of pattern %s%s%s\n",
+- w, actions[*w!='s'][!(ent->rflags & FILTRULE_INCLUDE)],
++ w, actions[*w=='g'][!(ent->rflags & FILTRULE_INCLUDE)],
+ t, name, ent->pattern,
+ ent->rflags & FILTRULE_DIRECTORY ? "/" : "", type);
+ }
+@@ -894,6 +1018,7 @@ static filter_rule *parse_rule_tok(const char **rulestr_ptr,
+ }
+ switch (ch) {
+ case ':':
++ trust_sender_filter = 1;
+ rule->rflags |= FILTRULE_PERDIR_MERGE
+ | FILTRULE_FINISH_SETUP;
+ /* FALL THROUGH */
+diff --git a/flist.c b/flist.c
+index 499440c..630d685 100644
+--- a/flist.c
++++ b/flist.c
+@@ -70,6 +70,7 @@ extern int need_unsorted_flist;
+ extern int sender_symlink_iconv;
+ extern int output_needs_newline;
+ extern int sender_keeps_checksum;
++extern int trust_sender_filter;
+ extern int unsort_ndx;
+ extern uid_t our_uid;
+ extern struct stats stats;
+@@ -80,8 +81,7 @@ extern char curr_dir[MAXPATHLEN];
+
+ extern struct chmod_mode_struct *chmod_modes;
+
+-extern filter_rule_list filter_list;
+-extern filter_rule_list daemon_filter_list;
++extern filter_rule_list filter_list, implied_filter_list, daemon_filter_list;
+
+ #ifdef ICONV_OPTION
+ extern int filesfrom_convert;
+@@ -904,6 +904,19 @@ static struct file_struct *recv_file_entry(int f, struct file_list *flist, int x
+ exit_cleanup(RERR_UNSUPPORTED);
+ }
+
++ if (*thisname != '.' || thisname[1] != '\0') {
++ int filt_flags = S_ISDIR(mode) ? NAME_IS_DIR : NAME_IS_FILE;
++ if (!trust_sender_filter /* a per-dir filter rule means we must trust the sender's filtering */
++ && filter_list.head && check_filter(&filter_list, FINFO, thisname, filt_flags) < 0) {
++ rprintf(FERROR, "ERROR: rejecting excluded file-list name: %s\n", thisname);
++ exit_cleanup(RERR_PROTOCOL);
++ }
++ if (implied_filter_list.head && check_filter(&implied_filter_list, FINFO, thisname, filt_flags) <= 0) {
++ rprintf(FERROR, "ERROR: rejecting unrequested file-list name: %s\n", thisname);
++ exit_cleanup(RERR_PROTOCOL);
++ }
++ }
++
+ if (inc_recurse && S_ISDIR(mode)) {
+ if (one_file_system) {
+ /* Room to save the dir's device for -x */
+diff --git a/io.c b/io.c
+index c04dbd5..698a7da 100644
+--- a/io.c
++++ b/io.c
+@@ -415,6 +415,7 @@ static void forward_filesfrom_data(void)
+ while (s != eob) {
+ if (*s++ == '\0') {
+ ff_xb.len = s - sob - 1;
++ add_implied_include(sob);
+ if (iconvbufs(ic_send, &ff_xb, &iobuf.out, flags) < 0)
+ exit_cleanup(RERR_PROTOCOL); /* impossible? */
+ write_buf(iobuf.out_fd, s-1, 1); /* Send the '\0'. */
+@@ -446,9 +447,12 @@ static void forward_filesfrom_data(void)
+ char *f = ff_xb.buf + ff_xb.pos;
+ char *t = ff_xb.buf;
+ char *eob = f + len;
++ char *cur = t;
+ /* Eliminate any multi-'\0' runs. */
+ while (f != eob) {
+ if (!(*t++ = *f++)) {
++ add_implied_include(cur);
++ cur = t;
+ while (f != eob && *f == '\0')
+ f++;
+ }
+diff --git a/main.c b/main.c
+index ee9630f..6ec56e7 100644
+--- a/main.c
++++ b/main.c
+@@ -78,6 +78,7 @@ extern BOOL flist_receiving_enabled;
+ extern BOOL shutting_down;
+ extern int backup_dir_len;
+ extern int basis_dir_cnt;
++extern int trust_sender_filter;
+ extern struct stats stats;
+ extern char *stdout_format;
+ extern char *logfile_format;
+@@ -93,7 +94,7 @@ extern char curr_dir[MAXPATHLEN];
+ extern char backup_dir_buf[MAXPATHLEN];
+ extern char *basis_dir[MAX_BASIS_DIRS+1];
+ extern struct file_list *first_flist;
+-extern filter_rule_list daemon_filter_list;
++extern filter_rule_list daemon_filter_list, implied_filter_list;
+
+ uid_t our_uid;
+ gid_t our_gid;
+@@ -534,6 +535,7 @@ static pid_t do_cmd(char *cmd, char *machine, char *user, char **remote_argv, in
+ #ifdef ICONV_CONST
+ setup_iconv();
+ #endif
++ trust_sender_filter = 1;
+ } else if (local_server) {
+ /* If the user didn't request --[no-]whole-file, force
+ * it on, but only if we're not batch processing. */
+@@ -1358,6 +1360,8 @@ static int start_client(int argc, char *argv[])
+ char *dummy_host;
+ int dummy_port = rsync_port;
+ int i;
++ if (filesfrom_fd < 0)
++ add_implied_include(remote_argv[0]);
+ /* For remote source, any extra source args must have either
+ * the same hostname or an empty hostname. */
+ for (i = 1; i < remote_argc; i++) {
+@@ -1381,6 +1385,7 @@ static int start_client(int argc, char *argv[])
+ if (!rsync_port && !*arg) /* Turn an empty arg into a dot dir. */
+ arg = ".";
+ remote_argv[i] = arg;
++ add_implied_include(arg);
+ }
+ }
+
+diff --git a/receiver.c b/receiver.c
+index d6a48f1..c0aa893 100644
+--- a/receiver.c
++++ b/receiver.c
+@@ -577,10 +577,13 @@ int recv_files(int f_in, int f_out, char *local_name)
+ if (DEBUG_GTE(RECV, 1))
+ rprintf(FINFO, "recv_files(%s)\n", fname);
+
+- if (daemon_filter_list.head && (*fname != '.' || fname[1] != '\0')
+- && check_filter(&daemon_filter_list, FLOG, fname, 0) < 0) {
+- rprintf(FERROR, "attempt to hack rsync failed.\n");
+- exit_cleanup(RERR_PROTOCOL);
++ if (daemon_filter_list.head && (*fname != '.' || fname[1] != '\0')) {
++ int filt_flags = S_ISDIR(file->mode) ? NAME_IS_DIR : NAME_IS_FILE;
++ if (check_filter(&daemon_filter_list, FLOG, fname, filt_flags) < 0) {
++ rprintf(FERROR, "ERROR: rejecting file transfer request for daemon excluded file: %s\n",
++ fname);
++ exit_cleanup(RERR_PROTOCOL);
++ }
+ }
+
+ #ifdef SUPPORT_XATTRS
+--
+2.30.2
+
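
As background for the rsync hunks above: the client turns every requested argument into an implied include, and recv_file_entry() then rejects any sender-supplied name that matches none of them. The following standalone sketch illustrates only that validation idea; the pattern list, the fnmatch()-based matching, and all names are assumptions for the example, not rsync's actual filter engine.

    /* requested_names.c - standalone sketch (not rsync code): reject file-list
     * names that match none of the patterns implied by the requested arguments.
     * Build: cc -o requested_names requested_names.c
     */
    #include <fnmatch.h>
    #include <stdio.h>

    /* Patterns implied by a hypothetical "rsync host:/src host:/notes.txt dest/". */
    static const char *implied[] = { "/src", "/src/*", "/notes.txt", NULL };

    /* Return 1 if name matches at least one implied pattern, else 0. */
    static int name_was_requested(const char *name)
    {
        for (int i = 0; implied[i] != NULL; i++) {
            if (fnmatch(implied[i], name, FNM_PATHNAME) == 0)
                return 1;
        }
        return 0;
    }

    int main(void)
    {
        const char *from_sender[] = { "/src/a.c", "/etc/passwd", "/notes.txt" };

        for (int i = 0; i < 3; i++) {
            if (name_was_requested(from_sender[i]))
                printf("accept   %s\n", from_sender[i]);
            else
                printf("rejecting unrequested file-list name: %s\n", from_sender[i]);
        }
        return 0;
    }
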
diff --git a/poky/meta/recipes-devtools/rsync/rsync_3.1.3.bb b/poky/meta/recipes-devtools/rsync/rsync_3.1.3.bb
index c743e3f75b..a5c20dee34 100644
--- a/poky/meta/recipes-devtools/rsync/rsync_3.1.3.bb
+++ b/poky/meta/recipes-devtools/rsync/rsync_3.1.3.bb
@@ -16,6 +16,7 @@ SRC_URI = "https://download.samba.org/pub/${BPN}/src/${BP}.tar.gz \
file://CVE-2016-9841.patch \
file://CVE-2016-9842.patch \
file://CVE-2016-9843.patch \
+ file://CVE-2022-29154.patch \
"
SRC_URI[md5sum] = "1581a588fde9d89f6bc6201e8129afaf"
diff --git a/poky/meta/recipes-devtools/ruby/ruby/CVE-2023-28756.patch b/poky/meta/recipes-devtools/ruby/ruby/CVE-2023-28756.patch
new file mode 100644
index 0000000000..c25a147d36
--- /dev/null
+++ b/poky/meta/recipes-devtools/ruby/ruby/CVE-2023-28756.patch
@@ -0,0 +1,61 @@
+From 957bb7cb81995f26c671afce0ee50a5c660e540e Mon Sep 17 00:00:00 2001
+From: Hiroshi SHIBATA <hsbt@ruby-lang.org>
+Date: Wed, 29 Mar 2023 13:28:25 +0900
+Subject: [PATCH] CVE-2023-28756
+
+CVE: CVE-2023-28756
+Upstream-Status: Backport [https://github.com/ruby/ruby/commit/957bb7cb81995f26c671afce0ee50a5c660e540e]
+
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ lib/time.rb | 6 +++---
+ test/test_time.rb | 9 +++++++++
+ 2 files changed, 12 insertions(+), 3 deletions(-)
+
+diff --git a/lib/time.rb b/lib/time.rb
+index f27bacd..4a86e8e 100644
+--- a/lib/time.rb
++++ b/lib/time.rb
+@@ -501,8 +501,8 @@ class Time
+ (Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)\s+
+ (\d{2,})\s+
+ (\d{2})\s*
+- :\s*(\d{2})\s*
+- (?::\s*(\d{2}))?\s+
++ :\s*(\d{2})
++ (?:\s*:\s*(\d\d))?\s+
+ ([+-]\d{4}|
+ UT|GMT|EST|EDT|CST|CDT|MST|MDT|PST|PDT|[A-IK-Z])/ix =~ date
+ # Since RFC 2822 permit comments, the regexp has no right anchor.
+@@ -717,7 +717,7 @@ class Time
+ #
+ # If self is a UTC time, Z is used as TZD. [+-]hh:mm is used otherwise.
+ #
+- # +fractional_digits+ specifies a number of digits to use for fractional
++ # +fraction_digits+ specifies a number of digits to use for fractional
+ # seconds. Its default value is 0.
+ #
+ # require 'time'
+diff --git a/test/test_time.rb b/test/test_time.rb
+index ca20788..4f11048 100644
+--- a/test/test_time.rb
++++ b/test/test_time.rb
+@@ -62,6 +62,15 @@ class TestTimeExtension < Test::Unit::TestCase # :nodoc:
+ assert_equal(true, t.utc?)
+ end
+
++ def test_rfc2822_nonlinear
++ pre = ->(n) {"0 Feb 00 00 :00" + " " * n}
++ assert_linear_performance([100, 500, 5000, 50_000], pre: pre) do |s|
++ assert_raise(ArgumentError) do
++ Time.rfc2822(s)
++ end
++ end
++ end
++
+ def test_encode_rfc2822
+ t = Time.utc(1)
+ assert_equal("Mon, 01 Jan 0001 00:00:00 -0000", t.rfc2822)
+--
+2.25.1
+
diff --git a/poky/meta/recipes-devtools/ruby/ruby_2.7.6.bb b/poky/meta/recipes-devtools/ruby/ruby_2.7.6.bb
index 3af321a83e..91ffde5fa3 100644
--- a/poky/meta/recipes-devtools/ruby/ruby_2.7.6.bb
+++ b/poky/meta/recipes-devtools/ruby/ruby_2.7.6.bb
@@ -7,6 +7,7 @@ SRC_URI += " \
file://run-ptest \
file://0001-Modify-shebang-of-libexec-y2racc-and-libexec-racc2y.patch \
file://0001-template-Makefile.in-do-not-write-host-cross-cc-item.patch \
+ file://CVE-2023-28756.patch \
"
SRC_URI[md5sum] = "f972fb0cce662966bec10d5c5f32d042"
diff --git a/poky/meta/recipes-extended/bc/bc_1.07.1.bb b/poky/meta/recipes-extended/bc/bc_1.07.1.bb
index ff3e8f4409..8ed10d14c2 100644
--- a/poky/meta/recipes-extended/bc/bc_1.07.1.bb
+++ b/poky/meta/recipes-extended/bc/bc_1.07.1.bb
@@ -32,4 +32,4 @@ do_compile_prepend() {
ALTERNATIVE_${PN} = "bc dc"
ALTERNATIVE_PRIORITY = "100"
-BBCLASSEXTEND = "native"
+BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-extended/ghostscript/ghostscript/check-stack-limits-after-function-evalution.patch b/poky/meta/recipes-extended/ghostscript/ghostscript/check-stack-limits-after-function-evalution.patch
index 722bab4ddb..77eec7d158 100644
--- a/poky/meta/recipes-extended/ghostscript/ghostscript/check-stack-limits-after-function-evalution.patch
+++ b/poky/meta/recipes-extended/ghostscript/ghostscript/check-stack-limits-after-function-evalution.patch
@@ -14,7 +14,7 @@ stack than are available.
To cope, add in stack limit checking to throw an appropriate error when this
happens.
-
+CVE: CVE-2021-45944
Upstream-Status: Backported [https://git.ghostscript.com/?p=ghostpdl.git;a=patch;h=7861fcad13c497728189feafb41cd57b5b50ea25]
Signed-off-by: Minjae Kim <flowergom@gmail.com>
---
diff --git a/poky/meta/recipes-extended/libarchive/libarchive/CVE-2022-26280.patch b/poky/meta/recipes-extended/libarchive/libarchive/CVE-2022-26280.patch
new file mode 100644
index 0000000000..501fcc5848
--- /dev/null
+++ b/poky/meta/recipes-extended/libarchive/libarchive/CVE-2022-26280.patch
@@ -0,0 +1,29 @@
+From cfaa28168a07ea4a53276b63068f94fce37d6aff Mon Sep 17 00:00:00 2001
+From: Tim Kientzle <kientzle@acm.org>
+Date: Thu, 24 Mar 2022 10:35:00 +0100
+Subject: [PATCH] ZIP reader: fix possible out-of-bounds read in
+ zipx_lzma_alone_init()
+
+Fixes #1672
+
+CVE: CVE-2022-26280
+Upstream-Status: Backport [https://github.com/libarchive/libarchive/commit/cfaa28168a07ea4a53276b63068f94fce37d6aff]
+Signed-off-by: Andrej Valek <andrej.valek@siemens.com>
+
+---
+ libarchive/archive_read_support_format_zip.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/libarchive/archive_read_support_format_zip.c b/libarchive/archive_read_support_format_zip.c
+index 38ada70b5..9d6c900b2 100644
+--- a/libarchive/archive_read_support_format_zip.c
++++ b/libarchive/archive_read_support_format_zip.c
+@@ -1667,7 +1667,7 @@ zipx_lzma_alone_init(struct archive_read *a, struct zip *zip)
+ */
+
+ /* Read magic1,magic2,lzma_params from the ZIPX stream. */
+- if((p = __archive_read_ahead(a, 9, NULL)) == NULL) {
++ if(zip->entry_bytes_remaining < 9 || (p = __archive_read_ahead(a, 9, NULL)) == NULL) {
+ archive_set_error(&a->archive, ARCHIVE_ERRNO_FILE_FORMAT,
+ "Truncated lzma data");
+ return (ARCHIVE_FATAL);
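
The one-line libarchive change above declines to read a 9-byte LZMA header once fewer than 9 bytes remain in the ZIP entry. A generic form of that guard, with illustrative names rather than libarchive's API, might look like:

    /* header_guard.c - sketch: refuse to parse a fixed-size header when fewer
     * bytes remain in the entry than the header needs (illustrative names only).
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define HDR_LEN 9   /* magic1 (2) + magic2 (2) + lzma properties (5) */

    /* Copy the header only if the entry still holds at least HDR_LEN bytes. */
    static int read_entry_header(const uint8_t *buf, int64_t bytes_remaining,
                                 uint8_t out[HDR_LEN])
    {
        if (bytes_remaining < HDR_LEN)
            return -1;                      /* truncated entry: fail, do not over-read */
        memcpy(out, buf, HDR_LEN);
        return 0;
    }

    int main(void)
    {
        uint8_t entry[4] = { 0x09, 0x14, 0x05, 0x00 };   /* deliberately too short */
        uint8_t hdr[HDR_LEN];

        if (read_entry_header(entry, (int64_t)sizeof(entry), hdr) < 0)
            puts("Truncated lzma data");    /* same error the patch reports */
        return 0;
    }
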
diff --git a/poky/meta/recipes-extended/libarchive/libarchive/CVE-2022-36227.patch b/poky/meta/recipes-extended/libarchive/libarchive/CVE-2022-36227.patch
new file mode 100644
index 0000000000..980a0e884a
--- /dev/null
+++ b/poky/meta/recipes-extended/libarchive/libarchive/CVE-2022-36227.patch
@@ -0,0 +1,43 @@
+From 6311080bff566fcc5591dadfd78efb41705b717f Mon Sep 17 00:00:00 2001
+From: obiwac <obiwac@gmail.com>
+Date: Fri, 22 Jul 2022 22:41:10 +0200
+Subject: [PATCH] CVE-2022-36227
+
+libarchive: CVE-2022-36227 Handle a `calloc` returning NULL (fixes #1754)
+
+Upstream-Status: Backport [https://github.com/libarchive/libarchive/commit/bff38efe8c110469c5080d387bec62a6ca15b1a5]
+CVE: CVE-2022-36227
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ libarchive/archive_write.c | 8 ++++++++
+ 1 file changed, 8 insertions(+)
+
+diff --git a/libarchive/archive_write.c b/libarchive/archive_write.c
+index 98a55fb..7fe88b6 100644
+--- a/libarchive/archive_write.c
++++ b/libarchive/archive_write.c
+@@ -211,6 +211,10 @@ __archive_write_allocate_filter(struct archive *_a)
+ struct archive_write_filter *f;
+
+ f = calloc(1, sizeof(*f));
++
++ if (f == NULL)
++ return (NULL);
++
+ f->archive = _a;
+ f->state = ARCHIVE_WRITE_FILTER_STATE_NEW;
+ if (a->filter_first == NULL)
+@@ -527,6 +531,10 @@ archive_write_open(struct archive *_a, void *client_data,
+ a->client_data = client_data;
+
+ client_filter = __archive_write_allocate_filter(_a);
++
++ if (client_filter == NULL)
++ return (ARCHIVE_FATAL);
++
+ client_filter->open = archive_write_client_open;
+ client_filter->write = archive_write_client_write;
+ client_filter->close = archive_write_client_close;
+--
+2.25.1
+
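
The CVE-2022-36227 patch above simply checks the results of two previously unchecked allocations. A minimal sketch of the same pattern, using hypothetical structure names rather than libarchive's internals:

    /* checked_alloc.c - sketch: propagate a failed allocation to the caller
     * instead of dereferencing NULL (hypothetical names, not libarchive's API).
     */
    #include <stdio.h>
    #include <stdlib.h>

    struct write_filter {
        int state;
        struct write_filter *next;
    };

    static struct write_filter *allocate_filter(void)
    {
        struct write_filter *f = calloc(1, sizeof(*f));
        if (f == NULL)
            return NULL;                    /* let the caller decide how to fail */
        f->state = 1;
        return f;
    }

    int main(void)
    {
        struct write_filter *f = allocate_filter();
        if (f == NULL) {
            fputs("fatal: cannot allocate filter\n", stderr);
            return 1;                       /* the patch returns ARCHIVE_FATAL here */
        }
        printf("filter state = %d\n", f->state);
        free(f);
        return 0;
    }
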
diff --git a/poky/meta/recipes-extended/libarchive/libarchive_3.4.2.bb b/poky/meta/recipes-extended/libarchive/libarchive_3.4.2.bb
index 7d2e7b711b..582787d3f3 100644
--- a/poky/meta/recipes-extended/libarchive/libarchive_3.4.2.bb
+++ b/poky/meta/recipes-extended/libarchive/libarchive_3.4.2.bb
@@ -39,6 +39,8 @@ SRC_URI = "http://libarchive.org/downloads/libarchive-${PV}.tar.gz \
file://CVE-2021-23177.patch \
file://CVE-2021-31566-01.patch \
file://CVE-2021-31566-02.patch \
+ file://CVE-2022-26280.patch \
+ file://CVE-2022-36227.patch \
"
SRC_URI[md5sum] = "d953ed6b47694dadf0e6042f8f9ff451"
diff --git a/poky/meta/recipes-extended/libtirpc/libtirpc_1.2.6.bb b/poky/meta/recipes-extended/libtirpc/libtirpc_1.2.6.bb
index fe4e30e61f..80151ff83a 100644
--- a/poky/meta/recipes-extended/libtirpc/libtirpc_1.2.6.bb
+++ b/poky/meta/recipes-extended/libtirpc/libtirpc_1.2.6.bb
@@ -22,7 +22,7 @@ inherit autotools pkgconfig
EXTRA_OECONF = "--disable-gssapi"
do_install_append() {
- chown root:root ${D}${sysconfdir}/netconfig
+ test -e ${D}${sysconfdir}/netconfig && chown root:root ${D}${sysconfdir}/netconfig
}
BBCLASSEXTEND = "native nativesdk"
diff --git a/poky/meta/recipes-extended/screen/screen/CVE-2023-24626.patch b/poky/meta/recipes-extended/screen/screen/CVE-2023-24626.patch
new file mode 100644
index 0000000000..73caf9d81b
--- /dev/null
+++ b/poky/meta/recipes-extended/screen/screen/CVE-2023-24626.patch
@@ -0,0 +1,40 @@
+From e9ad41bfedb4537a6f0de20f00b27c7739f168f7 Mon Sep 17 00:00:00 2001
+From: Alexander Naumov <alexander_naumov@opensuse.org>
+Date: Mon, 30 Jan 2023 17:22:25 +0200
+Subject: fix: missing signal sending permission check on failed query messages
+
+Signed-off-by: Alexander Naumov <alexander_naumov@opensuse.org>
+
+CVE: CVE-2023-24626
+Upstream-Status: Backport [https://git.savannah.gnu.org/cgit/screen.git/commit/?id=e9ad41bfedb4537a6f0de20f00b27c7739f168f7]
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ socket.c | 9 +++++++--
+ 1 file changed, 7 insertions(+), 2 deletions(-)
+
+diff --git a/socket.c b/socket.c
+index bb68b35..9d87445 100644
+--- a/socket.c
++++ b/socket.c
+@@ -1285,11 +1285,16 @@ ReceiveMsg()
+ else
+ queryflag = -1;
+
+- Kill(m.m.command.apid,
++ if (CheckPid(m.m.command.apid)) {
++ Msg(0, "Query attempt with bad pid(%d)!", m.m.command.apid);
++ }
++ else {
++ Kill(m.m.command.apid,
+ (queryflag >= 0)
+ ? SIGCONT
+ : SIG_BYE); /* Send SIG_BYE if an error happened */
+- queryflag = -1;
++ queryflag = -1;
++ }
+ }
+ break;
+ case MSG_COMMAND:
+--
+2.25.1
+
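
The screen fix stops a failed query message from triggering an unchecked Kill() on an attacker-chosen pid. One common way to ask "may I signal this process?" without side effects is kill(pid, 0); the sketch below uses that probe and is not screen's CheckPid(), whose return convention differs:

    /* signal_check.c - sketch: verify a pid can legitimately be signalled
     * before sending the signal.
     */
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Return 1 when pid exists and we have permission to signal it. */
    static int pid_is_signalable(pid_t pid)
    {
        if (pid <= 0)
            return 0;
        return kill(pid, 0) == 0;   /* signal 0: existence + permission check only */
    }

    int main(void)
    {
        pid_t peer = getpid();      /* stand-in for the pid taken from the message */

        if (pid_is_signalable(peer))
            kill(peer, SIGCONT);    /* only signal once the check has passed */
        else
            fprintf(stderr, "Query attempt with bad pid(%d)!\n", (int)peer);
        return 0;
    }
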
diff --git a/poky/meta/recipes-extended/screen/screen_4.8.0.bb b/poky/meta/recipes-extended/screen/screen_4.8.0.bb
index fe640c262b..c4faa27023 100644
--- a/poky/meta/recipes-extended/screen/screen_4.8.0.bb
+++ b/poky/meta/recipes-extended/screen/screen_4.8.0.bb
@@ -22,6 +22,7 @@ SRC_URI = "${GNU_MIRROR}/screen/screen-${PV}.tar.gz \
file://0001-fix-for-multijob-build.patch \
file://0001-Remove-more-compatibility-stuff.patch \
file://CVE-2021-26937.patch \
+ file://CVE-2023-24626.patch \
"
SRC_URI[md5sum] = "d276213d3acd10339cd37848b8c4ab1e"
diff --git a/poky/meta/recipes-extended/shadow/shadow_4.8.1.bb b/poky/meta/recipes-extended/shadow/shadow_4.8.1.bb
index ff4aad926f..9dfcd4bc10 100644
--- a/poky/meta/recipes-extended/shadow/shadow_4.8.1.bb
+++ b/poky/meta/recipes-extended/shadow/shadow_4.8.1.bb
@@ -9,3 +9,7 @@ BBCLASSEXTEND = "native nativesdk"
# Severity is low and marked as closed and won't fix.
# https://bugzilla.redhat.com/show_bug.cgi?id=884658
CVE_CHECK_WHITELIST += "CVE-2013-4235"
+
+# This is an issue for a different shadow
+CVE_CHECK_WHITELIST += "CVE-2016-15024"
+
diff --git a/poky/meta/recipes-extended/sudo/files/CVE-2023-22809.patch b/poky/meta/recipes-extended/sudo/files/CVE-2023-22809.patch
new file mode 100644
index 0000000000..6c47eb3e44
--- /dev/null
+++ b/poky/meta/recipes-extended/sudo/files/CVE-2023-22809.patch
@@ -0,0 +1,113 @@
+Backport of:
+
+# HG changeset patch
+# Parent 7275148cad1f8cd3c350026460acc4d6ad349c3a
+sudoedit: do not permit editor arguments to include "--"
+We use "--" to separate the editor and arguments from the files to edit.
+If the editor arguments include "--", sudo can be tricked into allowing
+the user to edit a file not permitted by the security policy.
+Thanks to Matthieu Barjole and Victor Cutillas of Synacktiv
+(https://synacktiv.com) for finding this bug.
+
+CVE: CVE-2023-22809
+Upstream-Status: Backport [http://archive.ubuntu.com/ubuntu/pool/main/s/sudo/sudo_1.8.31-1ubuntu1.4.debian.tar.xz]
+Signed-off-by: Omkar Patil <omkar.patil@kpit.com>
+
+--- a/plugins/sudoers/editor.c
++++ b/plugins/sudoers/editor.c
+@@ -56,7 +56,7 @@ resolve_editor(const char *ed, size_t ed
+ const char *cp, *ep, *tmp;
+ const char *edend = ed + edlen;
+ struct stat user_editor_sb;
+- int nargc;
++ int nargc = 0;
+ debug_decl(resolve_editor, SUDOERS_DEBUG_UTIL)
+
+ /*
+@@ -102,6 +102,21 @@ resolve_editor(const char *ed, size_t ed
+ free(editor_path);
+ while (nargc--)
+ free(nargv[nargc]);
++ free(nargv);
++ debug_return_str(NULL);
++ }
++
++ /*
++ * We use "--" to separate the editor and arguments from the files
++ * to edit. The editor arguments themselves may not contain "--".
++ */
++ if (strcmp(nargv[nargc], "--") == 0) {
++ sudo_warnx(U_("ignoring editor: %.*s"), (int)edlen, ed);
++ sudo_warnx("%s", U_("editor arguments may not contain \"--\""));
++ errno = EINVAL;
++ free(editor_path);
++ while (nargc--)
++ free(nargv[nargc]);
+ free(nargv);
+ debug_return_str(NULL);
+ }
+--- a/plugins/sudoers/sudoers.c
++++ b/plugins/sudoers/sudoers.c
+@@ -616,20 +616,31 @@ sudoers_policy_main(int argc, char * con
+
+ /* Note: must call audit before uid change. */
+ if (ISSET(sudo_mode, MODE_EDIT)) {
++ const char *env_editor = NULL;
+ int edit_argc;
+- const char *env_editor;
+
+ free(safe_cmnd);
+ safe_cmnd = find_editor(NewArgc - 1, NewArgv + 1, &edit_argc,
+ &edit_argv, NULL, &env_editor, false);
+ if (safe_cmnd == NULL) {
+- if (errno != ENOENT)
++ switch (errno) {
++ case ENOENT:
++ audit_failure(NewArgc, NewArgv, N_("%s: command not found"),
++ env_editor ? env_editor : def_editor);
++ sudo_warnx(U_("%s: command not found"),
++ env_editor ? env_editor : def_editor);
++ goto bad;
++ case EINVAL:
++ if (def_env_editor && env_editor != NULL) {
++ /* User tried to do something funny with the editor. */
++ log_warningx(SLOG_NO_STDERR|SLOG_SEND_MAIL,
++ "invalid user-specified editor: %s", env_editor);
++ goto bad;
++ }
++ /* FALLTHROUGH */
++ default:
+ goto done;
+- audit_failure(NewArgc, NewArgv, N_("%s: command not found"),
+- env_editor ? env_editor : def_editor);
+- sudo_warnx(U_("%s: command not found"),
+- env_editor ? env_editor : def_editor);
+- goto bad;
++ }
+ }
+ if (audit_success(edit_argc, edit_argv) != 0 && !def_ignore_audit_errors)
+ goto done;
+--- a/plugins/sudoers/visudo.c
++++ b/plugins/sudoers/visudo.c
+@@ -308,7 +308,7 @@ static char *
+ get_editor(int *editor_argc, char ***editor_argv)
+ {
+ char *editor_path = NULL, **whitelist = NULL;
+- const char *env_editor;
++ const char *env_editor = NULL;
+ static char *files[] = { "+1", "sudoers" };
+ unsigned int whitelist_len = 0;
+ debug_decl(get_editor, SUDOERS_DEBUG_UTIL)
+@@ -342,7 +342,11 @@ get_editor(int *editor_argc, char ***edi
+ if (editor_path == NULL) {
+ if (def_env_editor && env_editor != NULL) {
+ /* We are honoring $EDITOR so this is a fatal error. */
+- sudo_fatalx(U_("specified editor (%s) doesn't exist"), env_editor);
++ if (errno == ENOENT) {
++ sudo_warnx(U_("specified editor (%s) doesn't exist"),
++ env_editor);
++ }
++ exit(EXIT_FAILURE);
+ }
+ sudo_fatalx(U_("no editor found (editor path = %s)"), def_editor);
+ }
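
The sudoedit change above rejects an editor argument that is exactly "--", because that token separates editor arguments from the files to edit. A standalone sketch of the same scan over an argument vector (illustrative only, not sudo's resolve_editor()):

    /* editor_args.c - sketch: reject a user-supplied editor whose argument
     * list contains the literal "--" separator.
     */
    #include <stdio.h>
    #include <string.h>

    /* Return -1 if any argument is exactly "--", else 0. */
    static int check_editor_args(char *const argv[], int argc)
    {
        for (int i = 0; i < argc; i++) {
            if (strcmp(argv[i], "--") == 0) {
                fprintf(stderr, "editor arguments may not contain \"--\"\n");
                return -1;
            }
        }
        return 0;
    }

    int main(void)
    {
        char *bad[]  = { "vi", "--", "/etc/shadow" };
        char *good[] = { "vi", "-R" };

        printf("bad : %d\n", check_editor_args(bad, 3));    /* -1: rejected */
        printf("good: %d\n", check_editor_args(good, 2));   /*  0: accepted */
        return 0;
    }
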
diff --git a/poky/meta/recipes-extended/sudo/sudo.inc b/poky/meta/recipes-extended/sudo/sudo.inc
index 153731c807..9c7279d25a 100644
--- a/poky/meta/recipes-extended/sudo/sudo.inc
+++ b/poky/meta/recipes-extended/sudo/sudo.inc
@@ -3,7 +3,7 @@ DESCRIPTION = "Sudo (superuser do) allows a system administrator to give certain
HOMEPAGE = "http://www.sudo.ws"
BUGTRACKER = "http://www.sudo.ws/bugs/"
SECTION = "admin"
-LICENSE = "ISC & BSD & Zlib"
+LICENSE = "ISC & BSD-3-Clause & BSD-2-Clause & Zlib"
LIC_FILES_CHKSUM = "file://doc/LICENSE;md5=07966675feaddba70cc812895b248230 \
file://plugins/sudoers/redblack.c;beginline=1;endline=46;md5=03e35317699ba00b496251e0dfe9f109 \
file://lib/util/reallocarray.c;beginline=3;endline=15;md5=397dd45c7683e90b9f8bf24638cf03bf \
diff --git a/poky/meta/recipes-extended/sudo/sudo/CVE-2022-43995.patch b/poky/meta/recipes-extended/sudo/sudo/CVE-2022-43995.patch
new file mode 100644
index 0000000000..1336c7701d
--- /dev/null
+++ b/poky/meta/recipes-extended/sudo/sudo/CVE-2022-43995.patch
@@ -0,0 +1,59 @@
+From e1554d7996a59bf69544f3d8dd4ae683027948f9 Mon Sep 17 00:00:00 2001
+From: Hitendra Prajapati <hprajapati@mvista.com>
+Date: Tue, 15 Nov 2022 09:17:18 +0530
+Subject: [PATCH] CVE-2022-43995
+
+Upstream-Status: Backport [https://github.com/sudo-project/sudo/commit/bd209b9f16fcd1270c13db27ae3329c677d48050]
+CVE: CVE-2022-43995
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+
+Potential heap overflow for passwords < 8
+characters. Starting with sudo 1.8.0 the plaintext password buffer is
+dynamically sized so it is not safe to assume that it is at least 9 bytes in
+size.
+Found by Hugo Lefeuvre (University of Manchester) with ConfFuzz.
+---
+ plugins/sudoers/auth/passwd.c | 11 +++++------
+ 1 file changed, 5 insertions(+), 6 deletions(-)
+
+diff --git a/plugins/sudoers/auth/passwd.c b/plugins/sudoers/auth/passwd.c
+index 03c7a16..76a7824 100644
+--- a/plugins/sudoers/auth/passwd.c
++++ b/plugins/sudoers/auth/passwd.c
+@@ -63,7 +63,7 @@ sudo_passwd_init(struct passwd *pw, sudo_auth *auth)
+ int
+ sudo_passwd_verify(struct passwd *pw, char *pass, sudo_auth *auth, struct sudo_conv_callback *callback)
+ {
+- char sav, *epass;
++ char des_pass[9], *epass;
+ char *pw_epasswd = auth->data;
+ size_t pw_len;
+ int matched = 0;
+@@ -75,12 +75,12 @@ sudo_passwd_verify(struct passwd *pw, char *pass, sudo_auth *auth, struct sudo_c
+
+ /*
+ * Truncate to 8 chars if standard DES since not all crypt()'s do this.
+- * If this turns out not to be safe we will have to use OS #ifdef's (sigh).
+ */
+- sav = pass[8];
+ pw_len = strlen(pw_epasswd);
+- if (pw_len == DESLEN || HAS_AGEINFO(pw_epasswd, pw_len))
+- pass[8] = '\0';
++ if (pw_len == DESLEN || HAS_AGEINFO(pw_epasswd, pw_len)) {
++ strlcpy(des_pass, pass, sizeof(des_pass));
++ pass = des_pass;
++ }
+
+ /*
+ * Normal UN*X password check.
+@@ -88,7 +88,6 @@ sudo_passwd_verify(struct passwd *pw, char *pass, sudo_auth *auth, struct sudo_c
+ * only compare the first DESLEN characters in that case.
+ */
+ epass = (char *) crypt(pass, pw_epasswd);
+- pass[8] = sav;
+ if (epass != NULL) {
+ if (HAS_AGEINFO(pw_epasswd, pw_len) && strlen(epass) == DESLEN)
+ matched = !strncmp(pw_epasswd, epass, DESLEN);
+--
+2.25.1
+
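
The passwd.c fix above replaces the old trick of writing a NUL into pass[8], which is out of bounds when the password buffer is shorter than 9 bytes, with a copy into a local 9-byte buffer. A sketch of that safe-truncation pattern using plain snprintf(), since strlcpy() is not universally available:

    /* des_truncate.c - sketch: truncate a password to 8 characters for classic
     * DES crypt() by copying into a local buffer instead of modifying the
     * caller's (possibly shorter) string in place.
     */
    #include <stdio.h>

    /* Copy at most 8 characters of pass into out (9 bytes) and return out. */
    static const char *des_truncate(const char *pass, char out[9])
    {
        snprintf(out, 9, "%s", pass);   /* writes at most 8 chars plus the NUL */
        return out;
    }

    int main(void)
    {
        char buf[9];

        printf("%s\n", des_truncate("short", buf));        /* "short"    */
        printf("%s\n", des_truncate("longpassword", buf)); /* "longpass" */
        return 0;
    }
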
diff --git a/poky/meta/recipes-extended/sudo/sudo/CVE-2023-28486_CVE-2023-28487-1.patch b/poky/meta/recipes-extended/sudo/sudo/CVE-2023-28486_CVE-2023-28487-1.patch
new file mode 100644
index 0000000000..bc6f8c19a6
--- /dev/null
+++ b/poky/meta/recipes-extended/sudo/sudo/CVE-2023-28486_CVE-2023-28487-1.patch
@@ -0,0 +1,646 @@
+Origin: Backport obtained from SUSE. Thanks!
+
+From 334daf92b31b79ce68ed75e2ee14fca265f029ca Mon Sep 17 00:00:00 2001
+From: "Todd C. Miller" <Todd.Miller@sudo.ws>
+Date: Wed, 18 Jan 2023 08:21:34 -0700
+Subject: [PATCH] Escape control characters in log messages and "sudoreplay -l"
+ output. The log message contains user-controlled strings that could include
+ things like terminal control characters. Space characters in the command
+ path are now also escaped.
+
+Command line arguments that contain spaces are surrounded with
+single quotes and any literal single quote or backslash characters
+are escaped with a backslash. This makes it possible to distinguish
+multiple command line arguments from a single argument that contains
+spaces.
+
+Issue found by Matthieu Barjole and Victor Cutillas of Synacktiv
+(https://synacktiv.com).
+
+Upstream-Status: Backport [import from ubuntu https://git.launchpad.net/ubuntu/+source/sudo/tree/debian/patches/CVE-2023-2848x-1.patch?h=ubuntu/focal-security
+Upstream commit https://github.com/sudo-project/sudo/commit/334daf92b31b79ce68ed75e2ee14fca265f029ca]
+CVE: CVE-2023-28486 CVE-2023-28487
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ doc/sudoers.man.in | 33 +++++++--
+ doc/sudoers.mdoc.in | 28 ++++++--
+ doc/sudoreplay.man.in | 9 ++
+ doc/sudoreplay.mdoc.in | 10 ++
+ include/sudo_compat.h | 6 +
+ include/sudo_lbuf.h | 7 ++
+ lib/util/lbuf.c | 106 +++++++++++++++++++++++++++++++
+ lib/util/util.exp.in | 1
+ plugins/sudoers/logging.c | 145 +++++++++++--------------------------------
+ plugins/sudoers/sudoreplay.c | 44 +++++++++----
+ 10 files changed, 257 insertions(+), 132 deletions(-)
+
+--- a/doc/sudoers.man.in
++++ b/doc/sudoers.man.in
+@@ -4566,6 +4566,19 @@ can log events using either
+ syslog(3)
+ or a simple log file.
+ The log format is almost identical in both cases.
++Any control characters present in the log data are formatted in octal
++with a leading
++\(oq#\(cq
++character.
++For example, a horizontal tab is stored as
++\(oq#011\(cq
++and an embedded carriage return is stored as
++\(oq#015\(cq.
++In addition, space characters in the command path are stored as
++\(oq#040\(cq.
++Literal single quotes and backslash characters
++(\(oq\e\(cq)
++in command line arguments are escaped with a backslash.
+ .SS "Accepted command log entries"
+ Commands that sudo runs are logged using the following format (split
+ into multiple lines for readability):
+@@ -4646,7 +4659,7 @@ A list of environment variables specifie
+ if specified.
+ .TP 14n
+ command
+-The actual command that was executed.
++The actual command that was executed, including any command line arguments.
+ .PP
+ Messages are logged using the locale specified by
+ \fIsudoers_locale\fR,
+@@ -4882,17 +4895,21 @@ with a few important differences:
+ 1.\&
+ The
+ \fIprogname\fR
+-and
+-\fIhostname\fR
+-fields are not present.
++field is not present.
+ .TP 5n
+ 2.\&
+-If the
+-\fIlog_year\fR
+-option is enabled,
+-the date will also include the year.
++The
++\fIhostname\fR
++is only logged if the
++\fIlog_host\fR
++option is enabled.
+ .TP 5n
+ 3.\&
++The date does not include the year unless the
++\fIlog_year\fR
++option is enabled.
++.TP 5n
++4.\&
+ Lines that are longer than
+ \fIloglinelen\fR
+ characters (80 by default) are word-wrapped and continued on the
+--- a/doc/sudoers.mdoc.in
++++ b/doc/sudoers.mdoc.in
+@@ -4261,6 +4261,19 @@ can log events using either
+ .Xr syslog 3
+ or a simple log file.
+ The log format is almost identical in both cases.
++Any control characters present in the log data are formatted in octal
++with a leading
++.Ql #
++character.
++For example, a horizontal tab is stored as
++.Ql #011
++and an embedded carriage return is stored as
++.Ql #015 .
++In addition, space characters in the command path are stored as
++.Ql #040 .
++Literal single quotes and backslash characters
++.Pq Ql \e
++in command line arguments are escaped with a backslash.
+ .Ss Accepted command log entries
+ Commands that sudo runs are logged using the following format (split
+ into multiple lines for readability):
+@@ -4328,7 +4341,7 @@ option is enabled.
+ A list of environment variables specified on the command line,
+ if specified.
+ .It command
+-The actual command that was executed.
++The actual command that was executed, including any command line arguments.
+ .El
+ .Pp
+ Messages are logged using the locale specified by
+@@ -4550,14 +4563,17 @@ with a few important differences:
+ .It
+ The
+ .Em progname
+-and
++field is not present.
++.It
++The
+ .Em hostname
+-fields are not present.
++is only logged if the
++.Em log_host
++option is enabled.
+ .It
+-If the
++The date does not include the year unless the
+ .Em log_year
+-option is enabled,
+-the date will also include the year.
++option is enabled.
+ .It
+ Lines that are longer than
+ .Em loglinelen
+--- a/doc/sudoreplay.man.in
++++ b/doc/sudoreplay.man.in
+@@ -149,6 +149,15 @@ In this mode,
+ will list available sessions in a format similar to the
+ \fBsudo\fR
+ log file format, sorted by file name (or sequence number).
+Any control characters present in the log data are formatted in octal
++with a leading
++\(oq#\(cq
++character.
++For example, a horizontal tab is displayed as
++\(oq#011\(cq
++and an embedded carriage return is displayed as
++\(oq#015\(cq.
++.sp
+ If a
+ \fIsearch expression\fR
+ is specified, it will be used to restrict the IDs that are displayed.
+--- a/doc/sudoreplay.mdoc.in
++++ b/doc/sudoreplay.mdoc.in
+@@ -142,6 +142,16 @@ In this mode,
+ will list available sessions in a format similar to the
+ .Nm sudo
+ log file format, sorted by file name (or sequence number).
++Any control characters present in the log data are formatted in octal
++with a leading
++.Ql #
++character.
++For example, a horizontal tab is displayed as
++.Ql #011
++and an embedded carriage return is displayed as
++.Ql #015 .
++Space characters in the command name and arguments are also formatted in octal.
++.Pp
+ If a
+ .Ar search expression
+ is specified, it will be used to restrict the IDs that are displayed.
+--- a/include/sudo_compat.h
++++ b/include/sudo_compat.h
+@@ -79,6 +79,12 @@
+ # endif
+ #endif
+
++#ifdef HAVE_FALLTHROUGH_ATTRIBUTE
++# define FALLTHROUGH __attribute__((__fallthrough__))
++#else
++# define FALLTHROUGH do { } while (0)
++#endif
++
+ /*
+ * Given the pointer x to the member m of the struct s, return
+ * a pointer to the containing structure.
+--- a/include/sudo_lbuf.h
++++ b/include/sudo_lbuf.h
+@@ -36,9 +36,15 @@ struct sudo_lbuf {
+
+ typedef int (*sudo_lbuf_output_t)(const char *);
+
++/* Flags for sudo_lbuf_append_esc() */
++#define LBUF_ESC_CNTRL 0x01
++#define LBUF_ESC_BLANK 0x02
++#define LBUF_ESC_QUOTE 0x04
++
+ __dso_public void sudo_lbuf_init_v1(struct sudo_lbuf *lbuf, sudo_lbuf_output_t output, int indent, const char *continuation, int cols);
+ __dso_public void sudo_lbuf_destroy_v1(struct sudo_lbuf *lbuf);
+ __dso_public bool sudo_lbuf_append_v1(struct sudo_lbuf *lbuf, const char *fmt, ...) __printflike(2, 3);
++__dso_public bool sudo_lbuf_append_esc_v1(struct sudo_lbuf *lbuf, int flags, const char *fmt, ...) __printflike(3, 4);
+ __dso_public bool sudo_lbuf_append_quoted_v1(struct sudo_lbuf *lbuf, const char *set, const char *fmt, ...) __printflike(3, 4);
+ __dso_public void sudo_lbuf_print_v1(struct sudo_lbuf *lbuf);
+ __dso_public bool sudo_lbuf_error_v1(struct sudo_lbuf *lbuf);
+@@ -47,6 +53,7 @@ __dso_public void sudo_lbuf_clearerr_v1(
+ #define sudo_lbuf_init(_a, _b, _c, _d, _e) sudo_lbuf_init_v1((_a), (_b), (_c), (_d), (_e))
+ #define sudo_lbuf_destroy(_a) sudo_lbuf_destroy_v1((_a))
+ #define sudo_lbuf_append sudo_lbuf_append_v1
++#define sudo_lbuf_append_esc sudo_lbuf_append_esc_v1
+ #define sudo_lbuf_append_quoted sudo_lbuf_append_quoted_v1
+ #define sudo_lbuf_print(_a) sudo_lbuf_print_v1((_a))
+ #define sudo_lbuf_error(_a) sudo_lbuf_error_v1((_a))
+--- a/lib/util/lbuf.c
++++ b/lib/util/lbuf.c
+@@ -93,6 +93,112 @@ sudo_lbuf_expand(struct sudo_lbuf *lbuf,
+ }
+
+ /*
++ * Escape a character in octal form (#0n) and store it as a string
++ * in buf, which must have at least 6 bytes available.
++ * Returns the length of buf, not counting the terminating NUL byte.
++ */
++static int
++escape(unsigned char ch, char *buf)
++{
++ const int len = ch < 0100 ? (ch < 010 ? 3 : 4) : 5;
++
++ /* Work backwards from the least significant digit to most significant. */
++ switch (len) {
++ case 5:
++ buf[4] = (ch & 7) + '0';
++ ch >>= 3;
++ FALLTHROUGH;
++ case 4:
++ buf[3] = (ch & 7) + '0';
++ ch >>= 3;
++ FALLTHROUGH;
++ case 3:
++ buf[2] = (ch & 7) + '0';
++ buf[1] = '0';
++ buf[0] = '#';
++ break;
++ }
++ buf[len] = '\0';
++
++ return len;
++}
++
++/*
++ * Parse the format and append strings, only %s and %% escapes are supported.
++ * Any non-printable characters are escaped in octal as #0nn.
++ */
++bool
++sudo_lbuf_append_esc_v1(struct sudo_lbuf *lbuf, int flags, const char *fmt, ...)
++{
++ unsigned int saved_len = lbuf->len;
++ bool ret = false;
++ const char *s;
++ va_list ap;
++ debug_decl(sudo_lbuf_append_esc, SUDO_DEBUG_UTIL);
++
++ if (sudo_lbuf_error(lbuf))
++ debug_return_bool(false);
++
++#define should_escape(ch) \
++ ((ISSET(flags, LBUF_ESC_CNTRL) && iscntrl((unsigned char)ch)) || \
++ (ISSET(flags, LBUF_ESC_BLANK) && isblank((unsigned char)ch)))
++#define should_quote(ch) \
++ (ISSET(flags, LBUF_ESC_QUOTE) && (ch == '\'' || ch == '\\'))
++
++ va_start(ap, fmt);
++ while (*fmt != '\0') {
++ if (fmt[0] == '%' && fmt[1] == 's') {
++ if ((s = va_arg(ap, char *)) == NULL)
++ s = "(NULL)";
++ while (*s != '\0') {
++ if (should_escape(*s)) {
++ if (!sudo_lbuf_expand(lbuf, sizeof("#0177") - 1))
++ goto done;
++ lbuf->len += escape(*s++, lbuf->buf + lbuf->len);
++ continue;
++ }
++ if (should_quote(*s)) {
++ if (!sudo_lbuf_expand(lbuf, 2))
++ goto done;
++ lbuf->buf[lbuf->len++] = '\\';
++ lbuf->buf[lbuf->len++] = *s++;
++ continue;
++ }
++ if (!sudo_lbuf_expand(lbuf, 1))
++ goto done;
++ lbuf->buf[lbuf->len++] = *s++;
++ }
++ fmt += 2;
++ continue;
++ }
++ if (should_escape(*fmt)) {
++ if (!sudo_lbuf_expand(lbuf, sizeof("#0177") - 1))
++ goto done;
++ if (*fmt == '\'') {
++ lbuf->buf[lbuf->len++] = '\\';
++ lbuf->buf[lbuf->len++] = *fmt++;
++ } else {
++ lbuf->len += escape(*fmt++, lbuf->buf + lbuf->len);
++ }
++ continue;
++ }
++ if (!sudo_lbuf_expand(lbuf, 1))
++ goto done;
++ lbuf->buf[lbuf->len++] = *fmt++;
++ }
++ ret = true;
++
++done:
++ if (!ret)
++ lbuf->len = saved_len;
++ if (lbuf->size != 0)
++ lbuf->buf[lbuf->len] = '\0';
++ va_end(ap);
++
++ debug_return_bool(ret);
++}
++
++/*
+ * Parse the format and append strings, only %s and %% escapes are supported.
+ * Any characters in set are quoted with a backslash.
+ */
+--- a/lib/util/util.exp.in
++++ b/lib/util/util.exp.in
+@@ -79,6 +79,7 @@ sudo_gethostname_v1
+ sudo_gettime_awake_v1
+ sudo_gettime_mono_v1
+ sudo_gettime_real_v1
++sudo_lbuf_append_esc_v1
+ sudo_lbuf_append_quoted_v1
+ sudo_lbuf_append_v1
+ sudo_lbuf_clearerr_v1
+--- a/plugins/sudoers/logging.c
++++ b/plugins/sudoers/logging.c
+@@ -58,6 +58,7 @@
+ #include <syslog.h>
+
+ #include "sudoers.h"
++#include "sudo_lbuf.h"
+
+ #ifndef HAVE_GETADDRINFO
+ # include "compat/getaddrinfo.h"
+@@ -940,14 +941,6 @@ should_mail(int status)
+ (def_mail_no_perms && !ISSET(status, VALIDATE_SUCCESS)));
+ }
+
+-#define LL_TTY_STR "TTY="
+-#define LL_CWD_STR "PWD=" /* XXX - should be CWD= */
+-#define LL_USER_STR "USER="
+-#define LL_GROUP_STR "GROUP="
+-#define LL_ENV_STR "ENV="
+-#define LL_CMND_STR "COMMAND="
+-#define LL_TSID_STR "TSID="
+-
+ #define IS_SESSID(s) ( \
+ isalnum((unsigned char)(s)[0]) && isalnum((unsigned char)(s)[1]) && \
+ (s)[2] == '/' && \
+@@ -962,14 +955,16 @@ should_mail(int status)
+ static char *
+ new_logline(const char *message, const char *errstr)
+ {
+- char *line = NULL, *evstr = NULL;
+ #ifndef SUDOERS_NO_SEQ
+ char sessid[7];
+ #endif
+ const char *tsid = NULL;
+- size_t len = 0;
++ struct sudo_lbuf lbuf;
++ int i;
+ debug_decl(new_logline, SUDOERS_DEBUG_LOGGING)
+
++ sudo_lbuf_init(&lbuf, NULL, 0, NULL, 0);
++
+ #ifndef SUDOERS_NO_SEQ
+ /* A TSID may be a sudoers-style session ID or a free-form string. */
+ if (sudo_user.iolog_file != NULL) {
+@@ -989,119 +984,55 @@ new_logline(const char *message, const c
+ #endif
+
+ /*
+- * Compute line length
++ * Format the log line as an lbuf, escaping control characters in
++ * octal form (#0nn). Error checking (ENOMEM) is done at the end.
+ */
+- if (message != NULL)
+- len += strlen(message) + 3;
+- if (errstr != NULL)
+- len += strlen(errstr) + 3;
+- len += sizeof(LL_TTY_STR) + 2 + strlen(user_tty);
+- len += sizeof(LL_CWD_STR) + 2 + strlen(user_cwd);
+- if (runas_pw != NULL)
+- len += sizeof(LL_USER_STR) + 2 + strlen(runas_pw->pw_name);
+- if (runas_gr != NULL)
+- len += sizeof(LL_GROUP_STR) + 2 + strlen(runas_gr->gr_name);
+- if (tsid != NULL)
+- len += sizeof(LL_TSID_STR) + 2 + strlen(tsid);
+- if (sudo_user.env_vars != NULL) {
+- size_t evlen = 0;
+- char * const *ep;
+-
+- for (ep = sudo_user.env_vars; *ep != NULL; ep++)
+- evlen += strlen(*ep) + 1;
+- if (evlen != 0) {
+- if ((evstr = malloc(evlen)) == NULL)
+- goto oom;
+- evstr[0] = '\0';
+- for (ep = sudo_user.env_vars; *ep != NULL; ep++) {
+- strlcat(evstr, *ep, evlen);
+- strlcat(evstr, " ", evlen); /* NOTE: last one will fail */
+- }
+- len += sizeof(LL_ENV_STR) + 2 + evlen;
+- }
+- }
+- if (user_cmnd != NULL) {
+- /* Note: we log "sudo -l command arg ..." as "list command arg ..." */
+- len += sizeof(LL_CMND_STR) - 1 + strlen(user_cmnd);
+- if (ISSET(sudo_mode, MODE_CHECK))
+- len += sizeof("list ") - 1;
+- if (user_args != NULL)
+- len += strlen(user_args) + 1;
+- }
+-
+- /*
+- * Allocate and build up the line.
+- */
+- if ((line = malloc(++len)) == NULL)
+- goto oom;
+- line[0] = '\0';
+
+ if (message != NULL) {
+- if (strlcat(line, message, len) >= len ||
+- strlcat(line, errstr ? " : " : " ; ", len) >= len)
+- goto toobig;
++ sudo_lbuf_append_esc(&lbuf, LBUF_ESC_CNTRL, "%s%s", message,
++ errstr ? " : " : " ; ");
+ }
+ if (errstr != NULL) {
+- if (strlcat(line, errstr, len) >= len ||
+- strlcat(line, " ; ", len) >= len)
+- goto toobig;
+- }
+- if (strlcat(line, LL_TTY_STR, len) >= len ||
+- strlcat(line, user_tty, len) >= len ||
+- strlcat(line, " ; ", len) >= len)
+- goto toobig;
+- if (strlcat(line, LL_CWD_STR, len) >= len ||
+- strlcat(line, user_cwd, len) >= len ||
+- strlcat(line, " ; ", len) >= len)
+- goto toobig;
++ sudo_lbuf_append_esc(&lbuf, LBUF_ESC_CNTRL, "%s ; ", errstr);
++ }
++ if (user_tty != NULL) {
++ sudo_lbuf_append_esc(&lbuf, LBUF_ESC_CNTRL, "TTY=%s ; ", user_tty);
++ }
++ if (user_cwd != NULL) {
++ sudo_lbuf_append_esc(&lbuf, LBUF_ESC_CNTRL, "PWD=%s ; ", user_cwd);
++ }
+ if (runas_pw != NULL) {
+- if (strlcat(line, LL_USER_STR, len) >= len ||
+- strlcat(line, runas_pw->pw_name, len) >= len ||
+- strlcat(line, " ; ", len) >= len)
+- goto toobig;
++ sudo_lbuf_append_esc(&lbuf, LBUF_ESC_CNTRL, "USER=%s ; ",
++ runas_pw->pw_name);
+ }
+ if (runas_gr != NULL) {
+- if (strlcat(line, LL_GROUP_STR, len) >= len ||
+- strlcat(line, runas_gr->gr_name, len) >= len ||
+- strlcat(line, " ; ", len) >= len)
+- goto toobig;
++ sudo_lbuf_append_esc(&lbuf, LBUF_ESC_CNTRL, "GROUP=%s ; ",
++ runas_gr->gr_name);
+ }
+ if (tsid != NULL) {
+- if (strlcat(line, LL_TSID_STR, len) >= len ||
+- strlcat(line, tsid, len) >= len ||
+- strlcat(line, " ; ", len) >= len)
+- goto toobig;
+- }
+- if (evstr != NULL) {
+- if (strlcat(line, LL_ENV_STR, len) >= len ||
+- strlcat(line, evstr, len) >= len ||
+- strlcat(line, " ; ", len) >= len)
+- goto toobig;
+- free(evstr);
+- evstr = NULL;
++ sudo_lbuf_append_esc(&lbuf, LBUF_ESC_CNTRL, "TSID=%s ; ", tsid);
++ }
++ if (sudo_user.env_vars != NULL) {
++ sudo_lbuf_append_esc(&lbuf, LBUF_ESC_CNTRL, "ENV=%s", sudo_user.env_vars[0]);
++ for (i = 1; sudo_user.env_vars[i] != NULL; i++) {
++ sudo_lbuf_append_esc(&lbuf, LBUF_ESC_CNTRL, " %s",
++ sudo_user.env_vars[i]);
++ }
+ }
+ if (user_cmnd != NULL) {
+- if (strlcat(line, LL_CMND_STR, len) >= len)
+- goto toobig;
+- if (ISSET(sudo_mode, MODE_CHECK) && strlcat(line, "list ", len) >= len)
+- goto toobig;
+- if (strlcat(line, user_cmnd, len) >= len)
+- goto toobig;
++ sudo_lbuf_append_esc(&lbuf, LBUF_ESC_CNTRL|LBUF_ESC_BLANK,
++ "COMMAND=%s", user_cmnd);
+ if (user_args != NULL) {
+- if (strlcat(line, " ", len) >= len ||
+- strlcat(line, user_args, len) >= len)
+- goto toobig;
++ sudo_lbuf_append_esc(&lbuf,
++ LBUF_ESC_CNTRL|LBUF_ESC_QUOTE,
++ " %s", user_args);
+ }
+ }
+
+- debug_return_str(line);
+-oom:
+- free(evstr);
++ if (!sudo_lbuf_error(&lbuf))
++ debug_return_str(lbuf.buf);
++
++ sudo_lbuf_destroy(&lbuf);
+ sudo_warnx(U_("%s: %s"), __func__, U_("unable to allocate memory"));
+ debug_return_str(NULL);
+-toobig:
+- free(evstr);
+- free(line);
+- sudo_warnx(U_("internal error, %s overflow"), __func__);
+- debug_return_str(NULL);
+ }
+--- a/plugins/sudoers/sudoreplay.c
++++ b/plugins/sudoers/sudoreplay.c
+@@ -71,6 +71,7 @@
+ #include "sudo_conf.h"
+ #include "sudo_debug.h"
+ #include "sudo_event.h"
++#include "sudo_lbuf.h"
+ #include "sudo_util.h"
+
+ #ifdef HAVE_GETOPT_LONG
+@@ -1353,7 +1354,8 @@ match_expr(struct search_node_list *head
+ }
+
+ static int
+-list_session(char *logfile, regex_t *re, const char *user, const char *tty)
++list_session(struct sudo_lbuf *lbuf, char *logfile, regex_t *re,
++ const char *user, const char *tty)
+ {
+ char idbuf[7], *idstr, *cp;
+ const char *timestr;
+@@ -1386,16 +1388,32 @@ list_session(char *logfile, regex_t *re,
+ }
+ /* XXX - print rows + cols? */
+ timestr = get_timestr(li->tstamp, 1);
+- printf("%s : %s : TTY=%s ; CWD=%s ; USER=%s ; ",
+- timestr ? timestr : "invalid date",
+- li->user, li->tty, li->cwd, li->runas_user);
+- if (li->runas_group)
+- printf("GROUP=%s ; ", li->runas_group);
+- printf("TSID=%s ; COMMAND=%s\n", idstr, li->cmd);
+-
+- ret = 0;
+-
++ sudo_lbuf_append_esc(lbuf, LBUF_ESC_CNTRL, "%s : %s : ",
++ timestr ? timestr : "invalid date", li->user);
++ if (li->tty != NULL) {
++ sudo_lbuf_append_esc(lbuf, LBUF_ESC_CNTRL, "TTY=%s ; ",
++ li->tty);
++ }
++ if (li->cwd != NULL) {
++ sudo_lbuf_append_esc(lbuf, LBUF_ESC_CNTRL, "CWD=%s ; ",
++ li->cwd);
++ }
++ sudo_lbuf_append_esc(lbuf, LBUF_ESC_CNTRL, "USER=%s ; ", li->runas_user);
++ if (li->runas_group != NULL) {
++ sudo_lbuf_append_esc(lbuf, LBUF_ESC_CNTRL, "GROUP=%s ; ",
++ li->runas_group);
++ }
++ sudo_lbuf_append_esc(lbuf, LBUF_ESC_CNTRL, "TSID=%s ; ", idstr);
++ sudo_lbuf_append_esc(lbuf, LBUF_ESC_CNTRL, "COMMAND=%s",
++ li->cmd);
++
++ if (!sudo_lbuf_error(lbuf)) {
++ puts(lbuf->buf);
++ ret = 0;
++ }
+ done:
++ lbuf->error = 0;
++ lbuf->len = 0;
+ free_log_info(li);
+ debug_return_int(ret);
+ }
+@@ -1415,6 +1433,7 @@ find_sessions(const char *dir, regex_t *
+ DIR *d;
+ struct dirent *dp;
+ struct stat sb;
++ struct sudo_lbuf lbuf;
+ size_t sdlen, sessions_len = 0, sessions_size = 0;
+ unsigned int i;
+ int len;
+@@ -1426,6 +1445,8 @@ find_sessions(const char *dir, regex_t *
+ #endif
+ debug_decl(find_sessions, SUDO_DEBUG_UTIL)
+
++ sudo_lbuf_init(&lbuf, NULL, 0, NULL, 0);
++
+ d = opendir(dir);
+ if (d == NULL)
+ sudo_fatal(U_("unable to open %s"), dir);
+@@ -1485,7 +1506,7 @@ find_sessions(const char *dir, regex_t *
+
+ /* Check for dir with a log file. */
+ if (lstat(pathbuf, &sb) == 0 && S_ISREG(sb.st_mode)) {
+- list_session(pathbuf, re, user, tty);
++ list_session(&lbuf, pathbuf, re, user, tty);
+ } else {
+ /* Strip off "/log" and recurse if a dir. */
+ pathbuf[sdlen + len - 4] = '\0';
+@@ -1496,6 +1517,7 @@ find_sessions(const char *dir, regex_t *
+ }
+ free(sessions);
+ }
++ sudo_lbuf_destroy(&lbuf);
+
+ debug_return_int(0);
+ }
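
The heart of the logging rework above is an escape() helper that renders control characters as "#0nn" octal escapes so terminal control sequences cannot be smuggled into the log. A compact standalone version of the same idea, not sudo's lbuf API:

    /* log_escape.c - sketch: octal-escape control characters in log text. */
    #include <ctype.h>
    #include <stdio.h>

    static void log_escaped(FILE *fp, const char *s)
    {
        for (; *s != '\0'; s++) {
            unsigned char ch = (unsigned char)*s;
            if (iscntrl(ch))
                fprintf(fp, "#0%o", ch);    /* TAB -> #011, CR -> #015, DEL -> #0177 */
            else
                fputc(ch, fp);
        }
        fputc('\n', fp);
    }

    int main(void)
    {
        log_escaped(stdout, "COMMAND=/bin/echo hi\tthere\r\033[2J");
        return 0;
    }
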
diff --git a/poky/meta/recipes-extended/sudo/sudo/CVE-2023-28486_CVE-2023-28487-2.patch b/poky/meta/recipes-extended/sudo/sudo/CVE-2023-28486_CVE-2023-28487-2.patch
new file mode 100644
index 0000000000..d021873b70
--- /dev/null
+++ b/poky/meta/recipes-extended/sudo/sudo/CVE-2023-28486_CVE-2023-28487-2.patch
@@ -0,0 +1,26 @@
+Backport of:
+
+From 12648b4e0a8cf486480442efd52f0e0b6cab6e8b Mon Sep 17 00:00:00 2001
+From: "Todd C. Miller" <Todd.Miller@sudo.ws>
+Date: Mon, 13 Mar 2023 08:04:32 -0600
+Subject: [PATCH] Add missing " ; " separator between environment variables and
+ command. This is a regression introduced in sudo 1.9.13. GitHub issue #254.
+
+Upstream-Status: Backport [import from ubuntu https://git.launchpad.net/ubuntu/+source/sudo/tree/debian/patches/CVE-2023-2848x-2.patch?h=ubuntu/focal-security
+Upstream commit https://github.com/sudo-project/sudo/commit/12648b4e0a8cf486480442efd52f0e0b6cab6e8b]
+CVE: CVE-2023-28486 CVE-2023-28487
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ lib/eventlog/eventlog.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/plugins/sudoers/logging.c
++++ b/plugins/sudoers/logging.c
+@@ -1018,6 +1018,7 @@ new_logline(const char *message, const c
+ sudo_lbuf_append_esc(&lbuf, LBUF_ESC_CNTRL, " %s",
+ sudo_user.env_vars[i]);
+ }
++ sudo_lbuf_append(&lbuf, " ; ");
+ }
+ if (user_cmnd != NULL) {
+ sudo_lbuf_append_esc(&lbuf, LBUF_ESC_CNTRL|LBUF_ESC_BLANK,
diff --git a/poky/meta/recipes-extended/sudo/sudo_1.8.32.bb b/poky/meta/recipes-extended/sudo/sudo_1.8.32.bb
index 8d16ec2538..e35bbfa789 100644
--- a/poky/meta/recipes-extended/sudo/sudo_1.8.32.bb
+++ b/poky/meta/recipes-extended/sudo/sudo_1.8.32.bb
@@ -4,6 +4,10 @@ SRC_URI = "https://www.sudo.ws/dist/sudo-${PV}.tar.gz \
${@bb.utils.contains('DISTRO_FEATURES', 'pam', '${PAM_SRC_URI}', '', d)} \
file://0001-Include-sys-types.h-for-id_t-definition.patch \
file://0001-Fix-includes-when-building-with-musl.patch \
+ file://CVE-2022-43995.patch \
+ file://CVE-2023-22809.patch \
+ file://CVE-2023-28486_CVE-2023-28487-1.patch \
+ file://CVE-2023-28486_CVE-2023-28487-2.patch \
"
PAM_SRC_URI = "file://sudo.pam"
diff --git a/poky/meta/recipes-extended/sysstat/sysstat/CVE-2022-39377.patch b/poky/meta/recipes-extended/sysstat/sysstat/CVE-2022-39377.patch
new file mode 100644
index 0000000000..972cc8938b
--- /dev/null
+++ b/poky/meta/recipes-extended/sysstat/sysstat/CVE-2022-39377.patch
@@ -0,0 +1,92 @@
+From 9c4eaf150662ad40607923389d4519bc83b93540 Mon Sep 17 00:00:00 2001
+From: Sebastien <seb@fedora-2.home>
+Date: Sat, 15 Oct 2022 14:24:22 +0200
+Subject: [PATCH] Fix size_t overflow in sa_common.c (GHSL-2022-074)
+
+allocate_structures function located in sa_common.c insufficiently
+checks bounds before arithmetic multiplication allowing for an
+overflow in the size allocated for the buffer representing system
+activities.
+
+This patch checks that the post-multiplied value is not greater than
+UINT_MAX.
+
+Signed-off-by: Sebastien <seb@fedora-2.home>
+
+Upstream-Status: Backport [https://github.com/sysstat/sysstat/commit/9c4eaf150662ad40607923389d4519bc83b93540]
+CVE: CVE-2022-39377
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ common.c | 25 +++++++++++++++++++++++++
+ common.h | 2 ++
+ sa_common.c | 6 ++++++
+ 3 files changed, 33 insertions(+)
+
+diff --git a/common.c b/common.c
+index ddfe75d..28d475e 100644
+--- a/common.c
++++ b/common.c
+@@ -1528,4 +1528,29 @@ int parse_values(char *strargv, unsigned char bitmap[], int max_val, const char
+
+ return 0;
+ }
++
++/*
++ ***************************************************************************
++ * Check if the multiplication of the 3 values may be greater than UINT_MAX.
++ *
++ * IN:
++ * @val1 First value.
++ * @val2 Second value.
++ * @val3 Third value.
++ ***************************************************************************
++ */
++void check_overflow(size_t val1, size_t val2, size_t val3)
++{
++ if ((unsigned long long) val1 *
++ (unsigned long long) val2 *
++ (unsigned long long) val3 > UINT_MAX) {
++#ifdef DEBUG
++ fprintf(stderr, "%s: Overflow detected (%llu). Aborting...\n",
++ __FUNCTION__,
++ (unsigned long long) val1 * (unsigned long long) val2 * (unsigned long long) val3);
++#endif
++ exit(4);
++ }
++}
++
+ #endif /* SOURCE_SADC undefined */
+diff --git a/common.h b/common.h
+index 86905ba..75f837a 100644
+--- a/common.h
++++ b/common.h
+@@ -249,6 +249,8 @@ int get_wwnid_from_pretty
+ (char *, unsigned long long *, unsigned int *);
+
+ #ifndef SOURCE_SADC
++void check_overflow
++ (size_t, size_t, size_t);
+ int count_bits
+ (void *, int);
+ int count_csvalues
+diff --git a/sa_common.c b/sa_common.c
+index 8a03099..ff90c1f 100644
+--- a/sa_common.c
++++ b/sa_common.c
+@@ -452,7 +452,13 @@ void allocate_structures(struct activity *act[])
+ int i, j;
+
+ for (i = 0; i < NR_ACT; i++) {
++
+ if (act[i]->nr_ini > 0) {
++
++ /* Look for a possible overflow */
++ check_overflow((size_t) act[i]->msize, (size_t) act[i]->nr_ini,
++ (size_t) act[i]->nr2);
++
+ for (j = 0; j < 3; j++) {
+ SREALLOC(act[i]->buf[j], void,
+ (size_t) act[i]->msize * (size_t) act[i]->nr_ini * (size_t) act[i]->nr2);
+--
+2.25.1
+
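
The sysstat patch guards the msize * nr_ini * nr2 multiplication by computing it in 64-bit arithmetic and comparing against UINT_MAX before any allocation happens. A self-contained sketch of that guard, with made-up sizes for the demonstration:

    /* mul_guard.c - sketch: refuse a size computation whose product would not
     * fit in an unsigned int, mirroring check_overflow() in the patch.
     */
    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void check_overflow(size_t val1, size_t val2, size_t val3)
    {
        unsigned long long product = (unsigned long long)val1 *
                                     (unsigned long long)val2 *
                                     (unsigned long long)val3;
        if (product > UINT_MAX) {
            fprintf(stderr, "overflow detected (%llu), aborting\n", product);
            exit(4);                        /* same exit code the patch uses */
        }
    }

    int main(void)
    {
        check_overflow(128, 64, 4);         /* fine: 32768 */
        printf("first allocation size ok\n");

        check_overflow(65536, 65536, 2);    /* 2^33: caught, program exits(4) */
        printf("never reached\n");
        return 0;
    }
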
diff --git a/poky/meta/recipes-extended/sysstat/sysstat_12.2.1.bb b/poky/meta/recipes-extended/sysstat/sysstat_12.2.1.bb
index 2a90f89d25..2c0d5c8136 100644
--- a/poky/meta/recipes-extended/sysstat/sysstat_12.2.1.bb
+++ b/poky/meta/recipes-extended/sysstat/sysstat_12.2.1.bb
@@ -2,7 +2,9 @@ require sysstat.inc
LIC_FILES_CHKSUM = "file://COPYING;md5=a23a74b3f4caf9616230789d94217acb"
-SRC_URI += "file://0001-configure.in-remove-check-for-chkconfig.patch"
+SRC_URI += "file://0001-configure.in-remove-check-for-chkconfig.patch \
+ file://CVE-2022-39377.patch \
+ "
SRC_URI[md5sum] = "9dfff5fac24e35bd92fb7896debf2ffb"
SRC_URI[sha256sum] = "8edb0e19b514ac560a098a02933a4735b881296d61014db89bf80f05dd7a4732"
diff --git a/poky/meta/recipes-extended/tar/tar/CVE-2022-48303.patch b/poky/meta/recipes-extended/tar/tar/CVE-2022-48303.patch
new file mode 100644
index 0000000000..b2f40f3e64
--- /dev/null
+++ b/poky/meta/recipes-extended/tar/tar/CVE-2022-48303.patch
@@ -0,0 +1,43 @@
+From 3da78400eafcccb97e2f2fd4b227ea40d794ede8 Mon Sep 17 00:00:00 2001
+From: Sergey Poznyakoff <gray@gnu.org>
+Date: Sat, 11 Feb 2023 11:57:39 +0200
+Subject: Fix boundary checking in base-256 decoder
+
+* src/list.c (from_header): Base-256 encoding is at least 2 bytes
+long.
+
+Upstream-Status: Backport [see reference below]
+CVE: CVE-2022-48303
+
+Reference to upstream patch:
+https://savannah.gnu.org/bugs/?62387
+https://git.savannah.gnu.org/cgit/tar.git/patch/src/list.c?id=3da78400eafcccb97e2f2fd4b227ea40d794ede8
+
+Signed-off-by: Rodolfo Quesada Zumbado <rodolfo.zumbado@windriver.com>
+Signed-off-by: Joe Slater <joe.slater@windriver.com>
+---
+ src/list.c | 5 +++--
+ 1 file changed, 3 insertions(+), 2 deletions(-)Signed-off-by: Rodolfo Quesada Zumbado <rodolfo.zumbado@windriver.com>
+
+
+(limited to 'src/list.c')
+
+diff --git a/src/list.c b/src/list.c
+index 9fafc42..86bcfdd 100644
+--- a/src/list.c
++++ b/src/list.c
+@@ -881,8 +881,9 @@ from_header (char const *where0, size_t digs, char const *type,
+ where++;
+ }
+ }
+- else if (*where == '\200' /* positive base-256 */
+- || *where == '\377' /* negative base-256 */)
++ else if (where <= lim - 2
++ && (*where == '\200' /* positive base-256 */
++ || *where == '\377' /* negative base-256 */))
+ {
+ /* Parse base-256 output. A nonnegative number N is
+ represented as (256**DIGS)/2 + N; a negative number -N is
+--
+cgit v1.1
+
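
The tar fix only treats a field as base-256 when at least two bytes remain, since a base-256 number is never shorter than two bytes. Below is a small bounds-checked decoder for the positive base-256 form, written as an assumption-laden sketch rather than tar's from_header():

    /* base256.c - sketch: decode a tar base-256 numeric field only when at
     * least two bytes are present, the boundary the patch enforces.
     */
    #include <stdio.h>

    /* 0x80 marker byte, then a big-endian payload in the remaining bytes. */
    static long long decode_base256(const unsigned char *p, size_t len)
    {
        if (len < 2)                 /* base-256 encoding is at least 2 bytes long */
            return -1;
        if (p[0] != 0x80)            /* only the positive marker in this sketch */
            return -1;

        long long value = p[0] & 0x7f;
        for (size_t i = 1; i < len; i++)
            value = (value << 8) | p[i];
        return value;
    }

    int main(void)
    {
        unsigned char field[] = { 0x80, 0x00, 0x00, 0x12, 0x34 };
        unsigned char tiny[]  = { 0x80 };   /* 1 byte: previously read past the end */

        printf("field = %lld\n", decode_base256(field, sizeof(field)));  /* 4660 */
        printf("tiny  = %lld\n", decode_base256(tiny, sizeof(tiny)));    /* -1   */
        return 0;
    }
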
diff --git a/poky/meta/recipes-extended/tar/tar_1.32.bb b/poky/meta/recipes-extended/tar/tar_1.32.bb
index db1540dbd6..1246f01256 100644
--- a/poky/meta/recipes-extended/tar/tar_1.32.bb
+++ b/poky/meta/recipes-extended/tar/tar_1.32.bb
@@ -9,6 +9,7 @@ LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504"
SRC_URI = "${GNU_MIRROR}/tar/tar-${PV}.tar.bz2 \
file://musl_dirent.patch \
file://CVE-2021-20193.patch \
+ file://CVE-2022-48303.patch \
"
SRC_URI[md5sum] = "17917356fff5cb4bd3cd5a6c3e727b05"
diff --git a/poky/meta/recipes-extended/timezone/timezone.inc b/poky/meta/recipes-extended/timezone/timezone.inc
index d032fed356..1834665a1e 100644
--- a/poky/meta/recipes-extended/timezone/timezone.inc
+++ b/poky/meta/recipes-extended/timezone/timezone.inc
@@ -6,7 +6,7 @@ SECTION = "base"
LICENSE = "PD & BSD-3-Clause"
LIC_FILES_CHKSUM = "file://LICENSE;md5=c679c9d6b02bc2757b3eaf8f53c43fba"
-PV = "2022c"
+PV = "2022g"
SRC_URI =" http://www.iana.org/time-zones/repository/releases/tzcode${PV}.tar.gz;name=tzcode \
http://www.iana.org/time-zones/repository/releases/tzdata${PV}.tar.gz;name=tzdata \
@@ -14,6 +14,5 @@ SRC_URI =" http://www.iana.org/time-zones/repository/releases/tzcode${PV}.tar.gz
UPSTREAM_CHECK_URI = "http://www.iana.org/time-zones"
-SRC_URI[tzcode.sha256sum] = "3e7ce1f3620cc0481907c7e074d69910793285bffe0ca331ef1a6d1ae3ea90cc"
-SRC_URI[tzdata.sha256sum] = "6974f4e348bf2323274b56dff9e7500247e3159eaa4b485dfa0cd66e75c14bfe"
-
+SRC_URI[tzcode.sha256sum] = "9610bb0b9656ff404c361a41f3286da53064b5469d84f00c9cb2314c8614da74"
+SRC_URI[tzdata.sha256sum] = "4491db8281ae94a84d939e427bdd83dc389f26764d27d9a5c52d782c16764478"
diff --git a/poky/meta/recipes-graphics/cairo/cairo/CVE-2019-6461.patch b/poky/meta/recipes-graphics/cairo/cairo/CVE-2019-6461.patch
index 5232cf70c6..a2dba6cb20 100644
--- a/poky/meta/recipes-graphics/cairo/cairo/CVE-2019-6461.patch
+++ b/poky/meta/recipes-graphics/cairo/cairo/CVE-2019-6461.patch
@@ -1,19 +1,20 @@
-There is a potential infinite-loop in function _arc_error_normalized().
+There is an assertion in function _cairo_arc_in_direction().
CVE: CVE-2019-6461
Upstream-Status: Pending
Signed-off-by: Ross Burton <ross.burton@intel.com>
diff --git a/src/cairo-arc.c b/src/cairo-arc.c
-index 390397bae..f9249dbeb 100644
+index 390397bae..1bde774a4 100644
--- a/src/cairo-arc.c
+++ b/src/cairo-arc.c
-@@ -99,7 +99,7 @@ _arc_max_angle_for_tolerance_normalized (double tolerance)
- do {
- angle = M_PI / i++;
- error = _arc_error_normalized (angle);
-- } while (error > tolerance);
-+ } while (error > tolerance && error > __DBL_EPSILON__);
+@@ -186,7 +186,8 @@ _cairo_arc_in_direction (cairo_t *cr,
+ if (cairo_status (cr))
+ return;
- return angle;
- }
+- assert (angle_max >= angle_min);
++ if (angle_max < angle_min)
++ return;
+
+ if (angle_max - angle_min > 2 * M_PI * MAX_FULL_CIRCLES) {
+ angle_max = fmod (angle_max - angle_min, 2 * M_PI);
diff --git a/poky/meta/recipes-graphics/cairo/cairo/CVE-2019-6462.patch b/poky/meta/recipes-graphics/cairo/cairo/CVE-2019-6462.patch
index 4e4598c5b5..7c3209291b 100644
--- a/poky/meta/recipes-graphics/cairo/cairo/CVE-2019-6462.patch
+++ b/poky/meta/recipes-graphics/cairo/cairo/CVE-2019-6462.patch
@@ -1,20 +1,40 @@
-There is an assertion in function _cairo_arc_in_direction().
-
CVE: CVE-2019-6462
-Upstream-Status: Pending
-Signed-off-by: Ross Burton <ross.burton@intel.com>
+Upstream-Status: Backport
+Signed-off-by: Quentin Schulz <quentin.schulz@theobroma-systems.com>
+
+From ab2c5ee21e5f3d3ee4b3f67cfcd5811a4f99c3a0 Mon Sep 17 00:00:00 2001
+From: Heiko Lewin <hlewin@gmx.de>
+Date: Sun, 1 Aug 2021 11:16:03 +0000
+Subject: [PATCH] _arc_max_angle_for_tolerance_normalized: fix infinite loop
+
+---
+ src/cairo-arc.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/src/cairo-arc.c b/src/cairo-arc.c
-index 390397bae..1bde774a4 100644
+index 390397bae..1c891d1a0 100644
--- a/src/cairo-arc.c
+++ b/src/cairo-arc.c
-@@ -186,7 +186,8 @@ _cairo_arc_in_direction (cairo_t *cr,
- if (cairo_status (cr))
- return;
+@@ -90,16 +90,18 @@ _arc_max_angle_for_tolerance_normalized (double tolerance)
+ { M_PI / 11.0, 9.81410988043554039085e-09 },
+ };
+ int table_size = ARRAY_LENGTH (table);
++ const int max_segments = 1000; /* this value is chosen arbitrarily. this gives an error of about 1.74909e-20 */
-- assert (angle_max >= angle_min);
-+ if (angle_max < angle_min)
-+ return;
+ for (i = 0; i < table_size; i++)
+ if (table[i].error < tolerance)
+ return table[i].angle;
- if (angle_max - angle_min > 2 * M_PI * MAX_FULL_CIRCLES) {
- angle_max = fmod (angle_max - angle_min, 2 * M_PI);
+ ++i;
++
+ do {
+ angle = M_PI / i++;
+ error = _arc_error_normalized (angle);
+- } while (error > tolerance);
++ } while (error > tolerance && i < max_segments);
+
+ return angle;
+ }
+--
+2.38.1
+
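
The CVE-2019-6462 fix caps the error-refinement loop at an arbitrary segment count so a tolerance the error can never reach no longer hangs the process. A toy sketch of that loop shape, with a stand-in error function (link with -lm):

    /* bounded_refine.c - toy sketch: an iterative error-refinement loop with
     * an explicit iteration cap. The error function is a stand-in, not
     * cairo's _arc_error_normalized().
     */
    #include <math.h>
    #include <stdio.h>

    static double arc_error(double angle)
    {
        return fabs(angle) / 100.0;     /* toy model: error shrinks with the angle */
    }

    int main(void)
    {
        const double tolerance = 0.0;   /* a zero tolerance used to hang the loop */
        const int max_segments = 1000;  /* arbitrary cap, as in the patch */
        int i = 1;
        double angle, error;

        do {
            angle = M_PI / i++;
            error = arc_error(angle);
        } while (error > tolerance && i < max_segments);

        printf("stopped after %d segments (error=%g)\n", i - 1, error);
        return 0;
    }
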
diff --git a/poky/meta/recipes-graphics/harfbuzz/harfbuzz/CVE-2023-25193-pre0.patch b/poky/meta/recipes-graphics/harfbuzz/harfbuzz/CVE-2023-25193-pre0.patch
new file mode 100644
index 0000000000..90d4cfefb4
--- /dev/null
+++ b/poky/meta/recipes-graphics/harfbuzz/harfbuzz/CVE-2023-25193-pre0.patch
@@ -0,0 +1,335 @@
+From 3122c2cdc45a964efedad8953a2df67205c3e3a8 Mon Sep 17 00:00:00 2001
+From: Behdad Esfahbod <behdad@behdad.org>
+Date: Sat, 4 Dec 2021 19:50:33 -0800
+Subject: [PATCH] [buffer] Add HB_GLYPH_FLAG_UNSAFE_TO_CONCAT
+
+Fixes https://github.com/harfbuzz/harfbuzz/issues/1463
+Upstream-Status: Backport from [https://github.com/harfbuzz/harfbuzz/commit/3122c2cdc45a964efedad8953a2df67205c3e3a8]
+Comment1: To backport the fix for CVE-2023-25193, add a definition for HB_GLYPH_FLAG_UNSAFE_TO_CONCAT. This patch is needed along with CVE-2023-25193-pre1.patch for successful porting.
+Signed-off-by: Siddharth Doshi <sdoshi@mvista.com>
+---
+ src/hb-buffer.cc | 10 ++---
+ src/hb-buffer.h | 76 ++++++++++++++++++++++++++++++------
+ src/hb-buffer.hh | 33 ++++++++++------
+ src/hb-ot-layout-gsubgpos.hh | 39 +++++++++++++++---
+ src/hb-ot-shape.cc | 8 +---
+ 5 files changed, 124 insertions(+), 42 deletions(-)
+
+diff --git a/src/hb-buffer.cc b/src/hb-buffer.cc
+index 6131c86..bba5eae 100644
+--- a/src/hb-buffer.cc
++++ b/src/hb-buffer.cc
+@@ -610,14 +610,14 @@ done:
+ }
+
+ void
+-hb_buffer_t::unsafe_to_break_impl (unsigned int start, unsigned int end)
++hb_buffer_t::unsafe_to_break_impl (unsigned int start, unsigned int end, hb_mask_t mask)
+ {
+ unsigned int cluster = (unsigned int) -1;
+ cluster = _unsafe_to_break_find_min_cluster (info, start, end, cluster);
+- _unsafe_to_break_set_mask (info, start, end, cluster);
++ _unsafe_to_break_set_mask (info, start, end, cluster, mask);
+ }
+ void
+-hb_buffer_t::unsafe_to_break_from_outbuffer (unsigned int start, unsigned int end)
++hb_buffer_t::unsafe_to_break_from_outbuffer (unsigned int start, unsigned int end, hb_mask_t mask)
+ {
+ if (!have_output)
+ {
+@@ -631,8 +631,8 @@ hb_buffer_t::unsafe_to_break_from_outbuffer (unsigned int start, unsigned int en
+ unsigned int cluster = (unsigned int) -1;
+ cluster = _unsafe_to_break_find_min_cluster (out_info, start, out_len, cluster);
+ cluster = _unsafe_to_break_find_min_cluster (info, idx, end, cluster);
+- _unsafe_to_break_set_mask (out_info, start, out_len, cluster);
+- _unsafe_to_break_set_mask (info, idx, end, cluster);
++ _unsafe_to_break_set_mask (out_info, start, out_len, cluster, mask);
++ _unsafe_to_break_set_mask (info, idx, end, cluster, mask);
+ }
+
+ void
+diff --git a/src/hb-buffer.h b/src/hb-buffer.h
+index d5cb746..42dc92a 100644
+--- a/src/hb-buffer.h
++++ b/src/hb-buffer.h
+@@ -77,26 +77,76 @@ typedef struct hb_glyph_info_t
+ * @HB_GLYPH_FLAG_UNSAFE_TO_BREAK: Indicates that if input text is broken at the
+ * beginning of the cluster this glyph is part of,
+ * then both sides need to be re-shaped, as the
+- * result might be different. On the flip side,
+- * it means that when this flag is not present,
+- * then it's safe to break the glyph-run at the
+- * beginning of this cluster, and the two sides
+- * represent the exact same result one would get
+- * if breaking input text at the beginning of
+- * this cluster and shaping the two sides
+- * separately. This can be used to optimize
+- * paragraph layout, by avoiding re-shaping
+- * of each line after line-breaking, or limiting
+- * the reshaping to a small piece around the
+- * breaking point only.
++ * result might be different.
++ *
++ * On the flip side, it means that when this
++ * flag is not present, then it is safe to break
++ * the glyph-run at the beginning of this
++ * cluster, and the two sides will represent the
++ * exact same result one would get if breaking
++ * input text at the beginning of this cluster
++ * and shaping the two sides separately.
++ *
++ * This can be used to optimize paragraph
++ * layout, by avoiding re-shaping of each line
++ * after line-breaking.
++ *
++ * @HB_GLYPH_FLAG_UNSAFE_TO_CONCAT: Indicates that if input text is changed on one
++ * side of the beginning of the cluster this glyph
++ * is part of, then the shaping results for the
++ * other side might change.
++ *
++ * Note that the absence of this flag will NOT by
++ * itself mean that it IS safe to concat text.
++ * Only two pieces of text both of which clear of
++ * this flag can be concatenated safely.
++ *
++ * This can be used to optimize paragraph
++ * layout, by avoiding re-shaping of each line
++ * after line-breaking, by limiting the
++ * reshaping to a small piece around the
++ * breaking position only, even if the breaking
++ * position carries the
++ * #HB_GLYPH_FLAG_UNSAFE_TO_BREAK or when
++ * hyphenation or other text transformation
++ * happens at line-break position, in the following
++ * way:
++ *
++ * 1. Iterate back from the line-break position till
++ * the first cluster start position that is
++ * NOT unsafe-to-concat, 2. shape the segment from
++ * there till the end of line, 3. check whether the
++ * resulting glyph-run also is clear of the
++ * unsafe-to-concat at its start-of-text position;
++ * if it is, just splice it into place and the line
++ * is shaped; If not, move on to a position further
++ * back that is clear of unsafe-to-concat and retry
++ * from there, and repeat.
++ *
++ * At the start of next line a similar algorithm can
++ * be implemented. A slight complication will arise,
++ * because while our buffer API has a way to
++ * return flags for position corresponding to
++ * start-of-text, there is currently no position
++ * corresponding to end-of-text. This limitation
++ * can be alleviated by shaping more text than needed
++ * and looking for unsafe-to-concat flag within text
++ * clusters.
++ *
++ * The #HB_GLYPH_FLAG_UNSAFE_TO_BREAK flag will
++ * always imply this flag.
++ *
++ * Since: REPLACEME
++ *
+ * @HB_GLYPH_FLAG_DEFINED: All the currently defined flags.
+ *
+ * Since: 1.5.0
+ */
+ typedef enum { /*< flags >*/
+ HB_GLYPH_FLAG_UNSAFE_TO_BREAK = 0x00000001,
++ HB_GLYPH_FLAG_UNSAFE_TO_CONCAT = 0x00000002,
+
+- HB_GLYPH_FLAG_DEFINED = 0x00000001 /* OR of all defined flags */
++ HB_GLYPH_FLAG_DEFINED = 0x00000003 /* OR of all defined flags */
+ } hb_glyph_flags_t;
+
+ HB_EXTERN hb_glyph_flags_t
+diff --git a/src/hb-buffer.hh b/src/hb-buffer.hh
+index b5596d9..beac7b6 100644
+--- a/src/hb-buffer.hh
++++ b/src/hb-buffer.hh
+@@ -67,8 +67,8 @@ enum hb_buffer_scratch_flags_t {
+ HB_BUFFER_SCRATCH_FLAG_HAS_DEFAULT_IGNORABLES = 0x00000002u,
+ HB_BUFFER_SCRATCH_FLAG_HAS_SPACE_FALLBACK = 0x00000004u,
+ HB_BUFFER_SCRATCH_FLAG_HAS_GPOS_ATTACHMENT = 0x00000008u,
+- HB_BUFFER_SCRATCH_FLAG_HAS_UNSAFE_TO_BREAK = 0x00000010u,
+- HB_BUFFER_SCRATCH_FLAG_HAS_CGJ = 0x00000020u,
++ HB_BUFFER_SCRATCH_FLAG_HAS_CGJ = 0x00000010u,
++ HB_BUFFER_SCRATCH_FLAG_HAS_GLYPH_FLAGS = 0x00000020u,
+
+ /* Reserved for complex shapers' internal use. */
+ HB_BUFFER_SCRATCH_FLAG_COMPLEX0 = 0x01000000u,
+@@ -324,8 +324,19 @@ struct hb_buffer_t
+ return;
+ unsafe_to_break_impl (start, end);
+ }
+- HB_INTERNAL void unsafe_to_break_impl (unsigned int start, unsigned int end);
+- HB_INTERNAL void unsafe_to_break_from_outbuffer (unsigned int start, unsigned int end);
++ void unsafe_to_concat (unsigned int start,
++ unsigned int end)
++ {
++ if (end - start < 2)
++ return;
++ unsafe_to_break_impl (start, end, HB_GLYPH_FLAG_UNSAFE_TO_CONCAT);
++ }
++ HB_INTERNAL void unsafe_to_break_impl (unsigned int start, unsigned int end,
++ hb_mask_t mask = HB_GLYPH_FLAG_UNSAFE_TO_BREAK | HB_GLYPH_FLAG_UNSAFE_TO_CONCAT);
++ HB_INTERNAL void unsafe_to_break_from_outbuffer (unsigned int start, unsigned int end,
++ hb_mask_t mask = HB_GLYPH_FLAG_UNSAFE_TO_BREAK | HB_GLYPH_FLAG_UNSAFE_TO_CONCAT);
++ void unsafe_to_concat_from_outbuffer (unsigned int start, unsigned int end)
++ { unsafe_to_break_from_outbuffer (start, end, HB_GLYPH_FLAG_UNSAFE_TO_CONCAT); }
+
+
+ /* Internal methods */
+@@ -377,12 +388,7 @@ struct hb_buffer_t
+ set_cluster (hb_glyph_info_t &inf, unsigned int cluster, unsigned int mask = 0)
+ {
+ if (inf.cluster != cluster)
+- {
+- if (mask & HB_GLYPH_FLAG_UNSAFE_TO_BREAK)
+- inf.mask |= HB_GLYPH_FLAG_UNSAFE_TO_BREAK;
+- else
+- inf.mask &= ~HB_GLYPH_FLAG_UNSAFE_TO_BREAK;
+- }
++ inf.mask = (inf.mask & ~HB_GLYPH_FLAG_DEFINED) | (mask & HB_GLYPH_FLAG_DEFINED);
+ inf.cluster = cluster;
+ }
+
+@@ -398,13 +404,14 @@ struct hb_buffer_t
+ void
+ _unsafe_to_break_set_mask (hb_glyph_info_t *infos,
+ unsigned int start, unsigned int end,
+- unsigned int cluster)
++ unsigned int cluster,
++ hb_mask_t mask)
+ {
+ for (unsigned int i = start; i < end; i++)
+ if (cluster != infos[i].cluster)
+ {
+- scratch_flags |= HB_BUFFER_SCRATCH_FLAG_HAS_UNSAFE_TO_BREAK;
+- infos[i].mask |= HB_GLYPH_FLAG_UNSAFE_TO_BREAK;
++ scratch_flags |= HB_BUFFER_SCRATCH_FLAG_HAS_GLYPH_FLAGS;
++ infos[i].mask |= mask;
+ }
+ }
+
+diff --git a/src/hb-ot-layout-gsubgpos.hh b/src/hb-ot-layout-gsubgpos.hh
+index 579d178..a6ca456 100644
+--- a/src/hb-ot-layout-gsubgpos.hh
++++ b/src/hb-ot-layout-gsubgpos.hh
+@@ -369,7 +369,7 @@ struct hb_ot_apply_context_t :
+ may_skip (const hb_glyph_info_t &info) const
+ { return matcher.may_skip (c, info); }
+
+- bool next ()
++ bool next (unsigned *unsafe_to = nullptr)
+ {
+ assert (num_items > 0);
+ while (idx + num_items < end)
+@@ -392,11 +392,17 @@ struct hb_ot_apply_context_t :
+ }
+
+ if (skip == matcher_t::SKIP_NO)
++ {
++ if (unsafe_to)
++ *unsafe_to = idx + 1;
+ return false;
++ }
+ }
++ if (unsafe_to)
++ *unsafe_to = end;
+ return false;
+ }
+- bool prev ()
++ bool prev (unsigned *unsafe_from = nullptr)
+ {
+ assert (num_items > 0);
+ while (idx > num_items - 1)
+@@ -419,8 +425,14 @@ struct hb_ot_apply_context_t :
+ }
+
+ if (skip == matcher_t::SKIP_NO)
++ {
++ if (unsafe_from)
++ *unsafe_from = hb_max (1u, idx) - 1u;
+ return false;
++ }
+ }
++ if (unsafe_from)
++ *unsafe_from = 0;
+ return false;
+ }
+
+@@ -834,7 +846,12 @@ static inline bool match_input (hb_ot_apply_context_t *c,
+ match_positions[0] = buffer->idx;
+ for (unsigned int i = 1; i < count; i++)
+ {
+- if (!skippy_iter.next ()) return_trace (false);
++ unsigned unsafe_to;
++ if (!skippy_iter.next (&unsafe_to))
++ {
++ c->buffer->unsafe_to_concat (c->buffer->idx, unsafe_to);
++ return_trace (false);
++ }
+
+ match_positions[i] = skippy_iter.idx;
+
+@@ -1022,8 +1039,14 @@ static inline bool match_backtrack (hb_ot_apply_context_t *c,
+ skippy_iter.set_match_func (match_func, match_data, backtrack);
+
+ for (unsigned int i = 0; i < count; i++)
+- if (!skippy_iter.prev ())
++ {
++ unsigned unsafe_from;
++ if (!skippy_iter.prev (&unsafe_from))
++ {
++ c->buffer->unsafe_to_concat_from_outbuffer (unsafe_from, c->buffer->idx);
+ return_trace (false);
++ }
++ }
+
+ *match_start = skippy_iter.idx;
+
+@@ -1045,8 +1068,14 @@ static inline bool match_lookahead (hb_ot_apply_context_t *c,
+ skippy_iter.set_match_func (match_func, match_data, lookahead);
+
+ for (unsigned int i = 0; i < count; i++)
+- if (!skippy_iter.next ())
++ {
++ unsigned unsafe_to;
++ if (!skippy_iter.next (&unsafe_to))
++ {
++ c->buffer->unsafe_to_concat (c->buffer->idx + offset, unsafe_to);
+ return_trace (false);
++ }
++ }
+
+ *end_index = skippy_iter.idx + 1;
+
+diff --git a/src/hb-ot-shape.cc b/src/hb-ot-shape.cc
+index 5d9a70c..5d10b30 100644
+--- a/src/hb-ot-shape.cc
++++ b/src/hb-ot-shape.cc
+@@ -1008,7 +1008,7 @@ hb_propagate_flags (hb_buffer_t *buffer)
+ /* Propagate cluster-level glyph flags to be the same on all cluster glyphs.
+ * Simplifies using them. */
+
+- if (!(buffer->scratch_flags & HB_BUFFER_SCRATCH_FLAG_HAS_UNSAFE_TO_BREAK))
++ if (!(buffer->scratch_flags & HB_BUFFER_SCRATCH_FLAG_HAS_GLYPH_FLAGS))
+ return;
+
+ hb_glyph_info_t *info = buffer->info;
+@@ -1017,11 +1017,7 @@ hb_propagate_flags (hb_buffer_t *buffer)
+ {
+ unsigned int mask = 0;
+ for (unsigned int i = start; i < end; i++)
+- if (info[i].mask & HB_GLYPH_FLAG_UNSAFE_TO_BREAK)
+- {
+- mask = HB_GLYPH_FLAG_UNSAFE_TO_BREAK;
+- break;
+- }
++ mask |= info[i].mask & HB_GLYPH_FLAG_DEFINED;
+ if (mask)
+ for (unsigned int i = start; i < end; i++)
+ info[i].mask |= mask;
+--
+2.25.1
+
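The long documentation comment added to hb-buffer.h above explains how a layout engine can consult the per-glyph UNSAFE_TO_BREAK/UNSAFE_TO_CONCAT flags to limit re-shaping around a line break. As a rough illustration of the consumer side only (not part of the patch), the sketch below shapes a run and prints the flags; it assumes a HarfBuzz build that already carries this backport so that HB_GLYPH_FLAG_UNSAFE_TO_CONCAT is declared, and it leaves creation of the hb_font_t to the caller.

/* Illustrative sketch: shape a run and report which glyphs are flagged
 * unsafe to break or concat.  Not part of the patch above. */
#include <hb.h>
#include <stdio.h>

static void report_flags (hb_font_t *font, const char *text)
{
  hb_buffer_t *buf = hb_buffer_create ();
  hb_buffer_add_utf8 (buf, text, -1, 0, -1);
  hb_buffer_guess_segment_properties (buf);
  hb_shape (font, buf, NULL, 0);

  unsigned int len;
  hb_glyph_info_t *info = hb_buffer_get_glyph_infos (buf, &len);
  for (unsigned int i = 0; i < len; i++)
  {
    hb_glyph_flags_t flags = hb_glyph_info_get_glyph_flags (&info[i]);
    printf ("glyph %u cluster %u%s%s\n", i, (unsigned) info[i].cluster,
            (flags & HB_GLYPH_FLAG_UNSAFE_TO_BREAK)  ? " unsafe-to-break"  : "",
            (flags & HB_GLYPH_FLAG_UNSAFE_TO_CONCAT) ? " unsafe-to-concat" : "");
  }
  hb_buffer_destroy (buf);
}

A line-breaking engine would typically walk info[] at candidate break clusters and only re-shape the surrounding segment when one of these flags is set.
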
diff --git a/poky/meta/recipes-graphics/harfbuzz/harfbuzz/CVE-2023-25193-pre1.patch b/poky/meta/recipes-graphics/harfbuzz/harfbuzz/CVE-2023-25193-pre1.patch
new file mode 100644
index 0000000000..4994e0ef68
--- /dev/null
+++ b/poky/meta/recipes-graphics/harfbuzz/harfbuzz/CVE-2023-25193-pre1.patch
@@ -0,0 +1,135 @@
+From b29fbd16fa82b82bdf0dcb2f13a63f7dc23cf324 Mon Sep 17 00:00:00 2001
+From: Behdad Esfahbod <behdad@behdad.org>
+Date: Mon, 6 Feb 2023 13:08:52 -0700
+Subject: [PATCH] [gsubgpos] Refactor skippy_iter.match()
+
+Upstream-Status: Backport from [https://github.com/harfbuzz/harfbuzz/commit/b29fbd16fa82b82bdf0dcb2f13a63f7dc23cf324]
+Comment1: To backport the fix for CVE-2023-25193, add definitions for MATCH, NOT_MATCH and SKIP.
+Signed-off-by: Siddharth Doshi <sdoshi@mvista.com>
+---
+ src/hb-ot-layout-gsubgpos.hh | 94 +++++++++++++++++++++---------------
+ 1 file changed, 54 insertions(+), 40 deletions(-)
+
+diff --git a/src/hb-ot-layout-gsubgpos.hh b/src/hb-ot-layout-gsubgpos.hh
+index a6ca456..5a7e564 100644
+--- a/src/hb-ot-layout-gsubgpos.hh
++++ b/src/hb-ot-layout-gsubgpos.hh
+@@ -369,33 +369,52 @@ struct hb_ot_apply_context_t :
+ may_skip (const hb_glyph_info_t &info) const
+ { return matcher.may_skip (c, info); }
+
++ enum match_t {
++ MATCH,
++ NOT_MATCH,
++ SKIP
++ };
++
++ match_t match (hb_glyph_info_t &info)
++ {
++ matcher_t::may_skip_t skip = matcher.may_skip (c, info);
++ if (unlikely (skip == matcher_t::SKIP_YES))
++ return SKIP;
++
++ matcher_t::may_match_t match = matcher.may_match (info, match_glyph_data);
++ if (match == matcher_t::MATCH_YES ||
++ (match == matcher_t::MATCH_MAYBE &&
++ skip == matcher_t::SKIP_NO))
++ return MATCH;
++
++ if (skip == matcher_t::SKIP_NO)
++ return NOT_MATCH;
++
++ return SKIP;
++ }
++
+ bool next (unsigned *unsafe_to = nullptr)
+ {
+ assert (num_items > 0);
+ while (idx + num_items < end)
+ {
+ idx++;
+- const hb_glyph_info_t &info = c->buffer->info[idx];
+-
+- matcher_t::may_skip_t skip = matcher.may_skip (c, info);
+- if (unlikely (skip == matcher_t::SKIP_YES))
+- continue;
+-
+- matcher_t::may_match_t match = matcher.may_match (info, match_glyph_data);
+- if (match == matcher_t::MATCH_YES ||
+- (match == matcher_t::MATCH_MAYBE &&
+- skip == matcher_t::SKIP_NO))
+- {
+- num_items--;
+- if (match_glyph_data) match_glyph_data++;
+- return true;
+- }
+-
+- if (skip == matcher_t::SKIP_NO)
++ switch (match (c->buffer->info[idx]))
+ {
+- if (unsafe_to)
+- *unsafe_to = idx + 1;
+- return false;
++ case MATCH:
++ {
++ num_items--;
++ if (match_glyph_data) match_glyph_data++;
++ return true;
++ }
++ case NOT_MATCH:
++ {
++ if (unsafe_to)
++ *unsafe_to = idx + 1;
++ return false;
++ }
++ case SKIP:
++ continue;
+ }
+ }
+ if (unsafe_to)
+@@ -408,27 +427,22 @@ struct hb_ot_apply_context_t :
+ while (idx > num_items - 1)
+ {
+ idx--;
+- const hb_glyph_info_t &info = c->buffer->out_info[idx];
+-
+- matcher_t::may_skip_t skip = matcher.may_skip (c, info);
+- if (unlikely (skip == matcher_t::SKIP_YES))
+- continue;
+-
+- matcher_t::may_match_t match = matcher.may_match (info, match_glyph_data);
+- if (match == matcher_t::MATCH_YES ||
+- (match == matcher_t::MATCH_MAYBE &&
+- skip == matcher_t::SKIP_NO))
++ switch (match (c->buffer->out_info[idx]))
+ {
+- num_items--;
+- if (match_glyph_data) match_glyph_data++;
+- return true;
+- }
+-
+- if (skip == matcher_t::SKIP_NO)
+- {
+- if (unsafe_from)
+- *unsafe_from = hb_max (1u, idx) - 1u;
+- return false;
++ case MATCH:
++ {
++ num_items--;
++ if (match_glyph_data) match_glyph_data++;
++ return true;
++ }
++ case NOT_MATCH:
++ {
++ if (unsafe_from)
++ *unsafe_from = hb_max (1u, idx) - 1u;
++ return false;
++ }
++ case SKIP:
++ continue;
+ }
+ }
+ if (unsafe_from)
+--
+2.25.1
+
diff --git a/poky/meta/recipes-graphics/harfbuzz/harfbuzz/CVE-2023-25193.patch b/poky/meta/recipes-graphics/harfbuzz/harfbuzz/CVE-2023-25193.patch
new file mode 100644
index 0000000000..8243117551
--- /dev/null
+++ b/poky/meta/recipes-graphics/harfbuzz/harfbuzz/CVE-2023-25193.patch
@@ -0,0 +1,179 @@
+From 8708b9e081192786c027bb7f5f23d76dbe5c19e8 Mon Sep 17 00:00:00 2001
+From: Behdad Esfahbod <behdad@behdad.org>
+Date: Mon, 6 Feb 2023 14:51:25 -0700
+Subject: [PATCH] [GPOS] Avoid O(n^2) behavior in mark-attachment
+
+Upstream-Status: Backport from [https://github.com/harfbuzz/harfbuzz/commit/8708b9e081192786c027bb7f5f23d76dbe5c19e8]
+Comment1: The original patch [https://github.com/harfbuzz/harfbuzz/commit/85be877925ddbf34f74a1229f3ca1716bb6170dc] caused a regression and was reverted. This patch completes the fix.
+Comment2: The upstream patch modifies MarkBasePosFormat1.hh and MarkLigPosFormat1.hh, which were split out of hb-ot-layout-gpos-table.hh in https://github.com/harfbuzz/harfbuzz/commit/197d9a5c994eb41c8c89b7b958b26b1eacfeeb00; here the changes are applied to hb-ot-layout-gpos-table.hh instead.
+CVE: CVE-2023-25193
+Signed-off-by: Siddharth Doshi <sdoshi@mvista.com>
+---
+ src/hb-ot-layout-gpos-table.hh | 101 ++++++++++++++++++++++++---------
+ src/hb-ot-layout-gsubgpos.hh | 5 +-
+ 2 files changed, 77 insertions(+), 29 deletions(-)
+
+diff --git a/src/hb-ot-layout-gpos-table.hh b/src/hb-ot-layout-gpos-table.hh
+index 024312d..88df13d 100644
+--- a/src/hb-ot-layout-gpos-table.hh
++++ b/src/hb-ot-layout-gpos-table.hh
+@@ -1458,6 +1458,25 @@ struct MarkBasePosFormat1
+
+ const Coverage &get_coverage () const { return this+markCoverage; }
+
++ static inline bool accept (hb_buffer_t *buffer, unsigned idx)
++ {
++ /* We only want to attach to the first of a MultipleSubst sequence.
++ * https://github.com/harfbuzz/harfbuzz/issues/740
++ * Reject others...
++ * ...but stop if we find a mark in the MultipleSubst sequence:
++ * https://github.com/harfbuzz/harfbuzz/issues/1020 */
++ return !_hb_glyph_info_multiplied (&buffer->info[idx]) ||
++ 0 == _hb_glyph_info_get_lig_comp (&buffer->info[idx]) ||
++ (idx == 0 ||
++ _hb_glyph_info_is_mark (&buffer->info[idx - 1]) ||
++ !_hb_glyph_info_multiplied (&buffer->info[idx - 1]) ||
++ _hb_glyph_info_get_lig_id (&buffer->info[idx]) !=
++ _hb_glyph_info_get_lig_id (&buffer->info[idx - 1]) ||
++ _hb_glyph_info_get_lig_comp (&buffer->info[idx]) !=
++ _hb_glyph_info_get_lig_comp (&buffer->info[idx - 1]) + 1
++ );
++ }
++
+ bool apply (hb_ot_apply_context_t *c) const
+ {
+ TRACE_APPLY (this);
+@@ -1465,37 +1484,46 @@ struct MarkBasePosFormat1
+ unsigned int mark_index = (this+markCoverage).get_coverage (buffer->cur().codepoint);
+ if (likely (mark_index == NOT_COVERED)) return_trace (false);
+
+- /* Now we search backwards for a non-mark glyph */
++ /* Now we search backwards for a non-mark glyph.
++ * We don't use skippy_iter.prev() to avoid O(n^2) behavior. */
++
+ hb_ot_apply_context_t::skipping_iterator_t &skippy_iter = c->iter_input;
+- skippy_iter.reset (buffer->idx, 1);
+ skippy_iter.set_lookup_props (LookupFlag::IgnoreMarks);
+- do {
+- if (!skippy_iter.prev ()) return_trace (false);
+- /* We only want to attach to the first of a MultipleSubst sequence.
+- * https://github.com/harfbuzz/harfbuzz/issues/740
+- * Reject others...
+- * ...but stop if we find a mark in the MultipleSubst sequence:
+- * https://github.com/harfbuzz/harfbuzz/issues/1020 */
+- if (!_hb_glyph_info_multiplied (&buffer->info[skippy_iter.idx]) ||
+- 0 == _hb_glyph_info_get_lig_comp (&buffer->info[skippy_iter.idx]) ||
+- (skippy_iter.idx == 0 ||
+- _hb_glyph_info_is_mark (&buffer->info[skippy_iter.idx - 1]) ||
+- _hb_glyph_info_get_lig_id (&buffer->info[skippy_iter.idx]) !=
+- _hb_glyph_info_get_lig_id (&buffer->info[skippy_iter.idx - 1]) ||
+- _hb_glyph_info_get_lig_comp (&buffer->info[skippy_iter.idx]) !=
+- _hb_glyph_info_get_lig_comp (&buffer->info[skippy_iter.idx - 1]) + 1
+- ))
+- break;
+- skippy_iter.reject ();
+- } while (true);
++ unsigned j;
++ for (j = buffer->idx; j > c->last_base_until; j--)
++ {
++ auto match = skippy_iter.match (buffer->info[j - 1]);
++ if (match == skippy_iter.MATCH)
++ {
++ if (!accept (buffer, j - 1))
++ match = skippy_iter.SKIP;
++ }
++ if (match == skippy_iter.MATCH)
++ {
++ c->last_base = (signed) j - 1;
++ break;
++ }
++ }
++ c->last_base_until = buffer->idx;
++ if (c->last_base == -1)
++ {
++ buffer->unsafe_to_concat_from_outbuffer (0, buffer->idx + 1);
++ return_trace (false);
++ }
++
++ unsigned idx = (unsigned) c->last_base;
+
+ /* Checking that matched glyph is actually a base glyph by GDEF is too strong; disabled */
+- //if (!_hb_glyph_info_is_base_glyph (&buffer->info[skippy_iter.idx])) { return_trace (false); }
++ //if (!_hb_glyph_info_is_base_glyph (&buffer->info[idx])) { return_trace (false); }
+
+- unsigned int base_index = (this+baseCoverage).get_coverage (buffer->info[skippy_iter.idx].codepoint);
++ unsigned int base_index = (this+baseCoverage).get_coverage (buffer->info[idx].codepoint);
+ if (base_index == NOT_COVERED) return_trace (false);
++ {
++ buffer->unsafe_to_concat_from_outbuffer (idx, buffer->idx + 1);
++ return_trace (false);
++ }
+
+- return_trace ((this+markArray).apply (c, mark_index, base_index, this+baseArray, classCount, skippy_iter.idx));
++ return_trace ((this+markArray).apply (c, mark_index, base_index, this+baseArray, classCount, idx));
+ }
+
+ bool subset (hb_subset_context_t *c) const
+@@ -1587,15 +1615,32 @@ struct MarkLigPosFormat1
+ if (likely (mark_index == NOT_COVERED)) return_trace (false);
+
+ /* Now we search backwards for a non-mark glyph */
++
+ hb_ot_apply_context_t::skipping_iterator_t &skippy_iter = c->iter_input;
+- skippy_iter.reset (buffer->idx, 1);
+ skippy_iter.set_lookup_props (LookupFlag::IgnoreMarks);
+- if (!skippy_iter.prev ()) return_trace (false);
++
++ unsigned j;
++ for (j = buffer->idx; j > c->last_base_until; j--)
++ {
++ auto match = skippy_iter.match (buffer->info[j - 1]);
++ if (match == skippy_iter.MATCH)
++ {
++ c->last_base = (signed) j - 1;
++ break;
++ }
++ }
++ c->last_base_until = buffer->idx;
++ if (c->last_base == -1)
++ {
++ buffer->unsafe_to_concat_from_outbuffer (0, buffer->idx + 1);
++ return_trace (false);
++ }
++
++ j = (unsigned) c->last_base;
+
+ /* Checking that matched glyph is actually a ligature by GDEF is too strong; disabled */
+- //if (!_hb_glyph_info_is_ligature (&buffer->info[skippy_iter.idx])) { return_trace (false); }
++ //if (!_hb_glyph_info_is_ligature (&buffer->info[idx])) { return_trace (false); }
+
+- unsigned int j = skippy_iter.idx;
+ unsigned int lig_index = (this+ligatureCoverage).get_coverage (buffer->info[j].codepoint);
+ if (lig_index == NOT_COVERED) return_trace (false);
+
+diff --git a/src/hb-ot-layout-gsubgpos.hh b/src/hb-ot-layout-gsubgpos.hh
+index 5a7e564..437123c 100644
+--- a/src/hb-ot-layout-gsubgpos.hh
++++ b/src/hb-ot-layout-gsubgpos.hh
+@@ -503,6 +503,9 @@ struct hb_ot_apply_context_t :
+ uint32_t random_state;
+
+
++ signed last_base = -1; // GPOS uses
++ unsigned last_base_until = 0; // GPOS uses
++
+ hb_ot_apply_context_t (unsigned int table_index_,
+ hb_font_t *font_,
+ hb_buffer_t *buffer_) :
+@@ -536,7 +539,7 @@ struct hb_ot_apply_context_t :
+ iter_context.init (this, true);
+ }
+
+- void set_lookup_mask (hb_mask_t mask) { lookup_mask = mask; init_iters (); }
++ void set_lookup_mask (hb_mask_t mask) { lookup_mask = mask; last_base = -1; last_base_until = 0; init_iters (); }
+ void set_auto_zwj (bool auto_zwj_) { auto_zwj = auto_zwj_; init_iters (); }
+ void set_auto_zwnj (bool auto_zwnj_) { auto_zwnj = auto_zwnj_; init_iters (); }
+ void set_random (bool random_) { random = random_; }
+--
+2.25.1
+
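The commit message above describes replacing the per-mark backward skippy_iter.prev() scan with a cached last_base/last_base_until pair so that consecutive mark glyphs never rescan the same prefix of the buffer. A minimal, HarfBuzz-independent sketch of that memoization, using hypothetical names rather than the library's types:

/* Illustrative sketch of the caching idea used in the patch above.
 * is_base_flag[] stands in for the real match test; items below
 * *last_base_until have already been examined by earlier calls. */
static int find_base_cached (const int *is_base_flag, unsigned idx,
                             int *last_base, unsigned *last_base_until)
{
  for (unsigned j = idx; j > *last_base_until; j--)
    if (is_base_flag[j - 1])
    {
      *last_base = (int) (j - 1);
      break;
    }
  *last_base_until = idx;   /* remember how far back we have looked */
  return *last_base;        /* caller initialises this to -1 */
}

Initialised with last_base = -1 and last_base_until = 0, repeated calls with non-decreasing idx touch each buffer position at most once, which is what turns the O(n^2) worst case into roughly linear work.
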
diff --git a/poky/meta/recipes-graphics/harfbuzz/harfbuzz_2.6.4.bb b/poky/meta/recipes-graphics/harfbuzz/harfbuzz_2.6.4.bb
index ee08c12bee..0cfe01f1e5 100644
--- a/poky/meta/recipes-graphics/harfbuzz/harfbuzz_2.6.4.bb
+++ b/poky/meta/recipes-graphics/harfbuzz/harfbuzz_2.6.4.bb
@@ -7,7 +7,10 @@ LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://COPYING;md5=e11f5c3149cdec4bb309babb020b32b9 \
file://src/hb-ucd.cc;beginline=1;endline=15;md5=29d4dcb6410429195df67efe3382d8bc"
-SRC_URI = "http://www.freedesktop.org/software/harfbuzz/release/${BP}.tar.xz"
+SRC_URI = "http://www.freedesktop.org/software/harfbuzz/release/${BP}.tar.xz \
+ file://CVE-2023-25193-pre0.patch \
+ file://CVE-2023-25193-pre1.patch \
+ file://CVE-2023-25193.patch"
SRC_URI[md5sum] = "2b3a4dfdb3e5e50055f941978944da9f"
SRC_URI[sha256sum] = "9413b8d96132d699687ef914ebb8c50440efc87b3f775d25856d7ec347c03c12"
diff --git a/poky/meta/recipes-graphics/libsdl2/libsdl2/CVE-2022-4743.patch b/poky/meta/recipes-graphics/libsdl2/libsdl2/CVE-2022-4743.patch
new file mode 100644
index 0000000000..b02a2169a6
--- /dev/null
+++ b/poky/meta/recipes-graphics/libsdl2/libsdl2/CVE-2022-4743.patch
@@ -0,0 +1,38 @@
+From 00b67f55727bc0944c3266e2b875440da132ce4b Mon Sep 17 00:00:00 2001
+From: zhailiangliang <zhailiangliang@loongson.cn>
+Date: Wed, 21 Sep 2022 10:30:38 +0800
+Subject: [PATCH] Fix potential memory leak in GLES_CreateTexture
+
+
+CVE: CVE-2022-4743
+Upstream-Status: Backport [https://github.com/libsdl-org/SDL/commit/00b67f55727bc0944c3266e2b875440da132ce4b.patch]
+Signed-off-by: Ranjitsinh Rathod <ranjitsinh.rathod@kpit.com>
+
+---
+ src/render/opengles/SDL_render_gles.c | 6 ++++++
+ 1 file changed, 6 insertions(+)
+
+diff --git a/src/render/opengles/SDL_render_gles.c b/src/render/opengles/SDL_render_gles.c
+index a5fbab309eda..ba08a46e2805 100644
+--- a/src/render/opengles/SDL_render_gles.c
++++ b/src/render/opengles/SDL_render_gles.c
+@@ -359,6 +359,9 @@ GLES_CreateTexture(SDL_Renderer * renderer, SDL_Texture * texture)
+ renderdata->glGenTextures(1, &data->texture);
+ result = renderdata->glGetError();
+ if (result != GL_NO_ERROR) {
++ if (texture->access == SDL_TEXTUREACCESS_STREAMING) {
++ SDL_free(data->pixels);
++ }
+ SDL_free(data);
+ return GLES_SetError("glGenTextures()", result);
+ }
+@@ -387,6 +390,9 @@ GLES_CreateTexture(SDL_Renderer * renderer, SDL_Texture * texture)
+
+ result = renderdata->glGetError();
+ if (result != GL_NO_ERROR) {
++ if (texture->access == SDL_TEXTUREACCESS_STREAMING) {
++ SDL_free(data->pixels);
++ }
+ SDL_free(data);
+ return GLES_SetError("glTexImage2D()", result);
+ }
diff --git a/poky/meta/recipes-graphics/libsdl2/libsdl2_2.0.12.bb b/poky/meta/recipes-graphics/libsdl2/libsdl2_2.0.12.bb
index 44d36fca22..fa29bc99ac 100644
--- a/poky/meta/recipes-graphics/libsdl2/libsdl2_2.0.12.bb
+++ b/poky/meta/recipes-graphics/libsdl2/libsdl2_2.0.12.bb
@@ -22,6 +22,7 @@ SRC_URI = "http://www.libsdl.org/release/SDL2-${PV}.tar.gz \
file://directfb-renderfillrect-fix.patch \
file://CVE-2020-14409-14410.patch \
file://CVE-2021-33657.patch \
+ file://CVE-2022-4743.patch \
"
S = "${WORKDIR}/SDL2-${PV}"
diff --git a/poky/meta/recipes-graphics/xorg-lib/libx11/CVE-2022-3554.patch b/poky/meta/recipes-graphics/xorg-lib/libx11/CVE-2022-3554.patch
new file mode 100644
index 0000000000..fb61195225
--- /dev/null
+++ b/poky/meta/recipes-graphics/xorg-lib/libx11/CVE-2022-3554.patch
@@ -0,0 +1,58 @@
+From 8b51d1375a4dd6a7cf3a919da83d8e87e57e7333 Mon Sep 17 00:00:00 2001
+From: Hitendra Prajapati <hprajapati@mvista.com>
+Date: Wed, 2 Nov 2022 17:04:15 +0530
+Subject: [PATCH] CVE-2022-3554
+
+Upstream-Status: Backport [https://gitlab.freedesktop.org/xorg/lib/libx11/-/commit/1d11822601fd24a396b354fa616b04ed3df8b4ef]
+CVE: CVE-2022-3554
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+
+fix a memory leak in XRegisterIMInstantiateCallback
+
+Analysis:
+
+ _XimRegisterIMInstantiateCallback() opens an XIM and closes it using
+ the internal function pointers, but the internal close function does
+ not free the pointer to the XIM (this would be done in XCloseIM()).
+
+Report/patch:
+
+ Date: Mon, 03 Oct 2022 18:47:32 +0800
+ From: Po Lu <luangruo@yahoo.com>
+ To: xorg-devel@lists.x.org
+ Subject: Re: Yet another leak in Xlib
+
+ For reference, here's how I'm calling XRegisterIMInstantiateCallback:
+
+ XSetLocaleModifiers ("");
+ XRegisterIMInstantiateCallback (compositor.display,
+ XrmGetDatabase (compositor.display),
+ (char *) compositor.resource_name,
+ (char *) compositor.app_name,
+ IMInstantiateCallback, NULL);
+ and XMODIFIERS is:
+
+ @im=ibus
+
+Signed-off-by: Thomas E. Dickey <dickey@invisible-island.net>
+---
+ modules/im/ximcp/imInsClbk.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+diff --git a/modules/im/ximcp/imInsClbk.c b/modules/im/ximcp/imInsClbk.c
+index 961aaba..0a8a874 100644
+--- a/modules/im/ximcp/imInsClbk.c
++++ b/modules/im/ximcp/imInsClbk.c
+@@ -204,6 +204,9 @@ _XimRegisterIMInstantiateCallback(
+ if( xim ) {
+ lock = True;
+ xim->methods->close( (XIM)xim );
++ /* XIMs must be freed manually after being opened; close just
++ does the protocol to deinitialize the IM. */
++ XFree( xim );
+ lock = False;
+ icb->call = True;
+ callback( display, client_data, NULL );
+--
+2.25.1
+
diff --git a/poky/meta/recipes-graphics/xorg-lib/libx11/CVE-2022-3555.patch b/poky/meta/recipes-graphics/xorg-lib/libx11/CVE-2022-3555.patch
new file mode 100644
index 0000000000..855ce80e77
--- /dev/null
+++ b/poky/meta/recipes-graphics/xorg-lib/libx11/CVE-2022-3555.patch
@@ -0,0 +1,38 @@
+From 8a368d808fec166b5fb3dfe6312aab22c7ee20af Mon Sep 17 00:00:00 2001
+From: Hodong <hodong@yozmos.com>
+Date: Thu, 20 Jan 2022 00:57:41 +0900
+Subject: [PATCH] Fix two memory leaks in _XFreeX11XCBStructure()
+
+Even when XCloseDisplay() was called, some memory was leaked.
+
+XCloseDisplay() calls _XFreeDisplayStructure(), which calls
+_XFreeX11XCBStructure().
+
+However, _XFreeX11XCBStructure() did not destroy the condition variables,
+resulting in the leaking of some 40 bytes.
+
+Signed-off-by: Hodong <hodong@yozmos.com>
+
+Upstream-Status: Backport from [https://gitlab.freedesktop.org/xorg/lib/libx11/-/commit/8a368d808fec166b5fb3dfe6312aab22c7ee20af]
+CVE: CVE-2022-3555
+Signed-off-by: Vivek Kumbhar <vkumbhar@mvista.com>
+---
+ src/xcb_disp.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/src/xcb_disp.c b/src/xcb_disp.c
+index 70a602f4..e9becee3 100644
+--- a/src/xcb_disp.c
++++ b/src/xcb_disp.c
+@@ -102,6 +102,8 @@ void _XFreeX11XCBStructure(Display *dpy)
+ dpy->xcb->pending_requests = tmp->next;
+ free(tmp);
+ }
++ xcondition_clear(dpy->xcb->event_notify);
++ xcondition_clear(dpy->xcb->reply_notify);
+ xcondition_free(dpy->xcb->event_notify);
+ xcondition_free(dpy->xcb->reply_notify);
+ Xfree(dpy->xcb);
+--
+2.18.2
+
diff --git a/poky/meta/recipes-graphics/xorg-lib/libx11_1.6.9.bb b/poky/meta/recipes-graphics/xorg-lib/libx11_1.6.9.bb
index ff2a6f7265..ad3fab1204 100644
--- a/poky/meta/recipes-graphics/xorg-lib/libx11_1.6.9.bb
+++ b/poky/meta/recipes-graphics/xorg-lib/libx11_1.6.9.bb
@@ -16,6 +16,8 @@ SRC_URI += "file://Fix-hanging-issue-in-_XReply.patch \
file://CVE-2020-14344.patch \
file://CVE-2020-14363.patch \
file://CVE-2021-31535.patch \
+ file://CVE-2022-3554.patch \
+ file://CVE-2022-3555.patch \
"
SRC_URI[md5sum] = "55adbfb6d4370ecac5e70598c4e7eed2"
diff --git a/poky/meta/recipes-graphics/xorg-lib/pixman/CVE-2022-44638.patch b/poky/meta/recipes-graphics/xorg-lib/pixman/CVE-2022-44638.patch
new file mode 100644
index 0000000000..d54ae16b33
--- /dev/null
+++ b/poky/meta/recipes-graphics/xorg-lib/pixman/CVE-2022-44638.patch
@@ -0,0 +1,34 @@
+CVE: CVE-2022-44638
+Upstream-Status: Backport
+Signed-off-by: Ross Burton <ross.burton@arm.com>
+Signed-off-by: Bhabu Bindu <bhabu.bindu@kpit.com>
+
+From a1f88e842e0216a5b4df1ab023caebe33c101395 Mon Sep 17 00:00:00 2001
+From: Matt Turner <mattst88@gmail.com>
+Date: Wed, 2 Nov 2022 12:07:32 -0400
+Subject: [PATCH] Avoid integer overflow leading to out-of-bounds write
+
+Thanks to Maddie Stone and Google's Project Zero for discovering this
+issue, providing a proof-of-concept, and a great analysis.
+
+Closes: https://gitlab.freedesktop.org/pixman/pixman/-/issues/63
+---
+ pixman/pixman-trap.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/pixman/pixman-trap.c b/pixman/pixman-trap.c
+index 91766fd..7560405 100644
+--- a/pixman/pixman-trap.c
++++ b/pixman/pixman-trap.c
+@@ -74,7 +74,7 @@ pixman_sample_floor_y (pixman_fixed_t y,
+
+ if (f < Y_FRAC_FIRST (n))
+ {
+- if (pixman_fixed_to_int (i) == 0x8000)
++ if (pixman_fixed_to_int (i) == 0xffff8000)
+ {
+ f = 0; /* saturate */
+ }
+--
+GitLab
+
diff --git a/poky/meta/recipes-graphics/xorg-lib/pixman_0.38.4.bb b/poky/meta/recipes-graphics/xorg-lib/pixman_0.38.4.bb
index 22e19ba069..5873c19bab 100644
--- a/poky/meta/recipes-graphics/xorg-lib/pixman_0.38.4.bb
+++ b/poky/meta/recipes-graphics/xorg-lib/pixman_0.38.4.bb
@@ -10,6 +10,7 @@ DEPENDS = "zlib"
SRC_URI = "https://www.cairographics.org/releases/${BP}.tar.gz \
file://0001-ARM-qemu-related-workarounds-in-cpu-features-detecti.patch \
file://0001-test-utils-Check-for-FE_INVALID-definition-before-us.patch \
+ file://CVE-2022-44638.patch \
"
SRC_URI[md5sum] = "267a7af290f93f643a1bc74490d9fdd1"
SRC_URI[sha256sum] = "da66d6fd6e40aee70f7bd02e4f8f76fc3f006ec879d346bae6a723025cfbdde7"
diff --git a/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-3550.patch b/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-3550.patch
new file mode 100644
index 0000000000..efec7b6b4e
--- /dev/null
+++ b/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-3550.patch
@@ -0,0 +1,40 @@
+From d2dcbdc67c96c84dff301505072b0b7b022f1a14 Mon Sep 17 00:00:00 2001
+From: Peter Hutterer <peter.hutterer@who-t.net>
+Date: Sun, 4 Dec 2022 17:40:21 +0000
+Subject: [PATCH 1/3] xkb: proof GetCountedString against request length
+ attacks
+
+GetCountedString did a check for the whole string to be within the
+request buffer but not for the initial 2 bytes that contain the length
+field. A swapped client could send a malformed request to trigger a
+swaps() on those bytes, writing into random memory.
+
+Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
+
+Upstream-Status: Backport [https://cgit.freedesktop.org/xorg/xserver/commit/?id=11beef0b7f1ed290348e45618e5fa0d2bffcb72e]
+CVE: CVE-2022-3550
+Signed-off-by: Minjae Kim <flowergom@gmail.com>
+
+---
+ xkb/xkb.c | 5 +++++
+ 1 file changed, 5 insertions(+)
+
+diff --git a/xkb/xkb.c b/xkb/xkb.c
+index 68c59df..bf8aaa3 100644
+--- a/xkb/xkb.c
++++ b/xkb/xkb.c
+@@ -5138,6 +5138,11 @@ _GetCountedString(char **wire_inout, ClientPtr client, char **str)
+ CARD16 len;
+
+ wire = *wire_inout;
++
++ if (client->req_len <
++ bytes_to_int32(wire + 2 - (char *) client->requestBuffer))
++ return BadValue;
++
+ len = *(CARD16 *) wire;
+ if (client->swapped) {
+ swaps(&len);
+--
+2.17.1
+
diff --git a/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-3551.patch b/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-3551.patch
new file mode 100644
index 0000000000..a3b977aac9
--- /dev/null
+++ b/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-3551.patch
@@ -0,0 +1,64 @@
+From d3787290f56165f5656ddd2123dbf676a32d0a68 Mon Sep 17 00:00:00 2001
+From: Peter Hutterer <peter.hutterer@who-t.net>
+Date: Sun, 4 Dec 2022 17:44:00 +0000
+Subject: [PATCH 2/3] xkb: fix some possible memleaks in XkbGetKbdByName
+
+GetComponentByName returns an allocated string, so let's free that if we
+fail somewhere.
+
+Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
+
+Upstream-Status: Backport [https://cgit.freedesktop.org/xorg/xserver/commit/?id=18f91b950e22c2a342a4fbc55e9ddf7534a707d2]
+CVE: CVE-2022-3551
+Signed-off-by: Minjae Kim <flowergom@gmail.com>
+
+---
+ xkb/xkb.c | 26 +++++++++++++++++++-------
+ 1 file changed, 19 insertions(+), 7 deletions(-)
+
+diff --git a/xkb/xkb.c b/xkb/xkb.c
+index bf8aaa3..f79d306 100644
+--- a/xkb/xkb.c
++++ b/xkb/xkb.c
+@@ -5908,19 +5908,31 @@ ProcXkbGetKbdByName(ClientPtr client)
+ xkb = dev->key->xkbInfo->desc;
+ status = Success;
+ str = (unsigned char *) &stuff[1];
+- if (GetComponentSpec(&str, TRUE, &status)) /* keymap, unsupported */
+- return BadMatch;
++ {
++ char *keymap = GetComponentSpec(&str, TRUE, &status); /* keymap, unsupported */
++ if (keymap) {
++ free(keymap);
++ return BadMatch;
++ }
++ }
+ names.keycodes = GetComponentSpec(&str, TRUE, &status);
+ names.types = GetComponentSpec(&str, TRUE, &status);
+ names.compat = GetComponentSpec(&str, TRUE, &status);
+ names.symbols = GetComponentSpec(&str, TRUE, &status);
+ names.geometry = GetComponentSpec(&str, TRUE, &status);
+- if (status != Success)
+- return status;
+- len = str - ((unsigned char *) stuff);
+- if ((XkbPaddedSize(len) / 4) != stuff->length)
+- return BadLength;
++ if (status == Success) {
++ len = str - ((unsigned char *) stuff);
++ if ((XkbPaddedSize(len) / 4) != stuff->length)
++ status = BadLength;
++ }
+
++ if (status != Success) {
++ free(names.keycodes);
++ free(names.types);
++ free(names.compat);
++ free(names.symbols);
++ free(names.geometry);
++ }
+ CHK_MASK_LEGAL(0x01, stuff->want, XkbGBN_AllComponentsMask);
+ CHK_MASK_LEGAL(0x02, stuff->need, XkbGBN_AllComponentsMask);
+
+--
+2.17.1
+
diff --git a/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-3553.patch b/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-3553.patch
new file mode 100644
index 0000000000..94cea77edc
--- /dev/null
+++ b/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-3553.patch
@@ -0,0 +1,49 @@
+From 57ad2c03730d56f8432b6d66b29c0e5a9f9b1ec2 Mon Sep 17 00:00:00 2001
+From: Jeremy Huddleston Sequoia <jeremyhu@apple.com>
+Date: Sun, 4 Dec 2022 17:46:18 +0000
+Subject: [PATCH 3/3] xquartz: Fix a possible crash when editing the
+ Application menu due to mutating immutable arrays
+
+Crashing on exception: -[__NSCFArray replaceObjectAtIndex:withObject:]: mutating method sent to immutable object
+
+Application Specific Backtrace 0:
+0 CoreFoundation 0x00007ff80d2c5e9b __exceptionPreprocess + 242
+1 libobjc.A.dylib 0x00007ff80d027e48 objc_exception_throw + 48
+2 CoreFoundation 0x00007ff80d38167b _CFThrowFormattedException + 194
+3 CoreFoundation 0x00007ff80d382a25 -[__NSCFArray removeObjectAtIndex:].cold.1 + 0
+4 CoreFoundation 0x00007ff80d2e6c0b -[__NSCFArray replaceObjectAtIndex:withObject:] + 119
+5 X11.bin 0x00000001003180f9 -[X11Controller tableView:setObjectValue:forTableColumn:row:] + 169
+
+Fixes: https://github.com/XQuartz/XQuartz/issues/267
+Signed-off-by: Jeremy Huddleston Sequoia <jeremyhu@apple.com>
+
+Upstream-Status: Backport [https://cgit.freedesktop.org/xorg/xserver/commit/?id=dfd057996b26420309c324ec844a5ba6dd07eda3]
+CVE: CVE-2022-3553
+Signed-off-by: Minjae Kim <flowergom@gmail.com>
+
+---
+ hw/xquartz/X11Controller.m | 8 ++++++--
+ 1 file changed, 6 insertions(+), 2 deletions(-)
+
+diff --git a/hw/xquartz/X11Controller.m b/hw/xquartz/X11Controller.m
+index 3efda50..9870ff2 100644
+--- a/hw/xquartz/X11Controller.m
++++ b/hw/xquartz/X11Controller.m
+@@ -467,8 +467,12 @@ extern char *bundle_id_prefix;
+ self.table_apps = table_apps;
+
+ NSArray * const apps = self.apps;
+- if (apps != nil)
+- [table_apps addObjectsFromArray:apps];
++
++ if (apps != nil) {
++ for (NSArray <NSString *> * row in apps) {
++ [table_apps addObject:row.mutableCopy];
++ }
++ }
+
+ columns = [apps_table tableColumns];
+ [[columns objectAtIndex:0] setIdentifier:@"0"];
+--
+2.17.1
+
diff --git a/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-4283.patch b/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-4283.patch
new file mode 100644
index 0000000000..3f6b68fea8
--- /dev/null
+++ b/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-4283.patch
@@ -0,0 +1,39 @@
+From ccdd431cd8f1cabae9d744f0514b6533c438908c Mon Sep 17 00:00:00 2001
+From: Peter Hutterer <peter.hutterer@who-t.net>
+Date: Mon, 5 Dec 2022 15:55:54 +1000
+Subject: [PATCH] xkb: reset the radio_groups pointer to NULL after freeing it
+
+Unlike other elements of the keymap, this pointer was freed but not
+reset. On a subsequent XkbGetKbdByName request, the server may access
+already freed memory.
+
+CVE-2022-4283, ZDI-CAN-19530
+
+This vulnerability was discovered by:
+Jan-Niklas Sohn working with Trend Micro Zero Day Initiative
+
+Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
+Acked-by: Olivier Fourdan <ofourdan@redhat.com>
+
+Upstream-Status: Backport [https://gitlab.freedesktop.org/xorg/xserver/-/commit/ccdd431cd8f1cabae9d744f0514b6533c438908c]
+CVE: CVE-2022-4283
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ xkb/xkbUtils.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/xkb/xkbUtils.c b/xkb/xkbUtils.c
+index 8975ade..9bc51fc 100644
+--- a/xkb/xkbUtils.c
++++ b/xkb/xkbUtils.c
+@@ -1327,6 +1327,7 @@ _XkbCopyNames(XkbDescPtr src, XkbDescPtr dst)
+ }
+ else {
+ free(dst->names->radio_groups);
++ dst->names->radio_groups = NULL;
+ }
+ dst->names->num_rg = src->names->num_rg;
+
+--
+2.25.1
+
diff --git a/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-46340.patch b/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-46340.patch
new file mode 100644
index 0000000000..a6c97485cd
--- /dev/null
+++ b/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-46340.patch
@@ -0,0 +1,55 @@
+From b320ca0ffe4c0c872eeb3a93d9bde21f765c7c63 Mon Sep 17 00:00:00 2001
+From: Peter Hutterer <peter.hutterer@who-t.net>
+Date: Tue, 29 Nov 2022 12:55:45 +1000
+Subject: [PATCH] Xtest: disallow GenericEvents in XTestSwapFakeInput
+
+XTestSwapFakeInput assumes all events in this request are
+sizeof(xEvent) and iterates through these in 32-byte increments.
+However, a GenericEvent may be of arbitrary length longer than 32 bytes,
+so any GenericEvent in this list would result in subsequent events to be
+misparsed.
+
+Additionally, the swapped event is written into a stack-allocated struct
+xEvent (size 32 bytes). For any GenericEvent longer than 32 bytes,
+swapping the event may thus smash the stack like an avocado on toast.
+
+Catch this case early and return BadValue for any GenericEvent.
+Which is what would happen in unswapped setups anyway since XTest
+doesn't support GenericEvent.
+
+CVE-2022-46340, ZDI-CAN 19265
+
+This vulnerability was discovered by:
+Jan-Niklas Sohn working with Trend Micro Zero Day Initiative
+
+Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
+Acked-by: Olivier Fourdan <ofourdan@redhat.com>
+
+Upstream-Status: Backport [https://gitlab.freedesktop.org/xorg/xserver/-/commit/b320ca0ffe4c0c872eeb3a93d9bde21f765c7c63]
+CVE: CVE-2022-46340
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ Xext/xtest.c | 5 +++--
+ 1 file changed, 3 insertions(+), 2 deletions(-)
+
+diff --git a/Xext/xtest.c b/Xext/xtest.c
+index 38b8012..bf11789 100644
+--- a/Xext/xtest.c
++++ b/Xext/xtest.c
+@@ -501,10 +501,11 @@ XTestSwapFakeInput(ClientPtr client, xReq * req)
+
+ nev = ((req->length << 2) - sizeof(xReq)) / sizeof(xEvent);
+ for (ev = (xEvent *) &req[1]; --nev >= 0; ev++) {
++ int evtype = ev->u.u.type & 0x177;
+ /* Swap event */
+- proc = EventSwapVector[ev->u.u.type & 0177];
++ proc = EventSwapVector[evtype];
+ /* no swapping proc; invalid event type? */
+- if (!proc || proc == NotImplemented) {
++ if (!proc || proc == NotImplemented || evtype == GenericEvent) {
+ client->errorValue = ev->u.u.type;
+ return BadValue;
+ }
+--
+2.25.1
+
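The description above hinges on XTestSwapFakeInput walking the request payload in fixed sizeof(xEvent) (32-byte) steps, which is only sound when every event really is 32 bytes; a GenericEvent carries an extra-length field and can be longer. The sketch below is plain C, not server code, and simply shows the kind of up-front rejection the patch performs for GenericEvent:

#include <stddef.h>

enum { REC_SIZE = 32, GENERIC_EVENT = 35 };  /* GenericEvent opcode from X.h */

/* Walk a payload in fixed 32-byte steps and reject any GenericEvent,
 * since its real length is variable: a fixed stride would misparse
 * everything after it and overflow a 32-byte on-stack copy. */
static int validate_fake_input (const unsigned char *buf, size_t total)
{
  for (size_t off = 0; off + REC_SIZE <= total; off += REC_SIZE)
  {
    unsigned type = buf[off] & 0177;   /* same low-7-bit mask the server uses */
    if (type == GENERIC_EVENT)
      return -1;                       /* the real code returns BadValue */
  }
  return 0;
}
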
diff --git a/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-46341.patch b/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-46341.patch
new file mode 100644
index 0000000000..0ef6e5fc9f
--- /dev/null
+++ b/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-46341.patch
@@ -0,0 +1,86 @@
+From 51eb63b0ee1509c6c6b8922b0e4aa037faa6f78b Mon Sep 17 00:00:00 2001
+From: Peter Hutterer <peter.hutterer@who-t.net>
+Date: Tue, 29 Nov 2022 13:55:32 +1000
+Subject: [PATCH] Xi: disallow passive grabs with a detail > 255
+
+The XKB protocol effectively prevents us from ever using keycodes above
+255. For buttons it's theoretically possible but realistically too niche
+to worry about. For all other passive grabs, the detail must be zero
+anyway.
+
+This fixes an OOB write:
+
+ProcXIPassiveUngrabDevice() calls DeletePassiveGrabFromList with a
+temporary grab struct which contains tempGrab->detail.exact = stuff->detail.
+For matching existing grabs, DeleteDetailFromMask is called with the
+stuff->detail value. This function creates a new mask with the one bit
+representing stuff->detail cleared.
+
+However, the array size for the new mask is 8 * sizeof(CARD32) bits,
+thus any detail above 255 results in an OOB array write.
+
+CVE-2022-46341, ZDI-CAN 19381
+
+This vulnerability was discovered by:
+Jan-Niklas Sohn working with Trend Micro Zero Day Initiative
+
+Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
+Acked-by: Olivier Fourdan <ofourdan@redhat.com>
+
+Upstream-Status: Backport [https://gitlab.freedesktop.org/xorg/xserver/-/commit/51eb63b0ee1509c6c6b8922b0e4aa037faa6f78b]
+CVE: CVE-2022-46341
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ Xi/xipassivegrab.c | 22 ++++++++++++++--------
+ 1 file changed, 14 insertions(+), 8 deletions(-)
+
+diff --git a/Xi/xipassivegrab.c b/Xi/xipassivegrab.c
+index d30f51f..89a5910 100644
+--- a/Xi/xipassivegrab.c
++++ b/Xi/xipassivegrab.c
+@@ -133,6 +133,12 @@ ProcXIPassiveGrabDevice(ClientPtr client)
+ return BadValue;
+ }
+
++ /* XI2 allows 32-bit keycodes but thanks to XKB we can never
++ * implement this. Just return an error for all keycodes that
++ * cannot work anyway, same for buttons > 255. */
++ if (stuff->detail > 255)
++ return XIAlreadyGrabbed;
++
+ if (XICheckInvalidMaskBits(client, (unsigned char *) &stuff[1],
+ stuff->mask_len * 4) != Success)
+ return BadValue;
+@@ -203,14 +209,8 @@ ProcXIPassiveGrabDevice(ClientPtr client)
+ &param, XI2, &mask);
+ break;
+ case XIGrabtypeKeycode:
+- /* XI2 allows 32-bit keycodes but thanks to XKB we can never
+- * implement this. Just return an error for all keycodes that
+- * cannot work anyway */
+- if (stuff->detail > 255)
+- status = XIAlreadyGrabbed;
+- else
+- status = GrabKey(client, dev, mod_dev, stuff->detail,
+- &param, XI2, &mask);
++ status = GrabKey(client, dev, mod_dev, stuff->detail,
++ &param, XI2, &mask);
+ break;
+ case XIGrabtypeEnter:
+ case XIGrabtypeFocusIn:
+@@ -319,6 +319,12 @@ ProcXIPassiveUngrabDevice(ClientPtr client)
+ return BadValue;
+ }
+
++ /* We don't allow passive grabs for details > 255 anyway */
++ if (stuff->detail > 255) {
++ client->errorValue = stuff->detail;
++ return BadValue;
++ }
++
+ rc = dixLookupWindow(&win, stuff->grab_window, client, DixSetAttrAccess);
+ if (rc != Success)
+ return rc;
+--
+2.25.1
+
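The key detail in the message above is that the per-detail masks manipulated by DeleteDetailFromMask hold one bit per possible detail value in eight 32-bit words, so only details 0..255 have a slot; clearing a larger detail indexes past the end of the array. A stand-alone sketch of that bound, illustrative only and not the server's actual types:

#include <stdint.h>

enum { MASKS_PER_DETAIL_MASK = 8 };   /* 8 x 32 bits = details 0..255 */

typedef uint32_t detail_mask_t[MASKS_PER_DETAIL_MASK];

/* Clearing bit `detail` without a range check would index past the array
 * for detail > 255 -- the out-of-bounds write described above. */
static int clear_detail (detail_mask_t mask, unsigned detail)
{
  if (detail > 255)          /* the bound the patch enforces up front */
    return 0;
  mask[detail / 32] &= ~(UINT32_C(1) << (detail % 32));
  return 1;
}

The patch applies the same 0..255 bound once, early, for both the passive grab and ungrab requests.
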
diff --git a/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-46342.patch b/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-46342.patch
new file mode 100644
index 0000000000..23fef3f321
--- /dev/null
+++ b/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-46342.patch
@@ -0,0 +1,78 @@
+From b79f32b57cc0c1186b2899bce7cf89f7b325161b Mon Sep 17 00:00:00 2001
+From: Peter Hutterer <peter.hutterer@who-t.net>
+Date: Wed, 30 Nov 2022 11:20:40 +1000
+Subject: [PATCH] Xext: free the XvRTVideoNotify when turning off from the same
+ client
+
+This fixes a use-after-free bug:
+
+When a client first calls XvdiSelectVideoNotify() on a drawable with a
+TRUE onoff argument, a struct XvVideoNotifyRec is allocated. This struct
+is added twice to the resources:
+ - as the drawable's XvRTVideoNotifyList. This happens only once per
+ drawable, subsequent calls append to this list.
+ - as the client's XvRTVideoNotify. This happens for every client.
+
+The struct keeps the ClientPtr around once it has been added for a
+client. The idea, presumably, is that if the client disconnects we can remove
+all structs from the drawable's list that match the client (by resetting
+the ClientPtr to NULL), but if the drawable is destroyed we can remove
+and free the whole list.
+
+However, if the same client then calls XvdiSelectVideoNotify() on the
+same drawable with a FALSE onoff argument, only the ClientPtr on the
+existing struct was set to NULL. The struct itself remained in the
+client's resources.
+
+If the drawable is now destroyed, the resource system invokes
+XvdiDestroyVideoNotifyList which frees the whole list for this drawable
+- including our struct. This function however does not free the resource
+for the client since our ClientPtr is NULL.
+
+Later, when the client is destroyed and the resource system invokes
+XvdiDestroyVideoNotify, we unconditionally set the ClientPtr to NULL. On
+a struct that has been freed previously. This is generally frowned upon.
+
+Fix this by calling FreeResource() on the second call instead of merely
+setting the ClientPtr to NULL. This removes the struct from the client
+resources (but not from the list), ensuring that it won't be accessed
+again when the client quits.
+
+Note that the assignment tpn->client = NULL; is superfluous since the
+XvdiDestroyVideoNotify function will do this anyway. But it's left for
+clarity and to match a similar invocation in XvdiSelectPortNotify.
+
+CVE-2022-46342, ZDI-CAN 19400
+
+This vulnerability was discovered by:
+Jan-Niklas Sohn working with Trend Micro Zero Day Initiative
+
+Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
+Acked-by: Olivier Fourdan <ofourdan@redhat.com>
+
+Upstream-Status: Backport [https://gitlab.freedesktop.org/xorg/xserver/-/commit/b79f32b57cc0c1186b2899bce7cf89f7b325161b]
+CVE: CVE-2022-46342
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ Xext/xvmain.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+diff --git a/Xext/xvmain.c b/Xext/xvmain.c
+index c520c7d..5f4c174 100644
+--- a/Xext/xvmain.c
++++ b/Xext/xvmain.c
+@@ -811,8 +811,10 @@ XvdiSelectVideoNotify(ClientPtr client, DrawablePtr pDraw, BOOL onoff)
+ tpn = pn;
+ while (tpn) {
+ if (tpn->client == client) {
+- if (!onoff)
++ if (!onoff) {
+ tpn->client = NULL;
++ FreeResource(tpn->id, XvRTVideoNotify);
++ }
+ return Success;
+ }
+ if (!tpn->client)
+--
+2.25.1
+
diff --git a/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-46343.patch b/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-46343.patch
new file mode 100644
index 0000000000..838f7d3726
--- /dev/null
+++ b/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-46343.patch
@@ -0,0 +1,51 @@
+From 842ca3ccef100ce010d1d8f5f6d6cc1915055900 Mon Sep 17 00:00:00 2001
+From: Peter Hutterer <peter.hutterer@who-t.net>
+Date: Tue, 29 Nov 2022 14:53:07 +1000
+Subject: [PATCH] Xext: free the screen saver resource when replacing it
+
+This fixes a use-after-free bug:
+
+When a client first calls ScreenSaverSetAttributes(), a struct
+ScreenSaverAttrRec is allocated and added to the client's
+resources.
+
+When the same client calls ScreenSaverSetAttributes() again, a new
+struct ScreenSaverAttrRec is allocated, replacing the old struct. The
+old struct was freed but not removed from the clients resources.
+
+Later, when the client is destroyed the resource system invokes
+ScreenSaverFreeAttr and attempts to clean up the already freed struct.
+
+Fix this by letting the resource system free the old attrs instead.
+
+CVE-2022-46343, ZDI-CAN 19404
+
+This vulnerability was discovered by:
+Jan-Niklas Sohn working with Trend Micro Zero Day Initiative
+
+Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
+Acked-by: Olivier Fourdan <ofourdan@redhat.com>
+
+Upstream-Status: Backport [https://gitlab.freedesktop.org/xorg/xserver/-/commit/842ca3ccef100ce010d1d8f5f6d6cc1915055900]
+CVE: CVE-2022-46343
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ Xext/saver.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/Xext/saver.c b/Xext/saver.c
+index c23907d..05b9ca3 100644
+--- a/Xext/saver.c
++++ b/Xext/saver.c
+@@ -1051,7 +1051,7 @@ ScreenSaverSetAttributes(ClientPtr client)
+ pVlist++;
+ }
+ if (pPriv->attr)
+- FreeScreenAttr(pPriv->attr);
++ FreeResource(pPriv->attr->resource, AttrType);
+ pPriv->attr = pAttr;
+ pAttr->resource = FakeClientID(client->index);
+ if (!AddResource(pAttr->resource, AttrType, (void *) pAttr))
+--
+2.25.1
+
diff --git a/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-46344.patch b/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-46344.patch
new file mode 100644
index 0000000000..e25afa0d16
--- /dev/null
+++ b/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg/CVE-2022-46344.patch
@@ -0,0 +1,75 @@
+From 8f454b793e1f13c99872c15f0eed1d7f3b823fe8 Mon Sep 17 00:00:00 2001
+From: Peter Hutterer <peter.hutterer@who-t.net>
+Date: Tue, 29 Nov 2022 13:26:57 +1000
+Subject: [PATCH] Xi: avoid integer truncation in length check of
+ ProcXIChangeProperty
+
+This fixes an OOB read and the resulting information disclosure.
+
+Length calculation for the request was clipped to a 32-bit integer. With
+the correct stuff->num_items value the expected request size was
+truncated, passing the REQUEST_FIXED_SIZE check.
+
+The server then proceeded with reading at least stuff->num_items bytes
+(depending on stuff->format) from the request and stuffing whatever it
+finds into the property. In the process it would also allocate at least
+stuff->num_items bytes, i.e. 4GB.
+
+The same bug exists in ProcChangeProperty and ProcXChangeDeviceProperty,
+so let's fix that too.
+
+CVE-2022-46344, ZDI-CAN 19405
+
+This vulnerability was discovered by:
+Jan-Niklas Sohn working with Trend Micro Zero Day Initiative
+
+Signed-off-by: Peter Hutterer <peter.hutterer@who-t.net>
+Acked-by: Olivier Fourdan <ofourdan@redhat.com>
+
+Upstream-Status: Backport [https://gitlab.freedesktop.org/xorg/xserver/-/commit/8f454b793e1f13c99872c15f0eed1d7f3b823fe8]
+CVE: CVE-2022-46344
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ Xi/xiproperty.c | 4 ++--
+ dix/property.c | 3 ++-
+ 2 files changed, 4 insertions(+), 3 deletions(-)
+
+diff --git a/Xi/xiproperty.c b/Xi/xiproperty.c
+index 6ec419e..0cfa6e3 100644
+--- a/Xi/xiproperty.c
++++ b/Xi/xiproperty.c
+@@ -890,7 +890,7 @@ ProcXChangeDeviceProperty(ClientPtr client)
+ REQUEST(xChangeDevicePropertyReq);
+ DeviceIntPtr dev;
+ unsigned long len;
+- int totalSize;
++ uint64_t totalSize;
+ int rc;
+
+ REQUEST_AT_LEAST_SIZE(xChangeDevicePropertyReq);
+@@ -1128,7 +1128,7 @@ ProcXIChangeProperty(ClientPtr client)
+ {
+ int rc;
+ DeviceIntPtr dev;
+- int totalSize;
++ uint64_t totalSize;
+ unsigned long len;
+
+ REQUEST(xXIChangePropertyReq);
+diff --git a/dix/property.c b/dix/property.c
+index ff1d669..6fdb74a 100644
+--- a/dix/property.c
++++ b/dix/property.c
+@@ -205,7 +205,8 @@ ProcChangeProperty(ClientPtr client)
+ WindowPtr pWin;
+ char format, mode;
+ unsigned long len;
+- int sizeInBytes, totalSize, err;
++ int sizeInBytes, err;
++ uint64_t totalSize;
+
+ REQUEST(xChangePropertyReq);
+
+--
+2.25.1
+
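The truncation described above comes from computing the expected payload size in a 32-bit int: with a large num_items the product wraps, the REQUEST_FIXED_SIZE comparison passes, and the server then trusts the un-truncated item count. A small self-contained C example of the wrap, using made-up values rather than the actual request fields:

#include <stdint.h>
#include <stdio.h>

/* With format = 32 each item is 4 bytes, so num_items = 0x40000000
 * makes the 32-bit product wrap to 0 and slip past a size check. */
int main (void)
{
  uint32_t num_items    = 0x40000000u;   /* attacker-chosen */
  uint32_t format_bytes = 4;

  int      truncated = (int) (num_items * format_bytes);     /* wraps to 0 */
  uint64_t correct   = (uint64_t) num_items * format_bytes;  /* 4 GiB */

  printf ("truncated=%d correct=%llu\n", truncated,
          (unsigned long long) correct);
  return 0;
}

Promoting the computation to uint64_t, as the patch does for totalSize, keeps the size comparison honest.
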
diff --git a/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg_1.20.14.bb b/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg_1.20.14.bb
index d176f390a4..ab18a87a3d 100644
--- a/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg_1.20.14.bb
+++ b/poky/meta/recipes-graphics/xorg-xserver/xserver-xorg_1.20.14.bb
@@ -5,7 +5,16 @@ SRC_URI += "file://0001-xf86pciBus.c-use-Intel-ddx-only-for-pre-gen4-hardwar.pat
file://0001-test-xtest-Initialize-array-with-braces.patch \
file://sdksyms-no-build-path.patch \
file://0001-drmmode_display.c-add-missing-mi.h-include.patch \
- "
+ file://CVE-2022-3550.patch \
+ file://CVE-2022-3551.patch \
+ file://CVE-2022-3553.patch \
+ file://CVE-2022-4283.patch \
+ file://CVE-2022-46340.patch \
+ file://CVE-2022-46341.patch \
+ file://CVE-2022-46342.patch \
+ file://CVE-2022-46343.patch \
+ file://CVE-2022-46344.patch \
+"
SRC_URI[md5sum] = "453fc86aac8c629b3a5b77e8dcca30bf"
SRC_URI[sha256sum] = "54b199c9280ff8bf0f73a54a759645bd0eeeda7255d1c99310d5b7595f3ac066"
diff --git a/poky/meta/recipes-kernel/linux-firmware/linux-firmware_20220913.bb b/poky/meta/recipes-kernel/linux-firmware/linux-firmware_20230210.bb
index 2baf4bbe49..fb1ea61906 100644
--- a/poky/meta/recipes-kernel/linux-firmware/linux-firmware_20220913.bb
+++ b/poky/meta/recipes-kernel/linux-firmware/linux-firmware_20230210.bb
@@ -45,6 +45,7 @@ LICENSE = "\
& Firmware-phanfw \
& Firmware-qat \
& Firmware-qcom \
+ & Firmware-qcom-yamato \
& Firmware-qla1280 \
& Firmware-qla2xxx \
& Firmware-qualcommAthos_ar3k \
@@ -70,8 +71,8 @@ LICENSE = "\
LIC_FILES_CHKSUM = "file://LICENCE.Abilis;md5=b5ee3f410780e56711ad48eadc22b8bc \
file://LICENCE.adsp_sst;md5=615c45b91a5a4a9fe046d6ab9a2df728 \
file://LICENCE.agere;md5=af0133de6b4a9b2522defd5f188afd31 \
- file://LICENSE.amdgpu;md5=44c1166d052226cb2d6c8d7400090203 \
- file://LICENSE.amd-ucode;md5=3c5399dc9148d7f0e1f41e34b69cf14f \
+ file://LICENSE.amdgpu;md5=a2589a05ea5b6bd2b7f4f623c7e7a649 \
+ file://LICENSE.amd-ucode;md5=6ca90c57f7b248de1e25c7f68ffc4698 \
file://LICENSE.amlogic_vdec;md5=dc44f59bf64a81643e500ad3f39a468a \
file://LICENCE.atheros_firmware;md5=30a14c7823beedac9fa39c64fdd01a13 \
file://LICENSE.atmel;md5=aa74ac0c60595dee4d4e239107ea77a3 \
@@ -109,6 +110,7 @@ LIC_FILES_CHKSUM = "file://LICENCE.Abilis;md5=b5ee3f410780e56711ad48eadc22b8bc \
file://LICENCE.phanfw;md5=954dcec0e051f9409812b561ea743bfa \
file://LICENCE.qat_firmware;md5=9e7d8bea77612d7cc7d9e9b54b623062 \
file://LICENSE.qcom;md5=164e3362a538eb11d3ac51e8e134294b \
+ file://LICENSE.qcom_yamato;md5=d0de0eeccaf1843a850bf7a6777eec5c \
file://LICENCE.qla1280;md5=d6895732e622d950609093223a2c4f5d \
file://LICENCE.qla2xxx;md5=505855e921b75f1be4a437ad9b79dff0 \
file://LICENSE.QualcommAtheros_ar3k;md5=b5fe244fb2b532311de1472a3bc06da5 \
@@ -132,7 +134,7 @@ LIC_FILES_CHKSUM = "file://LICENCE.Abilis;md5=b5ee3f410780e56711ad48eadc22b8bc \
"
# WHENCE checksum is defined separately to ease overriding it if
# class-devupstream is selected.
-WHENCE_CHKSUM = "98ecc3d3223df7ebdc23b0ec56aafb20"
+WHENCE_CHKSUM = "aadb3cccbde1e53fc244a409e9bd5a22"
# These are not common licenses, set NO_GENERIC_LICENSE for them
# so that the license files will be copied from fetched source
@@ -177,6 +179,7 @@ NO_GENERIC_LICENSE[Firmware-ath9k-htc] = "LICENCE.open-ath9k-htc-firmware"
NO_GENERIC_LICENSE[Firmware-phanfw] = "LICENCE.phanfw"
NO_GENERIC_LICENSE[Firmware-qat] = "LICENCE.qat_firmware"
NO_GENERIC_LICENSE[Firmware-qcom] = "LICENSE.qcom"
+NO_GENERIC_LICENSE[Firmware-qcom-yamato] = "LICENSE.qcom_yamato"
NO_GENERIC_LICENSE[Firmware-qla1280] = "LICENCE.qla1280"
NO_GENERIC_LICENSE[Firmware-qla2xxx] = "LICENCE.qla2xxx"
NO_GENERIC_LICENSE[Firmware-qualcommAthos_ar3k] = "LICENSE.QualcommAtheros_ar3k"
@@ -209,7 +212,7 @@ SRC_URI:class-devupstream = "git://git.kernel.org/pub/scm/linux/kernel/git/firmw
# Pin this to the 20220509 release, override this in local.conf
SRCREV:class-devupstream ?= "b19cbdca78ab2adfd210c91be15a22568e8b8cae"
-SRC_URI[sha256sum] = "26fd00f2d8e96c4af6f44269a6b893eb857253044f75ad28ef6706a2250cd8e9"
+SRC_URI[sha256sum] = "6e3d9e8d52cffc4ec0dbe8533a8445328e0524a20f159a5b61c2706f983ce38a"
inherit allarch
@@ -305,7 +308,7 @@ PACKAGES =+ "${PN}-ralink-license ${PN}-ralink \
${PN}-nvidia-gpu \
${PN}-netronome-license ${PN}-netronome \
${PN}-qat ${PN}-qat-license \
- ${PN}-qcom-license \
+ ${PN}-qcom-license ${PN}-qcom-yamato-license \
${PN}-qcom-venus-1.8 ${PN}-qcom-venus-4.2 ${PN}-qcom-venus-5.2 ${PN}-qcom-venus-5.4 \
${PN}-qcom-vpu-1.0 ${PN}-qcom-vpu-2.0 \
${PN}-qcom-adreno-a2xx ${PN}-qcom-adreno-a3xx ${PN}-qcom-adreno-a4xx ${PN}-qcom-adreno-a530 \
@@ -961,14 +964,41 @@ RDEPENDS_${PN}-qat = "${PN}-qat-license"
# For QCOM VPU/GPU and SDM845
LICENSE_${PN}-qcom-license = "Firmware-qcom"
+LICENSE_${PN}-qcom-yamato-license = "Firmware-qcom-yamato"
+LICENSE_${PN}-qcom-venus-1.8 = "Firmware-qcom"
+LICENSE_${PN}-qcom-venus-4.2 = "Firmware-qcom"
+LICENSE_${PN}-qcom-venus-5.2 = "Firmware-qcom"
+LICENSE_${PN}-qcom-venus-5.4 = "Firmware-qcom"
+LICENSE_${PN}-qcom-vpu-1.0 = "Firmware-qcom"
+LICENSE_${PN}-qcom-vpu-2.0 = "Firmware-qcom"
+LICENSE_${PN}-qcom-adreno-a2xx = "Firmware-qcom Firmware-qcom-yamato"
+LICENSE_${PN}-qcom-adreno-a3xx = "Firmware-qcom"
+LICENSE_${PN}-qcom-adreno-a4xx = "Firmware-qcom"
+LICENSE_${PN}-qcom-adreno-a530 = "Firmware-qcom"
+LICENSE_${PN}-qcom-adreno-a630 = "Firmware-qcom"
+LICENSE_${PN}-qcom-adreno-a650 = "Firmware-qcom"
+LICENSE_${PN}-qcom-adreno-a660 = "Firmware-qcom"
+LICENSE_${PN}-qcom-apq8096-audio = "Firmware-qcom"
+LICENSE_${PN}-qcom-apq8096-modem = "Firmware-qcom"
+LICENSE_${PN}-qcom-sc8280xp-lenovo-x13s-audio = "Firmware-qcom"
+LICENSE_${PN}-qcom-sc8280xp-lenovo-x13s-adreno = "Firmware-qcom"
+LICENSE_${PN}-qcom-sc8280xp-lenovo-x13s-compute = "Firmware-qcom"
+LICENSE_${PN}-qcom-sc8280xp-lenovo-x13s-sensors = "Firmware-qcom"
+LICENSE_${PN}-qcom-sdm845-audio = "Firmware-qcom"
+LICENSE_${PN}-qcom-sdm845-compute = "Firmware-qcom"
+LICENSE_${PN}-qcom-sdm845-modem = "Firmware-qcom"
+LICENSE_${PN}-qcom-sm8250-audio = "Firmware-qcom"
+LICENSE_${PN}-qcom-sm8250-compute = "Firmware-qcom"
+
FILES_${PN}-qcom-license = "${nonarch_base_libdir}/firmware/LICENSE.qcom ${nonarch_base_libdir}/firmware/qcom/NOTICE.txt"
+FILES_${PN}-qcom-yamato-license = "${nonarch_base_libdir}/firmware/LICENSE.qcom_yamato"
FILES_${PN}-qcom-venus-1.8 = "${nonarch_base_libdir}/firmware/qcom/venus-1.8/*"
FILES_${PN}-qcom-venus-4.2 = "${nonarch_base_libdir}/firmware/qcom/venus-4.2/*"
FILES_${PN}-qcom-venus-5.2 = "${nonarch_base_libdir}/firmware/qcom/venus-5.2/*"
FILES_${PN}-qcom-venus-5.4 = "${nonarch_base_libdir}/firmware/qcom/venus-5.4/*"
FILES_${PN}-qcom-vpu-1.0 = "${nonarch_base_libdir}/firmware/qcom/vpu-1.0/*"
FILES_${PN}-qcom-vpu-2.0 = "${nonarch_base_libdir}/firmware/qcom/vpu-2.0/*"
-FILES_${PN}-qcom-adreno-a2xx = "${nonarch_base_libdir}/firmware/qcom/leia_*.fw"
+FILES_${PN}-qcom-adreno-a2xx = "${nonarch_base_libdir}/firmware/qcom/leia_*.fw ${nonarch_base_libdir}/firmware/qcom/yamato_*.fw"
FILES_${PN}-qcom-adreno-a3xx = "${nonarch_base_libdir}/firmware/qcom/a3*_*.fw ${nonarch_base_libdir}/firmware/a300_*.fw"
FILES_${PN}-qcom-adreno-a4xx = "${nonarch_base_libdir}/firmware/qcom/a4*_*.fw"
FILES_${PN}-qcom-adreno-a530 = "${nonarch_base_libdir}/firmware/qcom/a530*.*"
@@ -994,7 +1024,7 @@ RDEPENDS_${PN}-qcom-venus-5.4 = "${PN}-qcom-license"
RDEPENDS_${PN}-qcom-vpu-1.0 = "${PN}-qcom-license"
RDEPENDS_${PN}-qcom-vpu-2.0 = "${PN}-qcom-license"
-RDEPENDS_${PN}-qcom-adreno-a2xx = "${PN}-qcom-license"
+RDEPENDS_${PN}-qcom-adreno-a2xx = "${PN}-qcom-license ${PN}-qcom-yamato-license"
RDEPENDS_${PN}-qcom-adreno-a3xx = "${PN}-qcom-license"
RDEPENDS_${PN}-qcom-adreno-a4xx = "${PN}-qcom-license"
RDEPENDS_${PN}-qcom-adreno-a530 = "${PN}-qcom-license"
RDEPENDS_${PN}-qcom-adreno-a630 = "${PN}-qcom-license"
diff --git a/poky/meta/recipes-kernel/linux/linux-yocto-dev.bb b/poky/meta/recipes-kernel/linux/linux-yocto-dev.bb
index 06a9108fab..a1c0de9981 100644
--- a/poky/meta/recipes-kernel/linux/linux-yocto-dev.bb
+++ b/poky/meta/recipes-kernel/linux/linux-yocto-dev.bb
@@ -10,8 +10,6 @@
inherit kernel
require recipes-kernel/linux/linux-yocto.inc
-# for ncurses tests
-inherit pkgconfig
# provide this .inc to set specific revisions
include recipes-kernel/linux/linux-yocto-dev-revisions.inc
diff --git a/poky/meta/recipes-kernel/linux/linux-yocto-rt_5.4.bb b/poky/meta/recipes-kernel/linux/linux-yocto-rt_5.4.bb
index 1a0e6d7b67..e0967223b9 100644
--- a/poky/meta/recipes-kernel/linux/linux-yocto-rt_5.4.bb
+++ b/poky/meta/recipes-kernel/linux/linux-yocto-rt_5.4.bb
@@ -11,13 +11,13 @@ python () {
raise bb.parse.SkipRecipe("Set PREFERRED_PROVIDER_virtual/kernel to linux-yocto-rt to enable it")
}
-SRCREV_machine ?= "03cd66d9814a26fff4681d3a053654848e519fd6"
-SRCREV_meta ?= "2f18e629f78da51cacf531bed58a83568724a376"
+SRCREV_machine ?= "f064f6017b7ce09ade0f365e1b7d776dc9e2e168"
+SRCREV_meta ?= "c7e2e528893abbebd14447510d38ded1ef98dcd2"
SRC_URI = "git://git.yoctoproject.org/linux-yocto.git;branch=${KBRANCH};name=machine \
git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-5.4;destsuffix=${KMETA}"
-LINUX_VERSION ?= "5.4.213"
+LINUX_VERSION ?= "5.4.237"
LIC_FILES_CHKSUM = "file://COPYING;md5=bbea815ee2795b2f4230826c0c6b8814"
diff --git a/poky/meta/recipes-kernel/linux/linux-yocto-tiny_5.4.bb b/poky/meta/recipes-kernel/linux/linux-yocto-tiny_5.4.bb
index 0f71051d0f..6cdf00763b 100644
--- a/poky/meta/recipes-kernel/linux/linux-yocto-tiny_5.4.bb
+++ b/poky/meta/recipes-kernel/linux/linux-yocto-tiny_5.4.bb
@@ -6,7 +6,7 @@ KCONFIG_MODE = "--allnoconfig"
require recipes-kernel/linux/linux-yocto.inc
-LINUX_VERSION ?= "5.4.213"
+LINUX_VERSION ?= "5.4.237"
LIC_FILES_CHKSUM = "file://COPYING;md5=bbea815ee2795b2f4230826c0c6b8814"
DEPENDS += "${@bb.utils.contains('ARCH', 'x86', 'elfutils-native', '', d)}"
@@ -15,9 +15,9 @@ DEPENDS += "openssl-native util-linux-native"
KMETA = "kernel-meta"
KCONF_BSP_AUDIT_LEVEL = "2"
-SRCREV_machine_qemuarm ?= "284fd0f6e11db890ad6cfd246a2c47521db4a05f"
-SRCREV_machine ?= "6d8cf8757864e674bb8f55b6ff68de5e3387d110"
-SRCREV_meta ?= "2f18e629f78da51cacf531bed58a83568724a376"
+SRCREV_machine_qemuarm ?= "00c3a33c0f772ff1fa8902e8fe8856131c27a9b5"
+SRCREV_machine ?= "0693cbc007cf6a7b335edb5f78542d77b048d5dd"
+SRCREV_meta ?= "c7e2e528893abbebd14447510d38ded1ef98dcd2"
PV = "${LINUX_VERSION}+git${SRCPV}"
diff --git a/poky/meta/recipes-kernel/linux/linux-yocto_5.4.bb b/poky/meta/recipes-kernel/linux/linux-yocto_5.4.bb
index d60a44e4a3..e95a044099 100644
--- a/poky/meta/recipes-kernel/linux/linux-yocto_5.4.bb
+++ b/poky/meta/recipes-kernel/linux/linux-yocto_5.4.bb
@@ -12,16 +12,16 @@ KBRANCH_qemux86 ?= "v5.4/standard/base"
KBRANCH_qemux86-64 ?= "v5.4/standard/base"
KBRANCH_qemumips64 ?= "v5.4/standard/mti-malta64"
-SRCREV_machine_qemuarm ?= "bcf3f5cf5f1bcfac1df54a2a9f19c92a49fc7538"
-SRCREV_machine_qemuarm64 ?= "fea87c9d80c7531f85f69fee97cf9500403cef6b"
-SRCREV_machine_qemumips ?= "f1d654a16a5b5a3bbc9288936827628a4a4553a2"
-SRCREV_machine_qemuppc ?= "f6bbc9d216fd3cef1df3ced215b0b22503c48906"
-SRCREV_machine_qemuriscv64 ?= "c0b728020967728840c39994e472db7ed7b727cf"
-SRCREV_machine_qemux86 ?= "c0b728020967728840c39994e472db7ed7b727cf"
-SRCREV_machine_qemux86-64 ?= "c0b728020967728840c39994e472db7ed7b727cf"
-SRCREV_machine_qemumips64 ?= "841245c9bd427e2e7cc786b92cecaf4390e5dd52"
-SRCREV_machine ?= "c0b728020967728840c39994e472db7ed7b727cf"
-SRCREV_meta ?= "2f18e629f78da51cacf531bed58a83568724a376"
+SRCREV_machine_qemuarm ?= "981be716d817e38d2d67269aab3caaa095bd2bdd"
+SRCREV_machine_qemuarm64 ?= "32083245f7eb993b85a33a8d30bd9f41128b6147"
+SRCREV_machine_qemumips ?= "4d002b5ac3b434b21ae58ac15cd73be3ae5ef5a8"
+SRCREV_machine_qemuppc ?= "82b4b51143a6beeb49efa548494bdb5c01f336b2"
+SRCREV_machine_qemuriscv64 ?= "936721bc390034d774b28393bf61808de8899718"
+SRCREV_machine_qemux86 ?= "936721bc390034d774b28393bf61808de8899718"
+SRCREV_machine_qemux86-64 ?= "936721bc390034d774b28393bf61808de8899718"
+SRCREV_machine_qemumips64 ?= "d662d749c441de5a09bfd8870cd10e41b1e27b6b"
+SRCREV_machine ?= "936721bc390034d774b28393bf61808de8899718"
+SRCREV_meta ?= "c7e2e528893abbebd14447510d38ded1ef98dcd2"
# remap qemuarm to qemuarma15 for the 5.4 kernel
# KMACHINE_qemuarm ?= "qemuarma15"
@@ -30,7 +30,7 @@ SRC_URI = "git://git.yoctoproject.org/linux-yocto.git;name=machine;branch=${KBRA
git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-5.4;destsuffix=${KMETA}"
LIC_FILES_CHKSUM = "file://COPYING;md5=bbea815ee2795b2f4230826c0c6b8814"
-LINUX_VERSION ?= "5.4.213"
+LINUX_VERSION ?= "5.4.237"
DEPENDS += "${@bb.utils.contains('ARCH', 'x86', 'elfutils-native', '', d)}"
DEPENDS += "openssl-native util-linux-native"
diff --git a/poky/meta/recipes-kernel/lttng/lttng-modules/0001-fix-strncpy-equals-destination-size-warning.patch b/poky/meta/recipes-kernel/lttng/lttng-modules/0001-fix-strncpy-equals-destination-size-warning.patch
deleted file mode 100644
index 6f82488772..0000000000
--- a/poky/meta/recipes-kernel/lttng/lttng-modules/0001-fix-strncpy-equals-destination-size-warning.patch
+++ /dev/null
@@ -1,42 +0,0 @@
-From cb78974394a9af865e1d2d606e838dbec0de80e8 Mon Sep 17 00:00:00 2001
-From: Michael Jeanson <mjeanson@efficios.com>
-Date: Mon, 5 Oct 2020 15:31:42 -0400
-Subject: [PATCH 01/16] fix: strncpy equals destination size warning
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-
-Some versions of GCC when called with -Wstringop-truncation will warn
-when doing a copy of the same size as the destination buffer with
-strncpy :
-
- ‘strncpy’ specified bound 256 equals destination size [-Werror=stringop-truncation]
-
-Since we unconditionally write '\0' in the last byte, reduce the copy
-size by one.
-
-Upstream-Status: Backport
-
-Change-Id: Idb907c9550817a06fc0dffc489740f63d440e7d4
-Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
-Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
----
- lttng-syscalls.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/lttng-syscalls.c b/lttng-syscalls.c
-index 49c0d81b..b43dd570 100644
---- a/lttng-syscalls.c
-+++ b/lttng-syscalls.c
-@@ -719,7 +719,7 @@ int fill_table(const struct trace_syscall_entry *table, size_t table_len,
- ev.u.syscall.abi = LTTNG_KERNEL_SYSCALL_ABI_COMPAT;
- break;
- }
-- strncpy(ev.name, desc->name, LTTNG_KERNEL_SYM_NAME_LEN);
-+ strncpy(ev.name, desc->name, LTTNG_KERNEL_SYM_NAME_LEN - 1);
- ev.name[LTTNG_KERNEL_SYM_NAME_LEN - 1] = '\0';
- ev.instrumentation = LTTNG_KERNEL_SYSCALL;
- chan_table[i] = _lttng_event_create(chan, &ev, filter,
---
-2.25.1
-
diff --git a/poky/meta/recipes-kernel/lttng/lttng-modules/0002-fix-objtool-Rename-frame.h-objtool.h-v5.10.patch b/poky/meta/recipes-kernel/lttng/lttng-modules/0002-fix-objtool-Rename-frame.h-objtool.h-v5.10.patch
deleted file mode 100644
index 90d7b0cf9c..0000000000
--- a/poky/meta/recipes-kernel/lttng/lttng-modules/0002-fix-objtool-Rename-frame.h-objtool.h-v5.10.patch
+++ /dev/null
@@ -1,88 +0,0 @@
-From 8e4e8641961df32bfe519fd18d899250951acd1a Mon Sep 17 00:00:00 2001
-From: Michael Jeanson <mjeanson@efficios.com>
-Date: Mon, 26 Oct 2020 13:41:02 -0400
-Subject: [PATCH 02/16] fix: objtool: Rename frame.h -> objtool.h (v5.10)
-
-See upstream commit :
-
- commit 00089c048eb4a8250325efb32a2724fd0da68cce
- Author: Julien Thierry <jthierry@redhat.com>
- Date: Fri Sep 4 16:30:25 2020 +0100
-
- objtool: Rename frame.h -> objtool.h
-
- Header frame.h is getting more code annotations to help objtool analyze
- object files.
-
- Rename the file to objtool.h.
-
-Upstream-Status: Backport
-
-Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
-Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
-Change-Id: Ic2283161bebcbf1e33b72805eb4d2628f4ae3e89
----
- lttng-filter-interpreter.c | 2 +-
- wrapper/{frame.h => objtool.h} | 19 ++++++++++++-------
- 2 files changed, 13 insertions(+), 8 deletions(-)
- rename wrapper/{frame.h => objtool.h} (50%)
-
-diff --git a/lttng-filter-interpreter.c b/lttng-filter-interpreter.c
-index 21169f01..5d572437 100644
---- a/lttng-filter-interpreter.c
-+++ b/lttng-filter-interpreter.c
-@@ -8,7 +8,7 @@
- */
-
- #include <wrapper/uaccess.h>
--#include <wrapper/frame.h>
-+#include <wrapper/objtool.h>
- #include <wrapper/types.h>
- #include <linux/swab.h>
-
-diff --git a/wrapper/frame.h b/wrapper/objtool.h
-similarity index 50%
-rename from wrapper/frame.h
-rename to wrapper/objtool.h
-index 6e6dc811..3b997cae 100644
---- a/wrapper/frame.h
-+++ b/wrapper/objtool.h
-@@ -1,18 +1,23 @@
--/* SPDX-License-Identifier: (GPL-2.0 or LGPL-2.1)
-+/* SPDX-License-Identifier: (GPL-2.0-only or LGPL-2.1-only)
- *
-- * wrapper/frame.h
-+ * wrapper/objtool.h
- *
- * Copyright (C) 2016 Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
- */
-
--#ifndef _LTTNG_WRAPPER_FRAME_H
--#define _LTTNG_WRAPPER_FRAME_H
-+#ifndef _LTTNG_WRAPPER_OBJTOOL_H
-+#define _LTTNG_WRAPPER_OBJTOOL_H
-
- #include <linux/version.h>
-
--#if (LINUX_VERSION_CODE >= KERNEL_VERSION(4,6,0))
--
-+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(5,10,0))
-+#include <linux/objtool.h>
-+#elif (LINUX_VERSION_CODE >= KERNEL_VERSION(4,6,0))
- #include <linux/frame.h>
-+#endif
-+
-+
-+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(4,6,0))
-
- #define LTTNG_STACK_FRAME_NON_STANDARD(func) \
- STACK_FRAME_NON_STANDARD(func)
-@@ -23,4 +28,4 @@
-
- #endif
-
--#endif /* _LTTNG_WRAPPER_FRAME_H */
-+#endif /* _LTTNG_WRAPPER_OBJTOOL_H */
---
-2.25.1
-
diff --git a/poky/meta/recipes-kernel/lttng/lttng-modules/0003-fix-btrfs-tracepoints-output-proper-root-owner-for-t.patch b/poky/meta/recipes-kernel/lttng/lttng-modules/0003-fix-btrfs-tracepoints-output-proper-root-owner-for-t.patch
deleted file mode 100644
index 2a100361ea..0000000000
--- a/poky/meta/recipes-kernel/lttng/lttng-modules/0003-fix-btrfs-tracepoints-output-proper-root-owner-for-t.patch
+++ /dev/null
@@ -1,316 +0,0 @@
-From 5a3b76a81fd3df52405700d369223d64c7a04dc8 Mon Sep 17 00:00:00 2001
-From: Michael Jeanson <mjeanson@efficios.com>
-Date: Tue, 27 Oct 2020 11:42:23 -0400
-Subject: [PATCH 03/16] fix: btrfs: tracepoints: output proper root owner for
- trace_find_free_extent() (v5.10)
-
-See upstream commit :
-
- commit 437490fed3b0c9ae21af8f70e0f338d34560842b
- Author: Qu Wenruo <wqu@suse.com>
- Date: Tue Jul 28 09:42:49 2020 +0800
-
- btrfs: tracepoints: output proper root owner for trace_find_free_extent()
-
- The current trace event always output result like this:
-
- find_free_extent: root=2(EXTENT_TREE) len=16384 empty_size=0 flags=4(METADATA)
- find_free_extent: root=2(EXTENT_TREE) len=16384 empty_size=0 flags=4(METADATA)
- find_free_extent: root=2(EXTENT_TREE) len=8192 empty_size=0 flags=1(DATA)
- find_free_extent: root=2(EXTENT_TREE) len=8192 empty_size=0 flags=1(DATA)
- find_free_extent: root=2(EXTENT_TREE) len=4096 empty_size=0 flags=1(DATA)
- find_free_extent: root=2(EXTENT_TREE) len=4096 empty_size=0 flags=1(DATA)
-
- It's saying we're allocating data extent for EXTENT tree, which is not
- even possible.
-
- It's because we always use EXTENT tree as the owner for
- trace_find_free_extent() without using the @root from
- btrfs_reserve_extent().
-
- This patch will change the parameter to use proper @root for
- trace_find_free_extent():
-
- Now it looks much better:
-
- find_free_extent: root=5(FS_TREE) len=16384 empty_size=0 flags=36(METADATA|DUP)
- find_free_extent: root=5(FS_TREE) len=8192 empty_size=0 flags=1(DATA)
- find_free_extent: root=5(FS_TREE) len=16384 empty_size=0 flags=1(DATA)
- find_free_extent: root=5(FS_TREE) len=4096 empty_size=0 flags=1(DATA)
- find_free_extent: root=5(FS_TREE) len=8192 empty_size=0 flags=1(DATA)
- find_free_extent: root=5(FS_TREE) len=16384 empty_size=0 flags=36(METADATA|DUP)
- find_free_extent: root=7(CSUM_TREE) len=16384 empty_size=0 flags=36(METADATA|DUP)
- find_free_extent: root=2(EXTENT_TREE) len=16384 empty_size=0 flags=36(METADATA|DUP)
- find_free_extent: root=1(ROOT_TREE) len=16384 empty_size=0 flags=36(METADATA|DUP)
-
-Upstream-Status: Backport
-
-Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
-Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
-Change-Id: I1d674064d29b31417e2acffdeb735f5052a87032
----
- instrumentation/events/lttng-module/btrfs.h | 206 ++++++++++++--------
- 1 file changed, 122 insertions(+), 84 deletions(-)
-
-diff --git a/instrumentation/events/lttng-module/btrfs.h b/instrumentation/events/lttng-module/btrfs.h
-index 7b290085..52fcfd0d 100644
---- a/instrumentation/events/lttng-module/btrfs.h
-+++ b/instrumentation/events/lttng-module/btrfs.h
-@@ -1856,7 +1856,29 @@ LTTNG_TRACEPOINT_EVENT_INSTANCE(btrfs__reserved_extent, btrfs_reserved_extent_f
-
- #endif /* #else #if (LINUX_VERSION_CODE >= KERNEL_VERSION(4,10,0)) */
-
--#if (LINUX_VERSION_CODE >= KERNEL_VERSION(5,5,0))
-+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(5,10,0) || \
-+ LTTNG_KERNEL_RANGE(5,9,6, 5,10,0) || \
-+ LTTNG_KERNEL_RANGE(5,4,78, 5,5,0))
-+LTTNG_TRACEPOINT_EVENT_MAP(find_free_extent,
-+
-+ btrfs_find_free_extent,
-+
-+ TP_PROTO(const struct btrfs_root *root, u64 num_bytes, u64 empty_size,
-+ u64 data),
-+
-+ TP_ARGS(root, num_bytes, empty_size, data),
-+
-+ TP_FIELDS(
-+ ctf_array(u8, fsid, root->lttng_fs_info_fsid, BTRFS_UUID_SIZE)
-+ ctf_integer(u64, root_objectid, root->root_key.objectid)
-+ ctf_integer(u64, num_bytes, num_bytes)
-+ ctf_integer(u64, empty_size, empty_size)
-+ ctf_integer(u64, data, data)
-+ )
-+)
-+
-+#elif (LINUX_VERSION_CODE >= KERNEL_VERSION(5,5,0))
-+
- LTTNG_TRACEPOINT_EVENT_MAP(find_free_extent,
-
- btrfs_find_free_extent,
-@@ -1874,6 +1896,105 @@ LTTNG_TRACEPOINT_EVENT_MAP(find_free_extent,
- )
- )
-
-+#elif (LINUX_VERSION_CODE >= KERNEL_VERSION(4,18,0))
-+
-+LTTNG_TRACEPOINT_EVENT_MAP(find_free_extent,
-+
-+ btrfs_find_free_extent,
-+
-+ TP_PROTO(const struct btrfs_fs_info *fs_info, u64 num_bytes, u64 empty_size,
-+ u64 data),
-+
-+ TP_ARGS(fs_info, num_bytes, empty_size, data),
-+
-+ TP_FIELDS(
-+ ctf_array(u8, fsid, lttng_fs_info_fsid, BTRFS_UUID_SIZE)
-+ ctf_integer(u64, num_bytes, num_bytes)
-+ ctf_integer(u64, empty_size, empty_size)
-+ ctf_integer(u64, data, data)
-+ )
-+)
-+
-+#elif (LINUX_VERSION_CODE >= KERNEL_VERSION(4,14,0))
-+
-+LTTNG_TRACEPOINT_EVENT_MAP(find_free_extent,
-+
-+ btrfs_find_free_extent,
-+
-+ TP_PROTO(const struct btrfs_fs_info *fs_info, u64 num_bytes, u64 empty_size,
-+ u64 data),
-+
-+ TP_ARGS(fs_info, num_bytes, empty_size, data),
-+
-+ TP_FIELDS(
-+ ctf_array(u8, fsid, lttng_fs_info_fsid, BTRFS_UUID_SIZE)
-+ ctf_integer(u64, num_bytes, num_bytes)
-+ ctf_integer(u64, empty_size, empty_size)
-+ ctf_integer(u64, data, data)
-+ )
-+)
-+
-+#elif (LINUX_VERSION_CODE >= KERNEL_VERSION(4,10,0))
-+
-+LTTNG_TRACEPOINT_EVENT_MAP(find_free_extent,
-+
-+ btrfs_find_free_extent,
-+
-+ TP_PROTO(struct btrfs_fs_info *fs_info, u64 num_bytes, u64 empty_size,
-+ u64 data),
-+
-+ TP_ARGS(fs_info, num_bytes, empty_size, data),
-+
-+ TP_FIELDS(
-+ ctf_array(u8, fsid, lttng_fs_info_fsid, BTRFS_UUID_SIZE)
-+ ctf_integer(u64, num_bytes, num_bytes)
-+ ctf_integer(u64, empty_size, empty_size)
-+ ctf_integer(u64, data, data)
-+ )
-+)
-+
-+#elif (LTTNG_SLE_KERNEL_RANGE(4,4,73,5,0,0, 4,4,73,6,0,0) || \
-+ LTTNG_SLE_KERNEL_RANGE(4,4,82,6,0,0, 4,4,82,7,0,0) || \
-+ LTTNG_SLE_KERNEL_RANGE(4,4,92,6,0,0, 4,4,92,7,0,0) || \
-+ LTTNG_SLE_KERNEL_RANGE(4,4,103,6,0,0, 4,5,0,0,0,0))
-+
-+LTTNG_TRACEPOINT_EVENT_MAP(find_free_extent,
-+
-+ btrfs_find_free_extent,
-+
-+ TP_PROTO(const struct btrfs_root *root, u64 num_bytes, u64 empty_size,
-+ u64 data),
-+
-+ TP_ARGS(root, num_bytes, empty_size, data),
-+
-+ TP_FIELDS(
-+ ctf_integer(u64, root_objectid, root->root_key.objectid)
-+ ctf_integer(u64, num_bytes, num_bytes)
-+ ctf_integer(u64, empty_size, empty_size)
-+ ctf_integer(u64, data, data)
-+ )
-+)
-+#elif (LINUX_VERSION_CODE >= KERNEL_VERSION(3,3,0))
-+
-+LTTNG_TRACEPOINT_EVENT_MAP(find_free_extent,
-+
-+ btrfs_find_free_extent,
-+
-+ TP_PROTO(struct btrfs_root *root, u64 num_bytes, u64 empty_size,
-+ u64 data),
-+
-+ TP_ARGS(root, num_bytes, empty_size, data),
-+
-+ TP_FIELDS(
-+ ctf_integer(u64, root_objectid, root->root_key.objectid)
-+ ctf_integer(u64, num_bytes, num_bytes)
-+ ctf_integer(u64, empty_size, empty_size)
-+ ctf_integer(u64, data, data)
-+ )
-+)
-+#endif
-+
-+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(5,5,0))
- LTTNG_TRACEPOINT_EVENT_CLASS(btrfs__reserve_extent,
-
- TP_PROTO(const struct btrfs_block_group *block_group, u64 start,
-@@ -1907,22 +2028,6 @@ LTTNG_TRACEPOINT_EVENT_INSTANCE(btrfs__reserve_extent, btrfs_reserve_extent_clus
- )
-
- #elif (LINUX_VERSION_CODE >= KERNEL_VERSION(4,18,0))
--LTTNG_TRACEPOINT_EVENT_MAP(find_free_extent,
--
-- btrfs_find_free_extent,
--
-- TP_PROTO(const struct btrfs_fs_info *fs_info, u64 num_bytes, u64 empty_size,
-- u64 data),
--
-- TP_ARGS(fs_info, num_bytes, empty_size, data),
--
-- TP_FIELDS(
-- ctf_array(u8, fsid, lttng_fs_info_fsid, BTRFS_UUID_SIZE)
-- ctf_integer(u64, num_bytes, num_bytes)
-- ctf_integer(u64, empty_size, empty_size)
-- ctf_integer(u64, data, data)
-- )
--)
-
- LTTNG_TRACEPOINT_EVENT_CLASS(btrfs__reserve_extent,
-
-@@ -1957,22 +2062,6 @@ LTTNG_TRACEPOINT_EVENT_INSTANCE(btrfs__reserve_extent, btrfs_reserve_extent_clus
- )
-
- #elif (LINUX_VERSION_CODE >= KERNEL_VERSION(4,14,0))
--LTTNG_TRACEPOINT_EVENT_MAP(find_free_extent,
--
-- btrfs_find_free_extent,
--
-- TP_PROTO(const struct btrfs_fs_info *fs_info, u64 num_bytes, u64 empty_size,
-- u64 data),
--
-- TP_ARGS(fs_info, num_bytes, empty_size, data),
--
-- TP_FIELDS(
-- ctf_array(u8, fsid, lttng_fs_info_fsid, BTRFS_UUID_SIZE)
-- ctf_integer(u64, num_bytes, num_bytes)
-- ctf_integer(u64, empty_size, empty_size)
-- ctf_integer(u64, data, data)
-- )
--)
-
- LTTNG_TRACEPOINT_EVENT_CLASS(btrfs__reserve_extent,
-
-@@ -2011,23 +2100,6 @@ LTTNG_TRACEPOINT_EVENT_INSTANCE(btrfs__reserve_extent, btrfs_reserve_extent_clus
-
- #elif (LINUX_VERSION_CODE >= KERNEL_VERSION(4,10,0))
-
--LTTNG_TRACEPOINT_EVENT_MAP(find_free_extent,
--
-- btrfs_find_free_extent,
--
-- TP_PROTO(struct btrfs_fs_info *fs_info, u64 num_bytes, u64 empty_size,
-- u64 data),
--
-- TP_ARGS(fs_info, num_bytes, empty_size, data),
--
-- TP_FIELDS(
-- ctf_array(u8, fsid, lttng_fs_info_fsid, BTRFS_UUID_SIZE)
-- ctf_integer(u64, num_bytes, num_bytes)
-- ctf_integer(u64, empty_size, empty_size)
-- ctf_integer(u64, data, data)
-- )
--)
--
- LTTNG_TRACEPOINT_EVENT_CLASS(btrfs__reserve_extent,
-
- TP_PROTO(struct btrfs_fs_info *fs_info,
-@@ -2066,23 +2138,6 @@ LTTNG_TRACEPOINT_EVENT_INSTANCE(btrfs__reserve_extent, btrfs_reserve_extent_clus
- LTTNG_SLE_KERNEL_RANGE(4,4,92,6,0,0, 4,4,92,7,0,0) || \
- LTTNG_SLE_KERNEL_RANGE(4,4,103,6,0,0, 4,5,0,0,0,0))
-
--LTTNG_TRACEPOINT_EVENT_MAP(find_free_extent,
--
-- btrfs_find_free_extent,
--
-- TP_PROTO(const struct btrfs_root *root, u64 num_bytes, u64 empty_size,
-- u64 data),
--
-- TP_ARGS(root, num_bytes, empty_size, data),
--
-- TP_FIELDS(
-- ctf_integer(u64, root_objectid, root->root_key.objectid)
-- ctf_integer(u64, num_bytes, num_bytes)
-- ctf_integer(u64, empty_size, empty_size)
-- ctf_integer(u64, data, data)
-- )
--)
--
- LTTNG_TRACEPOINT_EVENT_CLASS(btrfs__reserve_extent,
-
- TP_PROTO(const struct btrfs_root *root,
-@@ -2120,23 +2175,6 @@ LTTNG_TRACEPOINT_EVENT_INSTANCE(btrfs__reserve_extent, btrfs_reserve_extent_clus
-
- #elif (LINUX_VERSION_CODE >= KERNEL_VERSION(3,3,0))
-
--LTTNG_TRACEPOINT_EVENT_MAP(find_free_extent,
--
-- btrfs_find_free_extent,
--
-- TP_PROTO(struct btrfs_root *root, u64 num_bytes, u64 empty_size,
-- u64 data),
--
-- TP_ARGS(root, num_bytes, empty_size, data),
--
-- TP_FIELDS(
-- ctf_integer(u64, root_objectid, root->root_key.objectid)
-- ctf_integer(u64, num_bytes, num_bytes)
-- ctf_integer(u64, empty_size, empty_size)
-- ctf_integer(u64, data, data)
-- )
--)
--
- LTTNG_TRACEPOINT_EVENT_CLASS(btrfs__reserve_extent,
-
- TP_PROTO(struct btrfs_root *root,
---
-2.25.1
-
diff --git a/poky/meta/recipes-kernel/lttng/lttng-modules/0004-fix-btrfs-make-ordered-extent-tracepoint-take-btrfs_.patch b/poky/meta/recipes-kernel/lttng/lttng-modules/0004-fix-btrfs-make-ordered-extent-tracepoint-take-btrfs_.patch
deleted file mode 100644
index 67025418c3..0000000000
--- a/poky/meta/recipes-kernel/lttng/lttng-modules/0004-fix-btrfs-make-ordered-extent-tracepoint-take-btrfs_.patch
+++ /dev/null
@@ -1,179 +0,0 @@
-From d51a3332909ff034c8ec16ead0090bd6a4e2bc38 Mon Sep 17 00:00:00 2001
-From: Michael Jeanson <mjeanson@efficios.com>
-Date: Tue, 27 Oct 2020 12:10:05 -0400
-Subject: [PATCH 04/16] fix: btrfs: make ordered extent tracepoint take
- btrfs_inode (v5.10)
-
-See upstream commit :
-
- commit acbf1dd0fcbd10c67826a19958f55a053b32f532
- Author: Nikolay Borisov <nborisov@suse.com>
- Date: Mon Aug 31 14:42:40 2020 +0300
-
- btrfs: make ordered extent tracepoint take btrfs_inode
-
-Upstream-Status: Backport
-
-Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
-Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
-Change-Id: I096d0801ffe0ad826cfe414cdd1c0857cbd2b624
----
- instrumentation/events/lttng-module/btrfs.h | 120 +++++++++++++++-----
- 1 file changed, 90 insertions(+), 30 deletions(-)
-
-diff --git a/instrumentation/events/lttng-module/btrfs.h b/instrumentation/events/lttng-module/btrfs.h
-index 52fcfd0d..d47f3280 100644
---- a/instrumentation/events/lttng-module/btrfs.h
-+++ b/instrumentation/events/lttng-module/btrfs.h
-@@ -346,7 +346,29 @@ LTTNG_TRACEPOINT_EVENT(btrfs_handle_em_exist,
- )
- #endif
-
--#if (LINUX_VERSION_CODE >= KERNEL_VERSION(5,6,0))
-+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(5,10,0))
-+LTTNG_TRACEPOINT_EVENT_CLASS(btrfs__ordered_extent,
-+
-+ TP_PROTO(const struct btrfs_inode *inode,
-+ const struct btrfs_ordered_extent *ordered),
-+
-+ TP_ARGS(inode, ordered),
-+
-+ TP_FIELDS(
-+ ctf_array(u8, fsid, inode->root->lttng_fs_info_fsid, BTRFS_UUID_SIZE)
-+ ctf_integer(ino_t, ino, btrfs_ino(inode))
-+ ctf_integer(u64, file_offset, ordered->file_offset)
-+ ctf_integer(u64, start, ordered->disk_bytenr)
-+ ctf_integer(u64, len, ordered->num_bytes)
-+ ctf_integer(u64, disk_len, ordered->disk_num_bytes)
-+ ctf_integer(u64, bytes_left, ordered->bytes_left)
-+ ctf_integer(unsigned long, flags, ordered->flags)
-+ ctf_integer(int, compress_type, ordered->compress_type)
-+ ctf_integer(int, refs, refcount_read(&ordered->refs))
-+ ctf_integer(u64, root_objectid, inode->root->root_key.objectid)
-+ )
-+)
-+#elif (LINUX_VERSION_CODE >= KERNEL_VERSION(5,6,0))
- LTTNG_TRACEPOINT_EVENT_CLASS(btrfs__ordered_extent,
-
- TP_PROTO(const struct inode *inode,
-@@ -458,7 +480,39 @@ LTTNG_TRACEPOINT_EVENT_CLASS(btrfs__ordered_extent,
- )
- #endif
-
--#if (LINUX_VERSION_CODE >= KERNEL_VERSION(4,14,0) || \
-+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(5,10,0))
-+LTTNG_TRACEPOINT_EVENT_INSTANCE(btrfs__ordered_extent, btrfs_ordered_extent_add,
-+
-+ TP_PROTO(const struct btrfs_inode *inode,
-+ const struct btrfs_ordered_extent *ordered),
-+
-+ TP_ARGS(inode, ordered)
-+)
-+
-+LTTNG_TRACEPOINT_EVENT_INSTANCE(btrfs__ordered_extent, btrfs_ordered_extent_remove,
-+
-+ TP_PROTO(const struct btrfs_inode *inode,
-+ const struct btrfs_ordered_extent *ordered),
-+
-+ TP_ARGS(inode, ordered)
-+)
-+
-+LTTNG_TRACEPOINT_EVENT_INSTANCE(btrfs__ordered_extent, btrfs_ordered_extent_start,
-+
-+ TP_PROTO(const struct btrfs_inode *inode,
-+ const struct btrfs_ordered_extent *ordered),
-+
-+ TP_ARGS(inode, ordered)
-+)
-+
-+LTTNG_TRACEPOINT_EVENT_INSTANCE(btrfs__ordered_extent, btrfs_ordered_extent_put,
-+
-+ TP_PROTO(const struct btrfs_inode *inode,
-+ const struct btrfs_ordered_extent *ordered),
-+
-+ TP_ARGS(inode, ordered)
-+)
-+#elif (LINUX_VERSION_CODE >= KERNEL_VERSION(4,14,0) || \
- LTTNG_SLE_KERNEL_RANGE(4,4,73,5,0,0, 4,4,73,6,0,0) || \
- LTTNG_SLE_KERNEL_RANGE(4,4,82,6,0,0, 4,4,82,7,0,0) || \
- LTTNG_SLE_KERNEL_RANGE(4,4,92,6,0,0, 4,4,92,7,0,0) || \
-@@ -494,7 +548,41 @@ LTTNG_TRACEPOINT_EVENT_INSTANCE(btrfs__ordered_extent, btrfs_ordered_extent_put,
-
- TP_ARGS(inode, ordered)
- )
-+#else
-+LTTNG_TRACEPOINT_EVENT_INSTANCE(btrfs__ordered_extent, btrfs_ordered_extent_add,
-+
-+ TP_PROTO(struct inode *inode, struct btrfs_ordered_extent *ordered),
-+
-+ TP_ARGS(inode, ordered)
-+)
-+
-+LTTNG_TRACEPOINT_EVENT_INSTANCE(btrfs__ordered_extent, btrfs_ordered_extent_remove,
-+
-+ TP_PROTO(struct inode *inode, struct btrfs_ordered_extent *ordered),
-+
-+ TP_ARGS(inode, ordered)
-+)
-+
-+LTTNG_TRACEPOINT_EVENT_INSTANCE(btrfs__ordered_extent, btrfs_ordered_extent_start,
-+
-+ TP_PROTO(struct inode *inode, struct btrfs_ordered_extent *ordered),
-+
-+ TP_ARGS(inode, ordered)
-+)
-
-+LTTNG_TRACEPOINT_EVENT_INSTANCE(btrfs__ordered_extent, btrfs_ordered_extent_put,
-+
-+ TP_PROTO(struct inode *inode, struct btrfs_ordered_extent *ordered),
-+
-+ TP_ARGS(inode, ordered)
-+)
-+#endif
-+
-+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(4,14,0) || \
-+ LTTNG_SLE_KERNEL_RANGE(4,4,73,5,0,0, 4,4,73,6,0,0) || \
-+ LTTNG_SLE_KERNEL_RANGE(4,4,82,6,0,0, 4,4,82,7,0,0) || \
-+ LTTNG_SLE_KERNEL_RANGE(4,4,92,6,0,0, 4,4,92,7,0,0) || \
-+ LTTNG_SLE_KERNEL_RANGE(4,4,103,6,0,0, 4,5,0,0,0,0))
- LTTNG_TRACEPOINT_EVENT_CLASS(btrfs__writepage,
-
- TP_PROTO(const struct page *page, const struct inode *inode,
-@@ -563,34 +651,6 @@ LTTNG_TRACEPOINT_EVENT(btrfs_sync_file,
- )
- )
- #else
--LTTNG_TRACEPOINT_EVENT_INSTANCE(btrfs__ordered_extent, btrfs_ordered_extent_add,
--
-- TP_PROTO(struct inode *inode, struct btrfs_ordered_extent *ordered),
--
-- TP_ARGS(inode, ordered)
--)
--
--LTTNG_TRACEPOINT_EVENT_INSTANCE(btrfs__ordered_extent, btrfs_ordered_extent_remove,
--
-- TP_PROTO(struct inode *inode, struct btrfs_ordered_extent *ordered),
--
-- TP_ARGS(inode, ordered)
--)
--
--LTTNG_TRACEPOINT_EVENT_INSTANCE(btrfs__ordered_extent, btrfs_ordered_extent_start,
--
-- TP_PROTO(struct inode *inode, struct btrfs_ordered_extent *ordered),
--
-- TP_ARGS(inode, ordered)
--)
--
--LTTNG_TRACEPOINT_EVENT_INSTANCE(btrfs__ordered_extent, btrfs_ordered_extent_put,
--
-- TP_PROTO(struct inode *inode, struct btrfs_ordered_extent *ordered),
--
-- TP_ARGS(inode, ordered)
--)
--
- LTTNG_TRACEPOINT_EVENT_CLASS(btrfs__writepage,
-
- TP_PROTO(struct page *page, struct inode *inode,
---
-2.25.1
-
diff --git a/poky/meta/recipes-kernel/lttng/lttng-modules/0005-fix-ext4-fast-commit-recovery-path-v5.10.patch b/poky/meta/recipes-kernel/lttng/lttng-modules/0005-fix-ext4-fast-commit-recovery-path-v5.10.patch
deleted file mode 100644
index 63d97fa4a3..0000000000
--- a/poky/meta/recipes-kernel/lttng/lttng-modules/0005-fix-ext4-fast-commit-recovery-path-v5.10.patch
+++ /dev/null
@@ -1,91 +0,0 @@
-From b96f5364ba4d5a8b9e8159fe0b9e20d598a1c0f5 Mon Sep 17 00:00:00 2001
-From: Michael Jeanson <mjeanson@efficios.com>
-Date: Mon, 26 Oct 2020 17:03:23 -0400
-Subject: [PATCH 05/16] fix: ext4: fast commit recovery path (v5.10)
-
-See upstream commit :
-
- commit 8016e29f4362e285f0f7e38fadc61a5b7bdfdfa2
- Author: Harshad Shirwadkar <harshadshirwadkar@gmail.com>
- Date: Thu Oct 15 13:37:59 2020 -0700
-
- ext4: fast commit recovery path
-
- This patch adds fast commit recovery path support for Ext4 file
- system. We add several helper functions that are similar in spirit to
- e2fsprogs journal recovery path handlers. Example of such functions
- include - a simple block allocator, idempotent block bitmap update
- function etc. Using these routines and the fast commit log in the fast
- commit area, the recovery path (ext4_fc_replay()) performs fast commit
- log recovery.
-
-Upstream-Status: Backport
-
-Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
-Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
-Change-Id: Ia65cf44e108f2df0b458f0d335f33a8f18f50baa
----
- instrumentation/events/lttng-module/ext4.h | 40 ++++++++++++++++++++++
- 1 file changed, 40 insertions(+)
-
-diff --git a/instrumentation/events/lttng-module/ext4.h b/instrumentation/events/lttng-module/ext4.h
-index f9a55e29..5fddccad 100644
---- a/instrumentation/events/lttng-module/ext4.h
-+++ b/instrumentation/events/lttng-module/ext4.h
-@@ -1423,6 +1423,18 @@ LTTNG_TRACEPOINT_EVENT(ext4_ext_load_extent,
- )
- )
-
-+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(5,10,0))
-+LTTNG_TRACEPOINT_EVENT(ext4_load_inode,
-+ TP_PROTO(struct super_block *sb, unsigned long ino),
-+
-+ TP_ARGS(sb, ino),
-+
-+ TP_FIELDS(
-+ ctf_integer(dev_t, dev, sb->s_dev)
-+ ctf_integer(ino_t, ino, ino)
-+ )
-+)
-+#else
- LTTNG_TRACEPOINT_EVENT(ext4_load_inode,
- TP_PROTO(struct inode *inode),
-
-@@ -2045,6 +2057,34 @@ LTTNG_TRACEPOINT_EVENT(ext4_es_shrink_exit,
-
- #endif
-
-+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(5,10,0))
-+LTTNG_TRACEPOINT_EVENT(ext4_fc_replay_scan,
-+ TP_PROTO(struct super_block *sb, int error, int off),
-+
-+ TP_ARGS(sb, error, off),
-+
-+ TP_FIELDS(
-+ ctf_integer(dev_t, dev, sb->s_dev)
-+ ctf_integer(int, error, error)
-+ ctf_integer(int, off, off)
-+ )
-+)
-+
-+LTTNG_TRACEPOINT_EVENT(ext4_fc_replay,
-+ TP_PROTO(struct super_block *sb, int tag, int ino, int priv1, int priv2),
-+
-+ TP_ARGS(sb, tag, ino, priv1, priv2),
-+
-+ TP_FIELDS(
-+ ctf_integer(dev_t, dev, sb->s_dev)
-+ ctf_integer(int, tag, tag)
-+ ctf_integer(int, ino, ino)
-+ ctf_integer(int, priv1, priv1)
-+ ctf_integer(int, priv2, priv2)
-+ )
-+)
-+#endif
-+
- #endif /* LTTNG_TRACE_EXT4_H */
-
- /* This part must be outside protection */
---
-2.25.1
-
diff --git a/poky/meta/recipes-kernel/lttng/lttng-modules/0006-fix-KVM-x86-Add-intr-vectoring-info-and-error-code-t.patch b/poky/meta/recipes-kernel/lttng/lttng-modules/0006-fix-KVM-x86-Add-intr-vectoring-info-and-error-code-t.patch
deleted file mode 100644
index 56c563cea3..0000000000
--- a/poky/meta/recipes-kernel/lttng/lttng-modules/0006-fix-KVM-x86-Add-intr-vectoring-info-and-error-code-t.patch
+++ /dev/null
@@ -1,124 +0,0 @@
-From a6334775b763c187d84914e89a0b835a793ae0fd Mon Sep 17 00:00:00 2001
-From: Michael Jeanson <mjeanson@efficios.com>
-Date: Mon, 26 Oct 2020 14:11:17 -0400
-Subject: [PATCH 06/16] fix: KVM: x86: Add intr/vectoring info and error code
- to kvm_exit tracepoint (v5.10)
-
-See upstream commit :
-
- commit 235ba74f008d2e0936b29f77f68d4e2f73ffd24a
- Author: Sean Christopherson <sean.j.christopherson@intel.com>
- Date: Wed Sep 23 13:13:46 2020 -0700
-
- KVM: x86: Add intr/vectoring info and error code to kvm_exit tracepoint
-
- Extend the kvm_exit tracepoint to align it with kvm_nested_vmexit in
- terms of what information is captured. On SVM, add interrupt info and
- error code, while on VMX it adds IDT vectoring and error code. This
- sets the stage for macrofying the kvm_exit tracepoint definition so that
- it can be reused for kvm_nested_vmexit without loss of information.
-
- Opportunistically stuff a zero for VM_EXIT_INTR_INFO if the VM-Enter
- failed, as the field is guaranteed to be invalid. Note, it'd be
- possible to further filter the interrupt/exception fields based on the
- VM-Exit reason, but the helper is intended only for tracepoints, i.e.
- an extra VMREAD or two is a non-issue, the failed VM-Enter case is just
- low hanging fruit.
-
-Upstream-Status: Backport
-
-Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
-Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
-Change-Id: I638fa29ef7d8bb432de42a33f9ae4db43259b915
----
- .../events/lttng-module/arch/x86/kvm/trace.h | 55 ++++++++++++++++++-
- 1 file changed, 53 insertions(+), 2 deletions(-)
-
-diff --git a/instrumentation/events/lttng-module/arch/x86/kvm/trace.h b/instrumentation/events/lttng-module/arch/x86/kvm/trace.h
-index 4416ae02..0917b51f 100644
---- a/instrumentation/events/lttng-module/arch/x86/kvm/trace.h
-+++ b/instrumentation/events/lttng-module/arch/x86/kvm/trace.h
-@@ -115,6 +115,37 @@ LTTNG_TRACEPOINT_EVENT_MAP(kvm_apic, kvm_x86_apic,
- /*
- * Tracepoint for kvm guest exit:
- */
-+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(5,10,0))
-+LTTNG_TRACEPOINT_EVENT_CODE_MAP(kvm_exit, kvm_x86_exit,
-+ TP_PROTO(unsigned int exit_reason, struct kvm_vcpu *vcpu, u32 isa),
-+ TP_ARGS(exit_reason, vcpu, isa),
-+
-+ TP_locvar(
-+ u64 info1, info2;
-+ u32 intr_info, error_code;
-+ ),
-+
-+ TP_code_pre(
-+ kvm_x86_ops.get_exit_info(vcpu, &tp_locvar->info1,
-+ &tp_locvar->info2,
-+ &tp_locvar->intr_info,
-+ &tp_locvar->error_code);
-+ ),
-+
-+ TP_FIELDS(
-+ ctf_integer(unsigned int, exit_reason, exit_reason)
-+ ctf_integer(unsigned long, guest_rip, kvm_rip_read(vcpu))
-+ ctf_integer(u32, isa, isa)
-+ ctf_integer(u64, info1, tp_locvar->info1)
-+ ctf_integer(u64, info2, tp_locvar->info2)
-+ ctf_integer(u32, intr_info, tp_locvar->intr_info)
-+ ctf_integer(u32, error_code, tp_locvar->error_code)
-+ ctf_integer(unsigned int, vcpu_id, vcpu->vcpu_id)
-+ ),
-+
-+ TP_code_post()
-+)
-+#elif (LINUX_VERSION_CODE >= KERNEL_VERSION(5,7,0))
- LTTNG_TRACEPOINT_EVENT_CODE_MAP(kvm_exit, kvm_x86_exit,
- TP_PROTO(unsigned int exit_reason, struct kvm_vcpu *vcpu, u32 isa),
- TP_ARGS(exit_reason, vcpu, isa),
-@@ -124,13 +155,32 @@ LTTNG_TRACEPOINT_EVENT_CODE_MAP(kvm_exit, kvm_x86_exit,
- ),
-
- TP_code_pre(
--#if (LINUX_VERSION_CODE >= KERNEL_VERSION(5,7,0))
- kvm_x86_ops.get_exit_info(vcpu, &tp_locvar->info1,
- &tp_locvar->info2);
-+ ),
-+
-+ TP_FIELDS(
-+ ctf_integer(unsigned int, exit_reason, exit_reason)
-+ ctf_integer(unsigned long, guest_rip, kvm_rip_read(vcpu))
-+ ctf_integer(u32, isa, isa)
-+ ctf_integer(u64, info1, tp_locvar->info1)
-+ ctf_integer(u64, info2, tp_locvar->info2)
-+ ),
-+
-+ TP_code_post()
-+)
- #else
-+LTTNG_TRACEPOINT_EVENT_CODE_MAP(kvm_exit, kvm_x86_exit,
-+ TP_PROTO(unsigned int exit_reason, struct kvm_vcpu *vcpu, u32 isa),
-+ TP_ARGS(exit_reason, vcpu, isa),
-+
-+ TP_locvar(
-+ u64 info1, info2;
-+ ),
-+
-+ TP_code_pre(
- kvm_x86_ops->get_exit_info(vcpu, &tp_locvar->info1,
- &tp_locvar->info2);
--#endif
- ),
-
- TP_FIELDS(
-@@ -143,6 +193,7 @@ LTTNG_TRACEPOINT_EVENT_CODE_MAP(kvm_exit, kvm_x86_exit,
-
- TP_code_post()
- )
-+#endif
-
- /*
- * Tracepoint for kvm interrupt injection:
---
-2.25.1
-
diff --git a/poky/meta/recipes-kernel/lttng/lttng-modules/0007-fix-kvm-x86-mmu-Add-TDP-MMU-PF-handler-v5.10.patch b/poky/meta/recipes-kernel/lttng/lttng-modules/0007-fix-kvm-x86-mmu-Add-TDP-MMU-PF-handler-v5.10.patch
deleted file mode 100644
index d78a8c25c7..0000000000
--- a/poky/meta/recipes-kernel/lttng/lttng-modules/0007-fix-kvm-x86-mmu-Add-TDP-MMU-PF-handler-v5.10.patch
+++ /dev/null
@@ -1,82 +0,0 @@
-From 2f421c43c60b2c9d3ed63c1a363320e98a536a35 Mon Sep 17 00:00:00 2001
-From: Michael Jeanson <mjeanson@efficios.com>
-Date: Mon, 26 Oct 2020 14:28:35 -0400
-Subject: [PATCH 07/16] fix: kvm: x86/mmu: Add TDP MMU PF handler (v5.10)
-
-See upstream commit :
-
- commit bb18842e21111a979e2e0e1c5d85c09646f18d51
- Author: Ben Gardon <bgardon@google.com>
- Date: Wed Oct 14 11:26:50 2020 -0700
-
- kvm: x86/mmu: Add TDP MMU PF handler
-
- Add functions to handle page faults in the TDP MMU. These page faults
- are currently handled in much the same way as the x86 shadow paging
- based MMU, however the ordering of some operations is slightly
- different. Future patches will add eager NX splitting, a fast page fault
- handler, and parallel page faults.
-
- Tested by running kvm-unit-tests and KVM selftests on an Intel Haswell
- machine. This series introduced no new failures.
-
-Upstream-Status: Backport
-
-Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
-Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
-Change-Id: Ie56959cb6c77913d2f1188b0ca15da9114623a4e
----
- .../lttng-module/arch/x86/kvm/mmutrace.h | 20 ++++++++++++++++++-
- probes/lttng-probe-kvm-x86-mmu.c | 5 +++++
- 2 files changed, 24 insertions(+), 1 deletion(-)
-
-diff --git a/instrumentation/events/lttng-module/arch/x86/kvm/mmutrace.h b/instrumentation/events/lttng-module/arch/x86/kvm/mmutrace.h
-index e5470400..86717835 100644
---- a/instrumentation/events/lttng-module/arch/x86/kvm/mmutrace.h
-+++ b/instrumentation/events/lttng-module/arch/x86/kvm/mmutrace.h
-@@ -163,7 +163,25 @@ LTTNG_TRACEPOINT_EVENT_INSTANCE(kvm_mmu_page_class, kvm_mmu_prepare_zap_page,
- TP_ARGS(sp)
- )
-
--#if (LINUX_VERSION_CODE >= KERNEL_VERSION(3,11,0))
-+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(5,10,0))
-+
-+LTTNG_TRACEPOINT_EVENT_MAP(
-+ mark_mmio_spte,
-+
-+ kvm_mmu_mark_mmio_spte,
-+
-+ TP_PROTO(u64 *sptep, gfn_t gfn, u64 spte),
-+ TP_ARGS(sptep, gfn, spte),
-+
-+ TP_FIELDS(
-+ ctf_integer_hex(void *, sptep, sptep)
-+ ctf_integer(gfn_t, gfn, gfn)
-+ ctf_integer(unsigned, access, spte & ACC_ALL)
-+ ctf_integer(unsigned int, gen, get_mmio_spte_generation(spte))
-+ )
-+)
-+
-+#elif (LINUX_VERSION_CODE >= KERNEL_VERSION(3,11,0))
-
- LTTNG_TRACEPOINT_EVENT_MAP(
- mark_mmio_spte,
-diff --git a/probes/lttng-probe-kvm-x86-mmu.c b/probes/lttng-probe-kvm-x86-mmu.c
-index 8f981865..5043c776 100644
---- a/probes/lttng-probe-kvm-x86-mmu.c
-+++ b/probes/lttng-probe-kvm-x86-mmu.c
-@@ -31,6 +31,11 @@
- #include <../../arch/x86/kvm/mmutrace.h>
- #endif
-
-+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(5,10,0))
-+#include <../arch/x86/kvm/mmu.h>
-+#include <../arch/x86/kvm/mmu/spte.h>
-+#endif
-+
- #undef TRACE_INCLUDE_PATH
- #undef TRACE_INCLUDE_FILE
-
---
-2.25.1
-
diff --git a/poky/meta/recipes-kernel/lttng/lttng-modules/0008-fix-KVM-x86-mmu-Return-unique-RET_PF_-values-if-the-.patch b/poky/meta/recipes-kernel/lttng/lttng-modules/0008-fix-KVM-x86-mmu-Return-unique-RET_PF_-values-if-the-.patch
deleted file mode 100644
index a71bb728f0..0000000000
--- a/poky/meta/recipes-kernel/lttng/lttng-modules/0008-fix-KVM-x86-mmu-Return-unique-RET_PF_-values-if-the-.patch
+++ /dev/null
@@ -1,71 +0,0 @@
-From 14bbccffa579f4d66e2900843d6afae1294ce7c8 Mon Sep 17 00:00:00 2001
-From: Michael Jeanson <mjeanson@efficios.com>
-Date: Mon, 26 Oct 2020 17:07:13 -0400
-Subject: [PATCH 08/16] fix: KVM: x86/mmu: Return unique RET_PF_* values if the
- fault was fixed (v5.10)
-
-See upstream commit :
-
- commit c4371c2a682e0da1ed2cd7e3c5496f055d873554
- Author: Sean Christopherson <sean.j.christopherson@intel.com>
- Date: Wed Sep 23 15:04:24 2020 -0700
-
- KVM: x86/mmu: Return unique RET_PF_* values if the fault was fixed
-
- Introduce RET_PF_FIXED and RET_PF_SPURIOUS to provide unique return
- values instead of overloading RET_PF_RETRY. In the short term, the
- unique values add clarity to the code and RET_PF_SPURIOUS will be used
- by set_spte() to avoid unnecessary work for spurious faults.
-
- In the long term, TDX will use RET_PF_FIXED to deterministically map
- memory during pre-boot. The page fault flow may bail early for benign
- reasons, e.g. if the mmu_notifier fires for an unrelated address. With
- only RET_PF_RETRY, it's impossible for the caller to distinguish between
- "cool, page is mapped" and "darn, need to try again", and thus cannot
- handle benign cases like the mmu_notifier retry.
-
-Upstream-Status: Backport
-
-Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
-Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
-Change-Id: Ie0855c78852b45f588e131fe2463e15aae1bc023
----
- .../lttng-module/arch/x86/kvm/mmutrace.h | 22 ++++++++++++++++++-
- 1 file changed, 21 insertions(+), 1 deletion(-)
-
-diff --git a/instrumentation/events/lttng-module/arch/x86/kvm/mmutrace.h b/instrumentation/events/lttng-module/arch/x86/kvm/mmutrace.h
-index 86717835..cdf0609f 100644
---- a/instrumentation/events/lttng-module/arch/x86/kvm/mmutrace.h
-+++ b/instrumentation/events/lttng-module/arch/x86/kvm/mmutrace.h
-@@ -233,7 +233,27 @@ LTTNG_TRACEPOINT_EVENT_MAP(
- )
- )
-
--#if (LINUX_VERSION_CODE >= KERNEL_VERSION(5,6,0) || \
-+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(5,10,0))
-+LTTNG_TRACEPOINT_EVENT_MAP(
-+ fast_page_fault,
-+
-+ kvm_mmu_fast_page_fault,
-+
-+ TP_PROTO(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u32 error_code,
-+ u64 *sptep, u64 old_spte, int ret),
-+ TP_ARGS(vcpu, cr2_or_gpa, error_code, sptep, old_spte, ret),
-+
-+ TP_FIELDS(
-+ ctf_integer(int, vcpu_id, vcpu->vcpu_id)
-+ ctf_integer(gpa_t, cr2_or_gpa, cr2_or_gpa)
-+ ctf_integer(u32, error_code, error_code)
-+ ctf_integer_hex(u64 *, sptep, sptep)
-+ ctf_integer(u64, old_spte, old_spte)
-+ ctf_integer(u64, new_spte, *sptep)
-+ ctf_integer(int, ret, ret)
-+ )
-+)
-+#elif (LINUX_VERSION_CODE >= KERNEL_VERSION(5,6,0) || \
- LTTNG_KERNEL_RANGE(4,19,103, 4,20,0) || \
- LTTNG_KERNEL_RANGE(5,4,19, 5,5,0) || \
- LTTNG_KERNEL_RANGE(5,5,3, 5,6,0) || \
---
-2.25.1
-
diff --git a/poky/meta/recipes-kernel/lttng/lttng-modules/0009-fix-tracepoint-Optimize-using-static_call-v5.10.patch b/poky/meta/recipes-kernel/lttng/lttng-modules/0009-fix-tracepoint-Optimize-using-static_call-v5.10.patch
deleted file mode 100644
index b942aa5c95..0000000000
--- a/poky/meta/recipes-kernel/lttng/lttng-modules/0009-fix-tracepoint-Optimize-using-static_call-v5.10.patch
+++ /dev/null
@@ -1,155 +0,0 @@
-From c6b31b349fe901a8f586a66064f9e9b15449ac1c Mon Sep 17 00:00:00 2001
-From: Michael Jeanson <mjeanson@efficios.com>
-Date: Mon, 26 Oct 2020 17:09:05 -0400
-Subject: [PATCH 09/16] fix: tracepoint: Optimize using static_call() (v5.10)
-
-See upstream commit :
-
- commit d25e37d89dd2f41d7acae0429039d2f0ae8b4a07
- Author: Steven Rostedt (VMware) <rostedt@goodmis.org>
- Date: Tue Aug 18 15:57:52 2020 +0200
-
- tracepoint: Optimize using static_call()
-
- Currently the tracepoint site will iterate a vector and issue indirect
- calls to however many handlers are registered (ie. the vector is
- long).
-
- Using static_call() it is possible to optimize this for the common
- case of only having a single handler registered. In this case the
- static_call() can directly call this handler. Otherwise, if the vector
- is longer than 1, call a function that iterates the whole vector like
- the current code.
-
-Upstream-Status: Backport
-
-Change-Id: I739dd84d62cc1a821b8bd8acff74fa29aa25d22f
-Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
-Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
----
- lttng-statedump-impl.c | 44 ++++++++++++++++++++++++++++++++-------
- probes/lttng.c | 7 +++++--
- tests/probes/lttng-test.c | 7 ++++++-
- wrapper/tracepoint.h | 8 +++++++
- 4 files changed, 56 insertions(+), 10 deletions(-)
-
-diff --git a/lttng-statedump-impl.c b/lttng-statedump-impl.c
-index 54a309d1..e0b19b42 100644
---- a/lttng-statedump-impl.c
-+++ b/lttng-statedump-impl.c
-@@ -55,13 +55,43 @@
- #define LTTNG_INSTRUMENTATION
- #include <instrumentation/events/lttng-module/lttng-statedump.h>
-
--DEFINE_TRACE(lttng_statedump_block_device);
--DEFINE_TRACE(lttng_statedump_end);
--DEFINE_TRACE(lttng_statedump_interrupt);
--DEFINE_TRACE(lttng_statedump_file_descriptor);
--DEFINE_TRACE(lttng_statedump_start);
--DEFINE_TRACE(lttng_statedump_process_state);
--DEFINE_TRACE(lttng_statedump_network_interface);
-+LTTNG_DEFINE_TRACE(lttng_statedump_block_device,
-+ TP_PROTO(struct lttng_session *session,
-+ dev_t dev, const char *diskname),
-+ TP_ARGS(session, dev, diskname));
-+
-+LTTNG_DEFINE_TRACE(lttng_statedump_end,
-+ TP_PROTO(struct lttng_session *session),
-+ TP_ARGS(session));
-+
-+LTTNG_DEFINE_TRACE(lttng_statedump_interrupt,
-+ TP_PROTO(struct lttng_session *session,
-+ unsigned int irq, const char *chip_name,
-+ struct irqaction *action),
-+ TP_ARGS(session, irq, chip_name, action));
-+
-+LTTNG_DEFINE_TRACE(lttng_statedump_file_descriptor,
-+ TP_PROTO(struct lttng_session *session,
-+ struct files_struct *files,
-+ int fd, const char *filename,
-+ unsigned int flags, fmode_t fmode),
-+ TP_ARGS(session, files, fd, filename, flags, fmode));
-+
-+LTTNG_DEFINE_TRACE(lttng_statedump_start,
-+ TP_PROTO(struct lttng_session *session),
-+ TP_ARGS(session));
-+
-+LTTNG_DEFINE_TRACE(lttng_statedump_process_state,
-+ TP_PROTO(struct lttng_session *session,
-+ struct task_struct *p,
-+ int type, int mode, int submode, int status,
-+ struct files_struct *files),
-+ TP_ARGS(session, p, type, mode, submode, status, files));
-+
-+LTTNG_DEFINE_TRACE(lttng_statedump_network_interface,
-+ TP_PROTO(struct lttng_session *session,
-+ struct net_device *dev, struct in_ifaddr *ifa),
-+ TP_ARGS(session, dev, ifa));
-
- struct lttng_fd_ctx {
- char *page;
-diff --git a/probes/lttng.c b/probes/lttng.c
-index 05bc1388..7ddaa69f 100644
---- a/probes/lttng.c
-+++ b/probes/lttng.c
-@@ -8,7 +8,7 @@
- */
-
- #include <linux/module.h>
--#include <linux/tracepoint.h>
-+#include <wrapper/tracepoint.h>
- #include <linux/uaccess.h>
- #include <linux/gfp.h>
- #include <linux/fs.h>
-@@ -32,7 +32,10 @@
- #define LTTNG_LOGGER_COUNT_MAX 1024
- #define LTTNG_LOGGER_FILE "lttng-logger"
-
--DEFINE_TRACE(lttng_logger);
-+LTTNG_DEFINE_TRACE(lttng_logger,
-+ PARAMS(const char __user *text, size_t len),
-+ PARAMS(text, len)
-+);
-
- static struct proc_dir_entry *lttng_logger_dentry;
-
-diff --git a/tests/probes/lttng-test.c b/tests/probes/lttng-test.c
-index c728bed5..8f2d3feb 100644
---- a/tests/probes/lttng-test.c
-+++ b/tests/probes/lttng-test.c
-@@ -26,7 +26,12 @@
- #define LTTNG_INSTRUMENTATION
- #include <instrumentation/events/lttng-module/lttng-test.h>
-
--DEFINE_TRACE(lttng_test_filter_event);
-+LTTNG_DEFINE_TRACE(lttng_test_filter_event,
-+ PARAMS(int anint, int netint, long *values,
-+ char *text, size_t textlen,
-+ char *etext, uint32_t * net_values),
-+ PARAMS(anint, netint, values, text, textlen, etext, net_values)
-+);
-
- #define LTTNG_TEST_FILTER_EVENT_FILE "lttng-test-filter-event"
-
-diff --git a/wrapper/tracepoint.h b/wrapper/tracepoint.h
-index 3883e11a..758038b6 100644
---- a/wrapper/tracepoint.h
-+++ b/wrapper/tracepoint.h
-@@ -20,6 +20,14 @@
-
- #endif
-
-+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(5,10,0))
-+#define LTTNG_DEFINE_TRACE(name, proto, args) \
-+ DEFINE_TRACE(name, PARAMS(proto), PARAMS(args))
-+#else
-+#define LTTNG_DEFINE_TRACE(name, proto, args) \
-+ DEFINE_TRACE(name)
-+#endif
-+
- #ifndef HAVE_KABI_2635_TRACEPOINT
-
- #define kabi_2635_tracepoint_probe_register tracepoint_probe_register
---
-2.25.1
-
diff --git a/poky/meta/recipes-kernel/lttng/lttng-modules/0010-fix-include-order-for-older-kernels.patch b/poky/meta/recipes-kernel/lttng/lttng-modules/0010-fix-include-order-for-older-kernels.patch
deleted file mode 100644
index 250e9c6261..0000000000
--- a/poky/meta/recipes-kernel/lttng/lttng-modules/0010-fix-include-order-for-older-kernels.patch
+++ /dev/null
@@ -1,31 +0,0 @@
-From 2ce89d35c9477d8c17c00489c72e1548e16af9b9 Mon Sep 17 00:00:00 2001
-From: Michael Jeanson <mjeanson@efficios.com>
-Date: Fri, 20 Nov 2020 11:42:30 -0500
-Subject: [PATCH 10/16] fix: include order for older kernels
-
-Fixes a build failure on v3.0 and v3.1.
-
-Upstream-Status: Backport
-
-Change-Id: Ic48512d2aa5ee46678e67d147b92dba6d0959615
-Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
-Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
----
- lttng-events.h | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/lttng-events.h b/lttng-events.h
-index 099fd78b..f5cc57c6 100644
---- a/lttng-events.h
-+++ b/lttng-events.h
-@@ -16,6 +16,7 @@
- #include <linux/kref.h>
- #include <lttng-cpuhotplug.h>
- #include <linux/uuid.h>
-+#include <linux/irq_work.h>
- #include <wrapper/uprobes.h>
- #include <lttng-tracer.h>
- #include <lttng-abi.h>
---
-2.25.1
-
diff --git a/poky/meta/recipes-kernel/lttng/lttng-modules/0011-Add-release-maintainer-script.patch b/poky/meta/recipes-kernel/lttng/lttng-modules/0011-Add-release-maintainer-script.patch
deleted file mode 100644
index d25d64b9de..0000000000
--- a/poky/meta/recipes-kernel/lttng/lttng-modules/0011-Add-release-maintainer-script.patch
+++ /dev/null
@@ -1,59 +0,0 @@
-From 22ffa48439e617a32556365e00827fba062c5688 Mon Sep 17 00:00:00 2001
-From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
-Date: Mon, 23 Nov 2020 10:49:57 -0500
-Subject: [PATCH 11/16] Add release maintainer script
-
-Upstream-Status: Backport
-
-Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
----
- scripts/maintainer/do-release.sh | 37 ++++++++++++++++++++++++++++++++
- 1 file changed, 37 insertions(+)
- create mode 100755 scripts/maintainer/do-release.sh
-
-diff --git a/scripts/maintainer/do-release.sh b/scripts/maintainer/do-release.sh
-new file mode 100755
-index 00000000..e0cec167
---- /dev/null
-+++ b/scripts/maintainer/do-release.sh
-@@ -0,0 +1,37 @@
-+#!/bin/sh
-+
-+# invoke with do-release 2.N.M, or 2.N.M-rcXX
-+
-+REL=$1
-+SRCDIR=~/git/lttng-modules
-+# The output files are created in ${HOME}/stable/
-+OUTPUTDIR=${HOME}/stable
-+
-+if [ x"$1" = x"" ]; then
-+ echo "1 arg : VERSION";
-+ exit 1;
-+fi
-+
-+cd ${OUTPUTDIR}
-+
-+echo Doing LTTng modules release ${REL}
-+
-+mkdir lttng-modules-${REL}
-+cd lttng-modules-${REL}
-+cp -ax ${SRCDIR}/. .
-+
-+#cleanup
-+make clean
-+git clean -xdf
-+
-+for a in \*.orig \*.rej Module.markers Module.symvers; do
-+ find . -name "${a}" -exec rm '{}' \;;
-+done
-+for a in outgoing .tmp_versions .git .pc; do
-+ find . -name "${a}" -exec rm -rf '{}' \;;
-+done
-+
-+cd ..
-+tar cvfj lttng-modules-${REL}.tar.bz2 lttng-modules-${REL}
-+mksums lttng-modules-${REL}.tar.bz2
-+signpkg lttng-modules-${REL}.tar.bz2
---
-2.25.1
-
diff --git a/poky/meta/recipes-kernel/lttng/lttng-modules/0012-Improve-the-release-script.patch b/poky/meta/recipes-kernel/lttng/lttng-modules/0012-Improve-the-release-script.patch
deleted file mode 100644
index f5e7fb55a2..0000000000
--- a/poky/meta/recipes-kernel/lttng/lttng-modules/0012-Improve-the-release-script.patch
+++ /dev/null
@@ -1,173 +0,0 @@
-From a241d30fa82ed0be1026f14e36e8bd2b0e65740d Mon Sep 17 00:00:00 2001
-From: Michael Jeanson <mjeanson@efficios.com>
-Date: Mon, 23 Nov 2020 12:15:43 -0500
-Subject: [PATCH 12/16] Improve the release script
-
- * Use git-archive, this removes all custom code to cleanup the repo, it
- can now be used in an unclean repo as the code will be exported from
- a specific tag.
- * Add parameters, this will allow using the script on any machine
- while keeping the default behavior for the maintainer.
-
-Upstream-Status: Backport
-
-Change-Id: I9f29d0e1afdbf475d0bbaeb9946ca3216f725e86
-Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
-Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
----
- .gitattributes | 3 +
- scripts/maintainer/do-release.sh | 121 +++++++++++++++++++++++++------
- 2 files changed, 100 insertions(+), 24 deletions(-)
- create mode 100644 .gitattributes
-
-diff --git a/.gitattributes b/.gitattributes
-new file mode 100644
-index 00000000..7839355a
---- /dev/null
-+++ b/.gitattributes
-@@ -0,0 +1,3 @@
-+.gitattributes export-ignore
-+.gitignore export-ignore
-+.gitreview export-ignore
-diff --git a/scripts/maintainer/do-release.sh b/scripts/maintainer/do-release.sh
-index e0cec167..5e94e136 100755
---- a/scripts/maintainer/do-release.sh
-+++ b/scripts/maintainer/do-release.sh
-@@ -1,37 +1,110 @@
--#!/bin/sh
-+#!/bin/bash
-+
-+set -eu
-+set -o pipefail
-
- # invoke with do-release 2.N.M, or 2.N.M-rcXX
-
--REL=$1
--SRCDIR=~/git/lttng-modules
-+# Default maintainer values
-+SRCDIR="${HOME}/git/lttng-modules"
- # The output files are created in ${HOME}/stable/
--OUTPUTDIR=${HOME}/stable
-+OUTPUTDIR="${HOME}/stable"
-+SIGN="yes"
-+VERBOSE=""
-+
-+usage() {
-+ echo "Usage: do-release.sh [OPTION]... RELEASE"
-+ echo
-+ echo "Mandatory arguments to long options are mandatory for short options too."
-+ echo " -s, --srcdir DIR source directory"
-+ echo " -o, --outputdir DIR output directory, must exist"
-+ echo " -n, --no-sign don't GPG sign the output archive"
-+ echo " -v, --verbose verbose command output"
-+}
-+
-+POS_ARGS=()
-+while [[ $# -gt 0 ]]
-+do
-+ arg="$1"
-+
-+ case $arg in
-+ -n|--no-sign)
-+ SIGN="no"
-+ shift 1
-+ ;;
-+
-+ -s|--srcdir)
-+ SRCDIR="$2"
-+ shift 2
-+ ;;
-+
-+ -o|--outputdir)
-+ OUTPUTDIR="$2"
-+ shift 2
-+ ;;
-+
-+ -v|--verbose)
-+ VERBOSE="-v"
-+ shift 1
-+ ;;
-+
-+ # Catch unknown arguments
-+ -*)
-+ usage
-+ exit 1
-+ ;;
-+
-+ *)
-+ POS_ARGS+=("$1")
-+ shift
-+ ;;
-+ esac
-+done
-+set -- "${POS_ARGS[@]}"
-
--if [ x"$1" = x"" ]; then
-- echo "1 arg : VERSION";
-+REL=${1:-}
-+
-+if [ x"${REL}" = x"" ]; then
-+ usage
- exit 1;
- fi
-
--cd ${OUTPUTDIR}
-+echo "Doing LTTng modules release ${REL}"
-+echo " Source dir: ${SRCDIR}"
-+echo " Output dir: ${OUTPUTDIR}"
-+echo " GPG sign: ${SIGN}"
-
--echo Doing LTTng modules release ${REL}
-+# Make sure the output directory exists
-+if [ ! -d "${OUTPUTDIR}" ]; then
-+ echo "Output directory '${OUTPUTDIR}' doesn't exist."
-+ exit 1
-+fi
-
--mkdir lttng-modules-${REL}
--cd lttng-modules-${REL}
--cp -ax ${SRCDIR}/. .
-+# Make sure the source directory is a git repository
-+if [ ! -r "${SRCDIR}/.git/config" ]; then
-+ echo "Source directory '${SRCDIR}' isn't a git repository."
-+ exit 1
-+fi
-
--#cleanup
--make clean
--git clean -xdf
-+# Set the git repo directory for all further git commands
-+export GIT_DIR="${SRCDIR}/.git/"
-
--for a in \*.orig \*.rej Module.markers Module.symvers; do
-- find . -name "${a}" -exec rm '{}' \;;
--done
--for a in outgoing .tmp_versions .git .pc; do
-- find . -name "${a}" -exec rm -rf '{}' \;;
--done
-+# Check if the release tag exists
-+if ! git rev-parse "refs/tags/v${REL}" >/dev/null 2>&1; then
-+ echo "Release tag 'v${REL}' doesn't exist."
-+ exit 1
-+fi
-+
-+# Generate the compressed tar archive, the git attributes from the tag will be used.
-+git archive $VERBOSE --format=tar --prefix="lttng-modules-${REL}/" "v${REL}" | bzip2 > "${OUTPUTDIR}/lttng-modules-${REL}.tar.bz2"
-
--cd ..
--tar cvfj lttng-modules-${REL}.tar.bz2 lttng-modules-${REL}
--mksums lttng-modules-${REL}.tar.bz2
--signpkg lttng-modules-${REL}.tar.bz2
-+pushd "${OUTPUTDIR}" >/dev/null
-+# Generate the hashes
-+md5sum "lttng-modules-${REL}.tar.bz2" > "lttng-modules-${REL}.tar.bz2.md5"
-+sha256sum "lttng-modules-${REL}.tar.bz2" > "lttng-modules-${REL}.tar.bz2.sha256"
-+
-+if [ "x${SIGN}" = "xyes" ]; then
-+ # Sign with the default key
-+ gpg --armor -b "lttng-modules-${REL}.tar.bz2"
-+fi
-+popd >/dev/null
---
-2.25.1
-
diff --git a/poky/meta/recipes-kernel/lttng/lttng-modules/0013-fix-backport-of-fix-ext4-fast-commit-recovery-path-v.patch b/poky/meta/recipes-kernel/lttng/lttng-modules/0013-fix-backport-of-fix-ext4-fast-commit-recovery-path-v.patch
deleted file mode 100644
index f6288923e1..0000000000
--- a/poky/meta/recipes-kernel/lttng/lttng-modules/0013-fix-backport-of-fix-ext4-fast-commit-recovery-path-v.patch
+++ /dev/null
@@ -1,32 +0,0 @@
-From 59fcc704bea8ecf4bd401e744df41e3331359524 Mon Sep 17 00:00:00 2001
-From: Michael Jeanson <mjeanson@efficios.com>
-Date: Mon, 23 Nov 2020 10:19:52 -0500
-Subject: [PATCH 13/16] fix: backport of fix: ext4: fast commit recovery path
- (v5.10)
-
-Add missing '#endif'.
-
-Upstream-Status: Backport
-
-Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
-Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
-Change-Id: I43349d685d7ed740b32ce992be0c2e7e6f12c799
----
- instrumentation/events/lttng-module/ext4.h | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/instrumentation/events/lttng-module/ext4.h b/instrumentation/events/lttng-module/ext4.h
-index 5fddccad..d454fa6e 100644
---- a/instrumentation/events/lttng-module/ext4.h
-+++ b/instrumentation/events/lttng-module/ext4.h
-@@ -1446,6 +1446,7 @@ LTTNG_TRACEPOINT_EVENT(ext4_load_inode,
- )
- )
- #endif
-+#endif
-
- #if (LINUX_VERSION_CODE >= KERNEL_VERSION(5,5,0))
-
---
-2.25.1
-
diff --git a/poky/meta/recipes-kernel/lttng/lttng-modules/0014-Revert-fix-include-order-for-older-kernels.patch b/poky/meta/recipes-kernel/lttng/lttng-modules/0014-Revert-fix-include-order-for-older-kernels.patch
deleted file mode 100644
index 446391a832..0000000000
--- a/poky/meta/recipes-kernel/lttng/lttng-modules/0014-Revert-fix-include-order-for-older-kernels.patch
+++ /dev/null
@@ -1,32 +0,0 @@
-From b2df75dd378ce5260bb51872e43ac1d76fbf4588 Mon Sep 17 00:00:00 2001
-From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
-Date: Mon, 23 Nov 2020 14:21:51 -0500
-Subject: [PATCH 14/16] Revert "fix: include order for older kernels"
-
-This reverts commit 2ce89d35c9477d8c17c00489c72e1548e16af9b9.
-
-This commit is only needed for master and stable-2.12, because
-stable-2.11 does not include irq_work.h.
-
-Upstream-Status: Backport
-
-Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
----
- lttng-events.h | 1 -
- 1 file changed, 1 deletion(-)
-
-diff --git a/lttng-events.h b/lttng-events.h
-index f5cc57c6..099fd78b 100644
---- a/lttng-events.h
-+++ b/lttng-events.h
-@@ -16,7 +16,6 @@
- #include <linux/kref.h>
- #include <lttng-cpuhotplug.h>
- #include <linux/uuid.h>
--#include <linux/irq_work.h>
- #include <wrapper/uprobes.h>
- #include <lttng-tracer.h>
- #include <lttng-abi.h>
---
-2.25.1
-
diff --git a/poky/meta/recipes-kernel/lttng/lttng-modules/0015-fix-backport-of-fix-tracepoint-Optimize-using-static.patch b/poky/meta/recipes-kernel/lttng/lttng-modules/0015-fix-backport-of-fix-tracepoint-Optimize-using-static.patch
deleted file mode 100644
index 1ff10d48da..0000000000
--- a/poky/meta/recipes-kernel/lttng/lttng-modules/0015-fix-backport-of-fix-tracepoint-Optimize-using-static.patch
+++ /dev/null
@@ -1,46 +0,0 @@
-From f8922333020aaa267e17fb23180b56c4c16ebe9e Mon Sep 17 00:00:00 2001
-From: Michael Jeanson <mjeanson@efficios.com>
-Date: Tue, 24 Nov 2020 11:11:42 -0500
-Subject: [PATCH 15/16] fix: backport of fix: tracepoint: Optimize using
- static_call() (v5.10)
-
-Upstream-Status: Backport
-
-Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
-Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
-Change-Id: I94f2b845f11654e639f03254185980de527a4ca8
----
- lttng-statedump-impl.c | 9 ++++-----
- 1 file changed, 4 insertions(+), 5 deletions(-)
-
-diff --git a/lttng-statedump-impl.c b/lttng-statedump-impl.c
-index e0b19b42..a8c32db5 100644
---- a/lttng-statedump-impl.c
-+++ b/lttng-statedump-impl.c
-@@ -72,10 +72,9 @@ LTTNG_DEFINE_TRACE(lttng_statedump_interrupt,
-
- LTTNG_DEFINE_TRACE(lttng_statedump_file_descriptor,
- TP_PROTO(struct lttng_session *session,
-- struct files_struct *files,
-- int fd, const char *filename,
-+ struct task_struct *p, int fd, const char *filename,
- unsigned int flags, fmode_t fmode),
-- TP_ARGS(session, files, fd, filename, flags, fmode));
-+ TP_ARGS(session, p, fd, filename, flags, fmode));
-
- LTTNG_DEFINE_TRACE(lttng_statedump_start,
- TP_PROTO(struct lttng_session *session),
-@@ -85,8 +84,8 @@ LTTNG_DEFINE_TRACE(lttng_statedump_process_state,
- TP_PROTO(struct lttng_session *session,
- struct task_struct *p,
- int type, int mode, int submode, int status,
-- struct files_struct *files),
-- TP_ARGS(session, p, type, mode, submode, status, files));
-+ struct pid_namespace *pid_ns),
-+ TP_ARGS(session, p, type, mode, submode, status, pid_ns));
-
- LTTNG_DEFINE_TRACE(lttng_statedump_network_interface,
- TP_PROTO(struct lttng_session *session,
---
-2.25.1
-
diff --git a/poky/meta/recipes-kernel/lttng/lttng-modules/0016-fix-adjust-version-range-for-trace_find_free_extent.patch b/poky/meta/recipes-kernel/lttng/lttng-modules/0016-fix-adjust-version-range-for-trace_find_free_extent.patch
deleted file mode 100644
index 59d4d7afa7..0000000000
--- a/poky/meta/recipes-kernel/lttng/lttng-modules/0016-fix-adjust-version-range-for-trace_find_free_extent.patch
+++ /dev/null
@@ -1,30 +0,0 @@
-From 5c3e67d7994097cc75f45258b7518aacb55dde1b Mon Sep 17 00:00:00 2001
-From: Michael Jeanson <mjeanson@efficios.com>
-Date: Tue, 24 Nov 2020 11:27:18 -0500
-Subject: [PATCH 16/16] fix: adjust version range for trace_find_free_extent()
-
-Upstream-Status: Backport
-
-Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
-Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
-Change-Id: Iaa6088092cf58b4d29d55f3ff9586c57ae272302
----
- instrumentation/events/lttng-module/btrfs.h | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/instrumentation/events/lttng-module/btrfs.h b/instrumentation/events/lttng-module/btrfs.h
-index d47f3280..efe7af96 100644
---- a/instrumentation/events/lttng-module/btrfs.h
-+++ b/instrumentation/events/lttng-module/btrfs.h
-@@ -1917,7 +1917,7 @@ LTTNG_TRACEPOINT_EVENT_INSTANCE(btrfs__reserved_extent, btrfs_reserved_extent_f
- #endif /* #else #if (LINUX_VERSION_CODE >= KERNEL_VERSION(4,10,0)) */
-
- #if (LINUX_VERSION_CODE >= KERNEL_VERSION(5,10,0) || \
-- LTTNG_KERNEL_RANGE(5,9,6, 5,10,0) || \
-+ LTTNG_KERNEL_RANGE(5,9,5, 5,10,0) || \
- LTTNG_KERNEL_RANGE(5,4,78, 5,5,0))
- LTTNG_TRACEPOINT_EVENT_MAP(find_free_extent,
-
---
-2.25.1
-
diff --git a/poky/meta/recipes-kernel/lttng/lttng-modules/fix-jbd2-use-the-correct-print-format.patch b/poky/meta/recipes-kernel/lttng/lttng-modules/fix-jbd2-use-the-correct-print-format.patch
new file mode 100644
index 0000000000..b4939188cc
--- /dev/null
+++ b/poky/meta/recipes-kernel/lttng/lttng-modules/fix-jbd2-use-the-correct-print-format.patch
@@ -0,0 +1,147 @@
+fix: jbd2: use the correct print format
+See upstream commit :
+
+ commit d87a7b4c77a997d5388566dd511ca8e6b8e8a0a8
+ Author: Bixuan Cui <cuibixuan@linux.alibaba.com>
+ Date: Tue Oct 11 19:33:44 2022 +0800
+
+ jbd2: use the correct print format
+
+ The print format error was found when using ftrace event:
+ <...>-1406 [000] .... 23599442.895823: jbd2_end_commit: dev 252,8 transaction -1866216965 sync 0 head -1866217368
+ <...>-1406 [000] .... 23599442.896299: jbd2_start_commit: dev 252,8 transaction -1866216964 sync 0
+
+ Use the correct print format for transaction, head and tid.
+
+Change-Id: Ic053f0e0c1e24ebc75bae51d07696aaa5e1c0094
+Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
+Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
+
+Upstream-status: Backport
+Signed-off-by: Steve Sakoman <steve@sakoman.com>
+Note: combines three upstream commits:
+https://github.com/lttng/lttng-modules/commit/b28830a0dcdf95ec3e6b390b4d032667deaad0c0
+https://github.com/lttng/lttng-modules/commit/4fd2615b87b3cac0fd5bdc5fc82db05f6fcfdecf
+https://github.com/lttng/lttng-modules/commit/612c99eb24bf72f4d47d02025e92de8c35ece14e
+
+diff --git a/instrumentation/events/lttng-module/jbd2.h b/instrumentation/events/lttng-module/jbd2.h
+--- a/instrumentation/events/lttng-module/jbd2.h
++++ b/instrumentation/events/lttng-module/jbd2.h
+@@ -29,6 +29,25 @@ LTTNG_TRACEPOINT_EVENT(jbd2_checkpoint,
+ )
+ )
+
++#if (LTTNG_LINUX_VERSION_CODE >= LTTNG_KERNEL_VERSION(6,2,0) \
++ || LTTNG_KERNEL_RANGE(5,4,229, 5,5,0) \
++ || LTTNG_KERNEL_RANGE(5,10,163, 5,11,0) \
++ || LTTNG_KERNEL_RANGE(5,15,87, 5,16,0) \
++ || LTTNG_KERNEL_RANGE(6,0,18, 6,1,0) \
++ || LTTNG_KERNEL_RANGE(6,1,4, 6,2,0))
++LTTNG_TRACEPOINT_EVENT_CLASS(jbd2_commit,
++
++ TP_PROTO(journal_t *journal, transaction_t *commit_transaction),
++
++ TP_ARGS(journal, commit_transaction),
++
++ TP_FIELDS(
++ ctf_integer(dev_t, dev, journal->j_fs_dev->bd_dev)
++ ctf_integer(char, sync_commit, commit_transaction->t_synchronous_commit)
++ ctf_integer(tid_t, transaction, commit_transaction->t_tid)
++ )
++)
++#else
+ LTTNG_TRACEPOINT_EVENT_CLASS(jbd2_commit,
+
+ TP_PROTO(journal_t *journal, transaction_t *commit_transaction),
+@@ -41,6 +60,7 @@ LTTNG_TRACEPOINT_EVENT_CLASS(jbd2_commit
+ ctf_integer(int, transaction, commit_transaction->t_tid)
+ )
+ )
++#endif
+
+ LTTNG_TRACEPOINT_EVENT_INSTANCE(jbd2_commit, jbd2_start_commit,
+
+@@ -79,6 +99,25 @@ LTTNG_TRACEPOINT_EVENT_INSTANCE(jbd2_com
+ )
+ #endif
+
++#if (LTTNG_LINUX_VERSION_CODE >= LTTNG_KERNEL_VERSION(6,2,0) \
++ || LTTNG_KERNEL_RANGE(5,4,229, 5,5,0) \
++ || LTTNG_KERNEL_RANGE(5,10,163, 5,11,0) \
++ || LTTNG_KERNEL_RANGE(5,15,87, 5,16,0) \
++ || LTTNG_KERNEL_RANGE(6,0,18, 6,1,0) \
++ || LTTNG_KERNEL_RANGE(6,1,4, 6,2,0))
++LTTNG_TRACEPOINT_EVENT(jbd2_end_commit,
++ TP_PROTO(journal_t *journal, transaction_t *commit_transaction),
++
++ TP_ARGS(journal, commit_transaction),
++
++ TP_FIELDS(
++ ctf_integer(dev_t, dev, journal->j_fs_dev->bd_dev)
++ ctf_integer(char, sync_commit, commit_transaction->t_synchronous_commit)
++ ctf_integer(tid_t, transaction, commit_transaction->t_tid)
++ ctf_integer(tid_t, head, journal->j_tail_sequence)
++ )
++)
++#else
+ LTTNG_TRACEPOINT_EVENT(jbd2_end_commit,
+ TP_PROTO(journal_t *journal, transaction_t *commit_transaction),
+
+@@ -91,6 +130,7 @@ LTTNG_TRACEPOINT_EVENT(jbd2_end_commit,
+ ctf_integer(int, head, journal->j_tail_sequence)
+ )
+ )
++#endif
+
+ LTTNG_TRACEPOINT_EVENT(jbd2_submit_inode_data,
+ TP_PROTO(struct inode *inode),
+@@ -103,7 +143,48 @@ LTTNG_TRACEPOINT_EVENT(jbd2_submit_inode
+ )
+ )
+
+-#if (LTTNG_LINUX_VERSION_CODE >= LTTNG_KERNEL_VERSION(2,6,32))
++#if (LTTNG_LINUX_VERSION_CODE >= LTTNG_KERNEL_VERSION(6,2,0) \
++ || LTTNG_KERNEL_RANGE(5,4,229, 5,5,0) \
++ || LTTNG_KERNEL_RANGE(5,10,163, 5,11,0) \
++ || LTTNG_KERNEL_RANGE(5,15,87, 5,16,0) \
++ || LTTNG_KERNEL_RANGE(6,0,18, 6,1,0) \
++ || LTTNG_KERNEL_RANGE(6,1,4, 6,2,0))
++LTTNG_TRACEPOINT_EVENT(jbd2_run_stats,
++ TP_PROTO(dev_t dev, tid_t tid,
++ struct transaction_run_stats_s *stats),
++
++ TP_ARGS(dev, tid, stats),
++
++ TP_FIELDS(
++ ctf_integer(dev_t, dev, dev)
++ ctf_integer(tid_t, tid, tid)
++ ctf_integer(unsigned long, wait, stats->rs_wait)
++ ctf_integer(unsigned long, running, stats->rs_running)
++ ctf_integer(unsigned long, locked, stats->rs_locked)
++ ctf_integer(unsigned long, flushing, stats->rs_flushing)
++ ctf_integer(unsigned long, logging, stats->rs_logging)
++ ctf_integer(__u32, handle_count, stats->rs_handle_count)
++ ctf_integer(__u32, blocks, stats->rs_blocks)
++ ctf_integer(__u32, blocks_logged, stats->rs_blocks_logged)
++ )
++)
++
++LTTNG_TRACEPOINT_EVENT(jbd2_checkpoint_stats,
++ TP_PROTO(dev_t dev, tid_t tid,
++ struct transaction_chp_stats_s *stats),
++
++ TP_ARGS(dev, tid, stats),
++
++ TP_FIELDS(
++ ctf_integer(dev_t, dev, dev)
++ ctf_integer(tid_t, tid, tid)
++ ctf_integer(unsigned long, chp_time, stats->cs_chp_time)
++ ctf_integer(__u32, forced_to_close, stats->cs_forced_to_close)
++ ctf_integer(__u32, written, stats->cs_written)
++ ctf_integer(__u32, dropped, stats->cs_dropped)
++ )
++)
++#else
+ LTTNG_TRACEPOINT_EVENT(jbd2_run_stats,
+ TP_PROTO(dev_t dev, unsigned long tid,
+ struct transaction_run_stats_s *stats),
diff --git a/poky/meta/recipes-kernel/lttng/lttng-modules_2.11.6.bb b/poky/meta/recipes-kernel/lttng/lttng-modules_2.11.9.bb
index 76b9f13618..8e9c44241b 100644
--- a/poky/meta/recipes-kernel/lttng/lttng-modules_2.11.6.bb
+++ b/poky/meta/recipes-kernel/lttng/lttng-modules_2.11.9.bb
@@ -12,29 +12,14 @@ COMPATIBLE_HOST = '(x86_64|i.86|powerpc|aarch64|mips|nios2|arm|riscv).*-linux'
SRC_URI = "https://lttng.org/files/${BPN}/${BPN}-${PV}.tar.bz2 \
file://Makefile-Do-not-fail-if-CONFIG_TRACEPOINTS-is-not-en.patch \
file://BUILD_RUNTIME_BUG_ON-vs-gcc7.patch \
- file://0001-fix-strncpy-equals-destination-size-warning.patch \
- file://0002-fix-objtool-Rename-frame.h-objtool.h-v5.10.patch \
- file://0003-fix-btrfs-tracepoints-output-proper-root-owner-for-t.patch \
- file://0004-fix-btrfs-make-ordered-extent-tracepoint-take-btrfs_.patch \
- file://0005-fix-ext4-fast-commit-recovery-path-v5.10.patch \
- file://0006-fix-KVM-x86-Add-intr-vectoring-info-and-error-code-t.patch \
- file://0007-fix-kvm-x86-mmu-Add-TDP-MMU-PF-handler-v5.10.patch \
- file://0008-fix-KVM-x86-mmu-Return-unique-RET_PF_-values-if-the-.patch \
- file://0009-fix-tracepoint-Optimize-using-static_call-v5.10.patch \
- file://0010-fix-include-order-for-older-kernels.patch \
- file://0011-Add-release-maintainer-script.patch \
- file://0012-Improve-the-release-script.patch \
- file://0013-fix-backport-of-fix-ext4-fast-commit-recovery-path-v.patch \
- file://0014-Revert-fix-include-order-for-older-kernels.patch \
- file://0015-fix-backport-of-fix-tracepoint-Optimize-using-static.patch \
- file://0016-fix-adjust-version-range-for-trace_find_free_extent.patch \
file://0017-fix-random-remove-unused-tracepoints-v5.18.patch \
file://0018-fix-random-remove-unused-tracepoints-v5.10-v5.15.patch \
file://0019-fix-random-tracepoints-removed-in-stable-kernels.patch \
+ file://fix-jbd2-use-the-correct-print-format.patch \
"
-SRC_URI[md5sum] = "8ef09fdfcdec669d33f7fc1c1c80f2c4"
-SRC_URI[sha256sum] = "23372811cdcd2ac28ba8c9d09484ed5f9238cfbd0043f8c663ff3875ba9c8566"
+SRC_URI[md5sum] = "cfb23ea6bdaf1ad40c7f9ac098b4016d"
+SRC_URI[sha256sum] = "0c5fe9f8d8dbd1411a3c1c643dcbd0a55577bd15845758b73948e00bc7c387a6"
export INSTALL_MOD_DIR="kernel/lttng-modules"
diff --git a/poky/meta/recipes-kernel/make-mod-scripts/make-mod-scripts_1.0.bb b/poky/meta/recipes-kernel/make-mod-scripts/make-mod-scripts_1.0.bb
index f9df345ca5..32b89bb5ea 100644
--- a/poky/meta/recipes-kernel/make-mod-scripts/make-mod-scripts_1.0.bb
+++ b/poky/meta/recipes-kernel/make-mod-scripts/make-mod-scripts_1.0.bb
@@ -3,7 +3,7 @@ HOMEPAGE = "https://www.yoctoproject.org/"
LICENSE = "GPLv2"
LIC_FILES_CHKSUM = "file://${COREBASE}/meta/files/common-licenses/GPL-2.0;md5=801f80980d171dd6425610833a22dbe6"
-inherit kernel-arch
+inherit kernel-arch linux-kernel-base
inherit pkgconfig
PACKAGE_ARCH = "${MACHINE_ARCH}"
diff --git a/poky/meta/recipes-kernel/wireless-regdb/wireless-regdb_2022.08.12.bb b/poky/meta/recipes-kernel/wireless-regdb/wireless-regdb_2023.02.13.bb
index 7165a9f9b3..295510225a 100644
--- a/poky/meta/recipes-kernel/wireless-regdb/wireless-regdb_2022.08.12.bb
+++ b/poky/meta/recipes-kernel/wireless-regdb/wireless-regdb_2023.02.13.bb
@@ -5,7 +5,7 @@ LICENSE = "ISC"
LIC_FILES_CHKSUM = "file://LICENSE;md5=07c4f6dea3845b02a18dc00c8c87699c"
SRC_URI = "https://www.kernel.org/pub/software/network/${BPN}/${BP}.tar.xz"
-SRC_URI[sha256sum] = "59c8f7d17966db71b27f90e735ee8f5b42ca3527694a8c5e6e9b56bd379c3b84"
+SRC_URI[sha256sum] = "fe81e8a8694dc4753a45087a1c4c7e1b48dee5a59f5f796ce374ea550f0b2e73"
inherit bin_package allarch
diff --git a/poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2022-3109.patch b/poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2022-3109.patch
new file mode 100644
index 0000000000..febf49cff2
--- /dev/null
+++ b/poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2022-3109.patch
@@ -0,0 +1,41 @@
+From 656cb0450aeb73b25d7d26980af342b37ac4c568 Mon Sep 17 00:00:00 2001
+From: Jiasheng Jiang <jiasheng@iscas.ac.cn>
+Date: Tue, 15 Feb 2022 17:58:08 +0800
+Subject: [PATCH] avcodec/vp3: Add missing check for av_malloc
+
+Since the av_malloc() may fail and return NULL pointer,
+it is needed that the 's->edge_emu_buffer' should be checked
+whether the new allocation is success.
+
+Fixes: d14723861b ("VP3: fix decoding of videos with stride > 2048")
+
+CVE: CVE-2022-3109
+Upstream-Status: Backport [https://github.com/FFmpeg/FFmpeg/commit/656cb0450aeb73b25d7d26980af342b37ac4c568]
+Comments: Refreshed hunk
+
+Reviewed-by: Peter Ross <pross@xvid.org>
+Signed-off-by: Jiasheng Jiang <jiasheng@iscas.ac.cn>
+Signed-off-by: Bhabu Bindu <bhabu.bindu@kpit.com>
+---
+ libavcodec/vp3.c | 7 ++++++-
+ 1 file changed, 6 insertions(+), 1 deletion(-)
+
+diff --git a/libavcodec/vp3.c b/libavcodec/vp3.c
+index e9ab54d73677..e2418eb6fa04 100644
+--- a/libavcodec/vp3.c
++++ b/libavcodec/vp3.c
+@@ -2740,8 +2740,13 @@
+ if (ff_thread_get_buffer(avctx, &s->current_frame, AV_GET_BUFFER_FLAG_REF) < 0)
+ goto error;
+
+- if (!s->edge_emu_buffer)
++ if (!s->edge_emu_buffer) {
+ s->edge_emu_buffer = av_malloc(9 * FFABS(s->current_frame.f->linesize[0]));
++ if (!s->edge_emu_buffer) {
++ ret = AVERROR(ENOMEM);
++ goto error;
++ }
++ }
+
+ if (s->keyframe) {
+ if (!s->theora) {
diff --git a/poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2022-3341.patch b/poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2022-3341.patch
new file mode 100644
index 0000000000..fcbd9b3e1b
--- /dev/null
+++ b/poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2022-3341.patch
@@ -0,0 +1,67 @@
+From 9cf652cef49d74afe3d454f27d49eb1a1394951e Mon Sep 17 00:00:00 2001
+From: Jiasheng Jiang <jiasheng@iscas.ac.cn>
+Date: Wed, 23 Feb 2022 10:31:59 +0800
+Subject: [PATCH] avformat/nutdec: Add check for avformat_new_stream
+
+Check for failure of avformat_new_stream() and propagate
+the error code.
+
+Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
+
+CVE: CVE-2022-3341
+
+Upstream-Status: Backport [https://github.com/FFmpeg/FFmpeg/commit/9cf652cef49d74afe3d454f27d49eb1a1394951e]
+
+Comments: Refreshed Hunk
+Signed-off-by: Narpat Mali <narpat.mali@windriver.com>
+Signed-off-by: Bhabu Bindu <bhabu.bindu@kpit.com>
+---
+ libavformat/nutdec.c | 16 ++++++++++++----
+ 1 file changed, 12 insertions(+), 4 deletions(-)
+
+diff --git a/libavformat/nutdec.c b/libavformat/nutdec.c
+index 0a8a700acf..f9ad2c0af1 100644
+--- a/libavformat/nutdec.c
++++ b/libavformat/nutdec.c
+@@ -351,8 +351,12 @@ static int decode_main_header(NUTContext *nut)
+ ret = AVERROR(ENOMEM);
+ goto fail;
+ }
+- for (i = 0; i < stream_count; i++)
+- avformat_new_stream(s, NULL);
++ for (i = 0; i < stream_count; i++) {
++ if (!avformat_new_stream(s, NULL)) {
++ ret = AVERROR(ENOMEM);
++ goto fail;
++ }
++ }
+
+ return 0;
+ fail:
+@@ -793,19 +793,23 @@
+ NUTContext *nut = s->priv_data;
+ AVIOContext *bc = s->pb;
+ int64_t pos;
+- int initialized_stream_count;
++ int initialized_stream_count, ret;
+
+ nut->avf = s;
+
+ /* main header */
+ pos = 0;
++ ret = 0;
+ do {
++ if (ret == AVERROR(ENOMEM))
++ return ret;
++
+ pos = find_startcode(bc, MAIN_STARTCODE, pos) + 1;
+ if (pos < 0 + 1) {
+ av_log(s, AV_LOG_ERROR, "No main startcode found.\n");
+ goto fail;
+ }
+- } while (decode_main_header(nut) < 0);
++ } while ((ret = decode_main_header(nut)) < 0);
+
+ /* stream headers */
+ pos = 0;
+
diff --git a/poky/meta/recipes-multimedia/ffmpeg/ffmpeg_4.2.2.bb b/poky/meta/recipes-multimedia/ffmpeg/ffmpeg_4.2.2.bb
index cbfdbf0563..1e000dddfa 100644
--- a/poky/meta/recipes-multimedia/ffmpeg/ffmpeg_4.2.2.bb
+++ b/poky/meta/recipes-multimedia/ffmpeg/ffmpeg_4.2.2.bb
@@ -30,6 +30,8 @@ SRC_URI = "https://www.ffmpeg.org/releases/${BP}.tar.xz \
file://CVE-2021-3566.patch \
file://CVE-2021-38291.patch \
file://CVE-2022-1475.patch \
+ file://CVE-2022-3109.patch \
+ file://CVE-2022-3341.patch \
"
SRC_URI[md5sum] = "348956fc2faa57a2f79bbb84ded9fbc3"
SRC_URI[sha256sum] = "cb754255ab0ee2ea5f66f8850e1bd6ad5cac1cd855d0a2f4990fb8c668b0d29c"
diff --git a/poky/meta/recipes-multimedia/libtiff/files/CVE-2022-3570_3598.patch b/poky/meta/recipes-multimedia/libtiff/files/CVE-2022-3570_3598.patch
new file mode 100644
index 0000000000..760e20dd2b
--- /dev/null
+++ b/poky/meta/recipes-multimedia/libtiff/files/CVE-2022-3570_3598.patch
@@ -0,0 +1,659 @@
+From 226e336cdceec933da2e9f72b6578c7a1bea450b Mon Sep 17 00:00:00 2001
+From: Su Laus <sulau@freenet.de>
+Date: Thu, 13 Oct 2022 14:33:27 +0000
+Subject: [PATCH] tiffcrop subroutines require a larger buffer (fixes #271,
+
+Upstream-Status: Backport [import from debian http://security.debian.org/debian-security/pool/updates/main/t/tiff/tiff_4.1.0+git191117-2~deb10u7.debian.tar.xz ]
+CVE: CVE-2022-3570 CVE-2022-3598
+Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
+
+Origin: https://gitlab.com/libtiff/libtiff/-/commit/cfbb883bf6ea7bedcb04177cc4e52d304522fdff
+Origin: https://gitlab.com/libtiff/libtiff/-/commit/24d3b2425af24432e0e4e2fd58b33f3b04c4bfa4
+Reviewed-by: Sylvain Beucler <beuc@debian.org>
+Last-Update: 2023-01-17
+
+ #381, #386, #388, #389, #435)
+
+---
+ tools/tiffcrop.c | 209 ++++++++++++++++++++++++++---------------------
+ 1 file changed, 117 insertions(+), 92 deletions(-)
+
+diff --git a/tools/tiffcrop.c b/tools/tiffcrop.c
+index c7877aa..c923920 100644
+--- a/tools/tiffcrop.c
++++ b/tools/tiffcrop.c
+@@ -126,6 +126,7 @@ static char tiffcrop_rev_date[] = "03-03-2010";
+
+ #ifdef HAVE_STDINT_H
+ # include <stdint.h>
++# include <inttypes.h>
+ #endif
+
+ #ifndef HAVE_GETOPT
+@@ -212,6 +213,10 @@ extern int getopt(int argc, char * const argv[], const char *optstring);
+
+ #define TIFF_DIR_MAX 65534
+
++/* Some conversion subroutines require image buffers, which are at least 3 bytes
++ * larger than the necessary size for the image itself. */
++#define NUM_BUFF_OVERSIZE_BYTES 3
++
+ /* Offsets into buffer for margins and fixed width and length segments */
+ struct offset {
+ uint32 tmargin;
+@@ -233,7 +238,7 @@ struct offset {
+ */
+
+ struct buffinfo {
+- uint32 size; /* size of this buffer */
++ size_t size; /* size of this buffer */
+ unsigned char *buffer; /* address of the allocated buffer */
+ };
+
+@@ -771,8 +776,8 @@ static int readContigTilesIntoBuffer (TIFF* in, uint8* buf,
+ uint32 dst_rowsize, shift_width;
+ uint32 bytes_per_sample, bytes_per_pixel;
+ uint32 trailing_bits, prev_trailing_bits;
+- uint32 tile_rowsize = TIFFTileRowSize(in);
+- uint32 src_offset, dst_offset;
++ tmsize_t tile_rowsize = TIFFTileRowSize(in);
++ tmsize_t src_offset, dst_offset;
+ uint32 row_offset, col_offset;
+ uint8 *bufp = (uint8*) buf;
+ unsigned char *src = NULL;
+@@ -822,7 +827,7 @@ static int readContigTilesIntoBuffer (TIFF* in, uint8* buf,
+ TIFFError("readContigTilesIntoBuffer", "Integer overflow when calculating buffer size.");
+ exit(-1);
+ }
+- tilebuf = _TIFFmalloc(tile_buffsize + 3);
++ tilebuf = _TIFFmalloc(tile_buffsize + NUM_BUFF_OVERSIZE_BYTES);
+ if (tilebuf == 0)
+ return 0;
+ tilebuf[tile_buffsize] = 0;
+@@ -986,7 +991,7 @@ static int readSeparateTilesIntoBuffer (TIFF* in, uint8 *obuf,
+ for (sample = 0; (sample < spp) && (sample < MAX_SAMPLES); sample++)
+ {
+ srcbuffs[sample] = NULL;
+- tbuff = (unsigned char *)_TIFFmalloc(tilesize + 8);
++ tbuff = (unsigned char *)_TIFFmalloc(tilesize + NUM_BUFF_OVERSIZE_BYTES);
+ if (!tbuff)
+ {
+ TIFFError ("readSeparateTilesIntoBuffer",
+@@ -1181,7 +1186,8 @@ writeBufferToSeparateStrips (TIFF* out, uint8* buf,
+ }
+ rowstripsize = rowsperstrip * bytes_per_sample * (width + 1);
+
+- obuf = _TIFFmalloc (rowstripsize);
++ /* Add 3 padding bytes for extractContigSamples32bits */
++ obuf = _TIFFmalloc (rowstripsize + NUM_BUFF_OVERSIZE_BYTES);
+ if (obuf == NULL)
+ return 1;
+
+@@ -1194,7 +1200,7 @@ writeBufferToSeparateStrips (TIFF* out, uint8* buf,
+ stripsize = TIFFVStripSize(out, nrows);
+ src = buf + (row * rowsize);
+ total_bytes += stripsize;
+- memset (obuf, '\0', rowstripsize);
++ memset (obuf, '\0',rowstripsize + NUM_BUFF_OVERSIZE_BYTES);
+ if (extractContigSamplesToBuffer(obuf, src, nrows, width, s, spp, bps, dump))
+ {
+ _TIFFfree(obuf);
+@@ -1202,10 +1208,15 @@ writeBufferToSeparateStrips (TIFF* out, uint8* buf,
+ }
+ if ((dump->outfile != NULL) && (dump->level == 1))
+ {
+- dump_info(dump->outfile, dump->format,"",
++ if ((uint64_t)scanlinesize > 0x0ffffffffULL) {
++ dump_info(dump->infile, dump->format, "loadImage",
++ "Attention: scanlinesize %"PRIu64" is larger than UINT32_MAX.\nFollowing dump might be wrong.",
++ (uint64_t)scanlinesize);
++ }
++ dump_info(dump->outfile, dump->format,"",
+ "Sample %2d, Strip: %2d, bytes: %4d, Row %4d, bytes: %4d, Input offset: %6d",
+- s + 1, strip + 1, stripsize, row + 1, scanlinesize, src - buf);
+- dump_buffer(dump->outfile, dump->format, nrows, scanlinesize, row, obuf);
++ s + 1, strip + 1, stripsize, row + 1, (uint32)scanlinesize, src - buf);
++ dump_buffer(dump->outfile, dump->format, nrows, (uint32)scanlinesize, row, obuf);
+ }
+
+ if (TIFFWriteEncodedStrip(out, strip++, obuf, stripsize) < 0)
+@@ -1232,7 +1243,7 @@ static int writeBufferToContigTiles (TIFF* out, uint8* buf, uint32 imagelength,
+ uint32 tl, tw;
+ uint32 row, col, nrow, ncol;
+ uint32 src_rowsize, col_offset;
+- uint32 tile_rowsize = TIFFTileRowSize(out);
++ tmsize_t tile_rowsize = TIFFTileRowSize(out);
+ uint8* bufp = (uint8*) buf;
+ tsize_t tile_buffsize = 0;
+ tsize_t tilesize = TIFFTileSize(out);
+@@ -1275,9 +1286,11 @@ static int writeBufferToContigTiles (TIFF* out, uint8* buf, uint32 imagelength,
+ }
+ src_rowsize = ((imagewidth * spp * bps) + 7U) / 8;
+
+- tilebuf = _TIFFmalloc(tile_buffsize);
++ /* Add 3 padding bytes for extractContigSamples32bits */
++ tilebuf = _TIFFmalloc(tile_buffsize + NUM_BUFF_OVERSIZE_BYTES);
+ if (tilebuf == 0)
+ return 1;
++ memset(tilebuf, 0, tile_buffsize + NUM_BUFF_OVERSIZE_BYTES);
+ for (row = 0; row < imagelength; row += tl)
+ {
+ nrow = (row + tl > imagelength) ? imagelength - row : tl;
+@@ -1323,7 +1336,8 @@ static int writeBufferToSeparateTiles (TIFF* out, uint8* buf, uint32 imagelength
+ uint32 imagewidth, tsample_t spp,
+ struct dump_opts * dump)
+ {
+- tdata_t obuf = _TIFFmalloc(TIFFTileSize(out));
++ /* Add 3 padding bytes for extractContigSamples32bits */
++ tdata_t obuf = _TIFFmalloc(TIFFTileSize(out) + NUM_BUFF_OVERSIZE_BYTES);
+ uint32 tl, tw;
+ uint32 row, col, nrow, ncol;
+ uint32 src_rowsize, col_offset;
+@@ -1333,6 +1347,7 @@ static int writeBufferToSeparateTiles (TIFF* out, uint8* buf, uint32 imagelength
+
+ if (obuf == NULL)
+ return 1;
++ memset(obuf, 0, TIFFTileSize(out) + NUM_BUFF_OVERSIZE_BYTES);
+
+ TIFFGetField(out, TIFFTAG_TILELENGTH, &tl);
+ TIFFGetField(out, TIFFTAG_TILEWIDTH, &tw);
+@@ -1754,14 +1769,14 @@ void process_command_opts (int argc, char *argv[], char *mp, char *mode, uint32
+
+ *opt_offset = '\0';
+ /* convert option to lowercase */
+- end = strlen (opt_ptr);
++ end = (unsigned int)strlen (opt_ptr);
+ for (i = 0; i < end; i++)
+ *(opt_ptr + i) = tolower((int) *(opt_ptr + i));
+ /* Look for dump format specification */
+ if (strncmp(opt_ptr, "for", 3) == 0)
+ {
+ /* convert value to lowercase */
+- end = strlen (opt_offset + 1);
++ end = (unsigned int)strlen (opt_offset + 1);
+ for (i = 1; i <= end; i++)
+ *(opt_offset + i) = tolower((int) *(opt_offset + i));
+ /* check dump format value */
+@@ -2213,6 +2228,8 @@ main(int argc, char* argv[])
+ size_t length;
+ char temp_filename[PATH_MAX + 16]; /* Extra space keeps the compiler from complaining */
+
++ assert(NUM_BUFF_OVERSIZE_BYTES >= 3);
++
+ little_endian = *((unsigned char *)&little_endian) & '1';
+
+ initImageData(&image);
+@@ -3114,13 +3131,13 @@ extractContigSamples32bits (uint8 *in, uint8 *out, uint32 cols,
+ /* If we have a full buffer's worth, write it out */
+ if (ready_bits >= 32)
+ {
+- bytebuff1 = (buff2 >> 56);
++ bytebuff1 = (uint8)(buff2 >> 56);
+ *dst++ = bytebuff1;
+- bytebuff2 = (buff2 >> 48);
++ bytebuff2 = (uint8)(buff2 >> 48);
+ *dst++ = bytebuff2;
+- bytebuff3 = (buff2 >> 40);
++ bytebuff3 = (uint8)(buff2 >> 40);
+ *dst++ = bytebuff3;
+- bytebuff4 = (buff2 >> 32);
++ bytebuff4 = (uint8)(buff2 >> 32);
+ *dst++ = bytebuff4;
+ ready_bits -= 32;
+
+@@ -3495,13 +3512,13 @@ extractContigSamplesShifted32bits (uint8 *in, uint8 *out, uint32 cols,
+ }
+ else /* If we have a full buffer's worth, write it out */
+ {
+- bytebuff1 = (buff2 >> 56);
++ bytebuff1 = (uint8)(buff2 >> 56);
+ *dst++ = bytebuff1;
+- bytebuff2 = (buff2 >> 48);
++ bytebuff2 = (uint8)(buff2 >> 48);
+ *dst++ = bytebuff2;
+- bytebuff3 = (buff2 >> 40);
++ bytebuff3 = (uint8)(buff2 >> 40);
+ *dst++ = bytebuff3;
+- bytebuff4 = (buff2 >> 32);
++ bytebuff4 = (uint8)(buff2 >> 32);
+ *dst++ = bytebuff4;
+ ready_bits -= 32;
+
+@@ -3678,10 +3695,10 @@ extractContigSamplesToTileBuffer(uint8 *out, uint8 *in, uint32 rows, uint32 cols
+ static int readContigStripsIntoBuffer (TIFF* in, uint8* buf)
+ {
+ uint8* bufp = buf;
+- int32 bytes_read = 0;
++ tmsize_t bytes_read = 0;
+ uint32 strip, nstrips = TIFFNumberOfStrips(in);
+- uint32 stripsize = TIFFStripSize(in);
+- uint32 rows = 0;
++ tmsize_t stripsize = TIFFStripSize(in);
++ tmsize_t rows = 0;
+ uint32 rps = TIFFGetFieldDefaulted(in, TIFFTAG_ROWSPERSTRIP, &rps);
+ tsize_t scanline_size = TIFFScanlineSize(in);
+
+@@ -3694,13 +3711,12 @@ static int readContigStripsIntoBuffer (TIFF* in, uint8* buf)
+ bytes_read = TIFFReadEncodedStrip (in, strip, bufp, -1);
+ rows = bytes_read / scanline_size;
+ if ((strip < (nstrips - 1)) && (bytes_read != (int32)stripsize))
+- TIFFError("", "Strip %d: read %lu bytes, strip size %lu",
+- (int)strip + 1, (unsigned long) bytes_read,
+- (unsigned long)stripsize);
++ TIFFError("", "Strip %"PRIu32": read %"PRId64" bytes, strip size %"PRIu64,
++ strip + 1, bytes_read, stripsize);
+
+ if (bytes_read < 0 && !ignore) {
+- TIFFError("", "Error reading strip %lu after %lu rows",
+- (unsigned long) strip, (unsigned long)rows);
++ TIFFError("", "Error reading strip %"PRIu32" after %"PRIu64" rows",
++ strip, rows);
+ return 0;
+ }
+ bufp += stripsize;
+@@ -4164,13 +4180,13 @@ combineSeparateSamples32bits (uint8 *in[], uint8 *out, uint32 cols,
+ /* If we have a full buffer's worth, write it out */
+ if (ready_bits >= 32)
+ {
+- bytebuff1 = (buff2 >> 56);
++ bytebuff1 = (uint8)(buff2 >> 56);
+ *dst++ = bytebuff1;
+- bytebuff2 = (buff2 >> 48);
++ bytebuff2 = (uint8)(buff2 >> 48);
+ *dst++ = bytebuff2;
+- bytebuff3 = (buff2 >> 40);
++ bytebuff3 = (uint8)(buff2 >> 40);
+ *dst++ = bytebuff3;
+- bytebuff4 = (buff2 >> 32);
++ bytebuff4 = (uint8)(buff2 >> 32);
+ *dst++ = bytebuff4;
+ ready_bits -= 32;
+
+@@ -4213,10 +4229,10 @@ combineSeparateSamples32bits (uint8 *in[], uint8 *out, uint32 cols,
+ "Row %3d, Col %3d, Src byte offset %3d bit offset %2d Dst offset %3d",
+ row + 1, col + 1, src_byte, src_bit, dst - out);
+
+- dump_long (dumpfile, format, "Match bits ", matchbits);
++ dump_wide (dumpfile, format, "Match bits ", matchbits);
+ dump_data (dumpfile, format, "Src bits ", src, 4);
+- dump_long (dumpfile, format, "Buff1 bits ", buff1);
+- dump_long (dumpfile, format, "Buff2 bits ", buff2);
++ dump_wide (dumpfile, format, "Buff1 bits ", buff1);
++ dump_wide (dumpfile, format, "Buff2 bits ", buff2);
+ dump_byte (dumpfile, format, "Write bits1", bytebuff1);
+ dump_byte (dumpfile, format, "Write bits2", bytebuff2);
+ dump_info (dumpfile, format, "", "Ready bits: %2d", ready_bits);
+@@ -4689,13 +4705,13 @@ combineSeparateTileSamples32bits (uint8 *in[], uint8 *out, uint32 cols,
+ /* If we have a full buffer's worth, write it out */
+ if (ready_bits >= 32)
+ {
+- bytebuff1 = (buff2 >> 56);
++ bytebuff1 = (uint8)(buff2 >> 56);
+ *dst++ = bytebuff1;
+- bytebuff2 = (buff2 >> 48);
++ bytebuff2 = (uint8)(buff2 >> 48);
+ *dst++ = bytebuff2;
+- bytebuff3 = (buff2 >> 40);
++ bytebuff3 = (uint8)(buff2 >> 40);
+ *dst++ = bytebuff3;
+- bytebuff4 = (buff2 >> 32);
++ bytebuff4 = (uint8)(buff2 >> 32);
+ *dst++ = bytebuff4;
+ ready_bits -= 32;
+
+@@ -4738,10 +4754,10 @@ combineSeparateTileSamples32bits (uint8 *in[], uint8 *out, uint32 cols,
+ "Row %3d, Col %3d, Src byte offset %3d bit offset %2d Dst offset %3d",
+ row + 1, col + 1, src_byte, src_bit, dst - out);
+
+- dump_long (dumpfile, format, "Match bits ", matchbits);
++ dump_wide (dumpfile, format, "Match bits ", matchbits);
+ dump_data (dumpfile, format, "Src bits ", src, 4);
+- dump_long (dumpfile, format, "Buff1 bits ", buff1);
+- dump_long (dumpfile, format, "Buff2 bits ", buff2);
++ dump_wide (dumpfile, format, "Buff1 bits ", buff1);
++ dump_wide (dumpfile, format, "Buff2 bits ", buff2);
+ dump_byte (dumpfile, format, "Write bits1", bytebuff1);
+ dump_byte (dumpfile, format, "Write bits2", bytebuff2);
+ dump_info (dumpfile, format, "", "Ready bits: %2d", ready_bits);
+@@ -4764,7 +4780,7 @@ static int readSeparateStripsIntoBuffer (TIFF *in, uint8 *obuf, uint32 length,
+ {
+ int i, bytes_per_sample, bytes_per_pixel, shift_width, result = 1;
+ uint32 j;
+- int32 bytes_read = 0;
++ tmsize_t bytes_read = 0;
+ uint16 bps = 0, planar;
+ uint32 nstrips;
+ uint32 strips_per_sample;
+@@ -4830,7 +4846,7 @@ static int readSeparateStripsIntoBuffer (TIFF *in, uint8 *obuf, uint32 length,
+ for (s = 0; (s < spp) && (s < MAX_SAMPLES); s++)
+ {
+ srcbuffs[s] = NULL;
+- buff = _TIFFmalloc(stripsize + 3);
++ buff = _TIFFmalloc(stripsize + NUM_BUFF_OVERSIZE_BYTES);
+ if (!buff)
+ {
+ TIFFError ("readSeparateStripsIntoBuffer",
+@@ -4853,7 +4869,7 @@ static int readSeparateStripsIntoBuffer (TIFF *in, uint8 *obuf, uint32 length,
+ buff = srcbuffs[s];
+ strip = (s * strips_per_sample) + j;
+ bytes_read = TIFFReadEncodedStrip (in, strip, buff, stripsize);
+- rows_this_strip = bytes_read / src_rowsize;
++ rows_this_strip = (uint32)(bytes_read / src_rowsize);
+ if (bytes_read < 0 && !ignore)
+ {
+ TIFFError(TIFFFileName(in),
+@@ -5860,13 +5876,14 @@ loadImage(TIFF* in, struct image_data *image, struct dump_opts *dump, unsigned c
+ uint16 input_compression = 0, input_photometric = 0;
+ uint16 subsampling_horiz, subsampling_vert;
+ uint32 width = 0, length = 0;
+- uint32 stsize = 0, tlsize = 0, buffsize = 0, scanlinesize = 0;
++ tmsize_t stsize = 0, tlsize = 0, buffsize = 0;
++ tmsize_t scanlinesize = 0;
+ uint32 tw = 0, tl = 0; /* Tile width and length */
+- uint32 tile_rowsize = 0;
++ tmsize_t tile_rowsize = 0;
+ unsigned char *read_buff = NULL;
+ unsigned char *new_buff = NULL;
+ int readunit = 0;
+- static uint32 prev_readsize = 0;
++ static tmsize_t prev_readsize = 0;
+
+ TIFFGetFieldDefaulted(in, TIFFTAG_BITSPERSAMPLE, &bps);
+ TIFFGetFieldDefaulted(in, TIFFTAG_SAMPLESPERPIXEL, &spp);
+@@ -6168,7 +6185,7 @@ loadImage(TIFF* in, struct image_data *image, struct dump_opts *dump, unsigned c
+ TIFFError("loadImage", "Unable to allocate/reallocate read buffer");
+ return (-1);
+ }
+- read_buff = (unsigned char *)_TIFFmalloc(buffsize+3);
++ read_buff = (unsigned char *)_TIFFmalloc(buffsize + NUM_BUFF_OVERSIZE_BYTES);
+ }
+ else
+ {
+@@ -6179,11 +6196,11 @@ loadImage(TIFF* in, struct image_data *image, struct dump_opts *dump, unsigned c
+ TIFFError("loadImage", "Unable to allocate/reallocate read buffer");
+ return (-1);
+ }
+- new_buff = _TIFFrealloc(read_buff, buffsize+3);
++ new_buff = _TIFFrealloc(read_buff, buffsize + NUM_BUFF_OVERSIZE_BYTES);
+ if (!new_buff)
+ {
+ free (read_buff);
+- read_buff = (unsigned char *)_TIFFmalloc(buffsize+3);
++ read_buff = (unsigned char *)_TIFFmalloc(buffsize + NUM_BUFF_OVERSIZE_BYTES);
+ }
+ else
+ read_buff = new_buff;
+@@ -6256,8 +6273,13 @@ loadImage(TIFF* in, struct image_data *image, struct dump_opts *dump, unsigned c
+ dump_info (dump->infile, dump->format, "",
+ "Bits per sample %d, Samples per pixel %d", bps, spp);
+
++ if ((uint64_t)scanlinesize > 0x0ffffffffULL) {
++ dump_info(dump->infile, dump->format, "loadImage",
++ "Attention: scanlinesize %"PRIu64" is larger than UINT32_MAX.\nFollowing dump might be wrong.",
++ (uint64_t)scanlinesize);
++ }
+ for (i = 0; i < length; i++)
+- dump_buffer(dump->infile, dump->format, 1, scanlinesize,
++ dump_buffer(dump->infile, dump->format, 1, (uint32)scanlinesize,
+ i, read_buff + (i * scanlinesize));
+ }
+ return (0);
+@@ -7277,13 +7299,13 @@ writeSingleSection(TIFF *in, TIFF *out, struct image_data *image,
+ if (TIFFGetField(in, TIFFTAG_NUMBEROFINKS, &ninks)) {
+ TIFFSetField(out, TIFFTAG_NUMBEROFINKS, ninks);
+ if (TIFFGetField(in, TIFFTAG_INKNAMES, &inknames)) {
+- int inknameslen = strlen(inknames) + 1;
++ int inknameslen = (int)strlen(inknames) + 1;
+ const char* cp = inknames;
+ while (ninks > 1) {
+ cp = strchr(cp, '\0');
+ if (cp) {
+ cp++;
+- inknameslen += (strlen(cp) + 1);
++ inknameslen += ((int)strlen(cp) + 1);
+ }
+ ninks--;
+ }
+@@ -7346,23 +7368,23 @@ createImageSection(uint32 sectsize, unsigned char **sect_buff_ptr)
+
+ if (!sect_buff)
+ {
+- sect_buff = (unsigned char *)_TIFFmalloc(sectsize);
++ sect_buff = (unsigned char *)_TIFFmalloc(sectsize + NUM_BUFF_OVERSIZE_BYTES);
+ if (!sect_buff)
+ {
+ TIFFError("createImageSection", "Unable to allocate/reallocate section buffer");
+ return (-1);
+ }
+- _TIFFmemset(sect_buff, 0, sectsize);
++ _TIFFmemset(sect_buff, 0, sectsize + NUM_BUFF_OVERSIZE_BYTES);
+ }
+ else
+ {
+ if (prev_sectsize < sectsize)
+ {
+- new_buff = _TIFFrealloc(sect_buff, sectsize);
++ new_buff = _TIFFrealloc(sect_buff, sectsize + NUM_BUFF_OVERSIZE_BYTES);
+ if (!new_buff)
+ {
+ free (sect_buff);
+- sect_buff = (unsigned char *)_TIFFmalloc(sectsize);
++ sect_buff = (unsigned char *)_TIFFmalloc(sectsize + NUM_BUFF_OVERSIZE_BYTES);
+ }
+ else
+ sect_buff = new_buff;
+@@ -7372,7 +7394,7 @@ createImageSection(uint32 sectsize, unsigned char **sect_buff_ptr)
+ TIFFError("createImageSection", "Unable to allocate/reallocate section buffer");
+ return (-1);
+ }
+- _TIFFmemset(sect_buff, 0, sectsize);
++ _TIFFmemset(sect_buff, 0, sectsize + NUM_BUFF_OVERSIZE_BYTES);
+ }
+ }
+
+@@ -7403,17 +7425,17 @@ processCropSelections(struct image_data *image, struct crop_mask *crop,
+ cropsize = crop->bufftotal;
+ crop_buff = seg_buffs[0].buffer;
+ if (!crop_buff)
+- crop_buff = (unsigned char *)_TIFFmalloc(cropsize);
++ crop_buff = (unsigned char *)_TIFFmalloc(cropsize + NUM_BUFF_OVERSIZE_BYTES);
+ else
+ {
+ prev_cropsize = seg_buffs[0].size;
+ if (prev_cropsize < cropsize)
+ {
+- next_buff = _TIFFrealloc(crop_buff, cropsize);
++ next_buff = _TIFFrealloc(crop_buff, cropsize + NUM_BUFF_OVERSIZE_BYTES);
+ if (! next_buff)
+ {
+ _TIFFfree (crop_buff);
+- crop_buff = (unsigned char *)_TIFFmalloc(cropsize);
++ crop_buff = (unsigned char *)_TIFFmalloc(cropsize + NUM_BUFF_OVERSIZE_BYTES);
+ }
+ else
+ crop_buff = next_buff;
+@@ -7426,7 +7448,7 @@ processCropSelections(struct image_data *image, struct crop_mask *crop,
+ return (-1);
+ }
+
+- _TIFFmemset(crop_buff, 0, cropsize);
++ _TIFFmemset(crop_buff, 0, cropsize + NUM_BUFF_OVERSIZE_BYTES);
+ seg_buffs[0].buffer = crop_buff;
+ seg_buffs[0].size = cropsize;
+
+@@ -7505,17 +7527,17 @@ processCropSelections(struct image_data *image, struct crop_mask *crop,
+ cropsize = crop->bufftotal;
+ crop_buff = seg_buffs[i].buffer;
+ if (!crop_buff)
+- crop_buff = (unsigned char *)_TIFFmalloc(cropsize);
++ crop_buff = (unsigned char *)_TIFFmalloc(cropsize + NUM_BUFF_OVERSIZE_BYTES);
+ else
+ {
+ prev_cropsize = seg_buffs[0].size;
+ if (prev_cropsize < cropsize)
+ {
+- next_buff = _TIFFrealloc(crop_buff, cropsize);
++ next_buff = _TIFFrealloc(crop_buff, cropsize + NUM_BUFF_OVERSIZE_BYTES);
+ if (! next_buff)
+ {
+ _TIFFfree (crop_buff);
+- crop_buff = (unsigned char *)_TIFFmalloc(cropsize);
++ crop_buff = (unsigned char *)_TIFFmalloc(cropsize + NUM_BUFF_OVERSIZE_BYTES);
+ }
+ else
+ crop_buff = next_buff;
+@@ -7528,7 +7550,7 @@ processCropSelections(struct image_data *image, struct crop_mask *crop,
+ return (-1);
+ }
+
+- _TIFFmemset(crop_buff, 0, cropsize);
++ _TIFFmemset(crop_buff, 0, cropsize + NUM_BUFF_OVERSIZE_BYTES);
+ seg_buffs[i].buffer = crop_buff;
+ seg_buffs[i].size = cropsize;
+
+@@ -7641,24 +7663,24 @@ createCroppedImage(struct image_data *image, struct crop_mask *crop,
+ crop_buff = *crop_buff_ptr;
+ if (!crop_buff)
+ {
+- crop_buff = (unsigned char *)_TIFFmalloc(cropsize);
++ crop_buff = (unsigned char *)_TIFFmalloc(cropsize + NUM_BUFF_OVERSIZE_BYTES);
+ if (!crop_buff)
+ {
+ TIFFError("createCroppedImage", "Unable to allocate/reallocate crop buffer");
+ return (-1);
+ }
+- _TIFFmemset(crop_buff, 0, cropsize);
++ _TIFFmemset(crop_buff, 0, cropsize + NUM_BUFF_OVERSIZE_BYTES);
+ prev_cropsize = cropsize;
+ }
+ else
+ {
+ if (prev_cropsize < cropsize)
+ {
+- new_buff = _TIFFrealloc(crop_buff, cropsize);
++ new_buff = _TIFFrealloc(crop_buff, cropsize + NUM_BUFF_OVERSIZE_BYTES);
+ if (!new_buff)
+ {
+ free (crop_buff);
+- crop_buff = (unsigned char *)_TIFFmalloc(cropsize);
++ crop_buff = (unsigned char *)_TIFFmalloc(cropsize + NUM_BUFF_OVERSIZE_BYTES);
+ }
+ else
+ crop_buff = new_buff;
+@@ -7667,7 +7689,7 @@ createCroppedImage(struct image_data *image, struct crop_mask *crop,
+ TIFFError("createCroppedImage", "Unable to allocate/reallocate crop buffer");
+ return (-1);
+ }
+- _TIFFmemset(crop_buff, 0, cropsize);
++ _TIFFmemset(crop_buff, 0, cropsize + NUM_BUFF_OVERSIZE_BYTES);
+ }
+ }
+
+@@ -7965,13 +7987,13 @@ writeCroppedImage(TIFF *in, TIFF *out, struct image_data *image,
+ if (TIFFGetField(in, TIFFTAG_NUMBEROFINKS, &ninks)) {
+ TIFFSetField(out, TIFFTAG_NUMBEROFINKS, ninks);
+ if (TIFFGetField(in, TIFFTAG_INKNAMES, &inknames)) {
+- int inknameslen = strlen(inknames) + 1;
++ int inknameslen = (int)strlen(inknames) + 1;
+ const char* cp = inknames;
+ while (ninks > 1) {
+ cp = strchr(cp, '\0');
+ if (cp) {
+ cp++;
+- inknameslen += (strlen(cp) + 1);
++ inknameslen += ((int)strlen(cp) + 1);
+ }
+ ninks--;
+ }
+@@ -8356,13 +8378,13 @@ rotateContigSamples32bits(uint16 rotation, uint16 spp, uint16 bps, uint32 width,
+ }
+ else /* If we have a full buffer's worth, write it out */
+ {
+- bytebuff1 = (buff2 >> 56);
++ bytebuff1 = (uint8)(buff2 >> 56);
+ *dst++ = bytebuff1;
+- bytebuff2 = (buff2 >> 48);
++ bytebuff2 = (uint8)(buff2 >> 48);
+ *dst++ = bytebuff2;
+- bytebuff3 = (buff2 >> 40);
++ bytebuff3 = (uint8)(buff2 >> 40);
+ *dst++ = bytebuff3;
+- bytebuff4 = (buff2 >> 32);
++ bytebuff4 = (uint8)(buff2 >> 32);
+ *dst++ = bytebuff4;
+ ready_bits -= 32;
+
+@@ -8431,12 +8453,13 @@ rotateImage(uint16 rotation, struct image_data *image, uint32 *img_width,
+ return (-1);
+ }
+
+- if (!(rbuff = (unsigned char *)_TIFFmalloc(buffsize)))
++ /* Add 3 padding bytes for extractContigSamplesShifted32bits */
++ if (!(rbuff = (unsigned char *)_TIFFmalloc(buffsize + NUM_BUFF_OVERSIZE_BYTES)))
+ {
+- TIFFError("rotateImage", "Unable to allocate rotation buffer of %1u bytes", buffsize);
++ TIFFError("rotateImage", "Unable to allocate rotation buffer of %1u bytes", buffsize + NUM_BUFF_OVERSIZE_BYTES);
+ return (-1);
+ }
+- _TIFFmemset(rbuff, '\0', buffsize);
++ _TIFFmemset(rbuff, '\0', buffsize + NUM_BUFF_OVERSIZE_BYTES);
+
+ ibuff = *ibuff_ptr;
+ switch (rotation)
+@@ -8964,13 +8987,13 @@ reverseSamples32bits (uint16 spp, uint16 bps, uint32 width,
+ }
+ else /* If we have a full buffer's worth, write it out */
+ {
+- bytebuff1 = (buff2 >> 56);
++ bytebuff1 = (uint8)(buff2 >> 56);
+ *dst++ = bytebuff1;
+- bytebuff2 = (buff2 >> 48);
++ bytebuff2 = (uint8)(buff2 >> 48);
+ *dst++ = bytebuff2;
+- bytebuff3 = (buff2 >> 40);
++ bytebuff3 = (uint8)(buff2 >> 40);
+ *dst++ = bytebuff3;
+- bytebuff4 = (buff2 >> 32);
++ bytebuff4 = (uint8)(buff2 >> 32);
+ *dst++ = bytebuff4;
+ ready_bits -= 32;
+
+@@ -9061,12 +9084,13 @@ mirrorImage(uint16 spp, uint16 bps, uint16 mirror, uint32 width, uint32 length,
+ {
+ case MIRROR_BOTH:
+ case MIRROR_VERT:
+- line_buff = (unsigned char *)_TIFFmalloc(rowsize);
++ line_buff = (unsigned char *)_TIFFmalloc(rowsize + NUM_BUFF_OVERSIZE_BYTES);
+ if (line_buff == NULL)
+ {
+- TIFFError ("mirrorImage", "Unable to allocate mirror line buffer of %1u bytes", rowsize);
++ TIFFError ("mirrorImage", "Unable to allocate mirror line buffer of %1u bytes", rowsize + NUM_BUFF_OVERSIZE_BYTES);
+ return (-1);
+ }
++ _TIFFmemset(line_buff, '\0', rowsize + NUM_BUFF_OVERSIZE_BYTES);
+
+ dst = ibuff + (rowsize * (length - 1));
+ for (row = 0; row < length / 2; row++)
+@@ -9098,11 +9122,12 @@ mirrorImage(uint16 spp, uint16 bps, uint16 mirror, uint32 width, uint32 length,
+ }
+ else
+ { /* non 8 bit per sample data */
+- if (!(line_buff = (unsigned char *)_TIFFmalloc(rowsize + 1)))
++ if (!(line_buff = (unsigned char *)_TIFFmalloc(rowsize + NUM_BUFF_OVERSIZE_BYTES)))
+ {
+ TIFFError("mirrorImage", "Unable to allocate mirror line buffer");
+ return (-1);
+ }
++ _TIFFmemset(line_buff, '\0', rowsize + NUM_BUFF_OVERSIZE_BYTES);
+ bytes_per_sample = (bps + 7) / 8;
+ bytes_per_pixel = ((bps * spp) + 7) / 8;
+ if (bytes_per_pixel < (bytes_per_sample + 1))
+@@ -9114,7 +9139,7 @@ mirrorImage(uint16 spp, uint16 bps, uint16 mirror, uint32 width, uint32 length,
+ {
+ row_offset = row * rowsize;
+ src = ibuff + row_offset;
+- _TIFFmemset (line_buff, '\0', rowsize);
++ _TIFFmemset (line_buff, '\0', rowsize + NUM_BUFF_OVERSIZE_BYTES);
+ switch (shift_width)
+ {
+ case 1: if (reverseSamples16bits(spp, bps, width, src, line_buff))
diff --git a/poky/meta/recipes-multimedia/libtiff/files/CVE-2022-3597_3626_3627.patch b/poky/meta/recipes-multimedia/libtiff/files/CVE-2022-3597_3626_3627.patch
new file mode 100644
index 0000000000..18a4b4e0ff
--- /dev/null
+++ b/poky/meta/recipes-multimedia/libtiff/files/CVE-2022-3597_3626_3627.patch
@@ -0,0 +1,123 @@
+From f7c06c395daf1b2c52ab431e00db2d9fc2ac993e Mon Sep 17 00:00:00 2001
+From: Su Laus <sulau@freenet.de>
+Date: Tue, 10 May 2022 20:03:17 +0000
+Subject: [PATCH] tiffcrop: Fix issue #330 and some more from 320 to 349
+
+Upstream-Status: Backport [import from debian http://security.debian.org/debian-security/pool/updates/main/t/tiff/tiff_4.1.0+git191117-2~deb10u7.debian.tar.xz ]
+CVE: CVE-2022-3597 CVE-2022-3626 CVE-2022-3627
+Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
+
+Origin: https://gitlab.com/libtiff/libtiff/-/commit/e319508023580e2f70e6e626f745b5b2a1707313
+Origin: https://gitlab.com/libtiff/libtiff/-/commit/8fe3735942ea1d90d8cef843b55b3efe8ab6feaf
+Origin: https://gitlab.com/libtiff/libtiff/-/commit/bad48e90b410df32172006c7876da449ba62cdba
+Origin: https://gitlab.com/libtiff/libtiff/-/commit/236b7191f04c60d09ee836ae13b50f812c841047
+Reviewed-by: Sylvain Beucler <beuc@debian.org>
+Last-Update: 2023-01-17
+
+---
+ tools/tiffcrop.c | 50 ++++++++++++++++++++++++++++++++++++++++--------
+ 1 file changed, 42 insertions(+), 8 deletions(-)
+
+diff --git a/tools/tiffcrop.c b/tools/tiffcrop.c
+index c923920..a0789a3 100644
+--- a/tools/tiffcrop.c
++++ b/tools/tiffcrop.c
+@@ -103,7 +103,12 @@
+ * selects which functions dump data, with higher numbers selecting
+ * lower level, scanline level routines. Debug reports a limited set
+ * of messages to monitor progess without enabling dump logs.
+- */
++ *
++ * Note 1: The (-X|-Y), -Z, -z and -S options are mutually exclusive.
++ * In no case should the options be applied to a given selection successively.
++ * Note 2: Any of the -X, -Y, -Z and -z options together with other PAGE_MODE_x options
++ * such as -H, -V, -P, -J or -K are not supported and may cause buffer overflows.
++ */
+
+ static char tiffcrop_version_id[] = "2.4.1";
+ static char tiffcrop_rev_date[] = "03-03-2010";
+@@ -176,12 +181,12 @@ extern int getopt(int argc, char * const argv[], const char *optstring);
+ #define ROTATECW_270 32
+ #define ROTATE_ANY (ROTATECW_90 | ROTATECW_180 | ROTATECW_270)
+
+-#define CROP_NONE 0
+-#define CROP_MARGINS 1
+-#define CROP_WIDTH 2
+-#define CROP_LENGTH 4
+-#define CROP_ZONES 8
+-#define CROP_REGIONS 16
++#define CROP_NONE 0 /* "-S" -> Page_MODE_ROWSCOLS and page->rows/->cols != 0 */
++#define CROP_MARGINS 1 /* "-m" */
++#define CROP_WIDTH 2 /* "-X" */
++#define CROP_LENGTH 4 /* "-Y" */
++#define CROP_ZONES 8 /* "-Z" */
++#define CROP_REGIONS 16 /* "-z" */
+ #define CROP_ROTATE 32
+ #define CROP_MIRROR 64
+ #define CROP_INVERT 128
+@@ -323,7 +328,7 @@ struct crop_mask {
+ #define PAGE_MODE_RESOLUTION 1
+ #define PAGE_MODE_PAPERSIZE 2
+ #define PAGE_MODE_MARGINS 4
+-#define PAGE_MODE_ROWSCOLS 8
++#define PAGE_MODE_ROWSCOLS 8 /* for -S option */
+
+ #define INVERT_DATA_ONLY 10
+ #define INVERT_DATA_AND_TAG 11
+@@ -754,6 +759,12 @@ static char* usage_info[] = {
+ " The four debug/dump options are independent, though it makes little sense to",
+ " specify a dump file without specifying a detail level.",
+ " ",
++"Note 1: The (-X|-Y), -Z, -z and -S options are mutually exclusive.",
++" In no case should the options be applied to a given selection successively.",
++" ",
++"Note 2: Any of the -X, -Y, -Z and -z options together with other PAGE_MODE_x options",
++" such as - H, -V, -P, -J or -K are not supported and may cause buffer overflows.",
++" ",
+ NULL
+ };
+
+@@ -2112,6 +2123,27 @@ void process_command_opts (int argc, char *argv[], char *mp, char *mode, uint32
+ /*NOTREACHED*/
+ }
+ }
++ /*-- Check for not allowed combinations (e.g. -X, -Y and -Z, -z and -S are mutually exclusive) --*/
++ char XY, Z, R, S;
++ XY = ((crop_data->crop_mode & CROP_WIDTH) || (crop_data->crop_mode & CROP_LENGTH)) ? 1 : 0;
++ Z = (crop_data->crop_mode & CROP_ZONES) ? 1 : 0;
++ R = (crop_data->crop_mode & CROP_REGIONS) ? 1 : 0;
++ S = (page->mode & PAGE_MODE_ROWSCOLS) ? 1 : 0;
++ if (XY + Z + R + S > 1) {
++ TIFFError("tiffcrop input error", "The crop options(-X|-Y), -Z, -z and -S are mutually exclusive.->exit");
++ exit(EXIT_FAILURE);
++ }
++
++ /* Check for not allowed combination:
++ * Any of the -X, -Y, -Z and -z options together with other PAGE_MODE_x options
++ * such as -H, -V, -P, -J or -K are not supported and may cause buffer overflows.
++. */
++ if ((XY + Z + R > 0) && page->mode != PAGE_MODE_NONE) {
++ TIFFError("tiffcrop input error",
++ "Any of the crop options -X, -Y, -Z and -z together with other PAGE_MODE_x options such as - H, -V, -P, -J or -K is not supported and may cause buffer overflows..->exit");
++ exit(EXIT_FAILURE);
++ }
++
+ } /* end process_command_opts */
+
+ /* Start a new output file if one has not been previously opened or
+@@ -2384,6 +2416,7 @@ main(int argc, char* argv[])
+ exit (-1);
+ }
+
++ /* Crop input image and copy zones and regions from input image into seg_buffs or crop_buff. */
+ if (crop.selections > 0)
+ {
+ if (processCropSelections(&image, &crop, &read_buff, seg_buffs))
+@@ -2400,6 +2433,7 @@ main(int argc, char* argv[])
+ exit (-1);
+ }
+ }
++ /* Format and write selected image parts to output file(s). */
+ if (page.mode == PAGE_MODE_NONE)
+ { /* Whole image or sections not based on output page size */
+ if (crop.selections > 0)
diff --git a/poky/meta/recipes-multimedia/libtiff/files/CVE-2022-3599.patch b/poky/meta/recipes-multimedia/libtiff/files/CVE-2022-3599.patch
new file mode 100644
index 0000000000..9689a99638
--- /dev/null
+++ b/poky/meta/recipes-multimedia/libtiff/files/CVE-2022-3599.patch
@@ -0,0 +1,277 @@
+From 01bca7e6f608da7696949fca6acda78b9935ba19 Mon Sep 17 00:00:00 2001
+From: Su_Laus <sulau@freenet.de>
+Date: Tue, 30 Aug 2022 16:56:48 +0200
+Subject: [PATCH] Revised handling of TIFFTAG_INKNAMES and related
+
+Upstream-Status: Backport [import from debian http://security.debian.org/debian-security/pool/updates/main/t/tiff/tiff_4.1.0+git191117-2~deb10u7.debian.tar.xz ]
+CVE: CVE-2022-3599
+Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
+
+Origin: https://gitlab.com/libtiff/libtiff/-/commit/e813112545942107551433d61afd16ac094ff246
+Reviewed-by: Sylvain Beucler <beuc@debian.org>
+Last-Update: 2023-01-17
+
+ TIFFTAG_NUMBEROFINKS value
+
+In order to solve the buffer overflow issues related to TIFFTAG_INKNAMES and related TIFFTAG_NUMBEROFINKS value, a revised handling of those tags within LibTiff is proposed:
+
+Behaviour for writing:
+ `NumberOfInks` MUST fit to the number of inks in the `InkNames` string.
+ `NumberOfInks` is automatically set when `InkNames` is set.
+ If `NumberOfInks` is different to the number of inks within `InkNames` string, that will be corrected and a warning is issued.
+ If `NumberOfInks` is not equal to samplesperpixel only a warning will be issued.
+
+Behaviour for reading:
+ When reading `InkNames` from a TIFF file, the `NumberOfInks` will be set automatically to the number of inks in `InkNames` string.
+ If `NumberOfInks` is different to the number of inks within `InkNames` string, that will be corrected and a warning is issued.
+ If `NumberOfInks` is not equal to samplesperpixel only a warning will be issued.
+
+This allows the safe use of the NumberOfInks value to read out the InkNames without buffer overflow
+
+This MR will close the following issues: #149, #150, #152, #168 (to be checked), #250, #269, #398 and #456.
+
+It also fixes the old bug at http://bugzilla.maptools.org/show_bug.cgi?id=2599, for which the limitation of `NumberOfInks = SPP` was introduced, which is in my opinion not necessary and does not solve the general issue.
+
+---
+ libtiff/tif_dir.c | 120 ++++++++++++++++++++++++-----------------
+ libtiff/tif_dir.h | 2 +
+ libtiff/tif_dirinfo.c | 2 +-
+ libtiff/tif_dirwrite.c | 5 ++
+ libtiff/tif_print.c | 4 ++
+ 5 files changed, 83 insertions(+), 50 deletions(-)
+
+diff --git a/libtiff/tif_dir.c b/libtiff/tif_dir.c
+index 39aeeb4..9d8267a 100644
+--- a/libtiff/tif_dir.c
++++ b/libtiff/tif_dir.c
+@@ -29,6 +29,7 @@
+ * (and also some miscellaneous stuff)
+ */
+ #include "tiffiop.h"
++# include <inttypes.h>
+
+ /*
+ * These are used in the backwards compatibility code...
+@@ -137,32 +138,30 @@ setExtraSamples(TIFF* tif, va_list ap, uint32* v)
+ }
+
+ /*
+- * Confirm we have "samplesperpixel" ink names separated by \0. Returns
++ * Count ink names separated by \0. Returns
+ * zero if the ink names are not as expected.
+ */
+-static uint32
+-checkInkNamesString(TIFF* tif, uint32 slen, const char* s)
++static uint16
++countInkNamesString(TIFF *tif, uint32 slen, const char *s)
+ {
+- TIFFDirectory* td = &tif->tif_dir;
+- uint16 i = td->td_samplesperpixel;
++ uint16 i = 0;
++ const char *ep = s + slen;
++ const char *cp = s;
+
+ if (slen > 0) {
+- const char* ep = s+slen;
+- const char* cp = s;
+- for (; i > 0; i--) {
++ do {
+ for (; cp < ep && *cp != '\0'; cp++) {}
+ if (cp >= ep)
+ goto bad;
+ cp++; /* skip \0 */
+- }
+- return ((uint32)(cp-s));
++ i++;
++ } while (cp < ep);
++ return (i);
+ }
+ bad:
+ TIFFErrorExt(tif->tif_clientdata, "TIFFSetField",
+- "%s: Invalid InkNames value; expecting %d names, found %d",
+- tif->tif_name,
+- td->td_samplesperpixel,
+- td->td_samplesperpixel-i);
++ "%s: Invalid InkNames value; no NUL at given buffer end location %"PRIu32", after %"PRIu16" ink",
++ tif->tif_name, slen, i);
+ return (0);
+ }
+
+@@ -476,13 +475,61 @@ _TIFFVSetField(TIFF* tif, uint32 tag, va_list ap)
+ _TIFFsetFloatArray(&td->td_refblackwhite, va_arg(ap, float*), 6);
+ break;
+ case TIFFTAG_INKNAMES:
+- v = (uint16) va_arg(ap, uint16_vap);
+- s = va_arg(ap, char*);
+- v = checkInkNamesString(tif, v, s);
+- status = v > 0;
+- if( v > 0 ) {
+- _TIFFsetNString(&td->td_inknames, s, v);
+- td->td_inknameslen = v;
++ {
++ v = (uint16) va_arg(ap, uint16_vap);
++ s = va_arg(ap, char*);
++ uint16 ninksinstring;
++ ninksinstring = countInkNamesString(tif, v, s);
++ status = ninksinstring > 0;
++ if(ninksinstring > 0 ) {
++ _TIFFsetNString(&td->td_inknames, s, v);
++ td->td_inknameslen = v;
++ /* Set NumberOfInks to the value ninksinstring */
++ if (TIFFFieldSet(tif, FIELD_NUMBEROFINKS))
++ {
++ if (td->td_numberofinks != ninksinstring) {
++ TIFFErrorExt(tif->tif_clientdata, module,
++ "Warning %s; Tag %s:\n Value %"PRIu16" of NumberOfInks is different from the number of inks %"PRIu16".\n -> NumberOfInks value adapted to %"PRIu16"",
++ tif->tif_name, fip->field_name, td->td_numberofinks, ninksinstring, ninksinstring);
++ td->td_numberofinks = ninksinstring;
++ }
++ } else {
++ td->td_numberofinks = ninksinstring;
++ TIFFSetFieldBit(tif, FIELD_NUMBEROFINKS);
++ }
++ if (TIFFFieldSet(tif, FIELD_SAMPLESPERPIXEL))
++ {
++ if (td->td_numberofinks != td->td_samplesperpixel) {
++ TIFFErrorExt(tif->tif_clientdata, module,
++ "Warning %s; Tag %s:\n Value %"PRIu16" of NumberOfInks is different from the SamplesPerPixel value %"PRIu16"",
++ tif->tif_name, fip->field_name, td->td_numberofinks, td->td_samplesperpixel);
++ }
++ }
++ }
++ }
++ break;
++ case TIFFTAG_NUMBEROFINKS:
++ v = (uint16)va_arg(ap, uint16_vap);
++ /* If InkNames already set also NumberOfInks is set accordingly and should be equal */
++ if (TIFFFieldSet(tif, FIELD_INKNAMES))
++ {
++ if (v != td->td_numberofinks) {
++ TIFFErrorExt(tif->tif_clientdata, module,
++ "Error %s; Tag %s:\n It is not possible to set the value %"PRIu32" for NumberOfInks\n which is different from the number of inks in the InkNames tag (%"PRIu16")",
++ tif->tif_name, fip->field_name, v, td->td_numberofinks);
++ /* Do not set / overwrite number of inks already set by InkNames case accordingly. */
++ status = 0;
++ }
++ } else {
++ td->td_numberofinks = (uint16)v;
++ if (TIFFFieldSet(tif, FIELD_SAMPLESPERPIXEL))
++ {
++ if (td->td_numberofinks != td->td_samplesperpixel) {
++ TIFFErrorExt(tif->tif_clientdata, module,
++ "Warning %s; Tag %s:\n Value %"PRIu32" of NumberOfInks is different from the SamplesPerPixel value %"PRIu16"",
++ tif->tif_name, fip->field_name, v, td->td_samplesperpixel);
++ }
++ }
+ }
+ break;
+ case TIFFTAG_PERSAMPLE:
+@@ -887,34 +934,6 @@ _TIFFVGetField(TIFF* tif, uint32 tag, va_list ap)
+ if (fip->field_bit == FIELD_CUSTOM) {
+ standard_tag = 0;
+ }
+-
+- if( standard_tag == TIFFTAG_NUMBEROFINKS )
+- {
+- int i;
+- for (i = 0; i < td->td_customValueCount; i++) {
+- uint16 val;
+- TIFFTagValue *tv = td->td_customValues + i;
+- if (tv->info->field_tag != standard_tag)
+- continue;
+- if( tv->value == NULL )
+- return 0;
+- val = *(uint16 *)tv->value;
+- /* Truncate to SamplesPerPixel, since the */
+- /* setting code for INKNAMES assume that there are SamplesPerPixel */
+- /* inknames. */
+- /* Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2599 */
+- if( val > td->td_samplesperpixel )
+- {
+- TIFFWarningExt(tif->tif_clientdata,"_TIFFVGetField",
+- "Truncating NumberOfInks from %u to %u",
+- val, td->td_samplesperpixel);
+- val = td->td_samplesperpixel;
+- }
+- *va_arg(ap, uint16*) = val;
+- return 1;
+- }
+- return 0;
+- }
+
+ switch (standard_tag) {
+ case TIFFTAG_SUBFILETYPE:
+@@ -1092,6 +1111,9 @@ _TIFFVGetField(TIFF* tif, uint32 tag, va_list ap)
+ case TIFFTAG_INKNAMES:
+ *va_arg(ap, char**) = td->td_inknames;
+ break;
++ case TIFFTAG_NUMBEROFINKS:
++ *va_arg(ap, uint16 *) = td->td_numberofinks;
++ break;
+ default:
+ {
+ int i;
+diff --git a/libtiff/tif_dir.h b/libtiff/tif_dir.h
+index e7f0667..7cad679 100644
+--- a/libtiff/tif_dir.h
++++ b/libtiff/tif_dir.h
+@@ -117,6 +117,7 @@ typedef struct {
+ /* CMYK parameters */
+ int td_inknameslen;
+ char* td_inknames;
++ uint16 td_numberofinks; /* number of inks in InkNames string */
+
+ int td_customValueCount;
+ TIFFTagValue *td_customValues;
+@@ -174,6 +175,7 @@ typedef struct {
+ #define FIELD_TRANSFERFUNCTION 44
+ #define FIELD_INKNAMES 46
+ #define FIELD_SUBIFD 49
++#define FIELD_NUMBEROFINKS 50
+ /* FIELD_CUSTOM (see tiffio.h) 65 */
+ /* end of support for well-known tags; codec-private tags follow */
+ #define FIELD_CODEC 66 /* base of codec-private tags */
+diff --git a/libtiff/tif_dirinfo.c b/libtiff/tif_dirinfo.c
+index fbfaaf0..bf7de70 100644
+--- a/libtiff/tif_dirinfo.c
++++ b/libtiff/tif_dirinfo.c
+@@ -104,7 +104,7 @@ tiffFields[] = {
+ { TIFFTAG_SUBIFD, -1, -1, TIFF_IFD8, 0, TIFF_SETGET_C16_IFD8, TIFF_SETGET_UNDEFINED, FIELD_SUBIFD, 1, 1, "SubIFD", (TIFFFieldArray*) &tiffFieldArray },
+ { TIFFTAG_INKSET, 1, 1, TIFF_SHORT, 0, TIFF_SETGET_UINT16, TIFF_SETGET_UNDEFINED, FIELD_CUSTOM, 0, 0, "InkSet", NULL },
+ { TIFFTAG_INKNAMES, -1, -1, TIFF_ASCII, 0, TIFF_SETGET_C16_ASCII, TIFF_SETGET_UNDEFINED, FIELD_INKNAMES, 1, 1, "InkNames", NULL },
+- { TIFFTAG_NUMBEROFINKS, 1, 1, TIFF_SHORT, 0, TIFF_SETGET_UINT16, TIFF_SETGET_UNDEFINED, FIELD_CUSTOM, 1, 0, "NumberOfInks", NULL },
++ { TIFFTAG_NUMBEROFINKS, 1, 1, TIFF_SHORT, 0, TIFF_SETGET_UINT16, TIFF_SETGET_UNDEFINED, FIELD_NUMBEROFINKS, 1, 0, "NumberOfInks", NULL },
+ { TIFFTAG_DOTRANGE, 2, 2, TIFF_SHORT, 0, TIFF_SETGET_UINT16_PAIR, TIFF_SETGET_UNDEFINED, FIELD_CUSTOM, 0, 0, "DotRange", NULL },
+ { TIFFTAG_TARGETPRINTER, -1, -1, TIFF_ASCII, 0, TIFF_SETGET_ASCII, TIFF_SETGET_UNDEFINED, FIELD_CUSTOM, 1, 0, "TargetPrinter", NULL },
+ { TIFFTAG_EXTRASAMPLES, -1, -1, TIFF_SHORT, 0, TIFF_SETGET_C16_UINT16, TIFF_SETGET_UNDEFINED, FIELD_EXTRASAMPLES, 0, 1, "ExtraSamples", NULL },
+diff --git a/libtiff/tif_dirwrite.c b/libtiff/tif_dirwrite.c
+index 9e4d306..a2dbc3b 100644
+--- a/libtiff/tif_dirwrite.c
++++ b/libtiff/tif_dirwrite.c
+@@ -677,6 +677,11 @@ TIFFWriteDirectorySec(TIFF* tif, int isimage, int imagedone, uint64* pdiroff)
+ if (!TIFFWriteDirectoryTagAscii(tif,&ndir,dir,TIFFTAG_INKNAMES,tif->tif_dir.td_inknameslen,tif->tif_dir.td_inknames))
+ goto bad;
+ }
++ if (TIFFFieldSet(tif, FIELD_NUMBEROFINKS))
++ {
++ if (!TIFFWriteDirectoryTagShort(tif, &ndir, dir, TIFFTAG_NUMBEROFINKS, tif->tif_dir.td_numberofinks))
++ goto bad;
++ }
+ if (TIFFFieldSet(tif,FIELD_SUBIFD))
+ {
+ if (!TIFFWriteDirectoryTagSubifd(tif,&ndir,dir))
+diff --git a/libtiff/tif_print.c b/libtiff/tif_print.c
+index a073794..a9f05a7 100644
+--- a/libtiff/tif_print.c
++++ b/libtiff/tif_print.c
+@@ -402,6 +402,10 @@ TIFFPrintDirectory(TIFF* tif, FILE* fd, long flags)
+ }
+ fputs("\n", fd);
+ }
++ if (TIFFFieldSet(tif, FIELD_NUMBEROFINKS)) {
++ fprintf(fd, " NumberOfInks: %d\n",
++ td->td_numberofinks);
++ }
+ if (TIFFFieldSet(tif,FIELD_THRESHHOLDING)) {
+ fprintf(fd, " Thresholding: ");
+ switch (td->td_threshholding) {
diff --git a/poky/meta/recipes-multimedia/libtiff/files/CVE-2022-3970.patch b/poky/meta/recipes-multimedia/libtiff/files/CVE-2022-3970.patch
new file mode 100644
index 0000000000..ea70827cbe
--- /dev/null
+++ b/poky/meta/recipes-multimedia/libtiff/files/CVE-2022-3970.patch
@@ -0,0 +1,45 @@
+From 7e87352217d1f0c77eee7033ac59e3aab08532bb Mon Sep 17 00:00:00 2001
+From: Even Rouault <even.rouault@spatialys.com>
+Date: Tue, 8 Nov 2022 15:16:58 +0100
+Subject: [PATCH] TIFFReadRGBATileExt(): fix (unsigned) integer overflow on
+
+Upstream-Status: Backport [import from debian http://security.debian.org/debian-security/pool/updates/main/t/tiff/tiff_4.1.0+git191117-2~deb10u7.debian.tar.xz ]
+CVE: CVE-2022-3970
+Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
+
+Origin: https://gitlab.com/libtiff/libtiff/-/commit/227500897dfb07fb7d27f7aa570050e62617e3be
+Reviewed-by: Sylvain Beucler <beuc@debian.org>
+Last-Update: 2023-01-17
+
+ strips/tiles > 2 GB
+
+Fixes https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=53137
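The essential pattern of the fix is to promote the offset arithmetic to size_t before the multiplication so it cannot wrap in 32 bits. A minimal standalone sketch of the difference (made-up dimensions, 64-bit size_t assumed; not libtiff code):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t tile_xsize = 70000, tile_ysize = 70000;   /* made-up "huge tile" */

        /* All-uint32 arithmetic: 69999 * 70000 = 4,899,930,000 wraps mod 2^32. */
        uint32_t wrapped = (tile_ysize - 1) * tile_xsize;

        /* Casting one operand to size_t first keeps the product in 64 bits,
         * which is what the patch does for the raster offset computations. */
        size_t safe = (size_t)(tile_ysize - 1) * tile_xsize;

        printf("wrapped=%" PRIu32 " safe=%zu\n", wrapped, safe);
        return 0;
    }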
+
+---
+ libtiff/tif_getimage.c | 8 ++++----
+ 1 file changed, 4 insertions(+), 4 deletions(-)
+
+diff --git a/libtiff/tif_getimage.c b/libtiff/tif_getimage.c
+index 96ab146..0b90dcc 100644
+--- a/libtiff/tif_getimage.c
++++ b/libtiff/tif_getimage.c
+@@ -3042,15 +3042,15 @@ TIFFReadRGBATileExt(TIFF* tif, uint32 col, uint32 row, uint32 * raster, int stop
+ return( ok );
+
+ for( i_row = 0; i_row < read_ysize; i_row++ ) {
+- memmove( raster + (tile_ysize - i_row - 1) * tile_xsize,
+- raster + (read_ysize - i_row - 1) * read_xsize,
++ memmove( raster + (size_t)(tile_ysize - i_row - 1) * tile_xsize,
++ raster + (size_t)(read_ysize - i_row - 1) * read_xsize,
+ read_xsize * sizeof(uint32) );
+- _TIFFmemset( raster + (tile_ysize - i_row - 1) * tile_xsize+read_xsize,
++ _TIFFmemset( raster + (size_t)(tile_ysize - i_row - 1) * tile_xsize+read_xsize,
+ 0, sizeof(uint32) * (tile_xsize - read_xsize) );
+ }
+
+ for( i_row = read_ysize; i_row < tile_ysize; i_row++ ) {
+- _TIFFmemset( raster + (tile_ysize - i_row - 1) * tile_xsize,
++ _TIFFmemset( raster + (size_t)(tile_ysize - i_row - 1) * tile_xsize,
+ 0, sizeof(uint32) * tile_xsize );
+ }
+
diff --git a/poky/meta/recipes-multimedia/libtiff/files/CVE-2022-48281.patch b/poky/meta/recipes-multimedia/libtiff/files/CVE-2022-48281.patch
new file mode 100644
index 0000000000..5747202bd9
--- /dev/null
+++ b/poky/meta/recipes-multimedia/libtiff/files/CVE-2022-48281.patch
@@ -0,0 +1,26 @@
+From 424c82b5b33256e7f03faace51dc8010f3ded9ff Mon Sep 17 00:00:00 2001
+From: Su Laus <sulau@freenet.de>
+Date: Sat, 21 Jan 2023 15:58:10 +0000
+Subject: [PATCH] tiffcrop: Correct simple copy paste error. Fix #488.
+
+Upstream-Status: Backport [import from debian http://security.debian.org/debian-security/pool/updates/main/t/tiff/tiff_4.1.0+git191117-2~deb10u7.debian.tar.xz]
+CVE: CVE-2022-48281
+Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
+
+---
+ tools/tiffcrop.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/tools/tiffcrop.c b/tools/tiffcrop.c
+index a0789a3..8aed9cd 100644
+--- a/tools/tiffcrop.c
++++ b/tools/tiffcrop.c
+@@ -7564,7 +7564,7 @@ processCropSelections(struct image_data *image, struct crop_mask *crop,
+ crop_buff = (unsigned char *)_TIFFmalloc(cropsize + NUM_BUFF_OVERSIZE_BYTES);
+ else
+ {
+- prev_cropsize = seg_buffs[0].size;
++ prev_cropsize = seg_buffs[i].size;
+ if (prev_cropsize < cropsize)
+ {
+ next_buff = _TIFFrealloc(crop_buff, cropsize + NUM_BUFF_OVERSIZE_BYTES);
diff --git a/poky/meta/recipes-multimedia/libtiff/files/CVE-2023-0795_0796_0797_0798_0799.patch b/poky/meta/recipes-multimedia/libtiff/files/CVE-2023-0795_0796_0797_0798_0799.patch
new file mode 100644
index 0000000000..253018525a
--- /dev/null
+++ b/poky/meta/recipes-multimedia/libtiff/files/CVE-2023-0795_0796_0797_0798_0799.patch
@@ -0,0 +1,157 @@
+From 7808740e100ba30ffb791044f3b14dec3e85ed6f Mon Sep 17 00:00:00 2001
+From: Markus Koschany <apo@debian.org>
+Date: Tue, 21 Feb 2023 14:26:43 +0100
+Subject: [PATCH] CVE-2023-0795
+
+This is also the fix for CVE-2023-0796, CVE-2023-0797, CVE-2023-0798,
+CVE-2023-0799.
+
+Bug-Debian: https://bugs.debian.org/1031632
+Origin: https://gitlab.com/libtiff/libtiff/-/commit/afaabc3e50d4e5d80a94143f7e3c997e7e410f68
+
+Upstream-Status: Backport [import from debian http://security.debian.org/debian-security/pool/updates/main/t/tiff/tiff_4.1.0+git191117-2~deb10u7.debian.tar.xz ]
+CVE: CVE-2023-0795 CVE-2023-0796 CVE-2023-0797 CVE-2023-0798 CVE-2023-0799
+Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
+---
+ tools/tiffcrop.c | 51 ++++++++++++++++++++++++++++--------------------
+ 1 file changed, 30 insertions(+), 21 deletions(-)
+
+diff --git a/tools/tiffcrop.c b/tools/tiffcrop.c
+index 8aed9cd..f21a7d7 100644
+--- a/tools/tiffcrop.c
++++ b/tools/tiffcrop.c
+@@ -277,7 +277,6 @@ struct region {
+ uint32 width; /* width in pixels */
+ uint32 length; /* length in pixels */
+ uint32 buffsize; /* size of buffer needed to hold the cropped region */
+- unsigned char *buffptr; /* address of start of the region */
+ };
+
+ /* Cropping parameters from command line and image data
+@@ -532,7 +531,7 @@ static int rotateContigSamples24bits(uint16, uint16, uint16, uint32,
+ static int rotateContigSamples32bits(uint16, uint16, uint16, uint32,
+ uint32, uint32, uint8 *, uint8 *);
+ static int rotateImage(uint16, struct image_data *, uint32 *, uint32 *,
+- unsigned char **);
++ unsigned char **, int);
+ static int mirrorImage(uint16, uint16, uint16, uint32, uint32,
+ unsigned char *);
+ static int invertImage(uint16, uint16, uint16, uint32, uint32,
+@@ -5112,7 +5111,6 @@ initCropMasks (struct crop_mask *cps)
+ cps->regionlist[i].width = 0;
+ cps->regionlist[i].length = 0;
+ cps->regionlist[i].buffsize = 0;
+- cps->regionlist[i].buffptr = NULL;
+ cps->zonelist[i].position = 0;
+ cps->zonelist[i].total = 0;
+ }
+@@ -6358,8 +6356,13 @@ static int correct_orientation(struct image_data *image, unsigned char **work_b
+ image->adjustments & ROTATE_ANY);
+ return (-1);
+ }
+-
+- if (rotateImage(rotation, image, &image->width, &image->length, work_buff_ptr))
++
++ /* Dummy variable in order not to switch two times the
++ * image->width,->length within rotateImage(),
++ * but switch xres, yres there. */
++ uint32_t width = image->width;
++ uint32_t length = image->length;
++ if (rotateImage(rotation, image, &width, &length, work_buff_ptr, TRUE))
+ {
+ TIFFError ("correct_orientation", "Unable to rotate image");
+ return (-1);
+@@ -6427,7 +6430,6 @@ extractCompositeRegions(struct image_data *image, struct crop_mask *crop,
+ /* These should not be needed for composite images */
+ crop->regionlist[i].width = crop_width;
+ crop->regionlist[i].length = crop_length;
+- crop->regionlist[i].buffptr = crop_buff;
+
+ src_rowsize = ((img_width * bps * spp) + 7) / 8;
+ dst_rowsize = (((crop_width * bps * count) + 7) / 8);
+@@ -6664,7 +6666,6 @@ extractSeparateRegion(struct image_data *image, struct crop_mask *crop,
+
+ crop->regionlist[region].width = crop_width;
+ crop->regionlist[region].length = crop_length;
+- crop->regionlist[region].buffptr = crop_buff;
+
+ src = read_buff;
+ dst = crop_buff;
+@@ -7542,7 +7543,7 @@ processCropSelections(struct image_data *image, struct crop_mask *crop,
+ if (crop->crop_mode & CROP_ROTATE) /* rotate should be last as it can reallocate the buffer */
+ {
+ if (rotateImage(crop->rotation, image, &crop->combined_width,
+- &crop->combined_length, &crop_buff))
++ &crop->combined_length, &crop_buff, FALSE))
+ {
+ TIFFError("processCropSelections",
+ "Failed to rotate composite regions by %d degrees", crop->rotation);
+@@ -7648,7 +7649,7 @@ processCropSelections(struct image_data *image, struct crop_mask *crop,
+ if (crop->crop_mode & CROP_ROTATE) /* rotate should be last as it can reallocate the buffer */
+ {
+ if (rotateImage(crop->rotation, image, &crop->regionlist[i].width,
+- &crop->regionlist[i].length, &crop_buff))
++ &crop->regionlist[i].length, &crop_buff, FALSE))
+ {
+ TIFFError("processCropSelections",
+ "Failed to rotate crop region by %d degrees", crop->rotation);
+@@ -7780,7 +7781,7 @@ createCroppedImage(struct image_data *image, struct crop_mask *crop,
+ if (crop->crop_mode & CROP_ROTATE) /* rotate should be last as it can reallocate the buffer */
+ {
+ if (rotateImage(crop->rotation, image, &crop->combined_width,
+- &crop->combined_length, crop_buff_ptr))
++ &crop->combined_length, crop_buff_ptr, TRUE))
+ {
+ TIFFError("createCroppedImage",
+ "Failed to rotate image or cropped selection by %d degrees", crop->rotation);
+@@ -8443,7 +8444,7 @@ rotateContigSamples32bits(uint16 rotation, uint16 spp, uint16 bps, uint32 width,
+ /* Rotate an image by a multiple of 90 degrees clockwise */
+ static int
+ rotateImage(uint16 rotation, struct image_data *image, uint32 *img_width,
+- uint32 *img_length, unsigned char **ibuff_ptr)
++ uint32 *img_length, unsigned char **ibuff_ptr, int rot_image_params)
+ {
+ int shift_width;
+ uint32 bytes_per_pixel, bytes_per_sample;
+@@ -8634,11 +8635,15 @@ rotateImage(uint16 rotation, struct image_data *image, uint32 *img_width,
+
+ *img_width = length;
+ *img_length = width;
+- image->width = length;
+- image->length = width;
+- res_temp = image->xres;
+- image->xres = image->yres;
+- image->yres = res_temp;
++ /* Only toggle image parameters if whole input image is rotated. */
++ if (rot_image_params)
++ {
++ image->width = length;
++ image->length = width;
++ res_temp = image->xres;
++ image->xres = image->yres;
++ image->yres = res_temp;
++ }
+ break;
+
+ case 270: if ((bps % 8) == 0) /* byte aligned data */
+@@ -8711,11 +8716,15 @@ rotateImage(uint16 rotation, struct image_data *image, uint32 *img_width,
+
+ *img_width = length;
+ *img_length = width;
+- image->width = length;
+- image->length = width;
+- res_temp = image->xres;
+- image->xres = image->yres;
+- image->yres = res_temp;
++ /* Only toggle image parameters if whole input image is rotated. */
++ if (rot_image_params)
++ {
++ image->width = length;
++ image->length = width;
++ res_temp = image->xres;
++ image->xres = image->yres;
++ image->yres = res_temp;
++ }
+ break;
+ default:
+ break;
diff --git a/poky/meta/recipes-multimedia/libtiff/files/CVE-2023-0800_0801_0802_0803_0804.patch b/poky/meta/recipes-multimedia/libtiff/files/CVE-2023-0800_0801_0802_0803_0804.patch
new file mode 100644
index 0000000000..bf1a439b4d
--- /dev/null
+++ b/poky/meta/recipes-multimedia/libtiff/files/CVE-2023-0800_0801_0802_0803_0804.patch
@@ -0,0 +1,135 @@
+From e18be834497e0ebf68d443abb9e18187f36cd3bf Mon Sep 17 00:00:00 2001
+From: Markus Koschany <apo@debian.org>
+Date: Tue, 21 Feb 2023 14:39:52 +0100
+Subject: [PATCH] CVE-2023-0800
+
+This is also the fix for CVE-2023-0801, CVE-2023-0802, CVE-2023-0803,
+CVE-2023-0804.
+
+Bug-Debian: https://bugs.debian.org/1031632
+Origin: https://gitlab.com/libtiff/libtiff/-/commit/33aee1275d9d1384791d2206776eb8152d397f00
+
+Upstream-Status: Backport [import from debian http://security.debian.org/debian-security/pool/updates/main/t/tiff/tiff_4.1.0+git191117-2~deb10u7.debian.tar.xz ]
+CVE: CVE-2023-0800 CVE-2023-0801 CVE-2023-0802 CVE-2023-0803 CVE-2023-0804
+Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
+---
+ tools/tiffcrop.c | 73 +++++++++++++++++++++++++++++++++++++++++++++---
+ 1 file changed, 69 insertions(+), 4 deletions(-)
+
+diff --git a/tools/tiffcrop.c b/tools/tiffcrop.c
+index f21a7d7..742615a 100644
+--- a/tools/tiffcrop.c
++++ b/tools/tiffcrop.c
+@@ -5250,18 +5250,40 @@ computeInputPixelOffsets(struct crop_mask *crop, struct image_data *image,
+
+ crop->regionlist[i].buffsize = buffsize;
+ crop->bufftotal += buffsize;
++
++ /* For composite images with more than one region, the
++ * combined_length or combined_width always needs to be equal,
++ * respectively.
++ * Otherwise, even the first section/region copy
++ * action might cause buffer overrun. */
+ if (crop->img_mode == COMPOSITE_IMAGES)
+ {
+ switch (crop->edge_ref)
+ {
+ case EDGE_LEFT:
+ case EDGE_RIGHT:
++ if (i > 0 && zlength != crop->combined_length)
++ {
++ TIFFError(
++ "computeInputPixelOffsets",
++ "Only equal length regions can be combined for "
++ "-E left or right");
++ return (-1);
++ }
+ crop->combined_length = zlength;
+ crop->combined_width += zwidth;
+ break;
+ case EDGE_BOTTOM:
+ case EDGE_TOP: /* width from left, length from top */
+ default:
++ if (i > 0 && zwidth != crop->combined_width)
++ {
++ TIFFError("computeInputPixelOffsets",
++ "Only equal width regions can be "
++ "combined for -E "
++ "top or bottom");
++ return (-1);
++ }
+ crop->combined_width = zwidth;
+ crop->combined_length += zlength;
+ break;
+@@ -6416,6 +6438,47 @@ extractCompositeRegions(struct image_data *image, struct crop_mask *crop,
+ crop->combined_width = 0;
+ crop->combined_length = 0;
+
++ /* If there is more than one region, check beforehand whether all the width
++ * and length values of the regions are the same, respectively. */
++ switch (crop->edge_ref)
++ {
++ default:
++ case EDGE_TOP:
++ case EDGE_BOTTOM:
++ for (i = 1; i < crop->selections; i++)
++ {
++ uint32_t crop_width0 =
++ crop->regionlist[i - 1].x2 - crop->regionlist[i - 1].x1 + 1;
++ uint32_t crop_width1 =
++ crop->regionlist[i].x2 - crop->regionlist[i].x1 + 1;
++ if (crop_width0 != crop_width1)
++ {
++ TIFFError("extractCompositeRegions",
++ "Only equal width regions can be combined for -E "
++ "top or bottom");
++ return (1);
++ }
++ }
++ break;
++ case EDGE_LEFT:
++ case EDGE_RIGHT:
++ for (i = 1; i < crop->selections; i++)
++ {
++ uint32_t crop_length0 =
++ crop->regionlist[i - 1].y2 - crop->regionlist[i - 1].y1 + 1;
++ uint32_t crop_length1 =
++ crop->regionlist[i].y2 - crop->regionlist[i].y1 + 1;
++ if (crop_length0 != crop_length1)
++ {
++ TIFFError("extractCompositeRegions",
++ "Only equal length regions can be combined for "
++ "-E left or right");
++ return (1);
++ }
++ }
++ }
++
++
+ for (i = 0; i < crop->selections; i++)
+ {
+ /* rows, columns, width, length are expressed in pixels */
+@@ -6439,8 +6502,9 @@ extractCompositeRegions(struct image_data *image, struct crop_mask *crop,
+ default:
+ case EDGE_TOP:
+ case EDGE_BOTTOM:
+- if ((i > 0) && (crop_width != crop->regionlist[i - 1].width))
+- {
++ if ((crop->selections > i + 1) &&
++ (crop_width != crop->regionlist[i + 1].width))
++ {
+ TIFFError ("extractCompositeRegions",
+ "Only equal width regions can be combined for -E top or bottom");
+ return (1);
+@@ -6520,8 +6584,9 @@ extractCompositeRegions(struct image_data *image, struct crop_mask *crop,
+ break;
+ case EDGE_LEFT: /* splice the pieces of each row together, side by side */
+ case EDGE_RIGHT:
+- if ((i > 0) && (crop_length != crop->regionlist[i - 1].length))
+- {
++ if ((crop->selections > i + 1) &&
++ (crop_length != crop->regionlist[i + 1].length))
++ {
+ TIFFError ("extractCompositeRegions",
+ "Only equal length regions can be combined for -E left or right");
+ return (1);
diff --git a/poky/meta/recipes-multimedia/libtiff/tiff_4.1.0.bb b/poky/meta/recipes-multimedia/libtiff/tiff_4.1.0.bb
index 74ececb113..4b48d81e2b 100644
--- a/poky/meta/recipes-multimedia/libtiff/tiff_4.1.0.bb
+++ b/poky/meta/recipes-multimedia/libtiff/tiff_4.1.0.bb
@@ -29,6 +29,13 @@ SRC_URI = "http://download.osgeo.org/libtiff/tiff-${PV}.tar.gz \
file://CVE-2022-2867-CVE-2022-2868-CVE-2022-2869.patch \
file://CVE-2022-1354.patch \
file://CVE-2022-1355.patch \
+ file://CVE-2022-3570_3598.patch \
+ file://CVE-2022-3597_3626_3627.patch \
+ file://CVE-2022-3599.patch \
+ file://CVE-2022-3970.patch \
+ file://CVE-2022-48281.patch \
+ file://CVE-2023-0795_0796_0797_0798_0799.patch \
+ file://CVE-2023-0800_0801_0802_0803_0804.patch \
"
SRC_URI[md5sum] = "2165e7aba557463acc0664e71a3ed424"
SRC_URI[sha256sum] = "5d29f32517dadb6dbcd1255ea5bbc93a2b54b94fbf83653b4d65c7d6775b8634"
diff --git a/poky/meta/recipes-support/apr/apr-util/0001-Fix-error-handling-in-gdbm.patch b/poky/meta/recipes-support/apr/apr-util/0001-Fix-error-handling-in-gdbm.patch
deleted file mode 100644
index 57e7453312..0000000000
--- a/poky/meta/recipes-support/apr/apr-util/0001-Fix-error-handling-in-gdbm.patch
+++ /dev/null
@@ -1,135 +0,0 @@
-From 6b638fa9afbeb54dfa19378e391465a5284ce1ad Mon Sep 17 00:00:00 2001
-From: Changqing Li <changqing.li@windriver.com>
-Date: Wed, 12 Sep 2018 17:16:36 +0800
-Subject: [PATCH] Fix error handling in gdbm
-
-Only check for gdbm_errno if the return value of the called gdbm_*
-function says so. This fixes apr-util with gdbm 1.14, which does not
-seem to always reset gdbm_errno.
-
-Also make the gdbm driver return error codes starting with
-APR_OS_START_USEERR instead of always returning APR_EGENERAL. This is
-what the berkleydb driver already does.
-
-Also ensure that dsize is 0 if dptr == NULL.
-
-Upstream-Status: Backport[https://svn.apache.org/viewvc?
-view=revision&amp;revision=1825311]
-
-Signed-off-by: Changqing Li <changqing.li@windriver.com>
----
- dbm/apr_dbm_gdbm.c | 47 +++++++++++++++++++++++++++++------------------
- 1 file changed, 29 insertions(+), 18 deletions(-)
-
-diff --git a/dbm/apr_dbm_gdbm.c b/dbm/apr_dbm_gdbm.c
-index 749447a..1c86327 100644
---- a/dbm/apr_dbm_gdbm.c
-+++ b/dbm/apr_dbm_gdbm.c
-@@ -36,13 +36,25 @@
- static apr_status_t g2s(int gerr)
- {
- if (gerr == -1) {
-- /* ### need to fix this */
-- return APR_EGENERAL;
-+ if (gdbm_errno == GDBM_NO_ERROR)
-+ return APR_SUCCESS;
-+ return APR_OS_START_USEERR + gdbm_errno;
- }
-
- return APR_SUCCESS;
- }
-
-+static apr_status_t gdat2s(datum d)
-+{
-+ if (d.dptr == NULL) {
-+ if (gdbm_errno == GDBM_NO_ERROR || gdbm_errno == GDBM_ITEM_NOT_FOUND)
-+ return APR_SUCCESS;
-+ return APR_OS_START_USEERR + gdbm_errno;
-+ }
-+
-+ return APR_SUCCESS;
-+}
-+
- static apr_status_t datum_cleanup(void *dptr)
- {
- if (dptr)
-@@ -53,22 +65,15 @@ static apr_status_t datum_cleanup(void *dptr)
-
- static apr_status_t set_error(apr_dbm_t *dbm, apr_status_t dbm_said)
- {
-- apr_status_t rv = APR_SUCCESS;
-
-- /* ### ignore whatever the DBM said (dbm_said); ask it explicitly */
-+ dbm->errcode = dbm_said;
-
-- if ((dbm->errcode = gdbm_errno) == GDBM_NO_ERROR) {
-+ if (dbm_said == APR_SUCCESS)
- dbm->errmsg = NULL;
-- }
-- else {
-- dbm->errmsg = gdbm_strerror(gdbm_errno);
-- rv = APR_EGENERAL; /* ### need something better */
-- }
--
-- /* captured it. clear it now. */
-- gdbm_errno = GDBM_NO_ERROR;
-+ else
-+ dbm->errmsg = gdbm_strerror(dbm_said - APR_OS_START_USEERR);
-
-- return rv;
-+ return dbm_said;
- }
-
- /* --------------------------------------------------------------------------
-@@ -107,7 +112,7 @@ static apr_status_t vt_gdbm_open(apr_dbm_t **pdb, const char *pathname,
- NULL);
-
- if (file == NULL)
-- return APR_EGENERAL; /* ### need a better error */
-+ return APR_OS_START_USEERR + gdbm_errno; /* ### need a better error */
-
- /* we have an open database... return it */
- *pdb = apr_pcalloc(pool, sizeof(**pdb));
-@@ -141,10 +146,12 @@ static apr_status_t vt_gdbm_fetch(apr_dbm_t *dbm, apr_datum_t key,
- if (pvalue->dptr)
- apr_pool_cleanup_register(dbm->pool, pvalue->dptr, datum_cleanup,
- apr_pool_cleanup_null);
-+ else
-+ pvalue->dsize = 0;
-
- /* store the error info into DBM, and return a status code. Also, note
- that *pvalue should have been cleared on error. */
-- return set_error(dbm, APR_SUCCESS);
-+ return set_error(dbm, gdat2s(rd));
- }
-
- static apr_status_t vt_gdbm_store(apr_dbm_t *dbm, apr_datum_t key,
-@@ -201,9 +208,11 @@ static apr_status_t vt_gdbm_firstkey(apr_dbm_t *dbm, apr_datum_t *pkey)
- if (pkey->dptr)
- apr_pool_cleanup_register(dbm->pool, pkey->dptr, datum_cleanup,
- apr_pool_cleanup_null);
-+ else
-+ pkey->dsize = 0;
-
- /* store any error info into DBM, and return a status code. */
-- return set_error(dbm, APR_SUCCESS);
-+ return set_error(dbm, gdat2s(rd));
- }
-
- static apr_status_t vt_gdbm_nextkey(apr_dbm_t *dbm, apr_datum_t *pkey)
-@@ -221,9 +230,11 @@ static apr_status_t vt_gdbm_nextkey(apr_dbm_t *dbm, apr_datum_t *pkey)
- if (pkey->dptr)
- apr_pool_cleanup_register(dbm->pool, pkey->dptr, datum_cleanup,
- apr_pool_cleanup_null);
-+ else
-+ pkey->dsize = 0;
-
- /* store any error info into DBM, and return a status code. */
-- return set_error(dbm, APR_SUCCESS);
-+ return set_error(dbm, gdat2s(rd));
- }
-
- static void vt_gdbm_freedatum(apr_dbm_t *dbm, apr_datum_t data)
---
-2.7.4
-
diff --git a/poky/meta/recipes-support/apr/apr-util_1.6.1.bb b/poky/meta/recipes-support/apr/apr-util_1.6.3.bb
index f7d827a1d8..3d9d619c7b 100644
--- a/poky/meta/recipes-support/apr/apr-util_1.6.1.bb
+++ b/poky/meta/recipes-support/apr/apr-util_1.6.3.bb
@@ -13,11 +13,9 @@ SRC_URI = "${APACHE_MIRROR}/apr/${BPN}-${PV}.tar.gz \
file://configfix.patch \
file://configure_fixes.patch \
file://run-ptest \
- file://0001-Fix-error-handling-in-gdbm.patch \
-"
+ "
-SRC_URI[md5sum] = "bd502b9a8670a8012c4d90c31a84955f"
-SRC_URI[sha256sum] = "b65e40713da57d004123b6319828be7f1273fbc6490e145874ee1177e112c459"
+SRC_URI[sha256sum] = "2b74d8932703826862ca305b094eef2983c27b39d5c9414442e9976a9acf1983"
EXTRA_OECONF = "--with-apr=${STAGING_BINDIR_CROSS}/apr-1-config \
--without-odbc \
@@ -35,6 +33,7 @@ OE_BINCONFIG_EXTRA_MANGLE = " -e 's:location=source:location=installed:'"
do_configure_append() {
if [ "${CLASSOVERRIDE}" = "class-target" ]; then
cp ${STAGING_DATADIR}/apr/apr_rules.mk ${B}/build/rules.mk
+ sed -i -e 's#^CFLAGS=.*#CFLAGS=${TARGET_CFLAGS}#g' ${B}/build/rules.mk
fi
}
do_configure_prepend_class-native() {
@@ -49,6 +48,7 @@ do_configure_append_class-native() {
do_configure_prepend_class-nativesdk() {
cp ${STAGING_DATADIR}/apr/apr_rules.mk ${S}/build/rules.mk
+ sed -i -e 's#^CFLAGS=.*#CFLAGS=${TARGET_CFLAGS}#g' ${S}/build/rules.mk
}
do_configure_append_class-nativesdk() {
diff --git a/poky/meta/recipes-support/apr/apr/0001-Add-option-to-disable-timed-dependant-tests.patch b/poky/meta/recipes-support/apr/apr/0001-Add-option-to-disable-timed-dependant-tests.patch
index abff4e9331..a274f3a16e 100644
--- a/poky/meta/recipes-support/apr/apr/0001-Add-option-to-disable-timed-dependant-tests.patch
+++ b/poky/meta/recipes-support/apr/apr/0001-Add-option-to-disable-timed-dependant-tests.patch
@@ -1,14 +1,15 @@
-From 2bbe20b4f69e84e7a18bc79d382486953f479328 Mon Sep 17 00:00:00 2001
+From 225abf37cd0b49960664b59f08e515a4c4ea5ad0 Mon Sep 17 00:00:00 2001
From: Jeremy Puhlman <jpuhlman@mvista.com>
Date: Thu, 26 Mar 2020 18:30:36 +0000
Subject: [PATCH] Add option to disable timed dependant tests
-The disabled tests rely on timing to pass correctly. On a virtualized
+The disabled tests rely on timing to pass correctly. On a virtualized
system under heavy load, these tests randomly fail because they miss
a timer or other timing related issues.
Upstream-Status: Pending
Signed-off-by: Jeremy Puhlman <jpuhlman@mvista.com>
+
---
configure.in | 6 ++++++
include/apr.h.in | 1 +
@@ -16,10 +17,10 @@ Signed-off-by: Jeremy Puhlman <jpuhlman@mvista.com>
3 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/configure.in b/configure.in
-index d9f32d6..f0c5661 100644
+index bfd488b..3663220 100644
--- a/configure.in
+++ b/configure.in
-@@ -2886,6 +2886,12 @@ AC_ARG_ENABLE(timedlocks,
+@@ -3023,6 +3023,12 @@ AC_ARG_ENABLE(timedlocks,
)
AC_SUBST(apr_has_timedlocks)
@@ -45,10 +46,10 @@ index ee99def..c46a5f4 100644
#define APR_PROCATTR_USER_SET_REQUIRES_PASSWORD @apr_procattr_user_set_requires_password@
diff --git a/test/testlock.c b/test/testlock.c
-index a43f477..6233d0b 100644
+index e3437c1..04e01b9 100644
--- a/test/testlock.c
+++ b/test/testlock.c
-@@ -396,13 +396,13 @@ abts_suite *testlock(abts_suite *suite)
+@@ -535,7 +535,7 @@ abts_suite *testlock(abts_suite *suite)
abts_run_test(suite, threads_not_impl, NULL);
#else
abts_run_test(suite, test_thread_mutex, NULL);
@@ -56,6 +57,8 @@ index a43f477..6233d0b 100644
+#if APR_HAS_TIMEDLOCKS && APR_HAVE_TIME_DEPENDANT_TESTS
abts_run_test(suite, test_thread_timedmutex, NULL);
#endif
+ abts_run_test(suite, test_thread_nestedmutex, NULL);
+@@ -543,7 +543,7 @@ abts_suite *testlock(abts_suite *suite)
abts_run_test(suite, test_thread_rwlock, NULL);
abts_run_test(suite, test_cond, NULL);
abts_run_test(suite, test_timeoutcond, NULL);
@@ -63,7 +66,4 @@ index a43f477..6233d0b 100644
+#if APR_HAS_TIMEDLOCKS && APR_HAVE_TIME_DEPENDANT_TESTS
abts_run_test(suite, test_timeoutmutex, NULL);
#endif
- #endif
---
-2.23.0
-
+ #ifdef WIN32
diff --git a/poky/meta/recipes-support/apr/apr/0001-configure-Remove-runtime-test-for-mmap-that-can-map-.patch b/poky/meta/recipes-support/apr/apr/0001-configure-Remove-runtime-test-for-mmap-that-can-map-.patch
new file mode 100644
index 0000000000..a78b16284f
--- /dev/null
+++ b/poky/meta/recipes-support/apr/apr/0001-configure-Remove-runtime-test-for-mmap-that-can-map-.patch
@@ -0,0 +1,58 @@
+From 316b81c462f065927d7fec56aadd5c8cb94d1cf0 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 26 Aug 2022 00:28:08 -0700
+Subject: [PATCH] configure: Remove runtime test for mmap that can map
+ /dev/zero
+
+This never works for cross-compile moreover it ends up disabling
+ac_cv_file__dev_zero which then results in compiler errors in shared
+mutexes
+
+Upstream-Status: Inappropriate [Cross-compile specific]
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+---
+ configure.in | 30 ------------------------------
+ 1 file changed, 30 deletions(-)
+
+diff --git a/configure.in b/configure.in
+index 3663220..dce9789 100644
+--- a/configure.in
++++ b/configure.in
+@@ -1303,36 +1303,6 @@ AC_CHECK_FUNCS([mmap munmap shm_open shm_unlink shmget shmat shmdt shmctl \
+ APR_CHECK_DEFINE(MAP_ANON, sys/mman.h)
+ AC_CHECK_FILE(/dev/zero)
+
+-# Not all systems can mmap /dev/zero (such as HP-UX). Check for that.
+-if test "$ac_cv_func_mmap" = "yes" &&
+- test "$ac_cv_file__dev_zero" = "yes"; then
+- AC_CACHE_CHECK([for mmap that can map /dev/zero],
+- [ac_cv_mmap__dev_zero],
+- [AC_TRY_RUN([#include <sys/types.h>
+-#include <sys/stat.h>
+-#include <fcntl.h>
+-#ifdef HAVE_SYS_MMAN_H
+-#include <sys/mman.h>
+-#endif
+- int main()
+- {
+- int fd;
+- void *m;
+- fd = open("/dev/zero", O_RDWR);
+- if (fd < 0) {
+- return 1;
+- }
+- m = mmap(0, sizeof(void*), PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
+- if (m == (void *)-1) { /* aka MAP_FAILED */
+- return 2;
+- }
+- if (munmap(m, sizeof(void*)) < 0) {
+- return 3;
+- }
+- return 0;
+- }], [], [ac_cv_file__dev_zero=no], [ac_cv_file__dev_zero=no])])
+-fi
+-
+ # Now we determine which one is our anonymous shmem preference.
+ haveshmgetanon="0"
+ havemmapzero="0"
diff --git a/poky/meta/recipes-support/apr/apr/0002-apr-Remove-workdir-path-references-from-installed-ap.patch b/poky/meta/recipes-support/apr/apr/0002-apr-Remove-workdir-path-references-from-installed-ap.patch
index 72e706f966..d63423f3a1 100644
--- a/poky/meta/recipes-support/apr/apr/0002-apr-Remove-workdir-path-references-from-installed-ap.patch
+++ b/poky/meta/recipes-support/apr/apr/0002-apr-Remove-workdir-path-references-from-installed-ap.patch
@@ -1,8 +1,7 @@
-From 5925b20da8bbc34d9bf5a5dca123ef38864d43c6 Mon Sep 17 00:00:00 2001
+From 689a8db96a6d1e1cae9cbfb35d05ac82140a6555 Mon Sep 17 00:00:00 2001
From: Hongxu Jia <hongxu.jia@windriver.com>
Date: Tue, 30 Jan 2018 09:39:06 +0800
-Subject: [PATCH 2/7] apr: Remove workdir path references from installed apr
- files
+Subject: [PATCH] apr: Remove workdir path references from installed apr files
Upstream-Status: Inappropriate [configuration]
@@ -14,20 +13,23 @@ packages at target run time, the workdir path caused confusion.
Rebase to 1.6.3
Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+
---
- apr-config.in | 26 ++------------------------
- 1 file changed, 2 insertions(+), 24 deletions(-)
+ apr-config.in | 32 ++------------------------------
+ 1 file changed, 2 insertions(+), 30 deletions(-)
diff --git a/apr-config.in b/apr-config.in
-index 84b4073..bbbf651 100644
+index bed47ca..47874e5 100644
--- a/apr-config.in
+++ b/apr-config.in
-@@ -152,14 +152,7 @@ while test $# -gt 0; do
+@@ -164,16 +164,7 @@ while test $# -gt 0; do
flags="$flags $LDFLAGS"
;;
--includes)
- if test "$location" = "installed"; then
flags="$flags -I$includedir $EXTRA_INCLUDES"
+- elif test "$location" = "crosscompile"; then
+- flags="$flags -I$APR_TARGET_DIR/$includedir $EXTRA_INCLUDES"
- elif test "$location" = "source"; then
- flags="$flags -I$APR_SOURCE_DIR/include $EXTRA_INCLUDES"
- else
@@ -37,13 +39,15 @@ index 84b4073..bbbf651 100644
;;
--srcdir)
echo $APR_SOURCE_DIR
-@@ -181,29 +174,14 @@ while test $# -gt 0; do
+@@ -197,33 +188,14 @@ while test $# -gt 0; do
exit 0
;;
--link-ld)
- if test "$location" = "installed"; then
- ### avoid using -L if libdir is a "standard" location like /usr/lib
- flags="$flags -L$libdir -l${APR_LIBNAME}"
+- elif test "$location" = "crosscompile"; then
+- flags="$flags -L$APR_TARGET_DIR/$libdir -l${APR_LIBNAME}"
- else
- ### this surely can't work since the library is in .libs?
- flags="$flags -L$APR_BUILD_DIR -l${APR_LIBNAME}"
@@ -62,6 +66,8 @@ index 84b4073..bbbf651 100644
- # Since the user is specifying they are linking with libtool, we
- # *know* that -R will be recognized by libtool.
- flags="$flags -L$libdir -R$libdir -l${APR_LIBNAME}"
+- elif test "$location" = "crosscompile"; then
+- flags="$flags -L${APR_TARGET_DIR}/$libdir -l${APR_LIBNAME}"
- else
- flags="$flags $LA_FILE"
- fi
@@ -69,6 +75,3 @@ index 84b4073..bbbf651 100644
;;
--shlib-path-var)
echo "$SHLIBPATH_VAR"
---
-1.8.3.1
-
diff --git a/poky/meta/recipes-support/apr/apr/0003-Makefile.in-configure.in-support-cross-compiling.patch b/poky/meta/recipes-support/apr/apr/0003-Makefile.in-configure.in-support-cross-compiling.patch
deleted file mode 100644
index 4dd53bd8eb..0000000000
--- a/poky/meta/recipes-support/apr/apr/0003-Makefile.in-configure.in-support-cross-compiling.patch
+++ /dev/null
@@ -1,63 +0,0 @@
-From d5028c10f156c224475b340cfb1ba025d6797243 Mon Sep 17 00:00:00 2001
-From: Hongxu Jia <hongxu.jia@windriver.com>
-Date: Fri, 2 Feb 2018 15:51:42 +0800
-Subject: [PATCH 3/7] Makefile.in/configure.in: support cross compiling
-
-While cross compiling, the tools/gen_test_char could not
-be executed at build time, use AX_PROG_CC_FOR_BUILD to
-build native tools/gen_test_char
-
-Upstream-Status: Submitted [https://github.com/apache/apr/pull/8]
-
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
----
- Makefile.in | 10 +++-------
- configure.in | 3 +++
- 2 files changed, 6 insertions(+), 7 deletions(-)
-
-diff --git a/Makefile.in b/Makefile.in
-index 5fb760e..8675f90 100644
---- a/Makefile.in
-+++ b/Makefile.in
-@@ -46,7 +46,7 @@ LT_VERSION = @LT_VERSION@
-
- CLEAN_TARGETS = apr-config.out apr.exp exports.c export_vars.c .make.dirs \
- build/apr_rules.out tools/gen_test_char@EXEEXT@ \
-- tools/gen_test_char.o tools/gen_test_char.lo \
-+ tools/gen_test_char.o \
- include/private/apr_escape_test_char.h
- DISTCLEAN_TARGETS = config.cache config.log config.status \
- include/apr.h include/arch/unix/apr_private.h \
-@@ -131,13 +131,9 @@ check: $(TARGET_LIB)
- etags:
- etags `find . -name '*.[ch]'`
-
--OBJECTS_gen_test_char = tools/gen_test_char.lo $(LOCAL_LIBS)
--tools/gen_test_char.lo: tools/gen_test_char.c
-+tools/gen_test_char@EXEEXT@: tools/gen_test_char.c
- $(APR_MKDIR) tools
-- $(LT_COMPILE)
--
--tools/gen_test_char@EXEEXT@: $(OBJECTS_gen_test_char)
-- $(LINK_PROG) $(OBJECTS_gen_test_char) $(ALL_LIBS)
-+ $(CC_FOR_BUILD) $(CFLAGS_FOR_BUILD) $< -o $@
-
- include/private/apr_escape_test_char.h: tools/gen_test_char@EXEEXT@
- $(APR_MKDIR) include/private
-diff --git a/configure.in b/configure.in
-index 719f331..361120f 100644
---- a/configure.in
-+++ b/configure.in
-@@ -183,6 +183,9 @@ dnl can only be used once within a configure script, so this prevents a
- dnl preload section from invoking the macro to get compiler info.
- AC_PROG_CC
-
-+dnl Check build CC for gen_test_char compiling which is executed at build time.
-+AX_PROG_CC_FOR_BUILD
-+
- dnl AC_PROG_SED is only avaliable in recent autoconf versions.
- dnl Use AC_CHECK_PROG instead if AC_PROG_SED is not present.
- ifdef([AC_PROG_SED],
---
-1.8.3.1
-
diff --git a/poky/meta/recipes-support/apr/apr/0006-apr-fix-off_t-size-doesn-t-match-in-glibc-when-cross.patch b/poky/meta/recipes-support/apr/apr/0006-apr-fix-off_t-size-doesn-t-match-in-glibc-when-cross.patch
deleted file mode 100644
index d1a2ebe881..0000000000
--- a/poky/meta/recipes-support/apr/apr/0006-apr-fix-off_t-size-doesn-t-match-in-glibc-when-cross.patch
+++ /dev/null
@@ -1,76 +0,0 @@
-From 49661ea3858cf8494926cccf57d3e8c6dcb47117 Mon Sep 17 00:00:00 2001
-From: Dengke Du <dengke.du@windriver.com>
-Date: Wed, 14 Dec 2016 18:13:08 +0800
-Subject: [PATCH] apr: fix off_t size doesn't match in glibc when cross
- compiling
-
-In configure.in, it contains the following:
-
- APR_CHECK_SIZEOF_EXTENDED([#include <sys/types.h>], off_t, 8)
-
-the macro "APR_CHECK_SIZEOF_EXTENDED" was defined in build/apr_common.m4,
-it use the "AC_TRY_RUN" macro, this macro let the off_t to 8, when cross
-compiling enable.
-
-So it was hardcoded for cross compiling, we should detect it dynamic based on
-the sysroot's glibc. We change it to the following:
-
- AC_CHECK_SIZEOF(off_t)
-
-The same for the following hardcoded types for cross compiling:
-
- pid_t 8
- ssize_t 8
- size_t 8
- off_t 8
-
-Change the above correspondingly.
-
-Signed-off-by: Dengke Du <dengke.du@windriver.com>
-
-Upstream-Status: Pending
-
----
- configure.in | 8 ++++----
- 1 file changed, 4 insertions(+), 4 deletions(-)
-
-diff --git a/configure.in b/configure.in
-index 27b8539..fb408d1 100644
---- a/configure.in
-+++ b/configure.in
-@@ -1801,7 +1801,7 @@ else
- socklen_t_value="int"
- fi
-
--APR_CHECK_SIZEOF_EXTENDED([#include <sys/types.h>], pid_t, 8)
-+AC_CHECK_SIZEOF(pid_t)
-
- if test "$ac_cv_sizeof_pid_t" = "$ac_cv_sizeof_short"; then
- pid_t_fmt='#define APR_PID_T_FMT "hd"'
-@@ -1873,7 +1873,7 @@ APR_CHECK_TYPES_FMT_COMPATIBLE(size_t, unsigned long, lu, [size_t_fmt="lu"], [
- APR_CHECK_TYPES_FMT_COMPATIBLE(size_t, unsigned int, u, [size_t_fmt="u"])
- ])
-
--APR_CHECK_SIZEOF_EXTENDED([#include <sys/types.h>], ssize_t, 8)
-+AC_CHECK_SIZEOF(ssize_t)
-
- dnl the else cases below should no longer occur;
- AC_MSG_CHECKING([which format to use for apr_ssize_t])
-@@ -1891,7 +1891,7 @@ fi
-
- ssize_t_fmt="#define APR_SSIZE_T_FMT \"$ssize_t_fmt\""
-
--APR_CHECK_SIZEOF_EXTENDED([#include <stddef.h>], size_t, 8)
-+AC_CHECK_SIZEOF(size_t)
-
- # else cases below should no longer occur;
- AC_MSG_CHECKING([which format to use for apr_size_t])
-@@ -1909,7 +1909,7 @@ fi
-
- size_t_fmt="#define APR_SIZE_T_FMT \"$size_t_fmt\""
-
--APR_CHECK_SIZEOF_EXTENDED([#include <sys/types.h>], off_t, 8)
-+AC_CHECK_SIZEOF(off_t)
-
- if test "${ac_cv_sizeof_off_t}${apr_cv_use_lfs64}" = "4yes"; then
- # Enable LFS
diff --git a/poky/meta/recipes-support/apr/apr/CVE-2021-35940.patch b/poky/meta/recipes-support/apr/apr/CVE-2021-35940.patch
deleted file mode 100644
index 00befdacee..0000000000
--- a/poky/meta/recipes-support/apr/apr/CVE-2021-35940.patch
+++ /dev/null
@@ -1,58 +0,0 @@
-
-SECURITY: CVE-2021-35940 (cve.mitre.org)
-
-Restore fix for CVE-2017-12613 which was missing in 1.7.x branch, though
-was addressed in 1.6.x in 1.6.3 and later via r1807976.
-
-The fix was merged back to 1.7.x in r1891198.
-
-Since this was a regression in 1.7.0, a new CVE name has been assigned
-to track this, CVE-2021-35940.
-
-Thanks to Iveta Cesalova <icesalov redhat.com> for reporting this issue.
-
-https://svn.apache.org/viewvc?view=revision&revision=1891198
-
-Upstream-Status: Backport
-CVE: CVE-2021-35940
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
-
-Index: time/unix/time.c
-===================================================================
---- a/time/unix/time.c (revision 1891197)
-+++ b/time/unix/time.c (revision 1891198)
-@@ -142,6 +142,9 @@
- static const int dayoffset[12] =
- {306, 337, 0, 31, 61, 92, 122, 153, 184, 214, 245, 275};
-
-+ if (xt->tm_mon < 0 || xt->tm_mon >= 12)
-+ return APR_EBADDATE;
-+
- /* shift new year to 1st March in order to make leap year calc easy */
-
- if (xt->tm_mon < 2)
-Index: time/win32/time.c
-===================================================================
---- a/time/win32/time.c (revision 1891197)
-+++ b/time/win32/time.c (revision 1891198)
-@@ -54,6 +54,9 @@
- static const int dayoffset[12] =
- {0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334};
-
-+ if (tm->wMonth < 1 || tm->wMonth > 12)
-+ return APR_EBADDATE;
-+
- /* Note; the caller is responsible for filling in detailed tm_usec,
- * tm_gmtoff and tm_isdst data when applicable.
- */
-@@ -228,6 +231,9 @@
- static const int dayoffset[12] =
- {306, 337, 0, 31, 61, 92, 122, 153, 184, 214, 245, 275};
-
-+ if (xt->tm_mon < 0 || xt->tm_mon >= 12)
-+ return APR_EBADDATE;
-+
- /* shift new year to 1st March in order to make leap year calc easy */
-
- if (xt->tm_mon < 2)
diff --git a/poky/meta/recipes-support/apr/apr/libtoolize_check.patch b/poky/meta/recipes-support/apr/apr/libtoolize_check.patch
index 740792e6b0..80ce43caa4 100644
--- a/poky/meta/recipes-support/apr/apr/libtoolize_check.patch
+++ b/poky/meta/recipes-support/apr/apr/libtoolize_check.patch
@@ -1,6 +1,7 @@
+From 17835709bc55657b7af1f7c99b3f572b819cf97e Mon Sep 17 00:00:00 2001
From: Helmut Grohne <helmut@subdivi.de>
-Subject: check for libtoolize rather than libtool
-Last-Update: 2014-09-19
+Date: Tue, 7 Feb 2023 07:04:00 +0000
+Subject: [PATCH] check for libtoolize rather than libtool
libtool is now in package libtool-bin, but apr only needs libtoolize.
@@ -8,14 +9,22 @@ Upstream-Status: Pending [ from debian: https://sources.debian.org/data/main/a/a
Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
---- apr.orig/build/buildcheck.sh
-+++ apr/build/buildcheck.sh
-@@ -39,11 +39,11 @@ fi
+---
+ build/buildcheck.sh | 10 ++++------
+ 1 file changed, 4 insertions(+), 6 deletions(-)
+
+diff --git a/build/buildcheck.sh b/build/buildcheck.sh
+index 44921b5..08bc8a8 100755
+--- a/build/buildcheck.sh
++++ b/build/buildcheck.sh
+@@ -39,13 +39,11 @@ fi
# ltmain.sh (GNU libtool 1.1361 2004/01/02 23:10:52) 1.5a
# output is multiline from 1.5 onwards
-# Require libtool 1.4 or newer
--libtool=`build/PrintPath glibtool1 glibtool libtool libtool15 libtool14`
+-if test -z "$libtool"; then
+- libtool=`build/PrintPath glibtool1 glibtool libtool libtool15 libtool14`
+-fi
-lt_pversion=`$libtool --version 2>/dev/null|sed -e 's/([^)]*)//g;s/^[^0-9]*//;s/[- ].*//g;q'`
+# Require libtoolize 1.4 or newer
+libtoolize=`build/PrintPath glibtoolize1 glibtoolize libtoolize libtoolize15 libtoolize14`
diff --git a/poky/meta/recipes-support/apr/apr_1.7.0.bb b/poky/meta/recipes-support/apr/apr_1.7.2.bb
index 92cc61a864..807dce21da 100644
--- a/poky/meta/recipes-support/apr/apr_1.7.0.bb
+++ b/poky/meta/recipes-support/apr/apr_1.7.2.bb
@@ -16,18 +16,15 @@ BBCLASSEXTEND = "native nativesdk"
SRC_URI = "${APACHE_MIRROR}/apr/${BPN}-${PV}.tar.bz2 \
file://run-ptest \
file://0002-apr-Remove-workdir-path-references-from-installed-ap.patch \
- file://0003-Makefile.in-configure.in-support-cross-compiling.patch \
file://0004-Fix-packet-discards-HTTP-redirect.patch \
file://0005-configure.in-fix-LTFLAGS-to-make-it-work-with-ccache.patch \
- file://0006-apr-fix-off_t-size-doesn-t-match-in-glibc-when-cross.patch \
file://0007-explicitly-link-libapr-against-phtread-to-make-gold-.patch \
file://libtoolize_check.patch \
file://0001-Add-option-to-disable-timed-dependant-tests.patch \
- file://CVE-2021-35940.patch \
+ file://0001-configure-Remove-runtime-test-for-mmap-that-can-map-.patch \
"
-SRC_URI[md5sum] = "7a14a83d664e87599ea25ff4432e48a7"
-SRC_URI[sha256sum] = "e2e148f0b2e99b8e5c6caa09f6d4fb4dd3e83f744aa72a952f94f5a14436f7ea"
+SRC_URI[sha256sum] = "75e77cc86776c030c0a5c408dfbd0bf2a0b75eed5351e52d5439fa1e5509a43e"
inherit autotools-brokensep lib_package binconfig multilib_header ptest multilib_script
@@ -35,17 +32,30 @@ OE_BINCONFIG_EXTRA_MANGLE = " -e 's:location=source:location=installed:'"
# Added to fix some issues with cmake. Refer to https://github.com/bmwcarit/meta-ros/issues/68#issuecomment-19896928
CACHED_CONFIGUREVARS += "apr_cv_mutex_recursive=yes"
-
+# Enable largefile
+CACHED_CONFIGUREVARS += "apr_cv_use_lfs64=yes"
+# Additional AC_TRY_RUN tests which will need to be cached for cross compile
+CACHED_CONFIGUREVARS += "apr_cv_epoll=yes epoll_create1=yes apr_cv_sock_cloexec=yes \
+ ac_cv_struct_rlimit=yes \
+ ac_cv_func_sem_open=yes \
+ apr_cv_process_shared_works=yes \
+ apr_cv_mutex_robust_shared=yes \
+ "
# Also suppress trying to use sctp.
#
CACHED_CONFIGUREVARS += "ac_cv_header_netinet_sctp_h=no ac_cv_header_netinet_sctp_uio_h=no"
-CACHED_CONFIGUREVARS += "ac_cv_sizeof_struct_iovec=yes"
+# ac_cv_sizeof_struct_iovec is deduced using runtime check which will fail during cross-compile
+CACHED_CONFIGUREVARS += "${@['ac_cv_sizeof_struct_iovec=16','ac_cv_sizeof_struct_iovec=8'][d.getVar('SITEINFO_BITS') != '32']}"
+
CACHED_CONFIGUREVARS += "ac_cv_file__dev_zero=yes"
+CACHED_CONFIGUREVARS:append:libc-musl = " ac_cv_strerror_r_rc_int=yes"
PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'ipv6', d)}"
+PACKAGECONFIG:append:libc-musl = " xsi-strerror"
PACKAGECONFIG[ipv6] = "--enable-ipv6,--disable-ipv6,"
PACKAGECONFIG[timed-tests] = "--enable-timed-tests,--disable-timed-tests,"
+PACKAGECONFIG[xsi-strerror] = "ac_cv_strerror_r_rc_int=yes,ac_cv_strerror_r_rc_int=no,"
do_configure_prepend() {
# Avoid absolute paths for grep since it causes failures
diff --git a/poky/meta/recipes-support/bmap-tools/bmap-tools_3.5.bb b/poky/meta/recipes-support/bmap-tools/bmap-tools_3.5.bb
index 97b88ec033..6a93cacc18 100644
--- a/poky/meta/recipes-support/bmap-tools/bmap-tools_3.5.bb
+++ b/poky/meta/recipes-support/bmap-tools/bmap-tools_3.5.bb
@@ -9,7 +9,7 @@ SECTION = "console/utils"
LICENSE = "GPLv2"
LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263"
-SRC_URI = "git://github.com/intel/${BPN};branch=master;protocol=https"
+SRC_URI = "git://github.com/intel/${BPN};branch=main;protocol=https"
SRCREV = "db7087b883bf52cbff063ad17a41cc1cbb85104d"
S = "${WORKDIR}/git"
diff --git a/poky/meta/recipes-support/curl/curl/CVE-2022-32221.patch b/poky/meta/recipes-support/curl/curl/CVE-2022-32221.patch
new file mode 100644
index 0000000000..8e662abd3a
--- /dev/null
+++ b/poky/meta/recipes-support/curl/curl/CVE-2022-32221.patch
@@ -0,0 +1,29 @@
+From 75c04a3e75e8e3025a17ca3033ca307da9691cd0 Mon Sep 17 00:00:00 2001
+From: Vivek Kumbhar <vkumbhar@mvista.com>
+Date: Fri, 11 Nov 2022 10:49:58 +0530
+Subject: [PATCH] CVE-2022-32221
+
+Upstream-Status: Backport [https://github.com/curl/curl/commit/a64e3e59938abd7d6]
+CVE: CVE-2022-32221
+Signed-off-by: Vivek Kumbhar <vkumbhar@mvista.com>
+
+setopt: when POST is set, reset the 'upload' field.
+---
+ lib/setopt.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/lib/setopt.c b/lib/setopt.c
+index bebb2e4..4d96f6b 100644
+--- a/lib/setopt.c
++++ b/lib/setopt.c
+@@ -486,6 +486,7 @@ CURLcode Curl_vsetopt(struct Curl_easy *data, CURLoption option, va_list param)
+ }
+ else
+ data->set.httpreq = HTTPREQ_GET;
++ data->set.upload = FALSE;
+ break;
+
+ case CURLOPT_COPYPOSTFIELDS:
+--
+2.25.1
+
diff --git a/poky/meta/recipes-support/curl/curl/CVE-2022-35260.patch b/poky/meta/recipes-support/curl/curl/CVE-2022-35260.patch
new file mode 100644
index 0000000000..476c996b0a
--- /dev/null
+++ b/poky/meta/recipes-support/curl/curl/CVE-2022-35260.patch
@@ -0,0 +1,68 @@
+From 3ff3989ec53d9ddcf4bdd99f5d5788dd87486768 Mon Sep 17 00:00:00 2001
+From: Daniel Stenberg <daniel@haxx.se>
+Date: Tue, 4 Oct 2022 14:37:24 +0200
+Subject: [PATCH] netrc: replace fgets with Curl_get_line
+
+Upstream-Status: Backport
+CVE: CVE-2022-35260
+Reference to upstream patch: https://github.com/curl/curl/commit/c97ec984fb2bc919a3aa863e0476dffa377b184c
+
+Make the parser only accept complete lines and avoid problems with
+overly long lines.
+
+Reported-by: Hiroki Kurosawa
+
+Closes #9789
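Conceptually, the replacement reader only hands a line back when the terminating newline actually fit inside the buffer; an over-long line is rejected rather than silently split across iterations. A hedged sketch of that idea (not curl's actual Curl_get_line implementation):

    #include <stdio.h>
    #include <string.h>

    /* Return buf only if a complete, newline-terminated line fit in len bytes. */
    static char *get_whole_line(char *buf, int len, FILE *input)
    {
        if (!fgets(buf, len, input))
            return NULL;                 /* EOF or read error */
        if (!strchr(buf, '\n'))
            return NULL;                 /* line longer than the buffer: reject */
        return buf;
    }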
+---
+ lib/curl_get_line.c | 4 ++--
+ lib/netrc.c | 5 +++--
+ 2 files changed, 5 insertions(+), 4 deletions(-)
+
+diff --git a/lib/curl_get_line.c b/lib/curl_get_line.c
+index c4194851ae09..4b9eea9e631c 100644
+--- a/lib/curl_get_line.c
++++ b/lib/curl_get_line.c
+@@ -28,8 +28,8 @@
+ #include "memdebug.h"
+
+ /*
+- * get_line() makes sure to only return complete whole lines that fit in 'len'
+- * bytes and end with a newline.
++ * Curl_get_line() makes sure to only return complete whole lines that fit in
++ * 'len' bytes and end with a newline.
+ */
+ char *Curl_get_line(char *buf, int len, FILE *input)
+ {
+diff --git a/lib/netrc.c b/lib/netrc.c
+index 1c9da31993c9..93239132c9d8 100644
+--- a/lib/netrc.c
++++ b/lib/netrc.c
+@@ -31,6 +31,7 @@
+ #include "netrc.h"
+ #include "strtok.h"
+ #include "strcase.h"
++#include "curl_get_line.h"
+
+ /* The last 3 #include files should be in this order */
+ #include "curl_printf.h"
+@@ -83,7 +84,7 @@ static int parsenetrc(const char *host,
+ char netrcbuffer[4096];
+ int netrcbuffsize = (int)sizeof(netrcbuffer);
+
+- while(!done && fgets(netrcbuffer, netrcbuffsize, file)) {
++ while(!done && Curl_get_line(netrcbuffer, netrcbuffsize, file)) {
+ tok = strtok_r(netrcbuffer, " \t\n", &tok_buf);
+ if(tok && *tok == '#')
+ /* treat an initial hash as a comment line */
+@@ -169,7 +170,7 @@ static int parsenetrc(const char *host,
+
+ tok = strtok_r(NULL, " \t\n", &tok_buf);
+ } /* while(tok) */
+- } /* while fgets() */
++ } /* while Curl_get_line() */
+
+ out:
+ if(!retcode) {
+--
+2.34.1
+
diff --git a/poky/meta/recipes-support/curl/curl/CVE-2022-43552.patch b/poky/meta/recipes-support/curl/curl/CVE-2022-43552.patch
new file mode 100644
index 0000000000..d729441454
--- /dev/null
+++ b/poky/meta/recipes-support/curl/curl/CVE-2022-43552.patch
@@ -0,0 +1,82 @@
+From 4f20188ac644afe174be6005ef4f6ffba232b8b2 Mon Sep 17 00:00:00 2001
+From: Daniel Stenberg <daniel@haxx.se>
+Date: Mon, 19 Dec 2022 08:38:37 +0100
+Subject: [PATCH] smb/telnet: do not free the protocol struct in *_done()
+
+It is managed by the generic layer.
+
+Reported-by: Trail of Bits
+
+Closes #10112
+
+CVE: CVE-2022-43552
+Upstream-Status: Backport [https://github.com/curl/curl/commit/4f20188ac644afe174be6005ef4f6ffba232b8b2]
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ lib/smb.c | 14 ++------------
+ lib/telnet.c | 3 ---
+ 2 files changed, 2 insertions(+), 15 deletions(-)
+
+diff --git a/lib/smb.c b/lib/smb.c
+index 12f9925..8db3b27 100644
+--- a/lib/smb.c
++++ b/lib/smb.c
+@@ -61,8 +61,6 @@ static CURLcode smb_connect(struct connectdata *conn, bool *done);
+ static CURLcode smb_connection_state(struct connectdata *conn, bool *done);
+ static CURLcode smb_do(struct connectdata *conn, bool *done);
+ static CURLcode smb_request_state(struct connectdata *conn, bool *done);
+-static CURLcode smb_done(struct connectdata *conn, CURLcode status,
+- bool premature);
+ static CURLcode smb_disconnect(struct connectdata *conn, bool dead);
+ static int smb_getsock(struct connectdata *conn, curl_socket_t *socks);
+ static CURLcode smb_parse_url_path(struct connectdata *conn);
+@@ -74,7 +72,7 @@ const struct Curl_handler Curl_handler_smb = {
+ "SMB", /* scheme */
+ smb_setup_connection, /* setup_connection */
+ smb_do, /* do_it */
+- smb_done, /* done */
++ ZERO_NULL, /* done */
+ ZERO_NULL, /* do_more */
+ smb_connect, /* connect_it */
+ smb_connection_state, /* connecting */
+@@ -99,7 +97,7 @@ const struct Curl_handler Curl_handler_smbs = {
+ "SMBS", /* scheme */
+ smb_setup_connection, /* setup_connection */
+ smb_do, /* do_it */
+- smb_done, /* done */
++ ZERO_NULL, /* done */
+ ZERO_NULL, /* do_more */
+ smb_connect, /* connect_it */
+ smb_connection_state, /* connecting */
+@@ -919,14 +917,6 @@ static CURLcode smb_request_state(struct connectdata *conn, bool *done)
+ return CURLE_OK;
+ }
+
+-static CURLcode smb_done(struct connectdata *conn, CURLcode status,
+- bool premature)
+-{
+- (void) premature;
+- Curl_safefree(conn->data->req.protop);
+- return status;
+-}
+-
+ static CURLcode smb_disconnect(struct connectdata *conn, bool dead)
+ {
+ struct smb_conn *smbc = &conn->proto.smbc;
+diff --git a/lib/telnet.c b/lib/telnet.c
+index 3347ad6..e3b9208 100644
+--- a/lib/telnet.c
++++ b/lib/telnet.c
+@@ -1294,9 +1294,6 @@ static CURLcode telnet_done(struct connectdata *conn,
+
+ curl_slist_free_all(tn->telnet_vars);
+ tn->telnet_vars = NULL;
+-
+- Curl_safefree(conn->data->req.protop);
+-
+ return CURLE_OK;
+ }
+
+--
+2.25.1
+
diff --git a/poky/meta/recipes-support/curl/curl/CVE-2023-23916.patch b/poky/meta/recipes-support/curl/curl/CVE-2023-23916.patch
new file mode 100644
index 0000000000..054615963e
--- /dev/null
+++ b/poky/meta/recipes-support/curl/curl/CVE-2023-23916.patch
@@ -0,0 +1,231 @@
+From 119fb187192a9ea13dc90d9d20c215fc82799ab9 Mon Sep 17 00:00:00 2001
+From: Patrick Monnerat <patrick@monnerat.net>
+Date: Mon, 13 Feb 2023 08:33:09 +0100
+Subject: [PATCH] content_encoding: do not reset stage counter for each header
+
+Test 418 verifies
+
+Closes #10492
+
+Upstream-Status: Backport [https://github.com/curl/curl/commit/119fb187192a9ea13dc90d9d20c215fc82799ab9]
+CVE: CVE-2023-23916
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ lib/content_encoding.c | 7 +-
+ lib/urldata.h | 1 +
+ tests/data/Makefile.inc | 2 +-
+ tests/data/test418 | 152 ++++++++++++++++++++++++++++++++++++++++
+ 4 files changed, 157 insertions(+), 5 deletions(-)
+ create mode 100644 tests/data/test418
+
+diff --git a/lib/content_encoding.c b/lib/content_encoding.c
+index 91e621f..7e098a5 100644
+--- a/lib/content_encoding.c
++++ b/lib/content_encoding.c
+@@ -944,7 +944,6 @@ CURLcode Curl_build_unencoding_stack(struct connectdata *conn,
+ {
+ struct Curl_easy *data = conn->data;
+ struct SingleRequest *k = &data->req;
+- int counter = 0;
+
+ do {
+ const char *name;
+@@ -979,9 +978,9 @@ CURLcode Curl_build_unencoding_stack(struct connectdata *conn,
+ if(!encoding)
+ encoding = &error_encoding; /* Defer error at stack use. */
+
+- if(++counter >= MAX_ENCODE_STACK) {
+- failf(data, "Reject response due to %u content encodings",
+- counter);
++ if(k->writer_stack_depth++ >= MAX_ENCODE_STACK) {
++ failf(data, "Reject response due to more than %u content encodings",
++ MAX_ENCODE_STACK);
+ return CURLE_BAD_CONTENT_ENCODING;
+ }
+ /* Stack the unencoding stage. */
+diff --git a/lib/urldata.h b/lib/urldata.h
+index ad0ef8f..168f874 100644
+--- a/lib/urldata.h
++++ b/lib/urldata.h
+@@ -648,6 +648,7 @@ struct SingleRequest {
+ #ifndef CURL_DISABLE_DOH
+ struct dohdata doh; /* DoH specific data for this request */
+ #endif
++ unsigned char writer_stack_depth; /* Unencoding stack depth. */
+ BIT(header); /* incoming data has HTTP header */
+ BIT(content_range); /* set TRUE if Content-Range: was found */
+ BIT(upload_done); /* set to TRUE when doing chunked transfer-encoding
+diff --git a/tests/data/Makefile.inc b/tests/data/Makefile.inc
+index 60e8176..40de8bc 100644
+--- a/tests/data/Makefile.inc
++++ b/tests/data/Makefile.inc
+@@ -63,7 +63,7 @@ test350 test351 test352 test353 test354 test355 test356 test357 \
+ test393 test394 test395 \
+ \
+ test400 test401 test402 test403 test404 test405 test406 test407 test408 \
+-test409 \
++test409 test418 \
+ \
+ test490 test491 test492 \
+ \
+diff --git a/tests/data/test418 b/tests/data/test418
+new file mode 100644
+index 0000000..50e974e
+--- /dev/null
++++ b/tests/data/test418
+@@ -0,0 +1,152 @@
++<testcase>
++<info>
++<keywords>
++HTTP
++gzip
++</keywords>
++</info>
++
++#
++# Server-side
++<reply>
++<data nocheck="yes">
++HTTP/1.1 200 OK
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++Transfer-Encoding: gzip
++
++-foo-
++</data>
++</reply>
++
++#
++# Client-side
++<client>
++<server>
++http
++</server>
++ <name>
++Response with multiple Transfer-Encoding headers
++ </name>
++ <command>
++http://%HOSTIP:%HTTPPORT/%TESTNUMBER -sS
++</command>
++</client>
++
++#
++# Verify data after the test has been "shot"
++<verify>
++<protocol crlf="yes">
++GET /%TESTNUMBER HTTP/1.1
++Host: %HOSTIP:%HTTPPORT
++User-Agent: curl/%VERSION
++Accept: */*
++
++</protocol>
++
++# CURLE_BAD_CONTENT_ENCODING is 61
++<errorcode>
++61
++</errorcode>
++<stderr mode="text">
++curl: (61) Reject response due to more than 5 content encodings
++</stderr>
++</verify>
++</testcase>
+--
+2.25.1
+
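Editorial note on the fix above: the counter that caps stacked decoders is moved from a local variable in the header parser into per-request state, so a response that repeats the encoding header cannot reset the limit on every line. The following is a minimal, self-contained sketch of that idea; MAX_ENCODE_STACK, the struct and the function names here are illustrative, not curl's internal API.

    /* Sketch: the depth counter must live in per-request state, not as a
     * local in the header parser, or each new "Transfer-Encoding:" header
     * resets it and the cap is never reached. */
    #include <stdio.h>

    #define MAX_ENCODE_STACK 5

    struct request_state {
        unsigned char writer_stack_depth;   /* survives across header lines */
    };

    /* Returns 0 on success, -1 once too many encodings were stacked. */
    static int push_encodings(struct request_state *req, const char *header_value)
    {
        const char *p = header_value;
        while (*p) {
            while (*p == ' ' || *p == ',')
                p++;
            if (!*p)
                break;
            if (req->writer_stack_depth++ >= MAX_ENCODE_STACK) {
                fprintf(stderr, "reject: more than %d content encodings\n",
                        MAX_ENCODE_STACK);
                return -1;
            }
            /* a real client would instantiate a decoder stage here */
            while (*p && *p != ',')
                p++;
        }
        return 0;
    }

    int main(void)
    {
        struct request_state req = { 0 };
        /* simulate a response carrying many identical encoding headers */
        for (int i = 0; i < 100; i++)
            if (push_encodings(&req, "gzip") != 0)
                return 1;
        return 0;
    }

With the counter in request state, the sixth header line is rejected no matter how the encodings are split across headers, which is exactly what test 418 above exercises.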
diff --git a/poky/meta/recipes-support/curl/curl/CVE-2023-27533.patch b/poky/meta/recipes-support/curl/curl/CVE-2023-27533.patch
new file mode 100644
index 0000000000..64ba135056
--- /dev/null
+++ b/poky/meta/recipes-support/curl/curl/CVE-2023-27533.patch
@@ -0,0 +1,59 @@
+Backport of:
+
+From 538b1e79a6e7b0bb829ab4cecc828d32105d0684 Mon Sep 17 00:00:00 2001
+From: Daniel Stenberg <daniel@haxx.se>
+Date: Mon, 6 Mar 2023 12:07:33 +0100
+Subject: [PATCH] telnet: only accept option arguments in ascii
+
+To avoid embedded telnet negotiation commands etc.
+
+Reported-by: Harry Sintonen
+Closes #10728
+
+Upstream-Status: Backport [import from ubuntu https://git.launchpad.net/ubuntu/+source/curl/tree/debian/patches/CVE-2023-27533.patch?h=ubuntu/focal-security
+Upstream commit https://github.com/curl/curl/commit/538b1e79a6e7b0bb829ab4cecc828d32105d0684]
+CVE: CVE-2023-27533
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ lib/telnet.c | 15 +++++++++++++++
+ 1 file changed, 15 insertions(+)
+
+--- a/lib/telnet.c
++++ b/lib/telnet.c
+@@ -815,6 +815,17 @@ static void printsub(struct Curl_easy *d
+ }
+ }
+
++static bool str_is_nonascii(const char *str)
++{
++ size_t len = strlen(str);
++ while(len--) {
++ if(*str & 0x80)
++ return TRUE;
++ str++;
++ }
++ return FALSE;
++}
++
+ static CURLcode check_telnet_options(struct connectdata *conn)
+ {
+ struct curl_slist *head;
+@@ -829,6 +840,8 @@ static CURLcode check_telnet_options(str
+ /* Add the user name as an environment variable if it
+ was given on the command line */
+ if(conn->bits.user_passwd) {
++ if(str_is_nonascii(data->conn->user))
++ return CURLE_BAD_FUNCTION_ARGUMENT;
+ msnprintf(option_arg, sizeof(option_arg), "USER,%s", conn->user);
+ beg = curl_slist_append(tn->telnet_vars, option_arg);
+ if(!beg) {
+@@ -844,6 +857,9 @@ static CURLcode check_telnet_options(str
+ if(sscanf(head->data, "%127[^= ]%*[ =]%255s",
+ option_keyword, option_arg) == 2) {
+
++ if(str_is_nonascii(option_arg))
++ continue;
++
+ /* Terminal type */
+ if(strcasecompare(option_keyword, "TTYPE")) {
+ strncpy(tn->subopt_ttype, option_arg, 31);
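Editorial note on the check above: telnet option values supplied by the user are rejected (or skipped) if any byte has the high bit set, because such bytes could smuggle telnet IAC control sequences into the negotiation stream. A minimal sketch of the same check in isolation follows; the function name and sample values are illustrative only.

    /* Sketch: any byte with the high bit set (for example the telnet IAC
     * byte 0xFF) in a user-supplied option value is grounds for rejecting
     * it before it reaches the subnegotiation encoder. */
    #include <stdbool.h>
    #include <stdio.h>

    static bool value_is_nonascii(const char *s)
    {
        for (; *s; s++)
            if ((unsigned char)*s & 0x80)   /* includes IAC = 0xFF */
                return true;
        return false;
    }

    int main(void)
    {
        const char *ok  = "vt100";
        const char *bad = "vt100\xff\xfa";  /* embedded IAC/SB bytes */

        printf("plain value    -> %s\n", value_is_nonascii(ok)  ? "reject" : "accept");
        printf("value with IAC -> %s\n", value_is_nonascii(bad) ? "reject" : "accept");
        return 0;
    }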
diff --git a/poky/meta/recipes-support/curl/curl/CVE-2023-27534.patch b/poky/meta/recipes-support/curl/curl/CVE-2023-27534.patch
new file mode 100644
index 0000000000..aeeffd5fea
--- /dev/null
+++ b/poky/meta/recipes-support/curl/curl/CVE-2023-27534.patch
@@ -0,0 +1,123 @@
+From 4e2b52b5f7a3bf50a0f1494155717b02cc1df6d6 Mon Sep 17 00:00:00 2001
+From: Daniel Stenberg <daniel@haxx.se>
+Date: Thu, 9 Mar 2023 16:22:11 +0100
+Subject: [PATCH] curl_path: create the new path with dynbuf
+
+CVE: CVE-2023-27534
+Upstream-Status: Backport [https://github.com/curl/curl/commit/4e2b52b5f7a3bf50a0f1494155717b02cc1df6d6]
+
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ lib/curl_path.c | 71 ++++++++++++++++++++++++-------------------------
+ 1 file changed, 35 insertions(+), 36 deletions(-)
+
+diff --git a/lib/curl_path.c b/lib/curl_path.c
+index f429634..e17db4b 100644
+--- a/lib/curl_path.c
++++ b/lib/curl_path.c
+@@ -30,6 +30,8 @@
+ #include "escape.h"
+ #include "memdebug.h"
+
++#define MAX_SSHPATH_LEN 100000 /* arbitrary */
++
+ /* figure out the path to work with in this particular request */
+ CURLcode Curl_getworkingpath(struct connectdata *conn,
+ char *homedir, /* when SFTP is used */
+@@ -37,60 +39,57 @@ CURLcode Curl_getworkingpath(struct connectdata *conn,
+ real path to work with */
+ {
+ struct Curl_easy *data = conn->data;
+- char *real_path = NULL;
+ char *working_path;
+ size_t working_path_len;
++ struct dynbuf npath;
+ CURLcode result =
+ Curl_urldecode(data, data->state.up.path, 0, &working_path,
+ &working_path_len, FALSE);
+ if(result)
+ return result;
+
++ /* new path to switch to in case we need to */
++ Curl_dyn_init(&npath, MAX_SSHPATH_LEN);
++
+ /* Check for /~/, indicating relative to the user's home directory */
+- if(conn->handler->protocol & CURLPROTO_SCP) {
+- real_path = malloc(working_path_len + 1);
+- if(real_path == NULL) {
++ if((data->conn->handler->protocol & CURLPROTO_SCP) &&
++ (working_path_len > 3) && (!memcmp(working_path, "/~/", 3))) {
++ /* It is referenced to the home directory, so strip the leading '/~/' */
++ if(Curl_dyn_addn(&npath, &working_path[3], working_path_len - 3)) {
+ free(working_path);
+ return CURLE_OUT_OF_MEMORY;
+ }
+- if((working_path_len > 3) && (!memcmp(working_path, "/~/", 3)))
+- /* It is referenced to the home directory, so strip the leading '/~/' */
+- memcpy(real_path, working_path + 3, working_path_len - 2);
+- else
+- memcpy(real_path, working_path, 1 + working_path_len);
+ }
+- else if(conn->handler->protocol & CURLPROTO_SFTP) {
+- if((working_path_len > 1) && (working_path[1] == '~')) {
+- size_t homelen = strlen(homedir);
+- real_path = malloc(homelen + working_path_len + 1);
+- if(real_path == NULL) {
+- free(working_path);
+- return CURLE_OUT_OF_MEMORY;
+- }
+- /* It is referenced to the home directory, so strip the
+- leading '/' */
+- memcpy(real_path, homedir, homelen);
+- real_path[homelen] = '/';
+- real_path[homelen + 1] = '\0';
+- if(working_path_len > 3) {
+- memcpy(real_path + homelen + 1, working_path + 3,
+- 1 + working_path_len -3);
+- }
++ else if((data->conn->handler->protocol & CURLPROTO_SFTP) &&
++ (working_path_len > 2) && !memcmp(working_path, "/~/", 3)) {
++ size_t len;
++ const char *p;
++ int copyfrom = 3;
++ if(Curl_dyn_add(&npath, homedir)) {
++ free(working_path);
++ return CURLE_OUT_OF_MEMORY;
+ }
+- else {
+- real_path = malloc(working_path_len + 1);
+- if(real_path == NULL) {
+- free(working_path);
+- return CURLE_OUT_OF_MEMORY;
+- }
+- memcpy(real_path, working_path, 1 + working_path_len);
++ /* Copy a separating '/' if homedir does not end with one */
++ len = Curl_dyn_len(&npath);
++ p = Curl_dyn_ptr(&npath);
++ if(len && (p[len-1] != '/'))
++ copyfrom = 2;
++
++ if(Curl_dyn_addn(&npath,
++ &working_path[copyfrom], working_path_len - copyfrom)) {
++ free(working_path);
++ return CURLE_OUT_OF_MEMORY;
+ }
+ }
+
+- free(working_path);
++ if(Curl_dyn_len(&npath)) {
++ free(working_path);
+
+- /* store the pointer for the caller to receive */
+- *path = real_path;
++ /* store the pointer for the caller to receive */
++ *path = Curl_dyn_ptr(&npath);
++ }
++ else
++ *path = working_path;
+
+ return CURLE_OK;
+ }
+--
+2.25.1
+
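Editorial note on the rewrite above: the hand-rolled malloc()/memcpy() length arithmetic for the SCP/SFTP path is replaced by a growable, size-capped buffer, so building the path becomes a series of checked appends. The toy buffer below sketches that pattern under the same 100000-byte cap; it is not curl's dynbuf API, and all names are illustrative.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define PATH_CAP 100000   /* arbitrary upper bound, as in the patch */

    struct buf { char *p; size_t len; };

    /* Append n bytes, growing the buffer, refusing oversized results. */
    static int buf_addn(struct buf *b, const char *s, size_t n)
    {
        if (b->len + n + 1 > PATH_CAP)
            return -1;
        char *np = realloc(b->p, b->len + n + 1);
        if (!np)
            return -1;
        memcpy(np + b->len, s, n);
        b->p = np;
        b->len += n;
        b->p[b->len] = '\0';
        return 0;
    }

    int main(void)
    {
        const char *homedir = "/home/user";
        const char *working = "/~/dir/file.txt";   /* "/~/" means home-relative */
        struct buf path = { NULL, 0 };

        if (strncmp(working, "/~/", 3) != 0)
            return 1;                       /* not home-relative in this demo */
        if (buf_addn(&path, homedir, strlen(homedir)) ||
            buf_addn(&path, working + 2, strlen(working) - 2)) {
            free(path.p);
            return 1;
        }
        printf("%s\n", path.p);             /* /home/user/dir/file.txt */
        free(path.p);
        return 0;
    }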
diff --git a/poky/meta/recipes-support/curl/curl/CVE-2023-27535-pre1.patch b/poky/meta/recipes-support/curl/curl/CVE-2023-27535-pre1.patch
new file mode 100644
index 0000000000..034b72f7e6
--- /dev/null
+++ b/poky/meta/recipes-support/curl/curl/CVE-2023-27535-pre1.patch
@@ -0,0 +1,236 @@
+From ed5095ed94281989e103c72e032200b83be37878 Mon Sep 17 00:00:00 2001
+From: Daniel Stenberg <daniel@haxx.se>
+Date: Thu, 6 Oct 2022 00:49:10 +0200
+Subject: [PATCH] strcase: add and use Curl_timestrcmp
+
+This is a strcmp() alternative function for comparing "secrets",
+designed to take the same time no matter the content to not leak
+match/non-match info to observers based on how fast it is.
+
+The time this function takes is only a function of the shortest input
+string.
+
+Reported-by: Trail of Bits
+
+Closes #9658
+
+Upstream-Status: Backport from [https://github.com/curl/curl/commit/ed5095ed94281989e103c72e032200b83be37878 & https://github.com/curl/curl/commit/f18af4f874cecab82a9797e8c7541e0990c7a64c]
+Comment: to backport fix for CVE-2023-27535, add function Curl_timestrcmp.
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ lib/netrc.c | 6 +++---
+ lib/strcase.c | 22 ++++++++++++++++++++++
+ lib/strcase.h | 1 +
+ lib/url.c | 33 +++++++++++++--------------------
+ lib/vauth/digest_sspi.c | 4 ++--
+ lib/vtls/vtls.c | 21 ++++++++++++++++++++-
+ 6 files changed, 61 insertions(+), 26 deletions(-)
+
+diff --git a/lib/netrc.c b/lib/netrc.c
+index 9323913..fe3fd1e 100644
+--- a/lib/netrc.c
++++ b/lib/netrc.c
+@@ -124,9 +124,9 @@ static int parsenetrc(const char *host,
+ /* we are now parsing sub-keywords concerning "our" host */
+ if(state_login) {
+ if(specific_login) {
+- state_our_login = strcasecompare(login, tok);
++ state_our_login = !Curl_timestrcmp(login, tok);
+ }
+- else if(!login || strcmp(login, tok)) {
++ else if(!login || Curl_timestrcmp(login, tok)) {
+ if(login_alloc) {
+ free(login);
+ login_alloc = FALSE;
+@@ -142,7 +142,7 @@ static int parsenetrc(const char *host,
+ }
+ else if(state_password) {
+ if((state_our_login || !specific_login)
+- && (!password || strcmp(password, tok))) {
++ && (!password || Curl_timestrcmp(password, tok))) {
+ if(password_alloc) {
+ free(password);
+ password_alloc = FALSE;
+diff --git a/lib/strcase.c b/lib/strcase.c
+index 70bf21c..ec776b3 100644
+--- a/lib/strcase.c
++++ b/lib/strcase.c
+@@ -261,6 +261,28 @@ bool Curl_safecmp(char *a, char *b)
+ return !a && !b;
+ }
+
++/*
++ * Curl_timestrcmp() returns 0 if the two strings are identical. The time this
++ * function spends is a function of the shortest string, not of the contents.
++ */
++int Curl_timestrcmp(const char *a, const char *b)
++{
++ int match = 0;
++ int i = 0;
++
++ if(a && b) {
++ while(1) {
++ match |= a[i]^b[i];
++ if(!a[i] || !b[i])
++ break;
++ i++;
++ }
++ }
++ else
++ return a || b;
++ return match;
++}
++
+ /* --- public functions --- */
+
+ int curl_strequal(const char *first, const char *second)
+diff --git a/lib/strcase.h b/lib/strcase.h
+index 8929a53..8077108 100644
+--- a/lib/strcase.h
++++ b/lib/strcase.h
+@@ -49,5 +49,6 @@ void Curl_strntoupper(char *dest, const char *src, size_t n);
+ void Curl_strntolower(char *dest, const char *src, size_t n);
+
+ bool Curl_safecmp(char *a, char *b);
++int Curl_timestrcmp(const char *first, const char *second);
+
+ #endif /* HEADER_CURL_STRCASE_H */
+diff --git a/lib/url.c b/lib/url.c
+index 9f14a7b..dfbde3b 100644
+--- a/lib/url.c
++++ b/lib/url.c
+@@ -886,19 +886,10 @@ socks_proxy_info_matches(const struct proxy_info* data,
+ /* the user information is case-sensitive
+ or at least it is not defined as case-insensitive
+ see https://tools.ietf.org/html/rfc3986#section-3.2.1 */
+- if((data->user == NULL) != (needle->user == NULL))
+- return FALSE;
+- /* curl_strequal does a case insentive comparison, so do not use it here! */
+- if(data->user &&
+- needle->user &&
+- strcmp(data->user, needle->user) != 0)
+- return FALSE;
+- if((data->passwd == NULL) != (needle->passwd == NULL))
+- return FALSE;
++
+ /* curl_strequal does a case insentive comparison, so do not use it here! */
+- if(data->passwd &&
+- needle->passwd &&
+- strcmp(data->passwd, needle->passwd) != 0)
++ if(Curl_timestrcmp(data->user, needle->user) ||
++ Curl_timestrcmp(data->passwd, needle->passwd))
+ return FALSE;
+ return TRUE;
+ }
+@@ -1257,10 +1248,10 @@ ConnectionExists(struct Curl_easy *data,
+ if(!(needle->handler->flags & PROTOPT_CREDSPERREQUEST)) {
+ /* This protocol requires credentials per connection,
+ so verify that we're using the same name and password as well */
+- if(strcmp(needle->user, check->user) ||
+- strcmp(needle->passwd, check->passwd) ||
+- !Curl_safecmp(needle->sasl_authzid, check->sasl_authzid) ||
+- !Curl_safecmp(needle->oauth_bearer, check->oauth_bearer)) {
++ if(Curl_timestrcmp(needle->user, check->user) ||
++ Curl_timestrcmp(needle->passwd, check->passwd) ||
++ Curl_timestrcmp(needle->sasl_authzid, check->sasl_authzid) ||
++ Curl_timestrcmp(needle->oauth_bearer, check->oauth_bearer)) {
+ /* one of them was different */
+ continue;
+ }
+@@ -1326,8 +1317,8 @@ ConnectionExists(struct Curl_easy *data,
+ possible. (Especially we must not reuse the same connection if
+ partway through a handshake!) */
+ if(wantNTLMhttp) {
+- if(strcmp(needle->user, check->user) ||
+- strcmp(needle->passwd, check->passwd)) {
++ if(Curl_timestrcmp(needle->user, check->user) ||
++ Curl_timestrcmp(needle->passwd, check->passwd)) {
+
+ /* we prefer a credential match, but this is at least a connection
+ that can be reused and "upgraded" to NTLM */
+@@ -1348,8 +1339,10 @@ ConnectionExists(struct Curl_easy *data,
+ if(!check->http_proxy.user || !check->http_proxy.passwd)
+ continue;
+
+- if(strcmp(needle->http_proxy.user, check->http_proxy.user) ||
+- strcmp(needle->http_proxy.passwd, check->http_proxy.passwd))
++ if(Curl_timestrcmp(needle->http_proxy.user,
++ check->http_proxy.user) ||
++ Curl_timestrcmp(needle->http_proxy.passwd,
++ check->http_proxy.passwd))
+ continue;
+ }
+ else if(check->proxy_ntlm_state != NTLMSTATE_NONE) {
+diff --git a/lib/vauth/digest_sspi.c b/lib/vauth/digest_sspi.c
+index a109056..3986386 100644
+--- a/lib/vauth/digest_sspi.c
++++ b/lib/vauth/digest_sspi.c
+@@ -450,8 +450,8 @@ CURLcode Curl_auth_create_digest_http_message(struct Curl_easy *data,
+ has changed then delete that context. */
+ if((userp && !digest->user) || (!userp && digest->user) ||
+ (passwdp && !digest->passwd) || (!passwdp && digest->passwd) ||
+- (userp && digest->user && strcmp(userp, digest->user)) ||
+- (passwdp && digest->passwd && strcmp(passwdp, digest->passwd))) {
++ (userp && digest->user && Curl_timestrcmp(userp, digest->user)) ||
++ (passwdp && digest->passwd && Curl_timestrcmp(passwdp, digest->passwd))) {
+ if(digest->http_context) {
+ s_pSecFn->DeleteSecurityContext(digest->http_context);
+ Curl_safefree(digest->http_context);
+diff --git a/lib/vtls/vtls.c b/lib/vtls/vtls.c
+index e8cb70f..70a9391 100644
+--- a/lib/vtls/vtls.c
++++ b/lib/vtls/vtls.c
+@@ -98,9 +98,15 @@ Curl_ssl_config_matches(struct ssl_primary_config* data,
+ Curl_safecmp(data->issuercert, needle->issuercert) &&
+ Curl_safecmp(data->clientcert, needle->clientcert) &&
+ Curl_safecmp(data->random_file, needle->random_file) &&
+- Curl_safecmp(data->egdsocket, needle->egdsocket) &&
++ Curl_safecmp(data->egdsocket, needle->egdsocket) &&
++#ifdef USE_TLS_SRP
++ !Curl_timestrcmp(data->username, needle->username) &&
++ !Curl_timestrcmp(data->password, needle->password) &&
++ (data->authtype == needle->authtype) &&
++#endif
+ Curl_safe_strcasecompare(data->cipher_list, needle->cipher_list) &&
+ Curl_safe_strcasecompare(data->cipher_list13, needle->cipher_list13) &&
++ Curl_safe_strcasecompare(data->CRLfile, needle->CRLfile) &&
+ Curl_safe_strcasecompare(data->pinned_key, needle->pinned_key))
+ return TRUE;
+
+@@ -117,6 +123,9 @@ Curl_clone_primary_ssl_config(struct ssl_primary_config *source,
+ dest->verifyhost = source->verifyhost;
+ dest->verifystatus = source->verifystatus;
+ dest->sessionid = source->sessionid;
++#ifdef USE_TLS_SRP
++ dest->authtype = source->authtype;
++#endif
+
+ CLONE_STRING(CApath);
+ CLONE_STRING(CAfile);
+@@ -127,6 +136,11 @@ Curl_clone_primary_ssl_config(struct ssl_primary_config *source,
+ CLONE_STRING(cipher_list);
+ CLONE_STRING(cipher_list13);
+ CLONE_STRING(pinned_key);
++ CLONE_STRING(CRLfile);
++#ifdef USE_TLS_SRP
++ CLONE_STRING(username);
++ CLONE_STRING(password);
++#endif
+
+ return TRUE;
+ }
+@@ -142,6 +156,11 @@ void Curl_free_primary_ssl_config(struct ssl_primary_config* sslc)
+ Curl_safefree(sslc->cipher_list);
+ Curl_safefree(sslc->cipher_list13);
+ Curl_safefree(sslc->pinned_key);
++ Curl_safefree(sslc->CRLfile);
++#ifdef USE_TLS_SRP
++ Curl_safefree(sslc->username);
++ Curl_safefree(sslc->password);
++#endif
+ }
+
+ #ifdef USE_SSL
+--
+2.25.1
+
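Editorial note on the commit message above: a constant-time comparison accumulates the XOR of every byte pair instead of returning at the first mismatch, so its running time depends only on the shorter string's length, not on where the strings diverge. The sketch below restates that idea as a standalone program; it is illustrative code, not the library function itself.

    #include <stdio.h>

    /* Returns 0 when equal, non-zero otherwise; no ordering is implied. */
    static int timing_safe_strcmp(const char *a, const char *b)
    {
        int diff = 0;
        size_t i = 0;

        if (!a || !b)
            return (a != b);        /* one NULL, one not -> unequal */
        for (;;) {
            diff |= (unsigned char)a[i] ^ (unsigned char)b[i];
            if (!a[i] || !b[i])
                break;
            i++;
        }
        return diff;
    }

    int main(void)
    {
        /* both calls scan the full common length even though the first
         * byte already differs in the second case */
        printf("%d\n", timing_safe_strcmp("hunter2", "hunter2"));  /* 0 */
        printf("%d\n", timing_safe_strcmp("hunter2", "Xunter2"));  /* non-zero */
        return 0;
    }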
diff --git a/poky/meta/recipes-support/curl/curl/CVE-2023-27535.patch b/poky/meta/recipes-support/curl/curl/CVE-2023-27535.patch
new file mode 100644
index 0000000000..e38390a57c
--- /dev/null
+++ b/poky/meta/recipes-support/curl/curl/CVE-2023-27535.patch
@@ -0,0 +1,170 @@
+From 8f4608468b890dce2dad9f91d5607ee7e9c1aba1 Mon Sep 17 00:00:00 2001
+From: Daniel Stenberg <daniel@haxx.se>
+Date: Thu, 9 Mar 2023 17:47:06 +0100
+Subject: [PATCH] ftp: add more conditions for connection reuse
+
+Reported-by: Harry Sintonen
+Closes #10730
+
+Upstream-Status: Backport [import from ubuntu https://git.launchpad.net/ubuntu/+source/curl/tree/debian/patches/CVE-2023-27535.patch?h=ubuntu/focal-security
+Upstream commit https://github.com/curl/curl/commit/8f4608468b890dce2dad9f91d5607ee7e9c1aba1]
+CVE: CVE-2023-27535
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ lib/ftp.c | 30 ++++++++++++++++++++++++++++--
+ lib/ftp.h | 5 +++++
+ lib/setopt.c | 2 +-
+ lib/url.c | 16 +++++++++++++++-
+ lib/urldata.h | 4 ++--
+ 5 files changed, 51 insertions(+), 6 deletions(-)
+
+diff --git a/lib/ftp.c b/lib/ftp.c
+index 31a34e8..7a82a74 100644
+--- a/lib/ftp.c
++++ b/lib/ftp.c
+@@ -4059,6 +4059,10 @@ static CURLcode ftp_disconnect(struct connectdata *conn, bool dead_connection)
+ }
+
+ freedirs(ftpc);
++ free(ftpc->account);
++ ftpc->account = NULL;
++ free(ftpc->alternative_to_user);
++ ftpc->alternative_to_user = NULL;
+ free(ftpc->prevpath);
+ ftpc->prevpath = NULL;
+ free(ftpc->server_os);
+@@ -4326,11 +4330,31 @@ static CURLcode ftp_setup_connection(struct connectdata *conn)
+ struct Curl_easy *data = conn->data;
+ char *type;
+ struct FTP *ftp;
++ struct ftp_conn *ftpc = &conn->proto.ftpc;
+
+- conn->data->req.protop = ftp = calloc(sizeof(struct FTP), 1);
++ ftp = calloc(sizeof(struct FTP), 1);
+ if(NULL == ftp)
+ return CURLE_OUT_OF_MEMORY;
+
++ /* clone connection related data that is FTP specific */
++ if(data->set.str[STRING_FTP_ACCOUNT]) {
++ ftpc->account = strdup(data->set.str[STRING_FTP_ACCOUNT]);
++ if(!ftpc->account) {
++ free(ftp);
++ return CURLE_OUT_OF_MEMORY;
++ }
++ }
++ if(data->set.str[STRING_FTP_ALTERNATIVE_TO_USER]) {
++ ftpc->alternative_to_user =
++ strdup(data->set.str[STRING_FTP_ALTERNATIVE_TO_USER]);
++ if(!ftpc->alternative_to_user) {
++ Curl_safefree(ftpc->account);
++ free(ftp);
++ return CURLE_OUT_OF_MEMORY;
++ }
++ }
++ conn->data->req.protop = ftp;
++
+ ftp->path = &data->state.up.path[1]; /* don't include the initial slash */
+
+ /* FTP URLs support an extension like ";type=<typecode>" that
+@@ -4366,7 +4390,9 @@ static CURLcode ftp_setup_connection(struct connectdata *conn)
+ /* get some initial data into the ftp struct */
+ ftp->transfer = FTPTRANSFER_BODY;
+ ftp->downloadsize = 0;
+- conn->proto.ftpc.known_filesize = -1; /* unknown size for now */
++ ftpc->known_filesize = -1; /* unknown size for now */
++ ftpc->use_ssl = data->set.use_ssl;
++ ftpc->ccc = data->set.ftp_ccc;
+
+ return CURLE_OK;
+ }
+diff --git a/lib/ftp.h b/lib/ftp.h
+index 984347f..163dcb3 100644
+--- a/lib/ftp.h
++++ b/lib/ftp.h
+@@ -116,6 +116,8 @@ struct FTP {
+ struct */
+ struct ftp_conn {
+ struct pingpong pp;
++ char *account;
++ char *alternative_to_user;
+ char *entrypath; /* the PWD reply when we logged on */
+ char **dirs; /* realloc()ed array for path components */
+ int dirdepth; /* number of entries used in the 'dirs' array */
+@@ -141,6 +143,9 @@ struct ftp_conn {
+ ftpstate state; /* always use ftp.c:state() to change state! */
+ ftpstate state_saved; /* transfer type saved to be reloaded after
+ data connection is established */
++ unsigned char use_ssl; /* if AUTH TLS is to be attempted etc, for FTP or
++ IMAP or POP3 or others! (type: curl_usessl)*/
++ unsigned char ccc; /* ccc level for this connection */
+ curl_off_t retr_size_saved; /* Size of retrieved file saved */
+ char *server_os; /* The target server operating system. */
+ curl_off_t known_filesize; /* file size is different from -1, if wildcard
+diff --git a/lib/setopt.c b/lib/setopt.c
+index 4d96f6b..a91bb70 100644
+--- a/lib/setopt.c
++++ b/lib/setopt.c
+@@ -2126,7 +2126,7 @@ CURLcode Curl_vsetopt(struct Curl_easy *data, CURLoption option, va_list param)
+ arg = va_arg(param, long);
+ if((arg < CURLUSESSL_NONE) || (arg >= CURLUSESSL_LAST))
+ return CURLE_BAD_FUNCTION_ARGUMENT;
+- data->set.use_ssl = (curl_usessl)arg;
++ data->set.use_ssl = (unsigned char)arg;
+ break;
+
+ case CURLOPT_SSL_OPTIONS:
+diff --git a/lib/url.c b/lib/url.c
+index dfbde3b..f84375c 100644
+--- a/lib/url.c
++++ b/lib/url.c
+@@ -1257,10 +1257,24 @@ ConnectionExists(struct Curl_easy *data,
+ }
+ }
+
+- if(get_protocol_family(needle->handler->protocol) & PROTO_FAMILY_SSH) {
++#ifdef USE_SSH
++ else if(get_protocol_family(needle->handler->protocol) & PROTO_FAMILY_SSH) {
+ if(!ssh_config_matches(needle, check))
+ continue;
+ }
++#endif
++#ifndef CURL_DISABLE_FTP
++ else if(get_protocol_family(needle->handler->protocol) & PROTO_FAMILY_FTP) {
++ /* Also match ACCOUNT, ALTERNATIVE-TO-USER, USE_SSL and CCC options */
++ if(Curl_timestrcmp(needle->proto.ftpc.account,
++ check->proto.ftpc.account) ||
++ Curl_timestrcmp(needle->proto.ftpc.alternative_to_user,
++ check->proto.ftpc.alternative_to_user) ||
++ (needle->proto.ftpc.use_ssl != check->proto.ftpc.use_ssl) ||
++ (needle->proto.ftpc.ccc != check->proto.ftpc.ccc))
++ continue;
++ }
++#endif
+
+ if(!needle->bits.httpproxy || (needle->handler->flags&PROTOPT_SSL) ||
+ needle->bits.tunnel_proxy) {
+diff --git a/lib/urldata.h b/lib/urldata.h
+index 168f874..51b793b 100644
+--- a/lib/urldata.h
++++ b/lib/urldata.h
+@@ -1730,8 +1730,6 @@ struct UserDefined {
+ void *ssh_keyfunc_userp; /* custom pointer to callback */
+ enum CURL_NETRC_OPTION
+ use_netrc; /* defined in include/curl.h */
+- curl_usessl use_ssl; /* if AUTH TLS is to be attempted etc, for FTP or
+- IMAP or POP3 or others! */
+ long new_file_perms; /* Permissions to use when creating remote files */
+ long new_directory_perms; /* Permissions to use when creating remote dirs */
+ long ssh_auth_types; /* allowed SSH auth types */
+@@ -1851,6 +1849,8 @@ struct UserDefined {
+ BIT(http09_allowed); /* allow HTTP/0.9 responses */
+ BIT(mail_rcpt_allowfails); /* allow RCPT TO command to fail for some
+ recipients */
++ unsigned char use_ssl; /* if AUTH TLS is to be attempted etc, for FTP or
++ IMAP or POP3 or others! (type: curl_usessl)*/
+ };
+
+ struct Names {
+--
+2.25.1
+
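Editorial note on the reuse conditions above: a cached FTP connection may only be handed to a new transfer if the protocol-level settings negotiated on it (ACCOUNT, alternative-to-user, AUTH TLS level, CCC level) also match the new request. A minimal sketch of such a match predicate follows; struct and field names are illustrative, not curl's.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    struct ftp_opts {
        const char *account;        /* ACCT value, may be NULL */
        const char *alt_to_user;    /* alternative-to-user, may be NULL */
        int         use_ssl;        /* AUTH TLS level */
        int         ccc;            /* clear-command-channel level */
    };

    static bool str_eq(const char *a, const char *b)
    {
        if (!a || !b)
            return a == b;          /* both NULL counts as equal */
        return strcmp(a, b) == 0;
    }

    static bool ftp_conn_reusable(const struct ftp_opts *cached,
                                  const struct ftp_opts *wanted)
    {
        return str_eq(cached->account, wanted->account) &&
               str_eq(cached->alt_to_user, wanted->alt_to_user) &&
               cached->use_ssl == wanted->use_ssl &&
               cached->ccc == wanted->ccc;
    }

    int main(void)
    {
        struct ftp_opts cached = { "acct1", NULL, 1, 0 };
        struct ftp_opts wanted = { "acct2", NULL, 1, 0 };
        printf("reusable: %s\n", ftp_conn_reusable(&cached, &wanted) ? "yes" : "no");
        return 0;
    }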
diff --git a/poky/meta/recipes-support/curl/curl/CVE-2023-27536.patch b/poky/meta/recipes-support/curl/curl/CVE-2023-27536.patch
new file mode 100644
index 0000000000..b04a77de25
--- /dev/null
+++ b/poky/meta/recipes-support/curl/curl/CVE-2023-27536.patch
@@ -0,0 +1,55 @@
+From cb49e67303dbafbab1cebf4086e3ec15b7d56ee5 Mon Sep 17 00:00:00 2001
+From: Daniel Stenberg <daniel@haxx.se>
+Date: Fri, 10 Mar 2023 09:22:43 +0100
+Subject: [PATCH] url: only reuse connections with same GSS delegation
+
+Reported-by: Harry Sintonen
+Closes #10731
+
+Upstream-Status: Backport [https://github.com/curl/curl/commit/cb49e67303dbafbab1cebf4086e3ec15b7d56ee5]
+CVE: CVE-2023-27536
+Signed-off-by: Vijay Anusuri <vanusuri@mvista.com>
+---
+ lib/url.c | 6 ++++++
+ lib/urldata.h | 1 +
+ 2 files changed, 7 insertions(+)
+
+diff --git a/lib/url.c b/lib/url.c
+index f84375c..87f4eb0 100644
+--- a/lib/url.c
++++ b/lib/url.c
+@@ -1257,6 +1257,11 @@ ConnectionExists(struct Curl_easy *data,
+ }
+ }
+
++ /* GSS delegation differences do not actually affect every connection
++ and auth method, but this check takes precaution before efficiency */
++ if(needle->gssapi_delegation != check->gssapi_delegation)
++ continue;
++
+ #ifdef USE_SSH
+ else if(get_protocol_family(needle->handler->protocol) & PROTO_FAMILY_SSH) {
+ if(!ssh_config_matches(needle, check))
+@@ -1708,6 +1713,7 @@ static struct connectdata *allocate_conn(struct Curl_easy *data)
+ conn->fclosesocket = data->set.fclosesocket;
+ conn->closesocket_client = data->set.closesocket_client;
+ conn->lastused = Curl_now(); /* used now */
++ conn->gssapi_delegation = data->set.gssapi_delegation;
+
+ return conn;
+ error:
+diff --git a/lib/urldata.h b/lib/urldata.h
+index 51b793b..b8a611b 100644
+--- a/lib/urldata.h
++++ b/lib/urldata.h
+@@ -1118,6 +1118,7 @@ struct connectdata {
+ handle */
+ BIT(sock_accepted); /* TRUE if the SECONDARYSOCKET was created with
+ accept() */
++ long gssapi_delegation; /* inherited from set.gssapi_delegation */
+ };
+
+ /* The end of connectdata. */
+--
+2.25.1
+
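Editorial note on the check above: the delegation setting in force when a connection was created is recorded on the connection and must equal the new request's setting before the connection may be reused, as the in-code comment explains (a cheap check up front rather than reasoning about which auth methods are affected). A tiny sketch of that rule, with illustrative names and values:

    #include <stdbool.h>
    #include <stdio.h>

    enum gss_delegation { DELEG_NONE = 0, DELEG_POLICY = 1, DELEG_ALWAYS = 2 };

    struct cached_conn { enum gss_delegation gssapi_delegation; };

    static bool deleg_matches(const struct cached_conn *conn,
                              enum gss_delegation wanted)
    {
        return conn->gssapi_delegation == wanted;
    }

    int main(void)
    {
        struct cached_conn conn = { DELEG_NONE };
        printf("%s\n", deleg_matches(&conn, DELEG_ALWAYS) ? "reuse" : "new connection");
        return 0;
    }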
diff --git a/poky/meta/recipes-support/curl/curl/CVE-2023-27538.patch b/poky/meta/recipes-support/curl/curl/CVE-2023-27538.patch
new file mode 100644
index 0000000000..6c40989d3b
--- /dev/null
+++ b/poky/meta/recipes-support/curl/curl/CVE-2023-27538.patch
@@ -0,0 +1,31 @@
+From af369db4d3833272b8ed443f7fcc2e757a0872eb Mon Sep 17 00:00:00 2001
+From: Daniel Stenberg <daniel@haxx.se>
+Date: Fri, 10 Mar 2023 08:22:51 +0100
+Subject: [PATCH] url: fix the SSH connection reuse check
+
+Reported-by: Harry Sintonen
+Closes #10735
+
+CVE: CVE-2023-27538
+Upstream-Status: Backport [https://github.com/curl/curl/commit/af369db4d3833272b8ed443f7fcc2e757a0872eb]
+Signed-off-by: Hitendra Prajapati <hprajapati@mvista.com>
+---
+ lib/url.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/lib/url.c b/lib/url.c
+index 8da0245..9f14a7b 100644
+--- a/lib/url.c
++++ b/lib/url.c
+@@ -1266,7 +1266,7 @@ ConnectionExists(struct Curl_easy *data,
+ }
+ }
+
+- if(get_protocol_family(needle->handler->protocol) == PROTO_FAMILY_SSH) {
++ if(get_protocol_family(needle->handler->protocol) & PROTO_FAMILY_SSH) {
+ if(!ssh_config_matches(needle, check))
+ continue;
+ }
+--
+2.25.1
+
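Editorial note on the one-character fix above: a protocol "family" is a bitmask that can cover several protocol bits, so membership has to be tested with a bitwise AND; an equality test only succeeds when the tested value happens to equal the entire mask, which is why the SSH-specific reuse checks could be skipped. The bit values below are illustrative, not curl's actual CURLPROTO_* constants.

    #include <stdio.h>

    #define PROTO_SCP          (1u << 0)
    #define PROTO_SFTP         (1u << 1)
    #define PROTO_FAMILY_SSH   (PROTO_SCP | PROTO_SFTP)

    int main(void)
    {
        unsigned int proto = PROTO_SFTP;

        /* buggy form: never true for a single member of the family */
        printf("==  : %s\n", (proto == PROTO_FAMILY_SSH) ? "ssh checks run" : "skipped");
        /* fixed form: true for SCP, SFTP, or both */
        printf("and : %s\n", (proto & PROTO_FAMILY_SSH)  ? "ssh checks run" : "skipped");
        return 0;
    }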
diff --git a/poky/meta/recipes-support/curl/curl_7.69.1.bb b/poky/meta/recipes-support/curl/curl_7.69.1.bb
index ed37094049..32d18ddb3a 100644
--- a/poky/meta/recipes-support/curl/curl_7.69.1.bb
+++ b/poky/meta/recipes-support/curl/curl_7.69.1.bb
@@ -39,6 +39,16 @@ SRC_URI = "https://curl.haxx.se/download/curl-${PV}.tar.bz2 \
file://CVE-2022-32207.patch \
file://CVE-2022-32208.patch \
file://CVE-2022-35252.patch \
+ file://CVE-2022-32221.patch \
+ file://CVE-2022-35260.patch \
+ file://CVE-2022-43552.patch \
+ file://CVE-2023-23916.patch \
+ file://CVE-2023-27534.patch \
+ file://CVE-2023-27538.patch \
+ file://CVE-2023-27533.patch \
+ file://CVE-2023-27535-pre1.patch \
+ file://CVE-2023-27535.patch \
+ file://CVE-2023-27536.patch \
"
SRC_URI[md5sum] = "ec5fc263f898a3dfef08e805f1ecca42"
diff --git a/poky/meta/recipes-support/gnutls/gnutls/CVE-2023-0361.patch b/poky/meta/recipes-support/gnutls/gnutls/CVE-2023-0361.patch
new file mode 100644
index 0000000000..943f4ca704
--- /dev/null
+++ b/poky/meta/recipes-support/gnutls/gnutls/CVE-2023-0361.patch
@@ -0,0 +1,85 @@
+From 80a6ce8ddb02477cd724cd5b2944791aaddb702a Mon Sep 17 00:00:00 2001
+From: Alexander Sosedkin <asosedkin@redhat.com>
+Date: Tue, 9 Aug 2022 16:05:53 +0200
+Subject: [PATCH] auth/rsa: side-step potential side-channel
+
+Signed-off-by: Alexander Sosedkin <asosedkin@redhat.com>
+Signed-off-by: Hubert Kario <hkario@redhat.com>
+Tested-by: Hubert Kario <hkario@redhat.com>
+Upstream-Status: Backport [https://gitlab.com/gnutls/gnutls/-/commit/80a6ce8ddb02477cd724cd5b2944791aaddb702a
+ https://gitlab.com/gnutls/gnutls/-/commit/4b7ff428291c7ed77c6d2635577c83a43bbae558]
+CVE: CVE-2023-0361
+Signed-off-by: Vivek Kumbhar <vkumbhar@mvista.com>
+---
+ lib/auth/rsa.c | 30 +++---------------------------
+ 1 file changed, 3 insertions(+), 27 deletions(-)
+
+diff --git a/lib/auth/rsa.c b/lib/auth/rsa.c
+index 8108ee8..858701f 100644
+--- a/lib/auth/rsa.c
++++ b/lib/auth/rsa.c
+@@ -155,13 +155,10 @@ static int
+ proc_rsa_client_kx(gnutls_session_t session, uint8_t * data,
+ size_t _data_size)
+ {
+- const char attack_error[] = "auth_rsa: Possible PKCS #1 attack\n";
+ gnutls_datum_t ciphertext;
+ int ret, dsize;
+ ssize_t data_size = _data_size;
+ volatile uint8_t ver_maj, ver_min;
+- volatile uint8_t check_ver_min;
+- volatile uint32_t ok;
+
+ #ifdef ENABLE_SSL3
+ if (get_num_version(session) == GNUTLS_SSL3) {
+@@ -187,7 +184,6 @@ proc_rsa_client_kx(gnutls_session_t session, uint8_t * data,
+
+ ver_maj = _gnutls_get_adv_version_major(session);
+ ver_min = _gnutls_get_adv_version_minor(session);
+- check_ver_min = (session->internals.allow_wrong_pms == 0);
+
+ session->key.key.data = gnutls_malloc(GNUTLS_MASTER_SIZE);
+ if (session->key.key.data == NULL) {
+@@ -206,10 +202,9 @@ proc_rsa_client_kx(gnutls_session_t session, uint8_t * data,
+ return ret;
+ }
+
+- ret =
+- gnutls_privkey_decrypt_data2(session->internals.selected_key,
+- 0, &ciphertext, session->key.key.data,
+- session->key.key.size);
++ gnutls_privkey_decrypt_data2(session->internals.selected_key,
++ 0, &ciphertext, session->key.key.data,
++ session->key.key.size);
+ /* After this point, any conditional on failure that cause differences
+ * in execution may create a timing or cache access pattern side
+ * channel that can be used as an oracle, so treat very carefully */
+@@ -225,25 +220,6 @@ proc_rsa_client_kx(gnutls_session_t session, uint8_t * data,
+ * Vlastimil Klima, Ondej Pokorny and Tomas Rosa.
+ */
+
+- /* ok is 0 in case of error and 1 in case of success. */
+-
+- /* if ret < 0 */
+- ok = CONSTCHECK_EQUAL(ret, 0);
+- /* session->key.key.data[0] must equal ver_maj */
+- ok &= CONSTCHECK_EQUAL(session->key.key.data[0], ver_maj);
+- /* if check_ver_min then session->key.key.data[1] must equal ver_min */
+- ok &= CONSTCHECK_NOT_EQUAL(check_ver_min, 0) &
+- CONSTCHECK_EQUAL(session->key.key.data[1], ver_min);
+-
+- if (ok) {
+- /* call logging function unconditionally so all branches are
+- * indistinguishable for timing and cache access when debug
+- * logging is disabled */
+- _gnutls_no_log("%s", attack_error);
+- } else {
+- _gnutls_debug_log("%s", attack_error);
+- }
+-
+ /* This is here to avoid the version check attack
+ * discussed above.
+ */
+--
+2.25.1
+
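Editorial note on the change above: the removed branches and log calls depended on whether the RSA padding check succeeded, which is the kind of data-dependent behaviour a Bleichenbacher-style oracle needs. The sketch below shows the classic countermeasure in that area, selecting between the decrypted pre-master secret and a random fallback through a mask, with the same instruction sequence either way. It is a generic illustration of the technique, not GnuTLS's internal code path.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define PMS_SIZE 48

    /* all-ones when ok != 0, all-zeros when ok == 0, without branching */
    static uint8_t ct_mask(uint32_t ok)
    {
        return (uint8_t)(0 - (uint8_t)((ok | (0 - ok)) >> 31));
    }

    static void select_pms(uint8_t *out, const uint8_t *decrypted,
                           const uint8_t *random_fallback, uint32_t ok)
    {
        uint8_t mask = ct_mask(ok);
        for (size_t i = 0; i < PMS_SIZE; i++)
            out[i] = (uint8_t)((decrypted[i] & mask) | (random_fallback[i] & ~mask));
    }

    int main(void)
    {
        uint8_t decrypted[PMS_SIZE], fallback[PMS_SIZE], pms[PMS_SIZE];
        memset(decrypted, 0xAB, sizeof decrypted);
        memset(fallback, 0xCD, sizeof fallback);

        select_pms(pms, decrypted, fallback, 0);   /* padding check failed */
        printf("first byte: 0x%02X\n", pms[0]);    /* 0xCD, from the fallback */
        return 0;
    }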
diff --git a/poky/meta/recipes-support/gnutls/gnutls_3.6.14.bb b/poky/meta/recipes-support/gnutls/gnutls_3.6.14.bb
index f1757871ce..0c3392d521 100644
--- a/poky/meta/recipes-support/gnutls/gnutls_3.6.14.bb
+++ b/poky/meta/recipes-support/gnutls/gnutls_3.6.14.bb
@@ -27,6 +27,7 @@ SRC_URI = "https://www.gnupg.org/ftp/gcrypt/gnutls/v${SHRT_VER}/gnutls-${PV}.tar
file://CVE-2021-20232.patch \
file://CVE-2022-2509.patch \
file://CVE-2021-4209.patch \
+ file://CVE-2023-0361.patch \
"
SRC_URI[sha256sum] = "5630751adec7025b8ef955af4d141d00d252a985769f51b4059e5affa3d39d63"
diff --git a/poky/meta/recipes-support/gnutls/libtasn1/CVE-2021-46848.patch b/poky/meta/recipes-support/gnutls/libtasn1/CVE-2021-46848.patch
new file mode 100644
index 0000000000..9a8ceecbe7
--- /dev/null
+++ b/poky/meta/recipes-support/gnutls/libtasn1/CVE-2021-46848.patch
@@ -0,0 +1,45 @@
+From 22fd12b290adea788122044cb58dc9e77754644f Mon Sep 17 00:00:00 2001
+From: Vivek Kumbhar <vkumbhar@mvista.com>
+Date: Thu, 17 Nov 2022 12:07:50 +0530
+Subject: [PATCH] CVE-2021-46848
+
+Upstream-Status: Backport [https://gitlab.com/gnutls/libtasn1/-/commit/44a700d2051a666235748970c2df047ff207aeb5]
+CVE: CVE-2021-46848
+Signed-off-by: Vivek Kumbhar <vkumbhar@mvista.com>
+
+Fix ETYPE_OK off by one array size check.
+---
+ NEWS | 4 ++++
+ lib/int.h | 2 +-
+ 2 files changed, 5 insertions(+), 1 deletion(-)
+
+diff --git a/NEWS b/NEWS
+index f042481..d8f684e 100644
+--- a/NEWS
++++ b/NEWS
+@@ -1,5 +1,9 @@
+ GNU Libtasn1 NEWS -*- outline -*-
+
++* Noteworthy changes in release ?.? (????-??-??) [?]
++- Fix ETYPE_OK out of bounds read. Closes: #32.
++- Update gnulib files and various maintenance fixes.
++
+ * Noteworthy changes in release 4.16.0 (released 2020-02-01) [stable]
+ - asn1_decode_simple_ber: added support for constructed definite
+ octet strings. This allows this function decode the whole set of
+diff --git a/lib/int.h b/lib/int.h
+index ea16257..c877282 100644
+--- a/lib/int.h
++++ b/lib/int.h
+@@ -97,7 +97,7 @@ typedef struct tag_and_class_st
+ #define ETYPE_TAG(etype) (_asn1_tags[etype].tag)
+ #define ETYPE_CLASS(etype) (_asn1_tags[etype].class)
+ #define ETYPE_OK(etype) (((etype) != ASN1_ETYPE_INVALID && \
+- (etype) <= _asn1_tags_size && \
++ (etype) < _asn1_tags_size && \
+ _asn1_tags[(etype)].desc != NULL)?1:0)
+
+ #define ETYPE_IS_STRING(etype) ((etype == ASN1_ETYPE_GENERALSTRING || \
+--
+2.25.1
+
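Editorial note on the off-by-one fixed above: for a table with N entries the valid indices are 0..N-1, so a bounds check written with '<=' lets index N through and reads one element past the end. A minimal standalone illustration, with made-up table contents:

    #include <stdio.h>

    #define TABLE_SIZE 4
    static const char *table[TABLE_SIZE] = { "a", "b", "c", "d" };

    static int index_ok(unsigned int idx)
    {
        return idx < TABLE_SIZE;        /* '<= TABLE_SIZE' would be the bug */
    }

    int main(void)
    {
        unsigned int idx = TABLE_SIZE;  /* first invalid index */
        if (index_ok(idx))
            printf("%s\n", table[idx]); /* out-of-bounds read with '<=' */
        else
            printf("index %u rejected\n", idx);
        return 0;
    }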
diff --git a/poky/meta/recipes-support/gnutls/libtasn1_4.16.0.bb b/poky/meta/recipes-support/gnutls/libtasn1_4.16.0.bb
index 8d3a14506a..d2b3c492ec 100644
--- a/poky/meta/recipes-support/gnutls/libtasn1_4.16.0.bb
+++ b/poky/meta/recipes-support/gnutls/libtasn1_4.16.0.bb
@@ -12,6 +12,7 @@ LIC_FILES_CHKSUM = "file://doc/COPYING;md5=d32239bcb673463ab874e80d47fae504 \
SRC_URI = "${GNU_MIRROR}/libtasn1/libtasn1-${PV}.tar.gz \
file://dont-depend-on-help2man.patch \
+ file://CVE-2021-46848.patch \
"
DEPENDS = "bison-native"
diff --git a/poky/meta/recipes-support/libksba/libksba/CVE-2022-3515.patch b/poky/meta/recipes-support/libksba/libksba/CVE-2022-3515.patch
new file mode 100644
index 0000000000..ff9f2f9275
--- /dev/null
+++ b/poky/meta/recipes-support/libksba/libksba/CVE-2022-3515.patch
@@ -0,0 +1,47 @@
+From 4b7d9cd4a018898d7714ce06f3faf2626c14582b Mon Sep 17 00:00:00 2001
+From: Werner Koch <wk@gnupg.org>
+Date: Wed, 5 Oct 2022 14:19:06 +0200
+Subject: [PATCH] Detect a possible overflow directly in the TLV parser.
+
+* src/ber-help.c (_ksba_ber_read_tl): Check for overflow of a commonly
+used sum.
+--
+
+It is quite common to have checks like
+
+ if (ti.nhdr + ti.length >= DIM(tmpbuf))
+ return gpg_error (GPG_ERR_TOO_LARGE);
+
+This patch detects possible integer overflows immediately when
+creating the TI object.
+
+Reported-by: ZDI-CAN-18927, ZDI-CAN-18928, ZDI-CAN-18929
+
+
+Upstream-Status: Backport [https://git.gnupg.org/cgi-bin/gitweb.cgi?p=libksba.git;a=patch;h=4b7d9cd4a018898d7714ce06f3faf2626c14582b]
+CVE: CVE-2022-3515
+Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
+---
+ src/ber-help.c | 6 ++++++
+ 1 file changed, 6 insertions(+)
+
+diff --git a/src/ber-help.c b/src/ber-help.c
+index 81c31ed..56efb6a 100644
+--- a/src/ber-help.c
++++ b/src/ber-help.c
+@@ -182,6 +182,12 @@ _ksba_ber_read_tl (ksba_reader_t reader, struct tag_info *ti)
+ ti->length = len;
+ }
+
++ if (ti->length > ti->nhdr && (ti->nhdr + ti->length) < ti->length)
++ {
++ ti->err_string = "header+length would overflow";
++ return gpg_error (GPG_ERR_EOVERFLOW);
++ }
++
+ /* Without this kludge some example certs can't be parsed */
+ if (ti->class == CLASS_UNIVERSAL && !ti->tag)
+ ti->length = 0;
+--
+2.11.0
+
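Editorial note on the overflow described in the commit message above: when two unsigned sizes are added and the sum wraps around, a later "fits in the buffer" test compares against the wrapped (small) value and passes. Detecting the wrap only needs a comparison of the sum against one operand, as the short sketch below shows with illustrative names.

    #include <stddef.h>
    #include <stdio.h>

    static int sum_fits(size_t nhdr, size_t length, size_t bufsize)
    {
        if (nhdr + length < length)      /* wrapped: the real sum cannot fit */
            return 0;
        return nhdr + length < bufsize;
    }

    int main(void)
    {
        size_t huge = (size_t)-1;        /* SIZE_MAX */
        printf("%d\n", sum_fits(16, 100, 4096));   /* 1: fine */
        printf("%d\n", sum_fits(16, huge, 4096));  /* 0: would wrap */
        return 0;
    }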
diff --git a/poky/meta/recipes-support/libksba/libksba/CVE-2022-47629.patch b/poky/meta/recipes-support/libksba/libksba/CVE-2022-47629.patch
new file mode 100644
index 0000000000..b09d0eb557
--- /dev/null
+++ b/poky/meta/recipes-support/libksba/libksba/CVE-2022-47629.patch
@@ -0,0 +1,69 @@
+From b17444b3c47e32c77a3ba5335ae30ccbadcba3cf Mon Sep 17 00:00:00 2001
+From: Werner Koch <wk@gnupg.org>
+Date: Tue, 22 Nov 2022 16:36:46 +0100
+Subject: [PATCH] Fix an integer overflow in the CRL signature parser.
+
+* src/crl.c (parse_signature): N+N2 now checked for overflow.
+
+* src/ocsp.c (parse_response_extensions): Do not accept too large
+values.
+(parse_single_extensions): Ditto.
+--
+
+The second patch is an extra safeguard not related to the reported
+bug.
+
+GnuPG-bug-id: 6284
+Reported-by: Joseph Surin, elttam
+CVE: CVE-2022-47629
+https://git.gnupg.org/cgi-bin/gitweb.cgi?p=libksba.git;a=commit;h=f61a5ea4e0f6a80fd4b28ef0174bee77793cf070
+Upstream-Status: Backport
+Signed-off-by: Chee Yang Lee <chee.yang.lee@intel.com>
+---
+ src/crl.c | 2 +-
+ src/ocsp.c | 12 ++++++++++++
+ 2 files changed, 13 insertions(+), 1 deletion(-)
+
+diff --git a/src/crl.c b/src/crl.c
+index 87a3fa3..9d3028e 100644
+--- a/src/crl.c
++++ b/src/crl.c
+@@ -1434,7 +1434,7 @@ parse_signature (ksba_crl_t crl)
+ && !ti.is_constructed) )
+ return gpg_error (GPG_ERR_INV_CRL_OBJ);
+ n2 = ti.nhdr + ti.length;
+- if (n + n2 >= DIM(tmpbuf))
++ if (n + n2 >= DIM(tmpbuf) || (n + n2) < n)
+ return gpg_error (GPG_ERR_TOO_LARGE);
+ memcpy (tmpbuf+n, ti.buf, ti.nhdr);
+ err = read_buffer (crl->reader, tmpbuf+n+ti.nhdr, ti.length);
+diff --git a/src/ocsp.c b/src/ocsp.c
+index 4b26f8d..c41234e 100644
+--- a/src/ocsp.c
++++ b/src/ocsp.c
+@@ -912,6 +912,12 @@ parse_response_extensions (ksba_ocsp_t ocsp,
+ else
+ ocsp->good_nonce = 1;
+ }
++ if (ti.length > (1<<24))
++ {
++ /* Bail out on much too large objects. */
++ err = gpg_error (GPG_ERR_BAD_BER);
++ goto leave;
++ }
+ ex = xtrymalloc (sizeof *ex + strlen (oid) + ti.length);
+ if (!ex)
+ {
+@@ -979,6 +985,12 @@ parse_single_extensions (struct ocsp_reqitem_s *ri,
+ err = parse_octet_string (&data, &datalen, &ti);
+ if (err)
+ goto leave;
++ if (ti.length > (1<<24))
++ {
++ /* Bail out on much too large objects. */
++ err = gpg_error (GPG_ERR_BAD_BER);
++ goto leave;
++ }
+ ex = xtrymalloc (sizeof *ex + strlen (oid) + ti.length);
+ if (!ex)
+ {
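Editorial note on the second safeguard above: attacker-controlled length fields are capped to a sane maximum before they feed into an allocation-size computation, so neither the addition nor the allocation itself can be pushed to absurd values. The 16 MiB cap below mirrors the patch; everything else is an illustrative stand-in, not libksba code.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAX_OBJ_LEN (1u << 24)   /* 16 MiB */

    static void *alloc_extension(const char *oid, size_t payload_len)
    {
        if (payload_len > MAX_OBJ_LEN)
            return NULL;                              /* bail out early */
        return malloc(strlen(oid) + 1 + payload_len); /* now a sane size */
    }

    int main(void)
    {
        void *ex = alloc_extension("1.3.6.1.5.5.7.48.1.2", (size_t)-1 / 2);
        printf("%s\n", ex ? "allocated" : "rejected");
        free(ex);
        return 0;
    }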
diff --git a/poky/meta/recipes-support/libksba/libksba_1.3.5.bb b/poky/meta/recipes-support/libksba/libksba_1.3.5.bb
index 7f9ab4f5fc..5293aa91e1 100644
--- a/poky/meta/recipes-support/libksba/libksba_1.3.5.bb
+++ b/poky/meta/recipes-support/libksba/libksba_1.3.5.bb
@@ -22,7 +22,10 @@ inherit autotools binconfig-disabled pkgconfig texinfo
UPSTREAM_CHECK_URI = "https://gnupg.org/download/index.html"
SRC_URI = "${GNUPG_MIRROR}/${BPN}/${BPN}-${PV}.tar.bz2 \
- file://ksba-add-pkgconfig-support.patch"
+ file://ksba-add-pkgconfig-support.patch \
+ file://CVE-2022-47629.patch \
+ file://CVE-2022-3515.patch \
+"
SRC_URI[md5sum] = "8302a3e263a7c630aa7dea7d341f07a2"
SRC_URI[sha256sum] = "41444fd7a6ff73a79ad9728f985e71c9ba8cd3e5e53358e70d5f066d35c1a340"
diff --git a/poky/meta/recipes-support/vim/vim.inc b/poky/meta/recipes-support/vim/vim.inc
index f2cd235329..94eabfa197 100644
--- a/poky/meta/recipes-support/vim/vim.inc
+++ b/poky/meta/recipes-support/vim/vim.inc
@@ -10,8 +10,7 @@ DEPENDS = "ncurses gettext-native"
RSUGGESTS_${PN} = "diffutils"
LICENSE = "vim"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=6b30ea4fa660c483b619924bc709ef99 \
- file://runtime/doc/uganda.txt;md5=001ef779f422a0e9106d428c84495b4d"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=6b30ea4fa660c483b619924bc709ef99"
SRC_URI = "git://github.com/vim/vim.git;branch=master;protocol=https \
file://disable_acl_header_check.patch \
@@ -20,8 +19,8 @@ SRC_URI = "git://github.com/vim/vim.git;branch=master;protocol=https \
file://no-path-adjust.patch \
"
-PV .= ".0598"
-SRCREV = "8279af514ca7e5fd3c31cf13b0864163d1a0bfeb"
+PV .= ".1429"
+SRCREV = "1a08a3e2a584889f19b84a27672134649b73da58"
# Remove when 8.3 is out
UPSTREAM_VERSION_UNKNOWN = "1"
@@ -33,7 +32,7 @@ S = "${WORKDIR}/git"
VIMDIR = "vim${@d.getVar('PV').split('.')[0]}${@d.getVar('PV').split('.')[1]}"
-inherit autotools-brokensep update-alternatives mime-xdg
+inherit autotools-brokensep update-alternatives mime-xdg pkgconfig
CLEANBROKEN = "1"
@@ -81,6 +80,7 @@ EXTRA_OECONF = " \
--disable-netbeans \
--disable-desktop-database-update \
--with-tlib=ncurses \
+ --with-modified-by='${MAINTAINER}' \
ac_cv_small_wchar_t=no \
ac_cv_path_GLIB_COMPILE_RESOURCES=no \
vim_cv_getcwd_broken=no \
diff --git a/poky/scripts/lib/buildstats.py b/poky/scripts/lib/buildstats.py
index c69b5bf4d7..3b76286ba5 100644
--- a/poky/scripts/lib/buildstats.py
+++ b/poky/scripts/lib/buildstats.py
@@ -8,7 +8,7 @@ import json
import logging
import os
import re
-from collections import namedtuple,OrderedDict
+from collections import namedtuple
from statistics import mean
@@ -238,7 +238,7 @@ class BuildStats(dict):
subdirs = os.listdir(path)
for dirname in subdirs:
recipe_dir = os.path.join(path, dirname)
- if not os.path.isdir(recipe_dir):
+ if dirname == "reduced_proc_pressure" or not os.path.isdir(recipe_dir):
continue
name, epoch, version, revision = cls.split_nevr(dirname)
bsrecipe = BSRecipe(name, epoch, version, revision)
diff --git a/poky/scripts/lib/devtool/deploy.py b/poky/scripts/lib/devtool/deploy.py
index e0f8e64b9c..b4f9fbfe45 100644
--- a/poky/scripts/lib/devtool/deploy.py
+++ b/poky/scripts/lib/devtool/deploy.py
@@ -201,9 +201,9 @@ def deploy(args, config, basepath, workspace):
print(' %s' % item)
return 0
- extraoptions = ''
+ extraoptions = '-o HostKeyAlgorithms=+ssh-rsa'
if args.no_host_check:
- extraoptions += '-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no'
+ extraoptions += ' -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no'
if not args.show_status:
extraoptions += ' -q'
@@ -274,9 +274,9 @@ def undeploy(args, config, basepath, workspace):
elif not args.recipename and not args.all:
raise argparse_oe.ArgumentUsageError('If you don\'t specify a recipe, you must specify -a/--all', 'undeploy-target')
- extraoptions = ''
+ extraoptions = '-o HostKeyAlgorithms=+ssh-rsa'
if args.no_host_check:
- extraoptions += '-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no'
+ extraoptions += ' -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no'
if not args.show_status:
extraoptions += ' -q'
diff --git a/poky/scripts/lib/devtool/menuconfig.py b/poky/scripts/lib/devtool/menuconfig.py
index 95384c5333..ff9227035d 100644
--- a/poky/scripts/lib/devtool/menuconfig.py
+++ b/poky/scripts/lib/devtool/menuconfig.py
@@ -43,7 +43,7 @@ def menuconfig(args, config, basepath, workspace):
return 1
check_workspace_recipe(workspace, args.component)
- pn = rd.getVar('PN', True)
+ pn = rd.getVar('PN')
if not rd.getVarFlag('do_menuconfig','task'):
raise DevtoolError("This recipe does not support menuconfig option")
diff --git a/poky/scripts/lib/devtool/standard.py b/poky/scripts/lib/devtool/standard.py
index f364a45283..cfa88616af 100644
--- a/poky/scripts/lib/devtool/standard.py
+++ b/poky/scripts/lib/devtool/standard.py
@@ -357,7 +357,7 @@ def _move_file(src, dst, dry_run_outdir=None, base_outdir=None):
bb.utils.mkdirhier(dst_d)
shutil.move(src, dst)
-def _copy_file(src, dst, dry_run_outdir=None):
+def _copy_file(src, dst, dry_run_outdir=None, base_outdir=None):
"""Copy a file. Creates all the directory components of destination path."""
dry_run_suffix = ' (dry-run)' if dry_run_outdir else ''
logger.debug('Copying %s to %s%s' % (src, dst, dry_run_suffix))
diff --git a/poky/scripts/lib/resulttool/resultutils.py b/poky/scripts/lib/resulttool/resultutils.py
index 8917022d36..7666331ba2 100644
--- a/poky/scripts/lib/resulttool/resultutils.py
+++ b/poky/scripts/lib/resulttool/resultutils.py
@@ -142,7 +142,7 @@ def generic_get_log(sectionname, results, section):
return decode_log(ptest['log'])
def ptestresult_get_log(results, section):
- return generic_get_log('ptestresuls.sections', results, section)
+ return generic_get_log('ptestresult.sections', results, section)
def generic_get_rawlogs(sectname, results):
if sectname not in results:
diff --git a/poky/scripts/lib/wic/plugins/imager/direct.py b/poky/scripts/lib/wic/plugins/imager/direct.py
index 2505c13fce..42704d1e10 100644
--- a/poky/scripts/lib/wic/plugins/imager/direct.py
+++ b/poky/scripts/lib/wic/plugins/imager/direct.py
@@ -115,7 +115,7 @@ class DirectPlugin(ImagerPlugin):
updated = False
for part in self.parts:
if not part.realnum or not part.mountpoint \
- or part.mountpoint == "/" or not part.mountpoint.startswith('/'):
+ or part.mountpoint == "/" or not (part.mountpoint.startswith('/') or part.mountpoint == "swap"):
continue
if part.use_uuid:
diff --git a/poky/scripts/nativesdk-intercept/chgrp b/poky/scripts/nativesdk-intercept/chgrp
new file mode 100755
index 0000000000..30cc417d3a
--- /dev/null
+++ b/poky/scripts/nativesdk-intercept/chgrp
@@ -0,0 +1,27 @@
+#!/usr/bin/env python3
+#
+# Wrapper around 'chgrp' that redirects to root in all cases
+
+import os
+import shutil
+import sys
+
+# calculate path to the real 'chgrp'
+path = os.environ['PATH']
+path = path.replace(os.path.dirname(sys.argv[0]), '')
+real_chgrp = shutil.which('chgrp', path=path)
+
+args = list()
+
+found = False
+for i in sys.argv:
+ if i.startswith("-"):
+ args.append(i)
+ continue
+ if not found:
+ args.append("root")
+ found = True
+ else:
+ args.append(i)
+
+os.execv(real_chgrp, args)
diff --git a/poky/scripts/nativesdk-intercept/chown b/poky/scripts/nativesdk-intercept/chown
new file mode 100755
index 0000000000..3914b3e384
--- /dev/null
+++ b/poky/scripts/nativesdk-intercept/chown
@@ -0,0 +1,27 @@
+#!/usr/bin/env python3
+#
+# Wrapper around 'chown' that redirects to root in all cases
+
+import os
+import shutil
+import sys
+
+# calculate path to the real 'chown'
+path = os.environ['PATH']
+path = path.replace(os.path.dirname(sys.argv[0]), '')
+real_chown = shutil.which('chown', path=path)
+
+args = list()
+
+found = False
+for i in sys.argv:
+ if i.startswith("-"):
+ args.append(i)
+ continue
+ if not found:
+ args.append("root:root")
+ found = True
+ else:
+ args.append(i)
+
+os.execv(real_chown, args)
diff --git a/poky/scripts/pybootchartgui/pybootchartgui/parsing.py b/poky/scripts/pybootchartgui/pybootchartgui/parsing.py
index b42dac6b88..9d6787ec5a 100644
--- a/poky/scripts/pybootchartgui/pybootchartgui/parsing.py
+++ b/poky/scripts/pybootchartgui/pybootchartgui/parsing.py
@@ -128,7 +128,7 @@ class Trace:
def compile(self, writer):
def find_parent_id_for(pid):
- if pid is 0:
+ if pid == 0:
return 0
ppid = self.parent_map.get(pid)
if ppid: