author     Andrew Geissler <geissonator@yahoo.com>    2020-08-21 23:58:33 +0300
committer  Andrew Geissler <geissonator@yahoo.com>    2020-09-04 02:23:19 +0300
commit     635e0e4637e40ba03f69204265427550fd404f4c (patch)
tree       0d690d7b8b3c0b0e5ac9b807152b6cc0560d2037 /poky/meta/classes
parent     afce0b5cf42bd06e31b142730c0acf3d60a8cf2f (diff)
download   openbmc-635e0e4637e40ba03f69204265427550fd404f4c.tar.xz
poky: subtree update:23deb29c1b..c67f57c09e
Adrian Bunk (1):
librsvg: Upgrade 2.40.20 -> 2.40.21
Alejandro Hernandez (1):
musl: Upgrade to latest release 1.2.1
Alex Kiernan (8):
systemd: Upgrade v245.6 -> v246
systemd: Move musl patches to SRC_URI_MUSL
systemd: Fix path to modules-load.d et al
nfs-utils: Drop StandardError=syslog from systemd unit
openssh: Drop StandardError=syslog from systemd unit
volatile-binds: Drop StandardOutput=syslog from systemd unit
systemd: Upgrade v246 -> v246.1
systemd: Upgrade v246.1 -> v246.2
Alexander Kanavin (16):
sysvinit: update 2.96 -> 2.97
kbd: update 2.2.0 -> 2.3.0
gnu-config: update to latest revision
go: update 1.14.4 -> 1.14.6
meson: update 0.54.3 -> 0.55.0
nasm: update 2.14.02 -> 2.15.03
glib-2.0: correct build with latest meson
rsync: update 3.2.1 -> 3.2.2
vala: update 0.48.6 -> 0.48.7
logrotate: update 3.16.0 -> 3.17.0
mesa: update 20.1.2 -> 20.1.4
libcap: update 2.36 -> 2.41
net-tools: fix upstream version check
meson.bbclass: add a cups-config entry
oeqa: write @OETestTag content into json test reports for each case
libhandy: upstream has moved to gnome
Alistair Francis (1):
binutils: Remove RISC-V PIE patch
Andrei Gherzan (2):
initscripts: Fix various shellcheck warnings in populate-volatile.sh
initscripts: Fix populate-volatile.sh bug when file/dir exists
Anuj Mittal (4):
harfbuzz: upgrade 2.6.8 -> 2.7.1
sqlite3: upgrade 3.32.3 -> 3.33.0
stress-ng: upgrade 0.11.17 -> 0.11.18
x264: upgrade to latest revision
Armin Kuster (1):
glibc: Security fix for CVE-2020-6096
Bruce Ashfield (25):
linux-yocto/5.4: update to v5.4.53
linux-yocto/5.4: fix perf build with binutils 2.35
kernel/yocto: allow dangling KERNEL_FEATURES
linux-yocto/5.4: update to v5.4.54
systemtap: update to 4.3 latest
kernel-devsrc: fix x86 (32bit) on target module build
lttng-modules: update to 2.12.2 (fixes v5.8+ builds)
yocto-bsps: update reference BSPs to 5.4.54
kernel-yocto: enhance configuration queue analysis capabilities
strace: update to 5.8 (fix build against v5.8 uapi headers)
linux-yocto-rt/5.4: update to rt32
linux-yocto/5.4: update to v5.4.56
linux-yocto/5.4: update to v5.4.57
kernel-yocto: set cwd before querying the meta data dir
kernel-yocto: make # is not set matching more precise
kernel-yocto: split meta data gathering into patch and config phases
make-mod-scripts: add HOSTCXX definitions and gmp-native dependency
kernel-devsrc: fix on target modules prepare for ARM
kernel-devsrc: 5.8 + gcc10 require gcc-plugins + libmpc-dev
linux-yocto/5.4: update to v5.4.58
linux-yocto/5.4: perf cs-etm: Move definition of 'traceid_list' global variable from header file
libc-headers: update to v5.8
linux-yocto: introduce 5.8 reference kernel
kernel-yocto/5.8: add gmp-native dependency
linux-yocto/5.8: update to v5.8.1
Chandana kalluri (1):
qemu.inc: Use virtual/libgl instead of mesa
Changhyeok Bae (2):
iproute2: upgrade 5.7.0 -> 5.8.0
ethtool: upgrade 5.7 -> 5.8
Changqing Li (5):
layer.conf: fix adwaita-icon-theme signature change problem
gtk-icon-cache.bbclass: add features_check
gcc-runtime.inc: fix m32 compile fail with x86-64 compiler
libffi: fix multilib header conflict
gpgme: fix multilib header conflict
Chen Qi (3):
grub: set CVE_PRODUCT to grub2
runqemu: fix permission check of /dev/vhost-net
fribidi: extend CVE_PRODUCT to include fribidi
Chris Laplante (11):
lib/oe/log_colorizer.py: add LogColorizerProxyProgressHandler
bitbake: build: print traceback if progress handler can't be created
bitbake: build: create_progress_handler: fix calling 'get' on NoneType
bitbake: progress: modernize syntax, format
bitbake: progress: fix hypothetical NameError if 'progress' isn't set
bitbake: progress: filter ANSI escape codes before looking for progress text
bitbake: tests/color: add test suite for ANSI color code filtering
bitbake: data: emit filename/lineno information for shell functions
bitbake: build: print a backtrace when a Bash shell function fails
bitbake: build: print a backtrace with the original metadata locations of Bash shell funcs
bitbake: build: make shell traps less chatty when 'bitbake -v' is used
Dan Callaghan (1):
stress-ng: create a symlink for /usr/bin/stress
Daniel Ammann (1):
wic: fix typo
Daniel Gomez (1):
allarch: Add missing allarch ttf-bitstream-vera
Diego Sueiro (1):
cml1: Add the option to choose the .config root dir
Dmitry Baryshkov (3):
mesa: enable freedreno Vulkan driver if freedreno is enabled
arch-armv8-2a.inc: add tune include for armv8.2a
tune-cortexa55.inc: switch to using armv8.2a include file
Fredrik Gustafsson (13):
package_manager: Move to package_manager/__init__.py
rpm: Move manifest to its own subdir
ipk: Move ipk manifest to its own subdir
deb: Move deb manifest to its own subdir
rpm: Move rootfs to its own dir
ipk: Move rootfs to its own dir
deb: Move rootfs to its own dir
rpm: Move sdk to its own dir
ipk: Move sdk to its own dir
deb: Move sdk to its own dir
rpm: Move package manager to its own dir
ipk: Move package manager to its own dir
deb: Move package manager to its own dir
Guillaume Champagne (1):
weston: add missing packageconfigs
Jeremy Puhlman (1):
gobject-introspection: disable scanner caching in install
Joe Slater (3):
libdnf: allow reproducible binary builds
gconf: use python3
gcr: make sure gcr-oids.h is generated
Jonathan Richardson (1):
cortex-m0plus.inc: Add tuning for cortex M0 plus
Joshua Watt (3):
bitbake: bitbake: command: Handle multiconfig in findSigInfo
lib/oe/reproducible.py: Fix git HEAD check
perl: Add check for non-arch Storable.pm file
Khasim Mohammed (2):
wic/bootimg-efi: Add support for IMAGE_BOOT_FILES
wic/bootimg-efi: Update docs for IMAGE_BOOT_FILES support in bootimg-efi
Khem Raj (23):
qemumips: Use 34Kf CPU emulation
libunwind: Backport a fix for -fno-common option to compile
dhcp: Use -fcommon compiler option
inetutils: Fix build with -fno-common
libomxil: Use -fcommon compiler option
kexec-tools: Fix build with -fno-common
distcc: Fix build with -fno-common
libacpi: Fix build with -fno-common
minicom: Fix build when using -fno-common
binutils: Upgrade to 2.35 release
xf86-video-intel: Fix build with -fno-common
glibc: Upgrade to 2.32 release
go: Upgrade to 1.14.7
webkitgtk: Upgrade to 2.28.4
kexec-tools: Fix additional duplicate symbols on aarch64/x86_64 builds
gcc: Upgrade to 10.2.0
buildcpio.py: Apply patch to fix build with -fno-common
buildgalculator: Patch to fix build with -fno-common
localedef: Update to include floatn.h fix
xserver-xorg: Fix build with -fno-common/mips
binutils: Let crosssdk gold linker generate 4096 bytes long .interp section
gcc-cross-canadian: Correct the regexp to delete versioned gcc binary
curl: Upgrade to 7.72.0
Konrad Weihmann (2):
rootfs-post: remove trailing blanks from tasks
cve-update: handle baseMetricV2 as optional
Lee Chee Yang (4):
buildhistory: use pid for temporary txt file name
checklayer: check layer in BBLAYERS before test
ghostscript: fix CVE-2020-15900
qemu: fix CVE-2020-15863
Mark Hatle (1):
package.bbclass: Sort shlib2 output for hash equivalency
Martin Jansa (2):
net-tools: upgrade to latest revision in upstream repo instead of old debian snapshot
perf: backport a fix for confusing non-fatal error
Matt Madison (1):
cogl-1.0: correct X11 dependencies
Matthew (3):
ltp: remove --with-power-management-testsuite from EXTRA_OECONF
ltp: remove OOM tests from runtest/mm
ltp: make copyFrom scp command non-fatal
Mikko Rapeli (2):
alsa-topology-conf: use ${datadir} in do_install()
alsa-ucm-conf: use ${datadir} in do_install()
Ming Liu (3):
conf/machine: set UBOOT_MACHINE for qemumips and qemumips64
multilib.conf: add u-boot to NON_MULTILIB_RECIPES
libubootenv: uprev to v0.3
Mingli Yu (2):
ccache: Upgrade to 3.7.11
Revert "python3: define a profile directory path"
Naoto Yamaguchi (1):
patch.py: Change to stricter fuzz detection
Nathan Rossi (4):
libexif: Enable native and nativesdk
cmake.bbclass: Rework compiler program variables for allarch
python3: Improve handling of python3 manifest generation
python3-manifest.json: Updates
Oleksandr Kravchuk (9):
python3-setuptools: update to 49.2.0
bash-completion: update to 2.11
python3: update to 3.8.5
re2c: update to 2.0
diffoscope: update to 153
json-c: update to 0.15
git: update 2.28.0
libwpe: update to 1.7.1
python3-setuptools: update to 49.3.1
Richard Purdie (20):
perl: Avoid race continually rebuilding miniperl
gcc: Fix mangled patch
bitbake: server/process: Fix UI first connection tracking
bitbake: server/process: Account for xmlrpc connections
Revert "lib/oe/log_colorizer.py: add LogColorizerProxyProgressHandler"
lib/package_manager: Fix missing imports
populate_sdk_ext: Ensure buildtools doesn't corrupt OECORE_NATIVE_SYSROOT
buildtools: Handle generic environment setup injection
uninative: Handle PREMIRRORS generically
maintainers: Update entries for Mark Hatle
gcr: Fix patch Upstream-Status from v2 patch
bitbake: server/process: Remove pointless process forking
bitbake: server/process: Simplify idle callback handler function
bitbake: server/process: Pass timeout/xmlrpc parameters directly to the server
bitbake: server/process: Add extra logfile flushing
packagefeed-stability: Remove as obsolete
build-compare: Drop recipe
qemu: Upgrade 5.0.0 -> 5.1.0
selftest/tinfoil: Increase wait event timeout
lttng-tools: upgrade 2.12.1 -> 2.12.2
Ross Burton (3):
popt: upgrade to 1.18
conf/machine: set UBOOT_MACHINE for qemuarm and qemuarm64
gcc: backport a fix for out-of-line atomics on aarch64
TeohJayShen (2):
oeqa/manual/bsp-hw.json: remove shutdown_system test
oeqa/manual/bsp-hw.json: remove X_server_can_start_up_with_runlevel_5_boot test
Trevor Gamblin (1):
llvm: upgrade 9.0.1 -> 10.0.1
Tyler Hicks (1):
kernel-devicetree: Fix intermittent build failures caused by DTB builds
Usama Arif (3):
kernel-fitimage: build configuration for image tree when dtb is not present
oeqa/selftest/imagefeatures: Add testcase for fitImage
ref-manual: Add documentation for kernel-fitimage
Vasyl Vavrychuk (1):
runqemu: Check gtk or sdl option is passed together with gl or gl-es options.
Yi Zhao (1):
pbzip2: extend for nativesdk
Zhang Qiang (1):
kernel.bbclass: Configuration for environment with HOSTCXX
hongxu (1):
nativesdk-rpm: adjust RPM_CONFIGDIR paths dynamically
zangrc (8):
libevdev: upgrade 1.9.0 -> 1.9.1
mpg123: upgrade 1.26.2 -> 1.26.3
flex: Refresh patch
stress-ng: upgrade 0.11.15 -> 0.11.17
sudo: upgrade 1.9.1 -> 1.9.2
libcap: Upgrade 2.41 -> 2.42
libinput: Upgrade 1.15.6 -> 1.16.0
python3-setuptools: Upgrade 49.2.0 -> 49.2.1
Signed-off-by: Andrew Geissler <geissonator@yahoo.com>
Change-Id: Ic7fa1e8484c1c7722a70c75608aa4ab21fa7d755
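Several of the fixes above guard shared files against concurrent writers; for example, "buildhistory: use pid for temporary txt file name" switches the temporary package lists to per-process names. A minimal Python sketch of that pattern (the function name here is illustrative, not the actual bbclass code):

```python
import os

def temp_pkg_list_names(workdir):
    """Build per-process temporary file names so concurrent builds
    writing into the same directory cannot clobber each other's lists."""
    pid = os.getpid()
    return (
        os.path.join(workdir, "bh_installed_pkgs_%s.txt" % pid),
        os.path.join(workdir, "bh_installed_pkgs_deps_%s.txt" % pid),
    )

# Two processes calling this get distinct file names for the same workdir.
pkgs, deps = temp_pkg_list_names("/tmp/work")
```

The shell side of the class then has to interpolate the same PID when consuming the files, which is why the diff below also touches buildhistory_get_installed().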
Diffstat (limited to 'poky/meta/classes')
-rw-r--r--  poky/meta/classes/buildhistory.bbclass            |  11
-rw-r--r--  poky/meta/classes/cmake.bbclass                   |  36
-rw-r--r--  poky/meta/classes/cml1.bbclass                    |  18
-rw-r--r--  poky/meta/classes/gtk-icon-cache.bbclass          |   5
-rw-r--r--  poky/meta/classes/kernel-devicetree.bbclass       |   2
-rw-r--r--  poky/meta/classes/kernel-fitimage.bbclass         |  29
-rw-r--r--  poky/meta/classes/kernel-yocto.bbclass            | 256
-rw-r--r--  poky/meta/classes/kernel.bbclass                  |   2
-rw-r--r--  poky/meta/classes/meson.bbclass                   |   1
-rw-r--r--  poky/meta/classes/package.bbclass                 |   2
-rw-r--r--  poky/meta/classes/packagefeed-stability.bbclass   | 252
-rw-r--r--  poky/meta/classes/populate_sdk_ext.bbclass        |   3
-rw-r--r--  poky/meta/classes/rootfs-postcommands.bbclass     |   6
-rw-r--r--  poky/meta/classes/rootfsdebugfiles.bbclass        |   2
-rw-r--r--  poky/meta/classes/uninative.bbclass               |  13
15 files changed, 281 insertions, 357 deletions
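The cmake.bbclass hunk in the diff below replaces the anonymous-python compiler setup with a small mapping function (`oecmake_map_compiler`) that splits CC/CXX into a compiler and an optional ccache launcher. Stripped of the BitBake datastore, its behavior amounts to this sketch (standalone name `map_compiler` is illustrative):

```python
def map_compiler(command):
    """Split a compiler command such as 'ccache gcc' into a
    (compiler, launcher) pair; a bare 'gcc' gets an empty launcher."""
    args = command.split()
    if args[0] == "ccache":
        # CMake wants the launcher separate from the compiler binary.
        return args[1], args[0]
    return args[0], ""
```

Moving this logic into a function evaluated through variable defaults (rather than an anonymous python block) is what lets the allarch override blank the variables without perturbing signature hashes.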
diff --git a/poky/meta/classes/buildhistory.bbclass b/poky/meta/classes/buildhistory.bbclass index a4288ef9e..805e976ac 100644 --- a/poky/meta/classes/buildhistory.bbclass +++ b/poky/meta/classes/buildhistory.bbclass @@ -429,8 +429,8 @@ def buildhistory_list_installed(d, rootfs_type="image"): from oe.sdk import sdk_list_installed_packages from oe.utils import format_pkg_list - process_list = [('file', 'bh_installed_pkgs.txt'),\ - ('deps', 'bh_installed_pkgs_deps.txt')] + process_list = [('file', 'bh_installed_pkgs_%s.txt' % os.getpid()),\ + ('deps', 'bh_installed_pkgs_deps_%s.txt' % os.getpid())] if rootfs_type == "image": pkgs = image_list_installed_packages(d) @@ -460,9 +460,10 @@ buildhistory_get_installed() { # Get list of installed packages pkgcache="$1/installed-packages.tmp" - cat ${WORKDIR}/bh_installed_pkgs.txt | sort > $pkgcache && rm ${WORKDIR}/bh_installed_pkgs.txt + cat ${WORKDIR}/bh_installed_pkgs_${PID}.txt | sort > $pkgcache && rm ${WORKDIR}/bh_installed_pkgs_${PID}.txt cat $pkgcache | awk '{ print $1 }' > $1/installed-package-names.txt + if [ -s $pkgcache ] ; then cat $pkgcache | awk '{ print $2 }' | xargs -n1 basename > $1/installed-packages.txt else @@ -471,8 +472,8 @@ buildhistory_get_installed() { # Produce dependency graph # First, quote each name to handle characters that cause issues for dot - sed 's:\([^| ]*\):"\1":g' ${WORKDIR}/bh_installed_pkgs_deps.txt > $1/depends.tmp && - rm ${WORKDIR}/bh_installed_pkgs_deps.txt + sed 's:\([^| ]*\):"\1":g' ${WORKDIR}/bh_installed_pkgs_deps_${PID}.txt > $1/depends.tmp && + rm ${WORKDIR}/bh_installed_pkgs_deps_${PID}.txt # Remove lines with rpmlib(...) and config(...) dependencies, change the # delimiter from pipe to "->", set the style for recommend lines and # turn versioned dependencies into edge labels. 
diff --git a/poky/meta/classes/cmake.bbclass b/poky/meta/classes/cmake.bbclass index 8243f7ce8..7c055e8a3 100644 --- a/poky/meta/classes/cmake.bbclass +++ b/poky/meta/classes/cmake.bbclass @@ -21,23 +21,6 @@ python() { d.setVarFlag("do_compile", "progress", r"outof:^\[(\d+)/(\d+)\]\s+") else: bb.fatal("Unknown CMake Generator %s" % generator) - - # C/C++ Compiler (without cpu arch/tune arguments) - if not d.getVar('OECMAKE_C_COMPILER'): - cc_list = d.getVar('CC').split() - if cc_list[0] == 'ccache': - d.setVar('OECMAKE_C_COMPILER_LAUNCHER', cc_list[0]) - d.setVar('OECMAKE_C_COMPILER', cc_list[1]) - else: - d.setVar('OECMAKE_C_COMPILER', cc_list[0]) - - if not d.getVar('OECMAKE_CXX_COMPILER'): - cxx_list = d.getVar('CXX').split() - if cxx_list[0] == 'ccache': - d.setVar('OECMAKE_CXX_COMPILER_LAUNCHER', cxx_list[0]) - d.setVar('OECMAKE_CXX_COMPILER', cxx_list[1]) - else: - d.setVar('OECMAKE_CXX_COMPILER', cxx_list[0]) } OECMAKE_AR ?= "${AR}" @@ -51,8 +34,23 @@ OECMAKE_CXX_LINK_FLAGS ?= "${HOST_CC_ARCH} ${TOOLCHAIN_OPTIONS} ${CXXFLAGS} ${LD CXXFLAGS += "${HOST_CC_ARCH} ${TOOLCHAIN_OPTIONS}" CFLAGS += "${HOST_CC_ARCH} ${TOOLCHAIN_OPTIONS}" -OECMAKE_C_COMPILER_LAUNCHER ?= "" -OECMAKE_CXX_COMPILER_LAUNCHER ?= "" +def oecmake_map_compiler(compiler, d): + args = d.getVar(compiler).split() + if args[0] == "ccache": + return args[1], args[0] + return args[0], "" + +# C/C++ Compiler (without cpu arch/tune arguments) +OECMAKE_C_COMPILER ?= "${@oecmake_map_compiler('CC', d)[0]}" +OECMAKE_C_COMPILER_LAUNCHER ?= "${@oecmake_map_compiler('CC', d)[1]}" +OECMAKE_CXX_COMPILER ?= "${@oecmake_map_compiler('CXX', d)[0]}" +OECMAKE_CXX_COMPILER_LAUNCHER ?= "${@oecmake_map_compiler('CXX', d)[1]}" + +# clear compiler vars for allarch to avoid sig hash difference +OECMAKE_C_COMPILER_allarch = "" +OECMAKE_C_COMPILER_LAUNCHER_allarch = "" +OECMAKE_CXX_COMPILER_allarch = "" +OECMAKE_CXX_COMPILER_LAUNCHER_allarch = "" OECMAKE_RPATH ?= "" OECMAKE_PERLNATIVE_DIR ??= "" diff --git 
a/poky/meta/classes/cml1.bbclass b/poky/meta/classes/cml1.bbclass index 8ab240589..9b9866f4c 100644 --- a/poky/meta/classes/cml1.bbclass +++ b/poky/meta/classes/cml1.bbclass @@ -27,12 +27,16 @@ CROSS_CURSES_INC = '-DCURSES_LOC="<curses.h>"' TERMINFO = "${STAGING_DATADIR_NATIVE}/terminfo" KCONFIG_CONFIG_COMMAND ??= "menuconfig" +KCONFIG_CONFIG_ROOTDIR ??= "${B}" python do_menuconfig() { import shutil + config = os.path.join(d.getVar('KCONFIG_CONFIG_ROOTDIR'), ".config") + configorig = os.path.join(d.getVar('KCONFIG_CONFIG_ROOTDIR'), ".config.orig") + try: - mtime = os.path.getmtime(".config") - shutil.copy(".config", ".config.orig") + mtime = os.path.getmtime(config) + shutil.copy(config, configorig) except OSError: mtime = 0 @@ -42,7 +46,7 @@ python do_menuconfig() { # FIXME this check can be removed when the minimum bitbake version has been bumped if hasattr(bb.build, 'write_taint'): try: - newmtime = os.path.getmtime(".config") + newmtime = os.path.getmtime(config) except OSError: newmtime = 0 @@ -52,7 +56,7 @@ python do_menuconfig() { } do_menuconfig[depends] += "ncurses-native:do_populate_sysroot" do_menuconfig[nostamp] = "1" -do_menuconfig[dirs] = "${B}" +do_menuconfig[dirs] = "${KCONFIG_CONFIG_ROOTDIR}" addtask menuconfig after do_configure python do_diffconfig() { @@ -61,8 +65,8 @@ python do_diffconfig() { workdir = d.getVar('WORKDIR') fragment = workdir + '/fragment.cfg' - configorig = '.config.orig' - config = '.config' + configorig = os.path.join(d.getVar('KCONFIG_CONFIG_ROOTDIR'), ".config.orig") + config = os.path.join(d.getVar('KCONFIG_CONFIG_ROOTDIR'), ".config") try: md5newconfig = bb.utils.md5_file(configorig) @@ -85,5 +89,5 @@ python do_diffconfig() { } do_diffconfig[nostamp] = "1" -do_diffconfig[dirs] = "${B}" +do_diffconfig[dirs] = "${KCONFIG_CONFIG_ROOTDIR}" addtask diffconfig diff --git a/poky/meta/classes/gtk-icon-cache.bbclass b/poky/meta/classes/gtk-icon-cache.bbclass index dd394af27..340a28385 100644 --- 
a/poky/meta/classes/gtk-icon-cache.bbclass +++ b/poky/meta/classes/gtk-icon-cache.bbclass @@ -1,5 +1,10 @@ FILES_${PN} += "${datadir}/icons/hicolor" +#gtk+3 reqiure GTK3DISTROFEATURES, DEPENDS on it make all the +#recipes inherit this class require GTK3DISTROFEATURES +inherit features_check +ANY_OF_DISTRO_FEATURES = "${GTK3DISTROFEATURES}" + DEPENDS +=" ${@['hicolor-icon-theme', '']['${BPN}' == 'hicolor-icon-theme']} \ ${@['gdk-pixbuf', '']['${BPN}' == 'gdk-pixbuf']} \ ${@['gtk+3', '']['${BPN}' == 'gtk+3']} \ diff --git a/poky/meta/classes/kernel-devicetree.bbclass b/poky/meta/classes/kernel-devicetree.bbclass index 522c46575..81dda8003 100644 --- a/poky/meta/classes/kernel-devicetree.bbclass +++ b/poky/meta/classes/kernel-devicetree.bbclass @@ -52,7 +52,7 @@ do_configure_append() { do_compile_append() { for dtbf in ${KERNEL_DEVICETREE}; do dtb=`normalize_dtb "$dtbf"` - oe_runmake $dtb + oe_runmake $dtb CC="${KERNEL_CC} $cc_extra " LD="${KERNEL_LD}" ${KERNEL_EXTRA_ARGS} done } diff --git a/poky/meta/classes/kernel-fitimage.bbclass b/poky/meta/classes/kernel-fitimage.bbclass index 72b05ff8d..fa4ea6fee 100644 --- a/poky/meta/classes/kernel-fitimage.bbclass +++ b/poky/meta/classes/kernel-fitimage.bbclass @@ -257,12 +257,21 @@ fitimage_emit_section_config() { # Test if we have any DTBs at all sep="" conf_desc="" + conf_node="conf@" kernel_line="" fdt_line="" ramdisk_line="" setup_line="" default_line="" + # conf node name is selected based on dtb ID if it is present, + # otherwise its selected based on kernel ID + if [ -n "${3}" ]; then + conf_node=$conf_node${3} + else + conf_node=$conf_node${2} + fi + if [ -n "${2}" ]; then conf_desc="Linux kernel" sep=", " @@ -287,12 +296,18 @@ fitimage_emit_section_config() { fi if [ "${6}" = "1" ]; then - default_line="default = \"conf@${3}\";" + # default node is selected based on dtb ID if it is present, + # otherwise its selected based on kernel ID + if [ -n "${3}" ]; then + default_line="default = \"conf@${3}\";" + else + 
default_line="default = \"conf@${2}\";" + fi fi cat << EOF >> ${1} ${default_line} - conf@${3} { + $conf_node { description = "${6} ${conf_desc}"; ${kernel_line} ${fdt_line} @@ -434,6 +449,13 @@ fitimage_assemble() { # fitimage_emit_section_maint ${1} confstart + # kernel-fitimage.bbclass currently only supports a single kernel (no less or + # more) to be added to the FIT image along with 0 or more device trees and + # 0 or 1 ramdisk. + # If a device tree is to be part of the FIT image, then select + # the default configuration to be used is based on the dtbcount. If there is + # no dtb present than select the default configuation to be based on + # the kernelcount. if [ -n "${DTBS}" ]; then i=1 for DTB in ${DTBS}; do @@ -445,6 +467,9 @@ fitimage_assemble() { fi i=`expr ${i} + 1` done + else + defaultconfigcount=1 + fitimage_emit_section_config ${1} "${kernelcount}" "" "${ramdiskcount}" "${setupcount}" "${defaultconfigcount}" fi fitimage_emit_section_maint ${1} sectend diff --git a/poky/meta/classes/kernel-yocto.bbclass b/poky/meta/classes/kernel-yocto.bbclass index 3311f6e84..96ea61225 100644 --- a/poky/meta/classes/kernel-yocto.bbclass +++ b/poky/meta/classes/kernel-yocto.bbclass @@ -87,6 +87,13 @@ def get_machine_branch(d, default): do_kernel_metadata() { set +e + + if [ -n "$1" ]; then + mode="$1" + else + mode="patch" + fi + cd ${S} export KMETA=${KMETA} @@ -120,14 +127,13 @@ do_kernel_metadata() { if [ -n "${KBUILD_DEFCONFIG}" ]; then if [ -f "${S}/arch/${ARCH}/configs/${KBUILD_DEFCONFIG}" ]; then if [ -f "${WORKDIR}/defconfig" ]; then - # If the two defconfig's are different, warn that we didn't overwrite the - # one already placed in WORKDIR by the fetcher. + # If the two defconfig's are different, warn that we overwrote the + # one already placed in WORKDIR cmp "${WORKDIR}/defconfig" "${S}/arch/${ARCH}/configs/${KBUILD_DEFCONFIG}" if [ $? -ne 0 ]; then - bbwarn "defconfig detected in WORKDIR. 
${KBUILD_DEFCONFIG} skipped" - else - cp -f ${S}/arch/${ARCH}/configs/${KBUILD_DEFCONFIG} ${WORKDIR}/defconfig + bbdebug 1 "detected SRC_URI or unpatched defconfig in WORKDIR. ${KBUILD_DEFCONFIG} copied over it" fi + cp -f ${S}/arch/${ARCH}/configs/${KBUILD_DEFCONFIG} ${WORKDIR}/defconfig else cp -f ${S}/arch/${ARCH}/configs/${KBUILD_DEFCONFIG} ${WORKDIR}/defconfig fi @@ -137,17 +143,19 @@ do_kernel_metadata() { fi fi - # was anyone trying to patch the kernel meta data ?, we need to do - # this here, since the scc commands migrate the .cfg fragments to the - # kernel source tree, where they'll be used later. - check_git_config - patches="${@" ".join(find_patches(d,'kernel-meta'))}" - for p in $patches; do - ( - cd ${WORKDIR}/kernel-meta - git am -s $p - ) - done + if [ "$mode" = "patch" ]; then + # was anyone trying to patch the kernel meta data ?, we need to do + # this here, since the scc commands migrate the .cfg fragments to the + # kernel source tree, where they'll be used later. + check_git_config + patches="${@" ".join(find_patches(d,'kernel-meta'))}" + for p in $patches; do + ( + cd ${WORKDIR}/kernel-meta + git am -s $p + ) + done + fi sccs_from_src_uri="${@" ".join(find_sccs(d))}" patches="${@" ".join(find_patches(d,''))}" @@ -212,13 +220,40 @@ do_kernel_metadata() { fi meta_dir=$(kgit --meta) - # run1: pull all the configuration fragments, no matter where they come from - elements="`echo -n ${bsp_definition} $sccs_defconfig ${sccs} ${patches} ${KERNEL_FEATURES}`" - if [ -n "${elements}" ]; then - echo "${bsp_definition}" > ${S}/${meta_dir}/bsp_definition - scc --force -o ${S}/${meta_dir}:cfg,merge,meta ${includes} $sccs_defconfig $bsp_definition $sccs $patches ${KERNEL_FEATURES} - if [ $? -ne 0 ]; then - bbfatal_log "Could not generate configuration queue for ${KMACHINE}." 
+ KERNEL_FEATURES_FINAL="" + if [ -n "${KERNEL_FEATURES}" ]; then + for feature in ${KERNEL_FEATURES}; do + feature_found=f + for d in $includes; do + path_to_check=$(echo $d | sed 's/-I//g') + if [ "$feature_found" = "f" ] && [ -e "$path_to_check/$feature" ]; then + feature_found=t + fi + done + if [ "$feature_found" = "f" ]; then + if [ -n "${KERNEL_DANGLING_FEATURES_WARN_ONLY}" ]; then + bbwarn "Feature '$feature' not found, but KERNEL_DANGLING_FEATURES_WARN_ONLY is set" + bbwarn "This may cause runtime issues, dropping feature and allowing configuration to continue" + else + bberror "Feature '$feature' not found, this will cause configuration failures." + bberror "Check the SRC_URI for meta-data repositories or directories that may be missing" + bbfatal_log "Set KERNEL_DANGLING_FEATURES_WARN_ONLY to ignore this issue" + fi + else + KERNEL_FEATURES_FINAL="$KERNEL_FEATURES_FINAL $feature" + fi + done + fi + + if [ "$mode" = "config" ]; then + # run1: pull all the configuration fragments, no matter where they come from + elements="`echo -n ${bsp_definition} $sccs_defconfig ${sccs} ${patches} $KERNEL_FEATURES_FINAL`" + if [ -n "${elements}" ]; then + echo "${bsp_definition}" > ${S}/${meta_dir}/bsp_definition + scc --force -o ${S}/${meta_dir}:cfg,merge,meta ${includes} $sccs_defconfig $bsp_definition $sccs $patches $KERNEL_FEATURES_FINAL + if [ $? -ne 0 ]; then + bbfatal_log "Could not generate configuration queue for ${KMACHINE}." + fi fi fi @@ -229,12 +264,14 @@ do_kernel_metadata() { sccs="${bsp_definition} ${sccs}" fi - # run2: only generate patches for elements that have been passed on the SRC_URI - elements="`echo -n ${sccs} ${patches} ${KERNEL_FEATURES}`" - if [ -n "${elements}" ]; then - scc --force -o ${S}/${meta_dir}:patch --cmds patch ${includes} ${sccs} ${patches} ${KERNEL_FEATURES} - if [ $? -ne 0 ]; then - bbfatal_log "Could not generate configuration queue for ${KMACHINE}." 
+ if [ "$mode" = "patch" ]; then + # run2: only generate patches for elements that have been passed on the SRC_URI + elements="`echo -n ${sccs} ${patches} $KERNEL_FEATURES_FINAL`" + if [ -n "${elements}" ]; then + scc --force -o ${S}/${meta_dir}:patch --cmds patch ${includes} ${sccs} ${patches} $KERNEL_FEATURES_FINAL + if [ $? -ne 0 ]; then + bbfatal_log "Could not generate configuration queue for ${KMACHINE}." + fi fi fi } @@ -338,6 +375,8 @@ do_kernel_configme[depends] += "bc-native:do_populate_sysroot bison-native:do_po do_kernel_configme[depends] += "kern-tools-native:do_populate_sysroot" do_kernel_configme[dirs] += "${S} ${B}" do_kernel_configme() { + do_kernel_metadata config + # translate the kconfig_mode into something that merge_config.sh # understands case ${KCONFIG_MODE} in @@ -380,6 +419,67 @@ do_kernel_configme() { } addtask kernel_configme before do_configure after do_patch +addtask config_analysis + +do_config_analysis[depends] = "virtual/kernel:do_configure" +do_config_analysis[depends] += "kern-tools-native:do_populate_sysroot" + +CONFIG_AUDIT_FILE ?= "${WORKDIR}/config-audit.txt" +CONFIG_ANALYSIS_FILE ?= "${WORKDIR}/config-analysis.txt" + +python do_config_analysis() { + import re, string, sys, subprocess + + s = d.getVar('S') + + env = os.environ.copy() + env['PATH'] = "%s:%s%s" % (d.getVar('PATH'), s, "/scripts/util/") + env['LD'] = d.getVar('KERNEL_LD') + env['CC'] = d.getVar('KERNEL_CC') + env['ARCH'] = d.getVar('ARCH') + env['srctree'] = s + + # read specific symbols from the kernel recipe or from local.conf + # i.e.: CONFIG_ANALYSIS_pn-linux-yocto-dev = 'NF_CONNTRACK LOCALVERSION' + config = d.getVar( 'CONFIG_ANALYSIS' ) + if not config: + config = [ "" ] + else: + config = config.split() + + for c in config: + for action in ["analysis","audit"]: + if action == "analysis": + try: + analysis = subprocess.check_output(['symbol_why.py', '--dotconfig', '{}'.format( d.getVar('B') + '/.config' ), '--blame', c], cwd=s, env=env ).decode('utf-8') + 
except subprocess.CalledProcessError as e: + bb.fatal( "config analysis failed: %s" % e.output.decode('utf-8')) + + outfile = d.getVar( 'CONFIG_ANALYSIS_FILE' ) + + if action == "audit": + try: + analysis = subprocess.check_output(['symbol_why.py', '--dotconfig', '{}'.format( d.getVar('B') + '/.config' ), '--summary', '--extended', '--sanity', c], cwd=s, env=env ).decode('utf-8') + except subprocess.CalledProcessError as e: + bb.fatal( "config analysis failed: %s" % e.output.decode('utf-8')) + + outfile = d.getVar( 'CONFIG_AUDIT_FILE' ) + + if c: + outdir = os.path.dirname( outfile ) + outname = os.path.basename( outfile ) + outfile = outdir + '/'+ c + '-' + outname + + if config and os.path.isfile(outfile): + os.remove(outfile) + + with open(outfile, 'w+') as f: + f.write( analysis ) + + bb.warn( "Configuration {} executed, see: {} for details".format(action,outfile )) + if c: + bb.warn( analysis ) +} python do_kernel_configcheck() { import re, string, sys, subprocess @@ -389,57 +489,89 @@ python do_kernel_configcheck() { # meta-series for processing kmeta = d.getVar("KMETA") or "meta" if not os.path.exists(kmeta): - kmeta = "." 
+ kmeta + kmeta = subprocess.check_output(['kgit', '--meta'], cwd=d.getVar('S')).decode('utf-8').rstrip() s = d.getVar('S') env = os.environ.copy() env['PATH'] = "%s:%s%s" % (d.getVar('PATH'), s, "/scripts/util/") - env['LD'] = "${KERNEL_LD}" + env['LD'] = d.getVar('KERNEL_LD') + env['CC'] = d.getVar('KERNEL_CC') + env['ARCH'] = d.getVar('ARCH') + env['srctree'] = s try: configs = subprocess.check_output(['scc', '--configs', '-o', s + '/.kernel-meta'], env=env).decode('utf-8') except subprocess.CalledProcessError as e: bb.fatal( "Cannot gather config fragments for audit: %s" % e.output.decode("utf-8") ) - try: - subprocess.check_call(['kconf_check', '--report', '-o', - '%s/%s/cfg' % (s, kmeta), d.getVar('B') + '/.config', s, configs], cwd=s, env=env) - except subprocess.CalledProcessError: - # The configuration gathering can return different exit codes, but - # we interpret them based on the KCONF_AUDIT_LEVEL variable, so we catch - # everything here, and let the run continue. - pass - config_check_visibility = int(d.getVar("KCONF_AUDIT_LEVEL") or 0) bsp_check_visibility = int(d.getVar("KCONF_BSP_AUDIT_LEVEL") or 0) - # if config check visibility is non-zero, report dropped configuration values - mismatch_file = d.expand("${S}/%s/cfg/mismatch.txt" % kmeta) - if os.path.exists(mismatch_file): - if config_check_visibility: - with open (mismatch_file, "r") as myfile: + # if config check visibility is "1", that's the lowest level of audit. So + # we add the --classify option to the run, since classification will + # streamline the output to only report options that could be boot issues, + # or are otherwise required for proper operation. 
+    extra_params = ""
+    if config_check_visibility == 1:
+        extra_params = "--classify"
+
+    # category #1: mismatches
+    try:
+        analysis = subprocess.check_output(['symbol_why.py', '--dotconfig', '{}'.format( d.getVar('B') + '/.config' ), '--mismatches', extra_params], cwd=s, env=env ).decode('utf-8')
+    except subprocess.CalledProcessError as e:
+        bb.fatal( "config analysis failed: %s" % e.output.decode('utf-8'))
+
+    if analysis:
+        outfile = "{}/{}/cfg/mismatch.txt".format( s, kmeta )
+        if os.path.isfile(outfile):
+            os.remove(outfile)
+        with open(outfile, 'w+') as f:
+            f.write( analysis )
+
+        if config_check_visibility and os.stat(outfile).st_size > 0:
+            with open (outfile, "r") as myfile:
                 results = myfile.read()
                 bb.warn( "[kernel config]: specified values did not make it into the kernel's final configuration:\n\n%s" % results)
 
-    if bsp_check_visibility:
-        invalid_file = d.expand("${S}/%s/cfg/invalid.cfg" % kmeta)
-        if os.path.exists(invalid_file) and os.stat(invalid_file).st_size > 0:
-            with open (invalid_file, "r") as myfile:
-                results = myfile.read()
-                bb.warn( "[kernel config]: This BSP sets config options that are not offered anywhere within this kernel:\n\n%s" % results)
-
-    errors_file = d.expand("${S}/%s/cfg/fragment_errors.txt" % kmeta)
-    if os.path.exists(errors_file) and os.stat(errors_file).st_size > 0:
-        with open (errors_file, "r") as myfile:
+    # category #2: invalid fragment elements
+    extra_params = ""
+    if bsp_check_visibility > 1:
+        extra_params = "--strict"
+    try:
+        analysis = subprocess.check_output(['symbol_why.py', '--dotconfig', '{}'.format( d.getVar('B') + '/.config' ), '--invalid', extra_params], cwd=s, env=env ).decode('utf-8')
+    except subprocess.CalledProcessError as e:
+        bb.fatal( "config analysis failed: %s" % e.output.decode('utf-8'))
+
+    if analysis:
+        outfile = "{}/{}/cfg/invalid.txt".format(s,kmeta)
+        if os.path.isfile(outfile):
+            os.remove(outfile)
+        with open(outfile, 'w+') as f:
+            f.write( analysis )
+
+        if bsp_check_visibility and os.stat(outfile).st_size > 0:
+            with open (outfile, "r") as myfile:
                 results = myfile.read()
-                bb.warn( "[kernel config]: This BSP contains fragments with errors:\n\n%s" % results)
-
-    # if the audit level is greater than two, we report if a fragment has overriden
-    # a value from a base fragment. This is really only used for new kernel introduction
-    if bsp_check_visibility > 2:
-        redefinition_file = d.expand("${S}/%s/cfg/redefinition.txt" % kmeta)
-        if os.path.exists(redefinition_file) and os.stat(redefinition_file).st_size > 0:
-            with open (redefinition_file, "r") as myfile:
+                bb.warn( "[kernel config]: This BSP contains fragments with warnings:\n\n%s" % results)
+
+    # category #3: redefined options (this is pretty verbose and is debug only)
+    try:
+        analysis = subprocess.check_output(['symbol_why.py', '--dotconfig', '{}'.format( d.getVar('B') + '/.config' ), '--sanity'], cwd=s, env=env ).decode('utf-8')
+    except subprocess.CalledProcessError as e:
+        bb.fatal( "config analysis failed: %s" % e.output.decode('utf-8'))
+
+    if analysis:
+        outfile = "{}/{}/cfg/redefinition.txt".format(s,kmeta)
+        if os.path.isfile(outfile):
+            os.remove(outfile)
+        with open(outfile, 'w+') as f:
+            f.write( analysis )
+
+        # if the audit level is greater than two, we report if a fragment has overriden
+        # a value from a base fragment. This is really only used for new kernel introduction
+        if bsp_check_visibility > 2 and os.stat(outfile).st_size > 0:
+            with open (outfile, "r") as myfile:
                 results = myfile.read()
                 bb.warn( "[kernel config]: This BSP has configuration options defined in more than one config, with differing values:\n\n%s" % results)
 }
diff --git a/poky/meta/classes/kernel.bbclass b/poky/meta/classes/kernel.bbclass
index cf43a5d60..e2ceb6a33 100644
--- a/poky/meta/classes/kernel.bbclass
+++ b/poky/meta/classes/kernel.bbclass
@@ -212,6 +212,8 @@ UBOOT_LOADADDRESS ?= "${UBOOT_ENTRYPOINT}"
 KERNEL_EXTRA_ARGS ?= ""
 
 EXTRA_OEMAKE = " HOSTCC="${BUILD_CC} ${BUILD_CFLAGS} ${BUILD_LDFLAGS}" HOSTCPP="${BUILD_CPP}""
+EXTRA_OEMAKE += " HOSTCXX="${BUILD_CXX} ${BUILD_CXXFLAGS} ${BUILD_LDFLAGS}""
+
 KERNEL_ALT_IMAGETYPE ??= ""
 
 copy_initramfs() {
diff --git a/poky/meta/classes/meson.bbclass b/poky/meta/classes/meson.bbclass
index ff52d20e5..83aa854b7 100644
--- a/poky/meta/classes/meson.bbclass
+++ b/poky/meta/classes/meson.bbclass
@@ -98,6 +98,7 @@ strip = ${@meson_array('STRIP', d)}
 readelf = ${@meson_array('READELF', d)}
 pkgconfig = 'pkg-config'
 llvm-config = 'llvm-config${LLVMVERSION}'
+cups-config = 'cups-config'
 
 [properties]
 needs_exe_wrapper = true
diff --git a/poky/meta/classes/package.bbclass b/poky/meta/classes/package.bbclass
index f8dc1bb46..7a36262eb 100644
--- a/poky/meta/classes/package.bbclass
+++ b/poky/meta/classes/package.bbclass
@@ -1936,7 +1936,7 @@ python package_do_shlibs() {
         shlibs_file = os.path.join(shlibswork_dir, pkg + ".list")
         if len(sonames):
             with open(shlibs_file, 'w') as fd:
-                for s in sonames:
+                for s in sorted(sonames):
                     if s[0] in shlib_provider and s[1] in shlib_provider[s[0]]:
                         (old_pkg, old_pkgver) = shlib_provider[s[0]][s[1]]
                         if old_pkg != pkg:
diff --git a/poky/meta/classes/packagefeed-stability.bbclass b/poky/meta/classes/packagefeed-stability.bbclass
deleted file mode 100644
index 564860256..000000000
--- a/poky/meta/classes/packagefeed-stability.bbclass
+++ /dev/null
@@ -1,252 +0,0 @@
-# Class to avoid copying packages into the feed if they haven't materially changed
-#
-# Copyright (C) 2015 Intel Corporation
-# Released under the MIT license (see COPYING.MIT for details)
-#
-# This class effectively intercepts packages as they are written out by
-# do_package_write_*, causing them to be written into a different
-# directory where we can compare them to whatever older packages might
-# be in the "real" package feed directory, and avoid copying the new
-# package to the feed if it has not materially changed. The idea is to
-# avoid unnecessary churn in the packages when dependencies trigger task
-# reexecution (and thus repackaging). Enabling the class is simple:
-#
-# INHERIT += "packagefeed-stability"
-#
-# Caveats:
-# 1) Latest PR values in the build system may not match those in packages
-#    seen on the target (naturally)
-# 2) If you rebuild from sstate without the existing package feed present,
-#    you will lose the "state" of the package feed i.e. the preserved old
-#    package versions. Not the end of the world, but would negate the
-#    entire purpose of this class.
-#
-# Note that running -c cleanall on a recipe will purposely delete the old
-# package files so they will definitely be copied the next time.
-
-python() {
-    if bb.data.inherits_class('native', d) or bb.data.inherits_class('cross', d):
-        return
-    # Package backend agnostic intercept
-    # This assumes that the package_write task is called package_write_<pkgtype>
-    # and that the directory in which packages should be written is
-    # pointed to by the variable DEPLOY_DIR_<PKGTYPE>
-    for pkgclass in (d.getVar('PACKAGE_CLASSES') or '').split():
-        if pkgclass.startswith('package_'):
-            pkgtype = pkgclass.split('_', 1)[1]
-            pkgwritefunc = 'do_package_write_%s' % pkgtype
-            sstate_outputdirs = d.getVarFlag(pkgwritefunc, 'sstate-outputdirs', False)
-            deploydirvar = 'DEPLOY_DIR_%s' % pkgtype.upper()
-            deploydirvarref = '${' + deploydirvar + '}'
-            pkgcomparefunc = 'do_package_compare_%s' % pkgtype
-
-            if bb.data.inherits_class('image', d):
-                d.appendVarFlag('do_rootfs', 'recrdeptask', ' ' + pkgcomparefunc)
-
-            if bb.data.inherits_class('populate_sdk_base', d):
-                d.appendVarFlag('do_populate_sdk', 'recrdeptask', ' ' + pkgcomparefunc)
-
-            if bb.data.inherits_class('populate_sdk_ext', d):
-                d.appendVarFlag('do_populate_sdk_ext', 'recrdeptask', ' ' + pkgcomparefunc)
-
-            d.appendVarFlag('do_build', 'recrdeptask', ' ' + pkgcomparefunc)
-
-            if d.getVarFlag(pkgwritefunc, 'noexec') or not d.getVarFlag(pkgwritefunc, 'task'):
-                # Packaging is disabled for this recipe, we shouldn't do anything
-                continue
-
-            if deploydirvarref in sstate_outputdirs:
-                deplor_dir_pkgtype = d.expand(deploydirvarref + '-prediff')
-                # Set intermediate output directory
-                d.setVarFlag(pkgwritefunc, 'sstate-outputdirs', sstate_outputdirs.replace(deploydirvarref, deplor_dir_pkgtype))
-                # Update SSTATE_DUPWHITELIST to avoid shared location conflicted error
-                d.appendVar('SSTATE_DUPWHITELIST', ' %s' % deplor_dir_pkgtype)
-
-            d.setVar(pkgcomparefunc, d.getVar('do_package_compare', False))
-            d.setVarFlags(pkgcomparefunc, d.getVarFlags('do_package_compare', False))
-            d.appendVarFlag(pkgcomparefunc, 'depends', ' build-compare-native:do_populate_sysroot')
-
-            bb.build.addtask(pkgcomparefunc, 'do_build', 'do_packagedata ' + pkgwritefunc, d)
-}
-
-# This isn't the real task function - it's a template that we use in the
-# anonymous python code above
-fakeroot python do_package_compare () {
-    currenttask = d.getVar('BB_CURRENTTASK')
-    pkgtype = currenttask.rsplit('_', 1)[1]
-    package_compare_impl(pkgtype, d)
-}
-
-def package_compare_impl(pkgtype, d):
-    import errno
-    import fnmatch
-    import glob
-    import subprocess
-    import oe.sstatesig
-
-    pn = d.getVar('PN')
-    deploydir = d.getVar('DEPLOY_DIR_%s' % pkgtype.upper())
-    prepath = deploydir + '-prediff/'
-
-    # Find out PKGR values are
-    pkgdatadir = d.getVar('PKGDATA_DIR')
-    packages = []
-    try:
-        with open(os.path.join(pkgdatadir, pn), 'r') as f:
-            for line in f:
-                if line.startswith('PACKAGES:'):
-                    packages = line.split(':', 1)[1].split()
-                    break
-    except IOError as e:
-        if e.errno == errno.ENOENT:
-            pass
-
-    if not packages:
-        bb.debug(2, '%s: no packages, nothing to do' % pn)
-        return
-
-    pkgrvalues = {}
-    rpkgnames = {}
-    rdepends = {}
-    pkgvvalues = {}
-    for pkg in packages:
-        with open(os.path.join(pkgdatadir, 'runtime', pkg), 'r') as f:
-            for line in f:
-                if line.startswith('PKGR:'):
-                    pkgrvalues[pkg] = line.split(':', 1)[1].strip()
-                if line.startswith('PKGV:'):
-                    pkgvvalues[pkg] = line.split(':', 1)[1].strip()
-                elif line.startswith('PKG_%s:' % pkg):
-                    rpkgnames[pkg] = line.split(':', 1)[1].strip()
-                elif line.startswith('RDEPENDS_%s:' % pkg):
-                    rdepends[pkg] = line.split(':', 1)[1].strip()
-
-    # Prepare a list of the runtime package names for packages that were
-    # actually produced
-    rpkglist = []
-    for pkg, rpkg in rpkgnames.items():
-        if os.path.exists(os.path.join(pkgdatadir, 'runtime', pkg + '.packaged')):
-            rpkglist.append((rpkg, pkg))
-    rpkglist.sort(key=lambda x: len(x[0]), reverse=True)
-
-    pvu = d.getVar('PV', False)
-    if '$' + '{SRCPV}' in pvu:
-        pvprefix = pvu.split('$' + '{SRCPV}', 1)[0]
-    else:
-        pvprefix = None
-
-    pkgwritetask = 'package_write_%s' % pkgtype
-    files = []
-    docopy = False
-    manifest, _ = oe.sstatesig.sstate_get_manifest_filename(pkgwritetask, d)
-    mlprefix = d.getVar('MLPREFIX')
-    # Copy recipe's all packages if one of the packages are different to make
-    # they have the same PR.
-    with open(manifest, 'r') as f:
-        for line in f:
-            if line.startswith(prepath):
-                srcpath = line.rstrip()
-                if os.path.isfile(srcpath):
-                    destpath = os.path.join(deploydir, os.path.relpath(srcpath, prepath))
-
-                    # This is crude but should work assuming the output
-                    # package file name starts with the package name
-                    # and rpkglist is sorted by length (descending)
-                    pkgbasename = os.path.basename(destpath)
-                    pkgname = None
-                    for rpkg, pkg in rpkglist:
-                        if mlprefix and pkgtype == 'rpm' and rpkg.startswith(mlprefix):
-                            rpkg = rpkg[len(mlprefix):]
-                        if pkgbasename.startswith(rpkg):
-                            pkgr = pkgrvalues[pkg]
-                            destpathspec = destpath.replace(pkgr, '*')
-                            if pvprefix:
-                                pkgv = pkgvvalues[pkg]
-                                if pkgv.startswith(pvprefix):
-                                    pkgvsuffix = pkgv[len(pvprefix):]
-                                    if '+' in pkgvsuffix:
-                                        newpkgv = pvprefix + '*+' + pkgvsuffix.split('+', 1)[1]
-                                        destpathspec = destpathspec.replace(pkgv, newpkgv)
-                            pkgname = pkg
-                            break
-                    else:
-                        bb.warn('Unable to map %s back to package' % pkgbasename)
-                        destpathspec = destpath
-
-                    oldfile = None
-                    if not docopy:
-                        oldfiles = glob.glob(destpathspec)
-                        if oldfiles:
-                            oldfile = oldfiles[-1]
-                            result = subprocess.call(['pkg-diff.sh', oldfile, srcpath])
-                            if result != 0:
-                                docopy = True
-                                bb.note("%s and %s are different, will copy packages" % (oldfile, srcpath))
-                        else:
-                            docopy = True
-                            bb.note("No old packages found for %s, will copy packages" % pkgname)
-
-                    files.append((pkgname, pkgbasename, srcpath, destpath))
-
-    # Remove all the old files and copy again if docopy
-    if docopy:
-        bb.note('Copying packages for recipe %s' % pn)
-        pcmanifest = os.path.join(prepath, d.expand('pkg-compare-manifest-${MULTIMACH_TARGET_SYS}-${PN}'))
-        try:
-            with open(pcmanifest, 'r') as f:
-                for line in f:
-                    fn = line.rstrip()
-                    if fn:
-                        try:
-                            os.remove(fn)
-                            bb.note('Removed old package %s' % fn)
-                        except OSError as e:
-                            if e.errno == errno.ENOENT:
-                                pass
-        except IOError as e:
-            if e.errno == errno.ENOENT:
-                pass
-
-        # Create new manifest
-        with open(pcmanifest, 'w') as f:
-            for pkgname, pkgbasename, srcpath, destpath in files:
-                destdir = os.path.dirname(destpath)
-                bb.utils.mkdirhier(destdir)
-                # Remove allarch rpm pkg if it is already existed (for
-                # multilib), they're identical in theory, but sstate.bbclass
-                # copies it again, so keep align with that.
-                if os.path.exists(destpath) and pkgtype == 'rpm' \
-                        and d.getVar('PACKAGE_ARCH') == 'all':
-                    os.unlink(destpath)
-                if (os.stat(srcpath).st_dev == os.stat(destdir).st_dev):
-                    # Use a hard link to save space
-                    os.link(srcpath, destpath)
-                else:
-                    shutil.copyfile(srcpath, destpath)
-                f.write('%s\n' % destpath)
-    else:
-        bb.note('Not copying packages for recipe %s' % pn)
-
-do_cleansstate[postfuncs] += "pfs_cleanpkgs"
-python pfs_cleanpkgs () {
-    import errno
-    for pkgclass in (d.getVar('PACKAGE_CLASSES') or '').split():
-        if pkgclass.startswith('package_'):
-            pkgtype = pkgclass.split('_', 1)[1]
-            deploydir = d.getVar('DEPLOY_DIR_%s' % pkgtype.upper())
-            prepath = deploydir + '-prediff'
-            pcmanifest = os.path.join(prepath, d.expand('pkg-compare-manifest-${MULTIMACH_TARGET_SYS}-${PN}'))
-            try:
-                with open(pcmanifest, 'r') as f:
-                    for line in f:
-                        fn = line.rstrip()
-                        if fn:
-                            try:
-                                os.remove(fn)
-                            except OSError as e:
-                                if e.errno == errno.ENOENT:
-                                    pass
-                os.remove(pcmanifest)
-            except IOError as e:
-                if e.errno == errno.ENOENT:
-                    pass
-}
diff --git a/poky/meta/classes/populate_sdk_ext.bbclass b/poky/meta/classes/populate_sdk_ext.bbclass
index fd0da16e7..44d99cfb9 100644
--- a/poky/meta/classes/populate_sdk_ext.bbclass
+++ b/poky/meta/classes/populate_sdk_ext.bbclass
@@ -653,7 +653,10 @@ sdk_ext_postinst() {
 		# Make sure when the user sets up the environment, they also get
 		# the buildtools-tarball tools in their path.
+		echo "# Save and reset OECORE_NATIVE_SYSROOT as buildtools may change it" >> $env_setup_script
+		echo "SAVED=\"\$OECORE_NATIVE_SYSROOT\"" >> $env_setup_script
 		echo ". $target_sdk_dir/buildtools/environment-setup*" >> $env_setup_script
+		echo "OECORE_NATIVE_SYSROOT=\"\$SAVED\"" >> $env_setup_script
 	fi
 
 	# Allow bitbake environment setup to be ran as part of this sdk.
diff --git a/poky/meta/classes/rootfs-postcommands.bbclass b/poky/meta/classes/rootfs-postcommands.bbclass
index c43b9a982..984730ebe 100644
--- a/poky/meta/classes/rootfs-postcommands.bbclass
+++ b/poky/meta/classes/rootfs-postcommands.bbclass
@@ -1,6 +1,6 @@
 # Zap the root password if debug-tweaks feature is not enabled
-ROOTFS_POSTPROCESS_COMMAND += '${@bb.utils.contains_any("IMAGE_FEATURES", [ 'debug-tweaks', 'empty-root-password' ], "", "zap_empty_root_password ; ",d)}'
+ROOTFS_POSTPROCESS_COMMAND += '${@bb.utils.contains_any("IMAGE_FEATURES", [ 'debug-tweaks', 'empty-root-password' ], "", "zap_empty_root_password; ",d)}'
 
 # Allow dropbear/openssh to accept logins from accounts with an empty password string if debug-tweaks or allow-empty-password is enabled
 ROOTFS_POSTPROCESS_COMMAND += '${@bb.utils.contains_any("IMAGE_FEATURES", [ 'debug-tweaks', 'allow-empty-password' ], "ssh_allow_empty_password; ", "",d)}'
 
@@ -12,7 +12,7 @@ ROOTFS_POSTPROCESS_COMMAND += '${@bb.utils.contains_any("IMAGE_FEATURES", [ 'deb
 ROOTFS_POSTPROCESS_COMMAND += '${@bb.utils.contains_any("IMAGE_FEATURES", [ 'debug-tweaks', 'post-install-logging' ], "postinst_enable_logging; ", "",d)}'
 
 # Create /etc/timestamp during image construction to give a reasonably sane default time setting
-ROOTFS_POSTPROCESS_COMMAND += "rootfs_update_timestamp ; "
+ROOTFS_POSTPROCESS_COMMAND += "rootfs_update_timestamp; "
 
 # Tweak the mount options for rootfs in /etc/fstab if read-only-rootfs is enabled
 ROOTFS_POSTPROCESS_COMMAND += '${@bb.utils.contains("IMAGE_FEATURES", "read-only-rootfs", "read_only_rootfs_hook; ", "",d)}'
 
@@ -26,7 +26,7 @@ ROOTFS_POSTPROCESS_COMMAND += '${@bb.utils.contains("IMAGE_FEATURES", "read-only
 APPEND_append = '${@bb.utils.contains("IMAGE_FEATURES", "read-only-rootfs", " ro", "", d)}'
 
 # Generates test data file with data store variables expanded in json format
-ROOTFS_POSTPROCESS_COMMAND += "write_image_test_data ; "
+ROOTFS_POSTPROCESS_COMMAND += "write_image_test_data; "
 
 # Write manifest
 IMAGE_MANIFEST = "${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.manifest"
diff --git a/poky/meta/classes/rootfsdebugfiles.bbclass b/poky/meta/classes/rootfsdebugfiles.bbclass
index e2ba4e364..85c7ec743 100644
--- a/poky/meta/classes/rootfsdebugfiles.bbclass
+++ b/poky/meta/classes/rootfsdebugfiles.bbclass
@@ -28,7 +28,7 @@ ROOTFS_DEBUG_FILES ?= ""
 ROOTFS_DEBUG_FILES[doc] = "Lists additional files or directories to be installed with 'cp -a' in the format 'source1 target1;source2 target2;...'"
 
-ROOTFS_POSTPROCESS_COMMAND += "rootfs_debug_files ;"
+ROOTFS_POSTPROCESS_COMMAND += "rootfs_debug_files;"
 rootfs_debug_files () {
 	#!/bin/sh -e
 	echo "${ROOTFS_DEBUG_FILES}" | sed -e 's/;/\n/g' | while read source target mode; do
diff --git a/poky/meta/classes/uninative.bbclass b/poky/meta/classes/uninative.bbclass
index 70799bbf6..316c0f061 100644
--- a/poky/meta/classes/uninative.bbclass
+++ b/poky/meta/classes/uninative.bbclass
@@ -56,12 +56,17 @@ python uninative_event_fetchloader() {
             # Our games with path manipulation of DL_DIR mean standard PREMIRRORS don't work
             # and we can't easily put 'chksum' into the url path from a url parameter with
             # the current fetcher url handling
-            ownmirror = d.getVar('SOURCE_MIRROR_URL')
-            if ownmirror:
-                localdata.appendVar("PREMIRRORS", " ${UNINATIVE_URL}${UNINATIVE_TARBALL} ${SOURCE_MIRROR_URL}/uninative/%s/${UNINATIVE_TARBALL}" % chksum)
+            premirrors = bb.fetch2.mirror_from_string(localdata.getVar("PREMIRRORS"))
+            for line in premirrors:
+                try:
+                    (find, replace) = line
+                except ValueError:
+                    continue
+                if find.startswith("http"):
+                    localdata.appendVar("PREMIRRORS", " ${UNINATIVE_URL}${UNINATIVE_TARBALL} %s/uninative/%s/${UNINATIVE_TARBALL}" % (replace, chksum))
 
             srcuri = d.expand("${UNINATIVE_URL}${UNINATIVE_TARBALL};sha256sum=%s" % chksum)
-            bb.note("Fetching uninative binary shim from %s" % srcuri)
+            bb.note("Fetching uninative binary shim %s (will check PREMIRRORS first)" % srcuri)
 
             fetcher = bb.fetch2.Fetch([srcuri], localdata, cache=False)
             fetcher.download()
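The rewritten do_kernel_configcheck above runs three symbol_why.py passes (mismatches, invalid fragments, redefinitions) that all follow one pattern: capture the tool's output, always (re)write it under cfg/, and only warn when the relevant audit level is non-zero and the file is non-empty. A minimal standalone sketch of that pattern, with fake analysis strings standing in for symbol_why.py output and no BitBake datastore (report_category and its arguments are illustrative names, not part of the class):

```python
import os
import tempfile

def report_category(analysis, outfile, visibility, message, warned):
    """Persist one audit category, then surface a warning when visible.

    Mirrors the audit flow sketched in the diff: the analysis output is
    always (re)written, but a warning is only raised when the audit
    level is non-zero and the output file is non-empty.
    """
    if not analysis:
        return
    if os.path.isfile(outfile):
        os.remove(outfile)
    with open(outfile, 'w') as f:
        f.write(analysis)
    if visibility and os.stat(outfile).st_size > 0:
        with open(outfile) as f:
            warned.append(message % f.read())

# Drive the helper with fake analysis output for two categories.
tmpdir = tempfile.mkdtemp()
warned = []
report_category("CONFIG_FOO did not appear\n",
                os.path.join(tmpdir, 'mismatch.txt'), 1,
                "[kernel config]: dropped values:\n\n%s", warned)
report_category("CONFIG_BAR redefined\n",
                os.path.join(tmpdir, 'redefinition.txt'), 0,
                "[kernel config]: redefined values:\n\n%s", warned)
# Both files are written, but only the visible category warns.
print(len(warned))  # → 1
```

In the class itself, visibility comes from KCONF_AUDIT_LEVEL and KCONF_BSP_AUDIT_LEVEL, and the warning is emitted via bb.warn() rather than collected in a list.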
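The uninative.bbclass change above derives uninative mirror entries from the user's existing PREMIRRORS pairs instead of relying only on SOURCE_MIRROR_URL. A rough standalone sketch of the pairing logic, where plain whitespace splitting stands in for bb.fetch2.mirror_from_string (assumed here to yield (find, replace) tuples) and the tarball name is a hypothetical example:

```python
def mirror_pairs(premirrors):
    """Split a PREMIRRORS-style string into (find, replace) tuples.

    Trailing odd entries simply drop out of zip(), much as the class's
    try/except ValueError skips malformed lines.
    """
    tokens = premirrors.split()
    return list(zip(tokens[0::2], tokens[1::2]))

def uninative_mirrors(premirrors, chksum, tarball="x86_64-nativesdk-libc.tar.xz"):
    # Only http(s) find-patterns are turned into uninative mirror URLs,
    # matching the find.startswith("http") check in the diff.
    entries = []
    for find, replace in mirror_pairs(premirrors):
        if find.startswith("http"):
            entries.append("%s/uninative/%s/%s" % (replace, chksum, tarball))
    return entries

mirrors = uninative_mirrors(
    "https://.*/.* http://example.com/mirror git://.*/.* /srv/mirror/",
    "abc123")
print(mirrors)  # one entry, derived from the http find-pattern only
```

The real class appends a full "find replace" pair to PREMIRRORS on localdata rather than building bare URLs; this sketch only illustrates how each http mirror base is rewritten to point at a per-checksum uninative/ subdirectory.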