author     Brad Bishop <bradleyb@fuzziesquirrel.com>   2019-02-07 00:01:43 +0300
committer  Brad Bishop <bradleyb@fuzziesquirrel.com>   2019-02-07 06:42:14 +0300
commit     977dc1ac484e0c201b30f551e5f2d1d32e27eccf (patch)
tree       e13bde6791728dc10e5f04de29858c25f2ac5fa6 /poky/meta/classes
parent     8fcf4c59a86ff23e3a2eb6101b5ffacdd50093f9 (diff)
download   openbmc-977dc1ac484e0c201b30f551e5f2d1d32e27eccf.tar.xz
poky: refresh thud: 1d987b98ed..ee7dd31944
Update poky to thud HEAD.
Alex Kiernan (2):
systemd: backport fix to stop enabling ECN
systemd: Add PACKAGECONFIG for gnutls
Alexander Kanavin (3):
lighttpd: update to 1.4.51
boost: update to 1.69.0
systemd: backport a patch to fix meson 0.49.0 issue
Alexey Brodkin (1):
wic: sdimage-bootpart: Use mmcblk0 drive instead of bogus mmcblk
André Draszik (1):
meta: remove True option to getVar calls (again)
Anuj Mittal (6):
eudev: upgrade 3.2.5 -> 3.2.7
gsettings-desktop-schemas: upgrade 3.28.0 -> 3.28.1
libatomic-ops: upgrade 7.6.6 -> 7.6.8
libpng: upgrade 1.6.35 -> 1.6.36
common-licenses: update Libpng license text
i2c-tools: upgrade 4.0 -> 4.1
Aníbal Limón (1):
meta/classes/testimage.bbclass: Only validate IMAGE_FSTYPES when is QEMU
Armin Kuster (1):
tzdata/tzcode-native: update to 2018i
Brad Bishop (1):
systemd-systemctl-native: handle Install wildcards
Bruce Ashfield (3):
kernel: use olddefconfig as the primary target for KERNEL_CONFIG_COMMAND
linux-yocto/4.18: update to v4.18.22
linux-yocto/4.18: update to v4.18.25
Changqing Li (1):
libsndfile1: Security fix CVE-2017-17456/17457 CVE-2018-19661/19662
Chen Qi (3):
package.bbclass: fix python unclosed file ResourceWarning
eSDK.py: avoid error in tearDownClass due to race condition
eSDK.py: unset BBPATH and BUILDDIR to avoid eSDK failure
Douglas Royds (6):
icecc: readlink -f on the recipe-sysroot gcc/g++
icecc: Trivial simplification
icecc: Syntax error meant that we weren't waiting for tarball generation
icecc: Don't generate recipe-sysroot symlinks at recipe-parsing time
icecc: patchelf is needed by icecc-create-env
patch: reproducibility: Fix host umask leakage
Erik Botö (1):
testimage: Add possibility to pass parameters to qemu
Federico Sauter (1):
kernel: don't assign the build user/host
Joshua Watt (1):
classes/testsdk: Split implementation into classes
Kai Kang (2):
testimage.bbclass: remove boot parameter systemd.log_target
systemd: fix compile error for x32
Kevin Hao (1):
meta-yocto-bsp: Bump to the latest stable kernel for the non-x86 BSPs
Khem Raj (6):
grub2: Fix passing null to printf formats
gnupg: Upgrade to 2.2.12 release
binutils: Fix build with clang
binutils: Upgrade to latest on 2.31 release branch
binutils: bfd doesn't handle ELF compressed data alignment
systemd: Fix memory use after free errors
Manjukumar Matha (1):
kernel.bbclass: Fix incorrect deploying of fitimage.initramfs
Marcus Cooper (3):
systemd: Security fix CVE-2018-16864
systemd: Security fix CVE-2018-16865
systemd: Security fix CVE-2018-16866
Michael Ho (1):
sstate: add support for caching shared workdir tasks
Naveen Saini (2):
linux-yocto: update genericx86* SRCREV for 4.18
linux-yocto: update genericx86* SRCREV for 4.18
Peter Kjellerstedt (2):
systemd: Correct and clean up user/group definitions
systemd: Correct a conditional add to SYSTEMD_PACKAGES
Richard Purdie (9):
nativesdk-*-provides-dummy: Fixes to allow correct operation with opkg
classes: Correctly markup regex strings
testimage: Remove duplicate dependencies
testimage: Simplify DEFAULT_TEST_SUITES logic
testimage: Further cleanup DEFAULT_TEST_SUITES
testimage: Enable autorunning of the package manager testsuites
oeqa/runtime/cases: Improve test dependency information
oeqa/runtime/cases: Improve dependencies of kernel/gcc/build tests
oeqa/utils/buildproject: Only clean files if we've done something
Robert Yang (7):
oeqa/utils/qemurunner: Print output when failed to login
oeqa/utils/qemurunner: set timeout to 60s for run_serial
oeqa: Fix for QEMU_USE_KVM
oeqa: make it work for multiple users
runqemu-gen-tapdevs: Allow run --help without sudo
oeqa/manual/bsp-qemu.json: Update for QEMU_USE_KVM
oeqa/selftest/runqemu: Enable kvm when QEMU_USE_KVM is set
Ross Burton (2):
toolchain-scripts: run post-relocate scripts for every environment
runqemu: clean up subprocess usage
Yeoh Ee Peng (3):
scripts/oe-git-archive: fix non-existent key referencing error
testimage: Add support for slirp
oeqa/qemu & runtime: qemu do not need ip input from external
OpenBMC compatibility updates:
meta-phosphor:
Brad Bishop (1):
phosphor: rebase i2c-tools patches
Change-Id: Idc626fc076580aeebde1420bcad01e069b559504
Signed-off-by: Brad Bishop <bradleyb@fuzziesquirrel.com>
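Several of the changes above are small Python hygiene fixes. For example, "package.bbclass: fix python unclosed file ResourceWarning" replaces a bare open().write() with a context manager, as the diff below shows. A minimal standalone illustration of that pattern (file names here are hypothetical, not from the patch):

```python
import os
import tempfile

def append_record(path, record):
    # A bare open(path, "a").write(record) relies on garbage collection to
    # close the file handle and triggers ResourceWarning under
    # "python -W error::ResourceWarning". A context manager closes the
    # file deterministically, even if write() raises.
    with open(path, "a") as f:
        f.write(record + "\0")

tmp = os.path.join(tempfile.mkdtemp(), "sources.list")
append_record(tmp, "src/a.c")
append_record(tmp, "src/b.c")
with open(tmp) as f:
    print(f.read().split("\0"))  # ['src/a.c', 'src/b.c', '']
```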
Diffstat (limited to 'poky/meta/classes')
-rw-r--r--  poky/meta/classes/clutter.bbclass             |   4
-rw-r--r--  poky/meta/classes/cmake.bbclass               |   2
-rw-r--r--  poky/meta/classes/icecc.bbclass               |  72
-rw-r--r--  poky/meta/classes/insane.bbclass              |  16
-rw-r--r--  poky/meta/classes/kernel.bbclass              |   9
-rw-r--r--  poky/meta/classes/license_image.bbclass       |  14
-rw-r--r--  poky/meta/classes/package.bbclass             |   3
-rw-r--r--  poky/meta/classes/patch.bbclass               |   1
-rw-r--r--  poky/meta/classes/perl-version.bbclass        |   2
-rw-r--r--  poky/meta/classes/sstate.bbclass              |   6
-rw-r--r--  poky/meta/classes/systemd.bbclass             |   2
-rw-r--r--  poky/meta/classes/testimage.bbclass           |  74
-rw-r--r--  poky/meta/classes/testsdk.bbclass             | 217
-rw-r--r--  poky/meta/classes/toolchain-scripts.bbclass   |  46
-rw-r--r--  poky/meta/classes/update-alternatives.bbclass |   2
-rw-r--r--  poky/meta/classes/useradd-staticids.bbclass   |   4
16 files changed, 134 insertions(+), 340 deletions(-)
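A large share of the hunks below do one mechanical thing: convert regular-expression string literals to raw strings (re.compile(r"...")). Sequences like "\s" are not valid Python string escapes, so the old spelling worked only because Python left the backslash in place, and newer Python versions warn about it. A short illustrative sketch of one such pattern (the phdrs sample data is invented, not from the patch):

```python
import re

# Raw-string form of the RPATH pattern used in insane.bbclass; it matches
# exactly what the non-raw form matched, but without relying on "\s" being
# passed through as an invalid (and now deprecation-warned) escape.
rpath_re = re.compile(r"\s+RPATH\s+(.*)")

phdrs = """Dynamic Section:
  RPATH  /usr/lib/../lib
"""
for line in phdrs.split("\n"):
    m = rpath_re.match(line)
    if m:
        print(m.group(1))  # /usr/lib/../lib
```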
diff --git a/poky/meta/classes/clutter.bbclass b/poky/meta/classes/clutter.bbclass
index 8550363bd..5edab0e55 100644
--- a/poky/meta/classes/clutter.bbclass
+++ b/poky/meta/classes/clutter.bbclass
@@ -1,11 +1,11 @@
 def get_minor_dir(v):
     import re
-    m = re.match("^([0-9]+)\.([0-9]+)", v)
+    m = re.match(r"^([0-9]+)\.([0-9]+)", v)
     return "%s.%s" % (m.group(1), m.group(2))
 
 def get_real_name(n):
     import re
-    m = re.match("^([a-z]+(-[a-z]+)?)(-[0-9]+\.[0-9]+)?", n)
+    m = re.match(r"^([a-z]+(-[a-z]+)?)(-[0-9]+\.[0-9]+)?", n)
     return "%s" % (m.group(1))
 
 VERMINOR = "${@get_minor_dir("${PV}")}"
diff --git a/poky/meta/classes/cmake.bbclass b/poky/meta/classes/cmake.bbclass
index fd40a9863..b364d2bc2 100644
--- a/poky/meta/classes/cmake.bbclass
+++ b/poky/meta/classes/cmake.bbclass
@@ -20,7 +20,7 @@ python() {
     elif generator == "Ninja":
         d.appendVar("DEPENDS", " ninja-native")
         d.setVar("OECMAKE_GENERATOR_ARGS", "-G Ninja -DCMAKE_MAKE_PROGRAM=ninja")
-        d.setVarFlag("do_compile", "progress", "outof:^\[(\d+)/(\d+)\]\s+")
+        d.setVarFlag("do_compile", "progress", r"outof:^\[(\d+)/(\d+)\]\s+")
     else:
         bb.fatal("Unknown CMake Generator %s" % generator)
 }
diff --git a/poky/meta/classes/icecc.bbclass b/poky/meta/classes/icecc.bbclass
index 2b189232c..7d94525d3 100644
--- a/poky/meta/classes/icecc.bbclass
+++ b/poky/meta/classes/icecc.bbclass
@@ -38,6 +38,8 @@ BB_HASHBASE_WHITELIST += "ICECC_PARALLEL_MAKE ICECC_DISABLED ICECC_USER_PACKAGE_
 
 ICECC_ENV_EXEC ?= "${STAGING_BINDIR_NATIVE}/icecc-create-env"
 
+HOSTTOOLS_NONFATAL += "icecc patchelf"
+
 # This version can be incremented when changes are made to the environment that
 # invalidate the version on the compile nodes. Changing it will cause a new
 # environment to be created.
@@ -98,9 +100,11 @@ DEPENDS_prepend += "${@icecc_dep_prepend(d)} "
 get_cross_kernel_cc[vardepsexclude] += "KERNEL_CC"
 def get_cross_kernel_cc(bb,d):
-    kernel_cc = d.getVar('KERNEL_CC')
+    if not icecc_is_kernel(bb, d):
+        return None
 
     # evaluate the expression by the shell if necessary
+    kernel_cc = d.getVar('KERNEL_CC')
     if '`' in kernel_cc or '$(' in kernel_cc:
         import subprocess
         kernel_cc = subprocess.check_output("echo %s" % kernel_cc, shell=True).decode("utf-8")[:-1]
@@ -113,38 +117,6 @@ def get_cross_kernel_cc(bb,d):
 def get_icecc(d):
     return d.getVar('ICECC_PATH') or bb.utils.which(os.getenv("PATH"), "icecc")
 
-def create_path(compilers, bb, d):
-    """
-    Create Symlinks for the icecc in the staging directory
-    """
-    staging = os.path.join(d.expand('${STAGING_BINDIR}'), "ice")
-    if icecc_is_kernel(bb, d):
-        staging += "-kernel"
-
-    #check if the icecc path is set by the user
-    icecc = get_icecc(d)
-
-    # Create the dir if necessary
-    try:
-        os.stat(staging)
-    except:
-        try:
-            os.makedirs(staging)
-        except:
-            pass
-
-    for compiler in compilers:
-        gcc_path = os.path.join(staging, compiler)
-        try:
-            os.stat(gcc_path)
-        except:
-            try:
-                os.symlink(icecc, gcc_path)
-            except:
-                pass
-
-    return staging
-
 def use_icecc(bb,d):
     if d.getVar('ICECC_DISABLED') == "1":
         # don't even try it, when explicitly disabled
@@ -248,12 +220,11 @@ def icecc_path(bb,d):
         # don't create unnecessary directories when icecc is disabled
         return
 
+    staging = os.path.join(d.expand('${STAGING_BINDIR}'), "ice")
     if icecc_is_kernel(bb, d):
-        return create_path( [get_cross_kernel_cc(bb,d), ], bb, d)
+        staging += "-kernel"
 
-    else:
-        prefix = d.expand('${HOST_PREFIX}')
-        return create_path( [prefix+"gcc", prefix+"g++"], bb, d)
+    return staging
 
 def icecc_get_external_tool(bb, d, tool):
     external_toolchain_bindir = d.expand('${EXTERNAL_TOOLCHAIN}${bindir_cross}')
@@ -303,9 +274,9 @@ def icecc_get_and_check_tool(bb, d, tool):
     # compiler environment package.
     t = icecc_get_tool(bb, d, tool)
     if t:
-        link_path = icecc_get_tool_link(tool, d)
+        link_path = icecc_get_tool_link(t, d)
         if link_path == get_icecc(d):
-            bb.error("%s is a symlink to %s in PATH and this prevents icecc from working" % (t, get_icecc(d)))
+            bb.error("%s is a symlink to %s in PATH and this prevents icecc from working" % (t, link_path))
             return ""
         else:
             return t
@@ -350,6 +321,27 @@ set_icecc_env() {
         return
     fi
 
+    ICECC_BIN="${@get_icecc(d)}"
+    if [ -z "${ICECC_BIN}" ]; then
+        bbwarn "Cannot use icecc: icecc binary not found"
+        return
+    fi
+    if [ -z "$(which patchelf patchelf-uninative)" ]; then
+        bbwarn "Cannot use icecc: patchelf not found"
+        return
+    fi
+
+    # Create symlinks to icecc in the recipe-sysroot directory
+    mkdir -p ${ICE_PATH}
+    if [ -n "${KERNEL_CC}" ]; then
+        compilers="${@get_cross_kernel_cc(bb,d)}"
+    else
+        compilers="${HOST_PREFIX}gcc ${HOST_PREFIX}g++"
+    fi
+    for compiler in $compilers; do
+        ln -sf ${ICECC_BIN} ${ICE_PATH}/$compiler
+    done
+
     ICECC_CC="${@icecc_get_and_check_tool(bb, d, "gcc")}"
     ICECC_CXX="${@icecc_get_and_check_tool(bb, d, "g++")}"
     # cannot use icecc_get_and_check_tool here because it assumes as without target_sys prefix
@@ -387,7 +379,7 @@ set_icecc_env() {
         ${ICECC_ENV_EXEC} ${ICECC_ENV_DEBUG} "${ICECC_CC}" "${ICECC_CXX}" "${ICECC_AS}" "${ICECC_VERSION}"
     then
         touch "${ICECC_VERSION}.done"
-    elif [ ! wait_for_file "${ICECC_VERSION}.done" 30 ]
+    elif ! wait_for_file "${ICECC_VERSION}.done" 30
     then
         # locking failed so wait for ${ICECC_VERSION}.done to appear
         bbwarn "Timeout waiting for ${ICECC_VERSION}.done"
diff --git a/poky/meta/classes/insane.bbclass b/poky/meta/classes/insane.bbclass
index 6718feb3a..295feb8a5 100644
--- a/poky/meta/classes/insane.bbclass
+++ b/poky/meta/classes/insane.bbclass
@@ -111,7 +111,7 @@ def package_qa_check_rpath(file,name, d, elf, messages):
     phdrs = elf.run_objdump("-p", d)
 
     import re
-    rpath_re = re.compile("\s+RPATH\s+(.*)")
+    rpath_re = re.compile(r"\s+RPATH\s+(.*)")
     for line in phdrs.split("\n"):
         m = rpath_re.match(line)
         if m:
@@ -140,7 +140,7 @@ def package_qa_check_useless_rpaths(file, name, d, elf, messages):
     phdrs = elf.run_objdump("-p", d)
 
     import re
-    rpath_re = re.compile("\s+RPATH\s+(.*)")
+    rpath_re = re.compile(r"\s+RPATH\s+(.*)")
     for line in phdrs.split("\n"):
         m = rpath_re.match(line)
         if m:
@@ -203,8 +203,8 @@ def package_qa_check_libdir(d):
     # The re's are purposely fuzzy, as some there are some .so.x.y.z files
    # that don't follow the standard naming convention. It checks later
    # that they are actual ELF files
-    lib_re = re.compile("^/lib.+\.so(\..+)?$")
-    exec_re = re.compile("^%s.*/lib.+\.so(\..+)?$" % exec_prefix)
+    lib_re = re.compile(r"^/lib.+\.so(\..+)?$")
+    exec_re = re.compile(r"^%s.*/lib.+\.so(\..+)?$" % exec_prefix)
 
     for root, dirs, files in os.walk(pkgdest):
         if root == pkgdest:
@@ -302,7 +302,7 @@ def package_qa_check_arch(path,name,d, elf, messages):
     # Check the architecture and endiannes of the binary
     is_32 = (("virtual/kernel" in provides) or bb.data.inherits_class("module", d)) and \
             (target_os == "linux-gnux32" or target_os == "linux-muslx32" or \
-            target_os == "linux-gnu_ilp32" or re.match('mips64.*32', d.getVar('DEFAULTTUNE')))
+            target_os == "linux-gnu_ilp32" or re.match(r'mips64.*32', d.getVar('DEFAULTTUNE')))
     is_bpf = (oe.qa.elf_machine_to_string(elf.machine()) == "BPF")
     if not ((machine == elf.machine()) or is_32 or is_bpf):
         package_qa_add_message(messages, "arch", "Architecture did not match (%s, expected %s) on %s" % \
@@ -342,7 +342,7 @@ def package_qa_textrel(path, name, d, elf, messages):
     sane = True
 
     import re
-    textrel_re = re.compile("\s+TEXTREL\s+")
+    textrel_re = re.compile(r"\s+TEXTREL\s+")
     for line in phdrs.split("\n"):
         if textrel_re.match(line):
             sane = False
@@ -952,7 +952,7 @@ python do_package_qa () {
     import re
     # The package name matches the [a-z0-9.+-]+ regular expression
-    pkgname_pattern = re.compile("^[a-z0-9.+-]+$")
+    pkgname_pattern = re.compile(r"^[a-z0-9.+-]+$")
 
     taskdepdata = d.getVar("BB_TASKDEPDATA", False)
     taskdeps = set()
@@ -1160,7 +1160,7 @@ python () {
     if pn in overrides:
         msg = 'Recipe %s has PN of "%s" which is in OVERRIDES, this can result in unexpected behaviour.' % (d.getVar("FILE"), pn)
         package_qa_handle_error("pn-overrides", msg, d)
-    prog = re.compile('[A-Z]')
+    prog = re.compile(r'[A-Z]')
     if prog.search(pn):
         package_qa_handle_error("uppercase-pn", 'PN: %s is upper case, this can result in unexpected behavior.' % pn, d)
diff --git a/poky/meta/classes/kernel.bbclass b/poky/meta/classes/kernel.bbclass
index e04d2fe00..45cb4fabc 100644
--- a/poky/meta/classes/kernel.bbclass
+++ b/poky/meta/classes/kernel.bbclass
@@ -157,8 +157,8 @@ PACKAGES_DYNAMIC += "^${KERNEL_PACKAGE_NAME}-firmware-.*"
 export OS = "${TARGET_OS}"
 export CROSS_COMPILE = "${TARGET_PREFIX}"
 export KBUILD_BUILD_VERSION = "1"
-export KBUILD_BUILD_USER = "oe-user"
-export KBUILD_BUILD_HOST = "oe-host"
+export KBUILD_BUILD_USER ?= "oe-user"
+export KBUILD_BUILD_HOST ?= "oe-host"
 
 KERNEL_RELEASE ?= "${KERNEL_VERSION}"
 
@@ -492,7 +492,7 @@ sysroot_stage_all () {
     :
 }
 
-KERNEL_CONFIG_COMMAND ?= "oe_runmake_call -C ${S} CC="${KERNEL_CC}" O=${B} oldnoconfig"
+KERNEL_CONFIG_COMMAND ?= "oe_runmake_call -C ${S} CC="${KERNEL_CC}" O=${B} olddefconfig || oe_runmake -C ${S} O=${B} CC="${KERNEL_CC}" oldnoconfig"
 
 python check_oldest_kernel() {
     oldest_kernel = d.getVar('OLDEST_KERNEL')
@@ -682,6 +682,9 @@ kernel_do_deploy() {
 
     if [ ! -z "${INITRAMFS_IMAGE}" -a x"${INITRAMFS_IMAGE_BUNDLE}" = x1 ]; then
         for imageType in ${KERNEL_IMAGETYPES} ; do
+            if [ "$imageType" = "fitImage" ] ; then
+                continue
+            fi
             initramfs_base_name=${imageType}-${INITRAMFS_NAME}
             initramfs_symlink_name=${imageType}-${INITRAMFS_LINK_NAME}
             install -m 0644 ${KERNEL_OUTPUT_DIR}/${imageType}.initramfs $deployDir/${initramfs_base_name}.bin
diff --git a/poky/meta/classes/license_image.bbclass b/poky/meta/classes/license_image.bbclass
index f0fbb763f..b65ff56f7 100644
--- a/poky/meta/classes/license_image.bbclass
+++ b/poky/meta/classes/license_image.bbclass
@@ -52,8 +52,8 @@ def write_license_files(d, license_manifest, pkg_dic):
             except oe.license.LicenseError as exc:
                 bb.fatal('%s: %s' % (d.getVar('P'), exc))
         else:
-            pkg_dic[pkg]["LICENSES"] = re.sub('[|&()*]', ' ', pkg_dic[pkg]["LICENSE"])
-            pkg_dic[pkg]["LICENSES"] = re.sub(' *', ' ', pkg_dic[pkg]["LICENSES"])
+            pkg_dic[pkg]["LICENSES"] = re.sub(r'[|&()*]', ' ', pkg_dic[pkg]["LICENSE"])
+            pkg_dic[pkg]["LICENSES"] = re.sub(r' *', ' ', pkg_dic[pkg]["LICENSES"])
             pkg_dic[pkg]["LICENSES"] = pkg_dic[pkg]["LICENSES"].split()
 
         if not "IMAGE_MANIFEST" in pkg_dic[pkg]:
@@ -78,7 +78,7 @@ def write_license_files(d, license_manifest, pkg_dic):
                 for lic in pkg_dic[pkg]["LICENSES"]:
                     lic_file = os.path.join(d.getVar('LICENSE_DIRECTORY'),
                                             pkg_dic[pkg]["PN"], "generic_%s" %
-                                            re.sub('\+', '', lic))
+                                            re.sub(r'\+', '', lic))
                     # add explicity avoid of CLOSED license because isn't generic
                     if lic == "CLOSED":
                         continue
@@ -119,14 +119,14 @@ def write_license_files(d, license_manifest, pkg_dic):
                     pkg_license = os.path.join(pkg_license_dir, lic)
                     pkg_rootfs_license = os.path.join(pkg_rootfs_license_dir, lic)
 
-                    if re.match("^generic_.*$", lic):
+                    if re.match(r"^generic_.*$", lic):
                         generic_lic = canonical_license(d,
-                                re.search("^generic_(.*)$", lic).group(1))
+                                re.search(r"^generic_(.*)$", lic).group(1))
 
                         # Do not copy generic license into package if isn't
                         # declared into LICENSES of the package.
-                        if not re.sub('\+$', '', generic_lic) in \
-                                [re.sub('\+', '', lic) for lic in \
+                        if not re.sub(r'\+$', '', generic_lic) in \
+                                [re.sub(r'\+', '', lic) for lic in \
                                 pkg_manifest_licenses]:
                             continue
diff --git a/poky/meta/classes/package.bbclass b/poky/meta/classes/package.bbclass
index d1e9138c6..0fe9576b4 100644
--- a/poky/meta/classes/package.bbclass
+++ b/poky/meta/classes/package.bbclass
@@ -368,7 +368,8 @@ def append_source_info(file, sourcefile, d, fatal=True):
     # is still assuming that.
     debuglistoutput = '\0'.join(debugsources) + '\0'
     lf = bb.utils.lockfile(sourcefile + ".lock")
-    open(sourcefile, 'a').write(debuglistoutput)
+    with open(sourcefile, 'a') as sf:
+        sf.write(debuglistoutput)
     bb.utils.unlockfile(lf)
diff --git a/poky/meta/classes/patch.bbclass b/poky/meta/classes/patch.bbclass
index 3e0a18182..cd241f1c8 100644
--- a/poky/meta/classes/patch.bbclass
+++ b/poky/meta/classes/patch.bbclass
@@ -153,6 +153,7 @@ python patch_do_patch() {
 patch_do_patch[vardepsexclude] = "PATCHRESOLVE"
 
 addtask patch after do_unpack
+do_patch[umask] = "022"
 do_patch[dirs] = "${WORKDIR}"
 do_patch[depends] = "${PATCHDEPENDENCY}"
diff --git a/poky/meta/classes/perl-version.bbclass b/poky/meta/classes/perl-version.bbclass
index fafe68a77..bafd96518 100644
--- a/poky/meta/classes/perl-version.bbclass
+++ b/poky/meta/classes/perl-version.bbclass
@@ -13,7 +13,7 @@ def get_perl_version(d):
         return None
     l = f.readlines();
     f.close();
-    r = re.compile("^version='(\d*\.\d*\.\d*)'")
+    r = re.compile(r"^version='(\d*\.\d*\.\d*)'")
     for s in l:
         m = r.match(s)
         if m:
diff --git a/poky/meta/classes/sstate.bbclass b/poky/meta/classes/sstate.bbclass
index 9f059a04a..edbfba5de 100644
--- a/poky/meta/classes/sstate.bbclass
+++ b/poky/meta/classes/sstate.bbclass
@@ -362,7 +362,10 @@ def sstate_installpkgdir(ss, d):
 
     for plain in ss['plaindirs']:
         workdir = d.getVar('WORKDIR')
+        sharedworkdir = os.path.join(d.getVar('TMPDIR'), "work-shared")
         src = sstateinst + "/" + plain.replace(workdir, '')
+        if sharedworkdir in plain:
+            src = sstateinst + "/" + plain.replace(sharedworkdir, '')
         dest = plain
         bb.utils.mkdirhier(src)
         prepdir(dest)
@@ -620,8 +623,11 @@ def sstate_package(ss, d):
             os.rename(state[1], sstatebuild + state[0])
 
     workdir = d.getVar('WORKDIR')
+    sharedworkdir = os.path.join(d.getVar('TMPDIR'), "work-shared")
     for plain in ss['plaindirs']:
         pdir = plain.replace(workdir, sstatebuild)
+        if sharedworkdir in plain:
+            pdir = plain.replace(sharedworkdir, sstatebuild)
         bb.utils.mkdirhier(plain)
         bb.utils.mkdirhier(pdir)
         os.rename(plain, pdir)
diff --git a/poky/meta/classes/systemd.bbclass b/poky/meta/classes/systemd.bbclass
index c7b784dea..c8f4fdec8 100644
--- a/poky/meta/classes/systemd.bbclass
+++ b/poky/meta/classes/systemd.bbclass
@@ -86,7 +86,7 @@ python systemd_populate_packages() {
     def systemd_generate_package_scripts(pkg):
         bb.debug(1, 'adding systemd calls to postinst/postrm for %s' % pkg)
 
-        paths_escaped = ' '.join(shlex.quote(s) for s in d.getVar('SYSTEMD_SERVICE_' + pkg, True).split())
+        paths_escaped = ' '.join(shlex.quote(s) for s in d.getVar('SYSTEMD_SERVICE_' + pkg).split())
         d.setVar('SYSTEMD_SERVICE_ESCAPED_' + pkg, paths_escaped)
 
         # Add pkg to the overrides so that it finds the SYSTEMD_SERVICE_pkg
diff --git a/poky/meta/classes/testimage.bbclass b/poky/meta/classes/testimage.bbclass
index f2ff91da9..cb8c12acc 100644
--- a/poky/meta/classes/testimage.bbclass
+++ b/poky/meta/classes/testimage.bbclass
@@ -31,6 +31,7 @@ TESTIMAGE_AUTO ??= "0"
 # TEST_LOG_DIR contains a command ssh log and may contain infromation about what command is running, output and return codes and for qemu a boot log till login.
 # Booting is handled by this class, and it's not a test in itself.
 # TEST_QEMUBOOT_TIMEOUT can be used to set the maximum time in seconds the launch code will wait for the login prompt.
+# TEST_QEMUPARAMS can be used to pass extra parameters to qemu, e.g. "-m 1024" for setting the amount of ram to 1 GB.
 
 TEST_LOG_DIR ?= "${WORKDIR}/testimage"
@@ -40,31 +41,13 @@ TEST_NEEDED_PACKAGES_DIR ?= "${WORKDIR}/testimage/packages"
 TEST_EXTRACTED_DIR ?= "${TEST_NEEDED_PACKAGES_DIR}/extracted"
 TEST_PACKAGED_DIR ?= "${TEST_NEEDED_PACKAGES_DIR}/packaged"
 
-PKGMANTESTSUITE = "\
-    ${@bb.utils.contains('IMAGE_PKGTYPE', 'rpm', 'dnf rpm', '', d)} \
-    ${@bb.utils.contains('IMAGE_PKGTYPE', 'ipk', 'opkg', '', d)} \
-    ${@bb.utils.contains('IMAGE_PKGTYPE', 'deb', 'apt', '', d)} \
-    "
-SYSTEMDSUITE = "${@bb.utils.filter('DISTRO_FEATURES', 'systemd', d)}"
-MINTESTSUITE = "ping"
-NETTESTSUITE = "${MINTESTSUITE} ssh df date scp oe_syslog ${SYSTEMDSUITE}"
-DEVTESTSUITE = "gcc kernelmodule ldd"
-PTESTTESTSUITE = "${MINTESTSUITE} ssh scp ptest"
-
-DEFAULT_TEST_SUITES = "${MINTESTSUITE} auto"
-DEFAULT_TEST_SUITES_pn-core-image-minimal = "${MINTESTSUITE}"
-DEFAULT_TEST_SUITES_pn-core-image-minimal-dev = "${MINTESTSUITE}"
-DEFAULT_TEST_SUITES_pn-core-image-full-cmdline = "${NETTESTSUITE} perl python logrotate ptest"
-DEFAULT_TEST_SUITES_pn-core-image-x11 = "${MINTESTSUITE}"
-DEFAULT_TEST_SUITES_pn-core-image-lsb = "${NETTESTSUITE} pam parselogs ${PKGMANTESTSUITE} ptest"
-DEFAULT_TEST_SUITES_pn-core-image-sato = "${NETTESTSUITE} connman xorg parselogs ${PKGMANTESTSUITE} \
-    ${@bb.utils.contains('IMAGE_PKGTYPE', 'rpm', 'python', '', d)} ptest gi"
-DEFAULT_TEST_SUITES_pn-core-image-sato-sdk = "${NETTESTSUITE} buildcpio buildlzip buildgalculator \
-    connman ${DEVTESTSUITE} logrotate perl parselogs python ${PKGMANTESTSUITE} xorg ptest gi stap"
-DEFAULT_TEST_SUITES_pn-core-image-lsb-dev = "${NETTESTSUITE} pam perl python parselogs ${PKGMANTESTSUITE} ptest gi"
-DEFAULT_TEST_SUITES_pn-core-image-lsb-sdk = "${NETTESTSUITE} buildcpio buildlzip buildgalculator \
-    connman ${DEVTESTSUITE} logrotate pam parselogs perl python ${PKGMANTESTSUITE} ptest gi stap"
-DEFAULT_TEST_SUITES_pn-meta-toolchain = "auto"
+BASICTESTSUITE = "\
+    ping date df ssh scp python perl gi ptest parselogs \
+    logrotate connman systemd oe_syslog pam stap ldd xorg \
+    kernelmodule gcc buildcpio buildlzip buildgalculator \
+    dnf rpm opkg apt"
+
+DEFAULT_TEST_SUITES = "${BASICTESTSUITE}"
 
 # aarch64 has no graphics
 DEFAULT_TEST_SUITES_remove_aarch64 = "xorg"
@@ -81,21 +64,20 @@ TEST_SUITES ?= "${DEFAULT_TEST_SUITES}"
 
 TEST_QEMUBOOT_TIMEOUT ?= "1000"
 TEST_TARGET ?= "qemu"
+TEST_QEMUPARAMS ?= ""
 
 TESTIMAGEDEPENDS = ""
 TESTIMAGEDEPENDS_append_qemuall = " qemu-native:do_populate_sysroot qemu-helper-native:do_populate_sysroot qemu-helper-native:do_addto_recipe_sysroot"
 TESTIMAGEDEPENDS += "${@bb.utils.contains('IMAGE_PKGTYPE', 'rpm', 'cpio-native:do_populate_sysroot', '', d)}"
-TESTIMAGEDEPENDS_append_qemuall = " ${@bb.utils.contains('IMAGE_PKGTYPE', 'rpm', 'cpio-native:do_populate_sysroot', '', d)}"
-TESTIMAGEDEPENDS_append_qemuall = " ${@bb.utils.contains('IMAGE_PKGTYPE', 'rpm', 'createrepo-c-native:do_populate_sysroot', '', d)}"
 TESTIMAGEDEPENDS += "${@bb.utils.contains('IMAGE_PKGTYPE', 'rpm', 'dnf-native:do_populate_sysroot', '', d)}"
+TESTIMAGEDEPENDS += "${@bb.utils.contains('IMAGE_PKGTYPE', 'rpm', 'createrepo-c-native:do_populate_sysroot', '', d)}"
 TESTIMAGEDEPENDS += "${@bb.utils.contains('IMAGE_PKGTYPE', 'ipk', 'opkg-utils-native:do_populate_sysroot package-index:do_package_index', '', d)}"
 TESTIMAGEDEPENDS += "${@bb.utils.contains('IMAGE_PKGTYPE', 'deb', 'apt-native:do_populate_sysroot package-index:do_package_index', '', d)}"
-TESTIMAGEDEPENDS += "${@bb.utils.contains('IMAGE_PKGTYPE', 'rpm', 'createrepo-c-native:do_populate_sysroot', '', d)}"
 
 TESTIMAGELOCK = "${TMPDIR}/testimage.lock"
 TESTIMAGELOCK_qemuall = ""
 
-TESTIMAGE_DUMP_DIR ?= "/tmp/oe-saved-tests/"
+TESTIMAGE_DUMP_DIR ?= "${LOG_DIR}/runtime-hostdump/"
 
 TESTIMAGE_UPDATE_VARS ?= "DL_DIR WORKDIR DEPLOY_DIR"
@@ -219,12 +201,13 @@ def testimage_main(d):
     machine = d.getVar("MACHINE")
 
     # Get rootfs
-    fstypes = [fs for fs in d.getVar('IMAGE_FSTYPES').split(' ')
-                  if fs in supported_fstypes]
-    if not fstypes:
-        bb.fatal('Unsupported image type built. Add a comptible image to '
-                 'IMAGE_FSTYPES. Supported types: %s' %
-                 ', '.join(supported_fstypes))
+    fstypes = d.getVar('IMAGE_FSTYPES').split()
+    if d.getVar("TEST_TARGET") == "qemu":
+        fstypes = [fs for fs in fstypes if fs in supported_fstypes]
+        if not fstypes:
+            bb.fatal('Unsupported image type built. Add a comptible image to '
+                     'IMAGE_FSTYPES. Supported types: %s' %
+                     ', '.join(supported_fstypes))
     rootfs = '%s.%s' % (image_name, fstypes[0])
 
     # Get tmpdir (not really used, just for compatibility)
@@ -248,13 +231,11 @@ def testimage_main(d):
     boottime = int(d.getVar("TEST_QEMUBOOT_TIMEOUT"))
 
     # Get use_kvm
-    qemu_use_kvm = d.getVar("QEMU_USE_KVM")
-    if qemu_use_kvm and \
-       (d.getVar('MACHINE') in qemu_use_kvm.split() or \
-        oe.types.boolean(qemu_use_kvm) and 'x86' in machine):
-        kvm = True
-    else:
-        kvm = False
+    kvm = oe.types.qemu_use_kvm(d.getVar('QEMU_USE_KVM'), d.getVar('TARGET_ARCH'))
+
+    slirp = False
+    if d.getVar("QEMU_USE_SLIRP"):
+        slirp = True
 
     # TODO: We use the current implementatin of qemu runner because of
     # time constrains, qemu runner really needs a refactor too.
@@ -267,6 +248,8 @@ def testimage_main(d):
         'boottime' : boottime,
         'bootlog'  : bootlog,
         'kvm'      : kvm,
+        'slirp'    : slirp,
+        'dump_dir' : d.getVar("TESTIMAGE_DUMP_DIR"),
     }
 
     # TODO: Currently BBPATH is needed for custom loading of targets.
@@ -306,17 +289,12 @@ def testimage_main(d): package_extraction(d, tc.suites) - bootparams = None - if d.getVar('VIRTUAL-RUNTIME_init_manager', '') == 'systemd': - # Add systemd.log_level=debug to enable systemd debug logging - bootparams = 'systemd.log_target=console' - results = None orig_sigterm_handler = signal.signal(signal.SIGTERM, sigterm_exception) try: # We need to check if runqemu ends unexpectedly # or if the worker send us a SIGTERM - tc.target.start(extra_bootparams=bootparams) + tc.target.start(params=d.getVar("TEST_QEMUPARAMS")) results = tc.runTests() except (RuntimeError, BlockingIOError) as err: if isinstance(err, RuntimeError): diff --git a/poky/meta/classes/testsdk.bbclass b/poky/meta/classes/testsdk.bbclass index 458c3f40b..758a23ac5 100644 --- a/poky/meta/classes/testsdk.bbclass +++ b/poky/meta/classes/testsdk.bbclass @@ -14,218 +14,31 @@ # # where "<image-name>" is an image like core-image-sato. -def get_sdk_configuration(d, test_type): - import platform - from oeqa.utils.metadata import get_layers - configuration = {'TEST_TYPE': test_type, - 'MACHINE': d.getVar("MACHINE"), - 'SDKMACHINE': d.getVar("SDKMACHINE"), - 'IMAGE_BASENAME': d.getVar("IMAGE_BASENAME"), - 'IMAGE_PKGTYPE': d.getVar("IMAGE_PKGTYPE"), - 'STARTTIME': d.getVar("DATETIME"), - 'HOST_DISTRO': oe.lsb.distro_identifier().replace(' ', '-'), - 'LAYERS': get_layers(d.getVar("BBLAYERS"))} - return configuration -get_sdk_configuration[vardepsexclude] = "DATETIME" +TESTSDK_CLASS_NAME ?= "oeqa.sdk.testsdk.TestSDK" +TESTSDKEXT_CLASS_NAME ?= "oeqa.sdkext.testsdk.TestSDKExt" -def get_sdk_json_result_dir(d): - json_result_dir = os.path.join(d.getVar("LOG_DIR"), 'oeqa') - custom_json_result_dir = d.getVar("OEQA_JSON_RESULT_DIR") - if custom_json_result_dir: - json_result_dir = custom_json_result_dir - return json_result_dir +def import_and_run(name, d): + import importlib -def get_sdk_result_id(configuration): - return '%s_%s_%s_%s_%s' % (configuration['TEST_TYPE'], 
configuration['IMAGE_BASENAME'], configuration['SDKMACHINE'], configuration['MACHINE'], configuration['STARTTIME']) + class_name = d.getVar(name) + if class_name: + module, cls = class_name.rsplit('.', 1) + m = importlib.import_module(module) + c = getattr(m, cls)() + c.run(d) + else: + bb.warn('No tests were run because %s did not define a class' % name) -def testsdk_main(d): - import os - import subprocess - import json - import logging - - from bb.utils import export_proxies - from oeqa.sdk.context import OESDKTestContext, OESDKTestContextExecutor - from oeqa.utils import make_logger_bitbake_compatible - - pn = d.getVar("PN") - logger = make_logger_bitbake_compatible(logging.getLogger("BitBake")) - - # sdk use network for download projects for build - export_proxies(d) - - tcname = d.expand("${SDK_DEPLOY}/${TOOLCHAIN_OUTPUTNAME}.sh") - if not os.path.exists(tcname): - bb.fatal("The toolchain %s is not built. Build it before running the tests: 'bitbake <image> -c populate_sdk' ." % tcname) - - tdname = d.expand("${SDK_DEPLOY}/${TOOLCHAIN_OUTPUTNAME}.testdata.json") - test_data = json.load(open(tdname, "r")) - - target_pkg_manifest = OESDKTestContextExecutor._load_manifest( - d.expand("${SDK_DEPLOY}/${TOOLCHAIN_OUTPUTNAME}.target.manifest")) - host_pkg_manifest = OESDKTestContextExecutor._load_manifest( - d.expand("${SDK_DEPLOY}/${TOOLCHAIN_OUTPUTNAME}.host.manifest")) - - processes = d.getVar("TESTIMAGE_NUMBER_THREADS") or d.getVar("BB_NUMBER_THREADS") - if processes: - try: - import testtools, subunit - except ImportError: - bb.warn("Failed to import testtools or subunit, the testcases will run serially") - processes = None - - sdk_dir = d.expand("${WORKDIR}/testimage-sdk/") - bb.utils.remove(sdk_dir, True) - bb.utils.mkdirhier(sdk_dir) - try: - subprocess.check_output("cd %s; %s <<EOF\n./\nY\nEOF" % (sdk_dir, tcname), shell=True) - except subprocess.CalledProcessError as e: - bb.fatal("Couldn't install the SDK:\n%s" % e.output.decode("utf-8")) - - fail = False 
- sdk_envs = OESDKTestContextExecutor._get_sdk_environs(sdk_dir) - for s in sdk_envs: - sdk_env = sdk_envs[s] - bb.plain("SDK testing environment: %s" % s) - tc = OESDKTestContext(td=test_data, logger=logger, sdk_dir=sdk_dir, - sdk_env=sdk_env, target_pkg_manifest=target_pkg_manifest, - host_pkg_manifest=host_pkg_manifest) - - try: - tc.loadTests(OESDKTestContextExecutor.default_cases) - except Exception as e: - import traceback - bb.fatal("Loading tests failed:\n%s" % traceback.format_exc()) - - if processes: - result = tc.runTests(processes=int(processes)) - else: - result = tc.runTests() - - component = "%s %s" % (pn, OESDKTestContextExecutor.name) - context_msg = "%s:%s" % (os.path.basename(tcname), os.path.basename(sdk_env)) - configuration = get_sdk_configuration(d, 'sdk') - result.logDetails(get_sdk_json_result_dir(d), - configuration, - get_sdk_result_id(configuration)) - result.logSummary(component, context_msg) - - if not result.wasSuccessful(): - fail = True - - if fail: - bb.fatal("%s - FAILED - check the task log and the commands log" % pn) - -testsdk_main[vardepsexclude] =+ "BB_ORIGENV" +import_and_run[vardepsexclude] = "DATETIME BB_ORIGENV" python do_testsdk() { - testsdk_main(d) + import_and_run('TESTSDK_CLASS_NAME', d) } addtask testsdk do_testsdk[nostamp] = "1" -def testsdkext_main(d): - import os - import json - import subprocess - import logging - - from bb.utils import export_proxies - from oeqa.utils import avoid_paths_in_environ, make_logger_bitbake_compatible, subprocesstweak - from oeqa.sdkext.context import OESDKExtTestContext, OESDKExtTestContextExecutor - - pn = d.getVar("PN") - logger = make_logger_bitbake_compatible(logging.getLogger("BitBake")) - - # extensible sdk use network - export_proxies(d) - - subprocesstweak.errors_have_output() - - # extensible sdk can be contaminated if native programs are - # in PATH, i.e. use perl-native instead of eSDK one. 
-    paths_to_avoid = [d.getVar('STAGING_DIR'),
-                      d.getVar('BASE_WORKDIR')]
-    os.environ['PATH'] = avoid_paths_in_environ(paths_to_avoid)
-
-    tcname = d.expand("${SDK_DEPLOY}/${TOOLCHAINEXT_OUTPUTNAME}.sh")
-    if not os.path.exists(tcname):
-        bb.fatal("The toolchain ext %s is not built. Build it before running the" \
-                 " tests: 'bitbake <image> -c populate_sdk_ext' ." % tcname)
-
-    tdname = d.expand("${SDK_DEPLOY}/${TOOLCHAINEXT_OUTPUTNAME}.testdata.json")
-    test_data = json.load(open(tdname, "r"))
-
-    target_pkg_manifest = OESDKExtTestContextExecutor._load_manifest(
-        d.expand("${SDK_DEPLOY}/${TOOLCHAINEXT_OUTPUTNAME}.target.manifest"))
-    host_pkg_manifest = OESDKExtTestContextExecutor._load_manifest(
-        d.expand("${SDK_DEPLOY}/${TOOLCHAINEXT_OUTPUTNAME}.host.manifest"))
-
-    sdk_dir = d.expand("${WORKDIR}/testsdkext/")
-    bb.utils.remove(sdk_dir, True)
-    bb.utils.mkdirhier(sdk_dir)
-    try:
-        subprocess.check_output("%s -y -d %s" % (tcname, sdk_dir), shell=True)
-    except subprocess.CalledProcessError as e:
-        msg = "Couldn't install the extensible SDK:\n%s" % e.output.decode("utf-8")
-        logfn = os.path.join(sdk_dir, 'preparing_build_system.log')
-        if os.path.exists(logfn):
-            msg += '\n\nContents of preparing_build_system.log:\n'
-            with open(logfn, 'r') as f:
-                for line in f:
-                    msg += line
-        bb.fatal(msg)
-
-    fail = False
-    sdk_envs = OESDKExtTestContextExecutor._get_sdk_environs(sdk_dir)
-    for s in sdk_envs:
-        bb.plain("Extensible SDK testing environment: %s" % s)
-
-        sdk_env = sdk_envs[s]
-
-        # Use our own SSTATE_DIR and DL_DIR so that updates to the eSDK come from our sstate cache
-        # and we don't spend hours downloading kernels for the kernel module test
-        # Abuse auto.conf since local.conf would be overwritten by the SDK
-        with open(os.path.join(sdk_dir, 'conf', 'auto.conf'), 'a+') as f:
-            f.write('SSTATE_MIRRORS += " \\n file://.* file://%s/PATH"\n' % test_data.get('SSTATE_DIR'))
-            f.write('SOURCE_MIRROR_URL = "file://%s"\n' % test_data.get('DL_DIR'))
-            f.write('INHERIT += "own-mirrors"\n')
-            f.write('PREMIRRORS_prepend = " git://git.yoctoproject.org/.* git://%s/git2/git.yoctoproject.org.BASENAME \\n "\n' % test_data.get('DL_DIR'))
-
-        # We need to do this in case we have a minimal SDK
-        subprocess.check_output(". %s > /dev/null; devtool sdk-install meta-extsdk-toolchain" % \
-                sdk_env, cwd=sdk_dir, shell=True, stderr=subprocess.STDOUT)
-
-        tc = OESDKExtTestContext(td=test_data, logger=logger, sdk_dir=sdk_dir,
-            sdk_env=sdk_env, target_pkg_manifest=target_pkg_manifest,
-            host_pkg_manifest=host_pkg_manifest)
-
-        try:
-            tc.loadTests(OESDKExtTestContextExecutor.default_cases)
-        except Exception as e:
-            import traceback
-            bb.fatal("Loading tests failed:\n%s" % traceback.format_exc())
-
-        result = tc.runTests()
-
-        component = "%s %s" % (pn, OESDKExtTestContextExecutor.name)
-        context_msg = "%s:%s" % (os.path.basename(tcname), os.path.basename(sdk_env))
-        configuration = get_sdk_configuration(d, 'sdkext')
-        result.logDetails(get_sdk_json_result_dir(d),
-                          configuration,
-                          get_sdk_result_id(configuration))
-        result.logSummary(component, context_msg)
-
-        if not result.wasSuccessful():
-            fail = True
-
-    if fail:
-        bb.fatal("%s - FAILED - check the task log and the commands log" % pn)
-
-testsdkext_main[vardepsexclude] =+ "BB_ORIGENV"
-
 python do_testsdkext() {
-    testsdkext_main(d)
+    import_and_run('TESTSDKEXT_CLASS_NAME', d)
 }
 addtask testsdkext
 do_testsdkext[nostamp] = "1"
diff --git a/poky/meta/classes/toolchain-scripts.bbclass b/poky/meta/classes/toolchain-scripts.bbclass
index 6d1ba6947..1a2ec4f3b 100644
--- a/poky/meta/classes/toolchain-scripts.bbclass
+++ b/poky/meta/classes/toolchain-scripts.bbclass
@@ -128,30 +128,30 @@ toolchain_create_post_relocate_script() {
 	touch $relocate_script
 	cat >> $relocate_script <<EOF
-# Source top-level SDK env scripts in case they are needed for the relocate
-# scripts.
-for env_setup_script in ${env_dir}/environment-setup-*; do
-    . \$env_setup_script
-    status=\$?
-    if [ \$status != 0 ]; then
-        echo "\$0: Failed to source \$env_setup_script with status \$status"
-        exit \$status
-    fi
-done
-
 if [ -d "${SDKPATHNATIVE}/post-relocate-setup.d/" ]; then
-    for s in ${SDKPATHNATIVE}/post-relocate-setup.d/*; do
-        if [ ! -x \$s ]; then
-            continue
-        fi
-        \$s "\$1"
-        status=\$?
-        if [ \$status != 0 ]; then
-            echo "post-relocate command \"\$s \$1\" failed with status \$status" >&2
-            exit \$status
-        fi
-    done
-    rm -rf "${SDKPATHNATIVE}/post-relocate-setup.d"
+    # Source top-level SDK env scripts in case they are needed for the relocate
+    # scripts.
+    for env_setup_script in ${env_dir}/environment-setup-*; do
+        . \$env_setup_script
+        status=\$?
+        if [ \$status != 0 ]; then
+            echo "\$0: Failed to source \$env_setup_script with status \$status"
+            exit \$status
+        fi
+
+        for s in ${SDKPATHNATIVE}/post-relocate-setup.d/*; do
+            if [ ! -x \$s ]; then
+                continue
+            fi
+            \$s "\$1"
+            status=\$?
+            if [ \$status != 0 ]; then
+                echo "post-relocate command \"\$s \$1\" failed with status \$status" >&2
+                exit \$status
+            fi
+        done
+    done
+    rm -rf "${SDKPATHNATIVE}/post-relocate-setup.d"
 fi
EOF
}
diff --git a/poky/meta/classes/update-alternatives.bbclass b/poky/meta/classes/update-alternatives.bbclass
index aa01058cf..a7f1a6fda 100644
--- a/poky/meta/classes/update-alternatives.bbclass
+++ b/poky/meta/classes/update-alternatives.bbclass
@@ -143,7 +143,7 @@ python perform_packagecopy_append () {
         if not alt_link:
             alt_link = "%s/%s" % (d.getVar('bindir'), alt_name)
             d.setVarFlag('ALTERNATIVE_LINK_NAME', alt_name, alt_link)
-        if alt_link.startswith(os.path.join(d.getVar('sysconfdir', True), 'init.d')):
+        if alt_link.startswith(os.path.join(d.getVar('sysconfdir'), 'init.d')):
             # Managing init scripts does not work (bug #10433), foremost
             # because of a race with update-rc.d
             bb.fatal("Using update-alternatives for managing SysV init scripts is not supported")
diff --git a/poky/meta/classes/useradd-staticids.bbclass b/poky/meta/classes/useradd-staticids.bbclass
index 64bf6dc82..70d59e557 100644
--- a/poky/meta/classes/useradd-staticids.bbclass
+++ b/poky/meta/classes/useradd-staticids.bbclass
@@ -59,8 +59,8 @@ def update_useradd_static_config(d):
     # Paths are resolved via BBPATH.
     def get_table_list(d, var, default):
         files = []
-        bbpath = d.getVar('BBPATH', True)
-        tables = d.getVar(var, True)
+        bbpath = d.getVar('BBPATH')
+        tables = d.getVar(var)
         if not tables:
             tables = default
         for conf_file in tables.split():
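
The testsdk refactor above collapses `testsdk_main` and `testsdkext_main` into a single `import_and_run` helper that looks up a dotted `module.Class` name from a datastore variable, imports the module, instantiates the class, and calls its `run()` method. Stripped of the BitBake datastore, the same dynamic-dispatch pattern can be sketched as follows (the `demo_testsdk` module and `DemoRunner` class are illustrative stand-ins, not real oeqa names):

```python
import importlib
import sys
import types

def import_and_run(class_name):
    """Resolve a dotted 'module.Class' name, instantiate it, and call run().

    Mirrors the import_and_run() helper in the diff above, except the name
    is passed in directly instead of read via d.getVar().
    """
    if not class_name:
        return None  # the bbclass warns via bb.warn() in this case
    module, cls = class_name.rsplit('.', 1)
    m = importlib.import_module(module)
    return getattr(m, cls)().run()

# Demo setup: fabricate an importable module holding a runner class,
# standing in for something like oeqa's SDK test-runner classes.
class DemoRunner:
    def run(self):
        return "ran"

demo = types.ModuleType('demo_testsdk')
demo.DemoRunner = DemoRunner
sys.modules['demo_testsdk'] = demo  # make it importable by dotted name

print(import_and_run('demo_testsdk.DemoRunner'))  # prints "ran"
```

In the bbclass the dotted name comes from `TESTSDK_CLASS_NAME` or `TESTSDKEXT_CLASS_NAME`, so a layer can point the `do_testsdk`/`do_testsdkext` tasks at its own runner class without redefining the tasks.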