Age | Commit message | Author | Files | Lines
2021-11-17media: vidtv: move kfree(dvb) to vidtv_bridge_dev_release()Hans Verkuil1-1/+4
commit 112024a3b6dcfc62ec36ea0cf58b897f2ce54c59 upstream. Adding kfree(dvb) to vidtv_bridge_remove() will remove the memory too soon: if an application still has an open filehandle to the device when the driver is unloaded, then when that filehandle is closed, a use-after-free access takes place to the freed memory. Move the kfree(dvb) to vidtv_bridge_dev_release() instead. Signed-off-by: Hans Verkuil <hverkuil-cisco@xs4all.nl> Fixes: 76e21bb8be4f ("media: vidtv: Fix memory leak in remove") Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
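A minimal sketch of the driver-model pattern this fix applies, with the data lookup abridged and names kept only loosely after the vidtv code: memory tied to a device must be freed from its release callback, which runs once the last reference is dropped, not from remove(), which runs at unbind time while file handles may still be open.

static void vidtv_bridge_dev_release(struct device *dev)
{
        /* runs only after the last reference to the device is gone */
        struct vidtv_dvb *dvb = dev_get_drvdata(dev);   /* lookup is illustrative */

        kfree(dvb);
}

static int vidtv_bridge_remove(struct platform_device *pdev)
{
        /* unregister frontends etc. here, but do NOT kfree(dvb):
         * an open file handle may still reach it until release time */
        return 0;
}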
2021-11-17drm/amd/display: Look at firmware version to determine using dmub on dcn21Mario Limonciello1-1/+8
commit 91adec9e07097e538691daed5d934e7886dd1dc3 upstream. commit 652de07addd2 ("drm/amd/display: Fully switch to dmub for all dcn21 asics") switched over to using dmub on Renoir to fix Gitlab 1735, but this implied a new dependency on newer firmware which might not be met on older kernel versions. Since sw_init runs before hw_init, there is an opportunity to determine whether or not the firmware version is new to adjust the behavior. Cc: Roman.Li@amd.com BugLink: https://gitlab.freedesktop.org/drm/amd/-/issues/1772 BugLink: https://gitlab.freedesktop.org/drm/amd/-/issues/1735 Fixes: 652de07addd2 ("drm/amd/display: Fully switch to dmub for all dcn21 asics") Signed-off-by: Mario Limonciello <mario.limonciello@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Roman Li <Roman.Li@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-17pinctrl: amd: Handle wake-up interruptBasavaraj Natikar1-0/+10
commit acd47b9f28e55b505aedb842131b40904e151d7c upstream. Enable/disable power management wakeup mode, which is disabled by default. enable_irq_wake() allows the interrupt to wake the system from sleep. Hence add enable/disable irq_wake handling for the wake-up interrupt. Signed-off-by: Basavaraj Natikar <Basavaraj.Natikar@amd.com> Tested-by: Mario Limonciello <mario.limonciello@amd.com> Acked-by: Shyam Sundar S K <Shyam-sundar.S-k@amd.com> Link: https://lore.kernel.org/r/20210831120613.1514899-3-Basavaraj.Natikar@amd.com Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
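A hedged sketch of the suspend/resume pairing described here; the struct layout and callback names are illustrative, only enable_irq_wake()/disable_irq_wake() are the real kernel API.

static int amd_gpio_suspend_sketch(struct device *dev)
{
        struct amd_gpio *gpio_dev = dev_get_drvdata(dev);

        enable_irq_wake(gpio_dev->irq);    /* let this IRQ wake the system */
        return 0;
}

static int amd_gpio_resume_sketch(struct device *dev)
{
        struct amd_gpio *gpio_dev = dev_get_drvdata(dev);

        disable_irq_wake(gpio_dev->irq);   /* back to a normal, non-wake IRQ */
        return 0;
}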
2021-11-17pinctrl: amd: Add irq field dataBasavaraj Natikar2-5/+5
commit 7e6f8d6f4a42ef9b693ff1b49267c546931d4619 upstream. pinctrl_amd uses gpiochip_get_data() to get its local state container back from the gpiochip passed as amd_gpio chip data. Hence add an irq field to that data so the interrupt can be retrieved directly from the amd_gpio chip data. Signed-off-by: Basavaraj Natikar <Basavaraj.Natikar@amd.com> Tested-by: Mario Limonciello <mario.limonciello@amd.com> Acked-by: Shyam Sundar S K <Shyam-sundar.S-k@amd.com> Link: https://lore.kernel.org/r/20210831120613.1514899-2-Basavaraj.Natikar@amd.com Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-17SUNRPC: Partial revert of commit 6f9f17287e78Trond Myklebust1-13/+15
commit ea7a1019d8baf8503ecd6e3ec8436dec283569e6 upstream. The premise of commit 6f9f17287e78 ("SUNRPC: Mitigate cond_resched() in xprt_transmit()") was that cond_resched() is expensive and unnecessary when there has been just a single send. The point of cond_resched() is to ensure that tasks that should pre-empt this one get a chance to do so when it is safe to do so. The code prior to commit 6f9f17287e78 failed to take into account that it was keeping a rpc_task pinned for longer than it needed to, and so rather than doing a full revert, let's just move the cond_resched. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-17PCI: aardvark: Fix PCIe Max Payload Size settingPali Rohár1-1/+2
commit a4e17d65dafdd3513042d8f00404c9b6068a825c upstream. Change PCIe Max Payload Size setting in PCIe Device Control register to 512 bytes to align with PCIe Link Initialization sequence as defined in Marvell Armada 3700 Functional Specification. According to the specification, maximal Max Payload Size supported by this device is 512 bytes. Without this kernel prints suspicious line: pci 0000:01:00.0: Upstream bridge's Max Payload Size set to 256 (was 16384, max 512) With this change it changes to: pci 0000:01:00.0: Upstream bridge's Max Payload Size set to 256 (was 512, max 512) Link: https://lore.kernel.org/r/20211005180952.6812-3-kabel@kernel.org Fixes: 8c39d710363c ("PCI: aardvark: Add Aardvark PCI host controller driver") Signed-off-by: Pali Rohár <pali@kernel.org> Signed-off-by: Marek Behún <kabel@kernel.org> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Reviewed-by: Marek Behún <kabel@kernel.org> Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
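A generic, hedged illustration of the register change (using the portable pcie_capability helper rather than aardvark's own register accessors): select a 512-byte Max Payload Size by read-modify-writing the Device Control register.

/* dev is a struct pci_dev *; the PAYLOAD macros come from the next entry */
pcie_capability_clear_and_set_word(dev, PCI_EXP_DEVCTL,
                                   PCI_EXP_DEVCTL_PAYLOAD,        /* clear MPS field */
                                   PCI_EXP_DEVCTL_PAYLOAD_512B);  /* set 512 bytes   */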
2021-11-17PCI: Add PCI_EXP_DEVCTL_PAYLOAD_* macrosPali Rohár1-0/+6
commit 460275f124fb072dca218a6b43b6370eebbab20d upstream. Define a macro PCI_EXP_DEVCTL_PAYLOAD_* for every possible Max Payload Size in linux/pci_regs.h, in the same style as PCI_EXP_DEVCTL_READRQ_*. Link: https://lore.kernel.org/r/20211005180952.6812-2-kabel@kernel.org Signed-off-by: Pali Rohár <pali@kernel.org> Signed-off-by: Marek Behún <kabel@kernel.org> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Reviewed-by: Marek Behún <kabel@kernel.org> Reviewed-by: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
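The definitions look roughly like the following (values follow the PCIe spec encoding of the 3-bit Max Payload Size field, bits 7:5 of Device Control; double-check against the upstream header before relying on them):

#define  PCI_EXP_DEVCTL_PAYLOAD_128B  0x0000  /* 128 Bytes  */
#define  PCI_EXP_DEVCTL_PAYLOAD_256B  0x0020  /* 256 Bytes  */
#define  PCI_EXP_DEVCTL_PAYLOAD_512B  0x0040  /* 512 Bytes  */
#define  PCI_EXP_DEVCTL_PAYLOAD_1024B 0x0060  /* 1024 Bytes */
#define  PCI_EXP_DEVCTL_PAYLOAD_2048B 0x0080  /* 2048 Bytes */
#define  PCI_EXP_DEVCTL_PAYLOAD_4096B 0x00a0  /* 4096 Bytes */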
2021-11-17drm/sun4i: Fix macros in sun8i_csc.hJernej Skrabec1-2/+2
commit c302c98da646409d657a473da202f10f417f3ff1 upstream. Macros SUN8I_CSC_CTRL() and SUN8I_CSC_COEFF() don't follow usual recommendation of having arguments enclosed in parenthesis. While that didn't change anything for quite sometime, it actually become important after CSC code rework with commit ea067aee45a8 ("drm/sun4i: de2/de3: Remove redundant CSC matrices"). Without this fix, colours are completely off for supported YVU formats on SoCs with DE2 (A64, H3, R40, etc.). Fix the issue by enclosing macro arguments in parenthesis. Cc: stable@vger.kernel.org # 5.12+ Fixes: 883029390550 ("drm/sun4i: Add DE2 CSC library") Reported-by: Roman Stratiienko <r.stratiienko@gmail.com> Signed-off-by: Jernej Skrabec <jernej.skrabec@gmail.com> Reviewed-by: Chen-Yu Tsai <wens@csie.org> Signed-off-by: Maxime Ripard <maxime@cerno.tech> Link: https://patchwork.freedesktop.org/patch/msgid/20210831184819.93670-1-jernej.skrabec@gmail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
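The bug class in isolation, with made-up offsets rather than the real CSC register layout: without parentheses around the arguments, passing an expression breaks the intended arithmetic through operator precedence.

#define CSC_COEFF_BAD(base, i)   base + 0x10 + 0x4 * i        /* broken */
#define CSC_COEFF_GOOD(base, i)  ((base) + 0x10 + 0x4 * (i))  /* fixed  */

/* CSC_COEFF_BAD(reg + 4, n + 1) expands to  reg + 4 + 0x10 + 0x4 * n + 1,
 * not the intended  (reg + 4) + 0x10 + 0x4 * (n + 1). */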
2021-11-17powerpc/85xx: fix timebase sync issue when CONFIG_HOTPLUG_CPU=nXiaoming Ni3-7/+13
commit c45361abb9185b1e172bd75eff51ad5f601ccae4 upstream. When CONFIG_SMP=y, timebase synchronization is required when the second kernel is started. arch/powerpc/kernel/smp.c: int __cpu_up(unsigned int cpu, struct task_struct *tidle) { ... if (smp_ops->give_timebase) smp_ops->give_timebase(); ... } void start_secondary(void *unused) { ... if (smp_ops->take_timebase) smp_ops->take_timebase(); ... } When CONFIG_HOTPLUG_CPU=n and CONFIG_KEXEC_CORE=n, smp_85xx_ops.give_timebase is NULL, smp_85xx_ops.take_timebase is NULL, As a result, the timebase is not synchronized. Timebase synchronization does not depend on CONFIG_HOTPLUG_CPU. Fixes: 56f1ba280719 ("powerpc/mpc85xx: refactor the PM operations") Cc: stable@vger.kernel.org # v4.6+ Signed-off-by: Xiaoming Ni <nixiaoming@huawei.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210929033646.39630-3-nixiaoming@huawei.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-17powerpc/pseries/mobility: ignore ibm,platform-facilities updatesNathan Lynch1-0/+34
commit 319fa1a52e438a6e028329187783a25ad498c4e6 upstream. On VMs with NX encryption, compression, and/or RNG offload, these capabilities are described by nodes in the ibm,platform-facilities device tree hierarchy: $ tree -d /sys/firmware/devicetree/base/ibm,platform-facilities/ /sys/firmware/devicetree/base/ibm,platform-facilities/ ├── ibm,compression-v1 ├── ibm,random-v1 └── ibm,sym-encryption-v1 3 directories The acceleration functions that these nodes describe are not disrupted by live migration, not even temporarily. But the post-migration ibm,update-nodes sequence firmware always sends "delete" messages for this hierarchy, followed by an "add" directive to reconstruct it via ibm,configure-connector (log with debugging statements enabled in mobility.c): mobility: removing node /ibm,platform-facilities/ibm,random-v1:4294967285 mobility: removing node /ibm,platform-facilities/ibm,compression-v1:4294967284 mobility: removing node /ibm,platform-facilities/ibm,sym-encryption-v1:4294967283 mobility: removing node /ibm,platform-facilities:4294967286 ... mobility: added node /ibm,platform-facilities:4294967286 Note we receive a single "add" message for the entire hierarchy, and what we receive from the ibm,configure-connector sequence is the top-level platform-facilities node along with its three children. The debug message simply reports the parent node and not the whole subtree. Also, significantly, the nodes added are almost completely equivalent to the ones removed; even phandles are unchanged. ibm,shared-interrupt-pool in the leaf nodes is the only property I've observed to differ, and Linux does not use that. So in practice, the sum of update messages Linux receives for this hierarchy is equivalent to minor property updates. We succeed in removing the original hierarchy from the device tree. But the vio bus code is ignorant of this, and does not unbind or relinquish its references. The leaf nodes, still reachable through sysfs, of course still refer to the now-freed ibm,platform-facilities parent node, which makes use-after-free possible: refcount_t: addition on 0; use-after-free. WARNING: CPU: 3 PID: 1706 at lib/refcount.c:25 refcount_warn_saturate+0x164/0x1f0 refcount_warn_saturate+0x160/0x1f0 (unreliable) kobject_get+0xf0/0x100 of_node_get+0x30/0x50 of_get_parent+0x50/0xb0 of_fwnode_get_parent+0x54/0x90 fwnode_count_parents+0x50/0x150 fwnode_full_name_string+0x30/0x110 device_node_string+0x49c/0x790 vsnprintf+0x1c0/0x4c0 sprintf+0x44/0x60 devspec_show+0x34/0x50 dev_attr_show+0x40/0xa0 sysfs_kf_seq_show+0xbc/0x200 kernfs_seq_show+0x44/0x60 seq_read_iter+0x2a4/0x740 kernfs_fop_read_iter+0x254/0x2e0 new_sync_read+0x120/0x190 vfs_read+0x1d0/0x240 Moreover, the "new" replacement subtree is not correctly added to the device tree, resulting in ibm,platform-facilities parent node without the appropriate leaf nodes, and broken symlinks in the sysfs device hierarchy: $ tree -d /sys/firmware/devicetree/base/ibm,platform-facilities/ /sys/firmware/devicetree/base/ibm,platform-facilities/ 0 directories $ cd /sys/devices/vio ; find . 
-xtype l -exec file {} + ./ibm,sym-encryption-v1/of_node: broken symbolic link to ../../../firmware/devicetree/base/ibm,platform-facilities/ibm,sym-encryption-v1 ./ibm,random-v1/of_node: broken symbolic link to ../../../firmware/devicetree/base/ibm,platform-facilities/ibm,random-v1 ./ibm,compression-v1/of_node: broken symbolic link to ../../../firmware/devicetree/base/ibm,platform-facilities/ibm,compression-v1 This is because add_dt_node() -> dlpar_attach_node() attaches only the parent node returned from configure-connector, ignoring any children. This should be corrected for the general case, but fixing that won't help with the stale OF node references, which is the more urgent problem. One way to address that would be to make the drivers respond to node removal notifications, so that node references can be dropped appropriately. But this would likely force the drivers to disrupt active clients for no useful purpose: equivalent nodes are immediately re-added. And recall that the acceleration capabilities described by the nodes remain available throughout the whole process. The solution I believe to be robust for this situation is to convert remove+add of a node with an unchanged phandle to an update of the node's properties in the Linux device tree structure. That would involve changing and adding a fair amount of code, and may take several iterations to land. Until that can be realized we have a confirmed use-after-free and the possibility of memory corruption. So add a limited workaround that discriminates on the node type, ignoring adds and removes. This should be amenable to backporting in the meantime. Fixes: 410bccf97881 ("powerpc/pseries: Partition migration in the kernel") Cc: stable@vger.kernel.org Signed-off-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20211020194703.2613093-1-nathanl@linux.ibm.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-17powerpc/64s/interrupt: Fix check_return_regs_valid() false positiveNicholas Piggin1-1/+1
commit 4a5cb51f3db4be547225a4bce7a43d41b231382b upstream. The check_return_regs_valid() can cause a false positive if the return regs are marked as norestart and they are an HSRR type interrupt, because the low bit in the bottom of regs->trap causes interrupt type matching to fail. This can occur for example on bare metal with a HV privileged doorbell interrupt that causes a signal, but do_signal returns early because get_signal() fails, and takes the "No signal to deliver" path. In this case no signal was delivered so the return location is not changed so return SRRs are not invalidated, yet set_trap_norestart is called, which messes up the match. Building go-1.16.6 is known to reproduce this. Fix it by using the TRAP() accessor which masks out the low bit. Fixes: 6eaaf9de3599 ("powerpc/64s/interrupt: Check and fix srr_valid without crashing") Cc: stable@vger.kernel.org # v5.14+ Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20211026122531.3599918-1-npiggin@gmail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-17powerpc/security: Use a mutex for interrupt exit code patchingRussell Currey1-0/+11
commit 3c12b4df8d5e026345a19886ae375b3ebc33c0b6 upstream. The mitigation-patching.sh script in the powerpc selftests toggles all mitigations on and off simultaneously, revealing that rfi_flush and stf_barrier cannot safely operate at the same time due to races in updating the static key. On some systems, the static key code throws a warning and the kernel remains functional. On others, the kernel will hang or crash. Fix this by slapping on a mutex. Fixes: 13799748b957 ("powerpc/64: use interrupt restart table to speed up return from interrupt") Cc: stable@vger.kernel.org # v5.14+ Signed-off-by: Russell Currey <ruscur@russell.cc> Acked-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20211027072410.40950-1-ruscur@russell.cc Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
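A minimal sketch of the shape of the fix, assuming a hypothetical patch_interrupt_exit_code() helper standing in for the real patching routines: both mitigation toggles take the same mutex so the static-key/code-patching sequence can never run concurrently.

static DEFINE_MUTEX(exit_flush_lock);

static void toggle_exit_mitigation(bool enable)
{
        mutex_lock(&exit_flush_lock);
        patch_interrupt_exit_code(enable);   /* hypothetical stand-in */
        mutex_unlock(&exit_flush_lock);
}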
2021-11-17powerpc/powernv/prd: Unregister OPAL_MSG_PRD2 notifier during module unloadVasant Hegde1-1/+11
commit 52862ab33c5d97490f3fa345d6529829e6d6637b upstream. Commit 587164cd, introduced new opal message type (OPAL_MSG_PRD2) and added opal notifier. But I missed to unregister the notifier during module unload path. This results in below call trace if you try to unload and load opal_prd module. Also add new notifier_block for OPAL_MSG_PRD2 message. Sample calltrace (modprobe -r opal_prd; modprobe opal_prd) BUG: Unable to handle kernel data access on read at 0xc0080000192200e0 Faulting instruction address: 0xc00000000018d1cc Oops: Kernel access of bad area, sig: 11 [#1] LE PAGE_SIZE=64K MMU=Radix SMP NR_CPUS=2048 NUMA PowerNV CPU: 66 PID: 7446 Comm: modprobe Kdump: loaded Tainted: G E 5.14.0prd #759 NIP: c00000000018d1cc LR: c00000000018d2a8 CTR: c0000000000cde10 REGS: c0000003c4c0f0a0 TRAP: 0300 Tainted: G E (5.14.0prd) MSR: 9000000002009033 <SF,HV,VEC,EE,ME,IR,DR,RI,LE> CR: 24224824 XER: 20040000 CFAR: c00000000018d2a4 DAR: c0080000192200e0 DSISR: 40000000 IRQMASK: 1 ... NIP notifier_chain_register+0x2c/0xc0 LR atomic_notifier_chain_register+0x48/0x80 Call Trace: 0xc000000002090610 (unreliable) atomic_notifier_chain_register+0x58/0x80 opal_message_notifier_register+0x7c/0x1e0 opal_prd_probe+0x84/0x150 [opal_prd] platform_probe+0x78/0x130 really_probe+0x110/0x5d0 __driver_probe_device+0x17c/0x230 driver_probe_device+0x60/0x130 __driver_attach+0xfc/0x220 bus_for_each_dev+0xa8/0x130 driver_attach+0x34/0x50 bus_add_driver+0x1b0/0x300 driver_register+0x98/0x1a0 __platform_driver_register+0x38/0x50 opal_prd_driver_init+0x34/0x50 [opal_prd] do_one_initcall+0x60/0x2d0 do_init_module+0x7c/0x320 load_module+0x3394/0x3650 __do_sys_finit_module+0xd4/0x160 system_call_exception+0x140/0x290 system_call_common+0xf4/0x258 Fixes: 587164cd593c ("powerpc/powernv: Add new opal message type") Cc: stable@vger.kernel.org # v5.4+ Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20211028165716.41300-1-hegdevasant@linux.vnet.ibm.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
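A hedged sketch of the pattern being fixed: one notifier_block per OPAL message type, registered in probe and unregistered again on the remove/unload path. The handler and block names are assumptions; only the opal_message_notifier_* calls and message types are the real API.

static struct notifier_block opal_prd_msg2_nb = {
        .notifier_call = opal_prd_msg_notifier,   /* assumed existing handler */
};

static int opal_prd_remove_sketch(struct platform_device *pdev)
{
        opal_message_notifier_unregister(OPAL_MSG_PRD, &opal_prd_msg_nb);
        opal_message_notifier_unregister(OPAL_MSG_PRD2, &opal_prd_msg2_nb);
        return 0;
}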
2021-11-17powerpc/32e: Ignore ESR in instruction storage interrupt handlerNicholas Piggin1-3/+12
commit 81291383ffde08b23bce75e7d6b2575ce9d3475c upstream. A e5500 machine running a 32-bit kernel sometimes hangs at boot, seemingly going into an infinite loop of instruction storage interrupts. The ESR (Exception Syndrome Register) has a value of 0x800000 (store) when this happens, which is likely set by a previous store. An instruction TLB miss interrupt would then leave ESR unchanged, and if no PTE exists it calls directly to the instruction storage interrupt handler without changing ESR. access_error() does not cause a segfault due to a store to a read-only vma because is_exec is true. Most subsequent fault handling does not check for a write fault on a read-only vma, and might do strange things like create a writeable PTE or call page_mkwrite on a read only vma or file. It's not clear what happens here to cause the infinite faulting in this case, a fault handler failure or low level PTE or TLB handling. In any case this can be fixed by having the instruction storage interrupt zero regs->dsisr rather than storing the ESR value to it. Fixes: a01a3f2ddbcd ("powerpc: remove arguments from fault handler functions") Cc: stable@vger.kernel.org # v5.12+ Reported-by: Jacques de Laval <jacques.delaval@protonmail.com> Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Tested-by: Jacques de Laval <jacques.delaval@protonmail.com> Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20211028133043.4159501-1-npiggin@gmail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-17powerpc/bpf: Fix write protecting JIT codeHari Bathini1-1/+1
commit 44a8214de96bafb5210e43bfa2c97c19bf75af3d upstream. Running program with bpf-to-bpf function calls results in data access exception (0x300) with the below call trace: bpf_int_jit_compile+0x238/0x750 (unreliable) bpf_check+0x2008/0x2710 bpf_prog_load+0xb00/0x13a0 __sys_bpf+0x6f4/0x27c0 sys_bpf+0x2c/0x40 system_call_exception+0x164/0x330 system_call_vectored_common+0xe8/0x278 as bpf_int_jit_compile() tries writing to write protected JIT code location during the extra pass. Fix it by holding off write protection of JIT code until the extra pass, where branch target addresses fixup happens. Fixes: 62e3d4210ac9 ("powerpc/bpf: Write protect JIT code") Cc: stable@vger.kernel.org # v5.14+ Signed-off-by: Hari Bathini <hbathini@linux.ibm.com> Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20211025055649.114728-1-hbathini@linux.ibm.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-17powerpc/vas: Fix potential NULL pointer dereferenceGustavo A. R. Silva1-2/+2
commit 61cb9ac66b30374c7fd8a8b2a3c4f8f432c72e36 upstream. (!ptr && !ptr->foo) strikes again. :) The expression (!ptr && !ptr->foo) is bogus and in case ptr is NULL, it leads to a NULL pointer dereference: ptr->foo. Fix this by converting && to || This issue was detected with the help of Coccinelle, and audited and fixed manually. Fixes: 1a0d0d5ed5e3 ("powerpc/vas: Add platform specific user window operations") Cc: stable@vger.kernel.org Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org> Reviewed-by: Tyrel Datwyler <tyreld@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20211015050345.GA1161918@embeddedor Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
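The bug class in isolation: with &&, a NULL ptr makes the first test true and the second operand is then evaluated, dereferencing NULL; with || the short-circuit happens before the dereference.

if (!ptr && !ptr->foo)   /* broken: still evaluates ptr->foo when ptr is NULL */
        return -EINVAL;

if (!ptr || !ptr->foo)   /* fixed: NULL ptr short-circuits before the deref  */
        return -EINVAL;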
2021-11-17mtd: rawnand: au1550nd: Keep the driver compatible with on-die ECC enginesMiquel Raynal1-3/+9
commit 7e3cdba176ba59eaf4d463d273da0718e3626140 upstream. Following the introduction of the generic ECC engine infrastructure, it was necessary to reorganize the code and move the ECC configuration in the ->attach_chip() hook. Failing to do that properly lead to a first series of fixes supposed to stabilize the situation. Unfortunately, this only fixed the use of software ECC engines, preventing any other kind of engine to be used, including on-die ones. It is now time to (finally) fix the situation by ensuring that we still provide a default (eg. software ECC) but will still support different ECC engines such as on-die ECC engines if properly described in the device tree. There are no changes needed on the core side in order to do this, but we just need to leverage the logic there which allows: 1- a subsystem default (set to Host engines in the raw NAND world) 2- a driver specific default (here set to software ECC engines) 3- any type of engine requested by the user (ie. described in the DT) As the raw NAND subsystem has not yet been fully converted to the ECC engine infrastructure, in order to provide a default ECC engine for this driver we need to set chip->ecc.engine_type *before* calling nand_scan(). During the initialization step, the core will consider this entry as the default engine for this driver. This value may of course be overloaded by the user if the usual DT properties are provided. Fixes: dbffc8ccdf3a ("mtd: rawnand: au1550: Move the ECC initialization to ->attach_chip()") Cc: stable@vger.kernel.org Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20210928222258.199726-3-miquel.raynal@bootlin.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
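A hedged sketch of the common shape of this and the following driver fixes (field and enum names follow the raw NAND core; the surrounding driver code is abridged): make software ECC the driver default before nand_scan(), and only finish configuring it in ->attach_chip() if the user did not request another engine. The same pattern applies to the plat_nand, orion, pasemi, gpio, mpc5121, xway and ams-delta entries below.

/* probe path, before nand_scan(): driver-specific default */
chip->ecc.engine_type = NAND_ECC_ENGINE_TYPE_SOFT;

/* ->attach_chip(): bail out early if the DT selected e.g. an on-die engine */
if (chip->ecc.engine_type != NAND_ECC_ENGINE_TYPE_SOFT)
        return 0;

chip->ecc.algo = NAND_ECC_ALGO_HAMMING;   /* complete the software ECC setup */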
2021-11-17mtd: rawnand: plat_nand: Keep the driver compatible with on-die ECC enginesMiquel Raynal1-3/+9
commit 325fd539fc84f0aaa0ceb9d7d3b8718582473dc5 upstream. Following the introduction of the generic ECC engine infrastructure, it was necessary to reorganize the code and move the ECC configuration in the ->attach_chip() hook. Failing to do that properly lead to a first series of fixes supposed to stabilize the situation. Unfortunately, this only fixed the use of software ECC engines, preventing any other kind of engine to be used, including on-die ones. It is now time to (finally) fix the situation by ensuring that we still provide a default (eg. software ECC) but will still support different ECC engines such as on-die ECC engines if properly described in the device tree. There are no changes needed on the core side in order to do this, but we just need to leverage the logic there which allows: 1- a subsystem default (set to Host engines in the raw NAND world) 2- a driver specific default (here set to software ECC engines) 3- any type of engine requested by the user (ie. described in the DT) As the raw NAND subsystem has not yet been fully converted to the ECC engine infrastructure, in order to provide a default ECC engine for this driver we need to set chip->ecc.engine_type *before* calling nand_scan(). During the initialization step, the core will consider this entry as the default engine for this driver. This value may of course be overloaded by the user if the usual DT properties are provided. Fixes: 612e048e6aab ("mtd: rawnand: plat_nand: Move the ECC initialization to ->attach_chip()") Cc: stable@vger.kernel.org Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20210928222258.199726-8-miquel.raynal@bootlin.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-17mtd: rawnand: orion: Keep the driver compatible with on-die ECC enginesMiquel Raynal1-3/+9
commit 194ac63de6ff56d30c48e3ac19c8a412f9c1408e upstream. Following the introduction of the generic ECC engine infrastructure, it was necessary to reorganize the code and move the ECC configuration in the ->attach_chip() hook. Failing to do that properly lead to a first series of fixes supposed to stabilize the situation. Unfortunately, this only fixed the use of software ECC engines, preventing any other kind of engine to be used, including on-die ones. It is now time to (finally) fix the situation by ensuring that we still provide a default (eg. software ECC) but will still support different ECC engines such as on-die ECC engines if properly described in the device tree. There are no changes needed on the core side in order to do this, but we just need to leverage the logic there which allows: 1- a subsystem default (set to Host engines in the raw NAND world) 2- a driver specific default (here set to software ECC engines) 3- any type of engine requested by the user (ie. described in the DT) As the raw NAND subsystem has not yet been fully converted to the ECC engine infrastructure, in order to provide a default ECC engine for this driver we need to set chip->ecc.engine_type *before* calling nand_scan(). During the initialization step, the core will consider this entry as the default engine for this driver. This value may of course be overloaded by the user if the usual DT properties are provided. Fixes: 553508cec2e8 ("mtd: rawnand: orion: Move the ECC initialization to ->attach_chip()") Cc: stable@vger.kernel.org Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20210928222258.199726-6-miquel.raynal@bootlin.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-17mtd: rawnand: pasemi: Keep the driver compatible with on-die ECC enginesMiquel Raynal1-3/+9
commit f16b7d2a5e810fcf4b15d096246d0d445da9cc88 upstream. Following the introduction of the generic ECC engine infrastructure, it was necessary to reorganize the code and move the ECC configuration in the ->attach_chip() hook. Failing to do that properly lead to a first series of fixes supposed to stabilize the situation. Unfortunately, this only fixed the use of software ECC engines, preventing any other kind of engine to be used, including on-die ones. It is now time to (finally) fix the situation by ensuring that we still provide a default (eg. software ECC) but will still support different ECC engines such as on-die ECC engines if properly described in the device tree. There are no changes needed on the core side in order to do this, but we just need to leverage the logic there which allows: 1- a subsystem default (set to Host engines in the raw NAND world) 2- a driver specific default (here set to software ECC engines) 3- any type of engine requested by the user (ie. described in the DT) As the raw NAND subsystem has not yet been fully converted to the ECC engine infrastructure, in order to provide a default ECC engine for this driver we need to set chip->ecc.engine_type *before* calling nand_scan(). During the initialization step, the core will consider this entry as the default engine for this driver. This value may of course be overloaded by the user if the usual DT properties are provided. Fixes: 8fc6f1f042b2 ("mtd: rawnand: pasemi: Move the ECC initialization to ->attach_chip()") Cc: stable@vger.kernel.org Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20210928222258.199726-7-miquel.raynal@bootlin.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-17mtd: rawnand: gpio: Keep the driver compatible with on-die ECC enginesMiquel Raynal1-3/+9
commit b5b5b4dc6fcd8194b9dd38c8acdc5ab71adf44f8 upstream. Following the introduction of the generic ECC engine infrastructure, it was necessary to reorganize the code and move the ECC configuration in the ->attach_chip() hook. Failing to do that properly lead to a first series of fixes supposed to stabilize the situation. Unfortunately, this only fixed the use of software ECC engines, preventing any other kind of engine to be used, including on-die ones. It is now time to (finally) fix the situation by ensuring that we still provide a default (eg. software ECC) but will still support different ECC engines such as on-die ECC engines if properly described in the device tree. There are no changes needed on the core side in order to do this, but we just need to leverage the logic there which allows: 1- a subsystem default (set to Host engines in the raw NAND world) 2- a driver specific default (here set to software ECC engines) 3- any type of engine requested by the user (ie. described in the DT) As the raw NAND subsystem has not yet been fully converted to the ECC engine infrastructure, in order to provide a default ECC engine for this driver we need to set chip->ecc.engine_type *before* calling nand_scan(). During the initialization step, the core will consider this entry as the default engine for this driver. This value may of course be overloaded by the user if the usual DT properties are provided. Fixes: f6341f6448e0 ("mtd: rawnand: gpio: Move the ECC initialization to ->attach_chip()") Cc: stable@vger.kernel.org Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20210928222258.199726-4-miquel.raynal@bootlin.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-17mtd: rawnand: mpc5121: Keep the driver compatible with on-die ECC enginesMiquel Raynal1-3/+9
commit f9d8570b7fd6f4f08528ce2f5e39787a8a260cd6 upstream. Following the introduction of the generic ECC engine infrastructure, it was necessary to reorganize the code and move the ECC configuration in the ->attach_chip() hook. Failing to do that properly lead to a first series of fixes supposed to stabilize the situation. Unfortunately, this only fixed the use of software ECC engines, preventing any other kind of engine to be used, including on-die ones. It is now time to (finally) fix the situation by ensuring that we still provide a default (eg. software ECC) but will still support different ECC engines such as on-die ECC engines if properly described in the device tree. There are no changes needed on the core side in order to do this, but we just need to leverage the logic there which allows: 1- a subsystem default (set to Host engines in the raw NAND world) 2- a driver specific default (here set to software ECC engines) 3- any type of engine requested by the user (ie. described in the DT) As the raw NAND subsystem has not yet been fully converted to the ECC engine infrastructure, in order to provide a default ECC engine for this driver we need to set chip->ecc.engine_type *before* calling nand_scan(). During the initialization step, the core will consider this entry as the default engine for this driver. This value may of course be overloaded by the user if the usual DT properties are provided. Fixes: 6dd09f775b72 ("mtd: rawnand: mpc5121: Move the ECC initialization to ->attach_chip()") Cc: stable@vger.kernel.org Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20210928222258.199726-5-miquel.raynal@bootlin.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-17mtd: rawnand: xway: Keep the driver compatible with on-die ECC enginesMiquel Raynal1-3/+9
commit 6bcd2960af1b7bacb2f1e710ab0c0b802d900501 upstream. Following the introduction of the generic ECC engine infrastructure, it was necessary to reorganize the code and move the ECC configuration in the ->attach_chip() hook. Failing to do that properly lead to a first series of fixes supposed to stabilize the situation. Unfortunately, this only fixed the use of software ECC engines, preventing any other kind of engine to be used, including on-die ones. It is now time to (finally) fix the situation by ensuring that we still provide a default (eg. software ECC) but will still support different ECC engines such as on-die ECC engines if properly described in the device tree. There are no changes needed on the core side in order to do this, but we just need to leverage the logic there which allows: 1- a subsystem default (set to Host engines in the raw NAND world) 2- a driver specific default (here set to software ECC engines) 3- any type of engine requested by the user (ie. described in the DT) As the raw NAND subsystem has not yet been fully converted to the ECC engine infrastructure, in order to provide a default ECC engine for this driver we need to set chip->ecc.engine_type *before* calling nand_scan(). During the initialization step, the core will consider this entry as the default engine for this driver. This value may of course be overloaded by the user if the usual DT properties are provided. Fixes: d525914b5bd8 ("mtd: rawnand: xway: Move the ECC initialization to ->attach_chip()") Cc: stable@vger.kernel.org Cc: Jan Hoffmann <jan@3e8.eu> Cc: Kestrel seventyfour <kestrelseventyfour@gmail.com> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Tested-by: Jan Hoffmann <jan@3e8.eu> Link: https://lore.kernel.org/linux-mtd/20210928222258.199726-10-miquel.raynal@bootlin.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-17mtd: rawnand: ams-delta: Keep the driver compatible with on-die ECC enginesMiquel Raynal1-3/+9
commit d707bb74daae07879e0fc1b4b960f8f2d0a5fe5d upstream. Following the introduction of the generic ECC engine infrastructure, it was necessary to reorganize the code and move the ECC configuration in the ->attach_chip() hook. Failing to do that properly lead to a first series of fixes supposed to stabilize the situation. Unfortunately, this only fixed the use of software ECC engines, preventing any other kind of engine to be used, including on-die ones. It is now time to (finally) fix the situation by ensuring that we still provide a default (eg. software ECC) but will still support different ECC engines such as on-die ECC engines if properly described in the device tree. There are no changes needed on the core side in order to do this, but we just need to leverage the logic there which allows: 1- a subsystem default (set to Host engines in the raw NAND world) 2- a driver specific default (here set to software ECC engines) 3- any type of engine requested by the user (ie. described in the DT) As the raw NAND subsystem has not yet been fully converted to the ECC engine infrastructure, in order to provide a default ECC engine for this driver we need to set chip->ecc.engine_type *before* calling nand_scan(). During the initialization step, the core will consider this entry as the default engine for this driver. This value may of course be overloaded by the user if the usual DT properties are provided. Fixes: 59d93473323a ("mtd: rawnand: ams-delta: Move the ECC initialization to ->attach_chip()") Cc: stable@vger.kernel.org Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20210928222258.199726-2-miquel.raynal@bootlin.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-17mtd: rawnand: fsmc: Fix use of SM ORDERMiquel Raynal1-1/+3
commit 9be1446ece291a1f08164bd056bed3d698681f8b upstream. The introduction of the generic ECC engine API lead to a number of changes in various drivers which broke some of them. Here is a typical example: I expected the SM_ORDER option to be handled by the Hamming ECC engine internals. Problem: the fsmc driver does not instantiate (yet) a real ECC engine object so we had to use a 'bare' ECC helper instead of the shiny rawnand functions. However, when not intializing this engine properly and using the bare helpers, we do not get the SM ORDER feature handled automatically. It looks like this was lost in the process so let's ensure we use the right SM ORDER now. Fixes: ad9ffdce4539 ("mtd: rawnand: fsmc: Fix external use of SW Hamming ECC helper") Cc: stable@vger.kernel.org Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Link: https://lore.kernel.org/linux-mtd/20210928221507.199198-2-miquel.raynal@bootlin.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-17remoteproc: imx_rproc: Fix rsc-table nameDong Aisheng1-1/+1
commit e90547d59d4e29e269e22aa6ce590ed0b41207d2 upstream. Usually the dash '-' is preferred in node names. So far no dts in the upstream kernel uses this node, so we just update the node name in the driver. Cc: Bjorn Andersson <bjorn.andersson@linaro.org> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Fixes: 5e4c1243071d ("remoteproc: imx_rproc: support remote cores booted before Linux Kernel") Reviewed-and-tested-by: Peng Fan <peng.fan@nxp.com> Signed-off-by: Dong Aisheng <aisheng.dong@nxp.com> Signed-off-by: Peng Fan <peng.fan@nxp.com> Cc: stable <stable@vger.kernel.org> Link: https://lore.kernel.org/r/20210910090621.3073540-6-peng.fan@oss.nxp.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org> Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-17remoteproc: imx_rproc: Fix ignoring mapping vdev regionsDong Aisheng1-2/+2
commit afe670e23af91d8a74a8d7049f6e0984bbf6ea11 upstream. vdev regions are typically named vdev0buffer, vdev0ring0, vdev0ring1, etc. Change to strncmp() to cover them all. Fixes: 8f2d8961640f ("remoteproc: imx_rproc: ignore mapping vdev regions") Reviewed-and-tested-by: Peng Fan <peng.fan@nxp.com> Signed-off-by: Dong Aisheng <aisheng.dong@nxp.com> Signed-off-by: Peng Fan <peng.fan@nxp.com> Cc: stable <stable@vger.kernel.org> Link: https://lore.kernel.org/r/20210910090621.3073540-5-peng.fan@oss.nxp.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org> Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
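A hedged one-line sketch of the change (the variable holding the region node name is illustrative): match on the "vdev" prefix so every vdev<N>* region is covered.

if (!strncmp(node_name, "vdev", strlen("vdev")))   /* vdev0buffer, vdev0ring0, ... */
        continue;                                  /* skip mapping vdev regions    */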
2021-11-17remoteproc: Fix the wrong default value of is_iomemDong Aisheng2-2/+2
commit 970675f61bf5761d7e5326f6e4df995ecdba5e11 upstream. Currently is_iomem is an uninitialized stack value, which may default to true even on platforms that do not use iomem to store firmware. Cc: Bjorn Andersson <bjorn.andersson@linaro.org> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Fixes: 40df0a91b2a5 ("remoteproc: add is_iomem to da_to_va") Reviewed-and-tested-by: Peng Fan <peng.fan@nxp.com> Signed-off-by: Dong Aisheng <aisheng.dong@nxp.com> Signed-off-by: Peng Fan <peng.fan@nxp.com> Cc: stable <stable@vger.kernel.org> Link: https://lore.kernel.org/r/20210910090621.3073540-3-peng.fan@oss.nxp.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org> Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
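A hedged sketch of the caller-side fix: give the flag a deterministic default so code paths that never set it do not see stack garbage (rproc_da_to_va() is the real helper; the surrounding code is abridged).

bool is_iomem = false;   /* was uninitialized, i.e. random stack contents */
void *va;

va = rproc_da_to_va(rproc, da, memsz, &is_iomem);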
2021-11-17remoteproc: elf_loader: Fix loading segment when is_iomem truePeng Fan1-1/+1
commit 24acbd9dc934f5d9418a736c532d3970a272063e upstream. It happened to work on i.MX platforms by luck, but it is wrong: memcpy_toio() must be used here, not memcpy_fromio(). Fixes: 40df0a91b2a5 ("remoteproc: add is_iomem to da_to_va") Tested-by: Dong Aisheng <aisheng.dong@nxp.com> (i.MX8MQ) Reported-by: kernel test robot <lkp@intel.com> Reported-by: Dong Aisheng <aisheng.dong@nxp.com> Signed-off-by: Peng Fan <peng.fan@nxp.com> Cc: stable <stable@vger.kernel.org> Link: https://lore.kernel.org/r/20210910090621.3073540-2-peng.fan@oss.nxp.com Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org> Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
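The corrected direction in a hedged fragment (variable names only loosely follow the ELF loader): the segment is copied *into* device memory, so the _toio variant is the right one.

if (is_iomem)
        memcpy_toio((void __iomem *)ptr, elf_data + phdr_offset, filesz);
else
        memcpy(ptr, elf_data + phdr_offset, filesz);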
2021-11-17s390/cio: make ccw_device_dma_* more robustHalil Pasic1-1/+11
commit ad9a14517263a16af040598c7920c09ca9670a31 upstream. Since commit 48720ba56891 ("virtio/s390: use DMA memory for ccw I/O and classic notifiers") we were supposed to make sure that virtio_ccw_release_dev() completes before the ccw device and the attached dma pool are torn down, but unfortunately we did not. Before that commit it used to be OK to delay cleaning up the memory allocated by virtio-ccw indefinitely (which isn't really intuitive for guys used to destruction happens in reverse construction order), but now we trigger a BUG_ON if the genpool is destroyed before all memory allocated from it is deallocated. Which brings down the guest. We can observe this problem, when unregister_virtio_device() does not give up the last reference to the virtio_device (e.g. because a virtio-scsi attached scsi disk got removed without previously unmounting its previously mounted partition). To make sure that the genpool is only destroyed after all the necessary freeing is done let us take a reference on the ccw device on each ccw_device_dma_zalloc() and give it up on each ccw_device_dma_free(). Actually there are multiple approaches to fixing the problem at hand that can work. The upside of this one is that it is the safest one while remaining simple. We don't crash the guest even if the driver does not pair allocations and frees. The downside is the reference counting overhead, that the reference counting for ccw devices becomes more complex, in a sense that we need to pair the calls to the aforementioned functions for it to be correct, and that if we happen to leak, we leak more than necessary (the whole ccw device instead of just the genpool). Some alternatives to this approach are taking a reference in virtio_ccw_online() and giving it up in virtio_ccw_release_dev() or making sure virtio_ccw_release_dev() completes its work before virtio_ccw_remove() returns. The downside of these approaches is that these are less safe against programming errors. Cc: <stable@vger.kernel.org> # v5.3 Signed-off-by: Halil Pasic <pasic@linux.ibm.com> Fixes: 48720ba56891 ("virtio/s390: use DMA memory for ccw I/O and classic notifiers") Reported-by: bfu@redhat.com Reviewed-by: Vineeth Vijayan <vneethv@linux.ibm.com> Acked-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
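A hedged sketch of the reference pairing described above (the genpool helpers and the private-data layout are assumptions): each allocation pins the ccw device, each free drops that pin, so the pool outlives all outstanding allocations.

void *ccw_device_dma_zalloc_sketch(struct ccw_device *cdev, size_t size)
{
        void *addr = cio_gp_dma_zalloc(cdev->private->dma_pool,   /* assumed layout */
                                       &cdev->dev, size);
        if (addr)
                get_device(&cdev->dev);   /* dropped again in the free path */
        return addr;
}

void ccw_device_dma_free_sketch(struct ccw_device *cdev, void *addr, size_t size)
{
        cio_gp_dma_free(cdev->private->dma_pool, addr, size);
        put_device(&cdev->dev);           /* may now release the device     */
}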
2021-11-17s390/ap: Fix hanging ioctl caused by orphaned repliesHarald Freudenberger1-0/+2
commit 3826350e6dd435e244eb6e47abad5a47c169ebc2 upstream. When a queue is switched to soft offline during heavy load and later switched to soft online again and now used, it may be that the caller is blocked forever in the ioctl call. The failure occurs because there is a pending reply after the queue(s) have been switched to offline. This orphaned reply is received when the queue is switched to online and is accidentally counted for the outstanding replies. So when there was a valid outstanding reply and this orphaned reply is received it counts as the outstanding one thus dropping the outstanding counter to 0. Voila, with this counter the receive function is not called any more and the real outstanding reply is never received (until another request comes in...) and the ioctl blocks. The fix is simple. However, instead of readjusting the counter when an orphaned reply is detected, I check the queue status for not empty and compare this to the outstanding counter. So if the queue is not empty then the counter must not drop to 0 but at least have a value of 1. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Cc: stable@vger.kernel.org Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-17s390/tape: fix timer initialization in tape_std_assign()Sven Schnelle1-2/+1
commit 213fca9e23b59581c573d558aa477556f00b8198 upstream. commit 9c6c273aa424 ("timer: Remove init_timer_on_stack() in favor of timer_setup_on_stack()") changed the timer setup from init_timer_on_stack() to timer_setup(), but missed changing the mod_timer() call. And while at it, use msecs_to_jiffies() instead of the open-coded timeout calculation. Cc: stable@vger.kernel.org Fixes: 9c6c273aa424 ("timer: Remove init_timer_on_stack() in favor of timer_setup_on_stack()") Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Reviewed-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
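A hedged two-line sketch of the corrected pairing (the callback name is assumed): timer_setup() plus a mod_timer() expiry computed with msecs_to_jiffies().

timer_setup(&timeout, tape_assign_timeout_fn, 0);        /* callback name assumed        */
mod_timer(&timeout, jiffies + msecs_to_jiffies(2000));   /* instead of open-coded expiry */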
2021-11-17s390/cio: check the subchannel validity for dev_busidVineeth Vijayan1-2/+2
commit a4751f157c194431fae9e9c493f456df8272b871 upstream. Check the validity of the subchannel before reading other fields in the schib. Fixes: d3683c055212 ("s390/cio: add dev_busid sysfs entry for each subchannel") CC: <stable@vger.kernel.org> Reported-by: Cornelia Huck <cohuck@redhat.com> Signed-off-by: Vineeth Vijayan <vneethv@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Link: https://lore.kernel.org/r/20211105154451.847288-1-vneethv@linux.ibm.com Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-17s390/cpumf: cpum_cf PMU displays invalid value after hotplug removeThomas Richter1-1/+3
commit 9d48c7afedf91a02d03295837ec76b2fb5e7d3fe upstream. When a CPU is hotplugged while the perf stat -e cycles command is running, a wrong (very large) value is displayed immediately after the CPU removal: Check the values, shouldn't be too high as in time counts unit events 1.001101919 29261846 cycles 2.002454499 17523405 cycles 3.003659292 24361161 cycles 4.004816983 18446744073638406144 cycles 5.005671647 <not counted> cycles ... The CPU hotplug off took place after 3 seconds. The issue is the read of the event count value after 4 seconds when the CPU is not available and the read of the counter returns an error. This is treated as a counter value of zero. This results in a very large value (0 - previous_value). Fix this by detecting the hotplugged off CPU and report 0 instead of a very large number. Cc: stable@vger.kernel.org Fixes: a029a4eab39e ("s390/cpumf: Allow concurrent access for CPU Measurement Counter Facility") Reported-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Signed-off-by: Thomas Richter <tmricht@linux.ibm.com> Reviewed-by: Sumanth Korikkar <sumanthk@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-17PM: sleep: Avoid calling put_device() under dpm_list_mtxRafael J. Wysocki1-27/+57
commit 2aa36604e8243698ff22bd5fef0dd0c6bb07ba92 upstream. It is generally unsafe to call put_device() with dpm_list_mtx held, because the given device's release routine may carry out an action depending on that lock which then may deadlock, so modify the system-wide suspend and resume of devices to always drop dpm_list_mtx before calling put_device() (and adjust white space somewhat while at it). For instance, this prevents the following splat from showing up in the kernel log after a system resume in certain configurations: [ 3290.969514] ====================================================== [ 3290.969517] WARNING: possible circular locking dependency detected [ 3290.969519] 5.15.0+ #2420 Tainted: G S [ 3290.969523] ------------------------------------------------------ [ 3290.969525] systemd-sleep/4553 is trying to acquire lock: [ 3290.969529] ffff888117ab1138 ((wq_completion)hci0#2){+.+.}-{0:0}, at: flush_workqueue+0x87/0x4a0 [ 3290.969554] but task is already holding lock: [ 3290.969556] ffffffff8280fca8 (dpm_list_mtx){+.+.}-{3:3}, at: dpm_resume+0x12e/0x3e0 [ 3290.969571] which lock already depends on the new lock. [ 3290.969573] the existing dependency chain (in reverse order) is: [ 3290.969575] -> #3 (dpm_list_mtx){+.+.}-{3:3}: [ 3290.969583] __mutex_lock+0x9d/0xa30 [ 3290.969591] device_pm_add+0x2e/0xe0 [ 3290.969597] device_add+0x4d5/0x8f0 [ 3290.969605] hci_conn_add_sysfs+0x43/0xb0 [bluetooth] [ 3290.969689] hci_conn_complete_evt.isra.71+0x124/0x750 [bluetooth] [ 3290.969747] hci_event_packet+0xd6c/0x28a0 [bluetooth] [ 3290.969798] hci_rx_work+0x213/0x640 [bluetooth] [ 3290.969842] process_one_work+0x2aa/0x650 [ 3290.969851] worker_thread+0x39/0x400 [ 3290.969859] kthread+0x142/0x170 [ 3290.969865] ret_from_fork+0x22/0x30 [ 3290.969872] -> #2 (&hdev->lock){+.+.}-{3:3}: [ 3290.969881] __mutex_lock+0x9d/0xa30 [ 3290.969887] hci_event_packet+0xba/0x28a0 [bluetooth] [ 3290.969935] hci_rx_work+0x213/0x640 [bluetooth] [ 3290.969978] process_one_work+0x2aa/0x650 [ 3290.969985] worker_thread+0x39/0x400 [ 3290.969993] kthread+0x142/0x170 [ 3290.969999] ret_from_fork+0x22/0x30 [ 3290.970004] -> #1 ((work_completion)(&hdev->rx_work)){+.+.}-{0:0}: [ 3290.970013] process_one_work+0x27d/0x650 [ 3290.970020] worker_thread+0x39/0x400 [ 3290.970028] kthread+0x142/0x170 [ 3290.970033] ret_from_fork+0x22/0x30 [ 3290.970038] -> #0 ((wq_completion)hci0#2){+.+.}-{0:0}: [ 3290.970047] __lock_acquire+0x15cb/0x1b50 [ 3290.970054] lock_acquire+0x26c/0x300 [ 3290.970059] flush_workqueue+0xae/0x4a0 [ 3290.970066] drain_workqueue+0xa1/0x130 [ 3290.970073] destroy_workqueue+0x34/0x1f0 [ 3290.970081] hci_release_dev+0x49/0x180 [bluetooth] [ 3290.970130] bt_host_release+0x1d/0x30 [bluetooth] [ 3290.970195] device_release+0x33/0x90 [ 3290.970201] kobject_release+0x63/0x160 [ 3290.970211] dpm_resume+0x164/0x3e0 [ 3290.970215] dpm_resume_end+0xd/0x20 [ 3290.970220] suspend_devices_and_enter+0x1a4/0xba0 [ 3290.970229] pm_suspend+0x26b/0x310 [ 3290.970236] state_store+0x42/0x90 [ 3290.970243] kernfs_fop_write_iter+0x135/0x1b0 [ 3290.970251] new_sync_write+0x125/0x1c0 [ 3290.970257] vfs_write+0x360/0x3c0 [ 3290.970263] ksys_write+0xa7/0xe0 [ 3290.970269] do_syscall_64+0x3a/0x80 [ 3290.970276] entry_SYSCALL_64_after_hwframe+0x44/0xae [ 3290.970284] other info that might help us debug this: [ 3290.970285] Chain exists of: (wq_completion)hci0#2 --> &hdev->lock --> dpm_list_mtx [ 3290.970297] Possible unsafe locking scenario: [ 3290.970299] CPU0 CPU1 [ 3290.970300] ---- ---- [ 3290.970302] lock(dpm_list_mtx); [ 
3290.970306] lock(&hdev->lock); [ 3290.970310] lock(dpm_list_mtx); [ 3290.970314] lock((wq_completion)hci0#2); [ 3290.970319] *** DEADLOCK *** [ 3290.970321] 7 locks held by systemd-sleep/4553: [ 3290.970325] #0: ffff888103bcd448 (sb_writers#4){.+.+}-{0:0}, at: ksys_write+0xa7/0xe0 [ 3290.970341] #1: ffff888115a14488 (&of->mutex){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x103/0x1b0 [ 3290.970355] #2: ffff888100f719e0 (kn->active#233){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x10c/0x1b0 [ 3290.970369] #3: ffffffff82661048 (autosleep_lock){+.+.}-{3:3}, at: state_store+0x12/0x90 [ 3290.970384] #4: ffffffff82658ac8 (system_transition_mutex){+.+.}-{3:3}, at: pm_suspend+0x9f/0x310 [ 3290.970399] #5: ffffffff827f2a48 (acpi_scan_lock){+.+.}-{3:3}, at: acpi_suspend_begin+0x4c/0x80 [ 3290.970416] #6: ffffffff8280fca8 (dpm_list_mtx){+.+.}-{3:3}, at: dpm_resume+0x12e/0x3e0 [ 3290.970428] stack backtrace: [ 3290.970431] CPU: 3 PID: 4553 Comm: systemd-sleep Tainted: G S 5.15.0+ #2420 [ 3290.970438] Hardware name: Dell Inc. XPS 13 9380/0RYJWW, BIOS 1.5.0 06/03/2019 [ 3290.970441] Call Trace: [ 3290.970446] dump_stack_lvl+0x44/0x57 [ 3290.970454] check_noncircular+0x105/0x120 [ 3290.970468] ? __lock_acquire+0x15cb/0x1b50 [ 3290.970474] __lock_acquire+0x15cb/0x1b50 [ 3290.970487] lock_acquire+0x26c/0x300 [ 3290.970493] ? flush_workqueue+0x87/0x4a0 [ 3290.970503] ? __raw_spin_lock_init+0x3b/0x60 [ 3290.970510] ? lockdep_init_map_type+0x58/0x240 [ 3290.970519] flush_workqueue+0xae/0x4a0 [ 3290.970526] ? flush_workqueue+0x87/0x4a0 [ 3290.970544] ? drain_workqueue+0xa1/0x130 [ 3290.970552] drain_workqueue+0xa1/0x130 [ 3290.970561] destroy_workqueue+0x34/0x1f0 [ 3290.970572] hci_release_dev+0x49/0x180 [bluetooth] [ 3290.970624] bt_host_release+0x1d/0x30 [bluetooth] [ 3290.970687] device_release+0x33/0x90 [ 3290.970695] kobject_release+0x63/0x160 [ 3290.970705] dpm_resume+0x164/0x3e0 [ 3290.970710] ? dpm_resume_early+0x251/0x3b0 [ 3290.970718] dpm_resume_end+0xd/0x20 [ 3290.970723] suspend_devices_and_enter+0x1a4/0xba0 [ 3290.970737] pm_suspend+0x26b/0x310 [ 3290.970746] state_store+0x42/0x90 [ 3290.970755] kernfs_fop_write_iter+0x135/0x1b0 [ 3290.970764] new_sync_write+0x125/0x1c0 [ 3290.970777] vfs_write+0x360/0x3c0 [ 3290.970785] ksys_write+0xa7/0xe0 [ 3290.970794] do_syscall_64+0x3a/0x80 [ 3290.970803] entry_SYSCALL_64_after_hwframe+0x44/0xae [ 3290.970811] RIP: 0033:0x7f41b1328164 [ 3290.970819] Code: 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b7 0f 1f 80 00 00 00 00 8b 05 4a d2 2c 00 48 63 ff 85 c0 75 13 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 54 f3 c3 66 90 55 53 48 89 d5 48 89 f3 48 83 [ 3290.970824] RSP: 002b:00007ffe6ae21b28 EFLAGS: 00000246 ORIG_RAX: 0000000000000001 [ 3290.970831] RAX: ffffffffffffffda RBX: 0000000000000004 RCX: 00007f41b1328164 [ 3290.970836] RDX: 0000000000000004 RSI: 000055965e651070 RDI: 0000000000000004 [ 3290.970839] RBP: 000055965e651070 R08: 000055965e64f390 R09: 00007f41b1e3d1c0 [ 3290.970843] R10: 000000000000000a R11: 0000000000000246 R12: 0000000000000004 [ 3290.970846] R13: 0000000000000001 R14: 000055965e64f2b0 R15: 0000000000000004 Cc: All applicable <stable@vger.kernel.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
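A hedged fragment showing the rule the rework enforces (loop and list handling omitted): dpm_list_mtx is dropped around every put_device(), because the release routine that may run there can take other locks.

get_device(dev);
mutex_unlock(&dpm_list_mtx);

/* ... resume/suspend work on dev without the list lock ... */

put_device(dev);               /* may invoke the release routine; lock not held */
mutex_lock(&dpm_list_mtx);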
2021-11-17video: backlight: Drop maximum brightness override for brightness zeroMarek Vasut1-6/+0
commit 33a5471f8da976bf271a1ebbd6b9d163cb0cb6aa upstream. The note in c2adda27d202f ("video: backlight: Add of_find_backlight helper in backlight.c") says that gpio-backlight uses brightness as power state. This has been fixed since in ec665b756e6f7 ("backlight: gpio-backlight: Correct initial power state handling") and other backlight drivers do not require this workaround. Drop the workaround. This fixes the case where e.g. pwm-backlight can perfectly well be set to brightness 0 on boot in DT, which without this patch leads to the display brightness to be max instead of off. Fixes: c2adda27d202f ("video: backlight: Add of_find_backlight helper in backlight.c") Cc: <stable@vger.kernel.org> # 5.4+ Cc: <stable@vger.kernel.org> # 4.19.x: ec665b756e6f7: backlight: gpio-backlight: Correct initial power state handling Signed-off-by: Marek Vasut <marex@denx.de> Acked-by: Noralf Trønnes <noralf@tronnes.org> Reviewed-by: Daniel Thompson <daniel.thompson@linaro.org> Signed-off-by: Lee Jones <lee.jones@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-17mfd: dln2: Add cell for initializing DLN2 ADCJack Andersen1-0/+18
commit 313c84b5ae4104e48c661d5d706f9f4c425fd50f upstream. This patch extends the DLN2 driver; adding cell for adc_dln2 module. The original patch[1] fell through the cracks when the driver was added so ADC has never actually been usable. That patch did not have ACPI support which was added in v5.9, so the oldest supported version this current patch can be backported to is 5.10. [1] https://www.spinics.net/lists/linux-iio/msg33975.html Cc: <stable@vger.kernel.org> # 5.10+ Signed-off-by: Jack Andersen <jackoalan@gmail.com> Signed-off-by: Noralf Trønnes <noralf@tronnes.org> Signed-off-by: Lee Jones <lee.jones@linaro.org> Link: https://lore.kernel.org/r/20211018112541.25466-1-noralf@tronnes.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-17mm, oom: do not trigger out_of_memory from the #PFMichal Hocko1-14/+8
commit 60e2793d440a3ec95abb5d6d4fc034a4b480472d upstream. Any allocation failure during the #PF path will return with VM_FAULT_OOM which in turn results in pagefault_out_of_memory. This can happen for 2 different reasons. a) Memcg is out of memory and we rely on mem_cgroup_oom_synchronize to perform the memcg OOM handling or b) normal allocation fails. The latter is quite problematic because allocation paths already trigger out_of_memory and the page allocator tries really hard to not fail allocations. Anyway, if the OOM killer has been already invoked there is no reason to invoke it again from the #PF path. Especially when the OOM condition might be gone by that time and we have no way to find out other than allocate. Moreover if the allocation failed and the OOM killer hasn't been invoked then we are unlikely to do the right thing from the #PF context because we have already lost the allocation context and restictions and therefore might oom kill a task from a different NUMA domain. This all suggests that there is no legitimate reason to trigger out_of_memory from pagefault_out_of_memory so drop it. Just to be sure that no #PF path returns with VM_FAULT_OOM without allocation print a warning that this is happening before we restart the #PF. [VvS: #PF allocation can hit into limit of cgroup v1 kmem controller. This is a local problem related to memcg, however, it causes unnecessary global OOM kills that are repeated over and over again and escalate into a real disaster. This has been broken since kmem accounting has been introduced for cgroup v1 (3.8). There was no kmem specific reclaim for the separate limit so the only way to handle kmem hard limit was to return with ENOMEM. In upstream the problem will be fixed by removing the outdated kmem limit, however stable and LTS kernels cannot do it and are still affected. This patch fixes the problem and should be backported into stable/LTS.] Link: https://lkml.kernel.org/r/f5fd8dd8-0ad4-c524-5f65-920b01972a42@virtuozzo.com Signed-off-by: Michal Hocko <mhocko@suse.com> Signed-off-by: Vasily Averin <vvs@virtuozzo.com> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Mel Gorman <mgorman@techsingularity.net> Cc: Roman Gushchin <guro@fb.com> Cc: Shakeel Butt <shakeelb@google.com> Cc: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp> Cc: Uladzislau Rezki <urezki@gmail.com> Cc: Vladimir Davydov <vdavydov.dev@gmail.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-17mm, oom: pagefault_out_of_memory: don't force global OOM for dying tasksVasily Averin1-0/+3
commit 0b28179a6138a5edd9d82ad2687c05b3773c387b upstream. Patch series "memcg: prohibit unconditional exceeding the limit of dying tasks", v3. Memory cgroup charging allows killed or exiting tasks to exceed the hard limit. It can be misused and allowed to trigger global OOM from inside a memcg-limited container. On the other hand, if memcg fails an allocation called from inside the #PF handler, it triggers global OOM from inside pagefault_out_of_memory(). To prevent these problems, this patchset: (a) removes execution of out_of_memory() from pagefault_out_of_memory(), because nobody can explain why it is necessary; (b) allows memcg to fail allocation of dying/killed tasks. This patch (of 3): Any allocation failure during the #PF path will return with VM_FAULT_OOM, which in turn results in pagefault_out_of_memory(), which in turn executes out_of_memory() and can kill a random task. An allocation might fail when the current task is the OOM victim and there are no memory reserves left. The OOM killer is already handled at the page allocator level for the global OOM and at the charging level for the memcg one. Both have much more information about the scope of the allocation/charge request. This means that either the OOM killer has been invoked properly and didn't lead to allocation success, or it has been skipped because it couldn't have been invoked. In both cases triggering it from here is pointless and even harmful. It makes much more sense to let the killed task die rather than to wake up an eternally hungry oom-killer and send him to choose a fatter victim for breakfast. Link: https://lkml.kernel.org/r/0828a149-786e-7c06-b70a-52d086818ea3@virtuozzo.com Signed-off-by: Vasily Averin <vvs@virtuozzo.com> Suggested-by: Michal Hocko <mhocko@suse.com> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Mel Gorman <mgorman@techsingularity.net> Cc: Roman Gushchin <guro@fb.com> Cc: Shakeel Butt <shakeelb@google.com> Cc: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp> Cc: Uladzislau Rezki <urezki@gmail.com> Cc: Vladimir Davydov <vdavydov.dev@gmail.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
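A minimal sketch of the check this patch adds (placement and the surrounding code are approximations): if the faulting task already has a fatal signal pending, skip any further OOM handling and let it exit.

  /* Assumed shape of pagefault_out_of_memory() with the new early return. */
  void pagefault_out_of_memory(void)
  {
  	/* Memcg OOM (if any) is synchronized first, as before. */
  	if (mem_cgroup_oom_synchronize(true))
  		return;

  	/* New: the task is dying anyway; do not wake the global OOM killer
  	 * on its behalf, just let it exit and release its memory. */
  	if (fatal_signal_pending(current))
  		return;

  	/* ... remaining global handling ... */
  }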
2021-11-17io-wq: serialize hash clear with wakeupJens Axboe1-3/+16
commit d3e3c102d107bb84251455a298cf475f24bab995 upstream. We need to ensure that we serialize the stalled and hash bits with the wait_queue wait handler, or we could be racing with someone modifying the hashed state after we find it busy, but before we then give up and wait for it to be cleared. This can cause random delays or stalls when handling buffered writes for many files, where some of these files cause hash collisions between the worker threads. Cc: stable@vger.kernel.org Reported-by: Daniel Black <daniel@mariadb.org> Fixes: e941894eae31 ("io-wq: make buffered file write hashed work map per-ctx") Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
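A loose illustration of the serialization idea (structure and field names are assumed, not the exact patch): after queueing itself on the hash waitqueue, the waiter re-checks the hash bit under the waitqueue lock, the same lock the wake side holds when clearing the bit, so a concurrent clear-and-wake cannot be missed.

  /* Returns true if the hash bit is still set and the caller should wait. */
  static bool io_wait_on_hash(struct io_wqe *wqe, unsigned int stall_hash)
  {
  	bool still_busy = true;

  	spin_lock_irq(&wqe->wq->hash->wait.lock);
  	if (list_empty(&wqe->wait.entry))
  		__add_wait_queue(&wqe->wq->hash->wait, &wqe->wait);
  	/* Re-check under the lock: the bit may have been cleared already. */
  	if (!test_bit(stall_hash, &wqe->wq->hash->map)) {
  		__set_current_state(TASK_RUNNING);
  		list_del_init(&wqe->wait.entry);
  		still_busy = false;
  	}
  	spin_unlock_irq(&wqe->wq->hash->wait.lock);

  	return still_busy;
  }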
2021-11-17io-wq: fix queue stalling raceJens Axboe1-8/+7
commit 0242f6426ea78fbe3933b44f8c55ae93ec37f6cc upstream. We need to set the stalled bit early, before we drop the lock for adding us to the stall hash queue. If not, then we can race with new work being queued between adding us to the stall hash and io_worker_handle_work() marking us stalled. Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
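A sketch of the corrected ordering only (names and locking details assumed): the stalled flag is set while wqe->lock is still held, and only then is the lock dropped to wait on the stall hash, so work queued in that window still observes the stalled state.

  static void io_wqe_stall_on_hash(struct io_wqe *wqe, unsigned int stall_hash)
  	__must_hold(wqe->lock)
  {
  	/* Set the stalled bit first, before dropping the lock. */
  	wqe->flags |= IO_WQE_FLAG_STALLED;

  	raw_spin_unlock(&wqe->lock);
  	io_wait_on_hash(wqe, stall_hash);	/* may sleep on the hash waitqueue */
  	raw_spin_lock(&wqe->lock);
  }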
2021-11-17io-wq: ensure that hash wait lock is IRQ disablingJens Axboe1-2/+2
commit 08bdbd39b58474d762242e1fadb7f2eb9ffcca71 upstream. A previous commit removed the IRQ safety of the worker and wqe locks, but that left one spot where the hash wait lock is now taken without IRQs already being disabled. Ensure that we use the right locking variant for the hashed waitqueue lock. Fixes: a9a4aa9fbfc5 ("io-wq: wqe and worker locks no longer need to be IRQ safe") Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
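A small sketch of the locking-variant fix (struct and field names assumed): the hashed waitqueue can also be reached from contexts with IRQs enabled, so its lock needs the _irq variant even though the wqe/worker locks no longer do.

  static void io_wqe_hash_wait_add(struct io_wq_hash *hash, struct io_wqe *wqe)
  {
  	/* Was a plain spin_lock(); no longer safe once the surrounding code
  	 * stopped running with IRQs disabled. */
  	spin_lock_irq(&hash->wait.lock);
  	if (list_empty(&wqe->wait.entry))
  		__add_wait_queue(&hash->wait, &wqe->wait);
  	spin_unlock_irq(&hash->wait.lock);
  }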
2021-11-17memcg: prohibit unconditional exceeding the limit of dying tasksVasily Averin1-19/+8
commit a4ebf1b6ca1e011289677239a2a361fde4a88076 upstream. Memory cgroup charging allows killed or exiting tasks to exceed the hard limit. It is assumed that the amount of memory charged by those tasks is bounded and most of the memory will get released while the task is exiting. This resembles the heuristic for the global OOM situation when tasks get access to memory reserves. There is no global memory shortage at the memcg level, so the memcg heuristic is more relaxed. The above assumption is overly optimistic though. E.g. vmalloc can scale to really large requests and the heuristic would allow that. We used to have an early break in the vmalloc allocator for killed tasks but this has been reverted by commit b8c8a338f75e ("Revert "vmalloc: back off when the current task is killed""). There are likely other similar code paths which do not check for fatal signals in an allocation&charge loop. Also there are some kernel objects charged to a memcg which are not bound to a process lifetime. It has been observed that it is not really hard to trigger these bypasses and cause a global OOM situation. One potential way to address these runaways would be to limit the amount of excess (similar to the global OOM with limited oom reserves). This is certainly possible but it is not really clear how much of an excess is desirable and still protects from global OOMs as that would have to consider the overall memcg configuration. This patch is addressing the problem by removing the heuristic altogether. Bypass is only allowed for requests which either cannot fail or where failure is not desirable while the excess should still be limited (e.g. atomic requests). Implementation-wise, a killed or dying task fails to charge if it has passed the OOM killer stage. That should give all forms of reclaim a chance to restore the limit before the failure (ENOMEM) and tell the caller to back off. In addition, this patch renames the should_force_charge() helper to task_is_dying() because now its use is not associated with forced charging. This patch depends on pagefault_out_of_memory() to not trigger out_of_memory(), because then a memcg failure can unwind to VM_FAULT_OOM and cause a global OOM kill. Link: https://lkml.kernel.org/r/8f5cebbb-06da-4902-91f0-6566fc4b4203@virtuozzo.com Signed-off-by: Vasily Averin <vvs@virtuozzo.com> Suggested-by: Michal Hocko <mhocko@suse.com> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Vladimir Davydov <vdavydov.dev@gmail.com> Cc: Roman Gushchin <guro@fb.com> Cc: Uladzislau Rezki <urezki@gmail.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Shakeel Butt <shakeelb@google.com> Cc: Mel Gorman <mgorman@techsingularity.net> Cc: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
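A sketch of the shape of the change (simplified from the memcg charge slow path; the passed_oom bookkeeping and the wrapper function are illustrative): a dying task only fails its charge with ENOMEM after the OOM killer stage has been passed, instead of bypassing the limit unconditionally.

  /* Renamed helper: detecting a dying task, no longer tied to forced charging. */
  static bool task_is_dying(void)
  {
  	return tsk_is_oom_victim(current) || fatal_signal_pending(current) ||
  	       (current->flags & PF_EXITING);
  }

  /* Illustrative tail of the charge retry loop. */
  static int charge_retry_or_fail(bool passed_oom)
  {
  	/* Only fail once reclaim and the memcg OOM killer had their chance. */
  	if (passed_oom && task_is_dying())
  		return -ENOMEM;

  	return -EAGAIN;	/* caller retries the charge */
  }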
2021-11-17mm/filemap.c: remove bogus VM_BUG_ONMatthew Wilcox (Oracle)1-1/+0
commit d417b49fff3e2f21043c834841e8623a6098741d upstream. It is not safe to check page->index without holding the page lock. It can be changed if the page is moved between the swap cache and the page cache for a shmem file, for example. There is a VM_BUG_ON below which checks page->index is correct after taking the page lock. Link: https://lkml.kernel.org/r/20210818144932.940640-1-willy@infradead.org Fixes: 5c211ba29deb ("mm: add and use find_lock_entries") Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org> Reported-by: <syzbot+c87be4f669d920c76330@syzkaller.appspotmail.com> Cc: Hugh Dickins <hughd@google.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
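A small illustration of why the check was bogus (simplified; this is not the find_lock_entries() code itself): page->index is only stable while the page lock is held, so any index assertion belongs after lock_page().

  static bool lock_page_for_index(struct page *page, pgoff_t index)
  {
  	/* Checking page->index here, before locking, would be racy: the page
  	 * can move between the page cache and the swap cache (e.g. shmem). */
  	lock_page(page);

  	if (page->index != index) {	/* safe now: the page is locked */
  		unlock_page(page);
  		return false;
  	}
  	return true;	/* caller is responsible for unlock_page() */
  }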
2021-11-179p/net: fix missing error check in p9_check_errorsDominique Martinet1-0/+2
commit 27eb4c3144f7a5ebef3c9a261d80cb3e1fa784dc upstream. Link: https://lkml.kernel.org/r/99338965-d36c-886e-cd0e-1d8fff2b4746@gmail.com Reported-by: syzbot+06472778c97ed94af66d@syzkaller.appspotmail.com Cc: stable@vger.kernel.org Signed-off-by: Dominique Martinet <asmadeus@codewreck.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-11-17bpf, cgroup: Assign cgroup in cgroup_sk_alloc when called from interruptDaniel Borkmann1-5/+12
[ Upstream commit 78cc316e9583067884eb8bd154301dc1e9ee945c ] If cgroup_sk_alloc() is called from interrupt context, then just assign the root cgroup to skcd->cgroup. Prior to commit 8520e224f547 ("bpf, cgroups: Fix cgroup v2 fallback on v1/v2 mixed mode") we would just return, and later on in sock_cgroup_ptr(), we were NULL-testing the cgroup in the fast path, and iff indeed NULL, returning the root cgroup (v ?: &cgrp_dfl_root.cgrp). Rather than re-adding the NULL test to the fast path, we can just assign it once from cgroup_sk_alloc(), given v1/v2 handling has been simplified. The migration from NULL-testing and returning &cgrp_dfl_root.cgrp to assigning &cgrp_dfl_root.cgrp directly does /not/ change behavior for callers of sock_cgroup_ptr(). syzkaller was able to trigger a splat in the legacy netrom code base, where the RX handler in nr_rx_frame() calls nr_make_new(), which calls sk_alloc() and therefore cgroup_sk_alloc() under the in_interrupt() condition. This leaves skcd->cgroup NULL, which cgroup_sk_free() then trips over, given it expects a non-NULL object. There are a few other candidates aside from netrom which have a similar pattern where, in their accept-like implementation, they just call sk_alloc() and thus cgroup_sk_alloc() instead of sk_clone_lock() with the corresponding cgroup_sk_clone(), which then inherits the cgroup from the parent socket. None of them are related to core protocols that BPF cgroup programs run from. However, in the future, they should follow suit and implement a similar inheritance mechanism. Additionally, with a !CONFIG_CGROUP_NET_PRIO and !CONFIG_CGROUP_NET_CLASSID configuration, the same issue was also exposed prior to 8520e224f547 due to commit e876ecc67db8 ("cgroup: memcg: net: do not associate sock with unrelated cgroup") which added the early in_interrupt() return back then. Fixes: 8520e224f547 ("bpf, cgroups: Fix cgroup v2 fallback on v1/v2 mixed mode") Fixes: e876ecc67db8 ("cgroup: memcg: net: do not associate sock with unrelated cgroup") Reported-by: syzbot+df709157a4ecaf192b03@syzkaller.appspotmail.com Reported-by: syzbot+533f389d4026d86a2a95@syzkaller.appspotmail.com Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Tested-by: syzbot+df709157a4ecaf192b03@syzkaller.appspotmail.com Tested-by: syzbot+533f389d4026d86a2a95@syzkaller.appspotmail.com Acked-by: Tejun Heo <tj@kernel.org> Link: https://lore.kernel.org/bpf/20210927123921.21535-1-daniel@iogearbox.net Signed-off-by: Sasha Levin <sashal@kernel.org>
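A simplified approximation of the fix in cgroup_sk_alloc() (the normal, non-interrupt branch is heavily reduced; upstream pins the task's cgroup via a tryget-live loop): from interrupt context, the default root cgroup is pinned and assigned instead of leaving skcd->cgroup NULL.

  void cgroup_sk_alloc(struct sock_cgroup_data *skcd)
  {
  	struct cgroup *cgroup;

  	rcu_read_lock();
  	/* Don't associate the sock with the unrelated interrupted task's cgroup. */
  	if (in_interrupt()) {
  		cgroup = &cgrp_dfl_root.cgrp;	/* same fallback sock_cgroup_ptr() used to apply */
  		cgroup_get(cgroup);
  	} else {
  		cgroup = task_dfl_cgroup(current);
  		cgroup_get(cgroup);	/* simplified: upstream uses a tryget loop here */
  	}
  	skcd->cgroup = cgroup;
  	rcu_read_unlock();
  }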
2021-11-17bpf, cgroups: Fix cgroup v2 fallback on v1/v2 mixed modeDaniel Borkmann5-155/+41
[ Upstream commit 8520e224f547cd070c7c8f97b1fc6d58cff7ccaa ] Fix cgroup v1 interference when non-root cgroup v2 BPF programs are used. Back in the day, commit bd1060a1d671 ("sock, cgroup: add sock->sk_cgroup") embedded per-socket cgroup information into sock->sk_cgrp_data and, in order to save 8 bytes in struct sock, made both mutually exclusive, that is, when cgroup v1 socket tagging (e.g. net_cls/net_prio) is used, then cgroup v2 falls back to the root cgroup in sock_cgroup_ptr() (&cgrp_dfl_root.cgrp). The assumption made was "there is no reason to mix the two and this is in line with how legacy and v2 compatibility is handled" as stated in bd1060a1d671. However, with Kubernetes more widely supporting cgroups v2 as well nowadays, this assumption no longer holds, and the possibility of the v1/v2 mixed mode with the v2 root fallback being hit becomes a real security issue. Many of the cgroup v2 BPF programs are also used for policy enforcement; just to pick _one_ example, they can programmatically deny socket-related system calls like connect(2) or bind(2). A v2 root fallback would implicitly cause a policy bypass for the affected Pods. In production environments, we have recently seen this case due to various circumstances: i) a different 3rd party agent and/or ii) a container runtime such as [0] in the user's environment configuring legacy cgroup v1 net_cls tags, which triggered the implicitly mentioned root fallback. Another case is Kubernetes projects like kind [1] which create Kubernetes nodes in a container and also add cgroup namespaces to the mix, meaning programs which are attached to the cgroup v2 root of the cgroup namespace get attached to a non-root cgroup v2 path from the init namespace's point of view. And the latter's root is out of reach for agents on a kind Kubernetes node to configure. Meaning, any entity on the node setting a cgroup v1 net_cls tag will trigger the bypass despite cgroup v2 BPF programs attached to the namespace root. Generally, this mutual exclusiveness does not hold anymore in today's user environments and makes cgroup v2 usage from the BPF side fragile and unreliable. This fix adds a proper struct cgroup pointer for the cgroup v2 case to struct sock_cgroup_data in order to address these issues; this implicitly also fixes the tradeoffs made back then with regard to races and refcount leaks as stated in bd1060a1d671, and removes the fallback, so that cgroup v2 BPF programs always operate as expected. [0] https://github.com/nestybox/sysbox/ [1] https://kind.sigs.k8s.io/ Fixes: bd1060a1d671 ("sock, cgroup: add sock->sk_cgroup") Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Stanislav Fomichev <sdf@google.com> Acked-by: Tejun Heo <tj@kernel.org> Link: https://lore.kernel.org/bpf/20210913230759.2313-1-daniel@iogearbox.net Signed-off-by: Sasha Levin <sashal@kernel.org>
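The shape of the data-structure change, heavily simplified (the exact layout, config guards, and accessor bodies may differ from upstream): sock_cgroup_data carries an explicit cgroup v2 pointer alongside the v1 fields, so sock_cgroup_ptr() no longer needs the root fallback.

  struct sock_cgroup_data {
  	struct cgroup	*cgroup;	/* cgroup v2: always set, no fallback */
  #ifdef CONFIG_CGROUP_NET_CLASSID
  	u32		classid;	/* cgroup v1 net_cls */
  #endif
  #ifdef CONFIG_CGROUP_NET_PRIO
  	u16		prioidx;	/* cgroup v1 net_prio */
  #endif
  };

  static inline struct cgroup *sock_cgroup_ptr(struct sock_cgroup_data *skcd)
  {
  	return skcd->cgroup;	/* previously: (v ?: &cgrp_dfl_root.cgrp) */
  }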
2021-11-17net, neigh: Enable state migration between NUD_PERMANENT and NTF_USEDaniel Borkmann2-9/+14
[ Upstream commit 3dc20f4762c62d3b3f0940644881ed818aa7b2f5 ] Currently, it is not possible to migrate a neighbor entry between NUD_PERMANENT state and NTF_USE flag with a dynamic NUD state from a user space control plane. Similarly, it is not possible to add/remove the NTF_EXT_LEARNED flag from an existing neighbor entry in combination with the NTF_USE flag. This is due to the latter directly calling into neigh_event_send() without any metadata updates as happens in __neigh_update(). Thus, to enable this use case, extend the latter with a NEIGH_UPDATE_F_USE flag where we break the NUD_PERMANENT state in particular so that a later neigh_event_send() is able to re-resolve a neighbor entry. Before fix, NUD_PERMANENT -> NUD_* & NTF_USE: # ./ip/ip n replace 192.168.178.30 dev enp5s0 lladdr f4:8c:50:5e:71:9a # ./ip/ip n 192.168.178.30 dev enp5s0 lladdr f4:8c:50:5e:71:9a PERMANENT [...] # ./ip/ip n replace 192.168.178.30 dev enp5s0 use extern_learn # ./ip/ip n 192.168.178.30 dev enp5s0 lladdr f4:8c:50:5e:71:9a PERMANENT [...] As can be seen, despite the admin-triggered replace, the entry remains in the NUD_PERMANENT state. After fix, NUD_PERMANENT -> NUD_* & NTF_USE: # ./ip/ip n replace 192.168.178.30 dev enp5s0 lladdr f4:8c:50:5e:71:9a # ./ip/ip n 192.168.178.30 dev enp5s0 lladdr f4:8c:50:5e:71:9a PERMANENT [...] # ./ip/ip n replace 192.168.178.30 dev enp5s0 use extern_learn # ./ip/ip n 192.168.178.30 dev enp5s0 lladdr f4:8c:50:5e:71:9a extern_learn REACHABLE [...] # ./ip/ip n 192.168.178.30 dev enp5s0 lladdr f4:8c:50:5e:71:9a extern_learn STALE [...] # ./ip/ip n replace 192.168.178.30 dev enp5s0 lladdr f4:8c:50:5e:71:9a # ./ip/ip n 192.168.178.30 dev enp5s0 lladdr f4:8c:50:5e:71:9a PERMANENT [...] After the fix, the admin-triggered replace switches to a dynamic state from the NTF_USE flag which triggered a new neighbor resolution. Likewise, we can transition back from there, if needed, into NUD_PERMANENT. Similar before/after behavior can be observed for the transitions below: Before fix, NTF_USE -> NTF_USE | NTF_EXT_LEARNED -> NTF_USE: # ./ip/ip n replace 192.168.178.30 dev enp5s0 use # ./ip/ip n 192.168.178.30 dev enp5s0 lladdr f4:8c:50:5e:71:9a REACHABLE [...] # ./ip/ip n replace 192.168.178.30 dev enp5s0 use extern_learn # ./ip/ip n 192.168.178.30 dev enp5s0 lladdr f4:8c:50:5e:71:9a REACHABLE [...] After fix, NTF_USE -> NTF_USE | NTF_EXT_LEARNED -> NTF_USE: # ./ip/ip n replace 192.168.178.30 dev enp5s0 use # ./ip/ip n 192.168.178.30 dev enp5s0 lladdr f4:8c:50:5e:71:9a REACHABLE [...] # ./ip/ip n replace 192.168.178.30 dev enp5s0 use extern_learn # ./ip/ip n 192.168.178.30 dev enp5s0 lladdr f4:8c:50:5e:71:9a extern_learn REACHABLE [...] # ./ip/ip n replace 192.168.178.30 dev enp5s0 use # ./ip/ip n 192.168.178.30 dev enp5s0 lladdr f4:8c:50:5e:71:9a REACHABLE [..] Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Roopa Prabhu <roopa@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sashal@kernel.org>
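A rough sketch of the netlink-side handling this enables, wrapped in a hypothetical helper for illustration (the real change lives in neigh_add() and __neigh_update(), and its details differ): NTF_USE is mapped to a NEIGH_UPDATE_F_USE flag so the update path can break NUD_PERMANENT before re-resolving the entry.

  static int neigh_apply_ntf_use(struct neighbour *neigh, const u8 *lladdr,
  			       struct ndmsg *ndm, u32 flags, u32 portid,
  			       struct netlink_ext_ack *extack)
  {
  	int err;

  	if (ndm->ndm_flags & NTF_USE)
  		flags |= NEIGH_UPDATE_F_USE;	/* allow leaving NUD_PERMANENT */

  	err = __neigh_update(neigh, lladdr, ndm->ndm_state, flags, portid, extack);
  	if (!err && (ndm->ndm_flags & NTF_USE))
  		neigh_event_send(neigh, NULL);	/* now able to re-resolve the entry */

  	return err;
  }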
2021-11-17dmaengine: bestcomm: fix system boot lockupsAnatolij Gustschin4-16/+16
commit adec566b05288f2787a1f88dbaf77ed8b0c644fa upstream. memset() and memcpy() on an MMIO region like this one result in a lockup at startup on the mpc5200 platform (since this first happens during probing of the ATA and Ethernet drivers). Use memset_io() and memcpy_toio() instead. Fixes: 2f9ea1bde0d1 ("bestcomm: core bestcomm support for Freescale MPC5200") Cc: stable@vger.kernel.org # v5.14+ Signed-off-by: Anatolij Gustschin <agust@denx.de> Link: https://lore.kernel.org/r/20211014094012.21286-1-agust@denx.de Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
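A minimal illustration of the substitution (the helper name and arguments are assumptions, not the driver's actual functions): bestcomm task memory is ioremap()ed, so the _io accessors must be used instead of plain memset()/memcpy().

  #include <linux/io.h>

  /* Hypothetical helper: clear and load a bestcomm task image in MMIO space. */
  static void bcom_load_task_image(void __iomem *task_mem, size_t mem_size,
  				 const void *image, size_t image_size)
  {
  	memset_io(task_mem, 0, mem_size);		/* was: memset() */
  	memcpy_toio(task_mem, image, image_size);	/* was: memcpy() */
  }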
2021-11-17dmaengine: ti: k3-udma: Set r/tchan or rflow to NULL if request failKishon Vijay Abraham I1-4/+20
commit eb91224e47ec33a0a32c9be0ec0fcb3433e555fd upstream. udma_get_*() checks if rchan/tchan/rflow is already allocated by checking if it has a non-NULL value. For the error cases, rchan/tchan/rflow will hold an error value, and udma_get_*() considers this as already allocated (PASS) since the error values are non-NULL. This results in a NULL pointer dereference error while dereferencing rchan/tchan/rflow. Reset the value of rchan/tchan/rflow to NULL if a channel request fails. CC: stable@vger.kernel.org Acked-by: Peter Ujfalusi <peter.ujfalusi@gmail.com> Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com> Link: https://lore.kernel.org/r/20211031032411.27235-3-kishon@ti.com Signed-off-by: Vinod Koul <vkoul@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
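A sketch of the pattern being fixed (names follow the driver only loosely and the reserve call is a placeholder): an ERR_PTR left behind makes the next "already allocated" check pass, so the pointer is reset to NULL when the request fails.

  static int udma_get_tchan(struct udma_chan *uc)
  {
  	int ret;

  	if (uc->tchan)	/* any non-NULL value, even an ERR_PTR, looked "allocated" */
  		return 0;

  	uc->tchan = __udma_reserve_tchan(uc);	/* placeholder for the reserve call */
  	if (IS_ERR(uc->tchan)) {
  		ret = PTR_ERR(uc->tchan);
  		uc->tchan = NULL;	/* the fix: don't leave an error value behind */
  		return ret;
  	}

  	return 0;
  }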