path: root/drivers
2021-10-01  fixup! ADC linux driver for AST2600  (Jae Hyun Yoo, 1 file, -87/+12)
Revert d17922945eeaded06b6424a8e8d666d211d57dd4. Signed-off-by: Jae Hyun Yoo <jae.hyun.yoo@linux.intel.com>
2021-10-01  fixup! Igore 0x3FF in aspeed_adc driver  (Jae Hyun Yoo, 1 file, -15/+0)
Revert 0256271cd3840545a13728b07e1bc6baf0cd75db. Signed-off-by: Jae Hyun Yoo <jae.hyun.yoo@linux.intel.com>
2021-10-01  ASD Add Shift IR/DR from Exit IR/DR for HW2 JTAG xfers  (Ernesto Corona, 1 file, -0/+1)
Before this change, a JTAG HW2 mode shift operation failed to process a shift request when the current state was Exit IR/DR for the same type of xfer (SHIFTIR/SHIFTDR respectively). After this change, shift operations in HW2 mode are supported from the following JTAG states:

For SHIFTDR: JTAG_STATE_SHIFTDR, JTAG_STATE_IDLE, JTAG_STATE_TLRESET, JTAG_STATE_PAUSEDR, JTAG_STATE_EXIT1DR and JTAG_STATE_EXIT1IR
For SHIFTIR: JTAG_STATE_SHIFTIR, JTAG_STATE_IDLE, JTAG_STATE_TLRESET, JTAG_STATE_PAUSEIR, JTAG_STATE_EXIT1IR and JTAG_STATE_EXIT1DR

Test:
ASD Sanity (SW mode) finished successfully (SPR)
ASD Sanity (HW mode) finished successfully (SPR)
Cscripts (SW mode) finished successfully (SPR)
Cscripts (HW mode) finished successfully (SPR)

Signed-off-by: Ernesto Corona <ernesto.corona@intel.com>
Change-Id: Ide878b8986639c63e41c2bc360e06a261cdffee5
2021-09-17  fixup! Enable GPIOE0 and GPIOE2 pass-through by default  (Jason M. Bills, 1 file, -10/+0)
This reverts the pass-through workaround from the kernel. It is no longer needed for the AST2600 and will be applied as a patch for the AST2500. Change-Id: I9e4cc712a31e2d8da2ebdda7c12430da4fa4185d Signed-off-by: Jason M. Bills <jason.m.bills@linux.intel.com>
2021-09-13  fixup! i2c: Add mux hold/unhold msg types  (Jae Hyun Yoo, 2 files, -2/+2)
Fix error checking logic for unholding mux cases since i2c_smbus_xfer() and master_xfer() can return a positive value as a transferred data count. Signed-off-by: Jae Hyun Yoo <jae.hyun.yoo@intel.com> Change-Id: I2a5f59977949cb6300936a4b6fa054e940e4cde7
2021-09-13  gpio: gpio-aspeed-sgpio: Fix wrong hwirq in irq handler.  (Steven Lee, 1 file, -1/+1)
The current hwirq is calculated based on the old GPIO pin order (input GPIO range is from 0 to ngpios - 1). It should be calculated based on the current GPIO input pin order (input GPIOs are 0, 2, 4, ..., (ngpios - 1) * 2). Signed-off-by: Steven Lee <steven_lee@aspeedtech.com>
2021-09-13  fixup! Add ASPEED SGPIO driver  (Jae Hyun Yoo, 3 files, -713/+0)
This reverts commit 572ca1d8367c592535641e55cf74bab5d3171c20 to use upstream sgpio driver. Signed-off-by: Jae Hyun Yoo <jae.hyun.yoo@intel.com>
2021-09-13  gpio: gpio-aspeed-sgpio: Return error if ngpios is not multiple of 8.  (Steven Lee, 1 file, -0/+4)
Add an else-if condition in the probe function to check whether ngpios is a multiple of 8. Per the AST datasheet, the number of available serial GPIO pins in the Serial GPIO Configuration Register must be expressed in bytes (n bytes); for instance, n = 1 means the AST SoC supports 8 GPIO pins. Signed-off-by: Steven Lee <steven_lee@aspeedtech.com> Reviewed-by: Andrew Jeffery <andrew@aj.id.au> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Bartosz Golaszewski <bgolaszewski@baylibre.com>
2021-09-13  gpio: gpio-aspeed-sgpio: Use generic device property APIs  (Steven Lee, 1 file, -2/+2)
Replace all of_property_read_u32() with device_property_read_u32(). Signed-off-by: Steven Lee <steven_lee@aspeedtech.com> Acked-by: Andrew Jeffery <andrew@aj.id.au> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Bartosz Golaszewski <bgolaszewski@baylibre.com>
2021-09-13  gpio: gpio-aspeed-sgpio: Move irq_chip to aspeed-sgpio struct  (Steven Lee, 1 file, -9/+8)
The current design initializes irq->chip from a global irqchip struct, which causes multiple sgpio devices to use the same irq_chip. The patch moves irq_chip into the aspeed_sgpio struct so that each device initializes its irq_chip from its own private gpio struct. Signed-off-by: Steven Lee <steven_lee@aspeedtech.com> Reviewed-by: Andrew Jeffery <andrew@aj.id.au> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Bartosz Golaszewski <bgolaszewski@baylibre.com>
2021-09-13  gpio: gpio-aspeed-sgpio: Add set_config function  (Steven Lee, 1 file, -4/+50)
The AST SoC can retain pin state across a WDT reset. The patch adds a set_config function for handling the sgpio reset tolerance register. Signed-off-by: Steven Lee <steven_lee@aspeedtech.com> Reviewed-by: Andrew Jeffery <andrew@aj.id.au> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Bartosz Golaszewski <bgolaszewski@baylibre.com>
2021-09-13  gpio: gpio-aspeed-sgpio: Add AST2600 sgpio support  (Steven Lee, 1 file, -54/+47)
The maximum number of gpio pins of the SoC is hardcoded as 80 and the gpio pin count mask for the GPIO Configuration register is hardcoded as GENMASK(9,6). However, the AST2600 has 2 sgpio master interfaces; one of them supports up to 128 gpio pins, and the pin count mask of the GPIO Configuration Register is 5 bits. The patch adds ast2600 compatibles and removes MAX_NR_HW_SGPIO and the corresponding design, so that the gpio input/output pin bases are determined by ngpios. The patch also removes the hardcoded pin mask and adds ast2400, ast2500 and ast2600 platform data that include the gpio pin count mask for the GPIO Configuration Register.

The original pin order is as follows (suppose MAX_NR_HW_SGPIO is 80 and ngpios is 10):
Input:  0 1 2 3 ... 9
Output: 80 81 82 ... 89

The new pin order is as follows:
Input:  0 2 4 6 ... 18
Output: 1 3 5 7 ... 19

SGPIO pin id and input/output pin mapping is as follows:
SGPIO0(0,1), SGPIO1(2,3), ..., SGPIO79(158,159)

For example, to access SGPIO5(10,11) (suppose sgpio chip id is 2):
Get SGPIO pin 5:  gpioget 2 10
Set SGPIO pin 5:  gpioset 2 11=1
                  gpioset 2 11=0

Signed-off-by: Steven Lee <steven_lee@aspeedtech.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Bartosz Golaszewski <bgolaszewski@baylibre.com>
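For reference, the mapping above can be written as two small helpers; these are illustrative only, not functions from the driver:

    /* Each SGPIO index n is exposed as a pair of GPIO lines on the chip:
     * the input line at 2*n and the output line at 2*n + 1 (so SGPIO5 is
     * the pair (10, 11) used in the gpioget/gpioset example above).
     */
    static inline unsigned int sgpio_input_line(unsigned int n)
    {
            return 2 * n;
    }

    static inline unsigned int sgpio_output_line(unsigned int n)
    {
            return 2 * n + 1;
    }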
2021-09-13  fixup! SGPIO DT and pinctrl fixup  (Jae Hyun Yoo, 1 file, -24/+24)
This reverts commit 6d512838ba348d7bc7989102ddd56cfdb48c85ee to use upstream sgpio driver. Signed-off-by: Jae Hyun Yoo <jae.hyun.yoo@intel.com>
2021-09-09  soc: aspeed: mctp: Clear TX channel and client rings after reset  (Iwona Winiarska, 1 file, -15/+64)
Currently, after TX channel wr_ptr reaches the maximum value, there is no mechanism to clear it. The driver expects that the HW will make forward progress, and eventually the buffer will have free space available. That's not true for some reset scenarios and when this happens, we may end up with a blocked full channel. While we could fix it by just adding a trigger after reset, this may cause userspace to receive unexpected responses for packets that got stuck in queues during reset. Let's fix it by flushing both TX channel and client rings. Note: The barriers are necessary just to enforce ordering between flush and producing new packets by the clients (we don't want to flush packets that were sent after reset, just the ones that were sent before it) but to keep things consistent, introduce helpers for all priv->pcie.bdf accesses. Signed-off-by: Iwona Winiarska <iwona.winiarska@intel.com> Change-Id: I457a05a6e7f0cb1fd8ce6f2bff6a59b166d32149
2021-09-08  Merge branch 'dev-5.10' into dev-5.10-intel  (Jae Hyun Yoo, 237 files, -1166/+2257)
Pull 5.10.60 stable from OpenBMC upstream Signed-off-by: Jae Hyun Yoo <jae.hyun.yoo@intel.com>
2021-08-26  ASD Fix AST26xx HW mode end tap state support  (Ernesto Corona, 1 file, -3/+13)
Prior to this change, we sent the JTAG state machine to RTI in all shifts whose end state was EXIT1IR/EXIT1DR. With this rule we were able to guarantee a smooth chain transition in HW mode (the chain is expected to be in RTI after it has been selected). However, there were uncovered cases, such as "go to Ex1IR and then go to ShfDR", which do not expect to visit RTI. With this change, AST26xx shift behavior in HW mode more accurately matches what is being requested: the JTAG state machine will remain in EXIT1IR/EXIT1DR when those end states are sent to the aspeed_jtag_xfer_hw2 driver function. It is now expected that the ASD application calculates the right end state and sends either EXIT1IR, EXIT1DR, RTI or PAUSEDR to control shift operations and JTAG state transitions in HW mode. Additionally, the aspeed_jtag_set_tap_state_hw2() function was updated to handle "go to RTI from EXIT1" spanning two different network packets.

Tested: Using Software, HW1 and HW2 modes: ASD Sanity ended successfully.

Signed-off-by: Ernesto Corona <ernesto.corona@intel.com>
Change-Id: Ic49b0ce26c49d08e806c29e5ae72a4b3b7e68553
2021-08-19  Merge tag 'v5.10.60' into dev-5.10  (Joel Stanley, 233 files, -1159/+2251)
This is the 5.10.60 stable release Signed-off-by: Joel Stanley <joel@jms.id.au>
2021-08-19  soc: aspeed: socinfo: Add AST2625 variant  (Joel Stanley, 1 file, -0/+1)
Add AST26XX series AST2625-A3 SOC ID, taken from the vendor u-boot SDK, arch/arm/mach-aspeed/ast2600/scu_info.c:

+ SOC_ID("AST2625-A3", 0x0503040305030403),

OpenBMC-Staging-Count: 1
Reviewed-by: Dylan Hung <dylan_hung@aspeedtech.com>
Link: https://lore.kernel.org/r/20210818010534.2508122-1-joel@jms.id.au
Signed-off-by: Joel Stanley <joel@jms.id.au>
2021-08-19  soc: aspeed: p2a-ctrl: Fix boundary check for mmap  (Iwona Winiarska, 1 file, -1/+1)
The check mixes pages (vm_pgoff) with bytes (vm_start, vm_end) on one side of the comparison, and uses resource address (rather than just the resource size) on the other side of the comparison. This can allow malicious userspace to easily bypass the boundary check and map pages that are located outside memory-region reserved by the driver. OpenBMC-Staging-Count: 1 Fixes: 01c60dcea9f7 ("drivers/misc: Add Aspeed P2A control driver") Cc: stable@vger.kernel.org Signed-off-by: Iwona Winiarska <iwona.winiarska@intel.com> Reviewed-by: Andrew Jeffery <andrew@aj.id.au> Tested-by: Andrew Jeffery <andrew@aj.id.au> Reviewed-by: Joel Stanley <joel@aj.id.au> Signed-off-by: Joel Stanley <joel@jms.id.au>
2021-08-19  soc: aspeed: lpc-ctrl: Fix boundary check for mmap  (Iwona Winiarska, 1 file, -1/+1)
The check mixes pages (vm_pgoff) with bytes (vm_start, vm_end) on one side of the comparison, and uses resource address (rather than just the resource size) on the other side of the comparison. This can allow malicious userspace to easily bypass the boundary check and map pages that are located outside memory-region reserved by the driver. OpenBMC-Staging-Count: 1 Fixes: 6c4e97678501 ("drivers/misc: Add Aspeed LPC control driver") Cc: stable@vger.kernel.org Signed-off-by: Iwona Winiarska <iwona.winiarska@intel.com> Reviewed-by: Andrew Jeffery <andrew@aj.id.au> Tested-by: Andrew Jeffery <andrew@aj.id.au> Reviewed-by: Joel Stanley <joel@aj.id.au> Signed-off-by: Joel Stanley <joel@jms.id.au>
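Both boundary-check fixes above follow the same pattern. A minimal sketch of the corrected check, with illustrative names and everything compared in units of pages (mem_size stands in for the size of the reserved memory region):

    /* Reject any mapping that would run past the reserved region. */
    static int example_mmap_bounds_check(struct vm_area_struct *vma,
                                         resource_size_t mem_size)
    {
            if (vma->vm_pgoff + vma_pages(vma) > (mem_size >> PAGE_SHIFT))
                    return -EINVAL;

            return 0;
    }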
2021-08-18  net: dsa: microchip: ksz8795: Use software untagging on CPU port  (Ben Hutchings, 1 file, -1/+8)
commit 9130c2d30c17846287b803a9803106318cbe5266 upstream. On the CPU port, we can support both tagged and untagged VLANs at the same time by doing any necessary untagging in software rather than hardware. To enable that, keep the CPU port's Remove Tag flag cleared and set the dsa_switch::untag_bridge_pvid flag. Fixes: e66f840c08a2 ("net: dsa: ksz: Add Microchip KSZ8795 DSA driver") Signed-off-by: Ben Hutchings <ben.hutchings@mind.be> Signed-off-by: David S. Miller <davem@davemloft.net> [bwh: Backport to 5.10: adjust context] Signed-off-by: Ben Hutchings <ben.hutchings@mind.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-08-18  net: dsa: microchip: ksz8795: Fix VLAN untagged flag change on deletion  (Ben Hutchings, 1 file, -3/+0)
commit af01754f9e3c553a2ee63b4693c79a3956e230ab upstream. When a VLAN is deleted from a port, the flags in struct switchdev_obj_port_vlan are always 0. ksz8_port_vlan_del() copies the BRIDGE_VLAN_INFO_UNTAGGED flag to the port's Tag Removal flag, and therefore always clears it. In case there are multiple VLANs configured as untagged on this port - which seems useless, but is allowed - deleting one of them changes the remaining VLANs to be tagged. It's only ever necessary to change this flag when a VLAN is added to the port, so leave it unchanged in ksz8_port_vlan_del(). Fixes: e66f840c08a2 ("net: dsa: ksz: Add Microchip KSZ8795 DSA driver") Signed-off-by: Ben Hutchings <ben.hutchings@mind.be> Signed-off-by: David S. Miller <davem@davemloft.net> [bwh: Backport to 5.10: adjust context] Signed-off-by: Ben Hutchings <ben.hutchings@mind.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-08-18  net: dsa: microchip: ksz8795: Reject unsupported VLAN configuration  (Ben Hutchings, 2 files, -2/+54)
commit 8f4f58f88fe0d9bd591f21f53de7dbd42baeb3fa upstream.

The switches supported by ksz8795 only have a per-port flag for Tag Removal. This means it is not possible to support both tagged and untagged VLANs on the same port. Reject attempts to add a VLAN that requires the flag to be changed, unless there are no VLANs currently configured. VID 0 is excluded from this check since it is untagged regardless of the state of the flag.

On the CPU port we could support tagged and untagged VLANs at the same time. This will be enabled by a later patch.

Fixes: e66f840c08a2 ("net: dsa: ksz: Add Microchip KSZ8795 DSA driver")
Signed-off-by: Ben Hutchings <ben.hutchings@mind.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
[bwh: Backport to 5.10:
 - This configuration has to be detected and rejected in the port_vlan_prepare operation
 - ksz8795_port_vlan_add() has to check again to decide whether to change the Tag Removal flag, so put the common condition in a separate function
 - Handle VID ranges]
Signed-off-by: Ben Hutchings <ben.hutchings@mind.be>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-08-18  net: dsa: microchip: ksz8795: Fix PVID tag insertion  (Ben Hutchings, 1 file, -5/+10)
commit ef3b02a1d79b691f9a354c4903cf1e6917e315f9 upstream.

ksz8795 has never actually enabled PVID tag insertion, and it also programmed the PVID incorrectly. To fix this:

* Allow tag insertion to be controlled per ingress port. On most chips, set bit 2 in Global Control 19. On KSZ88x3 this control flag doesn't exist.
* When adding a PVID:
  - Set the appropriate register bits to enable tag insertion on egress at every other port if this was the packet's ingress port.
  - Mask *out* the VID from the default tag, before or-ing in the new PVID.
* When removing a PVID:
  - Clear the same control bits to disable tag insertion.
  - Don't update the default tag. This wasn't doing anything useful.

Fixes: e66f840c08a2 ("net: dsa: ksz: Add Microchip KSZ8795 DSA driver")
Signed-off-by: Ben Hutchings <ben.hutchings@mind.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
[bwh: Backport to 5.10:
 - Drop the KSZ88x3 cases as those chips are not supported here
 - Handle VID ranges in ksz8795_port_vlan_del()]
Signed-off-by: Ben Hutchings <ben.hutchings@mind.be>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-08-18  net: dsa: microchip: Fix probing KSZ87xx switch with DT node for host port  (Ben Hutchings, 1 file, -1/+1)
The ksz8795 and ksz9477 drivers differ in the way they count ports. For ksz8795, ksz_device::port_cnt does not include the host port whereas for ksz9477 it does. This inconsistency was fixed in Linux 5.11 by a series of changes, but remains in 5.10-stable. When probing, the common code treats a port device node with an address >= dev->port_cnt as a fatal error. As a minimal fix, change it to compare against dev->mib_port_cnt. This is the length of the dev->ports array that the port number will be used to index, and always includes the host port. Cc: Woojung Huh <woojung.huh@microchip.com> Cc: Microchip Linux Driver Support <UNGLinuxDriver@microchip.com> Cc: Michael Grzeschik <m.grzeschik@pengutronix.de> Cc: Marek Vasut <marex@denx.de> Signed-off-by: Ben Hutchings <ben.hutchings@mind.be> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-08-18  efi/libstub: arm64: Double check image alignment at entry  (Ard Biesheuvel, 1 file, -0/+4)
commit c32ac11da3f83bb42b986702a9b92f0a14ed4182 upstream. On arm64, the stub only moves the kernel image around in memory if needed, which is typically only for KASLR, given that relocatable kernels (which is the default) can run from any 64k aligned address, which is also the minimum alignment communicated to EFI via the PE/COFF header. Unfortunately, some loaders appear to ignore this header, and load the kernel at some arbitrary offset in memory. We can deal with this, but let's check for this condition anyway, so non-compliant code can be spotted and fixed. Cc: <stable@vger.kernel.org> # v5.10+ Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-08-18  PCI/MSI: Protect msi_desc::masked for multi-MSI  (Thomas Gleixner, 2 files, -9/+11)
commit 77e89afc25f30abd56e76a809ee2884d7c1b63ce upstream. Multi-MSI uses a single MSI descriptor and there is a single mask register when the device supports per vector masking. To avoid reading back the mask register the value is cached in the MSI descriptor and updates are done by clearing and setting bits in the cache and writing it to the device. But nothing protects msi_desc::masked and the mask register from being modified concurrently on two different CPUs for two different Linux interrupts which belong to the same multi-MSI descriptor. Add a lock to struct device and protect any operation on the mask and the mask register with it. This makes the update of msi_desc::masked unconditional, but there is no place which requires a modification of the hardware register without updating the masked cache. msi_mask_irq() is now an empty wrapper which will be cleaned up in follow up changes. The problem goes way back to the initial support of multi-MSI, but picking the commit which introduced the mask cache is a valid cut off point (2.6.30). Fixes: f2440d9acbe8 ("PCI MSI: Refactor interrupt masking code") Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Marc Zyngier <maz@kernel.org> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20210729222542.726833414@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
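A minimal sketch of the serialized mask update described above; the lock placement (struct device) follows the commit message, the rest is condensed and illustrative rather than the literal patch:

    /* Read-modify-write of the cached mask and the hardware mask register
     * under the per-device lock, so two CPUs updating different vectors of
     * the same multi-MSI descriptor cannot interleave.
     */
    static void example_msi_update_mask(struct msi_desc *desc, u32 clear, u32 set)
    {
            unsigned long flags;

            raw_spin_lock_irqsave(&desc->dev->msi_lock, flags);
            desc->masked &= ~clear;
            desc->masked |= set;
            pci_write_config_dword(msi_desc_to_pci_dev(desc), desc->mask_pos,
                                   desc->masked);
            raw_spin_unlock_irqrestore(&desc->dev->msi_lock, flags);
    }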
2021-08-18  PCI/MSI: Use msi_mask_irq() in pci_msi_shutdown()  (Thomas Gleixner, 1 file, -1/+1)
commit d28d4ad2a1aef27458b3383725bb179beb8d015c upstream. No point in using the raw write function from shutdown. Preparatory change to introduce proper serialization for the msi_desc::masked cache. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Marc Zyngier <maz@kernel.org> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20210729222542.674391354@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-08-18  PCI/MSI: Correct misleading comments  (Thomas Gleixner, 1 file, -4/+1)
commit 689e6b5351573c38ccf92a0dd8b3e2c2241e4aff upstream. The comments about preserving the cached state in pci_msi[x]_shutdown() are misleading as the MSI descriptors are freed right after those functions return. So there is nothing to restore. Preparatory change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Marc Zyngier <maz@kernel.org> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20210729222542.621609423@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-08-18  PCI/MSI: Do not set invalid bits in MSI mask  (Thomas Gleixner, 1 file, -4/+4)
commit 361fd37397f77578735907341579397d5bed0a2d upstream. msi_mask_irq() takes a mask and a flags argument. The mask argument is used to mask out bits from the cached mask and the flags argument to set bits. Some places invoke it with a flags argument which sets bits which are not used by the device, i.e. when the device supports up to 8 vectors a full unmask in some places sets the mask to 0xFFFFFF00. While devices probably do not care, it's still bad practice. Fixes: 7ba1930db02f ("PCI MSI: Unmask MSI if setup failed") Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Marc Zyngier <maz@kernel.org> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20210729222542.568173099@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
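For illustration, the set of valid mask bits can be derived from the vector count the device advertises (multi_cap encodes log2 of the supported vectors); this helper is a sketch, not necessarily the exact code used by the fix:

    /* Only the low (1 << multi_cap) mask bits exist for this device. */
    static u32 example_msi_multi_mask(struct msi_desc *desc)
    {
            /* multi_cap == 5 means 32 vectors, i.e. every bit is valid */
            if (desc->msi_attrib.multi_cap >= 5)
                    return 0xffffffff;

            return (1 << (1 << desc->msi_attrib.multi_cap)) - 1;
    }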
2021-08-18  PCI/MSI: Enforce MSI[X] entry updates to be visible  (Thomas Gleixner, 1 file, -0/+5)
commit b9255a7cb51754e8d2645b65dd31805e282b4f3e upstream. Nothing enforces the posted writes to be visible when the function returns. Flush them even if the flush might be redundant when the entry is masked already as the unmask will flush as well. This is either setup or a rare affinity change event so the extra flush is not the end of the world. While this is more a theoretical issue especially the logic in the X86 specific msi_set_affinity() function relies on the assumption that the update has reached the hardware when the function returns. Again, as this never has been enforced the Fixes tag refers to a commit in: git://git.kernel.org/pub/scm/linux/kernel/git/tglx/history.git Fixes: f036d4ea5fa7 ("[PATCH] ia32 Message Signalled Interrupt support") Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Marc Zyngier <maz@kernel.org> Reviewed-by: Marc Zyngier <maz@kernel.org> Acked-by: Bjorn Helgaas <bhelgaas@google.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20210729222542.515188147@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-08-18  PCI/MSI: Enforce that MSI-X table entry is masked for update  (Thomas Gleixner, 1 file, -0/+15)
commit da181dc974ad667579baece33c2c8d2d1e4558d5 upstream.

The specification (PCIe r5.0, sec 6.1.4.5) states:

    For MSI-X, a function is permitted to cache Address and Data values from unmasked MSI-X Table entries. However, anytime software unmasks a currently masked MSI-X Table entry either by clearing its Mask bit or by clearing the Function Mask bit, the function must update any Address or Data values that it cached from that entry. If software changes the Address or Data value of an entry while the entry is unmasked, the result is undefined.

The Linux kernel's MSI-X support never enforced that the entry is masked before the entry is modified hence the Fixes tag refers to a commit in:

    git://git.kernel.org/pub/scm/linux/kernel/git/tglx/history.git

Enforce the entry to be masked across the update. There is no point in enforcing this to be handled at all possible call sites as this is just pointless code duplication and the common update function is the obvious place to enforce this.

Fixes: f036d4ea5fa7 ("[PATCH] ia32 Message Signalled Interrupt support")
Reported-by: Kevin Tian <kevin.tian@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20210729222542.462096385@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
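A condensed sketch of the mask-across-update rule (entry register offsets as used by the PCI/MSI core; the entry is only unmasked again if it was unmasked on entry):

    static void example_write_msix_entry(void __iomem *entry, struct msi_msg *msg)
    {
            u32 ctrl = readl(entry + PCI_MSIX_ENTRY_VECTOR_CTRL);
            bool unmasked = !(ctrl & PCI_MSIX_ENTRY_CTRL_MASKBIT);

            /* Mask the entry so the function cannot latch stale Address/Data */
            if (unmasked)
                    writel(ctrl | PCI_MSIX_ENTRY_CTRL_MASKBIT,
                           entry + PCI_MSIX_ENTRY_VECTOR_CTRL);

            writel(msg->address_lo, entry + PCI_MSIX_ENTRY_LOWER_ADDR);
            writel(msg->address_hi, entry + PCI_MSIX_ENTRY_UPPER_ADDR);
            writel(msg->data, entry + PCI_MSIX_ENTRY_DATA);

            /* Restore the original mask state; unmasking forces a re-read */
            if (unmasked)
                    writel(ctrl, entry + PCI_MSIX_ENTRY_VECTOR_CTRL);
    }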
2021-08-18  PCI/MSI: Mask all unused MSI-X entries  (Thomas Gleixner, 1 file, -18/+27)
commit 7d5ec3d3612396dc6d4b76366d20ab9fc06f399f upstream.

When MSI-X is enabled the ordering of calls is:

  msix_map_region();
  msix_setup_entries();
  pci_msi_setup_msi_irqs();
  msix_program_entries();

This has a few interesting issues:

 1) msix_setup_entries() allocates the MSI descriptors and initializes them except for the msi_desc::masked member which is left zero initialized.

 2) pci_msi_setup_msi_irqs() allocates the interrupt descriptors and sets up the MSI interrupts which ends up in pci_write_msi_msg() unless the interrupt chip provides its own irq_write_msi_msg() function.

 3) msix_program_entries() does not do what the name suggests. It solely updates the entries array (if not NULL) and initializes the masked member for each MSI descriptor by reading the hardware state and then masks the entry.

Obviously this has some issues:

 1) The uninitialized masked member of msi_desc prevents the enforcement of masking the entry in pci_write_msi_msg() depending on the cached masked bit. Aside of that half initialized data is a NONO in general

 2) msix_program_entries() only ensures that the actually allocated entries are masked. This is wrong as experimentation with crash testing and crash kernel kexec has shown.

    This limited testing unearthed that when the production kernel had more entries in use and unmasked when it crashed and the crash kernel allocated a smaller amount of entries, then a full scan of all entries found unmasked entries which were in use in the production kernel.

    This is obviously a device or emulation issue as the device reset should mask all MSI-X table entries, but obviously that's just part of the paper specification.

Cure this by:

 1) Masking all table entries in hardware
 2) Initializing msi_desc::masked in msix_setup_entries()
 3) Removing the mask dance in msix_program_entries()
 4) Renaming msix_program_entries() to msix_update_entries() to reflect the purpose of that function.

As the masking of unused entries has never been done the Fixes tag refers to a commit in:

  git://git.kernel.org/pub/scm/linux/kernel/git/tglx/history.git

Fixes: f036d4ea5fa7 ("[PATCH] ia32 Message Signalled Interrupt support")
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20210729222542.403833459@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-08-18  PCI/MSI: Enable and mask MSI-X early  (Thomas Gleixner, 1 file, -13/+15)
commit 438553958ba19296663c6d6583d208dfb6792830 upstream.

The ordering of MSI-X enable in hardware is dysfunctional:

 1) MSI-X is disabled in the control register
 2) Various setup functions
 3) pci_msi_setup_msi_irqs() is invoked which ends up accessing the MSI-X table entries
 4) MSI-X is enabled and masked in the control register with the comment that enabling is required for some hardware to access the MSI-X table

Step #4 obviously contradicts #3. The history of this is an issue with the NIU hardware. When #4 was introduced the table access actually happened in msix_program_entries() which was invoked after enabling and masking MSI-X. This was changed in commit d71d6432e105 ("PCI/MSI: Kill redundant call of irq_set_msi_desc() for MSI-X interrupts") which removed the table write from msix_program_entries().

Interestingly enough nobody noticed and either NIU still works or it did not get any testing with a kernel 3.19 or later.

Nevertheless this is inconsistent and there is no reason why MSI-X can't be enabled and masked in the control register early on, i.e. move step #4 above to step #1. This preserves the NIU workaround and has no side effects on other hardware.

Fixes: d71d6432e105 ("PCI/MSI: Kill redundant call of irq_set_msi_desc() for MSI-X interrupts")
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Ashok Raj <ashok.raj@intel.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20210729222542.344136412@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-08-18  efi/libstub: arm64: Relax 2M alignment again for relocatable kernels  (Ard Biesheuvel, 1 file, -15/+13)
[ Upstream commit 3a262423755b83a5f85009ace415d6e7f572dfe8 ] Commit 82046702e288 ("efi/libstub/arm64: Replace 'preferred' offset with alignment check") simplified the way the stub moves the kernel image around in memory before booting it, given that a relocatable image does not need to be copied to a 2M aligned offset if it was loaded on a 64k boundary by EFI. Commit d32de9130f6c ("efi/arm64: libstub: Deal gracefully with EFI_RNG_PROTOCOL failure") inadvertently defeated this logic by overriding the value of efi_nokaslr if EFI_RNG_PROTOCOL is not available, which was mistaken by the loader logic as an explicit request on the part of the user to disable KASLR and any associated relocation of an Image not loaded on a 2M boundary. So let's reinstate this functionality, by capturing the value of efi_nokaslr at function entry to choose the minimum alignment. Fixes: d32de9130f6c ("efi/arm64: libstub: Deal gracefully with EFI_RNG_PROTOCOL failure") Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-08-18  efi/libstub: arm64: Force Image reallocation if BSS was not reserved  (Ard Biesheuvel, 1 file, -1/+48)
[ Upstream commit 5b94046efb4706b3429c9c8e7377bd8d1621d588 ] Distro versions of GRUB replace the usual LoadImage/StartImage calls used to load the kernel image with some local code that fails to honor the allocation requirements described in the PE/COFF header, as it does not account for the image's BSS section at all: it fails to allocate space for it, and fails to zero initialize it. Since the EFI stub itself is allocated in the .init segment, which is in the middle of the image, its BSS section is not impacted by this, and the main consequence of this omission is that the BSS section may overlap with memory regions that are already used by the firmware. So let's warn about this condition, and force image reallocation to occur in this case, which works around the problem. Fixes: 82046702e288 ("efi/libstub/arm64: Replace 'preferred' offset with alignment check") Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Tested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-08-18  arm64: efi: kaslr: Fix occasional random alloc (and boot) failure  (Benjamin Herrenschmidt, 1 file, -0/+2)
[ Upstream commit 4152433c397697acc4b02c4a10d17d5859c2730d ] The EFI stub random allocator used for kaslr on arm64 has a subtle bug. In function get_entry_num_slots() which counts the number of possible allocation "slots" for the image in a given chunk of free EFI memory, "last_slot" can become negative if the chunk is smaller than the requested allocation size. The test "if (first_slot > last_slot)" doesn't catch it because both first_slot and last_slot are unsigned. I chose not to make them signed to avoid problems if this is ever used on architectures where there are meaningful addresses with the top bit set. Instead, fix it with an additional test against the allocation size. This can cause a boot failure in addition to a loss of randomisation due to another bug in the arm64 stub fixed separately. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Fixes: 2ddbfc81eac8 ("efi: stub: add implementation of efi_random_alloc()") Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
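A sketch of the additional guard described above, with an illustrative signature (the real function operates on an EFI memory descriptor):

    static unsigned long example_entry_num_slots(unsigned long start,
                                                 unsigned long end,
                                                 unsigned long size,
                                                 unsigned long align)
    {
            unsigned long first_slot, last_slot;

            /* Chunk smaller than the image: no slots, and the unsigned
             * "end - size" arithmetic below can no longer wrap.
             */
            if (end - start < size)
                    return 0;

            first_slot = round_up(start, align);
            last_slot = round_down(end - size, align);

            if (first_slot > last_slot)
                    return 0;

            return (last_slot - first_slot) / align + 1;
    }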
2021-08-18  nbd: Aovid double completion of a request  (Xie Yongji, 1 file, -3/+11)
[ Upstream commit cddce01160582a5f52ada3da9626c052d852ec42 ] There is a race between iterating over requests in nbd_clear_que() and completing requests in recv_work(), which can lead to double completion of a request. To fix it, flush the recv worker before iterating over the requests and don't abort the completed request while iterating. Fixes: 96d97e17828f ("nbd: clear_sock on netlink disconnect") Reported-by: Jiang Yadong <jiangyadong@bytedance.com> Signed-off-by: Xie Yongji <xieyongji@bytedance.com> Reviewed-by: Josef Bacik <josef@toxicpanda.com> Link: https://lore.kernel.org/r/20210813151330.96-1-xieyongji@bytedance.com Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Sasha Levin <sashal@kernel.org>
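A fragment illustrating the ordering described above in the clear-queue path; the names follow the nbd driver, but this is a sketch rather than the literal patch:

    /* Let recv_work() finish completing requests before the queue is
     * walked, so a request cannot be completed both here and in the
     * receive path.
     */
    if (nbd->recv_workq)
            flush_workqueue(nbd->recv_workq);

    blk_mq_tagset_busy_iter(&nbd->tag_set, nbd_clear_req, NULL);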
2021-08-18  xen/events: Fix race in set_evtchn_to_irq  (Maximilian Heyne, 1 file, -6/+14)
[ Upstream commit 88ca2521bd5b4e8b83743c01a2d4cb09325b51e9 ]

There is a TOCTOU issue in set_evtchn_to_irq. Rows in the evtchn_to_irq mapping are lazily allocated in this function. The check whether the row is already present and the row initialization are not synchronized. Two threads can at the same time allocate a new row for evtchn_to_irq and add the irq mapping to their newly allocated row. One thread will overwrite what the other has set for evtchn_to_irq[row] and therefore the irq mapping is lost.

This will trigger a BUG_ON later in bind_evtchn_to_cpu:

  INFO: pci 0000:1a:15.4: [1d0f:8061] type 00 class 0x010802
  INFO: nvme 0000:1a:12.1: enabling device (0000 -> 0002)
  INFO: nvme nvme77: 1/0/0 default/read/poll queues
  CRIT: kernel BUG at drivers/xen/events/events_base.c:427!
  WARN: invalid opcode: 0000 [#1] SMP NOPTI
  WARN: Workqueue: nvme-reset-wq nvme_reset_work [nvme]
  WARN: RIP: e030:bind_evtchn_to_cpu+0xc2/0xd0
  WARN: Call Trace:
  WARN:  set_affinity_irq+0x121/0x150
  WARN:  irq_do_set_affinity+0x37/0xe0
  WARN:  irq_setup_affinity+0xf6/0x170
  WARN:  irq_startup+0x64/0xe0
  WARN:  __setup_irq+0x69e/0x740
  WARN:  ? request_threaded_irq+0xad/0x160
  WARN:  request_threaded_irq+0xf5/0x160
  WARN:  ? nvme_timeout+0x2f0/0x2f0 [nvme]
  WARN:  pci_request_irq+0xa9/0xf0
  WARN:  ? pci_alloc_irq_vectors_affinity+0xbb/0x130
  WARN:  queue_request_irq+0x4c/0x70 [nvme]
  WARN:  nvme_reset_work+0x82d/0x1550 [nvme]
  WARN:  ? check_preempt_wakeup+0x14f/0x230
  WARN:  ? check_preempt_curr+0x29/0x80
  WARN:  ? nvme_irq_check+0x30/0x30 [nvme]
  WARN:  process_one_work+0x18e/0x3c0
  WARN:  worker_thread+0x30/0x3a0
  WARN:  ? process_one_work+0x3c0/0x3c0
  WARN:  kthread+0x113/0x130
  WARN:  ? kthread_park+0x90/0x90
  WARN:  ret_from_fork+0x3a/0x50

This patch sets evtchn_to_irq rows via a cmpxchg operation so that they will be set only once. The row is now cleared before writing it to evtchn_to_irq in order to not create a race once the row is visible for other threads.

While at it, do not require the page to be zeroed, because it will be overwritten with -1's in clear_evtchn_to_irq_row anyway.

Signed-off-by: Maximilian Heyne <mheyne@amazon.de>
Fixes: d0b075ffeede ("xen/events: Refactor evtchn_to_irq array to be dynamically allocated")
Link: https://lore.kernel.org/r/20210812130930.127134-1-mheyne@amazon.de
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
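A sketch of the once-only row install described above (table layout and names are simplified and illustrative):

    static int example_install_row(int **table, unsigned int row)
    {
            int *candidate = (int *)__get_free_page(GFP_KERNEL);
            unsigned int i;

            if (!candidate)
                    return -ENOMEM;

            /* Fully initialise the row before it can become visible */
            for (i = 0; i < PAGE_SIZE / sizeof(int); i++)
                    candidate[i] = -1;

            /* Publish at most one row; the loser of the race frees its copy */
            if (cmpxchg(&table[row], NULL, candidate) != NULL)
                    free_page((unsigned long)candidate);

            return 0;
    }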
2021-08-18  drm/i915: Only access SFC_DONE when media domain is not fused off  (Matt Roper, 1 file, -1/+18)
[ Upstream commit 24d032e2359e3abc926b3d423f49a7c33e0b7836 ] The SFC_DONE register lives within the corresponding VD0/VD2/VD4/VD6 forcewake domain and is not accessible if the vdbox in that domain is fused off and the forcewake is not initialized. This mistake went unnoticed because until recently we were using the wrong register offset for the SFC_DONE register; once the register offset was corrected, we started hitting errors like <4> [544.989065] i915 0000:cc:00.0: Uninitialized forcewake domain(s) 0x80 accessed at 0x1ce000 on parts with fused-off vdbox engines. Fixes: e50dbdbfd9fb ("drm/i915/tgl: Add SFC instdone to error state") Fixes: 9c9c6d0ab08a ("drm/i915: Correct SFC_DONE register offset") Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com> Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210806174130.1058960-1-matthew.d.roper@intel.com Reviewed-by: José Roberto de Souza <jose.souza@intel.com> (cherry picked from commit c5589bb5dccb0c5cb74910da93663f489589f3ce) Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> [Changed Fixes tag to match the cherry-picked 82929a2140eb] Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-08-18  net: dsa: sja1105: fix broken backpressure in .port_fdb_dump  (Vladimir Oltean, 1 file, -1/+3)
[ Upstream commit 21b52fed928e96d2f75d2f6aa9eac7a4b0b55d22 ] rtnl_fdb_dump() has logic to split a dump of PF_BRIDGE neighbors into multiple netlink skbs if the buffer provided by user space is too small (one buffer will typically handle a few hundred FDB entries). When the current buffer becomes full, nlmsg_put() in dsa_slave_port_fdb_do_dump() returns -EMSGSIZE and DSA saves the index of the last dumped FDB entry, returns to rtnl_fdb_dump() up to that point, and then the dump resumes on the same port with a new skb, and FDB entries up to the saved index are simply skipped. Since dsa_slave_port_fdb_do_dump() is pointed to by the "cb" passed to drivers, then drivers must check for the -EMSGSIZE error code returned by it. Otherwise, when a netlink skb becomes full, DSA will no longer save newly dumped FDB entries to it, but the driver will continue dumping. So FDB entries will be missing from the dump. Fix the broken backpressure by propagating the "cb" return code and allow rtnl_fdb_dump() to restart the FDB dump with a new skb. Fixes: 291d1e72b756 ("net: dsa: sja1105: Add support for FDB and MDB management") Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-08-18  net: dsa: lantiq: fix broken backpressure in .port_fdb_dump  (Vladimir Oltean, 1 file, -4/+10)
[ Upstream commit 871a73a1c8f55da0a3db234e9dd816ea4fd546f2 ] rtnl_fdb_dump() has logic to split a dump of PF_BRIDGE neighbors into multiple netlink skbs if the buffer provided by user space is too small (one buffer will typically handle a few hundred FDB entries). When the current buffer becomes full, nlmsg_put() in dsa_slave_port_fdb_do_dump() returns -EMSGSIZE and DSA saves the index of the last dumped FDB entry, returns to rtnl_fdb_dump() up to that point, and then the dump resumes on the same port with a new skb, and FDB entries up to the saved index are simply skipped. Since dsa_slave_port_fdb_do_dump() is pointed to by the "cb" passed to drivers, then drivers must check for the -EMSGSIZE error code returned by it. Otherwise, when a netlink skb becomes full, DSA will no longer save newly dumped FDB entries to it, but the driver will continue dumping. So FDB entries will be missing from the dump. Fix the broken backpressure by propagating the "cb" return code and allow rtnl_fdb_dump() to restart the FDB dump with a new skb. Fixes: 58c59ef9e930 ("net: dsa: lantiq: Add Forwarding Database access") Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-08-18  net: dsa: lan9303: fix broken backpressure in .port_fdb_dump  (Vladimir Oltean, 1 file, -15/+19)
[ Upstream commit ada2fee185d8145afb89056558bb59545b9dbdd0 ] rtnl_fdb_dump() has logic to split a dump of PF_BRIDGE neighbors into multiple netlink skbs if the buffer provided by user space is too small (one buffer will typically handle a few hundred FDB entries). When the current buffer becomes full, nlmsg_put() in dsa_slave_port_fdb_do_dump() returns -EMSGSIZE and DSA saves the index of the last dumped FDB entry, returns to rtnl_fdb_dump() up to that point, and then the dump resumes on the same port with a new skb, and FDB entries up to the saved index are simply skipped. Since dsa_slave_port_fdb_do_dump() is pointed to by the "cb" passed to drivers, then drivers must check for the -EMSGSIZE error code returned by it. Otherwise, when a netlink skb becomes full, DSA will no longer save newly dumped FDB entries to it, but the driver will continue dumping. So FDB entries will be missing from the dump. Fix the broken backpressure by propagating the "cb" return code and allow rtnl_fdb_dump() to restart the FDB dump with a new skb. Fixes: ab335349b852 ("net: dsa: lan9303: Add port_fast_age and port_fdb_dump methods") Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sashal@kernel.org>
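All three backpressure fixes above boil down to the same pattern; an illustrative .port_fdb_dump skeleton (the entry iterator and entry type are hypothetical):

    static int example_port_fdb_dump(struct dsa_switch *ds, int port,
                                     dsa_fdb_dump_cb_t *cb, void *data)
    {
            struct example_fdb_entry entry;         /* hypothetical HW entry */
            int err = 0;

            while (example_fdb_read_next(ds, port, &entry)) {   /* hypothetical */
                    /* Propagate the callback's result: -EMSGSIZE means the
                     * netlink skb is full and the dump must stop here so
                     * rtnl_fdb_dump() can restart it with a new skb.
                     */
                    err = cb(entry.mac, entry.vid, entry.is_static, data);
                    if (err)
                            break;
            }

            return err;
    }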
2021-08-18  net: dsa: microchip: ksz8795: Fix VLAN filtering  (Ben Hutchings, 1 file, -0/+11)
[ Upstream commit 164844135a3f215d3018ee9d6875336beb942413 ] Currently ksz8_port_vlan_filtering() sets or clears the VLAN Enable hardware flag. That controls discarding of packets with a VID that has not been enabled for any port on the switch. Since it is a global flag, set the dsa_switch::vlan_filtering_is_global flag so that the DSA core understands this can't be controlled per port. When VLAN filtering is enabled, the switch should also discard packets with a VID that's not enabled on the ingress port. Set or clear each external port's VLAN Ingress Filter flag in ksz8_port_vlan_filtering() to make that happen. Fixes: e66f840c08a2 ("net: dsa: ksz: Add Microchip KSZ8795 DSA driver") Signed-off-by: Ben Hutchings <ben.hutchings@mind.be> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-08-18  net: dsa: microchip: Fix ksz_read64()  (Ben Hutchings, 1 file, -6/+2)
[ Upstream commit c34f674c8875235725c3ef86147a627f165d23b4 ] ksz_read64() currently does some dubious byte-swapping on the two halves of a 64-bit register, and then only returns the high bits. Replace this with a straightforward expression. Fixes: e66f840c08a2 ("net: dsa: ksz: Add Microchip KSZ8795 DSA driver") Signed-off-by: Ben Hutchings <ben.hutchings@mind.be> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <sashal@kernel.org>
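A sketch of the straightforward expression described above, assuming the driver's 32-bit accessor and a layout with the high word at reg and the low word at reg + 4:

    static int example_read64(struct ksz_device *dev, u32 reg, u64 *value)
    {
            u32 hi, lo;
            int ret;

            ret = ksz_read32(dev, reg, &hi);
            if (ret)
                    return ret;

            ret = ksz_read32(dev, reg + 4, &lo);
            if (ret)
                    return ret;

            /* No ad-hoc byte swapping: just combine the two halves */
            *value = ((u64)hi << 32) | lo;
            return 0;
    }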
2021-08-18  drm/meson: fix colour distortion from HDR set during vendor u-boot  (Christian Hewitt, 2 files, -1/+11)
[ Upstream commit bf33677a3c394bb8fddd48d3bbc97adf0262e045 ] Add support for the OSD1 HDR registers so meson DRM can handle the HDR properties set by Amlogic u-boot on G12A and newer devices which result in blue/green/pink colour distortion to display output. This takes the original patch submissions from Mathias [0] and [1] with corrections for formatting and the missing description and attribution needed for merge. [0] https://lore.kernel.org/linux-amlogic/59dfd7e6-fc91-3d61-04c4-94e078a3188c@baylibre.com/T/ [1] https://lore.kernel.org/linux-amlogic/CAOKfEHBx_fboUqkENEMd-OC-NSrf46nto+vDLgvgttzPe99kXg@mail.gmail.com/T/#u Fixes: 728883948b0d ("drm/meson: Add G12A Support for VIU setup") Suggested-by: Mathias Steiger <mathias.steiger@googlemail.com> Signed-off-by: Christian Hewitt <christianshewitt@gmail.com> Tested-by: Neil Armstrong <narmstrong@baylibre.com> Tested-by: Philip Milev <milev.philip@gmail.com> [narmsrong: adding missing space on second tested-by tag] Signed-off-by: Neil Armstrong <narmstrong@baylibre.com> Link: https://patchwork.freedesktop.org/patch/msgid/20210806094005.7136-1-christianshewitt@gmail.com Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-08-18  net/mlx5: Fix return value from tracer initialization  (Aya Levin, 1 file, -2/+9)
[ Upstream commit bd37c2888ccaa5ceb9895718f6909b247cc372e0 ] Check return value of mlx5_fw_tracer_start(), set error path and fix return value of mlx5_fw_tracer_init() accordingly. Fixes: c71ad41ccb0c ("net/mlx5: FW tracer, events handling") Signed-off-by: Aya Levin <ayal@nvidia.com> Reviewed-by: Moshe Shemesh <moshe@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-08-18  net/mlx5: Synchronize correct IRQ when destroying CQ  (Shay Drory, 9 files, -28/+26)
[ Upstream commit 563476ae0c5e48a028cbfa38fa9d2fc0418eb88f ] The CQ destroy is performed based on the IRQ number that is stored in cq->irqn. That number wasn't set explicitly during CQ creation and as expected some of the API users of mlx5_core_create_cq() forgot to update it. This caused to wrong synchronization call of the wrong IRQ with a number 0 instead of the real one. As a fix, set the IRQ number directly in the mlx5_core_create_cq() and update all users accordingly. Fixes: 1a86b377aa21 ("vdpa/mlx5: Add VDPA driver for supported mlx5 devices") Fixes: ef1659ade359 ("IB/mlx5: Add DEVX support for CQ events") Signed-off-by: Shay Drory <shayd@nvidia.com> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com> Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-08-18  bareudp: Fix invalid read beyond skb's linear data  (Guillaume Nault, 1 file, -5/+11)
[ Upstream commit 143a8526ab5fd4f8a0c4fe2a9cb28c181dc5a95f ] Data beyond the UDP header might not be part of the skb's linear data. Use skb_copy_bits() instead of direct access to skb->data+X, so that we read the correct bytes even on a fragmented skb. Fixes: 4b5f67232d95 ("net: Special handling for IP & MPLS.") Signed-off-by: Guillaume Nault <gnault@redhat.com> Link: https://lore.kernel.org/r/7741c46545c6ef02e70c80a9b32814b22d9616b3.1628264975.git.gnault@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: Sasha Levin <sashal@kernel.org>
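A sketch of the safe header access described above (IP case shown; offset and names illustrative):

    /* Copy the inner header out of the skb instead of dereferencing
     * skb->data at a fixed offset; on a fragmented skb those bytes may
     * not be in the linear area at all.
     */
    static bool example_peek_inner_iphdr(struct sk_buff *skb, struct iphdr *iph)
    {
            return skb_copy_bits(skb, sizeof(struct udphdr), iph,
                                 sizeof(*iph)) == 0;
    }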
2021-08-18  iavf: Set RSS LUT and key in reset handle path  (Md Fahad Iqbal Polash, 1 file, -5/+8)
[ Upstream commit a7550f8b1c9712894f9e98d6caf5f49451ebd058 ] iavf driver should set RSS LUT and key unconditionally in reset path. Currently, the driver does not do that. This patch fixes this issue. Fixes: 2c86ac3c7079 ("i40evf: create a generic config RSS function") Signed-off-by: Md Fahad Iqbal Polash <md.fahad.iqbal.polash@intel.com> Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Signed-off-by: Sasha Levin <sashal@kernel.org>