path: root/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
Age  Commit message  Author  Files  Lines
2022-03-15  ice: rename ice_virtchnl_pf.c to ice_sriov.c  (Jacob Keller)  1 file, -437/+0
The ice_virtchnl_pf.c and ice_virtchnl_pf.h files are where most of the code for implementing Single Root IOV virtualization resides. This code includes support for bringing up and tearing down VFs, hooks into the kernel SR-IOV netdev operations, and for handling virtchnl messages from VFs.

In the future, we plan to support Scalable IOV in addition to Single Root IOV as an alternative virtualization scheme. This implementation will re-use some but not all of the code in ice_virtchnl_pf.c

To prepare for this future, we want to refactor and split up the code in ice_virtchnl_pf.c into the following scheme:

* ice_vf_lib.[ch]: Basic VF structures and accessors. This is where scheme-independent code will reside.
* ice_virtchnl.[ch]: Virtchnl message handling. This is where the bulk of the logic for processing messages from VFs using the virtchnl messaging scheme will reside. This is separated from ice_vf_lib.c because it is distinct and has a bulk of the processing code.
* ice_sriov.[ch]: Single Root IOV implementation, including initialization and the routines for interacting with SR-IOV based netdev operations.
* (future) ice_siov.[ch]: Scalable IOV implementation.

As a first step, let's assume that all of the code in ice_virtchnl_pf.[ch] is for Single Root IOV. Rename this file to ice_sriov.c and its header to ice_sriov.h. Future changes will further split out the code in these files following the plan outlined here.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-03-11  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (Jakub Kicinski)  1 file, -3/+0
net/dsa/dsa2.c
  commit afb3cc1a397d ("net: dsa: unlock the rtnl_mutex when dsa_master_setup() fails")
  commit e83d56537859 ("net: dsa: replay master state events in dsa_tree_{setup,teardown}_master")
  https://lore.kernel.org/all/20220307101436.7ae87da0@canb.auug.org.au/

drivers/net/ethernet/intel/ice/ice.h
  commit 97b0129146b1 ("ice: Fix error with handling of bonding MTU")
  commit 43113ff73453 ("ice: add TTY for GNSS module for E810T device")
  https://lore.kernel.org/all/20220310112843.3233bcf1@canb.auug.org.au/

drivers/staging/gdm724x/gdm_lte.c
  commit fc7f750dc9d1 ("staging: gdm724x: fix use after free in gdm_lte_rx()")
  commit 4bcc4249b4cf ("staging: Use netif_rx().")
  https://lore.kernel.org/all/20220308111043.1018a59d@canb.auug.org.au/

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-09  ice: stop disabling VFs due to PF error responses  (Jacob Keller)  1 file, -3/+0
The ice_vc_send_msg_to_vf function has logic to detect "failure" responses being sent to a VF. If a VF is sent more than ICE_DFLT_NUM_INVAL_MSGS_ALLOWED then the VF is marked as disabled. Almost identical logic also existed in the i40e driver.

This logic was added to the ice driver in commit 1071a8358a28 ("ice: Implement virtchnl commands for AVF support") which itself copied from the i40e implementation in commit 5c3c48ac6bf5 ("i40e: implement virtual device interface"). Neither commit provides a proper explanation or justification of the check. In fact, later commits to i40e changed the logic to allow bypassing the check in some specific instances.

The "logic" for this seems to be that error responses somehow indicate a malicious VF. This is not really true. The PF might be sending an error for any number of reasons such as lack of resources, etc. Additionally, this causes the PF to log an info message for every failed VF response which may confuse users, and can spam the kernel log.

This behavior is not documented as part of any requirement for our products and other operating system drivers such as the FreeBSD implementation of our drivers do not include this type of check. In fact, the change from dev_err to dev_info in i40e commit 18b7af57d9c1 ("i40e: Lower some message levels") explains that these messages typically don't actually indicate a real issue. It is quite likely that a user who hits this in practice will be very confused as the VF will be disabled without an obvious way to recover.

We already have robust malicious driver detection logic using actual hardware detection mechanisms that detect and prevent invalid device usage. Remove the logic since it's not a documented requirement and the behavior is not intuitive.

Fixes: 1071a8358a28 ("ice: Implement virtchnl commands for AVF support")
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-03-03  ice: convert VF storage to hash table with krefs and RCU  (Jacob Keller)  1 file, -7/+38
The ice driver stores VF structures in a simple array which is allocated once at the time of VF creation. The VF structures are then accessed from the array by their VF ID. The ID must be between 0 and the number of allocated VFs.

Multiple threads can access this table:

* .ndo operations such as .ndo_get_vf_cfg or .ndo_set_vf_trust
* interrupts, such as due to messages from the VF using the virtchnl communication
* processing such as device reset
* commands to add or remove VFs

The current implementation does not keep track of when all threads are done operating on a VF and can potentially result in use-after-free issues caused by one thread accessing a VF structure after it has been released when removing VFs. Some of these are prevented with various state flags and checks.

In addition, this structure is quite static and does not support a planned future where virtualization can be more dynamic. As we begin to look at supporting Scalable IOV with the ice driver (as opposed to just supporting Single Root IOV), this structure is not sufficient. In the future, VFs will be able to be added and removed individually and dynamically.

To allow for this, and to better protect against a whole class of use-after-free bugs, replace the VF storage with a combination of a hash table and krefs to reference track all of the accesses to VFs through the hash table.

A hash table still allows efficient look up of the VF given its ID, but also allows adding and removing VFs. It does not require contiguous VF IDs.

The use of krefs allows the cleanup of the VF memory to be delayed until after all threads have released their reference (by calling ice_put_vf).

To prevent corruption of the hash table, a combination of RCU and the mutex table_lock are used. Addition and removal from the hash table use the RCU-aware hash macros. This allows simple read-only lookups that iterate over the table to locate a single VF to remain fast using RCU. Accesses which modify the hash table, or which can't take RCU because they sleep, will hold the mutex lock.

By using this design, we have a stronger guarantee that the VF structure can't be released until after all threads are finished operating on it. We also pave the way for the more dynamic Scalable IOV implementation in the future.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
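The lookup pattern described above boils down to roughly the following sketch built on the kernel's hashtable, kref, and RCU helpers. The structure layout and helper names here are illustrative assumptions, not the exact driver code.

#include <linux/hashtable.h>
#include <linux/kref.h>
#include <linux/mutex.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/types.h>

/* Illustrative sketch only; field and helper names are assumptions. */
struct vfs_table {
        DECLARE_HASHTABLE(table, 8);    /* keyed by VF ID */
        struct mutex table_lock;        /* held by add/remove paths */
};

struct vf_entry {
        struct hlist_node entry;
        struct rcu_head rcu;
        struct kref refcnt;
        u16 vf_id;
};

/* Read-side lookup: RCU protects the table walk, the kref protects the object. */
static struct vf_entry *vf_get_by_id(struct vfs_table *vfs, u16 id)
{
        struct vf_entry *vf;

        rcu_read_lock();
        hash_for_each_possible_rcu(vfs->table, vf, entry, id) {
                if (vf->vf_id == id && kref_get_unless_zero(&vf->refcnt)) {
                        rcu_read_unlock();
                        return vf;
                }
        }
        rcu_read_unlock();
        return NULL;
}

static void vf_release(struct kref *ref)
{
        struct vf_entry *vf = container_of(ref, struct vf_entry, refcnt);

        /* Free only after an RCU grace period so readers still walking the
         * bucket cannot touch freed memory. */
        kfree_rcu(vf, rcu);
}

/* Every successful vf_get_by_id() is paired with a put. */
static void vf_put(struct vf_entry *vf)
{
        kref_put(&vf->refcnt, vf_release);
}

Removal would hold table_lock, call hash_del_rcu() on the entry, and then drop the creator's reference, so memory is reclaimed only once every outstanding ice_put_vf()-style reference is gone.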
2022-03-03  ice: introduce VF accessor functions  (Jacob Keller)  1 file, -0/+25
Before we switch the VF data structure storage mechanism to a hash, introduce new accessor functions to define the new interface.

* ice_get_vf_by_id is a function used to obtain a reference to a VF from the table based on its VF ID
* ice_has_vfs is used to quickly check if any VFs are configured
* ice_get_num_vfs is used to get an exact count of how many VFs are configured

We can drop the old ice_validate_vf_id function, since every caller was just going to immediately access the VF table to get a reference anyways. This way we simply use the single ice_get_vf_by_id to both validate the VF ID is within range and that there exists a VF with that ID.

This change enables us to more easily convert the codebase to the hash table since most callers now properly use the interface.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
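The caller-side shape implied by these accessors looks roughly like the sketch below; the exact signatures and error handling are assumptions for illustration, and struct ice_pf / struct ice_vf are the driver's own types.

/* Before: ice_validate_vf_id(pf, vf_id) followed by &pf->vf[vf_id].
 * After: one call both validates the ID and takes a reference. */
static int example_vf_op(struct ice_pf *pf, u16 vf_id)
{
        struct ice_vf *vf;
        int err = 0;

        vf = ice_get_vf_by_id(pf, vf_id);
        if (!vf)
                return -EINVAL;         /* no such VF */

        /* ... operate on the VF here ... */

        ice_put_vf(vf);                 /* drop the reference taken above */
        return err;
}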
2022-03-03  ice: factor VF variables to separate structure  (Jacob Keller)  1 file, -2/+13
We maintain a number of values for VFs within the ice_pf structure. This includes the VF table, the number of allocated VFs, the maximum number of supported SR-IOV VFs, the number of queue pairs per VF, the number of MSI-X vectors per VF, and a bitmap of the VFs with detected MDD events. We're about to add a few more variables to this list. Clean this up first by extracting these members out into a new ice_vfs structure defined in ice_virtchnl_pf.h Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
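Collected into one place, the members listed above suggest a container of roughly this shape; the field names and sizes are illustrative guesses, not the real ice_vfs layout.

struct ice_vfs_sketch {
        struct ice_vf *table;           /* VF storage (array today, hash table later) */
        u16 num_alloc;                  /* number of allocated VFs */
        u16 num_supported;              /* max SR-IOV VFs the device supports */
        u16 num_qps_per;                /* queue pairs per VF */
        u16 num_msix_per;               /* MSI-X vectors per VF */
        DECLARE_BITMAP(malvfs, 256);    /* VFs with detected MDD events; size is a placeholder */
};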
2022-03-03  ice: convert ice_for_each_vf to include VF entry iterator  (Jacob Keller)  1 file, -2/+14
The ice_for_each_vf macro is intended to be used to loop over all VFs. The current implementation relies on an iterator that is the index into the VF array in the PF structure. This forces all users to perform a look up themselves. This abstraction forces a lot of duplicate work on callers and leaks the interface implementation to the caller.

Replace this with an implementation that includes the VF pointer as the primary iterator. This version simplifies callers which just want to iterate over every VF, as they no longer need to perform their own lookup.

The "i" iterator value is replaced with a new unsigned int "bkt" parameter, as this will match the necessary interface for replacing the VF array with a hash table. For now, the bkt is the VF ID, but in the future it will simply be the hash bucket index. Document that it should not be treated as a VF ID.

This change aims to simplify switching from the array to a hash table. I considered alternative implementations such as an xarray but decided that the hash table was the simplest and most suitable implementation. I also looked at methods to hide the bkt iterator entirely, but I couldn't come up with a feasible solution that worked for hash table iterators.

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
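Conceptually the iterator changes shape as sketched below; these macro bodies are simplified stand-ins for the real ones, and "bkt" is an opaque iterator value, not a VF ID.

/* Old style: index-only iterator, every caller must look the VF up. */
#define ice_for_each_vf_old(pf, i) \
        for ((i) = 0; (i) < (pf)->num_alloc_vfs; (i)++)

/* New style: hands back the VF entry itself.  Today "bkt" happens to be
 * the array index; once the hash table lands it becomes the bucket. */
#define ice_for_each_vf_new(pf, bkt, vf) \
        for ((bkt) = 0, (vf) = (pf)->vf; \
             (bkt) < (pf)->num_alloc_vfs; \
             (bkt)++, (vf)++)

Callers then write ice_for_each_vf_new(pf, bkt, vf) { ... } and use vf directly, without indexing pf->vf themselves.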
2022-02-09  ice: Add support for VIRTCHNL_VF_OFFLOAD_VLAN_V2  (Brett Creeley)  1 file, -0/+8
Add support for the VF driver to be able to request VIRTCHNL_VF_OFFLOAD_VLAN_V2, negotiate its VLAN capabilities via VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS, add/delete VLAN filters, and enable/disable VLAN offloads.

VFs supporting VIRTCHNL_OFFLOAD_VLAN_V2 will be able to use the following virtchnl opcodes:

VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS
VIRTCHNL_OP_ADD_VLAN_V2
VIRTCHNL_OP_DEL_VLAN_V2
VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2
VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2
VIRTCHNL_OP_ENABLE_VLAN_INSERTION_V2
VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2

Legacy VF drivers may expect the initial VLAN stripping settings to be configured by the PF, so the PF initializes VLAN stripping based on the VIRTCHNL_OP_GET_VF_RESOURCES opcode. However, with VLAN support via VIRTCHNL_VF_OFFLOAD_VLAN_V2, this function is only expected to be used for VFs that only support VIRTCHNL_VF_OFFLOAD_VLAN, which will only be supported when a port VLAN is configured. Update the function based on the new expectations. Also, change the message when the PF can't enable/disable VLAN stripping to a dev_dbg() as this isn't fatal.

When a VF isn't in a port VLAN and it only supports VIRTCHNL_VF_OFFLOAD_VLAN when Double VLAN Mode (DVM) is enabled, then the PF needs to reject the VIRTCHNL_VF_OFFLOAD_VLAN capability and configure the VF in software only VLAN mode. To do this add the new function ice_vf_vsi_cfg_legacy_vlan_mode(), which updates the VF's inner and outer ice_vsi_vlan_ops functions and sets up software only VLAN mode.

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-02-09  ice: Add outer_vlan_ops and VSI specific VLAN ops implementations  (Brett Creeley)  1 file, -0/+6
Add a new outer_vlan_ops member to the ice_vsi structure as outer VLAN ops are only available when the device is in Double VLAN Mode (DVM). Depending on the VSI type, the requirements for what operations to use/allow differ.

By default all VSIs have unsupported inner and outer VSI VLAN ops. This implementation was chosen to prevent unexpected crashes due to null pointer dereferences. Instead, if a VSI calls an unsupported op, it will just return -EOPNOTSUPP.

Add implementations to support modifying outer VLAN fields for VSI context. This includes the ability to modify VLAN stripping, insertion, and the port VLAN based on the outer VLAN handling fields of the VSI context. These functions should only ever be used if DVM is enabled because that means the firmware supports the outer VLAN fields in the VSI context. If the device is in DVM, then always use the outer_vlan_ops, else use the vlan_ops since the device is in Single VLAN Mode (SVM).

Also, move adding the untagged VLAN 0 filter from ice_vsi_setup() to ice_vsi_vlan_setup() as the latter function is specific to the PF and all other VSI types that need an untagged VLAN 0 filter already do this in their specific flows. Without this change, Flow Director is failing to initialize because it does not implement any VSI VLAN ops.

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
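The "unsupported by default" idea reads roughly like this sketch; the ops structure and function names are illustrative, not the driver's exact ice_vsi_vlan_ops definition.

struct vsi_vlan_ops_sketch {
        int (*ena_stripping)(struct ice_vsi *vsi, u16 tpid);
        int (*dis_stripping)(struct ice_vsi *vsi);
};

static int vlan_op_unsupported(struct ice_vsi *vsi)
{
        /* Fail softly instead of crashing on a NULL op pointer. */
        return -EOPNOTSUPP;
}

static int vlan_op_unsupported_tpid(struct ice_vsi *vsi, u16 tpid)
{
        return -EOPNOTSUPP;
}

/* Every VSI starts out with these; supported ops are then filled in per
 * VSI type and per VLAN mode (inner ops in SVM, outer ops only in DVM). */
static const struct vsi_vlan_ops_sketch unsupported_vlan_ops = {
        .ena_stripping = vlan_op_unsupported_tpid,
        .dis_stripping = vlan_op_unsupported,
};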
2022-02-09  ice: Use the proto argument for VLAN ops  (Brett Creeley)  1 file, -1/+1
Currently the proto argument is unused. This is because the driver only supports 802.1Q VLAN filtering. This policy is enforced via netdev features that the driver sets up when configuring the netdev, so the proto argument won't ever be anything other than 802.1Q. However, this will allow for future iterations of the driver to seamlessly support 802.1ad filtering. Begin using the proto argument and extend the related structures to support its use.

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-02-09  ice: Refactor vf->port_vlan_info to use ice_vlan  (Brett Creeley)  1 file, -1/+2
The current vf->port_vlan_info variable is a packed u16 that contains the port VLAN ID and QoS/prio value. This is fine, but changes are incoming that allow for an 802.1ad port VLAN. Add flexibility by changing the vf->port_vlan_info member to be an ice_vlan structure. Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Gurucharan G <gurucharanx.g@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
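A minimal sketch of the idea, assuming the new structure simply keeps the pieces that used to be packed into one u16 as separate fields; the field names are guesses.

struct ice_vlan_sketch {
        u16 tpid;       /* ETH_P_8021Q today, ETH_P_8021AD for 802.1ad later */
        u16 vid;        /* VLAN ID, 0..4095 */
        u8 prio;        /* QoS/PCP bits, previously packed into the same u16 */
};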
2021-12-14  ice: refactor PTYPE validating  (Jeff Guo)  1 file, -0/+2
Since the capability of a PTYPE within a specific package could be negotiated by checking the HW bit map, it means that there's no need to maintain a different PTYPE list for each type of the package when parsing PTYPE. So refactor the PTYPE validating mechanism. Signed-off-by: Jeff Guo <jia.guo@intel.com> Tested-by: Tony Brelinski <tony.brelinski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-11-03  ice: Fix race conditions between virtchnl handling and VF ndo ops  (Brett Creeley)  1 file, -0/+5
The VF can be configured via the PF's ndo ops at the same time the PF is receiving/handling virtchnl messages. This has many issues, with one of them being the ndo op could be actively resetting a VF (i.e. resetting it to the default state and deleting/re-adding the VF's VSI) while a virtchnl message is being handled. The following error was seen because a VF ndo op was used to change a VF's trust setting while the VIRTCHNL_OP_CONFIG_VSI_QUEUES was ongoing:

[35274.192484] ice 0000:88:00.0: Failed to set LAN Tx queue context, error: ICE_ERR_PARAM
[35274.193074] ice 0000:88:00.0: VF 0 failed opcode 6, retval: -5
[35274.193640] iavf 0000:88:01.0: PF returned error -5 (IAVF_ERR_PARAM) to our request 6

Fix this by making sure the virtchnl handling and VF ndo ops that trigger VF resets cannot run concurrently. This is done by adding a struct mutex cfg_lock to each VF structure. For VF ndo ops, the mutex will be locked around the critical operations and VFR. Since the ndo ops will trigger a VFR, the virtchnl thread will use mutex_trylock(). This is done because if any other thread (i.e. VF ndo op) has the mutex, then that means the current VF message being handled is no longer valid, so just ignore it.

This issue can be seen using the following commands:

for i in {0..50}; do
        rmmod ice
        modprobe ice

        sleep 1

        echo 1 > /sys/class/net/ens785f0/device/sriov_numvfs
        echo 1 > /sys/class/net/ens785f1/device/sriov_numvfs

        ip link set ens785f1 vf 0 trust on
        ip link set ens785f0 vf 0 trust on

        sleep 2

        echo 0 > /sys/class/net/ens785f0/device/sriov_numvfs
        echo 0 > /sys/class/net/ens785f1/device/sriov_numvfs

        sleep 1

        echo 1 > /sys/class/net/ens785f0/device/sriov_numvfs
        echo 1 > /sys/class/net/ens785f1/device/sriov_numvfs

        ip link set ens785f1 vf 0 trust on
        ip link set ens785f0 vf 0 trust on
done

Fixes: 7c710869d64e ("ice: Add handlers for VF netdevice operations")
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
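The locking scheme described above amounts to the following sketch; the handler names are illustrative, but cfg_lock and the trylock-and-drop behaviour come straight from the description.

/* ndo path: owns the VF configuration and may trigger a VF reset (VFR). */
static int example_ndo_set_vf_trust(struct ice_vf *vf, bool trusted)
{
        mutex_lock(&vf->cfg_lock);
        /* ... update the trust setting and reset the VF ... */
        mutex_unlock(&vf->cfg_lock);
        return 0;
}

/* virtchnl path: if an ndo op holds the lock, the VF is being reset and
 * the message refers to state that is about to be torn down, so drop it. */
static void example_handle_vf_msg(struct ice_vf *vf, u32 v_opcode)
{
        if (!mutex_trylock(&vf->cfg_lock))
                return;

        /* ... process v_opcode ... */

        mutex_unlock(&vf->cfg_lock);
}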
2021-10-19  ice: Add support for VF rate limiting  (Brett Creeley)  1 file, -1/+14
Implement ndo_set_vf_rate to support setting of min_tx_rate and max_tx_rate; set the appropriate bandwidth in the scheduler for the node representing the specified VF VSI. Co-developed-by: Tarun Singh <tarun.k.singh@intel.com> Signed-off-by: Tarun Singh <tarun.k.singh@intel.com> Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-10-07  ice: add port representor ethtool ops and stats  (Wojciech Drewek)  1 file, -0/+14
Introduce the following ethtool operations for the VF's representor:

-get_drvinfo
-get_strings
-get_ethtool_stats
-get_sset_count
-get_link

In all cases, existing operations were used with minor changes which allow us to detect if the ethtool op was called for a representor. Only VF VSI stats will be available for the representor.

Implement ndo_get_stats64 for the port representor. This will update VF VSI stats and read them.

Signed-off-by: Wojciech Drewek <wojciech.drewek@intel.com>
Tested-by: Sandeep Penigalapati <sandeep.penigalapati@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-10-07  ice: allow process VF opcodes in different ways  (Michal Swiatkowski)  1 file, -0/+32
In switchdev mode the driver shouldn't add MAC, VLAN and promisc filters on iavf demand, but should return success so as not to break the normal iavf flow. Achieve that by creating a table of function pointers with the default functions used to parse iavf commands. While parsing an iavf command, call the correct function from the table instead of calling the function directly. When port representors are being created, change the functions in the table to new ones that behave correctly for the switchdev purpose (ignoring new filters). Change back to the default ops when representors are being removed.

Co-developed-by: Wojciech Drewek <wojciech.drewek@intel.com>
Signed-off-by: Wojciech Drewek <wojciech.drewek@intel.com>
Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Tested-by: Sandeep Penigalapati <sandeep.penigalapati@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-10-07  ice: introduce VF port representor  (Michal Swiatkowski)  1 file, -0/+4
A port representor is used to manage a VF from the host side. To allow this, each created representor registers a netdevice with a random hw address. A devlink port is also created for every representor. The port representor name is created based on the switch id, or managed by the devlink core if the devlink port was registered successfully.

Open and stop ndo ops are implemented to allow managing the VF link state. Link state is tracked in the VF struct.

Struct ice_netdev_priv is extended by a pointer to the representor. This is needed to get the correct representor from the netdev struct, mostly used in ndo calls.

Implement helper functions to check if a given netdev is a port representor netdev (ice_is_port_repr_netdev) and to get the representor from a netdev (ice_netdev_to_repr).

As the driver will mostly create or destroy port representors on all VFs instead of on a single one, write functions to add and remove a representor for each VF.

The representor struct contains a pointer to the source VSI, which is the VSI configured on the VF, a backpointer to the VF, a backpointer to the netdev, a q_vector pointer and metadata_dst which will be used in the data path.

Co-developed-by: Grzegorz Nitka <grzegorz.nitka@intel.com>
Signed-off-by: Grzegorz Nitka <grzegorz.nitka@intel.com>
Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Tested-by: Sandeep Penigalapati <sandeep.penigalapati@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-10-07  ice: Move devlink port to PF/VF struct  (Wojciech Drewek)  1 file, -0/+9
Keeping devlink port inside VSI data structure causes some issues. Since VF VSI is released during reset that means that we have to unregister devlink port and register it again every time reset is triggered. With the new changes in devlink API it might cause deadlock issues. After calling devlink_port_register/devlink_port_unregister devlink API is going to lock rtnl_mutex. It's an issue when VF reset is triggered in netlink operation context (like setting VF MAC address or VLAN), because rtnl_lock is already taken by netlink. Another call of rtnl_lock from devlink API results in dead-lock. By moving devlink port to PF/VF we avoid creating/destroying it during reset. Since this patch, devlink ports are created during ice_probe, destroyed during ice_remove for PF and created during ice_repr_add, destroyed during ice_repr_rem for VF. Signed-off-by: Wojciech Drewek <wojciech.drewek@intel.com> Tested-by: Sandeep Penigalapati <sandeep.penigalapati@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-06-07  ice: use static inline for dummy functions  (Jesse Brandeburg)  1 file, -10/+12
Trivial: The driver had previously attempted to use #define macros to make functions that have no use in certain configs disappear. Using static inlines instead allows for certain static checkers to process the code better, and results in no functional change. Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Tested-by: Tony Brelinski <tonyx.brelinski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
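The shape of the change is roughly the following; the function name is one declared in this header, but which stubs the patch actually touched is not listed in the log, so treat this as an illustration.

#ifdef CONFIG_PCI_IOV
void ice_process_vflr_event(struct ice_pf *pf);
#else /* !CONFIG_PCI_IOV */
/* Old approach: a macro that makes the call vanish entirely.
 *   #define ice_process_vflr_event(pf) do { } while (0)
 * New approach: a typed no-op that the compiler and static checkers
 * still see, with no functional difference. */
static inline void ice_process_vflr_event(struct ice_pf *pf) { }
#endif /* CONFIG_PCI_IOV */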
2021-06-07  ice: Save VF's MAC across reboot  (Brett Creeley)  1 file, -0/+1
If a VM reboots and/or VF driver is unloaded, its cached hardware MAC address (hw_lan_addr.addr) is cleared in some cases. If the VF is trusted, then the PF driver allows the VF to clear its old MAC address even if this MAC was configured by a host administrator. If the VF is untrusted, then the PF driver allows the VF to clear its old MAC address only if the host admin did not set it.

For the trusted VF case, this is unexpected and will cause issues because the host configured MAC (i.e. via XML) will be cleared on VM reboot. For the untrusted VF case, this is done to be consistent and it will allow the VF to keep the same MAC across VM reboot.

Fix this by introducing dev_lan_addr to the VF structure. This will be the VF's MAC address when it's up and running and in most cases will be the same as the hw_lan_addr. However, to address the VM reboot and unload/reload problem, the driver will never allow the hw_lan_addr to be cleared via VIRTCHNL_OP_DEL_ETH_ADDR. When the VF's MAC is changed, the dev_lan_addr and hw_lan_addr will always be updated with the same value.

The only ways the VF's MAC can change are the following:

- Set the VF's MAC administratively on the host via iproute2.
- If the VF is trusted and changes/sets its own MAC.
- If the VF is untrusted and the host has not set the MAC via iproute2.

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-06-07  ice: Manage VF's MAC address for both legacy and new cases  (Brett Creeley)  1 file, -1/+7
Currently there is no way for a VF driver to specify if it wants to change its hardware address. New bits are being added to virtchnl.h in struct virtchnl_ether_addr that allow for the VF to correctly communicate this information. However, legacy VF drivers that don't support the new virtchnl.h bits still need to be supported. Make a best effort attempt at saving the VF's primary/device address in the legacy case and depend on the VIRTCHNL_ETHER_ADDR_PRIMARY type for the new case.

Legacy case - If a unicast MAC is being added and the hw_lan_addr.addr is empty, then populate it. This assumes that the address is the VF's hardware address. If a unicast MAC is being added and the hw_lan_addr.addr is not empty, then cache it in the legacy_last_added_umac.addr. If a unicast MAC is being deleted and it matches the hw_lan_addr.addr, then zero the hw_lan_addr.addr. Also, if the legacy_last_added_umac.addr has not expired, copy the legacy_last_added_umac.addr into the hw_lan_addr.addr. This is done because we cannot guarantee the order of VIRTCHNL_OP_ADD_ETH_ADDR and VIRTCHNL_OP_DEL_ETH_ADDR.

New case - If a unicast MAC is being added and it's specified as VIRTCHNL_ETHER_ADDR_PRIMARY, then replace the current hw_lan_addr.addr. If a unicast MAC is being deleted and its type is specified as VIRTCHNL_ETHER_ADDR_PRIMARY, then zero the hw_lan_addr.addr.

Untrusted VFs - Only allow the above legacy/new changes to their hardware address if the PF has not set it administratively via iproute2.

Trusted VFs - Always allow the above legacy/new changes to their hardware address even if the PF has administratively set it via iproute2.

Also, change the variable dflt_lan_addr to hw_lan_addr to clearly represent the purpose of this variable, since its purpose is to act as a hardware programmed MAC address for the VF.

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-04-22  ice: Allow ignoring opcodes on specific VF  (Michal Swiatkowski)  1 file, -0/+1
Declare bitmap of allowed commands on VF. Initialize default opcodes list that should be always supported. Declare array of supported opcodes for each caps used in virtchnl code. Change allowed bitmap by setting or clearing corresponding bit to allowlist (bit set) or denylist (bit clear). Signed-off-by: Michal Swiatkowski <michal.swiatkowski@intel.com> Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
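The allowlist mechanism described above can be pictured as a per-VF bitmap keyed by virtchnl opcode, roughly as sketched here; names and the size constant are assumptions.

#include <linux/bitmap.h>

#define NUM_VIRTCHNL_OPS_SKETCH 64      /* illustrative upper bound */

struct vf_allowlist_sketch {
        DECLARE_BITMAP(opcodes_allowlist, NUM_VIRTCHNL_OPS_SKETCH);
};

static bool vf_is_opcode_allowed(struct vf_allowlist_sketch *vf, u32 opcode)
{
        return opcode < NUM_VIRTCHNL_OPS_SKETCH &&
               test_bit(opcode, vf->opcodes_allowlist);
}

/* Called as capabilities are negotiated: allowlist (set) or denylist
 * (clear) the opcodes tied to a given capability. */
static void vf_allow_opcode(struct vf_allowlist_sketch *vf, u32 opcode)
{
        set_bit(opcode, vf->opcodes_allowlist);
}

static void vf_deny_opcode(struct vf_allowlist_sketch *vf, u32 opcode)
{
        clear_bit(opcode, vf->opcodes_allowlist);
}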
2021-04-22  ice: warn about potentially malicious VFs  (Vignesh Sridhar)  1 file, -0/+12
Attempt to detect malicious VFs and, if suspected, log the information but keep going to allow the user to take any desired actions. Potentially malicious VFs are identified by checking if the VFs are transmitting too many messages via the PF-VF mailbox which could cause an overflow of this channel resulting in denial of service. This is done by creating a snapshot or static capture of the mailbox buffer which can be traversed and in which the messages sent by VFs are tracked. Co-developed-by: Yashaswini Raghuram Prathivadi Bhayankaram <yashaswini.raghuram.prathivadi.bhayankaram@intel.com> Signed-off-by: Yashaswini Raghuram Prathivadi Bhayankaram <yashaswini.raghuram.prathivadi.bhayankaram@intel.com> Co-developed-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com> Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com> Co-developed-by: Brett Creeley <brett.creeley@intel.com> Signed-off-by: Brett Creeley <brett.creeley@intel.com> Signed-off-by: Vignesh Sridhar <vignesh.sridhar@intel.com> Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-03-22  ice: Enable FDIR Configure for AVF  (Qi Zhang)  1 file, -0/+6
The virtual channel is going to be extended to support FDIR and RSS configuration from AVF. New data structures and OP codes will be added; this patch enables the FDIR part.

To support the above advanced AVF feature, we need to figure out what kind of data structure should be passed from VF to PF to describe an FDIR rule or RSS config rule. The common part of the requirement is we need a data structure to represent the input set selection of a rule's hash key.

An input set selection is a group of fields selected from one or more network protocol layers that could be identified as a specific flow. For example, select dst IP address from an IPv4 header combined with dst port from the TCP header as the input set for an IPv4/TCP flow.

The patch adds a new data structure virtchnl_proto_hdrs to abstract a network protocol headers group which is composed of layers of network protocol header (virtchnl_proto_hdr).

A protocol header contains a 32 bits mask (field_selector) to describe which fields are selected as input sets, as well as a header type (enum virtchnl_proto_hdr_type). Each bit is mapped to a field in enum virtchnl_proto_hdr_field guided by its header type.

+------------+-----------+------------------------------+
|            | Proto Hdr | Header Type A                |
|            |           +------------------------------+
|            |           | BIT 31 | ... | BIT 1 | BIT 0 |
|            |-----------+------------------------------+
| Proto Hdrs | Proto Hdr | Header Type B                |
|            |           +------------------------------+
|            |           | BIT 31 | ... | BIT 1 | BIT 0 |
|            |-----------+------------------------------+
|            | Proto Hdr | Header Type C                |
|            |           +------------------------------+
|            |           | BIT 31 | ... | BIT 1 | BIT 0 |
|            |-----------+------------------------------+
|            |                 ....                     |
+-------------------------------------------------------+

All fields in enum virtchnl_proto_hdr_fields are grouped with header type and the value of the first field of a header type is always 32 aligned.

enum proto_hdr_type {
        header_type_A = 0,
        header_type_B = 1,
        ....
};

enum proto_hdr_field {
        /* header type A */
        header_A_field_0 = 0,
        header_A_field_1 = 1,
        header_A_field_2 = 2,
        header_A_field_3 = 3,

        /* header type B */
        header_B_field_0 = 32, /* = header_type_B << 5 */
        header_B_field_1 = 33,
        header_B_field_2 = 34,
        header_B_field_3 = 35,
        ....
};

So we have:

proto_hdr_type = proto_hdr_field / 32
bit offset = proto_hdr_field % 32

To simplify the protocol header's operations, a couple of helper macros are added. For example, to select src IP and dst port as input set for an IPv4/UDP flow, we have:

struct virtchnl_proto_hdr hdr[2];

VIRTCHNL_SET_PROTO_HDR_TYPE(&hdr[0], IPV4)
VIRTCHNL_ADD_PROTO_HDR_FIELD(&hdr[0], IPV4, SRC)

VIRTCHNL_SET_PROTO_HDR_TYPE(&hdr[1], UDP)
VIRTCHNL_ADD_PROTO_HDR_FIELD(&hdr[1], UDP, DST)

The byte array is used to store the protocol header of a training package. The byte array must be in network order.

The patch adds virtual channel support for iAVF FDIR add/validate/delete filter. iAVF FDIR is Flow Director for Intel Adaptive Virtual Function, which can direct Ethernet packets to the queues of the Network Interface Card. Add/delete command is adding or deleting one rule for each virtual channel message, while validate command is just verifying if this rule is valid without any other operations.

To add or delete one rule, the driver needs to config TCAM and Profile, build training packets which contain the input set value, and send the training packets through the FDIR Tx queue. In addition, the driver needs to manage the software context to avoid adding duplicated rules, deleting non-existent rules, input set conflicts and other invalid cases.

NOTE: Supported patterns/actions and their parse functions are not included in this patch; they will be added in a separate one.

Signed-off-by: Jeff Guo <jia.guo@intel.com>
Signed-off-by: Yahui Cao <yahui.cao@intel.com>
Signed-off-by: Simei Su <simei.su@intel.com>
Signed-off-by: Beilei Xing <beilei.xing@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Tested-by: Chen Bo <BoX.C.Chen@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2021-03-22  ice: Add support for per VF ctrl VSI enabling  (Qi Zhang)  1 file, -0/+2
We are going to enable FDIR configuration for AVF through the virtual channel. The first step is to add helper functions to support control VSI setup. A control VSI will be allocated for a VF when AVF creates its first FDIR rule through ice_vf_ctrl_vsi_setup(). The patch will also allocate FDIR rule space for the VF's control VSI. If a VF asks for flow director rules, then those should come entirely from the best effort pool and not from the guaranteed pool. The patch allows a VF VSI to have space only in the best effort rules.

Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Signed-off-by: Yahui Cao <yahui.cao@intel.com>
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Tested-by: Chen Bo <BoX.C.Chen@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-08-01  ice: Allow 2 queue pairs per VF on SR-IOV initialization  (Brett Creeley)  1 file, -0/+1
Currently VFs are only allowed to get 16, 4, and 1 queue pair by default, which require 17, 5, and 2 MSI-X vectors respectively. This is because each VF needs a MSI-X per data queue and a MSI-X for its other interrupt. The calculation is based on the number of VFs created, MSI-X available, and queue pairs available at the time of VF creation. Unfortunately the values above exclude 2 queue pairs when only 3 MSI-X are available to each VF based on resource constraints. The current calculation would default to 2 MSI-X and 1 queue pair. This is a waste of resources, so fix this by allowing 2 queue pairs per VF when there are between 2 and 5 MSI-X available per VF. Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
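Put as code, the tiering described above works out to roughly the following; the thresholds are inferred from the commit text (one vector reserved for the non-queue interrupt, the rest mapping to data queues), so treat them as an approximation rather than the driver's exact constants.

/* Approximate queue-pair selection per VF given its MSI-X budget. */
static u8 vf_num_queue_pairs_sketch(u16 msix_per_vf)
{
        if (msix_per_vf >= 17)
                return 16;      /* 16 data queues + 1 other interrupt */
        if (msix_per_vf >= 5)
                return 4;       /* 4 data queues + 1 other interrupt */
        if (msix_per_vf >= 3)
                return 2;       /* the case this commit adds */
        return 1;               /* minimum: 1 data queue + 1 other interrupt */
}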
2020-07-29  ice: restore VF MSI-X state during PCI reset  (Nick Nunley)  1 file, -0/+2
During a PCI FLR the MSI-X Enable flag in the VF PCI MSI-X capability register will be cleared. This can lead to issues when a VF is assigned to a VM because in these cases the VF driver receives no indication of the PF PCI error/reset and additionally it is incapable of restoring the cleared flag in the hypervisor configuration space without fully reinitializing the driver interrupt functionality. Since the VF driver is unable to easily resolve this condition on its own, restore the VF MSI-X flag during the PF PCI reset handling. Signed-off-by: Nick Nunley <nicholas.d.nunley@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2020-05-31  ice: support adding 16 unicast/multicast filter on untrusted VF  (Paul Greenwalt)  1 file, -1/+4
Allow untrusted VF to add 16 unicast/multicast filters. VF uses 1 filter for the default/perm_addr/LAA MAC, 1 for broadcast, and 16 additional unicast/multicast filters. Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2020-05-23  ice: print Rx MDD auto reset message before VF reset  (Paul Greenwalt)  1 file, -0/+2
Rx MDD auto reset message was not being logged because logging occurred after the VF reset and the VF MDD data was reinitialized. Log the Rx MDD auto reset message before triggering the VF reset. Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2020-05-22  ice: cleanup vf_id signedness  (Jesse Brandeburg)  1 file, -1/+1
The vf_id variable is dealt with in the code in inconsistent ways of sign usage, preventing compilation with -Werror=sign-compare. Fix this problem in the code by always treating vf_id as unsigned, since there are no valid values of vf_id that are negative. Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2020-05-22  ice: Add VF promiscuous support  (Brett Creeley)  1 file, -0/+6
Implement promiscuous support for VF VSIs. Behaviour of promiscuous support is based on VF trust as well as the, introduced, vf-true-promisc flag. A trusted VF with vf-true-promisc disabled will be the default VSI, which means that all traffic without a matching destination MAC address in the device's internal switch will be forwarded to this VF VSI. A trusted VF with vf-true-promisc enabled will go into "true promiscuous mode". This amounts to the VF receiving all ingress and egress traffic that hits the device's internal switch. An untrusted VF will only receive traffic destined for that VF. The vf-true-promisc-support flag cannot be toggled while any VF is in promiscuous mode. This flag should be set prior to loading the iavf driver or spawning VF(s). Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2020-03-10  ice: allow bigger VFs  (Mitch Williams)  1 file, -9/+6
Unlike the XL710 series, 800-series hardware can allocate more than 4 MSI-X vectors per VF. This patch enables that functionality. We dynamically allocate vectors and queues depending on how many VFs are enabled. Allocating the maximum number of VFs replicates XL710 behavior with 4 queues and 4 vectors. But allocating a smaller number of VFs will give you 16 queues and 16 vectors. Signed-off-by: Mitch Williams <mitch.a.williams@intel.com> Signed-off-by: Brett Creeley <brett.creeley@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2020-02-22  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (David S. Miller)  1 file, -1/+2
Conflict resolution of ice_virtchnl_pf.c based upon work by Stephen Rothwell. Signed-off-by: David S. Miller <davem@davemloft.net>
2020-02-19  ice: update malicious driver detection event handling  (Paul Greenwalt)  1 file, -2/+18
Update the PF VFs MDD event message to rate limit once per second and report the total Rx|Tx event count. Add support to print pending MDD events that occur during the rate limit. The use of net_ratelimit did not allow for per VF Rx|Tx granularity.

Additional PF MDD log messages are guarded by netif_msg_[rx|tx]_err().

Since VF Rx MDD events disable the queue, add the ethtool private flag mdd-auto-reset-vf to configure VF reset to re-enable the queue.

Disable the anti-spoof detection interrupt to prevent spurious events during a function reset.

To avoid a race condition, do not make PF MDD register reads conditional on the global MDD result.

Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2020-02-19  ice: Wait for VF to be reset/ready before configuration  (Brett Creeley)  1 file, -1/+2
The configuration/command below is failing when the VF in the xml file is already bound to the host iavf driver.

pci_0000_af_0_0.xml:

<interface type='hostdev' managed='yes'>
  <source>
    <address type='pci' domain='0x0000' bus='0xaf' slot='0x0' function='0x0'/>
  </source>
  <mac address='00:de:ad:00:11:01'/>
</interface>

> virsh attach-device domain_name pci_0000_af_0_0.xml
error: Failed to attach device from pci_0000_af_0_0.xml
error: Cannot set interface MAC/vlanid to 00:de:ad:00:11:01/0 for ifname ens1f1 vf 0: Device or resource busy

This is failing because the VF has not been completely removed/reset after being unbound (via the virsh command above) from the host iavf driver and ice_set_vf_mac() checks if the VF is disabled before waiting for the reset to finish.

Fix this by waiting for the VF remove/reset process to happen before checking if the VF is disabled. Also, since many functions for VF administration on the PF were more or less calling the same 3 functions (ice_wait_on_vf_reset(), ice_is_vf_disabled(), and ice_check_vf_init()) move these into the helper function ice_check_vf_ready_for_cfg(). Then call this function in any flow that attempts to configure/query a VF from the PF.

Lastly, increase the maximum wait time in ice_wait_on_vf_reset() to 800ms, and modify/add the #define(s) that determine the wait time. This was done for robustness because in rare/stress cases VF removal can take a max of ~800ms and previously the wait was a max of ~300ms.

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
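The consolidated helper described above reduces to roughly this sketch; the three callees are named in the commit, but their exact signatures and the error codes used here are assumptions.

static int check_vf_ready_for_cfg_sketch(struct ice_pf *pf, struct ice_vf *vf)
{
        /* Wait out any in-flight VF removal/reset (up to the ~800 ms budget). */
        if (ice_wait_on_vf_reset(vf))
                return -EBUSY;

        if (ice_is_vf_disabled(vf))
                return -EINVAL;

        if (ice_check_vf_init(pf, vf))
                return -EBUSY;

        return 0;
}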
2020-02-16  ice: Fix and refactor Rx queue disable for VFs  (Brett Creeley)  1 file, -1/+0
Currently when a VF driver sends the PF a request to disable Rx queues we will disable them one at a time, even if the VF driver sent us a batch of queues to disable. This is causing issues where the Rx queue disable times out with LFC enabled. This can be improved by detecting when the VF is trying to disable all of its queues. Also remove the variable num_qs_ena from the ice_vf structure as it was only used to see if there were no Rx and no Tx queues active. Instead add a function that checks if both the vf->rxq_ena and vf->txq_ena bitmaps are empty. Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
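The "no queues active" check that replaces num_qs_ena is essentially a test that both enable bitmaps are empty, as in this sketch; the bitmap length constant is a placeholder, and rxq_ena/txq_ena are assumed to be kernel bitmaps on the VF structure.

#define MAX_QS_PER_VF_SKETCH 16         /* placeholder, not the driver's constant */

/* True when the VF has no Rx and no Tx queues enabled. */
static bool vf_has_no_qs_enabled_sketch(const struct ice_vf *vf)
{
        return bitmap_empty(vf->rxq_ena, MAX_QS_PER_VF_SKETCH) &&
               bitmap_empty(vf->txq_ena, MAX_QS_PER_VF_SKETCH);
}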
2020-02-16  ice: Handle LAN overflow event for VF queues  (Brett Creeley)  1 file, -0/+4
Currently we are not handling LAN overflow events. There can be cases where LAN overflow events occur on VF queues, especially with Link Flow Control (LFC) enabled on the controlling PF. In order to recover from the LAN overflow event caused by a VF we need to determine if the queue belongs to a VF and reset that VF accordingly. The struct ice_aqc_event_lan_overflow returns a copy of the GLDCB_RTCTQ register, which tells us what the queue index is in the global/device space. The global queue index needs to first be converted to a PF space queue index and then it can be used to find if a VF owns it. Signed-off-by: Brett Creeley <brett.creeley@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2020-02-16  ice: Fix Port VLAN priority bits  (Brett Creeley)  1 file, -5/+0
Currently when configuring a port VLAN for a VF we are only shifting the QoS bits by 12. This is incorrect. Fix this by getting rid of the ICE specific VLAN defines and use the kernel VLAN defines instead. Also, don't assign a value to vlanprio until the VLAN ID and QoS parameters have been validated. Also, there are many places we do (le16_to_cpu(vsi->info.pvid) & VLAN_VID_MASK). Instead do (vf->port_vlan_info & VLAN_VID_MASK) because we always save what's stored in vsi->info.pvid to vf->port_vlan_info in the CPU's endianness. Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
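With the kernel's own VLAN defines, the composition described above looks roughly like this sketch (validation first, then the VID in the low 12 bits and the QoS value at VLAN_PRIO_SHIFT, which is 13, not 12); the function and variable names are illustrative.

#include <linux/if_vlan.h>

static int build_port_vlan_sketch(u16 vlan_id, u8 qos, u16 *vlanprio)
{
        /* Validate before composing, as the commit describes. */
        if (vlan_id >= VLAN_N_VID || qos > 7)
                return -EINVAL;

        *vlanprio = vlan_id | ((u16)qos << VLAN_PRIO_SHIFT);
        return 0;
}

/* Reading the VID back then uses the same kernel mask everywhere:
 *   u16 vid = vf->port_vlan_info & VLAN_VID_MASK;
 */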
2020-02-16  ice: Refactor port vlan configuration for the VF  (Brett Creeley)  1 file, -1/+1
Currently ice_vsi_manage_pvid() calls ice_vsi_[set|kill]_pvid_fill_ctxt() when enabling/disabling a port VLAN on a VSI respectively. These two functions have some duplication so just move their unique pieces inline in ice_vsi_manage_pvid() and then the duplicate code can be reused for both the enabling/disabling paths. Before this patch the info.pvid field was not being written correctly via ice_vsi_kill_pvid_fill_ctxt() so it was being hard coded to 0 in ice_set_vf_port_vlan(). Fix this by setting the info.pvid field to 0 before calling ice_vsi_update() in ice_vsi_manage_pvid(). We currently use vf->port_vlan_id to keep track of the port VLAN ID and QoS, which is a bit misleading. Fix this by renaming it to vf->port_vlan_info. Also change the name of the argument for ice_vsi_manage_pvid() from vid to pvid_info. In ice_vsi_manage_pvid() only save the fields that were modified in the VSI properties structure on success instead of the entire thing. Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2020-01-04  ice: Add ice_for_each_vf() macro  (Brett Creeley)  1 file, -0/+3
Currently we do "for (i = 0; i < pf->num_alloc_vfs; i++)" all over the place. Many other places use macros to contain this repeated for loop, So create the macro ice_for_each_vf(pf, i) that does the same thing. There were a couple places we were using one loop variable and a VF iterator, which were changed to using a local variable within the ice_for_each_vf() macro. Also in ice_alloc_vfs() we were setting pf->num_alloc_vfs after doing "for (i = 0; i < num_alloc_vfs; i++)". Instead assign pf->num_alloc_vfs right after allocating memory for the pf->vf array. Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2020-01-04  ice: Fix VF spoofchk  (Brett Creeley)  1 file, -1/+0
There are many things wrong with the function ice_set_vf_spoofchk().

1. The VSI being modified is the PF VSI, not the VF VSI.
2. We are enabling Rx VLAN pruning instead of Tx VLAN anti-spoof.
3. The spoofchk setting for each VF is not initialized correctly or re-initialized correctly on reset.

To fix [1] we need to make sure we are modifying the VF VSI. This is done by using the vf->lan_vsi_idx to index into the PF's VSI array.

To fix [2] replace setting Rx VLAN pruning in ice_set_vf_spoofchk() with setting Tx VLAN anti-spoof.

To fix [3] we need to make sure the initial VSI settings match what is done in ice_set_vf_spoofchk() for spoofchk=on. Also make sure this also works for VF reset. This was done by modifying ice_vsi_init() to account for the current spoofchk state of the VF VSI.

Because of these changes, Tx VLAN anti-spoof needs to be removed from ice_cfg_vlan_pruning(). This is okay for the VF because this is now controlled from the admin enabling/disabling spoofchk. For the PF, Tx VLAN anti-spoof should not be set.

This change requires us to call ice_set_vf_spoofchk() when configuring promiscuous mode for the VF, which requires ice_set_vf_spoofchk() to move in order to prevent a forward declaration prototype.

Also, add VLAN 0 by default when allocating a VF since the PF is unaware if the guest OS is running the 8021q module. Without this, MDD events will trigger on untagged traffic because spoofcheck is enabled by default. Due to this change, ignore add/delete messages for VLAN 0 from VIRTCHNL since this is added/deleted during VF initialization/teardown respectively and should not be modified.

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-11-23  ice: implement VF stats NDO  (Jesse Brandeburg)  1 file, -0/+11
Implement the VF stats gathering via the kernel via ndo_get_vf_stats(). The driver will show per-VF stats in the output of the ip -s link show dev <PF> command. Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-11-08  ice: Check if VF is disabled for Opcode and other operations  (Akeem G Abodunrin)  1 file, -0/+1
This patch adds code to check if the PF or VF is disabled before honoring a mailbox message to configure the VF. If it is disabled, and the opcode is for resetting the VF, the PF driver simply tells the VF that all is set. In addition, if a reset is ongoing and the admin intends to configure the VF on the host, we can poll the VF enabling bit to make sure it is ready before continuing. If after ~250 milliseconds the VF is not in an active state, we bail out with an invalid error.

Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
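The polling step reads roughly as below; the state bit, poll interval, and return code are illustrative assumptions built around the ~250 ms figure from the commit.

#include <linux/delay.h>

static int wait_for_vf_active_sketch(struct ice_vf *vf)
{
        unsigned int i;

        /* ~250 ms total: 25 polls, 10 ms apart (granularity is a guess). */
        for (i = 0; i < 25; i++) {
                if (test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states))
                        return 0;
                msleep(10);
        }

        return -EINVAL;         /* VF never became ready */
}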
2019-08-27  ice: add support for virtchnl_queue_select.[tx|rx]_queues bitmap  (Paul Greenwalt)  1 file, -3/+9
The VF driver can call VIRTCHNL_OP_[ENABLE|DISABLE]_QUEUES separately for each queue. Add support for virtchnl_queue_select.[tx|rx]_queues bitmap which is used to indicate which queues to enable and disable. Add tracing of VF Tx/Rx per queue enable state to avoid enabling enabled queues and disabling disabled queues. Add total queues enabled count and clear ICE_VF_STATE_QS_ENA when count is zero. Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com> Signed-off-by: Peng Huang <peng.huang@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-08-27  ice: Don't clog kernel debug log with VF MDD events errors  (Akeem G Abodunrin)  1 file, -1/+1
In case of MDD events on a VF, don't clog the kernel log with unlimited VF MDD event messages like "VF 0 has had 1018 MDD events since last boot". Limit the event log messages to 30, based on observations from experimentation with sending a malicious packet once and the number of events reported before the device stopped observing MDD events. Also remove the defunct macro "ICE_DFLT_NUM_MDD_EVENTS_ALLOWED" for tracking the number of MDD events allowed before disabling the interface...

Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-08-21  ice: Move VF resources definition to SR-IOV specific file  (Akeem G Abodunrin)  1 file, -0/+13
In order to use some of the VF resources definition in the SR-IOV specific virtchnl header file, this patch moves applicable code to ice_virtchnl_pf.h file accordingly... and they should have been defined in the destination file originally. Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-08-21  ice: Reduce wait times during VF bringup/reset  (Brett Creeley)  1 file, -0/+4
Currently there are a couple places where the VF is waiting too long when checking the status of registers. This is causing the AVF driver to spin for longer than necessary in the __IAVF_STARTUP state. Sometimes it causes the AVF to go into the __IAVF_COMM_FAILED, which may retrigger the __IAVF_STARTUP state. Try to reduce the chance of this happening by removing unnecessary wait times in VF bringup/resets. Signed-off-by: Brett Creeley <brett.creeley@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-08-21  ice: update GLINT_DYN_CTL and GLINT_VECT2FUNC register access  (Paul Greenwalt)  1 file, -1/+2
Register access for GLINT_DYN_CTL and GLINT_VECT2FUNC should be within the PF space and not the absolute device space. Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-07-31  ice: Remove flag to track VF interrupt status  (Akeem G Abodunrin)  1 file, -5/+0
As a result of refactoring of VF VSIs interrupts code, there is no need to track its configuration status again with ICE_VF_STATE_CFG_INTR flag - In fact, it is not being checked anywhere in the code right now, so this patch removes the dead code as applicable to the flag. Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2019-05-29  ice: Refactor interrupt tracking  (Brett Creeley)  1 file, -0/+8
Currently we have two MSI-x (IRQ) trackers, one for OS requested MSI-x entries (sw_irq_tracker) and one for hardware MSI-x vectors (hw_irq_tracker). Generally the sw_irq_tracker has fewer entries than the hw_irq_tracker because the hw_irq_tracker has entries equal to the max allowed MSI-x per PF and the sw_irq_tracker is mainly the minimum (non SR-IOV portion of the vectors, kernel granted IRQs). All of the non SR-IOV portions of the driver (i.e. LAN queues, RDMA queues, OICR, etc.) take at least one of each type of tracker resource. SR-IOV only grabs entries from the hw_irq_tracker.

There are a few issues with this approach that can be seen when doing any kind of device reconfiguration (i.e. ethtool -L, SR-IOV, etc.). One of them being, any time the driver creates an ice_q_vector and associates it to a LAN queue pair it will grab and use one entry from the hw_irq_tracker and one from the sw_irq_tracker. If the indices on these do not match it will cause a Tx timeout, which will cause a reset and then the indices will match up again and traffic will resume. The mismatched indices come from the trackers not being the same size and/or the search_hint in the two trackers not being equal.

Another reason for the refactor is the co-existence of features with SR-IOV. If SR-IOV is enabled and the interrupts are taken from the end of the sw_irq_tracker then other features can no longer use this space because the hardware has now given the remaining interrupts to SR-IOV.

This patch reworks how we track MSI-x vectors by removing the hw_irq_tracker completely and instead MSI-x resources needed for SR-IOV are determined all at once instead of per VF. This can be done because when creating VFs we know how many are wanted and how many MSI-x vectors each VF needs. This also allows us to start using MSI-x resources from the end of the PF's allowed MSI-x vectors so we are less likely to use entries needed for other features (i.e. RDMA, L2 Offload, etc).

This patch also reworks the ice_res_tracker structure by removing the search_hint and adding a new member - "end". Instead of having a search_hint we will always search from 0. The new member, "end", will be used to manipulate the end of the ice_res_tracker (specifically sw_irq_tracker) during runtime based on MSI-x vectors needed by SR-IOV. In the normal case, the end of ice_res_tracker will be equal to the ice_res_tracker's num_entries.

The sriov_base_vector member was added to the PF structure. It is used to represent the starting MSI-x index of all the needed MSI-x vectors for all SR-IOV VFs. Depending on how many MSI-x are needed, SR-IOV may have to take resources from the sw_irq_tracker. This is done by setting the sw_irq_tracker->end equal to the pf->sriov_base_vector. When all SR-IOV VFs are removed then the sw_irq_tracker->end is reset back to sw_irq_tracker->num_entries.

The sriov_base_vector, along with the VF's number of MSI-x (pf->num_vf_msix), vf_id, and the base MSI-x index on the PF (pf->hw.func_caps.common_cap.msix_vector_first_id), is used to calculate the first HW absolute MSI-x index for each VF, which is used to write to the VPINT_ALLOC[_PCI] and GLINT_VECT2FUNC registers to program the VFs MSI-x PCI configuration bits. Also, the sriov_base_vector is used along with VF's num_vf_msix, vf_id, and q_vector->v_idx to determine the MSI-x register index (used for writing to GLINT_DYN_CTL) within the PF's space.

Interrupt changes removed any references to hw_base_vector, hw_oicr_idx, and hw_irq_tracker. Only sw_base_vector, sw_oicr_idx, and sw_irq_tracker variables remain. Change all of these by removing the "sw_" prefix to help avoid confusion with these variables and their use.

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
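As a rough sketch of the index arithmetic spelled out above, combining the quantities exactly as they are named in the text; the real driver may fold in additional fixed offsets, so this is an approximation, not the actual helpers.

/* First device-absolute MSI-x index owned by a given VF. */
static u32 vf_first_abs_msix_sketch(struct ice_pf *pf, u16 vf_id)
{
        return pf->hw.func_caps.common_cap.msix_vector_first_id +
               pf->sriov_base_vector + vf_id * pf->num_vf_msix;
}

/* PF-space MSI-x index for one of the VF's q_vectors (GLINT_DYN_CTL). */
static u32 vf_qvector_pf_msix_sketch(struct ice_pf *pf, u16 vf_id, u16 v_idx)
{
        return pf->sriov_base_vector + vf_id * pf->num_vf_msix + v_idx;
}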