Merge tag 'net-6.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from netfilter.

  Current release - regressions:

   - sched: act_pedit: free pedit keys on bail from offset check

  Current release - new code bugs:

   - pds_core:
      - Kconfig fixes (DEBUGFS and AUXILIARY_BUS)
      - fix mutex double unlock in error path

  Previous releases - regressions:

   - sched: cls_api: remove block_cb from driver_list before freeing

   - nf_tables: fix ct untracked match breakage

   - eth: mtk_eth_soc: drop generic vlan rx offload

   - sched: flower: fix error handler on replace

  Previous releases - always broken:

   - tcp: fix skb_copy_ubufs() vs BIG TCP

   - ipv6: fix skb hash for some RST packets

   - af_packet: don't send zero-byte data in packet_sendmsg_spkt()

   - rxrpc: timeout handling fixes after moving client call connection
     to the I/O thread

   - ixgbe: fix panic during XDP_TX with > 64 CPUs

   - igc: RMW the SRRCTL register to prevent losing timestamp config

   - dsa: mt7530: fix corrupt frames using TRGMII on 40 MHz XTAL MT7621

   - r8152:
      - fix flow control issue of RTL8156A
      - fix the poor throughput for 2.5G devices
      - move setting r8153b_rx_agg_chg_indicate() to fix coalescing
      - enable autosuspend

   - ncsi: clear Tx enable mode when handling a Config required AEN

   - octeontx2-pf: macsec: fixes for CN10KB ASIC rev

  Misc:

   - 9p: remove INET dependency"

* tag 'net-6.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (69 commits)
  net: bcmgenet: Remove phy_stop() from bcmgenet_netif_stop()
  pds_core: fix mutex double unlock in error path
  net/sched: flower: fix error handler on replace
  Revert "net/sched: flower: Fix wrong handle assignment during filter change"
  net/sched: flower: fix filter idr initialization
  net: fec: correct the counting of XDP sent frames
  bonding: add xdp_features support
  net: enetc: check the index of the SFI rather than the handle
  sfc: Add back mailing list
  virtio_net: suppress cpu stall when free_unused_bufs
  ice: block LAN in case of VF to VF offload
  net: dsa: mt7530: fix network connectivity with multiple CPU ports
  net: dsa: mt7530: fix corrupt frames using trgmii on 40 MHz XTAL MT7621
  9p: Remove INET dependency
  netfilter: nf_tables: fix ct untracked match breakage
  af_packet: Don't send zero-byte data in packet_sendmsg_spkt().
  igc: read before write to SRRCTL register
  pds_core: add AUXILIARY_BUS and NET_DEVLINK to Kconfig
  pds_core: remove CONFIG_DEBUG_FS from makefile
  ionic: catch failure from devlink_alloc
  ...
Merged by: Linus Torvalds
Date: 2023-05-05 19:12:01 -07:00
Commit: ed23734c23
84 changed files with 734 additions and 406 deletions

--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19059,6 +19059,7 @@ SFC NETWORK DRIVER
 M:	Edward Cree <ecree.xilinx@gmail.com>
 M:	Martin Habets <habetsm.xilinx@gmail.com>
 L:	netdev@vger.kernel.org
+L:	linux-net-drivers@amd.com
 S:	Supported
 F:	Documentation/networking/devlink/sfc.rst
 F:	drivers/net/ethernet/sfc/

--- a/drivers/isdn/mISDN/dsp_cmx.c
+++ b/drivers/isdn/mISDN/dsp_cmx.c
@@ -141,17 +141,6 @@
 /*#define CMX_DELAY_DEBUG	* gives rx-buffer delay overview */
 /*#define CMX_TX_DEBUG	* massive read/write on tx-buffer with content */
 
-static inline int
-count_list_member(struct list_head *head)
-{
-	int cnt = 0;
-	struct list_head *m;
-
-	list_for_each(m, head)
-		cnt++;
-	return cnt;
-}
-
 /*
  * debug cmx memory structure
  */
@@ -1672,7 +1661,7 @@ dsp_cmx_send(void *arg)
 		mustmix = 0;
 		members = 0;
 		if (conf) {
-			members = count_list_member(&conf->mlist);
+			members = list_count_nodes(&conf->mlist);
 #ifdef CMX_CONF_DEBUG
 			if (conf->software && members > 1)
 #else
@@ -1695,7 +1684,7 @@ dsp_cmx_send(void *arg)
 	/* loop all members that require conference mixing */
 	list_for_each_entry(conf, &conf_ilist, list) {
 		/* count members and check hardware */
-		members = count_list_member(&conf->mlist);
+		members = list_count_nodes(&conf->mlist);
 #ifdef CMX_CONF_DEBUG
 		if (conf->software && members > 1) {
 #else
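
Note on the change above: count_list_member() duplicated what the generic list.h helper list_count_nodes() now provides, so the driver switches to the helper. As a minimal userspace sketch of what that helper does (simplified struct list_head, not the kernel implementation):

    #include <stddef.h>
    #include <stdio.h>

    struct list_head { struct list_head *next, *prev; };

    /* count entries in a circular doubly-linked list, as list_count_nodes() does */
    static size_t list_count_nodes(const struct list_head *head)
    {
        const struct list_head *pos;
        size_t count = 0;

        for (pos = head->next; pos != head; pos = pos->next)
            count++;
        return count;
    }

    int main(void)
    {
        struct list_head head = { &head, &head };
        struct list_head a, b;

        /* insert two nodes right after head */
        a.next = head.next; a.prev = &head; head.next = &a; a.next->prev = &a;
        b.next = head.next; b.prev = &head; head.next = &b; b.next->prev = &b;

        printf("members = %zu\n", list_count_nodes(&head)); /* prints 2 */
        return 0;
    }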

--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -1789,6 +1789,26 @@ static void bond_ether_setup(struct net_device *bond_dev)
 	bond_dev->priv_flags &= ~IFF_TX_SKB_SHARING;
 }
 
+void bond_xdp_set_features(struct net_device *bond_dev)
+{
+	struct bonding *bond = netdev_priv(bond_dev);
+	xdp_features_t val = NETDEV_XDP_ACT_MASK;
+	struct list_head *iter;
+	struct slave *slave;
+
+	ASSERT_RTNL();
+
+	if (!bond_xdp_check(bond)) {
+		xdp_clear_features_flag(bond_dev);
+		return;
+	}
+
+	bond_for_each_slave(bond, slave, iter)
+		val &= slave->dev->xdp_features;
+
+	xdp_set_features_flag(bond_dev, val);
+}
+
 /* enslave device <slave> to bond device <master> */
 int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
 		 struct netlink_ext_ack *extack)
@@ -2236,6 +2256,8 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
 			bpf_prog_inc(bond->xdp_prog);
 	}
 
+	bond_xdp_set_features(bond_dev);
+
 	slave_info(bond_dev, slave_dev, "Enslaving as %s interface with %s link\n",
 		   bond_is_active_slave(new_slave) ? "an active" : "a backup",
 		   new_slave->link != BOND_LINK_DOWN ? "an up" : "a down");
@@ -2483,6 +2505,7 @@ static int __bond_release_one(struct net_device *bond_dev,
 	if (!netif_is_bond_master(slave_dev))
 		slave_dev->priv_flags &= ~IFF_BONDING;
 
+	bond_xdp_set_features(bond_dev);
 	kobject_put(&slave->kobj);
 
 	return 0;
@@ -3930,6 +3953,9 @@ static int bond_slave_netdev_event(unsigned long event,
 		/* Propagate to master device */
 		call_netdevice_notifiers(event, slave->bond->dev);
 		break;
+	case NETDEV_XDP_FEAT_CHANGE:
+		bond_xdp_set_features(bond_dev);
+		break;
 	default:
 		break;
 	}
@@ -5874,6 +5900,9 @@ void bond_setup(struct net_device *bond_dev)
 	if (BOND_MODE(bond) == BOND_MODE_ACTIVEBACKUP)
 		bond_dev->features |= BOND_XFRM_FEATURES;
 #endif /* CONFIG_XFRM_OFFLOAD */
+
+	if (bond_xdp_check(bond))
+		bond_dev->xdp_features = NETDEV_XDP_ACT_MASK;
 }
 
 /* Destroy a bonding device.
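
Note on bond_xdp_set_features() above: the bond can only advertise an XDP feature that every slave supports, so the per-slave feature sets are intersected with a bitwise AND, starting from the full mask. A hedged userspace sketch of that intersection (the feature bits here are made up, not the kernel's NETDEV_XDP_ACT_* values):

    #include <stdint.h>
    #include <stdio.h>

    #define XDP_ACT_BASIC    (1u << 0)  /* hypothetical feature bits, not the */
    #define XDP_ACT_REDIRECT (1u << 1)  /* kernel's NETDEV_XDP_ACT_* values   */
    #define XDP_ACT_XSK      (1u << 2)
    #define XDP_ACT_MASK     (XDP_ACT_BASIC | XDP_ACT_REDIRECT | XDP_ACT_XSK)

    int main(void)
    {
        uint32_t slave_feats[2] = {
            XDP_ACT_BASIC | XDP_ACT_REDIRECT | XDP_ACT_XSK,
            XDP_ACT_BASIC | XDP_ACT_REDIRECT,       /* this slave lacks XSK */
        };
        uint32_t val = XDP_ACT_MASK;    /* start from "everything supported" */
        int i;

        for (i = 0; i < 2; i++)
            val &= slave_feats[i];      /* keep only what every slave has */

        printf("bond advertises 0x%x\n", val);  /* 0x3: BASIC | REDIRECT */
        return 0;
    }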

--- a/drivers/net/bonding/bond_options.c
+++ b/drivers/net/bonding/bond_options.c
@@ -877,6 +877,8 @@ static int bond_option_mode_set(struct bonding *bond,
 		netdev_update_features(bond->dev);
 	}
 
+	bond_xdp_set_features(bond->dev);
+
 	return 0;
 }

--- a/drivers/net/dsa/mt7530.c
+++ b/drivers/net/dsa/mt7530.c
@@ -426,9 +426,9 @@ mt7530_pad_clk_setup(struct dsa_switch *ds, phy_interface_t interface)
 	else
 		ssc_delta = 0x87;
 	if (priv->id == ID_MT7621) {
-		/* PLL frequency: 150MHz: 1.2GBit */
+		/* PLL frequency: 125MHz: 1.0GBit */
 		if (xtal == HWTRAP_XTAL_40MHZ)
-			ncpo1 = 0x0780;
+			ncpo1 = 0x0640;
 		if (xtal == HWTRAP_XTAL_25MHZ)
 			ncpo1 = 0x0a00;
 	} else { /* PLL frequency: 250MHz: 2.0Gbit */
@@ -1002,9 +1002,9 @@ mt753x_cpu_port_enable(struct dsa_switch *ds, int port)
 	mt7530_write(priv, MT7530_PVC_P(port),
 		     PORT_SPEC_TAG);
 
-	/* Disable flooding by default */
-	mt7530_rmw(priv, MT7530_MFC, BC_FFP_MASK | UNM_FFP_MASK | UNU_FFP_MASK,
-		   BC_FFP(BIT(port)) | UNM_FFP(BIT(port)) | UNU_FFP(BIT(port)));
+	/* Enable flooding on the CPU port */
+	mt7530_set(priv, MT7530_MFC, BC_FFP(BIT(port)) | UNM_FFP(BIT(port)) |
+		   UNU_FFP(BIT(port)));
 
 	/* Set CPU port number */
 	if (priv->id == ID_MT7621)
@@ -2367,6 +2367,10 @@ mt7531_setup_common(struct dsa_switch *ds)
 	/* Enable and reset MIB counters */
 	mt7530_mib_reset(ds);
 
+	/* Disable flooding on all ports */
+	mt7530_clear(priv, MT7530_MFC, BC_FFP_MASK | UNM_FFP_MASK |
+		     UNU_FFP_MASK);
+
 	for (i = 0; i < MT7530_NUM_PORTS; i++) {
 		/* Disable forwarding by default on all ports */
 		mt7530_rmw(priv, MT7530_PCR_P(i), PCR_MATRIX_MASK,
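
Note on the ncpo1 change above: from the two data points in this hunk (0x0780 for the old 150 MHz comment, 0x0640 for the new 125 MHz one, both on the 40 MHz XTAL), the register appears to scale at 12.8 codes per MHz, and 125 MHz is the PLL rate the new comment ties to the 1.0 Gbit TRGMII rate. A quick check of that inferred ratio:

    #include <stdio.h>

    int main(void)
    {
        /* data points taken from the diff above; the 12.8 ratio is inferred */
        int old_ncpo1 = 0x0780, new_ncpo1 = 0x0640;

        printf("old: %d -> %.1f MHz\n", old_ncpo1, old_ncpo1 / 12.8); /* 150.0 */
        printf("new: %d -> %.1f MHz\n", new_ncpo1, new_ncpo1 / 12.8); /* 125.0 */
        return 0;
    }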

--- a/drivers/net/dsa/mv88e6xxx/chip.c
+++ b/drivers/net/dsa/mv88e6xxx/chip.c
@@ -5194,6 +5194,7 @@ static const struct mv88e6xxx_ops mv88e6321_ops = {
 	.set_cpu_port = mv88e6095_g1_set_cpu_port,
 	.set_egress_port = mv88e6095_g1_set_egress_port,
 	.watchdog_ops = &mv88e6390_watchdog_ops,
+	.mgmt_rsvd2cpu = mv88e6352_g2_mgmt_rsvd2cpu,
 	.reset = mv88e6352_g1_reset,
 	.vtu_getnext = mv88e6185_g1_vtu_getnext,
 	.vtu_loadpurge = mv88e6185_g1_vtu_loadpurge,

--- a/drivers/net/ethernet/amd/Kconfig
+++ b/drivers/net/ethernet/amd/Kconfig
@@ -189,6 +189,8 @@ config AMD_XGBE_HAVE_ECC
 config PDS_CORE
 	tristate "AMD/Pensando Data Systems Core Device Support"
 	depends on 64BIT && PCI
+	select AUXILIARY_BUS
+	select NET_DEVLINK
 	help
 	  This enables the support for the AMD/Pensando Core device family of
 	  adapters.  More specific information on this driver can be

--- a/drivers/net/ethernet/amd/pds_core/Makefile
+++ b/drivers/net/ethernet/amd/pds_core/Makefile
@@ -9,6 +9,5 @@ pds_core-y := main.o \
 	      dev.o \
 	      adminq.o \
 	      core.o \
+	      debugfs.o \
 	      fw.o
-
-pds_core-$(CONFIG_DEBUG_FS) += debugfs.o

--- a/drivers/net/ethernet/amd/pds_core/main.c
+++ b/drivers/net/ethernet/amd/pds_core/main.c
@@ -244,11 +244,16 @@ static int pdsc_init_pf(struct pdsc *pdsc)
 	set_bit(PDSC_S_FW_DEAD, &pdsc->state);
 
 	err = pdsc_setup(pdsc, PDSC_SETUP_INIT);
-	if (err)
+	if (err) {
+		mutex_unlock(&pdsc->config_lock);
 		goto err_out_unmap_bars;
+	}
 
 	err = pdsc_start(pdsc);
-	if (err)
+	if (err) {
+		mutex_unlock(&pdsc->config_lock);
 		goto err_out_teardown;
+	}
 
 	mutex_unlock(&pdsc->config_lock);
@@ -257,13 +262,15 @@ static int pdsc_init_pf(struct pdsc *pdsc)
 	err = devl_params_register(dl, pdsc_dl_params,
 				   ARRAY_SIZE(pdsc_dl_params));
 	if (err) {
+		devl_unlock(dl);
 		dev_warn(pdsc->dev, "Failed to register devlink params: %pe\n",
 			 ERR_PTR(err));
-		goto err_out_unlock_dl;
+		goto err_out_stop;
 	}
 
 	hr = devl_health_reporter_create(dl, &pdsc_fw_reporter_ops, 0, pdsc);
 	if (IS_ERR(hr)) {
+		devl_unlock(dl);
 		dev_warn(pdsc->dev, "Failed to create fw reporter: %pe\n", hr);
 		err = PTR_ERR(hr);
 		goto err_out_unreg_params;
@@ -279,15 +286,13 @@ static int pdsc_init_pf(struct pdsc *pdsc)
 	return 0;
 
 err_out_unreg_params:
-	devl_params_unregister(dl, pdsc_dl_params,
-			       ARRAY_SIZE(pdsc_dl_params));
-err_out_unlock_dl:
-	devl_unlock(dl);
+	devlink_params_unregister(dl, pdsc_dl_params,
+				  ARRAY_SIZE(pdsc_dl_params));
+err_out_stop:
 	pdsc_stop(pdsc);
 err_out_teardown:
 	pdsc_teardown(pdsc, PDSC_TEARDOWN_REMOVING);
 err_out_unmap_bars:
-	mutex_unlock(&pdsc->config_lock);
 	del_timer_sync(&pdsc->wdtimer);
 	if (pdsc->wq)
 		destroy_workqueue(pdsc->wq);
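
Note on the locking fix above: the shared err_out_unmap_bars label used to drop config_lock unconditionally, so paths that had already released it unlocked twice. The fix unlocks at each early exit and keeps the error labels lock-free. A minimal pthread sketch of the same pattern (names hypothetical):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t config_lock = PTHREAD_MUTEX_INITIALIZER;

    static int setup_step(void)  { return -1; }  /* pretend setup fails */
    static void unmap_bars(void) { puts("unwind: unmap bars"); }

    static int init_pf(void)
    {
        int err;

        pthread_mutex_lock(&config_lock);
        err = setup_step();
        if (err) {
            /* unlock here, so the error label never touches the mutex */
            pthread_mutex_unlock(&config_lock);
            goto err_out_unmap_bars;
        }
        pthread_mutex_unlock(&config_lock);
        return 0;

    err_out_unmap_bars:
        unmap_bars();
        return err;
    }

    int main(void) { return init_pf() ? 1 : 0; }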

--- a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+++ b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
@@ -379,6 +379,7 @@ static void aq_pci_shutdown(struct pci_dev *pdev)
 	}
 }
 
+#ifdef CONFIG_PM
 static int aq_suspend_common(struct device *dev)
 {
 	struct aq_nic_s *nic = pci_get_drvdata(to_pci_dev(dev));
@@ -463,6 +464,7 @@ static const struct dev_pm_ops aq_pm_ops = {
 	.restore = aq_pm_resume_restore,
 	.thaw = aq_pm_thaw,
 };
+#endif
 
 static struct pci_driver aq_pci_ops = {
 	.name = AQ_CFG_DRV_NAME,

--- a/drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2_utils_fw.c
+++ b/drivers/net/ethernet/aquantia/atlantic/hw_atl2/hw_atl2_utils_fw.c
@@ -336,7 +336,7 @@ static int aq_a2_fw_get_mac_permanent(struct aq_hw_s *self, u8 *mac)
 static void aq_a2_fill_a0_stats(struct aq_hw_s *self,
 				struct statistics_s *stats)
 {
-	struct hw_atl2_priv *priv = (struct hw_atl2_priv *)self->priv;
+	struct hw_atl2_priv *priv = self->priv;
 	struct aq_stats_s *cs = &self->curr_stats;
 	struct aq_stats_s curr_stats = *cs;
 	bool corrupted_stats = false;
@@ -378,7 +378,7 @@ do { \
 static void aq_a2_fill_b0_stats(struct aq_hw_s *self,
 				struct statistics_s *stats)
 {
-	struct hw_atl2_priv *priv = (struct hw_atl2_priv *)self->priv;
+	struct hw_atl2_priv *priv = self->priv;
 	struct aq_stats_s *cs = &self->curr_stats;
 	struct aq_stats_s curr_stats = *cs;
 	bool corrupted_stats = false;

--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
@@ -3465,7 +3465,6 @@ static void bcmgenet_netif_stop(struct net_device *dev)
 
 	/* Disable MAC transmit. TX DMA disabled must be done before this */
 	umac_enable_set(priv, CMD_TX_EN, false);
 
-	phy_stop(dev->phydev);
 	bcmgenet_disable_rx_napi(priv);
 	bcmgenet_intr_disable(priv);

--- a/drivers/net/ethernet/freescale/enetc/enetc_qos.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
@@ -1247,7 +1247,7 @@ static int enetc_psfp_parse_clsflower(struct enetc_ndev_priv *priv,
 		int index;
 
 		index = enetc_get_free_index(priv);
-		if (sfi->handle < 0) {
+		if (index < 0) {
 			NL_SET_ERR_MSG_MOD(extack, "No Stream Filter resource!");
 			err = -ENOSPC;
 			goto free_fmi;

--- a/drivers/net/ethernet/freescale/fec_main.c
+++ b/drivers/net/ethernet/freescale/fec_main.c
@@ -3798,7 +3798,8 @@ static int fec_enet_txq_xmit_frame(struct fec_enet_private *fep,
 	entries_free = fec_enet_get_free_txdesc_num(txq);
 	if (entries_free < MAX_SKB_FRAGS + 1) {
 		netdev_err(fep->netdev, "NOT enough BD for SG!\n");
-		return NETDEV_TX_OK;
+		xdp_return_frame(frame);
+		return NETDEV_TX_BUSY;
 	}
 
 	/* Fill in a Tx ring entry */
@@ -3856,6 +3857,7 @@ static int fec_enet_xdp_xmit(struct net_device *dev,
 	struct fec_enet_private *fep = netdev_priv(dev);
 	struct fec_enet_priv_tx_q *txq;
 	int cpu = smp_processor_id();
+	unsigned int sent_frames = 0;
 	struct netdev_queue *nq;
 	unsigned int queue;
 	int i;
@@ -3866,8 +3868,11 @@ static int fec_enet_xdp_xmit(struct net_device *dev,
 
 	__netif_tx_lock(nq, cpu);
 
-	for (i = 0; i < num_frames; i++)
-		fec_enet_txq_xmit_frame(fep, txq, frames[i]);
+	for (i = 0; i < num_frames; i++) {
+		if (fec_enet_txq_xmit_frame(fep, txq, frames[i]) != 0)
+			break;
+		sent_frames++;
+	}
 
 	/* Make sure the update to bdp and tx_skbuff are performed. */
 	wmb();
@@ -3877,7 +3882,7 @@ static int fec_enet_xdp_xmit(struct net_device *dev,
 
 	__netif_tx_unlock(nq);
 
-	return num_frames;
+	return sent_frames;
 }
 
 static const struct net_device_ops fec_netdev_ops = {
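
Note on the fec change above: the xmit loop used to report num_frames even when the ring ran out of descriptors, so frames that were never queued went unaccounted. The fix stops at the first failure and reports only what was actually sent. A hedged sketch of that accounting:

    #include <stdio.h>

    #define RING_SLOTS 2

    /* pretend each frame needs one slot; fail once the ring is full */
    static int xmit_frame(int *slots_free)
    {
        if (*slots_free == 0)
            return -1;       /* the driver's NETDEV_TX_BUSY case */
        (*slots_free)--;
        return 0;
    }

    static int xdp_xmit(int num_frames)
    {
        int slots_free = RING_SLOTS;
        int sent_frames = 0;

        for (int i = 0; i < num_frames; i++) {
            if (xmit_frame(&slots_free) != 0)
                break;           /* stop at first failure */
            sent_frames++;
        }
        return sent_frames;      /* caller accounts for the unsent rest */
    }

    int main(void)
    {
        printf("sent %d of 5\n", xdp_xmit(5)); /* sent 2 of 5 */
        return 0;
    }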

--- a/drivers/net/ethernet/intel/ice/ice_tc_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_tc_lib.c
@@ -693,17 +693,18 @@ ice_eswitch_add_tc_fltr(struct ice_vsi *vsi, struct ice_tc_flower_fltr *fltr)
 	 * results into order of switch rule evaluation.
 	 */
 	rule_info.priority = 7;
+	rule_info.flags_info.act_valid = true;
 
 	if (fltr->direction == ICE_ESWITCH_FLTR_INGRESS) {
 		rule_info.sw_act.flag |= ICE_FLTR_RX;
 		rule_info.sw_act.src = hw->pf_id;
 		rule_info.rx = true;
+		rule_info.flags_info.act = ICE_SINGLE_ACT_LB_ENABLE;
 	} else {
 		rule_info.sw_act.flag |= ICE_FLTR_TX;
 		rule_info.sw_act.src = vsi->idx;
 		rule_info.rx = false;
 		rule_info.flags_info.act = ICE_SINGLE_ACT_LAN_ENABLE;
-		rule_info.flags_info.act_valid = true;
 	}
 
 	/* specify the cookie as filter_rule_id */

--- a/drivers/net/ethernet/intel/igc/igc_base.h
+++ b/drivers/net/ethernet/intel/igc/igc_base.h
@@ -87,8 +87,13 @@ union igc_adv_rx_desc {
 #define IGC_RXDCTL_SWFLUSH		0x04000000 /* Receive Software Flush */
 
 /* SRRCTL bit definitions */
-#define IGC_SRRCTL_BSIZEPKT_SHIFT	10 /* Shift _right_ */
-#define IGC_SRRCTL_BSIZEHDRSIZE_SHIFT	2  /* Shift _left_ */
-#define IGC_SRRCTL_DESCTYPE_ADV_ONEBUF	0x02000000
+#define IGC_SRRCTL_BSIZEPKT_MASK	GENMASK(6, 0)
+#define IGC_SRRCTL_BSIZEPKT(x)		FIELD_PREP(IGC_SRRCTL_BSIZEPKT_MASK, \
+					(x) / 1024) /* in 1 KB resolution */
+#define IGC_SRRCTL_BSIZEHDR_MASK	GENMASK(13, 8)
+#define IGC_SRRCTL_BSIZEHDR(x)		FIELD_PREP(IGC_SRRCTL_BSIZEHDR_MASK, \
+					(x) / 64) /* in 64 bytes resolution */
+#define IGC_SRRCTL_DESCTYPE_MASK	GENMASK(27, 25)
+#define IGC_SRRCTL_DESCTYPE_ADV_ONEBUF	FIELD_PREP(IGC_SRRCTL_DESCTYPE_MASK, 1)
 
 #endif /* _IGC_BASE_H */

--- a/drivers/net/ethernet/intel/igc/igc_main.c
+++ b/drivers/net/ethernet/intel/igc/igc_main.c
@@ -640,8 +640,11 @@ static void igc_configure_rx_ring(struct igc_adapter *adapter,
 	else
 		buf_size = IGC_RXBUFFER_2048;
 
-	srrctl = IGC_RX_HDR_LEN << IGC_SRRCTL_BSIZEHDRSIZE_SHIFT;
-	srrctl |= buf_size >> IGC_SRRCTL_BSIZEPKT_SHIFT;
+	srrctl = rd32(IGC_SRRCTL(reg_idx));
+	srrctl &= ~(IGC_SRRCTL_BSIZEPKT_MASK | IGC_SRRCTL_BSIZEHDR_MASK |
+		    IGC_SRRCTL_DESCTYPE_MASK);
+	srrctl |= IGC_SRRCTL_BSIZEHDR(IGC_RX_HDR_LEN);
+	srrctl |= IGC_SRRCTL_BSIZEPKT(buf_size);
 	srrctl |= IGC_SRRCTL_DESCTYPE_ADV_ONEBUF;
 
 	wr32(IGC_SRRCTL(reg_idx), srrctl);
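
Note on the SRRCTL change above: building the register value from zero clobbered whatever other fields were already configured — here, the timestamp-related bits. The fix reads the register, clears only the fields this path owns, then ORs in the new values. A userspace sketch of that read-modify-write, with simplified GENMASK/FIELD_PREP stand-ins (the GCC/Clang __builtin_ctz is used for the field shift):

    #include <stdint.h>
    #include <stdio.h>

    #define GENMASK32(h, l)     (((~0u) << (l)) & (~0u >> (31 - (h))))
    #define FIELD_PREP32(m, v)  (((v) << __builtin_ctz(m)) & (m))

    #define BSIZEPKT_MASK  GENMASK32(6, 0)
    #define BSIZEHDR_MASK  GENMASK32(13, 8)
    #define TIMESTAMP_BIT  (1u << 30)          /* a bit we must not clobber */

    static uint32_t srrctl = TIMESTAMP_BIT;    /* pretend timestamping is on */

    int main(void)
    {
        uint32_t val = srrctl;                           /* read */
        val &= ~(BSIZEPKT_MASK | BSIZEHDR_MASK);         /* clear our fields */
        val |= FIELD_PREP32(BSIZEPKT_MASK, 2048 / 1024); /* modify */
        val |= FIELD_PREP32(BSIZEHDR_MASK, 256 / 64);
        srrctl = val;                                    /* write */

        printf("timestamp bit survived: %s\n",
               (srrctl & TIMESTAMP_BIT) ? "yes" : "no");
        return 0;
    }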

--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
@@ -1035,9 +1035,6 @@ static void ixgbe_free_q_vector(struct ixgbe_adapter *adapter, int v_idx)
 	adapter->q_vector[v_idx] = NULL;
 	__netif_napi_del(&q_vector->napi);
 
-	if (static_key_enabled(&ixgbe_xdp_locking_key))
-		static_branch_dec(&ixgbe_xdp_locking_key);
-
 	/*
 	 * after a call to __netif_napi_del() napi may still be used and
 	 * ixgbe_get_stats64() might access the rings on this vector,

--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -6487,6 +6487,10 @@ static int ixgbe_sw_init(struct ixgbe_adapter *adapter,
 	set_bit(0, adapter->fwd_bitmask);
 	set_bit(__IXGBE_DOWN, &adapter->state);
 
+	/* enable locking for XDP_TX if we have more CPUs than queues */
+	if (nr_cpu_ids > IXGBE_MAX_XDP_QS)
+		static_branch_enable(&ixgbe_xdp_locking_key);
+
 	return 0;
 }
 
@@ -10270,8 +10274,6 @@ static int ixgbe_xdp_setup(struct net_device *dev, struct bpf_prog *prog)
 	 */
 	if (nr_cpu_ids > IXGBE_MAX_XDP_QS * 2)
 		return -ENOMEM;
-	else if (nr_cpu_ids > IXGBE_MAX_XDP_QS)
-		static_branch_inc(&ixgbe_xdp_locking_key);
 
 	old_prog = xchg(&adapter->xdp_prog, prog);
 	need_reset = (!!prog != !!old_prog);

--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
@@ -1231,6 +1231,14 @@ static inline void link_status_user_format(u64 lstat,
 	linfo->an = FIELD_GET(RESP_LINKSTAT_AN, lstat);
 	linfo->fec = FIELD_GET(RESP_LINKSTAT_FEC, lstat);
 	linfo->lmac_type_id = FIELD_GET(RESP_LINKSTAT_LMAC_TYPE, lstat);
+
+	if (linfo->lmac_type_id >= LMAC_MODE_MAX) {
+		dev_err(&cgx->pdev->dev, "Unknown lmac_type_id %d reported by firmware on cgx port%d:%d",
+			linfo->lmac_type_id, cgx->cgx_id, lmac_id);
+		strncpy(linfo->lmac_type, "Unknown", LMACTYPE_STR_LEN - 1);
+		return;
+	}
+
 	lmac_string = cgx_lmactype_string[linfo->lmac_type_id];
 	strncpy(linfo->lmac_type, lmac_string, LMACTYPE_STR_LEN - 1);
 }
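
Note on the cgx fix above: lmac_type_id comes from firmware, and newer silicon can report values the driver's string table doesn't know about, so it must be range-checked before being used as an array index. A generic sketch of the defensive pattern:

    #include <stdio.h>

    static const char * const lmac_type_str[] = { "SGMII", "XAUI", "RXAUI" };
    #define LMAC_MODE_MAX (sizeof(lmac_type_str) / sizeof(lmac_type_str[0]))

    static const char *lmac_type_name(unsigned int id)
    {
        /* reject out-of-range ids instead of reading past the table */
        if (id >= LMAC_MODE_MAX)
            return "Unknown";
        return lmac_type_str[id];
    }

    int main(void)
    {
        printf("%s\n", lmac_type_name(1));  /* XAUI */
        printf("%s\n", lmac_type_name(9));  /* Unknown */
        return 0;
    }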

--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.c
@@ -157,7 +157,7 @@ EXPORT_SYMBOL(otx2_mbox_init);
  */
 int otx2_mbox_regions_init(struct otx2_mbox *mbox, void **hwbase,
 			   struct pci_dev *pdev, void *reg_base,
-			   int direction, int ndevs)
+			   int direction, int ndevs, unsigned long *pf_bmap)
 {
 	struct otx2_mbox_dev *mdev;
 	int devid, err;
@@ -169,6 +169,9 @@ int otx2_mbox_regions_init(struct otx2_mbox *mbox, void **hwbase,
 	mbox->hwbase = hwbase[0];
 
 	for (devid = 0; devid < ndevs; devid++) {
+		if (!test_bit(devid, pf_bmap))
+			continue;
+
 		mdev = &mbox->dev[devid];
 		mdev->mbase = hwbase[devid];
 		mdev->hwbase = hwbase[devid];

--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -96,9 +96,10 @@ void otx2_mbox_destroy(struct otx2_mbox *mbox);
 int otx2_mbox_init(struct otx2_mbox *mbox, void __force *hwbase,
 		   struct pci_dev *pdev, void __force *reg_base,
 		   int direction, int ndevs);
+
 int otx2_mbox_regions_init(struct otx2_mbox *mbox, void __force **hwbase,
 			   struct pci_dev *pdev, void __force *reg_base,
-			   int direction, int ndevs);
+			   int direction, int ndevs, unsigned long *bmap);
 void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid);
 int otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid);
 int otx2_mbox_busy_poll_for_rsp(struct otx2_mbox *mbox, int devid);
@@ -245,9 +246,9 @@ M(NPC_MCAM_READ_BASE_RULE, 0x6011, npc_read_base_steer_rule,		\
 M(NPC_MCAM_GET_STATS, 0x6012, npc_mcam_entry_stats,			\
 				npc_mcam_get_stats_req,			\
 				npc_mcam_get_stats_rsp)			\
-M(NPC_GET_SECRET_KEY, 0x6013, npc_get_secret_key,			\
-				npc_get_secret_key_req,			\
-				npc_get_secret_key_rsp)			\
+M(NPC_GET_FIELD_HASH_INFO, 0x6013, npc_get_field_hash_info,		\
+				npc_get_field_hash_info_req,		\
+				npc_get_field_hash_info_rsp)		\
 M(NPC_GET_FIELD_STATUS, 0x6014, npc_get_field_status,			\
 				npc_get_field_status_req,		\
 				npc_get_field_status_rsp)		\
@@ -1524,14 +1525,20 @@ struct npc_mcam_get_stats_rsp {
 	u8 stat_ena; /* enabled */
 };
 
-struct npc_get_secret_key_req {
+struct npc_get_field_hash_info_req {
 	struct mbox_msghdr hdr;
 	u8 intf;
 };
 
-struct npc_get_secret_key_rsp {
+struct npc_get_field_hash_info_rsp {
 	struct mbox_msghdr hdr;
 	u64 secret_key[3];
+#define NPC_MAX_HASH 2
+#define NPC_MAX_HASH_MASK 2
+	/* NPC_AF_INTF(0..1)_HASH(0..1)_MASK(0..1) */
+	u64 hash_mask[NPC_MAX_INTF][NPC_MAX_HASH][NPC_MAX_HASH_MASK];
+	/* NPC_AF_INTF(0..1)_HASH(0..1)_RESULT_CTRL */
+	u64 hash_ctrl[NPC_MAX_INTF][NPC_MAX_HASH];
 };
 
 enum ptp_op {

--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs.c
@@ -473,6 +473,8 @@ void mcs_flowid_entry_write(struct mcs *mcs, u64 *data, u64 *mask, int flow_id,
 		for (reg_id = 0; reg_id < 4; reg_id++) {
 			reg = MCSX_CPM_RX_SLAVE_FLOWID_TCAM_DATAX(reg_id, flow_id);
 			mcs_reg_write(mcs, reg, data[reg_id]);
+		}
+		for (reg_id = 0; reg_id < 4; reg_id++) {
 			reg = MCSX_CPM_RX_SLAVE_FLOWID_TCAM_MASKX(reg_id, flow_id);
 			mcs_reg_write(mcs, reg, mask[reg_id]);
 		}
@@ -480,6 +482,8 @@ void mcs_flowid_entry_write(struct mcs *mcs, u64 *data, u64 *mask, int flow_id,
 		for (reg_id = 0; reg_id < 4; reg_id++) {
 			reg = MCSX_CPM_TX_SLAVE_FLOWID_TCAM_DATAX(reg_id, flow_id);
 			mcs_reg_write(mcs, reg, data[reg_id]);
+		}
+		for (reg_id = 0; reg_id < 4; reg_id++) {
 			reg = MCSX_CPM_TX_SLAVE_FLOWID_TCAM_MASKX(reg_id, flow_id);
 			mcs_reg_write(mcs, reg, mask[reg_id]);
 		}
@@ -494,6 +498,9 @@ int mcs_install_flowid_bypass_entry(struct mcs *mcs)
 	/* Flow entry */
 	flow_id = mcs->hw->tcam_entries - MCS_RSRC_RSVD_CNT;
+	__set_bit(flow_id, mcs->rx.flow_ids.bmap);
+	__set_bit(flow_id, mcs->tx.flow_ids.bmap);
+
 	for (reg_id = 0; reg_id < 4; reg_id++) {
 		reg = MCSX_CPM_RX_SLAVE_FLOWID_TCAM_MASKX(reg_id, flow_id);
 		mcs_reg_write(mcs, reg, GENMASK_ULL(63, 0));
@@ -504,6 +511,8 @@ int mcs_install_flowid_bypass_entry(struct mcs *mcs)
 	}
 	/* secy */
 	secy_id = mcs->hw->secy_entries - MCS_RSRC_RSVD_CNT;
+	__set_bit(secy_id, mcs->rx.secy.bmap);
+	__set_bit(secy_id, mcs->tx.secy.bmap);
 
 	/* Set validate frames to NULL and enable control port */
 	plcy = 0x7ull;
@@ -528,6 +537,7 @@ int mcs_install_flowid_bypass_entry(struct mcs *mcs)
 	/* Enable Flowid entry */
 	mcs_ena_dis_flowid_entry(mcs, flow_id, MCS_RX, true);
 	mcs_ena_dis_flowid_entry(mcs, flow_id, MCS_TX, true);
+
 	return 0;
 }
@@ -926,60 +936,42 @@ static void mcs_tx_misc_intr_handler(struct mcs *mcs, u64 intr)
 	mcs_add_intr_wq_entry(mcs, &event);
 }
 
-static void mcs_bbe_intr_handler(struct mcs *mcs, u64 intr, enum mcs_direction dir)
+void cn10kb_mcs_bbe_intr_handler(struct mcs *mcs, u64 intr,
+				 enum mcs_direction dir)
 {
-	struct mcs_intr_event event = { 0 };
-	int i;
+	u64 val, reg;
+	int lmac;
 
-	if (!(intr & MCS_BBE_INT_MASK))
+	if (!(intr & 0x6ULL))
 		return;
 
-	event.mcs_id = mcs->mcs_id;
-	event.pcifunc = mcs->pf_map[0];
+	if (intr & BIT_ULL(1))
+		reg = (dir == MCS_RX) ? MCSX_BBE_RX_SLAVE_DFIFO_OVERFLOW_0 :
+					MCSX_BBE_TX_SLAVE_DFIFO_OVERFLOW_0;
+	else
+		reg = (dir == MCS_RX) ? MCSX_BBE_RX_SLAVE_PLFIFO_OVERFLOW_0 :
+					MCSX_BBE_TX_SLAVE_PLFIFO_OVERFLOW_0;
+	val = mcs_reg_read(mcs, reg);
 
-	for (i = 0; i < MCS_MAX_BBE_INT; i++) {
-		if (!(intr & BIT_ULL(i)))
+	/* policy/data over flow occurred */
+	for (lmac = 0; lmac < mcs->hw->lmac_cnt; lmac++) {
+		if (!(val & BIT_ULL(lmac)))
 			continue;
-
-		/* Lower nibble denotes data fifo overflow interrupts and
-		 * upper nibble indicates policy fifo overflow interrupts.
-		 */
-		if (intr & 0xFULL)
-			event.intr_mask = (dir == MCS_RX) ?
-					  MCS_BBE_RX_DFIFO_OVERFLOW_INT :
-					  MCS_BBE_TX_DFIFO_OVERFLOW_INT;
-		else
-			event.intr_mask = (dir == MCS_RX) ?
-					  MCS_BBE_RX_PLFIFO_OVERFLOW_INT :
-					  MCS_BBE_TX_PLFIFO_OVERFLOW_INT;
-
-		/* Notify the lmac_id info which ran into BBE fatal error */
-		event.lmac_id = i & 0x3ULL;
-		mcs_add_intr_wq_entry(mcs, &event);
+		dev_warn(mcs->dev, "BEE:Policy or data overflow occurred on lmac:%d\n", lmac);
 	}
 }
 
-static void mcs_pab_intr_handler(struct mcs *mcs, u64 intr, enum mcs_direction dir)
+void cn10kb_mcs_pab_intr_handler(struct mcs *mcs, u64 intr,
+				 enum mcs_direction dir)
 {
-	struct mcs_intr_event event = { 0 };
-	int i;
+	int lmac;
 
-	if (!(intr & MCS_PAB_INT_MASK))
+	if (!(intr & 0xFFFFFULL))
 		return;
 
-	event.mcs_id = mcs->mcs_id;
-	event.pcifunc = mcs->pf_map[0];
-
-	for (i = 0; i < MCS_MAX_PAB_INT; i++) {
-		if (!(intr & BIT_ULL(i)))
-			continue;
-
-		event.intr_mask = (dir == MCS_RX) ? MCS_PAB_RX_CHAN_OVERFLOW_INT :
-				  MCS_PAB_TX_CHAN_OVERFLOW_INT;
-
-		/* Notify the lmac_id info which ran into PAB fatal error */
-		event.lmac_id = i;
-		mcs_add_intr_wq_entry(mcs, &event);
+	for (lmac = 0; lmac < mcs->hw->lmac_cnt; lmac++) {
+		if (intr & BIT_ULL(lmac))
+			dev_warn(mcs->dev, "PAB: overflow occurred on lmac:%d\n", lmac);
 	}
 }
@@ -988,9 +980,8 @@ static irqreturn_t mcs_ip_intr_handler(int irq, void *mcs_irq)
 	struct mcs *mcs = (struct mcs *)mcs_irq;
 	u64 intr, cpm_intr, bbe_intr, pab_intr;
 
-	/* Disable and clear the interrupt */
+	/* Disable the interrupt */
 	mcs_reg_write(mcs, MCSX_IP_INT_ENA_W1C, BIT_ULL(0));
-	mcs_reg_write(mcs, MCSX_IP_INT, BIT_ULL(0));
 
 	/* Check which block has interrupt*/
 	intr = mcs_reg_read(mcs, MCSX_TOP_SLAVE_INT_SUM);
@@ -1037,7 +1028,7 @@ static irqreturn_t mcs_ip_intr_handler(int irq, void *mcs_irq)
 	/* BBE RX */
 	if (intr & MCS_BBE_RX_INT_ENA) {
 		bbe_intr = mcs_reg_read(mcs, MCSX_BBE_RX_SLAVE_BBE_INT);
-		mcs_bbe_intr_handler(mcs, bbe_intr, MCS_RX);
+		mcs->mcs_ops->mcs_bbe_intr_handler(mcs, bbe_intr, MCS_RX);
 
 		/* Clear the interrupt */
 		mcs_reg_write(mcs, MCSX_BBE_RX_SLAVE_BBE_INT_INTR_RW, 0);
@@ -1047,7 +1038,7 @@ static irqreturn_t mcs_ip_intr_handler(int irq, void *mcs_irq)
 	/* BBE TX */
 	if (intr & MCS_BBE_TX_INT_ENA) {
 		bbe_intr = mcs_reg_read(mcs, MCSX_BBE_TX_SLAVE_BBE_INT);
-		mcs_bbe_intr_handler(mcs, bbe_intr, MCS_TX);
+		mcs->mcs_ops->mcs_bbe_intr_handler(mcs, bbe_intr, MCS_TX);
 
 		/* Clear the interrupt */
 		mcs_reg_write(mcs, MCSX_BBE_TX_SLAVE_BBE_INT_INTR_RW, 0);
@@ -1057,7 +1048,7 @@ static irqreturn_t mcs_ip_intr_handler(int irq, void *mcs_irq)
 	/* PAB RX */
 	if (intr & MCS_PAB_RX_INT_ENA) {
 		pab_intr = mcs_reg_read(mcs, MCSX_PAB_RX_SLAVE_PAB_INT);
-		mcs_pab_intr_handler(mcs, pab_intr, MCS_RX);
+		mcs->mcs_ops->mcs_pab_intr_handler(mcs, pab_intr, MCS_RX);
 
 		/* Clear the interrupt */
 		mcs_reg_write(mcs, MCSX_PAB_RX_SLAVE_PAB_INT_INTR_RW, 0);
@@ -1067,14 +1058,15 @@ static irqreturn_t mcs_ip_intr_handler(int irq, void *mcs_irq)
 	/* PAB TX */
 	if (intr & MCS_PAB_TX_INT_ENA) {
 		pab_intr = mcs_reg_read(mcs, MCSX_PAB_TX_SLAVE_PAB_INT);
-		mcs_pab_intr_handler(mcs, pab_intr, MCS_TX);
+		mcs->mcs_ops->mcs_pab_intr_handler(mcs, pab_intr, MCS_TX);
 
 		/* Clear the interrupt */
 		mcs_reg_write(mcs, MCSX_PAB_TX_SLAVE_PAB_INT_INTR_RW, 0);
 		mcs_reg_write(mcs, MCSX_PAB_TX_SLAVE_PAB_INT, pab_intr);
 	}
 
-	/* Enable the interrupt */
+	/* Clear and enable the interrupt */
+	mcs_reg_write(mcs, MCSX_IP_INT, BIT_ULL(0));
 	mcs_reg_write(mcs, MCSX_IP_INT_ENA_W1S, BIT_ULL(0));
 
 	return IRQ_HANDLED;
@@ -1156,7 +1148,7 @@ static int mcs_register_interrupts(struct mcs *mcs)
 		return ret;
 	}
 
-	ret = request_irq(pci_irq_vector(mcs->pdev, MCS_INT_VEC_IP),
+	ret = request_irq(pci_irq_vector(mcs->pdev, mcs->hw->ip_vec),
 			  mcs_ip_intr_handler, 0, "MCS_IP", mcs);
 	if (ret) {
 		dev_err(mcs->dev, "MCS IP irq registration failed\n");
@@ -1175,11 +1167,11 @@ static int mcs_register_interrupts(struct mcs *mcs)
 	mcs_reg_write(mcs, MCSX_CPM_TX_SLAVE_TX_INT_ENB, 0x7ULL);
 	mcs_reg_write(mcs, MCSX_CPM_RX_SLAVE_RX_INT_ENB, 0x7FULL);
 
-	mcs_reg_write(mcs, MCSX_BBE_RX_SLAVE_BBE_INT_ENB, 0xff);
-	mcs_reg_write(mcs, MCSX_BBE_TX_SLAVE_BBE_INT_ENB, 0xff);
+	mcs_reg_write(mcs, MCSX_BBE_RX_SLAVE_BBE_INT_ENB, 0xFFULL);
+	mcs_reg_write(mcs, MCSX_BBE_TX_SLAVE_BBE_INT_ENB, 0xFFULL);
 
-	mcs_reg_write(mcs, MCSX_PAB_RX_SLAVE_PAB_INT_ENB, 0xff);
-	mcs_reg_write(mcs, MCSX_PAB_TX_SLAVE_PAB_INT_ENB, 0xff);
+	mcs_reg_write(mcs, MCSX_PAB_RX_SLAVE_PAB_INT_ENB, 0xFFFFFULL);
+	mcs_reg_write(mcs, MCSX_PAB_TX_SLAVE_PAB_INT_ENB, 0xFFFFFULL);
 
 	mcs->tx_sa_active = alloc_mem(mcs, mcs->hw->sc_entries);
 	if (!mcs->tx_sa_active) {
@@ -1190,7 +1182,7 @@ static int mcs_register_interrupts(struct mcs *mcs)
 	return ret;
 
 free_irq:
-	free_irq(pci_irq_vector(mcs->pdev, MCS_INT_VEC_IP), mcs);
+	free_irq(pci_irq_vector(mcs->pdev, mcs->hw->ip_vec), mcs);
 exit:
 	pci_free_irq_vectors(mcs->pdev);
 	mcs->num_vec = 0;
@@ -1325,8 +1317,11 @@ void mcs_reset_port(struct mcs *mcs, u8 port_id, u8 reset)
 void mcs_set_lmac_mode(struct mcs *mcs, int lmac_id, u8 mode)
 {
 	u64 reg;
+	int id = lmac_id * 2;
 
-	reg = MCSX_MCS_TOP_SLAVE_CHANNEL_CFG(lmac_id * 2);
+	reg = MCSX_MCS_TOP_SLAVE_CHANNEL_CFG(id);
+	mcs_reg_write(mcs, reg, (u64)mode);
+	reg = MCSX_MCS_TOP_SLAVE_CHANNEL_CFG((id + 1));
 	mcs_reg_write(mcs, reg, (u64)mode);
 }
@@ -1484,6 +1479,7 @@ void cn10kb_mcs_set_hw_capabilities(struct mcs *mcs)
 	hw->lmac_cnt = 20;		/* lmacs/ports per mcs block */
 	hw->mcs_x2p_intf = 5;		/* x2p clabration intf */
 	hw->mcs_blks = 1;		/* MCS blocks */
+	hw->ip_vec = MCS_CN10KB_INT_VEC_IP; /* IP vector */
 }
 
 static struct mcs_ops cn10kb_mcs_ops = {
@@ -1492,6 +1488,8 @@ static struct mcs_ops cn10kb_mcs_ops = {
 	.mcs_tx_sa_mem_map_write	= cn10kb_mcs_tx_sa_mem_map_write,
 	.mcs_rx_sa_mem_map_write	= cn10kb_mcs_rx_sa_mem_map_write,
 	.mcs_flowid_secy_map		= cn10kb_mcs_flowid_secy_map,
+	.mcs_bbe_intr_handler		= cn10kb_mcs_bbe_intr_handler,
+	.mcs_pab_intr_handler		= cn10kb_mcs_pab_intr_handler,
 };
 
 static int mcs_probe(struct pci_dev *pdev, const struct pci_device_id *id)
@@ -1592,7 +1590,7 @@ static void mcs_remove(struct pci_dev *pdev)
 
 	/* Set MCS to external bypass */
 	mcs_set_external_bypass(mcs, true);
-	free_irq(pci_irq_vector(pdev, MCS_INT_VEC_IP), mcs);
+	free_irq(pci_irq_vector(pdev, mcs->hw->ip_vec), mcs);
 	pci_free_irq_vectors(pdev);
 	pci_release_regions(pdev);
 	pci_disable_device(pdev);

--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs.h
@@ -43,24 +43,15 @@
 /* Reserved resources for default bypass entry */
 #define MCS_RSRC_RSVD_CNT		1
 
-/* MCS Interrupt Vector Enumeration */
-enum mcs_int_vec_e {
-	MCS_INT_VEC_MIL_RX_GBL		= 0x0,
-	MCS_INT_VEC_MIL_RX_LMACX	= 0x1,
-	MCS_INT_VEC_MIL_TX_LMACX	= 0x5,
-	MCS_INT_VEC_HIL_RX_GBL		= 0x9,
-	MCS_INT_VEC_HIL_RX_LMACX	= 0xa,
-	MCS_INT_VEC_HIL_TX_GBL		= 0xe,
-	MCS_INT_VEC_HIL_TX_LMACX	= 0xf,
-	MCS_INT_VEC_IP			= 0x13,
-	MCS_INT_VEC_CNT			= 0x14,
-};
+/* MCS Interrupt Vector */
+#define MCS_CNF10KB_INT_VEC_IP	0x13
+#define MCS_CN10KB_INT_VEC_IP	0x53
 
 #define MCS_MAX_BBE_INT			8ULL
 #define MCS_BBE_INT_MASK		0xFFULL
 
-#define MCS_MAX_PAB_INT			4ULL
+#define MCS_MAX_PAB_INT			8ULL
 #define MCS_PAB_INT_MASK		0xFULL
 
 #define MCS_BBE_RX_INT_ENA		BIT_ULL(0)
 #define MCS_BBE_TX_INT_ENA		BIT_ULL(1)
@@ -137,6 +128,7 @@ struct hwinfo {
 	u8 lmac_cnt;
 	u8 mcs_blks;
 	unsigned long	lmac_bmap; /* bitmap of enabled mcs lmac */
+	u16 ip_vec;
 };
 
 struct mcs {
@@ -165,6 +157,8 @@ struct mcs_ops {
 	void	(*mcs_tx_sa_mem_map_write)(struct mcs *mcs, struct mcs_tx_sc_sa_map *map);
 	void	(*mcs_rx_sa_mem_map_write)(struct mcs *mcs, struct mcs_rx_sc_sa_map *map);
 	void	(*mcs_flowid_secy_map)(struct mcs *mcs, struct secy_mem_map *map, int dir);
+	void	(*mcs_bbe_intr_handler)(struct mcs *mcs, u64 intr, enum mcs_direction dir);
+	void	(*mcs_pab_intr_handler)(struct mcs *mcs, u64 intr, enum mcs_direction dir);
 };
 
 extern struct pci_driver mcs_driver;
@@ -219,6 +213,8 @@ void cn10kb_mcs_tx_sa_mem_map_write(struct mcs *mcs, struct mcs_tx_sc_sa_map *ma
 void cn10kb_mcs_flowid_secy_map(struct mcs *mcs, struct secy_mem_map *map, int dir);
 void cn10kb_mcs_rx_sa_mem_map_write(struct mcs *mcs, struct mcs_rx_sc_sa_map *map);
 void cn10kb_mcs_parser_cfg(struct mcs *mcs);
+void cn10kb_mcs_pab_intr_handler(struct mcs *mcs, u64 intr, enum mcs_direction dir);
+void cn10kb_mcs_bbe_intr_handler(struct mcs *mcs, u64 intr, enum mcs_direction dir);
 
 /* CNF10K-B APIs */
 struct mcs_ops *cnf10kb_get_mac_ops(void);
@@ -229,6 +225,8 @@ void cnf10kb_mcs_rx_sa_mem_map_write(struct mcs *mcs, struct mcs_rx_sc_sa_map *m
 void cnf10kb_mcs_parser_cfg(struct mcs *mcs);
 void cnf10kb_mcs_tx_pn_thresh_reached_handler(struct mcs *mcs);
 void cnf10kb_mcs_tx_pn_wrapped_handler(struct mcs *mcs);
+void cnf10kb_mcs_bbe_intr_handler(struct mcs *mcs, u64 intr, enum mcs_direction dir);
+void cnf10kb_mcs_pab_intr_handler(struct mcs *mcs, u64 intr, enum mcs_direction dir);
 
 /* Stats APIs */
 void mcs_get_sc_stats(struct mcs *mcs, struct mcs_sc_stats *stats, int id, int dir);

--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_cnf10kb.c
@@ -13,6 +13,8 @@ static struct mcs_ops cnf10kb_mcs_ops = {
 	.mcs_tx_sa_mem_map_write	= cnf10kb_mcs_tx_sa_mem_map_write,
 	.mcs_rx_sa_mem_map_write	= cnf10kb_mcs_rx_sa_mem_map_write,
 	.mcs_flowid_secy_map		= cnf10kb_mcs_flowid_secy_map,
+	.mcs_bbe_intr_handler		= cnf10kb_mcs_bbe_intr_handler,
+	.mcs_pab_intr_handler		= cnf10kb_mcs_pab_intr_handler,
 };
 
 struct mcs_ops *cnf10kb_get_mac_ops(void)
@@ -31,6 +33,7 @@ void cnf10kb_mcs_set_hw_capabilities(struct mcs *mcs)
 	hw->lmac_cnt = 4;		/* lmacs/ports per mcs block */
 	hw->mcs_x2p_intf = 1;		/* x2p clabration intf */
 	hw->mcs_blks = 7;		/* MCS blocks */
+	hw->ip_vec = MCS_CNF10KB_INT_VEC_IP; /* IP vector */
 }
 
 void cnf10kb_mcs_parser_cfg(struct mcs *mcs)
@@ -212,3 +215,63 @@ void cnf10kb_mcs_tx_pn_wrapped_handler(struct mcs *mcs)
 		mcs_add_intr_wq_entry(mcs, &event);
 	}
 }
+
+void cnf10kb_mcs_bbe_intr_handler(struct mcs *mcs, u64 intr,
+				  enum mcs_direction dir)
+{
+	struct mcs_intr_event event = { 0 };
+	int i;
+
+	if (!(intr & MCS_BBE_INT_MASK))
+		return;
+
+	event.mcs_id = mcs->mcs_id;
+	event.pcifunc = mcs->pf_map[0];
+
+	for (i = 0; i < MCS_MAX_BBE_INT; i++) {
+		if (!(intr & BIT_ULL(i)))
+			continue;
+
+		/* Lower nibble denotes data fifo overflow interrupts and
+		 * upper nibble indicates policy fifo overflow interrupts.
+		 */
+		if (intr & 0xFULL)
+			event.intr_mask = (dir == MCS_RX) ?
+					  MCS_BBE_RX_DFIFO_OVERFLOW_INT :
+					  MCS_BBE_TX_DFIFO_OVERFLOW_INT;
+		else
+			event.intr_mask = (dir == MCS_RX) ?
+					  MCS_BBE_RX_PLFIFO_OVERFLOW_INT :
+					  MCS_BBE_TX_PLFIFO_OVERFLOW_INT;
+
+		/* Notify the lmac_id info which ran into BBE fatal error */
+		event.lmac_id = i & 0x3ULL;
+		mcs_add_intr_wq_entry(mcs, &event);
+	}
+}
+
+void cnf10kb_mcs_pab_intr_handler(struct mcs *mcs, u64 intr,
+				  enum mcs_direction dir)
+{
+	struct mcs_intr_event event = { 0 };
+	int i;
+
+	if (!(intr & MCS_PAB_INT_MASK))
+		return;
+
+	event.mcs_id = mcs->mcs_id;
+	event.pcifunc = mcs->pf_map[0];
+
+	for (i = 0; i < MCS_MAX_PAB_INT; i++) {
+		if (!(intr & BIT_ULL(i)))
+			continue;
+
+		event.intr_mask = (dir == MCS_RX) ?
+				  MCS_PAB_RX_CHAN_OVERFLOW_INT :
+				  MCS_PAB_TX_CHAN_OVERFLOW_INT;
+
+		/* Notify the lmac_id info which ran into PAB fatal error */
+		event.lmac_id = i;
+		mcs_add_intr_wq_entry(mcs, &event);
+	}
+}

--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_reg.h
@@ -97,6 +97,7 @@
 #define MCSX_PEX_TX_SLAVE_VLAN_CFGX(a)          (0x46f8ull + (a) * 0x8ull)
 #define MCSX_PEX_TX_SLAVE_CUSTOM_TAG_REL_MODE_SEL(a)	(0x788ull + (a) * 0x8ull)
 #define MCSX_PEX_TX_SLAVE_PORT_CONFIG(a)	(0x4738ull + (a) * 0x8ull)
+#define MCSX_PEX_RX_SLAVE_PORT_CFGX(a)		(0x3b98ull + (a) * 0x8ull)
 #define MCSX_PEX_RX_SLAVE_RULE_ETYPE_CFGX(a)	({	\
 	u64 offset;					\
 							\
@@ -275,7 +276,10 @@
 #define MCSX_BBE_RX_SLAVE_CAL_ENTRY			0x180ull
 #define MCSX_BBE_RX_SLAVE_CAL_LEN			0x188ull
 #define MCSX_PAB_RX_SLAVE_FIFO_SKID_CFGX(a)		(0x290ull + (a) * 0x40ull)
-
+#define MCSX_BBE_RX_SLAVE_DFIFO_OVERFLOW_0		0xe20
+#define MCSX_BBE_TX_SLAVE_DFIFO_OVERFLOW_0		0x1298
+#define MCSX_BBE_RX_SLAVE_PLFIFO_OVERFLOW_0		0xe40
+#define MCSX_BBE_TX_SLAVE_PLFIFO_OVERFLOW_0		0x12b8
 #define MCSX_BBE_RX_SLAVE_BBE_INT ({	\
 	u64 offset;			\
 					\

--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
@@ -11,6 +11,7 @@
 #include "mcs.h"
 #include "rvu.h"
+#include "mcs_reg.h"
 #include "lmac_common.h"
 
 #define M(_name, _id, _fn_name, _req_type, _rsp_type)			\
@@ -32,6 +33,42 @@ static struct _req_type __maybe_unused					\
 MBOX_UP_MCS_MESSAGES
 #undef M
 
+void rvu_mcs_ptp_cfg(struct rvu *rvu, u8 rpm_id, u8 lmac_id, bool ena)
+{
+	struct mcs *mcs;
+	u64 cfg;
+	u8 port;
+
+	if (!rvu->mcs_blk_cnt)
+		return;
+
+	/* When ptp is enabled, RPM appends 8B header for all
+	 * RX packets. MCS PEX need to configure to skip 8B
+	 * during packet parsing.
+	 */
+
+	/* CNF10K-B */
+	if (rvu->mcs_blk_cnt > 1) {
+		mcs = mcs_get_pdata(rpm_id);
+		cfg = mcs_reg_read(mcs, MCSX_PEX_RX_SLAVE_PEX_CONFIGURATION);
+		if (ena)
+			cfg |= BIT_ULL(lmac_id);
+		else
+			cfg &= ~BIT_ULL(lmac_id);
+		mcs_reg_write(mcs, MCSX_PEX_RX_SLAVE_PEX_CONFIGURATION, cfg);
+		return;
+	}
+
+	/* CN10KB */
+	mcs = mcs_get_pdata(0);
+	port = (rpm_id * rvu->hw->lmac_per_cgx) + lmac_id;
+	cfg = mcs_reg_read(mcs, MCSX_PEX_RX_SLAVE_PORT_CFGX(port));
+	if (ena)
+		cfg |= BIT_ULL(0);
+	else
+		cfg &= ~BIT_ULL(0);
+	mcs_reg_write(mcs, MCSX_PEX_RX_SLAVE_PORT_CFGX(port), cfg);
+}
+
 int rvu_mbox_handler_mcs_set_lmac_mode(struct rvu *rvu,
 				       struct mcs_set_lmac_mode *req,
 				       struct msg_rsp *rsp)

--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
@@ -2282,7 +2282,7 @@ static inline void rvu_afvf_mbox_up_handler(struct work_struct *work)
 }
 
 static int rvu_get_mbox_regions(struct rvu *rvu, void **mbox_addr,
-				int num, int type)
+				int num, int type, unsigned long *pf_bmap)
 {
 	struct rvu_hwinfo *hw = rvu->hw;
 	int region;
@@ -2294,6 +2294,9 @@ static int rvu_get_mbox_regions(struct rvu *rvu, void **mbox_addr,
 	 */
 	if (type == TYPE_AFVF) {
 		for (region = 0; region < num; region++) {
+			if (!test_bit(region, pf_bmap))
+				continue;
+
 			if (hw->cap.per_pf_mbox_regs) {
 				bar4 = rvu_read64(rvu, BLKADDR_RVUM,
 						  RVU_AF_PFX_BAR4_ADDR(0)) +
@@ -2315,6 +2318,9 @@ static int rvu_get_mbox_regions(struct rvu *rvu, void **mbox_addr,
 	 * RVU_AF_PF_BAR4_ADDR register.
 	 */
 	for (region = 0; region < num; region++) {
+		if (!test_bit(region, pf_bmap))
+			continue;
+
 		if (hw->cap.per_pf_mbox_regs) {
 			bar4 = rvu_read64(rvu, BLKADDR_RVUM,
 					  RVU_AF_PFX_BAR4_ADDR(region));
@@ -2343,12 +2349,33 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw,
 	int err = -EINVAL, i, dir, dir_up;
 	void __iomem *reg_base;
 	struct rvu_work *mwork;
+	unsigned long *pf_bmap;
 	void **mbox_regions;
 	const char *name;
+	u64 cfg;
+
+	pf_bmap = bitmap_zalloc(num, GFP_KERNEL);
+	if (!pf_bmap)
+		return -ENOMEM;
+
+	/* RVU VFs */
+	if (type == TYPE_AFVF)
+		bitmap_set(pf_bmap, 0, num);
+
+	if (type == TYPE_AFPF) {
+		/* Mark enabled PFs in bitmap */
+		for (i = 0; i < num; i++) {
+			cfg = rvu_read64(rvu, BLKADDR_RVUM, RVU_PRIV_PFX_CFG(i));
+			if (cfg & BIT_ULL(20))
+				set_bit(i, pf_bmap);
+		}
+	}
 
 	mbox_regions = kcalloc(num, sizeof(void *), GFP_KERNEL);
-	if (!mbox_regions)
-		return -ENOMEM;
+	if (!mbox_regions) {
+		err = -ENOMEM;
+		goto free_bitmap;
+	}
 
 	switch (type) {
 	case TYPE_AFPF:
@@ -2356,7 +2383,7 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw,
 		dir = MBOX_DIR_AFPF;
 		dir_up = MBOX_DIR_AFPF_UP;
 		reg_base = rvu->afreg_base;
-		err = rvu_get_mbox_regions(rvu, mbox_regions, num, TYPE_AFPF);
+		err = rvu_get_mbox_regions(rvu, mbox_regions, num, TYPE_AFPF, pf_bmap);
 		if (err)
 			goto free_regions;
 		break;
@@ -2365,7 +2392,7 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw,
 		dir = MBOX_DIR_PFVF;
 		dir_up = MBOX_DIR_PFVF_UP;
 		reg_base = rvu->pfreg_base;
-		err = rvu_get_mbox_regions(rvu, mbox_regions, num, TYPE_AFVF);
+		err = rvu_get_mbox_regions(rvu, mbox_regions, num, TYPE_AFVF, pf_bmap);
 		if (err)
 			goto free_regions;
 		break;
@@ -2396,16 +2423,19 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw,
 	}
 
 	err = otx2_mbox_regions_init(&mw->mbox, mbox_regions, rvu->pdev,
-				     reg_base, dir, num);
+				     reg_base, dir, num, pf_bmap);
 	if (err)
 		goto exit;
 
 	err = otx2_mbox_regions_init(&mw->mbox_up, mbox_regions, rvu->pdev,
-				     reg_base, dir_up, num);
+				     reg_base, dir_up, num, pf_bmap);
 	if (err)
 		goto exit;
 
 	for (i = 0; i < num; i++) {
+		if (!test_bit(i, pf_bmap))
+			continue;
+
 		mwork = &mw->mbox_wrk[i];
 		mwork->rvu = rvu;
 		INIT_WORK(&mwork->work, mbox_handler);
@@ -2414,8 +2444,7 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw,
 		mwork->rvu = rvu;
 		INIT_WORK(&mwork->work, mbox_up_handler);
 	}
-	kfree(mbox_regions);
-	return 0;
+	goto free_regions;
 
 exit:
 	destroy_workqueue(mw->mbox_wq);
@@ -2424,6 +2453,8 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw,
 		iounmap((void __iomem *)mbox_regions[num]);
 free_regions:
 	kfree(mbox_regions);
+free_bitmap:
+	bitmap_free(pf_bmap);
 	return err;
 }
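
Note on the rvu_mbox_init() change above: mailbox regions and workers are now set up only for PFs whose enable bit (bit 20 of the per-PF config register) is set, tracked in pf_bmap and consulted by every later loop. A small sketch of that gate, with a plain uint64_t standing in for the kernel bitmap API:

    #include <stdint.h>
    #include <stdio.h>

    #define NUM_PFS     8
    #define PF_ENA_BIT  (1ull << 20)   /* enable bit in the per-PF config */

    static uint64_t read_pf_cfg(int pf)
    {
        /* pretend only even-numbered PFs are enabled */
        return (pf % 2 == 0) ? PF_ENA_BIT : 0;
    }

    int main(void)
    {
        uint64_t pf_bmap = 0;

        for (int i = 0; i < NUM_PFS; i++)
            if (read_pf_cfg(i) & PF_ENA_BIT)
                pf_bmap |= 1ull << i;          /* set_bit() */

        for (int i = 0; i < NUM_PFS; i++) {
            if (!(pf_bmap & (1ull << i)))      /* test_bit() gate */
                continue;
            printf("init mbox for PF%d\n", i);
        }
        return 0;
    }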

--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -920,6 +920,7 @@ int rvu_get_hwvf(struct rvu *rvu, int pcifunc);
 /* CN10K MCS */
 int rvu_mcs_init(struct rvu *rvu);
 int rvu_mcs_flr_handler(struct rvu *rvu, u16 pcifunc);
+void rvu_mcs_ptp_cfg(struct rvu *rvu, u8 rpm_id, u8 lmac_id, bool ena);
 void rvu_mcs_exit(struct rvu *rvu);
 
 #endif /* RVU_H */

--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
@@ -773,6 +773,8 @@ static int rvu_cgx_ptp_rx_cfg(struct rvu *rvu, u16 pcifunc, bool enable)
 	/* This flag is required to clean up CGX conf if app gets killed */
 	pfvf->hw_rx_tstamp_en = enable;
 
+	/* Inform MCS about 8B RX header */
+	rvu_mcs_ptp_cfg(rvu, cgx_id, lmac_id, enable);
 	return 0;
 }

--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cn10k.c
@@ -60,13 +60,14 @@ static int rvu_get_lmtaddr(struct rvu *rvu, u16 pcifunc,
 			   u64 iova, u64 *lmt_addr)
 {
 	u64 pa, val, pf;
-	int err;
+	int err = 0;
 
 	if (!iova) {
 		dev_err(rvu->dev, "%s Requested Null address for transulation\n", __func__);
 		return -EINVAL;
 	}
 
+	mutex_lock(&rvu->rsrc_lock);
 	rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_SMMU_ADDR_REQ, iova);
 	pf = rvu_get_pf(pcifunc) & 0x1F;
 	val = BIT_ULL(63) | BIT_ULL(14) | BIT_ULL(13) | pf << 8 |
@@ -76,12 +77,13 @@ static int rvu_get_lmtaddr(struct rvu *rvu, u16 pcifunc,
 	err = rvu_poll_reg(rvu, BLKADDR_RVUM, RVU_AF_SMMU_ADDR_RSP_STS, BIT_ULL(0), false);
 	if (err) {
 		dev_err(rvu->dev, "%s LMTLINE iova transulation failed\n", __func__);
-		return err;
+		goto exit;
 	}
 	val = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_SMMU_ADDR_RSP_STS);
 	if (val & ~0x1ULL) {
 		dev_err(rvu->dev, "%s LMTLINE iova transulation failed err:%llx\n", __func__, val);
-		return -EIO;
+		err = -EIO;
+		goto exit;
 	}
 	/* PA[51:12] = RVU_AF_SMMU_TLN_FLIT0[57:18]
 	 * PA[11:0] = IOVA[11:0]
@@ -89,8 +91,9 @@ static int rvu_get_lmtaddr(struct rvu *rvu, u16 pcifunc,
 	pa = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_SMMU_TLN_FLIT0) >> 18;
 	pa &= GENMASK_ULL(39, 0);
 	*lmt_addr = (pa << 12) | (iova & 0xFFF);
-
-	return 0;
+exit:
+	mutex_unlock(&rvu->rsrc_lock);
+	return err;
 }
 
 static int rvu_update_lmtaddr(struct rvu *rvu, u16 pcifunc, u64 lmt_addr)

--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
@@ -497,8 +497,9 @@ static int rvu_dbg_mcs_rx_secy_stats_display(struct seq_file *filp, void *unused
 			   stats.octet_validated_cnt);
 		seq_printf(filp, "secy%d: Pkts on disable port: %lld\n", secy_id,
 			   stats.pkt_port_disabled_cnt);
-		seq_printf(filp, "secy%d: Octets validated: %lld\n", secy_id, stats.pkt_badtag_cnt);
-		seq_printf(filp, "secy%d: Octets validated: %lld\n", secy_id, stats.pkt_nosa_cnt);
+		seq_printf(filp, "secy%d: Pkts with badtag: %lld\n", secy_id, stats.pkt_badtag_cnt);
+		seq_printf(filp, "secy%d: Pkts with no SA(sectag.tci.c=0): %lld\n", secy_id,
+			   stats.pkt_nosa_cnt);
 		seq_printf(filp, "secy%d: Pkts with nosaerror: %lld\n", secy_id,
 			   stats.pkt_nosaerror_cnt);
 		seq_printf(filp, "secy%d: Tagged ctrl pkts: %lld\n", secy_id,


@@ -13,11 +13,6 @@
 #include "rvu_npc_fs.h"
 #include "rvu_npc_hash.h"
 
-#define NPC_BYTESM     GENMASK_ULL(19, 16)
-#define NPC_HDR_OFFSET GENMASK_ULL(15, 8)
-#define NPC_KEY_OFFSET GENMASK_ULL(5, 0)
-#define NPC_LDATA_EN   BIT_ULL(7)
-
 static const char * const npc_flow_names[] = {
     [NPC_DMAC]  = "dmac",
     [NPC_SMAC]  = "smac",
@@ -442,6 +437,7 @@ static void npc_handle_multi_layer_fields(struct rvu *rvu, int blkaddr, u8 intf)
 static void npc_scan_ldata(struct rvu *rvu, int blkaddr, u8 lid,
                u8 lt, u64 cfg, u8 intf)
 {
+    struct npc_mcam_kex_hash *mkex_hash = rvu->kpu.mkex_hash;
     struct npc_mcam *mcam = &rvu->hw->mcam;
     u8 hdr, key, nr_bytes, bit_offset;
     u8 la_ltype, la_start;
@@ -490,8 +486,21 @@ do { \
     NPC_SCAN_HDR(NPC_SIP_IPV4, NPC_LID_LC, NPC_LT_LC_IP, 12, 4);
     NPC_SCAN_HDR(NPC_DIP_IPV4, NPC_LID_LC, NPC_LT_LC_IP, 16, 4);
     NPC_SCAN_HDR(NPC_IPFRAG_IPV6, NPC_LID_LC, NPC_LT_LC_IP6_EXT, 6, 1);
-    NPC_SCAN_HDR(NPC_SIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 8, 16);
-    NPC_SCAN_HDR(NPC_DIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 24, 16);
+    if (rvu->hw->cap.npc_hash_extract) {
+        if (mkex_hash->lid_lt_ld_hash_en[intf][lid][lt][0])
+            NPC_SCAN_HDR(NPC_SIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 8, 4);
+        else
+            NPC_SCAN_HDR(NPC_SIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 8, 16);
+        if (mkex_hash->lid_lt_ld_hash_en[intf][lid][lt][1])
+            NPC_SCAN_HDR(NPC_DIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 24, 4);
+        else
+            NPC_SCAN_HDR(NPC_DIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 24, 16);
+    } else {
+        NPC_SCAN_HDR(NPC_SIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 8, 16);
+        NPC_SCAN_HDR(NPC_DIP_IPV6, NPC_LID_LC, NPC_LT_LC_IP6, 24, 16);
+    }
     NPC_SCAN_HDR(NPC_SPORT_UDP, NPC_LID_LD, NPC_LT_LD_UDP, 0, 2);
     NPC_SCAN_HDR(NPC_DPORT_UDP, NPC_LID_LD, NPC_LT_LD_UDP, 2, 2);
     NPC_SCAN_HDR(NPC_SPORT_TCP, NPC_LID_LD, NPC_LT_LD_TCP, 0, 2);
@@ -594,8 +603,7 @@ static int npc_scan_kex(struct rvu *rvu, int blkaddr, u8 intf)
      */
     masked_cfg = cfg & NPC_EXACT_NIBBLE;
     bitnr = NPC_EXACT_NIBBLE_START;
-    for_each_set_bit_from(bitnr, (unsigned long *)&masked_cfg,
-                  NPC_EXACT_NIBBLE_START) {
+    for_each_set_bit_from(bitnr, (unsigned long *)&masked_cfg, NPC_EXACT_NIBBLE_END + 1) {
         npc_scan_exact_result(mcam, bitnr, key_nibble, intf);
         key_nibble++;
     }
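
The last hunk fixes an off-by-one: for_each_set_bit_from() takes an exclusive upper bound, so passing NPC_EXACT_NIBBLE_START made the loop body unreachable. A small userspace sketch of the corrected iteration, with an illustrative helper rather than the kernel macro:

#include <stdio.h>

static void scan_set_bits(unsigned long mask, unsigned int start,
                          unsigned int size /* exclusive upper bound */)
{
    for (unsigned int bit = start; bit < size; bit++)
        if (mask & (1UL << bit))
            printf("nibble at bit %u\n", bit);
}

int main(void)
{
    /* With size == start the loop never runs, which is the bug fixed
     * above. The fix passes END + 1 so bits START..END are all visited.
     */
    scan_set_bits(0xf0UL, 4, 4);      /* visits nothing */
    scan_set_bits(0xf0UL, 4, 7 + 1);  /* visits bits 4..7 */
    return 0;
}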


@@ -9,6 +9,10 @@
 #define __RVU_NPC_FS_H
 
 #define IPV6_WORDS     4
+#define NPC_BYTESM     GENMASK_ULL(19, 16)
+#define NPC_HDR_OFFSET GENMASK_ULL(15, 8)
+#define NPC_KEY_OFFSET GENMASK_ULL(5, 0)
+#define NPC_LDATA_EN   BIT_ULL(7)
 
 void npc_update_entry(struct rvu *rvu, enum key_fields type,
               struct mcam_entry *entry, u64 val_lo,


@@ -78,42 +78,43 @@ static u32 rvu_npc_toeplitz_hash(const u64 *data, u64 *key, size_t data_bit_len,
     return hash_out;
 }
 
-u32 npc_field_hash_calc(u64 *ldata, struct npc_mcam_kex_hash *mkex_hash,
-            u64 *secret_key, u8 intf, u8 hash_idx)
+u32 npc_field_hash_calc(u64 *ldata, struct npc_get_field_hash_info_rsp rsp,
+            u8 intf, u8 hash_idx)
 {
     u64 hash_key[3];
     u64 data_padded[2];
     u32 field_hash;
 
-    hash_key[0] = secret_key[1] << 31;
-    hash_key[0] |= secret_key[2];
-    hash_key[1] = secret_key[1] >> 33;
-    hash_key[1] |= secret_key[0] << 31;
-    hash_key[2] = secret_key[0] >> 33;
+    hash_key[0] = rsp.secret_key[1] << 31;
+    hash_key[0] |= rsp.secret_key[2];
+    hash_key[1] = rsp.secret_key[1] >> 33;
+    hash_key[1] |= rsp.secret_key[0] << 31;
+    hash_key[2] = rsp.secret_key[0] >> 33;
 
-    data_padded[0] = mkex_hash->hash_mask[intf][hash_idx][0] & ldata[0];
-    data_padded[1] = mkex_hash->hash_mask[intf][hash_idx][1] & ldata[1];
+    data_padded[0] = rsp.hash_mask[intf][hash_idx][0] & ldata[0];
+    data_padded[1] = rsp.hash_mask[intf][hash_idx][1] & ldata[1];
     field_hash = rvu_npc_toeplitz_hash(data_padded, hash_key, 128, 159);
 
-    field_hash &= mkex_hash->hash_ctrl[intf][hash_idx] >> 32;
-    field_hash |= mkex_hash->hash_ctrl[intf][hash_idx];
+    field_hash &= FIELD_GET(GENMASK(63, 32), rsp.hash_ctrl[intf][hash_idx]);
+    field_hash += FIELD_GET(GENMASK(31, 0), rsp.hash_ctrl[intf][hash_idx]);
     return field_hash;
 }
 
-static u64 npc_update_use_hash(int lt, int ld)
+static u64 npc_update_use_hash(struct rvu *rvu, int blkaddr,
+                   u8 intf, int lid, int lt, int ld)
 {
-    u64 cfg = 0;
-
-    switch (lt) {
-    case NPC_LT_LC_IP6:
-        /* Update use_hash(bit-20) and bytesm1 (bit-16:19)
-         * in KEX_LD_CFG
-         */
-        cfg = KEX_LD_CFG_USE_HASH(0x1, 0x03,
-                      ld ? 0x8 : 0x18,
-                      0x1, 0x0, 0x10);
-        break;
-    }
+    u8 hdr, key;
+    u64 cfg;
+
+    cfg = rvu_read64(rvu, blkaddr, NPC_AF_INTFX_LIDX_LTX_LDX_CFG(intf, lid, lt, ld));
+    hdr = FIELD_GET(NPC_HDR_OFFSET, cfg);
+    key = FIELD_GET(NPC_KEY_OFFSET, cfg);
 
+    /* Update use_hash(bit-20) to 'true' and
+     * bytesm1(bit-16:19) to '0x3' in KEX_LD_CFG
+     */
+    cfg = KEX_LD_CFG_USE_HASH(0x1, 0x03,
+                  hdr, 0x1, 0x0, key);
     return cfg;
 }
 
@@ -132,12 +133,13 @@ static void npc_program_mkex_hash_rx(struct rvu *rvu, int blkaddr,
     for (lt = 0; lt < NPC_MAX_LT; lt++) {
         for (ld = 0; ld < NPC_MAX_LD; ld++) {
             if (mkex_hash->lid_lt_ld_hash_en[intf][lid][lt][ld]) {
-                u64 cfg = npc_update_use_hash(lt, ld);
+                u64 cfg;
 
-                hash_cnt++;
                 if (hash_cnt == NPC_MAX_HASH)
                     return;
 
+                cfg = npc_update_use_hash(rvu, blkaddr,
+                              intf, lid, lt, ld);
                 /* Set updated KEX configuration */
                 SET_KEX_LD(intf, lid, lt, ld, cfg);
                 /* Set HASH configuration */
@@ -149,6 +151,8 @@ static void npc_program_mkex_hash_rx(struct rvu *rvu, int blkaddr,
                              mkex_hash->hash_mask[intf][ld][1]);
                 SET_KEX_LD_HASH_CTRL(intf, ld,
                              mkex_hash->hash_ctrl[intf][ld]);
+
+                hash_cnt++;
             }
         }
     }
@@ -169,12 +173,13 @@ static void npc_program_mkex_hash_tx(struct rvu *rvu, int blkaddr,
     for (lt = 0; lt < NPC_MAX_LT; lt++) {
         for (ld = 0; ld < NPC_MAX_LD; ld++)
             if (mkex_hash->lid_lt_ld_hash_en[intf][lid][lt][ld]) {
-                u64 cfg = npc_update_use_hash(lt, ld);
+                u64 cfg;
 
-                hash_cnt++;
                 if (hash_cnt == NPC_MAX_HASH)
                     return;
 
+                cfg = npc_update_use_hash(rvu, blkaddr,
+                              intf, lid, lt, ld);
                 /* Set updated KEX configuration */
                 SET_KEX_LD(intf, lid, lt, ld, cfg);
                 /* Set HASH configuration */
@@ -187,8 +192,6 @@ static void npc_program_mkex_hash_tx(struct rvu *rvu, int blkaddr,
                 SET_KEX_LD_HASH_CTRL(intf, ld,
                              mkex_hash->hash_ctrl[intf][ld]);
                 hash_cnt++;
-                if (hash_cnt == NPC_MAX_HASH)
-                    return;
             }
         }
     }
@@ -238,8 +241,8 @@ void npc_update_field_hash(struct rvu *rvu, u8 intf,
                struct flow_msg *omask)
 {
     struct npc_mcam_kex_hash *mkex_hash = rvu->kpu.mkex_hash;
-    struct npc_get_secret_key_req req;
-    struct npc_get_secret_key_rsp rsp;
+    struct npc_get_field_hash_info_req req;
+    struct npc_get_field_hash_info_rsp rsp;
     u64 ldata[2], cfg;
     u32 field_hash;
     u8 hash_idx;
@@ -250,7 +253,7 @@ void npc_update_field_hash(struct rvu *rvu, u8 intf,
     }
 
     req.intf = intf;
-    rvu_mbox_handler_npc_get_secret_key(rvu, &req, &rsp);
+    rvu_mbox_handler_npc_get_field_hash_info(rvu, &req, &rsp);
 
     for (hash_idx = 0; hash_idx < NPC_MAX_HASH; hash_idx++) {
         cfg = rvu_read64(rvu, blkaddr, NPC_AF_INTFX_HASHX_CFG(intf, hash_idx));
@@ -266,44 +269,45 @@ void npc_update_field_hash(struct rvu *rvu, u8 intf,
              * is hashed to 32 bit value.
              */
         case NPC_LT_LC_IP6:
-            if (features & BIT_ULL(NPC_SIP_IPV6)) {
+            /* ld[0] == hash_idx[0] == Source IPv6
+             * ld[1] == hash_idx[1] == Destination IPv6
+             */
+            if ((features & BIT_ULL(NPC_SIP_IPV6)) && !hash_idx) {
                 u32 src_ip[IPV6_WORDS];
 
                 be32_to_cpu_array(src_ip, pkt->ip6src, IPV6_WORDS);
-                ldata[0] = (u64)src_ip[0] << 32 | src_ip[1];
-                ldata[1] = (u64)src_ip[2] << 32 | src_ip[3];
+                ldata[1] = (u64)src_ip[0] << 32 | src_ip[1];
+                ldata[0] = (u64)src_ip[2] << 32 | src_ip[3];
                 field_hash = npc_field_hash_calc(ldata,
-                                 mkex_hash,
-                                 rsp.secret_key,
+                                 rsp,
                                  intf,
                                  hash_idx);
                 npc_update_entry(rvu, NPC_SIP_IPV6, entry,
-                         field_hash, 0, 32, 0, intf);
+                         field_hash, 0,
+                         GENMASK(31, 0), 0, intf);
                 memcpy(&opkt->ip6src, &pkt->ip6src,
                        sizeof(pkt->ip6src));
                 memcpy(&omask->ip6src, &mask->ip6src,
                        sizeof(mask->ip6src));
-                break;
-            }
-
-            if (features & BIT_ULL(NPC_DIP_IPV6)) {
+            } else if ((features & BIT_ULL(NPC_DIP_IPV6)) && hash_idx) {
                 u32 dst_ip[IPV6_WORDS];
 
                 be32_to_cpu_array(dst_ip, pkt->ip6dst, IPV6_WORDS);
-                ldata[0] = (u64)dst_ip[0] << 32 | dst_ip[1];
-                ldata[1] = (u64)dst_ip[2] << 32 | dst_ip[3];
+                ldata[1] = (u64)dst_ip[0] << 32 | dst_ip[1];
+                ldata[0] = (u64)dst_ip[2] << 32 | dst_ip[3];
                 field_hash = npc_field_hash_calc(ldata,
-                                 mkex_hash,
-                                 rsp.secret_key,
+                                 rsp,
                                  intf,
                                  hash_idx);
                 npc_update_entry(rvu, NPC_DIP_IPV6, entry,
-                         field_hash, 0, 32, 0, intf);
+                         field_hash, 0,
+                         GENMASK(31, 0), 0, intf);
                 memcpy(&opkt->ip6dst, &pkt->ip6dst,
                        sizeof(pkt->ip6dst));
                 memcpy(&omask->ip6dst, &mask->ip6dst,
                        sizeof(mask->ip6dst));
             }
+
             break;
         }
     }
@@ -311,13 +315,13 @@ void npc_update_field_hash(struct rvu *rvu, u8 intf,
     }
 }
 
-int rvu_mbox_handler_npc_get_secret_key(struct rvu *rvu,
-                    struct npc_get_secret_key_req *req,
-                    struct npc_get_secret_key_rsp *rsp)
+int rvu_mbox_handler_npc_get_field_hash_info(struct rvu *rvu,
+                         struct npc_get_field_hash_info_req *req,
+                         struct npc_get_field_hash_info_rsp *rsp)
 {
     u64 *secret_key = rsp->secret_key;
     u8 intf = req->intf;
-    int blkaddr;
+    int i, j, blkaddr;
 
     blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NPC, 0);
     if (blkaddr < 0) {
@@ -329,6 +333,19 @@ int rvu_mbox_handler_npc_get_secret_key(struct rvu *rvu,
     secret_key[1] = rvu_read64(rvu, blkaddr, NPC_AF_INTFX_SECRET_KEY1(intf));
     secret_key[2] = rvu_read64(rvu, blkaddr, NPC_AF_INTFX_SECRET_KEY2(intf));
 
+    for (i = 0; i < NPC_MAX_HASH; i++) {
+        for (j = 0; j < NPC_MAX_HASH_MASK; j++) {
+            rsp->hash_mask[NIX_INTF_RX][i][j] =
+                GET_KEX_LD_HASH_MASK(NIX_INTF_RX, i, j);
+            rsp->hash_mask[NIX_INTF_TX][i][j] =
+                GET_KEX_LD_HASH_MASK(NIX_INTF_TX, i, j);
+        }
+    }
+
+    for (i = 0; i < NPC_MAX_INTF; i++)
+        for (j = 0; j < NPC_MAX_HASH; j++)
+            rsp->hash_ctrl[i][j] = GET_KEX_LD_HASH_CTRL(i, j);
+
     return 0;
 }
 
@@ -1868,9 +1885,9 @@ int rvu_npc_exact_init(struct rvu *rvu)
     rvu->hw->table = table;
 
     /* Read table size, ways and depth */
-    table->mem_table.depth = FIELD_GET(GENMASK_ULL(31, 24), npc_const3);
     table->mem_table.ways = FIELD_GET(GENMASK_ULL(19, 16), npc_const3);
-    table->cam_table.depth = FIELD_GET(GENMASK_ULL(15, 0), npc_const3);
+    table->mem_table.depth = FIELD_GET(GENMASK_ULL(15, 0), npc_const3);
+    table->cam_table.depth = FIELD_GET(GENMASK_ULL(31, 24), npc_const3);
 
     dev_dbg(rvu->dev, "%s: NPC exact match 4way_2k table(ways=%d, depth=%d)\n",
         __func__, table->mem_table.ways, table->cam_table.depth);
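
The final hunk swaps which bitfields of npc_const3 feed the exact-match table geometry. A minimal sketch of FIELD_GET()-style extraction with a made-up register value; the macros below are userspace stand-ins for the kernel's GENMASK_ULL()/FIELD_GET():

#include <stdint.h>
#include <stdio.h>

#define GENMASK_ULL(h, l) (((~0ULL) << (l)) & (~0ULL >> (63 - (h))))

static uint64_t field_get(uint64_t mask, uint64_t reg)
{
    return (reg & mask) >> __builtin_ctzll(mask);
}

int main(void)
{
    /* Made-up value: mem depth 2048 in bits 15:0, 4 ways in 19:16,
     * CAM depth 8 in 31:24, matching the "4way_2k" layout in the fix.
     */
    uint64_t npc_const3 = (8ULL << 24) | (4ULL << 16) | 2048;

    printf("mem depth %llu ways %llu cam depth %llu\n",
           (unsigned long long)field_get(GENMASK_ULL(15, 0), npc_const3),
           (unsigned long long)field_get(GENMASK_ULL(19, 16), npc_const3),
           (unsigned long long)field_get(GENMASK_ULL(31, 24), npc_const3));
    return 0;
}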


@@ -31,6 +31,12 @@
     rvu_write64(rvu, blkaddr,   \
             NPC_AF_INTFX_HASHX_MASKX(intf, ld, mask_idx), cfg)
 
+#define GET_KEX_LD_HASH_CTRL(intf, ld) \
+    rvu_read64(rvu, blkaddr, NPC_AF_INTFX_HASHX_RESULT_CTRL(intf, ld))
+
+#define GET_KEX_LD_HASH_MASK(intf, ld, mask_idx) \
+    rvu_read64(rvu, blkaddr, NPC_AF_INTFX_HASHX_MASKX(intf, ld, mask_idx))
+
 #define SET_KEX_LD_HASH_CTRL(intf, ld, cfg) \
     rvu_write64(rvu, blkaddr,   \
             NPC_AF_INTFX_HASHX_RESULT_CTRL(intf, ld), cfg)
@@ -56,8 +62,8 @@ void npc_update_field_hash(struct rvu *rvu, u8 intf,
                struct flow_msg *omask);
 void npc_config_secret_key(struct rvu *rvu, int blkaddr);
 void npc_program_mkex_hash(struct rvu *rvu, int blkaddr);
-u32 npc_field_hash_calc(u64 *ldata, struct npc_mcam_kex_hash *mkex_hash,
-            u64 *secret_key, u8 intf, u8 hash_idx);
+u32 npc_field_hash_calc(u64 *ldata, struct npc_get_field_hash_info_rsp rsp,
+            u8 intf, u8 hash_idx);
 
 static struct npc_mcam_kex_hash npc_mkex_hash_default __maybe_unused = {
     .lid_lt_ld_hash_en = {


@@ -9,6 +9,7 @@
 #include <net/macsec.h>
 #include "otx2_common.h"
 
+#define MCS_TCAM0_MAC_DA_MASK  GENMASK_ULL(47, 0)
 #define MCS_TCAM0_MAC_SA_MASK  GENMASK_ULL(63, 48)
 #define MCS_TCAM1_MAC_SA_MASK  GENMASK_ULL(31, 0)
 #define MCS_TCAM1_ETYPE_MASK   GENMASK_ULL(47, 32)
@@ -149,11 +150,20 @@ static void cn10k_mcs_free_rsrc(struct otx2_nic *pfvf, enum mcs_direction dir,
                 enum mcs_rsrc_type type, u16 hw_rsrc_id,
                 bool all)
 {
+    struct mcs_clear_stats *clear_req;
     struct mbox *mbox = &pfvf->mbox;
     struct mcs_free_rsrc_req *req;
 
     mutex_lock(&mbox->lock);
 
+    clear_req = otx2_mbox_alloc_msg_mcs_clear_stats(mbox);
+    if (!clear_req)
+        goto fail;
+
+    clear_req->id = hw_rsrc_id;
+    clear_req->type = type;
+    clear_req->dir = dir;
+
     req = otx2_mbox_alloc_msg_mcs_free_resources(mbox);
     if (!req)
         goto fail;
@@ -237,8 +247,10 @@ static int cn10k_mcs_write_rx_flowid(struct otx2_nic *pfvf,
                      struct cn10k_mcs_rxsc *rxsc, u8 hw_secy_id)
 {
     struct macsec_rx_sc *sw_rx_sc = rxsc->sw_rxsc;
+    struct macsec_secy *secy = rxsc->sw_secy;
     struct mcs_flowid_entry_write_req *req;
     struct mbox *mbox = &pfvf->mbox;
+    u64 mac_da;
     int ret;
 
     mutex_lock(&mbox->lock);
@@ -249,11 +261,16 @@ static int cn10k_mcs_write_rx_flowid(struct otx2_nic *pfvf,
         goto fail;
     }
 
+    mac_da = ether_addr_to_u64(secy->netdev->dev_addr);
+
+    req->data[0] = FIELD_PREP(MCS_TCAM0_MAC_DA_MASK, mac_da);
+    req->mask[0] = ~0ULL;
+    req->mask[0] = ~MCS_TCAM0_MAC_DA_MASK;
+
     req->data[1] = FIELD_PREP(MCS_TCAM1_ETYPE_MASK, ETH_P_MACSEC);
     req->mask[1] = ~0ULL;
     req->mask[1] &= ~MCS_TCAM1_ETYPE_MASK;
 
-    req->mask[0] = ~0ULL;
     req->mask[2] = ~0ULL;
     req->mask[3] = ~0ULL;
@@ -997,7 +1014,7 @@ static void cn10k_mcs_sync_stats(struct otx2_nic *pfvf, struct macsec_secy *secy
     /* Check if sync is really needed */
     if (secy->validate_frames == txsc->last_validate_frames &&
-        secy->protect_frames == txsc->last_protect_frames)
+        secy->replay_protect == txsc->last_replay_protect)
         return;
 
     cn10k_mcs_secy_stats(pfvf, txsc->hw_secy_id_rx, &rx_rsp, MCS_RX, true);
@@ -1019,19 +1036,19 @@ static void cn10k_mcs_sync_stats(struct otx2_nic *pfvf, struct macsec_secy *secy
         rxsc->stats.InPktsInvalid += sc_rsp.pkt_invalid_cnt;
         rxsc->stats.InPktsNotValid += sc_rsp.pkt_notvalid_cnt;
 
-        if (txsc->last_protect_frames)
+        if (txsc->last_replay_protect)
             rxsc->stats.InPktsLate += sc_rsp.pkt_late_cnt;
         else
             rxsc->stats.InPktsDelayed += sc_rsp.pkt_late_cnt;
 
-        if (txsc->last_validate_frames == MACSEC_VALIDATE_CHECK)
+        if (txsc->last_validate_frames == MACSEC_VALIDATE_DISABLED)
             rxsc->stats.InPktsUnchecked += sc_rsp.pkt_unchecked_cnt;
         else
             rxsc->stats.InPktsOK += sc_rsp.pkt_unchecked_cnt;
     }
 
     txsc->last_validate_frames = secy->validate_frames;
-    txsc->last_protect_frames = secy->protect_frames;
+    txsc->last_replay_protect = secy->replay_protect;
 }
 
 static int cn10k_mdo_open(struct macsec_context *ctx)
@@ -1100,7 +1117,7 @@ static int cn10k_mdo_add_secy(struct macsec_context *ctx)
     txsc->sw_secy = secy;
     txsc->encoding_sa = secy->tx_sc.encoding_sa;
     txsc->last_validate_frames = secy->validate_frames;
-    txsc->last_protect_frames = secy->protect_frames;
+    txsc->last_replay_protect = secy->replay_protect;
 
     list_add(&txsc->entry, &cfg->txsc_list);
 
@@ -1117,6 +1134,7 @@ static int cn10k_mdo_upd_secy(struct macsec_context *ctx)
     struct macsec_secy *secy = ctx->secy;
     struct macsec_tx_sa *sw_tx_sa;
     struct cn10k_mcs_txsc *txsc;
+    bool active;
     u8 sa_num;
     int err;
 
@@ -1124,15 +1142,19 @@ static int cn10k_mdo_upd_secy(struct macsec_context *ctx)
     if (!txsc)
         return -ENOENT;
 
-    txsc->encoding_sa = secy->tx_sc.encoding_sa;
-
-    sa_num = txsc->encoding_sa;
-    sw_tx_sa = rcu_dereference_bh(secy->tx_sc.sa[sa_num]);
+    /* Encoding SA got changed */
+    if (txsc->encoding_sa != secy->tx_sc.encoding_sa) {
+        txsc->encoding_sa = secy->tx_sc.encoding_sa;
+        sa_num = txsc->encoding_sa;
+        sw_tx_sa = rcu_dereference_bh(secy->tx_sc.sa[sa_num]);
+        active = sw_tx_sa ? sw_tx_sa->active : false;
+        cn10k_mcs_link_tx_sa2sc(pfvf, secy, txsc, sa_num, active);
+    }
 
     if (netif_running(secy->netdev)) {
         cn10k_mcs_sync_stats(pfvf, secy, txsc);
 
-        err = cn10k_mcs_secy_tx_cfg(pfvf, secy, txsc, sw_tx_sa, sa_num);
+        err = cn10k_mcs_secy_tx_cfg(pfvf, secy, txsc, NULL, 0);
         if (err)
             return err;
     }
@@ -1521,12 +1543,12 @@ static int cn10k_mdo_get_rx_sc_stats(struct macsec_context *ctx)
     rxsc->stats.InPktsInvalid += rsp.pkt_invalid_cnt;
     rxsc->stats.InPktsNotValid += rsp.pkt_notvalid_cnt;
 
-    if (secy->protect_frames)
+    if (secy->replay_protect)
         rxsc->stats.InPktsLate += rsp.pkt_late_cnt;
     else
         rxsc->stats.InPktsDelayed += rsp.pkt_late_cnt;
 
-    if (secy->validate_frames == MACSEC_VALIDATE_CHECK)
+    if (secy->validate_frames == MACSEC_VALIDATE_DISABLED)
         rxsc->stats.InPktsUnchecked += rsp.pkt_unchecked_cnt;
     else
         rxsc->stats.InPktsOK += rsp.pkt_unchecked_cnt;
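
The stats fixes above key the InPktsLate/InPktsDelayed split on replay_protect rather than protect_frames, and the unchecked/OK split on MACSEC_VALIDATE_DISABLED. A hedged sketch of that accounting decision; the field and function names below are stand-ins, not the driver's structures:

#include <stdbool.h>
#include <stdint.h>

struct rx_sc_stats {
    uint64_t in_pkts_late;
    uint64_t in_pkts_delayed;
};

static void account_late_pkts(struct rx_sc_stats *stats, bool replay_protect,
                              uint64_t pkt_late_cnt)
{
    /* With replay protection on, a "late" packet was dropped (InPktsLate);
     * with it off, the packet was merely delayed (InPktsDelayed). Keying
     * this on protect_frames, as before the fix, miscounted both.
     */
    if (replay_protect)
        stats->in_pkts_late += pkt_late_cnt;
    else
        stats->in_pkts_delayed += pkt_late_cnt;
}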


@@ -335,11 +335,11 @@ struct otx2_flow_config {
 #define OTX2_PER_VF_VLAN_FLOWS 2 /* Rx + Tx per VF */
 #define OTX2_VF_VLAN_RX_INDEX  0
 #define OTX2_VF_VLAN_TX_INDEX  1
-    u16                 max_flows;
-    u8                  dmacflt_max_flows;
     u32                 *bmap_to_dmacindex;
     unsigned long       *dmacflt_bmap;
     struct list_head    flow_list;
+    u32                 dmacflt_max_flows;
+    u16                 max_flows;
 };
 
 struct otx2_tc_info {
@@ -389,7 +389,7 @@ struct cn10k_mcs_txsc {
     struct cn10k_txsc_stats stats;
     struct list_head entry;
     enum macsec_validation_type last_validate_frames;
-    bool last_protect_frames;
+    bool last_replay_protect;
     u16 hw_secy_id_tx;
     u16 hw_secy_id_rx;
     u16 hw_flow_id;


@@ -1835,13 +1835,22 @@ int otx2_open(struct net_device *netdev)
         otx2_dmacflt_reinstall_flows(pf);
 
     err = otx2_rxtx_enable(pf, true);
-    if (err)
+    /* If a mbox communication error happens at this point then interface
+     * will end up in a state such that it is in down state but hardware
+     * mcam entries are enabled to receive the packets. Hence disable the
+     * packet I/O.
+     */
+    if (err == EIO)
+        goto err_disable_rxtx;
+    else if (err)
         goto err_tx_stop_queues;
 
     otx2_do_set_rx_mode(pf);
 
     return 0;
 
+err_disable_rxtx:
+    otx2_rxtx_enable(pf, false);
 err_tx_stop_queues:
     netif_tx_stop_all_queues(netdev);
     netif_carrier_off(netdev);
@@ -3073,8 +3082,6 @@ static void otx2_remove(struct pci_dev *pdev)
         otx2_config_pause_frm(pf);
     }
 
-    cn10k_mcs_free(pf);
-
 #ifdef CONFIG_DCB
     /* Disable PFC config */
     if (pf->pfc_en) {
@@ -3088,6 +3095,7 @@ static void otx2_remove(struct pci_dev *pdev)
     otx2_unregister_dl(pf);
     unregister_netdev(netdev);
+    cn10k_mcs_free(pf);
     otx2_sriov_disable(pf->pdev);
     otx2_sriov_vfcfg_cleanup(pf);
     if (pf->otx2_wq)


@@ -544,7 +544,7 @@ static int otx2_tc_prepare_flow(struct otx2_nic *nic, struct otx2_tc_flow *node,
         if (match.mask->flags & FLOW_DIS_IS_FRAGMENT) {
             if (ntohs(flow_spec->etype) == ETH_P_IP) {
                 flow_spec->ip_flag = IPV4_FLAG_MORE;
-                flow_mask->ip_flag = 0xff;
+                flow_mask->ip_flag = IPV4_FLAG_MORE;
                 req->features |= BIT_ULL(NPC_IPFRAG_IPV4);
             } else if (ntohs(flow_spec->etype) == ETH_P_IPV6) {
                 flow_spec->next_header = IPPROTO_FRAGMENT;


@@ -621,7 +621,7 @@ static int otx2vf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
     err = otx2vf_realloc_msix_vectors(vf);
     if (err)
-        goto err_mbox_destroy;
+        goto err_detach_rsrc;
 
     err = otx2_set_real_num_queues(netdev, qcount, qcount);
     if (err)


@@ -1918,9 +1918,7 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget,
 
     while (done < budget) {
         unsigned int pktlen, *rxdcsum;
-        bool has_hwaccel_tag = false;
         struct net_device *netdev;
-        u16 vlan_proto, vlan_tci;
         dma_addr_t dma_addr;
         u32 hash, reason;
         int mac = 0;
@@ -2055,31 +2053,16 @@ static int mtk_poll_rx(struct napi_struct *napi, int budget,
             skb_checksum_none_assert(skb);
         skb->protocol = eth_type_trans(skb, netdev);
 
-        if (netdev->features & NETIF_F_HW_VLAN_CTAG_RX) {
-            if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) {
-                if (trxd.rxd3 & RX_DMA_VTAG_V2) {
-                    vlan_proto = RX_DMA_VPID(trxd.rxd4);
-                    vlan_tci = RX_DMA_VID(trxd.rxd4);
-                    has_hwaccel_tag = true;
-                }
-            } else if (trxd.rxd2 & RX_DMA_VTAG) {
-                vlan_proto = RX_DMA_VPID(trxd.rxd3);
-                vlan_tci = RX_DMA_VID(trxd.rxd3);
-                has_hwaccel_tag = true;
-            }
-        }
-
         /* When using VLAN untagging in combination with DSA, the
          * hardware treats the MTK special tag as a VLAN and untags it.
          */
-        if (has_hwaccel_tag && netdev_uses_dsa(netdev)) {
-            unsigned int port = vlan_proto & GENMASK(2, 0);
+        if (!MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2) &&
+            (trxd.rxd2 & RX_DMA_VTAG) && netdev_uses_dsa(netdev)) {
+            unsigned int port = RX_DMA_VPID(trxd.rxd3) & GENMASK(2, 0);
 
             if (port < ARRAY_SIZE(eth->dsa_meta) &&
                 eth->dsa_meta[port])
                 skb_dst_set_noref(skb, &eth->dsa_meta[port]->dst);
-        } else if (has_hwaccel_tag) {
-            __vlan_hwaccel_put_tag(skb, htons(vlan_proto), vlan_tci);
         }
 
         if (reason == MTK_PPE_CPU_REASON_HIT_UNBIND_RATE_REACHED)
@@ -2907,29 +2890,11 @@ static netdev_features_t mtk_fix_features(struct net_device *dev,
 
 static int mtk_set_features(struct net_device *dev, netdev_features_t features)
 {
-    struct mtk_mac *mac = netdev_priv(dev);
-    struct mtk_eth *eth = mac->hw;
     netdev_features_t diff = dev->features ^ features;
-    int i;
 
     if ((diff & NETIF_F_LRO) && !(features & NETIF_F_LRO))
         mtk_hwlro_netdev_disable(dev);
 
-    /* Set RX VLAN offloading */
-    if (!(diff & NETIF_F_HW_VLAN_CTAG_RX))
-        return 0;
-
-    mtk_w32(eth, !!(features & NETIF_F_HW_VLAN_CTAG_RX),
-        MTK_CDMP_EG_CTRL);
-
-    /* sync features with other MAC */
-    for (i = 0; i < MTK_MAC_COUNT; i++) {
-        if (!eth->netdev[i] || eth->netdev[i] == dev)
-            continue;
-        eth->netdev[i]->features &= ~NETIF_F_HW_VLAN_CTAG_RX;
-        eth->netdev[i]->features |= features & NETIF_F_HW_VLAN_CTAG_RX;
-    }
-
     return 0;
 }
@@ -3247,30 +3212,6 @@ static int mtk_open(struct net_device *dev)
     struct mtk_eth *eth = mac->hw;
     int i, err;
 
-    if (mtk_uses_dsa(dev) && !eth->prog) {
-        for (i = 0; i < ARRAY_SIZE(eth->dsa_meta); i++) {
-            struct metadata_dst *md_dst = eth->dsa_meta[i];
-
-            if (md_dst)
-                continue;
-
-            md_dst = metadata_dst_alloc(0, METADATA_HW_PORT_MUX,
-                            GFP_KERNEL);
-            if (!md_dst)
-                return -ENOMEM;
-
-            md_dst->u.port_info.port_id = i;
-            eth->dsa_meta[i] = md_dst;
-        }
-    } else {
-        /* Hardware special tag parsing needs to be disabled if at least
-         * one MAC does not use DSA.
-         */
-        u32 val = mtk_r32(eth, MTK_CDMP_IG_CTRL);
-        val &= ~MTK_CDMP_STAG_EN;
-        mtk_w32(eth, val, MTK_CDMP_IG_CTRL);
-    }
-
     err = phylink_of_phy_connect(mac->phylink, mac->of_node, 0);
     if (err) {
         netdev_err(dev, "%s: could not attach PHY: %d\n", __func__,
@@ -3309,6 +3250,40 @@ static int mtk_open(struct net_device *dev)
     phylink_start(mac->phylink);
     netif_tx_start_all_queues(dev);
 
+    if (MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2))
+        return 0;
+
+    if (mtk_uses_dsa(dev) && !eth->prog) {
+        for (i = 0; i < ARRAY_SIZE(eth->dsa_meta); i++) {
+            struct metadata_dst *md_dst = eth->dsa_meta[i];
+
+            if (md_dst)
+                continue;
+
+            md_dst = metadata_dst_alloc(0, METADATA_HW_PORT_MUX,
+                            GFP_KERNEL);
+            if (!md_dst)
+                return -ENOMEM;
+
+            md_dst->u.port_info.port_id = i;
+            eth->dsa_meta[i] = md_dst;
+        }
+    } else {
+        /* Hardware special tag parsing needs to be disabled if at least
+         * one MAC does not use DSA.
+         */
+        u32 val = mtk_r32(eth, MTK_CDMP_IG_CTRL);
+
+        val &= ~MTK_CDMP_STAG_EN;
+        mtk_w32(eth, val, MTK_CDMP_IG_CTRL);
+
+        val = mtk_r32(eth, MTK_CDMQ_IG_CTRL);
+        val &= ~MTK_CDMQ_STAG_EN;
+        mtk_w32(eth, val, MTK_CDMQ_IG_CTRL);
+
+        mtk_w32(eth, 0, MTK_CDMP_EG_CTRL);
+    }
+
     return 0;
 }
@@ -3793,10 +3768,9 @@ static int mtk_hw_init(struct mtk_eth *eth, bool reset)
     if (!MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2)) {
         val = mtk_r32(eth, MTK_CDMP_IG_CTRL);
         mtk_w32(eth, val | MTK_CDMP_STAG_EN, MTK_CDMP_IG_CTRL);
-    }
 
-    /* Enable RX VLan Offloading */
-    mtk_w32(eth, 1, MTK_CDMP_EG_CTRL);
+        mtk_w32(eth, 1, MTK_CDMP_EG_CTRL);
+    }
 
     /* set interrupt delays based on current Net DIM sample */
     mtk_dim_rx(&eth->rx_dim.work);
@@ -4453,7 +4427,7 @@ static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np)
         eth->netdev[id]->hw_features |= NETIF_F_LRO;
 
     eth->netdev[id]->vlan_features = eth->soc->hw_features &
-        ~(NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX);
+        ~NETIF_F_HW_VLAN_CTAG_TX;
     eth->netdev[id]->features |= eth->soc->hw_features;
     eth->netdev[id]->ethtool_ops = &mtk_ethtool_ops;


@@ -48,7 +48,6 @@
 #define MTK_HW_FEATURES (NETIF_F_IP_CSUM | \
              NETIF_F_RXCSUM | \
              NETIF_F_HW_VLAN_CTAG_TX | \
-             NETIF_F_HW_VLAN_CTAG_RX | \
              NETIF_F_SG | NETIF_F_TSO | \
              NETIF_F_TSO6 | \
              NETIF_F_IPV6_CSUM |\


@@ -61,6 +61,8 @@ struct ionic *ionic_devlink_alloc(struct device *dev)
     struct devlink *dl;
 
     dl = devlink_alloc(&ionic_dl_ops, sizeof(struct ionic), dev);
+    if (!dl)
+        return NULL;
 
     return devlink_priv(dl);
 }


@@ -794,7 +794,7 @@ static int ionic_get_rxnfc(struct net_device *netdev,
         info->data = lif->nxqs;
         break;
     default:
-        netdev_err(netdev, "Command parameter %d is not supported\n",
+        netdev_dbg(netdev, "Command parameter %d is not supported\n",
                info->cmd);
         err = -EOPNOTSUPP;
     }


@@ -972,12 +972,15 @@ static u32 efx_mcdi_phy_module_type(struct efx_nic *efx)
 
     /* A QSFP+ NIC may actually have an SFP+ module attached.
      * The ID is page 0, byte 0.
+     * QSFP28 is of type SFF_8636, however, this is treated
+     * the same by ethtool, so we can also treat them the same.
      */
     switch (efx_mcdi_phy_get_module_eeprom_byte(efx, 0, 0)) {
-    case 0x3:
+    case 0x3:  /* SFP */
         return MC_CMD_MEDIA_SFP_PLUS;
-    case 0xc:
-    case 0xd:
+    case 0xc:  /* QSFP */
+    case 0xd:  /* QSFP+ */
+    case 0x11: /* QSFP28 */
         return MC_CMD_MEDIA_QSFP_PLUS;
     default:
         return 0;
@@ -1075,7 +1078,7 @@ int efx_mcdi_phy_get_module_info(struct efx_nic *efx, struct ethtool_modinfo *mo
     case MC_CMD_MEDIA_QSFP_PLUS:
         modinfo->type = ETH_MODULE_SFF_8436;
-        modinfo->eeprom_len = ETH_MODULE_SFF_8436_LEN;
+        modinfo->eeprom_len = ETH_MODULE_SFF_8436_MAX_LEN;
         break;
 
     default:


@@ -199,6 +199,7 @@
 #define OCP_EEE_AR     0xa41a
 #define OCP_EEE_DATA   0xa41c
 #define OCP_PHY_STATUS 0xa420
+#define OCP_INTR_EN    0xa424
 #define OCP_NCTL_CFG   0xa42c
 #define OCP_POWER_CFG  0xa430
 #define OCP_EEE_CFG    0xa432
@@ -620,6 +621,9 @@ enum spd_duplex {
 #define PHY_STAT_LAN_ON        3
 #define PHY_STAT_PWRDN         5
 
+/* OCP_INTR_EN */
+#define INTR_SPEED_FORCE   BIT(3)
+
 /* OCP_NCTL_CFG */
 #define PGA_RETURN_EN      BIT(1)
 
@@ -3023,12 +3027,16 @@ static int rtl_enable(struct r8152 *tp)
     ocp_write_byte(tp, MCU_TYPE_PLA, PLA_CR, ocp_data);
 
     switch (tp->version) {
-    case RTL_VER_08:
-    case RTL_VER_09:
-    case RTL_VER_14:
-        r8153b_rx_agg_chg_indicate(tp);
+    case RTL_VER_01:
+    case RTL_VER_02:
+    case RTL_VER_03:
+    case RTL_VER_04:
+    case RTL_VER_05:
+    case RTL_VER_06:
+    case RTL_VER_07:
         break;
     default:
+        r8153b_rx_agg_chg_indicate(tp);
         break;
     }
 
@@ -3082,7 +3090,6 @@ static void r8153_set_rx_early_timeout(struct r8152 *tp)
                    640 / 8);
         ocp_write_word(tp, MCU_TYPE_USB, USB_RX_EXTRA_AGGR_TMR,
                    ocp_data);
-        r8153b_rx_agg_chg_indicate(tp);
         break;
 
     default:
@@ -3116,7 +3123,6 @@ static void r8153_set_rx_early_size(struct r8152 *tp)
     case RTL_VER_15:
         ocp_write_word(tp, MCU_TYPE_USB, USB_RX_EARLY_SIZE,
                    ocp_data / 8);
-        r8153b_rx_agg_chg_indicate(tp);
         break;
     default:
         WARN_ON_ONCE(1);
@@ -5986,6 +5992,25 @@ static void rtl8153_disable(struct r8152 *tp)
         r8153_aldps_en(tp, true);
 }
 
+static u32 fc_pause_on_auto(struct r8152 *tp)
+{
+    return (ALIGN(mtu_to_size(tp->netdev->mtu), 1024) + 6 * 1024);
+}
+
+static u32 fc_pause_off_auto(struct r8152 *tp)
+{
+    return (ALIGN(mtu_to_size(tp->netdev->mtu), 1024) + 14 * 1024);
+}
+
+static void r8156_fc_parameter(struct r8152 *tp)
+{
+    u32 pause_on = tp->fc_pause_on ? tp->fc_pause_on : fc_pause_on_auto(tp);
+    u32 pause_off = tp->fc_pause_off ? tp->fc_pause_off : fc_pause_off_auto(tp);
+
+    ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_FULL, pause_on / 16);
+    ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_EMPTY, pause_off / 16);
+}
+
 static int rtl8156_enable(struct r8152 *tp)
 {
     u32 ocp_data;
@@ -5994,6 +6019,7 @@ static int rtl8156_enable(struct r8152 *tp)
     if (test_bit(RTL8152_UNPLUG, &tp->flags))
         return -ENODEV;
 
+    r8156_fc_parameter(tp);
     set_tx_qlen(tp);
     rtl_set_eee_plus(tp);
     r8153_set_rx_early_timeout(tp);
@@ -6025,9 +6051,24 @@ static int rtl8156_enable(struct r8152 *tp)
         ocp_write_word(tp, MCU_TYPE_USB, USB_L1_CTRL, ocp_data);
     }
 
+    ocp_data = ocp_read_word(tp, MCU_TYPE_USB, USB_FW_TASK);
+    ocp_data &= ~FC_PATCH_TASK;
+    ocp_write_word(tp, MCU_TYPE_USB, USB_FW_TASK, ocp_data);
+    usleep_range(1000, 2000);
+    ocp_data |= FC_PATCH_TASK;
+    ocp_write_word(tp, MCU_TYPE_USB, USB_FW_TASK, ocp_data);
+
     return rtl_enable(tp);
 }
 
+static void rtl8156_disable(struct r8152 *tp)
+{
+    ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_FULL, 0);
+    ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_EMPTY, 0);
+
+    rtl8153_disable(tp);
+}
+
 static int rtl8156b_enable(struct r8152 *tp)
 {
     u32 ocp_data;
@@ -6429,25 +6470,6 @@ static void rtl8153c_up(struct r8152 *tp)
     r8153b_u1u2en(tp, true);
 }
 
-static inline u32 fc_pause_on_auto(struct r8152 *tp)
-{
-    return (ALIGN(mtu_to_size(tp->netdev->mtu), 1024) + 6 * 1024);
-}
-
-static inline u32 fc_pause_off_auto(struct r8152 *tp)
-{
-    return (ALIGN(mtu_to_size(tp->netdev->mtu), 1024) + 14 * 1024);
-}
-
-static void r8156_fc_parameter(struct r8152 *tp)
-{
-    u32 pause_on = tp->fc_pause_on ? tp->fc_pause_on : fc_pause_on_auto(tp);
-    u32 pause_off = tp->fc_pause_off ? tp->fc_pause_off : fc_pause_off_auto(tp);
-
-    ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_FULL, pause_on / 16);
-    ocp_write_word(tp, MCU_TYPE_PLA, PLA_RX_FIFO_EMPTY, pause_off / 16);
-}
-
 static void rtl8156_change_mtu(struct r8152 *tp)
 {
     u32 rx_max_size = mtu_to_size(tp->netdev->mtu);
@@ -7538,6 +7560,11 @@ static void r8156_hw_phy_cfg(struct r8152 *tp)
                   ((swap_a & 0x1f) << 8) |
                   ((swap_a >> 8) & 0x1f));
     }
+
+    /* Notify the MAC when the speed is changed to force mode. */
+    data = ocp_reg_read(tp, OCP_INTR_EN);
+    data |= INTR_SPEED_FORCE;
+    ocp_reg_write(tp, OCP_INTR_EN, data);
         break;
     default:
         break;
@@ -7933,6 +7960,11 @@ static void r8156b_hw_phy_cfg(struct r8152 *tp)
         break;
     }
 
+    /* Notify the MAC when the speed is changed to force mode. */
+    data = ocp_reg_read(tp, OCP_INTR_EN);
+    data |= INTR_SPEED_FORCE;
+    ocp_reg_write(tp, OCP_INTR_EN, data);
+
     if (rtl_phy_patch_request(tp, true, true))
         return;
 
@@ -9340,7 +9372,7 @@ static int rtl_ops_init(struct r8152 *tp)
     case RTL_VER_10:
         ops->init       = r8156_init;
         ops->enable     = rtl8156_enable;
-        ops->disable    = rtl8153_disable;
+        ops->disable    = rtl8156_disable;
         ops->up         = rtl8156_up;
         ops->down       = rtl8156_down;
         ops->unload     = rtl8153_unload;
@@ -9878,6 +9910,7 @@ static struct usb_device_driver rtl8152_cfgselector_driver = {
     .probe =    rtl8152_cfgselector_probe,
     .id_table = rtl8152_table,
     .generic_subclass = 1,
+    .supports_autosuspend = 1,
 };
 
 static int __init rtl8152_driver_init(void)
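
As a worked example of the flow-control thresholds moved and wired up above, assuming mtu_to_size() is roughly the MTU plus L2 overhead (an assumption about the driver helper, made only for illustration):

#include <stdio.h>

#define ALIGN_UP(x, a) (((x) + (a) - 1) / (a) * (a))

static unsigned int mtu_to_size(unsigned int mtu)
{
    return mtu + 22;    /* VLAN Ethernet header + FCS, assumed */
}

int main(void)
{
    unsigned int mtu = 1500;
    unsigned int on  = ALIGN_UP(mtu_to_size(mtu), 1024) +  6 * 1024;
    unsigned int off = ALIGN_UP(mtu_to_size(mtu), 1024) + 14 * 1024;

    /* MTU 1500 -> 1522 -> aligned to 2048: pause is asserted when the RX
     * FIFO backlog reaches 8 KiB and released again at 16 KiB; the PLA
     * registers take these values in 16-byte units.
     */
    printf("pause_on=%u (%u units) pause_off=%u (%u units)\n",
           on, on / 16, off, off / 16);
    return 0;
}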


@@ -3560,12 +3560,14 @@ static void free_unused_bufs(struct virtnet_info *vi)
         struct virtqueue *vq = vi->sq[i].vq;
         while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
             virtnet_sq_free_unused_buf(vq, buf);
+        cond_resched();
     }
 
     for (i = 0; i < vi->max_queue_pairs; i++) {
         struct virtqueue *vq = vi->rq[i].vq;
         while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
             virtnet_rq_free_unused_buf(vq, buf);
+        cond_resched();
     }
 }


@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0-only
 config 9P_FS
     tristate "Plan 9 Resource Sharing Support (9P2000)"
-    depends on INET && NET_9P
+    depends on NET_9P
     select NETFS_SUPPORT
     help
       If you say Y here, you will get experimental support for


@@ -12,7 +12,6 @@
 #include <linux/file.h>
 #include <linux/stat.h>
 #include <linux/string.h>
-#include <linux/inet.h>
 #include <linux/pagemap.h>
 #include <linux/sched.h>
 #include <linux/swap.h>


@@ -13,7 +13,6 @@
 #include <linux/pagemap.h>
 #include <linux/stat.h>
 #include <linux/string.h>
-#include <linux/inet.h>
 #include <linux/namei.h>
 #include <linux/sched.h>
 #include <linux/slab.h>


@@ -13,7 +13,6 @@
 #include <linux/stat.h>
 #include <linux/string.h>
 #include <linux/sched.h>
-#include <linux/inet.h>
 #include <linux/slab.h>
 #include <linux/uio.h>
 #include <linux/fscache.h>


@@ -14,7 +14,6 @@
 #include <linux/file.h>
 #include <linux/stat.h>
 #include <linux/string.h>
-#include <linux/inet.h>
 #include <linux/list.h>
 #include <linux/pagemap.h>
 #include <linux/utsname.h>


@@ -15,7 +15,6 @@
 #include <linux/pagemap.h>
 #include <linux/stat.h>
 #include <linux/string.h>
-#include <linux/inet.h>
 #include <linux/namei.h>
 #include <linux/sched.h>
 #include <linux/slab.h>


@@ -13,7 +13,6 @@
 #include <linux/pagemap.h>
 #include <linux/stat.h>
 #include <linux/string.h>
-#include <linux/inet.h>
 #include <linux/namei.h>
 #include <linux/sched.h>
 #include <linux/slab.h>


@@ -12,7 +12,6 @@
 #include <linux/file.h>
 #include <linux/stat.h>
 #include <linux/string.h>
-#include <linux/inet.h>
 #include <linux/pagemap.h>
 #include <linux/mount.h>
 #include <linux/sched.h>


@@ -19,8 +19,8 @@
 #define AFSPATHMAX     1024    /* Maximum length of a pathname plus NUL */
 #define AFSOPAQUEMAX   1024    /* Maximum length of an opaque field */
 
-#define AFS_VL_MAX_LIFESPAN    (120 * HZ)
-#define AFS_PROBE_MAX_LIFESPAN (30 * HZ)
+#define AFS_VL_MAX_LIFESPAN    120
+#define AFS_PROBE_MAX_LIFESPAN 30
 
 typedef u64            afs_volid_t;
 typedef u64            afs_vnodeid_t;


@@ -128,7 +128,7 @@ struct afs_call {
     spinlock_t      state_lock;
     int             error;          /* error code */
     u32             abort_code;     /* Remote abort ID or 0 */
-    unsigned int    max_lifespan;   /* Maximum lifespan to set if not 0 */
+    unsigned int    max_lifespan;   /* Maximum lifespan in secs to set if not 0 */
     unsigned        request_size;   /* size of request data */
     unsigned        reply_max;      /* maximum size of reply */
     unsigned        count2;         /* count used in unmarshalling */


@@ -335,7 +335,9 @@ void afs_make_call(struct afs_addr_cursor *ac, struct afs_call *call, gfp_t gfp)
     /* create a call */
     rxcall = rxrpc_kernel_begin_call(call->net->socket, srx, call->key,
                      (unsigned long)call,
-                     tx_total_len, gfp,
+                     tx_total_len,
+                     call->max_lifespan,
+                     gfp,
                      (call->async ?
                       afs_wake_up_async_call :
                       afs_wake_up_call_waiter),
@@ -350,10 +352,6 @@ void afs_make_call(struct afs_addr_cursor *ac, struct afs_call *call, gfp_t gfp)
     }
 
     call->rxcall = rxcall;
-    if (call->max_lifespan)
-        rxrpc_kernel_set_max_life(call->net->socket, rxcall,
-                      call->max_lifespan);
     call->issue_time = ktime_get_real();
 
     /* send the request */

@@ -40,16 +40,17 @@ typedef void (*rxrpc_user_attach_call_t)(struct rxrpc_call *, unsigned long);
 void rxrpc_kernel_new_call_notification(struct socket *,
                     rxrpc_notify_new_call_t,
                     rxrpc_discard_new_call_t);
-struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *,
-                       struct sockaddr_rxrpc *,
-                       struct key *,
-                       unsigned long,
-                       s64,
-                       gfp_t,
-                       rxrpc_notify_rx_t,
-                       bool,
-                       enum rxrpc_interruptibility,
-                       unsigned int);
+struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *sock,
+                       struct sockaddr_rxrpc *srx,
+                       struct key *key,
+                       unsigned long user_call_ID,
+                       s64 tx_total_len,
+                       u32 hard_timeout,
+                       gfp_t gfp,
+                       rxrpc_notify_rx_t notify_rx,
+                       bool upgrade,
+                       enum rxrpc_interruptibility interruptibility,
+                       unsigned int debug_id);
 int rxrpc_kernel_send_data(struct socket *, struct rxrpc_call *,
                struct msghdr *, size_t,
                rxrpc_notify_end_tx_t);


@@ -659,6 +659,7 @@ void bond_destroy_sysfs(struct bond_net *net);
 void bond_prepare_sysfs_group(struct bonding *bond);
 int bond_sysfs_slave_add(struct slave *slave);
 void bond_sysfs_slave_del(struct slave *slave);
+void bond_xdp_set_features(struct net_device *bond_dev);
 int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
          struct netlink_ext_ack *extack);
 int bond_release(struct net_device *bond_dev, struct net_device *slave_dev);


@@ -619,6 +619,7 @@ struct nft_set_binding {
 };
 
 enum nft_trans_phase;
+void nf_tables_activate_set(const struct nft_ctx *ctx, struct nft_set *set);
 void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
                   struct nft_set_binding *binding,
                   enum nft_trans_phase phase);


@@ -17,6 +17,8 @@ if NET_9P
 
 config NET_9P_FD
     default NET_9P
+    imply INET
+    imply UNIX
     tristate "9P FD Transport"
     help
       This builds support for transports over TCP, Unix sockets and


@@ -1758,7 +1758,7 @@ int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask)
 {
     int num_frags = skb_shinfo(skb)->nr_frags;
     struct page *page, *head = NULL;
-    int i, new_frags;
+    int i, order, psize, new_frags;
     u32 d_off;
 
     if (skb_shared(skb) || skb_unclone(skb, gfp_mask))
@@ -1767,9 +1767,17 @@ int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask)
     if (!num_frags)
         goto release;
 
-    new_frags = (__skb_pagelen(skb) + PAGE_SIZE - 1) >> PAGE_SHIFT;
+    /* We might have to allocate high order pages, so compute what minimum
+     * page order is needed.
+     */
+    order = 0;
+    while ((PAGE_SIZE << order) * MAX_SKB_FRAGS < __skb_pagelen(skb))
+        order++;
+    psize = (PAGE_SIZE << order);
+
+    new_frags = (__skb_pagelen(skb) + psize - 1) >> (PAGE_SHIFT + order);
     for (i = 0; i < new_frags; i++) {
-        page = alloc_page(gfp_mask);
+        page = alloc_pages(gfp_mask | __GFP_COMP, order);
         if (!page) {
             while (head) {
                 struct page *next = (struct page *)page_private(head);
@@ -1796,11 +1804,11 @@ int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask)
         vaddr = kmap_atomic(p);
 
         while (done < p_len) {
-            if (d_off == PAGE_SIZE) {
+            if (d_off == psize) {
                 d_off = 0;
                 page = (struct page *)page_private(page);
             }
-            copy = min_t(u32, PAGE_SIZE - d_off, p_len - done);
+            copy = min_t(u32, psize - d_off, p_len - done);
             memcpy(page_address(page) + d_off,
                    vaddr + p_off + done, copy);
             done += copy;
@@ -1816,7 +1824,7 @@ int skb_copy_ubufs(struct sk_buff *skb, gfp_t gfp_mask)
 
     /* skb frags point to kernel buffers */
     for (i = 0; i < new_frags - 1; i++) {
-        __skb_fill_page_desc(skb, i, head, 0, PAGE_SIZE);
+        __skb_fill_page_desc(skb, i, head, 0, psize);
         head = (struct page *)page_private(head);
     }
     __skb_fill_page_desc(skb, new_frags - 1, head, 0, d_off);
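
A worked example of the page-order computation above, run outside the kernel; MAX_SKB_FRAGS and the 185 KB payload are illustrative values for a BIG TCP-sized skb:

#include <stdio.h>

#define PAGE_SHIFT    12
#define PAGE_SIZE     (1UL << PAGE_SHIFT)
#define MAX_SKB_FRAGS 17

int main(void)
{
    unsigned long pagelen = 185000;  /* pretend __skb_pagelen(skb) */
    int order = 0;

    while ((PAGE_SIZE << order) * MAX_SKB_FRAGS < pagelen)
        order++;

    unsigned long psize = PAGE_SIZE << order;
    int new_frags = (pagelen + psize - 1) >> (PAGE_SHIFT + order);

    /* 17 frags of order-0 pages cap out at about 68 KB, too small for a
     * BIG TCP skb; order 2 (16 KB compound pages) fits it in 12 frags.
     */
    printf("order=%d psize=%lu new_frags=%d\n", order, psize, new_frags);
    return 0;
}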


@@ -574,8 +574,8 @@ static int ethtool_get_link_ksettings(struct net_device *dev,
 static int ethtool_set_link_ksettings(struct net_device *dev,
                       void __user *useraddr)
 {
+    struct ethtool_link_ksettings link_ksettings = {};
     int err;
-    struct ethtool_link_ksettings link_ksettings;
 
     ASSERT_RTNL();


@@ -1095,12 +1095,13 @@ static netdev_tx_t sit_tunnel_xmit(struct sk_buff *skb,
 
 static void ipip6_tunnel_bind_dev(struct net_device *dev)
 {
+    struct ip_tunnel *tunnel = netdev_priv(dev);
+    int t_hlen = tunnel->hlen + sizeof(struct iphdr);
     struct net_device *tdev = NULL;
-    struct ip_tunnel *tunnel;
+    int hlen = LL_MAX_HEADER;
     const struct iphdr *iph;
     struct flowi4 fl4;
 
-    tunnel = netdev_priv(dev);
     iph = &tunnel->parms.iph;
 
     if (iph->daddr) {
@@ -1123,14 +1124,15 @@ static void ipip6_tunnel_bind_dev(struct net_device *dev)
         tdev = __dev_get_by_index(tunnel->net, tunnel->parms.link);
 
     if (tdev && !netif_is_l3_master(tdev)) {
-        int t_hlen = tunnel->hlen + sizeof(struct iphdr);
         int mtu;
 
         mtu = tdev->mtu - t_hlen;
         if (mtu < IPV6_MIN_MTU)
             mtu = IPV6_MIN_MTU;
         WRITE_ONCE(dev->mtu, mtu);
+        hlen = tdev->hard_header_len + tdev->needed_headroom;
     }
+
+    dev->needed_headroom = t_hlen + hlen;
 }
 
 static void ipip6_tunnel_update(struct ip_tunnel *t, struct ip_tunnel_parm *p,
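
A small worked sketch of the headroom computation introduced above; the LL_MAX_HEADER value and lower-device numbers are illustrative, not taken from any particular configuration:

#include <stdio.h>

int main(void)
{
    int t_hlen = 0 + 20;    /* tunnel->hlen + sizeof(struct iphdr) */
    int hlen = 128;         /* LL_MAX_HEADER worst-case fallback, assumed */

    /* When the lower device is resolved, its real requirements replace
     * the worst-case fallback before the tunnel header is added on top.
     */
    int tdev_hard_header_len = 14;  /* e.g. a plain Ethernet lower device */
    int tdev_needed_headroom = 0;

    hlen = tdev_hard_header_len + tdev_needed_headroom;
    printf("needed_headroom = %d\n", t_hlen + hlen);
    return 0;
}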


@@ -1065,7 +1065,7 @@ static void tcp_v6_send_reset(const struct sock *sk, struct sk_buff *skb)
         if (np->repflow)
             label = ip6_flowlabel(ipv6h);
         priority = sk->sk_priority;
-        txhash = sk->sk_hash;
+        txhash = sk->sk_txhash;
     }
     if (sk->sk_state == TCP_TIME_WAIT) {
         label = cpu_to_be32(inet_twsk(sk)->tw_flowlabel);


@@ -165,6 +165,7 @@ static int ncsi_aen_handler_cr(struct ncsi_dev_priv *ndp,
     nc->state = NCSI_CHANNEL_INACTIVE;
     list_add_tail_rcu(&nc->link, &ndp->channel_queue);
     spin_unlock_irqrestore(&ndp->lock, flags);
+    nc->modes[NCSI_MODE_TX_ENABLE].enable = 0;
 
     return ncsi_process_next_channel(ndp);
 }


@ -2075,8 +2075,10 @@ static int nft_chain_parse_hook(struct net *net,
if (!basechain) { if (!basechain) {
if (!ha[NFTA_HOOK_HOOKNUM] || if (!ha[NFTA_HOOK_HOOKNUM] ||
!ha[NFTA_HOOK_PRIORITY]) !ha[NFTA_HOOK_PRIORITY]) {
return -EINVAL; NL_SET_BAD_ATTR(extack, nla[NFTA_CHAIN_NAME]);
return -ENOENT;
}
hook->num = ntohl(nla_get_be32(ha[NFTA_HOOK_HOOKNUM])); hook->num = ntohl(nla_get_be32(ha[NFTA_HOOK_HOOKNUM]));
hook->priority = ntohl(nla_get_be32(ha[NFTA_HOOK_PRIORITY])); hook->priority = ntohl(nla_get_be32(ha[NFTA_HOOK_PRIORITY]));
@ -5125,12 +5127,24 @@ static void nf_tables_unbind_set(const struct nft_ctx *ctx, struct nft_set *set,
} }
} }
void nf_tables_activate_set(const struct nft_ctx *ctx, struct nft_set *set)
{
if (nft_set_is_anonymous(set))
nft_clear(ctx->net, set);
set->use++;
}
EXPORT_SYMBOL_GPL(nf_tables_activate_set);
void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set, void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
struct nft_set_binding *binding, struct nft_set_binding *binding,
enum nft_trans_phase phase) enum nft_trans_phase phase)
{ {
switch (phase) { switch (phase) {
case NFT_TRANS_PREPARE: case NFT_TRANS_PREPARE:
if (nft_set_is_anonymous(set))
nft_deactivate_next(ctx->net, set);
set->use--; set->use--;
return; return;
case NFT_TRANS_ABORT: case NFT_TRANS_ABORT:
@ -7693,7 +7707,7 @@ static const struct nla_policy nft_flowtable_hook_policy[NFTA_FLOWTABLE_HOOK_MAX
}; };
static int nft_flowtable_parse_hook(const struct nft_ctx *ctx, static int nft_flowtable_parse_hook(const struct nft_ctx *ctx,
const struct nlattr *attr, const struct nlattr * const nla[],
struct nft_flowtable_hook *flowtable_hook, struct nft_flowtable_hook *flowtable_hook,
struct nft_flowtable *flowtable, struct nft_flowtable *flowtable,
struct netlink_ext_ack *extack, bool add) struct netlink_ext_ack *extack, bool add)
@ -7705,15 +7719,18 @@ static int nft_flowtable_parse_hook(const struct nft_ctx *ctx,
INIT_LIST_HEAD(&flowtable_hook->list); INIT_LIST_HEAD(&flowtable_hook->list);
err = nla_parse_nested_deprecated(tb, NFTA_FLOWTABLE_HOOK_MAX, attr, err = nla_parse_nested_deprecated(tb, NFTA_FLOWTABLE_HOOK_MAX,
nla[NFTA_FLOWTABLE_HOOK],
nft_flowtable_hook_policy, NULL); nft_flowtable_hook_policy, NULL);
if (err < 0) if (err < 0)
return err; return err;
if (add) { if (add) {
if (!tb[NFTA_FLOWTABLE_HOOK_NUM] || if (!tb[NFTA_FLOWTABLE_HOOK_NUM] ||
!tb[NFTA_FLOWTABLE_HOOK_PRIORITY]) !tb[NFTA_FLOWTABLE_HOOK_PRIORITY]) {
return -EINVAL; NL_SET_BAD_ATTR(extack, nla[NFTA_FLOWTABLE_NAME]);
return -ENOENT;
}
hooknum = ntohl(nla_get_be32(tb[NFTA_FLOWTABLE_HOOK_NUM])); hooknum = ntohl(nla_get_be32(tb[NFTA_FLOWTABLE_HOOK_NUM]));
if (hooknum != NF_NETDEV_INGRESS) if (hooknum != NF_NETDEV_INGRESS)
@ -7898,8 +7915,8 @@ static int nft_flowtable_update(struct nft_ctx *ctx, const struct nlmsghdr *nlh,
u32 flags; u32 flags;
int err; int err;
err = nft_flowtable_parse_hook(ctx, nla[NFTA_FLOWTABLE_HOOK], err = nft_flowtable_parse_hook(ctx, nla, &flowtable_hook, flowtable,
&flowtable_hook, flowtable, extack, false); extack, false);
if (err < 0) if (err < 0)
return err; return err;
@@ -8044,8 +8061,8 @@ static int nf_tables_newflowtable(struct sk_buff *skb,
 	if (err < 0)
 		goto err3;
 
-	err = nft_flowtable_parse_hook(&ctx, nla[NFTA_FLOWTABLE_HOOK],
-				       &flowtable_hook, flowtable, extack, true);
+	err = nft_flowtable_parse_hook(&ctx, nla, &flowtable_hook, flowtable,
+				       extack, true);
 	if (err < 0)
 		goto err4;
@@ -8107,8 +8124,8 @@ static int nft_delflowtable_hook(struct nft_ctx *ctx,
 	struct nft_trans *trans;
 	int err;
 
-	err = nft_flowtable_parse_hook(ctx, nla[NFTA_FLOWTABLE_HOOK],
-				       &flowtable_hook, flowtable, extack, false);
+	err = nft_flowtable_parse_hook(ctx, nla, &flowtable_hook, flowtable,
+				       extack, false);
 	if (err < 0)
 		return err;

@@ -15,10 +15,6 @@ void nft_ct_get_fast_eval(const struct nft_expr *expr,
 	unsigned int state;
 
 	ct = nf_ct_get(pkt->skb, &ctinfo);
-	if (!ct) {
-		regs->verdict.code = NFT_BREAK;
-		return;
-	}
 
 	switch (priv->key) {
 	case NFT_CT_STATE:
@@ -30,6 +26,16 @@ void nft_ct_get_fast_eval(const struct nft_expr *expr,
 			state = NF_CT_STATE_INVALID_BIT;
 		*dest = state;
 		return;
+	default:
+		break;
+	}
+
+	if (!ct) {
+		regs->verdict.code = NFT_BREAK;
+		return;
+	}
+
+	switch (priv->key) {
 	case NFT_CT_DIRECTION:
 		nft_reg_store8(dest, CTINFO2DIR(ctinfo));
 		return;
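
Why the reordering: an untracked packet has no conntrack entry attached, yet a "ct state untracked" match must still evaluate, so the NULL-ct bail-out has to run after the state key is handled. A condensed view of the resulting flow (illustration; the NFT_CT_STATE body is abbreviated):

	ct = nf_ct_get(pkt->skb, &ctinfo);

	switch (priv->key) {
	case NFT_CT_STATE:
		/* copes with ct == NULL via the untracked/invalid state bits */
		*dest = state;
		return;
	default:
		break;
	}

	if (!ct) {	/* only the remaining keys need a conntrack entry */
		regs->verdict.code = NFT_BREAK;
		return;
	}
	/* second switch: NFT_CT_DIRECTION and friends */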

@@ -342,7 +342,7 @@ static void nft_dynset_activate(const struct nft_ctx *ctx,
 {
 	struct nft_dynset *priv = nft_expr_priv(expr);
 
-	priv->set->use++;
+	nf_tables_activate_set(ctx, priv->set);
 }
 
 static void nft_dynset_destroy(const struct nft_ctx *ctx,

@@ -167,7 +167,7 @@ static void nft_lookup_activate(const struct nft_ctx *ctx,
 {
 	struct nft_lookup *priv = nft_expr_priv(expr);
 
-	priv->set->use++;
+	nf_tables_activate_set(ctx, priv->set);
 }
 
 static void nft_lookup_destroy(const struct nft_ctx *ctx,

@@ -185,7 +185,7 @@ static void nft_objref_map_activate(const struct nft_ctx *ctx,
 {
 	struct nft_objref_map *priv = nft_expr_priv(expr);
 
-	priv->set->use++;
+	nf_tables_activate_set(ctx, priv->set);
 }
 
 static void nft_objref_map_destroy(const struct nft_ctx *ctx,

@@ -2033,7 +2033,7 @@ static int packet_sendmsg_spkt(struct socket *sock, struct msghdr *msg,
 		goto retry;
 	}
 
-	if (!dev_validate_header(dev, skb->data, len)) {
+	if (!dev_validate_header(dev, skb->data, len) || !skb->len) {
 		err = -EINVAL;
 		goto out_unlock;
 	}
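
Seen from userspace (illustration only, not from the patch): a minimal SOCK_PACKET sender; "lo" is an arbitrary interface name for the demo and the socket needs CAP_NET_RAW. With the fix, the zero-byte send fails with EINVAL instead of handing a zero-length skb to the driver:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <linux/if_ether.h>

int main(void)
{
	struct sockaddr sa = { .sa_family = AF_PACKET };
	int fd = socket(AF_PACKET, SOCK_PACKET, htons(ETH_P_ALL));

	if (fd < 0) {
		perror("socket");	/* needs CAP_NET_RAW */
		return 1;
	}
	strncpy(sa.sa_data, "lo", sizeof(sa.sa_data) - 1);

	/* zero-byte payload: now rejected up front with EINVAL */
	if (sendto(fd, "", 0, 0, &sa, sizeof(sa)) < 0)
		perror("sendto");
	return 0;
}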

@@ -265,6 +265,7 @@ static int rxrpc_listen(struct socket *sock, int backlog)
  * @key: The security context to use (defaults to socket setting)
  * @user_call_ID: The ID to use
  * @tx_total_len: Total length of data to transmit during the call (or -1)
+ * @hard_timeout: The maximum lifespan of the call in sec
  * @gfp: The allocation constraints
  * @notify_rx: Where to send notifications instead of socket queue
  * @upgrade: Request service upgrade for call
@@ -283,6 +284,7 @@ struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *sock,
 					   struct key *key,
 					   unsigned long user_call_ID,
 					   s64 tx_total_len,
+					   u32 hard_timeout,
 					   gfp_t gfp,
 					   rxrpc_notify_rx_t notify_rx,
 					   bool upgrade,
@@ -313,6 +315,7 @@ struct rxrpc_call *rxrpc_kernel_begin_call(struct socket *sock,
 	p.tx_total_len = tx_total_len;
 	p.interruptibility = interruptibility;
 	p.kernel = true;
+	p.timeouts.hard = hard_timeout;
 
 	memset(&cp, 0, sizeof(cp));
 	cp.local = rx->local;
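
Kernel users of rxrpc_kernel_begin_call() gain the new argument between tx_total_len and gfp; passing 0 keeps the old behaviour of no overall lifetime (per the "or 0" in the struct comment below). A hedged call-site sketch; every name other than the rxrpc API itself is invented for the example:

	call = rxrpc_kernel_begin_call(sock, &srx, key, user_call_ID,
				       tx_total_len,
				       30,	/* hard_timeout: give up after 30 s */
				       GFP_KERNEL, my_notify_rx,
				       false,	/* upgrade */
				       RXRPC_INTERRUPTIBLE, debug_id);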

@@ -616,6 +616,7 @@ struct rxrpc_call {
 	unsigned long	expect_term_by;	/* When we expect call termination by */
 	u32		next_rx_timo;	/* Timeout for next Rx packet (jif) */
 	u32		next_req_timo;	/* Timeout for next Rx request packet (jif) */
+	u32		hard_timo;	/* Maximum lifetime or 0 (jif) */
 	struct timer_list timer;	/* Combined event timer */
 	struct work_struct destroyer;	/* In-process-context destroyer */
 	rxrpc_notify_rx_t notify_rx;	/* kernel service Rx notification function */

@@ -224,6 +224,13 @@ static struct rxrpc_call *rxrpc_alloc_client_call(struct rxrpc_sock *rx,
 	if (cp->exclusive)
 		__set_bit(RXRPC_CALL_EXCLUSIVE, &call->flags);
 
+	if (p->timeouts.normal)
+		call->next_rx_timo = min(msecs_to_jiffies(p->timeouts.normal), 1UL);
+	if (p->timeouts.idle)
+		call->next_req_timo = min(msecs_to_jiffies(p->timeouts.idle), 1UL);
+	if (p->timeouts.hard)
+		call->hard_timo = p->timeouts.hard * HZ;
+
 	ret = rxrpc_init_client_call_security(call);
 	if (ret < 0) {
 		rxrpc_prefail_call(call, RXRPC_CALL_LOCAL_ERROR, ret);
@@ -255,7 +262,7 @@ void rxrpc_start_call_timer(struct rxrpc_call *call)
 	call->keepalive_at = j;
 	call->expect_rx_by = j;
 	call->expect_req_by = j;
-	call->expect_term_by = j;
+	call->expect_term_by = j + call->hard_timo;
 	call->timer.expires = now;
 }

@@ -50,15 +50,11 @@ static int rxrpc_wait_to_be_connected(struct rxrpc_call *call, long *timeo)
 	_enter("%d", call->debug_id);
 
 	if (rxrpc_call_state(call) != RXRPC_CALL_CLIENT_AWAIT_CONN)
-		return call->error;
+		goto no_wait;
 
 	add_wait_queue_exclusive(&call->waitq, &myself);
 
 	for (;;) {
-		ret = call->error;
-		if (ret < 0)
-			break;
-
 		switch (call->interruptibility) {
 		case RXRPC_INTERRUPTIBLE:
 		case RXRPC_PREINTERRUPTIBLE:
@@ -69,10 +65,9 @@ static int rxrpc_wait_to_be_connected(struct rxrpc_call *call, long *timeo)
 			set_current_state(TASK_UNINTERRUPTIBLE);
 			break;
 		}
-		if (rxrpc_call_state(call) != RXRPC_CALL_CLIENT_AWAIT_CONN) {
-			ret = call->error;
+
+		if (rxrpc_call_state(call) != RXRPC_CALL_CLIENT_AWAIT_CONN)
 			break;
-		}
+
 		if ((call->interruptibility == RXRPC_INTERRUPTIBLE ||
 		     call->interruptibility == RXRPC_PREINTERRUPTIBLE) &&
 		    signal_pending(current)) {
@@ -85,6 +80,7 @@ static int rxrpc_wait_to_be_connected(struct rxrpc_call *call, long *timeo)
 	remove_wait_queue(&call->waitq, &myself);
 	__set_current_state(TASK_RUNNING);
 
+no_wait:
 	if (ret == 0 && rxrpc_call_is_complete(call))
 		ret = call->error;
@@ -655,15 +651,19 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len)
 		if (IS_ERR(call))
 			return PTR_ERR(call);
 		/* ... and we have the call lock. */
+		p.call.nr_timeouts = 0;
 		ret = 0;
 		if (rxrpc_call_is_complete(call))
 			goto out_put_unlock;
 	} else {
 		switch (rxrpc_call_state(call)) {
-		case RXRPC_CALL_UNINITIALISED:
 		case RXRPC_CALL_CLIENT_AWAIT_CONN:
-		case RXRPC_CALL_SERVER_PREALLOC:
 		case RXRPC_CALL_SERVER_SECURING:
+			if (p.command == RXRPC_CMD_SEND_ABORT)
+				break;
+			fallthrough;
+		case RXRPC_CALL_UNINITIALISED:
+		case RXRPC_CALL_SERVER_PREALLOC:
 			rxrpc_put_call(call, rxrpc_call_put_sendmsg);
 			ret = -EBUSY;
 			goto error_release_sock;
@@ -703,7 +703,7 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len)
 			fallthrough;
 		case 1:
 			if (p.call.timeouts.hard > 0) {
-				j = msecs_to_jiffies(p.call.timeouts.hard);
+				j = p.call.timeouts.hard * HZ;
 				now = jiffies;
 				j += now;
 				WRITE_ONCE(call->expect_term_by, j);
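
The hard timeout supplied by userspace is in seconds, so seconds-to-jiffies is a plain multiply by HZ; msecs_to_jiffies() treated the value as milliseconds. A quick units check (illustrative, with HZ == 250):

/*
 * Requested hard timeout: 30 (seconds)
 *
 *   msecs_to_jiffies(30)  ->  8 jiffies    (~30 ms: call aborted ~1000x early)
 *   30 * HZ               ->  7500 jiffies (30 s, as intended)
 */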

@@ -264,7 +264,7 @@ TC_INDIRECT_SCOPE int tcf_mirred_act(struct sk_buff *skb,
 		goto out;
 	}
 
-	if (unlikely(!(dev->flags & IFF_UP))) {
+	if (unlikely(!(dev->flags & IFF_UP)) || !netif_carrier_ok(dev)) {
 		net_notice_ratelimited("tc mirred to Houston: device %s is down\n",
 				       dev->name);
 		goto out;

@@ -258,7 +258,7 @@ static int tcf_pedit_init(struct net *net, struct nlattr *nla,
 		if (!offmask && cur % 4) {
 			NL_SET_ERR_MSG_MOD(extack, "Offsets must be on 32bit boundaries");
 			ret = -EINVAL;
-			goto put_chain;
+			goto out_free_keys;
 		}
 
 		/* sanitize the shift value for any later use */
@@ -291,6 +291,8 @@ static int tcf_pedit_init(struct net *net, struct nlattr *nla,
 	return ret;
 
+out_free_keys:
+	kfree(nparms->tcfp_keys);
 put_chain:
 	if (goto_ch)
 		tcf_chain_put_by_act(goto_ch);

@@ -1589,6 +1589,7 @@ static int tcf_block_bind(struct tcf_block *block,
 err_unroll:
 	list_for_each_entry_safe(block_cb, next, &bo->cb_list, list) {
+		list_del(&block_cb->driver_list);
 		if (i-- > 0) {
 			list_del(&block_cb->list);
 			tcf_block_playback_offloads(block, block_cb->cb,

@@ -2210,10 +2210,10 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
 	spin_lock(&tp->lock);
 	if (!handle) {
 		handle = 1;
-		err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
+		err = idr_alloc_u32(&head->handle_idr, NULL, &handle,
 				    INT_MAX, GFP_ATOMIC);
 	} else {
-		err = idr_alloc_u32(&head->handle_idr, fnew, &handle,
+		err = idr_alloc_u32(&head->handle_idr, NULL, &handle,
 				    handle, GFP_ATOMIC);
 
 		/* Filter with specified handle was concurrently
@@ -2339,7 +2339,8 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
 errout_mask:
 	fl_mask_put(head, fnew->mask);
 errout_idr:
-	idr_remove(&head->handle_idr, fnew->handle);
+	if (!fold)
+		idr_remove(&head->handle_idr, fnew->handle);
 	__fl_put(fnew);
 errout_tb:
 	kfree(tb);
@@ -2378,7 +2379,7 @@ static void fl_walk(struct tcf_proto *tp, struct tcf_walker *arg,
 	rcu_read_lock();
 	idr_for_each_entry_continue_ul(&head->handle_idr, f, tmp, id) {
 		/* don't return filters that are being deleted */
-		if (!refcount_inc_not_zero(&f->refcnt))
+		if (!f || !refcount_inc_not_zero(&f->refcnt))
 			continue;
 		rcu_read_unlock();
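
All three hunks serve one pattern: the handle is reserved in the IDR with a NULL value and the real pointer is only published once the filter is fully set up, so concurrent walkers must tolerate NULL slots (the "!f ||" check). A sketch of the reserve-then-publish idiom, under the assumption that publication uses idr_replace() as elsewhere in cls_flower:

	/* reserve: readers see a NULL slot, never a half-built filter */
	err = idr_alloc_u32(&head->handle_idr, NULL, &handle,
			    INT_MAX, GFP_ATOMIC);
	if (err)
		goto errout;

	/* ... finish initializing fnew ... */

	/* publish: only now can walkers and lookups observe the filter */
	idr_replace(&head->handle_idr, fnew, fnew->handle);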

@@ -292,6 +292,11 @@ setup_hs()
 	ip netns exec ${hsname} sysctl -wq net.ipv6.conf.all.accept_dad=0
 	ip netns exec ${hsname} sysctl -wq net.ipv6.conf.default.accept_dad=0
 
+	# disable the rp_filter otherwise the kernel gets confused about how
+	# to route decap ipv4 packets.
+	ip netns exec ${rtname} sysctl -wq net.ipv4.conf.all.rp_filter=0
+	ip netns exec ${rtname} sysctl -wq net.ipv4.conf.default.rp_filter=0
+
 	ip -netns ${hsname} link add veth0 type veth peer name ${rtveth}
 	ip -netns ${hsname} link set ${rtveth} netns ${rtname}
 	ip -netns ${hsname} addr add ${IPv6_HS_NETWORK}::${hs}/64 dev veth0 nodad
@@ -316,11 +321,6 @@ setup_hs()
 	ip netns exec ${rtname} sysctl -wq net.ipv6.conf.${rtveth}.proxy_ndp=1
 	ip netns exec ${rtname} sysctl -wq net.ipv4.conf.${rtveth}.proxy_arp=1
 
-	# disable the rp_filter otherwise the kernel gets confused about how
-	# to route decap ipv4 packets.
-	ip netns exec ${rtname} sysctl -wq net.ipv4.conf.all.rp_filter=0
-	ip netns exec ${rtname} sysctl -wq net.ipv4.conf.${rtveth}.rp_filter=0
-
 	ip netns exec ${rtname} sh -c "echo 1 > /proc/sys/net/vrf/strict_mode"
 }

@@ -8,8 +8,11 @@ TEST_PROGS := nft_trans_stress.sh nft_fib.sh nft_nat.sh bridge_brouter.sh \
 	ipip-conntrack-mtu.sh conntrack_tcp_unreplied.sh \
 	conntrack_vrf.sh nft_synproxy.sh rpath.sh
 
-CFLAGS += $(shell pkg-config --cflags libmnl 2>/dev/null || echo "-I/usr/include/libmnl")
-LDLIBS = -lmnl
+HOSTPKG_CONFIG := pkg-config
+
+CFLAGS += $(shell $(HOSTPKG_CONFIG) --cflags libmnl 2>/dev/null)
+LDLIBS += $(shell $(HOSTPKG_CONFIG) --libs libmnl 2>/dev/null || echo -lmnl)
 
 TEST_GEN_FILES = nf-queue connect_close
 
 include ../lib.mk