Merge tag 'net-6.0-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from wifi, netfilter and can.

  A handful of awaited fixes here - a revert of the FEC changes, a
  Bluetooth fix, and fixes for iwlwifi log spew.

  We added a warning in PHY/MDIO code which is triggering on a couple of
  platforms in a false-positive-ish way. If we can't iron that out over
  the week we'll drop it and re-add for 6.1.

  I've added a new "follow-up fixes" section for fixes to fixes in
  6.0-rcs, but it may give the false impression that those fixes are
  problematic or that more testing time would have caught them. So it's
  likely a one-time thing.

  Follow-up fixes:

   - nf_tables_addchain: fix nft_counters_enabled underflow

   - ebtables: fix memory leak when blob is malformed

   - nf_ct_ftp: fix deadlock when nat rewrite is needed

  Current release - regressions:

   - Revert "fec: Restart PPS after link state change" and the related
     "net: fec: Use a spinlock to guard `fep->ptp_clk_on`"

   - Bluetooth: fix HCIGETDEVINFO regression

   - wifi: mt76: fix 5 GHz connection regression on mt76x0/mt76x2

   - mptcp: fix fwd memory accounting on coalesce

   - rwlock removal fallout:
      - ipmr: always call ip{,6}_mr_forward() from RCU read-side
        critical section
      - ipv6: fix crash when IPv6 is administratively disabled

   - tcp: read multiple skbs in tcp_read_skb()

   - mdio_bus_phy_resume state warning fallout:
      - eth: ravb: fix PHY state warning splat during system resume
      - eth: sh_eth: fix PHY state warning splat during system resume

  Current release - new code bugs:

   - wifi: iwlwifi: don't spam logs with NSS>2 messages

   - eth: mtk_eth_soc: enable XDP support just for MT7986 SoC

  Previous releases - regressions:

   - bonding: fix NULL deref in bond_rr_gen_slave_id

   - wifi: iwlwifi: mark IWLMEI as broken

  Previous releases - always broken:

   - nf_conntrack helpers:
      - irc: tighten matching on DCC message
      - sip: fix ct_sip_walk_headers
      - osf: fix possible bogus match in nf_osf_find()

   - ipvlan: fix out-of-bound bugs caused by unset skb->mac_header

   - core: fix flow symmetric hash

   - bonding, team: unsync device addresses on ndo_stop

   - phy: micrel: fix shared interrupt on LAN8814"

* tag 'net-6.0-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (83 commits)
  selftests: forwarding: add shebang for sch_red.sh
  bnxt: prevent skb UAF after handing over to PTP worker
  net: marvell: Fix refcounting bugs in prestera_port_sfp_bind()
  net: sched: fix possible refcount leak in tc_new_tfilter()
  net: sunhme: Fix packet reception for len < RX_COPY_THRESHOLD
  udp: Use WARN_ON_ONCE() in udp_read_skb()
  selftests: bonding: cause oops in bond_rr_gen_slave_id
  bonding: fix NULL deref in bond_rr_gen_slave_id
  net: phy: micrel: fix shared interrupt on LAN8814
  net/smc: Stop the CLC flow if no link to map buffers on
  ice: Fix ice_xdp_xmit() when XDP TX queue number is not sufficient
  net: atlantic: fix potential memory leak in aq_ndev_close()
  can: gs_usb: gs_usb_set_phys_id(): return with error if identify is not supported
  can: gs_usb: gs_can_open(): fix race dev->can.state condition
  can: flexcan: flexcan_mailbox_read() fix return value for drop = true
  net: sh_eth: Fix PHY state warning splat during system resume
  net: ravb: Fix PHY state warning splat during system resume
  netfilter: nf_ct_ftp: fix deadlock when nat rewrite is needed
  netfilter: ebtables: fix memory leak when blob is malformed
  netfilter: nf_tables: fix percpu memory leak at nf_tables_addchain()
  ...
Merged by Linus Torvalds on 2022-09-22 10:58:13 -07:00 in commit 504c25cb76.
94 changed files with 1059 additions and 434 deletions.

@@ -47,7 +47,6 @@ allow_join_initial_addr_port - BOOLEAN
Default: 1
pm_type - INTEGER
Set the default path manager type to use for each new MPTCP
socket. In-kernel path management will control subflow
connections and address advertisements according to

@@ -70,15 +70,6 @@ nf_conntrack_generic_timeout - INTEGER (seconds)
Default for generic timeout. This refers to layer 4 unknown/unsupported
protocols.
nf_conntrack_helper - BOOLEAN
- 0 - disabled (default)
- not 0 - enabled
Enable automatic conntrack helper assignment.
If disabled it is required to set up iptables rules to assign
helpers to connections. See the CT target description in the
iptables-extensions(8) man page for further information.
nf_conntrack_icmp_timeout - INTEGER (seconds)
default 30

@@ -8652,8 +8652,8 @@ F: drivers/input/touchscreen/goodix*
GOOGLE ETHERNET DRIVERS
M: Jeroen de Borst <jeroendb@google.com>
R: Catherine Sullivan <csully@google.com>
R: David Awogbemila <awogbemila@google.com>
M: Catherine Sullivan <csully@google.com>
R: Shailend Chand <shailend@google.com>
L: netdev@vger.kernel.org
S: Supported
F: Documentation/networking/device_drivers/ethernet/google/gve.rst
@@ -16857,6 +16857,7 @@ F: drivers/net/ethernet/qualcomm/emac/
QUALCOMM ETHQOS ETHERNET DRIVER
M: Vinod Koul <vkoul@kernel.org>
R: Bhupesh Sharma <bhupesh.sharma@linaro.org>
L: netdev@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/net/qcom,ethqos.txt
@@ -19959,6 +19960,7 @@ S: Supported
F: drivers/net/team/
F: include/linux/if_team.h
F: include/uapi/linux/if_team.h
F: tools/testing/selftests/net/team/
TECHNOLOGIC SYSTEMS TS-5500 PLATFORM SUPPORT
M: "Savoir-faire Linux Inc." <kernel@savoirfairelinux.com>

@@ -88,8 +88,9 @@ static const u8 null_mac_addr[ETH_ALEN + 2] __long_aligned = {
static const u16 ad_ticks_per_sec = 1000 / AD_TIMER_INTERVAL;
static const int ad_delta_in_ticks = (AD_TIMER_INTERVAL * HZ) / 1000;
static const u8 lacpdu_mcast_addr[ETH_ALEN + 2] __long_aligned =
MULTICAST_LACPDU_ADDR;
const u8 lacpdu_mcast_addr[ETH_ALEN + 2] __long_aligned = {
0x01, 0x80, 0xC2, 0x00, 0x00, 0x02
};
/* ================= main 802.3ad protocol functions ================== */
static int ad_lacpdu_send(struct port *port);

@@ -865,12 +865,8 @@ static void bond_hw_addr_flush(struct net_device *bond_dev,
dev_uc_unsync(slave_dev, bond_dev);
dev_mc_unsync(slave_dev, bond_dev);
if (BOND_MODE(bond) == BOND_MODE_8023AD) {
/* del lacpdu mc addr from mc list */
u8 lacpdu_multicast[ETH_ALEN] = MULTICAST_LACPDU_ADDR;
dev_mc_del(slave_dev, lacpdu_multicast);
}
if (BOND_MODE(bond) == BOND_MODE_8023AD)
dev_mc_del(slave_dev, lacpdu_mcast_addr);
}
/*--------------------------- Active slave change ---------------------------*/
@@ -890,7 +886,8 @@ static void bond_hw_addr_swap(struct bonding *bond, struct slave *new_active,
if (bond->dev->flags & IFF_ALLMULTI)
dev_set_allmulti(old_active->dev, -1);
bond_hw_addr_flush(bond->dev, old_active->dev);
if (bond->dev->flags & IFF_UP)
bond_hw_addr_flush(bond->dev, old_active->dev);
}
if (new_active) {
@@ -901,10 +898,12 @@ static void bond_hw_addr_swap(struct bonding *bond, struct slave *new_active,
if (bond->dev->flags & IFF_ALLMULTI)
dev_set_allmulti(new_active->dev, 1);
netif_addr_lock_bh(bond->dev);
dev_uc_sync(new_active->dev, bond->dev);
dev_mc_sync(new_active->dev, bond->dev);
netif_addr_unlock_bh(bond->dev);
if (bond->dev->flags & IFF_UP) {
netif_addr_lock_bh(bond->dev);
dev_uc_sync(new_active->dev, bond->dev);
dev_mc_sync(new_active->dev, bond->dev);
netif_addr_unlock_bh(bond->dev);
}
}
}
@@ -2166,16 +2165,14 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
}
}
netif_addr_lock_bh(bond_dev);
dev_mc_sync_multiple(slave_dev, bond_dev);
dev_uc_sync_multiple(slave_dev, bond_dev);
netif_addr_unlock_bh(bond_dev);
if (bond_dev->flags & IFF_UP) {
netif_addr_lock_bh(bond_dev);
dev_mc_sync_multiple(slave_dev, bond_dev);
dev_uc_sync_multiple(slave_dev, bond_dev);
netif_addr_unlock_bh(bond_dev);
if (BOND_MODE(bond) == BOND_MODE_8023AD) {
/* add lacpdu mc addr to mc list */
u8 lacpdu_multicast[ETH_ALEN] = MULTICAST_LACPDU_ADDR;
dev_mc_add(slave_dev, lacpdu_multicast);
if (BOND_MODE(bond) == BOND_MODE_8023AD)
dev_mc_add(slave_dev, lacpdu_mcast_addr);
}
}
@@ -2447,7 +2444,8 @@ static int __bond_release_one(struct net_device *bond_dev,
if (old_flags & IFF_ALLMULTI)
dev_set_allmulti(slave_dev, -1);
bond_hw_addr_flush(bond_dev, slave_dev);
if (old_flags & IFF_UP)
bond_hw_addr_flush(bond_dev, slave_dev);
}
slave_disable_netpoll(slave);
@@ -4184,6 +4182,12 @@ static int bond_open(struct net_device *bond_dev)
struct list_head *iter;
struct slave *slave;
if (BOND_MODE(bond) == BOND_MODE_ROUNDROBIN && !bond->rr_tx_counter) {
bond->rr_tx_counter = alloc_percpu(u32);
if (!bond->rr_tx_counter)
return -ENOMEM;
}
/* reset slave->backup and slave->inactive */
if (bond_has_slaves(bond)) {
bond_for_each_slave(bond, slave, iter) {
@@ -4221,6 +4225,9 @@ static int bond_open(struct net_device *bond_dev)
/* register to receive LACPDUs */
bond->recv_probe = bond_3ad_lacpdu_recv;
bond_3ad_initiate_agg_selection(bond, 1);
bond_for_each_slave(bond, slave, iter)
dev_mc_add(slave->dev, lacpdu_mcast_addr);
}
if (bond_mode_can_use_xmit_hash(bond))
@@ -4232,6 +4239,7 @@ static int bond_close(struct net_device *bond_dev)
static int bond_close(struct net_device *bond_dev)
{
struct bonding *bond = netdev_priv(bond_dev);
struct slave *slave;
bond_work_cancel_all(bond);
bond->send_peer_notif = 0;
@@ -4239,6 +4247,19 @@ static int bond_close(struct net_device *bond_dev)
bond_alb_deinitialize(bond);
bond->recv_probe = NULL;
if (bond_uses_primary(bond)) {
rcu_read_lock();
slave = rcu_dereference(bond->curr_active_slave);
if (slave)
bond_hw_addr_flush(bond_dev, slave->dev);
rcu_read_unlock();
} else {
struct list_head *iter;
bond_for_each_slave(bond, slave, iter)
bond_hw_addr_flush(bond_dev, slave->dev);
}
return 0;
}
@@ -6228,15 +6249,6 @@ static int bond_init(struct net_device *bond_dev)
if (!bond->wq)
return -ENOMEM;
if (BOND_MODE(bond) == BOND_MODE_ROUNDROBIN) {
bond->rr_tx_counter = alloc_percpu(u32);
if (!bond->rr_tx_counter) {
destroy_workqueue(bond->wq);
bond->wq = NULL;
return -ENOMEM;
}
}
spin_lock_init(&bond->stats_lock);
netdev_lockdep_set_classes(bond_dev);

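The bonding hunks above move address syncing into the IFF_UP-guarded open/stop paths, so every sync gets a matching flush even across repeated up/down cycles. A minimal user-space model of that invariant; the struct and helpers are simplified stand-ins, not the kernel API:

#include <stdio.h>

struct slave_model { int mc_entries; };	/* stand-in for a slave's mc list */

static void bond_open_model(struct slave_model *s, int master_up)
{
	if (master_up)			/* sync only while the master is up */
		s->mc_entries++;	/* dev_mc_add(slave, lacpdu_mcast_addr) */
}

static void bond_close_model(struct slave_model *s)
{
	if (s->mc_entries)		/* ndo_stop now flushes what open added */
		s->mc_entries--;	/* bond_hw_addr_flush() */
}

int main(void)
{
	struct slave_model s = { 0 };

	for (int i = 0; i < 3; i++) {	/* repeated up/down cycles */
		bond_open_model(&s, 1);
		bond_close_model(&s);
	}
	printf("stale mc entries after 3 cycles: %d\n", s.mc_entries); /* 0 */
	return 0;
}
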
@@ -941,11 +941,6 @@ static struct sk_buff *flexcan_mailbox_read(struct can_rx_offload *offload,
u32 reg_ctrl, reg_id, reg_iflag1;
int i;
if (unlikely(drop)) {
skb = ERR_PTR(-ENOBUFS);
goto mark_as_read;
}
mb = flexcan_get_mb(priv, n);
if (priv->devtype_data.quirks & FLEXCAN_QUIRK_USE_RX_MAILBOX) {
@@ -974,6 +969,11 @@ static struct sk_buff *flexcan_mailbox_read(struct can_rx_offload *offload,
reg_ctrl = priv->read(&mb->can_ctrl);
}
if (unlikely(drop)) {
skb = ERR_PTR(-ENOBUFS);
goto mark_as_read;
}
if (reg_ctrl & FLEXCAN_MB_CNT_EDL)
skb = alloc_canfd_skb(offload->dev, &cfd);
else

@@ -824,6 +824,7 @@ static int gs_can_open(struct net_device *netdev)
flags |= GS_CAN_MODE_TRIPLE_SAMPLE;
/* finally start device */
dev->can.state = CAN_STATE_ERROR_ACTIVE;
dm->mode = cpu_to_le32(GS_CAN_MODE_START);
dm->flags = cpu_to_le32(flags);
rc = usb_control_msg(interface_to_usbdev(dev->iface),
@@ -835,13 +836,12 @@ static int gs_can_open(struct net_device *netdev)
if (rc < 0) {
netdev_err(netdev, "Couldn't start device (err=%d)\n", rc);
kfree(dm);
dev->can.state = CAN_STATE_STOPPED;
return rc;
}
kfree(dm);
dev->can.state = CAN_STATE_ERROR_ACTIVE;
parent->active_channels++;
if (!(dev->can.ctrlmode & CAN_CTRLMODE_LISTENONLY))
netif_start_queue(netdev);
@@ -925,17 +925,21 @@ static int gs_usb_set_identify(struct net_device *netdev, bool do_identify)
}
/* blink LEDs for finding this interface */
static int gs_usb_set_phys_id(struct net_device *dev,
static int gs_usb_set_phys_id(struct net_device *netdev,
enum ethtool_phys_id_state state)
{
const struct gs_can *dev = netdev_priv(netdev);
int rc = 0;
if (!(dev->feature & GS_CAN_FEATURE_IDENTIFY))
return -EOPNOTSUPP;
switch (state) {
case ETHTOOL_ID_ACTIVE:
rc = gs_usb_set_identify(dev, GS_CAN_IDENTIFY_ON);
rc = gs_usb_set_identify(netdev, GS_CAN_IDENTIFY_ON);
break;
case ETHTOOL_ID_INACTIVE:
rc = gs_usb_set_identify(dev, GS_CAN_IDENTIFY_OFF);
rc = gs_usb_set_identify(netdev, GS_CAN_IDENTIFY_OFF);
break;
default:
break;
@@ -1072,9 +1076,10 @@ static struct gs_can *gs_make_candev(unsigned int channel,
dev->feature |= GS_CAN_FEATURE_REQ_USB_QUIRK_LPC546XX |
GS_CAN_FEATURE_QUIRK_BREQ_CANTACT_PRO;
if (le32_to_cpu(dconf->sw_version) > 1)
if (feature & GS_CAN_FEATURE_IDENTIFY)
netdev->ethtool_ops = &gs_usb_ethtool_ops;
/* GS_CAN_FEATURE_IDENTIFY is only supported for sw_version > 1 */
if (!(le32_to_cpu(dconf->sw_version) > 1 &&
feature & GS_CAN_FEATURE_IDENTIFY))
dev->feature &= ~GS_CAN_FEATURE_IDENTIFY;
kfree(bt_const);

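A standalone sketch of the gs_can_open() race fix above: dev->can.state must already read ERROR_ACTIVE when the start command reaches the hardware, and must be rolled back to STOPPED if the start fails. Plain C model; the enum and helper are stand-ins, not the gs_usb API:

#include <stdio.h>

enum can_state_model { STOPPED, ERROR_ACTIVE };

static enum can_state_model state = STOPPED;

static int can_open_model(int start_fails)
{
	/* set the state first: anything the "hardware" raises right after
	 * the start command must already observe a running state */
	state = ERROR_ACTIVE;

	if (start_fails) {
		state = STOPPED;	/* undo on error, as the fix does */
		return -1;
	}
	return 0;
}

int main(void)
{
	can_open_model(1);
	printf("state after failed open: %s\n",
	       state == STOPPED ? "STOPPED" : "ERROR_ACTIVE");
	return 0;
}
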
@@ -244,10 +244,6 @@ void lan937x_port_setup(struct ksz_device *dev, int port, bool cpu_port)
lan937x_port_cfg(dev, port, REG_PORT_CTRL_0,
PORT_TAIL_TAG_ENABLE, true);
/* disable frame check length field */
lan937x_port_cfg(dev, port, REG_PORT_MAC_CTRL_0, PORT_CHECK_LENGTH,
false);
/* set back pressure for half duplex */
lan937x_port_cfg(dev, port, REG_PORT_MAC_CTRL_1, PORT_BACK_PRESSURE,
true);

@@ -94,11 +94,8 @@ static int aq_ndev_close(struct net_device *ndev)
int err = 0;
err = aq_nic_stop(aq_nic);
if (err < 0)
goto err_exit;
aq_nic_deinit(aq_nic, true);
err_exit:
return err;
}

@@ -659,7 +659,6 @@ static void bnxt_tx_int(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts)
for (i = 0; i < nr_pkts; i++) {
struct bnxt_sw_tx_bd *tx_buf;
bool compl_deferred = false;
struct sk_buff *skb;
int j, last;
@@ -668,6 +667,8 @@ static void bnxt_tx_int(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts)
skb = tx_buf->skb;
tx_buf->skb = NULL;
tx_bytes += skb->len;
if (tx_buf->is_push) {
tx_buf->is_push = 0;
goto next_tx_int;
@@ -688,8 +689,9 @@ }
}
if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS)) {
if (bp->flags & BNXT_FLAG_CHIP_P5) {
/* PTP worker takes ownership of the skb */
if (!bnxt_get_tx_ts_p5(bp, skb))
compl_deferred = true;
skb = NULL;
else
atomic_inc(&bp->ptp_cfg->tx_avail);
}
@@ -698,9 +700,7 @@
next_tx_int:
cons = NEXT_TX(cons);
tx_bytes += skb->len;
if (!compl_deferred)
dev_kfree_skb_any(skb);
dev_kfree_skb_any(skb);
}
netdev_tx_completed_queue(txq, nr_pkts, tx_bytes);

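A plain-C model of the use-after-free fix in the bnxt hunks above: once the PTP worker accepts the skb, the completion loop forgets its pointer (skb = NULL) instead of carrying a compl_deferred flag, and the byte count is taken before the handoff since the pointer may be gone afterwards. malloc/free stand in for skb management:

#include <stdio.h>
#include <stdlib.h>

static void *worker_owned;		/* buffer handed to the "PTP worker" */

static int hand_over(void *buf)
{
	worker_owned = buf;		/* worker frees it later */
	return 0;
}

static void complete_tx(void *buf, int needs_timestamp)
{
	if (needs_timestamp && hand_over(buf) == 0)
		buf = NULL;		/* ownership transferred: forget it */

	free(buf);			/* free(NULL) is a safe no-op */
}

int main(void)
{
	complete_tx(malloc(64), 1);	/* handed over, not freed here */
	complete_tx(malloc(64), 0);	/* normal completion frees it */
	free(worker_owned);		/* the "worker" side frees its skb */
	puts("no use-after-free, no double free");
	return 0;
}
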
@@ -317,9 +317,9 @@ void bnxt_ptp_cfg_tstamp_filters(struct bnxt *bp)
if (!(bp->fw_cap & BNXT_FW_CAP_RX_ALL_PKT_TS) && (ptp->tstamp_filters &
(PORT_MAC_CFG_REQ_FLAGS_ALL_RX_TS_CAPTURE_ENABLE |
PORT_MAC_CFG_REQ_FLAGS_PTP_RX_TS_CAPTURE_DISABLE))) {
PORT_MAC_CFG_REQ_FLAGS_ALL_RX_TS_CAPTURE_DISABLE))) {
ptp->tstamp_filters &= ~(PORT_MAC_CFG_REQ_FLAGS_ALL_RX_TS_CAPTURE_ENABLE |
PORT_MAC_CFG_REQ_FLAGS_PTP_RX_TS_CAPTURE_DISABLE);
PORT_MAC_CFG_REQ_FLAGS_ALL_RX_TS_CAPTURE_DISABLE);
netdev_warn(bp->dev, "Unsupported FW for all RX pkts timestamp filter\n");
}

@@ -9,7 +9,6 @@ fsl-enetc-$(CONFIG_FSL_ENETC_QOS) += enetc_qos.o
obj-$(CONFIG_FSL_ENETC_VF) += fsl-enetc-vf.o
fsl-enetc-vf-y := enetc_vf.o $(common-objs)
fsl-enetc-vf-$(CONFIG_FSL_ENETC_QOS) += enetc_qos.o
obj-$(CONFIG_FSL_ENETC_IERB) += fsl-enetc-ierb.o
fsl-enetc-ierb-y := enetc_ierb.o

@@ -2432,7 +2432,7 @@ int enetc_close(struct net_device *ndev)
return 0;
}
static int enetc_setup_tc_mqprio(struct net_device *ndev, void *type_data)
int enetc_setup_tc_mqprio(struct net_device *ndev, void *type_data)
{
struct enetc_ndev_priv *priv = netdev_priv(ndev);
struct tc_mqprio_qopt *mqprio = type_data;
@@ -2486,25 +2486,6 @@ static int enetc_setup_tc_mqprio(struct net_device *ndev, void *type_data)
return 0;
}
int enetc_setup_tc(struct net_device *ndev, enum tc_setup_type type,
void *type_data)
{
switch (type) {
case TC_SETUP_QDISC_MQPRIO:
return enetc_setup_tc_mqprio(ndev, type_data);
case TC_SETUP_QDISC_TAPRIO:
return enetc_setup_tc_taprio(ndev, type_data);
case TC_SETUP_QDISC_CBS:
return enetc_setup_tc_cbs(ndev, type_data);
case TC_SETUP_QDISC_ETF:
return enetc_setup_tc_txtime(ndev, type_data);
case TC_SETUP_BLOCK:
return enetc_setup_tc_psfp(ndev, type_data);
default:
return -EOPNOTSUPP;
}
}
static int enetc_setup_xdp_prog(struct net_device *dev, struct bpf_prog *prog,
struct netlink_ext_ack *extack)
{
@@ -2600,29 +2581,6 @@ static int enetc_set_rss(struct net_device *ndev, int en)
return 0;
}
static int enetc_set_psfp(struct net_device *ndev, int en)
{
struct enetc_ndev_priv *priv = netdev_priv(ndev);
int err;
if (en) {
err = enetc_psfp_enable(priv);
if (err)
return err;
priv->active_offloads |= ENETC_F_QCI;
return 0;
}
err = enetc_psfp_disable(priv);
if (err)
return err;
priv->active_offloads &= ~ENETC_F_QCI;
return 0;
}
static void enetc_enable_rxvlan(struct net_device *ndev, bool en)
{
struct enetc_ndev_priv *priv = netdev_priv(ndev);
@@ -2641,11 +2599,9 @@ static void enetc_enable_txvlan(struct net_device *ndev, bool en)
enetc_bdr_enable_txvlan(&priv->si->hw, i, en);
}
int enetc_set_features(struct net_device *ndev,
netdev_features_t features)
void enetc_set_features(struct net_device *ndev, netdev_features_t features)
{
netdev_features_t changed = ndev->features ^ features;
int err = 0;
if (changed & NETIF_F_RXHASH)
enetc_set_rss(ndev, !!(features & NETIF_F_RXHASH));
@@ -2657,11 +2613,6 @@ int enetc_set_features(struct net_device *ndev,
if (changed & NETIF_F_HW_VLAN_CTAG_TX)
enetc_enable_txvlan(ndev,
!!(features & NETIF_F_HW_VLAN_CTAG_TX));
if (changed & NETIF_F_HW_TC)
err = enetc_set_psfp(ndev, !!(features & NETIF_F_HW_TC));
return err;
}
#ifdef CONFIG_FSL_ENETC_PTP_CLOCK

@@ -393,11 +393,9 @@ void enetc_start(struct net_device *ndev);
void enetc_stop(struct net_device *ndev);
netdev_tx_t enetc_xmit(struct sk_buff *skb, struct net_device *ndev);
struct net_device_stats *enetc_get_stats(struct net_device *ndev);
int enetc_set_features(struct net_device *ndev,
netdev_features_t features);
void enetc_set_features(struct net_device *ndev, netdev_features_t features);
int enetc_ioctl(struct net_device *ndev, struct ifreq *rq, int cmd);
int enetc_setup_tc(struct net_device *ndev, enum tc_setup_type type,
void *type_data);
int enetc_setup_tc_mqprio(struct net_device *ndev, void *type_data);
int enetc_setup_bpf(struct net_device *dev, struct netdev_bpf *xdp);
int enetc_xdp_xmit(struct net_device *ndev, int num_frames,
struct xdp_frame **frames, u32 flags);
@@ -465,6 +463,7 @@ int enetc_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
int enetc_setup_tc_psfp(struct net_device *ndev, void *type_data);
int enetc_psfp_init(struct enetc_ndev_priv *priv);
int enetc_psfp_clean(struct enetc_ndev_priv *priv);
int enetc_set_psfp(struct net_device *ndev, bool en);
static inline void enetc_get_max_cap(struct enetc_ndev_priv *priv)
{
@@ -540,4 +539,9 @@ static inline int enetc_psfp_disable(struct enetc_ndev_priv *priv)
{
return 0;
}
static inline int enetc_set_psfp(struct net_device *ndev, bool en)
{
return 0;
}
#endif

@@ -709,6 +709,13 @@ static int enetc_pf_set_features(struct net_device *ndev,
{
netdev_features_t changed = ndev->features ^ features;
struct enetc_ndev_priv *priv = netdev_priv(ndev);
int err;
if (changed & NETIF_F_HW_TC) {
err = enetc_set_psfp(ndev, !!(features & NETIF_F_HW_TC));
if (err)
return err;
}
if (changed & NETIF_F_HW_VLAN_CTAG_FILTER) {
struct enetc_pf *pf = enetc_si_priv(priv->si);
@@ -722,7 +729,28 @@ static int enetc_pf_set_features(struct net_device *ndev,
if (changed & NETIF_F_LOOPBACK)
enetc_set_loopback(ndev, !!(features & NETIF_F_LOOPBACK));
return enetc_set_features(ndev, features);
enetc_set_features(ndev, features);
return 0;
}
static int enetc_pf_setup_tc(struct net_device *ndev, enum tc_setup_type type,
void *type_data)
{
switch (type) {
case TC_SETUP_QDISC_MQPRIO:
return enetc_setup_tc_mqprio(ndev, type_data);
case TC_SETUP_QDISC_TAPRIO:
return enetc_setup_tc_taprio(ndev, type_data);
case TC_SETUP_QDISC_CBS:
return enetc_setup_tc_cbs(ndev, type_data);
case TC_SETUP_QDISC_ETF:
return enetc_setup_tc_txtime(ndev, type_data);
case TC_SETUP_BLOCK:
return enetc_setup_tc_psfp(ndev, type_data);
default:
return -EOPNOTSUPP;
}
}
static const struct net_device_ops enetc_ndev_ops = {
@@ -739,7 +767,7 @@ static const struct net_device_ops enetc_ndev_ops = {
.ndo_set_vf_spoofchk = enetc_pf_set_vf_spoofchk,
.ndo_set_features = enetc_pf_set_features,
.ndo_eth_ioctl = enetc_ioctl,
.ndo_setup_tc = enetc_setup_tc,
.ndo_setup_tc = enetc_pf_setup_tc,
.ndo_bpf = enetc_setup_bpf,
.ndo_xdp_xmit = enetc_xdp_xmit,
};

@@ -1517,6 +1517,29 @@ int enetc_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
}
}
int enetc_set_psfp(struct net_device *ndev, bool en)
{
struct enetc_ndev_priv *priv = netdev_priv(ndev);
int err;
if (en) {
err = enetc_psfp_enable(priv);
if (err)
return err;
priv->active_offloads |= ENETC_F_QCI;
return 0;
}
err = enetc_psfp_disable(priv);
if (err)
return err;
priv->active_offloads &= ~ENETC_F_QCI;
return 0;
}
int enetc_psfp_init(struct enetc_ndev_priv *priv)
{
if (epsfp.psfp_sfi_bitmap)

@@ -88,7 +88,20 @@ static int enetc_vf_set_mac_addr(struct net_device *ndev, void *addr)
static int enetc_vf_set_features(struct net_device *ndev,
netdev_features_t features)
{
return enetc_set_features(ndev, features);
enetc_set_features(ndev, features);
return 0;
}
static int enetc_vf_setup_tc(struct net_device *ndev, enum tc_setup_type type,
void *type_data)
{
switch (type) {
case TC_SETUP_QDISC_MQPRIO:
return enetc_setup_tc_mqprio(ndev, type_data);
default:
return -EOPNOTSUPP;
}
}
/* Probing/ Init */
@@ -100,7 +113,7 @@ static const struct net_device_ops enetc_ndev_ops = {
.ndo_set_mac_address = enetc_vf_set_mac_addr,
.ndo_set_features = enetc_vf_set_features,
.ndo_eth_ioctl = enetc_ioctl,
.ndo_setup_tc = enetc_setup_tc,
.ndo_setup_tc = enetc_vf_setup_tc,
};
static void enetc_vf_netdev_setup(struct enetc_si *si, struct net_device *ndev,

@@ -561,6 +561,7 @@ struct fec_enet_private {
struct clk *clk_2x_txclk;
bool ptp_clk_on;
struct mutex ptp_clk_mutex;
unsigned int num_tx_queues;
unsigned int num_rx_queues;
@@ -638,13 +639,6 @@ struct fec_enet_private {
int pps_enable;
unsigned int next_counter;
struct {
struct timespec64 ts_phc;
u64 ns_sys;
u32 at_corr;
u8 at_inc_corr;
} ptp_saved_state;
u64 ethtool_stats[];
};
@@ -655,8 +649,5 @@ void fec_ptp_disable_hwts(struct net_device *ndev);
int fec_ptp_set(struct net_device *ndev, struct ifreq *ifr);
int fec_ptp_get(struct net_device *ndev, struct ifreq *ifr);
void fec_ptp_save_state(struct fec_enet_private *fep);
int fec_ptp_restore_state(struct fec_enet_private *fep);
/****************************************************************************/
#endif /* FEC_H */

@@ -286,11 +286,8 @@ MODULE_PARM_DESC(macaddr, "FEC Ethernet MAC address");
#define FEC_MMFR_TA (2 << 16)
#define FEC_MMFR_DATA(v) (v & 0xffff)
/* FEC ECR bits definition */
#define FEC_ECR_RESET BIT(0)
#define FEC_ECR_ETHEREN BIT(1)
#define FEC_ECR_MAGICEN BIT(2)
#define FEC_ECR_SLEEP BIT(3)
#define FEC_ECR_EN1588 BIT(4)
#define FEC_ECR_MAGICEN (1 << 2)
#define FEC_ECR_SLEEP (1 << 3)
#define FEC_MII_TIMEOUT 30000 /* us */
@@ -986,9 +983,6 @@ fec_restart(struct net_device *ndev)
u32 temp_mac[2];
u32 rcntl = OPT_FRAME_SIZE | 0x04;
u32 ecntl = 0x2; /* ETHEREN */
struct ptp_clock_request ptp_rq = { .type = PTP_CLK_REQ_PPS };
fec_ptp_save_state(fep);
/* Whack a reset. We should wait for this.
* For i.MX6SX SOC, enet use AXI bus, we use disable MAC
@@ -1142,7 +1136,7 @@ fec_restart(struct net_device *ndev)
}
if (fep->bufdesc_ex)
ecntl |= FEC_ECR_EN1588;
ecntl |= (1 << 4);
if (fep->quirks & FEC_QUIRK_DELAYED_CLKS_SUPPORT &&
fep->rgmii_txc_dly)
@@ -1163,14 +1157,6 @@ fec_restart(struct net_device *ndev)
if (fep->bufdesc_ex)
fec_ptp_start_cyclecounter(ndev);
/* Restart PPS if needed */
if (fep->pps_enable) {
/* Clear flag so fec_ptp_enable_pps() doesn't return immediately */
fep->pps_enable = 0;
fec_ptp_restore_state(fep);
fep->ptp_caps.enable(&fep->ptp_caps, &ptp_rq, 1);
}
/* Enable interrupts we wish to service */
if (fep->link)
writel(FEC_DEFAULT_IMASK, fep->hwp + FEC_IMASK);
@@ -1221,8 +1207,6 @@ fec_stop(struct net_device *ndev)
struct fec_enet_private *fep = netdev_priv(ndev);
u32 rmii_mode = readl(fep->hwp + FEC_R_CNTRL) & (1 << 8);
u32 val;
struct ptp_clock_request ptp_rq = { .type = PTP_CLK_REQ_PPS };
u32 ecntl = 0;
/* We cannot expect a graceful transmit stop without link !!! */
if (fep->link) {
@@ -1232,8 +1216,6 @@ fec_stop(struct net_device *ndev)
netdev_err(ndev, "Graceful transmit stop did not complete!\n");
}
fec_ptp_save_state(fep);
/* Whack a reset. We should wait for this.
* For i.MX6SX SOC, enet use AXI bus, we use disable MAC
* instead of reset MAC itself.
@@ -1253,28 +1235,12 @@ fec_stop(struct net_device *ndev)
writel(fep->phy_speed, fep->hwp + FEC_MII_SPEED);
writel(FEC_DEFAULT_IMASK, fep->hwp + FEC_IMASK);
if (fep->bufdesc_ex)
ecntl |= FEC_ECR_EN1588;
/* We have to keep ENET enabled to have MII interrupt stay working */
if (fep->quirks & FEC_QUIRK_ENET_MAC &&
!(fep->wol_flag & FEC_WOL_FLAG_SLEEP_ON)) {
ecntl |= FEC_ECR_ETHEREN;
writel(2, fep->hwp + FEC_ECNTRL);
writel(rmii_mode, fep->hwp + FEC_R_CNTRL);
}
writel(ecntl, fep->hwp + FEC_ECNTRL);
if (fep->bufdesc_ex)
fec_ptp_start_cyclecounter(ndev);
/* Restart PPS if needed */
if (fep->pps_enable) {
/* Clear flag so fec_ptp_enable_pps() doesn't return immediately */
fep->pps_enable = 0;
fec_ptp_restore_state(fep);
fep->ptp_caps.enable(&fep->ptp_caps, &ptp_rq, 1);
}
}
@@ -2029,7 +1995,6 @@ static void fec_enet_phy_reset_after_clk_enable(struct net_device *ndev)
static int fec_enet_clk_enable(struct net_device *ndev, bool enable)
{
struct fec_enet_private *fep = netdev_priv(ndev);
unsigned long flags;
int ret;
if (enable) {
@@ -2038,15 +2003,15 @@ static int fec_enet_clk_enable(struct net_device *ndev, bool enable)
return ret;
if (fep->clk_ptp) {
spin_lock_irqsave(&fep->tmreg_lock, flags);
mutex_lock(&fep->ptp_clk_mutex);
ret = clk_prepare_enable(fep->clk_ptp);
if (ret) {
spin_unlock_irqrestore(&fep->tmreg_lock, flags);
mutex_unlock(&fep->ptp_clk_mutex);
goto failed_clk_ptp;
} else {
fep->ptp_clk_on = true;
}
spin_unlock_irqrestore(&fep->tmreg_lock, flags);
mutex_unlock(&fep->ptp_clk_mutex);
}
ret = clk_prepare_enable(fep->clk_ref);
@@ -2061,10 +2026,10 @@ static int fec_enet_clk_enable(struct net_device *ndev, bool enable)
} else {
clk_disable_unprepare(fep->clk_enet_out);
if (fep->clk_ptp) {
spin_lock_irqsave(&fep->tmreg_lock, flags);
mutex_lock(&fep->ptp_clk_mutex);
clk_disable_unprepare(fep->clk_ptp);
fep->ptp_clk_on = false;
spin_unlock_irqrestore(&fep->tmreg_lock, flags);
mutex_unlock(&fep->ptp_clk_mutex);
}
clk_disable_unprepare(fep->clk_ref);
clk_disable_unprepare(fep->clk_2x_txclk);
@@ -2077,10 +2042,10 @@ static int fec_enet_clk_enable(struct net_device *ndev, bool enable)
clk_disable_unprepare(fep->clk_ref);
failed_clk_ref:
if (fep->clk_ptp) {
spin_lock_irqsave(&fep->tmreg_lock, flags);
mutex_lock(&fep->ptp_clk_mutex);
clk_disable_unprepare(fep->clk_ptp);
fep->ptp_clk_on = false;
spin_unlock_irqrestore(&fep->tmreg_lock, flags);
mutex_unlock(&fep->ptp_clk_mutex);
}
failed_clk_ptp:
clk_disable_unprepare(fep->clk_enet_out);
@@ -3915,7 +3880,7 @@ fec_probe(struct platform_device *pdev)
}
fep->ptp_clk_on = false;
spin_lock_init(&fep->tmreg_lock);
mutex_init(&fep->ptp_clk_mutex);
/* clk_ref is optional, depends on board */
fep->clk_ref = devm_clk_get_optional(&pdev->dev, "enet_clk_ref");

@@ -365,19 +365,21 @@ static int fec_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
*/
static int fec_ptp_gettime(struct ptp_clock_info *ptp, struct timespec64 *ts)
{
struct fec_enet_private *fep =
struct fec_enet_private *adapter =
container_of(ptp, struct fec_enet_private, ptp_caps);
u64 ns;
unsigned long flags;
spin_lock_irqsave(&fep->tmreg_lock, flags);
mutex_lock(&adapter->ptp_clk_mutex);
/* Check the ptp clock */
if (!fep->ptp_clk_on) {
spin_unlock_irqrestore(&fep->tmreg_lock, flags);
if (!adapter->ptp_clk_on) {
mutex_unlock(&adapter->ptp_clk_mutex);
return -EINVAL;
}
ns = timecounter_read(&fep->tc);
spin_unlock_irqrestore(&fep->tmreg_lock, flags);
spin_lock_irqsave(&adapter->tmreg_lock, flags);
ns = timecounter_read(&adapter->tc);
spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
mutex_unlock(&adapter->ptp_clk_mutex);
*ts = ns_to_timespec64(ns);
@@ -402,10 +404,10 @@ static int fec_ptp_settime(struct ptp_clock_info *ptp,
unsigned long flags;
u32 counter;
spin_lock_irqsave(&fep->tmreg_lock, flags);
mutex_lock(&fep->ptp_clk_mutex);
/* Check the ptp clock */
if (!fep->ptp_clk_on) {
spin_unlock_irqrestore(&fep->tmreg_lock, flags);
mutex_unlock(&fep->ptp_clk_mutex);
return -EINVAL;
}
@@ -415,9 +417,11 @@ static int fec_ptp_settime(struct ptp_clock_info *ptp,
*/
counter = ns & fep->cc.mask;
spin_lock_irqsave(&fep->tmreg_lock, flags);
writel(counter, fep->hwp + FEC_ATIME);
timecounter_init(&fep->tc, &fep->cc, ns);
spin_unlock_irqrestore(&fep->tmreg_lock, flags);
mutex_unlock(&fep->ptp_clk_mutex);
return 0;
}
@@ -514,11 +518,13 @@ static void fec_time_keep(struct work_struct *work)
struct fec_enet_private *fep = container_of(dwork, struct fec_enet_private, time_keep);
unsigned long flags;
spin_lock_irqsave(&fep->tmreg_lock, flags);
mutex_lock(&fep->ptp_clk_mutex);
if (fep->ptp_clk_on) {
spin_lock_irqsave(&fep->tmreg_lock, flags);
timecounter_read(&fep->tc);
spin_unlock_irqrestore(&fep->tmreg_lock, flags);
}
spin_unlock_irqrestore(&fep->tmreg_lock, flags);
mutex_unlock(&fep->ptp_clk_mutex);
schedule_delayed_work(&fep->time_keep, HZ);
}
@@ -593,6 +599,8 @@ void fec_ptp_init(struct platform_device *pdev, int irq_idx)
}
fep->ptp_inc = NSEC_PER_SEC / fep->cycle_speed;
spin_lock_init(&fep->tmreg_lock);
fec_ptp_start_cyclecounter(ndev);
INIT_DELAYED_WORK(&fep->time_keep, fec_time_keep);
@@ -625,36 +633,7 @@ void fec_ptp_stop(struct platform_device *pdev)
struct net_device *ndev = platform_get_drvdata(pdev);
struct fec_enet_private *fep = netdev_priv(ndev);
if (fep->pps_enable)
fec_ptp_enable_pps(fep, 0);
cancel_delayed_work_sync(&fep->time_keep);
if (fep->ptp_clock)
ptp_clock_unregister(fep->ptp_clock);
}
void fec_ptp_save_state(struct fec_enet_private *fep)
{
u32 atime_inc_corr;
fec_ptp_gettime(&fep->ptp_caps, &fep->ptp_saved_state.ts_phc);
fep->ptp_saved_state.ns_sys = ktime_get_ns();
fep->ptp_saved_state.at_corr = readl(fep->hwp + FEC_ATIME_CORR);
atime_inc_corr = readl(fep->hwp + FEC_ATIME_INC) & FEC_T_INC_CORR_MASK;
fep->ptp_saved_state.at_inc_corr = (u8)(atime_inc_corr >> FEC_T_INC_CORR_OFFSET);
}
int fec_ptp_restore_state(struct fec_enet_private *fep)
{
u32 atime_inc = readl(fep->hwp + FEC_ATIME_INC) & FEC_T_INC_MASK;
u64 ns_sys;
writel(fep->ptp_saved_state.at_corr, fep->hwp + FEC_ATIME_CORR);
atime_inc |= ((u32)fep->ptp_saved_state.at_inc_corr) << FEC_T_INC_CORR_OFFSET;
writel(atime_inc, fep->hwp + FEC_ATIME_INC);
ns_sys = ktime_get_ns() - fep->ptp_saved_state.ns_sys;
timespec64_add_ns(&fep->ptp_saved_state.ts_phc, ns_sys);
return fec_ptp_settime(&fep->ptp_caps, &fep->ptp_saved_state.ts_phc);
}

@@ -157,7 +157,7 @@ static int gve_alloc_page_dqo(struct gve_priv *priv,
int err;
err = gve_alloc_page(priv, &priv->pdev->dev, &buf_state->page_info.page,
&buf_state->addr, DMA_FROM_DEVICE, GFP_KERNEL);
&buf_state->addr, DMA_FROM_DEVICE, GFP_ATOMIC);
if (err)
return err;

@@ -5908,6 +5908,26 @@ static int i40e_get_link_speed(struct i40e_vsi *vsi)
}
}
/**
* i40e_bw_bytes_to_mbits - Convert max_tx_rate from bytes to mbits
* @vsi: Pointer to vsi structure
* @max_tx_rate: max TX rate in bytes to be converted into Mbits
*
* Helper function to convert units before send to set BW limit
**/
static u64 i40e_bw_bytes_to_mbits(struct i40e_vsi *vsi, u64 max_tx_rate)
{
if (max_tx_rate < I40E_BW_MBPS_DIVISOR) {
dev_warn(&vsi->back->pdev->dev,
"Setting max tx rate to minimum usable value of 50Mbps.\n");
max_tx_rate = I40E_BW_CREDIT_DIVISOR;
} else {
do_div(max_tx_rate, I40E_BW_MBPS_DIVISOR);
}
return max_tx_rate;
}
/**
* i40e_set_bw_limit - setup BW limit for Tx traffic based on max_tx_rate
* @vsi: VSI to be configured
@@ -5930,10 +5950,10 @@ int i40e_set_bw_limit(struct i40e_vsi *vsi, u16 seid, u64 max_tx_rate)
max_tx_rate, seid);
return -EINVAL;
}
if (max_tx_rate && max_tx_rate < 50) {
if (max_tx_rate && max_tx_rate < I40E_BW_CREDIT_DIVISOR) {
dev_warn(&pf->pdev->dev,
"Setting max tx rate to minimum usable value of 50Mbps.\n");
max_tx_rate = 50;
max_tx_rate = I40E_BW_CREDIT_DIVISOR;
}
/* Tx rate credits are in values of 50Mbps, 0 is disabled */
@@ -8224,9 +8244,9 @@ static int i40e_setup_tc(struct net_device *netdev, void *type_data)
if (i40e_is_tc_mqprio_enabled(pf)) {
if (vsi->mqprio_qopt.max_rate[0]) {
u64 max_tx_rate = vsi->mqprio_qopt.max_rate[0];
u64 max_tx_rate = i40e_bw_bytes_to_mbits(vsi,
vsi->mqprio_qopt.max_rate[0]);
do_div(max_tx_rate, I40E_BW_MBPS_DIVISOR);
ret = i40e_set_bw_limit(vsi, vsi->seid, max_tx_rate);
if (!ret) {
u64 credits = max_tx_rate;
@@ -10971,10 +10991,10 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired)
}
if (vsi->mqprio_qopt.max_rate[0]) {
u64 max_tx_rate = vsi->mqprio_qopt.max_rate[0];
u64 max_tx_rate = i40e_bw_bytes_to_mbits(vsi,
vsi->mqprio_qopt.max_rate[0]);
u64 credits = 0;
do_div(max_tx_rate, I40E_BW_MBPS_DIVISOR);
ret = i40e_set_bw_limit(vsi, vsi->seid, max_tx_rate);
if (ret)
goto end_unlock;

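A standalone model of the i40e_bw_bytes_to_mbits() helper above, assuming the kernel's values for the two constants (I40E_BW_MBPS_DIVISOR == 125000 bytes/s per Mbit/s and I40E_BW_CREDIT_DIVISOR == 50, one 50 Mbps rate credit); neither value is shown in the hunks themselves:

#include <stdint.h>
#include <stdio.h>

/* assumed values of the kernel constants used in the hunks above */
#define BW_MBPS_DIVISOR   125000ULL	/* bytes per second in 1 Mbit/s */
#define BW_CREDIT_DIVISOR 50ULL		/* minimum usable rate, in Mbps */

static uint64_t bw_bytes_to_mbits(uint64_t max_tx_rate)
{
	if (max_tx_rate < BW_MBPS_DIVISOR)
		return BW_CREDIT_DIVISOR;	/* clamp up to 50 Mbps */
	return max_tx_rate / BW_MBPS_DIVISOR;	/* do_div() equivalent */
}

int main(void)
{
	/* 12,500,000 bytes/s == 100 Mbit/s */
	printf("%llu Mbps\n",
	       (unsigned long long)bw_bytes_to_mbits(12500000ULL));
	return 0;
}
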
@@ -2038,6 +2038,25 @@ static void i40e_del_qch(struct i40e_vf *vf)
}
}
/**
* i40e_vc_get_max_frame_size
* @vf: pointer to the VF
*
* Max frame size is determined based on the current port's max frame size and
* whether a port VLAN is configured on this VF. The VF is not aware whether
* it's in a port VLAN so the PF needs to account for this in max frame size
* checks and sending the max frame size to the VF.
**/
static u16 i40e_vc_get_max_frame_size(struct i40e_vf *vf)
{
u16 max_frame_size = vf->pf->hw.phy.link_info.max_frame_size;
if (vf->port_vlan_id)
max_frame_size -= VLAN_HLEN;
return max_frame_size;
}
/**
* i40e_vc_get_vf_resources_msg
* @vf: pointer to the VF info
@@ -2139,6 +2158,7 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf, u8 *msg)
vfres->max_vectors = pf->hw.func_caps.num_msix_vectors_vf;
vfres->rss_key_size = I40E_HKEY_ARRAY_SIZE;
vfres->rss_lut_size = I40E_VF_HLUT_ARRAY_SIZE;
vfres->max_mtu = i40e_vc_get_max_frame_size(vf);
if (vf->lan_vsi_idx) {
vfres->vsi_res[0].vsi_id = vf->lan_vsi_id;

@@ -1077,7 +1077,6 @@ static int iavf_set_mac(struct net_device *netdev, void *p)
{
struct iavf_adapter *adapter = netdev_priv(netdev);
struct sockaddr *addr = p;
bool handle_mac = iavf_is_mac_set_handled(netdev, addr->sa_data);
int ret;
if (!is_valid_ether_addr(addr->sa_data))
@@ -1094,10 +1093,9 @@ static int iavf_set_mac(struct net_device *netdev, void *p)
return 0;
}
if (handle_mac)
goto done;
ret = wait_event_interruptible_timeout(adapter->vc_waitqueue, false, msecs_to_jiffies(2500));
ret = wait_event_interruptible_timeout(adapter->vc_waitqueue,
iavf_is_mac_set_handled(netdev, addr->sa_data),
msecs_to_jiffies(2500));
/* If ret < 0 then it means wait was interrupted.
* If ret == 0 then it means we got a timeout.
@@ -1111,7 +1109,6 @@ static int iavf_set_mac(struct net_device *netdev, void *p)
if (!ret)
return -EAGAIN;
done:
if (!ether_addr_equal(netdev->dev_addr, addr->sa_data))
return -EACCES;

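The iavf_set_mac() hunks above stop sampling the "MAC set handled" condition once before sleeping and instead pass it to wait_event_interruptible_timeout(), which re-evaluates it on every wakeup. A minimal polling model of that difference; the loop is only a stand-in for the kernel wait primitive:

#include <stdbool.h>
#include <stdio.h>

static bool mac_set_handled;		/* flipped by the "reply" path */

/* stand-in for wait_event_interruptible_timeout(): the condition is a
 * callback evaluated on each pass, never a value captured beforehand */
static int wait_event_model(bool (*cond)(void), int timeout_polls)
{
	for (int i = 0; i < timeout_polls; i++)
		if (cond())
			return timeout_polls - i;	/* > 0: condition met */
	return 0;					/* 0: timed out */
}

static bool cond_handled(void) { return mac_set_handled; }

int main(void)
{
	mac_set_handled = true;	/* reply lands "between check and sleep" */
	printf("wait result: %d\n", wait_event_model(cond_handled, 5));
	return 0;
}
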
@@ -114,8 +114,11 @@ u32 iavf_get_tx_pending(struct iavf_ring *ring, bool in_sw)
{
u32 head, tail;
/* underlying hardware might not allow access and/or always return
* 0 for the head/tail registers so just use the cached values
*/
head = ring->next_to_clean;
tail = readl(ring->tail);
tail = ring->next_to_use;
if (head != tail)
return (head < tail) ?
@@ -1390,7 +1393,7 @@ static struct sk_buff *iavf_build_skb(struct iavf_ring *rx_ring,
#endif
struct sk_buff *skb;
if (!rx_buffer)
if (!rx_buffer || !size)
return NULL;
/* prefetch first cache line of first page */
va = page_address(rx_buffer->page) + rx_buffer->page_offset;
@@ -1548,7 +1551,7 @@ static int iavf_clean_rx_irq(struct iavf_ring *rx_ring, int budget)
/* exit if we failed to retrieve a buffer */
if (!skb) {
rx_ring->rx_stats.alloc_buff_failed++;
if (rx_buffer)
if (rx_buffer && size)
rx_buffer->pagecnt_bias++;
break;
}

@@ -269,11 +269,14 @@ int iavf_get_vf_vlan_v2_caps(struct iavf_adapter *adapter)
void iavf_configure_queues(struct iavf_adapter *adapter)
{
struct virtchnl_vsi_queue_config_info *vqci;
struct virtchnl_queue_pair_info *vqpi;
int i, max_frame = adapter->vf_res->max_mtu;
int pairs = adapter->num_active_queues;
int i, max_frame = IAVF_MAX_RXBUFFER;
struct virtchnl_queue_pair_info *vqpi;
size_t len;
if (max_frame > IAVF_MAX_RXBUFFER || !max_frame)
max_frame = IAVF_MAX_RXBUFFER;
if (adapter->current_op != VIRTCHNL_OP_UNKNOWN) {
/* bail because we already have a command pending */
dev_err(&adapter->pdev->dev, "Cannot configure queues, command %d pending\n",

@@ -914,7 +914,7 @@ static void ice_set_dflt_vsi_ctx(struct ice_hw *hw, struct ice_vsi_ctx *ctxt)
*/
static int ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt)
{
u16 offset = 0, qmap = 0, tx_count = 0, pow = 0;
u16 offset = 0, qmap = 0, tx_count = 0, rx_count = 0, pow = 0;
u16 num_txq_per_tc, num_rxq_per_tc;
u16 qcount_tx = vsi->alloc_txq;
u16 qcount_rx = vsi->alloc_rxq;
@@ -981,22 +981,24 @@ static int ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt)
* at least 1)
*/
if (offset)
vsi->num_rxq = offset;
rx_count = offset;
else
vsi->num_rxq = num_rxq_per_tc;
rx_count = num_rxq_per_tc;
if (vsi->num_rxq > vsi->alloc_rxq) {
if (rx_count > vsi->alloc_rxq) {
dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Rx queues (%u), than were allocated (%u)!\n",
vsi->num_rxq, vsi->alloc_rxq);
rx_count, vsi->alloc_rxq);
return -EINVAL;
}
if (tx_count > vsi->alloc_txq) {
dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Tx queues (%u), than were allocated (%u)!\n",
tx_count, vsi->alloc_txq);
return -EINVAL;
}
vsi->num_txq = tx_count;
if (vsi->num_txq > vsi->alloc_txq) {
dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Tx queues (%u), than were allocated (%u)!\n",
vsi->num_txq, vsi->alloc_txq);
return -EINVAL;
}
vsi->num_rxq = rx_count;
if (vsi->type == ICE_VSI_VF && vsi->num_txq != vsi->num_rxq) {
dev_dbg(ice_pf_to_dev(vsi->back), "VF VSI should have same number of Tx and Rx queues. Hence making them equal\n");
@@ -3490,6 +3492,7 @@ ice_vsi_setup_q_map_mqprio(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt,
u16 pow, offset = 0, qcount_tx = 0, qcount_rx = 0, qmap;
u16 tc0_offset = vsi->mqprio_qopt.qopt.offset[0];
int tc0_qcount = vsi->mqprio_qopt.qopt.count[0];
u16 new_txq, new_rxq;
u8 netdev_tc = 0;
int i;
@@ -3530,21 +3533,24 @@ ice_vsi_setup_q_map_mqprio(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt,
}
}
/* Set actual Tx/Rx queue pairs */
vsi->num_txq = offset + qcount_tx;
if (vsi->num_txq > vsi->alloc_txq) {
new_txq = offset + qcount_tx;
if (new_txq > vsi->alloc_txq) {
dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Tx queues (%u), than were allocated (%u)!\n",
vsi->num_txq, vsi->alloc_txq);
new_txq, vsi->alloc_txq);
return -EINVAL;
}
vsi->num_rxq = offset + qcount_rx;
if (vsi->num_rxq > vsi->alloc_rxq) {
new_rxq = offset + qcount_rx;
if (new_rxq > vsi->alloc_rxq) {
dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Rx queues (%u), than were allocated (%u)!\n",
vsi->num_rxq, vsi->alloc_rxq);
new_rxq, vsi->alloc_rxq);
return -EINVAL;
}
/* Set actual Tx/Rx queue pairs */
vsi->num_txq = new_txq;
vsi->num_rxq = new_rxq;
/* Setup queue TC[0].qmap for given VSI context */
ctxt->info.tc_mapping[0] = cpu_to_le16(qmap);
ctxt->info.q_mapping[0] = cpu_to_le16(vsi->rxq_map[0]);
@@ -3576,6 +3582,7 @@ int ice_vsi_cfg_tc(struct ice_vsi *vsi, u8 ena_tc)
{
u16 max_txqs[ICE_MAX_TRAFFIC_CLASS] = { 0 };
struct ice_pf *pf = vsi->back;
struct ice_tc_cfg old_tc_cfg;
struct ice_vsi_ctx *ctx;
struct device *dev;
int i, ret = 0;
@@ -3600,6 +3607,7 @@ int ice_vsi_cfg_tc(struct ice_vsi *vsi, u8 ena_tc)
max_txqs[i] = vsi->num_txq;
}
memcpy(&old_tc_cfg, &vsi->tc_cfg, sizeof(old_tc_cfg));
vsi->tc_cfg.ena_tc = ena_tc;
vsi->tc_cfg.numtc = num_tc;
@@ -3616,8 +3624,10 @@ int ice_vsi_cfg_tc(struct ice_vsi *vsi, u8 ena_tc)
else
ret = ice_vsi_setup_q_map(vsi, ctx);
if (ret)
if (ret) {
memcpy(&vsi->tc_cfg, &old_tc_cfg, sizeof(vsi->tc_cfg));
goto out;
}
/* must to indicate which section of VSI context are being modified */
ctx->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_RXQ_MAP_VALID);

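The ice hunks above adopt a stage-validate-commit shape: queue counts are computed into locals (rx_count/tx_count, new_txq/new_rxq) and stored in the VSI only after the bounds checks pass, so a rejected configuration leaves the old state intact. A small standalone model of the pattern:

#include <stdio.h>

struct vsi_model { unsigned int num_txq, num_rxq; };

/* stage into locals, validate, then commit - a failed reconfigure
 * leaves the VSI untouched instead of half-updated */
static int setup_q_map_model(struct vsi_model *vsi,
			     unsigned int want_tx, unsigned int want_rx,
			     unsigned int alloc_tx, unsigned int alloc_rx)
{
	unsigned int new_txq = want_tx;		/* staged values */
	unsigned int new_rxq = want_rx;

	if (new_txq > alloc_tx || new_rxq > alloc_rx)
		return -1;			/* reject before any store */

	vsi->num_txq = new_txq;			/* commit only on success */
	vsi->num_rxq = new_rxq;
	return 0;
}

int main(void)
{
	struct vsi_model vsi = { 4, 4 };

	setup_q_map_model(&vsi, 16, 16, 8, 8);		/* fails */
	printf("txq=%u rxq=%u\n", vsi.num_txq, vsi.num_rxq); /* still 4 4 */
	return 0;
}
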
@@ -2399,8 +2399,6 @@ int ice_schedule_reset(struct ice_pf *pf, enum ice_reset_req reset)
return -EBUSY;
}
ice_unplug_aux_dev(pf);
switch (reset) {
case ICE_RESET_PFR:
set_bit(ICE_PFR_REQ, pf->state);
@@ -6651,7 +6649,7 @@ static void ice_napi_disable_all(struct ice_vsi *vsi)
*/
int ice_down(struct ice_vsi *vsi)
{
int i, tx_err, rx_err, link_err = 0, vlan_err = 0;
int i, tx_err, rx_err, vlan_err = 0;
WARN_ON(!test_bit(ICE_VSI_DOWN, vsi->state));
@@ -6685,20 +6683,13 @@ int ice_down(struct ice_vsi *vsi)
ice_napi_disable_all(vsi);
if (test_bit(ICE_FLAG_LINK_DOWN_ON_CLOSE_ENA, vsi->back->flags)) {
link_err = ice_force_phys_link_state(vsi, false);
if (link_err)
netdev_err(vsi->netdev, "Failed to set physical link down, VSI %d error %d\n",
vsi->vsi_num, link_err);
}
ice_for_each_txq(vsi, i)
ice_clean_tx_ring(vsi->tx_rings[i]);
ice_for_each_rxq(vsi, i)
ice_clean_rx_ring(vsi->rx_rings[i]);
if (tx_err || rx_err || link_err || vlan_err) {
if (tx_err || rx_err || vlan_err) {
netdev_err(vsi->netdev, "Failed to close VSI 0x%04X on switch 0x%04X\n",
vsi->vsi_num, vsi->vsw->sw_id);
return -EIO;
@@ -6860,6 +6851,8 @@ int ice_vsi_open(struct ice_vsi *vsi)
if (err)
goto err_setup_rx;
ice_vsi_cfg_netdev_tc(vsi, vsi->tc_cfg.ena_tc);
if (vsi->type == ICE_VSI_PF) {
/* Notify the stack of the actual queue counts. */
err = netif_set_real_num_tx_queues(vsi->netdev, vsi->num_txq);
@@ -8892,6 +8885,16 @@ int ice_stop(struct net_device *netdev)
return -EBUSY;
}
if (test_bit(ICE_FLAG_LINK_DOWN_ON_CLOSE_ENA, vsi->back->flags)) {
int link_err = ice_force_phys_link_state(vsi, false);
if (link_err) {
netdev_err(vsi->netdev, "Failed to set physical link down, VSI %d error %d\n",
vsi->vsi_num, link_err);
return -EIO;
}
}
ice_vsi_close(vsi);
return 0;

@@ -610,7 +610,7 @@ ice_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
if (test_bit(ICE_VSI_DOWN, vsi->state))
return -ENETDOWN;
if (!ice_is_xdp_ena_vsi(vsi) || queue_index >= vsi->num_xdp_txq)
if (!ice_is_xdp_ena_vsi(vsi))
return -ENXIO;
if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
@@ -621,6 +621,9 @@ ice_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
xdp_ring = vsi->xdp_rings[queue_index];
spin_lock(&xdp_ring->tx_lock);
} else {
/* Generally, should not happen */
if (unlikely(queue_index >= vsi->num_xdp_txq))
return -ENXIO;
xdp_ring = vsi->xdp_rings[queue_index];
}

@@ -368,6 +368,7 @@ static int prestera_port_sfp_bind(struct prestera_port *port)
if (!sw->np)
return 0;
of_node_get(sw->np);
ports = of_find_node_by_name(sw->np, "ports");
for_each_child_of_node(ports, node) {
@@ -417,6 +418,7 @@ static int prestera_port_sfp_bind(struct prestera_port *port)
}
out:
of_node_put(node);
of_node_put(ports);
return err;
}

@@ -872,6 +872,7 @@ static void prestera_pci_remove(struct pci_dev *pdev)
static const struct pci_device_id prestera_pci_devices[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0xC804) },
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0xC80C) },
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0xCC1E) },
{ }
};
MODULE_DEVICE_TABLE(pci, prestera_pci_devices);

@@ -1458,7 +1458,7 @@ static void mtk_update_rx_cpu_idx(struct mtk_eth *eth)
static bool mtk_page_pool_enabled(struct mtk_eth *eth)
{
return !eth->hwlro;
return MTK_HAS_CAPS(eth->soc->caps, MTK_NETSYS_V2);
}
static struct page_pool *mtk_create_page_pool(struct mtk_eth *eth,

@@ -179,6 +179,9 @@ static int mlxbf_gige_mdio_read(struct mii_bus *bus, int phy_add, int phy_reg)
/* Only return ad bits of the gw register */
ret &= MLXBF_GIGE_MDIO_GW_AD_MASK;
/* The MDIO lock is set on read. To release it, clear gw register */
writel(0, priv->mdio_io + MLXBF_GIGE_MDIO_GW_OFFSET);
return ret;
}
@@ -203,6 +206,9 @@ static int mlxbf_gige_mdio_write(struct mii_bus *bus, int phy_add,
temp, !(temp & MLXBF_GIGE_MDIO_GW_BUSY_MASK),
5, 1000000);
/* The MDIO lock is set on read. To release it, clear gw register */
writel(0, priv->mdio_io + MLXBF_GIGE_MDIO_GW_OFFSET);
return ret;
}

@@ -397,6 +397,11 @@ static void mana_gd_process_eq_events(void *arg)
break;
}
/* Per GDMA spec, rmb is necessary after checking owner_bits, before
* reading eqe.
*/
rmb();
mana_gd_process_eqe(eq);
eq->head++;
@@ -1134,6 +1139,11 @@ static int mana_gd_read_cqe(struct gdma_queue *cq, struct gdma_comp *comp)
if (WARN_ON_ONCE(owner_bits != new_bits))
return -1;
/* Per GDMA spec, rmb is necessary after checking owner_bits, before
* reading completion info
*/
rmb();
comp->wq_num = cqe->cqe_info.wq_num;
comp->is_sq = cqe->cqe_info.is_sq;
memcpy(comp->cqe_data, cqe->cqe_data, GDMA_COMP_DATA_SIZE);

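Both mana hunks above insert rmb() between checking the owner bits and reading the entry body. A C11-atomics model of the required ordering, with an acquire fence standing in for the kernel barrier; the struct is a simplified stand-in, not the GDMA layout:

#include <stdatomic.h>

struct eqe_model {
	_Atomic unsigned int owner;	/* producer writes this last */
	unsigned int payload;
};

/* returns 0 and fills *out only when the entry is owned by us; the
 * acquire fence plays the role of the kernel's rmb() above */
static int read_eqe_model(struct eqe_model *e, unsigned int expected,
			  unsigned int *out)
{
	if (atomic_load_explicit(&e->owner, memory_order_relaxed) != expected)
		return -1;

	/* order the payload read after the owner check, as the spec
	 * comment in the hunks requires */
	atomic_thread_fence(memory_order_acquire);

	*out = e->payload;
	return 0;
}

int main(void)
{
	struct eqe_model e = { 1, 42 };
	unsigned int v;

	return read_eqe_model(&e, 1, &v) == 0 && v == 42 ? 0 : 1;
}
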
@@ -1449,6 +1449,8 @@ static int ravb_phy_init(struct net_device *ndev)
phy_remove_link_mode(phydev, ETHTOOL_LINK_MODE_100baseT_Half_BIT);
}
/* Indicate that the MAC is responsible for managing PHY PM */
phydev->mac_managed_pm = true;
phy_attached_info(phydev);
return 0;

@@ -2029,6 +2029,8 @@ static int sh_eth_phy_init(struct net_device *ndev)
if (mdp->cd->register_type != SH_ETH_REG_GIGABIT)
phy_set_max_speed(phydev, SPEED_100);
/* Indicate that the MAC is responsible for managing PHY PM */
phydev->mac_managed_pm = true;
phy_attached_info(phydev);
return 0;

@@ -319,7 +319,7 @@ int efx_probe_interrupts(struct efx_nic *efx)
efx->n_channels = 1 + (efx_separate_tx_channels ? 1 : 0);
efx->n_rx_channels = 1;
efx->n_tx_channels = 1;
efx->tx_channel_offset = 1;
efx->tx_channel_offset = efx_separate_tx_channels ? 1 : 0;
efx->n_xdp_channels = 0;
efx->xdp_channel_offset = efx->n_channels;
efx->legacy_irq = efx->pci_dev->irq;

@@ -320,7 +320,7 @@ int efx_siena_probe_interrupts(struct efx_nic *efx)
efx->n_channels = 1 + (efx_siena_separate_tx_channels ? 1 : 0);
efx->n_rx_channels = 1;
efx->n_tx_channels = 1;
efx->tx_channel_offset = 1;
efx->tx_channel_offset = efx_siena_separate_tx_channels ? 1 : 0;
efx->n_xdp_channels = 0;
efx->xdp_channel_offset = efx->n_channels;
efx->legacy_irq = efx->pci_dev->irq;

@@ -336,7 +336,7 @@ netdev_tx_t efx_siena_hard_start_xmit(struct sk_buff *skb,
* previous packets out.
*/
if (!netdev_xmit_more())
efx_tx_send_pending(tx_queue->channel);
efx_tx_send_pending(efx_get_tx_channel(efx, index));
return NETDEV_TX_OK;
}

@@ -549,7 +549,7 @@ netdev_tx_t efx_hard_start_xmit(struct sk_buff *skb,
* previous packets out.
*/
if (!netdev_xmit_more())
efx_tx_send_pending(tx_queue->channel);
efx_tx_send_pending(efx_get_tx_channel(efx, index));
return NETDEV_TX_OK;
}

@@ -2020,9 +2020,9 @@ static void happy_meal_rx(struct happy_meal *hp, struct net_device *dev)
skb_reserve(copy_skb, 2);
skb_put(copy_skb, len);
dma_sync_single_for_cpu(hp->dma_dev, dma_addr, len, DMA_FROM_DEVICE);
dma_sync_single_for_cpu(hp->dma_dev, dma_addr, len + 2, DMA_FROM_DEVICE);
skb_copy_from_linear_data(skb, copy_skb->data, len);
dma_sync_single_for_device(hp->dma_dev, dma_addr, len, DMA_FROM_DEVICE);
dma_sync_single_for_device(hp->dma_dev, dma_addr, len + 2, DMA_FROM_DEVICE);
/* Reuse original ring buffer. */
hme_write_rxd(hp, this,
(RXFLAG_OWN|((RX_BUF_ALLOC_SIZE-RX_OFFSET)<<16)),

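Why the happy_meal_rx() hunk above syncs len + 2 bytes: the buffer is DMA-mapped from its true start, but the packet data begins two bytes in (the driver reserves 2 bytes to IP-align the header), so syncing only len bytes leaves the last two data bytes stale. A tiny model of the arithmetic, assuming that 2-byte reserve:

#include <stdio.h>

#define RX_PAD 2	/* bytes reserved in front of the packet data */

/* the mapping starts RX_PAD bytes before the data, so covering "len"
 * data bytes means syncing len + RX_PAD bytes from the map start */
static unsigned int rx_sync_len(unsigned int pkt_len)
{
	return pkt_len + RX_PAD;
}

int main(void)
{
	/* a 60-byte frame needs 62 bytes synced: pad0 pad1 d0 .. d59 */
	printf("sync %u bytes for a 60-byte packet\n", rx_sync_len(60));
	return 0;
}
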
@@ -308,12 +308,12 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
mem = ipa_mem_find(ipa, IPA_MEM_V4_ROUTE);
req.v4_route_tbl_info_valid = 1;
req.v4_route_tbl_info.start = ipa->mem_offset + mem->offset;
req.v4_route_tbl_info.count = mem->size / sizeof(__le64);
req.v4_route_tbl_info.end = IPA_ROUTE_MODEM_COUNT - 1;
mem = ipa_mem_find(ipa, IPA_MEM_V6_ROUTE);
req.v6_route_tbl_info_valid = 1;
req.v6_route_tbl_info.start = ipa->mem_offset + mem->offset;
req.v6_route_tbl_info.count = mem->size / sizeof(__le64);
req.v6_route_tbl_info.end = IPA_ROUTE_MODEM_COUNT - 1;
mem = ipa_mem_find(ipa, IPA_MEM_V4_FILTER);
req.v4_filter_tbl_start_valid = 1;
@@ -352,7 +352,7 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
req.v4_hash_route_tbl_info_valid = 1;
req.v4_hash_route_tbl_info.start =
ipa->mem_offset + mem->offset;
req.v4_hash_route_tbl_info.count = mem->size / sizeof(__le64);
req.v4_hash_route_tbl_info.end = IPA_ROUTE_MODEM_COUNT - 1;
}
mem = ipa_mem_find(ipa, IPA_MEM_V6_ROUTE_HASHED);
@@ -360,7 +360,7 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
req.v6_hash_route_tbl_info_valid = 1;
req.v6_hash_route_tbl_info.start =
ipa->mem_offset + mem->offset;
req.v6_hash_route_tbl_info.count = mem->size / sizeof(__le64);
req.v6_hash_route_tbl_info.end = IPA_ROUTE_MODEM_COUNT - 1;
}
mem = ipa_mem_find(ipa, IPA_MEM_V4_FILTER_HASHED);

@@ -311,7 +311,7 @@ struct qmi_elem_info ipa_init_modem_driver_req_ei[] = {
.tlv_type = 0x12,
.offset = offsetof(struct ipa_init_modem_driver_req,
v4_route_tbl_info),
.ei_array = ipa_mem_array_ei,
.ei_array = ipa_mem_bounds_ei,
},
{
.data_type = QMI_OPT_FLAG,
@@ -332,7 +332,7 @@ struct qmi_elem_info ipa_init_modem_driver_req_ei[] = {
.tlv_type = 0x13,
.offset = offsetof(struct ipa_init_modem_driver_req,
v6_route_tbl_info),
.ei_array = ipa_mem_array_ei,
.ei_array = ipa_mem_bounds_ei,
},
{
.data_type = QMI_OPT_FLAG,
@@ -496,7 +496,7 @@ struct qmi_elem_info ipa_init_modem_driver_req_ei[] = {
.tlv_type = 0x1b,
.offset = offsetof(struct ipa_init_modem_driver_req,
v4_hash_route_tbl_info),
.ei_array = ipa_mem_array_ei,
.ei_array = ipa_mem_bounds_ei,
},
{
.data_type = QMI_OPT_FLAG,
@@ -517,7 +517,7 @@ struct qmi_elem_info ipa_init_modem_driver_req_ei[] = {
.tlv_type = 0x1c,
.offset = offsetof(struct ipa_init_modem_driver_req,
v6_hash_route_tbl_info),
.ei_array = ipa_mem_array_ei,
.ei_array = ipa_mem_bounds_ei,
},
{
.data_type = QMI_OPT_FLAG,

@@ -86,9 +86,11 @@ enum ipa_platform_type {
IPA_QMI_PLATFORM_TYPE_MSM_QNX_V01 = 0x5, /* QNX MSM */
};
/* This defines the start and end offset of a range of memory. Both
* fields are offsets relative to the start of IPA shared memory.
* The end value is the last addressable byte *within* the range.
/* This defines the start and end offset of a range of memory. The start
* value is a byte offset relative to the start of IPA shared memory. The
* end value is the last addressable unit *within* the range. Typically
* the end value is in units of bytes, however it can also be a maximum
* array index value.
*/
struct ipa_mem_bounds {
u32 start;
@@ -129,18 +131,19 @@ struct ipa_init_modem_driver_req {
u8 hdr_tbl_info_valid;
struct ipa_mem_bounds hdr_tbl_info;
/* Routing table information. These define the location and size of
* non-hashable IPv4 and IPv6 filter tables. The start values are
* offsets relative to the start of IPA shared memory.
/* Routing table information. These define the location and maximum
* *index* (not byte) for the modem portion of non-hashable IPv4 and
* IPv6 routing tables. The start values are byte offsets relative
* to the start of IPA shared memory.
*/
u8 v4_route_tbl_info_valid;
struct ipa_mem_array v4_route_tbl_info;
struct ipa_mem_bounds v4_route_tbl_info;
u8 v6_route_tbl_info_valid;
struct ipa_mem_array v6_route_tbl_info;
struct ipa_mem_bounds v6_route_tbl_info;
/* Filter table information. These define the location of the
* non-hashable IPv4 and IPv6 filter tables. The start values are
* offsets relative to the start of IPA shared memory.
* byte offsets relative to the start of IPA shared memory.
*/
u8 v4_filter_tbl_start_valid;
u32 v4_filter_tbl_start;
@@ -181,18 +184,20 @@ struct ipa_init_modem_driver_req {
u8 zip_tbl_info_valid;
struct ipa_mem_bounds zip_tbl_info;
/* Routing table information. These define the location and size
* of hashable IPv4 and IPv6 filter tables. The start values are
* offsets relative to the start of IPA shared memory.
/* Routing table information. These define the location and maximum
* *index* (not byte) for the modem portion of hashable IPv4 and IPv6
* routing tables (if supported by hardware). The start values are
* byte offsets relative to the start of IPA shared memory.
*/
u8 v4_hash_route_tbl_info_valid;
struct ipa_mem_array v4_hash_route_tbl_info;
struct ipa_mem_bounds v4_hash_route_tbl_info;
u8 v6_hash_route_tbl_info_valid;
struct ipa_mem_array v6_hash_route_tbl_info;
struct ipa_mem_bounds v6_hash_route_tbl_info;
/* Filter table information. These define the location and size
* of hashable IPv4 and IPv6 filter tables. The start values are
* offsets relative to the start of IPA shared memory.
* of hashable IPv4 and IPv6 filter tables (if supported by hardware).
* The start values are byte offsets relative to the start of IPA
* shared memory.
*/
u8 v4_hash_filter_tbl_start_valid;
u32 v4_hash_filter_tbl_start;

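The QMI hunks above switch the modem routing-table fields from ipa_mem_array (start + count) to ipa_mem_bounds (start + end), where end is now the highest route index the modem may use rather than a byte count. A small model of how such a field is filled, assuming the modem owns entries 0..IPA_ROUTE_MODEM_COUNT-1:

#include <stdio.h>

#define ROUTE_MODEM_COUNT 8	/* modem-owned route entries, 0..7 */

struct mem_bounds_model {
	unsigned int start;	/* byte offset into IPA shared memory */
	unsigned int end;	/* last addressable unit, inclusive */
};

static void fill_route_tbl_info(struct mem_bounds_model *info,
				unsigned int mem_offset)
{
	info->start = mem_offset;		/* still a byte offset */
	info->end = ROUTE_MODEM_COUNT - 1;	/* max index, not a size */
}

int main(void)
{
	struct mem_bounds_model info;

	fill_route_tbl_info(&info, 0x100);
	printf("start=0x%x end=%u\n", info.start, info.end); /* end=7 */
	return 0;
}
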
@@ -108,8 +108,6 @@
/* Assignment of route table entries to the modem and AP */
#define IPA_ROUTE_MODEM_MIN 0
#define IPA_ROUTE_MODEM_COUNT 8
#define IPA_ROUTE_AP_MIN IPA_ROUTE_MODEM_COUNT
#define IPA_ROUTE_AP_COUNT \
(IPA_ROUTE_COUNT_MAX - IPA_ROUTE_MODEM_COUNT)

@@ -13,6 +13,9 @@ struct ipa;
/* The maximum number of filter table entries (IPv4, IPv6; hashed or not) */
#define IPA_FILTER_COUNT_MAX 14
/* The number of route table entries allotted to the modem */
#define IPA_ROUTE_MODEM_COUNT 8
/* The maximum number of route table entries (IPv4, IPv6; hashed or not) */
#define IPA_ROUTE_COUNT_MAX 15

@@ -495,7 +495,6 @@ static int ipvlan_process_v6_outbound(struct sk_buff *skb)
static int ipvlan_process_outbound(struct sk_buff *skb)
{
struct ethhdr *ethh = eth_hdr(skb);
int ret = NET_XMIT_DROP;
/* The ipvlan is a pseudo-L2 device, so the packets that we receive
@@ -505,6 +504,8 @@ static int ipvlan_process_outbound(struct sk_buff *skb)
if (skb_mac_header_was_set(skb)) {
/* In this mode we don't care about
* multicast and broadcast traffic */
struct ethhdr *ethh = eth_hdr(skb);
if (is_multicast_ether_addr(ethh->h_dest)) {
pr_debug_ratelimited(
"Dropped {multi|broad}cast of type=[%x]\n",
@ -589,7 +590,7 @@ static int ipvlan_xmit_mode_l3(struct sk_buff *skb, struct net_device *dev)
static int ipvlan_xmit_mode_l2(struct sk_buff *skb, struct net_device *dev)
{
const struct ipvl_dev *ipvlan = netdev_priv(dev);
struct ethhdr *eth = eth_hdr(skb);
struct ethhdr *eth = skb_eth_hdr(skb);
struct ipvl_addr *addr;
void *lyr3h;
int addr_type;
@ -619,6 +620,7 @@ static int ipvlan_xmit_mode_l2(struct sk_buff *skb, struct net_device *dev)
return dev_forward_skb(ipvlan->phy_dev, skb);
} else if (is_multicast_ether_addr(eth->h_dest)) {
skb_reset_mac_header(skb);
ipvlan_skb_crossing_ns(skb, NULL);
ipvlan_multicast_enqueue(ipvlan->port, skb, true);
return NET_XMIT_SUCCESS;


@@ -231,6 +231,7 @@ int of_mdiobus_register(struct mii_bus *mdio, struct device_node *np)
    return 0;

unregister:
+   of_node_put(child);
    mdiobus_unregister(mdio);
    return rc;
}


@@ -433,11 +433,11 @@ int nsim_dev_hwstats_init(struct nsim_dev *nsim_dev)
        goto err_remove_hwstats_recursive;
    }

-   debugfs_create_file("enable_ifindex", 0600, hwstats->l3_ddir, hwstats,
+   debugfs_create_file("enable_ifindex", 0200, hwstats->l3_ddir, hwstats,
                        &nsim_dev_hwstats_l3_enable_fops.fops);
-   debugfs_create_file("disable_ifindex", 0600, hwstats->l3_ddir, hwstats,
+   debugfs_create_file("disable_ifindex", 0200, hwstats->l3_ddir, hwstats,
                        &nsim_dev_hwstats_l3_disable_fops.fops);
-   debugfs_create_file("fail_next_enable", 0600, hwstats->l3_ddir, hwstats,
+   debugfs_create_file("fail_next_enable", 0200, hwstats->l3_ddir, hwstats,
                        &nsim_dev_hwstats_l3_fail_fops.fops);

    INIT_DELAYED_WORK(&hwstats->traffic_dw,


@@ -91,6 +91,9 @@
#define VEND1_GLOBAL_FW_ID_MAJOR GENMASK(15, 8)
#define VEND1_GLOBAL_FW_ID_MINOR GENMASK(7, 0)

+#define VEND1_GLOBAL_GEN_STAT2 0xc831
+#define VEND1_GLOBAL_GEN_STAT2_OP_IN_PROG BIT(15)
+
#define VEND1_GLOBAL_RSVD_STAT1 0xc885
#define VEND1_GLOBAL_RSVD_STAT1_FW_BUILD_ID GENMASK(7, 4)
#define VEND1_GLOBAL_RSVD_STAT1_PROV_ID GENMASK(3, 0)
@@ -125,6 +128,12 @@
#define VEND1_GLOBAL_INT_VEND_MASK_GLOBAL2 BIT(1)
#define VEND1_GLOBAL_INT_VEND_MASK_GLOBAL3 BIT(0)

+/* Sleep and timeout for checking if the Processor-Intensive
+ * MDIO operation is finished
+ */
+#define AQR107_OP_IN_PROG_SLEEP 1000
+#define AQR107_OP_IN_PROG_TIMEOUT 100000
+
struct aqr107_hw_stat {
    const char *name;
    int reg;
@@ -597,16 +606,52 @@ static void aqr107_link_change_notify(struct phy_device *phydev)
        phydev_info(phydev, "Aquantia 1000Base-T2 mode active\n");
}

+static int aqr107_wait_processor_intensive_op(struct phy_device *phydev)
+{
+   int val, err;
+
+   /* The datasheet notes to wait at least 1ms after issuing a
+    * processor intensive operation before checking.
+    * We cannot use the 'sleep_before_read' parameter of read_poll_timeout
+    * because that just determines the maximum time slept, not the minimum.
+    */
+   usleep_range(1000, 5000);
+
+   err = phy_read_mmd_poll_timeout(phydev, MDIO_MMD_VEND1,
+                                   VEND1_GLOBAL_GEN_STAT2, val,
+                                   !(val & VEND1_GLOBAL_GEN_STAT2_OP_IN_PROG),
+                                   AQR107_OP_IN_PROG_SLEEP,
+                                   AQR107_OP_IN_PROG_TIMEOUT, false);
+   if (err) {
+       phydev_err(phydev, "timeout: processor-intensive MDIO operation\n");
+       return err;
+   }
+
+   return 0;
+}
+
static int aqr107_suspend(struct phy_device *phydev)
{
-   return phy_set_bits_mmd(phydev, MDIO_MMD_VEND1, MDIO_CTRL1,
-                           MDIO_CTRL1_LPOWER);
+   int err;
+
+   err = phy_set_bits_mmd(phydev, MDIO_MMD_VEND1, MDIO_CTRL1,
+                          MDIO_CTRL1_LPOWER);
+   if (err)
+       return err;
+
+   return aqr107_wait_processor_intensive_op(phydev);
}

static int aqr107_resume(struct phy_device *phydev)
{
-   return phy_clear_bits_mmd(phydev, MDIO_MMD_VEND1, MDIO_CTRL1,
-                             MDIO_CTRL1_LPOWER);
+   int err;
+
+   err = phy_clear_bits_mmd(phydev, MDIO_MMD_VEND1, MDIO_CTRL1,
+                            MDIO_CTRL1_LPOWER);
+   if (err)
+       return err;
+
+   return aqr107_wait_processor_intensive_op(phydev);
}

static int aqr107_probe(struct phy_device *phydev)


@@ -2679,16 +2679,19 @@ static int lan8804_config_init(struct phy_device *phydev)
static irqreturn_t lan8814_handle_interrupt(struct phy_device *phydev)
{
    int irq_status, tsu_irq_status;
+   int ret = IRQ_NONE;

    irq_status = phy_read(phydev, LAN8814_INTS);
-   if (irq_status > 0 && (irq_status & LAN8814_INT_LINK))
-       phy_trigger_machine(phydev);
-
    if (irq_status < 0) {
        phy_error(phydev);
        return IRQ_NONE;
    }

+   if (irq_status & LAN8814_INT_LINK) {
+       phy_trigger_machine(phydev);
+       ret = IRQ_HANDLED;
+   }
+
    while (1) {
        tsu_irq_status = lanphy_read_page_reg(phydev, 4,
                                              LAN8814_INTR_STS_REG);
@@ -2697,12 +2700,15 @@ static irqreturn_t lan8814_handle_interrupt(struct phy_device *phydev)
            (tsu_irq_status & (LAN8814_INTR_STS_REG_1588_TSU0_ |
                               LAN8814_INTR_STS_REG_1588_TSU1_ |
                               LAN8814_INTR_STS_REG_1588_TSU2_ |
-                              LAN8814_INTR_STS_REG_1588_TSU3_)))
+                              LAN8814_INTR_STS_REG_1588_TSU3_))) {
            lan8814_handle_ptp_interrupt(phydev);
-       else
+           ret = IRQ_HANDLED;
+       } else {
            break;
+       }
    }

-   return IRQ_HANDLED;
+   return ret;
}

static int lan8814_ack_interrupt(struct phy_device *phydev)

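The LAN8814 change above restores the usual shared-interrupt contract. For reference, a minimal sketch of that contract (the device type and helpers here are hypothetical, not from the patch):

#include <linux/interrupt.h>

/* Illustration only: on a shared line, a handler must return IRQ_NONE
 * when its device did not assert the interrupt, so the other handlers
 * sharing the line get to run and spurious-IRQ detection stays accurate.
 */
static irqreturn_t example_shared_handler(int irq, void *dev_id)
{
    struct example_dev *dev = dev_id;   /* hypothetical device type */

    if (!example_irq_pending(dev))      /* hypothetical status check */
        return IRQ_NONE;                /* not ours */

    example_handle_events(dev);         /* hypothetical event handling */
    return IRQ_HANDLED;
}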

@@ -1275,10 +1275,12 @@ static int team_port_add(struct team *team, struct net_device *port_dev,
        }
    }

-   netif_addr_lock_bh(dev);
-   dev_uc_sync_multiple(port_dev, dev);
-   dev_mc_sync_multiple(port_dev, dev);
-   netif_addr_unlock_bh(dev);
+   if (dev->flags & IFF_UP) {
+       netif_addr_lock_bh(dev);
+       dev_uc_sync_multiple(port_dev, dev);
+       dev_mc_sync_multiple(port_dev, dev);
+       netif_addr_unlock_bh(dev);
+   }

    port->index = -1;
    list_add_tail_rcu(&port->list, &team->port_list);
@@ -1349,8 +1351,10 @@ static int team_port_del(struct team *team, struct net_device *port_dev)
    netdev_rx_handler_unregister(port_dev);
    team_port_disable_netpoll(port);
    vlan_vids_del_by_dev(port_dev, dev);
-   dev_uc_unsync(port_dev, dev);
-   dev_mc_unsync(port_dev, dev);
+   if (dev->flags & IFF_UP) {
+       dev_uc_unsync(port_dev, dev);
+       dev_mc_unsync(port_dev, dev);
+   }
    dev_close(port_dev);
    team_port_leave(team, port);
@@ -1700,6 +1704,14 @@ static int team_open(struct net_device *dev)
static int team_close(struct net_device *dev)
{
    struct team *team = netdev_priv(dev);
+   struct team_port *port;
+
+   list_for_each_entry(port, &team->port_list, list) {
+       dev_uc_unsync(port->dev, dev);
+       dev_mc_unsync(port->dev, dev);
+   }

    return 0;
}


@@ -436,14 +436,13 @@ static int set_peer(struct wg_device *wg, struct nlattr **attrs)
    if (attrs[WGPEER_A_ENDPOINT]) {
        struct sockaddr *addr = nla_data(attrs[WGPEER_A_ENDPOINT]);
        size_t len = nla_len(attrs[WGPEER_A_ENDPOINT]);
+       struct endpoint endpoint = { { { 0 } } };

-       if ((len == sizeof(struct sockaddr_in) &&
-            addr->sa_family == AF_INET) ||
-           (len == sizeof(struct sockaddr_in6) &&
-            addr->sa_family == AF_INET6)) {
-           struct endpoint endpoint = { { { 0 } } };
-
-           memcpy(&endpoint.addr, addr, len);
+       if (len == sizeof(struct sockaddr_in) && addr->sa_family == AF_INET) {
+           endpoint.addr4 = *(struct sockaddr_in *)addr;
+           wg_socket_set_peer_endpoint(peer, &endpoint);
+       } else if (len == sizeof(struct sockaddr_in6) && addr->sa_family == AF_INET6) {
+           endpoint.addr6 = *(struct sockaddr_in6 *)addr;
            wg_socket_set_peer_endpoint(peer, &endpoint);
        }
    }


@@ -6,29 +6,28 @@
#ifdef DEBUG
#include <linux/jiffies.h>
-#include <linux/hrtimer.h>

static const struct {
    bool result;
-   u64 nsec_to_sleep_before;
+   unsigned int msec_to_sleep_before;
} expected_results[] __initconst = {
    [0 ... PACKETS_BURSTABLE - 1] = { true, 0 },
    [PACKETS_BURSTABLE] = { false, 0 },
-   [PACKETS_BURSTABLE + 1] = { true, NSEC_PER_SEC / PACKETS_PER_SECOND },
+   [PACKETS_BURSTABLE + 1] = { true, MSEC_PER_SEC / PACKETS_PER_SECOND },
    [PACKETS_BURSTABLE + 2] = { false, 0 },
-   [PACKETS_BURSTABLE + 3] = { true, (NSEC_PER_SEC / PACKETS_PER_SECOND) * 2 },
+   [PACKETS_BURSTABLE + 3] = { true, (MSEC_PER_SEC / PACKETS_PER_SECOND) * 2 },
    [PACKETS_BURSTABLE + 4] = { true, 0 },
    [PACKETS_BURSTABLE + 5] = { false, 0 }
};

static __init unsigned int maximum_jiffies_at_index(int index)
{
-   u64 total_nsecs = 2 * NSEC_PER_SEC / PACKETS_PER_SECOND / 3;
+   unsigned int total_msecs = 2 * MSEC_PER_SEC / PACKETS_PER_SECOND / 3;
    int i;

    for (i = 0; i <= index; ++i)
-       total_nsecs += expected_results[i].nsec_to_sleep_before;
-   return nsecs_to_jiffies(total_nsecs);
+       total_msecs += expected_results[i].msec_to_sleep_before;
+   return msecs_to_jiffies(total_msecs);
}

static __init int timings_test(struct sk_buff *skb4, struct iphdr *hdr4,
@@ -43,12 +42,8 @@ static __init int timings_test(struct sk_buff *skb4, struct iphdr *hdr4,
    loop_start_time = jiffies;

    for (i = 0; i < ARRAY_SIZE(expected_results); ++i) {
-       if (expected_results[i].nsec_to_sleep_before) {
-           ktime_t timeout = ktime_add(ktime_add_ns(ktime_get_coarse_boottime(), TICK_NSEC * 4 / 3),
-                                       ns_to_ktime(expected_results[i].nsec_to_sleep_before));
-           set_current_state(TASK_UNINTERRUPTIBLE);
-           schedule_hrtimeout_range_clock(&timeout, 0, HRTIMER_MODE_ABS, CLOCK_BOOTTIME);
-       }
+       if (expected_results[i].msec_to_sleep_before)
+           msleep(expected_results[i].msec_to_sleep_before);

        if (time_is_before_jiffies(loop_start_time +
                                   maximum_jiffies_at_index(i)))
@@ -132,7 +127,7 @@ bool __init wg_ratelimiter_selftest(void)
    if (IS_ENABLED(CONFIG_KASAN) || IS_ENABLED(CONFIG_UBSAN))
        return true;

-   BUILD_BUG_ON(NSEC_PER_SEC % PACKETS_PER_SECOND != 0);
+   BUILD_BUG_ON(MSEC_PER_SEC % PACKETS_PER_SECOND != 0);

    if (wg_ratelimiter_init())
        goto out;
@@ -172,7 +167,7 @@ bool __init wg_ratelimiter_selftest(void)
    ++test;
#endif

-   for (trials = TRIALS_BEFORE_GIVING_UP;;) {
+   for (trials = TRIALS_BEFORE_GIVING_UP; IS_ENABLED(DEBUG_RATELIMITER_TIMINGS);) {
        int test_count = 0, ret;

        ret = timings_test(skb4, hdr4, skb6, hdr6, &test_count);

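The msleep()-based timings above only make sense if each packet's interval is a whole number of milliseconds, which the new BUILD_BUG_ON enforces. A quick user-space check of that arithmetic (assuming PACKETS_PER_SECOND is 20, as in WireGuard's ratelimiter; that value is an assumption here):

#include <stdio.h>

int main(void)
{
    const unsigned int msec_per_sec = 1000;
    const unsigned int packets_per_second = 20; /* assumed value */

    /* 1000 % 20 == 0, so every packet gets a whole 50 ms budget and
     * msleep() granularity loses nothing.
     */
    printf("interval = %u ms, remainder = %u\n",
           msec_per_sec / packets_per_second,
           msec_per_sec % packets_per_second);
    return 0;
}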

@@ -140,6 +140,7 @@ config IWLMEI
    depends on INTEL_MEI
    depends on PM
    depends on CFG80211
+   depends on BROKEN
    help
      Enables the iwlmei kernel module.


@@ -1833,8 +1833,8 @@ static void iwl_mvm_parse_ppe(struct iwl_mvm *mvm,
     * If nss < MAX: we can set zeros in other streams
     */
    if (nss > MAX_HE_SUPP_NSS) {
-       IWL_INFO(mvm, "Got NSS = %d - trimming to %d\n", nss,
-                MAX_HE_SUPP_NSS);
+       IWL_DEBUG_INFO(mvm, "Got NSS = %d - trimming to %d\n", nss,
+                      MAX_HE_SUPP_NSS);
        nss = MAX_HE_SUPP_NSS;
    }


@@ -267,7 +267,8 @@ static void mt76_init_stream_cap(struct mt76_phy *phy,
    }
    vht_cap->vht_mcs.rx_mcs_map = cpu_to_le16(mcs_map);
    vht_cap->vht_mcs.tx_mcs_map = cpu_to_le16(mcs_map);
-   vht_cap->vht_mcs.tx_highest |=
+   if (ieee80211_hw_check(phy->hw, SUPPORTS_VHT_EXT_NSS_BW))
+       vht_cap->vht_mcs.tx_highest |=
            cpu_to_le16(IEEE80211_VHT_EXT_NSS_BW_CAPABLE);
}


@@ -1088,7 +1088,7 @@ u32 mt7615_mac_get_sta_tid_sn(struct mt7615_dev *dev, int wcid, u8 tid)
    offset %= 32;

    val = mt76_rr(dev, addr);
-   val >>= (tid % 32);
+   val >>= offset;

    if (offset > 20) {
        addr += 4;


@@ -124,8 +124,6 @@ struct hci_dev_info {
    __u16 acl_pkts;
    __u16 sco_mtu;
    __u16 sco_pkts;
-   __u16 iso_mtu;
-   __u16 iso_pkts;
    struct hci_dev_stats stat;
};


@@ -15,8 +15,6 @@
#define PKT_TYPE_LACPDU cpu_to_be16(ETH_P_SLOW)
#define AD_TIMER_INTERVAL 100 /*msec*/

-#define MULTICAST_LACPDU_ADDR {0x01, 0x80, 0xC2, 0x00, 0x00, 0x02}
-
#define AD_LACP_SLOW 0
#define AD_LACP_FAST 1


@@ -786,6 +786,9 @@ extern struct rtnl_link_ops bond_link_ops;
/* exported from bond_sysfs_slave.c */
extern const struct sysfs_ops slave_sysfs_ops;

+/* exported from bond_3ad.c */
+extern const u8 lacpdu_mcast_addr[];
+
static inline netdev_tx_t bond_tx_drop(struct net_device *dev, struct sk_buff *skb)
{
    dev_core_stats_tx_dropped_inc(dev);


@@ -15,6 +15,22 @@
#ifndef IEEE802154_NETDEVICE_H
#define IEEE802154_NETDEVICE_H
#define IEEE802154_REQUIRED_SIZE(struct_type, member) \
(offsetof(typeof(struct_type), member) + \
sizeof(((typeof(struct_type) *)(NULL))->member))
#define IEEE802154_ADDR_OFFSET \
offsetof(typeof(struct sockaddr_ieee802154), addr)
#define IEEE802154_MIN_NAMELEN (IEEE802154_ADDR_OFFSET + \
IEEE802154_REQUIRED_SIZE(struct ieee802154_addr_sa, addr_type))
#define IEEE802154_NAMELEN_SHORT (IEEE802154_ADDR_OFFSET + \
IEEE802154_REQUIRED_SIZE(struct ieee802154_addr_sa, short_addr))
#define IEEE802154_NAMELEN_LONG (IEEE802154_ADDR_OFFSET + \
IEEE802154_REQUIRED_SIZE(struct ieee802154_addr_sa, hwaddr))
#include <net/af_ieee802154.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
@@ -165,6 +181,27 @@ static inline void ieee802154_devaddr_to_raw(void *raw, __le64 addr)
memcpy(raw, &temp, IEEE802154_ADDR_LEN);
}
static inline int
ieee802154_sockaddr_check_size(struct sockaddr_ieee802154 *daddr, int len)
{
struct ieee802154_addr_sa *sa;
sa = &daddr->addr;
if (len < IEEE802154_MIN_NAMELEN)
return -EINVAL;
switch (sa->addr_type) {
case IEEE802154_ADDR_SHORT:
if (len < IEEE802154_NAMELEN_SHORT)
return -EINVAL;
break;
case IEEE802154_ADDR_LONG:
if (len < IEEE802154_NAMELEN_LONG)
return -EINVAL;
break;
}
return 0;
}
static inline void ieee802154_addr_from_sa(struct ieee802154_addr *a,
const struct ieee802154_addr_sa *sa)
{

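IEEE802154_REQUIRED_SIZE() above is plain offsetof-plus-sizeof arithmetic. A small user-space sketch of the same idea, with a hypothetical stand-in struct (not the real sockaddr):

#include <stddef.h>
#include <stdio.h>

/* Same shape as the kernel macro: the byte count a buffer must have for
 * "member" to be fully contained in it.
 */
#define REQUIRED_SIZE(type, member) \
    (offsetof(type, member) + sizeof(((type *)0)->member))

struct example_addr {           /* hypothetical stand-in */
    unsigned short family;
    int addr_type;
    unsigned char hwaddr[8];
};

int main(void)
{
    /* A caller-supplied length is acceptable only if it covers every
     * field the chosen address type actually reads.
     */
    printf("addr_type needs %zu bytes, hwaddr needs %zu\n",
           REQUIRED_SIZE(struct example_addr, addr_type),
           REQUIRED_SIZE(struct example_addr, hwaddr));
    return 0;
}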

@@ -10,6 +10,7 @@
#include <linux/atomic.h>
#include <linux/byteorder/generic.h>
#include <linux/container_of.h>
+#include <linux/errno.h>
#include <linux/gfp.h>
#include <linux/if.h>
#include <linux/if_arp.h>
@@ -700,6 +701,9 @@ int batadv_hardif_enable_interface(struct batadv_hard_iface *hard_iface,
    int max_header_len = batadv_max_header_len();
    int ret;

+   if (hard_iface->net_dev->mtu < ETH_MIN_MTU + max_header_len)
+       return -EINVAL;
+
    if (hard_iface->if_status != BATADV_IF_NOT_IN_USE)
        goto out;

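The new batman-adv guard is simple arithmetic: the hard interface must carry a minimal Ethernet MTU plus batman-adv's own encapsulation overhead. A user-space sketch (the overhead value below is an arbitrary example, not the real batadv_max_header_len(); ETH_MIN_MTU is 68 in the kernel headers):

#include <stdio.h>

int main(void)
{
    const int eth_min_mtu = 68;     /* ETH_MIN_MTU */
    const int max_header_len = 32;  /* example overhead, not the real value */
    const int mtu = 80;             /* candidate hard-interface MTU */

    if (mtu < eth_min_mtu + max_header_len)
        printf("rejected: need at least %d\n",
               eth_min_mtu + max_header_len);
    else
        printf("accepted\n");
    return 0;
}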

@@ -1040,8 +1040,10 @@ static int do_replace_finish(struct net *net, struct ebt_replace *repl,
        goto free_iterate;
    }

-   if (repl->valid_hooks != t->valid_hooks)
+   if (repl->valid_hooks != t->valid_hooks) {
+       ret = -EINVAL;
        goto free_unlock;
+   }

    if (repl->num_counters && repl->num_counters != t->private->nentries) {
        ret = -EINVAL;


@@ -52,6 +52,7 @@ int __get_compat_msghdr(struct msghdr *kmsg,
    kmsg->msg_namelen = sizeof(struct sockaddr_storage);

    kmsg->msg_control_is_user = true;
+   kmsg->msg_get_inq = 0;
    kmsg->msg_control_user = compat_ptr(msg->msg_control);
    kmsg->msg_controllen = msg->msg_controllen;


@@ -1611,9 +1611,8 @@ static inline void __flow_hash_consistentify(struct flow_keys *keys)

    switch (keys->control.addr_type) {
    case FLOW_DISSECTOR_KEY_IPV4_ADDRS:
-       addr_diff = (__force u32)keys->addrs.v4addrs.dst -
-                   (__force u32)keys->addrs.v4addrs.src;
-       if (addr_diff < 0)
+       if ((__force u32)keys->addrs.v4addrs.dst <
+           (__force u32)keys->addrs.v4addrs.src)
            swap(keys->addrs.v4addrs.src, keys->addrs.v4addrs.dst);

        if ((__force u16)keys->ports.dst <

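The point of the flow-hash fix: the difference of two 32-bit addresses cannot be sign-tested reliably, because converted to a signed type, dst - src and src - dst can end up with the same sign, so the swap is not applied symmetrically and hash(src, dst) may differ from hash(dst, src). A stand-alone illustration (two's-complement conversion assumed):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t src = 0x00000000, dst = 0x80000000;  /* example addresses */
    int32_t d1 = (int32_t)(dst - src);  /* 0x80000000 -> negative */
    int32_t d2 = (int32_t)(src - dst);  /* also 0x80000000 -> negative */

    /* Both differences test negative, so a sign check swaps for one
     * direction but not the other; comparing the operands directly,
     * as the fix does, is symmetric by construction.
     */
    printf("d1 < 0: %d, d2 < 0: %d\n", d1 < 0, d2 < 0);
    printf("dst < src: %d, src < dst: %d\n", dst < src, src < dst);
    return 0;
}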

@@ -200,8 +200,9 @@ static int raw_bind(struct sock *sk, struct sockaddr *_uaddr, int len)
    int err = 0;
    struct net_device *dev = NULL;

-   if (len < sizeof(*uaddr))
-       return -EINVAL;
+   err = ieee802154_sockaddr_check_size(uaddr, len);
+   if (err < 0)
+       return err;

    uaddr = (struct sockaddr_ieee802154 *)_uaddr;
    if (uaddr->family != AF_IEEE802154)
@@ -493,7 +494,8 @@ static int dgram_bind(struct sock *sk, struct sockaddr *uaddr, int len)

    ro->bound = 0;

-   if (len < sizeof(*addr))
+   err = ieee802154_sockaddr_check_size(addr, len);
+   if (err < 0)
        goto out;

    if (addr->family != AF_IEEE802154)
@@ -564,8 +566,9 @@ static int dgram_connect(struct sock *sk, struct sockaddr *uaddr,
    struct dgram_sock *ro = dgram_sk(sk);
    int err = 0;

-   if (len < sizeof(*addr))
-       return -EINVAL;
+   err = ieee802154_sockaddr_check_size(addr, len);
+   if (err < 0)
+       return err;

    if (addr->family != AF_IEEE802154)
        return -EINVAL;
@@ -604,6 +607,7 @@ static int dgram_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
    struct ieee802154_mac_cb *cb;
    struct dgram_sock *ro = dgram_sk(sk);
    struct ieee802154_addr dst_addr;
+   DECLARE_SOCKADDR(struct sockaddr_ieee802154*, daddr, msg->msg_name);
    int hlen, tlen;
    int err;
@@ -612,10 +616,20 @@ static int dgram_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
        return -EOPNOTSUPP;
    }

-   if (!ro->connected && !msg->msg_name)
-       return -EDESTADDRREQ;
-   else if (ro->connected && msg->msg_name)
-       return -EISCONN;
+   if (msg->msg_name) {
+       if (ro->connected)
+           return -EISCONN;
+       if (msg->msg_namelen < IEEE802154_MIN_NAMELEN)
+           return -EINVAL;
+       err = ieee802154_sockaddr_check_size(daddr, msg->msg_namelen);
+       if (err < 0)
+           return err;
+       ieee802154_addr_from_sa(&dst_addr, &daddr->addr);
+   } else {
+       if (!ro->connected)
+           return -EDESTADDRREQ;
+       dst_addr = ro->dst_addr;
+   }

    if (!ro->bound)
        dev = dev_getfirstbyhwtype(sock_net(sk), ARPHRD_IEEE802154);
@@ -651,16 +665,6 @@ static int dgram_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
    cb = mac_cb_init(skb);
    cb->type = IEEE802154_FC_TYPE_DATA;
    cb->ackreq = ro->want_ack;
-
-   if (msg->msg_name) {
-       DECLARE_SOCKADDR(struct sockaddr_ieee802154*,
-                        daddr, msg->msg_name);
-
-       ieee802154_addr_from_sa(&dst_addr, &daddr->addr);
-   } else {
-       dst_addr = ro->dst_addr;
-   }
-
    cb->secen = ro->secen;
    cb->secen_override = ro->secen_override;
    cb->seclevel = ro->seclevel;


@@ -1004,7 +1004,9 @@ static void ipmr_cache_resolve(struct net *net, struct mr_table *mrt,

            rtnl_unicast(skb, net, NETLINK_CB(skb).portid);
        } else {
+           rcu_read_lock();
            ip_mr_forward(net, mrt, skb->dev, skb, c, 0);
+           rcu_read_unlock();
        }
    }
}


@@ -1761,19 +1761,28 @@ int tcp_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
    if (sk->sk_state == TCP_LISTEN)
        return -ENOTCONN;

-   skb = tcp_recv_skb(sk, seq, &offset);
-   if (!skb)
-       return 0;
+   while ((skb = tcp_recv_skb(sk, seq, &offset)) != NULL) {
+       u8 tcp_flags;
+       int used;

-   __skb_unlink(skb, &sk->sk_receive_queue);
-   WARN_ON(!skb_set_owner_sk_safe(skb, sk));
-   copied = recv_actor(sk, skb);
-   if (copied >= 0) {
-       seq += copied;
-       if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)
+       __skb_unlink(skb, &sk->sk_receive_queue);
+       WARN_ON_ONCE(!skb_set_owner_sk_safe(skb, sk));
+       tcp_flags = TCP_SKB_CB(skb)->tcp_flags;
+       used = recv_actor(sk, skb);
+       consume_skb(skb);
+       if (used < 0) {
+           if (!copied)
+               copied = used;
+           break;
+       }
+       seq += used;
+       copied += used;
+       if (tcp_flags & TCPHDR_FIN) {
            ++seq;
+           break;
+       }
    }
-   consume_skb(skb);
    WRITE_ONCE(tp->copied_seq, seq);

    tcp_rcv_space_adjust(sk);

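The reworked tcp_read_skb() drains the queue with a simple actor contract: the callback returns how many bytes it consumed or a negative error, and an error is reported to the caller only when nothing was copied yet. A user-space analogue of just that control flow (illustrative, not kernel code):

#include <stdio.h>

static int actor(int item)
{
    return item;    /* bytes consumed, or negative error */
}

int main(void)
{
    int queue[] = { 3, 5, -1, 7 };  /* -1 stands in for an error */
    int copied = 0;
    size_t i;

    for (i = 0; i < sizeof(queue) / sizeof(queue[0]); i++) {
        int used = actor(queue[i]);

        if (used < 0) {
            if (!copied)
                copied = used;  /* report error only if nothing copied */
            break;
        }
        copied += used;
    }
    printf("copied = %d\n", copied);    /* 8: stops at the error */
    return 0;
}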

@@ -1821,7 +1821,7 @@ int udp_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
            continue;
        }

-       WARN_ON(!skb_set_owner_sk_safe(skb, sk));
+       WARN_ON_ONCE(!skb_set_owner_sk_safe(skb, sk));
        used = recv_actor(sk, skb);
        if (used <= 0) {
            if (!copied)


@@ -1070,13 +1070,13 @@ static int __init inet6_init(void)
    for (r = &inetsw6[0]; r < &inetsw6[SOCK_MAX]; ++r)
        INIT_LIST_HEAD(r);

+   raw_hashinfo_init(&raw_v6_hashinfo);
+
    if (disable_ipv6_mod) {
        pr_info("Loaded, but administratively disabled, reboot required to enable\n");
        goto out;
    }

-   raw_hashinfo_init(&raw_v6_hashinfo);
-
    err = proto_register(&tcpv6_prot, 1);
    if (err)
        goto out;


@@ -1028,8 +1028,11 @@ static void ip6mr_cache_resolve(struct net *net, struct mr_table *mrt,
                ((struct nlmsgerr *)nlmsg_data(nlh))->error = -EMSGSIZE;
            }
            rtnl_unicast(skb, net, NETLINK_CB(skb).portid);
-       } else
+       } else {
+           rcu_read_lock();
            ip6_mr_forward(net, mrt, skb->dev, skb, c);
+           rcu_read_unlock();
+       }
    }
}


@@ -150,9 +150,15 @@ static bool mptcp_try_coalesce(struct sock *sk, struct sk_buff *to,
             MPTCP_SKB_CB(from)->map_seq, MPTCP_SKB_CB(to)->map_seq,
             to->len, MPTCP_SKB_CB(from)->end_seq);
    MPTCP_SKB_CB(to)->end_seq = MPTCP_SKB_CB(from)->end_seq;
-   kfree_skb_partial(from, fragstolen);
+
+   /* note the fwd memory can reach a negative value after accounting
+    * for the delta, but the later skb free will restore a non
+    * negative one
+    */
    atomic_add(delta, &sk->sk_rmem_alloc);
    mptcp_rmem_charge(sk, delta);
+   kfree_skb_partial(from, fragstolen);

    return true;
}


@@ -33,6 +33,7 @@ MODULE_AUTHOR("Rusty Russell <rusty@rustcorp.com.au>");
MODULE_DESCRIPTION("ftp connection tracking helper");
MODULE_ALIAS("ip_conntrack_ftp");
MODULE_ALIAS_NFCT_HELPER(HELPER_NAME);
+static DEFINE_SPINLOCK(nf_ftp_lock);

#define MAX_PORTS 8
static u_int16_t ports[MAX_PORTS];
@@ -409,7 +410,8 @@ static int help(struct sk_buff *skb,
    }
    datalen = skb->len - dataoff;

-   spin_lock_bh(&ct->lock);
+   /* seqadj (nat) uses ct->lock internally, nf_nat_ftp would cause deadlock */
+   spin_lock_bh(&nf_ftp_lock);
    fb_ptr = skb->data + dataoff;
    ends_in_nl = (fb_ptr[datalen - 1] == '\n');
@@ -538,7 +540,7 @@ static int help(struct sk_buff *skb,
    if (ends_in_nl)
        update_nl_seq(ct, seq, ct_ftp_info, dir, skb);
out:
-   spin_unlock_bh(&ct->lock);
+   spin_unlock_bh(&nf_ftp_lock);
    return ret;
}


@@ -157,15 +157,37 @@ static int help(struct sk_buff *skb, unsigned int protoff,
    data = ib_ptr;
    data_limit = ib_ptr + datalen;

-   /* strlen("\1DCC SENT t AAAAAAAA P\1\n")=24
-    * 5+MINMATCHLEN+strlen("t AAAAAAAA P\1\n")=14 */
-   while (data < data_limit - (19 + MINMATCHLEN)) {
-       if (memcmp(data, "\1DCC ", 5)) {
+   /* Skip any whitespace */
+   while (data < data_limit - 10) {
+       if (*data == ' ' || *data == '\r' || *data == '\n')
+           data++;
+       else
+           break;
+   }
+
+   /* strlen("PRIVMSG x ")=10 */
+   if (data < data_limit - 10) {
+       if (strncasecmp("PRIVMSG ", data, 8))
+           goto out;
+       data += 8;
+   }
+
+   /* strlen(" :\1DCC SENT t AAAAAAAA P\1\n")=26
+    * 7+MINMATCHLEN+strlen("t AAAAAAAA P\1\n")=26
+    */
+   while (data < data_limit - (21 + MINMATCHLEN)) {
+       /* Find first " :", the start of message */
+       if (memcmp(data, " :", 2)) {
            data++;
            continue;
        }
+       data += 2;
+
+       /* then check that place only for the DCC command */
+       if (memcmp(data, "\1DCC ", 5))
+           goto out;
        data += 5;
-       /* we have at least (19+MINMATCHLEN)-5 bytes valid data left */
+       /* we have at least (21+MINMATCHLEN)-(2+5) bytes valid data left */

        iph = ip_hdr(skb);
        pr_debug("DCC found in master %pI4:%u %pI4:%u\n",
@@ -181,7 +203,7 @@ static int help(struct sk_buff *skb, unsigned int protoff,
            pr_debug("DCC %s detected\n", dccprotos[i]);

            /* we have at least
-            * (19+MINMATCHLEN)-5-dccprotos[i].matchlen bytes valid
+            * (21+MINMATCHLEN)-7-dccprotos[i].matchlen bytes valid
             * data left (== 14/13 bytes) */
            if (parse_dcc(data, data_limit, &dcc_ip,
                          &dcc_port, &addr_beg_p, &addr_end_p)) {


@@ -477,7 +477,7 @@ static int ct_sip_walk_headers(const struct nf_conn *ct, const char *dptr,
                return ret;
            if (ret == 0)
                break;
-           dataoff += *matchoff;
+           dataoff = *matchoff;
        }
        *in_header = 0;
    }
@@ -489,7 +489,7 @@ static int ct_sip_walk_headers(const struct nf_conn *ct, const char *dptr,
            break;
        if (ret == 0)
            return ret;
-       dataoff += *matchoff;
+       dataoff = *matchoff;
    }

    if (in_header)


@@ -2197,7 +2197,6 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
                  struct netlink_ext_ack *extack)
{
    const struct nlattr * const *nla = ctx->nla;
-   struct nft_stats __percpu *stats = NULL;
    struct nft_table *table = ctx->table;
    struct nft_base_chain *basechain;
    struct net *net = ctx->net;
@@ -2212,6 +2211,7 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
        return -EOVERFLOW;

    if (nla[NFTA_CHAIN_HOOK]) {
+       struct nft_stats __percpu *stats = NULL;
        struct nft_chain_hook hook;

        if (flags & NFT_CHAIN_BINDING)
@@ -2243,8 +2243,11 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
        if (err < 0) {
            nft_chain_release_hook(&hook);
            kfree(basechain);
+           free_percpu(stats);
            return err;
        }
+       if (stats)
+           static_branch_inc(&nft_counters_enabled);
    } else {
        if (flags & NFT_CHAIN_BASE)
            return -EINVAL;
@@ -2319,9 +2322,6 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,
        goto err_unregister_hook;
    }

-   if (stats)
-       static_branch_inc(&nft_counters_enabled);
-
    table->use++;

    return 0;

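The nf_tables change is a standard resource-lifetime fix: the per-cpu stats are allocated inside the hook branch, so the error exit of that branch must free them, and the counters static branch is only bumped once registration of the owning chain can no longer fail. A condensed sketch of that shape (helper names hypothetical, not the nf_tables code):

static int example_add_chain_with_stats(void)
{
    struct nft_stats __percpu *stats = example_parse_stats(); /* hypothetical */
    int err;

    err = example_register_chain();     /* hypothetical setup step */
    if (err) {
        free_percpu(stats);             /* the fix: no leak on error */
        return err;
    }
    if (stats)
        static_branch_inc(&nft_counters_enabled); /* only after success */
    return 0;
}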

@@ -269,6 +269,7 @@ bool nf_osf_find(const struct sk_buff *skb,
    struct nf_osf_hdr_ctx ctx;
    const struct tcphdr *tcp;
    struct tcphdr _tcph;
+   bool found = false;

    memset(&ctx, 0, sizeof(ctx));

@@ -283,10 +284,11 @@ bool nf_osf_find(const struct sk_buff *skb,
        data->genre = f->genre;
        data->version = f->version;
+       found = true;
        break;
    }

-   return true;
+   return found;
}
EXPORT_SYMBOL_GPL(nf_osf_find);


@@ -2137,6 +2137,7 @@ static int tc_new_tfilter(struct sk_buff *skb, struct nlmsghdr *n,
    }

    if (chain->tmplt_ops && chain->tmplt_ops != tp->ops) {
+       tfilter_put(tp, fh);
        NL_SET_ERR_MSG(extack, "Chain template is set to a different filter kind");
        err = -EINVAL;
        goto errout;

@@ -67,6 +67,7 @@ struct taprio_sched {
    u32 flags;
    enum tk_offsets tk_offset;
    int clockid;
+   bool offloaded;
    atomic64_t picos_per_byte; /* Using picoseconds because for 10Gbps+
                                * speeds it's sub-nanoseconds per byte
                                */
@@ -1279,6 +1280,8 @@ static int taprio_enable_offload(struct net_device *dev,
        goto done;
    }

+   q->offloaded = true;
+
done:
    taprio_offload_free(offload);
@@ -1293,12 +1296,9 @@ static int taprio_disable_offload(struct net_device *dev,
    struct tc_taprio_qopt_offload *offload;
    int err;

-   if (!FULL_OFFLOAD_IS_ENABLED(q->flags))
+   if (!q->offloaded)
        return 0;

-   if (!ops->ndo_setup_tc)
-       return -EOPNOTSUPP;
-
    offload = taprio_offload_alloc(0);
    if (!offload) {
        NL_SET_ERR_MSG(extack,
@@ -1314,6 +1314,8 @@ static int taprio_disable_offload(struct net_device *dev,
        goto out;
    }

+   q->offloaded = false;
+
out:
    taprio_offload_free(offload);
@@ -1949,12 +1951,14 @@ static int taprio_dump(struct Qdisc *sch, struct sk_buff *skb)
static struct Qdisc *taprio_leaf(struct Qdisc *sch, unsigned long cl)
{
-   struct netdev_queue *dev_queue = taprio_queue_get(sch, cl);
+   struct taprio_sched *q = qdisc_priv(sch);
+   struct net_device *dev = qdisc_dev(sch);
+   unsigned int ntx = cl - 1;

-   if (!dev_queue)
+   if (ntx >= dev->num_tx_queues)
        return NULL;

-   return dev_queue->qdisc_sleeping;
+   return q->qdiscs[ntx];
}

static unsigned long taprio_find(struct Qdisc *sch, u32 classid)


@@ -2239,7 +2239,7 @@ static struct smc_buf_desc *smcr_new_buf_create(struct smc_link_group *lgr,
static int smcr_buf_map_usable_links(struct smc_link_group *lgr,
                                     struct smc_buf_desc *buf_desc, bool is_rmb)
{
-   int i, rc = 0;
+   int i, rc = 0, cnt = 0;

    /* protect against parallel link reconfiguration */
    mutex_lock(&lgr->llc_conf_mutex);
@@ -2252,9 +2252,12 @@ static int smcr_buf_map_usable_links(struct smc_link_group *lgr,
            rc = -ENOMEM;
            goto out;
        }
+       cnt++;
    }

out:
    mutex_unlock(&lgr->llc_conf_mutex);
+   if (!rc && !cnt)
+       rc = -EINVAL;
    return rc;
}


@@ -13,6 +13,7 @@ TARGETS += damon
TARGETS += drivers/dma-buf
TARGETS += drivers/s390x/uvdevice
TARGETS += drivers/net/bonding
+TARGETS += drivers/net/team
TARGETS += efivarfs
TARGETS += exec
TARGETS += filesystems


@@ -1,6 +1,10 @@
# SPDX-License-Identifier: GPL-2.0
# Makefile for net selftests

-TEST_PROGS := bond-break-lacpdu-tx.sh
+TEST_PROGS := bond-break-lacpdu-tx.sh \
+              dev_addr_lists.sh \
+              bond-arp-interval-causes-panic.sh
+
+TEST_FILES := lag_lib.sh
include ../../../lib.mk


@@ -0,0 +1,49 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
#
# cause kernel oops in bond_rr_gen_slave_id
DEBUG=${DEBUG:-0}
set -e
test ${DEBUG} -ne 0 && set -x
finish()
{
ip netns delete server || true
ip netns delete client || true
ip link del link1_1 || true
}
trap finish EXIT
client_ip4=192.168.1.198
server_ip4=192.168.1.254
# setup kernel so it reboots after causing the panic
echo 180 >/proc/sys/kernel/panic
# build namespaces
ip link add dev link1_1 type veth peer name link1_2
ip netns add "server"
ip link set dev link1_2 netns server up name eth0
ip netns exec server ip addr add ${server_ip4}/24 dev eth0
ip netns add "client"
ip link set dev link1_1 netns client down name eth0
ip netns exec client ip link add dev bond0 down type bond mode 1 \
miimon 100 all_slaves_active 1
ip netns exec client ip link set dev eth0 down master bond0
ip netns exec client ip link set dev bond0 up
ip netns exec client ip addr add ${client_ip4}/24 dev bond0
ip netns exec client ping -c 5 $server_ip4 >/dev/null
ip netns exec client ip link set dev eth0 down nomaster
ip netns exec client ip link set dev bond0 down
ip netns exec client ip link set dev bond0 type bond mode 0 \
arp_interval 1000 arp_ip_target "+${server_ip4}"
ip netns exec client ip link set dev eth0 down master bond0
ip netns exec client ip link set dev bond0 up
ip netns exec client ping -c 5 $server_ip4 >/dev/null
exit 0


@@ -1 +1,2 @@
CONFIG_BONDING=y
+CONFIG_MACVLAN=y


@@ -0,0 +1,109 @@
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0
#
# Test bond device handling of addr lists (dev->uc, mc)
#
ALL_TESTS="
bond_cleanup_mode1
bond_cleanup_mode4
bond_listen_lacpdu_multicast_case_down
bond_listen_lacpdu_multicast_case_up
"
REQUIRE_MZ=no
NUM_NETIFS=0
lib_dir=$(dirname "$0")
source "$lib_dir"/../../../net/forwarding/lib.sh
source "$lib_dir"/lag_lib.sh
destroy()
{
local ifnames=(dummy1 dummy2 bond1 mv0)
local ifname
for ifname in "${ifnames[@]}"; do
ip link del "$ifname" &>/dev/null
done
}
cleanup()
{
pre_cleanup
destroy
}
# bond driver control paths vary between modes that have a primary slave
# (bond_uses_primary()) and others. Test both kinds of modes.
bond_cleanup_mode1()
{
RET=0
test_LAG_cleanup "bonding" "active-backup"
}
bond_cleanup_mode4() {
RET=0
test_LAG_cleanup "bonding" "802.3ad"
}
bond_listen_lacpdu_multicast()
{
# Initial state of bond device, up | down
local init_state=$1
local lacpdu_mc="01:80:c2:00:00:02"
ip link add dummy1 type dummy
ip link add bond1 "$init_state" type bond mode 802.3ad
ip link set dev dummy1 master bond1
if [ "$init_state" = "down" ]; then
ip link set dev bond1 up
fi
grep_bridge_fdb "$lacpdu_mc" bridge fdb show brport dummy1 >/dev/null
check_err $? "LACPDU multicast address not present on slave (1)"
ip link set dev bond1 down
not grep_bridge_fdb "$lacpdu_mc" bridge fdb show brport dummy1 >/dev/null
check_err $? "LACPDU multicast address still present on slave"
ip link set dev bond1 up
grep_bridge_fdb "$lacpdu_mc" bridge fdb show brport dummy1 >/dev/null
check_err $? "LACPDU multicast address not present on slave (2)"
cleanup
log_test "bonding LACPDU multicast address to slave (from bond $init_state)"
}
# The LACPDU mc addr is added by different paths depending on the initial state
# of the bond when enslaving a device. Test both cases.
bond_listen_lacpdu_multicast_case_down()
{
RET=0
bond_listen_lacpdu_multicast "down"
}
bond_listen_lacpdu_multicast_case_up()
{
RET=0
bond_listen_lacpdu_multicast "up"
}
trap cleanup EXIT
tests_run
exit "$EXIT_STATUS"


@@ -0,0 +1,61 @@
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0
# Test that a link aggregation device (bonding, team) removes the hardware
# addresses that it adds on its underlying devices.
test_LAG_cleanup()
{
local driver=$1
local mode=$2
local ucaddr="02:00:00:12:34:56"
local addr6="fe80::78:9abc/64"
local mcaddr="33:33:ff:78:9a:bc"
local name
ip link add dummy1 type dummy
ip link add dummy2 type dummy
if [ "$driver" = "bonding" ]; then
name="bond1"
ip link add "$name" up type bond mode "$mode"
ip link set dev dummy1 master "$name"
ip link set dev dummy2 master "$name"
elif [ "$driver" = "team" ]; then
name="team0"
teamd -d -c '
{
"device": "'"$name"'",
"runner": {
"name": "'"$mode"'"
},
"ports": {
"dummy1":
{},
"dummy2":
{}
}
}
'
ip link set dev "$name" up
else
check_err 1
log_test test_LAG_cleanup ": unknown driver \"$driver\""
return
fi
# Used to test dev->uc handling
ip link add mv0 link "$name" up address "$ucaddr" type macvlan
# Used to test dev->mc handling
ip address add "$addr6" dev "$name"
ip link set dev "$name" down
ip link del "$name"
not grep_bridge_fdb "$ucaddr" bridge fdb show >/dev/null
check_err $? "macvlan unicast address still present on a slave"
not grep_bridge_fdb "$mcaddr" bridge fdb show >/dev/null
check_err $? "IPv6 solicited-node multicast mac address still present on a slave"
cleanup
log_test "$driver cleanup mode $mode"
}


@@ -0,0 +1,6 @@
# SPDX-License-Identifier: GPL-2.0
# Makefile for net selftests
TEST_PROGS := dev_addr_lists.sh
include ../../../lib.mk


@@ -0,0 +1,3 @@
CONFIG_NET_TEAM=y
CONFIG_NET_TEAM_MODE_LOADBALANCE=y
CONFIG_MACVLAN=y


@@ -0,0 +1,51 @@
#!/bin/bash
# SPDX-License-Identifier: GPL-2.0
#
# Test team device handling of addr lists (dev->uc, mc)
#
ALL_TESTS="
team_cleanup
"
REQUIRE_MZ=no
NUM_NETIFS=0
lib_dir=$(dirname "$0")
source "$lib_dir"/../../../net/forwarding/lib.sh
source "$lib_dir"/../bonding/lag_lib.sh
destroy()
{
local ifnames=(dummy0 dummy1 team0 mv0)
local ifname
for ifname in "${ifnames[@]}"; do
ip link del "$ifname" &>/dev/null
done
}
cleanup()
{
pre_cleanup
destroy
}
team_cleanup()
{
RET=0
test_LAG_cleanup "team" "lacp"
}
require_command teamd
trap cleanup EXIT
tests_run
exit "$EXIT_STATUS"


@@ -28,7 +28,7 @@
# +------------------+                             +------------------+
#

-ALL_TESTS="mcast_v4 mcast_v6 rpf_v4 rpf_v6"
+ALL_TESTS="mcast_v4 mcast_v6 rpf_v4 rpf_v6 unres_v4 unres_v6"
NUM_NETIFS=6
source lib.sh
source tc_common.sh
@@ -406,6 +406,96 @@ rpf_v6()
log_test "RPF IPv6"
}
unres_v4()
{
# Send a multicast packet not corresponding to an installed route,
# causing the kernel to queue the packet for resolution and emit an
# IGMPMSG_NOCACHE notification. smcrouted will react to this
# notification by consulting its (*, G) list and installing an (S, G)
# route, which will be used to forward the queued packet.
RET=0
tc filter add dev $h2 ingress protocol ip pref 1 handle 1 flower \
dst_ip 225.1.2.3 ip_proto udp dst_port 12345 action drop
tc filter add dev $h3 ingress protocol ip pref 1 handle 1 flower \
dst_ip 225.1.2.3 ip_proto udp dst_port 12345 action drop
# Forwarding should fail before installing a matching (*, G).
$MZ $h1 -c 1 -p 128 -t udp "ttl=10,sp=54321,dp=12345" \
-a 00:11:22:33:44:55 -b 01:00:5e:01:02:03 \
-A 198.51.100.2 -B 225.1.2.3 -q
tc_check_packets "dev $h2 ingress" 1 0
check_err $? "Multicast received on first host when should not"
tc_check_packets "dev $h3 ingress" 1 0
check_err $? "Multicast received on second host when should not"
# Create (*, G). Will not be installed in the kernel.
create_mcast_sg $rp1 0.0.0.0 225.1.2.3 $rp2 $rp3
$MZ $h1 -c 1 -p 128 -t udp "ttl=10,sp=54321,dp=12345" \
-a 00:11:22:33:44:55 -b 01:00:5e:01:02:03 \
-A 198.51.100.2 -B 225.1.2.3 -q
tc_check_packets "dev $h2 ingress" 1 1
check_err $? "Multicast not received on first host"
tc_check_packets "dev $h3 ingress" 1 1
check_err $? "Multicast not received on second host"
delete_mcast_sg $rp1 0.0.0.0 225.1.2.3 $rp2 $rp3
tc filter del dev $h3 ingress protocol ip pref 1 handle 1 flower
tc filter del dev $h2 ingress protocol ip pref 1 handle 1 flower
log_test "Unresolved queue IPv4"
}
unres_v6()
{
# Send a multicast packet not corresponding to an installed route,
# causing the kernel to queue the packet for resolution and emit an
# MRT6MSG_NOCACHE notification. smcrouted will react to this
# notification by consulting its (*, G) list and installing an (S, G)
# route, which will be used to forward the queued packet.
RET=0
tc filter add dev $h2 ingress protocol ipv6 pref 1 handle 1 flower \
dst_ip ff0e::3 ip_proto udp dst_port 12345 action drop
tc filter add dev $h3 ingress protocol ipv6 pref 1 handle 1 flower \
dst_ip ff0e::3 ip_proto udp dst_port 12345 action drop
# Forwarding should fail before installing a matching (*, G).
$MZ $h1 -6 -c 1 -p 128 -t udp "ttl=10,sp=54321,dp=12345" \
-a 00:11:22:33:44:55 -b 33:33:00:00:00:03 \
-A 2001:db8:1::2 -B ff0e::3 -q
tc_check_packets "dev $h2 ingress" 1 0
check_err $? "Multicast received on first host when should not"
tc_check_packets "dev $h3 ingress" 1 0
check_err $? "Multicast received on second host when should not"
# Create (*, G). Will not be installed in the kernel.
create_mcast_sg $rp1 :: ff0e::3 $rp2 $rp3
$MZ $h1 -6 -c 1 -p 128 -t udp "ttl=10,sp=54321,dp=12345" \
-a 00:11:22:33:44:55 -b 33:33:00:00:00:03 \
-A 2001:db8:1::2 -B ff0e::3 -q
tc_check_packets "dev $h2 ingress" 1 1
check_err $? "Multicast not received on first host"
tc_check_packets "dev $h3 ingress" 1 1
check_err $? "Multicast not received on second host"
delete_mcast_sg $rp1 :: ff0e::3 $rp2 $rp3
tc filter del dev $h3 ingress protocol ipv6 pref 1 handle 1 flower
tc filter del dev $h2 ingress protocol ipv6 pref 1 handle 1 flower
log_test "Unresolved queue IPv6"
}
trap cleanup EXIT
setup_prepare


@@ -1,3 +1,4 @@
+#!/bin/bash
# SPDX-License-Identifier: GPL-2.0
# This test sends one stream of traffic from H1 through a TBF shaper, to a RED


@@ -91,7 +91,7 @@ src
start 1
count 5
src_delta 2000
-tools sendip nc bash
+tools sendip socat nc bash
proto udp
race_repeat 3
@@ -116,7 +116,7 @@ src
start 10
count 5
src_delta 2000
-tools sendip nc bash
+tools sendip socat nc bash
proto udp6
race_repeat 3
@@ -141,7 +141,7 @@ src
start 1
count 5
src_delta 2000
-tools sendip nc bash
+tools sendip socat nc bash
proto udp
race_repeat 0
@@ -163,7 +163,7 @@ src mac
start 10
count 5
src_delta 2000
-tools sendip nc bash
+tools sendip socat nc bash
proto udp6
race_repeat 0
@@ -185,7 +185,7 @@ src mac proto
start 10
count 5
src_delta 2000
-tools sendip nc bash
+tools sendip socat nc bash
proto udp6
race_repeat 0
@@ -207,7 +207,7 @@ src addr4
start 1
count 5
src_delta 2000
-tools sendip nc bash
+tools sendip socat nc bash
proto udp
race_repeat 3
@@ -227,7 +227,7 @@ src addr6 port
start 10
count 5
src_delta 2000
-tools sendip nc
+tools sendip socat nc
proto udp6
race_repeat 3
@@ -247,7 +247,7 @@ src mac proto addr4
start 1
count 5
src_delta 2000
-tools sendip nc bash
+tools sendip socat nc bash
proto udp
race_repeat 0
@@ -264,7 +264,7 @@ src mac
start 1
count 5
src_delta 2000
-tools sendip nc bash
+tools sendip socat nc bash
proto udp
race_repeat 0
@@ -286,7 +286,7 @@ src mac addr4
start 1
count 5
src_delta 2000
-tools sendip nc bash
+tools sendip socat nc bash
proto udp
race_repeat 0
@@ -337,7 +337,7 @@ src addr4
start 1
count 5
src_delta 2000
-tools sendip nc
+tools sendip socat nc
proto udp
race_repeat 3
@@ -363,7 +363,7 @@ src mac
start 1
count 1
src_delta 2000
-tools sendip nc bash
+tools sendip socat nc bash
proto udp
race_repeat 0
@@ -541,6 +541,24 @@ setup_send_udp() {
dst_port=
src_addr4=
}
elif command -v socat -v >/dev/null; then
send_udp() {
if [ -n "${src_addr4}" ]; then
B ip addr add "${src_addr4}" dev veth_b
__socatbind=",bind=${src_addr4}"
if [ -n "${src_port}" ];then
__socatbind="${__socatbind}:${src_port}"
fi
fi
ip addr add "${dst_addr4}" dev veth_a 2>/dev/null
[ -z "${dst_port}" ] && dst_port=12345
echo "test4" | B socat -t 0.01 STDIN UDP4-DATAGRAM:${dst_addr4}:${dst_port}"${__socatbind}"
src_addr4=
src_port=
}
elif command -v nc >/dev/null; then
if nc -u -w0 1.1.1.1 1 2>/dev/null; then
# OpenBSD netcat
@@ -606,6 +624,29 @@ setup_send_udp6() {
dst_port=
src_addr6=
}
elif command -v socat -v >/dev/null; then
send_udp6() {
ip -6 addr add "${dst_addr6}" dev veth_a nodad \
2>/dev/null
__socatbind6=
if [ -n "${src_addr6}" ]; then
if [ "${src_addr6}" != "${src_addr6_added}" ]; then
B ip addr add "${src_addr6}" dev veth_b nodad
src_addr6_added=${src_addr6}
fi
__socatbind6=",bind=[${src_addr6}]"
if [ -n "${src_port}" ] ;then
__socatbind6="${__socatbind6}:${src_port}"
fi
fi
echo "test6" | B socat -t 0.01 STDIN UDP6-DATAGRAM:[${dst_addr6}]:${dst_port}"${__socatbind6}"
}
elif command -v nc >/dev/null && nc -u -w0 1.1.1.1 1 2>/dev/null; then
# GNU netcat might not work with IPv6, try next tool
send_udp6() {


@@ -343,8 +343,10 @@ $(KERNEL_BZIMAGE): $(TOOLCHAIN_PATH)/.installed $(KERNEL_BUILD_PATH)/.config $(B
.PHONY: $(KERNEL_BZIMAGE)

$(TOOLCHAIN_PATH)/$(CHOST)/include/linux/.installed: | $(KERNEL_BUILD_PATH)/.config $(TOOLCHAIN_PATH)/.installed
+ifneq ($(ARCH),um)
	rm -rf $(TOOLCHAIN_PATH)/$(CHOST)/include/linux
	$(MAKE) -C $(KERNEL_PATH) O=$(KERNEL_BUILD_PATH) INSTALL_HDR_PATH=$(TOOLCHAIN_PATH)/$(CHOST) ARCH=$(KERNEL_ARCH) CROSS_COMPILE=$(CROSS_COMPILE) headers_install
+endif
	touch $@
$(TOOLCHAIN_PATH)/.installed: $(TOOLCHAIN_TAR)