Merge tag 'net-6.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
 "Including fixes from bluetooth.

  Current release - regressions:

   - rtnetlink: fix rtnl_dump_ifinfo() error path

   - bluetooth: remove the redundant sco_conn_put

  Previous releases - regressions:

   - netlink: fix false positive warning in extack during dumps

   - sched: sch_fq: don't follow the fast path if Tx is behind now

   - ipv6: delete temporary address if mngtmpaddr is removed or
     unmanaged

   - tcp: fix use-after-free of nreq in reqsk_timer_handler().

   - bluetooth: fix slab-use-after-free Read in set_powered_sync

   - l2tp: fix warning in l2tp_exit_net found by syzbot

   - eth:
       - bnxt_en: fix receive ring space parameters when XDP is active
       - lan78xx: fix double free issue with interrupt buffer allocation
       - tg3: set coherent DMA mask bits to 31 for BCM57766 chipsets

  Previous releases - always broken:

   - ipmr: fix tables suspicious RCU usage

   - iucv: MSG_PEEK causes memory leak in iucv_sock_destruct()

   - eth:
       - octeontx2-af: fix low network performance
       - stmmac: dwmac-socfpga: set RX watchdog interrupt as broken
       - rtase: correct the speed for RTL907XD-V1

  Misc:

   - some documentation fixup"

* tag 'net-6.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (49 commits)
  ipmr: fix build with clang and DEBUG_NET disabled.
  Documentation: tls_offload: fix typos and grammar
  Fix spelling mistake
  ipmr: fix tables suspicious RCU usage
  ip6mr: fix tables suspicious RCU usage
  ipmr: add debug check for mr table cleanup
  selftests: rds: move test.py to TEST_FILES
  net_sched: sch_fq: don't follow the fast path if Tx is behind now
  tcp: Fix use-after-free of nreq in reqsk_timer_handler().
  net: phy: fix phy_ethtool_set_eee() incorrectly enabling LPI
  net: Comment copy_from_sockptr() explaining its behaviour
  rxrpc: Improve setsockopt() handling of malformed user input
  llc: Improve setsockopt() handling of malformed user input
  Bluetooth: SCO: remove the redundant sco_conn_put
  Bluetooth: MGMT: Fix possible deadlocks
  Bluetooth: MGMT: Fix slab-use-after-free Read in set_powered_sync
  bnxt_en: Unregister PTP during PCI shutdown and suspend
  bnxt_en: Refactor bnxt_ptp_init()
  bnxt_en: Fix receive ring space parameters when XDP is active
  bnxt_en: Fix queue start to update vnic RSS table
  ...
Linus Torvalds 2024-11-28 10:15:20 -08:00
commit 65ae975e97
46 changed files with 814 additions and 229 deletions

View File

@@ -51,7 +51,7 @@ Such userspace applications includes, but are not limited to:
 - mbimcli (included with the libmbim [3] library), and
 - ModemManager [4]
-Establishing a MBIM IP session reequires at least these actions by the
+Establishing a MBIM IP session requires at least these actions by the
 management application:
 - open the control channel

View File

@@ -51,7 +51,7 @@ and send them to the device for encryption and transmission.
 RX
 --
-On the receive side if the device handled decryption and authentication
+On the receive side, if the device handled decryption and authentication
 successfully, the driver will set the decrypted bit in the associated
 :c:type:`struct sk_buff <sk_buff>`. The packets reach the TCP stack and
 are handled normally. ``ktls`` is informed when data is queued to the socket
@@ -120,8 +120,9 @@ before installing the connection state in the kernel.
 RX
 --
-In RX direction local networking stack has little control over the segmentation,
-so the initial records' TCP sequence number may be anywhere inside the segment.
+In the RX direction, the local networking stack has little control over
+segmentation, so the initial records' TCP sequence number may be anywhere
+inside the segment.
 Normal operation
 ================
@@ -138,8 +139,8 @@ There are no guarantees on record length or record segmentation. In particular
 segments may start at any point of a record and contain any number of records.
 Assuming segments are received in order, the device should be able to perform
 crypto operations and authentication regardless of segmentation. For this
-to be possible device has to keep small amount of segment-to-segment state.
-This includes at least:
+to be possible, the device has to keep a small amount of segment-to-segment
+state. This includes at least:
 * partial headers (if a segment carried only a part of the TLS header)
 * partial data block
@@ -175,12 +176,12 @@ and packet transformation functions) the device validates the Layer 4
 checksum and performs a 5-tuple lookup to find any TLS connection the packet
 may belong to (technically a 4-tuple
 lookup is sufficient - IP addresses and TCP port numbers, as the protocol
-is always TCP). If connection is matched device confirms if the TCP sequence
-number is the expected one and proceeds to TLS handling (record delineation,
-decryption, authentication for each record in the packet). The device leaves
-the record framing unmodified, the stack takes care of record decapsulation.
-Device indicates successful handling of TLS offload in the per-packet context
-(descriptor) passed to the host.
+is always TCP). If the packet is matched to a connection, the device confirms
+if the TCP sequence number is the expected one and proceeds to TLS handling
+(record delineation, decryption, authentication for each record in the packet).
+The device leaves the record framing unmodified, the stack takes care of record
+decapsulation. Device indicates successful handling of TLS offload in the
+per-packet context (descriptor) passed to the host.
 Upon reception of a TLS offloaded packet, the driver sets
 the :c:member:`decrypted` mark in :c:type:`struct sk_buff <sk_buff>`
@@ -439,7 +440,7 @@ by the driver:
 * ``rx_tls_resync_req_end`` - number of times the TLS async resync request
 properly ended with providing the HW tracked tcp-seq.
 * ``rx_tls_resync_req_skip`` - number of times the TLS async resync request
-procedure was started by not properly ended.
+procedure was started but not properly ended.
 * ``rx_tls_resync_res_ok`` - number of times the TLS resync response call to
 the driver was successfully handled.
 * ``rx_tls_resync_res_skip`` - number of times the TLS resync response call to
@@ -507,8 +508,8 @@ in packets as seen on the wire.
 Transport layer transparency
 ----------------------------
-The device should not modify any packet headers for the purpose
-of the simplifying TLS offload.
+For the purpose of simplifying TLS offload, the device should not modify any
+packet headers.
 The device should not depend on any packet headers beyond what is strictly
 necessary for TLS offload.

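The RX behaviour described in the tls-offload text above reduces to a single per-packet step in the driver: if the hardware reports that it already decrypted and authenticated the record(s), the driver marks the skb before passing it up the stack. A minimal sketch of that step, assuming a hypothetical device-specific descriptor flag (MY_RX_FLAG_TLS_DECRYPTED is not a real define; only the skb decrypted mark comes from the document):

    /* Hypothetical RX completion fragment; descriptor flag name is an
     * assumption. Tells ktls the payload already arrived as plaintext. */
    static void my_rx_mark_tls(struct sk_buff *skb, u32 desc_flags)
    {
    #ifdef CONFIG_TLS_DEVICE
    	if (desc_flags & MY_RX_FLAG_TLS_DECRYPTED)
    		skb->decrypted = 1;
    #endif
    }

The stack then handles the TCP segment normally, and ktls can skip software decryption for records covered by the mark.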
View File

@ -4661,7 +4661,7 @@ int bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode)
struct net_device *dev = bp->dev; struct net_device *dev = bp->dev;
if (page_mode) { if (page_mode) {
bp->flags &= ~BNXT_FLAG_AGG_RINGS; bp->flags &= ~(BNXT_FLAG_AGG_RINGS | BNXT_FLAG_NO_AGG_RINGS);
bp->flags |= BNXT_FLAG_RX_PAGE_MODE; bp->flags |= BNXT_FLAG_RX_PAGE_MODE;
if (bp->xdp_prog->aux->xdp_has_frags) if (bp->xdp_prog->aux->xdp_has_frags)
@ -9299,7 +9299,6 @@ static int __bnxt_hwrm_ptp_qcfg(struct bnxt *bp)
struct hwrm_port_mac_ptp_qcfg_output *resp; struct hwrm_port_mac_ptp_qcfg_output *resp;
struct hwrm_port_mac_ptp_qcfg_input *req; struct hwrm_port_mac_ptp_qcfg_input *req;
struct bnxt_ptp_cfg *ptp = bp->ptp_cfg; struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
bool phc_cfg;
u8 flags; u8 flags;
int rc; int rc;
@ -9346,8 +9345,9 @@ static int __bnxt_hwrm_ptp_qcfg(struct bnxt *bp)
rc = -ENODEV; rc = -ENODEV;
goto exit; goto exit;
} }
phc_cfg = (flags & PORT_MAC_PTP_QCFG_RESP_FLAGS_RTC_CONFIGURED) != 0; ptp->rtc_configured =
rc = bnxt_ptp_init(bp, phc_cfg); (flags & PORT_MAC_PTP_QCFG_RESP_FLAGS_RTC_CONFIGURED) != 0;
rc = bnxt_ptp_init(bp);
if (rc) if (rc)
netdev_warn(bp->dev, "PTP initialization failed.\n"); netdev_warn(bp->dev, "PTP initialization failed.\n");
exit: exit:
@ -14746,6 +14746,14 @@ static int bnxt_change_mtu(struct net_device *dev, int new_mtu)
bnxt_close_nic(bp, true, false); bnxt_close_nic(bp, true, false);
WRITE_ONCE(dev->mtu, new_mtu); WRITE_ONCE(dev->mtu, new_mtu);
/* MTU change may change the AGG ring settings if an XDP multi-buffer
* program is attached. We need to set the AGG rings settings and
* rx_skb_func accordingly.
*/
if (READ_ONCE(bp->xdp_prog))
bnxt_set_rx_skb_mode(bp, true);
bnxt_set_ring_params(bp); bnxt_set_ring_params(bp);
if (netif_running(dev)) if (netif_running(dev))
@ -15483,6 +15491,13 @@ static int bnxt_queue_start(struct net_device *dev, void *qmem, int idx)
for (i = 0; i <= BNXT_VNIC_NTUPLE; i++) { for (i = 0; i <= BNXT_VNIC_NTUPLE; i++) {
vnic = &bp->vnic_info[i]; vnic = &bp->vnic_info[i];
rc = bnxt_hwrm_vnic_set_rss_p5(bp, vnic, true);
if (rc) {
netdev_err(bp->dev, "hwrm vnic %d set rss failure rc: %d\n",
vnic->vnic_id, rc);
return rc;
}
vnic->mru = bp->dev->mtu + ETH_HLEN + VLAN_HLEN; vnic->mru = bp->dev->mtu + ETH_HLEN + VLAN_HLEN;
bnxt_hwrm_vnic_update(bp, vnic, bnxt_hwrm_vnic_update(bp, vnic,
VNIC_UPDATE_REQ_ENABLES_MRU_VALID); VNIC_UPDATE_REQ_ENABLES_MRU_VALID);
@ -16236,6 +16251,7 @@ static void bnxt_shutdown(struct pci_dev *pdev)
if (netif_running(dev)) if (netif_running(dev))
dev_close(dev); dev_close(dev);
bnxt_ptp_clear(bp);
bnxt_clear_int_mode(bp); bnxt_clear_int_mode(bp);
pci_disable_device(pdev); pci_disable_device(pdev);
@ -16263,6 +16279,7 @@ static int bnxt_suspend(struct device *device)
rc = bnxt_close(dev); rc = bnxt_close(dev);
} }
bnxt_hwrm_func_drv_unrgtr(bp); bnxt_hwrm_func_drv_unrgtr(bp);
bnxt_ptp_clear(bp);
pci_disable_device(bp->pdev); pci_disable_device(bp->pdev);
bnxt_free_ctx_mem(bp, false); bnxt_free_ctx_mem(bp, false);
rtnl_unlock(); rtnl_unlock();
@ -16306,6 +16323,10 @@ static int bnxt_resume(struct device *device)
if (bp->fw_crash_mem) if (bp->fw_crash_mem)
bnxt_hwrm_crash_dump_mem_cfg(bp); bnxt_hwrm_crash_dump_mem_cfg(bp);
if (bnxt_ptp_init(bp)) {
kfree(bp->ptp_cfg);
bp->ptp_cfg = NULL;
}
bnxt_get_wol_settings(bp); bnxt_get_wol_settings(bp);
if (netif_running(dev)) { if (netif_running(dev)) {
rc = bnxt_open(dev); rc = bnxt_open(dev);
@ -16484,8 +16505,12 @@ static void bnxt_io_resume(struct pci_dev *pdev)
rtnl_lock(); rtnl_lock();
err = bnxt_hwrm_func_qcaps(bp); err = bnxt_hwrm_func_qcaps(bp);
if (!err && netif_running(netdev)) if (!err) {
err = bnxt_open(netdev); if (netif_running(netdev))
err = bnxt_open(netdev);
else
err = bnxt_reserve_rings(bp, true);
}
if (!err) if (!err)
netif_device_attach(netdev); netif_device_attach(netdev);

View File

@ -2837,19 +2837,24 @@ static int bnxt_get_link_ksettings(struct net_device *dev,
} }
base->port = PORT_NONE; base->port = PORT_NONE;
if (link_info->media_type == PORT_PHY_QCFG_RESP_MEDIA_TYPE_TP) { if (media == BNXT_MEDIA_TP) {
base->port = PORT_TP; base->port = PORT_TP;
linkmode_set_bit(ETHTOOL_LINK_MODE_TP_BIT, linkmode_set_bit(ETHTOOL_LINK_MODE_TP_BIT,
lk_ksettings->link_modes.supported); lk_ksettings->link_modes.supported);
linkmode_set_bit(ETHTOOL_LINK_MODE_TP_BIT, linkmode_set_bit(ETHTOOL_LINK_MODE_TP_BIT,
lk_ksettings->link_modes.advertising); lk_ksettings->link_modes.advertising);
} else if (media == BNXT_MEDIA_KR) {
linkmode_set_bit(ETHTOOL_LINK_MODE_Backplane_BIT,
lk_ksettings->link_modes.supported);
linkmode_set_bit(ETHTOOL_LINK_MODE_Backplane_BIT,
lk_ksettings->link_modes.advertising);
} else { } else {
linkmode_set_bit(ETHTOOL_LINK_MODE_FIBRE_BIT, linkmode_set_bit(ETHTOOL_LINK_MODE_FIBRE_BIT,
lk_ksettings->link_modes.supported); lk_ksettings->link_modes.supported);
linkmode_set_bit(ETHTOOL_LINK_MODE_FIBRE_BIT, linkmode_set_bit(ETHTOOL_LINK_MODE_FIBRE_BIT,
lk_ksettings->link_modes.advertising); lk_ksettings->link_modes.advertising);
if (link_info->media_type == PORT_PHY_QCFG_RESP_MEDIA_TYPE_DAC) if (media == BNXT_MEDIA_CR)
base->port = PORT_DA; base->port = PORT_DA;
else else
base->port = PORT_FIBRE; base->port = PORT_FIBRE;

View File

@ -1038,7 +1038,7 @@ static void bnxt_ptp_free(struct bnxt *bp)
} }
} }
int bnxt_ptp_init(struct bnxt *bp, bool phc_cfg) int bnxt_ptp_init(struct bnxt *bp)
{ {
struct bnxt_ptp_cfg *ptp = bp->ptp_cfg; struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
int rc; int rc;
@ -1061,7 +1061,7 @@ int bnxt_ptp_init(struct bnxt *bp, bool phc_cfg)
if (BNXT_PTP_USE_RTC(bp)) { if (BNXT_PTP_USE_RTC(bp)) {
bnxt_ptp_timecounter_init(bp, false); bnxt_ptp_timecounter_init(bp, false);
rc = bnxt_ptp_init_rtc(bp, phc_cfg); rc = bnxt_ptp_init_rtc(bp, ptp->rtc_configured);
if (rc) if (rc)
goto out; goto out;
} else { } else {

View File

@ -135,6 +135,7 @@ struct bnxt_ptp_cfg {
BNXT_PTP_MSG_PDELAY_REQ | \ BNXT_PTP_MSG_PDELAY_REQ | \
BNXT_PTP_MSG_PDELAY_RESP) BNXT_PTP_MSG_PDELAY_RESP)
u8 tx_tstamp_en:1; u8 tx_tstamp_en:1;
u8 rtc_configured:1;
int rx_filter; int rx_filter;
u32 tstamp_filters; u32 tstamp_filters;
@ -168,7 +169,7 @@ void bnxt_tx_ts_cmp(struct bnxt *bp, struct bnxt_napi *bnapi,
struct tx_ts_cmp *tscmp); struct tx_ts_cmp *tscmp);
void bnxt_ptp_rtc_timecounter_init(struct bnxt_ptp_cfg *ptp, u64 ns); void bnxt_ptp_rtc_timecounter_init(struct bnxt_ptp_cfg *ptp, u64 ns);
int bnxt_ptp_init_rtc(struct bnxt *bp, bool phc_cfg); int bnxt_ptp_init_rtc(struct bnxt *bp, bool phc_cfg);
int bnxt_ptp_init(struct bnxt *bp, bool phc_cfg); int bnxt_ptp_init(struct bnxt *bp);
void bnxt_ptp_clear(struct bnxt *bp); void bnxt_ptp_clear(struct bnxt *bp);
static inline u64 bnxt_timecounter_cyc2time(struct bnxt_ptp_cfg *ptp, u64 ts) static inline u64 bnxt_timecounter_cyc2time(struct bnxt_ptp_cfg *ptp, u64 ts)
{ {

View File

@@ -17839,6 +17839,9 @@ static int tg3_init_one(struct pci_dev *pdev,
 } else
 persist_dma_mask = dma_mask = DMA_BIT_MASK(64);
+if (tg3_asic_rev(tp) == ASIC_REV_57766)
+persist_dma_mask = DMA_BIT_MASK(31);
 /* Configure DMA attributes. */
 if (dma_mask > DMA_BIT_MASK(32)) {
 err = dma_set_mask(&pdev->dev, dma_mask);

View File

@ -112,6 +112,11 @@ struct mac_ops *get_mac_ops(void *cgxd)
return ((struct cgx *)cgxd)->mac_ops; return ((struct cgx *)cgxd)->mac_ops;
} }
u32 cgx_get_fifo_len(void *cgxd)
{
return ((struct cgx *)cgxd)->fifo_len;
}
void cgx_write(struct cgx *cgx, u64 lmac, u64 offset, u64 val) void cgx_write(struct cgx *cgx, u64 lmac, u64 offset, u64 val)
{ {
writeq(val, cgx->reg_base + (lmac << cgx->mac_ops->lmac_offset) + writeq(val, cgx->reg_base + (lmac << cgx->mac_ops->lmac_offset) +
@ -209,6 +214,24 @@ u8 cgx_lmac_get_p2x(int cgx_id, int lmac_id)
return (cfg & CMR_P2X_SEL_MASK) >> CMR_P2X_SEL_SHIFT; return (cfg & CMR_P2X_SEL_MASK) >> CMR_P2X_SEL_SHIFT;
} }
static u8 cgx_get_nix_resetbit(struct cgx *cgx)
{
int first_lmac;
u8 p2x;
/* non 98XX silicons supports only NIX0 block */
if (cgx->pdev->subsystem_device != PCI_SUBSYS_DEVID_98XX)
return CGX_NIX0_RESET;
first_lmac = find_first_bit(&cgx->lmac_bmap, cgx->max_lmac_per_mac);
p2x = cgx_lmac_get_p2x(cgx->cgx_id, first_lmac);
if (p2x == CMR_P2X_SEL_NIX1)
return CGX_NIX1_RESET;
else
return CGX_NIX0_RESET;
}
/* Ensure the required lock for event queue(where asynchronous events are /* Ensure the required lock for event queue(where asynchronous events are
* posted) is acquired before calling this API. Else an asynchronous event(with * posted) is acquired before calling this API. Else an asynchronous event(with
* latest link status) can reach the destination before this function returns * latest link status) can reach the destination before this function returns
@ -501,7 +524,7 @@ static u32 cgx_get_lmac_fifo_len(void *cgxd, int lmac_id)
u8 num_lmacs; u8 num_lmacs;
u32 fifo_len; u32 fifo_len;
fifo_len = cgx->mac_ops->fifo_len; fifo_len = cgx->fifo_len;
num_lmacs = cgx->mac_ops->get_nr_lmacs(cgx); num_lmacs = cgx->mac_ops->get_nr_lmacs(cgx);
switch (num_lmacs) { switch (num_lmacs) {
@ -1719,6 +1742,8 @@ static int cgx_lmac_init(struct cgx *cgx)
lmac->lmac_type = cgx->mac_ops->get_lmac_type(cgx, lmac->lmac_id); lmac->lmac_type = cgx->mac_ops->get_lmac_type(cgx, lmac->lmac_id);
} }
/* Start X2P reset on given MAC block */
cgx->mac_ops->mac_x2p_reset(cgx, true);
return cgx_lmac_verify_fwi_version(cgx); return cgx_lmac_verify_fwi_version(cgx);
err_bitmap_free: err_bitmap_free:
@ -1764,7 +1789,7 @@ static void cgx_populate_features(struct cgx *cgx)
u64 cfg; u64 cfg;
cfg = cgx_read(cgx, 0, CGX_CONST); cfg = cgx_read(cgx, 0, CGX_CONST);
cgx->mac_ops->fifo_len = FIELD_GET(CGX_CONST_RXFIFO_SIZE, cfg); cgx->fifo_len = FIELD_GET(CGX_CONST_RXFIFO_SIZE, cfg);
cgx->max_lmac_per_mac = FIELD_GET(CGX_CONST_MAX_LMACS, cfg); cgx->max_lmac_per_mac = FIELD_GET(CGX_CONST_MAX_LMACS, cfg);
if (is_dev_rpm(cgx)) if (is_dev_rpm(cgx))
@ -1784,6 +1809,45 @@ static u8 cgx_get_rxid_mapoffset(struct cgx *cgx)
return 0x60; return 0x60;
} }
static void cgx_x2p_reset(void *cgxd, bool enable)
{
struct cgx *cgx = cgxd;
int lmac_id;
u64 cfg;
if (enable) {
for_each_set_bit(lmac_id, &cgx->lmac_bmap, cgx->max_lmac_per_mac)
cgx->mac_ops->mac_enadis_rx(cgx, lmac_id, false);
usleep_range(1000, 2000);
cfg = cgx_read(cgx, 0, CGXX_CMR_GLOBAL_CONFIG);
cfg |= cgx_get_nix_resetbit(cgx) | CGX_NSCI_DROP;
cgx_write(cgx, 0, CGXX_CMR_GLOBAL_CONFIG, cfg);
} else {
cfg = cgx_read(cgx, 0, CGXX_CMR_GLOBAL_CONFIG);
cfg &= ~(cgx_get_nix_resetbit(cgx) | CGX_NSCI_DROP);
cgx_write(cgx, 0, CGXX_CMR_GLOBAL_CONFIG, cfg);
}
}
static int cgx_enadis_rx(void *cgxd, int lmac_id, bool enable)
{
struct cgx *cgx = cgxd;
u64 cfg;
if (!is_lmac_valid(cgx, lmac_id))
return -ENODEV;
cfg = cgx_read(cgx, lmac_id, CGXX_CMRX_CFG);
if (enable)
cfg |= DATA_PKT_RX_EN;
else
cfg &= ~DATA_PKT_RX_EN;
cgx_write(cgx, lmac_id, CGXX_CMRX_CFG, cfg);
return 0;
}
static struct mac_ops cgx_mac_ops = { static struct mac_ops cgx_mac_ops = {
.name = "cgx", .name = "cgx",
.csr_offset = 0, .csr_offset = 0,
@ -1815,6 +1879,8 @@ static struct mac_ops cgx_mac_ops = {
.mac_get_pfc_frm_cfg = cgx_lmac_get_pfc_frm_cfg, .mac_get_pfc_frm_cfg = cgx_lmac_get_pfc_frm_cfg,
.mac_reset = cgx_lmac_reset, .mac_reset = cgx_lmac_reset,
.mac_stats_reset = cgx_stats_reset, .mac_stats_reset = cgx_stats_reset,
.mac_x2p_reset = cgx_x2p_reset,
.mac_enadis_rx = cgx_enadis_rx,
}; };
static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id) static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id)

View File

@ -32,6 +32,10 @@
#define CGX_LMAC_TYPE_MASK 0xF #define CGX_LMAC_TYPE_MASK 0xF
#define CGXX_CMRX_INT 0x040 #define CGXX_CMRX_INT 0x040
#define FW_CGX_INT BIT_ULL(1) #define FW_CGX_INT BIT_ULL(1)
#define CGXX_CMR_GLOBAL_CONFIG 0x08
#define CGX_NIX0_RESET BIT_ULL(2)
#define CGX_NIX1_RESET BIT_ULL(3)
#define CGX_NSCI_DROP BIT_ULL(9)
#define CGXX_CMRX_INT_ENA_W1S 0x058 #define CGXX_CMRX_INT_ENA_W1S 0x058
#define CGXX_CMRX_RX_ID_MAP 0x060 #define CGXX_CMRX_RX_ID_MAP 0x060
#define CGXX_CMRX_RX_STAT0 0x070 #define CGXX_CMRX_RX_STAT0 0x070
@ -185,4 +189,5 @@ int cgx_lmac_get_pfc_frm_cfg(void *cgxd, int lmac_id, u8 *tx_pause,
int verify_lmac_fc_cfg(void *cgxd, int lmac_id, u8 tx_pause, u8 rx_pause, int verify_lmac_fc_cfg(void *cgxd, int lmac_id, u8 tx_pause, u8 rx_pause,
int pfvf_idx); int pfvf_idx);
int cgx_lmac_reset(void *cgxd, int lmac_id, u8 pf_req_flr); int cgx_lmac_reset(void *cgxd, int lmac_id, u8 pf_req_flr);
u32 cgx_get_fifo_len(void *cgxd);
#endif /* CGX_H */ #endif /* CGX_H */

View File

@ -72,7 +72,6 @@ struct mac_ops {
u8 irq_offset; u8 irq_offset;
u8 int_ena_bit; u8 int_ena_bit;
u8 lmac_fwi; u8 lmac_fwi;
u32 fifo_len;
bool non_contiguous_serdes_lane; bool non_contiguous_serdes_lane;
/* RPM & CGX differs in number of Receive/transmit stats */ /* RPM & CGX differs in number of Receive/transmit stats */
u8 rx_stats_cnt; u8 rx_stats_cnt;
@ -133,6 +132,8 @@ struct mac_ops {
int (*get_fec_stats)(void *cgxd, int lmac_id, int (*get_fec_stats)(void *cgxd, int lmac_id,
struct cgx_fec_stats_rsp *rsp); struct cgx_fec_stats_rsp *rsp);
int (*mac_stats_reset)(void *cgxd, int lmac_id); int (*mac_stats_reset)(void *cgxd, int lmac_id);
void (*mac_x2p_reset)(void *cgxd, bool enable);
int (*mac_enadis_rx)(void *cgxd, int lmac_id, bool enable);
}; };
struct cgx { struct cgx {
@ -142,6 +143,10 @@ struct cgx {
u8 lmac_count; u8 lmac_count;
/* number of LMACs per MAC could be 4 or 8 */ /* number of LMACs per MAC could be 4 or 8 */
u8 max_lmac_per_mac; u8 max_lmac_per_mac;
/* length of fifo varies depending on the number
* of LMACS
*/
u32 fifo_len;
#define MAX_LMAC_COUNT 8 #define MAX_LMAC_COUNT 8
struct lmac *lmac_idmap[MAX_LMAC_COUNT]; struct lmac *lmac_idmap[MAX_LMAC_COUNT];
struct work_struct cgx_cmd_work; struct work_struct cgx_cmd_work;

View File

@ -39,6 +39,8 @@ static struct mac_ops rpm_mac_ops = {
.mac_get_pfc_frm_cfg = rpm_lmac_get_pfc_frm_cfg, .mac_get_pfc_frm_cfg = rpm_lmac_get_pfc_frm_cfg,
.mac_reset = rpm_lmac_reset, .mac_reset = rpm_lmac_reset,
.mac_stats_reset = rpm_stats_reset, .mac_stats_reset = rpm_stats_reset,
.mac_x2p_reset = rpm_x2p_reset,
.mac_enadis_rx = rpm_enadis_rx,
}; };
static struct mac_ops rpm2_mac_ops = { static struct mac_ops rpm2_mac_ops = {
@ -72,6 +74,8 @@ static struct mac_ops rpm2_mac_ops = {
.mac_get_pfc_frm_cfg = rpm_lmac_get_pfc_frm_cfg, .mac_get_pfc_frm_cfg = rpm_lmac_get_pfc_frm_cfg,
.mac_reset = rpm_lmac_reset, .mac_reset = rpm_lmac_reset,
.mac_stats_reset = rpm_stats_reset, .mac_stats_reset = rpm_stats_reset,
.mac_x2p_reset = rpm_x2p_reset,
.mac_enadis_rx = rpm_enadis_rx,
}; };
bool is_dev_rpm2(void *rpmd) bool is_dev_rpm2(void *rpmd)
@ -467,7 +471,7 @@ u8 rpm_get_lmac_type(void *rpmd, int lmac_id)
int err; int err;
req = FIELD_SET(CMDREG_ID, CGX_CMD_GET_LINK_STS, req); req = FIELD_SET(CMDREG_ID, CGX_CMD_GET_LINK_STS, req);
err = cgx_fwi_cmd_generic(req, &resp, rpm, 0); err = cgx_fwi_cmd_generic(req, &resp, rpm, lmac_id);
if (!err) if (!err)
return FIELD_GET(RESP_LINKSTAT_LMAC_TYPE, resp); return FIELD_GET(RESP_LINKSTAT_LMAC_TYPE, resp);
return err; return err;
@ -480,7 +484,7 @@ u32 rpm_get_lmac_fifo_len(void *rpmd, int lmac_id)
u8 num_lmacs; u8 num_lmacs;
u32 fifo_len; u32 fifo_len;
fifo_len = rpm->mac_ops->fifo_len; fifo_len = rpm->fifo_len;
num_lmacs = rpm->mac_ops->get_nr_lmacs(rpm); num_lmacs = rpm->mac_ops->get_nr_lmacs(rpm);
switch (num_lmacs) { switch (num_lmacs) {
@ -533,9 +537,9 @@ u32 rpm2_get_lmac_fifo_len(void *rpmd, int lmac_id)
*/ */
max_lmac = (rpm_read(rpm, 0, CGX_CONST) >> 24) & 0xFF; max_lmac = (rpm_read(rpm, 0, CGX_CONST) >> 24) & 0xFF;
if (max_lmac > 4) if (max_lmac > 4)
fifo_len = rpm->mac_ops->fifo_len / 2; fifo_len = rpm->fifo_len / 2;
else else
fifo_len = rpm->mac_ops->fifo_len; fifo_len = rpm->fifo_len;
if (lmac_id < 4) { if (lmac_id < 4) {
num_lmacs = hweight8(lmac_info & 0xF); num_lmacs = hweight8(lmac_info & 0xF);
@ -699,46 +703,51 @@ int rpm_get_fec_stats(void *rpmd, int lmac_id, struct cgx_fec_stats_rsp *rsp)
if (rpm->lmac_idmap[lmac_id]->link_info.fec == OTX2_FEC_NONE) if (rpm->lmac_idmap[lmac_id]->link_info.fec == OTX2_FEC_NONE)
return 0; return 0;
/* latched registers FCFECX_CW_HI/RSFEC_STAT_FAST_DATA_HI_CDC are common
* for all counters. Acquire lock to ensure serialized reads
*/
mutex_lock(&rpm->lock);
if (rpm->lmac_idmap[lmac_id]->link_info.fec == OTX2_FEC_BASER) { if (rpm->lmac_idmap[lmac_id]->link_info.fec == OTX2_FEC_BASER) {
val_lo = rpm_read(rpm, lmac_id, RPMX_MTI_FCFECX_VL0_CCW_LO); val_lo = rpm_read(rpm, 0, RPMX_MTI_FCFECX_VL0_CCW_LO(lmac_id));
val_hi = rpm_read(rpm, lmac_id, RPMX_MTI_FCFECX_CW_HI); val_hi = rpm_read(rpm, 0, RPMX_MTI_FCFECX_CW_HI(lmac_id));
rsp->fec_corr_blks = (val_hi << 16 | val_lo); rsp->fec_corr_blks = (val_hi << 16 | val_lo);
val_lo = rpm_read(rpm, lmac_id, RPMX_MTI_FCFECX_VL0_NCCW_LO); val_lo = rpm_read(rpm, 0, RPMX_MTI_FCFECX_VL0_NCCW_LO(lmac_id));
val_hi = rpm_read(rpm, lmac_id, RPMX_MTI_FCFECX_CW_HI); val_hi = rpm_read(rpm, 0, RPMX_MTI_FCFECX_CW_HI(lmac_id));
rsp->fec_uncorr_blks = (val_hi << 16 | val_lo); rsp->fec_uncorr_blks = (val_hi << 16 | val_lo);
/* 50G uses 2 Physical serdes lines */ /* 50G uses 2 Physical serdes lines */
if (rpm->lmac_idmap[lmac_id]->link_info.lmac_type_id == if (rpm->lmac_idmap[lmac_id]->link_info.lmac_type_id ==
LMAC_MODE_50G_R) { LMAC_MODE_50G_R) {
val_lo = rpm_read(rpm, lmac_id, val_lo = rpm_read(rpm, 0,
RPMX_MTI_FCFECX_VL1_CCW_LO); RPMX_MTI_FCFECX_VL1_CCW_LO(lmac_id));
val_hi = rpm_read(rpm, lmac_id, val_hi = rpm_read(rpm, 0,
RPMX_MTI_FCFECX_CW_HI); RPMX_MTI_FCFECX_CW_HI(lmac_id));
rsp->fec_corr_blks += (val_hi << 16 | val_lo); rsp->fec_corr_blks += (val_hi << 16 | val_lo);
val_lo = rpm_read(rpm, lmac_id, val_lo = rpm_read(rpm, 0,
RPMX_MTI_FCFECX_VL1_NCCW_LO); RPMX_MTI_FCFECX_VL1_NCCW_LO(lmac_id));
val_hi = rpm_read(rpm, lmac_id, val_hi = rpm_read(rpm, 0,
RPMX_MTI_FCFECX_CW_HI); RPMX_MTI_FCFECX_CW_HI(lmac_id));
rsp->fec_uncorr_blks += (val_hi << 16 | val_lo); rsp->fec_uncorr_blks += (val_hi << 16 | val_lo);
} }
} else { } else {
/* enable RS-FEC capture */ /* enable RS-FEC capture */
cfg = rpm_read(rpm, 0, RPMX_MTI_STAT_STATN_CONTROL); cfg = rpm_read(rpm, 0, RPMX_MTI_RSFEC_STAT_STATN_CONTROL);
cfg |= RPMX_RSFEC_RX_CAPTURE | BIT(lmac_id); cfg |= RPMX_RSFEC_RX_CAPTURE | BIT(lmac_id);
rpm_write(rpm, 0, RPMX_MTI_STAT_STATN_CONTROL, cfg); rpm_write(rpm, 0, RPMX_MTI_RSFEC_STAT_STATN_CONTROL, cfg);
val_lo = rpm_read(rpm, 0, val_lo = rpm_read(rpm, 0,
RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_2); RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_2);
val_hi = rpm_read(rpm, 0, RPMX_MTI_STAT_DATA_HI_CDC); val_hi = rpm_read(rpm, 0, RPMX_MTI_RSFEC_STAT_FAST_DATA_HI_CDC);
rsp->fec_corr_blks = (val_hi << 32 | val_lo); rsp->fec_corr_blks = (val_hi << 32 | val_lo);
val_lo = rpm_read(rpm, 0, val_lo = rpm_read(rpm, 0,
RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_3); RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_3);
val_hi = rpm_read(rpm, 0, RPMX_MTI_STAT_DATA_HI_CDC); val_hi = rpm_read(rpm, 0, RPMX_MTI_RSFEC_STAT_FAST_DATA_HI_CDC);
rsp->fec_uncorr_blks = (val_hi << 32 | val_lo); rsp->fec_uncorr_blks = (val_hi << 32 | val_lo);
} }
mutex_unlock(&rpm->lock);
return 0; return 0;
} }
@ -763,3 +772,41 @@ int rpm_lmac_reset(void *rpmd, int lmac_id, u8 pf_req_flr)
return 0; return 0;
} }
void rpm_x2p_reset(void *rpmd, bool enable)
{
rpm_t *rpm = rpmd;
int lmac_id;
u64 cfg;
if (enable) {
for_each_set_bit(lmac_id, &rpm->lmac_bmap, rpm->max_lmac_per_mac)
rpm->mac_ops->mac_enadis_rx(rpm, lmac_id, false);
usleep_range(1000, 2000);
cfg = rpm_read(rpm, 0, RPMX_CMR_GLOBAL_CFG);
rpm_write(rpm, 0, RPMX_CMR_GLOBAL_CFG, cfg | RPM_NIX0_RESET);
} else {
cfg = rpm_read(rpm, 0, RPMX_CMR_GLOBAL_CFG);
cfg &= ~RPM_NIX0_RESET;
rpm_write(rpm, 0, RPMX_CMR_GLOBAL_CFG, cfg);
}
}
int rpm_enadis_rx(void *rpmd, int lmac_id, bool enable)
{
rpm_t *rpm = rpmd;
u64 cfg;
if (!is_lmac_valid(rpm, lmac_id))
return -ENODEV;
cfg = rpm_read(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG);
if (enable)
cfg |= RPM_RX_EN;
else
cfg &= ~RPM_RX_EN;
rpm_write(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG, cfg);
return 0;
}

View File

@ -17,6 +17,8 @@
/* Registers */ /* Registers */
#define RPMX_CMRX_CFG 0x00 #define RPMX_CMRX_CFG 0x00
#define RPMX_CMR_GLOBAL_CFG 0x08
#define RPM_NIX0_RESET BIT_ULL(3)
#define RPMX_RX_TS_PREPEND BIT_ULL(22) #define RPMX_RX_TS_PREPEND BIT_ULL(22)
#define RPMX_TX_PTP_1S_SUPPORT BIT_ULL(17) #define RPMX_TX_PTP_1S_SUPPORT BIT_ULL(17)
#define RPMX_CMRX_RX_ID_MAP 0x80 #define RPMX_CMRX_RX_ID_MAP 0x80
@ -84,16 +86,18 @@
/* FEC stats */ /* FEC stats */
#define RPMX_MTI_STAT_STATN_CONTROL 0x10018 #define RPMX_MTI_STAT_STATN_CONTROL 0x10018
#define RPMX_MTI_STAT_DATA_HI_CDC 0x10038 #define RPMX_MTI_STAT_DATA_HI_CDC 0x10038
#define RPMX_RSFEC_RX_CAPTURE BIT_ULL(27) #define RPMX_RSFEC_RX_CAPTURE BIT_ULL(28)
#define RPMX_CMD_CLEAR_RX BIT_ULL(30) #define RPMX_CMD_CLEAR_RX BIT_ULL(30)
#define RPMX_CMD_CLEAR_TX BIT_ULL(31) #define RPMX_CMD_CLEAR_TX BIT_ULL(31)
#define RPMX_MTI_RSFEC_STAT_STATN_CONTROL 0x40018
#define RPMX_MTI_RSFEC_STAT_FAST_DATA_HI_CDC 0x40000
#define RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_2 0x40050 #define RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_2 0x40050
#define RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_3 0x40058 #define RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_3 0x40058
#define RPMX_MTI_FCFECX_VL0_CCW_LO 0x38618 #define RPMX_MTI_FCFECX_VL0_CCW_LO(a) (0x38618 + ((a) * 0x40))
#define RPMX_MTI_FCFECX_VL0_NCCW_LO 0x38620 #define RPMX_MTI_FCFECX_VL0_NCCW_LO(a) (0x38620 + ((a) * 0x40))
#define RPMX_MTI_FCFECX_VL1_CCW_LO 0x38628 #define RPMX_MTI_FCFECX_VL1_CCW_LO(a) (0x38628 + ((a) * 0x40))
#define RPMX_MTI_FCFECX_VL1_NCCW_LO 0x38630 #define RPMX_MTI_FCFECX_VL1_NCCW_LO(a) (0x38630 + ((a) * 0x40))
#define RPMX_MTI_FCFECX_CW_HI 0x38638 #define RPMX_MTI_FCFECX_CW_HI(a) (0x38638 + ((a) * 0x40))
/* CN10KB CSR Declaration */ /* CN10KB CSR Declaration */
#define RPM2_CMRX_SW_INT 0x1b0 #define RPM2_CMRX_SW_INT 0x1b0
@ -137,4 +141,6 @@ bool is_dev_rpm2(void *rpmd);
int rpm_get_fec_stats(void *cgxd, int lmac_id, struct cgx_fec_stats_rsp *rsp); int rpm_get_fec_stats(void *cgxd, int lmac_id, struct cgx_fec_stats_rsp *rsp);
int rpm_lmac_reset(void *rpmd, int lmac_id, u8 pf_req_flr); int rpm_lmac_reset(void *rpmd, int lmac_id, u8 pf_req_flr);
int rpm_stats_reset(void *rpmd, int lmac_id); int rpm_stats_reset(void *rpmd, int lmac_id);
void rpm_x2p_reset(void *rpmd, bool enable);
int rpm_enadis_rx(void *rpmd, int lmac_id, bool enable);
#endif /* RPM_H */ #endif /* RPM_H */

View File

@@ -1162,6 +1162,7 @@ static int rvu_setup_hw_resources(struct rvu *rvu)
 }
 rvu_program_channels(rvu);
+cgx_start_linkup(rvu);
 err = rvu_mcs_init(rvu);
 if (err) {

View File

@@ -1025,6 +1025,7 @@ int rvu_cgx_prio_flow_ctrl_cfg(struct rvu *rvu, u16 pcifunc, u8 tx_pause, u8 rx_
 int rvu_cgx_cfg_pause_frm(struct rvu *rvu, u16 pcifunc, u8 tx_pause, u8 rx_pause);
 void rvu_mac_reset(struct rvu *rvu, u16 pcifunc);
 u32 rvu_cgx_get_lmac_fifolen(struct rvu *rvu, int cgx, int lmac);
+void cgx_start_linkup(struct rvu *rvu);
 int npc_get_nixlf_mcam_index(struct npc_mcam *mcam, u16 pcifunc, int nixlf,
 int type);
 bool is_mcam_entry_enabled(struct rvu *rvu, struct npc_mcam *mcam, int blkaddr,

View File

@ -349,6 +349,7 @@ static void rvu_cgx_wq_destroy(struct rvu *rvu)
int rvu_cgx_init(struct rvu *rvu) int rvu_cgx_init(struct rvu *rvu)
{ {
struct mac_ops *mac_ops;
int cgx, err; int cgx, err;
void *cgxd; void *cgxd;
@ -375,6 +376,15 @@ int rvu_cgx_init(struct rvu *rvu)
if (err) if (err)
return err; return err;
/* Clear X2P reset on all MAC blocks */
for (cgx = 0; cgx < rvu->cgx_cnt_max; cgx++) {
cgxd = rvu_cgx_pdata(cgx, rvu);
if (!cgxd)
continue;
mac_ops = get_mac_ops(cgxd);
mac_ops->mac_x2p_reset(cgxd, false);
}
/* Register for CGX events */ /* Register for CGX events */
err = cgx_lmac_event_handler_init(rvu); err = cgx_lmac_event_handler_init(rvu);
if (err) if (err)
@ -382,10 +392,26 @@ int rvu_cgx_init(struct rvu *rvu)
mutex_init(&rvu->cgx_cfg_lock); mutex_init(&rvu->cgx_cfg_lock);
/* Ensure event handler registration is completed, before return 0;
* we turn on the links }
*/
mb(); void cgx_start_linkup(struct rvu *rvu)
{
unsigned long lmac_bmap;
struct mac_ops *mac_ops;
int cgx, lmac, err;
void *cgxd;
/* Enable receive on all LMACS */
for (cgx = 0; cgx <= rvu->cgx_cnt_max; cgx++) {
cgxd = rvu_cgx_pdata(cgx, rvu);
if (!cgxd)
continue;
mac_ops = get_mac_ops(cgxd);
lmac_bmap = cgx_get_lmac_bmap(cgxd);
for_each_set_bit(lmac, &lmac_bmap, rvu->hw->lmac_per_cgx)
mac_ops->mac_enadis_rx(cgxd, lmac, true);
}
/* Do link up for all CGX ports */ /* Do link up for all CGX ports */
for (cgx = 0; cgx <= rvu->cgx_cnt_max; cgx++) { for (cgx = 0; cgx <= rvu->cgx_cnt_max; cgx++) {
@ -398,8 +424,6 @@ int rvu_cgx_init(struct rvu *rvu)
"Link up process failed to start on cgx %d\n", "Link up process failed to start on cgx %d\n",
cgx); cgx);
} }
return 0;
} }
int rvu_cgx_exit(struct rvu *rvu) int rvu_cgx_exit(struct rvu *rvu)
@ -923,13 +947,12 @@ int rvu_mbox_handler_cgx_features_get(struct rvu *rvu,
u32 rvu_cgx_get_fifolen(struct rvu *rvu) u32 rvu_cgx_get_fifolen(struct rvu *rvu)
{ {
struct mac_ops *mac_ops; void *cgxd = rvu_first_cgx_pdata(rvu);
u32 fifo_len;
mac_ops = get_mac_ops(rvu_first_cgx_pdata(rvu)); if (!cgxd)
fifo_len = mac_ops ? mac_ops->fifo_len : 0; return 0;
return fifo_len; return cgx_get_fifo_len(cgxd);
} }
u32 rvu_cgx_get_lmac_fifolen(struct rvu *rvu, int cgx, int lmac) u32 rvu_cgx_get_lmac_fifolen(struct rvu *rvu, int cgx, int lmac)

View File

@ -1394,18 +1394,15 @@ static int pxa168_eth_probe(struct platform_device *pdev)
printk(KERN_NOTICE "PXA168 10/100 Ethernet Driver\n"); printk(KERN_NOTICE "PXA168 10/100 Ethernet Driver\n");
clk = devm_clk_get(&pdev->dev, NULL); clk = devm_clk_get_enabled(&pdev->dev, NULL);
if (IS_ERR(clk)) { if (IS_ERR(clk)) {
dev_err(&pdev->dev, "Fast Ethernet failed to get clock\n"); dev_err(&pdev->dev, "Fast Ethernet failed to get and enable clock\n");
return -ENODEV; return -ENODEV;
} }
clk_prepare_enable(clk);
dev = alloc_etherdev(sizeof(struct pxa168_eth_private)); dev = alloc_etherdev(sizeof(struct pxa168_eth_private));
if (!dev) { if (!dev)
err = -ENOMEM; return -ENOMEM;
goto err_clk;
}
platform_set_drvdata(pdev, dev); platform_set_drvdata(pdev, dev);
pep = netdev_priv(dev); pep = netdev_priv(dev);
@ -1523,8 +1520,6 @@ static int pxa168_eth_probe(struct platform_device *pdev)
mdiobus_free(pep->smi_bus); mdiobus_free(pep->smi_bus);
err_netdev: err_netdev:
free_netdev(dev); free_netdev(dev);
err_clk:
clk_disable_unprepare(clk);
return err; return err;
} }
@ -1542,7 +1537,6 @@ static void pxa168_eth_remove(struct platform_device *pdev)
if (dev->phydev) if (dev->phydev)
phy_disconnect(dev->phydev); phy_disconnect(dev->phydev);
clk_disable_unprepare(pep->clk);
mdiobus_unregister(pep->smi_bus); mdiobus_unregister(pep->smi_bus);
mdiobus_free(pep->smi_bus); mdiobus_free(pep->smi_bus);
unregister_netdev(dev); unregister_netdev(dev);

View File

@ -366,12 +366,13 @@ static void vcap_api_iterator_init_test(struct kunit *test)
struct vcap_typegroup typegroups[] = { struct vcap_typegroup typegroups[] = {
{ .offset = 0, .width = 2, .value = 2, }, { .offset = 0, .width = 2, .value = 2, },
{ .offset = 156, .width = 1, .value = 0, }, { .offset = 156, .width = 1, .value = 0, },
{ .offset = 0, .width = 0, .value = 0, }, { }
}; };
struct vcap_typegroup typegroups2[] = { struct vcap_typegroup typegroups2[] = {
{ .offset = 0, .width = 3, .value = 4, }, { .offset = 0, .width = 3, .value = 4, },
{ .offset = 49, .width = 2, .value = 0, }, { .offset = 49, .width = 2, .value = 0, },
{ .offset = 98, .width = 2, .value = 0, }, { .offset = 98, .width = 2, .value = 0, },
{ }
}; };
vcap_iter_init(&iter, 52, typegroups, 86); vcap_iter_init(&iter, 52, typegroups, 86);
@ -399,6 +400,7 @@ static void vcap_api_iterator_next_test(struct kunit *test)
{ .offset = 147, .width = 3, .value = 0, }, { .offset = 147, .width = 3, .value = 0, },
{ .offset = 196, .width = 2, .value = 0, }, { .offset = 196, .width = 2, .value = 0, },
{ .offset = 245, .width = 1, .value = 0, }, { .offset = 245, .width = 1, .value = 0, },
{ }
}; };
int idx; int idx;
@ -433,7 +435,7 @@ static void vcap_api_encode_typegroups_test(struct kunit *test)
{ .offset = 147, .width = 3, .value = 5, }, { .offset = 147, .width = 3, .value = 5, },
{ .offset = 196, .width = 2, .value = 2, }, { .offset = 196, .width = 2, .value = 2, },
{ .offset = 245, .width = 5, .value = 27, }, { .offset = 245, .width = 5, .value = 27, },
{ .offset = 0, .width = 0, .value = 0, }, { }
}; };
vcap_encode_typegroups(stream, 49, typegroups, false); vcap_encode_typegroups(stream, 49, typegroups, false);
@ -463,6 +465,7 @@ static void vcap_api_encode_bit_test(struct kunit *test)
{ .offset = 147, .width = 3, .value = 5, }, { .offset = 147, .width = 3, .value = 5, },
{ .offset = 196, .width = 2, .value = 2, }, { .offset = 196, .width = 2, .value = 2, },
{ .offset = 245, .width = 1, .value = 0, }, { .offset = 245, .width = 1, .value = 0, },
{ }
}; };
vcap_iter_init(&iter, 49, typegroups, 44); vcap_iter_init(&iter, 49, typegroups, 44);
@ -489,7 +492,7 @@ static void vcap_api_encode_field_test(struct kunit *test)
{ .offset = 147, .width = 3, .value = 5, }, { .offset = 147, .width = 3, .value = 5, },
{ .offset = 196, .width = 2, .value = 2, }, { .offset = 196, .width = 2, .value = 2, },
{ .offset = 245, .width = 5, .value = 27, }, { .offset = 245, .width = 5, .value = 27, },
{ .offset = 0, .width = 0, .value = 0, }, { }
}; };
struct vcap_field rf = { struct vcap_field rf = {
.type = VCAP_FIELD_U32, .type = VCAP_FIELD_U32,
@ -538,7 +541,7 @@ static void vcap_api_encode_short_field_test(struct kunit *test)
{ .offset = 0, .width = 3, .value = 7, }, { .offset = 0, .width = 3, .value = 7, },
{ .offset = 21, .width = 2, .value = 3, }, { .offset = 21, .width = 2, .value = 3, },
{ .offset = 42, .width = 1, .value = 1, }, { .offset = 42, .width = 1, .value = 1, },
{ .offset = 0, .width = 0, .value = 0, }, { }
}; };
struct vcap_field rf = { struct vcap_field rf = {
.type = VCAP_FIELD_U32, .type = VCAP_FIELD_U32,
@ -608,7 +611,7 @@ static void vcap_api_encode_keyfield_test(struct kunit *test)
struct vcap_typegroup tgt[] = { struct vcap_typegroup tgt[] = {
{ .offset = 0, .width = 2, .value = 2, }, { .offset = 0, .width = 2, .value = 2, },
{ .offset = 156, .width = 1, .value = 1, }, { .offset = 156, .width = 1, .value = 1, },
{ .offset = 0, .width = 0, .value = 0, }, { }
}; };
vcap_test_api_init(&admin); vcap_test_api_init(&admin);
@ -671,7 +674,7 @@ static void vcap_api_encode_max_keyfield_test(struct kunit *test)
struct vcap_typegroup tgt[] = { struct vcap_typegroup tgt[] = {
{ .offset = 0, .width = 2, .value = 2, }, { .offset = 0, .width = 2, .value = 2, },
{ .offset = 156, .width = 1, .value = 1, }, { .offset = 156, .width = 1, .value = 1, },
{ .offset = 0, .width = 0, .value = 0, }, { }
}; };
u32 keyres[] = { u32 keyres[] = {
0x928e8a84, 0x928e8a84,
@ -732,7 +735,7 @@ static void vcap_api_encode_actionfield_test(struct kunit *test)
{ .offset = 0, .width = 2, .value = 2, }, { .offset = 0, .width = 2, .value = 2, },
{ .offset = 21, .width = 1, .value = 1, }, { .offset = 21, .width = 1, .value = 1, },
{ .offset = 42, .width = 1, .value = 0, }, { .offset = 42, .width = 1, .value = 0, },
{ .offset = 0, .width = 0, .value = 0, }, { }
}; };
vcap_encode_actionfield(&rule, &caf, &rf, tgt); vcap_encode_actionfield(&rule, &caf, &rf, tgt);

View File

@@ -9,7 +9,10 @@
 #ifndef RTASE_H
 #define RTASE_H
 #define RTASE_HW_VER_MASK 0x7C800000
+#define RTASE_HW_VER_906X_7XA 0x00800000
+#define RTASE_HW_VER_906X_7XC 0x04000000
+#define RTASE_HW_VER_907XD_V1 0x04800000
 #define RTASE_RX_DMA_BURST_256 4
 #define RTASE_TX_DMA_BURST_UNLIMITED 7
@@ -327,6 +330,8 @@ struct rtase_private {
 u16 int_nums;
 u16 tx_int_mit;
 u16 rx_int_mit;
+u32 hw_ver;
 };
 #define RTASE_LSO_64K 64000

View File

@ -1714,10 +1714,21 @@ static int rtase_get_settings(struct net_device *dev,
struct ethtool_link_ksettings *cmd) struct ethtool_link_ksettings *cmd)
{ {
u32 supported = SUPPORTED_MII | SUPPORTED_Pause | SUPPORTED_Asym_Pause; u32 supported = SUPPORTED_MII | SUPPORTED_Pause | SUPPORTED_Asym_Pause;
const struct rtase_private *tp = netdev_priv(dev);
ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.supported, ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.supported,
supported); supported);
cmd->base.speed = SPEED_5000;
switch (tp->hw_ver) {
case RTASE_HW_VER_906X_7XA:
case RTASE_HW_VER_906X_7XC:
cmd->base.speed = SPEED_5000;
break;
case RTASE_HW_VER_907XD_V1:
cmd->base.speed = SPEED_10000;
break;
}
cmd->base.duplex = DUPLEX_FULL; cmd->base.duplex = DUPLEX_FULL;
cmd->base.port = PORT_MII; cmd->base.port = PORT_MII;
cmd->base.autoneg = AUTONEG_DISABLE; cmd->base.autoneg = AUTONEG_DISABLE;
@ -1972,20 +1983,21 @@ static void rtase_init_software_variable(struct pci_dev *pdev,
tp->dev->max_mtu = RTASE_MAX_JUMBO_SIZE; tp->dev->max_mtu = RTASE_MAX_JUMBO_SIZE;
} }
static bool rtase_check_mac_version_valid(struct rtase_private *tp) static int rtase_check_mac_version_valid(struct rtase_private *tp)
{ {
u32 hw_ver = rtase_r32(tp, RTASE_TX_CONFIG_0) & RTASE_HW_VER_MASK; int ret = -ENODEV;
bool known_ver = false;
switch (hw_ver) { tp->hw_ver = rtase_r32(tp, RTASE_TX_CONFIG_0) & RTASE_HW_VER_MASK;
case 0x00800000:
case 0x04000000: switch (tp->hw_ver) {
case 0x04800000: case RTASE_HW_VER_906X_7XA:
known_ver = true; case RTASE_HW_VER_906X_7XC:
case RTASE_HW_VER_907XD_V1:
ret = 0;
break; break;
} }
return known_ver; return ret;
} }
static int rtase_init_board(struct pci_dev *pdev, struct net_device **dev_out, static int rtase_init_board(struct pci_dev *pdev, struct net_device **dev_out,
@ -2105,9 +2117,13 @@ static int rtase_init_one(struct pci_dev *pdev,
tp->pdev = pdev; tp->pdev = pdev;
/* identify chip attached to board */ /* identify chip attached to board */
if (!rtase_check_mac_version_valid(tp)) ret = rtase_check_mac_version_valid(tp);
return dev_err_probe(&pdev->dev, -ENODEV, if (ret != 0) {
"unknown chip version, contact rtase maintainers (see MAINTAINERS file)\n"); dev_err(&pdev->dev,
"unknown chip version: 0x%08x, contact rtase maintainers (see MAINTAINERS file)\n",
tp->hw_ver);
goto err_out_release_board;
}
rtase_init_software_variable(pdev, tp); rtase_init_software_variable(pdev, tp);
rtase_init_hardware(tp); rtase_init_hardware(tp);
@ -2181,6 +2197,7 @@ static int rtase_init_one(struct pci_dev *pdev,
netif_napi_del(&ivec->napi); netif_napi_del(&ivec->napi);
} }
err_out_release_board:
rtase_release_board(pdev, dev, ioaddr); rtase_release_board(pdev, dev, ioaddr);
return ret; return ret;

View File

@@ -487,6 +487,8 @@ static int socfpga_dwmac_probe(struct platform_device *pdev)
 plat_dat->select_pcs = socfpga_dwmac_select_pcs;
 plat_dat->has_gmac = true;
+plat_dat->riwt_off = 1;
 ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
 if (ret)
 return ret;

View File

@@ -1177,6 +1177,9 @@ static int stmmac_init_phy(struct net_device *dev)
 return -ENODEV;
 }
+if (priv->dma_cap.eee)
+phy_support_eee(phydev);
 ret = phylink_connect_phy(priv->phylink, phydev);
 } else {
 fwnode_handle_put(phy_fwnode);

View File

@@ -352,8 +352,11 @@ static int ipq4019_mdio_probe(struct platform_device *pdev)
 /* The platform resource is provided on the chipset IPQ5018 */
 /* This resource is optional */
 res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
-if (res)
-priv->eth_ldo_rdy = devm_ioremap_resource(&pdev->dev, res);
+if (res) {
+priv->eth_ldo_rdy = devm_ioremap_resource(&pdev->dev, res);
+if (IS_ERR(priv->eth_ldo_rdy))
+return PTR_ERR(priv->eth_ldo_rdy);
+}
 bus->name = "ipq4019_mdio";
 bus->read = ipq4019_mdio_read_c22;

View File

@@ -1530,7 +1530,7 @@ int genphy_c45_ethtool_get_eee(struct phy_device *phydev,
 return ret;
 data->eee_enabled = is_enabled;
-data->eee_active = ret;
+data->eee_active = phydev->eee_active;
 linkmode_copy(data->supported, phydev->supported_eee);
 return 0;

View File

@ -990,14 +990,14 @@ static int phy_check_link_status(struct phy_device *phydev)
phydev->state = PHY_RUNNING; phydev->state = PHY_RUNNING;
err = genphy_c45_eee_is_active(phydev, err = genphy_c45_eee_is_active(phydev,
NULL, NULL, NULL); NULL, NULL, NULL);
if (err <= 0) phydev->eee_active = err > 0;
phydev->enable_tx_lpi = false; phydev->enable_tx_lpi = phydev->eee_cfg.tx_lpi_enabled &&
else phydev->eee_active;
phydev->enable_tx_lpi = phydev->eee_cfg.tx_lpi_enabled;
phy_link_up(phydev); phy_link_up(phydev);
} else if (!phydev->link && phydev->state != PHY_NOLINK) { } else if (!phydev->link && phydev->state != PHY_NOLINK) {
phydev->state = PHY_NOLINK; phydev->state = PHY_NOLINK;
phydev->eee_active = false;
phydev->enable_tx_lpi = false; phydev->enable_tx_lpi = false;
phy_link_down(phydev); phy_link_down(phydev);
} }
@ -1672,7 +1672,7 @@ EXPORT_SYMBOL(phy_ethtool_get_eee);
* phy_ethtool_set_eee_noneg - Adjusts MAC LPI configuration without PHY * phy_ethtool_set_eee_noneg - Adjusts MAC LPI configuration without PHY
* renegotiation * renegotiation
* @phydev: pointer to the target PHY device structure * @phydev: pointer to the target PHY device structure
* @data: pointer to the ethtool_keee structure containing the new EEE settings * @old_cfg: pointer to the eee_config structure containing the old EEE settings
* *
* This function updates the Energy Efficient Ethernet (EEE) configuration * This function updates the Energy Efficient Ethernet (EEE) configuration
* for cases where only the MAC's Low Power Idle (LPI) configuration changes, * for cases where only the MAC's Low Power Idle (LPI) configuration changes,
@ -1683,18 +1683,23 @@ EXPORT_SYMBOL(phy_ethtool_get_eee);
* configuration. * configuration.
*/ */
static void phy_ethtool_set_eee_noneg(struct phy_device *phydev, static void phy_ethtool_set_eee_noneg(struct phy_device *phydev,
struct ethtool_keee *data) const struct eee_config *old_cfg)
{ {
if (phydev->eee_cfg.tx_lpi_enabled != data->tx_lpi_enabled || bool enable_tx_lpi;
phydev->eee_cfg.tx_lpi_timer != data->tx_lpi_timer) {
eee_to_eeecfg(&phydev->eee_cfg, data); if (!phydev->link)
phydev->enable_tx_lpi = eeecfg_mac_can_tx_lpi(&phydev->eee_cfg); return;
if (phydev->link) {
phydev->link = false; enable_tx_lpi = phydev->eee_cfg.tx_lpi_enabled && phydev->eee_active;
phy_link_down(phydev);
phydev->link = true; if (phydev->enable_tx_lpi != enable_tx_lpi ||
phy_link_up(phydev); phydev->eee_cfg.tx_lpi_timer != old_cfg->tx_lpi_timer) {
} phydev->enable_tx_lpi = false;
phydev->link = false;
phy_link_down(phydev);
phydev->enable_tx_lpi = enable_tx_lpi;
phydev->link = true;
phy_link_up(phydev);
} }
} }
@ -1707,18 +1712,23 @@ static void phy_ethtool_set_eee_noneg(struct phy_device *phydev,
*/ */
int phy_ethtool_set_eee(struct phy_device *phydev, struct ethtool_keee *data) int phy_ethtool_set_eee(struct phy_device *phydev, struct ethtool_keee *data)
{ {
struct eee_config old_cfg;
int ret; int ret;
if (!phydev->drv) if (!phydev->drv)
return -EIO; return -EIO;
mutex_lock(&phydev->lock); mutex_lock(&phydev->lock);
old_cfg = phydev->eee_cfg;
eee_to_eeecfg(&phydev->eee_cfg, data);
ret = genphy_c45_ethtool_set_eee(phydev, data); ret = genphy_c45_ethtool_set_eee(phydev, data);
if (ret >= 0) { if (ret == 0)
if (ret == 0) phy_ethtool_set_eee_noneg(phydev, &old_cfg);
phy_ethtool_set_eee_noneg(phydev, data); else if (ret < 0)
eee_to_eeecfg(&phydev->eee_cfg, data); phydev->eee_cfg = old_cfg;
}
mutex_unlock(&phydev->lock); mutex_unlock(&phydev->lock);
return ret < 0 ? ret : 0; return ret < 0 ? ret : 0;

View File

@ -1652,13 +1652,13 @@ static int lan78xx_set_wol(struct net_device *netdev,
struct lan78xx_priv *pdata = (struct lan78xx_priv *)(dev->data[0]); struct lan78xx_priv *pdata = (struct lan78xx_priv *)(dev->data[0]);
int ret; int ret;
if (wol->wolopts & ~WAKE_ALL)
return -EINVAL;
ret = usb_autopm_get_interface(dev->intf); ret = usb_autopm_get_interface(dev->intf);
if (ret < 0) if (ret < 0)
return ret; return ret;
if (wol->wolopts & ~WAKE_ALL)
return -EINVAL;
pdata->wol = wol->wolopts; pdata->wol = wol->wolopts;
device_set_wakeup_enable(&dev->udev->dev, (bool)wol->wolopts); device_set_wakeup_enable(&dev->udev->dev, (bool)wol->wolopts);
@ -2380,6 +2380,7 @@ static int lan78xx_phy_init(struct lan78xx_net *dev)
if (dev->chipid == ID_REV_CHIP_ID_7801_) { if (dev->chipid == ID_REV_CHIP_ID_7801_) {
if (phy_is_pseudo_fixed_link(phydev)) { if (phy_is_pseudo_fixed_link(phydev)) {
fixed_phy_unregister(phydev); fixed_phy_unregister(phydev);
phy_device_free(phydev);
} else { } else {
phy_unregister_fixup_for_uid(PHY_KSZ9031RNX, phy_unregister_fixup_for_uid(PHY_KSZ9031RNX,
0xfffffff0); 0xfffffff0);
@ -4246,8 +4247,10 @@ static void lan78xx_disconnect(struct usb_interface *intf)
phy_disconnect(net->phydev); phy_disconnect(net->phydev);
if (phy_is_pseudo_fixed_link(phydev)) if (phy_is_pseudo_fixed_link(phydev)) {
fixed_phy_unregister(phydev); fixed_phy_unregister(phydev);
phy_device_free(phydev);
}
usb_scuttle_anchored_urbs(&dev->deferred); usb_scuttle_anchored_urbs(&dev->deferred);
@ -4414,29 +4417,30 @@ static int lan78xx_probe(struct usb_interface *intf,
period = ep_intr->desc.bInterval; period = ep_intr->desc.bInterval;
maxp = usb_maxpacket(dev->udev, dev->pipe_intr); maxp = usb_maxpacket(dev->udev, dev->pipe_intr);
buf = kmalloc(maxp, GFP_KERNEL);
if (!buf) {
ret = -ENOMEM;
goto out5;
}
dev->urb_intr = usb_alloc_urb(0, GFP_KERNEL); dev->urb_intr = usb_alloc_urb(0, GFP_KERNEL);
if (!dev->urb_intr) { if (!dev->urb_intr) {
ret = -ENOMEM; ret = -ENOMEM;
goto out6; goto out5;
} else {
usb_fill_int_urb(dev->urb_intr, dev->udev,
dev->pipe_intr, buf, maxp,
intr_complete, dev, period);
dev->urb_intr->transfer_flags |= URB_FREE_BUFFER;
} }
buf = kmalloc(maxp, GFP_KERNEL);
if (!buf) {
ret = -ENOMEM;
goto free_urbs;
}
usb_fill_int_urb(dev->urb_intr, dev->udev,
dev->pipe_intr, buf, maxp,
intr_complete, dev, period);
dev->urb_intr->transfer_flags |= URB_FREE_BUFFER;
dev->maxpacket = usb_maxpacket(dev->udev, dev->pipe_out); dev->maxpacket = usb_maxpacket(dev->udev, dev->pipe_out);
/* Reject broken descriptors. */ /* Reject broken descriptors. */
if (dev->maxpacket == 0) { if (dev->maxpacket == 0) {
ret = -ENODEV; ret = -ENODEV;
goto out6; goto free_urbs;
} }
/* driver requires remote-wakeup capability during autosuspend. */ /* driver requires remote-wakeup capability during autosuspend. */
@ -4444,7 +4448,7 @@ static int lan78xx_probe(struct usb_interface *intf,
ret = lan78xx_phy_init(dev); ret = lan78xx_phy_init(dev);
if (ret < 0) if (ret < 0)
goto out7; goto free_urbs;
ret = register_netdev(netdev); ret = register_netdev(netdev);
if (ret != 0) { if (ret != 0) {
@ -4466,10 +4470,8 @@ static int lan78xx_probe(struct usb_interface *intf,
out8: out8:
phy_disconnect(netdev->phydev); phy_disconnect(netdev->phydev);
out7: free_urbs:
usb_free_urb(dev->urb_intr); usb_free_urb(dev->urb_intr);
out6:
kfree(buf);
out5: out5:
lan78xx_unbind(dev, intf); lan78xx_unbind(dev, intf);
out4: out4:

View File

@@ -602,6 +602,7 @@ struct macsec_ops;
 * @supported_eee: supported PHY EEE linkmodes
 * @advertising_eee: Currently advertised EEE linkmodes
 * @enable_tx_lpi: When True, MAC should transmit LPI to PHY
+* @eee_active: phylib private state, indicating that EEE has been negotiated
 * @eee_cfg: User configuration of EEE
 * @lp_advertising: Current link partner advertised linkmodes
 * @host_interfaces: PHY interface modes supported by host
@@ -723,6 +724,7 @@ struct phy_device {
 /* Energy efficient ethernet modes which should be prohibited */
 __ETHTOOL_DECLARE_LINK_MODE_MASK(eee_broken_modes);
 bool enable_tx_lpi;
+bool eee_active;
 struct eee_config eee_cfg;
 /* Host supported PHY interface types. Should be ignored if empty. */

View File

@@ -53,6 +53,8 @@ static inline int copy_from_sockptr_offset(void *dst, sockptr_t src,
 /* Deprecated.
 * This is unsafe, unless caller checked user provided optlen.
 * Prefer copy_safe_from_sockptr() instead.
+*
+* Returns 0 for success, or number of bytes not copied on error.
 */
 static inline int copy_from_sockptr(void *dst, sockptr_t src, size_t size)
 {

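The comment added above spells out why raw copy_from_sockptr() keeps tripping callers up: it does not validate optlen, and it returns the number of bytes not copied rather than an errno. A minimal sketch of the pattern the rxrpc and llc setsockopt fixes in this pull move towards, using the recommended copy_safe_from_sockptr() helper (the handler name and option are illustrative, and the return-value mapping is an assumption, not lifted from those patches):

    /* Hypothetical setsockopt() handler fragment. copy_safe_from_sockptr()
     * rejects a too-short optlen itself, so no manual length check is needed.
     */
    static int my_setsockopt(struct sock *sk, sockptr_t optval,
    			 unsigned int optlen)
    {
    	int val;
    	int err;

    	err = copy_safe_from_sockptr(&val, sizeof(val), optval, optlen);
    	if (err)
    		/* -EINVAL for a short optlen; map a partial copy to -EFAULT */
    		return err < 0 ? err : -EFAULT;

    	/* ... apply 'val' under lock_sock(sk) ... */
    	return 0;
    }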
View File

@@ -1318,7 +1318,8 @@ static void mgmt_set_powered_complete(struct hci_dev *hdev, void *data, int err)
 	struct mgmt_mode *cp;
 	/* Make sure cmd still outstanding. */
-	if (cmd != pending_find(MGMT_OP_SET_POWERED, hdev))
+	if (err == -ECANCELED ||
+	    cmd != pending_find(MGMT_OP_SET_POWERED, hdev))
 		return;
 	cp = cmd->param;
@@ -1351,7 +1352,13 @@ static void mgmt_set_powered_complete(struct hci_dev *hdev, void *data, int err)
 static int set_powered_sync(struct hci_dev *hdev, void *data)
 {
 	struct mgmt_pending_cmd *cmd = data;
-	struct mgmt_mode *cp = cmd->param;
+	struct mgmt_mode *cp;
+	/* Make sure cmd still outstanding. */
+	if (cmd != pending_find(MGMT_OP_SET_POWERED, hdev))
+		return -ECANCELED;
+	cp = cmd->param;
 	BT_DBG("%s", hdev->name);
@@ -1511,7 +1518,8 @@ static void mgmt_set_discoverable_complete(struct hci_dev *hdev, void *data,
 	bt_dev_dbg(hdev, "err %d", err);
 	/* Make sure cmd still outstanding. */
-	if (cmd != pending_find(MGMT_OP_SET_DISCOVERABLE, hdev))
+	if (err == -ECANCELED ||
+	    cmd != pending_find(MGMT_OP_SET_DISCOVERABLE, hdev))
 		return;
 	hci_dev_lock(hdev);
@@ -1685,7 +1693,8 @@ static void mgmt_set_connectable_complete(struct hci_dev *hdev, void *data,
 	bt_dev_dbg(hdev, "err %d", err);
 	/* Make sure cmd still outstanding. */
-	if (cmd != pending_find(MGMT_OP_SET_CONNECTABLE, hdev))
+	if (err == -ECANCELED ||
+	    cmd != pending_find(MGMT_OP_SET_CONNECTABLE, hdev))
 		return;
 	hci_dev_lock(hdev);
@@ -1917,7 +1926,7 @@ static void set_ssp_complete(struct hci_dev *hdev, void *data, int err)
 	bool changed;
 	/* Make sure cmd still outstanding. */
-	if (cmd != pending_find(MGMT_OP_SET_SSP, hdev))
+	if (err == -ECANCELED || cmd != pending_find(MGMT_OP_SET_SSP, hdev))
 		return;
 	if (err) {
@@ -3841,7 +3850,8 @@ static void set_name_complete(struct hci_dev *hdev, void *data, int err)
 	bt_dev_dbg(hdev, "err %d", err);
-	if (cmd != pending_find(MGMT_OP_SET_LOCAL_NAME, hdev))
+	if (err == -ECANCELED ||
+	    cmd != pending_find(MGMT_OP_SET_LOCAL_NAME, hdev))
 		return;
 	if (status) {
@@ -4016,7 +4026,8 @@ static void set_default_phy_complete(struct hci_dev *hdev, void *data, int err)
 	struct sk_buff *skb = cmd->skb;
 	u8 status = mgmt_status(err);
-	if (cmd != pending_find(MGMT_OP_SET_PHY_CONFIGURATION, hdev))
+	if (err == -ECANCELED ||
+	    cmd != pending_find(MGMT_OP_SET_PHY_CONFIGURATION, hdev))
 		return;
 	if (!status) {
@@ -5907,13 +5918,16 @@ static void start_discovery_complete(struct hci_dev *hdev, void *data, int err)
 {
 	struct mgmt_pending_cmd *cmd = data;
+	bt_dev_dbg(hdev, "err %d", err);
+	if (err == -ECANCELED)
+		return;
 	if (cmd != pending_find(MGMT_OP_START_DISCOVERY, hdev) &&
 	    cmd != pending_find(MGMT_OP_START_LIMITED_DISCOVERY, hdev) &&
 	    cmd != pending_find(MGMT_OP_START_SERVICE_DISCOVERY, hdev))
 		return;
-	bt_dev_dbg(hdev, "err %d", err);
 	mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, mgmt_status(err),
 			  cmd->param, 1);
 	mgmt_pending_remove(cmd);
@@ -6146,7 +6160,8 @@ static void stop_discovery_complete(struct hci_dev *hdev, void *data, int err)
 {
 	struct mgmt_pending_cmd *cmd = data;
-	if (cmd != pending_find(MGMT_OP_STOP_DISCOVERY, hdev))
+	if (err == -ECANCELED ||
+	    cmd != pending_find(MGMT_OP_STOP_DISCOVERY, hdev))
 		return;
 	bt_dev_dbg(hdev, "err %d", err);
@@ -8137,7 +8152,8 @@ static void read_local_oob_ext_data_complete(struct hci_dev *hdev, void *data,
 	u8 status = mgmt_status(err);
 	u16 eir_len;
-	if (cmd != pending_find(MGMT_OP_READ_LOCAL_OOB_EXT_DATA, hdev))
+	if (err == -ECANCELED ||
+	    cmd != pending_find(MGMT_OP_READ_LOCAL_OOB_EXT_DATA, hdev))
 		return;
 	if (!status) {
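All of these hunks add the same guard: if the request was canceled (err == -ECANCELED), the pending command may already have been torn down, so the completion callback must bail out before dereferencing it. The sketch below is a self-contained user-space illustration of that ordering, with hypothetical names and a global pointer standing in for pending_find(); it is not the mgmt code itself.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

struct pending_cmd {
	int opcode;
	int param;
};

/* Stand-in for pending_find(): the command currently registered, or
 * NULL once it has been canceled.
 */
static struct pending_cmd *registered;

static void complete_cb(struct pending_cmd *cmd, int err)
{
	/* Bail out first: on cancellation the command is (in the real
	 * code) already freed, so touching cmd here would be a
	 * use-after-free.
	 */
	if (err == -ECANCELED || cmd != registered)
		return;

	printf("completed opcode %d, param %d\n", cmd->opcode, cmd->param);
}

int main(void)
{
	struct pending_cmd *cmd = malloc(sizeof(*cmd));

	cmd->opcode = 1;
	cmd->param = 42;
	registered = cmd;
	complete_cb(cmd, 0);		/* normal completion: cmd is valid */

	registered = NULL;		/* cancellation unregisters (and, in the kernel, frees) it */
	complete_cb(cmd, -ECANCELED);	/* guard returns before cmd is used */
	free(cmd);
	return 0;
}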

@@ -143,6 +143,7 @@ static void sco_sock_timeout(struct work_struct *work)
 	sco_conn_lock(conn);
 	if (!conn->hcon) {
 		sco_conn_unlock(conn);
+		sco_conn_put(conn);
 		return;
 	}
 	sk = sco_sock_hold(conn);
@@ -192,7 +193,6 @@ static struct sco_conn *sco_conn_add(struct hci_conn *hcon)
 			conn->hcon = hcon;
 			sco_conn_unlock(conn);
 		}
-		sco_conn_put(conn);
 		return conn;
 	}

@@ -2442,7 +2442,9 @@ static int rtnl_dump_ifinfo(struct sk_buff *skb, struct netlink_callback *cb)
 			tgt_net = rtnl_get_net_ns_capable(skb->sk, netnsid);
 			if (IS_ERR(tgt_net)) {
 				NL_SET_ERR_MSG(extack, "Invalid target network namespace id");
-				return PTR_ERR(tgt_net);
+				err = PTR_ERR(tgt_net);
+				netnsid = -1;
+				goto out;
 			}
 			break;
 		case IFLA_EXT_MASK:
@@ -2457,7 +2459,8 @@ static int rtnl_dump_ifinfo(struct sk_buff *skb, struct netlink_callback *cb)
 		default:
 			if (cb->strict_check) {
 				NL_SET_ERR_MSG(extack, "Unsupported attribute in link dump request");
-				return -EINVAL;
+				err = -EINVAL;
+				goto out;
 			}
 		}
 	}
@@ -2479,11 +2482,14 @@ static int rtnl_dump_ifinfo(struct sk_buff *skb, struct netlink_callback *cb)
 		break;
 	}
-	if (kind_ops)
-		rtnl_link_ops_put(kind_ops, ops_srcu_index);
 	cb->seq = tgt_net->dev_base_seq;
 	nl_dump_check_consistent(cb, nlmsg_hdr(skb));
+out:
+	if (kind_ops)
+		rtnl_link_ops_put(kind_ops, ops_srcu_index);
 	if (netnsid >= 0)
 		put_net(tgt_net);
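The rtnl_dump_ifinfo() changes replace early returns with jumps to a single exit label so that kind_ops and the target-namespace reference are released on every path. Here is a self-contained sketch of that cleanup shape, using hypothetical table_get()/table_put() helpers rather than the rtnetlink API.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

struct table { int id; };

static struct table *table_get(int id)
{
	struct table *t = malloc(sizeof(*t));

	if (t)
		t->id = id;
	return t;
}

static void table_put(struct table *t)
{
	free(t);
}

static int do_dump(int id, int bad_attr)
{
	struct table *tbl;
	int err = 0;

	tbl = table_get(id);
	if (!tbl)
		return -ENOMEM;		/* nothing held yet, plain return is fine */

	if (bad_attr) {
		/* An early "return -EINVAL;" here would leak tbl; jump to
		 * the common exit so the reference is always dropped.
		 */
		err = -EINVAL;
		goto out;
	}

	printf("dumping table %d\n", tbl->id);
out:
	table_put(tbl);
	return err;
}

int main(void)
{
	printf("ok path: %d\n", do_dump(1, 0));
	printf("bad attribute path: %d\n", do_dump(1, 1));
	return 0;
}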

@@ -268,6 +268,8 @@ static struct sk_buff *hsr_init_skb(struct hsr_port *master)
 	skb->dev = master->dev;
 	skb->priority = TC_PRIO_CONTROL;
+	skb_reset_network_header(skb);
+	skb_reset_transport_header(skb);
 	if (dev_hard_header(skb, skb->dev, ETH_P_PRP,
 			    hsr->sup_multicast_addr,
 			    skb->dev->dev_addr, skb->len) <= 0)
@@ -275,8 +277,6 @@ static struct sk_buff *hsr_init_skb(struct hsr_port *master)
 	skb_reset_mac_header(skb);
 	skb_reset_mac_len(skb);
-	skb_reset_network_header(skb);
-	skb_reset_transport_header(skb);
 	return skb;
 out:

@@ -1191,7 +1191,7 @@ static void reqsk_timer_handler(struct timer_list *t)
 drop:
 	__inet_csk_reqsk_queue_drop(sk_listener, oreq, true);
-	reqsk_put(req);
+	reqsk_put(oreq);
 }
 static bool reqsk_queue_hash_req(struct request_sock *req,

@@ -120,6 +120,11 @@ static void ipmr_expire_process(struct timer_list *t);
 				lockdep_rtnl_is_held() || \
 				list_empty(&net->ipv4.mr_tables))
+static bool ipmr_can_free_table(struct net *net)
+{
+	return !check_net(net) || !net->ipv4.mr_rules_ops;
+}
 static struct mr_table *ipmr_mr_table_iter(struct net *net,
 					   struct mr_table *mrt)
 {
@@ -137,7 +142,7 @@ static struct mr_table *ipmr_mr_table_iter(struct net *net,
 	return ret;
 }
-static struct mr_table *ipmr_get_table(struct net *net, u32 id)
+static struct mr_table *__ipmr_get_table(struct net *net, u32 id)
 {
 	struct mr_table *mrt;
@@ -148,6 +153,16 @@ static struct mr_table *ipmr_get_table(struct net *net, u32 id)
 	return NULL;
 }
+static struct mr_table *ipmr_get_table(struct net *net, u32 id)
+{
+	struct mr_table *mrt;
+	rcu_read_lock();
+	mrt = __ipmr_get_table(net, id);
+	rcu_read_unlock();
+	return mrt;
+}
 static int ipmr_fib_lookup(struct net *net, struct flowi4 *flp4,
 			   struct mr_table **mrt)
 {
@@ -189,7 +204,7 @@ static int ipmr_rule_action(struct fib_rule *rule, struct flowi *flp,
 	arg->table = fib_rule_get_table(rule, arg);
-	mrt = ipmr_get_table(rule->fr_net, arg->table);
+	mrt = __ipmr_get_table(rule->fr_net, arg->table);
 	if (!mrt)
 		return -EAGAIN;
 	res->mrt = mrt;
@@ -302,6 +317,11 @@ EXPORT_SYMBOL(ipmr_rule_default);
 #define ipmr_for_each_table(mrt, net) \
 	for (mrt = net->ipv4.mrt; mrt; mrt = NULL)
+static bool ipmr_can_free_table(struct net *net)
+{
+	return !check_net(net);
+}
 static struct mr_table *ipmr_mr_table_iter(struct net *net,
 					   struct mr_table *mrt)
 {
@@ -315,6 +335,8 @@ static struct mr_table *ipmr_get_table(struct net *net, u32 id)
 	return net->ipv4.mrt;
 }
+#define __ipmr_get_table ipmr_get_table
 static int ipmr_fib_lookup(struct net *net, struct flowi4 *flp4,
 			   struct mr_table **mrt)
 {
@@ -403,7 +425,7 @@ static struct mr_table *ipmr_new_table(struct net *net, u32 id)
 	if (id != RT_TABLE_DEFAULT && id >= 1000000000)
 		return ERR_PTR(-EINVAL);
-	mrt = ipmr_get_table(net, id);
+	mrt = __ipmr_get_table(net, id);
 	if (mrt)
 		return mrt;
@@ -413,6 +435,10 @@ static struct mr_table *ipmr_new_table(struct net *net, u32 id)
 static void ipmr_free_table(struct mr_table *mrt)
 {
+	struct net *net = read_pnet(&mrt->net);
+	WARN_ON_ONCE(!ipmr_can_free_table(net));
 	timer_shutdown_sync(&mrt->ipmr_expire_timer);
 	mroute_clean_tables(mrt, MRT_FLUSH_VIFS | MRT_FLUSH_VIFS_STATIC |
 				 MRT_FLUSH_MFC | MRT_FLUSH_MFC_STATIC);
@@ -1374,7 +1400,7 @@ int ip_mroute_setsockopt(struct sock *sk, int optname, sockptr_t optval,
 		goto out_unlock;
 	}
-	mrt = ipmr_get_table(net, raw_sk(sk)->ipmr_table ? : RT_TABLE_DEFAULT);
+	mrt = __ipmr_get_table(net, raw_sk(sk)->ipmr_table ? : RT_TABLE_DEFAULT);
 	if (!mrt) {
 		ret = -ENOENT;
 		goto out_unlock;
@@ -2262,11 +2288,13 @@ int ipmr_get_route(struct net *net, struct sk_buff *skb,
 	struct mr_table *mrt;
 	int err;
-	mrt = ipmr_get_table(net, RT_TABLE_DEFAULT);
-	if (!mrt)
-		return -ENOENT;
 	rcu_read_lock();
+	mrt = __ipmr_get_table(net, RT_TABLE_DEFAULT);
+	if (!mrt) {
+		rcu_read_unlock();
+		return -ENOENT;
+	}
 	cache = ipmr_cache_find(mrt, saddr, daddr);
 	if (!cache && skb->dev) {
 		int vif = ipmr_find_vif(mrt, skb->dev);
@@ -2550,7 +2578,7 @@ static int ipmr_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh,
 	grp = nla_get_in_addr_default(tb[RTA_DST], 0);
 	tableid = nla_get_u32_default(tb[RTA_TABLE], 0);
-	mrt = ipmr_get_table(net, tableid ? tableid : RT_TABLE_DEFAULT);
+	mrt = __ipmr_get_table(net, tableid ? tableid : RT_TABLE_DEFAULT);
 	if (!mrt) {
 		err = -ENOENT;
 		goto errout_free;
@@ -2604,7 +2632,7 @@ static int ipmr_rtm_dumproute(struct sk_buff *skb, struct netlink_callback *cb)
 	if (filter.table_id) {
 		struct mr_table *mrt;
-		mrt = ipmr_get_table(sock_net(skb->sk), filter.table_id);
+		mrt = __ipmr_get_table(sock_net(skb->sk), filter.table_id);
 		if (!mrt) {
 			if (rtnl_msg_family(cb->nlh) != RTNL_FAMILY_IPMR)
 				return skb->len;
@@ -2712,7 +2740,7 @@ static int rtm_to_ipmr_mfcc(struct net *net, struct nlmsghdr *nlh,
 			break;
 		}
 	}
-	mrt = ipmr_get_table(net, tblid);
+	mrt = __ipmr_get_table(net, tblid);
 	if (!mrt) {
 		ret = -ENOENT;
 		goto out;
@@ -2920,13 +2948,15 @@ static void *ipmr_vif_seq_start(struct seq_file *seq, loff_t *pos)
 	struct net *net = seq_file_net(seq);
 	struct mr_table *mrt;
-	mrt = ipmr_get_table(net, RT_TABLE_DEFAULT);
-	if (!mrt)
+	rcu_read_lock();
+	mrt = __ipmr_get_table(net, RT_TABLE_DEFAULT);
+	if (!mrt) {
+		rcu_read_unlock();
 		return ERR_PTR(-ENOENT);
+	}
 	iter->mrt = mrt;
-	rcu_read_lock();
 	return mr_vif_seq_start(seq, pos);
 }
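The pattern throughout this file (and in the ip6mr.c hunks below) is to split the lookup in two: __ipmr_get_table() expects the caller to already be inside an RCU read-side section, while ipmr_get_table() wraps it in rcu_read_lock()/rcu_read_unlock() for callers that only need a quick answer. The stand-alone sketch below illustrates that locked-variant/wrapper split with a pthread mutex deliberately standing in for the RCU read lock, purely so the example is runnable; the names are hypothetical.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t tbl_lock = PTHREAD_MUTEX_INITIALIZER;
static int the_table = 42;

/* Caller must hold tbl_lock (the analogue of being inside
 * rcu_read_lock()); the double-underscore name advertises that.
 */
static int *__table_lookup(int id)
{
	return id == 42 ? &the_table : NULL;
}

/* Convenience wrapper for callers that do not hold the lock and only
 * need a quick answer; it takes and drops the lock itself.
 */
static int *table_lookup(int id)
{
	int *t;

	pthread_mutex_lock(&tbl_lock);
	t = __table_lookup(id);
	pthread_mutex_unlock(&tbl_lock);
	return t;
}

int main(void)
{
	/* Long-running user: hold the lock across lookup and use. */
	pthread_mutex_lock(&tbl_lock);
	if (__table_lookup(42))
		puts("found while locked");
	pthread_mutex_unlock(&tbl_lock);

	/* One-shot user: let the wrapper handle the locking. */
	printf("quick lookup: %s\n", table_lookup(7) ? "hit" : "miss");
	return 0;
}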

@@ -2570,6 +2570,24 @@ static struct inet6_dev *addrconf_add_dev(struct net_device *dev)
 	return idev;
 }
+static void delete_tempaddrs(struct inet6_dev *idev,
+			     struct inet6_ifaddr *ifp)
+{
+	struct inet6_ifaddr *ift, *tmp;
+	write_lock_bh(&idev->lock);
+	list_for_each_entry_safe(ift, tmp, &idev->tempaddr_list, tmp_list) {
+		if (ift->ifpub != ifp)
+			continue;
+		in6_ifa_hold(ift);
+		write_unlock_bh(&idev->lock);
+		ipv6_del_addr(ift);
+		write_lock_bh(&idev->lock);
+	}
+	write_unlock_bh(&idev->lock);
+}
 static void manage_tempaddrs(struct inet6_dev *idev,
 			     struct inet6_ifaddr *ifp,
 			     __u32 valid_lft, __u32 prefered_lft,
@@ -3124,11 +3142,12 @@ static int inet6_addr_del(struct net *net, int ifindex, u32 ifa_flags,
 			in6_ifa_hold(ifp);
 			read_unlock_bh(&idev->lock);
-			if (!(ifp->flags & IFA_F_TEMPORARY) &&
-			    (ifa_flags & IFA_F_MANAGETEMPADDR))
-				manage_tempaddrs(idev, ifp, 0, 0, false,
-						 jiffies);
 			ipv6_del_addr(ifp);
+			if (!(ifp->flags & IFA_F_TEMPORARY) &&
+			    (ifp->flags & IFA_F_MANAGETEMPADDR))
+				delete_tempaddrs(idev, ifp);
 			addrconf_verify_rtnl(net);
 			if (ipv6_addr_is_multicast(pfx)) {
 				ipv6_mc_config(net->ipv6.mc_autojoin_sk,
@@ -4952,14 +4971,12 @@ static int inet6_addr_modify(struct net *net, struct inet6_ifaddr *ifp,
 	}
 	if (was_managetempaddr || ifp->flags & IFA_F_MANAGETEMPADDR) {
-		if (was_managetempaddr &&
-		    !(ifp->flags & IFA_F_MANAGETEMPADDR)) {
-			cfg->valid_lft = 0;
-			cfg->preferred_lft = 0;
-		}
-		manage_tempaddrs(ifp->idev, ifp, cfg->valid_lft,
-				 cfg->preferred_lft, !was_managetempaddr,
-				 jiffies);
+		if (was_managetempaddr && !(ifp->flags & IFA_F_MANAGETEMPADDR))
+			delete_tempaddrs(ifp->idev, ifp);
+		else
+			manage_tempaddrs(ifp->idev, ifp, cfg->valid_lft,
+					 cfg->preferred_lft, !was_managetempaddr,
+					 jiffies);
 	}
 	addrconf_verify_rtnl(net);

@@ -108,6 +108,11 @@ static void ipmr_expire_process(struct timer_list *t);
 				lockdep_rtnl_is_held() || \
 				list_empty(&net->ipv6.mr6_tables))
+static bool ip6mr_can_free_table(struct net *net)
+{
+	return !check_net(net) || !net->ipv6.mr6_rules_ops;
+}
 static struct mr_table *ip6mr_mr_table_iter(struct net *net,
 					    struct mr_table *mrt)
 {
@@ -125,7 +130,7 @@ static struct mr_table *ip6mr_mr_table_iter(struct net *net,
 	return ret;
 }
-static struct mr_table *ip6mr_get_table(struct net *net, u32 id)
+static struct mr_table *__ip6mr_get_table(struct net *net, u32 id)
 {
 	struct mr_table *mrt;
@@ -136,6 +141,16 @@ static struct mr_table *ip6mr_get_table(struct net *net, u32 id)
 	return NULL;
 }
+static struct mr_table *ip6mr_get_table(struct net *net, u32 id)
+{
+	struct mr_table *mrt;
+	rcu_read_lock();
+	mrt = __ip6mr_get_table(net, id);
+	rcu_read_unlock();
+	return mrt;
+}
 static int ip6mr_fib_lookup(struct net *net, struct flowi6 *flp6,
 			    struct mr_table **mrt)
 {
@@ -177,7 +192,7 @@ static int ip6mr_rule_action(struct fib_rule *rule, struct flowi *flp,
 	arg->table = fib_rule_get_table(rule, arg);
-	mrt = ip6mr_get_table(rule->fr_net, arg->table);
+	mrt = __ip6mr_get_table(rule->fr_net, arg->table);
 	if (!mrt)
 		return -EAGAIN;
 	res->mrt = mrt;
@@ -291,6 +306,11 @@ EXPORT_SYMBOL(ip6mr_rule_default);
 #define ip6mr_for_each_table(mrt, net) \
 	for (mrt = net->ipv6.mrt6; mrt; mrt = NULL)
+static bool ip6mr_can_free_table(struct net *net)
+{
+	return !check_net(net);
+}
 static struct mr_table *ip6mr_mr_table_iter(struct net *net,
 					    struct mr_table *mrt)
 {
@@ -304,6 +324,8 @@ static struct mr_table *ip6mr_get_table(struct net *net, u32 id)
 	return net->ipv6.mrt6;
 }
+#define __ip6mr_get_table ip6mr_get_table
 static int ip6mr_fib_lookup(struct net *net, struct flowi6 *flp6,
 			    struct mr_table **mrt)
 {
@@ -382,7 +404,7 @@ static struct mr_table *ip6mr_new_table(struct net *net, u32 id)
 {
 	struct mr_table *mrt;
-	mrt = ip6mr_get_table(net, id);
+	mrt = __ip6mr_get_table(net, id);
 	if (mrt)
 		return mrt;
@@ -392,6 +414,10 @@ static struct mr_table *ip6mr_new_table(struct net *net, u32 id)
 static void ip6mr_free_table(struct mr_table *mrt)
 {
+	struct net *net = read_pnet(&mrt->net);
+	WARN_ON_ONCE(!ip6mr_can_free_table(net));
 	timer_shutdown_sync(&mrt->ipmr_expire_timer);
 	mroute_clean_tables(mrt, MRT6_FLUSH_MIFS | MRT6_FLUSH_MIFS_STATIC |
 				 MRT6_FLUSH_MFC | MRT6_FLUSH_MFC_STATIC);
@@ -411,13 +437,15 @@ static void *ip6mr_vif_seq_start(struct seq_file *seq, loff_t *pos)
 	struct net *net = seq_file_net(seq);
 	struct mr_table *mrt;
-	mrt = ip6mr_get_table(net, RT6_TABLE_DFLT);
-	if (!mrt)
+	rcu_read_lock();
+	mrt = __ip6mr_get_table(net, RT6_TABLE_DFLT);
+	if (!mrt) {
+		rcu_read_unlock();
 		return ERR_PTR(-ENOENT);
+	}
 	iter->mrt = mrt;
-	rcu_read_lock();
 	return mr_vif_seq_start(seq, pos);
 }
@@ -2278,11 +2306,13 @@ int ip6mr_get_route(struct net *net, struct sk_buff *skb, struct rtmsg *rtm,
 	struct mfc6_cache *cache;
 	struct rt6_info *rt = dst_rt6_info(skb_dst(skb));
-	mrt = ip6mr_get_table(net, RT6_TABLE_DFLT);
-	if (!mrt)
-		return -ENOENT;
 	rcu_read_lock();
+	mrt = __ip6mr_get_table(net, RT6_TABLE_DFLT);
+	if (!mrt) {
+		rcu_read_unlock();
+		return -ENOENT;
+	}
 	cache = ip6mr_cache_find(mrt, &rt->rt6i_src.addr, &rt->rt6i_dst.addr);
 	if (!cache && skb->dev) {
 		int vif = ip6mr_find_vif(mrt, skb->dev);
@@ -2562,7 +2592,7 @@ static int ip6mr_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh,
 	grp = nla_get_in6_addr(tb[RTA_DST]);
 	tableid = nla_get_u32_default(tb[RTA_TABLE], 0);
-	mrt = ip6mr_get_table(net, tableid ?: RT_TABLE_DEFAULT);
+	mrt = __ip6mr_get_table(net, tableid ?: RT_TABLE_DEFAULT);
 	if (!mrt) {
 		NL_SET_ERR_MSG_MOD(extack, "MR table does not exist");
 		return -ENOENT;
@@ -2609,7 +2639,7 @@ static int ip6mr_rtm_dumproute(struct sk_buff *skb, struct netlink_callback *cb)
 	if (filter.table_id) {
 		struct mr_table *mrt;
-		mrt = ip6mr_get_table(sock_net(skb->sk), filter.table_id);
+		mrt = __ip6mr_get_table(sock_net(skb->sk), filter.table_id);
 		if (!mrt) {
 			if (rtnl_msg_family(cb->nlh) != RTNL_FAMILY_IP6MR)
 				return skb->len;

@@ -1236,7 +1236,9 @@ static int iucv_sock_recvmsg(struct socket *sock, struct msghdr *msg,
 		return -EOPNOTSUPP;
 	/* receive/dequeue next skb:
-	 * the function understands MSG_PEEK and, thus, does not dequeue skb */
+	 * the function understands MSG_PEEK and, thus, does not dequeue skb
+	 * only refcount is increased.
+	 */
 	skb = skb_recv_datagram(sk, flags, &err);
 	if (!skb) {
 		if (sk->sk_shutdown & RCV_SHUTDOWN)
@@ -1252,9 +1254,8 @@ static int iucv_sock_recvmsg(struct socket *sock, struct msghdr *msg,
 	cskb = skb;
 	if (skb_copy_datagram_msg(cskb, offset, msg, copied)) {
-		if (!(flags & MSG_PEEK))
-			skb_queue_head(&sk->sk_receive_queue, skb);
-		return -EFAULT;
+		err = -EFAULT;
+		goto err_out;
 	}
 	/* SOCK_SEQPACKET: set MSG_TRUNC if recv buf size is too small */
@@ -1271,11 +1272,8 @@ static int iucv_sock_recvmsg(struct socket *sock, struct msghdr *msg,
 	err = put_cmsg(msg, SOL_IUCV, SCM_IUCV_TRGCLS,
 		       sizeof(IUCV_SKB_CB(skb)->class),
 		       (void *)&IUCV_SKB_CB(skb)->class);
-	if (err) {
-		if (!(flags & MSG_PEEK))
-			skb_queue_head(&sk->sk_receive_queue, skb);
-		return err;
-	}
+	if (err)
+		goto err_out;
 	/* Mark read part of skb as used */
 	if (!(flags & MSG_PEEK)) {
@@ -1331,8 +1329,18 @@ static int iucv_sock_recvmsg(struct socket *sock, struct msghdr *msg,
 	/* SOCK_SEQPACKET: return real length if MSG_TRUNC is set */
 	if (sk->sk_type == SOCK_SEQPACKET && (flags & MSG_TRUNC))
 		copied = rlen;
+	if (flags & MSG_PEEK)
+		skb_unref(skb);
 	return copied;
+err_out:
+	if (!(flags & MSG_PEEK))
+		skb_queue_head(&sk->sk_receive_queue, skb);
+	else
+		skb_unref(skb);
+	return err;
 }
 static inline __poll_t iucv_accept_poll(struct sock *parent)
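The new err_out label encodes the rule the comment at the top of the hunk states: with MSG_PEEK the skb stays queued and skb_recv_datagram() only takes an extra reference, so every exit, success or error, must drop that reference, while non-peek errors requeue the skb instead. Below is a small stand-alone sketch of that bookkeeping with a hypothetical refcounted buffer; it is not the af_iucv code.

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct buf {
	int refs;
	char data[16];
};

static struct buf *buf_get(struct buf *b) { b->refs++; return b; }

static void buf_put(struct buf *b)
{
	if (--b->refs == 0)
		free(b);
}

/* peek != 0 mimics MSG_PEEK: the queue keeps its reference and we work
 * on an extra one that must be dropped on every return path.
 */
static int recv_one(struct buf *queued, int peek, int force_error)
{
	struct buf *b = peek ? buf_get(queued) : queued;

	if (force_error) {
		/* Mirror the patch: requeue for normal reads, just drop
		 * the extra reference for peeks.
		 */
		if (peek)
			buf_put(b);
		return -EFAULT;
	}

	printf("read: %s\n", b->data);
	if (peek)
		buf_put(b);		/* success path drops the peek ref too */
	return 0;
}

int main(void)
{
	struct buf *b = calloc(1, sizeof(*b));

	b->refs = 1;			/* reference owned by the "queue" */
	strcpy(b->data, "hello");
	recv_one(b, 1, 0);		/* peek, success */
	recv_one(b, 1, 1);		/* peek, error */
	printf("references left for the queue: %d\n", b->refs);
	buf_put(b);
	return 0;
}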

@@ -1870,15 +1870,31 @@ static __net_exit void l2tp_pre_exit_net(struct net *net)
 	}
 }
+static int l2tp_idr_item_unexpected(int id, void *p, void *data)
+{
+	const char *idr_name = data;
+	pr_err("l2tp: %s IDR not empty at net %d exit\n", idr_name, id);
+	WARN_ON_ONCE(1);
+	return 1;
+}
 static __net_exit void l2tp_exit_net(struct net *net)
 {
 	struct l2tp_net *pn = l2tp_pernet(net);
-	WARN_ON_ONCE(!idr_is_empty(&pn->l2tp_v2_session_idr));
+	/* Our per-net IDRs should be empty. Check that is so, to
+	 * help catch cleanup races or refcnt leaks.
+	 */
+	idr_for_each(&pn->l2tp_v2_session_idr, l2tp_idr_item_unexpected,
+		     "v2_session");
+	idr_for_each(&pn->l2tp_v3_session_idr, l2tp_idr_item_unexpected,
+		     "v3_session");
+	idr_for_each(&pn->l2tp_tunnel_idr, l2tp_idr_item_unexpected,
+		     "tunnel");
 	idr_destroy(&pn->l2tp_v2_session_idr);
-	WARN_ON_ONCE(!idr_is_empty(&pn->l2tp_v3_session_idr));
 	idr_destroy(&pn->l2tp_v3_session_idr);
-	WARN_ON_ONCE(!idr_is_empty(&pn->l2tp_tunnel_idr));
 	idr_destroy(&pn->l2tp_tunnel_idr);
 }
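Rather than a bare WARN_ON_ONCE(!idr_is_empty(...)), the exit path now walks each IDR and names every entry that should not still be there, which turns an anonymous warning into something actionable. The same idea in a generic, stand-alone form (a plain array instead of an IDR, hypothetical names):

#include <stdio.h>

#define NSLOTS 4

static const char *slots[NSLOTS];	/* stand-in for a per-net IDR */

/* Report every leftover entry before warning, so the log says *what*
 * leaked, not merely that something did.
 */
static int report_leftovers(const char *name)
{
	int leaked = 0;

	for (int i = 0; i < NSLOTS; i++) {
		if (!slots[i])
			continue;
		fprintf(stderr, "%s: slot %d still holds %s at exit\n",
			name, i, slots[i]);
		leaked++;
	}
	return leaked;
}

int main(void)
{
	slots[2] = "session-abc";	/* simulate a leaked entry */

	if (report_leftovers("v2_session"))
		fprintf(stderr, "would WARN_ON_ONCE(1) here\n");
	return 0;
}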

@@ -1098,7 +1098,7 @@ static int llc_ui_setsockopt(struct socket *sock, int level, int optname,
 	lock_sock(sk);
 	if (unlikely(level != SOL_LLC || optlen != sizeof(int)))
 		goto out;
-	rc = copy_from_sockptr(&opt, optval, sizeof(opt));
+	rc = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
 	if (rc)
 		goto out;
 	rc = -EINVAL;

@@ -2181,9 +2181,14 @@ netlink_ack_tlv_len(struct netlink_sock *nlk, int err,
 	return tlvlen;
 }
+static bool nlmsg_check_in_payload(const struct nlmsghdr *nlh, const void *addr)
+{
+	return !WARN_ON(addr < nlmsg_data(nlh) ||
+			addr - (const void *) nlh >= nlh->nlmsg_len);
+}
 static void
-netlink_ack_tlv_fill(struct sk_buff *in_skb, struct sk_buff *skb,
-		     const struct nlmsghdr *nlh, int err,
+netlink_ack_tlv_fill(struct sk_buff *skb, const struct nlmsghdr *nlh, int err,
 		     const struct netlink_ext_ack *extack)
 {
 	if (extack->_msg)
@@ -2195,9 +2200,7 @@ netlink_ack_tlv_fill(struct sk_buff *in_skb, struct sk_buff *skb,
 	if (!err)
 		return;
-	if (extack->bad_attr &&
-	    !WARN_ON((u8 *)extack->bad_attr < in_skb->data ||
-		     (u8 *)extack->bad_attr >= in_skb->data + in_skb->len))
+	if (extack->bad_attr && nlmsg_check_in_payload(nlh, extack->bad_attr))
 		WARN_ON(nla_put_u32(skb, NLMSGERR_ATTR_OFFS,
 				    (u8 *)extack->bad_attr - (const u8 *)nlh));
 	if (extack->policy)
@@ -2206,9 +2209,7 @@ netlink_ack_tlv_fill(struct sk_buff *in_skb, struct sk_buff *skb,
 	if (extack->miss_type)
 		WARN_ON(nla_put_u32(skb, NLMSGERR_ATTR_MISS_TYPE,
 				    extack->miss_type));
-	if (extack->miss_nest &&
-	    !WARN_ON((u8 *)extack->miss_nest < in_skb->data ||
-		     (u8 *)extack->miss_nest > in_skb->data + in_skb->len))
+	if (extack->miss_nest && nlmsg_check_in_payload(nlh, extack->miss_nest))
 		WARN_ON(nla_put_u32(skb, NLMSGERR_ATTR_MISS_NEST,
 				    (u8 *)extack->miss_nest - (const u8 *)nlh));
 }
@@ -2237,7 +2238,7 @@ static int netlink_dump_done(struct netlink_sock *nlk, struct sk_buff *skb,
 	if (extack_len) {
 		nlh->nlmsg_flags |= NLM_F_ACK_TLVS;
 		if (skb_tailroom(skb) >= extack_len) {
-			netlink_ack_tlv_fill(cb->skb, skb, cb->nlh,
+			netlink_ack_tlv_fill(skb, cb->nlh,
 					     nlk->dump_done_errno, extack);
 			nlmsg_end(skb, nlh);
 		}
@@ -2496,7 +2497,7 @@ void netlink_ack(struct sk_buff *in_skb, struct nlmsghdr *nlh, int err,
 	}
 	if (tlvlen)
-		netlink_ack_tlv_fill(in_skb, skb, nlh, err, extack);
+		netlink_ack_tlv_fill(skb, nlh, err, extack);
 	nlmsg_end(skb, rep);
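nlmsg_check_in_payload() tightens the old test: the extack cookie must point inside the payload of the message being acked (relative to nlh), not merely somewhere in the received skb, which is what made the dump-path warning a false positive. Here is a stand-alone sketch of the same bounds check against a fake fixed-size message; the struct and helper are hypothetical, not the netlink API.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct msg_hdr {
	uint32_t len;		/* total length, header included */
	char payload[60];
};

/* True only if addr points into the payload of *this* message, i.e.
 * past the header and before len bytes from the start of the header.
 */
static bool in_payload(const struct msg_hdr *msg, const void *addr)
{
	const char *base = (const char *)msg;
	const char *p = addr;

	return p >= msg->payload && (uint32_t)(p - base) < msg->len;
}

int main(void)
{
	static struct msg_hdr m;

	m.len = sizeof(m.len) + 16;	/* header plus 16 payload bytes */
	strcpy(m.payload, "attribute data");

	printf("payload byte:      %d\n", in_payload(&m, m.payload + 3));
	printf("header byte:       %d\n", in_payload(&m, &m.len));
	printf("past this message: %d\n", in_payload(&m, (char *)&m + 40));
	return 0;
}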

@@ -707,9 +707,10 @@ static int rxrpc_setsockopt(struct socket *sock, int level, int optname,
 			ret = -EISCONN;
 			if (rx->sk.sk_state != RXRPC_UNBOUND)
 				goto error;
-			ret = copy_from_sockptr(&min_sec_level, optval,
-						sizeof(unsigned int));
-			if (ret < 0)
+			ret = copy_safe_from_sockptr(&min_sec_level,
+						     sizeof(min_sec_level),
+						     optval, optlen);
+			if (ret)
 				goto error;
 			ret = -EINVAL;
 			if (min_sec_level > RXRPC_SECURITY_MAX)

@@ -332,6 +332,12 @@ static bool fq_fastpath_check(const struct Qdisc *sch, struct sk_buff *skb,
 		 */
 		if (q->internal.qlen >= 8)
 			return false;
+		/* Ordering invariants fall apart if some delayed flows
+		 * are ready but we haven't serviced them, yet.
+		 */
+		if (q->time_next_delayed_flow <= now + q->offload_horizon)
+			return false;
 	}
 	sk = skb->sk;

@@ -218,5 +218,5 @@ class LinkConfig:
         json_data = process[0]
         """Check if the field exist in the json data"""
         if field not in json_data:
-            raise KsftSkipEx(f"Field {field} does not exist in the output of interface {json_data["ifname"]}")
+            raise KsftSkipEx(f'Field {field} does not exist in the output of interface {json_data["ifname"]}')
         return json_data[field]

@@ -78,7 +78,6 @@ TEST_PROGS += test_vxlan_vnifiltering.sh
 TEST_GEN_FILES += io_uring_zerocopy_tx
 TEST_PROGS += io_uring_zerocopy_tx.sh
 TEST_GEN_FILES += bind_bhash
-TEST_GEN_PROGS += netlink-dumps
 TEST_GEN_PROGS += sk_bind_sendto_listen
 TEST_GEN_PROGS += sk_connect_zero_addr
 TEST_GEN_PROGS += sk_so_peek_off
@@ -101,7 +100,7 @@ TEST_PROGS += ipv6_route_update_soft_lockup.sh
 TEST_PROGS += busy_poll_test.sh
 # YNL files, must be before "include ..lib.mk"
-YNL_GEN_FILES := busy_poller
+YNL_GEN_FILES := busy_poller netlink-dumps
 TEST_GEN_FILES += $(YNL_GEN_FILES)
 TEST_FILES := settings

@@ -12,11 +12,140 @@
 #include <unistd.h>
 #include <linux/genetlink.h>
+#include <linux/neighbour.h>
+#include <linux/netdevice.h>
 #include <linux/netlink.h>
 #include <linux/mqueue.h>
+#include <linux/rtnetlink.h>
 #include "../kselftest_harness.h"
+#include <ynl.h>
+struct ext_ack {
+	int err;
+	__u32 attr_offs;
+	__u32 miss_type;
+	__u32 miss_nest;
+	const char *str;
+};
+/* 0: no done, 1: done found, 2: extack found, -1: error */
+static int nl_get_extack(char *buf, size_t n, struct ext_ack *ea)
+{
+	const struct nlmsghdr *nlh;
+	const struct nlattr *attr;
+	ssize_t rem;
+	for (rem = n; rem > 0; NLMSG_NEXT(nlh, rem)) {
+		nlh = (struct nlmsghdr *)&buf[n - rem];
+		if (!NLMSG_OK(nlh, rem))
+			return -1;
+		if (nlh->nlmsg_type != NLMSG_DONE)
+			continue;
+		ea->err = -*(int *)NLMSG_DATA(nlh);
+		if (!(nlh->nlmsg_flags & NLM_F_ACK_TLVS))
+			return 1;
+		ynl_attr_for_each(attr, nlh, sizeof(int)) {
+			switch (ynl_attr_type(attr)) {
+			case NLMSGERR_ATTR_OFFS:
+				ea->attr_offs = ynl_attr_get_u32(attr);
+				break;
+			case NLMSGERR_ATTR_MISS_TYPE:
+				ea->miss_type = ynl_attr_get_u32(attr);
+				break;
+			case NLMSGERR_ATTR_MISS_NEST:
+				ea->miss_nest = ynl_attr_get_u32(attr);
+				break;
+			case NLMSGERR_ATTR_MSG:
+				ea->str = ynl_attr_get_str(attr);
+				break;
+			}
+		}
+		return 2;
+	}
+	return 0;
+}
+static const struct {
+	struct nlmsghdr nlhdr;
+	struct ndmsg ndm;
+	struct nlattr ahdr;
+	__u32 val;
+} dump_neigh_bad = {
+	.nlhdr = {
+		.nlmsg_len = sizeof(dump_neigh_bad),
+		.nlmsg_type = RTM_GETNEIGH,
+		.nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK | NLM_F_DUMP,
+		.nlmsg_seq = 1,
+	},
+	.ndm = {
+		.ndm_family = 123,
+	},
+	.ahdr = {
+		.nla_len = 4 + 4,
+		.nla_type = NDA_FLAGS_EXT,
+	},
+	.val = -1, // should fail MASK validation
+};
+TEST(dump_extack)
+{
+	int netlink_sock;
+	char buf[8192];
+	int one = 1;
+	int i, cnt;
+	ssize_t n;
+	netlink_sock = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
+	ASSERT_GE(netlink_sock, 0);
+	n = setsockopt(netlink_sock, SOL_NETLINK, NETLINK_CAP_ACK,
+		       &one, sizeof(one));
+	ASSERT_EQ(n, 0);
+	n = setsockopt(netlink_sock, SOL_NETLINK, NETLINK_EXT_ACK,
+		       &one, sizeof(one));
+	ASSERT_EQ(n, 0);
+	n = setsockopt(netlink_sock, SOL_NETLINK, NETLINK_GET_STRICT_CHK,
+		       &one, sizeof(one));
+	ASSERT_EQ(n, 0);
+	/* Dump so many times we fill up the buffer */
+	cnt = 64;
+	for (i = 0; i < cnt; i++) {
+		n = send(netlink_sock, &dump_neigh_bad,
+			 sizeof(dump_neigh_bad), 0);
+		ASSERT_EQ(n, sizeof(dump_neigh_bad));
+	}
+	/* Read out the ENOBUFS */
+	n = recv(netlink_sock, buf, sizeof(buf), MSG_DONTWAIT);
+	EXPECT_EQ(n, -1);
+	EXPECT_EQ(errno, ENOBUFS);
+	for (i = 0; i < cnt; i++) {
+		struct ext_ack ea = {};
+		n = recv(netlink_sock, buf, sizeof(buf), MSG_DONTWAIT);
+		if (n < 0) {
+			ASSERT_GE(i, 10);
+			break;
+		}
+		ASSERT_GE(n, (ssize_t)sizeof(struct nlmsghdr));
+		EXPECT_EQ(nl_get_extack(buf, n, &ea), 2);
+		EXPECT_EQ(ea.attr_offs,
+			  sizeof(struct nlmsghdr) + sizeof(struct ndmsg));
+	}
+}
 static const struct {
 	struct nlmsghdr nlhdr;
 	struct genlmsghdr genlhdr;

@@ -3,10 +3,9 @@
 all:
 	@echo mk_build_dir="$(shell pwd)" > include.sh
-TEST_PROGS := run.sh \
-	test.py
+TEST_PROGS := run.sh
-TEST_FILES := include.sh
+TEST_FILES := include.sh test.py
 EXTRA_CLEAN := /tmp/rds_logs include.sh

@@ -29,6 +29,7 @@ ALL_TESTS="
 	kci_test_bridge_parent_id
 	kci_test_address_proto
 	kci_test_enslave_bonding
+	kci_test_mngtmpaddr
 "
 devdummy="test-dummy0"
@@ -44,6 +45,7 @@ check_err()
 	if [ $ret -eq 0 ]; then
 		ret=$1
 	fi
+	[ -n "$2" ] && echo "$2"
 }
 # same but inverted -- used when command must fail for test to pass
@@ -1239,6 +1241,99 @@ kci_test_enslave_bonding()
 	ip netns del "$testns"
 }
+# Called to validate the addresses on $IFNAME:
+#
+# 1. Every `temporary` address must have a matching `mngtmpaddr`
+# 2. Every `mngtmpaddr` address must have some un`deprecated` `temporary`
+#
+# If the mngtmpaddr or tempaddr checking failed, return 0 and stop slowwait
+validate_mngtmpaddr()
+{
+	local dev=$1
+	local prefix=""
+	local addr_list=$(ip -j -n $testns addr show dev ${dev})
+	local temp_addrs=$(echo ${addr_list} | \
+		jq -r '.[].addr_info[] | select(.temporary == true) | .local')
+	local mng_prefixes=$(echo ${addr_list} | \
+		jq -r '.[].addr_info[] | select(.mngtmpaddr == true) | .local' | \
+		cut -d: -f1-4 | tr '\n' ' ')
+	local undep_prefixes=$(echo ${addr_list} | \
+		jq -r '.[].addr_info[] | select(.temporary == true and .deprecated != true) | .local' | \
+		cut -d: -f1-4 | tr '\n' ' ')
+	# 1. All temporary addresses (temp and dep) must have a matching mngtmpaddr
+	for address in ${temp_addrs}; do
+		prefix=$(echo ${address} | cut -d: -f1-4)
+		if [[ ! " ${mng_prefixes} " =~ " $prefix " ]]; then
+			check_err 1 "FAIL: Temporary $address with no matching mngtmpaddr!";
+			return 0
+		fi
+	done
+	# 2. All mngtmpaddr addresses must have a temporary address (not dep)
+	for prefix in ${mng_prefixes}; do
+		if [[ ! " ${undep_prefixes} " =~ " $prefix " ]]; then
+			check_err 1 "FAIL: No undeprecated temporary in $prefix!";
+			return 0
+		fi
+	done
+	return 1
+}
+kci_test_mngtmpaddr()
+{
+	local ret=0
+	setup_ns testns
+	if [ $? -ne 0 ]; then
+		end_test "SKIP mngtmpaddr tests: cannot add net namespace $testns"
+		return $ksft_skip
+	fi
+	# 1. Create a dummy Ethernet interface
+	run_cmd ip -n $testns link add ${devdummy} type dummy
+	run_cmd ip -n $testns link set ${devdummy} up
+	run_cmd ip netns exec $testns sysctl -w net.ipv6.conf.${devdummy}.use_tempaddr=1
+	run_cmd ip netns exec $testns sysctl -w net.ipv6.conf.${devdummy}.temp_prefered_lft=10
+	run_cmd ip netns exec $testns sysctl -w net.ipv6.conf.${devdummy}.temp_valid_lft=25
+	run_cmd ip netns exec $testns sysctl -w net.ipv6.conf.${devdummy}.max_desync_factor=1
+	# 2. Create several mngtmpaddr addresses on that interface.
+	#    with temp_*_lft configured to be pretty short (10 and 35 seconds
+	#    for prefer/valid respectively)
+	for i in $(seq 1 9); do
+		run_cmd ip -n $testns addr add 2001:db8:7e57:${i}::1/64 mngtmpaddr dev ${devdummy}
+	done
+	# 3. Confirm that a preferred temporary address exists for each mngtmpaddr
+	#    address at all times, polling once per second for 30 seconds.
+	slowwait 30 validate_mngtmpaddr ${devdummy}
+	# 4. Delete each mngtmpaddr address, one at a time (alternating between
+	#    deleting and merely un-mngtmpaddr-ing), and confirm that the other
+	#    mngtmpaddr addresses still have preferred temporaries.
+	for i in $(seq 1 9); do
+		(( $i % 4 == 0 )) && mng_flag="mngtmpaddr" || mng_flag=""
+		if (( $i % 2 == 0 )); then
+			run_cmd ip -n $testns addr del 2001:db8:7e57:${i}::1/64 $mng_flag dev ${devdummy}
+		else
+			run_cmd ip -n $testns addr change 2001:db8:7e57:${i}::1/64 dev ${devdummy}
+		fi
+		# the temp addr should be deleted
+		validate_mngtmpaddr ${devdummy}
+	done
+	if [ $ret -ne 0 ]; then
+		end_test "FAIL: mngtmpaddr add/remove incorrect"
+	else
+		end_test "PASS: mngtmpaddr add/remove correctly"
+	fi
+	ip netns del "$testns"
+	return $ret
+}
 kci_test_rtnl()
 {
 	local current_test