Merge tag 'net-6.12-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
 "Including fixes from ieee802154, bluetooth and netfilter.

  Current release - regressions:

   - eth: mlx5: fix wrong reserved field in hca_cap_2 in mlx5_ifc

   - eth: am65-cpsw: fix forever loop in cleanup code

  Current release - new code bugs:

   - eth: mlx5: HWS, fixed double-free in error flow of creating SQ

  Previous releases - regressions:

   - core: avoid potential underflow in qdisc_pkt_len_init() with UFO

   - core: test for not too small csum_start in virtio_net_hdr_to_skb()

   - vrf: revert "vrf: remove unnecessary RCU-bh critical section"

   - bluetooth:
       - fix uaf in l2cap_connect
       - fix possible crash on mgmt_index_removed

   - dsa: improve shutdown sequence

   - eth: mlx5e: SHAMPO, fix overflow of hd_per_wq

   - eth: ip_gre: fix drops of small packets in ipgre_xmit

  Previous releases - always broken:

   - core: fix gso_features_check to check for both
     dev->gso_{ipv4_,}max_size

   - core: fix tcp fraglist segmentation after pull from frag_list

   - netfilter: nf_tables: prevent nf_skb_duplicated corruption

   - sctp: set sk_state back to CLOSED if autobind fails in
     sctp_listen_start

   - mac802154: fix potential RCU dereference issue in
     mac802154_scan_worker

   - eth: fec: restart PPS after link state change"

* tag 'net-6.12-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (48 commits)
  sctp: set sk_state back to CLOSED if autobind fails in sctp_listen_start
  dt-bindings: net: xlnx,axi-ethernet: Add missing reg minItems
  doc: net: napi: Update documentation for napi_schedule_irqoff
  net/ncsi: Disable the ncsi work before freeing the associated structure
  net: phy: qt2025: Fix warning: unused import DeviceId
  gso: fix udp gso fraglist segmentation after pull from frag_list
  bridge: mcast: Fail MDB get request on empty entry
  vrf: revert "vrf: Remove unnecessary RCU-bh critical section"
  net: ethernet: ti: am65-cpsw: Fix forever loop in cleanup code
  net: phy: realtek: Check the index value in led_hw_control_get
  ppp: do not assume bh is held in ppp_channel_bridge_input()
  selftests: rds: move include.sh to TEST_FILES
  net: test for not too small csum_start in virtio_net_hdr_to_skb()
  net: gso: fix tcp fraglist segmentation after pull from frag_list
  ipv4: ip_gre: Fix drops of small packets in ipgre_xmit
  net: stmmac: dwmac4: extend timeout for VLAN Tag register busy bit check
  net: add more sanity checks to qdisc_pkt_len_init()
  net: avoid potential underflow in qdisc_pkt_len_init() with UFO
  net: ethernet: ti: cpsw_ale: Fix warning on some platforms
  net: microchip: Make FDMA config symbol invisible
  ...
This commit is contained in:
Linus Torvalds 2024-10-03 09:44:00 -07:00
commit 8c245fe7dd
55 changed files with 310 additions and 127 deletions

View File

@@ -34,6 +34,7 @@ properties:
      and length of the AXI DMA controller IO space, unless
      axistream-connected is specified, in which case the reg
      attribute of the node referenced by it is used.
+    minItems: 1
     maxItems: 2

   interrupts:
@@ -181,7 +182,7 @@ examples:
        clock-names = "s_axi_lite_clk", "axis_clk", "ref_clk", "mgt_clk";
        clocks = <&axi_clk>, <&axi_clk>, <&pl_enet_ref_clk>, <&mgt_clk>;
        phy-mode = "mii";
-       reg = <0x00 0x40000000 0x00 0x40000>;
+       reg = <0x40000000 0x40000>;
        xlnx,rxcsum = <0x2>;
        xlnx,rxmem = <0x800>;
        xlnx,txcsum = <0x2>;

View File

@@ -144,9 +144,8 @@ IRQ should only be unmasked after a successful call to napi_complete_done():
 napi_schedule_irqoff() is a variant of napi_schedule() which takes advantage
 of guarantees given by being invoked in IRQ context (no need to
-mask interrupts). Note that PREEMPT_RT forces all interrupts
-to be threaded so the interrupt may need to be marked ``IRQF_NO_THREAD``
-to avoid issues on real-time kernel configurations.
+mask interrupts). napi_schedule_irqoff() will fall back to napi_schedule() if
+IRQs are threaded (such as if ``PREEMPT_RT`` is enabled).

 Instance to queue mapping
 -------------------------

View File

@@ -92,7 +92,7 @@ static int btmrvl_sdio_probe_of(struct device *dev,
 	} else {
 		ret = devm_request_irq(dev, cfg->irq_bt,
 				       btmrvl_wake_irq_bt,
-				       0, "bt_wake", card);
+				       IRQF_NO_AUTOEN, "bt_wake", card);
 		if (ret) {
 			dev_err(dev,
 				"Failed to request irq_bt %d (%d)\n",
@@ -101,7 +101,6 @@ static int btmrvl_sdio_probe_of(struct device *dev,
 			/* Configure wakeup (enabled by default) */
 			device_init_wakeup(dev, true);
-			disable_irq(cfg->irq_bt);
 		}
 	}

View File

@@ -691,10 +691,19 @@ struct fec_enet_private {
 	/* XDP BPF Program */
 	struct bpf_prog *xdp_prog;

+	struct {
+		int pps_enable;
+		u64 ns_sys, ns_phc;
+		u32 at_corr;
+		u8 at_inc_corr;
+	} ptp_saved_state;
+
 	u64 ethtool_stats[];
 };

 void fec_ptp_init(struct platform_device *pdev, int irq_idx);
+void fec_ptp_restore_state(struct fec_enet_private *fep);
+void fec_ptp_save_state(struct fec_enet_private *fep);
 void fec_ptp_stop(struct platform_device *pdev);
 void fec_ptp_start_cyclecounter(struct net_device *ndev);
 int fec_ptp_set(struct net_device *ndev, struct kernel_hwtstamp_config *config,

View File

@@ -1077,6 +1077,8 @@ fec_restart(struct net_device *ndev)
 	u32 rcntl = OPT_FRAME_SIZE | 0x04;
 	u32 ecntl = FEC_ECR_ETHEREN;

+	fec_ptp_save_state(fep);
+
 	/* Whack a reset. We should wait for this.
 	 * For i.MX6SX SOC, enet use AXI bus, we use disable MAC
 	 * instead of reset MAC itself.
@@ -1244,8 +1246,10 @@ fec_restart(struct net_device *ndev)
 	writel(ecntl, fep->hwp + FEC_ECNTRL);
 	fec_enet_active_rxring(ndev);

-	if (fep->bufdesc_ex)
+	if (fep->bufdesc_ex) {
 		fec_ptp_start_cyclecounter(ndev);
+		fec_ptp_restore_state(fep);
+	}

 	/* Enable interrupts we wish to service */
 	if (fep->link)
@@ -1336,6 +1340,8 @@ fec_stop(struct net_device *ndev)
 			netdev_err(ndev, "Graceful transmit stop did not complete!\n");
 	}

+	fec_ptp_save_state(fep);
+
 	/* Whack a reset. We should wait for this.
 	 * For i.MX6SX SOC, enet use AXI bus, we use disable MAC
 	 * instead of reset MAC itself.
@@ -1366,6 +1372,9 @@ fec_stop(struct net_device *ndev)
 		val = readl(fep->hwp + FEC_ECNTRL);
 		val |= FEC_ECR_EN1588;
 		writel(val, fep->hwp + FEC_ECNTRL);
+
+		fec_ptp_start_cyclecounter(ndev);
+		fec_ptp_restore_state(fep);
 	}
 }

View File

@@ -764,6 +764,56 @@ void fec_ptp_init(struct platform_device *pdev, int irq_idx)
 	schedule_delayed_work(&fep->time_keep, HZ);
 }

+void fec_ptp_save_state(struct fec_enet_private *fep)
+{
+	unsigned long flags;
+	u32 atime_inc_corr;
+
+	spin_lock_irqsave(&fep->tmreg_lock, flags);
+
+	fep->ptp_saved_state.pps_enable = fep->pps_enable;
+
+	fep->ptp_saved_state.ns_phc = timecounter_read(&fep->tc);
+	fep->ptp_saved_state.ns_sys = ktime_get_ns();
+
+	fep->ptp_saved_state.at_corr = readl(fep->hwp + FEC_ATIME_CORR);
+	atime_inc_corr = readl(fep->hwp + FEC_ATIME_INC) & FEC_T_INC_CORR_MASK;
+	fep->ptp_saved_state.at_inc_corr = (u8)(atime_inc_corr >> FEC_T_INC_CORR_OFFSET);
+
+	spin_unlock_irqrestore(&fep->tmreg_lock, flags);
+}
+
+/* Restore PTP functionality after a reset */
+void fec_ptp_restore_state(struct fec_enet_private *fep)
+{
+	u32 atime_inc = readl(fep->hwp + FEC_ATIME_INC) & FEC_T_INC_MASK;
+	unsigned long flags;
+	u32 counter;
+	u64 ns;
+
+	spin_lock_irqsave(&fep->tmreg_lock, flags);
+
+	/* Reset turned it off, so adjust our status flag */
+	fep->pps_enable = 0;
+
+	writel(fep->ptp_saved_state.at_corr, fep->hwp + FEC_ATIME_CORR);
+	atime_inc |= ((u32)fep->ptp_saved_state.at_inc_corr) << FEC_T_INC_CORR_OFFSET;
+	writel(atime_inc, fep->hwp + FEC_ATIME_INC);
+
+	ns = ktime_get_ns() - fep->ptp_saved_state.ns_sys + fep->ptp_saved_state.ns_phc;
+	counter = ns & fep->cc.mask;
+	writel(counter, fep->hwp + FEC_ATIME);
+	timecounter_init(&fep->tc, &fep->cc, ns);
+
+	spin_unlock_irqrestore(&fep->tmreg_lock, flags);
+
+	/* Restart PPS if needed */
+	if (fep->ptp_saved_state.pps_enable) {
+		/* Re-enable PPS */
+		fec_ptp_enable_pps(fep, 1);
+	}
+}
+
 void fec_ptp_stop(struct platform_device *pdev)
 {
 	struct net_device *ndev = platform_get_drvdata(pdev);

View File

@@ -481,7 +481,9 @@ ltq_etop_tx(struct sk_buff *skb, struct net_device *dev)
 	unsigned long flags;
 	u32 byte_offset;

-	len = skb->len < ETH_ZLEN ? ETH_ZLEN : skb->len;
+	if (skb_put_padto(skb, ETH_ZLEN))
+		return NETDEV_TX_OK;
+	len = skb->len;

 	if ((desc->ctl & (LTQ_DMA_OWN | LTQ_DMA_C)) || ch->skb[ch->dma.desc]) {
 		netdev_err(dev, "tx ring full\n");

View File

@@ -627,7 +627,7 @@ struct mlx5e_shampo_hd {
 	struct mlx5e_dma_info *info;
 	struct mlx5e_frag_page *pages;
 	u16 curr_page_index;
-	u16 hd_per_wq;
+	u32 hd_per_wq;
 	u16 hd_per_wqe;
 	unsigned long *bitmap;
 	u16 pi;

View File

@@ -23,6 +23,9 @@ struct mlx5e_tir_builder *mlx5e_tir_builder_alloc(bool modify)
 	struct mlx5e_tir_builder *builder;

 	builder = kvzalloc(sizeof(*builder), GFP_KERNEL);
+	if (!builder)
+		return NULL;
+
 	builder->modify = modify;

 	return builder;

View File

@@ -67,7 +67,6 @@ static void mlx5e_ipsec_handle_sw_limits(struct work_struct *_work)
 		return;

 	spin_lock_bh(&x->lock);
-	xfrm_state_check_expire(x);
 	if (x->km.state == XFRM_STATE_EXPIRED) {
 		sa_entry->attrs.drop = true;
 		spin_unlock_bh(&x->lock);
@@ -75,6 +74,13 @@ static void mlx5e_ipsec_handle_sw_limits(struct work_struct *_work)
 		mlx5e_accel_ipsec_fs_modify(sa_entry);
 		return;
 	}
+
+	if (x->km.state != XFRM_STATE_VALID) {
+		spin_unlock_bh(&x->lock);
+		return;
+	}
+
+	xfrm_state_check_expire(x);
 	spin_unlock_bh(&x->lock);

 	queue_delayed_work(sa_entry->ipsec->wq, &dwork->dwork,

View File

@@ -642,7 +642,6 @@ mlx5e_sq_xmit_mpwqe(struct mlx5e_txqsq *sq, struct sk_buff *skb,
 	return;

 err_unmap:
-	mlx5e_dma_unmap_wqe_err(sq, 1);
 	sq->stats->dropped++;
 	dev_kfree_skb_any(skb);
 	mlx5e_tx_flush(sq);

View File

@@ -24,6 +24,11 @@
 	pci_write_config_dword((dev)->pdev, (dev)->vsc_addr + (offset), (val))
 #define VSC_MAX_RETRIES 2048

+/* Reading VSC registers can take relatively long time.
+ * Yield the cpu every 128 registers read.
+ */
+#define VSC_GW_READ_BLOCK_COUNT 128
+
 enum {
 	VSC_CTRL_OFFSET = 0x4,
 	VSC_COUNTER_OFFSET = 0x8,
@@ -273,6 +278,7 @@ int mlx5_vsc_gw_read_block_fast(struct mlx5_core_dev *dev, u32 *data,
 {
 	unsigned int next_read_addr = 0;
 	unsigned int read_addr = 0;
+	unsigned int count = 0;

 	while (read_addr < length) {
 		if (mlx5_vsc_gw_read_fast(dev, read_addr, &next_read_addr,
@@ -280,6 +286,10 @@ int mlx5_vsc_gw_read_block_fast(struct mlx5_core_dev *dev, u32 *data,
 			return read_addr;

 		read_addr = next_read_addr;
+		if (++count == VSC_GW_READ_BLOCK_COUNT) {
+			cond_resched();
+			count = 0;
+		}
 	}
 	return length;
 }

View File

@@ -33,7 +33,7 @@ bool mlx5hws_bwc_match_params_is_complex(struct mlx5hws_context *ctx,
 	 * and let the usual match creation path handle it,
 	 * both for good and bad flows.
 	 */
-	if (ret == E2BIG) {
+	if (ret == -E2BIG) {
 		is_complex = true;
 		mlx5hws_dbg(ctx, "Matcher definer layout: need complex matcher\n");
 	} else {

View File

@@ -1845,7 +1845,7 @@ hws_definer_find_best_match_fit(struct mlx5hws_context *ctx,
 		return 0;
 	}

-	return E2BIG;
+	return -E2BIG;
 }

 static void
@@ -1931,7 +1931,7 @@ mlx5hws_definer_calc_layout(struct mlx5hws_context *ctx,
 	/* Find the match definer layout for header layout match union */
 	ret = hws_definer_find_best_match_fit(ctx, match_definer, match_hl);
 	if (ret) {
-		if (ret == E2BIG)
+		if (ret == -E2BIG)
 			mlx5hws_dbg(ctx,
 				    "Failed to create match definer from header layout - E2BIG\n");
 		else

View File

@@ -675,7 +675,7 @@ static int hws_matcher_bind_mt(struct mlx5hws_matcher *matcher)
 	if (!(matcher->flags & MLX5HWS_MATCHER_FLAGS_COLLISION)) {
 		ret = mlx5hws_definer_mt_init(ctx, matcher->mt);
 		if (ret) {
-			if (ret == E2BIG)
+			if (ret == -E2BIG)
 				mlx5hws_err(ctx, "Failed to set matcher templates with match definers\n");
 			return ret;
 		}

View File

@@ -653,6 +653,12 @@ static int hws_send_ring_create_sq(struct mlx5_core_dev *mdev, u32 pdn,
 	return err;
 }

+static void hws_send_ring_destroy_sq(struct mlx5_core_dev *mdev,
+				     struct mlx5hws_send_ring_sq *sq)
+{
+	mlx5_core_destroy_sq(mdev, sq->sqn);
+}
+
 static int hws_send_ring_set_sq_rdy(struct mlx5_core_dev *mdev, u32 sqn)
 {
 	void *in, *sqc;
@@ -696,7 +702,7 @@ static int hws_send_ring_create_sq_rdy(struct mlx5_core_dev *mdev, u32 pdn,

 	err = hws_send_ring_set_sq_rdy(mdev, sq->sqn);
 	if (err)
-		hws_send_ring_close_sq(sq);
+		hws_send_ring_destroy_sq(mdev, sq);

 	return err;
 }

View File

@@ -6,7 +6,7 @@
 if NET_VENDOR_MICROCHIP

 config FDMA
-	bool "FDMA API"
+	bool "FDMA API" if COMPILE_TEST
 	help
 	  Provides the basic FDMA functionality for multiple Microchip
 	  switchcores.

View File

@@ -45,8 +45,12 @@ void sparx5_ifh_parse(u32 *ifh, struct frame_info *info)
 	fwd = (fwd >> 5);
 	info->src_port = FIELD_GET(GENMASK(7, 1), fwd);

+	/*
+	 * Bit 270-271 are occasionally unexpectedly set by the hardware,
+	 * clear bits before extracting timestamp
+	 */
 	info->timestamp =
-		((u64)xtr_hdr[2] << 24) |
+		((u64)(xtr_hdr[2] & GENMASK(5, 0)) << 24) |
 		((u64)xtr_hdr[3] << 16) |
 		((u64)xtr_hdr[4] << 8) |
 		((u64)xtr_hdr[5] << 0);

View File

@@ -14,6 +14,7 @@
 #include <linux/slab.h>
 #include <linux/ethtool.h>
 #include <linux/io.h>
+#include <linux/iopoll.h>
 #include "stmmac.h"
 #include "stmmac_pcs.h"
 #include "dwmac4.h"
@@ -471,7 +472,7 @@ static int dwmac4_write_vlan_filter(struct net_device *dev,
				    u8 index, u32 data)
 {
 	void __iomem *ioaddr = (void __iomem *)dev->base_addr;
-	int i, timeout = 10;
+	int ret;
 	u32 val;

 	if (index >= hw->num_vlan)
@@ -487,16 +488,15 @@ static int dwmac4_write_vlan_filter(struct net_device *dev,

 	writel(val, ioaddr + GMAC_VLAN_TAG);

-	for (i = 0; i < timeout; i++) {
-		val = readl(ioaddr + GMAC_VLAN_TAG);
-		if (!(val & GMAC_VLAN_TAG_CTRL_OB))
-			return 0;
-		udelay(1);
+	ret = readl_poll_timeout(ioaddr + GMAC_VLAN_TAG, val,
+				 !(val & GMAC_VLAN_TAG_CTRL_OB),
+				 1000, 500000);
+	if (ret) {
+		netdev_err(dev, "Timeout accessing MAC_VLAN_Tag_Filter\n");
+		return -EBUSY;
 	}

-	netdev_err(dev, "Timeout accessing MAC_VLAN_Tag_Filter\n");
-	return -EBUSY;
+	return 0;
 }

 static int dwmac4_add_hw_vlan_rx_fltr(struct net_device *dev,

View File

@@ -763,7 +763,7 @@ static int am65_cpsw_nuss_common_open(struct am65_cpsw_common *common)
 	k3_udma_glue_disable_rx_chn(rx_chn->rx_chn);

 fail_rx:
-	for (i = 0; i < common->rx_ch_num_flows; i--)
+	for (i = 0; i < common->rx_ch_num_flows; i++)
 		k3_udma_glue_reset_rx_chn(rx_chn->rx_chn, i, &rx_chn->flows[i],
 					  am65_cpsw_nuss_rx_cleanup, 0);

View File

@@ -96,6 +96,7 @@ enum {
  * @features: features supported by ALE
  * @tbl_entries: number of ALE entries
  * @reg_fields: pointer to array of register field configuration
+ * @num_fields: number of fields in the reg_fields array
  * @nu_switch_ale: NU Switch ALE
  * @vlan_entry_tbl: ALE vlan entry fields description tbl
  */
@@ -104,6 +105,7 @@ struct cpsw_ale_dev_id {
 	u32 features;
 	u32 tbl_entries;
 	const struct reg_field *reg_fields;
+	int num_fields;
 	bool nu_switch_ale;
 	const struct ale_entry_fld *vlan_entry_tbl;
 };
@@ -1400,6 +1402,7 @@ static const struct cpsw_ale_dev_id cpsw_ale_id_match[] = {
 		.dev_id = "cpsw",
 		.tbl_entries = 1024,
 		.reg_fields = ale_fields_cpsw,
+		.num_fields = ARRAY_SIZE(ale_fields_cpsw),
 		.vlan_entry_tbl = vlan_entry_cpsw,
 	},
 	{
@@ -1407,12 +1410,14 @@ static const struct cpsw_ale_dev_id cpsw_ale_id_match[] = {
 		.dev_id = "66ak2h-xgbe",
 		.tbl_entries = 2048,
 		.reg_fields = ale_fields_cpsw,
+		.num_fields = ARRAY_SIZE(ale_fields_cpsw),
 		.vlan_entry_tbl = vlan_entry_cpsw,
 	},
 	{
 		.dev_id = "66ak2el",
 		.features = CPSW_ALE_F_STATUS_REG,
 		.reg_fields = ale_fields_cpsw_nu,
+		.num_fields = ARRAY_SIZE(ale_fields_cpsw_nu),
 		.nu_switch_ale = true,
 		.vlan_entry_tbl = vlan_entry_nu,
 	},
@@ -1421,6 +1426,7 @@ static const struct cpsw_ale_dev_id cpsw_ale_id_match[] = {
 		.features = CPSW_ALE_F_STATUS_REG,
 		.tbl_entries = 64,
 		.reg_fields = ale_fields_cpsw_nu,
+		.num_fields = ARRAY_SIZE(ale_fields_cpsw_nu),
 		.nu_switch_ale = true,
 		.vlan_entry_tbl = vlan_entry_nu,
 	},
@@ -1429,6 +1435,7 @@ static const struct cpsw_ale_dev_id cpsw_ale_id_match[] = {
 		.features = CPSW_ALE_F_STATUS_REG | CPSW_ALE_F_HW_AUTOAGING,
 		.tbl_entries = 64,
 		.reg_fields = ale_fields_cpsw_nu,
+		.num_fields = ARRAY_SIZE(ale_fields_cpsw_nu),
 		.nu_switch_ale = true,
 		.vlan_entry_tbl = vlan_entry_nu,
 	},
@@ -1436,12 +1443,14 @@ static const struct cpsw_ale_dev_id cpsw_ale_id_match[] = {
 		.dev_id = "j721e-cpswxg",
 		.features = CPSW_ALE_F_STATUS_REG | CPSW_ALE_F_HW_AUTOAGING,
 		.reg_fields = ale_fields_cpsw_nu,
+		.num_fields = ARRAY_SIZE(ale_fields_cpsw_nu),
 		.vlan_entry_tbl = vlan_entry_k3_cpswxg,
 	},
 	{
 		.dev_id = "am64-cpswxg",
 		.features = CPSW_ALE_F_STATUS_REG | CPSW_ALE_F_HW_AUTOAGING,
 		.reg_fields = ale_fields_cpsw_nu,
+		.num_fields = ARRAY_SIZE(ale_fields_cpsw_nu),
 		.vlan_entry_tbl = vlan_entry_k3_cpswxg,
 		.tbl_entries = 512,
 	},
@@ -1477,7 +1486,7 @@ static int cpsw_ale_regfield_init(struct cpsw_ale *ale)
 	struct regmap *regmap = ale->regmap;
 	int i;

-	for (i = 0; i < ALE_FIELDS_MAX; i++) {
+	for (i = 0; i < ale->params.num_fields; i++) {
 		ale->fields[i] = devm_regmap_field_alloc(dev, regmap,
 							 reg_fields[i]);
 		if (IS_ERR(ale->fields[i])) {
@@ -1503,6 +1512,7 @@ struct cpsw_ale *cpsw_ale_create(struct cpsw_ale_params *params)
 		params->ale_entries = ale_dev_id->tbl_entries;
 		params->nu_switch_ale = ale_dev_id->nu_switch_ale;
 		params->reg_fields = ale_dev_id->reg_fields;
+		params->num_fields = ale_dev_id->num_fields;

 	ale = devm_kzalloc(params->dev, sizeof(*ale), GFP_KERNEL);
 	if (!ale)

View File

@@ -24,6 +24,7 @@ struct cpsw_ale_params {
 	 */
 	bool nu_switch_ale;
 	const struct reg_field *reg_fields;
+	int num_fields;
 	const char *dev_id;
 	unsigned long bus_freq;
 };

View File

@@ -101,6 +101,7 @@ config IEEE802154_CA8210_DEBUGFS

 config IEEE802154_MCR20A
 	tristate "MCR20A transceiver driver"
+	select REGMAP_SPI
 	depends on IEEE802154_DRIVERS && MAC802154
 	depends on SPI
 	help

View File

@@ -1302,16 +1302,13 @@ mcr20a_probe(struct spi_device *spi)
 		irq_type = IRQF_TRIGGER_FALLING;

 	ret = devm_request_irq(&spi->dev, spi->irq, mcr20a_irq_isr,
-			       irq_type, dev_name(&spi->dev), lp);
+			       irq_type | IRQF_NO_AUTOEN, dev_name(&spi->dev), lp);
 	if (ret) {
 		dev_err(&spi->dev, "could not request_irq for mcr20a\n");
 		ret = -ENODEV;
 		goto free_dev;
 	}

-	/* disable_irq by default and wait for starting hardware */
-	disable_irq(spi->irq);
-
 	ret = ieee802154_register_hw(hw);
 	if (ret) {
 		dev_crit(&spi->dev, "ieee802154_register_hw failed\n");

View File

@@ -109,7 +109,7 @@ static void txgbe_pma_config_1g(struct dw_xpcs *xpcs)
 	txgbe_write_pma(xpcs, TXGBE_DFE_TAP_CTL0, 0);
 	val = txgbe_read_pma(xpcs, TXGBE_RX_GEN_CTL3);
 	val = u16_replace_bits(val, 0x4, TXGBE_RX_GEN_CTL3_LOS_TRSHLD0);
-	txgbe_write_pma(xpcs, TXGBE_RX_EQ_ATTN_CTL, val);
+	txgbe_write_pma(xpcs, TXGBE_RX_GEN_CTL3, val);

 	txgbe_write_pma(xpcs, TXGBE_MPLLA_CTL0, 0x20);
 	txgbe_write_pma(xpcs, TXGBE_MPLLA_CTL3, 0x46);

View File

@@ -15,7 +15,7 @@
 use kernel::net::phy::{
     self,
     reg::{Mmd, C45},
-    DeviceId, Driver,
+    Driver,
 };
 use kernel::prelude::*;
 use kernel::sizes::{SZ_16K, SZ_8K};
@@ -23,7 +23,7 @@
 kernel::module_phy_driver! {
     drivers: [PhyQT2025],
     device_table: [
-        DeviceId::new_with_driver::<PhyQT2025>(),
+        phy::DeviceId::new_with_driver::<PhyQT2025>(),
     ],
     name: "qt2025_phy",
     author: "FUJITA Tomonori <fujita.tomonori@gmail.com>",

View File

@@ -527,6 +527,9 @@ static int rtl8211f_led_hw_control_get(struct phy_device *phydev, u8 index,
 {
 	int val;

+	if (index >= RTL8211F_LED_COUNT)
+		return -EINVAL;
+
 	val = phy_read_paged(phydev, 0xd04, RTL8211F_LEDCR);
 	if (val < 0)
 		return val;

View File

@@ -2269,7 +2269,7 @@ static bool ppp_channel_bridge_input(struct channel *pch, struct sk_buff *skb)
 	if (!pchb)
 		goto out_rcu;

-	spin_lock(&pchb->downl);
+	spin_lock_bh(&pchb->downl);
 	if (!pchb->chan) {
 		/* channel got unregistered */
 		kfree_skb(skb);
@@ -2281,7 +2281,7 @@ static bool ppp_channel_bridge_input(struct channel *pch, struct sk_buff *skb)
 		kfree_skb(skb);
 outl:
-	spin_unlock(&pchb->downl);
+	spin_unlock_bh(&pchb->downl);
 out_rcu:
 	rcu_read_unlock();

View File

@@ -608,7 +608,9 @@ static void vrf_finish_direct(struct sk_buff *skb)
 	eth_zero_addr(eth->h_dest);
 	eth->h_proto = skb->protocol;

+	rcu_read_lock_bh();
 	dev_queue_xmit_nit(skb, vrf_dev);
+	rcu_read_unlock_bh();

 	skb_pull(skb, ETH_HLEN);
 }

View File

@@ -823,17 +823,17 @@ static int bam_dmux_probe(struct platform_device *pdev)
 	ret = devm_request_threaded_irq(dev, pc_ack_irq, NULL, bam_dmux_pc_ack_irq,
 					IRQF_ONESHOT, NULL, dmux);
 	if (ret)
-		return ret;
+		goto err_disable_pm;

 	ret = devm_request_threaded_irq(dev, dmux->pc_irq, NULL, bam_dmux_pc_irq,
 					IRQF_ONESHOT, NULL, dmux);
 	if (ret)
-		return ret;
+		goto err_disable_pm;

 	ret = irq_get_irqchip_state(dmux->pc_irq, IRQCHIP_STATE_LINE_LEVEL,
 				    &dmux->pc_state);
 	if (ret)
-		return ret;
+		goto err_disable_pm;

 	/* Check if remote finished initialization before us */
 	if (dmux->pc_state) {
@@ -844,6 +844,11 @@ static int bam_dmux_probe(struct platform_device *pdev)
 	}

 	return 0;
+
+err_disable_pm:
+	pm_runtime_disable(dev);
+	pm_runtime_dont_use_autosuspend(dev);
+	return ret;
 }

 static void bam_dmux_remove(struct platform_device *pdev)

View File

@@ -2138,7 +2138,7 @@ struct mlx5_ifc_cmd_hca_cap_2_bits {
 	u8	   ts_cqe_metadata_size2wqe_counter[0x5];
 	u8	   reserved_at_250[0x10];

-	u8	   reserved_at_260[0x120];
+	u8	   reserved_at_260[0x20];

 	u8	   format_select_dw_gtpu_dw_0[0x8];
 	u8	   format_select_dw_gtpu_dw_1[0x8];

View File

@@ -5029,6 +5029,24 @@ void netif_set_tso_max_segs(struct net_device *dev, unsigned int segs);
 void netif_inherit_tso_max(struct net_device *to,
			    const struct net_device *from);

+static inline unsigned int
+netif_get_gro_max_size(const struct net_device *dev, const struct sk_buff *skb)
+{
+	/* pairs with WRITE_ONCE() in netif_set_gro(_ipv4)_max_size() */
+	return skb->protocol == htons(ETH_P_IPV6) ?
+	       READ_ONCE(dev->gro_max_size) :
+	       READ_ONCE(dev->gro_ipv4_max_size);
+}
+
+static inline unsigned int
+netif_get_gso_max_size(const struct net_device *dev, const struct sk_buff *skb)
+{
+	/* pairs with WRITE_ONCE() in netif_set_gso(_ipv4)_max_size() */
+	return skb->protocol == htons(ETH_P_IPV6) ?
+	       READ_ONCE(dev->gso_max_size) :
+	       READ_ONCE(dev->gso_ipv4_max_size);
+}
+
 static inline bool netif_is_macsec(const struct net_device *dev)
 {
 	return dev->priv_flags & IFF_MACSEC;


@@ -103,8 +103,10 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
 		if (!skb_partial_csum_set(skb, start, off))
 			return -EINVAL;
 
+		if (skb_transport_offset(skb) < nh_min_len)
+			return -EINVAL;
+
-		nh_min_len = max_t(u32, nh_min_len, skb_transport_offset(skb));
+		nh_min_len = skb_transport_offset(skb);
 		p_off = nh_min_len + thlen;
 		if (!pskb_may_pull(skb, p_off))
 			return -EINVAL;


@@ -1694,7 +1694,7 @@ enum nft_flowtable_flags {
  *
  * @NFTA_FLOWTABLE_TABLE: name of the table containing the expression (NLA_STRING)
  * @NFTA_FLOWTABLE_NAME: name of this flow table (NLA_STRING)
- * @NFTA_FLOWTABLE_HOOK: netfilter hook configuration(NLA_U32)
+ * @NFTA_FLOWTABLE_HOOK: netfilter hook configuration (NLA_NESTED)
  * @NFTA_FLOWTABLE_USE: number of references to this flow table (NLA_U32)
  * @NFTA_FLOWTABLE_HANDLE: object handle (NLA_U64)
  * @NFTA_FLOWTABLE_FLAGS: flags (NLA_U32)


@@ -3782,6 +3782,8 @@ static void hci_acldata_packet(struct hci_dev *hdev, struct sk_buff *skb)
 
 	hci_dev_lock(hdev);
 	conn = hci_conn_hash_lookup_handle(hdev, handle);
+	if (conn && hci_dev_test_flag(hdev, HCI_MGMT))
+		mgmt_device_connected(hdev, conn, NULL, 0);
 	hci_dev_unlock(hdev);
 
 	if (conn) {


@@ -3706,7 +3706,7 @@ static void hci_remote_features_evt(struct hci_dev *hdev, void *data,
 		goto unlock;
 	}
 
-	if (!ev->status && !test_bit(HCI_CONN_MGMT_CONNECTED, &conn->flags)) {
+	if (!ev->status) {
 		struct hci_cp_remote_name_req cp;
 		memset(&cp, 0, sizeof(cp));
 		bacpy(&cp.bdaddr, &conn->dst);
@@ -5324,19 +5324,16 @@ static void hci_user_confirm_request_evt(struct hci_dev *hdev, void *data,
 		goto unlock;
 	}
 
-	/* If no side requires MITM protection; auto-accept */
+	/* If no side requires MITM protection; use JUST_CFM method */
 	if ((!loc_mitm || conn->remote_cap == HCI_IO_NO_INPUT_OUTPUT) &&
 	    (!rem_mitm || conn->io_capability == HCI_IO_NO_INPUT_OUTPUT)) {
 
-		/* If we're not the initiators request authorization to
-		 * proceed from user space (mgmt_user_confirm with
-		 * confirm_hint set to 1). The exception is if neither
-		 * side had MITM or if the local IO capability is
-		 * NoInputNoOutput, in which case we do auto-accept
+		/* If we're not the initiator of request authorization and the
+		 * local IO capability is not NoInputNoOutput, use JUST_WORKS
+		 * method (mgmt_user_confirm with confirm_hint set to 1).
 		 */
 		if (!test_bit(HCI_CONN_AUTH_PEND, &conn->flags) &&
-		    conn->io_capability != HCI_IO_NO_INPUT_OUTPUT &&
-		    (loc_mitm || rem_mitm)) {
+		    conn->io_capability != HCI_IO_NO_INPUT_OUTPUT) {
 			bt_dev_dbg(hdev, "Confirming auto-accept as acceptor");
 			confirm_hint = 1;
 			goto confirm;


@@ -4066,17 +4066,9 @@ static void l2cap_connect(struct l2cap_conn *conn, struct l2cap_cmd_hdr *cmd,
 static int l2cap_connect_req(struct l2cap_conn *conn,
 			     struct l2cap_cmd_hdr *cmd, u16 cmd_len, u8 *data)
 {
-	struct hci_dev *hdev = conn->hcon->hdev;
-	struct hci_conn *hcon = conn->hcon;
-
 	if (cmd_len < sizeof(struct l2cap_conn_req))
 		return -EPROTO;
 
-	hci_dev_lock(hdev);
-	if (hci_dev_test_flag(hdev, HCI_MGMT))
-		mgmt_device_connected(hdev, hcon, NULL, 0);
-	hci_dev_unlock(hdev);
-
 	l2cap_connect(conn, cmd, data, L2CAP_CONN_RSP);
 	return 0;
 }


@@ -1453,10 +1453,15 @@ static void cmd_status_rsp(struct mgmt_pending_cmd *cmd, void *data)
 
 static void cmd_complete_rsp(struct mgmt_pending_cmd *cmd, void *data)
 {
-	if (cmd->cmd_complete) {
-		u8 *status = data;
+	struct cmd_lookup *match = data;
+
+	/* dequeue cmd_sync entries using cmd as data as that is about to be
+	 * removed/freed.
+	 */
+	hci_cmd_sync_dequeue(match->hdev, NULL, cmd, NULL);
 
-		cmd->cmd_complete(cmd, *status);
+	if (cmd->cmd_complete) {
+		cmd->cmd_complete(cmd, match->mgmt_status);
 		mgmt_pending_remove(cmd);
 
 		return;
@@ -9394,12 +9399,12 @@ void mgmt_index_added(struct hci_dev *hdev)
 void mgmt_index_removed(struct hci_dev *hdev)
 {
 	struct mgmt_ev_ext_index ev;
-	u8 status = MGMT_STATUS_INVALID_INDEX;
+	struct cmd_lookup match = { NULL, hdev, MGMT_STATUS_INVALID_INDEX };
 
 	if (test_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks))
 		return;
 
-	mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &status);
+	mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &match);
 
 	if (hci_dev_test_flag(hdev, HCI_UNCONFIGURED)) {
 		mgmt_index_event(MGMT_EV_UNCONF_INDEX_REMOVED, hdev, NULL, 0,
@@ -9450,7 +9455,7 @@ void mgmt_power_on(struct hci_dev *hdev, int err)
 void __mgmt_power_off(struct hci_dev *hdev)
 {
 	struct cmd_lookup match = { NULL, hdev };
-	u8 status, zero_cod[] = { 0, 0, 0 };
+	u8 zero_cod[] = { 0, 0, 0 };
 
 	mgmt_pending_foreach(MGMT_OP_SET_POWERED, hdev, settings_rsp, &match);
@@ -9462,11 +9467,11 @@ void __mgmt_power_off(struct hci_dev *hdev)
 	 * status responses.
 	 */
 	if (hci_dev_test_flag(hdev, HCI_UNREGISTER))
-		status = MGMT_STATUS_INVALID_INDEX;
+		match.mgmt_status = MGMT_STATUS_INVALID_INDEX;
 	else
-		status = MGMT_STATUS_NOT_POWERED;
+		match.mgmt_status = MGMT_STATUS_NOT_POWERED;
 
-	mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &status);
+	mgmt_pending_foreach(0, hdev, cmd_complete_rsp, &match);
 
 	if (memcmp(hdev->dev_class, zero_cod, sizeof(zero_cod)) != 0) {
 		mgmt_limited_event(MGMT_EV_CLASS_OF_DEV_CHANGED, hdev,


@@ -1674,7 +1674,7 @@ int br_mdb_get(struct net_device *dev, struct nlattr *tb[], u32 portid, u32 seq,
 	spin_lock_bh(&br->multicast_lock);
 
 	mp = br_mdb_ip_get(br, &group);
-	if (!mp) {
+	if (!mp || (!mp->ports && !mp->host_joined)) {
 		NL_SET_ERR_MSG_MOD(extack, "MDB entry not found");
 		err = -ENOENT;
 		goto unlock;


@@ -3512,7 +3512,7 @@ static netdev_features_t gso_features_check(const struct sk_buff *skb,
 	if (gso_segs > READ_ONCE(dev->gso_max_segs))
 		return features & ~NETIF_F_GSO_MASK;
 
-	if (unlikely(skb->len >= READ_ONCE(dev->gso_max_size)))
+	if (unlikely(skb->len >= netif_get_gso_max_size(dev, skb)))
 		return features & ~NETIF_F_GSO_MASK;
 
 	if (!skb_shinfo(skb)->gso_type) {
@@ -3758,7 +3758,7 @@ static void qdisc_pkt_len_init(struct sk_buff *skb)
 						sizeof(_tcphdr), &_tcphdr);
 			if (likely(th))
 				hdr_len += __tcp_hdrlen(th);
-		} else {
+		} else if (shinfo->gso_type & SKB_GSO_UDP_L4) {
 			struct udphdr _udphdr;
 
 			if (skb_header_pointer(skb, hdr_len,
@@ -3766,10 +3766,14 @@ static void qdisc_pkt_len_init(struct sk_buff *skb)
 				hdr_len += sizeof(struct udphdr);
 		}
 
-		if (shinfo->gso_type & SKB_GSO_DODGY)
-			gso_segs = DIV_ROUND_UP(skb->len - hdr_len,
-						shinfo->gso_size);
+		if (unlikely(shinfo->gso_type & SKB_GSO_DODGY)) {
+			int payload = skb->len - hdr_len;
+
+			/* Malicious packet. */
+			if (payload <= 0)
+				return;
+			gso_segs = DIV_ROUND_UP(payload, shinfo->gso_size);
+		}
 
 		qdisc_skb_cb(skb)->pkt_len += (gso_segs - 1) * hdr_len;
 	}
 }


@@ -98,7 +98,6 @@ int skb_gro_receive(struct sk_buff *p, struct sk_buff *skb)
 	unsigned int headlen = skb_headlen(skb);
 	unsigned int len = skb_gro_len(skb);
 	unsigned int delta_truesize;
-	unsigned int gro_max_size;
 	unsigned int new_truesize;
 	struct sk_buff *lp;
 	int segs;
@@ -112,12 +111,8 @@ int skb_gro_receive(struct sk_buff *p, struct sk_buff *skb)
 	if (p->pp_recycle != skb->pp_recycle)
 		return -ETOOMANYREFS;
 
-	/* pairs with WRITE_ONCE() in netif_set_gro(_ipv4)_max_size() */
-	gro_max_size = p->protocol == htons(ETH_P_IPV6) ?
-			READ_ONCE(p->dev->gro_max_size) :
-			READ_ONCE(p->dev->gro_ipv4_max_size);
-
-	if (unlikely(p->len + len >= gro_max_size || NAPI_GRO_CB(skb)->flush))
+	if (unlikely(p->len + len >= netif_get_gro_max_size(p->dev, p) ||
+		     NAPI_GRO_CB(skb)->flush))
 		return -E2BIG;
 
 	if (unlikely(p->len + len >= GRO_LEGACY_MAX_SIZE)) {


@@ -1577,6 +1577,7 @@ EXPORT_SYMBOL_GPL(dsa_unregister_switch);
 void dsa_switch_shutdown(struct dsa_switch *ds)
 {
 	struct net_device *conduit, *user_dev;
+	LIST_HEAD(close_list);
 	struct dsa_port *dp;
 
 	mutex_lock(&dsa2_mutex);
@@ -1586,10 +1587,16 @@ void dsa_switch_shutdown(struct dsa_switch *ds)
 
 	rtnl_lock();
 
+	dsa_switch_for_each_cpu_port(dp, ds)
+		list_add(&dp->conduit->close_list, &close_list);
+
+	dev_close_many(&close_list, true);
+
 	dsa_switch_for_each_user_port(dp, ds) {
 		conduit = dsa_port_to_conduit(dp);
 		user_dev = dp->user;
 
+		netif_device_detach(user_dev);
 		netdev_upper_dev_unlink(conduit, user_dev);
 	}


@@ -662,11 +662,11 @@ static netdev_tx_t ipgre_xmit(struct sk_buff *skb,
 		if (skb_cow_head(skb, 0))
 			goto free_skb;
 
-		tnl_params = (const struct iphdr *)skb->data;
-
-		if (!pskb_network_may_pull(skb, pull_len))
+		if (!pskb_may_pull(skb, pull_len))
 			goto free_skb;
 
+		tnl_params = (const struct iphdr *)skb->data;
+
 		/* ip_tunnel_xmit() needs skb->data pointing to gre header. */
 		skb_pull(skb, pull_len);
 		skb_reset_mac_header(skb);


@@ -53,8 +53,9 @@ void nf_dup_ipv4(struct net *net, struct sk_buff *skb, unsigned int hooknum,
 {
 	struct iphdr *iph;
 
+	local_bh_disable();
 	if (this_cpu_read(nf_skb_duplicated))
-		return;
+		goto out;
 	/*
 	 * Copy the skb, and route the copy. Will later return %XT_CONTINUE for
 	 * the original skb, which should continue on its way as if nothing has
@@ -62,7 +63,7 @@ void nf_dup_ipv4(struct net *net, struct sk_buff *skb, unsigned int hooknum,
 	 */
 	skb = pskb_copy(skb, GFP_ATOMIC);
 	if (skb == NULL)
-		return;
+		goto out;
 
 #if IS_ENABLED(CONFIG_NF_CONNTRACK)
 	/* Avoid counting cloned packets towards the original connection. */
@@ -91,6 +92,8 @@ void nf_dup_ipv4(struct net *net, struct sk_buff *skb, unsigned int hooknum,
 	} else {
 		kfree_skb(skb);
 	}
+out:
+	local_bh_enable();
 }
 EXPORT_SYMBOL_GPL(nf_dup_ipv4);


@@ -101,9 +101,15 @@ static struct sk_buff *tcp4_gso_segment(struct sk_buff *skb,
 	if (!pskb_may_pull(skb, sizeof(struct tcphdr)))
 		return ERR_PTR(-EINVAL);
 
-	if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST)
-		return __tcp4_gso_segment_list(skb, features);
+	if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST) {
+		struct tcphdr *th = tcp_hdr(skb);
+
+		if (skb_pagelen(skb) - th->doff * 4 == skb_shinfo(skb)->gso_size)
+			return __tcp4_gso_segment_list(skb, features);
+
+		skb->ip_summed = CHECKSUM_NONE;
+	}
 
 	if (unlikely(skb->ip_summed != CHECKSUM_PARTIAL)) {
 		const struct iphdr *iph = ip_hdr(skb);
 		struct tcphdr *th = tcp_hdr(skb);


@@ -296,9 +296,27 @@ struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb,
 		return NULL;
 	}
 
-	if (skb_shinfo(gso_skb)->gso_type & SKB_GSO_FRAGLIST)
-		return __udp_gso_segment_list(gso_skb, features, is_ipv6);
+	if (skb_shinfo(gso_skb)->gso_type & SKB_GSO_FRAGLIST) {
+		/* Detect modified geometry and pass those to skb_segment. */
+		if (skb_pagelen(gso_skb) - sizeof(*uh) == skb_shinfo(gso_skb)->gso_size)
+			return __udp_gso_segment_list(gso_skb, features, is_ipv6);
+
+		/* Setup csum, as fraglist skips this in udp4_gro_receive. */
+		gso_skb->csum_start = skb_transport_header(gso_skb) - gso_skb->head;
+		gso_skb->csum_offset = offsetof(struct udphdr, check);
+		gso_skb->ip_summed = CHECKSUM_PARTIAL;
+
+		uh = udp_hdr(gso_skb);
+		if (is_ipv6)
+			uh->check = ~udp_v6_check(gso_skb->len,
+						  &ipv6_hdr(gso_skb)->saddr,
+						  &ipv6_hdr(gso_skb)->daddr, 0);
+		else
+			uh->check = ~udp_v4_check(gso_skb->len,
+						  ip_hdr(gso_skb)->saddr,
+						  ip_hdr(gso_skb)->daddr, 0);
+	}
 
 	skb_pull(gso_skb, sizeof(*uh));
 
 	/* clear destructor to avoid skb_segment assigning it to tail */


@@ -47,11 +47,12 @@ static bool nf_dup_ipv6_route(struct net *net, struct sk_buff *skb,
 void nf_dup_ipv6(struct net *net, struct sk_buff *skb, unsigned int hooknum,
 		 const struct in6_addr *gw, int oif)
 {
+	local_bh_disable();
 	if (this_cpu_read(nf_skb_duplicated))
-		return;
+		goto out;
 	skb = pskb_copy(skb, GFP_ATOMIC);
 	if (skb == NULL)
-		return;
+		goto out;
 
 #if IS_ENABLED(CONFIG_NF_CONNTRACK)
 	nf_reset_ct(skb);
@@ -69,6 +70,8 @@ void nf_dup_ipv6(struct net *net, struct sk_buff *skb, unsigned int hooknum,
 	} else {
 		kfree_skb(skb);
 	}
+out:
+	local_bh_enable();
 }
 EXPORT_SYMBOL_GPL(nf_dup_ipv6);


@@ -159,9 +159,15 @@ static struct sk_buff *tcp6_gso_segment(struct sk_buff *skb,
 	if (!pskb_may_pull(skb, sizeof(*th)))
 		return ERR_PTR(-EINVAL);
 
-	if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST)
-		return __tcp6_gso_segment_list(skb, features);
+	if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST) {
+		struct tcphdr *th = tcp_hdr(skb);
+
+		if (skb_pagelen(skb) - th->doff * 4 == skb_shinfo(skb)->gso_size)
+			return __tcp6_gso_segment_list(skb, features);
+
+		skb->ip_summed = CHECKSUM_NONE;
+	}
 
 	if (unlikely(skb->ip_summed != CHECKSUM_PARTIAL)) {
 		const struct ipv6hdr *ipv6h = ipv6_hdr(skb);
 		struct tcphdr *th = tcp_hdr(skb);


@@ -176,6 +176,7 @@ void mac802154_scan_worker(struct work_struct *work)
 	struct ieee802154_local *local =
 		container_of(work, struct ieee802154_local, scan_work.work);
 	struct cfg802154_scan_request *scan_req;
+	enum nl802154_scan_types scan_req_type;
 	struct ieee802154_sub_if_data *sdata;
 	unsigned int scan_duration = 0;
 	struct wpan_phy *wpan_phy;
@@ -209,6 +210,7 @@ void mac802154_scan_worker(struct work_struct *work)
 	}
 
 	wpan_phy = scan_req->wpan_phy;
+	scan_req_type = scan_req->type;
 	scan_req_duration = scan_req->duration;
 
 	/* Look for the next valid chan */
@@ -246,7 +248,7 @@ void mac802154_scan_worker(struct work_struct *work)
 		goto end_scan;
 	}
 
-	if (scan_req->type == NL802154_SCAN_ACTIVE) {
+	if (scan_req_type == NL802154_SCAN_ACTIVE) {
 		ret = mac802154_transmit_beacon_req(local, sdata);
 		if (ret)
 			dev_err(&sdata->dev->dev,


@@ -1954,6 +1954,8 @@ void ncsi_unregister_dev(struct ncsi_dev *nd)
 	list_del_rcu(&ndp->node);
 	spin_unlock_irqrestore(&ncsi_dev_lock, flags);
 
+	disable_work_sync(&ndp->work);
+
 	kfree(ndp);
 }
 EXPORT_SYMBOL_GPL(ncsi_unregister_dev);


@@ -8557,8 +8557,10 @@ static int sctp_listen_start(struct sock *sk, int backlog)
 	 */
 	inet_sk_set_state(sk, SCTP_SS_LISTENING);
 	if (!ep->base.bind_addr.port) {
-		if (sctp_autobind(sk))
+		if (sctp_autobind(sk)) {
+			inet_sk_set_state(sk, SCTP_SS_CLOSED);
 			return -EAGAIN;
+		}
 	} else {
 		if (sctp_get_port(sk, inet_sk(sk)->inet_num)) {
 			inet_sk_set_state(sk, SCTP_SS_CLOSED);


@@ -207,6 +207,7 @@ static int conntrack_data_generate_v6(struct mnl_socket *sock,
 static int count_entries(const struct nlmsghdr *nlh, void *data)
 {
 	reply_counter++;
+	return MNL_CB_OK;
 }
 
 static int conntracK_count_zone(struct mnl_socket *sock, uint16_t zone)


@@ -48,12 +48,31 @@ logread_pid=$!
 trap 'kill $logread_pid; rm -f $logfile $rulefile' EXIT
 exec 3<"$logfile"
 
+lsplit='s/^\(.*\) entries=\([^ ]*\) \(.*\)$/pfx="\1"\nval="\2"\nsfx="\3"/'
+
+summarize_logs() {
+	sum=0
+	while read line; do
+		eval $(sed "$lsplit" <<< "$line")
+		[[ $sum -gt 0 ]] && {
+			[[ "$pfx $sfx" == "$tpfx $tsfx" ]] && {
+				let "sum += val"
+				continue
+			}
+			echo "$tpfx entries=$sum $tsfx"
+		}
+		tpfx="$pfx"
+		tsfx="$sfx"
+		sum=$val
+	done
+	echo "$tpfx entries=$sum $tsfx"
+}
+
 do_test() { # (cmd, log)
 	echo -n "testing for cmd: $1 ... "
 	cat <&3 >/dev/null
 
 	$1 >/dev/null || exit 1
 	sleep 0.1
-	res=$(diff -a -u <(echo "$2") - <&3)
+	res=$(diff -a -u <(echo "$2") <(summarize_logs <&3))
 	[ $? -eq 0 ] && { echo "OK"; return; }
 	echo "FAIL"
 	grep -v '^\(---\|+++\|@@\)' <<< "$res"
@@ -152,31 +171,17 @@ do_test 'nft reset rules t1 c2' \
 'table=t1 family=2 entries=3 op=nft_reset_rule'
 
 do_test 'nft reset rules table t1' \
-'table=t1 family=2 entries=3 op=nft_reset_rule
-table=t1 family=2 entries=3 op=nft_reset_rule
-table=t1 family=2 entries=3 op=nft_reset_rule'
+'table=t1 family=2 entries=9 op=nft_reset_rule'
 
 do_test 'nft reset rules t2 c3' \
-'table=t2 family=2 entries=189 op=nft_reset_rule
-table=t2 family=2 entries=188 op=nft_reset_rule
-table=t2 family=2 entries=126 op=nft_reset_rule'
+'table=t2 family=2 entries=503 op=nft_reset_rule'
 
 do_test 'nft reset rules t2' \
-'table=t2 family=2 entries=3 op=nft_reset_rule
-table=t2 family=2 entries=3 op=nft_reset_rule
-table=t2 family=2 entries=186 op=nft_reset_rule
-table=t2 family=2 entries=188 op=nft_reset_rule
-table=t2 family=2 entries=129 op=nft_reset_rule'
+'table=t2 family=2 entries=509 op=nft_reset_rule'
 
 do_test 'nft reset rules' \
-'table=t1 family=2 entries=3 op=nft_reset_rule
-table=t1 family=2 entries=3 op=nft_reset_rule
-table=t1 family=2 entries=3 op=nft_reset_rule
-table=t2 family=2 entries=3 op=nft_reset_rule
-table=t2 family=2 entries=3 op=nft_reset_rule
-table=t2 family=2 entries=180 op=nft_reset_rule
-table=t2 family=2 entries=188 op=nft_reset_rule
-table=t2 family=2 entries=135 op=nft_reset_rule'
+'table=t1 family=2 entries=9 op=nft_reset_rule
+table=t2 family=2 entries=509 op=nft_reset_rule'
 
 # resetting sets and elements
 
@@ -200,13 +205,11 @@ do_test 'nft reset counters t1' \
 'table=t1 family=2 entries=1 op=nft_reset_obj'
 
 do_test 'nft reset counters t2' \
-'table=t2 family=2 entries=342 op=nft_reset_obj
-table=t2 family=2 entries=158 op=nft_reset_obj'
+'table=t2 family=2 entries=500 op=nft_reset_obj'
 
 do_test 'nft reset counters' \
 'table=t1 family=2 entries=1 op=nft_reset_obj
-table=t2 family=2 entries=341 op=nft_reset_obj
-table=t2 family=2 entries=159 op=nft_reset_obj'
+table=t2 family=2 entries=500 op=nft_reset_obj'
 
 # resetting quotas
 
@@ -217,13 +220,11 @@ do_test 'nft reset quotas t1' \
 'table=t1 family=2 entries=1 op=nft_reset_obj'
 
 do_test 'nft reset quotas t2' \
-'table=t2 family=2 entries=315 op=nft_reset_obj
-table=t2 family=2 entries=185 op=nft_reset_obj'
+'table=t2 family=2 entries=500 op=nft_reset_obj'
 
 do_test 'nft reset quotas' \
 'table=t1 family=2 entries=1 op=nft_reset_obj
-table=t2 family=2 entries=314 op=nft_reset_obj
-table=t2 family=2 entries=186 op=nft_reset_obj'
+'table=t1 family=2 entries=1 op=nft_reset_obj
+table=t2 family=2 entries=500 op=nft_reset_obj'
 
 # deleting rules


@@ -4,9 +4,10 @@ all:
 	@echo mk_build_dir="$(shell pwd)" > include.sh
 
 TEST_PROGS := run.sh \
-	include.sh \
 	test.py
 
+TEST_FILES := include.sh
+
 EXTRA_CLEAN := /tmp/rds_logs
 
 include ../../lib.mk

mode change: tools/testing/selftests/net/rds/test.py (normal file → executable file)