Networking fixes for 6.6-rc3, including fixes from netfilter and bpf
Merge tag 'net-6.6-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Paolo Abeni:
 "Including fixes from netfilter and bpf.

  Current release - regressions:

   - bpf: adjust size_index according to the value of KMALLOC_MIN_SIZE

   - netfilter: fix entries val in rule reset audit log

   - eth: stmmac: fix incorrect rxq|txq_stats reference

  Previous releases - regressions:

   - ipv4: fix null-deref in ipv4_link_failure

   - netfilter:
      - fix several GC related issues
      - fix race between IPSET_CMD_CREATE and IPSET_CMD_SWAP

   - eth: team: fix null-ptr-deref when team device type is changed

   - eth: i40e: fix VF VLAN offloading when port VLAN is configured

   - eth: ionic: fix 16bit math issue when PAGE_SIZE >= 64KB

  Previous releases - always broken:

   - core: fix ETH_P_1588 flow dissector

   - mptcp: fix several connection hang-up conditions

   - bpf:
      - avoid deadlock when using queue and stack maps from NMI
      - add override check to kprobe multi link attach

   - hsr: properly parse HSRv1 supervisor frames

   - eth: igc: fix infinite initialization loop with early XDP redirect

   - eth: octeon_ep: fix tx dma unmap len values in SG

   - eth: hns3: fix GRE checksum offload issue"

* tag 'net-6.6-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (87 commits)
  sfc: handle error pointers returned by rhashtable_lookup_get_insert_fast()
  igc: Expose tx-usecs coalesce setting to user
  octeontx2-pf: Do xdp_do_flush() after redirects.
  bnxt_en: Flush XDP for bnxt_poll_nitroa0()'s NAPI
  net: ena: Flush XDP packets on error.
  net/handshake: Fix memory leak in __sock_create() and sock_alloc_file()
  net: hinic: Fix warning-hinic_set_vlan_fliter() warn: variable dereferenced before check 'hwdev'
  netfilter: ipset: Fix race between IPSET_CMD_CREATE and IPSET_CMD_SWAP
  netfilter: nf_tables: fix memleak when more than 255 elements expired
  netfilter: nf_tables: disable toggling dormant table state more than once
  vxlan: Add missing entries to vxlan_get_size()
  net: rds: Fix possible NULL-pointer dereference
  team: fix null-ptr-deref when team device type is changed
  net: bridge: use DEV_STATS_INC()
  net: hns3: add 5ms delay before clear firmware reset irq source
  net: hns3: fix fail to delete tc flower rules during reset issue
  net: hns3: only enable unicast promisc when mac table full
  net: hns3: fix GRE checksum offload issue
  net: hns3: add cmdq check for vf periodic service task
  net: stmmac: fix incorrect rxq|txq_stats reference
  ...
commit 27bbf45eae
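A note on the XDP flush fixes grouped in this pull (ena, bnxt_en, octeontx2-pf): a driver whose RX path returns XDP_REDIRECT must call xdp_do_flush() before its NAPI poll returns, on error paths as well as on success, or redirected frames can sit in the per-CPU bulk queues indefinitely. Below is a minimal sketch of the pattern the three fixes converge on; xdp_do_flush() and the NAPI calls are real kernel APIs, while every "mydrv" name is illustrative only.

static int mydrv_napi_poll(struct napi_struct *napi, int budget)
{
        bool need_xdp_flush = false;
        int work_done = 0;

        while (work_done < budget) {
                /* hypothetical helper: processes one RX completion and
                 * sets need_xdp_flush when it performs an XDP_REDIRECT
                 */
                int rc = mydrv_rx_one(napi, &need_xdp_flush);

                if (rc < 0)
                        goto error;
                if (!rc)
                        break;
                work_done++;
        }

        if (need_xdp_flush)
                xdp_do_flush();         /* drain bpf_redirect_map() queues */

        if (work_done < budget)
                napi_complete_done(napi, work_done);
        return work_done;

error:
        /* the bug class fixed in this pull: this flush used to be missing */
        if (need_xdp_flush)
                xdp_do_flush();
        return work_done;
}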
@@ -7,9 +7,9 @@ AX.25
 To use the amateur radio protocols within Linux you will need to get a
 suitable copy of the AX.25 Utilities. More detailed information about
 AX.25, NET/ROM and ROSE, associated programs and utilities can be
-found on http://www.linux-ax25.org.
+found on https://linux-ax25.in-berlin.de.
 
-There is an active mailing list for discussing Linux amateur radio matters
+There is a mailing list for discussing Linux amateur radio matters
 called linux-hams@vger.kernel.org. To subscribe to it, send a message to
 majordomo@vger.kernel.org with the words "subscribe linux-hams" in the body
 of the message, the subject field is ignored. You don't need to be
@@ -3344,7 +3344,7 @@ AX.25 NETWORK LAYER
 M:      Ralf Baechle <ralf@linux-mips.org>
 L:      linux-hams@vger.kernel.org
 S:      Maintained
-W:      http://www.linux-ax25.org/
+W:      https://linux-ax25.in-berlin.de
 F:      include/net/ax25.h
 F:      include/uapi/linux/ax25.h
 F:      net/ax25/
@@ -14756,7 +14756,7 @@ NETROM NETWORK LAYER
 M:      Ralf Baechle <ralf@linux-mips.org>
 L:      linux-hams@vger.kernel.org
 S:      Maintained
-W:      http://www.linux-ax25.org/
+W:      https://linux-ax25.in-berlin.de
 F:      include/net/netrom.h
 F:      include/uapi/linux/netrom.h
 F:      net/netrom/
@@ -18607,7 +18607,7 @@ ROSE NETWORK LAYER
 M:      Ralf Baechle <ralf@linux-mips.org>
 L:      linux-hams@vger.kernel.org
 S:      Maintained
-W:      http://www.linux-ax25.org/
+W:      https://linux-ax25.in-berlin.de
 F:      include/net/rose.h
 F:      include/uapi/linux/rose.h
 F:      net/rose/
@@ -1833,6 +1833,9 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi,
         return work_done;
 
 error:
+        if (xdp_flags & ENA_XDP_REDIRECT)
+                xdp_do_flush();
+
         adapter = netdev_priv(rx_ring->netdev);
 
         if (rc == -ENOSPC) {
@@ -2614,6 +2614,7 @@ static int bnxt_poll_nitroa0(struct napi_struct *napi, int budget)
         struct rx_cmp_ext *rxcmp1;
         u32 cp_cons, tmp_raw_cons;
         u32 raw_cons = cpr->cp_raw_cons;
+        bool flush_xdp = false;
         u32 rx_pkts = 0;
         u8 event = 0;
 
@@ -2648,6 +2649,8 @@ static int bnxt_poll_nitroa0(struct napi_struct *napi, int budget)
                         rx_pkts++;
                 else if (rc == -EBUSY) /* partial completion */
                         break;
+                if (event & BNXT_REDIRECT_EVENT)
+                        flush_xdp = true;
         } else if (unlikely(TX_CMP_TYPE(txcmp) ==
                             CMPL_BASE_TYPE_HWRM_DONE)) {
                 bnxt_hwrm_handler(bp, txcmp);
@@ -2667,6 +2670,8 @@ static int bnxt_poll_nitroa0(struct napi_struct *napi, int budget)
 
         if (event & BNXT_AGG_EVENT)
                 bnxt_db_write(bp, &rxr->rx_agg_db, rxr->rx_agg_prod);
+        if (flush_xdp)
+                xdp_do_flush();
 
         if (!bnxt_has_work(bp, cpr) && rx_pkts < budget) {
                 napi_complete_done(napi, rx_pkts);
@@ -300,10 +300,8 @@ static void tsnep_ethtool_get_channels(struct net_device *netdev,
 {
         struct tsnep_adapter *adapter = netdev_priv(netdev);
 
-        ch->max_rx = adapter->num_rx_queues;
-        ch->max_tx = adapter->num_tx_queues;
-        ch->rx_count = adapter->num_rx_queues;
-        ch->tx_count = adapter->num_tx_queues;
+        ch->max_combined = adapter->num_queues;
+        ch->combined_count = adapter->num_queues;
 }
 
 static int tsnep_ethtool_get_ts_info(struct net_device *netdev,
@@ -87,8 +87,11 @@ static irqreturn_t tsnep_irq(int irq, void *arg)
 
         /* handle TX/RX queue 0 interrupt */
         if ((active & adapter->queue[0].irq_mask) != 0) {
-                tsnep_disable_irq(adapter, adapter->queue[0].irq_mask);
-                napi_schedule(&adapter->queue[0].napi);
+                if (napi_schedule_prep(&adapter->queue[0].napi)) {
+                        tsnep_disable_irq(adapter, adapter->queue[0].irq_mask);
+                        /* schedule after masking to avoid races */
+                        __napi_schedule(&adapter->queue[0].napi);
+                }
         }
 
         return IRQ_HANDLED;
@@ -99,8 +102,11 @@ static irqreturn_t tsnep_irq_txrx(int irq, void *arg)
         struct tsnep_queue *queue = arg;
 
         /* handle TX/RX queue interrupt */
-        tsnep_disable_irq(queue->adapter, queue->irq_mask);
-        napi_schedule(&queue->napi);
+        if (napi_schedule_prep(&queue->napi)) {
+                tsnep_disable_irq(queue->adapter, queue->irq_mask);
+                /* schedule after masking to avoid races */
+                __napi_schedule(&queue->napi);
+        }
 
         return IRQ_HANDLED;
 }
@@ -1728,6 +1734,10 @@ static int tsnep_poll(struct napi_struct *napi, int budget)
         if (queue->tx)
                 complete = tsnep_tx_poll(queue->tx, budget);
 
+        /* handle case where we are called by netpoll with a budget of 0 */
+        if (unlikely(budget <= 0))
+                return budget;
+
         if (queue->rx) {
                 done = queue->rx->xsk_pool ?
                        tsnep_rx_poll_zc(queue->rx, napi, budget) :
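The two tsnep interrupt hunks above fix the same ordering bug: masking the device interrupt and then calling plain napi_schedule() can lose the schedule when the NAPI instance is already owned by another context, leaving the queue stalled with interrupts masked. Claiming NAPI first with napi_schedule_prep() guarantees that the context that wins the claim is also the one that masks and schedules. A hedged sketch of the generic shape, with real NAPI APIs and illustrative driver names:

static irqreturn_t mydrv_irq(int irq, void *arg)
{
        struct mydrv_queue *queue = arg;        /* hypothetical type */

        if (napi_schedule_prep(&queue->napi)) {
                mydrv_disable_irq(queue);       /* hypothetical helper */
                /* schedule after masking to avoid races */
                __napi_schedule(&queue->napi);
        }
        return IRQ_HANDLED;
}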
@@ -3353,6 +3353,15 @@ static void hns3_set_default_feature(struct net_device *netdev)
                                   NETIF_F_HW_TC);
 
         netdev->hw_enc_features |= netdev->vlan_features | NETIF_F_TSO_MANGLEID;
+
+        /* The device_version V3 hardware can't offload the checksum for IP in
+         * GRE packets, but can do it for NvGRE. So default to disable the
+         * checksum and GSO offload for GRE.
+         */
+        if (ae_dev->dev_version > HNAE3_DEVICE_VERSION_V2) {
+                netdev->features &= ~NETIF_F_GSO_GRE;
+                netdev->features &= ~NETIF_F_GSO_GRE_CSUM;
+        }
 }
 
 static int hns3_alloc_buffer(struct hns3_enet_ring *ring,
@@ -3564,9 +3564,14 @@ static u32 hclge_check_event_cause(struct hclge_dev *hdev, u32 *clearval)
 static void hclge_clear_event_cause(struct hclge_dev *hdev, u32 event_type,
                                     u32 regclr)
 {
+#define HCLGE_IMP_RESET_DELAY           5
+
         switch (event_type) {
         case HCLGE_VECTOR0_EVENT_PTP:
         case HCLGE_VECTOR0_EVENT_RST:
+                if (regclr == BIT(HCLGE_VECTOR0_IMPRESET_INT_B))
+                        mdelay(HCLGE_IMP_RESET_DELAY);
+
                 hclge_write_dev(&hdev->hw, HCLGE_MISC_RESET_STS_REG, regclr);
                 break;
         case HCLGE_VECTOR0_EVENT_MBX:
@@ -7348,6 +7353,12 @@ static int hclge_del_cls_flower(struct hnae3_handle *handle,
         ret = hclge_fd_tcam_config(hdev, HCLGE_FD_STAGE_1, true, rule->location,
                                    NULL, false);
         if (ret) {
+                /* if tcam config fail, set rule state to TO_DEL,
+                 * so the rule will be deleted when periodic
+                 * task being scheduled.
+                 */
+                hclge_update_fd_list(hdev, HCLGE_FD_TO_DEL, rule->location, NULL);
+                set_bit(HCLGE_STATE_FD_TBL_CHANGED, &hdev->state);
                 spin_unlock_bh(&hdev->fd_rule_lock);
                 return ret;
         }
@@ -8824,7 +8835,7 @@ static void hclge_update_overflow_flags(struct hclge_vport *vport,
         if (mac_type == HCLGE_MAC_ADDR_UC) {
                 if (is_all_added)
                         vport->overflow_promisc_flags &= ~HNAE3_OVERFLOW_UPE;
-                else
+                else if (hclge_is_umv_space_full(vport, true))
                         vport->overflow_promisc_flags |= HNAE3_OVERFLOW_UPE;
         } else {
                 if (is_all_added)
@@ -1855,7 +1855,8 @@ static void hclgevf_periodic_service_task(struct hclgevf_dev *hdev)
         unsigned long delta = round_jiffies_relative(HZ);
         struct hnae3_handle *handle = &hdev->nic;
 
-        if (test_bit(HCLGEVF_STATE_RST_FAIL, &hdev->state))
+        if (test_bit(HCLGEVF_STATE_RST_FAIL, &hdev->state) ||
+            test_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state))
                 return;
 
         if (time_is_after_jiffies(hdev->last_serv_processed + HZ)) {
@@ -456,9 +456,6 @@ int hinic_set_vlan_fliter(struct hinic_dev *nic_dev, u32 en)
         u16 out_size = sizeof(vlan_filter);
         int err;
 
-        if (!hwdev)
-                return -EINVAL;
-
         vlan_filter.func_idx = HINIC_HWIF_FUNC_IDX(hwif);
         vlan_filter.enable = en;
 
@@ -4475,9 +4475,7 @@ int i40e_ndo_set_vf_port_vlan(struct net_device *netdev, int vf_id,
                 goto error_pvid;
 
         i40e_vlan_stripping_enable(vsi);
-        i40e_vc_reset_vf(vf, true);
-        /* During reset the VF got a new VSI, so refresh a pointer. */
-        vsi = pf->vsi[vf->lan_vsi_idx];
+
         /* Locked once because multiple functions below iterate list */
         spin_lock_bh(&vsi->mac_filter_hash_lock);
 
@@ -4563,6 +4561,10 @@ int i40e_ndo_set_vf_port_vlan(struct net_device *netdev, int vf_id,
          */
         vf->port_vlan_id = le16_to_cpu(vsi->info.pvid);
 
+        i40e_vc_reset_vf(vf, true);
+        /* During reset the VF got a new VSI, so refresh a pointer. */
+        vsi = pf->vsi[vf->lan_vsi_idx];
+
         ret = i40e_config_vf_promiscuous_mode(vf, vsi->id, allmulti, alluni);
         if (ret) {
                 dev_err(&pf->pdev->dev, "Unable to config vf promiscuous mode\n");
@@ -521,7 +521,7 @@ void iavf_down(struct iavf_adapter *adapter);
 int iavf_process_config(struct iavf_adapter *adapter);
 int iavf_parse_vf_resource_msg(struct iavf_adapter *adapter);
 void iavf_schedule_reset(struct iavf_adapter *adapter, u64 flags);
-void iavf_schedule_request_stats(struct iavf_adapter *adapter);
+void iavf_schedule_aq_request(struct iavf_adapter *adapter, u64 flags);
 void iavf_schedule_finish_config(struct iavf_adapter *adapter);
 void iavf_reset(struct iavf_adapter *adapter);
 void iavf_set_ethtool_ops(struct net_device *netdev);
@@ -362,7 +362,7 @@ static void iavf_get_ethtool_stats(struct net_device *netdev,
         unsigned int i;
 
         /* Explicitly request stats refresh */
-        iavf_schedule_request_stats(adapter);
+        iavf_schedule_aq_request(adapter, IAVF_FLAG_AQ_REQUEST_STATS);
 
         iavf_add_ethtool_stats(&data, adapter, iavf_gstrings_stats);
 
@@ -314,15 +314,13 @@ void iavf_schedule_reset(struct iavf_adapter *adapter, u64 flags)
 }
 
 /**
- * iavf_schedule_request_stats - Set the flags and schedule statistics request
+ * iavf_schedule_aq_request - Set the flags and schedule aq request
  * @adapter: board private structure
- *
- * Sets IAVF_FLAG_AQ_REQUEST_STATS flag so iavf_watchdog_task() will explicitly
- * request and refresh ethtool stats
+ * @flags: requested aq flags
 **/
-void iavf_schedule_request_stats(struct iavf_adapter *adapter)
+void iavf_schedule_aq_request(struct iavf_adapter *adapter, u64 flags)
 {
-        adapter->aq_required |= IAVF_FLAG_AQ_REQUEST_STATS;
+        adapter->aq_required |= flags;
         mod_delayed_work(adapter->wq, &adapter->watchdog_task, 0);
 }
 
@@ -823,7 +821,7 @@ iavf_vlan_filter *iavf_add_vlan(struct iavf_adapter *adapter,
                 list_add_tail(&f->list, &adapter->vlan_filter_list);
                 f->state = IAVF_VLAN_ADD;
                 adapter->num_vlan_filters++;
-                adapter->aq_required |= IAVF_FLAG_AQ_ADD_VLAN_FILTER;
+                iavf_schedule_aq_request(adapter, IAVF_FLAG_AQ_ADD_VLAN_FILTER);
         }
 
 clearout:
@@ -845,7 +843,7 @@ static void iavf_del_vlan(struct iavf_adapter *adapter, struct iavf_vlan vlan)
         f = iavf_find_vlan(adapter, vlan);
         if (f) {
                 f->state = IAVF_VLAN_REMOVE;
-                adapter->aq_required |= IAVF_FLAG_AQ_DEL_VLAN_FILTER;
+                iavf_schedule_aq_request(adapter, IAVF_FLAG_AQ_DEL_VLAN_FILTER);
         }
 
         spin_unlock_bh(&adapter->mac_vlan_list_lock);
@@ -1421,7 +1419,8 @@ void iavf_down(struct iavf_adapter *adapter)
         iavf_clear_fdir_filters(adapter);
         iavf_clear_adv_rss_conf(adapter);
 
-        if (!(adapter->flags & IAVF_FLAG_PF_COMMS_FAILED)) {
+        if (!(adapter->flags & IAVF_FLAG_PF_COMMS_FAILED) &&
+            !(test_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section))) {
                 /* cancel any current operation */
                 adapter->current_op = VIRTCHNL_OP_UNKNOWN;
                 /* Schedule operations to close down the HW. Don't wait
@@ -868,6 +868,18 @@ static void igc_ethtool_get_stats(struct net_device *netdev,
         spin_unlock(&adapter->stats64_lock);
 }
 
+static int igc_ethtool_get_previous_rx_coalesce(struct igc_adapter *adapter)
+{
+        return (adapter->rx_itr_setting <= 3) ?
+                adapter->rx_itr_setting : adapter->rx_itr_setting >> 2;
+}
+
+static int igc_ethtool_get_previous_tx_coalesce(struct igc_adapter *adapter)
+{
+        return (adapter->tx_itr_setting <= 3) ?
+                adapter->tx_itr_setting : adapter->tx_itr_setting >> 2;
+}
+
 static int igc_ethtool_get_coalesce(struct net_device *netdev,
                                     struct ethtool_coalesce *ec,
                                     struct kernel_ethtool_coalesce *kernel_coal,
@@ -875,17 +887,8 @@ static int igc_ethtool_get_coalesce(struct net_device *netdev,
 {
         struct igc_adapter *adapter = netdev_priv(netdev);
 
-        if (adapter->rx_itr_setting <= 3)
-                ec->rx_coalesce_usecs = adapter->rx_itr_setting;
-        else
-                ec->rx_coalesce_usecs = adapter->rx_itr_setting >> 2;
-
-        if (!(adapter->flags & IGC_FLAG_QUEUE_PAIRS)) {
-                if (adapter->tx_itr_setting <= 3)
-                        ec->tx_coalesce_usecs = adapter->tx_itr_setting;
-                else
-                        ec->tx_coalesce_usecs = adapter->tx_itr_setting >> 2;
-        }
+        ec->rx_coalesce_usecs = igc_ethtool_get_previous_rx_coalesce(adapter);
+        ec->tx_coalesce_usecs = igc_ethtool_get_previous_tx_coalesce(adapter);
 
         return 0;
 }
@@ -910,8 +913,12 @@ static int igc_ethtool_set_coalesce(struct net_device *netdev,
             ec->tx_coalesce_usecs == 2)
                 return -EINVAL;
 
-        if ((adapter->flags & IGC_FLAG_QUEUE_PAIRS) && ec->tx_coalesce_usecs)
+        if ((adapter->flags & IGC_FLAG_QUEUE_PAIRS) &&
+            ec->tx_coalesce_usecs != igc_ethtool_get_previous_tx_coalesce(adapter)) {
+                NL_SET_ERR_MSG_MOD(extack,
+                                   "Queue Pair mode enabled, both Rx and Tx coalescing controlled by rx-usecs");
                 return -EINVAL;
+        }
 
         /* If ITR is disabled, disable DMAC */
         if (ec->rx_coalesce_usecs == 0) {
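For context on the coalesce hunks above (an editor's note, not from the patch itself): igc stores rx_itr_setting/tx_itr_setting either as one of the special mode selectors 0..3, reported back verbatim, or as the interval left-shifted by two, hence the >> 2 when reporting it in microseconds. An illustrative decode mirroring the helpers introduced above:

/* illustrative only; mirrors igc_ethtool_get_previous_*_coalesce() */
static int itr_setting_to_usecs(u32 itr_setting)
{
        return (itr_setting <= 3) ? itr_setting : itr_setting >> 2;
}
/* e.g. a stored setting of 400 is reported as 100 usecs */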
@@ -6491,7 +6491,7 @@ static int igc_xdp_xmit(struct net_device *dev, int num_frames,
         struct igc_ring *ring;
         int i, drops;
 
-        if (unlikely(test_bit(__IGC_DOWN, &adapter->state)))
+        if (unlikely(!netif_carrier_ok(dev)))
                 return -ENETDOWN;
 
         if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
@@ -734,13 +734,13 @@ static netdev_tx_t octep_start_xmit(struct sk_buff *skb,
 dma_map_sg_err:
         if (si > 0) {
                 dma_unmap_single(iq->dev, sglist[0].dma_ptr[0],
-                                 sglist[0].len[0], DMA_TO_DEVICE);
-                sglist[0].len[0] = 0;
+                                 sglist[0].len[3], DMA_TO_DEVICE);
+                sglist[0].len[3] = 0;
         }
         while (si > 1) {
                 dma_unmap_page(iq->dev, sglist[si >> 2].dma_ptr[si & 3],
-                               sglist[si >> 2].len[si & 3], DMA_TO_DEVICE);
-                sglist[si >> 2].len[si & 3] = 0;
+                               sglist[si >> 2].len[3 - (si & 3)], DMA_TO_DEVICE);
+                sglist[si >> 2].len[3 - (si & 3)] = 0;
                 si--;
         }
         tx_buffer->gather = 0;
@@ -69,12 +69,12 @@ int octep_iq_process_completions(struct octep_iq *iq, u16 budget)
                 compl_sg++;
 
                 dma_unmap_single(iq->dev, tx_buffer->sglist[0].dma_ptr[0],
-                                 tx_buffer->sglist[0].len[0], DMA_TO_DEVICE);
+                                 tx_buffer->sglist[0].len[3], DMA_TO_DEVICE);
 
                 i = 1; /* entry 0 is main skb, unmapped above */
                 while (frags--) {
                         dma_unmap_page(iq->dev, tx_buffer->sglist[i >> 2].dma_ptr[i & 3],
-                                       tx_buffer->sglist[i >> 2].len[i & 3], DMA_TO_DEVICE);
+                                       tx_buffer->sglist[i >> 2].len[3 - (i & 3)], DMA_TO_DEVICE);
                         i++;
                 }
 
@@ -131,13 +131,13 @@ static void octep_iq_free_pending(struct octep_iq *iq)
 
                 dma_unmap_single(iq->dev,
                                  tx_buffer->sglist[0].dma_ptr[0],
-                                 tx_buffer->sglist[0].len[0],
+                                 tx_buffer->sglist[0].len[3],
                                  DMA_TO_DEVICE);
 
                 i = 1; /* entry 0 is main skb, unmapped above */
                 while (frags--) {
                         dma_unmap_page(iq->dev, tx_buffer->sglist[i >> 2].dma_ptr[i & 3],
-                                       tx_buffer->sglist[i >> 2].len[i & 3], DMA_TO_DEVICE);
+                                       tx_buffer->sglist[i >> 2].len[3 - (i & 3)], DMA_TO_DEVICE);
                         i++;
                 }
 
@@ -17,7 +17,21 @@
 #define TX_BUFTYPE_NET_SG        2
 #define NUM_TX_BUFTYPES          3
 
-/* Hardware format for Scatter/Gather list */
+/* Hardware format for Scatter/Gather list
+ *
+ * 63      48|47     32|31     16|15       0
+ * -----------------------------------------
+ * |  Len 0  |  Len 1  |  Len 2  |  Len 3  |
+ * -----------------------------------------
+ * |                Ptr 0                  |
+ * -----------------------------------------
+ * |                Ptr 1                  |
+ * -----------------------------------------
+ * |                Ptr 2                  |
+ * -----------------------------------------
+ * |                Ptr 3                  |
+ * -----------------------------------------
+ */
 struct octep_tx_sglist_desc {
         u16 len[4];
         dma_addr_t dma_ptr[4];
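The octeon_ep hunks above all make the same index correction, which follows from the layout comment just added: within each 64-bit descriptor word the four 16-bit lengths are packed with Len 0 in the most significant field, i.e. in reverse order relative to the pointer array. A small standalone illustration of the resulting index math (not driver code):

#include <stdio.h>

int main(void)
{
        /* buffer i uses dma_ptr[i & 3] but len[3 - (i & 3)] within
         * descriptor word i >> 2
         */
        for (int i = 0; i < 8; i++)
                printf("buf %d -> sglist[%d].dma_ptr[%d] / len[%d]\n",
                       i, i >> 2, i & 3, 3 - (i & 3));
        return 0;
}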
@@ -29,7 +29,8 @@
 static bool otx2_xdp_rcv_pkt_handler(struct otx2_nic *pfvf,
                                      struct bpf_prog *prog,
                                      struct nix_cqe_rx_s *cqe,
-                                     struct otx2_cq_queue *cq);
+                                     struct otx2_cq_queue *cq,
+                                     bool *need_xdp_flush);
 
 static int otx2_nix_cq_op_status(struct otx2_nic *pfvf,
                                  struct otx2_cq_queue *cq)
@@ -337,7 +338,7 @@ static bool otx2_check_rcv_errors(struct otx2_nic *pfvf,
 static void otx2_rcv_pkt_handler(struct otx2_nic *pfvf,
                                  struct napi_struct *napi,
                                  struct otx2_cq_queue *cq,
-                                 struct nix_cqe_rx_s *cqe)
+                                 struct nix_cqe_rx_s *cqe, bool *need_xdp_flush)
 {
         struct nix_rx_parse_s *parse = &cqe->parse;
         struct nix_rx_sg_s *sg = &cqe->sg;
@@ -353,7 +354,7 @@ static void otx2_rcv_pkt_handler(struct otx2_nic *pfvf,
         }
 
         if (pfvf->xdp_prog)
-                if (otx2_xdp_rcv_pkt_handler(pfvf, pfvf->xdp_prog, cqe, cq))
+                if (otx2_xdp_rcv_pkt_handler(pfvf, pfvf->xdp_prog, cqe, cq, need_xdp_flush))
                         return;
 
         skb = napi_get_frags(napi);
@@ -388,6 +389,7 @@ static int otx2_rx_napi_handler(struct otx2_nic *pfvf,
                                 struct napi_struct *napi,
                                 struct otx2_cq_queue *cq, int budget)
 {
+        bool need_xdp_flush = false;
         struct nix_cqe_rx_s *cqe;
         int processed_cqe = 0;
 
@@ -409,13 +411,15 @@ static int otx2_rx_napi_handler(struct otx2_nic *pfvf,
                 cq->cq_head++;
                 cq->cq_head &= (cq->cqe_cnt - 1);
 
-                otx2_rcv_pkt_handler(pfvf, napi, cq, cqe);
+                otx2_rcv_pkt_handler(pfvf, napi, cq, cqe, &need_xdp_flush);
 
                 cqe->hdr.cqe_type = NIX_XQE_TYPE_INVALID;
                 cqe->sg.seg_addr = 0x00;
                 processed_cqe++;
                 cq->pend_cqe--;
         }
+        if (need_xdp_flush)
+                xdp_do_flush();
 
         /* Free CQEs to HW */
         otx2_write64(pfvf, NIX_LF_CQ_OP_DOOR,
@@ -1354,7 +1358,8 @@ bool otx2_xdp_sq_append_pkt(struct otx2_nic *pfvf, u64 iova, int len, u16 qidx)
 static bool otx2_xdp_rcv_pkt_handler(struct otx2_nic *pfvf,
                                      struct bpf_prog *prog,
                                      struct nix_cqe_rx_s *cqe,
-                                     struct otx2_cq_queue *cq)
+                                     struct otx2_cq_queue *cq,
+                                     bool *need_xdp_flush)
 {
         unsigned char *hard_start, *data;
         int qidx = cq->cq_idx;
@@ -1391,8 +1396,10 @@ static bool otx2_xdp_rcv_pkt_handler(struct otx2_nic *pfvf,
 
                 otx2_dma_unmap_page(pfvf, iova, pfvf->rbsize,
                                     DMA_FROM_DEVICE);
-                if (!err)
+                if (!err) {
+                        *need_xdp_flush = true;
                         return true;
+                }
                 put_page(page);
                 break;
         default:
@@ -243,10 +243,9 @@ static void vcap_test_api_init(struct vcap_admin *admin)
 }
 
 /* Helper function to create a rule of a specific size */
-static struct vcap_rule *
-test_vcap_xn_rule_creator(struct kunit *test, int cid, enum vcap_user user,
-                          u16 priority,
-                          int id, int size, int expected_addr)
+static void test_vcap_xn_rule_creator(struct kunit *test, int cid,
+                                      enum vcap_user user, u16 priority,
+                                      int id, int size, int expected_addr)
 {
         struct vcap_rule *rule;
         struct vcap_rule_internal *ri;
@@ -311,7 +310,7 @@ test_vcap_xn_rule_creator(struct kunit *test, int cid, enum vcap_user user,
         ret = vcap_add_rule(rule);
         KUNIT_EXPECT_EQ(test, 0, ret);
         KUNIT_EXPECT_EQ(test, expected_addr, ri->addr);
-        return rule;
+        vcap_free_rule(rule);
 }
 
 /* Prepare testing rule deletion */
@@ -995,6 +994,16 @@ static void vcap_api_encode_rule_actionset_test(struct kunit *test)
         KUNIT_EXPECT_EQ(test, (u32)0x00000000, actwords[11]);
 }
 
+static void vcap_free_ckf(struct vcap_rule *rule)
+{
+        struct vcap_client_keyfield *ckf, *next_ckf;
+
+        list_for_each_entry_safe(ckf, next_ckf, &rule->keyfields, ctrl.list) {
+                list_del(&ckf->ctrl.list);
+                kfree(ckf);
+        }
+}
+
 static void vcap_api_rule_add_keyvalue_test(struct kunit *test)
 {
         struct vcap_admin admin = {
@@ -1027,6 +1036,7 @@ static void vcap_api_rule_add_keyvalue_test(struct kunit *test)
         KUNIT_EXPECT_EQ(test, VCAP_FIELD_BIT, kf->ctrl.type);
         KUNIT_EXPECT_EQ(test, 0x0, kf->data.u1.value);
         KUNIT_EXPECT_EQ(test, 0x1, kf->data.u1.mask);
+        vcap_free_ckf(rule);
 
         INIT_LIST_HEAD(&rule->keyfields);
         ret = vcap_rule_add_key_bit(rule, VCAP_KF_LOOKUP_FIRST_IS, VCAP_BIT_1);
@@ -1039,6 +1049,7 @@ static void vcap_api_rule_add_keyvalue_test(struct kunit *test)
         KUNIT_EXPECT_EQ(test, VCAP_FIELD_BIT, kf->ctrl.type);
         KUNIT_EXPECT_EQ(test, 0x1, kf->data.u1.value);
         KUNIT_EXPECT_EQ(test, 0x1, kf->data.u1.mask);
+        vcap_free_ckf(rule);
 
         INIT_LIST_HEAD(&rule->keyfields);
         ret = vcap_rule_add_key_bit(rule, VCAP_KF_LOOKUP_FIRST_IS,
@@ -1052,6 +1063,7 @@ static void vcap_api_rule_add_keyvalue_test(struct kunit *test)
         KUNIT_EXPECT_EQ(test, VCAP_FIELD_BIT, kf->ctrl.type);
         KUNIT_EXPECT_EQ(test, 0x0, kf->data.u1.value);
         KUNIT_EXPECT_EQ(test, 0x0, kf->data.u1.mask);
+        vcap_free_ckf(rule);
 
         INIT_LIST_HEAD(&rule->keyfields);
         ret = vcap_rule_add_key_u32(rule, VCAP_KF_TYPE, 0x98765432, 0xff00ffab);
@@ -1064,6 +1076,7 @@ static void vcap_api_rule_add_keyvalue_test(struct kunit *test)
         KUNIT_EXPECT_EQ(test, VCAP_FIELD_U32, kf->ctrl.type);
         KUNIT_EXPECT_EQ(test, 0x98765432, kf->data.u32.value);
         KUNIT_EXPECT_EQ(test, 0xff00ffab, kf->data.u32.mask);
+        vcap_free_ckf(rule);
 
         INIT_LIST_HEAD(&rule->keyfields);
         ret = vcap_rule_add_key_u128(rule, VCAP_KF_L3_IP6_SIP, &dip);
@@ -1078,6 +1091,18 @@ static void vcap_api_rule_add_keyvalue_test(struct kunit *test)
                 KUNIT_EXPECT_EQ(test, dip.value[idx], kf->data.u128.value[idx]);
         for (idx = 0; idx < ARRAY_SIZE(dip.mask); ++idx)
                 KUNIT_EXPECT_EQ(test, dip.mask[idx], kf->data.u128.mask[idx]);
+        vcap_free_ckf(rule);
 }
 
+static void vcap_free_caf(struct vcap_rule *rule)
+{
+        struct vcap_client_actionfield *caf, *next_caf;
+
+        list_for_each_entry_safe(caf, next_caf,
+                                 &rule->actionfields, ctrl.list) {
+                list_del(&caf->ctrl.list);
+                kfree(caf);
+        }
+}
+
 static void vcap_api_rule_add_actionvalue_test(struct kunit *test)
@@ -1105,6 +1130,7 @@ static void vcap_api_rule_add_actionvalue_test(struct kunit *test)
         KUNIT_EXPECT_EQ(test, VCAP_AF_POLICE_ENA, af->ctrl.action);
         KUNIT_EXPECT_EQ(test, VCAP_FIELD_BIT, af->ctrl.type);
         KUNIT_EXPECT_EQ(test, 0x0, af->data.u1.value);
+        vcap_free_caf(rule);
 
         INIT_LIST_HEAD(&rule->actionfields);
         ret = vcap_rule_add_action_bit(rule, VCAP_AF_POLICE_ENA, VCAP_BIT_1);
@@ -1116,6 +1142,7 @@ static void vcap_api_rule_add_actionvalue_test(struct kunit *test)
         KUNIT_EXPECT_EQ(test, VCAP_AF_POLICE_ENA, af->ctrl.action);
         KUNIT_EXPECT_EQ(test, VCAP_FIELD_BIT, af->ctrl.type);
         KUNIT_EXPECT_EQ(test, 0x1, af->data.u1.value);
+        vcap_free_caf(rule);
 
         INIT_LIST_HEAD(&rule->actionfields);
         ret = vcap_rule_add_action_bit(rule, VCAP_AF_POLICE_ENA, VCAP_BIT_ANY);
@@ -1127,6 +1154,7 @@ static void vcap_api_rule_add_actionvalue_test(struct kunit *test)
         KUNIT_EXPECT_EQ(test, VCAP_AF_POLICE_ENA, af->ctrl.action);
         KUNIT_EXPECT_EQ(test, VCAP_FIELD_BIT, af->ctrl.type);
         KUNIT_EXPECT_EQ(test, 0x0, af->data.u1.value);
+        vcap_free_caf(rule);
 
         INIT_LIST_HEAD(&rule->actionfields);
         ret = vcap_rule_add_action_u32(rule, VCAP_AF_TYPE, 0x98765432);
@@ -1138,6 +1166,7 @@ static void vcap_api_rule_add_actionvalue_test(struct kunit *test)
         KUNIT_EXPECT_EQ(test, VCAP_AF_TYPE, af->ctrl.action);
         KUNIT_EXPECT_EQ(test, VCAP_FIELD_U32, af->ctrl.type);
         KUNIT_EXPECT_EQ(test, 0x98765432, af->data.u32.value);
+        vcap_free_caf(rule);
 
         INIT_LIST_HEAD(&rule->actionfields);
         ret = vcap_rule_add_action_u32(rule, VCAP_AF_MASK_MODE, 0xaabbccdd);
@@ -1149,6 +1178,7 @@ static void vcap_api_rule_add_actionvalue_test(struct kunit *test)
         KUNIT_EXPECT_EQ(test, VCAP_AF_MASK_MODE, af->ctrl.action);
         KUNIT_EXPECT_EQ(test, VCAP_FIELD_U32, af->ctrl.type);
         KUNIT_EXPECT_EQ(test, 0xaabbccdd, af->data.u32.value);
+        vcap_free_caf(rule);
 }
 
 static void vcap_api_rule_find_keyset_basic_test(struct kunit *test)
@@ -1408,6 +1438,10 @@ static void vcap_api_encode_rule_test(struct kunit *test)
         ret = list_empty(&is2_admin.rules);
-        KUNIT_EXPECT_EQ(test, false, ret);
+        KUNIT_EXPECT_EQ(test, 0, ret);
+
+        vcap_enable_lookups(&test_vctrl, &test_netdev, 0, 0,
+                            rule->cookie, false);
 
         vcap_free_rule(rule);
 
         /* Check that the rule has been freed: tricky to access since this
@@ -1418,6 +1452,8 @@ static void vcap_api_encode_rule_test(struct kunit *test)
         KUNIT_EXPECT_EQ(test, true, ret);
         ret = list_empty(&rule->actionfields);
         KUNIT_EXPECT_EQ(test, true, ret);
+
+        vcap_del_rule(&test_vctrl, &test_netdev, id);
 }
 
 static void vcap_api_set_rule_counter_test(struct kunit *test)
@@ -1561,6 +1597,11 @@ static void vcap_api_rule_insert_in_order_test(struct kunit *test)
         test_vcap_xn_rule_creator(test, 10000, VCAP_USER_QOS, 20, 400, 6, 774);
         test_vcap_xn_rule_creator(test, 10000, VCAP_USER_QOS, 30, 300, 3, 771);
         test_vcap_xn_rule_creator(test, 10000, VCAP_USER_QOS, 40, 200, 2, 768);
+
+        vcap_del_rule(&test_vctrl, &test_netdev, 200);
+        vcap_del_rule(&test_vctrl, &test_netdev, 300);
+        vcap_del_rule(&test_vctrl, &test_netdev, 400);
+        vcap_del_rule(&test_vctrl, &test_netdev, 500);
 }
 
 static void vcap_api_rule_insert_reverse_order_test(struct kunit *test)
@@ -1619,6 +1660,11 @@ static void vcap_api_rule_insert_reverse_order_test(struct kunit *test)
                 ++idx;
         }
         KUNIT_EXPECT_EQ(test, 768, admin.last_used_addr);
+
+        vcap_del_rule(&test_vctrl, &test_netdev, 500);
+        vcap_del_rule(&test_vctrl, &test_netdev, 400);
+        vcap_del_rule(&test_vctrl, &test_netdev, 300);
+        vcap_del_rule(&test_vctrl, &test_netdev, 200);
 }
 
 static void vcap_api_rule_remove_at_end_test(struct kunit *test)
@@ -1819,6 +1865,9 @@ static void vcap_api_rule_remove_in_front_test(struct kunit *test)
         KUNIT_EXPECT_EQ(test, 786, test_init_start);
         KUNIT_EXPECT_EQ(test, 8, test_init_count);
         KUNIT_EXPECT_EQ(test, 794, admin.last_used_addr);
+
+        vcap_del_rule(&test_vctrl, &test_netdev, 200);
+        vcap_del_rule(&test_vctrl, &test_netdev, 300);
 }
 
 static struct kunit_case vcap_api_rule_remove_test_cases[] = {
@@ -187,6 +187,7 @@ typedef void (*ionic_desc_cb)(struct ionic_queue *q,
                               struct ionic_desc_info *desc_info,
                               struct ionic_cq_info *cq_info, void *cb_arg);
 
+#define IONIC_MAX_BUF_LEN       ((u16)-1)
 #define IONIC_PAGE_SIZE         PAGE_SIZE
 #define IONIC_PAGE_SPLIT_SZ     (PAGE_SIZE / 2)
 #define IONIC_PAGE_GFP_MASK     (GFP_ATOMIC | __GFP_NOWARN |\
@@ -207,7 +207,8 @@ static struct sk_buff *ionic_rx_frags(struct ionic_queue *q,
                         return NULL;
                 }
 
-                frag_len = min_t(u16, len, IONIC_PAGE_SIZE - buf_info->page_offset);
+                frag_len = min_t(u16, len, min_t(u32, IONIC_MAX_BUF_LEN,
+                                                 IONIC_PAGE_SIZE - buf_info->page_offset));
                 len -= frag_len;
 
                 dma_sync_single_for_cpu(dev,
@@ -452,7 +453,8 @@ void ionic_rx_fill(struct ionic_queue *q)
 
                 /* fill main descriptor - buf[0] */
                 desc->addr = cpu_to_le64(buf_info->dma_addr + buf_info->page_offset);
-                frag_len = min_t(u16, len, IONIC_PAGE_SIZE - buf_info->page_offset);
+                frag_len = min_t(u16, len, min_t(u32, IONIC_MAX_BUF_LEN,
+                                                 IONIC_PAGE_SIZE - buf_info->page_offset));
                 desc->len = cpu_to_le16(frag_len);
                 remain_len -= frag_len;
                 buf_info++;
@@ -471,7 +473,9 @@ void ionic_rx_fill(struct ionic_queue *q)
                 }
 
                 sg_elem->addr = cpu_to_le64(buf_info->dma_addr + buf_info->page_offset);
-                frag_len = min_t(u16, remain_len, IONIC_PAGE_SIZE - buf_info->page_offset);
+                frag_len = min_t(u16, remain_len, min_t(u32, IONIC_MAX_BUF_LEN,
+                                                        IONIC_PAGE_SIZE -
+                                                        buf_info->page_offset));
                 sg_elem->len = cpu_to_le16(frag_len);
                 remain_len -= frag_len;
                 buf_info++;
@@ -136,6 +136,8 @@ static struct efx_tc_mac_pedit_action *efx_tc_flower_get_mac(struct efx_nic *efx
         if (old) {
                 /* don't need our new entry */
                 kfree(ped);
+                if (IS_ERR(old)) /* oh dear, it's actually an error */
+                        return ERR_CAST(old);
                 if (!refcount_inc_not_zero(&old->ref))
                         return ERR_PTR(-EAGAIN);
                 /* existing entry found, ref taken */
@@ -602,6 +604,8 @@ static int efx_tc_flower_record_encap_match(struct efx_nic *efx,
                 kfree(encap);
                 if (pseudo) /* don't need our new pseudo either */
                         efx_tc_flower_release_encap_match(efx, pseudo);
+                if (IS_ERR(old)) /* oh dear, it's actually an error */
+                        return PTR_ERR(old);
                 /* check old and new em_types are compatible */
                 switch (old->type) {
                 case EFX_TC_EM_DIRECT:
@@ -700,6 +704,8 @@ static struct efx_tc_recirc_id *efx_tc_get_recirc_id(struct efx_nic *efx,
         if (old) {
                 /* don't need our new entry */
                 kfree(rid);
+                if (IS_ERR(old)) /* oh dear, it's actually an error */
+                        return ERR_CAST(old);
                 if (!refcount_inc_not_zero(&old->ref))
                         return ERR_PTR(-EAGAIN);
                 /* existing entry found */
@@ -1482,7 +1488,10 @@ static int efx_tc_flower_replace_foreign(struct efx_nic *efx,
         old = rhashtable_lookup_get_insert_fast(&efx->tc->match_action_ht,
                                                 &rule->linkage,
                                                 efx_tc_match_action_ht_params);
-        if (old) {
+        if (IS_ERR(old)) {
+                rc = PTR_ERR(old);
+                goto release;
+        } else if (old) {
                 netif_dbg(efx, drv, efx->net_dev,
                           "Ignoring already-offloaded rule (cookie %lx)\n",
                           tc->cookie);
@@ -1697,7 +1706,10 @@ static int efx_tc_flower_replace_lhs(struct efx_nic *efx,
         old = rhashtable_lookup_get_insert_fast(&efx->tc->lhs_rule_ht,
                                                 &rule->linkage,
                                                 efx_tc_lhs_rule_ht_params);
-        if (old) {
+        if (IS_ERR(old)) {
+                rc = PTR_ERR(old);
+                goto release;
+        } else if (old) {
                 netif_dbg(efx, drv, efx->net_dev,
                           "Already offloaded rule (cookie %lx)\n", tc->cookie);
                 rc = -EEXIST;
@@ -1858,7 +1870,10 @@ static int efx_tc_flower_replace(struct efx_nic *efx,
         old = rhashtable_lookup_get_insert_fast(&efx->tc->match_action_ht,
                                                 &rule->linkage,
                                                 efx_tc_match_action_ht_params);
-        if (old) {
+        if (IS_ERR(old)) {
+                rc = PTR_ERR(old);
+                goto release;
+        } else if (old) {
                 netif_dbg(efx, drv, efx->net_dev,
                           "Already offloaded rule (cookie %lx)\n", tc->cookie);
                 NL_SET_ERR_MSG_MOD(extack, "Rule already offloaded");
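The sfc hunks above and below all enforce the same three-way contract of rhashtable_lookup_get_insert_fast(): it returns NULL when the new entry was inserted, an ERR_PTR() (for example -ENOMEM, or -EBUSY during a concurrent rehash) when insertion failed, or a pointer to an already-present entry. The old code treated any non-NULL return as an existing entry. A sketch of the corrected shape; the rhashtable and refcount APIs are real, while the entry type and helper name are illustrative:

static struct my_entry *get_or_insert(struct rhashtable *ht,
                                      struct my_entry *new)
{
        struct my_entry *old;

        old = rhashtable_lookup_get_insert_fast(ht, &new->linkage, ht_params);
        if (IS_ERR(old)) {              /* insertion failed outright */
                kfree(new);
                return ERR_CAST(old);
        }
        if (old) {                      /* raced; reuse the existing entry */
                kfree(new);
                if (!refcount_inc_not_zero(&old->ref))
                        return ERR_PTR(-EAGAIN);
                return old;
        }
        return new;                     /* NULL: our entry is in the table */
}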
@@ -298,7 +298,10 @@ static int efx_tc_ct_replace(struct efx_tc_ct_zone *ct_zone,
         old = rhashtable_lookup_get_insert_fast(&efx->tc->ct_ht,
                                                 &conn->linkage,
                                                 efx_tc_ct_ht_params);
-        if (old) {
+        if (IS_ERR(old)) {
+                rc = PTR_ERR(old);
+                goto release;
+        } else if (old) {
                 netif_dbg(efx, drv, efx->net_dev,
                           "Already offloaded conntrack (cookie %lx)\n", tc->cookie);
                 rc = -EEXIST;
@@ -482,6 +485,8 @@ struct efx_tc_ct_zone *efx_tc_ct_register_zone(struct efx_nic *efx, u16 zone,
         if (old) {
                 /* don't need our new entry */
                 kfree(ct_zone);
+                if (IS_ERR(old)) /* oh dear, it's actually an error */
+                        return ERR_CAST(old);
                 if (!refcount_inc_not_zero(&old->ref))
                         return ERR_PTR(-EAGAIN);
                 /* existing entry found */
@@ -236,6 +236,8 @@ struct efx_tc_counter_index *efx_tc_flower_get_counter_index(
         if (old) {
                 /* don't need our new entry */
                 kfree(ctr);
+                if (IS_ERR(old)) /* oh dear, it's actually an error */
+                        return ERR_CAST(old);
                 if (!refcount_inc_not_zero(&old->ref))
                         return ERR_PTR(-EAGAIN);
                 /* existing entry found */
@@ -132,6 +132,8 @@ static int efx_bind_neigh(struct efx_nic *efx,
                 /* don't need our new entry */
                 put_net_track(neigh->net, &neigh->ns_tracker);
                 kfree(neigh);
+                if (IS_ERR(old)) /* oh dear, it's actually an error */
+                        return PTR_ERR(old);
                 if (!refcount_inc_not_zero(&old->ref))
                         return -EAGAIN;
                 /* existing entry found, ref taken */
@@ -640,6 +642,8 @@ struct efx_tc_encap_action *efx_tc_flower_create_encap_md(
         if (old) {
                 /* don't need our new entry */
                 kfree(encap);
+                if (IS_ERR(old)) /* oh dear, it's actually an error */
+                        return ERR_CAST(old);
                 if (!refcount_inc_not_zero(&old->ref))
                         return ERR_PTR(-EAGAIN);
                 /* existing entry found, ref taken */
@@ -70,7 +70,7 @@ struct stmmac_txq_stats {
         u64 tx_tso_frames;
         u64 tx_tso_nfrags;
         struct u64_stats_sync syncp;
-};
+} ____cacheline_aligned_in_smp;
 
 struct stmmac_rxq_stats {
         u64 rx_bytes;
@@ -79,7 +79,7 @@ struct stmmac_rxq_stats {
         u64 rx_normal_irq_n;
         u64 napi_poll;
         struct u64_stats_sync syncp;
-};
+} ____cacheline_aligned_in_smp;
 
 /* Extra statistic and debug information exposed by ethtool */
 struct stmmac_extra_stats {
@@ -202,6 +202,9 @@ struct stmmac_extra_stats {
         unsigned long mtl_est_hlbf;
         unsigned long mtl_est_btre;
         unsigned long mtl_est_btrlm;
+        /* per queue statistics */
+        struct stmmac_txq_stats txq_stats[MTL_MAX_TX_QUEUES];
+        struct stmmac_rxq_stats rxq_stats[MTL_MAX_RX_QUEUES];
         unsigned long rx_dropped;
         unsigned long rx_errors;
         unsigned long tx_dropped;
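The stmmac rework that the remaining hunks implement moves the per-queue counters out of the RX/TX queue structures (which are freed and reallocated on reconfiguration) into stmmac_extra_stats, padding each counter block to its own cache line. The counters keep using u64_stats_sync, which makes 64-bit updates tear-free for readers on 32-bit systems. A minimal sketch of that writer/reader pairing; the APIs are the real kernel ones, the stats struct is illustrative:

struct q_stats {
        u64 pkts;
        struct u64_stats_sync syncp;
} ____cacheline_aligned_in_smp;

static void q_stats_inc(struct q_stats *s)      /* writer, e.g. IRQ path */
{
        u64_stats_update_begin(&s->syncp);
        s->pkts++;
        u64_stats_update_end(&s->syncp);
}

static u64 q_stats_read(struct q_stats *s)      /* reader, e.g. ethtool */
{
        unsigned int start;
        u64 pkts;

        do {
                start = u64_stats_fetch_begin(&s->syncp);
                pkts = s->pkts;
        } while (u64_stats_fetch_retry(&s->syncp, start));
        return pkts;
}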
@@ -441,8 +441,8 @@ static int sun8i_dwmac_dma_interrupt(struct stmmac_priv *priv,
                                      struct stmmac_extra_stats *x, u32 chan,
                                      u32 dir)
 {
-        struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[chan];
-        struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[chan];
+        struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[chan];
+        struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[chan];
         int ret = 0;
         u32 v;
 
@@ -455,9 +455,9 @@ static int sun8i_dwmac_dma_interrupt(struct stmmac_priv *priv,
 
         if (v & EMAC_TX_INT) {
                 ret |= handle_tx;
-                u64_stats_update_begin(&tx_q->txq_stats.syncp);
-                tx_q->txq_stats.tx_normal_irq_n++;
-                u64_stats_update_end(&tx_q->txq_stats.syncp);
+                u64_stats_update_begin(&txq_stats->syncp);
+                txq_stats->tx_normal_irq_n++;
+                u64_stats_update_end(&txq_stats->syncp);
         }
 
         if (v & EMAC_TX_DMA_STOP_INT)
@@ -479,9 +479,9 @@ static int sun8i_dwmac_dma_interrupt(struct stmmac_priv *priv,
 
         if (v & EMAC_RX_INT) {
                 ret |= handle_rx;
-                u64_stats_update_begin(&rx_q->rxq_stats.syncp);
-                rx_q->rxq_stats.rx_normal_irq_n++;
-                u64_stats_update_end(&rx_q->rxq_stats.syncp);
+                u64_stats_update_begin(&rxq_stats->syncp);
+                rxq_stats->rx_normal_irq_n++;
+                u64_stats_update_end(&rxq_stats->syncp);
         }
 
         if (v & EMAC_RX_BUF_UA_INT)
@@ -171,8 +171,8 @@ int dwmac4_dma_interrupt(struct stmmac_priv *priv, void __iomem *ioaddr,
         const struct dwmac4_addrs *dwmac4_addrs = priv->plat->dwmac4_addrs;
         u32 intr_status = readl(ioaddr + DMA_CHAN_STATUS(dwmac4_addrs, chan));
         u32 intr_en = readl(ioaddr + DMA_CHAN_INTR_ENA(dwmac4_addrs, chan));
-        struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[chan];
-        struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[chan];
+        struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[chan];
+        struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[chan];
         int ret = 0;
 
         if (dir == DMA_DIR_RX)
@@ -201,15 +201,15 @@ int dwmac4_dma_interrupt(struct stmmac_priv *priv, void __iomem *ioaddr,
         }
         /* TX/RX NORMAL interrupts */
         if (likely(intr_status & DMA_CHAN_STATUS_RI)) {
-                u64_stats_update_begin(&rx_q->rxq_stats.syncp);
-                rx_q->rxq_stats.rx_normal_irq_n++;
-                u64_stats_update_end(&rx_q->rxq_stats.syncp);
+                u64_stats_update_begin(&rxq_stats->syncp);
+                rxq_stats->rx_normal_irq_n++;
+                u64_stats_update_end(&rxq_stats->syncp);
                 ret |= handle_rx;
         }
         if (likely(intr_status & DMA_CHAN_STATUS_TI)) {
-                u64_stats_update_begin(&tx_q->txq_stats.syncp);
-                tx_q->txq_stats.tx_normal_irq_n++;
-                u64_stats_update_end(&tx_q->txq_stats.syncp);
+                u64_stats_update_begin(&txq_stats->syncp);
+                txq_stats->tx_normal_irq_n++;
+                u64_stats_update_end(&txq_stats->syncp);
                 ret |= handle_tx;
         }
 
@@ -162,8 +162,8 @@ static void show_rx_process_state(unsigned int status)
 int dwmac_dma_interrupt(struct stmmac_priv *priv, void __iomem *ioaddr,
                         struct stmmac_extra_stats *x, u32 chan, u32 dir)
 {
-        struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[chan];
-        struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[chan];
+        struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[chan];
+        struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[chan];
         int ret = 0;
         /* read the status register (CSR5) */
         u32 intr_status = readl(ioaddr + DMA_STATUS);
@@ -215,16 +215,16 @@ int dwmac_dma_interrupt(struct stmmac_priv *priv, void __iomem *ioaddr,
                 u32 value = readl(ioaddr + DMA_INTR_ENA);
                 /* to schedule NAPI on real RIE event. */
                 if (likely(value & DMA_INTR_ENA_RIE)) {
-                        u64_stats_update_begin(&rx_q->rxq_stats.syncp);
-                        rx_q->rxq_stats.rx_normal_irq_n++;
-                        u64_stats_update_end(&rx_q->rxq_stats.syncp);
+                        u64_stats_update_begin(&rxq_stats->syncp);
+                        rxq_stats->rx_normal_irq_n++;
+                        u64_stats_update_end(&rxq_stats->syncp);
                         ret |= handle_rx;
                 }
         }
         if (likely(intr_status & DMA_STATUS_TI)) {
-                u64_stats_update_begin(&tx_q->txq_stats.syncp);
-                tx_q->txq_stats.tx_normal_irq_n++;
-                u64_stats_update_end(&tx_q->txq_stats.syncp);
+                u64_stats_update_begin(&txq_stats->syncp);
+                txq_stats->tx_normal_irq_n++;
+                u64_stats_update_end(&txq_stats->syncp);
                 ret |= handle_tx;
         }
         if (unlikely(intr_status & DMA_STATUS_ERI))
@@ -337,8 +337,8 @@ static int dwxgmac2_dma_interrupt(struct stmmac_priv *priv,
                                   struct stmmac_extra_stats *x, u32 chan,
                                   u32 dir)
 {
-        struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[chan];
-        struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[chan];
+        struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[chan];
+        struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[chan];
         u32 intr_status = readl(ioaddr + XGMAC_DMA_CH_STATUS(chan));
         u32 intr_en = readl(ioaddr + XGMAC_DMA_CH_INT_EN(chan));
         int ret = 0;
@@ -367,15 +367,15 @@ static int dwxgmac2_dma_interrupt(struct stmmac_priv *priv,
         /* TX/RX NORMAL interrupts */
         if (likely(intr_status & XGMAC_NIS)) {
                 if (likely(intr_status & XGMAC_RI)) {
-                        u64_stats_update_begin(&rx_q->rxq_stats.syncp);
-                        rx_q->rxq_stats.rx_normal_irq_n++;
-                        u64_stats_update_end(&rx_q->rxq_stats.syncp);
+                        u64_stats_update_begin(&rxq_stats->syncp);
+                        rxq_stats->rx_normal_irq_n++;
+                        u64_stats_update_end(&rxq_stats->syncp);
                         ret |= handle_rx;
                 }
                 if (likely(intr_status & (XGMAC_TI | XGMAC_TBU))) {
-                        u64_stats_update_begin(&tx_q->txq_stats.syncp);
-                        tx_q->txq_stats.tx_normal_irq_n++;
-                        u64_stats_update_end(&tx_q->txq_stats.syncp);
+                        u64_stats_update_begin(&txq_stats->syncp);
+                        txq_stats->tx_normal_irq_n++;
+                        u64_stats_update_end(&txq_stats->syncp);
                         ret |= handle_tx;
                 }
         }
@@ -78,7 +78,6 @@ struct stmmac_tx_queue {
         dma_addr_t dma_tx_phy;
         dma_addr_t tx_tail_addr;
         u32 mss;
-        struct stmmac_txq_stats txq_stats;
 };
 
 struct stmmac_rx_buffer {
@@ -123,7 +122,6 @@ struct stmmac_rx_queue {
                 unsigned int len;
                 unsigned int error;
         } state;
-        struct stmmac_rxq_stats rxq_stats;
 };
 
 struct stmmac_channel {
@@ -548,14 +548,14 @@ static void stmmac_get_per_qstats(struct stmmac_priv *priv, u64 *data)
 
         pos = data;
         for (q = 0; q < tx_cnt; q++) {
-                struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[q];
+                struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[q];
                 struct stmmac_txq_stats snapshot;
 
                 data = pos;
                 do {
-                        start = u64_stats_fetch_begin(&tx_q->txq_stats.syncp);
-                        snapshot = tx_q->txq_stats;
-                } while (u64_stats_fetch_retry(&tx_q->txq_stats.syncp, start));
+                        start = u64_stats_fetch_begin(&txq_stats->syncp);
+                        snapshot = *txq_stats;
+                } while (u64_stats_fetch_retry(&txq_stats->syncp, start));
 
                 p = (char *)&snapshot + offsetof(struct stmmac_txq_stats, tx_pkt_n);
                 for (stat = 0; stat < STMMAC_TXQ_STATS; stat++) {
@@ -566,14 +566,14 @@ static void stmmac_get_per_qstats(struct stmmac_priv *priv, u64 *data)
 
         pos = data;
         for (q = 0; q < rx_cnt; q++) {
-                struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[q];
+                struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[q];
                 struct stmmac_rxq_stats snapshot;
 
                 data = pos;
                 do {
-                        start = u64_stats_fetch_begin(&rx_q->rxq_stats.syncp);
-                        snapshot = rx_q->rxq_stats;
-                } while (u64_stats_fetch_retry(&rx_q->rxq_stats.syncp, start));
+                        start = u64_stats_fetch_begin(&rxq_stats->syncp);
+                        snapshot = *rxq_stats;
+                } while (u64_stats_fetch_retry(&rxq_stats->syncp, start));
 
                 p = (char *)&snapshot + offsetof(struct stmmac_rxq_stats, rx_pkt_n);
                 for (stat = 0; stat < STMMAC_RXQ_STATS; stat++) {
@@ -637,14 +637,14 @@ static void stmmac_get_ethtool_stats(struct net_device *dev,
 
         pos = j;
         for (i = 0; i < rx_queues_count; i++) {
-                struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[i];
+                struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[i];
                 struct stmmac_rxq_stats snapshot;
 
                 j = pos;
                 do {
-                        start = u64_stats_fetch_begin(&rx_q->rxq_stats.syncp);
-                        snapshot = rx_q->rxq_stats;
-                } while (u64_stats_fetch_retry(&rx_q->rxq_stats.syncp, start));
+                        start = u64_stats_fetch_begin(&rxq_stats->syncp);
+                        snapshot = *rxq_stats;
+                } while (u64_stats_fetch_retry(&rxq_stats->syncp, start));
 
                 data[j++] += snapshot.rx_pkt_n;
                 data[j++] += snapshot.rx_normal_irq_n;
@@ -654,14 +654,14 @@ static void stmmac_get_ethtool_stats(struct net_device *dev,
 
         pos = j;
         for (i = 0; i < tx_queues_count; i++) {
-                struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[i];
+                struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[i];
                 struct stmmac_txq_stats snapshot;
 
                 j = pos;
                 do {
-                        start = u64_stats_fetch_begin(&tx_q->txq_stats.syncp);
-                        snapshot = tx_q->txq_stats;
-                } while (u64_stats_fetch_retry(&tx_q->txq_stats.syncp, start));
+                        start = u64_stats_fetch_begin(&txq_stats->syncp);
+                        snapshot = *txq_stats;
+                } while (u64_stats_fetch_retry(&txq_stats->syncp, start));
 
                 data[j++] += snapshot.tx_pkt_n;
                 data[j++] += snapshot.tx_normal_irq_n;
@@ -2426,6 +2426,7 @@ static bool stmmac_xdp_xmit_zc(struct stmmac_priv *priv, u32 queue, u32 budget)
 {
         struct netdev_queue *nq = netdev_get_tx_queue(priv->dev, queue);
         struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue];
+        struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[queue];
         struct xsk_buff_pool *pool = tx_q->xsk_pool;
         unsigned int entry = tx_q->cur_tx;
         struct dma_desc *tx_desc = NULL;
@@ -2505,9 +2506,9 @@ static bool stmmac_xdp_xmit_zc(struct stmmac_priv *priv, u32 queue, u32 budget)
                 tx_q->cur_tx = STMMAC_GET_ENTRY(tx_q->cur_tx, priv->dma_conf.dma_tx_size);
                 entry = tx_q->cur_tx;
         }
-        flags = u64_stats_update_begin_irqsave(&tx_q->txq_stats.syncp);
-        tx_q->txq_stats.tx_set_ic_bit += tx_set_ic_bit;
-        u64_stats_update_end_irqrestore(&tx_q->txq_stats.syncp, flags);
+        flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
+        txq_stats->tx_set_ic_bit += tx_set_ic_bit;
+        u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
 
         if (tx_desc) {
                 stmmac_flush_tx_descriptors(priv, queue);
@@ -2547,6 +2548,7 @@ static void stmmac_bump_dma_threshold(struct stmmac_priv *priv, u32 chan)
 static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue)
 {
         struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue];
+        struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[queue];
         unsigned int bytes_compl = 0, pkts_compl = 0;
         unsigned int entry, xmits = 0, count = 0;
         u32 tx_packets = 0, tx_errors = 0;
@@ -2706,11 +2708,11 @@ static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue)
         if (tx_q->dirty_tx != tx_q->cur_tx)
                 stmmac_tx_timer_arm(priv, queue);
 
-        flags = u64_stats_update_begin_irqsave(&tx_q->txq_stats.syncp);
-        tx_q->txq_stats.tx_packets += tx_packets;
-        tx_q->txq_stats.tx_pkt_n += tx_packets;
-        tx_q->txq_stats.tx_clean++;
-        u64_stats_update_end_irqrestore(&tx_q->txq_stats.syncp, flags);
+        flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
+        txq_stats->tx_packets += tx_packets;
+        txq_stats->tx_pkt_n += tx_packets;
+        txq_stats->tx_clean++;
+        u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
 
         priv->xstats.tx_errors += tx_errors;
 
@@ -4114,6 +4116,7 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
         int nfrags = skb_shinfo(skb)->nr_frags;
         u32 queue = skb_get_queue_mapping(skb);
         unsigned int first_entry, tx_packets;
+        struct stmmac_txq_stats *txq_stats;
         int tmp_pay_len = 0, first_tx;
         struct stmmac_tx_queue *tx_q;
         bool has_vlan, set_ic;
@@ -4124,6 +4127,7 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
         int i;
 
         tx_q = &priv->dma_conf.tx_queue[queue];
+        txq_stats = &priv->xstats.txq_stats[queue];
         first_tx = tx_q->cur_tx;
 
         /* Compute header lengths */
@@ -4282,13 +4286,13 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
                 netif_tx_stop_queue(netdev_get_tx_queue(priv->dev, queue));
         }
 
-        flags = u64_stats_update_begin_irqsave(&tx_q->txq_stats.syncp);
-        tx_q->txq_stats.tx_bytes += skb->len;
-        tx_q->txq_stats.tx_tso_frames++;
-        tx_q->txq_stats.tx_tso_nfrags += nfrags;
+        flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
+        txq_stats->tx_bytes += skb->len;
+        txq_stats->tx_tso_frames++;
+        txq_stats->tx_tso_nfrags += nfrags;
         if (set_ic)
-                tx_q->txq_stats.tx_set_ic_bit++;
-        u64_stats_update_end_irqrestore(&tx_q->txq_stats.syncp, flags);
+                txq_stats->tx_set_ic_bit++;
+        u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
 
         if (priv->sarc_type)
                 stmmac_set_desc_sarc(priv, first, priv->sarc_type);
@@ -4359,6 +4363,7 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
         u32 queue = skb_get_queue_mapping(skb);
         int nfrags = skb_shinfo(skb)->nr_frags;
         int gso = skb_shinfo(skb)->gso_type;
+        struct stmmac_txq_stats *txq_stats;
         struct dma_edesc *tbs_desc = NULL;
         struct dma_desc *desc, *first;
         struct stmmac_tx_queue *tx_q;
@@ -4368,6 +4373,7 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
         dma_addr_t des;
 
         tx_q = &priv->dma_conf.tx_queue[queue];
+        txq_stats = &priv->xstats.txq_stats[queue];
         first_tx = tx_q->cur_tx;
 
         if (priv->tx_path_in_lpi_mode && priv->eee_sw_timer_en)
@@ -4519,11 +4525,11 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
                 netif_tx_stop_queue(netdev_get_tx_queue(priv->dev, queue));
         }
 
-        flags = u64_stats_update_begin_irqsave(&tx_q->txq_stats.syncp);
-        tx_q->txq_stats.tx_bytes += skb->len;
+        flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
+        txq_stats->tx_bytes += skb->len;
         if (set_ic)
-                tx_q->txq_stats.tx_set_ic_bit++;
-        u64_stats_update_end_irqrestore(&tx_q->txq_stats.syncp, flags);
+                txq_stats->tx_set_ic_bit++;
+        u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
 
         if (priv->sarc_type)
                 stmmac_set_desc_sarc(priv, first, priv->sarc_type);
@@ -4730,6 +4736,7 @@ static unsigned int stmmac_rx_buf2_len(struct stmmac_priv *priv,
 static int stmmac_xdp_xmit_xdpf(struct stmmac_priv *priv, int queue,
                                 struct xdp_frame *xdpf, bool dma_map)
 {
+        struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[queue];
         struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue];
         unsigned int entry = tx_q->cur_tx;
         struct dma_desc *tx_desc;
@@ -4789,9 +4796,9 @@ static int stmmac_xdp_xmit_xdpf(struct stmmac_priv *priv, int queue,
                 unsigned long flags;
                 tx_q->tx_count_frames = 0;
                 stmmac_set_tx_ic(priv, tx_desc);
-                flags = u64_stats_update_begin_irqsave(&tx_q->txq_stats.syncp);
-                tx_q->txq_stats.tx_set_ic_bit++;
-                u64_stats_update_end_irqrestore(&tx_q->txq_stats.syncp, flags);
+                flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
+                txq_stats->tx_set_ic_bit++;
+                u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
         }
 
         stmmac_enable_dma_transmission(priv, priv->ioaddr);
@@ -4936,7 +4943,7 @@ static void stmmac_dispatch_skb_zc(struct stmmac_priv *priv, u32 queue,
                                    struct dma_desc *p, struct dma_desc *np,
                                    struct xdp_buff *xdp)
 {
-        struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue];
+        struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[queue];
         struct stmmac_channel *ch = &priv->channel[queue];
         unsigned int len = xdp->data_end - xdp->data;
         enum pkt_hash_types hash_type;
@@ -4966,10 +4973,10 @@ static void stmmac_dispatch_skb_zc(struct stmmac_priv *priv, u32 queue,
         skb_record_rx_queue(skb, queue);
         napi_gro_receive(&ch->rxtx_napi, skb);
 
-        flags = u64_stats_update_begin_irqsave(&rx_q->rxq_stats.syncp);
-        rx_q->rxq_stats.rx_pkt_n++;
-        rx_q->rxq_stats.rx_bytes += len;
-        u64_stats_update_end_irqrestore(&rx_q->rxq_stats.syncp, flags);
+        flags = u64_stats_update_begin_irqsave(&rxq_stats->syncp);
+        rxq_stats->rx_pkt_n++;
+        rxq_stats->rx_bytes += len;
+        u64_stats_update_end_irqrestore(&rxq_stats->syncp, flags);
 }
 
 static bool stmmac_rx_refill_zc(struct stmmac_priv *priv, u32 queue, u32 budget)
@@ -5042,6 +5049,7 @@ static struct stmmac_xdp_buff *xsk_buff_to_stmmac_ctx(struct xdp_buff *xdp)
 
 static int stmmac_rx_zc(struct stmmac_priv *priv, int limit, u32 queue)
 {
+        struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[queue];
         struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue];
         unsigned int count = 0, error = 0, len = 0;
         int dirty = stmmac_rx_dirty(priv, queue);
@@ -5205,9 +5213,9 @@ static int stmmac_rx_zc(struct stmmac_priv *priv, int limit, u32 queue)
 
         stmmac_finalize_xdp_rx(priv, xdp_status);
 
-        flags = u64_stats_update_begin_irqsave(&rx_q->rxq_stats.syncp);
-        rx_q->rxq_stats.rx_pkt_n += count;
-        u64_stats_update_end_irqrestore(&rx_q->rxq_stats.syncp, flags);
+        flags = u64_stats_update_begin_irqsave(&rxq_stats->syncp);
+        rxq_stats->rx_pkt_n += count;
+        u64_stats_update_end_irqrestore(&rxq_stats->syncp, flags);
 
         priv->xstats.rx_dropped += rx_dropped;
         priv->xstats.rx_errors += rx_errors;
@@ -5235,6 +5243,7 @@ static int stmmac_rx_zc(struct stmmac_priv *priv, int limit, u32 queue)
 static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
 {
         u32 rx_errors = 0, rx_dropped = 0, rx_bytes = 0, rx_packets = 0;
+        struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[queue];
         struct stmmac_rx_queue *rx_q = &priv->dma_conf.rx_queue[queue];
         struct stmmac_channel *ch = &priv->channel[queue];
         unsigned int count = 0, error = 0, len = 0;
@@ -5496,11 +5505,11 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
 
         stmmac_rx_refill(priv, queue);
 
-        flags = u64_stats_update_begin_irqsave(&rx_q->rxq_stats.syncp);
-        rx_q->rxq_stats.rx_packets += rx_packets;
-        rx_q->rxq_stats.rx_bytes += rx_bytes;
-        rx_q->rxq_stats.rx_pkt_n += count;
-        u64_stats_update_end_irqrestore(&rx_q->rxq_stats.syncp, flags);
+        flags = u64_stats_update_begin_irqsave(&rxq_stats->syncp);
+        rxq_stats->rx_packets += rx_packets;
+        rxq_stats->rx_bytes += rx_bytes;
+        rxq_stats->rx_pkt_n += count;
+        u64_stats_update_end_irqrestore(&rxq_stats->syncp, flags);
 
         priv->xstats.rx_dropped += rx_dropped;
         priv->xstats.rx_errors += rx_errors;
@@ -5513,15 +5522,15 @@ static int stmmac_napi_poll_rx(struct napi_struct *napi, int budget)
         struct stmmac_channel *ch =
                 container_of(napi, struct stmmac_channel, rx_napi);
         struct stmmac_priv *priv = ch->priv_data;
-        struct stmmac_rx_queue *rx_q;
+        struct stmmac_rxq_stats *rxq_stats;
         u32 chan = ch->index;
         unsigned long flags;
         int work_done;
||||
|
||||
rx_q = &priv->dma_conf.rx_queue[chan];
|
||||
flags = u64_stats_update_begin_irqsave(&rx_q->rxq_stats.syncp);
|
||||
rx_q->rxq_stats.napi_poll++;
|
||||
u64_stats_update_end_irqrestore(&rx_q->rxq_stats.syncp, flags);
|
||||
rxq_stats = &priv->xstats.rxq_stats[chan];
|
||||
flags = u64_stats_update_begin_irqsave(&rxq_stats->syncp);
|
||||
rxq_stats->napi_poll++;
|
||||
u64_stats_update_end_irqrestore(&rxq_stats->syncp, flags);
|
||||
|
||||
work_done = stmmac_rx(priv, budget, chan);
|
||||
if (work_done < budget && napi_complete_done(napi, work_done)) {
|
||||
@ -5540,15 +5549,15 @@ static int stmmac_napi_poll_tx(struct napi_struct *napi, int budget)
|
||||
struct stmmac_channel *ch =
|
||||
container_of(napi, struct stmmac_channel, tx_napi);
|
||||
struct stmmac_priv *priv = ch->priv_data;
|
||||
struct stmmac_tx_queue *tx_q;
|
||||
struct stmmac_txq_stats *txq_stats;
|
||||
u32 chan = ch->index;
|
||||
unsigned long flags;
|
||||
int work_done;
|
||||
|
||||
tx_q = &priv->dma_conf.tx_queue[chan];
|
||||
flags = u64_stats_update_begin_irqsave(&tx_q->txq_stats.syncp);
|
||||
tx_q->txq_stats.napi_poll++;
|
||||
u64_stats_update_end_irqrestore(&tx_q->txq_stats.syncp, flags);
|
||||
txq_stats = &priv->xstats.txq_stats[chan];
|
||||
flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
|
||||
txq_stats->napi_poll++;
|
||||
u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
|
||||
|
||||
work_done = stmmac_tx_clean(priv, budget, chan);
|
||||
work_done = min(work_done, budget);
|
||||
@ -5570,20 +5579,20 @@ static int stmmac_napi_poll_rxtx(struct napi_struct *napi, int budget)
|
||||
container_of(napi, struct stmmac_channel, rxtx_napi);
|
||||
struct stmmac_priv *priv = ch->priv_data;
|
||||
int rx_done, tx_done, rxtx_done;
|
||||
struct stmmac_rx_queue *rx_q;
|
||||
struct stmmac_tx_queue *tx_q;
|
||||
struct stmmac_rxq_stats *rxq_stats;
|
||||
struct stmmac_txq_stats *txq_stats;
|
||||
u32 chan = ch->index;
|
||||
unsigned long flags;
|
||||
|
||||
rx_q = &priv->dma_conf.rx_queue[chan];
|
||||
flags = u64_stats_update_begin_irqsave(&rx_q->rxq_stats.syncp);
|
||||
rx_q->rxq_stats.napi_poll++;
|
||||
u64_stats_update_end_irqrestore(&rx_q->rxq_stats.syncp, flags);
|
||||
rxq_stats = &priv->xstats.rxq_stats[chan];
|
||||
flags = u64_stats_update_begin_irqsave(&rxq_stats->syncp);
|
||||
rxq_stats->napi_poll++;
|
||||
u64_stats_update_end_irqrestore(&rxq_stats->syncp, flags);
|
||||
|
||||
tx_q = &priv->dma_conf.tx_queue[chan];
|
||||
flags = u64_stats_update_begin_irqsave(&tx_q->txq_stats.syncp);
|
||||
tx_q->txq_stats.napi_poll++;
|
||||
u64_stats_update_end_irqrestore(&tx_q->txq_stats.syncp, flags);
|
||||
txq_stats = &priv->xstats.txq_stats[chan];
|
||||
flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
|
||||
txq_stats->napi_poll++;
|
||||
u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
|
||||
|
||||
tx_done = stmmac_tx_clean(priv, budget, chan);
|
||||
tx_done = min(tx_done, budget);
|
||||
@ -6926,7 +6935,7 @@ static void stmmac_get_stats64(struct net_device *dev, struct rtnl_link_stats64
|
||||
int q;
|
||||
|
||||
for (q = 0; q < tx_cnt; q++) {
|
||||
struct stmmac_txq_stats *txq_stats = &priv->dma_conf.tx_queue[q].txq_stats;
|
||||
struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[q];
|
||||
u64 tx_packets;
|
||||
u64 tx_bytes;
|
||||
|
||||
@ -6941,7 +6950,7 @@ static void stmmac_get_stats64(struct net_device *dev, struct rtnl_link_stats64
|
||||
}
|
||||
|
||||
for (q = 0; q < rx_cnt; q++) {
|
||||
struct stmmac_rxq_stats *rxq_stats = &priv->dma_conf.rx_queue[q].rxq_stats;
|
||||
struct stmmac_rxq_stats *rxq_stats = &priv->xstats.rxq_stats[q];
|
||||
u64 rx_packets;
|
||||
u64 rx_bytes;
|
||||
|
||||
@ -7342,9 +7351,9 @@ int stmmac_dvr_probe(struct device *device,
|
||||
priv->dev = ndev;
|
||||
|
||||
for (i = 0; i < MTL_MAX_RX_QUEUES; i++)
|
||||
u64_stats_init(&priv->dma_conf.rx_queue[i].rxq_stats.syncp);
|
||||
u64_stats_init(&priv->xstats.rxq_stats[i].syncp);
|
||||
for (i = 0; i < MTL_MAX_TX_QUEUES; i++)
|
||||
u64_stats_init(&priv->dma_conf.tx_queue[i].txq_stats.syncp);
|
||||
u64_stats_init(&priv->xstats.txq_stats[i].syncp);
|
||||
|
||||
stmmac_set_ethtool_ops(ndev);
|
||||
priv->pause = pause;
|
||||
|
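Taken together, the stmmac hunks move every per-queue counter out of the DMA-side queue structs (which appear to be reallocated on reconfiguration, losing or invalidating the counters) into the persistent priv->xstats arrays, while keeping the u64_stats seqcount discipline. A minimal sketch of that writer/reader pattern, using made-up names (struct demo_stats, demo_account(), demo_fetch()) rather than the driver's:

	#include <linux/u64_stats_sync.h>

	struct demo_stats {
		u64 packets;
		u64 bytes;
		struct u64_stats_sync syncp;	/* guards 64-bit counters on 32-bit hosts */
	};

	/* writer side, e.g. a TX completion path */
	static void demo_account(struct demo_stats *s, unsigned int len)
	{
		unsigned long flags;

		flags = u64_stats_update_begin_irqsave(&s->syncp);
		s->packets++;
		s->bytes += len;
		u64_stats_update_end_irqrestore(&s->syncp, flags);
	}

	/* reader side, e.g. .ndo_get_stats64: retry if a writer raced us */
	static void demo_fetch(struct demo_stats *s, u64 *packets, u64 *bytes)
	{
		unsigned int start;

		do {
			start = u64_stats_fetch_begin(&s->syncp);
			*packets = s->packets;
			*bytes = s->bytes;
		} while (u64_stats_fetch_retry(&s->syncp, start));
	}

The key property the fix restores is that the struct holding the syncp and counters now lives as long as the netdev, so readers never dereference freed queue memory.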
@@ -199,6 +199,7 @@ config TI_ICSSG_PRUETH
 
 config TI_ICSS_IEP
 	tristate "TI PRU ICSS IEP driver"
 	depends on PTP_1588_CLOCK_OPTIONAL
+	depends on TI_PRUSS
 	default TI_PRUSS
 	help
@@ -2115,7 +2115,12 @@ static const struct ethtool_ops team_ethtool_ops = {
 static void team_setup_by_port(struct net_device *dev,
 			       struct net_device *port_dev)
 {
-	dev->header_ops	= port_dev->header_ops;
+	struct team *team = netdev_priv(dev);
+
+	if (port_dev->type == ARPHRD_ETHER)
+		dev->header_ops	= team->header_ops_cache;
+	else
+		dev->header_ops	= port_dev->header_ops;
 	dev->type = port_dev->type;
 	dev->hard_header_len = port_dev->hard_header_len;
 	dev->needed_headroom = port_dev->needed_headroom;
@@ -2162,8 +2167,11 @@ static int team_dev_type_check_change(struct net_device *dev,
 
 static void team_setup(struct net_device *dev)
 {
+	struct team *team = netdev_priv(dev);
+
 	ether_setup(dev);
 	dev->max_mtu = ETH_MAX_MTU;
+	team->header_ops_cache = dev->header_ops;
 
 	dev->netdev_ops = &team_netdev_ops;
 	dev->ethtool_ops = &team_ethtool_ops;
@@ -1049,12 +1049,11 @@ static bool tbnet_xmit_csum_and_map(struct tbnet *net, struct sk_buff *skb,
 		*tucso = ~csum_tcpudp_magic(ip_hdr(skb)->saddr,
 					    ip_hdr(skb)->daddr, 0,
 					    ip_hdr(skb)->protocol, 0);
-	} else if (skb_is_gso_v6(skb)) {
+	} else if (skb_is_gso(skb) && skb_is_gso_v6(skb)) {
 		tucso = dest + ((void *)&(tcp_hdr(skb)->check) - data);
 		*tucso = ~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
 					  &ipv6_hdr(skb)->daddr, 0,
 					  IPPROTO_TCP, 0);
-		return false;
 	} else if (protocol == htons(ETH_P_IPV6)) {
 		tucso = dest + skb_checksum_start_offset(skb) + skb->csum_offset;
 		*tucso = ~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
@@ -4331,6 +4331,10 @@ static size_t vxlan_get_size(const struct net_device *dev)
 		nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_REMCSUM_TX */
 		nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_REMCSUM_RX */
 		nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_LOCALBYPASS */
+		nla_total_size(0) + /* IFLA_VXLAN_GBP */
+		nla_total_size(0) + /* IFLA_VXLAN_GPE */
+		nla_total_size(0) + /* IFLA_VXLAN_REMCSUM_NOPARTIAL */
+		nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_VNIFILTER */
 		0;
 }
 
@@ -724,6 +724,10 @@ iscsi_sw_tcp_conn_bind(struct iscsi_cls_session *cls_session,
 		return -EEXIST;
 	}
 
+	err = -EINVAL;
+	if (!sk_is_tcp(sock->sk))
+		goto free_socket;
+
 	err = iscsi_conn_bind(cls_session, cls_conn, is_leading);
 	if (err)
 		goto free_socket;
@@ -49,7 +49,7 @@ word							\
 	____BTF_ID(symbol, word)
 
 #define __ID(prefix) \
-	__PASTE(prefix, __COUNTER__)
+	__PASTE(__PASTE(prefix, __COUNTER__), __LINE__)
 
 /*
  * The BTF_ID defines unique symbol for each ID pointing
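The one-line change above pastes __LINE__ on top of __COUNTER__ when minting the local symbol, which makes name collisions across expansions far less likely. A tiny standalone demo of the indirection-paste idiom, with local macro names (PASTE_, PASTE, UNIQ) so as not to shadow the kernel's own __PASTE:

	#include <stdio.h>

	#define PASTE_(a, b) a##b		/* pastes the literal tokens */
	#define PASTE(a, b) PASTE_(a, b)	/* indirection: expand args first */
	#define UNIQ(prefix) PASTE(PASTE(prefix, __COUNTER__), __LINE__)

	static int UNIQ(id_);	/* e.g. id_06: counter 0, declared on line 6 */
	static int UNIQ(id_);	/* e.g. id_17: counter 1, line 7 — distinct name */

	int main(void)
	{
		printf("two distinct identifiers were declared\n");
		return 0;
	}

The indirection layer matters: pasting through a second macro forces __COUNTER__ and __LINE__ to expand to their numeric values before ## concatenates them.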
@@ -189,6 +189,8 @@ struct team {
 	struct net_device *dev; /* associated netdevice */
 	struct team_pcpu_stats __percpu *pcpu_stats;
 
+	const struct header_ops *header_ops_cache;
+
 	struct mutex lock; /* used for overall locking, e.g. port lists write */
 
 	/*
@@ -1682,7 +1682,7 @@ struct nft_trans_gc {
 	struct net		*net;
 	struct nft_set		*set;
 	u32			seq;
-	u8			count;
+	u16			count;
 	void			*priv[NFT_TRANS_GC_BATCHCOUNT];
 	struct rcu_head		rcu;
 };
@@ -1700,8 +1700,9 @@ void nft_trans_gc_queue_sync_done(struct nft_trans_gc *trans);
 
 void nft_trans_gc_elem_add(struct nft_trans_gc *gc, void *priv);
 
-struct nft_trans_gc *nft_trans_gc_catchall(struct nft_trans_gc *gc,
-					   unsigned int gc_seq);
+struct nft_trans_gc *nft_trans_gc_catchall_async(struct nft_trans_gc *gc,
+						 unsigned int gc_seq);
+struct nft_trans_gc *nft_trans_gc_catchall_sync(struct nft_trans_gc *gc);
 
 void nft_setelem_data_deactivate(const struct net *net,
 				 const struct nft_set *set,
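Widening count from u8 to u16 matters because the batch array holds NFT_TRANS_GC_BATCHCOUNT entries — 256 upstream, an assumption in the demo below. A u8 counter wraps to 0 exactly when the batch fills, so any "is there space left" check keyed on it misbehaves right at the boundary:

	#include <stdio.h>

	#define NFT_TRANS_GC_BATCHCOUNT 256	/* assumed to match the kernel */

	int main(void)
	{
		unsigned char count = 0;	/* the old u8 counter */
		unsigned int i;

		for (i = 0; i < NFT_TRANS_GC_BATCHCOUNT; i++)
			count++;

		/* prints 0: a completely full batch looks empty */
		printf("count after %d increments: %u\n",
		       NFT_TRANS_GC_BATCHCOUNT, count);
		return 0;
	}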
@@ -1962,7 +1962,9 @@ union bpf_attr {
 *		performed again, if the helper is used in combination with
 *		direct packet access.
 *	Return
-*		0 on success, or a negative error in case of failure.
+*		0 on success, or a negative error in case of failure. Positive
+*		error indicates a potential drop or congestion in the target
+*		device. The particular positive error codes are not defined.
 *
 * u64 bpf_get_current_pid_tgid(void)
 * 	Description
@@ -8501,7 +8501,7 @@ bool btf_nested_type_is_trusted(struct bpf_verifier_log *log,
 	tname = btf_name_by_offset(btf, walk_type->name_off);
 
 	ret = snprintf(safe_tname, sizeof(safe_tname), "%s%s", tname, suffix);
-	if (ret < 0)
+	if (ret >= sizeof(safe_tname))
 		return false;
 
 	safe_id = btf_find_by_name_kind(btf, safe_tname, BTF_INFO_KIND(walk_type->info));
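The hunk above swaps a wrong error check for the right one: snprintf() returns a negative value only on encoding errors, while truncation is reported as the length that would have been written. A quick user-space demonstration of the difference:

	#include <stdio.h>

	int main(void)
	{
		char buf[8];
		int ret = snprintf(buf, sizeof(buf), "%s%s",
				   "some_long_type", "__safe");

		/* ret is 20 here even though only 7 chars + NUL were stored;
		 * it is never negative for mere truncation, so the old
		 * `ret < 0` test could never catch an overlong name. */
		printf("ret=%d, truncated=%s\n", ret,
		       ret >= (int)sizeof(buf) ? "yes" : "no");
		return 0;
	}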
@@ -785,7 +785,8 @@ static void replace_effective_prog(struct cgroup *cgrp,
 *			      to descendants
 * @cgrp: The cgroup which descendants to traverse
 * @link: A link for which to replace BPF program
-* @type: Type of attach operation
+* @new_prog: &struct bpf_prog for the target BPF program with its refcnt
+*            incremented
 *
 * Must be called with cgroup_mutex held.
 */
@@ -1334,7 +1335,7 @@ int cgroup_bpf_prog_query(const union bpf_attr *attr,
 * __cgroup_bpf_run_filter_skb() - Run a program for packet filtering
 * @sk: The socket sending or receiving traffic
 * @skb: The skb that is being sent or received
-* @type: The type of program to be executed
+* @atype: The type of program to be executed
 *
 * If no socket is passed, or the socket is not of type INET or INET6,
 * this function does nothing and returns 0.
@@ -1424,7 +1425,7 @@ EXPORT_SYMBOL(__cgroup_bpf_run_filter_skb);
/**
 * __cgroup_bpf_run_filter_sk() - Run a program on a sock
 * @sk: sock structure to manipulate
-* @type: The type of program to be executed
+* @atype: The type of program to be executed
 *
 * socket is passed is expected to be of type INET or INET6.
 *
@@ -1449,7 +1450,7 @@ EXPORT_SYMBOL(__cgroup_bpf_run_filter_sk);
 *						provided by user sockaddr
 * @sk: sock struct that will use sockaddr
 * @uaddr: sockaddr struct provided by user
-* @type: The type of program to be executed
+* @atype: The type of program to be executed
 * @t_ctx: Pointer to attach type specific context
 * @flags: Pointer to u32 which contains higher bits of BPF program
 *         return value (OR'ed together).
@@ -1496,7 +1497,7 @@ EXPORT_SYMBOL(__cgroup_bpf_run_filter_sock_addr);
 * @sock_ops: bpf_sock_ops_kern struct to pass to program. Contains
 *            sk with connection information (IP addresses, etc.) May not contain
 *            cgroup info if it is a req sock.
-* @type: The type of program to be executed
+* @atype: The type of program to be executed
 *
 * socket passed is expected to be of type INET or INET6.
 *
@@ -1670,7 +1671,7 @@ const struct bpf_verifier_ops cg_dev_verifier_ops = {
 * @ppos: value-result argument: value is position at which read from or write
 *        to sysctl is happening, result is new position if program overrode it,
 *        initial value otherwise
-* @type: type of program to be executed
+* @atype: type of program to be executed
 *
 * Program is run when sysctl is being accessed, either read or written, and
 * can allow or deny such access.
@@ -459,8 +459,7 @@ static void notrace irq_work_raise(struct bpf_mem_cache *c)
 * Typical case will be between 11K and 116K closer to 11K.
 * bpf progs can and should share bpf_mem_cache when possible.
 */
-
-static void prefill_mem_cache(struct bpf_mem_cache *c, int cpu)
+static void init_refill_work(struct bpf_mem_cache *c)
 {
 	init_irq_work(&c->refill_work, bpf_mem_refill);
 	if (c->unit_size <= 256) {
@@ -476,7 +475,10 @@ static void prefill_mem_cache(struct bpf_mem_cache *c, int cpu)
 		c->high_watermark = max(96 * 256 / c->unit_size, 3);
 	}
 	c->batch = max((c->high_watermark - c->low_watermark) / 4 * 3, 1);
+}
 
+static void prefill_mem_cache(struct bpf_mem_cache *c, int cpu)
+{
 	/* To avoid consuming memory assume that 1st run of bpf
 	 * prog won't be doing more than 4 map_update_elem from
 	 * irq disabled region
@@ -484,6 +486,31 @@ static void prefill_mem_cache(struct bpf_mem_cache *c, int cpu)
 	alloc_bulk(c, c->unit_size <= 256 ? 4 : 1, cpu_to_node(cpu), false);
 }
 
+static int check_obj_size(struct bpf_mem_cache *c, unsigned int idx)
+{
+	struct llist_node *first;
+	unsigned int obj_size;
+
+	/* For per-cpu allocator, the size of free objects in free list doesn't
+	 * match with unit_size and now there is no way to get the size of
+	 * per-cpu pointer saved in free object, so just skip the checking.
+	 */
+	if (c->percpu_size)
+		return 0;
+
+	first = c->free_llist.first;
+	if (!first)
+		return 0;
+
+	obj_size = ksize(first);
+	if (obj_size != c->unit_size) {
+		WARN_ONCE(1, "bpf_mem_cache[%u]: unexpected object size %u, expect %u\n",
+			  idx, obj_size, c->unit_size);
+		return -EINVAL;
+	}
+	return 0;
+}
+
 /* When size != 0 bpf_mem_cache for each cpu.
 * This is typical bpf hash map use case when all elements have equal size.
 *
@@ -494,10 +521,10 @@ static void prefill_mem_cache(struct bpf_mem_cache *c, int cpu)
 int bpf_mem_alloc_init(struct bpf_mem_alloc *ma, int size, bool percpu)
 {
 	static u16 sizes[NUM_CACHES] = {96, 192, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096};
+	int cpu, i, err, unit_size, percpu_size = 0;
 	struct bpf_mem_caches *cc, __percpu *pcc;
 	struct bpf_mem_cache *c, __percpu *pc;
 	struct obj_cgroup *objcg = NULL;
-	int cpu, i, unit_size, percpu_size = 0;
 
 	if (size) {
 		pc = __alloc_percpu_gfp(sizeof(*pc), 8, GFP_KERNEL);
@@ -521,6 +548,7 @@ int bpf_mem_alloc_init(struct bpf_mem_alloc *ma, int size, bool percpu)
 			c->objcg = objcg;
 			c->percpu_size = percpu_size;
 			c->tgt = c;
+			init_refill_work(c);
 			prefill_mem_cache(c, cpu);
 		}
 		ma->cache = pc;
@@ -534,6 +562,7 @@ int bpf_mem_alloc_init(struct bpf_mem_alloc *ma, int size, bool percpu)
 	pcc = __alloc_percpu_gfp(sizeof(*cc), 8, GFP_KERNEL);
 	if (!pcc)
 		return -ENOMEM;
+	err = 0;
 #ifdef CONFIG_MEMCG_KMEM
 	objcg = get_obj_cgroup_from_current();
 #endif
@@ -544,11 +573,30 @@ int bpf_mem_alloc_init(struct bpf_mem_alloc *ma, int size, bool percpu)
 			c->unit_size = sizes[i];
 			c->objcg = objcg;
 			c->tgt = c;
+
+			init_refill_work(c);
+			/* Another bpf_mem_cache will be used when allocating
+			 * c->unit_size in bpf_mem_alloc(), so doesn't prefill
+			 * for the bpf_mem_cache because these free objects will
+			 * never be used.
+			 */
+			if (i != bpf_mem_cache_idx(c->unit_size))
+				continue;
 			prefill_mem_cache(c, cpu);
+			err = check_obj_size(c, i);
+			if (err)
+				goto out;
 		}
 	}
+
+out:
 	ma->caches = pcc;
-	return 0;
+	/* refill_work is either zeroed or initialized, so it is safe to
+	 * call irq_work_sync().
+	 */
+	if (err)
+		bpf_mem_alloc_destroy(ma);
+	return err;
 }
 
 static void drain_mem_cache(struct bpf_mem_cache *c)
@@ -916,3 +964,41 @@ void notrace *bpf_mem_cache_alloc_flags(struct bpf_mem_alloc *ma, gfp_t flags)
 
 	return !ret ? NULL : ret + LLIST_NODE_SZ;
 }
+
+/* Most of the logic is taken from setup_kmalloc_cache_index_table() */
+static __init int bpf_mem_cache_adjust_size(void)
+{
+	unsigned int size, index;
+
+	/* Normally KMALLOC_MIN_SIZE is 8-bytes, but it can be
+	 * up-to 256-bytes.
+	 */
+	size = KMALLOC_MIN_SIZE;
+	if (size <= 192)
+		index = size_index[(size - 1) / 8];
+	else
+		index = fls(size - 1) - 1;
+	for (size = 8; size < KMALLOC_MIN_SIZE && size <= 192; size += 8)
+		size_index[(size - 1) / 8] = index;
+
+	/* The minimal alignment is 64-bytes, so disable 96-bytes cache and
+	 * use 128-bytes cache instead.
+	 */
+	if (KMALLOC_MIN_SIZE >= 64) {
+		index = size_index[(128 - 1) / 8];
+		for (size = 64 + 8; size <= 96; size += 8)
+			size_index[(size - 1) / 8] = index;
+	}
+
+	/* The minimal alignment is 128-bytes, so disable 192-bytes cache and
+	 * use 256-bytes cache instead.
+	 */
+	if (KMALLOC_MIN_SIZE >= 128) {
+		index = fls(256 - 1) - 1;
+		for (size = 128 + 8; size <= 192; size += 8)
+			size_index[(size - 1) / 8] = index;
+	}
+
+	return 0;
+}
+subsys_initcall(bpf_mem_cache_adjust_size);
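bpf_mem_cache_adjust_size() mirrors the slab's own index table: when KMALLOC_MIN_SIZE forces a larger minimum alignment, the 96- and 192-byte kmalloc caches disappear and requests round up to the next power of two. A user-space toy (bucket_for() is hypothetical, power-of-two buckets only) showing the rounding that check_obj_size() guards against:

	#include <stdio.h>

	#define KMALLOC_MIN_SIZE 64	/* assumed; 8 on most configs, up to 256 */

	/* Toy size->bucket rounding; the real table also offers 96 and 192
	 * byte buckets when the minimum alignment permits them. */
	static unsigned int bucket_for(unsigned int size)
	{
		unsigned int b = 8;

		if (size <= KMALLOC_MIN_SIZE)
			return KMALLOC_MIN_SIZE;
		while (b < size)
			b <<= 1;
		return b;
	}

	int main(void)
	{
		/* With 64-byte minimum alignment the 96-byte cache is gone,
		 * so a 96-byte object really lives in the 128-byte slab —
		 * exactly the unit_size mismatch check_obj_size() catches. */
		printf("96 -> %u\n", bucket_for(96));	/* prints 128 */
		return 0;
	}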
@@ -199,12 +199,14 @@ static int __bpf_prog_dev_bound_init(struct bpf_prog *prog, struct net_device *n
 	offload->netdev = netdev;
 
 	ondev = bpf_offload_find_netdev(offload->netdev);
+	/* When program is offloaded require presence of "true"
+	 * bpf_offload_netdev, avoid the one created for !ondev case below.
+	 */
+	if (bpf_prog_is_offloaded(prog->aux) && (!ondev || !ondev->offdev)) {
+		err = -EINVAL;
+		goto err_free;
+	}
 	if (!ondev) {
-		if (bpf_prog_is_offloaded(prog->aux)) {
-			err = -EINVAL;
-			goto err_free;
-		}
-
 		/* When only binding to the device, explicitly
 		 * create an entry in the hashtable.
 		 */
@@ -98,7 +98,12 @@ static long __queue_map_get(struct bpf_map *map, void *value, bool delete)
 	int err = 0;
 	void *ptr;
 
-	raw_spin_lock_irqsave(&qs->lock, flags);
+	if (in_nmi()) {
+		if (!raw_spin_trylock_irqsave(&qs->lock, flags))
+			return -EBUSY;
+	} else {
+		raw_spin_lock_irqsave(&qs->lock, flags);
+	}
 
 	if (queue_stack_map_is_empty(qs)) {
 		memset(value, 0, qs->map.value_size);
@@ -128,7 +133,12 @@ static long __stack_map_get(struct bpf_map *map, void *value, bool delete)
 	void *ptr;
 	u32 index;
 
-	raw_spin_lock_irqsave(&qs->lock, flags);
+	if (in_nmi()) {
+		if (!raw_spin_trylock_irqsave(&qs->lock, flags))
+			return -EBUSY;
+	} else {
+		raw_spin_lock_irqsave(&qs->lock, flags);
+	}
 
 	if (queue_stack_map_is_empty(qs)) {
 		memset(value, 0, qs->map.value_size);
@@ -193,7 +203,12 @@ static long queue_stack_map_push_elem(struct bpf_map *map, void *value,
 	if (flags & BPF_NOEXIST || flags > BPF_EXIST)
 		return -EINVAL;
 
-	raw_spin_lock_irqsave(&qs->lock, irq_flags);
+	if (in_nmi()) {
+		if (!raw_spin_trylock_irqsave(&qs->lock, irq_flags))
+			return -EBUSY;
+	} else {
+		raw_spin_lock_irqsave(&qs->lock, irq_flags);
+	}
 
 	if (queue_stack_map_is_full(qs)) {
 		if (!replace) {
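All three hunks apply the same idiom, which is the fix the merge summary calls "avoid deadlock when using queue and stack maps from NMI". A minimal sketch of the pattern with a hypothetical map type (struct demo_map, demo_map_op()), not the kernel's actual code:

	#include <linux/spinlock.h>
	#include <linux/preempt.h>	/* in_nmi() */
	#include <linux/errno.h>

	struct demo_map {
		raw_spinlock_t lock;
		/* ... elements ... */
	};

	static long demo_map_op(struct demo_map *m)
	{
		unsigned long flags;

		/* An NMI can fire while this CPU already holds m->lock; a
		 * plain spin would then deadlock against ourselves, so from
		 * NMI context only try-lock and report -EBUSY instead. */
		if (in_nmi()) {
			if (!raw_spin_trylock_irqsave(&m->lock, flags))
				return -EBUSY;
		} else {
			raw_spin_lock_irqsave(&m->lock, flags);
		}

		/* ... critical section ... */

		raw_spin_unlock_irqrestore(&m->lock, flags);
		return 0;
	}

The trade-off is that an NMI-context caller can now observe a spurious -EBUSY, which BPF programs attached to perf events must be prepared to handle.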
@@ -2853,6 +2853,17 @@ static int get_modules_for_addrs(struct module ***mods, unsigned long *addrs, u3
 	return arr.mods_cnt;
 }
 
+static int addrs_check_error_injection_list(unsigned long *addrs, u32 cnt)
+{
+	u32 i;
+
+	for (i = 0; i < cnt; i++) {
+		if (!within_error_injection_list(addrs[i]))
+			return -EINVAL;
+	}
+	return 0;
+}
+
 int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
 {
 	struct bpf_kprobe_multi_link *link = NULL;
@@ -2930,6 +2941,11 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
 		goto error;
 	}
 
+	if (prog->kprobe_override && addrs_check_error_injection_list(addrs, cnt)) {
+		err = -EINVAL;
+		goto error;
+	}
+
 	link = kzalloc(sizeof(*link), GFP_KERNEL);
 	if (!link) {
 		err = -ENOMEM;
@@ -3207,8 +3223,10 @@ int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr
 		rcu_read_lock();
 		task = get_pid_task(find_vpid(pid), PIDTYPE_PID);
 		rcu_read_unlock();
-		if (!task)
+		if (!task) {
+			err = -ESRCH;
 			goto error_path_put;
+		}
 	}
 
 	err = -ENOMEM;
@@ -10,7 +10,7 @@ menuconfig HAMRADIO
 	  If you want to connect your Linux box to an amateur radio, answer Y
 	  here. You want to read <https://www.tapr.org/>
 	  and more specifically about AX.25 on Linux
-	  <http://www.linux-ax25.org/>.
+	  <https://linux-ax25.in-berlin.de>.
 
 	  Note that the answer to this question won't directly affect the
 	  kernel: saying N will just cause the configurator to skip all
@@ -61,7 +61,7 @@ config AX25_DAMA_SLAVE
 	  configuration. Linux cannot yet act as a DAMA server. This option
 	  only compiles DAMA slave support into the kernel. It still needs to
 	  be enabled at runtime. For more about DAMA see
-	  <http://www.linux-ax25.org>. If unsure, say Y.
+	  <https://linux-ax25.in-berlin.de>. If unsure, say Y.
 
 # placeholder until implemented
 config AX25_DAMA_MASTER
@@ -87,9 +87,9 @@ config NETROM
 	  A comprehensive listing of all the software for Linux amateur radio
 	  users as well as information about how to configure an AX.25 port is
 	  contained in the Linux Ham Wiki, available from
-	  <http://www.linux-ax25.org>. You also might want to check out the
-	  file <file:Documentation/networking/ax25.rst>. More information about
-	  digital amateur radio in general is on the WWW at
+	  <https://linux-ax25.in-berlin.de>. You also might want to check out
+	  the file <file:Documentation/networking/ax25.rst>. More information
+	  about digital amateur radio in general is on the WWW at
 	  <https://www.tapr.org/>.
 
 	  To compile this driver as a module, choose M here: the
@@ -106,9 +106,9 @@ config ROSE
 	  A comprehensive listing of all the software for Linux amateur radio
 	  users as well as information about how to configure an AX.25 port is
 	  contained in the Linux Ham Wiki, available from
-	  <http://www.linux-ax25.org>. You also might want to check out the
-	  file <file:Documentation/networking/ax25.rst>. More information about
-	  digital amateur radio in general is on the WWW at
+	  <https://linux-ax25.in-berlin.de>. You also might want to check out
+	  the file <file:Documentation/networking/ax25.rst>. More information
+	  about digital amateur radio in general is on the WWW at
 	  <https://www.tapr.org/>.
 
 	  To compile this driver as a module, choose M here: the
@@ -124,7 +124,7 @@ static int deliver_clone(const struct net_bridge_port *prev,
 
 	skb = skb_clone(skb, GFP_ATOMIC);
 	if (!skb) {
-		dev->stats.tx_dropped++;
+		DEV_STATS_INC(dev, tx_dropped);
 		return -ENOMEM;
 	}
 
@@ -268,7 +268,7 @@ static void maybe_deliver_addr(struct net_bridge_port *p, struct sk_buff *skb,
 
 	skb = skb_copy(skb, GFP_ATOMIC);
 	if (!skb) {
-		dev->stats.tx_dropped++;
+		DEV_STATS_INC(dev, tx_dropped);
 		return;
 	}
 
@@ -181,12 +181,12 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb
 			if ((mdst && mdst->host_joined) ||
 			    br_multicast_is_router(brmctx, skb)) {
 				local_rcv = true;
-				br->dev->stats.multicast++;
+				DEV_STATS_INC(br->dev, multicast);
 			}
 			mcast_hit = true;
 		} else {
 			local_rcv = true;
-			br->dev->stats.multicast++;
+			DEV_STATS_INC(br->dev, multicast);
 		}
 		break;
 	case BR_PKT_UNICAST:
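Both bridge files switch plain `++` on net_device stats to DEV_STATS_INC(). The old increments are unlocked read-modify-write sequences on fields that several CPUs update concurrently, so counts can be lost (and KCSAN flags them as data races). The helper's rough shape, reproduced from memory — check include/linux/netdevice.h for the authoritative definition:

	/* The net_device stats fields are readable both as unsigned long and
	 * as atomic_long_t; these helpers take the atomic view, so concurrent
	 * increments from forwarding paths on different CPUs cannot be lost. */
	#define DEV_STATS_INC(DEV, FIELD)	atomic_long_inc(&(DEV)->stats.__##FIELD)
	#define DEV_STATS_ADD(DEV, FIELD, VAL)	\
			atomic_long_add((VAL), &(DEV)->stats.__##FIELD)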
@@ -69,7 +69,7 @@
 */
 
 #include <linux/uaccess.h>
-#include <linux/bitops.h>
+#include <linux/bitmap.h>
 #include <linux/capability.h>
 #include <linux/cpu.h>
 #include <linux/types.h>
@@ -1080,7 +1080,7 @@ static int __dev_alloc_name(struct net *net, const char *name, char *buf)
 		return -EINVAL;
 
 	/* Use one page as a bit array of possible slots */
-	inuse = (unsigned long *) get_zeroed_page(GFP_ATOMIC);
+	inuse = bitmap_zalloc(max_netdevices, GFP_ATOMIC);
 	if (!inuse)
 		return -ENOMEM;
 
@@ -1109,7 +1109,7 @@ static int __dev_alloc_name(struct net *net, const char *name, char *buf)
 	}
 
 		i = find_first_zero_bit(inuse, max_netdevices);
-		free_page((unsigned long) inuse);
+		bitmap_free(inuse);
 	}
 
 	snprintf(buf, IFNAMSIZ, name, i);
@@ -1446,7 +1446,7 @@ bool __skb_flow_dissect(const struct net *net,
 			break;
 		}
 
-		nhoff += ntohs(hdr->message_length);
+		nhoff += sizeof(struct ptp_header);
 		fdret = FLOW_DISSECT_RET_OUT_GOOD;
 		break;
 	}
@@ -254,13 +254,8 @@ static int dccp_v4_err(struct sk_buff *skb, u32 info)
 	int err;
 	struct net *net = dev_net(skb->dev);
 
-	/* For the first __dccp_basic_hdr_len() check, we only need dh->dccph_x,
-	 * which is in byte 7 of the dccp header.
-	 * Our caller (icmp_socket_deliver()) already pulled 8 bytes for us.
-	 *
-	 * Later on, we want to access the sequence number fields, which are
-	 * beyond 8 bytes, so we have to pskb_may_pull() ourselves.
-	 */
-	dh = (struct dccp_hdr *)(skb->data + offset);
+	if (!pskb_may_pull(skb, offset + sizeof(*dh)))
+		return -EINVAL;
+	dh = (struct dccp_hdr *)(skb->data + offset);
 	if (!pskb_may_pull(skb, offset + __dccp_basic_hdr_len(dh)))
 		return -EINVAL;
@@ -83,13 +83,8 @@ static int dccp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
 	__u64 seq;
 	struct net *net = dev_net(skb->dev);
 
-	/* For the first __dccp_basic_hdr_len() check, we only need dh->dccph_x,
-	 * which is in byte 7 of the dccp header.
-	 * Our caller (icmpv6_notify()) already pulled 8 bytes for us.
-	 *
-	 * Later on, we want to access the sequence number fields, which are
-	 * beyond 8 bytes, so we have to pskb_may_pull() ourselves.
-	 */
-	dh = (struct dccp_hdr *)(skb->data + offset);
+	if (!pskb_may_pull(skb, offset + sizeof(*dh)))
+		return -EINVAL;
+	dh = (struct dccp_hdr *)(skb->data + offset);
 	if (!pskb_may_pull(skb, offset + __dccp_basic_hdr_len(dh)))
 		return -EINVAL;
@@ -235,7 +235,7 @@ static void handshake_req_submit_test4(struct kunit *test)
 	KUNIT_EXPECT_PTR_EQ(test, req, result);
 
 	handshake_req_cancel(sock->sk);
-	sock_release(sock);
+	fput(filp);
 }
 
 static void handshake_req_submit_test5(struct kunit *test)
@@ -272,7 +272,7 @@ static void handshake_req_submit_test5(struct kunit *test)
 	/* Assert */
 	KUNIT_EXPECT_EQ(test, err, -EAGAIN);
 
-	sock_release(sock);
+	fput(filp);
 	hn->hn_pending = saved;
 }
 
@@ -306,7 +306,7 @@ static void handshake_req_submit_test6(struct kunit *test)
 	KUNIT_EXPECT_EQ(test, err, -EBUSY);
 
 	handshake_req_cancel(sock->sk);
-	sock_release(sock);
+	fput(filp);
 }
 
 static void handshake_req_cancel_test1(struct kunit *test)
@@ -340,7 +340,7 @@ static void handshake_req_cancel_test1(struct kunit *test)
 	/* Assert */
 	KUNIT_EXPECT_TRUE(test, result);
 
-	sock_release(sock);
+	fput(filp);
 }
 
 static void handshake_req_cancel_test2(struct kunit *test)
@@ -382,7 +382,7 @@ static void handshake_req_cancel_test2(struct kunit *test)
 	/* Assert */
 	KUNIT_EXPECT_TRUE(test, result);
 
-	sock_release(sock);
+	fput(filp);
 }
 
 static void handshake_req_cancel_test3(struct kunit *test)
@@ -427,7 +427,7 @@ static void handshake_req_cancel_test3(struct kunit *test)
 	/* Assert */
 	KUNIT_EXPECT_FALSE(test, result);
 
-	sock_release(sock);
+	fput(filp);
 }
 
 static struct handshake_req *handshake_req_destroy_test;
@@ -471,7 +471,7 @@ static void handshake_req_destroy_test1(struct kunit *test)
 	handshake_req_cancel(sock->sk);
 
 	/* Act */
-	sock_release(sock);
+	fput(filp);
 
 	/* Assert */
 	KUNIT_EXPECT_PTR_EQ(test, handshake_req_destroy_test, req);
@@ -288,13 +288,13 @@ void hsr_handle_sup_frame(struct hsr_frame_info *frame)
 
 	/* And leave the HSR tag. */
 	if (ethhdr->h_proto == htons(ETH_P_HSR)) {
-		pull_size = sizeof(struct ethhdr);
+		pull_size = sizeof(struct hsr_tag);
 		skb_pull(skb, pull_size);
 		total_pull_size += pull_size;
 	}
 
 	/* And leave the HSR sup tag. */
-	pull_size = sizeof(struct hsr_tag);
+	pull_size = sizeof(struct hsr_sup_tag);
 	skb_pull(skb, pull_size);
 	total_pull_size += pull_size;
 
@@ -83,7 +83,7 @@ struct hsr_vlan_ethhdr {
 struct hsr_sup_tlv {
 	u8		HSR_TLV_type;
 	u8		HSR_TLV_length;
-};
+} __packed;
 
 /* HSR/PRP Supervision Frame data types.
 * Field names as defined in the IEC:2010 standard for HSR.
@@ -1213,6 +1213,7 @@ EXPORT_INDIRECT_CALLABLE(ipv4_dst_check);
 
 static void ipv4_send_dest_unreach(struct sk_buff *skb)
 {
+	struct net_device *dev;
 	struct ip_options opt;
 	int res;
 
@@ -1230,7 +1231,8 @@ static void ipv4_send_dest_unreach(struct sk_buff *skb)
 	opt.optlen = ip_hdr(skb)->ihl * 4 - sizeof(struct iphdr);
 
 	rcu_read_lock();
-	res = __ip_options_compile(dev_net(skb->dev), &opt, skb, NULL);
+	dev = skb->dev ? skb->dev : skb_rtable(skb)->dst.dev;
+	res = __ip_options_compile(dev_net(dev), &opt, skb, NULL);
 	rcu_read_unlock();
 
 	if (res)
@@ -1269,12 +1269,13 @@ static void mptcp_set_rwin(struct tcp_sock *tp, struct tcphdr *th)
 
 			if (rcv_wnd == rcv_wnd_old)
 				break;
-			if (before64(rcv_wnd_new, rcv_wnd)) {
+
+			rcv_wnd_old = rcv_wnd;
+			if (before64(rcv_wnd_new, rcv_wnd_old)) {
 				MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_RCVWNDCONFLICTUPDATE);
 				goto raise_win;
 			}
 			MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_RCVWNDCONFLICT);
-			rcv_wnd_old = rcv_wnd;
 		}
 		return;
 	}
@@ -405,7 +405,7 @@ static bool __mptcp_move_skb(struct mptcp_sock *msk, struct sock *ssk,
 	return false;
 }
 
-static void mptcp_stop_timer(struct sock *sk)
+static void mptcp_stop_rtx_timer(struct sock *sk)
 {
 	struct inet_connection_sock *icsk = inet_csk(sk);
 
@@ -770,6 +770,46 @@ static bool __mptcp_ofo_queue(struct mptcp_sock *msk)
 	return moved;
 }
 
+static bool __mptcp_subflow_error_report(struct sock *sk, struct sock *ssk)
+{
+	int err = sock_error(ssk);
+	int ssk_state;
+
+	if (!err)
+		return false;
+
+	/* only propagate errors on fallen-back sockets or
+	 * on MPC connect
+	 */
+	if (sk->sk_state != TCP_SYN_SENT && !__mptcp_check_fallback(mptcp_sk(sk)))
+		return false;
+
+	/* We need to propagate only transition to CLOSE state.
+	 * Orphaned socket will see such state change via
+	 * subflow_sched_work_if_closed() and that path will properly
+	 * destroy the msk as needed.
+	 */
+	ssk_state = inet_sk_state_load(ssk);
+	if (ssk_state == TCP_CLOSE && !sock_flag(sk, SOCK_DEAD))
+		inet_sk_state_store(sk, ssk_state);
+	WRITE_ONCE(sk->sk_err, -err);
+
+	/* This barrier is coupled with smp_rmb() in mptcp_poll() */
+	smp_wmb();
+	sk_error_report(sk);
+	return true;
+}
+
+void __mptcp_error_report(struct sock *sk)
+{
+	struct mptcp_subflow_context *subflow;
+	struct mptcp_sock *msk = mptcp_sk(sk);
+
+	mptcp_for_each_subflow(msk, subflow)
+		if (__mptcp_subflow_error_report(sk, mptcp_subflow_tcp_sock(subflow)))
+			break;
+}
+
 /* In most cases we will be able to lock the mptcp socket.  If its already
 * owned, we need to defer to the work queue to avoid ABBA deadlock.
 */
@@ -852,6 +892,7 @@ static bool __mptcp_finish_join(struct mptcp_sock *msk, struct sock *ssk)
 	mptcp_subflow_ctx(ssk)->subflow_id = msk->subflow_id++;
 	mptcp_sockopt_sync_locked(msk, ssk);
 	mptcp_subflow_joined(msk, ssk);
+	mptcp_stop_tout_timer(sk);
 	return true;
 }
 
@@ -871,12 +912,12 @@ static void __mptcp_flush_join_list(struct sock *sk, struct list_head *join_list
 	}
 }
 
-static bool mptcp_timer_pending(struct sock *sk)
+static bool mptcp_rtx_timer_pending(struct sock *sk)
 {
 	return timer_pending(&inet_csk(sk)->icsk_retransmit_timer);
 }
 
-static void mptcp_reset_timer(struct sock *sk)
+static void mptcp_reset_rtx_timer(struct sock *sk)
 {
 	struct inet_connection_sock *icsk = inet_csk(sk);
 	unsigned long tout;
@@ -1010,10 +1051,10 @@ static void __mptcp_clean_una(struct sock *sk)
 out:
 	if (snd_una == READ_ONCE(msk->snd_nxt) &&
 	    snd_una == READ_ONCE(msk->write_seq)) {
-		if (mptcp_timer_pending(sk) && !mptcp_data_fin_enabled(msk))
-			mptcp_stop_timer(sk);
+		if (mptcp_rtx_timer_pending(sk) && !mptcp_data_fin_enabled(msk))
+			mptcp_stop_rtx_timer(sk);
 	} else {
-		mptcp_reset_timer(sk);
+		mptcp_reset_rtx_timer(sk);
 	}
 }
 
@@ -1586,8 +1627,8 @@ void __mptcp_push_pending(struct sock *sk, unsigned int flags)
 		mptcp_push_release(ssk, &info);
 
 	/* ensure the rtx timer is running */
-	if (!mptcp_timer_pending(sk))
-		mptcp_reset_timer(sk);
+	if (!mptcp_rtx_timer_pending(sk))
+		mptcp_reset_rtx_timer(sk);
 	if (do_check_data_fin)
 		mptcp_check_send_data_fin(sk);
 }
@@ -1650,8 +1691,8 @@ static void __mptcp_subflow_push_pending(struct sock *sk, struct sock *ssk, bool
 	if (copied) {
 		tcp_push(ssk, 0, info.mss_now, tcp_sk(ssk)->nonagle,
 			 info.size_goal);
-		if (!mptcp_timer_pending(sk))
-			mptcp_reset_timer(sk);
+		if (!mptcp_rtx_timer_pending(sk))
+			mptcp_reset_rtx_timer(sk);
 
 		if (msk->snd_data_fin_enable &&
 		    msk->snd_nxt + 1 == msk->write_seq)
@@ -2220,7 +2261,7 @@ static void mptcp_retransmit_timer(struct timer_list *t)
 	sock_put(sk);
 }
 
-static void mptcp_timeout_timer(struct timer_list *t)
+static void mptcp_tout_timer(struct timer_list *t)
 {
 	struct sock *sk = from_timer(sk, t, sk_timer);
 
@@ -2329,18 +2370,14 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
 	bool dispose_it, need_push = false;
 
 	/* If the first subflow moved to a close state before accept, e.g. due
-	 * to an incoming reset, mptcp either:
-	 * - if either the subflow or the msk are dead, destroy the context
-	 *   (the subflow socket is deleted by inet_child_forget) and the msk
-	 * - otherwise do nothing at the moment and take action at accept and/or
-	 *   listener shutdown - user-space must be able to accept() the closed
-	 *   socket.
+	 * to an incoming reset or listener shutdown, the subflow socket is
+	 * already deleted by inet_child_forget() and the mptcp socket can't
+	 * survive too.
 	 */
-	if (msk->in_accept_queue && msk->first == ssk) {
-		if (!sock_flag(sk, SOCK_DEAD) && !sock_flag(ssk, SOCK_DEAD))
-			return;
-
+	if (msk->in_accept_queue && msk->first == ssk &&
+	    (sock_flag(sk, SOCK_DEAD) || sock_flag(ssk, SOCK_DEAD))) {
 		/* ensure later check in mptcp_worker() will dispose the msk */
+		mptcp_set_close_tout(sk, tcp_jiffies32 - (TCP_TIMEWAIT_LEN + 1));
 		sock_set_flag(sk, SOCK_DEAD);
 		lock_sock_nested(ssk, SINGLE_DEPTH_NESTING);
 		mptcp_subflow_drop_ctx(ssk);
@@ -2392,6 +2429,7 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
 	}
 
 out_release:
+	__mptcp_subflow_error_report(sk, ssk);
 	release_sock(ssk);
 
 	sock_put(ssk);
@@ -2402,6 +2440,22 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
 out:
 	if (need_push)
 		__mptcp_push_pending(sk, 0);
+
+	/* Catch every 'all subflows closed' scenario, including peers silently
+	 * closing them, e.g. due to timeout.
+	 * For established sockets, allow an additional timeout before closing,
+	 * as the protocol can still create more subflows.
+	 */
+	if (list_is_singular(&msk->conn_list) && msk->first &&
+	    inet_sk_state_load(msk->first) == TCP_CLOSE) {
+		if (sk->sk_state != TCP_ESTABLISHED ||
+		    msk->in_accept_queue || sock_flag(sk, SOCK_DEAD)) {
+			inet_sk_state_store(sk, TCP_CLOSE);
+			mptcp_close_wake_up(sk);
+		} else {
+			mptcp_start_tout_timer(sk);
+		}
+	}
 }
 
 void mptcp_close_ssk(struct sock *sk, struct sock *ssk,
@@ -2445,23 +2499,14 @@ static void __mptcp_close_subflow(struct sock *sk)
 
 }
 
-static bool mptcp_should_close(const struct sock *sk)
+static bool mptcp_close_tout_expired(const struct sock *sk)
 {
-	s32 delta = tcp_jiffies32 - inet_csk(sk)->icsk_mtup.probe_timestamp;
-	struct mptcp_subflow_context *subflow;
+	if (!inet_csk(sk)->icsk_mtup.probe_timestamp ||
+	    sk->sk_state == TCP_CLOSE)
+		return false;
 
-	if (delta >= TCP_TIMEWAIT_LEN || mptcp_sk(sk)->in_accept_queue)
-		return true;
-
-	/* if all subflows are in closed status don't bother with additional
-	 * timeout
-	 */
-	mptcp_for_each_subflow(mptcp_sk(sk), subflow) {
-		if (inet_sk_state_load(mptcp_subflow_tcp_sock(subflow)) !=
-		    TCP_CLOSE)
-			return false;
-	}
-	return true;
+	return time_after32(tcp_jiffies32,
+			    inet_csk(sk)->icsk_mtup.probe_timestamp + TCP_TIMEWAIT_LEN);
 }
 
 static void mptcp_check_fastclose(struct mptcp_sock *msk)
@@ -2588,27 +2633,28 @@ static void __mptcp_retrans(struct sock *sk)
 reset_timer:
 	mptcp_check_and_set_pending(sk);
 
-	if (!mptcp_timer_pending(sk))
-		mptcp_reset_timer(sk);
+	if (!mptcp_rtx_timer_pending(sk))
+		mptcp_reset_rtx_timer(sk);
 }
 
 /* schedule the timeout timer for the relevant event: either close timeout
 * or mp_fail timeout. The close timeout takes precedence on the mp_fail one
 */
-void mptcp_reset_timeout(struct mptcp_sock *msk, unsigned long fail_tout)
+void mptcp_reset_tout_timer(struct mptcp_sock *msk, unsigned long fail_tout)
 {
 	struct sock *sk = (struct sock *)msk;
 	unsigned long timeout, close_timeout;
 
-	if (!fail_tout && !sock_flag(sk, SOCK_DEAD))
+	if (!fail_tout && !inet_csk(sk)->icsk_mtup.probe_timestamp)
 		return;
 
-	close_timeout = inet_csk(sk)->icsk_mtup.probe_timestamp - tcp_jiffies32 + jiffies + TCP_TIMEWAIT_LEN;
+	close_timeout = inet_csk(sk)->icsk_mtup.probe_timestamp - tcp_jiffies32 + jiffies +
+			TCP_TIMEWAIT_LEN;
 
 	/* the close timeout takes precedence on the fail one, and here at least one of
 	 * them is active
 	 */
-	timeout = sock_flag(sk, SOCK_DEAD) ? close_timeout : fail_tout;
+	timeout = inet_csk(sk)->icsk_mtup.probe_timestamp ? close_timeout : fail_tout;
 
 	sk_reset_timer(sk, &sk->sk_timer, timeout);
 }
@@ -2627,8 +2673,6 @@ static void mptcp_mp_fail_no_response(struct mptcp_sock *msk)
 	mptcp_subflow_reset(ssk);
 	WRITE_ONCE(mptcp_subflow_ctx(ssk)->fail_tout, 0);
 	unlock_sock_fast(ssk, slow);
-
-	mptcp_reset_timeout(msk, 0);
 }
 
 static void mptcp_do_fastclose(struct sock *sk)
@@ -2665,18 +2709,14 @@ static void mptcp_worker(struct work_struct *work)
 	if (test_and_clear_bit(MPTCP_WORK_CLOSE_SUBFLOW, &msk->flags))
 		__mptcp_close_subflow(sk);
 
-	/* There is no point in keeping around an orphaned sk timedout or
-	 * closed, but we need the msk around to reply to incoming DATA_FIN,
-	 * even if it is orphaned and in FIN_WAIT2 state
-	 */
-	if (sock_flag(sk, SOCK_DEAD)) {
-		if (mptcp_should_close(sk))
-			mptcp_do_fastclose(sk);
+	if (mptcp_close_tout_expired(sk)) {
+		mptcp_do_fastclose(sk);
+		mptcp_close_wake_up(sk);
+	}
 
-		if (sk->sk_state == TCP_CLOSE) {
-			__mptcp_destroy_sock(sk);
-			goto unlock;
-		}
+	if (sock_flag(sk, SOCK_DEAD) && sk->sk_state == TCP_CLOSE) {
+		__mptcp_destroy_sock(sk);
+		goto unlock;
 	}
 
 	if (test_and_clear_bit(MPTCP_WORK_RTX, &msk->flags))
@@ -2717,7 +2757,7 @@ static void __mptcp_init_sock(struct sock *sk)
 
 	/* re-use the csk retrans timer for MPTCP-level retrans */
 	timer_setup(&msk->sk.icsk_retransmit_timer, mptcp_retransmit_timer, 0);
-	timer_setup(&sk->sk_timer, mptcp_timeout_timer, 0);
+	timer_setup(&sk->sk_timer, mptcp_tout_timer, 0);
 }
 
 static void mptcp_ca_reset(struct sock *sk)
@@ -2808,8 +2848,8 @@ void mptcp_subflow_shutdown(struct sock *sk, struct sock *ssk, int how)
 		} else {
 			pr_debug("Sending DATA_FIN on subflow %p", ssk);
 			tcp_send_ack(ssk);
-			if (!mptcp_timer_pending(sk))
-				mptcp_reset_timer(sk);
+			if (!mptcp_rtx_timer_pending(sk))
+				mptcp_reset_rtx_timer(sk);
 		}
 		break;
 	}
@@ -2892,7 +2932,7 @@ static void __mptcp_destroy_sock(struct sock *sk)
 
 	might_sleep();
 
-	mptcp_stop_timer(sk);
+	mptcp_stop_rtx_timer(sk);
 	sk_stop_timer(sk, &sk->sk_timer);
 	msk->pm.status = 0;
 	mptcp_release_sched(msk);
@@ -2975,7 +3015,6 @@ bool __mptcp_close(struct sock *sk, long timeout)
 
 cleanup:
 	/* orphan all the subflows */
-	inet_csk(sk)->icsk_mtup.probe_timestamp = tcp_jiffies32;
 	mptcp_for_each_subflow(msk, subflow) {
 		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
 		bool slow = lock_sock_fast_nested(ssk);
@@ -3012,7 +3051,7 @@ bool __mptcp_close(struct sock *sk, long timeout)
 		__mptcp_destroy_sock(sk);
 		do_cancel_work = true;
 	} else {
-		mptcp_reset_timeout(msk, 0);
+		mptcp_start_tout_timer(sk);
 	}
 
 	return do_cancel_work;
@@ -3075,8 +3114,8 @@ static int mptcp_disconnect(struct sock *sk, int flags)
 	mptcp_check_listen_stop(sk);
 	inet_sk_state_store(sk, TCP_CLOSE);
 
-	mptcp_stop_timer(sk);
-	sk_stop_timer(sk, &sk->sk_timer);
+	mptcp_stop_rtx_timer(sk);
+	mptcp_stop_tout_timer(sk);
 
 	if (msk->token)
 		mptcp_event(MPTCP_EVENT_CLOSED, msk, NULL, GFP_KERNEL);
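The renames above make the two-timer split explicit, and the header hunk that follows adds the helpers they rely on. A compressed sketch of the scheme, paraphrased from the diff (kernel types assumed, names hypothetical): MPTCP drives two timers off one socket — icsk_retransmit_timer for MPTCP-level retransmit ("rtx"), and sk->sk_timer for the close/MP_FAIL timeout ("tout"). Since MPTCP never performs TCP MTU probing, icsk_mtup.probe_timestamp is repurposed as the "close timeout armed at" stamp, with 0 meaning disarmed:

	static void demo_start_tout_timer(struct sock *sk)
	{
		/* arm: record the start time, never storing 0 (0 = disarmed) */
		inet_csk(sk)->icsk_mtup.probe_timestamp = tcp_jiffies32 ? : 1;
		sk_reset_timer(sk, &sk->sk_timer, jiffies + TCP_TIMEWAIT_LEN);
	}

	static bool demo_close_tout_expired(const struct sock *sk)
	{
		u32 armed_at = inet_csk(sk)->icsk_mtup.probe_timestamp;

		if (!armed_at || sk->sk_state == TCP_CLOSE)
			return false;

		/* wrap-safe comparison on 32-bit jiffies */
		return time_after32(tcp_jiffies32, armed_at + TCP_TIMEWAIT_LEN);
	}

Keying "armed" on the timestamp rather than on SOCK_DEAD is what lets the timeout also cover non-orphaned sockets whose subflows all closed silently.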
@@ -718,7 +718,29 @@ void mptcp_get_options(const struct sk_buff *skb,
 
 void mptcp_finish_connect(struct sock *sk);
 void __mptcp_set_connected(struct sock *sk);
-void mptcp_reset_timeout(struct mptcp_sock *msk, unsigned long fail_tout);
+void mptcp_reset_tout_timer(struct mptcp_sock *msk, unsigned long fail_tout);
+
+static inline void mptcp_stop_tout_timer(struct sock *sk)
+{
+	if (!inet_csk(sk)->icsk_mtup.probe_timestamp)
+		return;
+
+	sk_stop_timer(sk, &sk->sk_timer);
+	inet_csk(sk)->icsk_mtup.probe_timestamp = 0;
+}
+
+static inline void mptcp_set_close_tout(struct sock *sk, unsigned long tout)
+{
+	/* avoid 0 timestamp, as that means no close timeout */
+	inet_csk(sk)->icsk_mtup.probe_timestamp = tout ? : 1;
+}
+
+static inline void mptcp_start_tout_timer(struct sock *sk)
+{
+	mptcp_set_close_tout(sk, tcp_jiffies32);
+	mptcp_reset_tout_timer(mptcp_sk(sk), 0);
+}
+
 static inline bool mptcp_is_fully_established(struct sock *sk)
 {
 	return inet_sk_state_load(sk) == TCP_ESTABLISHED &&
@@ -1226,7 +1226,7 @@ static void mptcp_subflow_fail(struct mptcp_sock *msk, struct sock *ssk)
 	WRITE_ONCE(subflow->fail_tout, fail_tout);
 	tcp_send_ack(ssk);
 
-	mptcp_reset_timeout(msk, subflow->fail_tout);
+	mptcp_reset_tout_timer(msk, subflow->fail_tout);
 }
 
 static bool subflow_check_data_avail(struct sock *ssk)
@@ -1362,42 +1362,6 @@ void mptcp_space(const struct sock *ssk, int *space, int *full_space)
 	*full_space = mptcp_win_from_space(sk, READ_ONCE(sk->sk_rcvbuf));
 }
 
-void __mptcp_error_report(struct sock *sk)
-{
-	struct mptcp_subflow_context *subflow;
-	struct mptcp_sock *msk = mptcp_sk(sk);
-
-	mptcp_for_each_subflow(msk, subflow) {
-		struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
-		int err = sock_error(ssk);
-		int ssk_state;
-
-		if (!err)
-			continue;
-
-		/* only propagate errors on fallen-back sockets or
-		 * on MPC connect
-		 */
-		if (sk->sk_state != TCP_SYN_SENT && !__mptcp_check_fallback(msk))
-			continue;
-
-		/* We need to propagate only transition to CLOSE state.
-		 * Orphaned socket will see such state change via
-		 * subflow_sched_work_if_closed() and that path will properly
-		 * destroy the msk as needed.
-		 */
-		ssk_state = inet_sk_state_load(ssk);
-		if (ssk_state == TCP_CLOSE && !sock_flag(sk, SOCK_DEAD))
-			inet_sk_state_store(sk, ssk_state);
-		WRITE_ONCE(sk->sk_err, -err);
-
-		/* This barrier is coupled with smp_rmb() in mptcp_poll() */
-		smp_wmb();
-		sk_error_report(sk);
-		break;
-	}
-}
-
 static void subflow_error_report(struct sock *ssk)
 {
 	struct sock *sk = mptcp_subflow_ctx(ssk)->conn;
@@ -1588,6 +1552,7 @@ int __mptcp_subflow_connect(struct sock *sk, const struct mptcp_addr_info *loc,
 	mptcp_sock_graft(ssk, sk->sk_socket);
 	iput(SOCK_INODE(sf));
 	WRITE_ONCE(msk->allow_infinite_fallback, false);
+	mptcp_stop_tout_timer(sk);
 	return 0;
 
 failed_unlink:
@@ -89,6 +89,11 @@ static int ncsi_aen_handler_lsc(struct ncsi_dev_priv *ndp,
 	if ((had_link == has_link) || chained)
 		return 0;
 
+	if (had_link)
+		netif_carrier_off(ndp->ndev.dev);
+	else
+		netif_carrier_on(ndp->ndev.dev);
+
 	if (!ndp->multi_package && !nc->package->multi_channel) {
 		if (had_link) {
 			ndp->flags |= NCSI_DEV_RESHUFFLE;
@@ -682,6 +682,14 @@ __ip_set_put(struct ip_set *set)
 /* set->ref can be swapped out by ip_set_swap, netlink events (like dump) need
 * a separate reference counter
 */
+static void
+__ip_set_get_netlink(struct ip_set *set)
+{
+	write_lock_bh(&ip_set_ref_lock);
+	set->ref_netlink++;
+	write_unlock_bh(&ip_set_ref_lock);
+}
+
 static void
 __ip_set_put_netlink(struct ip_set *set)
 {
@@ -1693,11 +1701,11 @@ call_ad(struct net *net, struct sock *ctnl, struct sk_buff *skb,
 
 	do {
 		if (retried) {
-			__ip_set_get(set);
+			__ip_set_get_netlink(set);
 			nfnl_unlock(NFNL_SUBSYS_IPSET);
 			cond_resched();
 			nfnl_lock(NFNL_SUBSYS_IPSET);
-			__ip_set_put(set);
+			__ip_set_put_netlink(set);
 		}
 
 		ip_set_lock(set);
@@ -381,6 +381,8 @@ __bpf_kfunc struct nf_conn *bpf_ct_insert_entry(struct nf_conn___init *nfct_i)
 	struct nf_conn *nfct = (struct nf_conn *)nfct_i;
 	int err;
 
+	if (!nf_ct_is_confirmed(nfct))
+		nfct->timeout += nfct_time_stamp;
 	nfct->status |= IPS_CONFIRMED;
 	err = nf_conntrack_hash_check_insert(nfct);
 	if (err < 0) {
@@ -40,10 +40,10 @@ static const u8 nf_ct_ext_type_len[NF_CT_EXT_NUM] = {
 	[NF_CT_EXT_ECACHE] = sizeof(struct nf_conntrack_ecache),
 #endif
 #ifdef CONFIG_NF_CONNTRACK_TIMESTAMP
-	[NF_CT_EXT_TSTAMP] = sizeof(struct nf_conn_acct),
+	[NF_CT_EXT_TSTAMP] = sizeof(struct nf_conn_tstamp),
 #endif
 #ifdef CONFIG_NF_CONNTRACK_TIMEOUT
-	[NF_CT_EXT_TIMEOUT] = sizeof(struct nf_conn_tstamp),
+	[NF_CT_EXT_TIMEOUT] = sizeof(struct nf_conn_timeout),
 #endif
 #ifdef CONFIG_NF_CONNTRACK_LABELS
 	[NF_CT_EXT_LABELS] = sizeof(struct nf_conn_labels),
@@ -1219,6 +1219,10 @@ static int nf_tables_updtable(struct nft_ctx *ctx)
 	    flags & NFT_TABLE_F_OWNER))
 		return -EOPNOTSUPP;
 
+	/* No dormant off/on/off/on games in single transaction */
+	if (ctx->table->flags & __NFT_TABLE_F_UPDATE)
+		return -EINVAL;
+
 	trans = nft_trans_alloc(ctx, NFT_MSG_NEWTABLE,
 				sizeof(struct nft_trans_table));
 	if (trans == NULL)
@@ -1432,7 +1436,7 @@ static int nft_flush_table(struct nft_ctx *ctx)
 		if (!nft_is_active_next(ctx->net, chain))
 			continue;
 
-		if (nft_chain_is_bound(chain))
+		if (nft_chain_binding(chain))
 			continue;
 
 		ctx->chain = chain;
@@ -1446,8 +1450,7 @@ static int nft_flush_table(struct nft_ctx *ctx)
 		if (!nft_is_active_next(ctx->net, set))
 			continue;
 
-		if (nft_set_is_anonymous(set) &&
-		    !list_empty(&set->bindings))
+		if (nft_set_is_anonymous(set))
 			continue;
 
 		err = nft_delset(ctx, set);
@@ -1477,7 +1480,7 @@ static int nft_flush_table(struct nft_ctx *ctx)
 		if (!nft_is_active_next(ctx->net, chain))
 			continue;
 
-		if (nft_chain_is_bound(chain))
+		if (nft_chain_binding(chain))
 			continue;
 
 		ctx->chain = chain;
@@ -2910,6 +2913,9 @@ static int nf_tables_delchain(struct sk_buff *skb, const struct nfnl_info *info,
 		return PTR_ERR(chain);
 	}
 
+	if (nft_chain_binding(chain))
+		return -EOPNOTSUPP;
+
 	nft_ctx_init(&ctx, net, skb, info->nlh, family, table, chain, nla);
 
 	if (nla[NFTA_CHAIN_HOOK]) {
@@ -3449,6 +3455,8 @@ static int __nf_tables_dump_rules(struct sk_buff *skb,
 	struct net *net = sock_net(skb->sk);
 	const struct nft_rule *rule, *prule;
 	unsigned int s_idx = cb->args[0];
+	unsigned int entries = 0;
+	int ret = 0;
 	u64 handle;
 
 	prule = NULL;
@@ -3471,9 +3479,11 @@ static int __nf_tables_dump_rules(struct sk_buff *skb,
 					NFT_MSG_NEWRULE,
 					NLM_F_MULTI | NLM_F_APPEND,
 					table->family,
-					table, chain, rule, handle, reset) < 0)
-			return 1;
-
+					table, chain, rule, handle, reset) < 0) {
+			ret = 1;
+			break;
+		}
+		entries++;
 		nl_dump_check_consistent(cb, nlmsg_hdr(skb));
cont:
 		prule = rule;
@@ -3481,10 +3491,10 @@ static int __nf_tables_dump_rules(struct sk_buff *skb,
 		(*idx)++;
 	}
 
-	if (reset && *idx)
-		audit_log_rule_reset(table, cb->seq, *idx);
+	if (reset && entries)
+		audit_log_rule_reset(table, cb->seq, entries);
 
-	return 0;
+	return ret;
 }
 
 static int nf_tables_dump_rules(struct sk_buff *skb,
@@ -3971,6 +3981,11 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
 	}
 
 	if (info->nlh->nlmsg_flags & NLM_F_REPLACE) {
+		if (nft_chain_binding(chain)) {
+			err = -EOPNOTSUPP;
+			goto err_destroy_flow_rule;
+		}
+
 		err = nft_delrule(&ctx, old_rule);
 		if (err < 0)
 			goto err_destroy_flow_rule;
@@ -4078,7 +4093,7 @@ static int nf_tables_delrule(struct sk_buff *skb, const struct nfnl_info *info,
 			NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_CHAIN]);
 			return PTR_ERR(chain);
 		}
-		if (nft_chain_is_bound(chain))
+		if (nft_chain_binding(chain))
 			return -EOPNOTSUPP;
 	}
 
@@ -4112,7 +4127,7 @@ static int nf_tables_delrule(struct sk_buff *skb, const struct nfnl_info *info,
 		list_for_each_entry(chain, &table->chains, list) {
 			if (!nft_is_active_next(net, chain))
 				continue;
-			if (nft_chain_is_bound(chain))
+			if (nft_chain_binding(chain))
 				continue;
 
 			ctx.chain = chain;
@@ -7183,8 +7198,10 @@ static int nf_tables_delsetelem(struct sk_buff *skb,
 	if (IS_ERR(set))
 		return PTR_ERR(set);
 
-	if (!list_empty(&set->bindings) &&
-	    (set->flags & (NFT_SET_CONSTANT | NFT_SET_ANONYMOUS)))
+	if (nft_set_is_anonymous(set))
+		return -EOPNOTSUPP;
+
+	if (!list_empty(&set->bindings) && (set->flags & NFT_SET_CONSTANT))
 		return -EBUSY;
 
 	nft_ctx_init(&ctx, net, skb, info->nlh, family, table, NULL, nla);
@@ -9562,12 +9579,15 @@ static int nft_trans_gc_space(struct nft_trans_gc *trans)
 struct nft_trans_gc *nft_trans_gc_queue_async(struct nft_trans_gc *gc,
 					      unsigned int gc_seq, gfp_t gfp)
 {
+	struct nft_set *set;
+
 	if (nft_trans_gc_space(gc))
 		return gc;
 
+	set = gc->set;
 	nft_trans_gc_queue_work(gc);
 
-	return nft_trans_gc_alloc(gc->set, gc_seq, gfp);
+	return nft_trans_gc_alloc(set, gc_seq, gfp);
 }
 
 void nft_trans_gc_queue_async_done(struct nft_trans_gc *trans)
@@ -9582,15 +9602,18 @@ void nft_trans_gc_queue_async_done(struct nft_trans_gc *trans)
 
 struct nft_trans_gc *nft_trans_gc_queue_sync(struct nft_trans_gc *gc, gfp_t gfp)
 {
+	struct nft_set *set;
+
 	if (WARN_ON_ONCE(!lockdep_commit_lock_is_held(gc->net)))
 		return NULL;
 
 	if (nft_trans_gc_space(gc))
 		return gc;
 
+	set = gc->set;
 	call_rcu(&gc->rcu, nft_trans_gc_trans_free);
 
-	return nft_trans_gc_alloc(gc->set, 0, gfp);
+	return nft_trans_gc_alloc(set, 0, gfp);
 }
 
 void nft_trans_gc_queue_sync_done(struct nft_trans_gc *trans)
@@ -9605,8 +9628,9 @@ void nft_trans_gc_queue_sync_done(struct nft_trans_gc *trans)
 	call_rcu(&trans->rcu, nft_trans_gc_trans_free);
 }
 
-struct nft_trans_gc *nft_trans_gc_catchall(struct nft_trans_gc *gc,
-					   unsigned int gc_seq)
+static struct nft_trans_gc *nft_trans_gc_catchall(struct nft_trans_gc *gc,
+						  unsigned int gc_seq,
+						  bool sync)
 {
 	struct nft_set_elem_catchall *catchall;
 	const struct nft_set *set = gc->set;
@@ -9622,7 +9646,11 @@ struct nft_trans_gc *nft_trans_gc_catchall(struct nft_trans_gc *gc,
 
 		nft_set_elem_dead(ext);
dead_elem:
-		gc = nft_trans_gc_queue_async(gc, gc_seq, GFP_ATOMIC);
+		if (sync)
+			gc = nft_trans_gc_queue_sync(gc, GFP_ATOMIC);
+		else
+			gc = nft_trans_gc_queue_async(gc, gc_seq, GFP_ATOMIC);
+
 		if (!gc)
 			return NULL;
 
@@ -9632,6 +9660,17 @@ struct nft_trans_gc *nft_trans_gc_catchall(struct nft_trans_gc *gc,
 	return gc;
 }
 
+struct nft_trans_gc *nft_trans_gc_catchall_async(struct nft_trans_gc *gc,
+						 unsigned int gc_seq)
+{
+	return nft_trans_gc_catchall(gc, gc_seq, false);
+}
+
+struct nft_trans_gc *nft_trans_gc_catchall_sync(struct nft_trans_gc *gc)
+{
+	return nft_trans_gc_catchall(gc, 0, true);
+}
+
 static void nf_tables_module_autoload_cleanup(struct net *net)
 {
 	struct nftables_pernet *nft_net = nft_pernet(net);
@@ -11054,7 +11093,7 @@ static void __nft_release_table(struct net *net, struct nft_table *table)
 	ctx.family = table->family;
 	ctx.table = table;
 	list_for_each_entry(chain, &table->chains, list) {
-		if (nft_chain_is_bound(chain))
+		if (nft_chain_binding(chain))
 			continue;
|
||||
|
||||
ctx.chain = chain;
|
||||
|
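The recurring change above swaps nft_chain_is_bound() for nft_chain_binding(). For orientation, a sketch of the two helpers as they read in include/net/netfilter/nf_tables.h around this release (paraphrased for context, not part of this diff): nft_chain_is_bound() only matches a binding chain once it has actually been bound, while nft_chain_binding() matches any chain created with the NFT_CHAIN_BINDING flag, so the widened check also keeps not-yet-bound chains out of the flush and delete paths.

/* Sketch of the two helpers this series distinguishes between; taken to
 * match include/net/netfilter/nf_tables.h, but reproduced from memory.
 */
static inline bool nft_chain_binding(const struct nft_chain *chain)
{
	/* true for any chain created with NFT_CHAIN_BINDING, bound or not */
	return chain->flags & NFT_CHAIN_BINDING;
}

static inline bool nft_chain_is_bound(struct nft_chain *chain)
{
	/* additionally requires that a binding was committed */
	return (chain->flags & NFT_CHAIN_BINDING) && chain->bound;
}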
net/netfilter/nft_set_hash.c:

@@ -338,12 +338,9 @@ static void nft_rhash_gc(struct work_struct *work)

 	while ((he = rhashtable_walk_next(&hti))) {
 		if (IS_ERR(he)) {
-			if (PTR_ERR(he) != -EAGAIN) {
-				nft_trans_gc_destroy(gc);
-				gc = NULL;
-				goto try_later;
-			}
-			continue;
+			nft_trans_gc_destroy(gc);
+			gc = NULL;
+			goto try_later;
 		}

 		/* Ruleset has been updated, try later. */
@@ -372,7 +369,7 @@ static void nft_rhash_gc(struct work_struct *work)
 		nft_trans_gc_elem_add(gc, he);
 	}

-	gc = nft_trans_gc_catchall(gc, gc_seq);
+	gc = nft_trans_gc_catchall_async(gc, gc_seq);

 try_later:
 	/* catchall list iteration requires rcu read side lock. */
net/netfilter/nft_set_pipapo.c:

@@ -1596,7 +1596,7 @@ static void pipapo_gc(const struct nft_set *_set, struct nft_pipapo_match *m)

 			gc = nft_trans_gc_queue_sync(gc, GFP_ATOMIC);
 			if (!gc)
-				break;
+				return;

 			nft_pipapo_gc_deactivate(net, set, e);
 			pipapo_drop(m, rulemap);
@@ -1610,7 +1610,7 @@ static void pipapo_gc(const struct nft_set *_set, struct nft_pipapo_match *m)
 		}
 	}

-	gc = nft_trans_gc_catchall(gc, 0);
+	gc = nft_trans_gc_catchall_sync(gc);
 	if (gc) {
 		nft_trans_gc_queue_sync_done(gc);
 		priv->last_gc = jiffies;
net/netfilter/nft_set_rbtree.c:

@@ -622,8 +622,7 @@ static void nft_rbtree_gc(struct work_struct *work)
 	if (!gc)
 		goto done;

-	write_lock_bh(&priv->lock);
-	write_seqcount_begin(&priv->count);
+	read_lock_bh(&priv->lock);
 	for (node = rb_first(&priv->root); node != NULL; node = rb_next(node)) {

 		/* Ruleset has been updated, try later. */
@@ -670,11 +669,10 @@ static void nft_rbtree_gc(struct work_struct *work)
 		nft_trans_gc_elem_add(gc, rbe);
 	}

-	gc = nft_trans_gc_catchall(gc, gc_seq);
+	gc = nft_trans_gc_catchall_async(gc, gc_seq);

 try_later:
-	write_seqcount_end(&priv->count);
-	write_unlock_bh(&priv->lock);
+	read_unlock_bh(&priv->lock);

 	if (gc)
 		nft_trans_gc_queue_async_done(gc);
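The three set backends above now split garbage collection into an asynchronous path (workqueue, RCU read side) and a synchronous path (transaction/commit side, commit mutex held). A minimal sketch of the resulting caller contract; the my_set_gc_* wrappers below are hypothetical names for illustration, only the nft_trans_gc_* calls come from this diff:

/* Async GC runs from a work item and is keyed by gc_seq so elements
 * touched by a concurrent transaction are skipped.
 */
static void my_set_gc_async(struct nft_trans_gc *gc, unsigned int gc_seq)
{
	gc = nft_trans_gc_queue_async(gc, gc_seq, GFP_ATOMIC);
	if (!gc)
		return;
	gc = nft_trans_gc_catchall_async(gc, gc_seq);
	if (gc)
		nft_trans_gc_queue_async_done(gc);
}

/* Sync GC runs with the per-netns commit lock held (pipapo's case above)
 * and therefore needs no sequence number.
 */
static void my_set_gc_sync(struct nft_trans_gc *gc)
{
	gc = nft_trans_gc_queue_sync(gc, GFP_ATOMIC);
	if (!gc)
		return;
	gc = nft_trans_gc_catchall_sync(gc);
	if (gc)
		nft_trans_gc_queue_sync_done(gc);
}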
net/rds/rdma_transport.c:

@@ -86,11 +86,13 @@ static int rds_rdma_cm_event_handler_cmn(struct rdma_cm_id *cm_id,
 		break;

 	case RDMA_CM_EVENT_ADDR_RESOLVED:
-		rdma_set_service_type(cm_id, conn->c_tos);
-		rdma_set_min_rnr_timer(cm_id, IB_RNR_TIMER_000_32);
-		/* XXX do we need to clean up if this fails? */
-		ret = rdma_resolve_route(cm_id,
-					 RDS_RDMA_RESOLVE_TIMEOUT_MS);
+		if (conn) {
+			rdma_set_service_type(cm_id, conn->c_tos);
+			rdma_set_min_rnr_timer(cm_id, IB_RNR_TIMER_000_32);
+			/* XXX do we need to clean up if this fails? */
+			ret = rdma_resolve_route(cm_id,
+						 RDS_RDMA_RESOLVE_TIMEOUT_MS);
+		}
 		break;

 	case RDMA_CM_EVENT_ROUTE_RESOLVED:
include/linux/btf_ids.h:

@@ -38,7 +38,7 @@ asm(							\
 ____BTF_ID(symbol)

 #define __ID(prefix) \
-	__PASTE(prefix, __COUNTER__)
+	__PASTE(__PASTE(prefix, __COUNTER__), __LINE__)

 /*
  * The BTF_ID defines unique symbol for each ID pointing
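The btf_ids.h change makes the generated local symbols unique by pasting in __LINE__ on top of __COUNTER__, since __COUNTER__ restarts per translation unit and two units could otherwise emit colliding names. A standalone user-space illustration of the pasting; the ___PASTE/__PASTE helpers below are local stand-ins for the kernel's versions in linux/compiler_types.h:

#include <stdio.h>

#define ___PASTE(a, b)	a##b
#define __PASTE(a, b)	___PASTE(a, b)
/* mirrors the new __ID(): prefix + counter + line */
#define __ID(prefix)	__PASTE(__PASTE(prefix, __COUNTER__), __LINE__)

int __ID(sym_) = 1;	/* expands to e.g. sym_08  (counter 0, line 8)  */
int __ID(sym_) = 2;	/* expands to e.g. sym_19  (counter 1, line 9)  */

int main(void)
{
	/* the two globals above got distinct names at preprocessing time */
	printf("unique symbols generated at compile time\n");
	return 0;
}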
include/uapi/linux/bpf.h:

@@ -1962,7 +1962,9 @@ union bpf_attr {
  *		performed again, if the helper is used in combination with
  *		direct packet access.
  *	Return
- *		0 on success, or a negative error in case of failure.
+ *		0 on success, or a negative error in case of failure. Positive
+ *		error indicates a potential drop or congestion in the target
+ *		device. The particular positive error codes are not defined.
  *
  * u64 bpf_get_current_pid_tgid(void)
  *	Description
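Judging by its position just before bpf_get_current_pid_tgid() in the UAPI header, the helper being re-documented here is bpf_clone_redirect(); the new text legitimizes positive return values as drop/congestion signals from the target device. A hedged sketch of a tc program honoring that contract (ifindex 2 is an arbitrary example, not from this diff):

#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

SEC("tc")
int clone_to_ifindex2(struct __sk_buff *skb)
{
	/* clone skb to egress of ifindex 2 */
	long ret = bpf_clone_redirect(skb, 2, 0);

	if (ret < 0)
		return TC_ACT_SHOT;	/* hard failure */
	if (ret > 0)
		return TC_ACT_OK;	/* clone dropped or congested; keep original */
	return TC_ACT_OK;		/* clone queued successfully */
}

char _license[] SEC("license") = "GPL";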
tools/testing/selftests/bpf/DENYLIST.aarch64:

@@ -1,14 +1,8 @@
 bpf_cookie/multi_kprobe_attach_api	# kprobe_multi_link_api_subtest:FAIL:fentry_raw_skel_load unexpected error: -3
 bpf_cookie/multi_kprobe_link_api	# kprobe_multi_link_api_subtest:FAIL:fentry_raw_skel_load unexpected error: -3
 fexit_sleep	# The test never returns. The remaining tests cannot start.
-kprobe_multi_bench_attach	# bpf_program__attach_kprobe_multi_opts unexpected error: -95
-kprobe_multi_test/attach_api_addrs	# bpf_program__attach_kprobe_multi_opts unexpected error: -95
-kprobe_multi_test/attach_api_pattern	# bpf_program__attach_kprobe_multi_opts unexpected error: -95
-kprobe_multi_test/attach_api_syms	# bpf_program__attach_kprobe_multi_opts unexpected error: -95
-kprobe_multi_test/bench_attach	# bpf_program__attach_kprobe_multi_opts unexpected error: -95
-kprobe_multi_test/link_api_addrs	# link_fd unexpected link_fd: actual -95 < expected 0
-kprobe_multi_test/link_api_syms	# link_fd unexpected link_fd: actual -95 < expected 0
-kprobe_multi_test/skel_api	# libbpf: failed to load BPF skeleton 'kprobe_multi': -3
+kprobe_multi_bench_attach	# needs CONFIG_FPROBE
+kprobe_multi_test	# needs CONFIG_FPROBE
 module_attach	# prog 'kprobe_multi': failed to auto-attach: -95
 fentry_test/fentry_many_args	# fentry_many_args:FAIL:fentry_many_args_attach unexpected error: -524
 fexit_test/fexit_many_args	# fexit_many_args:FAIL:fexit_many_args_attach unexpected error: -524
tools/testing/selftests/bpf/config:

@@ -4,6 +4,7 @@ CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
 CONFIG_BPF=y
 CONFIG_BPF_EVENTS=y
 CONFIG_BPF_JIT=y
+CONFIG_BPF_KPROBE_OVERRIDE=y
 CONFIG_BPF_LIRC_MODE2=y
 CONFIG_BPF_LSM=y
 CONFIG_BPF_STREAM_PARSER=y
|
||||
CONFIG_BONDING=y
|
||||
CONFIG_BOOTTIME_TRACING=y
|
||||
CONFIG_BPF_JIT_ALWAYS_ON=y
|
||||
CONFIG_BPF_KPROBE_OVERRIDE=y
|
||||
CONFIG_BPF_PRELOAD=y
|
||||
CONFIG_BPF_PRELOAD_UMD=y
|
||||
CONFIG_BPFILTER=y
|
||||
|
tools/testing/selftests/bpf/prog_tests/empty_skb.c:

@@ -24,6 +24,7 @@ void test_empty_skb(void)
 		int *ifindex;
 		int err;
 		int ret;
+		int lwt_egress_ret; /* expected retval at lwt/egress */
 		bool success_on_tc;
 	} tests[] = {
 		/* Empty packets are always rejected. */
@@ -57,6 +58,7 @@ void test_empty_skb(void)
 			.data_size_in = sizeof(eth_hlen),
 			.ifindex = &veth_ifindex,
 			.ret = -ERANGE,
+			.lwt_egress_ret = -ERANGE,
 			.success_on_tc = true,
 		},
 		{
@@ -70,6 +72,7 @@ void test_empty_skb(void)
 			.data_size_in = sizeof(eth_hlen),
 			.ifindex = &ipip_ifindex,
 			.ret = -ERANGE,
+			.lwt_egress_ret = -ERANGE,
 		},

 		/* ETH_HLEN+1-sized packet should be redirected. */
@@ -79,6 +82,7 @@ void test_empty_skb(void)
 			.data_in = eth_hlen_pp,
 			.data_size_in = sizeof(eth_hlen_pp),
 			.ifindex = &veth_ifindex,
+			.lwt_egress_ret = 1, /* veth_xmit NET_XMIT_DROP */
 		},
 		{
 			.msg = "ipip ETH_HLEN+1 packet ingress",
@@ -108,8 +112,12 @@ void test_empty_skb(void)

 	for (i = 0; i < ARRAY_SIZE(tests); i++) {
 		bpf_object__for_each_program(prog, bpf_obj->obj) {
-			char buf[128];
 			bool at_egress = strstr(bpf_program__name(prog), "egress") != NULL;
 			bool at_tc = !strncmp(bpf_program__section_name(prog), "tc", 2);
+			int expected_ret;
+			char buf[128];
+
+			expected_ret = at_egress && !at_tc ? tests[i].lwt_egress_ret : tests[i].ret;

 			tattr.data_in = tests[i].data_in;
 			tattr.data_size_in = tests[i].data_size_in;
@@ -128,7 +136,7 @@ void test_empty_skb(void)
 			if (at_tc && tests[i].success_on_tc)
 				ASSERT_GE(bpf_obj->bss->ret, 0, buf);
 			else
-				ASSERT_EQ(bpf_obj->bss->ret, tests[i].ret, buf);
+				ASSERT_EQ(bpf_obj->bss->ret, expected_ret, buf);
 		}
 	}
tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c:

@@ -3,6 +3,7 @@
 #include "kprobe_multi.skel.h"
 #include "trace_helpers.h"
 #include "kprobe_multi_empty.skel.h"
+#include "kprobe_multi_override.skel.h"
 #include "bpf/libbpf_internal.h"
 #include "bpf/hashmap.h"

@@ -453,6 +454,40 @@ static void test_kprobe_multi_bench_attach(bool kernel)
 	}
 }

+static void test_attach_override(void)
+{
+	struct kprobe_multi_override *skel = NULL;
+	struct bpf_link *link = NULL;
+
+	skel = kprobe_multi_override__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "kprobe_multi_empty__open_and_load"))
+		goto cleanup;
+
+	/* The test_override calls bpf_override_return so it should fail
+	 * to attach to bpf_fentry_test1 function, which is not on error
+	 * injection list.
+	 */
+	link = bpf_program__attach_kprobe_multi_opts(skel->progs.test_override,
+						     "bpf_fentry_test1", NULL);
+	if (!ASSERT_ERR_PTR(link, "override_attached_bpf_fentry_test1")) {
+		bpf_link__destroy(link);
+		goto cleanup;
+	}
+
+	/* The should_fail_bio function is on error injection list,
+	 * attach should succeed.
+	 */
+	link = bpf_program__attach_kprobe_multi_opts(skel->progs.test_override,
+						     "should_fail_bio", NULL);
+	if (!ASSERT_OK_PTR(link, "override_attached_should_fail_bio"))
+		goto cleanup;
+
+	bpf_link__destroy(link);
+
+cleanup:
+	kprobe_multi_override__destroy(skel);
+}
+
 void serial_test_kprobe_multi_bench_attach(void)
 {
 	if (test__start_subtest("kernel"))
@@ -480,4 +515,6 @@ void test_kprobe_multi_test(void)
 		test_attach_api_syms();
 	if (test__start_subtest("attach_api_fails"))
 		test_attach_api_fails();
+	if (test__start_subtest("attach_override"))
+		test_attach_override();
 }
tools/testing/selftests/bpf/prog_tests/test_bpf_ma.c (new file, 50 lines):

@@ -0,0 +1,50 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2023. Huawei Technologies Co., Ltd */
+#define _GNU_SOURCE
+#include <sched.h>
+#include <pthread.h>
+#include <stdbool.h>
+#include <bpf/btf.h>
+#include <test_progs.h>
+
+#include "test_bpf_ma.skel.h"
+
+void test_test_bpf_ma(void)
+{
+	struct test_bpf_ma *skel;
+	struct btf *btf;
+	int i, err;
+
+	skel = test_bpf_ma__open();
+	if (!ASSERT_OK_PTR(skel, "open"))
+		return;
+
+	btf = bpf_object__btf(skel->obj);
+	if (!ASSERT_OK_PTR(btf, "btf"))
+		goto out;
+
+	for (i = 0; i < ARRAY_SIZE(skel->rodata->data_sizes); i++) {
+		char name[32];
+		int id;
+
+		snprintf(name, sizeof(name), "bin_data_%u", skel->rodata->data_sizes[i]);
+		id = btf__find_by_name_kind(btf, name, BTF_KIND_STRUCT);
+		if (!ASSERT_GT(id, 0, "bin_data"))
+			goto out;
+		skel->rodata->data_btf_ids[i] = id;
+	}
+
+	err = test_bpf_ma__load(skel);
+	if (!ASSERT_OK(err, "load"))
+		goto out;
+
+	err = test_bpf_ma__attach(skel);
+	if (!ASSERT_OK(err, "attach"))
+		goto out;
+
+	skel->bss->pid = getpid();
+	usleep(1);
+	ASSERT_OK(skel->bss->err, "test error");
+out:
+	test_bpf_ma__destroy(skel);
+}
tools/testing/selftests/bpf/prog_tests/xdp_dev_bound_only.c (new file, 61 lines):

@@ -0,0 +1,61 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <net/if.h>
+#include <test_progs.h>
+#include <network_helpers.h>
+
+#define LOCAL_NETNS "xdp_dev_bound_only_netns"
+
+static int load_dummy_prog(char *name, __u32 ifindex, __u32 flags)
+{
+	struct bpf_insn insns[] = { BPF_MOV64_IMM(BPF_REG_0, 0), BPF_EXIT_INSN() };
+	LIBBPF_OPTS(bpf_prog_load_opts, opts);
+
+	opts.prog_flags = flags;
+	opts.prog_ifindex = ifindex;
+	return bpf_prog_load(BPF_PROG_TYPE_XDP, name, "GPL", insns, ARRAY_SIZE(insns), &opts);
+}
+
+/* A test case for bpf_offload_netdev->offload handling bug:
+ * - create a veth device (does not support offload);
+ * - create a device bound XDP program with BPF_F_XDP_DEV_BOUND_ONLY flag
+ *   (such programs are not offloaded);
+ * - create a device bound XDP program without flags (such programs are offloaded).
+ * This might lead to 'BUG: kernel NULL pointer dereference'.
+ */
+void test_xdp_dev_bound_only_offdev(void)
+{
+	struct nstoken *tok = NULL;
+	__u32 ifindex;
+	int fd1 = -1;
+	int fd2 = -1;
+
+	SYS(out, "ip netns add " LOCAL_NETNS);
+	tok = open_netns(LOCAL_NETNS);
+	if (!ASSERT_OK_PTR(tok, "open_netns"))
+		goto out;
+	SYS(out, "ip link add eth42 type veth");
+	ifindex = if_nametoindex("eth42");
+	if (!ASSERT_NEQ(ifindex, 0, "if_nametoindex")) {
+		perror("if_nametoindex");
+		goto out;
+	}
+	fd1 = load_dummy_prog("dummy1", ifindex, BPF_F_XDP_DEV_BOUND_ONLY);
+	if (!ASSERT_GE(fd1, 0, "load_dummy_prog #1")) {
+		perror("load_dummy_prog #1");
+		goto out;
+	}
+	/* Program with ifindex is considered offloaded, however veth
+	 * does not support offload => error should be reported.
+	 */
+	fd2 = load_dummy_prog("dummy2", ifindex, 0);
+	ASSERT_EQ(fd2, -EINVAL, "load_dummy_prog #2 (offloaded)");
+
+out:
+	close(fd1);
+	close(fd2);
+	close_netns(tok);
+	/* eth42 was added inside netns, removing the netns will
+	 * also remove eth42 veth pair.
+	 */
+	SYS_NOFAIL("ip netns del " LOCAL_NETNS);
+}
tools/testing/selftests/bpf/progs/kprobe_multi_override.c (new file, 13 lines):

@@ -0,0 +1,13 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+SEC("kprobe.multi")
+int test_override(struct pt_regs *ctx)
+{
+	bpf_override_return(ctx, 123);
+	return 0;
+}
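A program calling bpf_override_return(), like the one above, can only be attached on kernels built with CONFIG_BPF_KPROBE_OVERRIDE (hence the config churn earlier in this diff) and only to functions the kernel has explicitly marked for error injection. A sketch of how a kernel function lands on that list; my_driver_op is a hypothetical kernel-side function, while ALLOW_ERROR_INJECTION() is the real macro from include/linux/error-injection.h:

#include <linux/error-injection.h>

/* hypothetical kernel function whose return value may be overridden */
static noinline int my_driver_op(void)
{
	return 0;	/* real work elided */
}
/* put it on the error-injection list; ERRNO means the injected
 * return value is interpreted as an errno-style integer
 */
ALLOW_ERROR_INJECTION(my_driver_op, ERRNO);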
tools/testing/selftests/bpf/progs/test_bpf_ma.c (new file, 123 lines):

@@ -0,0 +1,123 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2023. Huawei Technologies Co., Ltd */
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+
+#include "bpf_experimental.h"
+#include "bpf_misc.h"
+
+#ifndef ARRAY_SIZE
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+#endif
+
+struct generic_map_value {
+	void *data;
+};
+
+char _license[] SEC("license") = "GPL";
+
+const unsigned int data_sizes[] = {8, 16, 32, 64, 96, 128, 192, 256, 512, 1024, 2048, 4096};
+const volatile unsigned int data_btf_ids[ARRAY_SIZE(data_sizes)] = {};
+
+int err = 0;
+int pid = 0;
+
+#define DEFINE_ARRAY_WITH_KPTR(_size) \
+	struct bin_data_##_size { \
+		char data[_size - sizeof(void *)]; \
+	}; \
+	struct map_value_##_size { \
+		struct bin_data_##_size __kptr * data; \
+		/* To emit BTF info for bin_data_xx */ \
+		struct bin_data_##_size not_used; \
+	}; \
+	struct { \
+		__uint(type, BPF_MAP_TYPE_ARRAY); \
+		__type(key, int); \
+		__type(value, struct map_value_##_size); \
+		__uint(max_entries, 128); \
+	} array_##_size SEC(".maps");
+
+static __always_inline void batch_alloc_free(struct bpf_map *map, unsigned int batch,
+					     unsigned int idx)
+{
+	struct generic_map_value *value;
+	unsigned int i, key;
+	void *old, *new;
+
+	for (i = 0; i < batch; i++) {
+		key = i;
+		value = bpf_map_lookup_elem(map, &key);
+		if (!value) {
+			err = 1;
+			return;
+		}
+		new = bpf_obj_new_impl(data_btf_ids[idx], NULL);
+		if (!new) {
+			err = 2;
+			return;
+		}
+		old = bpf_kptr_xchg(&value->data, new);
+		if (old) {
+			bpf_obj_drop(old);
+			err = 3;
+			return;
+		}
+	}
+	for (i = 0; i < batch; i++) {
+		key = i;
+		value = bpf_map_lookup_elem(map, &key);
+		if (!value) {
+			err = 4;
+			return;
+		}
+		old = bpf_kptr_xchg(&value->data, NULL);
+		if (!old) {
+			err = 5;
+			return;
+		}
+		bpf_obj_drop(old);
+	}
+}
+
+#define CALL_BATCH_ALLOC_FREE(size, batch, idx) \
+	batch_alloc_free((struct bpf_map *)(&array_##size), batch, idx)
+
+DEFINE_ARRAY_WITH_KPTR(8);
+DEFINE_ARRAY_WITH_KPTR(16);
+DEFINE_ARRAY_WITH_KPTR(32);
+DEFINE_ARRAY_WITH_KPTR(64);
+DEFINE_ARRAY_WITH_KPTR(96);
+DEFINE_ARRAY_WITH_KPTR(128);
+DEFINE_ARRAY_WITH_KPTR(192);
+DEFINE_ARRAY_WITH_KPTR(256);
+DEFINE_ARRAY_WITH_KPTR(512);
+DEFINE_ARRAY_WITH_KPTR(1024);
+DEFINE_ARRAY_WITH_KPTR(2048);
+DEFINE_ARRAY_WITH_KPTR(4096);
+
+SEC("fentry/" SYS_PREFIX "sys_nanosleep")
+int test_bpf_mem_alloc_free(void *ctx)
+{
+	if ((u32)bpf_get_current_pid_tgid() != pid)
+		return 0;
+
+	/* Alloc 128 8-bytes objects in batch to trigger refilling,
+	 * then free 128 8-bytes objects in batch to trigger freeing.
+	 */
+	CALL_BATCH_ALLOC_FREE(8, 128, 0);
+	CALL_BATCH_ALLOC_FREE(16, 128, 1);
+	CALL_BATCH_ALLOC_FREE(32, 128, 2);
+	CALL_BATCH_ALLOC_FREE(64, 128, 3);
+	CALL_BATCH_ALLOC_FREE(96, 128, 4);
+	CALL_BATCH_ALLOC_FREE(128, 128, 5);
+	CALL_BATCH_ALLOC_FREE(192, 128, 6);
+	CALL_BATCH_ALLOC_FREE(256, 128, 7);
+	CALL_BATCH_ALLOC_FREE(512, 64, 8);
+	CALL_BATCH_ALLOC_FREE(1024, 32, 9);
+	CALL_BATCH_ALLOC_FREE(2048, 16, 10);
+	CALL_BATCH_ALLOC_FREE(4096, 8, 11);
+
+	return 0;
+}
tools/testing/selftests/bpf/test_verifier.c:

@@ -1880,7 +1880,7 @@ int main(int argc, char **argv)
 		}
 	}

-	get_unpriv_disabled();
+	unpriv_disabled = get_unpriv_disabled();
 	if (unpriv && unpriv_disabled) {
 		printf("Cannot run as unprivileged user with sysctl %s.\n",
 		       UNPRIV_SYSCTL);
tools/testing/selftests/net/hsr/hsr_ping.sh:

@@ -41,61 +41,6 @@ cleanup()
 	done
 }

-ip -Version > /dev/null 2>&1
-if [ $? -ne 0 ];then
-	echo "SKIP: Could not run test without ip tool"
-	exit $ksft_skip
-fi
-
-trap cleanup EXIT
-
-for i in "$ns1" "$ns2" "$ns3" ;do
-	ip netns add $i || exit $ksft_skip
-	ip -net $i link set lo up
-done
-
-echo "INFO: preparing interfaces."
-# Three HSR nodes. Each node has one link to each of its neighbour, two links in total.
-#
-#    ns1eth1 ----- ns2eth1
-#      hsr1          hsr2
-#    ns1eth2       ns2eth2
-#       |            |
-#    ns3eth1      ns3eth2
-#           \    /
-#            hsr3
-#
-# Interfaces
-ip link add ns1eth1 netns "$ns1" type veth peer name ns2eth1 netns "$ns2"
-ip link add ns1eth2 netns "$ns1" type veth peer name ns3eth1 netns "$ns3"
-ip link add ns3eth2 netns "$ns3" type veth peer name ns2eth2 netns "$ns2"
-
-# HSRv0.
-ip -net "$ns1" link add name hsr1 type hsr slave1 ns1eth1 slave2 ns1eth2 supervision 45 version 0 proto 0
-ip -net "$ns2" link add name hsr2 type hsr slave1 ns2eth1 slave2 ns2eth2 supervision 45 version 0 proto 0
-ip -net "$ns3" link add name hsr3 type hsr slave1 ns3eth1 slave2 ns3eth2 supervision 45 version 0 proto 0
-
-# IP for HSR
-ip -net "$ns1" addr add 100.64.0.1/24 dev hsr1
-ip -net "$ns1" addr add dead:beef:1::1/64 dev hsr1 nodad
-ip -net "$ns2" addr add 100.64.0.2/24 dev hsr2
-ip -net "$ns2" addr add dead:beef:1::2/64 dev hsr2 nodad
-ip -net "$ns3" addr add 100.64.0.3/24 dev hsr3
-ip -net "$ns3" addr add dead:beef:1::3/64 dev hsr3 nodad
-
-# All Links up
-ip -net "$ns1" link set ns1eth1 up
-ip -net "$ns1" link set ns1eth2 up
-ip -net "$ns1" link set hsr1 up
-
-ip -net "$ns2" link set ns2eth1 up
-ip -net "$ns2" link set ns2eth2 up
-ip -net "$ns2" link set hsr2 up
-
-ip -net "$ns3" link set ns3eth1 up
-ip -net "$ns3" link set ns3eth2 up
-ip -net "$ns3" link set hsr3 up
-
 # $1: IP address
 is_v6()
 {
@@ -164,93 +109,168 @@ stop_if_error()
 	fi
 }

+do_complete_ping_test()
+{
-echo "INFO: Initial validation ping."
-# Each node has to be able each one.
-do_ping "$ns1" 100.64.0.2
-do_ping "$ns2" 100.64.0.1
-do_ping "$ns3" 100.64.0.1
-stop_if_error "Initial validation failed."
+	echo "INFO: Initial validation ping."
+	# Each node has to be able each one.
+	do_ping "$ns1" 100.64.0.2
+	do_ping "$ns2" 100.64.0.1
+	do_ping "$ns3" 100.64.0.1
+	stop_if_error "Initial validation failed."

-do_ping "$ns1" 100.64.0.3
-do_ping "$ns2" 100.64.0.3
-do_ping "$ns3" 100.64.0.2
+	do_ping "$ns1" 100.64.0.3
+	do_ping "$ns2" 100.64.0.3
+	do_ping "$ns3" 100.64.0.2

-do_ping "$ns1" dead:beef:1::2
-do_ping "$ns1" dead:beef:1::3
-do_ping "$ns2" dead:beef:1::1
-do_ping "$ns2" dead:beef:1::2
-do_ping "$ns3" dead:beef:1::1
-do_ping "$ns3" dead:beef:1::2
+	do_ping "$ns1" dead:beef:1::2
+	do_ping "$ns1" dead:beef:1::3
+	do_ping "$ns2" dead:beef:1::1
+	do_ping "$ns2" dead:beef:1::2
+	do_ping "$ns3" dead:beef:1::1
+	do_ping "$ns3" dead:beef:1::2

-stop_if_error "Initial validation failed."
+	stop_if_error "Initial validation failed."

 # Wait until supervisor all supervision frames have been processed and the node
 # entries have been merged. Otherwise duplicate frames will be observed which is
 # valid at this stage.
-WAIT=5
-while [ ${WAIT} -gt 0 ]
-do
-	grep 00:00:00:00:00:00 /sys/kernel/debug/hsr/hsr*/node_table
-	if [ $? -ne 0 ]
-	then
-		break
-	fi
-	sleep 1
-	let WAIT = WAIT - 1
-done
+	WAIT=5
+	while [ ${WAIT} -gt 0 ]
+	do
+		grep 00:00:00:00:00:00 /sys/kernel/debug/hsr/hsr*/node_table
+		if [ $? -ne 0 ]
+		then
+			break
+		fi
+		sleep 1
+		let "WAIT = WAIT - 1"
+	done

 # Just a safety delay in case the above check didn't handle it.
-sleep 1
+	sleep 1

-echo "INFO: Longer ping test."
-do_ping_long "$ns1" 100.64.0.2
-do_ping_long "$ns1" dead:beef:1::2
-do_ping_long "$ns1" 100.64.0.3
-do_ping_long "$ns1" dead:beef:1::3
+	echo "INFO: Longer ping test."
+	do_ping_long "$ns1" 100.64.0.2
+	do_ping_long "$ns1" dead:beef:1::2
+	do_ping_long "$ns1" 100.64.0.3
+	do_ping_long "$ns1" dead:beef:1::3

-stop_if_error "Longer ping test failed."
+	stop_if_error "Longer ping test failed."

-do_ping_long "$ns2" 100.64.0.1
-do_ping_long "$ns2" dead:beef:1::1
-do_ping_long "$ns2" 100.64.0.3
-do_ping_long "$ns2" dead:beef:1::2
-stop_if_error "Longer ping test failed."
+	do_ping_long "$ns2" 100.64.0.1
+	do_ping_long "$ns2" dead:beef:1::1
+	do_ping_long "$ns2" 100.64.0.3
+	do_ping_long "$ns2" dead:beef:1::2
+	stop_if_error "Longer ping test failed."

-do_ping_long "$ns3" 100.64.0.1
-do_ping_long "$ns3" dead:beef:1::1
-do_ping_long "$ns3" 100.64.0.2
-do_ping_long "$ns3" dead:beef:1::2
-stop_if_error "Longer ping test failed."
+	do_ping_long "$ns3" 100.64.0.1
+	do_ping_long "$ns3" dead:beef:1::1
+	do_ping_long "$ns3" 100.64.0.2
+	do_ping_long "$ns3" dead:beef:1::2
+	stop_if_error "Longer ping test failed."

-echo "INFO: Cutting one link."
-do_ping_long "$ns1" 100.64.0.3 &
+	echo "INFO: Cutting one link."
+	do_ping_long "$ns1" 100.64.0.3 &

-sleep 3
-ip -net "$ns3" link set ns3eth1 down
-wait
+	sleep 3
+	ip -net "$ns3" link set ns3eth1 down
+	wait

-ip -net "$ns3" link set ns3eth1 up
+	ip -net "$ns3" link set ns3eth1 up

-stop_if_error "Failed with one link down."
+	stop_if_error "Failed with one link down."

-echo "INFO: Delay the link and drop a few packages."
-tc -net "$ns3" qdisc add dev ns3eth1 root netem delay 50ms
-tc -net "$ns2" qdisc add dev ns2eth1 root netem delay 5ms loss 25%
+	echo "INFO: Delay the link and drop a few packages."
+	tc -net "$ns3" qdisc add dev ns3eth1 root netem delay 50ms
+	tc -net "$ns2" qdisc add dev ns2eth1 root netem delay 5ms loss 25%

-do_ping_long "$ns1" 100.64.0.2
-do_ping_long "$ns1" 100.64.0.3
+	do_ping_long "$ns1" 100.64.0.2
+	do_ping_long "$ns1" 100.64.0.3

-stop_if_error "Failed with delay and packetloss."
+	stop_if_error "Failed with delay and packetloss."

-do_ping_long "$ns2" 100.64.0.1
-do_ping_long "$ns2" 100.64.0.3
+	do_ping_long "$ns2" 100.64.0.1
+	do_ping_long "$ns2" 100.64.0.3

-stop_if_error "Failed with delay and packetloss."
+	stop_if_error "Failed with delay and packetloss."

-do_ping_long "$ns3" 100.64.0.1
-do_ping_long "$ns3" 100.64.0.2
-stop_if_error "Failed with delay and packetloss."
+	do_ping_long "$ns3" 100.64.0.1
+	do_ping_long "$ns3" 100.64.0.2
+	stop_if_error "Failed with delay and packetloss."

 echo "INFO: All good."
+}
+
+setup_hsr_interfaces()
+{
+	local HSRv="$1"
+
+	echo "INFO: preparing interfaces for HSRv${HSRv}."
+# Three HSR nodes. Each node has one link to each of its neighbour, two links in total.
+#
+#    ns1eth1 ----- ns2eth1
+#      hsr1          hsr2
+#    ns1eth2       ns2eth2
+#       |            |
+#    ns3eth1      ns3eth2
+#           \    /
+#            hsr3
+#
+	# Interfaces
+	ip link add ns1eth1 netns "$ns1" type veth peer name ns2eth1 netns "$ns2"
+	ip link add ns1eth2 netns "$ns1" type veth peer name ns3eth1 netns "$ns3"
+	ip link add ns3eth2 netns "$ns3" type veth peer name ns2eth2 netns "$ns2"
+
+	# HSRv0/1
+	ip -net "$ns1" link add name hsr1 type hsr slave1 ns1eth1 slave2 ns1eth2 supervision 45 version $HSRv proto 0
+	ip -net "$ns2" link add name hsr2 type hsr slave1 ns2eth1 slave2 ns2eth2 supervision 45 version $HSRv proto 0
+	ip -net "$ns3" link add name hsr3 type hsr slave1 ns3eth1 slave2 ns3eth2 supervision 45 version $HSRv proto 0
+
+	# IP for HSR
+	ip -net "$ns1" addr add 100.64.0.1/24 dev hsr1
+	ip -net "$ns1" addr add dead:beef:1::1/64 dev hsr1 nodad
+	ip -net "$ns2" addr add 100.64.0.2/24 dev hsr2
+	ip -net "$ns2" addr add dead:beef:1::2/64 dev hsr2 nodad
+	ip -net "$ns3" addr add 100.64.0.3/24 dev hsr3
+	ip -net "$ns3" addr add dead:beef:1::3/64 dev hsr3 nodad
+
+	# All Links up
+	ip -net "$ns1" link set ns1eth1 up
+	ip -net "$ns1" link set ns1eth2 up
+	ip -net "$ns1" link set hsr1 up
+
+	ip -net "$ns2" link set ns2eth1 up
+	ip -net "$ns2" link set ns2eth2 up
+	ip -net "$ns2" link set hsr2 up
+
+	ip -net "$ns3" link set ns3eth1 up
+	ip -net "$ns3" link set ns3eth2 up
+	ip -net "$ns3" link set hsr3 up
+}
+
+ip -Version > /dev/null 2>&1
+if [ $? -ne 0 ];then
+	echo "SKIP: Could not run test without ip tool"
+	exit $ksft_skip
+fi
+
+trap cleanup EXIT
+
+for i in "$ns1" "$ns2" "$ns3" ;do
+	ip netns add $i || exit $ksft_skip
+	ip -net $i link set lo up
+done
+
+setup_hsr_interfaces 0
+do_complete_ping_test
+cleanup
+
+for i in "$ns1" "$ns2" "$ns3" ;do
+	ip netns add $i || exit $ksft_skip
+	ip -net $i link set lo up
+done
+
+setup_hsr_interfaces 1
+do_complete_ping_test

 exit $ret
tools/testing/selftests/net/tls.c:

@@ -613,11 +613,11 @@ TEST_F(tls, sendmsg_large)

 		msg.msg_iov = &vec;
 		msg.msg_iovlen = 1;
-		EXPECT_EQ(sendmsg(self->cfd, &msg, 0), send_len);
+		EXPECT_EQ(sendmsg(self->fd, &msg, 0), send_len);
 	}

 	while (recvs++ < sends) {
-		EXPECT_NE(recv(self->fd, mem, send_len, 0), -1);
+		EXPECT_NE(recv(self->cfd, mem, send_len, 0), -1);
 	}

 	free(mem);
@@ -646,9 +646,9 @@ TEST_F(tls, sendmsg_multiple)
 	msg.msg_iov = vec;
 	msg.msg_iovlen = iov_len;

-	EXPECT_EQ(sendmsg(self->cfd, &msg, 0), total_len);
+	EXPECT_EQ(sendmsg(self->fd, &msg, 0), total_len);
 	buf = malloc(total_len);
-	EXPECT_NE(recv(self->fd, buf, total_len, 0), -1);
+	EXPECT_NE(recv(self->cfd, buf, total_len, 0), -1);
 	for (i = 0; i < iov_len; i++) {
 		EXPECT_EQ(memcmp(test_strs[i], buf + len_cmp,
 				 strlen(test_strs[i])),
tools/testing/selftests/netfilter/.gitignore
vendored
1
tools/testing/selftests/netfilter/.gitignore
vendored
@ -1,3 +1,4 @@
|
||||
# SPDX-License-Identifier: GPL-2.0-only
|
||||
nf-queue
|
||||
connect_close
|
||||
audit_logread
|
||||
|
tools/testing/selftests/netfilter/Makefile:

@@ -6,13 +6,13 @@ TEST_PROGS := nft_trans_stress.sh nft_fib.sh nft_nat.sh bridge_brouter.sh \
 	nft_concat_range.sh nft_conntrack_helper.sh \
 	nft_queue.sh nft_meta.sh nf_nat_edemux.sh \
 	ipip-conntrack-mtu.sh conntrack_tcp_unreplied.sh \
-	conntrack_vrf.sh nft_synproxy.sh rpath.sh
+	conntrack_vrf.sh nft_synproxy.sh rpath.sh nft_audit.sh

 HOSTPKG_CONFIG := pkg-config

 CFLAGS += $(shell $(HOSTPKG_CONFIG) --cflags libmnl 2>/dev/null)
 LDLIBS += $(shell $(HOSTPKG_CONFIG) --libs libmnl 2>/dev/null || echo -lmnl)

-TEST_GEN_FILES = nf-queue connect_close
+TEST_GEN_FILES = nf-queue connect_close audit_logread

 include ../lib.mk
tools/testing/selftests/netfilter/audit_logread.c (new file, 165 lines):

@@ -0,0 +1,165 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#define _GNU_SOURCE
+#include <errno.h>
+#include <fcntl.h>
+#include <poll.h>
+#include <signal.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <unistd.h>
+#include <linux/audit.h>
+#include <linux/netlink.h>
+
+static int fd;
+
+#define MAX_AUDIT_MESSAGE_LENGTH	8970
+struct audit_message {
+	struct nlmsghdr nlh;
+	union {
+		struct audit_status s;
+		char data[MAX_AUDIT_MESSAGE_LENGTH];
+	} u;
+};
+
+int audit_recv(int fd, struct audit_message *rep)
+{
+	struct sockaddr_nl addr;
+	socklen_t addrlen = sizeof(addr);
+	int ret;
+
+	do {
+		ret = recvfrom(fd, rep, sizeof(*rep), 0,
+			       (struct sockaddr *)&addr, &addrlen);
+	} while (ret < 0 && errno == EINTR);
+
+	if (ret < 0 ||
+	    addrlen != sizeof(addr) ||
+	    addr.nl_pid != 0 ||
+	    rep->nlh.nlmsg_type == NLMSG_ERROR) /* short-cut for now */
+		return -1;
+
+	return ret;
+}
+
+int audit_send(int fd, uint16_t type, uint32_t key, uint32_t val)
+{
+	static int seq = 0;
+	struct audit_message msg = {
+		.nlh = {
+			.nlmsg_len   = NLMSG_SPACE(sizeof(msg.u.s)),
+			.nlmsg_type  = type,
+			.nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK,
+			.nlmsg_seq   = ++seq,
+		},
+		.u.s = {
+			.mask    = key,
+			.enabled = key == AUDIT_STATUS_ENABLED ? val : 0,
+			.pid     = key == AUDIT_STATUS_PID ? val : 0,
+		}
+	};
+	struct sockaddr_nl addr = {
+		.nl_family = AF_NETLINK,
+	};
+	int ret;
+
+	do {
+		ret = sendto(fd, &msg, msg.nlh.nlmsg_len, 0,
+			     (struct sockaddr *)&addr, sizeof(addr));
+	} while (ret < 0 && errno == EINTR);
+
+	if (ret != (int)msg.nlh.nlmsg_len)
+		return -1;
+	return 0;
+}
+
+int audit_set(int fd, uint32_t key, uint32_t val)
+{
+	struct audit_message rep = { 0 };
+	int ret;
+
+	ret = audit_send(fd, AUDIT_SET, key, val);
+	if (ret)
+		return ret;
+
+	ret = audit_recv(fd, &rep);
+	if (ret < 0)
+		return ret;
+	return 0;
+}
+
+int readlog(int fd)
+{
+	struct audit_message rep = { 0 };
+	int ret = audit_recv(fd, &rep);
+	const char *sep = "";
+	char *k, *v;
+
+	if (ret < 0)
+		return ret;
+
+	if (rep.nlh.nlmsg_type != AUDIT_NETFILTER_CFG)
+		return 0;
+
+	/* skip the initial "audit(...): " part */
+	strtok(rep.u.data, " ");
+
+	while ((k = strtok(NULL, "="))) {
+		v = strtok(NULL, " ");
+
+		/* these vary and/or are uninteresting, ignore */
+		if (!strcmp(k, "pid") ||
+		    !strcmp(k, "comm") ||
+		    !strcmp(k, "subj"))
+			continue;
+
+		/* strip the varying sequence number */
+		if (!strcmp(k, "table"))
+			*strchrnul(v, ':') = '\0';
+
+		printf("%s%s=%s", sep, k, v);
+		sep = " ";
+	}
+	if (*sep) {
+		printf("\n");
+		fflush(stdout);
+	}
+	return 0;
+}
+
+void cleanup(int sig)
+{
+	audit_set(fd, AUDIT_STATUS_ENABLED, 0);
+	close(fd);
+	if (sig)
+		exit(0);
+}
+
+int main(int argc, char **argv)
+{
+	struct sigaction act = {
+		.sa_handler = cleanup,
+	};
+
+	fd = socket(PF_NETLINK, SOCK_RAW, NETLINK_AUDIT);
+	if (fd < 0) {
+		perror("Can't open netlink socket");
+		return -1;
+	}
+
+	if (sigaction(SIGTERM, &act, NULL) < 0 ||
+	    sigaction(SIGINT, &act, NULL) < 0) {
+		perror("Can't set signal handler");
+		close(fd);
+		return -1;
+	}
+
+	audit_set(fd, AUDIT_STATUS_ENABLED, 1);
+	audit_set(fd, AUDIT_STATUS_PID, getpid());
+
+	while (1)
+		readlog(fd);
+}
tools/testing/selftests/netfilter/config:

@@ -6,3 +6,4 @@ CONFIG_NFT_REDIR=m
 CONFIG_NFT_MASQ=m
 CONFIG_NFT_FLOW_OFFLOAD=m
 CONFIG_NF_CT_NETLINK=m
+CONFIG_AUDIT=y
tools/testing/selftests/netfilter/nft_audit.sh (new executable file, 108 lines):

@@ -0,0 +1,108 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+#
+# Check that audit logs generated for nft commands are as expected.
+
+SKIP_RC=4
+RC=0
+
+nft --version >/dev/null 2>&1 || {
+	echo "SKIP: missing nft tool"
+	exit $SKIP_RC
+}
+
+logfile=$(mktemp)
+echo "logging into $logfile"
+./audit_logread >"$logfile" &
+logread_pid=$!
+trap 'kill $logread_pid; rm -f $logfile' EXIT
+exec 3<"$logfile"
+
+do_test() { # (cmd, log)
+	echo -n "testing for cmd: $1 ... "
+	cat <&3 >/dev/null
+	$1 >/dev/null || exit 1
+	sleep 0.1
+	res=$(diff -a -u <(echo "$2") - <&3)
+	[ $? -eq 0 ] && { echo "OK"; return; }
+	echo "FAIL"
+	echo "$res"
+	((RC++))
+}
+
+nft flush ruleset
+
+for table in t1 t2; do
+	do_test "nft add table $table" \
+	"table=$table family=2 entries=1 op=nft_register_table"
+
+	do_test "nft add chain $table c1" \
+	"table=$table family=2 entries=1 op=nft_register_chain"
+
+	do_test "nft add chain $table c2; add chain $table c3" \
+	"table=$table family=2 entries=2 op=nft_register_chain"
+
+	cmd="add rule $table c1 counter"
+
+	do_test "nft $cmd" \
+	"table=$table family=2 entries=1 op=nft_register_rule"
+
+	do_test "nft $cmd; $cmd" \
+	"table=$table family=2 entries=2 op=nft_register_rule"
+
+	cmd=""
+	sep=""
+	for chain in c2 c3; do
+		for i in {1..3}; do
+			cmd+="$sep add rule $table $chain counter"
+			sep=";"
+		done
+	done
+	do_test "nft $cmd" \
+	"table=$table family=2 entries=6 op=nft_register_rule"
+done
+
+do_test 'nft reset rules t1 c2' \
+'table=t1 family=2 entries=3 op=nft_reset_rule'
+
+do_test 'nft reset rules table t1' \
+'table=t1 family=2 entries=3 op=nft_reset_rule
+table=t1 family=2 entries=3 op=nft_reset_rule
+table=t1 family=2 entries=3 op=nft_reset_rule'
+
+do_test 'nft reset rules' \
+'table=t1 family=2 entries=3 op=nft_reset_rule
+table=t1 family=2 entries=3 op=nft_reset_rule
+table=t1 family=2 entries=3 op=nft_reset_rule
+table=t2 family=2 entries=3 op=nft_reset_rule
+table=t2 family=2 entries=3 op=nft_reset_rule
+table=t2 family=2 entries=3 op=nft_reset_rule'
+
+for ((i = 0; i < 500; i++)); do
+	echo "add rule t2 c3 counter accept comment \"rule $i\""
+done | do_test 'nft -f -' \
+'table=t2 family=2 entries=500 op=nft_register_rule'
+
+do_test 'nft reset rules t2 c3' \
+'table=t2 family=2 entries=189 op=nft_reset_rule
+table=t2 family=2 entries=188 op=nft_reset_rule
+table=t2 family=2 entries=126 op=nft_reset_rule'
+
+do_test 'nft reset rules t2' \
+'table=t2 family=2 entries=3 op=nft_reset_rule
+table=t2 family=2 entries=3 op=nft_reset_rule
+table=t2 family=2 entries=186 op=nft_reset_rule
+table=t2 family=2 entries=188 op=nft_reset_rule
+table=t2 family=2 entries=129 op=nft_reset_rule'
+
+do_test 'nft reset rules' \
+'table=t1 family=2 entries=3 op=nft_reset_rule
+table=t1 family=2 entries=3 op=nft_reset_rule
+table=t1 family=2 entries=3 op=nft_reset_rule
+table=t2 family=2 entries=3 op=nft_reset_rule
+table=t2 family=2 entries=3 op=nft_reset_rule
+table=t2 family=2 entries=180 op=nft_reset_rule
+table=t2 family=2 entries=188 op=nft_reset_rule
+table=t2 family=2 entries=135 op=nft_reset_rule'
+
+exit $RC