Start the reconnect, fast_io_fail and dev_loss timers if a transport
layer error occurs.
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Acked-by: David Dillow <dillowda@ornl.gov>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Enable fast_io_fail_tmo and dev_loss_tmo functionality for the IB SRP
initiator. Add kernel module parameters that allow default values for
these timeouts to be specified.
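As an illustration only (parameter names and default values here are
assumptions, not necessarily what this patch adds), such a default can
be exposed as a module parameter roughly like this:

    #include <linux/module.h>

    static int srp_fast_io_fail_tmo = 15;       /* default assumed */
    module_param_named(fast_io_fail_tmo, srp_fast_io_fail_tmo, int, 0444);
    MODULE_PARM_DESC(fast_io_fail_tmo,
                     "Default seconds before fast-failing outstanding I/O");

    static int srp_dev_loss_tmo = 600;          /* default assumed */
    module_param_named(dev_loss_tmo, srp_dev_loss_tmo, int, 0444);
    MODULE_PARM_DESC(dev_loss_tmo,
                     "Default seconds before removing a lost rport");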
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Acked-by: David Dillow <dillowda@ornl.gov>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Keep the rport data structure around after srp_remove_host() has
finished until cleanup of the IB transport layer has finished
completely. This is necessary because later patches use the rport
pointer inside the queuecommand callback. Without this patch
accessing the rport from inside a queuecommand callback is racy
because srp_remove_host() must be invoked before scsi_remove_host()
and because the queuecommand callback could get invoked after
srp_remove_host() has finished. In other words, without this patch
the queuecommand callback can get invoked after the rport data
structure has been freed.
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Acked-by: David Dillow <dillowda@ornl.gov>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Allow the InfiniBand RC retry count to be configured by the user as an
option in the target login string. Reducing this retry count shortens
the path failover time.
Signed-off-by: Vu Pham <vu@mellanox.com>
[ bvanassche: Rewrote patch description / changed default retry count ]
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Acked-by: David Dillow <dillowda@ornl.gov>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Commit 7fac33014f54 ("IB/qib: checkpatch fixes") was overzealous in
removing a simple_strtoul for a parse routine, setup_txselect(). That
routine is required to handle a multi-value string.
Unwind that aspect of the fix.
Cc: <stable@vger.kernel.org>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Convert __attribute__ ((packed)) to __packed.
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
qib_user_sdma_queue_pkts() gets called with mmap_sem held for
writing. Except for get_user_pages() deep down in
qib_user_sdma_pin_pages() we don't seem to need mmap_sem at all. Even
more interestingly, qib_user_sdma_queue_pkts() (and also
qib_user_sdma_coalesce(), called somewhat later) calls
copy_from_user(), which can hit a page fault, and we deadlock trying
to take mmap_sem while handling that fault.
So just make qib_user_sdma_pin_pages() use get_user_pages_fast() and
leave mmap_sem locking to the mm code.
This deadlock has actually been observed in the wild when the node
is under memory pressure.
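A minimal sketch of the pattern (not the driver code; the helper name
and the write flag are illustrative):

    #include <linux/errno.h>
    #include <linux/mm.h>

    /*
     * Pin a user buffer with get_user_pages_fast(), which takes and
     * drops mmap_sem internally, instead of calling get_user_pages()
     * under a caller-held mmap_sem.
     */
    static int pin_user_buffer(unsigned long addr, int npages, int writable,
                               struct page **pages)
    {
            int got = get_user_pages_fast(addr, npages, writable, pages);

            if (got < 0)
                    return got;
            if (got < npages) {             /* partial pin: undo and fail */
                    while (got--)
                            put_page(pages[got]);
                    return -EFAULT;
            }
            return 0;
    }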
Cc: <stable@vger.kernel.org>
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Roland Dreier <roland@purestorage.com>
ipath_user_sdma_queue_pkts() gets called with mmap_sem held for
writing. Except for get_user_pages() deep down in
ipath_user_sdma_pin_pages() we don't seem to need mmap_sem at all.
Even more interestingly, ipath_user_sdma_queue_pkts() (and also
ipath_user_sdma_coalesce(), called somewhat later) calls
copy_from_user(), which can hit a page fault, and we deadlock trying
to take mmap_sem while handling that fault. So just make
ipath_user_sdma_pin_pages() use get_user_pages_fast() and leave
mmap_sem locking to the mm code.
This deadlock has actually been observed in the wild when the node
is under memory pressure.
Cc: <stable@vger.kernel.org>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
[ Merged in fix for call to get_user_pages_fast from Tetsuo Handa
<penguin-kernel@I-love.SAKURA.ne.jp>. - Roland ]
Signed-off-by: Roland Dreier <roland@purestorage.com>
Remove the redundant check that compares a 32-bit value against
0xffffffffULL; a u32 can never exceed that value.
Reported by Dan Carpenter.
Signed-off-by: Naresh Gottumukkala <bgottumukkala@emulex.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
1) ocrdma_remove_free() is called from a call_rcu callback function
context, which can be a bottom-half context, so the code in
ocrdma_remove_free() must not sleep.
But ocrdma_cleanup_hw() can sleep, so move it to ocrdma_remove()
instead of ocrdma_remove_free().
2) Fix a couple of kbuild test robot warnings.
Signed-off-by: Naresh Gottumukkala <bgottumukkala@emulex.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
We recently added a cap on "max_wqe_allocated" in 43a6b4025c
('RDMA/ocrdma: Create IRD queue fix').
My static checker complains that the cap has a problem because large
values get cast to negative. "attrs->cap.max_send_wr" is a u32.
It comes from the user, but it's capped in ocrdma_check_qp_params() so
it can't wrap here.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
The Connect-IB adapter has an inherent page size of 4K. Define a new
enum for the corresponding page shift and use it instead of the
literal value 12 throughout the code.
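For example (the exact identifier introduced may differ):

    enum {
            MLX5_ADAPTER_PAGE_SHIFT = 12,   /* Connect-IB native page shift */
            MLX5_ADAPTER_PAGE_SIZE  = 1 << MLX5_ADAPTER_PAGE_SHIFT,
    };

    /* callers then write npages << MLX5_ADAPTER_PAGE_SHIFT rather than
     * npages << 12 */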
Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
An RTS to RTS transition should allow updating the alternate path.
Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
mlx5_cur and mlx5_new cannot have negative values so remove the
redundant condition.
Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
In mlx5_mr_cache_init() the size variable is not used so remove it to
avoid compiler warnings when running with make W=1.
Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Connect-IB firmware requires 4K pages to be communicated with the
driver. This patch breaks larger pages into 4K units to support
architectures that use a larger page size, such as PowerPC. It also
fixes several places that used PAGE_SHIFT instead of an explicit 12,
which is the inherent page shift on Connect-IB.
Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
On QP destroy, the driver walks over the relevant CQ and removes CQEs
reported for the destroyed QP. It also frees the related SRQ entry
without checking that the CQE is actually SRQ-related. For a CQ used
for both the send and receive sides of a QP, we could therefore free
SRQ entries for send CQEs. Resolve this by verifying that the CQE is
SRQ-related, i.e. that the SRQ number in the CQE is non-zero.
Signed-off-by: Moshe Lazer <moshel@mellanox.com>
Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Make use of destroy_srq_kernel() to clear SRQ resources.
Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Make sure not to overflow when reading the page list from struct
ib_fast_reg_page_list.
Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Use asynchronous commands to execute up to eight concurrent create MR
commands. This fills the memory caches faster so that we keep
consuming from them. Also, increase the timeout for shrinking the
caches to five minutes.
Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Verify that the value is non-negative before rounding it up to a power of 2.
Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
'op' is already the RDMA_NL_GET_OP()-masked 'type', so there is no need to mask it again.
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Reviewed-by: Yann Droneaud <ydroneaud@opteya.com>
Acked-by: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Currently, we don't copy the immediate data from the userspace struct
to the kernel one when UD messages are being sent.
This patch makes sure that the immediate data is set correctly.
Signed-off-by: Latchesar Ionkov <lucho@ionkov.net>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Since commit 82dc3c63c692 ("net: introduce NAPI_POLL_WEIGHT")
netif_napi_add() produces an error message if a NAPI poll weight
greater than 64 is requested.
Use the standard NAPI weight.
Signed-off-by: Michal Schmidt <mschmidt@redhat.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
The driver starts the mcast_join task whenever the netdev interface is
UP, regardless of the underlying IB port state.
Until the port state is ACTIVE all join requests are pointless, and
the IB core returns -EINVAL, so the user sees errors such as:
"multicast join failed for ff12:401b:... , status -22".
Instead, have ipoib_mcast_join_task() return when the port is not active.
It will be called again when the port state is changed and the
low-level driver triggers the IB_EVENT_PORT_ACTIVE event or the
IB_EVENT_CLIENT_REREGISTER event.
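A minimal sketch of such a port-state check (helper name assumed):

    #include <rdma/ib_verbs.h>

    /* true only when the underlying IB port is ACTIVE */
    static bool ipoib_port_active(struct ib_device *ca, u8 port)
    {
            struct ib_port_attr attr;

            if (ib_query_port(ca, port, &attr))
                    return false;
            return attr.state == IB_PORT_ACTIVE;
    }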
Signed-off-by: Erez Shitrit <erezsh@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
The path_rec_completion() callback may be invoked asynchronously even
in the middle of the "driver uninit" process. This can lead to
scheduling a task that touches members of the priv object that are no
longer valid. For example, cm_create_tx_qp() can attempt to create a
QP without a valid priv->pd object.
The following crash is one of the results:
RIP: 0010:[<ffffffffa021bb47>] [<ffffffffa021bb47>] ipoib_cm_create_tx_qp+0x57/0x90 [ib_ipoib]
Process ipoib (pid: 5916, threadinfo ffff8803786e4000, task ffff8804150e1500)
Stack:
Call Trace:
[<ffffffff81309ef0>] ? get_random_bytes+0x20/0x30
[<ffffffffa021be2a>] ipoib_cm_tx_init+0xca/0x340 [ib_ipoib]
[<ffffffffa021f765>] ipoib_cm_tx_start+0x215/0x3f0 [ib_ipoib]
[<ffffffffa021f550>] ? ipoib_cm_tx_start+0x0/0x3f0 [ib_ipoib]
[<ffffffff8108b2b0>] worker_thread+0x170/0x2a0
[<ffffffff81090bf0>] ? autoremove_wake_function+0x0/0x40
[<ffffffff8108b140>] ? worker_thread+0x0/0x2a0
[<ffffffff81090886>] kthread+0x96/0xa0
[<ffffffff8100c14a>] child_rip+0xa/0x20
[<ffffffff810907f0>] ? kthread+0x0/0xa0
[<ffffffff8100c140>] ? child_rip+0x0/0x20
Fix that by flushing all pending path queries at this point.
Signed-off-by: Alex Markuze <markuze@mellanox.com>
Signed-off-by: Erez Shitrit <erezsh@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
The driver should avoid calling ib_sa_free_multicast() on the mcast->mc
object until its initialization has completed. Otherwise we can crash
when ipoib_mcast_dev_flush() attempts to use the uninitialized
multicast object.
Instead, only call wait_for_completion() for multicast entries that
have started the join process, meaning that ib_sa_join_multicast() has
finished.
Signed-off-by: Erez Shitrit <erezsh@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
The driver should not flush the whole workqueue when only one work (the
pkey poll one) needs to be cancelled. Use cancel_delayed_work_sync()
instead.
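Sketch of the pattern (structure and field names assumed):

    #include <linux/workqueue.h>

    struct example_priv {
            struct delayed_work pkey_poll_task;
    };

    static void stop_pkey_poll(struct example_priv *priv)
    {
            /* waits for a running instance to finish, but does not drain
             * unrelated work items queued on the same workqueue */
            cancel_delayed_work_sync(&priv->pkey_poll_task);
    }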
Signed-off-by: Erez Shitrit <erezsh@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
When the ipoib interface goes down, it takes all of its children with
it, under a mutex.
For each child, dev_change_flags() is called. That function calls
ipoib_stop() via the ndo and flushes the workqueue.
Sometimes an __ipoib_dev_flush() work is waiting in the workqueue, and
when it runs it tries to take the same mutex, which leads to the
deadlock seen below.
The solution is to switch to an rw-sem instead of the mutex.
The deadlock:
[11028.165303] [<ffffffff812b0977>] ? vgacon_scroll+0x107/0x2e0
[11028.171844] [<ffffffff814eaac5>] schedule_timeout+0x215/0x2e0
[11028.178465] [<ffffffff8105a5c3>] ? perf_event_task_sched_out+0x33/0x80
[11028.185962] [<ffffffff814ea743>] wait_for_common+0x123/0x180
[11028.192491] [<ffffffff8105fa40>] ? default_wake_function+0x0/0x20
[11028.199504] [<ffffffff814ea85d>] wait_for_completion+0x1d/0x20
[11028.206224] [<ffffffff8108b4f1>] flush_cpu_workqueue+0x61/0x90
[11028.212948] [<ffffffff8108b5a0>] ? wq_barrier_func+0x0/0x20
[11028.219375] [<ffffffff8108bfc4>] flush_workqueue+0x54/0x80
[11028.225712] [<ffffffffa05a0576>] ipoib_mcast_stop_thread+0x66/0x90 [ib_ipoib]
[11028.233988] [<ffffffffa059ccea>] ipoib_ib_dev_down+0x6a/0x100 [ib_ipoib]
[11028.241678] [<ffffffffa059849a>] ipoib_stop+0x8a/0x140 [ib_ipoib]
[11028.248692] [<ffffffff8142adf1>] dev_close+0x71/0xc0
[11028.254447] [<ffffffff8142a631>] dev_change_flags+0xa1/0x1d0
[11028.261062] [<ffffffffa059851b>] ipoib_stop+0x10b/0x140 [ib_ipoib]
[11028.268172] [<ffffffff8142adf1>] dev_close+0x71/0xc0
[11028.273922] [<ffffffff8142a631>] dev_change_flags+0xa1/0x1d0
[11028.280452] [<ffffffff8148f20b>] devinet_ioctl+0x5eb/0x6a0
[11028.286786] [<ffffffff814903b8>] inet_ioctl+0x88/0xa0
[11028.292633] [<ffffffff8141591a>] sock_ioctl+0x7a/0x280
[11028.298576] [<ffffffff81189012>] vfs_ioctl+0x22/0xa0
[11028.304326] [<ffffffff81140540>] ? unmap_region+0x110/0x130
[11028.310756] [<ffffffff811891b4>] do_vfs_ioctl+0x84/0x580
[11028.316897] [<ffffffff81189731>] sys_ioctl+0x81/0xa0
and
11028.017533] [<ffffffff8105a5c3>] ? perf_event_task_sched_out+0x33/0x80
[11028.025030] [<ffffffff8100bb8e>] ? apic_timer_interrupt+0xe/0x20
[11028.031945] [<ffffffff814eb2ae>] __mutex_lock_slowpath+0x13e/0x180
[11028.039053] [<ffffffff814eb14b>] mutex_lock+0x2b/0x50
[11028.044910] [<ffffffffa059f7e7>] __ipoib_ib_dev_flush+0x37/0x210 [ib_ipoib]
[11028.052894] [<ffffffffa059fa00>] ? ipoib_ib_dev_flush_light+0x0/0x20 [ib_ipoib]
[11028.061363] [<ffffffffa059fa17>] ipoib_ib_dev_flush_light+0x17/0x20 [ib_ipoib]
[11028.069738] [<ffffffff8108b120>] worker_thread+0x170/0x2a0
[11028.076068] [<ffffffff81090990>] ? autoremove_wake_function+0x0/0x40
[11028.083374] [<ffffffff8108afb0>] ? worker_thread+0x0/0x2a0
[11028.089709] [<ffffffff81090626>] kthread+0x96/0xa0
[11028.095266] [<ffffffff8100c0ca>] child_rip+0xa/0x20
[11028.100921] [<ffffffff81090590>] ? kthread+0x0/0xa0
[11028.106573] [<ffffffff8100c0c0>] ? child_rip+0x0/0x20
[11028.112423] INFO: task ifconfig:23640 blocked for more than 120 seconds.
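A minimal sketch of the locking change (lock and function names are
assumed, not the driver's):

    #include <linux/rwsem.h>

    static DECLARE_RWSEM(vlan_rwsem);       /* replaces the old mutex */

    static void flush_path(void)
    {
            down_read(&vlan_rwsem);         /* flush works can share the lock */
            /* ... walk the child interfaces ... */
            up_read(&vlan_rwsem);
    }

    static void change_children(void)
    {
            down_write(&vlan_rwsem);        /* exclusive while the list changes */
            /* ... add or remove a child interface ... */
            up_write(&vlan_rwsem);
    }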
Signed-off-by: Erez Shitrit <erezsh@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Change CM skb memory allocation to use GFP_KERNEL when possible.
During device init there's no need to use GFP_ATOMIC when allocating
memory for the CM skbs -- use GFP_KERNEL instead.
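Sketch (not the driver code; the helper is illustrative):

    #include <linux/skbuff.h>

    /* callers pass GFP_KERNEL at init time and GFP_ATOMIC elsewhere */
    static struct sk_buff *cm_alloc_rx_skb(unsigned int len, gfp_t gfp)
    {
            return alloc_skb(len, gfp);
    }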
Signed-off-by: Tal Alon <talal@mellanox.com>
Signed-off-by: Erez Shitrit <erezsh@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
If napi has never been enabled when calling ipoib_ib_dev_stop, a
kernel crash occurs, because the verbs layer completion handler
(ipoib_ib_completion) calls napi_schedule unconditionally.
If the napi structure passed in the napi_schedule call has not
been initialized, napi will crash.
The cleanest solution is to simply enable napi before calling
ipoib_ib_dev_stop in the dev_open error flow. (dev_stop then
immediately disables napi).
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Erez Shitrit <erezsh@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
Physical addresses may be wider than virtual addresses (e.g. on i386
with PAE) and must not be formatted with %p.
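One way to do this (illustrative; not necessarily the format this
patch chose):

    #include <linux/printk.h>
    #include <linux/types.h>

    static void print_phys(phys_addr_t paddr)
    {
            /* %pa prints a phys_addr_t, passed by reference, at full width */
            pr_info("queue mapped at %pa\n", &paddr);
    }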
Compile-tested only.
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Roland Dreier <roland@purestorage.com>
As a simple optimization that should speed up the vast majority of
connect attempts on IB devices, when we are searching for the GID of an
incoming connection in the cached GID lists of devices, search the
device that received the incoming connection request first. If we
don't find it there, then move on to other devices.
This reduces the time to perform 10,000 connections considerably.
Prior to this patch, a bad run of cmtime would look like this:
connect : 12399.26 12351.10 8609.00 1239.93
With this patch, it looks more like this:
connect : 5864.86 5799.80 8876.00 586.49
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
The cma_acquire_dev function was changed by commit 3c86aa70bf67
("RDMA/cm: Add RDMA CM support for IBoE devices") to use find_gid_port()
because multiport devices might have either IB or IBoE formatted gids.
The old function assumed that all ports on the same device used the
same GID format.
However, when it was changed to use find_gid_port(), we inadvertently
lost usage of the GID cache. This turned out to be a very costly
change. In our testing, each iteration through each index of the GID
table takes roughly 35us. When you have multiple devices in a system,
and the GID you are looking for is on one of the later devices, the
code loops through all of the GID indexes on all of the early devices
before it finally succeeds on the target device. This pathological
search behavior combined with 35us per GID table index retrieval
produces results such as the following from the cmtime application
that's part of the latest librdmacm git repo:
ib1:
step              total ms     max ms       min us    us / conn
create id    :       29.42       0.04         1.00         2.94
bind addr    :   186705.66      19.00     18556.00     18670.57
resolve addr :       41.93       9.68       619.00         4.19
resolve route:      486.93       0.48       101.00        48.69
create qp    :     4021.95       6.18       330.00       402.20
connect      :    68350.39   68588.17     24632.00      6835.04
disconnect   :     1460.43     252.65  -1862269.00       146.04
destroy      :       41.16       0.04         2.00         4.12
ib0:
step              total ms     max ms       min us    us / conn
create id    :       28.61       0.68         1.00         2.86
bind addr    :     2178.86       2.95       201.00       217.89
resolve addr :       51.26      16.85       845.00         5.13
resolve route:      620.08       0.43        92.00        62.01
create qp    :     3344.40       6.36       273.00       334.44
connect      :     6435.99    6368.53      7844.00       643.60
disconnect   :     5095.38     321.90       757.00       509.54
destroy      :       37.13       0.02         2.00         3.71
Clearly, both the bind address and connect operations suffer
a huge penalty for being anything other than the default
GID on the first port in the system.
After applying this patch, the numbers now look like this:
ib1:
step              total ms     max ms       min us    us / conn
create id    :       30.15       0.03         1.00         3.01
bind addr    :       80.27       0.04         7.00         8.03
resolve addr :       43.02      13.53       589.00         4.30
resolve route:      482.90       0.45       100.00        48.29
create qp    :     3986.55       5.80       330.00       398.66
connect      :     7141.53    7051.29      5005.00       714.15
disconnect   :     5038.85     193.63       918.00       503.88
destroy      :       37.02       0.04         2.00         3.70
ib0:
step              total ms     max ms       min us    us / conn
create id    :       34.27       0.05         1.00         3.43
bind addr    :       26.45       0.04         1.00         2.64
resolve addr :       38.25      10.54       760.00         3.82
resolve route:      604.79       0.43        97.00        60.48
create qp    :     3314.95       6.34       273.00       331.49
connect      :    12399.26   12351.10      8609.00      1239.93
disconnect   :     5096.76     270.72      1015.00       509.68
destroy      :       37.10       0.03         2.00         3.71
It's worth noting that we still suffer a bit of a penalty on
connect to the wrong device, but the penalty is much less than
it used to be. Follow-on patches deal with this penalty.
Many thanks to Neil Horman for helping to track down the slow
function, which allowed us to determine that the original patch I
mentioned above had backed out the cache usage and to see just how
much that impacted the system.
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
To guarantee that all unused fields in all FW commands for both inboxes
and outboxes are zeroed out, initialize the mailbox buffer to all zeroes.
This is especially important for SRIOV comm-channel virtual commands
(such as QUERY_FUNC_CAP), where if new fields are added to support new
features, the driver can depend on older kernels passing zeroes in these
fields.
In addition to zeroing out the mailbox buffer at allocation time, all
(now unnecessary) calls to memset by the callers of
mlx4_alloc_cmd_mailbox() are removed.
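Sketch of the allocation-time zeroing (not the driver code; the
mailbox size is assumed):

    #include <linux/dmapool.h>
    #include <linux/gfp.h>
    #include <linux/string.h>

    #define EXAMPLE_MAILBOX_SIZE 256

    static void *alloc_zeroed_mailbox(struct dma_pool *pool, dma_addr_t *dma)
    {
            void *buf = dma_pool_alloc(pool, GFP_KERNEL, dma);

            if (buf)
                    memset(buf, 0, EXAMPLE_MAILBOX_SIZE);
            return buf;
    }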
Signed-off-by: Majd Dibbiny <majd@mellanox.com>
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
On top of commit 366cddb40 ("IB/rdma_cm: TOS <=> UP mapping for IBoE"),
add support for the case where a vlan egress map is used.
When the IBoE session is set up over a vlan, inherit the socket
priority to vlan priority mapping that was configured for the vlan
device egress map.
Signed-off-by: Eyal Perry <eyalpe@mellanox.com>
Signed-off-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Now with percpu refcounting for se_lun in place, go ahead and drop
the legacy per se_cmd accounting for se_lun shutdown.
This includes __transport_clear_lun_from_sessions(), the associated
transport_lun_wait_for_tasks() logic, along with a handful of now
unused se_cmd structure members and ->transport_state bits.
Cc: Kent Overstreet <kmo@daterainc.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
This patch adds support for completion interrupt coalescing: only
every ISERT_COMP_BATCH_COUNT-th (8) posted descriptor sets
IB_SEND_SIGNALED, thus avoiding a completion interrupt for every
posted iser_tx_desc.
The batching is done using a per-isert_conn llist that, once
IB_SEND_SIGNALED has been set, is saved to tx_desc->comp_llnode_batch,
and completion processing of the previously posted iser_tx_descs is
then done in a single shot from within isert_send_completion().
Note this is only done for response PDUs from ISCSI_OP_SCSI_CMD; all
other control-type PDU responses force an implicit batch drain.
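A minimal sketch of the signaling policy (names and the counter are
assumed):

    #include <rdma/ib_verbs.h>

    #define EXAMPLE_COMP_BATCH 8

    /* request a hardware completion only for every Nth posted send */
    static void set_signaled(struct ib_send_wr *wr, int *nposted)
    {
            wr->send_flags = 0;
            if (++(*nposted) % EXAMPLE_COMP_BATCH == 0)
                    wr->send_flags |= IB_SEND_SIGNALED;
    }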
Cc: Or Gerlitz <ogerlitz@mellanox.com>
Cc: Sagi Grimberg <sagig@mellanox.com>
Cc: Kent Overstreet <kmo@daterainc.com>
Cc: Roland Dreier <roland@purestorage.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
This is step #1 for implementing SRIOV resource quotas for VFs.
Quotas are implemented per resource type for VFs and the PF, to prevent
any entity from simply grabbing all the resources for itself and leaving
the other entities unable to obtain such resources.
Resources which are allocated using quotas: QPs, CQs, SRQs, MPTs, MTTs, MAC,
VLAN, and Counters.
The quota system works as follows:
Each entity (VF or PF) is given a max number of a given resource (its quota),
and a guaranteed minimum number for each resource (starvation prevention).
For QPs, CQs, SRQs, MPTs and MTTs:
50% of the available quantity for the resource is divided equally among
the PF and all the active VFs (i.e., the number of VFs in the mlx4_core module
parameter "num_vfs"). This 50% represents the "guaranteed minimum" pool.
The other 50% is the "free pool", allocated on a first-come-first-serve basis.
For each VF/PF, resources are first allocated from its "guaranteed-minimum"
pool. When that pool is exhausted, the driver attempts to allocate from
the resource "free-pool".
The quota (i.e., max) for the VFs and the PF is:
The free-pool amount (50% of the real max) + the guaranteed minimum
For MACs:
Guarantee 2 MACs per VF/PF per port. As a result, since we have only
128 MACs per port, reduce the allowable number of VFs from 64 to 63.
Any remaining MACs are put into a free pool.
For VLANs:
For the PF, the per-port quota is 128 and guarantee is 64
(to allow the PF to register at least a VLAN per VF in VST mode).
For the VFs, the per-port quota is 64 and the guarantee is 0.
We assume that VGT VFs are trusted not to abuse the VLAN resource.
For Counters:
For all functions (PF and VFs), the quota is 128 and the guarantee is 0.
In this patch, we define the needed structures, which are added to the
resource-tracker struct. In addition, we do initialization
for the resource quota, and adjust the query_device response to use quotas
rather than resource maxima.
As part of the implementation, we introduce a new field in
mlx4_dev: quotas. This field holds the resource quotas used
to report maxima to the upper layers (ib_core, via query_device).
The HCA maxima of these values are passed to the VFs (via
QUERY_HCA) so that they may continue to use these in handling
QPs, CQs, SRQs and MPTs.
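For QPs, CQs, SRQs, MPTs and MTTs the split described above boils down
to arithmetic along these lines (sketch; names assumed):

    /* num_funcs = PF + active VFs */
    static void compute_quota(int real_max, int num_funcs,
                              int *guaranteed, int *quota)
    {
            int free_pool = real_max / 2;              /* first come, first serve */

            *guaranteed = (real_max / 2) / num_funcs;  /* per-function minimum */
            *quota      = free_pool + *guaranteed;     /* per-function maximum */
    }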
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Conflicts:
drivers/net/ethernet/emulex/benet/be.h
drivers/net/netconsole.c
net/bridge/br_private.h
Three mostly trivial conflicts.
The net/bridge/br_private.h conflict was a function signature (argument
addition) change overlapping with the extern removals from Joe Perches.
In drivers/net/netconsole.c we had one change adjusting a printk message
whilst another changed "printk(KERN_INFO" into "pr_info(".
Lastly, the emulex change was a new inline function addition overlapping
with Joe Perches's extern removals.
Signed-off-by: David S. Miller <davem@davemloft.net>
Pull SCSI target fixes from Nicholas Bellinger:
"Here are the outstanding target pending fixes for v3.12-rc7.
This includes a number of EXTENDED_COPY related fixes as a result of
Thomas and Doug's continuing testing and feedback.
Also included is an important vhost/scsi fix that addresses a long
standing issue where the 'write' parameter for get_user_pages_fast()
was incorrectly set for virtio-scsi WRITEs -> DMA_TO_DEVICE, and not
for virtio-scsi READs -> DMA_FROM_DEVICE.
This resulted in random userspace segfaults and other unpleasantness
on KVM host, and unfortunately has been an issue since the initial
merge of vhost/scsi in v3.6. This patch is CC'ed to stable, along
with two other less critical items"
* git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending:
vhost/scsi: Fix incorrect usage of get_user_pages_fast write parameter
target/pscsi: fix return value check
target: Fail XCOPY for non matching source + destination block_size
target: Generate failure for XCOPY I/O with non-zero scsi_status
target: Add missing XCOPY I/O operation sense_buffer
iser-target: check device before dereferencing its variable
target: Return an error for WRITE SAME with ANCHOR==1
target: Fix assignment of LUN in tracepoints
target: Reject EXTENDED_COPY when emulate_3pc is disabled
target: Allow non zero ListID in EXTENDED_COPY parameter list
target: Make target_do_xcopy failures return INVALID_PARAMETER_LIST
This patch changes isert_connect_release() to correctly check for the
existence of struct isert_device *device before checking
isert_device->use_frwr.
Signed-off-by: Vu Pham <vu@mellanox.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Conflicts:
drivers/net/usb/qmi_wwan.c
include/net/dst.h
Trivial merge conflicts, both were overlapping changes.
Signed-off-by: David S. Miller <davem@davemloft.net>
The create_flow/destroy_flow uverbs and the associated extensions to
the user-kernel verbs ABI are under review and are too experimental to
freeze at this point.
So that userspace is not exposed to experimental features and an
unstable ABI, temporarily disable this for v3.12 (with a Kconfig
option behind staging to reenable it if desired).
The feature will be enabled after proper cleanup for v3.13.
Signed-off-by: Yann Droneaud <ydroneaud@opteya.com>
Link: http://marc.info/?i=cover.1381351016.git.ydroneaud@opteya.com
Link: http://marc.info/?i=cover.1381177342.git.ydroneaud@opteya.com
[ Add a Kconfig option to reenable these verbs. - Roland ]
Signed-off-by: Roland Dreier <roland@purestorage.com>
Merge tag 'rdma-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/roland/infiniband
Pull infiniband updates from Roland Dreier:
"Last batch of IB changes for 3.12: many mlx5 hardware driver fixes
plus one trivial semicolon cleanup"
* tag 'rdma-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/roland/infiniband:
IB: Remove unnecessary semicolons
IB/mlx5: Ensure proper synchronization accessing memory
IB/mlx5: Fix alignment of reg umr gather buffers
IB/mlx5: Fix eq names to display nicely in /proc/interrupts
mlx5: Fix error code translation from firmware to driver
IB/mlx5: Fix opt param mask according to firmware spec
mlx5: Fix opt param mask for sq err to rts transition
IB/mlx5: Disable atomic operations
mlx5: Fix layout of struct mlx5_init_seg
mlx5: Keep polling to reclaim pages while any returned
IB/mlx5: Avoid async events on invalid port number
IB/mlx5: Decrease memory consumption of mr caches
mlx5: Remove checksum on command interface commands
IB/mlx5: Fix memory leak in mlx5_ib_create_srq
IB/mlx5: Flush cache workqueue before destroying it
IB/mlx5: Fix send work queue size calculation
Call mlx5_ib_populate_pas() before mapping the DMA buffer to ensure
the hardware reads the values written by the CPU.
Found by: Haggai Eran <haggaie@mellanox.com>
Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>
The hardware requires that gather buffers for UMR work requests be
aligned to 2K.
Signed-off-by: Eli Cohen <eli@mellanox.com>
Signed-off-by: Roland Dreier <roland@purestorage.com>