Eric Dumazet 65e8354ec1 inetpeer: seqlock optimization
David noticed:

------------------
Eric, I was profiling the non-routing-cache case and something that
stuck out is the case of calling inet_getpeer() with create==0.

If an entry is not found, we have to redo the lookup under a spinlock
to make certain that a concurrent writer rebalancing the tree does
not "hide" an existing entry from us.

This makes the case of a create==0 lookup for a not-present entry
really expensive.  It is on the order of 600 cpu cycles on my
Niagara2.

I added a hack to not do the relookup under the lock when create==0
and it now costs less than 300 cycles.

This is now a pretty common operation with the way we handle COW'd
metrics, so I think it's definitely worth optimizing.
-----------------
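
For context, the pre-patch slow path David profiled looks roughly like
this (a sketch only; helper names follow net/ipv4/inetpeer.c, and
stackptr is the AVL path stack used for insertion):

	struct inet_peer *p;

	/* First pass: lockless RCU lookup. A concurrent rebalance can
	 * hide an existing entry from this walk, so a miss here is not
	 * authoritative.
	 */
	rcu_read_lock_bh();
	p = lookup_rcu_bh(daddr, base);
	rcu_read_unlock_bh();
	if (p)
		return p;

	/* Second pass: confirm the miss under the lock. This is the
	 * ~600 cycles paid even by create==0 callers.
	 */
	spin_lock_bh(&base->lock);
	p = lookup(daddr, stackptr, base);
	spin_unlock_bh(&base->lock);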

One solution is to use a seqlock instead of a spinlock to protect struct
inet_peer_base.

After a failed AVL tree lookup, we can easily detect whether a writer
made changes during our lookup. Taking the lock and redoing the lookup
is only necessary in that case.
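
Roughly, the create==0 fast path then becomes (again a sketch; helper
names follow net/ipv4/inetpeer.c, refcounting and node insertion are
elided):

	unsigned int sequence;
	int invalidated;
	struct inet_peer *p;

	/* Lockless lookup, bracketed by seqlock reads so we can tell
	 * whether a writer ran during our walk.
	 */
	rcu_read_lock_bh();
	sequence = read_seqbegin(&base->lock);
	p = lookup_rcu_bh(daddr, base);
	invalidated = read_seqretry(&base->lock, sequence);
	rcu_read_unlock_bh();

	if (p)
		return p;

	/* No writer changed the tree during the lookup: the miss is
	 * definitive, so a create==0 caller returns without ever
	 * taking the lock.
	 */
	if (!create && !invalidated)
		return NULL;

	/* Otherwise redo the exact lookup on the write side of the
	 * seqlock, as before.
	 */
	write_seqlock_bh(&base->lock);
	p = lookup(daddr, stackptr, base);
	/* insertion of a new node goes here when create != 0 */
	write_sequnlock_bh(&base->lock);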

Note: add a private rcu_deref_locked() macro so that the access to the
spinlock embedded in the seqlock is confined to a single spot.
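
Concretely, such a macro can be written as follows (assuming base->lock
is now a seqlock_t, whose embedded spinlock is its .lock member):

	#define rcu_deref_locked(X, BASE)				\
		rcu_dereference_protected(X, lockdep_is_held(&(BASE)->lock.lock))

Writers then use rcu_deref_locked(ptr, base) wherever they previously
passed lockdep_is_held(&base->lock) directly.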

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2011-03-04 14:33:59 -08:00