workqueue: Changes for v6.9 (merge commit ff887eb07c)

Merge tag 'wq-for-6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq

Pull workqueue updates from Tejun Heo:
 "This cycle, a lot of workqueue changes, including some that are significant and invasive.

  - During the v6.6 cycle, unbound workqueues were updated so that they are more topology aware and flexible, which among other things improved workqueue behavior on modern multi-L3 CPUs. In the process, commit 636b927eba ("workqueue: Make unbound workqueues to use per-cpu pool_workqueues") switched unbound workqueues to use per-CPU frontend pool_workqueues as a part of increasing front-back mapping flexibility. An unwelcome side effect of this change was that it made max concurrency enforcement per-CPU, blowing up the maximum number of allowed concurrent executions. I incorrectly assumed that this wouldn't cause practical problems, as most unbound workqueue users self-regulate max concurrency; however, there definitely are some which don't (e.g. on IO paths), and the drastic increase in the allowed max concurrency led to noticeable perf regressions in some use cases. This is now addressed by separating out max concurrency enforcement into a separate struct, wq_node_nr_active, which makes @max_active consistently mean system-wide max concurrency regardless of the number of CPUs or (finally) NUMA nodes. This is rather invasive and, in places, a bit clunky; however, the clunkiness arises from the inherent requirement to handle the disagreement between the execution locality domain and the max concurrency enforcement domain on some modern machines. See commit 5797b1c189 ("workqueue: Implement system-wide nr_active enforcement for unbound workqueues") for more details.

  - BH workqueue support is added. BH workqueues are similar to per-CPU workqueues but execute work items in the softirq context. This is expected to replace tasklets. However, it currently lacks the ability to disable and enable work items, which is needed to convert many tasklet users. To avoid crowding this merge window too much, that will be included in the next merge window. A separate pull request will be sent for the couple of conversion patches that are currently pending.

  - Waiman plugged a long-standing hole in workqueue CPU isolation where ordered workqueues didn't follow wq_unbound_cpumask updates. Ordered workqueues now follow the same rules as other unbound workqueues.

  - More CPU isolation improvements: Juri fixed another deficit in workqueue isolation where unbound rescuers didn't respect wq_unbound_cpumask. Leonardo fixed delayed_work timers firing on isolated CPUs.

  - Other misc changes"

* tag 'wq-for-6.9' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq: (54 commits)
  workqueue: Drain BH work items on hot-unplugged CPUs
  workqueue: Introduce from_work() helper for cleaner callback declarations
  workqueue: Control intensive warning threshold through cmdline
  workqueue: Make @flags handling consistent across set_work_data() and friends
  workqueue: Remove clear_work_data()
  workqueue: Factor out work_grab_pending() from __cancel_work_sync()
  workqueue: Clean up enum work_bits and related constants
  workqueue: Introduce work_cancel_flags
  workqueue: Use variable name irq_flags for saving local irq flags
  workqueue: Reorganize flush and cancel[_sync] functions
  workqueue: Rename __cancel_work_timer() to __cancel_timer_sync()
  workqueue: Use rcu_read_lock_any_held() instead of rcu_read_lock_held()
  workqueue: Cosmetic changes
  workqueue, irq_work: Build fix for !CONFIG_IRQ_WORK
  workqueue: Fix queue_work_on() with BH workqueues
  async: Use a dedicated unbound workqueue with raised min_active
  workqueue: Implement workqueue_set_min_active()
  workqueue: Fix kernel-doc comment of unplug_oldest_pwq()
  workqueue: Bind unbound workqueue rescuer to wq_unbound_cpumask
  kernel/workqueue: Let rescuers follow unbound wq cpumask changes
  ...
Documentation/admin-guide/kernel-parameters.txt
@@ -7244,6 +7244,15 @@
 			threshold repeatedly. They are likely good
 			candidates for using WQ_UNBOUND workqueues instead.
 
+	workqueue.cpu_intensive_warning_thresh=<uint>
+			If CONFIG_WQ_CPU_INTENSIVE_REPORT is set, the kernel
+			will report the work functions which violate the
+			intensive_threshold_us repeatedly. In order to prevent
+			spurious warnings, start printing only after a work
+			function has violated this threshold number of times.
+
+			The default is 4 times. 0 disables the warning.
+
 	workqueue.power_efficient
 			Per-cpu workqueues are generally preferred because
 			they show better performance thanks to cache
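For example, a kernel booted with "workqueue.cpu_intensive_warning_thresh=10" on its command line would only start warning about a CPU-hogging work function after it had crossed workqueue.cpu_intensive_thresh_us ten times (an illustrative value, not a recommendation taken from the patch).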
Documentation/core-api/workqueue.rst
@@ -77,10 +77,12 @@ wants a function to be executed asynchronously it has to set up a work
 item pointing to that function and queue that work item on a
 workqueue.
 
-Special purpose threads, called worker threads, execute the functions
-off of the queue, one after the other. If no work is queued, the
-worker threads become idle. These worker threads are managed in so
-called worker-pools.
+A work item can be executed in either a thread or the BH (softirq) context.
+
+For threaded workqueues, special purpose threads, called [k]workers, execute
+the functions off of the queue, one after the other. If no work is queued,
+the worker threads become idle. These worker threads are managed in
+worker-pools.
 
 The cmwq design differentiates between the user-facing workqueues that
 subsystems and drivers queue work items on and the backend mechanism
@@ -91,6 +93,12 @@ for high priority ones, for each possible CPU and some extra
 worker-pools to serve work items queued on unbound workqueues - the
 number of these backing pools is dynamic.
 
+BH workqueues use the same framework. However, as there can only be one
+concurrent execution context, there's no need to worry about concurrency.
+Each per-CPU BH worker pool contains only one pseudo worker which represents
+the BH execution context. A BH workqueue can be considered a convenience
+interface to softirq.
+
 Subsystems and drivers can create and queue work items through special
 workqueue API functions as they see fit. They can influence some
 aspects of the way the work items are executed by setting flags on the
@@ -106,7 +114,7 @@ unless specifically overridden, a work item of a bound workqueue will
 be queued on the worklist of either normal or highpri worker-pool that
 is associated to the CPU the issuer is running on.
 
-For any worker pool implementation, managing the concurrency level
+For any thread pool implementation, managing the concurrency level
 (how many execution contexts are active) is an important issue. cmwq
 tries to keep the concurrency at a minimal but sufficient level.
 Minimal to save resources and sufficient in that the system is used at
@@ -164,6 +172,17 @@ resources, scheduled and executed.
 ``flags``
 ---------
 
+``WQ_BH``
+  BH workqueues can be considered a convenience interface to softirq. BH
+  workqueues are always per-CPU and all BH work items are executed in the
+  queueing CPU's softirq context in the queueing order.
+
+  All BH workqueues must have 0 ``max_active`` and ``WQ_HIGHPRI`` is the
+  only allowed additional flag.
+
+  BH work items cannot sleep. All other features such as delayed queueing,
+  flushing and canceling are supported.
+
 ``WQ_UNBOUND``
   Work items queued to an unbound wq are served by the special
   worker-pools which host workers which are not bound to any
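To make the constraints above concrete, here is a minimal sketch of allocating a dedicated BH workqueue (the names are hypothetical; per the rules above, ``max_active`` must be 0 and ``WQ_HIGHPRI`` is the only additional flag allowed):

static struct workqueue_struct *foo_bh_wq;

static int __init foo_init(void)
{
        /* 0 max_active is mandatory for BH workqueues */
        foo_bh_wq = alloc_workqueue("foo_bh", WQ_BH | WQ_HIGHPRI, 0);
        if (!foo_bh_wq)
                return -ENOMEM;
        return 0;
}

/* later, queue_work(foo_bh_wq, &some_work) runs some_work in softirq context */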
@@ -237,15 +256,11 @@ may queue at the same time. Unless there is a specific need for
 throttling the number of active work items, specifying '0' is
 recommended.
 
-Some users depend on the strict execution ordering of ST wq. The
-combination of ``@max_active`` of 1 and ``WQ_UNBOUND`` used to
-achieve this behavior. Work items on such wq were always queued to the
-unbound worker-pools and only one work item could be active at any given
-time thus achieving the same ordering property as ST wq.
-
-In the current implementation the above configuration only guarantees
-ST behavior within a given NUMA node. Instead ``alloc_ordered_workqueue()`` should
-be used to achieve system-wide ST behavior.
+Some users depend on strict execution ordering where only one work item
+is in flight at any given time and the work items are processed in
+queueing order. While the combination of ``@max_active`` of 1 and
+``WQ_UNBOUND`` used to achieve this behavior, this is no longer the
+case. Use ``alloc_ordered_workqueue()`` instead.
 
 
 Example Execution Scenarios
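A brief sketch of the recommended replacement (driver names invented): an ordered workqueue guarantees that at most one of its work items is executing at any time, system-wide, in queueing order.

static struct workqueue_struct *foo_ordered_wq;

static int foo_setup(void)
{
        /* strictly one in-flight work item, FIFO across the whole system */
        foo_ordered_wq = alloc_ordered_workqueue("foo_ordered", 0);
        if (!foo_ordered_wq)
                return -ENOMEM;
        return 0;
}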
include/linux/async.h
@@ -120,4 +120,5 @@ extern void async_synchronize_cookie(async_cookie_t cookie);
 extern void async_synchronize_cookie_domain(async_cookie_t cookie,
 					    struct async_domain *domain);
 extern bool current_is_async(void);
+extern void async_init(void);
 #endif
include/linux/workqueue.h
@@ -22,20 +22,54 @@
  */
 #define work_data_bits(work) ((unsigned long *)(&(work)->data))
 
-enum {
+enum work_bits {
 	WORK_STRUCT_PENDING_BIT	= 0,	/* work item is pending execution */
-	WORK_STRUCT_INACTIVE_BIT= 1,	/* work item is inactive */
-	WORK_STRUCT_PWQ_BIT	= 2,	/* data points to pwq */
-	WORK_STRUCT_LINKED_BIT	= 3,	/* next work is linked to this one */
+	WORK_STRUCT_INACTIVE_BIT,	/* work item is inactive */
+	WORK_STRUCT_PWQ_BIT,		/* data points to pwq */
+	WORK_STRUCT_LINKED_BIT,		/* next work is linked to this one */
 #ifdef CONFIG_DEBUG_OBJECTS_WORK
-	WORK_STRUCT_STATIC_BIT	= 4,	/* static initializer (debugobjects) */
-	WORK_STRUCT_COLOR_SHIFT	= 5,	/* color for workqueue flushing */
-#else
-	WORK_STRUCT_COLOR_SHIFT	= 4,	/* color for workqueue flushing */
+	WORK_STRUCT_STATIC_BIT,		/* static initializer (debugobjects) */
 #endif
+	WORK_STRUCT_FLAG_BITS,
 
+	/* color for workqueue flushing */
+	WORK_STRUCT_COLOR_SHIFT	= WORK_STRUCT_FLAG_BITS,
 	WORK_STRUCT_COLOR_BITS	= 4,
 
+	/*
+	 * When WORK_STRUCT_PWQ is set, reserve 8 bits off of pwq pointer w/
+	 * debugobjects turned off. This makes pwqs aligned to 256 bytes (512
+	 * bytes w/ DEBUG_OBJECTS_WORK) and allows 16 workqueue flush colors.
+	 *
+	 * MSB
+	 * [ pwq pointer ] [ flush color ] [ STRUCT flags ]
+	 *                     4 bits        4 or 5 bits
+	 */
+	WORK_STRUCT_PWQ_SHIFT	= WORK_STRUCT_COLOR_SHIFT + WORK_STRUCT_COLOR_BITS,
+
+	/*
+	 * data contains off-queue information when !WORK_STRUCT_PWQ.
+	 *
+	 * MSB
+	 * [ pool ID ] [ OFFQ flags ] [ STRUCT flags ]
+	 *                  1 bit        4 or 5 bits
+	 */
+	WORK_OFFQ_FLAG_SHIFT	= WORK_STRUCT_FLAG_BITS,
+	WORK_OFFQ_CANCELING_BIT	= WORK_OFFQ_FLAG_SHIFT,
+	WORK_OFFQ_FLAG_END,
+	WORK_OFFQ_FLAG_BITS	= WORK_OFFQ_FLAG_END - WORK_OFFQ_FLAG_SHIFT,
+
+	/*
+	 * When a work item is off queue, the high bits encode off-queue flags
+	 * and the last pool it was on. Cap pool ID to 31 bits and use the
+	 * highest number to indicate that no pool is associated.
+	 */
+	WORK_OFFQ_POOL_SHIFT	= WORK_OFFQ_FLAG_SHIFT + WORK_OFFQ_FLAG_BITS,
+	WORK_OFFQ_LEFT		= BITS_PER_LONG - WORK_OFFQ_POOL_SHIFT,
+	WORK_OFFQ_POOL_BITS	= WORK_OFFQ_LEFT <= 31 ? WORK_OFFQ_LEFT : 31,
+};
+
+enum work_flags {
 	WORK_STRUCT_PENDING	= 1 << WORK_STRUCT_PENDING_BIT,
 	WORK_STRUCT_INACTIVE	= 1 << WORK_STRUCT_INACTIVE_BIT,
 	WORK_STRUCT_PWQ		= 1 << WORK_STRUCT_PWQ_BIT,
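For concreteness, assuming a 64-bit kernel with CONFIG_DEBUG_OBJECTS_WORK disabled (my own worked example, not part of the patch), the new enum resolves to: WORK_STRUCT_FLAG_BITS = 4, so WORK_STRUCT_COLOR_SHIFT = 4 and WORK_STRUCT_PWQ_SHIFT = 4 + 4 = 8, meaning pool_workqueues must be aligned to 2^8 = 256 bytes; on the off-queue side, WORK_OFFQ_FLAG_SHIFT = 4, WORK_OFFQ_CANCELING_BIT = 4, WORK_OFFQ_FLAG_BITS = 1, WORK_OFFQ_POOL_SHIFT = 5, WORK_OFFQ_LEFT = 64 - 5 = 59, and WORK_OFFQ_POOL_BITS is therefore capped at 31.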
@@ -45,35 +79,14 @@ enum {
 #else
 	WORK_STRUCT_STATIC	= 0,
 #endif
+};
 
+enum wq_misc_consts {
 	WORK_NR_COLORS		= (1 << WORK_STRUCT_COLOR_BITS),
 
 	/* not bound to any CPU, prefer the local CPU */
 	WORK_CPU_UNBOUND	= NR_CPUS,
 
-	/*
-	 * Reserve 8 bits off of pwq pointer w/ debugobjects turned off.
-	 * This makes pwqs aligned to 256 bytes and allows 16 workqueue
-	 * flush colors.
-	 */
-	WORK_STRUCT_FLAG_BITS	= WORK_STRUCT_COLOR_SHIFT +
-				  WORK_STRUCT_COLOR_BITS,
-
-	/* data contains off-queue information when !WORK_STRUCT_PWQ */
-	WORK_OFFQ_FLAG_BASE	= WORK_STRUCT_COLOR_SHIFT,
-
-	__WORK_OFFQ_CANCELING	= WORK_OFFQ_FLAG_BASE,
-
-	/*
-	 * When a work item is off queue, its high bits point to the last
-	 * pool it was on. Cap at 31 bits and use the highest number to
-	 * indicate that no pool is associated.
-	 */
-	WORK_OFFQ_FLAG_BITS	= 1,
-	WORK_OFFQ_POOL_SHIFT	= WORK_OFFQ_FLAG_BASE + WORK_OFFQ_FLAG_BITS,
-	WORK_OFFQ_LEFT		= BITS_PER_LONG - WORK_OFFQ_POOL_SHIFT,
-	WORK_OFFQ_POOL_BITS	= WORK_OFFQ_LEFT <= 31 ? WORK_OFFQ_LEFT : 31,
-
 	/* bit mask for work_busy() return values */
 	WORK_BUSY_PENDING	= 1 << 0,
 	WORK_BUSY_RUNNING	= 1 << 1,
@@ -83,12 +96,10 @@ enum {
 };
 
 /* Convenience constants - of type 'unsigned long', not 'enum'! */
-#define WORK_OFFQ_CANCELING	(1ul << __WORK_OFFQ_CANCELING)
+#define WORK_OFFQ_CANCELING	(1ul << WORK_OFFQ_CANCELING_BIT)
 #define WORK_OFFQ_POOL_NONE	((1ul << WORK_OFFQ_POOL_BITS) - 1)
 #define WORK_STRUCT_NO_POOL	(WORK_OFFQ_POOL_NONE << WORK_OFFQ_POOL_SHIFT)
-
-#define WORK_STRUCT_FLAG_MASK	((1ul << WORK_STRUCT_FLAG_BITS) - 1)
-#define WORK_STRUCT_WQ_DATA_MASK	(~WORK_STRUCT_FLAG_MASK)
+#define WORK_STRUCT_PWQ_MASK	(~((1ul << WORK_STRUCT_PWQ_SHIFT) - 1))
 
 #define WORK_DATA_INIT()	ATOMIC_LONG_INIT((unsigned long)WORK_STRUCT_NO_POOL)
 #define WORK_DATA_STATIC_INIT()	\
@@ -347,7 +358,8 @@ static inline unsigned int work_static(struct work_struct *work) { return 0; }
  * Workqueue flags and constants. For details, please refer to
  * Documentation/core-api/workqueue.rst.
  */
-enum {
+enum wq_flags {
+	WQ_BH			= 1 << 0, /* execute in bottom half (softirq) context */
 	WQ_UNBOUND		= 1 << 1, /* not bound to any cpu */
 	WQ_FREEZABLE		= 1 << 2, /* freeze during suspend */
 	WQ_MEM_RECLAIM		= 1 << 3, /* may be used for memory reclaim */
@@ -386,11 +398,22 @@ enum {
 	__WQ_DRAINING		= 1 << 16, /* internal: workqueue is draining */
 	__WQ_ORDERED		= 1 << 17, /* internal: workqueue is ordered */
 	__WQ_LEGACY		= 1 << 18, /* internal: create*_workqueue() */
-	__WQ_ORDERED_EXPLICIT	= 1 << 19, /* internal: alloc_ordered_workqueue() */
 
+	/* BH wq only allows the following flags */
+	__WQ_BH_ALLOWS		= WQ_BH | WQ_HIGHPRI,
+};
+
+enum wq_consts {
 	WQ_MAX_ACTIVE		= 512,	  /* I like 512, better ideas? */
 	WQ_UNBOUND_MAX_ACTIVE	= WQ_MAX_ACTIVE,
 	WQ_DFL_ACTIVE		= WQ_MAX_ACTIVE / 2,
+
+	/*
+	 * Per-node default cap on min_active. Unless explicitly set, min_active
+	 * is set to min(max_active, WQ_DFL_MIN_ACTIVE). For more details, see
+	 * workqueue_struct->min_active definition.
+	 */
+	WQ_DFL_MIN_ACTIVE	= 8,
 };
 
 /*
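As a worked example of the new default (my own illustration): an unbound workqueue allocated with a max_active of 4 ends up with min_active = min(4, WQ_DFL_MIN_ACTIVE) = 4, while one allocated with a max_active of 512 gets min_active = 8 unless the creator raises it explicitly with workqueue_set_min_active(), as kernel/async.c does later in this series.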
@@ -420,6 +443,9 @@ enum {
  * they are same as their non-power-efficient counterparts - e.g.
  * system_power_efficient_wq is identical to system_wq if
  * 'wq_power_efficient' is disabled. See WQ_POWER_EFFICIENT for more info.
+ *
+ * system_bh[_highpri]_wq are convenience interface to softirq. BH work items
+ * are executed in the queueing CPU's BH context in the queueing order.
  */
 extern struct workqueue_struct *system_wq;
 extern struct workqueue_struct *system_highpri_wq;
@@ -428,16 +454,43 @@ extern struct workqueue_struct *system_unbound_wq;
 extern struct workqueue_struct *system_freezable_wq;
 extern struct workqueue_struct *system_power_efficient_wq;
 extern struct workqueue_struct *system_freezable_power_efficient_wq;
+extern struct workqueue_struct *system_bh_wq;
+extern struct workqueue_struct *system_bh_highpri_wq;
+
+void workqueue_softirq_action(bool highpri);
+void workqueue_softirq_dead(unsigned int cpu);
 
 /**
  * alloc_workqueue - allocate a workqueue
  * @fmt: printf format for the name of the workqueue
  * @flags: WQ_* flags
- * @max_active: max in-flight work items per CPU, 0 for default
+ * @max_active: max in-flight work items, 0 for default
  * remaining args: args for @fmt
  *
- * Allocate a workqueue with the specified parameters. For detailed
- * information on WQ_* flags, please refer to
+ * For a per-cpu workqueue, @max_active limits the number of in-flight work
+ * items for each CPU. e.g. @max_active of 1 indicates that each CPU can be
+ * executing at most one work item for the workqueue.
+ *
+ * For unbound workqueues, @max_active limits the number of in-flight work items
+ * for the whole system. e.g. @max_active of 16 indicates that there can be
+ * at most 16 work items executing for the workqueue in the whole system.
+ *
+ * As sharing the same active counter for an unbound workqueue across multiple
+ * NUMA nodes can be expensive, @max_active is distributed to each NUMA node
+ * according to the proportion of the number of online CPUs and enforced
+ * independently.
+ *
+ * Depending on online CPU distribution, a node may end up with per-node
+ * max_active which is significantly lower than @max_active, which can lead to
+ * deadlocks if the per-node concurrency limit is lower than the maximum number
+ * of interdependent work items for the workqueue.
+ *
+ * To guarantee forward progress regardless of online CPU distribution, the
+ * concurrency limit on every node is guaranteed to be equal to or greater than
+ * min_active which is set to min(@max_active, %WQ_DFL_MIN_ACTIVE). This means
+ * that the sum of per-node max_active's may be larger than @max_active.
+ *
+ * For detailed information on %WQ_* flags, please refer to
  * Documentation/core-api/workqueue.rst.
  *
  * RETURNS:
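Putting the new semantics together, here is a hedged sketch (names invented) of a subsystem that issues chains of interdependent unbound work items and therefore raises min_active, similar in spirit to what kernel/async.c does further down:

static struct workqueue_struct *foo_wq;

static int __init foo_wq_init(void)
{
        /* at most 32 foo work items in flight across the whole system */
        foo_wq = alloc_workqueue("foo", WQ_UNBOUND, 32);
        if (!foo_wq)
                return -ENOMEM;

        /*
         * Ensure each NUMA node is allowed at least 16 concurrent work
         * items so chains of dependent items cannot stall.
         */
        workqueue_set_min_active(foo_wq, 16);
        return 0;
}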
@@ -460,8 +513,7 @@ alloc_workqueue(const char *fmt, unsigned int flags, int max_active, ...);
  * Pointer to the allocated workqueue on success, %NULL on failure.
  */
 #define alloc_ordered_workqueue(fmt, flags, args...)			\
-	alloc_workqueue(fmt, WQ_UNBOUND | __WQ_ORDERED |		\
-			__WQ_ORDERED_EXPLICIT | (flags), 1, ##args)
+	alloc_workqueue(fmt, WQ_UNBOUND | __WQ_ORDERED | (flags), 1, ##args)
 
 #define create_workqueue(name)						\
 	alloc_workqueue("%s", __WQ_LEGACY | WQ_MEM_RECLAIM, 1, (name))
@@ -471,6 +523,9 @@ alloc_workqueue(const char *fmt, unsigned int flags, int max_active, ...);
 #define create_singlethread_workqueue(name)				\
 	alloc_ordered_workqueue("%s", __WQ_LEGACY | WQ_MEM_RECLAIM, name)
 
+#define from_work(var, callback_work, work_fieldname)	\
+	container_of(callback_work, typeof(*var), work_fieldname)
+
 extern void destroy_workqueue(struct workqueue_struct *wq);
 
 struct workqueue_attrs *alloc_workqueue_attrs(void);
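The new from_work() helper is a typed wrapper around container_of(); a usage sketch with invented struct and field names:

struct foo_device {
        struct work_struct reset_work;
};

static void foo_reset_fn(struct work_struct *work)
{
        /* equivalent to container_of(work, struct foo_device, reset_work) */
        struct foo_device *foo = from_work(foo, work, reset_work);

        /* ... operate on foo ... */
}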
@@ -508,6 +563,8 @@ extern bool flush_rcu_work(struct rcu_work *rwork);
 
 extern void workqueue_set_max_active(struct workqueue_struct *wq,
 				     int max_active);
+extern void workqueue_set_min_active(struct workqueue_struct *wq,
+				     int min_active);
 extern struct work_struct *current_work(void);
 extern bool current_is_workqueue_rescuer(void);
 extern bool workqueue_congested(int cpu, struct workqueue_struct *wq);
init/Kconfig
@@ -115,7 +115,7 @@ config CONSTRUCTORS
 	bool
 
 config IRQ_WORK
-	bool
+	def_bool y if SMP
 
 config BUILDTIME_TABLE_SORT
 	bool
init/main.c
@@ -1547,6 +1547,7 @@ static noinline void __init kernel_init_freeable(void)
 	sched_init_smp();
 
 	workqueue_init_topology();
+	async_init();
 	padata_init();
 	page_alloc_init_late();
 
kernel/async.c
@@ -64,6 +64,7 @@ static async_cookie_t next_cookie = 1;
 static LIST_HEAD(async_global_pending);	/* pending from all registered doms */
 static ASYNC_DOMAIN(async_dfl_domain);
 static DEFINE_SPINLOCK(async_lock);
+static struct workqueue_struct *async_wq;
 
 struct async_entry {
 	struct list_head	domain_list;
@@ -174,7 +175,7 @@ static async_cookie_t __async_schedule_node_domain(async_func_t func,
 	spin_unlock_irqrestore(&async_lock, flags);
 
 	/* schedule for execution */
-	queue_work_node(node, system_unbound_wq, &entry->work);
+	queue_work_node(node, async_wq, &entry->work);
 
 	return newcookie;
 }
@@ -345,3 +346,17 @@ bool current_is_async(void)
 	return worker && worker->current_func == async_run_entry_fn;
 }
 EXPORT_SYMBOL_GPL(current_is_async);
+
+void __init async_init(void)
+{
+	/*
+	 * Async can schedule a number of interdependent work items. However,
+	 * unbound workqueues can handle only upto min_active interdependent
+	 * work items. The default min_active of 8 isn't sufficient for async
+	 * and can lead to stalls. Let's use a dedicated workqueue with raised
+	 * min_active.
+	 */
+	async_wq = alloc_workqueue("async", WQ_UNBOUND, 0);
+	BUG_ON(!async_wq);
+	workqueue_set_min_active(async_wq, WQ_DFL_ACTIVE);
+}
kernel/softirq.c
@@ -27,6 +27,7 @@
 #include <linux/tick.h>
 #include <linux/irq.h>
 #include <linux/wait_bit.h>
+#include <linux/workqueue.h>
 
 #include <asm/softirq_stack.h>
 
@@ -802,11 +803,13 @@ static void tasklet_action_common(struct softirq_action *a,
 
 static __latent_entropy void tasklet_action(struct softirq_action *a)
 {
+	workqueue_softirq_action(false);
 	tasklet_action_common(a, this_cpu_ptr(&tasklet_vec), TASKLET_SOFTIRQ);
 }
 
 static __latent_entropy void tasklet_hi_action(struct softirq_action *a)
 {
+	workqueue_softirq_action(true);
 	tasklet_action_common(a, this_cpu_ptr(&tasklet_hi_vec), HI_SOFTIRQ);
 }
 
@@ -929,6 +932,8 @@ static void run_ksoftirqd(unsigned int cpu)
 #ifdef CONFIG_HOTPLUG_CPU
 static int takeover_tasklets(unsigned int cpu)
 {
+	workqueue_softirq_dead(cpu);
+
 	/* CPU is dead, so no lock needed. */
 	local_irq_disable();
 
kernel/workqueue.c (1845 lines changed)
File diff suppressed because it is too large.
rust/kernel/workqueue.rs
@@ -198,7 +198,11 @@ pub fn enqueue<W, const ID: u64>(&self, w: W) -> W::EnqueueOutput
         // stay valid until we call the function pointer in the `work_struct`, so the access is ok.
         unsafe {
             w.__enqueue(move |work_ptr| {
-                bindings::queue_work_on(bindings::WORK_CPU_UNBOUND as _, queue_ptr, work_ptr)
+                bindings::queue_work_on(
+                    bindings::wq_misc_consts_WORK_CPU_UNBOUND as _,
+                    queue_ptr,
+                    work_ptr,
+                )
             })
         }
     }
tools/workqueue/wq_dump.py
@@ -50,6 +50,7 @@ import drgn
 from drgn.helpers.linux.list import list_for_each_entry,list_empty
 from drgn.helpers.linux.percpu import per_cpu_ptr
 from drgn.helpers.linux.cpumask import for_each_cpu,for_each_possible_cpu
+from drgn.helpers.linux.nodemask import for_each_node
 from drgn.helpers.linux.idr import idr_for_each
 
 import argparse
@@ -75,6 +76,22 @@ def cpumask_str(cpumask):
         output += f'{v:08x}'
     return output.strip()
 
+wq_type_len = 9
+
+def wq_type_str(wq):
+    if wq.flags & WQ_BH:
+        return f'{"bh":{wq_type_len}}'
+    elif wq.flags & WQ_UNBOUND:
+        if wq.flags & WQ_ORDERED:
+            return f'{"ordered":{wq_type_len}}'
+        else:
+            if wq.unbound_attrs.affn_strict:
+                return f'{"unbound,S":{wq_type_len}}'
+            else:
+                return f'{"unbound":{wq_type_len}}'
+    else:
+        return f'{"percpu":{wq_type_len}}'
+
 worker_pool_idr      = prog['worker_pool_idr']
 workqueues           = prog['workqueues']
 wq_unbound_cpumask   = prog['wq_unbound_cpumask']
@@ -82,6 +99,7 @@ wq_pod_types = prog['wq_pod_types']
 wq_affn_dfl          = prog['wq_affn_dfl']
 wq_affn_names        = prog['wq_affn_names']
 
+WQ_BH                = prog['WQ_BH']
 WQ_UNBOUND           = prog['WQ_UNBOUND']
 WQ_ORDERED           = prog['__WQ_ORDERED']
 WQ_MEM_RECLAIM       = prog['WQ_MEM_RECLAIM']
@@ -92,6 +110,11 @@ WQ_AFFN_CACHE = prog['WQ_AFFN_CACHE']
 WQ_AFFN_NUMA         = prog['WQ_AFFN_NUMA']
 WQ_AFFN_SYSTEM       = prog['WQ_AFFN_SYSTEM']
 
+POOL_BH              = prog['POOL_BH']
+
+WQ_NAME_LEN          = prog['WQ_NAME_LEN'].value_()
+cpumask_str_len      = len(cpumask_str(wq_unbound_cpumask))
+
 print('Affinity Scopes')
 print('===============')
 
@@ -133,10 +156,12 @@ for pi, pool in idr_for_each(worker_pool_idr):
 
 for pi, pool in idr_for_each(worker_pool_idr):
     pool = drgn.Object(prog, 'struct worker_pool', address=pool)
-    print(f'pool[{pi:0{max_pool_id_len}}] ref={pool.refcnt.value_():{max_ref_len}} nice={pool.attrs.nice.value_():3} ', end='')
+    print(f'pool[{pi:0{max_pool_id_len}}] flags=0x{pool.flags.value_():02x} ref={pool.refcnt.value_():{max_ref_len}} nice={pool.attrs.nice.value_():3} ', end='')
     print(f'idle/workers={pool.nr_idle.value_():3}/{pool.nr_workers.value_():3} ', end='')
     if pool.cpu >= 0:
         print(f'cpu={pool.cpu.value_():3}', end='')
+        if pool.flags & POOL_BH:
+            print(' bh', end='')
     else:
         print(f'cpus={cpumask_str(pool.attrs.cpumask)}', end='')
         print(f' pod_cpus={cpumask_str(pool.attrs.__pod_cpumask)}', end='')
@@ -148,24 +173,13 @@ print('')
 print('Workqueue CPU -> pool')
 print('=====================')
 
-print('[ workqueue \ type CPU', end='')
+print(f'[{"workqueue":^{WQ_NAME_LEN-2}}\\ {"type CPU":{wq_type_len}}', end='')
 for cpu in for_each_possible_cpu(prog):
     print(f' {cpu:{max_pool_id_len}}', end='')
 print(' dfl]')
 
 for wq in list_for_each_entry('struct workqueue_struct', workqueues.address_of_(), 'list'):
-    print(f'{wq.name.string_().decode()[-24:]:24}', end='')
-    if wq.flags & WQ_UNBOUND:
-        if wq.flags & WQ_ORDERED:
-            print(' ordered ', end='')
-        else:
-            print(' unbound', end='')
-            if wq.unbound_attrs.affn_strict:
-                print(',S ', end='')
-            else:
-                print(' ', end='')
-    else:
-        print(' percpu ', end='')
+    print(f'{wq.name.string_().decode():{WQ_NAME_LEN}} {wq_type_str(wq):10}', end='')
 
     for cpu in for_each_possible_cpu(prog):
         pool_id = per_cpu_ptr(wq.cpu_pwq, cpu)[0].pool.id.value_()
@@ -175,3 +189,65 @@ for wq in list_for_each_entry('struct workqueue_struct', workqueues.address_of_(
     if wq.flags & WQ_UNBOUND:
         print(f' {wq.dfl_pwq.pool.id.value_():{max_pool_id_len}}', end='')
     print('')
+
+print('')
+print('Workqueue -> rescuer')
+print('====================')
+
+ucpus_len = max(cpumask_str_len, len("unbound_cpus"))
+rcpus_len = max(cpumask_str_len, len("rescuer_cpus"))
+
+print(f'[{"workqueue":^{WQ_NAME_LEN-2}}\\ {"unbound_cpus":{ucpus_len}} pid {"rescuer_cpus":{rcpus_len}} ]')
+
+for wq in list_for_each_entry('struct workqueue_struct', workqueues.address_of_(), 'list'):
+    if not (wq.flags & WQ_MEM_RECLAIM):
+        continue
+
+    print(f'{wq.name.string_().decode():{WQ_NAME_LEN}}', end='')
+    if wq.unbound_attrs.value_() != 0:
+        print(f' {cpumask_str(wq.unbound_attrs.cpumask):{ucpus_len}}', end='')
+    else:
+        print(f' {"":{ucpus_len}}', end='')
+
+    print(f' {wq.rescuer.task.pid.value_():6}', end='')
+    print(f' {cpumask_str(wq.rescuer.task.cpus_ptr):{rcpus_len}}', end='')
+    print('')
+
+print('')
+print('Unbound workqueue -> node_nr/max_active')
+print('=======================================')
+
+if 'node_to_cpumask_map' in prog:
+    __cpu_online_mask = prog['__cpu_online_mask']
+    node_to_cpumask_map = prog['node_to_cpumask_map']
+    nr_node_ids = prog['nr_node_ids'].value_()
+
+    print(f'online_cpus={cpumask_str(__cpu_online_mask.address_of_())}')
+    for node in for_each_node():
+        print(f'NODE[{node:02}]={cpumask_str(node_to_cpumask_map[node])}')
+    print('')
+
+    print(f'[{"workqueue":^{WQ_NAME_LEN-2}}\\ min max', end='')
+    first = True
+    for node in for_each_node():
+        if first:
+            print(f' NODE {node}', end='')
+            first = False
+        else:
+            print(f' {node:7}', end='')
+    print(f' {"dfl":>7} ]')
+    print('')
+
+    for wq in list_for_each_entry('struct workqueue_struct', workqueues.address_of_(), 'list'):
+        if not (wq.flags & WQ_UNBOUND):
+            continue
+
+        print(f'{wq.name.string_().decode():{WQ_NAME_LEN}} ', end='')
+        print(f'{wq.min_active.value_():3} {wq.max_active.value_():3}', end='')
+        for node in for_each_node():
+            nna = wq.node_nr_active[node]
+            print(f' {nna.nr.counter.value_():3}/{nna.max.value_():3}', end='')
+        nna = wq.node_nr_active[nr_node_ids]
+        print(f' {nna.nr.counter.value_():3}/{nna.max.value_():3}')
+else:
+    print(f'node_to_cpumask_map not present, is NUMA enabled?')
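For reference, this drgn script is typically run as root against the live kernel, for example with an invocation along the lines of "drgn tools/workqueue/wq_dump.py"; the exact command depends on how drgn is installed and is not specified by the patch itself.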