Merge commit 'v2.6.35' into kbuild/kbuild

Conflicts:
	arch/powerpc/Makefile
This commit is contained in:
Michal Marek 2010-08-04 13:59:13 +02:00
commit 772320e845
15241 changed files with 1252410 additions and 694288 deletions

.gitignore

@ -35,13 +35,18 @@ modules.builtin
#
# Top-level generic files
#
tags
TAGS
vmlinux
vmlinuz
System.map
Module.markers
Module.symvers
/tags
/TAGS
/linux
/vmlinux
/vmlinuz
/System.map
/Module.markers
/Module.symvers
#
# git files that we don't want to ignore even if they are dot-files
#
!.gitignore
!.mailmap


@ -32,8 +32,6 @@ DocBook/
- directory with DocBook templates etc. for kernel documentation.
HOWTO
- the process and procedures of how to do Linux kernel development.
IO-mapping.txt
- how to access I/O mapped memory from within device drivers.
IPMI.txt
- info on Linux Intelligent Platform Management Interface (IPMI) Driver.
IRQ-affinity.txt
@ -84,6 +82,8 @@ blockdev/
- info on block devices & drivers
btmrvl.txt
- info on Marvell Bluetooth driver usage.
bus-virt-phys-mapping.txt
- how to access I/O mapped memory from within device drivers.
cachetlb.txt
- describes the cache/TLB flushing interfaces Linux uses.
cdrom/
@ -168,6 +168,8 @@ initrd.txt
- how to use the RAM disk as an initial/temporary root filesystem.
input/
- info on Linux input device support.
io-mapping.txt
- description of io_mapping functions in linux/io-mapping.h
io_ordering.txt
- info on ordering I/O writes to memory-mapped addresses.
ioctl/
@ -250,6 +252,8 @@ numastat.txt
- info on how to read Numa policy hit/miss statistics in sysfs.
oops-tracing.txt
- how to decode those nasty internal kernel error dump messages.
padata.txt
- An introduction to the "padata" parallel execution API
parisc/
- directory with info on using Linux on PA-RISC architecture.
parport.txt


@ -0,0 +1,31 @@
What: /sys/bus/usb/devices/.../power/level
Date: March 2007
KernelVersion: 2.6.21
Contact: Alan Stern <stern@rowland.harvard.edu>
Description:
Each USB device directory will contain a file named
power/level. This file holds a power-level setting for
the device, either "on" or "auto".
"on" means that the device is not allowed to autosuspend,
although normal suspends for system sleep will still
be honored. "auto" means the device will autosuspend
and autoresume in the usual manner, according to the
capabilities of its driver.
During normal use, devices should be left in the "auto"
level. The "on" level is meant for administrative uses.
If you want to suspend a device immediately but leave it
free to wake up in response to I/O requests, you should
write "0" to power/autosuspend.
Devices not capable of proper suspend and resume should be
left in the "on" level. Although the USB spec requires
devices to support suspend/resume, many of them do not.
In fact so many don't that by default, the USB core
initializes all non-hub devices in the "on" level. Some
drivers may change this setting when they are bound.
This file is deprecated and will be removed after 2010.
Use the power/control file instead; it does exactly the
same thing.


@ -0,0 +1,29 @@
rfkill - radio frequency (RF) connector kill switch support
For details on this subsystem, see Documentation/rfkill.txt.
What: /sys/class/rfkill/rfkill[0-9]+/state
Date: 09-Jul-2007
KernelVersion: v2.6.22
Contact: linux-wireless@vger.kernel.org
Description: Current state of the transmitter.
This file is deprecated and scheduled to be removed in 2014,
because it's not possible to express the 'soft and hard block'
state of the rfkill driver.
Values: A numeric value.
0: RFKILL_STATE_SOFT_BLOCKED
transmitter is turned off by software
1: RFKILL_STATE_UNBLOCKED
transmitter is (potentially) active
2: RFKILL_STATE_HARD_BLOCKED
transmitter is forced off by something outside of
the driver's control.
What: /sys/class/rfkill/rfkill[0-9]+/claim
Date: 09-Jul-2007
KernelVersion: v2.6.22
Contact: linux-wireless@vger.kernel.org
Description: This file is deprecated because there is no longer a way to
claim control over just a single rfkill instance.
This file is scheduled to be removed in 2012.
Values: 0: Kernel handles events


@ -0,0 +1,67 @@
rfkill - radio frequency (RF) connector kill switch support
For details on this subsystem, see Documentation/rfkill.txt.
For the deprecated /sys/class/rfkill/*/state and
/sys/class/rfkill/*/claim knobs of this interface, see
Documentation/ABI/obsolete/sysfs-class-rfkill.
What: /sys/class/rfkill
Date: 09-Jul-2007
KernelVersion: v2.6.22
Contact: linux-wireless@vger.kernel.org
Description: The rfkill class subsystem folder.
Each registered rfkill driver is represented by an rfkillX
subfolder (X being an integer > 0).
What: /sys/class/rfkill/rfkill[0-9]+/name
Date: 09-Jul-2007
KernelVersion: v2.6.22
Contact: linux-wireless@vger.kernel.org
Description: Name assigned by driver to this key (interface or driver name).
Values: arbitrary string.
What: /sys/class/rfkill/rfkill[0-9]+/type
Date: 09-Jul-2007
KernelVersion: v2.6.22
Contact: linux-wireless@vger.kernel.org
Description: Driver type string ("wlan", "bluetooth", etc).
Values: See include/linux/rfkill.h.
What: /sys/class/rfkill/rfkill[0-9]+/persistent
Date: 09-Jul-2007
KernelVersion: v2.6.22
Contact: linux-wireless@vger.kernel.org
Description: Whether the soft blocked state is initialised from non-volatile
storage at startup.
Values: A numeric value.
0: false
1: true
What: /sys/class/rfkill/rfkill[0-9]+/hard
Date: 12-March-2010
KernelVersion: v2.6.34
Contact: linux-wireless@vger.kernel.org
Description: Current hardblock state. This file is read only.
Values: A numeric value.
0: inactive
The transmitter is (potentially) active.
1: active
The transmitter is forced off by something outside of
the driver's control.
What: /sys/class/rfkill/rfkill[0-9]+/soft
Date: 12-March-2010
KernelVersion: v2.6.34
Contact: linux-wireless@vger.kernel.org
Description: Current softblock state. This file is read and write.
Values: A numeric value.
0: inactive
The transmitter is (potentially) active.
1: active
The transmitter is turned off by software.


@ -0,0 +1,7 @@
What: /sys/devices/system/node/nodeX
Date: October 2002
Contact: Linux Memory Management list <linux-mm@kvack.org>
Description:
When CONFIG_NUMA is enabled, this is a directory containing
information on node X such as what CPUs are local to the
node.


@ -20,7 +20,7 @@ Description:
lsm: [[subj_user=] [subj_role=] [subj_type=]
[obj_user=] [obj_role=] [obj_type=]]
base: func:= [BPRM_CHECK][FILE_MMAP][INODE_PERMISSION]
base: func:= [BPRM_CHECK][FILE_MMAP][FILE_CHECK]
mask:= [MAY_READ] [MAY_WRITE] [MAY_APPEND] [MAY_EXEC]
fsmagic:= hex value
uid:= decimal value
@ -40,11 +40,11 @@ Description:
measure func=BPRM_CHECK
measure func=FILE_MMAP mask=MAY_EXEC
measure func=INODE_PERM mask=MAY_READ uid=0
measure func=FILE_CHECK mask=MAY_READ uid=0
The default policy measures all executables in bprm_check,
all files mmapped executable in file_mmap, and all files
open for read by root in inode_permission.
open for read by root in do_filp_open.
Examples of LSM specific definitions:
@ -54,8 +54,8 @@ Description:
dont_measure obj_type=var_log_t
dont_measure obj_type=auditd_log_t
measure subj_user=system_u func=INODE_PERM mask=MAY_READ
measure subj_role=system_r func=INODE_PERM mask=MAY_READ
measure subj_user=system_u func=FILE_CHECK mask=MAY_READ
measure subj_role=system_r func=FILE_CHECK mask=MAY_READ
Smack:
measure subj_user=_ func=INODE_PERM mask=MAY_READ
measure subj_user=_ func=FILE_CHECK mask=MAY_READ


@ -128,3 +128,17 @@ Description:
preferred request size for workloads where sustained
throughput is desired. If no optimal I/O size is
reported this file contains 0.
What: /sys/block/<disk>/queue/nomerges
Date: January 2010
Contact:
Description:
Standard I/O elevator operations include attempts to
merge contiguous I/Os. For known random I/O loads these
attempts will always fail and result in extra cycles
being spent in the kernel. This allows one to turn off
this behavior in one of two ways: when set to 1, complex
merge checks are disabled, but the simple one-shot merges
with the previous I/O request are enabled. When set to 2,
all merge tries are disabled. The default value is 0 -
which enables all types of merge tries.
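As an illustration, here is a minimal userspace sketch (the disk
name "sda" and the error handling are hypothetical placeholders)
that disables all merge attempts for a known-random workload:

	#include <stdio.h>

	int main(void)
	{
		/* adjust the path to the disk in question */
		FILE *f = fopen("/sys/block/sda/queue/nomerges", "w");

		if (!f)
			return 1;
		fputs("2\n", f);	/* 2 == disable all merge tries */
		return fclose(f) ? 1 : 0;
	}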


@ -14,34 +14,6 @@ Description:
The autosuspend delay for newly-created devices is set to
the value of the usbcore.autosuspend module parameter.
What: /sys/bus/usb/devices/.../power/level
Date: March 2007
KernelVersion: 2.6.21
Contact: Alan Stern <stern@rowland.harvard.edu>
Description:
Each USB device directory will contain a file named
power/level. This file holds a power-level setting for
the device, either "on" or "auto".
"on" means that the device is not allowed to autosuspend,
although normal suspends for system sleep will still
be honored. "auto" means the device will autosuspend
and autoresume in the usual manner, according to the
capabilities of its driver.
During normal use, devices should be left in the "auto"
level. The "on" level is meant for administrative uses.
If you want to suspend a device immediately but leave it
free to wake up in response to I/O requests, you should
write "0" to power/autosuspend.
Devices not capable of proper suspend and resume should be
left in the "on" level. Although the USB spec requires
devices to support suspend/resume, many of them do not.
In fact so many don't that by default, the USB core
initializes all non-hub devices in the "on" level. Some
drivers may change this setting when they are bound.
What: /sys/bus/usb/devices/.../power/persist
Date: May 2007
KernelVersion: 2.6.23
@ -159,3 +131,14 @@ Description:
device. This is useful to ensure auto probing won't
match the driver to the device. For example:
# echo "046d c315" > /sys/bus/usb/drivers/foo/remove_id
What: /sys/bus/usb/devices/.../avoid_reset_quirk
Date: December 2009
Contact: Oliver Neukum <oliver@neukum.org>
Description:
Writing 1 to this file tells the kernel that this
device will morph into another mode when it is reset.
Drivers will not use reset for error handling for
such devices.
Users:
usb_modeswitch


@ -0,0 +1,20 @@
What: /sys/class/power/ds2760-battery.*/charge_now
Date: May 2010
KernelVersion: 2.6.35
Contact: Daniel Mack <daniel@caiaq.de>
Description:
This file is writeable and can be used to set the current
coulomb counter value inside the battery monitor chip. This
is needed for unavoidable corrections of aging batteries.
A userspace daemon can monitor the battery charging logic
and once the counter drops outside reasonable bounds, take
appropriate action.
What: /sys/class/power/ds2760-battery.*/charge_full
Date: May 2010
KernelVersion: 2.6.35
Contact: Daniel Mack <daniel@caiaq.de>
Description:
This file is writeable and can be used to set the assumed
battery 'full level'. As batteries age, this value has to be
amended over time.


@ -43,7 +43,7 @@ Date: September 2008
Contact: Badari Pulavarty <pbadari@us.ibm.com>
Description:
The file /sys/devices/system/memory/memoryX/state
is read-write. When read, it's contents show the
is read-write. When read, its contents show the
online/offline state of the memory section. When written,
root can toggle the online/offline state of a removable
memory section (see removable file description above)


@ -0,0 +1,7 @@
What: /sys/devices/system/node/nodeX/compact
Date: February 2010
Contact: Mel Gorman <mel@csn.ul.ie>
Description:
When this file is written to, all memory within that node
will be compacted. When it completes, memory will be freed
into blocks which have as many contiguous pages as possible


@ -0,0 +1,9 @@
What: /sys/devices/platform/_UDC_/gadget/suspended
Date: April 2010
Contact: Fabien Chouteau <fabien.chouteau@barco.com>
Description:
Show the suspend state of a USB composite gadget.
1 -> suspended
0 -> resumed
(_UDC_ is the name of the USB Device Controller driver)


@ -0,0 +1,79 @@
What: /sys/devices/.../power/
Date: January 2009
Contact: Rafael J. Wysocki <rjw@sisk.pl>
Description:
The /sys/devices/.../power directory contains attributes
allowing the user space to check and modify some power
management related properties of given device.
What: /sys/devices/.../power/wakeup
Date: January 2009
Contact: Rafael J. Wysocki <rjw@sisk.pl>
Description:
The /sys/devices/.../power/wakeup attribute allows the user
space to check if the device is enabled to wake up the system
from sleep states, such as the memory sleep state (suspend to
RAM) and hibernation (suspend to disk), and to enable or disable
it to do that as desired.
Some devices support "wakeup" events, which are hardware signals
used to activate the system from a sleep state. Such devices
have one of the following two values for the sysfs power/wakeup
file:
+ "enabled\n" to issue the events;
+ "disabled\n" not to do so;
In that case, user space can change the setting represented
by the contents of this file by writing either "enabled" or
"disabled" to it.
For the devices that are not capable of generating system wakeup
events, this file contains "\n". In that case, user space
cannot modify the contents of this file and the device cannot be
enabled to wake up the system.
What: /sys/devices/.../power/control
Date: January 2009
Contact: Rafael J. Wysocki <rjw@sisk.pl>
Description:
The /sys/devices/.../power/control attribute allows the user
space to control the run-time power management of the device.
All devices have one of the following two values for the
power/control file:
+ "auto\n" to allow the device to be power managed at run time;
+ "on\n" to prevent the device from being power managed;
The default for all devices is "auto", which means that they may
be subject to automatic power management, depending on their
drivers. Changing this attribute to "on" prevents the driver
from power managing the device at run time. Doing that while
the device is suspended causes it to be woken up.
What: /sys/devices/.../power/async
Date: January 2009
Contact: Rafael J. Wysocki <rjw@sisk.pl>
Description:
The /sys/devices/.../power/async attribute allows user space to
enable or disable asynchronous execution of the device's suspend
and resume callbacks (i.e. in separate threads, in parallel with
the main suspend/resume thread) during system-wide power
transitions (e.g. suspend to RAM, hibernation).
All devices have one of the following two values for the
power/async file:
+ "enabled\n" to permit the asynchronous suspend/resume;
+ "disabled\n" to forbid it;
The value of this attribute may be changed by writing either
"enabled" or "disabled" to it.
It generally is unsafe to permit the asynchronous suspend/resume
of a device unless it is certain that all of the PM dependencies
of the device are known to the PM core. However, for some
devices this attribute is set to "enabled" by bus type code or
device drivers, and in those cases it should be safe to leave the
default value.


@ -0,0 +1,43 @@
What: /sys/bus/usb/devices/<busnum>-<devnum>:<config num>.<interface num>/<hid-bus>:<vendor-id>:<product-id>.<num>/operation_mode
Date: March 2010
Contact: Bruno Prémont <bonbons@linux-vserver.org>
Description: Make it possible to switch the PicoLCD device between LCD
(firmware) and bootloader (flasher) operation modes.
Reading: returns list of available modes, the active mode being
enclosed in brackets ('[' and ']')
Writing: causes operation mode switch. Permitted values are
the non-active mode names listed when read.
Note: when switching mode, the current PicoLCD HID device gets
disconnected and reconnects after the above delay (see attribute
operation_mode_delay for its value).
What: /sys/bus/usb/devices/<busnum>-<devnum>:<config num>.<interface num>/<hid-bus>:<vendor-id>:<product-id>.<num>/operation_mode_delay
Date: April 2010
Contact: Bruno Prémont <bonbons@linux-vserver.org>
Description: Delay PicoLCD waits before restarting in new mode when
operation_mode has changed.
Reading/Writing: It is expressed in ms and permitted range is
0..30000ms.
What: /sys/bus/usb/devices/<busnum>-<devnum>:<config num>.<interface num>/<hid-bus>:<vendor-id>:<product-id>.<num>/fb_update_rate
Date: March 2010
Contact: Bruno Prémont <bonbons@linux-vserver.org>
Description: Make it possible to adjust defio refresh rate.
Reading: returns list of available refresh rates (expressed in Hz),
the active refresh rate being enclosed in brackets ('[' and ']')
Writing: accepts new refresh rate expressed in integer Hz
within permitted rates.
Note: As the device can barely do 2 complete refreshes a second,
it only makes sense to adjust this value if only one or two
tiles get changed, and it's not appropriate to expect the
application to flush its tiny changes explicitly at a higher
than default rate.


@ -0,0 +1,29 @@
What: /sys/bus/hid/drivers/prodikeys/.../channel
Date: April 2010
KernelVersion: 2.6.34
Contact: Don Prince <dhprince.devel@yahoo.co.uk>
Description:
Allows controlling (via software) the MIDI channel to which
the pc-midi keyboard will output MIDI data.
Range: 0..15
Type: Read/write
What: /sys/bus/hid/drivers/prodikeys/.../sustain
Date: April 2010
KernelVersion: 2.6.34
Contact: Don Prince <dhprince.devel@yahoo.co.uk>
Description:
Allows controlling (via software) the sustain duration of a
note held by the pc-midi driver.
0 means sustain mode is disabled.
Range: 0..5000 (milliseconds)
Type: Read/write
What: /sys/bus/hid/drivers/prodikeys/.../octave
Date: April 2010
KernelVersion: 2.6.34
Contact: Don Prince <dhprince.devel@yahoo.co.uk>
Description:
Controls the octave shift modifier in the pc-midi driver.
The octave can be shifted via software up/down 2 octaves.
0 means no octave shift.
Range: -2..2 (minus 2 to plus 2)
Type: Read/Write


@ -0,0 +1,111 @@
What: /sys/bus/usb/devices/<busnum>-<devnum>:<config num>.<interface num>/actual_dpi
Date: March 2010
Contact: Stefan Achatz <erazor_de@users.sourceforge.net>
Description: It is possible to switch the dpi setting of the mouse with the
press of a button.
When read, this file returns the raw number of the actual dpi
setting reported by the mouse. This number has to be further
processed to obtain the real dpi value.
VALUE DPI
1 800
2 1200
3 1600
4 2000
5 2400
6 3200
This file is readonly.
What: /sys/bus/usb/devices/<busnum>-<devnum>:<config num>.<interface num>/actual_profile
Date: March 2010
Contact: Stefan Achatz <erazor_de@users.sourceforge.net>
Description: When read, this file returns the number of the actual profile.
This file is readonly.
What: /sys/bus/usb/devices/<busnum>-<devnum>:<config num>.<interface num>/firmware_version
Date: March 2010
Contact: Stefan Achatz <erazor_de@users.sourceforge.net>
Description: When read, this file returns the raw integer version number of the
firmware reported by the mouse. Using the integer value eases
further usage in other programs. To obtain the real version
number, the decimal point has to be shifted 2 positions to the
left, e.g. a returned value of 138 means 1.38.
This file is readonly.
What: /sys/bus/usb/devices/<busnum>-<devnum>:<config num>.<interface num>/kone_driver_version
Date: March 2010
Contact: Stefan Achatz <erazor_de@users.sourceforge.net>
Description: When read, this file returns the driver version.
The format of the string is "v<major>.<minor>.<patchlevel>".
This attribute is used by the userland tools to find the sysfs-
paths of installed kone-mice and determine the capabilities of
the driver. Versions of this driver for old kernels replace
usbhid instead of generic-usb. The way to scan for this file
has been chosen to provide a consistent way for all supported
kernel versions.
This file is readonly.
What: /sys/bus/usb/devices/<busnum>-<devnum>:<config num>.<interface num>/profile[1-5]
Date: March 2010
Contact: Stefan Achatz <erazor_de@users.sourceforge.net>
Description: The mouse can store 5 profiles which can be switched by the
press of a button. A profile holds information like button
mappings, sensitivity, the colors of the 5 leds and light
effects.
When read, these files return the respective profile. The
returned data is 975 bytes in size.
When written, this file lets one write the respective profile
data back to the mouse. The data has to be 975 bytes long.
The mouse will reject invalid data; the profile number
stored in the profile data doesn't need to match the number of
the profile slot it is stored in.
What: /sys/bus/usb/devices/<busnum>-<devnum>:<config num>.<interface num>/settings
Date: March 2010
Contact: Stefan Achatz <erazor_de@users.sourceforge.net>
Description: When read, this file returns the settings stored in the mouse.
The size of the data is 36 bytes and holds information like the
startup_profile, tcu state and calibration_data.
When written, this file lets one write settings back to the mouse.
The data has to be 36 bytes long. The mouse will reject invalid
data.
What: /sys/bus/usb/devices/<busnum>-<devnum>:<config num>.<interface num>/startup_profile
Date: March 2010
Contact: Stefan Achatz <erazor_de@users.sourceforge.net>
Description: The integer value of this attribute ranges from 1 to 5.
When read, this attribute returns the number of the profile
that's active when the mouse is powered on.
When written, this file sets the number of the startup profile
and the mouse activates this profile immediately.
What: /sys/bus/usb/devices/<busnum>-<devnum>:<config num>.<interface num>/tcu
Date: March 2010
Contact: Stefan Achatz <erazor_de@users.sourceforge.net>
Description: The mouse has a "Tracking Control Unit" which lets the user
calibrate the laser power to fit the mousepad surface.
When read, this file returns the current state of the TCU,
where 0 means off and 1 means on.
Writing 0 in this file will switch the TCU off.
Writing 1 in this file will start the calibration which takes
around 6 seconds to complete and activates the TCU.
What: /sys/bus/usb/devices/<busnum>-<devnum>:<config num>.<interface num>/weight
Date: March 2010
Contact: Stefan Achatz <erazor_de@users.sourceforge.net>
Description: The mouse can be equipped with one of four supplied weights,
ranging from 5 to 20 grams, which are recognized by the mouse
so their value can be read out. When read, this file returns the
raw value reported by the mouse, which eases further processing
in other software.
The values map to the weights as follows:
VALUE WEIGHT
0 none
1 5g
2 10g
3 15g
4 20g
This file is readonly.


@ -0,0 +1,15 @@
What: /sys/firmware/sfi/tables/
Date: May 2010
Contact: Len Brown <lenb@kernel.org>
Description:
SFI defines a number of small static memory tables
so the kernel can get platform information from firmware.
The tables are defined in the latest SFI specification:
http://simplefirmware.org/documentation
While the tables are used by the kernel, user-space
can observe them this way:
# cd /sys/firmware/sfi/tables
# cat $TABLENAME > $TABLENAME.bin


@ -1,4 +1,4 @@
What: /sys/devices/platform/asus-laptop/display
What: /sys/devices/platform/asus_laptop/display
Date: January 2007
KernelVersion: 2.6.20
Contact: "Corentin Chary" <corentincj@iksaif.net>
@ -13,7 +13,7 @@ Description:
Ex: - 0 (0000b) means no display
- 3 (0011b) CRT+LCD.
What: /sys/devices/platform/asus-laptop/gps
What: /sys/devices/platform/asus_laptop/gps
Date: January 2007
KernelVersion: 2.6.20
Contact: "Corentin Chary" <corentincj@iksaif.net>
@ -21,7 +21,7 @@ Description:
Control the gps device. 1 means on, 0 means off.
Users: Lapsus
What: /sys/devices/platform/asus-laptop/ledd
What: /sys/devices/platform/asus_laptop/ledd
Date: January 2007
KernelVersion: 2.6.20
Contact: "Corentin Chary" <corentincj@iksaif.net>
@ -29,11 +29,11 @@ Description:
Some models like the W1N have a LED display that can be
used to display various information.
To control the LED display, use the following:
echo 0x0T000DDD > /sys/devices/platform/asus-laptop/
echo 0x0T000DDD > /sys/devices/platform/asus_laptop/
where T controls the 3-letter display and DDD the 3-digit display.
The DDD table can be found in Documentation/laptops/asus-laptop.txt
What: /sys/devices/platform/asus-laptop/bluetooth
What: /sys/devices/platform/asus_laptop/bluetooth
Date: January 2007
KernelVersion: 2.6.20
Contact: "Corentin Chary" <corentincj@iksaif.net>
@ -42,7 +42,7 @@ Description:
This may control the led, the device or both.
Users: Lapsus
What: /sys/devices/platform/asus-laptop/wlan
What: /sys/devices/platform/asus_laptop/wlan
Date: January 2007
KernelVersion: 2.6.20
Contact: "Corentin Chary" <corentincj@iksaif.net>


@ -1,4 +1,4 @@
What: /sys/devices/platform/eeepc-laptop/disp
What: /sys/devices/platform/eeepc/disp
Date: May 2008
KernelVersion: 2.6.26
Contact: "Corentin Chary" <corentincj@iksaif.net>
@ -9,21 +9,21 @@ Description:
- 3 = LCD+CRT
If you run X11, you should use xrandr instead.
What: /sys/devices/platform/eeepc-laptop/camera
What: /sys/devices/platform/eeepc/camera
Date: May 2008
KernelVersion: 2.6.26
Contact: "Corentin Chary" <corentincj@iksaif.net>
Description:
Control the camera. 1 means on, 0 means off.
What: /sys/devices/platform/eeepc-laptop/cardr
What: /sys/devices/platform/eeepc/cardr
Date: May 2008
KernelVersion: 2.6.26
Contact: "Corentin Chary" <corentincj@iksaif.net>
Description:
Control the card reader. 1 means on, 0 means off.
What: /sys/devices/platform/eeepc-laptop/cpufv
What: /sys/devices/platform/eeepc/cpufv
Date: Jun 2009
KernelVersion: 2.6.31
Contact: "Corentin Chary" <corentincj@iksaif.net>
@ -42,7 +42,7 @@ Description:
`------------ Available modes
For example, 0x301 means: mode 1 selected, 3 available modes.
What: /sys/devices/platform/eeepc-laptop/available_cpufv
What: /sys/devices/platform/eeepc/available_cpufv
Date: Jun 2009
KernelVersion: 2.6.31
Contact: "Corentin Chary" <corentincj@iksaif.net>


@ -101,3 +101,16 @@ Description:
CAUTION: Using it will cause your machine's real-time (CMOS)
clock to be set to a random invalid time after a resume.
What: /sys/power/pm_async
Date: January 2009
Contact: Rafael J. Wysocki <rjw@sisk.pl>
Description:
The /sys/power/pm_async file controls the switch allowing the
user space to enable or disable asynchronous suspend and resume
of devices. If enabled, this feature will cause some device
drivers' suspend and resume callbacks to be executed in parallel
with each other and with the main suspend thread. It is enabled
if this file contains "1", which is the default. It may be
disabled by writing "0" to this file, in which case all devices
will be suspended and resumed synchronously.


@ -0,0 +1,10 @@
What: /sys/class/hidraw/hidraw*/device/speed
Date: April 2010
KernelVersion: 2.6.35
Contact: linux-bluetooth@vger.kernel.org
Description:
The /sys/class/hidraw/hidraw*/device/speed file controls the
reporting speed of the Wacom Bluetooth tablet. Reading from
this file returns 1 if the tablet reports in high speed mode
and 0 otherwise. Writing one of these values to this file
switches the reporting speed.


@ -49,7 +49,7 @@ o oprofile 0.9 # oprofiled --version
o udev 081 # udevinfo -V
o grub 0.93 # grub --version
o mcelog 0.6
o iptables 1.4.1 # iptables -V
o iptables 1.4.2 # iptables -V
Kernel compilation


@ -0,0 +1,771 @@
Dynamic DMA mapping Guide
=========================
David S. Miller <davem@redhat.com>
Richard Henderson <rth@cygnus.com>
Jakub Jelinek <jakub@redhat.com>
This is a guide to device driver writers on how to use the DMA API
with example pseudo-code. For a concise description of the API, see
DMA-API.txt.
Most of the 64bit platforms have special hardware that translates bus
addresses (DMA addresses) into physical addresses. This is similar to
how page tables and/or a TLB translates virtual addresses to physical
addresses on a CPU. This is needed so that e.g. PCI devices can
access with a Single Address Cycle (32bit DMA address) any page in the
64bit physical address space. Previously in Linux those 64bit
platforms had to set artificial limits on the maximum RAM size in the
system, so that the virt_to_bus() static scheme works (the DMA address
translation tables were simply filled on bootup to map each bus
address to the physical page __pa(bus_to_virt())).
So that Linux can use the dynamic DMA mapping, it needs some help from the
drivers, namely it has to take into account that DMA addresses should be
mapped only for the time they are actually used and unmapped after the DMA
transfer.
The following API will work of course even on platforms where no such
hardware exists.
Note that the DMA API works with any bus independent of the underlying
microprocessor architecture. You should use the DMA API rather than
the bus specific DMA API (e.g. pci_dma_*).
First of all, you should make sure
#include <linux/dma-mapping.h>
is in your driver. This file will obtain for you the definition of the
dma_addr_t (which can hold any valid DMA address for the platform)
type which should be used everywhere you hold a DMA (bus) address
returned from the DMA mapping functions.
What memory is DMA'able?
The first piece of information you must know is what kernel memory can
be used with the DMA mapping facilities. There has been an unwritten
set of rules regarding this, and this text is an attempt to finally
write them down.
If you acquired your memory via the page allocator
(i.e. __get_free_page*()) or the generic memory allocators
(i.e. kmalloc() or kmem_cache_alloc()) then you may DMA to/from
that memory using the addresses returned from those routines.
This means specifically that you may _not_ use the memory/addresses
returned from vmalloc() for DMA. It is possible to DMA to the
_underlying_ memory mapped into a vmalloc() area, but this requires
walking page tables to get the physical addresses, and then
translating each of those pages back to a kernel address using
something like __va(). [ EDIT: Update this when we integrate
Gerd Knorr's generic code which does this. ]
This rule also means that you may use neither kernel image addresses
(items in data/text/bss segments), nor module image addresses, nor
stack addresses for DMA. These could all be mapped somewhere entirely
different than the rest of physical memory. Even if those classes of
memory could physically work with DMA, you'd need to ensure the I/O
buffers were cacheline-aligned. Without that, you'd see cacheline
sharing problems (data corruption) on CPUs with DMA-incoherent caches.
(The CPU could write to one word, DMA would write to a different one
in the same cache line, and one of them could be overwritten.)
Also, this means that you cannot take the return of a kmap()
call and DMA to/from that. This is similar to vmalloc().
What about block I/O and networking buffers? The block I/O and
networking subsystems make sure that the buffers they use are valid
for you to DMA from/to.
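To make these rules concrete, here is a minimal sketch (the
function and variable names are hypothetical, not part of any API)
contrasting a DMA-safe allocation with one that must never be used
for DMA:

	#include <linux/slab.h>
	#include <linux/vmalloc.h>

	static void *dma_ok, *dma_bad;

	static int my_alloc_buffers(size_t len)
	{
		/* OK: kmalloc() memory may be used with the DMA
		 * mapping facilities directly. */
		dma_ok = kmalloc(len, GFP_KERNEL);

		/* NOT OK: vmalloc() memory is only virtually contiguous;
		 * never hand this address to the DMA mapping functions. */
		dma_bad = vmalloc(len);

		return (dma_ok && dma_bad) ? 0 : -ENOMEM;
	}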
DMA addressing limitations
Does your device have any DMA addressing limitations? For example, is
your device only capable of driving the low order 24-bits of address?
If so, you need to inform the kernel of this fact.
By default, the kernel assumes that your device can address the full
32-bits. For a 64-bit capable device, this needs to be increased.
And for a device with limitations, as discussed in the previous
paragraph, it needs to be decreased.
Special note about PCI: PCI-X specification requires PCI-X devices to
support 64-bit addressing (DAC) for all transactions. And at least
one platform (SGI SN2) requires 64-bit consistent allocations to
operate correctly when the IO bus is in PCI-X mode.
For correct operation, you must interrogate the kernel in your device
probe routine to see if the DMA controller on the machine can properly
support the DMA addressing limitation your device has. It is good
style to do this even if your device holds the default setting,
because this shows that you did think about these issues wrt. your
device.
The query is performed via a call to dma_set_mask():
int dma_set_mask(struct device *dev, u64 mask);
The query for consistent allocations is performed via a call to
dma_set_coherent_mask():
int dma_set_coherent_mask(struct device *dev, u64 mask);
Here, dev is a pointer to the device struct of your device, and mask
is a bit mask describing which bits of an address your device
supports. It returns zero if your card can perform DMA properly on
the machine given the address mask you provided. In general, the
device struct of your device is embedded in the bus specific device
struct of your device. For example, a pointer to the device struct of
your PCI device is pdev->dev (pdev is a pointer to the PCI device
struct of your device).
If it returns non-zero, your device cannot perform DMA properly on
this platform, and attempting to do so will result in undefined
behavior. You must either use a different mask, or not use DMA.
This means that in the failure case, you have three options:
1) Use another DMA mask, if possible (see below).
2) Use some non-DMA mode for data transfer, if possible.
3) Ignore this device and do not initialize it.
It is recommended that your driver print a kernel KERN_WARNING message
when you end up performing either #2 or #3. In this manner, if a user
of your driver reports that performance is bad or that the device is not
even detected, you can ask them for the kernel messages to find out
exactly why.
The standard 32-bit addressing device would do something like this:
if (dma_set_mask(dev, DMA_BIT_MASK(32))) {
printk(KERN_WARNING
"mydev: No suitable DMA available.\n");
goto ignore_this_device;
}
Another common scenario is a 64-bit capable device. The approach here
is to try for 64-bit addressing, but back down to a 32-bit mask that
should not fail. The kernel may fail the 64-bit mask not because the
platform is not capable of 64-bit addressing. Rather, it may fail in
this case simply because 32-bit addressing is done more efficiently
than 64-bit addressing. For example, Sparc64 PCI SAC addressing is
more efficient than DAC addressing.
Here is how you would handle a 64-bit capable device which can drive
all 64-bits when accessing streaming DMA:
int using_dac;
if (!dma_set_mask(dev, DMA_BIT_MASK(64))) {
using_dac = 1;
} else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
using_dac = 0;
} else {
printk(KERN_WARNING
"mydev: No suitable DMA available.\n");
goto ignore_this_device;
}
If a card is capable of using 64-bit consistent allocations as well,
the case would look like this:
int using_dac, consistent_using_dac;
if (!dma_set_mask(dev, DMA_BIT_MASK(64))) {
using_dac = 1;
consistent_using_dac = 1;
dma_set_coherent_mask(dev, DMA_BIT_MASK(64));
} else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
using_dac = 0;
consistent_using_dac = 0;
dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
} else {
printk(KERN_WARNING
"mydev: No suitable DMA available.\n");
goto ignore_this_device;
}
dma_set_coherent_mask() will always be able to set the same or a
smaller mask as dma_set_mask(). However for the rare case that a
device driver only uses consistent allocations, one would have to
check the return value from dma_set_coherent_mask().
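For that rare consistent-allocations-only driver, a minimal sketch
in the style of the examples above (the "mydev" name and label are
placeholders) would be:

	if (dma_set_coherent_mask(dev, DMA_BIT_MASK(64))) {
		if (dma_set_coherent_mask(dev, DMA_BIT_MASK(32))) {
			printk(KERN_WARNING
			       "mydev: No suitable consistent DMA available.\n");
			goto ignore_this_device;
		}
	}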
Finally, if your device can only drive the low 24-bits of
address you might do something like:
if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
printk(KERN_WARNING
"mydev: 24-bit DMA addressing not available.\n");
goto ignore_this_device;
}
When dma_set_mask() is successful, and returns zero, the kernel saves
away this mask you have provided. The kernel will use this
information later when you make DMA mappings.
There is a case which we are aware of at this time, which is worth
mentioning in this documentation. If your device supports multiple
functions (for example a sound card provides playback and record
functions) and the various different functions have _different_
DMA addressing limitations, you may wish to probe each mask and
only provide the functionality which the machine can handle. It
is important that the last call to dma_set_mask() be for the
most specific mask.
Here is pseudo-code showing how this might be done:
#define PLAYBACK_ADDRESS_BITS DMA_BIT_MASK(32)
#define RECORD_ADDRESS_BITS DMA_BIT_MASK(24)
struct my_sound_card *card;
struct device *dev;
...
if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) {
card->playback_enabled = 1;
} else {
card->playback_enabled = 0;
printk(KERN_WARNING "%s: Playback disabled due to DMA limitations.\n",
card->name);
}
if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
card->record_enabled = 1;
} else {
card->record_enabled = 0;
printk(KERN_WARNING "%s: Record disabled due to DMA limitations.\n",
card->name);
}
A sound card was used as an example here because this genre of PCI
devices seems to be littered with ISA chips given a PCI front end,
and thus retaining the 16MB DMA addressing limitations of ISA.
Types of DMA mappings
There are two types of DMA mappings:
- Consistent DMA mappings which are usually mapped at driver
initialization, unmapped at the end and for which the hardware should
guarantee that the device and the CPU can access the data
in parallel and will see updates made by each other without any
explicit software flushing.
Think of "consistent" as "synchronous" or "coherent".
The current default is to return consistent memory in the low 32
bits of the bus space. However, for future compatibility you should
set the consistent mask even if this default is fine for your
driver.
Good examples of what to use consistent mappings for are:
- Network card DMA ring descriptors.
- SCSI adapter mailbox command data structures.
- Device firmware microcode executed out of
main memory.
The invariant these examples all require is that any CPU store
to memory is immediately visible to the device, and vice
versa. Consistent mappings guarantee this.
IMPORTANT: Consistent DMA memory does not preclude the usage of
proper memory barriers. The CPU may reorder stores to
consistent memory just as it may normal memory. Example:
if it is important for the device to see the first word
of a descriptor updated before the second, you must do
something like:
desc->word0 = address;
wmb();
desc->word1 = DESC_VALID;
in order to get correct behavior on all platforms.
Also, on some platforms your driver may need to flush CPU write
buffers in much the same way as it needs to flush write buffers
found in PCI bridges (such as by reading a register's value
after writing it).
- Streaming DMA mappings which are usually mapped for one DMA
transfer, unmapped right after it (unless you use dma_sync_* below)
and for which hardware can optimize for sequential accesses.
Think of "streaming" as "asynchronous" or "outside the coherency
domain".
Good examples of what to use streaming mappings for are:
- Networking buffers transmitted/received by a device.
- Filesystem buffers written/read by a SCSI device.
The interfaces for using this type of mapping were designed in
such a way that an implementation can make whatever performance
optimizations the hardware allows. To this end, when using
such mappings you must be explicit about what you want to happen.
Neither type of DMA mapping has alignment restrictions that come from
the underlying bus, although some devices may have such restrictions.
Also, systems with caches that aren't DMA-coherent will work better
when the underlying buffers don't share cache lines with other data.
Using Consistent DMA mappings.
To allocate and map large (PAGE_SIZE or so) consistent DMA regions,
you should do:
dma_addr_t dma_handle;
cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp);
where dev is a struct device *. This may be called in interrupt
context with the GFP_ATOMIC flag.
Size is the length of the region you want to allocate, in bytes.
This routine will allocate RAM for that region, so it acts similarly to
__get_free_pages (but takes size instead of a page order). If your
driver needs regions sized smaller than a page, you may prefer using
the dma_pool interface, described below.
The consistent DMA mapping interfaces, for non-NULL dev, will by
default return a DMA address which is 32-bit addressable. Even if the
device indicates (via DMA mask) that it may address the upper 32-bits,
consistent allocation will only return > 32-bit addresses for DMA if
the consistent DMA mask has been explicitly changed via
dma_set_coherent_mask(). This is true of the dma_pool interface as
well.
dma_alloc_coherent returns two values: the virtual address which you
can use to access it from the CPU and dma_handle which you pass to the
card.
The cpu return address and the DMA bus master address are both
guaranteed to be aligned to the smallest PAGE_SIZE order which
is greater than or equal to the requested size. This invariant
exists (for example) to guarantee that if you allocate a chunk
which is smaller than or equal to 64 kilobytes, the extent of the
buffer you receive will not cross a 64K boundary.
To unmap and free such a DMA region, you call:
dma_free_coherent(dev, size, cpu_addr, dma_handle);
where dev, size are the same as in the above call and cpu_addr and
dma_handle are the values dma_alloc_coherent returned to you.
This function may not be called in interrupt context.
If your driver needs lots of smaller memory regions, you can write
custom code to subdivide pages returned by dma_alloc_coherent,
or you can use the dma_pool API to do that. A dma_pool is like
a kmem_cache, but it uses dma_alloc_coherent not __get_free_pages.
Also, it understands common hardware constraints for alignment,
like queue heads needing to be aligned on N byte boundaries.
Create a dma_pool like this:
struct dma_pool *pool;
pool = dma_pool_create(name, dev, size, align, alloc);
The "name" is for diagnostics (like a kmem_cache name); dev and size
are as above. The device's hardware alignment requirement for this
type of data is "align" (which is expressed in bytes, and must be a
power of two). If your device has no boundary crossing restrictions,
pass 0 for alloc; passing 4096 says memory allocated from this pool
must not cross 4KByte boundaries (but at that time it may be better to
go for dma_alloc_coherent directly instead).
Allocate memory from a dma pool like this:
cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);
flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor
holding SMP locks), GFP_ATOMIC otherwise. Like dma_alloc_coherent,
this returns two values, cpu_addr and dma_handle.
Free memory that was allocated from a dma_pool like this:
dma_pool_free(pool, cpu_addr, dma_handle);
where pool is what you passed to dma_pool_alloc, and cpu_addr and
dma_handle are the values dma_pool_alloc returned. This function
may be called in interrupt context.
Destroy a dma_pool by calling:
dma_pool_destroy(pool);
Make sure you've called dma_pool_free for all memory allocated
from a pool before you destroy the pool. This function may not
be called in interrupt context.
DMA Direction
The interfaces described in subsequent portions of this document
take a DMA direction argument, which is an integer and takes on
one of the following values:
DMA_BIDIRECTIONAL
DMA_TO_DEVICE
DMA_FROM_DEVICE
DMA_NONE
You should provide the exact DMA direction if you know it.
DMA_TO_DEVICE means "from main memory to the device"
DMA_FROM_DEVICE means "from the device to main memory"
It is the direction in which the data moves during the DMA
transfer.
You are _strongly_ encouraged to specify this as precisely
as you possibly can.
If you absolutely cannot know the direction of the DMA transfer,
specify DMA_BIDIRECTIONAL. It means that the DMA can go in
either direction. The platform guarantees that you may legally
specify this, and that it will work, but this may be at the
cost of performance for example.
The value DMA_NONE is to be used for debugging. One can
hold this in a data structure before you come to know the
precise direction, and this will help catch cases where your
direction tracking logic has failed to set things up properly.
Another advantage of specifying this value precisely (outside of
potential platform-specific optimizations of such) is for debugging.
Some platforms actually have a write permission boolean which DMA
mappings can be marked with, much like page protections in the user
program address space. Such platforms can and do report errors in the
kernel logs when the DMA controller hardware detects violation of the
permission setting.
Only streaming mappings specify a direction, consistent mappings
implicitly have a direction attribute setting of
DMA_BIDIRECTIONAL.
The SCSI subsystem tells you the direction to use in the
'sc_data_direction' member of the SCSI command your driver is
working on.
For Networking drivers, it's a rather simple affair. For transmit
packets, map/unmap them with the DMA_TO_DEVICE direction
specifier. For receive packets, just the opposite, map/unmap them
with the DMA_FROM_DEVICE direction specifier.
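Sketched in code (the buffer variables are hypothetical), that
convention looks like:

	dma_addr_t tx_dma, rx_dma;

	/* transmit: data flows from main memory to the device */
	tx_dma = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);
	...
	dma_unmap_single(dev, tx_dma, skb->len, DMA_TO_DEVICE);

	/* receive: data flows from the device to main memory */
	rx_dma = dma_map_single(dev, rx_buf, rx_len, DMA_FROM_DEVICE);
	...
	dma_unmap_single(dev, rx_dma, rx_len, DMA_FROM_DEVICE);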
Using Streaming DMA mappings
The streaming DMA mapping routines can be called from interrupt
context. There are two versions of each map/unmap, one which will
map/unmap a single memory region, and one which will map/unmap a
scatterlist.
To map a single region, you do:
struct device *dev = &my_dev->dev;
dma_addr_t dma_handle;
void *addr = buffer->ptr;
size_t size = buffer->len;
dma_handle = dma_map_single(dev, addr, size, direction);
and to unmap it:
dma_unmap_single(dev, dma_handle, size, direction);
You should call dma_unmap_single when the DMA activity is finished, e.g.
from the interrupt which told you that the DMA transfer is done.
Using cpu pointers like this for single mappings has a disadvantage:
you cannot reference HIGHMEM memory in this way. Thus, there is a
map/unmap interface pair akin to dma_{map,unmap}_single. These
interfaces deal with page/offset pairs instead of cpu pointers.
Specifically:
struct device *dev = &my_dev->dev;
dma_addr_t dma_handle;
struct page *page = buffer->page;
unsigned long offset = buffer->offset;
size_t size = buffer->len;
dma_handle = dma_map_page(dev, page, offset, size, direction);
...
dma_unmap_page(dev, dma_handle, size, direction);
Here, "offset" means byte offset within the given page.
With scatterlists, you map a region gathered from several regions by:
int i, count = dma_map_sg(dev, sglist, nents, direction);
struct scatterlist *sg;
for_each_sg(sglist, sg, count, i) {
hw_address[i] = sg_dma_address(sg);
hw_len[i] = sg_dma_len(sg);
}
where nents is the number of entries in the sglist.
The implementation is free to merge several consecutive sglist entries
into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
consecutive sglist entries can be merged into one provided the first one
ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have very
limited number of scatter-gather entries) and returns the actual number
of sg entries it mapped them to. On failure 0 is returned.
Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.
To unmap a scatterlist, just call:
dma_unmap_sg(dev, sglist, nents, direction);
Again, make sure DMA activity has already finished.
PLEASE NOTE: The 'nents' argument to the dma_unmap_sg call must be
the _same_ one you passed into the dma_map_sg call,
it should _NOT_ be the 'count' value _returned_ from the
dma_map_sg call.
Every dma_map_{single,sg} call should have its dma_unmap_{single,sg}
counterpart, because the bus address space is a shared resource (although
in some ports the mapping is per bus, so fewer devices contend for the
same bus address space) and you could render the machine unusable by
exhausting all bus addresses.
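Putting the notes above into a minimal sketch (the device
programming helper is hypothetical):

	int count = dma_map_sg(dev, sglist, nents, DMA_TO_DEVICE);

	if (count == 0)
		goto map_failed;

	/* program the device using the returned count ... */
	setup_device_sg(sglist, count);
	...
	/* ... but unmap with the original nents, never with count */
	dma_unmap_sg(dev, sglist, nents, DMA_TO_DEVICE);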
If you need to use the same streaming DMA region multiple times and touch
the data in between the DMA transfers, the buffer needs to be synced
properly in order for the CPU and device to see the most up-to-date and
correct copy of the DMA buffer.
So, firstly, just map it with dma_map_{single,sg}, and after each DMA
transfer call either:
dma_sync_single_for_cpu(dev, dma_handle, size, direction);
or:
dma_sync_sg_for_cpu(dev, sglist, nents, direction);
as appropriate.
Then, if you wish to let the device get at the DMA area again,
finish accessing the data with the cpu, and then before actually
giving the buffer to the hardware call either:
dma_sync_single_for_device(dev, dma_handle, size, direction);
or:
dma_sync_sg_for_device(dev, sglist, nents, direction);
as appropriate.
After the last DMA transfer call one of the DMA unmap routines
dma_unmap_{single,sg}. If you don't touch the data from the first dma_map_*
call till dma_unmap_*, then you don't have to call the dma_sync_*
routines at all.
Here is pseudo code which shows a situation in which you would need
to use the dma_sync_*() interfaces.
my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
{
dma_addr_t mapping;
mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);
cp->rx_buf = buffer;
cp->rx_len = len;
cp->rx_dma = mapping;
give_rx_buf_to_card(cp);
}
...
my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs)
{
struct my_card *cp = devid;
...
if (read_card_status(cp) == RX_BUF_TRANSFERRED) {
struct my_card_header *hp;
/* Examine the header to see if we wish
* to accept the data. But synchronize
* the DMA transfer with the CPU first
* so that we see updated contents.
*/
dma_sync_single_for_cpu(cp->dev, cp->rx_dma,
cp->rx_len,
DMA_FROM_DEVICE);
/* Now it is safe to examine the buffer. */
hp = (struct my_card_header *) cp->rx_buf;
if (header_is_ok(hp)) {
dma_unmap_single(cp->dev, cp->rx_dma, cp->rx_len,
DMA_FROM_DEVICE);
pass_to_upper_layers(cp->rx_buf);
make_and_setup_new_rx_buf(cp);
} else {
/* Just sync the buffer and give it back
* to the card.
*/
dma_sync_single_for_device(cp->dev,
cp->rx_dma,
cp->rx_len,
DMA_FROM_DEVICE);
give_rx_buf_to_card(cp);
}
}
}
Drivers converted fully to this interface should not use virt_to_bus any
longer, nor should they use bus_to_virt. Some drivers have to be changed a
little bit, because there is no longer an equivalent to bus_to_virt in the
dynamic DMA mapping scheme - you have to always store the DMA addresses
returned by the dma_alloc_coherent, dma_pool_alloc, and dma_map_single
calls (dma_map_sg stores them in the scatterlist itself if the platform
supports dynamic DMA mapping in hardware) in your driver structures and/or
in the card registers.
All drivers should be using these interfaces with no exceptions. It
is planned to completely remove virt_to_bus() and bus_to_virt() as
they are entirely deprecated. Some ports already do not provide these
as it is impossible to correctly support them.
Handling Errors
DMA address space is limited on some architectures and an allocation
failure can be determined by:
- checking if dma_alloc_coherent returns NULL or dma_map_sg returns 0
- checking the returned dma_addr_t of dma_map_single and dma_map_page
by using dma_mapping_error():
dma_addr_t dma_handle;
dma_handle = dma_map_single(dev, addr, size, direction);
if (dma_mapping_error(dev, dma_handle)) {
/*
* reduce current DMA mapping usage,
* delay and try again later or
* reset driver.
*/
}
Networking drivers must call dev_kfree_skb to free the socket buffer
and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
(ndo_start_xmit). This means that the socket buffer is just dropped in
the failure case.
SCSI drivers must return SCSI_MLQUEUE_HOST_BUSY if the DMA mapping
fails in the queuecommand hook. This means that the SCSI subsystem
passes the command to the driver again later.
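As a minimal sketch of the networking rule (reusing the hypothetical
my_card structure from the receive-buffer example above):

	static netdev_tx_t my_start_xmit(struct sk_buff *skb,
					 struct net_device *ndev)
	{
		struct my_card *cp = netdev_priv(ndev);
		dma_addr_t mapping;

		mapping = dma_map_single(cp->dev, skb->data, skb->len,
					 DMA_TO_DEVICE);
		if (dma_mapping_error(cp->dev, mapping)) {
			/* drop the packet: free the skb, report success */
			dev_kfree_skb(skb);
			return NETDEV_TX_OK;
		}

		/* ... hand 'mapping' to the hardware and start the
		 * transmit ... */
		return NETDEV_TX_OK;
	}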
Optimizing Unmap State Space Consumption
On many platforms, dma_unmap_{single,page}() is simply a nop.
Therefore, keeping track of the mapping address and length is a waste
of space. Instead of filling your drivers up with ifdefs and the like
to "work around" this (which would defeat the whole purpose of a
portable API) the following facilities are provided.
Actually, instead of describing the macros one by one, we'll
transform some example code.
1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures.
Example, before:
struct ring_state {
struct sk_buff *skb;
dma_addr_t mapping;
__u32 len;
};
after:
struct ring_state {
struct sk_buff *skb;
DEFINE_DMA_UNMAP_ADDR(mapping);
DEFINE_DMA_UNMAP_LEN(len);
};
2) Use dma_unmap_{addr,len}_set to set these values.
Example, before:
ringp->mapping = FOO;
ringp->len = BAR;
after:
dma_unmap_addr_set(ringp, mapping, FOO);
dma_unmap_len_set(ringp, len, BAR);
3) Use dma_unmap_{addr,len} to access these values.
Example, before:
dma_unmap_single(dev, ringp->mapping, ringp->len,
DMA_FROM_DEVICE);
after:
dma_unmap_single(dev,
dma_unmap_addr(ringp, mapping),
dma_unmap_len(ringp, len),
DMA_FROM_DEVICE);
It really should be self-explanatory. We treat the ADDR and LEN
separately, because it is possible for an implementation to only
need the address in order to perform the unmap operation.
Platform Issues
If you are just writing drivers for Linux and do not maintain
an architecture port for the kernel, you can safely skip down
to "Closing".
1) Struct scatterlist requirements.
Don't invent the architecture specific struct scatterlist; just use
<asm-generic/scatterlist.h>. You need to enable
CONFIG_NEED_SG_DMA_LENGTH if the architecture supports IOMMUs
(including software IOMMU).
2) ARCH_KMALLOC_MINALIGN
Architectures must ensure that kmalloc'ed buffers are
DMA-safe. Drivers and subsystems depend on it. If an architecture
isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
the CPU cache is identical to data in main memory),
ARCH_KMALLOC_MINALIGN must be set so that the memory allocator
makes sure that kmalloc'ed buffers don't share a cache line with
others. See arch/arm/include/asm/cache.h as an example.
Note that ARCH_KMALLOC_MINALIGN is about DMA memory alignment
constraints. You don't need to worry about the architecture data
alignment constraints (e.g. the alignment constraints about 64-bit
objects).
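For instance, an architecture with 32-byte cache lines might define
(a sketch modeled on arch/arm/include/asm/cache.h; the exact values
are per-architecture):

	#define L1_CACHE_SHIFT		5
	#define L1_CACHE_BYTES		(1 << L1_CACHE_SHIFT)

	/* kmalloc'ed objects must not share a cache line with
	 * unrelated data on a non-DMA-coherent CPU, so align them
	 * to a full cache line. */
	#define ARCH_KMALLOC_MINALIGN	L1_CACHE_BYTES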
Closing
This document, and the API itself, would not be in its current
form without the feedback and suggestions from numerous individuals.
We would like to specifically mention, in no particular order, the
following people:
Russell King <rmk@arm.linux.org.uk>
Leo Dagum <dagum@barrel.engr.sgi.com>
Ralf Baechle <ralf@oss.sgi.com>
Grant Grundler <grundler@cup.hp.com>
Jay Estabrook <Jay.Estabrook@compaq.com>
Thomas Sailer <sailer@ife.ee.ethz.ch>
Andrea Arcangeli <andrea@suse.de>
Jens Axboe <jens.axboe@oracle.com>
David Mosberger-Tang <davidm@hpl.hp.com>


@ -4,20 +4,18 @@
James E.J. Bottomley <James.Bottomley@HansenPartnership.com>
This document describes the DMA API. For a more gentle introduction
phrased in terms of the pci_ equivalents (and actual examples) see
Documentation/PCI/PCI-DMA-mapping.txt.
of the API (and actual examples) see
Documentation/DMA-API-HOWTO.txt.
This API is split into two pieces. Part I describes the API and the
corresponding pci_ API. Part II describes the extensions to the API
for supporting non-consistent memory machines. Unless you know that
your driver absolutely has to support non-consistent platforms (this
is usually only legacy platforms) you should only use the API
described in part I.
This API is split into two pieces. Part I describes the API. Part II
describes the extensions to the API for supporting non-consistent
memory machines. Unless you know that your driver absolutely has to
support non-consistent platforms (this is usually only legacy
platforms) you should only use the API described in part I.
Part I - pci_ and dma_ Equivalent API
Part I - dma_ API
-------------------------------------
To get the pci_ API, you must #include <linux/pci.h>
To get the dma_ API, you must #include <linux/dma-mapping.h>
@ -27,9 +25,6 @@ Part Ia - Using large dma-coherent buffers
void *
dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t flag)
void *
pci_alloc_consistent(struct pci_dev *dev, size_t size,
dma_addr_t *dma_handle)
Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
@ -53,15 +48,11 @@ The simplest way to do that is to use the dma_pool calls (see below).
The flag parameter (dma_alloc_coherent only) allows the caller to
specify the GFP_ flags (see kmalloc) for the allocation (the
implementation may choose to ignore flags that affect the location of
the returned memory, like GFP_DMA). For pci_alloc_consistent, you
must assume GFP_ATOMIC behaviour.
the returned memory, like GFP_DMA).
void
dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
dma_addr_t dma_handle)
void
pci_free_consistent(struct pci_dev *dev, size_t size, void *cpu_addr,
dma_addr_t dma_handle)
Free the region of consistent memory you previously allocated. dev,
size and dma_handle must all be the same as those passed into the
@ -89,10 +80,6 @@ for alignment, like queue heads needing to be aligned on N-byte boundaries.
dma_pool_create(const char *name, struct device *dev,
size_t size, size_t align, size_t alloc);
struct pci_pool *
pci_pool_create(const char *name, struct pci_device *dev,
size_t size, size_t align, size_t alloc);
The pool create() routines initialize a pool of dma-coherent buffers
for use with a given device. It must be called in a context which
can sleep.
@ -108,9 +95,6 @@ from this pool must not cross 4KByte boundaries.
void *dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
dma_addr_t *dma_handle);
void *pci_pool_alloc(struct pci_pool *pool, gfp_t gfp_flags,
dma_addr_t *dma_handle);
This allocates memory from the pool; the returned memory will meet the size
and alignment requirements specified at creation time. Pass GFP_ATOMIC to
prevent blocking, or if it's permitted (not in_interrupt, not holding SMP locks),
@ -122,9 +106,6 @@ pool's device.
void dma_pool_free(struct dma_pool *pool, void *vaddr,
dma_addr_t addr);
void pci_pool_free(struct pci_pool *pool, void *vaddr,
dma_addr_t addr);
This puts memory back into the pool. The pool is what was passed to
the pool allocation routine; the cpu (vaddr) and dma addresses are what
were returned when that routine allocated the memory being freed.
@ -132,8 +113,6 @@ were returned when that routine allocated the memory being freed.
void dma_pool_destroy(struct dma_pool *pool);
void pci_pool_destroy(struct pci_pool *pool);
The pool destroy() routines free the resources of the pool. They must be
called in a context which can sleep. Make sure you've freed all allocated
memory back to the pool before you destroy it.
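As an illustrative sketch of the whole pool lifecycle (the name, size
and alignment below are made up for the example):

	struct dma_pool *pool;
	dma_addr_t dma;
	void *vaddr;

	pool = dma_pool_create("foo-descriptors", dev, 64, 64, 0);
	if (!pool)
		return -ENOMEM;
	vaddr = dma_pool_alloc(pool, GFP_KERNEL, &dma);
	/* ... hand dma to the hardware, use vaddr from the CPU ... */
	dma_pool_free(pool, vaddr, dma);
	dma_pool_destroy(pool);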
@ -144,8 +123,6 @@ Part Ic - DMA addressing limitations
int
dma_supported(struct device *dev, u64 mask)
int
pci_dma_supported(struct pci_dev *hwdev, u64 mask)
Checks to see if the device can support DMA to the memory described by
mask.
@ -159,8 +136,14 @@ driver writers.
int
dma_set_mask(struct device *dev, u64 mask)
Checks to see if the mask is possible and updates the device
parameters if it is.
Returns: 0 if successful and a negative error if not.
int
pci_set_dma_mask(struct pci_device *dev, u64 mask)
dma_set_coherent_mask(struct device *dev, u64 mask)
Checks to see if the mask is possible and updates the device
parameters if it is.
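A typical probe routine might try a wide mask first and fall back to a
narrower one; a sketch (the 64/32-bit fallback policy here is only an
example):

	if (!dma_set_mask(dev, DMA_BIT_MASK(64))) {
		dma_set_coherent_mask(dev, DMA_BIT_MASK(64));
	} else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
		dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
	} else {
		dev_warn(dev, "no suitable DMA mask available\n");
		return -ENODEV;
	}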
@ -187,9 +170,6 @@ Part Id - Streaming DMA mappings
dma_addr_t
dma_map_single(struct device *dev, void *cpu_addr, size_t size,
enum dma_data_direction direction)
dma_addr_t
pci_map_single(struct pci_dev *hwdev, void *cpu_addr, size_t size,
int direction)
Maps a piece of processor virtual memory so it can be accessed by the
device and returns the physical handle of the memory.
@ -198,14 +178,10 @@ The direction for both api's may be converted freely by casting.
However the dma_ API uses a strongly typed enumerator for its
direction:
DMA_NONE		= PCI_DMA_NONE		no direction (used for
						debugging)
DMA_TO_DEVICE		= PCI_DMA_TODEVICE	data is going from the
						memory to the device
DMA_FROM_DEVICE		= PCI_DMA_FROMDEVICE	data is coming from
						the device to the
						memory
DMA_BIDIRECTIONAL	= PCI_DMA_BIDIRECTIONAL	direction isn't known

DMA_NONE		no direction (used for debugging)
DMA_TO_DEVICE		data is going from the memory to the device
DMA_FROM_DEVICE		data is coming from the device to the memory
DMA_BIDIRECTIONAL	direction isn't known
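Putting the above together, a sketch of a single streaming mapping for
a CPU-to-device transfer (buffer, size and dev are assumed to exist
already; dma_mapping_error() is described further below):

	dma_addr_t dma_handle;

	dma_handle = dma_map_single(dev, buffer, size, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma_handle))
		return -ENOMEM;
	/* ... tell the device to DMA from dma_handle ... */
	dma_unmap_single(dev, dma_handle, size, DMA_TO_DEVICE);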
Notes: Not all memory regions in a machine can be mapped by this
API. Further, regions that appear to be physically contiguous in
@ -268,9 +244,6 @@ cache lines are updated with data that the device may have changed).
void
dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
enum dma_data_direction direction)
void
pci_unmap_single(struct pci_dev *hwdev, dma_addr_t dma_addr,
size_t size, int direction)
Unmaps the region previously mapped. All the parameters passed in
must be identical to those passed in (and returned) by the mapping
@ -280,15 +253,9 @@ dma_addr_t
dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size,
enum dma_data_direction direction)
dma_addr_t
pci_map_page(struct pci_dev *hwdev, struct page *page,
unsigned long offset, size_t size, int direction)
void
dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
enum dma_data_direction direction)
void
pci_unmap_page(struct pci_dev *hwdev, dma_addr_t dma_address,
size_t size, int direction)
API for mapping and unmapping for pages. All the notes and warnings
for the other mapping APIs apply here. Also, although the <offset>
@ -299,9 +266,6 @@ cache width is.
int
dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
int
pci_dma_mapping_error(struct pci_dev *hwdev, dma_addr_t dma_addr)
In some circumstances dma_map_single and dma_map_page will fail to create
a mapping. A driver can check for these errors by testing the returned
dma address with dma_mapping_error(). A non-zero return value means the mapping
@ -311,9 +275,6 @@ reduce current DMA mapping usage or delay and try again later).
int
dma_map_sg(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction direction)
int
pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg,
int nents, int direction)
Returns: the number of physical segments mapped (this may be shorter
than <nents> passed in if some elements of the scatter/gather list are
@ -353,9 +314,6 @@ accessed sg->address and sg->length as shown above.
void
dma_unmap_sg(struct device *dev, struct scatterlist *sg,
int nhwentries, enum dma_data_direction direction)
void
pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg,
int nents, int direction)
Unmap the previously mapped scatter/gather list.  All the parameters
must be the same as those passed in to the scatter/gather mapping
@ -365,21 +323,23 @@ Note: <nents> must be the number you passed in, *not* the number of
physical entries returned.
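As a sketch of scatter/gather usage, note that the device programming
loop runs over the value returned by dma_map_sg() while the unmap uses
the original nents (sglist and nents are assumed to exist already):

	int i, count;
	struct scatterlist *sg;

	count = dma_map_sg(dev, sglist, nents, DMA_FROM_DEVICE);
	if (count == 0)
		return -ENOMEM;
	for_each_sg(sglist, sg, count, i) {
		/* program sg_dma_address(sg) and sg_dma_len(sg)
		 * into the device's scatter/gather engine */
	}
	/* ... after the DMA completes: */
	dma_unmap_sg(dev, sglist, nents, DMA_FROM_DEVICE);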
void
dma_sync_single(struct device *dev, dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
void
pci_dma_sync_single(struct pci_dev *hwdev, dma_addr_t dma_handle,
size_t size, int direction)
dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
void
dma_sync_sg(struct device *dev, struct scatterlist *sg, int nelems,
enum dma_data_direction direction)
dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems,
enum dma_data_direction direction)
void
pci_dma_sync_sg(struct pci_dev *hwdev, struct scatterlist *sg,
int nelems, int direction)
dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems,
enum dma_data_direction direction)
Synchronise a single contiguous or scatter/gather mapping. All the
parameters must be the same as those passed into the single mapping
API.
Synchronise a single contiguous or scatter/gather mapping for the cpu
and device. With the sync_sg API, all the parameters must be the same
as those passed into the single mapping API. With the sync_single API,
you can use dma_handle and size parameters that aren't identical to
those passed into the single mapping API to do a partial sync.
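For example, a driver that lets the CPU peek at a receive buffer
between transfers might do the following (a sketch only; dma_handle
and size are those from the original dma_map_single() call):

	dma_sync_single_for_cpu(dev, dma_handle, size, DMA_FROM_DEVICE);
	/* ... the CPU may now safely read the received data ... */
	dma_sync_single_for_device(dev, dma_handle, size, DMA_FROM_DEVICE);
	/* ... the device may now DMA into the buffer again ... */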
Notes: You must do this:
@ -461,9 +421,9 @@ void whizco_dma_map_sg_attrs(struct device *dev, dma_addr_t dma_addr,
Part II - Advanced dma_ usage
-----------------------------
Warning: These pieces of the DMA API have no PCI equivalent. They
should also not be used in the majority of cases, since they cater for
unlikely corner cases that don't belong in usual drivers.
Warning: These pieces of the DMA API should not be used in the
majority of cases, since they cater for unlikely corner cases that
don't belong in usual drivers.
If you don't understand how cache line coherency works between a
processor and an I/O device, you should not be using this part of the
@ -513,16 +473,6 @@ line, but it will guarantee that one or more cache lines fit exactly
into the width returned by this call. It will also always be a power
of two for easy alignment.
void
dma_sync_single_range(struct device *dev, dma_addr_t dma_handle,
unsigned long offset, size_t size,
enum dma_data_direction direction)
Does a partial sync, starting at offset and continuing for size. You
must be careful to observe the cache alignment and width when doing
anything like this. You must also be extra careful about accessing
memory you intend to sync partially.
void
dma_cache_sync(struct device *dev, void *vaddr, size_t size,
enum dma_data_direction direction)

View File

@ -14,7 +14,7 @@ DOCBOOKS := z8530book.xml mcabook.xml device-drivers.xml \
genericirq.xml s390-drivers.xml uio-howto.xml scsi.xml \
mac80211.xml debugobjects.xml sh.xml regulator.xml \
alsa-driver-api.xml writing-an-alsa-driver.xml \
tracepoint.xml media.xml
tracepoint.xml media.xml drm.xml
###
# The build process is as follows (targets):

View File

@ -45,7 +45,7 @@
</sect1>
<sect1><title>Atomic and pointer manipulation</title>
!Iarch/x86/include/asm/atomic_32.h
!Iarch/x86/include/asm/atomic.h
!Iarch/x86/include/asm/unaligned.h
</sect1>

View File

@ -316,7 +316,7 @@ CPU B: spin_unlock_irqrestore(&amp;dev_lock, flags)
<chapter id="pubfunctions">
<title>Public Functions Provided</title>
!Iarch/x86/include/asm/io_32.h
!Iarch/x86/include/asm/io.h
!Elib/iomap.c
</chapter>

View File

@ -0,0 +1,839 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN"
"http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd" []>
<book id="drmDevelopersGuide">
<bookinfo>
<title>Linux DRM Developer's Guide</title>
<copyright>
<year>2008-2009</year>
<holder>
Intel Corporation (Jesse Barnes &lt;jesse.barnes@intel.com&gt;)
</holder>
</copyright>
<legalnotice>
<para>
The contents of this file may be used under the terms of the GNU
General Public License version 2 (the "GPL") as distributed in
the kernel source COPYING file.
</para>
</legalnotice>
</bookinfo>
<toc></toc>
<!-- Introduction -->
<chapter id="drmIntroduction">
<title>Introduction</title>
<para>
The Linux DRM layer contains code intended to support the needs
of complex graphics devices, usually containing programmable
pipelines well suited to 3D graphics acceleration. Graphics
drivers in the kernel can make use of DRM functions to make
tasks like memory management, interrupt handling and DMA easier,
and provide a uniform interface to applications.
</para>
<para>
A note on versions: this guide covers features found in the DRM
tree, including the TTM memory manager, output configuration and
mode setting, and the new vblank internals, in addition to all
the regular features found in current kernels.
</para>
<para>
[Insert diagram of typical DRM stack here]
</para>
</chapter>
<!-- Internals -->
<chapter id="drmInternals">
<title>DRM Internals</title>
<para>
This chapter documents DRM internals relevant to driver authors
and developers working to add support for the latest features to
existing drivers.
</para>
<para>
First, we'll go over some typical driver initialization
requirements, like setting up command buffers, creating an
initial output configuration, and initializing core services.
Subsequent sections will cover core internals in more detail,
providing implementation notes and examples.
</para>
<para>
The DRM layer provides several services to graphics drivers,
many of them driven by the application interfaces it provides
through libdrm, the library that wraps most of the DRM ioctls.
These include vblank event handling, memory
management, output management, framebuffer management, command
submission &amp; fencing, suspend/resume support, and DMA
services.
</para>
<para>
The core of every DRM driver is struct drm_device. Drivers
will typically statically initialize a drm_device structure,
then pass it to drm_init() at load time.
</para>
<!-- Internals: driver init -->
<sect1>
<title>Driver initialization</title>
<para>
Before calling the DRM initialization routines, the driver must
first create and fill out a struct drm_device structure.
</para>
<programlisting>
static struct drm_driver driver = {
	/* don't use mtrr's here, the Xserver or user space app should
	 * deal with them for intel hardware.
	 */
	.driver_features =
	    DRIVER_USE_AGP | DRIVER_REQUIRE_AGP |
	    DRIVER_HAVE_IRQ | DRIVER_IRQ_SHARED | DRIVER_MODESET,
	.load = i915_driver_load,
	.unload = i915_driver_unload,
	.firstopen = i915_driver_firstopen,
	.lastclose = i915_driver_lastclose,
	.preclose = i915_driver_preclose,
	.save = i915_save,
	.restore = i915_restore,
	.device_is_agp = i915_driver_device_is_agp,
	.get_vblank_counter = i915_get_vblank_counter,
	.enable_vblank = i915_enable_vblank,
	.disable_vblank = i915_disable_vblank,
	.irq_preinstall = i915_driver_irq_preinstall,
	.irq_postinstall = i915_driver_irq_postinstall,
	.irq_uninstall = i915_driver_irq_uninstall,
	.irq_handler = i915_driver_irq_handler,
	.reclaim_buffers = drm_core_reclaim_buffers,
	.get_map_ofs = drm_core_get_map_ofs,
	.get_reg_ofs = drm_core_get_reg_ofs,
	.fb_probe = intelfb_probe,
	.fb_remove = intelfb_remove,
	.fb_resize = intelfb_resize,
	.master_create = i915_master_create,
	.master_destroy = i915_master_destroy,
#if defined(CONFIG_DEBUG_FS)
	.debugfs_init = i915_debugfs_init,
	.debugfs_cleanup = i915_debugfs_cleanup,
#endif
	.gem_init_object = i915_gem_init_object,
	.gem_free_object = i915_gem_free_object,
	.gem_vm_ops = &amp;i915_gem_vm_ops,
	.ioctls = i915_ioctls,
	.fops = {
		.owner = THIS_MODULE,
		.open = drm_open,
		.release = drm_release,
		.ioctl = drm_ioctl,
		.mmap = drm_mmap,
		.poll = drm_poll,
		.fasync = drm_fasync,
#ifdef CONFIG_COMPAT
		.compat_ioctl = i915_compat_ioctl,
#endif
	},
	.pci_driver = {
		.name = DRIVER_NAME,
		.id_table = pciidlist,
		.probe = probe,
		.remove = __devexit_p(drm_cleanup_pci),
	},
	.name = DRIVER_NAME,
	.desc = DRIVER_DESC,
	.date = DRIVER_DATE,
	.major = DRIVER_MAJOR,
	.minor = DRIVER_MINOR,
	.patchlevel = DRIVER_PATCHLEVEL,
};
</programlisting>
<para>
In the example above, taken from the i915 DRM driver, the driver
sets several flags indicating what core features it supports.
We'll go over the individual callbacks in later sections. Since
flags indicate which features your driver supports to the DRM
core, you need to set most of them prior to calling drm_init(). Some,
like DRIVER_MODESET, can be set later based on user supplied parameters,
but that's the exception rather than the rule.
</para>
<variablelist>
<title>Driver flags</title>
<varlistentry>
<term>DRIVER_USE_AGP</term>
<listitem><para>
Driver uses AGP interface
</para></listitem>
</varlistentry>
<varlistentry>
<term>DRIVER_REQUIRE_AGP</term>
<listitem><para>
Driver needs AGP interface to function.
</para></listitem>
</varlistentry>
<varlistentry>
<term>DRIVER_USE_MTRR</term>
<listitem>
<para>
Driver uses MTRR interface for mapping memory. Deprecated.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>DRIVER_PCI_DMA</term>
<listitem><para>
Driver is capable of PCI DMA. Deprecated.
</para></listitem>
</varlistentry>
<varlistentry>
<term>DRIVER_SG</term>
<listitem><para>
Driver can perform scatter/gather DMA. Deprecated.
</para></listitem>
</varlistentry>
<varlistentry>
<term>DRIVER_HAVE_DMA</term>
<listitem><para>Driver supports DMA. Deprecated.</para></listitem>
</varlistentry>
<varlistentry>
<term>DRIVER_HAVE_IRQ</term><term>DRIVER_IRQ_SHARED</term>
<listitem>
<para>
DRIVER_HAVE_IRQ indicates whether the driver has an IRQ
handler; DRIVER_IRQ_SHARED indicates whether the device &amp;
handler support shared IRQs (note that this is required of
PCI drivers).
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>DRIVER_DMA_QUEUE</term>
<listitem>
<para>
If the driver queues DMA requests and completes them
asynchronously, this flag should be set. Deprecated.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>DRIVER_FB_DMA</term>
<listitem>
<para>
Driver supports DMA to/from the framebuffer. Deprecated.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>DRIVER_MODESET</term>
<listitem>
<para>
Driver supports mode setting interfaces.
</para>
</listitem>
</varlistentry>
</variablelist>
<para>
In this specific case, the driver requires AGP and supports
IRQs. DMA, as we'll see, is handled by device specific ioctls
in this case. It also supports the kernel mode setting APIs, though
unlike in the actual i915 driver source, this example unconditionally
exports KMS capability.
</para>
</sect1>
<!-- Internals: driver load -->
<sect1>
<title>Driver load</title>
<para>
In the previous section, we saw what a typical drm_driver
structure might look like. One of the more important fields in
the structure is the hook for the load function.
</para>
<programlisting>
static struct drm_driver driver = {
	...
	.load = i915_driver_load,
	...
};
</programlisting>
<para>
The load function has many responsibilities: allocating a driver
private structure, specifying supported performance counters,
configuring the device (e.g. mapping registers &amp; command
buffers), initializing the memory manager, and setting up the
initial output configuration.
</para>
<para>
Note that the tasks performed at driver load time must not
conflict with DRM client requirements. For instance, if user
level mode setting drivers are in use, it would be problematic
to perform output discovery &amp; configuration at load time.
Likewise, if pre-memory management aware user level drivers are
in use, memory management and command buffer setup may need to
be omitted. These requirements are driver specific, and care
needs to be taken to keep both old and new applications and
libraries working. The i915 driver supports the "modeset"
module parameter to control whether advanced features are
enabled at load time or in legacy fashion. If compatibility is
a concern (e.g. with drivers converted over to the new interfaces
from the old ones), care must be taken to prevent incompatible
device initialization and control with the currently active
userspace drivers.
</para>
<sect2>
<title>Driver private &amp; performance counters</title>
<para>
The driver private hangs off the main drm_device structure and
can be used for tracking various device specific bits of
information, like register offsets, command buffer status,
register state for suspend/resume, etc. At load time, a
driver can simply allocate one and set drm_device.dev_priv
appropriately; at unload the driver can free it and set
drm_device.dev_priv to NULL.
</para>
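<para>
A hedged sketch of that pattern, using a made-up "foo" driver (the
contents of the private structure are whatever your device needs):
</para>
<programlisting>
static int foo_driver_load(struct drm_device *dev, unsigned long flags)
{
	struct foo_private *dev_priv;

	dev_priv = kzalloc(sizeof(*dev_priv), GFP_KERNEL);
	if (!dev_priv)
		return -ENOMEM;
	dev->dev_priv = dev_priv;
	return 0;
}

static int foo_driver_unload(struct drm_device *dev)
{
	kfree(dev->dev_priv);
	dev->dev_priv = NULL;
	return 0;
}
</programlisting>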
<para>
The DRM supports several counters which can be used for rough
performance characterization. Note that the DRM stat counter
system is not often used by applications, and supporting
additional counters is completely optional.
</para>
<para>
These interfaces are deprecated and should not be used. If performance
monitoring is desired, the developer should investigate and
potentially enhance the kernel perf and tracing infrastructure to export
GPU related performance information to performance monitoring
tools and applications.
</para>
</sect2>
<sect2>
<title>Configuring the device</title>
<para>
Obviously, device configuration will be device specific.
However, there are several common operations: finding a
device's PCI resources, mapping them, and potentially setting
up an IRQ handler.
</para>
<para>
Finding &amp; mapping resources is fairly straightforward. The
DRM wrapper functions drm_get_resource_start() and
drm_get_resource_len() can be used to find BARs on the given
drm_device struct. Once those values have been retrieved, the
driver load function can call drm_addmap() to create a new
mapping for the BAR in question. Note you'll probably want a
drm_local_map_t in your driver private structure to track any
mappings you create.
<!-- !Fdrivers/gpu/drm/drm_bufs.c drm_get_resource_* -->
<!-- !Finclude/drm/drmP.h drm_local_map_t -->
</para>
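<para>
As a sketch, assuming the device's registers live in BAR 0 and that
the driver private structure has a drm_local_map_t pointer named
mmio_map (both assumptions are for the example only):
</para>
<programlisting>
resource_size_t base = drm_get_resource_start(dev, 0);
resource_size_t size = drm_get_resource_len(dev, 0);
int ret;

ret = drm_addmap(dev, base, size, _DRM_REGISTERS,
		 _DRM_KERNEL | _DRM_DRIVER, &amp;dev_priv->mmio_map);
if (ret)
	return ret;
</programlisting>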
<para>
If compatibility with other operating systems isn't a concern
(DRM drivers can run under various BSD variants and OpenSolaris),
native Linux calls can be used for the above, e.g. pci_resource_*
and iomap*/iounmap. See the Linux device driver book for more
info.
</para>
<para>
Once you have a register map, you can use the DRM_READn() and
DRM_WRITEn() macros to access the registers on your device, or
use driver specific versions to offset into your MMIO space
relative to a driver specific base pointer (see I915_READ for
example).
</para>
<para>
If your device supports interrupt generation, you may want to
set up an interrupt handler at driver load time as well.  This
is done using the drm_irq_install() function. If your device
supports vertical blank interrupts, it should call
drm_vblank_init() to initialize the core vblank handling code before
enabling interrupts on your device. This ensures the vblank related
structures are allocated and allows the core to handle vblank events.
</para>
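<para>
A sketch of that ordering during load (num_crtcs stands in for the
number of CRTCs your device exposes):
</para>
<programlisting>
ret = drm_vblank_init(dev, num_crtcs);
if (ret)
	return ret;

ret = drm_irq_install(dev);
if (ret)
	return ret;
</programlisting>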
<!--!Fdrivers/char/drm/drm_irq.c drm_irq_install-->
<para>
Once your interrupt handler is registered (it'll use your
drm_driver.irq_handler as the actual interrupt handling
function), you can safely enable interrupts on your device,
assuming any other state your interrupt handler uses is also
initialized.
</para>
<para>
Another task that may be necessary during configuration is
mapping the video BIOS. On many devices, the VBIOS describes
device configuration, LCD panel timings (if any), and contains
flags indicating device state. Mapping the BIOS can be done
using the pci_map_rom() call, a convenience function that
takes care of mapping the actual ROM, whether it has been
shadowed into memory (typically at address 0xc0000) or exists
on the PCI device in the ROM BAR. Note that once you've
mapped the ROM and extracted any necessary information, be
sure to unmap it; on many devices the ROM address decoder is
shared with other BARs, so leaving it mapped can cause
undesired behavior like hangs or memory corruption.
<!--!Fdrivers/pci/rom.c pci_map_rom-->
</para>
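<para>
A minimal sketch of the map/extract/unmap pattern (pdev is the
device's struct pci_dev):
</para>
<programlisting>
size_t size;
void __iomem *rom = pci_map_rom(pdev, &amp;size);

if (rom) {
	/* ... parse the VBIOS tables you need ... */
	pci_unmap_rom(pdev, rom);
}
</programlisting>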
</sect2>
<sect2>
<title>Memory manager initialization</title>
<para>
In order to allocate command buffers, cursor memory, scanout
buffers, etc., as well as support the latest features provided
by packages like Mesa and the X.Org X server, your driver
should support a memory manager.
</para>
<para>
If your driver supports memory management (it should!), you'll
need to set that up at load time as well. How you initialize
it depends on which memory manager you're using, TTM or GEM.
</para>
<sect3>
<title>TTM initialization</title>
<para>
TTM (for Translation Table Manager) manages video memory and
aperture space for graphics devices. TTM supports both UMA devices
and devices with dedicated video RAM (VRAM), i.e. most discrete
graphics devices. If your device has dedicated RAM, supporting
TTM is desirable. TTM also integrates tightly with your
driver specific buffer execution function. See the radeon
driver for examples.
</para>
<para>
The core TTM structure is the ttm_bo_driver struct. It contains
several fields with function pointers for initializing the TTM,
allocating and freeing memory, waiting for command completion
and fence synchronization, and memory migration. See the
radeon_ttm.c file for an example of usage.
</para>
<para>
The ttm_global_reference structure is made up of several fields:
</para>
<programlisting>
struct ttm_global_reference {
	enum ttm_global_types global_type;
	size_t size;
	void *object;
	int (*init) (struct ttm_global_reference *);
	void (*release) (struct ttm_global_reference *);
};
</programlisting>
<para>
There should be one global reference structure for your memory
manager as a whole, and there will be others for each object
created by the memory manager at runtime. Your global TTM should
have a type of TTM_GLOBAL_TTM_MEM. The size field for the global
object should be sizeof(struct ttm_mem_global), and the init and
release hooks should point at your driver specific init and
release routines, which will probably eventually call
ttm_mem_global_init and ttm_mem_global_release respectively.
</para>
<para>
Once your global TTM accounting structure is set up and initialized
(done by calling ttm_global_item_ref on the global object you
just created), you'll need to create a buffer object TTM to
provide a pool for buffer object allocation by clients and the
kernel itself. The type of this object should be TTM_GLOBAL_TTM_BO,
and its size should be sizeof(struct ttm_bo_global). Again,
driver specific init and release functions can be provided,
likely eventually calling ttm_bo_global_init and
ttm_bo_global_release, respectively. Also like the previous
object, ttm_global_item_ref is used to create an initial reference
count for the TTM, which will call your initialization function.
</para>
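<para>
A sketch of the global memory accounting setup, loosely modeled on
radeon_ttm.c (the foo_ names are placeholders for driver specific
code):
</para>
<programlisting>
struct ttm_global_reference *global_ref = &amp;foo_priv->mem_global_ref;
int ret;

global_ref->global_type = TTM_GLOBAL_TTM_MEM;
global_ref->size = sizeof(struct ttm_mem_global);
global_ref->init = &amp;foo_ttm_mem_global_init;
global_ref->release = &amp;foo_ttm_mem_global_release;
ret = ttm_global_item_ref(global_ref);
if (ret)
	return ret;
</programlisting>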
</sect3>
<sect3>
<title>GEM initialization</title>
<para>
GEM is an alternative to TTM, designed specifically for UMA
devices. It has simpler initialization and execution requirements
than TTM, but has no VRAM management capability. Core GEM
initialization consists of a basic drm_mm_init call to create
a GTT DRM MM object, which provides an address space pool for
object allocation. In a KMS configuration, the driver will
need to allocate and initialize a command ring buffer following
basic GEM initialization. Most UMA devices have a so-called
"stolen" memory region, which provides space for the initial
framebuffer and large, contiguous memory regions required by the
device. This space is not typically managed by GEM, and must
be initialized separately into its own DRM MM object.
</para>
<para>
Initialization will be driver specific, and will depend on
the architecture of the device. In the case of Intel
integrated graphics chips like 965GM, GEM initialization can
be done by calling the internal GEM init function,
i915_gem_do_init(). Since the 965GM is a UMA device
(i.e. it doesn't have dedicated VRAM), GEM will manage
making regular RAM available for GPU operations. Memory set
aside by the BIOS (called "stolen" memory by the i915
driver) will be managed by the DRM memrange allocator; the
rest of the aperture will be managed by GEM.
<programlisting>
/* Basic memrange allocator for stolen space (aka vram) */
drm_memrange_init(&amp;dev_priv->vram, 0, prealloc_size);
/* Let GEM Manage from end of prealloc space to end of aperture */
i915_gem_do_init(dev, prealloc_size, agp_size);
</programlisting>
<!--!Edrivers/char/drm/drm_memrange.c-->
</para>
<para>
Once the memory manager has been set up, we can allocate the
command buffer. In the i915 case, this is also done with a
GEM function, i915_gem_init_ringbuffer().
</para>
</sect3>
</sect2>
<sect2>
<title>Output configuration</title>
<para>
The final initialization task is output configuration. This involves
finding and initializing the CRTCs, encoders and connectors
for your device, creating an initial configuration and
registering a framebuffer console driver.
</para>
<sect3>
<title>Output discovery and initialization</title>
<para>
Several core functions exist to create CRTCs, encoders and
connectors, namely drm_crtc_init(), drm_connector_init() and
drm_encoder_init(), along with several "helper" functions to
perform common tasks.
</para>
<para>
Connectors should be registered with sysfs once they've been
detected and initialized, using the
drm_sysfs_connector_add() function. Likewise, when they're
removed from the system, they should be destroyed with
drm_sysfs_connector_remove().
</para>
<programlisting>
<![CDATA[
void intel_crt_init(struct drm_device *dev)
{
	struct drm_connector *connector;
	struct intel_output *intel_output;

	intel_output = kzalloc(sizeof(struct intel_output), GFP_KERNEL);
	if (!intel_output)
		return;

	connector = &intel_output->base;
	drm_connector_init(dev, &intel_output->base,
			   &intel_crt_connector_funcs, DRM_MODE_CONNECTOR_VGA);

	drm_encoder_init(dev, &intel_output->enc, &intel_crt_enc_funcs,
			 DRM_MODE_ENCODER_DAC);

	drm_mode_connector_attach_encoder(&intel_output->base,
					  &intel_output->enc);

	/* Set up the DDC bus. */
	intel_output->ddc_bus = intel_i2c_create(dev, GPIOA, "CRTDDC_A");
	if (!intel_output->ddc_bus) {
		dev_printk(KERN_ERR, &dev->pdev->dev, "DDC bus registration "
			   "failed.\n");
		return;
	}

	intel_output->type = INTEL_OUTPUT_ANALOG;
	connector->interlace_allowed = 0;
	connector->doublescan_allowed = 0;

	drm_encoder_helper_add(&intel_output->enc, &intel_crt_helper_funcs);
	drm_connector_helper_add(connector, &intel_crt_connector_helper_funcs);

	drm_sysfs_connector_add(connector);
}
]]>
</programlisting>
<para>
In the example above (again, taken from the i915 driver), a
CRT connector and encoder combination is created. A device
specific i2c bus is also created, for fetching EDID data and
performing monitor detection. Once the process is complete,
the new connector is registered with sysfs, to make its
properties available to applications.
</para>
<sect4>
<title>Helper functions and core functions</title>
<para>
Since many PC-class graphics devices have similar display output
designs, the DRM provides a set of helper functions to make
output management easier. The core helper routines handle
encoder re-routing and disabling of unused functions following
mode set. Using the helpers is optional, but recommended for
devices with PC-style architectures (i.e. a set of display planes
for feeding pixels to encoders which are in turn routed to
connectors). Devices with more complex requirements needing
finer grained management can opt to use the core callbacks
directly.
</para>
<para>
[Insert typical diagram here.] [Insert OMAP style config here.]
</para>
</sect4>
<para>
For each encoder, CRTC and connector, several functions must
be provided, depending on the object type. Encoder objects
need to provide a DPMS (basically on/off) function, mode fixup
(for converting requested modes into native hardware timings),
and prepare, set and commit functions for use by the core DRM
helper functions. Connector helpers need to provide mode fetch and
validity functions as well as an encoder matching function for
returning an ideal encoder for a given connector. The core
connector functions include a DPMS callback, (deprecated)
save/restore routines, detection, mode probing, property handling,
and cleanup functions.
</para>
<!--!Edrivers/char/drm/drm_crtc.h-->
<!--!Edrivers/char/drm/drm_crtc.c-->
<!--!Edrivers/char/drm/drm_crtc_helper.c-->
</sect3>
</sect2>
</sect1>
<!-- Internals: vblank handling -->
<sect1>
<title>VBlank event handling</title>
<para>
The DRM core exposes two vertical blank related ioctls:
DRM_IOCTL_WAIT_VBLANK and DRM_IOCTL_MODESET_CTL.
<!--!Edrivers/char/drm/drm_irq.c-->
</para>
<para>
DRM_IOCTL_WAIT_VBLANK takes a struct drm_wait_vblank structure
as its argument, and is used to block or request a signal when a
specified vblank event occurs.
</para>
<para>
DRM_IOCTL_MODESET_CTL should be called by application level
drivers before and after mode setting, since on many devices the
vertical blank counter will be reset at that time. Internally,
the DRM snapshots the last vblank count when the ioctl is called
with the _DRM_PRE_MODESET command so that the counter won't go
backwards (which is dealt with when _DRM_POST_MODESET is used).
</para>
<para>
To support the functions above, the DRM core provides several
helper functions for tracking vertical blank counters, and
requires drivers to provide several callbacks:
get_vblank_counter(), enable_vblank() and disable_vblank(). The
core uses get_vblank_counter() to keep the counter accurate
across interrupt disable periods. It should return the current
vertical blank event count, which is often tracked in a device
register. The enable and disable vblank callbacks should enable
and disable vertical blank interrupts, respectively. In the
absence of DRM clients waiting on vblank events, the core DRM
code will use the disable_vblank() function to disable
interrupts, which saves power. They'll be re-enabled again when
a client calls the vblank wait ioctl above.
</para>
<para>
Devices that don't provide a count register can simply use an
internal atomic counter incremented on every vertical blank
interrupt, and can make their enable and disable vblank
functions into no-ops.
</para>
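<para>
A sketch of the atomic counter approach (the foo_ names are
placeholders; the driver's interrupt handler would do
atomic_inc(&amp;foo_vblank_count) and then call drm_handle_vblank()
for the CRTC in question):
</para>
<programlisting>
static atomic_t foo_vblank_count = ATOMIC_INIT(0);

static u32 foo_get_vblank_counter(struct drm_device *dev, int crtc)
{
	return atomic_read(&amp;foo_vblank_count);
}

static int foo_enable_vblank(struct drm_device *dev, int crtc)
{
	return 0;	/* the vblank interrupt is always enabled */
}

static void foo_disable_vblank(struct drm_device *dev, int crtc)
{
	/* nothing to do */
}
</programlisting>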
</sect1>
<sect1>
<title>Memory management</title>
<para>
The memory manager lies at the heart of many DRM operations, and
is also required to support advanced client features like OpenGL
pbuffers. The DRM currently contains two memory managers, TTM
and GEM.
</para>
<sect2>
<title>The Translation Table Manager (TTM)</title>
<para>
TTM was developed by Tungsten Graphics, primarily by Thomas
Hellström, and is intended to be a flexible, high performance
graphics memory manager.
</para>
<para>
Drivers wishing to support TTM must fill out a drm_bo_driver
structure.
</para>
<para>
TTM design background and information belongs here.
</para>
</sect2>
<sect2>
<title>The Graphics Execution Manager (GEM)</title>
<para>
GEM is an Intel project, authored by Eric Anholt and Keith
Packard. It provides simpler interfaces than TTM, and is well
suited for UMA devices.
</para>
<para>
GEM-enabled drivers must provide gem_init_object() and
gem_free_object() callbacks to support the core memory
allocation routines. They should also provide several driver
specific ioctls to support command execution, pinning, buffer
read &amp; write, mapping, and domain ownership transfers.
</para>
<para>
On a fundamental level, GEM involves several operations: memory
allocation and freeing, command execution, and aperture management
at command execution time. Buffer object allocation is relatively
straightforward and largely provided by Linux's shmem layer, which
provides memory to back each object. When mapped into the GTT
or used in a command buffer, the backing pages for an object are
flushed to memory and marked write combined so as to be coherent
with the GPU. Likewise, when the GPU finishes rendering to an object,
if the CPU accesses it, it must be made coherent with the CPU's view
of memory, usually involving GPU cache flushing of various kinds.
This core CPU&lt;-&gt;GPU coherency management is provided by the GEM
set domain function, which evaluates an object's current domain and
performs any necessary flushing or synchronization to put the object
into the desired coherency domain (note that the object may be busy,
i.e. an active render target; in that case the set domain function
will block the client and wait for rendering to complete before
performing any necessary flushing operations).
</para>
<para>
Perhaps the most important GEM function is providing a command
execution interface to clients. Client programs construct command
buffers containing references to previously allocated memory objects
and submit them to GEM. At that point, GEM will take care to bind
all the objects into the GTT, execute the buffer, and provide
necessary synchronization between clients accessing the same buffers.
This often involves evicting some objects from the GTT and re-binding
others (a fairly expensive operation), and providing relocation
support which hides fixed GTT offsets from clients. Clients must
take care not to submit command buffers that reference more objects
than can fit in the GTT or GEM will reject them and no rendering
will occur. Similarly, if several objects in the buffer require
fence registers to be allocated for correct rendering (e.g. 2D blits
on pre-965 chips), care must be taken not to require more fence
registers than are available to the client. Such resource management
should be abstracted from the client in libdrm.
</para>
</sect2>
</sect1>
<!-- Output management -->
<sect1>
<title>Output management</title>
<para>
At the core of the DRM output management code is a set of
structures representing CRTCs, encoders and connectors.
</para>
<para>
A CRTC is an abstraction representing a part of the chip that
contains a pointer to a scanout buffer. Therefore, the number
of CRTCs available determines how many independent scanout
buffers can be active at any given time. The CRTC structure
contains several fields to support this: a pointer to some video
memory, a display mode, and an (x, y) offset into the video
memory to support panning or configurations where one piece of
video memory spans multiple CRTCs.
</para>
<para>
An encoder takes pixel data from a CRTC and converts it to a
format suitable for any attached connectors. On some devices,
it may be possible to have a CRTC send data to more than one
encoder. In that case, both encoders would receive data from
the same scanout buffer, resulting in a "cloned" display
configuration across the connectors attached to each encoder.
</para>
<para>
A connector is the final destination for pixel data on a device,
and usually connects directly to an external display device like
a monitor or laptop panel. A connector can only be attached to
one encoder at a time. The connector is also the structure
where information about the attached display is kept, so it
contains fields for display data, EDID data, DPMS &amp;
connection status, and information about modes supported on the
attached displays.
</para>
<!--!Edrivers/char/drm/drm_crtc.c-->
</sect1>
<sect1>
<title>Framebuffer management</title>
<para>
In order to set a mode on a given CRTC, encoder and connector
configuration, clients need to provide a framebuffer object which
will provide a source of pixels for the CRTC to deliver to the encoder(s)
and ultimately the connector(s) in the configuration. A framebuffer
is fundamentally a driver specific memory object, made into an opaque
handle by the DRM addfb function. Once an fb has been created this
way it can be passed to the KMS mode setting routines for use in
a configuration.
</para>
</sect1>
<sect1>
<title>Command submission &amp; fencing</title>
<para>
This should cover a few device specific command submission
implementations.
</para>
</sect1>
<sect1>
<title>Suspend/resume</title>
<para>
The DRM core provides some suspend/resume code, but drivers
wanting full suspend/resume support should provide save() and
restore() functions. These will be called at suspend,
hibernate, or resume time, and should perform any state save or
restore required by your device across suspend or hibernate
states.
</para>
</sect1>
<sect1>
<title>DMA services</title>
<para>
This should cover how DMA mapping etc. is supported by the core.
These functions are deprecated and should not be used.
</para>
</sect1>
</chapter>
<!-- External interfaces -->
<chapter id="drmExternals">
<title>Userland interfaces</title>
<para>
The DRM core exports several interfaces to applications,
generally intended to be used through corresponding libdrm
wrapper functions. In addition, drivers export device specific
interfaces for use by userspace drivers &amp; device aware
applications through ioctls and sysfs files.
</para>
<para>
External interfaces include: memory mapping, context management,
DMA operations, AGP management, vblank control, fence
management, memory management, and output management.
</para>
<para>
Cover generic ioctls and sysfs layout here. Only need high
level info, since man pages will cover the rest.
</para>
</chapter>
<!-- API reference -->
<appendix id="drmDriverApi">
<title>DRM Driver API</title>
<para>
Include auto-generated API reference here (need to reference it
from paragraphs above too).
</para>
</appendix>
</book>

View File

@ -4,7 +4,7 @@
<book id="kgdbOnLinux">
<bookinfo>
<title>Using kgdb and the kgdb Internals</title>
<title>Using kgdb, kdb and the kernel debugger internals</title>
<authorgroup>
<author>
@ -17,33 +17,8 @@
</affiliation>
</author>
</authorgroup>
<authorgroup>
<author>
<firstname>Tom</firstname>
<surname>Rini</surname>
<affiliation>
<address>
<email>trini@kernel.crashing.org</email>
</address>
</affiliation>
</author>
</authorgroup>
<authorgroup>
<author>
<firstname>Amit S.</firstname>
<surname>Kale</surname>
<affiliation>
<address>
<email>amitkale@linsyssoft.com</email>
</address>
</affiliation>
</author>
</authorgroup>
<copyright>
<year>2008</year>
<year>2008,2010</year>
<holder>Wind River Systems, Inc.</holder>
</copyright>
<copyright>
@ -69,41 +44,76 @@
<chapter id="Introduction">
<title>Introduction</title>
<para>
kgdb is a source level debugger for linux kernel. It is used along
with gdb to debug a linux kernel. The expectation is that gdb can
be used to "break in" to the kernel to inspect memory, variables
and look through call stack information similar to what an
application developer would use gdb for. It is possible to place
breakpoints in kernel code and perform some limited execution
stepping.
The kernel has two different debugger front ends (kdb and kgdb)
which interface to the debug core. It is possible to use either
of the debugger front ends and dynamically transition between them
if you configure the kernel properly at compile and runtime.
</para>
<para>
Two machines are required for using kgdb. One of these machines is a
development machine and the other is a test machine. The kernel
to be debugged runs on the test machine. The development machine
runs an instance of gdb against the vmlinux file which contains
the symbols (not boot image such as bzImage, zImage, uImage...).
In gdb the developer specifies the connection parameters and
connects to kgdb. The type of connection a developer makes with
gdb depends on the availability of kgdb I/O modules compiled as
builtin's or kernel modules in the test machine's kernel.
Kdb is a simplistic shell-style interface which you can use on a
system console with a keyboard or serial console. You can use it
to inspect memory, registers, process lists, dmesg, and even set
breakpoints to stop in a certain location. Kdb is not a source
level debugger, although you can set breakpoints and execute some
basic kernel run control. Kdb is mainly aimed at doing some
analysis to aid in development or diagnosing kernel problems. You
can access some symbols by name in kernel built-ins or in kernel
modules if the code was built
with <symbol>CONFIG_KALLSYMS</symbol>.
</para>
<para>
Kgdb is intended to be used as a source level debugger for the
Linux kernel. It is used along with gdb to debug a Linux kernel.
The expectation is that gdb can be used to "break in" to the
kernel to inspect memory, variables and look through call stack
information similar to the way an application developer would use
gdb to debug an application. It is possible to place breakpoints
in kernel code and perform some limited execution stepping.
</para>
<para>
Two machines are required for using kgdb. One of these machines is
a development machine and the other is the target machine. The
kernel to be debugged runs on the target machine. The development
machine runs an instance of gdb against the vmlinux file which
contains the symbols (not boot image such as bzImage, zImage,
uImage...). In gdb the developer specifies the connection
parameters and connects to kgdb. The type of connection a
developer makes with gdb depends on the availability of kgdb I/O
modules compiled as built-ins or loadable kernel modules in the target
machine's kernel.
</para>
</chapter>
<chapter id="CompilingAKernel">
<title>Compiling a kernel</title>
<title>Compiling a kernel</title>
<para>
<itemizedlist>
<listitem><para>In order to enable compilation of kdb, you must first enable kgdb.</para></listitem>
<listitem><para>The kgdb test compile options are described in the kgdb test suite chapter.</para></listitem>
</itemizedlist>
</para>
<sect1 id="CompileKGDB">
<title>Kernel config options for kgdb</title>
<para>
To enable <symbol>CONFIG_KGDB</symbol> you should first turn on
"Prompt for development and/or incomplete code/drivers"
(CONFIG_EXPERIMENTAL) in "General setup", then under the
"Kernel debugging" select "KGDB: kernel debugging with remote gdb".
"Kernel debugging" select "KGDB: kernel debugger".
</para>
<para>
While it is not a hard requirement that you have symbols in your
vmlinux file, gdb tends not to be very useful without the symbolic
data, so you will want to turn
on <symbol>CONFIG_DEBUG_INFO</symbol> which is called "Compile the
kernel with debug info" in the config menu.
</para>
<para>
It is advised, but not required that you turn on the
CONFIG_FRAME_POINTER kernel option. This option inserts code to
into the compiled executable which saves the frame information in
registers or on the stack at different points which will allow a
debugger such as gdb to more accurately construct stack back traces
while debugging the kernel.
<symbol>CONFIG_FRAME_POINTER</symbol> kernel option which is called "Compile the
kernel with frame pointers" in the config menu. This option
inserts code into the compiled executable which saves the frame
information in registers or on the stack at different points which
allows a debugger such as gdb to more accurately construct
stack back traces while debugging the kernel.
</para>
<para>
If the architecture that you are using supports the kernel option
@ -116,38 +126,160 @@
this option.
</para>
<para>
Next you should choose one of more I/O drivers to interconnect debugging
host and debugged target. Early boot debugging requires a KGDB
I/O driver that supports early debugging and the driver must be
built into the kernel directly. Kgdb I/O driver configuration
takes place via kernel or module parameters, see following
chapter.
Next you should choose one or more I/O drivers to interconnect the
debugging host and debugged target.  Early boot debugging requires
a KGDB I/O driver that supports early debugging and the driver
must be built into the kernel directly.  Kgdb I/O driver
configuration takes place via kernel or module parameters which
you can learn more about in the section that describes the
parameter "kgdboc".
</para>
<para>
The kgdb test compile options are described in the kgdb test suite chapter.
<para>Here is an example set of .config symbols to enable or
disable for kgdb:
<itemizedlist>
<listitem><para># CONFIG_DEBUG_RODATA is not set</para></listitem>
<listitem><para>CONFIG_FRAME_POINTER=y</para></listitem>
<listitem><para>CONFIG_KGDB=y</para></listitem>
<listitem><para>CONFIG_KGDB_SERIAL_CONSOLE=y</para></listitem>
</itemizedlist>
</para>
</sect1>
<sect1 id="CompileKDB">
<title>Kernel config options for kdb</title>
<para>Kdb is quite a bit more complex than the simple gdbstub
sitting on top of the kernel's debug core. Kdb must implement a
shell, and also adds some helper functions in other parts of the
kernel, responsible for printing out interesting data such as what
you would see if you ran "lsmod", or "ps". In order to build kdb
into the kernel you follow the same steps as you would for kgdb.
</para>
<para>The main config option for kdb
is <symbol>CONFIG_KGDB_KDB</symbol> which is called "KGDB_KDB:
include kdb frontend for kgdb" in the config menu. In theory you
would have already also selected an I/O driver such as the
CONFIG_KGDB_SERIAL_CONSOLE interface if you plan on using kdb on a
serial port, when you were configuring kgdb.
</para>
<para>If you want to use a PS/2-style keyboard with kdb, you would
select CONFIG_KDB_KEYBOARD which is called "KGDB_KDB: keyboard as
input device" in the config menu. The CONFIG_KDB_KEYBOARD option
is not used for anything in the gdb interface to kgdb. The
CONFIG_KDB_KEYBOARD option only works with kdb.
</para>
<para>Here is an example set of .config symbols to enable/disable kdb:
<itemizedlist>
<listitem><para># CONFIG_DEBUG_RODATA is not set</para></listitem>
<listitem><para>CONFIG_FRAME_POINTER=y</para></listitem>
<listitem><para>CONFIG_KGDB=y</para></listitem>
<listitem><para>CONFIG_KGDB_SERIAL_CONSOLE=y</para></listitem>
<listitem><para>CONFIG_KGDB_KDB=y</para></listitem>
<listitem><para>CONFIG_KDB_KEYBOARD=y</para></listitem>
</itemizedlist>
</para>
</sect1>
</chapter>
<chapter id="EnableKGDB">
<title>Enable kgdb for debugging</title>
<para>
In order to use kgdb you must activate it by passing configuration
information to one of the kgdb I/O drivers. If you do not pass any
configuration information kgdb will not do anything at all. Kgdb
will only actively hook up to the kernel trap hooks if a kgdb I/O
driver is loaded and configured. If you unconfigure a kgdb I/O
driver, kgdb will unregister all the kernel hook points.
<chapter id="kgdbKernelArgs">
<title>Kernel Debugger Boot Arguments</title>
<para>This section describes the various runtime kernel
parameters that affect the configuration of the kernel debugger.
The following chapter covers using kdb and kgdb and provides some
examples of the configuration parameters.</para>
<sect1 id="kgdboc">
<title>Kernel parameter: kgdboc</title>
<para>The kgdboc driver was originally an abbreviation meant to
stand for "kgdb over console". Today it is the primary mechanism
to configure how to communicate from gdb to kgdb as well as the
devices you want to use to interact with the kdb shell.
</para>
<para>
All drivers can be reconfigured at run time, if
<symbol>CONFIG_SYSFS</symbol> and <symbol>CONFIG_MODULES</symbol>
are enabled, by echo'ing a new config string to
<constant>/sys/module/&lt;driver&gt;/parameter/&lt;option&gt;</constant>.
The driver can be unconfigured by passing an empty string. You cannot
change the configuration while the debugger is attached. Make sure
to detach the debugger with the <constant>detach</constant> command
prior to trying to unconfigure a kgdb I/O driver.
<para>For kgdb/gdb, kgdboc is designed to work with a single serial
port. It is intended to cover the circumstance where you want to
use a serial console as your primary console as well as using it to
perform kernel debugging. It is also possible to use kgdb on a
serial port which is not designated as a system console. Kgdboc
may be configured as a kernel built-in or a kernel loadable module.
You can only make use of <constant>kgdbwait</constant> and early
debugging if you build kgdboc into the kernel as a built-in.
</para>
<sect2 id="kgdbocArgs">
<title>kgdboc arguments</title>
<para>Usage: <constant>kgdboc=[kbd][[,]serial_device][,baud]</constant></para>
<sect3 id="kgdbocArgs1">
<title>Using loadable module or built-in</title>
<para>
<orderedlist>
<listitem><para>As a kernel built-in:</para>
<para>Use the kernel boot argument: <constant>kgdboc=&lt;tty-device&gt;,[baud]</constant></para></listitem>
<listitem>
<para>As a kernel loadable module:</para>
<para>Use the command: <constant>modprobe kgdboc kgdboc=&lt;tty-device&gt;,[baud]</constant></para>
<para>Here are two examples of how you might format the kgdboc
string. The first is for an x86 target using the first serial port.
The second example is for the ARM Versatile AB using the second
serial port.
<orderedlist>
<listitem><para><constant>kgdboc=ttyS0,115200</constant></para></listitem>
<listitem><para><constant>kgdboc=ttyAMA1,115200</constant></para></listitem>
</orderedlist>
</para>
</listitem>
</orderedlist></para>
</sect3>
<sect3 id="kgdbocArgs2">
<title>Configure kgdboc at runtime with sysfs</title>
<para>At run time you can enable or disable kgdboc by echoing
parameters into sysfs.  Here are two examples:</para>
<orderedlist>
<listitem><para>Enable kgdboc on ttyS0</para>
<para><constant>echo ttyS0 &gt; /sys/module/kgdboc/parameters/kgdboc</constant></para></listitem>
<listitem><para>Disable kgdboc</para>
<para><constant>echo "" &gt; /sys/module/kgdboc/parameters/kgdboc</constant></para></listitem>
</orderedlist>
<para>NOTE: You do not need to specify the baud if you are
configuring the console on a tty which is already configured or
open.</para>
</sect3>
<sect3 id="kgdbocArgs3">
<title>More examples</title>
<para>You can configure kgdboc to use the keyboard, and/or a serial
device, depending on whether you are using kdb and/or kgdb, in one
of the following scenarios.
<orderedlist>
<listitem><para>kdb and kgdb over only a serial port</para>
<para><constant>kgdboc=&lt;serial_device&gt;[,baud]</constant></para>
<para>Example: <constant>kgdboc=ttyS0,115200</constant></para>
</listitem>
<listitem><para>kdb and kgdb with keyboard and a serial port</para>
<para><constant>kgdboc=kbd,&lt;serial_device&gt;[,baud]</constant></para>
<para>Example: <constant>kgdboc=kbd,ttyS0,115200</constant></para>
</listitem>
<listitem><para>kdb with a keyboard</para>
<para><constant>kgdboc=kbd</constant></para>
</listitem>
</orderedlist>
</para>
</sect3>
<para>NOTE: Kgdboc does not support interrupting the target via the
gdb remote protocol. You must manually send a sysrq-g unless you
have a proxy that splits console output to a terminal program.
A console proxy has a separate TCP port for the debugger and a separate
TCP port for the "human" console. The proxy can take care of sending
the sysrq-g for you.
</para>
<para>When using kgdboc with no debugger proxy, you can end up
connecting the debugger at one of two entry points. If an
exception occurs after you have loaded kgdboc, a message should
print on the console stating it is waiting for the debugger. In
this case you disconnect your terminal program and then connect the
debugger in its place. If you want to interrupt the target system
and forcibly enter a debug session you have to issue a Sysrq
sequence and then type the letter <constant>g</constant>. Then
you disconnect the terminal session and connect gdb. Your options
if you don't like this are to hack gdb to send the sysrq-g for you
as well as on the initial connect, or to use a debugger proxy that
allows an unmodified gdb to do the debugging.
</para>
</sect2>
</sect1>
<sect1 id="kgdbwait">
<title>Kernel parameter: kgdbwait</title>
<para>
@ -162,103 +294,204 @@
</para>
<para>
The kernel will stop and wait as early as the I/O driver and
architecture will allow when you use this option. If you build the
kgdb I/O driver as a kernel module kgdbwait will not do anything.
architecture allows when you use this option. If you build the
kgdb I/O driver as a loadable kernel module kgdbwait will not do
anything.
</para>
</sect1>
<sect1 id="kgdboc">
<title>Kernel parameter: kgdboc</title>
<para>
The kgdboc driver was originally an abbreviation meant to stand for
"kgdb over console". Kgdboc is designed to work with a single
serial port. It was meant to cover the circumstance
where you wanted to use a serial console as your primary console as
well as using it to perform kernel debugging. Of course you can
also use kgdboc without assigning a console to the same port.
</para>
<sect2 id="UsingKgdboc">
<title>Using kgdboc</title>
<para>
You can configure kgdboc via sysfs or a module or kernel boot line
parameter depending on if you build with CONFIG_KGDBOC as a module
or built-in.
<orderedlist>
<listitem><para>From the module load or build-in</para>
<para><constant>kgdboc=&lt;tty-device&gt;,[baud]</constant></para>
<para>
The example here would be if your console port was typically ttyS0, you would use something like <constant>kgdboc=ttyS0,115200</constant> or on the ARM Versatile AB you would likely use <constant>kgdboc=ttyAMA0,115200</constant>
</para>
</listitem>
<listitem><para>From sysfs</para>
<para><constant>echo ttyS0 &gt; /sys/module/kgdboc/parameters/kgdboc</constant></para>
</listitem>
</orderedlist>
</para>
<para>
NOTE: Kgdboc does not support interrupting the target via the
gdb remote protocol. You must manually send a sysrq-g unless you
have a proxy that splits console output to a terminal problem and
has a separate port for the debugger to connect to that sends the
sysrq-g for you.
</para>
<para>When using kgdboc with no debugger proxy, you can end up
connecting the debugger for one of two entry points. If an
exception occurs after you have loaded kgdboc a message should print
on the console stating it is waiting for the debugger. In case you
disconnect your terminal program and then connect the debugger in
its place. If you want to interrupt the target system and forcibly
enter a debug session you have to issue a Sysrq sequence and then
type the letter <constant>g</constant>. Then you disconnect the
terminal session and connect gdb. Your options if you don't like
this are to hack gdb to send the sysrq-g for you as well as on the
initial connect, or to use a debugger proxy that allows an
unmodified gdb to do the debugging.
</para>
</sect2>
</sect1>
<sect1 id="kgdbcon">
<title>Kernel parameter: kgdbcon</title>
<para>
Kgdb supports using the gdb serial protocol to send console messages
to the debugger when the debugger is connected and running. There
are two ways to activate this feature.
<orderedlist>
<listitem><para>Activate with the kernel command line option:</para>
<para><constant>kgdbcon</constant></para>
</listitem>
<listitem><para>Use sysfs before configuring an io driver</para>
<para>
<constant>echo 1 &gt; /sys/module/kgdb/parameters/kgdb_use_con</constant>
</para>
<para>
NOTE: If you do this after you configure the kgdb I/O driver, the
setting will not take effect until the next point the I/O is
reconfigured.
</para>
</listitem>
</orderedlist>
</para>
<para>
IMPORTANT NOTE: Using this option with kgdb over the console
(kgdboc) is not supported.
<sect1 id="kgdbcon">
<title>Kernel parameter: kgdbcon</title>
<para> The kgdbcon feature allows you to see printk() messages
inside gdb while gdb is connected to the kernel. Kdb does not make
use of the kgdbcon feature.
</para>
<para>Kgdb supports using the gdb serial protocol to send console
messages to the debugger when the debugger is connected and running.
There are two ways to activate this feature.
<orderedlist>
<listitem><para>Activate with the kernel command line option:</para>
<para><constant>kgdbcon</constant></para>
</listitem>
<listitem><para>Use sysfs before configuring an I/O driver</para>
<para>
<constant>echo 1 &gt; /sys/module/kgdb/parameters/kgdb_use_con</constant>
</para>
<para>
NOTE: If you do this after you configure the kgdb I/O driver, the
setting will not take effect until the next time the I/O driver is
reconfigured.
</para>
</listitem>
</orderedlist>
</para>
<para>IMPORTANT NOTE: You cannot use kgdboc + kgdbcon on a tty that is an
active system console. An example of incorrect usage is <constant>console=ttyS0,115200 kgdboc=ttyS0 kgdbcon</constant>
</para>
<para>It is possible to use this option with kgdboc on a tty that is not a system console.
</para>
</sect1>
</chapter>
<chapter id="usingKDB">
<title>Using kdb</title>
<sect1 id="quickKDBserial">
<title>Quick start for kdb on a serial port</title>
<para>This is a quick example of how to use kdb.</para>
<para><orderedlist>
<listitem><para>Boot kernel with arguments:
<itemizedlist>
<listitem><para><constant>console=ttyS0,115200 kgdboc=ttyS0,115200</constant></para></listitem>
</itemizedlist></para>
<para>OR</para>
<para>Configure kgdboc after the kernel booted; assuming you are using a serial port console:
<itemizedlist>
<listitem><para><constant>echo ttyS0 &gt; /sys/module/kgdboc/parameters/kgdboc</constant></para></listitem>
</itemizedlist>
</para>
</listitem>
<listitem><para>Enter the kernel debugger manually or by waiting for an oops or fault. There are several ways you can enter the kernel debugger manually; all involve using the sysrq-g, which means you must have enabled CONFIG_MAGIC_SYSRQ=y in your kernel config.</para>
<itemizedlist>
<listitem><para>When logged in as root or with a super user session you can run:</para>
<para><constant>echo g &gt; /proc/sysrq-trigger</constant></para></listitem>
<listitem><para>Example using minicom 2.2</para>
<para>Press: <constant>Control-a</constant></para>
<para>Press: <constant>f</constant></para>
<para>Press: <constant>g</constant></para>
</listitem>
<listitem><para>When you have telneted to a terminal server that supports sending a remote break</para>
<para>Press: <constant>Control-]</constant></para>
<para>Type in:<constant>send break</constant></para>
<para>Press: <constant>Enter</constant></para>
<para>Press: <constant>g</constant></para>
</listitem>
</itemizedlist>
</listitem>
<listitem><para>From the kdb prompt you can run the "help" command to see a complete list of the commands that are available.</para>
<para>Some useful commands in kdb include:
<itemizedlist>
<listitem><para>lsmod -- Shows where kernel modules are loaded</para></listitem>
<listitem><para>ps -- Displays only the active processes</para></listitem>
<listitem><para>ps A -- Shows all the processes</para></listitem>
<listitem><para>summary -- Shows kernel version info and memory usage</para></listitem>
<listitem><para>bt -- Get a backtrace of the current process using dump_stack()</para></listitem>
<listitem><para>dmesg -- View the kernel syslog buffer</para></listitem>
<listitem><para>go -- Continue the system</para></listitem>
</itemizedlist>
</para>
</listitem>
<listitem>
<para>When you are done using kdb you need to consider rebooting the
system or using the "go" command to resume normal kernel
execution. If you have paused the kernel for a lengthy period of
time, applications that rely on timely networking or anything to do
with real wall clock time could be adversely affected, so you
should take this into consideration when using the kernel
debugger.</para>
</listitem>
</orderedlist></para>
</sect1>
<sect1 id="quickKDBkeyboard">
<title>Quick start for kdb using a keyboard connected console</title>
<para>This is a quick example of how to use kdb with a keyboard.</para>
<para><orderedlist>
<listitem><para>Boot kernel with arguments:
<itemizedlist>
<listitem><para><constant>kgdboc=kbd</constant></para></listitem>
</itemizedlist></para>
<para>OR</para>
<para>Configure kgdboc after the kernel booted:
<itemizedlist>
<listitem><para><constant>echo kbd &gt; /sys/module/kgdboc/parameters/kgdboc</constant></para></listitem>
</itemizedlist>
</para>
</listitem>
<listitem><para>Enter the kernel debugger manually or by waiting for an oops or fault. There are several ways you can enter the kernel debugger manually; all involve using the sysrq-g, which means you must have enabled CONFIG_MAGIC_SYSRQ=y in your kernel config.</para>
<itemizedlist>
<listitem><para>When logged in as root or with a super user session you can run:</para>
<para><constant>echo g &gt; /proc/sysrq-trigger</constant></para></listitem>
<listitem><para>Example using a laptop keyboard</para>
<para>Press and hold down: <constant>Alt</constant></para>
<para>Press and hold down: <constant>Fn</constant></para>
<para>Press and release the key with the label: <constant>SysRq</constant></para>
<para>Release: <constant>Fn</constant></para>
<para>Press and release: <constant>g</constant></para>
<para>Release: <constant>Alt</constant></para>
</listitem>
<listitem><para>Example using a PS/2 101-key keyboard</para>
<para>Press and hold down: <constant>Alt</constant></para>
<para>Press and release the key with the label: <constant>SysRq</constant></para>
<para>Press and release: <constant>g</constant></para>
<para>Release: <constant>Alt</constant></para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>Now type in a kdb command such as "help", "dmesg", "bt" or "go" to continue kernel execution.</para>
</listitem>
</orderedlist></para>
</sect1>
</chapter>
<chapter id="EnableKGDB">
<title>Using kgdb / gdb</title>
<para>In order to use kgdb you must activate it by passing
configuration information to one of the kgdb I/O drivers. If you
do not pass any configuration information kgdb will not do anything
at all. Kgdb will only actively hook up to the kernel trap hooks
if a kgdb I/O driver is loaded and configured. If you unconfigure
a kgdb I/O driver, kgdb will unregister all the kernel hook points.
</para>
<para> All kgdb I/O drivers can be reconfigured at run time, if
<symbol>CONFIG_SYSFS</symbol> and <symbol>CONFIG_MODULES</symbol>
are enabled, by echo'ing a new config string to
<constant>/sys/module/&lt;driver&gt;/parameters/&lt;option&gt;</constant>.
The driver can be unconfigured by passing an empty string. You cannot
change the configuration while the debugger is attached. Make sure
to detach the debugger with the <constant>detach</constant> command
prior to trying to unconfigure a kgdb I/O driver.
</para>
<sect1 id="ConnectingGDB">
<title>Connecting with gdb to a serial port</title>
<orderedlist>
<listitem><para>Configure kgdboc</para>
<para>Boot kernel with arguments:
<itemizedlist>
<listitem><para><constant>kgdboc=ttyS0,115200</constant></para></listitem>
</itemizedlist></para>
<para>OR</para>
<para>Configure kgdboc after the kernel booted:
<itemizedlist>
<listitem><para><constant>echo ttyS0 &gt; /sys/module/kgdboc/parameters/kgdboc</constant></para></listitem>
</itemizedlist></para>
</listitem>
<listitem>
<para>Stop kernel execution (break into the debugger)</para>
<para>In order to connect to gdb via kgdboc, the kernel must
first be stopped. There are several ways to stop the kernel which
include using kgdbwait as a boot argument, via a sysrq-g, or running
the kernel until it takes an exception where it waits for the
debugger to attach.
<itemizedlist>
<listitem><para>When logged in as root or with a super user session you can run:</para>
<para><constant>echo g &gt; /proc/sysrq-trigger</constant></para></listitem>
<listitem><para>Example using minicom 2.2</para>
<para>Press: <constant>Control-a</constant></para>
<para>Press: <constant>f</constant></para>
<para>Press: <constant>g</constant></para>
</listitem>
<listitem><para>When you have telneted to a terminal server that supports sending a remote break</para>
<para>Press: <constant>Control-]</constant></para>
<para>Type in:<constant>send break</constant></para>
<para>Press: <constant>Enter</constant></para>
<para>Press: <constant>g</constant></para>
</listitem>
</itemizedlist>
</para>
</listitem>
<listitem>
<para>Connect from gdb</para>
<para>
If you are using kgdboc, you need to have used kgdbwait as a boot
argument, issued a sysrq-g, or the system you are going to debug
has already taken an exception and is waiting for the debugger to
attach before you can connect gdb.
</para>
<para>
If you are using a kgdb I/O driver other than kgdboc,
you should be able to connect and the target will automatically
respond.
</para>
<para>
Example (using a directly connected port):
</para>
<programlisting>
% gdb ./vmlinux
@ -266,7 +499,7 @@
(gdb) target remote /dev/ttyS0
</programlisting>
<para>
Example (kgdb to a terminal server on tcp port 2012):
Example (kgdb to a terminal server on TCP port 2012):
</para>
<programlisting>
% gdb ./vmlinux
@ -283,6 +516,83 @@
communications. You do this prior to issuing the <constant>target
remote</constant> command by typing in: <constant>set debug remote 1</constant>
</para>
</listitem>
</orderedlist>
<para>Remember if you continue in gdb, and need to "break in" again,
you need to issue another sysrq-g. It is easy to create a simple
entry point by putting a breakpoint at <constant>sys_sync</constant>
and then you can run "sync" from a shell or script to break into the
debugger.</para>
</sect1>
</chapter>
<chapter id="switchKdbKgdb">
<title>kgdb and kdb interoperability</title>
<para>It is possible to transition between kdb and kgdb dynamically.
The debug core will remember which mode you used the last time and
automatically start in the same mode.</para>
<sect1>
<title>Switching between kdb and kgdb</title>
<sect2>
<title>Switching from kgdb to kdb</title>
<para>
There are two ways to switch from kgdb to kdb: you can use gdb to
issue a maintenance packet, or you can blindly type the command $3#33.
Whenever the kernel debugger stops in kgdb mode it will print the
message <constant>KGDB or $3#33 for KDB</constant>. It is important
to note that you have to type the sequence correctly in one pass.
You cannot type a backspace or delete because kgdb will interpret
that as part of the debug stream.
<orderedlist>
<listitem><para>Change from kgdb to kdb by blindly typing:</para>
<para><constant>$3#33</constant></para></listitem>
<listitem><para>Change from kgdb to kdb with gdb</para>
<para><constant>maintenance packet 3</constant></para>
<para>NOTE: Now you must kill gdb. Typically you press control-z and
issue the command: kill -9 %</para></listitem>
</orderedlist>
</para>
</sect2>
<sect2>
<title>Change from kdb to kgdb</title>
<para>There are two ways you can change from kdb to kgdb. You can
manually enter kgdb mode by issuing the kgdb command from the kdb
shell prompt, or you can connect gdb while the kdb shell prompt is
active. The kdb shell looks for the typical first commands that gdb
would issue with the gdb remote protocol and if it sees one of those
commands it automatically changes into kgdb mode.</para>
<orderedlist>
<listitem><para>From kdb issue the command:</para>
<para><constant>kgdb</constant></para>
<para>Now disconnect your terminal program and connect gdb in its place</para></listitem>
<listitem><para>At the kdb prompt, disconnect the terminal program and connect gdb in its place.</para></listitem>
</orderedlist>
</sect2>
</sect1>
<sect1>
<title>Running kdb commands from gdb</title>
<para>It is possible to run a limited set of kdb commands from gdb,
using the gdb monitor command. You don't want to execute any of the
run control or breakpoint operations, because doing so can disrupt the
state of the kernel debugger. You should be using gdb for
breakpoints and run control operations if you have gdb connected.
The more useful commands to run are things like lsmod, dmesg, ps or
possibly some of the memory information commands. To see all the kdb
commands you can run <constant>monitor help</constant>.</para>
<para>Example:
<informalexample><programlisting>
(gdb) monitor ps
1 idle process (state I) and
27 sleeping system daemon (state M) processes suppressed,
use 'ps A' to see all.
Task Addr Pid Parent [*] cpu State Thread Command
0xc78291d0 1 0 0 0 S 0xc7829404 init
0xc7954150 942 1 0 0 S 0xc7954384 dropbear
0xc78789c0 944 1 0 0 S 0xc7878bf4 sh
(gdb)
</programlisting></informalexample>
</para>
</sect1>
</chapter>
<chapter id="KGDBTestSuite">
<title>kgdb Test Suite</title>
@ -309,34 +619,36 @@
</para>
</chapter>
<chapter id="CommonBackEndReq">
<title>Kernel Debugger Internals</title>
<sect1 id="kgdbArchitecture">
<title>Architecture Specifics</title>
<para>
The kernel debugger is organized into a number of components:
<orderedlist>
<listitem><para>The debug core</para>
<para>
The debug core is found in kernel/debug/debug_core.c. It contains:
<itemizedlist>
<listitem><para>A generic OS exception handler which includes
sync'ing the processors into a stopped state on a multi-CPU
system.</para></listitem>
<listitem><para>The API to talk to the kgdb I/O drivers</para></listitem>
<listitem><para>The API to make calls to the arch-specific kgdb implementation</para></listitem>
<listitem><para>The logic to perform safe memory reads and writes to memory while using the debugger</para></listitem>
<listitem><para>A full implementation for software breakpoints unless overridden by the arch</para></listitem>
<listitem><para>The API to invoke either the kdb or kgdb frontend to the debug core.</para></listitem>
</itemizedlist>
</para>
</listitem>
<listitem><para>kgdb arch-specific implementation</para>
<para>
This implementation is generally found in arch/*/kernel/kgdb.c.
As an example, arch/x86/kernel/kgdb.c contains the specifics to
implement HW breakpoints as well as the initialization to
dynamically register and unregister for the trap handlers on
this architecture. The arch-specific portion implements:
<itemizedlist>
<listitem><para>contains an arch-specific trap catcher which
invokes kgdb_handle_exception() to start kgdb doing its
work</para></listitem>
<listitem><para>translation between the gdb-specific packet format and pt_regs</para></listitem>
@ -347,11 +659,35 @@
</itemizedlist>
</para>
</listitem>
<listitem><para>gdbstub frontend (aka kgdb)</para>
<para>The gdbstub is located in kernel/debug/gdbstub.c. It contains:</para>
<itemizedlist>
<listitem><para>All the logic to implement the gdb serial protocol</para></listitem>
</itemizedlist>
</listitem>
<listitem><para>kdb frontend</para>
<para>The kdb debugger shell is broken down into a number of
components. The kdb core is located in kernel/debug/kdb. There
are a number of helper functions in some of the other kernel
components to make it possible for kdb to examine and report
information about the kernel without taking locks that could
cause a kernel deadlock. The kdb core implements the following functionality.</para>
<itemizedlist>
<listitem><para>A simple shell</para></listitem>
<listitem><para>The kdb core command set</para></listitem>
<listitem><para>A registration API to register additional kdb shell commands.</para>
<para>A good example of a self-contained kdb module is the "ftdump" command for dumping the ftrace buffer. See: kernel/trace/trace_kdb.c</para></listitem>
<listitem><para>The implementation for kdb_printf() which
emits messages directly to I/O drivers, bypassing the kernel
log.</para></listitem>
<listitem><para>SW / HW breakpoint management for the kdb shell</para></listitem>
</itemizedlist>
</listitem>
<listitem><para>kgdb I/O driver</para>
<para>
Each kgdb I/O driver has to provide an implementation for the following:
<itemizedlist>
<listitem><para>configuration via built-in or module</para></listitem>
<listitem><para>dynamic configuration and kgdb hook registration calls</para></listitem>
<listitem><para>read and write character interface</para></listitem>
<listitem><para>A cleanup handler for unconfiguring from the kgdb core</para></listitem>
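<para>
For illustration only (this sketch is not taken from the kernel
sources), a minimal polled I/O driver boils down to filling in a
<constant>struct kgdb_io</constant> from linux/kgdb.h and registering
it with the debug core; my_getchar() and my_putchar() below are
hypothetical hardware accessors:
<programlisting>
static int my_kgdb_read_char(void)
{
	return my_getchar();	/* hypothetical polled read, -1 if no data */
}

static void my_kgdb_write_char(u8 chr)
{
	my_putchar(chr);	/* hypothetical busy-waiting polled write */
}

static struct kgdb_io my_kgdb_io_ops = {
	.name		= "my_kgdb_io",
	.read_char	= my_kgdb_read_char,
	.write_char	= my_kgdb_write_char,
};

/* dynamic configuration: hook into the debug core; the matching
 * cleanup handler would call kgdb_unregister_io_module() */
static int __init my_kgdb_io_init(void)
{
	return kgdb_register_io_module(&amp;my_kgdb_io_ops);
}
</programlisting>
</para>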
@ -416,15 +752,15 @@
underlying low-level hardware driver having "polling hooks" to
which the tty driver is attached. In the initial
implementation of kgdboc the serial_core was changed to expose a
low-level UART hook for doing polled-mode reading and writing of a
single character while in an atomic context. When kgdb makes an I/O
request to the debugger, kgdboc invokes a callback in the serial
core which in turn uses the callback in the UART driver. It is
certainly possible to extend kgdboc to work with non-UART based
consoles in the future.
</para>
<para>
When using kgdboc with a UART, the UART driver must implement two callbacks in the <constant>struct uart_ops</constant>. Example from drivers/serial/8250.c:<programlisting>
#ifdef CONFIG_CONSOLE_POLL
.poll_get_char = serial8250_get_poll_char,
.poll_put_char = serial8250_put_poll_char,
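/* For reference, the matching prototypes in struct uart_ops
 * (include/linux/serial_core.h) are:
 *	int  (*poll_get_char)(struct uart_port *);
 *	void (*poll_put_char)(struct uart_port *, unsigned char);
 */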
@ -434,7 +770,7 @@
<constant>#ifdef CONFIG_CONSOLE_POLL</constant>, as shown above.
Keep in mind that polling hooks have to be implemented in such a way
that they can be called from an atomic context and have to restore
the state of the UART chip on return such that the system can return
to normal when the debugger detaches. You need to be very careful
with any kind of lock you consider, because failing here is most
likely going to mean pressing the reset button.
@ -453,6 +789,10 @@
<itemizedlist>
<listitem><para>Jason Wessel<email>jason.wessel@windriver.com</email></para></listitem>
</itemizedlist>
In Jan 2010 this document was updated to include kdb.
<itemizedlist>
<listitem><para>Jason Wessel<email>jason.wessel@windriver.com</email></para></listitem>
</itemizedlist>
</para>
</chapter>
</book>

View File

@ -81,16 +81,14 @@ void (*port_disable) (struct ata_port *);
</programlisting>
<para>
Called from ata_bus_probe() error path, as well as when
unregistering from the SCSI module (rmmod, hot unplug).
This function should do whatever needs to be done to take the
port out of use. In most cases, ata_port_disable() can be used
as this hook.
</para>
<para>
Called from ata_bus_probe() on a failed probe.
Called from ata_scsi_release().
</para>
@ -107,10 +105,6 @@ void (*dev_config) (struct ata_port *, struct ata_device *);
issue of SET FEATURES - XFER MODE, and prior to operation.
</para>
<para>
Called by ata_device_add() after ata_dev_identify() determines
a device is present.
</para>
<para>
This entry may be specified as NULL in ata_port_operations.
</para>
@ -154,8 +148,8 @@ unsigned int (*mode_filter) (struct ata_port *, struct ata_device *, unsigned in
<sect2><title>Taskfile read/write</title>
<programlisting>
void (*sff_tf_load) (struct ata_port *ap, struct ata_taskfile *tf);
void (*sff_tf_read) (struct ata_port *ap, struct ata_taskfile *tf);
</programlisting>
<para>
@ -164,36 +158,35 @@ void (*tf_read) (struct ata_port *ap, struct ata_taskfile *tf);
hardware registers / DMA buffers, to obtain the current set of
taskfile register values.
Most drivers for taskfile-based hardware (PIO or MMIO) use
ata_sff_tf_load() and ata_sff_tf_read() for these hooks.
</para>
</sect2>
<sect2><title>PIO data read/write</title>
<programlisting>
void (*sff_data_xfer) (struct ata_device *, unsigned char *, unsigned int, int);
</programlisting>
<para>
All bmdma-style drivers must implement this hook. This is the low-level
operation that actually copies the data bytes during a PIO data
transfer.
Typically the driver will choose one of ata_sff_data_xfer_noirq(),
ata_sff_data_xfer(), or ata_sff_data_xfer32().
</para>
</sect2>
<sect2><title>ATA command execute</title>
<programlisting>
void (*sff_exec_command)(struct ata_port *ap, struct ata_taskfile *tf);
</programlisting>
<para>
causes an ATA command, previously loaded with
->tf_load(), to be initiated in hardware.
Most drivers for taskfile-based hardware use ata_sff_exec_command()
for this hook.
</para>
@ -218,8 +211,8 @@ command.
<sect2><title>Read specific ATA shadow registers</title>
<programlisting>
u8 (*sff_check_status)(struct ata_port *ap);
u8 (*sff_check_altstatus)(struct ata_port *ap);
</programlisting>
<para>
@ -227,20 +220,26 @@ u8 (*check_altstatus)(struct ata_port *ap);
hardware. On some hardware, reading the Status register has
the side effect of clearing the interrupt condition.
Most drivers for taskfile-based hardware use
ata_sff_check_status() for this hook.
</para>
</sect2>
<sect2><title>Write specific ATA shadow register</title>
<programlisting>
void (*sff_set_devctl)(struct ata_port *ap, u8 ctl);
</programlisting>
<para>
Write the device control ATA shadow register to the hardware.
Most drivers don't need to define this.
</para>
</sect2>
<sect2><title>Select ATA device on bus</title>
<programlisting>
void (*sff_dev_select)(struct ata_port *ap, unsigned int device);
</programlisting>
<para>
@ -251,9 +250,7 @@ void (*dev_select)(struct ata_port *ap, unsigned int device);
</para>
<para>
Most drivers for taskfile-based hardware use
ata_sff_dev_select() for this hook.
</para>
</sect2>
@ -441,13 +438,13 @@ void (*irq_clear) (struct ata_port *);
to struct ata_host_set.
</para>
<para>
Most legacy IDE drivers use ata_sff_interrupt() for the
irq_handler hook, which scans all ports in the host_set,
determines which queued command was active (if any), and calls
ata_sff_host_intr(ap,qc).
</para>
<para>
Most legacy IDE drivers use ata_sff_irq_clear() for the
irq_clear() hook, which simply clears the interrupt and error
flags in the DMA status register.
</para>
@ -490,16 +487,12 @@ void (*host_stop) (struct ata_host_set *host_set);
allocates space for a legacy IDE PRD table and returns.
</para>
<para>
->port_stop() is called after ->host_stop(). Its sole function
is to release DMA/memory resources, now that they are no longer
actively being used. Many drivers also free driver-private
data from port at this time.
</para>
<para>
->host_stop() is called after all ->port_stop() calls
have completed. The hook must finalize hardware shutdown, release DMA
and other resources, etc.

View File

@ -144,7 +144,7 @@ usage should require reading the full document.
this though and the recommendation to allow only a single
interface in STA mode at first!
</para>
!Finclude/net/mac80211.h ieee80211_vif
</chapter>
<chapter id="rx-tx">
@ -234,7 +234,6 @@ usage should require reading the full document.
<title>Multiple queues and QoS support</title>
<para>TBD</para>
!Finclude/net/mac80211.h ieee80211_tx_queue_params
</chapter>
<chapter id="AP">

View File

@ -17,6 +17,7 @@
<!ENTITY VIDIOC-DBG-G-REGISTER "<link linkend='vidioc-dbg-g-register'><constant>VIDIOC_DBG_G_REGISTER</constant></link>">
<!ENTITY VIDIOC-DBG-S-REGISTER "<link linkend='vidioc-dbg-g-register'><constant>VIDIOC_DBG_S_REGISTER</constant></link>">
<!ENTITY VIDIOC-DQBUF "<link linkend='vidioc-qbuf'><constant>VIDIOC_DQBUF</constant></link>">
<!ENTITY VIDIOC-DQEVENT "<link linkend='vidioc-dqevent'><constant>VIDIOC_DQEVENT</constant></link>">
<!ENTITY VIDIOC-ENCODER-CMD "<link linkend='vidioc-encoder-cmd'><constant>VIDIOC_ENCODER_CMD</constant></link>">
<!ENTITY VIDIOC-ENUMAUDIO "<link linkend='vidioc-enumaudio'><constant>VIDIOC_ENUMAUDIO</constant></link>">
<!ENTITY VIDIOC-ENUMAUDOUT "<link linkend='vidioc-enumaudioout'><constant>VIDIOC_ENUMAUDOUT</constant></link>">
@ -60,6 +61,7 @@
<!ENTITY VIDIOC-REQBUFS "<link linkend='vidioc-reqbufs'><constant>VIDIOC_REQBUFS</constant></link>">
<!ENTITY VIDIOC-STREAMOFF "<link linkend='vidioc-streamon'><constant>VIDIOC_STREAMOFF</constant></link>">
<!ENTITY VIDIOC-STREAMON "<link linkend='vidioc-streamon'><constant>VIDIOC_STREAMON</constant></link>">
<!ENTITY VIDIOC-SUBSCRIBE-EVENT "<link linkend='vidioc-subscribe-event'><constant>VIDIOC_SUBSCRIBE_EVENT</constant></link>">
<!ENTITY VIDIOC-S-AUDIO "<link linkend='vidioc-g-audio'><constant>VIDIOC_S_AUDIO</constant></link>">
<!ENTITY VIDIOC-S-AUDOUT "<link linkend='vidioc-g-audioout'><constant>VIDIOC_S_AUDOUT</constant></link>">
<!ENTITY VIDIOC-S-CROP "<link linkend='vidioc-g-crop'><constant>VIDIOC_S_CROP</constant></link>">
@ -83,6 +85,7 @@
<!ENTITY VIDIOC-TRY-ENCODER-CMD "<link linkend='vidioc-encoder-cmd'><constant>VIDIOC_TRY_ENCODER_CMD</constant></link>">
<!ENTITY VIDIOC-TRY-EXT-CTRLS "<link linkend='vidioc-g-ext-ctrls'><constant>VIDIOC_TRY_EXT_CTRLS</constant></link>">
<!ENTITY VIDIOC-TRY-FMT "<link linkend='vidioc-g-fmt'><constant>VIDIOC_TRY_FMT</constant></link>">
<!ENTITY VIDIOC-UNSUBSCRIBE-EVENT "<link linkend='vidioc-subscribe-event'><constant>VIDIOC_UNSUBSCRIBE_EVENT</constant></link>">
<!-- Types -->
<!ENTITY v4l2-std-id "<link linkend='v4l2-std-id'>v4l2_std_id</link>">
@ -141,6 +144,9 @@
<!ENTITY v4l2-enc-idx "struct&nbsp;<link linkend='v4l2-enc-idx'>v4l2_enc_idx</link>">
<!ENTITY v4l2-enc-idx-entry "struct&nbsp;<link linkend='v4l2-enc-idx-entry'>v4l2_enc_idx_entry</link>">
<!ENTITY v4l2-encoder-cmd "struct&nbsp;<link linkend='v4l2-encoder-cmd'>v4l2_encoder_cmd</link>">
<!ENTITY v4l2-event "struct&nbsp;<link linkend='v4l2-event'>v4l2_event</link>">
<!ENTITY v4l2-event-subscription "struct&nbsp;<link linkend='v4l2-event-subscription'>v4l2_event_subscription</link>">
<!ENTITY v4l2-event-vsync "struct&nbsp;<link linkend='v4l2-event-vsync'>v4l2_event_vsync</link>">
<!ENTITY v4l2-ext-control "struct&nbsp;<link linkend='v4l2-ext-control'>v4l2_ext_control</link>">
<!ENTITY v4l2-ext-controls "struct&nbsp;<link linkend='v4l2-ext-controls'>v4l2_ext_controls</link>">
<!ENTITY v4l2-fmtdesc "struct&nbsp;<link linkend='v4l2-fmtdesc'>v4l2_fmtdesc</link>">
@ -200,6 +206,7 @@
<!ENTITY sub-controls SYSTEM "v4l/controls.xml">
<!ENTITY sub-dev-capture SYSTEM "v4l/dev-capture.xml">
<!ENTITY sub-dev-codec SYSTEM "v4l/dev-codec.xml">
<!ENTITY sub-dev-event SYSTEM "v4l/dev-event.xml">
<!ENTITY sub-dev-effect SYSTEM "v4l/dev-effect.xml">
<!ENTITY sub-dev-osd SYSTEM "v4l/dev-osd.xml">
<!ENTITY sub-dev-output SYSTEM "v4l/dev-output.xml">
@ -292,6 +299,8 @@
<!ENTITY sub-v4l2grab-c SYSTEM "v4l/v4l2grab.c.xml">
<!ENTITY sub-videodev2-h SYSTEM "v4l/videodev2.h.xml">
<!ENTITY sub-v4l2 SYSTEM "v4l/v4l2.xml">
<!ENTITY sub-dqevent SYSTEM "v4l/vidioc-dqevent.xml">
<!ENTITY sub-subscribe-event SYSTEM "v4l/vidioc-subscribe-event.xml">
<!ENTITY sub-intro SYSTEM "dvb/intro.xml">
<!ENTITY sub-frontend SYSTEM "dvb/frontend.xml">
<!ENTITY sub-dvbproperty SYSTEM "dvb/dvbproperty.xml">
@ -381,3 +390,5 @@
<!ENTITY reqbufs SYSTEM "v4l/vidioc-reqbufs.xml">
<!ENTITY s-hw-freq-seek SYSTEM "v4l/vidioc-s-hw-freq-seek.xml">
<!ENTITY streamon SYSTEM "v4l/vidioc-streamon.xml">
<!ENTITY dqevent SYSTEM "v4l/vidioc-dqevent.xml">
<!ENTITY subscribe_event SYSTEM "v4l/vidioc-subscribe-event.xml">

View File

@ -269,7 +269,7 @@ static void board_hwcontrol(struct mtd_info *mtd, int cmd)
information about the device.
</para>
<programlisting>
static int __init board_init (void)
{
struct nand_chip *this;
int err = 0;
@ -488,7 +488,7 @@ static void board_select_chip (struct mtd_info *mtd, int chip)
The ECC bytes must be placed immediately after the data
bytes in order to make the syndrome generator work. This
is contrary to the usual layout used by software ECC. The
separation of data and out of band area is no longer
possible. The nand driver code handles this layout and
the remaining free bytes in the oob area are managed by
the autoplacement code. Provide a matching oob-layout
@ -560,7 +560,7 @@ static void board_select_chip (struct mtd_info *mtd, int chip)
bad blocks. They have factory marked good blocks. The marker pattern
is erased when the block is erased to be reused. So in case of
powerloss before writing the pattern back to the chip this block
would be lost and added to the bad blocks. Therefore we scan the
chip(s) when we detect them the first time for good blocks and
store this information in a bad block table before erasing any
of the blocks.
@ -1094,7 +1094,7 @@ in this page</entry>
manufacturers specifications. This applies similar to the spare area.
</para>
<para>
Therefore NAND aware filesystems must either write in page size chunks
or hold a writebuffer to collect smaller writes until they sum up to
pagesize. Available NAND aware filesystems: JFFS2, YAFFS.
</para>

View File

@ -19,13 +19,17 @@
</authorgroup>
<copyright>
<year>2008-2010</year>
<holder>Paul Mundt</holder>
</copyright>
<copyright>
<year>2008-2010</year>
<holder>Renesas Technology Corp.</holder>
</copyright>
<copyright>
<year>2010</year>
<holder>Renesas Electronics Corp.</holder>
</copyright>
<legalnotice>
<para>
@ -77,7 +81,7 @@
</chapter>
<chapter id="clk">
<title>Clock Framework Extensions</title>
!Iinclude/linux/sh_clk.h
</chapter>
<chapter id="mach">
<title>Machine Specific Interfaces</title>

View File

@ -16,6 +16,15 @@
</address>
</affiliation>
</author>
<author>
<firstname>William</firstname>
<surname>Cohen</surname>
<affiliation>
<address>
<email>wcohen@redhat.com</email>
</address>
</affiliation>
</author>
</authorgroup>
<legalnotice>
@ -91,4 +100,8 @@
!Iinclude/trace/events/signal.h
</chapter>
<chapter id="block">
<title>Block IO</title>
!Iinclude/trace/events/block.h
</chapter>
</book>

View File

@ -1170,7 +1170,7 @@ frames per second. If less than this number of frames is to be
captured or output, applications can request frame skipping or
duplicating on the driver side. This is especially useful when using
the &func-read; or &func-write;, which are not augmented by timestamps
or sequence counters, and to avoid unnecessary data copying.</para>
<para>Finally these ioctls can be used to determine the number of
buffers used internally by a driver in read/write mode. For

View File

@ -2332,15 +2332,26 @@ more information.</para>
</listitem>
</orderedlist>
</section>
</section>
<section>
<title>V4L2 in Linux 2.6.34</title>
<orderedlist>
<listitem>
<para>Added
<constant>V4L2_CID_IRIS_ABSOLUTE</constant> and
<constant>V4L2_CID_IRIS_RELATIVE</constant> controls to the
<link linkend="camera-controls">Camera controls class</link>.
</para>
</listitem>
</orderedlist>
</section>
<section id="other">
<title>Relation of V4L2 to other Linux multimedia APIs</title>
<section id="xvideo">
<title>X Video Extension</title>
<para>The X Video Extension (abbreviated XVideo or just Xv) is
an extension of the X Window system, implemented for example by the
XFree86 project. Its scope is similar to V4L2, an API to video capture
and output devices for X clients. Xv allows applications to display
@ -2351,7 +2362,7 @@ capture or output still images in XPixmaps<footnote>
extension available across many operating systems and
architectures.</para>
<para>Because the driver is embedded into the X server Xv has a
number of advantages over the V4L2 <link linkend="overlay">video
overlay interface</link>. The driver can easily determine the overlay
target, &ie; visible graphics memory or off-screen buffers for a
@ -2360,16 +2371,16 @@ overlay, scaling or color-keying, or the clipping functions of the
video capture hardware, always in sync with drawing operations or
windows moving or changing their stacking order.</para>
<para>To combine the advantages of Xv and V4L a special Xv
driver exists in XFree86 and XOrg, just programming any overlay capable
Video4Linux device it finds. To enable it
<filename>/etc/X11/XF86Config</filename> must contain these lines:</para>
<para><screen>
Section "Module"
Load "v4l"
EndSection</screen></para>
<para>As of XFree86 4.2 this driver still supports only V4L
ioctls, however it should work just fine with all V4L2 devices through
the V4L2 backward-compatibility layer. Since V4L2 permits multiple
opens it is possible (if supported by the V4L2 driver) to capture
@ -2377,83 +2388,84 @@ video while an X client requested video overlay. Restrictions of
simultaneous capturing and overlay are discussed in <xref
linkend="overlay" /> apply.</para>
<para>Only marginally related to V4L2, XFree86 extended Xv to
support hardware YUV to RGB conversion and scaling for faster video
playback, and added an interface to MPEG-2 decoding hardware. This API
is useful to display images captured with V4L2 devices.</para>
</section>
<section>
<title>Digital Video</title>
<para>V4L2 does not support digital terrestrial, cable or
satellite broadcast. A separate project aiming at digital receivers
exists. You can find its homepage at <ulink
url="http://linuxtv.org">http://linuxtv.org</ulink>. The Linux DVB API
has no connection to the V4L2 API except that drivers for hybrid
hardware may support both.</para>
</section>
<section>
<title>Audio Interfaces</title>
<para>[to do - OSS/ALSA]</para>
</section>
</section>
<section id="experimental">
<title>Experimental API Elements</title>
<para>The following V4L2 API elements are currently experimental
and may change in the future.</para>
<itemizedlist>
<listitem>
<para>Video Output Overlay (OSD) Interface, <xref
linkend="osd" />.</para>
</listitem>
<listitem>
<para><constant>V4L2_BUF_TYPE_VIDEO_OUTPUT_OVERLAY</constant>,
&v4l2-buf-type;, <xref linkend="v4l2-buf-type" />.</para>
</listitem>
<listitem>
<para><constant>V4L2_CAP_VIDEO_OUTPUT_OVERLAY</constant>,
&VIDIOC-QUERYCAP; ioctl, <xref linkend="device-capabilities" />.</para>
</listitem>
<listitem>
<para>&VIDIOC-ENUM-FRAMESIZES; and
&VIDIOC-ENUM-FRAMEINTERVALS; ioctls.</para>
</listitem>
<listitem>
<para>&VIDIOC-G-ENC-INDEX; ioctl.</para>
</listitem>
<listitem>
<para>&VIDIOC-ENCODER-CMD; and &VIDIOC-TRY-ENCODER-CMD;
ioctls.</para>
</listitem>
<listitem>
<para>&VIDIOC-DBG-G-REGISTER; and &VIDIOC-DBG-S-REGISTER;
ioctls.</para>
</listitem>
<listitem>
<para>&VIDIOC-DBG-G-CHIP-IDENT; ioctl.</para>
</listitem>
</itemizedlist>
</section>
<section id="obsolete">
<title>Obsolete API Elements</title>
<para>The following V4L2 API elements were superseded by new
interfaces and should not be implemented in new drivers.</para>
<itemizedlist>
<listitem>
<para><constant>VIDIOC_G_MPEGCOMP</constant> and
<constant>VIDIOC_S_MPEGCOMP</constant> ioctls. Use Extended Controls,
<xref linkend="extended-controls" />.</para>
</listitem>
</itemizedlist>
</section>
<!--

View File

@ -266,6 +266,12 @@ minimum value disables backlight compensation.</entry>
<entry>boolean</entry>
<entry>Chroma automatic gain control.</entry>
</row>
<row>
<entry><constant>V4L2_CID_CHROMA_GAIN</constant></entry>
<entry>integer</entry>
<entry>Adjusts the Chroma gain control (for use when chroma AGC
is disabled).</entry>
</row>
<row>
<entry><constant>V4L2_CID_COLOR_KILLER</constant></entry>
<entry>boolean</entry>
@ -277,8 +283,15 @@ minimum value disables backlight compensation.</entry>
<entry>Selects a color effect. Possible values for
<constant>enum v4l2_colorfx</constant> are:
<constant>V4L2_COLORFX_NONE</constant> (0),
<constant>V4L2_COLORFX_BW</constant> (1),
<constant>V4L2_COLORFX_SEPIA</constant> (2),
<constant>V4L2_COLORFX_NEGATIVE</constant> (3),
<constant>V4L2_COLORFX_EMBOSS</constant> (4),
<constant>V4L2_COLORFX_SKETCH</constant> (5),
<constant>V4L2_COLORFX_SKY_BLUE</constant> (6),
<constant>V4L2_COLORFX_GRASS_GREEN</constant> (7),
<constant>V4L2_COLORFX_SKIN_WHITEN</constant> (8) and
<constant>V4L2_COLORFX_VIVID</constant> (9).</entry>
</row>
<row>
<entry><constant>V4L2_CID_ROTATE</constant></entry>
@ -1824,6 +1837,25 @@ wide-angle direction. The zoom speed unit is driver-specific.</entry>
</row>
<row><entry></entry></row>
<row>
<entry spanname="id"><constant>V4L2_CID_IRIS_ABSOLUTE</constant>&nbsp;</entry>
<entry>integer</entry>
</row><row><entry spanname="descr">This control sets the
camera's aperture to the specified value. The unit is undefined.
Larger values open the iris wider, smaller values close it.</entry>
</row>
<row><entry></entry></row>
<row>
<entry spanname="id"><constant>V4L2_CID_IRIS_RELATIVE</constant>&nbsp;</entry>
<entry>integer</entry>
</row><row><entry spanname="descr">This control modifies the
camera's aperture by the specified amount. The unit is undefined.
Positive values open the iris one step further, negative values close
it one step further. This is a write-only control.</entry>
</row>
<row><entry></entry></row>
<row>
<entry spanname="id"><constant>V4L2_CID_PRIVACY</constant>&nbsp;</entry>
<entry>boolean</entry>

View File

@ -0,0 +1,31 @@
<title>Event Interface</title>
<para>The V4L2 event interface provides a means for the user to be
notified immediately of certain conditions taking place on a device.
This might include start of frame or loss of signal events, for
example.
</para>
<para>To receive events, the user must first subscribe to the events of
interest using the &VIDIOC-SUBSCRIBE-EVENT; ioctl. Once an event type is
subscribed, events of the subscribed types can be dequeued using the
&VIDIOC-DQEVENT; ioctl. Events may be unsubscribed using the
VIDIOC_UNSUBSCRIBE_EVENT ioctl. The special event type V4L2_EVENT_ALL may
be used to unsubscribe all the events the driver supports.</para>
<para>The event subscriptions and event queues are specific to file
handles. Subscribing an event on one file handle does not affect
other file handles.
</para>
<para>The information on dequeueable events is obtained by using the select or
poll system calls on video devices. V4L2 events are signalled as POLLPRI events
by the poll system call and as exceptions by the select system call.</para>
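<para>As a minimal sketch (assuming <constant>fd</constant> is an open
video device whose driver supports the event interface; error handling
and the string.h, sys/ioctl.h, sys/select.h and linux/videodev2.h
includes are omitted), subscribing to and dequeuing a VSYNC event looks
roughly like this:
<programlisting>
struct v4l2_event_subscription sub;
struct v4l2_event ev;
fd_set efds;

memset(&amp;sub, 0, sizeof(sub));
sub.type = V4L2_EVENT_VSYNC;
ioctl(fd, VIDIOC_SUBSCRIBE_EVENT, &amp;sub);

/* dequeueable events are signalled as exceptions */
FD_ZERO(&amp;efds);
FD_SET(fd, &amp;efds);
select(fd + 1, NULL, NULL, &amp;efds, NULL);

memset(&amp;ev, 0, sizeof(ev));
ioctl(fd, VIDIOC_DQEVENT, &amp;ev);
/* ev.type, ev.u.vsync.field, ev.pending and ev.sequence are now valid */
</programlisting>
</para>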
<!--
Local Variables:
mode: sgml
sgml-parent-document: "v4l2.sgml"
indent-tabs-mode: nil
End:
-->

View File

@ -589,7 +589,8 @@ number of a video input as in &v4l2-input; field
<entry></entry>
<entry>A place holder for future extensions and custom
(driver defined) buffer types
<constant>V4L2_BUF_TYPE_PRIVATE</constant> and higher.</entry>
<constant>V4L2_BUF_TYPE_PRIVATE</constant> and higher. Applications
should set this to 0.</entry>
</row>
</tbody>
</tgroup>
@ -700,6 +701,16 @@ buffer cannot be on both queues at the same time, the
They can both be cleared, however; then the buffer is in the "dequeued"
state, in the application domain so to speak.</entry>
</row>
<row>
<entry><constant>V4L2_BUF_FLAG_ERROR</constant></entry>
<entry>0x0040</entry>
<entry>When this flag is set, the buffer has been dequeued
successfully, although the data might have been corrupted.
This is recoverable, streaming may continue as normal and
the buffer may be reused normally.
Drivers set this flag when the <constant>VIDIOC_DQBUF</constant>
ioctl is called.</entry>
</row>
<row>
<entry><constant>V4L2_BUF_FLAG_KEYFRAME</constant></entry>
<entry>0x0008</entry>
@ -917,8 +928,8 @@ order</emphasis>.</para>
<para>When the driver provides or accepts images field by field
rather than interleaved, it is also important applications understand
how the fields combine to frames. We distinguish between top and
bottom fields, the <emphasis>spatial order</emphasis>: The first line
how the fields combine to frames. We distinguish between top (aka odd) and
bottom (aka even) fields, the <emphasis>spatial order</emphasis>: The first line
of the top field is the first line of an interlaced frame, the first
line of the bottom field is the second line of that frame.</para>
@ -971,12 +982,12 @@ between <constant>V4L2_FIELD_TOP</constant> and
<row>
<entry><constant>V4L2_FIELD_TOP</constant></entry>
<entry>2</entry>
<entry>Images consist of the top field only.</entry>
<entry>Images consist of the top (aka odd) field only.</entry>
</row>
<row>
<entry><constant>V4L2_FIELD_BOTTOM</constant></entry>
<entry>3</entry>
<entry>Images consist of the bottom field only.
<entry>Images consist of the bottom (aka even) field only.
Applications may wish to prevent a device from capturing interlaced
images because they will have "comb" or "feathering" artefacts around
moving objects.</entry>

View File

@ -792,6 +792,18 @@ http://www.thedirks.org/winnov/</ulink></para></entry>
<entry>'YYUV'</entry>
<entry>unknown</entry>
</row>
<row id="V4L2-PIX-FMT-Y4">
<entry><constant>V4L2_PIX_FMT_Y4</constant></entry>
<entry>'Y04 '</entry>
<entry>Old 4-bit greyscale format. Only the least significant 4 bits of each byte are used,
the other bits are set to 0.</entry>
</row>
<row id="V4L2-PIX-FMT-Y6">
<entry><constant>V4L2_PIX_FMT_Y6</constant></entry>
<entry>'Y06 '</entry>
<entry>Old 6-bit greyscale format. Only the least significant 6 bits of each byte are used,
the other bits are set to 0.</entry>
</row>
</tbody>
</tgroup>
</table>
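<para>A small illustrative helper (not taken from any driver) that
widens one <constant>V4L2_PIX_FMT_Y4</constant> sample, stored one per
byte with the value in the low nibble, to the full 8-bit greyscale
range:</para>
<programlisting>
static inline unsigned char y4_to_y8(unsigned char s)
{
	s &amp;= 0x0f;		/* the upper bits are zero by definition */
	return (s &lt;&lt; 4) | s;	/* replicate the nibble: 0..15 -> 0..255 */
}
</programlisting>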

View File

@ -58,7 +58,7 @@ MPEG stream embedded, sliced VBI data format in this specification.
</contrib>
<affiliation>
<address>
<email>awalls@md.metrocast.net</email>
</address>
</affiliation>
</author>
@ -401,6 +401,7 @@ and discussions on the V4L mailing list.</revremark>
<section id="ttx"> &sub-dev-teletext; </section>
<section id="radio"> &sub-dev-radio; </section>
<section id="rds"> &sub-dev-rds; </section>
<section id="event"> &sub-dev-event; </section>
</chapter>
<chapter id="driver">
@ -426,6 +427,7 @@ and discussions on the V4L mailing list.</revremark>
&sub-cropcap;
&sub-dbg-g-chip-ident;
&sub-dbg-g-register;
&sub-dqevent;
&sub-encoder-cmd;
&sub-enumaudio;
&sub-enumaudioout;
@ -467,6 +469,7 @@ and discussions on the V4L mailing list.</revremark>
&sub-reqbufs;
&sub-s-hw-freq-seek;
&sub-streamon;
&sub-subscribe-event;
<!-- End of ioctls. -->
&sub-mmap;
&sub-munmap;

View File

@ -1018,6 +1018,13 @@ enum <link linkend="v4l2-colorfx">v4l2_colorfx</link> {
V4L2_COLORFX_NONE = 0,
V4L2_COLORFX_BW = 1,
V4L2_COLORFX_SEPIA = 2,
V4L2_COLORFX_NEGATIVE = 3,
V4L2_COLORFX_EMBOSS = 4,
V4L2_COLORFX_SKETCH = 5,
V4L2_COLORFX_SKY_BLUE = 6,
V4L2_COLORFX_GRASS_GREEN = 7,
V4L2_COLORFX_SKIN_WHITEN = 8,
V4L2_COLORFX_VIVID = 9,
};
#define V4L2_CID_AUTOBRIGHTNESS (V4L2_CID_BASE+32)
#define V4L2_CID_BAND_STOP_FILTER (V4L2_CID_BASE+33)
@ -1271,6 +1278,9 @@ enum <link linkend="v4l2-exposure-auto-type">v4l2_exposure_auto_type</link> {
#define V4L2_CID_PRIVACY (V4L2_CID_CAMERA_CLASS_BASE+16)
#define V4L2_CID_IRIS_ABSOLUTE (V4L2_CID_CAMERA_CLASS_BASE+17)
#define V4L2_CID_IRIS_RELATIVE (V4L2_CID_CAMERA_CLASS_BASE+18)
/* FM Modulator class control IDs */
#define V4L2_CID_FM_TX_CLASS_BASE (V4L2_CTRL_CLASS_FM_TX | 0x900)
#define V4L2_CID_FM_TX_CLASS (V4L2_CTRL_CLASS_FM_TX | 1)

View File

@ -0,0 +1,131 @@
<refentry id="vidioc-dqevent">
<refmeta>
<refentrytitle>ioctl VIDIOC_DQEVENT</refentrytitle>
&manvol;
</refmeta>
<refnamediv>
<refname>VIDIOC_DQEVENT</refname>
<refpurpose>Dequeue event</refpurpose>
</refnamediv>
<refsynopsisdiv>
<funcsynopsis>
<funcprototype>
<funcdef>int <function>ioctl</function></funcdef>
<paramdef>int <parameter>fd</parameter></paramdef>
<paramdef>int <parameter>request</parameter></paramdef>
<paramdef>struct v4l2_event
*<parameter>argp</parameter></paramdef>
</funcprototype>
</funcsynopsis>
</refsynopsisdiv>
<refsect1>
<title>Arguments</title>
<variablelist>
<varlistentry>
<term><parameter>fd</parameter></term>
<listitem>
<para>&fd;</para>
</listitem>
</varlistentry>
<varlistentry>
<term><parameter>request</parameter></term>
<listitem>
<para>VIDIOC_DQEVENT</para>
</listitem>
</varlistentry>
<varlistentry>
<term><parameter>argp</parameter></term>
<listitem>
<para></para>
</listitem>
</varlistentry>
</variablelist>
</refsect1>
<refsect1>
<title>Description</title>
<para>Dequeue an event from a video device. No input is required
for this ioctl. All the fields of the &v4l2-event; structure are
filled by the driver. The file handle will also receive exceptions
which the application may get by e.g. using the select system
call.</para>
<table frame="none" pgwide="1" id="v4l2-event">
<title>struct <structname>v4l2_event</structname></title>
<tgroup cols="4">
&cs-str;
<tbody valign="top">
<row>
<entry>__u32</entry>
<entry><structfield>type</structfield></entry>
<entry></entry>
<entry>Type of the event.</entry>
</row>
<row>
<entry>union</entry>
<entry><structfield>u</structfield></entry>
<entry></entry>
<entry></entry>
</row>
<row>
<entry></entry>
<entry>&v4l2-event-vsync;</entry>
<entry><structfield>vsync</structfield></entry>
<entry>Event data for event V4L2_EVENT_VSYNC.
</entry>
</row>
<row>
<entry></entry>
<entry>__u8</entry>
<entry><structfield>data</structfield>[64]</entry>
<entry>Event data. Defined by the event type. The union
should be used to define an easily accessible type for
the event data.</entry>
</row>
<row>
<entry>__u32</entry>
<entry><structfield>pending</structfield></entry>
<entry></entry>
<entry>Number of pending events excluding this one.</entry>
</row>
<row>
<entry>__u32</entry>
<entry><structfield>sequence</structfield></entry>
<entry></entry>
<entry>Event sequence number. The sequence number is
incremented for every subscribed event that takes place.
If sequence numbers are not contiguous it means that
events have been lost.
</entry>
</row>
<row>
<entry>struct timespec</entry>
<entry><structfield>timestamp</structfield></entry>
<entry></entry>
<entry>Event timestamp.</entry>
</row>
<row>
<entry>__u32</entry>
<entry><structfield>reserved</structfield>[9]</entry>
<entry></entry>
<entry>Reserved for future extensions. Drivers must set
the array to zero.</entry>
</row>
</tbody>
</tgroup>
</table>
</refsect1>
</refentry>
<!--
Local Variables:
mode: sgml
sgml-parent-document: "v4l2.sgml"
indent-tabs-mode: nil
End:
-->

View File

@ -283,7 +283,7 @@ input/output interface to linux-media@vger.kernel.org on 19 Oct 2009.
<entry>This input supports setting DV presets by using VIDIOC_S_DV_PRESET.</entry>
</row>
<row>
<entry><constant>V4L2_IN_CAP_CUSTOM_TIMINGS</constant></entry>
<entry>0x00000002</entry>
<entry>This input supports setting custom video timings by using VIDIOC_S_DV_TIMINGS.</entry>
</row>

View File

@ -55,7 +55,7 @@ captured or output, applications can request frame skipping or
duplicating on the driver side. This is especially useful when using
the <function>read()</function> or <function>write()</function>, which
are not augmented by timestamps or sequence counters, and to avoid
unnecessary data copying.</para>
<para>Further these ioctls can be used to determine the number of
buffers used internally by a driver in read/write mode. For

View File

@ -54,12 +54,10 @@ to enqueue an empty (capturing) or filled (output) buffer in the
driver's incoming queue. The semantics depend on the selected I/O
method.</para>
<para>To enqueue a buffer applications set the <structfield>type</structfield>
field of a &v4l2-buffer; to the same buffer type as was previously used
with &v4l2-format; <structfield>type</structfield> and &v4l2-requestbuffers;
<structfield>type</structfield>. Applications must also set the
<structfield>index</structfield> field. Valid index numbers range from
zero to the number of buffers allocated with &VIDIOC-REQBUFS;
(&v4l2-requestbuffers; <structfield>count</structfield>) minus one. The
@ -70,8 +68,19 @@ intended for output (<structfield>type</structfield> is
<constant>V4L2_BUF_TYPE_VBI_OUTPUT</constant>) applications must also
initialize the <structfield>bytesused</structfield>,
<structfield>field</structfield> and
<structfield>timestamp</structfield> fields, see <xref
linkend="buffer" /> for details.
Applications must also set <structfield>flags</structfield> to 0. If a driver
supports capturing from specific video inputs and you want to specify a video
input, then <structfield>flags</structfield> should be set to
<constant>V4L2_BUF_FLAG_INPUT</constant> and the field
<structfield>input</structfield> must be initialized to the desired input.
The <structfield>reserved</structfield> field must be set to 0.
</para>
<para>To enqueue a <link linkend="mmap">memory mapped</link>
buffer applications set the <structfield>memory</structfield>
field to <constant>V4L2_MEMORY_MMAP</constant>. When
<constant>VIDIOC_QBUF</constant> is called with a pointer to this
structure the driver sets the
<constant>V4L2_BUF_FLAG_MAPPED</constant> and
@ -81,14 +90,10 @@ structure the driver sets the
&EINVAL;.</para>
<para>To enqueue a <link linkend="userp">user pointer</link>
buffer applications set the <structfield>memory</structfield>
field to <constant>V4L2_MEMORY_USERPTR</constant>, the
<structfield>m.userptr</structfield> field to the address of the
buffer and <structfield>length</structfield> to its size.
When <constant>VIDIOC_QBUF</constant> is called with a pointer to this
structure the driver sets the <constant>V4L2_BUF_FLAG_QUEUED</constant>
flag and clears the <constant>V4L2_BUF_FLAG_MAPPED</constant> and
@ -96,16 +101,21 @@ flag and clears the <constant>V4L2_BUF_FLAG_MAPPED</constant> and
<structfield>flags</structfield> field, or it returns an error code.
This ioctl locks the memory pages of the buffer in physical memory,
they cannot be swapped out to disk. Buffers remain locked until
dequeued, until the &VIDIOC-STREAMOFF; or &VIDIOC-REQBUFS; ioctl is
called, or until the device is closed.</para>
<para>Applications call the <constant>VIDIOC_DQBUF</constant>
ioctl to dequeue a filled (capturing) or displayed (output) buffer
from the driver's outgoing queue. They just set the
<structfield>type</structfield>, <structfield>memory</structfield>
and <structfield>reserved</structfield>
fields of a &v4l2-buffer; as above, when <constant>VIDIOC_DQBUF</constant>
is called with a pointer to this structure the driver fills the
remaining fields or returns an error code. The driver may also set
<constant>V4L2_BUF_FLAG_ERROR</constant> in the <structfield>flags</structfield>
field. It indicates a non-critical (recoverable) streaming error. In such a case
the application may continue as normal, but should be aware that data in the
dequeued buffer might be corrupted.</para>
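<para>A brief sketch (assuming capture via memory mapping on an open
device <constant>fd</constant>; error handling and includes omitted) of
requeuing buffer number <constant>i</constant> and dequeuing the next
filled buffer:
<programlisting>
struct v4l2_buffer buf;

memset(&amp;buf, 0, sizeof(buf));	/* zeroes flags and reserved, too */
buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
buf.memory = V4L2_MEMORY_MMAP;
buf.index = i;
ioctl(fd, VIDIOC_QBUF, &amp;buf);

memset(&amp;buf, 0, sizeof(buf));
buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
buf.memory = V4L2_MEMORY_MMAP;
ioctl(fd, VIDIOC_DQBUF, &amp;buf);
if (buf.flags &amp; V4L2_BUF_FLAG_ERROR)
	fprintf(stderr, "buffer %u may contain corrupted data\n", buf.index);
</programlisting>
</para>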
<para>By default <constant>VIDIOC_DQBUF</constant> blocks when no
buffer is in the outgoing queue. When the
@ -152,7 +162,13 @@ enqueue a user pointer buffer.</para>
<para><constant>VIDIOC_DQBUF</constant> failed due to an
internal error. Can also indicate temporary problems like signal
loss. Note the driver might dequeue an (empty) buffer despite
returning an error, or even stop capturing. Reusing such a buffer may be unsafe
though, and its details (e.g. <structfield>index</structfield>) may not be
returned either. It is recommended that drivers indicate recoverable errors
by setting the <constant>V4L2_BUF_FLAG_ERROR</constant> and returning 0 instead.
In that case the application should be able to safely reuse the buffer and
continue streaming.
</para>
</listitem>
</varlistentry>
</variablelist>

View File

@ -53,8 +53,10 @@ input</refpurpose>
automatically, similar to sensing the video standard. To do so, applications
call <constant> VIDIOC_QUERY_DV_PRESET</constant> with a pointer to a
&v4l2-dv-preset; type. Once the hardware detects a preset, that preset is
returned in the preset field of &v4l2-dv-preset;. If the preset could not be
detected because there was no signal, or the signal was unreliable, or the
signal did not map to a supported preset, then the value V4L2_DV_INVALID is
returned.</para>
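As a hedged illustration (an open fd on a device with digital video inputs is
assumed), the detection result might be checked like this:
	struct v4l2_dv_preset preset;

	memset(&preset, 0, sizeof(preset));
	if (ioctl(fd, VIDIOC_QUERY_DV_PRESET, &preset) == 0 &&
	    preset.preset != V4L2_DV_INVALID)
		/* preset.preset holds the detected preset */
		configure_pipeline(preset.preset);	/* hypothetical helper */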
</refsect1>
<refsect1>


@ -54,12 +54,13 @@ buffer at any time after buffers have been allocated with the
&VIDIOC-REQBUFS; ioctl.</para>
<para>Applications set the <structfield>type</structfield> field
of a &v4l2-buffer; to the same buffer type as was previously used with
&v4l2-format; <structfield>type</structfield> and &v4l2-requestbuffers;
<structfield>type</structfield>, and the <structfield>index</structfield>
field. Valid index numbers range from zero
to the number of buffers allocated with &VIDIOC-REQBUFS;
(&v4l2-requestbuffers; <structfield>count</structfield>) minus one.
The <structfield>reserved</structfield> field should be set to 0.
After calling <constant>VIDIOC_QUERYBUF</constant> with a pointer to
this structure drivers return an error code or fill the rest of
the structure.</para>
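A minimal hedged sketch of querying and mapping buffer 0, assuming buffers
were allocated beforehand with &VIDIOC-REQBUFS; in memory mapping mode:
	struct v4l2_buffer buf;

	memset(&buf, 0, sizeof(buf));		/* reserved field set to 0 */
	buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	buf.index = 0;
	if (ioctl(fd, VIDIOC_QUERYBUF, &buf) == 0) {
		void *start = mmap(NULL, buf.length,
				   PROT_READ | PROT_WRITE, MAP_SHARED,
				   fd, buf.m.offset);
		/* check start != MAP_FAILED before using the mapping */
	}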
@ -68,8 +69,8 @@ the structure.</para>
<constant>V4L2_BUF_FLAG_MAPPED</constant>,
<constant>V4L2_BUF_FLAG_QUEUED</constant> and
<constant>V4L2_BUF_FLAG_DONE</constant> flags will be valid. The
<structfield>memory</structfield> field will be set to the current
I/O method, the <structfield>m.offset</structfield>
contains the offset of the buffer from the start of the device memory,
the <structfield>length</structfield> field its size. The driver may
or may not set the remaining fields and flags; they are meaningless in


@ -325,7 +325,7 @@ should be part of the control documentation.</entry>
<entry>n/a</entry>
<entry>This is not a control. When
<constant>VIDIOC_QUERYCTRL</constant> is called with a control ID
equal to a control class code (see <xref linkend="ctrl-class" />) + 1, the
ioctl returns the name of the control class and this control type.
Older drivers which do not support this feature return an
&EINVAL;.</entry>
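As a hedged illustration of the class-code convention (fd, <stdio.h> and
<linux/videodev2.h> assumed), querying the name of the user control class:
	struct v4l2_queryctrl qc;

	memset(&qc, 0, sizeof(qc));
	qc.id = V4L2_CTRL_CLASS_USER + 1;	/* control class code + 1 */
	if (ioctl(fd, VIDIOC_QUERYCTRL, &qc) == 0)
		/* qc.type is V4L2_CTRL_TYPE_CTRL_CLASS here */
		printf("control class: %s\n", qc.name);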


@ -54,23 +54,23 @@ I/O. Memory mapped buffers are located in device memory and must be
allocated with this ioctl before they can be mapped into the
application's address space. User buffers are allocated by
applications themselves, and this ioctl is merely used to switch the
driver into user pointer I/O mode and to set up some internal structures.</para>
<para>To allocate device buffers applications initialize all
fields of the <structname>v4l2_requestbuffers</structname> structure.
They set the <structfield>type</structfield> field to the respective
stream or buffer type, the <structfield>count</structfield> field to
the desired number of buffers, <structfield>memory</structfield>
must be set to the requested I/O method and the <structfield>reserved</structfield> array
must be zeroed. When the ioctl is called with a pointer to this
structure the driver attempts to allocate the requested number of
buffers and stores the actual number
allocated in the <structfield>count</structfield> field. It can be
smaller than the number requested, even zero, when the driver runs out
of free memory. A larger number is also possible when the driver requires
more buffers to function correctly. For example video output requires
at least two buffers, one displayed and one filled by the
application.</para>
<para>When the I/O method is not supported the ioctl
returns an &EINVAL;.</para>
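A minimal hedged user-space sketch of such a request (four memory-mapped
buffers; an open fd, <stdio.h> and <linux/videodev2.h> are assumed):
	struct v4l2_requestbuffers req;

	memset(&req, 0, sizeof(req));	/* zeroes the reserved array too */
	req.count = 4;
	req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	req.memory = V4L2_MEMORY_MMAP;
	if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
		perror("VIDIOC_REQBUFS");	/* e.g. EINVAL: unsupported method */
	else if (req.count < 4)
		fprintf(stderr, "only %u buffers granted\n", req.count);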
<para>Applications can call <constant>VIDIOC_REQBUFS</constant>
@ -81,14 +81,6 @@ in progress, an implicit &VIDIOC-STREAMOFF;. <!-- mhs: I see no
reason why munmap()ping one or even all buffers must imply
streamoff.--></para>
<table pgwide="1" frame="none" id="v4l2-requestbuffers">
<title>struct <structname>v4l2_requestbuffers</structname></title>
<tgroup cols="3">
@ -97,9 +89,7 @@ I/O, when this I/O method is not supported the ioctl returns an
<row>
<entry>__u32</entry>
<entry><structfield>count</structfield></entry>
<entry>The number of buffers requested or granted.</entry>
</row>
<row>
<entry>&v4l2-buf-type;</entry>
@ -120,7 +110,7 @@ as the &v4l2-format; <structfield>type</structfield> field. See <xref
<entry><structfield>reserved</structfield>[2]</entry>
<entry>A placeholder for future extensions and custom
(driver defined) buffer types <constant>V4L2_BUF_TYPE_PRIVATE</constant> and
higher.</entry>
higher. This array should be zeroed by applications.</entry>
</row>
</tbody>
</tgroup>


@ -0,0 +1,133 @@
<refentry id="vidioc-subscribe-event">
<refmeta>
<refentrytitle>ioctl VIDIOC_SUBSCRIBE_EVENT, VIDIOC_UNSUBSCRIBE_EVENT</refentrytitle>
&manvol;
</refmeta>
<refnamediv>
<refname>VIDIOC_SUBSCRIBE_EVENT, VIDIOC_UNSUBSCRIBE_EVENT</refname>
<refpurpose>Subscribe or unsubscribe event</refpurpose>
</refnamediv>
<refsynopsisdiv>
<funcsynopsis>
<funcprototype>
<funcdef>int <function>ioctl</function></funcdef>
<paramdef>int <parameter>fd</parameter></paramdef>
<paramdef>int <parameter>request</parameter></paramdef>
<paramdef>struct v4l2_event_subscription
*<parameter>argp</parameter></paramdef>
</funcprototype>
</funcsynopsis>
</refsynopsisdiv>
<refsect1>
<title>Arguments</title>
<variablelist>
<varlistentry>
<term><parameter>fd</parameter></term>
<listitem>
<para>&fd;</para>
</listitem>
</varlistentry>
<varlistentry>
<term><parameter>request</parameter></term>
<listitem>
<para>VIDIOC_SUBSCRIBE_EVENT, VIDIOC_UNSUBSCRIBE_EVENT</para>
</listitem>
</varlistentry>
<varlistentry>
<term><parameter>argp</parameter></term>
<listitem>
<para></para>
</listitem>
</varlistentry>
</variablelist>
</refsect1>
<refsect1>
<title>Description</title>
<para>Subscribe or unsubscribe a V4L2 event. Subscribed events are
dequeued by using the &VIDIOC-DQEVENT; ioctl.</para>
<table frame="none" pgwide="1" id="v4l2-event-subscription">
<title>struct <structname>v4l2_event_subscription</structname></title>
<tgroup cols="3">
&cs-str;
<tbody valign="top">
<row>
<entry>__u32</entry>
<entry><structfield>type</structfield></entry>
<entry>Type of the event.</entry>
</row>
<row>
<entry>__u32</entry>
<entry><structfield>reserved</structfield>[7]</entry>
<entry>Reserved for future extensions. Drivers and applications
must set the array to zero.</entry>
</row>
</tbody>
</tgroup>
</table>
<table frame="none" pgwide="1" id="event-type">
<title>Event Types</title>
<tgroup cols="3">
&cs-def;
<tbody valign="top">
<row>
<entry><constant>V4L2_EVENT_ALL</constant></entry>
<entry>0</entry>
<entry>All events. V4L2_EVENT_ALL is valid only with
VIDIOC_UNSUBSCRIBE_EVENT, to unsubscribe all events at once.
</entry>
</row>
<row>
<entry><constant>V4L2_EVENT_VSYNC</constant></entry>
<entry>1</entry>
<entry>This event is triggered on the vertical sync.
This event has &v4l2-event-vsync; associated with it.
</entry>
</row>
<row>
<entry><constant>V4L2_EVENT_EOS</constant></entry>
<entry>2</entry>
<entry>This event is triggered when the end of a stream is reached.
This is typically used with MPEG decoders to report to the application
when the last of the MPEG stream has been decoded.
</entry>
</row>
<row>
<entry><constant>V4L2_EVENT_PRIVATE_START</constant></entry>
<entry>0x08000000</entry>
<entry>Base event number for driver-private events.</entry>
</row>
</tbody>
</tgroup>
</table>
<table frame="none" pgwide="1" id="v4l2-event-vsync">
<title>struct <structname>v4l2_event_vsync</structname></title>
<tgroup cols="3">
&cs-str;
<tbody valign="top">
<row>
<entry>__u8</entry>
<entry><structfield>field</structfield></entry>
<entry>The upcoming field. See &v4l2-field;.</entry>
</row>
</tbody>
</tgroup>
</table>
</refsect1>
</refentry>
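A minimal hedged usage sketch, not part of the reference text (an open fd and
<linux/videodev2.h> assumed): subscribe to the end-of-stream event, then wait
for it with &VIDIOC-DQEVENT;:
	struct v4l2_event_subscription sub;
	struct v4l2_event ev;

	memset(&sub, 0, sizeof(sub));	/* reserved[] must be zero */
	sub.type = V4L2_EVENT_EOS;
	if (ioctl(fd, VIDIOC_SUBSCRIBE_EVENT, &sub) == 0 &&
	    ioctl(fd, VIDIOC_DQEVENT, &ev) == 0)
		/* ev.type == V4L2_EVENT_EOS once decoding has finished */
		handle_end_of_stream();		/* hypothetical helper */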
<!--
Local Variables:
mode: sgml
sgml-parent-document: "v4l2.sgml"
indent-tabs-mode: nil
End:
-->


@ -5518,34 +5518,41 @@ struct _snd_pcm_runtime {
]]>
</programlisting>
</informalexample>
For raw data, the <structfield>size</structfield> field must be
set properly. This specifies the maximum size of the proc file access.
</para>
<para>
The read/write callbacks of raw mode are more direct than the text mode.
You need to use low-level I/O functions such as
<function>copy_from/to_user()</function> to transfer the
data.
<informalexample>
<programlisting>
<![CDATA[
static ssize_t my_file_io_read(struct snd_info_entry *entry,
                               void *file_private_data,
                               struct file *file,
                               char *buf,
                               size_t count,
                               loff_t pos)
{
        /* the caller guarantees that count and pos stay within the
         * entry's size, so no range check is needed here */
        if (copy_to_user(buf, local_data + pos, count))
                return -EFAULT;
        return count;
}
]]>
</programlisting>
</informalexample>
If the size of the info entry has been set up properly,
<structfield>count</structfield> and <structfield>pos</structfield> are
guaranteed to stay within the range from 0 to the given size.
You don't have to check the range in the callbacks unless some
other condition is required.
</para>
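For orientation, a hedged sketch of how such a raw-mode entry might be
registered; my_file_io_read and local_max_size refer to the example above,
and the details may vary between drivers:
	static struct snd_info_entry_ops my_file_io_ops = {
		.read = my_file_io_read,
	};

	/* in the driver's init path; card is the snd_card instance */
	struct snd_info_entry *entry;
	if (!snd_card_proc_new(card, "my-file", &entry)) {
		entry->content = SNDRV_INFO_CONTENT_DATA;
		entry->size = local_max_size;	/* bounds count and pos */
		entry->c.ops = &my_file_io_ops;
	}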
</chapter>


@ -342,7 +342,7 @@ static inline void skel_delete (struct usb_skel *dev)
{
kfree (dev->bulk_in_buffer);
if (dev->bulk_out_buffer != NULL)
usb_free_coherent (dev->udev, dev->bulk_out_size,
dev->bulk_out_buffer,
dev->write_urb->transfer_dma);
usb_free_urb (dev->write_urb);


@ -221,8 +221,8 @@ branches. These different branches are:
- main 2.6.x kernel tree
- 2.6.x.y -stable kernel tree
- 2.6.x -git kernel patches
- subsystem specific kernel trees and patches
- the 2.6.x -next kernel tree for integration tests
2.6.x kernel tree
-----------------
@ -232,9 +232,9 @@ process is as follows:
- As soon as a new kernel is released a two weeks window is open,
during this period of time maintainers can submit big diffs to
Linus, usually the patches that have already been included in the
-next kernel for a few weeks. The preferred way to submit big changes
is using git (the kernel's source management tool, more information
can be found at http://git-scm.com/) but plain patches are also just
fine.
- After two weeks a -rc1 kernel is released and it is now possible to push
only patches that do not include new features that could affect the
@ -293,84 +293,43 @@ daily and represent the current state of Linus' tree. They are more
experimental than -rc kernels since they are generated automatically
without even a cursory glance to see if they are sane.
Subsystem Specific kernel trees and patches
-------------------------------------------
The maintainers of the various kernel subsystems --- and also many
kernel subsystem developers --- expose their current state of
development in source repositories. That way, others can see what is
happening in the different areas of the kernel. In areas where
development is rapid, a developer may be asked to base his submissions
onto such a subsystem kernel tree so that conflicts between the
submission and other already ongoing work are avoided.
Most of these repositories are git trees, but there are also other SCMs
in use, or patch queues being published as quilt series. Addresses of
these subsystem repositories are listed in the MAINTAINERS file. Many
of them can be browsed at http://git.kernel.org/.
Before a proposed patch is committed to such a subsystem tree, it is
subject to review which primarily happens on mailing lists (see the
respective section below). For several kernel subsystems, this review
process is tracked with the tool patchwork. Patchwork offers a web
interface which shows patch postings, any comments on a patch or
revisions to it, and maintainers can mark patches as under review,
accepted, or rejected. Most of these patchwork sites are listed at
http://patchwork.kernel.org/ or http://patchwork.ozlabs.org/.
2.6.x -next kernel tree for integration tests
---------------------------------------------
Before updates from subsystem trees are merged into the mainline 2.6.x
tree, they need to be integration-tested. For this purpose, a special
testing repository exists into which virtually all subsystem trees are
pulled on an almost daily basis:
	http://git.kernel.org/?p=linux/kernel/git/sfr/linux-next.git
	http://linux.f-seidel.de/linux-next/pmwiki/
This way, the -next kernel gives a summary outlook onto what will be
expected to go into the mainline kernel at the next merge period.
Adventurous testers are very welcome to runtime-test the -next kernel.
Bug Reporting
-------------


@ -365,6 +365,7 @@ You can change this at module load time (for a module) with:
regshifts=<shift1>,<shift2>,...
slave_addrs=<addr1>,<addr2>,...
force_kipmid=<enable1>,<enable2>,...
kipmid_max_busy_us=<ustime1>,<ustime2>,...
unload_when_empty=[0|1]
Each of these except si_trydefaults is a list, the first item for the
@ -433,6 +434,7 @@ kernel command line as:
ipmi_si.regshifts=<shift1>,<shift2>,...
ipmi_si.slave_addrs=<addr1>,<addr2>,...
ipmi_si.force_kipmid=<enable1>,<enable2>,...
ipmi_si.kipmid_max_busy_us=<ustime1>,<ustime2>,...
It works the same as the module parameters of the same names.
@ -450,6 +452,16 @@ force this thread on or off. If you force it off and don't have
interrupts, the driver will run VERY slowly. Don't blame me,
these interfaces suck.
Unfortunately, this thread can use a lot of CPU depending on the
interface's performance. The wasted CPU time can interfere with
idle-CPU detection and cause extra power consumption. To avoid
this, the kipmid_max_busy_us parameter sets the maximum amount of time, in
microseconds, that kipmid will spin before sleeping for a tick. This
value sets a balance between performance and CPU waste and needs to be
tuned to your needs. Maybe, someday, auto-tuning will be added, but
that's not a simple thing and even the auto-tuning would need to be
tuned to the user's desired performance.
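For example, a hypothetical module load that caps kipmid at 100
microseconds of busy-waiting per tick on the first interface (the value
must be tuned to your hardware):
	modprobe ipmi_si kipmid_max_busy_us=100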
The driver supports hot add and remove of interfaces. This way,
interfaces can be added or removed after the kernel is up and running.
This is done using /sys/modules/ipmi_si/parameters/hotmod, which is a


@ -1,3 +1,3 @@
obj-m := DocBook/ accounting/ auxdisplay/ connector/ \
filesystems/ filesystems/configfs/ ia64/ laptops/ networking/ \
pcmcia/ spi/ timers/ video4linux/ vm/ watchdog/src/


@ -1,766 +0,0 @@
Dynamic DMA mapping
===================
David S. Miller <davem@redhat.com>
Richard Henderson <rth@cygnus.com>
Jakub Jelinek <jakub@redhat.com>
This document describes the DMA mapping system in terms of the pci_
API. For a similar API that works for generic devices, see
DMA-API.txt.
Most of the 64bit platforms have special hardware that translates bus
addresses (DMA addresses) into physical addresses. This is similar to
how page tables and/or a TLB translates virtual addresses to physical
addresses on a CPU. This is needed so that e.g. PCI devices can
access with a Single Address Cycle (32bit DMA address) any page in the
64bit physical address space. Previously in Linux those 64bit
platforms had to set artificial limits on the maximum RAM size in the
system, so that the virt_to_bus() static scheme works (the DMA address
translation tables were simply filled on bootup to map each bus
address to the physical page __pa(bus_to_virt())).
So that Linux can use the dynamic DMA mapping, it needs some help from the
drivers, namely it has to take into account that DMA addresses should be
mapped only for the time they are actually used and unmapped after the DMA
transfer.
The following API will work of course even on platforms where no such
hardware exists, see e.g. arch/x86/include/asm/pci.h for how it is implemented on
top of the virt_to_bus interface.
First of all, you should make sure
#include <linux/pci.h>
is in your driver. This file will obtain for you the definition of the
dma_addr_t (which can hold any valid DMA address for the platform)
type which should be used everywhere you hold a DMA (bus) address
returned from the DMA mapping functions.
What memory is DMA'able?
The first piece of information you must know is what kernel memory can
be used with the DMA mapping facilities. There has been an unwritten
set of rules regarding this, and this text is an attempt to finally
write them down.
If you acquired your memory via the page allocator
(i.e. __get_free_page*()) or the generic memory allocators
(i.e. kmalloc() or kmem_cache_alloc()) then you may DMA to/from
that memory using the addresses returned from those routines.
This means specifically that you may _not_ use the memory/addresses
returned from vmalloc() for DMA. It is possible to DMA to the
_underlying_ memory mapped into a vmalloc() area, but this requires
walking page tables to get the physical addresses, and then
translating each of those pages back to a kernel address using
something like __va(). [ EDIT: Update this when we integrate
Gerd Knorr's generic code which does this. ]
This rule also means that you may use neither kernel image addresses
(items in data/text/bss segments), nor module image addresses, nor
stack addresses for DMA. These could all be mapped somewhere entirely
different than the rest of physical memory. Even if those classes of
memory could physically work with DMA, you'd need to ensure the I/O
buffers were cacheline-aligned. Without that, you'd see cacheline
sharing problems (data corruption) on CPUs with DMA-incoherent caches.
(The CPU could write to one word, DMA would write to a different one
in the same cache line, and one of them could be overwritten.)
Also, this means that you cannot take the return of a kmap()
call and DMA to/from that. This is similar to vmalloc().
What about block I/O and networking buffers? The block I/O and
networking subsystems make sure that the buffers they use are valid
for you to DMA from/to.
DMA addressing limitations
Does your device have any DMA addressing limitations? For example, is
your device only capable of driving the low order 24-bits of address
on the PCI bus for SAC DMA transfers? If so, you need to inform the
PCI layer of this fact.
By default, the kernel assumes that your device can address the full
32-bits in a SAC cycle. For a 64-bit DAC capable device, this needs
to be increased. And for a device with limitations, as discussed in
the previous paragraph, it needs to be decreased.
pci_alloc_consistent() by default will return 32-bit DMA addresses.
PCI-X specification requires PCI-X devices to support 64-bit
addressing (DAC) for all transactions. And at least one platform (SGI
SN2) requires 64-bit consistent allocations to operate correctly when
the IO bus is in PCI-X mode. Therefore, like with pci_set_dma_mask(),
it's good practice to call pci_set_consistent_dma_mask() to set the
appropriate mask even if your device only supports 32-bit DMA
(default) and especially if it's a PCI-X device.
For correct operation, you must interrogate the PCI layer in your
device probe routine to see if the PCI controller on the machine can
properly support the DMA addressing limitation your device has. It is
good style to do this even if your device holds the default setting,
because this shows that you did think about these issues wrt. your
device.
The query is performed via a call to pci_set_dma_mask():
int pci_set_dma_mask(struct pci_dev *pdev, u64 device_mask);
The query for consistent allocations is performed via a call to
pci_set_consistent_dma_mask():
int pci_set_consistent_dma_mask(struct pci_dev *pdev, u64 device_mask);
Here, pdev is a pointer to the PCI device struct of your device, and
device_mask is a bit mask describing which bits of a PCI address your
device supports. It returns zero if your card can perform DMA
properly on the machine given the address mask you provided.
If it returns non-zero, your device cannot perform DMA properly on
this platform, and attempting to do so will result in undefined
behavior. You must either use a different mask, or not use DMA.
This means that in the failure case, you have three options:
1) Use another DMA mask, if possible (see below).
2) Use some non-DMA mode for data transfer, if possible.
3) Ignore this device and do not initialize it.
It is recommended that your driver print a kernel KERN_WARNING message
when you end up performing either #2 or #3. In this manner, if a user
of your driver reports that performance is bad or that the device is not
even detected, you can ask them for the kernel messages to find out
exactly why.
The standard 32-bit addressing PCI device would do something like
this:
if (pci_set_dma_mask(pdev, DMA_BIT_MASK(32))) {
printk(KERN_WARNING
"mydev: No suitable DMA available.\n");
goto ignore_this_device;
}
Another common scenario is a 64-bit capable device. The approach
here is to try for 64-bit DAC addressing, but back down to a
32-bit mask should that fail. The PCI platform code may fail the
64-bit mask not because the platform is not capable of 64-bit
addressing. Rather, it may fail in this case simply because
32-bit SAC addressing is done more efficiently than DAC addressing.
Sparc64 is one platform which behaves in this way.
Here is how you would handle a 64-bit capable device which can drive
all 64-bits when accessing streaming DMA:
int using_dac;
if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) {
using_dac = 1;
} else if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(32))) {
using_dac = 0;
} else {
printk(KERN_WARNING
"mydev: No suitable DMA available.\n");
goto ignore_this_device;
}
If a card is capable of using 64-bit consistent allocations as well,
the case would look like this:
int using_dac, consistent_using_dac;
if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) {
using_dac = 1;
consistent_using_dac = 1;
pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
} else if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(32))) {
using_dac = 0;
consistent_using_dac = 0;
pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
} else {
printk(KERN_WARNING
"mydev: No suitable DMA available.\n");
goto ignore_this_device;
}
pci_set_consistent_dma_mask() will always be able to set the same or a
smaller mask as pci_set_dma_mask(). However for the rare case that a
device driver only uses consistent allocations, one would have to
check the return value from pci_set_consistent_dma_mask().
Finally, if your device can only drive the low 24-bits of
address during PCI bus mastering you might do something like:
if (pci_set_dma_mask(pdev, DMA_BIT_MASK(24))) {
printk(KERN_WARNING
"mydev: 24-bit DMA addressing not available.\n");
goto ignore_this_device;
}
When pci_set_dma_mask() is successful, and returns zero, the PCI layer
saves away this mask you have provided. The PCI layer will use this
information later when you make DMA mappings.
There is a case which we are aware of at this time, which is worth
mentioning in this documentation. If your device supports multiple
functions (for example a sound card provides playback and record
functions) and the various different functions have _different_
DMA addressing limitations, you may wish to probe each mask and
only provide the functionality which the machine can handle. It
is important that the last call to pci_set_dma_mask() be for the
most specific mask.
Here is pseudo-code showing how this might be done:
#define PLAYBACK_ADDRESS_BITS DMA_BIT_MASK(32)
#define RECORD_ADDRESS_BITS DMA_BIT_MASK(24)
struct my_sound_card *card;
struct pci_dev *pdev;
...
if (!pci_set_dma_mask(pdev, PLAYBACK_ADDRESS_BITS)) {
card->playback_enabled = 1;
} else {
card->playback_enabled = 0;
printk(KERN_WARNING "%s: Playback disabled due to DMA limitations.\n",
card->name);
}
if (!pci_set_dma_mask(pdev, RECORD_ADDRESS_BITS)) {
card->record_enabled = 1;
} else {
card->record_enabled = 0;
printk(KERN_WARNING "%s: Record disabled due to DMA limitations.\n",
card->name);
}
A sound card was used as an example here because this genre of PCI
devices seems to be littered with ISA chips given a PCI front end,
and thus retaining the 16MB DMA addressing limitations of ISA.
Types of DMA mappings
There are two types of DMA mappings:
- Consistent DMA mappings which are usually mapped at driver
initialization, unmapped at the end and for which the hardware should
guarantee that the device and the CPU can access the data
in parallel and will see updates made by each other without any
explicit software flushing.
Think of "consistent" as "synchronous" or "coherent".
The current default is to return consistent memory in the low 32
bits of the PCI bus space. However, for future compatibility you
should set the consistent mask even if this default is fine for your
driver.
Good examples of what to use consistent mappings for are:
- Network card DMA ring descriptors.
- SCSI adapter mailbox command data structures.
- Device firmware microcode executed out of
main memory.
The invariant these examples all require is that any CPU store
to memory is immediately visible to the device, and vice
versa. Consistent mappings guarantee this.
IMPORTANT: Consistent DMA memory does not preclude the usage of
proper memory barriers. The CPU may reorder stores to
consistent memory just as it may normal memory. Example:
if it is important for the device to see the first word
of a descriptor updated before the second, you must do
something like:
desc->word0 = address;
wmb();
desc->word1 = DESC_VALID;
in order to get correct behavior on all platforms.
Also, on some platforms your driver may need to flush CPU write
buffers in much the same way as it needs to flush write buffers
found in PCI bridges (such as by reading a register's value
after writing it).
- Streaming DMA mappings which are usually mapped for one DMA transfer,
unmapped right after it (unless you use pci_dma_sync_* below) and for which
hardware can optimize for sequential accesses.
This of "streaming" as "asynchronous" or "outside the coherency
domain".
Good examples of what to use streaming mappings for are:
- Networking buffers transmitted/received by a device.
- Filesystem buffers written/read by a SCSI device.
The interfaces for using this type of mapping were designed in
such a way that an implementation can make whatever performance
optimizations the hardware allows. To this end, when using
such mappings you must be explicit about what you want to happen.
Neither type of DMA mapping has alignment restrictions that come
from PCI, although some devices may have such restrictions.
Also, systems with caches that aren't DMA-coherent will work better
when the underlying buffers don't share cache lines with other data.
Using Consistent DMA mappings.
To allocate and map large (PAGE_SIZE or so) consistent DMA regions,
you should do:
dma_addr_t dma_handle;
cpu_addr = pci_alloc_consistent(pdev, size, &dma_handle);
where pdev is a struct pci_dev *. This may be called in interrupt context.
You should use dma_alloc_coherent (see DMA-API.txt) for buses
where devices don't have struct pci_dev (like ISA, EISA).
This argument is needed because the DMA translations may be bus
specific (and often is private to the bus which the device is attached
to).
Size is the length of the region you want to allocate, in bytes.
This routine will allocate RAM for that region, so it acts similarly to
__get_free_pages (but takes size instead of a page order). If your
driver needs regions sized smaller than a page, you may prefer using
the pci_pool interface, described below.
The consistent DMA mapping interfaces, for non-NULL pdev, will by
default return a DMA address which is SAC (Single Address Cycle)
addressable. Even if the device indicates (via PCI dma mask) that it
may address the upper 32-bits and thus perform DAC cycles, consistent
allocation will only return > 32-bit PCI addresses for DMA if the
consistent dma mask has been explicitly changed via
pci_set_consistent_dma_mask(). This is true of the pci_pool interface
as well.
pci_alloc_consistent returns two values: the virtual address which you
can use to access it from the CPU and dma_handle which you pass to the
card.
The cpu return address and the DMA bus master address are both
guaranteed to be aligned to the smallest PAGE_SIZE order which
is greater than or equal to the requested size. This invariant
exists (for example) to guarantee that if you allocate a chunk
which is smaller than or equal to 64 kilobytes, the extent of the
buffer you receive will not cross a 64K boundary.
To unmap and free such a DMA region, you call:
pci_free_consistent(pdev, size, cpu_addr, dma_handle);
where pdev, size are the same as in the above call and cpu_addr and
dma_handle are the values pci_alloc_consistent returned to you.
This function may not be called in interrupt context.
If your driver needs lots of smaller memory regions, you can write
custom code to subdivide pages returned by pci_alloc_consistent,
or you can use the pci_pool API to do that. A pci_pool is like
a kmem_cache, but it uses pci_alloc_consistent not __get_free_pages.
Also, it understands common hardware constraints for alignment,
like queue heads needing to be aligned on N byte boundaries.
Create a pci_pool like this:
struct pci_pool *pool;
pool = pci_pool_create(name, pdev, size, align, alloc);
The "name" is for diagnostics (like a kmem_cache name); pdev and size
are as above. The device's hardware alignment requirement for this
type of data is "align" (which is expressed in bytes, and must be a
power of two). If your device has no boundary crossing restrictions,
pass 0 for alloc; passing 4096 says memory allocated from this pool
must not cross 4KByte boundaries (but at that time it may be better to
go for pci_alloc_consistent directly instead).
Allocate memory from a pci pool like this:
cpu_addr = pci_pool_alloc(pool, flags, &dma_handle);
flags are SLAB_KERNEL if blocking is permitted (not in_interrupt nor
holding SMP locks), SLAB_ATOMIC otherwise. Like pci_alloc_consistent,
this returns two values, cpu_addr and dma_handle.
Free memory that was allocated from a pci_pool like this:
pci_pool_free(pool, cpu_addr, dma_handle);
where pool is what you passed to pci_pool_alloc, and cpu_addr and
dma_handle are the values pci_pool_alloc returned. This function
may be called in interrupt context.
Destroy a pci_pool by calling:
pci_pool_destroy(pool);
Make sure you've called pci_pool_free for all memory allocated
from a pool before you destroy the pool. This function may not
be called in interrupt context.
DMA Direction
The interfaces described in subsequent portions of this document
take a DMA direction argument, which is an integer and takes on
one of the following values:
PCI_DMA_BIDIRECTIONAL
PCI_DMA_TODEVICE
PCI_DMA_FROMDEVICE
PCI_DMA_NONE
You should provide the exact DMA direction if you know it.
PCI_DMA_TODEVICE means "from main memory to the PCI device"
PCI_DMA_FROMDEVICE means "from the PCI device to main memory"
It is the direction in which the data moves during the DMA
transfer.
You are _strongly_ encouraged to specify this as precisely
as you possibly can.
If you absolutely cannot know the direction of the DMA transfer,
specify PCI_DMA_BIDIRECTIONAL. It means that the DMA can go in
either direction. The platform guarantees that you may legally
specify this, and that it will work, but this may be at the
cost of performance for example.
The value PCI_DMA_NONE is to be used for debugging. You can
hold this in a data structure before you come to know the
precise direction, and this will help catch cases where your
direction tracking logic has failed to set things up properly.
Another advantage of specifying this value precisely (outside of
potential platform-specific optimizations of such) is for debugging.
Some platforms actually have a write permission boolean which DMA
mappings can be marked with, much like page protections in the user
program address space. Such platforms can and do report errors in the
kernel logs when the PCI controller hardware detects violation of the
permission setting.
Only streaming mappings specify a direction, consistent mappings
implicitly have a direction attribute setting of
PCI_DMA_BIDIRECTIONAL.
The SCSI subsystem tells you the direction to use in the
'sc_data_direction' member of the SCSI command your driver is
working on.
For Networking drivers, it's a rather simple affair. For transmit
packets, map/unmap them with the PCI_DMA_TODEVICE direction
specifier. For receive packets, just the opposite, map/unmap them
with the PCI_DMA_FROMDEVICE direction specifier.
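For instance, a transmit path might look like this minimal sketch, where
pdev, buf and len are hypothetical driver state:
	dma_addr_t handle;

	handle = pci_map_single(pdev, buf, len, PCI_DMA_TODEVICE);
	/* ... start the DMA and wait for the completion interrupt ... */
	pci_unmap_single(pdev, handle, len, PCI_DMA_TODEVICE);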
Using Streaming DMA mappings
The streaming DMA mapping routines can be called from interrupt
context. There are two versions of each map/unmap, one which will
map/unmap a single memory region, and one which will map/unmap a
scatterlist.
To map a single region, you do:
struct pci_dev *pdev = mydev->pdev;
dma_addr_t dma_handle;
void *addr = buffer->ptr;
size_t size = buffer->len;
dma_handle = pci_map_single(pdev, addr, size, direction);
and to unmap it:
pci_unmap_single(pdev, dma_handle, size, direction);
You should call pci_unmap_single when the DMA activity is finished, e.g.
from the interrupt which told you that the DMA transfer is done.
Using cpu pointers like this for single mappings has a disadvantage,
you cannot reference HIGHMEM memory in this way. Thus, there is a
map/unmap interface pair akin to pci_{map,unmap}_single. These
interfaces deal with page/offset pairs instead of cpu pointers.
Specifically:
struct pci_dev *pdev = mydev->pdev;
dma_addr_t dma_handle;
struct page *page = buffer->page;
unsigned long offset = buffer->offset;
size_t size = buffer->len;
dma_handle = pci_map_page(pdev, page, offset, size, direction);
...
pci_unmap_page(pdev, dma_handle, size, direction);
Here, "offset" means byte offset within the given page.
With scatterlists, you map a region gathered from several regions by:
int i, count = pci_map_sg(pdev, sglist, nents, direction);
struct scatterlist *sg;
for_each_sg(sglist, sg, count, i) {
hw_address[i] = sg_dma_address(sg);
hw_len[i] = sg_dma_len(sg);
}
where nents is the number of entries in the sglist.
The implementation is free to merge several consecutive sglist entries
into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
consecutive sglist entries can be merged into one provided the first one
ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have very
limited number of scatter-gather entries) and returns the actual number
of sg entries it mapped them to. On failure 0 is returned.
Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.
To unmap a scatterlist, just call:
pci_unmap_sg(pdev, sglist, nents, direction);
Again, make sure DMA activity has already finished.
PLEASE NOTE: The 'nents' argument to the pci_unmap_sg call must be
the _same_ one you passed into the pci_map_sg call,
it should _NOT_ be the 'count' value _returned_ from the
pci_map_sg call.
Every pci_map_{single,sg} call should have its pci_unmap_{single,sg}
counterpart, because the bus address space is a shared resource (although
in some ports the mapping is per bus, so fewer devices contend for the
same bus address space) and you could render the machine unusable by eating
all bus addresses.
If you need to use the same streaming DMA region multiple times and touch
the data in between the DMA transfers, the buffer needs to be synced
properly in order for the cpu and device to see the most up-to-date and
correct copy of the DMA buffer.
So, firstly, just map it with pci_map_{single,sg}, and after each DMA
transfer call either:
pci_dma_sync_single_for_cpu(pdev, dma_handle, size, direction);
or:
pci_dma_sync_sg_for_cpu(pdev, sglist, nents, direction);
as appropriate.
Then, if you wish to let the device get at the DMA area again,
finish accessing the data with the cpu, and then before actually
giving the buffer to the hardware call either:
pci_dma_sync_single_for_device(pdev, dma_handle, size, direction);
or:
pci_dma_sync_sg_for_device(dev, sglist, nents, direction);
as appropriate.
After the last DMA transfer call one of the DMA unmap routines
pci_unmap_{single,sg}. If you don't touch the data from the first pci_map_*
call till pci_unmap_*, then you don't have to call the pci_dma_sync_*
routines at all.
Here is pseudo code which shows a situation in which you would need
to use the pci_dma_sync_*() interfaces.
my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
{
dma_addr_t mapping;
mapping = pci_map_single(cp->pdev, buffer, len, PCI_DMA_FROMDEVICE);
cp->rx_buf = buffer;
cp->rx_len = len;
cp->rx_dma = mapping;
give_rx_buf_to_card(cp);
}
...
my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs)
{
struct my_card *cp = devid;
...
if (read_card_status(cp) == RX_BUF_TRANSFERRED) {
struct my_card_header *hp;
/* Examine the header to see if we wish
* to accept the data. But synchronize
* the DMA transfer with the CPU first
* so that we see updated contents.
*/
pci_dma_sync_single_for_cpu(cp->pdev, cp->rx_dma,
cp->rx_len,
PCI_DMA_FROMDEVICE);
/* Now it is safe to examine the buffer. */
hp = (struct my_card_header *) cp->rx_buf;
if (header_is_ok(hp)) {
pci_unmap_single(cp->pdev, cp->rx_dma, cp->rx_len,
PCI_DMA_FROMDEVICE);
pass_to_upper_layers(cp->rx_buf);
make_and_setup_new_rx_buf(cp);
} else {
/* Just sync the buffer and give it back
* to the card.
*/
pci_dma_sync_single_for_device(cp->pdev,
cp->rx_dma,
cp->rx_len,
PCI_DMA_FROMDEVICE);
give_rx_buf_to_card(cp);
}
}
}
Drivers converted fully to this interface should not use virt_to_bus any
longer, nor should they use bus_to_virt. Some drivers have to be changed a
little bit, because there is no longer an equivalent to bus_to_virt in the
dynamic DMA mapping scheme - you have to always store the DMA addresses
returned by the pci_alloc_consistent, pci_pool_alloc, and pci_map_single
calls (pci_map_sg stores them in the scatterlist itself if the platform
supports dynamic DMA mapping in hardware) in your driver structures and/or
in the card registers.
All PCI drivers should be using these interfaces with no exceptions.
It is planned to completely remove virt_to_bus() and bus_to_virt() as
they are entirely deprecated. Some ports already do not provide these
as it is impossible to correctly support them.
Optimizing Unmap State Space Consumption
On many platforms, pci_unmap_{single,page}() is simply a nop.
Therefore, keeping track of the mapping address and length is a waste
of space. Instead of filling your drivers up with ifdefs and the like
to "work around" this (which would defeat the whole purpose of a
portable API) the following facilities are provided.
Actually, instead of describing the macros one by one, we'll
transform some example code.
1) Use DECLARE_PCI_UNMAP_{ADDR,LEN} in state saving structures.
Example, before:
struct ring_state {
struct sk_buff *skb;
dma_addr_t mapping;
__u32 len;
};
after:
struct ring_state {
struct sk_buff *skb;
DECLARE_PCI_UNMAP_ADDR(mapping)
DECLARE_PCI_UNMAP_LEN(len)
};
NOTE: DO NOT put a semicolon at the end of the DECLARE_*()
macro.
2) Use pci_unmap_{addr,len}_set to set these values.
Example, before:
ringp->mapping = FOO;
ringp->len = BAR;
after:
pci_unmap_addr_set(ringp, mapping, FOO);
pci_unmap_len_set(ringp, len, BAR);
3) Use pci_unmap_{addr,len} to access these values.
Example, before:
pci_unmap_single(pdev, ringp->mapping, ringp->len,
PCI_DMA_FROMDEVICE);
after:
pci_unmap_single(pdev,
pci_unmap_addr(ringp, mapping),
pci_unmap_len(ringp, len),
PCI_DMA_FROMDEVICE);
It really should be self-explanatory. We treat the ADDR and LEN
separately, because it is possible for an implementation to only
need the address in order to perform the unmap operation.
Platform Issues
If you are just writing drivers for Linux and do not maintain
an architecture port for the kernel, you can safely skip down
to "Closing".
1) Struct scatterlist requirements.
Struct scatterlist must contain, at a minimum, the following
members:
struct page *page;
unsigned int offset;
unsigned int length;
The base address is specified by a "page+offset" pair.
Previous versions of struct scatterlist contained a "void *address"
field that was sometimes used instead of page+offset. As of Linux
2.5, page+offset is always used, and the "address" field has been
deleted.
2) More to come...
Handling Errors
DMA address space is limited on some architectures and an allocation
failure can be determined by:
- checking if pci_alloc_consistent returns NULL or pci_map_sg returns 0
- checking the returned dma_addr_t of pci_map_single and pci_map_page
by using pci_dma_mapping_error():
dma_addr_t dma_handle;
dma_handle = pci_map_single(pdev, addr, size, direction);
if (pci_dma_mapping_error(pdev, dma_handle)) {
/*
* reduce current DMA mapping usage,
* delay and try again later or
* reset driver.
*/
}
Closing
This document, and the API itself, would not be in its current
form without the feedback and suggestions from numerous individuals.
We would like to specifically mention, in no particular order, the
following people:
Russell King <rmk@arm.linux.org.uk>
Leo Dagum <dagum@barrel.engr.sgi.com>
Ralf Baechle <ralf@oss.sgi.com>
Grant Grundler <grundler@cup.hp.com>
Jay Estabrook <Jay.Estabrook@compaq.com>
Thomas Sailer <sailer@ife.ee.ethz.ch>
Andrea Arcangeli <andrea@suse.de>
Jens Axboe <jens.axboe@oracle.com>
David Mosberger-Tang <davidm@hpl.hp.com>


@ -216,7 +216,7 @@ The driver should return one of the following result codes:
- PCI_ERS_RESULT_NEED_RESET
Driver returns this if it thinks the device is not
recoverable in its current state and it needs a slot
reset to proceed.
- PCI_ERS_RESULT_DISCONNECT
@ -241,7 +241,7 @@ in working condition.
The driver is not supposed to restart normal driver I/O operations
at this point. It should limit itself to "probing" the device to
check its recoverability status. If all is right, then the platform
will call resume() once all drivers have ack'd link_reset().
Result codes:


@ -13,7 +13,7 @@ Reporting (AER) driver and provides information on how to use it, as
well as how to enable the drivers of endpoint devices to conform with
PCI Express AER driver.
1.2 Copyright (C) Intel Corporation 2006.
1.3 What is the PCI Express AER Driver?
console. If it's a correctable error, it is output as a warning.
Otherwise, it is printed as an error. So users can choose a different
log level to filter out correctable error messages.
Below shows an example:
0000:50:00.0: PCIe Bus Error: severity=Uncorrected (Fatal), type=Transaction Layer, id=0500(Requester ID)
0000:50:00.0: device [8086:0329] error status/mask=00100000/00000000
0000:50:00.0: [20] Unsupported Request (First)
0000:50:00.0: TLP Header: 04000001 00200a03 05010000 00050100
In the example, 'Requester ID' means the ID of the device that sends
the error message to the root port. Please refer to the PCI Express specs for
@ -112,7 +108,7 @@ but the PCI Express link itself is fully functional. Fatal errors, on
the other hand, cause the link to be unreliable.
When AER is enabled, a PCI Express device will automatically send an
error message to the PCIe root port above it when the device captures
an error. The Root Port, upon receiving an error reporting message,
internally processes and logs the error message in its PCI Express
capability structure. Error information being logged includes storing
@ -198,8 +194,9 @@ to reset link, AER port service driver is required to provide the
function to reset the link. First, the kernel checks whether the upstream
component has an aer driver. If it does, the kernel uses the reset_link
callback of the aer driver. If the upstream component has no aer driver
and the port is a downstream port, we will perform a hot reset as the
default by setting the Secondary Bus Reset bit of the Bridge Control
register associated with the downstream port. As for upstream ports,
they should provide their own aer service drivers with reset_link
function. If error_detected returns PCI_ERS_RESULT_CAN_RECOVER and
reset_link returns PCI_ERS_RESULT_RECOVERED, the error handling goes
@ -253,11 +250,11 @@ cleanup uncorrectable status register. Pls. refer to section 3.3.
4. Software error injection
Debugging PCIe AER error recovery code is quite difficult because it
is hard to trigger real hardware errors. Software based error
injection can be used to fake various kinds of PCIe errors.
First you should enable PCIe AER software error injection in your kernel
configuration, that is, the following item should be in your .config.
CONFIG_PCIEAER_INJECT=y or CONFIG_PCIEAER_INJECT=m


@ -6,16 +6,22 @@ checklist.txt
- Review Checklist for RCU Patches
listRCU.txt
- Using RCU to Protect Read-Mostly Linked Lists
lockdep.txt
- RCU and lockdep checking
NMI-RCU.txt
- Using RCU to Protect Dynamic NMI Handlers
rcubarrier.txt
- RCU and Unloadable Modules
rculist_nulls.txt
- RCU list primitives for use with SLAB_DESTROY_BY_RCU
rcuref.txt
- Reference-count design for elements of lists/arrays protected by RCU
rcu.txt
- RCU Concepts
RTFP.txt
- List of RCU papers (bibliography) going back to 1980.
stallwarn.txt
- RCU CPU stall warnings (CONFIG_RCU_CPU_STALL_DETECTOR)
torture.txt
- RCU Torture Test Operation (CONFIG_RCU_TORTURE_TEST)
trace.txt


@ -34,7 +34,7 @@ NMI handler.
cpu = smp_processor_id();
++nmi_count(cpu);
if (!rcu_dereference_sched(nmi_callback)(regs, cpu))
default_do_nmi(regs);
nmi_exit();
@ -47,12 +47,13 @@ function pointer. If this handler returns zero, do_nmi() invokes the
default_do_nmi() function to handle a machine-specific NMI. Finally,
preemption is restored.
In theory, rcu_dereference_sched() is not needed, since this code runs
only on i386, which in theory does not need rcu_dereference_sched()
anyway. However, in practice it is a good documentation aid, particularly
for anyone attempting to do something similar on Alpha or on systems
with aggressive optimizing compilers.
Quick Quiz: Why might the rcu_dereference_sched() be necessary on Alpha,
given that the code referenced by the pointer is read-only?
@ -99,17 +100,21 @@ invoke irq_enter() and irq_exit() on NMI entry and exit, respectively.
Answer to Quick Quiz
Why might the rcu_dereference_sched() be necessary on Alpha, given
that the code referenced by the pointer is read-only?
Answer: The caller to set_nmi_callback() might well have
initialized some data that is to be used by the new NMI
handler. In this case, the rcu_dereference_sched() would
be needed, because otherwise a CPU that received an NMI
just after the new handler was set might see the pointer
to the new NMI handler, but the old pre-initialized
version of the handler's data.
This same sad story can happen on other CPUs when using
a compiler with aggressive pointer-value speculation
optimizations.
More important, the rcu_dereference_sched() makes it
clear to someone reading the code that the pointer is
being protected by RCU-sched.


@ -25,10 +25,10 @@ to be referencing the data structure. However, this mechanism was not
optimized for modern computer systems, which is not surprising given
that these overheads were not so expensive in the mid-80s. Nonetheless,
passive serialization appears to be the first deferred-destruction
mechanism to be used in production. Furthermore, the relevant patent
has lapsed, so this approach may be used in non-GPL software, if desired.
(In contrast, implementation of RCU is permitted only in software licensed
under either GPL or LGPL. Sorry!!!)
In 1990, Pugh [Pugh90] noted that explicitly tracking which threads
were reading a given data structure permitted deferred free to operate
@ -150,6 +150,18 @@ preemptible RCU [PaulEMcKenney2007PreemptibleRCU], and the three-part
LWN "What is RCU?" series [PaulEMcKenney2007WhatIsRCUFundamentally,
PaulEMcKenney2008WhatIsRCUUsage, and PaulEMcKenney2008WhatIsRCUAPI].
2008 saw a journal paper on real-time RCU [DinakarGuniguntala2008IBMSysJ],
a history of how Linux changed RCU more than RCU changed Linux
[PaulEMcKenney2008RCUOSR], and a design overview of hierarchical RCU
[PaulEMcKenney2008HierarchicalRCU].
2009 introduced user-level RCU algorithms [PaulEMcKenney2009MaliciousURCU],
which Mathieu Desnoyers is now maintaining [MathieuDesnoyers2009URCU]
[MathieuDesnoyersPhD]. TINY_RCU [PaulEMcKenney2009BloatWatchRCU] made
its appearance, as did expedited RCU [PaulEMcKenney2009expeditedRCU].
The problem of resizeable RCU-protected hash tables may now be on a path
to a solution [JoshTriplett2009RPHash].
Bibtex Entries
@article{Kung80
@ -730,6 +742,11 @@ Revised:
"
}
#
# "What is RCU?" LWN series.
#
########################################################################
@article{DinakarGuniguntala2008IBMSysJ
,author="D. Guniguntala and P. E. McKenney and J. Triplett and J. Walpole"
,title="The read-copy-update mechanism for supporting real-time applications on shared-memory multiprocessor systems with {Linux}"
@ -820,3 +837,39 @@ Revised:
Uniprocessor assumptions allow simplified RCU implementation.
"
}
@unpublished{PaulEMcKenney2009expeditedRCU
,Author="Paul E. McKenney"
,Title="[{PATCH} -tip 0/3] expedited 'big hammer' {RCU} grace periods"
,month="June"
,day="25"
,year="2009"
,note="Available:
\url{http://lkml.org/lkml/2009/6/25/306}
[Viewed August 16, 2009]"
,annotation="
First posting of expedited RCU to be accepted into -tip.
"
}
@unpublished{JoshTriplett2009RPHash
,Author="Josh Triplett"
,Title="Scalable concurrent hash tables via relativistic programming"
,month="September"
,year="2009"
,note="Linux Plumbers Conference presentation"
,annotation="
RP fun with hash tables.
"
}
@phdthesis{MathieuDesnoyersPhD
, title = "Low-Impact Operating System Tracing"
, author = "Mathieu Desnoyers"
, school = "Ecole Polytechnique de Montr\'{e}al"
, month = "December"
, year = 2009
,note="Available:
\url{http://www.lttng.org/pub/thesis/desnoyers-dissertation-2009-12.pdf}
[Viewed December 9, 2009]"
}


@ -8,13 +8,12 @@ would cause. This list is based on experiences reviewing such patches
over a rather long period of time, but improvements are always welcome!
0. Is RCU being applied to a read-mostly situation? If the data
structure is updated more than about 10% of the time, then
you should strongly consider some other approach, unless
detailed performance measurements show that RCU is nonetheless
the right tool for the job. Yes, you might think of RCU
as simply cutting overhead off of the readers and imposing it
on the writers. That is exactly why normal uses of RCU will
do much more reading than updating.
structure is updated more than about 10% of the time, then you
should strongly consider some other approach, unless detailed
performance measurements show that RCU is nonetheless the right
tool for the job. Yes, RCU does reduce read-side overhead by
increasing write-side overhead, which is exactly why normal uses
of RCU will do much more reading than updating.
Another exception is where performance is not an issue, and RCU
provides a simpler implementation. An example of this situation
@ -35,13 +34,13 @@ over a rather long period of time, but improvements are always welcome!
If you choose #b, be prepared to describe how you have handled
memory barriers on weakly ordered machines (pretty much all of
them -- even x86 allows reads to be reordered), and be prepared
to explain why this added complexity is worthwhile. If you
choose #c, be prepared to explain how this single task does not
become a major bottleneck on big multiprocessor machines (for
example, if the task is updating information relating to itself
that other tasks can read, there by definition can be no
bottleneck).
them -- even x86 allows later loads to be reordered to precede
earlier stores), and be prepared to explain why this added
complexity is worthwhile. If you choose #c, be prepared to
explain how this single task does not become a major bottleneck on
big multiprocessor machines (for example, if the task is updating
information relating to itself that other tasks can read, there
by definition can be no bottleneck).
2. Do the RCU read-side critical sections make proper use of
rcu_read_lock() and friends? These primitives are needed
@ -51,8 +50,10 @@ over a rather long period of time, but improvements are always welcome!
actuarial risk of your kernel.
As a rough rule of thumb, any dereference of an RCU-protected
pointer must be covered by rcu_read_lock() or rcu_read_lock_bh()
or by the appropriate update-side lock.
pointer must be covered by rcu_read_lock(), rcu_read_lock_bh(),
rcu_read_lock_sched(), or by the appropriate update-side lock.
Disabling of preemption can serve as rcu_read_lock_sched(), but
is less readable.
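For example, a minimal vanilla-RCU reader sketch (gp being a hypothetical
RCU-protected global pointer):
	rcu_read_lock();
	p = rcu_dereference(gp);
	if (p)
		do_something(p->a);	/* must not block under vanilla RCU */
	rcu_read_unlock();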
3. Does the update code tolerate concurrent accesses?
@ -62,25 +63,27 @@ over a rather long period of time, but improvements are always welcome!
of ways to handle this concurrency, depending on the situation:
a. Use the RCU variants of the list and hlist update
primitives to add, remove, and replace elements on an
RCU-protected list. Alternatively, use the RCU-protected
trees that have been added to the Linux kernel.
primitives to add, remove, and replace elements on
an RCU-protected list. Alternatively, use the other
RCU-protected data structures that have been added to
the Linux kernel.
This is almost always the best approach.
b. Proceed as in (a) above, but also maintain per-element
locks (that are acquired by both readers and writers)
that guard per-element state. Of course, fields that
the readers refrain from accessing can be guarded by the
update-side lock.
the readers refrain from accessing can be guarded by
some other lock acquired only by updaters, if desired.
This works quite well, also.
c. Make updates appear atomic to readers. For example,
pointer updates to properly aligned fields will appear
atomic, as will individual atomic primitives. Operations
performed under a lock and sequences of multiple atomic
primitives will -not- appear to be atomic.
pointer updates to properly aligned fields will
appear atomic, as will individual atomic primitives.
Sequences of operations performed under a lock will -not-
appear to be atomic to RCU readers, nor will sequences
of multiple atomic primitives.
This can work, but is starting to get a bit tricky.
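For example, a hedged sketch of the classic copy-then-publish update
(struct foo, gp, old_fp, and new_fp are hypothetical, and the update-side
lock is assumed held):
	new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
	if (!new_fp)
		return -ENOMEM;
	*new_fp = *old_fp;		/* copy the old contents... */
	new_fp->a = new_a;		/* ...then modify the copy */
	rcu_assign_pointer(gp, new_fp);	/* readers see old or new, never a mix */
	synchronize_rcu();		/* wait for pre-existing readers */
	kfree(old_fp);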
@ -98,9 +101,9 @@ over a rather long period of time, but improvements are always welcome!
a new structure containing updated values.
4. Weakly ordered CPUs pose special challenges. Almost all CPUs
are weakly ordered -- even i386 CPUs allow reads to be reordered.
RCU code must take all of the following measures to prevent
memory-corruption problems:
are weakly ordered -- even x86 CPUs allow later loads to be
reordered to precede earlier stores. RCU code must take all of
the following measures to prevent memory-corruption problems:
a. Readers must maintain proper ordering of their memory
accesses. The rcu_dereference() primitive ensures that
@ -113,14 +116,25 @@ over a rather long period of time, but improvements are always welcome!
The rcu_dereference() primitive is also an excellent
documentation aid, letting the person reading the code
know exactly which pointers are protected by RCU.
Please note that compilers can also reorder code, and
they are becoming increasingly aggressive about doing
just that. The rcu_dereference() primitive therefore
also prevents destructive compiler optimizations.
The rcu_dereference() primitive is used by the various
"_rcu()" list-traversal primitives, such as the
list_for_each_entry_rcu(). Note that it is perfectly
legal (if redundant) for update-side code to use
rcu_dereference() and the "_rcu()" list-traversal
primitives. This is particularly useful in code
that is common to readers and updaters.
The rcu_dereference() primitive is used by the
various "_rcu()" list-traversal primitives, such
as the list_for_each_entry_rcu(). Note that it is
perfectly legal (if redundant) for update-side code to
use rcu_dereference() and the "_rcu()" list-traversal
primitives. This is particularly useful in code that
is common to readers and updaters. However, lockdep
will complain if you access rcu_dereference() outside
of an RCU read-side critical section. See lockdep.txt
to learn what to do about this.
Of course, neither rcu_dereference() nor the "_rcu()"
list-traversal primitives can substitute for a good
concurrency design coordinating among multiple updaters.
b. If the list macros are being used, the list_add_tail_rcu()
and list_add_rcu() primitives must be used in order
@ -135,11 +149,14 @@ over a rather long period of time, but improvements are always welcome!
readers. Similarly, if the hlist macros are being used,
the hlist_del_rcu() primitive is required.
The list_replace_rcu() primitive may be used to
replace an old structure with a new one in an
RCU-protected list.
The list_replace_rcu() and hlist_replace_rcu() primitives
may be used to replace an old structure with a new one
in their respective types of RCU-protected lists.
d. Updates must ensure that initialization of a given
d. Rules similar to (4b) and (4c) apply to the "hlist_nulls"
type of RCU-protected linked lists.
e. Updates must ensure that initialization of a given
structure happens before pointers to that structure are
publicized. Use the rcu_assign_pointer() primitive
when publicizing a pointer to a structure that can
@ -151,16 +168,31 @@ over a rather long period of time, but improvements are always welcome!
it cannot block.
6. Since synchronize_rcu() can block, it cannot be called from
any sort of irq context. Ditto for synchronize_sched() and
synchronize_srcu().
any sort of irq context. The same rule applies for
synchronize_rcu_bh(), synchronize_sched(), synchronize_srcu(),
synchronize_rcu_expedited(), synchronize_rcu_bh_expedited(),
synchronize_sched_expedited(), and synchronize_srcu_expedited().
7. If the updater uses call_rcu(), then the corresponding readers
must use rcu_read_lock() and rcu_read_unlock(). If the updater
uses call_rcu_bh(), then the corresponding readers must use
rcu_read_lock_bh() and rcu_read_unlock_bh(). If the updater
uses call_rcu_sched(), then the corresponding readers must
disable preemption. Mixing things up will result in confusion
and broken kernels.
The expedited forms of these primitives have the same semantics
as the non-expedited forms, but expediting is both expensive
and unfriendly to real-time workloads. Use of the expedited
primitives should be restricted to rare configuration-change
operations that would not normally be undertaken while a real-time
workload is running.
7. If the updater uses call_rcu() or synchronize_rcu(), then the
corresponding readers must use rcu_read_lock() and
rcu_read_unlock(). If the updater uses call_rcu_bh() or
synchronize_rcu_bh(), then the corresponding readers must
use rcu_read_lock_bh() and rcu_read_unlock_bh(). If the
updater uses call_rcu_sched() or synchronize_sched(), then
the corresponding readers must disable preemption, possibly
by calling rcu_read_lock_sched() and rcu_read_unlock_sched().
If the updater uses synchronize_srcu(), then the corresponding
readers must use srcu_read_lock() and srcu_read_unlock(),
and with the same srcu_struct. The rules for the expedited
primitives are the same as for their non-expedited counterparts.
Mixing things up will result in confusion and broken kernels.
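For example, a hedged sketch of a correctly matched RCU-bh pairing
(gp, struct foo's ->rcu field, and foo_reclaim() are hypothetical):
	/* Updater, with the update-side lock held. */
	old_fp = gp;
	rcu_assign_pointer(gp, new_fp);
	call_rcu_bh(&old_fp->rcu, foo_reclaim);

	/* Readers must then use the _bh flavor throughout. */
	rcu_read_lock_bh();
	p = rcu_dereference_bh(gp);
	if (p)
		do_something(p->a);
	rcu_read_unlock_bh();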
One exception to this rule: rcu_read_lock() and rcu_read_unlock()
may be substituted for rcu_read_lock_bh() and rcu_read_unlock_bh()
@ -212,6 +244,8 @@ over a rather long period of time, but improvements are always welcome!
e. Periodically invoke synchronize_rcu(), permitting a limited
number of updates per grace period.
The same cautions apply to call_rcu_bh() and call_rcu_sched().
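For example, a hedged sketch of option (e), assuming a hypothetical
updater-private counter and limit:
	if (++nr_updates >= MAX_UPDATES_PER_GRACE_PERIOD) {
		synchronize_rcu();	/* wait out one grace period... */
		nr_updates = 0;		/* ...before permitting another batch */
	}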
9. All RCU list-traversal primitives, which include
rcu_dereference(), list_for_each_entry_rcu(),
list_for_each_continue_rcu(), and list_for_each_safe_rcu(),
@ -219,17 +253,21 @@ over a rather long period of time, but improvements are always welcome!
must be protected by appropriate update-side locks. RCU
read-side critical sections are delimited by rcu_read_lock()
and rcu_read_unlock(), or by similar primitives such as
rcu_read_lock_bh() and rcu_read_unlock_bh().
rcu_read_lock_bh() and rcu_read_unlock_bh(), in which case
the matching rcu_dereference() primitive must be used in order
to keep lockdep happy; in this case, rcu_dereference_bh().
The reason that it is permissible to use RCU list-traversal
primitives when the update-side lock is held is that doing so
can be quite helpful in reducing code bloat when common code is
shared between readers and updaters.
shared between readers and updaters. Additional primitives
are provided for this case, as discussed in lockdep.txt.
10. Conversely, if you are in an RCU read-side critical section,
and you don't hold the appropriate update-side lock, you -must-
use the "_rcu()" variants of the list macros. Failing to do so
will break Alpha and confuse people reading your code.
will break Alpha, cause aggressive compilers to generate bad code,
and confuse people trying to read your code.
11. Note that synchronize_rcu() -only- guarantees to wait until
all currently executing rcu_read_lock()-protected RCU read-side
@ -239,15 +277,21 @@ over a rather long period of time, but improvements are always welcome!
rcu_read_lock()-protected read-side critical sections, do -not-
use synchronize_rcu().
If you want to wait for some of these other things, you might
instead need to use synchronize_irq() or synchronize_sched().
Similarly, disabling preemption is not an acceptable substitute
for rcu_read_lock(). Code that attempts to use preemption
disabling where it should be using rcu_read_lock() will break
in real-time kernel builds.
If you want to wait for interrupt handlers, NMI handlers, and
code under the influence of preempt_disable(), you instead
need to use synchronize_irq() or synchronize_sched().
12. Any lock acquired by an RCU callback must be acquired elsewhere
with softirq disabled, e.g., via spin_lock_irqsave(),
spin_lock_bh(), etc. Failing to disable irq on a given
acquisition of that lock will result in deadlock as soon as the
RCU callback happens to interrupt that acquisition's critical
section.
acquisition of that lock will result in deadlock as soon as
the RCU softirq handler happens to run your RCU callback while
interrupting that acquisition's critical section.
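For example, a hedged sketch of this rule (mylock and my_rcu_callback()
are hypothetical):
	/* Process context: keep softirqs disabled while holding mylock. */
	spin_lock_bh(&mylock);
	/* ... critical section ... */
	spin_unlock_bh(&mylock);

	/* RCU callback, invoked from softirq context. */
	static void my_rcu_callback(struct rcu_head *head)
	{
		spin_lock(&mylock);	/* safe only because all other */
		/* ... */		/* acquisitions disable softirqs */
		spin_unlock(&mylock);
	}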
13. RCU callbacks can be and are executed in parallel. In many cases,
the callback code is simply a wrapper around kfree(), so that this
@ -265,29 +309,30 @@ over a rather long period of time, but improvements are always welcome!
not the case, a self-spawning RCU callback would prevent the
victim CPU from ever going offline.)
14. SRCU (srcu_read_lock(), srcu_read_unlock(), and synchronize_srcu())
may only be invoked from process context. Unlike other forms of
RCU, it -is- permissible to block in an SRCU read-side critical
section (demarked by srcu_read_lock() and srcu_read_unlock()),
hence the "SRCU": "sleepable RCU". Please note that if you
don't need to sleep in read-side critical sections, you should
be using RCU rather than SRCU, because RCU is almost always
faster and easier to use than is SRCU.
14. SRCU (srcu_read_lock(), srcu_read_unlock(), srcu_dereference(),
synchronize_srcu(), and synchronize_srcu_expedited()) may only
be invoked from process context. Unlike other forms of RCU, it
-is- permissible to block in an SRCU read-side critical section
(demarked by srcu_read_lock() and srcu_read_unlock()), hence the
"SRCU": "sleepable RCU". Please note that if you don't need
to sleep in read-side critical sections, you should be using
RCU rather than SRCU, because RCU is almost always faster and
easier to use than is SRCU.
Also unlike other forms of RCU, explicit initialization
and cleanup is required via init_srcu_struct() and
cleanup_srcu_struct(). These are passed a "struct srcu_struct"
that defines the scope of a given SRCU domain. Once initialized,
the srcu_struct is passed to srcu_read_lock(), srcu_read_unlock()
and synchronize_srcu(). A given synchronize_srcu() waits only
for SRCU read-side critical sections governed by srcu_read_lock()
and srcu_read_unlock() calls that have been passd the same
srcu_struct. This property is what makes sleeping read-side
critical sections tolerable -- a given subsystem delays only
its own updates, not those of other subsystems using SRCU.
Therefore, SRCU is less prone to OOM the system than RCU would
be if RCU's read-side critical sections were permitted to
sleep.
synchronize_srcu(), and synchronize_srcu_expedited(). A given
synchronize_srcu() waits only for SRCU read-side critical
sections governed by srcu_read_lock() and srcu_read_unlock()
calls that have been passed the same srcu_struct. This property
is what makes sleeping read-side critical sections tolerable --
a given subsystem delays only its own updates, not those of other
subsystems using SRCU. Therefore, SRCU is less prone to OOM the
system than RCU would be if RCU's read-side critical sections
were permitted to sleep.
The ability to sleep in read-side critical sections does not
come for free. First, corresponding srcu_read_lock() and
@ -300,8 +345,8 @@ over a rather long period of time, but improvements are always welcome!
requiring SRCU's read-side deadlock immunity or low read-side
realtime latency.
Note that, rcu_assign_pointer() and rcu_dereference() relate to
SRCU just as they do to other forms of RCU.
Note that rcu_assign_pointer() relates to SRCU just as it does
to other forms of RCU.
15. The whole point of call_rcu(), synchronize_rcu(), and friends
is to wait until all pre-existing readers have finished before
@ -311,12 +356,12 @@ over a rather long period of time, but improvements are always welcome!
destructive operation, and -only- -then- invoke call_rcu(),
synchronize_rcu(), or friends.
Because these primitives only wait for pre-existing readers,
it is the caller's responsibility to guarantee safety to
any subsequent readers.
Because these primitives only wait for pre-existing readers, it
is the caller's responsibility to guarantee that any subsequent
readers will execute safely.
16. The various RCU read-side primitives do -not- contain memory
barriers. The CPU (and in some cases, the compiler) is free
to reorder code into and out of RCU read-side critical sections.
It is the responsibility of the RCU update-side primitives to
deal with this.
16. The various RCU read-side primitives do -not- necessarily contain
memory barriers. You should therefore plan for the CPU
and the compiler to freely reorder code into and out of RCU
read-side critical sections. It is the responsibility of the
RCU update-side primitives to deal with this.

View File

@ -0,0 +1,91 @@
RCU and lockdep checking
All flavors of RCU have lockdep checking available, so that lockdep is
aware of when each task enters and leaves any flavor of RCU read-side
critical section. Each flavor of RCU is tracked separately (but note
that this is not the case in 2.6.32 and earlier). This allows lockdep's
tracking to include RCU state, which can sometimes help when debugging
deadlocks and the like.
In addition, RCU provides the following primitives that check lockdep's
state:
rcu_read_lock_held() for normal RCU.
rcu_read_lock_bh_held() for RCU-bh.
rcu_read_lock_sched_held() for RCU-sched.
srcu_read_lock_held() for SRCU.
These functions are conservative, and will therefore return 1 if they
aren't certain (for example, if CONFIG_DEBUG_LOCK_ALLOC is not set).
This prevents things like WARN_ON(!rcu_read_lock_held()) from giving false
positives when lockdep is disabled.
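For example, a hedged sketch of a helper that may legally be called only
from within an RCU read-side critical section (gp and struct foo are
hypothetical):
	static struct foo *get_foo(void)
	{
		WARN_ON_ONCE(!rcu_read_lock_held());
		return rcu_dereference(gp);
	}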
In addition, a separate kernel config parameter CONFIG_PROVE_RCU enables
checking of rcu_dereference() primitives:
rcu_dereference(p):
Check for RCU read-side critical section.
rcu_dereference_bh(p):
Check for RCU-bh read-side critical section.
rcu_dereference_sched(p):
Check for RCU-sched read-side critical section.
srcu_dereference(p, sp):
Check for SRCU read-side critical section.
rcu_dereference_check(p, c):
Use explicit check expression "c". This is useful in
code that is invoked by both readers and updaters.
rcu_dereference_raw(p):
Don't check. (Use sparingly, if at all.)
rcu_dereference_protected(p, c):
Use explicit check expression "c", and omit all barriers
and compiler constraints. This is useful when the data
structure cannot change, for example, in code that is
invoked only by updaters.
rcu_access_pointer(p):
Return the value of the pointer and omit all barriers,
but retain the compiler constraints that prevent duplicating
or coalescing. This is useful when testing the
value of the pointer itself, for example, against NULL.
The rcu_dereference_check() check expression can be any boolean
expression, but would normally include one of the rcu_read_lock_held()
family of functions and a lockdep expression. For a moderately ornate
example, consider the following:
file = rcu_dereference_check(fdt->fd[fd],
rcu_read_lock_held() ||
lockdep_is_held(&files->file_lock) ||
atomic_read(&files->count) == 1);
This expression picks up the pointer "fdt->fd[fd]" in an RCU-safe manner,
and, if CONFIG_PROVE_RCU is configured, verifies that this expression
is used in:
1. An RCU read-side critical section, or
2. with files->file_lock held, or
3. on an unshared files_struct.
In case (1), the pointer is picked up in an RCU-safe manner for vanilla
RCU read-side critical sections, in case (2) the ->file_lock prevents
any change from taking place, and finally, in case (3) the current task
is the only task accessing the files_struct, again preventing any change
from taking place. If the above statement were invoked only from updater
code, it could instead be written as follows:
file = rcu_dereference_protected(fdt->fd[fd],
lockdep_is_held(&files->file_lock) ||
atomic_read(&files->count) == 1);
This would verify cases #2 and #3 above, and furthermore lockdep would
complain if this was used in an RCU read-side critical section unless one
of these two cases held. Because rcu_dereference_protected() omits all
barriers and compiler constraints, it generates better code than do the
other flavors of rcu_dereference(). On the other hand, it is illegal
to use rcu_dereference_protected() if either the RCU-protected pointer
or the RCU-protected data that it points to can change concurrently.
There are currently only "universal" versions of the rcu_assign_pointer()
and RCU list-/tree-traversal primitives, which do not (yet) check for
being in an RCU read-side critical section. In the future, separate
versions of these primitives might be created.

View File

@ -75,6 +75,8 @@ o I hear that RCU is patented? What is with that?
search for the string "Patent" in RTFP.txt to find them.
Of these, one was allowed to lapse by the assignee, and the
others have been contributed to the Linux kernel under GPL.
There are now also LGPL implementations of user-level RCU
available (http://lttng.org/?q=node/18).
o I hear that RCU needs work in order to support realtime kernels?
@ -91,48 +93,4 @@ o Where can I find more information on RCU?
o What are all these files in this directory?
NMI-RCU.txt
Describes how to use RCU to implement dynamic
NMI handlers, which can be revectored on the fly,
without rebooting.
RTFP.txt
List of RCU-related publications and web sites.
UP.txt
Discussion of RCU usage in UP kernels.
arrayRCU.txt
Describes how to use RCU to protect arrays, with
resizeable arrays whose elements reference other
data structures being of the most interest.
checklist.txt
Lists things to check for when inspecting code that
uses RCU.
listRCU.txt
Describes how to use RCU to protect linked lists.
This is the simplest and most common use of RCU
in the Linux kernel.
rcu.txt
You are reading it!
rcuref.txt
Describes how to combine use of reference counts
with RCU.
whatisRCU.txt
Overview of how the RCU implementation works. Along
the way, presents a conceptual view of RCU.
See 00-INDEX for the list.

View File

@ -0,0 +1,106 @@
Using RCU's CPU Stall Detector
The CONFIG_RCU_CPU_STALL_DETECTOR kernel config parameter enables
RCU's CPU stall detector, which detects conditions that unduly delay
RCU grace periods. The stall detector's idea of what constitutes
"unduly delayed" is controlled by a set of C preprocessor macros:
RCU_SECONDS_TILL_STALL_CHECK
This macro defines the period of time that RCU will wait from
the beginning of a grace period until it issues an RCU CPU
stall warning. This time period is normally ten seconds.
RCU_SECONDS_TILL_STALL_RECHECK
This macro defines the period of time that RCU will wait after
issuing a stall warning until it issues another stall warning
for the same stall. This time period is normally set to thirty
seconds.
RCU_STALL_RAT_DELAY
The CPU stall detector tries to make the offending CPU print its
own warnings, as this often gives better-quality stack traces.
However, if the offending CPU does not detect its own stall in
the number of jiffies specified by RCU_STALL_RAT_DELAY, then
some other CPU will complain. This delay is normally set to
two jiffies.
When a CPU detects that it is stalling, it will print a message similar
to the following:
INFO: rcu_sched_state detected stall on CPU 5 (t=2500 jiffies)
This message indicates that CPU 5 detected that it was causing a stall,
and that the stall was affecting RCU-sched. This message will normally be
followed by a stack dump of the offending CPU. On TREE_RCU kernel builds,
RCU and RCU-sched are implemented by the same underlying mechanism,
while on TREE_PREEMPT_RCU kernel builds, RCU is instead implemented
by rcu_preempt_state.
On the other hand, if the offending CPU fails to print out a stall-warning
message quickly enough, some other CPU will print a message similar to
the following:
INFO: rcu_bh_state detected stalls on CPUs/tasks: { 3 5 } (detected by 2, 2502 jiffies)
This message indicates that CPU 2 detected that CPUs 3 and 5 were both
causing stalls, and that the stall was affecting RCU-bh. This message
will normally be followed by stack dumps for each CPU. Please note that
TREE_PREEMPT_RCU builds can be stalled by tasks as well as by CPUs,
and that the tasks will be indicated by PID, for example, "P3421".
It is even possible for a rcu_preempt_state stall to be caused by both
CPUs -and- tasks, in which case the offending CPUs and tasks will all
be called out in the list.
Finally, if the grace period ends just as the stall warning starts
printing, there will be a spurious stall-warning message:
INFO: rcu_bh_state detected stalls on CPUs/tasks: { } (detected by 4, 2502 jiffies)
This is rare, but does happen from time to time in real life.
So your kernel printed an RCU CPU stall warning. The next question is
"What caused it?" The following problems can result in RCU CPU stall
warnings:
o A CPU looping in an RCU read-side critical section (see the
sketch following this list).
o A CPU looping with interrupts disabled. This condition can
result in RCU-sched and RCU-bh stalls.
o A CPU looping with preemption disabled. This condition can
result in RCU-sched stalls and, if ksoftirqd is in use, RCU-bh
stalls.
o A CPU looping with bottom halves disabled. This condition can
result in RCU-sched and RCU-bh stalls.
o For !CONFIG_PREEMPT kernels, a CPU looping anywhere in the kernel
without invoking schedule().
o A bug in the RCU implementation.
o A hardware failure. This is quite unlikely, but has occurred
at least once in real life. A CPU failed in a running system,
becoming unresponsive, but not causing an immediate crash.
This resulted in a series of RCU CPU stall warnings, eventually
leading to the realization that the CPU had failed.
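As a deliberately broken sketch of the first item in the list above, a
reader that never leaves its critical section will eventually provoke a
stall warning:
	rcu_read_lock();
	for (;;)		/* BUG: spins forever in the critical section */
		cpu_relax();
	rcu_read_unlock();	/* never reached */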
The RCU, RCU-sched, and RCU-bh implementations have CPU stall
warnings. SRCU does not have its own CPU stall warnings, but its
calls to synchronize_sched() will result in RCU-sched detecting
RCU-sched-related CPU stalls. Please note that RCU only detects
CPU stalls when there is a grace period in progress. No grace period,
no CPU stall warnings.
To diagnose the cause of the stall, inspect the stack traces.
The offending function will usually be near the top of the stack.
If you have a series of stall warnings from a single extended stall,
comparing the stack traces can often help determine where the stall
is occurring, which will usually be in the function nearest the top of
that portion of the stack which remains the same from trace to trace.
If you can reliably trigger the stall, ftrace can be quite helpful.
RCU bugs can often be debugged with the help of CONFIG_RCU_TRACE.

View File

@ -30,6 +30,18 @@ MODULE PARAMETERS
This module has the following parameters:
fqs_duration Duration (in microseconds) of artificially induced bursts
of force_quiescent_state() invocations. In RCU
implementations having force_quiescent_state(), these
bursts help force races between forcing a given grace
period and that grace period ending on its own.
fqs_holdoff Holdoff time (in microseconds) between consecutive calls
to force_quiescent_state() within a burst.
fqs_stutter Wait time (in seconds) between consecutive bursts
of calls to force_quiescent_state().
irqreaders Says to invoke RCU readers from irq level. This is currently
done via timers. Defaults to "1" for variants of RCU that
permit this. (Or, more accurately, variants of RCU that do
@ -170,16 +182,6 @@ Similarly, sched_expedited RCU provides the following:
sched_expedited-torture: Reader Pipe: 12660320201 95875 0 0 0 0 0 0 0 0 0
sched_expedited-torture: Reader Batch: 12660424885 0 0 0 0 0 0 0 0 0 0
sched_expedited-torture: Free-Block Circulation: 1090795 1090795 1090794 1090793 1090792 1090791 1090790 1090789 1090788 1090787 0
state: -1 / 0:0 3:0 4:0
As before, the first four lines are similar to those for RCU.
The last line shows the task-migration state. The first number is
-1 if synchronize_sched_expedited() is idle, -2 if in the process of
posting wakeups to the migration kthreads, and N when waiting on CPU N.
Each of the colon-separated fields following the "/" is a CPU:state pair.
Valid states are "0" for idle, "1" for waiting for quiescent state,
"2" for passed through quiescent state, and "3" when a race with a
CPU-hotplug event forces use of the synchronize_sched() primitive.
USAGE

View File

@ -256,23 +256,23 @@ o Each element of the form "1/1 0:127 ^0" represents one struct
The output of "cat rcu/rcu_pending" looks as follows:
rcu_sched:
0 np=255892 qsp=53936 cbr=0 cng=14417 gpc=10033 gps=24320 nf=6445 nn=146741
1 np=261224 qsp=54638 cbr=0 cng=25723 gpc=16310 gps=2849 nf=5912 nn=155792
2 np=237496 qsp=49664 cbr=0 cng=2762 gpc=45478 gps=1762 nf=1201 nn=136629
3 np=236249 qsp=48766 cbr=0 cng=286 gpc=48049 gps=1218 nf=207 nn=137723
4 np=221310 qsp=46850 cbr=0 cng=26 gpc=43161 gps=4634 nf=3529 nn=123110
5 np=237332 qsp=48449 cbr=0 cng=54 gpc=47920 gps=3252 nf=201 nn=137456
6 np=219995 qsp=46718 cbr=0 cng=50 gpc=42098 gps=6093 nf=4202 nn=120834
7 np=249893 qsp=49390 cbr=0 cng=72 gpc=38400 gps=17102 nf=41 nn=144888
0 np=255892 qsp=53936 rpq=85 cbr=0 cng=14417 gpc=10033 gps=24320 nf=6445 nn=146741
1 np=261224 qsp=54638 rpq=33 cbr=0 cng=25723 gpc=16310 gps=2849 nf=5912 nn=155792
2 np=237496 qsp=49664 rpq=23 cbr=0 cng=2762 gpc=45478 gps=1762 nf=1201 nn=136629
3 np=236249 qsp=48766 rpq=98 cbr=0 cng=286 gpc=48049 gps=1218 nf=207 nn=137723
4 np=221310 qsp=46850 rpq=7 cbr=0 cng=26 gpc=43161 gps=4634 nf=3529 nn=123110
5 np=237332 qsp=48449 rpq=9 cbr=0 cng=54 gpc=47920 gps=3252 nf=201 nn=137456
6 np=219995 qsp=46718 rpq=12 cbr=0 cng=50 gpc=42098 gps=6093 nf=4202 nn=120834
7 np=249893 qsp=49390 rpq=42 cbr=0 cng=72 gpc=38400 gps=17102 nf=41 nn=144888
rcu_bh:
0 np=146741 qsp=1419 cbr=0 cng=6 gpc=0 gps=0 nf=2 nn=145314
1 np=155792 qsp=12597 cbr=0 cng=0 gpc=4 gps=8 nf=3 nn=143180
2 np=136629 qsp=18680 cbr=0 cng=0 gpc=7 gps=6 nf=0 nn=117936
3 np=137723 qsp=2843 cbr=0 cng=0 gpc=10 gps=7 nf=0 nn=134863
4 np=123110 qsp=12433 cbr=0 cng=0 gpc=4 gps=2 nf=0 nn=110671
5 np=137456 qsp=4210 cbr=0 cng=0 gpc=6 gps=5 nf=0 nn=133235
6 np=120834 qsp=9902 cbr=0 cng=0 gpc=6 gps=3 nf=2 nn=110921
7 np=144888 qsp=26336 cbr=0 cng=0 gpc=8 gps=2 nf=0 nn=118542
0 np=146741 qsp=1419 rpq=6 cbr=0 cng=6 gpc=0 gps=0 nf=2 nn=145314
1 np=155792 qsp=12597 rpq=3 cbr=0 cng=0 gpc=4 gps=8 nf=3 nn=143180
2 np=136629 qsp=18680 rpq=1 cbr=0 cng=0 gpc=7 gps=6 nf=0 nn=117936
3 np=137723 qsp=2843 rpq=0 cbr=0 cng=0 gpc=10 gps=7 nf=0 nn=134863
4 np=123110 qsp=12433 rpq=0 cbr=0 cng=0 gpc=4 gps=2 nf=0 nn=110671
5 np=137456 qsp=4210 rpq=1 cbr=0 cng=0 gpc=6 gps=5 nf=0 nn=133235
6 np=120834 qsp=9902 rpq=2 cbr=0 cng=0 gpc=6 gps=3 nf=2 nn=110921
7 np=144888 qsp=26336 rpq=0 cbr=0 cng=0 gpc=8 gps=2 nf=0 nn=118542
As always, this is once again split into "rcu_sched" and "rcu_bh"
portions, with CONFIG_TREE_PREEMPT_RCU kernels having an additional
@ -284,6 +284,9 @@ o "np" is the number of times that __rcu_pending() has been invoked
o "qsp" is the number of times that the RCU was waiting for a
quiescent state from this CPU.
o "rpq" is the number of times that the CPU had passed through
a quiescent state, but not yet reported it to RCU.
o "cbr" is the number of times that this CPU had RCU callbacks
that had passed through a grace period, and were thus ready
to be invoked.

View File

@ -323,14 +323,17 @@ used as follows:
Defer Protect
a. synchronize_rcu() rcu_read_lock() / rcu_read_unlock()
call_rcu()
call_rcu() rcu_dereference()
b. call_rcu_bh() rcu_read_lock_bh() / rcu_read_unlock_bh()
rcu_dereference_bh()
c. synchronize_sched() preempt_disable() / preempt_enable()
c. synchronize_sched() rcu_read_lock_sched() / rcu_read_unlock_sched()
preempt_disable() / preempt_enable()
local_irq_save() / local_irq_restore()
hardirq enter / hardirq exit
NMI enter / NMI exit
rcu_dereference_sched()
These three mechanisms are used as follows:
@ -780,9 +783,8 @@ Linux-kernel source code, but it helps to have a full list of the
APIs, since there does not appear to be a way to categorize them
in docbook. Here is the list, by category.
RCU pointer/list traversal:
RCU list traversal:
rcu_dereference
list_for_each_entry_rcu
hlist_for_each_entry_rcu
hlist_nulls_for_each_entry_rcu
@ -808,7 +810,7 @@ RCU: Critical sections Grace period Barrier
rcu_read_lock synchronize_net rcu_barrier
rcu_read_unlock synchronize_rcu
synchronize_rcu_expedited
rcu_dereference synchronize_rcu_expedited
call_rcu
@ -816,7 +818,7 @@ bh: Critical sections Grace period Barrier
rcu_read_lock_bh call_rcu_bh rcu_barrier_bh
rcu_read_unlock_bh synchronize_rcu_bh
synchronize_rcu_bh_expedited
rcu_dereference_bh synchronize_rcu_bh_expedited
sched: Critical sections Grace period Barrier
@ -825,17 +827,25 @@ sched: Critical sections Grace period Barrier
rcu_read_unlock_sched call_rcu_sched
[preempt_disable] synchronize_sched_expedited
[and friends]
rcu_dereference_sched
SRCU: Critical sections Grace period Barrier
srcu_read_lock synchronize_srcu N/A
srcu_read_unlock synchronize_srcu_expedited
srcu_dereference
SRCU: Initialization/cleanup
init_srcu_struct
cleanup_srcu_struct
All: lockdep-checked RCU-protected pointer access
rcu_dereference_check
rcu_dereference_protected
rcu_access_pointer
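As a quick hedged illustration of the vanilla-RCU column (struct foo,
gp, old_fp, and new_fp are hypothetical):
	struct foo {
		int a;
		struct rcu_head rcu;
	};

	static void foo_reclaim(struct rcu_head *rp)
	{
		struct foo *fp = container_of(rp, struct foo, rcu);

		kfree(fp);
	}

	/* Updater, replacing the structure that gp references. */
	rcu_assign_pointer(gp, new_fp);
	call_rcu(&old_fp->rcu, foo_reclaim);	/* free old_fp after a grace period */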
See the comment headers in the source code (or the docbook generated
from them) for more information.

View File

@ -73,7 +73,7 @@ NOTE: Smack labels are limited to 23 characters. The attr command
If you don't do anything special all users will get the floor ("_")
label when they log in. If you do want to log in via the hacked ssh
at other labels use the attr command to set the smack value on the
home directory and it's contents.
home directory and its contents.
You can add access rules in /etc/smack/accesses. They take the form:

View File

@ -9,10 +9,16 @@ Documentation/SubmittingPatches and elsewhere regarding submitting Linux
kernel patches.
1: Builds cleanly with applicable or modified CONFIG options =y, =m, and
1: If you use a facility then #include the file that defines/declares
that facility. Don't depend on other header files pulling in ones
that you use.
2: Builds cleanly with applicable or modified CONFIG options =y, =m, and
=n. No gcc warnings/errors, no linker warnings/errors.
2: Passes allnoconfig, allmodconfig
2b: Passes allnoconfig, allmodconfig
2c: Builds successfully when using O=builddir
3: Builds on multiple CPU architectures by using local cross-compile tools
or some other build farm.
@ -91,3 +97,13 @@ kernel patches.
25: If any ioctl's are added by the patch, then also update
Documentation/ioctl/ioctl-number.txt.
26: If your modified source code depends on or uses any of the kernel
APIs or features that are related to the following kconfig symbols,
then test multiple builds with the related kconfig symbols disabled
and/or =m (if that option is available) [not all of these at the
same time, just various/random combinations of them]:
CONFIG_SMP, CONFIG_SYSFS, CONFIG_PROC_FS, CONFIG_INPUT, CONFIG_PCI,
CONFIG_BLOCK, CONFIG_PM, CONFIG_HOTPLUG, CONFIG_MAGIC_SYSRQ,
CONFIG_NET, CONFIG_INET=n (but latter with CONFIG_NET=y)

View File

@ -130,6 +130,8 @@ Linux kernel master tree:
ftp.??.kernel.org:/pub/linux/kernel/...
?? == your country code, such as "us", "uk", "fr", etc.
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git
Linux kernel mailing list:
linux-kernel@vger.kernel.org
[mail majordomo@vger.kernel.org to subscribe]
@ -160,3 +162,6 @@ How to NOT write kernel driver by Arjan van de Ven:
Kernel Janitor:
http://janitor.kernelnewbies.org/
GIT, Fast Version Control System:
http://git-scm.com/

View File

@ -0,0 +1,59 @@
APEI Error INJection
~~~~~~~~~~~~~~~~~~~~
EINJ provides a hardware error injection mechanism.
It is very useful for debugging and testing other APEI and RAS features.
To use EINJ, make sure the following are enabled in your kernel
configuration:
CONFIG_DEBUG_FS
CONFIG_ACPI_APEI
CONFIG_ACPI_APEI_EINJ
The user interface of EINJ is the debug file system, under the
directory apei/einj. The following files are provided.
- available_error_type
Reading this file returns the error injection capability of the
platform, that is, which error types are supported. The error type
definitions are as follows: the left field is the error type value, the
right field is the error description.
0x00000001 Processor Correctable
0x00000002 Processor Uncorrectable non-fatal
0x00000004 Processor Uncorrectable fatal
0x00000008 Memory Correctable
0x00000010 Memory Uncorrectable non-fatal
0x00000020 Memory Uncorrectable fatal
0x00000040 PCI Express Correctable
0x00000080 PCI Express Uncorrectable fatal
0x00000100 PCI Express Uncorrectable non-fatal
0x00000200 Platform Correctable
0x00000400 Platform Uncorrectable non-fatal
0x00000800 Platform Uncorrectable fatal
The file contents follow the format above, except that only the
available error types are listed.
- error_type
This file is used to set the error type value. The error type value
is defined in "available_error_type" description.
- error_inject
Write any integer to this file to trigger the error
injection. Before this, please specify all necessary error
parameters.
- param1
This file is used to set the first error parameter value. Effect of
parameter depends on error_type specified. For memory error, this is
physical memory address.
- param2
This file is used to set the second error parameter value. Effect of
parameter depends on error_type specified. For memory error, this is
physical memory address mask.
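Putting this together, a hedged example session (assuming debugfs is
mounted at /sys/kernel/debug, and that the platform supports memory
errors; the addresses are hypothetical):
	# cd /sys/kernel/debug/apei/einj
	# cat available_error_type		# what does the platform support?
	# echo 0x8 > error_type			# Memory Correctable
	# echo 0x12345000 > param1		# physical address
	# echo 0xfffffffffffff000 > param2	# physical address mask
	# echo 1 > error_inject			# trigger the injection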
For more information about EINJ, please refer to ACPI specification
version 4.0, section 17.5.

View File

@ -20,6 +20,8 @@ Samsung-S3C24XX
- S3C24XX ARM Linux Overview
Sharp-LH
- Linux on Sharp LH79524 and LH7A40X System On a Chip (SOC)
SPEAr
- ST SPEAr platform Linux Overview
VFP/
- Release notes for Linux Kernel Vector Floating Point support code
empeg/

View File

@ -32,7 +32,7 @@ Notes:
- The flash on board is divided into 3 partitions.
You should be careful to use flash on board.
It's partition is different from GraphicsClient Plus and GraphicsMaster
Its partition is different from GraphicsClient Plus and GraphicsMaster
- 16bpp mode requires a different cable than what ships with the board.
Contact ADS or look through the manual to wire your own. Currently,

View File

@ -0,0 +1,60 @@
SPEAr ARM Linux Overview
==========================
Introduction
------------
SPEAr (Structured Processor Enhanced Architecture).
weblink : http://www.st.com/spear
The ST Microelectronics SPEAr range of ARM9/CortexA9 System-on-Chip CPUs is
supported by the 'spear' platform of ARM Linux. Currently SPEAr300,
SPEAr310, SPEAr320 and SPEAr600 SOCs are supported. Support for the SPEAr13XX
series is in progress.
Hierarchy in SPEAr is as follows:
SPEAr (Platform)
- SPEAr3XX (3XX SOC series, based on ARM9)
- SPEAr300 (SOC)
- SPEAr300_EVB (Evaluation Board)
- SPEAr310 (SOC)
- SPEAr310_EVB (Evaluation Board)
- SPEAr320 (SOC)
- SPEAr320_EVB (Evaluation Board)
- SPEAr6XX (6XX SOC series, based on ARM9)
- SPEAr600 (SOC)
- SPEAr600_EVB (Evaluation Board)
- SPEAr13XX (13XX SOC series, based on ARM CORTEXA9)
- SPEAr1300 (SOC)
Configuration
-------------
A generic configuration is provided for each machine, and can be used as the
default by
make spear600_defconfig
make spear300_defconfig
make spear310_defconfig
make spear320_defconfig
Layout
------
The common files for multiple machine families (SPEAr3XX, SPEAr6XX and
SPEAr13XX) are located in the platform code contained in arch/arm/plat-spear
with headers in plat/.
Each machine series has a directory named arch/arm/mach-spear followed by
the series name, e.g. mach-spear3xx, mach-spear6xx and mach-spear13xx.
The common file for machines of the spear3xx family is
mach-spear3xx/spear3xx.c, and for spear6xx it is mach-spear6xx/spear6xx.c.
The mach-spear* directories also contain SoC/machine specific files, like
spear300.c, spear310.c, spear320.c and spear600.c.
mach-spear* also contains board specific files for each machine type.
Document Author
---------------
Viresh Kumar, (c) 2010 ST Microelectronics

View File

@ -14,8 +14,8 @@ Introduction
how the clocks are arranged. The first implementation used as single
PLL to feed the ARM, memory and peripherals via a series of dividers
and muxes and this is the implementation that is documented here. A
newer version where there is a seperate PLL and clock divider for the
ARM core is available as a seperate driver.
newer version where there is a separate PLL and clock divider for the
ARM core is available as a separate driver.
Layout

View File

@ -12,6 +12,8 @@ Introduction
of the s3c2410 GPIO system, please read the Samsung provided
data-sheet/users manual to find out the complete list.
See Documentation/arm/Samsung/GPIO.txt for the core implementation.
GPIOLIB
-------
@ -24,8 +26,60 @@ GPIOLIB
listed below will be removed (they may be marked as __deprecated
in the near future).
- s3c2410_gpio_getpin
- s3c2410_gpio_setpin
The following functions now either have an s3c_ specific variant
or are merged into gpiolib. See the definitions in
arch/arm/plat-samsung/include/plat/gpio-cfg.h:
s3c2410_gpio_setpin() gpio_set_value() or gpio_direction_output()
s3c2410_gpio_getpin() gpio_get_value() or gpio_direction_input()
s3c2410_gpio_getirq() gpio_to_irq()
s3c2410_gpio_cfgpin() s3c_gpio_cfgpin()
s3c2410_gpio_getcfg() s3c_gpio_getcfg()
s3c2410_gpio_pullup() s3c_gpio_setpull()
GPIOLIB conversion
------------------
If you need to convert your board or driver to use gpiolib from the existing
s3c2410 API, then here are some notes on the process.
1) If your board is exclusively using a GPIO, say to control peripheral
power, then it will need to claim the gpio with gpio_request() before
it can use it.
It is recommended to check the return value, with at least WARN_ON()
during initialisation.
2) The s3c2410_gpio_cfgpin() can be directly replaced with s3c_gpio_cfgpin()
as they have the same arguments, and can either take the pin specific
values, or the more generic special-function-number arguments.
3) s3c2410_gpio_pullup() changes have the problem that whilst
s3c2410_gpio_pullup(x, 1) can be easily translated to
s3c_gpio_setpull(x, S3C_GPIO_PULL_NONE), the s3c2410_gpio_pullup(x, 0)
case is not so easy.
The s3c2410_gpio_pullup(x, 0) case enables the pull-up (or in the case
of some of the devices, a pull-down) and as such the new API distinguishes
between the UP and DOWN case. There is currently no 'just turn on' setting
which may be required if this becomes a problem.
4) s3c2410_gpio_setpin() can be replaced by gpio_set_value(); the old call
does not implicitly configure the relevant gpio to output. The gpio
direction should be changed before using gpio_set_value().
5) s3c2410_gpio_getpin() is replaceable by gpio_get_value() if the pin
has been set to input. It is currently unknown what the behaviour is
when using gpio_get_value() on an output pin (s3c2410_gpio_getpin
would return the value the pin is supposed to be outputting).
6) s3c2410_gpio_getirq() should be directly replaceable with the
gpio_to_irq() call.
The s3c2410_gpio and gpio_ calls have always operated on the same gpio
numberspace, so there is no problem with converting the gpio numbering
between the calls.
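Putting these notes together, a hedged sketch of a converted board file
(the pin choice and label are hypothetical):
	err = gpio_request(S3C2410_GPA(0), "peripheral power");
	WARN_ON(err);					/* note 1 */
	s3c_gpio_setpull(S3C2410_GPA(0), S3C_GPIO_PULL_NONE);	/* note 3 */
	gpio_direction_output(S3C2410_GPA(0), 1);	/* note 4: direction first */
	gpio_set_value(S3C2410_GPA(0), 0);	/* replaces s3c2410_gpio_setpin() */
	irq = gpio_to_irq(S3C2410_GPA(0));	/* replaces s3c2410_gpio_getirq() */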
Headers
@ -54,6 +108,11 @@ PIN Numbers
eg S3C2410_GPA(0) or S3C2410_GPF(1). These defines are used to tell
the GPIO functions which pin is to be used.
With the conversion to gpiolib, there is no longer a direct conversion
from gpio pin number to register base address as in earlier kernels. This
is due to the number space required for newer SoCs where the later
GPIOs are not contiguous.
Configuring a pin
-----------------
@ -71,6 +130,8 @@ Configuring a pin
which would turn GPA(0) into the lowest Address line A0, and set
GPE(8) to be connected to the SDIO/MMC controller's SDDAT1 line.
The s3c_gpio_cfgpin() call is a functional replacement for this call.
Reading the current configuration
---------------------------------
@ -82,6 +143,9 @@ Reading the current configuration
The return value will be from the same set of values which can be
passed to s3c2410_gpio_cfgpin().
The s3c_gpio_getcfg() call should be a functional replacement for
this call.
Configuring a pull-up resistor
------------------------------
@ -95,6 +159,10 @@ Configuring a pull-up resistor
Where the to value is zero to set the pull-up off, and 1 to enable
the specified pull-up. Any other values are currently undefined.
The s3c_gpio_setpull() offers similar functionality, but with the
ability to encode whether the pull is up or down. Currently there
is no 'just on' state, so up or down must be selected.
Getting the state of a PIN
--------------------------
@ -106,6 +174,9 @@ Getting the state of a PIN
This will return either zero or non-zero. Do not count on this
function returning 1 if the pin is set.
This call is now implemented by the relevant gpiolib calls, convert
your board or driver to use gpiolib.
Setting the state of a PIN
--------------------------
@ -117,6 +188,9 @@ Setting the state of a PIN
Which sets the given pin to the value. Use 0 to write 0, and 1 to
set the output to 1.
This call is now implemented by the relevant gpiolib calls, convert
your board or driver to use gpiolib.
Getting the IRQ number associated with a PIN
--------------------------------------------
@ -128,6 +202,9 @@ Getting the IRQ number associated with a PIN
Note, not all pins have an IRQ.
This call is now implemented by the relevant gpiolib calls, convert
your board or driver to use gpiolib.
Author
-------

View File

@ -8,10 +8,16 @@ Introduction
The Samsung S3C24XX range of ARM9 System-on-Chip CPUs are supported
by the 's3c2410' architecture of ARM Linux. Currently the S3C2410,
S3C2412, S3C2413, S3C2440, S3C2442 and S3C2443 devices are supported.
S3C2412, S3C2413, S3C2416, S3C2440, S3C2442, S3C2443 and S3C2450 devices
are supported.
Support for the S3C2400 and S3C24A0 series are in progress.
The S3C2416 and S3C2450 devices are very similar and S3C2450 support is
included under the arch/arm/mach-s3c2416 directory. Note, whilst core
support for these SoCs is in, work on some of the extra peripherals
and extra interrupts is still ongoing.
Configuration
-------------
@ -209,6 +215,13 @@ GPIO
Newer kernels carry GPIOLIB, and support is being moved towards
this with some of the older support in line to be removed.
As of v2.6.34, the move towards using gpiolib support is almost
complete, and very few of the old calls are left.
See Documentation/arm/Samsung-S3C24XX/GPIO.txt for the S3C24XX specific
support and Documentation/arm/Samsung/GPIO.txt for the core Samsung
implementation.
Clock Management
----------------

View File

@ -0,0 +1,42 @@
Samsung GPIO implementation
===========================
Introduction
------------
This outlines the Samsung GPIO implementation and the architecture
specific calls provided alongside the drivers/gpio core.
S3C24XX (Legacy)
----------------
See Documentation/arm/Samsung-S3C24XX/GPIO.txt for more information
about these devices. Their implementation is being brought into line
with the core samsung implementation described in this document.
GPIOLIB integration
-------------------
The gpio implementation uses gpiolib as much as possible, only providing
specific calls for the items that require Samsung specific handling, such
as pin special-function or pull resistor control.
GPIO numbering is synchronised between the Samsung and gpiolib systems.
PIN configuration
-----------------
Pin configuration is specific to the Samsung architecture, with each SoC
registering the necessary information for the core gpio configuration
implementation to configure pins as necessary.
The s3c_gpio_cfgpin() and s3c_gpio_setpull() provide the means for a
driver or machine to change gpio configuration.
See arch/arm/plat-samsung/include/plat/gpio-cfg.h for more information
on these functions.

View File

@ -0,0 +1,99 @@
Samsung ARM Linux Overview
==========================
Introduction
------------
The Samsung range of ARM SoCs spans many similar devices, from the initial
ARM9 through to the newest ARM cores. This document shows an overview of
the current kernel support, how to use it and where to find the code
that supports this.
The currently supported SoCs are:
- S3C24XX: See Documentation/arm/Samsung-S3C24XX/Overview.txt for full list
- S3C64XX: S3C6400 and S3C6410
- S5P6440
- S5P6442
- S5PC100
- S5PC110 / S5PV210
S3C24XX Systems
---------------
There is still documentation in Documentation/arm/Samsung-S3C24XX/ which
deals with the architecture and drivers specific to these devices.
See Documentation/arm/Samsung-S3C24XX/Overview.txt for more information
on the implementation details and specific support.
Configuration
-------------
A number of configurations are supplied, as there is no current way of
unifying all the SoCs into one kernel.
s5p6440_defconfig - S5P6440 specific default configuration
s5p6442_defconfig - S5P6442 specific default configuration
s5pc100_defconfig - S5PC100 specific default configuration
s5pc110_defconfig - S5PC110 specific default configuration
s5pv210_defconfig - S5PV210 specific default configuration
Layout
------
The directory layout is currently being restructured, and consists of
several platform directories and then the machine specific directories
of the CPUs being built for.
plat-samsung provides the base for all the implementations, and is the
last in the line of include directories that are processed for the build
specific information. It contains the base clock, GPIO and device definitions
to get the system running.
plat-s3c24xx is for s3c24xx specific builds, see the S3C24XX docs.
plat-s5p is for s5p specific builds, and contains common support for the
S5P specific systems. Not all S5Ps use all the features in this directory
due to differences in the hardware.
Layout changes
--------------
The old plat-s3c and plat-s5pc1xx directories have been removed, with
support moved to either plat-samsung or plat-s5p as necessary. These moves
where to simplify the include and dependency issues involved with having
so many different platform directories.
It was decided to remove plat-s5pc1xx as some of the support was already
in plat-s5p or plat-samsung; with S5PC110 support added alongside S5PV210,
the only remaining user was the S5PC100. The S5PC100 specific items were
moved to arch/arm/mach-s5pc100.
Port Contributors
-----------------
Ben Dooks (BJD)
Vincent Sanders
Herbert Potzl
Arnaud Patard (RTP)
Roc Wu
Klaus Fetscher
Dimitry Andric
Shannon Holland
Guillaume Gourat (NexVision)
Christer Weinigel (wingel) (Acer N30)
Lucas Correia Villa Real (S3C2400 port)
Document Author
---------------
Copyright 2009-2010 Ben Dooks <ben-linux@fluff.org>

View File

@ -0,0 +1,167 @@
#!/usr/bin/awk -f
#
# Copyright 2010 Ben Dooks <ben-linux@fluff.org>
#
# Released under GPLv2
# example usage
# ./clksrc-change-registers.awk arch/arm/plat-s5pc1xx/include/plat/regs-clock.h < src > dst
function extract_value(s)
{
eqat = index(s, "=")
comat = index(s, ",")
return substr(s, eqat+2, (comat-eqat)-2)
}
function remove_brackets(b)
{
return substr(b, 2, length(b)-2)
}
function splitdefine(l, p)
{
r = split(l, tp)
p[0] = tp[2]
p[1] = remove_brackets(tp[3])
}
function find_length(f)
{
if (0)
printf "find_length " f "\n" > "/dev/stderr"
if (f ~ /0x1/)
return 1
else if (f ~ /0x3/)
return 2
else if (f ~ /0x7/)
return 3
else if (f ~ /0xf/)
return 4
printf "unknown legnth " f "\n" > "/dev/stderr"
exit
}
function find_shift(s)
{
id = index(s, "<")
if (id <= 0) {
printf "cannot find shift " s "\n" > "/dev/stderr"
exit
}
return substr(s, id+2)
}
BEGIN {
if (ARGC < 2) {
print "too few arguments" > "/dev/stderr"
exit
}
# read the header file and find the mask values that we will need
# to replace and create an associative array of values
while (getline line < ARGV[1] > 0) {
if (line ~ /\#define.*_MASK/ &&
!(line ~ /S5PC100_EPLL_MASK/) &&
!(line ~ /USB_SIG_MASK/)) {
splitdefine(line, fields)
name = fields[0]
if (0)
printf "MASK " line "\n" > "/dev/stderr"
dmask[name,0] = find_length(fields[1])
dmask[name,1] = find_shift(fields[1])
if (0)
printf "=> '" name "' LENGTH=" dmask[name,0] " SHIFT=" dmask[name,1] "\n" > "/dev/stderr"
} else {
}
}
delete ARGV[1]
}
/clksrc_clk.*=.*{/ {
shift=""
mask=""
divshift=""
reg_div=""
reg_src=""
indent=1
print $0
for(; indent >= 1;) {
if ((getline line) <= 0) {
printf "unexpected end of file" > "/dev/stderr"
exit 1;
}
if (line ~ /\.shift/) {
shift = extract_value(line)
} else if (line ~ /\.mask/) {
mask = extract_value(line)
} else if (line ~ /\.reg_divider/) {
reg_div = extract_value(line)
} else if (line ~ /\.reg_source/) {
reg_src = extract_value(line)
} else if (line ~ /\.divider_shift/) {
divshift = extract_value(line)
} else if (line ~ /{/) {
indent++
print line
} else if (line ~ /}/) {
indent--
if (indent == 0) {
if (0) {
printf "shift '" shift "' ='" dmask[shift,0] "'\n" > "/dev/stderr"
printf "mask '" mask "'\n" > "/dev/stderr"
printf "dshft '" divshift "'\n" > "/dev/stderr"
printf "rdiv '" reg_div "'\n" > "/dev/stderr"
printf "rsrc '" reg_src "'\n" > "/dev/stderr"
}
generated = mask
sub(reg_src, reg_div, generated)
if (0) {
printf "/* rsrc " reg_src " */\n"
printf "/* rdiv " reg_div " */\n"
printf "/* shift " shift " */\n"
printf "/* mask " mask " */\n"
printf "/* generated " generated " */\n"
}
if (reg_div != "") {
printf "\t.reg_div = { "
printf ".reg = " reg_div ", "
printf ".shift = " dmask[generated,1] ", "
printf ".size = " dmask[generated,0] ", "
printf "},\n"
}
printf "\t.reg_src = { "
printf ".reg = " reg_src ", "
printf ".shift = " dmask[mask,1] ", "
printf ".size = " dmask[mask,0] ", "
printf "},\n"
}
print line
} else {
print line
}
if (0)
printf indent ":" line "\n" > "/dev/stderr"
}
}
// && ! /clksrc_clk.*=.*{/ { print $0 }

View File

@ -7,7 +7,7 @@ The driver only implements a four-wire touch panel protocol.
The touchscreen driver is maintenance free except for the pen-down or
touch threshold. Some resistive displays and board combinations may
require tuning of this threshold. The driver exposes some of it's
require tuning of this threshold. The driver exposes some of its
internal state in the sys filesystem. If the kernel is configured
with it, CONFIG_SYSFS, and sysfs is mounted at /sys, there will be a
directory

View File

@ -59,7 +59,11 @@ PAGE_OFFSET high_memory-1 Kernel direct-mapped RAM region.
This maps the platforms RAM, and typically
maps all platform RAM in a 1:1 relationship.
TASK_SIZE PAGE_OFFSET-1 Kernel module space
PKMAP_BASE PAGE_OFFSET-1 Permanent kernel mappings
One way of mapping HIGHMEM pages into kernel
space.
MODULES_VADDR MODULES_END-1 Kernel module space
Kernel modules inserted via insmod are
placed here using dynamic mappings.

View File

@ -320,7 +320,7 @@ counter decrement would not become globally visible until the
obj->active update does.
As a historical note, 32-bit Sparc used to only allow usage of
24-bits of it's atomic_t type. This was because it used 8 bits
24-bits of its atomic_t type. This was because it used 8 bits
as a spinlock for SMP safety. Sparc32 lacked a "compare and swap"
type instruction. However, 32-bit Sparc has since been moved over
to a "hash table of spinlocks" scheme, that allows the full 32-bit

View File

@ -43,7 +43,7 @@
void bfin_gpio_irq_free(unsigned gpio);
The request functions will record the function state for a certain pin,
the free functions will clear it's function state.
the free functions will clear its function state.
Once a pin is requested, it can't be requested again before it is freed by
the previous caller; otherwise the kernel will dump stacks, and the request
function will fail.

View File

@ -1162,8 +1162,8 @@ where a driver received a request ala this before:
As mentioned, there is no virtual mapping of a bio. For DMA, this is
not a problem as the driver probably never will need a virtual mapping.
Instead it needs a bus mapping (pci_map_page for a single segment or
use blk_rq_map_sg for scatter gather) to be able to ship it to the driver. For
Instead it needs a bus mapping (dma_map_page for a single segment or
use dma_map_sg for scatter gather) to be able to ship it to the driver. For
PIO drivers (or drivers that need to revert to PIO transfer once in a
while (IDE for example)), where the CPU is doing the actual data
transfer a virtual mapping is needed. If the driver supports highmem I/O,

View File

@ -25,11 +25,11 @@ size allowed by the hardware.
nomerges (RW)
-------------
This enables the user to disable the lookup logic involved with IO merging
requests in the block layer. Merging may still occur through a direct
1-hit cache, since that comes for (almost) free. The IO scheduler will not
waste cycles doing tree/hash lookups for merges if nomerges is 1. Defaults
to 0, enabling all merges.
This enables the user to disable the lookup logic involved with IO
merging requests in the block layer. By default (0) all merges are
enabled. When set to 1 only simple one-hit merges will be tried. When
set to 2 no merge algorithms will be tried (including one-hit or more
complex tree/hash lookups).
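For example, to disable all merge lookups on a hypothetical device sda:
	# echo 2 > /sys/block/sda/queue/nomerges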
nr_requests (RW)
----------------

View File

@ -5,7 +5,7 @@
This document describes the cache/tlb flushing interfaces called
by the Linux VM subsystem. It enumerates over each interface,
describes it's intended purpose, and what side effect is expected
describes its intended purpose, and what side effect is expected
after the interface is invoked.
The side effects described below are stated for a uniprocessor
@ -88,12 +88,12 @@ changes occur:
This is used primarily during fault processing.
5) void update_mmu_cache(struct vm_area_struct *vma,
unsigned long address, pte_t *ptep)
At the end of every page fault, this routine is invoked to
tell the architecture specific code that a translation
described by "pte" now exists at virtual address "address"
for address space "vma->vm_mm", in the software page tables.
now exists at virtual address "address" for address space
"vma->vm_mm", in the software page tables.
A port may use this information in any way it so chooses.
For example, it could use this event to pre-load TLB
@ -231,7 +231,7 @@ require a whole different set of interfaces to handle properly.
The biggest problem is that of virtual aliasing in the data cache
of a processor.
Is your port susceptible to virtual aliasing in its D-cache?
Well, if your D-cache is virtually indexed, is larger in size than
PAGE_SIZE, and does not prevent multiple cache lines for the same
physical address from existing at once, you have this problem.
@ -249,7 +249,7 @@ one way to solve this (in particular SPARC_FLAG_MMAPSHARED).
Next, you have to solve the D-cache aliasing issue for all
other cases. Please keep in mind the fact that, for a given page
mapped into some user address space, there is always at least one more
mapping, that of the kernel in its linear mapping starting at
PAGE_OFFSET. So immediately, once the first user maps a given
physical page into its address space, by implication the D-cache
aliasing problem has the potential to exist since the kernel already
@ -377,3 +377,27 @@ maps this page at its virtual address.
All the functionality of flush_icache_page can be implemented in
flush_dcache_page and update_mmu_cache. In 2.7 the hope is to
remove this interface completely.
The final category of APIs is for I/O to deliberately aliased address
ranges inside the kernel. Such aliases are set up by use of the
vmap/vmalloc API. Since kernel I/O goes via physical pages, the I/O
subsystem assumes that the user mapping and kernel offset mapping are
the only aliases. This isn't true for vmap aliases, so anything in
the kernel trying to do I/O to vmap areas must manually manage
coherency. It must do this by flushing the vmap range before doing
I/O and invalidating it after the I/O returns.
void flush_kernel_vmap_range(void *vaddr, int size)
flushes the kernel cache for a given virtual address range in
the vmap area. This is to make sure that any data the kernel
modified in the vmap range is made visible to the physical
page. The design is to make this area safe to perform I/O on.
Note that this API does *not* also flush the offset map alias
of the area.
void invalidate_kernel_vmap_range(void *vaddr, int size)
invalidates the cache for a given virtual address range in the vmap area
which prevents the processor from making the cache stale by
speculatively reading data while the I/O was occurring to the
physical pages. This is only necessary for data reads into the
vmap area.
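A minimal sketch of the bracketing described above, assuming the declarations
come in via <linux/highmem.h> (or the architecture's <asm/cacheflush.h>);
do_io_to_physical_pages() is a hypothetical stand-in for the actual I/O
submission.

#include <linux/highmem.h>	/* assumed home of the two flush calls */

/* Sketch only: I/O to a vmap alias must be bracketed by these calls. */
static void example_io_to_vmap(void *vaddr, int size, int is_read)
{
	/* make data the kernel wrote through the alias visible to the
	 * physical pages before the device looks at them */
	flush_kernel_vmap_range(vaddr, size);

	do_io_to_physical_pages(vaddr, size);	/* hypothetical I/O helper */

	/* for reads, drop lines the CPU may have speculatively cached
	 * while the device was writing the physical pages */
	if (is_read)
		invalidate_kernel_vmap_range(vaddr, size);
}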

View File

@ -159,42 +159,7 @@ two arguments: the CDROM device, and the slot number to which you wish
to change. If the slot number is -1, the drive is unloaded.
4. Common problems
------------------
This section discusses some common problems encountered when trying to
@ -371,7 +336,7 @@ f. Data corruption.
expense of low system performance.
5. cdchange.c
-------------
/*

View File

@ -17,6 +17,9 @@ HOWTO
You can do a very simple testing of running two dd threads in two different
cgroups. Here is what you can do.
- Enable Block IO controller
CONFIG_BLK_CGROUP=y
- Enable group scheduling in CFQ
CONFIG_CFQ_GROUP_IOSCHED=y
@ -54,32 +57,52 @@ cgroups. Here is what you can do.
Various user visible config options
===================================
CONFIG_BLK_CGROUP
- Block IO controller.
CONFIG_DEBUG_BLK_CGROUP
- Debug help. Right now some additional stats file show up in cgroup
if this option is enabled.
CONFIG_CFQ_GROUP_IOSCHED
- Enables group scheduling in CFQ. Currently only 1 level of group
creation is allowed.
Details of cgroup files
=======================
- blkio.weight
- Specifies per cgroup weight. This is the default weight of the group
on all devices until and unless overridden by a per-device rule.
(See blkio.weight_device).
Currently allowed range of weights is from 100 to 1000.
- blkio.weight_device
- One can specify per cgroup per device rules using this interface.
These rules override the default value of group weight as specified
by blkio.weight.
Following is the format.
#echo dev_maj:dev_minor weight > /path/to/cgroup/blkio.weight_device
Configure weight=300 on /dev/sdb (8:16) in this cgroup
# echo 8:16 300 > blkio.weight_device
# cat blkio.weight_device
dev weight
8:16 300
Configure weight=500 on /dev/sda (8:0) in this cgroup
# echo 8:0 500 > blkio.weight_device
# cat blkio.weight_device
dev weight
8:0 500
8:16 300
Remove specific weight for /dev/sda in this cgroup
# echo 8:0 0 > blkio.weight_device
# cat blkio.weight_device
dev weight
8:16 300
- blkio.time
- disk time allocated to cgroup per device in milliseconds. First
two fields specify the major and minor number of the device and
@ -92,13 +115,105 @@ Details of cgroup files
third field specifies the number of sectors transferred by the
group to/from the device.
- blkio.io_service_bytes
- Number of bytes transferred to/from the disk by the group. These
are further divided by the type of operation - read or write, sync
or async. First two fields specify the major and minor number of the
device, third field specifies the operation type and the fourth field
specifies the number of bytes.
- blkio.io_serviced
- Number of IOs completed to/from the disk by the group. These
are further divided by the type of operation - read or write, sync
or async. First two fields specify the major and minor number of the
device, third field specifies the operation type and the fourth field
specifies the number of IOs.
- blkio.io_service_time
- Total amount of time between request dispatch and request completion
for the IOs done by this cgroup. This is in nanoseconds to make it
meaningful for flash devices too. For devices with queue depth of 1,
this time represents the actual service time. When queue_depth > 1,
that is no longer true as requests may be served out of order. This
may cause the service time for a given IO to include the service time
of multiple IOs when served out of order which may result in total
io_service_time > actual time elapsed. This time is further divided by
the type of operation - read or write, sync or async. First two fields
specify the major and minor number of the device, third field
specifies the operation type and the fourth field specifies the
io_service_time in ns.
- blkio.io_wait_time
- Total amount of time the IOs for this cgroup spent waiting in the
scheduler queues for service. This can be greater than the total time
elapsed since it is cumulative io_wait_time for all IOs. It is not a
measure of total time the cgroup spent waiting but rather a measure of
the wait_time for its individual IOs. For devices with queue_depth > 1
this metric does not include the time spent waiting for service once
the IO is dispatched to the device but till it actually gets serviced
(there might be a time lag here due to re-ordering of requests by the
device). This is in nanoseconds to make it meaningful for flash
devices too. This time is further divided by the type of operation -
read or write, sync or async. First two fields specify the major and
minor number of the device, third field specifies the operation type
and the fourth field specifies the io_wait_time in ns.
- blkio.io_merged
- Total number of bios/requests merged into requests belonging to this
cgroup. This is further divided by the type of operation - read or
write, sync or async.
- blkio.io_queued
- Total number of requests queued up at any given instant for this
cgroup. This is further divided by the type of operation - read or
write, sync or async.
- blkio.avg_queue_size
- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
The average queue size for this cgroup over the entire time of this
cgroup's existence. Queue size samples are taken each time one of the
queues of this cgroup gets a timeslice.
- blkio.group_wait_time
- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
This is the amount of time the cgroup had to wait since it became busy
(i.e., went from 0 to 1 request queued) to get a timeslice for one of
its queues. This is different from the io_wait_time which is the
cumulative total of the amount of time spent by each IO in that cgroup
waiting in the scheduler queue. This is in nanoseconds. If this is
read when the cgroup is in a waiting (for timeslice) state, the stat
will only report the group_wait_time accumulated till the last time it
got a timeslice and will not include the current delta.
- blkio.empty_time
- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
This is the amount of time a cgroup spends without any pending
requests when not being served, i.e., it does not include any time
spent idling for one of the queues of the cgroup. This is in
nanoseconds. If this is read when the cgroup is in an empty state,
the stat will only report the empty_time accumulated till the last
time it had a pending request and will not include the current delta.
- blkio.idle_time
- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y.
This is the amount of time spent by the IO scheduler idling for a
given cgroup in anticipation of a better request than the existing ones
from other queues/cgroups. This is in nanoseconds. If this is read
when the cgroup is in an idling state, the stat will only report the
idle_time accumulated till the last idle period and will not include
the current delta.
- blkio.dequeue
- Debugging aid only enabled if CONFIG_DEBUG_BLK_CGROUP=y. This
gives statistics about how many times a group was dequeued
from the service tree of the device. First two fields specify the major
and minor number of the device and third field specifies the number
of times a group was dequeued from a particular device.
- blkio.reset_stats
- Writing an int to this file will result in resetting all the stats
for that cgroup.
CFQ sysfs tunable
=================
/sys/block/<disk>/queue/iosched/group_isolation

View File

@ -0,0 +1,110 @@
/*
* cgroup_event_listener.c - Simple listener of cgroup events
*
* Copyright (C) Kirill A. Shutemov <kirill@shutemov.name>
*/
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <libgen.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/eventfd.h>
#define USAGE_STR "Usage: cgroup_event_listener <path-to-control-file> <args>\n"
int main(int argc, char **argv)
{
int efd = -1;
int cfd = -1;
int event_control = -1;
char event_control_path[PATH_MAX];
char line[LINE_MAX];
int ret = -1;	/* initialized so the early "goto out" paths report failure */
if (argc != 3) {
fputs(USAGE_STR, stderr);
return 1;
}
cfd = open(argv[1], O_RDONLY);
if (cfd == -1) {
fprintf(stderr, "Cannot open %s: %s\n", argv[1],
strerror(errno));
goto out;
}
/* dirname() may modify its argument, so give it a scratch copy */
strncpy(line, argv[1], LINE_MAX - 1);
line[LINE_MAX - 1] = '\0';
ret = snprintf(event_control_path, PATH_MAX, "%s/cgroup.event_control",
dirname(line));
if (ret >= PATH_MAX) {
fputs("Path to cgroup.event_control is too long\n", stderr);
goto out;
}
event_control = open(event_control_path, O_WRONLY);
if (event_control == -1) {
fprintf(stderr, "Cannot open %s: %s\n", event_control_path,
strerror(errno));
goto out;
}
efd = eventfd(0, 0);
if (efd == -1) {
perror("eventfd() failed");
goto out;
}
ret = snprintf(line, LINE_MAX, "%d %d %s", efd, cfd, argv[2]);
if (ret >= LINE_MAX) {
fputs("Arguments string is too long\n", stderr);
goto out;
}
ret = write(event_control, line, strlen(line) + 1);
if (ret == -1) {
perror("Cannot write to cgroup.event_control");
goto out;
}
while (1) {
uint64_t result;
ret = read(efd, &result, sizeof(result));
if (ret == -1) {
if (errno == EINTR)
continue;
perror("Cannot read from eventfd");
break;
}
assert(ret == sizeof(result));
ret = access(event_control_path, W_OK);
if ((ret == -1) && (errno == ENOENT)) {
puts("The cgroup seems to have removed.");
ret = 0;
break;
}
if (ret == -1) {
perror("cgroup.event_control "
"is not accessable any more");
break;
}
printf("%s %s: crossed\n", argv[1], argv[2]);
}
out:
if (efd >= 0)
close(efd);
if (event_control >= 0)
close(event_control);
if (cfd >= 0)
close(cfd);
return (ret != 0);
}

View File

@ -22,6 +22,8 @@ CONTENTS:
2. Usage Examples and Syntax
2.1 Basic Usage
2.2 Attaching processes
2.3 Mounting hierarchies by name
2.4 Notification API
3. Kernel API
3.1 Overview
3.2 Synchronization
@ -233,8 +235,7 @@ containing the following files describing that cgroup:
- cgroup.procs: list of tgids in the cgroup. This list is not
guaranteed to be sorted or free of duplicate tgids, and userspace
should sort/uniquify the list if this property is required.
This is a read-only file, for now.
- notify_on_release flag: run the release agent on exit?
- release_agent: the path to use for release notifications (this file
exists in the top cgroup only)
@ -338,7 +339,7 @@ To mount a cgroup hierarchy with all available subsystems, type:
The "xxx" is not interpreted by the cgroup code, but will appear in
/proc/mounts so may be any useful identifying string that you like.
To mount a cgroup hierarchy with just the cpuset and memory
subsystems, type:
# mount -t cgroup -o cpuset,memory hier1 /dev/cgroup
@ -434,6 +435,25 @@ you give a subsystem a name.
The name of the subsystem appears as part of the hierarchy description
in /proc/mounts and /proc/<pid>/cgroups.
2.4 Notification API
--------------------
There is a mechanism which allows getting notifications about the changing
status of a cgroup.
To register a new notification handler you need to:
- create a file descriptor for event notification using eventfd(2);
- open a control file to be monitored (e.g. memory.usage_in_bytes);
- write "<event_fd> <control_fd> <args>" to cgroup.event_control.
Interpretation of args is defined by control file implementation;
eventfd will be woken up by control file implementation or when the
cgroup is removed.
To unregister a notification handler, just close the eventfd.
NOTE: Support of notifications should be implemented for the control
file. See documentation for the subsystem.
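The three steps above condense to a short C sketch like the following (see
Documentation/cgroups/cgroup_event_listener.c for a complete program);
register_cgroup_event() is an illustrative helper and error paths are
abbreviated.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/eventfd.h>

/* Sketch only: returns the eventfd to read/poll, or -1 on failure. */
int register_cgroup_event(const char *control_file,
			  const char *event_control_file, const char *args)
{
	char line[256];
	int efd = eventfd(0, 0);			/* step 1 */
	int cfd = open(control_file, O_RDONLY);		/* step 2 */
	int ecfd = open(event_control_file, O_WRONLY);

	if (efd < 0 || cfd < 0 || ecfd < 0)
		return -1;	/* a real program would close open fds here */

	/* step 3: "<event_fd> <control_fd> <args>" */
	snprintf(line, sizeof(line), "%d %d %s", efd, cfd, args);
	if (write(ecfd, line, strlen(line) + 1) < 0)
		return -1;

	close(ecfd);
	close(cfd);
	return efd;	/* read(efd, &counter, 8) blocks until an event fires */
}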
3. Kernel API
=============
@ -488,6 +508,11 @@ Each subsystem should:
- add an entry in linux/cgroup_subsys.h
- define a cgroup_subsys object called <name>_subsys
If a subsystem can be compiled as a module, it should also have in its
module initcall a call to cgroup_load_subsys(), and in its exitcall a
call to cgroup_unload_subsys(). It should also set its_subsys.module =
THIS_MODULE in its .c file.
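A hedged sketch of the modular boilerplate just described, for a hypothetical
subsystem named "foo"; the create/destroy callbacks are the subsystem's own
and only stubbed here, and details such as .subsys_id are omitted for brevity.

#include <linux/cgroup.h>
#include <linux/module.h>

static struct cgroup_subsys_state *foo_create(struct cgroup_subsys *ss,
					      struct cgroup *cgrp);
static void foo_destroy(struct cgroup_subsys *ss, struct cgroup *cgrp);

struct cgroup_subsys foo_subsys = {
	.name = "foo",
	.create = foo_create,		/* the only mandatory methods */
	.destroy = foo_destroy,
	.module = THIS_MODULE,		/* the "its_subsys.module" step above */
};

static int __init foo_init(void)
{
	return cgroup_load_subsys(&foo_subsys);		/* in the initcall */
}

static void __exit foo_exit(void)
{
	cgroup_unload_subsys(&foo_subsys);		/* in the exitcall */
}

module_init(foo_init);
module_exit(foo_exit);
MODULE_LICENSE("GPL");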
Each subsystem may export the following methods. The only mandatory
methods are create/destroy. Any others that are null are presumed to
be successful no-ops.
@ -536,10 +561,21 @@ returns an error, this will abort the attach operation. If a NULL
task is passed, then a successful result indicates that *any*
unspecified task can be moved into the cgroup. Note that this isn't
called on a fork. If this method returns 0 (success) then this should
remain valid while the caller holds cgroup_mutex and it is ensured that either
attach() or cancel_attach() will be called in future. If threadgroup is
true, then a successful result indicates that all threads in the given
thread's threadgroup can be moved together.
void cancel_attach(struct cgroup_subsys *ss, struct cgroup *cgrp,
struct task_struct *task, bool threadgroup)
(cgroup_mutex held by caller)
Called when a task attach operation has failed after can_attach() has succeeded.
A subsystem whose can_attach() has some side-effects should provide this
function, so that the subsystem can implement a rollback. If it has no
side-effects, this callback is not necessary. It will be called only for
subsystems whose can_attach() operation has succeeded.
void attach(struct cgroup_subsys *ss, struct cgroup *cgrp,
struct cgroup *old_cgrp, struct task_struct *task,
bool threadgroup)
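A hedged sketch of the can_attach()/cancel_attach() pairing described above,
again for a hypothetical "foo" subsystem; foo_reserve()/foo_unreserve() stand
in for whatever side-effect needs rolling back.

/* Sketch only: take a reservation in can_attach(), roll it back in
 * cancel_attach() if the overall attach operation later fails. */
static int foo_can_attach(struct cgroup_subsys *ss, struct cgroup *cgrp,
			  struct task_struct *task, bool threadgroup)
{
	return foo_reserve(cgrp, task);		/* side-effect: reservation */
}

static void foo_cancel_attach(struct cgroup_subsys *ss, struct cgroup *cgrp,
			      struct task_struct *task, bool threadgroup)
{
	foo_unreserve(cgrp, task);		/* undo what can_attach() did */
}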

View File

@ -42,7 +42,7 @@ Nodes to a set of tasks. In this document "Memory Node" refers to
an on-line node that contains memory.
Cpusets constrain the CPU and Memory placement of tasks to only
the resources within a task's current cpuset. They form a nested
hierarchy visible in a virtual file system. These are the essential
hooks, beyond what is already present, required to manage dynamic
job placement on large systems.
@ -53,11 +53,11 @@ Documentation/cgroups/cgroups.txt.
Requests by a task, using the sched_setaffinity(2) system call to
include CPUs in its CPU affinity mask, and using the mbind(2) and
set_mempolicy(2) system calls to include Memory Nodes in its memory
policy, are both filtered through that task's cpuset, filtering out any
CPUs or Memory Nodes not in that cpuset. The scheduler will not
schedule a task on a CPU that is not allowed in its cpus_allowed
vector, and the kernel page allocator will not allocate a page on a
node that is not allowed in the requesting task's mems_allowed vector.
User level code may create and destroy cpusets by name in the cgroup
virtual file system, manage the attributes and permissions of these
@ -121,9 +121,9 @@ Cpusets extends these two mechanisms as follows:
- Each task in the system is attached to a cpuset, via a pointer
in the task structure to a reference counted cgroup structure.
- Calls to sched_setaffinity are filtered to just those CPUs
allowed in that task's cpuset.
- Calls to mbind and set_mempolicy are filtered to just
those Memory Nodes allowed in that task's cpuset.
- The root cpuset contains all the systems CPUs and Memory
Nodes.
- For any cpuset, one can define child cpusets containing a subset
@ -141,11 +141,11 @@ into the rest of the kernel, none in performance critical paths:
- in init/main.c, to initialize the root cpuset at system boot.
- in fork and exit, to attach and detach a task from its cpuset.
- in sched_setaffinity, to mask the requested CPUs by what's
allowed in that task's cpuset.
- in sched.c migrate_live_tasks(), to keep migrating tasks within
the CPUs allowed by their cpuset, if possible.
- in the mbind and set_mempolicy system calls, to mask the requested
Memory Nodes by what's allowed in that task's cpuset.
- in page_alloc.c, to restrict memory to allowed nodes.
- in vmscan.c, to restrict page recovery to the current cpuset.
@ -155,7 +155,7 @@ new system calls are added for cpusets - all support for querying and
modifying cpusets is via this cpuset file system.
The /proc/<pid>/status file for each task has four added lines,
displaying the task's cpus_allowed (on which CPUs it may be scheduled)
and mems_allowed (on which Memory Nodes it may obtain memory),
in the two formats seen in the following example:
@ -168,20 +168,20 @@ Each cpuset is represented by a directory in the cgroup file system
containing (on top of the standard cgroup files) the following
files describing that cpuset:
- cpuset.cpus: list of CPUs in that cpuset
- cpuset.mems: list of Memory Nodes in that cpuset
- cpuset.memory_migrate flag: if set, move pages to cpusets nodes
- cpuset.cpu_exclusive flag: is cpu placement exclusive?
- cpuset.mem_exclusive flag: is memory placement exclusive?
- cpuset.mem_hardwall flag: is memory allocation hardwalled
- cpuset.memory_pressure: measure of how much paging pressure in cpuset
- cpuset.memory_spread_page flag: if set, spread page cache evenly on allowed nodes
- cpuset.memory_spread_slab flag: if set, spread slab cache evenly on allowed nodes
- cpuset.sched_load_balance flag: if set, load balance within CPUs on that cpuset
- cpuset.sched_relax_domain_level: the searching range when migrating tasks
In addition, the root cpuset only has the following file:
- cpuset.memory_pressure_enabled flag: compute memory_pressure?
New cpusets are created using the mkdir system call or shell
command. The properties of a cpuset, such as its flags, allowed
@ -229,7 +229,7 @@ If a cpuset is cpu or mem exclusive, no other cpuset, other than
a direct ancestor or descendant, may share any of the same CPUs or
Memory Nodes.
A cpuset that is cpuset.mem_exclusive *or* cpuset.mem_hardwall is "hardwalled",
i.e. it restricts kernel allocations for page, buffer and other data
commonly shared by the kernel across multiple users. All cpusets,
whether hardwalled or not, restrict allocations of memory for user
@ -304,15 +304,15 @@ times 1000.
---------------------------
There are two boolean flag files per cpuset that control where the
kernel allocates pages for the file system buffers and related in
kernel data structures. They are called 'cpuset.memory_spread_page' and
'cpuset.memory_spread_slab'.
If the per-cpuset boolean flag file 'cpuset.memory_spread_page' is set, then
the kernel will spread the file system buffers (page cache) evenly
over all the nodes that the faulting task is allowed to use, instead
of preferring to put those pages on the node where the task is running.
If the per-cpuset boolean flag file 'cpuset.memory_spread_slab' is set,
then the kernel will spread some file system related slab caches,
such as for inodes and dentries evenly over all the nodes that the
faulting task is allowed to use, instead of preferring to put those
@ -323,41 +323,41 @@ stack segment pages of a task.
By default, both kinds of memory spreading are off, and memory
pages are allocated on the node local to where the task is running,
except perhaps as modified by the task's NUMA mempolicy or cpuset
configuration, so long as sufficient free memory pages are available.
When new cpusets are created, they inherit the memory spread settings
of their parent.
Setting memory spreading causes allocations for the affected page
or slab caches to ignore the task's NUMA mempolicy and be spread
instead. Tasks using mbind() or set_mempolicy() calls to set NUMA
mempolicies will not notice any change in these calls as a result of
their containing task's memory spread settings. If memory spreading
is turned off, then the currently specified NUMA mempolicy once again
applies to memory page allocations.
Both 'cpuset.memory_spread_page' and 'cpuset.memory_spread_slab' are boolean flag
files. By default they contain "0", meaning that the feature is off
for that cpuset. If a "1" is written to that file, then that turns
the named feature on.
The implementation is simple.
Setting the flag 'cpuset.memory_spread_page' turns on a per-process flag
PF_SPREAD_PAGE for each task that is in that cpuset or subsequently
joins that cpuset. The page allocation calls for the page cache
is modified to perform an inline check for this PF_SPREAD_PAGE task
flag, and if set, a call to a new routine cpuset_mem_spread_node()
returns the node to prefer for the allocation.
Similarly, setting 'cpuset.memory_spread_slab' turns on the flag
PF_SPREAD_SLAB, and appropriately marked slab caches will allocate
pages from the node returned by cpuset_mem_spread_node().
The cpuset_mem_spread_node() routine is also simple. It uses the
value of a per-task rotor cpuset_mem_spread_rotor to select the next
node in the current task's mems_allowed to prefer for the allocation.
This memory placement policy is also known (in other contexts) as
round-robin or interleave.
@ -404,24 +404,24 @@ the following two situations:
system overhead on those CPUs, including avoiding task load
balancing if that is not needed.
When the per-cpuset flag "sched_load_balance" is enabled (the default
setting), it requests that all the CPUs in that cpusets allowed 'cpus'
When the per-cpuset flag "cpuset.sched_load_balance" is enabled (the default
setting), it requests that all the CPUs in that cpusets allowed 'cpuset.cpus'
be contained in a single sched domain, ensuring that load balancing
can move a task (not otherwise pinned, as by sched_setaffinity)
from any CPU in that cpuset to any other.
When the per-cpuset flag "sched_load_balance" is disabled, then the
When the per-cpuset flag "cpuset.sched_load_balance" is disabled, then the
scheduler will avoid load balancing across the CPUs in that cpuset,
--except-- in so far as is necessary because some overlapping cpuset
has "sched_load_balance" enabled.
So, for example, if the top cpuset has the flag "sched_load_balance"
So, for example, if the top cpuset has the flag "cpuset.sched_load_balance"
enabled, then the scheduler will have one sched domain covering all
CPUs, and the setting of the "sched_load_balance" flag in any other
CPUs, and the setting of the "cpuset.sched_load_balance" flag in any other
cpusets won't matter, as we're already fully load balancing.
Therefore in the above two situations, the top cpuset flag
"sched_load_balance" should be disabled, and only some of the smaller,
"cpuset.sched_load_balance" should be disabled, and only some of the smaller,
child cpusets have this flag enabled.
When doing this, you don't usually want to leave any unpinned tasks in
@ -433,7 +433,7 @@ scheduler might not consider the possibility of load balancing that
task to that underused CPU.
Of course, tasks pinned to a particular CPU can be left in a cpuset
that disables "sched_load_balance" as those tasks aren't going anywhere
that disables "cpuset.sched_load_balance" as those tasks aren't going anywhere
else anyway.
There is an impedance mismatch here, between cpusets and sched domains.
@ -443,19 +443,19 @@ overlap and each CPU is in at most one sched domain.
It is necessary for sched domains to be flat because load balancing
across partially overlapping sets of CPUs would risk unstable dynamics
that would be beyond our understanding. So if each of two partially
overlapping cpusets enables the flag 'cpuset.sched_load_balance', then we
form a single sched domain that is a superset of both. We won't move
a task to a CPU outside its cpuset, but the scheduler load balancing
code might waste some compute cycles considering that possibility.
This mismatch is why there is not a simple one-to-one relation
between which cpusets have the flag "sched_load_balance" enabled,
between which cpusets have the flag "cpuset.sched_load_balance" enabled,
and the sched domain configuration. If a cpuset enables the flag, it
will get balancing across all its CPUs, but if it disables the flag,
it will only be assured of no load balancing if no other overlapping
cpuset enables the flag.
If two cpusets have partially overlapping 'cpuset.cpus' allowed, and only
one of them has this flag enabled, then the other may find its
tasks only partially load balanced, just on the overlapping CPUs.
This is just the general case of the top_cpuset example given a few
@ -468,23 +468,23 @@ load balancing to the other CPUs.
1.7.1 sched_load_balance implementation details.
------------------------------------------------
The per-cpuset flag 'cpuset.sched_load_balance' defaults to enabled (contrary
to most cpuset flags.) When enabled for a cpuset, the kernel will
ensure that it can load balance across all the CPUs in that cpuset
(makes sure that all the CPUs in the cpus_allowed of that cpuset are
in the same sched domain.)
If two overlapping cpusets both have 'cpuset.sched_load_balance' enabled,
then they will be (must be) both in the same sched domain.
If, as is the default, the top cpuset has 'cpuset.sched_load_balance' enabled,
then by the above that means there is a single sched domain covering
the whole system, regardless of any other cpuset settings.
The kernel commits to user space that it will avoid load balancing
where it can. It will pick as fine a granularity partition of sched
domains as it can while still providing load balancing for any set
of CPUs allowed to a cpuset having 'cpuset.sched_load_balance' enabled.
The internal kernel cpuset to scheduler interface passes from the
cpuset code to the scheduler code a partition of the load balanced
@ -495,9 +495,9 @@ all the CPUs that must be load balanced.
The cpuset code builds a new such partition and passes it to the
scheduler sched domain setup code, to have the sched domains rebuilt
as necessary, whenever:
- the 'cpuset.sched_load_balance' flag of a cpuset with non-empty CPUs changes,
- or CPUs come or go from a cpuset with this flag enabled,
- or 'cpuset.sched_relax_domain_level' value of a cpuset with non-empty CPUs
and with this flag enabled changes,
- or a cpuset with non-empty CPUs and with this flag enabled is removed,
- or a cpu is offlined/onlined.
@ -542,7 +542,7 @@ As the result, task B on CPU X need to wait task A or wait load balance
on the next tick. For some applications in special situation, waiting
1 tick may be too long.
The 'cpuset.sched_relax_domain_level' file allows you to request changing
this searching range as you like. This file takes int value which
indicates size of searching range in levels ideally as follows,
otherwise initial value -1 that indicates the cpuset has no request.
@ -559,8 +559,8 @@ The system default is architecture dependent. The system default
can be changed using the relax_domain_level= boot parameter.
This file is per-cpuset and affect the sched domain where the cpuset
belongs to. Therefore if the flag 'cpuset.sched_load_balance' of a cpuset
is disabled, then 'cpuset.sched_relax_domain_level' has no effect since
there is no sched domain belonging to the cpuset.
If multiple cpusets are overlapping and hence they form a single sched
@ -594,7 +594,7 @@ is attached, is subtle.
If a cpuset has its Memory Nodes modified, then for each task attached
to that cpuset, the next time that the kernel attempts to allocate
a page of memory for that task, the kernel will notice the change
in the task's cpuset, and update its per-task memory placement to
remain within the new cpusets memory placement. If the task was using
mempolicy MPOL_BIND, and the nodes to which it was bound overlap with
its new cpuset, then the task will continue to use whatever subset
@ -603,13 +603,13 @@ was using MPOL_BIND and now none of its MPOL_BIND nodes are allowed
in the new cpuset, then the task will be essentially treated as if it
was MPOL_BIND bound to the new cpuset (even though its NUMA placement,
as queried by get_mempolicy(), doesn't change). If a task is moved
from one cpuset to another, then the kernel will adjust the task's
memory placement, as above, the next time that the kernel attempts
to allocate a page of memory for that task.
If a cpuset has its 'cpuset.cpus' modified, then each task in that cpuset
will have its allowed CPU placement changed immediately. Similarly,
if a task's pid is written to another cpuset's 'tasks' file, then its
allowed CPU placement is changed immediately. If such a task had been
bound to some subset of its cpuset using the sched_setaffinity() call,
the task will be allowed to run on any CPU allowed in its new cpuset,
@ -622,21 +622,21 @@ and the processor placement is updated immediately.
Normally, once a page is allocated (given a physical page
of main memory) then that page stays on whatever node it
was allocated, so long as it remains allocated, even if the
cpusets memory placement policy 'cpuset.mems' subsequently changes.
If the cpuset flag file 'cpuset.memory_migrate' is set true, then when
tasks are attached to that cpuset, any pages that task had
allocated to it on nodes in its previous cpuset are migrated
to the task's new cpuset. The relative placement of the page within
the cpuset is preserved during these migration operations if possible.
For example if the page was on the second valid node of the prior cpuset
then the page will be placed on the second valid node of the new cpuset.
Also if 'cpuset.memory_migrate' is set true, then if that cpuset's
'cpuset.mems' file is modified, pages allocated to tasks in that
cpuset, that were on nodes in the previous setting of 'cpuset.mems',
will be moved to nodes in the new setting of 'cpuset.mems'.
Pages that were not in the task's prior cpuset, or in the cpuset's
prior 'cpuset.mems' setting, will not be moved.
There is an exception to the above. If hotplug functionality is used
to remove all the CPUs that are currently assigned to a cpuset,
@ -655,7 +655,7 @@ There is a second exception to the above. GFP_ATOMIC requests are
kernel internal allocations that must be satisfied, immediately.
The kernel may drop some request, in rare cases even panic, if a
GFP_ATOMIC alloc fails. If the request cannot be satisfied within
the current task's cpuset, then we relax the cpuset, and look for
memory anywhere we can find it. It's better to violate the cpuset
than stress the kernel.
@ -678,8 +678,8 @@ and then start a subshell 'sh' in that cpuset:
cd /dev/cpuset
mkdir Charlie
cd Charlie
/bin/echo 2-3 > cpuset.cpus
/bin/echo 1 > cpuset.mems
/bin/echo $$ > tasks
sh
# The subshell 'sh' is now running in cpuset Charlie
@ -725,10 +725,13 @@ Now you want to do something with this cpuset.
In this directory you can find several files:
# ls
cpuset.cpu_exclusive cpuset.memory_spread_slab
cpuset.cpus cpuset.mems
cpuset.mem_exclusive cpuset.sched_load_balance
cpuset.mem_hardwall cpuset.sched_relax_domain_level
cpuset.memory_migrate notify_on_release
cpuset.memory_pressure tasks
cpuset.memory_spread_page
Reading them will give you information about the state of this cpuset:
the CPUs and Memory Nodes it can use, the processes that are using
@ -736,13 +739,13 @@ it, its properties. By writing to these files you can manipulate
the cpuset.
Set some flags:
# /bin/echo 1 > cpuset.cpu_exclusive
Add some cpus:
# /bin/echo 0-7 > cpuset.cpus
Add some mems:
# /bin/echo 0-7 > cpuset.mems
Now attach your shell to this cpuset:
# /bin/echo $$ > tasks
@ -774,28 +777,28 @@ echo "/sbin/cpuset_release_agent" > /dev/cpuset/release_agent
This is the syntax to use when writing in the cpus or mems files
in cpuset directories:
# /bin/echo 1-4 > cpuset.cpus -> set cpus list to cpus 1,2,3,4
# /bin/echo 1,2,3,4 > cpuset.cpus -> set cpus list to cpus 1,2,3,4
To add a CPU to a cpuset, write the new list of CPUs including the
CPU to be added. To add 6 to the above cpuset:
# /bin/echo 1-4,6 > cpuset.cpus -> set cpus list to cpus 1,2,3,4,6
Similarly to remove a CPU from a cpuset, write the new list of CPUs
without the CPU to be removed.
To remove all the CPUs:
# /bin/echo "" > cpus -> clear cpus list
# /bin/echo "" > cpuset.cpus -> clear cpus list
2.3 Setting flags
-----------------
The syntax is very simple:
# /bin/echo 1 > cpuset.cpu_exclusive -> set flag 'cpuset.cpu_exclusive'
# /bin/echo 0 > cpuset.cpu_exclusive -> unset flag 'cpuset.cpu_exclusive'
2.4 Attaching processes
-----------------------

View File

@ -1,6 +1,6 @@
Memory Resource Controller(Memcg) Implementation Memo.
Last Updated: 2010/2
Base Kernel Version: based on 2.6.33-rc7-mm(candidate for 34).
Because VM is getting complex (one of reasons is memcg...), memcg's behavior
is complex. This is a document for memcg's internal behavior.
@ -244,7 +244,7 @@ Under below explanation, we assume CONFIG_MEM_RES_CTRL_SWAP=y.
we have to check if OLDPAGE/NEWPAGE is a valid page after commit().
8. LRU
Each memcg has its own private LRU. Now, its handling is under global
VM's control (means that it's handled under global zone->lru_lock).
Almost all routines around memcg's LRU are called by global LRU's
list management functions under zone->lru_lock().
@ -337,7 +337,7 @@ Under below explanation, we assume CONFIG_MEM_RES_CTRL_SWAP=y.
race and lock dependency with other cgroup subsystems.
example)
# mount -t cgroup none /cgroup -o cpuset,memory,cpu,devices
and do task move, mkdir, rmdir etc...under this.
@ -348,7 +348,7 @@ Under below explanation, we assume CONFIG_MEM_RES_CTRL_SWAP=y.
For example, test like following is good.
(Shell-A)
# mount -t cgroup none /cgroup -o memory
# mkdir /cgroup/test
# echo 40M > /cgroup/test/memory.limit_in_bytes
# echo 0 > /cgroup/test/tasks
@ -378,3 +378,42 @@ Under below explanation, we assume CONFIG_MEM_RES_CTRL_SWAP=y.
#echo 50M > memory.limit_in_bytes
#echo 50M > memory.memsw.limit_in_bytes
run 51M of malloc
9.9 Move charges at task migration
Charges associated with a task can be moved along with task migration.
(Shell-A)
#mkdir /cgroup/A
#echo $$ >/cgroup/A/tasks
run some programs which uses some amount of memory in /cgroup/A.
(Shell-B)
#mkdir /cgroup/B
#echo 1 >/cgroup/B/memory.move_charge_at_immigrate
#echo "pid of the program running in group A" >/cgroup/B/tasks
You can see charges have been moved by reading *.usage_in_bytes or
memory.stat of both A and B.
See 8.2 of Documentation/cgroups/memory.txt to see what value should be
written to move_charge_at_immigrate.
9.10 Memory thresholds
The memory controller implements memory thresholds using the cgroups
notification API. You can use Documentation/cgroups/cgroup_event_listener.c
to test it.
(Shell-A) Create cgroup and run event listener
# mkdir /cgroup/A
# ./cgroup_event_listener /cgroup/A/memory.usage_in_bytes 5M
(Shell-B) Add task to cgroup and try to allocate and free memory
# echo $$ >/cgroup/A/tasks
# a="$(dd if=/dev/zero bs=1M count=10)"
# a=
You will see message from cgroup_event_listener every time you cross
the thresholds.
Use /cgroup/A/memory.memsw.usage_in_bytes to test memsw thresholds.
It's a good idea to test the root cgroup as well.

View File

@ -1,18 +1,15 @@
Memory Resource Controller
NOTE: The Memory Resource Controller has generically been referred
to as the memory controller in this document. Do not confuse memory
controller used here with the memory controller that is used in hardware.
(For editors)
In this document:
When we mention a cgroup (cgroupfs's directory) with memory controller,
we call it "memory cgroup". When you see git-log and source code, you'll
see patch's title and function names tend to use "memcg".
In this document, we avoid using it.
Benefits and Purpose of the memory controller
@ -33,6 +30,45 @@ d. A CD/DVD burner could control the amount of memory used by the
e. There are several other use cases, find one or use the controller just
for fun (to learn and hack on the VM subsystem).
Current Status: linux-2.6.34-mmotm(development version of 2010/April)
Features:
- accounting anonymous pages, file caches, swap caches usage and limiting them.
- private LRU and reclaim routine. (system's global LRU and private LRU
work independently from each other)
- optionally, memory+swap usage can be accounted and limited.
- hierarchical accounting
- soft limit
- moving (recharging) of charges when a task moves is selectable.
- usage threshold notifier
- oom-killer disable knob and oom-notifier
- Root cgroup has no limit controls.
Kernel memory and Hugepages are not under control yet. We just manage
pages on LRU. To add more controls, we have to take care of performance.
Brief summary of control files.
tasks # attach a task(thread) and show list of threads
cgroup.procs # show list of processes
cgroup.event_control # an interface for event_fd()
memory.usage_in_bytes # show current memory(RSS+Cache) usage.
memory.memsw.usage_in_bytes # show current memory+Swap usage
memory.limit_in_bytes # set/show limit of memory usage
memory.memsw.limit_in_bytes # set/show limit of memory+Swap usage
memory.failcnt # show the number of memory usage hits limits
memory.memsw.failcnt # show the number of memory+Swap hits limits
memory.max_usage_in_bytes # show max memory usage recorded
memory.memsw.max_usage_in_bytes # show max memory+Swap usage recorded
memory.soft_limit_in_bytes # set/show soft limit of memory usage
memory.stat # show various statistics
memory.use_hierarchy # set/show hierarchical account enabled
memory.force_empty # trigger forced move charge to parent
memory.swappiness # set/show swappiness parameter of vmscan
(See sysctl's vm.swappiness)
memory.move_charge_at_immigrate # set/show controls of moving charges
memory.oom_control # set/show oom controls.
1. History
The memory controller has a long history. A request for comments for the memory
@ -106,14 +142,14 @@ the necessary data structures and check if the cgroup that is being charged
is over its limit. If it is then reclaim is invoked on the cgroup.
More details can be found in the reclaim section of this document.
If everything goes well, a page meta-data-structure called page_cgroup is
updated. page_cgroup has its own LRU on cgroup.
(*) page_cgroup structure is allocated at boot/memory-hotplug time.
2.2.1 Accounting details
All mapped anon pages (RSS) and cache pages (Page Cache) are accounted.
Some pages which are never reclaimable and will not be on the global LRU
are not accounted. We just account pages under usual VM management.
RSS pages are accounted at page_fault unless they've already been accounted
for earlier. A file page will be accounted for as Page Cache when it's
@ -121,12 +157,19 @@ inserted into inode (radix-tree). While it's mapped into the page tables of
processes, duplicate accounting is carefully avoided.
A RSS page is unaccounted when it's fully unmapped. A PageCache page is
unaccounted when it's removed from radix-tree. Even if RSS pages are fully
unmapped (by kswapd), they may exist as SwapCache in the system until they
are really freed. Such SwapCaches are also accounted.
A swapped-in page is not accounted until it's mapped.
Note: The kernel does swapin-readahead and read multiple swaps at once.
This means swapped-in pages may contain pages for other tasks than a task
causing page fault. So, we avoid accounting at swap-in I/O.
At page migration, accounting information is kept.
Note: we just account pages-on-LRU because our purpose is to control amount
of used pages; not-on-LRU pages tend to be out-of-control from VM view.
2.3 Shared Page Accounting
@ -143,6 +186,7 @@ caller of swapoff rather than the users of shmem.
2.4 Swap Extension (CONFIG_CGROUP_MEM_RES_CTLR_SWAP)
Swap Extension allows you to record charge for swap. A swapped-in page is
charged back to original page allocator if possible.
@ -150,13 +194,20 @@ When swap is accounted, following files are added.
- memory.memsw.usage_in_bytes.
- memory.memsw.limit_in_bytes.
memsw means memory+swap. Usage of memory+swap is limited by
memsw.limit_in_bytes.
Example: Assume a system with 4G of swap. A task which allocates 6G of memory
(by mistake) under 2G memory limitation will use all swap.
In this case, setting memsw.limit_in_bytes=3G will prevent bad use of swap.
By using memsw limit, you can avoid system OOM which can be caused by swap
shortage.
* why 'memory+swap' rather than swap.
The global LRU(kswapd) can swap out arbitrary pages. Swap-out means
to move account from memory to swap...there is no change in usage of
memory+swap. In other words, when we want to limit the usage of swap without
affecting global LRU, memory+swap limit is better than just limiting swap from
OS point of view.
* What happens when a cgroup hits memory.memsw.limit_in_bytes
@ -168,12 +219,12 @@ it by cgroup.
2.5 Reclaim
Each cgroup maintains a per cgroup LRU which has the same structure as
global VM. When a cgroup goes over its limit, we first try
to reclaim memory from the cgroup so as to make space for the new
pages that the cgroup has touched. If the reclaim is unsuccessful,
an OOM routine is invoked to select and kill the bulkiest task in the
cgroup. (See 10. OOM Control below.)
The reclaim algorithm has not been modified for cgroups, except that
pages that are selected for reclaiming come from the per cgroup LRU
@ -182,13 +233,24 @@ list.
NOTE: Reclaim does not work for the root cgroup, since we cannot set any
limits on the root cgroup.
Note2: When panic_on_oom is set to "2", the whole system will panic.
When the oom event notifier is registered, events will be delivered.
(See oom_control section)
2.6 Locking
lock_page_cgroup()/unlock_page_cgroup() should not be called under
mapping->tree_lock.
Otherwise, the lock order is as follows:
PG_locked.
mm->page_table_lock
zone->lru_lock
lock_page_cgroup.
In many cases, just lock_page_cgroup() is called.
per-zone-per-cgroup LRU (cgroup's private LRU) is just guarded by
zone->lru_lock, it has no lock of its own.
3. User Interface
@ -197,6 +259,7 @@ The memory controller uses the following hierarchy
a. Enable CONFIG_CGROUPS
b. Enable CONFIG_RESOURCE_COUNTERS
c. Enable CONFIG_CGROUP_MEM_RES_CTLR
d. Enable CONFIG_CGROUP_MEM_RES_CTLR_SWAP (to use swap extension)
1. Prepare the cgroups
# mkdir -p /cgroups
@ -204,31 +267,28 @@ c. Enable CONFIG_CGROUP_MEM_RES_CTLR
2. Make the new group and move bash into it
# mkdir /cgroups/0
# echo $$ > /cgroups/0/tasks
Since now we're in the 0 cgroup, we can alter the memory limit:
# echo 4M > /cgroups/0/memory.limit_in_bytes
NOTE: We can use a suffix (k, K, m, M, g or G) to indicate values in kilo,
mega or gigabytes. (Here, Kilo, Mega, Giga are Kibibytes, Mebibytes, Gibibytes.)
NOTE: We can write "-1" to reset the *.limit_in_bytes(unlimited).
NOTE: We cannot set limits on the root cgroup any more.
# cat /cgroups/0/memory.limit_in_bytes
4194304
We can check the usage:
# cat /cgroups/0/memory.usage_in_bytes
1216512
A successful write to this file does not guarantee a successful set of
this limit to the value written into the file. This can be due to a
number of factors, such as rounding up to page boundaries or the total
availability of memory on the system. The user is required to re-read
this file after a write to guarantee the value committed by the kernel.
# echo 1 > memory.limit_in_bytes
@ -243,15 +303,23 @@ caches, RSS and Active pages/Inactive pages are shown.
4. Testing
For testing features and implementation, see memcg_test.txt.
Performance test is also important. To see pure memory controller's overhead,
testing on tmpfs will give you good numbers of small overheads.
Example: do kernel make on tmpfs.
Page-fault scalability is also important. When measuring a parallel
page-fault test, a multi-process test may be better than a multi-thread
test because the latter has noise from shared objects/status.
But the above two are testing extreme situations.
Trying usual test under memory controller is always helpful.
4.1 Troubleshooting
Sometimes a user might find that the application under a cgroup is
terminated by OOM killer. There are several causes for this:
1. The cgroup limit is too low (just too low to do anything useful)
2. The user is using anonymous memory and swap is turned off or too low
@ -259,21 +327,29 @@ terminated. There are several causes for this:
A sync followed by echo 1 > /proc/sys/vm/drop_caches will help get rid of
some of the pages cached in the cgroup (page cache pages).
To know what happens, disabling the OOM killer via 10. OOM Control (see
below) and seeing what happens can be helpful.
4.2 Task migration
When a task migrates from one cgroup to another, its charge is not
carried forward by default. The pages allocated from the original cgroup still
remain charged to it, the charge is dropped when the page is freed or
reclaimed.
You can move charges of a task along with task migration.
See 8. "Move charges at task migration"
4.3 Removing a cgroup
A cgroup can be removed by rmdir, but as discussed in sections 4.1 and 4.2, a
cgroup might have some charge associated with it, even though all
tasks have migrated away from it. (because we charge against pages, not
against tasks.)
Such charges are freed or moved to their parent. At moving, both of RSS
and CACHES are moved to parent.
rmdir() may return -EBUSY if freeing/moving fails. See 5.1 also.
Charges recorded in swap information is not updated at removal of cgroup.
Recorded information is discarded and a cgroup which uses swap (swapcache)
@ -289,10 +365,10 @@ will be charged as a new owner of it.
# echo 0 > memory.force_empty
Almost all pages tracked by this memory cgroup will be unmapped and freed.
Some pages cannot be freed because they are locked or in-use. Such pages are
moved to parent and this cgroup will be empty. This may return -EBUSY if
VM is too busy to free/move all pages immediately.
A typical use case of this interface is to call it before rmdir().
Because rmdir() moves all pages to the parent, some out-of-use page caches can be
@ -302,19 +378,41 @@ will be charged as a new owner of it.
memory.stat file includes following statistics
# per-memory cgroup local status
cache - # of bytes of page cache memory.
rss - # of bytes of anonymous and swap cache memory.
mapped_file - # of bytes of mapped file (includes tmpfs/shmem)
pgpgin - # of pages paged in (equivalent to # of charging events).
pgpgout - # of pages paged out (equivalent to # of uncharging events).
swap - # of bytes of swap usage
inactive_anon - # of bytes of anonymous and swap cache memory on inactive
    LRU list.
active_anon - # of bytes of anonymous and swap cache memory on active
    LRU list.
inactive_file - # of bytes of file-backed memory on inactive LRU list.
active_file - # of bytes of file-backed memory on active LRU list.
unevictable - # of bytes of memory that cannot be reclaimed (mlocked etc).
# status considering hierarchy (see memory.use_hierarchy settings)
hierarchical_memory_limit - # of bytes of memory limit with regard to hierarchy
under which the memory cgroup is
hierarchical_memsw_limit - # of bytes of memory+swap limit with regard to
hierarchy under which memory cgroup is.
total_cache - sum of all children's "cache"
total_rss - sum of all children's "rss"
total_mapped_file - sum of all children's "mapped_file"
total_pgpgin - sum of all children's "pgpgin"
total_pgpgout - sum of all children's "pgpgout"
total_swap - sum of all children's "swap"
total_inactive_anon - sum of all children's "inactive_anon"
total_active_anon - sum of all children's "active_anon"
total_inactive_file - sum of all children's "inactive_file"
total_active_file - sum of all children's "active_file"
total_unevictable - sum of all children's "unevictable"
# The following additional stats are dependent on CONFIG_DEBUG_VM.
inactive_ratio - VM internal parameter. (see mm/page_alloc.c)
recent_rotated_anon - VM internal parameter. (see mm/vmscan.c)
@ -323,24 +421,37 @@ recent_scanned_anon - VM internal parameter. (see mm/vmscan.c)
recent_scanned_file - VM internal parameter. (see mm/vmscan.c)
Memo:
recent_rotated means recent frequency of LRU rotation.
recent_scanned means recent # of scans to LRU.
These are shown for debugging; please see the code for their exact meanings.
Note:
Only anonymous and swap cache memory is listed as part of 'rss' stat.
This should not be confused with the true 'resident set size' or the
amount of physical memory used by the cgroup.
'rss + mapped_file' will give you the resident set size of the cgroup.
(Note: file and shmem may be shared with other cgroups. In that case,
mapped_file is accounted only when the memory cgroup is the owner of the
page cache.)
5.3 swappiness
Similar to /proc/sys/vm/swappiness, but affecting a hierarchy of groups only.
The following cgroups' swappiness can't be changed:
- the root cgroup (it uses /proc/sys/vm/swappiness).
- a cgroup which uses hierarchy and has other cgroup(s) below it.
- a cgroup which uses hierarchy and is not the root of the hierarchy.
5.4 failcnt
A memory cgroup provides memory.failcnt and memory.memsw.failcnt files.
This failcnt (== failure count) shows the number of times that a usage counter
hit its limit. When a memory cgroup hits a limit, failcnt increases and the
memory under it will be reclaimed.
You can reset failcnt by writing 0 to the failcnt file.
# echo 0 > .../memory.failcnt
6. Hierarchy support
@ -359,13 +470,13 @@ hierarchy
In the diagram above, with hierarchical accounting enabled, all memory
usage of e is accounted to its ancestors up until the root (i.e., c and root)
that has memory.use_hierarchy enabled. If one of the ancestors goes over its
limit, the reclaim algorithm reclaims from the tasks in the ancestor and the
children of the ancestor.
6.1 Enabling hierarchical accounting and reclaim
A memory cgroup by default disables the hierarchy feature. Support
can be enabled by writing 1 to memory.use_hierarchy file of the root cgroup
# echo 1 > memory.use_hierarchy
@ -375,9 +486,10 @@ The feature can be disabled by
# echo 0 > memory.use_hierarchy
NOTE1: Enabling/disabling will fail if the cgroup already has other
cgroups created below it.
NOTE2: When panic_on_oom is set to "2", the whole system will panic in
case of an OOM event in any cgroup.
7. Soft limits
@ -387,7 +499,7 @@ is to allow control groups to use as much of the memory as needed, provided
a. There is no memory contention
b. They do not exceed their hard limit
When the system detects memory contention or low memory, control groups
are pushed back to their soft limits. If the soft limit of each control
group is very high, they are pushed back as much as possible to make
sure that one control group does not starve the others of memory.
@ -401,7 +513,7 @@ it gets invoked from balance_pgdat (kswapd).
7.1 Interface
Soft limits can be set up by using the following commands (in this example we
assume a soft limit of 256 MiB)
# echo 256M > memory.soft_limit_in_bytes
@ -414,7 +526,121 @@ NOTE1: Soft limits take effect over a long period of time, since they involve
NOTE2: It is recommended to always set the soft limit below the hard limit,
otherwise the hard limit will take precedence.
8. Move charges at task migration
Users can move charges associated with a task along with task migration, that
is, uncharge the task's pages from the old cgroup and charge them to the new
cgroup. This feature is not supported in !CONFIG_MMU environments because of
the lack of page tables.
8.1 Interface
This feature is disabled by default. It can be enabled (and disabled again) by
writing to memory.move_charge_at_immigrate of the destination cgroup.
If you want to enable it:
# echo (some positive value) > memory.move_charge_at_immigrate
Note: Each bit of move_charge_at_immigrate has its own meaning about what type
      of charges should be moved. See 8.2 for details.
Note: Charges are moved only when you move mm->owner, in other words, the
      leader of a thread group.
Note: If we cannot find enough space for the task in the destination cgroup, we
      try to make space by reclaiming memory. Task migration may fail if we
      cannot make enough space.
Note: It can take several seconds if you move many charges.
And if you want to disable it again:
# echo 0 > memory.move_charge_at_immigrate
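For illustration, here is a minimal sketch of the whole sequence in C. The
mount point /cgroup/memory, the group name "dst" and the pid are assumptions;
writing the pid of a thread-group leader to the destination group's tasks file
is what migrates the task (and, with the bits set, its charges).

    /* Sketch: enable charge moving on a destination cgroup, then
     * migrate a task into it. All paths and the pid are hypothetical. */
    #include <stdio.h>

    static int write_str(const char *path, const char *val)
    {
        FILE *f = fopen(path, "w");

        if (!f)
            return -1;
        fputs(val, f);
        return fclose(f);
    }

    int main(void)
    {
        /* bit 0 (anonymous) | bit 1 (file) = 3; see 8.2 */
        write_str("/cgroup/memory/dst/memory.move_charge_at_immigrate", "3");

        /* moving the thread-group leader moves the charges too */
        write_str("/cgroup/memory/dst/tasks", "1234"); /* hypothetical pid */
        return 0;
    }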
8.2 Types of charges which can be moved
Each bit of move_charge_at_immigrate has its own meaning about what type of
charges should be moved. In any case, it must be noted that an account of
a page or a swap can be moved only when it is charged to the task's current
(old) memory cgroup.
 bit | what type of charges would be moved?
-----+------------------------------------------------------------------------
  0  | A charge of an anonymous page (or swap of it) used by the target task.
     | Those pages and swaps must be used only by the target task. You must
     | enable Swap Extension (see 2.4) to enable moving of swap charges.
-----+------------------------------------------------------------------------
  1  | A charge of file pages (normal file, tmpfs file (e.g. ipc shared memory)
     | and swaps of tmpfs file) mmapped by the target task. Unlike the case of
     | anonymous pages, file pages (and swaps) in the range mmapped by the task
     | will be moved even if the task hasn't done a page fault, i.e. they might
     | not be the task's "RSS", but another task's "RSS" that maps the same file.
     | The mapcount of the page is ignored (the page can be moved even if
     | page_mapcount(page) > 1). You must enable Swap Extension (see 2.4) to
     | enable moving of swap charges.
8.3 TODO
- Implement madvise(2) to let users decide whether each vma should be moved
  or not.
- All charge-moving operations are done under cgroup_mutex. It's not good
  behavior to hold the mutex for too long, so we may need some trick.
9. Memory thresholds
Memory cgroup implements memory thresholds using the cgroups notification
API (see cgroups.txt). It allows registering multiple memory and memsw
thresholds and delivers notifications when a threshold is crossed.
To register a threshold, an application needs to:
- create an eventfd using eventfd(2);
- open memory.usage_in_bytes or memory.memsw.usage_in_bytes;
- write a string like "<event_fd> <fd of memory.usage_in_bytes> <threshold>" to
  cgroup.event_control.
The application will be notified through the eventfd when memory usage crosses
the threshold in either direction.
This is applicable to both root and non-root cgroups.
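As a concrete, hedged illustration, the sketch below walks through those three
steps in C. The mount point, the cgroup name "mygroup" and the 64M threshold
are assumptions; the registration string format is the one given above.

    /* Minimal sketch of memcg threshold notification (paths assumed). */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/eventfd.h>

    int main(void)
    {
        const char *usage = "/cgroup/memory/mygroup/memory.usage_in_bytes";
        const char *ctrl  = "/cgroup/memory/mygroup/cgroup.event_control";
        char reg[64];
        uint64_t hits;
        int efd, ufd, cfd;

        efd = eventfd(0, 0);           /* step 1: create the eventfd */
        ufd = open(usage, O_RDONLY);   /* step 2: open the usage file */
        cfd = open(ctrl, O_WRONLY);
        if (efd < 0 || ufd < 0 || cfd < 0) {
            perror("setup");
            return 1;
        }

        /* step 3: "<event_fd> <fd of memory.usage_in_bytes> <threshold>" */
        snprintf(reg, sizeof(reg), "%d %d %llu", efd, ufd, 64ULL << 20);
        if (write(cfd, reg, strlen(reg)) < 0) {
            perror("register");
            return 1;
        }

        /* read(2) blocks until usage crosses 64M in either direction */
        if (read(efd, &hits, sizeof(hits)) == sizeof(hits))
            printf("threshold crossed %llu time(s)\n",
                   (unsigned long long)hits);
        return 0;
    }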
10. OOM Control
The memory.oom_control file is for OOM notification and other controls.
Memory cgroup implements an OOM notifier using the cgroup notification
API (see cgroups.txt). It allows registering multiple OOM notification
deliveries and sends a notification when an OOM happens.
To register a notifier, an application needs to:
- create an eventfd using eventfd(2)
- open the memory.oom_control file
- write a string like "<event_fd> <fd of memory.oom_control>" to
  cgroup.event_control
The application will be notified through the eventfd when an OOM happens.
OOM notification doesn't work for the root cgroup.
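Registration follows the same eventfd pattern as the memory thresholds in
section 9. A compressed sketch, again with hypothetical paths:

    int efd = eventfd(0, 0);
    int ofd = open("/cgroup/memory/mygroup/memory.oom_control", O_RDONLY);
    int cfd = open("/cgroup/memory/mygroup/cgroup.event_control", O_WRONLY);
    char reg[32];
    uint64_t cnt;

    /* "<event_fd> <fd of memory.oom_control>" */
    snprintf(reg, sizeof(reg), "%d %d", efd, ofd);
    write(cfd, reg, strlen(reg));

    /* blocks until an OOM happens in the group */
    read(efd, &cnt, sizeof(cnt));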
You can disable the OOM killer by writing "1" to the memory.oom_control file, as:
# echo 1 > memory.oom_control
This operation is only allowed on the top cgroup of a sub-hierarchy.
If the OOM killer is disabled, tasks under the cgroup will hang/sleep
in the memory cgroup's OOM waitqueue when they request accountable memory.
To get them running again, you have to relax the memory cgroup's OOM status by
	* enlarging the limit or reducing usage.
To reduce usage,
	* kill some tasks.
	* move some tasks to another group with charge migration.
	* remove some files (on tmpfs?)
Then, the stopped tasks will work again.
Reading the file shows the current status of OOM:
	oom_kill_disable	0 or 1 (if 1, the OOM killer is disabled)
	under_oom		0 or 1 (if 1, the memory cgroup is under OOM and
				tasks may be stopped)
11. TODO
1. Add support for accounting huge pages (as a separate controller)
2. Make per-cgroup scanner reclaim not-shared pages first
@ -0,0 +1,234 @@
================
CIRCULAR BUFFERS
================
By: David Howells <dhowells@redhat.com>
Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Linux provides a number of features that can be used to implement circular
buffering. There are two sets of such features:
(1) Convenience functions for determining information about power-of-2 sized
buffers.
(2) Memory barriers for when the producer and the consumer of objects in the
buffer don't want to share a lock.
To use these facilities, as discussed below, there needs to be just one
producer and just one consumer. It is possible to handle multiple producers by
serialising them, and to handle multiple consumers by serialising them.
Contents:
(*) What is a circular buffer?
(*) Measuring power-of-2 buffers.
(*) Using memory barriers with circular buffers.
- The producer.
- The consumer.
==========================
WHAT IS A CIRCULAR BUFFER?
==========================
First of all, what is a circular buffer? A circular buffer is a buffer of
fixed, finite size into which there are two indices:
(1) A 'head' index - the point at which the producer inserts items into the
buffer.
(2) A 'tail' index - the point at which the consumer finds the next item in
the buffer.
Typically when the tail pointer is equal to the head pointer, the buffer is
empty; and the buffer is full when the head pointer is one less than the tail
pointer.
The head index is incremented when items are added, and the tail index when
items are removed. The tail index should never jump the head index, and both
indices should be wrapped to 0 when they reach the end of the buffer, thus
allowing an infinite amount of data to flow through the buffer.
Typically, items will all be of the same unit size, but this isn't strictly
required to use the techniques below. The indices can be increased by more
than 1 if multiple items or variable-sized items are to be included in the
buffer, provided that neither index overtakes the other. The implementer must
be careful, however, as a region more than one unit in size may wrap the end of
the buffer and be broken into two segments.
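For reference, the kernel's linux/circ_buf.h describes such a buffer with just
a data pointer and the two indices (the size is carried separately). A
userspace analogue might look like the sketch below; the helper assumes a
power-of-2 size, as the next section explains.

	struct ring {
		char *buf;		/* the data area */
		unsigned long head;	/* producer inserts here */
		unsigned long tail;	/* consumer extracts here */
		unsigned long size;	/* must be a power of 2 */
	};

	/* advance an index with wrapping; only valid for power-of-2 sizes */
	static inline unsigned long ring_next(unsigned long idx,
					      unsigned long size)
	{
		return (idx + 1) & (size - 1);
	}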
============================
MEASURING POWER-OF-2 BUFFERS
============================
Calculation of the occupancy or the remaining capacity of an arbitrarily sized
circular buffer would normally be a slow operation, requiring the use of a
modulus (divide) instruction. However, if the buffer is of a power-of-2 size,
then a much quicker bitwise-AND instruction can be used instead.
Linux provides a set of macros for handling power-of-2 circular buffers. These
can be made use of by:
#include <linux/circ_buf.h>
The macros are:
(*) Measure the remaining capacity of a buffer:
CIRC_SPACE(head_index, tail_index, buffer_size);
This returns the amount of space left in the buffer[1] into which items
can be inserted.
(*) Measure the maximum consecutive immediate space in a buffer:
CIRC_SPACE_TO_END(head_index, tail_index, buffer_size);
This returns the amount of consecutive space left in the buffer[1] into
which items can be immediately inserted without having to wrap back to the
beginning of the buffer.
(*) Measure the occupancy of a buffer:
CIRC_CNT(head_index, tail_index, buffer_size);
This returns the number of items currently occupying a buffer[2].
(*) Measure the non-wrapping occupancy of a buffer:
CIRC_CNT_TO_END(head_index, tail_index, buffer_size);
This returns the number of consecutive items[2] that can be extracted from
the buffer without having to wrap back to the beginning of the buffer.
Each of these macros will nominally return a value between 0 and buffer_size-1,
however:
[1] CIRC_SPACE*() are intended to be used in the producer. To the producer
they will return a lower bound as the producer controls the head index,
but the consumer may still be depleting the buffer on another CPU and
moving the tail index.
To the consumer it will show an upper bound as the producer may be busy
depleting the space.
[2] CIRC_CNT*() are intended to be used in the consumer. To the consumer they
will return a lower bound as the consumer controls the tail index, but the
producer may still be filling the buffer on another CPU and moving the
head index.
To the producer it will show an upper bound as the consumer may be busy
emptying the buffer.
[3] To a third party, the order in which the writes to the indices by the
producer and consumer become visible cannot be guaranteed as they are
independent and may be made on different CPUs - so the result in such a
situation will merely be a guess, and may even be negative.
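To make the arithmetic concrete, here is a small userspace demonstration. The
two macro bodies mirror the definitions in linux/circ_buf.h; the buffer size
and index values are arbitrary examples.

	#include <stdio.h>

	/* same definitions as linux/circ_buf.h, usable here for testing */
	#define CIRC_CNT(head, tail, size)   (((head) - (tail)) & ((size) - 1))
	#define CIRC_SPACE(head, tail, size) CIRC_CNT((tail), ((head) + 1), (size))

	int main(void)
	{
		unsigned long size = 8, head = 6, tail = 3;

		/* 3 items occupied (indices 3,4,5); 4 slots free (one slot
		 * stays empty so that full and empty are distinguishable) */
		printf("occupied: %lu\n", CIRC_CNT(head, tail, size));   /* 3 */
		printf("space:    %lu\n", CIRC_SPACE(head, tail, size)); /* 4 */

		/* wrapped case: the head has gone round past the end */
		head = 1; tail = 6;
		printf("occupied: %lu\n", CIRC_CNT(head, tail, size));   /* 3 */
		printf("space:    %lu\n", CIRC_SPACE(head, tail, size)); /* 4 */
		return 0;
	}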
===========================================
USING MEMORY BARRIERS WITH CIRCULAR BUFFERS
===========================================
By using memory barriers in conjunction with circular buffers, you can avoid
the need to:
(1) use a single lock to govern access to both ends of the buffer, thus
allowing the buffer to be filled and emptied at the same time; and
(2) use atomic counter operations.
There are two sides to this: the producer that fills the buffer, and the
consumer that empties it. Only one thing should be filling a buffer at any one
time, and only one thing should be emptying a buffer at any one time, but the
two sides can operate simultaneously.
THE PRODUCER
------------
The producer will look something like this:
	spin_lock(&producer_lock);

	unsigned long head = buffer->head;
	unsigned long tail = ACCESS_ONCE(buffer->tail);

	if (CIRC_SPACE(head, tail, buffer->size) >= 1) {
		/* insert one item into the buffer */
		struct item *item = buffer[head];

		produce_item(item);

		smp_wmb(); /* commit the item before incrementing the head */

		buffer->head = (head + 1) & (buffer->size - 1);

		/* wake_up() will make sure that the head is committed before
		 * waking anyone up */
		wake_up(consumer);
	}

	spin_unlock(&producer_lock);
This will instruct the CPU that the contents of the new item must be written
before the head index makes it available to the consumer and then instructs the
CPU that the revised head index must be written before the consumer is woken.
Note that wake_up() doesn't have to be the exact mechanism used, but whatever
is used must guarantee a (write) memory barrier between the update of the head
index and the change of state of the consumer, if a change of state occurs.
THE CONSUMER
------------
The consumer will look something like this:
	spin_lock(&consumer_lock);

	unsigned long head = ACCESS_ONCE(buffer->head);
	unsigned long tail = buffer->tail;

	if (CIRC_CNT(head, tail, buffer->size) >= 1) {
		/* read index before reading contents at that index */
		smp_read_barrier_depends();

		/* extract one item from the buffer */
		struct item *item = buffer[tail];

		consume_item(item);

		smp_mb(); /* finish reading descriptor before incrementing tail */

		buffer->tail = (tail + 1) & (buffer->size - 1);
	}

	spin_unlock(&consumer_lock);
This will instruct the CPU to make sure the index is up to date before reading
the new item, and then it shall make sure the CPU has finished reading the item
before it writes the new tail pointer, which will erase the item.
Note the use of ACCESS_ONCE() in both algorithms to read the opposition index.
This prevents the compiler from discarding and reloading its cached value -
which some compilers will do across smp_read_barrier_depends(). This isn't
strictly needed if you can be sure that the opposition index will _only_ be
used the once.
===============
FURTHER READING
===============
See also Documentation/memory-barriers.txt for a description of Linux's memory
barrier facilities.