Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi.git

Stephen Rothwell 2024-12-20 14:39:27 +11:00
commit 8538a76267
97 changed files with 888 additions and 702 deletions

@@ -112,9 +112,9 @@ Those usages in group c) should be handled with care, especially in a
 that are shared with the mid level and other layers.
 All functions defined within an LLD and all data defined at file scope
-should be static. For example the slave_alloc() function in an LLD
+should be static. For example the sdev_init() function in an LLD
 called "xxx" could be defined as
-``static int xxx_slave_alloc(struct scsi_device * sdev) { /* code */ }``
+``static int xxx_sdev_init(struct scsi_device * sdev) { /* code */ }``
 .. [#] the scsi_host_alloc() function is a replacement for the rather vaguely
 named scsi_register() function in most situations.
@@ -149,21 +149,21 @@ scsi devices of which only the first 2 respond::
 scsi_add_host() ---->
 scsi_scan_host() -------+
 |
-slave_alloc()
-slave_configure() --> scsi_change_queue_depth()
+sdev_init()
+sdev_configure() --> scsi_change_queue_depth()
 |
-slave_alloc()
-slave_configure()
+sdev_init()
+sdev_configure()
 |
-slave_alloc() ***
-slave_destroy() ***
+sdev_init() ***
+sdev_destroy() ***
 *** For scsi devices that the mid level tries to scan but do not
-respond, a slave_alloc(), slave_destroy() pair is called.
+respond, a sdev_init(), sdev_destroy() pair is called.
 If the LLD wants to adjust the default queue settings, it can invoke
-scsi_change_queue_depth() in its slave_configure() routine.
+scsi_change_queue_depth() in its sdev_configure() routine.
 When an HBA is being removed it could be as part of an orderly shutdown
 associated with the LLD module being unloaded (e.g. with the "rmmod"
@@ -176,8 +176,8 @@ same::
 ===----------------------=========-----------------===------
 scsi_remove_host() ---------+
 |
-slave_destroy()
-slave_destroy()
+sdev_destroy()
+sdev_destroy()
 scsi_host_put()
 It may be useful for a LLD to keep track of struct Scsi_Host instances
@@ -202,8 +202,8 @@ An LLD can use this sequence to make the mid level aware of a SCSI device::
 ===-------------------=========--------------------===------
 scsi_add_device() ------+
 |
-slave_alloc()
-slave_configure() [--> scsi_change_queue_depth()]
+sdev_init()
+sdev_configure() [--> scsi_change_queue_depth()]
 In a similar fashion, an LLD may become aware that a SCSI device has been
 removed (unplugged) or the connection to it has been interrupted. Some
@@ -218,12 +218,12 @@ upper layers with this sequence::
 ===----------------------=========-----------------===------
 scsi_remove_device() -------+
 |
-slave_destroy()
+sdev_destroy()
 It may be useful for an LLD to keep track of struct scsi_device instances
-(a pointer is passed as the parameter to slave_alloc() and
-slave_configure() callbacks). Such instances are "owned" by the mid-level.
-struct scsi_device instances are freed after slave_destroy().
+(a pointer is passed as the parameter to sdev_init() and
+sdev_configure() callbacks). Such instances are "owned" by the mid-level.
+struct scsi_device instances are freed after sdev_destroy().
 Reference Counting
@@ -331,7 +331,7 @@ Details::
 * bus scan when an HBA is added (i.e. scsi_scan_host()). So it
 * should only be called if the HBA becomes aware of a new scsi
 * device (lu) after scsi_scan_host() has completed. If successful
-* this call can lead to slave_alloc() and slave_configure() callbacks
+* this call can lead to sdev_init() and sdev_configure() callbacks
 * into the LLD.
 *
 * Defined in: drivers/scsi/scsi_scan.c
@@ -374,8 +374,8 @@ Details::
 * Might block: no
 *
 * Notes: Can be invoked any time on a SCSI device controlled by this
-* LLD. [Specifically during and after slave_configure() and prior to
-* slave_destroy().] Can safely be invoked from interrupt code.
+* LLD. [Specifically during and after sdev_configure() and prior to
+* sdev_destroy().] Can safely be invoked from interrupt code.
 *
 * Defined in: drivers/scsi/scsi.c [see source code for more notes]
 *
@@ -506,7 +506,7 @@ Details::
 * Notes: If an LLD becomes aware that a scsi device (lu) has
 * been removed but its host is still present then it can request
 * the removal of that scsi device. If successful this call will
-* lead to the slave_destroy() callback being invoked. sdev is an
+* lead to the sdev_destroy() callback being invoked. sdev is an
 * invalid pointer after this call.
 *
 * Defined in: drivers/scsi/scsi_sysfs.c .
@@ -627,14 +627,14 @@ Interface functions are supplied (defined) by LLDs and their function
 pointers are placed in an instance of struct scsi_host_template which
 is passed to scsi_host_alloc() [or scsi_register() / init_this_scsi_driver()].
 Some are mandatory. Interface functions should be declared static. The
-accepted convention is that driver "xyz" will declare its slave_configure()
+accepted convention is that driver "xyz" will declare its sdev_configure()
 function as::
-static int xyz_slave_configure(struct scsi_device * sdev);
+static int xyz_sdev_configure(struct scsi_device * sdev);
 and so forth for all interface functions listed below.
-A pointer to this function should be placed in the 'slave_configure' member
+A pointer to this function should be placed in the 'sdev_configure' member
 of a "struct scsi_host_template" instance. A pointer to such an instance
 should be passed to the mid level's scsi_host_alloc() [or scsi_register() /
 init_this_scsi_driver()].
@@ -657,9 +657,9 @@ Summary:
 - ioctl - driver can respond to ioctls
 - proc_info - supports /proc/scsi/{driver_name}/{host_no}
 - queuecommand - queue scsi command, invoke 'done' on completion
-- slave_alloc - prior to any commands being sent to a new device
-- slave_configure - driver fine tuning for given device after attach
-- slave_destroy - given device is about to be shut down
+- sdev_init - prior to any commands being sent to a new device
+- sdev_configure - driver fine tuning for given device after attach
+- sdev_destroy - given device is about to be shut down
 Details::
@@ -960,7 +960,7 @@ Details::
 /**
-* slave_alloc - prior to any commands being sent to a new device
+* sdev_init - prior to any commands being sent to a new device
 * (i.e. just prior to scan) this call is made
 * @sdp: pointer to new device (about to be scanned)
 *
@@ -975,24 +975,24 @@ Details::
 * prior to its initial scan. The corresponding scsi device may not
 * exist but the mid level is just about to scan for it (i.e. send
 * and INQUIRY command plus ...). If a device is found then
-* slave_configure() will be called while if a device is not found
-* slave_destroy() is called.
+* sdev_configure() will be called while if a device is not found
+* sdev_destroy() is called.
 * For more details see the include/scsi/scsi_host.h file.
 *
 * Optionally defined in: LLD
 **/
-int slave_alloc(struct scsi_device *sdp)
+int sdev_init(struct scsi_device *sdp)
 /**
-* slave_configure - driver fine tuning for given device just after it
+* sdev_configure - driver fine tuning for given device just after it
 * has been first scanned (i.e. it responded to an
 * INQUIRY)
 * @sdp: device that has just been attached
 *
 * Returns 0 if ok. Any other return is assumed to be an error and
 * the device is taken offline. [offline devices will _not_ have
-* slave_destroy() called on them so clean up resources.]
+* sdev_destroy() called on them so clean up resources.]
 *
 * Locks: none
 *
@@ -1004,11 +1004,11 @@ Details::
 *
 * Optionally defined in: LLD
 **/
-int slave_configure(struct scsi_device *sdp)
+int sdev_configure(struct scsi_device *sdp)
 /**
-* slave_destroy - given device is about to be shut down. All
+* sdev_destroy - given device is about to be shut down. All
 * activity has ceased on this device.
 * @sdp: device that is about to be shut down
 *
@@ -1023,12 +1023,12 @@ Details::
 * by this driver for given device should be freed now. No further
 * commands will be sent for this sdp instance. [However the device
 * could be re-attached in the future in which case a new instance
-* of struct scsi_device would be supplied by future slave_alloc()
-* and slave_configure() calls.]
+* of struct scsi_device would be supplied by future sdev_init()
+* and sdev_configure() calls.]
 *
 * Optionally defined in: LLD
 **/
-void slave_destroy(struct scsi_device *sdp)
+void sdev_destroy(struct scsi_device *sdp)
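To make the renamed convention concrete, here is a minimal sketch of how a hypothetical LLD "xyz" could wire the three callbacks into its host template after this series. The xyz_* names, the example queue depth of 32 and the tiny per-device structure are illustrative assumptions rather than code from this merge; the two-argument sdev_configure() form follows the driver conversions in the hunks below, while the documentation excerpt above still shows the single-argument prototype:

#include <linux/module.h>
#include <linux/slab.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>

struct xyz_dev_parms {                  /* hypothetical per-device state */
        unsigned int flags;
};

/* sdev_init: called before any command is sent to a newly found device */
static int xyz_sdev_init(struct scsi_device *sdev)
{
        sdev->hostdata = kzalloc(sizeof(struct xyz_dev_parms), GFP_KERNEL);
        if (!sdev->hostdata)
                return -ENOMEM;
        return 0;
}

/* sdev_configure: fine tuning once the device has answered INQUIRY */
static int xyz_sdev_configure(struct scsi_device *sdev,
                              struct queue_limits *lim)
{
        if (sdev->tagged_supported)
                scsi_change_queue_depth(sdev, 32);      /* illustrative depth */
        return 0;
}

/* sdev_destroy: release whatever xyz_sdev_init() set up */
static void xyz_sdev_destroy(struct scsi_device *sdev)
{
        kfree(sdev->hostdata);
        sdev->hostdata = NULL;
}

static const struct scsi_host_template xyz_template = {
        .module         = THIS_MODULE,
        .name           = "xyz",
        .sdev_init      = xyz_sdev_init,
        .sdev_configure = xyz_sdev_configure,
        .sdev_destroy   = xyz_sdev_destroy,
        /* .queuecommand, .this_id, .sg_tablesize etc. omitted for brevity */
};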

@@ -397,7 +397,7 @@ extern const struct attribute_group *ahci_sdev_groups[];
 .sdev_groups = ahci_sdev_groups, \
 .change_queue_depth = ata_scsi_change_queue_depth, \
 .tag_alloc_policy = BLK_TAG_ALLOC_RR, \
-.device_configure = ata_scsi_device_configure
+.sdev_configure = ata_scsi_sdev_configure
 extern struct ata_port_operations ahci_ops;
 extern struct ata_port_operations ahci_platform_ops;

@@ -1313,7 +1313,7 @@ int ata_scsi_change_queue_depth(struct scsi_device *sdev, int queue_depth)
 EXPORT_SYMBOL_GPL(ata_scsi_change_queue_depth);
 /**
-* ata_sas_device_configure - Default device_configure routine for libata
+* ata_sas_sdev_configure - Default sdev_configure routine for libata
 * devices
 * @sdev: SCSI device to configure
 * @lim: queue limits
@@ -1323,14 +1323,14 @@ EXPORT_SYMBOL_GPL(ata_scsi_change_queue_depth);
 * Zero.
 */
-int ata_sas_device_configure(struct scsi_device *sdev, struct queue_limits *lim,
+int ata_sas_sdev_configure(struct scsi_device *sdev, struct queue_limits *lim,
 struct ata_port *ap)
 {
 ata_scsi_sdev_config(sdev);
 return ata_scsi_dev_config(sdev, lim, ap->link.device);
 }
-EXPORT_SYMBOL_GPL(ata_sas_device_configure);
+EXPORT_SYMBOL_GPL(ata_sas_sdev_configure);
 /**
 * ata_sas_queuecmd - Issue SCSI cdb to libata-managed device

@@ -1133,7 +1133,7 @@ int ata_scsi_dev_config(struct scsi_device *sdev, struct queue_limits *lim,
 }
 /**
-* ata_scsi_slave_alloc - Early setup of SCSI device
+* ata_scsi_sdev_init - Early setup of SCSI device
 * @sdev: SCSI device to examine
 *
 * This is called from scsi_alloc_sdev() when the scsi device
@@ -1143,7 +1143,7 @@ int ata_scsi_dev_config(struct scsi_device *sdev, struct queue_limits *lim,
 * Defined by SCSI layer. We don't really care.
 */
-int ata_scsi_slave_alloc(struct scsi_device *sdev)
+int ata_scsi_sdev_init(struct scsi_device *sdev)
 {
 struct ata_port *ap = ata_shost_to_port(sdev->host);
 struct device_link *link;
@@ -1166,10 +1166,10 @@ int ata_scsi_slave_alloc(struct scsi_device *sdev)
 return 0;
 }
-EXPORT_SYMBOL_GPL(ata_scsi_slave_alloc);
+EXPORT_SYMBOL_GPL(ata_scsi_sdev_init);
 /**
-* ata_scsi_device_configure - Set SCSI device attributes
+* ata_scsi_sdev_configure - Set SCSI device attributes
 * @sdev: SCSI device to examine
 * @lim: queue limits
 *
@@ -1181,8 +1181,7 @@ EXPORT_SYMBOL_GPL(ata_scsi_slave_alloc);
 * Defined by SCSI layer. We don't really care.
 */
-int ata_scsi_device_configure(struct scsi_device *sdev,
-struct queue_limits *lim)
+int ata_scsi_sdev_configure(struct scsi_device *sdev, struct queue_limits *lim)
 {
 struct ata_port *ap = ata_shost_to_port(sdev->host);
 struct ata_device *dev = __ata_scsi_find_dev(ap, sdev);
@@ -1192,10 +1191,10 @@ int ata_scsi_device_configure(struct scsi_device *sdev,
 return 0;
 }
-EXPORT_SYMBOL_GPL(ata_scsi_device_configure);
+EXPORT_SYMBOL_GPL(ata_scsi_sdev_configure);
 /**
-* ata_scsi_slave_destroy - SCSI device is about to be destroyed
+* ata_scsi_sdev_destroy - SCSI device is about to be destroyed
 * @sdev: SCSI device to be destroyed
 *
 * @sdev is about to be destroyed for hot/warm unplugging. If
@@ -1208,7 +1207,7 @@ EXPORT_SYMBOL_GPL(ata_scsi_device_configure);
 * LOCKING:
 * Defined by SCSI layer. We don't really care.
 */
-void ata_scsi_slave_destroy(struct scsi_device *sdev)
+void ata_scsi_sdev_destroy(struct scsi_device *sdev)
 {
 struct ata_port *ap = ata_shost_to_port(sdev->host);
 unsigned long flags;
@@ -1228,7 +1227,7 @@ void ata_scsi_slave_destroy(struct scsi_device *sdev)
 kfree(sdev->dma_drain_buf);
 }
-EXPORT_SYMBOL_GPL(ata_scsi_slave_destroy);
+EXPORT_SYMBOL_GPL(ata_scsi_sdev_destroy);
 /**
 * ata_scsi_start_stop_xlat - Translate SCSI START STOP UNIT command

@@ -812,7 +812,7 @@ static void pata_macio_reset_hw(struct pata_macio_priv *priv, int resume)
 /* Hook the standard slave config to fixup some HW related alignment
 * restrictions
 */
-static int pata_macio_device_configure(struct scsi_device *sdev,
+static int pata_macio_sdev_configure(struct scsi_device *sdev,
 struct queue_limits *lim)
 {
 struct ata_port *ap = ata_shost_to_port(sdev->host);
@@ -822,7 +822,7 @@ static int pata_macio_device_configure(struct scsi_device *sdev,
 int rc;
 /* First call original */
-rc = ata_scsi_device_configure(sdev, lim);
+rc = ata_scsi_sdev_configure(sdev, lim);
 if (rc)
 return rc;
@@ -932,7 +932,7 @@ static const struct scsi_host_template pata_macio_sht = {
 /* We may not need that strict one */
 .dma_boundary = ATA_DMA_BOUNDARY,
 .max_segment_size = PATA_MACIO_MAX_SEGMENT_SIZE,
-.device_configure = pata_macio_device_configure,
+.sdev_configure = pata_macio_sdev_configure,
 .sdev_groups = ata_common_sdev_groups,
 .can_queue = ATA_DEF_QUEUE,
 .tag_alloc_policy = BLK_TAG_ALLOC_RR,
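The pata_macio hunk above also shows the shape these conversions take: the driver's own sdev_configure() first lets the generic ata_scsi_sdev_configure() apply its defaults, then adjusts the queue_limits it was handed rather than reconfiguring the request queue afterwards. A rough sketch of that pattern follows; the foo_* name and the particular limit values are illustrative assumptions (the real pata_macio fixups are not reproduced here), and the queue_limits field names are assumed from the current block layer:

static int foo_sdev_configure(struct scsi_device *sdev,
                              struct queue_limits *lim)
{
        int rc;

        /* First let libata apply its standard per-device setup. */
        rc = ata_scsi_sdev_configure(sdev, lim);
        if (rc)
                return rc;

        /* Then tighten the limits this controller needs (made-up values). */
        lim->max_segment_size = 0xffff; /* e.g. 64K - 1 per segment */
        lim->dma_alignment = 0x1f;      /* e.g. 32-byte DMA alignment */

        return 0;
}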

@@ -673,7 +673,7 @@ static const struct scsi_host_template mv6_sht = {
 .sdev_groups = ata_ncq_sdev_groups,
 .change_queue_depth = ata_scsi_change_queue_depth,
 .tag_alloc_policy = BLK_TAG_ALLOC_RR,
-.device_configure = ata_scsi_device_configure
+.sdev_configure = ata_scsi_sdev_configure
 };
 static struct ata_port_operations mv5_ops = {

@@ -296,7 +296,7 @@ static void nv_nf2_freeze(struct ata_port *ap);
 static void nv_nf2_thaw(struct ata_port *ap);
 static void nv_ck804_freeze(struct ata_port *ap);
 static void nv_ck804_thaw(struct ata_port *ap);
-static int nv_adma_device_configure(struct scsi_device *sdev,
+static int nv_adma_sdev_configure(struct scsi_device *sdev,
 struct queue_limits *lim);
 static int nv_adma_check_atapi_dma(struct ata_queued_cmd *qc);
 static enum ata_completion_errors nv_adma_qc_prep(struct ata_queued_cmd *qc);
@@ -319,7 +319,7 @@ static void nv_adma_tf_read(struct ata_port *ap, struct ata_taskfile *tf);
 static void nv_mcp55_thaw(struct ata_port *ap);
 static void nv_mcp55_freeze(struct ata_port *ap);
 static void nv_swncq_error_handler(struct ata_port *ap);
-static int nv_swncq_device_configure(struct scsi_device *sdev,
+static int nv_swncq_sdev_configure(struct scsi_device *sdev,
 struct queue_limits *lim);
 static int nv_swncq_port_start(struct ata_port *ap);
 static enum ata_completion_errors nv_swncq_qc_prep(struct ata_queued_cmd *qc);
@@ -382,7 +382,7 @@ static const struct scsi_host_template nv_adma_sht = {
 .can_queue = NV_ADMA_MAX_CPBS,
 .sg_tablesize = NV_ADMA_SGTBL_TOTAL_LEN,
 .dma_boundary = NV_ADMA_DMA_BOUNDARY,
-.device_configure = nv_adma_device_configure,
+.sdev_configure = nv_adma_sdev_configure,
 .sdev_groups = ata_ncq_sdev_groups,
 .change_queue_depth = ata_scsi_change_queue_depth,
 .tag_alloc_policy = BLK_TAG_ALLOC_RR,
@@ -393,7 +393,7 @@ static const struct scsi_host_template nv_swncq_sht = {
 .can_queue = ATA_MAX_QUEUE - 1,
 .sg_tablesize = LIBATA_MAX_PRD,
 .dma_boundary = ATA_DMA_BOUNDARY,
-.device_configure = nv_swncq_device_configure,
+.sdev_configure = nv_swncq_sdev_configure,
 .sdev_groups = ata_ncq_sdev_groups,
 .change_queue_depth = ata_scsi_change_queue_depth,
 .tag_alloc_policy = BLK_TAG_ALLOC_RR,
@@ -663,7 +663,7 @@ static void nv_adma_mode(struct ata_port *ap)
 pp->flags &= ~NV_ADMA_PORT_REGISTER_MODE;
 }
-static int nv_adma_device_configure(struct scsi_device *sdev,
+static int nv_adma_sdev_configure(struct scsi_device *sdev,
 struct queue_limits *lim)
 {
 struct ata_port *ap = ata_shost_to_port(sdev->host);
@@ -676,7 +676,7 @@ static int nv_adma_device_configure(struct scsi_device *sdev,
 int adma_enable;
 u32 current_reg, new_reg, config_mask;
-rc = ata_scsi_device_configure(sdev, lim);
+rc = ata_scsi_sdev_configure(sdev, lim);
 if (sdev->id >= ATA_MAX_DEVICES || sdev->channel || sdev->lun)
 /* Not a proper libata device, ignore */
@@ -1871,7 +1871,7 @@ static void nv_swncq_host_init(struct ata_host *host)
 writel(~0x0, mmio + NV_INT_STATUS_MCP55);
 }
-static int nv_swncq_device_configure(struct scsi_device *sdev,
+static int nv_swncq_sdev_configure(struct scsi_device *sdev,
 struct queue_limits *lim)
 {
 struct ata_port *ap = ata_shost_to_port(sdev->host);
@@ -1882,7 +1882,7 @@ static int nv_swncq_device_configure(struct scsi_device *sdev,
 u8 check_maxtor = 0;
 unsigned char model_num[ATA_ID_PROD_LEN + 1];
-rc = ata_scsi_device_configure(sdev, lim);
+rc = ata_scsi_sdev_configure(sdev, lim);
 if (sdev->id >= ATA_MAX_DEVICES || sdev->channel || sdev->lun)
 /* Not a proper libata device, ignore */
 return rc;

@@ -381,7 +381,7 @@ static const struct scsi_host_template sil24_sht = {
 .tag_alloc_policy = BLK_TAG_ALLOC_FIFO,
 .sdev_groups = ata_ncq_sdev_groups,
 .change_queue_depth = ata_scsi_change_queue_depth,
-.device_configure = ata_scsi_device_configure
+.sdev_configure = ata_scsi_sdev_configure
 };
 static struct ata_port_operations sil24_ops = {

@@ -1490,7 +1490,7 @@ static int sbp2_scsi_queuecommand(struct Scsi_Host *shost,
 return retval;
 }
-static int sbp2_scsi_slave_alloc(struct scsi_device *sdev)
+static int sbp2_scsi_sdev_init(struct scsi_device *sdev)
 {
 struct sbp2_logical_unit *lu = sdev->hostdata;
@@ -1506,7 +1506,7 @@ static int sbp2_scsi_slave_alloc(struct scsi_device *sdev)
 return 0;
 }
-static int sbp2_scsi_device_configure(struct scsi_device *sdev,
+static int sbp2_scsi_sdev_configure(struct scsi_device *sdev,
 struct queue_limits *lim)
 {
 struct sbp2_logical_unit *lu = sdev->hostdata;
@@ -1590,8 +1590,8 @@ static const struct scsi_host_template scsi_driver_template = {
 .name = "SBP-2 IEEE-1394",
 .proc_name = "sbp2",
 .queuecommand = sbp2_scsi_queuecommand,
-.slave_alloc = sbp2_scsi_slave_alloc,
-.device_configure = sbp2_scsi_device_configure,
+.sdev_init = sbp2_scsi_sdev_init,
+.sdev_configure = sbp2_scsi_sdev_configure,
 .eh_abort_handler = sbp2_scsi_abort,
 .this_id = -1,
 .sg_tablesize = SG_ALL,

@@ -2844,7 +2844,8 @@ static int srp_target_alloc(struct scsi_target *starget)
 return 0;
 }
-static int srp_slave_configure(struct scsi_device *sdev)
+static int srp_sdev_configure(struct scsi_device *sdev,
+struct queue_limits *lim)
 {
 struct Scsi_Host *shost = sdev->host;
 struct srp_target_port *target = host_to_target(shost);
@@ -3067,7 +3068,7 @@ static const struct scsi_host_template srp_template = {
 .name = "InfiniBand SRP initiator",
 .proc_name = DRV_NAME,
 .target_alloc = srp_target_alloc,
-.slave_configure = srp_slave_configure,
+.sdev_configure = srp_sdev_configure,
 .info = srp_target_info,
 .init_cmd_priv = srp_init_cmd_priv,
 .exit_cmd_priv = srp_exit_cmd_priv,

@@ -96,7 +96,7 @@ static u8 mptfcTaskCtx = MPT_MAX_PROTOCOL_DRIVERS;
 static u8 mptfcInternalCtx = MPT_MAX_PROTOCOL_DRIVERS;
 static int mptfc_target_alloc(struct scsi_target *starget);
-static int mptfc_slave_alloc(struct scsi_device *sdev);
+static int mptfc_sdev_init(struct scsi_device *sdev);
 static int mptfc_qcmd(struct Scsi_Host *shost, struct scsi_cmnd *SCpnt);
 static void mptfc_target_destroy(struct scsi_target *starget);
 static void mptfc_set_rport_loss_tmo(struct fc_rport *rport, uint32_t timeout);
@@ -113,10 +113,10 @@ static const struct scsi_host_template mptfc_driver_template = {
 .info = mptscsih_info,
 .queuecommand = mptfc_qcmd,
 .target_alloc = mptfc_target_alloc,
-.slave_alloc = mptfc_slave_alloc,
-.slave_configure = mptscsih_slave_configure,
+.sdev_init = mptfc_sdev_init,
+.sdev_configure = mptscsih_sdev_configure,
 .target_destroy = mptfc_target_destroy,
-.slave_destroy = mptscsih_slave_destroy,
+.sdev_destroy = mptscsih_sdev_destroy,
 .change_queue_depth = mptscsih_change_queue_depth,
 .eh_timed_out = fc_eh_timed_out,
 .eh_abort_handler = mptfc_abort,
@@ -503,7 +503,7 @@ mptfc_register_dev(MPT_ADAPTER *ioc, int channel, FCDevicePage0_t *pg0)
 /*
 * if already mapped, remap here. If not mapped,
 * target_alloc will allocate vtarget and map,
-* slave_alloc will fill in vdevice from vtarget.
+* sdev_init will fill in vdevice from vtarget.
 */
 if (ri->starget) {
 vtarget = ri->starget->hostdata;
@@ -631,7 +631,7 @@ mptfc_dump_lun_info(MPT_ADAPTER *ioc, struct fc_rport *rport, struct scsi_device
 * Init memory once per LUN.
 */
 static int
-mptfc_slave_alloc(struct scsi_device *sdev)
+mptfc_sdev_init(struct scsi_device *sdev)
 {
 MPT_SCSI_HOST *hd;
 VirtTarget *vtarget;
@@ -651,7 +651,7 @@ mptfc_slave_alloc(struct scsi_device *sdev)
 vdevice = kzalloc(sizeof(VirtDevice), GFP_KERNEL);
 if (!vdevice) {
-printk(MYIOC_s_ERR_FMT "slave_alloc kmalloc(%zd) FAILED!\n",
+printk(MYIOC_s_ERR_FMT "sdev_init kmalloc(%zd) FAILED!\n",
 ioc->name, sizeof(VirtDevice));
 return -ENOMEM;
 }

@@ -1710,7 +1710,7 @@ mptsas_firmware_event_work(struct work_struct *work)
 static int
-mptsas_slave_configure(struct scsi_device *sdev)
+mptsas_sdev_configure(struct scsi_device *sdev, struct queue_limits *lim)
 {
 struct Scsi_Host *host = sdev->host;
 MPT_SCSI_HOST *hd = shost_priv(host);
@@ -1736,7 +1736,7 @@ mptsas_slave_configure(struct scsi_device *sdev)
 mptsas_add_device_component_starget(ioc, scsi_target(sdev));
 out:
-return mptscsih_slave_configure(sdev);
+return mptscsih_sdev_configure(sdev, lim);
 }
 static int
@@ -1867,7 +1867,7 @@ mptsas_target_destroy(struct scsi_target *starget)
 static int
-mptsas_slave_alloc(struct scsi_device *sdev)
+mptsas_sdev_init(struct scsi_device *sdev)
 {
 struct Scsi_Host *host = sdev->host;
 MPT_SCSI_HOST *hd = shost_priv(host);
@@ -1880,7 +1880,7 @@ mptsas_slave_alloc(struct scsi_device *sdev)
 vdevice = kzalloc(sizeof(VirtDevice), GFP_KERNEL);
 if (!vdevice) {
-printk(MYIOC_s_ERR_FMT "slave_alloc kzalloc(%zd) FAILED!\n",
+printk(MYIOC_s_ERR_FMT "sdev_init kzalloc(%zd) FAILED!\n",
 ioc->name, sizeof(VirtDevice));
 return -ENOMEM;
 }
@@ -2005,10 +2005,10 @@ static const struct scsi_host_template mptsas_driver_template = {
 .info = mptscsih_info,
 .queuecommand = mptsas_qcmd,
 .target_alloc = mptsas_target_alloc,
-.slave_alloc = mptsas_slave_alloc,
-.slave_configure = mptsas_slave_configure,
+.sdev_init = mptsas_sdev_init,
+.sdev_configure = mptsas_sdev_configure,
 .target_destroy = mptsas_target_destroy,
-.slave_destroy = mptscsih_slave_destroy,
+.sdev_destroy = mptscsih_sdev_destroy,
 .change_queue_depth = mptscsih_change_queue_depth,
 .eh_timed_out = mptsas_eh_timed_out,
 .eh_abort_handler = mptscsih_abort,

@@ -1071,7 +1071,7 @@ EXPORT_SYMBOL(mptscsih_flush_running_cmds);
 *
 * Returns: None.
 *
-* Called from slave_destroy.
+* Called from sdev_destroy.
 */
 static void
 mptscsih_search_running_cmds(MPT_SCSI_HOST *hd, VirtDevice *vdevice)
@@ -2331,7 +2331,7 @@ EXPORT_SYMBOL(mptscsih_raid_id_to_num);
 * Called if no device present or device being unloaded
 */
 void
-mptscsih_slave_destroy(struct scsi_device *sdev)
+mptscsih_sdev_destroy(struct scsi_device *sdev)
 {
 struct Scsi_Host *host = sdev->host;
 MPT_SCSI_HOST *hd = shost_priv(host);
@@ -2399,7 +2399,7 @@ mptscsih_change_queue_depth(struct scsi_device *sdev, int qdepth)
 * Return non-zero if fails.
 */
 int
-mptscsih_slave_configure(struct scsi_device *sdev)
+mptscsih_sdev_configure(struct scsi_device *sdev, struct queue_limits *lim)
 {
 struct Scsi_Host *sh = sdev->host;
 VirtTarget *vtarget;
@@ -3302,8 +3302,8 @@ EXPORT_SYMBOL(mptscsih_resume);
 EXPORT_SYMBOL(mptscsih_show_info);
 EXPORT_SYMBOL(mptscsih_info);
 EXPORT_SYMBOL(mptscsih_qcmd);
-EXPORT_SYMBOL(mptscsih_slave_destroy);
-EXPORT_SYMBOL(mptscsih_slave_configure);
+EXPORT_SYMBOL(mptscsih_sdev_destroy);
+EXPORT_SYMBOL(mptscsih_sdev_configure);
 EXPORT_SYMBOL(mptscsih_abort);
 EXPORT_SYMBOL(mptscsih_dev_reset);
 EXPORT_SYMBOL(mptscsih_target_reset);

@@ -116,8 +116,9 @@ extern const char * mptscsih_info(struct Scsi_Host *SChost);
 extern int mptscsih_qcmd(struct scsi_cmnd *SCpnt);
 extern int mptscsih_IssueTaskMgmt(MPT_SCSI_HOST *hd, u8 type, u8 channel,
 u8 id, u64 lun, int ctx2abort, ulong timeout);
-extern void mptscsih_slave_destroy(struct scsi_device *device);
-extern int mptscsih_slave_configure(struct scsi_device *device);
+extern void mptscsih_sdev_destroy(struct scsi_device *device);
+extern int mptscsih_sdev_configure(struct scsi_device *device,
+struct queue_limits *lim);
 extern int mptscsih_abort(struct scsi_cmnd * SCpnt);
 extern int mptscsih_dev_reset(struct scsi_cmnd * SCpnt);
 extern int mptscsih_target_reset(struct scsi_cmnd * SCpnt);

@@ -713,7 +713,7 @@ static void mptspi_dv_device(struct _MPT_SCSI_HOST *hd,
 mptspi_read_parameters(sdev->sdev_target);
 }
-static int mptspi_slave_alloc(struct scsi_device *sdev)
+static int mptspi_sdev_init(struct scsi_device *sdev)
 {
 MPT_SCSI_HOST *hd = shost_priv(sdev->host);
 VirtTarget *vtarget;
@@ -727,7 +727,7 @@ static int mptspi_slave_alloc(struct scsi_device *sdev)
 vdevice = kzalloc(sizeof(VirtDevice), GFP_KERNEL);
 if (!vdevice) {
-printk(MYIOC_s_ERR_FMT "slave_alloc kmalloc(%zd) FAILED!\n",
+printk(MYIOC_s_ERR_FMT "sdev_init kmalloc(%zd) FAILED!\n",
 ioc->name, sizeof(VirtDevice));
 return -ENOMEM;
 }
@@ -746,7 +746,8 @@ static int mptspi_slave_alloc(struct scsi_device *sdev)
 return 0;
 }
-static int mptspi_slave_configure(struct scsi_device *sdev)
+static int mptspi_sdev_configure(struct scsi_device *sdev,
+struct queue_limits *lim)
 {
 struct _MPT_SCSI_HOST *hd = shost_priv(sdev->host);
 VirtTarget *vtarget = scsi_target(sdev)->hostdata;
@@ -754,7 +755,7 @@ static int mptspi_slave_configure(struct scsi_device *sdev)
 mptspi_initTarget(hd, vtarget, sdev);
-ret = mptscsih_slave_configure(sdev);
+ret = mptscsih_sdev_configure(sdev, lim);
 if (ret)
 return ret;
@@ -799,7 +800,7 @@ mptspi_qcmd(struct Scsi_Host *shost, struct scsi_cmnd *SCpnt)
 return mptscsih_qcmd(SCpnt);
 }
-static void mptspi_slave_destroy(struct scsi_device *sdev)
+static void mptspi_sdev_destroy(struct scsi_device *sdev)
 {
 struct scsi_target *starget = scsi_target(sdev);
 VirtTarget *vtarget = starget->hostdata;
@@ -817,7 +818,7 @@ static void mptspi_slave_destroy(struct scsi_device *sdev)
 mptspi_write_spi_device_pg1(starget, &pg1);
 }
-mptscsih_slave_destroy(sdev);
+mptscsih_sdev_destroy(sdev);
 }
 static const struct scsi_host_template mptspi_driver_template = {
@@ -828,10 +829,10 @@ static const struct scsi_host_template mptspi_driver_template = {
 .info = mptscsih_info,
 .queuecommand = mptspi_qcmd,
 .target_alloc = mptspi_target_alloc,
-.slave_alloc = mptspi_slave_alloc,
-.slave_configure = mptspi_slave_configure,
+.sdev_init = mptspi_sdev_init,
+.sdev_configure = mptspi_sdev_configure,
 .target_destroy = mptspi_target_destroy,
-.slave_destroy = mptspi_slave_destroy,
+.sdev_destroy = mptspi_sdev_destroy,
 .change_queue_depth = mptscsih_change_queue_depth,
 .eh_abort_handler = mptscsih_abort,
 .eh_device_reset_handler = mptscsih_dev_reset,

@@ -37,11 +37,11 @@ static bool allow_lun_scan = true;
 module_param(allow_lun_scan, bool, 0600);
 MODULE_PARM_DESC(allow_lun_scan, "For NPIV, scan and attach all storage LUNs");
-static void zfcp_scsi_slave_destroy(struct scsi_device *sdev)
+static void zfcp_scsi_sdev_destroy(struct scsi_device *sdev)
 {
 struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev);
-/* if previous slave_alloc returned early, there is nothing to do */
+/* if previous sdev_init returned early, there is nothing to do */
 if (!zfcp_sdev->port)
 return;
@@ -49,7 +49,8 @@ static void zfcp_scsi_slave_destroy(struct scsi_device *sdev)
 put_device(&zfcp_sdev->port->dev);
 }
-static int zfcp_scsi_slave_configure(struct scsi_device *sdp)
+static int zfcp_scsi_sdev_configure(struct scsi_device *sdp,
+struct queue_limits *lim)
 {
 if (sdp->tagged_supported)
 scsi_change_queue_depth(sdp, default_depth);
@@ -110,7 +111,7 @@ int zfcp_scsi_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scpnt)
 return ret;
 }
-static int zfcp_scsi_slave_alloc(struct scsi_device *sdev)
+static int zfcp_scsi_sdev_init(struct scsi_device *sdev)
 {
 struct fc_rport *rport = starget_to_rport(scsi_target(sdev));
 struct zfcp_adapter *adapter =
@@ -427,9 +428,9 @@ static const struct scsi_host_template zfcp_scsi_host_template = {
 .eh_device_reset_handler = zfcp_scsi_eh_device_reset_handler,
 .eh_target_reset_handler = zfcp_scsi_eh_target_reset_handler,
 .eh_host_reset_handler = zfcp_scsi_eh_host_reset_handler,
-.slave_alloc = zfcp_scsi_slave_alloc,
-.slave_configure = zfcp_scsi_slave_configure,
-.slave_destroy = zfcp_scsi_slave_destroy,
+.sdev_init = zfcp_scsi_sdev_init,
+.sdev_configure = zfcp_scsi_sdev_configure,
+.sdev_destroy = zfcp_scsi_sdev_destroy,
 .change_queue_depth = scsi_change_queue_depth,
 .host_reset = zfcp_scsi_sysfs_host_reset,
 .proc_name = "zfcp",

@@ -284,7 +284,7 @@ static bool zfcp_sysfs_port_in_use(struct zfcp_port *const port)
 goto unlock_host_lock;
 }
-/* port is about to be removed, so no more unit_add or slave_alloc */
+/* port is about to be removed, so no more unit_add or sdev_init */
 zfcp_sysfs_port_set_removing(port);
 in_use = false;

@@ -170,7 +170,7 @@ int zfcp_unit_add(struct zfcp_port *port, u64 fcp_lun)
 write_unlock_irq(&port->unit_list_lock);
 /*
 * lock order: shost->scan_mutex before zfcp_sysfs_port_units_mutex
-* due to zfcp_unit_scsi_scan() => zfcp_scsi_slave_alloc()
+* due to zfcp_unit_scsi_scan() => zfcp_scsi_sdev_init()
 */
 mutex_unlock(&zfcp_sysfs_port_units_mutex);

@@ -1968,13 +1968,14 @@ static char *twa_string_lookup(twa_message_type *table, unsigned int code)
 } /* End twa_string_lookup() */
 /* This function gets called when a disk is coming on-line */
-static int twa_slave_configure(struct scsi_device *sdev)
+static int twa_sdev_configure(struct scsi_device *sdev,
+struct queue_limits *lim)
 {
 /* Force 60 second timeout */
 blk_queue_rq_timeout(sdev->request_queue, 60 * HZ);
 return 0;
-} /* End twa_slave_configure() */
+} /* End twa_sdev_configure() */
 static const struct scsi_host_template driver_template = {
 .module = THIS_MODULE,
@@ -1984,7 +1985,7 @@ static const struct scsi_host_template driver_template = {
 .bios_param = twa_scsi_biosparam,
 .change_queue_depth = scsi_change_queue_depth,
 .can_queue = TW_Q_LENGTH-2,
-.slave_configure = twa_slave_configure,
+.sdev_configure = twa_sdev_configure,
 .this_id = -1,
 .sg_tablesize = TW_APACHE_MAX_SGL_LENGTH,
 .max_sectors = TW_MAX_SECTORS,

@@ -1523,13 +1523,14 @@ static void twl_shutdown(struct pci_dev *pdev)
 } /* End twl_shutdown() */
 /* This function configures unit settings when a unit is coming on-line */
-static int twl_slave_configure(struct scsi_device *sdev)
+static int twl_sdev_configure(struct scsi_device *sdev,
+struct queue_limits *lim)
 {
 /* Force 60 second timeout */
 blk_queue_rq_timeout(sdev->request_queue, 60 * HZ);
 return 0;
-} /* End twl_slave_configure() */
+} /* End twl_sdev_configure() */
 static const struct scsi_host_template driver_template = {
 .module = THIS_MODULE,
@@ -1539,7 +1540,7 @@ static const struct scsi_host_template driver_template = {
 .bios_param = twl_scsi_biosparam,
 .change_queue_depth = scsi_change_queue_depth,
 .can_queue = TW_Q_LENGTH-2,
-.slave_configure = twl_slave_configure,
+.sdev_configure = twl_sdev_configure,
 .this_id = -1,
 .sg_tablesize = TW_LIBERATOR_MAX_SGL_LENGTH,
 .max_sectors = TW_MAX_SECTORS,

@@ -172,7 +172,7 @@
 Initialize queues correctly when loading with no valid units.
 1.02.00.034 - Fix tw_decode_bits() to handle multiple errors.
 Add support for user configurable cmd_per_lun.
-Add support for sht->slave_configure().
+Add support for sht->sdev_configure().
 1.02.00.035 - Improve tw_allocate_memory() memory allocation.
 Fix tw_chrdev_ioctl() to sleep correctly.
 1.02.00.036 - Increase character ioctl timeout to 60 seconds.
@@ -2221,13 +2221,13 @@ static void tw_shutdown(struct pci_dev *pdev)
 } /* End tw_shutdown() */
 /* This function gets called when a disk is coming online */
-static int tw_slave_configure(struct scsi_device *sdev)
+static int tw_sdev_configure(struct scsi_device *sdev, struct queue_limits *lim)
 {
 /* Force 60 second timeout */
 blk_queue_rq_timeout(sdev->request_queue, 60 * HZ);
 return 0;
-} /* End tw_slave_configure() */
+} /* End tw_sdev_configure() */
 static const struct scsi_host_template driver_template = {
 .module = THIS_MODULE,
@@ -2237,7 +2237,7 @@ static const struct scsi_host_template driver_template = {
 .bios_param = tw_scsi_biosparam,
 .change_queue_depth = scsi_change_queue_depth,
 .can_queue = TW_Q_LENGTH-2,
-.slave_configure = tw_slave_configure,
+.sdev_configure = tw_sdev_configure,
 .this_id = -1,
 .sg_tablesize = TW_MAX_SGL_LENGTH,
 .max_sectors = TW_MAX_SECTORS,

@@ -158,9 +158,10 @@ STATIC int NCR_700_abort(struct scsi_cmnd * SCpnt);
 STATIC int NCR_700_host_reset(struct scsi_cmnd * SCpnt);
 STATIC void NCR_700_chip_setup(struct Scsi_Host *host);
 STATIC void NCR_700_chip_reset(struct Scsi_Host *host);
-STATIC int NCR_700_slave_alloc(struct scsi_device *SDpnt);
-STATIC int NCR_700_slave_configure(struct scsi_device *SDpnt);
-STATIC void NCR_700_slave_destroy(struct scsi_device *SDpnt);
+STATIC int NCR_700_sdev_init(struct scsi_device *SDpnt);
+STATIC int NCR_700_sdev_configure(struct scsi_device *SDpnt,
+struct queue_limits *lim);
+STATIC void NCR_700_sdev_destroy(struct scsi_device *SDpnt);
 static int NCR_700_change_queue_depth(struct scsi_device *SDpnt, int depth);
 STATIC const struct attribute_group *NCR_700_dev_groups[];
@@ -330,9 +331,9 @@ NCR_700_detect(struct scsi_host_template *tpnt,
 tpnt->can_queue = NCR_700_COMMAND_SLOTS_PER_HOST;
 tpnt->sg_tablesize = NCR_700_SG_SEGMENTS;
 tpnt->cmd_per_lun = NCR_700_CMD_PER_LUN;
-tpnt->slave_configure = NCR_700_slave_configure;
-tpnt->slave_destroy = NCR_700_slave_destroy;
-tpnt->slave_alloc = NCR_700_slave_alloc;
+tpnt->sdev_configure = NCR_700_sdev_configure;
+tpnt->sdev_destroy = NCR_700_sdev_destroy;
+tpnt->sdev_init = NCR_700_sdev_init;
 tpnt->change_queue_depth = NCR_700_change_queue_depth;
 if(tpnt->name == NULL)
@@ -2017,7 +2018,7 @@ NCR_700_set_offset(struct scsi_target *STp, int offset)
 }
 STATIC int
-NCR_700_slave_alloc(struct scsi_device *SDp)
+NCR_700_sdev_init(struct scsi_device *SDp)
 {
 SDp->hostdata = kzalloc(sizeof(struct NCR_700_Device_Parameters),
 GFP_KERNEL);
@@ -2029,7 +2030,7 @@ NCR_700_slave_alloc(struct scsi_device *SDp)
 }
 STATIC int
-NCR_700_slave_configure(struct scsi_device *SDp)
+NCR_700_sdev_configure(struct scsi_device *SDp, struct queue_limits *lim)
 {
 struct NCR_700_Host_Parameters *hostdata =
 (struct NCR_700_Host_Parameters *)SDp->host->hostdata[0];
@@ -2052,7 +2053,7 @@ NCR_700_slave_configure(struct scsi_device *SDp)
 }
 STATIC void
-NCR_700_slave_destroy(struct scsi_device *SDp)
+NCR_700_sdev_destroy(struct scsi_device *SDp)
 {
 kfree(SDp->hostdata);
 SDp->hostdata = NULL;

@@ -2153,14 +2153,15 @@ static void __init blogic_inithoststruct(struct blogic_adapter *adapter,
 }
 /*
-blogic_slaveconfig will actually set the queue depth on individual
+blogic_sdev_configure will actually set the queue depth on individual
 scsi devices as they are permanently added to the device chain. We
 shamelessly rip off the SelectQueueDepths code to make this work mostly
 like it used to. Since we don't get called once at the end of the scan
 but instead get called for each device, we have to do things a bit
 differently.
 */
-static int blogic_slaveconfig(struct scsi_device *dev)
+static int blogic_sdev_configure(struct scsi_device *dev,
+struct queue_limits *lim)
 {
 struct blogic_adapter *adapter =
 (struct blogic_adapter *) dev->host->hostdata;
@@ -3672,7 +3673,7 @@ static const struct scsi_host_template blogic_template = {
 .name = "BusLogic",
 .info = blogic_drvr_info,
 .queuecommand = blogic_qcmd,
-.slave_configure = blogic_slaveconfig,
+.sdev_configure = blogic_sdev_configure,
 .bios_param = blogic_diskparam,
 .eh_host_reset_handler = blogic_hostreset,
 #if 0

@@ -1274,7 +1274,8 @@ static inline void blogic_incszbucket(unsigned int *cmdsz_buckets,
 static const char *blogic_drvr_info(struct Scsi_Host *);
 static int blogic_qcmd(struct Scsi_Host *h, struct scsi_cmnd *);
 static int blogic_diskparam(struct scsi_device *, struct block_device *, sector_t, int *);
-static int blogic_slaveconfig(struct scsi_device *);
+static int blogic_sdev_configure(struct scsi_device *,
+struct queue_limits *lim);
 static void blogic_qcompleted_ccb(struct blogic_ccb *);
 static irqreturn_t blogic_inthandler(int, void *);
 static int blogic_resetadapter(struct blogic_adapter *, bool hard_reset);

@@ -377,15 +377,17 @@ static int aac_biosparm(struct scsi_device *sdev, struct block_device *bdev,
 }
 /**
-* aac_slave_configure - compute queue depths
+* aac_sdev_configure - compute queue depths
 * @sdev: SCSI device we are considering
+* @lim: Request queue limits
 *
 * Selects queue depths for each target device based on the host adapter's
 * total capacity and the queue depth supported by the target device.
 * A queue depth of one automatically disables tagged queueing.
 */
-static int aac_slave_configure(struct scsi_device *sdev)
+static int aac_sdev_configure(struct scsi_device *sdev,
+struct queue_limits *lim)
 {
 struct aac_dev *aac = (struct aac_dev *)sdev->host->hostdata;
 int chn, tid;
@@ -1487,7 +1489,7 @@ static const struct scsi_host_template aac_driver_template = {
 .queuecommand = aac_queuecommand,
 .bios_param = aac_biosparm,
 .shost_groups = aac_host_groups,
-.slave_configure = aac_slave_configure,
+.sdev_configure = aac_sdev_configure,
 .change_queue_depth = aac_change_queue_depth,
 .sdev_groups = aac_dev_groups,
 .eh_abort_handler = aac_eh_abort,

@ -4496,7 +4496,7 @@ static int AdvInitAsc3550Driver(ADV_DVC_VAR *asc_dvc)
/* /*
* Microcode operating variables for WDTR, SDTR, and command tag * Microcode operating variables for WDTR, SDTR, and command tag
* queuing will be set in slave_configure() based on what a * queuing will be set in sdev_configure() based on what a
* device reports it is capable of in Inquiry byte 7. * device reports it is capable of in Inquiry byte 7.
* *
* If SCSI Bus Resets have been disabled, then directly set * If SCSI Bus Resets have been disabled, then directly set
@ -5013,7 +5013,7 @@ static int AdvInitAsc38C0800Driver(ADV_DVC_VAR *asc_dvc)
/* /*
* Microcode operating variables for WDTR, SDTR, and command tag * Microcode operating variables for WDTR, SDTR, and command tag
* queuing will be set in slave_configure() based on what a * queuing will be set in sdev_configure() based on what a
* device reports it is capable of in Inquiry byte 7. * device reports it is capable of in Inquiry byte 7.
* *
* If SCSI Bus Resets have been disabled, then directly set * If SCSI Bus Resets have been disabled, then directly set
@ -5508,7 +5508,7 @@ static int AdvInitAsc38C1600Driver(ADV_DVC_VAR *asc_dvc)
/* /*
* Microcode operating variables for WDTR, SDTR, and command tag * Microcode operating variables for WDTR, SDTR, and command tag
* queuing will be set in slave_configure() based on what a * queuing will be set in sdev_configure() based on what a
* device reports it is capable of in Inquiry byte 7. * device reports it is capable of in Inquiry byte 7.
* *
* If SCSI Bus Resets have been disabled, then directly set * If SCSI Bus Resets have been disabled, then directly set
@ -7219,7 +7219,7 @@ static void AscAsyncFix(ASC_DVC_VAR *asc_dvc, struct scsi_device *sdev)
} }
static void static void
advansys_narrow_slave_configure(struct scsi_device *sdev, ASC_DVC_VAR *asc_dvc) advansys_narrow_sdev_configure(struct scsi_device *sdev, ASC_DVC_VAR *asc_dvc)
{ {
ASC_SCSI_BIT_ID_TYPE tid_bit = 1 << sdev->id; ASC_SCSI_BIT_ID_TYPE tid_bit = 1 << sdev->id;
ASC_SCSI_BIT_ID_TYPE orig_use_tagged_qng = asc_dvc->use_tagged_qng; ASC_SCSI_BIT_ID_TYPE orig_use_tagged_qng = asc_dvc->use_tagged_qng;
@ -7345,7 +7345,7 @@ static void advansys_wide_enable_ppr(ADV_DVC_VAR *adv_dvc,
} }
static void static void
advansys_wide_slave_configure(struct scsi_device *sdev, ADV_DVC_VAR *adv_dvc) advansys_wide_sdev_configure(struct scsi_device *sdev, ADV_DVC_VAR *adv_dvc)
{ {
AdvPortAddr iop_base = adv_dvc->iop_base; AdvPortAddr iop_base = adv_dvc->iop_base;
unsigned short tidmask = 1 << sdev->id; unsigned short tidmask = 1 << sdev->id;
@ -7391,15 +7391,16 @@ advansys_wide_slave_configure(struct scsi_device *sdev, ADV_DVC_VAR *adv_dvc)
* Set the number of commands to queue per device for the * Set the number of commands to queue per device for the
* specified host adapter. * specified host adapter.
*/ */
static int advansys_slave_configure(struct scsi_device *sdev) static int advansys_sdev_configure(struct scsi_device *sdev,
struct queue_limits *lim)
{ {
struct asc_board *boardp = shost_priv(sdev->host); struct asc_board *boardp = shost_priv(sdev->host);
if (ASC_NARROW_BOARD(boardp)) if (ASC_NARROW_BOARD(boardp))
advansys_narrow_slave_configure(sdev, advansys_narrow_sdev_configure(sdev,
&boardp->dvc_var.asc_dvc_var); &boardp->dvc_var.asc_dvc_var);
else else
advansys_wide_slave_configure(sdev, advansys_wide_sdev_configure(sdev,
&boardp->dvc_var.adv_dvc_var); &boardp->dvc_var.adv_dvc_var);
return 0; return 0;
@ -10612,7 +10613,7 @@ static const struct scsi_host_template advansys_template = {
.queuecommand = advansys_queuecommand, .queuecommand = advansys_queuecommand,
.eh_host_reset_handler = advansys_reset, .eh_host_reset_handler = advansys_reset,
.bios_param = advansys_biosparam, .bios_param = advansys_biosparam,
.slave_configure = advansys_slave_configure, .sdev_configure = advansys_sdev_configure,
.cmd_size = sizeof(struct advansys_cmd), .cmd_size = sizeof(struct advansys_cmd),
}; };
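The advansys hunks above show the shape every converted driver ends up with: the configure hook gains a struct queue_limits *lim argument and the scsi_host_template entries move from .slave_* to .sdev_*. A minimal sketch of that shape, using hypothetical xxx_ names rather than code from the patch:

#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>

/* Hypothetical LLD hooks after the rename; not taken from any driver above. */
static int xxx_sdev_init(struct scsi_device *sdev)
{
        /* per-LUN setup, runs before the device is probed */
        return 0;
}

static int xxx_sdev_configure(struct scsi_device *sdev,
                              struct queue_limits *lim)
{
        /* queue depth and limits are adjusted here */
        scsi_change_queue_depth(sdev, 32);      /* illustrative depth */
        return 0;
}

static void xxx_sdev_destroy(struct scsi_device *sdev)
{
        sdev->hostdata = NULL;
}

static const struct scsi_host_template xxx_template = {
        .sdev_init      = xxx_sdev_init,
        .sdev_configure = xxx_sdev_configure,
        .sdev_destroy   = xxx_sdev_destroy,
};

All three entries are optional; a driver that has nothing to do in a given hook simply leaves it unset, as several of the templates below do.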

@ -672,7 +672,7 @@ ahd_linux_target_destroy(struct scsi_target *starget)
} }
static int static int
ahd_linux_slave_alloc(struct scsi_device *sdev) ahd_linux_sdev_init(struct scsi_device *sdev)
{ {
struct ahd_softc *ahd = struct ahd_softc *ahd =
*((struct ahd_softc **)sdev->host->hostdata); *((struct ahd_softc **)sdev->host->hostdata);
@ -701,7 +701,7 @@ ahd_linux_slave_alloc(struct scsi_device *sdev)
} }
static int static int
ahd_linux_slave_configure(struct scsi_device *sdev) ahd_linux_sdev_configure(struct scsi_device *sdev, struct queue_limits *lim)
{ {
if (bootverbose) if (bootverbose)
sdev_printk(KERN_INFO, sdev, "Slave Configure\n"); sdev_printk(KERN_INFO, sdev, "Slave Configure\n");
@ -906,8 +906,8 @@ struct scsi_host_template aic79xx_driver_template = {
.this_id = -1, .this_id = -1,
.max_sectors = 8192, .max_sectors = 8192,
.cmd_per_lun = 2, .cmd_per_lun = 2,
.slave_alloc = ahd_linux_slave_alloc, .sdev_init = ahd_linux_sdev_init,
.slave_configure = ahd_linux_slave_configure, .sdev_configure = ahd_linux_sdev_configure,
.target_alloc = ahd_linux_target_alloc, .target_alloc = ahd_linux_target_alloc,
.target_destroy = ahd_linux_target_destroy, .target_destroy = ahd_linux_target_destroy,
}; };

@ -632,7 +632,7 @@ ahc_linux_target_destroy(struct scsi_target *starget)
} }
static int static int
ahc_linux_slave_alloc(struct scsi_device *sdev) ahc_linux_sdev_init(struct scsi_device *sdev)
{ {
struct ahc_softc *ahc = struct ahc_softc *ahc =
*((struct ahc_softc **)sdev->host->hostdata); *((struct ahc_softc **)sdev->host->hostdata);
@ -664,7 +664,7 @@ ahc_linux_slave_alloc(struct scsi_device *sdev)
} }
static int static int
ahc_linux_slave_configure(struct scsi_device *sdev) ahc_linux_sdev_configure(struct scsi_device *sdev, struct queue_limits *lim)
{ {
if (bootverbose) if (bootverbose)
sdev_printk(KERN_INFO, sdev, "Slave Configure\n"); sdev_printk(KERN_INFO, sdev, "Slave Configure\n");
@ -791,8 +791,8 @@ struct scsi_host_template aic7xxx_driver_template = {
.this_id = -1, .this_id = -1,
.max_sectors = 8192, .max_sectors = 8192,
.cmd_per_lun = 2, .cmd_per_lun = 2,
.slave_alloc = ahc_linux_slave_alloc, .sdev_init = ahc_linux_sdev_init,
.slave_configure = ahc_linux_slave_configure, .sdev_configure = ahc_linux_sdev_configure,
.target_alloc = ahc_linux_target_alloc, .target_alloc = ahc_linux_target_alloc,
.target_destroy = ahc_linux_target_destroy, .target_destroy = ahc_linux_target_destroy,
}; };

@ -143,7 +143,8 @@ static irqreturn_t arcmsr_interrupt(struct AdapterControlBlock *acb);
static void arcmsr_free_irq(struct pci_dev *, struct AdapterControlBlock *); static void arcmsr_free_irq(struct pci_dev *, struct AdapterControlBlock *);
static void arcmsr_wait_firmware_ready(struct AdapterControlBlock *acb); static void arcmsr_wait_firmware_ready(struct AdapterControlBlock *acb);
static void arcmsr_set_iop_datetime(struct timer_list *); static void arcmsr_set_iop_datetime(struct timer_list *);
static int arcmsr_slave_config(struct scsi_device *sdev); static int arcmsr_sdev_configure(struct scsi_device *sdev,
struct queue_limits *lim);
static int arcmsr_adjust_disk_queue_depth(struct scsi_device *sdev, int queue_depth) static int arcmsr_adjust_disk_queue_depth(struct scsi_device *sdev, int queue_depth)
{ {
if (queue_depth > ARCMSR_MAX_CMD_PERLUN) if (queue_depth > ARCMSR_MAX_CMD_PERLUN)
@ -160,7 +161,7 @@ static const struct scsi_host_template arcmsr_scsi_host_template = {
.eh_abort_handler = arcmsr_abort, .eh_abort_handler = arcmsr_abort,
.eh_bus_reset_handler = arcmsr_bus_reset, .eh_bus_reset_handler = arcmsr_bus_reset,
.bios_param = arcmsr_bios_param, .bios_param = arcmsr_bios_param,
.slave_configure = arcmsr_slave_config, .sdev_configure = arcmsr_sdev_configure,
.change_queue_depth = arcmsr_adjust_disk_queue_depth, .change_queue_depth = arcmsr_adjust_disk_queue_depth,
.can_queue = ARCMSR_DEFAULT_OUTSTANDING_CMD, .can_queue = ARCMSR_DEFAULT_OUTSTANDING_CMD,
.this_id = ARCMSR_SCSI_INITIATOR_ID, .this_id = ARCMSR_SCSI_INITIATOR_ID,
@ -3344,7 +3345,8 @@ static int arcmsr_queue_command_lck(struct scsi_cmnd *cmd)
static DEF_SCSI_QCMD(arcmsr_queue_command) static DEF_SCSI_QCMD(arcmsr_queue_command)
static int arcmsr_slave_config(struct scsi_device *sdev) static int arcmsr_sdev_configure(struct scsi_device *sdev,
struct queue_limits *lim)
{ {
unsigned int dev_timeout; unsigned int dev_timeout;

@ -25,7 +25,7 @@ struct scsi_transport_template *bfad_im_scsi_transport_template;
struct scsi_transport_template *bfad_im_scsi_vport_transport_template; struct scsi_transport_template *bfad_im_scsi_vport_transport_template;
static void bfad_im_itnim_work_handler(struct work_struct *work); static void bfad_im_itnim_work_handler(struct work_struct *work);
static int bfad_im_queuecommand(struct Scsi_Host *h, struct scsi_cmnd *cmnd); static int bfad_im_queuecommand(struct Scsi_Host *h, struct scsi_cmnd *cmnd);
static int bfad_im_slave_alloc(struct scsi_device *sdev); static int bfad_im_sdev_init(struct scsi_device *sdev);
static void bfad_im_fc_rport_add(struct bfad_im_port_s *im_port, static void bfad_im_fc_rport_add(struct bfad_im_port_s *im_port,
struct bfad_itnim_s *itnim); struct bfad_itnim_s *itnim);
@ -404,10 +404,10 @@ bfad_im_reset_target_handler(struct scsi_cmnd *cmnd)
} }
/* /*
* Scsi_Host template entry slave_destroy. * Scsi_Host template entry sdev_destroy.
*/ */
static void static void
bfad_im_slave_destroy(struct scsi_device *sdev) bfad_im_sdev_destroy(struct scsi_device *sdev)
{ {
sdev->hostdata = NULL; sdev->hostdata = NULL;
return; return;
@ -783,7 +783,7 @@ bfad_thread_workq(struct bfad_s *bfad)
* Return non-zero if fails. * Return non-zero if fails.
*/ */
static int static int
bfad_im_slave_configure(struct scsi_device *sdev) bfad_im_sdev_configure(struct scsi_device *sdev, struct queue_limits *lim)
{ {
scsi_change_queue_depth(sdev, bfa_lun_queue_depth); scsi_change_queue_depth(sdev, bfa_lun_queue_depth);
return 0; return 0;
@ -800,9 +800,9 @@ struct scsi_host_template bfad_im_scsi_host_template = {
.eh_device_reset_handler = bfad_im_reset_lun_handler, .eh_device_reset_handler = bfad_im_reset_lun_handler,
.eh_target_reset_handler = bfad_im_reset_target_handler, .eh_target_reset_handler = bfad_im_reset_target_handler,
.slave_alloc = bfad_im_slave_alloc, .sdev_init = bfad_im_sdev_init,
.slave_configure = bfad_im_slave_configure, .sdev_configure = bfad_im_sdev_configure,
.slave_destroy = bfad_im_slave_destroy, .sdev_destroy = bfad_im_sdev_destroy,
.this_id = -1, .this_id = -1,
.sg_tablesize = BFAD_IO_MAX_SGE, .sg_tablesize = BFAD_IO_MAX_SGE,
@ -823,9 +823,9 @@ struct scsi_host_template bfad_im_vport_template = {
.eh_device_reset_handler = bfad_im_reset_lun_handler, .eh_device_reset_handler = bfad_im_reset_lun_handler,
.eh_target_reset_handler = bfad_im_reset_target_handler, .eh_target_reset_handler = bfad_im_reset_target_handler,
.slave_alloc = bfad_im_slave_alloc, .sdev_init = bfad_im_sdev_init,
.slave_configure = bfad_im_slave_configure, .sdev_configure = bfad_im_sdev_configure,
.slave_destroy = bfad_im_slave_destroy, .sdev_destroy = bfad_im_sdev_destroy,
.this_id = -1, .this_id = -1,
.sg_tablesize = BFAD_IO_MAX_SGE, .sg_tablesize = BFAD_IO_MAX_SGE,
@ -915,7 +915,7 @@ bfad_get_itnim(struct bfad_im_port_s *im_port, int id)
} }
/* /*
* Function is invoked from the SCSI Host Template slave_alloc() entry point. * Function is invoked from the SCSI Host Template sdev_init() entry point.
* Has the logic to query the LUN Mask database to check if this LUN needs to * Has the logic to query the LUN Mask database to check if this LUN needs to
* be made visible to the SCSI mid-layer or not. * be made visible to the SCSI mid-layer or not.
* *
@ -946,10 +946,10 @@ bfad_im_check_if_make_lun_visible(struct scsi_device *sdev,
} }
/* /*
* Scsi_Host template entry slave_alloc * Scsi_Host template entry sdev_init
*/ */
static int static int
bfad_im_slave_alloc(struct scsi_device *sdev) bfad_im_sdev_init(struct scsi_device *sdev)
{ {
struct fc_rport *rport = starget_to_rport(scsi_target(sdev)); struct fc_rport *rport = starget_to_rport(scsi_target(sdev));
struct bfad_itnim_data_s *itnim_data; struct bfad_itnim_data_s *itnim_data;

@ -2652,7 +2652,8 @@ static int bnx2fc_cpu_offline(unsigned int cpu)
return 0; return 0;
} }
static int bnx2fc_slave_configure(struct scsi_device *sdev) static int bnx2fc_sdev_configure(struct scsi_device *sdev,
struct queue_limits *lim)
{ {
if (!bnx2fc_queue_depth) if (!bnx2fc_queue_depth)
return 0; return 0;
@ -2951,7 +2952,7 @@ static struct scsi_host_template bnx2fc_shost_template = {
.eh_device_reset_handler = bnx2fc_eh_device_reset, /* lun reset */ .eh_device_reset_handler = bnx2fc_eh_device_reset, /* lun reset */
.eh_target_reset_handler = bnx2fc_eh_target_reset, /* tgt reset */ .eh_target_reset_handler = bnx2fc_eh_target_reset, /* tgt reset */
.eh_host_reset_handler = fc_eh_host_reset, .eh_host_reset_handler = fc_eh_host_reset,
.slave_alloc = fc_slave_alloc, .sdev_init = fc_sdev_init,
.change_queue_depth = scsi_change_queue_depth, .change_queue_depth = scsi_change_queue_depth,
.this_id = -1, .this_id = -1,
.cmd_per_lun = 3, .cmd_per_lun = 3,
@ -2959,7 +2960,7 @@ static struct scsi_host_template bnx2fc_shost_template = {
.dma_boundary = 0x7fff, .dma_boundary = 0x7fff,
.max_sectors = 0x3fbf, .max_sectors = 0x3fbf,
.track_queue_depth = 1, .track_queue_depth = 1,
.slave_configure = bnx2fc_slave_configure, .sdev_configure = bnx2fc_sdev_configure,
.shost_groups = bnx2fc_host_groups, .shost_groups = bnx2fc_host_groups,
.cmd_size = sizeof(struct bnx2fc_priv), .cmd_size = sizeof(struct bnx2fc_priv),
}; };

@ -800,7 +800,7 @@ csio_scsis_io_active(struct csio_ioreq *req, enum csio_scsi_ev evt)
rn = req->rnode; rn = req->rnode;
/* /*
* FW says remote device is lost, but rnode * FW says remote device is lost, but rnode
* doesnt reflect it. * doesn't reflect it.
*/ */
if (csio_scsi_itnexus_loss_error(req->wr_status) && if (csio_scsi_itnexus_loss_error(req->wr_status) &&
csio_is_rnode_ready(rn)) { csio_is_rnode_ready(rn)) {
@ -2224,7 +2224,7 @@ csio_eh_lun_reset_handler(struct scsi_cmnd *cmnd)
} }
static int static int
csio_slave_alloc(struct scsi_device *sdev) csio_sdev_init(struct scsi_device *sdev)
{ {
struct fc_rport *rport = starget_to_rport(scsi_target(sdev)); struct fc_rport *rport = starget_to_rport(scsi_target(sdev));
@ -2237,14 +2237,14 @@ csio_slave_alloc(struct scsi_device *sdev)
} }
static int static int
csio_slave_configure(struct scsi_device *sdev) csio_sdev_configure(struct scsi_device *sdev, struct queue_limits *lim)
{ {
scsi_change_queue_depth(sdev, csio_lun_qdepth); scsi_change_queue_depth(sdev, csio_lun_qdepth);
return 0; return 0;
} }
static void static void
csio_slave_destroy(struct scsi_device *sdev) csio_sdev_destroy(struct scsi_device *sdev)
{ {
sdev->hostdata = NULL; sdev->hostdata = NULL;
} }
@ -2276,9 +2276,9 @@ struct scsi_host_template csio_fcoe_shost_template = {
.eh_timed_out = fc_eh_timed_out, .eh_timed_out = fc_eh_timed_out,
.eh_abort_handler = csio_eh_abort_handler, .eh_abort_handler = csio_eh_abort_handler,
.eh_device_reset_handler = csio_eh_lun_reset_handler, .eh_device_reset_handler = csio_eh_lun_reset_handler,
.slave_alloc = csio_slave_alloc, .sdev_init = csio_sdev_init,
.slave_configure = csio_slave_configure, .sdev_configure = csio_sdev_configure,
.slave_destroy = csio_slave_destroy, .sdev_destroy = csio_sdev_destroy,
.scan_finished = csio_scan_finished, .scan_finished = csio_scan_finished,
.this_id = -1, .this_id = -1,
.sg_tablesize = CSIO_SCSI_MAX_SGE, .sg_tablesize = CSIO_SCSI_MAX_SGE,
@ -2295,9 +2295,9 @@ struct scsi_host_template csio_fcoe_shost_vport_template = {
.eh_timed_out = fc_eh_timed_out, .eh_timed_out = fc_eh_timed_out,
.eh_abort_handler = csio_eh_abort_handler, .eh_abort_handler = csio_eh_abort_handler,
.eh_device_reset_handler = csio_eh_lun_reset_handler, .eh_device_reset_handler = csio_eh_lun_reset_handler,
.slave_alloc = csio_slave_alloc, .sdev_init = csio_sdev_init,
.slave_configure = csio_slave_configure, .sdev_configure = csio_sdev_configure,
.slave_destroy = csio_slave_destroy, .sdev_destroy = csio_sdev_destroy,
.scan_finished = csio_scan_finished, .scan_finished = csio_scan_finished,
.this_id = -1, .this_id = -1,
.sg_tablesize = CSIO_SCSI_MAX_SGE, .sg_tablesize = CSIO_SCSI_MAX_SGE,

@ -3715,13 +3715,13 @@ static void adapter_remove_and_free_all_devices(struct AdapterCtlBlk* acb)
/** /**
* dc395x_slave_alloc - Called by the scsi mid layer to tell us about a new * dc395x_sdev_init - Called by the scsi mid layer to tell us about a new
* scsi device that we need to deal with. We allocate a new device and then * scsi device that we need to deal with. We allocate a new device and then
* insert that device into the adapters device list. * insert that device into the adapters device list.
* *
* @scsi_device: The new scsi device that we need to handle. * @scsi_device: The new scsi device that we need to handle.
**/ **/
static int dc395x_slave_alloc(struct scsi_device *scsi_device) static int dc395x_sdev_init(struct scsi_device *scsi_device)
{ {
struct AdapterCtlBlk *acb = (struct AdapterCtlBlk *)scsi_device->host->hostdata; struct AdapterCtlBlk *acb = (struct AdapterCtlBlk *)scsi_device->host->hostdata;
struct DeviceCtlBlk *dcb; struct DeviceCtlBlk *dcb;
@ -3736,12 +3736,12 @@ static int dc395x_slave_alloc(struct scsi_device *scsi_device)
/** /**
* dc395x_slave_destroy - Called by the scsi mid layer to tell us about a * dc395x_sdev_destroy - Called by the scsi mid layer to tell us about a
* device that is going away. * device that is going away.
* *
* @scsi_device: The new scsi device that we need to handle. * @scsi_device: The new scsi device that we need to handle.
**/ **/
static void dc395x_slave_destroy(struct scsi_device *scsi_device) static void dc395x_sdev_destroy(struct scsi_device *scsi_device)
{ {
struct AdapterCtlBlk *acb = (struct AdapterCtlBlk *)scsi_device->host->hostdata; struct AdapterCtlBlk *acb = (struct AdapterCtlBlk *)scsi_device->host->hostdata;
struct DeviceCtlBlk *dcb = find_dcb(acb, scsi_device->id, scsi_device->lun); struct DeviceCtlBlk *dcb = find_dcb(acb, scsi_device->id, scsi_device->lun);
@ -4547,8 +4547,8 @@ static const struct scsi_host_template dc395x_driver_template = {
.show_info = dc395x_show_info, .show_info = dc395x_show_info,
.name = DC395X_BANNER " " DC395X_VERSION, .name = DC395X_BANNER " " DC395X_VERSION,
.queuecommand = dc395x_queue_command, .queuecommand = dc395x_queue_command,
.slave_alloc = dc395x_slave_alloc, .sdev_init = dc395x_sdev_init,
.slave_destroy = dc395x_slave_destroy, .sdev_destroy = dc395x_sdev_destroy,
.can_queue = DC395x_MAX_CAN_QUEUE, .can_queue = DC395x_MAX_CAN_QUEUE,
.this_id = 7, .this_id = 7,
.sg_tablesize = DC395x_MAX_SG_TABLESIZE, .sg_tablesize = DC395x_MAX_SG_TABLESIZE,

@ -2261,7 +2261,7 @@ static void esp_init_swstate(struct esp *esp)
INIT_LIST_HEAD(&esp->active_cmds); INIT_LIST_HEAD(&esp->active_cmds);
INIT_LIST_HEAD(&esp->esp_cmd_pool); INIT_LIST_HEAD(&esp->esp_cmd_pool);
/* Start with a clear state, domain validation (via ->slave_configure, /* Start with a clear state, domain validation (via ->sdev_configure,
* spi_dv_device()) will attempt to enable SYNC, WIDE, and tagged * spi_dv_device()) will attempt to enable SYNC, WIDE, and tagged
* commands. * commands.
*/ */
@ -2441,7 +2441,7 @@ static void esp_target_destroy(struct scsi_target *starget)
tp->starget = NULL; tp->starget = NULL;
} }
static int esp_slave_alloc(struct scsi_device *dev) static int esp_sdev_init(struct scsi_device *dev)
{ {
struct esp *esp = shost_priv(dev->host); struct esp *esp = shost_priv(dev->host);
struct esp_target_data *tp = &esp->target[dev->id]; struct esp_target_data *tp = &esp->target[dev->id];
@ -2463,7 +2463,7 @@ static int esp_slave_alloc(struct scsi_device *dev)
return 0; return 0;
} }
static int esp_slave_configure(struct scsi_device *dev) static int esp_sdev_configure(struct scsi_device *dev, struct queue_limits *lim)
{ {
struct esp *esp = shost_priv(dev->host); struct esp *esp = shost_priv(dev->host);
struct esp_target_data *tp = &esp->target[dev->id]; struct esp_target_data *tp = &esp->target[dev->id];
@ -2479,7 +2479,7 @@ static int esp_slave_configure(struct scsi_device *dev)
return 0; return 0;
} }
static void esp_slave_destroy(struct scsi_device *dev) static void esp_sdev_destroy(struct scsi_device *dev)
{ {
struct esp_lun_data *lp = dev->hostdata; struct esp_lun_data *lp = dev->hostdata;
@ -2667,9 +2667,9 @@ const struct scsi_host_template scsi_esp_template = {
.queuecommand = esp_queuecommand, .queuecommand = esp_queuecommand,
.target_alloc = esp_target_alloc, .target_alloc = esp_target_alloc,
.target_destroy = esp_target_destroy, .target_destroy = esp_target_destroy,
.slave_alloc = esp_slave_alloc, .sdev_init = esp_sdev_init,
.slave_configure = esp_slave_configure, .sdev_configure = esp_sdev_configure,
.slave_destroy = esp_slave_destroy, .sdev_destroy = esp_sdev_destroy,
.eh_abort_handler = esp_eh_abort_handler, .eh_abort_handler = esp_eh_abort_handler,
.eh_bus_reset_handler = esp_eh_bus_reset_handler, .eh_bus_reset_handler = esp_eh_bus_reset_handler,
.eh_host_reset_handler = esp_eh_host_reset_handler, .eh_host_reset_handler = esp_eh_host_reset_handler,

@ -269,7 +269,7 @@ static const struct scsi_host_template fcoe_shost_template = {
.eh_abort_handler = fc_eh_abort, .eh_abort_handler = fc_eh_abort,
.eh_device_reset_handler = fc_eh_device_reset, .eh_device_reset_handler = fc_eh_device_reset,
.eh_host_reset_handler = fc_eh_host_reset, .eh_host_reset_handler = fc_eh_host_reset,
.slave_alloc = fc_slave_alloc, .sdev_init = fc_sdev_init,
.change_queue_depth = scsi_change_queue_depth, .change_queue_depth = scsi_change_queue_depth,
.this_id = -1, .this_id = -1,
.cmd_per_lun = 3, .cmd_per_lun = 3,

@ -86,7 +86,7 @@ static struct libfc_function_template fnic_transport_template = {
.exch_mgr_reset = fnic_exch_mgr_reset .exch_mgr_reset = fnic_exch_mgr_reset
}; };
static int fnic_slave_alloc(struct scsi_device *sdev) static int fnic_sdev_init(struct scsi_device *sdev)
{ {
struct fc_rport *rport = starget_to_rport(scsi_target(sdev)); struct fc_rport *rport = starget_to_rport(scsi_target(sdev));
@ -105,7 +105,7 @@ static const struct scsi_host_template fnic_host_template = {
.eh_abort_handler = fnic_abort_cmd, .eh_abort_handler = fnic_abort_cmd,
.eh_device_reset_handler = fnic_device_reset, .eh_device_reset_handler = fnic_device_reset,
.eh_host_reset_handler = fnic_host_reset, .eh_host_reset_handler = fnic_host_reset,
.slave_alloc = fnic_slave_alloc, .sdev_init = fnic_sdev_init,
.change_queue_depth = scsi_change_queue_depth, .change_queue_depth = scsi_change_queue_depth,
.this_id = -1, .this_id = -1,
.cmd_per_lun = 3, .cmd_per_lun = 3,

@ -485,8 +485,7 @@ int fnic_trace_buf_init(void)
} }
fnic_trace_entries.page_offset = fnic_trace_entries.page_offset =
vmalloc(array_size(fnic_max_trace_entries, vcalloc(fnic_max_trace_entries, sizeof(unsigned long));
sizeof(unsigned long)));
if (!fnic_trace_entries.page_offset) { if (!fnic_trace_entries.page_offset) {
printk(KERN_ERR PFX "Failed to allocate memory for" printk(KERN_ERR PFX "Failed to allocate memory for"
" page_offset\n"); " page_offset\n");
@ -497,8 +496,6 @@ int fnic_trace_buf_init(void)
err = -ENOMEM; err = -ENOMEM;
goto err_fnic_trace_buf_init; goto err_fnic_trace_buf_init;
} }
memset((void *)fnic_trace_entries.page_offset, 0,
(fnic_max_trace_entries * sizeof(unsigned long)));
fnic_trace_entries.wr_idx = fnic_trace_entries.rd_idx = 0; fnic_trace_entries.wr_idx = fnic_trace_entries.rd_idx = 0;
fnic_buf_head = fnic_trace_buf_p; fnic_buf_head = fnic_trace_buf_p;
@ -559,8 +556,7 @@ int fnic_fc_trace_init(void)
fc_trace_max_entries = (fnic_fc_trace_max_pages * PAGE_SIZE)/ fc_trace_max_entries = (fnic_fc_trace_max_pages * PAGE_SIZE)/
FC_TRC_SIZE_BYTES; FC_TRC_SIZE_BYTES;
fnic_fc_ctlr_trace_buf_p = fnic_fc_ctlr_trace_buf_p =
(unsigned long)vmalloc(array_size(PAGE_SIZE, (unsigned long)vcalloc(fnic_fc_trace_max_pages, PAGE_SIZE);
fnic_fc_trace_max_pages));
if (!fnic_fc_ctlr_trace_buf_p) { if (!fnic_fc_ctlr_trace_buf_p) {
pr_err("fnic: Failed to allocate memory for " pr_err("fnic: Failed to allocate memory for "
"FC Control Trace Buf\n"); "FC Control Trace Buf\n");
@ -568,13 +564,9 @@ int fnic_fc_trace_init(void)
goto err_fnic_fc_ctlr_trace_buf_init; goto err_fnic_fc_ctlr_trace_buf_init;
} }
memset((void *)fnic_fc_ctlr_trace_buf_p, 0,
fnic_fc_trace_max_pages * PAGE_SIZE);
/* Allocate memory for page offset */ /* Allocate memory for page offset */
fc_trace_entries.page_offset = fc_trace_entries.page_offset =
vmalloc(array_size(fc_trace_max_entries, vcalloc(fc_trace_max_entries, sizeof(unsigned long));
sizeof(unsigned long)));
if (!fc_trace_entries.page_offset) { if (!fc_trace_entries.page_offset) {
pr_err("fnic:Failed to allocate memory for page_offset\n"); pr_err("fnic:Failed to allocate memory for page_offset\n");
if (fnic_fc_ctlr_trace_buf_p) { if (fnic_fc_ctlr_trace_buf_p) {
@ -585,8 +577,6 @@ int fnic_fc_trace_init(void)
err = -ENOMEM; err = -ENOMEM;
goto err_fnic_fc_ctlr_trace_buf_init; goto err_fnic_fc_ctlr_trace_buf_init;
} }
memset((void *)fc_trace_entries.page_offset, 0,
(fc_trace_max_entries * sizeof(unsigned long)));
fc_trace_entries.rd_idx = fc_trace_entries.wr_idx = 0; fc_trace_entries.rd_idx = fc_trace_entries.wr_idx = 0;
fc_trace_buf_head = fnic_fc_ctlr_trace_buf_p; fc_trace_buf_head = fnic_fc_ctlr_trace_buf_p;
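Besides the callback renames, this fnic trace hunk folds each vmalloc(array_size(...)) + memset() pair into a single vcalloc() call, which returns zeroed memory and checks the element-count multiplication for overflow. A small sketch of the replacement pattern (helper and count names are illustrative):

#include <linux/vmalloc.h>

/* Illustrative helper: allocate a zeroed array of page offsets. */
static unsigned long *alloc_page_offsets(size_t nr_entries)
{
        /*
         * vcalloc(n, size) stands in for the older
         * vmalloc(array_size(n, size)) + memset(..., 0, ...) pair.
         */
        return vcalloc(nr_entries, sizeof(unsigned long));
}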

@ -643,9 +643,8 @@ extern int hisi_sas_probe(struct platform_device *pdev,
const struct hisi_sas_hw *ops); const struct hisi_sas_hw *ops);
extern void hisi_sas_remove(struct platform_device *pdev); extern void hisi_sas_remove(struct platform_device *pdev);
int hisi_sas_device_configure(struct scsi_device *sdev, int hisi_sas_sdev_configure(struct scsi_device *sdev, struct queue_limits *lim);
struct queue_limits *lim); extern int hisi_sas_sdev_init(struct scsi_device *sdev);
extern int hisi_sas_slave_alloc(struct scsi_device *sdev);
extern int hisi_sas_scan_finished(struct Scsi_Host *shost, unsigned long time); extern int hisi_sas_scan_finished(struct Scsi_Host *shost, unsigned long time);
extern void hisi_sas_scan_start(struct Scsi_Host *shost); extern void hisi_sas_scan_start(struct Scsi_Host *shost);
extern int hisi_sas_host_reset(struct Scsi_Host *shost, int reset_type); extern int hisi_sas_host_reset(struct Scsi_Host *shost, int reset_type);

@ -805,13 +805,13 @@ static int hisi_sas_init_device(struct domain_device *device)
return rc; return rc;
} }
int hisi_sas_slave_alloc(struct scsi_device *sdev) int hisi_sas_sdev_init(struct scsi_device *sdev)
{ {
struct domain_device *ddev = sdev_to_domain_dev(sdev); struct domain_device *ddev = sdev_to_domain_dev(sdev);
struct hisi_sas_device *sas_dev = ddev->lldd_dev; struct hisi_sas_device *sas_dev = ddev->lldd_dev;
int rc; int rc;
rc = sas_slave_alloc(sdev); rc = sas_sdev_init(sdev);
if (rc) if (rc)
return rc; return rc;
@ -821,7 +821,7 @@ int hisi_sas_slave_alloc(struct scsi_device *sdev)
sas_dev->dev_status = HISI_SAS_DEV_NORMAL; sas_dev->dev_status = HISI_SAS_DEV_NORMAL;
return 0; return 0;
} }
EXPORT_SYMBOL_GPL(hisi_sas_slave_alloc); EXPORT_SYMBOL_GPL(hisi_sas_sdev_init);
static int hisi_sas_dev_found(struct domain_device *device) static int hisi_sas_dev_found(struct domain_device *device)
{ {
@ -868,11 +868,10 @@ static int hisi_sas_dev_found(struct domain_device *device)
return rc; return rc;
} }
int hisi_sas_device_configure(struct scsi_device *sdev, int hisi_sas_sdev_configure(struct scsi_device *sdev, struct queue_limits *lim)
struct queue_limits *lim)
{ {
struct domain_device *dev = sdev_to_domain_dev(sdev); struct domain_device *dev = sdev_to_domain_dev(sdev);
int ret = sas_device_configure(sdev, lim); int ret = sas_sdev_configure(sdev, lim);
if (ret) if (ret)
return ret; return ret;
@ -881,7 +880,7 @@ int hisi_sas_device_configure(struct scsi_device *sdev,
return 0; return 0;
} }
EXPORT_SYMBOL_GPL(hisi_sas_device_configure); EXPORT_SYMBOL_GPL(hisi_sas_sdev_configure);
void hisi_sas_scan_start(struct Scsi_Host *shost) void hisi_sas_scan_start(struct Scsi_Host *shost)
{ {

@ -1753,11 +1753,11 @@ static int check_fw_info_v1_hw(struct hisi_hba *hisi_hba)
static const struct scsi_host_template sht_v1_hw = { static const struct scsi_host_template sht_v1_hw = {
LIBSAS_SHT_BASE_NO_SLAVE_INIT LIBSAS_SHT_BASE_NO_SLAVE_INIT
.device_configure = hisi_sas_device_configure, .sdev_configure = hisi_sas_sdev_configure,
.scan_finished = hisi_sas_scan_finished, .scan_finished = hisi_sas_scan_finished,
.scan_start = hisi_sas_scan_start, .scan_start = hisi_sas_scan_start,
.sg_tablesize = HISI_SAS_SGE_PAGE_CNT, .sg_tablesize = HISI_SAS_SGE_PAGE_CNT,
.slave_alloc = hisi_sas_slave_alloc, .sdev_init = hisi_sas_sdev_init,
.shost_groups = host_v1_hw_groups, .shost_groups = host_v1_hw_groups,
.host_reset = hisi_sas_host_reset, .host_reset = hisi_sas_host_reset,
}; };

@ -3585,11 +3585,11 @@ static int check_fw_info_v2_hw(struct hisi_hba *hisi_hba)
static const struct scsi_host_template sht_v2_hw = { static const struct scsi_host_template sht_v2_hw = {
LIBSAS_SHT_BASE_NO_SLAVE_INIT LIBSAS_SHT_BASE_NO_SLAVE_INIT
.device_configure = hisi_sas_device_configure, .sdev_configure = hisi_sas_sdev_configure,
.scan_finished = hisi_sas_scan_finished, .scan_finished = hisi_sas_scan_finished,
.scan_start = hisi_sas_scan_start, .scan_start = hisi_sas_scan_start,
.sg_tablesize = HISI_SAS_SGE_PAGE_CNT, .sg_tablesize = HISI_SAS_SGE_PAGE_CNT,
.slave_alloc = hisi_sas_slave_alloc, .sdev_init = hisi_sas_sdev_init,
.shost_groups = host_v2_hw_groups, .shost_groups = host_v2_hw_groups,
.sdev_groups = sdev_groups_v2_hw, .sdev_groups = sdev_groups_v2_hw,
.host_reset = hisi_sas_host_reset, .host_reset = hisi_sas_host_reset,

@ -2908,12 +2908,12 @@ static ssize_t iopoll_q_cnt_v3_hw_show(struct device *dev,
} }
static DEVICE_ATTR_RO(iopoll_q_cnt_v3_hw); static DEVICE_ATTR_RO(iopoll_q_cnt_v3_hw);
static int device_configure_v3_hw(struct scsi_device *sdev, static int sdev_configure_v3_hw(struct scsi_device *sdev,
struct queue_limits *lim) struct queue_limits *lim)
{ {
struct Scsi_Host *shost = dev_to_shost(&sdev->sdev_gendev); struct Scsi_Host *shost = dev_to_shost(&sdev->sdev_gendev);
struct hisi_hba *hisi_hba = shost_priv(shost); struct hisi_hba *hisi_hba = shost_priv(shost);
int ret = hisi_sas_device_configure(sdev, lim); int ret = hisi_sas_sdev_configure(sdev, lim);
struct device *dev = hisi_hba->dev; struct device *dev = hisi_hba->dev;
if (ret) if (ret)
@ -3336,13 +3336,13 @@ static void hisi_sas_map_queues(struct Scsi_Host *shost)
static const struct scsi_host_template sht_v3_hw = { static const struct scsi_host_template sht_v3_hw = {
LIBSAS_SHT_BASE_NO_SLAVE_INIT LIBSAS_SHT_BASE_NO_SLAVE_INIT
.device_configure = device_configure_v3_hw, .sdev_configure = sdev_configure_v3_hw,
.scan_finished = hisi_sas_scan_finished, .scan_finished = hisi_sas_scan_finished,
.scan_start = hisi_sas_scan_start, .scan_start = hisi_sas_scan_start,
.map_queues = hisi_sas_map_queues, .map_queues = hisi_sas_map_queues,
.sg_tablesize = HISI_SAS_SGE_PAGE_CNT, .sg_tablesize = HISI_SAS_SGE_PAGE_CNT,
.sg_prot_tablesize = HISI_SAS_SGE_PAGE_CNT, .sg_prot_tablesize = HISI_SAS_SGE_PAGE_CNT,
.slave_alloc = hisi_sas_slave_alloc, .sdev_init = hisi_sas_sdev_init,
.shost_groups = host_v3_hw_groups, .shost_groups = host_v3_hw_groups,
.sdev_groups = sdev_groups_v3_hw, .sdev_groups = sdev_groups_v3_hw,
.tag_alloc_policy = BLK_TAG_ALLOC_RR, .tag_alloc_policy = BLK_TAG_ALLOC_RR,
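The v3 hunk shows the layering the hisi_sas drivers use: the chip-specific hook hands both arguments to the shared helper first and only applies its own settings once that succeeds. A sketch of the same chain with placeholder names (common_sdev_configure stands in for the shared hisi_sas_sdev_configure()):

#include <scsi/scsi_device.h>

/* Placeholder for the shared helper the chip-level hook delegates to. */
static int common_sdev_configure(struct scsi_device *sdev,
                                 struct queue_limits *lim)
{
        return 0;
}

/* Chip-level hook: delegate first, then layer chip-specific settings on top. */
static int chip_sdev_configure(struct scsi_device *sdev,
                               struct queue_limits *lim)
{
        int ret = common_sdev_configure(sdev, lim);

        if (ret)
                return ret;
        /* chip-specific adjustments to *lim would follow here */
        return 0;
}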

@ -283,9 +283,10 @@ static int hpsa_scan_finished(struct Scsi_Host *sh,
static int hpsa_change_queue_depth(struct scsi_device *sdev, int qdepth); static int hpsa_change_queue_depth(struct scsi_device *sdev, int qdepth);
static int hpsa_eh_device_reset_handler(struct scsi_cmnd *scsicmd); static int hpsa_eh_device_reset_handler(struct scsi_cmnd *scsicmd);
static int hpsa_slave_alloc(struct scsi_device *sdev); static int hpsa_sdev_init(struct scsi_device *sdev);
static int hpsa_slave_configure(struct scsi_device *sdev); static int hpsa_sdev_configure(struct scsi_device *sdev,
static void hpsa_slave_destroy(struct scsi_device *sdev); struct queue_limits *lim);
static void hpsa_sdev_destroy(struct scsi_device *sdev);
static void hpsa_update_scsi_devices(struct ctlr_info *h); static void hpsa_update_scsi_devices(struct ctlr_info *h);
static int check_for_unit_attention(struct ctlr_info *h, static int check_for_unit_attention(struct ctlr_info *h,
@ -978,9 +979,9 @@ static const struct scsi_host_template hpsa_driver_template = {
.this_id = -1, .this_id = -1,
.eh_device_reset_handler = hpsa_eh_device_reset_handler, .eh_device_reset_handler = hpsa_eh_device_reset_handler,
.ioctl = hpsa_ioctl, .ioctl = hpsa_ioctl,
.slave_alloc = hpsa_slave_alloc, .sdev_init = hpsa_sdev_init,
.slave_configure = hpsa_slave_configure, .sdev_configure = hpsa_sdev_configure,
.slave_destroy = hpsa_slave_destroy, .sdev_destroy = hpsa_sdev_destroy,
#ifdef CONFIG_COMPAT #ifdef CONFIG_COMPAT
.compat_ioctl = hpsa_compat_ioctl, .compat_ioctl = hpsa_compat_ioctl,
#endif #endif
@ -2107,7 +2108,7 @@ static struct hpsa_scsi_dev_t *lookup_hpsa_scsi_dev(struct ctlr_info *h,
return NULL; return NULL;
} }
static int hpsa_slave_alloc(struct scsi_device *sdev) static int hpsa_sdev_init(struct scsi_device *sdev)
{ {
struct hpsa_scsi_dev_t *sd = NULL; struct hpsa_scsi_dev_t *sd = NULL;
unsigned long flags; unsigned long flags;
@ -2142,7 +2143,8 @@ static int hpsa_slave_alloc(struct scsi_device *sdev)
/* configure scsi device based on internal per-device structure */ /* configure scsi device based on internal per-device structure */
#define CTLR_TIMEOUT (120 * HZ) #define CTLR_TIMEOUT (120 * HZ)
static int hpsa_slave_configure(struct scsi_device *sdev) static int hpsa_sdev_configure(struct scsi_device *sdev,
struct queue_limits *lim)
{ {
struct hpsa_scsi_dev_t *sd; struct hpsa_scsi_dev_t *sd;
int queue_depth; int queue_depth;
@ -2173,7 +2175,7 @@ static int hpsa_slave_configure(struct scsi_device *sdev)
return 0; return 0;
} }
static void hpsa_slave_destroy(struct scsi_device *sdev) static void hpsa_sdev_destroy(struct scsi_device *sdev)
{ {
struct hpsa_scsi_dev_t *hdev = NULL; struct hpsa_scsi_dev_t *hdev = NULL;

@ -1151,7 +1151,7 @@ static struct attribute *hptiop_host_attrs[] = {
ATTRIBUTE_GROUPS(hptiop_host); ATTRIBUTE_GROUPS(hptiop_host);
static int hptiop_device_configure(struct scsi_device *sdev, static int hptiop_sdev_configure(struct scsi_device *sdev,
struct queue_limits *lim) struct queue_limits *lim)
{ {
if (sdev->type == TYPE_TAPE) if (sdev->type == TYPE_TAPE)
@ -1168,7 +1168,7 @@ static const struct scsi_host_template driver_template = {
.emulated = 0, .emulated = 0,
.proc_name = driver_name, .proc_name = driver_name,
.shost_groups = hptiop_host_groups, .shost_groups = hptiop_host_groups,
.device_configure = hptiop_device_configure, .sdev_configure = hptiop_sdev_configure,
.this_id = -1, .this_id = -1,
.change_queue_depth = hptiop_adjust_disk_queue_depth, .change_queue_depth = hptiop_adjust_disk_queue_depth,
.cmd_size = sizeof(struct hpt_cmd_priv), .cmd_size = sizeof(struct hpt_cmd_priv),
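hptiop was already on the queue_limits-based configure callback, so only the template entry is renamed here; the contract is unchanged: per-device limits are written into *lim instead of being applied to the request queue afterwards. A sketch under that assumption (the device-type check mirrors the hunk, the limit value is illustrative):

#include <scsi/scsi.h>
#include <scsi/scsi_device.h>
#include <linux/blkdev.h>

/* Illustrative: cap the transfer size for tape devices through *lim. */
static int tape_sdev_configure(struct scsi_device *sdev,
                               struct queue_limits *lim)
{
        if (sdev->type == TYPE_TAPE)
                lim->max_hw_sectors = 8192;     /* illustrative value */
        return 0;
}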

@ -3393,7 +3393,7 @@ static int ibmvfc_scan_finished(struct Scsi_Host *shost, unsigned long time)
} }
/** /**
* ibmvfc_slave_alloc - Setup the device's task set value * ibmvfc_sdev_init - Setup the device's task set value
* @sdev: struct scsi_device device to configure * @sdev: struct scsi_device device to configure
* *
* Set the device's task set value so that error handling works as * Set the device's task set value so that error handling works as
@ -3402,7 +3402,7 @@ static int ibmvfc_scan_finished(struct Scsi_Host *shost, unsigned long time)
* Returns: * Returns:
* 0 on success / -ENXIO if device does not exist * 0 on success / -ENXIO if device does not exist
**/ **/
static int ibmvfc_slave_alloc(struct scsi_device *sdev) static int ibmvfc_sdev_init(struct scsi_device *sdev)
{ {
struct Scsi_Host *shost = sdev->host; struct Scsi_Host *shost = sdev->host;
struct fc_rport *rport = starget_to_rport(scsi_target(sdev)); struct fc_rport *rport = starget_to_rport(scsi_target(sdev));
@ -3441,8 +3441,9 @@ static int ibmvfc_target_alloc(struct scsi_target *starget)
} }
/** /**
* ibmvfc_slave_configure - Configure the device * ibmvfc_sdev_configure - Configure the device
* @sdev: struct scsi_device device to configure * @sdev: struct scsi_device device to configure
* @lim: Request queue limits
* *
* Enable allow_restart for a device if it is a disk. Adjust the * Enable allow_restart for a device if it is a disk. Adjust the
* queue_depth here also. * queue_depth here also.
@ -3450,7 +3451,8 @@ static int ibmvfc_target_alloc(struct scsi_target *starget)
* Returns: * Returns:
* 0 * 0
**/ **/
static int ibmvfc_slave_configure(struct scsi_device *sdev) static int ibmvfc_sdev_configure(struct scsi_device *sdev,
struct queue_limits *lim)
{ {
struct Scsi_Host *shost = sdev->host; struct Scsi_Host *shost = sdev->host;
unsigned long flags = 0; unsigned long flags = 0;
@ -3696,8 +3698,8 @@ static const struct scsi_host_template driver_template = {
.eh_device_reset_handler = ibmvfc_eh_device_reset_handler, .eh_device_reset_handler = ibmvfc_eh_device_reset_handler,
.eh_target_reset_handler = ibmvfc_eh_target_reset_handler, .eh_target_reset_handler = ibmvfc_eh_target_reset_handler,
.eh_host_reset_handler = ibmvfc_eh_host_reset_handler, .eh_host_reset_handler = ibmvfc_eh_host_reset_handler,
.slave_alloc = ibmvfc_slave_alloc, .sdev_init = ibmvfc_sdev_init,
.slave_configure = ibmvfc_slave_configure, .sdev_configure = ibmvfc_sdev_configure,
.target_alloc = ibmvfc_target_alloc, .target_alloc = ibmvfc_target_alloc,
.scan_finished = ibmvfc_scan_finished, .scan_finished = ibmvfc_scan_finished,
.change_queue_depth = ibmvfc_change_queue_depth, .change_queue_depth = ibmvfc_change_queue_depth,

@ -1860,14 +1860,16 @@ static void ibmvscsi_handle_crq(struct viosrp_crq *crq,
} }
/** /**
* ibmvscsi_slave_configure: Set the "allow_restart" flag for each disk. * ibmvscsi_sdev_configure: Set the "allow_restart" flag for each disk.
* @sdev: struct scsi_device device to configure * @sdev: struct scsi_device device to configure
* @lim: Request queue limits
* *
* Enable allow_restart for a device if it is a disk. Adjust the * Enable allow_restart for a device if it is a disk. Adjust the
* queue_depth here also as is required by the documentation for * queue_depth here also as is required by the documentation for
* struct scsi_host_template. * struct scsi_host_template.
*/ */
static int ibmvscsi_slave_configure(struct scsi_device *sdev) static int ibmvscsi_sdev_configure(struct scsi_device *sdev,
struct queue_limits *lim)
{ {
struct Scsi_Host *shost = sdev->host; struct Scsi_Host *shost = sdev->host;
unsigned long lock_flags = 0; unsigned long lock_flags = 0;
@ -2091,7 +2093,7 @@ static struct scsi_host_template driver_template = {
.eh_abort_handler = ibmvscsi_eh_abort_handler, .eh_abort_handler = ibmvscsi_eh_abort_handler,
.eh_device_reset_handler = ibmvscsi_eh_device_reset_handler, .eh_device_reset_handler = ibmvscsi_eh_device_reset_handler,
.eh_host_reset_handler = ibmvscsi_eh_host_reset_handler, .eh_host_reset_handler = ibmvscsi_eh_host_reset_handler,
.slave_configure = ibmvscsi_slave_configure, .sdev_configure = ibmvscsi_sdev_configure,
.change_queue_depth = ibmvscsi_change_queue_depth, .change_queue_depth = ibmvscsi_change_queue_depth,
.host_reset = ibmvscsi_host_reset, .host_reset = ibmvscsi_host_reset,
.cmd_per_lun = IBMVSCSI_CMDS_PER_LUN_DEFAULT, .cmd_per_lun = IBMVSCSI_CMDS_PER_LUN_DEFAULT,
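The ibmvscsi kernel-doc above spells out what its configure hook is for: enable allow_restart on disks and adjust the queue depth. A compact sketch of that behaviour against the new signature (the depth is illustrative):

#include <scsi/scsi.h>
#include <scsi/scsi_device.h>

/* Illustrative configure hook: allow_restart for disks plus a fixed depth. */
static int disk_sdev_configure(struct scsi_device *sdev,
                               struct queue_limits *lim)
{
        if (sdev->type == TYPE_DISK)
                sdev->allow_restart = 1;
        scsi_change_queue_depth(sdev, 16);      /* illustrative depth */
        return 0;
}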

@ -4745,13 +4745,13 @@ static struct ipr_resource_entry *ipr_find_sdev(struct scsi_device *sdev)
} }
/** /**
* ipr_slave_destroy - Unconfigure a SCSI device * ipr_sdev_destroy - Unconfigure a SCSI device
* @sdev: scsi device struct * @sdev: scsi device struct
* *
* Return value: * Return value:
* nothing * nothing
**/ **/
static void ipr_slave_destroy(struct scsi_device *sdev) static void ipr_sdev_destroy(struct scsi_device *sdev)
{ {
struct ipr_resource_entry *res; struct ipr_resource_entry *res;
struct ipr_ioa_cfg *ioa_cfg; struct ipr_ioa_cfg *ioa_cfg;
@ -4769,7 +4769,7 @@ static void ipr_slave_destroy(struct scsi_device *sdev)
} }
/** /**
* ipr_device_configure - Configure a SCSI device * ipr_sdev_configure - Configure a SCSI device
* @sdev: scsi device struct * @sdev: scsi device struct
* @lim: queue limits * @lim: queue limits
* *
@ -4778,7 +4778,7 @@ static void ipr_slave_destroy(struct scsi_device *sdev)
* Return value: * Return value:
* 0 on success * 0 on success
**/ **/
static int ipr_device_configure(struct scsi_device *sdev, static int ipr_sdev_configure(struct scsi_device *sdev,
struct queue_limits *lim) struct queue_limits *lim)
{ {
struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *) sdev->host->hostdata; struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *) sdev->host->hostdata;
@ -4815,7 +4815,7 @@ static int ipr_device_configure(struct scsi_device *sdev,
} }
/** /**
* ipr_slave_alloc - Prepare for commands to a device. * ipr_sdev_init - Prepare for commands to a device.
* @sdev: scsi device struct * @sdev: scsi device struct
* *
* This function saves a pointer to the resource entry * This function saves a pointer to the resource entry
@ -4826,7 +4826,7 @@ static int ipr_device_configure(struct scsi_device *sdev,
* Return value: * Return value:
* 0 on success / -ENXIO if device does not exist * 0 on success / -ENXIO if device does not exist
**/ **/
static int ipr_slave_alloc(struct scsi_device *sdev) static int ipr_sdev_init(struct scsi_device *sdev)
{ {
struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *) sdev->host->hostdata; struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *) sdev->host->hostdata;
struct ipr_resource_entry *res; struct ipr_resource_entry *res;
@ -6398,9 +6398,9 @@ static const struct scsi_host_template driver_template = {
.eh_abort_handler = ipr_eh_abort, .eh_abort_handler = ipr_eh_abort,
.eh_device_reset_handler = ipr_eh_dev_reset, .eh_device_reset_handler = ipr_eh_dev_reset,
.eh_host_reset_handler = ipr_eh_host_reset, .eh_host_reset_handler = ipr_eh_host_reset,
.slave_alloc = ipr_slave_alloc, .sdev_init = ipr_sdev_init,
.device_configure = ipr_device_configure, .sdev_configure = ipr_sdev_configure,
.slave_destroy = ipr_slave_destroy, .sdev_destroy = ipr_sdev_destroy,
.scan_finished = ipr_scan_finished, .scan_finished = ipr_scan_finished,
.target_destroy = ipr_target_destroy, .target_destroy = ipr_target_destroy,
.change_queue_depth = ipr_change_queue_depth, .change_queue_depth = ipr_change_queue_depth,

@ -364,7 +364,7 @@ static struct scsi_host_template ips_driver_template = {
.proc_name = "ips", .proc_name = "ips",
.show_info = ips_show_info, .show_info = ips_show_info,
.write_info = ips_write_info, .write_info = ips_write_info,
.slave_configure = ips_slave_configure, .sdev_configure = ips_sdev_configure,
.bios_param = ips_biosparam, .bios_param = ips_biosparam,
.this_id = -1, .this_id = -1,
.sg_tablesize = IPS_MAX_SG, .sg_tablesize = IPS_MAX_SG,
@ -1166,7 +1166,7 @@ static int ips_biosparam(struct scsi_device *sdev, struct block_device *bdev,
/****************************************************************************/ /****************************************************************************/
/* */ /* */
/* Routine Name: ips_slave_configure */ /* Routine Name: ips_sdev_configure */
/* */ /* */
/* Routine Description: */ /* Routine Description: */
/* */ /* */
@ -1174,7 +1174,7 @@ static int ips_biosparam(struct scsi_device *sdev, struct block_device *bdev,
/* */ /* */
/****************************************************************************/ /****************************************************************************/
static int static int
ips_slave_configure(struct scsi_device * SDptr) ips_sdev_configure(struct scsi_device *SDptr, struct queue_limits *lim)
{ {
ips_ha_t *ha; ips_ha_t *ha;
int min; int min;

@ -400,7 +400,8 @@
*/ */
static int ips_biosparam(struct scsi_device *sdev, struct block_device *bdev, static int ips_biosparam(struct scsi_device *sdev, struct block_device *bdev,
sector_t capacity, int geom[]); sector_t capacity, int geom[]);
static int ips_slave_configure(struct scsi_device *SDptr); static int ips_sdev_configure(struct scsi_device *SDptr,
struct queue_limits *lim);
/* /*
* Raid Command Formats * Raid Command Formats

@ -1057,7 +1057,7 @@ static umode_t iscsi_sw_tcp_attr_is_visible(int param_type, int param)
return 0; return 0;
} }
static int iscsi_sw_tcp_device_configure(struct scsi_device *sdev, static int iscsi_sw_tcp_sdev_configure(struct scsi_device *sdev,
struct queue_limits *lim) struct queue_limits *lim)
{ {
struct iscsi_sw_tcp_host *tcp_sw_host = iscsi_host_priv(sdev->host); struct iscsi_sw_tcp_host *tcp_sw_host = iscsi_host_priv(sdev->host);
@ -1083,7 +1083,7 @@ static const struct scsi_host_template iscsi_sw_tcp_sht = {
.eh_device_reset_handler= iscsi_eh_device_reset, .eh_device_reset_handler= iscsi_eh_device_reset,
.eh_target_reset_handler = iscsi_eh_recover_target, .eh_target_reset_handler = iscsi_eh_recover_target,
.dma_boundary = PAGE_SIZE - 1, .dma_boundary = PAGE_SIZE - 1,
.device_configure = iscsi_sw_tcp_device_configure, .sdev_configure = iscsi_sw_tcp_sdev_configure,
.proc_name = "iscsi_tcp", .proc_name = "iscsi_tcp",
.this_id = -1, .this_id = -1,
.track_queue_depth = 1, .track_queue_depth = 1,

@ -2222,13 +2222,13 @@ int fc_eh_host_reset(struct scsi_cmnd *sc_cmd)
EXPORT_SYMBOL(fc_eh_host_reset); EXPORT_SYMBOL(fc_eh_host_reset);
/** /**
* fc_slave_alloc() - Configure the queue depth of a Scsi_Host * fc_sdev_init() - Configure the queue depth of a Scsi_Host
* @sdev: The SCSI device that identifies the SCSI host * @sdev: The SCSI device that identifies the SCSI host
* *
* Configures queue depth based on host's cmd_per_len. If not set * Configures queue depth based on host's cmd_per_len. If not set
* then we use the libfc default. * then we use the libfc default.
*/ */
int fc_slave_alloc(struct scsi_device *sdev) int fc_sdev_init(struct scsi_device *sdev)
{ {
struct fc_rport *rport = starget_to_rport(scsi_target(sdev)); struct fc_rport *rport = starget_to_rport(scsi_target(sdev));
@ -2238,7 +2238,7 @@ int fc_slave_alloc(struct scsi_device *sdev)
scsi_change_queue_depth(sdev, FC_FCP_DFLT_QUEUE_DEPTH); scsi_change_queue_depth(sdev, FC_FCP_DFLT_QUEUE_DEPTH);
return 0; return 0;
} }
EXPORT_SYMBOL(fc_slave_alloc); EXPORT_SYMBOL(fc_sdev_init);
/** /**
* fc_fcp_destroy() - Tear down the FCP layer for a given local port * fc_fcp_destroy() - Tear down the FCP layer for a given local port

@ -804,15 +804,14 @@ EXPORT_SYMBOL_GPL(sas_target_alloc);
#define SAS_DEF_QD 256 #define SAS_DEF_QD 256
int sas_device_configure(struct scsi_device *scsi_dev, int sas_sdev_configure(struct scsi_device *scsi_dev, struct queue_limits *lim)
struct queue_limits *lim)
{ {
struct domain_device *dev = sdev_to_domain_dev(scsi_dev); struct domain_device *dev = sdev_to_domain_dev(scsi_dev);
BUG_ON(dev->rphy->identify.device_type != SAS_END_DEVICE); BUG_ON(dev->rphy->identify.device_type != SAS_END_DEVICE);
if (dev_is_sata(dev)) { if (dev_is_sata(dev)) {
ata_sas_device_configure(scsi_dev, lim, dev->sata_dev.ap); ata_sas_sdev_configure(scsi_dev, lim, dev->sata_dev.ap);
return 0; return 0;
} }
@ -830,7 +829,7 @@ int sas_device_configure(struct scsi_device *scsi_dev,
return 0; return 0;
} }
EXPORT_SYMBOL_GPL(sas_device_configure); EXPORT_SYMBOL_GPL(sas_sdev_configure);
int sas_change_queue_depth(struct scsi_device *sdev, int depth) int sas_change_queue_depth(struct scsi_device *sdev, int depth)
{ {
@ -1194,14 +1193,14 @@ void sas_task_abort(struct sas_task *task)
} }
EXPORT_SYMBOL_GPL(sas_task_abort); EXPORT_SYMBOL_GPL(sas_task_abort);
int sas_slave_alloc(struct scsi_device *sdev) int sas_sdev_init(struct scsi_device *sdev)
{ {
if (dev_is_sata(sdev_to_domain_dev(sdev)) && sdev->lun) if (dev_is_sata(sdev_to_domain_dev(sdev)) && sdev->lun)
return -ENXIO; return -ENXIO;
return 0; return 0;
} }
EXPORT_SYMBOL_GPL(sas_slave_alloc); EXPORT_SYMBOL_GPL(sas_sdev_init);
void sas_target_destroy(struct scsi_target *starget) void sas_target_destroy(struct scsi_target *starget)
{ {

@ -6226,7 +6226,7 @@ lpfc_host_reset_handler(struct scsi_cmnd *cmnd)
} }
/** /**
* lpfc_slave_alloc - scsi_host_template slave_alloc entry point * lpfc_sdev_init - scsi_host_template sdev_init entry point
* @sdev: Pointer to scsi_device. * @sdev: Pointer to scsi_device.
* *
* This routine populates the cmds_per_lun count + 2 scsi_bufs into this host's * This routine populates the cmds_per_lun count + 2 scsi_bufs into this host's
@ -6239,7 +6239,7 @@ lpfc_host_reset_handler(struct scsi_cmnd *cmnd)
* 0 - Success * 0 - Success
**/ **/
static int static int
lpfc_slave_alloc(struct scsi_device *sdev) lpfc_sdev_init(struct scsi_device *sdev)
{ {
struct lpfc_vport *vport = (struct lpfc_vport *) sdev->host->hostdata; struct lpfc_vport *vport = (struct lpfc_vport *) sdev->host->hostdata;
struct lpfc_hba *phba = vport->phba; struct lpfc_hba *phba = vport->phba;
@ -6342,8 +6342,9 @@ lpfc_slave_alloc(struct scsi_device *sdev)
} }
/** /**
* lpfc_slave_configure - scsi_host_template slave_configure entry point * lpfc_sdev_configure - scsi_host_template sdev_configure entry point
* @sdev: Pointer to scsi_device. * @sdev: Pointer to scsi_device.
* @lim: Request queue limits.
* *
* This routine configures following items * This routine configures following items
* - Tag command queuing support for @sdev if supported. * - Tag command queuing support for @sdev if supported.
@ -6353,7 +6354,7 @@ lpfc_slave_alloc(struct scsi_device *sdev)
* 0 - Success * 0 - Success
**/ **/
static int static int
lpfc_slave_configure(struct scsi_device *sdev) lpfc_sdev_configure(struct scsi_device *sdev, struct queue_limits *lim)
{ {
struct lpfc_vport *vport = (struct lpfc_vport *) sdev->host->hostdata; struct lpfc_vport *vport = (struct lpfc_vport *) sdev->host->hostdata;
struct lpfc_hba *phba = vport->phba; struct lpfc_hba *phba = vport->phba;
@ -6371,13 +6372,13 @@ lpfc_slave_configure(struct scsi_device *sdev)
} }
/** /**
* lpfc_slave_destroy - slave_destroy entry point of SHT data structure * lpfc_sdev_destroy - sdev_destroy entry point of SHT data structure
* @sdev: Pointer to scsi_device. * @sdev: Pointer to scsi_device.
* *
* This routine sets @sdev hostatdata filed to null. * This routine sets @sdev hostatdata filed to null.
**/ **/
static void static void
lpfc_slave_destroy(struct scsi_device *sdev) lpfc_sdev_destroy(struct scsi_device *sdev)
{ {
struct lpfc_vport *vport = (struct lpfc_vport *) sdev->host->hostdata; struct lpfc_vport *vport = (struct lpfc_vport *) sdev->host->hostdata;
struct lpfc_hba *phba = vport->phba; struct lpfc_hba *phba = vport->phba;
@ -6737,7 +6738,13 @@ lpfc_no_command(struct Scsi_Host *shost, struct scsi_cmnd *cmnd)
} }
static int static int
lpfc_no_slave(struct scsi_device *sdev) lpfc_init_no_sdev(struct scsi_device *sdev)
{
return -ENODEV;
}
static int
lpfc_config_no_sdev(struct scsi_device *sdev, struct queue_limits *lim)
{ {
return -ENODEV; return -ENODEV;
} }
@ -6748,8 +6755,8 @@ struct scsi_host_template lpfc_template_nvme = {
.proc_name = LPFC_DRIVER_NAME, .proc_name = LPFC_DRIVER_NAME,
.info = lpfc_info, .info = lpfc_info,
.queuecommand = lpfc_no_command, .queuecommand = lpfc_no_command,
.slave_alloc = lpfc_no_slave, .sdev_init = lpfc_init_no_sdev,
.slave_configure = lpfc_no_slave, .sdev_configure = lpfc_config_no_sdev,
.scan_finished = lpfc_scan_finished, .scan_finished = lpfc_scan_finished,
.this_id = -1, .this_id = -1,
.sg_tablesize = 1, .sg_tablesize = 1,
@ -6772,9 +6779,9 @@ struct scsi_host_template lpfc_template = {
.eh_device_reset_handler = lpfc_device_reset_handler, .eh_device_reset_handler = lpfc_device_reset_handler,
.eh_target_reset_handler = lpfc_target_reset_handler, .eh_target_reset_handler = lpfc_target_reset_handler,
.eh_host_reset_handler = lpfc_host_reset_handler, .eh_host_reset_handler = lpfc_host_reset_handler,
.slave_alloc = lpfc_slave_alloc, .sdev_init = lpfc_sdev_init,
.slave_configure = lpfc_slave_configure, .sdev_configure = lpfc_sdev_configure,
.slave_destroy = lpfc_slave_destroy, .sdev_destroy = lpfc_sdev_destroy,
.scan_finished = lpfc_scan_finished, .scan_finished = lpfc_scan_finished,
.this_id = -1, .this_id = -1,
.sg_tablesize = LPFC_DEFAULT_SG_SEG_CNT, .sg_tablesize = LPFC_DEFAULT_SG_SEG_CNT,
@ -6799,9 +6806,9 @@ struct scsi_host_template lpfc_vport_template = {
.eh_target_reset_handler = lpfc_target_reset_handler, .eh_target_reset_handler = lpfc_target_reset_handler,
.eh_bus_reset_handler = NULL, .eh_bus_reset_handler = NULL,
.eh_host_reset_handler = NULL, .eh_host_reset_handler = NULL,
.slave_alloc = lpfc_slave_alloc, .sdev_init = lpfc_sdev_init,
.slave_configure = lpfc_slave_configure, .sdev_configure = lpfc_sdev_configure,
.slave_destroy = lpfc_slave_destroy, .sdev_destroy = lpfc_sdev_destroy,
.scan_finished = lpfc_scan_finished, .scan_finished = lpfc_scan_finished,
.this_id = -1, .this_id = -1,
.sg_tablesize = LPFC_DEFAULT_SG_SEG_CNT, .sg_tablesize = LPFC_DEFAULT_SG_SEG_CNT,

@ -2067,7 +2067,7 @@ static void megasas_set_static_target_properties(struct scsi_device *sdev,
} }
static int megasas_device_configure(struct scsi_device *sdev, static int megasas_sdev_configure(struct scsi_device *sdev,
struct queue_limits *lim) struct queue_limits *lim)
{ {
u16 pd_index = 0; u16 pd_index = 0;
@ -2108,7 +2108,7 @@ static int megasas_device_configure(struct scsi_device *sdev,
return 0; return 0;
} }
static int megasas_slave_alloc(struct scsi_device *sdev) static int megasas_sdev_init(struct scsi_device *sdev)
{ {
u16 pd_index = 0, ld_tgt_id; u16 pd_index = 0, ld_tgt_id;
struct megasas_instance *instance ; struct megasas_instance *instance ;
@ -2153,7 +2153,7 @@ static int megasas_slave_alloc(struct scsi_device *sdev)
return 0; return 0;
} }
static void megasas_slave_destroy(struct scsi_device *sdev) static void megasas_sdev_destroy(struct scsi_device *sdev)
{ {
u16 ld_tgt_id; u16 ld_tgt_id;
struct megasas_instance *instance; struct megasas_instance *instance;
@ -3509,9 +3509,9 @@ static const struct scsi_host_template megasas_template = {
.module = THIS_MODULE, .module = THIS_MODULE,
.name = "Avago SAS based MegaRAID driver", .name = "Avago SAS based MegaRAID driver",
.proc_name = "megaraid_sas", .proc_name = "megaraid_sas",
.device_configure = megasas_device_configure, .sdev_configure = megasas_sdev_configure,
.slave_alloc = megasas_slave_alloc, .sdev_init = megasas_sdev_init,
.slave_destroy = megasas_slave_destroy, .sdev_destroy = megasas_sdev_destroy,
.queuecommand = megasas_queue_command, .queuecommand = megasas_queue_command,
.eh_target_reset_handler = megasas_reset_target, .eh_target_reset_handler = megasas_reset_target,
.eh_abort_handler = megasas_task_abort, .eh_abort_handler = megasas_task_abort,

@ -4465,14 +4465,14 @@ static int mpi3mr_scan_finished(struct Scsi_Host *shost,
} }
/** /**
* mpi3mr_slave_destroy - Slave destroy callback handler * mpi3mr_sdev_destroy - Slave destroy callback handler
* @sdev: SCSI device reference * @sdev: SCSI device reference
* *
* Cleanup and free per device(lun) private data. * Cleanup and free per device(lun) private data.
* *
* Return: Nothing. * Return: Nothing.
*/ */
static void mpi3mr_slave_destroy(struct scsi_device *sdev) static void mpi3mr_sdev_destroy(struct scsi_device *sdev)
{ {
struct Scsi_Host *shost; struct Scsi_Host *shost;
struct mpi3mr_ioc *mrioc; struct mpi3mr_ioc *mrioc;
@ -4552,7 +4552,7 @@ static void mpi3mr_target_destroy(struct scsi_target *starget)
} }
/** /**
* mpi3mr_device_configure - Slave configure callback handler * mpi3mr_sdev_configure - Slave configure callback handler
* @sdev: SCSI device reference * @sdev: SCSI device reference
* @lim: queue limits * @lim: queue limits
* *
@ -4561,7 +4561,7 @@ static void mpi3mr_target_destroy(struct scsi_target *starget)
* *
* Return: 0 always. * Return: 0 always.
*/ */
static int mpi3mr_device_configure(struct scsi_device *sdev, static int mpi3mr_sdev_configure(struct scsi_device *sdev,
struct queue_limits *lim) struct queue_limits *lim)
{ {
struct scsi_target *starget; struct scsi_target *starget;
@ -4599,14 +4599,14 @@ static int mpi3mr_device_configure(struct scsi_device *sdev,
} }
/** /**
* mpi3mr_slave_alloc -Slave alloc callback handler * mpi3mr_sdev_init -Slave alloc callback handler
* @sdev: SCSI device reference * @sdev: SCSI device reference
* *
* Allocate per device(lun) private data and initialize it. * Allocate per device(lun) private data and initialize it.
* *
* Return: 0 on success -ENOMEM on memory allocation failure. * Return: 0 on success -ENOMEM on memory allocation failure.
*/ */
static int mpi3mr_slave_alloc(struct scsi_device *sdev) static int mpi3mr_sdev_init(struct scsi_device *sdev)
{ {
struct Scsi_Host *shost; struct Scsi_Host *shost;
struct mpi3mr_ioc *mrioc; struct mpi3mr_ioc *mrioc;
@ -5062,10 +5062,10 @@ static const struct scsi_host_template mpi3mr_driver_template = {
.proc_name = MPI3MR_DRIVER_NAME, .proc_name = MPI3MR_DRIVER_NAME,
.queuecommand = mpi3mr_qcmd, .queuecommand = mpi3mr_qcmd,
.target_alloc = mpi3mr_target_alloc, .target_alloc = mpi3mr_target_alloc,
.slave_alloc = mpi3mr_slave_alloc, .sdev_init = mpi3mr_sdev_init,
.device_configure = mpi3mr_device_configure, .sdev_configure = mpi3mr_sdev_configure,
.target_destroy = mpi3mr_target_destroy, .target_destroy = mpi3mr_target_destroy,
.slave_destroy = mpi3mr_slave_destroy, .sdev_destroy = mpi3mr_sdev_destroy,
.scan_finished = mpi3mr_scan_finished, .scan_finished = mpi3mr_scan_finished,
.scan_start = mpi3mr_scan_start, .scan_start = mpi3mr_scan_start,
.change_queue_depth = mpi3mr_change_queue_depth, .change_queue_depth = mpi3mr_change_queue_depth,

@ -2025,14 +2025,14 @@ scsih_target_destroy(struct scsi_target *starget)
} }
/** /**
* scsih_slave_alloc - device add routine * scsih_sdev_init - device add routine
* @sdev: scsi device struct * @sdev: scsi device struct
* *
* Return: 0 if ok. Any other return is assumed to be an error and * Return: 0 if ok. Any other return is assumed to be an error and
* the device is ignored. * the device is ignored.
*/ */
static int static int
scsih_slave_alloc(struct scsi_device *sdev) scsih_sdev_init(struct scsi_device *sdev)
{ {
struct Scsi_Host *shost; struct Scsi_Host *shost;
struct MPT3SAS_ADAPTER *ioc; struct MPT3SAS_ADAPTER *ioc;
@ -2107,11 +2107,11 @@ scsih_slave_alloc(struct scsi_device *sdev)
} }
/** /**
* scsih_slave_destroy - device destroy routine * scsih_sdev_destroy - device destroy routine
* @sdev: scsi device struct * @sdev: scsi device struct
*/ */
static void static void
scsih_slave_destroy(struct scsi_device *sdev) scsih_sdev_destroy(struct scsi_device *sdev)
{ {
struct MPT3SAS_TARGET *sas_target_priv_data; struct MPT3SAS_TARGET *sas_target_priv_data;
struct scsi_target *starget; struct scsi_target *starget;
@ -2496,7 +2496,7 @@ _scsih_enable_tlr(struct MPT3SAS_ADAPTER *ioc, struct scsi_device *sdev)
} }
/** /**
* scsih_device_configure - device configure routine. * scsih_sdev_configure - device configure routine.
* @sdev: scsi device struct * @sdev: scsi device struct
* @lim: queue limits * @lim: queue limits
* *
@ -2504,7 +2504,7 @@ _scsih_enable_tlr(struct MPT3SAS_ADAPTER *ioc, struct scsi_device *sdev)
* the device is ignored. * the device is ignored.
*/ */
static int static int
scsih_device_configure(struct scsi_device *sdev, struct queue_limits *lim) scsih_sdev_configure(struct scsi_device *sdev, struct queue_limits *lim)
{ {
struct Scsi_Host *shost = sdev->host; struct Scsi_Host *shost = sdev->host;
struct MPT3SAS_ADAPTER *ioc = shost_priv(shost); struct MPT3SAS_ADAPTER *ioc = shost_priv(shost);
@ -11904,10 +11904,10 @@ static const struct scsi_host_template mpt2sas_driver_template = {
.proc_name = MPT2SAS_DRIVER_NAME, .proc_name = MPT2SAS_DRIVER_NAME,
.queuecommand = scsih_qcmd, .queuecommand = scsih_qcmd,
.target_alloc = scsih_target_alloc, .target_alloc = scsih_target_alloc,
.slave_alloc = scsih_slave_alloc, .sdev_init = scsih_sdev_init,
.device_configure = scsih_device_configure, .sdev_configure = scsih_sdev_configure,
.target_destroy = scsih_target_destroy, .target_destroy = scsih_target_destroy,
.slave_destroy = scsih_slave_destroy, .sdev_destroy = scsih_sdev_destroy,
.scan_finished = scsih_scan_finished, .scan_finished = scsih_scan_finished,
.scan_start = scsih_scan_start, .scan_start = scsih_scan_start,
.change_queue_depth = scsih_change_queue_depth, .change_queue_depth = scsih_change_queue_depth,
@ -11942,10 +11942,10 @@ static const struct scsi_host_template mpt3sas_driver_template = {
.proc_name = MPT3SAS_DRIVER_NAME, .proc_name = MPT3SAS_DRIVER_NAME,
.queuecommand = scsih_qcmd, .queuecommand = scsih_qcmd,
.target_alloc = scsih_target_alloc, .target_alloc = scsih_target_alloc,
.slave_alloc = scsih_slave_alloc, .sdev_init = scsih_sdev_init,
.device_configure = scsih_device_configure, .sdev_configure = scsih_sdev_configure,
.target_destroy = scsih_target_destroy, .target_destroy = scsih_target_destroy,
.slave_destroy = scsih_slave_destroy, .sdev_destroy = scsih_sdev_destroy,
.scan_finished = scsih_scan_finished, .scan_finished = scsih_scan_finished,
.scan_start = scsih_scan_start, .scan_start = scsih_scan_start,
.change_queue_depth = scsih_change_queue_depth, .change_queue_depth = scsih_change_queue_depth,


@ -2000,7 +2000,8 @@ static struct mvumi_instance_template mvumi_instance_9580 = {
.reset_host = mvumi_reset_host_9580, .reset_host = mvumi_reset_host_9580,
}; };
static int mvumi_slave_configure(struct scsi_device *sdev) static int mvumi_sdev_configure(struct scsi_device *sdev,
struct queue_limits *lim)
{ {
struct mvumi_hba *mhba; struct mvumi_hba *mhba;
unsigned char bitcount = sizeof(unsigned char) * 8; unsigned char bitcount = sizeof(unsigned char) * 8;
@ -2172,7 +2173,7 @@ static const struct scsi_host_template mvumi_template = {
.module = THIS_MODULE, .module = THIS_MODULE,
.name = "Marvell Storage Controller", .name = "Marvell Storage Controller",
.slave_configure = mvumi_slave_configure, .sdev_configure = mvumi_sdev_configure,
.queuecommand = mvumi_queue_command, .queuecommand = mvumi_queue_command,
.eh_timed_out = mvumi_timed_out, .eh_timed_out = mvumi_timed_out,
.eh_host_reset_handler = mvumi_host_reset, .eh_host_reset_handler = mvumi_host_reset,


@ -1619,7 +1619,7 @@ static int myrb_queuecommand(struct Scsi_Host *shost,
return myrb_pthru_queuecommand(shost, scmd); return myrb_pthru_queuecommand(shost, scmd);
} }
static int myrb_ldev_slave_alloc(struct scsi_device *sdev) static int myrb_ldev_sdev_init(struct scsi_device *sdev)
{ {
struct myrb_hba *cb = shost_priv(sdev->host); struct myrb_hba *cb = shost_priv(sdev->host);
struct myrb_ldev_info *ldev_info; struct myrb_ldev_info *ldev_info;
@ -1665,7 +1665,7 @@ static int myrb_ldev_slave_alloc(struct scsi_device *sdev)
return 0; return 0;
} }
static int myrb_pdev_slave_alloc(struct scsi_device *sdev) static int myrb_pdev_sdev_init(struct scsi_device *sdev)
{ {
struct myrb_hba *cb = shost_priv(sdev->host); struct myrb_hba *cb = shost_priv(sdev->host);
struct myrb_pdev_state *pdev_info; struct myrb_pdev_state *pdev_info;
@ -1701,7 +1701,7 @@ static int myrb_pdev_slave_alloc(struct scsi_device *sdev)
return 0; return 0;
} }
static int myrb_slave_alloc(struct scsi_device *sdev) static int myrb_sdev_init(struct scsi_device *sdev)
{ {
if (sdev->channel > myrb_logical_channel(sdev->host)) if (sdev->channel > myrb_logical_channel(sdev->host))
return -ENXIO; return -ENXIO;
@ -1710,12 +1710,13 @@ static int myrb_slave_alloc(struct scsi_device *sdev)
return -ENXIO; return -ENXIO;
if (sdev->channel == myrb_logical_channel(sdev->host)) if (sdev->channel == myrb_logical_channel(sdev->host))
return myrb_ldev_slave_alloc(sdev); return myrb_ldev_sdev_init(sdev);
return myrb_pdev_slave_alloc(sdev); return myrb_pdev_sdev_init(sdev);
} }
static int myrb_slave_configure(struct scsi_device *sdev) static int myrb_sdev_configure(struct scsi_device *sdev,
struct queue_limits *lim)
{ {
struct myrb_ldev_info *ldev_info; struct myrb_ldev_info *ldev_info;
@ -1741,7 +1742,7 @@ static int myrb_slave_configure(struct scsi_device *sdev)
return 0; return 0;
} }
static void myrb_slave_destroy(struct scsi_device *sdev) static void myrb_sdev_destroy(struct scsi_device *sdev)
{ {
kfree(sdev->hostdata); kfree(sdev->hostdata);
} }
@ -2208,9 +2209,9 @@ static const struct scsi_host_template myrb_template = {
.proc_name = "myrb", .proc_name = "myrb",
.queuecommand = myrb_queuecommand, .queuecommand = myrb_queuecommand,
.eh_host_reset_handler = myrb_host_reset, .eh_host_reset_handler = myrb_host_reset,
.slave_alloc = myrb_slave_alloc, .sdev_init = myrb_sdev_init,
.slave_configure = myrb_slave_configure, .sdev_configure = myrb_sdev_configure,
.slave_destroy = myrb_slave_destroy, .sdev_destroy = myrb_sdev_destroy,
.bios_param = myrb_biosparam, .bios_param = myrb_biosparam,
.cmd_size = sizeof(struct myrb_cmdblk), .cmd_size = sizeof(struct myrb_cmdblk),
.shost_groups = myrb_shost_groups, .shost_groups = myrb_shost_groups,


@ -1786,7 +1786,7 @@ static unsigned short myrs_translate_ldev(struct myrs_hba *cs,
return ldev_num; return ldev_num;
} }
static int myrs_slave_alloc(struct scsi_device *sdev) static int myrs_sdev_init(struct scsi_device *sdev)
{ {
struct myrs_hba *cs = shost_priv(sdev->host); struct myrs_hba *cs = shost_priv(sdev->host);
unsigned char status; unsigned char status;
@ -1882,7 +1882,8 @@ static int myrs_slave_alloc(struct scsi_device *sdev)
return 0; return 0;
} }
static int myrs_slave_configure(struct scsi_device *sdev) static int myrs_sdev_configure(struct scsi_device *sdev,
struct queue_limits *lim)
{ {
struct myrs_hba *cs = shost_priv(sdev->host); struct myrs_hba *cs = shost_priv(sdev->host);
struct myrs_ldev_info *ldev_info; struct myrs_ldev_info *ldev_info;
@ -1910,7 +1911,7 @@ static int myrs_slave_configure(struct scsi_device *sdev)
return 0; return 0;
} }
static void myrs_slave_destroy(struct scsi_device *sdev) static void myrs_sdev_destroy(struct scsi_device *sdev)
{ {
kfree(sdev->hostdata); kfree(sdev->hostdata);
} }
@ -1921,9 +1922,9 @@ static const struct scsi_host_template myrs_template = {
.proc_name = "myrs", .proc_name = "myrs",
.queuecommand = myrs_queuecommand, .queuecommand = myrs_queuecommand,
.eh_host_reset_handler = myrs_host_reset, .eh_host_reset_handler = myrs_host_reset,
.slave_alloc = myrs_slave_alloc, .sdev_init = myrs_sdev_init,
.slave_configure = myrs_slave_configure, .sdev_configure = myrs_sdev_configure,
.slave_destroy = myrs_slave_destroy, .sdev_destroy = myrs_sdev_destroy,
.cmd_size = sizeof(struct myrs_cmdblk), .cmd_size = sizeof(struct myrs_cmdblk),
.shost_groups = myrs_shost_groups, .shost_groups = myrs_shost_groups,
.sdev_groups = myrs_sdev_groups, .sdev_groups = myrs_sdev_groups,


@ -7786,7 +7786,7 @@ static void __init ncr_getclock (struct ncb *np, int mult)
/*===================== LINUX ENTRY POINTS SECTION ==========================*/ /*===================== LINUX ENTRY POINTS SECTION ==========================*/
static int ncr53c8xx_slave_alloc(struct scsi_device *device) static int ncr53c8xx_sdev_init(struct scsi_device *device)
{ {
struct Scsi_Host *host = device->host; struct Scsi_Host *host = device->host;
struct ncb *np = ((struct host_data *) host->hostdata)->ncb; struct ncb *np = ((struct host_data *) host->hostdata)->ncb;
@ -7796,7 +7796,8 @@ static int ncr53c8xx_slave_alloc(struct scsi_device *device)
return 0; return 0;
} }
static int ncr53c8xx_slave_configure(struct scsi_device *device) static int ncr53c8xx_sdev_configure(struct scsi_device *device,
struct queue_limits *lim)
{ {
struct Scsi_Host *host = device->host; struct Scsi_Host *host = device->host;
struct ncb *np = ((struct host_data *) host->hostdata)->ncb; struct ncb *np = ((struct host_data *) host->hostdata)->ncb;
@ -8093,8 +8094,8 @@ struct Scsi_Host * __init ncr_attach(struct scsi_host_template *tpnt,
tpnt->shost_groups = ncr53c8xx_host_groups; tpnt->shost_groups = ncr53c8xx_host_groups;
tpnt->queuecommand = ncr53c8xx_queue_command; tpnt->queuecommand = ncr53c8xx_queue_command;
tpnt->slave_configure = ncr53c8xx_slave_configure; tpnt->sdev_configure = ncr53c8xx_sdev_configure;
tpnt->slave_alloc = ncr53c8xx_slave_alloc; tpnt->sdev_init = ncr53c8xx_sdev_init;
tpnt->eh_bus_reset_handler = ncr53c8xx_bus_reset; tpnt->eh_bus_reset_handler = ncr53c8xx_bus_reset;
tpnt->can_queue = SCSI_NCR_CAN_QUEUE; tpnt->can_queue = SCSI_NCR_CAN_QUEUE;
tpnt->this_id = 7; tpnt->this_id = 7;


@ -90,7 +90,7 @@ enum port_type {
#define PM8001_MAX_PORTS 16 /* max. possible ports */ #define PM8001_MAX_PORTS 16 /* max. possible ports */
#define PM8001_MAX_DEVICES 2048 /* max supported device */ #define PM8001_MAX_DEVICES 2048 /* max supported device */
#define PM8001_MAX_MSIX_VEC 64 /* max msi-x int for spcv/ve */ #define PM8001_MAX_MSIX_VEC 64 /* max msi-x int for spcv/ve */
#define PM8001_RESERVE_SLOT 8 #define PM8001_RESERVE_SLOT 128
#define PM8001_SECTOR_SIZE 512 #define PM8001_SECTOR_SIZE 512
#define PM8001_PAGE_SIZE_4K 4096 #define PM8001_PAGE_SIZE_4K 4096


@ -3472,12 +3472,13 @@ int pm8001_mpi_task_abort_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
status, tag, scp); status, tag, scp);
switch (status) { switch (status) {
case IO_SUCCESS: case IO_SUCCESS:
pm8001_dbg(pm8001_ha, EH, "IO_SUCCESS\n"); pm8001_dbg(pm8001_ha, FAIL, "ABORT IO_SUCCESS for tag %#x\n",
tag);
ts->resp = SAS_TASK_COMPLETE; ts->resp = SAS_TASK_COMPLETE;
ts->stat = SAS_SAM_STAT_GOOD; ts->stat = SAS_SAM_STAT_GOOD;
break; break;
case IO_NOT_VALID: case IO_NOT_VALID:
pm8001_dbg(pm8001_ha, EH, "IO_NOT_VALID\n"); pm8001_dbg(pm8001_ha, FAIL, "IO_NOT_VALID for tag %#x\n", tag);
ts->resp = TMF_RESP_FUNC_FAILED; ts->resp = TMF_RESP_FUNC_FAILED;
break; break;
} }


@ -101,6 +101,63 @@ int pm8001_tag_alloc(struct pm8001_hba_info *pm8001_ha, u32 *tag_out)
return 0; return 0;
} }
static void pm80xx_get_tag_opcodes(struct sas_task *task, int *ata_op,
int *ata_tag, bool *task_aborted)
{
unsigned long flags;
struct ata_queued_cmd *qc = NULL;
*ata_op = 0;
*ata_tag = -1;
*task_aborted = false;
if (!task)
return;
spin_lock_irqsave(&task->task_state_lock, flags);
if (unlikely((task->task_state_flags & SAS_TASK_STATE_ABORTED)))
*task_aborted = true;
spin_unlock_irqrestore(&task->task_state_lock, flags);
if (task->task_proto == SAS_PROTOCOL_STP) {
// sas_ata_qc_issue path uses SAS_PROTOCOL_STP.
// This only works for scsi + libsas + libata users.
qc = task->uldd_task;
if (qc) {
*ata_op = qc->tf.command;
*ata_tag = qc->tag;
}
}
}
void pm80xx_show_pending_commands(struct pm8001_hba_info *pm8001_ha,
struct pm8001_device *target_pm8001_dev)
{
int i = 0, ata_op = 0, ata_tag = -1;
struct pm8001_ccb_info *ccb = NULL;
struct sas_task *task = NULL;
struct pm8001_device *pm8001_dev = NULL;
bool task_aborted;
for (i = 0; i < pm8001_ha->ccb_count; i++) {
ccb = &pm8001_ha->ccb_info[i];
if (ccb->ccb_tag == PM8001_INVALID_TAG)
continue;
pm8001_dev = ccb->device;
if (target_pm8001_dev && pm8001_dev &&
target_pm8001_dev != pm8001_dev)
continue;
task = ccb->task;
pm80xx_get_tag_opcodes(task, &ata_op, &ata_tag, &task_aborted);
pm8001_dbg(pm8001_ha, FAIL,
"tag %#x, device %#x task %p task aborted %d ata opcode %#x ata tag %d\n",
ccb->ccb_tag,
(pm8001_dev ? pm8001_dev->device_id : 0),
task, task_aborted,
ata_op, ata_tag);
}
}
/** /**
* pm8001_mem_alloc - allocate memory for pm8001. * pm8001_mem_alloc - allocate memory for pm8001.
* @pdev: pci device. * @pdev: pci device.
@ -374,23 +431,6 @@ static int pm8001_task_prep_ssp(struct pm8001_hba_info *pm8001_ha,
return PM8001_CHIP_DISP->ssp_io_req(pm8001_ha, ccb); return PM8001_CHIP_DISP->ssp_io_req(pm8001_ha, ccb);
} }
/* Find the local port id that's attached to this device */
static int sas_find_local_port_id(struct domain_device *dev)
{
struct domain_device *pdev = dev->parent;
/* Directly attached device */
if (!pdev)
return dev->port->id;
while (pdev) {
struct domain_device *pdev_p = pdev->parent;
if (!pdev_p)
return pdev->port->id;
pdev = pdev->parent;
}
return 0;
}
#define DEV_IS_GONE(pm8001_dev) \ #define DEV_IS_GONE(pm8001_dev) \
((!pm8001_dev || (pm8001_dev->dev_type == SAS_PHY_UNUSED))) ((!pm8001_dev || (pm8001_dev->dev_type == SAS_PHY_UNUSED)))
@ -463,10 +503,10 @@ int pm8001_queue_command(struct sas_task *task, gfp_t gfp_flags)
spin_lock_irqsave(&pm8001_ha->lock, flags); spin_lock_irqsave(&pm8001_ha->lock, flags);
pm8001_dev = dev->lldd_dev; pm8001_dev = dev->lldd_dev;
port = &pm8001_ha->port[sas_find_local_port_id(dev)]; port = pm8001_ha->phy[pm8001_dev->attached_phy].port;
if (!internal_abort && if (!internal_abort &&
(DEV_IS_GONE(pm8001_dev) || !port->port_attached)) { (DEV_IS_GONE(pm8001_dev) || !port || !port->port_attached)) {
ts->resp = SAS_TASK_UNDELIVERED; ts->resp = SAS_TASK_UNDELIVERED;
ts->stat = SAS_PHY_DOWN; ts->stat = SAS_PHY_DOWN;
if (sas_protocol_ata(task_proto)) { if (sas_protocol_ata(task_proto)) {


@ -786,6 +786,8 @@ static inline void pm8001_ccb_task_free_done(struct pm8001_hba_info *pm8001_ha,
} }
void pm8001_setds_completion(struct domain_device *dev); void pm8001_setds_completion(struct domain_device *dev);
void pm8001_tmf_aborted(struct sas_task *task); void pm8001_tmf_aborted(struct sas_task *task);
void pm80xx_show_pending_commands(struct pm8001_hba_info *pm8001_ha,
struct pm8001_device *dev);
#endif #endif


@ -2246,7 +2246,7 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha,
u32 param; u32 param;
u32 status; u32 status;
u32 tag; u32 tag;
int i, j; int i, j, ata_tag = -1;
u8 sata_addr_low[4]; u8 sata_addr_low[4];
u32 temp_sata_addr_low, temp_sata_addr_hi; u32 temp_sata_addr_low, temp_sata_addr_hi;
u8 sata_addr_hi[4]; u8 sata_addr_hi[4];
@ -2256,6 +2256,7 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha,
u32 *sata_resp; u32 *sata_resp;
struct pm8001_device *pm8001_dev; struct pm8001_device *pm8001_dev;
unsigned long flags; unsigned long flags;
struct ata_queued_cmd *qc;
psataPayload = (struct sata_completion_resp *)(piomb + 4); psataPayload = (struct sata_completion_resp *)(piomb + 4);
status = le32_to_cpu(psataPayload->status); status = le32_to_cpu(psataPayload->status);
@ -2267,8 +2268,11 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha,
pm8001_dev = ccb->device; pm8001_dev = ccb->device;
if (t) { if (t) {
if (t->dev && (t->dev->lldd_dev)) if (t->dev && (t->dev->lldd_dev)) {
pm8001_dev = t->dev->lldd_dev; pm8001_dev = t->dev->lldd_dev;
qc = t->uldd_task;
ata_tag = qc ? qc->tag : -1;
}
} else { } else {
pm8001_dbg(pm8001_ha, FAIL, "task null, freeing CCB tag %d\n", pm8001_dbg(pm8001_ha, FAIL, "task null, freeing CCB tag %d\n",
ccb->ccb_tag); ccb->ccb_tag);
@ -2276,16 +2280,14 @@ mpi_sata_completion(struct pm8001_hba_info *pm8001_ha,
return; return;
} }
if (pm8001_dev && unlikely(!t->lldd_task || !t->dev)) if (pm8001_dev && unlikely(!t->lldd_task || !t->dev))
return; return;
ts = &t->task_status; ts = &t->task_status;
if (status != IO_SUCCESS) { if (status != IO_SUCCESS) {
pm8001_dbg(pm8001_ha, FAIL, pm8001_dbg(pm8001_ha, FAIL,
"IO failed device_id %u status 0x%x tag %d\n", "IO failed status %#x pm80xx tag %#x ata tag %d\n",
pm8001_dev->device_id, status, tag); status, tag, ata_tag);
} }
/* Print sas address of IO failed device */ /* Print sas address of IO failed device */
@ -2667,13 +2669,19 @@ static void mpi_sata_event(struct pm8001_hba_info *pm8001_ha,
/* Check if this is NCQ error */ /* Check if this is NCQ error */
if (event == IO_XFER_ERROR_ABORTED_NCQ_MODE) { if (event == IO_XFER_ERROR_ABORTED_NCQ_MODE) {
/* tag value is invalid with this event */
pm8001_dbg(pm8001_ha, FAIL, "NCQ ERROR for device %#x tag %#x\n",
dev_id, tag);
/* find device using device id */ /* find device using device id */
pm8001_dev = pm8001_find_dev(pm8001_ha, dev_id); pm8001_dev = pm8001_find_dev(pm8001_ha, dev_id);
/* send read log extension by aborting the link - libata does what we want */ /* send read log extension by aborting the link - libata does what we want */
if (pm8001_dev) if (pm8001_dev) {
pm80xx_show_pending_commands(pm8001_ha, pm8001_dev);
pm8001_handle_event(pm8001_ha, pm8001_handle_event(pm8001_ha,
pm8001_dev, pm8001_dev,
IO_XFER_ERROR_ABORTED_NCQ_MODE); IO_XFER_ERROR_ABORTED_NCQ_MODE);
}
return; return;
} }
@ -3336,10 +3344,11 @@ static int mpi_phy_start_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
u32 phy_id = u32 phy_id =
le32_to_cpu(pPayload->phyid) & 0xFF; le32_to_cpu(pPayload->phyid) & 0xFF;
struct pm8001_phy *phy = &pm8001_ha->phy[phy_id]; struct pm8001_phy *phy = &pm8001_ha->phy[phy_id];
u32 tag = le32_to_cpu(pPayload->tag);
pm8001_dbg(pm8001_ha, INIT, pm8001_dbg(pm8001_ha, INIT,
"phy start resp status:0x%x, phyid:0x%x\n", "phy start resp status:0x%x, phyid:0x%x, tag 0x%x\n",
status, phy_id); status, phy_id, tag);
if (status == 0) if (status == 0)
phy->phy_state = PHY_LINK_DOWN; phy->phy_state = PHY_LINK_DOWN;
@ -3348,6 +3357,8 @@ static int mpi_phy_start_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
complete(phy->enable_completion); complete(phy->enable_completion);
phy->enable_completion = NULL; phy->enable_completion = NULL;
} }
pm8001_tag_free(pm8001_ha, tag);
return 0; return 0;
} }
@ -3628,8 +3639,10 @@ static int mpi_phy_stop_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
u32 phyid = u32 phyid =
le32_to_cpu(pPayload->phyid) & 0xFF; le32_to_cpu(pPayload->phyid) & 0xFF;
struct pm8001_phy *phy = &pm8001_ha->phy[phyid]; struct pm8001_phy *phy = &pm8001_ha->phy[phyid];
pm8001_dbg(pm8001_ha, MSG, "phy:0x%x status:0x%x\n", u32 tag = le32_to_cpu(pPayload->tag);
phyid, status);
pm8001_dbg(pm8001_ha, MSG, "phy:0x%x status:0x%x tag 0x%x\n", phyid,
status, tag);
if (status == PHY_STOP_SUCCESS || if (status == PHY_STOP_SUCCESS ||
status == PHY_STOP_ERR_DEVICE_ATTACHED) { status == PHY_STOP_ERR_DEVICE_ATTACHED) {
phy->phy_state = PHY_LINK_DISABLE; phy->phy_state = PHY_LINK_DISABLE;
@ -3637,6 +3650,7 @@ static int mpi_phy_stop_resp(struct pm8001_hba_info *pm8001_ha, void *piomb)
phy->sas_phy.linkrate = SAS_PHY_DISABLED; phy->sas_phy.linkrate = SAS_PHY_DISABLED;
} }
pm8001_tag_free(pm8001_ha, tag);
return 0; return 0;
} }
@ -3655,10 +3669,9 @@ static int mpi_set_controller_config_resp(struct pm8001_hba_info *pm8001_ha,
u32 tag = le32_to_cpu(pPayload->tag); u32 tag = le32_to_cpu(pPayload->tag);
pm8001_dbg(pm8001_ha, MSG, pm8001_dbg(pm8001_ha, MSG,
"SET CONTROLLER RESP: status 0x%x qlfr_pgcd 0x%x\n", "SET CONTROLLER RESP: status 0x%x qlfr_pgcd 0x%x tag 0x%x\n",
status, err_qlfr_pgcd); status, err_qlfr_pgcd, tag);
pm8001_tag_free(pm8001_ha, tag); pm8001_tag_free(pm8001_ha, tag);
return 0; return 0;
} }
@ -4632,9 +4645,16 @@ static int
pm80xx_chip_phy_start_req(struct pm8001_hba_info *pm8001_ha, u8 phy_id) pm80xx_chip_phy_start_req(struct pm8001_hba_info *pm8001_ha, u8 phy_id)
{ {
struct phy_start_req payload; struct phy_start_req payload;
u32 tag = 0x01; int ret;
u32 tag;
u32 opcode = OPC_INB_PHYSTART; u32 opcode = OPC_INB_PHYSTART;
ret = pm8001_tag_alloc(pm8001_ha, &tag);
if (ret) {
pm8001_dbg(pm8001_ha, FAIL, "Tag allocation failed\n");
return ret;
}
memset(&payload, 0, sizeof(payload)); memset(&payload, 0, sizeof(payload));
payload.tag = cpu_to_le32(tag); payload.tag = cpu_to_le32(tag);
@ -4670,9 +4690,16 @@ static int pm80xx_chip_phy_stop_req(struct pm8001_hba_info *pm8001_ha,
u8 phy_id) u8 phy_id)
{ {
struct phy_stop_req payload; struct phy_stop_req payload;
u32 tag = 0x01; int ret;
u32 tag;
u32 opcode = OPC_INB_PHYSTOP; u32 opcode = OPC_INB_PHYSTOP;
ret = pm8001_tag_alloc(pm8001_ha, &tag);
if (ret) {
pm8001_dbg(pm8001_ha, FAIL, "Tag allocation failed\n");
return ret;
}
memset(&payload, 0, sizeof(payload)); memset(&payload, 0, sizeof(payload));
payload.tag = cpu_to_le32(tag); payload.tag = cpu_to_le32(tag);
payload.phy_id = cpu_to_le32(phy_id); payload.phy_id = cpu_to_le32(phy_id);


@ -125,7 +125,7 @@ MODULE_DEVICE_TABLE(pci, pmcraid_pci_table);
/** /**
* pmcraid_slave_alloc - Prepare for commands to a device * pmcraid_sdev_init - Prepare for commands to a device
* @scsi_dev: scsi device struct * @scsi_dev: scsi device struct
* *
* This function is called by mid-layer prior to sending any command to the new * This function is called by mid-layer prior to sending any command to the new
@ -136,7 +136,7 @@ MODULE_DEVICE_TABLE(pci, pmcraid_pci_table);
* Return value: * Return value:
* 0 on success / -ENXIO if device does not exist * 0 on success / -ENXIO if device does not exist
*/ */
static int pmcraid_slave_alloc(struct scsi_device *scsi_dev) static int pmcraid_sdev_init(struct scsi_device *scsi_dev)
{ {
struct pmcraid_resource_entry *temp, *res = NULL; struct pmcraid_resource_entry *temp, *res = NULL;
struct pmcraid_instance *pinstance; struct pmcraid_instance *pinstance;
@ -197,7 +197,7 @@ static int pmcraid_slave_alloc(struct scsi_device *scsi_dev)
} }
/** /**
* pmcraid_device_configure - Configures a SCSI device * pmcraid_sdev_configure - Configures a SCSI device
* @scsi_dev: scsi device struct * @scsi_dev: scsi device struct
* @lim: queue limits * @lim: queue limits
* *
@ -210,7 +210,7 @@ static int pmcraid_slave_alloc(struct scsi_device *scsi_dev)
* Return value: * Return value:
* 0 on success * 0 on success
*/ */
static int pmcraid_device_configure(struct scsi_device *scsi_dev, static int pmcraid_sdev_configure(struct scsi_device *scsi_dev,
struct queue_limits *lim) struct queue_limits *lim)
{ {
struct pmcraid_resource_entry *res = scsi_dev->hostdata; struct pmcraid_resource_entry *res = scsi_dev->hostdata;
@ -248,17 +248,17 @@ static int pmcraid_device_configure(struct scsi_device *scsi_dev,
} }
/** /**
* pmcraid_slave_destroy - Unconfigure a SCSI device before removing it * pmcraid_sdev_destroy - Unconfigure a SCSI device before removing it
* *
* @scsi_dev: scsi device struct * @scsi_dev: scsi device struct
* *
* This is called by mid-layer before removing a device. Pointer assignments * This is called by mid-layer before removing a device. Pointer assignments
* done in pmcraid_slave_alloc will be reset to NULL here. * done in pmcraid_sdev_init will be reset to NULL here.
* *
* Return value * Return value
* none * none
*/ */
static void pmcraid_slave_destroy(struct scsi_device *scsi_dev) static void pmcraid_sdev_destroy(struct scsi_device *scsi_dev)
{ {
struct pmcraid_resource_entry *res; struct pmcraid_resource_entry *res;
@ -3668,9 +3668,9 @@ static const struct scsi_host_template pmcraid_host_template = {
.eh_device_reset_handler = pmcraid_eh_device_reset_handler, .eh_device_reset_handler = pmcraid_eh_device_reset_handler,
.eh_host_reset_handler = pmcraid_eh_host_reset_handler, .eh_host_reset_handler = pmcraid_eh_host_reset_handler,
.slave_alloc = pmcraid_slave_alloc, .sdev_init = pmcraid_sdev_init,
.device_configure = pmcraid_device_configure, .sdev_configure = pmcraid_sdev_configure,
.slave_destroy = pmcraid_slave_destroy, .sdev_destroy = pmcraid_sdev_destroy,
.change_queue_depth = pmcraid_change_queue_depth, .change_queue_depth = pmcraid_change_queue_depth,
.can_queue = PMCRAID_MAX_IO_CMD, .can_queue = PMCRAID_MAX_IO_CMD,
.this_id = -1, .this_id = -1,


@ -61,7 +61,8 @@ enum lv1_atapi_in_out {
}; };
static int ps3rom_slave_configure(struct scsi_device *scsi_dev) static int ps3rom_sdev_configure(struct scsi_device *scsi_dev,
struct queue_limits *lim)
{ {
struct ps3rom_private *priv = shost_priv(scsi_dev->host); struct ps3rom_private *priv = shost_priv(scsi_dev->host);
struct ps3_storage_device *dev = priv->dev; struct ps3_storage_device *dev = priv->dev;
@ -325,7 +326,7 @@ static irqreturn_t ps3rom_interrupt(int irq, void *data)
static const struct scsi_host_template ps3rom_host_template = { static const struct scsi_host_template ps3rom_host_template = {
.name = DEVICE_NAME, .name = DEVICE_NAME,
.slave_configure = ps3rom_slave_configure, .sdev_configure = ps3rom_sdev_configure,
.queuecommand = ps3rom_queuecommand, .queuecommand = ps3rom_queuecommand,
.can_queue = 1, .can_queue = 1,
.this_id = 7, .this_id = 7,


@ -982,7 +982,8 @@ static int qedf_eh_host_reset(struct scsi_cmnd *sc_cmd)
return SUCCESS; return SUCCESS;
} }
static int qedf_slave_configure(struct scsi_device *sdev) static int qedf_sdev_configure(struct scsi_device *sdev,
struct queue_limits *lim)
{ {
if (qedf_queue_depth) { if (qedf_queue_depth) {
scsi_change_queue_depth(sdev, qedf_queue_depth); scsi_change_queue_depth(sdev, qedf_queue_depth);
@ -1003,7 +1004,7 @@ static const struct scsi_host_template qedf_host_template = {
.eh_device_reset_handler = qedf_eh_device_reset, /* lun reset */ .eh_device_reset_handler = qedf_eh_device_reset, /* lun reset */
.eh_target_reset_handler = qedf_eh_target_reset, /* target reset */ .eh_target_reset_handler = qedf_eh_target_reset, /* target reset */
.eh_host_reset_handler = qedf_eh_host_reset, .eh_host_reset_handler = qedf_eh_host_reset,
.slave_configure = qedf_slave_configure, .sdev_configure = qedf_sdev_configure,
.dma_boundary = QED_HW_DMA_BOUNDARY, .dma_boundary = QED_HW_DMA_BOUNDARY,
.sg_tablesize = QEDF_MAX_BDS_PER_CMD, .sg_tablesize = QEDF_MAX_BDS_PER_CMD,
.can_queue = FCOE_PARAMS_NUM_TASKS, .can_queue = FCOE_PARAMS_NUM_TASKS,


@ -1159,7 +1159,7 @@ qla1280_set_target_parameters(struct scsi_qla_host *ha, int bus, int target)
/************************************************************************** /**************************************************************************
* qla1280_slave_configure * qla1280_sdev_configure
* *
* Description: * Description:
* Determines the queue depth for a given device. There are two ways * Determines the queue depth for a given device. There are two ways
@ -1170,7 +1170,7 @@ qla1280_set_target_parameters(struct scsi_qla_host *ha, int bus, int target)
* default queue depth (dependent on the number of hardware SCBs). * default queue depth (dependent on the number of hardware SCBs).
**************************************************************************/ **************************************************************************/
static int static int
qla1280_slave_configure(struct scsi_device *device) qla1280_sdev_configure(struct scsi_device *device, struct queue_limits *lim)
{ {
struct scsi_qla_host *ha; struct scsi_qla_host *ha;
int default_depth = 3; int default_depth = 3;
@ -4121,7 +4121,7 @@ static const struct scsi_host_template qla1280_driver_template = {
.proc_name = "qla1280", .proc_name = "qla1280",
.name = "Qlogic ISP 1280/12160", .name = "Qlogic ISP 1280/12160",
.info = qla1280_info, .info = qla1280_info,
.slave_configure = qla1280_slave_configure, .sdev_configure = qla1280_sdev_configure,
.queuecommand = qla1280_queuecommand, .queuecommand = qla1280_queuecommand,
.eh_abort_handler = qla1280_eh_abort, .eh_abort_handler = qla1280_eh_abort,
.eh_device_reset_handler= qla1280_eh_device_reset, .eh_device_reset_handler= qla1280_eh_device_reset,


@ -4098,6 +4098,8 @@ struct qla_hw_data {
uint32_t npiv_supported :1; uint32_t npiv_supported :1;
uint32_t pci_channel_io_perm_failure :1; uint32_t pci_channel_io_perm_failure :1;
uint32_t fce_enabled :1; uint32_t fce_enabled :1;
uint32_t user_enabled_fce :1;
uint32_t fce_dump_buf_alloced :1;
uint32_t fac_supported :1; uint32_t fac_supported :1;
uint32_t chip_reset_done :1; uint32_t chip_reset_done :1;


@ -409,16 +409,17 @@ qla2x00_dfs_fce_show(struct seq_file *s, void *unused)
mutex_lock(&ha->fce_mutex); mutex_lock(&ha->fce_mutex);
if (ha->flags.user_enabled_fce) {
seq_puts(s, "FCE Trace Buffer\n"); seq_puts(s, "FCE Trace Buffer\n");
seq_printf(s, "In Pointer = %llx\n\n", (unsigned long long)ha->fce_wr); seq_printf(s, "In Pointer = %llx\n\n", (unsigned long long)ha->fce_wr);
seq_printf(s, "Base = %llx\n\n", (unsigned long long) ha->fce_dma); seq_printf(s, "Base = %llx\n\n", (unsigned long long)ha->fce_dma);
seq_puts(s, "FCE Enable Registers\n"); seq_puts(s, "FCE Enable Registers\n");
seq_printf(s, "%08x %08x %08x %08x %08x %08x\n", seq_printf(s, "%08x %08x %08x %08x %08x %08x\n",
ha->fce_mb[0], ha->fce_mb[2], ha->fce_mb[3], ha->fce_mb[4], ha->fce_mb[0], ha->fce_mb[2], ha->fce_mb[3], ha->fce_mb[4],
ha->fce_mb[5], ha->fce_mb[6]); ha->fce_mb[5], ha->fce_mb[6]);
fce = (uint32_t *) ha->fce; fce = (uint32_t *)ha->fce;
fce_start = (unsigned long long) ha->fce_dma; fce_start = (unsigned long long)ha->fce_dma;
for (cnt = 0; cnt < fce_calc_size(ha->fce_bufs) / 4; cnt++) { for (cnt = 0; cnt < fce_calc_size(ha->fce_bufs) / 4; cnt++) {
if (cnt % 8 == 0) if (cnt % 8 == 0)
seq_printf(s, "\n%llx: ", seq_printf(s, "\n%llx: ",
@ -429,6 +430,10 @@ qla2x00_dfs_fce_show(struct seq_file *s, void *unused)
} }
seq_puts(s, "\nEnd\n"); seq_puts(s, "\nEnd\n");
} else {
seq_puts(s, "FCE Trace is currently not enabled\n");
seq_puts(s, "\techo [ 1 | 0 ] > fce\n");
}
mutex_unlock(&ha->fce_mutex); mutex_unlock(&ha->fce_mutex);
@ -467,7 +472,7 @@ qla2x00_dfs_fce_release(struct inode *inode, struct file *file)
struct qla_hw_data *ha = vha->hw; struct qla_hw_data *ha = vha->hw;
int rval; int rval;
if (ha->flags.fce_enabled) if (ha->flags.fce_enabled || !ha->fce)
goto out; goto out;
mutex_lock(&ha->fce_mutex); mutex_lock(&ha->fce_mutex);
@ -488,11 +493,88 @@ qla2x00_dfs_fce_release(struct inode *inode, struct file *file)
return single_release(inode, file); return single_release(inode, file);
} }
static ssize_t
qla2x00_dfs_fce_write(struct file *file, const char __user *buffer,
size_t count, loff_t *pos)
{
struct seq_file *s = file->private_data;
struct scsi_qla_host *vha = s->private;
struct qla_hw_data *ha = vha->hw;
char *buf;
int rc = 0;
unsigned long enable;
if (!IS_QLA25XX(ha) && !IS_QLA81XX(ha) && !IS_QLA83XX(ha) &&
!IS_QLA27XX(ha) && !IS_QLA28XX(ha)) {
ql_dbg(ql_dbg_user, vha, 0xd034,
"this adapter does not support FCE.");
return -EINVAL;
}
buf = memdup_user_nul(buffer, count);
if (IS_ERR(buf)) {
ql_dbg(ql_dbg_user, vha, 0xd037,
"fail to copy user buffer.");
return PTR_ERR(buf);
}
enable = kstrtoul(buf, 0, 0);
rc = count;
mutex_lock(&ha->fce_mutex);
if (enable) {
if (ha->flags.user_enabled_fce) {
mutex_unlock(&ha->fce_mutex);
goto out_free;
}
ha->flags.user_enabled_fce = 1;
if (!ha->fce) {
rc = qla2x00_alloc_fce_trace(vha);
if (rc) {
ha->flags.user_enabled_fce = 0;
mutex_unlock(&ha->fce_mutex);
goto out_free;
}
/* adjust fw dump buffer to take into account of this feature */
if (!ha->flags.fce_dump_buf_alloced)
qla2x00_alloc_fw_dump(vha);
}
if (!ha->flags.fce_enabled)
qla_enable_fce_trace(vha);
ql_dbg(ql_dbg_user, vha, 0xd045, "User enabled FCE .\n");
} else {
if (!ha->flags.user_enabled_fce) {
mutex_unlock(&ha->fce_mutex);
goto out_free;
}
ha->flags.user_enabled_fce = 0;
if (ha->flags.fce_enabled) {
qla2x00_disable_fce_trace(vha, NULL, NULL);
ha->flags.fce_enabled = 0;
}
qla2x00_free_fce_trace(ha);
/* no need to re-adjust fw dump buffer */
ql_dbg(ql_dbg_user, vha, 0xd04f, "User disabled FCE .\n");
}
mutex_unlock(&ha->fce_mutex);
out_free:
kfree(buf);
return rc;
}
static const struct file_operations dfs_fce_ops = { static const struct file_operations dfs_fce_ops = {
.open = qla2x00_dfs_fce_open, .open = qla2x00_dfs_fce_open,
.read = seq_read, .read = seq_read,
.llseek = seq_lseek, .llseek = seq_lseek,
.release = qla2x00_dfs_fce_release, .release = qla2x00_dfs_fce_release,
.write = qla2x00_dfs_fce_write,
}; };
static int static int
@ -626,8 +708,6 @@ qla2x00_dfs_setup(scsi_qla_host_t *vha)
if (!IS_QLA25XX(ha) && !IS_QLA81XX(ha) && !IS_QLA83XX(ha) && if (!IS_QLA25XX(ha) && !IS_QLA81XX(ha) && !IS_QLA83XX(ha) &&
!IS_QLA27XX(ha) && !IS_QLA28XX(ha)) !IS_QLA27XX(ha) && !IS_QLA28XX(ha))
goto out; goto out;
if (!ha->fce)
goto out;
if (qla2x00_dfs_root) if (qla2x00_dfs_root)
goto create_dir; goto create_dir;

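For reference, the help text added to qla2x00_dfs_fce_show() above describes how the new write handler is meant to be driven from user space: writing 1 to the adapter's fce debugfs node allocates the trace buffer if needed (re-sizing the firmware dump buffer once, tracked by fce_dump_buf_alloced) and enables tracing, while writing 0 disables tracing and frees the buffer. In practice that is echo 1 > fce and echo 0 > fce, run from the adapter's qla2xxx debugfs directory; the exact directory layout is assumed here rather than shown in these hunks.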

@ -11,6 +11,9 @@
/* /*
* Global Function Prototypes in qla_init.c source file. * Global Function Prototypes in qla_init.c source file.
*/ */
int qla2x00_alloc_fce_trace(scsi_qla_host_t *);
void qla2x00_free_fce_trace(struct qla_hw_data *ha);
void qla_enable_fce_trace(scsi_qla_host_t *);
extern int qla2x00_initialize_adapter(scsi_qla_host_t *); extern int qla2x00_initialize_adapter(scsi_qla_host_t *);
extern int qla24xx_post_prli_work(struct scsi_qla_host *vha, fc_port_t *fcport); extern int qla24xx_post_prli_work(struct scsi_qla_host *vha, fc_port_t *fcport);


@ -2681,7 +2681,7 @@ qla83xx_nic_core_fw_load(scsi_qla_host_t *vha)
return rval; return rval;
} }
static void qla_enable_fce_trace(scsi_qla_host_t *vha) void qla_enable_fce_trace(scsi_qla_host_t *vha)
{ {
int rval; int rval;
struct qla_hw_data *ha = vha->hw; struct qla_hw_data *ha = vha->hw;
@ -3717,25 +3717,24 @@ qla24xx_chip_diag(scsi_qla_host_t *vha)
return rval; return rval;
} }
static void int qla2x00_alloc_fce_trace(scsi_qla_host_t *vha)
qla2x00_alloc_fce_trace(scsi_qla_host_t *vha)
{ {
dma_addr_t tc_dma; dma_addr_t tc_dma;
void *tc; void *tc;
struct qla_hw_data *ha = vha->hw; struct qla_hw_data *ha = vha->hw;
if (!IS_FWI2_CAPABLE(ha)) if (!IS_FWI2_CAPABLE(ha))
return; return -EINVAL;
if (!IS_QLA25XX(ha) && !IS_QLA81XX(ha) && !IS_QLA83XX(ha) && if (!IS_QLA25XX(ha) && !IS_QLA81XX(ha) && !IS_QLA83XX(ha) &&
!IS_QLA27XX(ha) && !IS_QLA28XX(ha)) !IS_QLA27XX(ha) && !IS_QLA28XX(ha))
return; return -EINVAL;
if (ha->fce) { if (ha->fce) {
ql_dbg(ql_dbg_init, vha, 0x00bd, ql_dbg(ql_dbg_init, vha, 0x00bd,
"%s: FCE Mem is already allocated.\n", "%s: FCE Mem is already allocated.\n",
__func__); __func__);
return; return -EIO;
} }
/* Allocate memory for Fibre Channel Event Buffer. */ /* Allocate memory for Fibre Channel Event Buffer. */
@ -3745,7 +3744,7 @@ qla2x00_alloc_fce_trace(scsi_qla_host_t *vha)
ql_log(ql_log_warn, vha, 0x00be, ql_log(ql_log_warn, vha, 0x00be,
"Unable to allocate (%d KB) for FCE.\n", "Unable to allocate (%d KB) for FCE.\n",
FCE_SIZE / 1024); FCE_SIZE / 1024);
return; return -ENOMEM;
} }
ql_dbg(ql_dbg_init, vha, 0x00c0, ql_dbg(ql_dbg_init, vha, 0x00c0,
@ -3754,6 +3753,16 @@ qla2x00_alloc_fce_trace(scsi_qla_host_t *vha)
ha->fce_dma = tc_dma; ha->fce_dma = tc_dma;
ha->fce = tc; ha->fce = tc;
ha->fce_bufs = FCE_NUM_BUFFERS; ha->fce_bufs = FCE_NUM_BUFFERS;
return 0;
}
void qla2x00_free_fce_trace(struct qla_hw_data *ha)
{
if (!ha->fce)
return;
dma_free_coherent(&ha->pdev->dev, FCE_SIZE, ha->fce, ha->fce_dma);
ha->fce = NULL;
ha->fce_dma = 0;
} }
static void static void
@ -3844,9 +3853,10 @@ qla2x00_alloc_fw_dump(scsi_qla_host_t *vha)
if (ha->tgt.atio_ring) if (ha->tgt.atio_ring)
mq_size += ha->tgt.atio_q_length * sizeof(request_t); mq_size += ha->tgt.atio_q_length * sizeof(request_t);
qla2x00_alloc_fce_trace(vha); if (ha->fce) {
if (ha->fce)
fce_size = sizeof(struct qla2xxx_fce_chain) + FCE_SIZE; fce_size = sizeof(struct qla2xxx_fce_chain) + FCE_SIZE;
ha->flags.fce_dump_buf_alloced = 1;
}
qla2x00_alloc_eft_trace(vha); qla2x00_alloc_eft_trace(vha);
if (ha->eft) if (ha->eft)
eft_size = EFT_SIZE; eft_size = EFT_SIZE;


@ -1933,7 +1933,7 @@ qla2x00_abort_all_cmds(scsi_qla_host_t *vha, int res)
} }
static int static int
qla2xxx_slave_alloc(struct scsi_device *sdev) qla2xxx_sdev_init(struct scsi_device *sdev)
{ {
struct fc_rport *rport = starget_to_rport(scsi_target(sdev)); struct fc_rport *rport = starget_to_rport(scsi_target(sdev));
@ -1946,7 +1946,7 @@ qla2xxx_slave_alloc(struct scsi_device *sdev)
} }
static int static int
qla2xxx_slave_configure(struct scsi_device *sdev) qla2xxx_sdev_configure(struct scsi_device *sdev, struct queue_limits *lim)
{ {
scsi_qla_host_t *vha = shost_priv(sdev->host); scsi_qla_host_t *vha = shost_priv(sdev->host);
struct req_que *req = vha->req; struct req_que *req = vha->req;
@ -1956,7 +1956,7 @@ qla2xxx_slave_configure(struct scsi_device *sdev)
} }
static void static void
qla2xxx_slave_destroy(struct scsi_device *sdev) qla2xxx_sdev_destroy(struct scsi_device *sdev)
{ {
sdev->hostdata = NULL; sdev->hostdata = NULL;
} }
@ -8087,10 +8087,10 @@ struct scsi_host_template qla2xxx_driver_template = {
.eh_bus_reset_handler = qla2xxx_eh_bus_reset, .eh_bus_reset_handler = qla2xxx_eh_bus_reset,
.eh_host_reset_handler = qla2xxx_eh_host_reset, .eh_host_reset_handler = qla2xxx_eh_host_reset,
.slave_configure = qla2xxx_slave_configure, .sdev_configure = qla2xxx_sdev_configure,
.slave_alloc = qla2xxx_slave_alloc, .sdev_init = qla2xxx_sdev_init,
.slave_destroy = qla2xxx_slave_destroy, .sdev_destroy = qla2xxx_sdev_destroy,
.scan_finished = qla2xxx_scan_finished, .scan_finished = qla2xxx_scan_finished,
.scan_start = qla2xxx_scan_start, .scan_start = qla2xxx_scan_start,
.change_queue_depth = scsi_change_queue_depth, .change_queue_depth = scsi_change_queue_depth,


@ -160,7 +160,7 @@ static int qla4xxx_eh_abort(struct scsi_cmnd *cmd);
static int qla4xxx_eh_device_reset(struct scsi_cmnd *cmd); static int qla4xxx_eh_device_reset(struct scsi_cmnd *cmd);
static int qla4xxx_eh_target_reset(struct scsi_cmnd *cmd); static int qla4xxx_eh_target_reset(struct scsi_cmnd *cmd);
static int qla4xxx_eh_host_reset(struct scsi_cmnd *cmd); static int qla4xxx_eh_host_reset(struct scsi_cmnd *cmd);
static int qla4xxx_slave_alloc(struct scsi_device *device); static int qla4xxx_sdev_init(struct scsi_device *device);
static umode_t qla4_attr_is_visible(int param_type, int param); static umode_t qla4_attr_is_visible(int param_type, int param);
static int qla4xxx_host_reset(struct Scsi_Host *shost, int reset_type); static int qla4xxx_host_reset(struct Scsi_Host *shost, int reset_type);
@ -234,7 +234,7 @@ static struct scsi_host_template qla4xxx_driver_template = {
.eh_host_reset_handler = qla4xxx_eh_host_reset, .eh_host_reset_handler = qla4xxx_eh_host_reset,
.eh_timed_out = qla4xxx_eh_cmd_timed_out, .eh_timed_out = qla4xxx_eh_cmd_timed_out,
.slave_alloc = qla4xxx_slave_alloc, .sdev_init = qla4xxx_sdev_init,
.change_queue_depth = scsi_change_queue_depth, .change_queue_depth = scsi_change_queue_depth,
.this_id = -1, .this_id = -1,
@ -9052,7 +9052,7 @@ static void qla4xxx_config_dma_addressing(struct scsi_qla_host *ha)
} }
} }
static int qla4xxx_slave_alloc(struct scsi_device *sdev) static int qla4xxx_sdev_init(struct scsi_device *sdev)
{ {
struct iscsi_cls_session *cls_sess; struct iscsi_cls_session *cls_sess;
struct iscsi_session *sess; struct iscsi_session *sess;


@ -975,7 +975,8 @@ static inline void update_can_queue(struct Scsi_Host *host, u_int in_ptr, u_int
host->sg_tablesize = QLOGICPTI_MAX_SG(num_free); host->sg_tablesize = QLOGICPTI_MAX_SG(num_free);
} }
static int qlogicpti_slave_configure(struct scsi_device *sdev) static int qlogicpti_sdev_configure(struct scsi_device *sdev,
struct queue_limits *lim)
{ {
struct qlogicpti *qpti = shost_priv(sdev->host); struct qlogicpti *qpti = shost_priv(sdev->host);
int tgt = sdev->id; int tgt = sdev->id;
@ -1292,7 +1293,7 @@ static const struct scsi_host_template qpti_template = {
.name = "qlogicpti", .name = "qlogicpti",
.info = qlogicpti_info, .info = qlogicpti_info,
.queuecommand = qlogicpti_queuecommand, .queuecommand = qlogicpti_queuecommand,
.slave_configure = qlogicpti_slave_configure, .sdev_configure = qlogicpti_sdev_configure,
.eh_abort_handler = qlogicpti_abort, .eh_abort_handler = qlogicpti_abort,
.eh_host_reset_handler = qlogicpti_reset, .eh_host_reset_handler = qlogicpti_reset,
.can_queue = QLOGICPTI_REQ_QUEUE_LEN, .can_queue = QLOGICPTI_REQ_QUEUE_LEN,


@ -5879,23 +5879,24 @@ static struct sdebug_dev_info *find_build_dev_info(struct scsi_device *sdev)
return open_devip; return open_devip;
} }
static int scsi_debug_slave_alloc(struct scsi_device *sdp) static int scsi_debug_sdev_init(struct scsi_device *sdp)
{ {
if (sdebug_verbose) if (sdebug_verbose)
pr_info("slave_alloc <%u %u %u %llu>\n", pr_info("sdev_init <%u %u %u %llu>\n",
sdp->host->host_no, sdp->channel, sdp->id, sdp->lun); sdp->host->host_no, sdp->channel, sdp->id, sdp->lun);
return 0; return 0;
} }
static int scsi_debug_slave_configure(struct scsi_device *sdp) static int scsi_debug_sdev_configure(struct scsi_device *sdp,
struct queue_limits *lim)
{ {
struct sdebug_dev_info *devip = struct sdebug_dev_info *devip =
(struct sdebug_dev_info *)sdp->hostdata; (struct sdebug_dev_info *)sdp->hostdata;
struct dentry *dentry; struct dentry *dentry;
if (sdebug_verbose) if (sdebug_verbose)
pr_info("slave_configure <%u %u %u %llu>\n", pr_info("sdev_configure <%u %u %u %llu>\n",
sdp->host->host_no, sdp->channel, sdp->id, sdp->lun); sdp->host->host_no, sdp->channel, sdp->id, sdp->lun);
if (sdp->host->max_cmd_len != SDEBUG_MAX_CMD_LEN) if (sdp->host->max_cmd_len != SDEBUG_MAX_CMD_LEN)
sdp->host->max_cmd_len = SDEBUG_MAX_CMD_LEN; sdp->host->max_cmd_len = SDEBUG_MAX_CMD_LEN;
@ -5927,14 +5928,14 @@ static int scsi_debug_slave_configure(struct scsi_device *sdp)
return 0; return 0;
} }
static void scsi_debug_slave_destroy(struct scsi_device *sdp) static void scsi_debug_sdev_destroy(struct scsi_device *sdp)
{ {
struct sdebug_dev_info *devip = struct sdebug_dev_info *devip =
(struct sdebug_dev_info *)sdp->hostdata; (struct sdebug_dev_info *)sdp->hostdata;
struct sdebug_err_inject *err; struct sdebug_err_inject *err;
if (sdebug_verbose) if (sdebug_verbose)
pr_info("slave_destroy <%u %u %u %llu>\n", pr_info("sdev_destroy <%u %u %u %llu>\n",
sdp->host->host_no, sdp->channel, sdp->id, sdp->lun); sdp->host->host_no, sdp->channel, sdp->id, sdp->lun);
if (!devip) if (!devip)
@ -8712,9 +8713,9 @@ static struct scsi_host_template sdebug_driver_template = {
.proc_name = sdebug_proc_name, .proc_name = sdebug_proc_name,
.name = "SCSI DEBUG", .name = "SCSI DEBUG",
.info = scsi_debug_info, .info = scsi_debug_info,
.slave_alloc = scsi_debug_slave_alloc, .sdev_init = scsi_debug_sdev_init,
.slave_configure = scsi_debug_slave_configure, .sdev_configure = scsi_debug_sdev_configure,
.slave_destroy = scsi_debug_slave_destroy, .sdev_destroy = scsi_debug_sdev_destroy,
.ioctl = scsi_debug_ioctl, .ioctl = scsi_debug_ioctl,
.queuecommand = scsi_debug_queuecommand, .queuecommand = scsi_debug_queuecommand,
.change_queue_depth = sdebug_change_qdepth, .change_queue_depth = sdebug_change_qdepth,


@ -227,7 +227,7 @@ static int scsi_realloc_sdev_budget_map(struct scsi_device *sdev,
/* /*
* realloc if new shift is calculated, which is caused by setting * realloc if new shift is calculated, which is caused by setting
* up one new default queue depth after calling ->device_configure * up one new default queue depth after calling ->sdev_configure
*/ */
if (!need_alloc && new_shift != sdev->budget_map.shift) if (!need_alloc && new_shift != sdev->budget_map.shift)
need_alloc = need_free = true; need_alloc = need_free = true;
@ -265,7 +265,7 @@ static int scsi_realloc_sdev_budget_map(struct scsi_device *sdev,
* scsi_alloc_sdev - allocate and setup a scsi_Device * scsi_alloc_sdev - allocate and setup a scsi_Device
* @starget: which target to allocate a &scsi_device for * @starget: which target to allocate a &scsi_device for
* @lun: which lun * @lun: which lun
* @hostdata: usually NULL and set by ->slave_alloc instead * @hostdata: usually NULL and set by ->sdev_init instead
* *
* Description: * Description:
* Allocate, initialize for io, and return a pointer to a scsi_Device. * Allocate, initialize for io, and return a pointer to a scsi_Device.
@ -312,11 +312,11 @@ static struct scsi_device *scsi_alloc_sdev(struct scsi_target *starget,
sdev->sdev_gendev.parent = get_device(&starget->dev); sdev->sdev_gendev.parent = get_device(&starget->dev);
sdev->sdev_target = starget; sdev->sdev_target = starget;
/* usually NULL and set by ->slave_alloc instead */ /* usually NULL and set by ->sdev_init instead */
sdev->hostdata = hostdata; sdev->hostdata = hostdata;
/* if the device needs this changing, it may do so in the /* if the device needs this changing, it may do so in the
* slave_configure function */ * sdev_configure function */
sdev->max_device_blocked = SCSI_DEFAULT_DEVICE_BLOCKED; sdev->max_device_blocked = SCSI_DEFAULT_DEVICE_BLOCKED;
/* /*
@ -363,8 +363,8 @@ static struct scsi_device *scsi_alloc_sdev(struct scsi_target *starget,
scsi_sysfs_device_initialize(sdev); scsi_sysfs_device_initialize(sdev);
if (shost->hostt->slave_alloc) { if (shost->hostt->sdev_init) {
ret = shost->hostt->slave_alloc(sdev); ret = shost->hostt->sdev_init(sdev);
if (ret) { if (ret) {
/* /*
* if LLDD reports slave not present, don't clutter * if LLDD reports slave not present, don't clutter
@ -1074,10 +1074,8 @@ static int scsi_add_lun(struct scsi_device *sdev, unsigned char *inq_result,
else if (*bflags & BLIST_MAX_1024) else if (*bflags & BLIST_MAX_1024)
lim.max_hw_sectors = 1024; lim.max_hw_sectors = 1024;
if (hostt->device_configure) if (hostt->sdev_configure)
ret = hostt->device_configure(sdev, &lim); ret = hostt->sdev_configure(sdev, &lim);
else if (hostt->slave_configure)
ret = hostt->slave_configure(sdev);
if (ret) { if (ret) {
queue_limits_cancel_update(sdev->request_queue); queue_limits_cancel_update(sdev->request_queue);
/* /*
@ -1097,12 +1095,12 @@ static int scsi_add_lun(struct scsi_device *sdev, unsigned char *inq_result,
} }
/* /*
* The queue_depth is often changed in ->device_configure. * The queue_depth is often changed in ->sdev_configure.
* *
* Set up budget map again since memory consumption of the map depends * Set up budget map again since memory consumption of the map depends
* on actual queue depth. * on actual queue depth.
*/ */
if (hostt->device_configure || hostt->slave_configure) if (hostt->sdev_configure)
scsi_realloc_sdev_budget_map(sdev, sdev->queue_depth); scsi_realloc_sdev_budget_map(sdev, sdev->queue_depth);
if (sdev->scsi_level >= SCSI_3) if (sdev->scsi_level >= SCSI_3)

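In the mid-layer scan hunks above, scsi_alloc_sdev() now calls ->sdev_init() and scsi_add_lun() calls only ->sdev_configure() with the queue_limits it is about to commit; the ->slave_configure() fallback is gone. Purely as an illustration of the converted callback shape, not code from any driver in this diff ("xxx" is a hypothetical LLD and both values below are placeholders), a minimal implementation might look like:

    #include <linux/blkdev.h>
    #include <scsi/scsi_device.h>

    static int xxx_sdev_configure(struct scsi_device *sdev,
                                  struct queue_limits *lim)
    {
            /* Adjust transfer limits before the mid layer commits them. */
            lim->max_hw_sectors = 1024;             /* placeholder cap */

            /* Queue depth is still changed via the mid-layer helper. */
            scsi_change_queue_depth(sdev, 64);      /* placeholder depth */

            return 0;
    }

The conversions visible in this diff follow the same shape: the int return convention is unchanged, and only the extra queue_limits argument is new.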

@ -1513,8 +1513,8 @@ void __scsi_remove_device(struct scsi_device *sdev)
kref_put(&sdev->host->tagset_refcnt, scsi_mq_free_tags); kref_put(&sdev->host->tagset_refcnt, scsi_mq_free_tags);
cancel_work_sync(&sdev->requeue_work); cancel_work_sync(&sdev->requeue_work);
if (sdev->host->hostt->slave_destroy) if (sdev->host->hostt->sdev_destroy)
sdev->host->hostt->slave_destroy(sdev); sdev->host->hostt->sdev_destroy(sdev);
transport_destroy_device(dev); transport_destroy_device(dev);
/* /*


@ -6489,7 +6489,7 @@ static int pqi_eh_abort_handler(struct scsi_cmnd *scmd)
return SUCCESS; return SUCCESS;
} }
static int pqi_slave_alloc(struct scsi_device *sdev) static int pqi_sdev_init(struct scsi_device *sdev)
{ {
struct pqi_scsi_dev *device; struct pqi_scsi_dev *device;
unsigned long flags; unsigned long flags;
@ -6557,7 +6557,8 @@ static inline bool pqi_is_tape_changer_device(struct pqi_scsi_dev *device)
return device->devtype == TYPE_TAPE || device->devtype == TYPE_MEDIUM_CHANGER; return device->devtype == TYPE_TAPE || device->devtype == TYPE_MEDIUM_CHANGER;
} }
static int pqi_slave_configure(struct scsi_device *sdev) static int pqi_sdev_configure(struct scsi_device *sdev,
struct queue_limits *lim)
{ {
int rc = 0; int rc = 0;
struct pqi_scsi_dev *device; struct pqi_scsi_dev *device;
@ -6573,7 +6574,7 @@ static int pqi_slave_configure(struct scsi_device *sdev)
return rc; return rc;
} }
static void pqi_slave_destroy(struct scsi_device *sdev) static void pqi_sdev_destroy(struct scsi_device *sdev)
{ {
struct pqi_ctrl_info *ctrl_info; struct pqi_ctrl_info *ctrl_info;
struct pqi_scsi_dev *device; struct pqi_scsi_dev *device;
@ -7548,9 +7549,9 @@ static const struct scsi_host_template pqi_driver_template = {
.eh_device_reset_handler = pqi_eh_device_reset_handler, .eh_device_reset_handler = pqi_eh_device_reset_handler,
.eh_abort_handler = pqi_eh_abort_handler, .eh_abort_handler = pqi_eh_abort_handler,
.ioctl = pqi_ioctl, .ioctl = pqi_ioctl,
.slave_alloc = pqi_slave_alloc, .sdev_init = pqi_sdev_init,
.slave_configure = pqi_slave_configure, .sdev_configure = pqi_sdev_configure,
.slave_destroy = pqi_slave_destroy, .sdev_destroy = pqi_sdev_destroy,
.map_queues = pqi_map_queues, .map_queues = pqi_map_queues,
.sdev_groups = pqi_sdev_groups, .sdev_groups = pqi_sdev_groups,
.shost_groups = pqi_shost_groups, .shost_groups = pqi_shost_groups,


@ -42,11 +42,11 @@ module_param(snic_max_qdepth, uint, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(snic_max_qdepth, "Queue depth to report for each LUN"); MODULE_PARM_DESC(snic_max_qdepth, "Queue depth to report for each LUN");
/* /*
* snic_slave_alloc : callback function to SCSI Mid Layer, called on * snic_sdev_init : callback function to SCSI Mid Layer, called on
* scsi device initialization. * scsi device initialization.
*/ */
static int static int
snic_slave_alloc(struct scsi_device *sdev) snic_sdev_init(struct scsi_device *sdev)
{ {
struct snic_tgt *tgt = starget_to_tgt(scsi_target(sdev)); struct snic_tgt *tgt = starget_to_tgt(scsi_target(sdev));
@ -57,11 +57,11 @@ snic_slave_alloc(struct scsi_device *sdev)
} }
/* /*
* snic_slave_configure : callback function to SCSI Mid Layer, called on * snic_sdev_configure : callback function to SCSI Mid Layer, called on
* scsi device initialization. * scsi device initialization.
*/ */
static int static int
snic_slave_configure(struct scsi_device *sdev) snic_sdev_configure(struct scsi_device *sdev, struct queue_limits *lim)
{ {
struct snic *snic = shost_priv(sdev->host); struct snic *snic = shost_priv(sdev->host);
u32 qdepth = 0, max_ios = 0; u32 qdepth = 0, max_ios = 0;
@ -107,8 +107,8 @@ static const struct scsi_host_template snic_host_template = {
.eh_abort_handler = snic_abort_cmd, .eh_abort_handler = snic_abort_cmd,
.eh_device_reset_handler = snic_device_reset, .eh_device_reset_handler = snic_device_reset,
.eh_host_reset_handler = snic_host_reset, .eh_host_reset_handler = snic_host_reset,
.slave_alloc = snic_slave_alloc, .sdev_init = snic_sdev_init,
.slave_configure = snic_slave_configure, .sdev_configure = snic_sdev_configure,
.change_queue_depth = snic_change_queue_depth, .change_queue_depth = snic_change_queue_depth,
.this_id = -1, .this_id = -1,
.cmd_per_lun = SNIC_DFLT_QUEUE_DEPTH, .cmd_per_lun = SNIC_DFLT_QUEUE_DEPTH,

View File

@ -584,7 +584,7 @@ static void return_abnormal_state(struct st_hba *hba, int status)
spin_unlock_irqrestore(hba->host->host_lock, flags); spin_unlock_irqrestore(hba->host->host_lock, flags);
} }
static int static int
stex_slave_config(struct scsi_device *sdev) stex_sdev_configure(struct scsi_device *sdev, struct queue_limits *lim)
{ {
sdev->use_10_for_rw = 1; sdev->use_10_for_rw = 1;
sdev->use_10_for_ms = 1; sdev->use_10_for_ms = 1;
@ -1481,7 +1481,7 @@ static const struct scsi_host_template driver_template = {
.proc_name = DRV_NAME, .proc_name = DRV_NAME,
.bios_param = stex_biosparam, .bios_param = stex_biosparam,
.queuecommand = stex_queuecommand, .queuecommand = stex_queuecommand,
.slave_configure = stex_slave_config, .sdev_configure = stex_sdev_configure,
.eh_abort_handler = stex_abort, .eh_abort_handler = stex_abort,
.eh_host_reset_handler = stex_reset, .eh_host_reset_handler = stex_reset,
.this_id = -1, .this_id = -1,


@ -1579,7 +1579,8 @@ static int storvsc_device_alloc(struct scsi_device *sdevice)
return 0; return 0;
} }
static int storvsc_device_configure(struct scsi_device *sdevice) static int storvsc_sdev_configure(struct scsi_device *sdevice,
struct queue_limits *lim)
{ {
blk_queue_rq_timeout(sdevice->request_queue, (storvsc_timeout * HZ)); blk_queue_rq_timeout(sdevice->request_queue, (storvsc_timeout * HZ));
@ -1880,8 +1881,8 @@ static struct scsi_host_template scsi_driver = {
.eh_host_reset_handler = storvsc_host_reset_handler, .eh_host_reset_handler = storvsc_host_reset_handler,
.proc_name = "storvsc_host", .proc_name = "storvsc_host",
.eh_timed_out = storvsc_eh_timed_out, .eh_timed_out = storvsc_eh_timed_out,
.slave_alloc = storvsc_device_alloc, .sdev_init = storvsc_device_alloc,
.slave_configure = storvsc_device_configure, .sdev_configure = storvsc_sdev_configure,
.cmd_per_lun = 2048, .cmd_per_lun = 2048,
.this_id = -1, .this_id = -1,
/* Ensure there are no gaps in presented sgls */ /* Ensure there are no gaps in presented sgls */


@ -765,7 +765,7 @@ static void sym_tune_dev_queuing(struct sym_tcb *tp, int lun, u_short reqtags)
} }
} }
static int sym53c8xx_slave_alloc(struct scsi_device *sdev) static int sym53c8xx_sdev_init(struct scsi_device *sdev)
{ {
struct sym_hcb *np = sym_get_hcb(sdev->host); struct sym_hcb *np = sym_get_hcb(sdev->host);
struct sym_tcb *tp = &np->target[sdev->id]; struct sym_tcb *tp = &np->target[sdev->id];
@ -825,7 +825,8 @@ static int sym53c8xx_slave_alloc(struct scsi_device *sdev)
/* /*
* Linux entry point for device queue sizing. * Linux entry point for device queue sizing.
*/ */
static int sym53c8xx_slave_configure(struct scsi_device *sdev) static int sym53c8xx_sdev_configure(struct scsi_device *sdev,
struct queue_limits *lim)
{ {
struct sym_hcb *np = sym_get_hcb(sdev->host); struct sym_hcb *np = sym_get_hcb(sdev->host);
struct sym_tcb *tp = &np->target[sdev->id]; struct sym_tcb *tp = &np->target[sdev->id];
@ -861,14 +862,14 @@ static int sym53c8xx_slave_configure(struct scsi_device *sdev)
return 0; return 0;
} }
static void sym53c8xx_slave_destroy(struct scsi_device *sdev) static void sym53c8xx_sdev_destroy(struct scsi_device *sdev)
{ {
struct sym_hcb *np = sym_get_hcb(sdev->host); struct sym_hcb *np = sym_get_hcb(sdev->host);
struct sym_tcb *tp = &np->target[sdev->id]; struct sym_tcb *tp = &np->target[sdev->id];
struct sym_lcb *lp = sym_lp(tp, sdev->lun); struct sym_lcb *lp = sym_lp(tp, sdev->lun);
unsigned long flags; unsigned long flags;
/* if slave_alloc returned before allocating a sym_lcb, return */ /* if sdev_init returned before allocating a sym_lcb, return */
if (!lp) if (!lp)
return; return;
@ -1684,9 +1685,9 @@ static const struct scsi_host_template sym2_template = {
.info = sym53c8xx_info, .info = sym53c8xx_info,
.cmd_size = sizeof(struct sym_ucmd), .cmd_size = sizeof(struct sym_ucmd),
.queuecommand = sym53c8xx_queue_command, .queuecommand = sym53c8xx_queue_command,
.slave_alloc = sym53c8xx_slave_alloc, .sdev_init = sym53c8xx_sdev_init,
.slave_configure = sym53c8xx_slave_configure, .sdev_configure = sym53c8xx_sdev_configure,
.slave_destroy = sym53c8xx_slave_destroy, .sdev_destroy = sym53c8xx_sdev_destroy,
.eh_abort_handler = sym53c8xx_eh_abort_handler, .eh_abort_handler = sym53c8xx_eh_abort_handler,
.eh_target_reset_handler = sym53c8xx_eh_target_reset_handler, .eh_target_reset_handler = sym53c8xx_eh_target_reset_handler,
.eh_bus_reset_handler = sym53c8xx_eh_bus_reset_handler, .eh_bus_reset_handler = sym53c8xx_eh_bus_reset_handler,


@ -800,7 +800,7 @@ static const struct scsi_host_template virtscsi_host_template = {
.eh_abort_handler = virtscsi_abort, .eh_abort_handler = virtscsi_abort,
.eh_device_reset_handler = virtscsi_device_reset, .eh_device_reset_handler = virtscsi_device_reset,
.eh_timed_out = virtscsi_eh_timed_out, .eh_timed_out = virtscsi_eh_timed_out,
.slave_alloc = virtscsi_device_alloc, .sdev_init = virtscsi_device_alloc,
.dma_boundary = UINT_MAX, .dma_boundary = UINT_MAX,
.map_queues = virtscsi_map_queues, .map_queues = virtscsi_map_queues,


@ -735,7 +735,8 @@ static int scsifront_dev_reset_handler(struct scsi_cmnd *sc)
return scsifront_action_handler(sc, VSCSIIF_ACT_SCSI_RESET); return scsifront_action_handler(sc, VSCSIIF_ACT_SCSI_RESET);
} }
static int scsifront_sdev_configure(struct scsi_device *sdev) static int scsifront_sdev_configure(struct scsi_device *sdev,
struct queue_limits *lim)
{ {
struct vscsifrnt_info *info = shost_priv(sdev->host); struct vscsifrnt_info *info = shost_priv(sdev->host);
int err; int err;
@ -776,8 +777,8 @@ static const struct scsi_host_template scsifront_sht = {
.queuecommand = scsifront_queuecommand, .queuecommand = scsifront_queuecommand,
.eh_abort_handler = scsifront_eh_abort_handler, .eh_abort_handler = scsifront_eh_abort_handler,
.eh_device_reset_handler = scsifront_dev_reset_handler, .eh_device_reset_handler = scsifront_dev_reset_handler,
.slave_configure = scsifront_sdev_configure, .sdev_configure = scsifront_sdev_configure,
.slave_destroy = scsifront_sdev_destroy, .sdev_destroy = scsifront_sdev_destroy,
.cmd_per_lun = VSCSIIF_DEFAULT_CMD_PER_LUN, .cmd_per_lun = VSCSIIF_DEFAULT_CMD_PER_LUN,
.can_queue = VSCSIIF_MAX_REQS, .can_queue = VSCSIIF_MAX_REQS,
.this_id = -1, .this_id = -1,
@ -1074,8 +1075,8 @@ static void scsifront_do_lun_hotplug(struct vscsifrnt_info *info, int op)
continue; continue;
/* /*
* Front device state path, used in slave_configure called * Front device state path, used in sdev_configure called
* on successfull scsi_add_device, and in slave_destroy called * on successfull scsi_add_device, and in sdev_destroy called
* on remove of a device. * on remove of a device.
*/ */
snprintf(info->dev_state_path, sizeof(info->dev_state_path), snprintf(info->dev_state_path, sizeof(info->dev_state_path),


@ -258,10 +258,15 @@ ufs_get_desired_pm_lvl_for_dev_link_state(enum ufs_dev_pwr_mode dev_state,
return UFS_PM_LVL_0; return UFS_PM_LVL_0;
} }
static bool ufshcd_has_pending_tasks(struct ufs_hba *hba)
{
return hba->outstanding_tasks || hba->active_uic_cmd ||
hba->uic_async_done;
}
static bool ufshcd_is_ufs_dev_busy(struct ufs_hba *hba) static bool ufshcd_is_ufs_dev_busy(struct ufs_hba *hba)
{ {
return (hba->clk_gating.active_reqs || hba->outstanding_reqs || hba->outstanding_tasks || return hba->outstanding_reqs || ufshcd_has_pending_tasks(hba);
hba->active_uic_cmd || hba->uic_async_done);
} }
static const struct ufs_dev_quirk ufs_fixups[] = { static const struct ufs_dev_quirk ufs_fixups[] = {
@@ -1447,16 +1452,16 @@ static void ufshcd_clk_scaling_suspend_work(struct work_struct *work)
 {
 	struct ufs_hba *hba = container_of(work, struct ufs_hba,
 					   clk_scaling.suspend_work);
-	unsigned long irq_flags;
 
-	spin_lock_irqsave(hba->host->host_lock, irq_flags);
-	if (hba->clk_scaling.active_reqs || hba->clk_scaling.is_suspended) {
-		spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
-		return;
-	}
-	hba->clk_scaling.is_suspended = true;
-	hba->clk_scaling.window_start_t = 0;
-	spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
+	scoped_guard(spinlock_irqsave, &hba->clk_scaling.lock)
+	{
+		if (hba->clk_scaling.active_reqs ||
+		    hba->clk_scaling.is_suspended)
+			return;
+
+		hba->clk_scaling.is_suspended = true;
+		hba->clk_scaling.window_start_t = 0;
+	}
 
 	devfreq_suspend_device(hba->devfreq);
 }
@@ -1465,15 +1470,13 @@ static void ufshcd_clk_scaling_resume_work(struct work_struct *work)
 {
 	struct ufs_hba *hba = container_of(work, struct ufs_hba,
 					   clk_scaling.resume_work);
-	unsigned long irq_flags;
 
-	spin_lock_irqsave(hba->host->host_lock, irq_flags);
-	if (!hba->clk_scaling.is_suspended) {
-		spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
-		return;
-	}
-	hba->clk_scaling.is_suspended = false;
-	spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
+	scoped_guard(spinlock_irqsave, &hba->clk_scaling.lock)
+	{
+		if (!hba->clk_scaling.is_suspended)
+			return;
+
+		hba->clk_scaling.is_suspended = false;
+	}
 
 	devfreq_resume_device(hba->devfreq);
 }
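The two hunks above (and most of those that follow) replace open-coded spin_lock_irqsave()/spin_unlock_irqrestore() pairs with the scope-based guards from <linux/cleanup.h>: guard(spinlock_irqsave)(&lock) holds the lock until the end of the enclosing function, while scoped_guard(spinlock_irqsave, &lock) { ... } holds it only for the attached block, and both drop the lock automatically on every early return. A small self-contained sketch of the idiom, using an invented counter rather than the real clk_scaling fields:

	#include <linux/cleanup.h>
	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(demo_lock);
	static unsigned int demo_count;

	/* Equivalent of the old "lock; check; unlock; return" pattern: the
	 * guard releases demo_lock on the early return as well as on the
	 * normal path. */
	static bool demo_try_increment(unsigned int limit)
	{
		scoped_guard(spinlock_irqsave, &demo_lock) {
			if (demo_count >= limit)
				return false;	/* lock dropped here */
			demo_count++;
		}				/* ...and here */
		return true;
	}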
@@ -1487,7 +1490,6 @@ static int ufshcd_devfreq_target(struct device *dev,
 	bool scale_up = false, sched_clk_scaling_suspend_work = false;
 	struct list_head *clk_list = &hba->clk_list_head;
 	struct ufs_clk_info *clki;
-	unsigned long irq_flags;
 
 	if (!ufshcd_is_clkscaling_supported(hba))
 		return -EINVAL;
@@ -1508,15 +1510,13 @@ static int ufshcd_devfreq_target(struct device *dev,
 		*freq = (unsigned long) clk_round_rate(clki->clk, *freq);
 	}
 
-	spin_lock_irqsave(hba->host->host_lock, irq_flags);
-	if (ufshcd_eh_in_progress(hba)) {
-		spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
+	scoped_guard(spinlock_irqsave, &hba->clk_scaling.lock)
+	{
+		if (ufshcd_eh_in_progress(hba))
 			return 0;
-	}
 
 	/* Skip scaling clock when clock scaling is suspended */
 	if (hba->clk_scaling.is_suspended) {
-		spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
 		dev_warn(hba->dev, "clock scaling is suspended, skip");
 		return 0;
 	}
@@ -1524,10 +1524,8 @@ static int ufshcd_devfreq_target(struct device *dev,
 	if (!hba->clk_scaling.active_reqs)
 		sched_clk_scaling_suspend_work = true;
 
-	if (list_empty(clk_list)) {
-		spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
+	if (list_empty(clk_list))
 		goto out;
-	}
 
 	/* Decide based on the target or rounded-off frequency and update */
 	if (hba->use_pm_opp)
@@ -1540,11 +1538,10 @@ static int ufshcd_devfreq_target(struct device *dev,
 	/* Update the frequency */
 	if (!ufshcd_is_devfreq_scaling_required(hba, *freq, scale_up)) {
-		spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
 		ret = 0;
 		goto out; /* no state change required */
 	}
-	spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
+	}
 
 	start = ktime_get();
 	ret = ufshcd_devfreq_scale(hba, *freq, scale_up);
@@ -1569,7 +1566,6 @@ static int ufshcd_devfreq_get_dev_status(struct device *dev,
 {
 	struct ufs_hba *hba = dev_get_drvdata(dev);
 	struct ufs_clk_scaling *scaling = &hba->clk_scaling;
-	unsigned long flags;
 	ktime_t curr_t;
 
 	if (!ufshcd_is_clkscaling_supported(hba))
@@ -1577,7 +1573,8 @@ static int ufshcd_devfreq_get_dev_status(struct device *dev,
 	memset(stat, 0, sizeof(*stat));
 
-	spin_lock_irqsave(hba->host->host_lock, flags);
+	guard(spinlock_irqsave)(&hba->clk_scaling.lock);
+
 	curr_t = ktime_get();
 	if (!scaling->window_start_t)
 		goto start_window;
@@ -1613,7 +1610,7 @@ static int ufshcd_devfreq_get_dev_status(struct device *dev,
 		scaling->busy_start_t = 0;
 		scaling->is_busy_started = false;
 	}
-	spin_unlock_irqrestore(hba->host->host_lock, flags);
+
 	return 0;
 }
@@ -1677,19 +1674,19 @@ static void ufshcd_devfreq_remove(struct ufs_hba *hba)
 static void ufshcd_suspend_clkscaling(struct ufs_hba *hba)
 {
-	unsigned long flags;
 	bool suspend = false;
 
 	cancel_work_sync(&hba->clk_scaling.suspend_work);
 	cancel_work_sync(&hba->clk_scaling.resume_work);
 
-	spin_lock_irqsave(hba->host->host_lock, flags);
+	scoped_guard(spinlock_irqsave, &hba->clk_scaling.lock)
+	{
 	if (!hba->clk_scaling.is_suspended) {
 		suspend = true;
 		hba->clk_scaling.is_suspended = true;
 		hba->clk_scaling.window_start_t = 0;
 	}
-	spin_unlock_irqrestore(hba->host->host_lock, flags);
+	}
 
 	if (suspend)
 		devfreq_suspend_device(hba->devfreq);
@@ -1697,15 +1694,15 @@ static void ufshcd_suspend_clkscaling(struct ufs_hba *hba)
 static void ufshcd_resume_clkscaling(struct ufs_hba *hba)
 {
-	unsigned long flags;
 	bool resume = false;
 
-	spin_lock_irqsave(hba->host->host_lock, flags);
+	scoped_guard(spinlock_irqsave, &hba->clk_scaling.lock)
+	{
 	if (hba->clk_scaling.is_suspended) {
 		resume = true;
 		hba->clk_scaling.is_suspended = false;
 	}
-	spin_unlock_irqrestore(hba->host->host_lock, flags);
+	}
 
 	if (resume)
 		devfreq_resume_device(hba->devfreq);
@@ -1791,6 +1788,8 @@ static void ufshcd_init_clk_scaling(struct ufs_hba *hba)
 	INIT_WORK(&hba->clk_scaling.resume_work,
 		  ufshcd_clk_scaling_resume_work);
 
+	spin_lock_init(&hba->clk_scaling.lock);
+
 	hba->clk_scaling.workq = alloc_ordered_workqueue(
 		"ufs_clkscaling_%d", WQ_MEM_RECLAIM, hba->host->host_no);
@@ -1811,19 +1810,16 @@ static void ufshcd_exit_clk_scaling(struct ufs_hba *hba)
 static void ufshcd_ungate_work(struct work_struct *work)
 {
 	int ret;
-	unsigned long flags;
 	struct ufs_hba *hba = container_of(work, struct ufs_hba,
 					   clk_gating.ungate_work);
 
 	cancel_delayed_work_sync(&hba->clk_gating.gate_work);
 
-	spin_lock_irqsave(hba->host->host_lock, flags);
-	if (hba->clk_gating.state == CLKS_ON) {
-		spin_unlock_irqrestore(hba->host->host_lock, flags);
+	scoped_guard(spinlock_irqsave, &hba->clk_gating.lock) {
+		if (hba->clk_gating.state == CLKS_ON)
 			return;
 	}
-	spin_unlock_irqrestore(hba->host->host_lock, flags);
 
 	ufshcd_hba_vreg_set_hpm(hba);
 	ufshcd_setup_clocks(hba, true);
@@ -1858,7 +1854,7 @@ void ufshcd_hold(struct ufs_hba *hba)
 	if (!ufshcd_is_clkgating_allowed(hba) ||
 	    !hba->clk_gating.is_initialized)
 		return;
-	spin_lock_irqsave(hba->host->host_lock, flags);
+	spin_lock_irqsave(&hba->clk_gating.lock, flags);
 	hba->clk_gating.active_reqs++;
 
 start:
@@ -1874,11 +1870,11 @@ void ufshcd_hold(struct ufs_hba *hba)
 		 */
 		if (ufshcd_can_hibern8_during_gating(hba) &&
 		    ufshcd_is_link_hibern8(hba)) {
-			spin_unlock_irqrestore(hba->host->host_lock, flags);
+			spin_unlock_irqrestore(&hba->clk_gating.lock, flags);
 			flush_result = flush_work(&hba->clk_gating.ungate_work);
 			if (hba->clk_gating.is_suspended && !flush_result)
 				return;
-			spin_lock_irqsave(hba->host->host_lock, flags);
+			spin_lock_irqsave(&hba->clk_gating.lock, flags);
 			goto start;
 		}
 		break;
@@ -1907,17 +1903,17 @@ void ufshcd_hold(struct ufs_hba *hba)
 		 */
 		fallthrough;
 	case REQ_CLKS_ON:
-		spin_unlock_irqrestore(hba->host->host_lock, flags);
+		spin_unlock_irqrestore(&hba->clk_gating.lock, flags);
 		flush_work(&hba->clk_gating.ungate_work);
 		/* Make sure state is CLKS_ON before returning */
-		spin_lock_irqsave(hba->host->host_lock, flags);
+		spin_lock_irqsave(&hba->clk_gating.lock, flags);
 		goto start;
 	default:
 		dev_err(hba->dev, "%s: clk gating is in invalid state %d\n",
 			__func__, hba->clk_gating.state);
 		break;
 	}
-	spin_unlock_irqrestore(hba->host->host_lock, flags);
+	spin_unlock_irqrestore(&hba->clk_gating.lock, flags);
 }
 EXPORT_SYMBOL_GPL(ufshcd_hold);
@@ -1925,10 +1921,9 @@ static void ufshcd_gate_work(struct work_struct *work)
 {
 	struct ufs_hba *hba = container_of(work, struct ufs_hba,
 					   clk_gating.gate_work.work);
-	unsigned long flags;
 	int ret;
 
-	spin_lock_irqsave(hba->host->host_lock, flags);
+	scoped_guard(spinlock_irqsave, &hba->clk_gating.lock) {
 	/*
 	 * In case you are here to cancel this work the gating state
 	 * would be marked as REQ_CLKS_ON. In this case save time by
@@ -1936,17 +1931,22 @@ static void ufshcd_gate_work(struct work_struct *work)
 	 * state to CLKS_ON.
 	 */
 	if (hba->clk_gating.is_suspended ||
-	    (hba->clk_gating.state != REQ_CLKS_OFF)) {
+	    hba->clk_gating.state != REQ_CLKS_OFF) {
 		hba->clk_gating.state = CLKS_ON;
 		trace_ufshcd_clk_gating(dev_name(hba->dev),
 					hba->clk_gating.state);
-		goto rel_lock;
+			return;
 	}
-	if (ufshcd_is_ufs_dev_busy(hba) || hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL)
-		goto rel_lock;
+		if (hba->clk_gating.active_reqs)
+			return;
+	}
 
-	spin_unlock_irqrestore(hba->host->host_lock, flags);
+	scoped_guard(spinlock_irqsave, hba->host->host_lock) {
+		if (ufshcd_is_ufs_dev_busy(hba) ||
+		    hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL)
+			return;
+	}
 
 	/* put the link into hibern8 mode before turning off clocks */
 	if (ufshcd_can_hibern8_during_gating(hba)) {
@@ -1957,7 +1957,7 @@ static void ufshcd_gate_work(struct work_struct *work)
 				__func__, ret);
 			trace_ufshcd_clk_gating(dev_name(hba->dev),
 						hba->clk_gating.state);
-			goto out;
+			return;
 		}
 		ufshcd_set_link_hibern8(hba);
 	}
@@ -1977,33 +1977,34 @@ static void ufshcd_gate_work(struct work_struct *work)
 	 * prevent from doing cancel work multiple times when there are
 	 * new requests arriving before the current cancel work is done.
 	 */
-	spin_lock_irqsave(hba->host->host_lock, flags);
+	guard(spinlock_irqsave)(&hba->clk_gating.lock);
 	if (hba->clk_gating.state == REQ_CLKS_OFF) {
 		hba->clk_gating.state = CLKS_OFF;
 		trace_ufshcd_clk_gating(dev_name(hba->dev),
 					hba->clk_gating.state);
 	}
-rel_lock:
-	spin_unlock_irqrestore(hba->host->host_lock, flags);
-out:
-	return;
 }
 
-/* host lock must be held before calling this variant */
 static void __ufshcd_release(struct ufs_hba *hba)
 {
+	lockdep_assert_held(&hba->clk_gating.lock);
+
 	if (!ufshcd_is_clkgating_allowed(hba))
 		return;
 
 	hba->clk_gating.active_reqs--;
 
 	if (hba->clk_gating.active_reqs || hba->clk_gating.is_suspended ||
-	    hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL ||
-	    hba->outstanding_tasks || !hba->clk_gating.is_initialized ||
-	    hba->active_uic_cmd || hba->uic_async_done ||
+	    !hba->clk_gating.is_initialized ||
 	    hba->clk_gating.state == CLKS_OFF)
 		return;
 
+	scoped_guard(spinlock_irqsave, hba->host->host_lock) {
+		if (ufshcd_has_pending_tasks(hba) ||
+		    hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL)
+			return;
+	}
+
 	hba->clk_gating.state = REQ_CLKS_OFF;
 	trace_ufshcd_clk_gating(dev_name(hba->dev), hba->clk_gating.state);
 	queue_delayed_work(hba->clk_gating.clk_gating_workq,
@@ -2013,11 +2014,8 @@ static void __ufshcd_release(struct ufs_hba *hba)
 void ufshcd_release(struct ufs_hba *hba)
 {
-	unsigned long flags;
+	guard(spinlock_irqsave)(&hba->clk_gating.lock);
 
-	spin_lock_irqsave(hba->host->host_lock, flags);
 	__ufshcd_release(hba);
-	spin_unlock_irqrestore(hba->host->host_lock, flags);
 }
 EXPORT_SYMBOL_GPL(ufshcd_release);
@@ -2032,11 +2030,9 @@ static ssize_t ufshcd_clkgate_delay_show(struct device *dev,
 void ufshcd_clkgate_delay_set(struct device *dev, unsigned long value)
 {
 	struct ufs_hba *hba = dev_get_drvdata(dev);
-	unsigned long flags;
 
-	spin_lock_irqsave(hba->host->host_lock, flags);
+	guard(spinlock_irqsave)(&hba->clk_gating.lock);
 	hba->clk_gating.delay_ms = value;
-	spin_unlock_irqrestore(hba->host->host_lock, flags);
 }
 EXPORT_SYMBOL_GPL(ufshcd_clkgate_delay_set);
@@ -2064,7 +2060,6 @@ static ssize_t ufshcd_clkgate_enable_store(struct device *dev,
 		struct device_attribute *attr, const char *buf, size_t count)
 {
 	struct ufs_hba *hba = dev_get_drvdata(dev);
-	unsigned long flags;
 	u32 value;
 
 	if (kstrtou32(buf, 0, &value))
@@ -2072,9 +2067,10 @@ static ssize_t ufshcd_clkgate_enable_store(struct device *dev,
 	value = !!value;
 
-	spin_lock_irqsave(hba->host->host_lock, flags);
+	guard(spinlock_irqsave)(&hba->clk_gating.lock);
+
 	if (value == hba->clk_gating.is_enabled)
-		goto out;
+		return count;
 
 	if (value)
 		__ufshcd_release(hba);
@@ -2082,8 +2078,7 @@ static ssize_t ufshcd_clkgate_enable_store(struct device *dev,
 		hba->clk_gating.active_reqs++;
 
 	hba->clk_gating.is_enabled = value;
-out:
-	spin_unlock_irqrestore(hba->host->host_lock, flags);
+
 	return count;
 }
@@ -2125,6 +2120,8 @@ static void ufshcd_init_clk_gating(struct ufs_hba *hba)
 	INIT_DELAYED_WORK(&hba->clk_gating.gate_work, ufshcd_gate_work);
 	INIT_WORK(&hba->clk_gating.ungate_work, ufshcd_ungate_work);
 
+	spin_lock_init(&hba->clk_gating.lock);
+
 	hba->clk_gating.clk_gating_workq = alloc_ordered_workqueue(
 		"ufs_clk_gating_%d", WQ_MEM_RECLAIM | WQ_HIGHPRI,
 		hba->host->host_no);
@@ -2154,19 +2151,17 @@ static void ufshcd_clk_scaling_start_busy(struct ufs_hba *hba)
 {
 	bool queue_resume_work = false;
 	ktime_t curr_t = ktime_get();
-	unsigned long flags;
 
 	if (!ufshcd_is_clkscaling_supported(hba))
 		return;
 
-	spin_lock_irqsave(hba->host->host_lock, flags);
+	guard(spinlock_irqsave)(&hba->clk_scaling.lock);
+
 	if (!hba->clk_scaling.active_reqs++)
 		queue_resume_work = true;
 
-	if (!hba->clk_scaling.is_enabled || hba->pm_op_in_progress) {
-		spin_unlock_irqrestore(hba->host->host_lock, flags);
+	if (!hba->clk_scaling.is_enabled || hba->pm_op_in_progress)
 		return;
-	}
 
 	if (queue_resume_work)
 		queue_work(hba->clk_scaling.workq,
@@ -2182,18 +2177,17 @@ static void ufshcd_clk_scaling_start_busy(struct ufs_hba *hba)
 		hba->clk_scaling.busy_start_t = curr_t;
 		hba->clk_scaling.is_busy_started = true;
 	}
-	spin_unlock_irqrestore(hba->host->host_lock, flags);
 }
 
 static void ufshcd_clk_scaling_update_busy(struct ufs_hba *hba)
 {
 	struct ufs_clk_scaling *scaling = &hba->clk_scaling;
-	unsigned long flags;
 
 	if (!ufshcd_is_clkscaling_supported(hba))
 		return;
 
-	spin_lock_irqsave(hba->host->host_lock, flags);
+	guard(spinlock_irqsave)(&hba->clk_scaling.lock);
+
 	hba->clk_scaling.active_reqs--;
 	if (!scaling->active_reqs && scaling->is_busy_started) {
 		scaling->tot_busy_t += ktime_to_us(ktime_sub(ktime_get(),
@@ -2201,7 +2195,6 @@ static void ufshcd_clk_scaling_update_busy(struct ufs_hba *hba)
 		scaling->busy_start_t = 0;
 		scaling->is_busy_started = false;
 	}
-	spin_unlock_irqrestore(hba->host->host_lock, flags);
 }
 static inline int ufshcd_monitor_opcode2dir(u8 opcode)
@@ -5195,12 +5188,12 @@ static void ufshcd_lu_init(struct ufs_hba *hba, struct scsi_device *sdev)
 }
 
 /**
- * ufshcd_slave_alloc - handle initial SCSI device configurations
+ * ufshcd_sdev_init - handle initial SCSI device configurations
  * @sdev: pointer to SCSI device
  *
  * Return: success.
  */
-static int ufshcd_slave_alloc(struct scsi_device *sdev)
+static int ufshcd_sdev_init(struct scsi_device *sdev)
 {
 	struct ufs_hba *hba;
@@ -5243,13 +5236,13 @@ static int ufshcd_change_queue_depth(struct scsi_device *sdev, int depth)
 }
 
 /**
- * ufshcd_device_configure - adjust SCSI device configurations
+ * ufshcd_sdev_configure - adjust SCSI device configurations
  * @sdev: pointer to SCSI device
  * @lim: queue limits
  *
 * Return: 0 (success).
 */
-static int ufshcd_device_configure(struct scsi_device *sdev,
+static int ufshcd_sdev_configure(struct scsi_device *sdev,
 				 struct queue_limits *lim)
 {
 	struct ufs_hba *hba = shost_priv(sdev->host);
@@ -5281,10 +5274,10 @@ static int ufshcd_device_configure(struct scsi_device *sdev,
 }
 
 /**
- * ufshcd_slave_destroy - remove SCSI device configurations
+ * ufshcd_sdev_destroy - remove SCSI device configurations
  * @sdev: pointer to SCSI device
 */
-static void ufshcd_slave_destroy(struct scsi_device *sdev)
+static void ufshcd_sdev_destroy(struct scsi_device *sdev)
 {
 	struct ufs_hba *hba;
 	unsigned long flags;
@@ -8259,7 +8252,9 @@ static void ufshcd_rtc_work(struct work_struct *work)
 	hba = container_of(to_delayed_work(work), struct ufs_hba, ufs_rtc_update_work);
 
 	/* Update RTC only when there are no requests in progress and UFSHCI is operational */
-	if (!ufshcd_is_ufs_dev_busy(hba) && hba->ufshcd_state == UFSHCD_STATE_OPERATIONAL)
+	if (!ufshcd_is_ufs_dev_busy(hba) &&
+	    hba->ufshcd_state == UFSHCD_STATE_OPERATIONAL &&
+	    !hba->clk_gating.active_reqs)
 		ufshcd_update_rtc(hba);
 
 	if (ufshcd_is_ufs_dev_active(hba) && hba->dev_info.rtc_update_period)
@@ -8968,9 +8963,9 @@ static const struct scsi_host_template ufshcd_driver_template = {
 	.map_queues		= ufshcd_map_queues,
 	.queuecommand		= ufshcd_queuecommand,
 	.mq_poll		= ufshcd_poll,
-	.slave_alloc		= ufshcd_slave_alloc,
-	.device_configure	= ufshcd_device_configure,
-	.slave_destroy		= ufshcd_slave_destroy,
+	.sdev_init		= ufshcd_sdev_init,
+	.sdev_configure		= ufshcd_sdev_configure,
+	.sdev_destroy		= ufshcd_sdev_destroy,
 	.change_queue_depth	= ufshcd_change_queue_depth,
 	.eh_abort_handler	= ufshcd_abort,
 	.eh_device_reset_handler = ufshcd_eh_device_reset_handler,
@@ -9156,7 +9151,6 @@ static int ufshcd_setup_clocks(struct ufs_hba *hba, bool on)
 	int ret = 0;
 	struct ufs_clk_info *clki;
 	struct list_head *head = &hba->clk_list_head;
-	unsigned long flags;
 	ktime_t start = ktime_get();
 	bool clk_state_changed = false;
@@ -9207,11 +9201,10 @@ static int ufshcd_setup_clocks(struct ufs_hba *hba, bool on)
 				clk_disable_unprepare(clki->clk);
 		}
 	} else if (!ret && on) {
-		spin_lock_irqsave(hba->host->host_lock, flags);
-		hba->clk_gating.state = CLKS_ON;
+		scoped_guard(spinlock_irqsave, &hba->clk_gating.lock)
+			hba->clk_gating.state = CLKS_ON;
 		trace_ufshcd_clk_gating(dev_name(hba->dev),
 					hba->clk_gating.state);
-		spin_unlock_irqrestore(hba->host->host_lock, flags);
 	}
 
 	if (clk_state_changed)


@@ -322,7 +322,7 @@ static inline void mts_urb_abort(struct mts_desc* desc) {
 	usb_kill_urb( desc->urb );
 }
 
-static int mts_slave_alloc (struct scsi_device *s)
+static int mts_sdev_init (struct scsi_device *s)
 {
 	s->inquiry_len = 0x24;
 	return 0;
@@ -626,7 +626,7 @@ static const struct scsi_host_template mts_scsi_host_template = {
 	.this_id =		-1,
 	.emulated =		1,
 	.dma_alignment =	511,
-	.slave_alloc =		mts_slave_alloc,
+	.sdev_init =		mts_sdev_init,
 	.max_sectors=		256, /* 128 K */
 };


@@ -64,7 +64,7 @@ static const char* host_info(struct Scsi_Host *host)
 	return us->scsi_name;
 }
 
-static int slave_alloc (struct scsi_device *sdev)
+static int sdev_init (struct scsi_device *sdev)
 {
 	struct us_data *us = host_to_us(sdev->host);
@@ -88,7 +88,7 @@ static int slave_alloc (struct scsi_device *sdev)
 	return 0;
 }
 
-static int device_configure(struct scsi_device *sdev, struct queue_limits *lim)
+static int sdev_configure(struct scsi_device *sdev, struct queue_limits *lim)
 {
 	struct us_data *us = host_to_us(sdev->host);
 	struct device *dev = us->pusb_dev->bus->sysdev;
@@ -127,7 +127,7 @@ static int device_configure(struct scsi_device *sdev, struct queue_limits *lim)
 			lim->max_hw_sectors, dma_max_mapping_size(dev) >> SECTOR_SHIFT);
 
 	/*
-	 * We can't put these settings in slave_alloc() because that gets
+	 * We can't put these settings in sdev_init() because that gets
 	 * called before the device type is known. Consequently these
 	 * settings can't be overridden via the scsi devinfo mechanism.
 	 */
@@ -637,8 +637,8 @@ static const struct scsi_host_template usb_stor_host_template = {
 	/* unknown initiator id */
 	.this_id =			-1,
 
-	.slave_alloc =			slave_alloc,
-	.device_configure =		device_configure,
+	.sdev_init =			sdev_init,
+	.sdev_configure =		sdev_configure,
 	.target_alloc =			target_alloc,
 
 	/* lots of sg segments can be handled */


@@ -817,7 +817,7 @@ static int uas_target_alloc(struct scsi_target *starget)
 	return 0;
 }
 
-static int uas_slave_alloc(struct scsi_device *sdev)
+static int uas_sdev_init(struct scsi_device *sdev)
 {
 	struct uas_dev_info *devinfo =
 		(struct uas_dev_info *)sdev->host->hostdata;
@@ -832,7 +832,7 @@ static int uas_slave_alloc(struct scsi_device *sdev)
 	return 0;
 }
 
-static int uas_device_configure(struct scsi_device *sdev,
+static int uas_sdev_configure(struct scsi_device *sdev,
 			      struct queue_limits *lim)
 {
 	struct uas_dev_info *devinfo = sdev->hostdata;
@@ -905,8 +905,8 @@ static const struct scsi_host_template uas_host_template = {
 	.name = "uas",
 	.queuecommand = uas_queuecommand,
 	.target_alloc = uas_target_alloc,
-	.slave_alloc = uas_slave_alloc,
-	.device_configure = uas_device_configure,
+	.sdev_init = uas_sdev_init,
+	.sdev_configure = uas_sdev_configure,
 	.eh_abort_handler = uas_eh_abort_handler,
 	.eh_device_reset_handler = uas_eh_device_reset_handler,
 	.this_id = -1,


@@ -1199,10 +1199,9 @@ extern int ata_std_bios_param(struct scsi_device *sdev,
 				      struct block_device *bdev,
 				      sector_t capacity, int geom[]);
 extern void ata_scsi_unlock_native_capacity(struct scsi_device *sdev);
-extern int ata_scsi_slave_alloc(struct scsi_device *sdev);
-int ata_scsi_device_configure(struct scsi_device *sdev,
-		struct queue_limits *lim);
-extern void ata_scsi_slave_destroy(struct scsi_device *sdev);
+extern int ata_scsi_sdev_init(struct scsi_device *sdev);
+int ata_scsi_sdev_configure(struct scsi_device *sdev, struct queue_limits *lim);
+extern void ata_scsi_sdev_destroy(struct scsi_device *sdev);
 extern int ata_scsi_change_queue_depth(struct scsi_device *sdev,
 					int queue_depth);
 extern int ata_change_queue_depth(struct ata_port *ap, struct scsi_device *sdev,
@@ -1301,7 +1300,7 @@ extern struct ata_port *ata_port_alloc(struct ata_host *host);
 extern void ata_port_free(struct ata_port *ap);
 extern int ata_tport_add(struct device *parent, struct ata_port *ap);
 extern void ata_tport_delete(struct ata_port *ap);
-int ata_sas_device_configure(struct scsi_device *sdev, struct queue_limits *lim,
+int ata_sas_sdev_configure(struct scsi_device *sdev, struct queue_limits *lim,
 			   struct ata_port *ap);
 extern int ata_sas_queuecmd(struct scsi_cmnd *cmd, struct ata_port *ap);
 extern void ata_tf_to_fis(const struct ata_taskfile *tf,
@@ -1458,8 +1457,8 @@ extern const struct attribute_group *ata_common_sdev_groups[];
 	.this_id		= ATA_SHT_THIS_ID,		\
 	.emulated		= ATA_SHT_EMULATED,		\
 	.proc_name		= drv_name,			\
-	.slave_alloc		= ata_scsi_slave_alloc,		\
-	.slave_destroy		= ata_scsi_slave_destroy,	\
+	.sdev_init		= ata_scsi_sdev_init,		\
+	.sdev_destroy		= ata_scsi_sdev_destroy,	\
 	.bios_param		= ata_std_bios_param,		\
 	.unlock_native_capacity	= ata_scsi_unlock_native_capacity,\
 	.max_sectors		= ATA_MAX_SECTORS_LBA48
@@ -1468,13 +1467,13 @@ extern const struct attribute_group *ata_common_sdev_groups[];
 	__ATA_BASE_SHT(drv_name),				\
 	.can_queue		= ATA_DEF_QUEUE,		\
 	.tag_alloc_policy	= BLK_TAG_ALLOC_RR,		\
-	.device_configure	= ata_scsi_device_configure
+	.sdev_configure		= ata_scsi_sdev_configure
 
 #define ATA_SUBBASE_SHT_QD(drv_name, drv_qd)			\
 	__ATA_BASE_SHT(drv_name),				\
 	.can_queue		= drv_qd,			\
 	.tag_alloc_policy	= BLK_TAG_ALLOC_RR,		\
-	.device_configure	= ata_scsi_device_configure
+	.sdev_configure		= ata_scsi_sdev_configure
 
 #define ATA_BASE_SHT(drv_name)					\
 	ATA_SUBBASE_SHT(drv_name),				\


@@ -963,7 +963,7 @@ int fc_queuecommand(struct Scsi_Host *, struct scsi_cmnd *);
 int fc_eh_abort(struct scsi_cmnd *);
 int fc_eh_device_reset(struct scsi_cmnd *);
 int fc_eh_host_reset(struct scsi_cmnd *);
-int fc_slave_alloc(struct scsi_device *);
+int fc_sdev_init(struct scsi_device *);
 
 /*
  * ELS/CT interface


@@ -683,8 +683,7 @@ int sas_phy_reset(struct sas_phy *phy, int hard_reset);
 int sas_phy_enable(struct sas_phy *phy, int enable);
 extern int sas_queuecommand(struct Scsi_Host *, struct scsi_cmnd *);
 extern int sas_target_alloc(struct scsi_target *);
-int sas_device_configure(struct scsi_device *dev,
-			 struct queue_limits *lim);
+int sas_sdev_configure(struct scsi_device *dev, struct queue_limits *lim);
 extern int sas_change_queue_depth(struct scsi_device *, int new_depth);
 extern int sas_bios_param(struct scsi_device *, struct block_device *,
 			  sector_t capacity, int *hsc);
@@ -703,7 +702,7 @@ int sas_eh_device_reset_handler(struct scsi_cmnd *cmd);
 int sas_eh_target_reset_handler(struct scsi_cmnd *cmd);
 extern void sas_target_destroy(struct scsi_target *);
-extern int sas_slave_alloc(struct scsi_device *);
+extern int sas_sdev_init(struct scsi_device *);
 extern int sas_ioctl(struct scsi_device *sdev, unsigned int cmd,
 		     void __user *arg);
 extern int sas_drain_work(struct sas_ha_struct *ha);
@@ -750,8 +749,8 @@ void sas_notify_phy_event(struct asd_sas_phy *phy, enum phy_event event,
 #endif
 
 #define LIBSAS_SHT_BASE _LIBSAS_SHT_BASE			\
-	.device_configure = sas_device_configure,		\
-	.slave_alloc = sas_slave_alloc,				\
+	.sdev_configure = sas_sdev_configure,			\
+	.sdev_init = sas_sdev_init,				\
 
 #define LIBSAS_SHT_BASE_NO_SLAVE_INIT _LIBSAS_SHT_BASE


@@ -59,7 +59,7 @@ struct iscsi_bsg_host_vendor {
  */
 struct iscsi_bsg_host_vendor_reply {
 	/* start of vendor response area */
-	uint32_t vendor_rsp[0];
+	DECLARE_FLEX_ARRAY(uint32_t, vendor_rsp);
 };
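DECLARE_FLEX_ARRAY() (from <linux/stddef.h>) is used here because vendor_rsp is the structure's only member: a bare C99 flexible array member may not be a struct's sole member, and the old [0] zero-length-array idiom defeats the compiler's array-bounds checking. The macro keeps the layout while satisfying both. A tiny illustration with invented names:

	#include <linux/stddef.h>
	#include <linux/types.h>

	/* Hypothetical reply that carries nothing but a variable-length
	 * payload.  "u32 data[];" alone would be rejected, so the helper
	 * wraps it while preserving the old layout. */
	struct demo_vendor_reply {
		DECLARE_FLEX_ARRAY(u32, data);
	};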


@@ -155,7 +155,7 @@ struct scsi_device {
 	blist_flags_t sdev_bflags; /* black/white flags as also found in
 				 * scsi_devinfo.[hc]. For now used only to
-				 * pass settings from slave_alloc to scsi
+				 * pass settings from sdev_init to scsi
 				 * core. */
 	unsigned int eh_timeout;	/* Error handling timeout */
@@ -357,7 +357,7 @@ struct scsi_target {
 	atomic_t		target_blocked;
 
 	/*
-	 * LLDs should set this in the slave_alloc host template callout.
+	 * LLDs should set this in the sdev_init host template callout.
 	 * If set to zero then there is not limit.
 	 */
 	unsigned int		can_queue;


@@ -168,20 +168,20 @@ struct scsi_host_template {
 	 * Return values: 0 on success, non-0 on failure
 	 *
 	 * Deallocation:  If we didn't find any devices at this ID, you will
-	 * get an immediate call to slave_destroy().  If we find something
-	 * here then you will get a call to slave_configure(), then the
+	 * get an immediate call to sdev_destroy().  If we find something
+	 * here then you will get a call to sdev_configure(), then the
 	 * device will be used for however long it is kept around, then when
 	 * the device is removed from the system (or * possibly at reboot
-	 * time), you will then get a call to slave_destroy().  This is
-	 * assuming you implement slave_configure and slave_destroy.
+	 * time), you will then get a call to sdev_destroy().  This is
+	 * assuming you implement sdev_configure and sdev_destroy.
 	 * However, if you allocate memory and hang it off the device struct,
-	 * then you must implement the slave_destroy() routine at a minimum
+	 * then you must implement the sdev_destroy() routine at a minimum
 	 * in order to avoid leaking memory
 	 * each time a device is tore down.
 	 *
 	 * Status: OPTIONAL
 	 */
-	int (* slave_alloc)(struct scsi_device *);
+	int (* sdev_init)(struct scsi_device *);
 
 	/*
 	 * Once the device has responded to an INQUIRY and we know the
@@ -206,28 +206,24 @@ struct scsi_host_template {
 	 *    specific setup basis...
 	 * 6.  Return 0 on success, non-0 on error.  The device will be marked
 	 *     as offline on error so that no access will occur.  If you return
-	 *     non-0, your slave_destroy routine will never get called for this
+	 *     non-0, your sdev_destroy routine will never get called for this
 	 *     device, so don't leave any loose memory hanging around, clean
 	 *     up after yourself before returning non-0
 	 *
 	 * Status: OPTIONAL
-	 *
-	 * Note: slave_configure is the legacy version, use device_configure for
-	 * all new code.  A driver must never define both.
 	 */
-	int (* device_configure)(struct scsi_device *, struct queue_limits *lim);
-	int (* slave_configure)(struct scsi_device *);
+	int (* sdev_configure)(struct scsi_device *, struct queue_limits *lim);
 
 	/*
 	 * Immediately prior to deallocating the device and after all activity
 	 * has ceased the mid layer calls this point so that the low level
 	 * driver may completely detach itself from the scsi device and vice
 	 * versa.  The low level driver is responsible for freeing any memory
-	 * it allocated in the slave_alloc or slave_configure calls.
+	 * it allocated in the sdev_init or sdev_configure calls.
 	 *
 	 * Status: OPTIONAL
 	 */
-	void (* slave_destroy)(struct scsi_device *);
+	void (* sdev_destroy)(struct scsi_device *);
 
 	/*
 	 * Before the mid layer attempts to scan for a new device attached
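Taken together, the renamed hooks keep the same three-stage life cycle the comments describe, just under the sdev_* names. A minimal, hypothetical template wiring them up might look like the sketch below (the xxx_* functions and the queue depth are placeholders; mandatory callbacks such as queuecommand are omitted for brevity):

	#include <linux/blkdev.h>
	#include <linux/slab.h>
	#include <scsi/scsi_device.h>
	#include <scsi/scsi_host.h>

	struct xxx_dev_data {
		int opened;		/* invented per-device state */
	};

	static int xxx_sdev_init(struct scsi_device *sdev)
	{
		/* Runs before INQUIRY: allocate per-device state only. */
		sdev->hostdata = kzalloc(sizeof(struct xxx_dev_data), GFP_KERNEL);
		return sdev->hostdata ? 0 : -ENOMEM;
	}

	static int xxx_sdev_configure(struct scsi_device *sdev,
				      struct queue_limits *lim)
	{
		/* Device type is known here; tune limits and queue depth. */
		scsi_change_queue_depth(sdev, 64);
		return 0;
	}

	static void xxx_sdev_destroy(struct scsi_device *sdev)
	{
		/* Undo whatever xxx_sdev_init() allocated. */
		kfree(sdev->hostdata);
		sdev->hostdata = NULL;
	}

	static const struct scsi_host_template xxx_template = {
		.name		= "xxx",
		.this_id	= -1,
		.sdev_init	= xxx_sdev_init,
		.sdev_configure	= xxx_sdev_configure,
		.sdev_destroy	= xxx_sdev_destroy,
	};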


@@ -405,6 +405,9 @@ enum clk_gating_state {
  *	delay_ms
  * @ungate_work: worker to turn on clocks that will be used in case of
  *	interrupt context
+ * @clk_gating_workq: workqueue for clock gating work.
+ * @lock: serialize access to some struct ufs_clk_gating members. An outer lock
+ *	relative to the host lock
  * @state: the current clocks state
  * @delay_ms: gating delay in ms
  * @is_suspended: clk gating is suspended when set to 1 which can be used
@@ -415,11 +418,14 @@ enum clk_gating_state {
  * @is_initialized: Indicates whether clock gating is initialized or not
  * @active_reqs: number of requests that are pending and should be waited for
  *	completion before gating clocks.
- * @clk_gating_workq: workqueue for clock gating work.
  */
 struct ufs_clk_gating {
 	struct delayed_work gate_work;
 	struct work_struct ungate_work;
+	struct workqueue_struct *clk_gating_workq;
+	spinlock_t lock;
 	enum clk_gating_state state;
 	unsigned long delay_ms;
 	bool is_suspended;
@@ -428,11 +434,14 @@ struct ufs_clk_gating {
 	bool is_enabled;
 	bool is_initialized;
 	int active_reqs;
-	struct workqueue_struct *clk_gating_workq;
 };
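The @lock comment above encodes a lock hierarchy: clk_gating.lock is taken outside the SCSI host lock, which matches the reworked __ufshcd_release() earlier in this merge, where the host lock is only acquired inside a scoped_guard while clk_gating.lock is already held. A schematic sketch of the permitted nesting (the checks are placeholders, not real driver logic):

	#include <linux/cleanup.h>
	#include <linux/spinlock.h>
	#include <ufs/ufshcd.h>

	/* Allowed order:  clk_gating.lock -> host_lock.
	 * The reverse order would risk an ABBA deadlock. */
	static bool demo_can_gate(struct ufs_hba *hba)
	{
		guard(spinlock_irqsave)(&hba->clk_gating.lock);

		if (hba->clk_gating.active_reqs)
			return false;

		scoped_guard(spinlock_irqsave, hba->host->host_lock) {
			if (hba->outstanding_reqs)
				return false;
		}

		return true;
	}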
 /**
  * struct ufs_clk_scaling - UFS clock scaling related data
+ * @workq: workqueue to schedule devfreq suspend/resume work
+ * @suspend_work: worker to suspend devfreq
+ * @resume_work: worker to resume devfreq
+ * @lock: serialize access to some struct ufs_clk_scaling members
  * @active_reqs: number of requests that are pending. If this is zero when
  *	devfreq ->target() function is called then schedule "suspend_work" to
  *	suspend devfreq.
@@ -442,9 +451,6 @@ struct ufs_clk_gating {
  * @enable_attr: sysfs attribute to enable/disable clock scaling
  * @saved_pwr_info: UFS power mode may also be changed during scaling and this
  *	one keeps track of previous power mode.
- * @workq: workqueue to schedule devfreq suspend/resume work
- * @suspend_work: worker to suspend devfreq
- * @resume_work: worker to resume devfreq
  * @target_freq: frequency requested by devfreq framework
  * @min_gear: lowest HS gear to scale down to
  * @is_enabled: tracks if scaling is currently enabled or not, controlled by
@@ -456,15 +462,18 @@ struct ufs_clk_gating {
  * @is_suspended: tracks if devfreq is suspended or not
  */
 struct ufs_clk_scaling {
+	struct workqueue_struct *workq;
+	struct work_struct suspend_work;
+	struct work_struct resume_work;
+	spinlock_t lock;
 	int active_reqs;
 	unsigned long tot_busy_t;
 	ktime_t window_start_t;
 	ktime_t busy_start_t;
 	struct device_attribute enable_attr;
 	struct ufs_pa_layer_attr saved_pwr_info;
-	struct workqueue_struct *workq;
-	struct work_struct suspend_work;
-	struct work_struct resume_work;
 	unsigned long target_freq;
 	u32 min_gear;
 	bool is_enabled;