SCSI misc on 20231102

Updates to the usual drivers (ufs, megaraid_sas, lpfc, target, ibmvfc,
 scsi_debug) plus the usual assorted minor fixes and updates.  The
 major change this time around is a prep patch for rethreading the
 driver reset handler API so that it no longer takes a scsi_cmd
 structure, which starts to reduce various drivers' dependence on
 scsi_cmd in error handling.
 
 Signed-off-by: James E.J. Bottomley <jejb@linux.ibm.com>
 -----BEGIN PGP SIGNATURE-----
 
 iJwEABMIAEQWIQTnYEDbdso9F2cI+arnQslM7pishQUCZUORLiYcamFtZXMuYm90
 dG9tbGV5QGhhbnNlbnBhcnRuZXJzaGlwLmNvbQAKCRDnQslM7pishQ4WAQDDIhzp
 /PiJBBtt0U9ii/lYqRLrOVnN0extKEgEGO+FbwEAssKgs+5Jn/7XCgdpSrx8Co3/
 0cPXrZGxs7tFpFWLZjM=
 =AlRU
 -----END PGP SIGNATURE-----

Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI updates from James Bottomley:
 "Updates to the usual drivers (ufs, megaraid_sas, lpfc, target, ibmvfc,
  scsi_debug) plus the usual assorted minor fixes and updates.

  The major change this time around is a prep patch for rethreading the
  driver reset handler API so that it no longer takes a scsi_cmd
  structure, which starts to reduce various drivers' dependence on
  scsi_cmd in error handling"

* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (132 commits)
  scsi: ufs: core: Leave space for '\0' in utf8 desc string
  scsi: ufs: core: Conversion to bool not necessary
  scsi: ufs: core: Fix race between force complete and ISR
  scsi: megaraid: Fix up debug message in megaraid_abort_and_reset()
  scsi: aic79xx: Fix up NULL command in ahd_done()
  scsi: message: fusion: Initialize return value in mptfc_bus_reset()
  scsi: mpt3sas: Fix loop logic
  scsi: snic: Remove useless code in snic_dr_clean_pending_req()
  scsi: core: Add comment to target_destroy in scsi_host_template
  scsi: core: Clean up scsi_dev_queue_ready()
  scsi: pmcraid: Add missing scsi_device_put() in pmcraid_eh_target_reset_handler()
  scsi: target: core: Fix kernel-doc comment
  scsi: pmcraid: Fix kernel-doc comment
  scsi: core: Handle depopulation and restoration in progress
  scsi: ufs: core: Add support for parsing OPP
  scsi: ufs: core: Add OPP support for scaling clocks and regulators
  scsi: ufs: dt-bindings: common: Add OPP table
  scsi: scsi_debug: Add param to control sdev's allow_restart
  scsi: scsi_debug: Add debugfs interface to fail target reset
  scsi: scsi_debug: Add new error injection type: Reset LUN failed
  ...
Commit 6ed92e559a by Linus Torvalds, 2023-11-02 15:13:50 -10:00
103 changed files with 2755 additions and 1370 deletions


@ -20,11 +20,25 @@ properties:
items:
- description: Minimum frequency for given clock in Hz
- description: Maximum frequency for given clock in Hz
deprecated: true
description: |
Preferred is operating-points-v2.
Array of <min max> operating frequencies in Hz stored in the same order
as the clocks property. If this property is not defined or a value in the
array is "0" then it is assumed that the frequency is set by the parent
clock or a fixed rate clock source.
as the clocks property. If either this property or operating-points-v2 is
not defined or a value in the array is "0" then it is assumed that the
frequency is set by the parent clock or a fixed rate clock source.
operating-points-v2:
description:
Preferred over freq-table-hz.
If present, each OPP must contain array of frequencies stored in the same
order for each clock. If clock frequency in the array is "0" then it is
assumed that the frequency is set by the parent clock or a fixed rate
clock source.
opp-table:
type: object
interrupts:
maxItems: 1
@ -75,8 +89,23 @@ properties:
dependencies:
freq-table-hz: [ clocks ]
operating-points-v2: [ clocks, clock-names ]
required:
- interrupts
allOf:
- if:
required:
- freq-table-hz
then:
properties:
operating-points-v2: false
- if:
required:
- operating-points-v2
then:
properties:
freq-table-hz: false
additionalProperties: true
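
For context, a hedged sketch of what the updated binding allows: a
controller node pointing at an OPP table instead of freq-table-hz.
Everything in it (node name, compatible string, clock specifiers,
frequencies) is illustrative, not taken from a real board file. Note the
binding requires clocks and clock-names alongside operating-points-v2,
and each OPP carries one frequency per clock, in clocks order:

	ufshc@1d84000 {
		compatible = "vendor,ufshc";	/* placeholder */
		clocks = <&gcc 10>, <&gcc 11>;	/* placeholder specifiers */
		clock-names = "core_clk", "ice_core_clk";
		operating-points-v2 = <&ufs_opp_table>;

		ufs_opp_table: opp-table {
			compatible = "operating-points-v2";

			opp-75000000 {
				/* one entry per clock, in clocks order */
				opp-hz = /bits/ 64 <75000000 150000000>;
			};

			opp-300000000 {
				opp-hz = /bits/ 64 <300000000 300000000>;
			};
		};
	};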


@ -11221,7 +11221,6 @@ M: Sagi Grimberg <sagi@grimberg.me>
L: linux-rdma@vger.kernel.org
L: target-devel@vger.kernel.org
S: Supported
W: http://www.linux-iscsi.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending.git master
F: drivers/infiniband/ulp/isert
@ -13623,6 +13622,7 @@ MEGARAID SCSI/SAS DRIVERS
M: Kashyap Desai <kashyap.desai@broadcom.com>
M: Sumit Saxena <sumit.saxena@broadcom.com>
M: Shivasharan S <shivasharan.srikanteshwara@broadcom.com>
M: Chandrakanth patil <chandrakanth.patil@broadcom.com>
L: megaraidlinux.pdl@broadcom.com
L: linux-scsi@vger.kernel.org
S: Maintained
@ -19275,7 +19275,6 @@ M: "Martin K. Petersen" <martin.petersen@oracle.com>
L: linux-scsi@vger.kernel.org
L: target-devel@vger.kernel.org
S: Supported
W: http://www.linux-iscsi.org
Q: https://patchwork.kernel.org/project/target-devel/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mkp/scsi.git
F: Documentation/target/


@ -3867,6 +3867,9 @@ static const struct target_core_fabric_ops srpt_template = {
.tfc_discovery_attrs = srpt_da_attrs,
.tfc_wwn_attrs = srpt_wwn_attrs,
.tfc_tpg_attrib_attrs = srpt_tpg_attrib_attrs,
.default_submit_type = TARGET_DIRECT_SUBMIT,
.direct_submit_supp = 1,
};
/**


@ -1328,8 +1328,8 @@ mptctl_getiocinfo (MPT_ADAPTER *ioc, unsigned long arg, unsigned int data_size)
/* Set the Version Strings.
*/
strncpy (karg->driverVersion, MPT_LINUX_PACKAGE_NAME, MPT_IOCTL_VERSION_LENGTH);
karg->driverVersion[MPT_IOCTL_VERSION_LENGTH-1]='\0';
strscpy_pad(karg->driverVersion, MPT_LINUX_PACKAGE_NAME,
sizeof(karg->driverVersion));
karg->busChangeEvent = 0;
karg->hostId = ioc->pfacts[port].PortSCSIID;
@ -1493,10 +1493,8 @@ mptctl_readtest (MPT_ADAPTER *ioc, unsigned long arg)
#else
karg.chip_type = ioc->pcidev->device;
#endif
strncpy (karg.name, ioc->name, MPT_MAX_NAME);
karg.name[MPT_MAX_NAME-1]='\0';
strncpy (karg.product, ioc->prod_name, MPT_PRODUCT_LENGTH);
karg.product[MPT_PRODUCT_LENGTH-1]='\0';
strscpy_pad(karg.name, ioc->name, sizeof(karg.name));
strscpy_pad(karg.product, ioc->prod_name, sizeof(karg.product));
/* Copy the data from kernel memory to user memory
*/
@ -2394,7 +2392,7 @@ mptctl_hp_hostinfo(MPT_ADAPTER *ioc, unsigned long arg, unsigned int data_size)
cfg.dir = 0; /* read */
cfg.timeout = 10;
strncpy(karg.serial_number, " ", 24);
strscpy_pad(karg.serial_number, " ", sizeof(karg.serial_number));
if (mpt_config(ioc, &cfg) == 0) {
if (cfg.cfghdr.hdr->PageLength > 0) {
/* Issue the second config page request */
@ -2408,8 +2406,9 @@ mptctl_hp_hostinfo(MPT_ADAPTER *ioc, unsigned long arg, unsigned int data_size)
if (mpt_config(ioc, &cfg) == 0) {
ManufacturingPage0_t *pdata = (ManufacturingPage0_t *) pbuf;
if (strlen(pdata->BoardTracerNumber) > 1) {
strscpy(karg.serial_number,
pdata->BoardTracerNumber, 24);
strscpy_pad(karg.serial_number,
pdata->BoardTracerNumber,
sizeof(karg.serial_number));
}
}
dma_free_coherent(&ioc->pcidev->dev,
@ -2456,7 +2455,7 @@ mptctl_hp_hostinfo(MPT_ADAPTER *ioc, unsigned long arg, unsigned int data_size)
}
}
/*
/*
* Gather ISTWI(Industry Standard Two Wire Interface) Data
*/
if ((mf = mpt_get_msg_frame(mptctl_id, ioc)) == NULL) {
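
The string conversions above all follow one pattern: strncpy() does not
NUL-terminate when the source does not fit, so every call needed a manual
terminator, and plain strscpy() would leave the tail of the buffer
untouched, which matters for structs copied back to user space. A hedged
before/after sketch (buf and src are illustrative, not the driver's
fields):

	#include <linux/string.h>

	#define BUF_LEN 32			/* illustrative size */

	/* Old pattern: if src does not fit, strncpy() leaves buf without
	 * a terminating NUL, so one is written by hand.
	 */
	strncpy(buf, src, BUF_LEN);
	buf[BUF_LEN - 1] = '\0';

	/* New pattern: strscpy_pad() always NUL-terminates, zero-fills
	 * the remainder (no stale kernel memory reaches user space), and
	 * returns -E2BIG when src had to be truncated.
	 */
	strscpy_pad(buf, src, sizeof(buf));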


@ -183,73 +183,109 @@ static struct fc_function_template mptfc_transport_functions = {
};
static int
mptfc_block_error_handler(struct scsi_cmnd *SCpnt,
int (*func)(struct scsi_cmnd *SCpnt),
const char *caller)
mptfc_block_error_handler(struct fc_rport *rport)
{
MPT_SCSI_HOST *hd;
struct scsi_device *sdev = SCpnt->device;
struct Scsi_Host *shost = sdev->host;
struct fc_rport *rport = starget_to_rport(scsi_target(sdev));
struct Scsi_Host *shost = rport_to_shost(rport);
unsigned long flags;
int ready;
MPT_ADAPTER *ioc;
MPT_ADAPTER *ioc;
int loops = 40; /* seconds */
hd = shost_priv(SCpnt->device->host);
hd = shost_priv(shost);
ioc = hd->ioc;
spin_lock_irqsave(shost->host_lock, flags);
while ((ready = fc_remote_port_chkready(rport) >> 16) == DID_IMM_RETRY
|| (loops > 0 && ioc->active == 0)) {
spin_unlock_irqrestore(shost->host_lock, flags);
dfcprintk (ioc, printk(MYIOC_s_DEBUG_FMT
"mptfc_block_error_handler.%d: %d:%llu, port status is "
"%x, active flag %d, deferring %s recovery.\n",
"mptfc_block_error_handler.%d: %s, port status is "
"%x, active flag %d, deferring recovery.\n",
ioc->name, ioc->sh->host_no,
SCpnt->device->id, SCpnt->device->lun,
ready, ioc->active, caller));
dev_name(&rport->dev), ready, ioc->active));
msleep(1000);
spin_lock_irqsave(shost->host_lock, flags);
loops --;
}
spin_unlock_irqrestore(shost->host_lock, flags);
if (ready == DID_NO_CONNECT || !SCpnt->device->hostdata
|| ioc->active == 0) {
if (ready == DID_NO_CONNECT || ioc->active == 0) {
dfcprintk (ioc, printk(MYIOC_s_DEBUG_FMT
"%s.%d: %d:%llu, failing recovery, "
"port state %x, active %d, vdevice %p.\n", caller,
"mpt_block_error_handler.%d: %s, failing recovery, "
"port state %x, active %d.\n",
ioc->name, ioc->sh->host_no,
SCpnt->device->id, SCpnt->device->lun, ready,
ioc->active, SCpnt->device->hostdata));
dev_name(&rport->dev), ready, ioc->active));
return FAILED;
}
dfcprintk (ioc, printk(MYIOC_s_DEBUG_FMT
"%s.%d: %d:%llu, executing recovery.\n", caller,
ioc->name, ioc->sh->host_no,
SCpnt->device->id, SCpnt->device->lun));
return (*func)(SCpnt);
return SUCCESS;
}
static int
mptfc_abort(struct scsi_cmnd *SCpnt)
{
return
mptfc_block_error_handler(SCpnt, mptscsih_abort, __func__);
struct Scsi_Host *shost = SCpnt->device->host;
struct fc_rport *rport = starget_to_rport(scsi_target(SCpnt->device));
MPT_SCSI_HOST __maybe_unused *hd = shost_priv(shost);
int rtn;
rtn = mptfc_block_error_handler(rport);
if (rtn == SUCCESS) {
dfcprintk (hd->ioc, printk(MYIOC_s_DEBUG_FMT
"%s.%d: %d:%llu, executing recovery.\n", __func__,
hd->ioc->name, shost->host_no,
SCpnt->device->id, SCpnt->device->lun));
rtn = mptscsih_abort(SCpnt);
}
return rtn;
}
static int
mptfc_dev_reset(struct scsi_cmnd *SCpnt)
{
return
mptfc_block_error_handler(SCpnt, mptscsih_dev_reset, __func__);
struct Scsi_Host *shost = SCpnt->device->host;
struct fc_rport *rport = starget_to_rport(scsi_target(SCpnt->device));
MPT_SCSI_HOST __maybe_unused *hd = shost_priv(shost);
int rtn;
rtn = mptfc_block_error_handler(rport);
if (rtn == SUCCESS) {
dfcprintk (hd->ioc, printk(MYIOC_s_DEBUG_FMT
"%s.%d: %d:%llu, executing recovery.\n", __func__,
hd->ioc->name, shost->host_no,
SCpnt->device->id, SCpnt->device->lun));
rtn = mptscsih_dev_reset(SCpnt);
}
return rtn;
}
static int
mptfc_bus_reset(struct scsi_cmnd *SCpnt)
{
return
mptfc_block_error_handler(SCpnt, mptscsih_bus_reset, __func__);
struct Scsi_Host *shost = SCpnt->device->host;
MPT_SCSI_HOST __maybe_unused *hd = shost_priv(shost);
int channel = SCpnt->device->channel;
struct mptfc_rport_info *ri;
int rtn = FAILED;
list_for_each_entry(ri, &hd->ioc->fc_rports, list) {
if (ri->flags & MPT_RPORT_INFO_FLAGS_REGISTERED) {
VirtTarget *vtarget = ri->starget->hostdata;
if (!vtarget || vtarget->channel != channel)
continue;
rtn = fc_block_rport(ri->rport);
if (rtn != 0)
break;
}
}
if (rtn == 0) {
dfcprintk (hd->ioc, printk(MYIOC_s_DEBUG_FMT
"%s.%d: %d:%llu, executing recovery.\n", __func__,
hd->ioc->name, shost->host_no,
SCpnt->device->id, SCpnt->device->lun));
rtn = mptscsih_bus_reset(SCpnt);
}
return rtn;
}
static void


@ -2964,17 +2964,17 @@ mptsas_exp_repmanufacture_info(MPT_ADAPTER *ioc,
goto out_free;
manufacture_reply = data_out + sizeof(struct rep_manu_request);
strncpy(edev->vendor_id, manufacture_reply->vendor_id,
SAS_EXPANDER_VENDOR_ID_LEN);
strncpy(edev->product_id, manufacture_reply->product_id,
SAS_EXPANDER_PRODUCT_ID_LEN);
strncpy(edev->product_rev, manufacture_reply->product_rev,
SAS_EXPANDER_PRODUCT_REV_LEN);
strscpy(edev->vendor_id, manufacture_reply->vendor_id,
sizeof(edev->vendor_id));
strscpy(edev->product_id, manufacture_reply->product_id,
sizeof(edev->product_id));
strscpy(edev->product_rev, manufacture_reply->product_rev,
sizeof(edev->product_rev));
edev->level = manufacture_reply->sas_format;
if (manufacture_reply->sas_format) {
strncpy(edev->component_vendor_id,
strscpy(edev->component_vendor_id,
manufacture_reply->component_vendor_id,
SAS_EXPANDER_COMPONENT_VENDOR_ID_LEN);
sizeof(edev->component_vendor_id));
tmp = (u8 *)&manufacture_reply->component_id;
edev->component_id = tmp[0] << 8 | tmp[1];
edev->component_revision_id =


@ -1793,7 +1793,7 @@ mptscsih_abort(struct scsi_cmnd * SCpnt)
/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
/**
* mptscsih_dev_reset - Perform a SCSI TARGET_RESET! new_eh variant
* mptscsih_dev_reset - Perform a SCSI LOGICAL_UNIT_RESET!
* @SCpnt: Pointer to scsi_cmnd structure, IO which reset is due to
*
* (linux scsi_host_template.eh_dev_reset_handler routine)
@ -1808,6 +1808,58 @@ mptscsih_dev_reset(struct scsi_cmnd * SCpnt)
VirtDevice *vdevice;
MPT_ADAPTER *ioc;
/* If we can't locate our host adapter structure, return FAILED status.
*/
if ((hd = shost_priv(SCpnt->device->host)) == NULL){
printk(KERN_ERR MYNAM ": lun reset: "
"Can't locate host! (sc=%p)\n", SCpnt);
return FAILED;
}
ioc = hd->ioc;
printk(MYIOC_s_INFO_FMT "attempting lun reset! (sc=%p)\n",
ioc->name, SCpnt);
scsi_print_command(SCpnt);
vdevice = SCpnt->device->hostdata;
if (!vdevice || !vdevice->vtarget) {
retval = 0;
goto out;
}
retval = mptscsih_IssueTaskMgmt(hd,
MPI_SCSITASKMGMT_TASKTYPE_LOGICAL_UNIT_RESET,
vdevice->vtarget->channel,
vdevice->vtarget->id, vdevice->lun, 0,
mptscsih_get_tm_timeout(ioc));
out:
printk (MYIOC_s_INFO_FMT "lun reset: %s (sc=%p)\n",
ioc->name, ((retval == 0) ? "SUCCESS" : "FAILED" ), SCpnt);
if (retval == 0)
return SUCCESS;
else
return FAILED;
}
/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
/**
* mptscsih_target_reset - Perform a SCSI TARGET_RESET!
* @SCpnt: Pointer to scsi_cmnd structure, IO which reset is due to
*
* (linux scsi_host_template.eh_target_reset_handler routine)
*
* Returns SUCCESS or FAILED.
**/
int
mptscsih_target_reset(struct scsi_cmnd * SCpnt)
{
MPT_SCSI_HOST *hd;
int retval;
VirtDevice *vdevice;
MPT_ADAPTER *ioc;
/* If we can't locate our host adapter structure, return FAILED status.
*/
if ((hd = shost_priv(SCpnt->device->host)) == NULL){
@ -3256,6 +3308,7 @@ EXPORT_SYMBOL(mptscsih_slave_destroy);
EXPORT_SYMBOL(mptscsih_slave_configure);
EXPORT_SYMBOL(mptscsih_abort);
EXPORT_SYMBOL(mptscsih_dev_reset);
EXPORT_SYMBOL(mptscsih_target_reset);
EXPORT_SYMBOL(mptscsih_bus_reset);
EXPORT_SYMBOL(mptscsih_host_reset);
EXPORT_SYMBOL(mptscsih_bios_param);


@ -120,6 +120,7 @@ extern void mptscsih_slave_destroy(struct scsi_device *device);
extern int mptscsih_slave_configure(struct scsi_device *device);
extern int mptscsih_abort(struct scsi_cmnd * SCpnt);
extern int mptscsih_dev_reset(struct scsi_cmnd * SCpnt);
extern int mptscsih_target_reset(struct scsi_cmnd * SCpnt);
extern int mptscsih_bus_reset(struct scsi_cmnd * SCpnt);
extern int mptscsih_host_reset(struct scsi_cmnd *SCpnt);
extern int mptscsih_bios_param(struct scsi_device * sdev, struct block_device *bdev, sector_t capacity, int geom[]);


@ -834,21 +834,6 @@ config SCSI_IMM
To compile this driver as a module, choose M here: the
module will be called imm.
config SCSI_IZIP_EPP16
bool "ppa/imm option - Use slow (but safe) EPP-16"
depends on SCSI_IMM
help
EPP (Enhanced Parallel Port) is a standard for parallel ports which
allows them to act as expansion buses that can handle up to 64
peripheral devices.
Some parallel port chipsets are slower than their motherboard, and
so we have to control the state of the chipset's FIFO queue every
now and then to avoid data loss. This will be done if you say Y
here.
Generally, saying Y is the safe option and slows things down a bit.
config SCSI_IZIP_SLOW_CTR
bool "ppa/imm option - Assume slow parport control register"
depends on SCSI_PPA || SCSI_IMM


@ -536,13 +536,18 @@ ahd_linux_unmap_scb(struct ahd_softc *ahd, struct scb *scb)
struct scsi_cmnd *cmd;
cmd = scb->io_ctx;
ahd_sync_sglist(ahd, scb, BUS_DMASYNC_POSTWRITE);
scsi_dma_unmap(cmd);
if (cmd) {
ahd_sync_sglist(ahd, scb, BUS_DMASYNC_POSTWRITE);
scsi_dma_unmap(cmd);
}
}
/******************************** Macros **************************************/
#define BUILD_SCSIID(ahd, cmd) \
(((scmd_id(cmd) << TID_SHIFT) & TID) | (ahd)->our_id)
static inline unsigned int ahd_build_scsiid(struct ahd_softc *ahd,
struct scsi_device *sdev)
{
return ((sdev_id(sdev) << TID_SHIFT) & TID) | (ahd)->our_id;
}
/*
* Return a string describing the driver.
@ -811,14 +816,14 @@ ahd_linux_dev_reset(struct scsi_cmnd *cmd)
tinfo = ahd_fetch_transinfo(ahd, 'A', ahd->our_id,
cmd->device->id, &tstate);
reset_scb->io_ctx = cmd;
reset_scb->io_ctx = NULL;
reset_scb->platform_data->dev = dev;
reset_scb->sg_count = 0;
ahd_set_residual(reset_scb, 0);
ahd_set_sense_residual(reset_scb, 0);
reset_scb->platform_data->xfer_len = 0;
reset_scb->hscb->control = 0;
reset_scb->hscb->scsiid = BUILD_SCSIID(ahd,cmd);
reset_scb->hscb->scsiid = ahd_build_scsiid(ahd, cmd->device);
reset_scb->hscb->lun = cmd->device->lun;
reset_scb->hscb->cdb_len = 0;
reset_scb->hscb->task_management = SIU_TASKMGMT_LUN_RESET;
@ -1577,7 +1582,7 @@ ahd_linux_run_command(struct ahd_softc *ahd, struct ahd_linux_device *dev,
* Fill out basics of the HSCB.
*/
hscb->control = 0;
hscb->scsiid = BUILD_SCSIID(ahd, cmd);
hscb->scsiid = ahd_build_scsiid(ahd, cmd->device);
hscb->lun = cmd->device->lun;
scb->hscb->task_management = 0;
mask = SCB_GET_TARGET_MASK(ahd, scb);
@ -1766,9 +1771,16 @@ ahd_done(struct ahd_softc *ahd, struct scb *scb)
dev = scb->platform_data->dev;
dev->active--;
dev->openings++;
if ((cmd->result & (CAM_DEV_QFRZN << 16)) != 0) {
cmd->result &= ~(CAM_DEV_QFRZN << 16);
dev->qfrozen--;
if (cmd) {
if ((cmd->result & (CAM_DEV_QFRZN << 16)) != 0) {
cmd->result &= ~(CAM_DEV_QFRZN << 16);
dev->qfrozen--;
}
} else if (scb->flags & SCB_DEVICE_RESET) {
if (ahd->platform_data->eh_done)
complete(ahd->platform_data->eh_done);
ahd_free_scb(ahd, scb);
return;
}
ahd_linux_unmap_scb(ahd, scb);
@ -1822,7 +1834,8 @@ ahd_done(struct ahd_softc *ahd, struct scb *scb)
} else {
ahd_set_transaction_status(scb, CAM_REQ_CMP);
}
} else if (ahd_get_transaction_status(scb) == CAM_SCSI_STATUS_ERROR) {
} else if (cmd &&
ahd_get_transaction_status(scb) == CAM_SCSI_STATUS_ERROR) {
ahd_linux_handle_scsi_status(ahd, cmd->device, scb);
}
@ -1856,7 +1869,8 @@ ahd_done(struct ahd_softc *ahd, struct scb *scb)
}
ahd_free_scb(ahd, scb);
ahd_linux_queue_cmd_complete(ahd, cmd);
if (cmd)
ahd_linux_queue_cmd_complete(ahd, cmd);
}
static void


@ -366,7 +366,8 @@ static void ahc_linux_queue_cmd_complete(struct ahc_softc *ahc,
struct scsi_cmnd *cmd);
static void ahc_linux_freeze_simq(struct ahc_softc *ahc);
static void ahc_linux_release_simq(struct ahc_softc *ahc);
static int ahc_linux_queue_recovery_cmd(struct scsi_cmnd *cmd, scb_flag flag);
static int ahc_linux_queue_recovery_cmd(struct scsi_device *sdev,
struct scsi_cmnd *cmd);
static void ahc_linux_initialize_scsi_bus(struct ahc_softc *ahc);
static u_int ahc_linux_user_tagdepth(struct ahc_softc *ahc,
struct ahc_devinfo *devinfo);
@ -728,7 +729,7 @@ ahc_linux_abort(struct scsi_cmnd *cmd)
{
int error;
error = ahc_linux_queue_recovery_cmd(cmd, SCB_ABORT);
error = ahc_linux_queue_recovery_cmd(cmd->device, cmd);
if (error != SUCCESS)
printk("aic7xxx_abort returns 0x%x\n", error);
return (error);
@ -742,7 +743,7 @@ ahc_linux_dev_reset(struct scsi_cmnd *cmd)
{
int error;
error = ahc_linux_queue_recovery_cmd(cmd, SCB_DEVICE_RESET);
error = ahc_linux_queue_recovery_cmd(cmd->device, NULL);
if (error != SUCCESS)
printk("aic7xxx_dev_reset returns 0x%x\n", error);
return (error);
@ -798,11 +799,18 @@ struct scsi_host_template aic7xxx_driver_template = {
/**************************** Tasklet Handler *********************************/
/******************************** Macros **************************************/
#define BUILD_SCSIID(ahc, cmd) \
((((cmd)->device->id << TID_SHIFT) & TID) \
| (((cmd)->device->channel == 0) ? (ahc)->our_id : (ahc)->our_id_b) \
| (((cmd)->device->channel == 0) ? 0 : TWIN_CHNLB))
static inline unsigned int ahc_build_scsiid(struct ahc_softc *ahc,
struct scsi_device *sdev)
{
unsigned int scsiid = (sdev->id << TID_SHIFT) & TID;
if (sdev->channel == 0)
scsiid |= ahc->our_id;
else
scsiid |= ahc->our_id_b | TWIN_CHNLB;
return scsiid;
}
/******************************** Bus DMA *************************************/
int
@ -1457,7 +1465,7 @@ ahc_linux_run_command(struct ahc_softc *ahc, struct ahc_linux_device *dev,
* Fill out basics of the HSCB.
*/
hscb->control = 0;
hscb->scsiid = BUILD_SCSIID(ahc, cmd);
hscb->scsiid = ahc_build_scsiid(ahc, cmd->device);
hscb->lun = cmd->device->lun;
mask = SCB_GET_TARGET_MASK(ahc, scb);
tinfo = ahc_fetch_transinfo(ahc, SCB_GET_CHANNEL(ahc, scb),
@ -2029,11 +2037,12 @@ ahc_linux_release_simq(struct ahc_softc *ahc)
}
static int
ahc_linux_queue_recovery_cmd(struct scsi_cmnd *cmd, scb_flag flag)
ahc_linux_queue_recovery_cmd(struct scsi_device *sdev,
struct scsi_cmnd *cmd)
{
struct ahc_softc *ahc;
struct ahc_linux_device *dev;
struct scb *pending_scb;
struct scb *pending_scb = NULL, *scb;
u_int saved_scbptr;
u_int active_scb_index;
u_int last_phase;
@ -2046,18 +2055,19 @@ ahc_linux_queue_recovery_cmd(struct scsi_cmnd *cmd, scb_flag flag)
int disconnected;
unsigned long flags;
pending_scb = NULL;
paused = FALSE;
wait = FALSE;
ahc = *(struct ahc_softc **)cmd->device->host->hostdata;
ahc = *(struct ahc_softc **)sdev->host->hostdata;
scmd_printk(KERN_INFO, cmd, "Attempting to queue a%s message\n",
flag == SCB_ABORT ? "n ABORT" : " TARGET RESET");
sdev_printk(KERN_INFO, sdev, "Attempting to queue a%s message\n",
cmd ? "n ABORT" : " TARGET RESET");
printk("CDB:");
for (cdb_byte = 0; cdb_byte < cmd->cmd_len; cdb_byte++)
printk(" 0x%x", cmd->cmnd[cdb_byte]);
printk("\n");
if (cmd) {
printk("CDB:");
for (cdb_byte = 0; cdb_byte < cmd->cmd_len; cdb_byte++)
printk(" 0x%x", cmd->cmnd[cdb_byte]);
printk("\n");
}
ahc_lock(ahc, &flags);
@ -2068,7 +2078,7 @@ ahc_linux_queue_recovery_cmd(struct scsi_cmnd *cmd, scb_flag flag)
* at all, and the system wanted us to just abort the
* command, return success.
*/
dev = scsi_transport_device_data(cmd->device);
dev = scsi_transport_device_data(sdev);
if (dev == NULL) {
/*
@ -2076,13 +2086,12 @@ ahc_linux_queue_recovery_cmd(struct scsi_cmnd *cmd, scb_flag flag)
* so we must not still own the command.
*/
printk("%s:%d:%d:%d: Is not an active device\n",
ahc_name(ahc), cmd->device->channel, cmd->device->id,
(u8)cmd->device->lun);
ahc_name(ahc), sdev->channel, sdev->id, (u8)sdev->lun);
retval = SUCCESS;
goto no_cmd;
}
if ((dev->flags & (AHC_DEV_Q_BASIC|AHC_DEV_Q_TAGGED)) == 0
if (cmd && (dev->flags & (AHC_DEV_Q_BASIC|AHC_DEV_Q_TAGGED)) == 0
&& ahc_search_untagged_queues(ahc, cmd, cmd->device->id,
cmd->device->channel + 'A',
(u8)cmd->device->lun,
@ -2097,25 +2106,28 @@ ahc_linux_queue_recovery_cmd(struct scsi_cmnd *cmd, scb_flag flag)
/*
* See if we can find a matching cmd in the pending list.
*/
LIST_FOREACH(pending_scb, &ahc->pending_scbs, pending_links) {
if (pending_scb->io_ctx == cmd)
break;
}
if (pending_scb == NULL && flag == SCB_DEVICE_RESET) {
/* Any SCB for this device will do for a target reset */
LIST_FOREACH(pending_scb, &ahc->pending_scbs, pending_links) {
if (ahc_match_scb(ahc, pending_scb, scmd_id(cmd),
scmd_channel(cmd) + 'A',
CAM_LUN_WILDCARD,
SCB_LIST_NULL, ROLE_INITIATOR))
if (cmd) {
LIST_FOREACH(scb, &ahc->pending_scbs, pending_links) {
if (scb->io_ctx == cmd) {
pending_scb = scb;
break;
}
}
} else {
/* Any SCB for this device will do for a target reset */
LIST_FOREACH(scb, &ahc->pending_scbs, pending_links) {
if (ahc_match_scb(ahc, scb, sdev->id,
sdev->channel + 'A',
CAM_LUN_WILDCARD,
SCB_LIST_NULL, ROLE_INITIATOR)) {
pending_scb = scb;
break;
}
}
}
if (pending_scb == NULL) {
scmd_printk(KERN_INFO, cmd, "Command not found\n");
sdev_printk(KERN_INFO, sdev, "Command not found\n");
goto no_cmd;
}
@ -2146,22 +2158,22 @@ ahc_linux_queue_recovery_cmd(struct scsi_cmnd *cmd, scb_flag flag)
ahc_dump_card_state(ahc);
disconnected = TRUE;
if (flag == SCB_ABORT) {
if (ahc_search_qinfifo(ahc, cmd->device->id,
cmd->device->channel + 'A',
cmd->device->lun,
if (cmd) {
if (ahc_search_qinfifo(ahc, sdev->id,
sdev->channel + 'A',
sdev->lun,
pending_scb->hscb->tag,
ROLE_INITIATOR, CAM_REQ_ABORTED,
SEARCH_COMPLETE) > 0) {
printk("%s:%d:%d:%d: Cmd aborted from QINFIFO\n",
ahc_name(ahc), cmd->device->channel,
cmd->device->id, (u8)cmd->device->lun);
ahc_name(ahc), sdev->channel,
sdev->id, (u8)sdev->lun);
retval = SUCCESS;
goto done;
}
} else if (ahc_search_qinfifo(ahc, cmd->device->id,
cmd->device->channel + 'A',
cmd->device->lun,
} else if (ahc_search_qinfifo(ahc, sdev->id,
sdev->channel + 'A',
sdev->lun,
pending_scb->hscb->tag,
ROLE_INITIATOR, /*status*/0,
SEARCH_COUNT) > 0) {
@ -2174,7 +2186,7 @@ ahc_linux_queue_recovery_cmd(struct scsi_cmnd *cmd, scb_flag flag)
bus_scb = ahc_lookup_scb(ahc, ahc_inb(ahc, SCB_TAG));
if (bus_scb == pending_scb)
disconnected = FALSE;
else if (flag != SCB_ABORT
else if (!cmd
&& ahc_inb(ahc, SAVED_SCSIID) == pending_scb->hscb->scsiid
&& ahc_inb(ahc, SAVED_LUN) == SCB_GET_LUN(pending_scb))
disconnected = FALSE;
@ -2194,18 +2206,18 @@ ahc_linux_queue_recovery_cmd(struct scsi_cmnd *cmd, scb_flag flag)
saved_scsiid = ahc_inb(ahc, SAVED_SCSIID);
if (last_phase != P_BUSFREE
&& (pending_scb->hscb->tag == active_scb_index
|| (flag == SCB_DEVICE_RESET
&& SCSIID_TARGET(ahc, saved_scsiid) == scmd_id(cmd)))) {
|| (!cmd && SCSIID_TARGET(ahc, saved_scsiid) == sdev->id))) {
/*
* We're active on the bus, so assert ATN
* and hope that the target responds.
*/
pending_scb = ahc_lookup_scb(ahc, active_scb_index);
pending_scb->flags |= SCB_RECOVERY_SCB|flag;
pending_scb->flags |= SCB_RECOVERY_SCB;
pending_scb->flags |= cmd ? SCB_ABORT : SCB_DEVICE_RESET;
ahc_outb(ahc, MSG_OUT, HOST_MSG);
ahc_outb(ahc, SCSISIGO, last_phase|ATNO);
scmd_printk(KERN_INFO, cmd, "Device is active, asserting ATN\n");
sdev_printk(KERN_INFO, sdev, "Device is active, asserting ATN\n");
wait = TRUE;
} else if (disconnected) {
@ -2226,7 +2238,8 @@ ahc_linux_queue_recovery_cmd(struct scsi_cmnd *cmd, scb_flag flag)
* an unsolicited reselection occurred.
*/
pending_scb->hscb->control |= MK_MESSAGE|DISCONNECTED;
pending_scb->flags |= SCB_RECOVERY_SCB|flag;
pending_scb->flags |= SCB_RECOVERY_SCB;
pending_scb->flags |= cmd ? SCB_ABORT : SCB_DEVICE_RESET;
/*
* Remove any cached copy of this SCB in the
@ -2235,9 +2248,9 @@ ahc_linux_queue_recovery_cmd(struct scsi_cmnd *cmd, scb_flag flag)
* same element in the SCB, SCB_NEXT, for
* both the qinfifo and the disconnected list.
*/
ahc_search_disc_list(ahc, cmd->device->id,
cmd->device->channel + 'A',
cmd->device->lun, pending_scb->hscb->tag,
ahc_search_disc_list(ahc, sdev->id,
sdev->channel + 'A',
sdev->lun, pending_scb->hscb->tag,
/*stop_on_first*/TRUE,
/*remove*/TRUE,
/*save_state*/FALSE);
@ -2260,9 +2273,9 @@ ahc_linux_queue_recovery_cmd(struct scsi_cmnd *cmd, scb_flag flag)
* so we are the next SCB for this target
* to run.
*/
ahc_search_qinfifo(ahc, cmd->device->id,
cmd->device->channel + 'A',
cmd->device->lun, SCB_LIST_NULL,
ahc_search_qinfifo(ahc, sdev->id,
sdev->channel + 'A',
(u8)sdev->lun, SCB_LIST_NULL,
ROLE_INITIATOR, CAM_REQUEUE_REQ,
SEARCH_COMPLETE);
ahc_qinfifo_requeue_tail(ahc, pending_scb);
@ -2271,7 +2284,7 @@ ahc_linux_queue_recovery_cmd(struct scsi_cmnd *cmd, scb_flag flag)
printk("Device is disconnected, re-queuing SCB\n");
wait = TRUE;
} else {
scmd_printk(KERN_INFO, cmd, "Unable to deliver message\n");
sdev_printk(KERN_INFO, sdev, "Unable to deliver message\n");
retval = FAILED;
goto done;
}


@ -384,6 +384,7 @@ struct bnx2fc_rport {
};
struct bnx2fc_mp_req {
u64 tm_lun;
u8 tm_flags;
u32 req_len;


@ -1709,7 +1709,8 @@ void bnx2fc_init_task(struct bnx2fc_cmd *io_req,
struct fcoe_cached_sge_ctx *cached_sge;
struct fcoe_ext_mul_sges_ctx *sgl;
int dev_type = tgt->dev_type;
u64 *fcp_cmnd;
struct fcp_cmnd *fcp_cmnd;
u64 *raw_fcp_cmnd;
u64 tmp_fcp_cmnd[4];
u32 context_id;
int cnt, i;
@ -1778,16 +1779,19 @@ void bnx2fc_init_task(struct bnx2fc_cmd *io_req,
task->txwr_rxrd.union_ctx.tx_seq.ctx.seq_cnt = 1;
/* Fill FCP_CMND IU */
fcp_cmnd = (u64 *)
fcp_cmnd = (struct fcp_cmnd *)&tmp_fcp_cmnd;
bnx2fc_build_fcp_cmnd(io_req, fcp_cmnd);
int_to_scsilun(sc_cmd->device->lun, &fcp_cmnd->fc_lun);
memcpy(fcp_cmnd->fc_cdb, sc_cmd->cmnd, sc_cmd->cmd_len);
raw_fcp_cmnd = (u64 *)
task->txwr_rxrd.union_ctx.fcp_cmd.opaque;
bnx2fc_build_fcp_cmnd(io_req, (struct fcp_cmnd *)&tmp_fcp_cmnd);
/* swap fcp_cmnd */
cnt = sizeof(struct fcp_cmnd) / sizeof(u64);
for (i = 0; i < cnt; i++) {
*fcp_cmnd = cpu_to_be64(tmp_fcp_cmnd[i]);
fcp_cmnd++;
*raw_fcp_cmnd = cpu_to_be64(tmp_fcp_cmnd[i]);
raw_fcp_cmnd++;
}
/* Rx Write Tx Read */


@ -656,10 +656,9 @@ int bnx2fc_init_mp_req(struct bnx2fc_cmd *io_req)
return SUCCESS;
}
static int bnx2fc_initiate_tmf(struct scsi_cmnd *sc_cmd, u8 tm_flags)
static int bnx2fc_initiate_tmf(struct fc_lport *lport, struct fc_rport *rport,
u64 tm_lun, u8 tm_flags)
{
struct fc_lport *lport;
struct fc_rport *rport;
struct fc_rport_libfc_priv *rp;
struct fcoe_port *port;
struct bnx2fc_interface *interface;
@ -668,7 +667,6 @@ static int bnx2fc_initiate_tmf(struct scsi_cmnd *sc_cmd, u8 tm_flags)
struct bnx2fc_mp_req *tm_req;
struct fcoe_task_ctx_entry *task;
struct fcoe_task_ctx_entry *task_page;
struct Scsi_Host *host = sc_cmd->device->host;
struct fc_frame_header *fc_hdr;
struct fcp_cmnd *fcp_cmnd;
int task_idx, index;
@ -677,8 +675,6 @@ static int bnx2fc_initiate_tmf(struct scsi_cmnd *sc_cmd, u8 tm_flags)
u32 sid, did;
unsigned long start = jiffies;
lport = shost_priv(host);
rport = starget_to_rport(scsi_target(sc_cmd->device));
port = lport_priv(lport);
interface = port->priv;
@ -689,7 +685,7 @@ static int bnx2fc_initiate_tmf(struct scsi_cmnd *sc_cmd, u8 tm_flags)
}
rp = rport->dd_data;
rc = fc_block_scsi_eh(sc_cmd);
rc = fc_block_rport(rport);
if (rc)
return rc;
@ -718,7 +714,7 @@ static int bnx2fc_initiate_tmf(struct scsi_cmnd *sc_cmd, u8 tm_flags)
goto retry_tmf;
}
/* Initialize rest of io_req fields */
io_req->sc_cmd = sc_cmd;
io_req->sc_cmd = NULL;
io_req->port = port;
io_req->tgt = tgt;
@ -736,11 +732,13 @@ static int bnx2fc_initiate_tmf(struct scsi_cmnd *sc_cmd, u8 tm_flags)
/* Set TM flags */
io_req->io_req_flags = 0;
tm_req->tm_flags = tm_flags;
tm_req->tm_lun = tm_lun;
/* Fill FCP_CMND */
bnx2fc_build_fcp_cmnd(io_req, (struct fcp_cmnd *)tm_req->req_buf);
fcp_cmnd = (struct fcp_cmnd *)tm_req->req_buf;
memset(fcp_cmnd->fc_cdb, 0, sc_cmd->cmd_len);
int_to_scsilun(tm_lun, &fcp_cmnd->fc_lun);
memset(fcp_cmnd->fc_cdb, 0, BNX2FC_MAX_CMD_LEN);
fcp_cmnd->fc_dl = 0;
/* Fill FC header */
@ -763,8 +761,6 @@ static int bnx2fc_initiate_tmf(struct scsi_cmnd *sc_cmd, u8 tm_flags)
task = &(task_page[index]);
bnx2fc_init_mp_task(io_req, task);
bnx2fc_priv(sc_cmd)->io_req = io_req;
/* Obtain free SQ entry */
spin_lock_bh(&tgt->tgt_lock);
bnx2fc_add_2_sq(tgt, xid);
@ -1062,7 +1058,10 @@ int bnx2fc_initiate_cleanup(struct bnx2fc_cmd *io_req)
*/
int bnx2fc_eh_target_reset(struct scsi_cmnd *sc_cmd)
{
return bnx2fc_initiate_tmf(sc_cmd, FCP_TMF_TGT_RESET);
struct fc_rport *rport = starget_to_rport(scsi_target(sc_cmd->device));
struct fc_lport *lport = shost_priv(rport_to_shost(rport));
return bnx2fc_initiate_tmf(lport, rport, 0, FCP_TMF_TGT_RESET);
}
/**
@ -1075,7 +1074,11 @@ int bnx2fc_eh_target_reset(struct scsi_cmnd *sc_cmd)
*/
int bnx2fc_eh_device_reset(struct scsi_cmnd *sc_cmd)
{
return bnx2fc_initiate_tmf(sc_cmd, FCP_TMF_LUN_RESET);
struct fc_rport *rport = starget_to_rport(scsi_target(sc_cmd->device));
struct fc_lport *lport = shost_priv(rport_to_shost(rport));
return bnx2fc_initiate_tmf(lport, rport, sc_cmd->device->lun,
FCP_TMF_LUN_RESET);
}
static int bnx2fc_abts_cleanup(struct bnx2fc_cmd *io_req)
@ -1450,10 +1453,9 @@ void bnx2fc_process_abts_compl(struct bnx2fc_cmd *io_req,
static void bnx2fc_lun_reset_cmpl(struct bnx2fc_cmd *io_req)
{
struct scsi_cmnd *sc_cmd = io_req->sc_cmd;
struct bnx2fc_rport *tgt = io_req->tgt;
struct bnx2fc_cmd *cmd, *tmp;
u64 tm_lun = sc_cmd->device->lun;
struct bnx2fc_mp_req *tm_req = &io_req->mp_req;
u64 lun;
int rc = 0;
@ -1465,8 +1467,10 @@ static void bnx2fc_lun_reset_cmpl(struct bnx2fc_cmd *io_req)
*/
list_for_each_entry_safe(cmd, tmp, &tgt->active_cmd_queue, link) {
BNX2FC_TGT_DBG(tgt, "LUN RST cmpl: scan for pending IOs\n");
if (!cmd->sc_cmd)
continue;
lun = cmd->sc_cmd->device->lun;
if (lun == tm_lun) {
if (lun == tm_req->tm_lun) {
/* Initiate ABTS on this cmd */
if (!test_and_set_bit(BNX2FC_FLAG_ISSUE_ABTS,
&cmd->req_flags)) {
@ -1570,32 +1574,37 @@ void bnx2fc_process_tm_compl(struct bnx2fc_cmd *io_req,
printk(KERN_ERR PFX "tmf's fc_hdr r_ctl = 0x%x\n",
fc_hdr->fh_r_ctl);
}
if (!bnx2fc_priv(sc_cmd)->io_req) {
printk(KERN_ERR PFX "tm_compl: io_req is NULL\n");
return;
}
switch (io_req->fcp_status) {
case FC_GOOD:
if (io_req->cdb_status == 0) {
/* Good IO completion */
sc_cmd->result = DID_OK << 16;
} else {
/* Transport status is good, SCSI status not good */
sc_cmd->result = (DID_OK << 16) | io_req->cdb_status;
if (sc_cmd) {
if (!bnx2fc_priv(sc_cmd)->io_req) {
printk(KERN_ERR PFX "tm_compl: io_req is NULL\n");
return;
}
if (io_req->fcp_resid)
scsi_set_resid(sc_cmd, io_req->fcp_resid);
break;
switch (io_req->fcp_status) {
case FC_GOOD:
if (io_req->cdb_status == 0) {
/* Good IO completion */
sc_cmd->result = DID_OK << 16;
} else {
/* Transport status is good, SCSI status not good */
sc_cmd->result = (DID_OK << 16) | io_req->cdb_status;
}
if (io_req->fcp_resid)
scsi_set_resid(sc_cmd, io_req->fcp_resid);
break;
default:
BNX2FC_IO_DBG(io_req, "process_tm_compl: fcp_status = %d\n",
io_req->fcp_status);
break;
default:
BNX2FC_IO_DBG(io_req, "process_tm_compl: fcp_status = %d\n",
io_req->fcp_status);
break;
}
sc_cmd = io_req->sc_cmd;
io_req->sc_cmd = NULL;
bnx2fc_priv(sc_cmd)->io_req = NULL;
scsi_done(sc_cmd);
}
sc_cmd = io_req->sc_cmd;
io_req->sc_cmd = NULL;
/* check if the io_req exists in tgt's tmf_q */
if (io_req->on_tmf_queue) {
@ -1607,9 +1616,6 @@ void bnx2fc_process_tm_compl(struct bnx2fc_cmd *io_req,
return;
}
bnx2fc_priv(sc_cmd)->io_req = NULL;
scsi_done(sc_cmd);
kref_put(&io_req->refcount, bnx2fc_cmd_release);
if (io_req->wait_for_abts_comp) {
BNX2FC_IO_DBG(io_req, "tm_compl - wake up the waiter\n");
@ -1738,15 +1744,9 @@ static void bnx2fc_unmap_sg_list(struct bnx2fc_cmd *io_req)
void bnx2fc_build_fcp_cmnd(struct bnx2fc_cmd *io_req,
struct fcp_cmnd *fcp_cmnd)
{
struct scsi_cmnd *sc_cmd = io_req->sc_cmd;
memset(fcp_cmnd, 0, sizeof(struct fcp_cmnd));
int_to_scsilun(sc_cmd->device->lun, &fcp_cmnd->fc_lun);
fcp_cmnd->fc_dl = htonl(io_req->data_xfer_len);
memcpy(fcp_cmnd->fc_cdb, sc_cmd->cmnd, sc_cmd->cmd_len);
fcp_cmnd->fc_cmdref = 0;
fcp_cmnd->fc_pri_ta = 0;
fcp_cmnd->fc_tm_flags = io_req->mp_req.tm_flags;


@ -1294,7 +1294,7 @@ static int cxgbi_ddp_reserve(struct cxgbi_conn *cconn,
/*
* the ddp tag will be used for the itt in the outgoing pdu,
* the itt genrated by libiscsi is saved in the ppm and can be
* the itt generated by libiscsi is saved in the ppm and can be
* retrieved via the ddp tag
*/
err = cxgbi_ppm_ppods_reserve(ppm, ttinfo->nr_pages, 0, &ttinfo->idx,


@ -82,7 +82,7 @@ static int hp_sw_tur(struct scsi_device *sdev, struct hp_sw_dh_data *h)
{
unsigned char cmd[6] = { TEST_UNIT_READY };
struct scsi_sense_hdr sshdr;
int ret = SCSI_DH_OK, res;
int ret, res;
blk_opf_t opf = REQ_OP_DRV_IN | REQ_FAILFAST_DEV |
REQ_FAILFAST_TRANSPORT | REQ_FAILFAST_DRIVER;
const struct scsi_exec_args exec_args = {
@ -92,19 +92,18 @@ static int hp_sw_tur(struct scsi_device *sdev, struct hp_sw_dh_data *h)
retry:
res = scsi_execute_cmd(sdev, cmd, opf, NULL, 0, HP_SW_TIMEOUT,
HP_SW_RETRIES, &exec_args);
if (res) {
if (scsi_sense_valid(&sshdr))
ret = tur_done(sdev, h, &sshdr);
else {
sdev_printk(KERN_WARNING, sdev,
"%s: sending tur failed with %x\n",
HP_SW_NAME, res);
ret = SCSI_DH_IO;
}
} else {
if (res > 0 && scsi_sense_valid(&sshdr)) {
ret = tur_done(sdev, h, &sshdr);
} else if (res == 0) {
h->path_state = HP_SW_PATH_ACTIVE;
ret = SCSI_DH_OK;
} else {
sdev_printk(KERN_WARNING, sdev,
"%s: sending tur failed with %x\n",
HP_SW_NAME, res);
ret = SCSI_DH_IO;
}
if (ret == SCSI_DH_IMM_RETRY)
goto retry;
@ -122,7 +121,7 @@ static int hp_sw_start_stop(struct hp_sw_dh_data *h)
unsigned char cmd[6] = { START_STOP, 0, 0, 0, 1, 0 };
struct scsi_sense_hdr sshdr;
struct scsi_device *sdev = h->sdev;
int res, rc = SCSI_DH_OK;
int res, rc;
int retry_cnt = HP_SW_RETRIES;
blk_opf_t opf = REQ_OP_DRV_IN | REQ_FAILFAST_DEV |
REQ_FAILFAST_TRANSPORT | REQ_FAILFAST_DRIVER;
@ -133,35 +132,37 @@ static int hp_sw_start_stop(struct hp_sw_dh_data *h)
retry:
res = scsi_execute_cmd(sdev, cmd, opf, NULL, 0, HP_SW_TIMEOUT,
HP_SW_RETRIES, &exec_args);
if (res) {
if (!scsi_sense_valid(&sshdr)) {
sdev_printk(KERN_WARNING, sdev,
"%s: sending start_stop_unit failed, "
"no sense available\n", HP_SW_NAME);
return SCSI_DH_IO;
}
switch (sshdr.sense_key) {
case NOT_READY:
if (sshdr.asc == 0x04 && sshdr.ascq == 3) {
/*
* LUN not ready - manual intervention required
*
* Switch-over in progress, retry.
*/
if (--retry_cnt)
goto retry;
rc = SCSI_DH_RETRY;
break;
}
fallthrough;
default:
sdev_printk(KERN_WARNING, sdev,
"%s: sending start_stop_unit failed, "
"sense %x/%x/%x\n", HP_SW_NAME,
sshdr.sense_key, sshdr.asc, sshdr.ascq);
rc = SCSI_DH_IO;
}
if (!res) {
return SCSI_DH_OK;
} else if (res < 0 || !scsi_sense_valid(&sshdr)) {
sdev_printk(KERN_WARNING, sdev,
"%s: sending start_stop_unit failed, "
"no sense available\n", HP_SW_NAME);
return SCSI_DH_IO;
}
switch (sshdr.sense_key) {
case NOT_READY:
if (sshdr.asc == 0x04 && sshdr.ascq == 3) {
/*
* LUN not ready - manual intervention required
*
* Switch-over in progress, retry.
*/
if (--retry_cnt)
goto retry;
rc = SCSI_DH_RETRY;
break;
}
fallthrough;
default:
sdev_printk(KERN_WARNING, sdev,
"%s: sending start_stop_unit failed, "
"sense %x/%x/%x\n", HP_SW_NAME,
sshdr.sense_key, sshdr.asc, sshdr.ascq);
rc = SCSI_DH_IO;
}
return rc;
}
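
Both device-handler rewrites (hp_sw above, rdac below) reshape their
branches around scsi_execute_cmd()'s return convention: negative means
the command was never issued (no status, no sense), zero means it
completed with GOOD status, and a positive value carries the SCSI result,
usually with sense data worth decoding. A compressed sketch of the
resulting decision tree, not the exact driver code:

	res = scsi_execute_cmd(sdev, cmd, opf, NULL, 0, HP_SW_TIMEOUT,
			       HP_SW_RETRIES, &exec_args);
	if (res < 0)
		ret = SCSI_DH_IO;		/* submission error */
	else if (res == 0)
		ret = SCSI_DH_OK;		/* GOOD status */
	else if (scsi_sense_valid(&sshdr))
		ret = tur_done(sdev, h, &sshdr); /* decode the sense */
	else
		ret = SCSI_DH_IO;		/* bad status, no sense */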


@ -530,7 +530,7 @@ static void send_mode_select(struct work_struct *work)
container_of(work, struct rdac_controller, ms_work);
struct scsi_device *sdev = ctlr->ms_sdev;
struct rdac_dh_data *h = sdev->handler_data;
int err = SCSI_DH_OK, retry_cnt = RDAC_RETRY_COUNT;
int rc, err, retry_cnt = RDAC_RETRY_COUNT;
struct rdac_queue_data *tmp, *qdata;
LIST_HEAD(list);
unsigned char cdb[MAX_COMMAND_SIZE];
@ -558,20 +558,23 @@ static void send_mode_select(struct work_struct *work)
(char *) h->ctlr->array_name, h->ctlr->index,
(retry_cnt == RDAC_RETRY_COUNT) ? "queueing" : "retrying");
if (scsi_execute_cmd(sdev, cdb, opf, &h->ctlr->mode_select, data_size,
RDAC_TIMEOUT * HZ, RDAC_RETRIES, &exec_args)) {
rc = scsi_execute_cmd(sdev, cdb, opf, &h->ctlr->mode_select, data_size,
RDAC_TIMEOUT * HZ, RDAC_RETRIES, &exec_args);
if (!rc) {
h->state = RDAC_STATE_ACTIVE;
RDAC_LOG(RDAC_LOG_FAILOVER, sdev, "array %s, ctlr %d, "
"MODE_SELECT completed",
(char *) h->ctlr->array_name, h->ctlr->index);
err = SCSI_DH_OK;
} else if (rc < 0) {
err = SCSI_DH_IO;
} else {
err = mode_select_handle_sense(sdev, &sshdr);
if (err == SCSI_DH_RETRY && retry_cnt--)
goto retry;
if (err == SCSI_DH_IMM_RETRY)
goto retry;
}
if (err == SCSI_DH_OK) {
h->state = RDAC_STATE_ACTIVE;
RDAC_LOG(RDAC_LOG_FAILOVER, sdev, "array %s, ctlr %d, "
"MODE_SELECT completed",
(char *) h->ctlr->array_name, h->ctlr->index);
}
list_for_each_entry_safe(qdata, tmp, &list, entry) {
list_del(&qdata->entry);


@ -1611,6 +1611,8 @@ static const struct target_core_fabric_ops efct_lio_ops = {
.sess_get_initiator_sid = NULL,
.tfc_tpg_base_attrs = efct_lio_tpg_attrs,
.tfc_tpg_attrib_attrs = efct_lio_tpg_attrib_attrs,
.default_submit_type = TARGET_DIRECT_SUBMIT,
.direct_submit_supp = 1,
};
static const struct target_core_fabric_ops efct_lio_npiv_ops = {
@ -1646,6 +1648,9 @@ static const struct target_core_fabric_ops efct_lio_npiv_ops = {
.sess_get_initiator_sid = NULL,
.tfc_tpg_base_attrs = efct_lio_npiv_tpg_attrs,
.tfc_tpg_attrib_attrs = efct_lio_npiv_tpg_attrib_attrs,
.default_submit_type = TARGET_DIRECT_SUBMIT,
.direct_submit_supp = 1,
};
int efct_scsi_tgt_driver_init(void)


@ -41,6 +41,8 @@
* USA.
*/
#include <linux/bitfield.h>
#include "esas2r.h"
/*
@ -792,16 +794,10 @@ static int hba_ioctl_callback(struct esas2r_adapter *a,
pcie_capability_read_dword(a->pcid, PCI_EXP_LNKCAP,
&caps);
gai->pci.link_speed_curr =
(u8)(stat & PCI_EXP_LNKSTA_CLS);
gai->pci.link_speed_max =
(u8)(caps & PCI_EXP_LNKCAP_SLS);
gai->pci.link_width_curr =
(u8)((stat & PCI_EXP_LNKSTA_NLW)
>> PCI_EXP_LNKSTA_NLW_SHIFT);
gai->pci.link_width_max =
(u8)((caps & PCI_EXP_LNKCAP_MLW)
>> 4);
gai->pci.link_speed_curr = FIELD_GET(PCI_EXP_LNKSTA_CLS, stat);
gai->pci.link_speed_max = FIELD_GET(PCI_EXP_LNKCAP_SLS, caps);
gai->pci.link_width_curr = FIELD_GET(PCI_EXP_LNKSTA_NLW, stat);
gai->pci.link_width_max = FIELD_GET(PCI_EXP_LNKCAP_MLW, caps);
}
gai->pci.msi_vector_cnt = 1;
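
The esas2r hunk swaps open-coded mask-and-shift extraction for
FIELD_GET() from <linux/bitfield.h> (hence the new include above), which
derives the shift from the mask at compile time so the two can never
drift apart. A small sketch using one of the fields involved
(illustrative helper, not the driver's code):

	#include <linux/bitfield.h>
	#include <linux/pci.h>	/* PCI_EXP_LNKSTA_NLW: bits 9:4, 0x03f0 */

	static u8 link_width(u16 stat)
	{
		/* Before: mask, then a shift kept in sync by hand. */
		u8 w = (stat & PCI_EXP_LNKSTA_NLW) >>
		       PCI_EXP_LNKSTA_NLW_SHIFT;

		/* After: equivalent, shift derived from the mask. */
		w = FIELD_GET(PCI_EXP_LNKSTA_NLW, stat);
		return w;
	}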


@ -145,16 +145,17 @@ void fnic_handle_link(struct work_struct *work)
spin_unlock_irqrestore(&fnic->fnic_lock, flags);
if (fnic->config.flags & VFCF_FIP_CAPABLE) {
/* start FCoE VLAN discovery */
fnic_fc_trace_set_data(
fnic->lport->host->host_no,
FNIC_FC_LE, "Link Status: DOWN_UP_VLAN",
strlen("Link Status: DOWN_UP_VLAN"));
fnic_fc_trace_set_data(fnic->lport->host->host_no,
FNIC_FC_LE, "Link Status: DOWN_UP_VLAN",
strlen("Link Status: DOWN_UP_VLAN"));
fnic_fcoe_send_vlan_req(fnic);
return;
}
FNIC_FCS_DBG(KERN_DEBUG, fnic->lport->host, "link up\n");
fnic_fc_trace_set_data(fnic->lport->host->host_no, FNIC_FC_LE,
"Link Status: DOWN_UP", strlen("Link Status: DOWN_UP"));
"Link Status: DOWN_UP", strlen("Link Status: DOWN_UP"));
fcoe_ctlr_link_up(&fnic->ctlr);
} else {
/* UP -> DOWN */


@ -343,7 +343,7 @@ struct hisi_sas_hw {
u8 reg_index, u8 reg_count, u8 *write_data);
void (*wait_cmds_complete_timeout)(struct hisi_hba *hisi_hba,
int delay_ms, int timeout_ms);
void (*debugfs_snapshot_regs)(struct hisi_hba *hisi_hba);
int (*debugfs_snapshot_regs)(struct hisi_hba *hisi_hba);
int complete_hdr_size;
const struct scsi_host_template *sht;
};
@ -451,7 +451,6 @@ struct hisi_hba {
const struct hisi_sas_hw *hw; /* Low level hw interface */
unsigned long sata_dev_bitmap[BITS_TO_LONGS(HISI_SAS_MAX_DEVICES)];
struct work_struct rst_work;
struct work_struct debugfs_work;
u32 phy_state;
u32 intr_coal_ticks; /* Time of interrupt coalesce in us */
u32 intr_coal_count; /* Interrupt count to coalesce */


@ -1958,8 +1958,11 @@ static bool hisi_sas_internal_abort_timeout(struct sas_task *task,
struct hisi_hba *hisi_hba = dev_to_hisi_hba(device);
struct hisi_sas_internal_abort_data *timeout = data;
if (hisi_sas_debugfs_enable && hisi_hba->debugfs_itct[0].itct)
queue_work(hisi_hba->wq, &hisi_hba->debugfs_work);
if (hisi_sas_debugfs_enable && hisi_hba->debugfs_itct[0].itct) {
down(&hisi_hba->sem);
hisi_hba->hw->debugfs_snapshot_regs(hisi_hba);
up(&hisi_hba->sem);
}
if (task->task_state_flags & SAS_TASK_STATE_DONE) {
pr_err("Internal abort: timeout %016llx\n",


@ -558,8 +558,7 @@ static int experimental_iopoll_q_cnt;
module_param(experimental_iopoll_q_cnt, int, 0444);
MODULE_PARM_DESC(experimental_iopoll_q_cnt, "number of queues to be used as poll mode, def=0");
static void debugfs_work_handler_v3_hw(struct work_struct *work);
static void debugfs_snapshot_regs_v3_hw(struct hisi_hba *hisi_hba);
static int debugfs_snapshot_regs_v3_hw(struct hisi_hba *hisi_hba);
static u32 hisi_sas_read32(struct hisi_hba *hisi_hba, u32 off)
{
@ -3388,7 +3387,6 @@ hisi_sas_shost_alloc_pci(struct pci_dev *pdev)
hisi_hba = shost_priv(shost);
INIT_WORK(&hisi_hba->rst_work, hisi_sas_rst_work_handler);
INIT_WORK(&hisi_hba->debugfs_work, debugfs_work_handler_v3_hw);
hisi_hba->hw = &hisi_sas_v3_hw;
hisi_hba->pci_dev = pdev;
hisi_hba->dev = dev;
@ -3860,37 +3858,6 @@ static void debugfs_create_files_v3_hw(struct hisi_hba *hisi_hba)
&debugfs_ras_v3_hw_fops);
}
static void debugfs_snapshot_regs_v3_hw(struct hisi_hba *hisi_hba)
{
int debugfs_dump_index = hisi_hba->debugfs_dump_index;
struct device *dev = hisi_hba->dev;
u64 timestamp = local_clock();
if (debugfs_dump_index >= hisi_sas_debugfs_dump_count) {
dev_warn(dev, "dump count exceeded!\n");
return;
}
do_div(timestamp, NSEC_PER_MSEC);
hisi_hba->debugfs_timestamp[debugfs_dump_index] = timestamp;
debugfs_snapshot_prepare_v3_hw(hisi_hba);
debugfs_snapshot_global_reg_v3_hw(hisi_hba);
debugfs_snapshot_port_reg_v3_hw(hisi_hba);
debugfs_snapshot_axi_reg_v3_hw(hisi_hba);
debugfs_snapshot_ras_reg_v3_hw(hisi_hba);
debugfs_snapshot_cq_reg_v3_hw(hisi_hba);
debugfs_snapshot_dq_reg_v3_hw(hisi_hba);
debugfs_snapshot_itct_reg_v3_hw(hisi_hba);
debugfs_snapshot_iost_reg_v3_hw(hisi_hba);
debugfs_create_files_v3_hw(hisi_hba);
debugfs_snapshot_restore_v3_hw(hisi_hba);
hisi_hba->debugfs_dump_index++;
}
static ssize_t debugfs_trigger_dump_v3_hw_write(struct file *file,
const char __user *user_buf,
size_t count, loff_t *ppos)
@ -3898,9 +3865,6 @@ static ssize_t debugfs_trigger_dump_v3_hw_write(struct file *file,
struct hisi_hba *hisi_hba = file->f_inode->i_private;
char buf[8];
if (hisi_hba->debugfs_dump_index >= hisi_sas_debugfs_dump_count)
return -EFAULT;
if (count > 8)
return -EFAULT;
@ -3910,7 +3874,12 @@ static ssize_t debugfs_trigger_dump_v3_hw_write(struct file *file,
if (buf[0] != '1')
return -EFAULT;
queue_work(hisi_hba->wq, &hisi_hba->debugfs_work);
down(&hisi_hba->sem);
if (debugfs_snapshot_regs_v3_hw(hisi_hba)) {
up(&hisi_hba->sem);
return -EFAULT;
}
up(&hisi_hba->sem);
return count;
}
@ -4661,14 +4630,6 @@ static void debugfs_fifo_init_v3_hw(struct hisi_hba *hisi_hba)
}
}
static void debugfs_work_handler_v3_hw(struct work_struct *work)
{
struct hisi_hba *hisi_hba =
container_of(work, struct hisi_hba, debugfs_work);
debugfs_snapshot_regs_v3_hw(hisi_hba);
}
static void debugfs_release_v3_hw(struct hisi_hba *hisi_hba, int dump_index)
{
struct device *dev = hisi_hba->dev;
@ -4703,7 +4664,7 @@ static int debugfs_alloc_v3_hw(struct hisi_hba *hisi_hba, int dump_index)
{
const struct hisi_sas_hw *hw = hisi_hba->hw;
struct device *dev = hisi_hba->dev;
int p, c, d, r, i;
int p, c, d, r;
size_t sz;
for (r = 0; r < DEBUGFS_REGS_NUM; r++) {
@ -4783,11 +4744,48 @@ static int debugfs_alloc_v3_hw(struct hisi_hba *hisi_hba, int dump_index)
return 0;
fail:
for (i = 0; i < hisi_sas_debugfs_dump_count; i++)
debugfs_release_v3_hw(hisi_hba, i);
debugfs_release_v3_hw(hisi_hba, dump_index);
return -ENOMEM;
}
static int debugfs_snapshot_regs_v3_hw(struct hisi_hba *hisi_hba)
{
int debugfs_dump_index = hisi_hba->debugfs_dump_index;
struct device *dev = hisi_hba->dev;
u64 timestamp = local_clock();
if (debugfs_dump_index >= hisi_sas_debugfs_dump_count) {
dev_warn(dev, "dump count exceeded!\n");
return -EINVAL;
}
if (debugfs_alloc_v3_hw(hisi_hba, debugfs_dump_index)) {
dev_warn(dev, "failed to alloc memory\n");
return -ENOMEM;
}
do_div(timestamp, NSEC_PER_MSEC);
hisi_hba->debugfs_timestamp[debugfs_dump_index] = timestamp;
debugfs_snapshot_prepare_v3_hw(hisi_hba);
debugfs_snapshot_global_reg_v3_hw(hisi_hba);
debugfs_snapshot_port_reg_v3_hw(hisi_hba);
debugfs_snapshot_axi_reg_v3_hw(hisi_hba);
debugfs_snapshot_ras_reg_v3_hw(hisi_hba);
debugfs_snapshot_cq_reg_v3_hw(hisi_hba);
debugfs_snapshot_dq_reg_v3_hw(hisi_hba);
debugfs_snapshot_itct_reg_v3_hw(hisi_hba);
debugfs_snapshot_iost_reg_v3_hw(hisi_hba);
debugfs_create_files_v3_hw(hisi_hba);
debugfs_snapshot_restore_v3_hw(hisi_hba);
hisi_hba->debugfs_dump_index++;
return 0;
}
static void debugfs_phy_down_cnt_init_v3_hw(struct hisi_hba *hisi_hba)
{
struct dentry *dir = debugfs_create_dir("phy_down_cnt",
@ -4865,10 +4863,15 @@ static void debugfs_bist_init_v3_hw(struct hisi_hba *hisi_hba)
hisi_hba->debugfs_bist_linkrate = SAS_LINK_RATE_1_5_GBPS;
}
static void debugfs_exit_v3_hw(struct hisi_hba *hisi_hba)
{
debugfs_remove_recursive(hisi_hba->debugfs_dir);
hisi_hba->debugfs_dir = NULL;
}
static void debugfs_init_v3_hw(struct hisi_hba *hisi_hba)
{
struct device *dev = hisi_hba->dev;
int i;
hisi_hba->debugfs_dir = debugfs_create_dir(dev_name(dev),
hisi_sas_debugfs_dir);
@ -4885,19 +4888,6 @@ static void debugfs_init_v3_hw(struct hisi_hba *hisi_hba)
debugfs_phy_down_cnt_init_v3_hw(hisi_hba);
debugfs_fifo_init_v3_hw(hisi_hba);
for (i = 0; i < hisi_sas_debugfs_dump_count; i++) {
if (debugfs_alloc_v3_hw(hisi_hba, i)) {
debugfs_remove_recursive(hisi_hba->debugfs_dir);
dev_dbg(dev, "failed to init debugfs!\n");
break;
}
}
}
static void debugfs_exit_v3_hw(struct hisi_hba *hisi_hba)
{
debugfs_remove_recursive(hisi_hba->debugfs_dir);
}
static int


@ -22,7 +22,6 @@
#include <linux/bsg-lib.h>
#include <asm/firmware.h>
#include <asm/irq.h>
#include <asm/rtas.h>
#include <asm/vio.h>
#include <scsi/scsi.h>
#include <scsi/scsi_cmnd.h>
@ -38,6 +37,7 @@ static unsigned int default_timeout = IBMVFC_DEFAULT_TIMEOUT;
static u64 max_lun = IBMVFC_MAX_LUN;
static unsigned int max_targets = IBMVFC_MAX_TARGETS;
static unsigned int max_requests = IBMVFC_MAX_REQUESTS_DEFAULT;
static u16 scsi_qdepth = IBMVFC_SCSI_QDEPTH;
static unsigned int disc_threads = IBMVFC_MAX_DISC_THREADS;
static unsigned int ibmvfc_debug = IBMVFC_DEBUG;
static unsigned int log_level = IBMVFC_DEFAULT_LOG_LEVEL;
@ -83,6 +83,9 @@ MODULE_PARM_DESC(default_timeout,
module_param_named(max_requests, max_requests, uint, S_IRUGO);
MODULE_PARM_DESC(max_requests, "Maximum requests for this adapter. "
"[Default=" __stringify(IBMVFC_MAX_REQUESTS_DEFAULT) "]");
module_param_named(scsi_qdepth, scsi_qdepth, ushort, S_IRUGO);
MODULE_PARM_DESC(scsi_qdepth, "Maximum scsi command depth per adapter queue. "
"[Default=" __stringify(IBMVFC_SCSI_QDEPTH) "]");
module_param_named(max_lun, max_lun, ullong, S_IRUGO);
MODULE_PARM_DESC(max_lun, "Maximum allowed LUN. "
"[Default=" __stringify(IBMVFC_MAX_LUN) "]");
@ -160,8 +163,8 @@ static void ibmvfc_npiv_logout(struct ibmvfc_host *);
static void ibmvfc_tgt_implicit_logout_and_del(struct ibmvfc_target *);
static void ibmvfc_tgt_move_login(struct ibmvfc_target *);
static void ibmvfc_dereg_sub_crqs(struct ibmvfc_host *);
static void ibmvfc_reg_sub_crqs(struct ibmvfc_host *);
static void ibmvfc_dereg_sub_crqs(struct ibmvfc_host *, struct ibmvfc_channels *);
static void ibmvfc_reg_sub_crqs(struct ibmvfc_host *, struct ibmvfc_channels *);
static const char *unknown_error = "unknown error";
@ -776,28 +779,26 @@ static int ibmvfc_send_crq_init_complete(struct ibmvfc_host *vhost)
* ibmvfc_init_event_pool - Allocates and initializes the event pool for a host
* @vhost: ibmvfc host who owns the event pool
* @queue: ibmvfc queue struct
* @size: pool size
*
* Returns zero on success.
**/
static int ibmvfc_init_event_pool(struct ibmvfc_host *vhost,
struct ibmvfc_queue *queue,
unsigned int size)
struct ibmvfc_queue *queue)
{
int i;
struct ibmvfc_event_pool *pool = &queue->evt_pool;
ENTER;
if (!size)
if (!queue->total_depth)
return 0;
pool->size = size;
pool->events = kcalloc(size, sizeof(*pool->events), GFP_KERNEL);
pool->size = queue->total_depth;
pool->events = kcalloc(pool->size, sizeof(*pool->events), GFP_KERNEL);
if (!pool->events)
return -ENOMEM;
pool->iu_storage = dma_alloc_coherent(vhost->dev,
size * sizeof(*pool->iu_storage),
pool->size * sizeof(*pool->iu_storage),
&pool->iu_token, 0);
if (!pool->iu_storage) {
@ -807,9 +808,11 @@ static int ibmvfc_init_event_pool(struct ibmvfc_host *vhost,
INIT_LIST_HEAD(&queue->sent);
INIT_LIST_HEAD(&queue->free);
queue->evt_free = queue->evt_depth;
queue->reserved_free = queue->reserved_depth;
spin_lock_init(&queue->l_lock);
for (i = 0; i < size; ++i) {
for (i = 0; i < pool->size; ++i) {
struct ibmvfc_event *evt = &pool->events[i];
/*
@ -922,7 +925,7 @@ static int ibmvfc_reenable_crq_queue(struct ibmvfc_host *vhost)
struct vio_dev *vdev = to_vio_dev(vhost->dev);
unsigned long flags;
ibmvfc_dereg_sub_crqs(vhost);
ibmvfc_dereg_sub_crqs(vhost, &vhost->scsi_scrqs);
/* Re-enable the CRQ */
do {
@ -941,7 +944,7 @@ static int ibmvfc_reenable_crq_queue(struct ibmvfc_host *vhost)
spin_unlock(vhost->crq.q_lock);
spin_unlock_irqrestore(vhost->host->host_lock, flags);
ibmvfc_reg_sub_crqs(vhost);
ibmvfc_reg_sub_crqs(vhost, &vhost->scsi_scrqs);
return rc;
}
@ -960,7 +963,7 @@ static int ibmvfc_reset_crq(struct ibmvfc_host *vhost)
struct vio_dev *vdev = to_vio_dev(vhost->dev);
struct ibmvfc_queue *crq = &vhost->crq;
ibmvfc_dereg_sub_crqs(vhost);
ibmvfc_dereg_sub_crqs(vhost, &vhost->scsi_scrqs);
/* Close the CRQ */
do {
@ -993,7 +996,7 @@ static int ibmvfc_reset_crq(struct ibmvfc_host *vhost)
spin_unlock(vhost->crq.q_lock);
spin_unlock_irqrestore(vhost->host->host_lock, flags);
ibmvfc_reg_sub_crqs(vhost);
ibmvfc_reg_sub_crqs(vhost, &vhost->scsi_scrqs);
return rc;
}
@ -1033,6 +1036,12 @@ static void ibmvfc_free_event(struct ibmvfc_event *evt)
spin_lock_irqsave(&evt->queue->l_lock, flags);
list_add_tail(&evt->queue_list, &evt->queue->free);
if (evt->reserved) {
evt->reserved = 0;
evt->queue->reserved_free++;
} else {
evt->queue->evt_free++;
}
if (evt->eh_comp)
complete(evt->eh_comp);
spin_unlock_irqrestore(&evt->queue->l_lock, flags);
@ -1475,6 +1484,12 @@ static void ibmvfc_set_login_info(struct ibmvfc_host *vhost)
struct ibmvfc_queue *async_crq = &vhost->async_crq;
struct device_node *of_node = vhost->dev->of_node;
const char *location;
u16 max_cmds;
max_cmds = scsi_qdepth + IBMVFC_NUM_INTERNAL_REQ;
if (mq_enabled)
max_cmds += (scsi_qdepth + IBMVFC_NUM_INTERNAL_SUBQ_REQ) *
vhost->scsi_scrqs.desired_queues;
memset(login_info, 0, sizeof(*login_info));
@ -1489,7 +1504,7 @@ static void ibmvfc_set_login_info(struct ibmvfc_host *vhost)
if (vhost->client_migrated)
login_info->flags |= cpu_to_be16(IBMVFC_CLIENT_MIGRATED);
login_info->max_cmds = cpu_to_be32(max_requests + IBMVFC_NUM_INTERNAL_REQ);
login_info->max_cmds = cpu_to_be32(max_cmds);
login_info->capabilities = cpu_to_be64(IBMVFC_CAN_MIGRATE | IBMVFC_CAN_SEND_VF_WWPN);
if (vhost->mq_enabled || vhost->using_channels)
@ -1508,25 +1523,39 @@ static void ibmvfc_set_login_info(struct ibmvfc_host *vhost)
}
/**
* ibmvfc_get_event - Gets the next free event in pool
* __ibmvfc_get_event - Gets the next free event in pool
* @queue: ibmvfc queue struct
* @reserved: event is for a reserved management command
*
* Returns a free event from the pool.
**/
static struct ibmvfc_event *ibmvfc_get_event(struct ibmvfc_queue *queue)
static struct ibmvfc_event *__ibmvfc_get_event(struct ibmvfc_queue *queue, int reserved)
{
struct ibmvfc_event *evt;
struct ibmvfc_event *evt = NULL;
unsigned long flags;
spin_lock_irqsave(&queue->l_lock, flags);
BUG_ON(list_empty(&queue->free));
evt = list_entry(queue->free.next, struct ibmvfc_event, queue_list);
if (reserved && queue->reserved_free) {
evt = list_entry(queue->free.next, struct ibmvfc_event, queue_list);
evt->reserved = 1;
queue->reserved_free--;
} else if (queue->evt_free) {
evt = list_entry(queue->free.next, struct ibmvfc_event, queue_list);
queue->evt_free--;
} else {
goto out;
}
atomic_set(&evt->free, 0);
list_del(&evt->queue_list);
out:
spin_unlock_irqrestore(&queue->l_lock, flags);
return evt;
}
#define ibmvfc_get_event(queue) __ibmvfc_get_event(queue, 0)
#define ibmvfc_get_reserved_event(queue) __ibmvfc_get_event(queue, 1)
/**
* ibmvfc_locked_done - Calls evt completion with host_lock held
* @evt: ibmvfc evt to complete
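
The pool split introduced above pays off at allocation time: ordinary I/O
draws from evt_free while management commands (MADs, TMFs) get a small
reserved_free pool, so a saturated queue can no longer starve the error
handler; a reserved caller may still fall back to the general pool, but
not the other way around. A condensed sketch of the accounting (error
paths and event flags omitted; hypothetical function name):

	static struct ibmvfc_event *get_event_sketch(struct ibmvfc_queue *q,
						     int reserved)
	{
		struct ibmvfc_event *evt = NULL;
		unsigned long flags;

		spin_lock_irqsave(&q->l_lock, flags);
		if (reserved && q->reserved_free)
			q->reserved_free--;	/* management pool */
		else if (q->evt_free)
			q->evt_free--;		/* general pool */
		else
			goto out;	/* NULL => HOST_BUSY / -ENOMEM */
		evt = list_first_entry(&q->free, struct ibmvfc_event,
				       queue_list);
		list_del(&evt->queue_list);
	out:
		spin_unlock_irqrestore(&q->l_lock, flags);
		return evt;
	}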
@ -1948,9 +1977,15 @@ static int ibmvfc_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *cmnd)
if (vhost->using_channels) {
scsi_channel = hwq % vhost->scsi_scrqs.active_queues;
evt = ibmvfc_get_event(&vhost->scsi_scrqs.scrqs[scsi_channel]);
if (!evt)
return SCSI_MLQUEUE_HOST_BUSY;
evt->hwq = hwq % vhost->scsi_scrqs.active_queues;
} else
} else {
evt = ibmvfc_get_event(&vhost->crq);
if (!evt)
return SCSI_MLQUEUE_HOST_BUSY;
}
ibmvfc_init_event(evt, ibmvfc_scsi_done, IBMVFC_CMD_FORMAT);
evt->cmnd = cmnd;
@ -2037,7 +2072,12 @@ static int ibmvfc_bsg_timeout(struct bsg_job *job)
}
vhost->aborting_passthru = 1;
evt = ibmvfc_get_event(&vhost->crq);
evt = ibmvfc_get_reserved_event(&vhost->crq);
if (!evt) {
spin_unlock_irqrestore(vhost->host->host_lock, flags);
return -ENOMEM;
}
ibmvfc_init_event(evt, ibmvfc_bsg_timeout_done, IBMVFC_MAD_FORMAT);
tmf = &evt->iu.tmf;
@ -2095,7 +2135,11 @@ static int ibmvfc_bsg_plogi(struct ibmvfc_host *vhost, unsigned int port_id)
if (unlikely((rc = ibmvfc_host_chkready(vhost))))
goto unlock_out;
evt = ibmvfc_get_event(&vhost->crq);
evt = ibmvfc_get_reserved_event(&vhost->crq);
if (!evt) {
rc = -ENOMEM;
goto unlock_out;
}
ibmvfc_init_event(evt, ibmvfc_sync_completion, IBMVFC_MAD_FORMAT);
plogi = &evt->iu.plogi;
memset(plogi, 0, sizeof(*plogi));
@ -2213,7 +2257,12 @@ static int ibmvfc_bsg_request(struct bsg_job *job)
goto out;
}
evt = ibmvfc_get_event(&vhost->crq);
evt = ibmvfc_get_reserved_event(&vhost->crq);
if (!evt) {
spin_unlock_irqrestore(vhost->host->host_lock, flags);
rc = -ENOMEM;
goto out;
}
ibmvfc_init_event(evt, ibmvfc_sync_completion, IBMVFC_MAD_FORMAT);
mad = &evt->iu.passthru;
@ -2302,6 +2351,11 @@ static int ibmvfc_reset_device(struct scsi_device *sdev, int type, char *desc)
else
evt = ibmvfc_get_event(&vhost->crq);
if (!evt) {
spin_unlock_irqrestore(vhost->host->host_lock, flags);
return -ENOMEM;
}
ibmvfc_init_event(evt, ibmvfc_sync_completion, IBMVFC_CMD_FORMAT);
tmf = ibmvfc_init_vfc_cmd(evt, sdev);
iu = ibmvfc_get_fcp_iu(vhost, tmf);
@ -2504,7 +2558,9 @@ static struct ibmvfc_event *ibmvfc_init_tmf(struct ibmvfc_queue *queue,
struct ibmvfc_event *evt;
struct ibmvfc_tmf *tmf;
evt = ibmvfc_get_event(queue);
evt = ibmvfc_get_reserved_event(queue);
if (!evt)
return NULL;
ibmvfc_init_event(evt, ibmvfc_sync_completion, IBMVFC_MAD_FORMAT);
tmf = &evt->iu.tmf;
@ -2561,6 +2617,11 @@ static int ibmvfc_cancel_all_mq(struct scsi_device *sdev, int type)
if (found_evt && vhost->logged_in) {
evt = ibmvfc_init_tmf(&queues[i], sdev, type);
if (!evt) {
spin_unlock(queues[i].q_lock);
spin_unlock_irqrestore(vhost->host->host_lock, flags);
return -ENOMEM;
}
evt->sync_iu = &queues[i].cancel_rsp;
ibmvfc_send_event(evt, vhost, default_timeout);
list_add_tail(&evt->cancel, &cancelq);
@ -2774,6 +2835,10 @@ static int ibmvfc_abort_task_set(struct scsi_device *sdev)
if (vhost->state == IBMVFC_ACTIVE) {
evt = ibmvfc_get_event(&vhost->crq);
if (!evt) {
spin_unlock_irqrestore(vhost->host->host_lock, flags);
return -ENOMEM;
}
ibmvfc_init_event(evt, ibmvfc_sync_completion, IBMVFC_CMD_FORMAT);
tmf = ibmvfc_init_vfc_cmd(evt, sdev);
iu = ibmvfc_get_fcp_iu(vhost, tmf);
@ -2930,18 +2995,6 @@ static void ibmvfc_dev_cancel_all_noreset(struct scsi_device *sdev, void *data)
*rc |= ibmvfc_cancel_all(sdev, IBMVFC_TMF_SUPPRESS_ABTS);
}
/**
* ibmvfc_dev_cancel_all_reset - Device iterated cancel all function
* @sdev: scsi device struct
* @data: return code
*
**/
static void ibmvfc_dev_cancel_all_reset(struct scsi_device *sdev, void *data)
{
unsigned long *rc = data;
*rc |= ibmvfc_cancel_all(sdev, IBMVFC_TMF_TGT_RESET);
}
/**
* ibmvfc_eh_target_reset_handler - Reset the target
* @cmd: scsi command struct
@ -2951,22 +3004,38 @@ static void ibmvfc_dev_cancel_all_reset(struct scsi_device *sdev, void *data)
**/
static int ibmvfc_eh_target_reset_handler(struct scsi_cmnd *cmd)
{
struct scsi_device *sdev = cmd->device;
struct ibmvfc_host *vhost = shost_priv(sdev->host);
struct scsi_target *starget = scsi_target(sdev);
struct scsi_target *starget = scsi_target(cmd->device);
struct fc_rport *rport = starget_to_rport(starget);
struct Scsi_Host *shost = rport_to_shost(rport);
struct ibmvfc_host *vhost = shost_priv(shost);
int block_rc;
int reset_rc = 0;
int rc = FAILED;
unsigned long cancel_rc = 0;
bool tgt_reset = false;
ENTER;
block_rc = fc_block_scsi_eh(cmd);
block_rc = fc_block_rport(rport);
ibmvfc_wait_while_resetting(vhost);
if (block_rc != FAST_IO_FAIL) {
starget_for_each_device(starget, &cancel_rc, ibmvfc_dev_cancel_all_reset);
reset_rc = ibmvfc_reset_device(sdev, IBMVFC_TARGET_RESET, "target");
struct scsi_device *sdev;
shost_for_each_device(sdev, shost) {
if ((sdev->channel != starget->channel) ||
(sdev->id != starget->id))
continue;
cancel_rc |= ibmvfc_cancel_all(sdev,
IBMVFC_TMF_TGT_RESET);
if (!tgt_reset) {
reset_rc = ibmvfc_reset_device(sdev,
IBMVFC_TARGET_RESET, "target");
tgt_reset = true;
}
}
} else
starget_for_each_device(starget, &cancel_rc, ibmvfc_dev_cancel_all_noreset);
starget_for_each_device(starget, &cancel_rc,
ibmvfc_dev_cancel_all_noreset);
if (!cancel_rc && !reset_rc)
rc = ibmvfc_wait_for_ops(vhost, starget, ibmvfc_match_target);
@ -3513,11 +3582,12 @@ static ssize_t ibmvfc_show_scsi_channels(struct device *dev,
{
struct Scsi_Host *shost = class_to_shost(dev);
struct ibmvfc_host *vhost = shost_priv(shost);
struct ibmvfc_channels *scsi = &vhost->scsi_scrqs;
unsigned long flags = 0;
int len;
spin_lock_irqsave(shost->host_lock, flags);
len = snprintf(buf, PAGE_SIZE, "%d\n", vhost->client_scsi_channels);
len = snprintf(buf, PAGE_SIZE, "%d\n", scsi->desired_queues);
spin_unlock_irqrestore(shost->host_lock, flags);
return len;
}
@ -3528,12 +3598,13 @@ static ssize_t ibmvfc_store_scsi_channels(struct device *dev,
{
struct Scsi_Host *shost = class_to_shost(dev);
struct ibmvfc_host *vhost = shost_priv(shost);
struct ibmvfc_channels *scsi = &vhost->scsi_scrqs;
unsigned long flags = 0;
unsigned int channels;
spin_lock_irqsave(shost->host_lock, flags);
channels = simple_strtoul(buf, NULL, 10);
vhost->client_scsi_channels = min(channels, nr_scsi_hw_queues);
scsi->desired_queues = min(channels, shost->nr_hw_queues);
ibmvfc_hard_reset_host(vhost);
spin_unlock_irqrestore(shost->host_lock, flags);
return strlen(buf);
@ -3633,7 +3704,6 @@ static const struct scsi_host_template driver_template = {
.max_sectors = IBMVFC_MAX_SECTORS,
.shost_groups = ibmvfc_host_groups,
.track_queue_depth = 1,
.host_tagset = 1,
};
/**
@ -3869,7 +3939,7 @@ static void ibmvfc_drain_sub_crq(struct ibmvfc_queue *scrq)
}
}
static irqreturn_t ibmvfc_interrupt_scsi(int irq, void *scrq_instance)
static irqreturn_t ibmvfc_interrupt_mq(int irq, void *scrq_instance)
{
struct ibmvfc_queue *scrq = (struct ibmvfc_queue *)scrq_instance;
@ -4031,7 +4101,13 @@ static void ibmvfc_tgt_send_prli(struct ibmvfc_target *tgt)
return;
kref_get(&tgt->kref);
evt = ibmvfc_get_event(&vhost->crq);
evt = ibmvfc_get_reserved_event(&vhost->crq);
if (!evt) {
ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_NONE);
kref_put(&tgt->kref, ibmvfc_release_tgt);
__ibmvfc_reset_host(vhost);
return;
}
vhost->discovery_threads++;
ibmvfc_init_event(evt, ibmvfc_tgt_prli_done, IBMVFC_MAD_FORMAT);
evt->tgt = tgt;
@ -4138,7 +4214,13 @@ static void ibmvfc_tgt_send_plogi(struct ibmvfc_target *tgt)
kref_get(&tgt->kref);
tgt->logo_rcvd = 0;
evt = ibmvfc_get_event(&vhost->crq);
evt = ibmvfc_get_reserved_event(&vhost->crq);
if (!evt) {
ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_NONE);
kref_put(&tgt->kref, ibmvfc_release_tgt);
__ibmvfc_reset_host(vhost);
return;
}
vhost->discovery_threads++;
ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_INIT_WAIT);
ibmvfc_init_event(evt, ibmvfc_tgt_plogi_done, IBMVFC_MAD_FORMAT);
@ -4214,7 +4296,9 @@ static struct ibmvfc_event *__ibmvfc_tgt_get_implicit_logout_evt(struct ibmvfc_t
struct ibmvfc_event *evt;
kref_get(&tgt->kref);
evt = ibmvfc_get_event(&vhost->crq);
evt = ibmvfc_get_reserved_event(&vhost->crq);
if (!evt)
return NULL;
ibmvfc_init_event(evt, done, IBMVFC_MAD_FORMAT);
evt->tgt = tgt;
mad = &evt->iu.implicit_logout;
@ -4242,6 +4326,13 @@ static void ibmvfc_tgt_implicit_logout(struct ibmvfc_target *tgt)
vhost->discovery_threads++;
evt = __ibmvfc_tgt_get_implicit_logout_evt(tgt,
ibmvfc_tgt_implicit_logout_done);
if (!evt) {
vhost->discovery_threads--;
ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_NONE);
kref_put(&tgt->kref, ibmvfc_release_tgt);
__ibmvfc_reset_host(vhost);
return;
}
ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_INIT_WAIT);
if (ibmvfc_send_event(evt, vhost, default_timeout)) {
@ -4380,7 +4471,13 @@ static void ibmvfc_tgt_move_login(struct ibmvfc_target *tgt)
return;
kref_get(&tgt->kref);
evt = ibmvfc_get_event(&vhost->crq);
evt = ibmvfc_get_reserved_event(&vhost->crq);
if (!evt) {
ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_DEL_RPORT);
kref_put(&tgt->kref, ibmvfc_release_tgt);
__ibmvfc_reset_host(vhost);
return;
}
vhost->discovery_threads++;
ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_INIT_WAIT);
ibmvfc_init_event(evt, ibmvfc_tgt_move_login_done, IBMVFC_MAD_FORMAT);
@ -4546,7 +4643,15 @@ static void ibmvfc_adisc_timeout(struct timer_list *t)
vhost->abort_threads++;
kref_get(&tgt->kref);
evt = ibmvfc_get_event(&vhost->crq);
evt = ibmvfc_get_reserved_event(&vhost->crq);
if (!evt) {
tgt_err(tgt, "Failed to get cancel event for ADISC.\n");
vhost->abort_threads--;
kref_put(&tgt->kref, ibmvfc_release_tgt);
__ibmvfc_reset_host(vhost);
spin_unlock_irqrestore(vhost->host->host_lock, flags);
return;
}
ibmvfc_init_event(evt, ibmvfc_tgt_adisc_cancel_done, IBMVFC_MAD_FORMAT);
evt->tgt = tgt;
@ -4596,7 +4701,13 @@ static void ibmvfc_tgt_adisc(struct ibmvfc_target *tgt)
return;
kref_get(&tgt->kref);
evt = ibmvfc_get_event(&vhost->crq);
evt = ibmvfc_get_reserved_event(&vhost->crq);
if (!evt) {
ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_NONE);
kref_put(&tgt->kref, ibmvfc_release_tgt);
__ibmvfc_reset_host(vhost);
return;
}
vhost->discovery_threads++;
ibmvfc_init_event(evt, ibmvfc_tgt_adisc_done, IBMVFC_MAD_FORMAT);
evt->tgt = tgt;
@ -4699,7 +4810,13 @@ static void ibmvfc_tgt_query_target(struct ibmvfc_target *tgt)
return;
kref_get(&tgt->kref);
evt = ibmvfc_get_event(&vhost->crq);
evt = ibmvfc_get_reserved_event(&vhost->crq);
if (!evt) {
ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_NONE);
kref_put(&tgt->kref, ibmvfc_release_tgt);
__ibmvfc_reset_host(vhost);
return;
}
vhost->discovery_threads++;
evt->tgt = tgt;
ibmvfc_init_event(evt, ibmvfc_tgt_query_target_done, IBMVFC_MAD_FORMAT);
@ -4822,7 +4939,7 @@ static int ibmvfc_alloc_targets(struct ibmvfc_host *vhost)
int i, rc;
for (i = 0, rc = 0; !rc && i < vhost->num_targets; i++)
rc = ibmvfc_alloc_target(vhost, &vhost->disc_buf[i]);
rc = ibmvfc_alloc_target(vhost, &vhost->scsi_scrqs.disc_buf[i]);
return rc;
}
@ -4871,7 +4988,14 @@ static void ibmvfc_discover_targets_done(struct ibmvfc_event *evt)
static void ibmvfc_discover_targets(struct ibmvfc_host *vhost)
{
struct ibmvfc_discover_targets *mad;
struct ibmvfc_event *evt = ibmvfc_get_event(&vhost->crq);
struct ibmvfc_event *evt = ibmvfc_get_reserved_event(&vhost->crq);
int level = IBMVFC_DEFAULT_LOG_LEVEL;
if (!evt) {
ibmvfc_log(vhost, level, "Discover Targets failed: no available events\n");
ibmvfc_hard_reset_host(vhost);
return;
}
ibmvfc_init_event(evt, ibmvfc_discover_targets_done, IBMVFC_MAD_FORMAT);
mad = &evt->iu.discover_targets;
@ -4879,9 +5003,9 @@ static void ibmvfc_discover_targets(struct ibmvfc_host *vhost)
mad->common.version = cpu_to_be32(1);
mad->common.opcode = cpu_to_be32(IBMVFC_DISC_TARGETS);
mad->common.length = cpu_to_be16(sizeof(*mad));
mad->bufflen = cpu_to_be32(vhost->disc_buf_sz);
mad->buffer.va = cpu_to_be64(vhost->disc_buf_dma);
mad->buffer.len = cpu_to_be32(vhost->disc_buf_sz);
mad->bufflen = cpu_to_be32(vhost->scsi_scrqs.disc_buf_sz);
mad->buffer.va = cpu_to_be64(vhost->scsi_scrqs.disc_buf_dma);
mad->buffer.len = cpu_to_be32(vhost->scsi_scrqs.disc_buf_sz);
mad->flags = cpu_to_be32(IBMVFC_DISC_TGT_PORT_ID_WWPN_LIST);
ibmvfc_set_host_action(vhost, IBMVFC_HOST_ACTION_INIT_WAIT);
@ -4895,7 +5019,7 @@ static void ibmvfc_channel_setup_done(struct ibmvfc_event *evt)
{
struct ibmvfc_host *vhost = evt->vhost;
struct ibmvfc_channel_setup *setup = vhost->channel_setup_buf;
struct ibmvfc_scsi_channels *scrqs = &vhost->scsi_scrqs;
struct ibmvfc_channels *scrqs = &vhost->scsi_scrqs;
u32 mad_status = be16_to_cpu(evt->xfer_iu->channel_setup.common.status);
int level = IBMVFC_DEFAULT_LOG_LEVEL;
int flags, active_queues, i;
@ -4945,12 +5069,19 @@ static void ibmvfc_channel_setup(struct ibmvfc_host *vhost)
{
struct ibmvfc_channel_setup_mad *mad;
struct ibmvfc_channel_setup *setup_buf = vhost->channel_setup_buf;
struct ibmvfc_event *evt = ibmvfc_get_event(&vhost->crq);
struct ibmvfc_scsi_channels *scrqs = &vhost->scsi_scrqs;
struct ibmvfc_event *evt = ibmvfc_get_reserved_event(&vhost->crq);
struct ibmvfc_channels *scrqs = &vhost->scsi_scrqs;
unsigned int num_channels =
min(vhost->client_scsi_channels, vhost->max_vios_scsi_channels);
min(scrqs->desired_queues, vhost->max_vios_scsi_channels);
int level = IBMVFC_DEFAULT_LOG_LEVEL;
int i;
if (!evt) {
ibmvfc_log(vhost, level, "Channel Setup failed: no available events\n");
ibmvfc_hard_reset_host(vhost);
return;
}
memset(setup_buf, 0, sizeof(*setup_buf));
if (num_channels == 0)
setup_buf->flags = cpu_to_be32(IBMVFC_CANCEL_CHANNELS);
@ -5011,7 +5142,14 @@ static void ibmvfc_channel_enquiry_done(struct ibmvfc_event *evt)
static void ibmvfc_channel_enquiry(struct ibmvfc_host *vhost)
{
struct ibmvfc_channel_enquiry *mad;
struct ibmvfc_event *evt = ibmvfc_get_event(&vhost->crq);
struct ibmvfc_event *evt = ibmvfc_get_reserved_event(&vhost->crq);
int level = IBMVFC_DEFAULT_LOG_LEVEL;
if (!evt) {
ibmvfc_log(vhost, level, "Channel Enquiry failed: no available events\n");
ibmvfc_hard_reset_host(vhost);
return;
}
ibmvfc_init_event(evt, ibmvfc_channel_enquiry_done, IBMVFC_MAD_FORMAT);
mad = &evt->iu.channel_enquiry;
@ -5132,7 +5270,13 @@ static void ibmvfc_npiv_login_done(struct ibmvfc_event *evt)
static void ibmvfc_npiv_login(struct ibmvfc_host *vhost)
{
struct ibmvfc_npiv_login_mad *mad;
struct ibmvfc_event *evt = ibmvfc_get_event(&vhost->crq);
struct ibmvfc_event *evt = ibmvfc_get_reserved_event(&vhost->crq);
if (!evt) {
ibmvfc_dbg(vhost, "NPIV Login failed: no available events\n");
ibmvfc_hard_reset_host(vhost);
return;
}
ibmvfc_gather_partition_info(vhost);
ibmvfc_set_login_info(vhost);
@ -5197,7 +5341,13 @@ static void ibmvfc_npiv_logout(struct ibmvfc_host *vhost)
struct ibmvfc_npiv_logout_mad *mad;
struct ibmvfc_event *evt;
evt = ibmvfc_get_event(&vhost->crq);
evt = ibmvfc_get_reserved_event(&vhost->crq);
if (!evt) {
ibmvfc_dbg(vhost, "NPIV Logout failed: no available events\n");
ibmvfc_hard_reset_host(vhost);
return;
}
ibmvfc_init_event(evt, ibmvfc_npiv_logout_done, IBMVFC_MAD_FORMAT);
mad = &evt->iu.npiv_logout;
@ -5645,7 +5795,6 @@ static int ibmvfc_alloc_queue(struct ibmvfc_host *vhost,
{
struct device *dev = vhost->dev;
size_t fmt_size;
unsigned int pool_size = 0;
ENTER;
spin_lock_init(&queue->_lock);
@ -5654,7 +5803,9 @@ static int ibmvfc_alloc_queue(struct ibmvfc_host *vhost,
switch (fmt) {
case IBMVFC_CRQ_FMT:
fmt_size = sizeof(*queue->msgs.crq);
pool_size = max_requests + IBMVFC_NUM_INTERNAL_REQ;
queue->total_depth = scsi_qdepth + IBMVFC_NUM_INTERNAL_REQ;
queue->evt_depth = scsi_qdepth;
queue->reserved_depth = IBMVFC_NUM_INTERNAL_REQ;
break;
case IBMVFC_ASYNC_FMT:
fmt_size = sizeof(*queue->msgs.async);
@ -5662,14 +5813,17 @@ static int ibmvfc_alloc_queue(struct ibmvfc_host *vhost,
case IBMVFC_SUB_CRQ_FMT:
fmt_size = sizeof(*queue->msgs.scrq);
/* We need one extra event for Cancel Commands */
pool_size = max_requests + 1;
queue->total_depth = scsi_qdepth + IBMVFC_NUM_INTERNAL_SUBQ_REQ;
queue->evt_depth = scsi_qdepth;
queue->reserved_depth = IBMVFC_NUM_INTERNAL_SUBQ_REQ;
break;
default:
dev_warn(dev, "Unknown command/response queue message format: %d\n", fmt);
return -EINVAL;
}
if (ibmvfc_init_event_pool(vhost, queue, pool_size)) {
queue->fmt = fmt;
if (ibmvfc_init_event_pool(vhost, queue)) {
dev_err(dev, "Couldn't initialize event pool.\n");
return -ENOMEM;
}
@ -5688,7 +5842,6 @@ static int ibmvfc_alloc_queue(struct ibmvfc_host *vhost,
}
queue->cur = 0;
queue->fmt = fmt;
queue->size = PAGE_SIZE / fmt_size;
queue->vhost = vhost;
@ -5757,12 +5910,13 @@ static int ibmvfc_init_crq(struct ibmvfc_host *vhost)
return retrc;
}
static int ibmvfc_register_scsi_channel(struct ibmvfc_host *vhost,
int index)
static int ibmvfc_register_channel(struct ibmvfc_host *vhost,
struct ibmvfc_channels *channels,
int index)
{
struct device *dev = vhost->dev;
struct vio_dev *vdev = to_vio_dev(dev);
struct ibmvfc_queue *scrq = &vhost->scsi_scrqs.scrqs[index];
struct ibmvfc_queue *scrq = &channels->scrqs[index];
int rc = -ENOMEM;
ENTER;
@ -5786,9 +5940,24 @@ static int ibmvfc_register_scsi_channel(struct ibmvfc_host *vhost,
goto irq_failed;
}
snprintf(scrq->name, sizeof(scrq->name), "ibmvfc-%x-scsi%d",
vdev->unit_address, index);
rc = request_irq(scrq->irq, ibmvfc_interrupt_scsi, 0, scrq->name, scrq);
switch (channels->protocol) {
case IBMVFC_PROTO_SCSI:
snprintf(scrq->name, sizeof(scrq->name), "ibmvfc-%x-scsi%d",
vdev->unit_address, index);
scrq->handler = ibmvfc_interrupt_mq;
break;
case IBMVFC_PROTO_NVME:
snprintf(scrq->name, sizeof(scrq->name), "ibmvfc-%x-nvmf%d",
vdev->unit_address, index);
scrq->handler = ibmvfc_interrupt_mq;
break;
default:
dev_err(dev, "Unknown channel protocol (%d)\n",
channels->protocol);
goto irq_failed;
}
rc = request_irq(scrq->irq, scrq->handler, 0, scrq->name, scrq);
if (rc) {
dev_err(dev, "Couldn't register sub-crq[%d] irq\n", index);
@ -5804,17 +5973,19 @@ static int ibmvfc_register_scsi_channel(struct ibmvfc_host *vhost,
irq_failed:
do {
rc = plpar_hcall_norets(H_FREE_SUB_CRQ, vdev->unit_address, scrq->cookie);
} while (rtas_busy_delay(rc));
} while (rc == H_BUSY || H_IS_LONG_BUSY(rc));
reg_failed:
LEAVE;
return rc;
}
static void ibmvfc_deregister_scsi_channel(struct ibmvfc_host *vhost, int index)
static void ibmvfc_deregister_channel(struct ibmvfc_host *vhost,
struct ibmvfc_channels *channels,
int index)
{
struct device *dev = vhost->dev;
struct vio_dev *vdev = to_vio_dev(dev);
struct ibmvfc_queue *scrq = &vhost->scsi_scrqs.scrqs[index];
struct ibmvfc_queue *scrq = &channels->scrqs[index];
long rc;
ENTER;
@ -5838,18 +6009,19 @@ static void ibmvfc_deregister_scsi_channel(struct ibmvfc_host *vhost, int index)
LEAVE;
}
static void ibmvfc_reg_sub_crqs(struct ibmvfc_host *vhost)
static void ibmvfc_reg_sub_crqs(struct ibmvfc_host *vhost,
struct ibmvfc_channels *channels)
{
int i, j;
ENTER;
if (!vhost->mq_enabled || !vhost->scsi_scrqs.scrqs)
if (!vhost->mq_enabled || !channels->scrqs)
return;
for (i = 0; i < nr_scsi_hw_queues; i++) {
if (ibmvfc_register_scsi_channel(vhost, i)) {
for (i = 0; i < channels->max_queues; i++) {
if (ibmvfc_register_channel(vhost, channels, i)) {
for (j = i; j > 0; j--)
ibmvfc_deregister_scsi_channel(vhost, j - 1);
ibmvfc_deregister_channel(vhost, channels, j - 1);
vhost->do_enquiry = 0;
return;
}
@ -5858,80 +6030,105 @@ static void ibmvfc_reg_sub_crqs(struct ibmvfc_host *vhost)
LEAVE;
}
static void ibmvfc_dereg_sub_crqs(struct ibmvfc_host *vhost)
static void ibmvfc_dereg_sub_crqs(struct ibmvfc_host *vhost,
struct ibmvfc_channels *channels)
{
int i;
ENTER;
if (!vhost->mq_enabled || !vhost->scsi_scrqs.scrqs)
if (!vhost->mq_enabled || !channels->scrqs)
return;
for (i = 0; i < nr_scsi_hw_queues; i++)
ibmvfc_deregister_scsi_channel(vhost, i);
for (i = 0; i < channels->max_queues; i++)
ibmvfc_deregister_channel(vhost, channels, i);
LEAVE;
}
static int ibmvfc_alloc_channels(struct ibmvfc_host *vhost,
struct ibmvfc_channels *channels)
{
struct ibmvfc_queue *scrq;
int i, j;
int rc = 0;
channels->scrqs = kcalloc(channels->max_queues,
sizeof(*channels->scrqs),
GFP_KERNEL);
if (!channels->scrqs)
return -ENOMEM;
for (i = 0; i < channels->max_queues; i++) {
scrq = &channels->scrqs[i];
rc = ibmvfc_alloc_queue(vhost, scrq, IBMVFC_SUB_CRQ_FMT);
if (rc) {
for (j = i; j > 0; j--) {
scrq = &channels->scrqs[j - 1];
ibmvfc_free_queue(vhost, scrq);
}
kfree(channels->scrqs);
channels->scrqs = NULL;
channels->active_queues = 0;
return rc;
}
}
return rc;
}
static void ibmvfc_init_sub_crqs(struct ibmvfc_host *vhost)
{
struct ibmvfc_queue *scrq;
int i, j;
ENTER;
if (!vhost->mq_enabled)
return;
vhost->scsi_scrqs.scrqs = kcalloc(nr_scsi_hw_queues,
sizeof(*vhost->scsi_scrqs.scrqs),
GFP_KERNEL);
if (!vhost->scsi_scrqs.scrqs) {
if (ibmvfc_alloc_channels(vhost, &vhost->scsi_scrqs)) {
vhost->do_enquiry = 0;
vhost->mq_enabled = 0;
return;
}
for (i = 0; i < nr_scsi_hw_queues; i++) {
scrq = &vhost->scsi_scrqs.scrqs[i];
if (ibmvfc_alloc_queue(vhost, scrq, IBMVFC_SUB_CRQ_FMT)) {
for (j = i; j > 0; j--) {
scrq = &vhost->scsi_scrqs.scrqs[j - 1];
ibmvfc_free_queue(vhost, scrq);
}
kfree(vhost->scsi_scrqs.scrqs);
vhost->scsi_scrqs.scrqs = NULL;
vhost->scsi_scrqs.active_queues = 0;
vhost->do_enquiry = 0;
vhost->mq_enabled = 0;
return;
}
}
ibmvfc_reg_sub_crqs(vhost);
ibmvfc_reg_sub_crqs(vhost, &vhost->scsi_scrqs);
LEAVE;
}
static void ibmvfc_release_sub_crqs(struct ibmvfc_host *vhost)
static void ibmvfc_release_channels(struct ibmvfc_host *vhost,
struct ibmvfc_channels *channels)
{
struct ibmvfc_queue *scrq;
int i;
if (channels->scrqs) {
for (i = 0; i < channels->max_queues; i++) {
scrq = &channels->scrqs[i];
ibmvfc_free_queue(vhost, scrq);
}
kfree(channels->scrqs);
channels->scrqs = NULL;
channels->active_queues = 0;
}
}
static void ibmvfc_release_sub_crqs(struct ibmvfc_host *vhost)
{
ENTER;
if (!vhost->scsi_scrqs.scrqs)
return;
ibmvfc_dereg_sub_crqs(vhost);
ibmvfc_dereg_sub_crqs(vhost, &vhost->scsi_scrqs);
for (i = 0; i < nr_scsi_hw_queues; i++) {
scrq = &vhost->scsi_scrqs.scrqs[i];
ibmvfc_free_queue(vhost, scrq);
}
kfree(vhost->scsi_scrqs.scrqs);
vhost->scsi_scrqs.scrqs = NULL;
vhost->scsi_scrqs.active_queues = 0;
ibmvfc_release_channels(vhost, &vhost->scsi_scrqs);
LEAVE;
}
static void ibmvfc_free_disc_buf(struct device *dev, struct ibmvfc_channels *channels)
{
dma_free_coherent(dev, channels->disc_buf_sz, channels->disc_buf,
channels->disc_buf_dma);
}
/**
* ibmvfc_free_mem - Free memory for vhost
* @vhost: ibmvfc host struct
@ -5946,8 +6143,7 @@ static void ibmvfc_free_mem(struct ibmvfc_host *vhost)
ENTER;
mempool_destroy(vhost->tgt_pool);
kfree(vhost->trace);
dma_free_coherent(vhost->dev, vhost->disc_buf_sz, vhost->disc_buf,
vhost->disc_buf_dma);
ibmvfc_free_disc_buf(vhost->dev, &vhost->scsi_scrqs);
dma_free_coherent(vhost->dev, sizeof(*vhost->login_buf),
vhost->login_buf, vhost->login_buf_dma);
dma_free_coherent(vhost->dev, sizeof(*vhost->channel_setup_buf),
@ -5957,6 +6153,21 @@ static void ibmvfc_free_mem(struct ibmvfc_host *vhost)
LEAVE;
}
static int ibmvfc_alloc_disc_buf(struct device *dev, struct ibmvfc_channels *channels)
{
channels->disc_buf_sz = sizeof(*channels->disc_buf) * max_targets;
channels->disc_buf = dma_alloc_coherent(dev, channels->disc_buf_sz,
&channels->disc_buf_dma, GFP_KERNEL);
if (!channels->disc_buf) {
dev_err(dev, "Couldn't allocate %s Discover Targets buffer\n",
(channels->protocol == IBMVFC_PROTO_SCSI) ? "SCSI" : "NVMe");
return -ENOMEM;
}
return 0;
}
/**
* ibmvfc_alloc_mem - Allocate memory for vhost
* @vhost: ibmvfc host struct
@ -5992,21 +6203,15 @@ static int ibmvfc_alloc_mem(struct ibmvfc_host *vhost)
goto free_sg_pool;
}
vhost->disc_buf_sz = sizeof(*vhost->disc_buf) * max_targets;
vhost->disc_buf = dma_alloc_coherent(dev, vhost->disc_buf_sz,
&vhost->disc_buf_dma, GFP_KERNEL);
if (!vhost->disc_buf) {
dev_err(dev, "Couldn't allocate Discover Targets buffer\n");
if (ibmvfc_alloc_disc_buf(dev, &vhost->scsi_scrqs))
goto free_login_buffer;
}
vhost->trace = kcalloc(IBMVFC_NUM_TRACE_ENTRIES,
sizeof(struct ibmvfc_trace_entry), GFP_KERNEL);
atomic_set(&vhost->trace_index, -1);
if (!vhost->trace)
goto free_disc_buffer;
goto free_scsi_disc_buffer;
vhost->tgt_pool = mempool_create_kmalloc_pool(IBMVFC_TGT_MEMPOOL_SZ,
sizeof(struct ibmvfc_target));
@ -6032,9 +6237,8 @@ static int ibmvfc_alloc_mem(struct ibmvfc_host *vhost)
mempool_destroy(vhost->tgt_pool);
free_trace:
kfree(vhost->trace);
free_disc_buffer:
dma_free_coherent(dev, vhost->disc_buf_sz, vhost->disc_buf,
vhost->disc_buf_dma);
free_scsi_disc_buffer:
ibmvfc_free_disc_buf(dev, &vhost->scsi_scrqs);
free_login_buffer:
dma_free_coherent(dev, sizeof(*vhost->login_buf),
vhost->login_buf, vhost->login_buf_dma);
@ -6113,7 +6317,8 @@ static int ibmvfc_probe(struct vio_dev *vdev, const struct vio_device_id *id)
struct Scsi_Host *shost;
struct device *dev = &vdev->dev;
int rc = -ENOMEM;
unsigned int max_scsi_queues = IBMVFC_MAX_SCSI_QUEUES;
unsigned int online_cpus = num_online_cpus();
unsigned int max_scsi_queues = min((unsigned int)IBMVFC_MAX_SCSI_QUEUES, online_cpus);
ENTER;
shost = scsi_host_alloc(&driver_template, sizeof(*vhost));
@ -6123,7 +6328,7 @@ static int ibmvfc_probe(struct vio_dev *vdev, const struct vio_device_id *id)
}
shost->transportt = ibmvfc_transport_template;
shost->can_queue = max_requests;
shost->can_queue = scsi_qdepth;
shost->max_lun = max_lun;
shost->max_id = max_targets;
shost->max_sectors = IBMVFC_MAX_SECTORS;
@ -6142,7 +6347,9 @@ static int ibmvfc_probe(struct vio_dev *vdev, const struct vio_device_id *id)
vhost->task_set = 1;
vhost->mq_enabled = mq_enabled;
vhost->client_scsi_channels = min(shost->nr_hw_queues, nr_scsi_channels);
vhost->scsi_scrqs.desired_queues = min(shost->nr_hw_queues, nr_scsi_channels);
vhost->scsi_scrqs.max_queues = shost->nr_hw_queues;
vhost->scsi_scrqs.protocol = IBMVFC_PROTO_SCSI;
vhost->using_channels = 0;
vhost->do_enquiry = 1;
vhost->scan_timeout = 0;
@ -6282,7 +6489,9 @@ static int ibmvfc_resume(struct device *dev)
*/
static unsigned long ibmvfc_get_desired_dma(struct vio_dev *vdev)
{
unsigned long pool_dma = max_requests * sizeof(union ibmvfc_iu);
unsigned long pool_dma;
pool_dma = (IBMVFC_MAX_SCSI_QUEUES * scsi_qdepth) * sizeof(union ibmvfc_iu);
return pool_dma + ((512 * 1024) * driver_template.cmd_per_lun);
}
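Aside on the new sizing: with scsi_qdepth at its default of 128 (IBMVFC_SCSI_QDEPTH in the header change below) and assuming, purely for illustration, IBMVFC_MAX_SCSI_QUEUES = 16 and a 1 KiB union ibmvfc_iu (neither value appears in this diff):

	pool_dma = 16 * 128 * 1024 bytes = 2 MiB

The desired DMA window now scales with the worst-case hardware queue count rather than with the max_requests module parameter.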


@ -27,6 +27,7 @@
#define IBMVFC_ABORT_TIMEOUT 8
#define IBMVFC_ABORT_WAIT_TIMEOUT 40
#define IBMVFC_MAX_REQUESTS_DEFAULT 100
#define IBMVFC_SCSI_QDEPTH 128
#define IBMVFC_DEBUG 0
#define IBMVFC_MAX_TARGETS 1024
@ -57,6 +58,8 @@
* 2 for each discovery thread
*/
#define IBMVFC_NUM_INTERNAL_REQ (1 + 1 + 1 + 2 + (disc_threads * 2))
/* Reserved subset of events for cancelling channelized IO commands */
#define IBMVFC_NUM_INTERNAL_SUBQ_REQ 4
#define IBMVFC_MAD_SUCCESS 0x00
#define IBMVFC_MAD_NOT_SUPPORTED 0xF1
@ -713,9 +716,15 @@ enum ibmvfc_target_action {
IBMVFC_TGT_ACTION_LOGOUT_DELETED_RPORT,
};
enum ibmvfc_protocol {
IBMVFC_PROTO_SCSI = 0,
IBMVFC_PROTO_NVME = 1,
};
struct ibmvfc_target {
struct list_head queue;
struct ibmvfc_host *vhost;
enum ibmvfc_protocol protocol;
u64 scsi_id;
u64 wwpn;
u64 new_scsi_id;
@ -758,6 +767,7 @@ struct ibmvfc_event {
struct completion *eh_comp;
struct timer_list timer;
u16 hwq;
u8 reserved;
};
/* a pool of event structs for use */
@ -793,6 +803,11 @@ struct ibmvfc_queue {
struct ibmvfc_event_pool evt_pool;
struct list_head sent;
struct list_head free;
u16 total_depth;
u16 evt_depth;
u16 reserved_depth;
u16 evt_free;
u16 reserved_free;
spinlock_t l_lock;
union ibmvfc_iu cancel_rsp;
@ -804,11 +819,18 @@ struct ibmvfc_queue {
unsigned long irq;
unsigned long hwq_id;
char name[32];
irq_handler_t handler;
};
struct ibmvfc_scsi_channels {
struct ibmvfc_channels {
struct ibmvfc_queue *scrqs;
enum ibmvfc_protocol protocol;
unsigned int active_queues;
unsigned int desired_queues;
unsigned int max_queues;
int disc_buf_sz;
struct ibmvfc_discover_targets_entry *disc_buf;
dma_addr_t disc_buf_dma;
};
enum ibmvfc_host_action {
@ -857,37 +879,33 @@ struct ibmvfc_host {
mempool_t *tgt_pool;
struct ibmvfc_queue crq;
struct ibmvfc_queue async_crq;
struct ibmvfc_scsi_channels scsi_scrqs;
struct ibmvfc_channels scsi_scrqs;
struct ibmvfc_npiv_login login_info;
union ibmvfc_npiv_login_data *login_buf;
dma_addr_t login_buf_dma;
struct ibmvfc_channel_setup *channel_setup_buf;
dma_addr_t channel_setup_dma;
int disc_buf_sz;
int log_level;
struct ibmvfc_discover_targets_entry *disc_buf;
struct mutex passthru_mutex;
int max_vios_scsi_channels;
unsigned int max_vios_scsi_channels;
int task_set;
int init_retries;
int discovery_threads;
int abort_threads;
int client_migrated;
int reinit;
int delay_init;
int scan_complete;
unsigned int client_migrated:1;
unsigned int reinit:1;
unsigned int delay_init:1;
unsigned int logged_in:1;
unsigned int mq_enabled:1;
unsigned int using_channels:1;
unsigned int do_enquiry:1;
unsigned int aborting_passthru:1;
unsigned int scan_complete:1;
int scan_timeout;
int logged_in;
int mq_enabled;
int using_channels;
int do_enquiry;
int client_scsi_channels;
int aborting_passthru;
int events_to_log;
#define IBMVFC_AE_LINKUP 0x0001
#define IBMVFC_AE_LINKDOWN 0x0002
#define IBMVFC_AE_RSCN 0x0004
dma_addr_t disc_buf_dma;
unsigned int partition_number;
char partition_name[97];
void (*job_step) (struct ibmvfc_host *);


@ -3975,6 +3975,9 @@ static const struct target_core_fabric_ops ibmvscsis_ops = {
.fabric_drop_tpg = ibmvscsis_drop_tpg,
.tfc_wwn_attrs = ibmvscsis_wwn_attrs,
.default_submit_type = TARGET_DIRECT_SUBMIT,
.direct_submit_supp = 1,
};
static void ibmvscsis_dev_release(struct device *dev) {};


@ -51,10 +51,15 @@ typedef struct {
} imm_struct;
static void imm_reset_pulse(unsigned int base);
static int device_check(imm_struct *dev);
static int device_check(imm_struct *dev, bool autodetect);
#include "imm.h"
static unsigned int mode = IMM_AUTODETECT;
module_param(mode, uint, 0644);
MODULE_PARM_DESC(mode, "Transfer mode (0 = Autodetect, 1 = SPP 4-bit, "
"2 = SPP 8-bit, 3 = EPP 8-bit, 4 = EPP 16-bit, 5 = EPP 32-bit)");
static inline imm_struct *imm_dev(struct Scsi_Host *host)
{
return *(imm_struct **)&host->hostdata;
@ -366,13 +371,10 @@ static int imm_out(imm_struct *dev, char *buffer, int len)
case IMM_EPP_8:
epp_reset(ppb);
w_ctr(ppb, 0x4);
#ifdef CONFIG_SCSI_IZIP_EPP16
if (!(((long) buffer | len) & 0x01))
outsw(ppb + 4, buffer, len >> 1);
#else
if (!(((long) buffer | len) & 0x03))
if (dev->mode == IMM_EPP_32 && !(((long) buffer | len) & 0x03))
outsl(ppb + 4, buffer, len >> 2);
#endif
else if (dev->mode == IMM_EPP_16 && !(((long) buffer | len) & 0x01))
outsw(ppb + 4, buffer, len >> 1);
else
outsb(ppb + 4, buffer, len);
w_ctr(ppb, 0xc);
@ -426,13 +428,10 @@ static int imm_in(imm_struct *dev, char *buffer, int len)
case IMM_EPP_8:
epp_reset(ppb);
w_ctr(ppb, 0x24);
#ifdef CONFIG_SCSI_IZIP_EPP16
if (!(((long) buffer | len) & 0x01))
insw(ppb + 4, buffer, len >> 1);
#else
if (!(((long) buffer | len) & 0x03))
insl(ppb + 4, buffer, len >> 2);
#endif
if (dev->mode == IMM_EPP_32 && !(((long) buffer | len) & 0x03))
insl(ppb + 4, buffer, len >> 2);
else if (dev->mode == IMM_EPP_16 && !(((long) buffer | len) & 0x01))
insw(ppb + 4, buffer, len >> 1);
else
insb(ppb + 4, buffer, len);
w_ctr(ppb, 0x2c);
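Aside: both EPP transfer paths rely on the (((long) buffer | len) & 0x03) idiom, which ORs the address and the length so that a single mask tests both for alignment. A standalone illustration (not driver code):

#include <stdio.h>

/* Non-zero low bits in (address | length) mean either the buffer
 * pointer or the byte count is unfit for 32-bit port I/O. */
static int aligned4(const void *buf, long len)
{
	return !(((long)buf | len) & 0x03);
}

int main(void)
{
	long b[2]; /* at least 4-byte aligned on common ABIs */

	printf("%d\n", aligned4(b, 8));             /* 1: both aligned */
	printf("%d\n", aligned4((char *)b + 1, 8)); /* 0: misaligned pointer */
	printf("%d\n", aligned4(b, 6));             /* 0: length not 4-aligned */
	return 0;
}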
@ -589,13 +588,28 @@ static int imm_select(imm_struct *dev, int target)
static int imm_init(imm_struct *dev)
{
bool autodetect = dev->mode == IMM_AUTODETECT;
if (autodetect) {
int modes = dev->dev->port->modes;
/* Mode detection works up the chain of speed
* This avoids a nasty if-then-else-if-... tree
*/
dev->mode = IMM_NIBBLE;
if (modes & PARPORT_MODE_TRISTATE)
dev->mode = IMM_PS2;
}
if (imm_connect(dev, 0) != 1)
return -EIO;
imm_reset_pulse(dev->base);
mdelay(1); /* Delay to allow devices to settle */
imm_disconnect(dev);
mdelay(1); /* Another delay to allow devices to settle */
return device_check(dev);
return device_check(dev, autodetect);
}
static inline int imm_send_command(struct scsi_cmnd *cmd)
@ -1000,7 +1014,7 @@ static int imm_reset(struct scsi_cmnd *cmd)
return SUCCESS;
}
static int device_check(imm_struct *dev)
static int device_check(imm_struct *dev, bool autodetect)
{
/* This routine looks for a device and then attempts to use EPP
to send a command. If all goes as planned then EPP is available. */
@ -1012,8 +1026,8 @@ static int device_check(imm_struct *dev)
old_mode = dev->mode;
for (loop = 0; loop < 8; loop++) {
/* Attempt to use EPP for Test Unit Ready */
if ((ppb & 0x0007) == 0x0000)
dev->mode = IMM_EPP_32;
if (autodetect && (ppb & 0x0007) == 0x0000)
dev->mode = IMM_EPP_8;
second_pass:
imm_connect(dev, CONNECT_EPP_MAYBE);
@ -1038,7 +1052,7 @@ static int device_check(imm_struct *dev)
udelay(1000);
imm_disconnect(dev);
udelay(1000);
if (dev->mode == IMM_EPP_32) {
if (dev->mode != old_mode) {
dev->mode = old_mode;
goto second_pass;
}
@ -1063,7 +1077,7 @@ static int device_check(imm_struct *dev)
udelay(1000);
imm_disconnect(dev);
udelay(1000);
if (dev->mode == IMM_EPP_32) {
if (dev->mode != old_mode) {
dev->mode = old_mode;
goto second_pass;
}
@ -1150,7 +1164,6 @@ static int __imm_attach(struct parport *pb)
DECLARE_WAIT_QUEUE_HEAD_ONSTACK(waiting);
DEFINE_WAIT(wait);
int ports;
int modes, ppb;
int err = -ENOMEM;
struct pardev_cb imm_cb;
@ -1162,7 +1175,7 @@ static int __imm_attach(struct parport *pb)
dev->base = -1;
dev->mode = IMM_AUTODETECT;
dev->mode = mode < IMM_UNKNOWN ? mode : IMM_AUTODETECT;
INIT_LIST_HEAD(&dev->list);
temp = find_parent();
@ -1197,18 +1210,9 @@ static int __imm_attach(struct parport *pb)
}
dev->waiting = NULL;
finish_wait(&waiting, &wait);
ppb = dev->base = dev->dev->port->base;
dev->base = dev->dev->port->base;
dev->base_hi = dev->dev->port->base_hi;
w_ctr(ppb, 0x0c);
modes = dev->dev->port->modes;
/* Mode detection works up the chain of speed
* This avoids a nasty if-then-else-if-... tree
*/
dev->mode = IMM_NIBBLE;
if (modes & PARPORT_MODE_TRISTATE)
dev->mode = IMM_PS2;
w_ctr(dev->base, 0x0c);
/* Done configuration */


@ -100,11 +100,7 @@ static char *IMM_MODE_STRING[] =
[IMM_PS2] = "PS/2",
[IMM_EPP_8] = "EPP 8 bit",
[IMM_EPP_16] = "EPP 16 bit",
#ifdef CONFIG_SCSI_IZIP_EPP16
[IMM_EPP_32] = "EPP 16 bit",
#else
[IMM_EPP_32] = "EPP 32 bit",
#endif
[IMM_UNKNOWN] = "Unknown",
};


@ -835,7 +835,6 @@ static int __ips_eh_reset(struct scsi_cmnd *SC)
int i;
ips_ha_t *ha;
ips_scb_t *scb;
ips_copp_wait_item_t *item;
METHOD_TRACE("ips_eh_reset", 1);
@ -860,23 +859,6 @@ static int __ips_eh_reset(struct scsi_cmnd *SC)
if (!ha->active)
return (FAILED);
/* See if the command is on the copp queue */
item = ha->copp_waitlist.head;
while ((item) && (item->scsi_cmd != SC))
item = item->next;
if (item) {
/* Found it */
ips_removeq_copp(&ha->copp_waitlist, item);
return (SUCCESS);
}
/* See if the command is on the wait queue */
if (ips_removeq_wait(&ha->scb_waitlist, SC)) {
/* command not sent yet */
return (SUCCESS);
}
/* An explanation for the casual observer: */
/* Part of the function of a RAID controller is automatic error */
/* detection and recovery. As such, the only problem that physically */


@ -241,6 +241,12 @@ static void fc_lport_ptp_setup(struct fc_lport *lport,
}
mutex_lock(&lport->disc.disc_mutex);
lport->ptp_rdata = fc_rport_create(lport, remote_fid);
if (!lport->ptp_rdata) {
printk(KERN_WARNING "libfc: Failed to setup lport 0x%x\n",
lport->port_id);
mutex_unlock(&lport->disc.disc_mutex);
return;
}
kref_get(&lport->ptp_rdata->kref);
lport->ptp_rdata->ids.port_name = remote_wwpn;
lport->ptp_rdata->ids.node_name = remote_wwnn;


@ -275,7 +275,7 @@ static void sas_resume_devices(struct work_struct *work)
*
* See comment in sas_discover_sata().
*/
int sas_discover_end_dev(struct domain_device *dev)
static int sas_discover_end_dev(struct domain_device *dev)
{
return sas_notify_lldd_dev_found(dev);
}


@ -315,8 +315,8 @@ int sas_phy_reset(struct sas_phy *phy, int hard_reset)
}
EXPORT_SYMBOL_GPL(sas_phy_reset);
int sas_set_phy_speed(struct sas_phy *phy,
struct sas_phy_linkrates *rates)
static int sas_set_phy_speed(struct sas_phy *phy,
struct sas_phy_linkrates *rates)
{
int ret;


@ -39,6 +39,18 @@ struct sas_phy_data {
struct sas_work enable_work;
};
void sas_hash_addr(u8 *hashed, const u8 *sas_addr);
int sas_discover_root_expander(struct domain_device *dev);
int sas_ex_revalidate_domain(struct domain_device *dev);
void sas_unregister_domain_devices(struct asd_sas_port *port, int gone);
void sas_init_disc(struct sas_discovery *disc, struct asd_sas_port *port);
void sas_discover_event(struct asd_sas_port *port, enum discover_event ev);
void sas_init_dev(struct domain_device *dev);
void sas_unregister_dev(struct asd_sas_port *port, struct domain_device *dev);
void sas_scsi_recover_host(struct Scsi_Host *shost);
int sas_register_phys(struct sas_ha_struct *sas_ha);


@ -131,6 +131,15 @@ lpfc_els_chk_latt(struct lpfc_vport *vport)
return 1;
}
static bool lpfc_is_els_acc_rsp(struct lpfc_dmabuf *buf)
{
struct fc_els_ls_acc *rsp = buf->virt;
if (rsp && rsp->la_cmd == ELS_LS_ACC)
return true;
return false;
}
/**
* lpfc_prep_els_iocb - Allocate and prepare a lpfc iocb data structure
* @vport: pointer to a host virtual N_Port data structure.
@ -1107,6 +1116,8 @@ lpfc_cmpl_els_flogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
prsp = list_get_first(&pcmd->list, struct lpfc_dmabuf, list);
if (!prsp)
goto out;
if (!lpfc_is_els_acc_rsp(prsp))
goto out;
sp = prsp->virt + sizeof(uint32_t);
/* FLOGI completes successfully */
@ -2119,6 +2130,10 @@ lpfc_cmpl_els_plogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
/* Good status, call state machine */
prsp = list_entry(cmdiocb->cmd_dmabuf->list.next,
struct lpfc_dmabuf, list);
if (!prsp)
goto out;
if (!lpfc_is_els_acc_rsp(prsp))
goto out;
ndlp = lpfc_plogi_confirm_nport(phba, prsp->virt, ndlp);
sp = (struct serv_parm *)((u8 *)prsp->virt +
@ -3445,6 +3460,8 @@ lpfc_cmpl_els_disc_cmd(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
prdf = (struct lpfc_els_rdf_rsp *)prsp->virt;
if (!prdf)
goto out;
if (!lpfc_is_els_acc_rsp(prsp))
goto out;
for (i = 0; i < ELS_RDF_REG_TAG_CNT &&
i < be32_to_cpu(prdf->reg_d1.reg_desc.count); i++)
@ -4043,6 +4060,9 @@ lpfc_cmpl_els_edc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
edc_rsp->acc_hdr.la_cmd,
be32_to_cpu(edc_rsp->desc_list_len));
if (!lpfc_is_els_acc_rsp(prsp))
goto out;
/*
* Payload length in bytes is the response descriptor list
* length minus the 12 bytes of Link Service Request
@ -11339,6 +11359,9 @@ lpfc_cmpl_els_fdisc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
prsp = list_get_first(&pcmd->list, struct lpfc_dmabuf, list);
if (!prsp)
goto out;
if (!lpfc_is_els_acc_rsp(prsp))
goto out;
sp = prsp->virt + sizeof(uint32_t);
fabric_param_changed = lpfc_check_clean_addr_bit(vport, sp);
memcpy(&vport->fabric_portname, &sp->portName,


@ -5654,7 +5654,7 @@ __lpfc_findnode_did(struct lpfc_vport *vport, uint32_t did)
((uint32_t)ndlp->nlp_xri << 16) |
((uint32_t)ndlp->nlp_type << 8)
);
lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE,
lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE_VERBOSE,
"0929 FIND node DID "
"Data: x%px x%x x%x x%x x%x x%px\n",
ndlp, ndlp->nlp_DID,
@ -5701,8 +5701,8 @@ lpfc_findnode_mapped(struct lpfc_vport *vport)
((uint32_t)ndlp->nlp_type << 8) |
((uint32_t)ndlp->nlp_rpi & 0xff));
spin_unlock_irqrestore(shost->host_lock, iflags);
lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE,
"2025 FIND node DID "
lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE_VERBOSE,
"2025 FIND node DID MAPPED "
"Data: x%px x%x x%x x%x x%px\n",
ndlp, ndlp->nlp_DID,
ndlp->nlp_flag, data1,
@ -6468,7 +6468,7 @@ __lpfc_find_node(struct lpfc_vport *vport, node_filter filter, void *param)
list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) {
if (filter(ndlp, param)) {
lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE,
lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE_VERBOSE,
"3185 FIND node filter %ps DID "
"ndlp x%px did x%x flg x%x st x%x "
"xri x%x type x%x rpi x%x\n",


@ -25,7 +25,7 @@
#define LOG_MBOX 0x00000004 /* Mailbox events */
#define LOG_INIT 0x00000008 /* Initialization events */
#define LOG_LINK_EVENT 0x00000010 /* Link events */
#define LOG_IP 0x00000020 /* IP traffic history */
#define LOG_NODE_VERBOSE 0x00000020 /* Node verbose events */
#define LOG_FCP 0x00000040 /* FCP traffic history */
#define LOG_NODE 0x00000080 /* Node table events */
#define LOG_TEMP 0x00000100 /* Temperature sensor events */


@ -934,25 +934,35 @@ lpfc_rcv_prli_support_check(struct lpfc_vport *vport,
struct ls_rjt stat;
uint32_t *payload;
uint32_t cmd;
PRLI *npr;
payload = cmdiocb->cmd_dmabuf->virt;
cmd = *payload;
npr = (PRLI *)((uint8_t *)payload + sizeof(uint32_t));
if (vport->phba->nvmet_support) {
/* Must be a NVME PRLI */
if (cmd == ELS_CMD_PRLI)
if (cmd == ELS_CMD_PRLI)
goto out;
} else {
/* Initiator mode. */
if (!vport->nvmei_support && (cmd == ELS_CMD_NVMEPRLI))
goto out;
/* NPIV ports will RJT initiator only functions */
if (vport->port_type == LPFC_NPIV_PORT &&
npr->initiatorFunc && !npr->targetFunc)
goto out;
}
return 1;
out:
lpfc_printf_vlog(vport, KERN_WARNING, LOG_NVME_DISC,
lpfc_printf_vlog(vport, KERN_WARNING, LOG_DISCOVERY,
"6115 Rcv PRLI (%x) check failed: ndlp rpi %d "
"state x%x flags x%x\n",
"state x%x flags x%x port_type: x%x "
"npr->initfcn: x%x npr->tgtfcn: x%x\n",
cmd, ndlp->nlp_rpi, ndlp->nlp_state,
ndlp->nlp_flag);
ndlp->nlp_flag, vport->port_type,
npr->initiatorFunc, npr->targetFunc);
memset(&stat, 0, sizeof(struct ls_rjt));
stat.un.b.lsRjtRsnCode = LSRJT_CMD_UNSUPPORTED;
stat.un.b.lsRjtRsnCodeExp = LSEXP_REQ_UNSUPPORTED;


@ -950,7 +950,7 @@ lpfc_nvme_io_cmd_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pwqeIn,
#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
int cpu;
#endif
int offline = 0;
bool offline = false;
/* Sanity check on return of outstanding command */
if (!lpfc_ncmd) {
@ -1124,7 +1124,9 @@ lpfc_nvme_io_cmd_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pwqeIn,
nCmd->transferred_length = 0;
nCmd->rcv_rsplen = 0;
nCmd->status = NVME_SC_INTERNAL;
offline = pci_channel_offline(vport->phba->pcidev);
if (pci_channel_offline(vport->phba->pcidev) ||
lpfc_ncmd->result == IOERR_SLI_DOWN)
offline = true;
}
}


@ -8571,12 +8571,10 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba)
* is not fatal as the driver will use generic values.
*/
rc = lpfc_parse_vpd(phba, vpd, vpd_size);
if (unlikely(!rc)) {
if (unlikely(!rc))
lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
"0377 Error %d parsing vpd. "
"Using defaults.\n", rc);
rc = 0;
}
kfree(vpd);
/* Save information as VPD data */


@ -20,7 +20,7 @@
* included with this package. *
*******************************************************************/
#define LPFC_DRIVER_VERSION "14.2.0.14"
#define LPFC_DRIVER_VERSION "14.2.0.15"
#define LPFC_DRIVER_NAME "lpfc"
/* Used for SLI 2/3 */


@ -1898,7 +1898,7 @@ megaraid_reset(struct scsi_cmnd *cmd)
spin_lock_irq(&adapter->lock);
rval = megaraid_abort_and_reset(adapter, cmd, SCB_RESET);
rval = megaraid_abort_and_reset(adapter, NULL, SCB_RESET);
/*
* This is required here to complete any completed requests
@ -1925,10 +1925,13 @@ megaraid_abort_and_reset(adapter_t *adapter, struct scsi_cmnd *cmd, int aor)
struct list_head *pos, *next;
scb_t *scb;
dev_warn(&adapter->dev->dev, "%s cmd=%x <c=%d t=%d l=%d>\n",
(aor == SCB_ABORT)? "ABORTING":"RESET",
cmd->cmnd[0], cmd->device->channel,
cmd->device->id, (u32)cmd->device->lun);
if (aor == SCB_ABORT)
dev_warn(&adapter->dev->dev,
"ABORTING cmd=%x <c=%d t=%d l=%d>\n",
cmd->cmnd[0], cmd->device->channel,
cmd->device->id, (u32)cmd->device->lun);
else
dev_warn(&adapter->dev->dev, "RESETTING\n");
if(list_empty(&adapter->pending_list))
return FAILED;
@ -1937,7 +1940,7 @@ megaraid_abort_and_reset(adapter_t *adapter, struct scsi_cmnd *cmd, int aor)
scb = list_entry(pos, scb_t, list);
if (scb->cmd == cmd) { /* Found command */
if (!cmd || scb->cmd == cmd) { /* Found command */
scb->state |= aor;
@ -1956,31 +1959,23 @@ megaraid_abort_and_reset(adapter_t *adapter, struct scsi_cmnd *cmd, int aor)
return FAILED;
}
else {
/*
* Not yet issued! Remove from the pending
* list
*/
dev_warn(&adapter->dev->dev,
"%s-[%x], driver owner\n",
(aor==SCB_ABORT) ? "ABORTING":"RESET",
scb->idx);
mega_free_scb(adapter, scb);
if( aor == SCB_ABORT ) {
cmd->result = (DID_ABORT << 16);
}
else {
cmd->result = (DID_RESET << 16);
}
/*
* Not yet issued! Remove from the pending
* list
*/
dev_warn(&adapter->dev->dev,
"%s-[%x], driver owner\n",
(cmd) ? "ABORTING":"RESET",
scb->idx);
mega_free_scb(adapter, scb);
if (cmd) {
cmd->result = (DID_ABORT << 16);
list_add_tail(SCSI_LIST(cmd),
&adapter->completed_list);
return SUCCESS;
&adapter->completed_list);
}
return SUCCESS;
}
}
@ -4114,8 +4109,6 @@ static const struct scsi_host_template megaraid_template = {
.sg_tablesize = MAX_SGLIST,
.cmd_per_lun = DEF_CMD_PER_LUN,
.eh_abort_handler = megaraid_abort,
.eh_device_reset_handler = megaraid_reset,
.eh_bus_reset_handler = megaraid_reset,
.eh_host_reset_handler = megaraid_reset,
.no_write_same = 1,
.cmd_size = sizeof(struct megaraid_cmd_priv),

View File

@ -23,8 +23,8 @@
/*
* MegaRAID SAS Driver meta data
*/
#define MEGASAS_VERSION "07.725.01.00-rc1"
#define MEGASAS_RELDATE "Mar 2, 2023"
#define MEGASAS_VERSION "07.727.03.00-rc1"
#define MEGASAS_RELDATE "Oct 03, 2023"
#define MEGASAS_MSIX_NAME_LEN 32


@ -263,13 +263,13 @@ u32 megasas_readl(struct megasas_instance *instance,
* Fusion registers could intermittently return all zeroes.
* This behavior is transient in nature and subsequent reads will
* return valid value. As a workaround in driver, retry readl for
* upto three times until a non-zero value is read.
* up to thirty times until a non-zero value is read.
*/
if (instance->adapter_type == AERO_SERIES) {
do {
ret_val = readl(addr);
i++;
} while (ret_val == 0 && i < 3);
} while (ret_val == 0 && i < 30);
return ret_val;
} else {
return readl(addr);


@ -4268,6 +4268,9 @@ megasas_wait_for_outstanding_fusion(struct megasas_instance *instance,
}
out:
if (!retval && reason == SCSIIO_TIMEOUT_OCR)
dev_info(&instance->pdev->dev, "IO is completed, no OCR is required\n");
return retval;
}


@ -4012,37 +4012,15 @@ static inline void mpi3mr_setup_divert_ws(struct mpi3mr_ioc *mrioc,
* mpi3mr_eh_host_reset - Host reset error handling callback
* @scmd: SCSI command reference
*
* Issue controller reset if the scmd is for a Physical Device,
* if the scmd is for a RAID volume, then wait for
* MPI3MR_RAID_ERRREC_RESET_TIMEOUT and check whether any
* I/Os are pending prior to issuing a reset to the controller.
* Issue controller reset
*
* Return: SUCCESS on successful reset, else FAILED
*/
static int mpi3mr_eh_host_reset(struct scsi_cmnd *scmd)
{
struct mpi3mr_ioc *mrioc = shost_priv(scmd->device->host);
struct mpi3mr_stgt_priv_data *stgt_priv_data;
struct mpi3mr_sdev_priv_data *sdev_priv_data;
u8 dev_type = MPI3_DEVICE_DEVFORM_VD;
int retval = FAILED, ret;
sdev_priv_data = scmd->device->hostdata;
if (sdev_priv_data && sdev_priv_data->tgt_priv_data) {
stgt_priv_data = sdev_priv_data->tgt_priv_data;
dev_type = stgt_priv_data->dev_type;
}
if (dev_type == MPI3_DEVICE_DEVFORM_VD) {
mpi3mr_wait_for_host_io(mrioc,
MPI3MR_RAID_ERRREC_RESET_TIMEOUT);
if (!mpi3mr_get_fw_pending_ios(mrioc)) {
retval = SUCCESS;
goto out;
}
}
mpi3mr_print_pending_host_io(mrioc);
ret = mpi3mr_soft_reset_handler(mrioc,
MPI3MR_RESET_FROM_EH_HOS, 1);
if (ret)
@ -4057,6 +4035,44 @@ static int mpi3mr_eh_host_reset(struct scsi_cmnd *scmd)
return retval;
}
/**
* mpi3mr_eh_bus_reset - Bus reset error handling callback
* @scmd: SCSI command reference
*
* Checks whether pending I/Os are present for the RAID volume;
* if not there's no need to reset the adapter.
*
* Return: SUCCESS on successful reset, else FAILED
*/
static int mpi3mr_eh_bus_reset(struct scsi_cmnd *scmd)
{
struct mpi3mr_ioc *mrioc = shost_priv(scmd->device->host);
struct mpi3mr_stgt_priv_data *stgt_priv_data;
struct mpi3mr_sdev_priv_data *sdev_priv_data;
u8 dev_type = MPI3_DEVICE_DEVFORM_VD;
int retval = FAILED;
sdev_priv_data = scmd->device->hostdata;
if (sdev_priv_data && sdev_priv_data->tgt_priv_data) {
stgt_priv_data = sdev_priv_data->tgt_priv_data;
dev_type = stgt_priv_data->dev_type;
}
if (dev_type == MPI3_DEVICE_DEVFORM_VD) {
mpi3mr_wait_for_host_io(mrioc,
MPI3MR_RAID_ERRREC_RESET_TIMEOUT);
if (!mpi3mr_get_fw_pending_ios(mrioc))
retval = SUCCESS;
}
if (retval == FAILED)
mpi3mr_print_pending_host_io(mrioc);
sdev_printk(KERN_INFO, scmd->device,
"Bus reset is %s for scmd(%p)\n",
((retval == SUCCESS) ? "SUCCESS" : "FAILED"), scmd);
return retval;
}
/**
* mpi3mr_eh_target_reset - Target reset error handling callback
* @scmd: SCSI command reference
@ -4900,6 +4916,7 @@ static const struct scsi_host_template mpi3mr_driver_template = {
.change_queue_depth = mpi3mr_change_queue_depth,
.eh_device_reset_handler = mpi3mr_eh_dev_reset,
.eh_target_reset_handler = mpi3mr_eh_target_reset,
.eh_bus_reset_handler = mpi3mr_eh_bus_reset,
.eh_host_reset_handler = mpi3mr_eh_host_reset,
.bios_param = mpi3mr_bios_param,
.map_queues = mpi3mr_map_queues,


@ -223,8 +223,8 @@ _base_readl_ext_retry(const void __iomem *addr)
for (i = 0 ; i < 30 ; i++) {
ret_val = readl(addr);
if (ret_val == 0)
continue;
if (ret_val != 0)
break;
}
return ret_val;
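Aside: the one-line fix above is easiest to see in isolation. With the old continue, a non-zero read never left the loop, so the helper always performed all 30 reads and returned whatever the last one happened to be; with break it returns the first non-zero value. A minimal sketch of the corrected pattern (fake_readl() is invented for the demo; the real code reads a device register):

#include <stdio.h>

/* Stand-in for readl(): transient zeroes, then real data. */
static unsigned int fake_readl(void)
{
	static int calls;

	return ++calls < 3 ? 0 : 0xdeadbeefu;
}

static unsigned int read_with_retry(void)
{
	unsigned int v = 0;
	int i;

	for (i = 0; i < 30; i++) {
		v = fake_readl();
		if (v != 0)
			break; /* stop at the first valid read */
	}
	return v;
}

int main(void)
{
	printf("0x%x\n", read_with_retry()); /* 0xdeadbeef, after 3 reads */
	return 0;
}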


@ -1180,65 +1180,6 @@ void pm8001_chip_iounmap(struct pm8001_hba_info *pm8001_ha)
}
}
#ifndef PM8001_USE_MSIX
/**
* pm8001_chip_intx_interrupt_enable - enable PM8001 chip interrupt
* @pm8001_ha: our hba card information
*/
static void
pm8001_chip_intx_interrupt_enable(struct pm8001_hba_info *pm8001_ha)
{
pm8001_cw32(pm8001_ha, 0, MSGU_ODMR, ODMR_CLEAR_ALL);
pm8001_cw32(pm8001_ha, 0, MSGU_ODCR, ODCR_CLEAR_ALL);
}
/**
* pm8001_chip_intx_interrupt_disable - disable PM8001 chip interrupt
* @pm8001_ha: our hba card information
*/
static void
pm8001_chip_intx_interrupt_disable(struct pm8001_hba_info *pm8001_ha)
{
pm8001_cw32(pm8001_ha, 0, MSGU_ODMR, ODMR_MASK_ALL);
}
#else
/**
* pm8001_chip_msix_interrupt_enable - enable PM8001 chip interrupt
* @pm8001_ha: our hba card information
* @int_vec_idx: interrupt number to enable
*/
static void
pm8001_chip_msix_interrupt_enable(struct pm8001_hba_info *pm8001_ha,
u32 int_vec_idx)
{
u32 msi_index;
u32 value;
msi_index = int_vec_idx * MSIX_TABLE_ELEMENT_SIZE;
msi_index += MSIX_TABLE_BASE;
pm8001_cw32(pm8001_ha, 0, msi_index, MSIX_INTERRUPT_ENABLE);
value = (1 << int_vec_idx);
pm8001_cw32(pm8001_ha, 0, MSGU_ODCR, value);
}
/**
* pm8001_chip_msix_interrupt_disable - disable PM8001 chip interrupt
* @pm8001_ha: our hba card information
* @int_vec_idx: interrupt number to disable
*/
static void
pm8001_chip_msix_interrupt_disable(struct pm8001_hba_info *pm8001_ha,
u32 int_vec_idx)
{
u32 msi_index;
msi_index = int_vec_idx * MSIX_TABLE_ELEMENT_SIZE;
msi_index += MSIX_TABLE_BASE;
pm8001_cw32(pm8001_ha, 0, msi_index, MSIX_INTERRUPT_DISABLE);
}
#endif
/**
* pm8001_chip_interrupt_enable - enable PM8001 chip interrupt
* @pm8001_ha: our hba card information
@ -1247,11 +1188,14 @@ pm8001_chip_msix_interrupt_disable(struct pm8001_hba_info *pm8001_ha,
static void
pm8001_chip_interrupt_enable(struct pm8001_hba_info *pm8001_ha, u8 vec)
{
#ifdef PM8001_USE_MSIX
pm8001_chip_msix_interrupt_enable(pm8001_ha, 0);
#else
pm8001_chip_intx_interrupt_enable(pm8001_ha);
#endif
if (pm8001_ha->use_msix) {
pm8001_cw32(pm8001_ha, 0, MSIX_TABLE_BASE,
MSIX_INTERRUPT_ENABLE);
pm8001_cw32(pm8001_ha, 0, MSGU_ODCR, 1);
} else {
pm8001_cw32(pm8001_ha, 0, MSGU_ODMR, ODMR_CLEAR_ALL);
pm8001_cw32(pm8001_ha, 0, MSGU_ODCR, ODCR_CLEAR_ALL);
}
}
/**
@ -1262,11 +1206,11 @@ pm8001_chip_interrupt_enable(struct pm8001_hba_info *pm8001_ha, u8 vec)
static void
pm8001_chip_interrupt_disable(struct pm8001_hba_info *pm8001_ha, u8 vec)
{
#ifdef PM8001_USE_MSIX
pm8001_chip_msix_interrupt_disable(pm8001_ha, 0);
#else
pm8001_chip_intx_interrupt_disable(pm8001_ha);
#endif
if (pm8001_ha->use_msix)
pm8001_cw32(pm8001_ha, 0, MSIX_TABLE_BASE,
MSIX_INTERRUPT_DISABLE);
else
pm8001_cw32(pm8001_ha, 0, MSGU_ODMR, ODMR_MASK_ALL);
}
/**
@ -4309,16 +4253,15 @@ static int pm8001_chip_phy_ctl_req(struct pm8001_hba_info *pm8001_ha,
static u32 pm8001_chip_is_our_interrupt(struct pm8001_hba_info *pm8001_ha)
{
#ifdef PM8001_USE_MSIX
return 1;
#else
u32 value;
if (pm8001_ha->use_msix)
return 1;
value = pm8001_cr32(pm8001_ha, 0, MSGU_ODR);
if (value)
return 1;
return 0;
#endif
}
/**


@ -56,6 +56,18 @@ MODULE_PARM_DESC(link_rate, "Enable link rate.\n"
" 4: Link rate 6.0G\n"
" 8: Link rate 12.0G\n");
bool pm8001_use_msix = true;
module_param_named(use_msix, pm8001_use_msix, bool, 0444);
MODULE_PARM_DESC(use_msix, "Use MSIX interrupts. Default: true");
static bool pm8001_use_tasklet = true;
module_param_named(use_tasklet, pm8001_use_tasklet, bool, 0444);
MODULE_PARM_DESC(use_tasklet, "Use MSIX interrupt handling bottom half tasklets. Default: true");
static bool pm8001_read_wwn = true;
module_param_named(read_wwn, pm8001_read_wwn, bool, 0444);
MODULE_PARM_DESC(read_wwn, "Get WWN from the controller. Default: true");
static struct scsi_transport_template *pm8001_stt;
static int pm8001_init_ccb_tag(struct pm8001_hba_info *);
@ -200,8 +212,6 @@ static void pm8001_free(struct pm8001_hba_info *pm8001_ha)
kfree(pm8001_ha);
}
#ifdef PM8001_USE_TASKLET
/**
* pm8001_tasklet() - tasklet for 64 msi-x interrupt handler
* @opaque: the passed general host adapter struct
@ -209,16 +219,67 @@ static void pm8001_free(struct pm8001_hba_info *pm8001_ha)
*/
static void pm8001_tasklet(unsigned long opaque)
{
struct pm8001_hba_info *pm8001_ha;
struct isr_param *irq_vector;
struct isr_param *irq_vector = (struct isr_param *)opaque;
struct pm8001_hba_info *pm8001_ha = irq_vector->drv_inst;
if (WARN_ON_ONCE(!pm8001_ha))
return;
irq_vector = (struct isr_param *)opaque;
pm8001_ha = irq_vector->drv_inst;
if (unlikely(!pm8001_ha))
BUG_ON(1);
PM8001_CHIP_DISP->isr(pm8001_ha, irq_vector->irq_id);
}
#endif
static void pm8001_init_tasklet(struct pm8001_hba_info *pm8001_ha)
{
int i;
if (!pm8001_use_tasklet)
return;
/* Tasklet for non msi-x interrupt handler */
if ((!pm8001_ha->pdev->msix_cap || !pci_msi_enabled()) ||
(pm8001_ha->chip_id == chip_8001)) {
tasklet_init(&pm8001_ha->tasklet[0], pm8001_tasklet,
(unsigned long)&(pm8001_ha->irq_vector[0]));
return;
}
for (i = 0; i < PM8001_MAX_MSIX_VEC; i++)
tasklet_init(&pm8001_ha->tasklet[i], pm8001_tasklet,
(unsigned long)&(pm8001_ha->irq_vector[i]));
}
static void pm8001_kill_tasklet(struct pm8001_hba_info *pm8001_ha)
{
int i;
if (!pm8001_use_tasklet)
return;
/* For non-msix and msix interrupts */
if ((!pm8001_ha->pdev->msix_cap || !pci_msi_enabled()) ||
(pm8001_ha->chip_id == chip_8001)) {
tasklet_kill(&pm8001_ha->tasklet[0]);
return;
}
for (i = 0; i < PM8001_MAX_MSIX_VEC; i++)
tasklet_kill(&pm8001_ha->tasklet[i]);
}
static irqreturn_t pm8001_handle_irq(struct pm8001_hba_info *pm8001_ha,
int irq)
{
if (unlikely(!pm8001_ha))
return IRQ_NONE;
if (!PM8001_CHIP_DISP->is_our_interrupt(pm8001_ha))
return IRQ_NONE;
if (!pm8001_use_tasklet)
return PM8001_CHIP_DISP->isr(pm8001_ha, irq);
tasklet_schedule(&pm8001_ha->tasklet[irq]);
return IRQ_HANDLED;
}
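Aside: pm8001_handle_irq() is the pivot of this rework: the old compile-time PM8001_USE_TASKLET/PM8001_USE_MSIX #ifdef variants become runtime branches on the new module parameters, so one binary covers every combination. A reduced sketch of the same design move (illustrative names only):

#include <stdbool.h>
#include <stdio.h>

static bool use_tasklet = true; /* was a compile-time #ifdef */

static void isr_now(int irq)      { printf("direct isr %d\n", irq); }
static void isr_deferred(int irq) { printf("tasklet scheduled %d\n", irq); }

/* One handler body; the policy is a runtime branch instead of two
 * differently-compiled functions. */
static void handle_irq(int irq)
{
	if (!use_tasklet) {
		isr_now(irq);
		return;
	}
	isr_deferred(irq);
}

int main(void)
{
	handle_irq(7);
	use_tasklet = false;
	handle_irq(7);
	return 0;
}

The cost is one predictable branch per interrupt; the gain is that distributions no longer need differently-configured builds to pick the behaviour.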
/**
* pm8001_interrupt_handler_msix - main MSIX interrupt handler.
@ -230,22 +291,10 @@ static void pm8001_tasklet(unsigned long opaque)
*/
static irqreturn_t pm8001_interrupt_handler_msix(int irq, void *opaque)
{
struct isr_param *irq_vector;
struct pm8001_hba_info *pm8001_ha;
irqreturn_t ret = IRQ_HANDLED;
irq_vector = (struct isr_param *)opaque;
pm8001_ha = irq_vector->drv_inst;
struct isr_param *irq_vector = (struct isr_param *)opaque;
struct pm8001_hba_info *pm8001_ha = irq_vector->drv_inst;
if (unlikely(!pm8001_ha))
return IRQ_NONE;
if (!PM8001_CHIP_DISP->is_our_interrupt(pm8001_ha))
return IRQ_NONE;
#ifdef PM8001_USE_TASKLET
tasklet_schedule(&pm8001_ha->tasklet[irq_vector->irq_id]);
#else
ret = PM8001_CHIP_DISP->isr(pm8001_ha, irq_vector->irq_id);
#endif
return ret;
return pm8001_handle_irq(pm8001_ha, irq_vector->irq_id);
}
/**
@ -256,24 +305,14 @@ static irqreturn_t pm8001_interrupt_handler_msix(int irq, void *opaque)
static irqreturn_t pm8001_interrupt_handler_intx(int irq, void *dev_id)
{
struct pm8001_hba_info *pm8001_ha;
irqreturn_t ret = IRQ_HANDLED;
struct sas_ha_struct *sha = dev_id;
pm8001_ha = sha->lldd_ha;
if (unlikely(!pm8001_ha))
return IRQ_NONE;
if (!PM8001_CHIP_DISP->is_our_interrupt(pm8001_ha))
return IRQ_NONE;
struct pm8001_hba_info *pm8001_ha = sha->lldd_ha;
#ifdef PM8001_USE_TASKLET
tasklet_schedule(&pm8001_ha->tasklet[0]);
#else
ret = PM8001_CHIP_DISP->isr(pm8001_ha, 0);
#endif
return ret;
return pm8001_handle_irq(pm8001_ha, 0);
}
static u32 pm8001_request_irq(struct pm8001_hba_info *pm8001_ha);
static void pm8001_free_irq(struct pm8001_hba_info *pm8001_ha);
/**
* pm8001_alloc - initiate our hba structure and 6 DMAs area.
@ -511,7 +550,6 @@ static struct pm8001_hba_info *pm8001_pci_alloc(struct pci_dev *pdev,
{
struct pm8001_hba_info *pm8001_ha;
struct sas_ha_struct *sha = SHOST_TO_SAS_HA(shost);
int j;
pm8001_ha = sha->lldd_ha;
if (!pm8001_ha)
@ -542,17 +580,8 @@ static struct pm8001_hba_info *pm8001_pci_alloc(struct pci_dev *pdev,
else
pm8001_ha->iomb_size = IOMB_SIZE_SPC;
#ifdef PM8001_USE_TASKLET
/* Tasklet for non msi-x interrupt handler */
if ((!pdev->msix_cap || !pci_msi_enabled())
|| (pm8001_ha->chip_id == chip_8001))
tasklet_init(&pm8001_ha->tasklet[0], pm8001_tasklet,
(unsigned long)&(pm8001_ha->irq_vector[0]));
else
for (j = 0; j < PM8001_MAX_MSIX_VEC; j++)
tasklet_init(&pm8001_ha->tasklet[j], pm8001_tasklet,
(unsigned long)&(pm8001_ha->irq_vector[j]));
#endif
pm8001_init_tasklet(pm8001_ha);
if (pm8001_ioremap(pm8001_ha))
goto failed_pci_alloc;
if (!pm8001_alloc(pm8001_ha, ent))
@ -658,19 +687,30 @@ static void pm8001_post_sas_ha_init(struct Scsi_Host *shost,
*/
static int pm8001_init_sas_add(struct pm8001_hba_info *pm8001_ha)
{
u8 i, j;
u8 sas_add[8];
#ifdef PM8001_READ_VPD
/* For new SPC controllers WWN is stored in flash vpd
* For SPC/SPCve controllers WWN is stored in EEPROM
* For Older SPC WWN is stored in NVMD
*/
DECLARE_COMPLETION_ONSTACK(completion);
struct pm8001_ioctl_payload payload;
unsigned long time_remaining;
u8 sas_add[8];
u16 deviceid;
int rc;
unsigned long time_remaining;
u8 i, j;
if (!pm8001_read_wwn) {
__be64 dev_sas_addr = cpu_to_be64(0x50010c600047f9d0ULL);
for (i = 0; i < pm8001_ha->chip->n_phy; i++)
memcpy(&pm8001_ha->phy[i].dev_sas_addr, &dev_sas_addr,
SAS_ADDR_SIZE);
memcpy(pm8001_ha->sas_addr, &pm8001_ha->phy[0].dev_sas_addr,
SAS_ADDR_SIZE);
return 0;
}
/*
* For new SPC controllers WWN is stored in flash vpd. For SPC/SPCve
* controllers WWN is stored in EEPROM. And for Older SPC WWN is stored
* in NVMD.
*/
if (PM8001_CHIP_DISP->fatal_errors(pm8001_ha)) {
pm8001_dbg(pm8001_ha, FAIL, "controller is in fatal error state\n");
return -EIO;
@ -744,16 +784,7 @@ static int pm8001_init_sas_add(struct pm8001_hba_info *pm8001_ha)
pm8001_ha->phy[i].dev_sas_addr);
}
kfree(payload.func_specific);
#else
for (i = 0; i < pm8001_ha->chip->n_phy; i++) {
pm8001_ha->phy[i].dev_sas_addr = 0x50010c600047f9d0ULL;
pm8001_ha->phy[i].dev_sas_addr =
cpu_to_be64((u64)
(*(u64 *)&pm8001_ha->phy[i].dev_sas_addr));
}
memcpy(pm8001_ha->sas_addr, &pm8001_ha->phy[0].dev_sas_addr,
SAS_ADDR_SIZE);
#endif
return 0;
}
@ -763,13 +794,13 @@ static int pm8001_init_sas_add(struct pm8001_hba_info *pm8001_ha)
*/
static int pm8001_get_phy_settings_info(struct pm8001_hba_info *pm8001_ha)
{
#ifdef PM8001_READ_VPD
/*OPTION ROM FLASH read for the SPC cards */
DECLARE_COMPLETION_ONSTACK(completion);
struct pm8001_ioctl_payload payload;
int rc;
if (!pm8001_read_wwn)
return 0;
pm8001_ha->nvmd_completion = &completion;
/* SAS ADDRESS read from flash / EEPROM */
payload.minor_function = 6;
@ -788,7 +819,7 @@ static int pm8001_get_phy_settings_info(struct pm8001_hba_info *pm8001_ha)
wait_for_completion(&completion);
pm8001_set_phy_profile(pm8001_ha, sizeof(u8), payload.func_specific);
kfree(payload.func_specific);
#endif
return 0;
}
@ -939,7 +970,6 @@ static int pm8001_configure_phy_settings(struct pm8001_hba_info *pm8001_ha)
}
}
#ifdef PM8001_USE_MSIX
/**
* pm8001_setup_msix - enable MSI-X interrupt
* @pm8001_ha: our ha struct.
@ -1021,7 +1051,6 @@ static u32 pm8001_request_msix(struct pm8001_hba_info *pm8001_ha)
return rc;
}
#endif
/**
* pm8001_request_irq - register interrupt
@ -1030,10 +1059,9 @@ static u32 pm8001_request_msix(struct pm8001_hba_info *pm8001_ha)
static u32 pm8001_request_irq(struct pm8001_hba_info *pm8001_ha)
{
struct pci_dev *pdev = pm8001_ha->pdev;
#ifdef PM8001_USE_MSIX
int rc;
if (pci_find_capability(pdev, PCI_CAP_ID_MSIX)) {
if (pm8001_use_msix && pci_find_capability(pdev, PCI_CAP_ID_MSIX)) {
rc = pm8001_setup_msix(pm8001_ha);
if (rc) {
pm8001_dbg(pm8001_ha, FAIL,
@ -1041,14 +1069,22 @@ static u32 pm8001_request_irq(struct pm8001_hba_info *pm8001_ha)
return rc;
}
if (pdev->msix_cap && pci_msi_enabled())
return pm8001_request_msix(pm8001_ha);
if (!pdev->msix_cap || !pci_msi_enabled())
goto use_intx;
rc = pm8001_request_msix(pm8001_ha);
if (rc)
return rc;
pm8001_ha->use_msix = true;
return 0;
}
use_intx:
/* Initialize the INT-X interrupt */
pm8001_dbg(pm8001_ha, INIT, "MSIX not supported!!!\n");
#endif
/* initialize the INT-X interrupt */
pm8001_ha->use_msix = false;
pm8001_ha->irq_vector[0].irq_id = 0;
pm8001_ha->irq_vector[0].drv_inst = pm8001_ha;
@ -1057,6 +1093,26 @@ static u32 pm8001_request_irq(struct pm8001_hba_info *pm8001_ha)
SHOST_TO_SAS_HA(pm8001_ha->shost));
}
static void pm8001_free_irq(struct pm8001_hba_info *pm8001_ha)
{
struct pci_dev *pdev = pm8001_ha->pdev;
int i;
if (pm8001_ha->use_msix) {
for (i = 0; i < pm8001_ha->number_of_intr; i++)
synchronize_irq(pci_irq_vector(pdev, i));
for (i = 0; i < pm8001_ha->number_of_intr; i++)
free_irq(pci_irq_vector(pdev, i), &pm8001_ha->irq_vector[i]);
pci_free_irq_vectors(pdev);
return;
}
/* INT-X */
free_irq(pm8001_ha->irq, pm8001_ha->sas);
}
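pm8001_init_tasklet() and pm8001_kill_tasklet(), called from the alloc, remove, suspend and resume paths in these hunks, are likewise not shown. A plausible sketch based on the open-coded loops the patch removes; the exact gating (for instance on MSI-X capability) may differ in the real helpers:

static void pm8001_init_tasklet(struct pm8001_hba_info *pm8001_ha)
{
	int i;

	if (!pm8001_use_tasklet)
		return;

	/* chip 8001 only supports a single interrupt vector */
	if (pm8001_ha->chip_id == chip_8001) {
		tasklet_init(&pm8001_ha->tasklet[0], pm8001_tasklet,
			     (unsigned long)&pm8001_ha->irq_vector[0]);
		return;
	}

	for (i = 0; i < PM8001_MAX_MSIX_VEC; i++)
		tasklet_init(&pm8001_ha->tasklet[i], pm8001_tasklet,
			     (unsigned long)&pm8001_ha->irq_vector[i]);
}

static void pm8001_kill_tasklet(struct pm8001_hba_info *pm8001_ha)
{
	int i;

	if (!pm8001_use_tasklet)
		return;

	if (pm8001_ha->chip_id == chip_8001) {
		tasklet_kill(&pm8001_ha->tasklet[0]);
		return;
	}

	for (i = 0; i < PM8001_MAX_MSIX_VEC; i++)
		tasklet_kill(&pm8001_ha->tasklet[i]);
}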
/**
* pm8001_pci_probe - probe supported device
* @pdev: pci device which kernel has been prepared for.
@ -1252,33 +1308,17 @@ static int pm8001_init_ccb_tag(struct pm8001_hba_info *pm8001_ha)
static void pm8001_pci_remove(struct pci_dev *pdev)
{
struct sas_ha_struct *sha = pci_get_drvdata(pdev);
struct pm8001_hba_info *pm8001_ha;
int i, j;
pm8001_ha = sha->lldd_ha;
struct pm8001_hba_info *pm8001_ha = sha->lldd_ha;
int i;
sas_unregister_ha(sha);
sas_remove_host(pm8001_ha->shost);
list_del(&pm8001_ha->list);
PM8001_CHIP_DISP->interrupt_disable(pm8001_ha, 0xFF);
PM8001_CHIP_DISP->chip_soft_rst(pm8001_ha);
#ifdef PM8001_USE_MSIX
for (i = 0; i < pm8001_ha->number_of_intr; i++)
synchronize_irq(pci_irq_vector(pdev, i));
for (i = 0; i < pm8001_ha->number_of_intr; i++)
free_irq(pci_irq_vector(pdev, i), &pm8001_ha->irq_vector[i]);
pci_free_irq_vectors(pdev);
#else
free_irq(pm8001_ha->irq, sha);
#endif
#ifdef PM8001_USE_TASKLET
/* For non-msix and msix interrupts */
if ((!pdev->msix_cap || !pci_msi_enabled()) ||
(pm8001_ha->chip_id == chip_8001))
tasklet_kill(&pm8001_ha->tasklet[0]);
else
for (j = 0; j < PM8001_MAX_MSIX_VEC; j++)
tasklet_kill(&pm8001_ha->tasklet[j]);
#endif
pm8001_free_irq(pm8001_ha);
pm8001_kill_tasklet(pm8001_ha);
scsi_host_put(pm8001_ha->shost);
for (i = 0; i < pm8001_ha->ccb_count; i++) {
@ -1309,7 +1349,7 @@ static int __maybe_unused pm8001_pci_suspend(struct device *dev)
struct pci_dev *pdev = to_pci_dev(dev);
struct sas_ha_struct *sha = pci_get_drvdata(pdev);
struct pm8001_hba_info *pm8001_ha = sha->lldd_ha;
int i, j;
sas_suspend_ha(sha);
flush_workqueue(pm8001_wq);
scsi_block_requests(pm8001_ha->shost);
@ -1319,24 +1359,10 @@ static int __maybe_unused pm8001_pci_suspend(struct device *dev)
}
PM8001_CHIP_DISP->interrupt_disable(pm8001_ha, 0xFF);
PM8001_CHIP_DISP->chip_soft_rst(pm8001_ha);
#ifdef PM8001_USE_MSIX
for (i = 0; i < pm8001_ha->number_of_intr; i++)
synchronize_irq(pci_irq_vector(pdev, i));
for (i = 0; i < pm8001_ha->number_of_intr; i++)
free_irq(pci_irq_vector(pdev, i), &pm8001_ha->irq_vector[i]);
pci_free_irq_vectors(pdev);
#else
free_irq(pm8001_ha->irq, sha);
#endif
#ifdef PM8001_USE_TASKLET
/* For non-msix and msix interrupts */
if ((!pdev->msix_cap || !pci_msi_enabled()) ||
(pm8001_ha->chip_id == chip_8001))
tasklet_kill(&pm8001_ha->tasklet[0]);
else
for (j = 0; j < PM8001_MAX_MSIX_VEC; j++)
tasklet_kill(&pm8001_ha->tasklet[j]);
#endif
pm8001_free_irq(pm8001_ha);
pm8001_kill_tasklet(pm8001_ha);
pm8001_info(pm8001_ha, "pdev=0x%p, slot=%s, entering "
"suspended state\n", pdev,
pm8001_ha->name);
@ -1355,7 +1381,7 @@ static int __maybe_unused pm8001_pci_resume(struct device *dev)
struct sas_ha_struct *sha = pci_get_drvdata(pdev);
struct pm8001_hba_info *pm8001_ha;
int rc;
u8 i = 0, j;
u8 i = 0;
DECLARE_COMPLETION_ONSTACK(completion);
pm8001_ha = sha->lldd_ha;
@ -1383,17 +1409,9 @@ static int __maybe_unused pm8001_pci_resume(struct device *dev)
rc = pm8001_request_irq(pm8001_ha);
if (rc)
goto err_out_disable;
#ifdef PM8001_USE_TASKLET
/* Tasklet for non msi-x interrupt handler */
if ((!pdev->msix_cap || !pci_msi_enabled()) ||
(pm8001_ha->chip_id == chip_8001))
tasklet_init(&pm8001_ha->tasklet[0], pm8001_tasklet,
(unsigned long)&(pm8001_ha->irq_vector[0]));
else
for (j = 0; j < PM8001_MAX_MSIX_VEC; j++)
tasklet_init(&pm8001_ha->tasklet[j], pm8001_tasklet,
(unsigned long)&(pm8001_ha->irq_vector[j]));
#endif
pm8001_init_tasklet(pm8001_ha);
PM8001_CHIP_DISP->interrupt_enable(pm8001_ha, 0);
if (pm8001_ha->chip_id != chip_8001) {
for (i = 1; i < pm8001_ha->number_of_intr; i++)
@ -1525,6 +1543,9 @@ static int __init pm8001_init(void)
{
int rc = -ENOMEM;
if (pm8001_use_tasklet && !pm8001_use_msix)
pm8001_use_tasklet = false;
pm8001_wq = alloc_workqueue("pm80xx", 0, 0);
if (!pm8001_wq)
goto err;
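The gating at the top of pm8001_init() depends on new module parameters whose definitions are not part of these hunks; pm8001_read_wwn, used in pm8001_init_sas_add() above, is a third one. A hypothetical sketch, with names taken from the code but defaults and permission bits assumed:

bool pm8001_use_msix = true;
module_param_named(use_msix, pm8001_use_msix, bool, 0444);
MODULE_PARM_DESC(use_msix, "Use MSI-X interrupts if available (def=1)");

static bool pm8001_use_tasklet = true;
module_param_named(use_tasklet, pm8001_use_tasklet, bool, 0444);
MODULE_PARM_DESC(use_tasklet, "Defer interrupt handling to a tasklet (def=1)");

static bool pm8001_read_wwn = true;
module_param_named(read_wwn, pm8001_read_wwn, bool, 0444);
MODULE_PARM_DESC(read_wwn, "Read the WWN from controller NVM (def=1), else use a fixed SAS address");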

View File

@ -83,10 +83,7 @@ do { \
pm8001_info(HBA, fmt, ##__VA_ARGS__); \
} while (0)
#define PM8001_USE_TASKLET
#define PM8001_USE_MSIX
#define PM8001_READ_VPD
extern bool pm8001_use_msix;
#define IS_SPCV_12G(dev) ((dev->device == 0X8074) \
|| (dev->device == 0X8076) \
@ -520,14 +517,12 @@ struct pm8001_hba_info {
struct pm8001_device *devices;
struct pm8001_ccb_info *ccb_info;
u32 ccb_count;
#ifdef PM8001_USE_MSIX
bool use_msix;
int number_of_intr;/*will be used in remove()*/
char intr_drvname[PM8001_MAX_MSIX_VEC]
[PM8001_NAME_LENGTH+1+3+1];
#endif
#ifdef PM8001_USE_TASKLET
struct tasklet_struct tasklet[PM8001_MAX_MSIX_VEC];
#endif
u32 logging_level;
u32 link_rate;
u32 fw_status;

View File

@ -1714,27 +1714,6 @@ static void pm80xx_hw_chip_rst(struct pm8001_hba_info *pm8001_ha)
pm8001_dbg(pm8001_ha, INIT, "chip reset finished\n");
}
/**
* pm80xx_chip_intx_interrupt_enable - enable PM8001 chip interrupt
* @pm8001_ha: our hba card information
*/
static void
pm80xx_chip_intx_interrupt_enable(struct pm8001_hba_info *pm8001_ha)
{
pm8001_cw32(pm8001_ha, 0, MSGU_ODMR, ODMR_CLEAR_ALL);
pm8001_cw32(pm8001_ha, 0, MSGU_ODCR, ODCR_CLEAR_ALL);
}
/**
* pm80xx_chip_intx_interrupt_disable - disable PM8001 chip interrupt
* @pm8001_ha: our hba card information
*/
static void
pm80xx_chip_intx_interrupt_disable(struct pm8001_hba_info *pm8001_ha)
{
pm8001_cw32(pm8001_ha, 0, MSGU_ODMR_CLR, ODMR_MASK_ALL);
}
/**
* pm80xx_chip_interrupt_enable - enable PM8001 chip interrupt
* @pm8001_ha: our hba card information
@ -1743,16 +1722,16 @@ pm80xx_chip_intx_interrupt_disable(struct pm8001_hba_info *pm8001_ha)
static void
pm80xx_chip_interrupt_enable(struct pm8001_hba_info *pm8001_ha, u8 vec)
{
#ifdef PM8001_USE_MSIX
if (!pm8001_ha->use_msix) {
pm8001_cw32(pm8001_ha, 0, MSGU_ODMR, ODMR_CLEAR_ALL);
pm8001_cw32(pm8001_ha, 0, MSGU_ODCR, ODCR_CLEAR_ALL);
return;
}
if (vec < 32)
pm8001_cw32(pm8001_ha, 0, MSGU_ODMR_CLR, 1U << vec);
else
pm8001_cw32(pm8001_ha, 0, MSGU_ODMR_CLR_U,
1U << (vec - 32));
return;
#endif
pm80xx_chip_intx_interrupt_enable(pm8001_ha);
pm8001_cw32(pm8001_ha, 0, MSGU_ODMR_CLR_U, 1U << (vec - 32));
}
/**
@ -1763,19 +1742,20 @@ pm80xx_chip_interrupt_enable(struct pm8001_hba_info *pm8001_ha, u8 vec)
static void
pm80xx_chip_interrupt_disable(struct pm8001_hba_info *pm8001_ha, u8 vec)
{
#ifdef PM8001_USE_MSIX
if (!pm8001_ha->use_msix) {
pm8001_cw32(pm8001_ha, 0, MSGU_ODMR_CLR, ODMR_MASK_ALL);
return;
}
if (vec == 0xFF) {
/* disable all vectors 0-31, 32-63 */
pm8001_cw32(pm8001_ha, 0, MSGU_ODMR, 0xFFFFFFFF);
pm8001_cw32(pm8001_ha, 0, MSGU_ODMR_U, 0xFFFFFFFF);
} else if (vec < 32)
} else if (vec < 32) {
pm8001_cw32(pm8001_ha, 0, MSGU_ODMR, 1U << vec);
else
pm8001_cw32(pm8001_ha, 0, MSGU_ODMR_U,
1U << (vec - 32));
return;
#endif
pm80xx_chip_intx_interrupt_disable(pm8001_ha);
} else {
pm8001_cw32(pm8001_ha, 0, MSGU_ODMR_U, 1U << (vec - 32));
}
}
/**
@ -4802,16 +4782,15 @@ static int pm80xx_chip_phy_ctl_req(struct pm8001_hba_info *pm8001_ha,
static u32 pm80xx_chip_is_our_interrupt(struct pm8001_hba_info *pm8001_ha)
{
#ifdef PM8001_USE_MSIX
return 1;
#else
u32 value;
if (pm8001_ha->use_msix)
return 1;
value = pm8001_cr32(pm8001_ha, 0, MSGU_ODR);
if (value)
return 1;
return 0;
#endif
}
/**

View File

@ -2679,7 +2679,7 @@ static int pmcraid_error_handler(struct pmcraid_cmd *cmd)
/**
* pmcraid_reset_device - device reset handler functions
*
* @scsi_cmd: scsi command struct
* @scsi_dev: scsi device struct
* @timeout: command timeout
* @modifier: reset modifier indicating the reset sequence to be performed
*
@ -2691,7 +2691,7 @@ static int pmcraid_error_handler(struct pmcraid_cmd *cmd)
* SUCCESS / FAILED
*/
static int pmcraid_reset_device(
struct scsi_cmnd *scsi_cmd,
struct scsi_device *scsi_dev,
unsigned long timeout,
u8 modifier)
{
@ -2703,11 +2703,11 @@ static int pmcraid_reset_device(
u32 ioasc;
pinstance =
(struct pmcraid_instance *)scsi_cmd->device->host->hostdata;
res = scsi_cmd->device->hostdata;
(struct pmcraid_instance *)scsi_dev->host->hostdata;
res = scsi_dev->hostdata;
if (!res) {
sdev_printk(KERN_ERR, scsi_cmd->device,
sdev_printk(KERN_ERR, scsi_dev,
"reset_device: NULL resource pointer\n");
return FAILED;
}
@ -3018,27 +3018,72 @@ static int pmcraid_eh_device_reset_handler(struct scsi_cmnd *scmd)
{
scmd_printk(KERN_INFO, scmd,
"resetting device due to an I/O command timeout.\n");
return pmcraid_reset_device(scmd,
return pmcraid_reset_device(scmd->device,
PMCRAID_INTERNAL_TIMEOUT,
RESET_DEVICE_LUN);
}
static int pmcraid_eh_bus_reset_handler(struct scsi_cmnd *scmd)
{
scmd_printk(KERN_INFO, scmd,
struct Scsi_Host *host = scmd->device->host;
struct pmcraid_instance *pinstance =
(struct pmcraid_instance *)host->hostdata;
struct pmcraid_resource_entry *res = NULL;
struct pmcraid_resource_entry *temp;
struct scsi_device *sdev = NULL;
unsigned long lock_flags;
/*
* The reset device code insists on us passing down
* a device, so grab the first device on the bus.
*/
spin_lock_irqsave(&pinstance->resource_lock, lock_flags);
list_for_each_entry(temp, &pinstance->used_res_q, queue) {
if (scmd->device->channel == PMCRAID_VSET_BUS_ID &&
RES_IS_VSET(temp->cfg_entry)) {
res = temp;
break;
} else if (scmd->device->channel == PMCRAID_PHYS_BUS_ID &&
RES_IS_GSCSI(temp->cfg_entry)) {
res = temp;
break;
}
}
if (res)
sdev = res->scsi_dev;
spin_unlock_irqrestore(&pinstance->resource_lock, lock_flags);
if (!sdev)
return FAILED;
sdev_printk(KERN_INFO, sdev,
"Doing bus reset due to an I/O command timeout.\n");
return pmcraid_reset_device(scmd,
return pmcraid_reset_device(sdev,
PMCRAID_RESET_BUS_TIMEOUT,
RESET_DEVICE_BUS);
}
static int pmcraid_eh_target_reset_handler(struct scsi_cmnd *scmd)
{
scmd_printk(KERN_INFO, scmd,
struct Scsi_Host *shost = scmd->device->host;
struct scsi_device *scsi_dev = NULL, *tmp;
int ret;
shost_for_each_device(tmp, shost) {
if ((tmp->channel == scmd->device->channel) &&
(tmp->id == scmd->device->id)) {
scsi_dev = tmp;
break;
}
}
if (!scsi_dev)
return FAILED;
sdev_printk(KERN_INFO, scsi_dev,
"Doing target reset due to an I/O command timeout.\n");
return pmcraid_reset_device(scmd,
PMCRAID_INTERNAL_TIMEOUT,
RESET_DEVICE_TARGET);
ret = pmcraid_reset_device(scsi_dev,
PMCRAID_INTERNAL_TIMEOUT,
RESET_DEVICE_TARGET);
scsi_device_put(scsi_dev);
return ret;
}
/**

View File

@ -112,6 +112,7 @@ struct qedf_ioreq {
#define QEDF_CMD_ERR_SCSI_DONE 0x5
u8 io_req_flags;
uint8_t tm_flags;
u64 tm_lun;
struct qedf_rport *fcport;
#define QEDF_CMD_ST_INACTIVE 0
#define QEDFC_CMD_ST_IO_ACTIVE 1
@ -497,7 +498,7 @@ extern void qedf_process_warning_compl(struct qedf_ctx *qedf,
struct fcoe_cqe *cqe, struct qedf_ioreq *io_req);
extern void qedf_process_error_detect(struct qedf_ctx *qedf,
struct fcoe_cqe *cqe, struct qedf_ioreq *io_req);
extern void qedf_flush_active_ios(struct qedf_rport *fcport, int lun);
extern void qedf_flush_active_ios(struct qedf_rport *fcport, u64 lun);
extern void qedf_release_cmd(struct kref *ref);
extern int qedf_initiate_abts(struct qedf_ioreq *io_req,
bool return_scsi_cmd_on_abts);
@ -522,7 +523,7 @@ extern int qedf_initiate_cleanup(struct qedf_ioreq *io_req,
bool return_scsi_cmd_on_abts);
extern void qedf_process_cleanup_compl(struct qedf_ctx *qedf,
struct fcoe_cqe *cqe, struct qedf_ioreq *io_req);
extern int qedf_initiate_tmf(struct scsi_cmnd *sc_cmd, u8 tm_flags);
extern int qedf_initiate_tmf(struct fc_rport *rport, u64 lun, u8 tm_flags);
extern void qedf_process_tmf_compl(struct qedf_ctx *qedf, struct fcoe_cqe *cqe,
struct qedf_ioreq *io_req);
extern void qedf_process_cqe(struct qedf_ctx *qedf, struct fcoe_cqe *cqe);

View File

@ -546,7 +546,7 @@ static int qedf_build_bd_list_from_sg(struct qedf_ioreq *io_req)
}
static void qedf_build_fcp_cmnd(struct qedf_ioreq *io_req,
struct fcp_cmnd *fcp_cmnd)
struct fcp_cmnd *fcp_cmnd)
{
struct scsi_cmnd *sc_cmd = io_req->sc_cmd;
@ -554,8 +554,12 @@ static void qedf_build_fcp_cmnd(struct qedf_ioreq *io_req,
memset(fcp_cmnd, 0, FCP_CMND_LEN);
/* 8 bytes: SCSI LUN info */
int_to_scsilun(sc_cmd->device->lun,
(struct scsi_lun *)&fcp_cmnd->fc_lun);
if (io_req->cmd_type == QEDF_TASK_MGMT_CMD)
int_to_scsilun(io_req->tm_lun,
(struct scsi_lun *)&fcp_cmnd->fc_lun);
else
int_to_scsilun(sc_cmd->device->lun,
(struct scsi_lun *)&fcp_cmnd->fc_lun);
/* 4 bytes: flag info */
fcp_cmnd->fc_pri_ta = 0;
@ -1095,7 +1099,7 @@ static void qedf_parse_fcp_rsp(struct qedf_ioreq *io_req,
}
/* The sense buffer can be NULL for TMF commands */
if (sc_cmd->sense_buffer) {
if (sc_cmd && sc_cmd->sense_buffer) {
memset(sc_cmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
if (fcp_sns_len)
memcpy(sc_cmd->sense_buffer, sense_data,
@ -1580,7 +1584,7 @@ static void qedf_flush_els_req(struct qedf_ctx *qedf,
/* A value of -1 for lun is a wild card that means flush all
* active SCSI I/Os for the target.
*/
void qedf_flush_active_ios(struct qedf_rport *fcport, int lun)
void qedf_flush_active_ios(struct qedf_rport *fcport, u64 lun)
{
struct qedf_ioreq *io_req;
struct qedf_ctx *qedf;
@ -1768,10 +1772,6 @@ void qedf_flush_active_ios(struct qedf_rport *fcport, int lun)
kref_put(&io_req->refcount, qedf_release_cmd);
continue;
}
if (lun > -1) {
if (io_req->lun != lun)
continue;
}
/*
* Use kref_get_unless_zero in the unlikely case the command
@ -2287,7 +2287,7 @@ void qedf_process_cleanup_compl(struct qedf_ctx *qedf, struct fcoe_cqe *cqe,
complete(&io_req->cleanup_done);
}
static int qedf_execute_tmf(struct qedf_rport *fcport, struct scsi_cmnd *sc_cmd,
static int qedf_execute_tmf(struct qedf_rport *fcport, u64 tm_lun,
uint8_t tm_flags)
{
struct qedf_ioreq *io_req;
@ -2297,17 +2297,10 @@ static int qedf_execute_tmf(struct qedf_rport *fcport, struct scsi_cmnd *sc_cmd,
int rc = 0;
uint16_t xid;
int tmo = 0;
int lun = 0;
unsigned long flags;
struct fcoe_wqe *sqe;
u16 sqe_idx;
if (!sc_cmd) {
QEDF_ERR(&qedf->dbg_ctx, "sc_cmd is NULL\n");
return FAILED;
}
lun = (int)sc_cmd->device->lun;
if (!test_bit(QEDF_RPORT_SESSION_READY, &fcport->flags)) {
QEDF_ERR(&(qedf->dbg_ctx), "fcport not offloaded\n");
rc = FAILED;
@ -2327,7 +2320,7 @@ static int qedf_execute_tmf(struct qedf_rport *fcport, struct scsi_cmnd *sc_cmd,
qedf->target_resets++;
/* Initialize rest of io_req fields */
io_req->sc_cmd = sc_cmd;
io_req->sc_cmd = NULL;
io_req->fcport = fcport;
io_req->cmd_type = QEDF_TASK_MGMT_CMD;
@ -2341,6 +2334,7 @@ static int qedf_execute_tmf(struct qedf_rport *fcport, struct scsi_cmnd *sc_cmd,
/* Default is to return a SCSI command when an error occurs */
io_req->return_scsi_cmd_on_abts = false;
io_req->tm_lun = tm_lun;
/* Obtain exchange id */
xid = io_req->xid;
@ -2395,7 +2389,7 @@ static int qedf_execute_tmf(struct qedf_rport *fcport, struct scsi_cmnd *sc_cmd,
if (tm_flags == FCP_TMF_LUN_RESET)
qedf_flush_active_ios(fcport, lun);
qedf_flush_active_ios(fcport, tm_lun);
else
qedf_flush_active_ios(fcport, -1);
@ -2410,23 +2404,18 @@ static int qedf_execute_tmf(struct qedf_rport *fcport, struct scsi_cmnd *sc_cmd,
return rc;
}
int qedf_initiate_tmf(struct scsi_cmnd *sc_cmd, u8 tm_flags)
int qedf_initiate_tmf(struct fc_rport *rport, u64 lun, u8 tm_flags)
{
struct fc_rport *rport = starget_to_rport(scsi_target(sc_cmd->device));
struct fc_rport_libfc_priv *rp = rport->dd_data;
struct qedf_rport *fcport = (struct qedf_rport *)&rp[1];
struct qedf_ctx *qedf;
struct fc_lport *lport = shost_priv(sc_cmd->device->host);
struct qedf_ctx *qedf = fcport->qedf;
struct fc_lport *lport = rp->local_port;
int rc = SUCCESS;
int rval;
struct qedf_ioreq *io_req = NULL;
int ref_cnt = 0;
struct fc_rport_priv *rdata = fcport->rdata;
QEDF_ERR(NULL,
"tm_flags 0x%x sc_cmd %p op = 0x%02x target_id = 0x%x lun=%d\n",
tm_flags, sc_cmd, sc_cmd->cmd_len ? sc_cmd->cmnd[0] : 0xff,
rport->scsi_target_id, (int)sc_cmd->device->lun);
"tm_flags 0x%x target_id = 0x%x lun=%llu\n",
tm_flags, rport->scsi_target_id, lun);
if (!rdata || !kref_get_unless_zero(&rdata->kref)) {
QEDF_ERR(NULL, "stale rport\n");
@ -2437,33 +2426,10 @@ int qedf_initiate_tmf(struct scsi_cmnd *sc_cmd, u8 tm_flags)
(tm_flags == FCP_TMF_TGT_RESET) ? "TARGET RESET" :
"LUN RESET");
if (qedf_priv(sc_cmd)->io_req) {
io_req = qedf_priv(sc_cmd)->io_req;
ref_cnt = kref_read(&io_req->refcount);
QEDF_ERR(NULL,
"orig io_req = %p xid = 0x%x ref_cnt = %d.\n",
io_req, io_req->xid, ref_cnt);
}
rval = fc_remote_port_chkready(rport);
if (rval) {
QEDF_ERR(NULL, "device_reset rport not ready\n");
rc = FAILED;
goto tmf_err;
}
rc = fc_block_scsi_eh(sc_cmd);
rc = fc_block_rport(rport);
if (rc)
goto tmf_err;
if (!fcport) {
QEDF_ERR(NULL, "device_reset: rport is NULL\n");
rc = FAILED;
goto tmf_err;
}
qedf = fcport->qedf;
if (!qedf) {
QEDF_ERR(NULL, "qedf is NULL.\n");
rc = FAILED;
@ -2500,7 +2466,7 @@ int qedf_initiate_tmf(struct scsi_cmnd *sc_cmd, u8 tm_flags)
goto tmf_err;
}
rc = qedf_execute_tmf(fcport, sc_cmd, tm_flags);
rc = qedf_execute_tmf(fcport, lun, tm_flags);
tmf_err:
kref_put(&rdata->kref, fc_rport_destroy);
@ -2517,7 +2483,6 @@ void qedf_process_tmf_compl(struct qedf_ctx *qedf, struct fcoe_cqe *cqe,
fcp_rsp = &cqe->cqe_info.rsp_info;
qedf_parse_fcp_rsp(io_req, fcp_rsp);
io_req->sc_cmd = NULL;
complete(&io_req->tm_done);
}

View File

@ -774,7 +774,7 @@ static int qedf_eh_abort(struct scsi_cmnd *sc_cmd)
goto drop_rdata_kref;
}
rc = fc_block_scsi_eh(sc_cmd);
rc = fc_block_rport(rport);
if (rc)
goto drop_rdata_kref;
@ -858,18 +858,19 @@ static int qedf_eh_abort(struct scsi_cmnd *sc_cmd)
static int qedf_eh_target_reset(struct scsi_cmnd *sc_cmd)
{
QEDF_ERR(NULL, "%d:0:%d:%lld: TARGET RESET Issued...",
sc_cmd->device->host->host_no, sc_cmd->device->id,
sc_cmd->device->lun);
return qedf_initiate_tmf(sc_cmd, FCP_TMF_TGT_RESET);
struct scsi_target *starget = scsi_target(sc_cmd->device);
struct fc_rport *rport = starget_to_rport(starget);
QEDF_ERR(NULL, "TARGET RESET Issued...");
return qedf_initiate_tmf(rport, 0, FCP_TMF_TGT_RESET);
}
static int qedf_eh_device_reset(struct scsi_cmnd *sc_cmd)
{
QEDF_ERR(NULL, "%d:0:%d:%lld: LUN RESET Issued... ",
sc_cmd->device->host->host_no, sc_cmd->device->id,
sc_cmd->device->lun);
return qedf_initiate_tmf(sc_cmd, FCP_TMF_LUN_RESET);
struct fc_rport *rport = starget_to_rport(scsi_target(sc_cmd->device));
QEDF_ERR(NULL, "LUN RESET Issued...\n");
return qedf_initiate_tmf(rport, sc_cmd->device->lun, FCP_TMF_LUN_RESET);
}
bool qedf_wait_for_upload(struct qedf_ctx *qedf)

View File

@ -716,7 +716,6 @@ enum action {
ABORT_COMMAND,
DEVICE_RESET,
BUS_RESET,
ADAPTER_RESET,
};
@ -898,22 +897,9 @@ qla1280_error_action(struct scsi_cmnd *cmd, enum action action)
}
break;
case ADAPTER_RESET:
default:
if (qla1280_verbose) {
printk(KERN_INFO
"scsi(%ld): Issued ADAPTER RESET\n",
ha->host_no);
printk(KERN_INFO "scsi(%ld): I/O processing will "
"continue automatically\n", ha->host_no);
}
ha->flags.reset_active = 1;
if (qla1280_abort_isp(ha) != 0) { /* it's dead */
result = FAILED;
}
ha->flags.reset_active = 0;
dprintk(1, "RESET invalid action %d\n", action);
return FAILED;
}
/*
@ -1011,11 +997,27 @@ qla1280_eh_bus_reset(struct scsi_cmnd *cmd)
static int
qla1280_eh_adapter_reset(struct scsi_cmnd *cmd)
{
int rc;
int rc = SUCCESS;
struct Scsi_Host *shost = cmd->device->host;
struct scsi_qla_host *ha = (struct scsi_qla_host *)shost->hostdata;
spin_lock_irq(cmd->device->host->host_lock);
rc = qla1280_error_action(cmd, ADAPTER_RESET);
spin_unlock_irq(cmd->device->host->host_lock);
spin_lock_irq(shost->host_lock);
if (qla1280_verbose) {
printk(KERN_INFO
"scsi(%ld): Issued ADAPTER RESET\n",
ha->host_no);
printk(KERN_INFO "scsi(%ld): I/O processing will "
"continue automatically\n", ha->host_no);
}
ha->flags.reset_active = 1;
if (qla1280_abort_isp(ha) != 0) { /* it's dead */
rc = FAILED;
}
ha->flags.reset_active = 0;
spin_unlock_irq(shost->host_lock);
return rc;
}

View File

@ -5,6 +5,7 @@
*/
#include "qla_def.h"
#include <linux/bitfield.h>
#include <linux/moduleparam.h>
#include <linux/vmalloc.h>
#include <linux/delay.h>
@ -633,8 +634,8 @@ qla24xx_pci_info_str(struct scsi_qla_host *vha, char *str, size_t str_len)
const char *speed_str;
pcie_capability_read_dword(ha->pdev, PCI_EXP_LNKCAP, &lstat);
lspeed = lstat & PCI_EXP_LNKCAP_SLS;
lwidth = (lstat & PCI_EXP_LNKCAP_MLW) >> 4;
lspeed = FIELD_GET(PCI_EXP_LNKCAP_SLS, lstat);
lwidth = FIELD_GET(PCI_EXP_LNKCAP_MLW, lstat);
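FIELD_GET() from <linux/bitfield.h>, included above, derives the shift amount from the mask itself, which is why the open-coded ">> 4" disappears. An illustrative model of the semantics, simplified from the real macro, which also adds compile-time sanity checks:

/*
 * Simplified model for illustration only, not the kernel's macro.
 * (mask & -mask) isolates the mask's lowest set bit, and dividing by
 * it equals shifting right by the mask's trailing-zero count, e.g.
 * FIELD_GET_MODEL(PCI_EXP_LNKCAP_MLW, lstat) == (lstat & 0x3f0) >> 4.
 */
#define FIELD_GET_MODEL(mask, reg) (((reg) & (mask)) / ((mask) & -(mask)))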
switch (lspeed) {
case 1:

View File

@ -1822,6 +1822,9 @@ static const struct target_core_fabric_ops tcm_qla2xxx_ops = {
.tfc_wwn_attrs = tcm_qla2xxx_wwn_attrs,
.tfc_tpg_base_attrs = tcm_qla2xxx_tpg_attrs,
.tfc_tpg_attrib_attrs = tcm_qla2xxx_tpg_attrib_attrs,
.default_submit_type = TARGET_DIRECT_SUBMIT,
.direct_submit_supp = 1,
};
static const struct target_core_fabric_ops tcm_qla2xxx_npiv_ops = {
@ -1859,6 +1862,9 @@ static const struct target_core_fabric_ops tcm_qla2xxx_npiv_ops = {
.fabric_init_nodeacl = tcm_qla2xxx_init_nodeacl,
.tfc_wwn_attrs = tcm_qla2xxx_wwn_attrs,
.default_submit_type = TARGET_DIRECT_SUBMIT,
.direct_submit_supp = 1,
};
static int tcm_qla2xxx_register_configfs(void)

View File

@ -703,7 +703,7 @@ int scsi_cdl_enable(struct scsi_device *sdev, bool enable)
ret = scsi_mode_select(sdev, 1, 0, buf_data, len, 5 * HZ, 3,
&data, &sshdr);
if (ret) {
if (scsi_sense_valid(&sshdr))
if (ret > 0 && scsi_sense_valid(&sshdr))
scsi_print_sense_hdr(sdev,
dev_name(&sdev->sdev_gendev), &sshdr);
return ret;
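This hunk, and the similar ones in the sd, sr and SPI transport files below, only trust sense data when the return value is positive. That follows the scsi_execute_cmd() convention: a negative errno means the command was never issued, zero means success, and a positive value is the SCSI result from the device, the only case in which sense data can have been populated. A minimal sketch of the pattern, assuming the usual surrounding kernel context:

ret = scsi_execute_cmd(sdev, cmd, REQ_OP_DRV_IN, NULL, 0, timeout,
		       retries, &exec_args);
if (ret < 0)
	return ret;	/* submission or transport error, no sense data */
if (ret > 0 && scsi_sense_valid(&sshdr))
	scsi_print_sense_hdr(sdev, NULL, &sshdr);	/* device error */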

View File

@ -41,6 +41,8 @@
#include <linux/random.h>
#include <linux/xarray.h>
#include <linux/prefetch.h>
#include <linux/debugfs.h>
#include <linux/async.h>
#include <net/checksum.h>
@ -285,6 +287,46 @@ struct sdeb_zone_state { /* ZBC: per zone state */
sector_t z_wp;
};
enum sdebug_err_type {
ERR_TMOUT_CMD = 0, /* make specific scsi command timeout */
ERR_FAIL_QUEUE_CMD = 1, /* make specific scsi command's */
/* queuecmd return failed */
ERR_FAIL_CMD = 2, /* make specific scsi command's */
/* queuecmd return succeed but */
/* with errors set in scsi_cmnd */
ERR_ABORT_CMD_FAILED = 3, /* control return FAILED from */
/* scsi_debug_abort() */
ERR_LUN_RESET_FAILED = 4, /* control return FAILED from */
/* scsi_debug_device_reset() */
};
struct sdebug_err_inject {
int type;
struct list_head list;
int cnt;
unsigned char cmd;
struct rcu_head rcu;
union {
/*
* For ERR_FAIL_QUEUE_CMD
*/
int queuecmd_ret;
/*
* For ERR_FAIL_CMD
*/
struct {
unsigned char host_byte;
unsigned char driver_byte;
unsigned char status_byte;
unsigned char sense_key;
unsigned char asc;
unsigned char asq;
};
};
};
struct sdebug_dev_info {
struct list_head dev_list;
unsigned int channel;
@ -310,6 +352,15 @@ struct sdebug_dev_info {
unsigned int max_open;
ktime_t create_ts; /* time since bootup that this device was created */
struct sdeb_zone_state *zstate;
struct dentry *debugfs_entry;
struct spinlock list_lock;
struct list_head inject_err_list;
};
struct sdebug_target_info {
bool reset_fail;
struct dentry *debugfs_entry;
};
struct sdebug_host_info {
@ -792,6 +843,7 @@ static bool have_dif_prot;
static bool write_since_sync;
static bool sdebug_statistics = DEF_STATISTICS;
static bool sdebug_wp;
static bool sdebug_allow_restart;
/* Following enum: 0: no zbc, def; 1: host aware; 2: host managed */
static enum blk_zoned_model sdeb_zbc_model = BLK_ZONED_NONE;
static char *sdeb_zbc_model_s;
@ -862,6 +914,265 @@ static const int device_qfull_result =
static const int condition_met_result = SAM_STAT_CONDITION_MET;
static struct dentry *sdebug_debugfs_root;
static void sdebug_err_free(struct rcu_head *head)
{
struct sdebug_err_inject *inject =
container_of(head, typeof(*inject), rcu);
kfree(inject);
}
static void sdebug_err_add(struct scsi_device *sdev, struct sdebug_err_inject *new)
{
struct sdebug_dev_info *devip = (struct sdebug_dev_info *)sdev->hostdata;
struct sdebug_err_inject *err;
spin_lock(&devip->list_lock);
list_for_each_entry_rcu(err, &devip->inject_err_list, list) {
if (err->type == new->type && err->cmd == new->cmd) {
list_del_rcu(&err->list);
call_rcu(&err->rcu, sdebug_err_free);
}
}
list_add_tail_rcu(&new->list, &devip->inject_err_list);
spin_unlock(&devip->list_lock);
}
static int sdebug_err_remove(struct scsi_device *sdev, const char *buf, size_t count)
{
struct sdebug_dev_info *devip = (struct sdebug_dev_info *)sdev->hostdata;
struct sdebug_err_inject *err;
int type;
unsigned char cmd;
if (sscanf(buf, "- %d %hhx", &type, &cmd) != 2) {
kfree(buf);
return -EINVAL;
}
spin_lock(&devip->list_lock);
list_for_each_entry_rcu(err, &devip->inject_err_list, list) {
if (err->type == type && err->cmd == cmd) {
list_del_rcu(&err->list);
call_rcu(&err->rcu, sdebug_err_free);
spin_unlock(&devip->list_lock);
kfree(buf);
return count;
}
}
spin_unlock(&devip->list_lock);
kfree(buf);
return -EINVAL;
}
static int sdebug_error_show(struct seq_file *m, void *p)
{
struct scsi_device *sdev = (struct scsi_device *)m->private;
struct sdebug_dev_info *devip = (struct sdebug_dev_info *)sdev->hostdata;
struct sdebug_err_inject *err;
seq_puts(m, "Type\tCount\tCommand\n");
rcu_read_lock();
list_for_each_entry_rcu(err, &devip->inject_err_list, list) {
switch (err->type) {
case ERR_TMOUT_CMD:
case ERR_ABORT_CMD_FAILED:
case ERR_LUN_RESET_FAILED:
seq_printf(m, "%d\t%d\t0x%x\n", err->type, err->cnt,
err->cmd);
break;
case ERR_FAIL_QUEUE_CMD:
seq_printf(m, "%d\t%d\t0x%x\t0x%x\n", err->type,
err->cnt, err->cmd, err->queuecmd_ret);
break;
case ERR_FAIL_CMD:
seq_printf(m, "%d\t%d\t0x%x\t0x%x 0x%x 0x%x 0x%x 0x%x 0x%x\n",
err->type, err->cnt, err->cmd,
err->host_byte, err->driver_byte,
err->status_byte, err->sense_key,
err->asc, err->asq);
break;
}
}
rcu_read_unlock();
return 0;
}
static int sdebug_error_open(struct inode *inode, struct file *file)
{
return single_open(file, sdebug_error_show, inode->i_private);
}
static ssize_t sdebug_error_write(struct file *file, const char __user *ubuf,
size_t count, loff_t *ppos)
{
char *buf;
int inject_type;
struct sdebug_err_inject *inject;
struct scsi_device *sdev = (struct scsi_device *)file->f_inode->i_private;
buf = kmalloc(count + 1, GFP_KERNEL);
if (!buf)
return -ENOMEM;
if (copy_from_user(buf, ubuf, count)) {
kfree(buf);
return -EFAULT;
}
/* NUL-terminate so the sscanf() calls below stay within the buffer */
buf[count] = '\0';
if (buf[0] == '-')
return sdebug_err_remove(sdev, buf, count);
if (sscanf(buf, "%d", &inject_type) != 1) {
kfree(buf);
return -EINVAL;
}
inject = kzalloc(sizeof(struct sdebug_err_inject), GFP_KERNEL);
if (!inject) {
kfree(buf);
return -ENOMEM;
}
switch (inject_type) {
case ERR_TMOUT_CMD:
case ERR_ABORT_CMD_FAILED:
case ERR_LUN_RESET_FAILED:
if (sscanf(buf, "%d %d %hhx", &inject->type, &inject->cnt,
&inject->cmd) != 3)
goto out_error;
break;
case ERR_FAIL_QUEUE_CMD:
if (sscanf(buf, "%d %d %hhx %x", &inject->type, &inject->cnt,
&inject->cmd, &inject->queuecmd_ret) != 4)
goto out_error;
break;
case ERR_FAIL_CMD:
if (sscanf(buf, "%d %d %hhx %hhx %hhx %hhx %hhx %hhx %hhx",
&inject->type, &inject->cnt, &inject->cmd,
&inject->host_byte, &inject->driver_byte,
&inject->status_byte, &inject->sense_key,
&inject->asc, &inject->asq) != 9)
goto out_error;
break;
default:
goto out_error;
}
kfree(buf);
sdebug_err_add(sdev, inject);
return count;
out_error:
kfree(buf);
kfree(inject);
return -EINVAL;
}
static const struct file_operations sdebug_error_fops = {
.open = sdebug_error_open,
.read = seq_read,
.write = sdebug_error_write,
.release = single_release,
};
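The per-device "error" file accepts one rule per write, "<type> <count> <opcode> [extras]", as parsed by sdebug_error_write() above; a leading '-' removes a matching rule via sdebug_err_remove(). A hypothetical userspace sketch; the device directory name ("0:0:0:0") is an assumption that depends on how the scsi_debug device enumerated locally:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* type 0 (ERR_TMOUT_CMD), fire once (-1), on READ(10) opcode 0x28 */
	const char *path = "/sys/kernel/debug/scsi_debug/0:0:0:0/error";
	const char *rule = "0 -1 0x28";
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, rule, strlen(rule)) < 0)
		perror("write");
	close(fd);
	return 0;
}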
static int sdebug_target_reset_fail_show(struct seq_file *m, void *p)
{
struct scsi_target *starget = (struct scsi_target *)m->private;
struct sdebug_target_info *targetip =
(struct sdebug_target_info *)starget->hostdata;
if (targetip)
seq_printf(m, "%c\n", targetip->reset_fail ? 'Y' : 'N');
return 0;
}
static int sdebug_target_reset_fail_open(struct inode *inode, struct file *file)
{
return single_open(file, sdebug_target_reset_fail_show, inode->i_private);
}
static ssize_t sdebug_target_reset_fail_write(struct file *file,
const char __user *ubuf, size_t count, loff_t *ppos)
{
int ret;
struct scsi_target *starget =
(struct scsi_target *)file->f_inode->i_private;
struct sdebug_target_info *targetip =
(struct sdebug_target_info *)starget->hostdata;
if (targetip) {
ret = kstrtobool_from_user(ubuf, count, &targetip->reset_fail);
return ret < 0 ? ret : count;
}
return -ENODEV;
}
static const struct file_operations sdebug_target_reset_fail_fops = {
.open = sdebug_target_reset_fail_open,
.read = seq_read,
.write = sdebug_target_reset_fail_write,
.release = single_release,
};
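A companion sketch for the per-target fail_reset flag: kstrtobool_from_user() accepts "1", "Y" or "y", and the next target reset then returns FAILED. This fragment reuses the includes from the previous sketch; the target directory name ("target0:0:0") is again an assumption:

static int arm_fail_reset(void)
{
	int fd = open("/sys/kernel/debug/scsi_debug/target0:0:0/fail_reset",
		      O_WRONLY);

	if (fd < 0)
		return -1;
	if (write(fd, "1", 1) < 0) {
		close(fd);
		return -1;
	}
	close(fd);
	return 0;
}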
static int sdebug_target_alloc(struct scsi_target *starget)
{
struct sdebug_target_info *targetip;
struct dentry *dentry;
targetip = kzalloc(sizeof(struct sdebug_target_info), GFP_KERNEL);
if (!targetip)
return -ENOMEM;
targetip->debugfs_entry = debugfs_create_dir(dev_name(&starget->dev),
sdebug_debugfs_root);
if (IS_ERR_OR_NULL(targetip->debugfs_entry))
pr_info("%s: failed to create debugfs directory for target %s\n",
__func__, dev_name(&starget->dev));
debugfs_create_file("fail_reset", 0600, targetip->debugfs_entry, starget,
&sdebug_target_reset_fail_fops);
if (IS_ERR_OR_NULL(dentry))
pr_info("%s: failed to create fail_reset file for target %s\n",
__func__, dev_name(&starget->dev));
starget->hostdata = targetip;
return 0;
}
static void sdebug_tartget_cleanup_async(void *data, async_cookie_t cookie)
{
struct sdebug_target_info *targetip = data;
debugfs_remove(targetip->debugfs_entry);
kfree(targetip);
}
static void sdebug_target_destroy(struct scsi_target *starget)
{
struct sdebug_target_info *targetip;
targetip = (struct sdebug_target_info *)starget->hostdata;
if (targetip) {
starget->hostdata = NULL;
async_schedule(sdebug_tartget_cleanup_async, targetip);
}
}
/* Only do the extra work involved in logical block provisioning if one or
* more of the lbpu, lbpws or lbpws10 parameters are given and we are doing
@ -5096,6 +5407,8 @@ static struct sdebug_dev_info *sdebug_device_create(
}
devip->create_ts = ktime_get_boottime();
atomic_set(&devip->stopped, (sdeb_tur_ms_to_ready > 0 ? 2 : 0));
spin_lock_init(&devip->list_lock);
INIT_LIST_HEAD(&devip->inject_err_list);
list_add_tail(&devip->dev_list, &sdbg_host->dev_info_list);
}
return devip;
@ -5141,6 +5454,7 @@ static int scsi_debug_slave_alloc(struct scsi_device *sdp)
if (sdebug_verbose)
pr_info("slave_alloc <%u %u %u %llu>\n",
sdp->host->host_no, sdp->channel, sdp->id, sdp->lun);
return 0;
}
@ -5148,6 +5462,7 @@ static int scsi_debug_slave_configure(struct scsi_device *sdp)
{
struct sdebug_dev_info *devip =
(struct sdebug_dev_info *)sdp->hostdata;
struct dentry *dentry;
if (sdebug_verbose)
pr_info("slave_configure <%u %u %u %llu>\n",
@ -5163,6 +5478,22 @@ static int scsi_debug_slave_configure(struct scsi_device *sdp)
if (sdebug_no_uld)
sdp->no_uld_attach = 1;
config_cdb_len(sdp);
if (sdebug_allow_restart)
sdp->allow_restart = 1;
devip->debugfs_entry = debugfs_create_dir(dev_name(&sdp->sdev_dev),
sdebug_debugfs_root);
if (IS_ERR_OR_NULL(devip->debugfs_entry))
pr_info("%s: failed to create debugfs directory for device %s\n",
__func__, dev_name(&sdp->sdev_gendev));
dentry = debugfs_create_file("error", 0600, devip->debugfs_entry, sdp,
&sdebug_error_fops);
if (IS_ERR_OR_NULL(dentry))
pr_info("%s: failed to create error file for device %s\n",
__func__, dev_name(&sdp->sdev_gendev));
return 0;
}
@ -5170,15 +5501,27 @@ static void scsi_debug_slave_destroy(struct scsi_device *sdp)
{
struct sdebug_dev_info *devip =
(struct sdebug_dev_info *)sdp->hostdata;
struct sdebug_err_inject *err;
if (sdebug_verbose)
pr_info("slave_destroy <%u %u %u %llu>\n",
sdp->host->host_no, sdp->channel, sdp->id, sdp->lun);
if (devip) {
/* make this slot available for re-use */
devip->used = false;
sdp->hostdata = NULL;
if (!devip)
return;
spin_lock(&devip->list_lock);
list_for_each_entry_rcu(err, &devip->inject_err_list, list) {
list_del_rcu(&err->list);
call_rcu(&err->rcu, sdebug_err_free);
}
spin_unlock(&devip->list_lock);
debugfs_remove(devip->debugfs_entry);
/* make this slot available for re-use */
devip->used = false;
sdp->hostdata = NULL;
}
/* Returns true if we require the queued memory to be freed by the caller. */
@ -5272,9 +5615,39 @@ static void stop_all_queued(void)
mutex_unlock(&sdebug_host_list_mutex);
}
static int sdebug_fail_abort(struct scsi_cmnd *cmnd)
{
struct scsi_device *sdp = cmnd->device;
struct sdebug_dev_info *devip = (struct sdebug_dev_info *)sdp->hostdata;
struct sdebug_err_inject *err;
unsigned char *cmd = cmnd->cmnd;
int ret = 0;
if (devip == NULL)
return 0;
rcu_read_lock();
list_for_each_entry_rcu(err, &devip->inject_err_list, list) {
if (err->type == ERR_ABORT_CMD_FAILED &&
(err->cmd == cmd[0] || err->cmd == 0xff)) {
ret = !!err->cnt;
if (err->cnt < 0)
err->cnt++;
rcu_read_unlock();
return ret;
}
}
rcu_read_unlock();
return 0;
}
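The cnt convention above is easy to misread; inferred from the code, not from commit documentation:

/*
 * cnt < 0: counts up toward zero, so -3 injects the failure three
 * times and then goes inert;
 * cnt > 0: is never changed, so the rule fires until it is removed;
 * cnt == 0: the rule exists but never fires.
 * The same convention recurs in sdebug_fail_lun_reset(),
 * sdebug_timeout_cmd() and sdebug_fail_queue_cmd() below.
 */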
static int scsi_debug_abort(struct scsi_cmnd *SCpnt)
{
bool ok = scsi_debug_abort_cmnd(SCpnt);
u8 *cmd = SCpnt->cmnd;
u8 opcode = cmd[0];
++num_aborts;
@ -5283,6 +5656,12 @@ static int scsi_debug_abort(struct scsi_cmnd *SCpnt)
"%s: command%s found\n", __func__,
ok ? "" : " not");
if (sdebug_fail_abort(SCpnt)) {
scmd_printk(KERN_INFO, SCpnt, "fail abort command 0x%x\n",
opcode);
return FAILED;
}
return SUCCESS;
}
@ -5306,10 +5685,40 @@ static void scsi_debug_stop_all_queued(struct scsi_device *sdp)
scsi_debug_stop_all_queued_iter, sdp);
}
static int sdebug_fail_lun_reset(struct scsi_cmnd *cmnd)
{
struct scsi_device *sdp = cmnd->device;
struct sdebug_dev_info *devip = (struct sdebug_dev_info *)sdp->hostdata;
struct sdebug_err_inject *err;
unsigned char *cmd = cmnd->cmnd;
int ret = 0;
if (devip == NULL)
return 0;
rcu_read_lock();
list_for_each_entry_rcu(err, &devip->inject_err_list, list) {
if (err->type == ERR_LUN_RESET_FAILED &&
(err->cmd == cmd[0] || err->cmd == 0xff)) {
ret = !!err->cnt;
if (err->cnt < 0)
err->cnt++;
rcu_read_unlock();
return ret;
}
}
rcu_read_unlock();
return 0;
}
static int scsi_debug_device_reset(struct scsi_cmnd *SCpnt)
{
struct scsi_device *sdp = SCpnt->device;
struct sdebug_dev_info *devip = sdp->hostdata;
u8 *cmd = SCpnt->cmnd;
u8 opcode = cmd[0];
++num_dev_resets;
@ -5320,14 +5729,33 @@ static int scsi_debug_device_reset(struct scsi_cmnd *SCpnt)
if (devip)
set_bit(SDEBUG_UA_POR, devip->uas_bm);
if (sdebug_fail_lun_reset(SCpnt)) {
scmd_printk(KERN_INFO, SCpnt, "fail lun reset 0x%x\n", opcode);
return FAILED;
}
return SUCCESS;
}
static int sdebug_fail_target_reset(struct scsi_cmnd *cmnd)
{
struct scsi_target *starget = scsi_target(cmnd->device);
struct sdebug_target_info *targetip =
(struct sdebug_target_info *)starget->hostdata;
if (targetip)
return targetip->reset_fail;
return 0;
}
static int scsi_debug_target_reset(struct scsi_cmnd *SCpnt)
{
struct scsi_device *sdp = SCpnt->device;
struct sdebug_host_info *sdbg_host = shost_to_sdebug_host(sdp->host);
struct sdebug_dev_info *devip;
u8 *cmd = SCpnt->cmnd;
u8 opcode = cmd[0];
int k = 0;
++num_target_resets;
@ -5345,6 +5773,12 @@ static int scsi_debug_target_reset(struct scsi_cmnd *SCpnt)
sdev_printk(KERN_INFO, sdp,
"%s: %d device(s) found in target\n", __func__, k);
if (sdebug_fail_target_reset(SCpnt)) {
scmd_printk(KERN_INFO, SCpnt, "fail target reset 0x%x\n",
opcode);
return FAILED;
}
return SUCCESS;
}
@ -5772,6 +6206,7 @@ module_param_named(zone_cap_mb, sdeb_zbc_zone_cap_mb, int, S_IRUGO);
module_param_named(zone_max_open, sdeb_zbc_max_open, int, S_IRUGO);
module_param_named(zone_nr_conv, sdeb_zbc_nr_conv, int, S_IRUGO);
module_param_named(zone_size_mb, sdeb_zbc_zone_size_mb, int, S_IRUGO);
module_param_named(allow_restart, sdebug_allow_restart, bool, S_IRUGO | S_IWUSR);
MODULE_AUTHOR("Eric Youngdale + Douglas Gilbert");
MODULE_DESCRIPTION("SCSI debug adapter driver");
@ -5844,6 +6279,7 @@ MODULE_PARM_DESC(zone_cap_mb, "Zone capacity in MiB (def=zone size)");
MODULE_PARM_DESC(zone_max_open, "Maximum number of open zones; [0] for no limit (def=auto)");
MODULE_PARM_DESC(zone_nr_conv, "Number of conventional zones (def=1)");
MODULE_PARM_DESC(zone_size_mb, "Zone size in MiB (def=auto)");
MODULE_PARM_DESC(allow_restart, "Set scsi_device's allow_restart flag(def=0)");
#define SDEBUG_INFO_LEN 256
static char sdebug_info[SDEBUG_INFO_LEN];
@ -7011,6 +7447,10 @@ static int __init scsi_debug_init(void)
goto driver_unreg;
}
sdebug_debugfs_root = debugfs_create_dir("scsi_debug", NULL);
if (IS_ERR_OR_NULL(sdebug_debugfs_root))
pr_info("%s: failed to create initial debugfs directory\n", __func__);
for (k = 0; k < hosts_to_add; k++) {
if (want_store && k == 0) {
ret = sdebug_add_host_helper(idx);
@ -7057,6 +7497,7 @@ static void __exit scsi_debug_exit(void)
sdebug_erase_all_stores(false);
xa_destroy(per_store_ap);
debugfs_remove(sdebug_debugfs_root);
}
device_initcall(scsi_debug_init);
@ -7496,6 +7937,104 @@ static int sdebug_blk_mq_poll(struct Scsi_Host *shost, unsigned int queue_num)
return num_entries;
}
static int sdebug_timeout_cmd(struct scsi_cmnd *cmnd)
{
struct scsi_device *sdp = cmnd->device;
struct sdebug_dev_info *devip = (struct sdebug_dev_info *)sdp->hostdata;
struct sdebug_err_inject *err;
unsigned char *cmd = cmnd->cmnd;
int ret = 0;
if (devip == NULL)
return 0;
rcu_read_lock();
list_for_each_entry_rcu(err, &devip->inject_err_list, list) {
if (err->type == ERR_TMOUT_CMD &&
(err->cmd == cmd[0] || err->cmd == 0xff)) {
ret = !!err->cnt;
if (err->cnt < 0)
err->cnt++;
rcu_read_unlock();
return ret;
}
}
rcu_read_unlock();
return 0;
}
static int sdebug_fail_queue_cmd(struct scsi_cmnd *cmnd)
{
struct scsi_device *sdp = cmnd->device;
struct sdebug_dev_info *devip = (struct sdebug_dev_info *)sdp->hostdata;
struct sdebug_err_inject *err;
unsigned char *cmd = cmnd->cmnd;
int ret = 0;
if (devip == NULL)
return 0;
rcu_read_lock();
list_for_each_entry_rcu(err, &devip->inject_err_list, list) {
if (err->type == ERR_FAIL_QUEUE_CMD &&
(err->cmd == cmd[0] || err->cmd == 0xff)) {
ret = err->cnt ? err->queuecmd_ret : 0;
if (err->cnt < 0)
err->cnt++;
rcu_read_unlock();
return ret;
}
}
rcu_read_unlock();
return 0;
}
static int sdebug_fail_cmd(struct scsi_cmnd *cmnd, int *retval,
struct sdebug_err_inject *info)
{
struct scsi_device *sdp = cmnd->device;
struct sdebug_dev_info *devip = (struct sdebug_dev_info *)sdp->hostdata;
struct sdebug_err_inject *err;
unsigned char *cmd = cmnd->cmnd;
int ret = 0;
int result;
if (devip == NULL)
return 0;
rcu_read_lock();
list_for_each_entry_rcu(err, &devip->inject_err_list, list) {
if (err->type == ERR_FAIL_CMD &&
(err->cmd == cmd[0] || err->cmd == 0xff)) {
if (!err->cnt) {
rcu_read_unlock();
return 0;
}
ret = !!err->cnt;
goto out_handle;
}
}
rcu_read_unlock();
return 0;
out_handle:
if (err->cnt < 0)
err->cnt++;
mk_sense_buffer(cmnd, err->sense_key, err->asc, err->asq);
result = err->status_byte | err->host_byte << 16 | err->driver_byte << 24;
*info = *err;
/* drop the RCU read lock only after the entry has been copied */
rcu_read_unlock();
*retval = schedule_resp(cmnd, devip, result, NULL, 0, 0);
return ret;
}
static int scsi_debug_queuecommand(struct Scsi_Host *shost,
struct scsi_cmnd *scp)
{
@ -7515,6 +8054,8 @@ static int scsi_debug_queuecommand(struct Scsi_Host *shost,
u8 opcode = cmd[0];
bool has_wlun_rl;
bool inject_now;
int ret = 0;
struct sdebug_err_inject err;
scsi_set_resid(scp, 0);
if (sdebug_statistics) {
@ -7554,6 +8095,29 @@ static int scsi_debug_queuecommand(struct Scsi_Host *shost,
if (NULL == devip)
goto err_out;
}
if (sdebug_timeout_cmd(scp)) {
scmd_printk(KERN_INFO, scp, "timeout command 0x%x\n", opcode);
return 0;
}
ret = sdebug_fail_queue_cmd(scp);
if (ret) {
scmd_printk(KERN_INFO, scp, "fail queue command 0x%x with 0x%x\n",
opcode, ret);
return ret;
}
if (sdebug_fail_cmd(scp, &ret, &err)) {
scmd_printk(KERN_INFO, scp,
"fail command 0x%x with hostbyte=0x%x, "
"driverbyte=0x%x, statusbyte=0x%x, "
"sense_key=0x%x, asc=0x%x, asq=0x%x\n",
opcode, err.host_byte, err.driver_byte,
err.status_byte, err.sense_key, err.asc, err.asq);
return ret;
}
if (unlikely(inject_now && !atomic_read(&sdeb_inject_pending)))
atomic_set(&sdeb_inject_pending, 1);
@ -7672,7 +8236,6 @@ static int sdebug_init_cmd_priv(struct Scsi_Host *shost, struct scsi_cmnd *cmd)
return 0;
}
static struct scsi_host_template sdebug_driver_template = {
.show_info = scsi_debug_show_info,
.write_info = scsi_debug_write_info,
@ -7702,6 +8265,8 @@ static struct scsi_host_template sdebug_driver_template = {
.track_queue_depth = 1,
.cmd_size = sizeof(struct sdebug_scsi_cmd),
.init_cmd_priv = sdebug_init_cmd_priv,
.target_alloc = sdebug_target_alloc,
.target_destroy = sdebug_target_destroy,
};
static int sdebug_driver_probe(struct device *dev)

View File

@ -774,6 +774,7 @@ static void scsi_io_completion_action(struct scsi_cmnd *cmd, int result)
case 0x1b: /* sanitize in progress */
case 0x1d: /* configuration in progress */
case 0x24: /* depopulation in progress */
case 0x25: /* depopulation restore in progress */
action = ACTION_DELAYED_RETRY;
break;
case 0x0a: /* ALUA state transition */
@ -1250,28 +1251,26 @@ static inline int scsi_dev_queue_ready(struct request_queue *q,
int token;
token = sbitmap_get(&sdev->budget_map);
if (atomic_read(&sdev->device_blocked)) {
if (token < 0)
goto out;
if (token < 0)
return -1;
if (scsi_device_busy(sdev) > 1)
goto out_dec;
if (!atomic_read(&sdev->device_blocked))
return token;
/*
* unblock after device_blocked iterates to zero
*/
if (atomic_dec_return(&sdev->device_blocked) > 0)
goto out_dec;
SCSI_LOG_MLQUEUE(3, sdev_printk(KERN_INFO, sdev,
"unblocking device at zero depth\n"));
/*
* Only unblock if no other commands are pending and
* if device_blocked has decreased to zero
*/
if (scsi_device_busy(sdev) > 1 ||
atomic_dec_return(&sdev->device_blocked) > 0) {
sbitmap_put(&sdev->budget_map, token);
return -1;
}
SCSI_LOG_MLQUEUE(3, sdev_printk(KERN_INFO, sdev,
"unblocking device at zero depth\n"));
return token;
out_dec:
if (token >= 0)
sbitmap_put(&sdev->budget_map, token);
out:
return -1;
}
/*
@ -2299,10 +2298,10 @@ scsi_test_unit_ready(struct scsi_device *sdev, int timeout, int retries,
do {
result = scsi_execute_cmd(sdev, cmd, REQ_OP_DRV_IN, NULL, 0,
timeout, 1, &exec_args);
if (sdev->removable && scsi_sense_valid(sshdr) &&
if (sdev->removable && result > 0 && scsi_sense_valid(sshdr) &&
sshdr->sense_key == UNIT_ATTENTION)
sdev->changed = 1;
} while (scsi_sense_valid(sshdr) &&
} while (result > 0 && scsi_sense_valid(sshdr) &&
sshdr->sense_key == UNIT_ATTENTION && --retries);
return result;

View File

@ -676,10 +676,10 @@ spi_dv_device_echo_buffer(struct scsi_device *sdev, u8 *buffer,
for (r = 0; r < retries; r++) {
result = spi_execute(sdev, spi_write_buffer, REQ_OP_DRV_OUT,
buffer, len, &sshdr);
if(result || !scsi_device_online(sdev)) {
if (result || !scsi_device_online(sdev)) {
scsi_device_set_state(sdev, SDEV_QUIESCE);
if (scsi_sense_valid(&sshdr)
if (result > 0 && scsi_sense_valid(&sshdr)
&& sshdr.sense_key == ILLEGAL_REQUEST
/* INVALID FIELD IN CDB */
&& sshdr.asc == 0x24 && sshdr.ascq == 0x00)

View File

@ -143,7 +143,7 @@ cache_type_store(struct device *dev, struct device_attribute *attr,
struct scsi_mode_data data;
struct scsi_sense_hdr sshdr;
static const char temp[] = "temporary ";
int len;
int len, ret;
if (sdp->type != TYPE_DISK && sdp->type != TYPE_ZBC)
/* no cache control on RBC devices; theoretically they
@ -190,9 +190,10 @@ cache_type_store(struct device *dev, struct device_attribute *attr,
*/
data.device_specific = 0;
if (scsi_mode_select(sdp, 1, sp, buffer_data, len, SD_TIMEOUT,
sdkp->max_retries, &data, &sshdr)) {
if (scsi_sense_valid(&sshdr))
ret = scsi_mode_select(sdp, 1, sp, buffer_data, len, SD_TIMEOUT,
sdkp->max_retries, &data, &sshdr);
if (ret) {
if (ret > 0 && scsi_sense_valid(&sshdr))
sd_print_sense_hdr(sdkp, &sshdr);
return -EINVAL;
}
@ -2258,19 +2259,21 @@ sd_spinup_disk(struct scsi_disk *sdkp)
sdkp->max_retries,
&exec_args);
/*
* If the drive has indicated to us that it
* doesn't have any media in it, don't bother
* with any more polling.
*/
if (media_not_present(sdkp, &sshdr)) {
if (media_was_present)
sd_printk(KERN_NOTICE, sdkp, "Media removed, stopped polling\n");
return;
}
if (the_result > 0) {
/*
* If the drive has indicated to us that it
* doesn't have any media in it, don't bother
* with any more polling.
*/
if (media_not_present(sdkp, &sshdr)) {
if (media_was_present)
sd_printk(KERN_NOTICE, sdkp,
"Media removed, stopped polling\n");
return;
}
if (the_result)
sense_valid = scsi_sense_valid(&sshdr);
}
retries++;
} while (retries < 3 &&
(!scsi_status_is_good(the_result) ||
@ -2302,6 +2305,10 @@ sd_spinup_disk(struct scsi_disk *sdkp)
break; /* unavailable */
if (sshdr.asc == 4 && sshdr.ascq == 0x1b)
break; /* sanitize in progress */
if (sshdr.asc == 4 && sshdr.ascq == 0x24)
break; /* depopulation in progress */
if (sshdr.asc == 4 && sshdr.ascq == 0x25)
break; /* depopulation restoration in progress */
/*
* Issue command to spin up drive when not ready
*/
@ -2466,11 +2473,10 @@ static int read_capacity_16(struct scsi_disk *sdkp, struct scsi_device *sdp,
the_result = scsi_execute_cmd(sdp, cmd, REQ_OP_DRV_IN,
buffer, RC16_LEN, SD_TIMEOUT,
sdkp->max_retries, &exec_args);
if (media_not_present(sdkp, &sshdr))
return -ENODEV;
if (the_result > 0) {
if (media_not_present(sdkp, &sshdr))
return -ENODEV;
sense_valid = scsi_sense_valid(&sshdr);
if (sense_valid &&
sshdr.sense_key == ILLEGAL_REQUEST &&
@ -2967,7 +2973,7 @@ sd_read_cache_type(struct scsi_disk *sdkp, unsigned char *buffer)
}
bad_sense:
if (scsi_sense_valid(&sshdr) &&
if (res == -EIO && scsi_sense_valid(&sshdr) &&
sshdr.sense_key == ILLEGAL_REQUEST &&
sshdr.asc == 0x24 && sshdr.ascq == 0x0)
/* Invalid field in CDB */
@ -3015,7 +3021,7 @@ static void sd_read_app_tag_own(struct scsi_disk *sdkp, unsigned char *buffer)
sd_first_printk(KERN_WARNING, sdkp,
"getting Control mode page failed, assume no ATO\n");
if (scsi_sense_valid(&sshdr))
if (res == -EIO && scsi_sense_valid(&sshdr))
sd_print_sense_hdr(sdkp, &sshdr);
return;

View File

@ -1850,7 +1850,7 @@ snic_dr_clean_pending_req(struct snic *snic, struct scsi_cmnd *lr_sc)
{
struct scsi_device *lr_sdev = lr_sc->device;
u32 tag = 0;
int ret = FAILED;
int ret;
for (tag = 0; tag < snic->max_tag_id; tag++) {
if (tag == snic_cmd_tag(lr_sc))
@ -1859,7 +1859,6 @@ snic_dr_clean_pending_req(struct snic *snic, struct scsi_cmnd *lr_sc)
ret = snic_dr_clean_single_req(snic, tag, lr_sdev);
if (ret) {
SNIC_HOST_ERR(snic->shost, "clean_err:tag = %d\n", tag);
goto clean_err;
}
}
@ -1867,24 +1866,19 @@ snic_dr_clean_pending_req(struct snic *snic, struct scsi_cmnd *lr_sc)
schedule_timeout(msecs_to_jiffies(100));
/* Walk through all the cmds and check abts status. */
if (snic_is_abts_pending(snic, lr_sc)) {
ret = FAILED;
if (snic_is_abts_pending(snic, lr_sc))
goto clean_err;
}
ret = 0;
SNIC_SCSI_DBG(snic->shost, "clean_pending_req: Success.\n");
return ret;
return 0;
clean_err:
ret = FAILED;
SNIC_HOST_ERR(snic->shost,
"Failed to Clean Pending IOs on %s device.\n",
dev_name(&lr_sdev->sdev_gendev));
return ret;
return FAILED;
} /* end of snic_dr_clean_pending_req */

View File

@ -177,7 +177,8 @@ static unsigned int sr_get_events(struct scsi_device *sdev)
result = scsi_execute_cmd(sdev, cmd, REQ_OP_DRV_IN, buf, sizeof(buf),
SR_TIMEOUT, MAX_RETRIES, &exec_args);
if (scsi_sense_valid(&sshdr) && sshdr.sense_key == UNIT_ATTENTION)
if (result > 0 && scsi_sense_valid(&sshdr) &&
sshdr.sense_key == UNIT_ATTENTION)
return DISK_EVENT_MEDIA_CHANGE;
if (result || be16_to_cpu(eh->data_len) < sizeof(*med))

View File

@ -559,14 +559,15 @@ static void sym53c8xx_timer(struct timer_list *t)
*/
#define SYM_EH_ABORT 0
#define SYM_EH_DEVICE_RESET 1
#define SYM_EH_BUS_RESET 2
#define SYM_EH_HOST_RESET 3
/*
* Generic method for our eh processing.
* The 'op' argument tells what we have to do.
*/
static int sym_eh_handler(int op, char *opname, struct scsi_cmnd *cmd)
/*
* Error handlers called from the eh thread (one thread per HBA).
*/
static int sym53c8xx_eh_abort_handler(struct scsi_cmnd *cmd)
{
struct sym_ucmd *ucmd = SYM_UCMD_PTR(cmd);
struct Scsi_Host *shost = cmd->device->host;
@ -578,37 +579,13 @@ static int sym_eh_handler(int op, char *opname, struct scsi_cmnd *cmd)
int sts = -1;
struct completion eh_done;
scmd_printk(KERN_WARNING, cmd, "%s operation started\n", opname);
scmd_printk(KERN_WARNING, cmd, "ABORT operation started\n");
/* We may be in an error condition because the PCI bus
* went down. In this case, we need to wait until the
* PCI bus is reset, the card is reset, and only then
* proceed with the scsi error recovery. There's no
* point in hurrying; take a leisurely wait.
/*
* Escalate to host reset if the PCI bus went down
*/
#define WAIT_FOR_PCI_RECOVERY 35
if (pci_channel_offline(pdev)) {
int finished_reset = 0;
init_completion(&eh_done);
spin_lock_irq(shost->host_lock);
/* Make sure we didn't race */
if (pci_channel_offline(pdev)) {
BUG_ON(sym_data->io_reset);
sym_data->io_reset = &eh_done;
} else {
finished_reset = 1;
}
spin_unlock_irq(shost->host_lock);
if (!finished_reset)
finished_reset = wait_for_completion_timeout
(sym_data->io_reset,
WAIT_FOR_PCI_RECOVERY*HZ);
spin_lock_irq(shost->host_lock);
sym_data->io_reset = NULL;
spin_unlock_irq(shost->host_lock);
if (!finished_reset)
return SCSI_FAILED;
}
if (pci_channel_offline(pdev))
return SCSI_FAILED;
spin_lock_irq(shost->host_lock);
/* This one is queued in some place -> to wait for completion */
@ -620,28 +597,7 @@ static int sym_eh_handler(int op, char *opname, struct scsi_cmnd *cmd)
}
}
/* Try to proceed the operation we have been asked for */
sts = -1;
switch(op) {
case SYM_EH_ABORT:
sts = sym_abort_scsiio(np, cmd, 1);
break;
case SYM_EH_DEVICE_RESET:
sts = sym_reset_scsi_target(np, cmd->device->id);
break;
case SYM_EH_BUS_RESET:
sym_reset_scsi_bus(np, 1);
sts = 0;
break;
case SYM_EH_HOST_RESET:
sym_reset_scsi_bus(np, 0);
sym_start_up(shost, 1);
sts = 0;
break;
default:
break;
}
sts = sym_abort_scsiio(np, cmd, 1);
/* On error, restore everything and cross fingers :) */
if (sts)
cmd_queued = 0;
@ -658,33 +614,130 @@ static int sym_eh_handler(int op, char *opname, struct scsi_cmnd *cmd)
spin_unlock_irq(shost->host_lock);
}
dev_warn(&cmd->device->sdev_gendev, "%s operation %s.\n", opname,
dev_warn(&cmd->device->sdev_gendev, "ABORT operation %s.\n",
sts==0 ? "complete" :sts==-2 ? "timed-out" : "failed");
return sts ? SCSI_FAILED : SCSI_SUCCESS;
}
/*
* Error handlers called from the eh thread (one thread per HBA).
*/
static int sym53c8xx_eh_abort_handler(struct scsi_cmnd *cmd)
static int sym53c8xx_eh_target_reset_handler(struct scsi_cmnd *cmd)
{
return sym_eh_handler(SYM_EH_ABORT, "ABORT", cmd);
}
struct scsi_target *starget = scsi_target(cmd->device);
struct Scsi_Host *shost = dev_to_shost(starget->dev.parent);
struct sym_data *sym_data = shost_priv(shost);
struct pci_dev *pdev = sym_data->pdev;
struct sym_hcb *np = sym_data->ncb;
SYM_QUEHEAD *qp;
int sts;
struct completion eh_done;
static int sym53c8xx_eh_device_reset_handler(struct scsi_cmnd *cmd)
{
return sym_eh_handler(SYM_EH_DEVICE_RESET, "DEVICE RESET", cmd);
starget_printk(KERN_WARNING, starget,
"TARGET RESET operation started\n");
/*
* Escalate to host reset if the PCI bus went down
*/
if (pci_channel_offline(pdev))
return SCSI_FAILED;
spin_lock_irq(shost->host_lock);
sts = sym_reset_scsi_target(np, starget->id);
if (!sts) {
FOR_EACH_QUEUED_ELEMENT(&np->busy_ccbq, qp) {
struct sym_ccb *cp = sym_que_entry(qp, struct sym_ccb,
link_ccbq);
struct scsi_cmnd *cmd = cp->cmd;
struct sym_ucmd *ucmd;
if (!cmd || cmd->device->channel != starget->channel ||
cmd->device->id != starget->id)
continue;
ucmd = SYM_UCMD_PTR(cmd);
init_completion(&eh_done);
ucmd->eh_done = &eh_done;
spin_unlock_irq(shost->host_lock);
if (!wait_for_completion_timeout(&eh_done, 5*HZ)) {
ucmd->eh_done = NULL;
sts = -2;
}
spin_lock_irq(shost->host_lock);
}
}
spin_unlock_irq(shost->host_lock);
starget_printk(KERN_WARNING, starget, "TARGET RESET operation %s.\n",
sts==0 ? "complete" :sts==-2 ? "timed-out" : "failed");
return SCSI_SUCCESS;
}
static int sym53c8xx_eh_bus_reset_handler(struct scsi_cmnd *cmd)
{
return sym_eh_handler(SYM_EH_BUS_RESET, "BUS RESET", cmd);
struct Scsi_Host *shost = cmd->device->host;
struct sym_data *sym_data = shost_priv(shost);
struct pci_dev *pdev = sym_data->pdev;
struct sym_hcb *np = sym_data->ncb;
scmd_printk(KERN_WARNING, cmd, "BUS RESET operation started\n");
/*
* Escalate to host reset if the PCI bus went down
*/
if (pci_channel_offline(pdev))
return SCSI_FAILED;
spin_lock_irq(shost->host_lock);
sym_reset_scsi_bus(np, 1);
spin_unlock_irq(shost->host_lock);
dev_warn(&cmd->device->sdev_gendev, "BUS RESET operation complete.\n");
return SCSI_SUCCESS;
}
static int sym53c8xx_eh_host_reset_handler(struct scsi_cmnd *cmd)
{
return sym_eh_handler(SYM_EH_HOST_RESET, "HOST RESET", cmd);
struct Scsi_Host *shost = cmd->device->host;
struct sym_data *sym_data = shost_priv(shost);
struct pci_dev *pdev = sym_data->pdev;
struct sym_hcb *np = sym_data->ncb;
struct completion eh_done;
int finished_reset = 1;
shost_printk(KERN_WARNING, shost, "HOST RESET operation started\n");
/* We may be in an error condition because the PCI bus
* went down. In this case, we need to wait until the
* PCI bus is reset, the card is reset, and only then
* proceed with the scsi error recovery. There's no
* point in hurrying; take a leisurely wait.
*/
#define WAIT_FOR_PCI_RECOVERY 35
if (pci_channel_offline(pdev)) {
init_completion(&eh_done);
spin_lock_irq(shost->host_lock);
/* Make sure we didn't race */
if (pci_channel_offline(pdev)) {
BUG_ON(sym_data->io_reset);
sym_data->io_reset = &eh_done;
finished_reset = 0;
}
spin_unlock_irq(shost->host_lock);
if (!finished_reset)
finished_reset = wait_for_completion_timeout
(sym_data->io_reset,
WAIT_FOR_PCI_RECOVERY*HZ);
spin_lock_irq(shost->host_lock);
sym_data->io_reset = NULL;
spin_unlock_irq(shost->host_lock);
}
if (finished_reset) {
sym_reset_scsi_bus(np, 0);
sym_start_up(shost, 1);
}
shost_printk(KERN_WARNING, shost, "HOST RESET operation %s.\n",
finished_reset==1 ? "complete" : "failed");
return finished_reset ? SCSI_SUCCESS : SCSI_FAILED;
}
/*
@ -1635,7 +1688,7 @@ static const struct scsi_host_template sym2_template = {
.slave_configure = sym53c8xx_slave_configure,
.slave_destroy = sym53c8xx_slave_destroy,
.eh_abort_handler = sym53c8xx_eh_abort_handler,
.eh_device_reset_handler = sym53c8xx_eh_device_reset_handler,
.eh_target_reset_handler = sym53c8xx_eh_target_reset_handler,
.eh_bus_reset_handler = sym53c8xx_eh_bus_reset_handler,
.eh_host_reset_handler = sym53c8xx_eh_host_reset_handler,
.this_id = 7,

View File

@ -1,12 +1,15 @@
# SPDX-License-Identifier: GPL-2.0-only
config ISCSI_TARGET
tristate "Linux-iSCSI.org iSCSI Target Mode Stack"
tristate "SCSI Target Mode Stack"
depends on INET
select CRYPTO
select CRYPTO_CRC32C
select CRYPTO_CRC32C_INTEL if X86
help
Say M here to enable the ConfigFS enabled Linux-iSCSI.org iSCSI
Target Mode Stack.
Say M to enable the SCSI target mode stack. A SCSI target mode stack
is software that makes local storage available over a storage network
to a SCSI initiator system. The supported storage network technologies
include iSCSI, Fibre Channel and the SCSI RDMA Protocol (SRP).
Configuration of the SCSI target mode stack happens through configfs.
source "drivers/target/iscsi/cxgbit/Kconfig"


@ -1234,12 +1234,6 @@ int iscsit_setup_scsi_cmd(struct iscsit_conn *conn, struct iscsit_cmd *cmd,
spin_lock_bh(&conn->cmd_lock);
list_add_tail(&cmd->i_conn_node, &conn->conn_cmd_list);
spin_unlock_bh(&conn->cmd_lock);
/*
* Check if we need to delay processing because of ALUA
* Active/NonOptimized primary access state.
*/
core_alua_check_nonop_delay(&cmd->se_cmd);
return 0;
}
EXPORT_SYMBOL(iscsit_setup_scsi_cmd);


@ -1589,5 +1589,8 @@ const struct target_core_fabric_ops iscsi_ops = {
.tfc_tpg_nacl_auth_attrs = lio_target_nacl_auth_attrs,
.tfc_tpg_nacl_param_attrs = lio_target_nacl_param_attrs,
.write_pending_must_be_called = true,
.write_pending_must_be_called = 1,
.default_submit_type = TARGET_DIRECT_SUBMIT,
.direct_submit_supp = 1,
};


@ -948,7 +948,7 @@ int iscsit_execute_cmd(struct iscsit_cmd *cmd, int ooo)
iscsit_set_unsolicited_dataout(cmd);
}
return transport_handle_cdb_direct(&cmd->se_cmd);
return target_submit(&cmd->se_cmd);
case ISCSI_OP_NOOP_OUT:
case ISCSI_OP_TEXT:


@ -318,7 +318,7 @@ static int iscsit_task_reassign_complete_read(
pr_debug("READ ITT: 0x%08x: t_state: %d never sent to"
" transport\n", cmd->init_task_tag,
cmd->se_cmd.t_state);
transport_handle_cdb_direct(se_cmd);
target_submit(se_cmd);
return 0;
}


@ -154,7 +154,7 @@ static void tcm_loop_target_queue_cmd(struct tcm_loop_cmd *tl_cmd)
GFP_ATOMIC))
return;
target_queue_submission(se_cmd);
target_submit(se_cmd);
return;
out_done:
@ -1102,6 +1102,8 @@ static const struct target_core_fabric_ops loop_ops = {
.tfc_wwn_attrs = tcm_loop_wwn_attrs,
.tfc_tpg_base_attrs = tcm_loop_tpg_attrs,
.tfc_tpg_attrib_attrs = tcm_loop_tpg_attrib_attrs,
.default_submit_type = TARGET_QUEUE_SUBMIT,
.direct_submit_supp = 0,
};
static int __init tcm_loop_fabric_init(void)


@ -2278,6 +2278,9 @@ static const struct target_core_fabric_ops sbp_ops = {
.tfc_wwn_attrs = sbp_wwn_attrs,
.tfc_tpg_base_attrs = sbp_tpg_base_attrs,
.tfc_tpg_attrib_attrs = sbp_tpg_attrib_attrs,
.default_submit_type = TARGET_DIRECT_SUBMIT,
.direct_submit_supp = 1,
};
static int __init sbp_init(void)


@ -850,7 +850,6 @@ int core_alua_check_nonop_delay(
msleep_interruptible(cmd->alua_nonop_delay);
return 0;
}
EXPORT_SYMBOL(core_alua_check_nonop_delay);
static int core_alua_write_tpg_metadata(
const char *path,


@ -577,6 +577,7 @@ DEF_CONFIGFS_ATTRIB_SHOW(unmap_granularity_alignment);
DEF_CONFIGFS_ATTRIB_SHOW(unmap_zeroes_data);
DEF_CONFIGFS_ATTRIB_SHOW(max_write_same_len);
DEF_CONFIGFS_ATTRIB_SHOW(emulate_rsoc);
DEF_CONFIGFS_ATTRIB_SHOW(submit_type);
#define DEF_CONFIGFS_ATTRIB_STORE_U32(_name) \
static ssize_t _name##_store(struct config_item *item, const char *page,\
@ -1231,6 +1232,24 @@ static ssize_t emulate_rsoc_store(struct config_item *item,
return count;
}
static ssize_t submit_type_store(struct config_item *item, const char *page,
size_t count)
{
struct se_dev_attrib *da = to_attrib(item);
int ret;
u8 val;
ret = kstrtou8(page, 0, &val);
if (ret < 0)
return ret;
if (val > TARGET_QUEUE_SUBMIT)
return -EINVAL;
da->submit_type = val;
return count;
}
CONFIGFS_ATTR(, emulate_model_alias);
CONFIGFS_ATTR(, emulate_dpo);
CONFIGFS_ATTR(, emulate_fua_write);
@ -1266,6 +1285,7 @@ CONFIGFS_ATTR(, unmap_zeroes_data);
CONFIGFS_ATTR(, max_write_same_len);
CONFIGFS_ATTR(, alua_support);
CONFIGFS_ATTR(, pgr_support);
CONFIGFS_ATTR(, submit_type);
/*
* dev_attrib attributes for devices using the target core SBC/SPC
@ -1308,6 +1328,7 @@ struct configfs_attribute *sbc_attrib_attrs[] = {
&attr_alua_support,
&attr_pgr_support,
&attr_emulate_rsoc,
&attr_submit_type,
NULL,
};
EXPORT_SYMBOL(sbc_attrib_attrs);
@ -1325,6 +1346,7 @@ struct configfs_attribute *passthrough_attrib_attrs[] = {
&attr_emulate_pr,
&attr_alua_support,
&attr_pgr_support,
&attr_submit_type,
NULL,
};
EXPORT_SYMBOL(passthrough_attrib_attrs);


@ -779,6 +779,7 @@ struct se_device *target_alloc_device(struct se_hba *hba, const char *name)
dev->dev_attrib.unmap_zeroes_data =
DA_UNMAP_ZEROES_DATA_DEFAULT;
dev->dev_attrib.max_write_same_len = DA_MAX_WRITE_SAME_LEN;
dev->dev_attrib.submit_type = TARGET_FABRIC_DEFAULT_SUBMIT;
xcopy_lun = &dev->xcopy_lun;
rcu_assign_pointer(xcopy_lun->lun_se_dev, dev);


@ -1065,8 +1065,32 @@ target_fabric_wwn_cmd_completion_affinity_store(struct config_item *item,
}
CONFIGFS_ATTR(target_fabric_wwn_, cmd_completion_affinity);
static ssize_t
target_fabric_wwn_default_submit_type_show(struct config_item *item,
char *page)
{
struct se_wwn *wwn = container_of(to_config_group(item), struct se_wwn,
param_group);
return sysfs_emit(page, "%u\n",
wwn->wwn_tf->tf_ops->default_submit_type);
}
CONFIGFS_ATTR_RO(target_fabric_wwn_, default_submit_type);
static ssize_t
target_fabric_wwn_direct_submit_supported_show(struct config_item *item,
char *page)
{
struct se_wwn *wwn = container_of(to_config_group(item), struct se_wwn,
param_group);
return sysfs_emit(page, "%u\n",
wwn->wwn_tf->tf_ops->direct_submit_supp);
}
CONFIGFS_ATTR_RO(target_fabric_wwn_, direct_submit_supported);
static struct configfs_attribute *target_fabric_wwn_param_attrs[] = {
&target_fabric_wwn_attr_cmd_completion_affinity,
&target_fabric_wwn_attr_default_submit_type,
&target_fabric_wwn_attr_direct_submit_supported,
NULL,
};


@ -1576,17 +1576,39 @@ target_cmd_parse_cdb(struct se_cmd *cmd)
}
EXPORT_SYMBOL(target_cmd_parse_cdb);
/*
* Used by fabric module frontends to queue tasks directly.
* May only be used from process context.
*/
int transport_handle_cdb_direct(
struct se_cmd *cmd)
static int __target_submit(struct se_cmd *cmd)
{
sense_reason_t ret;
might_sleep();
/*
* Check if we need to delay processing because of ALUA
* Active/NonOptimized primary access state.
*/
core_alua_check_nonop_delay(cmd);
if (cmd->t_data_nents != 0) {
/*
* This is primarily a hack for udev and tcm loop which sends
* INQUIRYs with a single page and expects the data to be
* cleared.
*/
if (!(cmd->se_cmd_flags & SCF_SCSI_DATA_CDB) &&
cmd->data_direction == DMA_FROM_DEVICE) {
struct scatterlist *sgl = cmd->t_data_sg;
unsigned char *buf = NULL;
BUG_ON(!sgl);
buf = kmap_local_page(sg_page(sgl));
if (buf) {
memset(buf + sgl->offset, 0, sgl->length);
kunmap_local(buf);
}
}
}
if (!cmd->se_lun) {
dump_stack();
pr_err("cmd->se_lun is NULL\n");
@ -1614,7 +1636,6 @@ int transport_handle_cdb_direct(
transport_generic_request_failure(cmd, ret);
return 0;
}
EXPORT_SYMBOL(transport_handle_cdb_direct);
sense_reason_t
transport_generic_map_mem_to_cmd(struct se_cmd *cmd, struct scatterlist *sgl,
@ -1781,53 +1802,6 @@ int target_submit_prep(struct se_cmd *se_cmd, unsigned char *cdb,
}
EXPORT_SYMBOL_GPL(target_submit_prep);
/**
* target_submit - perform final initialization and submit cmd to LIO core
* @se_cmd: command descriptor to submit
*
* target_submit_prep must have been called on the cmd, and this must be
* called from process context.
*/
void target_submit(struct se_cmd *se_cmd)
{
struct scatterlist *sgl = se_cmd->t_data_sg;
unsigned char *buf = NULL;
might_sleep();
if (se_cmd->t_data_nents != 0) {
BUG_ON(!sgl);
/*
* A work-around for tcm_loop as some userspace code via
* scsi-generic do not memset their associated read buffers,
* so go ahead and do that here for type non-data CDBs. Also
* note that this is currently guaranteed to be a single SGL
* for this case by target core in target_setup_cmd_from_cdb()
* -> transport_generic_cmd_sequencer().
*/
if (!(se_cmd->se_cmd_flags & SCF_SCSI_DATA_CDB) &&
se_cmd->data_direction == DMA_FROM_DEVICE) {
if (sgl)
buf = kmap(sg_page(sgl)) + sgl->offset;
if (buf) {
memset(buf, 0, sgl->length);
kunmap(sg_page(sgl));
}
}
}
/*
* Check if we need to delay processing because of ALUA
* Active/NonOptimized primary access state.
*/
core_alua_check_nonop_delay(se_cmd);
transport_handle_cdb_direct(se_cmd);
}
EXPORT_SYMBOL_GPL(target_submit);
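One detail of the move into __target_submit() above: the INQUIRY-clearing hack now uses kmap_local_page()/kunmap_local() instead of kmap()/kunmap(). A minimal sketch of that mapping pattern — the helper name is hypothetical; kmap_local_page() cannot fail for a valid page, so the NULL check the old code carried is unnecessary:

#include <linux/highmem.h>
#include <linux/scatterlist.h>
#include <linux/string.h>

/* Zero the payload of a single-entry scatterlist. */
static void zero_first_sg(struct scatterlist *sgl)
{
        unsigned char *buf = kmap_local_page(sg_page(sgl));

        memset(buf + sgl->offset, 0, sgl->length);
        kunmap_local(buf);      /* CPU-local mapping; unmap in LIFO order */
}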
/**
* target_submit_cmd - lookup unpacked lun and submit uninitialized se_cmd
*
@ -1923,7 +1897,7 @@ void target_queued_submit_work(struct work_struct *work)
se_plug = target_plug_device(se_dev);
}
target_submit(se_cmd);
__target_submit(se_cmd);
}
if (se_plug)
@ -1934,7 +1908,7 @@ void target_queued_submit_work(struct work_struct *work)
* target_queue_submission - queue the cmd to run on the LIO workqueue
* @se_cmd: command descriptor to submit
*/
void target_queue_submission(struct se_cmd *se_cmd)
static void target_queue_submission(struct se_cmd *se_cmd)
{
struct se_device *se_dev = se_cmd->se_dev;
int cpu = se_cmd->cpuid;
@ -1944,7 +1918,35 @@ void target_queue_submission(struct se_cmd *se_cmd)
llist_add(&se_cmd->se_cmd_list, &sq->cmd_list);
queue_work_on(cpu, target_submission_wq, &sq->work);
}
EXPORT_SYMBOL_GPL(target_queue_submission);
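target_queue_submission() above is the producer half of a lock-free, per-CPU handoff: commands are pushed onto a per-CPU llist and a work item is kicked on that CPU; the consumer, target_queued_submit_work() shown earlier, drains with llist_del_all(). A standalone sketch of the shape, with hypothetical names — note that llist_del_all() hands back entries newest-first, so a drain that must preserve submission order reverses the list:

#include <linux/llist.h>
#include <linux/workqueue.h>

struct my_queue {
        struct llist_head list;
        struct work_struct work;
};

struct my_item {
        struct llist_node node;
};

static void my_producer(struct my_queue *q, struct my_item *item)
{
        llist_add(&item->node, &q->list);       /* lock-free push */
        schedule_work(&q->work);  /* the real code uses queue_work_on(cpu, ...) */
}

static void my_drain(struct work_struct *work)
{
        struct my_queue *q = container_of(work, struct my_queue, work);
        struct llist_node *node = llist_del_all(&q->list);

        node = llist_reverse_order(node);       /* restore FIFO order */
        while (node) {
                struct my_item *item = llist_entry(node, struct my_item, node);

                node = node->next;
                /* ... submit item here ... */
                (void)item;
        }
}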
/**
* target_submit - perform final initialization and submit cmd to LIO core
* @se_cmd: command descriptor to submit
*
* target_submit_prep or something similar must have been called on the cmd,
* and this must be called from process context.
*/
int target_submit(struct se_cmd *se_cmd)
{
const struct target_core_fabric_ops *tfo = se_cmd->se_sess->se_tpg->se_tpg_tfo;
struct se_dev_attrib *da = &se_cmd->se_dev->dev_attrib;
u8 submit_type;
if (da->submit_type == TARGET_FABRIC_DEFAULT_SUBMIT)
submit_type = tfo->default_submit_type;
else if (da->submit_type == TARGET_DIRECT_SUBMIT &&
tfo->direct_submit_supp)
submit_type = TARGET_DIRECT_SUBMIT;
else
submit_type = TARGET_QUEUE_SUBMIT;
if (submit_type == TARGET_DIRECT_SUBMIT)
return __target_submit(se_cmd);
target_queue_submission(se_cmd);
return 0;
}
EXPORT_SYMBOL_GPL(target_submit);
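From a fabric driver's point of view the reworked flow is unchanged apart from target_submit() now making the direct-vs-queued decision internally. A rough sketch of the call sequence — the my_cmd container and its fields are hypothetical, and error paths are elided:

#include <target/target_core_fabric.h>

static void my_fabric_queue_cmd(struct my_cmd *cmd)
{
        struct se_cmd *se_cmd = &cmd->se_cmd;

        if (target_init_cmd(se_cmd, cmd->se_sess, cmd->sense_buf,
                            cmd->unpacked_lun, cmd->data_len, TCM_SIMPLE_TAG,
                            cmd->data_dir, TARGET_SCF_ACK_KREF) < 0)
                return;         /* session is being torn down */

        if (target_submit_prep(se_cmd, cmd->cdb, cmd->sgl, cmd->sgl_count,
                               NULL, 0, NULL, 0, GFP_KERNEL))
                return;         /* on failure, prep already released the cmd */

        /*
         * target_submit() now picks direct or workqueue submission based
         * on the device's submit_type and the fabric's default_submit_type
         * and direct_submit_supp settings.
         */
        target_submit(se_cmd);
}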
static void target_complete_tmr_failure(struct work_struct *work)
{


@ -201,7 +201,7 @@ struct tcmu_tmr {
uint8_t tmr_type;
uint32_t tmr_cmd_cnt;
int16_t tmr_cmd_ids[];
int16_t tmr_cmd_ids[] __counted_by(tmr_cmd_cnt);
};
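The __counted_by() annotation above tells the compiler (and CONFIG_FORTIFY_SOURCE via __builtin_dynamic_object_size()) that tmr_cmd_ids holds tmr_cmd_cnt elements, enabling runtime bounds checking on the flexible array. A sketch of the allocation pattern this implies — the helper is illustrative, not from the patch; the key point is that the counter must be assigned before the array is indexed:

#include <linux/overflow.h>
#include <linux/slab.h>

static struct tcmu_tmr *my_alloc_tmr(uint8_t type, const int16_t *ids,
                                     uint32_t cnt)
{
        struct tcmu_tmr *tmr;
        uint32_t i;

        tmr = kzalloc(struct_size(tmr, tmr_cmd_ids, cnt), GFP_NOIO);
        if (!tmr)
                return NULL;

        tmr->tmr_type = type;
        tmr->tmr_cmd_cnt = cnt;         /* set the bound before indexing */
        for (i = 0; i < cnt; i++)
                tmr->tmr_cmd_ids[i] = ids[i];
        return tmr;
}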
/*


@ -432,6 +432,9 @@ static const struct target_core_fabric_ops ft_fabric_ops = {
.tfc_wwn_attrs = ft_wwn_attrs,
.tfc_tpg_nacl_base_attrs = ft_nacl_base_attrs,
.default_submit_type = TARGET_DIRECT_SUBMIT,
.direct_submit_supp = 1,
};
static struct notifier_block ft_notifier = {


@ -20,6 +20,7 @@
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/pm_opp.h>
#include <linux/regulator/consumer.h>
#include <linux/sched/clock.h>
#include <linux/iopoll.h>
@ -274,8 +275,8 @@ static inline void ufshcd_add_delay_before_dme_cmd(struct ufs_hba *hba);
static int ufshcd_host_reset_and_restore(struct ufs_hba *hba);
static void ufshcd_resume_clkscaling(struct ufs_hba *hba);
static void ufshcd_suspend_clkscaling(struct ufs_hba *hba);
static void __ufshcd_suspend_clkscaling(struct ufs_hba *hba);
static int ufshcd_scale_clks(struct ufs_hba *hba, bool scale_up);
static int ufshcd_scale_clks(struct ufs_hba *hba, unsigned long freq,
bool scale_up);
static irqreturn_t ufshcd_intr(int irq, void *__hba);
static int ufshcd_change_power_mode(struct ufs_hba *hba,
struct ufs_pa_layer_attr *pwr_mode);
@ -447,8 +448,8 @@ static void ufshcd_add_command_trace(struct ufs_hba *hba, unsigned int tag,
} else {
doorbell = ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_DOOR_BELL);
}
trace_ufshcd_command(dev_name(hba->dev), str_t, tag,
doorbell, hwq_id, transfer_len, intr, lba, opcode, group_id);
trace_ufshcd_command(cmd->device, str_t, tag, doorbell, hwq_id,
transfer_len, intr, lba, opcode, group_id);
}
static void ufshcd_print_clk_freqs(struct ufs_hba *hba)
@ -1062,14 +1063,68 @@ static int ufshcd_set_clk_freq(struct ufs_hba *hba, bool scale_up)
return ret;
}
int ufshcd_opp_config_clks(struct device *dev, struct opp_table *opp_table,
struct dev_pm_opp *opp, void *data,
bool scaling_down)
{
struct ufs_hba *hba = dev_get_drvdata(dev);
struct list_head *head = &hba->clk_list_head;
struct ufs_clk_info *clki;
unsigned long freq;
u8 idx = 0;
int ret;
list_for_each_entry(clki, head, list) {
if (!IS_ERR_OR_NULL(clki->clk)) {
freq = dev_pm_opp_get_freq_indexed(opp, idx++);
/* Do not set rate for clocks having frequency as 0 */
if (!freq)
continue;
ret = clk_set_rate(clki->clk, freq);
if (ret) {
dev_err(dev, "%s: %s clk set rate(%ldHz) failed, %d\n",
__func__, clki->name, freq, ret);
return ret;
}
trace_ufshcd_clk_scaling(dev_name(dev),
(scaling_down ? "scaled down" : "scaled up"),
clki->name, hba->clk_scaling.target_freq, freq);
}
}
return 0;
}
EXPORT_SYMBOL_GPL(ufshcd_opp_config_clks);
static int ufshcd_opp_set_rate(struct ufs_hba *hba, unsigned long freq)
{
struct dev_pm_opp *opp;
int ret;
opp = dev_pm_opp_find_freq_floor_indexed(hba->dev,
&freq, 0);
if (IS_ERR(opp))
return PTR_ERR(opp);
ret = dev_pm_opp_set_opp(hba->dev, opp);
dev_pm_opp_put(opp);
return ret;
}
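dev_pm_opp_find_freq_floor_indexed() rounds the requested frequency down to the nearest entry of the indexed clock's OPP table and writes the chosen value back through the pointer, which is why passing ULONG_MAX elsewhere in this series selects the highest OPP. A small illustration with a hypothetical table of 75/150/300 MHz:

#include <linux/pm_opp.h>

/* assume: struct device *dev with an OPP table of 75/150/300 MHz */
unsigned long freq = 200 * 1000 * 1000;         /* ask for 200 MHz */
struct dev_pm_opp *opp;

opp = dev_pm_opp_find_freq_floor_indexed(dev, &freq, 0);
if (!IS_ERR(opp)) {
        /* freq has been clamped down to 150000000 at this point */
        dev_pm_opp_put(opp);
}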
/**
* ufshcd_scale_clks - scale up or scale down UFS controller clocks
* @hba: per adapter instance
* @freq: frequency to scale
* @scale_up: True if scaling up and false if scaling down
*
* Return: 0 if successful; < 0 upon failure.
*/
static int ufshcd_scale_clks(struct ufs_hba *hba, bool scale_up)
static int ufshcd_scale_clks(struct ufs_hba *hba, unsigned long freq,
bool scale_up)
{
int ret = 0;
ktime_t start = ktime_get();
@ -1078,13 +1133,21 @@ static int ufshcd_scale_clks(struct ufs_hba *hba, bool scale_up)
if (ret)
goto out;
ret = ufshcd_set_clk_freq(hba, scale_up);
if (hba->use_pm_opp)
ret = ufshcd_opp_set_rate(hba, freq);
else
ret = ufshcd_set_clk_freq(hba, scale_up);
if (ret)
goto out;
ret = ufshcd_vops_clk_scale_notify(hba, scale_up, POST_CHANGE);
if (ret)
ufshcd_set_clk_freq(hba, !scale_up);
if (ret) {
if (hba->use_pm_opp)
ufshcd_opp_set_rate(hba,
hba->devfreq->previous_freq);
else
ufshcd_set_clk_freq(hba, !scale_up);
}
out:
trace_ufshcd_profile_clk_scaling(dev_name(hba->dev),
@ -1096,12 +1159,13 @@ static int ufshcd_scale_clks(struct ufs_hba *hba, bool scale_up)
/**
* ufshcd_is_devfreq_scaling_required - check if scaling is required or not
* @hba: per adapter instance
* @freq: frequency to scale
* @scale_up: True if scaling up and false if scaling down
*
* Return: true if scaling is required, false otherwise.
*/
static bool ufshcd_is_devfreq_scaling_required(struct ufs_hba *hba,
bool scale_up)
unsigned long freq, bool scale_up)
{
struct ufs_clk_info *clki;
struct list_head *head = &hba->clk_list_head;
@ -1109,6 +1173,9 @@ static bool ufshcd_is_devfreq_scaling_required(struct ufs_hba *hba,
if (list_empty(head))
return false;
if (hba->use_pm_opp)
return freq != hba->clk_scaling.target_freq;
list_for_each_entry(clki, head, list) {
if (!IS_ERR_OR_NULL(clki->clk)) {
if (scale_up && clki->max_freq) {
@ -1304,12 +1371,14 @@ static void ufshcd_clock_scaling_unprepare(struct ufs_hba *hba, int err, bool sc
/**
* ufshcd_devfreq_scale - scale up/down UFS clocks and gear
* @hba: per adapter instance
* @freq: frequency to scale
* @scale_up: True for scaling up and false for scaling down
*
* Return: 0 for success; -EBUSY if scaling can't happen at this time; non-zero
* for any other errors.
*/
static int ufshcd_devfreq_scale(struct ufs_hba *hba, bool scale_up)
static int ufshcd_devfreq_scale(struct ufs_hba *hba, unsigned long freq,
bool scale_up)
{
int ret = 0;
@ -1324,7 +1393,7 @@ static int ufshcd_devfreq_scale(struct ufs_hba *hba, bool scale_up)
goto out_unprepare;
}
ret = ufshcd_scale_clks(hba, scale_up);
ret = ufshcd_scale_clks(hba, freq, scale_up);
if (ret) {
if (!scale_up)
ufshcd_scale_gear(hba, true);
@ -1335,7 +1404,8 @@ static int ufshcd_devfreq_scale(struct ufs_hba *hba, bool scale_up)
if (scale_up) {
ret = ufshcd_scale_gear(hba, true);
if (ret) {
ufshcd_scale_clks(hba, false);
ufshcd_scale_clks(hba, hba->devfreq->previous_freq,
false);
goto out_unprepare;
}
}
@ -1357,9 +1427,10 @@ static void ufshcd_clk_scaling_suspend_work(struct work_struct *work)
return;
}
hba->clk_scaling.is_suspended = true;
hba->clk_scaling.window_start_t = 0;
spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
__ufshcd_suspend_clkscaling(hba);
devfreq_suspend_device(hba->devfreq);
}
static void ufshcd_clk_scaling_resume_work(struct work_struct *work)
@ -1393,15 +1464,35 @@ static int ufshcd_devfreq_target(struct device *dev,
if (!ufshcd_is_clkscaling_supported(hba))
return -EINVAL;
clki = list_first_entry(&hba->clk_list_head, struct ufs_clk_info, list);
/* Override with the closest supported frequency */
*freq = (unsigned long) clk_round_rate(clki->clk, *freq);
if (hba->use_pm_opp) {
struct dev_pm_opp *opp;
/* Get the recommended frequency from OPP framework */
opp = devfreq_recommended_opp(dev, freq, flags);
if (IS_ERR(opp))
return PTR_ERR(opp);
dev_pm_opp_put(opp);
} else {
/* Override with the closest supported frequency */
clki = list_first_entry(&hba->clk_list_head, struct ufs_clk_info,
list);
*freq = (unsigned long) clk_round_rate(clki->clk, *freq);
}
spin_lock_irqsave(hba->host->host_lock, irq_flags);
if (ufshcd_eh_in_progress(hba)) {
spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
return 0;
}
/* Skip scaling clock when clock scaling is suspended */
if (hba->clk_scaling.is_suspended) {
spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
dev_warn(hba->dev, "clock scaling is suspended, skip");
return 0;
}
if (!hba->clk_scaling.active_reqs)
sched_clk_scaling_suspend_work = true;
@ -1410,12 +1501,17 @@ static int ufshcd_devfreq_target(struct device *dev,
goto out;
}
/* Decide based on the rounded-off frequency and update */
scale_up = *freq == clki->max_freq;
if (!scale_up)
/* Decide based on the target or rounded-off frequency and update */
if (hba->use_pm_opp)
scale_up = *freq > hba->clk_scaling.target_freq;
else
scale_up = *freq == clki->max_freq;
if (!hba->use_pm_opp && !scale_up)
*freq = clki->min_freq;
/* Update the frequency */
if (!ufshcd_is_devfreq_scaling_required(hba, scale_up)) {
if (!ufshcd_is_devfreq_scaling_required(hba, *freq, scale_up)) {
spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
ret = 0;
goto out; /* no state change required */
@ -1423,14 +1519,16 @@ static int ufshcd_devfreq_target(struct device *dev,
spin_unlock_irqrestore(hba->host->host_lock, irq_flags);
start = ktime_get();
ret = ufshcd_devfreq_scale(hba, scale_up);
ret = ufshcd_devfreq_scale(hba, *freq, scale_up);
if (!ret)
hba->clk_scaling.target_freq = *freq;
trace_ufshcd_profile_clk_scaling(dev_name(hba->dev),
(scale_up ? "up" : "down"),
ktime_to_us(ktime_sub(ktime_get(), start)), ret);
out:
if (sched_clk_scaling_suspend_work)
if (sched_clk_scaling_suspend_work && !scale_up)
queue_work(hba->clk_scaling.workq,
&hba->clk_scaling.suspend_work);
@ -1443,8 +1541,6 @@ static int ufshcd_devfreq_get_dev_status(struct device *dev,
struct ufs_hba *hba = dev_get_drvdata(dev);
struct ufs_clk_scaling *scaling = &hba->clk_scaling;
unsigned long flags;
struct list_head *clk_list = &hba->clk_list_head;
struct ufs_clk_info *clki;
ktime_t curr_t;
if (!ufshcd_is_clkscaling_supported(hba))
@ -1457,17 +1553,24 @@ static int ufshcd_devfreq_get_dev_status(struct device *dev,
if (!scaling->window_start_t)
goto start_window;
clki = list_first_entry(clk_list, struct ufs_clk_info, list);
/*
 * If the current frequency is 0, the ondemand governor assumes no
 * initial frequency has been set and always requests the maximum
 * frequency.
 */
stat->current_frequency = clki->curr_freq;
if (hba->use_pm_opp) {
stat->current_frequency = hba->clk_scaling.target_freq;
} else {
struct list_head *clk_list = &hba->clk_list_head;
struct ufs_clk_info *clki;
clki = list_first_entry(clk_list, struct ufs_clk_info, list);
stat->current_frequency = clki->curr_freq;
}
if (scaling->is_busy_started)
scaling->tot_busy_t += ktime_us_delta(curr_t,
scaling->busy_start_t);
stat->total_time = ktime_us_delta(curr_t, scaling->window_start_t);
stat->busy_time = scaling->tot_busy_t;
start_window:
@ -1496,9 +1599,11 @@ static int ufshcd_devfreq_init(struct ufs_hba *hba)
if (list_empty(clk_list))
return 0;
clki = list_first_entry(clk_list, struct ufs_clk_info, list);
dev_pm_opp_add(hba->dev, clki->min_freq, 0);
dev_pm_opp_add(hba->dev, clki->max_freq, 0);
if (!hba->use_pm_opp) {
clki = list_first_entry(clk_list, struct ufs_clk_info, list);
dev_pm_opp_add(hba->dev, clki->min_freq, 0);
dev_pm_opp_add(hba->dev, clki->max_freq, 0);
}
ufshcd_vops_config_scaling_param(hba, &hba->vps->devfreq_profile,
&hba->vps->ondemand_data);
@ -1510,8 +1615,10 @@ static int ufshcd_devfreq_init(struct ufs_hba *hba)
ret = PTR_ERR(devfreq);
dev_err(hba->dev, "Unable to register with devfreq %d\n", ret);
dev_pm_opp_remove(hba->dev, clki->min_freq);
dev_pm_opp_remove(hba->dev, clki->max_freq);
if (!hba->use_pm_opp) {
dev_pm_opp_remove(hba->dev, clki->min_freq);
dev_pm_opp_remove(hba->dev, clki->max_freq);
}
return ret;
}
@ -1523,7 +1630,6 @@ static int ufshcd_devfreq_init(struct ufs_hba *hba)
static void ufshcd_devfreq_remove(struct ufs_hba *hba)
{
struct list_head *clk_list = &hba->clk_list_head;
struct ufs_clk_info *clki;
if (!hba->devfreq)
return;
@ -1531,19 +1637,13 @@ static void ufshcd_devfreq_remove(struct ufs_hba *hba)
devfreq_remove_device(hba->devfreq);
hba->devfreq = NULL;
clki = list_first_entry(clk_list, struct ufs_clk_info, list);
dev_pm_opp_remove(hba->dev, clki->min_freq);
dev_pm_opp_remove(hba->dev, clki->max_freq);
}
if (!hba->use_pm_opp) {
struct ufs_clk_info *clki;
static void __ufshcd_suspend_clkscaling(struct ufs_hba *hba)
{
unsigned long flags;
devfreq_suspend_device(hba->devfreq);
spin_lock_irqsave(hba->host->host_lock, flags);
hba->clk_scaling.window_start_t = 0;
spin_unlock_irqrestore(hba->host->host_lock, flags);
clki = list_first_entry(clk_list, struct ufs_clk_info, list);
dev_pm_opp_remove(hba->dev, clki->min_freq);
dev_pm_opp_remove(hba->dev, clki->max_freq);
}
}
static void ufshcd_suspend_clkscaling(struct ufs_hba *hba)
@ -1558,11 +1658,12 @@ static void ufshcd_suspend_clkscaling(struct ufs_hba *hba)
if (!hba->clk_scaling.is_suspended) {
suspend = true;
hba->clk_scaling.is_suspended = true;
hba->clk_scaling.window_start_t = 0;
}
spin_unlock_irqrestore(hba->host->host_lock, flags);
if (suspend)
__ufshcd_suspend_clkscaling(hba);
devfreq_suspend_device(hba->devfreq);
}
static void ufshcd_resume_clkscaling(struct ufs_hba *hba)
@ -1618,7 +1719,7 @@ static ssize_t ufshcd_clkscale_enable_store(struct device *dev,
ufshcd_resume_clkscaling(hba);
} else {
ufshcd_suspend_clkscaling(hba);
err = ufshcd_devfreq_scale(hba, true);
err = ufshcd_devfreq_scale(hba, ULONG_MAX, true);
if (err)
dev_err(hba->dev, "%s: failed to scale clocks up %d\n",
__func__, err);
@ -2165,7 +2266,8 @@ void ufshcd_send_command(struct ufs_hba *hba, unsigned int task_tag,
lrbp->compl_time_stamp = ktime_set(0, 0);
lrbp->compl_time_stamp_local_clock = 0;
ufshcd_add_command_trace(hba, task_tag, UFS_CMD_SEND);
ufshcd_clk_scaling_start_busy(hba);
if (lrbp->cmd)
ufshcd_clk_scaling_start_busy(hba);
if (unlikely(ufshcd_should_inform_monitor(hba, lrbp)))
ufshcd_start_monitor(hba, lrbp);
@ -2304,7 +2406,7 @@ static inline bool ufshcd_ready_for_uic_cmd(struct ufs_hba *hba)
int ret = read_poll_timeout(ufshcd_readl, val, val & UIC_COMMAND_READY,
500, UIC_CMD_TIMEOUT * 1000, false, hba,
REG_CONTROLLER_STATUS);
return ret == 0 ? true : false;
return ret == 0;
}
/**
@ -2715,27 +2817,23 @@ static int ufshcd_compose_devman_upiu(struct ufs_hba *hba,
* for SCSI Purposes
* @hba: per adapter instance
* @lrbp: pointer to local reference block
*
* Return: 0 upon success; < 0 upon failure.
*/
static int ufshcd_comp_scsi_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
static void ufshcd_comp_scsi_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
{
struct request *rq = scsi_cmd_to_rq(lrbp->cmd);
unsigned int ioprio_class = IOPRIO_PRIO_CLASS(req_get_ioprio(rq));
u8 upiu_flags;
int ret = 0;
if (hba->ufs_version <= ufshci_version(1, 1))
lrbp->command_type = UTP_CMD_TYPE_SCSI;
else
lrbp->command_type = UTP_CMD_TYPE_UFS_STORAGE;
if (likely(lrbp->cmd)) {
ufshcd_prepare_req_desc_hdr(lrbp, &upiu_flags, lrbp->cmd->sc_data_direction, 0);
ufshcd_prepare_utp_scsi_cmd_upiu(lrbp, upiu_flags);
} else {
ret = -EINVAL;
}
return ret;
ufshcd_prepare_req_desc_hdr(lrbp, &upiu_flags,
lrbp->cmd->sc_data_direction, 0);
if (ioprio_class == IOPRIO_CLASS_RT)
upiu_flags |= UPIU_CMD_FLAGS_CP;
ufshcd_prepare_utp_scsi_cmd_upiu(lrbp, upiu_flags);
}
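The new IOPRIO_CLASS_RT check above maps the block layer's realtime I/O priority class onto the UPIU CP (command priority) flag. For reference, userspace opts a task into that class via ioprio_set(2); a minimal illustrative caller follows (the RT class typically requires CAP_SYS_ADMIN or CAP_SYS_NICE):

#include <linux/ioprio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
        /* Realtime class, highest level (0); applies to this process. */
        int prio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_RT, 0);

        return syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0, prio) ? 1 : 0;
}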
/**
@ -2823,8 +2921,6 @@ static int ufshcd_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)
int err = 0;
struct ufs_hw_queue *hwq = NULL;
WARN_ONCE(tag < 0 || tag >= hba->nutrs, "Invalid tag %d\n", tag);
switch (hba->ufshcd_state) {
case UFSHCD_STATE_OPERATIONAL:
break;
@ -3632,7 +3728,7 @@ int ufshcd_read_string_desc(struct ufs_hba *hba, u8 desc_index,
*/
ret = utf16s_to_utf8s(uc_str->uc,
uc_str->len - QUERY_DESC_HDR_SIZE,
UTF16_BIG_ENDIAN, str, ascii_len);
UTF16_BIG_ENDIAN, str, ascii_len - 1);
/* replace non-printable or non-ASCII characters with spaces */
for (i = 0; i < ret; i++)
@ -5098,8 +5194,7 @@ static int ufshcd_slave_configure(struct scsi_device *sdev)
struct request_queue *q = sdev->request_queue;
blk_queue_update_dma_pad(q, PRDT_DATA_BYTE_COUNT_PAD - 1);
if (hba->quirks & UFSHCD_QUIRK_4KB_DMA_ALIGNMENT)
blk_queue_update_dma_alignment(q, SZ_4K - 1);
/*
* Block runtime-pm until all consumers are added.
* Refer ufshcd_setup_links().
@ -5115,6 +5210,9 @@ static int ufshcd_slave_configure(struct scsi_device *sdev)
*/
sdev->silence_suspend = 1;
if (hba->vops && hba->vops->config_scsi_dev)
hba->vops->config_scsi_dev(sdev);
ufshcd_crypto_register(hba, q);
return 0;
@ -5405,7 +5503,6 @@ void ufshcd_compl_one_cqe(struct ufs_hba *hba, int task_tag,
lrbp->utr_descriptor_ptr->header.ocs = ocs;
}
complete(hba->dev_cmd.complete);
ufshcd_clk_scaling_update_busy(hba);
}
}
}
@ -5518,13 +5615,13 @@ static void ufshcd_mcq_compl_pending_transfer(struct ufs_hba *hba,
* For those cmds of which the cqes are not present
* in the cq, complete them explicitly.
*/
spin_lock_irqsave(&hwq->cq_lock, flags);
if (cmd && !test_bit(SCMD_STATE_COMPLETE, &cmd->state)) {
spin_lock_irqsave(&hwq->cq_lock, flags);
set_host_byte(cmd, DID_REQUEUE);
ufshcd_release_scsi_cmd(hba, lrbp);
scsi_done(cmd);
spin_unlock_irqrestore(&hwq->cq_lock, flags);
}
spin_unlock_irqrestore(&hwq->cq_lock, flags);
} else {
ufshcd_mcq_poll_cqe_lock(hba, hwq);
}
@ -6924,8 +7021,6 @@ static int __ufshcd_issue_tm_cmd(struct ufs_hba *hba,
spin_lock_irqsave(host->host_lock, flags);
task_tag = req->tag;
WARN_ONCE(task_tag < 0 || task_tag >= hba->nutmrs, "Invalid tag %d\n",
task_tag);
hba->tmf_rqs[req->tag] = req;
treq->upiu_req.req_header.task_tag = task_tag;
@ -7499,8 +7594,6 @@ static int ufshcd_abort(struct scsi_cmnd *cmd)
bool outstanding;
u32 reg;
WARN_ONCE(tag < 0, "Invalid tag %d\n", tag);
ufshcd_hold(hba);
if (!is_mcq_enabled(hba)) {
@ -7627,7 +7720,7 @@ static int ufshcd_host_reset_and_restore(struct ufs_hba *hba)
hba->silence_err_logs = false;
/* scale up clocks to max frequency before full reinitialization */
ufshcd_scale_clks(hba, true);
ufshcd_scale_clks(hba, ULONG_MAX, true);
err = ufshcd_hba_enable(hba);
@ -7716,6 +7809,19 @@ static int ufshcd_eh_host_reset_handler(struct scsi_cmnd *cmd)
hba = shost_priv(cmd->device->host);
/*
* If runtime PM sent SSU and got a timeout, scsi_error_handler is
* stuck in this function waiting for flush_work(&hba->eh_work). And
* ufshcd_err_handler(eh_work) is stuck waiting for runtime PM. Do
* ufshcd_link_recovery instead of eh_work to prevent deadlock.
*/
if (hba->pm_op_in_progress) {
if (ufshcd_link_recovery(hba))
err = FAILED;
return err;
}
spin_lock_irqsave(hba->host->host_lock, flags);
hba->force_reset = true;
ufshcd_schedule_eh_work(hba);
@ -8723,7 +8829,8 @@ static int ufshcd_probe_hba(struct ufs_hba *hba, bool init_dev_params)
if (ret)
goto out;
if (hba->quirks & UFSHCD_QUIRK_REINIT_AFTER_MAX_GEAR_SWITCH) {
if (!hba->pm_op_in_progress &&
(hba->quirks & UFSHCD_QUIRK_REINIT_AFTER_MAX_GEAR_SWITCH)) {
/* Reset the device and controller before doing reinit */
ufshcd_device_reset(hba);
ufshcd_hba_stop(hba);
@ -9159,6 +9266,17 @@ static int ufshcd_init_clocks(struct ufs_hba *hba)
dev_dbg(dev, "%s: clk: %s, rate: %lu\n", __func__,
clki->name, clk_get_rate(clki->clk));
}
/* Set Max. frequency for all clocks */
if (hba->use_pm_opp) {
ret = ufshcd_opp_set_rate(hba, ULONG_MAX);
if (ret) {
dev_err(hba->dev, "%s: failed to set OPP: %d", __func__,
ret);
goto out;
}
}
out:
return ret;
}


@ -305,12 +305,11 @@ static int cdns_ufs_pltfrm_probe(struct platform_device *pdev)
*/
static int cdns_ufs_pltfrm_remove(struct platform_device *pdev)
static void cdns_ufs_pltfrm_remove(struct platform_device *pdev)
{
struct ufs_hba *hba = platform_get_drvdata(pdev);
ufshcd_remove(hba);
return 0;
}
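This and the following UFS platform drivers all follow the same kernel-wide conversion: the remove callback moves from .remove to .remove_new, which returns void because the driver core never acted on the old int return value anyway. The pattern, reduced to a hypothetical driver (names and vops are made up for illustration):

#include <linux/platform_device.h>
/* plus the UFS headers providing struct ufs_hba and ufshcd_remove() */

static int my_ufs_probe(struct platform_device *pdev)
{
        /* ... e.g. ufshcd_pltfrm_init(pdev, &my_vops) ... */
        return 0;
}

static void my_ufs_remove(struct platform_device *pdev)
{
        struct ufs_hba *hba = platform_get_drvdata(pdev);

        ufshcd_remove(hba);             /* no "return 0" anymore */
}

static struct platform_driver my_ufs_driver = {
        .probe          = my_ufs_probe,
        .remove_new     = my_ufs_remove,
        .driver         = {
                .name   = "my-ufshcd",
        },
};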
static const struct dev_pm_ops cdns_ufs_dev_pm_ops = {
@ -322,7 +321,7 @@ static const struct dev_pm_ops cdns_ufs_dev_pm_ops = {
static struct platform_driver cdns_ufs_pltfrm_driver = {
.probe = cdns_ufs_pltfrm_probe,
.remove = cdns_ufs_pltfrm_remove,
.remove_new = cdns_ufs_pltfrm_remove,
.driver = {
.name = "cdns-ufshcd",
.pm = &cdns_ufs_dev_pm_ops,


@ -74,14 +74,12 @@ static int tc_dwc_g210_pltfm_probe(struct platform_device *pdev)
* @pdev: pointer to platform device structure
*
*/
static int tc_dwc_g210_pltfm_remove(struct platform_device *pdev)
static void tc_dwc_g210_pltfm_remove(struct platform_device *pdev)
{
struct ufs_hba *hba = platform_get_drvdata(pdev);
pm_runtime_get_sync(&(pdev)->dev);
ufshcd_remove(hba);
return 0;
}
static const struct dev_pm_ops tc_dwc_g210_pltfm_pm_ops = {
@ -91,7 +89,7 @@ static const struct dev_pm_ops tc_dwc_g210_pltfm_pm_ops = {
static struct platform_driver tc_dwc_g210_pltfm_driver = {
.probe = tc_dwc_g210_pltfm_probe,
.remove = tc_dwc_g210_pltfm_remove,
.remove_new = tc_dwc_g210_pltfm_remove,
.driver = {
.name = "tc-dwc-g210-pltfm",
.pm = &tc_dwc_g210_pltfm_pm_ops,


@ -65,13 +65,11 @@ static int ti_j721e_ufs_probe(struct platform_device *pdev)
return ret;
}
static int ti_j721e_ufs_remove(struct platform_device *pdev)
static void ti_j721e_ufs_remove(struct platform_device *pdev)
{
of_platform_depopulate(&pdev->dev);
pm_runtime_put_sync(&pdev->dev);
pm_runtime_disable(&pdev->dev);
return 0;
}
static const struct of_device_id ti_j721e_ufs_of_match[] = {
@ -85,7 +83,7 @@ MODULE_DEVICE_TABLE(of, ti_j721e_ufs_of_match);
static struct platform_driver ti_j721e_ufs_driver = {
.probe = ti_j721e_ufs_probe,
.remove = ti_j721e_ufs_remove,
.remove_new = ti_j721e_ufs_remove,
.driver = {
.name = "ti-j721e-ufs",
.of_match_table = ti_j721e_ufs_of_match,


@ -1511,6 +1511,11 @@ static int fsd_ufs_pre_link(struct exynos_ufs *ufs)
return 0;
}
static void exynos_ufs_config_scsi_dev(struct scsi_device *sdev)
{
blk_queue_update_dma_alignment(sdev->request_queue, SZ_4K - 1);
}
static int fsd_ufs_post_link(struct exynos_ufs *ufs)
{
int i;
@ -1579,6 +1584,7 @@ static const struct ufs_hba_variant_ops ufs_hba_exynos_ops = {
.hibern8_notify = exynos_ufs_hibern8_notify,
.suspend = exynos_ufs_suspend,
.resume = exynos_ufs_resume,
.config_scsi_dev = exynos_ufs_config_scsi_dev,
};
static struct ufs_hba_variant_ops ufs_hba_exynosauto_vh_ops = {
@ -1605,7 +1611,7 @@ static int exynos_ufs_probe(struct platform_device *pdev)
return err;
}
static int exynos_ufs_remove(struct platform_device *pdev)
static void exynos_ufs_remove(struct platform_device *pdev)
{
struct ufs_hba *hba = platform_get_drvdata(pdev);
struct exynos_ufs *ufs = ufshcd_get_variant(hba);
@ -1615,8 +1621,6 @@ static int exynos_ufs_remove(struct platform_device *pdev)
phy_power_off(ufs->phy);
phy_exit(ufs->phy);
return 0;
}
static struct exynos_ufs_uic_attr exynos7_uic_attr = {
@ -1680,8 +1684,7 @@ static const struct exynos_ufs_drv_data exynos_ufs_drvs = {
UFSHCI_QUIRK_SKIP_RESET_INTR_AGGR |
UFSHCD_QUIRK_BROKEN_OCS_FATAL_ERROR |
UFSHCI_QUIRK_SKIP_MANUAL_WB_FLUSH_CTRL |
UFSHCD_QUIRK_SKIP_DEF_UNIPRO_TIMEOUT_SETTING |
UFSHCD_QUIRK_4KB_DMA_ALIGNMENT,
UFSHCD_QUIRK_SKIP_DEF_UNIPRO_TIMEOUT_SETTING,
.opts = EXYNOS_UFS_OPT_HAS_APB_CLK_CTRL |
EXYNOS_UFS_OPT_BROKEN_AUTO_CLK_CTRL |
EXYNOS_UFS_OPT_BROKEN_RX_SEL_IDX |
@ -1756,7 +1759,7 @@ static const struct dev_pm_ops exynos_ufs_pm_ops = {
static struct platform_driver exynos_ufs_pltform = {
.probe = exynos_ufs_probe,
.remove = exynos_ufs_remove,
.remove_new = exynos_ufs_remove,
.driver = {
.name = "exynos-ufshc",
.pm = &exynos_ufs_pm_ops,


@ -575,12 +575,11 @@ static int ufs_hisi_probe(struct platform_device *pdev)
return ufshcd_pltfrm_init(pdev, of_id->data);
}
static int ufs_hisi_remove(struct platform_device *pdev)
static void ufs_hisi_remove(struct platform_device *pdev)
{
struct ufs_hba *hba = platform_get_drvdata(pdev);
ufshcd_remove(hba);
return 0;
}
static const struct dev_pm_ops ufs_hisi_pm_ops = {
@ -592,7 +591,7 @@ static const struct dev_pm_ops ufs_hisi_pm_ops = {
static struct platform_driver ufs_hisi_pltform = {
.probe = ufs_hisi_probe,
.remove = ufs_hisi_remove,
.remove_new = ufs_hisi_remove,
.driver = {
.name = "ufshcd-hisi",
.pm = &ufs_hisi_pm_ops,


@ -806,7 +806,7 @@ static int ufs_mtk_vreg_fix_vcc(struct ufs_hba *hba)
return 0;
}
err = ufshcd_populate_vreg(dev, vcc_name, &info->vcc);
err = ufshcd_populate_vreg(dev, vcc_name, &info->vcc, false);
if (err)
return err;
@ -1748,13 +1748,12 @@ static int ufs_mtk_probe(struct platform_device *pdev)
*/
static int ufs_mtk_remove(struct platform_device *pdev)
static void ufs_mtk_remove(struct platform_device *pdev)
{
struct ufs_hba *hba = platform_get_drvdata(pdev);
pm_runtime_get_sync(&(pdev)->dev);
ufshcd_remove(hba);
return 0;
}
#ifdef CONFIG_PM_SLEEP
@ -1818,7 +1817,7 @@ static const struct dev_pm_ops ufs_mtk_pm_ops = {
static struct platform_driver ufs_mtk_pltform = {
.probe = ufs_mtk_probe,
.remove = ufs_mtk_remove,
.remove_new = ufs_mtk_remove,
.driver = {
.name = "ufshcd-mtk",
.pm = &ufs_mtk_pm_ops,


@ -93,8 +93,7 @@ static const struct __ufs_qcom_bw_table {
static struct ufs_qcom_host *ufs_qcom_hosts[MAX_UFS_QCOM_HOSTS];
static void ufs_qcom_get_default_testbus_cfg(struct ufs_qcom_host *host);
static int ufs_qcom_set_dme_vs_core_clk_ctrl_clear_div(struct ufs_hba *hba,
u32 clk_cycles);
static int ufs_qcom_set_core_clk_ctrl(struct ufs_hba *hba, bool is_scale_up);
static struct ufs_qcom_host *rcdev_to_ufs_host(struct reset_controller_dev *rcd)
{
@ -460,7 +459,7 @@ static int ufs_qcom_power_up_sequence(struct ufs_hba *hba)
return ret;
}
phy_set_mode_ext(phy, PHY_MODE_UFS_HS_B, host->hs_gear);
phy_set_mode_ext(phy, PHY_MODE_UFS_HS_B, host->phy_gear);
/* power on phy - start serdes and phy's power and clocks */
ret = phy_power_on(phy);
@ -528,11 +527,20 @@ static int ufs_qcom_hce_enable_notify(struct ufs_hba *hba,
return err;
}
/*
/**
* ufs_qcom_cfg_timers - Configure ufs qcom cfg timers
*
* @hba: host controller instance
* @gear: Current operating gear
* @hs: current power mode
* @rate: current operating rate (A or B)
* @update_link_startup_timer: indicates if link startup is ongoing
* @is_pre_scale_up: flag indicating the pre-scale-up condition
* Return: zero for success and non-zero in case of a failure.
*/
static int ufs_qcom_cfg_timers(struct ufs_hba *hba, u32 gear,
u32 hs, u32 rate, bool update_link_startup_timer)
u32 hs, u32 rate, bool update_link_startup_timer,
bool is_pre_scale_up)
{
struct ufs_qcom_host *host = ufshcd_get_variant(hba);
struct ufs_clk_info *clki;
@ -563,11 +571,14 @@ static int ufs_qcom_cfg_timers(struct ufs_hba *hba, u32 gear,
/*
* The Qunipro controller does not use following registers:
* SYS1CLK_1US_REG, TX_SYMBOL_CLK_1US_REG, CLK_NS_REG &
* UFS_REG_PA_LINK_STARTUP_TIMER
* But UTP controller uses SYS1CLK_1US_REG register for Interrupt
* UFS_REG_PA_LINK_STARTUP_TIMER.
* However UTP controller uses SYS1CLK_1US_REG register for Interrupt
* Aggregation logic.
*/
if (ufs_qcom_cap_qunipro(host) && !ufshcd_is_intr_aggr_allowed(hba))
* It is mandatory to write SYS1CLK_1US_REG register on UFS host
* controller V4.0.0 onwards.
*/
if (host->hw_ver.major < 4 && ufs_qcom_cap_qunipro(host) &&
!ufshcd_is_intr_aggr_allowed(hba))
return 0;
if (gear == 0) {
@ -576,8 +587,14 @@ static int ufs_qcom_cfg_timers(struct ufs_hba *hba, u32 gear,
}
list_for_each_entry(clki, &hba->clk_list_head, list) {
if (!strcmp(clki->name, "core_clk"))
core_clk_rate = clk_get_rate(clki->clk);
if (!strcmp(clki->name, "core_clk")) {
if (is_pre_scale_up)
core_clk_rate = clki->max_freq;
else
core_clk_rate = clk_get_rate(clki->clk);
break;
}
}
/* If frequency is smaller than 1MHz, set to 1MHz */
@ -679,20 +696,17 @@ static int ufs_qcom_link_startup_notify(struct ufs_hba *hba,
switch (status) {
case PRE_CHANGE:
if (ufs_qcom_cfg_timers(hba, UFS_PWM_G1, SLOWAUTO_MODE,
0, true)) {
0, true, false)) {
dev_err(hba->dev, "%s: ufs_qcom_cfg_timers() failed\n",
__func__);
return -EINVAL;
}
if (ufs_qcom_cap_qunipro(host))
/*
* set unipro core clock cycles to 150 & clear clock
* divider
*/
err = ufs_qcom_set_dme_vs_core_clk_ctrl_clear_div(hba,
150);
if (ufs_qcom_cap_qunipro(host)) {
err = ufs_qcom_set_core_clk_ctrl(hba, true);
if (err)
dev_err(hba->dev, "cfg core clk ctrl failed\n");
}
/*
* Some UFS devices (and maybe the host) have issues if LCC is
* enabled. So we are setting PA_Local_TX_LCC_Enable to 0
@ -909,8 +923,13 @@ static int ufs_qcom_pwr_change_notify(struct ufs_hba *hba,
return ret;
}
/* Use the agreed gear */
host->hs_gear = dev_req_params->gear_tx;
/*
* Update phy_gear only when the gears are scaled to a higher value. This is
* because the PHY gear settings are backwards compatible, and we only need
* to change them while scaling to higher gears.
*/
if (dev_req_params->gear_tx > host->phy_gear)
host->phy_gear = dev_req_params->gear_tx;
/* enable the device ref clock before changing to HS mode */
if (!ufshcd_is_hs_mode(&hba->pwr_info) &&
@ -926,7 +945,7 @@ static int ufs_qcom_pwr_change_notify(struct ufs_hba *hba,
case POST_CHANGE:
if (ufs_qcom_cfg_timers(hba, dev_req_params->gear_rx,
dev_req_params->pwr_rx,
dev_req_params->hs_rate, false)) {
dev_req_params->hs_rate, false, false)) {
dev_err(hba->dev, "%s: ufs_qcom_cfg_timers() failed\n",
__func__);
/*
@ -1277,7 +1296,7 @@ static int ufs_qcom_init(struct ufs_hba *hba)
* Power up the PHY using the minimum supported gear (UFS_HS_G2).
* Switching to max gear will be performed during reinit if supported.
*/
host->hs_gear = UFS_HS_G2;
host->phy_gear = UFS_HS_G2;
return 0;
@ -1296,14 +1315,96 @@ static void ufs_qcom_exit(struct ufs_hba *hba)
phy_exit(host->generic_phy);
}
static int ufs_qcom_set_dme_vs_core_clk_ctrl_clear_div(struct ufs_hba *hba,
u32 clk_cycles)
/**
* ufs_qcom_set_clk_40ns_cycles - Configure 40ns clk cycles
*
* @hba: host controller instance
* @cycles_in_1us: No of cycles in 1us to be configured
*
* Return: zero on success, or an error if the DME get/set of the 40ns
* cycles configuration fails.
*/
static int ufs_qcom_set_clk_40ns_cycles(struct ufs_hba *hba,
u32 cycles_in_1us)
{
struct ufs_qcom_host *host = ufshcd_get_variant(hba);
u32 cycles_in_40ns;
u32 reg;
int err;
u32 core_clk_ctrl_reg;
if (clk_cycles > DME_VS_CORE_CLK_CTRL_MAX_CORE_CLK_1US_CYCLES_MASK)
/*
* UFS host controller V4.0.0 onwards needs to program
* PA_VS_CORE_CLK_40NS_CYCLES attribute per programmed
* frequency of unipro core clk of UFS host controller.
*/
if (host->hw_ver.major < 4)
return 0;
/*
* The generic formula cycles_in_40ns = (freq_unipro / 25) is not
* applicable for all frequencies. For example, ceil(37.5 MHz / 25)
* is 2 and ceil(403 MHz / 25) is 17, whereas the hardware
* specification expects 16. Hence use the exact value mandated by
* the hardware spec for cycles_in_40ns instead of calculating it
* with the generic formula.
*/
switch (cycles_in_1us) {
case UNIPRO_CORE_CLK_FREQ_403_MHZ:
cycles_in_40ns = 16;
break;
case UNIPRO_CORE_CLK_FREQ_300_MHZ:
cycles_in_40ns = 12;
break;
case UNIPRO_CORE_CLK_FREQ_201_5_MHZ:
cycles_in_40ns = 8;
break;
case UNIPRO_CORE_CLK_FREQ_150_MHZ:
cycles_in_40ns = 6;
break;
case UNIPRO_CORE_CLK_FREQ_100_MHZ:
cycles_in_40ns = 4;
break;
case UNIPRO_CORE_CLK_FREQ_75_MHZ:
cycles_in_40ns = 3;
break;
case UNIPRO_CORE_CLK_FREQ_37_5_MHZ:
cycles_in_40ns = 2;
break;
default:
dev_err(hba->dev, "UNIPRO clk freq %u MHz not supported\n",
cycles_in_1us);
return -EINVAL;
}
err = ufshcd_dme_get(hba, UIC_ARG_MIB(PA_VS_CORE_CLK_40NS_CYCLES), &reg);
if (err)
return err;
reg &= ~PA_VS_CORE_CLK_40NS_CYCLES_MASK;
reg |= cycles_in_40ns;
return ufshcd_dme_set(hba, UIC_ARG_MIB(PA_VS_CORE_CLK_40NS_CYCLES), reg);
}
static int ufs_qcom_set_core_clk_ctrl(struct ufs_hba *hba, bool is_scale_up)
{
struct ufs_qcom_host *host = ufshcd_get_variant(hba);
struct list_head *head = &hba->clk_list_head;
struct ufs_clk_info *clki;
u32 cycles_in_1us;
u32 core_clk_ctrl_reg;
int err;
list_for_each_entry(clki, head, list) {
if (!IS_ERR_OR_NULL(clki->clk) &&
!strcmp(clki->name, "core_clk_unipro")) {
if (is_scale_up)
cycles_in_1us = ceil(clki->max_freq, (1000 * 1000));
else
cycles_in_1us = ceil(clk_get_rate(clki->clk), (1000 * 1000));
break;
}
}
err = ufshcd_dme_get(hba,
UIC_ARG_MIB(DME_VS_CORE_CLK_CTRL),
@ -1311,32 +1412,54 @@ static int ufs_qcom_set_dme_vs_core_clk_ctrl_clear_div(struct ufs_hba *hba,
if (err)
return err;
core_clk_ctrl_reg &= ~DME_VS_CORE_CLK_CTRL_MAX_CORE_CLK_1US_CYCLES_MASK;
core_clk_ctrl_reg |= clk_cycles;
/* Bit mask is different for UFS host controller V4.0.0 onwards */
if (host->hw_ver.major >= 4) {
if (!FIELD_FIT(CLK_1US_CYCLES_MASK_V4, cycles_in_1us))
return -ERANGE;
core_clk_ctrl_reg &= ~CLK_1US_CYCLES_MASK_V4;
core_clk_ctrl_reg |= FIELD_PREP(CLK_1US_CYCLES_MASK_V4, cycles_in_1us);
} else {
if (!FIELD_FIT(CLK_1US_CYCLES_MASK, cycles_in_1us))
return -ERANGE;
core_clk_ctrl_reg &= ~CLK_1US_CYCLES_MASK;
core_clk_ctrl_reg |= FIELD_PREP(CLK_1US_CYCLES_MASK, cycles_in_1us);
}
/* Clear CORE_CLK_DIV_EN */
core_clk_ctrl_reg &= ~DME_VS_CORE_CLK_CTRL_CORE_CLK_DIV_EN_BIT;
return ufshcd_dme_set(hba,
err = ufshcd_dme_set(hba,
UIC_ARG_MIB(DME_VS_CORE_CLK_CTRL),
core_clk_ctrl_reg);
if (err)
return err;
/* Configure unipro core clk 40ns attribute */
return ufs_qcom_set_clk_40ns_cycles(hba, cycles_in_1us);
}
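The V4 branch above relies on the kernel's bitfield helpers: FIELD_FIT() validates that the cycle count fits the 12-bit field and FIELD_PREP() shifts it into bits 27:16. A self-contained illustration — the mask and helper names here are examples, not the driver's:

#include <linux/bitfield.h>
#include <linux/bits.h>

#define EX_CLK_1US_MASK GENMASK(27, 16)

static u32 ex_pack_cycles(u32 reg, u32 cycles)
{
        if (!FIELD_FIT(EX_CLK_1US_MASK, cycles))
                return reg;             /* value wider than 12 bits */
        reg &= ~EX_CLK_1US_MASK;
        return reg | FIELD_PREP(EX_CLK_1US_MASK, cycles);
}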
static int ufs_qcom_clk_scale_up_pre_change(struct ufs_hba *hba)
{
/* nothing to do as of now */
return 0;
}
static int ufs_qcom_clk_scale_up_post_change(struct ufs_hba *hba)
{
struct ufs_qcom_host *host = ufshcd_get_variant(hba);
struct ufs_pa_layer_attr *attr = &host->dev_req_params;
int ret;
if (!ufs_qcom_cap_qunipro(host))
return 0;
/* set unipro core clock cycles to 150 and clear clock divider */
return ufs_qcom_set_dme_vs_core_clk_ctrl_clear_div(hba, 150);
ret = ufs_qcom_cfg_timers(hba, attr->gear_rx, attr->pwr_rx,
attr->hs_rate, false, true);
if (ret) {
dev_err(hba->dev, "%s ufs cfg timer failed\n", __func__);
return ret;
}
/* set unipro core clock attributes and clear clock divider */
return ufs_qcom_set_core_clk_ctrl(hba, true);
}
static int ufs_qcom_clk_scale_up_post_change(struct ufs_hba *hba)
{
return 0;
}
static int ufs_qcom_clk_scale_down_pre_change(struct ufs_hba *hba)
@ -1371,15 +1494,14 @@ static int ufs_qcom_clk_scale_down_post_change(struct ufs_hba *hba)
if (!ufs_qcom_cap_qunipro(host))
return 0;
/* set unipro core clock cycles to 75 and clear clock divider */
return ufs_qcom_set_dme_vs_core_clk_ctrl_clear_div(hba, 75);
/* set unipro core clock attributes and clear clock divider */
return ufs_qcom_set_core_clk_ctrl(hba, false);
}
static int ufs_qcom_clk_scale_notify(struct ufs_hba *hba,
bool scale_up, enum ufs_notify_change_status status)
{
struct ufs_qcom_host *host = ufshcd_get_variant(hba);
struct ufs_pa_layer_attr *dev_req_params = &host->dev_req_params;
int err = 0;
/* check the host controller state before sending hibern8 cmd */
@ -1409,11 +1531,6 @@ static int ufs_qcom_clk_scale_notify(struct ufs_hba *hba,
return err;
}
ufs_qcom_cfg_timers(hba,
dev_req_params->gear_rx,
dev_req_params->pwr_rx,
dev_req_params->hs_rate,
false);
ufs_qcom_icc_update_bw(host);
ufshcd_uic_hibern8_exit(hba);
}
@ -1910,14 +2027,13 @@ static int ufs_qcom_probe(struct platform_device *pdev)
*/
static int ufs_qcom_remove(struct platform_device *pdev)
static void ufs_qcom_remove(struct platform_device *pdev)
{
struct ufs_hba *hba = platform_get_drvdata(pdev);
pm_runtime_get_sync(&(pdev)->dev);
ufshcd_remove(hba);
platform_msi_domain_free_irqs(hba->dev);
return 0;
}
static const struct of_device_id ufs_qcom_of_match[] __maybe_unused = {
@ -1949,7 +2065,7 @@ static const struct dev_pm_ops ufs_qcom_pm_ops = {
static struct platform_driver ufs_qcom_pltform = {
.probe = ufs_qcom_probe,
.remove = ufs_qcom_remove,
.remove_new = ufs_qcom_remove,
.driver = {
.name = "ufshcd-qcom",
.pm = &ufs_qcom_pm_ops,


@ -129,8 +129,21 @@ enum {
#define PA_VS_CONFIG_REG1 0x9000
#define DME_VS_CORE_CLK_CTRL 0xD002
/* bit and mask definitions for DME_VS_CORE_CLK_CTRL attribute */
#define DME_VS_CORE_CLK_CTRL_CORE_CLK_DIV_EN_BIT BIT(8)
#define DME_VS_CORE_CLK_CTRL_MAX_CORE_CLK_1US_CYCLES_MASK 0xFF
#define CLK_1US_CYCLES_MASK_V4 GENMASK(27, 16)
#define CLK_1US_CYCLES_MASK GENMASK(7, 0)
#define DME_VS_CORE_CLK_CTRL_CORE_CLK_DIV_EN_BIT BIT(8)
#define PA_VS_CORE_CLK_40NS_CYCLES 0x9007
#define PA_VS_CORE_CLK_40NS_CYCLES_MASK GENMASK(6, 0)
/* QCOM UFS host controller core clk frequencies */
#define UNIPRO_CORE_CLK_FREQ_37_5_MHZ 38
#define UNIPRO_CORE_CLK_FREQ_75_MHZ 75
#define UNIPRO_CORE_CLK_FREQ_100_MHZ 100
#define UNIPRO_CORE_CLK_FREQ_150_MHZ 150
#define UNIPRO_CORE_CLK_FREQ_300_MHZ 300
#define UNIPRO_CORE_CLK_FREQ_201_5_MHZ 202
#define UNIPRO_CORE_CLK_FREQ_403_MHZ 403
static inline void
ufs_qcom_get_controller_revision(struct ufs_hba *hba,
@ -227,7 +240,7 @@ struct ufs_qcom_host {
struct gpio_desc *device_reset;
u32 hs_gear;
u32 phy_gear;
bool esi_enabled;
};
@ -244,6 +257,7 @@ ufs_qcom_get_debug_reg_offset(struct ufs_qcom_host *host, u32 reg)
#define ufs_qcom_is_link_off(hba) ufshcd_is_link_off(hba)
#define ufs_qcom_is_link_active(hba) ufshcd_is_link_active(hba)
#define ufs_qcom_is_link_hibern8(hba) ufshcd_is_link_hibern8(hba)
#define ceil(freq, div) ((freq) % (div) == 0 ? ((freq)/(div)) : ((freq)/(div) + 1))
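A side note on the ceil() helper just added: it is an open-coded ceiling division, equivalent to the kernel's existing DIV_ROUND_UP() from <linux/math.h>, e.g.:

#include <linux/math.h>

static_assert(DIV_ROUND_UP(403, 25) == 17);     /* matches ceil(403, 25) */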
int ufs_qcom_testbus_config(struct ufs_qcom_host *host);


@ -388,18 +388,16 @@ static int ufs_renesas_probe(struct platform_device *pdev)
return ufshcd_pltfrm_init(pdev, &ufs_renesas_vops);
}
static int ufs_renesas_remove(struct platform_device *pdev)
static void ufs_renesas_remove(struct platform_device *pdev)
{
struct ufs_hba *hba = platform_get_drvdata(pdev);
ufshcd_remove(hba);
return 0;
}
static struct platform_driver ufs_renesas_platform = {
.probe = ufs_renesas_probe,
.remove = ufs_renesas_remove,
.remove_new = ufs_renesas_remove,
.driver = {
.name = "ufshcd-renesas",
.of_match_table = of_match_ptr(ufs_renesas_of_match),


@ -425,13 +425,12 @@ static int ufs_sprd_probe(struct platform_device *pdev)
return err;
}
static int ufs_sprd_remove(struct platform_device *pdev)
static void ufs_sprd_remove(struct platform_device *pdev)
{
struct ufs_hba *hba = platform_get_drvdata(pdev);
pm_runtime_get_sync(&(pdev)->dev);
ufshcd_remove(hba);
return 0;
}
static const struct dev_pm_ops ufs_sprd_pm_ops = {
@ -443,7 +442,7 @@ static const struct dev_pm_ops ufs_sprd_pm_ops = {
static struct platform_driver ufs_sprd_pltform = {
.probe = ufs_sprd_probe,
.remove = ufs_sprd_remove,
.remove_new = ufs_sprd_remove,
.driver = {
.name = "ufshcd-sprd",
.pm = &ufs_sprd_pm_ops,


@ -58,11 +58,12 @@ static int __intel_dsm(struct intel_host *intel_host, struct device *dev,
int err = 0;
size_t len;
obj = acpi_evaluate_dsm(ACPI_HANDLE(dev), &intel_dsm_guid, 0, fn, NULL);
obj = acpi_evaluate_dsm_typed(ACPI_HANDLE(dev), &intel_dsm_guid, 0, fn, NULL,
ACPI_TYPE_BUFFER);
if (!obj)
return -EOPNOTSUPP;
if (obj->type != ACPI_TYPE_BUFFER || obj->buffer.length < 1) {
if (obj->buffer.length < 1) {
err = -EINVAL;
goto out;
}


@ -10,6 +10,7 @@
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/pm_opp.h>
#include <linux/pm_runtime.h>
#include <linux/of.h>
@ -121,7 +122,7 @@ static bool phandle_exists(const struct device_node *np,
#define MAX_PROP_SIZE 32
int ufshcd_populate_vreg(struct device *dev, const char *name,
struct ufs_vreg **out_vreg)
struct ufs_vreg **out_vreg, bool skip_current)
{
char prop_name[MAX_PROP_SIZE];
struct ufs_vreg *vreg = NULL;
@ -147,6 +148,11 @@ int ufshcd_populate_vreg(struct device *dev, const char *name,
if (!vreg->name)
return -ENOMEM;
if (skip_current) {
vreg->max_uA = 0;
goto out;
}
snprintf(prop_name, MAX_PROP_SIZE, "%s-max-microamp", name);
if (of_property_read_u32(np, prop_name, &vreg->max_uA)) {
dev_info(dev, "%s: unable to find %s\n", __func__, prop_name);
@ -175,19 +181,19 @@ static int ufshcd_parse_regulator_info(struct ufs_hba *hba)
struct device *dev = hba->dev;
struct ufs_vreg_info *info = &hba->vreg_info;
err = ufshcd_populate_vreg(dev, "vdd-hba", &info->vdd_hba);
err = ufshcd_populate_vreg(dev, "vdd-hba", &info->vdd_hba, true);
if (err)
goto out;
err = ufshcd_populate_vreg(dev, "vcc", &info->vcc);
err = ufshcd_populate_vreg(dev, "vcc", &info->vcc, false);
if (err)
goto out;
err = ufshcd_populate_vreg(dev, "vccq", &info->vccq);
err = ufshcd_populate_vreg(dev, "vccq", &info->vccq, false);
if (err)
goto out;
err = ufshcd_populate_vreg(dev, "vccq2", &info->vccq2);
err = ufshcd_populate_vreg(dev, "vccq2", &info->vccq2, false);
out:
return err;
}
@ -207,6 +213,77 @@ static void ufshcd_init_lanes_per_dir(struct ufs_hba *hba)
}
}
static int ufshcd_parse_operating_points(struct ufs_hba *hba)
{
struct device *dev = hba->dev;
struct device_node *np = dev->of_node;
struct dev_pm_opp_config config = {};
struct ufs_clk_info *clki;
const char **clk_names;
int cnt, i, ret;
if (!of_find_property(np, "operating-points-v2", NULL))
return 0;
if (of_find_property(np, "freq-table-hz", NULL)) {
dev_err(dev, "%s: operating-points and freq-table-hz are incompatible\n",
__func__);
return -EINVAL;
}
cnt = of_property_count_strings(np, "clock-names");
if (cnt <= 0) {
dev_err(dev, "%s: Missing clock-names\n", __func__);
return -ENODEV;
}
/* OPP expects clk_names to be NULL terminated */
clk_names = devm_kcalloc(dev, cnt + 1, sizeof(*clk_names), GFP_KERNEL);
if (!clk_names)
return -ENOMEM;
/*
* We still need to get reference to all clocks as the UFS core uses
* them separately.
*/
for (i = 0; i < cnt; i++) {
ret = of_property_read_string_index(np, "clock-names", i,
&clk_names[i]);
if (ret)
return ret;
clki = devm_kzalloc(dev, sizeof(*clki), GFP_KERNEL);
if (!clki)
return -ENOMEM;
clki->name = devm_kstrdup(dev, clk_names[i], GFP_KERNEL);
if (!clki->name)
return -ENOMEM;
if (!strcmp(clk_names[i], "ref_clk"))
clki->keep_link_active = true;
list_add_tail(&clki->list, &hba->clk_list_head);
}
config.clk_names = clk_names;
config.config_clks = ufshcd_opp_config_clks;
ret = devm_pm_opp_set_config(dev, &config);
if (ret)
return ret;
ret = devm_pm_opp_of_add_table(dev);
if (ret) {
dev_err(dev, "Failed to add OPP table: %d\n", ret);
return ret;
}
hba->use_pm_opp = true;
return 0;
}
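For context, the parser above expects a binding along these lines; the fragment below is purely illustrative (node, clock, and phandle names are made up) and is shown as a C comment since only the property names matter:

/*
 *      ufs@4804000 {
 *              clocks = <&clk_core>, <&clk_unipro>;
 *              clock-names = "core_clk", "core_clk_unipro";
 *              operating-points-v2 = <&ufs_opp_table>;
 *      };
 *
 * Note that freq-table-hz must be absent when operating-points-v2 is
 * present; the parser rejects the combination with -EINVAL.
 */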
/**
* ufshcd_get_pwr_dev_param - get finally agreed attributes for
* power mode change
@ -373,6 +450,12 @@ int ufshcd_pltfrm_init(struct platform_device *pdev,
ufshcd_init_lanes_per_dir(hba);
err = ufshcd_parse_operating_points(hba);
if (err) {
dev_err(dev, "%s: OPP parse failed %d\n", __func__, err);
goto dealloc_host;
}
err = ufshcd_init(hba, mmio_base, irq);
if (err) {
dev_err_probe(dev, err, "Initialization failed with error %d\n",


@ -32,6 +32,6 @@ void ufshcd_init_pwr_dev_param(struct ufs_dev_params *dev_param);
int ufshcd_pltfrm_init(struct platform_device *pdev,
const struct ufs_hba_variant_ops *vops);
int ufshcd_populate_vreg(struct device *dev, const char *name,
struct ufs_vreg **out_vreg);
struct ufs_vreg **out_vreg, bool skip_current);
#endif /* UFSHCD_PLTFRM_H_ */


@ -1687,6 +1687,9 @@ static const struct target_core_fabric_ops usbg_ops = {
.tfc_wwn_attrs = usbg_wwn_attrs,
.tfc_tpg_base_attrs = usbg_base_attrs,
.default_submit_type = TARGET_DIRECT_SUBMIT,
.direct_submit_supp = 1,
};
/* Start gadget.c code */


@ -909,7 +909,7 @@ static void vhost_scsi_target_queue_cmd(struct vhost_scsi_cmd *cmd)
cmd->tvc_prot_sgl_count, GFP_KERNEL))
return;
target_queue_submission(se_cmd);
target_submit(se_cmd);
}
static void
@ -2598,6 +2598,9 @@ static const struct target_core_fabric_ops vhost_scsi_ops = {
.tfc_wwn_attrs = vhost_scsi_wwn_attrs,
.tfc_tpg_base_attrs = vhost_scsi_tpg_attrs,
.tfc_tpg_attrib_attrs = vhost_scsi_tpg_attrib_attrs,
.default_submit_type = TARGET_QUEUE_SUBMIT,
.direct_submit_supp = 1,
};
static int __init vhost_scsi_init(void)


@ -1832,6 +1832,9 @@ static const struct target_core_fabric_ops scsiback_ops = {
.tfc_wwn_attrs = scsiback_wwn_attrs,
.tfc_tpg_base_attrs = scsiback_tpg_attrs,
.tfc_tpg_param_attrs = scsiback_param_attrs,
.default_submit_type = TARGET_DIRECT_SUBMIT,
.direct_submit_supp = 1,
};
static const struct xenbus_device_id scsiback_ids[] = {


@ -404,8 +404,6 @@ cmd_to_domain_dev(struct scsi_cmnd *cmd)
return sdev_to_domain_dev(cmd->device);
}
void sas_hash_addr(u8 *hashed, const u8 *sas_addr);
/* Before calling a notify event, LLDD should use this function
* when the link is severed (possibly from its tasklet).
* The idea is that the Class only reads those, while the LLDD,
@ -681,7 +679,6 @@ extern void sas_resume_ha(struct sas_ha_struct *sas_ha);
extern void sas_resume_ha_no_sync(struct sas_ha_struct *sas_ha);
extern void sas_suspend_ha(struct sas_ha_struct *sas_ha);
int sas_set_phy_speed(struct sas_phy *phy, struct sas_phy_linkrates *rates);
int sas_phy_reset(struct sas_phy *phy, int hard_reset);
int sas_phy_enable(struct sas_phy *phy, int enable);
extern int sas_queuecommand(struct Scsi_Host *, struct scsi_cmnd *);
@ -699,20 +696,6 @@ extern struct scsi_transport_template *
sas_domain_attach_transport(struct sas_domain_function_template *);
extern struct device_attribute dev_attr_phy_event_threshold;
int sas_discover_root_expander(struct domain_device *);
int sas_ex_revalidate_domain(struct domain_device *);
void sas_unregister_domain_devices(struct asd_sas_port *port, int gone);
void sas_init_disc(struct sas_discovery *disc, struct asd_sas_port *);
void sas_discover_event(struct asd_sas_port *, enum discover_event ev);
int sas_discover_end_dev(struct domain_device *);
void sas_unregister_dev(struct asd_sas_port *port, struct domain_device *);
void sas_init_dev(struct domain_device *);
void sas_task_abort(struct sas_task *);
int sas_eh_abort_handler(struct scsi_cmnd *cmd);
int sas_eh_device_reset_handler(struct scsi_cmnd *cmd);


@ -245,6 +245,9 @@ struct scsi_host_template {
* midlayer calls this point so that the driver may deallocate
* and terminate any references to the target.
*
* Note: This callback is called with the host lock held and hence
* must not sleep.
*
* Status: OPTIONAL
*/
void (* target_destroy)(struct scsi_target *);


@ -108,6 +108,15 @@
#define SE_MODE_PAGE_BUF 512
#define SE_SENSE_BUF 96
enum target_submit_type {
/* Use the fabric driver's default submission type */
TARGET_FABRIC_DEFAULT_SUBMIT,
/* Submit from the calling context */
TARGET_DIRECT_SUBMIT,
/* Defer submission to the LIO workqueue */
TARGET_QUEUE_SUBMIT,
};
/* struct se_hba->hba_flags */
enum hba_flags_table {
HBA_FLAGS_INTERNAL_USE = 0x01,
@ -717,6 +726,7 @@ struct se_dev_attrib {
u32 unmap_granularity;
u32 unmap_granularity_alignment;
u32 max_write_same_len;
u8 submit_type;
struct se_device *da_dev;
struct config_group da_group;
};


@ -113,11 +113,20 @@ struct target_core_fabric_ops {
struct configfs_attribute **tfc_tpg_nacl_param_attrs;
/*
* Set this member variable to true if the SCSI transport protocol
* Set this member variable if the SCSI transport protocol
* (e.g. iSCSI) requires that the Data-Out buffer is transferred in
* its entirety before a command is aborted.
*/
bool write_pending_must_be_called;
unsigned int write_pending_must_be_called:1;
/*
* Set this if the driver supports submitting commands to the backend
* from target_submit/target_submit_cmd.
*/
unsigned int direct_submit_supp:1;
/*
* Set this to a target_submit_type value.
*/
u8 default_submit_type;
};
int target_register_template(const struct target_core_fabric_ops *fo);
@ -166,20 +175,18 @@ int target_submit_prep(struct se_cmd *se_cmd, unsigned char *cdb,
struct scatterlist *sgl, u32 sgl_count,
struct scatterlist *sgl_bidi, u32 sgl_bidi_count,
struct scatterlist *sgl_prot, u32 sgl_prot_count, gfp_t gfp);
void target_submit(struct se_cmd *se_cmd);
int target_submit(struct se_cmd *se_cmd);
sense_reason_t transport_lookup_cmd_lun(struct se_cmd *);
sense_reason_t target_cmd_init_cdb(struct se_cmd *se_cmd, unsigned char *cdb,
gfp_t gfp);
sense_reason_t target_cmd_parse_cdb(struct se_cmd *);
void target_submit_cmd(struct se_cmd *, struct se_session *, unsigned char *,
unsigned char *, u64, u32, int, int, int);
void target_queue_submission(struct se_cmd *se_cmd);
int target_submit_tmr(struct se_cmd *se_cmd, struct se_session *se_sess,
unsigned char *sense, u64 unpacked_lun,
void *fabric_tmr_ptr, unsigned char tm_type,
gfp_t, u64, int);
int transport_handle_cdb_direct(struct se_cmd *);
sense_reason_t transport_generic_new_cmd(struct se_cmd *);
void target_put_cmd_and_wait(struct se_cmd *cmd);
@ -197,8 +204,6 @@ void target_stop_session(struct se_session *se_sess);
void target_wait_for_sess_cmds(struct se_session *);
void target_show_cmd(const char *pfx, struct se_cmd *cmd);
int core_alua_check_nonop_delay(struct se_cmd *);
int core_tmr_alloc_req(struct se_cmd *, void *, u8, gfp_t);
void core_tmr_release_req(struct se_tmr_req *);
int transport_generic_handle_tmr(struct se_cmd *);

Some files were not shown because too many files have changed in this diff.