padata_sysfs_store() was copied from padata_sysfs_show() but this check
was not adapted. Today no attribute can fail this check, but if one is
ever added, the check may as well be correct.
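For reference, a minimal sketch of the corrected function, using padata's
existing helpers (the only change is checking the store callback rather
than the show callback that was copied over):

static ssize_t padata_sysfs_store(struct kobject *kobj, struct attribute *attr,
                                  const char *buf, size_t count)
{
        struct padata_instance *pinst = kobj2pinst(kobj);
        struct padata_sysfs_entry *pentry = attr2pentry(attr);
        ssize_t ret = 0;

        /* Check ->store, not ->show as copied from padata_sysfs_show(). */
        if (pentry->store)
                ret = pentry->store(pinst, attr, buf, count);

        return ret;
}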
Fixes: 5e017dc3f8bc ("padata: Added sysfs primitives to padata subsystem")
Signed-off-by: Thomas Weißschuh <linux@weissschuh.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The keywrap (kw) algorithm has no in-tree user. It has never had an
in-tree user, and the patch that added it provided no justification for
its inclusion. Even use of it via AF_ALG is impossible, as it uses a
weird calling convention where part of the ciphertext is returned via
the IV buffer, which is not returned to userspace in AF_ALG.
It's also unclear whether any new code in the kernel that does key
wrapping would actually use this algorithm. It is controversial in the
cryptographic community due to having no clearly stated security goal,
no security proof, poor performance, and only a 64-bit auth tag. Later
work (https://eprint.iacr.org/2006/221) suggested that the goal is
deterministic authenticated encryption. But there are now more modern
algorithms for this, and this is not the same as key wrapping, for which
a regular AEAD such as AES-GCM usually can be (and is) used instead.
Therefore, remove this unused code.
There were several special cases for this algorithm in the self-tests,
due to its weird calling convention. Remove those too.
Cc: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> # m68k
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove the vmac64 template, as it has no known users. It also continues
to have longstanding bugs such as alignment violations (see
https://lore.kernel.org/r/20241226134847.6690-1-evepolonium@gmail.com/).
This code was added in 2009 by commit f1939f7c5645 ("crypto: vmac - New
hash algorithm for intel_txt support"). Based on the mention of
intel_txt support in the commit title, it seems it was added as a
prerequisite for the contemporaneous patch
"intel_txt: add s3 userspace memory integrity verification"
(https://lore.kernel.org/r/4ABF2B50.6070106@intel.com/). In the design
proposed by that patch, when an Intel Trusted Execution Technology (TXT)
enabled system resumed from suspend, the "tboot" trusted executable
launched the Linux kernel without verifying userspace memory, and then
the Linux kernel used VMAC to verify userspace memory.
However, that patch was never merged, as reviewers had objected to the
design. It was later reworked into commit 4bd96a7a8185 ("x86, tboot:
Add support for S3 memory integrity protection") which made tboot verify
the memory instead. Thus the VMAC support in Linux was never used.
No in-tree user has appeared since then, other than potentially the
usual components that allow specifying arbitrary hash algorithms by
name, namely AF_ALG and dm-integrity. However there are no indications
that VMAC is being used with these components. Debian Code Search and
web searches for "vmac64" (the actual algorithm name) do not return any
results other than the kernel itself, suggesting that it does not appear
in any other code or documentation. Explicitly grepping the source code
of the usual suspects (libell, iwd, cryptsetup) finds no matches either.
Before 2018, the vmac code was also completely broken due to using a
hardcoded nonce and the wrong endianness for the MAC. It was then fixed
by commit ed331adab35b ("crypto: vmac - add nonced version with big
endian digest") and commit 0917b873127c ("crypto: vmac - remove insecure
version with hardcoded nonce"). These were intentionally breaking
changes that changed all the computed MAC values as well as the
algorithm name ("vmac" to "vmac64"). No complaints were ever received
about these breaking changes, strongly suggesting the absence of users.
The reason I had put some effort into fixing this code in 2018 is
because it was used by an out-of-tree driver. But if it is still needed
in that particular out-of-tree driver, the code can be carried in that
driver instead. There is no need to carry it upstream.
Cc: Atharva Tiwari <evepolonium@gmail.com>
Cc: Shane Wang <shane.wang@intel.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> # m68k
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Document ipq9574, ipq5424 and ipq5322 compatible for the True Random Number
Generator.
Signed-off-by: Md Sadre Alam <quic_mdalam@quicinc.com>
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove hard-coded strings by using the str_enabled_disabled() helper
function.
Use pr_info() instead of printk(KERN_INFO) to silence a checkpatch
warning.
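Illustrative before/after (message text hypothetical):

        printk(KERN_INFO "feature is %s\n",
               enabled ? "enabled" : "disabled");

becomes:

        pr_info("feature is %s\n", str_enabled_disabled(enabled));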
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
With the latest mm-unstable, setting the iaa_crypto sync_mode to 'async'
causes crypto testmgr.c test_acomp() failures and dmesg call traces, and
leaves zswap unable to use 'deflate-iaa' as a compressor:
echo async > /sys/bus/dsa/drivers/crypto/sync_mode
[ 255.271030] zswap: compressor deflate-iaa not available
[ 369.960673] INFO: task cryptomgr_test:4889 blocked for more than 122 seconds.
[ 369.970127] Not tainted 6.13.0-rc1-mm-unstable-12-16-2024+ #324
[ 369.977411] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 369.986246] task:cryptomgr_test state:D stack:0 pid:4889 tgid:4889 ppid:2 flags:0x00004000
[ 369.986253] Call Trace:
[ 369.986256] <TASK>
[ 369.986260] __schedule+0x45c/0xfa0
[ 369.986273] schedule+0x2e/0xb0
[ 369.986277] schedule_timeout+0xe7/0x100
[ 369.986284] ? __prepare_to_swait+0x4e/0x70
[ 369.986290] wait_for_completion+0x8d/0x120
[ 369.986293] test_acomp+0x284/0x670
[ 369.986305] ? __pfx_cryptomgr_test+0x10/0x10
[ 369.986312] alg_test_comp+0x263/0x440
[ 369.986315] ? sched_balance_newidle+0x259/0x430
[ 369.986320] ? __pfx_cryptomgr_test+0x10/0x10
[ 369.986323] alg_test.part.27+0x103/0x410
[ 369.986326] ? __schedule+0x464/0xfa0
[ 369.986330] ? __pfx_cryptomgr_test+0x10/0x10
[ 369.986333] cryptomgr_test+0x20/0x40
[ 369.986336] kthread+0xda/0x110
[ 369.986344] ? __pfx_kthread+0x10/0x10
[ 369.986346] ret_from_fork+0x2d/0x40
[ 369.986355] ? __pfx_kthread+0x10/0x10
[ 369.986358] ret_from_fork_asm+0x1a/0x30
[ 369.986365] </TASK>
This happens because the only polling-without-interrupts implementation
that iaa_crypto currently provides is the 'sync' mode. With 'async', the
iaa_crypto compress/decompress calls submit the descriptor and return
-EINPROGRESS, without any mechanism in the driver to poll for
completions. Hence callers such as test_acomp() in crypto/testmgr.c or
zswap, which wrap the calls to crypto_acomp_compress() and
crypto_acomp_decompress() in synchronous wrappers, will block
indefinitely. Even before zswap can notice this problem, the crypto
testmgr.c test_acomp() will fail and prevent registration of
"deflate-iaa" as a valid crypto acomp algorithm, thereby disallowing the
use of "deflate-iaa" as a zswap compressor (zswap will fall back to the
default compressor in this case).
To fix this issue, this patch modifies the iaa_crypto sync_mode set
function to treat 'async' as equivalent to 'sync', so that the driver's
only correct and supported polling-without-interrupts implementation is
enabled, and zswap can use 'deflate-iaa' as the compressor.
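A sketch of the change, based on the driver's sync_mode set function
(exact code may differ):

static int set_iaa_sync_mode(const char *name)
{
        if (sysfs_streq(name, "sync")) {
                use_irq = false;
                async_mode = false;
        } else if (sysfs_streq(name, "async")) {
                use_irq = false;
                async_mode = false; /* was true: 'async' now behaves as 'sync' */
        } else if (sysfs_streq(name, "async_irq")) {
                use_irq = true;
                async_mode = true;
        } else {
                return -EINVAL;
        }
        return 0;
}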
Hence, with this patch, this is what will happen:
echo async > /sys/bus/dsa/drivers/crypto/sync_mode
cat /sys/bus/dsa/drivers/crypto/sync_mode
sync
There are no crypto/testmgr.c test_acomp() errors, no call traces and zswap
can use 'deflate-iaa' without any errors. The iaa_crypto documentation has
also been updated to mention this caveat with 'async' and what to expect
with this fix.
True iaa_crypto async polling without interrupts is enabled in patch
"crypto: iaa - Implement batch_compress(), batch_decompress() API in
iaa_crypto." [1] which is under review as part of the "zswap IAA compress
batching" patch-series [2]. Until this is merged, we would appreciate it if
this current patch can be considered for a hotfix.
[1]: https://patchwork.kernel.org/project/linux-mm/patch/20241221063119.29140-5-kanchana.p.sridhar@intel.com/
[2]: https://patchwork.kernel.org/project/linux-mm/list/?series=920084
Fixes: 09646c98d ("crypto: iaa - Add irq support for the crypto async interface")
Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The stack frame in libaesgcm_init triggers a size warning on x86-64.
Reduce it by making buf static.
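The change is of this shape (buffer size illustrative):

-       u8 buf[BUF_SIZE];
+       static u8 buf[BUF_SIZE];

This is safe because libaesgcm_init() is an __init function that runs
once during boot, so the buffer does not need to live on the stack.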
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Commit ce8fd0500b74 ("crypto: qce - use __free() for a buffer that's
always freed") introduced a buggy use of __free(), which clang
rightfully points out:
drivers/crypto/qce/sha.c:365:3: error: cannot jump from this goto statement to its label
365 | goto err_free_ahash;
| ^
drivers/crypto/qce/sha.c:373:6: note: jump bypasses initialization of variable with __attribute__((cleanup))
373 | u8 *buf __free(kfree) = kzalloc(keylen + QCE_MAX_ALIGN_SIZE,
| ^
Jumping over a variable declared with the cleanup attribute does not
prevent the cleanup function from running; instead, the cleanup function
is called with an uninitialized value.
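A minimal standalone illustration of the hazard (not the driver code):

#include <stdlib.h>

static void freep(char **p) { free(*p); }

int demo(int fail)
{
        if (fail)
                goto err;       /* bypasses buf's initialization */

        char *buf __attribute__((cleanup(freep))) = malloc(16);
        /* ... use buf ... */
        return 0;
err:
        /* buf is still in scope here, so gcc emits a call to freep(&buf)
         * on this path -- with buf never having been initialized. clang
         * instead rejects the goto at compile time, as seen above.
         */
        return -1;
}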
Moving the declaration back to the top of the function with __free() and
a NULL initialization would resolve the bug, but that is really not much
different from the original code. Since the function is so simple and
there is no functional reason to use __free() here, just revert the
original change to resolve the issue.
Fixes: ce8fd0500b74 ("crypto: qce - use __free() for a buffer that's always freed")
Reported-by: Linux Kernel Functional Testing <lkft@linaro.org>
Closes: https://lore.kernel.org/CA+G9fYtpAwXa5mUQ5O7vDLK2xN4t-kJoxgUe1ZFRT=AGqmLSRA@mail.gmail.com/
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Acked-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
init_ixp_crypto() calls of_parse_phandle_with_fixed_args() multiple
times, but does not release all the obtained refcounts. Fix it by adding
of_node_put() calls.
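The pattern being fixed looks roughly like this (property name
illustrative):

        struct of_phandle_args queue_spec;

        ret = of_parse_phandle_with_fixed_args(np, "queue-rx", 1, 0,
                                               &queue_spec);
        if (ret)
                return ret;
        recv_qid = queue_spec.args[0];
        of_node_put(queue_spec.np);     /* added: release the refcount
                                           taken by the parse above */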
This bug was found by an experimental static analysis tool that I am
developing.
Fixes: 76f24b4f46b8 ("crypto: ixp4xx - Add device tree support")
Signed-off-by: Joe Hattori <joe@pf.is.s.u-tokyo.ac.jp>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When the digest algorithm is HMAC-SHAx or similar, the authsize may be
less than 4 bytes, in which case the mac_len of the BD is set to zero.
The hardware treats this as a BD configuration error and reports a RAS
error, so the sec driver needs to switch to software calculation in this
case. This patch adds a check for it and removes an unnecessary check
that is already done by the crypto layer.
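The added check is along these lines (names illustrative; the real
driver uses its existing software fallback path):

        /* A mac_len of 0 in the BD triggers a hardware RAS error, so
         * fall back to software calculation for authsize < 4 bytes.
         */
        if (unlikely(authsize < SEC_MIN_AUTH_SIZE))
                return sec_aead_soft_crypto(ctx, req, encrypt);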
Fixes: 2f072d75d1ab ("crypto: hisilicon - Add aead support on SEC2")
Signed-off-by: Wenkai Lin <linwenkai6@hisilicon.com>
Signed-off-by: Chenghai Huang <huangchenghai2@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
When the AEAD algorithm is used for encryption or decryption, the input
authentication length varies, and the hardware needs the actual input
length to pass the integrity check verification. Currently, the driver
uses a fixed authentication length, which causes decryption failures, so
the length configuration is modified. In addition, the step of setting
the auth length is unnecessary, so it was deleted from the setkey
function.
Fixes: 2f072d75d1ab ("crypto: hisilicon - Add aead support on SEC2")
Signed-off-by: Wenkai Lin <linwenkai6@hisilicon.com>
Signed-off-by: Chenghai Huang <huangchenghai2@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reduce latency by taking advantage of the property vaesenclast(key, a) ^
b == vaesenclast(key ^ b, a), like I did in the AES-GCM code.
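The identity holds because the AES last round applies ShiftRows and
SubBytes and then XORs in the round key, so any other XOR operand can be
folded into the key ahead of time:

  vaesenclast(key, a)     = SubBytes(ShiftRows(a)) ^ key
  vaesenclast(key, a) ^ b = SubBytes(ShiftRows(a)) ^ (key ^ b)
                          = vaesenclast(key ^ b, a)

The key ^ b XOR can thus be computed off the critical path, before the
vaesenclast.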
Also replace a vpand and vpxor with a vpternlogd.
On AMD Zen 5 this improves performance by about 3%. Intel performance
remains about the same, with a 0.1% improvement being seen on Icelake.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Prefer immediates of -128 to 128, since the former fits in a signed
byte, saving 3 bytes per instruction. Also prefer VEX-coded
instructions to EVEX where this is easy to do.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The AES-XTS assembly code currently treats the length as signed, since
this saves a few instructions in the loop compared to treating it as
unsigned. Therefore update the type to make this clear. (It is not
actually passed any values larger than PAGE_SIZE.)
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Improve some of the comments in aes-xts-avx-x86_64.S.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since aes-xts-avx-x86_64.S contains multiple functions, move the
register aliases for the parameters and local variables of the XTS
update function into the macro that generates that function. Then add
register aliases to aes_xts_encrypt_iv() to improve readability there.
This makes aes-xts-avx-x86_64.S consistent with the GCM assembly files.
No change in the generated code.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use .irp instead of repeating code.
No change in the generated code.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reorganize the main loop to free up the RNDKEYLAST[0-3] registers and
use them for more cached round keys. This improves performance by about
2% on AMD Zen 4 and Zen 5. Intel performance remains about the same.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Prefer immediates of -128 to 128, since the former fits in a signed
byte, saving 3 bytes per instruction. Also replace a vpand and vpxor
with a vpternlogd.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
gf128mul_4k_bbe(), gf128mul_bbe() and gf128mul_init_4k_bbe()
are part of the library originally added in 2006 by
commit c494e0705d67 ("[CRYPTO] lib: table driven multiplications in
GF(2^128)")
but have never been used.
Remove them.
(BBE is big-endian byte / big-endian bit ordering. Note that the
64k-table version is still used, so I've left that in.)
Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Move the hash table growth check and work scheduling outside the
rht lock to prevent a possible circular locking dependency.
The original implementation could trigger a lockdep warning due to
a potential deadlock scenario involving nested locks between
rhashtable bucket, rq lock, and dsq lock. By relocating the
growth check and work scheduling after releasing the rht lock, we break
this potential deadlock chain.
This change expands the flexibility of rhashtable by removing
restrictive locking that previously limited its use in scheduler
and workqueue contexts.
Important to note: this calls rht_grow_above_75(), which reads from
struct rhashtable without holding the lock. If this is a problem, we can
move the check back under the lock, and schedule the workqueue after the
lock is released.
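Schematically, the insert path becomes (sketch, not the exact code):

        /* ... insert the object under the bucket lock ... */
        rht_unlock(tbl, bkt, flags);

        /* Moved outside the bucket lock: */
        atomic_inc(&ht->nelems);
        if (rht_grow_above_75(ht, tbl))
                schedule_work(&ht->run_work);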
Fixes: f0e1a0643a59 ("sched_ext: Implement BPF extensible scheduler class")
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Breno Leitao <leitao@debian.org>
Modified so that atomic_inc is also moved outside of the bucket
lock along with the growth above 75% check.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since this code is zero-initializing the algorithm struct, the
assignment of 0 to cra_alignmask is redundant. Remove it to reduce the
number of matches that are found when grepping for cra_alignmask.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Struct fields are zero by default, so these lines of code have no
effect. Remove them to reduce the number of matches that are found when
grepping for cra_alignmask.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Struct fields are zero by default, so these lines of code have no
effect. Remove them to reduce the number of matches that are found when
grepping for cra_alignmask.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Instead of specifying a nonzero alignmask, use the unaligned access
helpers. This eliminates unnecessary alignment operations on most CPUs,
which can handle unaligned accesses efficiently, and brings us a step
closer to eventually removing support for the alignmask field.
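The conversion follows the usual pattern (sketch):

        #include <linux/unaligned.h>

        /* before: plain loads/stores that relied on a nonzero cra_alignmask */
        u32 a = le32_to_cpu(*(const __le32 *)src);

        /* after: explicit unaligned accessors, alignmask of 0 */
        u32 a = get_unaligned_le32(src);
        put_unaligned_le32(a, dst);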
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Instead of specifying a nonzero alignmask, use the unaligned access
helpers. This eliminates unnecessary alignment operations on most CPUs,
which can handle unaligned accesses efficiently, and brings us a step
closer to eventually removing support for the alignmask field.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Instead of specifying a nonzero alignmask, use the unaligned access
helpers. This eliminates unnecessary alignment operations on most CPUs,
which can handle unaligned accesses efficiently, and brings us a step
closer to eventually removing support for the alignmask field.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Instead of specifying a nonzero alignmask, use the unaligned access
helpers. This eliminates unnecessary alignment operations on most CPUs,
which can handle unaligned accesses efficiently, and brings us a step
closer to eventually removing support for the alignmask field.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Instead of specifying a nonzero alignmask, use the unaligned access
helpers. This eliminates unnecessary alignment operations on most CPUs,
which can handle unaligned accesses efficiently, and brings us a step
closer to eventually removing support for the alignmask field.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Since the physical address support in skcipher_walk is not used anymore,
remove all the code associated with it. This includes:
- The skcipher_walk_async() and skcipher_walk_complete() functions;
- The SKCIPHER_WALK_PHYS flag and everything conditional on it;
- The buffers, phys, and virt.page fields in struct skcipher_walk;
- struct skcipher_walk_buffer.
As a result, skcipher_walk now just supports virtual addresses.
Physical address support in skcipher_walk is unneeded because drivers
that need physical addresses just use the scatterlists directly.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Remove the driver for the Stream Processing Unit (SPU) on the Niagara 2.
Removing this driver allows removing the support for physical address
walks in skcipher_walk. That is a misfeature that is used only by this
driver and increases the overhead of the crypto API for everyone else.
There is little evidence that anyone cares about this driver. The
Niagara 2, a.k.a. the UltraSPARC T2, is a server CPU released in
2007. The SPU is also present on the SPARC T3, released in 2010.
However, the SPU went away in SPARC T4, released in 2012, which replaced
it with proper cryptographic instructions instead. These newer
instructions are supported by the kernel in arch/sparc/crypto/.
This driver was completely broken from (at least) 2015 to 2022, from
commit 8996eafdcbad ("crypto: ahash - ensure statesize is non-zero") to
commit 76a4e8745935 ("crypto: n2 - add missing hash statesize"), since
its probe function always returned an error before registering any
algorithms. Though, even with that obvious issue fixed, it is unclear
whether the driver now works correctly. E.g., there are no indications
that anyone has run the self-tests recently.
One bug report for this driver in 2017
(https://lore.kernel.org/r/nycvar.YFH.7.76.1712110214220.28416@n3.vanv.qr)
complained that it crashed the kernel while being loaded. The reporter
didn't seem to care about the functionality of the driver, but rather
just the fact that loading it crashed the kernel. In fact not until
2022 was the driver fixed to maybe actually register its algorithms with
the crypto API. The 2022 fix does have a Reported-by and Tested-by, but
that may similarly have been just about making the error messages go
away as opposed to someone actually wanting to use the driver.
As such, it seems appropriate to retire this driver in mainline.
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As QCE is an order of magnitude slower than the ARMv8 Crypto Extensions
on the CPU, and is also less well tested, give it a lower priority.
Previously the QCE SHA algorithms had higher priority than the ARMv8 CE
equivalents, and ciphers such as AES-XTS had the same priority, which
meant the QCE versions were chosen if they happened to be loaded later.
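Concretely this is just a cra_priority adjustment in the QCE algorithm
definitions, e.g. (values illustrative):

-       .cra_priority   = 300,  /* ties with the ARMv8 CE algorithms */
+       .cra_priority   = 175,  /* strictly below the ARMv8 CE algorithms */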
Fixes: ec8f5d8f6f76 ("crypto: qce - Qualcomm crypto engine driver")
Cc: stable@vger.kernel.org
Cc: Bartosz Golaszewski <brgl@bgdev.pl>
Cc: Neil Armstrong <neil.armstrong@linaro.org>
Cc: Thara Gopinath <thara.gopinath@gmail.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use a scoped guard to simplify the cleanup handling.
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Having switched to workqueue from tasklet, we are no longer limited to
atomic APIs and can now convert the spinlock to a mutex. This, along
with the conversion from tasklet to workqueue, grants us a ~15% improvement
in cryptsetup benchmarks for AES encryption.
While at it: use guards to simplify locking code.
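With the workqueue we run in sleepable context, so the conversion is of
this shape (sketch):

-       spinlock_t lock;
+       struct mutex lock;

-       spin_lock_bh(&qce->lock);
-       /* manipulate the request queue */
-       spin_unlock_bh(&qce->lock);
+       guard(mutex)(&qce->lock);
+       /* manipulate the request queue; unlock is automatic */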
Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Reviewed-by: Neil Armstrong <neil.armstrong@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
There's nothing about the qce driver that requires running from a
tasklet. Switch to using the system workqueue.
Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Reviewed-by: Neil Armstrong <neil.armstrong@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The buffer allocated in qce_ahash_hmac_setkey() is always freed before
returning, so use __free() to automate it.
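The resulting declaration (allocation flags assumed):

        u8 *buf __free(kfree) = kzalloc(keylen + QCE_MAX_ALIGN_SIZE,
                                        GFP_KERNEL);
        if (!buf)
                return -ENOMEM;
        /* every return path now frees buf automatically */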
Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Reviewed-by: Neil Armstrong <neil.armstrong@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Make qce_register_algs() a managed interface. This allows us to further
simplify the remove() callback.
Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Reviewed-by: Neil Armstrong <neil.armstrong@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Make qce_dma_request() into a managed interface. With this we can
simplify the error path in probe() and drop another operation from
remove().
Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Reviewed-by: Neil Armstrong <neil.armstrong@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Use devm_clk_get_optional_enabled() to avoid having to enable the clocks
separately, as well as having to put the clocks in the error path and
the remove() callback.
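The conversion is of this shape (clock name from the existing bindings):

        /* one call gets the clock and enables it; devres disables and
         * puts it automatically on driver detach */
        qce->core = devm_clk_get_optional_enabled(dev, "core");
        if (IS_ERR(qce->core))
                return PTR_ERR(qce->core);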
Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Reviewed-by: Neil Armstrong <neil.armstrong@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
There's no need to call icc_set_bw(qce->mem_path, 0, 0); in the error path
as this will already be done in the release path of devm_of_icc_get().
Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Reviewed-by: Neil Armstrong <neil.armstrong@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If we encounter an error when registering algorithms with the crypto
framework, we just bail out and don't unregister the ones we
successfully registered in prior iterations of the loop.
Add code that goes back over the algos and unregisters them before
returning an error from qce_register_algs().
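A sketch of the unwind for one of the algorithm types (illustrative; the
driver's actual registration goes through its internal ops table):

        for (i = 0; i < ARRAY_SIZE(skcipher_algs); i++) {
                ret = crypto_register_skcipher(&skcipher_algs[i]);
                if (ret) {
                        while (i--)
                                crypto_unregister_skcipher(&skcipher_algs[i]);
                        return ret;
                }
        }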
Cc: stable@vger.kernel.org
Fixes: ec8f5d8f6f76 ("crypto: qce - Qualcomm crypto engine driver")
Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Reviewed-by: Neil Armstrong <neil.armstrong@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
If qce_check_version() fails, we should jump to err_dma as we already
called qce_dma_request() a couple of lines earlier.
Cc: stable@vger.kernel.org
Fixes: ec8f5d8f6f76 ("crypto: qce - Qualcomm crypto engine driver")
Signed-off-by: Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
Reviewed-by: Neil Armstrong <neil.armstrong@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
As sig is now a standalone type, it no longer needs to have a wide
mask that includes akcipher.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
This patch moves the rhashtable mailing list over to linux-crypto.
This would allow rhashtable patches to go through my tree instead
of the networking tree.
More uses are popping up outside of the network stack and having it
under the networking tree no longer makes sense.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
On iMX8DXL/QM/QXP (SECO) and iMX8ULP (ELE) SoCs, access to the controller
region (CAAM page 0) is not permitted from the non-secure world.
Use the JobR's register space to access page 0 registers instead.
Fixes: 6a83830f649a ("crypto: caam - warn if blob_gen key is insecure")
Signed-off-by: Gaurav Jain <gaurav.jain@nxp.com>
Reviewed-by: Ahmad Fatoum <a.fatoum@pengutronix.de>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Document the crypto engine on the QCS8300 Platform.
Acked-by: Rob Herring (Arm) <robh@kernel.org>
Signed-off-by: Yuvaraj Ranganathan <quic_yrangana@quicinc.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Add the compatible string for QCom ICE on qcs8300 SoCs.
Signed-off-by: Yuvaraj Ranganathan <quic_yrangana@quicinc.com>
Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Document QCS8300 compatible for the True Random Number
Generator.
Acked-by: Conor Dooley <conor.dooley@microchip.com>
Signed-off-by: Yuvaraj Ranganathan <quic_yrangana@quicinc.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The error detection of the data aggregation feature is separated from
the compression/decompression feature. This patch enables the error
detection and reporting of the data aggregation feature. When an
unrecoverable error occurs in the algorithm core, the device reports
the error to the driver, and the driver will reset the device.
Signed-off-by: Weili Qian <qianweili@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
The zip device adds a data aggregation feature: data with the same key
can be combined.
This patch enables the device's data aggregation feature.
The new feature is named "hashagg" and is registered with the
uacce subsystem to allow applications to submit data
aggregation operations from user space.
Signed-off-by: Weili Qian <qianweili@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>