198 Commits

Author SHA1 Message Date
Herbert Xu
ed3630b83e crypto: hash - Add crypto_clone_ahash/shash
This patch adds the helpers crypto_clone_ahash and crypto_clone_shash.
They are the hash-specific counterparts of crypto_clone_tfm.

This allows code paths that cannot otherwise allocate a hash tfm
object to do so.  Once a new tfm has been obtained, its key can
then be changed without impacting other users.

Note that only algorithms that implement clone_tfm can be cloned.
However, all keyless hashes can be cloned by simply reusing the
tfm object.
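
A minimal usage sketch (the tfm, key and surrounding error handling are
assumed context, not part of this patch):

  struct crypto_ahash *clone;
  int err;

  clone = crypto_clone_ahash(tfm);
  if (IS_ERR(clone))
          return PTR_ERR(clone);

  /* Re-keying the clone does not affect the original tfm. */
  err = crypto_ahash_setkey(clone, key, keylen);
  ...
  crypto_free_ahash(clone);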

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-04-20 18:20:04 +08:00
Herbert Xu
0a742389bc crypto: acomp - Count error stats differently
Move all stat code specific to acomp into the acomp code.

While we're at it, change the stats so that bytes and counts
are always incremented even in case of error.  This allows the
reference counting to be removed as we can now increment the
counters prior to the operation.

After the operation we simply increase the error count if necessary.
This is safe as errors can only occur synchronously (or rather,
the existing code already ignored asynchronous errors which are
only visible to the callback function).

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-03-14 17:06:42 +08:00
Herbert Xu
d5770679ad crypto: skcipher - Use crypto_request_complete
Use the crypto_request_complete helper instead of calling the
completion function directly.
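
The conversion is mechanical; roughly, call sites change along these
lines (the skcipher request variable is illustrative):

  /* Before: invoke the completion callback by hand. */
  req->base.complete(&req->base, err);

  /* After: go through the helper. */
  crypto_request_complete(&req->base, err);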

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-02-13 18:34:49 +08:00
Herbert Xu
ba354b2fdb crypto: kpp - Use crypto_request_complete
Use the crypto_request_complete helper instead of calling the
completion function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-02-13 18:34:48 +08:00
Herbert Xu
d9588045f5 crypto: hash - Use crypto_request_complete
Use the crypto_request_complete helper instead of calling the
completion function directly.

This patch also removes the voodoo programming previously used
for unaligned ahash operations and replaces it with a sub-request.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-02-13 18:34:48 +08:00
Herbert Xu
700d507805 crypto: akcipher - Use crypto_request_complete
Use the crypto_request_complete helper instead of calling the
completion function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-02-13 18:34:48 +08:00
Herbert Xu
372e6b80ba crypto: aead - Use crypto_request_complete
Use the crypto_request_complete helper instead of calling the
completion function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-02-13 18:34:48 +08:00
Herbert Xu
4cc01c7f3d crypto: acompress - Use crypto_request_complete
Use the crypto_request_complete helper instead of calling the
completion function directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2023-02-13 18:34:48 +08:00
Herbert Xu
a5a49249ef crypto: kpp - Add ctx helpers with DMA alignment
This patch adds helpers to access the kpp context structure and
request context structure with an added alignment for DMA access.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-12-02 18:12:40 +08:00
Herbert Xu
4ac3377645 crypto: akcipher - Add ctx helpers with DMA alignment
This patch adds helpers to access the akcipher context structure and
request context structure with an added alignment for DMA access.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-12-02 18:12:40 +08:00
Herbert Xu
12658ac5e6 crypto: skcipher - Add ctx helpers with DMA alignment
This patch adds helpers to access the skcipher context structure and
request context structure with an added alignment for DMA access.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-12-02 18:12:40 +08:00
Herbert Xu
b5f755fbd5 crypto: hash - Add ctx helpers with DMA alignment
This patch adds helpers to access the ahash context structure and
request context structure with an added alignment for DMA access.
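
A rough sketch of how a DMA-capable driver would use the ahash variants
from this series (the driver struct names are illustrative, and the
exact helper set is an assumption based on this patch series):

  /* At init time: reserve a request context with DMA alignment. */
  crypto_ahash_set_reqsize_dma(tfm, sizeof(struct my_req_ctx));

  /* In the request path: fetch DMA-aligned context pointers. */
  struct my_tfm_ctx *ctx = crypto_ahash_ctx_dma(tfm);
  struct my_req_ctx *rctx = ahash_request_ctx_dma(req);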

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-12-02 18:12:40 +08:00
Herbert Xu
f8e4d1d0ac crypto: aead - Add ctx helpers with DMA alignment
This patch adds helpers to access the aead context structure and
request context structure with an added alignment for DMA access.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-12-02 18:12:40 +08:00
Herbert Xu
14386d4713 crypto: Prepare to move crypto_tfm_ctx
The helper crypto_tfm_ctx is only used by the Crypto API algorithm
code and should really be in algapi.h.  However, for historical
reasons many files relied on it to be in crypto.h.  This patch
changes those files to use algapi.h instead in preparation for a
move.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-12-02 18:12:40 +08:00
Herbert Xu
4d2b225a67 crypto: kpp - Move reqsize into tfm
The value of reqsize cannot be determined in case of fallbacks.
Therefore it must be stored in the tfm and not the alg object.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-12-02 18:12:40 +08:00
Herbert Xu
3e71e5b0ef crypto: akcipher - Move reqsize into tfm
The value of reqsize cannot be determined in case of fallbacks.
Therefore it must be stored in the tfm and not the alg object.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-12-02 18:12:39 +08:00
Herbert Xu
56861cbde1 crypto: kpp - Add helper to set reqsize
The value of reqsize should only be changed through a helper, so
start by adding one.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-12-02 18:12:39 +08:00
Eric Biggers
c060e16ddb Revert "crypto: shash - avoid comparing pointers to exported functions under CFI"
This reverts commit 22ca9f4aaf431a9413dcc115dd590123307f274f because CFI
no longer breaks cross-module function address equality, so
crypto_shash_alg_has_setkey() can now be an inline function like before.

This commit should not be backported to kernels that don't have the new
CFI implementation.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-11-25 17:39:19 +08:00
Herbert Xu
e6cb02bd0a crypto: skcipher - Allow sync algorithms with large request contexts
Some sync algorithms may require a large amount of temporary
space during their operations.  There is no reason why they should
be limited just because some legacy users want to place all
temporary data on the stack.

Such algorithms can now set a flag to indicate that they need
extra request context, which will cause them to be invisible
to users that go through the sync_skcipher interface.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-11-18 16:59:34 +08:00
Gaosheng Cui
d126edd771 crypto: aead - Remove unused inline functions from aead
The aead_enqueue_request, aead_dequeue_request and aead_get_backlog
helpers have been unused since commit 04a4616e6a21 ("crypto: omap-aes-gcm
- convert to use crypto engine"); their functionality has been replaced
by the crypto engine, so remove them.

Signed-off-by: Gaosheng Cui <cuigaosheng1@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-09-30 13:59:13 +08:00
Jason A. Donenfeld
2d16803c56 crypto: blake2s - remove shash module
BLAKE2s has no currently known use as an shash. Just remove all of this
unnecessary plumbing. Removing this shash was something we talked about
back when we were making BLAKE2s a built-in, but I simply never got
around to doing it. So this completes that project.

Importantly, this fixes a bug in which the lib code depends on
crypto_simd_disabled_for_test, causing linker errors.

Also add more alignment tests to the selftests and compare SIMD and
non-SIMD compression functions, to make up for what we lose from
testmgr.c.

Reported-by: gaochao <gaochao49@huawei.com>
Cc: Eric Biggers <ebiggers@kernel.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: stable@vger.kernel.org
Fixes: 6048fdcc5f26 ("lib/crypto: blake2s: include as built-in")
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-06-10 16:43:49 +08:00
Linus Torvalds
93e220a62d Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto updates from Herbert Xu:
 "API:
   - hwrng core now credits for low-quality RNG devices.

  Algorithms:
   - Optimisations for neon aes on arm/arm64.
   - Add accelerated crc32_be on arm64.
   - Add ffdheXYZ(dh) templates.
   - Disallow hmac keys < 112 bits in FIPS mode.
   - Add AVX assembly implementation for sm3 on x86.

  Drivers:
   - Add missing local_bh_disable calls for crypto_engine callback.
   - Ensure BH is disabled in crypto_engine callback path.
   - Fix zero length DMA mappings in ccree.
   - Add synchronization between mailbox accesses in octeontx2.
   - Add Xilinx SHA3 driver.
   - Add support for the TDES IP available on sama7g5 SoC in atmel"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (137 commits)
  crypto: xilinx - Turn SHA into a tristate and allow COMPILE_TEST
  MAINTAINERS: update HPRE/SEC2/TRNG driver maintainers list
  crypto: dh - Remove the unused function dh_safe_prime_dh_alg()
  hwrng: nomadik - Change clk_disable to clk_disable_unprepare
  crypto: arm64 - cleanup comments
  crypto: qat - fix initialization of pfvf rts_map_msg structures
  crypto: qat - fix initialization of pfvf cap_msg structures
  crypto: qat - remove unneeded assignment
  crypto: qat - disable registration of algorithms
  crypto: hisilicon/qm - fix memset during queues clearing
  crypto: xilinx: prevent probing on non-xilinx hardware
  crypto: marvell/octeontx - Use swap() instead of open coding it
  crypto: ccree - Fix use after free in cc_cipher_exit()
  crypto: ccp - ccp_dmaengine_unregister release dma channels
  crypto: octeontx2 - fix missing unlock
  hwrng: cavium - fix NULL but dereferenced coccicheck error
  crypto: cavium/nitrox - don't cast parameter in bit operations
  crypto: vmx - add missing dependencies
  MAINTAINERS: Add maintainer for Xilinx ZynqMP SHA3 driver
  crypto: xilinx - Add Xilinx SHA3 driver
  ...
2022-03-21 16:02:36 -07:00
Nicolai Stange
46ed5269bf crypto: kpp - provide support for KPP spawns
The upcoming support for the RFC 7919 ffdhe group parameters will be
made available in the form of templates like "ffdhe2048(dh)",
"ffdhe3072(dh)" and so on. Template instantiations thereof would wrap the
inner "dh" kpp_alg and also provide kpp_alg services to the outside again.

The primitives needed for providing kpp_alg services from template instances
have been introduced with the previous patch. Continue this work now and
implement everything needed for enabling template instances to make use
of inner KPP algorithms like "dh".

More specifically, define a struct crypto_kpp_spawn in close analogy to
crypto_skcipher_spawn, crypto_shash_spawn and alike. Implement a
crypto_grab_kpp() and crypto_drop_kpp() pair for binding such a spawn to
some inner kpp_alg and for releasing it respectively. Template
implementations can instantiate transforms from the underlying kpp_alg by
means of the new crypto_spawn_kpp(). Finally, provide the
crypto_spawn_kpp_alg() helper for accessing a spawn's underlying kpp_alg
during template instantiation.

Annotate everything with proper kernel-doc comments, even though
include/crypto/internal/kpp.h is not considered for the generated docs.
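
A condensed sketch of a template ->create() using these primitives,
with error handling and most instance setup elided (the local variable
names and the use of tb[] are illustrative; kpp_instance_ctx() and
kpp_crypto_instance() come from the previous patch in the series):

  struct crypto_kpp_spawn *spawn = kpp_instance_ctx(inst);
  struct kpp_alg *dh_alg;
  int err;

  err = crypto_grab_kpp(spawn, kpp_crypto_instance(inst),
                        crypto_attr_alg_name(tb[1]), 0, 0);
  if (err)
          goto err_free_inst;

  dh_alg = crypto_spawn_kpp_alg(spawn);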

Signed-off-by: Nicolai Stange <nstange@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-03-03 10:47:50 +12:00
Nicolai Stange
1038fd78a1 crypto: kpp - provide support for KPP template instances
The upcoming support for the RFC 7919 ffdhe group parameters will be
made available in the form of templates like "ffdhe2048(dh)",
"ffdhe3072(dh)" and so on. Template instantiations thereof would wrap the
inner "dh" kpp_alg and also provide kpp_alg services to the outside again.
Furthermore, it might perhaps be desirable to provide KDF templates in
the future, which would similarly wrap an inner kpp_alg and present
themselves to the outside as another kpp_alg, transforming the shared
secret on its way out.

Introduce the bits needed for supporting KPP template instances. Everything
related to inner kpp_alg spawns potentially being held by such template
instances will be deferred to a subsequent patch in order to facilitate
review.

Define struct kpp_instance in close analogy to the already existing
skcipher_instance, shash_instance and alike, but wrapping a struct kpp_alg.
Implement the new kpp_register_instance() template instance registration
primitive. Provide some helper functions for
- going back and forth between a generic struct crypto_instance and the new
  struct kpp_instance,
- obtaining the instantiating kpp_instance from a crypto_kpp transform and
- for accessing a given kpp_instance's implementation specific context
  data.

Annotate everything with proper kernel-doc comments, even though
include/crypto/internal/kpp.h is not considered for the generated docs.

Signed-off-by: Nicolai Stange <nstange@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-03-03 10:47:49 +12:00
Jason A. Donenfeld
d2a02e3c8b lib/crypto: blake2s: avoid indirect calls to compression function for Clang CFI
blake2s_compress_generic is weakly aliased by blake2s_compress. The
current harness for function selection uses a function pointer, which is
ordinarily inlined and resolved at compile time. But when Clang's CFI is
enabled, CFI still triggers when making an indirect call via a weak
symbol. This seems like a bug in Clang's CFI, as though it's bucketing
weak symbols and strong symbols differently. It also only seems to
trigger when "full LTO" mode is used, rather than "thin LTO".

[    0.000000][    T0] Kernel panic - not syncing: CFI failure (target: blake2s_compress_generic+0x0/0x1444)
[    0.000000][    T0] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.16.0-mainline-06981-g076c855b846e #1
[    0.000000][    T0] Hardware name: MT6873 (DT)
[    0.000000][    T0] Call trace:
[    0.000000][    T0]  dump_backtrace+0xfc/0x1dc
[    0.000000][    T0]  dump_stack_lvl+0xa8/0x11c
[    0.000000][    T0]  panic+0x194/0x464
[    0.000000][    T0]  __cfi_check_fail+0x54/0x58
[    0.000000][    T0]  __cfi_slowpath_diag+0x354/0x4b0
[    0.000000][    T0]  blake2s_update+0x14c/0x178
[    0.000000][    T0]  _extract_entropy+0xf4/0x29c
[    0.000000][    T0]  crng_initialize_primary+0x24/0x94
[    0.000000][    T0]  rand_initialize+0x2c/0x6c
[    0.000000][    T0]  start_kernel+0x2f8/0x65c
[    0.000000][    T0]  __primary_switched+0xc4/0x7be4
[    0.000000][    T0] Rebooting in 5 seconds..

Nonetheless, the function pointer method isn't so terrific anyway, so
this patch replaces it with a simple boolean, which also gets inlined
away. This successfully works around the Clang bug.

In general, I'm not too keen on all of the indirection involved here; it
clearly does more harm than good. Hopefully the whole thing can get
cleaned up down the road when lib/crypto is overhauled more
comprehensively. But for now, we go with a simple bandaid.
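
Sketched, the selection goes from an indirect call through the weak
alias to a branch the compiler can fold away (argument names here
follow the blake2s update path and are illustrative):

  if (force_generic)
          blake2s_compress_generic(state, block, nblocks, inc);
  else
          blake2s_compress(state, block, nblocks, inc);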

Fixes: 6048fdcc5f26 ("lib/crypto: blake2s: include as built-in")
Link: https://github.com/ClangBuiltLinux/linux/issues/1567
Reported-by: Miles Chen <miles.chen@mediatek.com>
Tested-by: Miles Chen <miles.chen@mediatek.com>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Tested-by: John Stultz <john.stultz@linaro.org>
Acked-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-02-04 19:22:32 +01:00
Linus Torvalds
5c947d0dba Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto updates from Herbert Xu:
 "Algorithms:

   - Drop alignment requirement for data in aesni

   - Use synchronous seeding from the /dev/random in DRBG

   - Reseed nopr DRBGs every 5 minutes from /dev/random

   - Add KDF algorithms currently used by security/DH

   - Fix lack of entropy on some AMD CPUs with jitter RNG

  Drivers:

   - Add support for the D1 variant in sun8i-ce

   - Add SEV_INIT_EX support in ccp

   - PFVF support for GEN4 host driver in qat

   - Compression support for GEN4 devices in qat

   - Add cn10k random number generator support"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (145 commits)
  crypto: af_alg - rewrite NULL pointer check
  lib/mpi: Add the return value check of kcalloc()
  crypto: qat - fix definition of ring reset results
  crypto: hisilicon - cleanup warning in qm_get_qos_value()
  crypto: kdf - select SHA-256 required for self-test
  crypto: x86/aesni - don't require alignment of data
  crypto: ccp - remove unneeded semicolon
  crypto: stm32/crc32 - Fix kernel BUG triggered in probe()
  crypto: s390/sha512 - Use macros instead of direct IV numbers
  crypto: sparc/sha - remove duplicate hash init function
  crypto: powerpc/sha - remove duplicate hash init function
  crypto: mips/sha - remove duplicate hash init function
  crypto: sha256 - remove duplicate generic hash init function
  crypto: jitter - add oversampling of noise source
  MAINTAINERS: update SEC2 driver maintainers list
  crypto: ux500 - Use platform_get_irq() to get the interrupt
  crypto: hisilicon/qm - disable qm clock-gating
  crypto: omap-aes - Fix broken pm_runtime_and_get() usage
  MAINTAINERS: update caam crypto driver maintainers list
  crypto: octeontx2 - prevent underflow in get_cores_bmap()
  ...
2022-01-11 10:21:35 -08:00
Jason A. Donenfeld
6048fdcc5f lib/crypto: blake2s: include as built-in
In preparation for using blake2s in the RNG, we change the way that it
is wired-in to the build system. Instead of using ifdefs to select the
right symbol, we use weak symbols. And because ARM doesn't need the
generic implementation, we make the generic one default only if an arch
library doesn't need it already, and then have arch libraries that do
need it opt-in. So that the arch libraries can remain tristate rather
than bool, we then split the shash part from the glue code.

Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Masahiro Yamada <masahiroy@kernel.org>
Cc: linux-kbuild@vger.kernel.org
Cc: linux-crypto@vger.kernel.org
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
2022-01-07 00:25:25 +01:00
Stephan Müller
b808f32023 crypto: kdf - Add key derivation self-test support code
As a preparation to add the key derivation implementations, the
self-test data structure definition and the common test code are made
available.

The test framework follows the testing applied by the NIST CAVP test
approach.

The structure of the test code follows the implementations found in
crypto/testmgr.c|h. If the KDF implementations are ever made available
via kernel crypto API templates, the test code is intended to be merged
into testmgr.c|h.

Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-11-26 16:25:17 +11:00
Daniele Alessandrelli
eaffe377e1 crypto: ecc - Export additional helper functions
Export the following additional ECC helper functions:
- ecc_alloc_point()
- ecc_free_point()
- vli_num_bits()
- ecc_point_is_zero()

This is done to allow future ECC device drivers to re-use existing code,
thus simplifying their implementation.

Functions are exported using EXPORT_SYMBOL() (instead of
EXPORT_SYMBOL_GPL()) to be consistent with the functions already
exported by crypto/ecc.c.

Exported functions are documented in include/crypto/internal/ecc.h.
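
An illustrative (not prescriptive) driver-side use of the newly
exported helpers, e.g. while sanity-checking a peer public key:

  struct ecc_point *pk = ecc_alloc_point(ndigits);

  ...
  if (ecc_point_is_zero(pk))
          return -EINVAL;        /* point at infinity */
  ...
  ecc_free_point(pk);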

Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-10-29 21:04:03 +08:00
Daniele Alessandrelli
a745d3ace3 crypto: ecc - Move ecc.h to include/crypto/internal
Move ecc.h header file to 'include/crypto/internal' so that it can be
easily imported from everywhere in the kernel tree.

This change is done to allow crypto device drivers to re-use the symbols
exported by 'crypto/ecc.c', thus avoiding code duplication.

Signed-off-by: Daniele Alessandrelli <daniele.alessandrelli@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-10-29 21:04:03 +08:00
Ard Biesheuvel
22ca9f4aaf crypto: shash - avoid comparing pointers to exported functions under CFI
crypto_shash_alg_has_setkey() is implemented by testing whether the
.setkey() member of a struct shash_alg points to the default version,
called shash_no_setkey(). As crypto_shash_alg_has_setkey() is a static
inline, this requires shash_no_setkey() to be exported to modules.

Unfortunately, when building with CFI, function pointers are routed
via CFI stubs which are private to each module (or to the kernel proper)
and so this function pointer comparison may fail spuriously.

Let's fix this by turning crypto_shash_alg_has_setkey() into an out of
line function.
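
For context, the static inline being moved out of line is essentially a
function-pointer comparison (sketched from the pre-patch header):

  static inline bool crypto_shash_alg_has_setkey(struct shash_alg *alg)
  {
          return alg->setkey != shash_no_setkey;
  }

Under CFI, a module and the kernel proper may see different stub
addresses for shash_no_setkey, so the comparison can go wrong.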

Cc: Sami Tolvanen <samitolvanen@google.com>
Cc: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-06-17 15:07:31 +08:00
Arnd Bergmann
8d195e7a8a crypto: poly1305 - fix poly1305_core_setkey() declaration
gcc-11 points out a mismatch between the declaration and the definition
of poly1305_core_setkey():

lib/crypto/poly1305-donna32.c:13:67: error: argument 2 of type ‘const u8[16]’ {aka ‘const unsigned char[16]’} with mismatched bound [-Werror=array-parameter=]
   13 | void poly1305_core_setkey(struct poly1305_core_key *key, const u8 raw_key[16])
      |                                                          ~~~~~~~~~^~~~~~~~~~~
In file included from lib/crypto/poly1305-donna32.c:11:
include/crypto/internal/poly1305.h:21:68: note: previously declared as ‘const u8 *’ {aka ‘const unsigned char *’}
   21 | void poly1305_core_setkey(struct poly1305_core_key *key, const u8 *raw_key);

This is harmless in principle, as the calling conventions are the same,
but the more specific prototype allows better type checking in the
caller.

Change the declaration to match the actual function definition.
poly1305_simd_init() is a bit suspicious here, as it previously
had a 32-byte argument type, but looks like it needs to take the
16-byte POLY1305_BLOCK_SIZE array instead.
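
The declaration in include/crypto/internal/poly1305.h thus becomes,
matching the definition quoted above:

  void poly1305_core_setkey(struct poly1305_core_key *key,
                            const u8 raw_key[16]);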

Fixes: 1c08a104360f ("crypto: poly1305 - add new 32 and 64-bit generic versions")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-04-02 18:28:12 +11:00
Ard Biesheuvel
64ca771cd6 crypto: x86 - remove glue helper module
All dependencies on the x86 glue helper module have been replaced by
local instantiations of the new ECB/CBC preprocessor helper macros, so
the glue helper module can be retired.

Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-14 17:10:29 +11:00
Eric Biggers
28dcca4cc0 crypto: blake2b - sync with blake2s implementation
Sync the BLAKE2b code with the BLAKE2s code as much as possible:

- Move a lot of code into new headers <crypto/blake2b.h> and
  <crypto/internal/blake2b.h>, and adjust it to be like the
  corresponding BLAKE2s code, i.e. like <crypto/blake2s.h> and
  <crypto/internal/blake2s.h>.

- Rename constants, e.g. BLAKE2B_*_DIGEST_SIZE => BLAKE2B_*_HASH_SIZE.

- Use a macro BLAKE2B_ALG() to define the shash_alg structs.

- Export blake2b_compress_generic() for use as a fallback.

This makes it much easier to add optimized implementations of BLAKE2b,
as optimized implementations can use the helper functions
crypto_blake2b_{setkey,init,update,final}() and
blake2b_compress_generic().  The ARM implementation will use these.

But this change is also helpful because it eliminates unnecessary
differences between the BLAKE2b and BLAKE2s code, so that the same
improvements can easily be made to both.  (The two algorithms are
basically identical, except for the word size and constants.)  It also
makes it straightforward to add a library API for BLAKE2b in the future
if/when it's needed.

This change does make the BLAKE2b code slightly more complicated than it
needs to be, as it doesn't actually provide a library API yet.  For
example, __blake2b_update() doesn't really need to exist yet; it could
just be inlined into crypto_blake2b_update().  But I believe this is
outweighed by the benefits of keeping the code in sync.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:39 +11:00
Eric Biggers
8786841bc2 crypto: blake2s - adjust include guard naming
Use the full path in the include guards for the BLAKE2s headers to avoid
ambiguity and to match the convention for most files in include/crypto/.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:38 +11:00
Eric Biggers
42ad8cf821 crypto: blake2s - optimize blake2s initialization
If no key was provided, then don't waste time initializing the block
buffer, as its initial contents won't be used.

Also, make crypto_blake2s_init() and blake2s() call a single internal
function __blake2s_init() which treats the key as optional, rather than
conditionally calling blake2s_init() or blake2s_init_key().  This
reduces the compiled code size, as previously both blake2s_init() and
blake2s_init_key() were being inlined into these two callers, except
when the key size passed to blake2s() was a compile-time constant.

These optimizations aren't that significant for BLAKE2s.  However, the
equivalent optimizations will be more significant for BLAKE2b, as
everything is twice as big in BLAKE2b.  And it's good to keep things
consistent rather than making optimizations for BLAKE2b but not BLAKE2s.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:38 +11:00
Eric Biggers
8c4a93a127 crypto: blake2s - share the "shash" API boilerplate code
Add helper functions for shash implementations of BLAKE2s to
include/crypto/internal/blake2s.h, taking advantage of
__blake2s_update() and __blake2s_final() that were added by the previous
patch to share more code between the library and shash implementations.

crypto_blake2s_setkey() and crypto_blake2s_init() are usable as
shash_alg::setkey and shash_alg::init directly, while
crypto_blake2s_update() and crypto_blake2s_final() take an extra
'blake2s_compress_t' function pointer parameter.  This allows the
implementation of the compression function to be overridden, which is
the only part that optimized implementations really care about.

The new functions are inline functions (similar to those in sha1_base.h,
sha256_base.h, and sm3_base.h) because this avoids needing to add a new
module blake2s_helpers.ko, they aren't *too* long, and this avoids
indirect calls which are expensive these days.  Note that they can't go
in blake2s_generic.ko, as that would require selecting CRYPTO_BLAKE2S
from CRYPTO_BLAKE2S_X86, which would cause a recursive dependency.

Finally, use these new helper functions in the x86 implementation of
BLAKE2s.  (This part should be a separate patch, but unfortunately the
x86 implementation used the exact same function names like
"crypto_blake2s_update()", so it had to be updated at the same time.)

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:38 +11:00
Eric Biggers
057edc9c8b crypto: blake2s - move update and final logic to internal/blake2s.h
Move most of blake2s_update() and blake2s_final() into new inline
functions __blake2s_update() and __blake2s_final() in
include/crypto/internal/blake2s.h so that this logic can be shared by
the shash helper functions.  This will avoid duplicating this logic
between the library and shash implementations.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:38 +11:00
Ard Biesheuvel
0eb76ba29d crypto: remove cipher routines from public crypto API
The cipher routines in the crypto API are mostly intended for templates
implementing skcipher modes generically in software, and shouldn't be
used outside of the crypto subsystem. So move the prototypes and all
related definitions to a new header file under include/crypto/internal.
Also, let's use the new module namespace feature to move the symbol
exports into a new namespace CRYPTO_INTERNAL.
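
In practice, legitimate in-kernel users of the bare cipher interface now
include the internal header and import the namespace, along the lines
of:

  #include <crypto/internal/cipher.h>

  MODULE_IMPORT_NS(CRYPTO_INTERNAL);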

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2021-01-03 08:41:35 +11:00
Herbert Xu
ce0d5d63e8 crypto: lib/blake2s - Move selftest prototype into header file
This patch fixes a missing prototype warning on blake2s_selftest.

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-12-04 18:13:14 +11:00
Herbert Xu
b00ba76a03 crypto: ahash - Add ahash_alg_instance
This patch adds the helper ahash_alg_instance which is used to
convert a crypto_ahash object into its corresponding ahash_instance.
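
Typical use is in a template's init_tfm handler, to get back at the
instance context (sketch; the context struct name is illustrative):

  struct ahash_instance *inst = ahash_alg_instance(tfm);
  struct my_instance_ctx *ictx = ahash_instance_ctx(inst);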

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-08-28 16:58:28 +10:00
Ira Weiny
8afa25aa83 crypto: hash - Remove unused async iterators
Revert "crypto: hash - Add real ahash walk interface"
This reverts commit 75ecb231ff45b54afa9f4ec9137965c3c00868f4.

The callers of the functions added by that commit were removed in
ab8085c130ed, so remove these now-unused functions.

Fixes: ab8085c130ed ("crypto: x86 - remove SHA multibuffer routines and mcryptd")
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-08-21 14:47:50 +10:00
Waiman Long
453431a549 mm, treewide: rename kzfree() to kfree_sensitive()
As said by Linus:

  A symmetric naming is only helpful if it implies symmetries in use.
  Otherwise it's actively misleading.

  In "kzalloc()", the z is meaningful and an important part of what the
  caller wants.

  In "kzfree()", the z is actively detrimental, because maybe in the
  future we really _might_ want to use that "memfill(0xdeadbeef)" or
  something. The "zero" part of the interface isn't even _relevant_.

The main reason that kzfree() exists is to clear sensitive information
that should not be leaked to other future users of the same memory
objects.

Rename kzfree() to kfree_sensitive() to follow the example of the recently
added kvfree_sensitive() and make the intention of the API more explicit.
In addition, memzero_explicit() is used to clear the memory to make sure
that it won't get optimized away by the compiler.

The renaming is done by using the command sequence:

  git grep -w --name-only kzfree |\
  xargs sed -i 's/kzfree/kfree_sensitive/'

followed by some editing of the kfree_sensitive() kerneldoc and adding
a kzfree backward compatibility macro in slab.h.

[akpm@linux-foundation.org: fs/crypto/inline_crypt.c needs linux/slab.h]
[akpm@linux-foundation.org: fix fs/crypto/inline_crypt.c some more]

Suggested-by: Joe Perches <joe@perches.com>
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: David Howells <dhowells@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Cc: James Morris <jmorris@namei.org>
Cc: "Serge E. Hallyn" <serge@hallyn.com>
Cc: Joe Perches <joe@perches.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: "Jason A . Donenfeld" <Jason@zx2c4.com>
Link: http://lkml.kernel.org/r/20200616154311.12314-3-longman@redhat.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2020-08-07 11:33:22 -07:00
Eric Biggers
e72b48c5e7 crypto: geniv - remove unneeded arguments from aead_geniv_alloc()
The type and mask arguments to aead_geniv_alloc() are always 0, so
remove them.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-07-16 21:49:07 +10:00
Jason A. Donenfeld
1c08a10436 crypto: poly1305 - add new 32 and 64-bit generic versions
These two C implementations from Zinc -- a 32x32 one and a 64x64 one,
depending on the platform -- come from Andrew Moon's public domain
poly1305-donna portable code, modified for usage in the kernel. The
precomputation in the 32-bit version and the use of 64x64 multiplies in
the 64-bit version make these perform better than the code it replaces.
Moon's code is also very widespread and has received many eyeballs of
scrutiny.

There's a bit of interference with the x86 implementation, which relies
on internal details of the old scalar implementation. In the next
commit, the x86 implementation will be replaced with a faster one that
doesn't rely on this, so none of this matters much. But for now, to keep
this passing the tests, we inline the bits of the old implementation
that the x86 implementation relied on. Also, since we now support a
slightly larger key space, via the union, some offsets had to be fixed
up.

Nonce calculation was folded in with the emit function, to take
advantage of 64x64 arithmetic. However, Adiantum appeared to rely on no
nonce handling in emit, so this path was conditionalized. We also
introduced a new struct, poly1305_core_key, to represent the precise
amount of space that particular implementation uses.

Testing with kbench9000, depending on the CPU, the update function for
the 32x32 version has been improved by 4%-7%, and for the 64x64 by
19%-30%. The 32x32 gains are small, but I think there's great value in
having a parallel implementation to the 64x64 one so that the two can be
compared side-by-side as nice stand-alone units.

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-16 15:18:12 +08:00
Eric Biggers
a39c66cc2f crypto: shash - convert shash_free_instance() to new style
Convert shash_free_instance() and its users to the new way of freeing
instances, where a ->free() method is installed to the instance struct
itself.  This replaces the weakly-typed method crypto_template::free().

This will allow removing support for the old way of freeing instances.

Also give shash_free_instance() a more descriptive name to reflect that
it's only for instances with a single spawn, not for any instance.
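
With the new style, a template's ->create() wires the free method into
the instance itself before registering it, roughly as below (the
single-spawn helper name is assumed from this patch's rename):

  inst->free = shash_free_singlespawn_instance;

  err = shash_register_instance(tmpl, inst);
  if (err)
          shash_free_singlespawn_instance(inst);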

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09 11:30:57 +08:00
Eric Biggers
0f8f6d86d4 crypto: geniv - convert to new way of freeing instances
Convert the "seqiv" template to the new way of freeing instances where a
->free() method is installed to the instance struct itself.  Also remove
the unused implementation of the old way of freeing instances from the
"echainiv" template, since it's already using the new way too.

In doing this, also simplify the code by making the helper function
aead_geniv_alloc() install the ->free() method, instead of making seqiv
and echainiv do this themselves.  This is analogous to how
skcipher_alloc_instance_simple() works.

This will allow removing support for the old way of freeing instances.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09 11:30:57 +08:00
Eric Biggers
48fb3e5785 crypto: hash - add support for new way of freeing instances
Add support to shash and ahash for the new way of freeing instances
(already used for skcipher, aead, and akcipher) where a ->free() method
is installed to the instance struct itself.  These methods are more
strongly-typed than crypto_template::free(), which they replace.

This will allow removing support for the old way of freeing instances.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09 11:30:57 +08:00
Eric Biggers
6d1b41fce0 crypto: ahash - unexport crypto_ahash_type
Now that all the templates that need ahash spawns have been converted to
use crypto_grab_ahash() rather than look up the algorithm directly,
crypto_ahash_type is no longer used outside of ahash.c.  Make it static.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09 11:30:57 +08:00
Eric Biggers
629f1afc15 crypto: algapi - remove obsoleted instance creation helpers
Remove lots of helper functions that were previously used for
instantiating crypto templates, but are now unused:

- crypto_get_attr_alg() and similar functions looked up an inner
  algorithm directly from a template parameter.  These were replaced
  with getting the algorithm's name, then calling crypto_grab_*().

- crypto_init_spawn2() and similar functions initialized a spawn, given
  an algorithm.  Similarly, these were replaced with crypto_grab_*().

- crypto_alloc_instance() and similar functions allocated an instance
  with a single spawn, given the inner algorithm.  These aren't useful
  anymore since crypto_grab_*() need the instance allocated first.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2020-01-09 11:30:57 +08:00