Merge branch into tip/master: 'x86/sev'

# New commits in x86/sev:
    21fc6178e920 ("x86/sev/docs: Document the SNP Reverse Map Table (RMP)")
    8ae3291f773b ("x86/sev: Add full support for a segmented RMP table")
    0f14af0d1d7d ("x86/sev: Treat the contiguous RMP table as a single RMP segment")
    ac517965a5a1 ("x86/sev: Map only the RMP table entries instead of the full RMP range")
    e2f3d40df82e ("x86/sev: Move the SNP probe routine out of the way")
    4972808d6f4a ("x86/sev: Require the RMPREAD instruction after Zen4")
    0cbc02584158 ("x86/sev: Add support for the RMPREAD instruction")
    3e43c60eb3e3 ("x86/sev: Prepare for using the RMPREAD instruction to access the RMP")

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Ingo Molnar 2024-12-19 20:24:28 +01:00
commit 42e848120d
5 changed files with 702 additions and 102 deletions


@@ -130,8 +130,126 @@ SNP feature support.
More details in AMD64 APM[1] Vol 2: 15.34.10 SEV_STATUS MSR
Reverse Map Table (RMP)
=======================
The RMP is a structure in system memory that is used to ensure a one-to-one
mapping between system physical addresses and guest physical addresses. Each
page of memory that is potentially assignable to guests has one entry within
the RMP.
The RMP table can be either contiguous in memory or a collection of segments
in memory.
Contiguous RMP
--------------
Support for this form of the RMP is present when support for SEV-SNP is
present, which can be determined using the CPUID instruction::
0x8000001f[eax]:
Bit[4] indicates support for SEV-SNP
The location of the RMP is identified to the hardware through two MSRs::
0xc0010132 (RMP_BASE):
System physical address of the first byte of the RMP
0xc0010133 (RMP_END):
System physical address of the last byte of the RMP
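For illustration, a minimal host-side sketch of this detection, using the
kernel's cpuid_eax()/rdmsrl() helpers (the function name here is made up; the
kernel's actual probe routine is snp_probe_rmptable_info(), visible further
down in this series)::

  /* Sketch: detect SEV-SNP and locate a contiguous RMP via the MSRs. */
  static bool sketch_probe_contiguous_rmp(u64 *base, u64 *end)
  {
          if (!(cpuid_eax(0x8000001f) & BIT(4)))  /* SEV-SNP supported? */
                  return false;

          rdmsrl(MSR_AMD64_RMP_BASE, *base);      /* first byte of the RMP */
          rdmsrl(MSR_AMD64_RMP_END, *end);        /* last byte of the RMP  */

          return *base != 0 && *end != 0;         /* zero => not reserved by BIOS */
  }
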
Hardware requires that RMP_BASE and (RMP_END + 1) be 8KB aligned, but SEV
firmware increases the alignment requirement to 1MB.
The RMP consists of a 16KB region used for processor bookkeeping followed
by the RMP entries, which are 16 bytes in size. The size of the RMP
determines the range of physical memory that the hypervisor can assign to
SEV-SNP guests. The RMP covers the system physical address from::
0 to ((RMP_END + 1 - RMP_BASE - 16KB) / 16B) x 4KB.
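As a sketch of that arithmetic (the helper is illustrative and assumes the
RMP_BASE/RMP_END MSR values have already been read)::

  /* Sketch: bytes of physical memory covered by a contiguous RMP. */
  static u64 sketch_rmp_coverage(u64 rmp_base, u64 rmp_end)
  {
          u64 rmp_size   = rmp_end + 1 - rmp_base;    /* bytes reserved by BIOS    */
          u64 nr_entries = (rmp_size - SZ_16K) / 16;  /* 16-byte entries follow the
                                                       * 16KB bookkeeping area     */

          return nr_entries * SZ_4K;                  /* one entry per 4KB page    */
  }
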
The current Linux support relies on BIOS to allocate/reserve the memory for
the RMP and to set RMP_BASE and RMP_END appropriately. Linux uses the MSR
values to locate the RMP and determine the size of the RMP. The RMP must
cover all of system memory in order for Linux to enable SEV-SNP.
Segmented RMP
-------------
Segmented RMP support is a new way of representing the layout of an RMP.
Initial RMP support required the RMP table to be contiguous in memory.
RMP accesses from a NUMA node on which the RMP doesn't reside
can take longer than accesses from a NUMA node on which the RMP resides.
Segmented RMP support allows the RMP entries to be located on the same
node as the memory they cover, potentially reducing the latency of
accessing an RMP entry for that memory. Each
RMP segment covers a specific range of system physical addresses.
Support for this form of the RMP can be determined using the CPUID
instruction::
0x8000001f[eax]:
Bit[23] indicates support for segmented RMP
If supported, segmented RMP attributes can be found using the CPUID
instruction::
0x80000025[eax]:
Bits[5:0] minimum supported RMP segment size
Bits[11:6] maximum supported RMP segment size
0x80000025[ebx]:
Bits[9:0] number of cacheable RMP segment definitions
Bit[10] indicates if the number of cacheable RMP segments
is a hard limit
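A field-extraction sketch mirroring how the kernel reads these attributes
(variable names are illustrative)::

  /* Sketch: decode the segmented RMP attributes from CPUID 0x80000025. */
  static void sketch_read_segmented_rmp_attrs(void)
  {
          unsigned int eax = cpuid_eax(0x80000025);
          unsigned int ebx = cpuid_ebx(0x80000025);
          unsigned int shift_min =  eax & GENMASK(5, 0);         /* min segment size (power of 2) */
          unsigned int shift_max = (eax & GENMASK(11, 6)) >> 6;  /* max segment size (power of 2) */
          unsigned int rst_max   =  ebx & GENMASK(9, 0);         /* cacheable RST entry count     */
          bool hard_limit        =  ebx & BIT(10);               /* that count is a hard limit    */

          pr_info("RMP segment shift min/max %u/%u, RST entries %u%s\n",
                  shift_min, shift_max, rst_max,
                  hard_limit ? " (hard limit)" : "");
  }
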
To enable a segmented RMP, a new MSR is available::
0xc0010136 (RMP_CFG):
Bit[0] indicates if segmented RMP is enabled
Bits[13:8] contain the size of memory covered by an RMP
segment (expressed as a power of 2)
The RMP segment size defined in the RMP_CFG MSR applies to all segments
of the RMP. Therefore each RMP segment covers a specific range of system
physical addresses. For example, if the RMP_CFG MSR value is 0x2401, then
the RMP segment coverage value is 0x24 => 36, meaning the size of memory
covered by an RMP segment is 64GB (1 << 36). So the first RMP segment
covers physical addresses from 0 to 0xF_FFFF_FFFF, the second RMP segment
covers physical addresses from 0x10_0000_0000 to 0x1F_FFFF_FFFF, etc.
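Continuing the 0x2401 example, a sketch of how a physical address is mapped to
its RMP segment index (the function is hypothetical; the kernel expresses the
same thing with its MSR_AMD64_RMP_SEGMENT_SHIFT() and RST_ENTRY_INDEX()
macros)::

  /* Sketch: RMP segment index for a physical address, given RMP_CFG. */
  static unsigned int sketch_rmp_segment_index(u64 paddr, u64 rmp_cfg)
  {
          unsigned int shift = (rmp_cfg & GENMASK_ULL(13, 8)) >> 8;  /* 0x24 = 36 */

          return paddr >> shift;  /* e.g. 0x10_0000_0000 >> 36 == segment 1 */
  }
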
When a segmented RMP is enabled, RMP_BASE points to the RMP bookkeeping
area as it does today (16K in size). However, instead of RMP entries
beginning immediately after the bookkeeping area, there is a 4K RMP
segment table (RST). Each entry in the RST is 8 bytes in size and represents
an RMP segment::
Bits[19:0] mapped size (in GB)
The mapped size can be less than the defined segment size.
A value of zero indicates that no RMP exists for the range
of system physical addresses associated with this segment.
Bits[51:20] segment physical address
This value is shifted left by 20 bits (or the entry is simply
masked when read) to form the physical address of the segment
(1MB aligned).
The RST can hold 512 segment entries but can be limited in size to the number
of cacheable RMP segments (CPUID 0x80000025_EBX[9:0]) if the number of cacheable
RMP segments is a hard limit (CPUID 0x80000025_EBX[10]).
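A decoding sketch for one RST entry, following the bit layout above (the
struct and helper are illustrative; the kernel uses the equivalent
RST_ENTRY_MAPPED_SIZE()/RST_ENTRY_SEGMENT_BASE() macros)::

  /* Sketch: pull the two fields out of an 8-byte RST entry. */
  struct rst_fields {
          u64 mapped_size_gb;  /* Bits[19:0]; zero means no RMP for this segment  */
          u64 segment_pa;      /* Bits[51:20], masked in place: already a
                                * 1MB-aligned physical address                    */
  };

  static struct rst_fields sketch_decode_rst_entry(u64 entry)
  {
          return (struct rst_fields) {
                  .mapped_size_gb = entry & GENMASK_ULL(19, 0),
                  .segment_pa     = entry & GENMASK_ULL(51, 20),
          };
  }
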
The current Linux support relies on BIOS to allocate/reserve the memory for
the segmented RMP (the bookkeeping area, RST, and all segments), build the RST
and to set RMP_BASE, RMP_END, and RMP_CFG appropriately. Linux uses the MSR
values to locate the RMP and determine the size and location of the RMP
segments. The RMP must cover all of system memory in order for Linux to enable
SEV-SNP.
More details in the AMD64 APM Vol 2, section "15.36.3 Reverse Map Table",
docID: 24593.
Secure VM Service Module (SVSM)
===============================
SNP provides a feature called Virtual Machine Privilege Levels (VMPL) which
defines four privilege levels at which guest software can run. The most
privileged level is 0 and numerically higher numbers have lesser privileges.


@@ -451,6 +451,8 @@
#define X86_FEATURE_V_TSC_AUX (19*32+ 9) /* Virtual TSC_AUX */
#define X86_FEATURE_SME_COHERENT (19*32+10) /* AMD hardware-enforced cache coherency */
#define X86_FEATURE_DEBUG_SWAP (19*32+14) /* "debug_swap" AMD SEV-ES full debug state swap support */
#define X86_FEATURE_RMPREAD (19*32+21) /* RMPREAD instruction */
#define X86_FEATURE_SEGMENTED_RMP (19*32+23) /* Segmented RMP support */
#define X86_FEATURE_SVSM (19*32+28) /* "svsm" SVSM present */
/* AMD-defined Extended Feature 2 EAX, CPUID level 0x80000021 (EAX), word 20 */


@@ -644,6 +644,7 @@
#define MSR_AMD64_IBS_REG_COUNT_MAX 8 /* includes MSR_AMD64_IBSBRTARGET */
#define MSR_AMD64_SVM_AVIC_DOORBELL 0xc001011b
#define MSR_AMD64_VM_PAGE_FLUSH 0xc001011e
#define MSR_AMD64_VIRT_SPEC_CTRL 0xc001011f
#define MSR_AMD64_SEV_ES_GHCB 0xc0010130
#define MSR_AMD64_SEV 0xc0010131
#define MSR_AMD64_SEV_ENABLED_BIT 0
@@ -682,11 +683,12 @@
#define MSR_AMD64_SNP_SMT_PROT BIT_ULL(MSR_AMD64_SNP_SMT_PROT_BIT)
#define MSR_AMD64_SNP_RESV_BIT 18
#define MSR_AMD64_SNP_RESERVED_MASK GENMASK_ULL(63, MSR_AMD64_SNP_RESV_BIT)
#define MSR_AMD64_VIRT_SPEC_CTRL 0xc001011f
#define MSR_AMD64_RMP_BASE 0xc0010132
#define MSR_AMD64_RMP_END 0xc0010133
#define MSR_AMD64_RMP_CFG 0xc0010136
#define MSR_AMD64_SEG_RMP_ENABLED_BIT 0
#define MSR_AMD64_SEG_RMP_ENABLED BIT_ULL(MSR_AMD64_SEG_RMP_ENABLED_BIT)
#define MSR_AMD64_RMP_SEGMENT_SHIFT(x) (((x) & GENMASK_ULL(13, 8)) >> 8)
#define MSR_SVSM_CAA 0xc001f000


@@ -355,10 +355,15 @@ static void bsp_determine_snp(struct cpuinfo_x86 *c)
/*
* RMP table entry format is not architectural and is defined by the
* per-processor PPR. Restrict SNP support on the known CPU models
* for which the RMP table entry format is currently defined for.
* for which the RMP table entry format is currently defined or for
* processors which support the architecturally defined RMPREAD
* instruction.
*/
if (!cpu_has(c, X86_FEATURE_HYPERVISOR) &&
c->x86 >= 0x19 && snp_probe_rmptable_info()) {
(cpu_feature_enabled(X86_FEATURE_ZEN3) ||
cpu_feature_enabled(X86_FEATURE_ZEN4) ||
cpu_feature_enabled(X86_FEATURE_RMPREAD)) &&
snp_probe_rmptable_info()) {
cc_platform_set(CC_ATTR_HOST_SEV_SNP);
} else {
setup_clear_cpu_cap(X86_FEATURE_SEV_SNP);


@@ -18,6 +18,7 @@
#include <linux/cpumask.h>
#include <linux/iommu.h>
#include <linux/amd-iommu.h>
#include <linux/nospec.h>
#include <asm/sev.h>
#include <asm/processor.h>
@@ -31,10 +32,29 @@
#include <asm/iommu.h>
/*
* The RMP entry format is not architectural. The format is defined in PPR
* Family 19h Model 01h, Rev B1 processor.
* The RMP entry information as returned by the RMPREAD instruction.
*/
struct rmpentry {
u64 gpa;
u8 assigned :1,
rsvd1 :7;
u8 pagesize :1,
hpage_region_status :1,
rsvd2 :6;
u8 immutable :1,
rsvd3 :7;
u8 rsvd4;
u32 asid;
} __packed;
/*
* The raw RMP entry format is not architectural. The format is defined in PPR
* Family 19h Model 01h, Rev B1 processor. This format represents the actual
* entry in the RMP table memory. The bitfield definitions are used for machines
* without the RMPREAD instruction (Zen3 and Zen4), otherwise the "hi" and "lo"
* fields are only used for dumping the raw data.
*/
struct rmpentry_raw {
union {
struct {
u64 assigned : 1,
@@ -58,12 +78,48 @@ struct rmpentry {
*/
#define RMPTABLE_CPU_BOOKKEEPING_SZ 0x4000
/*
* For a non-segmented RMP table, use the maximum physical addressing as the
* segment size in order to always arrive at index 0 in the table.
*/
#define RMPTABLE_NON_SEGMENTED_SHIFT 52
struct rmp_segment_desc {
struct rmpentry_raw *rmp_entry;
u64 max_index;
u64 size;
};
/*
* Segmented RMP Table support.
* - The segment size is used for two purposes:
* - Identify the amount of memory covered by an RMP segment
* - Quickly locate an RMP segment table entry for a physical address
*
* - The RMP segment table contains pointers to an RMP table that covers
* a specific portion of memory. There can be up to 512 8-byte entries,
* one page's worth.
*/
#define RST_ENTRY_MAPPED_SIZE(x) ((x) & GENMASK_ULL(19, 0))
#define RST_ENTRY_SEGMENT_BASE(x) ((x) & GENMASK_ULL(51, 20))
#define RST_SIZE SZ_4K
static struct rmp_segment_desc **rmp_segment_table __ro_after_init;
static unsigned int rst_max_index __ro_after_init = 512;
static unsigned int rmp_segment_shift;
static u64 rmp_segment_size;
static u64 rmp_segment_mask;
#define RST_ENTRY_INDEX(x) ((x) >> rmp_segment_shift)
#define RMP_ENTRY_INDEX(x) ((u64)(PHYS_PFN((x) & rmp_segment_mask)))
static u64 rmp_cfg;
/* Mask to apply to a PFN to get the first PFN of a 2MB page */
#define PFN_PMD_MASK GENMASK_ULL(63, PMD_SHIFT - PAGE_SHIFT)
static u64 probed_rmp_base, probed_rmp_size;
static struct rmpentry *rmptable __ro_after_init;
static u64 rmptable_max_pfn __ro_after_init;
static LIST_HEAD(snp_leaked_pages_list);
static DEFINE_SPINLOCK(snp_leaked_pages_list_lock);
@@ -116,36 +172,6 @@ static __init void snp_enable(void *arg)
__snp_enable(smp_processor_id());
}
#define RMP_ADDR_MASK GENMASK_ULL(51, 13)
bool snp_probe_rmptable_info(void)
{
u64 rmp_sz, rmp_base, rmp_end;
rdmsrl(MSR_AMD64_RMP_BASE, rmp_base);
rdmsrl(MSR_AMD64_RMP_END, rmp_end);
if (!(rmp_base & RMP_ADDR_MASK) || !(rmp_end & RMP_ADDR_MASK)) {
pr_err("Memory for the RMP table has not been reserved by BIOS\n");
return false;
}
if (rmp_base > rmp_end) {
pr_err("RMP configuration not valid: base=%#llx, end=%#llx\n", rmp_base, rmp_end);
return false;
}
rmp_sz = rmp_end - rmp_base + 1;
probed_rmp_base = rmp_base;
probed_rmp_size = rmp_sz;
pr_info("RMP table physical range [0x%016llx - 0x%016llx]\n",
rmp_base, rmp_end);
return true;
}
static void __init __snp_fixup_e820_tables(u64 pa)
{
if (IS_ALIGNED(pa, PMD_SIZE))
@@ -178,35 +204,176 @@ static void __init __snp_fixup_e820_tables(u64 pa)
}
}
void __init snp_fixup_e820_tables(void)
static void __init fixup_e820_tables_for_segmented_rmp(void)
{
u64 pa, *rst, size, mapped_size;
unsigned int i;
__snp_fixup_e820_tables(probed_rmp_base);
pa = probed_rmp_base + RMPTABLE_CPU_BOOKKEEPING_SZ;
__snp_fixup_e820_tables(pa + RST_SIZE);
rst = early_memremap(pa, RST_SIZE);
if (!rst)
return;
for (i = 0; i < rst_max_index; i++) {
pa = RST_ENTRY_SEGMENT_BASE(rst[i]);
mapped_size = RST_ENTRY_MAPPED_SIZE(rst[i]);
if (!mapped_size)
continue;
__snp_fixup_e820_tables(pa);
/*
* Mapped size in GB. Mapped size is allowed to exceed
* the segment coverage size, but gets reduced to the
* segment coverage size.
*/
mapped_size <<= 30;
if (mapped_size > rmp_segment_size)
mapped_size = rmp_segment_size;
/* Calculate the RMP segment size (16 bytes/page mapped) */
size = PHYS_PFN(mapped_size) << 4;
__snp_fixup_e820_tables(pa + size);
}
early_memunmap(rst, RST_SIZE);
}
static void __init fixup_e820_tables_for_contiguous_rmp(void)
{
__snp_fixup_e820_tables(probed_rmp_base);
__snp_fixup_e820_tables(probed_rmp_base + probed_rmp_size);
}
/*
* Do the necessary preparations which are verified by the firmware as
* described in the SNP_INIT_EX firmware command description in the SNP
* firmware ABI spec.
*/
static int __init snp_rmptable_init(void)
void __init snp_fixup_e820_tables(void)
{
u64 max_rmp_pfn, calc_rmp_sz, rmptable_size, rmp_end, val;
void *rmptable_start;
if (rmp_cfg & MSR_AMD64_SEG_RMP_ENABLED) {
fixup_e820_tables_for_segmented_rmp();
} else {
fixup_e820_tables_for_contiguous_rmp();
}
}
if (!cc_platform_has(CC_ATTR_HOST_SEV_SNP))
return 0;
static bool __init clear_rmptable_bookkeeping(void)
{
void *bk;
if (!amd_iommu_snp_en)
goto nosnp;
bk = memremap(probed_rmp_base, RMPTABLE_CPU_BOOKKEEPING_SZ, MEMREMAP_WB);
if (!bk) {
pr_err("Failed to map RMP bookkeeping area\n");
return false;
}
memset(bk, 0, RMPTABLE_CPU_BOOKKEEPING_SZ);
memunmap(bk);
return true;
}
static bool __init alloc_rmp_segment_desc(u64 segment_pa, u64 segment_size, u64 pa)
{
u64 rst_index, rmp_segment_size_max;
struct rmp_segment_desc *desc;
void *rmp_segment;
/* Calculate the maximum size an RMP can be (16 bytes/page mapped) */
rmp_segment_size_max = PHYS_PFN(rmp_segment_size) << 4;
/* Validate the RMP segment size */
if (segment_size > rmp_segment_size_max) {
pr_err("Invalid RMP size 0x%llx for configured segment size 0x%llx\n",
segment_size, rmp_segment_size_max);
return false;
}
/* Validate the RMP segment table index */
rst_index = RST_ENTRY_INDEX(pa);
if (rst_index >= rst_max_index) {
pr_err("Invalid RMP segment base address 0x%llx for configured segment size 0x%llx\n",
pa, rmp_segment_size);
return false;
}
if (rmp_segment_table[rst_index]) {
pr_err("RMP segment descriptor already exists at index %llu\n", rst_index);
return false;
}
rmp_segment = memremap(segment_pa, segment_size, MEMREMAP_WB);
if (!rmp_segment) {
pr_err("Failed to map RMP segment addr 0x%llx size 0x%llx\n",
segment_pa, segment_size);
return false;
}
desc = kzalloc(sizeof(*desc), GFP_KERNEL);
if (!desc) {
memunmap(rmp_segment);
return false;
}
desc->rmp_entry = rmp_segment;
desc->max_index = segment_size / sizeof(*desc->rmp_entry);
desc->size = segment_size;
rmp_segment_table[rst_index] = desc;
return true;
}
static void __init free_rmp_segment_table(void)
{
unsigned int i;
for (i = 0; i < rst_max_index; i++) {
struct rmp_segment_desc *desc;
desc = rmp_segment_table[i];
if (!desc)
continue;
memunmap(desc->rmp_entry);
kfree(desc);
}
free_page((unsigned long)rmp_segment_table);
rmp_segment_table = NULL;
}
/* Allocate the table used to index into the RMP segments */
static bool __init alloc_rmp_segment_table(void)
{
struct page *page;
page = alloc_page(__GFP_ZERO);
if (!page)
return false;
rmp_segment_table = page_address(page);
return true;
}
static bool __init setup_contiguous_rmptable(void)
{
u64 max_rmp_pfn, calc_rmp_sz, rmptable_segment, rmptable_size, rmp_end;
if (!probed_rmp_size)
goto nosnp;
return false;
rmp_end = probed_rmp_base + probed_rmp_size - 1;
/*
* Calculate the amount the memory that must be reserved by the BIOS to
* Calculate the amount of memory that must be reserved by the BIOS to
* address the whole RAM, including the bookkeeping area. The RMP itself
* must also be covered.
*/
@@ -218,15 +385,140 @@ static int __init snp_rmptable_init(void)
if (calc_rmp_sz > probed_rmp_size) {
pr_err("Memory reserved for the RMP table does not cover full system RAM (expected 0x%llx got 0x%llx)\n",
calc_rmp_sz, probed_rmp_size);
goto nosnp;
return false;
}
rmptable_start = memremap(probed_rmp_base, probed_rmp_size, MEMREMAP_WB);
if (!rmptable_start) {
pr_err("Failed to map RMP table\n");
goto nosnp;
if (!alloc_rmp_segment_table())
return false;
/* Map only the RMP entries */
rmptable_segment = probed_rmp_base + RMPTABLE_CPU_BOOKKEEPING_SZ;
rmptable_size = probed_rmp_size - RMPTABLE_CPU_BOOKKEEPING_SZ;
if (!alloc_rmp_segment_desc(rmptable_segment, rmptable_size, 0)) {
free_rmp_segment_table();
return false;
}
return true;
}
static bool __init setup_segmented_rmptable(void)
{
u64 rst_pa, *rst, pa, ram_pa_end, ram_pa_max;
unsigned int i, max_index;
if (!probed_rmp_base)
return false;
if (!alloc_rmp_segment_table())
return false;
rst_pa = probed_rmp_base + RMPTABLE_CPU_BOOKKEEPING_SZ;
rst = memremap(rst_pa, RST_SIZE, MEMREMAP_WB);
if (!rst) {
pr_err("Failed to map RMP segment table addr 0x%llx\n", rst_pa);
goto e_free;
}
pr_info("Segmented RMP using %lluGB segments\n", rmp_segment_size >> 30);
ram_pa_max = max_pfn << PAGE_SHIFT;
max_index = 0;
ram_pa_end = 0;
for (i = 0; i < rst_max_index; i++) {
u64 rmp_segment, rmp_size, mapped_size;
mapped_size = RST_ENTRY_MAPPED_SIZE(rst[i]);
if (!mapped_size)
continue;
max_index = i;
/*
* Mapped size in GB. Mapped size is allowed to exceed the
* segment coverage size, but gets reduced to the segment
* coverage size.
*/
mapped_size <<= 30;
if (mapped_size > rmp_segment_size) {
pr_info("RMP segment %u mapped size (0x%llx) reduced to 0x%llx\n",
i, mapped_size, rmp_segment_size);
mapped_size = rmp_segment_size;
}
rmp_segment = RST_ENTRY_SEGMENT_BASE(rst[i]);
/* Calculate the RMP segment size (16 bytes/page mapped) */
rmp_size = PHYS_PFN(mapped_size) << 4;
pa = (u64)i << rmp_segment_shift;
/*
* Some segments may be for MMIO mapped above system RAM. These
* segments are used for Trusted I/O.
*/
if (pa < ram_pa_max)
ram_pa_end = pa + mapped_size;
if (!alloc_rmp_segment_desc(rmp_segment, rmp_size, pa))
goto e_unmap;
pr_info("RMP segment %u physical address [0x%llx - 0x%llx] covering [0x%llx - 0x%llx]\n",
i, rmp_segment, rmp_segment + rmp_size - 1, pa, pa + mapped_size - 1);
}
if (ram_pa_max > ram_pa_end) {
pr_err("Segmented RMP does not cover full system RAM (expected 0x%llx got 0x%llx)\n",
ram_pa_max, ram_pa_end);
goto e_unmap;
}
/* Adjust the maximum index based on the found segments */
rst_max_index = max_index + 1;
memunmap(rst);
return true;
e_unmap:
memunmap(rst);
e_free:
free_rmp_segment_table();
return false;
}
static bool __init setup_rmptable(void)
{
if (rmp_cfg & MSR_AMD64_SEG_RMP_ENABLED) {
return setup_segmented_rmptable();
} else {
return setup_contiguous_rmptable();
}
}
/*
* Do the necessary preparations which are verified by the firmware as
* described in the SNP_INIT_EX firmware command description in the SNP
* firmware ABI spec.
*/
static int __init snp_rmptable_init(void)
{
unsigned int i;
u64 val;
if (!cc_platform_has(CC_ATTR_HOST_SEV_SNP))
return 0;
if (!amd_iommu_snp_en)
goto nosnp;
if (!setup_rmptable())
goto nosnp;
/*
* Check if SEV-SNP is already enabled, this can happen in case of
* kexec boot.
@@ -235,7 +527,22 @@ static int __init snp_rmptable_init(void)
if (val & MSR_AMD64_SYSCFG_SNP_EN)
goto skip_enable;
memset(rmptable_start, 0, probed_rmp_size);
/* Zero out the RMP bookkeeping area */
if (!clear_rmptable_bookkeeping()) {
free_rmp_segment_table();
goto nosnp;
}
/* Zero out the RMP entries */
for (i = 0; i < rst_max_index; i++) {
struct rmp_segment_desc *desc;
desc = rmp_segment_table[i];
if (!desc)
continue;
memset(desc->rmp_entry, 0, desc->size);
}
/* Flush the caches to ensure that data is written before SNP is enabled. */
wbinvd_on_all_cpus();
@@ -246,12 +553,6 @@ static int __init snp_rmptable_init(void)
on_each_cpu(snp_enable, NULL, 1);
skip_enable:
rmptable_start += RMPTABLE_CPU_BOOKKEEPING_SZ;
rmptable_size = probed_rmp_size - RMPTABLE_CPU_BOOKKEEPING_SZ;
rmptable = (struct rmpentry *)rmptable_start;
rmptable_max_pfn = rmptable_size / sizeof(struct rmpentry) - 1;
cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "x86/rmptable_init:online", __snp_enable, NULL);
/*
@@ -272,48 +573,212 @@ nosnp:
*/
device_initcall(snp_rmptable_init);
static struct rmpentry *get_rmpentry(u64 pfn)
static void set_rmp_segment_info(unsigned int segment_shift)
{
if (WARN_ON_ONCE(pfn > rmptable_max_pfn))
return ERR_PTR(-EFAULT);
return &rmptable[pfn];
rmp_segment_shift = segment_shift;
rmp_segment_size = 1ULL << rmp_segment_shift;
rmp_segment_mask = rmp_segment_size - 1;
}
static struct rmpentry *__snp_lookup_rmpentry(u64 pfn, int *level)
{
struct rmpentry *large_entry, *entry;
#define RMP_ADDR_MASK GENMASK_ULL(51, 13)
if (!cc_platform_has(CC_ATTR_HOST_SEV_SNP))
static bool probe_contiguous_rmptable_info(void)
{
u64 rmp_sz, rmp_base, rmp_end;
rdmsrl(MSR_AMD64_RMP_BASE, rmp_base);
rdmsrl(MSR_AMD64_RMP_END, rmp_end);
if (!(rmp_base & RMP_ADDR_MASK) || !(rmp_end & RMP_ADDR_MASK)) {
pr_err("Memory for the RMP table has not been reserved by BIOS\n");
return false;
}
if (rmp_base > rmp_end) {
pr_err("RMP configuration not valid: base=%#llx, end=%#llx\n", rmp_base, rmp_end);
return false;
}
rmp_sz = rmp_end - rmp_base + 1;
/* Treat the contiguous RMP table as a single segment */
rst_max_index = 1;
set_rmp_segment_info(RMPTABLE_NON_SEGMENTED_SHIFT);
probed_rmp_base = rmp_base;
probed_rmp_size = rmp_sz;
pr_info("RMP table physical range [0x%016llx - 0x%016llx]\n",
rmp_base, rmp_end);
return true;
}
static bool probe_segmented_rmptable_info(void)
{
unsigned int eax, ebx, segment_shift, segment_shift_min, segment_shift_max;
u64 rmp_base, rmp_end;
rdmsrl(MSR_AMD64_RMP_BASE, rmp_base);
if (!(rmp_base & RMP_ADDR_MASK)) {
pr_err("Memory for the RMP table has not been reserved by BIOS\n");
return false;
}
rdmsrl(MSR_AMD64_RMP_END, rmp_end);
WARN_ONCE(rmp_end & RMP_ADDR_MASK,
"Segmented RMP enabled but RMP_END MSR is non-zero\n");
/* Obtain the min and max supported RMP segment size */
eax = cpuid_eax(0x80000025);
segment_shift_min = eax & GENMASK(5, 0);
segment_shift_max = (eax & GENMASK(11, 6)) >> 6;
/* Verify the segment size is within the supported limits */
segment_shift = MSR_AMD64_RMP_SEGMENT_SHIFT(rmp_cfg);
if (segment_shift > segment_shift_max || segment_shift < segment_shift_min) {
pr_err("RMP segment size (%u) is not within advertised bounds (min=%u, max=%u)\n",
segment_shift, segment_shift_min, segment_shift_max);
return false;
}
/* Override the max supported RST index if a hardware limit exists */
ebx = cpuid_ebx(0x80000025);
if (ebx & BIT(10))
rst_max_index = ebx & GENMASK(9, 0);
set_rmp_segment_info(segment_shift);
probed_rmp_base = rmp_base;
probed_rmp_size = 0;
pr_info("Segmented RMP base table physical range [0x%016llx - 0x%016llx]\n",
rmp_base, rmp_base + RMPTABLE_CPU_BOOKKEEPING_SZ + RST_SIZE);
return true;
}
bool snp_probe_rmptable_info(void)
{
if (cpu_feature_enabled(X86_FEATURE_SEGMENTED_RMP))
rdmsrl(MSR_AMD64_RMP_CFG, rmp_cfg);
if (rmp_cfg & MSR_AMD64_SEG_RMP_ENABLED)
return probe_segmented_rmptable_info();
else
return probe_contiguous_rmptable_info();
}
/*
* About the array_index_nospec() usage below:
*
* This function can get called by exported functions like
* snp_lookup_rmpentry(), which is used by the KVM #PF handler, among
* others, and since the @pfn passed in cannot always be trusted,
* speculation should be stopped as a protective measure.
*/
static struct rmpentry_raw *get_raw_rmpentry(u64 pfn)
{
u64 paddr, rst_index, segment_index;
struct rmp_segment_desc *desc;
if (!rmp_segment_table)
return ERR_PTR(-ENODEV);
entry = get_rmpentry(pfn);
if (IS_ERR(entry))
return entry;
paddr = pfn << PAGE_SHIFT;
rst_index = RST_ENTRY_INDEX(paddr);
if (unlikely(rst_index >= rst_max_index))
return ERR_PTR(-EFAULT);
rst_index = array_index_nospec(rst_index, rst_max_index);
desc = rmp_segment_table[rst_index];
if (unlikely(!desc))
return ERR_PTR(-EFAULT);
segment_index = RMP_ENTRY_INDEX(paddr);
if (unlikely(segment_index >= desc->max_index))
return ERR_PTR(-EFAULT);
segment_index = array_index_nospec(segment_index, desc->max_index);
return desc->rmp_entry + segment_index;
}
static int get_rmpentry(u64 pfn, struct rmpentry *e)
{
struct rmpentry_raw *e_raw;
if (cpu_feature_enabled(X86_FEATURE_RMPREAD)) {
int ret;
/* Binutils version 2.44 supports the RMPREAD mnemonic. */
asm volatile(".byte 0xf2, 0x0f, 0x01, 0xfd"
: "=a" (ret)
: "a" (pfn << PAGE_SHIFT), "c" (e)
: "memory", "cc");
return ret;
}
e_raw = get_raw_rmpentry(pfn);
if (IS_ERR(e_raw))
return PTR_ERR(e_raw);
/*
* Map the raw RMP table entry onto the RMPREAD output format.
* The 2MB region status indicator (hpage_region_status field) is not
* calculated, since the overhead could be significant and the field
* is not used.
*/
memset(e, 0, sizeof(*e));
e->gpa = e_raw->gpa << PAGE_SHIFT;
e->asid = e_raw->asid;
e->assigned = e_raw->assigned;
e->pagesize = e_raw->pagesize;
e->immutable = e_raw->immutable;
return 0;
}
static int __snp_lookup_rmpentry(u64 pfn, struct rmpentry *e, int *level)
{
struct rmpentry e_large;
int ret;
if (!cc_platform_has(CC_ATTR_HOST_SEV_SNP))
return -ENODEV;
ret = get_rmpentry(pfn, e);
if (ret)
return ret;
/*
* Find the authoritative RMP entry for a PFN. This can be either a 4K
* RMP entry or a special large RMP entry that is authoritative for a
* whole 2M area.
*/
large_entry = get_rmpentry(pfn & PFN_PMD_MASK);
if (IS_ERR(large_entry))
return large_entry;
ret = get_rmpentry(pfn & PFN_PMD_MASK, &e_large);
if (ret)
return ret;
*level = RMP_TO_PG_LEVEL(large_entry->pagesize);
*level = RMP_TO_PG_LEVEL(e_large.pagesize);
return entry;
return 0;
}
int snp_lookup_rmpentry(u64 pfn, bool *assigned, int *level)
{
struct rmpentry *e;
struct rmpentry e;
int ret;
e = __snp_lookup_rmpentry(pfn, level);
if (IS_ERR(e))
return PTR_ERR(e);
ret = __snp_lookup_rmpentry(pfn, &e, level);
if (ret)
return ret;
*assigned = !!e->assigned;
*assigned = !!e.assigned;
return 0;
}
EXPORT_SYMBOL_GPL(snp_lookup_rmpentry);
@@ -326,20 +791,28 @@ EXPORT_SYMBOL_GPL(snp_lookup_rmpentry);
*/
static void dump_rmpentry(u64 pfn)
{
struct rmpentry_raw *e_raw;
u64 pfn_i, pfn_end;
struct rmpentry *e;
int level;
struct rmpentry e;
int level, ret;
e = __snp_lookup_rmpentry(pfn, &level);
if (IS_ERR(e)) {
pr_err("Failed to read RMP entry for PFN 0x%llx, error %ld\n",
pfn, PTR_ERR(e));
ret = __snp_lookup_rmpentry(pfn, &e, &level);
if (ret) {
pr_err("Failed to read RMP entry for PFN 0x%llx, error %d\n",
pfn, ret);
return;
}
if (e->assigned) {
if (e.assigned) {
e_raw = get_raw_rmpentry(pfn);
if (IS_ERR(e_raw)) {
pr_err("Failed to read RMP contents for PFN 0x%llx, error %ld\n",
pfn, PTR_ERR(e_raw));
return;
}
pr_info("PFN 0x%llx, RMP entry: [0x%016llx - 0x%016llx]\n",
pfn, e->lo, e->hi);
pfn, e_raw->lo, e_raw->hi);
return;
}
@@ -358,16 +831,16 @@ static void dump_rmpentry(u64 pfn)
pfn, pfn_i, pfn_end);
while (pfn_i < pfn_end) {
e = __snp_lookup_rmpentry(pfn_i, &level);
if (IS_ERR(e)) {
pr_err("Error %ld reading RMP entry for PFN 0x%llx\n",
PTR_ERR(e), pfn_i);
e_raw = get_raw_rmpentry(pfn_i);
if (IS_ERR(e_raw)) {
pr_err("Error %ld reading RMP contents for PFN 0x%llx\n",
PTR_ERR(e_raw), pfn_i);
pfn_i++;
continue;
}
if (e->lo || e->hi)
pr_info("PFN: 0x%llx, [0x%016llx - 0x%016llx]\n", pfn_i, e->lo, e->hi);
if (e_raw->lo || e_raw->hi)
pr_info("PFN: 0x%llx, [0x%016llx - 0x%016llx]\n", pfn_i, e_raw->lo, e_raw->hi);
pfn_i++;
}
}