KVM: x86/mmu: Dedup logic for detecting TLB flushes on leaf SPTE changes

Now that the shadow MMU and TDP MMU have identical logic for detecting
required TLB flushes when updating SPTEs, move said logic to a helper so
that the TDP MMU code can benefit from the comments that are currently
exclusive to the shadow MMU.

No functional change intended.

Link: https://lore.kernel.org/r/20241011021051.1557902-16-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Author: Sean Christopherson <seanjc@google.com>
Date:   2024-10-10 19:10:47 -07:00
Commit: c9b625625b (parent: 51192ebdd1)

3 changed files with 30 additions and 20 deletions
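
For reference, both MMUs previously open-coded the identical check that this patch hoists into leaf_spte_change_needs_tlb_flush(). Below is a minimal standalone sketch of that predicate; it is a toy model for illustration, not kernel code, and the mask value and main() harness are invented for the example:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy stand-in for KVM's shadow_mmu_writable_mask; the bit is arbitrary. */
#define SHADOW_MMU_WRITABLE_MASK (1ull << 57)

static inline bool is_mmu_writable_spte(uint64_t spte)
{
	return spte & SHADOW_MMU_WRITABLE_MASK;
}

/* Flush only when a leaf SPTE goes from MMU-writable to write-protected. */
static inline bool leaf_spte_change_needs_tlb_flush(uint64_t old_spte, uint64_t new_spte)
{
	return is_mmu_writable_spte(old_spte) && !is_mmu_writable_spte(new_spte);
}

int main(void)
{
	uint64_t writable = SHADOW_MMU_WRITABLE_MASK | 0x1000;
	uint64_t write_protected = 0x1000;

	/* Write-protecting flushes (prints 1); granting write does not (prints 0). */
	printf("%d\n", leaf_spte_change_needs_tlb_flush(writable, write_protected));
	printf("%d\n", leaf_spte_change_needs_tlb_flush(write_protected, writable));
	return 0;
}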

arch/x86/kvm/mmu/mmu.c

@@ -488,23 +488,6 @@ static void mmu_spte_set(u64 *sptep, u64 new_spte)
 /* Rules for using mmu_spte_update:
  * Update the state bits, it means the mapped pfn is not changed.
  *
- * If the MMU-writable flag is cleared, i.e. the SPTE is write-protected for
- * write-tracking, remote TLBs must be flushed, even if the SPTE was read-only,
- * as KVM allows stale Writable TLB entries to exist. When dirty logging, KVM
- * flushes TLBs based on whether or not dirty bitmap/ring entries were reaped,
- * not whether or not SPTEs were modified, i.e. only the write-tracking case
- * needs to flush at the time the SPTEs are modified, before dropping mmu_lock.
- *
- * Don't flush if the Accessed bit is cleared, as access tracking tolerates
- * false negatives, and the one path that does care about TLB flushes,
- * kvm_mmu_notifier_clear_flush_young(), flushes if a young SPTE is found, i.e.
- * doesn't rely on lower helpers to detect the need to flush.
- *
- * Lastly, don't flush if the Dirty bit is cleared, as KVM unconditionally
- * flushes when enabling dirty logging (see kvm_mmu_slot_apply_flags()), and
- * when clearing dirty logs, KVM flushes based on whether or not dirty entries
- * were reaped from the bitmap/ring, not whether or not dirty SPTEs were found.
- *
  * Returns true if the TLB needs to be flushed
  */
 static bool mmu_spte_update(u64 *sptep, u64 new_spte)
@@ -527,7 +510,7 @@ static bool mmu_spte_update(u64 *sptep, u64 new_spte)
 	WARN_ON_ONCE(!is_shadow_present_pte(old_spte) ||
 		     spte_to_pfn(old_spte) != spte_to_pfn(new_spte));
 
-	return is_mmu_writable_spte(old_spte) && !is_mmu_writable_spte(new_spte);
+	return leaf_spte_change_needs_tlb_flush(old_spte, new_spte);
 }
 
 /*

arch/x86/kvm/mmu/spte.h

@@ -467,6 +467,34 @@ static inline bool is_mmu_writable_spte(u64 spte)
 	return spte & shadow_mmu_writable_mask;
 }
 
+/*
+ * If the MMU-writable flag is cleared, i.e. the SPTE is write-protected for
+ * write-tracking, remote TLBs must be flushed, even if the SPTE was read-only,
+ * as KVM allows stale Writable TLB entries to exist. When dirty logging, KVM
+ * flushes TLBs based on whether or not dirty bitmap/ring entries were reaped,
+ * not whether or not SPTEs were modified, i.e. only the write-tracking case
+ * needs to flush at the time the SPTEs are modified, before dropping mmu_lock.
+ *
+ * Don't flush if the Accessed bit is cleared, as access tracking tolerates
+ * false negatives, and the one path that does care about TLB flushes,
+ * kvm_mmu_notifier_clear_flush_young(), flushes if a young SPTE is found, i.e.
+ * doesn't rely on lower helpers to detect the need to flush.
+ *
+ * Lastly, don't flush if the Dirty bit is cleared, as KVM unconditionally
+ * flushes when enabling dirty logging (see kvm_mmu_slot_apply_flags()), and
+ * when clearing dirty logs, KVM flushes based on whether or not dirty entries
+ * were reaped from the bitmap/ring, not whether or not dirty SPTEs were found.
+ *
+ * Note, this logic only applies to shadow-present leaf SPTEs. The caller is
+ * responsible for checking that the old SPTE is shadow-present, and is also
+ * responsible for determining whether or not a TLB flush is required when
+ * modifying a shadow-present non-leaf SPTE.
+ */
+static inline bool leaf_spte_change_needs_tlb_flush(u64 old_spte, u64 new_spte)
+{
+	return is_mmu_writable_spte(old_spte) && !is_mmu_writable_spte(new_spte);
+}
+
 static inline u64 get_mmio_spte_generation(u64 spte)
 {
 	u64 gen;
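
To make the caller contract described in the new comment concrete, here is a hedged sketch of a call site; is_shadow_present_pte(), is_last_spte(), and kvm_flush_remote_tlbs_gfn() are the real helpers visible in these hunks, but the wrapper function and its exact shape are assumptions for illustration:

/*
 * Illustrative sketch, not code from this patch: the helper is only
 * valid for shadow-present leaf SPTEs, so a caller performs those
 * checks itself and separately decides how to handle changes to
 * shadow-present non-leaf SPTEs.
 */
static void example_flush_on_leaf_change(struct kvm *kvm, gfn_t gfn, int level,
					 u64 old_spte, u64 new_spte)
{
	if (is_shadow_present_pte(old_spte) && is_last_spte(old_spte, level) &&
	    leaf_spte_change_needs_tlb_flush(old_spte, new_spte))
		kvm_flush_remote_tlbs_gfn(kvm, gfn, level);
}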

arch/x86/kvm/mmu/tdp_mmu.c

@@ -1034,8 +1034,7 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 		return RET_PF_RETRY;
 	else if (is_shadow_present_pte(iter->old_spte) &&
 		 (!is_last_spte(iter->old_spte, iter->level) ||
-		  WARN_ON_ONCE(is_mmu_writable_spte(iter->old_spte) &&
-			       !is_mmu_writable_spte(new_spte))))
+		  WARN_ON_ONCE(leaf_spte_change_needs_tlb_flush(iter->old_spte, new_spte))))
 		kvm_flush_remote_tlbs_gfn(vcpu->kvm, iter->gfn, iter->level);
 
 	/*