Mirror of git://slackware.nl/current.git (synced 2024-12-27 09:59:16 +01:00)
Commit e0eaf6e451
We're going to go ahead and jump to the 6.1.4 kernel, in spite of the fact
that a kernel bisect identified the patch that was preventing 32-bit from
booting here on a Thinkpad X1E:
------
From 2e479b3b82c49bfb9422274c0a9c155a41caecb7 Mon Sep 17 00:00:00 2001
From: Michael Kelley <mikelley@microsoft.com>
Date: Wed, 16 Nov 2022 10:41:24 -0800
Subject: [PATCH] x86/ioremap: Fix page aligned size calculation in
 __ioremap_caller()

commit 4dbd6a3e90e03130973688fd79e19425f720d999 upstream.

Current code re-calculates the size after aligning the starting and
ending physical addresses on a page boundary. But the re-calculation
also embeds the masking of high order bits that exceed the size of
the physical address space (via PHYSICAL_PAGE_MASK). If the masking
removes any high order bits, the size calculation results in a huge
value that is likely to immediately fail.

Fix this by re-calculating the page-aligned size first. Then mask any
high order bits using PHYSICAL_PAGE_MASK.

Fixes: ffa71f33a820 ("x86, ioremap: Fix incorrect physical address handling in PAE mode")
------
The non-SMP non-PAE 32-bit kernel is fine even without the patch revert.
The PAE kernel also works fine with this patch reverted without any need
to revert ffa71f33a820 (the patch that this one is supposed to fix).
The machine's excessive (for 32-bit) amount of physical RAM (64GB) might
also be a factor here considering the PAE kernel works on all the other
machines around here without reverting this patch.
The patch is reverted only on 32-bit. Upstream report still pending.
Enjoy! :-)
a/kernel-generic-6.1.4-x86_64-1.txz:  Upgraded.
a/kernel-huge-6.1.4-x86_64-1.txz:  Upgraded.
a/kernel-modules-6.1.4-x86_64-1.txz:  Upgraded.
a/tree-2.1.0-x86_64-1.txz:  Upgraded.
d/kernel-headers-6.1.4-x86-1.txz:  Upgraded.
k/kernel-source-6.1.4-noarch-1.txz:  Upgraded.
l/gvfs-1.50.3-x86_64-1.txz:  Upgraded.
l/hunspell-1.7.2-x86_64-1.txz:  Upgraded.
l/libnice-0.1.21-x86_64-1.txz:  Upgraded.
n/tin-2.6.2-x86_64-1.txz:  Upgraded.
isolinux/initrd.img:  Rebuilt.
kernels/*:  Upgraded.
usb-and-pxe-installers/usbboot.img:  Rebuilt.
Diff
From 2e479b3b82c49bfb9422274c0a9c155a41caecb7 Mon Sep 17 00:00:00 2001
From: Michael Kelley <mikelley@microsoft.com>
Date: Wed, 16 Nov 2022 10:41:24 -0800
Subject: [PATCH] x86/ioremap: Fix page aligned size calculation in
 __ioremap_caller()

commit 4dbd6a3e90e03130973688fd79e19425f720d999 upstream.

Current code re-calculates the size after aligning the starting and
ending physical addresses on a page boundary. But the re-calculation
also embeds the masking of high order bits that exceed the size of
the physical address space (via PHYSICAL_PAGE_MASK). If the masking
removes any high order bits, the size calculation results in a huge
value that is likely to immediately fail.

Fix this by re-calculating the page-aligned size first. Then mask any
high order bits using PHYSICAL_PAGE_MASK.

Fixes: ffa71f33a820 ("x86, ioremap: Fix incorrect physical address handling in PAE mode")
Signed-off-by: Michael Kelley <mikelley@microsoft.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: <stable@kernel.org>
Link: https://lore.kernel.org/r/1668624097-14884-2-git-send-email-mikelley@microsoft.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/x86/mm/ioremap.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 1ad0228f8ceb..19058d746695 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -216,9 +216,15 @@ __ioremap_caller(resource_size_t phys_addr, unsigned long size,
 	 * Mappings have to be page-aligned
 	 */
 	offset = phys_addr & ~PAGE_MASK;
-	phys_addr &= PHYSICAL_PAGE_MASK;
+	phys_addr &= PAGE_MASK;
 	size = PAGE_ALIGN(last_addr+1) - phys_addr;
 
+	/*
+	 * Mask out any bits not part of the actual physical
+	 * address, like memory encryption bits.
+	 */
+	phys_addr &= PHYSICAL_PAGE_MASK;
+
 	retval = memtype_reserve(phys_addr, (u64)phys_addr + size,
 				 pcm, &new_pcm);
 	if (retval) {
--
2.39.0
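For reference, the size miscalculation described in the commit log above can be
reproduced with a small standalone sketch. This is not kernel code: the narrow
mask below is a made-up stand-in for PHYSICAL_PAGE_MASK dropping high-order
address bits, and the address values are hypothetical. It only illustrates why
masking phys_addr before computing the size yields a huge size, while computing
the page-aligned size first gives the expected result.

/* Standalone sketch of the size miscalculation; compile with any C compiler. */
#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

#define PAGE_SIZE   4096ULL
#define PAGE_MASK   (~(PAGE_SIZE - 1))               /* full 64-bit page mask   */
#define NARROW_MASK ((uint64_t)(uint32_t)PAGE_MASK)  /* drops bits 32 and above */

static uint64_t page_align(uint64_t x)
{
	return (x + PAGE_SIZE - 1) & PAGE_MASK;
}

int main(void)
{
	uint64_t phys_addr = 0x100001234ULL;          /* above 4GB, made-up value */
	uint64_t last_addr = phys_addr + 0x1000 - 1;

	/* Old order: mask first (high bits lost), then compute the size. */
	uint64_t masked   = phys_addr & NARROW_MASK;  /* 0x1000: bit 32 is gone   */
	uint64_t bad_size = page_align(last_addr + 1) - masked;

	/* Fixed order: compute the page-aligned size first, mask afterward. */
	uint64_t good_size = page_align(last_addr + 1) - (phys_addr & PAGE_MASK);

	printf("bad size  = 0x%" PRIx64 "\n", bad_size);   /* 0x100002000 (huge) */
	printf("good size = 0x%" PRIx64 "\n", good_size);  /* 0x2000             */
	return 0;
}

Applying the narrower mask only after the size has been computed keeps the
subtraction consistent, which is the reordering the upstream patch makes.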