path: root/i386/intel
Age  Commit message  Author
2025-06-27  i386 intel read fault fix  (Milos Nikic)
Include the missing header and fix the warning. Message-ID: <20250625014727.40695-1-nikic.milos@gmail.com>
2025-04-07  x86_64: update ifdef to exclude the x86_64 for i386 only specific conditions  (Etienne Brateau)
Message-ID: <20250407201126.1553736-1-etienne.brateau@gmail.com>
2025-02-12  Use MACRO_BEGIN/END  (Samuel Thibault)
This notably fixes at least a SAVE_HINT call.
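The point of MACRO_BEGIN/MACRO_END is to make a multi-statement macro behave as one statement, so calls like SAVE_HINT stay correct inside an un-braced if/else. A minimal sketch of the classic do/while(0) idiom behind it (gnumach's actual definitions live in kern/macros.h and may differ; SWAP_INTS and min_of are illustrative names):

```c
#include <assert.h>

/* Illustrative definitions of the idiom; not copied from kern/macros.h. */
#define MACRO_BEGIN do {
#define MACRO_END   } while (0)

/* A multi-statement macro: without the wrapper, using it as the body
 * of an un-braced `if` would only guard its first statement. */
#define SWAP_INTS(a, b)     \
MACRO_BEGIN                 \
    int tmp_ = (a);         \
    (a) = (b);              \
    (b) = tmp_;             \
MACRO_END

int min_of(int x, int y)
{
    if (x > y)
        SWAP_INTS(x, y);    /* expands to exactly one statement */
    else
        (void) 0;           /* no dangling-else ambiguity */
    return x;               /* the smaller of the two */
}
```

Because `do { ... } while (0)` swallows the trailing semicolon, the macro call parses like any ordinary function call statement.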
2024-12-09  pmap: Separate temporary_mapping from set_page_dir  (Damien Zammit via Bug reports for the GNU Hurd)
Prepare for smp parallel init where we want to call these two functions on different cpus at different times. Message-ID: <20241209121706.879984-5-damien@zamaudio.com>
2024-10-22  fix some compiler warnings.  (jbranso@dismail.de)
I compiled with ./configure --enable-xen --enable-acpi.

* i386/intel/pmap.c (pmap_bootstrap_xen, pmap_bootstrap, pmap_set_page_readwrite, pmap_clear_bootstrap_pagetable, pmap_map_mfn, pmap_expand_level, pmap_collect): lots of tiny changes: cast many variables to (long unsigned int), (vm_offset_t) -> (unsigned long), or %llx <-- (uint64_t) variable. Some of the fixed warnings:

In file included from i386/intel/pmap.c:63:
i386/intel/pmap.c: In function 'pmap_bootstrap_xen':
i386/intel/pmap.c:703:39: warning: format '%lx' expects argument of type 'long unsigned int', but argument 6 has type 'unsigned int' [-Wformat=]
  703 | panic("couldn't pin page %p(%lx)", l1_map[n_l1map], (vm_offset_t) kv_to_ma (l1_map[n_l1map]));
i386/intel/pmap.c: In function 'pmap_set_page_readwrite':
i386/intel/pmap.c:897:23: warning: format '%lx' expects argument of type 'long unsigned int', but argument 5 has type 'vm_offset_t' {aka 'unsigned int'} [-Wformat=]
  897 | panic("couldn't set hiMMU readwrite for addr %lx(%lx)\n", vaddr, (vm_offset_t) pa_to_ma (paddr));
i386/intel/pmap.c:897:23: warning: format '%lx' expects argument of type 'long unsigned int', but argument 6 has type 'unsigned int' [-Wformat=]

Message-ID: <20241022173641.2774-1-jbranso@dismail.de>
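All of these warnings come down to handing a 32-bit `vm_offset_t` to a `%lx` conversion. One hedged way to make such calls warning-clean on both i386 and x86_64 (not necessarily the exact casts the commit chose) is to cast the argument to the type the format actually expects:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* On i386, vm_offset_t is a 32-bit unsigned int; sketched here. */
typedef unsigned int vm_offset_t;

/* Format an address the way the panic() calls do.  Casting the
 * argument to unsigned long makes it match %lx regardless of how
 * wide vm_offset_t happens to be on the target. */
static int fmt_addr(char *buf, size_t len, vm_offset_t vaddr)
{
    return snprintf(buf, len, "addr %lx", (unsigned long) vaddr);
}
```

An alternative would be switching the format itself (e.g. `%x` on i386), but casting to a fixed wide type keeps one format string valid across configurations.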
2024-10-21  fix a compiler warning.  (jbranso@dismail.de)
* i386/intel/pmap.c (pmap_page_table_page_dealloc): define it only on the Xen platform. Best not to delete page_alloc, so we know how to do so if need be.

i386/intel/pmap.c:1265:1: warning: 'pmap_page_table_page_dealloc' defined but not used [-Wunused-function]
 1265 | pmap_page_table_page_dealloc(vm_offset_t pa)
i386/intel/pmap.c:1171:1: warning: 'pmap_page_table_page_alloc' defined but not used [-Wunused-function]
 1171 | pmap_page_table_page_alloc(void)

Message-ID: <20241020190744.2522-3-jbranso@dismail.de>
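The pattern here is to guard the functions with the same config symbol that guards their callers, so the code stays in the tree but -Wunused-function never fires. A sketch of the guard (MACH_XEN is the real build symbol; the function bodies are stand-ins, not the pmap.c code):

```c
#include <assert.h>

/* MACH_XEN would be defined by the build system when Xen support is on. */

#ifdef MACH_XEN
/* Only the Xen pmap paths call the page-table page allocator pair,
 * so it is compiled out otherwise instead of being deleted. */
static unsigned long pmap_page_table_page_alloc(void) { return 0x1000; }
static void pmap_page_table_page_dealloc(unsigned long pa) { (void) pa; }
#endif

/* Small probe so a test can observe which configuration was built. */
int xen_allocator_compiled(void)
{
#ifdef MACH_XEN
    return 1;
#else
    return 0;
#endif
}
```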
2024-03-03  pmap: Avoid leaking USER bit in page tables  (Samuel Thibault)
We should only set USER - for user process maps - for 32bit Xen support. This was not actually posing a problem, since in 32bit segmentation protects us, and in 64bit the l4 entry for the kernel is already set. But better safe than sorry.
2024-02-23  spl: Introduce assert_splvm and use it in process_pmap_updates  (Samuel Thibault)
Suggested-by: Damien Zammit <damien@zamaudio.com>
2024-02-22  vm_map_lookup: Add parameter for keeping map locked  (Damien Zammit)
This adds a parameter called keep_map_locked to vm_map_lookup() that allows the function to return with the map locked. This is to prepare for fixing a bug with gsync where the map is locked twice by mistake. Co-Authored-By: Sergey Bugaev <bugaevc@gmail.com> Message-ID: <20240222082410.422869-3-damien@zamaudio.com>
2024-02-19  process_pmap_updates: Use _nocheck form of lock, already at splvm  (Damien Zammit)
2023-10-02  Fix non-PAE build  (Samuel Thibault)
2023-10-01  pmap: Factorize l4 base access  (Samuel Thibault)
This also makes the code more coherent with other levels.
2023-10-01  ddb: Add whatis command  (Samuel Thibault)
This is convenient when tracking buffer overflows
2023-08-28  pmap_phys_address: Fix casting  (Samuel Thibault)
We need to extend the frame number to 64bit before converting to address.
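The bug class is worth spelling out: shifting a 32-bit frame number left by PAGE_SHIFT overflows *before* the result is widened to 64 bits, so the high bits are already gone. A minimal sketch of the wrong and right orderings (names are illustrative, not the exact gnumach macro):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12

/* Wrong: the shift happens in 32-bit arithmetic, so any frame number
 * above 2^20 (i.e. physical addresses above 4GB) wraps before the
 * implicit widening of the return value. */
static uint64_t phys_address_buggy(uint32_t frame)
{
    return frame << PAGE_SHIFT;
}

/* Right: extend the frame number to 64 bits first, then shift. */
static uint64_t phys_address_fixed(uint32_t frame)
{
    return ((uint64_t) frame) << PAGE_SHIFT;
}
```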
2023-08-28  pmap: Fix spurious pte release on 64bit and PAE  (Samuel Thibault)
2023-08-14  pmap: Add missing declaration  (Samuel Thibault)
2023-08-14  pmap+slab: Add more smoketests  (Samuel Thibault)
Checking the range of addresses for operations on the kernel_pmap is quite cheap, and allows catching oddities early enough.
2023-08-14  pmap: Fix mayhem when releasing near the end of virtual memory  (Samuel Thibault)
l is used to skip over the area mapped by a whole pde. It was clipped to e, but if e is already near the end of virtual memory, l will wrap around to 0, and mayhem ensues.
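The wrap-around is easy to reproduce with 32-bit addresses: rounding v up to the next pde boundary near 0xffffffff produces 0, which then compares smaller than e. A schematic sketch of the failure and the guard (constant and names illustrative, not the pmap.c code):

```c
#include <assert.h>
#include <stdint.h>

#define PDE_MAPPED_SIZE 0x400000u   /* 4MB mapped per pde, illustrative */

/* Advance to the end of the pde containing v, clipped to e, coping
 * with the sum wrapping past 0xffffffff near the end of virtual
 * memory: a wrapped l rounds down to 0, so we clip that to e too. */
static uint32_t next_pde_boundary(uint32_t v, uint32_t e)
{
    uint32_t l = (v + PDE_MAPPED_SIZE) & ~(PDE_MAPPED_SIZE - 1);
    if (l == 0 || l > e)    /* l == 0 means the addition wrapped */
        l = e;
    return l;
}
```

Without the `l == 0` case, a release loop stepping by `l` would restart from the bottom of the address space instead of terminating at e.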
2023-08-14  pmap: Add MAPWINDOW_SIZE macro  (Samuel Thibault)
2023-08-14  pmap: reserve last virtual page  (Samuel Thibault)
So as to catch attempts to dereference -1.
2023-08-14  pmap: Add more debugging information when getting a null pv_list  (Samuel Thibault)
2023-08-14  pmap: Fix x32 PAE builds  (Samuel Thibault)
2023-08-14  pmap: Fix coping with VM_MAX_USER_ADDRESS not being aligned on l4 limit  (Samuel Thibault)
It is however usually aligned on l3 limit.
2023-08-14  x86_64: Fix memory leak on pmap destruction  (Samuel Thibault)
We do need to go through the whole page table, to release the pdpt-s and pdet-s that map the kernel. Only the ptpt-s that map the kernel are shared between tasks.
2023-08-12  pmap: Make pmap_protect sparse-pde aware  (Samuel Thibault)
222020cff440 ("pmap: dynamically allocate the whole user page tree map") made the pde array sparse, but missed updating pmap_protect accordingly: we have to re-look up the pde on each PDE_MAPPED_SIZE section.
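With a sparse pde array, a cached pde pointer cannot simply be advanced across section boundaries, because whole sections may have no pde page at all. A schematic sketch of the re-lookup-per-section loop (fake_pmap_pde and protect_range are stand-ins for pmap_pde and pmap_protect, with a toy two-section table):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PDE_MAPPED_SIZE 0x400000u

/* Stand-in for a sparse top-level table: some sections have no pde
 * page allocated and return NULL. */
static uint32_t section0_pde, section2_pde;
static uint32_t *fake_pmap_pde(uint32_t va)
{
    switch (va / PDE_MAPPED_SIZE) {
    case 0:  return &section0_pde;
    case 2:  return &section2_pde;
    default: return NULL;        /* sparse hole */
    }
}

/* Apply a bit over [s, e): re-look up the pde for every
 * PDE_MAPPED_SIZE section instead of walking one flat array. */
static int protect_range(uint32_t s, uint32_t e, uint32_t bit)
{
    int touched = 0;
    for (uint32_t v = s; v < e; v += PDE_MAPPED_SIZE) {
        uint32_t *pde = fake_pmap_pde(v);
        if (pde == NULL)
            continue;            /* nothing mapped in this section */
        *pde |= bit;
        touched++;
    }
    return touched;
}
```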
2023-08-06  pmap: Add missing PMAP_READ_LOCK fixes uninitialized spl  (Damien Zammit)
Message-Id: <20230805154913.2003121-1-damien@zamaudio.com>
2023-05-26  pmap: only map lower BIOS memory 1:1 when using Linux drivers  (Luca Dariz)
* i386/intel/pmap.c: add the check for LINUX_DEV; we could also check for !__x86_64__, as this config would break the kernel map by removing the BIOS mem map, needed by the console and keyboard drivers, but hopefully we won't need to enable Linux drivers on x86_64. Message-Id: <20230526184801.753581-2-luca@orpolo.org>
2023-05-26  xen: Fix 64bit build  (Samuel Thibault)
2023-05-21  pmap: Simplify code  (Samuel Thibault)
Notably, ptes_per_vm_page is meaningful for the pte level anyway.
2023-05-21  pmap: dynamically allocate the whole user page tree map  (Luca Dariz)
* i386/intel/pmap.c: switch to dynamic allocation of all the page tree map levels for the user-space address range, using a separate kmem cache for each level. This allows extending the usable memory space on x86_64 to use more than one L3 page for user space. The kernel address map is left untouched for now as it needs a different initialization. * i386/intel/pmap.h: remove hardcoded user pages and add macro to reconstruct the page-to-virtual mapping Message-Id: <20230521085758.365640-1-luca@orpolo.org>
2023-04-11  Fix Xen build  (Samuel Thibault)
2023-02-27  x86_64: allow compilation if ! USER32  (Luca Dariz)
* i386/intel/pmap.c: remove #error and allow compilation, keeping a reminder to fix the pmap module. Message-Id: <20230227204501.2492152-2-luca@orpolo.org>
2023-02-16  x86_64: fix some compiler warnings  (Luca Dariz)
* i386/include/mach/i386/vm_param.h: extend the vm constants to ULL on x86_64 to avoid a shift overflow warning * i386/intel/pmap.c: fix cast and unused variables Message-Id: <20230216213318.2048699-1-luca@orpolo.org>
2023-02-15  pmap: Make mapwindow per CPU  (Samuel Thibault)
They are used temporarily without CPU exchanges, and may need to be used concurrently, so two slots alone would not be enough anyway. This also saves having to lock for them.
2023-02-15  smp: Fix more busy loops  (Samuel Thibault)
We need to avoid the kernel optimizing away the reads from memory. Use a standard relaxing instruction for that.
2023-02-15  pmap: Do not TLB shootdown IPI for mapwindow updates  (Samuel Thibault)
These are used only temporarily by the current processor only, so we don't need to notify other processors about them. We however then should flush TLB at allocation, to make sure we don't have some remnant.
2023-02-15  pmap: Fix busy loop waiting for pmap users  (Samuel Thibault)
We need to avoid the kernel optimizing away the read from pmap->cpus_using. Use a standard relaxing instruction for that.
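The hazard is that an optimizing compiler may hoist an ordinary load of pmap->cpus_using out of the spin loop and never re-read it. A portable sketch of the fix using C11 atomics rather than the kernel's own relaxing primitive (the real code also issues a pause instruction in the loop body; max_spins is a test aid, not part of the kernel logic):

```c
#include <assert.h>
#include <stdatomic.h>

static atomic_int cpus_using;   /* stand-in for pmap->cpus_using */

/* Busy-wait until no cpu uses the pmap.  The atomic (relaxed) load
 * forces a real memory read on every iteration, which a plain
 * non-volatile int does not guarantee under optimization. */
static int wait_for_pmap_idle(int max_spins)
{
    int spins = 0;
    while (atomic_load_explicit(&cpus_using, memory_order_relaxed) != 0) {
        if (++spins >= max_spins)
            return -1;          /* give up (the kernel spins forever) */
    }
    return spins;
}
```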
2023-02-15  pmap: Do not send TLB flush IPI when a cpu is idle  (Samuel Thibault)
MARK_CPU_ACTIVE already knows to flush TLB when a cpu comes out of idle. However, add memory barriers to be sure that setting cpu_update_needed is seen before testing for cpus_idle.
2023-02-14  Fix warning  (Samuel Thibault)
2023-02-14  Remove verbose debug printfs  (Damien Zammit)
Message-Id: <20230213084919.1157074-5-damien@zamaudio.com>
2023-02-13  pmap: Signal cpu for TLB update if kernel_pmap  (Damien Zammit)
Message-Id: <20230213084919.1157074-3-damien@zamaudio.com>
2023-02-12  move kernel virtual address space to upper addresses  (Luca Dariz)
* i386/i386/vm_param.h: adjust constants to the new kernel map - the boothdr.S code already sets up a temporary map to higher addresses, so we can use INIT_VM_MIN_KERNEL_ADDRESS as in xen - increase the kernel map size to accommodate for bigger structures and more memory - adjust kernel max address and directmap limit * i386/i386at/biosmem.c: enable directmap check also on x86_64 * i386/include/mach/i386/vm_param.h: increase user virtual memory limit as it's not conflicting with the kernel's anymore * i386/intel/pmap.h: adjust lin2pdenum_cont() and INTEL_PTE_PFN to the new kernel map * x86_64/Makefrag.am: change KERNEL_MAP_BASE to be above 4G, and according to mcmodel=kernel. This will allow to use the full memory address space. Message-Id: <20230212172818.1511405-10-luca@orpolo.org>
2023-02-12  separate initialization of kernel and user PTP tables  (Luca Dariz)
* i386/i386/vm_param.h: temporarily fix kernel upper address * i386/intel/pmap.c: split kernel and user L3 map initialization. For simplicity in handling the different configurations, on 32-bit (+PAE) the name PDPNUM_KERNEL is used in place of PDPNUM, while only on x86_64 the PDPNUM_USER and PDPNUM_KERNEL are treated differently. Also, change iterating over PTP tables in case the kernel map is not right after the user map. * i386/intel/pmap.h: define PDPNUM_USER and PDPNUM_KERNEL and move PDPSHIFT to simplify ifdefs. Message-Id: <20230212172818.1511405-9-luca@orpolo.org>
2023-02-12  add more explicit names for user space virtual space limits  (Luca Dariz)
* i386/i386/vm_param.h: add VM_MAX/MIN_USER_ADDRESS to kernel headers. * i386/i386/db_interface.c * i386/i386/ldt.c * i386/i386/pcb.c * i386/intel/pmap.c * kern/task.c: replace VM_MAX/MIN_ADDRESS with VM_MAX/MIN_USER_ADDRESS Message-Id: <20230212172818.1511405-7-luca@orpolo.org>
2023-02-12  use L4 page table directly on x86_64 instead of short-circuiting to pdpbase  (Luca Dariz)
This is a preparation to run the kernel on high addresses, where the user vm region and the kernel vm region will use different L3 page tables. * i386/intel/pmap.c: on x86_64, retrieve the value of pdpbase from the L4 table, and add the pmap_pdp() helper (useful also for PAE). * i386/intel/pmap.h: remove pdpbase on x86_64. Message-Id: <20230212172818.1511405-6-luca@orpolo.org>
2023-02-12  factor out PAE-specific bootstrap  (Luca Dariz)
* i386/intel/pmap.c: move it to pmap_bootstrap_pae() Message-Id: <20230212172818.1511405-5-luca@orpolo.org>
2023-02-12  factor out xen-specific bootstrap  (Luca Dariz)
* i386/intel/pmap.c: move it to pmap_bootstrap_xen() Message-Id: <20230212172818.1511405-4-luca@orpolo.org>
2023-02-12  pmap: Fix warning  (Samuel Thibault)
2023-02-12  prepare pmap helpers for full 64 bit memory map  (Luca Dariz)
* i386/intel/pmap.c: start walking the page table tree from the L4 table instead of the PDP table in pmap_pte() and pmap_pde(), preparing for the kernel to run on high addresses. Message-Id: <20230212172818.1511405-2-luca@orpolo.org>
2023-02-12  add L4 kmem cache for x86_64  (Luca Dariz)
* i386/intel/pmap.c: allocate the L4 page table from a dedicated kmem cache instead of the generic kernel map. Also improve readability of nested ifdefs. Message-Id: <20230212170313.1501404-4-luca@orpolo.org>