|
For some reason, commit be5b9d6ab9f7e7a81c367e4bb0823ba11f85940f didn't
take care of all reserved identifiers.
|
|
The new build system, called xbuild, is a minimalistic, kbuild-like, make-based system that also uses kconfig for scalable configuration.
|
|
The kernel_map/kernel_pmap/kernel_task/etc... names were reused as they
were in the Mach source code. They've been a (mostly harmless) long-standing
violation of the coding rules.
|
|
The real problem actually only applies to "max" names, for which the
value is ambiguous, as "max" usually implies the value is included in
the associated range, which is not the case for these macros.
|
|
Move the page properties into the new x86/page module, and the virtual
memory layout macros into the x86/pmap module.
|
|
Instead of mixing standard headers and internal redefinitions of standard
types, rely completely on the compiler for what is guaranteed in a
freestanding environment. This results in the removal of kern/stddef.h and
kern/stdint.h. The kern/types.h header is reintroduced for the different
(and saner) purpose of defining types not specified in standard C,
namely ssize_t for now.
|
|
Using a single header for all the types that cause circular inclusion
dependencies isn't very elegant and doesn't scale.
|
|
Talk about entries "per page table" instead of "per page table page".
This is made obvious by the new PAE hierarchy, where the root page
table isn't a page.
|
|
This is mostly done for the machine-independent part.
|
|
Instead of presenting PAE mode as a two-level hierarchy (the same as
non-PAE 32-bit mode) with a page directory 4 pages wide, present it as a
three-level hierarchy. The purpose of this change is to break the
requirement of the page directory being 4 contiguous pages, which
is likely to fail because of fragmentation.
|
|
Update the test_vm_page_fill module accordingly. The vm_kmem module
needs to be reworked in order to correctly handle failures.
|
|
This module is probably the most impacted by the direct physical
mapping.
First, it establishes the direct physical mapping before paging is
enabled, and uses the largest available page size when doing so.
In addition, the recursive mapping of PTEs is removed, since all
page table pages are allocated from the direct physical mapping.
This changes the way mapping requests are processed.
The ability to access any page table page at any time removes the
need for virtual addresses reserved for temporary mappings.
Since it's now perfectly possible to return physical address 0 for
a kernel mapping, change the interface of the pmap_extract function
and rename it to pmap_kextract to stress the fact that it should
only be used for kernel mappings.
|
|
This kind of strict accounting can lead to thrashing, where one or more
tables get allocated and released very frequently. For now,
take the naive approach of not freeing page table pages until the complete
tree is torn down.
Remove the now irrelevant x86_pmap_remove_ptp test module.
|
|
Use the term "skip" instead of "shift" to align with radix tree
terminology.
|
|
- declare CPU descriptors as percpu variables
- make the percpu segment register point to the percpu area instead of
the CPU descriptor
- remove the ugly accessors for the local CPU descriptor, pmap and TCB
and use percpu variables for them instead
- implement the cpu_local accessors as described in the percpu
documentation
|
|
Although currently not very useful since all pmap operations are global,
this change enables the precise targeting of processors when maintaining
the consistency of physical maps. It is essential to a scalable virtual
memory system where non-overlapping mapping operations can be processed
concurrently.
It also temporarily removes some functionality, such as the ability to
manipulate non-kernel pmaps and lazy TLB invalidation. These will be
added again in the future.
|
|
Start application processors once the kernel is completely initialized,
right before starting the scheduler. This simplifies the procedure with
regard to inter-processor pmap updates.
|
|
Explicitly show that levels start from 0 in macro names, and use clearer names
for base addresses in the recursive mapping as well as for the number of
PTEs per PTP.
|
|
Use "ptp" to refer to page table page(s), making it clearly distinct from
"pt", which is used to denote paging translation.
|
|
This is required in order to properly release page table pages when removing
mappings from the kernel physical map.
|
|
Remove the pmap_klimit, pmap_kgrow, pmap_kenter and pmap_kremove functions
from the pmap interface. The regular pmap_enter and pmap_remove functions
are now used instead. The kernel physical map is handled almost exactly
like user physical maps, except shared root page table pages need special
care. The pmap_kenter and pmap_kremove functions still exist but are private
to the pmap implementation.
|
|
The root_pt member of the pmap structure is ambiguous. Explicitly declare
it as a physical address and add the _pa suffix to its name.
|
|
|
Add a processor bitmap per physical map to determine processors on which
a pmap is loaded, so that only those processors receive update IPIs. In
addition, implement lazy TLB invalidation by not loading page tables
when switching to a kernel thread. Finally, the thread module now calls
pmap_load unconditionally, without making assumptions about pmap
optimizations.
|
|
Similar to pmap_protect and pmap_extract, pmap_update is meant to handle
both kernel and regular pmap updates.
|
|
As it was done for pmap_protect, replace a kernel-specific call with one
that can handle both the kernel and regular pmaps.
The new function isn't complete yet and cannot handle physical maps that
aren't the kernel pmap or the currently loaded pmap.
|
|
This change is merely a slight interface modification to get rid of a
function that shouldn't be exported and replace it with the true entry
point for setting physical mapping protection.
The new function isn't complete yet and cannot handle physical maps that
aren't the kernel pmap or the currently loaded pmap.
|
|
It is expected that some future processes will require the kernel to have its
own separate low-level address space instead of always using the high part of
user tasks. It also simplifies collecting statistics and managing other
kernel specific data in a generic way.
In addition, this change removes some duplicated boot data and makes boot
code use more virtual to physical translations.
|
|
The rule of thumb of page table management is to flush the TLB whenever
the page tables are changed in a way that requires it (e.g. to prevent
inconsistencies due to prefetching). The recursive mapping entries are no
exception and must abide by this rule.
|
|
In order to avoid circular dependencies, the param.h header shouldn't include
anything that may cause them. Move the ptemap size there and remove the
machine/pmap.h inclusion.
|
|
Don't involve the pmap module directly, as there could be others. Make
the cpu module completely responsible for synchronizing all processors
on kernel entry so that interrupts can be explicitly enabled there.
|
|
Scheduling is temporarily disabled until the thread module is able to
cope with multiple processors.
|
|
Let the pmap module internally handle kernel mapping requests.
|
|
Let the pmap module internally handle the mapping limit.
|
|
There are no precise enough criteria to justify the separation of these
two directories.
|