Age  Commit message  Author
2013-06-01  kern/thread: slightly rework scheduler invocation  Richard Braun
Rename THREAD_RESCHEDULE to THREAD_YIELD and thread_reschedule to thread_yield for better clarity, and add the thread_schedule inline function, which checks the THREAD_YIELD flag before calling thread_yield (yielding only occurs if preemption is enabled).
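A minimal sketch of the fast path this describes, assuming a per-thread flags field and a thread_self() accessor (both assumptions, not verified against the tree):

    #define THREAD_YIELD 0x1    /* assumed flag value */

    static inline void
    thread_schedule(void)
    {
        /* Cheap flag check first; only enter the scheduler when a yield
         * was actually requested. thread_yield() itself is expected to
         * do nothing unless preemption is enabled. */
        if (!(thread_self()->flags & THREAD_YIELD))
            return;

        thread_yield();
    }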
2013-06-01  kern/thread: short comment about thread locking  Richard Braun
2013-06-01  kern/thread: minor naming change  Richard Braun
Refer to scheduling class data instead of context.
2013-05-26  vm/vm_kmem: check that kernel space doesn't start at 0  Richard Braun
Address 0 is commonly used to report allocation errors, in particular to the kmem module, which itself returns NULL on failure.
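One plausible form of the check, assuming the kernel space start is named by a VM_MIN_KERNEL_ADDRESS constant (the macro name is an assumption):

    #include <assert.h>

    static void
    vm_kmem_check_space(void)
    {
        /* NULL doubles as the error value returned by kmem allocations,
         * so a kernel space starting at address 0 would make a valid
         * allocation indistinguishable from a failed one. */
        assert(VM_MIN_KERNEL_ADDRESS != 0);
    }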
2013-05-25  kern/thread: make more extensive use of cpumaps  Richard Braun
2013-05-25  kern/llsync: replace raw bitmaps with cpumaps  Richard Braun
2013-05-24  x86/param: update kernel space end address on i386  Richard Braun
The kernel image is loaded where the kernel space ends. The previous end address left only 2 MiB for the kernel image. Since boot stacks are now statically allocated, a kernel configured for a large number of processors would overflow the area reserved for the kernel image. Reduce the kernel space end address so that 64 MiB are now available for the kernel image.
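Back-of-the-envelope illustration of why 2 MiB no longer suffices; the per-stack size and processor count below are assumptions chosen for the arithmetic, not the project's configured values:

    #define BOOT_STACK_SIZE (16UL * 1024)   /* assumed 16 KiB per boot stack */
    #define MAX_CPUS        128UL           /* assumed processor limit */

    /* 16 KiB * 128 = 2 MiB: the statically allocated boot stacks alone
     * would fill the 2 MiB previously left for the whole kernel image. */
    #define BOOT_STACKS_SIZE (BOOT_STACK_SIZE * MAX_CPUS)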
2013-05-24  kern/llsync: group related functions  Richard Braun
This minor change groups the reset/commit and register/unregister functions together for better discoverability.
2013-05-24  kern/llsync: fix checkpoint reset interrupt handling  Richard Braun
This change makes reset requests write both the per-processor flag and the checkpoint bitmap atomically, and adjusts the module logic accordingly. It fixes a race between checkpoint resets and system timer interrupts: the timer interrupt could make the local processor commit its checkpoint even though, because the reset interrupt hadn't been received yet, it couldn't be reliably determined that the processor had reached a checkpoint since the last global checkpoint. This problem would rarely happen on real hardware because of the near-instant handling of IPIs, but it was observed on virtual machines.
2013-05-24  kern/llsync: fix first processor registration  Richard Braun
Intuitively, registering the first processor should trigger a global checkpoint to get the lockless synchronization system started. However, this case can occur frequently on idle systems, where processors are normally not registered and only perform load balancing. A processor can also determine itself to be the only registered one while reset interrupts have not yet been processed, in which case a global checkpoint should simply not occur. The real condition for a global checkpoint is the number of pending checkpoints reaching 0.
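A hedged sketch of the resulting condition; the structure and field names below are guesses made for illustration only, not the module's actual layout:

    /* Called with the global llsync lock held (assumed). A global
     * checkpoint is triggered only once no registered processor still
     * has a pending checkpoint, rather than by special-casing the first
     * registration. */
    if (data->nr_pending_checkpoints == 0)
        llsync_process_global_checkpoint(data);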
2013-05-24  kern/llsync: minor refactoring  Richard Braun
Move code handling the unregistration of the last processor into the global checkpoint processing function, where list management actually occurs.
2013-05-24  kern/llsync: disable interrupts on per-CPU data access  Richard Braun
Not strictly required, but makes things simpler at virtually no cost.
2013-05-24  kern/llsync: assume interrupts are disabled on commit  Richard Braun
2013-05-24  kern/llsync: improve concurrency  Richard Braun
Keep a local copy of a processor's registration state to avoid acquiring the global lock when attempting to commit a checkpoint from an unregistered processor.
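Sketch of the fast path this enables; all identifiers below are illustrative assumptions, not the module's verified code:

    static void
    llsync_commit_checkpoint(unsigned int cpu)
    {
        struct llsync_cpu_data *cpu_data = &llsync_cpu_data[cpu];

        /* Per-CPU copy of the registration state: an unregistered
         * processor returns without touching the global lock at all. */
        if (!cpu_data->registered)
            return;

        spinlock_lock(&llsync_data.lock);
        /* ... clear the pending checkpoint, maybe trigger a global one ... */
        spinlock_unlock(&llsync_data.lock);
    }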
2013-05-24  kern/llsync: fix deadlock  Richard Braun
2013-05-24  kern/llsync: set worker thread processor affinity  Richard Braun
2013-05-19  kern/thread: implement processor affinity  Richard Braun
Processor affinity can be set before a thread is created, but cannot currently be changed afterwards.
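Illustrative usage under that constraint; the attribute and cpumap calls below are hypothetical names for the sake of the example, not the confirmed interface:

    struct thread *thread;
    struct thread_attr attr;
    struct cpumap *cpumap;

    cpumap_create(&cpumap);                   /* hypothetical */
    cpumap_zero(cpumap);
    cpumap_set(cpumap, 1);                    /* pin to processor 1 */

    thread_attr_init(&attr, "x15_worker");    /* hypothetical */
    thread_attr_set_cpumap(&attr, cpumap);    /* affinity fixed before creation */
    thread_create(&thread, &attr, worker_run, NULL);
    /* Changing the affinity after thread_create() is not supported yet. */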
2013-05-19  kern/cpumap: new module  Richard Braun
2013-05-16  x86/mb: remove eflags register from the clobber list  Richard Braun
GCC already assumes the condition code register (eflags) is always clobbered on this architecture.
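Example of the kind of statement affected; the wrapper below is a generic illustration, not necessarily the exact code in x86/mb:

    static inline void
    mb_sync(void)
    {
        /* "memory" is still needed as a compiler barrier; listing the
         * flags ("cc"/eflags) is not, since GCC already assumes they are
         * clobbered by inline asm on x86. */
        asm volatile("mfence" : : : "memory");
    }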
2013-05-16  x86/trap: pass raw function names to trap macros  Richard Braun
Using complete names instead of forging them actually helps clarity.
2013-05-16  kern/thread: fix wakeup with respect to pinning  Richard Braun
2013-05-16  kern/llsync: minor comment fix  Richard Braun
2013-05-16  kern/kmem: reduce fragmentation  Richard Braun
This reverts a change introduced when reworking slab list handling that made the allocator store slabs in LIFO order regardless of their reference count. While that is fine for free slabs, it actually increased fragmentation for partial slabs.
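One way to express the distinction, as a sketch only (the cache and list field names are assumptions; list_insert_head matches the naming adopted elsewhere in this log):

    if (slab->nr_refs == 0)
        /* Free slabs: LIFO is fine and keeps recently touched pages warm. */
        list_insert_head(&cache->free_slabs, &slab->node);
    else
        /* Partial slabs: avoid pure LIFO so allocations keep filling the
         * slabs that are already mostly used, which limits fragmentation. */
        list_insert_tail(&cache->partial_slabs, &slab->node);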
2013-05-16  kern/task: fix task creation  Richard Braun
On task creation, the kernel task was inserted into the task list instead of the newly created task (!). Fix that dumb mistake. In addition, insert the new task at the end of the task list.
2013-05-15  kern/list: rename list_insert to list_insert_head  Richard Braun
This change increases clarity.
2013-05-15  x86/cpu: pass flags by address to cpu_intr_save  Richard Braun
Not a necessary change, but done for consistency.
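Resulting call pattern; whether cpu_intr_restore takes the saved value or its address is an assumption here:

    unsigned long flags;

    cpu_intr_save(&flags);      /* flags now passed by address */
    /* ... critical section with interrupts disabled ... */
    cpu_intr_restore(flags);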
2013-05-15  kern/thread: return unsigned run queue identifiers  Richard Braun
The correct type for run queue or processor identifiers is unsigned int. The signed variant was used because of the bitmap interface, but there will never be enough processors to trigger a problem with it, while using signed integers can quickly mess things up.
2013-05-15  kern/bitmap: move helper functions to bitmap_i.h  Richard Braun
Although the distinction between those helpers and the public interface was already easy to make, it's a bit more consistent and elegant this way.
2013-05-15  kern/thread: add comment about balancer threads  Richard Braun
Balancer threads use their own version of thread_sleep because they have to keep the run queue locked after pulling threads; otherwise, remote balancers might pull threads from a run queue right before a scheduling decision is made, possibly leaving the run queue in a state where only expired threads remain. Although this might be relaxed in the future, this is how it's currently done, and a clear note in the main balancer function helps prevent that part from being broken.
2013-05-15  kern/thread: set the runq member for idler threads  Richard Braun
Neither required nor used, but done for consistency.
2013-05-15  x86/param: minor comment fix  Richard Braun
2013-05-15  x86/cpu: reset lockless synchronization checkpoints  Richard Braun
2013-05-15  kern/llsync: new module  Richard Braun
This module provides lockless synchronization so that reads can safely occur during updates, without holding a lock. It is based on passive serialization as described in US patent 4809168, and achieves a goal similar to Linux RCU (Read-Copy Update).
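A hedged usage sketch in the spirit of such a scheme; the reader and updater entry points shown are assumptions for illustration, not the module's verified interface:

    /* Reader side: no lock taken; the brackets only ensure the processor
     * does not cross a checkpoint while the object is in use. */
    llsync_read_enter();                      /* hypothetical */
    obj = shared_ptr;
    process(obj);
    llsync_read_exit();                       /* hypothetical */

    /* Updater side: publish the new version, then defer freeing the old
     * one until every processor has passed a checkpoint. */
    old = shared_ptr;
    shared_ptr = new_obj;
    llsync_defer(&old->work, obj_free);       /* hypothetical */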
2013-05-15  kern/thread: rework idle loop  Richard Braun
The upcoming lockless synchronization implementation requires the idle loop to report when it's about to enter/leave the idle state. Preemption must be disabled to accomplish that.
2013-05-15  x86/cpu: make cpu_idle safely enable interrupts  Richard Braun
The idle loop is being reworked and requires this change.
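A plausible way to provide that property on x86, given that the effect of sti is delayed until after the next instruction:

    static inline void
    cpu_idle(void)
    {
        /* sti takes effect only after the next instruction, so no
         * interrupt can slip in before hlt; a pending interrupt wakes
         * the processor right after it halts. */
        asm volatile("sti; hlt" : : : "memory");
    }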
2013-05-15  x86/{cpu,trap}: implement lockless checkpoint reset IPIs  Richard Braun
These inter-processor interrupts will be used by the upcoming llsync module implementing lockless synchronization.
2013-05-15  kern/thread: describe thread_sleep memory barrier semantics  Richard Braun
2013-05-15  kern/bitmap: new bitmap_copy function  Richard Braun
2013-05-13  kern/thread: make thread_active_runqs private  Richard Braun
2013-05-13  kern/thread: fix thread waking on remote run queue  Richard Braun
2013-05-12  kern/thread: update kernel thread naming rules  Richard Braun
2013-05-12  kern/thread: minor name change in struct thread_attr  Richard Braun
Rename sched_policy to policy.
2013-05-12  kern/thread: fix getting caller task during bootstrap  Richard Braun
2013-05-12  kern/thread: fix balancer threads policy  Richard Braun
2013-05-12  kern/thread: fix reaper thread policy  Richard Braun
2013-05-09  kern/thread: remove an unneeded memory barrier  Richard Braun
The memory barrier semantics of locks already provide all the required ordering when checking for pinned threads.
2013-04-21  x86/pmap: replace spin locks with mutexes where relevant  Richard Braun
2013-04-21  kern/kmem: rework slab lists handling  Richard Braun
Don't enforce strong ordering of partial slabs. Separating partial slabs from free slabs is already effective against fragmentation, and sorting would sometimes cause pathological scalability issues. In addition, store new slabs (whether free or partial) in LIFO order for better cache usage.
2013-04-20  kern/kmem: fix locking error  Richard Braun
2013-04-19  kern/kmem: move internal data to kmem_i.h  Richard Braun
As is done for other modules, this separation makes the public interface easy to identify.