path: root/libhurd-mm
author     Neal H. Walfield <neal@gnu.org>   2008-12-12 10:20:33 +0100
committer  Neal H. Walfield <neal@gnu.org>   2008-12-12 10:20:33 +0100
commit     57487c9bf316f3ed262d17e4c10b284a263b8187 (patch)
tree       95b3ef42f0f0d0a1be513f912855385cc9688ca8 /libhurd-mm
parent     f67f59da6de65186c674afb8e7307d6b23f48b63 (diff)
New IPC system. Update code accordingly.
hurd/ 2008-12-11 Neal H. Walfield <neal@gnu.org> Adapt RPC interfaces according to changes in IPC semantics. * messenger.h: New file. * message.h: New file. * ipc.h: New file. * headers.m4: Link sysroot/include/hurd/message.h to hurd/message.h, sysroot/include/hurd/messenger.h to hurd/messenger.h, and sysroot/include/hurd/ipc.h to hurd/ipc.h. * cap.h: Include <hurd/addr.h> and <stdbool.h>. (enum cap_type): Define cap_messenger, cap_rmessenger and cap_type_count. (cap_type_string): Handle cap_messenger and cap_rmessenger. (cap_types_compatible): Likewise. (cap_type_weak_p): Likewise. (cap_type_weaken): Likewise. (cap_type_strengthen): Likewise. (oid_t): Replace L4 type with standard type. (CAP_VOID): Define. * rpc.h [! RPC_TARGET]: Don't error out if not defined. [RPC_TARGET_ARG_]: Don't define or undefine. [RPC_TARGET_]: Likewise. [RPC_TARGET_NEED_ARG]: Ignore. Don't include <l4/ipc.h> or <l4/space.h>. Include <hurd/message.h> and <hurd/ipc.h>. (reply_buffer) [RM_INTERN]: Declare. (messenger_message_load) [RM_INTERN]: Likewise. [! RM_INTERN] Include <hurd/message-buffer.h>. (cap_t): Define. (CPP_FOREACH): Define. (CPP_SAFE_DEREF): Likewise. (RPC_ARGUMENTS): Take additional argument prefix. Use it. Update users. (RPC_CHOP): Rename from this... (RPC_CHOP2): ... to this. Update users. (RPC_TYPE_SHIFT): New define. (RPCLOADARG): Rewrite according to new marshalling semantics. (RPCSTOREARG): Likewise. (RPC_SEND_MARSHAL): Likewise. (RPC_SEND_UNMARSHAL): Likewise. (RPC_REPLY_MARSHAL): Likewise. (RPC_REPLY_UNMARSHAL): Likewise. (RPC_RECEIVE_MARSHAL): New define. (RPC_MARSHAL_GEN_): Break this into... (RPC_SEND_MARSHAL_GEN_): ... this... (RPC_RECEIVE_MARSHAL_GEN_): ... this... (RPC_REPLY_MARSHAL_GEN_): ... and this. Update users. (RPC_MARSHAL_GEN_): Redefine in terms of the new macros. (RPC_SEND_): Rewrite according to new marshalling and IPC semantics. (RPC_SEND_NONBLOCKING_): Define. (RPC_): Rewrite according to new marshalling and IPC semantics. (RPC_REPLY_): Likewise. (RPC_SIMPLE_): Don't define. (RPC_SIMPLE): Don't define. (RPC): Take additional argument ret_cap_count. Update users. (rpc_error_reply_marshal): Rewrite according to new marshalling and IPC semantics. (rpc_error_reply): Likewise. * t-rpc.c (RPC_TARGET_NEED_ARG): Don't define. (RPC_TARGET): Define. (RPC_noargs): Set to a large interger. (RPC_caps): New define. (noargs): Update interface specification according to new IDL interface. Update users. (onein): Likewise. (oneout): Likewise. (onlyin): Likewise. (onlyout): Likewise. (mix): Likewise. (noargs): Likewise. (onein): Likewise. (oneout): Likewise. (onlyin): Likewise. (onlyout): Likewise. (mix): New interface. (RPC_TARGET): Don't undefine. (main): Update to use the new RPC marshalling interface. Write a test using the new `mix' interface. * activity.h (RPC_TARGET_NEED_ARG): Don't undefine. (RPC_TARGET): Don't define. (activity_policy): Update interface specification according to new IDL interface. Update users. (activity_info): Likewise. * cap.h: (RPC_TARGET_NEED_ARG): Don't undefine. (RPC_TARGET): Don't define. (RM_object_slot_copy_out): Don't define. (RM_object_slot_copy_in): Likewise. (RM_object_slot_read): Likewise. (RM_object_reply_on_destruction): Define. (cap_copy): Update interface specification according to new IDL interface. Update users. (cap_rubout): Likewise. (cap_read): Likewise. (object_discarded_clear): Likewise. (object_discard): Likewise. (object_status): Likewise. (object_name): Likewise. (object_reply_on_destruction): New interface replacing thread_wait_destroy. 
(object_slot_copy_out): Remove interface. (object_slot_copy_in): Likewise. (object_slot_read): Likewise. (RPC_TARGET): Don't undefine. * exceptions.h: Don't include <l4/thread.h>. Include <l4/space.h>. (RPC_STUB_PREFIX): Redefine to `activation'. (RPC_ID_PREFIX EXCEPTION): Redefine to `ACTIVATION'. (RPC_TARGET_NEED_ARG): Don't define. (RPC_TARGET_ARG_TYPE): Likewise. (RPC_TARGET): Likewise. (EXCEPTION_fault): Rename from this... (ACTIVATION_fault): ... to this. Update users. (exception_method_id_string): Rename from this... (activation_method_id_string): ... to this. (struct exception_info): Rename from this... (struct activation_fault_info): ... to this. Update users. (EXCEPTION_INFO_FMT): Rename from this... (ACTIVATION_FAULT_INFO_FMT): ... to this. Update users. (EXCEPTION_INFO_PRINTF): Rename from this... (ACTIVATION_FAULT_INFO_PRINTF): ... to this. Update users. (fault): Update interface specification according to new IDL interface. Update users. * folio.h (RPC_TARGET_NEED_ARG): Don't undefine. (RPC_TARGET): Don't define. (folio_alloc): Update interface specification according to new IDL interface. Update users. (folio_free): Likewise. (folio_object_alloc): Likewise. (folio_policy): Likewise. (RPC_TARGET): Don't undefine. * futex.h (RPC_TARGET_NEED_ARG): Don't undefine. (RPC_TARGET): Don't define. (futex): Update interface specification according to new IDL interface. Update users. (RPC_TARGET): Don't undefine. (futex_using): New function. (futex): Implement in terms of it. (futex_wait_using): New function. (futex_wait): Implement in terms of it. (futex_wake_using): New function. (futex_wake): Implement in terms of it. * thread.h (RM_thread_wait_object_destroyed): Don't define. (RM_thread_raise_exception): Rename from this... (RM_thread_activation_collect): ... to this. (RM_thread_id): Define. (RPC_TARGET_NEED_ARG): Don't undefine. (RPC_TARGET): Don't define. (struct hurd_thread_exregs_in): Remove fields aspace, activity, exception_page, aspace_out, activity_out and exception_page_out. (thread_exregs): Update interface specification according to new IDL interface. Add additional parameters exception_messenger and exception_messenger_out. Update users. (thread_wait_object_destroyed): Remove interface. (struct exception_buffer): Don't define. (thread_raise_exception): Remove interface. (thread_id): New interface. (thread_activation_collect): Likewise. (RPC_TARGET): Don't undefine. * RPC: Update. * exceptions.h (hurd_activation_handler_init_early): New declaration. (hurd_activation_handler_init): Likewise. (hurd_utcb): Likewise. (EXCEPTION_STACK_SIZE_LOG2): Don't define. (EXCEPTION_STACK_SIZE): Likewise. (hurd_activation_state_alloc): New declaration. (exception_page_cleanup): Rename from this... (hurd_activation_state_free): ... to this. Update users. (exception_handler_activated): Rename from this... (hurd_activation_handler_activated): ... to this. (exception_handler_normal): Rename from this... (hurd_activation_handler_normal): ... to this. Update users. Take additional parameter utcb. (exception_handler_entry): Rename from this... (hurd_activation_handler_entry): ... to this. (exception_handler_end): Rename from this... (hurd_activation_handler_end): ... to this. (hurd_activation_message_register): New declaration. (hurd_activation_message_unregister): Likewise. (hurd_activation_stack_dump): Likewise. * thread.h [! __have_vg_thread_id_t] (__have_vg_thread_id_t): Define. [! __have_vg_thread_id_t && USE_L4]: Include <l4.h>. [! __have_vg_thread_id_t && !USE_L4]: Include <stdint.h>. [! 
__have_vg_thread_id_t] (vg_thread_id_t): Define. [! __have_vg_thread_id_t] (vg_niltid): Define. [! __have_vg_thread_id_t] (VG_THREAD_ID_FMT): Define. [! __have_activation_frame] (__have_activation_frame): Define. [! __have_activation_frame && USE_L4]: Include <l4/ipc.h>. [! __have_activation_frame] (struct hurd_message_buffer): Declare. [! __have_activation_frame] (struct activation_frame): Define in this case. Add fields normal_mode_stack and canary. [! __have_activation_frame && i386] (struct activation_frame): Change regs to have 10 elements. Add fields eax, ecx, edx, eflags, eip, ebx, edi, esi, ebp and esp. [! __have_activation_frame && !USE_L4] (struct activation_frame): Remove fields saved_sender, saved_receiver, saved_timeout, saved_error_code, saved_flags, and saved_br0 in this case. [__need_vg_thread_id_t || __need_activation_frame] (__need_vg_thread_id_t): Undefine. [__need_vg_thread_id_t || __need_activation_frame] (__need_activation_frame): Likewise. [!__need_vg_thread_id_t && !__need_activation_frame]: Include the rest of the file in this case. Include <stdint.h>, <hurd/types.h>, <hurd/addr.h>, <hurd/addr-trans.h>, <hurd/cap.h>, <hurd/messenger.h> and <setjmp.h>. (hurd_activation_frame_longjmp): New declaration. (struct hurd_fault_catcher): New definition. (hurd_fault_catcher_register): New declaration. (hurd_fault_catcher_unregister): Likewise. (struct exception_page): Rename from this... (struct vg_utcb): ... to this. Update users. Remove field exception. Add fields protected_payload, messenger_id, inline_words, inline_caps, inline_word_count, inline_cap_count, inline_data, exception_buffer, extant_messages, catchers, alternate_stack, alternate_stack_inuse, canary0, canary1. (UTCB_CANARY0): Define. (UTCB_CANARY1): Likewise. (THREAD_EXCEPTION_PAGE_SLOT): Rename from this... (THREAD_UTCB): ... to this. (THREAD_EXCEPTION_MESSENGER): Define. (THREAD_SLOTS): Likewise. (THREAD_SLOTS_LOG2): Likewise. (HURD_EXREGS_SET_EXCEPTION_PAGE): Rename from this... (HURD_EXREGS_SET_UTCB): ... to this. Update users. (HURD_EXREGS_SET_EXCEPTION_MESSENGER): Define. (HURD_EXREGS_SET_REGS): Add HURD_EXREGS_SET_EXCEPTION_MESSENGER. (vg_myself): New function. * startup.h (struct hurd_startup_data): Add field messengers. viengoos/ 2008-12-12 Neal H. Walfield <neal@gnu.org> Implement messengers and convert to new IPC semantics. * messenger.h: New file. * messenger.c: New file. * Makefile.am (viengoos_SOURCES): Add messenger.h and messenger.c. * ager.c: Include "messenger.h". (update_stats): Update notifivation code to use messengers. * cap.c: Include <hurd/messenger.h>. (cap_shootdown): Follow thread and messenger objects. * object.h (object_wait_queue_head): Use and return struct messenger *'s, not struct thread *'s. Update users. (object_wait_queue_tail): Likewise. (object_wait_queue_next): Likewise. (object_wait_queue_prev): Likewise. (object_wait_queue_enqueue): Likewise. (object_wait_queue_dequeue): Likewise. Rename from this... (object_wait_queue_unlink): ... to this. (object_wait_queue_push): New declaration. (folio_object_wait_queue_for_each): Use and return struct messenger *'s, not struct thread *'s. Update users. (object_wait_queue_for_each): Likewise. * object.c: Include <hurd/messenger.h> and "messenger.h". (folio_object_alloc): When destroying a messenger, call messenger_destroy. (folio_object_alloc): Send notifications using messengers. (object_wait_queue_head): Use and return struct messenger *'s, not struct thread *'s. (object_wait_queue_tail): Likewise. 
(object_wait_queue_next): Likewise. (object_wait_queue_prev): Likewise. (object_wait_queue_check): Likewise. (object_wait_queue_enqueue): Likewise. Add MESSENGER to end of the queue, not the beginning. (object_wait_queue_push): New function. (object_wait_queue_dequeue): Use and return struct messenger *'s, not struct thread *'s. Rename from this... (object_wait_queue_unlink): ... to this. * pager.c: Include "messenger.h". * thread.h: Don't include "list.h". Include <hurd/cap.h> and <hurd/thread.h>. (struct folio): Remove declaration. (THREAD_SLOTS): Don't define. (THREAD_WAIT_FUTEX): Move from here... * messenger.h (MESSENGER_WAIT_FUTEX): ... to here. * thread.h (THREAD_WAIT_DESTROY): Move from here... * messenger.h (MESSENGER_WAIT_DESTROY): ... to here. * thread.h (THREAD_WAIT_ACTIVITY_INFO): Move from here... * messenger.h (MESSENGER_WAIT_ACTIVITY_INFO): ... to here. * thread.h (struct thread): Rename field exception_page to utcb. Add field exception_messenger. Remove fields wait_queue_p, wait_queue_head, wait_queue_tail, wait_reason, wait_reason_arg, wait_reason_arg2, wait_queue and futex_waiter_node. (futex_waiters): Don't declare. (thread_exregs): Change input capabilities to not be pointers to capabilities but just capability structures. Add argument exception_messenger. Remove arguments aspace_out, activity_out and exception_page_out. Update users. (thread_activate): New declaration. (thread_raise_exception): Change MSG's type to be struct vg_message *. Update users. (thread_deliver_pending): New declaration. * thread.c (thread_deinit): Remove code to remove THREAD from a wait queue. (thread_exregs): Change input capabilities to not be pointers to capabilities but just capability structures. Update code. Add argument exception_messenger. Set THREAD's exception messenger according to it and CONTROL. Remove arguments aspace_out, activity_out and exception_page_out. Don't save the old capabilities. (thread_raise_exception): Move body of function... (thread_activate): ... to this new function. Update to use messengers. (thread_raise_exception): Implement in terms of it. (thread_deliver_pending): New function. * server.c: Include <hurd/ipc.h> and "messenger.h". (DEBUG): If label is the IPC label, use "IPC" as the function. (OBJECT_): Take additional parameter WRITABLE. Save whether the object is writable in *WRITABLE. Update users. (OBJECT): Likewise. (server_loop): Update to use messengers and the new IPC interface. Update method implementations appropriately. Don't marshal faults using exception_fault_send_marshal but the new activation_fault_send_marshal. Remove implementations of object_slot_copy_out, object_slot_copy_in and object_slot_read. Reimplement object_discard. In the thread_exregs implementation, handle the exception messenger. Implement thread_id. Remove thread_wait_object_destroyed. Implement object_reply_on_destruction. In activity_info and activity_policy, don't operate on PRINCIPAL but the invoke activity. Implement thread_activation_collect. When blocking on a futex, don't enqueue the calling thread but the reply messenger. Implement the messenger_id method. (REPLY): Redefine before processing an object invocation to reply using the reply messenger included in the request. * rm.h: Include <l4/message.h>. (rm_method_id_string): Don't handle object_slot_copy_out, object_slot_copy_in, object_slot_read, exception_collect or thread_wait_object_destroyed. Handle object_reply_on_destruction, thread_id, thread_activation_collect. (RPC_TARGET_NEED_ARG): Don't undefine. 
(RPC_TARGET): Don't define. (struct io_buffer): Redefine in terms of L4_NUM_BRS. (write): Update interface specification according to new IDL interface. Update users. (read): Likewise. (as_dump): Likewise. (fault): Likewise. (RPC_STUB_PREFIX): Don't undefine. (RPC_ID_PREFIX): Likewise. libhurd-mm/ 2008-12-12 Neal H. Walfield <neal@gnu.org> Update to new RPC interface and IPC semantics. Support messengers. * message-buffer.h: New file. * message-buffer.c: Likewise. * Makefile.am (libhurd_mm_a_SOURCES): Add message-buffer.h and message-buffer.c. * headers.m4: Link sysroot/include/hurd/message-buffer.h to libhurd-mm/message-buffer.h. * exceptions.c: Include <hurd/mm.h>, <hurd/rm.h> and <backtrace.h>. (hurd_fault_catcher_register): New function. (hurd_fault_catcher_unregister): Likewise. (hurd_activation_frame_longjmp): Likewise. (utcb_state_save): Rename from this... (l4_utcb_state_save): ... to this. Take a `struct activation_frame *', not a `struct exception_frame *'. (utcb_state_restore): Rename from this... (l4_utcb_state_restore): ... to this. Take a `struct activation_frame *', not a `struct exception_frame *'. (exception_fetch_exception): Rename from this... (hurd_activation_fetch): ... to this. (hurd_activation_message_register): New function. (hurd_activation_frame_longjmp): Likewise. (exception_frame_slab): Rename from this... (activation_frame_slab): ... to this. Use a static initializer. (exception_frame_slab_alloc): Rename from this... (activation_frame_slab_alloc): ... to this. Don't preserve the L4 utcb. (exception_frame_slab_dealloc): Rename from this... (activation_frame_slab_dealloc): ... to this. (exception_frame_alloc): Rename from this... (activation_frame_alloc): ... to this. If there are no preallocated frames, panic. Move the hard allocation code to... (check_activation_frame_reserve): ... this new function. (hurd_activation_stack_dump): New function. (hurd_activation_handler_normal): Take an additional parameter, the utcb. Add consistency checks. Handle IPC and closures. Update fault handling code to use the new fault interface. If unable to resolve the fault via the pager mechanism, see if a fault catcher in installed. Check the UTCB's canary. If running on the alternate stack, clear UTCB->ALTERNATE_STACK_INUSE on exit. (hurd_activation_handler_activated): Take a `struct vg_utcb *', not a `struct exception_page *'. Handle IPC and closures. Improve test to determine if the fault was a stack fault. If so, return to normal mode to handle the fault and use an alternate stack. (activation_handler_area0): New local variable. (activation_handler_msg): Likewise. (initial_utcb): Likewise. (simple_utcb_fetcher): New function. (hurd_utcb): New variable. (hurd_activation_handler_init_early): New function. (hurd_activation_handler_init): Likewise. (exception_handler_init): Remove function. (ACTIVATION_AREA_SIZE_LOG2): Define. (ACTIVATION_AREA_SIZE): Likewise. (hurd_activation_state_alloc): New function. (exception_page_cleanup): Rename from this... (hurd_activation_state_free): ... to this. Rewrite. * ia32-exception-entry.S (_hurd_activation_handler_entry): Save the eflags before executing a sub instruction. Don't try to smartly calculate the location of the UTCB. Instead, just reload it. (activation_frame_run): Use an alternate stack, if requested. Save ebx and ebi. Pass the utcb to the callback. * mm-init.c [i386]: Include <hurd/pager.h>. Include <backtrace.h>. (mm_init): Call hurd_activation_handler_init_early and hurd_activation_handler_init. 
Don't call exception_handler_init. (mm_init) [! NDEBUG && i386]: Test the activation code. * as-build.c (do_index): Handle indexing a cap_thread or a cap_messenger. (as_build): Likewise. * as-dump.c (do_walk): Handle indexing a cap_thread or a cap_messenger. * as-lookup.c (as_lookup_rel_internal): Likewise. * as.c (as_walk): Likewise. * storage.c: Include <backtrace.h>. (shadow_setup): Update use of rm_folio_object_alloc according to its new interface. (storage_check_reserve_internal): Likewise. (storage_free_): Likewise. (FREE_PAGES_SERIALIZE): Bump to 32. (storage_alloc): If we try to get storage more than 5 lives, print a warning that we may be experiencing live lock. * pager.h (pager_fault_t): Change info's type from `struct exception_info' to `struct activation_fault_info'. (PAGER_VOID): Define. * map.h: Don't include <hurd/exceptions.h>. Include <hurd/as.h>. (maps_lock_lock): Don't use EXCEPTION_STACK_SIZE but AS_STACK_SPACE. (map_fault): Change info's type from `struct exception_info' to `struct activation_fault_info'. * map.c (map_fault): Change info's type from `struct exception_info' to `struct activation_fault_info'. * as.h (AS_STACK_SPACE): Define. (as_lock): Use AS_STACK_SPACE instead of EXCEPTION_STACK_SIZE. (as_lock_readonly): Likewise. * as.h (AS_CHECK_SHADOW): Only check the address translator for capabilities that designate cappages. * anonymous.h (ANONYMOUS_MAGIC): Define. (struct anonymous_pager): Add field magic. * anonymous.c (fault): Assert that ANON->MAGIC has the expected value. Correctly size PAGES. (mdestroy): Assert that ANON->MAGIC has the expected value. (destroy): Likewise. (advise): Likewise. (anonymous_pager_alloc): Initialize ANON->MAGIC. benchmarks/ 2008-12-12 Neal H. Walfield <neal@gnu.org> Update according to new RPC interfaces. * activity-distribution.c (main): Update use of rm_activity_policy and rm_activity_info to be consistent with the new interface. Replace use of `struct exception_info' with `struct activation_fault_info'. * cache.c (helper): Update use of rm_activity_policy and rm_activity_info to be consistent with the new interface. * shared-memory-distribution.c (main): Likewise. hieronymus/ 2008-12-12 Neal H. Walfield <neal@gnu.org> Update according to new RPC interfaces. * hieronymus.c (activity_alloc): Update use of rm_activity_policy, rm_activity_info and rm_folio_object_alloc to be consistent with new interface. Replace use of rm_thread_wait_object_destroyed with rm_object_reply_on_destruction. libc-parts/ 2008-12-11 Neal H. Walfield <neal@gnu.org> Update to new RPC interfaces. * _exit.c (_exit): Update use of rm_folio_object_alloc to be consistent with the new interface. * backtrace.c (RA) [!RM_INTERN]: Set up a fault catch handler to avoid gratuitously faulting. (backtrace) [!RM_INTERN]: Set up a jump buffer. Jump to it on a fault. (backtrace_print): Use s_printf, not printf. * ia32-crt0.S (STACK_SIZE): Increase to 128 kb. * process-spawn.c (process_spawn): Don't use a capability slot to identify the root of the new thread's address space, allocate a thread object. Allocate messengers for the new thread and save them in STARTUP_DATA->MESSENGERS. * s_printf.c (io_buffer_flush): Use the debug output interface. (s_putchar): Don't call rm_write but use io_buffer_flush. libpthread/ 2008-12-11 Neal H. Walfield <neal@gnu.org> Update to new RPC interfaces, IPC semantics. * sysdeps/viengoos/bits/pthread-np.h: Include <hurd/exceptions.h>. (pthread_hurd_utcb_np): New declaration. * sysdeps/viengoos/pt-hurd-utcb-np.c: New file. 
* Makefile.am (libpthread_a_SOURCES): Add pt-hurd-utcb.c. * sysdeps/viengoos/pt-sysdep.h (EXCEPTION_AREA_SIZE): Don't define. (EXCEPTION_AREA_SIZE_LOG2): Likewise. (EXCEPTION_PAGE): Likewise. (PTHREAD_SYSDEP_MEMBERS): Remove fields exception_area, and exception_area_va. Add fields utcb and lock_message_buffer. * sysdeps/viengoos/pt-thread-alloc.c: Include <hurd/message-buffer.h>. (__pthread_thread_alloc): Initialize thread->lock_message_buffer. When executed the first time, set the thread's L4 user-defined handler. Initialize THREAD->UTCB with the thread's current utcb. Set HURD_UTCB to PTHREAD_HURD_UTCB_NP. For subsequent threads, don't manually set up the activation area. Instead, call hurd_activation_state_alloc. * sysdeps/viengoos/pt-thread-dealloc.c: Include <hurd/message-buffer.h>. (__pthread_thread_dealloc): Call __pthread_thread_halt. Don't manually clean up the activation area. Instead, call hurd_activation_state_free. Free THREAD->LOCK_MESSAGE_BUFFER. * sysdeps/viengoos/ia32/pt-setup.c (stack_setup): Pre-fault the first four pages of the new stack. (__pthread_setup): Don't set up the activation area. * sysdeps/viengoos/pt-wakeup.c (__pthread_wakeup): Use futex_wake_using with the calling thread's lock messenger. * sysdeps/viengoos/pt-block.c (__pthread_block): Use futex_wait_using and provide THREAD->LOCK_MESSAGE_BUFFER as the message buffer. * sysdeps/viengoos/pt-thread-start.c (__pthread_thread_start): Don't set the first thread's L4 user-defined handler here. (__pthread_thread_start): Update use of rm_thread_exregs according to be consistent with new interface. * sysdeps/viengoos/pt-thread-halt.c (__pthread_thread_halt): If THREAD is the current thread, call vg_suspend. * sysdeps/viengoos/pt-setactivity-np.c (pthread_setactivity_np): Update use of rm_thread_exregs according to be consistent with new interface. * sysdeps/viengoos/ia32/signal-dispatch-lowlevel.c (signal_dispatch_lowlevel): Use __builtin_frame_address to get the current stack frame's start. Update use of rm_thread_exregs according to be consistent with new interface. ruth/ 2008-12-12 Neal H. Walfield <neal@gnu.org> Update to new RPC interfaces. * ruth.c (main): Update use of rm_folio_alloc, rm_folio_object_alloc, rm_thread_exregs, rm_activity_policy, rm_activity_info. Replace use of rm_thread_wait_object_destroy with rm_object_reply_on_destruction. Replace use of `struct exception_info' with `struct activation_fault_info'. Fix signal test's use of condition variables to not rely on the scheduler. When checking deallocation code, set up a fault handler to programmatically determine success.
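The libc-parts/backtrace.c and ruth.c entries above both lean on the fault-catcher mechanism this commit adds (declared in hurd/thread.h, implemented in libhurd-mm/exceptions.c; see the diff below). A minimal sketch of that pattern, not part of the commit itself: the struct field names (start, len, callback) and the hurd_activation_frame_longjmp signature are taken from the exceptions.c hunks below, while probe_word and probe_catch are hypothetical helpers.

#include <setjmp.h>
#include <stdbool.h>
#include <stdint.h>
#include <hurd/thread.h>   /* struct hurd_fault_catcher, struct activation_frame,
                              hurd_fault_catcher_register/unregister and
                              hurd_activation_frame_longjmp, per the entries above.  */

static jmp_buf probe_env;

/* Runs from the activation handler when a fault hits the watched range.
   Rewrite the interrupted state so that the thread resumes at the setjmp
   below with a return value of 1 instead of raising SIGSEGV.  */
static bool
probe_catch (struct activation_frame *af, uintptr_t fault)
{
  hurd_activation_frame_longjmp (af, probe_env, true, 1);
  return true;   /* Tell the handler the fault has been dealt with.  */
}

/* Read *ADDR, returning false instead of faulting if it is unmapped.  */
static bool
probe_word (uintptr_t *addr, uintptr_t *value)
{
  struct hurd_fault_catcher catcher;
  catcher.start = (uintptr_t) addr;
  catcher.len = sizeof (uintptr_t);
  catcher.callback = probe_catch;
  hurd_fault_catcher_register (&catcher);

  volatile bool ok = false;
  if (setjmp (probe_env) == 0)
    {
      *value = *addr;   /* May fault; the catcher recovers via the longjmp above.  */
      ok = true;
    }

  hurd_fault_catcher_unregister (&catcher);
  return ok;
}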
Diffstat (limited to 'libhurd-mm')
-rw-r--r--  libhurd-mm/ChangeLog                 117
-rw-r--r--  libhurd-mm/Makefile.am                 1
-rw-r--r--  libhurd-mm/anonymous.c                43
-rw-r--r--  libhurd-mm/anonymous.h                 6
-rw-r--r--  libhurd-mm/as-build.c                 26
-rw-r--r--  libhurd-mm/as-dump.c                  24
-rw-r--r--  libhurd-mm/as-lookup.c                60
-rw-r--r--  libhurd-mm/as.c                       15
-rw-r--r--  libhurd-mm/as.h                       12
-rw-r--r--  libhurd-mm/exceptions.c             1035
-rw-r--r--  libhurd-mm/headers.m4                  1
-rw-r--r--  libhurd-mm/ia32-exception-entry.S    366
-rw-r--r--  libhurd-mm/map.c                       8
-rw-r--r--  libhurd-mm/map.h                       6
-rw-r--r--  libhurd-mm/message-buffer.c          315
-rw-r--r--  libhurd-mm/message-buffer.h           80
-rw-r--r--  libhurd-mm/mm-init.c                 197
-rw-r--r--  libhurd-mm/pager.h                     9
-rw-r--r--  libhurd-mm/storage.c                  45
19 files changed, 1861 insertions, 505 deletions
diff --git a/libhurd-mm/ChangeLog b/libhurd-mm/ChangeLog
index 703e145..56d2842 100644
--- a/libhurd-mm/ChangeLog
+++ b/libhurd-mm/ChangeLog
@@ -1,3 +1,120 @@
+2008-12-12 Neal H. Walfield <neal@gnu.org>
+
+ Update to new RPC interface and IPC semantics. Support messengers.
+
+ * message-buffer.h: New file.
+ * message-buffer.c: Likewise.
+ * Makefile.am (libhurd_mm_a_SOURCES): Add message-buffer.h and
+ message-buffer.c.
+ * headers.m4: Link sysroot/include/hurd/message-buffer.h to
+ libhurd-mm/message-buffer.h.
+ * exceptions.c: Include <hurd/mm.h>, <hurd/rm.h> and
+ <backtrace.h>.
+ (hurd_fault_catcher_register): New function.
+ (hurd_fault_catcher_unregister): Likewise.
+ (hurd_activation_frame_longjmp): Likewise.
+ (utcb_state_save): Rename from this...
+ (l4_utcb_state_save): ... to this. Take a `struct
+ activation_frame *', not a `struct exception_frame *'.
+ (utcb_state_restore): Rename from this...
+ (l4_utcb_state_restore): ... to this. Take a `struct
+ activation_frame *', not a `struct exception_frame *'.
+ (exception_fetch_exception): Rename from this...
+ (hurd_activation_fetch): ... to this.
+ (hurd_activation_message_register): New function.
+ (hurd_activation_frame_longjmp): Likewise.
+ (exception_frame_slab): Rename from this...
+ (activation_frame_slab): ... to this. Use a static initializer.
+ (exception_frame_slab_alloc): Rename from this...
+ (activation_frame_slab_alloc): ... to this. Don't preserve the L4
+ utcb.
+ (exception_frame_slab_dealloc): Rename from this...
+ (activation_frame_slab_dealloc): ... to this.
+ (exception_frame_alloc): Rename from this...
+ (activation_frame_alloc): ... to this. If there are no
+ preallocated frames, panic. Move the hard allocation code to...
+ (check_activation_frame_reserve): ... this new function.
+ (hurd_activation_stack_dump): New function.
+ (hurd_activation_handler_normal): Take an additional parameter,
+ the utcb. Add consistency checks. Handle IPC and closures.
+ Update fault handling code to use the new fault interface. If
+ unable to resolve the fault via the pager mechanism, see if a
+ fault catcher is installed. Check the UTCB's canary. If running
+ on the alternate stack, clear UTCB->ALTERNATE_STACK_INUSE on exit.
+ (hurd_activation_handler_activated): Take a `struct vg_utcb *',
+ not a `struct exception_page *'. Handle IPC and closures.
+ Improve test to determine if the fault was a stack fault. If so,
+ return to normal mode to handle the fault and use an alternate
+ stack.
+ (activation_handler_area0): New local variable.
+ (activation_handler_msg): Likewise.
+ (initial_utcb): Likewise.
+ (simple_utcb_fetcher): New function.
+ (hurd_utcb): New variable.
+ (hurd_activation_handler_init_early): New function.
+ (hurd_activation_handler_init): Likewise.
+ (exception_handler_init): Remove function.
+ (ACTIVATION_AREA_SIZE_LOG2): Define.
+ (ACTIVATION_AREA_SIZE): Likewise.
+ (hurd_activation_state_alloc): New function.
+ (exception_page_cleanup): Rename from this...
+ (hurd_activation_state_free): ... to this. Rewrite.
+ * ia32-exception-entry.S (_hurd_activation_handler_entry): Save
+ the eflags before executing a sub instruction. Don't try to
+ smartly calculate the location of the UTCB. Instead, just reload
+ it.
+ (activation_frame_run): Use an alternate stack, if requested.
+ Save ebx and edi. Pass the utcb to the callback.
+ * mm-init.c [i386]: Include <hurd/pager.h>.
+ Include <backtrace.h>.
+ (mm_init): Call hurd_activation_handler_init_early and
+ hurd_activation_handler_init. Don't call exception_handler_init.
+ (mm_init) [! NDEBUG && i386]: Test the activation code.
+
+ * as-build.c (do_index): Handle indexing a cap_thread or a
+ cap_messenger.
+ (as_build): Likewise.
+ * as-dump.c (do_walk): Handle indexing a cap_thread or a
+ cap_messenger.
+ * as-lookup.c (as_lookup_rel_internal): Likewise.
+ * as.c (as_walk): Likewise.
+
+ * storage.c: Include <backtrace.h>.
+ (shadow_setup): Update use of rm_folio_object_alloc according to
+ its new interface.
+ (storage_check_reserve_internal): Likewise.
+ (storage_free_): Likewise.
+ (FREE_PAGES_SERIALIZE): Bump to 32.
+ (storage_alloc): If we try to get storage more than 5 times, print
+ a warning that we may be experiencing live lock.
+
+ * pager.h (pager_fault_t): Change info's type from `struct
+ exception_info' to `struct activation_fault_info'.
+ (PAGER_VOID): Define.
+ * map.h: Don't include <hurd/exceptions.h>. Include <hurd/as.h>.
+ (maps_lock_lock): Don't use EXCEPTION_STACK_SIZE but
+ AS_STACK_SPACE.
+ (map_fault): Change info's type from `struct exception_info' to
+ `struct activation_fault_info'.
+ * map.c (map_fault): Change info's type from `struct
+ exception_info' to `struct activation_fault_info'.
+
+ * as.h (AS_STACK_SPACE): Define.
+ (as_lock): Use AS_STACK_SPACE instead of EXCEPTION_STACK_SIZE.
+ (as_lock_readonly): Likewise.
+
+ * as.h (AS_CHECK_SHADOW): Only check the address translator for
+ capabilities that designate cappages.
+
+ * anonymous.h (ANONYMOUS_MAGIC): Define.
+ (struct anonymous_pager): Add field magic.
+ * anonymous.c (fault): Assert that ANON->MAGIC has the expected
+ value. Correctly size PAGES.
+ (mdestroy): Assert that ANON->MAGIC has the expected value.
+ (destroy): Likewise.
+ (advise): Likewise.
+ (anonymous_pager_alloc): Initialize ANON->MAGIC.
+
2008-12-04 Neal H. Walfield <neal@gnu.org>
* mmap.c (mmap): Use correct format conversions.
diff --git a/libhurd-mm/Makefile.am b/libhurd-mm/Makefile.am
index e0f08b8..64dce7c 100644
--- a/libhurd-mm/Makefile.am
+++ b/libhurd-mm/Makefile.am
@@ -50,6 +50,7 @@ libhurd_mm_a_SOURCES = mm.h \
mmap.c sbrk.c \
mprotect.c \
madvise.c \
+ message-buffer.h message-buffer.c \
$(ARCH_SOURCES)
libas_kernel_a_CPPFLAGS = $(KERNEL_CPPFLAGS)
diff --git a/libhurd-mm/anonymous.c b/libhurd-mm/anonymous.c
index 656711b..c679507 100644
--- a/libhurd-mm/anonymous.c
+++ b/libhurd-mm/anonymous.c
@@ -31,6 +31,7 @@
#include <hurd/rm.h>
#include <profile.h>
+#include <backtrace.h>
#include "anonymous.h"
#include "pager.h"
@@ -132,18 +133,23 @@ static struct hurd_slab_space anonymous_pager_slab
static bool
fault (struct pager *pager, uintptr_t offset, int count, bool read_only,
- uintptr_t fault_addr, uintptr_t ip, struct exception_info info)
+ uintptr_t fault_addr, uintptr_t ip, struct activation_fault_info info)
{
struct anonymous_pager *anon = (struct anonymous_pager *) pager;
+ assert (anon->magic == ANONYMOUS_MAGIC);
- debug (5, "Fault at %p, %d pages (%d kb); pager at " ADDR_FMT "+%d",
- fault_addr, count, count * PAGESIZE / 1024,
- ADDR_PRINTF (anon->map_area), offset);
+ debug (5, "%p: fault at %p, spans %d pg (%d kb); "
+ "pager: %p-%p (%d pages; %d kb), offset: %x",
+ anon, (void *) fault_addr, count, count * PAGESIZE / 1024,
+ (void *) (uintptr_t) addr_prefix (anon->map_area),
+ (void *) (uintptr_t) addr_prefix (anon->map_area) + anon->pager.length,
+ anon->pager.length / PAGESIZE, anon->pager.length / 1024,
+ offset);
ss_mutex_lock (&anon->lock);
bool recursive = false;
- void *pages[count];
+ void **pages;
profile_region (count > 1 ? ">1" : "=1");
@@ -204,17 +210,20 @@ fault (struct pager *pager, uintptr_t offset, int count, bool read_only,
fault_addr -= left;
offset -= left;
- count += (left + right) / PAGESIZE;
+ count = (left + PAGESIZE + right) / PAGESIZE;
assertx (offset + count * PAGESIZE <= pager->length,
"%x + %d pages <= %x",
offset, count, pager->length);
- debug (5, "Fault at %p, %d pages (%d kb); pager at " ADDR_FMT "+%d",
- fault_addr, count, count * PAGESIZE / 1024,
+ debug (5, "Faulting %p - %p (%d pages; %d kb); pager at " ADDR_FMT "+%d",
+ (void *) fault_addr, (void *) fault_addr + count * PAGE_SIZE,
+ count, count * PAGESIZE / 1024,
ADDR_PRINTF (anon->map_area), offset);
}
+ pages = __builtin_alloca (sizeof (void *) * count);
+
if (! (anon->flags & ANONYMOUS_NO_ALLOC))
{
hurd_btree_storage_desc_t *storage_descs;
@@ -244,7 +253,7 @@ fault (struct pager *pager, uintptr_t offset, int count, bool read_only,
storage address as object_discarded_clear also
returns a mapping and we are likely to access the
data at the fault address. */
- err = rm_object_discarded_clear (ADDR_VOID,
+ err = rm_object_discarded_clear (ADDR_VOID, ADDR_VOID,
storage_desc->storage);
assertx (err == 0, "%d", err);
@@ -326,6 +335,7 @@ fault (struct pager *pager, uintptr_t offset, int count, bool read_only,
#endif
}
+ assert (anon->magic == ANONYMOUS_MAGIC);
ss_mutex_unlock (&anon->lock);
profile_region_end ();
@@ -367,6 +377,8 @@ fault (struct pager *pager, uintptr_t offset, int count, bool read_only,
debug (5, "Fault at %x resolved", fault_addr);
+ assert (anon->magic == ANONYMOUS_MAGIC);
+
return r;
}
@@ -374,6 +386,7 @@ static void
mdestroy (struct map *map)
{
struct anonymous_pager *anon = (struct anonymous_pager *) map->pager;
+ assert (anon->magic == ANONYMOUS_MAGIC);
/* XXX: We assume that every byte is mapped by at most one mapping.
We may have to reexamine this assumption if we allow multiple
@@ -468,6 +481,7 @@ destroy (struct pager *pager)
assert (! pager->maps);
struct anonymous_pager *anon = (struct anonymous_pager *) pager;
+ assert (anon->magic == ANONYMOUS_MAGIC);
/* Wait any fill function returns. */
ss_mutex_lock (&anon->fill_lock);
@@ -498,6 +512,7 @@ advise (struct pager *pager,
uintptr_t start, uintptr_t length, uintptr_t advice)
{
struct anonymous_pager *anon = (struct anonymous_pager *) pager;
+ assert (anon->magic == ANONYMOUS_MAGIC);
switch (advice)
{
@@ -541,7 +556,7 @@ advise (struct pager *pager,
case pager_advice_normal:
{
- struct exception_info info;
+ struct activation_fault_info info;
info.discarded = anon->policy.discardable;
info.type = cap_page;
/* XXX: What should we set info.access to? */
@@ -585,6 +600,8 @@ anonymous_pager_alloc (addr_t activity,
struct anonymous_pager *anon = buffer;
memset (anon, 0, sizeof (*anon));
+ anon->magic = ANONYMOUS_MAGIC;
+
anon->pager.length = length;
anon->pager.fault = fault;
anon->pager.no_refs = destroy;
@@ -650,7 +667,7 @@ anonymous_pager_alloc (addr_t activity,
{
if ((flags & ANONYMOUS_FIXED))
{
- debug (0, "(%x, %x (%x)): Specified range " ADDR_FMT "+%d "
+ debug (0, "(%p, %x (%p)): Specified range " ADDR_FMT "+%d "
"in use and ANONYMOUS_FIXED specified",
hint, length, hint + length - 1,
ADDR_PRINTF (anon->map_area), count);
@@ -668,7 +685,7 @@ anonymous_pager_alloc (addr_t activity,
anon->map_area = as_alloc (width, count, true);
if (ADDR_IS_VOID (anon->map_area))
{
- debug (0, "(%x, %x (%x)): No VA available",
+ debug (0, "(%p, %x (%p)): No VA available",
hint, length, hint + length - 1);
goto error_with_buffer;
}
@@ -699,7 +716,7 @@ anonymous_pager_alloc (addr_t activity,
panic ("Memory exhausted.");
- debug (5, "Installed pager at %x spanning %d pages",
+ debug (5, "Installed pager at %p spanning %d pages",
*addr_out, length / PAGESIZE);
return anon;
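The ANON->MAGIC assertions added above (the constant and field appear in the anonymous.h hunk below) are a lightweight object-integrity canary: the allocator stamps the structure, every entry point re-checks the stamp, and a stale or wild pager pointer trips an assertion instead of silently corrupting state. A generic sketch of the pattern with hypothetical names (my_object, my_object_alloc, my_object_op), not code from this commit:

#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define MY_OBJECT_MAGIC 0xa707a707   /* Arbitrary, recognizable bit pattern.  */

struct my_object
{
  uintptr_t magic;
  /* ... real fields ... */
};

static struct my_object *
my_object_alloc (void)
{
  struct my_object *o = calloc (1, sizeof *o);
  if (! o)
    return NULL;
  o->magic = MY_OBJECT_MAGIC;   /* Stamp at construction.  */
  return o;
}

static void
my_object_op (struct my_object *o)
{
  /* Every entry point re-checks the stamp; a freed or corrupted object
     fails loudly here rather than misbehaving later.  */
  assert (o->magic == MY_OBJECT_MAGIC);
  /* ... do work ... */
}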
diff --git a/libhurd-mm/anonymous.h b/libhurd-mm/anonymous.h
index fd1bedb..e946dc2 100644
--- a/libhurd-mm/anonymous.h
+++ b/libhurd-mm/anonymous.h
@@ -80,12 +80,16 @@ enum
typedef bool (*anonymous_pager_fill_t) (struct anonymous_pager *anon,
uintptr_t offset, uintptr_t count,
void *pages[],
- struct exception_info info);
+ struct activation_fault_info info);
+
+#define ANONYMOUS_MAGIC 0xa707a707
struct anonymous_pager
{
struct pager pager;
+ uintptr_t magic;
+
/* The staging area. Only valid if ANONYMOUS_STAGING_AREA is
set. */
void *staging_area;
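The anonymous_pager_fill_t typedef above now takes a struct activation_fault_info rather than a struct exception_info. A minimal callback matching that signature might look as follows; it is only a sketch, assuming COUNT is a page count and PAGESIZE is available from the usual libhurd-mm headers, and a real filler would read data from its backing source into PAGES rather than zeroing them:

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#include "anonymous.h"   /* struct anonymous_pager, anonymous_pager_fill_t,
                            struct activation_fault_info.  */

/* Fill callback with the updated signature: clear each page handed to us.  */
static bool
zero_fill (struct anonymous_pager *anon,
           uintptr_t offset, uintptr_t count, void *pages[],
           struct activation_fault_info info)
{
  uintptr_t i;
  for (i = 0; i < count; i ++)
    memset (pages[i], 0, PAGESIZE);
  return true;   /* Assumed to mean: the pages were provided.  */
}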
diff --git a/libhurd-mm/as-build.c b/libhurd-mm/as-build.c
index 589799d..a44f172 100644
--- a/libhurd-mm/as-build.c
+++ b/libhurd-mm/as-build.c
@@ -125,7 +125,9 @@ do_index (activity_t activity, struct cap *pte, addr_t pt_addr, int idx,
struct cap *fake_slot)
{
assert (pte->type == cap_cappage || pte->type == cap_rcappage
- || pte->type == cap_folio);
+ || pte->type == cap_folio
+ || pte->type == cap_thread
+ || pte->type == cap_messenger || pte->type == cap_rmessenger);
/* Load the referenced object. */
struct object *pt = cap_to_object (activity, pte);
@@ -156,6 +158,15 @@ do_index (activity_t activity, struct cap *pte, addr_t pt_addr, int idx,
return fake_slot;
+ case cap_thread:
+ assert (idx < THREAD_SLOTS);
+ return &pt->caps[idx];
+
+ case cap_messenger:
+ /* Note: rmessengers don't expose their capability slots. */
+ assert (idx < VG_MESSENGER_SLOTS);
+ return &pt->caps[idx];
+
default:
return NULL;
}
@@ -244,7 +255,9 @@ ID (as_build) (activity_t activity,
area. */
break;
else if ((pte->type == cap_cappage || pte->type == cap_rcappage
- || pte->type == cap_folio)
+ || pte->type == cap_folio
+ || pte->type == cap_thread
+ || pte->type == cap_messenger)
&& remaining >= pte_gbits
&& pte_guard == addr_guard)
/* PTE's (possibly zero-width) guard matches and the
@@ -591,6 +604,15 @@ ID (as_build) (activity_t activity,
width = FOLIO_OBJECTS_LOG2;
break;
+ case cap_thread:
+ width = THREAD_SLOTS_LOG2;
+ break;
+
+ case cap_messenger:
+ /* Note: rmessengers don't expose their capability slots. */
+ width = VG_MESSENGER_SLOTS_LOG2;
+ break;
+
default:
AS_DUMP;
PANIC ("Can't insert object at " ADDR_FMT ": "
diff --git a/libhurd-mm/as-dump.c b/libhurd-mm/as-dump.c
index 839013e..27dfc6d 100644
--- a/libhurd-mm/as-dump.c
+++ b/libhurd-mm/as-dump.c
@@ -23,6 +23,7 @@
#include <hurd/as.h>
#include <hurd/stddef.h>
#include <assert.h>
+#include <backtrace.h>
#ifdef RM_INTERN
#include <md5.h>
@@ -174,6 +175,29 @@ do_walk (activity_t activity, int index,
return;
+ case cap_thread:
+ if (addr_depth (addr) + THREAD_SLOTS_LOG2 > ADDR_BITS)
+ return;
+
+ for (i = 0; i < THREAD_SLOTS; i ++)
+ do_walk (activity, i, root,
+ addr_extend (addr, i, THREAD_SLOTS_LOG2),
+ indent + 1, true, output_prefix);
+
+ return;
+
+ case cap_messenger:
+ /* rmessenger's don't expose their capability slots. */
+ if (addr_depth (addr) + VG_MESSENGER_SLOTS_LOG2 > ADDR_BITS)
+ return;
+
+ for (i = 0; i < VG_MESSENGER_SLOTS; i ++)
+ do_walk (activity, i, root,
+ addr_extend (addr, i, VG_MESSENGER_SLOTS_LOG2),
+ indent + 1, true, output_prefix);
+
+ return;
+
default:
return;
}
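With thread and messenger capabilities now translating address bits, their slots are named the same way cappage slots are: extend the object's address by the slot index, exactly as do_walk does above. A small sketch, assuming addr_t and addr_extend come from <hurd/addr.h> and using the THREAD_UTCB and THREAD_EXCEPTION_MESSENGER slot indices that this commit declares in hurd/thread.h; thread_addr is a placeholder:

#include <hurd/addr.h>     /* addr_t, addr_extend.  */
#include <hurd/thread.h>   /* THREAD_UTCB, THREAD_EXCEPTION_MESSENGER,
                              THREAD_SLOTS_LOG2.  */

/* Name two capability slots of the thread object at THREAD_ADDR.  */
static void
name_thread_slots (addr_t thread_addr)
{
  addr_t utcb_slot
    = addr_extend (thread_addr, THREAD_UTCB, THREAD_SLOTS_LOG2);
  addr_t exception_messenger_slot
    = addr_extend (thread_addr, THREAD_EXCEPTION_MESSENGER, THREAD_SLOTS_LOG2);

  (void) utcb_slot;
  (void) exception_messenger_slot;
}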
diff --git a/libhurd-mm/as-lookup.c b/libhurd-mm/as-lookup.c
index 0a270ab..62343dd 100644
--- a/libhurd-mm/as-lookup.c
+++ b/libhurd-mm/as-lookup.c
@@ -62,6 +62,8 @@ as_lookup_rel_internal (activity_t activity,
enum as_lookup_mode mode, union as_lookup_ret *rt,
bool dump)
{
+ assert (root);
+
struct cap *start = root;
#ifndef NDEBUG
@@ -99,7 +101,10 @@ as_lookup_rel_internal (activity_t activity,
remaining - CAP_GUARD_BITS (root))),
remaining);
- assert (CAP_TYPE_MIN <= root->type && root->type <= CAP_TYPE_MAX);
+ assertx (CAP_TYPE_MIN <= root->type && root->type <= CAP_TYPE_MAX,
+ "Cap at " ADDR_FMT " has type %d?! (" ADDR_FMT ")",
+ ADDR_PRINTF (addr_chop (address, remaining)), root->type,
+ ADDR_PRINTF (address));
if (root->type == cap_rcappage)
/* The page directory is read-only. Note the weakened access
@@ -240,6 +245,59 @@ as_lookup_rel_internal (activity_t activity,
break;
+ case cap_thread:
+ case cap_messenger:
+ /* Note: rmessengers don't expose their capability slots. */
+ {
+ /* Index the object. */
+ int bits;
+ switch (root->type)
+ {
+ case cap_thread:
+ bits = THREAD_SLOTS_LOG2;
+ break;
+
+ case cap_messenger:
+ bits = VG_MESSENGER_SLOTS_LOG2;
+ break;
+ }
+
+ if (remaining < bits)
+ {
+ debug (1, "Translating " ADDR_FMT "; not enough bits (%d) "
+ "to index %d-bit %s at " ADDR_FMT,
+ ADDR_PRINTF (address), remaining, bits,
+ cap_type_string (root->type),
+ ADDR_PRINTF (addr_chop (address, remaining)));
+ DUMP_OR_RET (false);
+ }
+
+ struct object *object = cap_to_object (activity, root);
+ if (! object)
+ {
+#ifdef RM_INTERN
+ debug (1, "Failed to get object with OID " OID_FMT,
+ OID_PRINTF (root->oid));
+ DUMP_OR_RET (false);
+#endif
+ return false;
+ }
+#ifdef RM_INTERN
+ assert (object_type (object) == root->type);
+#endif
+
+ int offset = extract_bits64_inv (addr, remaining - 1, bits);
+ assert (0 <= offset && offset < (1 << bits));
+ remaining -= bits;
+
+ if (dump_path)
+ debug (0, "Indexing %s: %d/%d (%d)",
+ cap_type_string (root->type), offset, bits, remaining);
+
+ root = &object->caps[offset];
+ break;
+ }
+
default:
/* We designate a non-address bit translating object but we
have no bits left to translate. This is not an unusual
diff --git a/libhurd-mm/as.c b/libhurd-mm/as.c
index f8b5d25..b7643ab 100644
--- a/libhurd-mm/as.c
+++ b/libhurd-mm/as.c
@@ -20,6 +20,7 @@
#include "as.h"
#include "storage.h"
+#include <hurd/rm.h>
#include <pthread.h>
#include <hurd/folio.h>
@@ -506,10 +507,12 @@ as_init (void)
err = rm_cap_read (meta_data_activity, ADDR_VOID, addr,
&type, &properties);
assert (! err);
+ if (! cap_types_compatible (type, desc->type))
+ rm_as_dump (ADDR_VOID, ADDR_VOID);
assertx (cap_types_compatible (type, desc->type),
- "%s != %s",
- cap_type_string (type),
- cap_type_string (desc->type));
+ "Object at " ADDR_FMT ": %s != %s",
+ ADDR_PRINTF (addr),
+ cap_type_string (type), cap_type_string (desc->type));
int gbits = CAP_ADDR_TRANS_GUARD_BITS (properties.addr_trans);
addr_t slot_addr = addr_chop (addr, gbits);
@@ -797,6 +800,12 @@ as_walk (int (*visit) (addr_t addr,
case cap_folio:
slots_log2 = FOLIO_OBJECTS_LOG2;
break;
+ case cap_thread:
+ slots_log2 = THREAD_SLOTS_LOG2;
+ break;
+ case cap_messenger:
+ slots_log2 = VG_MESSENGER_SLOTS_LOG2;
+ break;
default:
assert (0 == 1);
break;
diff --git a/libhurd-mm/as.h b/libhurd-mm/as.h
index 0d4d48b..4b0a448 100644
--- a/libhurd-mm/as.h
+++ b/libhurd-mm/as.h
@@ -92,6 +92,9 @@ as_lock_ensure_stack (int amount)
space[i] = 0;
}
+/* The amount of stack space that needs to be available to avoid
+ faulting. */
+#define AS_STACK_SPACE (8 * PAGESIZE)
/* Address space lock. Should hold a read lock when accessing the
address space. Must hold a write lock when modifying the address
@@ -104,7 +107,7 @@ as_lock (void)
extern pthread_rwlock_t as_rwlock;
extern l4_thread_id_t as_rwlock_owner;
- as_lock_ensure_stack (EXCEPTION_STACK_SIZE - PAGESIZE);
+ as_lock_ensure_stack (AS_STACK_SPACE);
storage_check_reserve (false);
@@ -133,7 +136,7 @@ as_lock_readonly (void)
extern pthread_rwlock_t as_rwlock;
extern l4_thread_id_t as_rwlock_owner;
- as_lock_ensure_stack (EXCEPTION_STACK_SIZE - PAGESIZE);
+ as_lock_ensure_stack (AS_STACK_SPACE);
storage_check_reserve (false);
@@ -218,7 +221,8 @@ extern struct cap shadow_root;
&& (!!__acs_p.policy.discardable \
== !!(__acs_cap)->discardable))) \
die = true; \
- else if (__acs_p.addr_trans.raw != (__acs_cap)->addr_trans.raw) \
+ else if ((__acs_type == cap_cappage || __acs_type == cap_rcappage) \
+ && __acs_p.addr_trans.raw != (__acs_cap)->addr_trans.raw) \
die = true; \
\
if (die) \
@@ -596,7 +600,7 @@ as_cap_lookup (addr_t addr, enum cap_type type, bool *writable)
TYPE is the required type. If the type is incompatible
(cap_rcappage => cap_cappage and cap_rpage => cap_page), bails. If
TYPE is -1, then any type is acceptable. May cause paging. If
- non-NULL, returns whether the slot is writable in *WRITABLE.
+ non-NULL, returns whether the object is writable in *WRITABLE.
This function locks (and unlocks) as_lock. */
static inline struct cap
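as_lock and as_lock_readonly now pre-touch AS_STACK_SPACE bytes of stack before taking as_rwlock. The point, inferred from the comment introduced above, is to take any stack-growth faults before the lock is held, since resolving a fault may itself need the address-space lock. The idea reduced to a few lines, assuming only what the as_lock_ensure_stack fragment in the hunk above shows (one byte written per page); whether the real helper uses alloca or a variable-length array is not visible here:

#include <alloca.h>

#define PAGESIZE 4096                    /* Illustrative value.  */
#define AS_STACK_SPACE (8 * PAGESIZE)    /* As defined in as.h above.  */

/* Touch AMOUNT bytes of stack now so that growing the stack cannot
   fault later, while the address-space lock is held.  */
static inline void
ensure_stack (int amount)
{
  volatile char *space = alloca (amount);
  int i;
  for (i = 0; i < amount; i += PAGESIZE)
    space[i] = 0;
}

/* Rough use, mirroring as_lock in the hunk above:
     ensure_stack (AS_STACK_SPACE);
     storage_check_reserve (false);
     ... then acquire as_rwlock ...  */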
diff --git a/libhurd-mm/exceptions.c b/libhurd-mm/exceptions.c
index 6827f71..26ad7c7 100644
--- a/libhurd-mm/exceptions.c
+++ b/libhurd-mm/exceptions.c
@@ -22,70 +22,192 @@
#include <hurd/stddef.h>
#include <hurd/exceptions.h>
#include <hurd/storage.h>
-#include <hurd/slab.h>
#include <hurd/thread.h>
+#include <hurd/mm.h>
+#include <hurd/rm.h>
+#include <hurd/slab.h>
#include <l4/thread.h>
#include <signal.h>
#include <string.h>
+#include <backtrace.h>
#include "map.h"
#include "as.h"
+void
+hurd_fault_catcher_register (struct hurd_fault_catcher *catcher)
+{
+ struct vg_utcb *utcb = hurd_utcb ();
+ assert (utcb);
+ assert (catcher);
+
+ catcher->magic = HURD_FAULT_CATCHER_MAGIC;
+
+ catcher->next = utcb->catchers;
+ catcher->prevp = &utcb->catchers;
+
+ utcb->catchers = catcher;
+ if (catcher->next)
+ catcher->next->prevp = &catcher->next;
+}
+
+void
+hurd_fault_catcher_unregister (struct hurd_fault_catcher *catcher)
+{
+ assertx (catcher->magic == HURD_FAULT_CATCHER_MAGIC,
+ "%p", (void *) catcher->magic);
+ catcher->magic = ~HURD_FAULT_CATCHER_MAGIC;
+
+ *catcher->prevp = catcher->next;
+ if (catcher->next)
+ catcher->next->prevp = catcher->prevp;
+}
+
extern struct hurd_startup_data *__hurd_startup_data;
+void
+hurd_activation_frame_longjmp (struct activation_frame *activation_frame,
+ jmp_buf buf, bool set_ret, int ret)
+{
+#ifdef i386
+ /* XXX: Hack! Hack! This is customized for the newlib version!!!
+
+ From newlib/newlib/libc/machine/i386/setjmp.S
+
+ jmp_buf:
+ eax ebx ecx edx esi edi ebp esp eip
+ 0 4 8 12 16 20 24 28 32
+ */
+ /* A cheap check to try and ensure we are using a newlib data
+ structure. */
+ assert (sizeof (jmp_buf) == sizeof (uintptr_t) * 9);
+
+ uintptr_t *regs = (uintptr_t *) buf;
+ activation_frame->eax = *(regs ++);
+ activation_frame->ebx = *(regs ++);
+ activation_frame->ecx = *(regs ++);
+ activation_frame->edx = *(regs ++);
+ activation_frame->esi = *(regs ++);
+ activation_frame->edi = *(regs ++);
+ activation_frame->ebp = *(regs ++);
+ activation_frame->esp = *(regs ++);
+ activation_frame->eip = *(regs ++);
+
+ /* The return value is stored in eax. */
+ if (set_ret)
+ activation_frame->eax = ret;
+
+#else
+# warning Not ported to this architecture
+#endif
+}
+
static void
-utcb_state_save (struct exception_frame *exception_frame)
+l4_utcb_state_save (struct activation_frame *activation_frame)
{
- l4_word_t *utcb = _L4_utcb ();
-
- exception_frame->saved_sender = utcb[_L4_UTCB_SENDER];
- exception_frame->saved_receiver = utcb[_L4_UTCB_RECEIVER];
- exception_frame->saved_timeout = utcb[_L4_UTCB_TIMEOUT];
- exception_frame->saved_error_code = utcb[_L4_UTCB_ERROR_CODE];
- exception_frame->saved_flags = utcb[_L4_UTCB_FLAGS];
- exception_frame->saved_br0 = utcb[_L4_UTCB_BR0];
- memcpy (&exception_frame->saved_message,
- utcb, L4_NUM_MRS * sizeof (l4_word_t));
+ uintptr_t *utcb = _L4_utcb ();
+
+ activation_frame->saved_sender = utcb[_L4_UTCB_SENDER];
+ activation_frame->saved_receiver = utcb[_L4_UTCB_RECEIVER];
+ activation_frame->saved_timeout = utcb[_L4_UTCB_TIMEOUT];
+ activation_frame->saved_error_code = utcb[_L4_UTCB_ERROR_CODE];
+ activation_frame->saved_flags = utcb[_L4_UTCB_FLAGS];
+ activation_frame->saved_br0 = utcb[_L4_UTCB_BR0];
+ memcpy (&activation_frame->saved_message,
+ &utcb[_L4_UTCB_MR0], L4_NUM_MRS * sizeof (uintptr_t));
}
static void
-utcb_state_restore (struct exception_frame *exception_frame)
+l4_utcb_state_restore (struct activation_frame *activation_frame)
+{
+ uintptr_t *utcb = _L4_utcb ();
+
+ utcb[_L4_UTCB_SENDER] = activation_frame->saved_sender;
+ utcb[_L4_UTCB_RECEIVER] = activation_frame->saved_receiver;
+ utcb[_L4_UTCB_TIMEOUT] = activation_frame->saved_timeout;
+ utcb[_L4_UTCB_ERROR_CODE] = activation_frame->saved_error_code;
+ utcb[_L4_UTCB_FLAGS] = activation_frame->saved_flags;
+ utcb[_L4_UTCB_BR0] = activation_frame->saved_br0;
+ memcpy (&utcb[_L4_UTCB_MR0], &activation_frame->saved_message,
+ L4_NUM_MRS * sizeof (uintptr_t));
+}
+
+/* Fetch any pending activation. */
+void
+hurd_activation_fetch (void)
{
- l4_word_t *utcb = _L4_utcb ();
-
- utcb[_L4_UTCB_SENDER] = exception_frame->saved_sender;
- utcb[_L4_UTCB_RECEIVER] = exception_frame->saved_receiver;
- utcb[_L4_UTCB_TIMEOUT] = exception_frame->saved_timeout;
- utcb[_L4_UTCB_ERROR_CODE] = exception_frame->saved_error_code;
- utcb[_L4_UTCB_FLAGS] = exception_frame->saved_flags;
- utcb[_L4_UTCB_BR0] = exception_frame->saved_br0;
- memcpy (utcb, &exception_frame->saved_message,
- L4_NUM_MRS * sizeof (l4_word_t));
+ debug (0, DEBUG_BOLD ("XXX"));
+
+ /* Any reply will come in the form of a pending activation being
+ delivered. This RPC does not generate a response. */
+ error_t err = rm_thread_activation_collect_send (ADDR_VOID, ADDR_VOID,
+ ADDR_VOID);
+ if (err)
+ panic ("Sending thread_activation_collect failed: %d", err);
}
-static struct hurd_slab_space exception_frame_slab;
+void
+hurd_activation_message_register (struct hurd_message_buffer *message_buffer)
+{
+ if (unlikely (! mm_init_done))
+ return;
+
+ struct vg_utcb *utcb = hurd_utcb ();
+ assert (utcb);
+ assert (message_buffer);
+
+ debug (5, "Registering %p (utcb: %p)", message_buffer, utcb);
+
+ if (utcb->extant_message)
+ panic ("Already have an extant message buffer!");
+
+ utcb->extant_message = message_buffer;
+ message_buffer->just_free = false;
+ message_buffer->closure = NULL;
+}
+
+void
+hurd_activation_message_unregister (struct hurd_message_buffer *message_buffer)
+{
+ if (unlikely (! mm_init_done))
+ return;
+
+ struct vg_utcb *utcb = hurd_utcb ();
+ assert (utcb);
+ assert (message_buffer);
+ assert (utcb->extant_message == message_buffer);
+ utcb->extant_message = NULL;
+}
+
+/* Message buffers contain an activation frame. Exceptions reuse
+ message buffers and can be nested. To avoid squashing the
+ activation frame, we need to allocate */
+
+static error_t activation_frame_slab_alloc (void *, size_t, void **);
+static error_t activation_frame_slab_dealloc (void *, void *, size_t);
+
+static struct hurd_slab_space activation_frame_slab
+ = HURD_SLAB_SPACE_INITIALIZER (struct activation_frame,
+ activation_frame_slab_alloc,
+ activation_frame_slab_dealloc,
+ NULL, NULL, NULL);
static error_t
-exception_frame_slab_alloc (void *hook, size_t size, void **ptr)
+activation_frame_slab_alloc (void *hook, size_t size, void **ptr)
{
assert (size == PAGESIZE);
- struct exception_frame frame;
- utcb_state_save (&frame);
-
struct storage storage = storage_alloc (meta_data_activity,
cap_page, STORAGE_EPHEMERAL,
OBJECT_POLICY_DEFAULT, ADDR_VOID);
*ptr = ADDR_TO_PTR (addr_extend (storage.addr, 0, PAGESIZE_LOG2));
- utcb_state_restore (&frame);
-
return 0;
}
static error_t
-exception_frame_slab_dealloc (void *hook, void *buffer, size_t size)
+activation_frame_slab_dealloc (void *hook, void *buffer, size_t size)
{
assert (size == PAGESIZE);
@@ -94,345 +216,704 @@ exception_frame_slab_dealloc (void *hook, void *buffer, size_t size)
return 0;
}
-
-static struct exception_frame *
-exception_frame_alloc (struct exception_page *exception_page)
+
+static void
+check_activation_frame_reserve (struct vg_utcb *utcb)
{
- struct exception_frame *exception_frame;
+ if (unlikely (! utcb->activation_stack
+ || ! utcb->activation_stack->prev))
+ /* There are no activation frames in reserve. Allocate one. */
+ {
+ void *buffer;
+ error_t err = hurd_slab_alloc (&activation_frame_slab, &buffer);
+ if (err)
+ panic ("Out of memory!");
+
+ struct activation_frame *activation_frame = buffer;
+ activation_frame->canary = ACTIVATION_FRAME_CANARY;
+
+ activation_frame->prev = NULL;
+ activation_frame->next = utcb->activation_stack;
+ if (activation_frame->next)
+ activation_frame->next->prev = activation_frame;
+
+ if (! utcb->activation_stack_bottom)
+ /* This is the first frame we've allocated. */
+ utcb->activation_stack_bottom = activation_frame;
+ }
+}
+
+static struct activation_frame *
+activation_frame_alloc (struct vg_utcb *utcb)
+{
+ struct activation_frame *activation_frame;
- if (! exception_page->exception_stack
- && exception_page->exception_stack_bottom)
+ if (! utcb->activation_stack
+ && utcb->activation_stack_bottom)
/* The stack is empty but we have an available frame. */
{
- exception_frame = exception_page->exception_stack_bottom;
- exception_page->exception_stack = exception_frame;
+ activation_frame = utcb->activation_stack_bottom;
+ utcb->activation_stack = activation_frame;
}
- else if (exception_page->exception_stack
- && exception_page->exception_stack->prev)
+ else if (utcb->activation_stack
+ && utcb->activation_stack->prev)
/* The stack is not empty and we have an available frame. */
{
- exception_frame = exception_page->exception_stack->prev;
- exception_page->exception_stack = exception_frame;
+ activation_frame = utcb->activation_stack->prev;
+ utcb->activation_stack = activation_frame;
}
else
/* We do not have an available frame. */
- {
- void *buffer;
- error_t err = hurd_slab_alloc (&exception_frame_slab, &buffer);
- if (err)
- panic ("Out of memory!");
-
- exception_frame = buffer;
+ panic ("Activation frame reserve is empty.");
- exception_frame->prev = NULL;
- exception_frame->next = exception_page->exception_stack;
- if (exception_frame->next)
- exception_frame->next->prev = exception_frame;
+ return activation_frame;
+}
+
+void
+hurd_activation_stack_dump (void)
+{
+ struct vg_utcb *utcb = hurd_utcb ();
- exception_page->exception_stack = exception_frame;
+ int depth = 0;
+ struct activation_frame *activation_frame;
+ for (activation_frame = utcb->activation_stack;
+ activation_frame;
+ activation_frame = activation_frame->next)
+ {
+ depth ++;
+ debug (0, "%d (%p): ip: %p, sp: %p, eax: %p, ebx: %p, ecx: %p, "
+ "edx: %p, edi: %p, esi: %p, ebp: %p, eflags: %p",
+ depth, activation_frame,
+ (void *) activation_frame->eip,
+ (void *) activation_frame->esp,
+ (void *) activation_frame->eax,
+ (void *) activation_frame->ebx,
+ (void *) activation_frame->ecx,
+ (void *) activation_frame->edx,
+ (void *) activation_frame->edi,
+ (void *) activation_frame->esi,
+ (void *) activation_frame->ebp,
+ (void *) activation_frame->eflags);
- if (! exception_page->exception_stack_bottom)
- /* This is the first frame we've allocated. */
- exception_page->exception_stack_bottom = exception_frame;
}
-
- return exception_frame;
}
-/* Fetch an exception. */
void
-exception_fetch_exception (void)
+hurd_activation_handler_normal (struct activation_frame *activation_frame,
+ struct vg_utcb *utcb)
{
- l4_msg_t msg;
- rm_exception_collect_send_marshal (&msg, ADDR_VOID);
- l4_msg_load (msg);
-
- l4_thread_id_t from;
- l4_msg_tag_t msg_tag = l4_reply_wait (__hurd_startup_data->rm, &from);
- if (l4_ipc_failed (msg_tag))
- panic ("Receiving message failed: %u", (l4_error_code () >> 1) & 0x7);
-}
+ assert (utcb == hurd_utcb ());
+ assert (activation_frame->canary == ACTIVATION_FRAME_CANARY);
+ assert (utcb->activation_stack == activation_frame);
-/* XXX: Before returning from either exception_handler_normal or
- exception_handler_activated, we need to examine the thread's
- control state and if the IPC was interrupt, set the error code
- appropriately. This also requires changing all invocations of IPCs
- to loop on interrupt. Currently, this is not a problem as the only
- exception that we get is a page fault, which can only occur when
- the thread is not in an IPC. (Sure, there are string buffers, but
- we don't use them.) */
+ do_debug (4)
+ {
+ static int calls;
+ int call = ++ calls;
-void
-exception_handler_normal (struct exception_frame *exception_frame)
-{
- debug (5, "Exception handler called (0x%x.%x, exception_frame: %p, "
- "next: %p)",
- l4_thread_no (l4_myself ()), l4_version (l4_myself ()),
- exception_frame, exception_frame->next);
+ int depth = 0;
+ struct activation_frame *af;
+ for (af = utcb->activation_stack; af; af = af->next)
+ depth ++;
- l4_msg_t *msg = &exception_frame->exception;
+ debug (0, "Activation (%d; %d nested) (frame: %p, next: %p)",
+ call, depth, activation_frame, activation_frame->next);
+ hurd_activation_stack_dump ();
+ }
+
+ struct hurd_message_buffer *mb = activation_frame->message_buffer;
+ assert (mb->magic == HURD_MESSAGE_BUFFER_MAGIC);
- l4_msg_tag_t msg_tag = l4_msg_msg_tag (*msg);
- l4_word_t label;
- label = l4_label (msg_tag);
+ check_activation_frame_reserve (utcb);
- switch (label)
+ if (mb->closure)
{
- case EXCEPTION_fault:
- {
- addr_t fault;
- uintptr_t ip;
- uintptr_t sp;
- struct exception_info info;
-
- error_t err;
- err = exception_fault_send_unmarshal (msg, &fault, &sp, &ip, &info);
- if (err)
- panic ("Failed to unmarshal exception: %d", err);
-
- bool r = map_fault (fault, ip, info);
- if (! r)
+ debug (5, "Executing closure %p", mb->closure);
+ mb->closure (mb);
+ }
+ else
+ {
+ debug (5, "Exception");
+
+ assert (mb == utcb->exception_buffer);
+
+ uintptr_t label = vg_message_word (mb->reply, 0);
+ switch (label)
+ {
+ case ACTIVATION_fault:
{
- debug (0, "SIGSEGV at " ADDR_FMT " (ip: %p, sp: %p, eax: %p, "
- "ebx: %p, ecx: %p, edx: %p, edi: %p, esi: %p, "
+ addr_t fault;
+ uintptr_t ip;
+ uintptr_t sp;
+ struct activation_fault_info info;
+
+ error_t err;
+ err = activation_fault_send_unmarshal (mb->reply,
+ &fault, &sp, &ip, &info,
+ NULL);
+ if (err)
+ panic ("Failed to unmarshal exception: %d", err);
+
+ debug (5, "Fault at " ADDR_FMT " (ip: %p, sp: %p, eax: %p, "
+ "ebx: %p, ecx: %p, edx: %p, edi: %p, esi: %p, ebp: %p, "
"eflags: %p)",
- ADDR_PRINTF (fault), ip, sp,
- exception_frame->regs[0],
- exception_frame->regs[5],
- exception_frame->regs[1],
- exception_frame->regs[2],
- exception_frame->regs[6],
- exception_frame->regs[7],
- exception_frame->regs[3]);
-
- extern int backtrace (void **array, int size);
-
- void *a[20];
- int count = backtrace (a, sizeof (a) / sizeof (a[0]));
- int i;
- s_printf ("Backtrace: ");
- for (i = 0; i < count; i ++)
- s_printf ("%p ", a[i]);
- s_printf ("\n");
-
- siginfo_t si;
- memset (&si, 0, sizeof (si));
- si.si_signo = SIGSEGV;
- si.si_addr = ADDR_TO_PTR (fault);
-
- /* XXX: Should set si.si_code to SEGV_MAPERR or
- SEGV_ACCERR. */
-
- pthread_kill_siginfo_np (pthread_self (), si);
- }
+ ADDR_PRINTF (fault),
+ (void *) ip, (void *) sp,
+ (void *) activation_frame->eax,
+ (void *) activation_frame->ebx,
+ (void *) activation_frame->ecx,
+ (void *) activation_frame->edx,
+ (void *) activation_frame->edi,
+ (void *) activation_frame->esi,
+ (void *) activation_frame->ebp,
+ (void *) activation_frame->eflags);
+
+ extern l4_thread_id_t as_rwlock_owner;
+
+ bool r = false;
+ if (likely (as_rwlock_owner != l4_myself ()))
+ r = map_fault (fault, ip, info);
+ if (! r)
+ {
+ uintptr_t f = (uintptr_t) ADDR_TO_PTR (fault);
+ struct hurd_fault_catcher *catcher;
+ for (catcher = utcb->catchers; catcher; catcher = catcher->next)
+ {
+ assertx (catcher->magic == HURD_FAULT_CATCHER_MAGIC,
+ "Catcher %p has bad magic: %p",
+ catcher, (void *) catcher->magic);
+
+ if (catcher->start <= f
+ && f <= catcher->start + catcher->len - 1)
+ {
+ debug (5, "Catcher caught fault at %p! (callback: %p)",
+ (void *) f, catcher->callback);
+ if (catcher->callback (activation_frame, f))
+ /* The callback claims that we can continue. */
+ break;
+ }
+ else
+ debug (5, "Catcher %p-%p does not cover fault %p",
+ (void *) catcher->start,
+ (void *) catcher->start + catcher->len - 1,
+ (void *) f);
+ }
+
+ if (! catcher)
+ {
+ if (as_rwlock_owner == l4_myself ())
+ debug (0, "I hold as_rwlock!");
+
+ debug (0, "SIGSEGV at " ADDR_FMT " "
+ "(ip: %p, sp: %p, eax: %p, ebx: %p, ecx: %p, "
+ "edx: %p, edi: %p, esi: %p, ebp: %p, eflags: %p)",
+ ADDR_PRINTF (fault),
+ (void *) ip, (void *) sp,
+ (void *) activation_frame->eax,
+ (void *) activation_frame->ebx,
+ (void *) activation_frame->ecx,
+ (void *) activation_frame->edx,
+ (void *) activation_frame->edi,
+ (void *) activation_frame->esi,
+ (void *) activation_frame->ebp,
+ (void *) activation_frame->eflags);
+
+ backtrace_print ();
+
+ siginfo_t si;
+ memset (&si, 0, sizeof (si));
+ si.si_signo = SIGSEGV;
+ si.si_addr = ADDR_TO_PTR (fault);
+
+ /* XXX: Should set si.si_code to SEGV_MAPERR or
+ SEGV_ACCERR. */
+
+ pthread_kill_siginfo_np (pthread_self (), si);
+ }
+ }
- break;
- }
+ break;
+ }
- default:
- panic ("Unknown message id: %d", label);
+ default:
+ panic ("Unknown message id: %d", label);
+ }
}
- utcb_state_restore (exception_frame);
+ if (activation_frame->normal_mode_stack == utcb->alternate_stack)
+ utcb->alternate_stack_inuse = false;
+
+ assert (utcb->canary0 == UTCB_CANARY0);
+ assert (utcb->canary1 == UTCB_CANARY1);
+
+ l4_utcb_state_restore (activation_frame);
}
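
[Editorial note: the normal-mode handler above walks UTCB->CATCHERS looking for a registered fault catcher whose range covers the faulting address. The following is only an illustrative sketch of that containment test, distilled from the loop above; it is not part of this commit.]

#include <stdbool.h>
#include <stdint.h>

/* Sketch: a catcher registered with START and LEN covers the half-open
   range [START, START + LEN); the inclusive upper bound used in the
   loop above is therefore START + LEN - 1.  */
static bool
catcher_covers (uintptr_t start, uintptr_t len, uintptr_t fault)
{
  return start <= fault && fault <= start + len - 1;
}
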
#ifndef NDEBUG
-static l4_word_t
-crc (struct exception_page *exception_page)
+static uintptr_t
+crc (struct vg_utcb *utcb)
{
- l4_word_t crc = 0;
- l4_word_t *p;
- for (p = (l4_word_t *) exception_page; p < &exception_page->crc; p ++)
+ uintptr_t crc = 0;
+ uintptr_t *p;
+ for (p = (uintptr_t *) utcb; p < &utcb->crc; p ++)
crc += *p;
return crc;
}
#endif
-struct exception_frame *
-exception_handler_activated (struct exception_page *exception_page)
+struct activation_frame *
+hurd_activation_handler_activated (struct vg_utcb *utcb)
{
- /* We expect EXCEPTION_PAGE to be page aligned. */
- assert (((uintptr_t) exception_page & (PAGESIZE - 1)) == 0);
- assert (exception_page->activated_mode);
+ assert (((uintptr_t) utcb & (PAGESIZE - 1)) == 0);
+ assert (utcb->canary0 == UTCB_CANARY0);
+ assert (utcb->canary1 == UTCB_CANARY1);
+ assert (utcb->activated_mode);
+ /* XXX: Assumption that stack grows down... */
+ assert (utcb->activation_handler_sp - PAGESIZE <= (uintptr_t) &utcb);
+ assert ((uintptr_t) &utcb <= utcb->activation_handler_sp);
+
+ if (unlikely (! mm_init_done))
+ /* Just return: during initialization, we don't expect any faults or
+ asynchronous IPC. We do expect that IPC will be made, but it will
+ always be made with VG_IPC_RETURN, and as such just returning will
+ do the right thing. */
+ return NULL;
+
+ /* This comes after the mm_init_done check as when switching utcbs,
+ this may not be true. */
+ assertx (utcb == hurd_utcb (),
+ "%p != %p (func: %p; ip: %p, sp: %p)",
+ utcb, hurd_utcb (), hurd_utcb,
+ (void *) utcb->saved_ip, (void *) utcb->saved_sp);
+
+ debug (5, "Activation handler called (utcb: %p)", utcb);
+
+ struct hurd_message_buffer *mb
+ = (struct hurd_message_buffer *) (uintptr_t) utcb->messenger_id;
+
+ debug (5, "Got message %llx (utcb: %p)", utcb->messenger_id, utcb);
- /* Allocate an exception frame. */
- struct exception_frame *exception_frame
- = exception_frame_alloc (exception_page);
- utcb_state_save (exception_frame);
+ assert (mb->magic == HURD_MESSAGE_BUFFER_MAGIC);
- debug (5, "Exception handler called (exception_page: %p)",
- exception_page);
+ struct activation_frame *activation_frame = activation_frame_alloc (utcb);
+ assert (activation_frame->canary == ACTIVATION_FRAME_CANARY);
+
+ l4_utcb_state_save (activation_frame);
+
+ activation_frame->message_buffer = mb;
#ifndef NDEBUG
- exception_page->crc = crc (exception_page);
+ utcb->crc = crc (utcb);
#endif
- l4_msg_t *msg = &exception_page->exception;
+ /* Whether we need to process the activation in normal mode. */
+ bool trampoline = true;
- l4_msg_tag_t msg_tag = l4_msg_msg_tag (*msg);
- l4_word_t label;
- label = l4_label (msg_tag);
+ if (mb == utcb->extant_message)
+ /* The extant IPC reply. Just return, everything is in place. */
+ {
+#ifndef NDEBUG
+ do_debug (0)
+ {
+ int label = 0;
+ if (vg_message_data_count (mb->request) >= sizeof (uintptr_t))
+ label = vg_message_word (mb->request, 0);
+ error_t err = -1;
+ if (vg_message_data_count (mb->reply) >= sizeof (uintptr_t))
+ err = vg_message_word (mb->reply, 0);
+
+ debug (5, "Extant RPC: %s (%d) -> %d",
+ rm_method_id_string (label), label, err);
+ }
+#endif
- switch (label)
+ utcb->extant_message = NULL;
+ trampoline = false;
+ }
+ else if (mb->closure)
{
- case EXCEPTION_fault:
- {
- addr_t fault;
- uintptr_t ip;
- uintptr_t sp;
- struct exception_info info;
-
- error_t err;
- err = exception_fault_send_unmarshal (msg, &fault, &sp, &ip, &info);
- if (err)
- panic ("Failed to unmarshal exception: %d", err);
-
- /* XXX: We assume that the stack grows down here. */
- uintptr_t f = (uintptr_t) ADDR_TO_PTR (fault);
- if (sp - PAGESIZE <= f && f <= sp + PAGESIZE * 4)
- /* The fault occurs within four pages of the stack pointer.
- It has got to be a stack fault. Handle it here. */
+ debug (5, "Closure");
+ }
+ else if (mb == utcb->exception_buffer)
+ /* It's an exception. Process it. */
+ {
+ debug (5, "Exception");
+
+ uintptr_t label = vg_message_word (mb->reply, 0);
+ switch (label)
+ {
+ case ACTIVATION_fault:
{
- debug (5, "Handling fault at " ADDR_FMT " in activated mode "
- "(ip: %x, sp: %x).",
+ addr_t fault;
+ uintptr_t ip;
+ uintptr_t sp;
+ struct activation_fault_info info;
+
+ error_t err;
+ err = activation_fault_send_unmarshal (mb->reply,
+ &fault, &sp, &ip, &info,
+ NULL);
+ if (err)
+ panic ("Failed to unmarshal exception: %d", err);
+
+ debug (4, "Fault at " ADDR_FMT "(ip: %x, sp: %x).",
ADDR_PRINTF (fault), ip, sp);
- bool r = map_fault (fault, ip, info);
- if (! r)
+ uintptr_t f = (uintptr_t) ADDR_TO_PTR (fault);
+ uintptr_t stack_page = (sp & ~(PAGESIZE - 1));
+ uintptr_t fault_page = (f & ~(PAGESIZE - 1));
+ if (stack_page == fault_page
+ || stack_page - PAGESIZE == fault_page)
+ /* The fault is on the same page as the stack pointer or
+ on the following page. It is likely a stack fault.
+ Handle it using the alternate stack. */
{
- debug (0, "SIGSEGV at " ADDR_FMT " (ip: %p, sp: %p, eax: %p, "
- "ebx: %p, ecx: %p, edx: %p, edi: %p, esi: %p, "
- "eflags: %p)",
- ADDR_PRINTF (fault), ip, sp,
- exception_frame->regs[0],
- exception_frame->regs[5],
- exception_frame->regs[1],
- exception_frame->regs[2],
- exception_frame->regs[6],
- exception_frame->regs[7],
- exception_frame->regs[3]);
-
- siginfo_t si;
- memset (&si, 0, sizeof (si));
- si.si_signo = SIGSEGV;
- si.si_addr = ADDR_TO_PTR (fault);
-
- /* XXX: Should set si.si_code to SEGV_MAPERR or
- SEGV_ACCERR. */
-
- pthread_kill_siginfo_np (pthread_self (), si);
- }
- assert (exception_page->crc == crc (exception_page));
+ debug (5, "Stack fault at " ADDR_FMT "(ip: %x, sp: %x).",
+ ADDR_PRINTF (fault), ip, sp);
- utcb_state_restore (exception_frame);
+ assert (! utcb->alternate_stack_inuse);
+ utcb->alternate_stack_inuse = true;
- assert (exception_page->crc == crc (exception_page));
- assertx (exception_page->exception_stack == exception_frame,
- "%p != %p",
- exception_page->exception_stack, exception_frame);
+ assert (utcb->alternate_stack);
- exception_page->exception_stack
- = exception_page->exception_stack->next;
- return NULL;
+ activation_frame->normal_mode_stack = utcb->alternate_stack;
+ }
+
+ debug (5, "Handling fault at " ADDR_FMT " in normal mode "
+ "(ip: %x, sp: %x).",
+ ADDR_PRINTF (fault), ip, sp);
+
+ break;
}
- debug (5, "Handling fault at " ADDR_FMT " in normal mode "
- "(ip: %x, sp: %x).",
- ADDR_PRINTF (fault), ip, sp);
+ default:
+ panic ("Unknown message id: %d", label);
+ }
+
+ /* Unblock the exception handler messenger. */
+ error_t err = vg_ipc (VG_IPC_RECEIVE | VG_IPC_RECEIVE_ACTIVATE
+ | VG_IPC_RETURN,
+ ADDR_VOID, utcb->exception_buffer->receiver,
+ ADDR_VOID,
+ ADDR_VOID, ADDR_VOID, ADDR_VOID, ADDR_VOID);
+ assert (! err);
+ }
+ else if (mb->just_free)
+ {
+ debug (5, "Just freeing");
+ hurd_message_buffer_free (mb);
+ trampoline = false;
+ }
+ else
+ {
+ panic ("Unknown messenger %llx (extant: %p; exception: %p) (label: %d)",
+ utcb->messenger_id,
+ utcb->extant_message, utcb->exception_buffer,
+ vg_message_word (mb->reply, 0));
+ }
+
+ /* Assert that the utcb has not been modified. */
+ assert (utcb->crc == crc (utcb));
+
+ if (! trampoline)
+ {
+ debug (5, "Direct return");
+
+ assert (utcb->activation_stack == activation_frame);
+ utcb->activation_stack = utcb->activation_stack->next;
- break;
- }
+ l4_utcb_state_restore (activation_frame);
- default:
- panic ("Unknown message id: %d", label);
+ activation_frame = NULL;
+ }
+ else
+ {
+ debug (5, "Continuing in normal mode");
+ l4_utcb_state_restore (activation_frame);
}
- /* Handle the fault in normal mode. */
+ assert (utcb->canary0 == UTCB_CANARY0);
+ assert (utcb->canary1 == UTCB_CANARY1);
- /* Copy the relevant bits. */
- memcpy (&exception_frame->exception, msg,
- (1 + l4_untyped_words (msg_tag)) * sizeof (l4_word_t));
+ return activation_frame;
+}
+
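
[Editorial note: the activated-mode handler decides whether a fault is a stack fault by comparing pages, under the stated assumption that the stack grows down. A minimal, self-contained sketch of that check (illustrative only, not part of this commit):]

#include <stdbool.h>
#include <stdint.h>

/* Sketch: treat FAULT as a stack fault if it lies on the page SP is
   on, or on the page immediately below it, mirroring the check in
   hurd_activation_handler_activated above.  */
static bool
is_stack_fault (uintptr_t fault, uintptr_t sp, uintptr_t pagesize)
{
  uintptr_t stack_page = sp & ~(pagesize - 1);
  uintptr_t fault_page = fault & ~(pagesize - 1);
  return fault_page == stack_page || fault_page == stack_page - pagesize;
}
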
+static char activation_handler_area0[PAGESIZE]
+ __attribute__ ((aligned (PAGESIZE)));
+static char activation_handler_msg[PAGESIZE]
+ __attribute__ ((aligned (PAGESIZE)));
+static struct vg_utcb *initial_utcb = (void *) &activation_handler_area0[0];
+
+static struct vg_utcb *
+simple_utcb_fetcher (void)
+{
+ assert (initial_utcb->canary0 == UTCB_CANARY0);
+ assert (initial_utcb->canary1 == UTCB_CANARY1);
- assert (exception_page->crc == crc (exception_page));
- return exception_frame;
+ return initial_utcb;
}
+struct vg_utcb *(*hurd_utcb) (void);
+
void
-exception_handler_init (void)
+hurd_activation_handler_init_early (void)
{
- error_t err = hurd_slab_init (&exception_frame_slab,
- sizeof (struct exception_frame), 0,
- exception_frame_slab_alloc,
- exception_frame_slab_dealloc,
- NULL, NULL, NULL);
- assert (! err);
+ initial_utcb->canary0 = UTCB_CANARY0;
+ initial_utcb->canary1 = UTCB_CANARY1;
- extern struct hurd_startup_data *__hurd_startup_data;
+ hurd_utcb = simple_utcb_fetcher;
- /* We use the start of the area (lowest address) as the exception page. */
- addr_t stack_area = as_alloc (EXCEPTION_STACK_SIZE_LOG2, 1, true);
- void *stack_area_base
- = ADDR_TO_PTR (addr_extend (stack_area, 0, EXCEPTION_STACK_SIZE_LOG2));
+ struct vg_utcb *utcb = hurd_utcb ();
+ assert (utcb == initial_utcb);
- debug (5, "Exception area: %x-%x",
- stack_area_base, stack_area_base + EXCEPTION_STACK_SIZE - 1);
+ /* XXX: We assume the stack grows down! SP is set to the end of the
+ exception page. */
+ utcb->activation_handler_sp
+ = (uintptr_t) activation_handler_area0 + sizeof (activation_handler_area0);
- void *page;
- for (page = stack_area_base;
- page < stack_area_base + EXCEPTION_STACK_SIZE;
- page += PAGESIZE)
- {
- addr_t slot = addr_chop (PTR_TO_ADDR (page), PAGESIZE_LOG2);
+ /* The word beyond the base of the stack is interpreted as a pointer
+ to the exception page. Make it so. */
+ utcb->activation_handler_sp -= sizeof (void *);
+ * (void **) utcb->activation_handler_sp = utcb;
- as_ensure (slot);
+ utcb->activation_handler_ip = (uintptr_t) &hurd_activation_handler_entry;
+ utcb->activation_handler_end = (uintptr_t) &hurd_activation_handler_end;
- struct storage storage;
- storage = storage_alloc (ADDR_VOID, cap_page,
- STORAGE_LONG_LIVED,
- OBJECT_POLICY_DEFAULT,
- slot);
+ struct hurd_thread_exregs_in in;
+ memset (&in, 0, sizeof (in));
+
+ struct vg_message *msg = (void *) &activation_handler_msg[0];
+ rm_thread_exregs_send_marshal (msg, HURD_EXREGS_SET_UTCB, in,
+ ADDR_VOID, ADDR_VOID,
+ PTR_TO_PAGE (utcb), ADDR_VOID,
+ __hurd_startup_data->messengers[1]);
+
+ error_t err;
+ err = vg_ipc_full (VG_IPC_RECEIVE | VG_IPC_SEND | VG_IPC_RECEIVE_ACTIVATE
+ | VG_IPC_RECEIVE_SET_THREAD_TO_CALLER
+ | VG_IPC_RECEIVE_SET_ASROOT_TO_CALLERS
+ | VG_IPC_RECEIVE_INLINE
+ | VG_IPC_SEND_SET_THREAD_TO_CALLER
+ | VG_IPC_SEND_SET_ASROOT_TO_CALLERS,
+ ADDR_VOID,
+ __hurd_startup_data->messengers[1], ADDR_VOID, ADDR_VOID,
+ ADDR_VOID, __hurd_startup_data->thread,
+ __hurd_startup_data->messengers[0], PTR_TO_PAGE (msg),
+ 0, 0, ADDR_VOID);
+ if (err)
+ panic ("Failed to send IPC: %d", err);
+ if (utcb->inline_words[0])
+ panic ("Failed to install utcb page: %d", utcb->inline_words[0]);
+}
- if (ADDR_IS_VOID (storage.addr))
- panic ("Failed to allocate page for exception state");
- }
+void
+hurd_activation_handler_init (void)
+{
+ struct vg_utcb *utcb;
+ error_t err = hurd_activation_state_alloc (__hurd_startup_data->thread,
+ &utcb);
+ if (err)
+ panic ("Failed to allocate activation state: %d", err);
- struct exception_page *exception_page = stack_area_base;
+ assert (! initial_utcb->activation_stack);
- /* XXX: We assume the stack grows down! SP is set to the end of the
- exception page. */
- exception_page->exception_handler_sp
- = (uintptr_t) stack_area_base + EXCEPTION_STACK_SIZE;
+ initial_utcb = utcb;
+
+ debug (4, "initial_utcb (%p) is now: %p", &initial_utcb, initial_utcb);
+}
+
+/* The activation area is 16 pages large. It consists of the utcb,
+ the activation stack and an alternate stack (which is needed to
+ handle stack faults). */
+#define ACTIVATION_AREA_SIZE_LOG2 (PAGESIZE_LOG2 + 4)
+#define ACTIVATION_AREA_SIZE (1 << ACTIVATION_AREA_SIZE_LOG2)
+
+error_t
+hurd_activation_state_alloc (addr_t thread, struct vg_utcb **utcbp)
+{
+ debug (5, DEBUG_BOLD ("allocating activation state for " ADDR_FMT),
+ ADDR_PRINTF (thread));
+
+ addr_t activation_area = as_alloc (ACTIVATION_AREA_SIZE_LOG2, 1, true);
+ void *activation_area_base
+ = ADDR_TO_PTR (addr_extend (activation_area,
+ 0, ACTIVATION_AREA_SIZE_LOG2));
+
+ debug (0, "Activation area: %p-%p",
+ activation_area_base, activation_area_base + ACTIVATION_AREA_SIZE);
+
+ int page_count = 0;
+ /* Be careful! We assume that pages is properly set up after at
+ most 2 allocations! */
+ addr_t pages_[2];
+ addr_t *pages = pages_;
+
+ void alloc (void *addr)
+ {
+ addr_t slot = addr_chop (PTR_TO_ADDR (addr), PAGESIZE_LOG2);
+
+ as_ensure (slot);
+
+ struct storage storage;
+ storage = storage_alloc (ADDR_VOID, cap_page,
+ STORAGE_LONG_LIVED,
+ OBJECT_POLICY_DEFAULT,
+ slot);
+
+ if (ADDR_IS_VOID (storage.addr))
+ panic ("Failed to allocate page for exception state");
+
+ if (pages == pages_)
+ assert (page_count < sizeof (pages_) / sizeof (pages_[0]));
+ pages[page_count ++] = storage.addr;
+ }
+
+ /* When NDEBUG is not defined, we leave some pages unmapped so that
+ should something overrun, we'll fault. */
+#ifndef NDEBUG
+#define SKIP 1
+#else
+#define SKIP 0
+#endif
+
+ int page = SKIP;
+
+ /* Allocate the utcb. */
+ struct vg_utcb *utcb = activation_area_base + page * PAGESIZE;
+ alloc (utcb);
+ page += 1 + SKIP;
+
+ /* And set up the small activation stack.
+ UTCB->ACTIVATION_HANDLER_SP is the base of the stack.
+
+ XXX: We assume the stack grows down! */
+#ifndef NDEBUG
+ /* Use a dedicated page. */
+ utcb->activation_handler_sp
+ = (uintptr_t) activation_area_base + page * PAGESIZE;
+ alloc ((void *) utcb->activation_handler_sp);
+
+ utcb->activation_handler_sp += PAGESIZE;
+ page += 1 + SKIP;
+#else
+ /* Use the end of the UTCB. */
+ utcb->activation_handler_sp = (uintptr_t) utcb + PAGESIZE;
+#endif
+
+ /* At the top of the stack page, we use some space to remember the
+ storage we allocate so that we can free it later. */
+ utcb->activation_handler_sp
+ -= sizeof (addr_t) * ACTIVATION_AREA_SIZE / PAGESIZE;
+ memset ((void *) utcb->activation_handler_sp, 0,
+ sizeof (addr_t) * ACTIVATION_AREA_SIZE / PAGESIZE);
+ memcpy ((void *) utcb->activation_handler_sp, pages,
+ sizeof (addr_t) * page_count);
+ pages = (addr_t *) utcb->activation_handler_sp;
/* The word beyond the base of the stack is a pointer to the
exception page. */
- exception_page->exception_handler_sp -= sizeof (void *);
- * (void **) exception_page->exception_handler_sp = exception_page;
+ utcb->activation_handler_sp -= sizeof (void *);
+ * (void **) utcb->activation_handler_sp = utcb;
- exception_page->exception_handler_ip = (l4_word_t) &exception_handler_entry;
- exception_page->exception_handler_end = (l4_word_t) &exception_handler_end;
- struct hurd_thread_exregs_in in;
- in.exception_page = addr_chop (PTR_TO_ADDR (exception_page), PAGESIZE_LOG2);
+ /* And a medium-sized alternate stack. */
+ void *a;
+ for (a = activation_area_base + page * PAGESIZE;
+ a < activation_area_base + ACTIVATION_AREA_SIZE - SKIP * PAGESIZE;
+ a += PAGESIZE)
+ alloc (a);
+
+ assert (a - activation_area_base + page * PAGESIZE >= AS_STACK_SPACE);
+
+ /* XXX: We again assume that the stack grows down. */
+ utcb->alternate_stack = a;
+
+
+ utcb->activation_handler_ip = (uintptr_t) &hurd_activation_handler_entry;
+ utcb->activation_handler_end = (uintptr_t) &hurd_activation_handler_end;
+
+ utcb->exception_buffer = hurd_message_buffer_alloc_long ();
+ utcb->extant_message = NULL;
+
+ utcb->canary0 = UTCB_CANARY0;
+ utcb->canary1 = UTCB_CANARY1;
+
+ debug (5, "Activation area: %p-%p; utcb: %p; stack: %p; alt stack: %p",
+ (void *) activation_area_base,
+ (void *) activation_area_base + ACTIVATION_AREA_SIZE - 1,
+ utcb, (void *) utcb->activation_handler_sp, utcb->alternate_stack);
+
+
+ /* Unblock the exception handler messenger. */
+ error_t err = vg_ipc (VG_IPC_RECEIVE | VG_IPC_RECEIVE_ACTIVATE
+ | VG_IPC_RETURN,
+ ADDR_VOID, utcb->exception_buffer->receiver, ADDR_VOID,
+ ADDR_VOID, ADDR_VOID, ADDR_VOID, ADDR_VOID);
+ assert (! err);
+
+ *utcbp = utcb;
+
+ struct hurd_thread_exregs_in in;
struct hurd_thread_exregs_out out;
- err = rm_thread_exregs (ADDR_VOID, __hurd_startup_data->thread,
- HURD_EXREGS_SET_EXCEPTION_PAGE,
- in, &out);
+
+ err = rm_thread_exregs (ADDR_VOID, thread,
+ HURD_EXREGS_SET_UTCB
+ | HURD_EXREGS_SET_EXCEPTION_MESSENGER,
+ in, ADDR_VOID, ADDR_VOID,
+ PTR_TO_PAGE (utcb), utcb->exception_buffer->receiver,
+ &out, NULL, NULL, NULL, NULL);
+ if (err)
+ panic ("Failed to install utcb");
+
+ err = rm_cap_copy (ADDR_VOID,
+ utcb->exception_buffer->receiver,
+ ADDR (VG_MESSENGER_THREAD_SLOT, VG_MESSENGER_SLOTS_LOG2),
+ ADDR_VOID, thread,
+ 0, CAP_PROPERTIES_DEFAULT);
if (err)
- panic ("Failed to install exception page");
+ panic ("Failed to set messenger's thread");
+
+ check_activation_frame_reserve (utcb);
+
+ return 0;
}
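
[Editorial note: hurd_activation_state_alloc above lays the 16-page activation area out as UTCB, activation stack and alternate stack, separated by unmapped guard pages in debug builds. The enumeration below is only a reading aid inferred from the allocation loop (debug build, SKIP == 1); it is not an interface defined by this commit.]

/* Page offsets within the naturally aligned activation area
   (debug build).  */
enum activation_area_layout_debug
  {
    AA_GUARD0           = 0,   /* unmapped guard page */
    AA_UTCB             = 1,   /* the struct vg_utcb */
    AA_GUARD1           = 2,   /* unmapped guard page */
    AA_ACTIVATION_STACK = 3,   /* small stack used in activated mode */
    AA_GUARD2           = 4,   /* unmapped guard page */
    AA_ALT_STACK_FIRST  = 5,   /* alternate stack ... */
    AA_ALT_STACK_LAST   = 14,  /* ... through here */
    AA_GUARD3           = 15,  /* final unmapped guard page */
  };
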
void
-exception_page_cleanup (struct exception_page *exception_page)
+hurd_activation_state_free (struct vg_utcb *utcb)
{
- struct exception_frame *f;
- struct exception_frame *prev = exception_page->exception_stack_bottom;
+ assert (utcb->canary0 == UTCB_CANARY0);
+ assert (utcb->canary1 == UTCB_CANARY1);
+ assert (! utcb->activation_stack);
+ /* Free any activation frames. */
+ struct activation_frame *f;
+ struct activation_frame *prev = utcb->activation_stack_bottom;
while ((f = prev))
{
prev = f->prev;
- hurd_slab_dealloc (&exception_frame_slab, f);
+ hurd_slab_dealloc (&activation_frame_slab, f);
}
-}
+ hurd_message_buffer_free (utcb->exception_buffer);
+
+ /* Free the allocated storage. */
+ /* Copy the array as we're going to free the storage that it is
+ in. */
+ addr_t pages[ACTIVATION_AREA_SIZE / PAGESIZE];
+ memcpy (pages,
+ (void *) (utcb->activation_handler_sp + sizeof (uintptr_t)),
+ sizeof (addr_t) * ACTIVATION_AREA_SIZE / PAGESIZE);
+
+ int i;
+ for (i = 0; i < sizeof (pages) / sizeof (pages[0]); i ++)
+ if (! ADDR_IS_VOID (pages[i]))
+ storage_free (pages[i], false);
+
+ /* Finally, free the address space. */
+ int page = SKIP;
+ void *activation_area_base = (void *) utcb - page * PAGESIZE;
+ as_free (addr_chop (PTR_TO_ADDR (activation_area_base),
+ ACTIVATION_AREA_SIZE_LOG2),
+ false);
+}
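
[Editorial note: taken together, hurd_activation_state_alloc and hurd_activation_state_free give each thread its own UTCB and activation area. A hypothetical caller, shown only to illustrate the pairing of the two functions defined above:]

void
example_thread_activation_state (addr_t thread)
{
  struct vg_utcb *utcb;
  error_t err = hurd_activation_state_alloc (thread, &utcb);
  if (err)
    panic ("Failed to allocate activation state: %d", err);

  /* ... the thread runs, taking faults via its activation handler ... */

  hurd_activation_state_free (utcb);
}
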
diff --git a/libhurd-mm/headers.m4 b/libhurd-mm/headers.m4
index d0157e4..8686d6f 100644
--- a/libhurd-mm/headers.m4
+++ b/libhurd-mm/headers.m4
@@ -18,6 +18,7 @@ AC_CONFIG_LINKS([
sysroot/include/hurd/map.h:libhurd-mm/map.h
sysroot/include/hurd/pager.h:libhurd-mm/pager.h
sysroot/include/hurd/anonymous.h:libhurd-mm/anonymous.h
+ sysroot/include/hurd/message-buffer.h:libhurd-mm/message-buffer.h
])
AC_CONFIG_COMMANDS_POST([
diff --git a/libhurd-mm/ia32-exception-entry.S b/libhurd-mm/ia32-exception-entry.S
index 47fbe83..cc9ac78 100644
--- a/libhurd-mm/ia32-exception-entry.S
+++ b/libhurd-mm/ia32-exception-entry.S
@@ -1,4 +1,4 @@
-/* ia32-exception-entry.S - Exception handler dispatcher.
+/* ia32-activation-entry.S - Activation handler dispatcher.
Copyright (C) 2007, 2008 Free Software Foundation, Inc.
Written by Neal H. Walfield <neal@gnu.org>.
@@ -22,124 +22,125 @@
.text
-/* Offsets into a struct exception_page. */
-#define MODE (0*4)
-#define SAVED_IP (1*4)
-#define SAVED_SP (2*4)
-#define SAVED_THREAD_STATE (3*4)
-
-#define EXCEPTION_STACK (4*4)
-
-/* Relative to one word beyond the bottom of the stack. */
-#define EXCEPTION_PAGE_PTR (-1*4)
-#define SAVED_EAX (-2*4)
-#define SAVED_ECX (-3*4)
-#define SAVED_FLAGS (-4*4)
-#define SAVED_EDX (-5*4)
-
-/* Offsets into a struct exception_fault. */
-#define EF_SAVED_EAX 0
-#define EF_SAVED_ECX 4
-#define EF_SAVED_EDX 8
-#define EF_SAVED_FLAGS 12
-#define EF_SAVED_IP 16
-#define EF_SAVED_EBX 20
-#define EF_SAVED_EDI 24
-#define EF_SAVED_ESI 28
-#define EF_NEXT 32
+/* Offsets into a struct vg_utcb. */
+#define UTCB_MODE (0*4)
+#define UTCB_SAVED_IP (1*4)
+#define UTCB_SAVED_SP (2*4)
+#define UTCB_SAVED_THREAD_STATE (3*4)
+#define UTCB_ACTIVATION_FRAME_STACK (4*4)
+/* The bits of the mode word. */
#define ACTIVATED_MODE_BIT 0
#define PENDING_MESSAGE_BIT 1
#define INTERRUPT_IN_TRANSITION_BIT 2
- /* Handle an exception. */
- .globl exception_handler_entry, _exception_handler_entry
-exception_handler_entry:
-_exception_handler_entry:
+/* Offsets into a struct activation_frame. */
+#define AF_SAVED_EAX 0
+#define AF_SAVED_ECX 4
+#define AF_SAVED_EDX 8
+#define AF_SAVED_FLAGS 12
+#define AF_SAVED_IP 16
+#define AF_SAVED_EBX 20
+#define AF_SAVED_EDI 24
+#define AF_SAVED_ESI 28
+#define AF_SAVED_EBP 32
+#define AF_SAVED_ESP 36
+#define AF_NORMAL_MODE_STACK 40
+#define AF_NEXT 44
+
+ /* Handle an activation. */
+ .globl hurd_activation_handler_entry, _hurd_activation_handler_entry
+hurd_activation_handler_entry:
+_hurd_activation_handler_entry:
/* How we will use the stack:
relative to entry sp
- | relative to sp after saving edx
+ | relative to sp after saving the register file.
| |
v v
- +0 +24 pointer to exception_page
- -4 +20 saved eax \
- -8 +16 saved ecx \ save
- -12 +12 saved flags / area
- -16 +8 saved edx /
- -20 +4 entry edx
- -24 +0 entry eflags
+ +0 +24 pointer to utcb
+ -4 +20 entry eflags
+ -8 +16 entry edx
+ -12 +12 saved eax \
+ -16 +8 saved ecx \ save
+ -20 +4 saved flags / area
+ -24 +0 saved edx /
*/
- /* Adjust the stack: our saved EAX, ECX, EDX and FLAGS may be
- there. */
-
- sub $16, %esp
-
- /* %ESP points to the top of the exception page. If the
- interrupt in transition flag is not set, then we need to save
- the caller-saved registers. Otherwise, we were interrupted
- while returning to normal mode and the the saved state, not
- our registers, reflects the real user state (see big comment
- below for more information). */
+ /* %ESP points to the top of the UTCB. If the interrupt in
+ transition flag is not set, then we need to save the
+ caller-saved registers. Otherwise, we were interrupted while
+ returning to normal mode and the saved state, not our
+ registers, reflects the real user state (see big comment below
+ for more information). */
- pushl %edx
- /* Save the eflags before we do anything serious. */
+ /* Save the eflags before we do *anything*. */
pushf
- /* %EDX is now the only register which we can touch. Make it
- a pointer to the exception page. Recall: we stashed a pointer
- to the exception page at the word following the botton of the
- stack. */
- mov 24(%esp), %edx
-
- /* Now check if the interrupt in transition flag is set. */
- bt $INTERRUPT_IN_TRANSITION_BIT, MODE(%edx)
- jc after_save
-
- /* Nope; we need to save the current EAX, ECX and eflags. */
- mov %eax, 20(%esp)
- mov %ecx, 16(%esp)
+ /* We need to check if the interrupt in transition flag is
+ set. Free up a register and make it a pointer to the UTCB.
+ Recall: we stashed a pointer to the UTCB at the word following
+ the bottom of the stack. */
+ pushl %edx
+ mov 8(%esp), %edx
+
+ bt $INTERRUPT_IN_TRANSITION_BIT, UTCB_MODE(%edx)
+ jc skip_save
+
+ /* Nope; we need to save the current EAX and ECX and copy the
+ entry eflags and EDX into the save area. */
+
+ pushl %eax
+ pushl %ecx
/* entry eflags. */
- popl %ecx
- mov %ecx, (12-4)(%esp)
+ mov 12(%esp), %eax
+ pushl %eax
/* entry edx. */
- popl %ecx
- mov %ecx, (8-4-4)(%esp)
+ mov 12(%esp), %eax
+ pushl %eax
+
jmp after_adjust
-after_save:
-
- /* Adjust the stack: we don't need our entry flags or entry edx. */
- add $8, %esp
+skip_save:
+ /* We don't save the entry registers but use the saved values.
+ Adjust the stack pointer to point to the start of the save area. */
+ sub $16, %esp
+
+after_adjust:
-after_adjust:
+ /* We are going to call the activation handler. According to
+ the i386 ABI:
- /* We are going to call the exception handler. But first save
- our pointer to the exception page on the stack. */
- pushl %edx
+ - caller saved registers are: eax, ecx, edx
+ - callee saved registers are: ebp, ebx, esi, edi
+
+ We've already saved the original eax, ecx and edx. The
+ function will preserve the rest.
- /* The exception handler function takes a single argument:
- the exception page. */
+ The only value we care about is our pointer to the UTCB (which
+ is in edx) and which we can save on the stack. */
+ pushl %edx
- /* Push the exception page. */
+ /* The activation handler function takes a single argument:
+ the UTCB. */
pushl %edx
/* Clear the direction flag as per the calling conventions. */
cld
- call exception_handler_activated
- /* The exception frame, if any, is in EAX. */
+ call hurd_activation_handler_activated
+ /* The activation frame, if any, is in EAX. */
/* Clear the arguments. */
add $4, %esp
- /* Restore exception page pointer. */
+ /* Restore UTCB pointer. */
popl %edx
- /* Check if there is an exception frame. */
+ /* Check if hurd_activation_handler_activated returned an
+ activation frame. */
test %eax, %eax
- jnz exception_frame_run
+ jnz activation_frame_run
- /* There is no exception frame, transition immediately back to
+ /* There is no activation frame, transition immediately back to
normal mode.
To return to normal mode, we need to restore the saved
@@ -160,156 +161,165 @@ after_adjust:
But this raises another problem: the IP and SP that the kernel
sees are not those that return us to user code. As this code
- relies on the exception stack, a nested stack will leave us in
+ relies on the activation stack, a nested stack will leave us in
an inconsistent state. (This can also happen if we receive a
message before returning to user code.) To avoid this, we
register our restore to normal mode function with the kernel.
If the kernel transitions us back to activated mode while the
EIP is in this range, then it does not save the EIP and ESP
- and invokes the exception handler with the
+ and invokes the activation handler with the
interrupt_in_transition flag set. */
/* Reset the activation bit. */
- and $(~1), MODE(%edx)
-
- /* Set EAX to one word beyond the bottom of the stack (i.e.,
- pointing at the pointer to the exception page. */
- add $PAGESIZE, %esp
- and $(~(PAGESIZE-1)), %esp
- mov %esp, %eax
+ and $(~1), UTCB_MODE(%edx)
/* Check for pending messages. This does not need to be
atomic as if we get interrupted here, we automatically
transition back to activated mode. */
- bt $PENDING_MESSAGE_BIT, MODE(%edx)
- jc process_pending
+ bt $PENDING_MESSAGE_BIT, UTCB_MODE(%edx)
+ jnc no_pending
+
+ /* There is a pending activation. Force its delivery. As we
+ are no longer in activated mode, either we'll be activated
+ with the interrupt-in-transition bit set (and thus never
+ return here) or we'll return. In the latter case, we just
+ resume execution. */
+
+ /* Save the UTCB. */
+ pushl %edx
+
+ cld
+ call hurd_activation_fetch
+
+ popl %edx
+
+no_pending:
+
+ /* Set EAX to the start of the save area. */
+ mov %esp, %eax
/* Restore the user stack. */
- mov SAVED_SP(%edx), %esp
+ mov UTCB_SAVED_SP(%edx), %esp
/* Copy the saved EIP and saved flags to the user stack. */
- mov SAVED_IP(%edx), %ecx
+ mov UTCB_SAVED_IP(%edx), %ecx
pushl %ecx
- mov SAVED_FLAGS(%eax), %ecx
+ mov 4(%eax), %ecx
pushl %ecx
/* Restore the general-purpose registers. */
- mov SAVED_EDX(%eax), %edx
- mov SAVED_ECX(%eax), %ecx
- mov SAVED_EAX(%eax), %eax
+ mov 0(%eax), %edx
+ mov 8(%eax), %ecx
+ mov 12(%eax), %eax
/* Restore the saved eflags. */
popf
/* And finally, the saved EIP and in doing so the saved ESP. */
ret
-process_pending:
- /* This code is called if after leaving activated mode, we
- detect a pending message. %EDX points to the exception page
- and eax one word beyond the bottom of the exception stack. */
-
- /* Set activated mode and interrupt in transition. */
- or $(1 | 4), MODE(%edx)
-
- /* Set the ESP to the top of the stack. */
- mov %eax, %esp
- add $4, %esp
-
- /* Get the pending exception. */
- call exception_fetch_exception
-
- jmp exception_handler_entry
-
-
-exception_frame_run:
- /* EAX contains the exception frame, EDX the exception page,
- and ESP points after the saved edx. */
+
+activation_frame_run:
+ /* EAX contains the activation frame, EDX the UTCB, and ESP
+ points to the save area. ECX has been saved in the save area. */
- /* Copy all relevant register state from the exception page
- and save area to the exception frame. We use edx as the
- intermediate. We can restore it from the exception stack
- (it's the word following the base). */
+ /* Copy all relevant register state from the UTCB
+ and save area to the activation frame. We use ecx as the
+ intermediate. */
- mov SAVED_IP(%edx), %edx
- mov %edx, EF_SAVED_IP(%eax)
+ mov UTCB_SAVED_IP(%edx), %ecx
+ mov %ecx, AF_SAVED_IP(%eax)
+ mov UTCB_SAVED_SP(%edx), %ecx
+ mov %ecx, AF_SAVED_ESP(%eax)
/* edx. */
- mov 0(%esp), %edx
- mov %edx, EF_SAVED_EDX(%eax)
+ mov 0(%esp), %ecx
+ mov %ecx, AF_SAVED_EDX(%eax)
/* flags. */
- mov 4(%esp), %edx
- mov %edx, EF_SAVED_FLAGS(%eax)
+ mov 4(%esp), %ecx
+ mov %ecx, AF_SAVED_FLAGS(%eax)
/* ecx. */
- mov 8(%esp), %edx
- mov %edx, EF_SAVED_ECX(%eax)
+ mov 8(%esp), %ecx
+ mov %ecx, AF_SAVED_ECX(%eax)
/* eax. */
- mov 12(%esp), %edx
- mov %edx, EF_SAVED_EAX(%eax)
-
- mov %ebx, EF_SAVED_EBX(%eax)
- mov %edi, EF_SAVED_EDI(%eax)
- mov %esi, EF_SAVED_ESI(%eax)
-
- /* Restore the exception page pointer (edx). */
- mov 16(%esp), %edx
+ mov 12(%esp), %ecx
+ mov %ecx, AF_SAVED_EAX(%eax)
+
+ /* We save the rest for debugging purposes. */
+ mov %ebx, AF_SAVED_EBX(%eax)
+ mov %edi, AF_SAVED_EDI(%eax)
+ mov %esi, AF_SAVED_ESI(%eax)
+ mov %ebp, AF_SAVED_EBP(%eax)
+
+ /* Abandon the activation stack. If AF->NORMAL_MODE_STACK is
+ 0, we use the top of the normal user stack. Otherwise, we use
+ the stack indicated by AF->NORMAL_MODE_STACK. */
- /* Restore the user ESP. */
- mov SAVED_SP(%edx), %esp
+ mov AF_NORMAL_MODE_STACK(%eax), %esp
+ test %esp, %esp
+ jnz stack_setup
+ mov UTCB_SAVED_SP(%edx), %esp
+stack_setup:
- /* We've now stashed away all the state we need to restore to
- the interrupted state. */
+ /* We've now stashed away all the state that was in the UTCB
+ or on the activation stack that we need to restore the
+ interrupted state. */
- /* Reset the activation bit. */
- and $(~1), MODE(%edx)
+ /* Clear the activation bit. */
+ and $(~1), UTCB_MODE(%edx)
- /* XXX: Check for pending message. */
+ .global hurd_activation_handler_end, _hurd_activation_handler_end
+hurd_activation_handler_end:
+_hurd_activation_handler_end:
- .global exception_handler_end, _exception_handler_end
-exception_handler_end:
-_exception_handler_end:
+ /* We have now left activated mode. We've saved all the state
+ we need to return to the interrupted state in the activation
+ frame and ESP points to another stack (i.e., not the activation
+ stack). If a fault now occurs, nothing bad can happen. */
- /* We have now left activated mode. We've saved all the
- state we need to return to the interrupted state in the
- exception frame and ESP points to the normal stack. If a
- fault now occurs, nothing bad can happend. */
-
- /* Save the exception page pointer. */
+ /* Save the UTCB pointer. */
pushl %edx
- /* Save the exception frame pointer. */
+ /* Save the activation frame pointer. */
pushl %eax
- /* Call the continuation (single argument: exception frame
- pointer). */
+ /* Call the continuation (two arguments: activation frame
+ pointer and the utcb). */
+ pushl %edx
pushl %eax
cld
- call exception_handler_normal
+ call hurd_activation_handler_normal
- /* Remove the argument. */
- add $4, %esp
+ /* Remove the arguments. */
+ add $8, %esp
- /* Restore the exception frame pointer. */
+ /* Restore the activation frame pointer. */
popl %eax
- /* And restore the exception page pointer. */
+ /* And restore the UTCB pointer. */
popl %edx
- /* Restore the user state. */
- mov EF_SAVED_IP(%eax), %ecx
+ /* Restore the interrupted state. */
+
+ /* First, the interrupted stack. */
+ mov AF_SAVED_ESP(%eax), %esp
+
+ /* Then, push the state onto that stack. */
+ mov AF_SAVED_IP(%eax), %ecx
pushl %ecx
- mov EF_SAVED_FLAGS(%eax), %ecx
+ mov AF_SAVED_FLAGS(%eax), %ecx
pushl %ecx
- mov EF_SAVED_EDX(%eax), %ecx
+ mov AF_SAVED_EDX(%eax), %ecx
pushl %ecx
- mov EF_SAVED_ECX(%eax), %ecx
+ mov AF_SAVED_ECX(%eax), %ecx
pushl %ecx
- mov EF_SAVED_EAX(%eax), %ecx
+ mov AF_SAVED_EAX(%eax), %ecx
pushl %ecx
- /* Remove our exception frame, which is at the top
- of the exception frame stack. */
- mov EF_NEXT(%eax), %ecx
- mov %ecx, EXCEPTION_STACK(%edx)
+ /* Remove our activation frame, which is at the top
+ of the activation frame stack. */
+ mov AF_NEXT(%eax), %ecx
+ mov %ecx, UTCB_ACTIVATION_FRAME_STACK(%edx)
+ /* Finally, restore the register file. */
popl %eax
popl %ecx
popl %edx
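
[Editorial note: the AF_* offsets used by the assembly above correspond to the members of struct activation_frame that the C code accesses (eax ... ebp, esp, eip, normal_mode_stack, next). The struct below is only a reading aid showing the layout the offsets assume on ia32 (4-byte words); the authoritative declaration lives in the exceptions header, not here.]

#include <stdint.h>

struct activation_frame_prefix     /* illustrative; order inferred from AF_* */
{
  uintptr_t eax;                   /* AF_SAVED_EAX         =  0 */
  uintptr_t ecx;                   /* AF_SAVED_ECX         =  4 */
  uintptr_t edx;                   /* AF_SAVED_EDX         =  8 */
  uintptr_t eflags;                /* AF_SAVED_FLAGS       = 12 */
  uintptr_t eip;                   /* AF_SAVED_IP          = 16 */
  uintptr_t ebx;                   /* AF_SAVED_EBX         = 20 */
  uintptr_t edi;                   /* AF_SAVED_EDI         = 24 */
  uintptr_t esi;                   /* AF_SAVED_ESI         = 28 */
  uintptr_t ebp;                   /* AF_SAVED_EBP         = 32 */
  uintptr_t esp;                   /* AF_SAVED_ESP         = 36 */
  void *normal_mode_stack;         /* AF_NORMAL_MODE_STACK = 40 */
  struct activation_frame_prefix *next;  /* AF_NEXT        = 44 */
};
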
diff --git a/libhurd-mm/map.c b/libhurd-mm/map.c
index 4c3ebb2..001b97d 100644
--- a/libhurd-mm/map.c
+++ b/libhurd-mm/map.c
@@ -126,7 +126,7 @@ map_install (struct map *map)
assert ((map->access & ~MAP_ACCESS_ALL) == 0);
- debug (5, "Installing %c%c map at %x+%x(%x) referencing %x starting at %x",
+ debug (5, "Installing %c%c map at %x+%x(%x) referencing %p starting at %x",
map->access & MAP_ACCESS_READ ? 'r' : '~',
map->access & MAP_ACCESS_WRITE ? 'w' : '~',
map->region.start, map->region.start + map->region.length,
@@ -314,7 +314,7 @@ map_join (struct map *first, struct map *second)
}
bool
-map_fault (addr_t fault_addr, uintptr_t ip, struct exception_info info)
+map_fault (addr_t fault_addr, uintptr_t ip, struct activation_fault_info info)
{
/* Find the map. */
struct region region;
@@ -332,9 +332,9 @@ map_fault (addr_t fault_addr, uintptr_t ip, struct exception_info info)
{
do_debug (5)
{
- debug (0, "No map covers " ADDR_FMT "(" EXCEPTION_INFO_FMT ")",
+ debug (0, "No map covers " ADDR_FMT "(" ACTIVATION_FAULT_INFO_FMT ")",
ADDR_PRINTF (fault_addr),
- EXCEPTION_INFO_PRINTF (info));
+ ACTIVATION_FAULT_INFO_PRINTF (info));
for (map = hurd_btree_map_first (&maps);
map;
map = hurd_btree_map_next (map))
diff --git a/libhurd-mm/map.h b/libhurd-mm/map.h
index dba2389..febc3ea 100644
--- a/libhurd-mm/map.h
+++ b/libhurd-mm/map.h
@@ -23,8 +23,8 @@
#include <hurd/btree.h>
#include <hurd/addr.h>
-#include <hurd/exceptions.h>
#include <hurd/mutex.h>
+#include <hurd/as.h>
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>
@@ -161,7 +161,7 @@ maps_lock_lock (void)
{
extern ss_mutex_t maps_lock;
- map_lock_ensure_stack (EXCEPTION_STACK_SIZE - PAGESIZE);
+ map_lock_ensure_stack (AS_STACK_SPACE);
ss_mutex_lock (&maps_lock);
}
@@ -231,6 +231,6 @@ extern bool map_join (struct map *first, struct map *second);
/* Raise a fault at address ADDR. Returns true if the fault was
handled, false otherwise. */
extern bool map_fault (addr_t addr,
- uintptr_t ip, struct exception_info info);
+ uintptr_t ip, struct activation_fault_info info);
#endif
diff --git a/libhurd-mm/message-buffer.c b/libhurd-mm/message-buffer.c
new file mode 100644
index 0000000..c1326ab
--- /dev/null
+++ b/libhurd-mm/message-buffer.c
@@ -0,0 +1,315 @@
+/* message-buffer.c - Implementation of messaging data structure management.
+ Copyright (C) 2008 Free Software Foundation, Inc.
+ Written by Neal H. Walfield <neal@gnu.org>.
+
+ This file is part of the GNU Hurd.
+
+ GNU Hurd is free software: you can redistribute it and/or modify it
+ under the terms of the GNU Lesser General Public License as
+ published by the Free Software Foundation, either version 3 of the
+ License, or (at your option) any later version.
+
+ GNU Hurd is distributed in the hope that it will be useful, but
+ WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with GNU Hurd. If not, see
+ <http://www.gnu.org/licenses/>. */
+
+#include <hurd/stddef.h>
+#include <hurd/slab.h>
+#include <hurd/storage.h>
+#include <hurd/as.h>
+#include <hurd/startup.h>
+#include <hurd/capalloc.h>
+#include <hurd/exceptions.h>
+
+extern struct hurd_startup_data *__hurd_startup_data;
+
+static char initial_pages[4][PAGESIZE] __attribute__ ((aligned (PAGESIZE)));
+static int initial_page;
+#define INITIAL_PAGE_COUNT (sizeof (initial_pages) / sizeof (initial_pages[0]))
+static int initial_messenger;
+#define INITIAL_MESSENGER_COUNT \
+ (sizeof (__hurd_startup_data->messengers) \
+ / sizeof (__hurd_startup_data->messengers[0]))
+
+static error_t
+slab_alloc (void *hook, size_t size, void **ptr)
+{
+ assert (size == PAGESIZE);
+
+ if (unlikely (initial_page < INITIAL_PAGE_COUNT))
+ {
+ *ptr = initial_pages[initial_page ++];
+ return 0;
+ }
+
+ struct storage storage = storage_alloc (meta_data_activity, cap_page,
+ STORAGE_LONG_LIVED,
+ OBJECT_POLICY_DEFAULT, ADDR_VOID);
+ if (ADDR_IS_VOID (storage.addr))
+ panic ("Out of space.");
+ *ptr = ADDR_TO_PTR (addr_extend (storage.addr, 0, PAGESIZE_LOG2));
+
+ return 0;
+}
+
+static error_t
+slab_dealloc (void *hook, void *buffer, size_t size)
+{
+ assert (size == PAGESIZE);
+
+ addr_t addr = addr_chop (PTR_TO_ADDR (buffer), PAGESIZE_LOG2);
+ storage_free (addr, false);
+
+ return 0;
+}
+
+static error_t
+slab_constructor (void *hook, void *object)
+{
+ struct hurd_message_buffer *mb = object;
+ assert (mb->magic == 0);
+ mb->magic = ~HURD_MESSAGE_BUFFER_MAGIC;
+
+ return 0;
+}
+
+static void
+slab_destructor (void *hook, void *object)
+{
+ struct hurd_message_buffer *mb = object;
+
+ if (mb->magic != HURD_MESSAGE_BUFFER_MAGIC)
+ /* It was never initialized. */
+ {
+ assert (mb->magic == ~HURD_MESSAGE_BUFFER_MAGIC);
+ return;
+ }
+
+ storage_free (mb->sender, false);
+ storage_free (addr_chop (PTR_TO_ADDR (mb->request), PAGESIZE_LOG2),
+ false);
+ storage_free (mb->receiver, false);
+ storage_free (addr_chop (PTR_TO_ADDR (mb->reply), PAGESIZE_LOG2),
+ false);
+}
+
+/* Storage descriptors are alloced from a slab. */
+static struct hurd_slab_space message_buffer_slab
+ = HURD_SLAB_SPACE_INITIALIZER (struct hurd_message_buffer,
+ slab_alloc, slab_dealloc,
+ slab_constructor, slab_destructor, NULL);
+
+
+static struct hurd_message_buffer *
+hurd_message_buffer_alloc_hard (void)
+{
+ void *buffer;
+ error_t err = hurd_slab_alloc (&message_buffer_slab, &buffer);
+ if (err)
+ panic ("Out of memory!");
+
+ struct hurd_message_buffer *mb = buffer;
+
+ if (mb->magic == HURD_MESSAGE_BUFFER_MAGIC)
+ /* It's already initialized. */
+ return mb;
+
+ assert (mb->magic == ~HURD_MESSAGE_BUFFER_MAGIC);
+ mb->magic = HURD_MESSAGE_BUFFER_MAGIC;
+
+ struct storage storage;
+
+ /* The send messenger. */
+ if (unlikely (initial_messenger < INITIAL_MESSENGER_COUNT))
+ mb->sender = __hurd_startup_data->messengers[initial_messenger ++];
+ else
+ {
+ storage = storage_alloc (meta_data_activity, cap_messenger,
+ STORAGE_LONG_LIVED,
+ OBJECT_POLICY_DEFAULT, ADDR_VOID);
+ if (ADDR_IS_VOID (storage.addr))
+ panic ("Out of space.");
+
+ mb->sender = storage.addr;
+ }
+
+ /* The receive messenger. */
+ if (unlikely (initial_messenger < INITIAL_MESSENGER_COUNT))
+ mb->receiver_strong = __hurd_startup_data->messengers[initial_messenger ++];
+ else
+ {
+ storage = storage_alloc (meta_data_activity, cap_messenger,
+ STORAGE_LONG_LIVED,
+ OBJECT_POLICY_DEFAULT, ADDR_VOID);
+ if (ADDR_IS_VOID (storage.addr))
+ panic ("Out of space.");
+
+ mb->receiver_strong = storage.addr;
+ }
+
+ /* Weaken it. */
+#if 0
+ mb->receiver = capalloc ();
+ struct cap receiver_cap = as_cap_lookup (mb->receiver_strong, cap_messenger,
+ NULL);
+ assert (receiver_cap.type == cap_messenger);
+ as_slot_lookup_use
+ (mb->receiver,
+ ({
+ bool ret = cap_copy_x (ADDR_VOID,
+ ADDR_VOID, slot, mb->receiver,
+ ADDR_VOID, receiver_cap, mb->receiver_strong,
+ CAP_COPY_WEAKEN,
+ CAP_PROPERTIES_VOID);
+ assert (ret);
+ }));
+#endif
+ mb->receiver = mb->receiver_strong;
+
+ /* The send buffer. */
+ if (unlikely (initial_page < INITIAL_PAGE_COUNT))
+ mb->request = (void *) &initial_pages[initial_page ++][0];
+ else
+ {
+ storage = storage_alloc (meta_data_activity, cap_page,
+ STORAGE_LONG_LIVED,
+ OBJECT_POLICY_DEFAULT, ADDR_VOID);
+ if (ADDR_IS_VOID (storage.addr))
+ panic ("Out of space.");
+
+ mb->request = ADDR_TO_PTR (addr_extend (storage.addr, 0, PAGESIZE_LOG2));
+ }
+
+ /* And the receive buffer. */
+ if (unlikely (initial_page < INITIAL_PAGE_COUNT))
+ mb->reply = (void *) &initial_pages[initial_page ++][0];
+ else
+ {
+ storage = storage_alloc (meta_data_activity, cap_page,
+ STORAGE_LONG_LIVED,
+ OBJECT_POLICY_DEFAULT, ADDR_VOID);
+ if (ADDR_IS_VOID (storage.addr))
+ panic ("Out of space.");
+
+ mb->reply = ADDR_TO_PTR (addr_extend (storage.addr, 0, PAGESIZE_LOG2));
+ }
+
+
+ /* Now set the messengers' id. */
+ vg_messenger_id_receive_marshal (mb->reply);
+ vg_messenger_id_send_marshal (mb->request,
+ (uint64_t) (uintptr_t) mb,
+ mb->receiver);
+
+ /* Set the reply messenger's id first as the activation handler
+ requires that it be set correctly. This will do that just before
+ the reply is sent. */
+ hurd_activation_message_register (mb);
+ err = vg_ipc_full (VG_IPC_RECEIVE | VG_IPC_SEND | VG_IPC_RECEIVE_ACTIVATE
+ | VG_IPC_RECEIVE_SET_THREAD_TO_CALLER
+ | VG_IPC_SEND_SET_THREAD_TO_CALLER,
+ ADDR_VOID, mb->receiver, PTR_TO_PAGE (mb->reply),
+ ADDR_VOID,
+ ADDR_VOID, mb->receiver,
+ mb->sender, PTR_TO_PAGE (mb->request),
+ 0, 0, ADDR_VOID);
+ if (err)
+ panic ("Failed to set receiver's id");
+
+ err = vg_messenger_id_reply_unmarshal (mb->reply, NULL);
+ if (err)
+ panic ("Setting receiver's id: %d", err);
+
+ hurd_activation_message_register (mb);
+ err = vg_ipc_full (VG_IPC_RECEIVE | VG_IPC_SEND | VG_IPC_RECEIVE_ACTIVATE,
+ ADDR_VOID, mb->receiver, PTR_TO_PAGE (mb->reply),
+ ADDR_VOID,
+ ADDR_VOID, mb->sender,
+ mb->sender, PTR_TO_PAGE (mb->request),
+ 0, 0, ADDR_VOID);
+ if (err)
+ panic ("Failed to set sender's id");
+
+ err = vg_messenger_id_reply_unmarshal (mb->reply, NULL);
+ if (err)
+ panic ("Setting sender's id: %d", err);
+
+ return mb;
+}
+
+static struct hurd_message_buffer *buffers;
+static int buffers_count;
+
+static void
+hurd_message_buffer_free_internal (struct hurd_message_buffer *buffer,
+ bool already_accounted)
+{
+ /* XXX We should perhaps free some buffers if we go over a high
+ water mark. */
+ // hurd_slab_dealloc (&message_buffer_slab, buffer);
+
+ /* Add BUFFER to the free list. */
+ for (;;)
+ {
+ buffer->next = buffers;
+ if (__sync_val_compare_and_swap (&buffers, buffer->next, buffer)
+ == buffer->next)
+ {
+ if (! already_accounted)
+ __sync_fetch_and_add (&buffers_count, 1);
+ return;
+ }
+ }
+}
+
+void
+hurd_message_buffer_free (struct hurd_message_buffer *buffer)
+{
+ hurd_message_buffer_free_internal (buffer, false);
+}
+
+#define BUFFERS_LOW_WATER 4
+
+struct hurd_message_buffer *
+hurd_message_buffer_alloc (void)
+{
+ struct hurd_message_buffer *mb;
+ do
+ {
+#if 0
+ if (likely (mm_init_done)
+ && unlikely (buffers_count <= BUFFERS_LOW_WATER))
+ {
+ int i = BUFFERS_LOW_WATER;
+ mb = hurd_message_buffer_alloc_hard ();
+
+ if (buffers_count == BUFFERS_LOW_WATER)
+ return mb;
+
+ hurd_message_buffer_free_internal (buffer, true);
+ }
+#endif
+
+ mb = buffers;
+ if (! mb)
+ {
+ mb = hurd_message_buffer_alloc_hard ();
+ return mb;
+ }
+ }
+ while (__sync_val_compare_and_swap (&buffers, mb, mb->next) != mb);
+ __sync_fetch_and_add (&buffers_count, -1);
+
+ return mb;
+}
+
+struct hurd_message_buffer *
+hurd_message_buffer_alloc_long (void)
+{
+ return hurd_message_buffer_alloc_hard ();
+}
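
[Editorial note: hurd_message_buffer_alloc and hurd_message_buffer_free keep recycled buffers on a lock-free LIFO (a Treiber stack) manipulated with GCC's __sync compare-and-swap builtin, as in the loops above. A distilled sketch of that pattern, with the accounting stripped out; it reuses the NEXT field of struct hurd_message_buffer declared below.]

static struct hurd_message_buffer *free_list;

/* Push MB onto the free list.  */
static void
free_list_push (struct hurd_message_buffer *mb)
{
  for (;;)
    {
      mb->next = free_list;
      if (__sync_val_compare_and_swap (&free_list, mb->next, mb) == mb->next)
        return;
    }
}

/* Pop a buffer from the free list, or return NULL if it is empty.  */
static struct hurd_message_buffer *
free_list_pop (void)
{
  struct hurd_message_buffer *mb;
  do
    {
      mb = free_list;
      if (! mb)
        return NULL;
    }
  while (__sync_val_compare_and_swap (&free_list, mb, mb->next) != mb);
  return mb;
}
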
diff --git a/libhurd-mm/message-buffer.h b/libhurd-mm/message-buffer.h
new file mode 100644
index 0000000..21e44b4
--- /dev/null
+++ b/libhurd-mm/message-buffer.h
@@ -0,0 +1,80 @@
+/* message-buffer.h - Interface for managing messaging data structures.
+ Copyright (C) 2008 Free Software Foundation, Inc.
+ Written by Neal H. Walfield <neal@gnu.org>.
+
+ This file is part of the GNU Hurd.
+
+ GNU Hurd is free software: you can redistribute it and/or modify it
+ under the terms of the GNU Lesser General Public License as
+ published by the Free Software Foundation, either version 3 of the
+ License, or (at your option) any later version.
+
+ GNU Hurd is distributed in the hope that it will be useful, but
+ WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with GNU Hurd. If not, see
+ <http://www.gnu.org/licenses/>. */
+
+#ifndef __have_hurd_message_buffer
+# define __have_hurd_message_buffer
+
+# include <stdint.h>
+# include <stdbool.h>
+# include <hurd/addr.h>
+
+/* Forward. */
+struct vg_message;
+
+#define HURD_MESSAGE_BUFFER_MAGIC 0x111A61C
+
+struct hurd_message_buffer
+{
+ uintptr_t magic;
+
+ struct hurd_message_buffer *next;
+
+ /* A messenger associated with REQUEST. The messenger's identifier is
+ set to the data structure's address. */
+ addr_t sender;
+ struct vg_message *request;
+ /* A messenger associated with REPLY. The messenger's identifier is
+ set to the data structure's address. */
+ addr_t receiver_strong;
+ /* A weakened version. */
+ addr_t receiver;
+ struct vg_message *reply;
+
+ /* If not NULL, then this routine is called. */
+ void (*closure) (struct hurd_message_buffer *mb);
+
+ /* XXX: Whether the activation should resume the thread or simply
+ free the buffer. Ignored if CLOSURE is not NULL. */
+ bool just_free;
+
+ void *cookie;
+};
+
+#endif /* __have_hurd_message_buffer */
+
+#ifdef __need_hurd_message_buffer
+# undef __need_hurd_message_buffer
+#else
+
+# ifndef _HURD_MESSAGE_BUFFER
+# define _HURD_MESSAGE_BUFFER
+
+/* Allocate a message buffer. */
+extern struct hurd_message_buffer *hurd_message_buffer_alloc (void);
+
+/* Allocate a message buffer, which is unlikely to be freed soon. */
+extern struct hurd_message_buffer *hurd_message_buffer_alloc_long (void);
+
+/* Free a message buffer. */
+extern void hurd_message_buffer_free (struct hurd_message_buffer *buf);
+
+# endif /* _HURD_MESSAGE_BUFFER */
+
+#endif /* !__need_hurd_message_buffer */
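
[Editorial note: a hypothetical user of the interface declared above, shown only to illustrate the alloc/free pairing:]

struct hurd_message_buffer *mb = hurd_message_buffer_alloc ();
/* ... marshal into mb->request, perform the IPC, read mb->reply ... */
hurd_message_buffer_free (mb);
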
diff --git a/libhurd-mm/mm-init.c b/libhurd-mm/mm-init.c
index 8604568..ea12898 100644
--- a/libhurd-mm/mm-init.c
+++ b/libhurd-mm/mm-init.c
@@ -26,6 +26,12 @@
#include <hurd/startup.h>
#include <hurd/exceptions.h>
+#ifdef i386
+#include <hurd/pager.h>
+#endif
+
+#include <backtrace.h>
+
#include "storage.h"
#include "as.h"
@@ -48,9 +54,198 @@ mm_init (addr_t activity)
else
meta_data_activity = activity;
+ hurd_activation_handler_init_early ();
storage_init ();
as_init ();
- exception_handler_init ();
+ hurd_activation_handler_init ();
mm_init_done = 1;
+
+#ifndef NDEBUG
+ /* The following test checks the activation trampoline. In
+ particular, it checks that the register file before a fault
+ matches the register file after a fault. This is interesting
+ because such an activation is handled in normal mode. That
+ means, when the fault occurs, we enter the activation handler,
+ return an activation frame, enter normal mode, execute the normal
+ mode activation handler, call the callback functions, and then
+ return to the interrupted code. */
+#ifdef i386
+ void test (int nesting)
+ {
+ addr_t addr = as_alloc (PAGESIZE_LOG2, 1, true);
+ void *a = ADDR_TO_PTR (addr_extend (addr, 0, PAGESIZE_LOG2));
+
+ int recursed = false;
+
+ struct storage storage;
+ bool fault (struct pager *pager,
+ uintptr_t offset, int count, bool ro,
+ uintptr_t fault_addr, uintptr_t ip,
+ struct activation_fault_info info)
+ {
+ assert (a == (void *) (fault_addr & ~(PAGESIZE - 1)));
+ assert (count == 1);
+
+ struct vg_utcb *utcb = hurd_utcb ();
+ struct activation_frame *activation_frame = utcb->activation_stack;
+ debug (4, "Fault at %p (ip: %p, sp: %p, eax: %p, "
+ "ebx: %p, ecx: %p, edx: %p, edi: %p, esi: %p, ebp: %p, "
+ "eflags: %p)",
+ fault,
+ (void *) activation_frame->eip,
+ (void *) activation_frame->esp,
+ (void *) activation_frame->eax,
+ (void *) activation_frame->ebx,
+ (void *) activation_frame->ecx,
+ (void *) activation_frame->edx,
+ (void *) activation_frame->edi,
+ (void *) activation_frame->esi,
+ (void *) activation_frame->ebp,
+ (void *) activation_frame->eflags);
+
+ assert (activation_frame->eax == 0xa);
+ assert (activation_frame->ebx == 0xb);
+ assert (activation_frame->ecx == 0xc);
+ assert (activation_frame->edx == 0xd);
+ assert (activation_frame->edi == 0xd1);
+ assert (activation_frame->esi == 0x21);
+ assert (activation_frame->ebp == (uintptr_t) a);
+ /* We cannot easily check esp and eip here. */
+
+ as_ensure (addr);
+ storage = storage_alloc (ADDR_VOID,
+ cap_page, STORAGE_UNKNOWN,
+ OBJECT_POLICY_DEFAULT,
+ addr);
+
+ if (nesting > 1 && ! recursed)
+ {
+ recursed = true;
+
+ int i;
+ for (i = 0; i < 3; i ++)
+ {
+ debug (5, "Depth: %d; iter: %d", nesting - 1, i);
+ test (nesting - 1);
+ debug (5, "Depth: %d; iter: %d done", nesting - 1, i);
+ }
+ }
+
+ return true;
+ }
+
+ struct pager pager = PAGER_VOID;
+ pager.length = PAGESIZE;
+ pager.fault = fault;
+ pager_init (&pager);
+
+ struct region region = { (uintptr_t) a, PAGESIZE };
+ struct map *map = map_create (region, MAP_ACCESS_ALL, &pager, 0, NULL);
+
+ uintptr_t pre_flags, pre_esp;
+ uintptr_t eax, ebx, ecx, edx, edi, esi, ebp, esp, flags;
+ uintptr_t canary;
+
+ /* Check that the trampoline works. */
+ __asm__ __volatile__
+ (
+ "mov %%esp, %[pre_esp]\n\t"
+ "pushf\n\t"
+ "pop %%eax\n\t"
+ "mov %%eax, %[pre_flags]\n\t"
+
+ /* Canary. */
+ "pushl $0xcab00d1e\n\t"
+
+ "pushl %%ebp\n\t"
+
+ "mov $0xa, %%eax\n\t"
+ "mov $0xb, %%ebx\n\t"
+ "mov $0xc, %%ecx\n\t"
+ "mov $0xd, %%edx\n\t"
+ "mov $0xd1, %%edi\n\t"
+ "mov $0x21, %%esi\n\t"
+ "mov %[addr], %%ebp\n\t"
+ /* Fault! */
+ "mov %%eax, 0(%%ebp)\n\t"
+
+ /* Save the current ebp. */
+ "pushl %%ebp\n\t"
+ /* Restore the old ebp. */
+ "mov 4(%%esp), %%ebp\n\t"
+
+ /* Save the rest of the GP registers. */
+ "mov %%eax, %[eax]\n\t"
+ "mov %%ebx, %[ebx]\n\t"
+ "mov %%ecx, %[ecx]\n\t"
+ "mov %%edx, %[edx]\n\t"
+ "mov %%edi, %[edi]\n\t"
+ "mov %%esi, %[esi]\n\t"
+
+ /* Save the new flags. */
+ "pushf\n\t"
+ "pop %%eax\n\t"
+ "mov %%eax, %[flags]\n\t"
+
+ /* Save the new ebp. */
+ "mov 0(%%esp), %%eax\n\t"
+ "mov %%eax, %[ebp]\n\t"
+
+ /* Fix up the stack. */
+ "add $8, %%esp\n\t"
+
+ /* Grab the canary. */
+ "popl %%eax\n\t"
+ "mov %%eax, %[canary]\n\t"
+
+ /* And don't forget to save the new esp. */
+ "mov %%esp, %[esp]\n\t"
+
+ : [eax] "=m" (eax), [ebx] "=m" (ebx),
+ [ecx] "=m" (ecx), [edx] "=m" (edx),
+ [edi] "=m" (edi), [esi] "=m" (esi), [ebp] "=m" (ebp),
+ [pre_esp] "=m" (pre_esp), [esp] "=m" (esp),
+ [pre_flags] "=m" (pre_flags), [flags] "=m" (flags),
+ [canary] "=m" (canary)
+ : [addr] "m" (a)
+ : "%eax", "%ebx", "%ecx", "%edx", "%edi", "%esi");
+
+ debug (4, "Regsiter file: "
+ "eax: %p, ebx: %p, ecx: %p, edx: %p, "
+ "edi: %p, esi: %p, ebp: %p -> %p, esp: %p -> %p, flags: %p -> %p",
+ (void *) eax, (void *) ebx, (void *) ecx, (void *) edx,
+ (void *) edi, (void *) esi, (void *) a, (void *) ebp,
+ (void *) pre_esp, (void *) esp,
+ (void *) pre_flags, (void *) flags);
+
+ assert (eax == 0xa);
+ assert (ebx == 0xb);
+ assert (ecx == 0xc);
+ assert (edx == 0xd);
+ assert (edi == 0xd1);
+ assert (esi == 0x21);
+ assert (ebp == (uintptr_t) a);
+ assert (esp == pre_esp);
+ assert (flags == pre_flags);
+ assert (canary == 0xcab00d1e);
+
+ maps_lock_lock ();
+ map_disconnect (map);
+ maps_lock_unlock ();
+ map_destroy (map);
+
+ storage_free (storage.addr, false);
+ as_free (addr, 1);
+ }
+
+ int i;
+ for (i = 0; i < 3; i ++)
+ {
+ debug (5, "Depth: %d; iter: %d", 3, i + 1);
+ test (3);
+ debug (5, "Depth: %d; iter: %d done", 3, i + 1);
+ }
+#endif
+#endif
}
diff --git a/libhurd-mm/pager.h b/libhurd-mm/pager.h
index 6d4c517..66b75ef 100644
--- a/libhurd-mm/pager.h
+++ b/libhurd-mm/pager.h
@@ -36,7 +36,7 @@ struct pager;
typedef bool (*pager_fault_t) (struct pager *pager,
uintptr_t offset, int count, bool ro,
uintptr_t fault_addr, uintptr_t ip,
- struct exception_info info);
+ struct activation_fault_info info);
/* The count sub-trees starting at ADDR are no longer referenced and
their associated storage may be reclaimed. */
@@ -81,8 +81,11 @@ struct pager
pager_advise_t advise;
};
-/* Initialize the pager. LENGTH and FAULT must be set
- appropriately. */
+#define PAGER_VOID { NULL, 0, 0, NULL, NULL, NULL }
+
+/* Initialize the pager. All fields must be set appropriately. After
+ calling this function, LENGTH and FAULT may no longer be
+ changed. */
extern bool pager_init (struct pager *pager);
/* Deinitialize the pager PAGER, destroying all the mappings in the
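
[Editorial note: per the revised comment above, every field of struct pager must now be valid before pager_init is called, which PAGER_VOID makes easy. A minimal sketch mirroring the test added to mm-init.c; my_fault is a hypothetical pager_fault_t, not something defined by this commit.]

struct pager pager = PAGER_VOID;
pager.length = PAGESIZE;
pager.fault = my_fault;          /* hypothetical fault handler */
if (! pager_init (&pager))
  panic ("pager_init failed");
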
diff --git a/libhurd-mm/storage.c b/libhurd-mm/storage.c
index d6d21a1..cfdb6ed 100644
--- a/libhurd-mm/storage.c
+++ b/libhurd-mm/storage.c
@@ -28,6 +28,7 @@
#include <hurd/startup.h>
#include <hurd/rm.h>
#include <hurd/mutex.h>
+#include <backtrace.h>
#ifndef NDEBUG
struct ss_lock_trace ss_lock_trace[SS_LOCK_TRACE_COUNT];
@@ -255,7 +256,7 @@ shadow_setup (struct cap *cap, struct storage_desc *desc)
error_t err = rm_folio_object_alloc (meta_data_activity,
desc->folio, idx, cap_page,
OBJECT_POLICY_DEFAULT, 0,
- ADDR_VOID, ADDR_VOID);
+ NULL, NULL);
assert (err == 0);
shadow = ADDR_TO_PTR (addr_extend (addr_extend (desc->folio,
idx, FOLIO_OBJECTS_LOG2),
@@ -331,7 +332,7 @@ static bool storage_init_done;
soon have a problem. In this case, we serialize access to the pool
of available pages to allow some thread that is able to allocate
more pages the chance to do so. */
-#define FREE_PAGES_SERIALIZE 16
+#define FREE_PAGES_SERIALIZE 32
static pthread_mutex_t storage_low_mutex
= PTHREAD_RECURSIVE_MUTEX_INITIALIZER_NP;
@@ -453,8 +454,11 @@ storage_check_reserve_internal (bool force_allocate,
}
/* And then the folio. */
- error_t err = rm_folio_alloc (activity, addr, FOLIO_POLICY_DEFAULT);
+ addr_t a = addr;
+ error_t err = rm_folio_alloc (activity, activity, FOLIO_POLICY_DEFAULT,
+ &a);
assert (! err);
+ assert (ADDR_EQ (addr, a));
/* Allocate and fill a descriptor. */
struct storage_desc *s = storage_desc_alloc ();
@@ -524,10 +528,18 @@ storage_alloc (addr_t activity,
struct storage_desc *desc;
bool do_allocate = false;
+ int tries = 0;
do
{
- storage_check_reserve_internal (do_allocate, activity, expectancy,
- true);
+ if (tries ++ == 5)
+ {
+ backtrace_print ();
+ debug (0, "Failing to get storage (free count: %d). Live lock?",
+ free_count);
+ }
+
+ storage_check_reserve_internal (do_allocate, meta_data_activity,
+ expectancy, true);
/* Find an appropriate storage area. */
struct storage_desc *pluck (struct storage_desc *list)
@@ -594,9 +606,10 @@ storage_alloc (addr_t activity,
addr_t folio = desc->folio;
addr_t object = addr_extend (folio, idx, FOLIO_OBJECTS_LOG2);
- debug (5, "Allocating object %d from " ADDR_FMT " (" ADDR_FMT ") "
- "(%d left), copying to " ADDR_FMT,
- idx, ADDR_PRINTF (folio), ADDR_PRINTF (object),
+ debug (5, "Allocating object %d as %s from " ADDR_FMT " (" ADDR_FMT ") "
+ "(%d left), installing at " ADDR_FMT,
+ idx, cap_type_string (type),
+ ADDR_PRINTF (folio), ADDR_PRINTF (object),
desc->free, ADDR_PRINTF (addr));
atomic_decrement (&free_count);
@@ -621,13 +634,13 @@ storage_alloc (addr_t activity,
ss_mutex_unlock (&storage_descs_lock);
}
- error_t err = rm_folio_object_alloc (activity,
- folio, idx, type,
- policy, 0,
- addr, ADDR_VOID);
+ addr_t a = addr;
+ error_t err = rm_folio_object_alloc (activity, folio, idx, type, policy, 0,
+ &a, NULL);
assertx (! err,
"Allocating object %d from " ADDR_FMT " at " ADDR_FMT ": %d!",
idx, ADDR_PRINTF (folio), ADDR_PRINTF (addr), err);
+ assert (ADDR_EQ (a, addr));
struct object *shadow = desc->shadow;
struct cap *cap = NULL;
@@ -686,6 +699,8 @@ storage_alloc (addr_t activity,
void
storage_free_ (addr_t object, bool unmap_now)
{
+ debug (5, DEBUG_BOLD ("Freeing " ADDR_FMT), ADDR_PRINTF (object));
+
addr_t folio = addr_chop (object, FOLIO_OBJECTS_LOG2);
atomic_increment (&free_count);
@@ -697,7 +712,7 @@ storage_free_ (addr_t object, bool unmap_now)
storage = hurd_btree_storage_desc_find (&storage_descs, &folio);
assertx (storage,
"No storage associated with " ADDR_FMT " "
- "(did you pass the storage address?",
+ "(did you pass the storage address?)",
ADDR_PRINTF (object));
ss_mutex_lock (&storage->lock);
@@ -784,7 +799,7 @@ storage_free_ (addr_t object, bool unmap_now)
error_t err = rm_folio_object_alloc (meta_data_activity,
folio, idx, cap_void,
OBJECT_POLICY_DEFAULT, 0,
- ADDR_VOID, ADDR_VOID);
+ NULL, NULL);
assert (err == 0);
if (likely (!! shadow))
@@ -884,7 +899,7 @@ storage_init (void)
ss_mutex_unlock (&storage_descs_lock);
debug (1, "%d folios, %d objects used, %d free objects",
- folio_count, __hurd_startup_data->desc_count, free_count);
+ folio_count, __hurd_startup_data->desc_count, (int) free_count);
storage_init_done = true;