Diffstat (limited to 'ev.pod')
 -rw-r--r--  ev.pod  343
 1 file changed, 186 insertions(+), 157 deletions(-)
diff --git a/ev.pod b/ev.pod
index 4d470a2..63c37a5 100644
--- a/ev.pod
+++ b/ev.pod
@@ -508,7 +508,9 @@ events to filter out spurious ones, recreating the set when required. Last
not least, it also refuses to work with some file descriptors which work
perfectly fine with C<select> (files, many character devices...).
-Epoll is truly the train wreck analog among event poll mechanisms.
+Epoll is truly the train wreck among event poll mechanisms,
+a frankenpoll, cobbled together in a hurry, no thought to design or
+interaction with others.
While stopping, setting and starting an I/O watcher in the same iteration
will result in some caching, there is still a system call per such
@@ -1616,26 +1618,19 @@ fd as you want (as long as you don't confuse yourself). Setting all file
descriptors to non-blocking mode is also usually a good idea (but not
required if you know what you are doing).
-If you cannot use non-blocking mode, then force the use of a
-known-to-be-good backend (at the time of this writing, this includes only
-C<EVBACKEND_SELECT> and C<EVBACKEND_POLL>). The same applies to file
-descriptors for which non-blocking operation makes no sense (such as
-files) - libev doesn't guarantee any specific behaviour in that case.
-
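+
+As a reminder, switching a descriptor into non-blocking mode is a
+one-liner with plain POSIX C<fcntl> (the helper name C<set_nonblock> is
+made up for this example, it is not part of libev):
+
+   #include <fcntl.h>
+
+   // put fd into non-blocking mode; returns 0 on success, -1 on error
+   static int
+   set_nonblock (int fd)
+   {
+     int flags = fcntl (fd, F_GETFL, 0);
+
+     if (flags < 0)
+       return -1;
+
+     return fcntl (fd, F_SETFL, flags | O_NONBLOCK);
+   }
+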
Another thing you have to watch out for is that it is quite easy to
-receive "spurious" readiness notifications, that is your callback might
+receive "spurious" readiness notifications, that is, your callback might
be called with C<EV_READ> but a subsequent C<read>(2) will actually block
-because there is no data. Not only are some backends known to create a
-lot of those (for example Solaris ports), it is very easy to get into
-this situation even with a relatively standard program structure. Thus
-it is best to always use non-blocking I/O: An extra C<read>(2) returning
-C<EAGAIN> is far preferable to a program hanging until some data arrives.
+because there is no data. It is very easy to get into this situation even
+with a relatively standard program structure. Thus it is best to always
+use non-blocking I/O: An extra C<read>(2) returning C<EAGAIN> is far
+preferable to a program hanging until some data arrives.
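+
+For illustration, a read callback written in this defensive style might
+look like the following sketch (C<read_cb> and C<process_data> are made-up
+names for this example, they are not part of libev):
+
+   static void
+   read_cb (EV_P_ ev_io *w, int revents)
+   {
+     char buf [1024];
+     ssize_t len = read (w->fd, buf, sizeof (buf));
+
+     if (len > 0)
+       process_data (buf, len); // got some data, handle it
+     else if (len == 0)
+       ev_io_stop (EV_A_ w);    // EOF - stop watching, close the fd, ...
+     else if (errno == EAGAIN || errno == EINTR)
+       return;                  // spurious readiness - just wait for the next event
+     else
+       abort ();                // real error - handle it properly in real code
+   }
+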
If you cannot run the fd in non-blocking mode (for example because you
should not play around with an Xlib connection), then you have to
separately re-test whether a file descriptor is really ready with a
known-to-be-good
-interface such as poll (fortunately in our Xlib example, Xlib already
-does this on its own, so its quite safe to use). Some people additionally
+interface such as poll (fortunately in the case of Xlib, it already does
+this on its own, so it's quite safe to use). Some people additionally
use C<SIGALRM> and an interval timer, just to be sure you won't block
indefinitely.
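+
+One way to do such a re-test is a C<poll> call with a zero timeout; a
+minimal sketch (the helper name C<fd_really_readable> is made up for this
+example):
+
+   #include <poll.h>
+
+   // re-check whether fd is really readable right now, without blocking
+   // (a timeout of 0 only queries the current state)
+   static int
+   fd_really_readable (int fd)
+   {
+     struct pollfd pfd;
+
+     pfd.fd     = fd;
+     pfd.events = POLLIN;
+
+     return poll (&pfd, 1, 0) > 0 && (pfd.revents & POLLIN);
+   }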
@@ -1673,16 +1668,48 @@ There is no workaround possible except not registering events
for potentially C<dup ()>'ed file descriptors, or resorting to
C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>.
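+
+For example, restricting the loop to these two backends at creation time
+could look like this (a sketch; a real program might OR in additional
+flags):
+
+   struct ev_loop *loop = ev_default_loop (EVBACKEND_SELECT | EVBACKEND_POLL);
+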
+=head3 The special problem of files
+
+Many people try to use C<select> (or libev) on file descriptors
+representing files, expecting readiness notifications so that their
+program doesn't block on disk accesses (which can take a long time on
+their own).
+
+However, this cannot ever work in the "expected" way - you get a readiness
+notification as soon as the kernel knows whether and how much data is
+there, and in the case of open files, that's always the case, so you
+always get a readiness notification instantly, and your read (or possibly
+write) will still block on the disk I/O.
+
+Another way to view it is that in the case of sockets, pipes, character
+devices and so on, there is another party (the sender) that delivers data
+on its own, but in the case of files, there is no such thing: the disk
+will not send data on its own, simply because it doesn't know what you
+wish to read - you would first have to request some data.
+
+Since files are typically not-so-well supported by advanced notification
+mechanisms, libev tries hard to emulate POSIX behaviour with respect
+to files, even though you should not use it. The reason for this is
+convenience: sometimes you want to watch STDIN or STDOUT, which is
+usually a tty, often a pipe, but also sometimes files or special devices
+(for example, C<epoll> on Linux works with F</dev/random> but not with
+F</dev/urandom>), and even though the file might better be served with
+asynchronous I/O instead of with non-blocking I/O, it is still useful when
+it "just works" instead of freezing.
+
+So avoid file descriptors pointing to files when you can (e.g. use
+libeio for them), but use them when it is convenient, e.g. for
+STDIN/STDOUT, or when you only rarely read from a file instead of from a
+socket and want to reuse the same code path.
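+
+If you want to detect this case programmatically, checking the descriptor
+with C<fstat> is a simple heuristic (the helper name C<fd_is_regular_file>
+is made up for this example):
+
+   #include <sys/stat.h>
+
+   // returns non-zero if fd refers to a regular file, i.e. a descriptor
+   // that is better served by libeio/threads than by an ev_io watcher
+   static int
+   fd_is_regular_file (int fd)
+   {
+     struct stat st;
+
+     return fstat (fd, &st) == 0 && S_ISREG (st.st_mode);
+   }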
+
=head3 The special problem of fork
Some backends (epoll, kqueue) do not support C<fork ()> at all or exhibit
useless behaviour. Libev fully supports fork, but needs to be told about
-it in the child.
+it in the child if you want to continue to use it there.
-To support fork in your programs, you either have to call
-C<ev_default_fork ()> or C<ev_loop_fork ()> after a fork in the child,
-enable C<EVFLAG_FORKCHECK>, or resort to C<EVBACKEND_SELECT> or
-C<EVBACKEND_POLL>.
+To support fork in your child processes, you have to call C<ev_loop_fork
+()> after a fork in the child, enable C<EVFLAG_FORKCHECK>, or resort to
+C<EVBACKEND_SELECT> or C<EVBACKEND_POLL>.
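+
+A minimal sketch of the first approach, using the default loop (the
+surrounding code is, of course, just an example):
+
+   struct ev_loop *loop = EV_DEFAULT;
+   pid_t pid = fork ();
+
+   if (pid == 0)
+     {
+       // child: tell libev about the fork before touching the loop again
+       ev_loop_fork (loop);
+
+       // ... start/stop watchers as needed, then (re-)enter the loop
+       ev_run (loop, 0);
+     }
+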
=head3 The special problem of SIGPIPE
@@ -3472,6 +3499,144 @@ To exit from any of these loops, just set the corresponding exit variable:
   // exit both
   exit_main_loop = exit_nested_loop = 1;
+=item Thread locking example
+
+Here is a fictitious example of how to run an event loop in a different
+thread than where callbacks are being invoked and watchers are
+created/added/removed.
+
+For a real-world example, see the C<EV::Loop::Async> perl module,
+which uses exactly this technique (one that is suited to many high-level
+languages).
+
+The example uses a pthread mutex to protect the loop data, a condition
+variable to wait for callback invocations, an async watcher to notify the
+event loop thread and an unspecified mechanism to wake up the main thread.
+
+First, you need to associate some data with the event loop:
+
+   typedef struct {
+     pthread_mutex_t lock; /* global loop lock */
+     ev_async async_w;
+     pthread_t tid;
+     pthread_cond_t invoke_cv;
+   } userdata;
+
+   void prepare_loop (EV_P)
+   {
+     // for simplicity, we use a static userdata struct.
+     static userdata u;
+
+     ev_async_init (&u.async_w, async_cb);
+     ev_async_start (EV_A_ &u.async_w);
+
+     pthread_mutex_init (&u.lock, 0);
+     pthread_cond_init (&u.invoke_cv, 0);
+
+     // now associate this with the loop
+     ev_set_userdata (EV_A_ &u);
+     ev_set_invoke_pending_cb (EV_A_ l_invoke);
+     ev_set_loop_release_cb (EV_A_ l_release, l_acquire);
+
+     // then create the thread running ev_run
+     pthread_create (&u.tid, 0, l_run, EV_A);
+   }
+
+The callback for the C<ev_async> watcher does nothing: the watcher is used
+solely to wake up the event loop so it takes notice of any new watchers
+that might have been added:
+
+   static void
+   async_cb (EV_P_ ev_async *w, int revents)
+   {
+     // just used for the side effects
+   }
+
+The C<l_release> and C<l_acquire> callbacks simply unlock/lock the mutex
+protecting the loop data, respectively.
+
+   static void
+   l_release (EV_P)
+   {
+     userdata *u = ev_userdata (EV_A);
+     pthread_mutex_unlock (&u->lock);
+   }
+
+   static void
+   l_acquire (EV_P)
+   {
+     userdata *u = ev_userdata (EV_A);
+     pthread_mutex_lock (&u->lock);
+   }
+
+The event loop thread first acquires the mutex, and then jumps straight
+into C<ev_run>:
+
+   void *
+   l_run (void *thr_arg)
+   {
+     struct ev_loop *loop = (struct ev_loop *)thr_arg;
+
+     l_acquire (EV_A);
+     pthread_setcanceltype (PTHREAD_CANCEL_ASYNCHRONOUS, 0);
+     ev_run (EV_A_ 0);
+     l_release (EV_A);
+
+     return 0;
+   }
+
+Instead of invoking all pending watchers, the C<l_invoke> callback will
+signal the main thread via some unspecified mechanism (signals? pipe
+writes? C<Async::Interrupt>?) and then wait until all pending watchers
+have been called (in a while loop because a) spurious wakeups are possible
+and b) skipping inter-thread communication when there are no pending
+watchers is very beneficial):
+
+   static void
+   l_invoke (EV_P)
+   {
+     userdata *u = ev_userdata (EV_A);
+
+     while (ev_pending_count (EV_A))
+       {
+         wake_up_other_thread_in_some_magic_or_not_so_magic_way ();
+         pthread_cond_wait (&u->invoke_cv, &u->lock);
+       }
+   }
+
+Now, whenever the main thread gets told to invoke pending watchers, it
+will grab the lock, call C<ev_invoke_pending> and then signal the loop
+thread to continue:
+
+   static void
+   real_invoke_pending (EV_P)
+   {
+     userdata *u = ev_userdata (EV_A);
+
+     pthread_mutex_lock (&u->lock);
+     ev_invoke_pending (EV_A);
+     pthread_cond_signal (&u->invoke_cv);
+     pthread_mutex_unlock (&u->lock);
+   }
+
+Whenever you want to start/stop a watcher or do other modifications to an
+event loop, you will now have to lock:
+
+   ev_timer timeout_watcher;
+   userdata *u = ev_userdata (EV_A);
+
+   ev_timer_init (&timeout_watcher, timeout_cb, 5.5, 0.);
+
+   pthread_mutex_lock (&u->lock);
+   ev_timer_start (EV_A_ &timeout_watcher);
+   ev_async_send (EV_A_ &u->async_w);
+   pthread_mutex_unlock (&u->lock);
+
+Note that sending the C<ev_async> watcher is required because otherwise
+an event loop currently blocking in the kernel would have no knowledge
+of the newly added timer. Waking up the loop makes it pick up any new
+watchers in the next event loop iteration.
+
=back
@@ -4473,143 +4638,7 @@ watcher callback into the event loop interested in the signal.
=back
-=head4 THREAD LOCKING EXAMPLE
-
-Here is a fictitious example of how to run an event loop in a different
-thread than where callbacks are being invoked and watchers are
-created/added/removed.
-
-For a real-world example, see the C<EV::Loop::Async> perl module,
-which uses exactly this technique (which is suited for many high-level
-languages).
-
-The example uses a pthread mutex to protect the loop data, a condition
-variable to wait for callback invocations, an async watcher to notify the
-event loop thread and an unspecified mechanism to wake up the main thread.
-
-First, you need to associate some data with the event loop:
-
- typedef struct {
- mutex_t lock; /* global loop lock */
- ev_async async_w;
- thread_t tid;
- cond_t invoke_cv;
- } userdata;
-
- void prepare_loop (EV_P)
- {
- // for simplicity, we use a static userdata struct.
- static userdata u;
-
- ev_async_init (&u->async_w, async_cb);
- ev_async_start (EV_A_ &u->async_w);
-
- pthread_mutex_init (&u->lock, 0);
- pthread_cond_init (&u->invoke_cv, 0);
-
- // now associate this with the loop
- ev_set_userdata (EV_A_ u);
- ev_set_invoke_pending_cb (EV_A_ l_invoke);
- ev_set_loop_release_cb (EV_A_ l_release, l_acquire);
-
- // then create the thread running ev_loop
- pthread_create (&u->tid, 0, l_run, EV_A);
- }
-
-The callback for the C<ev_async> watcher does nothing: the watcher is used
-solely to wake up the event loop so it takes notice of any new watchers
-that might have been added:
-
- static void
- async_cb (EV_P_ ev_async *w, int revents)
- {
- // just used for the side effects
- }
-
-The C<l_release> and C<l_acquire> callbacks simply unlock/lock the mutex
-protecting the loop data, respectively.
-
- static void
- l_release (EV_P)
- {
- userdata *u = ev_userdata (EV_A);
- pthread_mutex_unlock (&u->lock);
- }
-
- static void
- l_acquire (EV_P)
- {
- userdata *u = ev_userdata (EV_A);
- pthread_mutex_lock (&u->lock);
- }
-
-The event loop thread first acquires the mutex, and then jumps straight
-into C<ev_run>:
-
- void *
- l_run (void *thr_arg)
- {
- struct ev_loop *loop = (struct ev_loop *)thr_arg;
-
- l_acquire (EV_A);
- pthread_setcanceltype (PTHREAD_CANCEL_ASYNCHRONOUS, 0);
- ev_run (EV_A_ 0);
- l_release (EV_A);
-
- return 0;
- }
-
-Instead of invoking all pending watchers, the C<l_invoke> callback will
-signal the main thread via some unspecified mechanism (signals? pipe
-writes? C<Async::Interrupt>?) and then waits until all pending watchers
-have been called (in a while loop because a) spurious wakeups are possible
-and b) skipping inter-thread-communication when there are no pending
-watchers is very beneficial):
-
- static void
- l_invoke (EV_P)
- {
- userdata *u = ev_userdata (EV_A);
-
- while (ev_pending_count (EV_A))
- {
- wake_up_other_thread_in_some_magic_or_not_so_magic_way ();
- pthread_cond_wait (&u->invoke_cv, &u->lock);
- }
- }
-
-Now, whenever the main thread gets told to invoke pending watchers, it
-will grab the lock, call C<ev_invoke_pending> and then signal the loop
-thread to continue:
-
- static void
- real_invoke_pending (EV_P)
- {
- userdata *u = ev_userdata (EV_A);
-
- pthread_mutex_lock (&u->lock);
- ev_invoke_pending (EV_A);
- pthread_cond_signal (&u->invoke_cv);
- pthread_mutex_unlock (&u->lock);
- }
-
-Whenever you want to start/stop a watcher or do other modifications to an
-event loop, you will now have to lock:
-
- ev_timer timeout_watcher;
- userdata *u = ev_userdata (EV_A);
-
- ev_timer_init (&timeout_watcher, timeout_cb, 5.5, 0.);
-
- pthread_mutex_lock (&u->lock);
- ev_timer_start (EV_A_ &timeout_watcher);
- ev_async_send (EV_A_ &u->async_w);
- pthread_mutex_unlock (&u->lock);
-
-Note that sending the C<ev_async> watcher is required because otherwise
-an event loop currently blocking in the kernel will have no knowledge
-about the newly added timer. By waking up the loop it will pick up any new
-watchers in the next event loop iteration.
+See also L</Thread locking example>.
=head3 COROUTINES