[R6RS] Changing the transcoding mid-stream

William D Clinger will at ccs.neu.edu
Wed Aug 23 12:51:45 EDT 2006

Kent quoting me:
> > No, I don't think so.  The lookahead-u8 procedure is
> > supposed to block until the next byte is available or
> > an apparent end of the input is seen.  Having no data
> > ready is not the same as "an apparent end of the input".
> > Implementations are free to represent an apparent end of
> > the input however they like, but they have to buffer it
> > in order to satisfy the specifications of lookahead-u8
> > and lookahead-char, even if the buffer-mode is "none".
> I think it is more useful for these procedures to return EOF only if the
> file is still at EOF.  I can imagine implementing a nonblocking "tail -f"
> by repeatedly checking for eof with lookahead-u8.  It's kind of silly to
> add some mechanism to the port to handle this edge case only to force
> programmers to have to call get-u8 after lookahead-u8 to consume the eof
> on ports that may not be at eof permanently.
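
[For concreteness, the nonblocking "tail -f" Kent imagines might
be sketched like this, assuming lookahead-u8 returns the eof object
whenever the file is *currently* at end of file rather than latching
a permanent eof; sleep-ms is a hypothetical pause procedure, not
part of R6RS:

```scheme
;; Sketch only: poll a binary input port, consuming bytes as they
;; appear and pausing when the file is (for now) at end of file.
(define (tail-f port consume-byte)
  (let loop ()
    (let ((b (lookahead-u8 port)))
      (if (eof-object? b)
          (sleep-ms 100)              ; no data yet; poll again later
          (consume-byte (get-u8 port))))
    (loop)))
```

This only works if lookahead-u8 does not buffer the eof, which is
exactly the point at issue below.]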

It sounds to me as though you are trying to achieve
the effect of non-blocking input (e.g. char-ready?)
by exploiting an OS-specific behavior.

That might be acceptable as an OS-specific hack, but
I don't see the argument for making it a semantics
that all implementations must support, regardless of
the OS.  If you think non-blocking I/O should be one
of the requirements for this I/O system (and it hasn't
been; I think it has been an explicit non-requirement),
then we should design the non-blocking operations from
scratch instead of trying to synthesize them from the
blocking operations.

> Here's a related question.  I presume that lookahead-char will raise an
> exception if end-of-file is reached after some but before enough
> bytes have been read to make a whole character.  If this exception is
> caught, the application continues, the underlying file is extended
> independently with enough bytes to fill out the character, and get-u8 is
> called again, must get-char also fail with the same exception?
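
[The scenario in the question might be sketched as follows, under
the assumption that a file ending in the middle of a multi-byte
UTF-8 sequence causes lookahead-char to raise an &i/o-decoding
condition; whether a later read can succeed once the file has been
extended is the open question:

```scheme
;; Sketch only: catch a decoding error from lookahead-char on a
;; textual port whose underlying bytes end mid-character.
(define (try-read-char port)
  (guard (exn ((i/o-decoding-error? exn) 'partial-character))
    (lookahead-char port)))
```
]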

So far as I know, we have not yet specified the
semantics of I/O exceptions.  The justification
for that, I believe, is a presumption that
the R6RS will not require any of the I/O exceptions
to be continuable.  If that presumption is correct,
then the semantics of continuing from an I/O
exception will be implementation-dependent.

In any case, the situation you described sounds like
just one of many race conditions that can arise when
someone is reading from a file that is being written
by a concurrent process.  I don't think it is our
job to resolve all such race conditions; I don't even
think it is our job to resolve race conditions that
could arise from multiple threads within a single
Scheme process.

I wonder whether the "exclusive" file option might
be relevant here.  SRFI 79 does not seem to describe
any semantics for "exclusive", so I don't know whether
it could be used to prevent the kind of race condition
you described.
