Recently there were some comments having to do with implementation
limits in a discussion in comp.lang.c. These comments raised some
questions in my mind, which I would like to put to the group for any
reactions anyone might have. Here is the background, ie, a
few relevant sentences taken from the aforementioned clc discussion.
(I don't have an attribution line handy, and in any case I'm not
responding to these statements, just giving them as background.)

    However, since the standard makes no distinction between the
    heap and the stack, and because the behavior is undefined if you
    exceed an implementation's limits, including memory limits,
    there's literally nothing that an implementation could do with
    such code that would constitute an incorrect translation of it.

    However, what permits that optimization is the fact that the
    behavior is undefined if an implementation's limits (such as the
    limit on the total amount of memory available) are exceeded.
    The standard imposes no requirements on the implementation with
    regards to how much memory any particular program may use, so a
    fully conforming implementation is allowed to translate ANY
    program in a way that causes it to exceed the implementation's
    memory limits. This would give the implementation extremely low
    quality, but would not render it non-conforming.

At first blush these assertions seem innocuous enough, or maybe I
should say reasonable. But are they right? The first question,
although the less important of the two, is this: is exceeding an
implementation limit automatically undefined behavior? More
precisely, may we assume that exceeding any implementation limit
is undefined behavior unless the Standard includes an explicit
statement that excuses it from being undefined (eg, defines it,
or says it is unspecified, or implementation-defined, etc)? My
position is that this question does not have a clear answer. In
support of this position, let me draw your attention to 6.4.2.1,
paragraphs 5 and 6 (in N1570). Because of the heading, these
paragraphs clearly are concerned with an implementation limit.
Yet there is an explicit statement about undefined behavior if
the limit is exceeded. There would be no need for that statement
if exceeding an implementation limit were automatically undefined
behavior. Using a "the exception proves the rule" inference, we
may conclude that exceeding an implementation limit is not by
default undefined behavior. Any reactions?
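
For concreteness, here is the kind of case I take 6.4.2.1 p5 and p6
to be describing. (The example is mine, not something taken from the
clc thread.) The two external names below share a prefix that runs
well past the 31st character and differ only in their final
character. On a hypothetical implementation that makes only the
minimum 31 initial characters of an external name significant, the
two identifiers differ only in nonsignificant characters, and p6 says
the behavior is then undefined; on typical implementations many more
characters are significant and the program simply prints "1 2".

#include <stdio.h>

/* These two external names agree well past their 31st character and
 * differ only in the final one.  An implementation that treats only
 * the first 31 characters of an external name as significant (the
 * minimum allowed) would see identifiers differing only in
 * nonsignificant characters, which 6.4.2.1 p6 makes undefined. */
int this_shared_prefix_is_well_over_31_characters_long_x = 1;
int this_shared_prefix_is_well_over_31_characters_long_y = 2;

int main(void)
{
    printf("%d %d\n",
           this_shared_prefix_is_well_over_31_characters_long_x,
           this_shared_prefix_is_well_over_31_characters_long_y);
    return 0;
}

Note that no diagnostic is required in the undefined case; once the
limit is exceeded the Standard simply stops imposing requirements,
which is exactly the kind of explicit statement my first question
turns on.
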
The second question is more important, and also (in my view) more
profound. Does the circumstance of running out of memory even
qualify as exceeding an implementation limit? Or more starkly,
does the amount of memory available count as "an implementation
limit"? The answer hinges on what counts as "the implementation."
My position is that factors like how much memory is available are
not part of "the implementation", and so running out of memory is
not "exceeding an implementation limit". My reasoning goes as
follows. If how much memory is available counts as part of "the
implementation", that potentially pulls in an enormous range of
software and hardware. For example, it may depend on how much swap
space is available, which in turn depends on what other programs
are running on the computer at the time. It may depend on whether
certain disk blocks have gone bad, which depends on the particular
drive and also various external events, eg, power glitches from
whatever source is used to supply power. The power surges may even
depend on extra-terrestrial influences, eg, cosmic rays or sunspot
activity. All of these things potentially influence how much
memory is available for a program to use. It's absurd to say all
of these things are included in "the implementation". A line has
to be drawn somewhere. Now the question is where to draw the line.
I believe the Standard supplies a clear answer to this question, in
the last point of section 1, paragraph 2. That says:

    [This International Standard does not specify:] all minimal
    requirements of a data-processing system that is capable of
    supporting a conforming implementation.

All of a system's particulars - the physical computer, how much
memory it has, what operating system it runs, what other programs
are running or may start running, all of that stuff - fall under
the heading of "a data-processing system [that supports] a
conforming implementation", and are not part of the implemenation
itself. Running out of memory is not "exceeding an implementation
limit"; it just means program execution failed for some reason.
As far as the implementation is concerned, that happening is no
different than a power failure or memory being zapped by a cosmic
ray. Surely no one thinks cosmic rays are meant to be part of
"a C implementation".

Let me mention also another result that bears on this issue. It
isn't too hard to write a strictly conforming program that will
use more memory than is available on any computer on earth. If
running that program exhausts the available memory, does it have
undefined behavior? I believe it does not. The definition of
undefined behavior is given in section 3.4.3 (again in N1570):

    behavior, upon use of a nonportable or erroneous program
    construct or of erroneous data, for which this International
    Standard imposes no requirements

By definition, a strictly conforming program does not make use of
any nonportable or erroneous program construct or erroneous data.
A strictly conforming program is obliged not to exceed any
/minimum/ implementation limit, but these limits are listed in the
Standard, in section 5.2.4.1 (and perhaps a few other places, but
the key thing is all of them are known), so they can be observed.
Hence a strictly conforming program cannot transgress into
undefined behavior. It follows that the circumstance of running
out of memory is not "undefined behavior" as the Standard uses the
term, but just a failed program execution, an eventuality that
the Standard explicitly chooses not to address.
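
To make the "more memory than any computer on earth" claim concrete,
here is a sketch of the sort of program I have in mind (again my own
illustration, not something from the clc thread). It uses only
features the Standard defines, its output does not depend on any
unspecified, undefined, or implementation-defined behavior, and it
stays well inside the minimum implementation limits, yet under the
abstract machine an execution of it has on the order of 10**21 bytes
of automatic storage live at the same time.

#include <stdio.h>

/* Each activation of deep() owns a 1000-byte array with automatic
 * storage duration, and every activation from n down to 0 is live at
 * the same time, so deep(N) nominally needs about 1000 * N bytes of
 * storage under the abstract machine.  (A real compiler may of
 * course optimize the array away; the point is what the program, as
 * written, asks for.) */
static unsigned long long deep(unsigned long long n)
{
    char pad[1000];
    pad[0] = (char)(n & 1);     /* value 0 or 1, fits in any char */
    if (n == 0)
        return (unsigned long long)pad[0];
    return deep(n - 1) + (unsigned long long)pad[0];
}

int main(void)
{
    /* 10**18 activations times 1000 bytes apiece.  If the execution
     * ran to completion, the output would be 500000000000000000. */
    printf("%llu\n", deep(1000000000000000000ULL));
    return 0;
}

The pad array and the n & 1 arithmetic are there only so that each
activation's storage is plausibly in use; nothing in the argument
depends on those details.
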
I welcome any comments or reactions to the above.