Chris Croughton wrote:
> k***@wizard.net wrote:
>> Chris Croughton wrote:
>>> k***@wizard.net wrote:
>>>> Why would I want to write complicated code like that, when "%jd"
>>>> exists?
>>> Because on many systems it /doesn't/ exist.
>> If you don't have a conforming C99 compiler,
> /and library/
Sorry, I should have said "implementation" rather than "compiler".
Chris Croughton wrote:
> k***@wizard.net wrote:
>> then the best possible
>> substitute for intmax_t is 'long'; if that isn't big enough, then
>> you'll have to give up on portability and write code that's specific to
>> the platform you're using.
> No, you can write portable conversion code for it. Last time I did that
> it took me under half an hour (generic base (2 to 36) input and output)
If it took you a half hour to write it, then it would require several
hours of testing and writing up of documentation, in order to permit it
to be used as part of our software. The total cost to my employer of my
salary for that amount of time would exceed by several orders of
magnitude the value of the expected savings in CPU time (and I'm not
claiming to be exceptionally well paid).
Chris Croughton wrote:
> k***@wizard.net wrote:
>> However, platforms where only C90 is
>> available, and 'long' isn't big enough, are rare enough in my
>> experience that I don't bother worrying about them. As always, your
>> experience may differ.
> Any platform with gcc the /compiler/ supports intmax_t (gcc should
> install stdint.h if it doesn't exist). However, the /library/ very
> often does not support the 'j' modifier because gcc uses the system C
> library. The same with the rest of the C99 additions to the library
> (the long double versions of the maths functions, for instance). That's
> quite a lot of platforms (and I don't think gcc is alone in that)...
Since the library doesn't support the 'j' modifier, the
compiler+library doesn't constitute a conforming implementation of C99.
It might conform to C90, in which case my previous comments about
'long' apply; the fact that there's a possible typedef for intmax_t
doesn't affect the validity of those comments. Such a system should
have intmax_t as a typedef for 'long', and PRIdMAX should expand to
"ld" (so that "%" PRIdMAX yields "%ld"); I'm not willing to worry
about the exceptional cases where that's wrong. If the implementation
doesn't conform to either standard, it's off-topic for this newsgroup.
...
Chris Croughton wrote:
> k***@wizard.net wrote:
>> No, I'm referring to the vendor that provided a third-party library.
>> I'm referring to the format string that corresponds to that third party
>> library's PGSt_integer typedef, which is almost certainly not "%jd".
>> The printf() family isn't their responsibility. Providing a macro that
>> contains a suitable format string would be a good idea, but it wasn't
>> actually done.
> Well, that's a QoI issue with your vendor.
Yes, but since it's a fairly commonplace type of QoI issue, it also
qualifies as a legitimate reason for needing intmax_t and "%jd".
Chris Croughton wrote:
> k***@wizard.net wrote:
>> If I didn't have a responsibility to the future, it would be easy. We
>> don't use any compiler that supports a type with more than 64 bits, so
>> if I didn't have to worry about upward compatibility I could use
>> int_fast64_t for all such purposes. However, my code is supposed to
>> continue working correctly even when compiled by an implementation of C
>> that does support a 128 bit type, so long as it supports it in a
>> conforming fashion, even if every third-party typedef that has no
>> conflicting constraints on it gets updated to refer to int128_t.
> Code should be designed for the values which are reasonable in the
> problem domain, not whatever types the compiler may produce in the
> future.
The problem domain includes, in this case, use of PGSt_integer to store
ID codes that are assigned by a program that comes with the library, in
accordance with an undocumented algorithm (as users of the library,
we're not supposed to know or care what that algorithm is), which might
(and currently does) use the full range of values available to
PGSt_integer. Source code for that program is available (but keep in
mind that the installation script installs different source code on
different platforms). However, since there's no promise that the
algorithm will remain the same in future versions, no responsible
programmer should bother examining that source code to determine what
the possible range of values is.
Chris Croughton wrote:
> ... Cases where I don't know that the range of a variable won't fit
> into 64 bits are vanishingly rare, ones where it may not fit into 32
> bits in the future are pretty rare (the main one being a time difference
> in seconds). But the point is that unless you implement the conversion
> yourself you still have no guarantee of future-proof code, because the
> compiler may implement intmax_t as 128 bits but the library (which you
> say isn't provided by the same supplier as the compiler) may not support
> it and you'll be stuffed if you cast to intmax_t.
The library is provided as source code (different source code for
different platforms, with lots of conditional compilation). If, after
conditional compilation, the source code for a given platform requires
int128_t, and the compiler we use doesn't support that type, the
library won't even compile. As a user of this library, that's not a
problem I have to worry about.
However, I also happen to be the person responsible for installing this
library; in that capacity, if the compiler I'm using is one that the
library is certified to work with, I would report the failure as a bug
back to the vendor. On the other hand, if the third-party library's
header files typedef PGSt_integer as int128_t and fail to provide a
corresponding format-string macro, I can't label those as bugs,
because neither of those possibilities violates any promises the
vendor has made.