Re: How do 64-bit applications differ from a code perspective?
Originally Posted by johnl
The C99 standard introduces a set of typedefs (intptr_t and uintptr_t are formally optional, but present on mainstream platforms):
- intptr_t is a signed integer type that can hold any pointer to void, converted there and back without loss
- uintptr_t is the unsigned counterpart
- ptrdiff_t (from <stddef.h>, and older than C99) is a signed integer type that can hold the result of subtracting two pointers
If you do need to convert between pointers and integers, using these types keeps you clear of the truncation bugs that appear when code assumes a pointer fits in an int or a long.
Also, use the correct format-specifier in printf functions:
Code:
size_t size = 4;
size_t* foo = &size;
printf("%u %x\n", size, foo); /* bad */
printf("%zu %p\n", size, foo); /* good */
That's useful to know.
Personally I don't think the standards committee people understood what Kernighan and Ritchie intended.
The way I read their book, it was all designed to maximize performance. Thus the int data type was intended to be whatever (minimum) size was needed to manipulate memory addresses.
The short data type was meant to be whatever size was most efficiently handled by a single-cycle ALU instruction (typically the width of a processor register).
A char was meant to be the smallest addressable unit, so on some DSP processors, for instance, that could be 48 bits and identical to the short data type...
Long was meant to be the longest kind of integer that there were instructions for: e.g. multiplying two 32-bit registers gives a 64-bit result, but that doesn't mean the processor can natively do 64-bit arithmetic.
Anyway, the standards people have blundered off proliferating endless different integral types: long longs and char16_t's and strict enums and size_t, and evidently intptr_t and uintptr_t, so you can choose to have negative memory addresses now too.
Take nothing but photographs. Leave nothing but footprints.