
Thread: How do 64-bit applications differ from a code perspective?

  1. #11
    Join Date
    Aug 2007
    Location
    Novocastria, Australia
    Beans
    751
    Distro
    Ubuntu 9.04 Jaunty Jackalope

    Re: How do 64-bit applications differ from a code perspective?

    Quote Originally Posted by worksofcraft View Post
    some think that if you subtract one pointer from another you get an "int" result...
    Fair enough, I've never seen that sort of thing before.

    Incidentally Java ONLY runs on the Java virtual machine, and I heard a rumor that said Java virtual machine is defined as a 32-bit machine, so it shouldn't really run on 64-bit machines at all and probably will never use half the bits that the hardware actually has.
    Java has some 64-bit datatypes such as double and long, which are implemented as native 64-bit types on 64-bit computers. On 32-bit computers, 64-bit datatypes are typically stored in two adjacent 32-bit words, and extra processing is needed to manipulate them.
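
    Since the thread's code examples are in C, here is the same idea sketched with C99's int64_t rather than Java's long (a minimal illustration, assuming a C99 compiler with <stdint.h>): the type stays 64 bits wide whether the build is 32-bit or 64-bit; on a 32-bit target the compiler simply emits extra instructions to handle it.

    Code:
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int64_t big = INT64_C(1) << 40;   /* a 64-bit value on any target */

        /* sizeof(big) is 8 on both 32-bit and 64-bit builds; only the
           pointer size (and the work per 64-bit operation) changes. */
        printf("int64_t: %zu bytes, pointer: %zu bytes\n",
               sizeof big, sizeof(void *));
        printf("value: %lld\n", (long long)big);
    }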

  2. #12
    Join Date
    Sep 2007
    Location
    Christchurch, New Zealand
    Beans
    1,328
    Distro
    Ubuntu

    Re: How do 64-bit applications differ from a code perspective?

    Quote Originally Posted by johnl View Post
    The C99 standard introduces a set of typedefs:
    • intptr_t is a signed integer type which can hold a pointer value
    • uintptr_t is an unsigned integer type which can hold a pointer value
    • ptrdiff_t is a signed integer type that can hold the result of subtracting two pointers.


    If you do need to convert from a pointer to an integer or vice versa, using these types will keep you from running into 32-bit vs 64-bit compatibility issues.

    Also, use the correct format-specifier in printf functions:

    Code:
    #include <stdio.h>

    int main(void)
    {
        size_t size = 4;
        size_t *foo = &size;

        printf("%u %x\n", size, foo);          /* bad: %u/%x expect unsigned int */
        printf("%zu %p\n", size, (void *)foo); /* good: %zu for size_t, %p for a void * */
    }
    That's useful to know
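
    For what it's worth, here is a minimal sketch of those typedefs in use (assuming a C99 compiler; intptr_t and uintptr_t are technically optional in C99, though common):

    Code:
    #include <inttypes.h>   /* intptr_t, PRIdPTR */
    #include <stddef.h>     /* ptrdiff_t */
    #include <stdio.h>

    int main(void)
    {
        int arr[10];
        int *first = &arr[0];
        int *last  = &arr[9];

        ptrdiff_t gap = last - first;        /* pointer subtraction yields ptrdiff_t, not int */
        intptr_t  raw = (intptr_t)first;     /* an integer wide enough to hold the pointer */

        printf("gap = %td elements\n", gap); /* %td prints a ptrdiff_t */
        printf("pointer as integer = %" PRIdPTR "\n", raw);
    }
    Compiling with gcc -Wall (which enables format checking) will flag mismatched specifiers for these types.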

    Personally I don't think the standards committee people understood what Kernighan and Ritchie intended.

    The way I read their book was that it was all designed to maximize performance. Thus the int data type was intended to represent whatever (minimum) size was needed to manipulate memory addresses.

    The short data type was meant to be whatever size was most efficiently handled by a single-cycle ALU instruction (typically the width of a processor register).

    A char was meant to be the smallest addressable unit, so for instance on some DSP processors it could be 48 bits and identical to the short data type...

    Long was meant to be the longest kind of integer that there were instructions for; e.g. when you multiply two 32-bit registers you get a 64-bit result, but that doesn't mean the processor can natively do 64-bit arithmetic.
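
    To make that widening-multiply point concrete, a minimal sketch (assuming a C99 compiler):

    Code:
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t a = 0xFFFFFFFFu;
        uint32_t b = 0xFFFFFFFFu;

        /* Cast before multiplying so the product is computed in 64 bits;
           a plain a * b would wrap around at 32 bits. */
        uint64_t wide = (uint64_t)a * b;

        printf("wide = %llu\n", (unsigned long long)wide);
    }
    On 32-bit x86 this typically compiles to a single mul that leaves the 64-bit product in a register pair; on a 64-bit target it is one 64-bit multiply.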

    Anyway the standards people have blundered off proliferating endless different integral types, like long longs and wchar_16's and strict enums and size_t, and evidently intptr_t and uintptr_t, so you can choose to have negative memory addresses now too
    Take nothing but photographs. Leave nothing but footprints.
