I am working with large binary files (several gigabytes each) and need to address specific bytes within them, so I store addresses of data in these files as 64-bit integers, specifically the numpy types int64 and uint64. However, I noticed that the unsigned version gives an error when one of the loaded addresses is later passed to my goto() method:
>>> ids1 = np.fromfile('TR_45_3 ch 0 cluster 1.txt', dtype=np.uint64, sep=' ')
Traceback (most recent call last):
  File "<pyshell#55>", line 1, in <module>
  File "xxx\tes_utility.py", line 32, in goto
    self.f.seek(newpos * 2 * self.rec_len)
OverflowError: Python int too large to convert to C long

However, when I use the signed version, numpy.int64, there is no problem:

>>> ids1 = np.fromfile('TR_45_3 ch 0 cluster 1.txt', dtype=np.int64, sep=' ')

What is the difference between these two types that is causing this error?
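For what it's worth, the two types do behave differently when mixed with other integer types. This minimal sketch (which may or may not be related to my error) shows NumPy's dtype promotion: no 64-bit integer dtype can represent every uint64 and every int64 value, so the mixed pair promotes to float64:

```python
import numpy as np

# No 64-bit integer dtype holds every uint64 and every int64 value,
# so NumPy promotes the mixed pair all the way to float64:
print(np.result_type(np.uint64, np.int64))  # float64

# Two signed types promote to the wider signed type as expected:
print(np.result_type(np.int64, np.int32))   # int64
```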
(The text files hold positions of records. In this case the records are stored as 16-bit integers, and each record is 256 data points - hence the multipliers in the f.seek() call. tes4530 is an instance of a class that handles the binary data file, and its goto() method seeks to the byte position at the beginning of the record with the given number.
Also, numpy is loaded using "import numpy as np".)
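As a self-contained illustration of the kind of seek that goto() performs (the buffer contents, record number, and rec_len value here are made up for the sketch), converting the NumPy scalar to a plain Python int before the multiplication keeps the offset an exact, arbitrary-precision integer:

```python
import io
import numpy as np

f = io.BytesIO(bytes(4096))        # stand-in for the binary data file
newpos = np.uint64(4)              # hypothetical record number
rec_len = 256                      # 256 data points per record

# int() converts the NumPy scalar to an arbitrary-precision Python
# int, so the offset arithmetic cannot overflow or become a float:
f.seek(int(newpos) * 2 * rec_len)  # 2 bytes per 16-bit data point
print(f.tell())                    # 2048
```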