lu333031
September 23rd, 2014, 11:40 PM
I need accurate timestamps for a data stream (accuracy on the order of a few hundred microseconds). For the most part I get the correct clock value, but from time to time there are delays. The deviation is arbitrary and I cannot see any pattern in it.
Problem seen on Ubuntu 12.04 (32-bit).
No other OS is installed on the system and no other apps are running while running the test.
Problem seen with both gettimeofday() and clock_gettime() calls. Interestingly, when the
timestamp is off in one, it is off in the other as well, leading me to suspect that the kernel is
going off to do something before attending to the clock/time related calls.
The following (compilable) code snippet illustrates the problem (no real data is being read
here, only time is read every 100uS. But the timestamps are skewed from time to time) -
/************************************************** *
* File myts.c
************************************************** */
#include <stdio.h>
#include <time.h>
#include <unistd.h>    /* usleep() */
#include <sys/time.h>

#define NSEC_PER_SEC 1000000000ULL /* nSecs in a second */
#define USEC_PER_SEC 1000000ULL    /* uSecs in a second */

typedef struct timespec timespec_t;
typedef struct timeval  timeval_t;
typedef struct timezone timezone_t;

int main()
{
    timespec_t timeSpec;
    timeval_t  timeVal;
    timezone_t timeZone;
    /* 64-bit accumulators: tv_sec * NSEC_PER_SEC overflows a 32-bit long */
    unsigned long long usTStamp1, usTStamp2, uTStampDiff;
    unsigned long long nsTStamp1, nsTStamp2, nTStampDiff;

    while (1)
    {
        usleep(100);
        if ((gettimeofday(&timeVal, &timeZone) == -1) ||
            (clock_gettime(CLOCK_MONOTONIC, &timeSpec) == -1))
        {
            fprintf(stderr, "\nSys call error (TStamp1)!\n");
            continue;
        }
        /* Save timestamp in uSec */
        usTStamp1 = timeVal.tv_usec +
                    (timeVal.tv_sec * USEC_PER_SEC);
        /* Save timestamp in nSec */
        nsTStamp1 = timeSpec.tv_nsec +
                    (timeSpec.tv_sec * NSEC_PER_SEC);
        /* Sleep 100us between timestamps */
        usleep(100);
        if ((gettimeofday(&timeVal, &timeZone) == -1) ||
            (clock_gettime(CLOCK_MONOTONIC, &timeSpec) == -1))
        {
            fprintf(stderr, "\nSys call error (TStamp2)!\n");
            continue;
        }
        /* Save timestamp in uSec */
        usTStamp2 = timeVal.tv_usec +
                    (timeVal.tv_sec * USEC_PER_SEC);
        /* Save timestamp in nSec */
        nsTStamp2 = timeSpec.tv_nsec +
                    (timeSpec.tv_sec * NSEC_PER_SEC);

        uTStampDiff = usTStamp2 - usTStamp1;
        nTStampDiff = nsTStamp2 - nsTStamp1;
        fprintf(stderr, "%llu %llu\n", uTStampDiff, nTStampDiff);
    } /* while (1) */
    return 0;
}
/************************************************** */
Code compiled and run as follows:
> gcc -o myts myts.c -lrt
> ./myts > myts.txt 2>&1
If you run the program for 4-5 seconds and abort it with Ctrl-C, myts.txt shows two columns of data.
Column 1 is the difference between two consecutive timestamps in microseconds, obtained with gettimeofday().
Column 2 is the difference between two consecutive timestamps in nanoseconds, obtained with clock_gettime().
Consecutive samples are requested 100 uS apart, but the smallest sleep the system seems capable of is about
165-170 uS (which is acceptable for our purposes). From time to time, however, the difference is anywhere from
200-4000 uS instead of ~170 uS, so the data gets timestamped incorrectly. When the deviation occurs, it occurs
for both calls, which rules out a problem with one specific call.
Can anybody explain these arbitrary anomalous timestamps and suggest a way to prevent this?
Running the program with "nice -20" did not help.
Thanks.
Lu