Problem With Clock() And New Toolchain


I've switched from using the official SDK and am now using the toolchain provided here.

My code worked fine with the official SDK, but with this new (much faster, I might add) toolchain I'm having trouble with clock(). I'm using clock() as part of my score-keeping mechanism, and it is returning the same value at obviously different times. Occasionally it returns a small difference, but it's semi-random and seemingly incorrect. Most of the time when I do get a difference, it's 10000 (i.e. 0.01 seconds), though a couple of times I've seen 0.16 seconds. I don't know why those two numbers pop up, but for the most part I'm seeing a difference of 0.

I've recompiled my code with the official SDK and it works fine. I've been pulling my hair out here and don't think I'm doing anything stupid (this time ;)). The only thing I can imagine is that somehow a wrong header or library is being picked up, but my Makefile is pretty explicit. Does GCC sometimes use environment variables?
 
The C library function clock() returns processor time used. Do you want elapsed time?

Code:
CLOCK(3)				   Linux Programmer's Manual				  CLOCK(3)

NAME
	   clock - Determine processor time

SYNOPSIS
	   #include <time.h>

	   clock_t clock(void);

DESCRIPTION
	   The clock() function returns an approximation of processor time used by
	   the program.

RETURN VALUE
	   The value returned is the CPU time used so far as a clock_t; to get the
	   number  of  seconds  used,  divide by CLOCKS_PER_SEC.  If the processor
	   time used is not available or its  value  cannot  be  represented,  the
	   function returns the value (clock_t)-1.

CONFORMING TO
	   ANSI  C.  POSIX requires that CLOCKS_PER_SEC equals 1000000 independent
	   of the actual resolution.

NOTES
	   The C standard allows for arbitrary values at the start of the program;
	   subtract  the value returned from a call to clock() at the start of the
	   program to get maximum portability.

	   Note  that  the  time  can  wrap  around.   On  a  32bit  system  where
	   CLOCKS_PER_SEC  equals 1000000 this function will return the same value
	   approximately every 72 minutes.

	   On several other implementations, the value returned  by  clock()  also
	   includes  the times of any children whose status has been collected via
	   wait() (or another wait-type call).  Linux does not include  the  times
	   of  waited-for  children in the value returned by clock().  The times()
	   function, which explicitly returns  (separate)  information  about  the
	   caller and its children, may be preferable.

SEE ALSO
	   getrusage(2), times(2)

GNU							   2002-06-14						  CLOCK(3)
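In other words, a process that spends most of its time sleeping accumulates almost no CPU time, so two clock() calls can return nearly the same value even though plenty of wall-clock time has passed. A quick sketch (not from your code) that shows the effect:
Code:
/* clock() counts CPU time, not wall-clock time, so a mostly-sleeping
 * process sees almost no difference between two calls. */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	clock_t start = clock();

	sleep(2);                     /* the process uses almost no CPU here */

	clock_t end = clock();

	/* Prints roughly 0.00 s of CPU time, even though about two
	 * wall-clock seconds have passed. */
	printf("CPU time used: %.2f s\n",
	       (double)(end - start) / CLOCKS_PER_SEC);
	return 0;
}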
 
Ah yes, once again I'm an idiot. Yes, I just want to measure the time between two events in my game. The other functions I looked at only had one-second resolution, so clock() seemed like just the ticket. And for a bit, it was :)

This was working for me because my app was sucking down all the CPU time. Once I switched over to the new toolchain and optimized some things, the app was sleeping most of the time.

I guess I'll use SDL's timing functions instead, though I'm curious what the correct thing to use in a standard C app would be.
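Something like this is what I have in mind with SDL (just a rough, untested sketch): SDL_GetTicks() returns wall-clock milliseconds since SDL was initialised.
Code:
/* Rough sketch: time the gap between two game events with SDL_GetTicks(). */
#include <SDL.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
	if (SDL_Init(SDL_INIT_TIMER) != 0) {
		fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
		return 1;
	}

	Uint32 event_start = SDL_GetTicks();  /* e.g. when the level starts */

	SDL_Delay(1500);                      /* stands in for gameplay */

	Uint32 elapsed_ms = SDL_GetTicks() - event_start;
	printf("Elapsed: %u ms\n", (unsigned)elapsed_ms);

	SDL_Quit();
	return 0;
}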
 
The gettimeofday() function should help:
Code:
GETTIMEOFDAY(2)			Linux Programmer's Manual		   GETTIMEOFDAY(2)

NAME
	   gettimeofday, settimeofday - get / set time

SYNOPSIS
	   #include <sys/time.h>

	   int gettimeofday(struct timeval *tv, struct timezone *tz);
	   int settimeofday(const struct timeval *tv , const struct timezone *tz);

DESCRIPTION
	   The functions gettimeofday() and settimeofday() can  get  and  set  the
	   time  as  well  as a timezone.  The tv argument is a struct timeval (as
	   specified  in <sys/time.h>):

	   struct timeval {
			   time_t		 tv_sec;		/* seconds */
			   suseconds_t	tv_usec;  /* microseconds */
	   };

	   and gives the number of seconds and microseconds since the  Epoch  (see
	   time(2)).  The tz argument is a struct timezone:

	   struct timezone {
			   int  tz_minuteswest; /* minutes W of Greenwich */
			   int  tz_dsttime;	 /* type of dst correction */
	   };

	   The  use  of  the timezone struct is obsolete: the tz_dsttime field has
	   never been used under Linux; it has not been and will not be  supported
	   by  libc or glibc.  Each and every occurrence of this field in the ker-
	   nel source (other than the declaration) is a bug. Thus,  the  following
	   is purely of historic interest.

	   The  field  tz_dsttime  contains  a symbolic constant (values are given
	   below) that indicates in which part of the year Daylight Saving Time is
	   in force. (Note: its value is constant throughout the year: it does not
	   indicate that DST is in force, it just selects an algorithm.)  The day-
	   light saving time algorithms defined are as follows :

		DST_NONE	 /* not on dst */
		DST_USA	  /* USA style dst */
		DST_AUST	 /* Australian style dst */
		DST_WET	  /* Western European dst */
		DST_MET	  /* Middle European dst */
		DST_EET	  /* Eastern European dst */
		DST_CAN	  /* Canada */
		DST_GB	   /* Great Britain and Eire */
		DST_RUM	  /* Rumania */
		DST_TUR	  /* Turkey */
		DST_AUSTALT  /* Australian style with shift in 1986 */

	   Of  course  it turned out that the period in which Daylight Saving Time
	   is in force cannot be given by a simple  algorithm,  one  per  country;
	   indeed, this period is determined by unpredictable political decisions.
	   So this method of representing time zones  has  been  abandoned.  Under
	   Linux, in a call to settimeofday() the tz_dsttime field should be zero.

	   Under Linux there is some peculiar 'warp clock' semantics associated to
	   the  settimeofday()  system call if on the very first call (after boot-
	   ing) that has a non-NULL tz argument, the tv argument is NULL  and  the
	   tz_minuteswest field is non-zero. In such a case it is assumed that the
	   CMOS clock is on local time, and that it has to be incremented by  this
	   amount  to  get UTC system time.  No doubt it is a bad idea to use this
	   feature.

	   The following macros are defined to operate on a struct timeval:

	   #define	   timerisset(tvp)\
			   ((tvp)->tv_sec || (tvp)->tv_usec)
	   #define	   timercmp(tvp, uvp, cmp)\
			   ((tvp)->tv_sec cmp (uvp)->tv_sec ||\
			   (tvp)->tv_sec == (uvp)->tv_sec &&\
			   (tvp)->tv_usec cmp (uvp)->tv_usec)
	   #define	   timerclear(tvp)\
			   ((tvp)->tv_sec = (tvp)->tv_usec = 0)

	   If either tv or tz is null, the corresponding structure is not  set  or
	   returned.

	   Only the superuser may use settimeofday().

RETURN VALUE
	   gettimeofday() and settimeofday() return 0 for success, or -1 for fail-
	   ure (in which case errno is set appropriately).

ERRORS
	   EFAULT One of tv or tz pointed outside the accessible address space.

	   EINVAL Timezone (or something else) is invalid.

	   EPERM  The calling process has insufficient privilege to call  settime-
			  ofday(); under Linux the CAP_SYS_TIME capability is required.

NOTE
	   The prototype for settimeofday() and the defines for timercmp, timeris-
	   set, timerclear, timeradd, timersub are (since glibc2.2.2) only  avail-
	   able  if  _BSD_SOURCE  is defined (either explicitly, or implicitly, by
	   not defining _POSIX_SOURCE or compiling with the -ansi flag).

	   Traditionally, the fields of struct timeval were longs.

CONFORMING TO
	   SVr4, 4.3BSD. POSIX 1003.1-2001 describes gettimeofday() but  not  set-
	   timeofday().

SEE ALSO
	   date(1), adjtimex(2), time(2), ctime(3), ftime(3), capabilities(7)

Linux 2.6.6					   2004-05-27				   GETTIMEOFDAY(2)
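Roughly, timing an interval with it would look something like this (a minimal, untested sketch; the names are just for illustration):
Code:
/* Measure elapsed wall-clock time between two points with gettimeofday(). */
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
	struct timeval start, end;

	gettimeofday(&start, NULL);

	sleep(1);                     /* stands in for the interval being timed */

	gettimeofday(&end, NULL);

	long elapsed_us = (end.tv_sec - start.tv_sec) * 1000000L
	                + (end.tv_usec - start.tv_usec);
	printf("Elapsed: %ld us (%.3f s)\n", elapsed_us, elapsed_us / 1e6);
	return 0;
}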
 