[MITBBS (买买提/未名空间) archive] This page is an excerpt and archive of the corresponding thread; posts less than a week old show at most 50 characters, older posts up to 500. View original thread.
Programming board - How to measure CPU time spent on a code block (with C or C++)?
d*******o
Posts: 5897
1
Hi, I have a C program like this:
...
CODE_BLOCK;
...
I want to know how much CPU time is spent on CODE_BLOCK. Since the process
executing CODE_BLOCK may be preempted during execution, this CPU time may not
be equal to the (wall-clock) time elapsed from the beginning of CODE_BLOCK to
the end of it.
Can anyone tell me how to do this?
Thanks.
t****t
Posts: 6806
2
getrusage

【Quoting d*******o】
: Hi, I have a C program like this:
: ...
: CODE_BLOCK;
: ...
: I want to know how much CPU time is spent on CODE_BLOCK. Since the process
: executing CODE_BLOCK may be preempted during execution, this CPU time may
: not be equal to the (wall-clock) time elapsed from the beginning of
: CODE_BLOCK to the end of it.
: Can anyone tell me how to do this?
: Thanks.

i**********e
Posts: 1145
3
Try this:
It uses the high-resolution performance clock if available (on both
Windows and *nix); otherwise it falls back to the standard-library timer.
/**
 * Timer class definition file.
 *
 * Provides basic timer functionality which calculates the running time of a
 * certain part of your program. Very useful for analyzing algorithm running
 * time. After a Timer object is created, calling the Start() member function
 * starts the timer running. The Stop() member function stops the timer. Once
 * the timer is stopped, Start() must be called again to reset it.
 * GetElapsedSeconds() returns the number of seconds elapsed; calling it while
 * the timer is still running gives the elapsed time up to that point.
 *
 * Note:
 * This Timer class implementation is platform independent. The accuracy of
 * the timer may differ based on the platform/hardware. Both the *NIX and
 * Windows versions provide high resolution, down to one microsecond. For
 * other platforms the timer from the standard library is used instead. It is
 * not as precise, but is supported across all platforms.
 */
#ifndef TIMER_H
#define TIMER_H
#if defined(linux) || defined(__linux) || defined(__linux__) || defined(__unix__) || defined(unix) || defined(__unix)
#define __PLATFORM_UNIX__
#include <sys/time.h>
#define stopwatch timeval
#elif defined(_WIN32) || defined(__WIN32__) || defined(WIN32)
#define __PLATFORM_WINDOWS__
#define NOMINMAX // required to stop windows.h messing up std::min
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <algorithm>
#define stopwatch LARGE_INTEGER
#else
#define __PLATFORM_OTHER__
#include <ctime>
#define stopwatch clock_t
#endif
class Timer
{
public:
    Timer();
    void Start();
    void Stop();
    double GetElapsedSeconds();

private:
    stopwatch begin;
    stopwatch end;
    bool isRunning;

#ifdef __PLATFORM_WINDOWS__
    DWORD mStartTick;
    LONGLONG mLastTime;
    LARGE_INTEGER mFrequency;
    DWORD mTimerMask;
#endif

    void SetBeginTime();
    void SetEndTime();
};
Timer::Timer()
    : isRunning(false)
{
}

void Timer::Start()
{
    isRunning = true;
    SetBeginTime();
}

void Timer::Stop()
{
    isRunning = false;
    SetEndTime();
}
#ifdef __PLATFORM_UNIX__
void Timer::SetBeginTime()
{
    gettimeofday(&begin, NULL);
}

void Timer::SetEndTime()
{
    gettimeofday(&end, NULL);
}

double Timer::GetElapsedSeconds()
{
    if (isRunning)
        SetEndTime();
    return (end.tv_sec - begin.tv_sec) + (end.tv_usec - begin.tv_usec) / 1000000.0;
}
#endif
#ifdef __PLATFORM_WINDOWS__
void Timer::SetBeginTime()
{
    mTimerMask = 1;
    // Get the current process core mask
    DWORD procMask;
    DWORD sysMask;
#if _MSC_VER >= 1400 && defined(_M_X64)
    GetProcessAffinityMask(GetCurrentProcess(), (PDWORD_PTR)&procMask, (PDWORD_PTR)&sysMask);
#else
    GetProcessAffinityMask(GetCurrentProcess(), &procMask, &sysMask);
#endif
    // If procMask is 0, consider there is only one core available
    // (using 0 as procMask will cause an infinite loop below)
    if (procMask == 0)
        procMask = 1;
    // Find the lowest core that this process uses
    while ((mTimerMask & procMask) == 0)
    {
        mTimerMask <<= 1;
    }
    HANDLE thread = GetCurrentThread();
    // Set affinity to the first core
    DWORD oldMask = SetThreadAffinityMask(thread, mTimerMask);
    // Get the constant frequency
    QueryPerformanceFrequency(&mFrequency);
    // Query the timer
    QueryPerformanceCounter(&begin);
    mStartTick = GetTickCount();
    // Reset affinity
    SetThreadAffinityMask(thread, oldMask);
    mLastTime = 0;
}
void Timer::SetEndTime()
{
    HANDLE thread = GetCurrentThread();
    // Set affinity to the first core
    DWORD oldMask = SetThreadAffinityMask(thread, mTimerMask);
    // Query the timer
    QueryPerformanceCounter(&end);
    // Reset affinity
    SetThreadAffinityMask(thread, oldMask);
    LONGLONG newTime = end.QuadPart - begin.QuadPart;

    // get milliseconds to check against GetTickCount
    unsigned long newTicks = (unsigned long)(1000 * newTime / mFrequency.QuadPart);

    // detect and compensate for performance counter leaps
    // (surprisingly common, see Microsoft KB: Q274323)
    unsigned long check = GetTickCount() - mStartTick;
    signed long msecOff = (signed long)(newTicks - check);
    if (msecOff < -100 || msecOff > 100)
    {
        // We must keep the timer running forward :)
        LONGLONG adjust = (std::min)(msecOff * mFrequency.QuadPart / 1000, newTime - mLastTime);
        begin.QuadPart += adjust;
        newTime -= adjust;
    }
    // Record last time for adjust
    mLastTime = newTime;
}
double Timer::GetElapsedSeconds()
{
    if (isRunning)
        SetEndTime();
    return static_cast<double>(mLastTime) / mFrequency.QuadPart;
}
#endif
#ifdef __PLATFORM_OTHER__
void Timer::SetBeginTime()
{
    begin = clock();
}

void Timer::SetEndTime()
{
    end = clock();
}

double Timer::GetElapsedSeconds()
{
    if (isRunning)
        SetEndTime();
    return static_cast<double>(end - begin) / CLOCKS_PER_SEC;
}
#endif
#endif // TIMER_H
l*****o
Posts: 473
4
oprofile.
# opcontrol --reset
# opcontrol --setup --event=CPU_CLK_UNHALTED:6000
# opcontrol --separate=lib
# opcontrol --start
XXXXXXXX
# opcontrol --dump
# opcontrol --stop
# opreport -l XXX -d > a.
a****l
Posts: 8211
5
Depends on what OS you are running on. Which one are you using?

【Quoting d*******o】
: Hi, I have a C program like this:
: ...
: CODE_BLOCK;
: ...
: I want to know how much CPU time is spent on CODE_BLOCK. Since the process
: executing CODE_BLOCK may be preempted during execution, this CPU time may
: not be equal to the (wall-clock) time elapsed from the beginning of
: CODE_BLOCK to the end of it.
: Can anyone tell me how to do this?
: Thanks.
