l********r posts: 175 | 1 I wrote a Fortran program to calculate the average and standard deviation of
a large number of data points. The instantaneous results were always positive,
but the last several sets of average values came out negative. I think the
running sum of the data hit some limit in the machine, so it started showing
negative results.
I declared the data as double precision. Does anyone know how to
solve this problem? Thanks a lot. |
f*******y posts: 988 | 2 Scale the data first, if precision isn't a concern.
|
l***8 发帖数: 149 | 3 I bet it is not the fault of "double precision" data. The "double precision"
data are floating point. Their range is extremely wide (IEEE double can be
up to 1e307) and it is not possible to overflow from positive to negative (
you will only get a +NaN).
I think you're seeing negative numbers because the "count" is an integer and
FORTRAN calculates the average as (double)"sum" / (int)"count". If you have
more than 2 billion data entries (or 4 billion if the "count" is unsigned),
you'll see negat |