[Gllug] Basic numerical precision question
'lesleyb'
lesleyb at herlug.org.uk
Sun Sep 26 01:29:06 UTC 2010
On Sun, Sep 26, 2010 at 12:51:54AM +0100, Sanatan Rai wrote:
> On 26 September 2010 00:40, lesleyb <lesleyb at herlug.org.uk> wrote:
> > On the basis that machine epsilon is the smallest positive number
> > for which 1 + epsilon differs from 1, and that the absolute value
> > of your result is larger than epsilon, your problem is where exactly?
>
> Sure it can be larger, but I don't expect it to be 1000 times larger.
> I expect the difference to be of the order of the epsilon, ie of
> the order of 1e-16.
Just to reassure us, my results are:
epsilon = 2.220446049e-16
sum2 = 2540.43
avgSum2 = 846.81
avg*avg = 846.81
avgSum2 - avg*avg = -2.273736754e-13
so the difference I see is of a similar order of magnitude to yours,
i.e. roughly a thousand times machine epsilon.
The value 29.1 has no exact binary representation:
the machine value is 29.1 plus a small representation error.
That error must be propagated differently in the two calculations.
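The representation error is easy to see directly; a quick check (an
illustrative Python sketch, not from the original program):

```python
# Decimal(float) shows the exact value the double actually stores.
from decimal import Decimal

print(Decimal(29.1))  # 29.10000000000000142108547152020... -- not exactly 29.1
print(Decimal(2.5))   # 2.5 -- exactly representable in binary
```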
Changing the values to something that can be represented exactly in
binary, e.g. val={2.5,2.5,2.5}, gives these results on my machine:
epsilon = 2.220446049e-16
sum2 = 18.75
avgSum2 = 6.25
avg*avg = 6.25
avgSum2 - avg*avg = 0
demonstrating that, with exactly representable values, there is no
error to propagate.
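Both runs can be reproduced with a short sketch (Python here for
brevity; the data are assumed to be three copies of each value, which
is consistent with sum2 = 2540.43 = 3 * 846.81 above):

```python
# Compute the average of squares minus the square of the average,
# the two quantities compared in the thread.
def avg_sq_minus_sq_avg(vals):
    n = len(vals)
    total = sum(vals)                 # running sum of the values
    sum2 = sum(v * v for v in vals)   # running sum of the squares
    avg = total / n
    avg_sum2 = sum2 / n
    return avg_sum2 - avg * avg

# 29.1 is not exactly representable, so a tiny residual typically
# survives (the run above showed about -2.27e-13 on one machine).
print(avg_sq_minus_sq_avg([29.1, 29.1, 29.1]))

# 2.5 is exactly representable, so every step is exact.
print(avg_sq_minus_sq_avg([2.5, 2.5, 2.5]))  # 0.0
```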
>
> > But I am still suspicious -
> > (a + b) * (a + b) = a^2 + b^2 +2ab > a^2+b^2 when a,b > 0.
> >
> > sum2 is equivalent to a^2 + b^2
> > sum is equivalent to (a+b)*(a+b)
> >
> > Thus sum > sum2 always.
> >
>
> Not quite, you are missing the averaging.
> The average of squares >= the square of the average always.
>
I'll work out the proof for that another day.
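For the record, the inequality follows in one line from the variance
being non-negative: avg(x^2) - avg(x)^2 = avg((x - avg(x))^2) >= 0,
a mean of squares. A quick numerical check (an illustrative Python
sketch with made-up data):

```python
# Check that avg(x^2) - avg(x)^2 equals avg((x - avg(x))^2), i.e.
# the variance, and is therefore non-negative.
import random

random.seed(1)
xs = [random.uniform(-100.0, 100.0) for _ in range(1000)]

n = len(xs)
avg = sum(xs) / n
avg_sq = sum(x * x for x in xs) / n
var = sum((x - avg) ** 2 for x in xs) / n

print(avg_sq >= avg * avg)              # True: variance is non-negative
print(abs((avg_sq - avg * avg) - var))  # only a small rounding residual
```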
Regards
L.
--
Gllug mailing list - Gllug at gllug.org.uk
http://lists.gllug.org.uk/mailman/listinfo/gllug