Evaluating polynomials - floating point or optimization issue?


From Numerical Recipes:

We assume that you know enough never to evaluate a polynomial this way: p=c[0]+c[1]*x+c[2]*x*x+c[3]*x*x*x+c[4]*x*x*x*x; ...

I obviously don't know enough...

Is this an optimization issue or a floating-point arithmetic issue, and why?

You can compute p=c[0]+c[1]*x+c[2]*x*x+c[3]*x*x*x+c[4]*x*x*x*x; as:

p=c[0]+(c[1]+(c[2]+(c[3]+c[4]*x)*x)*x)*x; 

There are fewer multiplications in the second form. The second form is called "Horner's scheme".
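
For a polynomial of arbitrary degree, the nested form becomes a simple loop. A minimal C sketch, assuming the same layout as the snippet above (c[0] is the constant term, c[n] the leading coefficient); the function name is mine:

    #include <stddef.h>

    /* Evaluate c[0] + c[1]*x + ... + c[n]*x^n by Horner's scheme;
       n is the degree, so c holds n+1 coefficients. */
    double horner(const double c[], size_t n, double x)
    {
        double p = c[n];              /* start with the leading coefficient */
        for (size_t i = n; i-- > 0; ) /* work down to c[0] */
            p = p * x + c[i];         /* one multiply and one add per term */
        return p;
    }

Each step costs one multiplication and one addition, instead of the growing powers of x in the expanded form.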

So, again: is this an optimization issue or a floating-point arithmetic issue, and why?

It is an optimization issue. However, modern processors have a floating-point operation that does a multiplication and an addition in a single instruction, without intermediate rounding of the multiplication; while the benefit programmers see in it is still optimization, it also means the result is more accurate.

Horner's form is well adapted to computation with the fused multiply-add instruction.
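
As a sketch of what that looks like in source, assuming C99's fma() from <math.h>, which computes a*b + c with a single rounding (the function name is mine, and whether this compiles to the hardware FMA instruction depends on the target and compiler flags):

    #include <math.h>   /* fma(a, b, c) = a*b + c with a single rounding */

    /* The degree-4 example in Horner form: one fused multiply-add per step. */
    double poly4_fma(const double c[5], double x)
    {
        double p = c[4];
        p = fma(p, x, c[3]);
        p = fma(p, x, c[2]);
        p = fma(p, x, c[1]);
        p = fma(p, x, c[0]);
        return p;
    }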


Finally, I should point out for the sake of completeness that modern processors can be even more efficient if the polynomial is presented to them in a form with more parallelism. See for instance Estrin's scheme, or this blog post for a down-to-earth explanation. The book is not alluding to a requirement of knowing Estrin's scheme; it is alluding to a requirement of knowing Horner's scheme.
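
For illustration, a rough sketch of Estrin's scheme for the same degree-4 polynomial (the function name is mine): the two coefficient pairs and the square x*x do not depend on each other, so a superscalar core can compute them at the same time.

    /* Degree-4 example of Estrin's scheme:
       p(x) = (c0 + c1*x) + ((c2 + c3*x) + c4*x^2) * x^2 */
    double poly4_estrin(const double c[5], double x)
    {
        double x2 = x * x;            /* independent of the two pairs below */
        double lo = c[0] + c[1] * x;
        double hi = c[2] + c[3] * x;
        return lo + (hi + c[4] * x2) * x2;
    }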

