Unusual cause of NaN in C++? Do limits approaching zero cause NaN?
While working on a C++ project that involves neural networks, I was hampered by NaN results. After a great deal of tracing (trying to find the origin of the NaN), I realized the source was my sigmoid derivative function, shown below:
double sigder(double n){ return 2*exp(-n) / pow(1 + exp(-n), 2); }
Although the function has a domain of all real numbers, inputs such as -1008.3 caused a result of NaN. According to Mathematica, the correct result should be close to 0 (about 2.522*10^-438). I've averted the issue in the following manner:
double sigder(double n){
    double res = 2*exp(-n) / pow(1 + exp(-n), 2);
    if( isnan(res) ){ return 0; }
    else{ return res; }
}
With this simple assumption the code functions as expected; however, I still don't understand why sigder(<# of large magnitude>) does not return ~0. Could you please tell me what causes NaN in C++ (Xcode IDE) other than dividing by 0 and taking the root of a negative number?
Thanks in advance! I'd like to know why sigder(-1008.3) returns NaN, and how to better/more efficiently trace the source of NaN values.
The denominator in every such case is trying to reach infinity (which means the whole fraction is trying to reach 0). Given the limited range of double, however, this means you are somewhere performing a division by infinity.
The explanation lies in the C++ exp function, which returns +-HUGE_VAL (infinity) when the true result cannot be represented as a double.
That is, when exp(-n) cannot be contained within a double, both the numerator and the denominator become infinite, and infinity divided by infinity yields NaN.
btw if want operate on big numbers can implement class stores numbers eg in string , overload operators.