[galib] Floats vs Doubles and GALib

Sri kskk02 at gmail.com
Fri Nov 23 22:59:26 EST 2007


Cool analysis!!! Is that 2 minutes or 2 seconds extra? No idea why they used
floats. Doubles are what I always use, since memory limitations aren't
really a big issue anyway.

sri

----- Original Message ----- 
From: "Peter Jay Salzman" <p at dirac.org>
To: "GALib Mailing List" <galib at MIT.EDU>
Sent: Friday, November 23, 2007 7:11 PM
Subject: [galib] Floats vs Doubles and GALib


Hi All,

This afternoon I made a second copy of GALib and modified it to use doubles
instead of floats.  I ran some tests to see whether the difference was
noticeable.  It was, by quite a bit.
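
(TYPE in the code below is a precision alias; its definition isn't shown
here, but a minimal sketch of one way to switch it per build, assuming a
hypothetical USE_DOUBLE compile-time flag, would be:)

   // Assumed for illustration only -- not the actual definition used.
   #ifdef USE_DOUBLE            // hypothetical compile-time switch
      typedef double TYPE;
   #else
      typedef float  TYPE;
   #endif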

My objective function and initializer:


   // Maximize exp( -(x-1)^2 - (y-2)^2 - (z-3)^2 ).
   // The maximum value is 1.0, attained at (x, y, z) = (1, 2, 3).
   TYPE objective( GAGenome &g )
   {
      GARealGenome& genome = ( GARealGenome& )g;

      TYPE x = genome.gene(0);
      TYPE y = genome.gene(1);
      TYPE z = genome.gene(2);

      return exp( -pow(x - 1.0, 2) - pow(y - 2.0, 2) - pow(z - 3.0, 2) );
   }

   // Every genome starts at the origin.
   TYPE IC[] = { 0.0, 0.0, 0.0 };

   void initializer( GAGenome &g )
   {
      GARealGenome& genome = ( GARealGenome& )g;
      for( int i = 0; i < 3; ++i )
         genome.gene(i, IC[i]);
   }


I used a GARealGenome with alleles:

   GARealAlleleSetArray alleles;
   alleles.add( 0.0, 1.1, GAAllele::INCLUSIVE, GAAllele::INCLUSIVE);
   alleles.add(-2.0, 4.5, GAAllele::EXCLUSIVE, GAAllele::INCLUSIVE);
   alleles.add(-4.0, 4.0, GAAllele::EXCLUSIVE, GAAllele::INCLUSIVE);
   GARealGenome genome(alleles, objective);
   genome.initializer(initializer);

and a steady-state GA (GALib's GASteadyStateGA).
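
A minimal sketch of the steady-state driver, using standard GALib calls
(the parameter values here are placeholders, not the settings from the
actual runs):

   GASteadyStateGA ga( genome );
   ga.populationSize( 10000 );   // placeholder; matches the largest run below
   ga.nGenerations( 100 );       // placeholder value
   ga.pMutation( 0.01 );         // placeholder value
   ga.pCrossover( 0.90 );        // placeholder value
   ga.evolve();

   // Best individual found over the whole run.
   const GARealGenome& best =
      (const GARealGenome&) ga.statistics().bestIndividual();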



The basic results (relative error) are below.  Each number is the relative
error in the result at a given population size, averaged over 3 trials,
where by relative error I mean (true - numerical) / true * 100, expressed
as a percentage.

   popSize    double (%)    float (%)
     1500        1.14         2.41
     5000        0.45         1.39
    10000        0.02         0.73

In the last 2 trials at N=10000, the "double GA" managed to get the exact
answer to 7 digits.
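
Since the objective peaks at exactly 1.0 at (1, 2, 3), the relative error is
easy to compute from the best score (assuming the error is taken on the
score rather than on the individual genes).  A minimal sketch, using an
illustrative input value rather than a logged one:

   #include <cstdio>

   // Relative error as a percentage: (true - numerical) / true * 100.
   // The true optimum of exp( -(x-1)^2 - (y-2)^2 - (z-3)^2 ) is 1.0.
   double relErrPct( double numerical, double truth = 1.0 )
   {
      return ( truth - numerical ) / truth * 100.0;
   }

   int main()
   {
      std::printf( "%.2f%%\n", relErrPct( 0.9998 ) );   // prints 0.02%
      return 0;
   }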

While running those trials, I recalled reading that double arithmetic can
actually be faster than float arithmetic in C/C++, because unintentional
widening and subsequent narrowing of the datatype can slow down the
processor on Linux and MacOS [1].  So I timed the last two trials at a
population of N=10000; the wall-clock execution times (unloaded machine),
in seconds, were:

   double     float
   17.1921    19.0479
   17.6359    19.0986

Roughly two seconds per run (about a 10% slowdown) is quite a lot when
you're sitting in front of a computer waiting for a result, and it adds up
over many runs.
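
To make the widening/narrowing point concrete, a small illustration (not
taken from the benchmark code):

   // Mixing float variables with double literals forces a float -> double
   // promotion for the arithmetic and a double -> float narrowing on the
   // way back out; an all-double build never pays for those conversions.
   float mixed( float x )
   {
      return x * 3.0 - 1.0;   // 3.0 and 1.0 are doubles: widen, compute, narrow
   }

   double allDouble( double x )
   {
      return x * 3.0 - 1.0;   // stays in double throughout
   }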

So I wanted to ask: why does GALib use floats and not doubles?

I created a detailed Excel file that shows results, calculations, and
timings for the various trials.  If anyone wants to see it, let me know.

I probably can't release the "double GALib" code, since this is a project
for work, even though I spent my day off doing it (I had to modify it
further to make GALib compile as a DLL under MS Windows, which is what I use
at work).  However, I don't think anyone would mind if I gave out the
program I wrote to do the benchmarks.

Thanks,
Pete

ps- If someone could answer my question about unconstrained genes, I'd
greatly appreciate it!


[1] Linux always performs register-to-register calculations in double
precision, while Windows does not.

   http://www.intel.com/support/performancetools/fortran/sb/cs-007783.htm

-- 
GPG Fingerprint: B9F1 6CF3 47C4 7CD8 D33E  70A9 A3B9 1945 67EA 951D
Last night I dreamt of 09-f9-11-02-9d-74-e3-5b-d8-41-56-c5-63-56-88-c0

"A mathematician is a machine for converting coffee    p at dirac.org
 into theorems."     -- Paul Erdös                     http://www.dirac.org
_______________________________________________
galib mailing list
galib at mit.edu
http://mailman.mit.edu/mailman/listinfo/galib 



