
The terms megaflop, gigaflop, teraflop, etc. refer to factors of 1000, not 1024. Just as with disk drive sizes, there is some confusion on this.
Why do you need other ways to measure performance? What's wrong with just working out the FLOPS count as your boss asked you to? ;)

I'd just like to add a couple of finer points:

Division is special. Since most processors can do an addition, comparison, or multiplication in a single cycle, those are all counted as one flop, but a division always takes longer. How much longer depends on the processor, but there's sort of a de facto standard in the HPC community to count one division as 4 flops.

If a processor has a fused multiply-add instruction that does a multiplication and an addition in a single instruction - generally A += B * C - that counts as 2 operations.

Always be careful in distinguishing between single-precision flops and double-precision flops. A processor that is capable of so many single-precision gigaflops may only be capable of a small fraction of that many double-precision gigaflops. The AMD Athlon and Phenom processors can generally do half as many double-precision flops as single precision, and the ATI Firestream processors can generally do only 1/5th as many. If someone is trying to sell you a processor or a software package and they just quote flops without saying which, you should call them on it.
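As a rough illustration of those counting conventions, here is a hedged sketch in C++; the kernel and the per-operation weights are chosen purely for illustration and are not part of any standard API:

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Illustrative kernel: y[i] = (a * x[i] + y[i]) / (x[i] + 1.0)
    // Counted with the conventions above:
    //   a * x[i] + y[i]  -> a fused multiply-add, 2 flops
    //   x[i] + 1.0       -> 1 flop
    //   the division     -> 4 flops (de facto HPC convention)
    // Total: 7 flops per element.
    void kernel(double a, const std::vector<double>& x, std::vector<double>& y) {
        for (std::size_t i = 0; i < x.size(); ++i) {
            y[i] = (a * x[i] + y[i]) / (x[i] + 1.0);
        }
    }

    int main() {
        const std::size_t n = 1000000;
        std::vector<double> x(n, 2.0), y(n, 1.0);
        kernel(3.0, x, y);

        const double flops_per_element = 2.0 + 1.0 + 4.0;  // FMA + add + div
        std::printf("counted work: %.1f Mflop\n", flops_per_element * n / 1e6);
        return 0;
    }

Whether the compiler actually emits a fused multiply-add depends on the target architecture and compiler flags, so treat a count like this as bookkeeping for your algorithm rather than a statement about the generated machine code.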
It's a pretty decent measure of performance, as long as you understand exactly what it measures.

FLOPS is, as the name implies, FLoating point OPerations per Second; exactly what constitutes a FLOP might vary by CPU. (Some CPUs can perform an addition and a multiplication as one operation, others can't, for example.) That means that as a performance measure it is fairly close to the hardware, which means that 1) you have to know your hardware to compute the ideal FLOPS on the given architecture, and 2) you have to know your algorithm and implementation to figure out how many floating point ops it actually consists of.

In any case, it's a useful tool for examining how well you utilize the CPU. If you know the CPU's theoretical peak performance in FLOPS, you can work out how efficiently you use the CPU's floating point units, which are often among the hardest parts to utilize efficiently. A program which runs at 30% of the FLOPS the CPU is capable of has room for optimization; one which runs at 70% is probably not going to get much more efficient unless you change the basic algorithm. For math-heavy algorithms like yours, that is pretty much the standard way to measure performance. You could simply measure how long a program takes to run, but that varies wildly depending on the CPU. If your program has a 50% CPU utilization (relative to the peak FLOPS count), that is a somewhat more constant value (it'll still vary between radically different CPU architectures, but it's a lot more consistent than execution time).

Knowing that "my CPU is capable of X GFLOPS, and I'm only actually achieving a throughput of, say, 20% of that" is very valuable information in high-performance software. It means that something other than the floating point ops is holding you back and preventing the FP units from working efficiently, and since the floating point ops constitute the bulk of the work, that means your software has a problem.

It's easy to measure "my program runs in X minutes", and if you feel that is unacceptable then sure, you can go "I wonder if I can chop 30% off that", but you don't know if that is possible unless you work out exactly how much work is being done and exactly what the CPU is capable of at peak. How much time do you want to spend optimizing this if you don't even know whether the CPU is fundamentally capable of running any more instructions per second?

It's very easy to prevent the CPU's FP unit from being utilized efficiently, by having too many dependencies between FP ops, or by having too many branches or similar preventing efficient scheduling. If that is what is holding your implementation back, you need to know that. You need to know that "I'm not getting the FP throughput that should be possible, so clearly other parts of my code are preventing FP instructions from being available when the CPU is ready to issue one".
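As a concrete sketch of that kind of bookkeeping, the fragment below times a simple multiply-add loop, converts the counted work into an achieved GFLOPS figure, and compares it against an assumed theoretical peak; the peak value, the array size, and the kernel are all placeholders you would replace with your own hardware's numbers and your own algorithm's flop count:

    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    int main() {
        // Assumed theoretical peak for the machine, e.g. cores * clock *
        // FP operations per cycle. The 50 GFLOPS here is purely illustrative.
        const double peak_gflops = 50.0;

        const std::size_t n = 50000000;
        std::vector<double> a(n, 1.5), b(n, 2.5), c(n, 0.0);

        const auto t0 = std::chrono::steady_clock::now();
        for (std::size_t i = 0; i < n; ++i) {
            c[i] = a[i] * b[i] + c[i];  // one multiply + one add = 2 flops
        }
        const auto t1 = std::chrono::steady_clock::now();
        const double seconds = std::chrono::duration<double>(t1 - t0).count();

        const double achieved = 2.0 * n / seconds / 1e9;
        std::printf("achieved %.2f GFLOPS, %.1f%% of the assumed %.1f GFLOPS peak\n",
                    achieved, 100.0 * achieved / peak_gflops, peak_gflops);
        return 0;
    }

A streaming kernel like this will usually be limited by memory bandwidth rather than by the FP units, so the utilization it reports is likely to sit well below peak; that is precisely the "something other than the floating point ops is holding you back" situation described above.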
