16-bit floating point?
I take it you mean 16 bits per channel, as opposed to claiming that the NV3X family actually only does 16-bit FP instead of the full 128?
The hazards of talking on topics you're not entirely familiar with, I guess. The GeForce FX is actually more accurate than the Radeon 9500/9600/9700/9800: ATI only does 96-bit, rounded up to 128-bit at the end, while NVIDIA offers true 128-bit plus an intermediate 64-bit mode (if developers choose to use it), which has certain performance benefits and is still an improvement over the IQ of DX8 shaders.
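For anyone counting bits: those totals are per pixel across four colour channels (RGBA), so 128-bit is 4 x FP32, ATI's 96-bit is 4 x FP24, and the 64-bit mode is 4 x FP16. Here's a rough Python sketch (mine, not from the thread) of what each mantissa size buys you; FP24 has no native type, so it's crudely faked by truncating a float32 mantissa from 23 bits down to FP24's 16 and ignoring the narrower 7-bit exponent:

```python
import numpy as np

def to_fp24(x):
    """Crude FP24 stand-in: truncate a float32 mantissa to 16 bits."""
    bits = np.array(x, dtype=np.float32).view(np.uint32)
    bits &= np.uint32(0xFFFFFF80)  # zero the low 7 of 23 mantissa bits
    return float(bits.view(np.float32))

value = 1.0 / 3.0  # a repeating fraction no binary float stores exactly
fp32 = float(np.float32(value))  # 23-bit mantissa, ~7 decimal digits
fp24 = to_fp24(value)            # 16-bit mantissa, ~5 decimal digits
fp16 = float(np.float16(value))  # 10-bit mantissa, ~3 decimal digits

for name, v in [("FP32", fp32), ("FP24", fp24), ("FP16", fp16)]:
    print(f"{name}: {v:.10f}  error: {abs(v - value):.2e}")
```

Roughly 7 decimal digits of precision per channel for FP32, 5 for FP24, 3 for FP16, which is why the half-precision mode is the fast-but-lossy option.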
Hmm, I knew I forgot something; I forgot I even posted this. I suppose I should have made that a little clearer.
Granted, you are correct: the GFFX is, on paper and in practical testing, easily as fast as the 9800 (I'd be happy with either card personally; both are a baby step up from a 9700 Pro). As you stated, it does have full support for 128-bit, and that brings an increase in IQ, and in some cases performance.
But with a few "unofficial" drivers (and even the latest official Detonators, 43.45), it has become apparent that the drivers default to FP16, whereas to comply with the DirectX 9 specification a card has to support a minimum of FP24, AFAIK. So at "defaults" the driver would technically be breaking spec.
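To put a number on that FP16-vs-FP24 gap, here's another toy sketch (again mine, not real shader code, with made-up coordinate values): half-float keeps a 10-bit mantissa, so once values climb past 2048 it can't even represent consecutive integers, which is roughly what texel addresses into a large texture are:

```python
import numpy as np

# Round-trip some texture-coordinate-ish values through half precision
# and see how much resolution survives at each magnitude.
for coord in (100.5, 1000.5, 2000.5, 3001.0):
    stored = float(np.float16(coord))
    print(f"coord {coord:7.1f} -> FP16 stores {stored:7.1f}"
          f" (error {abs(stored - coord):.1f})")
```

FP24's 16-bit mantissa pushes that breakdown out to 131072, which is a big part of why the spec draws its minimum line there.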
To the average gamer this would probably mean next to nothing.
Either way you can't go wrong; both of these cards are completely respectable, with plenty of performance to spare for upcoming games.
(Dug up the old thread at FM that some of this info came from, if you guys want to read through the craziness: Here)