cmrx64 14 hours ago

I’m displeased with this trend of LLM-assisted “research”. The central claims are false, the floating-point examples are wrong and off by a factor of 1e11, and the latency numbers are complete WTF, disconnected from reality (‘cycles’ of what?).

Fixed-point arithmetic with a dynamic scale is presented along the way to motivating floating point in probably every computer architecture class. It’s floating point.

This guy needs to open a book. I recommend Nick Higham’s _Accuracy and Stability of Numerical Algorithms_.

  • vgo96 14 hours ago

    I think you are unfairly discrediting LLMs: Gemini 2.5 Pro catches most of the flaws in the author's article. I think the author just doesn't understand floating point.

    • kragen 14 hours ago

      How do you know if Gemini caught the flaws you didn't notice?

    • cmrx64 14 hours ago

      possibly so. I’m even seeing GPT 4.1-mini ripping it apart when prompted with only the content. DeepSeek (not with thinking) is fooled.

  • GaggiX 14 hours ago

    Reading the repo and his own citations, I wouldn't be surprised if the author suffers from Bipolar Disorder or Schizophrenia; also, quite a dumb LLM, if one was used.

HextenAndy 14 hours ago

Surely that's just normal floating point but missing e.g. normalisation? Floating point is literally an int exponent and an int mantissa. Except real floating point adjusts the exponent to avoid integer overflow in the mantissa - which is where rounding happens.

In DPA the mantissa just overflows (silently in the C implementation) and then what?
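
For reference, the "int exponent plus int mantissa" view of an ordinary double is easy to check with the standard frexp/ldexp functions; a quick sketch of mine, not from TFA:

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    double x = 0.1;
    int e;
    double f = frexp(x, &e);     /* x == f * 2^e with 0.5 <= f < 1 */
    double m = ldexp(f, 53);     /* scale f up to its 53-bit integer mantissa */
    printf("0.1 == %.0f * 2^%d\n", m, e - 53);   /* 7205759403792794 * 2^-56 */
    return 0;
}
```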

  • withinboredom 14 hours ago

    He increased the size of a float from 4 bytes to 9 bytes, so of course it is more accurate, then claims that it is so much better. If he had done the same implementation in 4 bytes and it was more accurate, then ... maybe this would be "revolutionary" (to quote TFA).

vgo96 14 hours ago

The author says that every real number x can be represented as

x = m * 2^p

where m is an integer (the mantissa) and p is an integer (the point position).

Well, this is clearly wrong; take x = 1/3 for example:

1/3 = m * 2^p

implies m = 1 / (3 * 2^p) = 2^(-p) / 3, which is never an integer for any integer p, since 3 does not divide any power of 2.

If the author had read the first two pages of

https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.h...

they could have avoided the embarrassment.
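
To see it concretely, printing extra digits of the nearest double to 1/3 shows you only ever get an approximation of the form m * 2^p; a quick sketch of mine:

```c
#include <stdio.h>

int main(void) {
    /* The stored value is the closest m * 2^p to 1/3, not 1/3 itself. */
    double third = 1.0 / 3.0;
    printf("%.20g\n", third);    /* prints 0.33333333333333331483 */
    return 0;
}
```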

  • SetTheorist 14 hours ago

    Amusingly, even the numbers in the first example are counter-examples to the author's false claim: neither 0.1 nor 0.2 can be represented thusly.
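
    And a two-line check makes that visible; my own sketch, nothing from the article:

    ```c
    #include <stdio.h>

    int main(void) {
        /* Neither 0.1 nor 0.2 has an exact m * 2^p form, so the printed
           doubles diverge from the decimal values after ~17 digits. */
        printf("%.20g\n%.20g\n", 0.1, 0.2);
        return 0;
    }
    ```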

  • cmrx64 14 hours ago

    brilliant reference, thank you!

bradrn 14 hours ago

Er… this may be a stupid question, but how is this actually different to ordinary floating-point arithmetic? Ordinary IEEE-754 floating-point numbers already store the mantissa and exponent in separate fields, don’t they?

cscheid 14 hours ago

I’m dismayed that people are willing to put their names on garbage like this.

If you want the serious version of the idea instead of the LLM diarrhea, go read Jonathan Shewchuk’s robust predicates work: https://people.eecs.berkeley.edu/~jrs/papers/robustr.pdf from 1997.

  • ncruces 13 hours ago

    Thanks, didn't know this one! Have some reading to do.

    For a library that implements just the two-component version of this, commonly known as a double-double, giving a 107-bit mantissa and an 11-bit exponent, see: https://github.com/ncruces/dbldbl
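
    The building block behind both Shewchuk's expansions and double-double arithmetic is an error-free transformation such as Knuth's TwoSum; a rough sketch of the idea (not the dbldbl API):

    ```c
    #include <stdio.h>

    /* Knuth's TwoSum: s is the rounded sum, e the exact rounding error,
       so a + b == s + e holds exactly (round-to-nearest, no overflow). */
    static void two_sum(double a, double b, double *s, double *e) {
        *s = a + b;
        double bb = *s - a;
        *e = (a - (*s - bb)) + (b - bb);
    }

    int main(void) {
        double s, e;
        two_sum(0.1, 0.2, &s, &e);
        printf("s = %.17g\ne = %.17g\n", s, e);  /* e captures what rounding dropped */
        return 0;
    }
    ```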

kragen 14 hours ago

This is incorrect. There are a number of errors, as others have pointed out, but for me the most central one is not that almost all reals are uncomputable numbers, but that the product of two 64-bit integers is 128 bits, as anyone who has done arbitrary-precision rational math has noticed.

I think it's great to experiment with improving fundamental algorithms, but not to make misleading claims about your results.
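
On the 128-bit point, the GCC/Clang __int128 extension shows the full product width directly; a throwaway sketch, not from the repo:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t a = UINT64_MAX, b = UINT64_MAX;         /* worst-case inputs */
    unsigned __int128 p = (unsigned __int128)a * b;  /* needs all 128 bits */
    printf("high: %016llx\nlow:  %016llx\n",
           (unsigned long long)(p >> 64), (unsigned long long)p);
    return 0;
}
```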

nephrite 14 hours ago

They just reimplemented floating point with a bigger mantissa and exponent. Rounding errors will still appear with sufficiently large/small numbers.

  • ncruces 14 hours ago

    Actually, the exponent is smaller. IEEE 754 64-bit binary floats have an 11 bit exponent, 1 sign bit, and 53 bit mantissa (one of the bits of the mantissa is implied, by actually knowing what they're doing, rather than… whatever this is).
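
    You can pull the three fields apart to see that layout; a quick sketch (mine, untested here):

    ```c
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        double x = 0.1;
        uint64_t bits;
        memcpy(&bits, &x, sizeof bits);                  /* reinterpret the 64 bits */
        uint64_t sign = bits >> 63;                      /* 1 sign bit */
        int64_t  exp  = (int64_t)((bits >> 52) & 0x7FF) - 1023;  /* 11 bits, biased */
        uint64_t frac = bits & ((1ULL << 52) - 1);       /* 52 stored bits, 53rd implied */
        printf("sign=%llu exp=%lld frac=0x%013llx\n",
               (unsigned long long)sign, (long long)exp, (unsigned long long)frac);
        return 0;
    }
    ```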

gbanfalvi 14 hours ago

This feels so obvious and simple; how is this not already a standard thing everywhere? Is it because the mantissa and point position don't both fit into a single register?

  • HextenAndy 14 hours ago

    It is a standard thing. It's called floating point.

  • Cthulhu_ 14 hours ago

    It already is; the author is trying to sell a solution to a problem that is known, well understood, and worked around, often with programming language types like `Money` or numerical calculation libraries (see the integer-cents sketch at the end of this comment). The "30 year compound interest inaccuracy" problem was solved 50+ years ago.

    I'm really not sure what the angle of the author and their "organization" is, which was only created this month. The library and idea are cool and all, but it strongly implies the author didn't actually do any research before building a solution, one that probably (but I'm not qualified to say) has some issues they overlooked. See e.g. https://cs.opensource.google/go/go/+/master:src/math/big/flo... for a modern implementation of big, accurate floating-point numbers.

    fake edit: having read that this was AI generated, I feel my time and attention were wasted on this.
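
    The `Money` idea, as a minimal hypothetical sketch (integer cents, so the 0.1 + 0.2 class of error can't occur at all):

    ```c
    #include <inttypes.h>
    #include <stdio.h>

    typedef int64_t cents_t;   /* hypothetical money type: whole cents only */

    int main(void) {
        cents_t a = 10, b = 20;    /* $0.10 and $0.20 */
        cents_t sum = a + b;       /* exactly $0.30, no binary fractions involved */
        printf("$%" PRId64 ".%02" PRId64 "\n", sum / 100, sum % 100);
        return 0;
    }
    ```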

agnishom 14 hours ago

Isn't this how floating point already works?

constantcrying 14 hours ago

There is a fundamental theorem of numerical analysis, due to Kahan, that certain very desirable properties of modeling the real numbers on a computer are mutually incompatible.

Again and again people try to "fix" IEEE floating point without realizing that they are attempting something akin to creating free energy. Whenever you start a project like this, you have to begin by asking yourself which desirable property you are willing to let go of. Not presenting that trade-off makes the whole thing look either dishonest or uninformed.

>Any real number x can be represented as

This statement is just false, and I do not know why you would start out by making such a basic error. The real numbers are uncountable, while pairs of integers are countable, so you cannot represent all real numbers by a pair of integers. This is basic real analysis.
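
In symbols, the counting argument (my paraphrase, not a quote from anyone here):

```latex
% Every value of the form m * 2^p is a dyadic rational, a subset of Q,
% which is countable; the reals are uncountable (Cantor).
\[
  \{\, m \cdot 2^{p} : m, p \in \mathbb{Z} \,\} \subseteq \mathbb{Q}
  \quad\text{(countable)}, \qquad \mathbb{R}\ \text{uncountable},
\]
\[
  \text{so } (m, p) \mapsto m \cdot 2^{p}\ \text{cannot be onto}\ \mathbb{R}.
\]
```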

>The mathematics is elementary. The impact is revolutionary.

???

>Special thanks to everyone who said "that's just how it works" - you motivated me to prove otherwise.

Maybe the people know better than you? It is after all a mathematical theorem.