NitpickLawyer a day ago

Having the ability to throw math-heavy ML papers at the assistants and get simplified explanations / pseudocode back is absolutely amazing, as someone who's forgotten most of what I learned in uni 25+ years ago and never really used since.

  • tyre 6 hours ago

    This is where LLMs shine for learning, imo: throwing a paper into Claude, getting an overview, then being able to ask questions.

    Especially for fields that I didn't study at the Bachelor's or Master's level, like biology. Engaging with deeper research alongside a knowledgeable tutor-assistant has let me go further than I otherwise could.

    • paulryanrogers 3 hours ago

      Isn't there a risk that you're engaging with an inaccurate summary? At some point, inaccurate information is worse than no information.

      Perhaps in low-stakes situations it could at least guarantee some entertainment value. Though I worry that folks will get into high-stakes situations without the tools to distinguish facts from smoothly worded slop.

      • paladin314159 3 hours ago

        I've been doing this a fair amount recently, and the way I manage it is: first, give the LLM the PDF and ask it to summarize and provide high-level reading points. Then read the paper with that context to verify details, and while doing so, ask the LLM follow-up questions (very helpful for topics I'm less familiar with). Typically, everything is either directly in the original paper or verifiable on the internet, so if something feels off I'll dig into it. Over the course of ~20 papers, I've run into one or two erroneous statements made by the LLM.

        To your point, it would be easy to accidentally accept things as true (especially the more subjective "why" questions), but the hit rate is good enough that I'm still getting tons of value from this approach. As for mistakes, it's honestly not that different from learning something wrong from a friend or a teacher, which, frankly, happens all the time. So it pretty much comes down to the individual's skepticism and desire for deep understanding, which will usually reveal such falsehoods.

      • anon291 3 hours ago

        There is, but just ask it to cite the foundational material. A huge issue with reading papers in fields you don't know is that you lack the prerequisite knowledge, and without a professor in that field it can be difficult to build it. ChatGPT is a huge productivity boost. Just ask it to cite references and read those.

    • fragmede 4 hours ago

      I'm not sure of the exact dollar value of feeling safe enough to ask really stupid questions I should already know the answers to, ones I'd be embarrassed to have anyone see me ask Claude, but it's more than I'm paying them. Maybe that's the enshittification play: an extra $20/month if you don't want it to sound judgey about your shit.

  • codemac 2 hours ago

    Math notation is high-context, so it's great to just ask LLMs to print out the low-context version in something like Lisp, where I can read and decompose it quickly.
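
    For example, a rough sketch of the kind of output I mean (Scheme, with iota from SRFI-1):

        ;; \sum_{i=1}^{n} (x_i)^2, with x a function from index to value
        (define (sum-of-squares x n)
          (apply + (map (lambda (i) (expt (x i) 2))
                        (iota n 1))))

    Every binder and subscript becomes an explicit form you can take apart piece by piece.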

adidoit 8 hours ago

I hope we continue to see gains for scientific professionals and companies doing research.

Even imperfect assistants increase leverage.

kregasaurusrex a day ago

'Vibe formalizing' is a logical extension of 'vibe engineering' implemented by 'vibe coding'. Sometimes I have trouble getting the individual puzzle pieces of a problem to fall into place; a hypothetical 'Move 37 As A Service' unifying informal methods with mathematical rigor deserves to be explored!

  • anon291 3 hours ago

    I had put some polyhedral compilation papers on the back burner. I had read all the material, but a few key questions still made it impossible for me to implement. In particular, I was looking at Barvinok's counting algorithm and did not understand why you needed to expand the polynomials in a pointed cone. However, ChatGPT correctly led me through the reasoning, sketched below. Could it have made a mistake? Of course. And it did. However, since my confusion meant that I was also wrong, bouncing the idea back and forth was really useful. Plus, the AI bots are good at zeroing in on your particular points of confusion.
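
    For context, the step that unblocked me, as I now understand it (my paraphrase of the standard setup, so take it with a grain of salt): for a pointed simplicial cone K generated by v_1, ..., v_d, with half-open fundamental parallelepiped \Pi, the lattice-point generating function is the rational function

        \sum_{m \in K \cap \mathbb{Z}^n} x^m
          = \frac{\sum_{u \in \Pi \cap \mathbb{Z}^n} x^u}{\prod_{i=1}^{d} (1 - x^{v_i})}

    and pointedness is exactly what makes the series converge on some open region, so that the right-hand side is well-defined at all. Barvinok's algorithm then rewrites the cones of an arbitrary polytope as short signed sums of such pieces.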

gerdesj 6 hours ago

Erdős

I was told by a Hungarian that Hungarian spelling and pronunciation are pretty precisely aligned, compared to, say, English. Except when it comes to names, when it gets a bit random!

Why not do the bloke the decency of spelling his name correctly? Those diacritics are important.

Anyway, I was told that Paul's name is very roughly pronounced by an anglophone as "airdish".

  • mjcohen 5 hours ago

    (I saw this on a math department bulletin board about 1960)

    A theorem both deep and profound
    States that every circle is round
    But in a paper by Erdos
    Written in Kurdish
    A counterexample is found

  • renewiltord an hour ago

    Irrelevant. Cf. Diogenes on death

  • yeasku 5 hours ago

    He's not American, so nobody here cares...

  • umanwizard 4 hours ago

    I could be wrong, but FWIW I doubt Hungarians include diacritics that don’t exist in Hungarian (like ñ) when writing foreign names.

    • hgal 3 hours ago

      You are wrong.

WhyOhWhyQ 8 hours ago

I've had mixed results with AI on research mathematics. I've gotten it to auto-complete non-trivial arguments, and I've found some domains where it seems hopelessly lost. I think we're still at a point in history where mathematicians will not be replaced by AI and can only benefit by dabbling with it.

  • godelski 7 hours ago

    I've had similar results in both mathematics and programming. For one paper I was writing, I wanted to try them on a fairly straightforward problem: counting the number of permutations. I spent much more time trying to get the AI to figure it out than it took to actually solve it, and I couldn't get it to do it even after I had solved the problem myself. Similarly, in coding I've had it fail to find trivial bugs like an undefined identifier, which would easily have been caught had I turned on ctags (the identifier was inherited from a class, which made it easy to miss). But I've also had them succeed on non-trivial tasks, and those have been of great help, though nothing has ever been end-to-end just the AI.

    So I agree. I think these tools are very helpful, but I also think it's unhelpful that people are trying to sell them as much more than they are. The overhype not only legitimizes pushback but provides ammunition for it. I believe the truth is that they're helpful tools, but still far from replacing expertise, and that if we oversell them we run the risk of ruining their utility. If you promise the moon, you have to deliver the moon. That's different from aiming for the moon and landing in the trees.

RossBencina a day ago

Also interesting that the responses include anti-Lean material.

  • orochimaaru 9 hours ago

    I'm not a mathematician, but how credible is that anti-Lean material? Are they marketing an alternative programmatic approach, as in they're anti-Lean because "I've got something else", or are they philosophically anti-Lean with valid arguments?

    • dwohnitmok 6 hours ago

      It's mainly the latter, although the author makes half-hearted gestures at some sort of CAS (Computer Algebra System) being better.

      It's not very credible. There are individual fragments that make sense, but it's not consistent when taken together.

      For example, by making reference to Gödelian problems and his overall mistrust of infinitary structures, he's implicitly endorsing ultrafinitism (not just finitism, because e.g. PRA, the usual theory for finitary proofs, also falls prey to Gödel's incompleteness theorems). But this is inconsistent with his expressed support for CASes, which very happily manipulate structures that are meant to be infinitary.

      He tries to justify this on the grounds that CASes only perform a finite number of symbol manipulations to arrive at an answer, but the same is true of Lean, otherwise typechecking would never terminate. Indeed, this is true of any formal system you can run on a computer.
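
      To make that concrete, a minimal Lean 4 example of my own (not from the post):

          -- the proof is a finite term, and the kernel accepts it after
          -- a bounded number of reduction steps
          theorem two_add_two : 2 + 2 = 4 := rfl

      A proof that typechecks is, in the end, exactly the kind of finite symbol manipulation he's willing to grant a CAS.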

      Leaving aside his inconsistent set of arguments for CAS over Lean (and there isn't really a strong distinction between the two, honestly; you could argue that Lean and other dependently typed proof assistants are just another kind of CAS), his implicit support of ultrafinitism would already require a huge amount of work to make applicable to a computer system. There isn't a consensus on the logical foundations of ultrafinitism yet, so building a proof checker that satisfies ultrafinitistic demands isn't even well-defined and requires a huge amount of theory crafting.

      And just for clarity, finitism is the notion that unboundedness is okay but actual infinities are suspect. E.g. it's okay to say "there are an infinite number of natural numbers", which is understood as shorthand for "there is no bound on the natural numbers", but it's not okay to treat the infinitary object N of all natural numbers as a real thing. So e.g. some finitists are okay with PA over PRA.

      On the other hand, ultrafinitists deny unboundedness and say that sufficiently large natural numbers simply do not exist. (The most common operationalization: whether a number exists is a matter of a computation that scales with the size of the number; if the computation has not completed, we cannot have confidence the number exists, and hence sufficiently large numbers, for which the relevant computations have not completed, do not exist.) This means that e.g. quantification, or statements of the form "for all natural numbers...", is very dangerous, and there's no complete consensus yet on the correct formalization of this from an ultrafinitist point of view (or whether such statements would ever be considered coherent).

    • lakecresva 2 hours ago

      It seems pretty obviously machine-generated, and the appearance of the word "metaphysics" in the output suggests the prompt author didn't know what they were talking about to begin with.

  • CamperBob2 a day ago

    Due to his position and general fame, Tao has to deal with a larger-than-usual number of kooks.

  • testartr 8 hours ago

    [flagged]

    • DroneBetter 8 hours ago

      I think Zeilberger is taken heavily out of context and confused with Norman Wildberger a lot. He certainly has some eccentric opinions, but that one is not at all reflected in his blog's contents, which are largely things like "[particular paper] presents [a conjecture/proof] that can be [resolved/shortened] by routine methods", methods that are only routine because of his decades of work. It's a shame that his being the go-to example of a crank seems to have become ingrained in LLMs.

anon291 3 hours ago

I was driving around tonight doing errands, and while I was doing so had a great conversation with ChatGPT about the intimate details of the LLVM and GCC pipeline schedulers. It's a huge productivity boost. It has now taken notes for me for some compiler stuff I'm experimenting with. This would previously have been impossible.

  • egl2020 an hour ago

    Based on my experience, I would expect that the LLM was wrong about some of those details. Of course, your mileage (see what I did there?) may vary.

bgwalter 16 hours ago

[flagged]

  • lanstin 13 hours ago

    Because he is smart enough to use the existing (frontier) tools to get good results and create a sort of collaborative environment that is novel for research maths.

    • lanstin 5 hours ago

      As for collaboration, I meant this: https://terrytao.wordpress.com/2024/09/25/a-pilot-project-in... The issue with horizontal scaling of maths research is trust: if you don't know the author, it is more work to verify their work, especially non-formal proofs. Lean4 enables large projects to be split up into pieces where Lean can validate each intermediate result, so a much broader group of people can contribute pieces without jeopardizing the overall soundness (rough sketch below).
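
      A minimal sketch of the mechanism (toy statements and made-up names, Lean 4):

          -- a contributor proves one intermediate step in isolation;
          -- the kernel certifies it no matter who wrote it
          theorem step_add_zero (n : Nat) : n + 0 = n := rfl

          -- downstream work builds on the step without anyone having
          -- to re-verify the contributor's proof by hand
          theorem main_result (n : Nat) : n + 0 = n := step_add_zero n

      Trust moves from the author to the kernel, which is what lets strangers contribute safely.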

    • sd9 7 hours ago

      Indeed. Who’s to say whether Wiles would have used AI assistance if it had, you know, existed, in 1994.

      • CamperBob2 3 hours ago

        Wiles's initial presentation of the proof had a serious flaw that killed the whole thing until he found a workaround. I don't remember how long it took him to get out of the jam, but I'm sure he would have handed his credit card to the Devil himself if he thought it might help.

        People who don't take advantage of the best available tools and techniques don't get to that level to begin with.

    • bgwalter 13 hours ago

        "Collaborative environment" meaning that any PFY employed by the "AI" providers can read your most intimate thought processes and keep track of your embarrassing failures or misconceptions.

      • perching_aix 13 hours ago

        The embarrassing failures or misconceptions of math experts with regard to research-level mathematics? Definitely a serious problem.

        Though judging by your "Perelman and Wiles didn't need 'AI' assistance" comment, you'd surely be there on the sidelines to ridicule them for each and every single one. I guess maybe that's where your concerns come from?

        I can practically see how these concerns of yours would suddenly evaporate if they started using self-hosted models instead... ... yeah, right, who are we kidding?

        • TomatoCo 7 hours ago

          If mathematicians aren't occasionally saying something that's obviously wrong to a 300-level student, are they really pushing the envelope? I'm just a programmer, but I find that my and my coworkers' biggest insights always come right after we've said something seriously dumb.

  • perching_aix 15 hours ago

    Thankfully it's mathematics, so people powerscaling their idols, deifying them to the detriment of others, and putting terms in quotes mockingly is not what determines whether results hold. It's perhaps the only field not fundamentally shackled by this type of quackery, even if people try their hardest from time to time to make it so.

    • bgwalter 13 hours ago

      [flagged]

      • perching_aix 13 hours ago

        It's fine; at least you admit that what you wrote was just meant to insult.

        For people who at least pretend to care not to think in strawmen: it's been nearly six years, and their deus has never exited said machina (if it was ever in there to begin with, or anywhere else).