Discussion about this post

Philip Harris

We might widen the discussion of AI beyond mathematics. My daughter and I were discussing AI the other day as a lightning fact-checking facility, especially Google's AI summaries online. I thought mistakes and misleading summaries and conclusions were apparent, and difficult to check unless one had sufficient scholarship in the field. She gave me evidence (a convincing demo) that in her field mistakes were almost always present; the answer we were looking at was riddled with error, including an inappropriate citation of her own work as a 'source' for the confident-sounding answer.

Ran

I'm not sure I really know what it means, on a philosophical level, for an AI to "prove" something. I'm not a mathematician, but from what I see on the outside looking in, mathematics seems like a fundamentally human endeavor: not just an objective collection of facts that happen to be true, but a human understanding of what facts are proven and of what even constitutes proof. (And this understanding has evolved over time; a proof that was accepted by an earlier generation is not necessarily considered rigorous today.) So if a chatbot generates a "proof" but no one understands it yet, is it even really a proof? How about if it doesn't actually provide a proof, but just prints "I have discovered a truly marvelous proof of this, which this chat window is too small to contain"?

I guess to some extent this has been an issue ever since the four color theorem was proved with computer assistance in 1976; but at least there it was a deterministic program that humans wrote and that humans can examine and vet, giving us confidence in each part and confidence that the sum of the parts is the desired proof. With AI-generated proofs, we have nothing like that.

