We might widen the discussion of AI beyond mathematics. My daughter and I were discussing AI the other day as a lightning-fast fact-checking facility, especially Google AI online. I thought mistakes and misleading summaries and conclusions were present, and difficult to check unless one had sufficient scholarship in the field. She gave me evidence (a convincing demo) that in her field mistakes were almost always present, and that 'the answer' we were looking at was riddled with error, including an inappropriate reference to her own work as a 'source' for the confident-sounding answer.
Yes, I think this is quite common. Used for fact-checking, AI might suggest places that need further investigation, but it will generate a lot of false positives. As with many uses of generic AI chatbots, it's best done with caution and scepticism!
Yes, but universal or at least far more likely than I had supposed? And inherently uncheckable, if I have understood your post?
I'm not sure I really know what it means, on a philosophical level, for an AI to "prove" something. I'm not a mathematician, but from what I see on the outside looking in, mathematics seems like a fundamentally human endeavor: not just an objective collection of facts that happen to be true, but a human understanding of what facts are proven and of what even constitutes proof. (And this understanding has evolved over time; a proof that was accepted by an earlier generation is not necessarily considered rigorous today.) So if a chatbot generates a "proof" but no one understands it yet, is it even really a proof? How about if it doesn't actually provide a proof, but just prints "I have discovered a truly marvelous proof of this, which this chat window is too small to contain"?
I guess to some extent this has been an issue ever since the four color theorem was proved with computer assistance in 1976; but at least there it's a deterministic program that humans wrote and that humans can examine and vet, giving us confidence in each part and confidence that the sum of the parts is the desired proof. With AI-generated proofs, we have nothing like that.
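For a concrete sense of what that kind of checkability looks like, here's a toy machine-checked proof in Lean 4 (an illustrative sketch only): the statement and every inference step are verified by a small, deterministic kernel, which is itself a program that humans wrote and can vet.

```lean
-- A toy machine-checkable proof: Lean's deterministic kernel verifies
-- that `Nat.add_comm a b` really does prove the stated equality.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```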
Of course, AI could still threaten the existence of mathematicians if it causes some kind of cultural change that makes that human endeavor no longer seem worthwhile. I had a friend in college who couldn't see the appeal of sudoku once he learned that it was possible to write a program to solve the puzzles. Maybe someday that's how everyone will feel about things that AI can do, and there won't be any interest in math anymore (except for applications)?
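(He was right that it's mechanical, for what it's worth: a minimal backtracking solver is only a couple of dozen lines. Here's an illustrative Python sketch, with function names of my own choosing:)

```python
# Minimal backtracking sudoku solver (illustrative sketch).
# `grid` is a 9x9 list of lists of ints; 0 marks an empty cell.

def valid(grid, r, c, v):
    """Return True if placing v at (r, c) breaks no sudoku rule."""
    if any(grid[r][j] == v for j in range(9)):   # row clash
        return False
    if any(grid[i][c] == v for i in range(9)):   # column clash
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)          # top-left of the 3x3 box
    return all(grid[br + i][bc + j] != v
               for i in range(3) for j in range(3))

def solve(grid):
    """Fill grid in place by backtracking; return True if solvable."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    if valid(grid, r, c, v):
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0           # undo, try next value
                return False                     # dead end: backtrack
    return True                                  # no empty cells left
```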
Hi Ran, you're bang on the money with this. You'll enjoy the follow-up article I've written, which covers exactly these questions. What does it mean for us to have created a mathematical proof if no human can understand it?