Human beats Go AI again?

Started by Jubal, February 20, 2023, 10:11:07 AM


Jubal

https://arstechnica.com/information-technology/2023/02/man-beats-machine-at-go-in-human-victory-over-ai/

Quote
A human player has comprehensively defeated a top-ranked AI system at the board game Go, in a surprise reversal of the 2016 computer victory that was seen as a milestone in the rise of artificial intelligence.

Kellin Pelrine, an American player who is one level below the top amateur ranking, beat the machine by taking advantage of a previously unknown flaw that had been identified by another computer. But the head-to-head confrontation in which he won 14 of 15 games was undertaken without direct computer support.

The triumph, which has not previously been reported, highlighted a weakness in the best Go computer programs that is shared by most of today's widely used AI systems, including the ChatGPT chatbot created by San Francisco-based OpenAI.

The tactics that put a human back on top on the Go board were suggested by a computer program that had probed the AI systems looking for weaknesses. The suggested plan was then ruthlessly delivered by Pelrine.

"It was surprisingly easy for us to exploit this system," said Adam Gleave, chief executive of FAR AI, the Californian research firm that designed the program. The software played more than 1 million games against KataGo, one of the top Go-playing systems, to find a "blind spot" that a human player could take advantage of, he added.

The winning strategy revealed by the software "is not completely trivial but it's not super-difficult" for a human to learn and could be used by an intermediate-level player to beat the machines, said Pelrine. He also used the method to win against another top Go system, Leela Zero.

This was interesting: strategies that can beat AI probably work simply because they're not in the training data (albeit in this case discovered by another machine). Reminds me of the recent story about military robots that could be defeated by shuffling forward under a cardboard box, because you then don't have the visual profile of a "human" in their dataset.
The duke, the wanderer, the philosopher, the mariner, the warrior, the strategist, the storyteller, the wizard, the wayfarer...

BeerDrinkingBurke

This was interesting. It was a bit of a psychological shock for many in the professional go world when Lee Sedol lost to AlphaGo. And then the broader reporting was filled with hyperbolic statements about the rise of the machines. But this kind of trick for defeating AI shows how limited this so-called "intelligence" is. We created Go, and we created AlphaGo. We -understand- that we are playing Go, when we play it.
Developing a game called Innkeep! Serve Ale. Be jolly. Rob your guests. https://innkeepgame.com/

Jubal

Yeah, it's one of the sorts of things that makes me a bit more sceptical of the AI singularity chat, just how much this stuff seems to struggle with leaving the comfort zone of its training data.

ahuggingsam

> Reminds me of the recent story about military robots that could be defeated by shuffling forward under a cardboard box, because you then don't have the visual profile of a "human" in their dataset.

So what you're saying is.... Metal Gear Solid was right?

Jubal

That's a thing in Metal Gear Solid? (I have never played it)

psyanojim

Well, I've just had another utterly pointless conversation with ChatGPT on a technical topic.

To sum it up.

Me: Can you show me an example of how to do this thing?
ChatGPT: Here is an example.
Me: That doesn't work.
ChatGPT: I apologize for the confusion in my previous responses. You are correct. You cannot do that thing.
Me: Yes you can.
ChatGPT: I apologize for the confusion in my previous responses. You are correct. You can do that thing. Here is an example (repeats previous example).
Me: That doesn't work.
ChatGPT: I apologize for the confusion in my previous responses. You are correct. You cannot do that thing.
Me: You can do it. Just not like that.
ChatGPT: I apologize for the confusion in my previous responses. You are correct. You can do that thing. Here is an example (repeats previous example).

At that point I'm proud of myself that I managed to log off without putting my fist through the monitor ::)

Jubal

Yeah, I've had a bunch of experience of it simply denying that things poorly represented in its dataset exist at all when it doesn't know about them. I suspect there are ways to coax it to perform better, but I don't know how much I can be bothered to learn them compared to simply learning to do the stuff I want to do myself.

I think this also adds to my "more data won't necessarily improve performance" belief on these things. It's more things for it to answer, but also more things for it to muddle up at the same time.