My Connect Four AI sucks and I'm not sure why

The code can be found here: https://github.com/Jelly-Pudding/improvedconnectfour/blob/main/connect4.py

The minimax and evaluate_board functions are copied pretty much word for word from this exercise:
https://www.codecademy.com/courses/machine-learning/projects/minimax-connect-four (see connect_four.py)

My evaluation function's body is a bit different, but the AI shouldn't make the mistakes it currently makes. It sometimes messes up even simple things, like failing to block the opponent when they have three vertical pieces in a column…

Yeah, I don’t really understand why it’s bad…

I also don’t really understand this from the minimax function:

	if is_maximizing:
		# Maximising player: start from -inf and raise best_value as better moves turn up.
		best_value = -float("Inf")
		moves = classer.available_moves()
		random.shuffle(moves)
		best_move = moves[0]
		for move in moves:
			copied = copy.deepcopy(classer)
			copied.aiinputter(move)
			hypothetical_value = minimax(copied, False, depth - 1, alpha, beta, evaluate_board)[0]
			if hypothetical_value > best_value:
				best_value = hypothetical_value
				best_move = move
			alpha = max(alpha, best_value)  # best score the maximiser can already guarantee
			if alpha >= beta:
				break  # the minimiser would never allow this branch, so stop searching it
		return [best_value, best_move]
	else:
		# Minimising player: start from +inf and lower best_value instead.
		best_value = float("Inf")
		moves = classer.available_moves()
		random.shuffle(moves)
		best_move = moves[0]
		for move in moves:
			copied = copy.deepcopy(classer)
			copied.aiinputter(move)
			hypothetical_value = minimax(copied, True, depth - 1, alpha, beta, evaluate_board)[0]
			if hypothetical_value < best_value:
				best_value = hypothetical_value
				best_move = move
			beta = min(beta, best_value)  # best score the minimiser can already guarantee
			if alpha >= beta:
				break
		return [best_value, best_move]
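
For reference, here is the same alpha-beta bookkeeping on a tiny hand-built tree, stripped of the Connect Four board entirely. The tree and its values are made up purely for illustration; it just shows what alpha and beta are tracking and when the break fires:

import math

# Toy game tree: internal nodes are lists of children, leaves are plain numbers.
# The values are arbitrary; they are only there to make a cutoff happen.
TREE = [[3, 5], [2, [9, 1]], [0, -4]]

def toy_minimax(node, is_maximizing, alpha, beta):
	if isinstance(node, int):
		return node  # leaf: nothing to search, just report its value
	best = -math.inf if is_maximizing else math.inf
	for child in node:
		value = toy_minimax(child, not is_maximizing, alpha, beta)
		if is_maximizing:
			best = max(best, value)
			alpha = max(alpha, best)  # raise the maximiser's guaranteed floor
		else:
			best = min(best, value)
			beta = min(beta, best)  # lower the minimiser's guaranteed ceiling
		if alpha >= beta:
			break  # the opponent would never let the game reach this branch
	return best

print(toy_minimax(TREE, True, -math.inf, math.inf))  # prints 3

Running it, the [9, 1] subtree is never visited: by the time the search gets there, alpha is already 3 and beta has dropped to 2, so the break fires.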

As the function recurses, it keeps hitting these two lines:

hypothetical_value = minimax(copied, False, depth - 1, alpha, beta, evaluate_board)[0]
hypothetical_value = minimax(copied, True, depth - 1, alpha, beta, evaluate_board)[0]

So as it loops, it hits best_value = -float("Inf") or best_value = float("Inf"), which looks like the actual best value found so far gets thrown away at every maximum and minimum… I just don't understand what is happening here. Never mind, I think I actually understand this now.
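
In case the same thing trips anyone else up: the reset isn't throwing anything away, because best_value is a fresh local variable in each recursive call. Every node of the tree keeps its own running best, and the parent's copy is untouched by whatever the children do. A tiny unrelated sketch of the same pattern (names made up):

import math

def best_of(values, depth=0):
	# Each call resets 'best' to -inf, but only inside its own stack frame.
	best = -math.inf
	for v in values:
		child = best_of(v, depth + 1) if isinstance(v, list) else v
		best = max(best, child)
		print("  " * depth + f"saw {child}, best so far at this level: {best}")
	return best

best_of([4, [7, 2], 5])

The inner call resetting best to -inf never disturbs the outer call's running best of 4.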

I don’t know. I’m trying to find fault with these functions and the rest of my code. Something obviously isn’t right if the AI can’t always see that one move will make it immediately lose to a connect four.

I’ve been messing around with this for ages.

Finally, and counter-intuitively, I got it to work by changing the depth from 6/7 to 5. Suddenly the AI(s) work great, and the new AI based on the one from the linked project beats the old one a lot.

Although I have found a fix, I don't understand it. How come DECREASING the depth improved the AI(s)? Is it something to do with my computer being underpowered and unable to handle higher depths? There is alpha-beta pruning, but still, my desktop isn't the best in the world and she's quite old now.
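
For scale, Connect Four has at most 7 legal moves per turn, so a depth-d search looks at roughly 7^d positions before pruning. Quick back-of-the-envelope numbers (plain arithmetic, nothing from the repo):

# Rough upper bound on positions examined per move, ignoring pruning and full columns.
for depth in (5, 6, 7):
	print(depth, 7 ** depth)
# 5 16807
# 6 117649
# 7 823543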

I'd quite like to know the answer even though I don't face the same issue anymore. Thanks in advance for any replies :)

Your computer resources shouldn't cause functional errors like that; it would either take a long time or crash, I think.
I spent some time looking it over, but I'm having a hard time following it. More conceptually, I would say some math isn't lining up in your AI, such that it's missing a row when looking at the board.
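
If it is an off-by-one like that, the usual suspect is the loop bounds when scanning four-in-a-row windows. A hedged illustration (assuming a 6-row by 7-column board stored as a list of columns, which may not be how the repo stores it): each column of height 6 has exactly three vertical windows, starting at rows 0 to 2, so the inner loop needs range(ROWS - 3); range(ROWS - 4) would silently skip the topmost window in every column.

ROWS, COLS = 6, 7  # standard Connect Four dimensions

def vertical_windows(board):
	# board[col][row] is assumed here; yields every vertical 4-cell window
	for col in range(COLS):
		for row in range(ROWS - 3):  # rows 0, 1, 2 (not range(ROWS - 4))
			yield [board[col][row + i] for i in range(4)]

empty = [[" "] * ROWS for _ in range(COLS)]
print(sum(1 for _ in vertical_windows(empty)))  # 21 windows: 7 columns x 3 each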