Artificial neural networks may show that the mind is not the brain – Walter Bradley Center for Natural and Artificial Intelligence

“Minsky brilliantly portrays the mind
as a ‘society’ of tiny components that are
themselves mindless” – Simon &
Schuster, 1987

What is the human mind? The AI pioneer Marvin Minsky (1927–2016) said in 1987 that, in essence, “minds are what brains do.” That is, the mind is the result of electrical activity circulating through the brain as neurons spike and synapses transmit signals. But is that true? Can we test this idea?

We can actually do that by using artificial neural networks.

One of the most popular approaches to artificial intelligence is artificial neural networks. These networks, inspired by an early model of how neurons fire (the McCulloch-Pitts model), are made up of nodes, with each node resembling a neuron. A node receives signals and then sends them to its connected nodes based on an activation function.
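The node computation just described can be sketched in a few lines of Python. This is a toy illustration; the weights, inputs, and the choice of a sigmoid activation are my own assumptions, not taken from the article:

```python
import math

def node(inputs, weights, bias):
    """A single artificial 'neuron': a weighted sum of incoming signals
    passed through an activation function."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# A node receiving three signals from upstream nodes:
output = node([0.5, -1.0, 2.0], weights=[0.4, 0.3, 0.9], bias=-0.5)
print(round(output, 3))  # ≈ 0.769
```

The activation function here is a sigmoid, but in practice any of several functions (ReLU, tanh, a hard threshold) can fill that role; the threshold case becomes important later in the argument.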

Of course, there are differences between neural networks and the brain. For one thing, biological neurons are much more complex than the nodes of a neural network, which are usually simple mathematical functions. Another difference is that the brain relies on a great deal of feedback between neurons, while most neural networks are feedforward, with little or no feedback. Connectivity differs too: a typical biological neuron synapses onto thousands of other neurons, whereas a node in a neural network is connected only to the nodes in its adjacent layers.

Some other differences:

– Neurons fire at frequencies of at most around 100 hertz, while neural networks run on hardware that transmits signals in the megahertz range and beyond.

– Neurons send discrete spiking signals, while neural networks send continuous-valued signals.

– Neurons have at best about three digits of precision, while neural network computations can be carried out to essentially arbitrary precision.

There are many other differences as well. In fact, neural networks are better than brains in some ways. We can see this by looking at a single neuron through the lens of the Hodgkin-Huxley model. On this view, a neuron is essentially a perceptron – a very simple mathematical function that takes a weighted sum of its inputs and outputs a 1 if the sum exceeds a threshold and a 0 otherwise.
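The perceptron described here is simple enough to write out directly. A minimal sketch (the weights and threshold are invented for illustration):

```python
def perceptron(inputs, weights, threshold):
    """Classic perceptron: output 1 if the weighted sum of the inputs
    exceeds the threshold, and 0 otherwise."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

print(perceptron([1, 0, 1], [0.6, 0.4, 0.3], threshold=0.5))  # 0.9 > 0.5 -> 1
print(perceptron([0, 1, 0], [0.6, 0.4, 0.3], threshold=0.5))  # 0.4 <= 0.5 -> 0
```

Note the hard cutoff: the output jumps from 0 to 1 at the threshold rather than varying smoothly, which is exactly the property the next paragraph turns on.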

The important point about the perceptron is that its threshold function is not differentiable. This makes it impossible to compute a gradient. In other words, the perceptron cannot be trained with error backpropagation (a calculus-based error-correction method). That rules out the only broadly effective way we have of training neural networks.
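The non-differentiability claim can be checked numerically. A step function's slope is zero everywhere except at the threshold itself, so gradient-based training gets no signal from it, whereas a smooth activation like the sigmoid gives a usable gradient. A small sketch (the finite-difference helper is my own illustration):

```python
import math

def step(z):
    """Hard threshold activation, as in the perceptron."""
    return 1.0 if z > 0 else 0.0

def sigmoid(z):
    """Smooth activation used by trainable neural networks."""
    return 1.0 / (1.0 + math.exp(-z))

def numeric_grad(f, z, h=1e-6):
    """Central finite-difference estimate of df/dz."""
    return (f(z + h) - f(z - h)) / (2 * h)

# Away from the threshold, the step function's gradient is exactly zero,
# so backpropagation has nothing to work with:
print(numeric_grad(step, 0.7))   # 0.0
print(numeric_grad(step, -0.3))  # 0.0

# The sigmoid, by contrast, has a nonzero gradient there:
print(round(numeric_grad(sigmoid, 0.7), 4))  # 0.2217
```

This is why practical neural networks replace the hard threshold with smooth activations: differentiability is what makes backpropagation possible at all.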

Since neurons are perceptrons, the brain cannot be trained the way a neural network can. This leads us to the surprising conclusion that a neural network is a much more effective learner than the brain.

The complexity of neurons only exacerbates this problem

Research shows that a neuron is more complex than a single perceptron; a better model is a multilayer perceptron. But this only compounds the basic problem, because the neuron must now not only propagate errors out to its connections but also propagate errors within itself in order to train. So while a single perceptron is at least trainable on its own, a multilayer perceptron built from such threshold units quickly becomes untrainable. Being inherently more complicated still, neurons would face the same problem.
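The untrainability claim can be made concrete with a toy gradient-descent loop for a single threshold unit (the weights, inputs, and learning rate are invented for illustration). Because the step activation's derivative is zero almost everywhere, the chain rule makes every backpropagation update zero, and "training" leaves the weights untouched; in a multilayer network each extra layer multiplies in yet another zero derivative:

```python
def step(z):
    """Hard threshold activation."""
    return 1.0 if z > 0 else 0.0

def step_derivative(z):
    """Zero everywhere except the single threshold point."""
    return 0.0

weights = [0.2, -0.4]
x, target = [1.0, 1.0], 1.0
lr = 0.1  # learning rate

for _ in range(100):
    z = sum(w * xi for w, xi in zip(weights, x))
    error = step(z) - target
    # Backpropagation update: gradient = error * activation'(z) * input.
    # Since activation'(z) is 0, every update is 0.
    weights = [w - lr * error * step_derivative(z) * xi
               for w, xi in zip(weights, x)]

print(weights)  # unchanged: [0.2, -0.4]
```

After 100 iterations the unit still misclassifies its input, because no error signal ever reaches the weights.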

How does this help us test whether the mind is the brain? If there is something the brain cannot do but the mind can, then the mind is not simply what the brain does. If I say I'm Michael Jordan but can't dunk a basketball, then I'm not Michael Jordan. We can use this approach to show that the mind is not the brain.

Back to neural networks: since artificial neural networks are, in the respects above, a better version of the brain, anything a neural network cannot do, the brain cannot do either. So if humans can solve problems that neural networks cannot solve, then the human mind is doing something that a neural network cannot. Therefore, the human mind is also doing something that the brain cannot.

At this point we can say that because neural networks can only compete with the human mind in very narrow areas, they will never match general human performance – and therefore the mind is not the brain.

While compelling, this argument won't satisfy readers who want hard scientific data. So in my next post I will present exactly that: a logic experiment showing that humans provably outperform neural networks.

