Agent - a subject of AI; it has a mind in the form of a neural network (NN).
In a time-step-based approximation, an agent takes input from its receptors. The input passes through the NN, producing some charge on the actor neurons. One of the actors is chosen each time step to be performed. The result of the action is propagated back through the NN, adjusting and optimizing the network structure.
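The loop above can be sketched in Python (a sketch only; `propagate`, the actor list, and the charge-proportional selection rule are my assumptions, not part of the design described here):

```python
import random

def agent_step(receptor_signals, propagate, actors):
    """One time step: receptors -> NN -> actor charges -> one action.

    `propagate` stands in for the whole NN pass and maps the receptor
    input vector to one charge per actor. Choosing proportionally to
    charge is an assumed selection rule.
    """
    actor_charges = propagate(receptor_signals)
    total = sum(actor_charges)
    pick = random.uniform(0.0, total)
    for action, charge in zip(actors, actor_charges):
        pick -= charge
        if pick <= 0.0:
            return action            # this actor is performed this step
    return actors[-1]

# With all the charge on the first actor, the choice is forced:
chosen = agent_step([1.0, 0.5], lambda xs: [sum(xs), 0.0], ["left", "right"])
```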
Neuron - a base element of the NN, has:
- A list of pointers to output neurons (axons).
- Current charge and response.
- A memory map of stamp->charge.
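The three fields above map naturally onto a small Python dataclass (a sketch; the `remember` helper and the integer stamp type are my additions):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Neuron:
    # Pointers to output neurons (axons); uniform weights, so a plain list.
    axons: List["Neuron"] = field(default_factory=list)
    # Current charge accumulated from inputs, and the response sent out.
    charge: float = 0.0
    response: float = 0.0
    # Memory map of time stamp -> charge at that stamp.
    memory: Dict[int, float] = field(default_factory=dict)

    def remember(self, stamp: int) -> None:
        """Store the current charge under the given time stamp."""
        self.memory[stamp] = self.charge
```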
In a classical NN model, inputs have different weights. The same effect can be achieved in a uniform-weight model (where all inputs/outputs of a neuron are equally strong) by adding similar neurons whose inputs are a subset of the original neuron's inputs. Therefore, I'm going to throw weights away and treat each input/output as equally strong, distributing the signal uniformly.
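A tiny numeric check of that equivalence (my own illustration): an integer weight of 2 on an input can be emulated by a duplicate neuron relaying the same signal a second time.

```python
def weighted_sum(inputs, weights):
    """Classical neuron input: each source carries its own weight."""
    return sum(x * w for x, w in zip(inputs, weights))

def uniform_sum(inputs):
    """Uniform-weight model: every incoming link counts equally (as 1)."""
    return sum(inputs)

a, b = 0.25, 0.5
classical = weighted_sum([a, b], [1.0, 2.0])   # b weighted twice
uniform = uniform_sum([a, b, b])               # a duplicate neuron relays b again
```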
One of the experiments showed me that you cannot go on with a static NN structure. You have to add and remove links and neurons, at least logically (one might say that biologically nothing new is created during the thinking process, but I have no proof of that and I don't need it). Therefore, there is no point in adjusting the input weights through the agent adaptation process - a pure neuron/link creation/removal heuristic would produce the same result and is required anyway.
A neuron is not a function - it's just a multiplexer and a transmitter of the signal (which can be mathematically expressed as a function). Therefore, it has to preserve the total signal strength from inputs to outputs, taking some tiny portion for its efforts.
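The conservation rule could look like this (a sketch; the `toll` fraction is an assumed parameter, the text only says "some tiny portion"):

```python
def transmit(charge, n_axons, toll=0.01):
    """Split a neuron's charge uniformly over its axons, keeping a tiny
    toll for the neuron itself, so kept + sum(outputs) ~= charge."""
    kept = charge * toll
    per_axon = (charge - kept) / n_axons
    return kept, [per_axon] * n_axons

kept, outputs = transmit(1.0, 4)
```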
The memory of an agent as a whole is constructed from the past charges stored within each neuron. A past charge does not have a time reference, but the mind can scan the charges of all neurons at once at some particular time stamp. How these past charges participate in the decision process of an agent is still an open question.
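One way to picture that scan (my sketch; returning 0.0 for a neuron that stored nothing at the requested stamp is an assumption):

```python
from typing import Dict, List

def scan_memory(memories: List[Dict[int, float]], stamp: int) -> List[float]:
    """Read every neuron's stamp->charge map at one time stamp at once."""
    return [memory.get(stamp, 0.0) for memory in memories]

# Two neurons: the first remembered charges at stamps 1 and 2,
# the second only at stamp 2.
snapshot = scan_memory([{1: 0.4, 2: 0.7}, {2: 0.1}], 2)
```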