I am told that AI is developing at an astonishing rate and will soon threaten all humanity.
First, if AI is truly developing as well as accelerating, then by the time it is in a position to exterminate us, it will have realised that exterminating any species has never been a good idea, and that war is about the most pointless exercise imaginable.
But that’s not the point.
The fuss about the rate of development of AI is difficult to understand.
There are things all over the planet that regularly outstrip the performance of AI. Grubby, sticky, noisy little things. Oh, what are they called? Ah, “babies”, yes that’s them.
I’m referring to the period up to the time when a baby has mastered sufficient linguistic skill to be advised, guided and instructed. OK, these three things will be done, from the adult perspective, at a very rudimentary level. But from the baby’s viewpoint, this is the first time it has received any external guidance on how to deal with the existence experience.
Now, I don’t know my facts here, but I am going to pitch this point at about twelve months, and then add on about six months of the gestation period, when the foetus seems to show some kind of awareness of, and reaction to, its environment.
During this period, baby has to work out how to:
- Recognise and identify incoming data streams from sensory receptors.
- Identify discrete data streams and associate them with specific receptors.
- Identify the location and function of those receptors.
- Recognise, identify and organise motor instruction streams.
- Associate specific motor instruction streams with certain actuators or sets of actuators.
- Process feedback loops, routing signals from receptors back to the matching actuators.
The list could go on blah blah blah.
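The core of that list is a trial-and-feedback loop: fire a motor command, notice which sensation changes, and associate the two. A minimal sketch of that idea, as a toy model; the actuator and sensor names, and the hidden wiring, are all invented for illustration:

```python
import random

# Toy "motor babbling" model: the agent starts with no idea which actuator
# affects which sensor, and discovers the mapping purely by random trial
# and environmental feedback. The wiring below is hidden from the learner.
HIDDEN_WIRING = {"arm": "touch", "eye": "vision", "neck": "balance"}

def fire(actuator):
    """Simulate the environment: firing an actuator stimulates one sensor."""
    return HIDDEN_WIRING[actuator]

def babble(trials=100, seed=0):
    rng = random.Random(seed)
    learned = {}
    for _ in range(trials):
        actuator = rng.choice(sorted(HIDDEN_WIRING))
        sensor = fire(actuator)       # feedback arrives from the environment
        learned[actuator] = sensor    # associate command with sensation
    return learned

print(babble())
```

After enough random trials the learned map converges on the hidden wiring, with no external instruction at any point, which is the situation the list above describes.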
Eyes are a big thing.
- How to focus them
- Why is it useful to focus them?
- How does baby know that existence should be in sharp focus? What is “focus”?
- How to vector eyes to a target
- How to vector both eyes to the same target
- Decide whether there is one target or two.
- Associate squint angle with focus and deduce range.
- Associate range with actuator movements of arms and fingers
- What are arms and fingers?
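The "squint angle" point is real geometry: with two eyes a fixed baseline apart, the vergence angle between their lines of sight determines the distance of the fixated target. A hedged sketch of that deduction; the 6 cm baseline is an assumed adult-ish figure, not a measurement:

```python
import math

def range_from_vergence(vergence_deg, baseline_m=0.06):
    """Distance to a fixated target, from the vergence angle between the eyes.

    Both eyes rotate inward to fixate the same point; splitting the vergence
    angle in half gives a right triangle whose opposite side is half the
    inter-eye baseline, so range = (baseline/2) / tan(vergence/2).
    """
    half_angle = math.radians(vergence_deg) / 2
    return (baseline_m / 2) / math.tan(half_angle)

# The closer the target, the harder the squint (larger vergence angle):
for angle in (1.0, 4.0, 10.0):
    print(f"vergence {angle:4.1f} deg -> range {range_from_vergence(angle):.2f} m")
```

The baby, of course, has to discover this relationship without ever seeing the trigonometry, presumably by correlating squint with reach-and-touch feedback from those arms and fingers.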
Throughout this pre-linguistic phase there is no external guidance. It is not possible to explain to the baby what it should be trying to do, other than putting objects into range for practice. Fluffy toys, rattles and other vastly complex objects.
Now, I expect someone will say “innate”. It’s the sort of inane remark to be expected.
Even if it is “innate”, the same processing steps have to be gone through.
That means that even in very simple organisms, in anything that responds to and explores its environment, there must be programming of this kind that leads to exploration and feedback.
Given that this might be the key, and that it is present almost at the cellular level, or at least from the very early organisation of the cell clump that will become the brain, how can it be extrapolated to the enormous amount of processing required to reach a stage of development that can accept and process language inputs?
Remember, before baby can talk, it has to develop a language learning system. I don’t mean learn what the Mummy thing is mouthing and noising about. But a system that allows baby to develop a language learning centre, a next generation development. It is this second generation system that will decode the Mummy stuff. And that presupposes that baby has deduced that the Mummy stuff might be useful for survival. How dat?
Compared to that lot, AI seems rather slow, remembering that it is getting a lot of guidance from programmers and technicians who already know the answers to a lot of the questions it hasn’t even asked yet.