I'd say we know enough to know it's not spirit.
There's that good old human "we know it" spirit! ;D
We know it's synaptic activity, that it's linked to the number of neurons in our nervous system, mainly the brain.
I can't remember what it's called (and an Internet search failed to produce the name) - but did you know we have a lot more brain cells before we're born? Here's a clip or two:
www.newhorizons.org/neuro/kotulak.htm
From conception to about halfway through fetal life, brain cells grow from one to about 200 billion. Then brain cells begin to die off, leveling off at about 100 billion at birth, the number that remain through adulthood.
If I recall, more die off around age 8 or 12 or something...
However, that's just the cells present - the over-abundance provides more ability to bootstrap the connection-forming process, etc. (Thus, of course, more connections with time, etc. - and specifically, more 'fed-back' connections that hang around).
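The "over-produce then prune" idea above can be sketched in a few lines - start with far more candidate connections than needed, accumulate usage, then keep only the ones that actually got used. (The numbers and the usage rule here are invented purely for illustration; real synaptic pruning is vastly more complex.)

```python
# Toy sketch of over-production followed by pruning:
# connections that see little use die off, the rest "hang around".
import random

random.seed(42)
candidates = list(range(200))                     # over-abundant connections
usage = {c: random.randint(0, 10) for c in candidates}  # simulated activity

# pruning: keep only connections that were used enough
survivors = [c for c in candidates if usage[c] >= 5]

assert len(survivors) < len(candidates)
```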
And, of course, we should all praise goldfish instead of dolphins.
We know this because babies aren't as conscious as fully grown humans, and because brain damage often results in changes in levels of consciousness, character, and personality - the very things which make us 'us'. Why would physical head trauma affect the spirit? The same goes for people suffering mental retardation: we know their brains are defective - are their spirits too? Why add spirit into the equation when the physical explanations suffice?
How do you sense someone else's consciousness?
Does it feel the same as your own consciousness?
"What is consciousness?"
We don't understand the intricate workings of electricity or gravity, yet most people have no trouble accepting that these are natural phenomena.
And a lot of people don't have trouble accepting that there is a God... So acceptance isn't a pre-requisite for anything here. (Remember, logic in a sub-domain does not (generally) apply outside the domain in which it is formed).
Interesting point, but I may have to debate the last part. How complex are AI systems compared to our brains? We have billions of neurons with billions of synaptic pathways built from experiences throughout the years of our life; how does this compare to the number of pathways AI systems can take?
Neural networks (in AI) are a simulation of the brain. The ideal is to simulate neurons that can (ideally) connect to any other neuron - where the connection stays or goes based on feedback.
Important aspects (as you guys have identified in humans) are inputs, outputs, and feedback. (Though, interestingly, for most neural networks, feedback (source) decisions are completely external ... but ideally the human brain uses the same set of neurons to interpret / provide feedback).
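To make the inputs/outputs/feedback idea concrete, here's a minimal sketch: a single artificial neuron whose connection weights are strengthened or weakened by an external feedback signal (here, the correct answer for the AND function). This is only an illustration of the feedback-driven-connection idea, not a model of a real brain.

```python
# One artificial neuron learning AND from feedback.
def step(x):
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # connection strengths
    b = 0.0          # bias ("firing threshold")
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            error = target - out        # the external feedback signal
            w[0] += lr * error * x1     # connections that helped get stronger
            w[1] += lr * error * x2
            b += lr * error
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
for (x1, x2), target in AND:
    assert step(w[0] * x1 + w[1] * x2 + b) == target
```

Note how the feedback here comes entirely from outside the "neuron" - exactly the point made above about most neural networks, versus a brain that (ideally) uses the same set of neurons to provide its own feedback.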
I'd actually heard of a project that was going to develop a brain the size of a cat's brain (at least according to the article I read). I wonder if that was the same as this project (found via quick search)?
(Note, by the way, that we easily have the ability to provide more inputs and outputs to an AI system - so, assuming we can produce a (similarly) conscious system, there's no question that it would have the advantage... Same with bio-implants... At some point we will be seeing an imbalance, and it's going to create some fun...).
- - -
The thing is, I have a major problem with everyone defining consciousness based on results of what consciousness ideally produces...
Have you heard of the Turing test? (A contest where you try to determine whether you're text-chatting with a computer or a human). The thing is, the AI players in the Turing test are usually basic trickery - very far from conscious, yet sometimes it's hard to tell. (i.e. it's possible to deceive judgements of consciousness from the outside - often like how other people sometimes can't tell one is shy).
Even with a neural network simulation - which you can do via discrete electronic components representing neurons, or by representing them virtually in software (generally slower) - what is the magic element that is consciousness?
External results can ideally be produced by intelligent or non-intelligent systems. But what is that strange feeling of consciousness that we experience inwardly? Why do we experience it as we do?
I guess I'm asking a lot of questions because I don't really understand AI either. What are its limits? Is it constrained by the language the code is written in, the code itself, the hardware it runs on? And how do these constraints compare to our own limitations?
Note that coding, virtualized, has very little to do with text languages... You can write (encode) programs using DNA, for example. A hardware AI neural network is built using discrete neural representations - the effects encoded as circuitry that can form dynamic paths (interconnections).
(Note: The basic programmatic model of computers that most people think of is known as the Von Neumann architecture ... but a powerful programmer abstracts himself completely from this architecture ... in the end, programming is an engineering science - you implement a result using primitives - whether those primitives are text mnemonics, gravity, memes, or whatnot ... This is why we can program using DNA and get an actual result to a problem).
Given a purely physical definition of the human, then the human (and much better) is within our ability to create (given time and technologies). Thus, in that ideal, the limits are way beyond the human. (Though human augmentation is a concrete reality - more than humanesque AI).
I get the feeling that we're more 'free' than any AI system we have devised.
I agree - most of the AI systems to date are much more limited (even when they have a ton more inputs, outputs, storage, etc.).
So, not limiting yourself to any technology - and using things like the fact that we are now starting to grow all sorts of human tissues (thus, assuming we can grow every type of tissue, including all types of brain matter)... Would you believe we can build a conscious human?
The 'something' could be our experiences and memories.
Interestingly, our memory is a strange thing - not really memory as much as relational information - much less exact than computer memory, where things are encoded perfectly... but that is definitely part of the difference in how we process.
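The contrast can be sketched like this: computer memory is addressed exactly, while relational recall lets a partial cue pull up a whole memory by association. (The "memories" and cue words below are invented purely for illustration.)

```python
# Exact addressing vs. associative (relational) recall.
memories = [
    {"beach", "summer", "icecream", "sunburn"},
    {"exam", "coffee", "allnighter", "panic"},
    {"snow", "fireplace", "cocoa", "blanket"},
]

def recall(cue_words):
    """Return the stored memory sharing the most words with the cue."""
    return max(memories, key=lambda m: len(m & cue_words))

# Exact addressing: you must know precisely where it is stored.
exact = memories[2]

# Associative recall: a fragment (even a partly wrong one) is enough.
assoc = recall({"cocoa", "winter"})
assert assoc == memories[2]
```

Of course, real human recall is lossy and reconstructive in ways this toy lookup isn't - which is the point being made above.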
Obviously our experiences are dictated by our perception of the world, our senses, something which AI lacks.
But, that begs the question... why do we have perception (instead of just (ideal) information storage)?
Why does AI lack that?
What is consciousness?
(AI doesn't lack senses in the very physical notion of the word, by the way... it does in the sentient sense though... but what is that?).