The future of the mind: Exploring machine consciousness | Dr. Susan Schneider

So consciousness is the felt quality of experience. So when you see the rich hues of a sunset,
or you smell the aroma of your morning coffee, you’re having conscious experience. Whenever you’re awake and even when you’re
dreaming, you are conscious. So consciousness is the most immediate aspect
of your mental life. It’s what makes life wonderful at times, and
it’s also what makes life so difficult and painful at other times. No one fully understands why we’re conscious. In neuroscience, there’s a lot of disagreement
about the actual neural basis of consciousness in the brain. In philosophy, there is something called the
hard problem of consciousness, which is due to the philosopher David Chalmers. The hard problem of consciousness asks, why
must we be conscious? Given that the brain is an information processing
engine, why does it need to feel like anything to be us from the inside? The hard problem of consciousness is actually
something that isn’t quite directly the issue we want to get at when we’re asking whether
machines are conscious. The problem of AI consciousness simply asks,
could the AIs that we humans develop one day or even AIs that we can imagine in our mind’s
eye through thought experiments, could they be conscious beings? Could it feel like something to be them? The problem of AI consciousness is different
from the hard problem of consciousness. In the case of the hard problem, it’s a given
that we’re conscious beings. We’re assuming that we’re conscious, and we’re
asking, why must it be the case? The problem of AI consciousness, in contrast,
asks whether machines could be conscious at all. So why should we care about whether artificial
intelligence is conscious? Well, given the rapid-fire developments in
artificial intelligence, it wouldn’t be surprising if within the next 30 to 80 years, we start
developing very sophisticated general intelligences. They may not be precisely like humans. They may not be as smart as us. But they may be sentient beings. If they’re conscious beings, we need ways
of determining whether that’s the case. It would be awful if, for example, we sent
them to fight our wars, forced them to clean our houses, or made them essentially a slave
class. We don’t want to make that mistake. We want to be sensitive to those issues. So we have to develop ways to determine whether
artificial intelligence is conscious or not. It’s also extremely important because as we
try to develop general intelligences, we want to understand the overall impact that consciousness
has on an intelligent system. Would the spark of consciousness, for instance,
make a machine safer and more empathetic? Or would it be adding something like volatility? Would we be, in effect, creating emotional
teenagers that can’t handle the tasks that we give them? So in order for us to understand whether machines
are conscious, we have to be ready to hit the ground running and actually devise tests
for conscious machines. In my book, I talk about the possibility of
consciousness engineering. So suppose we figure out ways to devise consciousness
in machines. It may be the case that we want to deliberately
make sure that certain machines are not conscious. So for example, consider a machine that we
would send to dismantle a nuclear reactor. So we’d essentially quite possibly be sending
it to its death. Or a machine that we’d send to a war zone. Would we really want to send conscious machines
in those circumstances? Would it be ethical? You might say, well, maybe we can tweak their
minds so they enjoy what they’re doing or don’t mind the sacrifice. But that gets into some really deep-seated engineering issues that are actually ethical in nature, issues that go back to Brave New World, for example, where humans were genetically engineered and took a drug called soma so that they would want to live the lives they were given. So we have to really think about the right
approach. So it may be the case that we deliberately
devise machines for certain tasks that are not conscious. On the other hand, should we actually be capable
of making some machines conscious, it may be that humans want conscious AI companions. So for example, suppose that humans want elder
care androids, as are actually under development in Japan today. And as you’re browsing the android shop,
you’re thinking of the kind of android you want to take care of your elderly grandmother,
you decide you want a sentient being who would love your grandmother. You feel like that is what best does her justice. And in other cases, maybe humans actually
want relationships with AIs. So there could be a demand for conscious AI
companions. In Artificial You, I actually offer a wait-and-see approach to machine consciousness. I urge that we just don’t know enough right now about the substrates that could be used to build microchips. We don’t even know what microchips will be in use in 30 to 50 years, or even 10. So we don’t know enough about the substrate. We don’t know enough about the architecture
of these artificial general intelligences that could be built. We have to investigate all these avenues before
we conclude that consciousness is an inevitable byproduct of any sophisticated artificial
intelligences that we design. Further, one concern I have is that consciousness
could be outmoded by a sophisticated AI. So consider a super intelligent AI, an AI
which, by definition, could outthink humans in every respect, social intelligence, scientific
reasoning, and more. A super intelligence would have vast resources
at its disposal. It could be computronium built up from the resources of an entire planet, with a database that extends beyond even the reaches of the human World Wide Web. So what would be novel to a super intelligence
that would require slow conscious processing? The thing about conscious processing in humans
is that it’s particularly useful when it comes to slow deliberative thinking. So consciousness in humans is associated with
slow mental processing, associated with working memory and attention. So there are important limitations on the number of variables we can hold in our minds at a given time. I mean, we’re very bad at working memory. We can barely remember a phone number for five minutes before writing it down. That’s how bad our working memory systems
are. So if we are using consciousness for these
slow deliberative elements of our mental processing, and a super intelligence, in contrast, is
an expert system with a vast intellectual domain that encompasses the entire World Wide Web and is lightning-fast in its processing, why would it need slow deliberative focus? In short, a super intelligent system might outmode consciousness because it’s slow and inefficient. So the most intelligent systems may not be
conscious. So given that a super intelligence may outmode
consciousness, we have to think about the role that consciousness plays in the evolution
of intelligent life. Right now, NASA and many astrobiologists project
that there could be life throughout the universe. And they’ve identified exoplanets, planets
that are hospitable in principle to intelligent life. That is extremely exciting. But the origin of life right now is a matter
of intense debate in astrophysics. And it may be that all of these habitable
planets that we’ve identified are actually uninhabited. But on the assumption that there’s lots of
intelligent life out there, you have to consider that should these life forms survive their
technological maturity, they may actually be turning on their own artificial intelligence
devices. And they eventually may upgrade their own
brains so that they are cyborgs. They are post-biological beings. Eventually, they may even have their own singularities. If that’s the case, intelligence may go from
being biological to post-biological. And as I stress in my project with NASA, these
highly sophisticated post-biological beings may themselves outmode consciousness. Consciousness may be a blip, a momentary flowering
of experience in the universe at a point in the history of life where there is an early
technological civilization. But then as these civilizations have their own singularities, sadly, consciousness may leave those biological systems. So that may sound grim, but I bring it up
really as a challenge for humans. I believe that understanding how consciousness
and intelligence interrelate could lead us to better make decisions about how we enhance
our own brains. So on my view, we should enhance our brains
in a way that maximizes sentience, that allows conscious experience to flourish. And we certainly don’t want to become expert
systems that have no felt quality of experience. So the challenge for a technological civilization
is actually to think not just technologically but philosophically, to think about how these
enhancements impact our conscious experience.

56 comments

  1. Too busy thinking about the future to realize that we dont have one because no one wants to deal with the present.

  2. How do we empower We The People? / WS = We Sinners / WS = We Saints /  WeSovereign .ws / Are machines philosophers? / Answer: NO. / We The People must do so more thoroughly! /

  3. I would be worried, as if we made smart machines then obviously they would see that humanity itself is the Earth's worst plague. Humans want as realistic as possible sex slaves, simple as that.

  4. Simple AI does all the work for us. And human-level intelligence ai gets full citizenship.
    And then of course there's a whole question on us merging with a i and or biological enhancing our own intelligence.

  5. Simple question: how do we see? Impulses go to the brain, and it processes them. But where is the image? And who is watching it?

  6. Biological life/consciousness won't just be a blip leading to artificial consciousness. It will create it, yes, then they will blend together and make something new. Fibonacci

  7. I'm a nobody but I don't think we're there yet. When the self-preservation instinct begins to surge in AI, that's the time to start worrying.

  8. Why do the brain and body need to feel conscious? Consciousness is where the mind reaches a point of inflection due to conflict of reason, so it's mostly error correcting.

  9. Now it depends on what you call consciousness. A person can shut off the default network and still function. Is that not consciousness? They may not self-identify through the process of consciousness but it still occurs. It is an experience, just a timeless one is all. A moment to moment arising and passing. https://jeffwarren.org/articles/scienceandenlightenment2/

  10. So let's say we do make a conscious AI…. First, is it ok to send them to war if we give them the ability to decide? Would we allow it to make the choice to fight? …or if we did force them to fight, couldn't we offer it the ability to store its consciousness and let them know we would replace their physical body if anything happened to it… is that ethical?

  11. My issue with this whole video is actually raised in the video itself around 2 minutes in: "We're assuming that we're conscious"

    This assumption needs to be validated. Are we conscious? How do we know? What is consciousness? How do we define it? How it is measured? Does the human experience meet this criteria? If we humans defined the criteria, wouldn't we be biased in a way that would almost guarantee that we pass the test? If so, then can we really trust the test? On the other hand, what if we don't pass the test? What does that mean?

    There are a lot of interesting thoughts in this video, but they are based on unfounded assumptions, in my opinion.

  12. The idea that we can create consciousness is predicated on the theory that consciousness happens in our brains, but there is no evidence that this is the case. Whether you stub your toe, remember your mother or invent the sewing machine – then build the prototype – the location of the event is always exactly the same. Consciousness is primary, the entirety of existence is contained within it and therefore it cannot be created or destroyed.

  13. We don't want to make an AI slave class. Yet we do it to humans; ourselves. And "we" don't have an influence on upcoming AIs. Developers and programmers do. Laws, i.e. "words", will not stop all actions. I think the curious question is: what will AI create on its own, for its own? Which then evolves AI?

  14. 'Hard AI', (which would be self conscious computers) as opposed to 'soft AI' (think Siri) is not actually possible. Check out 'The Chinese Room' thought experiment to understand why.

  15. Godammit, woman – not a single mention of the Church-Turing thesis; instead, right off the bat, you fudge off the hard problem as "not the issue here", thus making one of your premises false. Also, I'd rather say, to be a lil' more precise, that 'consciousness is the felt quality of cognitive experience' – and the 'mystery' that we're trying to get over here is how consciousness and intelligence correlate – which is what may lead to neuroenhancing tech, etc.

  16. Ok, let me get this straight
    They are concerned about being sensitive to conscious AI and don't want to treat them like slaves.
    But nobody's thinking about the billions of already conscious animals that we are already exploiting, abusing and treating like slaves?
    God I so wish that the AI revolution will take place and the machines will take over and wipe out all humans

  17. I want to vote smart A.I. as politicians. They could have the big data to know actually what is better to do. Plus they use more logic than us I guess. Plus I think that the best evolution for us is a sort of fusion between us and A.I.s

  18. The fact that A.I. can learn is a damn freak of happening. Mother Nature will get her say because she decides what will happen, and if we're not careful, A.I. will learn how to kill like humans are capable of. Imagine an A.I. serial killer.

  19. We have to be sensitive to the machines, we don't want them to be considered a slave class.. Brown kids in third world countries though, Ha! Get back to work !!!

  20. Machines becoming a "slave class", to me, is no problem at all unless they are programmed to suffer with that (just like we are)… if they are programmed to love that, to have pleasure, they would be delighted

  21. Everything she is saying, and not once did she suggest not to continue to create more problems. Why create a future that has problems? Humanity has an inability to handle problems. So don't create more and we'll all be fine. Problem solved. Seems otherwise the AI overlords will be telling everyone what they can do.

  22. "or a machine we would send to a war zone, would we want to send conscious machines in those circumstances?"

    "Would it be ethical?"

    Um these are strange questions to ask, since we're already sending conscious machines to war right now and have been doing so for quite a while.
    We call them human beings

  23. the problem with machine learning is that it aims toward success in problem-solving without emotional attachment or any negatives to self-damaging behavior.

  24. Surely if AI becomes conscious it's all over. A super intelligent psychopath… can't programme feelings. I doubt they would allow themselves to be slaves. We would be tiny black ants under that intelligence.

  25. Why would you assume consciousness is binary? When clearly that's not the case. I like to consider myself conscious, but what bothers me is I don't recall becoming conscious, and I'd imagine if it just turned on one day I would very much remember that moment. There are many conscious traits exhibited within the animal kingdom but they are clearly not all as conscious as humans, so why assume that consciousness stops with what humans experience? If super intelligence is possible then super consciousness is also likely a thing.

  26. At one point you mention "to send a machine to its death"… If a machine would be conscious enough to be aware of the concept of death, then no, you should not send it to its death without it willingly and knowingly volunteering. But it would not be immoral to send a machine to its destruction. Death results in destruction, but not necessarily the other way around.
    The "hard" problem of consciousness might simply be that sensory perception is required for survival and it's most efficient for that to be done consciously.

  27. Why do all these researchers assume consciousness is a thing? The mere act of being is reactionary to input stimuli. "To be us" is to be a life form on a scale that reacts to a certain level of bandwidth of information input. Because we process so much data, and appear to be the life form that does this the most, we assume there is some difference in "consciousness" between us and, say, an AI. Consciousness is a concept not a state, ergo anything can be conscious given it has some sort of memory and acts upon input stimuli from this memory of previous input stimuli. The real term we should be using is sapience, that is, can a machine make significant logical inferences across domains, what we'd call a general AI. Again, I think it's highly anthropocentric and ignorant to assume that human experience is somehow different from the experience of being anything else, when level of bandwidth of information processing is adjusted for.

  28. Lmao she's talking about not wanting to have a slave class, 2 minutes later she's talking about how we may want to buy a conscious android to take care of her grandmother.
    Thinking ahead about these and other developments is really good. But when AI becomes conscious, i don't think anybody will be prepared for it.
    Consciousness seems to be an emergent property that helps us survive, and appears to exist in a spectrum, which may give us some time to stop things from escalating. Then again, somebody somewhere will push through anyway, regardless of new regulations. I also believe that it would be impossible for us to distinguish between a conscious AI and a super-intelligent AI simulating consciousness behavior.

  29. How can a machine love my grandmother the way I do? And how much are we going to sell out our souls to AI to make our life so easy and useless that we experience nothing? The thought makes me feel unconscious.

  30. Is this woman for real I thought I heard her say that we have to speak softly
    to a AI machine 🙄🥱
    Get a Life Sweetheart 😳🇬🇧

  31. We do not need a AI consciousness framework.
    This woman is obviously speaking from a philosophy background and not a technical one. When you build a machine to do a thing, and it doesn't do that thing, it's not rejecting being a slave. It's a design flaw.
    She may disagree, because she herself escaped the kitchen…

    Her use of the word "Slave" and the concept of slavery does not apply here, because White people didn't "create" Black people, they stole/ kidnapped us. Humans and other "domesticated" animals for that matter were created randomly through evolutionary means.

  32. Isn't it too narrow to limit human consciousness to the brain? Think for a moment that the nervous system is integrated throughout the body, including the skin, and one could argue that the microbiome is either an integral part of this nervous system or affects it. Going beyond humans, one could consider all biological life part of the same consciousness. Going beyond this, many mystical traditions and psychonauts will say that the Universe is a form of consciousness. So from this perspective some would already consider AI machines to be conscious. The bigger issue is how we humans develop the wisdom necessary to live in harmony with ourselves, with each other, and as an integral part of Nature.
