In the centuries since Galileo championed heliocentrism, science has gradually come to understand more and more of our universe's natural phenomena: gravity, quantum mechanics, even ripples in space-time. But the final frontier of science isn't out there, says cosmologist and MIT professor Max Tegmark; it's the world inside our heads: consciousness. It's a highly divisive issue: some scientists think it's unimportant or a question for philosophers, while others, like Tegmark, think that the human experience and the meaning and purpose of life would disappear if the lights of our consciousness were to go out. Ultimately, Tegmark thinks we can understand consciousness scientifically by finding the pattern of matter from which consciousness springs. What is the difference between your brain and the food you feed it? It's all quarks, says Tegmark; the difference is the pattern they're arranged into. So how can we develop a theory of consciousness? Can we build a consciousness detector? And can we really understand what we are without unlocking humanity's greatest mystery? Tegmark muses on all of this above.

Max's latest book is Life 3.0: Being Human in the Age of Artificial Intelligence.

Read more at BigThink.com: bigthink.com/videos/max-tegmark-why-consciousness-is-the-most-divisive-issue-in-science-today

Follow Big Think here:
YouTube: goo.gl/CPTsV5
Facebook: www.facebook.com/BigThinkdotcom
Twitter: twitter.com/bigthink

Transcript:

Of all the words I know, there's no word that makes many of my colleagues more emotional and prone to foam at the mouth than the one I'm just about to say: consciousness. A lot of scientists dismiss this as complete BS and as totally irrelevant, and a lot of others think this is the central thing: you have to worry about machines getting conscious and so on. What do I think? I think consciousness is both irrelevant and incredibly important. Let me explain why.
First of all, if you are chased by a heat-seeking missile, it's completely irrelevant to you whether this heat-seeking missile is conscious, whether it's having a subjective experience, whether it feels like anything to be that heat-seeking missile, because all you care about is what the heat-seeking missile does, not how it feels. That shows that it's a complete red herring to think that you're safe from future AI if it's not conscious. It's its behavior you want to make sure is aligned with your goals.

On the other hand, there is a way in which consciousness is incredibly important, I feel, and there's also a way in which it's absolutely fascinating. If we rewind 400 years or so, Galileo could have told you that if you throw an apple and a hazelnut they're going to move in exactly the shape of a parabola, and he could give you all the math for it, but he would have no clue why the apple was red and the hazelnut was brown, or why the apple was soft and the hazelnut was hard. That seemed to him beyond science, and science 400 years ago could only really say sensible things about this very limited domain of phenomena to do with motion. Then came Maxwell's equations, which told us all about light and colors, and that came within the realm of science. Then we got quantum mechanics, which told us why the apple is softer than the hazelnut and all the other properties of matter, and science has gradually conquered more and more natural phenomena. And if you ask now what science can do, it's actually a lot faster to describe what little it is that science cannot talk about sensibly. And I think the final frontier actually is consciousness.

People mean a lot of different things by that word. I simply mean subjective experience: the experience of colors, sounds, emotions and so on; that it feels like something to be me, which is quite separate from my behavior, which I could have even if I were a zombie and didn't experience anything at all, potentially.