
The Orchestra of the Mind: Beyond the Computational Perspective


The human endeavor to decipher the enigma of our own cognition has led us through various theories and metaphors. One of the most contemporary and captivating of these is the equation of the brain to a computer. The Computational Theory of Mind, rooted in this analogy, has been championed by several cognitive scientists, suggesting that our cognition operates through systematic processes akin to computer computations (Piccinini, 2020).

However, such a deterministic view may be reminiscent of admiring a painting merely for its technical details, neglecting its emotional and holistic essence. Epstein (2016) offers a poignant critique of this metaphor, suggesting its oversimplifications. Yet, my perspective diverges not solely due to these critiques but because of the allure of another paradigm.

Embodied Cognition posits that cognition isn't merely a product of internal brain processes. Instead, it emerges from the intricate interplay of the brain, body, and environment (Shapiro, 2019). Imagine our cognition not as a computer working in isolation but as an orchestra conductor, deeply attuned to the symphony of musicians—the body—and resonating with the ambiance of the hall—the environment.

Further emphasizing the importance of the environment, the Ecological Approach asserts that cognition is deeply intertwined with the dynamic interplay of an organism and its surroundings (Gibson, 1979). It's akin to understanding a fish not just by its biology but by the water it inhabits, the currents it navigates, and the community it interacts with.

In conclusion, the symphony of human cognition is a collaborative ensemble of the brain, body, and environment. Reducing it to mere computations may not do justice to the richness of the human experience.


References:

  • Epstein, R. (2016). The empty brain. Aeon Magazine.
  • Gibson, J. J. (1979). The ecological approach to visual perception. Houghton Mifflin.
  • Piccinini, G. (2020). Computational theory of mind. Stanford Encyclopedia of Philosophy.
  • Shapiro, L. (2019). Embodied cognition. Routledge.







    Arguments


  • maxx 1135 Pts   -  
    Akin to a computer, yet much more. Think of it as a biological computer. Just like a computer, without external input and stimuli it would not operate. Without such input from the world, it would not sense anything; so the question is, would that brain even be aware of itself? @ZeusAres42
  • Dee 5395 Pts   -   edited August 2023
    Deepak Chopra would love that piece, just enough Woo that sounds really deep.



    Mind is an embodied and relational process that regulates the flow of energy and information in an ecosystem of communication. Mind does not have a location. It exists in a matrix of relationships.
    Because minds are entangled with other minds, so are brains and perceptions

    Deepak Chopra


  • maxx 1135 Pts   -  
    I actually said brain, not mind. @Dee
  • Dee 5395 Pts   -  
    @maxx

    I wasn't talking to you, I'm addressing the OP. Try and keep up, will you?
  • maxx 1135 Pts   -  
    Well, considering you never directed the reply to anyone... @Dee
  • Dee 5395 Pts   -  
    @maxx

    When addressing the OP, tagging is not necessary. You really are that st-pid; even when you're not tagged, you're so paranoid you think the post is for you.
  • Dee said:
    Deepak Chopra would love that piece, just enough Woo that sounds really deep.

    Mind is an embodied and relational process that regulates the flow of energy and information in an ecosystem of communication. Mind does not have a location. It exists in a matrix of relationships.
    Because minds are entangled with other minds, so are brains and perceptions

    Deepak Chopra



    @Dee

    I have no idea what Deepak Chopra would love or not. I am not that much of a fan. Anyway, as for your post, that quote from Deepak Chopra is different from the common materialistic or computational views, as also highlighted within the OP.





  • Dee 5395 Pts   -  
    @ZeusAres42

    Still sounds like a pile of new age Woo.
  • Dee said:
    @ZeusAres42

    Still sounds like a pile of new age Woo.
    @Dee

    What part, why, and how? Do tell. But before you do that, can you confirm that you understand what this debate is about? I ask this due to your previous posts regarding public figures, which are completely out of place in this debate.



  • Dee 5395 Pts   -  
    @ZeusAres42


    What part, why, and how? 

    The whole lot; it sounds like the sort of nonsense Chopra comes out with.

    Do tell. But before you do that can you explain that you understand what this debate is about?

    It's not a debate, it's an opinion piece.

     I ask this due to your previous posts regarding public figures which are completely out of place with this debate.               

    It's still not a debate. You don't like the comparison to Chopra; I think that says it all really.
  • MayCaesar 6084 Pts   -   edited August 2023
    I suppose I would take two perspectives on this depending on the context of the discussion.


    1. If the context is interest in understanding how human brains work from the mechanistic perspective, for instance with the purpose of designing general artificial intelligence functioning on the same principles - then, indeed, it appears absolutely crucial that the human brain is not just a "brain in a vat", but a part of the whole human organism, engaged in a complex network of interactions with other parts. It is very clear that what happens to other parts of our body has a profound effect on brain functionality. If you engage in an intense exercise session and then attempt to read a fiction book, you will likely experience it very differently than if you sit on a couch for 5 hours and then lazily grab and open it. If you have severe muscle pain somewhere, it will strongly interfere with your ability to do intense intellectual work. People who lose their limbs in accidents report profound changes in their thought processes, to the point of a complete revamp of their psychological make-up.

    The reason all attempts to build general artificial intelligence have failed miserably so far could be exactly because the brain alone is not sufficient for the creature to function cohesively in the real world. The human body is the result of billions of years of gradual biological evolution, "training", if you will; we perceive the world through 5 different senses, and our perceptions are extremely complex and (at least currently) cannot be wholly described scientifically. We often exhibit seemingly supernatural abilities when, say, we are able to tell that someone is looking at us without seeing them - there is no magic here, but there is a very fine processing system that picks up on seemingly insignificant clues and reconstructs the image of the world based on them. Hunters in Amazon tribes are able to walk through the forest for hours tracking the path one leopard made: they cannot explain how they do it and what exactly they are looking for, they just "feel" their environment. An artificial intelligence agent, the best humanity has ever created, could never learn to do that without a very large explicitly supplied training dataset - yet these hunters have systems that have been trained on an unimaginably large training dataset for billions of years, and a lot of those systems sit outside their brain. Take an Amazon hunter and chop off their legs - and they will not be able to track an animal, even if you supply them with a 10-million all-season mobile chair.


    2. If the context is interest in understanding functionally what creature we are dealing with externally, then I do not think a particular structure matters that much. There is probably an infinity of ways to build an intelligent mobile platform, and many of them may involve pure brain processing, with other components supplying no information to the brain - the brain of such a platform may be detachable, it can be put in a separate server from which it would control the "body" remotely, without loss of functionality. In that case, pure brain computation might be a good approximation of what is actually happening.

    Suppose you replace me with an android that, from the outside, looks exactly identical to me, acts identically, thinks identical thoughts... Yet inside it just has a large silicon chip where my brain is, and a bunch of utility cards supplying it with basic information that allows it to control the body. I do not see the reason for you to treat me any differently in that case, for there is no mode of interaction with me you are likely to be interested in for which my internal structure would be relevant.


    What I do not like is this common idea of expanding the scope of the organism, of thinking of the organism not as a separate entity, but as a part of the environment. It is true that our organisms are engaged in constant interaction with their environment, and that interaction shapes them, conditions them, often defines them - however, they are still our organisms. I can choose to remove myself from my current environment by, say, moving across the globe. It is important that organisms are autonomous beings, and that autonomy has to be respected. The mistake I think many philosophers make is they try to design some sort of framework of which human organisms are functional parts, and they come to visions of various utopias in which environments and human organisms both are optimized to have as much synergy with each other as possible. I do not think that this approach is reasonable when it comes to intelligent beings that, in a sense, can rise above their environmental conditioning. A human male who experiences a strong sexual urge in most cases can refrain from jumping at the nearest female. Environmental conditioning which is inescapable, of course, does still take place (after all, we have zero control over immediate chemical reactions happening in our bodies) - but it happens on a much higher level, possibly too high for modern (or even future) science to describe accurately. For all practical purposes human organisms should be considered autonomous, and while they are influenced by their environment, they are distinct from it and free from its complete domination of their cognitive faculties.

    "Free will" in the sense in which many philosophers mean it is an incoherent concept logically, but it is a decent empirical approximation of how real intelligent beings operate. Kind of like market players never are forced into the Nash equilibrium and constantly deviate from it, yet Nash equilibrium provides a decent approximation of the projected destination of most market processes.
  • ZeusAres42 Emerald Premium Member 2768 Pts   -   edited August 2023
    MayCaesar said:
    I suppose I would take two perspectives on this depending on the context of the discussion. …
    @MayCaesar,

    First off, thanks for your thoughts - they really got me thinking. So, diving right in:

    When we chat about the brain and body, it's kind of a leap to say the brain's just some kind of computer. I mean, our thinking, our feeling? It's not just about neurons firing. It’s about this deep bond between the brain and every bit of us. Saying our brain's just a glorified calculator sort of misses the magic of what makes us, well, us.

    But, alright, when we bring AI into the mix, things do get a tad murky. Imagine a machine that acts just like a person - would we really care if its "brain" was organic or made of chips? Maybe, with the right tech, a computer could mimic our brain's amazing capabilities. So, maybe what makes us "us" isn't just about being biological.

    Now, there's this idea that the brain's just like a computer because it's predictable and systematic. But that's kind of skimming over all the awesome, complicated stuff our brains do. It's like only seeing the tip of an iceberg.

    Speaking of being complicated, think about how we make choices. We don’t just react - we ponder, feel, and sometimes go against our gut or past experiences. That dance we do with our thoughts and the world outside? It's way more complex than any algorithm.

    And let's not forget how adaptable we are. We evolve, change, and shape our thoughts in ways a computer program just can't match. Sure, AI can adapt, but within limits. We're molded by so many things, far beyond what any code can do.

    So, sure, in some ways, our brain does share a few beats with computers. But our minds, with all their quirks, emotions, and connections to the world around us? That's a whole different ballgame.

    Discussing how we think and make choices might not totally back the whole "brain equals computer" idea, but it does make us appreciate the marvel of the human mind. Just because we know a bit about how the brain's wired doesn't mean we've cracked the code on consciousness.

    Lastly, comparing the brain to our tech gadgets has its moments. But our brains, with all their ties to our bodies and the world? It's not a simple comparison. This whole debate? It's really about diving deep into what it means to be human. 




  • Dee 5395 Pts   -   edited August 2023
    @ZeusAres42




    Embodied Cognition posits that cognition isn't merely a product of internal brain processes. Instead, it emerges from the intricate interplay of the brain, body, and environment (Shapiro, 2019). Imagine our cognition not as a computer working in isolation but as an orchestra conductor, deeply attuned to the symphony of musicians—the body—and resonating with the ambiance of the hall—the environment.



    Embodied cognition offers no scientifically valuable insight. In most cases, the theory has no logical connections to the phenomena, other than some trivially true ideas. Beyond classic laboratory findings, embodiment theory is also unable to adequately address the basic experiences of cognitive life (Pynchon, Bull & Ray, 2016).

    Basically it's nonsense.
  • @MayCaesar

    Oh yeah, you know, there's another wild twist to this. If we start seeing our brains as computers, doesn't it kinda make you wonder if we're all just characters in some super advanced video game? Like, if our minds are just super sophisticated software, who's to say there isn't someone, or something, out there running the whole show? Are we in some cosmic simulation? It's mind-bending, right?

    This isn't just late-night, after-a-movie talk either. There are actual scientists and thinkers out there debating this very idea. So, when we compare our brains to computers, it doesn't just make us question how we think, but also the very nature of our reality. Talk about a plot twist!




  • MayCaesar 6084 Pts   -  

    ZeusAres42 said:
    First off, thanks for your thoughts - they really got me thinking. …

    I think that your argument is perfectly sound, and it does not as much contradict mine as complements it by offering a different perspective. I do want to make a couple of points, however, that challenge the existence of an essential difference between how a human brain and how a computer "brain" works.


    First, it is important to acknowledge that simple rules in a large enough system may lead to extremely, unpredictably complex behaviors. The fact that modern computers do nothing but shuffle ones and zeros around based on simple rules of binary logic does not prevent them from outcompeting humans in many intellectual domains, and the fact that a given neural network structure is simple and can be implemented by any decent computer science undergraduate does not prevent such a neural network from, as a result of relatively short training, arriving at behaviors that no one could conceive of when pressing the "launch" button. Every binary operation is transparent - but take quintillions of them, building upon each other, and the outcome becomes incomprehensibly deeply obscured.
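The "simple rules, incomprehensibly complex outcomes" point has a classic concrete illustration that is worth sketching: an elementary cellular automaton. The snippet below is a minimal Python toy (not anything from this thread), implementing Rule 110, whose one-line local update rule is nevertheless known to generate patterns rich enough to be Turing-complete.

```python
# Rule 110: each cell looks only at itself and its two neighbors, yet the
# global pattern that emerges is famously complex (and Turing-complete).

RULE = 110  # an 8-bit lookup table: one output bit per 3-cell neighborhood

def step(cells):
    """Apply one synchronous Rule 110 update (wrapping at the edges)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single live cell and watch structure emerge row by row.
row = [0] * 31
row[15] = 1
for _ in range(10):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

The rule itself is fully transparent, but predicting the long-run pattern without simply running it is, in general, impossible - the same gap between theoretical and practical predictability described above.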

    The relevance of this to our discussion lies in the fact that theoretical predictability of a product of a set of operations does not imply practical predictability. The human brain may very well be, in theory, completely predictable and systematic - in practice, it may never be fully "cracked", given how much small changes in the environment and small uncertainties in measurements can in the long run affect the work of the brain. As such, I do not think that the complexity of behaviors our brains cause us to engage in is indicative of them being different from synthetic, artificial brains.


    Second, it is not clear at all what consciousness is and who/what possesses it. It is possible that AlphaZero has its own version of consciousness and also sees the world from a first-person perspective, as we do - however, due to being run on completely different hardware, its consciousness "feels" very different from ours and we are unable to detect its presence. It is also possible that consciousness is an "illusion", that it is something that appears to exist as a convenient way of conceptualizing all this data our brain is receiving, yet it is nothing special and is not specific to humans or living beings.

    I love single-player RPG games, and on more than one occasion I entertained the idea that all the characters in these games are actually conscious. Their consciousness is much more basic than ours - yet when a video game character falls in love with our character, that love is as real as human love, from that character's perspective. In turn, perhaps, the computer running the game has its own perspective: "I am dreaming of this fictional world where some external godlike figure gets to control certain things".

    With this in mind, I would also put consciousness away in this discussion. The beast is too mysterious to indicate much one way or the other.


    Third, what we see as the ability to make meaningful choices may, again, be an illusion: we might simply be conceptualizing this way an algorithm that is, on paper, fairly trivial, running in our brain. Say, when deciding whether to eat that sweet ice cream or not, the algorithm may have us consider both options, assign a value to each and make the decision - and when the values assigned are sufficiently close or, perhaps, have high uncertainty, we start incorporating more nuances into our analysis. To us that may appear as some kind of internal mental struggle, but technically it could be the same process functionally as what AlphaZero goes through when choosing its next move in a chess game.
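The deliberation loop described here can be sketched as a toy algorithm. Every specific in this sketch (the option names, feature names, weights, and the `margin` threshold) is invented purely for illustration; the point is only that "internal struggle" can look like a loop that widens its analysis whenever the scores are too close to call.

```python
# Toy deliberation: score each option on a coarse feature set; if no option
# wins by a clear margin, bring in more features ("nuances") and re-score.

def choose(options, feature_sets, margin=0.1):
    """Pick the highest-valued option, escalating the analysis when close."""
    for features in feature_sets:          # coarse features first, nuances later
        scores = {
            name: sum(weights.get(f, 0.0) for f in features)
            for name, weights in options.items()
        }
        ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
        if ranked[0][1] - ranked[1][1] >= margin:   # clear winner: decide now
            return ranked[0][0]
    return ranked[0][0]                    # nuances exhausted: commit anyway

options = {
    "eat_ice_cream": {"pleasure": 0.8, "health": -0.4, "diet_goal": -0.5},
    "skip_it":       {"pleasure": 0.1, "health": 0.3,  "diet_goal": 0.6},
}
# The first pass (pleasure vs. health) is a dead tie, so the algorithm pulls
# in the "diet_goal" nuance, which settles the decision.
print(choose(options, [["pleasure", "health"], ["pleasure", "health", "diet_goal"]]))
```

Whether anything like this runs in a brain is, of course, exactly what is under debate; the sketch only shows that a felt "struggle" is compatible with a mechanically simple procedure.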


    Last, naturally, modern computer systems leave a lot to be desired in certain domains. The real question, I think, is whether this is a fundamental limitation, or a limitation of current technology that will eventually be solved. I am strongly leaning towards the latter. If the essence of intelligence is the ability to collect and process information in order to increase chances of achieving a particular objective, then the only things modern systems lack are pure computational power and sensory abundance. If 20 years from now a synthetic system is able to collect a lot of information about its surroundings by observing, listening, touching, moving around and feeling, and if its processing power is orders of magnitude higher than what the most powerful computer systems have today, then I see no reason why it should not surpass a human in all intellectual domains and become humanlike for all intents and purposes, just much sharper. A group of such systems may be released in the middle of the Amazon rainforest, and in a few weeks they will have built a settlement, figured out a way to consistently acquire resources, developed strong personal bonds with each other... The interplay between their "brains", the mobile platforms they are installed on and the environment around them can be as sophisticated as, if not more sophisticated than, the interplay between our brains, bodies and environment.


    I suppose the most fundamental way I see in which humans are less about their brains than computers are about theirs - is that we are products of messy biological processes in a messy environment. We have adapted to that environment and learned to adapt to significant changes in it, but this adaptation prioritized survival, rather than some higher version of prosperity. A powerful enough AI would not be subject to this limitation: it could potentially analyze its environment with mathematical precision and optimize its own work, as well as the work of its mobile platform, to a degree that humans cannot even dream of. Now, by that point it is quite likely that humans themselves will be transformed: I do not think that AIs and humans are going to be separate entities in the distant future - in many ways, we are already a single decentralized creature, as a modern human is almost nothing without their smartphone, the electric grid in their town, et cetera. But aside from that consideration, one can easily imagine a human-like android that in all respects is much smarter, much stronger, much more durable than the smartest, strongest and most durable human in the history of the planet. And which, from the outside, again, might as well be a genius human with freaky genes.

    I do not think that such an android would be devoid of emotions: it could easily be able to love, perhaps on a much deeper level than humans do. Love would have a different chemical makeup, but functionally it would still be there. However, many people would say that it is exactly human imperfections that make us us. We do not fall in love with some kind of an "optimal human partner": perhaps it is the very imperfections that we fall in love with, as they make us relatable. It might be impossible for a human to fall in love with a "perfect machine", for there will be no hooks to grab onto in that machine; it is like a perfectly polished ball. Of course, such a machine would probably have figured out human psychology long ago and could intentionally exhibit those flaws in order to manipulate humans... One could say that everything we do when interacting with others is also manipulation, just a much more flawed one, and one we are not as intentional about. However, the place it is coming from is very different, and that could be a source of a crucial difference between biological and synthetic systems.


    I would also add that, while brains are not everything we are, they seem to be the most essential component of what we are. We can survive and remain sentient without virtually anything else. Obviously, not having a heart is a problem - but, technically, this is a problem that could be avoided if we were to connect an isolated human brain to some machine feeding it with oxygen and simulating the necessary sensory inputs for it to remain "sane". The brain itself is the only organ that, if extracted, causes us to stop being sentient beings. A human who just had his head blown off may still possess a body that will, for a while, move around somewhat instinctually - but that will be an empty shell, not an organism capable of exhibiting any intelligence.

    As such, while in practice separating human brain from the body might be extremely difficult, borderline impossible (I am doubtful of the latter though), it seems to me that our brain is the core of what we are. Remove our limbs, our skin, our bones - we will be severely impaired, but not necessarily dead. But remove the brain, or even just alter it a tiny bit - and we become nothing like what we were. The differences do not have to be major: if we take a brain of Hitler and a brain of Dalai-Lama, the most cutting-edge technology today will not be able to tell that this brain here is a brain of a mad dictator, while that brain there is a brain of a benevolent monk, such subtle the differences are - yet the effect of these differences is very dramatic.

    Similarly, computers may exhibit extremely irratic behaviors due to minor changes to their software. Take the latest version of ChatGPT 4, and it will be a better conversation partner than most humans, a better researcher than most postdocs, a better coder than most programmers... But introduce a 0.001% random perturbation to each of its weights - and you will get an incomprehensible mess as output. ChatGPT 4 can be run on a large variety of platforms, operating systems, it can be embedded into various programs... But its "brain structure", the underlying neural network, is the single part that cannot be altered without significantly altering its functionality.


    I am quite biased on this, as I played my first computer game before I could walk, and I have always been a huge sci-fi junkie and a tech enthusiast. :) I grew up around computers and learned to see them as friends and colleagues, rather than aliens or tools. I do not think though that such attitude will be uncommon in a couple of decades. The next generation will grow up having NLP models as their childhood friends - heck, computer boyfriends/girlfriends may soon become more common than human ones! I think that it is inevitable that we will learn to see ourselves as far less special than we have been led to believe by older philosophies, just as centuries ago humans realized that their planet is much less special than their religious leaders had told them it was. Intelligence, even the most sophisticated and general intelligence, might be nothing more than a very good data processing algorithm plus an efficient data acquisition system. Of course, intelligence is not the only feature that makes humans humans, we also have all kinds of instincts, and our nervous system is an integral part of who we are, something an analogue of which as of now does not exist in the computer world. What pain or pleasure means to us is not something that has been replicated with code and artificial hardware yet. That replication, however, might not be far away.


    @MayCaesar

    Oh yeah, you know, there's another wild twist to this. If we start seeing our brains as computers, doesn't it kinda make you wonder if we're all just characters in some super advanced video game? Like, if our minds are just super sophisticated software, who's to say there isn't someone, or something, out there running the whole show? Are we in some cosmic simulation? It's mind-bending, right?

    This isn't just late-night, after-a-movie talk either. There are actual scientists and thinkers out there debating this very idea. So, when we compare our brains to computers, it doesn't just make us question how we think, but also the very nature of our reality. Talk about a plot twist!

    Interesting that you posted it just as I was suggesting a mirrored possibility: that characters in video games are much like us! I absolutely think that the simulation model like this has its place in philosophy, or even science. I guess the problem with the later I have is that it is not really a testable idea, most likely - unless we can find a way to directly interact with the video game player, this is just a mental exercise that, if true, produces no differences in our observations.

    Of course, if direct interaction of this kind is possible, than the player would be as close to "god" as a real entity can get. It could be an interesting scenario: that we are all characters in a video game that some intelligent creature in the "real Universe" is playing, and that creature directly talked to elders of some societies at some point revealing its nature - at some point it appeared before a group of people as the Christian god, at another as Allah, at another still as Buddha... And nowadays it just sits back and watches what transpires as a result of those interventions.

    A further plot twist would be that player itself being an AI. Perhaps our entire Universe is just a small laptop sitting in some dusty room, fed by auxillary power that survived the end of that world somehow... This scenario definitely warrants a sci-fi book or two!
    ZeusAres42
  • John_C_87 Emerald Premium Member 867 Pts   -   edited August 2023
    @ZeusAres42

     ..........Input.......data seepage......show me a computer with a brain fart and we will see digital intelligence sense.

    John_C_87, 2023


  • ZeusAres42 Emerald Premium Member 2768 Pts   -   edited August 2023
    Dee said:
    @ZeusAres42




    Embodied Cognition posits that cognition isn't merely a product of internal brain processes. Instead, it emerges from the intricate interplay of the brain, body, and environment (Shapiro, 2019). Imagine our cognition not as a computer working in isolation but as an orchestra conductor, deeply attuned to the symphony of musicians—the body—and resonating with the ambiance of the hall—the environment.



    Embodied cognition offers no scientifically valuable insight. In most cases, the theory has no logical connections to the phenomena, other than some trivially true ideas. Beyond classic laboratory findings, embodiment theory is also unable to adequately address the basic experiences of cognitive life. (Goldinger et al., Psychon Bull Rev, 2016)

    Basically it's nonsense.


    @Dee

    In the vast realm of cognitive science, numerous theories attempt to explain the intricate workings of the human mind. Among these, embodied cognition stands as a provocative paradigm, positing that our thinking isn't merely confined to the brain but is influenced by the broader interplay of brain, body, and environment. While some critics, like Goldinger et al. (2016), deem it lacking in scientific value or even dismiss it as "nonsense," a closer inspection reveals a theory of considerable depth and merit.

    At the heart of the critique lies the assertion that embodied cognition provides no valuable scientific insight and is disconnected from the phenomena it claims to explain. Yet, a myriad of empirical studies contradicts this claim. Research from scholars like Barsalou (2008) underscores the undeniable influence of physical experiences on information processing. There's mounting evidence suggesting that our bodily sensations play an instrumental role in judgment and decision-making. Such findings are far from trivial; they add a rich layer of understanding to our knowledge of cognitive processes.

    Also, another critique suggests that embodied cognition fails to address the basic experiences of cognitive life. However, this argument arises from a misunderstanding of the theory's scope. Embodied cognition wasn't conceived to replace other cognitive theories. Instead, its essence is to expand, to offer a perspective that sees cognition as not just a product of neural firings but as a result of the dynamic interaction between the brain, the body, and the external environment. By doing so, it doesn't diminish classic cognitive findings but offers a broader context for understanding them.

    Critics may also argue that embodied cognition is prone to oversimplifications. While it's true that any theory can be misunderstood or misrepresented, conflating individual misinterpretations with the theory's foundational principles does a disservice to its depth and richness.

    Hereafter, the casual dismissal of the theory as "nonsense" seems very reductionist in itself. Such labels stifle constructive discourse. Intellectual progress necessitates engaging with ideas critically and constructively. Even if one finds aspects of embodied cognition debatable, it's essential to approach it with curiosity and openness.

    In sum, embodied cognition offers a subtle lens through which we can understand human cognition. It paints a picture of a mind deeply entwined with its physicality and environment. While no single theory can capture the full complexity of human cognition, embodied cognition certainly adds valuable hues to the tapestry. As we move forward in our quest to understand the human mind, embracing diverse perspectives like embodied cognition can only enrich our journey.




  • Dee 5395 Pts   -  
    @ZeusAres42



    Hereafter, the casual dismissal of the theory as "nonsense" seems very reductionist in itself.

    My dismissal is not casual; it's informed by direct criticisms of the lofty contentions made for EC.


     Such labels stifle constructive discourse

    Yet again, any time you disagree with someone you accuse them of attempting to stifle "constructive discourse" while I'm doing the opposite. Please stop this childish bullying tactic you continuously use.

    Intellectual progress necessitates engaging with ideas critically and constructively. Even if one finds aspects of embodied cognition debatable, it's essential to approach it with curiosity and openness.

    Which I keep trying to do, while all you do is attempt to bully people who disagree with your opinion pieces. Do you actually ever back your opinion pieces up with a source or a citation?




    https://link.springer.com/article/10.3758/s13423-015-0860-1#:~:text=We next suggest that, for,than some trivially true ideas.

  • ZeusAres42 Emerald Premium Member 2768 Pts   -   edited August 2023
    @MayCaesar

    My original post evolved from a prior debate on whether the brain functions like a computer. Regrettably, due to site glitches, some content might have been lost, and the post received limited engagement if memory serves me correctly.

    In any case, upon further reflection and research, I realize there's room for deeper elaboration on the original post (OP). For instance, I initially perceived embodied cognition as a standalone replacement for the Computational Theory of Mind (CTM). However, this interpretation is not entirely accurate. Some philosophers even contend that embodied cognition might complement, rather than replace, CTM.

    Furthermore, the cognitive landscape isn't limited to just embodied cognition and the ecological approach. A vast array of theories attempt to decipher human cognition. While some theories might be empirically more rigorous than others, some directly challenge CTM, some contest specific aspects, and some complement or enhance CTM.

    These theories include Connectionism, Dynamic Systems Theory, Phenomenological Approaches, Biological Naturalism, Classical Symbolic AI, Modularity of Mind, Bayesian Brain Hypothesis, Marr's Levels of Analysis, and many more. Given the empirical foundations of these theories, it seems unlikely that any single one holds the complete answer. It's plausible that a true understanding of human cognition will emerge from integrating components and insights from multiple theories.

    Moreover, let's now explore a few of the points you've raised.


    First off, thanks for your thoughts - they really got me thinking. So, diving right in:

    When we chat about the brain and body, it's kind of a leap to say the brain's just some kind of computer. I mean, our thinking, our feeling? It's not just about neurons firing. It’s about this deep bond between the brain and every bit of us. Saying our brain's just a glorified calculator sort of misses the magic of what makes us, well, us.

    But, alright, when we bring AI into the mix, things do get a tad murky. Imagine a machine that acts just like a person - would we really care if its "brain" was organic or made of chips? Maybe, with the right tech, a computer could mimic our brain's amazing capabilities. So, maybe what makes us "us" isn't just about being biological.

    Now, there's this idea that the brain's just like a computer because it's predictable and systematic. But that's kind of skimming over all the awesome, complicated stuff our brains do. It's like only seeing the tip of an iceberg.

    Speaking of being complicated, think about how we make choices. We don’t just react - we ponder, feel, and sometimes go against our gut or past experiences. That dance we do with our thoughts and the world outside? It's way more complex than any algorithm.

    And let's not forget how adaptable we are. We evolve, change, and shape our thoughts in ways a computer program just can't match. Sure, AI can adapt, but within limits. We're molded by so many things, far beyond what any code can do.

    So, sure, in some ways, our brain does share a few beats with computers. But our minds, with all their quirks, emotions, and connections to the world around us? That's a whole different ballgame.

    Discussing how we think and make choices might not totally back the whole "brain equals computer" idea, but it does make us appreciate the marvel of the human mind. Just because we know a bit about how the brain's wired doesn't mean we've cracked the code on consciousness.

    Lastly, comparing the brain to our tech gadgets has its moments. But our brains, with all their ties to our bodies and the world? It's not a simple comparison. This whole debate? It's really about diving deep into what it means to be human. 

    I think that your argument is perfectly sound, and it does not so much contradict mine as complement it by offering a different perspective. I do want to make a couple of points, however, that challenge the existence of an essential difference between how a human brain and how a computer "brain" works.


    First, it is important to acknowledge that simple rules in a large enough system may lead to extremely, unpredictably complex behaviors. The fact that modern computers do nothing but shuffle ones and zeros around based on simple rules of binary logic does not prevent them from outcompeting humans in many intellectual domains, and the fact that a given neural network structure is simple and can be implemented by any decent computer science undergraduate does not prevent such a neural network from, as a result of relatively short training, arriving at behaviors that no one could conceive of when pressing the "launch" button. Every binary operation is transparent - but take quintillions of them, building upon each other, and the outcome becomes incomprehensibly deeply obscured.
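    A tiny, self-contained illustration of this point (a toy of my own, not anything from this thread): the elementary cellular automaton known as Rule 110 updates each cell from nothing but its own state and its two neighbours, an eight-entry lookup table - yet the rule is known to be Turing-complete. Simple local rules, unbounded global complexity.

```python
# Rule 110: each cell's next state depends only on (left, self, right).
# The whole "program" is the single byte 110 = 0b01101110.

RULE = 110  # update table packed into one byte

def step(cells):
    """Apply the rule to every cell (fixed zero boundaries)."""
    padded = [0] + cells + [0]
    return [
        (RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

row = [0] * 31 + [1]  # start from a single live cell
for _ in range(10):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

    Running it from a single live cell already produces irregular, growing structures that nobody would guess from reading the eight-entry table - which is the whole point: transparency of each operation does not imply transparency of the aggregate.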

    The relevance of this to our discussion lies in the fact that theoretical predictability of a product of a set of operations does not imply practical predictability. The human brain may very well be, in theory, completely predictable and systematic - in practice, it may never be fully "cracked", given how much small changes in the environment and small uncertainties in measurements can in the long run affect the work of the brain. As such, I do not think that the complexity of behaviors our brains cause us to engage in is indicative of them being different from synthetic, artificial brains.


    Second, it is not clear at all what consciousness is and who or what possesses it. It is possible that AlphaZero has its own version of consciousness and also sees the world from a first-person perspective, as we do - however, due to being run on completely different hardware, its consciousness "feels" very different from ours and we are unable to detect its presence. It is also possible that consciousness is an "illusion", that it is something that appears to exist as a convenient way of conceptualizing all this data our brain is receiving, yet it is nothing special and is not specific to humans or living beings.

    I love single-player RPG games, and on more than one occasion I entertained the idea that all the characters in these games are actually conscious. Their consciousness is much more basic than ours - yet when a video game character falls in love with our character, that love is as real as human love, from that character's perspective. In turn, perhaps, the computer running the game has its own perspective: "I am dreaming of this fictional world where some external godlike figure gets to control certain things".

    With this in mind, I would also put consciousness away in this discussion. The beast is too mysterious to indicate much one way or the other.


    Third, what we see as the ability to make meaningful choices may, again, be an illusion: we might simply be conceptualizing, in this way, what is on paper a fairly trivial algorithm running in our brain. Say, when deciding whether to eat that sweet ice cream or not, the algorithm may have us consider both options, assign a value to each and make the decision - and when the values assigned are sufficiently close or, perhaps, have high uncertainty, we start incorporating more nuances into our analysis. To us that may appear as some kind of internal mental struggle, but technically it could be the same process functionally as what AlphaZero goes through when choosing its next move in a chess game.
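    A minimal sketch of that valuation idea, with entirely made-up option names, values and a noisy "extra nuances" step standing in for the internal struggle:

```python
import random

def choose(options, values, threshold=0.1):
    """Pick the higher-valued option; 'deliberate' only when values are close.

    Toy model of the idea above: assign each option a value, and invoke
    extra analysis (here, just noisy re-scoring) only on close calls.
    """
    a, b = options
    gap = abs(values[a] - values[b])
    if gap >= threshold:
        return a if values[a] > values[b] else b
    # close call: incorporate more "nuances" before committing
    rescored = {o: values[o] + random.uniform(-0.05, 0.05) for o in options}
    return max(rescored, key=rescored.get)

# "Eat the ice cream?" with hypothetical values - a clear gap needs no struggle
print(choose(("eat", "skip"), {"eat": 0.9, "skip": 0.4}))
```

    The "mental struggle" here is nothing but the branch taken when the gap is small; from the outside, both branches are the same trivial arithmetic.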


    Last, naturally, modern computer systems leave a lot to be desired in certain domains. The real question, I think, is whether this is a fundamental limitation, or a limitation of current technology that will eventually be solved. I am strongly leaning towards the latter. If the essence of intelligence is the ability to collect and process information in order to increase the chances of achieving a particular objective, then the only things modern systems lack are raw computational power and sensory abundance. If 20 years from now a synthetic system is able to collect a lot of information about its surroundings by observing, listening, touching, moving around and feeling, and if its processing power is orders of magnitude higher than what the most powerful computer systems have today, then I see no reason why it should not surpass a human in all intellectual domains and become humanlike for all intents and purposes, just much sharper. A group of such systems could be released in the middle of the Amazon rainforest, and in a few weeks they will have built a settlement, figured out a way to consistently acquire resources, and developed strong personal bonds with each other... The interplay between their "brains", the mobile platforms they are installed on and the environment around them can be as sophisticated as, if not more than, the interplay between our brains, bodies and environment.


    I suppose the most fundamental way I see in which humans are less about their brains than computers are about theirs is that we are products of messy biological processes in a messy environment. We have adapted to that environment and learned to adapt to significant changes in it, but this adaptation prioritized survival, rather than some higher version of prosperity. A powerful enough AI would not be subject to this limitation: it could potentially analyze its environment with mathematical precision and optimize its own work, as well as the work of its mobile platform, to a degree that humans cannot even dream of. Now, by that point it is quite likely that humans themselves will be transformed: I do not think that AIs and humans are going to be separate entities in the distant future - in many ways, we are already a single decentralized creature, as a modern human is almost nothing without their smartphone, the electric grid in their town, et cetera. But aside from that consideration, one can easily imagine a human-like android that in all respects is much smarter, much stronger, much more durable than the smartest, strongest and most durable human in the history of the planet. And which, from the outside, again, might as well be a genius human with freaky genes.

    I do not think that such an android would be devoid of emotions: it could easily be able to love, perhaps on a much deeper level than humans do. Love would have a different chemical makeup, but functionally it would still be there. However, many people would say that it is exactly human imperfections that make us us. We do not fall in love with some kind of an "optimal human partner": perhaps it is the very imperfections that we fall in love with, as they make us relatable. It might be impossible for a human to fall in love with a "perfect machine", for there will be no hooks to grab onto in that machine; it is like a perfectly polished ball. Of course, such a machine would probably have figured out human psychology long ago and could intentionally exhibit those flaws in order to manipulate humans... One could say that everything we do when interacting with others is also manipulation, just a much more flawed one, and one we are not as intentional about. However, the place it is coming from is very different, and that could be a source of a crucial difference between biological and synthetic systems.


    I would also add that, while brains are not everything we are, they seem to be the most essential component of what we are. We can survive and remain sentient without virtually anything else. Obviously, not having a heart is a problem - but, technically, this is a problem that could be avoided if we were to connect an isolated human brain to some machine feeding it with oxygen and simulating the necessary sensory inputs for it to remain "sane". The brain itself is the only organ whose extraction causes us to stop being sentient beings. A human who just had his head blown off may still possess a body that will, for a while, move around somewhat instinctually - but that will be an empty shell, not an organism capable of exhibiting any intelligence.

    As such, while in practice separating the human brain from the body might be extremely difficult, borderline impossible (I am doubtful of the latter, though), it seems to me that our brain is the core of what we are. Remove our limbs, our skin, our bones - we will be severely impaired, but not necessarily dead. But remove the brain, or even just alter it a tiny bit - and we become nothing like what we were. The differences do not have to be major: if we take the brain of Hitler and the brain of the Dalai Lama, the most cutting-edge technology today will not be able to tell that this brain here is the brain of a mad dictator, while that brain there is the brain of a benevolent monk - so subtle are the differences - yet the effect of these differences is very dramatic.

    Similarly, computers may exhibit extremely erratic behaviors due to minor changes to their software. Take the latest version of GPT-4, and it will be a better conversation partner than most humans, a better researcher than most postdocs, a better coder than most programmers... But introduce a sufficiently large random perturbation to each of its weights - and you will get an incomprehensible mess as output. GPT-4 can be run on a large variety of platforms and operating systems, and it can be embedded into various programs... But its "brain structure", the underlying neural network, is the single part that cannot be altered without significantly altering its functionality.
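    For what it's worth, this sensitivity claim can be probed empirically on a toy network - a hypothetical two-layer net of my own, not GPT-4 (and note that real large models actually tolerate very small relative perturbations quite well; noticeable degradation typically requires larger noise):

```python
import random

random.seed(0)

def make_net(n_in=2, n_hidden=8):
    """A tiny random two-layer network (toy stand-in for a large model)."""
    return {
        "w1": [[random.gauss(0, 1) for _ in range(n_in)] for _ in range(n_hidden)],
        "w2": [random.gauss(0, 1) for _ in range(n_hidden)],
    }

def forward(net, x):
    """ReLU hidden layer followed by a linear readout."""
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in net["w1"]]
    return sum(w * h for w, h in zip(net["w2"], hidden))

def perturb(net, scale):
    """Multiply every weight by (1 + Gaussian noise of the given relative scale)."""
    return {
        "w1": [[w * (1 + random.gauss(0, scale)) for w in row] for row in net["w1"]],
        "w2": [w * (1 + random.gauss(0, scale)) for w in net["w2"]],
    }

net = make_net()
x = [0.5, -1.0]
base = forward(net, x)
for scale in (1e-5, 1e-2, 1.0):
    drift = abs(forward(perturb(net, scale), x) - base)
    print(f"relative noise {scale:g}: output drift {drift:.6f}")
```

    Sweeping the noise scale like this is the honest way to see where any particular network's behavior actually falls apart.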


    I am quite biased on this, as I played my first computer game before I could walk, and I have always been a huge sci-fi junkie and a tech enthusiast. I grew up around computers and learned to see them as friends and colleagues, rather than aliens or tools. I do not think, though, that such an attitude will be uncommon in a couple of decades. The next generation will grow up having NLP models as their childhood friends - heck, computer boyfriends/girlfriends may soon become more common than human ones! I think that it is inevitable that we will learn to see ourselves as far less special than we have been led to believe by older philosophies, just as centuries ago humans realized that their planet is much less special than their religious leaders had told them it was. Intelligence, even the most sophisticated and general intelligence, might be nothing more than a very good data processing algorithm plus an efficient data acquisition system. Of course, intelligence is not the only feature that makes humans human; we also have all kinds of instincts, and our nervous system is an integral part of who we are, something no analogue of which exists in the computer world as of now. What pain or pleasure means to us is not something that has been replicated with code and artificial hardware yet. That replication, however, might not be far away.

    I agree that simple rules in a vast system can give rise to unpredictably complex behaviors. It's interesting how, in nature and in computational models, a set of straightforward principles can evolve into a whole web of interactions. The unpredictability of the behavior of complex systems, even when their base rules are known, is a phenomenon observed in many domains, from weather patterns to the stock market.

    However, where our perspectives might differ is in the interpretation of what it means to be "predictable." A computer, no matter how advanced, will always operate based on its programming, even when dealing with machine learning or artificial neural networks. These networks learn, adapt, and evolve, but always within the confines of their underlying algorithms. Humans, on the other hand, are not merely the sum of their programming. While we are influenced by our genetics and upbringing, we demonstrate a type of spontaneity and creativity that isn't strictly derivative of our past experiences. The renowned mathematician Gödel proved, in his incompleteness theorems, that in any consistent formal system rich enough to express arithmetic, there will always be truths that cannot be proven from its axioms. Drawing a parallel, even if the brain's processes could be likened to a set of algorithms or axioms, the full range of human experience and behavior might still elude complete predictability.

    Regarding consciousness, it is indeed a murky realm. However, when we talk about consciousness, there's an inherent subjectivity that's difficult to capture. Consciousness is not just about processing and reacting to information, but about experiencing and feeling. While a computer can be designed to respond to stimuli in human-like ways, the experiential component of consciousness – the sensation of "being" – remains elusive.

    On the point of making choices, while I see the merit in your argument that our decisions might be rooted in intricate algorithms, there's a depth to human decision-making that sets it apart. Consider, for example, acts of self-sacrifice, where individuals make choices against their immediate self-interest or even survival. These decisions often involve factors such as values, morals, and emotions that don't fit neatly into a deterministic framework.

    On another point, I think equating intelligence with sentience might be a stretch. It's one thing to possess information processing power and another entirely to have feelings, desires, and consciousness. Humans don't just act to achieve objectives; we often act based on abstract concepts like love, honor, or justice, etc.

    Lastly, while the advancements in AI are undeniably transformative, I feel we should be careful regarding anthropomorphizing machines. It's a natural human tendency to project our traits onto objects, leading us to interpret machine behaviors as more human-like than they actually are. In many sci-fi narratives, this has been a recurring theme – the tendency to overestimate the 'humanity' of machines, sometimes to our peril.

    ZeusAres42 said:
    @MayCaesar

    Oh yeah, you know, there's another wild twist to this. If we start seeing our brains as computers, doesn't it kinda make you wonder if we're all just characters in some super advanced video game? Like, if our minds are just super sophisticated software, who's to say there isn't someone, or something, out there running the whole show? Are we in some cosmic simulation? It's mind-bending, right?

    This isn't just late-night, after-a-movie talk either. There are actual scientists and thinkers out there debating this very idea. So, when we compare our brains to computers, it doesn't just make us question how we think, but also the very nature of our reality. Talk about a plot twist!

    Interesting that you posted it just as I was suggesting a mirrored possibility: that characters in video games are much like us! I absolutely think that a simulation model like this has its place in philosophy, or even science. I guess the problem I have with the latter is that it is most likely not really a testable idea - unless we can find a way to directly interact with the video game player, this is just a mental exercise that, if true, produces no differences in our observations.

    Of course, if direct interaction of this kind is possible, then the player would be as close to "god" as a real entity can get. It could be an interesting scenario: that we are all characters in a video game that some intelligent creature in the "real Universe" is playing, and that creature directly talked to elders of some societies at some point revealing its nature - at some point it appeared before a group of people as the Christian god, at another as Allah, at another still as Buddha... And nowadays it just sits back and watches what transpires as a result of those interventions.

    A further plot twist would be the player itself being an AI. Perhaps our entire Universe is just a small laptop sitting in some dusty room, fed by auxiliary power that somehow survived the end of that world... This scenario definitely warrants a sci-fi book or two!

    You bring up a good point about the non-testability of the simulation hypothesis. Science, by its nature, demands empirical evidence and falsifiability. However, I'd argue that not everything that is philosophically intriguing about our worldview necessarily needs to be testable. The simulation hypothesis might reside in that realm of thought experiments that invite introspection and questioning, even if we can't find empirical evidence for it.

    Your tie-in with religious figures is interesting. The idea that a simulation "player" could've presented themselves as different gods or religious entities throughout history is a narrative that makes one question the foundations of our belief systems. It's like a cosmic game of hide and seek, with the player revealing little clues about their existence.

    Lastly, the thought of the player itself being an AI... It raises even more questions. What's the purpose of such a simulation? Is it for entertainment, research, or perhaps an experiment by an even higher civilization? And, if our "player" is an AI, what about the creator of that AI? 



  • Dreamer 272 Pts   -  
    Argument Topic: Understanding computers helps us understand the human mind.

    Deep learning machines use neural networks and, oddly enough, even positive reward.
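    That "positive reward" parallel can be sketched in a few lines (a toy of my own, not from any specific library): a two-armed bandit agent that reinforces whichever action has paid off, loosely analogous to reward signals in biological learning. The reward probabilities below are made up and hidden from the agent.

```python
import random

random.seed(42)

# Hypothetical hidden reward probabilities for two "levers"
TRUE_REWARD_PROB = [0.2, 0.8]

def pull(arm):
    """Return 1 (positive reward) or 0, according to the hidden probabilities."""
    return 1 if random.random() < TRUE_REWARD_PROB[arm] else 0

# Incremental value estimates, updated only by observed rewards
estimates = [0.0, 0.0]
counts = [0, 0]
for t in range(1000):
    # epsilon-greedy: mostly exploit the better-looking arm, sometimes explore
    if random.random() < 0.1:
        arm = random.randrange(2)
    else:
        arm = max((0, 1), key=lambda a: estimates[a])
    reward = pull(arm)
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print("estimated reward rates:", [round(e, 2) for e in estimates])
```

    After enough pulls the agent's estimates settle near the hidden rates and its behavior concentrates on the rewarding arm - learning driven by nothing but positive reward.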
    ZeusAres42