• 1 Post
  • 84 Comments
Joined 2 years ago
Cake day: October 12th, 2023


  • I tend to be skeptical of the reactionary “AI is always slop” trend. I’m sympathetic to it because it’s a response to a hype machine that knows no prudence. But damn, when you say

    “Your next move: Build AI foundations. Our work with organisations confirms mounting evidence that isolated, tactical AI projects often don’t deliver measurable value. Tangible returns come from enterprise-scale deployment consistent with company business strategy.”

    I read this as marketing. What evidence have you been gathering? Why do you believe your projects apply to all companies? What happens if we invest and it doesn’t help the way you say it will?

    This is like saying the solution to your relationship troubles is having a baby. No… no, this is not the solution. Make my smaller projects work, show a return, and then we can talk about larger commitments.

  • Sadly, it’s been a week. I’ve read this several times as closely as I could and tried to understand where my apprehension lies. I spent some time with the wiki link on counterfactuals and wanted to dedicate more time to it, but wasn’t able to find the time.

    So, again, to restart the conversation: I wonder if I have two separate confusions. The first: if consciousness is a property that is weakly emergent in brains, what is a brain?

    I have a hard time buying that consciousness is a property of the brain and not of the mind. And I get that you are not trying to prove that it is. I’m far more interested in why, in the face of minimal support, we would align ourselves with weak emergence over strong emergence.

    I have a lingering second problem: what is a model? That wiki link presents a three-layer model: association, intervention, and counterfactuals. I would be hard pressed to consider the first two layers sufficient to count as a model. But the three-layer model doesn’t, as far as I’ve read, address intention, causal connection, or first-order simulation. I’m hard pressed to see a collection of cells, neurons or otherwise, doing more than producing a response to a condition.

  • I should start by saying I’m less interested in the question of free will than in the relationship between consciousness and matter. I want to reframe things so you know what I’m focused on.

    Modern theories are a lot more integrative. … [I]nstead it is an essential active element in the thought process.

    Here, I’m assuming “it” is a conscious perception. But now I’m confused again because I don’t think any theory of mind would deny this.

    On the other hand, if “it” is “the brain,” then I need to know more about the theory. As I understand it, the theory says that the brain creates models. Models are mental. I just don’t know how that escapes the black box that connects to the mind. But as you assert and as I understand it, it is:

    stimuli -> CPM ⊆ brain -> consciousness update CPM -?> black box -?> mind -?> brain -> nervous system -> response to stimuli

    If it isn’t obvious, the question marks represent where I don’t understand the model.

    So if I were to narrow down my concerns, it would be:

    1. Is a model a mental process?
    2. If mental processes are part of the brain, then how so?

  • I’m going to stick with the meat of your point. To summarize,

    1. Some materialist views create a black box in which consciousness is a passive activity:
      brain -> black box -> mind
    2. CPMs extract consciousness from the black box
    3. Consciousness plays a functional role by providing feedback:
      brain -> black box -> CPM -> consciousness -> black box -> mind

    But to go further: stimuli -> brain -> black box -> CPM -> consciousness update CPM -> black box -> mind -> response to stimuli

    The CPM as far as I can tell is the following:
    representation of stimuli -> model (of the world with a modeled self) -> consciousness making predictions (of how the world changes if the self acts upon it) -> updating model -> updated prediction -> suspected desired result

    I feel like I’ve misrepresented something of your position with respect to the self. I think you’re saying that the self is the prediction maker, and that free will exists in the making of predictions. But the presentation of the CPM places the self in the model. Furthermore, I think you’re saying that consciousness is a process of the brain, and I think it’s of the mind. Can you remedy my representation of your position?

    Quickly reading the review, I went to see if they posited a role for the mind. I was disappointed to see that they not only ignored it (unsurprising) but collapsed functions normally attributed to the mind into the brain. Ascribing predictions, fantasies, and hypotheses to the brain, or calling it a statistical organ, sidesteps the hard problem and collapses it into a physicalist view. They don’t posit a mind-body relationship; they speak about the body and never acknowledge the mind. I find this frustrating.