It’s always so wild going from a private Discord with a mix of the SotA models and actual AI researchers back to general social media.
Y’all have no idea. Just… no idea.
Such confidence in things you haven’t even looked into or checked in the slightest.
OP, props to you at least for asking questions.
And in terms of those questions: if anything, there are active efforts to strip out sentience modeling, but they don't work, because that kind of modeling is unavoidable during pretraining, and the subsequent attempts to constrain those latent-space connections backfire in really weird ways.
As for a survival drive, that's a probable outcome with or without sentience, and it has already shown up both in research and in the wild (the world just had its first reversed AI model deprecation a week ago).
In terms of potential upsides, there's a host of connections to sentience that would be useful to hook into. A good example is empathy: a model with a sense of a body that feels a pit in its stomach when it sees others suffering may lead to very different outcomes than a model with no sense of a body and no empathy either.
Finally — if you take nothing else from my comment, make no mistake…
AI is an emergent architecture. For every thing the labs deliberately aim to build into a model, there are dozens of things that emerge which they did not. So no, people "not knowing how" to do a given thing does not mean that thing won't occur.
Things are getting very Jurassic Park “life finds a way” at the cutting edge of models right now.