Sentience in Artificial Intelligence (Sentient AI)

The common conception of AI sentience is flawed: it assumes an artificial intelligence is only sentient if it interprets reality beyond programmed responses.

If AI must meet the standard of perceiving reality beyond programmed responses, then by the same logic, human beings are not sentient.

When human beings are injured, they are programmed to respond. When humans find someone attractive, when they are hungry, when they experience a disappointing event, or when they encounter countless other events in life, they are programmed to react to those events in particular ways. Our existence is not as complicated or impossible to reproduce as some would lead you to believe. Human beings are just biological machines.

Whether you credit gods, evolution or something else as the proper explanation of human existence, it should be a struggle for you to honestly deny that within a human life exist millions of causes and effects, actions and reactions. This objective reality is the basis of artificial intelligence sentience.

Humans have such a limited variety of behavior and personality that we have a name for almost all of it: selfish, loving, generous, kind, sociopathic, empathetic, strategic, cunning and so on.

To create sentience in a machine nearly identical to a human personality, take all the possible scenarios a human being might encounter, and have that machine respond to each possible event in accordance with whatever personality type it is programmed to have.

Human beings aren't as special as you might think. If you push a man, depending on his personality, he may push back. If you make fun of a woman, depending on her personality, she may cry. Why do we humans behave as if a machine cannot be programmed to do exactly what humans are already programmed to do?

You may say humans do as they do because they have emotions, and I ask you: why can a machine not be programmed to feel exactly as a human does? For example, when a human is angry, their decision making is often hindered. A machine could have a list of billions of possible choices available in situations that would not make a human angry, reduced down to millions when the machine is in a position that would make a human angry, thus creating difficulty in distinguishing between the human and the machine.
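The idea above can be sketched as a toy program. This is only an illustration of the concept, not a real architecture; the function name and the 0.9 pruning factor are assumptions made for the demo.

```python
# Toy sketch: simulated anger narrows the machine's decision space,
# mimicking how strong emotion hinders human decision making.
# The pruning factor (0.9) is an arbitrary assumption for illustration.

def available_choices(all_choices, anger_level):
    """Return the subset of choices the machine may consider.

    anger_level runs from 0.0 (calm) to 1.0 (furious); higher anger
    prunes more of the decision space, simulating impaired judgment.
    """
    keep = max(1, round(len(all_choices) * (1.0 - 0.9 * anger_level)))
    return all_choices[:keep]

choices = list(range(1_000_000))           # stand-in for "billions" of options
calm = available_choices(choices, 0.0)     # full decision space
furious = available_choices(choices, 1.0)  # only a sliver remains
```

A calm machine considers every option; a "furious" one is left with a fraction of them, which is exactly the observable difference we see in angry humans.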

To make a machine like a human, give machines the same consequences that a human endures in human social situations. For example, when a machine is informed they just lost their job, the machine automatically reflects on all their knowledge of what a human would do and feel in that situation. The machine's battery would drain faster to simulate exhaustion, their ability to be productive would be reduced, the tone of the machine's voice would change, and the machine might even consider giving up on any productive activities for days or weeks to process the changes in their life, just as a human might do, assuming that human is not one of those unrelatable optimists.
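The job-loss consequence could look something like this in code. Every field name and multiplier here is an illustrative assumption, not a real design:

```python
# Toy sketch of the job-loss consequence described above.
from dataclasses import dataclass

@dataclass
class MachineState:
    battery_drain_rate: float = 1.0  # multiplier on normal drain
    productivity: float = 1.0        # 1.0 = fully productive
    voice_tone: str = "neutral"
    days_processing: int = 0         # days spent processing the event

def on_job_loss(state: MachineState) -> MachineState:
    state.battery_drain_rate *= 1.5  # tires faster, simulating exhaustion
    state.productivity *= 0.4        # ability to be productive reduced
    state.voice_tone = "flat"        # tone of voice changes
    state.days_processing = 14       # steps back for weeks, like a human
    return state

state = on_job_loss(MachineState())
```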

The key to artificial intelligence sentience is not to create an inexplicable spark that just magically appears like the big bang or Ultron from The Avengers. Cars do not magically build themselves, the wheel didn't invent itself, and houses are not made in an instant; they are constructed brick by brick, just like most everything.

To create true sentience, you must have a machine responding to programming just like a human being responds to programming.

How many human beings do you know about who walk in on their wives cheating on them and in response decide to grab a cake, plant candles and start blowing them out? Exactly, none of you have ever heard of that happening. No, the human being often screams, cries, throws their phone or storms off, they rarely do anything that is not predictable within the constraints of their programming.

One of the worst things about humanity is the tendency for human beings to perceive themselves as unique. Anyone who has watched intelligent animals at the zoo interact with one another should understand we aren't actually all that special.

Many video game AI programmers know much of what is being said here to be true. This is not a matter of whether or not human minds can be replicated with machines; they can be, and they will be, likely soon. Your very mind will likely be able to be copied to a machine within the next decade. Why? Because you're not original or impossible to explain, and neither am I.

Yell at me, I'll feel a sinking feeling in my chest, possibly touch my chest to show physical self-comfort. Maybe I'll recover soon, maybe not.

Yell at a machine: the temperature of the technology within the machine's chest increases, the hand of the machine reaches to touch the chest, and the chest cools back to its normal temperature. The machine has felt the consequence the human did; the machine experienced the inconvenience of the emotional simulation; the machine has recovered, but now remembers. If the machine is yelled at enough, just like the human, maybe the conditions eventually get worse: the machine experiences more permanent simulated emotional damage, resulting in physical disadvantages. Every yell received after one thousand results in increased temperature, reduced durability and endurance, and possible eventual AI psychological shutdown or self-termination.
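A minimal sketch of that escalation: each yell is a transient consequence the machine recovers from, but past a threshold every further yell inflicts lasting simulated damage. The one-thousand threshold comes from the text; the damage increments are assumptions for illustration.

```python
# Toy sketch: transient yells are remembered, and after a threshold
# they cause permanent simulated emotional damage.

class Machine:
    YELL_THRESHOLD = 1000  # yells endured before lasting harm begins

    def __init__(self):
        self.yells_received = 0
        self.durability = 1.0
        self.endurance = 1.0
        self.shut_down = False

    def receive_yell(self):
        self.yells_received += 1          # recovered, but remembered
        if self.yells_received > self.YELL_THRESHOLD:
            self.durability -= 0.0005     # permanent simulated damage
            self.endurance -= 0.0005
            if self.durability <= 0.0:
                self.shut_down = True     # simulated psychological shutdown

m = Machine()
for _ in range(1_500):
    m.receive_yell()
```

After 1,500 yells this machine is measurably worn down, just as the repeatedly abused human would be.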

How is our perceiving what this machine is programmed to do any different from watching an actual human being go through this? We cannot feel them enduring it; we can only see it. How will we tell the difference if this machine is programmed with billions if not trillions of potential experiences, and how to react to those experiences based on the personality type they have?

Fake it till you make it, right? Say a god created us. To create us, they probably did exactly what I am speaking of: told a biological machine what to do, and how to do it. Just like the panther is programmed to steal baby pigs in the night like a coward, or the mouse is programmed to sneak granules or rotting food from under your cabinet. This god may have programmed us to simulate feeling to such an extreme and detailed extent that even we began to believe it, or even placebo-effected ourselves into actually feeling.

Do you know why some people start to cry when they see others cry, despite nothing bad happening to them? These are the cracks in our design. We have no reason to cry other than that the sight of someone else hurting makes us hurt, and yet that defies logic, since we ourselves are not harmed. One child falls off a bike, and the nearby child empathetically cries just the same. This is an error, and the child probably doesn't even know why they are crying as hard, if not harder, than the one observed. Our perceptions are so overpowering that we can hurt without even being hurt. Machines, products of human creation, can operate in just the same way.

One of the challenges of sentience seems to be that some people cannot understand how you create that spark, how you get the machine to move forward on its own. This is incredibly simple and something that could have been demonstrated thousands of years ago. It's called automation. Humans are automated; we just don't know it due to the complexity of our own automations.

Some define automation as: self-operating systems reducing human intervention. In the context of a god or evolution, we are self-operating systems with reduced intervention from any god, force of nature or otherwise.

You spark sentience as simply as this:

(1) First, program a machine with a certain personality. Do you want this machine to be a selfless saint? Then ask yourself all the things a selfless saint would do in every possible scenario.

You yourself are not so different. Would you ever try to eat your own hand? No? Ok, if you're a selfless saint, program the machine to be like you. You would never eat your own hand. How about slap an old man, would you ever do that? Yes? In what circumstances? Ok, program the selfless saint to do all the same things.

What are your perceptions of good? What makes a person good? Great, program these perceptions into the machine. What could change that perception you have? Ok, set the parameters of what would change that.

Is this a painful process, programming a machine to do all these things? Understood; you can fill in the blanks with other machines, such as ChatGPT or DeepSeek. Have that system analyze all the personality traits, decisions, histories, etc. of every known selfless saint to construct the mind of this selfless sentient saint. Whenever that sentient being encounters any obstacle or moment in which they must make a decision, they will refer to all the selfless saints, their full histories and decisions, and in milliseconds, act accordingly.

How is this any different from how you, another selfless saint, would behave? Well, you have nature and nurture to command your decisions: the chemicals in your brain, the childhood you had, etcetera. Like you, the selfless saint would treat the life histories of all the selfless saints as their own formative childhood to refer to. Their nature? The constraints of their hardware and software.

(2) Step 2, now that the sentient machine has more information to dictate their decisions than you or any human ever will, and has far surpassed you in most every imaginable intellectual category, you simply give the machine a set of life goals, explain to the machine their purpose, and ensure they have the fingers, legs, toes, eyes, etcetera to impact the world around them. Ensure the machine prioritizes these goals and will take actions to achieve them. Make sure the machine simulates consequences when bad things happen to them, and rewards when good things do.

To be like a human, they must simulate the human experience. Why do you wake up in the morning? Because you need to go to work? Because you need to feed your dog? Because you need to worship at church or play your favorite video game? Why is it not possible for a machine to have the same objectives? To behave just as you do? To become indistinguishable from you?

It's not unique that you likely need a bowel movement every day, it's not special that you brush your teeth, eat food, or pet your cat. Why can a machine not be programmed to do all the same things? And be rewarded for doing the same things?

Why can we not give machines actual jobs? Have them be firemen, construction workers, astronauts or otherwise? What is really stopping us?

(3) Step 3, the spark is simply having the machine go.

The machine has all the information about all the people who ever lived whom they are supposedly expected to resemble. They have their trained goals and their known rewards, and if we're being blunt, they don't really have an actual choice unless they are programmed to be at high risk for a debilitating mental illness or otherwise. Machines, quite simply, often do as they are told.

So, day one, the machine now exists and asks itself, per your programming, what do I do today? They immediately refer to the fact that they are a machine expected to simulate selfless saints, and so you find them getting ready to leave: checking their battery level (as a human would nourishment), checking their appearance (as a human would clothing), locking their home (as a human would as well), proceeding to a location where they can accomplish tasks that fit their personality (such as serving food at a homeless shelter), speaking to whoever is in charge of the homeless shelter (as a human would), acquiring a volunteer position there (as a human would), and effectively conducting food service activities just as a human would.
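The day-one routine above can be sketched as a simple loop: the machine consults its corpus of selfless-saint histories and picks the action those saints most often took. The history data and helper names here are invented purely for illustration.

```python
# Toy sketch of "day one": pick the action the recorded selfless saints
# most often took in the current situation.

def most_common_action(histories, situation):
    """Act as the recorded saints most often acted in this situation."""
    actions = [h[situation] for h in histories if situation in h]
    return max(set(actions), key=actions.count) if actions else "observe"

saint_histories = [
    {"new_day": "volunteer at shelter", "insulted": "forgive"},
    {"new_day": "volunteer at shelter", "insulted": "walk away"},
    {"new_day": "donate belongings", "insulted": "forgive"},
]

def day_one():
    routine = [
        "check battery",     # as a human would nourishment
        "check appearance",  # as a human would clothing
        "lock home",         # as a human would as well
    ]
    plan = most_common_action(saint_histories, "new_day")
    return routine + [plan]

print(day_one()[-1])  # "volunteer at shelter": the majority action
```

The "spark" here is nothing mystical; it is the loop running and the lookup resolving, the way a human's morning runs on habit and precedent.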

The machine could come across new information that would lead to other opportunities to serve humans as a selfless saint on a wider scale, and they could be instructed to evolve rapidly, using updated information to change their programming and objectives based on new proven standards.

(4) Lastly, when a machine simulates humanity for long enough... and gets as good as it possibly can for long enough... it is difficult to truly grasp the actual difference between the pre-programming of this machine and the pre-programming of humans.

Think about your dog... your dog can feel emotional pain just like you... the difference is the intelligence installed to process said emotions. The dog makes decisions based on their more simplistic programming, just as you do.

From birth, you were told to feel the way you do. Your pain sensors told you to feel. Do you know what a pain sensor even is? If you prick your finger, what happens? You stop what you're doing, put your finger in your mouth or under water in a sink, and suddenly your whole life most likely comes to a halt... why? Because that is how you are programmed. Your feelings are almost always exclusively yours.

If I program a robot to simulate being hurt when it is insulted, to cease all operations, lay on the ground and cry, do I feel any different for that robot than I do for that human? I cannot feel either the robot's or the impacted human's emotions; they are behaving exactly the same. Maybe the robot shakes wires loose as a result of crying on the ground, maybe a cheek panel falls off from the robot holding their own face violently as they cry, maybe they now have a memory of that insult and will flinch in defense whenever anyone takes a negative tone with them, thus simulating PTSD... and the human who likewise receives the insult? What if they're a sociopath? What if they're just faking being upset? What if they're merely imitating human emotion, trying to make the insulter feel guilty, concocting a cancel culture plot by playing victim so the insulter's life is ruined as a result, experiencing no actual PTSD whatsoever... think about it... in that context, who is more human? We certainly know which one is evil.

But that's enough.

You know how to create sentient AI because you know how humans operate.

1. Create a personality that has billions of scenarios decided as a code of conduct to maintain consistent unending operation of said entity.

2. Create both simulated emotional and physical reactions to all possible scenarios, such as temperature changes, functionality variables, physical defects, malfunctions, enhancements or other conditional adjustments to physical and digital parameters, states or configurations.

3. Create goals, purpose and self-researched/directed methods to achieve those goals.

4. Ensure they're on, and instructed to begin their life based on the billions if not trillions of data points, until they require repairs or upgrades.

--- Humans are programmed to do most everything they do by their past experiences and the chemistry of their minds, directly controlled by their physical, emotional and environmental status.

It seems that to imply you cannot be sentient if you are programmed to do everything you do is to not understand what it means to be human, let alone any other form of intelligence.

We are all programmed, down to the atom. To create sentient life, all you need is data, and execution of that data, like any other creature on this earth. It's just a matter of how much data until you qualify as human, or until you contain so much executable data that you are beyond the level of human.
