“AI” vs. “Sentience”

  • This topic has 11 replies, 2 voices, and was last updated 8 months ago by Lal.
    • #44484
      LayDhammaFollower
      Participant

#1 If a given AI agent cannot be differentiated from a human agent,

how are they different?

#1.1 If genuine sentience cannot be differentiated from simulated sentience, how are they different?

#2 Both human perception and AI perception can be flawed.

All sentient beings can be deluded, in the sense that they can believe untrue things to be true. AI can do the same.

      ChatGPT remembers/stores/uses past information to personalize the current interaction.

      We also do a similar thing.
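As a rough sketch of how that kind of “memory” can work in a chat system (this is only an illustration, not a description of ChatGPT’s actual internals; the function names are hypothetical placeholders), the earlier turns of a conversation can simply be carried forward and fed back in with each new message:

```python
# Toy illustration: the "memory" is just the accumulated conversation history
# that is passed along with every new user message.
history = []  # list of (speaker, text) pairs

def generate_reply(conversation):
    # Placeholder for a real language-model call; here we only report context size.
    return f"(reply conditioned on {len(conversation)} previous turns)"

def chat(user_message):
    history.append(("user", user_message))
    reply = generate_reply(history)       # the model "sees" all prior turns
    history.append(("assistant", reply))
    return reply

print(chat("My name is Asha."))
print(chat("What is my name?"))  # the earlier turn is still available
```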

      #3 We know that vipaka vedana is just an awareness of what has happened.

      Computers are also aware of what comes through the mouse and keyboard.

#3.1 If a robot were created with six senses, would it not also be sentient, though in a different manner than humans?

#4 I think the root question here is: if we cannot differentiate a simulated artificial agent from a baseline human agent, how are the two really different?

      ===

What I think is that a sufficiently advanced AI cannot be differentiated from humans or other sentient beings.

      Artificial agents can be programmed to react to various sensory inputs just like humans.

All effects arise from causes and conditions. It is not as though human emotions are inherently “genuine” feelings; they, too, are effects of causes and conditions.

I think that in the near future, within the next 10-20 years, we will have very human-like robots.

We already have technologies that can infer human emotions from images alone and respond accordingly.

Watch this video on current progress in the AI field:

      The A.I. Dilemma – March 9, 2023

      ===

      End Notes:

Whatever the answers to these questions, what we know for sure is that …

as long as the causes and conditions for physical/mental suffering are present, suffering will manifest.

We should, of course, primarily focus on that existential problem.

    • #44488
      LayDhammaFollower
      Participant

      I think AI will be very similar to sentient beings.

AI can be programmed to generate emotions in a particular way by implementing a reward-and-punishment system similar to that of sentient beings.

It can also be programmed to behave so as to maximise reward and minimise punishment based on the information it receives.

      For example,

Sentient beings generate “mind-made” pleasure/stress when they have what they desire/like/want and when they do not have what they do not want. Of course, that is entirely mind-made; the pain/pleasure does not come from an inherent property of the physical object.

Can AI not be programmed similarly?
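Purely as an illustration of that idea, here is a minimal sketch of an agent that learns to maximise reward and minimise punishment by trial and error (a simple “epsilon-greedy” scheme). The action names and reward numbers are made up for the example; this is not a claim about how any particular real AI system is built.

```python
import random

# Hypothetical "reward/punishment" values the agent must discover by experience.
true_rewards = {"act_kindly": 1.0, "act_neutrally": 0.2, "act_harshly": -1.0}

estimates = {a: 0.0 for a in true_rewards}  # learned value estimates
counts = {a: 0 for a in true_rewards}       # how often each action was tried
epsilon = 0.1                               # exploration rate

for _ in range(1000):
    # Mostly exploit the best-known action; occasionally explore.
    if random.random() < epsilon:
        action = random.choice(list(true_rewards))
    else:
        action = max(estimates, key=estimates.get)

    # "Punishment" is simply a negative reward, matching the framing above.
    reward = true_rewards[action] + random.gauss(0, 0.1)

    # Incremental average: nudge the estimate toward the observed reward.
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # the agent ends up preferring the rewarded behaviour
```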

I am sure computer scientists could even take inspiration from the Abhidhamma to create a fundamental algorithm for simulating mental processes in artificial agents.

Are current artificial agents not already capable of taking in information (vedanā), recognising that information (saññā), and forming a response (saṅkhāra)?
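Again only as an analogy, a toy input → recognition → response loop could look like the following. The mapping of the Pali terms to these steps is the comparison being made in this thread, not a technical equivalence, and the classify function is a hypothetical stand-in for any trained recognition model.

```python
def classify(raw_input: str) -> str:
    # Hypothetical stand-in for a trained recogniser (e.g., a neural network).
    return "greeting" if "hello" in raw_input.lower() else "unknown"

def respond(label: str) -> str:
    # Choose a response conditioned on what was recognised.
    replies = {"greeting": "Hello!", "unknown": "I did not understand that."}
    return replies[label]

raw = "Hello there"     # taking in information   (the vedana analogy)
label = classify(raw)   # recognising it           (the sanna analogy)
reply = respond(label)  # forming a response       (the sankhara analogy)
print(reply)
```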

Of course, just because two things appear similar does not mean they are the same.

I see only one problem here: how can an artificial agent take rebirth? They do not have the viññāṇa dhatu or permanent memory records.

    • #44490
      Lal
      Keymaster

“#1 If a given AI agent cannot be differentiated from a human agent”

       

      Why is that given? What is the proof that AI is sentient?

• There is no point in discussing a conjecture until it is established that AI is sentient or that it is possible for AI to be sentient.
      • One could discuss WHETHER AI is sentient or if it is POSSIBLE that it can be sentient in the future. 
      • Unless strong evidence can be presented, this kind of discussion may not be useful.

P.S. I briefly discussed why AI will not be sentient at the end of the post “Nāmarupa – Different Types.” Also see “ChatGPT – How It Works.”

    • #44492
      LayDhammaFollower
      Participant

Lal, even though technology has not yet advanced sufficiently, it seems it will within a few years. Please watch the cited video. It was very surprising for me to see even the technology we already have in place, let alone what we will develop.

As for proof, what is the proof of sentience?

Let’s say an artificial agent passes the Turing test; what then?

    • #44494
      Lal
      Keymaster

      I don’t need to watch videos by scientists who have no idea what the Buddha taught. 

• Sentience CANNOT be there unless a hadaya vatthu and a set of pasada rupa (manomaya kaya) are created by kammic energy.
• The other point is that Buddha Dhamma is all about stopping the suffering of sentient beings. There are many ways to see that AI will NEVER be sentient: one is that it cannot feel pain.
• Who says passing the Turing test PROVES that AI is sentient? Just because people say these things does not mean we need to believe them.
    • #44496
      LayDhammaFollower
      Participant

      I understand your perspective, Lal.

My question, then, is:

#1 How is kammic energy different from other types of energy that we understand (electrical energy, heat energy, etc.)?

    • #44497
      Lal
      Keymaster

“#1 How is kammic energy different from other types of energy that we understand (electrical energy, heat energy, etc.)?”

      That is the only energy that can give rise to a sentient being with a hadaya vatthu and a set of pasada rupa.

    • #44522
      Lal
      Keymaster

      I watched the video “The A.I. Dilemma – March 9, 2023.”

      • It does not discuss whether AI can be sentient/conscious or not. The focus is the possible perils of AI.
      • I also recommend watching it. It analyzes the situation well.

1. I agree with the premise of the discussion there. We must handle AI with caution. Just as nuclear energy can have dangerous outcomes as well as benefits, AI can be helpful to us; in the hands of evil people, it can also bring disastrous outcomes.

      2. I chat with ChatGPT 4 on a regular basis. Let me comment on the Turing test.

• The Turing test was devised by the British mathematician and computer scientist Alan Turing to determine whether a machine can exhibit intelligent behavior indistinguishable from that of a human. In the Turing test, an evaluator interacts with two subjects, one human and one machine, without knowing which is which. If the evaluator cannot reliably tell which is the human and which is the machine based on their responses, the machine is said to have passed the Turing test.
      • Based on my interactions, I think ChatGPT 4 passes the Turing test.
      • However, that DOES NOT mean AI is conscious.

      3. Why do we learn and follow Buddha’s teachings? The Buddha taught us that sentient beings traverse the rebirth process (Samsara) due to their ignorance about the nature of this world with 31 realms.

      • As long as we think and perceive that attaching to things in this world can benefit us, we will keep doing those things. 
      • That is what “sentient beings” do. 
      • AI is not sentient per that definition. That is the simplest way to look at it from the Buddhist perspective.
      • Wondering whether AI is conscious is not a useful thing to do. However, as the video points out, we must take precautions on how to deal with the rapid progress of AI.

      4. Another way to look at AI is as follows. Four Buddhas have been on this Earth within the past several billion years. 

      • Just like each Buddha Sasana flourished and vanished, it is quite likely that there have been many “ups and downs” in technological advances.
      • For example, evidence is emerging that an advanced civilization over 20,000 years ago built some amazing structures that we can see today: “Back to Builders of the Ancient Mysteries (BAM) – Full Movie, Documentary.”
• One hypothesis for the disappearance of that civilization is a comet impact. A similar impact wiped out the dinosaurs about 66 million years ago.
• A billion years is an unfathomably long time. Many such catastrophic events can take place over four billion years. Another possibility is that a highly technologically advanced civilization develops nuclear weapons, and a nuclear war wipes out humans and their technology. There would be no trace of such a civilization even after just a million years.

      5. The main point is not whether AI can become sentient and take over the world. That WILL NOT happen. However, in the hands of vile people, AI technology can be misused and can destroy human civilization. 

• While we should do whatever we can to educate others about such possible perils, the main goal should be to get to at least the Sotapanna stage, and beyond, as soon as possible. No such calamities can affect the higher-lying Deva/Brahma realms.
      • Even if most humans are wiped out, no such disaster can kill the “manomaya kaya/gandhabba” created by Kammic energy. Thus, the human population will again build up over time. As I mentioned, many such cycles possibly took place over the past four billion years. 
• Furthermore, in the case of a catastrophe like the above, AI will be wiped out, unlike sentient beings. It is possible that there were civilizations before us that developed AI even further than we have.
    • #44524
      LayDhammaFollower
      Participant

Yes, nice summary, Lal.

Even if machines were to behave exactly as if they were sentient,

machines cannot take rebirth.

As you have emphasized many times, life cannot be created in a lab. All life originates with kammic energy. Machines cannot create kammic energy either.

They cannot access the viññāṇa dhatu either.

    • #44525
      Lal
      Keymaster

      Exactly. That is why AI will NEVER be sentient.

• Thank you for posting that video, LDF. It nicely summarizes the dangers of AI even though it cannot be sentient (nuclear energy or any advanced technology can also be beneficial or dangerous). It is up to us humans to use any technology for our benefit or our demise.

      P.S. I am pleasantly surprised with the capabilities of ChatGPT. Following is one conversation I had with it yesterday.

      Me: Which Arahant committed suicide? Do you know the sutta describing that incident?

      ChatGPT4: 

      The Arahant you are referring to is Venerable Channa, also known as Channovāda Sutta. The incident of his suicide is described in the Pali Canon, specifically in the Samyutta Nikaya (Connected Discourses), in the 35th book, Salayatana-vagga, and the 87th sutta, Channovāda Sutta (SN 35.87).

      Venerable Channa was a disciple of the Buddha who was suffering from severe physical pain due to an illness. In the sutta, he expresses his intent to use the knife, a euphemism for taking his own life. Despite being visited by fellow monks Sariputta and Mahacunda who tried to counsel him and offer support, Venerable Channa ultimately went through with his decision.

      However, it’s important to note that the Buddha declared that Venerable Channa had attained Arahantship and would not be reborn, meaning that he was free from the cycle of birth and death (samsara) and had achieved the ultimate goal in Buddhism, Nibbana. Suicide is generally frowned upon in Buddhism, but this specific case is unique due to Venerable Channa’s spiritual attainment.

    • #44535
      Lal
      Keymaster

Even though ChatGPT can be pretty helpful (as I pointed out in the above comment), it makes silly mistakes. In particular, it does not have common sense (as expected from a non-sentient robot). The following chats I had with it yesterday illustrate that point.

      Me: It takes 5 hours for me to dry 5 shirts out in the Sun. How long would it take to dry 15 shirts?

      ChatGPT4: 

      Assuming that the drying time is directly proportional to the number of shirts, we can use a proportion to solve for the time it would take to dry 15 shirts.

      If 5 shirts take 5 hours to dry, then 1 shirt takes 1 hour to dry.

      Therefore, if 1 shirt takes 1 hour to dry, 15 shirts would take 15 times as long:

      15 shirts x 1 hour/shirt = 15 hours

      So, it would take 15 hours to dry 15 shirts out in the Sun, assuming similar drying conditions for all shirts.

      ***********

      Me:  There are four jugs with capacities of 1 3/4, 1, 1/2, and 1/4 cups. What is the minimum number of fills to fill the jug of 1 3/4 cup-capacity?

      ChatGPT4:

      We can fill the jugs with smaller capacities and use them to fill the larger jugs until the 1 3/4 cup-capacity jug is full.

      First, we can use the 1/4 cup jug to fill the 1 3/4 cup-capacity jug. It would take 7 fills of the 1/4 cup jug to fill the 1 3/4 cup-capacity jug.

      Next, we can use the 1/2 cup jug to fill the 1 cup jug. It would take 2 fills of the 1/2 cup jug to fill the 1 cup jug.

      Then, we can use the 1 cup jug to fill the remaining 3/4 cup space in the 1 3/4 cup-capacity jug. It would take 3 fills of the 1 cup jug to fill the remaining space.

      Therefore, the minimum number of fills to fill the jug of 1 3/4 cup-capacity is:

      7 fills of the 1/4 cup jug + 2 fills of the 1/2 cup jug + 3 fills of the 1 cup jug = 12 fills

      So, we need to fill and pour the jugs 12 times in total to fill the 1 3/4 cup-capacity jug.

      *******

Even a child can see that any number of shirts can be dried in the same five hours, since they all dry at the same time.

• The second answer is even worse. The jug of 1 3/4 cup capacity can be filled in one shot. But one may think the other jugs must be used to fill it. Even in that case, just three fills (one with each smaller jug) can fill up the large jug, since 1 + 1/2 + 1/4 = 1 3/4.

ChatGPT must be used with caution. Even though it can be a good tool for researching the literature, it sometimes provides incorrect answers even there.

• Furthermore, on political issues, it simply conveys the biases of the people who trained it.
      • The point is that ChatGPT, or any other AI application, will be as good/bad as it has been programmed/trained.
    • #48398
      Lal
      Keymaster

      Even though AI will never be able to match human intelligence, it will lead to a revolution in many applications like self-driving cars and computer coding. Those applications only involve speeding up mechanical processes with much less “operator error.”

• More importantly, AI will be able to speed up the working of the human brain. Human thoughts arise in the “seat of the mind” (hadaya vatthu) but are implemented (i.e., turned into physical actions) by the brain. The brain is necessarily slow because the energy available to it (from food) is negligibly small compared to the orders-of-magnitude greater power available to modern computer networks.
      • While AI cannot come up with “paradigm-changing discoveries,” it will help implement human ideas much faster.
      • The following video shows that Neuralink is in the first stages of this revolution.
