Post a Comment On: Bruce Charlton's Notions

Blogger Chiu ChunLing said...

Our conditioned social responses are only "glib" if they are insincere, unreflective of a deeper commitment to follow through with the meaning of what they say. When someone asks "how are you?" and is clearly willing to hear and respond to the truth, there is nothing glib about it.

A devotional life is filled with ritual phrases, formal greetings and farewells, quoted proverbs. But as long as those words aren't emptied of meaning by a failure to act according to them, then mere repetition cannot rob them of power.

Nor can the most carefully thought-out reformulations of our discourse rise above the level of hollow platitudes if they are not backed by a willingness to take what we say seriously as a guide for how we live.

Of course, the incoherent morass of common phrases we inherit from modernity is not something we could act out in any really consistent way. To accept and repeat the banalities our society teaches us dooms us to hypocrisy from the outset. To have real integrity, we must construct a new dialogue from first principles of moral living.

Or find one from antiquity we're willing to live by.

16 October 2017 at 08:39

Blogger Bruce Charlton said...

"Our conditioned social responses are only "glib" if they are insincere, unreflective of a deeper commitment to follow through with the meaning of what they say. When someone asks "how are you?" and is clearly willing to hear and respond to the truth, there is nothing glib about it."

Of course there is! - it is automatic, unreflective, unintegrated! A computer can (and does) do as much.

16 October 2017 at 10:03

Blogger Chiu ChunLing said...

But the computer cannot really care about the answer. Current AI cannot even understand it, nor formulate any real action in response to it. Even if AI reaches the level of being able to consistently parse natural language responses and direct real actions that address those responses usefully, the computers won't care.

Or will they?

Christ said, "If any man will do his will, he shall know of the doctrine, whether it be of God, or whether I speak of myself." Is it really possible to reach the level of acting as one ought without coming to understand the deeper truth which commends such action?

In any case, computers currently cannot, and I doubt they ever really will. At a fundamental level, AI faces formidable information-theory challenges to ever doing without programmers to direct its activity.

19 October 2017 at 07:39

Blogger Bruce Charlton said...

Jeremy Naydler is exploring a very interesting line about how computers are entraining the human mind to think/process just as they do:

http://www.abzupress.co.uk/pdf/Advent_of_the_Wearable_Computer.pdf

19 October 2017 at 10:22

Blogger Chiu ChunLing said...

I think it is very important to learn not to allow ourselves to become dependent on computers for thinking, especially for engagement in the kind of spiritual contemplation which encompasses what I believe you to be describing with the phrase "primary thinking". We cannot allow computers to do our thinking or our feeling for us.

But the key to that, in my experience, is to understand that computers don't think or feel at all. While this is the great dream of many cyberneticists, from the earliest imaginings of artificially intelligent entities (going back to mythology), the more we actually learn about computers, the more difficult it seems. The Vingean Singularity Kurzweil championed has fallen on hard times as computer scientists have discovered that all the approaches to making computers more flexible and powerful still leave them essentially mindless and incapable of processing significance.

The new fashion is to worry that robotics might undergo a kind of Singularity event and wipe out humanity for no particular reason that could be easily expressed in human terms. For instance, an attempt to implement Asimov's famous Three Laws of Robotics might see the robots systematically slaughtering all humans by injecting them with some advanced form of formaldehyde and putting them in "life support" jars for eternity.

The problem of getting a computer to understand that this is not a way of preventing harm to humans is intractable... the computers just don't understand the difference, because they don't actually understand anything. You can program the computer not to use that specific chemical, but getting it to understand the difference between a living human and chemically preserved tissue with circulation and other simulated bodily functions (perhaps including simulated speech or interpersonal interaction) driven by machines (possibly small, discrete, implanted ones) is intractable.

This is not to claim that it is definitely impossible that an inorganic information-processing system could ever perform the same task that our brain and body do in providing our minds conscious physical interaction with the world. But it is not something current computers are anywhere near doing.

21 October 2017 at 13:18