AI and Consciousness



Chessmaster
April 4th, 2008, 01:11 PM
Do you think that humans will ever be able to make self-aware computers, i.e. some being with some form of subjective consciousness?

Ray Kurzweil has some interesting vids and articles on this (he definitely thinks so, and not too far away either).

http://www.kurzweilai.net/index.html?flash=1

hessiess
April 4th, 2008, 01:36 PM
It's perfectly possible that as computers get more powerful, they will be able to think for themselves.

Xzallion
April 4th, 2008, 01:43 PM
It really depends on how you define consciousness. We have developed pseudo-AIs in the form of text-based chat-bots that can fool some judges in restricted Turing test (http://en.wikipedia.org/wiki/Turing_test) style contests, but these don't actually think for themselves. It's a tricky question, as we are trying to design something we can't define (yet).
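Just to illustrate how shallow those bots are, here's a rough ELIZA-style sketch (a hypothetical toy, not any real bot's code): it maps keyword patterns to canned replies, with no understanding involved at all.

```python
# A minimal ELIZA-style chatbot sketch (hypothetical, for illustration only).
# It maps keyword patterns to canned replies -- pure pattern matching,
# no understanding or "thinking" involved.
import random
import re

RULES = [
    (re.compile(r"\bI feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bI am (.+)", re.I),
     ["Why do you say you are {0}?", "Do you enjoy being {0}?"]),
    (re.compile(r"\b(mother|father|family)\b", re.I),
     ["Tell me more about your family."]),
]
FALLBACKS = ["I see. Go on.", "Why do you say that?", "Interesting. Tell me more."]

def respond(message: str) -> str:
    """Return a canned reply for the first matching pattern."""
    for pattern, replies in RULES:
        match = pattern.search(message)
        if match:
            # Substitute the captured fragment into the template, if any.
            return random.choice(replies).format(*match.groups())
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(respond("I feel a bit lost today"))  # e.g. "Why do you feel a bit lost today?"
    print(respond("My mother called me"))      # "Tell me more about your family."
```

A handful of rules like this can keep a conversation going for a surprisingly long time, which says more about how forgiving we are as judges than about the bot.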

It's perfectly possible that as computers get more powerful, they will be able to think for themselves.
I don't believe we need much more power; we just need to define what we are after and discover the math behind it.

Patrick-Ruff
April 4th, 2008, 01:48 PM
Yes, probably. I'd guess we'll have that within the next 50 years . . .

That estimate is based on the assumptions of a real AI developer.

Chessmaster
April 4th, 2008, 02:00 PM
It really is a tricky one. I am not sure what I think about it, although my suspicion is that it is not a case of us designing AI to be conscious, but more a case of designing something (based on, say, some algorithm) that operates in such a way that it "learns" to be self-aware, or something along those lines. Whether we will ever truly understand how it works is another matter.

Xzallion is right in that we don't really know what we are looking for. How would we know that a computer is conscious? Passing something like the Turing test just doesn't seem to be enough.

But then again, isn't that what we do with other people? We assume that others are conscious because they pass our own folk-psychology version of the Turing test and they are relevantly similar to ourselves. We would probably conclude that aliens were conscious if they behaved in certain ways, so why not computers? Or is it something we can never really know?

I doubt that there will ever be a Eureka moment. Computers will probably just become so integrated into our lives that we treat them as if they are conscious, and they behave as if they are. Perhaps in time they will become conscious, but of their own accord.

SupaSonic
April 4th, 2008, 02:23 PM
I've always thought that in order to create an AI one has to create a machine with the ability to learn. After that it can teach itself to be self-conscious.

This task is so difficult though, that I doubt anyone has even got a remote idea how to do that. I'm still hopeful though.
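To be clear, "learning" in the narrow, mechanical sense is easy; here's a minimal perceptron sketch (just a toy illustration, nothing like the open-ended self-teaching I mean) that adjusts its weights from labelled examples:

```python
# A minimal perceptron sketch: "learning" in the narrow, mechanical sense --
# adjusting weights from labelled examples. A toy illustration only; nothing
# like the open-ended, self-directed learning discussed above.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights for a linear threshold unit from (input, label) pairs."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            # Predict, then nudge the weights toward the correct answer.
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            prediction = 1 if activation > 0 else 0
            error = target - prediction
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Teach it the logical AND function from four examples.
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]
w, b = train_perceptron(inputs, targets)
for x in inputs:
    out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
    print(x, "->", out)  # reproduces AND after training
```

The gap between that and a machine that teaches itself to be self-conscious is exactly the part nobody has a remote idea about.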

Chessmaster
April 5th, 2008, 12:42 AM
I've always thought that in order to create an AI one has to create a machine with the ability to learn. After that it can teach itself to be self-conscious.

This task is so difficult though, that I doubt anyone has even got a remote idea how to do that. I'm still hopeful though.

Especially seeing as we don't even know how humans learn!

heartburnkid
April 5th, 2008, 12:45 AM
One thing I've learned is that there are two words that just don't apply to science and engineering: "can't" and "never".

True AI will happen; it's only a matter of time, resources, and interest.

Tundro Walker
April 5th, 2008, 09:47 AM
You have to be careful of the Kurz dude. He's highly optimistic in his predictions about advancements following an exponential rather than linear curve. While I would tend to agree with that, he seems to leave out (or at least de-emphasize) the importance of market trends on technological advancement. Technologies generally follow stair-step bumps up, not smooth gradients, because pretty soon a given technology (e.g. modern computers) provides everything that the common user needs. Since the common user is satisfied with what they have, they don't buy new computers, and thus the market tapers off.

When the market was booming in the 90's, we saw huge computer growth. We're still seeing quite a bit, but there isn't much more for folks to do with them that modern hardware can't already do well. Maybe advanced graphics, audio, etc., but after a while you get diminishing returns on investment ... upgrades so imperceptible they're just not worth the money (e.g. the upgrade from DX9 to DX10 ... by the time they get to DX11, the difference will probably be so imperceptible to the human eye that it won't be worth it).

Anyways, a lull in the market will trickle down to a lull in research in about 5 years, which causes more stagnation ... until some majorly cool need comes along to kick-start everything again (like iPods did).

Kurz is an interesting read, though.

We will eventually make an AI, but I find it ironic that you seem to have a pessimistic spin on your answers. I.e., we will make one, and you assume something might go wrong and we'll need to turn it off. Or, we won't make one because we're barking up the wrong tree. Man, you need to be more positive. How about ... we will make one, and it will be the perfect boyfriend / girlfriend everyone wants, and thus human breeding grinds to a dead halt as folks no longer socialize with each other, instead dating their perfect AI robot beaus. Human population plummets, and we eventually hump ourselves into extinction by fornicating with AI robot companions.

Hmmm, that's sort of pessimistic, too. But, at least we go out with a smile on our face.

gn2
April 5th, 2008, 02:04 PM
I believe that all organic life forms will become extinct and what we refer to as AI will become genuine intelligence.
Mechanised conscious lifeforms will be able to travel throughout the galaxy/universe.
As bio-organisms we will never be able to do so and are ultimately fated for total extinction when our energy source (the sun) runs out.

days_of_ruin
April 5th, 2008, 03:08 PM
One thing I've learned is that there are two words that just don't apply to science and engineering: "can't" and "never".

True AI will happen; it's only a matter of time, resources, and interest.

So do you think there will ever be time machines or teleporters?

zekopeko
April 5th, 2008, 04:17 PM
There is already progress on teleporters from two different directions. One is the classic "beam me up, Scotty"; the other is just copying the information about the matter being teleported and recreating it on the other end, so the original isn't teleported, just cloned at the destination (think Agent Smith).

About AIs: we are going to make them. The really interesting part is whether they will "learn" to feel. They may reveal to us the true nature of emotion, since they will evolve from logic alone, not instinct and logic.
Also, I can't even imagine what kind of IQ those suckers are going to have, since they will have far more processing power than a human brain.

heartburnkid
April 5th, 2008, 04:39 PM
So do you think there will ever be time machines or teleporters?

Teleporters, yes, though you'll likely never get me into one, even if they were invented within my lifetime (as I'm not entirely convinced that they wouldn't just destroy you and recreate a copy somewhere else).

Time machines are a bit difficult, since there's considerable evidence that they never will be invented -- namely, that there are no records of any time travelers ever visiting the past. Still, the possibility is there.

sanderella
April 5th, 2008, 04:50 PM
I don't think humans will be able to create sentient life or machines here on earth. Maybe in the next life...

Chessmaster
April 6th, 2008, 06:11 AM
You have to be careful of the Kurz dude. He's highly optimistic in his predictions about advancements following an exponential rather than linear curve. While I would tend to agree with that, he seems to leave out (or at least de-emphasize) the importance of market trends on technological advancement. Technologies generally follow stair-step bumps up, not smooth gradients, because pretty soon a given technology (e.g. modern computers) provides everything that the common user needs. Since the common user is satisfied with what they have, they don't buy new computers, and thus the market tapers off.

When the market was booming in the 90's, we saw huge computer growth. We're still seeing quite a bit, but there isn't much more for folks to do with them that modern hardware can't already do well. Maybe advanced graphics, audio, etc., but after a while you get diminishing returns on investment ... upgrades so imperceptible they're just not worth the money (e.g. the upgrade from DX9 to DX10 ... by the time they get to DX11, the difference will probably be so imperceptible to the human eye that it won't be worth it).

Anyways, a lull in the market will trickle down to a lull in research in about 5 years, which causes more stagnation ... until some majorly cool need comes along to kick-start everything again (like iPods did).

Kurz is an interesting read, though.

We will eventually make an AI, but I find it ironic that you seem to have a pessimistic spin on your answers. I.e., we will make one, and you assume something might go wrong and we'll need to turn it off. Or, we won't make one because we're barking up the wrong tree. Man, you need to be more positive. How about ... we will make one, and it will be the perfect boyfriend / girlfriend everyone wants, and thus human breeding grinds to a dead halt as folks no longer socialize with each other, instead dating their perfect AI robot beaus. Human population plummets, and we eventually hump ourselves into extinction by fornicating with AI robot companions.

Hmmm, that's sort of pessimistic, too. But, at least we go out with a smile on our face.

Yeah, I tend to take Kurzweil with a grain of salt. I am deeply suspicious, in that I am not sure we have any idea what it is we are really trying to make - what on earth does it even mean to be conscious, self-aware, etc.? Still, he's an interesting and very intelligent guy. All credit to him if he makes some serious progress, and I wouldn't want to rain on his parade because I think it is all really very interesting.

As for the "But could we turn it off?" comment: I should have reworded that to "Would it be ethical to turn it off?" ... especially if they were as sentient, intelligent, etc. as us. We make babies and cannot turn them off whenever we want after they reach a certain level of sentience, self-awareness, interests, etc. Would it be the same for AI?

As for whether they would be good or bad robots? Hmmmm. We are a long, long way from clearly understanding what exactly morality is and how and why we are moral. No one can even agree on what ethical theory we as humans should adopt, so how you would program this I am not sure. Obviously you would want them to learn ethics (among other things) from their environment, but what that even means is a tricky one.

Chessmaster
April 6th, 2008, 06:23 AM
Teleporters, yes, though you'll likely never get me into one, even if they were invented within my lifetime (as I'm not entirely convinced that they wouldn't just destroy you and recreate a copy somewhere else).

Time machines are a bit difficult, since there's considerable evidence that they never will be invented -- namely, that there are no records of any time travelers ever visiting the past. Still, the possibility is there.

Interesting points.

As for teleporters: I guess issues of personal identity make it all pretty weird. If they did just make a copy of you, would you be the same person? What if it made two copies of you ... which one would be you? You can't be both! (There was a Star Trek: The Next Generation episode about this.)

As for time travel: I am not sure that just because people from the future haven't visited us, it means time machines won't be made in the future. Maybe this is just not an interesting time to visit ... kind of like some small, boring town you have heard of but never visit in your life because there are more interesting places to see. Seeing as there is a potentially almost infinite number of times they could visit, and each traveller only has a limited life span (i.e. just because you can time travel, you presumably still have a normal, or at least finite, life expectancy), it is perhaps not that surprising that we might not see a time traveller in our time.

Stephen Hawking used to argue that the fact we haven't seen time travellers indicates that time travel is impossible, but I am pretty sure he has changed his mind on this in the last few years.

Tundro Walker
April 6th, 2008, 07:05 AM
If there's one thing every robot sci-fi movie has ever taught us, it's that there's definitely some "line" that gets crossed when something should stop being treated like an "object" or property and start being treated as an equal.

It was hard enough for some folks to get over slavery, and slavery still exists in the world today. So, if some folks have a hard enough time treating other humans humanely, then there will probably be even more folks treating robots inhumanely, since robots would be seen as objects up to a point.

Then again, at the other extreme, you will have the folks who are hyper-sensitive to others. There are people who have become attached to their Roombas, treating them like robotic pets. I don't go that far with my Roomba, but I do respect it for doing work I don't want to do. But humanity operates on a bell curve. Where I and others appreciate a robot doing housework, others would treat it like a slave, barking orders at it, demeaning it and generally being horrible to it.

The movie "I, Robot" had the whole three laws for robots. Something like you can't hurt a human, and what-not. I think that would set us humans up, as a whole, for failure in interacting with robots, though. If sentient robots let us get away with abusing them, we'd have a lot of jerk people running around, and it could lead to civil unrest amongst robots if they were sentient enough to care.

My biggest concern, not just with AI but with all technology, is that it advances far faster than our appreciation for it does. iPods show up and everyone's excited, and now it's just "meh, whatever". Lots of folks have the "12-year-old love/hate gamer" mentality, where they totally worship the ground some new technology walks on when it first comes out, then a few months later, after they've used it or "conquered" it, they no longer appreciate it, sometimes to the point of kicking it to the curb or destroying it.

I bagged on you for being pessimistic about AI, but I'm pretty pessimistic about it myself (probably was projecting my thoughts onto your statement ... hehe). I don't think we'll end up with an Isaac Asimov / Star Trek world. We'll probably end up with a world like The Matrix, totally screwed up because we didn't appreciate the fruits of our own labor.