
Will computers ever be able to solve philosophy problems?



brawnypandora0
March 22nd, 2011, 10:30 PM
Some pressing questions might include, "Is there ever a morally justifiable philosophy in which committing genocide is permissible?" or "What type of truth can the Gettier problem best certify?"

This reminds me of Watson on Jeopardy and how Noam Chomsky described it, saying that he's "not impressed with a bigger steamroller." It seems AI will never reach the level of solving deep problems in philosophy.

Now in terms of solving problems in mathematics, I'm sure computers will begin to do that within my lifetime, even though a proof of Goldbach's conjecture has eluded mathematicians for centuries.
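To be fair about the gap: a computer can already verify Goldbach's conjecture for small even numbers by brute force, which is nothing like proving it. A rough Python sketch of that kind of check:

    # Brute-force verification of Goldbach's conjecture for even n < 1000.
    # Checking cases is easy for a computer; proving the conjecture is not.
    def is_prime(n):
        if n < 2:
            return False
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                return False
        return True

    for n in range(4, 1000, 2):
        assert any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1)), n
    print("every even n from 4 to 998 is a sum of two primes")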

coolbrook
March 22nd, 2011, 10:54 PM
Beam us up, Scotty!

brawnypandora0
March 22nd, 2011, 11:00 PM
Beam us up, Scotty!

What does that mean?

Hippytaff
March 22nd, 2011, 11:02 PM
Some pressing questions might include, "Is there ever a morally justifiable philosophy in which committing genocide is permissible?" or "What type of truth can the Gettier problem best certify?"

This reminds me of Watson on Jeopardy and how Noam Chomsky described it, saying that he's "not impressed with a bigger steamroller." It seems AI will never reach the level of solving deep problems in philosophy.

Now in terms of solving problems in mathematics, I'm sure computers will begin to do that within my lifetime, even though a proof of Goldbach's conjecture has eluded mathematicians for centuries.

Wow

um... no, I don't, but I love the idea

JRV
March 22nd, 2011, 11:05 PM
This reminds me of Watson on Jeopardy and how Noam Chomsky described it, saying that he's "not impressed with a bigger steamroller."

In other words, the Turing test is invalid. I agree.

The question of true intelligence in a computer has fascinated me for some time. In 1969 I was working on the first installation of a CNC machine, an Excelon printed circuit board drill, and the question arose: "Is it ethical to have a computer making parts for itself?"

I don't think a computer will solve philosophy problems for a very long time, if ever.

Has anyone read The Difference Engine by William Gibson and Bruce Sterling or Player Piano by Kurt Vonnegut?

Daveth
March 22nd, 2011, 11:26 PM
What does that mean?

Star Trek, the original series; chief engineer Montgomery Scott... sad that I know this, to be sure!

JKyleOKC
March 22nd, 2011, 11:35 PM
It seems AI will never reach the level of solving deep problems in philosophy.

How can we ever achieve AI without being able to put a strict definition to the concept of "intelligence"???

A better term than AI would be "simulation of intelligent activity" and we've had that ever since the Eliza program was first written...
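For anyone who hasn't seen how little machinery that "simulation" needs, here's a toy Eliza-style sketch in Python (the patterns are made up for illustration; it's pure pattern substitution, with no understanding anywhere):

    import re

    # Toy Eliza-style rules (invented for this sketch): a regex pattern
    # and a response template that reflects the user's words back.
    RULES = [
        (r'i need (.*)', r'Why do you need \1?'),
        (r'i am (.*)', r'How long have you been \1?'),
        (r'.*\bmother\b.*', 'Tell me more about your family.'),
    ]

    def respond(line):
        for pattern, template in RULES:
            match = re.match(pattern, line, re.IGNORECASE)
            if match:
                return match.expand(template)
        return 'Please go on.'  # stock reply when nothing matches

    print(respond('I am worried about AI'))
    # -> How long have you been worried about AI?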

Now for the title question, s/computers/humans/ and you have the answer. If any single true answer exists, we have yet to find it...

samcot
March 23rd, 2011, 12:05 AM
Perhaps a better question is: Will humans ever be able to solve philosophy problems?

wep940
March 23rd, 2011, 01:32 AM
Indeed, while humans still write the programming (even generated OSes still have human involvement), the limits will always be what a human can do. Some of the philosophical questions actually require an opinion, not necessarily a fact. So the answer will always be as "deep" as the person(s) doing the programming and their opinions.

I totally agree - artificial intelligence is a misnomer - it is always a simulation. As much as we might want to think that the computer is "thinking for itself" (which is what artificial intelligence implies), it is still a matter of pre-programmed steps. The programming is making the decisions, including pre-programmed options if something isn't directly apparent. Right now everything on a computer is logic - the OS and the programming. Analog-to-digital conversion, automatic data collection, etc., still have human-programmed logic at their lowest level.

wep940
March 23rd, 2011, 01:33 AM
Perhaps a better question is: Will humans ever be able to solve philosophy problems?

I'm not sure - should I consult my AI machine? ;) ;)

beew
March 23rd, 2011, 01:53 AM
Are "philosophical questions" solvable? I think the purpose of these questions are mostly for use as starting point for some open ended explorations, rather than to get a definitive answer.

As for the Turing test, it basically demonstrates how easily people can be fooled rather than whether machines possess intelligence. Its implications are more relevant to human psychology than to AI. (In an AI talk I was told that a simple bot fooled a bunch of horny guys in a sex chat room.)

brawnypandora0
March 23rd, 2011, 03:13 AM
Are "philosophical questions" solvable? I think the purpose of these questions are mostly for use as starting point for some open ended explorations, rather than to get a definitive answer.

As for the Turing test, it basically demonstrates how easily people can be fooled rather than whether machines possess intelligence. Its implications are more relevant to human psychology than to AI. (In an AI talk I was told that a simple bot fooled a bunch of horny guys in a sex chat room.)

Wow! I'm sure the "industry" isn't fully aware of this yet.

MasterNetra
March 23rd, 2011, 03:29 AM
Some pressing questions might include, "Is there ever a morally justifiable philosophy in which committing genocide is permissible?" or "What type of truth can the Gettier problem best certify?"

This reminds me of Watson on Jeopardy and how Noam Chomsky described it, saying that he's "not impressed with a bigger steamroller." It seems AI will never reach the level of solving deep problems in philosophy.

Now in terms of solving problems in mathematics, I'm sure computers will begin to do that within my lifetime, even though a proof of Goldbach's conjecture has eluded mathematicians for centuries.

Yes. Why? Because we are doing it. Our brains are electrochemical computers themselves.

Dustin2128
March 23rd, 2011, 03:31 AM
In other words, the Turing test is invalid. I agree.

The question of true intelligence in a computer has fascinated me for some time. In 1969 I was working on the first installation of a CNC machine, an Excelon printed circuit board drill, and the question arose: "Is it ethical to have a computer making parts for itself?"

I don't think a computer will solve philosophy problems for a very long time, if ever.

Has anyone read The Difference Engine by William Gibson and Bruce Sterling or Player Piano by Kurt Vonnegut?
I'm reading The Difference Engine at the moment; really fascinating book. I disagree with you because, as Netra already said, *we* are just chemical computers, really.

MasterNetra
March 23rd, 2011, 03:35 AM
I wonder, though, what will be our course of action once a computer is made that surpasses our minds in ALL areas... Do we and it (or them) work together to figure out how to upgrade ourselves, or to make it so we can? Will we start figuring out how to upgrade ourselves beforehand? ...

armageddon08
March 23rd, 2011, 03:43 AM
Understanding the human brain is the key. We still don't know about half the processes occurring in the brain.

MasterNetra
March 23rd, 2011, 04:08 AM
Understanding the human brain is the key. We still don't know about half the processes occurring in the brain.

Indeed we still have much to learn, but just think: we know so, so much more about it and our surrounding world/universe than we have EVER known in our species' existence. Maybe one day, perhaps even within our own lifetimes, we may learn and be advanced enough to start taking our evolution fully into our own hands, upgrading our minds and external computers hand in hand. With how tech continues to advance at an exponential rate, it's very possible that we, or many of us, could see this at least start unfolding within our own lifetimes. Not next year maybe, but 10, 20, 30 years later? Who knows what discoveries we will have come across and what advances we will have made.

samcot
March 23rd, 2011, 04:18 AM
You check with AI, and I'll check with the Maverick Meerkat Oracle:


GabrielYYZ
March 23rd, 2011, 04:24 AM
The problem will arise when a computer can bootstrap another without a human being involved, when it can extend its own source code, and when it can learn from its bugs and correct them.

After all, that's what we all do...

Note: in the case of humans, replace "bootstrap" with procreate and "without a human being" with another species.
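The self-reproducing part is already possible in a trivial sense: a quine is a program that prints its own source with no human involved. A minimal Python one follows; the hard parts, extending and debugging itself, are exactly what it lacks:

    # A quine: running this prints its own two lines of code back out
    # (comments aside). Reproducing code is the easy part; extending
    # and improving itself is the part nobody has solved.
    s = 's = %r\nprint(s %% s)'
    print(s % s)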

MasterNetra
March 23rd, 2011, 04:28 AM
The problem will arise when a computer can bootstrap another without a human being involved, when it can extend its own source code, and when it can learn from its bugs and correct them.

After all, that's what we all do...

Note: in the case of humans, replace "bootstrap" with procreate and "without a human being" with another species.

More likely it will be a problem for us, if we become a problem for them.

Khakilang
March 23rd, 2011, 04:36 AM
Nah! It would not solve any philosophy problem. That is a human problem.

slackthumbz
March 23rd, 2011, 05:12 AM
"We," said Majikthise, "are Philosophers."

"Though we may not be," said Vroomfondel waving a warning finger at the programmers.

"Yes we are," insisted Majikthise. "We are quite definitely here as representatives of the Amalgamated Union of Philosophers, Sages, Luminaries and Other Thinking Persons, and we want this machine off, and we want it off now!"

"What's the problem?" said Lunkwill.

"I'll tell you what the problem is mate," said Majikthise, "demarcation, that's the problem!"

"We demand," yelled Vroomfondel, "that demarcation may or may not be the problem!"

"You just let the machines get on with the adding up," warned Majikthise, "and we'll take care of the eternal verities thank you very much. You want to check your legal position you do mate. Under law the Quest for Ultimate Truth is quite clearly the inalienable prerogative of your working thinkers. Any bloody machine goes and actually finds it and we're straight out of a job aren't we? I mean what's the use of our sitting up half the night arguing that there may or may not be a God if this machine only goes and gives us his bleeding phone number the next morning?"

"That's right!" shouted Vroomfondel, "we demand rigidly defined areas of doubt and uncertainty!"


-- Douglas Adams, The Hitchhiker's Guide to the Galaxy

cascade9
March 23rd, 2011, 05:15 AM
Some pressing questions might include, "Is there ever a morally justifiable philosophy in which committing genocide is permissible?" or "What type of truth can the Gettier problem best certify?"

You're kidding, right?

To answer both of those at the same time - the Nazis in 1933-1945 (and today for that matter) would say that they had 'justified true belief' that committing genocide was not only allowed, it was desirable (which is why they did some of the most stupid things ever... I really shouldn't go into the faulty logic behind those actions though, I'm already pushing the edge with proving Godwin's law LOL)

I don't believe that philosophy really is that great for 'solving' ethical problems. It is good for justifying the actions taken over ethical problems, though. But I really hate Descartes, who is a person who really pushed the idea that philosophy was useful for 'solving' problems like that, and my favourite philosopher is Montaigne. So, to use Montaigne's words, "Que sais-je?" (What do I know?)

Shining Arcanine
March 23rd, 2011, 05:55 AM
How can we ever achieve AI without being able to put a strict definition to the concept of "intelligence"???

A better term than AI would be "simulation of intelligent activity" and we've had that ever since the Eliza program was first written...

Now for the title question, s/computers/humans/ and you have the answer. If any single true answer exists, we have yet to find it...

The definition exists, but you need to think of it. I mentioned my definition of artificial intelligence to someone doing research in this area recently and he thought it was awesome, although in hindsight that was a mistake. The world is not ready for this yet, and providing anything that brings the world toward a clear definition will hasten it rather than delay it.

Copper Bezel
March 23rd, 2011, 11:48 AM
Artificial intelligence is a perfectly appropriate term for something like Watson, which uses predictive semantic algorithms to produce hopefully-relevant responses. Something that emulates rather than simulates human intelligence is another thing entirely, and entirely theoretical; it will probably require both a bottom-up "training" process (something that resembles language acquisition) and an entirely different architecture for the brain (multiple systems working independently and communicating their results to one another, like the human brain, instead of a top-down configuration like Watson's).

The only thing that has ever solved philosophical questions is scientific fact. Of the example questions, the first is a practical one; the part that isn't about predicting outcomes (what are the consequences of genocide?) is based on values that come from human experience and biology that the computer would have to learn from us anyway (who has the right to take human life, or the life of an entire cultural lineage and tradition, and what is its value?) The second is essentially a semantic one - English isn't quite precise enough to convey the distinction being made.

I'm not going to go as far as to say that philosophy is a god of the gaps, but I think I just did. = )

matthew.ball
March 23rd, 2011, 11:51 AM
"What type of truth can the Gettier problem best certify?"
Ehh, the "Gettier Problem" has nothing to do with truth per se. They just show that "justified true belief" is not a sufficient condition for knowledge. If you take "luck" out of the picture, they don't really show anything either.

Random_Dude
March 23rd, 2011, 12:23 PM
I don't think philosophical questions have an answer; they are a matter of opinion, not a mathematical problem.
So no human or computer will ever find a correct and definitive answer.

Cheers :cool:

wizard10000
March 23rd, 2011, 12:31 PM
I don't think philosophical questions have an answer; they are a matter of opinion, not a mathematical problem.

This. Philosophical problems are not designed to be solved; that's why they're philosophical problems :D

donkyhotay
March 23rd, 2011, 02:30 PM
Will computers ever be able to solve philosophy problems?

No; at most, computers might give us more questions to think about, but no answers.

3Miro
March 23rd, 2011, 03:24 PM
"Solution" does not exist in philosophy. Math has that concept and while computers already solve many math problems (that is what computers were designed for in the first place), computers are essentially gigantic calculators. A computer is exactly a computer, it can add, subtract, multiply and divide with the added option to store number (there are also limitations on exactly what can be added/divided and so on). The computer can do everything that can be mathematically reduced to those 4+1 operations, if something cannot be reduced to those, then computers cannot do it. A computer is also dependent on a programmer to do the math and reduce the problem in the first place.

We will have to wait for a while, but it will be interesting to see if a computer can make an accurate simulation of the inner workings of a human brain and see if "intelligence" comes out of it.

realzippy
March 23rd, 2011, 03:38 PM
First the computer would have to know
- that life ends.
- what love is.
- what pain is.

wizard10000
March 23rd, 2011, 03:44 PM
First the computer would have to know
- that life ends.

Most humans don't even acknowledge that. Almost every religious philosophy teaches some sort of afterlife, since IMO humans as a species tend to innately reject the concept of their own nonexistence.


- what love is.

Humans can't define this either. A lot of people believe love is an emotion - I tend to believe affection is an emotion but love is a choice. See? My definition is different from a lot of people's ;)


- what pain is.

This also cannot be quantified.

Ain't philosophy grand?

:D

3Miro
March 23rd, 2011, 03:56 PM
First the computer would have to know
- that life ends.
- what love is.
- what pain is.

Those may be associated with being a human (since almost everybody has experienced them); however, I don't see how they relate to intelligence.

sydbat
March 23rd, 2011, 04:10 PM
"We," said Majikthise, "are Philosophers."

"Though we may not be," said Vroomfondel waving a warning finger at the programmers.

"Yes we are," insisted Majikthise. "We are quite definitely here as representatives of the Amalgamated Union of Philosophers, Sages, Luminaries and Other Thinking Persons, and we want this machine off, and we want it off now!"

"What's the problem?" said Lunkwill.

"I'll tell you what the problem is mate," said Majikthise, "demarcation, that's the problem!"

"We demand," yelled Vroomfondel, "that demarcation may or may not be the problem!"

"You just let the machines get on with the adding up," warned Majikthise, "and we'll take care of the eternal verities thank you very much. You want to check your legal position you do mate. Under law the Quest for Ultimate Truth is quite clearly the inalienable prerogative of your working thinkers. Any bloody machine goes and actually finds it and we're straight out of a job aren't we? I mean what's the use of our sitting up half the night arguing that there may or may not be a God if this machine only goes and gives us his bleeding phone number the next morning?"

"That's right!" shouted Vroomfondel, "we demand rigidly defined areas of doubt and uncertainty!"


-- Douglas Adams, The Hitchhiker's Guide to the Galaxy

I miss Douglas Adams.

Also, the answer to the question is 42. /thread

Zero2Nine
March 23rd, 2011, 04:12 PM
That question implies there is a solution to philosophy problems.

realzippy
March 23rd, 2011, 04:13 PM
Those may be associated with being a human (since almost everybody has experienced them), however, I don't see how they relate to intelligence.

Exactly, you got it. So the answer to the OP's question simply is:

No.

wizard10000
March 23rd, 2011, 04:22 PM
Exactly, you got it. So the answer to the OP's question simply is:

No.

I thought the answer was 42 :confused:

confused,

-- wiz

3Miro
March 23rd, 2011, 04:23 PM
I thought the answer was 42 :confused:

confused,

-- wiz

The answer is 42; the problem is that we don't understand the real question.

cascade9
March 23rd, 2011, 06:12 PM
Most humans don't even acknowledge that. Almost every religious philosophy teaches some sort of afterlife, since IMO humans as a species tend to innately reject the concept of their own nonexistence.

Have you considered that maybe they actually know something about that? Nobody has ever 'proved' that there isn't an afterlife (or reincarnation), and it's possibly unprovable either way.

This is why I don't like the idea of 'computers solving philosophy problems'. Aside from the way that 'solving' those problems has issues by itself, if somebody did program a computer to solve those problems it's likely to be programmed by an atheist, or maybe an agnostic if we were lucky.

Assuming that death will result in 'nonexistence' is just as loaded a position to take as assuming that death will result in a different outcome.

MasterNetra
March 23rd, 2011, 07:07 PM
Most humans don't even acknowledge that. Almost every religious philosophy teaches some sort of afterlife, since IMO humans as a species tend to innately reject the concept of their own nonexistence.



Humans can't define this either. A lot of people believe love is an emotion - I tend to believe affection is an emotion but love is a choice. See? My definition is different from a lot of people's ;)



This also cannot be quantified.

Ain't philosophy grand?

:D

Actually, love and pain are grounded in the physical world. Physical pain (body pain) of course exists due to touch receptors all over our body; when they are damaged or activated properly, they send a signal all the way up to our brain, where it is processed. Emotional "pain", and emotion in general, is a complicated mixture of chemical processes interacting with, and often being stimulated by, our brain cells.

Love, pain, and emotion can be quantified; whether or not we have found a suitable way of doing so is a different matter.

wizard10000
March 23rd, 2011, 07:20 PM
...Assuming that death will result in 'nonexistence' is just as loaded a position to take as assuming that death will result in a different outcome.

I never said that death would result in nonexistence; I said it was my *opinion* that humans as a species have a difficult time comprehending the concept that they may cease to exist at some point.

MasterNetra
March 23rd, 2011, 07:23 PM
The answer is 42, the problem is that we don't understand the real question.

Not the 42 thing again. Seriously, am I the only one who got that the whole thing was a pun, that the meaning of life is to "multiply"?

RiceMonster
March 23rd, 2011, 07:25 PM
Not the 42 thing again. Seriously, am I the only one who got that the whole thing was a pun, that the meaning of life is to "multiply"?

They're just referencing the lines in the book, not interpreting it. You're over-analyzing what they're saying.

KL_72_TR
March 23rd, 2011, 07:25 PM
Ha ha ha … This is so funny :-)
Millions of years of evolution to create a thinking 'Brain'.
Now that thinking 'Brain' is not satisfied with the result the AI (which that same 'Brain' created) has reached in less than 100 years.
Yes and again Yes... One day AI is going to surpass us.
I'm worried about one thing: are we going to infect the AI with thoughts such as revenge, anger, lies … ?? :-(
That is the question.

MasterNetra
March 23rd, 2011, 07:33 PM
They're just referencing the lines in the book, not interpreting it. You're over-analyzing what they're saying.

My mind is referencing the movie (the older British one)...didn't read the book.


Ha ha ha … This is so funny :-)
Millions of years of evolution to create a thinking 'Brain'.
Now that thinking 'Brain' is not satisfied with the result the AI (which that same 'Brain' created) has reached in less than 100 years.
Yes and again Yes... One day AI is going to surpass us.
I'm worried about one thing: are we going to infect the AI with thoughts such as revenge, anger, lies … ?? :-(
That is the question.

lol Billions of years technically, but yeah, I hope not. That would be among the worst things we could do to it.

RiceMonster
March 23rd, 2011, 07:37 PM
My mind is referencing the movie (the older British one)...didn't read the book.

In the book, they create a super computer (Deep Thought) to answer "the question of life the universe and everything". Deep Thought tells them the answer is 42, and that since "the question of life the universe and everything" is not really a question in itself, they need to build a new super computer (aka earth) to find out what the question is so the answer of 42 will make more sense.

MasterNetra
March 23rd, 2011, 07:39 PM
In the book, they create a super computer (Deep Thought) to answer "the question of life the universe and everything". Deep Thought tells them the answer is 42, and that since "the question of life the universe and everything" is not really a question in itself, they need to build a new super computer (aka earth) to find out what the question is so the answer of 42 will make more sense.

The movie covered that, yes. I was pointing out (and it becomes much more obvious at the end) that the answer ultimately was to multiply, as seen with the question that led to 42, since it was a multiplication problem. A lot of people I've run into didn't catch that.

t0p
March 23rd, 2011, 07:42 PM
The answer is 42; the problem is that we don't understand the real question.

The "real question" to that answer is "what is 6 times 7?" Eris, I thought everyone* knew that.

As for the question of "artificial intelligence", "real intelligence" and the rest of it: too many people are assuming that computers will always be as they are now: glorified abacuses. I expect that in the future scientists or someone will learn how to create machines that can think, feel, love, hate, and the rest of the kaboodle that we conceited humans believe are somehow unique to "natural" life.



"The man most directly responsible is Miles Bennett Dyson. He is the director of special projects at Cyberdyne Systems Corporation... In a few months he develops a revolutionary type of microprocessor...In three years Cyberdyne will become the largest supplier of military computer systems. All Stealth Bombers are upgraded with Cyberdyne computers becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes on-line August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.

"Skynet fights back."



We already have unmanned drones armed with Hellfire missiles and Eris knows what other vile contraptions - how long before we remove the necessity of some stupe with the "fire" button and let the drones make up their own "minds" whether or not to annihilate ground targets? Heck, the current stupes ain't too good at the job, launching attacks on wedding parties and the like. Drone decisions can't be much worse than that. What could possibly go wrong?**

* That is, everyone excluding the vast majority of the human race who neither know nor care what I'm talking about.

** That's a hypothetical question. If you didn't realise that, you're probably a crummy bot in need of an upgrade.

3Miro
March 23rd, 2011, 07:57 PM
I expect that in the future scientists or someone will learn how to create machines that can think, feel, love, hate, and the rest of the kaboodle that we conceited humans believe are somehow unique to "natural" life.


Why in the world would anyone build a computer capable of "hate" and "love" and then put it in charge of the military? If you make a computer capable of feelings, then you do that in a lab to study how it works. If you make a military machine, you make it like a very good soldier: one that follows orders thoroughly.

MasterNetra
March 23rd, 2011, 08:00 PM
Why in the world would anyone build a computer capable of "hate" and "love" and then put it in charge of the military? If you make a computer capable of feelings, then you do that in a lab to study how it works. If you make a military machine, you make it like a very good soldier: one that follows orders thoroughly.

+1 this; no sane general or military personnel is going to utilize a man-made emotional machine for warfare.

sydbat
March 24th, 2011, 04:14 PM
My mind is referencing the movie (the older British one)...didn't read the book.

The television series you are referencing (BBC 1981, based on the BBC radio series from 1978) was pretty close to the book. However, you have to read the book to actually understand what the radio play and TV show were about, and to get the stuff that was missed... and there was a lot of stuff missed, as is the case with any adaptation.

I never saw the 2005 movie, but everyone I know who did see it, said it missed the point entirely.

wep940
March 25th, 2011, 09:28 PM
Going way back to my systems programming days - it would have been cool if philosophy could have solved some of my computer problems. Imagine sitting in your office, the weight of a downed 24/7 center on your shoulders, thinking "what's life?" and BINGO! - a voice from above says - "fix that thing or you're fired!"