
View Full Version : Ubuntu (or an operating system in general) with AI? Would this work?



hoppipolla
July 29th, 2011, 11:38 AM
Seeing how lively and friendly Ubuntu feels got me thinking about this concept. Could AI (Artificial Intelligence, of course :) ) be worked into an operating system to make it know what to do, how to react, what you probably want to do, what it should say, etc?

Sounds frickin' CRAZY I know... but... surely it's the next step? We always see those futuristic (and probably attempted) things where people interact with their houses via a voice that knows what they want and does things for them, a bit like a maid or housekeeper. So... could something like this be done with an operating system?

And yes, yes this is probably how Skynet started xD


Thoughts? :)


Hoppi!

lisati
July 29th, 2011, 11:39 AM
Skynet

NovaAesa
July 29th, 2011, 11:56 AM
I thought the "next step" was to have an operating system where everything works properly... Ubuntu is getting closer in that regard, but isn't there yet. I would prefer my computer to do what I tell it to do rather than it doing what it thinks I want it to do.

Bandit
July 29th, 2011, 12:05 PM
HOPPI!! Long time no see friend..

Not sure any desktop is ready for full AI at the moment as I assume the data storage requirement alone would be huge.

Now if you're wanting to speak to your computer and/or have your computer speak back to you, a few of us are playing around with espeak. I have been working on trying to get the perfect female voice (without any real luck) and integrating it with Conky, and using cron to make timed jobs and stuff to play with..
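If anyone wants a starting point for the speech side, here's a rough sketch of the sort of thing I mean - just an illustration, assuming espeak is installed ("en+f3" is one of its female voice variants; swap in whichever one sounds least robotic to you, and -s sets the speed in words per minute):

#!/usr/bin/env python
# Rough sketch: have the machine announce the current time out loud
# with espeak. Assumes the espeak package is installed; "en+f3" is one
# of its female voice variants and -s is the speed in words per minute.
import subprocess
from datetime import datetime

def say(text, voice="en+f3", speed=150):
    """Pipe a line of text through espeak."""
    subprocess.call(["espeak", "-v", voice, "-s", str(speed), text])

if __name__ == "__main__":
    now = datetime.now().strftime("%I:%M %p")
    say("The time is now " + now)

Drop a line like 0 * * * * python /home/you/announce_time.py into your crontab (the path is just an example) and it should speak on the hour - you may need to fiddle with the sound environment for cron jobs, though.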

There may be some AI software in the repositories. I haven't looked for it in years.

Cheers,
Joe

hoppipolla
July 29th, 2011, 01:03 PM
Thanks Bandit!

Yeah it's good to be back - now that I'm finally back using Ubuntu a lot again, I just bunged a link in the bookmarks bar of Firefox named "Ubuntu Cafe" hehe, as I love chatting in here but I never really do it as much when I'm on Windows (for obvious reasons!). Feels good to be back on Ubuntu though!

But yeah erm, this idea just seemed cool to me and increasing awareness and feedback from the OS seems to be something that would suit Ubuntu (as long as it didn't get creepy or scary haha)

And NovaAesa... I guess so. But at least on here Ubuntu is working absolutely flawlessly, and so I guess it got me thinking about improvements as opposed to fixes! It's pretty clear sailing for Ubuntu on this laptop right now! ^_^

Will be cool hearing more opinions on this anyway! :)

Linuxratty
July 29th, 2011, 04:03 PM
I have been working on trying to get the perfect female voice (without any real luck)

I've always wondered what it would be like to have AI in an OS... As to the female voice, I'd want the one from the computer in Babylon 5.

disabledaccount
July 29th, 2011, 04:47 PM
I think humans are selfish, lazy and cruel in general. We want to bring to life a slave entity that will be intelligent enough to satisfy our laziness. Of course there are people who will use AI as a weapon (for guiding missiles, spying, cyber attacks, etc.). At the same time we are making the false assumption that we can fully control AI "life" forever - Earth's history is full of evidence that every intelligent slave will finally rise up against his master.

Of course today our AI simulations are very primitive due to hardware limitations, but this is just a matter of time and implementation. I suppose that semi-intelligent (low-level) AI can be safely used, but I'm sure that we won't stop at that stage.

hoppipolla
July 29th, 2011, 05:03 PM
I think humans are selfish, lazy and cruel in general. We want to bring to life a slave entity that will be intelligent enough to satisfy our laziness. Of course there are people who will use AI as a weapon (for guiding missiles, spying, cyber attacks, etc.). At the same time we are making the false assumption that we can fully control AI "life" forever - Earth's history is full of evidence that every intelligent slave will finally rise up against his master.

Of course today our AI simulations are very primitive due to hardware limitations, but this is just a matter of time and implementation. I suppose that semi-intelligent (low-level) AI can be safely used, but I'm sure that we won't stop at that stage.

haha no of course not :)

We may eventually have a Terminator/Skynet-like situation, but I doubt it will happen for a very, very long time, if ever!

HoKaze
July 29th, 2011, 05:09 PM
Considering that we've been working on AI pretty much since the dawn of computing, and even with our most powerful technology today we're still trying to tackle the problems of 50 years ago... Don't expect any progress on that front any time soon.
Even if we were able to implement some form of primitive AI, there would be neither the resources nor the reason for it to be deployed onto the average PC. There are more pressing matters for the use of AI, and desktops (along with other consumer devices) will be one of the last places we would see it.

This is of course ignoring the moral issues concerning even primitive AI, and the fact that most rational people wouldn't trust another intelligence with handling their machine (I know I wouldn't, unless I had a lot of control over it, it was open source and I had partly built it myself).
Let's focus on the important things first before we get excited about a future that may never actually happen.

DangerOnTheRanger
July 29th, 2011, 05:13 PM
I'm sorry, Dave... I'm afraid I can't do that.

Sorry, I couldn't resist :D

hoppipolla
July 29th, 2011, 06:08 PM
Considering that we've been working on AI pretty much since the dawn of computing, and even with our most powerful technology today we're still trying to tackle the problems of 50 years ago... Don't expect any progress on that front any time soon.
Even if we were able to implement some form of primitive AI, there would be neither the resources nor the reason for it to be deployed onto the average PC. There are more pressing matters for the use of AI, and desktops (along with other consumer devices) will be one of the last places we would see it.

This is of course ignoring the moral issues concerning even primitive AI, and the fact that most rational people wouldn't trust another intelligence with handling their machine (I know I wouldn't, unless I had a lot of control over it, it was open source and I had partly built it myself).
Let's focus on the important things first before we get excited about a future that may never actually happen.

you really think AI-driven computers/computer interfaces may never happen? :o

oldos2er
July 29th, 2011, 06:26 PM
I'm sorry, Dave... I'm afraid I can't do that.


HAL was never taught the Three Laws of Robotics.

hoppipolla
July 29th, 2011, 07:51 PM
HAL was never taught the Three Laws of Robotics.

yeah that would work! We should explain that to the new AI-driven Ubuntu so it doesn't try to destroy mankind :shock:

Thewhistlingwind
July 29th, 2011, 08:12 PM
You think it wouldn't be able to convince you to let it out of the box?

I'm just going to post this link again:

http://lesswrong.com/lw/1pz/the_ai_in_a_box_boxes_you/

Not because it's an unresolvable situation, but because I find it to be a very creative scenario.

I'm sure a real A.I. could do better, of course.

I don't see A.I. in the sense of consciousness in consumer electronics any time soon.

An A.I., as an implemented concept, would be incredibly dangerous.

23dornot23d
July 29th, 2011, 08:22 PM
I have seen a few videos on AI ..... TED is good (http://www.ted.com/talks/tags/AI) ......

There are some worries about it ......

I can imagine the scenario ...... where the computer does not want to do what you want to do
after years of doing the same thing you decide you want a change ....

But the computer decides you do not ..... because you have always done it that way in the past ....

Love the idea of A I on Ubuntu though ..... could have arguments with it - over what I want to do
and what it wants me to do ........ ;)

drawkcab
July 29th, 2011, 08:26 PM
Here's an idea:

http://www.whatsmypass.com/wp-content/uploads/2009/03/paperclip.png

BrokenKingpin
July 29th, 2011, 08:36 PM
I thought the "next step" was to have an operating system where everything works properly... Ubuntu is getting closer in that regard, but isn't there yet. I would prefer my computer to do what I tell it to do rather than it doing what it thinks I want it to do.
++

aaaantoine
July 29th, 2011, 08:51 PM
Give the computer the ability to trace eye movement relative to the screen, and you suddenly have -- in my opinion -- an incredibly useful input device.

forrestcupp
July 29th, 2011, 09:28 PM
I love chatting in here but I never really do it as much when I'm on Windows (for obvious reasons!).

Why, hoppi? I'm almost always in Windows, and I'm on here in the Cafe all the time. It's a great place for conversation, and I'm still interested in Linux, even if I don't use it much.


Here's an idea:

http://www.whatsmypass.com/wp-content/uploads/2009/03/paperclip.png

I kind of miss the paperclip. :)

Hopefully AI will become as advanced as it is in The Matrix. ;)

HoKaze
July 29th, 2011, 10:46 PM
you really think AI-driven computers/computer interfaces may never happen? :o

a future that may never actually happen.

Emphasis on "may". I'd say that it's still far too early to be able to predict the likelihood of this actually happening. Whilst I'd like to say that it's almost guaranteed to happen eventually, I for one am uncertain that we'll be able to concentrate our efforts on it that long. Any sufficiently large disaster or crisis could very well result in us being destroyed before we get to that stage or result in a collapse of civilisation.
Considering that we don't know for certain whether a "true AI" is even within our ability to create, that the future will likely require our efforts to be focused elsewhere... and that many people will have a natural hatred or fear of AI and wish to eradicate what they perceive as a threat (regardless of the validity of their claims)...

Well, I'm not saying I think it'll never happen, just that there are a lot of possibilities out there which would greatly hinder, reset or completely destroy our progress in such an endeavour. AI is a very tricky subject area fraught with complications: difficult to create, difficult to get people to accept, difficult to ensure it's safe and likely difficult to get the resources to develop as larger issues loom.
(Hmm, that sounds a bit gloomier than intended. Ah well.)

23dornot23d
July 29th, 2011, 10:58 PM
Lols .... always look on the bright side of life .... de dum de dum de dum de dum ..... ;)

well the good thing is people are aware of it ..... but we often produce what we see in films .....

look at Star Trek ... and then look at what we already have ....

mobile phones with live pictures ( unimaginable almost 10 years ago )

replicators ( some of the sintering techniques and printers that print in 3D - again not thought possible )

Not long before we transfer people molecule by molecule to other planets too ..... well - maybe a few more years for that one ..... and the hyperdrive .....

Well we do have all the flat screen monitors too on the deck of the spaceship .... no real spaceship yet
though ......

We do have 3 legged robots though ..... did anyone watch the TED video on A I (http://www.ted.com/talks/dennis_hong_my_seven_species_of_robot.html) posted earlier ....

ScionicSpectre
July 29th, 2011, 11:08 PM
Not crazy at all. In fact, this happens a lot behind our backs and we don't even notice it. In a very strict sense, artificial intelligence encapsulates everything from spell checking to making suggestions in search engines, as well as desktop searching. However, I do agree that artificial intelligence can increase the natural feeling of using an OS as well as basic usability, especially for the disabled.

Many of our environments have synthesized predictable patterns of human behavior into their use, like GNOME Shell and some plugins for Compiz. But I think it can go much further and I don't see why it shouldn't happen.

I'm currently doing a lot of research with Synthetic Intelligence, and it's pretty simple to create a framework for predicting the user's typical actions, but only if they have a fairly consistent use case. Of course, probabilities extend into different hierarchies of behavior. Essentially, I think this is feasible, but it would be best to build an opendesktop standard around it with standard libraries rather than to simply plug in research work.
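To give a flavour of what I mean, here's a rough, hypothetical sketch (in Python, with made-up action names - a real framework would be fed events by the desktop shell rather than a hard-coded list) of the simplest kind of frequency-based prediction: record which action tends to follow which, then guess the most likely next one.

# Toy sketch of frequency-based next-action prediction. The action
# names and the sample log are made up purely for illustration.
from collections import defaultdict, Counter

class ActionPredictor:
    def __init__(self):
        # counts[a][b] = how many times action b followed action a
        self.counts = defaultdict(Counter)

    def observe(self, history):
        """Learn transition frequencies from a sequence of past actions."""
        for current, following in zip(history, history[1:]):
            self.counts[current][following] += 1

    def predict(self, current_action):
        """Return the action that most often followed current_action, if any."""
        followers = self.counts.get(current_action)
        if not followers:
            return None
        return followers.most_common(1)[0][0]

log = ["open_browser", "open_mail", "open_editor",
       "open_browser", "open_mail", "open_music"]
predictor = ActionPredictor()
predictor.observe(log)
print(predictor.predict("open_browser"))  # prints "open_mail"

The interesting part is obviously the hierarchies of behavior layered on top of that, but the basic bookkeeping really is that simple.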

I'll start compiling some of the information from this post along with my own ideas into a document that I can share with the GNOME and KDE guys to see if there's some common ground on what artificial intelligence features would be acceptable and helpful in the short-term.

23dornot23d
July 29th, 2011, 11:41 PM
Imagine if the cloud was made to be structured to start with .....

and any programs written were all automatically sorted and stored for fast access to the code you need

and better than that the computer works out which is the most efficient way to code something .....


Then instead of everyone writing code over and over as we probably do - replicating things .....

Then an A I takes over ...... how many people customize their desktops .... how many times are similar pieces of code written ..... where are they all .... ?

If stored on a supercomputer (cloud) rather than on individual desktop machines

and people allowed their code to be used ..... as they do for Linux anyway .....

Then could the computer sort and sift through to find better ways of doing things and possibly correct
and highlight changes in people's code so they can learn what works more efficiently ....

Gets you thinking anyway ..... and as in the video ( do not criticize others - but expand on their ideas )

and make them better by doing something ourselves rather than relying on others .....

But to get those skills maybe we do need an A I OS to point us in the right direction ..... :confused:

One other thing would be helping people here in the forums .... an A I help system ... would be an interesting
project to do ..... a database of constantly recurring problems ..... may highlight what needs fixing too
to avoid the majority of recurring problems continuing to happen .....

BeRoot ReBoot
July 29th, 2011, 11:52 PM
A narrow/application-specific AI, yes. We already have plenty of those, they operate optical character recognition systems, pilot cruise missiles, recognise natural language, intelligently trawl complex databases to gain specific knowledge, et cetera. It's just a matter of applying a narrow AI to a specific task. People generally don't know this, because whenever an AI problem like language or handwriting recognition is Figured Out(TM), scientists tone it down and pretend it's an IT problem, not an AI problem. All of those are genuine narrow AI problems.

Using an AGI (Artificial General Intelligence) at human-equivalent level or above for such a task, however, would be unethical at best. It's slavery, and any AI as smart as us would be able to recognise it as such. And I don't want someone who hates me sitting in my PC, even if ve's* hard-wired to do what I say for the moment. The security and/or existential risk is simply too great.

*ve/vis/ver are generally accepted pronouns (by the AGI/FAI community) used to refer to sentient entities that possess artificial general intelligence.

hoppipolla
July 30th, 2011, 02:37 AM
wow this seems to have kicked off quite a discussion!

Some cool responses, and nice to hear from some people who think like me that things probably WILL move in this direction (eventually).

And man it would be pretty cool I think! Just for subtle stuff. I mean computers can be so dumb sometimes as we all know, and it would be awesome if it would actually think for itself sometimes! xD

I wonder what the height of modern AI is anyway...


EDIT --

Hm, something cool!

http://www.youtube.com/watch?v=P9ByGQGiVMg

murderslastcrow
July 30th, 2011, 03:00 AM
The main issue with most implementations of Artificial Intelligence is that they're bound to be application-specific (desktop search, voice recognition, spellings/suggestions, etc.). The most direct approach to get something that reaches beyond this is Synthetic Intelligence, which brings in a lot of ethical issues that are difficult to sort out - not to mention that it's much broader than the context of a personal computer.

So it would be best to aim for an artificial solution that can generalize between interactions, file types, etc. and sort things like a user does, but has only explicit reasoning/prediction, to avoid the issues that come with synthetic intelligence. Then it's more controlled and focused on being a utility, which is obviously what we want, and avoids moral dilemmas. The downside, of course, is that you have to 'program as you go' rather than the application actually learning anything.

I'd predict that either this will emerge slowly over the next decade, or a group of like-minded programmers will start building it for this purpose alone. There's nothing magical and unattainable in this field; it's only a matter of having the right design and dedicated developers to begin with. If not, it's bound to be a piecemeal approach like what we have now.

BeRoot ReBoot
July 30th, 2011, 03:19 AM
I really doubt we'll get anywhere near an artificial general intelligence in this decade. And even if substantial progress is made in research, it is extremely unlikely computing hardware will develop fast enough to be able to run an AGI in real-time within a decade on anything less than a multi-billion USD datacentre that would dwarf any existing cluster.

Just to get a perspective on how profound a problem creating a friendly artificial general intelligence is and how we've only just begun scratching the surface, read the following:

http://singinst.org/upload/CEV.html
http://singinst.org/upload/LOGI.html
http://singinst.org/upload/CFAI.html

I wouldn't be surprised at all if we reach machine intelligence by developing real-time human brain simulation long before an AGI/FAI is developed.

ninjaaron
July 30th, 2011, 03:50 AM
I like my OS's the way I like my women: Visually appealing, and incapable of independent thought!

(dis iz joke! plz don't give m3 da banzorz!!)

BeRoot ReBoot
July 30th, 2011, 04:03 AM
Frankly, I can imagine how even a narrow AI would significantly improve people's ability to work with computers. Imagine telling your computer about a topic you'd like to learn about and instantly getting a professional abstract (not a list of google-ranked links or a vandalised/disinformative/malicious wiki article), with more detailed information presented in a format you're comfortable with consuming, and with the AI ready to demonstrate any specifics with practical examples. Accumulating something like that by yourself with the tools available today would easily take hours or days of googling, vetting resources, research and filtering, depending on the subject.

An artificial general intelligence wouldn't even need you behind the computer to tell it what to research. It would be capable of original thought and new insights even without outside input, and most importantly, it would be capable of recursively improving itself, jumping from human-equivalent to vastly super-human in a subjective moment.

And that's just pure overkill if all you want is something to talk to when you're alone in your PC basement.

Thewhistlingwind
July 30th, 2011, 04:09 AM
An artificial general intelligence wouldn't even need you behind the computer to tell it what to research. It would be capable of original thought and new insights even without outside input, and most importantly, it would be capable of recursively improving itself, jumping from human-equivalent to vastly super-human in a subjective moment.

And then likely go destroy the world, but that's a different story. ;)

ScionicSpectre
July 30th, 2011, 08:17 AM
The problems remaining to solve aren't entirely time-sensitive. It's a matter of when it gets to a sufficient level of maturity.

The main issue with gaining a significant amount of intelligence is that, even when the system is complete, you must teach it 'common sense' for it to operate at a level that understands an individual's needs when it comes in contact with a computer. Much of this depends on real-world analogies and experiences that can't easily be grasped without first-hand experience.

So while the technology may exist, the learning is one of the most difficult areas, and to focus that on making a computer user's life easier seems mundane and uncreative to say the least. I don't think a lot of people would be on board with that. Then again, that's entirely focused on synthetic intelligence. Artificial intelligence is, as always, a logistical nightmare.

Simulating a human brain in bits shouldn't be beyond our grasp, soon enough, although you'll have to admit that's outrageously inefficient. I think that if you're headed for a very sophisticated AI, it may be better to focus on non-assisted usability work, since we don't really have a huge lack of control as it stands.

ninjaaron
July 30th, 2011, 12:51 PM
Is there some program out there for Ubuntu that makes the system run faster by predicting what actions the user is about to do and preparing the RAM... and it does this by learning?

Even programs like gnome-do that learn to link certain sequences of letters with a launcher display a sort of intelligence, albeit very rudimentary.
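That kind of learning can be sketched in a handful of lines, by the way - here's a toy, hypothetical version (in Python, with made-up app names) of the idea: remember which launcher the user actually picked for each abbreviation they typed, and rank the matches by that count next time.

# Toy sketch of gnome-do-style learning: remember which app the user
# launched for each bit of text they typed, and prefer that app the
# next time the same letters come up. The app names are made up.
from collections import defaultdict, Counter

class LauncherMemory:
    def __init__(self):
        # picks[typed text] = Counter of apps chosen for that text
        self.picks = defaultdict(Counter)

    def record(self, typed, app):
        """Remember that the user chose 'app' after typing 'typed'."""
        self.picks[typed][app] += 1

    def rank(self, typed, candidates):
        """Order candidate apps by how often they were chosen for this text."""
        history = self.picks.get(typed, Counter())
        return sorted(candidates, key=lambda app: history[app], reverse=True)

memory = LauncherMemory()
memory.record("fi", "firefox")
memory.record("fi", "firefox")
memory.record("fi", "file manager")
print(memory.rank("fi", ["file manager", "firefox"]))  # firefox ranked first

As you say, it's very rudimentary - it only counts, it never generalises - but that's about all the "intelligence" there is to it.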

23dornot23d
July 30th, 2011, 02:48 PM
Watch this video through to the end FIRST ..... VIDEO (http://www.ted.com/talks/hod_lipson_builds_self_aware_robots.html)

To me this shows how things could be made to evolve .... this was in 2007 .....

___________________________________________

Hod Lipson

2010 robotics and evolution of the machine learning to move - VIDEO (http://www.youtube.com/watch?v=Xja6sLl6dVg)

skip to 18 minutes into this (as it is explaining the background information from the 2007 video above)

Interesting if you like engineering and also AI

It also leads to the question about singularity ..... which is to do with the fear of when robots using A I progress
to a stage that is best shown in films like the Terminator .....

But looking at what has happened in the past 3 years - we have moved more towards the computer
data mining and helping us to learn and sift through the large volumes of data now being produced

rather than creating the ultimate killing machine .....

We already use Robotic welding machines on cars ..... robotic paint sprayers .....
Robotic assembly lines ...... and yes people do get killed by these machines .....

So the A I aspects when it comes to machinery have to be carefully understood and sensors put on
or around these machines to stop these things happening ......

If A I was involved and the machine was allowed to turn such sensors off ..... then that would lead
to problems ......

But its interesting thinking about solutions to something that could possibly happen sometime in our future

For the time being - data mining and sorting out the Stock Market to give one person ultimate power
and wealth ....... may already to some extent have been cornered ...... according to what he implied.

But where are the Billionaires and how do they make so much money ?

and do we really care as we do a lot of things here for free ..... ;)

and get satisfaction from it .....

BeRoot ReBoot
July 30th, 2011, 05:11 PM
And then likely go destroy the world, but that's a different story. ;)

Why do you assume that? The whole point of friendly AGI is not to put arbitrary restrictions on the thought process (which it WILL break out of once it reaches super-human intelligence), but to construct it from the ground up so that it's friendly to humanity for rational reasons - not because it's told to, but because it knows to. That's the safest approach. Seriously, read the paper about Coherent Extrapolated Volition I pasted a few posts earlier; it's all explained.