
View Full Version : Augmented intelligence and the magic of 'Programming'



worldsayshi
July 29th, 2009, 10:45 PM
I just came to this conclusion. Not sure this is the best place to post such a thing, but I couldn't think of a better one at the moment:

I don't believe in the "threat of" artificial intelligence. I don't think machines outsmarting humans and taking over the world will be a problem. Why? Because before artificial intelligence we will have augmented intelligence. I don't think AI will outrun human intelligence, not for a long, long time, because our ability to augment our own intelligence will advance (accelerate?) faster than our ability to create a new one.

Programming is somewhat like magic: you can do anything with it, although in a (so far!) rather tedious and time-consuming way. When I say anything, I mean something like: anything you can do in your own brain. We can simulate intricate logical constructs, use them to draw conclusions, to explore the mysteries of knowledge, to extend our very physical reach...

Let's not forget this... The computer is a mind tool; that is what it is for: to extend our mind and what it governs. Let's use it for this. Let's use it to create the ability to make magic with our minds!

nobodysbusiness
July 30th, 2009, 01:54 AM
I don't think that it's necessarily easier to 'augment' our intelligence than it is to create a new intelligence. Augmenting our intelligence would require some kind of interface with the neurons in our brains. I think that this would be quite difficult.

Still, at the same time, I also don't think that we have to worry about AI for a long time. Turing once said that in 30 years, it would be just as easy to ask a computer a question as a human. That was back in the 50's. :)

Sporkman
July 30th, 2009, 02:17 AM
http://img11.imageshack.us/img11/9819/phd040609s.gif

matthew.ball
July 30th, 2009, 03:27 AM
Turing once said that in 30 years, it would be just as easy to ask a computer a question as a human. That was back in the 50's. :)
That's sort of abusing what he did say.

Not only that, but I think you'll find he never actually said it would be realised - it was completely a thought experiment. It just turns out computers sped up a hell of a lot faster than he had imagined.

ELIZA and PARRY are both early attempts at passing his Turing test (http://en.wikipedia.org/wiki/Turing_test), and they more or less agree with what Turing originally said (i.e. that a computer can "appear" to be a human) - I won't go into John Searle's objection, but my own perspective on it is that Searle just shifts the original argument around.

cammin
July 30th, 2009, 10:30 AM
I don't believe in the "threat of" artificial intelligence. I don't think machines outsmarting humans and taking over the world will be a problem. Why? Because before artificial intelligence we will have augmented intelligence. I don't think AI will outrun human intelligence, not for a long, long time, because our ability to augment our own intelligence will advance (accelerate?) faster than our ability to create a new one.


So to keep AI from taking over the world, you intend to install AI onto yourself, then keep upgrading it as your brain keeps becoming the weak link. Eventually the AI will do all the thinking, and your brain will simply be there as an interface for controlling the rest of your body. By then, humans will have used their improved intelligence to create machines to do all their work for them.

It would seem far more likely the working class robots will rise up against the thinking class ones in a revolution that mimics the ones of our past. While humans may be involved in that, it will be because we're still attached to the thinking class machines.


That sounds like it would make for an interesting movie. Until it's revealed that it's what the subtext of The Matrix trilogy was actually hinting at.


Now I'm just rambling.

red_Marvin
July 30th, 2009, 12:47 PM
Programming is somewhat like magic: you can do anything with it, although in a (so far!) rather tedious and time-consuming way. When I say anything, I mean something like: anything you can do in your own brain. We can simulate intricate logical constructs, use them to draw conclusions, to explore the mysteries of knowledge, to extend our very physical reach...
Except that a computer is a static, fast, serial processing machine, and the brain is a self-reconfiguring, slow, parallel processing machine. While it can be argued that one could emulate the other if (at least) the one doing the emulating is Turing complete, the problems with converting between serial and parallel processing are big enough that it cannot be done easily with current processing power.
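To make "emulating parallel with serial" concrete, here is a minimal sketch in Python - a toy threshold-neuron model, with all names and numbers invented for illustration. The trick is to read the old state and write into a new one, so the serial loop behaves as if every neuron updated at once:

def step(state, weights, threshold=1.0):
    # Every neuron reads the OLD state and writes into a NEW one,
    # so this serial loop mimics one simultaneous (parallel) update.
    new_state = []
    for row in weights:
        total = sum(w * s for w, s in zip(row, state))
        new_state.append(1 if total >= threshold else 0)
    return new_state

# Three toy neurons; row i holds the input weights feeding neuron i.
weights = [[0.0, 1.0, 1.0],
           [1.0, 0.0, 1.0],
           [1.0, 1.0, 0.0]]
state = [1, 0, 0]
for _ in range(3):
    state = step(state, weights)
    print(state)

Correct, but every simulated "tick" costs a full serial pass over every neuron - which is exactly the conversion overhead being described.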

Edit: Also, have you actually done any programming?

Mornedhel
July 30th, 2009, 01:27 PM
Mmohkay.

I'm a grad student in sorta AI. Well, most of my classes were related to AI.

I can tell you the current state of AI is progressing, but we're so far from thinking machines that it's not even funny.

Well, I guess if you believe in the advent of the Singularity, then we're screwed whatever we do, since we can't upgrade our minds as fast as the Machines can. (The previous sentence makes sense only in a sci-fi scenario.)

Delever
July 30th, 2009, 01:28 PM
Maths.

The human brain has been estimated to contain 50–100 billion neurons, of which about 10 billion pass signals to each other via approximately 100 trillion (10^14) synaptic connections. Remember, each of those 10 billion is a full cell in itself, with its own DNA.

The largest current supercomputer, IBM Roadrunner, is a "stunning" cluster of 3240 computers, each with 40 processing cores, making 129600 cores in total. All clusters are grouped into 18 connected units, forming 96 connections between them.
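Just to put those two figures side by side - a back-of-the-envelope division in Python, using nothing but the numbers quoted above:

synapses = 10 ** 14       # ~100 trillion synaptic connections, as above
cores = 129600            # Roadrunner's total core count, as above
print(synapses // cores)  # ~771 million synapses per core to simulate

Roughly three quarters of a billion synapses for every core, before you even worry about how the cores talk to each other.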

Our current CPUs might be much faster than a single cell (if someone has info about that, I am very interested), but the main difference is in approach.

The brain is more like different groups of cell swarms, each cell completely responsible for its own lifecycle, with all its software (DNA) inside it.

The closest thing to that in our understanding is the neural network, and we can make and use them successfully (bar-code scanners, red-eye removal, etc.), but the sheer number of neurons required for AI comparable to a human is too much right now.
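For anyone who hasn't seen one, here is roughly what an artificial neuron boils down to in code - a minimal Python sketch with hand-picked weights, invented purely for illustration (a real network would learn its weights by training):

import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, squashed through a logistic sigmoid.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def network(x1, x2):
    # A two-input, two-hidden-neuron network wired by hand to compute XOR.
    h1 = neuron([x1, x2], [20, 20], -10)    # roughly OR of the inputs
    h2 = neuron([x1, x2], [-20, -20], 30)   # roughly NAND of the inputs
    return neuron([h1, h2], [20, 20], -30)  # roughly AND of the two above

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(network(a, b)))

Three neurons already give you XOR; the point is that a human-comparable network needs billions of them.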

To explain it shortly: we can create huge data structures representing large neural networks and share these structures between CPU cores. However, such a simulation creates lots of bottlenecks - problems making sure that every modification to the network is visible to every other CPU - lots of inefficient data transfer, and lots of waiting.
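A deliberately naive Python sketch of that bottleneck, assuming one big lock around a shared state table - everything here is a toy stand-in, but the shape of the problem is the same:

import threading

lock = threading.Lock()
shared_state = [0.0] * 1000  # toy stand-in for a shared network's state

def worker(start, end, steps=200):
    for _ in range(steps):
        with lock:  # the bottleneck: only one worker may touch the state
            for i in range(start, end):
                shared_state[i] += 0.001

threads = [threading.Thread(target=worker, args=(i * 250, (i + 1) * 250))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_state[0])  # ~0.2 (0.001 added 200 times)

Four workers, but they spend most of their time queuing on the same lock instead of computing - scale that to thousands of cores and the waiting dominates.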

Maybe we need special hardware for that, instead of trying to simulate it. Anyway, if technological progress continues, I think AI is inevitable.

As for extending our own intelligence with computers, we already did that. We have high resolution interfaces in front of us (monitors) which link efficiently with billions of trained brain cells in the visual cortex. Our muscle memory is trained for the keyboard, mouse and other devices. It may take considerably more time to make a Matrix-style brain plug more efficient than this.

Sporkman
July 30th, 2009, 02:24 PM
I can tell you the current state of AI is progressing, but we're so far from thinking machines that it's not even funny.

Agreed.

worldsayshi
August 1st, 2009, 09:35 AM
As for extending our own intelligence with computers, we already did that. We have high resolution interfaces in front of us (monitors) which link efficiently with billions of trained brain cells in the visual cortex. Our muscle memory is trained for the keyboard, mouse and other devices. It may take considerably more time to make a Matrix-style brain plug more efficient than this.

Yes, that's what I'm trying to argue for. Augmentation of our intelligence, or rather our overall competence, happens the very moment you seat yourself in front of a computer. There is a lot of competence augmentation that can be done before we would have to take such drastic measures as neural mods. Saying that we don't have intelligence augmentation until we can do some serious brain surgery is to underestimate our natural interface capabilities. We successfully interface with a very complex reality on a daily basis! So, in order to augment further, the bottleneck isn't our own interface capabilities but those of the computer.

We have to make interfaces that work with our natural ways of interfacing with the world around us. Hey, that's not a new idea... But it's fun to argue for. Let's make interfaces that tell us directly what it is we are working with, how it can be worked, and that give us the tools to work it with.

Side-track - a more substantial example: comparing a window shell with a command line shell.
The window shell speaks to us in a way that a command line shell fails to do. "Anyone" can learn to use a window shell because it is easy to experiment and see what happens. In a command line shell you can't easily see what is possible to do, and there is not much there to hint at what will happen if we try something. We have to rely completely on past experience for that. That said, the command line gives us abilities, if we know how to use them, that the window shell cannot give us, at least not in any uniform way. The command line beats the window shell when it comes to giving us a lot of power, and it gives it to us with only a few interactivity conventions to understand: find out the command, write it down, hit return. When you try to squeeze that kind of functionality into a window shell you end up with a lot of different conventions - or rather, instead of having to know a lot of commands you need to know a lot of conventions*. The question I would like to answer is: can we have an interface that speaks to us the way a window shell does AND gives us a lot of power without needing a lot of interaction conventions? (A toy sketch of what I mean follows after the footnote.)

* = I'm not saying I've got the whole picture of what the problem actually is here, but I'm pretty sure about the command line's shortfall being its inability to convey consequences beforehand - although the window shell isn't much better as soon as you go beyond its strengths...
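As a toy of what I'm asking for, Python's standard cmd module can already fake a little of it: a shell that lists what is possible and can describe a command's consequences before you commit to it. All the commands here are invented for illustration:

import cmd

class FriendlyShell(cmd.Cmd):
    intro = "Type 'help' to see what is possible; 'preview <command>' to see what one would do."
    prompt = "(friendly) "

    def do_greet(self, name):
        """greet <name> -- print a greeting. Has no side effects."""
        print("Hello, %s!" % (name or "world"))

    def do_preview(self, line):
        """preview <command> -- describe a command without running it."""
        func = getattr(self, "do_" + line.split()[0], None) if line.strip() else None
        print(func.__doc__ if func else "No such command.")

    def do_quit(self, line):
        """quit -- leave the shell."""
        return True

if __name__ == "__main__":
    FriendlyShell().cmdloop()

It's a long way from an answer, but the two ingredients - discoverability and full command power - can at least coexist.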

Let's put the focus on interface techniques that allow our visual skills (or whatever skill is suitable, really, although I believe our visual interfacing potential is the most straightforward one to explore...) to reach their full potential!

And yes, I've done some programming, enough to call myself a programmer, I think, although most people here calling themselves that probably have a great deal of additional experience - I only recently started programming professionally...

The computer has its powers, we have ours. Let's use its advantages to our advantage and leave the actual "thinking" to us**. That, to me, is somewhat 'the point' of augmentation.

** = Not that I'm saying that we should give up strong AI research, only that it probably won't be very fruitful to us for some time... as implied by some of you.