
Join in the next Linux revolution - the interface!



Mazza558
November 6th, 2008, 11:29 PM
Well, I've been pretty busy over the last few days, as I've been planning and thinking about a conceptual desktop environment that, if I gain at least some support, might become the next thing that Linux becomes famous for. Now, these are pretty big statements to make, but it would be inspiring to see it actually become a reality. I honestly and wholeheartedly think this could work, and am absolutely willing to help develop it provided other people are. Of course, the main difficulty I have is that I can't code. But what I can do is imagine a way of using a computer that is surely much more logical. Without further ado, let me explain what I am proposing...

The current problem

A typical computer interface is too complicated. Take your eyes off this post and look around the screen you're staring at. Scrollbars. Buttons. Panels. We know that this interface works, but it just doesn't feel natural. Web pages are built exactly the same. They are linear, sporting interfaces of their own through links, etc. It feels as if we are using an interface that was conceptualized when processing power was weak, and when the web was barely well-known. The idea of having windows to move around the screen was a massive leap forward for everyone at the time, but I believe it's time to move on.

Users nowadays don't have direct access to information. To do something as simple as checking your email requires a complex series of procedures - log on, wait for the DE to load, find the firefox icon, click it, find email bookmark, click the bookmark, wait for webpage to load, and finally you get the information you need.

In effect, an interface of widgets can only get in the way of what you actually want to find out or do. It serves as a means of getting there, but even the idea of separate programs seems bizarre.

What I propose

I propose an interface with no windows. At least not in the way they currently are. Instead, the user tells the computer what to do. The interface would instead be focused on a ball, orb, globe, sphere at the bottom center of the screen. Moving the mouse over it might expand it into a text box. The entire interface could perhaps be built entirely on compiz in combination with lots and lots of scripts (and a bit of code to pull it all together). Now, of course, I haven't got anywhere near deciding on a way for this to work yet, but it might, for example, be vaguely similar to what Mozilla Labs' Ubiquity is achieving. In fact, if you haven't seen it already, watch the concept video here (http://labs.mozilla.com/2008/08/introducing-ubiquity/). Imagine something similar to this for your whole PC. Imagine having a conversation with your computer. Imagine these scenarios:


Tom wants to find out if he has any new emails, and also wants to see who's online on MSN. He types "who's online? any new emails?" and the computer interprets these commands and carries them out. It displays the buddy list (in some form) and notifies him that he has 2 unread emails.


Claire has a report to write, but wants to see what's happening in the 2012 Olympics in the meantime. She types "write" and a word processor of some sort loads. She then types "2012 olympics news", hovers her mouse over the orb, and the computer would then go away and see if the term is popular on the web (through a google search) and provide suggestions with snippets of news.

Now, these are still very vague ideas at the moment, but you might see what I mean now. A unified interface where every program, every process is linked in a network of processes. This network also links to the web to pull information out of it. With this interface, you should never have to look at a web page again. It's quite hard to explain what I have in mind, and I'll probably clarify and explain parts of it that you don't understand.

Please post your suggestions - how would you go about making this work? What needs tweaking? How could files be managed in this way? Also, if you're willing to see this in action, don't hesitate to say so! Maybe it could be a reality...

P.S - this proposal really needs a name :)

RATM_Owns
November 6th, 2008, 11:37 PM
I love the ball idea. :P

Mazza558
November 6th, 2008, 11:40 PM
I love the ball idea. :P

Yeah. It would be literally the only thing visible besides a wallpaper if you've just logged on (unless you wanted extras like a clock).

billgoldberg
November 6th, 2008, 11:45 PM
It indeed sounds a lot like ubiquity.

And I love ubiquity, so I would love this new concept.

I would be willing to contribute, not as a coder (don't know how) but to write the docs or something.

Mazza558
November 6th, 2008, 11:46 PM
It indeed sounds a lot like ubiquity.

And I love ubiquity, so I would love this new concept.

I would be willing to contribute, not as a coder (don't know how) but to write the docs or something.

Sounds great. It's good to get some more ideas going around, a long time before any code is written.

cardinals_fan
November 7th, 2008, 02:51 AM
I think this is a mostly good idea. The problem with current interfaces isn't clutter (my desktop isn't cluttered). However, language is far more powerful than clicking two mouse buttons. I recommend you read this essay (http://humanized.com/weblog/2008/07/21/language-based-interfaces-part-1-the-problem/) (if you haven't already) by Jono DiCarlo, part of the Humanized team employed by Mozilla in constructing Ubiquity. Their desktop product, Enso, falls far short of the all-reaching control you propose, but it does do some things well.

Bölvağur
November 7th, 2008, 02:55 AM
Count me in.
This sounds like a perfect project to contribute to compared to my studies.

Get my email or something to include me in the development please :popcorn:

init1
November 7th, 2008, 02:56 AM
Hrm. I have thought about this idea before. It's basically CLI, but with plain English commands. Perhaps an alternative to bash could be integrated into this design. So instead of typing


rm *.mp3

You would type


Delete all mp3 files

And (recognizing that the user may or may not want to include subdirectories) it could ask


Do you want to include subdirectories?
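
Just to make that concrete, here's a very rough bash sketch of how such a command could be wired up behind the scenes (the function name and the exact wording are placeholders, not a spec):

delete_all_mp3s() {
    # ask the follow-up question before doing anything destructive
    read -r -p "Do you want to include subdirectories? [y/N] " answer
    if [ "$answer" = "y" ] || [ "$answer" = "Y" ]; then
        # every .mp3 below the current directory
        find . -type f -name '*.mp3' -exec rm -- {} +
    else
        # current directory only; stay quiet if there are none
        rm -- *.mp3 2>/dev/null
    fi
}

The hard part, of course, is the layer that turns "Delete all mp3 files" into a call like that in the first place.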

kk0sse54
November 7th, 2008, 03:15 AM
Fantastic idea :), kinda reminds me of the Sugar Desktop Environment a bit but far more practical for everyday computing. What you should do is start a website outlining your plan to start bringing people in.

zmjjmz
November 7th, 2008, 03:17 AM
I şink ğat init1's idea is great, togeğer with sound recognition.

Le-Froid
November 7th, 2008, 03:28 AM
I şink ğat init1's idea is great, togeğer with sound recognition.

Can you say that in American or British English? :p

tdrusk
November 7th, 2008, 03:36 AM
I'm just throwing this out there.

Check out "Sugar". It's the desktop for one laptop per child.

Bölvağur
November 7th, 2008, 03:36 AM
Can you say that in American or British English? :p

omg he is actually ğe first one here ğat uses ğe correct letter to ğe correct sound... unlike some people ğat begin wiş L and uses ş for everyşing.

init1
November 7th, 2008, 03:39 AM
I şink ğat init1's idea is great, togeğer with sound recognition.
Yeah you could set up microphones around your house so it would be like Star Trek:


Computer: Play CD

Greyed
November 7th, 2008, 03:45 AM
A typical computer interface is too complicated.

I beg to differ. It is exceedingly simple.


We know that this interface works, but it just doesn't feel natural.

Of course it isn't. But then nothing you propose would be natural, either. Remember, even language is taught. So what is the difference between being taught a language, being taught how to type, being taught how to use these interfaces? None.


Users nowadays don't have direct access to information.

Until we get cybernetics users will not have direct access to information. Everything is just sugar coating over that basic fact.


To do something as simple as checking your email requires a complex series of procedures - log on, wait for the DE to load, find the firefox icon, click it, find email bookmark, click the bookmark, wait for webpage to load, and finally you get the information you need.

Or set auto-login and have an email client open on the session. Then it becomes a matter of... Turn on the computer.


In effect, an interface of widgets can only get in the way of what you actually want to find out or do. It serves as a means of getting there, but even the idea of separate programs seems bizarre.

No, in this case it would be the ignorance of the options available to you. I'm sorry, but I cannot take the idea seriously, because at its core you're proposing that tools are anathema to progress. There are a dozen or so thousand years of easily verifiable human progress contradicting that core idea - and that is probably just a fraction of the actual span. Are some tools inefficient and should be discarded? Of course! But your proposal is not doing that at all.



I propose an interface with no windows. At least not in the way they currently are. Instead, the user tells the computer what to do.

Which is what we do now. And, I might add, more efficiently than your system. I got to my email by pushing the power button.


Imagine something similar to this for your whole PC. Imagine having a conversation with your computer. Imagine these scenarios:

This is where it falls flat. Ok, someone wants to write a paper and she says "write". How does the computer know which application to bring up? It's applicationless. Er, counter to the whole Unix philosophy and to what lots of people are using Linux for: choice. Then we get into another problem. What happens when her paper is larger than the screen? No widgets = no scrollbars. I'm having a conversation with the computer? Ok, how does it know when I want to write "page down" versus commanding it to page down? You mean something as plebeian as a keyboard and special keys? Ye gads, we have that now.


P.S - this proposal really needs a name :)

WPF.

cardinals_fan
November 7th, 2008, 03:49 AM
This is where it falls flat. Ok, someone wants to write a paper and she says "write". How does the computer know which application to bring up. It's applicationless. Er, counter to the whole Unix philosophy and what lots of people are using Linux for, choice. Then we get into another problem. What happens when her paper is larger than the screen. No widgets = no scrollbars. I'm having a conversation with the computer? Ok, how does it know when I want to write page down versus commanding it to page down? You mean something as plebian as a keyboard and special keys? Yea gads, we have that now.

"Write" would really just be an alias for "abiword" or something similar. If this were implemented halfway competently, that would be easy to change in a config file.

The fish shell has some interesting features that might be relevant to this discussion.
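
"Halfway competently" could be as simple as a plain text file of verb-to-program mappings. A sketch, assuming a made-up ~/.verbs file (none of this is an existing tool):

# ~/.verbs could contain lines like:
#   write = abiword
#   browse = firefox

run_verb() {
    # look the first word up in ~/.verbs and launch whatever it maps to
    local prog
    prog=$(grep -m1 "^$1[[:space:]]*=" ~/.verbs | cut -d= -f2 | tr -d '[:space:]')
    if [ -n "$prog" ]; then
        "$prog" &
    else
        echo "nothing mapped to '$1'"
    fi
}

Changing "write" from AbiWord to OpenOffice would then just mean editing one line of the file.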

zmjjmz
November 7th, 2008, 04:04 AM
Basically we could have a whole database of aliases in a bash.bashrc file and a tiny GUI to configure ğem.

init1
November 7th, 2008, 04:06 AM
This is where it falls flat. Ok, someone wants to write a paper and she says "write". How does the computer know which application to bring up. It's applicationless. Er, counter to the whole Unix philosophy and what lots of people are using Linux for, choice. Then we get into another problem. What happens when her paper is larger than the screen. No widgets = no scrollbars. I'm having a conversation with the computer? Ok, how does it know when I want to write page down versus commanding it to page down? You mean something as plebian as a keyboard and special keys? Yea gads, we have that now.

First of all, there would need to be some setup. The first time you type:


Write a paper

It will say:


Do you want to use Abiword, Open Office, or another application?

And then it will remember what you choose. If you don't want to use the default, you can type the application that you want to use:


Write a paper with Open Office

Second, I think that the most practical way to implement this would be to use existing software. So yes, there would still be widgets such as scroll bars.
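
For what it's worth, the "ask the first time, remember afterwards" part is only a few lines of bash. A sketch (the write_paper name and the ~/.write_app preference file are just made up for the example; it stores whatever command name you answer with):

write_paper() {
    local conf="$HOME/.write_app"
    if [ ! -s "$conf" ]; then
        # first run: ask which word processor to use and remember the answer
        read -r -p "Do you want to use abiword, openoffice, or another application? " app
        echo "$app" > "$conf"
    fi
    # later runs just launch the remembered application
    "$(cat "$conf")" "$@" &
}

The "Write a paper with Open Office" override would be an extra branch that bypasses the saved default.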

Saint Angeles
November 7th, 2008, 04:07 AM
so basically you want computers like the movie adaptation of The Andromeda Strain?

(all voice activated... no other input required!)

zmjjmz
November 7th, 2008, 04:09 AM
Second, I think that the most practical way to implement this would be to use existing software. So yes, there would still be widgets such as scroll bars.

Well presumably it would open a new document in OpenOffice or Abiword, which would use normal themes and such.
Maybe we could use a GTK theme sans scrollbar/titlebar/and such.

brunovecchi
November 7th, 2008, 04:11 AM
Have you tried Gnome Do? It gives a feeling a lot like what you are expecting from your interface, and it's already available.

init1
November 7th, 2008, 04:13 AM
Basically we could have a whole database of aliases in a bash.bashrc file and a tiny GUI to configure them.
Actually I was thinking of something more complex. The problem with using bash is that you would need to know bash's expressions. You would need to know that in order to delete all mp3's in the current directory, you would type:


Delete *.mp3

This is fine for the average Linux user, but the average computer user isn't going to want to memorize such things.

Greyed
November 7th, 2008, 04:24 AM
Then that is their failing. Spending 10m to learn a simple concept that can save days, even years, of labor is worth it.

zmjjmz
November 7th, 2008, 04:32 AM
Actually I was thinking of something more complex. The problem with using bash is that you would need to know bash's expressions. You would need to know that in order to delete all mp3's in the current directory, you would type:


Delete *.mp3

This is fine for the average Linux user, but the average computer user isn't going to want to memorize such things.

My point is ğat someone would spend ğeir time to write out a hugeass bash.bashrc list of aliases wiş stuff like ğat, i.e.

alias "Delete all mp3s"="rm *.mp3"

Nano Geek
November 7th, 2008, 04:42 AM
It sounds overly complicated to me.
How exactly would this be simpler than point and click?

cardinals_fan
November 7th, 2008, 04:54 AM
so basically you want computers like the movie adaptation of The Andromeda Strain?

(all voice activated... no other input required!)
If you mean the very recent made-for-TV adaptation, I just want to say that that sorry excuse for a movie was criminally awful.

Actually I was thinking of something more complex. The problem with using bash is that you would need to know bash's expressions. You would need to know that in order to delete all mp3's in the current directory, you would type:


Delete *.mp3

This is fine for the average Linux user, but the average computer user isn't going to want to memorize such things.
The only reason most major shells don't already fulfill all this is because they don't recognize spaces. It would be easy enough to add an alias for "open gmail", except that the space isn't allowed.
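
One workaround, at least in bash (just a sketch - the URLs here are arbitrary examples): make the first word a function and dispatch on the second, so "open gmail" works even though an alias named "open gmail" isn't allowed:

open() {
    case "$1" in
        gmail) xdg-open "https://mail.google.com" ;;
        forum) xdg-open "http://ubuntuforums.org" ;;
        *)     xdg-open "$@" ;;   # anything else goes straight to xdg-open
    esac
}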

Saint Angeles
November 7th, 2008, 05:04 AM
If you mean the very recent made-for-TV adaptation, I just want to say that that sorry excuse for a movie was criminally awful.


yes... yes it was.

zmjjmz
November 7th, 2008, 05:07 AM
The only reason most major shells don't already fulfill all this is because they don't recognize spaces. It would be easy enough to add an alias for "open gmail", except that the space isn't allowed.

Really? I distinctly remember it working fine so long as ğe alias was present in ğe /etc/bash.bashrc file...

cardinals_fan
November 7th, 2008, 05:14 AM
Really? I distinctly remember it working fine so long as ğe alias was present in ğe /etc/bash.bashrc file...
Try it. I'm a Zsh user, but I just tried on BASH and couldn't get a two word alias.

etnlIcarus
November 7th, 2008, 05:34 AM
Put me in the sceptical group.

All your ideas seem to further obfuscate what the OS is really doing and the metaphors you're introducing seem to be less consistent than the mess of, "desktops", and, "menus", we have already.

Want to improve the user experience?

Make interfaces as direct as possible, while displaying only relevant data. Don't lie to the user just to make things easier for them. Rather, order data in terms of importance and class and strike a balance between sublevels (sub-sub-menus are a no-no) and keeping a dialogue clutter-free and human readable.

Convincing users their computer is HAL 9000 from 2001: A Space Odyssey is lying to users and isn't helping them; it's making them dependent upon fragile preconceptions which do not reflect the actual OS and leave them completely dependent upon others' knowledge when HAL suddenly stops talking back.

Worry less about making the computer look and behave like something from Hollywood and instead, try to achieve the usability balance of Win95 (controversial?), while making the OS as naked and exposed as possible.

zmjjmz
November 7th, 2008, 05:36 AM
Try it. I'm a Zsh user, but I just tried on BASH and couldn't get a two word alias.

Well no matter, we can just use zsh.
I've tried zsh and liked it, just not enough to use it as my default shell.

cardinals_fan
November 7th, 2008, 05:42 AM
Put me in the sceptical group.

All your ideas seem to further obfuscate what the OS is really doing and the metaphors you're introducing seem to be less consistent than the mess of, "desktops", and, "menus", we have already.

Want to improve the user experience?

Make interfaces as direct as possible, while displaying only relevant data. Don't lie to the user just to make things easier for them. Rather, order data in terms of importance and class and strike a balance between sublevels (sub-sub-menus are a no-no) and keeping a dialogue clutter-free and human readable.

Convincing users their computer is HAL 9000 from 2001: A Space Odyssey is lying to users and isn't helping them; it's making them dependent upon fragile preconceptions which do not reflect that actual OS and leave them completely dependent upon other's knowledge when HAL suddenly stops talking back.

Worry less about making the computer look and behave like something from Hollywood and instead, try to achieve the usability balance of Win95 (controversial?), while making the OS as naked and exposed as possible.
I agree that the floating ball thing is silly, but a language-based interface has many advantages. Read my link from the first page.

Well no matter, we can just use zsh.
I've tried zsh and liked it, just not enough to use it as my default shell.
No, I mean that no shells allow spaces (Zsh included). If you find one that does, let me know.

GrouchoMarx
November 7th, 2008, 05:50 AM
I'm not convinced that typing in human language makes interacting with the computer any easier. In the time it would take to type "Who's online? Any new emails?" you can check your email and IM client with just two clicks of the mouse. Imagine trying to navigate and set Firefox preferences using plain English. Would it be one long list? Or would it be divided up into separate tabs? And what happens if you've never used Firefox before and want to explore its preferences? (RTFM? Yeah right :)). There is a lot of information conveyed to the user through widgets and windows that would be difficult to convey any other way. Removing them might reduce "clutter" for some people, but also remove important visual cues that we take for granted.

The problem with interface design is that you can't use the keyboard and the mouse at the same time. They are mutually exclusive. Yet the tasks for which each is suited are not. For instance, when you query a location in Google Maps you use the keyboard. But when you orient that map in your browser, it's easier to use the mouse. Browser usage is a composite process of queries and spatial interactions. Spatial interactions are just not as easy through the keyboard because you don't have fine grain, analog control. Additionally, the human brain interprets some problems spatially, and in those situations the spatial relationship between the mouse and the cursor actually helps. But switching between the keyboard and the mouse is awkward, and so user interface designers inspired by the power of Google and Web 2.0 marketing jargon try to gerrymander the user experience into a series of free associative queries. However, not all activities on the computer fit this paradigm. Sometimes the mind works better spatially. I would make an analogy with math. Before the invention of algebra you had to write out your math in human language. The invention of algebra provided a visual representation of equations that made much better use of the mind's geo-spatial faculties. Think how much easier it is to visualize FOIL, the product of two binomials, as algebra, than it is to express it in human sentences. The user interface is the algebra of computing.

Finally, a lot of the functionality you mention already exists in various application launchers, like Gnome Do (http://do.davebsd.com/), but if you really want to see a powerful, revolutionary user interface, learn Emacs. :) (Which brings up another point: that the most powerful tools are not always the easiest to learn/use.)

cardinals_fan
November 7th, 2008, 05:57 AM
I wouldn't say that language needs to follow all the bizarre nuances of English (or French or German or Korean or anything else). Language doesn't necessarily entail complete sentences or good grammar.

You raise a very valid point about the pain of switching between mouse and keyboard. I personally think that the keyboard is more important than the mouse.

TheSlipstream
November 7th, 2008, 06:03 AM
Perhaps you should try Openbox. A blank desktop with nothing in your way at all. The stock install has few features, since Openbox is a window manager, not a desktop environment, a la GNOME or KDE. You don't get to speak to it, but with GNOME-Do, you're approaching what you described, minus the condescending interface and intent of dumbing down users.

sixstorm
November 7th, 2008, 06:07 AM
When I first read the OP, I thought about the AI computer entities you see on the movies. You would see a human talking with a computer, using their own language, and the computer responding to those commands (obviously).

The way I see computers going in the next 5-10 years is this: more touch-screen and voice-activation. I also think we will see operating systems go a little too far with "looks" (notice how popular CF is? notice Windows 7?) and we will start entering a "simplicity" phase. OS devs will finally realize that while their OS is pretty, it is not simple enough for the end user OR it's just not powerful and/or efficient enough.

I like this idea, I'm always excited to see prototypes of next-gen programs, operating systems and basically, all technology as a whole.

-grubby
November 7th, 2008, 06:08 AM
Perhaps you should try Openbox. A blank desktop with nothing in your way at all. The stock install has few features, since Openbox is a window manager, not a desktop environment, a la GNOME or KDE. You don't get to speak to it, but with GNOME-Do, you're approaching what you described, minus the condescending interface and intent of dumbing down users.

I like dmenu much more than gnome-do, even if it doesn't do as much

cardinals_fan
November 7th, 2008, 06:56 AM
I like dmenu much more than gnome-do, even if it doesn't do as much
+1

...and it can do quite a bit with some tweaking.

mewithafez
November 7th, 2008, 07:22 AM
Why not just make something that understands English (or Spanish, or German, or Mandarin...)? The computer already has a dictionary and tools for understanding grammar, so let's say you told the computer "I need to write a paper on Spiritual Possession" - that would go in with the tags "word processing", because it understands that writing on a computer is word processing, and that when "paper" appears in the same sentence as "write" it means an academic paper.

So, it pops up openoffice with the template for a paper automatically filled in with your name, the date, and the title "Spiritual Possession". Hell, it could even look up "Spiritual Possession" in the dictionary/encyclopedia/god forbid wikipedia against a list of classes you supply it in an "About Me" section and throw the name of the class onto the template. Then, this would be autosaved into the folder for that class in school, which it autocreated under /home/blablabla/Documents.

Aren't computers awesome?
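
The dumb, no-real-language-understanding version of that is already scriptable today; here's a hedged sketch (the new_paper name, the ~/Templates/paper.odt template and the Documents/School folder are all invented for the example, and it doesn't fill anything in inside the document):

new_paper() {
    # usage: new_paper Spiritual Possession
    local title="$*"
    local dir="$HOME/Documents/School"
    mkdir -p "$dir"
    # start from a pre-made Writer template, saved under the paper's title
    cp "$HOME/Templates/paper.odt" "$dir/$title.odt"
    soffice "$dir/$title.odt" &
}

Filling in your name, the date and the class automatically is where the real work (and the grammar/dictionary part) would come in.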

Mazza558
November 7th, 2008, 09:12 AM
Although I already (and only) mentioned text control in the OP, this doesn't mean it has to be only text control. It would more be a combination of keyboard and mouse input. Mouse gestures look promising already, as easystroke has shown. Whilst exploring a map, for example, how about using text at the same time to narrow down your options for the place you're looking for?

If anyone's seen how people log on to the Google G1 phone, this is an example of how gestures work better than a password.

techmarks
November 7th, 2008, 10:13 AM
It sounds like a good idea, but....

I've a feeling the OP is probably a good typist.

For example, typing "write" to bring up an application as opposed to maybe 2 or 3 mouse clicks probably takes about the same amount of time.

For me typing is not a problem, but for some people I know.... they would choose the 3 mouse clicks any day over having to actually type stuff.

But let me stay with your idea. I would envision it like this.

I want to write a paper.

ok I have some type of command line and type "write"

now a window or just some floating icons pop up informing me of the application choices available, I could either click, say, or type the name of the one I'd prefer.

Basically this would entail programming a natural language processor, linked to a database of all the apps available and the file system.

It would be a sort of Artificial Intelligence program.

While the mechanics of it would not be so different,
typing, pushing buttons, icons, etc, the mindset would differ...

It would now put more of a burden on the interface developer and ultimately the computer to process requests, find the available tools and present them to the user.


I would think a language like Lisp would be a good fit for this. I'm mostly a C++ person, but it could also be done in C or C++; it just would not be such a natural language for this in my opinion.

The problem with Lisp is that there really aren't any good GUI libraries for it. I suppose Python would also be a good candidate language but I'd prefer Lisp, C or C++.

But have you considered that some people don't really like having to type, especially the 1 finger typists, clicking buttons is probably faster for that lot.

I think it's a good idea, but I've a feeling it would need to be a combination of graphical elements and the command line.
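
The "database of all the apps available" half is already sort of there in the .desktop files every menu is built from, so a crude keyword lookup needs no AI at all. A rough sketch (the suggest name is made up):

suggest() {
    # list installed applications whose .desktop entry mentions the given word
    grep -il "$1" /usr/share/applications/*.desktop |
    while read -r f; do
        grep -m1 '^Name=' "$f" | cut -d= -f2-
    done
}

Something like "suggest write" would print the matching application names, and the interface layer could then present those as the floating icons you describe.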

Mazza558
November 7th, 2008, 10:16 AM

Which was essentially what I was thinking. The user is using the mouse and keyboard at the same time - keyboard with the left hand only. A combination could potentially be quicker than one or the other on their own.

The easiest tasks would probably be entirely mouse controlled, through gestures and a simple interface.
Tasks involving multiple actions and getting information from the web would be far more suited to mostly text input (but still using the mouse in some way).

Bear in mind that an interface like this would have to gradually move from conventional ideas to its ultimate goal. To start with, it'd probably be a mod of gnome-do, a mod of easystroke and some cairo-based interface running on top of compiz and using gtk. The windows themselves would still exist completely (as there's no solution to this at the moment).

tezer
November 7th, 2008, 12:34 PM

I'm afraid I can't see what is so innovative about the proposed interface... Mouse gestures? Using keys and a mouse at the same time? Just try to drag your mouse while typing... ;) Either one or the other, or in turns...
A natural language interface is... useless. I remember Altavista was developing an NLP interface to their search engine, but it turned out that you can hardly make a user type a couple of words, let alone complete and grammatical sentences.
Have you ever played games with a natural language interface? In the beginning it was fun to type something like "Come up to him and ask for directions". But after some time you look for better ways and discover... shortcuts (like "g" for "go" or "come up to", and "ask dir" for "ask for directions")! After a while you may want to have graphical buttons for the most frequently used commands...
You said that the existing interface is not natural? So what is "natural"? Is it natural that a person, instead of writing a paper on paper with a pen, types letters on a screen? It may seem unnatural to chat on IM instead of going and seeing someone (or at least making a phone call). I dare say that computers are 'unnatural' - as well as anything else except living in a cave, eating raw food and using MSWindows ;)
What else? Speech interfaces? I remember some fuss about them about 10 years ago. I tried it myself... Useless and... noisy.
Someone noticed that a powerful interface is difficult. Indeed, CLI is very powerful, but you have to know a lot. The real challenge is to make difficult things easy. But that doesn't necessarily take a new interface, just a bit more wisely designed frontends, probably...

urukrama
November 7th, 2008, 01:52 PM
First of all, there would need to be some setup. The first time you type:


Write a paper

It will say:

And then it will remember what you choose. If you don't want to use the default, you can type the application that you want to use:


Write a paper with Open Office

Second, I think that the most practical way to implement this would be to use existing software. So yes, there would still be widgets such as scroll bars.

How is that less complicated than pressing Win+F10 (which launches OpenOffice on my computer) or clicking an OpenOffice icon on a panel or in a menu?

techmarks
November 7th, 2008, 01:54 PM
On my desktop I have a text input line on the top into which I can type program names to run, (Ice Window Manager) but that's all it does.

I'll post an image later (I'm on the Windows XP computer right now)

Sometimes I wish it did more, like at least take shell command line arguments also and web page addresses that would open up in the default browser.

But I have to agree with the above post. I don't think most people like having to type.

Look at OS X for example, it's very successful and it's all graphical.

urukrama
November 7th, 2008, 01:57 PM
I like dmenu much more than gnome-do, even if it doesn't do as much

You can do quite a bit with dmenu. Have a look at this post on my blog (http://urukrama.wordpress.com/2008/07/09/dmenu-script-for-configuration-files/); you can easily tweak that to do a wide variety of things with dmenu.
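
For anyone who hasn't played with it: the whole dmenu pattern is just "pipe choices in, run whatever comes out", so one line gets you surprisingly far. A sketch along the same lines as that script (not urukrama's actual code; assumes dmenu is installed):

# pick something under ~/.config with dmenu and open it in your editor
choice=$(ls ~/.config | dmenu -p "edit config:") && ${EDITOR:-nano} ~/.config/"$choice"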

techmarks
November 7th, 2008, 02:05 PM
How is that less complicated than pressing Win+F10 (which launches OpenOffice on my computer) or clicking an OpenOffice icon on a panel or in a menu?

It just depends, what if the program is not on a menu or panel?

In that case it is faster to just type the name on a text input and run it, otherwise you'd have to start your file manager and then find the program file and click it.

Also if a certain program is not on the menu, or the window manager doesn't place it there for you when installed, a lot of users aren't going to bother with configuring menus.

It's just another way to get there, one or the other may be faster, or more convenient at any one moment, but it's nice to have the choice.

Trail
November 7th, 2008, 03:16 PM
(I skipped reading some of the posts...)

So ok. "Hello computer, please go through all my custom application's logs in a directory tree, and find all occurrences of a particular function call failing, then backtrack a few lines and check if some particular parameter had a nonzero value. For those logfiles with these circumstances, sort them out and open them with kwrite so I can manually review if the fault was indeed because of the wrong parameter".

Well, I don't see that happening soon.

But, as someone else has already mentioned, you CAN have a conversation with the computer. Through CLI. Think the previous example is impossible? Observe:



grep -lIR "send result: -1" . | xargs grep "terminationType" | grep -v ":: 0" | cut -d ":" -f1 | sort | uniq | xargs kwrite
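
(For anyone squinting at that: the same pipeline spread over lines, with comments on what each stage does.)

grep -lIR "send result: -1" . |      # text files containing the failing call
    xargs grep "terminationType" |   # lines mentioning the parameter, prefixed with the file name
    grep -v ":: 0" |                 # drop the ones where the value was 0
    cut -d ":" -f1 |                 # keep just the file name part
    sort | uniq |                    # one entry per file
    xargs kwrite                     # open them all in kwrite for manual review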

wersdaluv
November 7th, 2008, 04:23 PM
http://arstechnica.com/news.ars/post/20080714-gnome-3-0-officially-announced-and-explained.html

cardinals_fan
November 8th, 2008, 01:29 AM
Although I already (and only) mentioned text control in the OP, this doesn't mean it has to be only text control. It would more be a combination of keyboard and mouse input. Mouse gestures look promising already, as easystroke has shown. Whilst exploring a map, for example, how about using text at the same time to narrow down your options for the place you're looking for?

Using both the keyboard and mouse at the same time provides the worst of both worlds. Switching back and forth is a total pain. I prefer the keyboard, which is why I use keyboard-controllable apps whenever possible.

You can do quite a bit with dmenu. Have a look at this post on my blog (http://urukrama.wordpress.com/2008/07/09/dmenu-script-for-configuration-files/); you can easily tweak that to do a wide variety of things with dmenu.
You can do almost anything with dmenu, if you have the time to set it up :)

init1
November 8th, 2008, 02:04 AM
Then that is their failing. Spending 10m to learn a simple concept that can save days, even years, of labor is worth it.
True, but the idea is to make an interface that's natural. * is not natural. If I asked a random person on the street what "*" meant in computing, he/she would probably have no idea.


My point is ğat someone would spend ğeir time to write out a hugeass bash.bashrc list of aliases wiş stuff like ğat, i.e.

alias "Delete all mp3s"="rm *.mp3"
Bash doesn't accept spaces in alias names


Put me in the sceptical group.

All your ideas seem to further obfuscate what the OS is really doing and the metaphors you're introducing seem to be less consistent than the mess of, "desktops", and, "menus", we have already.

Want to improve the user experience?

Make interfaces as direct as possible, while displaying only relevant data. Don't lie to the user just to make things easier for them. Rather, order data in terms of importance and class and strike a balance between sublevels (sub-sub-menus are a no-no) and keeping a dialogue clutter-free and human readable.

Convincing users their computer is HAL 9000 from 2001: A Space Odyssey is lying to users and isn't helping them; it's making them dependent upon fragile preconceptions which do not reflect that actual OS and leave them completely dependent upon other's knowledge when HAL suddenly stops talking back.

Worry less about making the computer look and behave like something from Hollywood and instead, try to achieve the usability balance of Win95 (controversial?), while making the OS as naked and exposed as possible.
Current GUIs don't reflect the actual OS either


I'm not convinced that typing in human language makes interacting with the computer any easier. In the time it would take to type "Who's online? Any new emails?" you can check your email and IM client with just two clicks of the mouse. Imagine trying to navigate and set Firefox preferences using plain English. Would it be one long list? Or would it be divided up into separate tabs? And what happens if you've never used Firefox before and want to explore it's preferences? (RTFM? Yeah right :)). There is a lot of information conveyed to the user through widgets and windows that would be difficult to convey any other way. Removing them might reduce "clutter" for some people, but also remove important visual that we take for granted.

The problem with interface design is that you can't use the keyboard and the mouse at the same time. They are mutually exclusive. Yet the tasks for which each is suited are not. For instance, when you query a location in Google Maps you use the keyboard. But when you orient that map in your browser, it's easier to use the mouse. Browser usage is a composite process of queries and spatial interactions. Spatial interactions are just not as easy through the keyboard because you don't have fine grain, analog control. Additionally, the human brain interprets some problems spatially, and in those situations the spatial relationship between the mouse and the cursor actually helps. But switching between the keyboard and the mouse is awkward, and so user interface designers inspired by the power of Google and Web 2.0 marketing jargon try to gerrymander the user experience into a series of free associative queries. However, not all activities on the computer fit this paradigm. Sometimes the mind works better spatially. I would make an analogy with math. Before the invention of algebra you had to write out your math in human language. The invention of algebra provided a visual representation of equations that made much better use of the mind's geo-spatial faculties. Think how much easier it is to visualize FOIL, the product of two binomials, as algebra, than it is to express it in human sentences. The user interface is the algebra of computing.

Finally, a lot of the functionality you mention already exists in various application launchers, like Gnome Do (http://do.davebsd.com/), but if you really want to see a powerful, revolutionary user interface, learn Emacs. :) (Which brings up another point: that the most powerful tools are not always the easiest to learn/use.)
The best option then would be to give the user the ability to perform each task in two ways: through commands ("Write paper on US History") or through visual methods such as menus (Apps > Office > Open Office Writer).


Why not just make something that understands english (or spanish, or german, or mandarin...)? The computer has a dictionary and tools for understanding grammar already, so lets say you told the computer "I need to write a paper on Spiritual Possession" that would go in with the tags "word processing" because it understands that writing on a computer is word processing, and when "paper" appears in the same sentence as "write" it means academic paper.

So, it pops up openoffice with the template for a paper automatically filled in with your name, the date, and the title "Spiritual Possession". Hell, it could even look up "Spiritual Possession" in the dictionary/encyclopedia/god forbid wikipedia against a list of classes you supply it in an "About Me" section and throw the name of the class onto the template. Then, this would be autosaved into the folder for that class in school, which it autocreated under /home/blablabla/Documents.

Aren't computers awesome?
That would work, but it would still require a lot of programming


It sounds like a good idea, but....

I've a feeling the OP is probably a good typist.

For example typing "write" to bring up an application as oppose to maybe 2 or 3 mouse clicks probably takes about the same amount of time.

For me typing is not a problem, but for some people I know.... they would choose the 3 mouse clicks any day over having to actually type stuff.

But let me stay with your idea I would envision it like this.

I want to write a paper.

ok I have some type of command line and type "write"

now a window or just some floating icons pop up informing me of the application choices available, I could either click, say, or type the name of the one I'd prefer.

Basically this would entail programming a natural language processor, linked to a database of all the apps available and the file system.

It would be a sort of Artifical Intelligence program.

While the mechanics of it would not be so different,
typing, pushing buttons, icons, etc, the mindset would differ...

It would now put more of a burden on the interface developer and ultimately the computer to process requests, find the available tools and present them to the user.


I would think a language like Lisp would be a good for this, I'm mostly a C++ person, but it could also be done in C or C++, it just would not be such a natural language for this in my opinion.

The problem with Lisp is that there really aren't any good GUI libraries for it. I suppose Python would also be a good candidate language but I'd prefer Lisp, C or C++.

But have you considered that some people don't really like having to type, especially the 1 finger typists, clicking buttons is probably faster for that lot.

I think it's a good idea, but I've a feeling it would need to be a combination of graphical elements and the command line.
Yeah, a combination of GUI and CLI would be good.


How is that less complicated than pressing Win+F10 (which launches OpenOffice on my computer) or clicking an OpenOffice icon on a panel or in a menu?
The idea is to allow people to use what they already know to operate the computer.

brunovecchi
November 8th, 2008, 04:24 AM
The idea is to allow people to use what they already know to operate the computer.

So what do you suppose they already know? Nothing at all? Then your goal should be to create an interface so complex that it'll be able to understand the English language in its entirety, with all its subtleties, dialects, synonyms, etc.

Now, if you are willing to expect some sort of learning from the user to be able to communicate via a simplified english, then why not expect them to learn something as simple as recognizing a shiny picture of a nice document with a nice little pen on top of it to launch, surprise, a paper writing application? What's the difference there?

Has this user-friendliness paranoia reached the point where we do not expect any learning curves whatsoever from the user anymore? We might as well expect that computers just have to read our minds, interpret what we want to do, and do it without any other input. That would be awesome, wouldn't it?

24dhruv
November 8th, 2008, 02:39 PM
I'm much interested in using different OSes, so I have also previewed Windows 7. It is practical, user-friendly and more visual (for example, it shows only icons in the taskbar, and the taskbar's breadth is also increased). Here you have described a good visual concept, but it is beyond any need of the user (user-friendliness and the user's needs are most important - you can't underestimate them). Yes, Compiz Fusion's cube concept is very useful when using a much higher number of windows. But let me tell you one thing: if there were such a rocking interface, I would also be interested in such a great thing, more than anyone else. With that said about usability, and without making any comparison with Linux (because there is no other OS which can be customized the way we can customize this one), I end here.

bash
November 8th, 2008, 03:48 PM
GNOME is working on some redefinition of the desktop:

http://live.gnome.org/Boston2008/GUIHackfest/WindowManagementAndMore

init1
November 8th, 2008, 04:17 PM
So what do you suppose they already know? Nothing at all? Then your goal should be to create an interface so complex that it'll be able to understand the English language in its entirety, with all its subtleties, dialects, synonyms, etc.

Now, if you are willing to expect some sort of learning from the user to be able to communicate via a simplified english, then why not expect them to learn something as simple as recognizing a shiny picture of a nice document with a nice little pen on top of it to launch, surprise, a paper writing application? What's the difference there?

Has this user-friendliness paranoia reached the point where we do not expect any learning curves whatsoever from the user anymore? We might as well expect that computers just have to read our minds, interpret what we want to do, and do it without any other input. That would be awesome, wouldn't it?
Well, it wouldn't need to know the entire English language, just the words and expressions that a user might use to express what they want to do.
And yes, I think that user-friendliness has reached that point. I know people who can't find Powerpoint or Sound Recorder in Windows.

forrestcupp
November 8th, 2008, 04:44 PM
About the original post. It's a great idea, but we need to be looking forwards. You're still thinking backwards and in a box. A user shouldn't have to touch his keyboard or mouse to tell the computer to do something. A user should just be able to tell the computer to do something and it will do it. And for environments where silence is needed, we need some kind of new multi-touch input device. The current multi-touch options are a step in the right direction, but they have the limitations of either having to have the screen on a table, or making the user have to reach up and touch a normal screen, both of which are uncomfortable and unergonomic.

But typing in a text box is not the future. I can already do that to an extent in my Vista Start Orb.

troutbum13
November 8th, 2008, 05:09 PM
just my $.02... I think that is a step backwards for both technical users and casual users. If you are a techy you are going to have more power, control and specificity at the command line... if you are my grandma, you are going to be frustrated by less intuitive syntax. If I see a floating text box, I have no idea where to begin... there are too many (i.e. any word you know) options. If my grandma sees 6 icons on her desktop... there are 6 possible actions (click1... click2... click3...)

When you think about usability, I think Firefox and some of the other browsers, combined with the soon-to-be-evil-empire, Google, are a decent benchmark for blending intuitive usability and advanced capabilities. I can access multiple protocols, websites, applications, WAN, LAN, file systems and config directly through the URL bar... If I don't know exactly how or what I want to do, I can use the search field and search with keywords and limiters, and I have a graphical interface with very clear usage standards.

I can see as many widgets and toolbars as I like, or I can see only the main window (f11). And while it is not necessarily true for all web pages, the browser can be navigated exclusively with the keyboard or primarily with the mouse.

The only other thing I will add is that the default eeepc DM, while not powerful, is amazingly usable. I keep it installed dual-boot, because I can hand my netbook to anyone (grandma included) and they can jump right in.

These things are not innovative or sexy... but they flat-out work for most users. Whatever the next UI innovation is, it should build on this, not detract from it.

Th3Professor
November 23rd, 2008, 03:47 AM
A user should just be able to tell the computer to do something and it will do it.

Actually, truly thinking outside the box and moving forward with this revolutionary concept would also mean removal of speech commands. In other words, something like:

Neural Impulse Actuator.

...only a greater evolved version of the current devices on the market.

Tuxoid
November 24th, 2008, 02:58 AM
I currently have loose plans for an alternative desktop environment named the Work or Recreation Basis (WoRB for short). The plan is to avoid the desktop metaphor. The desktop metaphor seems very obsolete to me. We need a metaphor that accepts what a computer really is: a digital tool. Just like manual tools, like hammers or wrenches, a computer hints at its intended operation based on visual forms.

To give an analogy with a manual tool, a hammer has a very large head, with one flattened side. There is a thin, cylindrical section about a third of the way across its head. The thinner part is useless. What's useful is the two sides of it. The shape of these pieces suggests a hammer's uses. Having this focused, cylindrical head on one side, it's easy to tell by contrast with the thinner section that it's meant to pound things in.

Computer interfaces are much the same in suggesting the use of something. A command button looks like a standard button. They are comparable to buttons on a calculator, or a television remote. By having that design, but avoiding trying to look like the action the button performs, the button is effective.

When user interfaces attempt to vividly mimic the task they accomplish, in a visual manner, they end up sacrificing simplicity, the point of a user interface. To perfectly mimic a manual activity through the use of tools, you would have to completely sacrifice everything that makes that utility more efficient. For instance, with my example of a manual tool, a hammer, it could be imagined that before the hammer's invention, people may have attempted to pound things requiring such force with their fists.
Yet a hammer involves a completely different action. And on user interfaces, where brevity and simplification can help, UI designers now want to perfectly mimic the more complicated methods that would have been used before computers existed.

This is just as bad trying to perfectly mimic the use of your fist as a hammer, as I described earlier. Deciding to mimic real-life concepts on a user interface is a way of going backwards.

We need to stop trying to 'be' real-life, on our UIs. It's a way of going backwards...

TBOL3
November 24th, 2008, 05:18 AM
To be honest, I really don't like this idea. While it's a good idea in concept, I would much rather type in rm *.mp3 than "delete all mp3 files on my home partition".

But hey, I'm not totally closed to it; it does intrigue me.

phen
November 24th, 2008, 04:15 PM
hello!

i'm sceptical, too.

you should really check out gnome-do, maybe together with AVN. you would be able to remove all panels, and to start apps by typing their name. it works for open office presentation with "presentation" for example.

it also works to type in "2*100" and results in 200.

but in fact most people don't want to write. they want to point and click. if possible with a multi-touch touchscreen.

i think that creating really simple interfaces for mouse or multi-touch (for the future) is the right direction.

no sub-submenus, no unnecessary icons. maybe dynamic panels that show only the icons needed for the specific task. for example the trash icon in the lower right corner. i need it every week. make it appear only when the user is using a file manager.

a last note: the most complex thing in user interfaces is to connect information between applications. for example you want google maps to show an address you received in a mail, or you want to send your buddy on IM all movie trailers of the cinema in your city etc etc. if possible with only a few clicks...
(cannot think of better examples at the moment)
i think your idea could be a step backwards here.

halovivek
November 24th, 2008, 04:20 PM
yes we have to try this one

Th3Professor
November 25th, 2008, 10:40 PM
Actually, truly thinking outside the box and moving forward with this revolutionary concept would also mean removal of speech commands. In other words, something like:

Neural Impulse Actuator.

...only a greater evolved version of the current devices on the market.

;)

Th3Professor
November 25th, 2008, 10:43 PM
<double-post> ubu forums flipped out again </double-post>

semitone36
November 25th, 2008, 11:38 PM
Hmmm... interesting topic.

I have an idea on a direction to take this in. Somebody mentioned that we use a system of aliases to make the interaction process possible. But if you think about it, commands are actually aliases themselves.

The command rm *.mp3 is actually just abbreviated English that the computer translates into 10010110100110101 or something like that. So basically, everything boils down to machine language. Designing a cloud of aliases to operate on top of the shell is just asking to have a bloated system. Also, someone mentioned that the philosophy behind Linux is to have transparency in the OS so that the user is truly in control. Adding more "tools" such as this is only going to further hide what a computer is ACTUALLY doing.

So I propose a compromise, having a human-like interaction with a computer, but without having a bloated system of aliases, and keeping transparency of process:

A worldwide project to design a shell that uses natural language as its commands.

All commands in this shell would be sentences that any normal person could understand. Basically it would be an OSS attempt at AI. If something like this could be achieved, the desktop interface, mouse commands and speech recognition would all follow suit very easily.

This might mean a totally new kernel would have to be designed. Much like Mr. Torvalds designed Linux, we could bring forth a brand new way of computing.

Th3Professor
November 26th, 2008, 06:00 AM
:p ... Neural Impulse Actuator................

russo.mic
December 4th, 2008, 05:23 AM
True, but the idea is to make an interface that's natural. * is not natural. If I asked a random person on the street what "*" meant in computing, he/she would probably have no idea.


Bash doesn't accept spaces


Current Gui's don't reflect the actual OS either


The best option then would be to give the user the ability to preform each task 2 ways: Through commands ("Write paper on US History") or through visual methods such as menus (Apps>Office>Open Office Writer).


That would work, but it would still require a lot of programming


Yeah, a combination of GUI and CLI would be good.


The idea is to allow people to use what they already know to operate the computer.


I don't think that's fulfilling the point. The point is not to have to memorize the command "Delete all mp3s". What's the difference in memorizing that vs. "rm *.mp3 -r"? None. You're still memorizing commands.

Simply creating a bunch of aliases in bashrc won't work, really. I can say this a few different ways:

Delete the mp3
delete my mp3s
delete Greenday
delete greenday
delete green day
delete greenday dookie
delete all of my mp3s
delete all of the mpeg layer 3 files on my harddrive, except the ones i really like

To truly have a CONVERSATION this would have to implement some kind of basic AI and language interpreter, i.e:

"delete all of the mp3s i don't like"

:what rating above which do you want to keep?

"um...3 stars and below can go"

:cool...

Anything else and you're just faking this conversation.

Good luck! If this hits dev I'll be watching, I can assure you.

Russo

Th3Professor
December 6th, 2008, 08:03 PM
:p ... Neural Impulse Actuator................

;)