This How To is an edited version of my blog article, found here: http://www.prupert.co.uk/2010/08/02/...ime-in-ubuntu/

I found no useful information online about streaming live audio in Ubuntu, so I decided to write down my experiences. I include various options for different situations.

You first need to set up your audio inputs. If you are running Gnome or KDE, this is easily done via the sound widget. If you are running headless, you need the ncurses-based ALSA mixer. So, run
Code:
alsamixer
to set mixer levels from the command line. Select your Mic as the recording input (use F5 to show the capture controls and press Space to mark the Mic as a capture source). Push up the volume of the Mic input. You may also need to push up the volume on a generic channel called Capture, if you have one. Don't forget to run
Code:
alsactl store
afterwards to save your settings. As a result of doing all this,
Code:
/dev/dsp
should now point to your microphone input. I had lots of strange issues with audio echo and feedback, which I managed to eliminate by ensuring all my audio streams were always mono, not stereo.
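If you are scripting the setup on a headless box, the same steps can be done non-interactively with amixer. This is only a sketch: the control names "Mic" and "Capture" are assumptions and vary between cards (run amixer scontrols to list yours).

```shell
# Scripted version of the alsamixer steps above (control names vary per card).
if command -v amixer >/dev/null 2>&1 && amixer scontrols 2>/dev/null | grep -q "'Mic'"; then
    amixer sset Mic 80% cap                           # raise the level and mark Mic as a capture source
    amixer sset Capture 80% cap 2>/dev/null || true   # generic Capture channel, if present
    alsactl store 2>/dev/null || true                 # save the settings (may need root)
    mic_setup=done
else
    mic_setup=skipped                                 # no Mic control on this machine
fi
echo "mic setup: $mic_setup"
```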

FFmpeg and FFserver

Why Use? Only really useful if it supports a combination of video and audio output that you can't produce any other way.

One streaming option is FFmpeg and FFserver. You'll need to build the latest version of FFmpeg (which comes with FFserver); you can use the excellent guide here, or use the automated script I wrote based on that How To, here. To use FFserver, you must first edit its config file: when you build FFserver, it expects to find the config file at /etc/ffserver.conf, but it doesn't create one. There is a default one available on the FFmpeg website here, or you can use the one that I have created, found here. You then run FFserver, using
Code:
ffserver
Once FFserver is up and running, you need to run FFmpeg to pipe audio and/or video to it. This is the FFmpeg command that allowed me to pipe the mic input to FFserver:
Code:
ffmpeg -f oss -i /dev/dsp http://localhost:8090/feed1.ffm

The most important part is "http://localhost:8090/feed1.ffm", as this sends the output of FFmpeg to FFserver. Try experimenting with the various ffserver.conf options. You only need one input from FFmpeg, but you can define multiple output streams in FFserver. Be careful, though: your input has to match the output. If the output expects video and audio but you only send it video, FFmpeg will fail (it seems FFmpeg reads the stream options from FFserver and checks that what it is sending will work, so any problems will usually occur in FFmpeg rather than FFserver).
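To give an idea of the shape of the file, here is a minimal audio-only ffserver.conf sketch. The feed and stream names are just examples, not my actual config, and AudioChannels 1 keeps the stream mono as suggested above:

```
Port 8090
BindAddress 0.0.0.0
MaxClients 10
MaxBandwidth 1000

<Feed feed1.ffm>
File /tmp/feed1.ffm
FileMaxSize 5M
</Feed>

<Stream live.mp3>
Feed feed1.ffm
Format mp2
AudioCodec mp2
AudioBitRate 32
AudioChannels 1
AudioSampleRate 44100
NoVideo
</Stream>
```

Note that this stream defines audio only (NoVideo), so the matching FFmpeg input only needs to send audio.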

The main problem for me with FFserver was the delay: typically around 30 seconds, and it got worse over time. This is a known problem with FFserver, so I can't recommend it for real-time use. In fact, I couldn't really see any reason to use FFserver at all, sadly. It is limited in the types of streams it can produce and in how it can offer those streams, and it seems to get less attention than FFmpeg, receiving only a few commits each month via SVN.

Icecast2 and Darkice

Why Use? Useful if you want to stream music or live DJ mixes. Supports MP3 and OGG (depending on the tool you use to pipe music to Icecast).

Icecast is a streaming music solution, primarily designed to let you stream music over a network in the style of WinAmp's Shoutcast technology, essentially making your own radio station. Just like FFserver, you need a server program and a source program (which takes the audio and sends it to the server). I used
Code:
icecast2
as the server program and
Code:
darkice
as the source program. Darkice is perfect, as it is designed to stream live audio from the audio input. Icecast2 is configured via an XML file, while darkice uses a plain text config file. Here is my icecast.xml and my darkice.cfg. You'll have to edit both files so they work on your system, for example making sure the log file locations are correct. You set up various streams in the icecast.xml file; these are confusingly called mount points. You then set up various sources in the darkice config that direct the audio to these mount points. Due to the bizarre way that ALSA sometimes works, /dev/dsp doesn't always work, so in the case of darkice I used hw:0,0 instead. This refers to the same thing, but in a different way: the first number is the card and the second is the device on that card, both counted from 0. You then run the icecast2 server using:
Code:
icecast2 -b -c ~/icecast.xml
and then darkice using:
Code:
darkice -c ~/darkice.cfg
You might find you need to run darkice using sudo, as there can be permission problems accessing the audio input.
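For reference, darkice's config is a plain INI-style file rather than XML. A minimal sketch follows; the password and mount point here are placeholders and must match what you set in icecast.xml:

```
[general]
duration      = 0          # 0 = stream forever
bufferSecs    = 5

[input]
device        = hw:0,0     # first card, first device, as discussed above
sampleRate    = 22050
bitsPerSample = 16
channel       = 1          # mono, to avoid the echo issues mentioned earlier

[icecast2-0]
bitrateMode   = cbr
format        = mp3
bitrate       = 32
server        = localhost
port          = 8000
password      = hackme     # must match <source-password> in icecast.xml
mountPoint    = live       # clients connect to http://yourserver:8000/live
```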

This worked much better than FFserver: the delay is around five seconds. However, I got constant buffering issues, with the audio repeatedly cutting out, and the delay gradually got worse over time. Icecast seems perfect for streaming music, but it doesn't work well in real time.

SSH and cat

Why Use? If you want a quick and dirty solution for short-term use with minimal setup.

A fellow Linux user suggested this. After ensuring that SSH is set up on both PCs (using
Code:
ssh
) you simply need to run this command from the PC that you want to listen to the audio on:
Code:
ssh SERVER 'cat /dev/dsp > /dev/dsp'
where SERVER is the IP address of the PC you want to stream the sound from. This literally copies the input from the microphone on the server to the audio output of your local PC over SSH. The delay is about the same as with Icecast, around five seconds, but you will suffer from bad buffering issues, as it transfers raw uncompressed audio over your network.
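The buffering makes sense once you work out how much data raw audio actually is. A back-of-the-envelope sketch (the figures are assumptions: CD quality for a typical card, and the classic 8 kHz 8-bit mono default for OSS's /dev/dsp):

```shell
# Raw (uncompressed) audio bitrate = sample rate x bits per sample x channels.
rate=44100; bits=16; channels=2                        # CD-quality stereo
bps=$((rate * bits * channels))
echo "CD-quality raw audio:  $((bps / 1000)) kbit/s"   # 1411 kbit/s

rate=8000; bits=8; channels=1                          # classic /dev/dsp default
bps=$((rate * bits * channels))
echo "OSS-default raw audio: $((bps / 1000)) kbit/s"   # 64 kbit/s
```

Even the low default is a constant stream with no compression, which is why a 32 kbit/s MP3 stream behaves so much better over a wireless network.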


VLC

Why Use? If you want a huge amount of streaming options, both in format and streaming protocols.

VLC offers a huge range of options for both the streaming protocol and the format of the stream. It also acts as both source and server, taking the audio input and streaming it (it actually does this using the Live555 streaming media module). You can use http, rtp and rtsp among others, and transcode the stream into pretty much any format you can think of. It can be quite complicated to set up. VLC can be run via a GUI or from the command line using cvlc. The VLC wiki has lots of information about the various streaming options, but the documentation is sadly outdated. After lots of trial and error, I finally arrived at the following command, which worked for me:
Code:
cvlc -vvv alsa://hw:0,0 --sout '#transcode{acodec=mp2,ab=32}:rtp{dst=192.168.1.5,port=1234,sdp=rtsp://192.168.1.5:8085/stream.sdp}'
This basically takes the microphone input, transcodes it into an MP2 stream and streams it via rtp to the address rtsp://192.168.1.5:8085/stream.sdp. If you then open "rtsp://192.168.1.5:8085/stream.sdp" in VLC on the client PC, you get live audio in near real time! The delay is about 1.5 seconds and it requires very little bandwidth. Sadly, there are three problems with this. First, only one client could connect at a time; second, the stream seems to fail after a while (sometimes after five minutes, sometimes after two hours, but it would always fail); and third, it is very CPU intensive (I suspect this is why the stream failed on my box).


FFmpeg

Why Use? If you want a reliable, near-real-time stream.

The best solution I found for real-time audio streaming is FFmpeg on its own, as it can stream via rtp natively, with no need for FFserver. I used the following command:
Code:
ffmpeg -f oss -i /dev/dsp -acodec libmp3lame -ab 32k -ac 1 -re -f rtp rtp://234.5.5.5:1234
This is similar to the VLC command, except that this time I am converting into MP3 and streaming to the address rtp://234.5.5.5:1234. Once again, all you need to do is enter rtp://234.5.5.5:1234 as the streaming source in VLC. There are a few huge advantages with this method: first, CPU usage is only about 25% on my naff little 750 MHz Transmeta Crusoe; second, multiple clients can connect at once; and third, it seems rock solid. I have had this command running for three days straight with no problems. Memory usage seems to creep up over time, but that's about all. I get a delay of roughly 1.5 seconds, and it does not change over time.
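One reason several clients can connect at once here is the destination address: 234.5.5.5 sits in the IPv4 multicast range (224.0.0.0 to 239.255.255.255), so the network delivers the stream to every client that joins, rather than to a single fixed receiver as in the VLC example. A quick check, in case you want to pick a different group address:

```shell
# IPv4 multicast addresses have a first octet between 224 and 239.
is_multicast() {
    first=${1%%.*}                     # text before the first dot
    [ "$first" -ge 224 ] && [ "$first" -le 239 ]
}

is_multicast 234.5.5.5   && echo "234.5.5.5: multicast (many clients can join)"
is_multicast 192.168.1.5 || echo "192.168.1.5: unicast (one fixed receiver)"
```

Note that multicast generally only works within your own LAN; routers on the wider internet won't forward it.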

Other Options

There are other options that I tried but couldn't get to work. I include them here for completeness.

PulseAudio: you can share your audio inputs and outputs over a network using PulseAudio. I managed to see the shares, but whenever I accessed them it seemed to crash my network, I guess due to the high bandwidth involved (I was using wireless).

Live555MediaServer / LiveCaster: this is the library that powers VLC's streaming. Sadly it has only minimal documentation and I couldn't figure it out.

MPEG4IP: primarily designed to stream video, it actually worked reasonably well, but it requires a GUI and I was working headless.