I agree it is not easy fixing problems over the internet.
gcc --version actually returned 4.2.4
Something else I noticed when configuring the kernel: my Ethernet card is not listed.
It shows an RTL8169 but does not show an RTL8111C?
Hi robasc, regarding point 3 above, I've been tinkering around and have possibly found a workaround; however, I didn't get a chance to boot up the Kerrighed cluster to test it out. Once more it involves a couple of things I missed out of the guide but remember doing, since I saw your message. Firstly, in the chroot'ed nfsroot system, check your environment variables:
Code:
$ export
Look to see if you have the variables CC=gcc-3.3, CXX=g++-3.3 and CPP=cpp-3.3. If not, then you may also not have the packages installed, so do the following:
Code:
apt-get install gcc-3.3 g++-3.3 cpp-3.3
export CC=gcc-3.3
export CXX=g++-3.3
export CPP=cpp-3.3
Once you've got these installed and the variables set up, you can do the following to install Kerrighed, which is pretty much what the guide does. The -i flag will ignore errors, so you'll still get the error message, but it seems to build and install all the necessary tools, modules, etc. in the correct places. Hope this helps!
Code:
cd /usr/src/kerrighed-*
./configure --with-kernel=/usr/src/linux-2.6.20
make patch
make defconfig
cd ../linux-2.6.20
make menuconfig
cd ../kerrighed-*
make kernel
make -i
make kernel-install
make install -i
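Before running the build, it can save a failed compile to confirm the toolchain is actually in place. This is just my own quick sanity-check sketch (the helper name check_tool is made up, not from the guide), run inside the chroot:

```shell
#!/bin/sh
# Sanity check before building Kerrighed: is the gcc-3.3 toolchain present?
# "check_tool" is my own helper name, not part of the guide.
check_tool() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "$1: found"
    else
        echo "$1: MISSING - install with apt-get install $1"
    fi
}

# Show what the build-related variables are currently set to.
for var in CC CXX CPP; do
    eval "echo $var=\${$var:-unset}"
done

# Check each compiler the Kerrighed build expects.
for tool in gcc-3.3 g++-3.3 cpp-3.3; do
    check_tool "$tool"
done
```

If any variable prints "unset" or any tool prints "MISSING", go back to the apt-get/export step above first.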
hi everyone,
I have two interfaces on my server (Ubuntu Server running DHCP, NFS and PXE services):
eth0 (192.168.1.0/24) --> hub which connects the nodes
eth1 (10.14.200.0/24) --> client for the simulation (a server-client application)
I've installed the Kerrighed cluster according to the tutorial.
bigjimjams says that I must run my simulation on a node (192.168.1.0/24), but my simulation client is on (10.14.200.0/24).
My simulation is installed on the server (Ubuntu Server) and it needs to be triggered by the client.
1) Do I have to reinstall my simulation on a node if I want to implement this clustering?
2) The client has problems if the simulation runs on a node (192.168.1.0/24) while the client is on (10.14.200.0/24). I could force my client to move to the (192.168.1.0/24) network, but imagine this Kerrighed cluster is implemented for a web server with a public IP: how do I manage the IPs when the nodes have private IPs? Any ideas on this problem?
thank you
agung aryo,
Hi Agung Aryo, what type of simulation are you running? In my opinion, I can see two different options:
1) The client connects to the simulation on the server, which can then execute other programs/processes of the simulation on the cluster nodes via ssh.
2) The client connects to the server, which then grabs the information from the client and passes this to the cluster nodes when executing the simulation on the cluster nodes via ssh.
In order to have Kerrighed working properly, only the cluster nodes should run the Kerrighed kernel; the server should just run a standard Ubuntu kernel. Also, the cluster nodes should have their own private network with the server. A client should not be connected to this private network, as all a client should see is the server, which then submits jobs to the cluster nodes. Hope this helps!
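To make option 2 concrete, here's a minimal sketch of how the server might hand a job to a node over the private network. The helper name submit_job, the hostname node1 and the simulation path are all hypothetical; the DRY_RUN switch just lets you preview the command without a cluster.

```shell
# submit_job NODE CMD: run CMD on a cluster node via ssh.
# With DRY_RUN=1 it only prints the command it would run,
# so you can check the wiring before touching the cluster.
submit_job() {
    node=$1
    cmd=$2
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "ssh $node $cmd"
    else
        ssh "$node" "$cmd"
    fi
}

# Example: preview dispatching the simulation to a node on 192.168.1.0/24.
DRY_RUN=1 submit_job node1 /usr/local/bin/simulation
```

The client never needs to see 192.168.1.0/24 at all; it only talks to the server on 10.14.200.0/24, and the server relays to the nodes.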
Hi everyone
I'm a new user of Kerrighed. I set up a cluster composed of three machines just like it says in the Easy Ubuntu Clustering tutorial for Kerrighed.
The problem is that the disks in each node don't seem to be working.
It is said that one of the features of Kerrighed is a cluster file system, which means total virtualization of all the nodes' disks.
Is Kerrighed really able to present all the disks as a single disk?
Hello, koukoobangoo.
KerFS was (temporarily) removed in Kerrighed 2.0.0 because it caused kernel crashes. It now seems to be developed as a separate module:
http://www.kerrighed.org/wiki/index.php/KernelDevelKdFS
Bye,
Tony.
Hello bigjimjams,
why can't the server participate in the Kerrighed cluster? I couldn't find a technical explanation for this on their website, but maybe I haven't looked hard enough.
Since Kerrighed is an SSI cluster, it would be nice to be able to use the server as an additional node too, excluding some of the processes (e.g. X) from migration, of course.
I'm building a very small experimental cluster with fewer than 5 nodes... giving up one would make quite a difference.
Hi apprentice_clusterer, as far as I know it's because all the nodes that are part of the SSI cluster have to share a root file system of some sort, whether this is via NFS, UNFS3, etc. The server can't be in the cluster as it can't share the same root file system. Hope this helps.
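For what it's worth, the shared-root setup the guide relies on boils down to an NFS export on the server along these lines (a sketch only: the path matches the guide's /nfsroot/kerrighed and the subnet is this thread's private network, so adjust both to your setup):

```
# /etc/exports on the server (sketch: adjust path and subnet to your setup)
/nfsroot/kerrighed 192.168.1.0/255.255.255.0(rw,no_root_squash,async,no_subtree_check)
```

Every PXE-booted node then mounts that one tree as its root, while the server boots from its own disk, which is why it sits outside the SSI in this arrangement.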
Hi robasc, I've managed to get kerrighed running using the 64-bit version of Hardy. However, I ended up using the revision 4762 of the SVN version of kerrighed, as I seemed to run into issues compiling 2.3 for a 64-bit kernel. I followed the same procedure as the guide on Easy Ubuntu Clustering, but the following was different in the kerrighed install:
I'll update the wiki soon so that it details how to install the SVN version too. In later revisions from the SVN, you can drop the -i flag, as they seem to have corrected the src folder == dest folder issue. You also have to ensure you enable certain things in the kernel for the new scheduler in Kerrighed. These can be found on the SchedConfig page.
Code:
sudo apt-get install subversion
svn checkout svn://scm.gforge.inria.fr/svn/kerrighed/trunk /nfsroot/kerrighed/usr/src/trunk -r 4762
sudo chroot /nfsroot/kerrighed
apt-get install automake autoconf libtool pkg-config gawk rsync bzip2 gcc-3.3 ncurses-dev wget lsb-release xmlto patchutils xutils-dev build-essential grub g++-3.3
cd /usr/src/trunk
./autogen.sh
./configure CC=gcc-3.3 CXX=g++-3.3
make defconfig
cd kernel
make menuconfig
cd ..
make kernel
make -i
make kernel-install
make install -i
As for the network card, I had the same chipset in my old machine and just enabled the 8169 in the kernel and it worked fine.
Hello, bigjimjams and apprentice_clusterer.
That's not quite right - they have to share configuration information, and an easy way to achieve that is to use a single NFSROOT. However, you could just replicate the information manually. We've run Kerrighed on four independent Debian compute servers without any problem - well, at least not one relating to sharing the root filesystem.
In our case, I could share the server's root filesystem, but I don't, because it's already being shared with openMosix compute nodes using UNFS3 with 'cluster' extensions enabled, as I mentioned previously. I'll post instructions about UNFS3 and clusterNFS on the wiki when I've got time. Basically, the UNFS3 server interprets 'tags' for different client IP addresses or for clients in general, e.g.:
Code:
unique_file
unique_file$$192.168.1.98$$
unique_file$$192.168.1.99$$
shared_file
shared_file$$CLIENT$$
When "unique_file" is requested by a client, the UNFS3 server checks for the presence of 'tagged' files on the server. If tagged files are found, they are interpreted; otherwise the normal file is served. This makes it possible to share the root filesystem of the NFSROOT server with the PXE-booted compute nodes.
The contents of "unique_file" on the server are different to either file on the nodes with the IP addresses. The contents of "shared_file" on the server are different to the contents of "shared_file" on the clients, but are the same on all clients. There is a performance overhead when interpreting tags, so I don't use it where it is not actually needed.
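To make the tag precedence concrete, here's a small sketch of which file the server would serve for a given request and client IP. This is my own illustration of the behaviour described above, not UNFS3 source code, and the function name resolve_tagged is made up:

```shell
# resolve_tagged FILE CLIENT_IP: pick the variant a tag-aware server
# would serve (illustration of the behaviour, not UNFS3 code).
resolve_tagged() {
    file=$1
    client_ip=$2
    if [ -e "${file}\$\$${client_ip}\$\$" ]; then
        # A file tagged with this client's IP wins.
        echo "${file}\$\$${client_ip}\$\$"
    elif [ -e "${file}\$\$CLIENT\$\$" ]; then
        # Generic CLIENT tag: the same override for every client.
        echo "${file}\$\$CLIENT\$\$"
    else
        # No tagged variants: serve the plain file.
        echo "$file"
    fi
}
```

So 192.168.1.98 gets its own copy of unique_file, every client gets the CLIENT-tagged shared_file, and the server itself keeps its untagged versions.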
This is all documented at:
http://unfs3.sourceforge.net/
Bye,
Tony.