
Thread: Step by step guide to setting up Ubuntu 11.10 server for Newbies!


    Step by step guide to setting up Ubuntu 11.10 server for Newbies!

    Update 1/7/2012:
    Updated the UFW section to add a new in/out rule for dhclient
    Updated the NETATALK section to use a newer version of Netatalk and a different afpd.conf file
    Updated the NETATALK section to have a better prompt when using ssh or the console



    Section #1 - Introduction
    ----------------------------------------------------------------
    ----------------------------------------------------------------


    Hello,

    I have set up a small server at my home to handle some basic NAS-like functions for my network of Macs. I knew that I would have a hard time repeating the steps should I ever have to do it again, so I documented the process. Well, that sort of snowballed and I wound up writing a fairly in-depth HOW TO on the process. Partly because I felt there wasn't any comprehensive guide to doing what I wanted to do, and partly because the act of "teaching others" actually makes me learn the process more completely myself. I am aiming this guide at relative newbies to the Linux world (though I expect a certain amount of technical savvy regarding computers in general).

    I tried to document not only the steps needed to recreate what I did, but also the basic knowledge needed to understand what exactly each step does. In this way, even if individual users need to stray from the exact path I have taken, they should be armed with enough knowledge to adapt the instructions to their particular needs.

    I am going to break the process down into several sections so as not to overwhelm anyone with a single block of text. Questions and comments and critiques are welcome.

    A note to the moderators: If I am posting in the wrong forum or breaking any rules here, please let me know and I will gladly adjust accordingly.






    Section #2 - Who wrote this guide and who is it for? DISCLAIMERS!!
    ----------------------------------------------------------------
    ----------------------------------------------------------------




    Part 1: Who wrote this guide? (Disclaimers!)

    Well, I wrote it of course!

    More specifically, I am a complete newbie to setting up and administering a server. Everything here is stuff I learned by searching the web over the past month or so. Prior to that, I had never used ssh, firewalls, netatalk, or anything of the sort. I *have* been using a Linux operating system as an end user at work for over 10 years now, so I am comfortable with navigating via the command line.

    But beyond that, I AM COMPLETELY NEW TO ALL OF THIS!

    Knowing this, take everything I post here with a huge helping of salt. I researched this as best as I could, but there are absolutely NO guarantees that I haven't done something completely wrong. I have, however, implemented all of this at home and it appears to be working. I am also (he buffs his nails on his shirt) a pretty smart guy. That said, hopefully the many folks here who are even smarter will chime in and correct any mistakes I have made. I will attempt to update and edit the guide based on these comments.

    And, finally, the requisite:

    I TAKE NO RESPONSIBILITY IF YOU MESS UP YOUR SYSTEM, DATA, SENSE OF BALANCE, MARRIAGE, WHAT HAVE YOU.

    Even if you follow my instructions to the word. Seriously. I did my best here, but I am only posting this as a courtesy. Go in with your eyes open.



    Part 2: Who is this guide for?

    This guide is intended for people like me. You are reasonably computer savvy but you don't know the first thing about setting up a Linux server. You can figure things out when they are more or less obvious (e.g. you won't need a tutorial to use the nano text editor - but using vi might be beyond your immediate comfort zone). You have used the command line, but not much beyond changing directories and moving files. You haven't spent any time thinking about computer security other than to complain when you are forced to use a capital letter AND a number in a password (oh, the unjustified indignity!). You may have heard of ports and port forwarding, demilitarized zones, network address translation (NAT), routers, and the like but are pretty muddy on what this all means.

    Of course, if you are more advanced, this guide might still be of some use. Who knows.



    Part 3: How hard is this going to be? I see a lot of text here...

    This isn't that hard. Now, you might get a little freaked out by the amount of text in this guide, but don't be! I can be a wordy person, but I also do my best to be clear and concise when needed. The large amount of text is intended to simplify the process by explaining what each step does and why I elected to do it.

    If you get past the amount of words and let yourself just read it one sentence at a time, it should be a fairly quick, easy, and painless process to follow. I have made my best effort to format the text in a way that is consistent, clean, and easy on the eyes. Personally, I usually do better when reading instructions that are printed on paper (vs. reading directly from the screen). If you are like me, you might consider printing the guide out and reading it through a few times before you go to bed. At the very least I expect it will be a fairly effective counter to any insomnia you may have. Be forewarned though, it is a lot of paper!

    Although this server build isn't too difficult a process (mostly thanks to all the hard work done by the Linux open source community, the many folks who kindly post their own tutorials, the folks at the Ubuntu and LinuxQuestions forums, as well as Canonical), it surely isn't a 10 minute install and go scenario. For that, you might consider FreeNAS (http://www.freenas.org/). Alternatively, if you prefer Linux, you might also consider OpenMediaVault (http://blog.openmediavault.org/) which is based on Debian Squeeze and looks like it is shaping up to be a really nice, easy to install and manage solution.

    Personally, I prefer building my own server for the simple reason that it forces me to really understand how the system works. It also offers me the ability to customize the features that I want (which, as you will see, are not standard). I might switch to OpenMediaVault some day since it is also based on Debian. But if I do, I will have had the experience of setting up this server and that will allow me to customize OMV for my needs. On the other hand, I may go crazy and install Arch Linux. Either way, this is a great way to start learning how to manage your own server.

    Finally, if you just parrot what I do and blindly type in what you see me typing here you are pretty much guaranteed not to get a working server. In most cases you will have to slightly bend and tweak what I have done to get it to work for you. That is why there are so many words. They are there so that you can understand *why* I am doing something and then let you adjust accordingly for your particular situation.

    Ultimately though, just read it one sentence at a time and you will be fine.



    Part 4: Miscellany.

    If you have any questions, comments, or critiques please post them! I will do my best to respond (but also know that I work from 8-7 and have a bazillion projects on the weekend, so there may be a delay before I get back to you).

    Finally, I really relied on a number of websites and forums to put this together. To the extent that I remembered, I have included these links here. But a lot of people spent a lot of time posting a lot of useful information and I am sure I am only crediting a tiny fraction of those who helped me. I'm sorry if I missed mentioning them all, but there you go.





    Section #3 - What are my goals for the system? What are the limitations?
    ----------------------------------------------------------------
    ----------------------------------------------------------------


    Here is a brief rundown of the goals of this system:


    Part 1: This machine will be a file server, a Time Machine backup drive, and a media server in my home. Nothing Else.

    This part is fairly self explanatory. I intend to have this system be on (though sleeping most of the time) 24 hours a day. It will serve files to a local network of Macs (3 genuine articles, and one Dell mini 10v hackintosh). It will also be a network drive to which these Macs will back themselves up via Time Machine. Finally, it will serve music and movies to any of the machines. But that is it. It will not be running bittorrent, MySQL, Apache, etc. It will not serve web pages or crunch any data. I have modest needs and I prefer the security of having as few services running as possible.

    It will not be a high-performance server. It can stream multiple musical tracks at once. It can stream movies. But it isn't intended to replace your primary drive on your Mac. Instead think of it as a media and file repository that you use to stream media or access archived files.



    Part 2: There are actually going to be TWO servers in two different locations.

    Eventually, there will actually be two identical machines. One will be located in California. The other in Washington DC. They will be identical in both hardware and software. They will differ only in the actual user accounts (though the administrative user will be identical on both).

    Each of these servers will back themselves up to the other (i.e. mirror all of their data to their partner across the country). That way, if my server gets crushed in an earthquake here in California, all of the data will be safe in the great District of Columbia. And vice versa (I mean, hey. They just had an earthquake there. Freaky, right?)

    But that portion of the tutorial is currently missing because I am totally broke and cannot afford a second machine right now. Still, I will offer a basic outline of how something like that would work.



    Part 3: Thoughts on system security.

    Because these servers will be partially "naked" out on the internet (i.e. I will need to access them remotely) I have spent a fair bit of time trying to make sure that they are secure. A lot of the steps I am going to take here may smack of overkill but I think multiple layers of security can only help. Specifically, as I am new to all of this, layering security helps protect against the consequences should I accidentally mess up one of those layers. But, this should serve as a reminder that I really don't have much of a clue as to what I am doing. I am just synthesizing what I have been able to glean from google searches. Real security experts, please chime in!



    Part 4: A note about OSX vs. Windows.

    I only have Macs, so this guide is directed pretty much exclusively to them. That said, much of what is described here should be easily translatable to a Windows network. It's just that I'm not going to do that (for pragmatic reasons, not religious ones). If someone reads this and manages to add Windows specific tips, I'll try to insert them here.

    Alternatively, you could just ditch the dark side and convert to OSX which is all sunshine and roses. (I kid! I kid!)



    Part 5: The hardware.

    Here is the hardware I am using for the server:

    Intel D510MO motherboard (dual atom cpu)
    OCZ ssd drive (to store the Server software)
    Rosewill 4 port SATA II card (rc-217)
    4 Western Digital Caviar Green 1TB drives in a RAID 5 configuration
    FSP AURUM SERIES Gold 400 power supply (80+ GOLD)
    Note: I cannot recommend this configuration. I am currently having difficulty with the S.M.A.R.T. reporting with this combination. Turning on regular S.M.A.R.T. monitoring seems to lock the server up with some consistency. That said, all of the steps that I outline here that have to do with S.M.A.R.T. are solid and with luck, my problems should not be your problems (unless you also have funky hardware). I will update this post once I have replaced some of the equipment with better specs.





    Section #4 - Define your terms!
    ----------------------------------------------------------------
    ----------------------------------------------------------------


    Since this is a guide for us newbies, I think it would be appropriate to define some basic terms here. Again, keep in mind that I am new to all of this as well, so if you have a better definition, please chime in. Also, if I made a mistake, let me know.


    SSH:

    This is a protocol that will be used to remotely administer your server. It basically lets you open a terminal on your local Mac (or PC) and start typing in commands as though you were physically working on the server. It has all sorts of security features to try and prevent evil people from hijacking your computer or listening in on the commands you are sending. You may have multiple ssh connections to your server at once, making it easier to edit a config file in one, and run terminal commands in another.



    Public/Private Keys:

    Most of us are used to the security model that relies on a user login (name) plus password. If you can supply both pieces of information, you are allowed into the system. Of course, if your login is easy to guess (say, your first name) evil-doers already have half of the information they need to get in! If your password is easy to guess (or crack because it is a regular word found in the dictionary) then the walls to your digital kingdom are little more than decoration.

    Key based security changes this model, making it much more secure. You can think of Public/Private keys as sort of being a Lock/Key system. (A note to experts: I know this is not at all how it really works, but I think it is a useful analogy). To use key-based security, you generate two keys using a program running on your local machine (my Mac in my instance). You store the public key (acting as a lock) on any system you want to remotely log into. You store the private key (acting as a key for this lock) on the computer you will be logging in from. This private key is itself locked up with a passphrase (essentially a password that is a full sentence including punctuation). Think of the local key as being locked up inside a safe with a combination lock on it (where the combination lock is your passphrase).

    To log into a remote system, you first supply your passphrase which unlocks your locally stored private key (think of opening the safe). This key is then used to "unlock" the public key that resides on the server. If everything "matches up", you are authenticated and can start using the remote machine.

    Now, don't mistake the passphrase that unlocks the private key for a password that unlocks the remote server. The passphrase only unlocks the key that resides on the computer you are sitting in front of (again, the passphrase is analogous to the combination to the safe that holds your key). If you were sitting at a different computer (one that did NOT have your private key stored on it) simply typing in the passphrase would not let you log into the server. You must have this private key file physically residing on the computer you are logging in from.

    Why is this more secure? Well, there is a ton of complicated cryptographic stuff going on in the background, but on the surface consider what it takes to break into a machine that does not use key based security: Once someone manages to get both your login and password (either by guessing, cracking, or social engineering) they have full access from anywhere in the world. Compare that to what is needed to break into a machine that uses key based security: They would still need to guess/crack/steal your login but without the actual key file that is stored on your (hopefully) non-remotely accessible computer, no access is possible. Even should your key be stolen (say your laptop is stolen), assuming you selected a reasonably secure passphrase, it would be useless because the key would still be protected inside the "safe". That would buy you enough time that you could then change the public key on your server long before anyone could ever hope to crack your passphrase (if, in fact, they ever could).
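    To make the abstract concrete, here is roughly what generating such a key pair looks like in a terminal on your Mac. This is just a sketch: the file name and passphrase below are made-up placeholders you would replace with your own, and in real life you would save the key under ~/.ssh rather than a temp file.

    ```shell
    # generate an RSA key pair; -N sets the passphrase that "locks the safe"
    # (file name and passphrase here are made-up placeholders)
    keyfile="$(mktemp -u /tmp/demo_key.XXXXXX)"
    ssh-keygen -q -t rsa -b 4096 -N 'squibtish rides again 1126?' -f "$keyfile"

    # two files result: the private key (guard it!) and the public key (.pub)
    ls -l "$keyfile" "$keyfile.pub"
    ```

    The private key stays on the machine you log in from; only the .pub half ever goes onto the server (typically appended to ~/.ssh/authorized_keys there, which the ssh-copy-id helper automates).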



    cron/crontab:

    cron is a process that allows the user to run arbitrary commands on an arbitrary schedule. For example, should you want the server to check the hard drive temperatures every five minutes, you would instruct cron to run the hddtemp command once every five minutes. cron can be tasked with running any number of commands at any frequency, as long as the interval is at least one minute (i.e. it cannot execute commands more often than once per minute). For example, you can have one command executed every minute, and a second one executed only every Sunday night at 11pm. crontab (vs. cron) is simply the file that configures which commands should be executed at what time. We will be relying on cron to control basic server maintenance.
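    For instance, the two schedules just mentioned would look something like this in a crontab file. This is only a sketch: the hddtemp path, the device name /dev/sda, and the weekly script name are assumptions for illustration (check yours with "which hddtemp"). You edit your own crontab with the command crontab -e.

    ```
    # m   h  dom mon dow   command
    */5  *   *   *   *     /usr/sbin/hddtemp /dev/sda >> /var/log/hddtemp.log
    0    23  *   *   0     /usr/local/bin/weekly_maintenance.sh
    ```

    The first five columns are minute, hour, day-of-month, month, and day-of-week. */5 in the minute column means "every five minutes"; "0 23 * * 0" means 11pm every Sunday.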



    IP Address:

    This is like a telephone number or street address for your computer. If you want to connect to the computer, you will have to "call" it at this number (or "visit" it at this address). The IP Address of a computer is not always constant. Often (usually for home computers) it will change from time to time.

    Also, there is a concept of a local address and public address. Typically, in a home user setting, your router (the device that is either connected directly to your modem or is actually integrated with your modem) will have a unique public IP address. This address is completely unique among all the IP addresses in the world (although it may change from time to time and the address you had yesterday may belong to your neighbor today). Behind your router, each of the computers on your home network will also have a unique IP address, though it is unique only with respect to the other computers on your local network. This is a local IP address. These local IP addresses are reused all over the world, but are sequestered behind their routers which keeps them from interfering with each other. The best analogy I can think of is a business with a single public phone number and multiple internal phones, each with their own extension. While the reception desk has a single phone number that is completely unique in the world, each extension (say 1454) is used over and over at different companies everywhere.

    Local IP addresses usually fall into one of several ranges:

    172.16.1.* where * can be any number between 1 and 254
    192.168.1.* where * can be any number between 1 and 254
    10.0.0.* where * can be any number between 1 and 254

    There are several methods commonly used to indicate a range of IP addresses.

    172.16.1.*
    172.16.1.0-255
    172.16.1.0/24

    All of these mean the same thing: The range of addresses from 0-255 on the subnet that starts with 172.16.1. Some tools that deal with IP ranges will require one format. Some require another.
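    The /24 notation simply says how many of the address's 32 bits are fixed; whatever bits are left over are free to vary, which determines the size of the range. A quick sanity check in the shell:

    ```shell
    # a /24 fixes 24 of the 32 bits, leaving 8 bits to vary
    prefix=24
    addresses=$(( 2 ** (32 - prefix) ))
    echo "$addresses"   # prints 256, i.e. the 0-255 range above
    ```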



    NAT:

    Network Address Translation. Typically your router will have this built in. Basically it masks the IP address of your home computer behind the public IP address of the router. So, internally you might have a machine with an address of 172.16.1.11, but externally it shows up as 203.0.113.66. To use the phone number analogy above, it is like calling the central switchboard of a company (the public IP address) but not knowing the internal extension of the person you are trying to reach (the local IP address). Without the internal extension, undesired calls can be stopped at the switchboard (aka, your router). Even when the router passes data on to the local machines, the outside world is never made privy to their actual IP addresses.



    Ports:

    Note: I read the following analogy somewhere on the web. Apologies for not being able to credit the originator, but I forgot to save the web address.

    One way to think of your computer is that it is like an office building. Its IP address is like the street address of the building. Each suite in the building provides a different service, as long as you are able to get in the door. Ports are like the suite numbers for these offices.

    If you want to get a new filling for your teeth, for example, you would go and knock on the door of your dentist (suite 22 for example). If you wanted some legal advice, you would knock on a different door (suite 80 for example). Of course, many suites are vacant (suite 22,512 for example) and knocking on them produces no results.

    On your computer, once you have the IP address (comparable to the street address of the building), you can try to connect to any port number you want (just as you could knock on the door of any suite in the building if you so desired).

    SSH, for example, typically listens on port 22. If you want to ssh into a computer, you would first obtain its IP address and then try to connect to (knock on) port 22. Once the ssh server answers the knock, you can then start negotiating your admittance.

    Web servers typically run on port 80. If you want to visit a web site, you would once again obtain its IP address and then connect to (knock on) port 80. When you use a web browser, this is the process that is running behind the scenes.

    Having lots of ports "open" (i.e. listening for knocks) is a security risk. Even if the port is password protected, it is conceivable that someone could sneak past that and get into the "office". In fact, there are automated tools that simply go around and try to connect to every port on every IP address they can find. They are looking for an unprotected (or poorly protected) opening. For example, since port 22 is known to be the port typically used for ssh, this port is often targeted with tools designed to take advantage of poorly managed ssh implementations.



    RAID:

    RAID stands for "Redundant Array of Inexpensive Disks". Or at least it used to. Apparently the word "Inexpensive" offended some folks, so now it stands for "Redundant Array of Independent Disks". Way less fun name.

    Regardless, RAID allows you to store your data across several disks in a way that either speeds up access, improves data integrity, or does both. I would highly recommend reading the Wikipedia article on RAID. It is comprehensible and offers a lot of useful information: http://en.wikipedia.org/wiki/RAID

    Most home server RAID configurations are focused more on fault tolerance than performance (i.e. RAID "levels" 1, 5, or 6). By adopting one of these RAID levels, you can survive having one or more drives fail without losing any data. All that is required is that you replace the faulty drive(s) and rebuild the array before an additional drive fails. In fact, you won't even have to stop using the system as all your data will still be available even while the system is in a degraded state.

    Of course, nothing comes for free. The increased redundancy of a RAID system means you will have to sacrifice some of your storage capacity for this level of protection.

    RAID 1 mirrors your data across any number of drives (each drive is an identical copy of all the others). This leaves you with only a single drive's worth of actual storage space, no matter how many drives you have. For example, if you have two one-terabyte drives, you will only have one terabyte of storage available. If you have four one-terabyte drives, you will still only have one terabyte of storage available (but four copies of your data). RAID 1 can lose n-1 drives and still function, where 'n' is the number of drives you have (i.e. as long as one drive continues to function, no data has been lost). Typically RAID 1 is limited to two-drive setups due to the amount of storage space lost in larger arrays.

    RAID 5 distributes your data plus parity information across a minimum of 3 drives. This leaves you with 1 - 1/n of your storage, where n is the number of drives you have. In other words, if you have 3 drives, you will get a total of 2/3 (1 - 1/3) of your total drive space. If you have 5 drives, you will get a total of 4/5 (1 - 1/5) of your total drive space. For example, three one-terabyte drives will leave you with two terabytes of storage space. Five one-terabyte drives will leave you with four terabytes of storage. RAID 5 can only tolerate a single drive failure (no matter how many drives you actually have in the array) without losing data.

    RAID 6 distributes your data plus double parity across a minimum of 4 drives. This leaves you with 1 - 2/n of your storage, where n is the number of drives you have. In other words, if you have 4 drives, you will get a total of 1/2 (1 - 2/4) of your total drive space. If you have 7 drives, you will get a total of 5/7 (1 - 2/7) of your total drive space. For example, four one-terabyte drives will give you two terabytes of storage space. Seven one-terabyte drives will give you five terabytes of storage space. The benefit of RAID 6 versus RAID 5, however, is that you can suffer two failed drives before losing data. This may seem like overkill until you consider the fact that after a single drive failure in RAID 5 (and in RAID 1 if you are only running two drives), you are as vulnerable to data loss as though you did not have a RAID array at all. Considering that a typical home user will most likely not have a spare on hand, this means ordering a replacement, finding the time to install it, and rebuilding the array (a process which can itself take the computer several days to complete). Your data could be vulnerable for days if not weeks. Also, if you purchased your drives at the same time there is a distinct possibility that they were manufactured together. This increases the chance that they will tend to fail at about the same time.
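    The capacity arithmetic above is easy to sanity-check yourself. Here is a throwaway shell calculation for four one-terabyte drives (the numbers are just the example values from the text):

    ```shell
    # usable space for n identical drives
    n=4          # number of drives
    size=1000    # GB per drive
    echo "RAID 1: $size GB"                    # always one drive's worth
    echo "RAID 5: $(( (n - 1) * size )) GB"    # lose one drive's worth to parity
    echo "RAID 6: $(( (n - 2) * size )) GB"    # lose two drives' worth to parity
    ```

    Re-run it with your own n and drive size to see what a given array would leave you.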

    For the purposes of this server build, however, RAID 5 seems to be the best combination of flexibility and data redundancy. The entire data set will be backed up to another system across the country. Losing data to two or more drives will be annoying but hopefully not catastrophic.



    Software RAID vs. hardware RAID:

    RAID arrays can be managed via the CPU on your server, or via dedicated hardware that (supposedly) handles all of the duplicating / parity / striping whatnot. For most small home server applications, however, software RAID will most likely be preferable for several reasons:

    1) Often hardware RAID is not really hardware RAID. Cheap RAID cards often do most of their work in software anyway, with only a mild assist from the hardware. So you are paying for some functionality that doesn't really buy you anything. In fact, you may be paying extra for a worse situation because...

    2) mdadm might actually run faster than many cheaper hardware RAID cards according to some reports.

    3) Finally - and in my mind most importantly - If a hardware RAID card fails, chances are you will have to buy the exact same card from the same vendor to ever get your data back. By comparison, software RAID can run on most any Linux system. If your hardware fails, any other Linux system running mdadm should be enough to allow you to recover your data.
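    Just so you have a flavor of what software RAID administration looks like, here is the general shape of the commands involved. Do NOT run this on disks holding data - the device names /dev/sdb through /dev/sde are assumptions for a hypothetical four-drive setup, and mdadm --create wipes the drives it is given:

    ```
    # create a 4-drive RAID 5 array (illustrative only; destroys drive contents)
    sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # check the health of any existing array at a glance
    cat /proc/mdstat
    ```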







    Section #5 - Let's get down to work… Install Ubuntu Server 11.10
    ----------------------------------------------------------------
    ----------------------------------------------------------------

    Notes:

    - Anything in bold blue monotype is a command you type at the command prompt. bold green monotype is used to show prompts from the computer. Pink italics are used to show either the contents of text files or miscellaneous prompts.
    - This HOW-TO assumes you are installing the 64-bit version of Ubuntu Server 11.10. For the most part there should be no issues that I am aware of if you install the 32-bit version. I just have never tried it and cannot guarantee that it will be exactly identical to what is described here.




    Step 1: Install a fresh copy of Ubuntu Server 11.10 64-bit and create an administrative user who isn't you.

    Install Ubuntu Server 11.10. The actual step by step process involved is fairly self explanatory, and if you need help there are many many guides on the internet that can help you. The only specifics that I would suggest are as follows:

    --------------------
    When asked to install packages, leave them all blank. We will be installing everything we need by hand.

    --------------------
    If asked about using LVM, select "no". LVM is an added layer of complexity that our tiny NAS does not need.

    --------------------
    When asked to create a new user, create one that is not your usual login. For example, if you usually log in as 'bob' on most of your systems, make this one something like 'admiral'. Reserve 'bob' for your regular use on the machine, and then 'admiral' will be the login you use to manage the server.

    Note: Nearly all of the examples below assume you are logged in as this user (admiral in this example). Also note, whenever you see me type "admiral" you should substitute whatever administrative user name you selected.

    Make sure you pick a really strong password! Don't use a dictionary word, even if it is one of those fancy-schmancy SAT words. Don't think a "foreign" word is more secure. Don't use a proper name. Seriously. Dictionary words and names are frighteningly easy to crack.

    For example:

    shadowcat
    is an extremely weak password, while:

    squibtish1126? <--- silly name + girlfriend's sister's birthday + a question mark
    is pretty strong and not that hard to remember. Also, remember this is not your usual login. It is only used when you manage the system. So even if it is really annoying, you shouldn't be using it very often (only when administering the server).
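    If inventing a password stresses you out, one option (an assumption on my part, not something the installer requires) is to let the machine generate the random part and then bolt on your own memorable decoration. openssl is available on both OS X and Ubuntu:

    ```shell
    # 9 random bytes become 12 characters of base64; add your own twist around it
    pw="$(openssl rand -base64 9)"
    echo "$pw"
    ```

    Just make sure you can actually remember (or securely store) whatever you end up with - a strong password you have to tape to the monitor defeats the purpose.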




    Step 2: Log in and write down the IP address and MAC address of the server.

    Log into the server. You will be doing this directly from the keyboard connected to the machine. Log in using the administrative user you set up above (admiral in my example) and the really hard to guess password you selected.

    You will need to know what the IP address of your server is at the moment. The IP address may (and most likely will) change later. I will show you how to handle this as one of the later steps in the process (it involves reserving an IP address for this machine on your router). For now, type:

    ifconfig
    Look for a line that starts with inet addr: followed by a bunch of numbers. Note: There may be more than one line like this. If so, use the line in the group labeled "eth0" (this assumes you are using an ethernet cable to connect to the network and there is only a single ethernet connector in your machine; if this is not the case, you may have to do a bit of research). Write down the four numbers separated by periods that immediately follow this text, for example: 172.16.1.16. This is the IP address of your machine and is the address you will use to log into it via ssh later. Write it down because you will NOT remember it.

    Next, look for a line that contains the text:

    Ethernet HWaddr.
    Copy down the six letter/number combinations separated by colons that follow. This will look something like:

    a0:32:de:3b:02:2e
    This is the hardware MAC address of your ethernet card (Note: MAC does not have anything to do with Macintosh. It stands for Media Access Control). This is an identifier for your ethernet card and should be unique in all of the world. You will need this information later when you want to assign a (local) static IP address to your server as well as when you want to wake the machine up remotely. Write it down because if you think you will remember this, you are completely delusional.
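    If you'd rather not eyeball the ifconfig output, both fields can be fished out with sed. The sample text below is hard-coded (using the example values from above) so you can see the idea; on the real server you would pipe ifconfig eth0 in instead. Note the format shown matches the net-tools ifconfig that 11.10 ships; newer systems print it differently:

    ```shell
    # a canned copy of the two interesting ifconfig lines (sample values)
    sample='eth0      Link encap:Ethernet  HWaddr a0:32:de:3b:02:2e
              inet addr:172.16.1.16  Bcast:172.16.1.255  Mask:255.255.255.0'

    # pull out the IP address and the MAC address
    ip=$(echo "$sample"  | sed -n 's/.*inet addr:\([0-9.]*\).*/\1/p')
    mac=$(echo "$sample" | sed -n 's/.*HWaddr \([0-9a-f:]*\).*/\1/p')

    echo "IP:  $ip"
    echo "MAC: $mac"
    ```

    On the server itself, replace the echo "$sample" with ifconfig eth0.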



    Step 3: Update the system to the latest software

    Update all the packages to their most up-to-date versions. To do this, type:

    sudo apt-get update
    sudo apt-get upgrade
    sudo apt-get autoclean






    Section #6 - Give your server a static IP address on your local network
    ----------------------------------------------------------------
    ----------------------------------------------------------------

    Unfortunately this section is, by necessity, a bit vague. I have no idea which router you are using, and every router has a slightly different way of reserving IP addresses. I am using an Apple Airport Extreme which has a relatively simple user interface. Most routers use a web-based interface. In either case, this step shouldn't be too hard, but you will have to use Google to figure out the exact steps.

    In general terms, what we want to do is "reserve" an IP address for your server. Typically, a router runs a DHCP server which will randomly assign the next available IP address to whichever device (laptop, desktop, smartphone, etc.) next requests one. Servers, however, should always have the same IP address to ensure that we can reliably find them on the network. To make sure that the server always gets the same address, we "reserve" one with the DHCP server on your router. The server will introduce itself to the DHCP server using its MAC address (you did write it down, didn't you?) and the DHCP server will respond by returning the reserved IP address. Other machines without reservations will continue to get a randomly assigned address (but never the reserved address). You may make multiple reservations for multiple different machines.

    Again, I'm sorry if this section is a bit vague, but I will attempt to illustrate by going through the steps using an Apple Airport Extreme. Between this and googling your actual router model, you should hopefully be able to follow along.



    Step 1: Open the Airport Utility app

    Open the AirPort Utility.app located in the /Applications/Utilities directory

    Select your device and choose "Manual Setup"

    Click on the "internet" tool bar button and then select the DHCP tab. There you will see a field called "DHCP Ending Address". Apparently other routers are smart enough not to hand out static ip addresses to dynamic hosts under any circumstances. Apple's Airport Extreme, however, will actually hand out a reserved address to any client as long as that reserved address has not been removed from the pool of available addresses. So, we need to tell it that any addresses 99 and below are dynamic. Any addresses 100 and above will be reserved static addresses. To do this, edit the field that is labeled "DHCP Ending Address" so that the last number in the string of dot-separated numbers is 99. For example, on my system I am using the 172.16.1.* subnet. Therefore the DHCP ending address is 172.16.1.99. If you were using the 192.168.0.* format, yours could be set to 192.168.0.99. Note, I am arbitrarily using 99 as the cutoff point. You could set yours higher or lower.

    Next, click on the + sign under DHCP reservations. A new sheet will appear asking for a description of the reservation. I am calling mine UbuntuServer. Make sure "Reserve address by" is set to:

    MAC Address
    and click "continue".

    You will now be asked for a MAC address. Type in the address you wrote down in the previous section. Below this, you will be asked to provide a reserved address. Here is where you assign an address to your server. I personally tend to set mine at 100 (i.e. 172.16.1.100) because that is a nice, easy to remember address for the server.

    Click "done" when you are finished, and then click "update". Your AirPort will restart itself.

    After it has restarted, you should probably restart your server to make sure it has taken the correct IP address (remember, you can get that info by typing ifconfig).



    Step 2: Repeat for any other non-computer devices on your network

    For reasons I will get into later, I also like to make sure that any non-computer devices that hook up to the network also use reserved IP addresses. My network printer, for example. It gets a reserved IP address in exactly the same way.


    Again, I'm sorry I can't be more specific here, but the above steps should be generalizable to your specific router (assuming Google is your friend).






    Section #7 - Prepare your hard drives to be combined into a RAID 5 array
    ----------------------------------------------------------------
    ----------------------------------------------------------------

    Notes:

    - Anything in bold blue monotype is a command you type at the command prompt. bold green monotype is used to show prompts from the computer. Pink italics are used to show either the contents of text files or miscellaneous prompts.
    - Much of the information in this section came from: http://www.linuxhomenetworking.com/w..._Software_RAID



    A short primer on Linux and Hard Drives.

    Since you are a bit more technical than the average user, you already understand that you can't just install a hard drive in a computer and start using it directly "out of the box". If you have used Macs or Windows, you know that you typically need to "format" the drive before use. If you are an OSX user accustomed to using the Disk Utility application, then you are also familiar with the idea of partitioning your hard drive before formatting it. Presumably whatever method Windows uses to manage its hard drives is similar.

    Basically, in order to use a hard drive on a computer, you need to have:

    1) The hard drive.
    2) One or more partitions on this hard drive.
    3) A file system on the partitions so that the operating system can store data (aka "formatting").
    Getting drives ready for our Linux server has the same requirements. The only significant differences between the process you are familiar with on the Mac and the one under Linux (Ubuntu) server is that:

    a) It is all command line driven, naturally.
    b) Linux has a peculiar way of "listing" drives (to Mac and Windows users anyway).
    Under Linux, devices connected to your computer are generally listed as what appear to be files in the /dev directory. One "file" per device. While there are many files in this directory, the ones that deal with hard drives generally start with either "hd" (for IDE drives) or "sd" (for SCSI and SATA drives). For example, if you have a machine with four SATA hard drives, you will find the following "files" listed on your system:

    /dev/sda <--- this is the first SATA (or SCSI) drive (a)
    /dev/sdb <--- this is the second SATA (or SCSI) drive (b)
    /dev/sdc <--- this is the third SATA (or SCSI) drive (c)
    /dev/sdd <--- this is the fourth SATA (or SCSI) drive (d)
    The first drive is "sda", the second is "sdb", and the third is "sdc" etc. You get the general idea (and if you somehow have more than 26 drives, the naming simply continues with "sdaa", "sdab", and so on). Note that these are the physical drives you have installed in your machine. The bare "metal" so to speak.

    As a different example, if you had four IDE drives connected to your system, you would find the following "files" listed on your system:

    /dev/hda <--- this is the master drive on the first IDE channel
    /dev/hdb <--- this is the slave drive on the first IDE channel
    /dev/hdc <--- this is the master drive on the second IDE channel
    /dev/hdd <--- this is the slave drive on the second IDE channel
    Now, these are just the bare drives attached to the system. Any partitions on these drives are listed as separate "files" with a similar name but adding a number to indicate the partition number. So, if you had two SATA drives with two partitions on the first drive, and a single partition on the second, you would have the following "files" listed on your system:

    /dev/sda <--- this is the first SATA drive (bare drive)
    /dev/sda1 <--- this is the first partition on this drive
    /dev/sda2 <--- this is the second partition on this drive
    /dev/sdb <--- this is the second SATA drive (bare drive)
    /dev/sdb1 <--- this is the first partition on this drive
    This is most of what you need to know in order to prepare your drives. There is only one additional concept to go over, and this is regarding our RAID management software. Remember, normally accessing a drive requires:

    Hard drive -> Partitions -> File System
    Well, when using the RAID management software we have to add one more layer:

    Hard drive -> Partitions -> Raid Management Software -> File System
    If you are using drives that already have data on them be aware that these drives will be wiped clean!



    Step 1: Figure out which drives will be included in the RAID array:

    Let's get a list of all the drives on our system. The easiest way to do this is to type the following (if you don't understand what sudo does, or don't understand what grep does, you should look them both up on Google. Alternatively, you could just blindly type what I show here - though that is cheating):

    sudo fdisk -l | grep -i "Disk " <--- note: that is a lowercase "L" just after "fdisk", followed by the pipe symbol.
    You will get a list that looks something like this (yours may vary in detail, but generally it will look the same). If you are using drives that already have data on them you may also get some warnings about unsupported partitions. You can just ignore these (or you could use gdisk to clean them up - which is what I had to do when migrating some disks from FreeNAS. The website http://www.rodsbooks.com/gdisk/wipegpt.html was helpful in removing the GPT data):

    Disk /dev/sdb doesn't contain a valid partition table
    Disk /dev/sdc doesn't contain a valid partition table
    Disk /dev/sdd doesn't contain a valid partition table
    Disk /dev/sde doesn't contain a valid partition table
    Disk /dev/sda: 4294 MB, 4292967296 bytes
    Disk identifier: 0x00047a51
    Disk /dev/sdb: 8589 MB, 8589934592 bytes
    Disk identifier: 0x00000000
    Disk /dev/sdc: 8589 MB, 8589934592 bytes
    Disk identifier: 0x00000000
    Disk /dev/sdd: 8589 MB, 8589934592 bytes
    Disk identifier: 0x00000000
    Disk /dev/sde: 8589 MB, 8589934592 bytes
    Disk identifier: 0x00000000
    Let's break this down. The first four lines read:

    Disk /dev/sdb doesn't contain a valid partition table
    Disk /dev/sdc doesn't contain a valid partition table
    Disk /dev/sdd doesn't contain a valid partition table
    Disk /dev/sde doesn't contain a valid partition table
    These lines are fdisk complaining that it can't find a partition on the following four drives:

    /dev/sdb
    /dev/sdc
    /dev/sdd
    /dev/sde
    So, I know I have four "bare" drives in my system: sdb, sdc, sdd, and sde.

    The next two lines read:

    Disk /dev/sda: 4294 MB, 4292967296 bytes
    Disk identifier: 0x00047a51
    This shows that I have a 4GB drive with a valid partition on it called sda. I know it has at least one valid partition on it because fdisk didn't complain about it in the first section.

    The next eight lines read:

    Disk /dev/sdb: 8589 MB, 8589934592 bytes
    Disk identifier: 0x00000000
    Disk /dev/sdc: 8589 MB, 8589934592 bytes
    Disk identifier: 0x00000000
    Disk /dev/sdd: 8589 MB, 8589934592 bytes
    Disk identifier: 0x00000000
    Disk /dev/sde: 8589 MB, 8589934592 bytes
    Disk identifier: 0x00000000
    This shows me the same four bare drives as before (sdb, sdc, sdd, sde) along with their sizes (8 GB each).

    So, in summary, I can deduce that sda holds my operating system, and that sdb-sde are bare drives that I will be adding to my RAID array.

    Again, be aware that depending on the drives you have in your system, you may get slightly different outputs than what I have here (for example, when I built this server using two drives from an older FreeNAS device, I got some warnings regarding GPT Partition Tables). Use what I explain here as a guide to understanding which drives are in your system, not as an absolute example of the outputs you will be seeing.
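    As a sanity check, you can boil that listing down to just the drive names and sizes. Here is a sketch run against a pasted sample of the output above; on the server, the input would come from sudo fdisk -l instead of the sample variable:

```shell
# Reduce "fdisk -l" style output to plain "device size" pairs.
# On the server you would instead pipe the live output:
#   sudo fdisk -l | awk '/^Disk \/dev/ { gsub(/[:,]/, ""); print $2, $3 $4 }'
sample='Disk /dev/sda: 4294 MB, 4292967296 bytes
Disk /dev/sdb: 8589 MB, 8589934592 bytes
Disk /dev/sdc: 8589 MB, 8589934592 bytes'

# gsub strips the colon and comma so the fields print cleanly
out=$(echo "$sample" | awk '/^Disk \/dev/ { gsub(/[:,]/, ""); print $2, $3 $4 }')
echo "$out"
```

    With a list like that in hand, it is much harder to point fdisk at the wrong disk in the next step.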




    Step 2: Partition each of the bare drives in preparation to be added to the array:

    NOTE: Partitioning a drive will delete ALL of the data on that drive.

    Even if your drive has multiple partitions on it, all of your data on that drive will be wiped clean. Be aware of this awesome power and responsibility when working with fdisk. If you point it at the wrong disk, whammo! You could be sorry. That said, in the case of our server the absolute worst we could be doing is accidentally wiping out the freshly installed OS. Just re-start the whole process if that happens (lord knows, the more times you do it, the faster you get - ask me how I know).



    I am going to go through the steps for one drive. Each subsequent drive is handled in exactly the same way.

    Let's begin with drive:

    /dev/sdb
    We will be using the entire drive for the array, meaning just a single partition. To begin, type:

    sudo fdisk /dev/sdb
    You will get a prompt that looks something like this:

    Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
    Building a new DOS disklabel with disk identifier 0x46623703.
    Changes will remain in memory only, until you decide to write them.
    After that, of course, the previous content won't be recoverable.

    Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

    WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
    switch off the mode (command 'c') and change display units to
    sectors (command 'u').

    Command (m for help):
    If you press 'm' for a list of possible commands you get the following list:

    Command action
    a toggle a bootable flag
    b edit bsd disklabel
    c toggle the dos compatibility flag
    d delete a partition
    l list known partition types
    m print this menu
    n add a new partition
    o create a new empty DOS partition table
    p print the partition table
    q quit without saving changes
    s create a new empty Sun disklabel
    t change a partition's system id
    u change display/entry units
    v verify the partition table
    w write table to disk and exit
    x extra functionality (experts only)
    In this case, we wish to create a brand new partition on this disk. So press:

    n <--- create new partition
    We will then be asked:

    Command action
    e extended
    p primary partition (1-4)
    We want to create a primary partition, so press:

    p <--- create primary partition
    We will then be asked which partition to create:

    Partition number (1-4):
    We want to create the first partition, so press:

    1 <--- we only want to create one partition, and it will be the first one.
    We will be asked which cylinder to start with:

    First cylinder (1-1044, default 1):
    This is just asking us where on the disk to start the partition. We want the partition to take up the entire disk, so we will accept the default of 1. Just hit enter.

    We will then be asked which last cylinder to use:

    Last cylinder, +cylinders or +size{K,M,G} (1-1044, default 1044):
    Again, since we are going to use the full drive, accept the default. Just hit enter.

    At this point, we are dropped back to a prompt:

    Command (m for help):
    We have to tell fdisk what kind of partition we want. In our case, since this will be part of an array, we want to make the partition type be "Linux raid auto". To set the partition type, type:

    t <--- change a partition's system id
    fdisk will let you know that it is working on partition 1, and then ask you for a hex code that describes what the partition type will be:

    Selected partition 1
    Hex code (type L to list codes):
    Go ahead and type:

    L <--- list partition codes
    You will be presented with a long list of different partitions, and then prompted again:

    Hex code (type L to list codes):
    You will see that Linux raid auto is listed after the hexadecimal "number" fd. So type:

    fd
    fdisk will let you know that it changed the system type of partition 1 to fd (Linux raid autodetect)

    That is it! To save the settings to the disk, type:

    w



    Step 3: Do it again for each of the other bare drives:

    Repeat this step once for each of your unpartitioned drives (sdc, sdd, sde).
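    If you have several drives to do, the keystroke sequence from Step 2 (n, p, 1, enter, enter, t, fd, w) can be scripted. This is only a sketch, not something to run blindly: triple-check the device list first, because it destroys everything on those drives, and note that fdisk wasn't really designed for scripting (sfdisk is the scriptable variant if you want to do this "properly"):

```shell
# DESTRUCTIVE sketch: replays the Step 2 fdisk keystrokes on each
# remaining drive. Skips any path that isn't actually a block device.
for disk in /dev/sdc /dev/sdd /dev/sde; do
    if [ -b "$disk" ]; then
        # keystrokes: n, p, 1, <enter>, <enter>, t, fd, w
        printf 'n\np\n1\n\n\nt\nfd\nw\n' | sudo fdisk "$disk"
    else
        echo "skipping $disk (no such block device)"
    fi
done
```

    Doing each drive by hand, as described above, is slower but also much harder to get catastrophically wrong.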




    Step 4: Verify that your partitioning worked:

    This step is not absolutely necessary, but it would be good to make sure that your little trip through fdisk was productive. Type this again:

    sudo fdisk -l | grep -i Disk <--- note: that is a lowercase "L" just after "fdisk", followed by the pipe symbol.
    You should get an output that looks a little like this:

    Disk /dev/sda: 4294 MB, 4292967296 bytes
    Disk identifier: 0x00047a51
    Disk /dev/sdb: 8589 MB, 8589934592 bytes
    Disk identifier: 0x4623703
    Disk /dev/sdc: 8589 MB, 8589934592 bytes
    Disk identifier: 0x9623d36f
    Disk /dev/sdd: 8589 MB, 8589934592 bytes
    Disk identifier: 0xc6b0c733
    Disk /dev/sde: 8589 MB, 8589934592 bytes
    Disk identifier: 0xd8b1c4a3
    Notice that we didn't get fdisk barking about how it couldn't find partitions.

    You may also type:

    sudo fdisk -l | grep -i "linux raid autodetect"
    and it should give you one line for each of the drives you just partitioned:

    /dev/sdb1 1 1044 8385898+ fd Linux raid autodetect
    /dev/sdc1 1 1044 8385898+ fd Linux raid autodetect
    /dev/sdd1 1 1044 8385898+ fd Linux raid autodetect
    /dev/sde1 1 1044 8385898+ fd Linux raid autodetect
    Note that this is displaying the actual partitions you just created (sdb1, sdc1, etc.), not the bare drives themselves. A subtle distinction, and not one that really matters for us right now. Still, it is worth noting…

    And so I did.










    Section #8 - Install software raid and configure hard drives
    ----------------------------------------------------------------
    ----------------------------------------------------------------

    Notes:

    - Anything in bold blue monotype is a command you type at the command prompt. bold green monotype is used to show prompts from the computer. Pink italics are used to show either the contents of text files or miscellaneous prompts.
    - Special thanks to the website: http://www.linuxhomenetworking.com/w..._Software_RAID from which I got most of the information in this section.
    - Please be aware that this step may take anywhere from several hours to several days to complete!


    Step 1: Download and install mdadm.

    mdadm is a software RAID solution that seems quite robust and has a lot of support. It will be the layer that sits between your partitioned hard drives and the file system. Using this software will make your four drives appear to be one larger drive (with data redundancy).

    Download and install mdadm:

    sudo apt-get install mdadm
    Unlike most other apt-get installations we will be doing in this setup, mdadm presents you with a big, full-screen text dialog. It is asking about your Postfix configuration (or in non-sysadmin terms: email). This screen lets you set up mail delivery so that mdadm can alert you about your RAID array via email. We will set this up manually later, so select:

    No Configuration


    Step 2: Create the RAID array.

    mdadm takes several options when creating an array. We will use the following command (which will be explained immediately following, so don't just type this in yet before you understand it):

    sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
    This can be broken down as follows:

    -------------------------------------
    mdadm is the command that we are running:

    -------------------------------------
    The --create flag tells mdadm that we want to create a new array.

    -------------------------------------
    The --verbose flag tells mdadm that we want it to show us more information about what it is doing as it does it.

    -------------------------------------
    The /dev/md0 is the name of the device we want it to create. Remember devices? Anything beginning with "md" will be a RAID array on our system. In our case, we will only have a single array. But if we were to add a second array of disks in the future, that could be named: /dev/md1

    -------------------------------------
    The --level flag tells mdadm what type of RAID array to create. In this case, we want to create a RAID level 5 array.

    -------------------------------------
    The --raid-devices flag tells mdadm how many drives will be made a part of this array. In our case, we are adding four drives.

    -------------------------------------
    The last four arguments are the full path to each of the four partitions we want to add to the array. Note, we are pointing to the partitions, and not the bare drives. Notice the number 1 after each drive name.
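    One consequence of RAID level 5 worth knowing before you run the command: the equivalent of one drive's capacity goes to parity, so usable space is (number of drives − 1) × drive size. A quick back-of-the-envelope check using the 8 GB example drives from earlier:

```shell
# RAID 5 usable capacity = (number_of_drives - 1) * drive_size
drives=4
size_gb=8
echo "raw:    $(( drives * size_gb )) GB"
echo "usable: $(( (drives - 1) * size_gb )) GB"
```

    The same arithmetic is why four 1 TB drives show up later as roughly 2.7 TB of usable, formatted space rather than 4 TB.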




    Once you have run the above command, you should get an output that looks something like this:

    mdadm: layout defaults to left-symmetric
    mdadm: chunk size defaults to 512K
    mdadm: layout defaults to left-symmetric
    mdadm: layout defaults to left-symmetric
    mdadm: layout defaults to left-symmetric
    mdadm: layout defaults to left-symmetric
    mdadm: layout defaults to left-symmetric
    mdadm: size set to 976760320K
    mdadm: Defaulting to version 1.2 metadata
    [ 3832.268850] md/raid:md0: raid level 5 active with 3 out of 4 devices, algorithm 2
    mdadm: array /dev/md0 started.
    This indicates that your raid array has been successfully started. But don't celebrate just yet because…



    Step 3: Don't turn off that machine! Wait (and wait) for your raid to be initialized.

    Just because you started your array does not mean it has been fully initialized. This process actually takes quite a while. To check on the status of the array, type:

    cat /proc/mdstat
    You should get a result that looks something like this:

    Personalities : [raid6] [raid5] [raid4]
    md0 : active raid5 sde1[4] sdd1[2] sdc1[1] sdb1[0]
    2930280960 blocks super 1.2 level 5, 512K chunk, algorithm 2 [4/3] [UUU_]
    [>....................]  recovery =  0.7% (7573036/976760320) finish=734.5min speed=21990K/sec

    unused devices: <none>
    Note that line about "recovery". mdadm is actually building your array right now and it takes a LONG time. In the example above, it is less than 1% complete and still has over 12 hours to go (734.5min)! And that is with four 1TB drives. A bigger system will take longer.
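    Rather than re-typing cat /proc/mdstat, you can leave watch -n 30 cat /proc/mdstat running to refresh the display every 30 seconds, or just pull out the percentage. The grep below runs against a pasted sample of the recovery line so you can see what it matches; on the server you would point it at /proc/mdstat itself:

```shell
# Extract just the completion percentage from an mdstat recovery line.
# On the server: grep -o 'recovery = *[0-9.]*%' /proc/mdstat
sample='      [>....................]  recovery =  0.7% (7573036/976760320) finish=734.5min speed=21990K/sec'
echo "$sample" | grep -o 'recovery = *[0-9.]*%'
```

    Either way, the array is perfectly usable while it rebuilds; it is just slower and not yet redundant.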



    Step 4: Get on with your life while you wait for mdadm to finish recovering your raid.

    Go outside. Seriously. You look terrible.



    Step 5: Come back the next day and verify that your raid was correctly initialized.

    Type:

    cat /proc/mdstat
    You should see something like this:

    Personalities : [raid6] [raid5] [raid4]
    md0 : active raid5 sde1[4] sdd1[2] sdc1[1] sdb1[0]
    2930280960 blocks super 1.2 level 5, 512K chunk, algorithm 2 [4/4] [UUUU]

    unused devices: <none>
    Note the lack of any lines that mention "recovery". Your raid is now correctly initialized.



    Step 6: Format your new raid array.

    The next step is to format the array with a file system. In my case (and most commonly) I will be using the ext4 file system. To format this raid partition, you will use the mkfs command.

    Type:

    sudo mkfs.ext4 /dev/md0
    You should get a result that looks something like this:

    mke2fs 1.41.14 (22-Dec-2010)
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=128 blocks, Stripe width=384 blocks
    183148544 inodes, 732570240 blocks
    36628512 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=4292967296
    22357 block groups
    32768 blocks per group, 32768 fragments per group
    8192 inodes per group
    Superblock backups stored on blocks:
    8193, 24577, 40961, 57345, 73729

    Writing inode tables: done
    Creating journal (32768 blocks): done
    Writing superblocks and filesystem accounting information: done

    This filesystem will be automatically checked every 29 mounts or
    180 days, whichever comes first. Use tune2fs -c or -i to override.


    Step 7: configure mdadm by editing the mdadm.conf file.

    You need to tell mdadm all about the raid you just created. Why do you have to do this manually when just a second ago you created an array using the very same computer?

    I dunno. It's just the Linux way I guess.

    Anyway, start by getting the information about the array that you will later have to add to the mdadm.conf file. To do this, type:

    sudo mdadm --detail --scan
    You should get back something that looks like this:

    ARRAY /dev/md0 metadata=1.2 name=ubuntu:0 UUID=77b695c4:32e5dd46:63dd7d16:17696e09
    Note that the name= portion (ubuntu in my case) will most likely be different for you (I think it matches the hostname you gave your machine when you first installed Linux, but I cannot be sure). The UUID will most definitely be different.

    We want to add the above line into the mdadm.conf file. The easiest way to do this is to append the line to the end of the file and then edit it afterwards. Start by typing the following (note the pipe symbol after the word scan… on U.S. keyboards it is on the key right above enter):

    sudo mdadm --detail --scan | sudo tee --append /etc/mdadm/mdadm.conf
    Now, edit the /etc/mdadm/mdadm.conf file by typing:

    sudo nano /etc/mdadm/mdadm.conf
    The line you appended should be at the end of the file. While it is actually ok to just leave it there, the German in me wants it to live in the "correct" location. I just use cut and paste to move it to just under the portion that reads:

    # definitions of existing MD arrays
    It appears that newer versions of mdadm don't like the "name=XXXXXX" or "metadata=NN" portions of this line. Remove both of those so that the line just reads (again, your UUID will be different):

    ARRAY /dev/md0 UUID=77b695c4:32e5dd46:63dd7d16:17696e09
    Save and close the file.

    I had an issue where the array would get renamed to /dev/md127 every time I rebooted the server (my mdadm.conf file clearly says it should be /dev/md0). Apparently this is new behavior for mdadm. If it has difficulty assembling the array upon booting it will somehow rename it to /dev/md127. If this is happening to you, try running the following command. It appears to have fixed it for me (thanks to YesWeCan and this thread: http://ubuntuforums.org/showthread.php?t=1764861):

    sudo update-initramfs -u





    Step 8: Create a mount point for your RAID array.

    Linux "mounts" drives into what appear to be directories. So, unlike a Windows machine in which each drive appears as a separate drive letter (C or D), a mounted drive in Linux appears to be just another directory that lives somewhere (anywhere) in the directory structure.

    For example, say you mounted your array at:

    /home/admiral/mounts/raidarray/
    and you have the following file on your array:

    music/classical/mozart/songs/deafguysong.mp3
    The full path to this file would be:

    /home/admiral/mounts/raidarray/music/classical/mozart/songs/deafguysong.mp3
    |-----------------------------|-------------------------------------------|
    |-This is on your boot drive--|-This portion lives on your RAID array-----|

    It looks like a single, unified path. But once you cd past .../raidarray/ you are actually navigating on your mounted RAID drives.

    In order to mount the drive in a directory, the directory must already exist. For our purposes, we will create a directory in /mnt called nas. This means our raid array will exist in the following location: /mnt/nas. Note, you can select any location you want, but typically mounted drives are placed somewhere in /mnt. To do this, type:

    sudo mkdir /mnt/nas
    Next, we need to edit the /etc/fstab file to make Ubuntu mount your raid at startup (otherwise this will just be an empty directory on your startup disk instead of a mounted raid array).

    sudo nano /etc/fstab
    add the following to the end of the file (of course, replace /mnt/nas with the directory you selected above):

    /dev/md0 /mnt/nas ext4 defaults 1 2
    close and save the file.
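    For reference, that fstab line has six whitespace-separated fields. Here is a small sketch that just labels them, using the exact values from this guide:

```shell
# fstab fields, in order: device, mount point, filesystem type,
# mount options, dump flag (1 = include in dump(8) backups), and
# fsck pass number (1 = root filesystem, 2 = checked after root).
entry='/dev/md0 /mnt/nas ext4 defaults 1 2'
set -- $entry   # word-split the line into positional parameters
echo "device=$1 mount=$2 type=$3 options=$4 dump=$5 fsck_pass=$6"
```

    The "1 2" at the end is why the array gets backed up by dump-style tools and checked by fsck after the root filesystem, rather than being skipped.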

    Normally the /etc/fstab file is read at startup and your raid array will be mounted automatically. But since we don't want to reboot just now, let's mount the raid manually. To do this, we will simply tell Ubuntu to mount everything in the /etc/fstab file. This is accomplished by typing:

    sudo mount -a
    And that's it. Let's verify that everything worked as planned. Use the df command to list all of your drives and their mount points (and how much space is available on each drive). Type the following (the -h option outputs data in a human readable format instead of blocks and bytes):

    df -h
    You should get an output that looks something like this:

    Filesystem Size Used Avail Use% Mounted on
    /dev/sda1 28G 1.1G 25G 5% /
    udev 988M 8.0K 988M 1% /dev
    tmpfs 399M 296K 398M 1% /run
    none 5.0M 0 5.0M 0% /run/lock
    none 996M 0 996M 0% /run/shm
    /dev/md0 2.7T 201M 2.6T 1% /mnt/nas
    The key thing to look for here is /dev/md0. We can see that it is pretty big (2.7 Terabytes) and that it is properly mounted on /mnt/nas.



    Step 9: Random note about mdadm and booting.

    About once every thirty times you boot your system, the filesystem on your array will be checked (this is the ext4 mount-count check mentioned at the end of the mkfs output; tune2fs can adjust the interval). This takes a few moments (about five minutes in my case) and during that time your server may appear to have hung. If you reboot your machine and it is not responsive, this may be what is causing the issue. Just fyi.










    Section #9 - Install and configure ssh
    ----------------------------------------------------------------
    ----------------------------------------------------------------

    Notes:

    - Anything in bold blue monotype is a command you type at the command prompt. bold green monotype is used to show prompts from the computer. Pink italics are used to show either the contents of text files or miscellaneous prompts.



    SSH is going to let us log into the server from another machine. This is going to make life easier for us in innumerable ways. For example, we will be able to have multiple shell windows open at once. We will be able to copy and paste from the internet (or even this tutorial). We won't be huddled over the server keyboard in the corner of the closet behind the cat box.


    Step 1: Install ssh.

    Start by installing the latest version of openssh:

    sudo apt-get install openssh-server



    Step 2: Find an unused port number and write it down (we will make ssh use this port instead of port 22).

    By default, ssh runs on port 22. Apparently, lots of automated attacks directly target this port. By changing your port number, you can cut down on the number of automated attempts to break into your system. While this isn't an actual security step by itself (you are merely hiding the entrance to ssh) it can at least mitigate some of the threat.

    A note on "security through obscurity" vs. "true security": Many people suggest that changing the ssh port is not true security because you are merely obscuring the opening to your ssh process. This is completely true. And if this were the only thing we were doing to secure the server it would hardly be worth doing. But changing the port in addition to using "real" security measures makes sense to me. Even if I am wearing a full body kevlar suit, I am going to hide in the bushes when the knife wielding maniac comes by. If I can cut down on the number of attacks somewhat by hiding IN ADDITION to properly locking down my server, I'm going to do it.

    Even if you decide to leave ssh on port 22, make sure you make some of the other changes listed here (especially the ones labeled IMPORTANT SECURITY MEASURE).

    To find an unused port number, type:

    sudo netstat -anp --tcp --udp | grep -i LISTEN
    You will get an output that looks something like this (you might have to hit ctrl-c to get control of your shell back):

    tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 704/sshd
    tcp6 0 0 :::22 :::* LISTEN 704/sshd
    From this you can deduce that only port 22 is currently open and in use (look for the number after the first colon). That means most any of the other ports in the range 1-65535 are available. That said, ports below 1024 are reserved for well-known services and should not be used. High ports are also best left alone, since Linux hands ports in the upper range (roughly 32,768 and above) out as ephemeral ports for outgoing connections. So I generally stay in the 15,000 - 30,000 range. That's still a lot of ports!

    Pick an unused port number and write it down. Seriously, write it down. If you forget it, you will have to physically log into the server and try to find the port number before you can remotely log in again.
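    To make the check concrete, here is the same grep idea run against a canned copy of the sample output above (22512 is just an example candidate port, not a recommendation):

```shell
# A canned copy of a LISTEN line like the ones shown above, standing in
# for live netstat output.
sample='tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 704/sshd'

# grep for ":<port> " -- a match means the port is taken, no match means it is free.
if echo "$sample" | grep -q ':22512 '; then
    echo "port 22512 is in use"
else
    echo "port 22512 appears free"
fi
```

    On the live server you would pipe the real sudo netstat -anp --tcp --udp output through the same grep instead of the canned sample.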



    Step 3: Configure ssh.

    To configure ssh, edit the sshd_config file:

    sudo nano /etc/ssh/sshd_config

    -------------------------------------
    Edit the line that starts with:

    Port 22
    And change it to read:

    Port 22512 <-- 22512 is just an example. Use the port number you wrote down in Step 2.
    This will change the port that ssh uses when we restart it.


    -------------------------------------
    Edit the line that starts with:

    X11Forwarding
    And make sure it is set to "no" (we are not running X11 on this server)


    -------------------------------------
    Edit the line that starts with:

    AllowTcpForwarding
    And make sure it is set to "no" (it might already be set that way)


    -------------------------------------
    (IMPORTANT SECURITY MEASURE!)
    Edit the line that starts with:

    PermitRootLogin
    And make sure it is set to "no" (Even though in Ubuntu Server the root user is, by default, not enabled)


    -------------------------------------
    (IMPORTANT SECURITY MEASURE!)
    Edit the line that starts with:

    AllowUsers <--- (this line may not exist, or it may be commented out. In that case either add it or uncomment it)
    And make sure it reads:

    AllowUsers admiral <--- substitute your administrative user's login for "admiral"
    This means only a single user (admiral in this example) will ever be able to remotely log in and administer your system.


    -------------------------------------
    Edit the line that starts with: (this line may be commented out. In that case uncomment it.)

    Protocol
    And make sure it reads:

    Protocol 2
    I have seen arguments that ssh1 is just as secure as ssh2 but, as ssh1 is deprecated, it seems best to stick with the newer protocol exclusively.


    -------------------------------------
    Save and close the file.




    Step 4: Restart ssh.

    Restart the ssh daemon by typing:

    sudo service ssh restart
    Now if you were to type:

    sudo netstat -anp --tcp --udp | grep -i LISTEN
    You should see something like this:

    tcp 0 0 0.0.0.0:22512 0.0.0.0:* LISTEN 704/sshd
    tcp6 0 0 :::22512 :::* LISTEN 704/sshd
    Which shows you that your ssh is now running on your new port (22512 in this example)




    Step 5: Get ssh ready to use public/private keys.

    Technically we have not yet turned off password based logins, but public/private keys are much more secure. Here I will show you how to set up the server to let you log in from your Mac securely using keys. Later, we will actually turn off passwords so that you will HAVE to use keys to log in remotely.

    Create an .ssh directory in your home directory. This is where your public key will be stored. If you don't know what a public key is, don't sweat it just now but you should take the time to learn about ssh eventually. Prep the server by typing the following:

    cd ~
    mkdir .ssh <- don't miss the leading "dot" on .ssh
    This just creates a hidden .ssh directory in your administrator's home directory.

    Next type:

    chmod go-rwx .ssh
    The chmod command changes the permissions so that only the administrator (admiral in this example) has permission to read or write to this directory. You may read the command as: chmod group and other minus (subtract) read, write, and execute privileges on .ssh.
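    If you want to see the effect of this, here is the same operation run in a scratch directory under /tmp (purely illustrative; your real .ssh directory is the one that matters):

```shell
# Create a scratch directory with a known starting mode of 755 (rwxr-xr-x).
mkdir -p /tmp/ssh_perm_demo
chmod 755 /tmp/ssh_perm_demo

# Subtract read/write/execute from group and other, exactly as done to ~/.ssh.
chmod go-rwx /tmp/ssh_perm_demo

# The octal mode is now 700: the owner alone can read, write, or enter it.
stat -c '%a' /tmp/ssh_perm_demo   # prints 700
```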




    Step 6: Log out of your server.

    Log out of the server. All further server configuration will be done via ssh from your Mac.

    exit
    You should be returned to a screen that looks something like this (yours may vary depending on what you named your server):

    Ubuntu 11.10 ubuntu tty1

    ubuntu login:









    Section #10 - Get your ssh keys working and log in from the Mac
    ----------------------------------------------------------------
    ----------------------------------------------------------------

    Notes:

    - All of the following steps are performed from the terminal on your Mac! You should be logged into your Mac as your regular user (you do NOT have to be logged in as the administrative user you created on the server - "admiral" in the examples provided here).





    Step 1: Set up your ssh keys.

    Open a terminal window (again, you are doing all of this from your Mac!) and type the following to create your private and public keys:

    ssh-keygen
    The computer will ask you the following questions. Answer them as described:


    -------------------------------------

    Enter file in which to save the key (/Users/bob/.ssh/id_rsa): <--- Note: your prompt will have YOUR Mac login name in it instead of 'bob'
    Just hit enter to accept the default location of ~/.ssh/id_rsa


    -------------------------------------

    Enter passphrase:
    Type in some sentence that you will remember. You can use letters, spaces, punctuation and numbers. The best passphrase has all four and is typically much longer than a normal password. Also beneficial is something that does *not* follow normal grammatical structure. Use something like:

    16 run blast_cast Peters jive with no nuts!?
    This may seem like it will be a pain to type over and over, but we will set it up so that you don't have to re-enter this passphrase all the time. A long, complicated passphrase will not really be much of a hassle and will be much more secure. That said, don't forget it! The above example might be too hard to remember. Pick a favorite line from a movie and alter it. Just make sure it is not a simple sentence as they are much easier to crack. Also, I've been told to never write this passphrase down. That is absurd. You should most definitely write it down. Just keep it someplace far from the computer, safe, and in a format that makes it meaningless to anyone who may find it.


    -------------------------------------

    Enter same passphrase again:
    Type it in again.


    The computer will inform you that it has saved some files in your home directory. It will give you some long gibberish key fingerprint and maybe even some random "image" made of characters. Ignore these for now.

    The next step is to copy your public key (one of the files it just saved) to the Ubuntu server. This will be the "lock" on the administrative account on the server. Nobody will be able to log into this account remotely unless they have the corresponding private key (and passphrase) on their computer. In our case, the private key lives on our Mac. To copy your public key to the server, type something similar to the following (I will show you an example here and then break it down so you will understand how to modify it for your particular situation - it isn't hard):

    scp -P 22512 ~/.ssh/id_rsa.pub admiral@172.16.1.16:.ssh/authorized_keys2 <--- don't type this exactly. See below how to modify it.
    Here is what it all means:

    -------------------------------------
    The scp means "secure copy". It will copy a file from one machine to another using ssh. Literally type "scp".

    -------------------------------------
    The -P 22512 tells ssh which port to use. Remember where you changed the ssh port above? Did you write it down? Use "-P" followed by that number.

    -------------------------------------
    The ~/.ssh/id_rsa.pub is the public key file that you created just now. The output of ssh-keygen which you just ran will have displayed this file name for you. But if you did as I suggested and just hit enter to accept the default, your file name will be identical to what I typed here. In this case you can literally type: "~/.ssh/id_rsa.pub"

    -------------------------------------
    admiral@ is the user you will be logging into the server as. This is the administrative user you first set up when creating the server. Substitute your administrative user name for "admiral" but keep the @ sign.

    -------------------------------------
    The 172.16.1.16: is the IP address of the ubuntu server. Remember when you wrote that down before? Aren't you glad you did? Don't forget the : at the end of this number.

    -------------------------------------
    The .ssh/authorized_keys2 is the name of the file where the public key will be stored on the server. Just use this directly as I have it typed. I.e. literally type ".ssh/authorized_keys2" (don't forget the leading dot!)



    So, put together, it becomes:

    scp -P <your ssh port number> <path to your local public key file> <UbuntuServerAdministrator>@<Ubuntu IP>:.ssh/authorized_keys2
    Once you have typed this, you will be prompted for your password (this should be the last time we log in using a password). This is the password for the administrative user on the Ubuntu server (In the examples I have been providing here, it would be the password for the user 'admiral'). Once you enter your password you will get a line that looks something like this (don't worry if it isn't exactly identical):

    id_rsa.pub 100% 406 0.4KB/s 00:00
    This means your public key is now stored on the ubuntu server for the user admiral (or whatever your administrative user is called).






    Step 2: Log into your ubuntu server from your mac using your private key.

    To manage the server from now on, you will use a command SIMILAR TO the following to log in from a command line:

    ssh -p 22512 -l admiral 172.16.1.16 <--- don't type this exactly. See below how to modify it.
    Here is what it all means:

    -------------------------------------
    The ssh means "secure shell". It will allow you to open a shell on the server remotely so that every command you type into the terminal window on your Mac will actually be executed on the server. Literally type "ssh".

    -------------------------------------
    The -p 22512 tells ssh which port to use. Again, remember changing the ssh port number? This is why I had you write it down, because you will use it over and over. Use "-p" followed by that number. Note: ssh uses a lowercase "-p" to indicate the port number whereas the scp command above used an uppercase "-P". Why is that you might ask? Dunno. Welcome to the vagaries of Linux.

    -------------------------------------
    -l admiral is the user you will be using to log into the server (the -l stands for "login as" I suspect). This is the administrative user you first set up when creating the server. Substitute your administrative user name for "admiral" but keep the "-l" before it.

    -------------------------------------
    The 172.16.1.16 is the IP address of the ubuntu server (note: unlike in the scp command, there is no colon after it here). Again, remember when you wrote that down before? I wouldn't tell you to waste precious ink if I didn't have a good reason.

    Once you type this in, you will be prompted for your passphrase. I am running OSX Lion, and it actually popped up a dialog box asking for my "password" and offering to save it in my keychain. Older versions of OSX may do it differently. I don't know. In any case, you should be prompted for your passphrase. Type it in and you should now be logged into your server via the command line.

    Feel free to open several terminal windows and log into the server multiple times (use the same command each time). Having multiple windows makes managing your server more convenient. These windows can be pointed to different directories at the same time, can be resized, etc. You can have a command-line text editor open in one window, and a directory listing in another. This is much more convenient than working directly on the server itself.

    Note: your private key lives on your Mac that you are typing these commands on. If you try to administer your server from a different computer, you will not be able to log in simply by using your passphrase. Remember, the passphrase only unlocks your key. It is the key (which, as mentioned, only lives on this computer) that unlocks the server. If you think you will be administering the server from another machine, you will have to copy your key to that machine as well. I won't tell you how to do that here, but Google can probably help.

    Also Note: At the moment we have not disabled the ability to log into the system using the password you first set up for the administrative user. If you don't feel comfortable using keys, or if you think you will be managing the server from multiple machines to which you cannot safely copy your key, you may leave passwords enabled. I, however, will be turning this ability off in the next step.
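    As an aside (this is standard OpenSSH client behavior, not part of the original steps): you can record the port, user, and address in a ~/.ssh/config file on your Mac so you don't have to retype them. The host alias and values below are examples to substitute with your own:

```
Host myserver
    HostName 172.16.1.16
    Port 22512
    User admiral
```

    With that in place, ssh myserver is equivalent to the full command above, and scp picks up the same settings when you use myserver as the host name.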



    Step 3: Remove the ability to ssh into the server using only a username and password (i.e. require key based authentication).

    As we have been saying all along, simple password based security is not going to be adequate for paranoid people like us. Edit the ssh config file by typing:

    sudo nano /etc/ssh/sshd_config
    Find the line that starts with

    PasswordAuthentication <--- This line may be commented out. Just uncomment it.
    And make sure it is set to "no"

    Save and close the file.


    Restart ssh by typing the following (you shouldn't be disconnected, but if you are, just re-log in):

    sudo service ssh restart







    Section #11: install msmtp so that your server can send emails
    ----------------------------------------------------------------
    ----------------------------------------------------------------

    Notes:

    - I used these guides a lot when working through this section: http://ubuntuforums.org/showthread.php?t=1185134



    There are a number of different methods that can be used to make your server do email. Since I am only interested in having the machine send me emails when there are problems (i.e. the server won't ever be receiving emails) I have elected to just have it use my regular email service (aka, something like gmail or yahoo email). This way I only have to install the most minimal software and will let the ISP handle all the complicated email stuff.


    Step 1: Install software that allows your server to send mail via your regular email account (msmtp).

    Note: User Bushflyr in the comments below set up a throwaway GMAIL account to receive these messages so that he would not have to have his actual email account password in the clear. That is a pretty good idea and I think I will make similar changes to my system. His comments (and setup guide) are here: http://ubuntuforums.org/showthread.p...0#post11792720 (or just scroll down to page 3 of the comments).

    In order for mdadm (or any other process) to send mail, I am simply going to set them up to use one of my regular email addresses. To do this, we need to install msmtp. This package will allow us to basically send email through an ISP smtp server (aka, use our regular email address).

    Type:

    sudo apt-get install msmtp-mta
    Once installed, we need to configure it. This involves creating a configuration file. The file does not already exist, so you will have to create it by typing:

    sudo nano /etc/msmtprc
    (Again, the file does not already exist so don't be alarmed when nano indicates that this is a new file at the bottom of your window).

    Add the following information to the file:

    defaults
    tls off

    account default
    host smtp.youremailprovider.net
    port 25
    protocol smtp
    auth login
    user yourusername
    password secretpassword
    from you@yourdomain.com
    logfile /var/log/msmtp.log
    Of course, you will have to customize the actual settings to match those that your ISP requires to send an email. The items you will want to customize are:

    host <--- this is the smtp host name (smtp.domain.com) from your ISP
    port <--- this is the port that it receives mail on (usually 25, but my ISP - everyone.net - actually uses 2525)
    auth <--- this is the type of authorization your ISP uses. Mine only worked with "login" which is problematic (discussed below)
    user <--- your user name
    password <--- password (this is problematic as discussed below)
    from <--- which email address is this being sent from. Usually your own email address
    The exact settings will depend on your ISP or email provider requirements. The specifics of which are beyond the scope of this tutorial. I will try to help if anyone has questions in the comments, but generally speaking the information you need should all be available from a google search.

    Note: there are some significant security issues with the sample I show above.

    The first is that (for me) the only type of authorization that worked was to use "login". This type of authorization sends all of your information in plain text, including your password. Unfortunately, I can't seem to figure out how to fix that, and my email provider (everyone.net) has the crappiest help of just about any service provider I have ever dealt with. Eventually I will be switching, but there's no time for that now.

    The second issue is that your email password is just sitting in this file waiting to be displayed to anyone who manages to work their way down to this directory on your server. Now, ideally nobody ever would manage to get that far, but wishful thinking is no replacement for solid security measures. Fortunately, msmtp allows you to use a different password storage system. Unfortunately, I am too tired at the moment to figure it out. So, for the time being, I am just going to leave it as is. I will return to this post and edit it appropriately once I have figured out a better method.
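    One partial mitigation worth doing right away is tightening the permissions on the file so that only root can read it. The demo below runs against a scratch copy in /tmp so you can see the resulting mode; on the server itself you would run sudo chmod 600 /etc/msmtprc:

```shell
# Make a scratch stand-in for /etc/msmtprc and restrict it to owner read/write only.
touch /tmp/msmtprc_demo
chmod 600 /tmp/msmtprc_demo

# Mode 600 means the owner can read and write; group and other get nothing.
stat -c '%a' /tmp/msmtprc_demo   # prints 600
```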








    Section #12 - Write some system maintenance shell scripts
    ----------------------------------------------------------------
    ----------------------------------------------------------------




    There will be a number of scripts that we will want to have running in order to customize our server. Some of these scripts will be called directly by cron. Others will live in a custom scripts directory and be called on by yet other scripts. In this section we will start to get these set up. Later sections will deal with writing the actual scripts that manage the server.



    Step 1: Create a directory to hold a library of scripts.

    There are going to be a number of scripts that we want to run on a periodic basis, and we have several options as how to call these scripts.

    One option is to roll them all into a single script and call this monolithic script from cron. But this is less than ideal because it becomes harder to modify one feature without affecting the others. Also, a single script offers no granularity in how frequently different functions are called.

    A second option is to write individual scripts, each with tightly focused functionality, and then install each of them into cron. And in fact, this is the basis of the technique we will be using.

    The third option is to create directories to hold an arbitrary collection of scripts, and then create a master script (launched by cron) that will simply execute any script it finds in this directory. In this scenario, adding functionality is simply a matter of writing a new script and dropping it into this directory, knowing it will be automatically executed alongside the rest of the scripts. Disabling a script is simply a matter of moving that script back out of the directory.

    We will be using a mixture of option 2 and option 3.

    The first step is to create the directories that will hold the collections of scripts. We will actually be making several directories. We will make one set for scripts that run continuously (every five minutes). We will make a second directory for scripts to be run once an hour. We will make another set for scripts that should be run once a day. And, finally, we will make another set for scripts that should run once a week. Each of these sets will comprise two directories: one for active scripts, and a second where you can move scripts if you wish to deactivate them (this second directory is simply a convenience and is not integral to the functionality of the system). Start by creating the following directories:

    sudo mkdir /usr/local/sbin/continuous.active
    sudo mkdir /usr/local/sbin/continuous.inactive
    sudo mkdir /usr/local/sbin/hourly.active
    sudo mkdir /usr/local/sbin/hourly.inactive
    sudo mkdir /usr/local/sbin/daily.active
    sudo mkdir /usr/local/sbin/daily.inactive
    sudo mkdir /usr/local/sbin/weekly.active
    sudo mkdir /usr/local/sbin/weekly.inactive
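    As a side note, bash brace expansion can create all eight directories in one command. Here the same pattern is demonstrated under /tmp (on the server you would prefix sudo and use /usr/local/sbin instead of the demo path):

```shell
# {a,b}.{c,d} expands to every combination, so this creates all eight directories.
mkdir -p /tmp/sbin_demo/{continuous,hourly,daily,weekly}.{active,inactive}

# Count them to confirm: 4 schedules x 2 states = 8 directories.
ls /tmp/sbin_demo | wc -l   # prints 8
```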


    Step 2: Create a shell script to execute every script in one of these directories.

    We will want to create a set of scripts that, when executed by cron, will automatically invoke each script in one of these directories. We will create one script per directory (i.e. one script for continuous use, one for hourly use, one for daily use, and one for weekly use).



    -------------------------------------
    Start by creating the continuous script. Type:

    sudo nano /usr/local/sbin/continuous.sh
    and paste or type in the following:

    Code:
    #!/bin/bash
    
    ACTIVE_SCRIPTS_DIR=/usr/local/sbin/continuous.active
    
    # Run every executable regular file found at the top level of the directory.
    for module in `find "$ACTIVE_SCRIPTS_DIR" -maxdepth 1 -mindepth 1 -type f`; do
    	if [ -x "$module" ]; then
    		"$module"
    	fi
    done
    Save and close the file.

    Change its permissions to make it executable:

    sudo chmod u+x /usr/local/sbin/continuous.sh
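    Before wiring this into cron, you can convince yourself the loop does what we want by pointing the same logic at a scratch directory (all paths here are throwaway examples):

```shell
# Scratch directory with one executable script and one non-executable file.
mkdir -p /tmp/dispatch_demo/active
printf '#!/bin/bash\necho ran > /tmp/dispatch_demo/log.txt\n' > /tmp/dispatch_demo/active/job.sh
chmod u+x /tmp/dispatch_demo/active/job.sh
touch /tmp/dispatch_demo/active/notes.txt   # not executable, so it should be skipped

# The same loop used in continuous.sh, aimed at the scratch directory.
for module in `find /tmp/dispatch_demo/active -maxdepth 1 -mindepth 1 -type f`; do
    if [ -x "$module" ]; then
        "$module"
    fi
done

# Only job.sh ran, so the log contains a single "ran" line.
cat /tmp/dispatch_demo/log.txt   # prints ran
```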



    -------------------------------------
    Follow that by creating the hourly script (it is almost identical to the continuous script). Type:

    sudo nano /usr/local/sbin/hourly.sh
    and paste or type in the following:

    Code:
    #!/bin/bash
    
    ACTIVE_SCRIPTS_DIR=/usr/local/sbin/hourly.active
    
    # Run every executable regular file found at the top level of the directory.
    for module in `find "$ACTIVE_SCRIPTS_DIR" -maxdepth 1 -mindepth 1 -type f`; do
    	if [ -x "$module" ]; then
    		"$module"
    	fi
    done
    Save and close the file.

    Change its permissions to make it executable:

    sudo chmod u+x /usr/local/sbin/hourly.sh



    -------------------------------------
    Next, let's create the daily script (it is very nearly identical to the other two scripts). Type:

    sudo nano /usr/local/sbin/daily.sh
    and paste or type in the following:

    Code:
    #!/bin/bash
    
    ACTIVE_SCRIPTS_DIR=/usr/local/sbin/daily.active
    
    # Run every executable regular file found at the top level of the directory.
    for module in `find "$ACTIVE_SCRIPTS_DIR" -maxdepth 1 -mindepth 1 -type f`; do
    	if [ -x "$module" ]; then
    		"$module"
    	fi
    done
    Save and close the file.

    Change its permissions to make it executable:

    sudo chmod u+x /usr/local/sbin/daily.sh



    -------------------------------------
    Finally, let's create the weekly script (it is very nearly identical to the other three scripts). Type:

    sudo nano /usr/local/sbin/weekly.sh
    and paste or type in the following:

    Code:
    #!/bin/bash
    
    ACTIVE_SCRIPTS_DIR=/usr/local/sbin/weekly.active
    
    # Run every executable regular file found at the top level of the directory.
    for module in `find "$ACTIVE_SCRIPTS_DIR" -maxdepth 1 -mindepth 1 -type f`; do
    	if [ -x "$module" ]; then
    		"$module"
    	fi
    done
    Save and close the file.

    Change its permissions to make it executable:

    sudo chmod u+x /usr/local/sbin/weekly.sh



    Step 3: Insert these scripts into cron.

    (Note: I used this website when putting together this section: http://clickmojo.com/code/cron-tutorial.html)

    Obviously, these scripts will do nothing unless they are called on a repeating basis. That's where cron comes in. cron is a daemon that runs other scripts on an arbitrary schedule (as long as that schedule is no more granular than one minute). To insert items into cron, we need to edit the crontab file (for the root user). To do this, type:

    sudo crontab -e
    This will bring up an editor that will allow you to edit the contents of the crontab file. You may be asked which editor you wish to use. I just stick to nano as it is the easiest.

    Add the following lines:
    Code:
    MAILTO=yourusername@yourdomain.com
    */5 * * * * /usr/local/sbin/continuous.sh
    6 */1 * * * /usr/local/sbin/hourly.sh
    16 02 * * * /usr/local/sbin/daily.sh
    26 03 * * sun /usr/local/sbin/weekly.sh
    Obviously you will have to change the MAILTO line to point to your own email address.

    Save and close the file.

    Here is a quick breakdown of the different sections, but I would recommend you check the link I supplied (http://clickmojo.com/code/cron-tutorial.html) for more information.

    -------------------------------------
    MAILTO=yourusername@yourdomain.com sets up an email address to which cron will send messages should any issues arise.

    -------------------------------------
    */5 * * * * /usr/local/sbin/continuous.sh this runs the continuous.sh script every five minutes.

    -------------------------------------
    6 */1 * * * /usr/local/sbin/hourly.sh this runs the hourly.sh script six minutes after every hour.

    -------------------------------------
    16 02 * * * /usr/local/sbin/daily.sh this runs the daily script every day at 2:16 am.

    -------------------------------------
    26 03 * * sun /usr/local/sbin/weekly.sh this runs the weekly script every sunday at 3:26 am.
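    For quick reference, the five schedule fields are read left to right as minute, hour, day of month, month, and day of week (the entry shown is the continuous one from above):

```
# ┌───────── minute (0-59)
# │ ┌─────── hour (0-23)
# │ │ ┌───── day of month (1-31)
# │ │ │ ┌─── month (1-12)
# │ │ │ │ ┌─ day of week (0-7 or a name like sun; 0 and 7 are both Sunday)
# │ │ │ │ │
*/5 * * * *   /usr/local/sbin/continuous.sh
```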



    Step 4: Test your scripts.

    I am a big fan of testing all of my automated scripts. It is far too easy to make a typo, and then vital functionality that you depend on may simply fail silently.

    To test our master scripts (and cron), we are going to create a simple script inside each of the script collection directories. We will wait to see that they are actually called before removing them again and being satisfied that everything is working.

    Note: whenever creating new scripts that you intend to throw into one of these script collection directories, ALWAYS create them in your own home directory first, and test them there! If you create a script directly in one of these directories it is possible (especially with the continuous directory) that cron will come by and execute it while you are in the middle of editing it. That could lead to some disastrous results.



    -------------------------------------
    Test the continuous scripts:

    Create a new script in your home directory called testContinuous.sh:

    sudo nano ~/testContinuous.sh
    and paste or type the following into it:
    Code:
    #!/bin/bash
    
    echo "test of the continuous scripts" | mail -s "test of the continuous scripts" youremail@yourdomain.com
    Note: you will want to change the email to actually point to your correct email address.

    Save and close the file.

    change its permissions so that it is executable:

    sudo chmod u+x ~/testContinuous.sh
    and test it by typing:

    sudo ~/testContinuous.sh
    You should receive an email with the subject: test of the continuous scripts

    If that worked, move the script into your /usr/local/sbin/continuous.active directory:

    sudo mv ~/testContinuous.sh /usr/local/sbin/continuous.active/testContinuous.sh



    -------------------------------------
    Test the hourly scripts:

    Create a new script in your home directory called testHourly.sh:

    sudo nano ~/testHourly.sh
    and paste or type the following into it:
    Code:
    #!/bin/bash
    
    echo "test of the hourly scripts" | mail -s "test of the hourly scripts" youremail@yourdomain.com
    Note: you will want to change the email to actually point to your correct email address.

    Save and close the file.

    change its permissions so that it is executable:

    sudo chmod u+x ~/testHourly.sh
    and test it by typing:

    sudo ~/testHourly.sh
    You should receive an email with the subject: test of the hourly scripts

    If that worked, move the script into your /usr/local/sbin/hourly.active directory:

    sudo mv ~/testHourly.sh /usr/local/sbin/hourly.active/testHourly.sh



    -------------------------------------
    Test the daily scripts:

    Create a new script in your home directory called testDaily.sh:

    sudo nano ~/testDaily.sh
    and paste or type the following into it:
    Code:
    #!/bin/bash
    
    echo "test of the daily scripts" | mail -s "test of the daily scripts" youremail@yourdomain.com
    Note: you will want to change the email to actually point to your correct email address.

    Save and close the file.

    change its permissions so that it is executable:

    sudo chmod u+x ~/testDaily.sh
    and test it by typing:

    sudo ~/testDaily.sh
    You should receive an email with the subject: test of the daily scripts

    If that worked, move the script into your /usr/local/sbin/daily.active directory:

    sudo mv ~/testDaily.sh /usr/local/sbin/daily.active/testDaily.sh



    -------------------------------------
    Test the weekly scripts:

    Create a new script in your home directory called testWeekly.sh:

    sudo nano ~/testWeekly.sh
    and paste or type the following into it:
    Code:
    #!/bin/bash
    
    echo "test of the weekly scripts" | mail -s "test of the weekly scripts" youremail@yourdomain.com
    Note: you will want to change the email to actually point to your correct email address.

    Save and close the file.

    change its permissions so that it is executable:

    sudo chmod u+x ~/testWeekly.sh
    and test it by typing:

    sudo ~/testWeekly.sh
    You should receive an email with the subject: test of the weekly scripts

    If that worked, move the script into your /usr/local/sbin/weekly.active directory:

    sudo mv ~/testWeekly.sh /usr/local/sbin/weekly.active/testWeekly.sh





    Leave each of these scripts in their respective directories until you get the email message from each one. This will tell you that your cron job and its associated scripts are working. You should get your first email within 5 minutes. The next one within an hour or so, the third one within a day, and the last one within a week. Of course, once you have gotten each email, you should delete the appropriate script from the .active directory. Or, as an example of how you will more commonly be manipulating these scripts, move it into the associated .inactive directory. There it will sit ready to be used again if needed. To do this for the testContinuous.sh script, type the following:

    sudo mv /usr/local/sbin/continuous.active/testContinuous.sh /usr/local/sbin/continuous.inactive/testContinuous.sh
    Moving the other three scripts is done in the same way.
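    If you later find yourself activating or deactivating several scripts at once, a small shell loop saves some typing. The sketch below runs in a throwaway directory created with mktemp so you can try it safely; for the real thing you would substitute /usr/local/sbin and add sudo:

```shell
# Practice run in a scratch directory (the real scripts live under /usr/local/sbin)
base=$(mktemp -d)
mkdir -p "$base/continuous.active" "$base/continuous.inactive"
touch "$base/continuous.active/testContinuous.sh"

# Move every test script from .active to .inactive in one go
for f in "$base"/continuous.active/test*.sh; do
    mv "$f" "$base/continuous.inactive/"
done

ls "$base/continuous.inactive"
```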









    Section #13 - Install system health monitoring tools
    ----------------------------------------------------------------
    ----------------------------------------------------------------

    Notes:

    - Anything in bold blue monotype is a command you type at the command prompt. bold green monotype is used to show prompts from the computer. Pink italics are used to show either the contents of text files or miscellaneous prompts.



    This puppy is going to be sitting in the corner doing its thing and we will forget to check on it. It's just a given. To that end, we will want to install a number of system monitoring tools that will notify us of any unhealthy situations. Specifically, we will want to monitor the health of the individual hard drives by checking their S.M.A.R.T. status (google it if you don't know what that is), the temperature of the drives, the temperature of the processor, and the health of the mdadm RAID array.


    BIG NOTE: As it happens, my particular setup (intel d510mo motherboard with Rosewill RC-217 4-port SATA card and 4 Western Digital 1TB drives) gets flaky when using S.M.A.R.T. Specifically, it would lock up the computer occasionally when accessing S.M.A.R.T. data either via a self test or from hddtemp. It would also cause my system to freeze on boot. It is inconsistent and seems to be related to when I move large amounts of data. I am still trying to figure out what is causing the issue, but for now I have turned off any S.M.A.R.T. monitoring (which seems to fix the issue). I still think you should go through the following steps and only disable S.M.A.R.T. if your system becomes unstable.


    Step 1: Install smartmontools to monitor your hard drives.

    The smartmontools will allow us to check the S.M.A.R.T. status periodically as well as keep track of the drive temperatures. To install these tools, type:

    sudo apt-get install smartmontools


    Step 2: Use smartctl to interactively check the health of your drives.

    After smartmontools is installed, we should do a few manual tests to make sure that everything is ok with our drives before we bother setting up the automatic monitoring. Note, nearly all of the information in this section is lifted from the following website: http://blog.shadypixel.com/monitorin...smartmontools/. I highly recommend reading through that page as it gives an excellent primer on using smartmontools for new users. I am summarizing what that page has to offer here:

    First, let's make sure that our drives have S.M.A.R.T. monitoring turned on (Note, I am using /dev/sdb here as an example. You will want to run the following tools once per drive in your system).

    sudo smartctl -s on -o on -S on /dev/sdb

    To paraphrase the shadypixel blog, here is what it all means:

    -------------------------------------
    -s on turns on S.M.A.R.T. support or does nothing if it’s already enabled.

    -------------------------------------
    -o on turns on offline data collection. Offline data collection periodically updates certain S.M.A.R.T. attributes while the system is on. Theoretically this could have a performance impact. However, from the smartctl man page: "Normally, the disk will suspend offline testing while disk accesses are taking place, and then automatically resume it when the disk would otherwise be idle, so in practice it has little effect."

    -------------------------------------
    -S on enables “autosave of device vendor-specific Attributes”.

    -------------------------------------
    /dev/sdb is the device name that you want to enable S.M.A.R.T for.


    Upon running this command, you should get a response along these lines:

    smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.0.0-12-server] (local build)
    Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

    === START OF ENABLE/DISABLE COMMANDS SECTION ===
    SMART Enabled.
    SMART Attribute Autosave Enabled.
    SMART Automatic Offline Testing Enabled every four hours.

    Next, we will run a short test of the drive. The short test is less comprehensive than a full blown test, but it has the advantage of only taking about two minutes to run. To run this test, type:

    sudo smartctl -t short /dev/sdb
    You should get something like this in response:

    smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.0.0-12-server] (local build)
    Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

    === START OF OFFLINE IMMEDIATE AND SELF-TEST SECTION ===
    Sending command: "Execute SMART Short self-test routine immediately in off-line mode".
    Drive command "Execute SMART Short self-test routine immediately in off-line mode" successful.
    Testing has begun.
    Please wait 2 minutes for test to complete.
    Test will complete after Wed Nov 9 23:03:06 2011
    The key thing to take away from this output is that the test will take about two minutes to run. It also gives you a time after which you may come back to see what the results are. Note that the test is running in the background and does not block you from using either the ssh session or the computer in general.

    To see the results of the test, wait two minutes and then type:

    sudo smartctl -l selftest /dev/sdb
    Of course, if you run it too soon, you will not get the data you are interested in. You must wait till the test is done to get the appropriate information. Once the test is complete, and you run the above command, you should get an output something like this:

    smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.0.0-12-server] (local build)
    Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

    === START OF READ SMART DATA SECTION ===
    SMART Self-test log structure revision number 1
    Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
    # 1 Short offline Completed without error 00% 41 -
    As should be obvious, the key item in this output is the text that says Completed without error. If you get a different result, then it is time to start googling and, perhaps, exchange a drive.



    Step 3: Repeat the above steps for each of your drives.

    Repeat the above steps for each drive in your array.



    Step 4: Configure smartd to run these tests automatically.

    The above steps we ran manually, but ideally the system will automatically run these tests and notify us if there is a problem. To do this we will rely on a tool called smartd. To configure smartd, we need to edit the /etc/smartd.conf file. Do this by typing:

    sudo pico /etc/smartd.conf
    The very first step is to prevent smartd from automatically scanning for devices since we will be explicitly telling it which drives to scan. To do this, find the line that begins with:

    DEVICESCAN (there will be some text following it)
    and prepend it with a pound sign so that it reads:

    #DEVICESCAN (followed by its text)

    Next we will explicitly add the drives we want smartd to monitor. To do this, we will add a single line per device that is in the following pattern. Note that in this first incarnation we are setting smartd up in a test mode until we can verify that it is properly working. Once we are satisfied that it is properly monitoring the drives, we will return to this config file and edit it so that it will no longer be in test mode. Start by adding one line per device that adheres to the following pattern:

    Code:
    	/dev/sdb -a -d sat -o on -S on -s (S/../.././02|L/../../6/03) -m myemail@myemailprovider.com -M test #exec /usr/share/smartmontools/smartd-runner
    Again, to paraphrase the excellent shadypixel blog:

    -------------------------------------
    /dev/sdb: Replace this with the device you wish to monitor. Note that this is the raw drive, not a partition on the drive (i.e. there is no number at the end of the device name).

    -------------------------------------
    -a: This enables some common options. You almost certainly want to use it. (type man smartd for more information on what, exactly, it does)

    -------------------------------------
    -d sat: On my system, smartctl correctly guesses that I have a serial ata drive. smartd on the other hand does not. If you had to add a “-d TYPE” parameter to the smartctl commands, you’ll almost certainly have to do the same here. If you didn’t, try leaving it out initially. You can add it later if smartd fails to start.

    -------------------------------------
    -o on, -S on: These have the same meaning as the smartctl equivalents

    -------------------------------------
    -s (S/../.././02|L/../../6/03): This schedules the short and long self-tests. In this example, the short self-test will run daily at 2:00 A.M. The long test will run on Saturdays at 3:00 A.M. For more information, type man smartd.conf. Be aware that the system will need to actually be powered up and awake at those times in order for the tests to be run (we will be configuring the server to use power management later).

    -------------------------------------
    -m myemail@myemailprovider.com: This is the email address to send error reports to.

    -------------------------------------
    -M test: Normally this setting is not used. Its only purpose, as you might guess, is to test the smartd reporting system. -M test will force smartd to send a message to the email address you supplied whenever it is started. I have added this setting here so that we can satisfy ourselves that smartd is, in fact, able to send us a report. Once we are sure that is working, we will return to this config file and remove that setting.

    -------------------------------------
    -M exec /usr/share/smartmontools/smartd-runner: smartd-runner will execute each script in /etc/smartmontools/run.d/. These scripts are then responsible for emailing the user specified by the “-m” option. It just offers a bit of flexibility that allows custom actions based on different S.M.A.R.T. events.


    Save and close the file.



    To get smartd running, you will need to edit the /etc/default/smartmontools file. To do this, type:

    sudo pico /etc/default/smartmontools
    find the line that says:

    #start_smartd=yes
    and uncomment it so that it reads:

    start_smartd=yes
    Save and close this file.



    Step 5: Restart smartd and make sure it properly sends an email.

    Restart the smartd daemon by typing:

    sudo /etc/init.d/smartmontools restart
    Now check your email. You should hopefully have gotten a message whose subject is similar to the following:

    SMART error (EmailTest) detected on host: ubuntu
    Note, your host name is most likely something different. Also, this is not an actual error message, but instead just a test (as you can see by looking at the body of the message). Finally, you should have gotten one email message per line that you added into the smartd.conf file.

    Assuming you properly received the emails, we will want to edit the smartd.conf file one more time to remove the test setting. To do this, type:

    sudo pico /etc/smartd.conf
    Edit each of the lines you added in step 4 and remove the -M test # (Note: make sure you remove the # sign that comes right after the word test!). Save and close the file.
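    After that edit, each monitoring line should read like this (shown here with the same example device and placeholder email address from step 4):

```
/dev/sdb -a -d sat -o on -S on -s (S/../.././02|L/../../6/03) -m myemail@myemailprovider.com -M exec /usr/share/smartmontools/smartd-runner
```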



    Step 6: Install hddtemp to monitor the temperature of your hard drives.

    (I referred to this website when working on this section: http://www.cyberciti.biz/tips/howto-...mperature.html)

    S.M.A.R.T. also allows you to monitor the temperature of your hard drives. But we will use a package called hddtemp to actually manage this feature (it still relies on the S.M.A.R.T. monitoring hardware on your drives).

    To install the package, type:

    sudo apt-get install hddtemp
    Upon installation, hddtemp will ask whether it should be run automatically as a daemon. By default it suggests that you not do this and in fact we won't. Select "no".

    hddtemp has a very simple usage. Simply type:

    sudo hddtemp /dev/sdb
    you should get an output that looks something like this:

    /dev/sdb: WDC WD10EARS-00MVWB0: 22°C


    Step 7: Create a shell script to monitor hard drive temps and shut down the server if necessary.

    Being able to interactively run hddtemp is fantastic, but we need it to actually monitor the drive temperatures on an ongoing basis and take action should the temperatures rise too high. Running hddtemp in daemon mode is not the solution, however. As far as I can tell, it merely listens for requests on TCP port 7634 and reports back the temperature when queried from a remote machine. This is less than ideal for our situation for a number of reasons. First, it requires that a second machine regularly check the temperature (or that we do it manually). Second, it requires that we punch yet another hole through the firewall (which would be acceptable if we gained something significant in return, but that isn't the case here).

    Instead, we will create a shell script to monitor the drive temperatures and shut down the server if they go above 50 degrees C. This shell script will be dropped into the hourly.active directory we created earlier and will therefore run every hour.

    We will start by creating this script in our administrative user's home directory. We never want to create a script directly in the hourly.active directory because it should be fully tested before being placed there. To create this script, type the following:

    sudo pico ~/hddtemp_monitor.sh
    and then paste or type in the following:

    Code:
    #!/bin/bash
    # Drives to monitor - adjust this list for your system
    HDDS="/dev/sdb /dev/sdc /dev/sdd /dev/sde"
    HDT=/usr/sbin/hddtemp
    LOG=/usr/bin/logger
    DOWN=/sbin/shutdown
    ALERT_LEVEL=50
    for disk in $HDDS
    do
      if [ -b $disk ]; then
        HDTEMP=$($HDT $disk | awk '{ print $4}' | awk -F '°' '{ print $1}')
        # only compare if hddtemp actually returned a number
        if [[ $HDTEMP =~ ^[0-9]+$ ]] && [ $HDTEMP -ge $ALERT_LEVEL ]; then
           echo "The server ubuntu is shutting down due to excessive hard drive temp. Drive: $disk has reached a temperature of $HDTEMP°C and has crossed its limit of $ALERT_LEVEL°C" | mail -s "ALERT! The server ubuntu is shutting down due to excessive hard drive temperature" youremail@yourdomain.com
           $LOG "System going down as hard disk : $disk temperature $HDTEMP°C crossed its limit"
           sync;sync
           $DOWN -h 0
        fi
      fi
    done
    Note: you will want to change the message in the script to one that makes sense for you (mainly change the name of the server). Also, make sure you change the email to your email address. If you want to use a temperature threshold other than 50 degrees C, you should also alter the line that reads:

    ALERT_LEVEL=50
    Save and close the file when you are done.
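    In case the awk pipeline in that script looks opaque: it grabs the fourth whitespace-separated field of the hddtemp output and then strips everything from the degree sign onward. You can convince yourself with the sample output line shown earlier (this snippet is just a demonstration, not part of any tutorial file):

```shell
# The sample hddtemp output line from earlier in this section
line='/dev/sdb: WDC WD10EARS-00MVWB0: 22°C'

# Field 4 is "22°C"; splitting on the degree sign leaves the bare number
temp=$(echo "$line" | awk '{ print $4 }' | awk -F '°' '{ print $1 }')
echo "$temp"
# prints 22
```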



    Change the permissions on the file so that it becomes an executable script. Do this by typing:

    sudo chmod ugo+rwx ~/hddtemp_monitor.sh

    Again, because I am a big believer in testing our scripts, we should verify that this one actually works. To do this, we will temporarily set the temperature threshold to 1 degree C. Edit the file by typing:

    sudo pico ~/hddtemp_monitor.sh
    and change the line that reads:

    ALERT_LEVEL=50
    to something much lower (say, 1 degree). For example:

    ALERT_LEVEL=1
    and save and close the file.

    Note: testing this script will shut down your server immediately! It will boot you off of your ssh session. It will stop any running processes you may have. So don't test this if there is any reason why you shouldn't shut down the computer right now. That said, if you have been following this tutorial, there really isn't anything that should be running that would prevent you from shutting the machine down.

    To test the script, type:

    sudo ~/hddtemp_monitor.sh
    with any luck, your server will have shut down almost immediately and you will have received an alert email message. If so, go ahead and restart the server, and re-log in via ssh.

    Now, change the ALERT_LEVEL back up to 50 degrees C (or whatever value you want to use). To do that, edit the file by typing:

    sudo pico ~/hddtemp_monitor.sh
    and change the line that reads:

    ALERT_LEVEL=1
    back to 50 degrees:

    ALERT_LEVEL=50
    and save and close the file.

    Now move it into the hourly.active directory by typing:

    sudo mv ~/hddtemp_monitor.sh /usr/local/sbin/hourly.active/hddtemp_monitor.sh




    Step 8: Get mdadm to send emails when it encounters errors.

    (I used the following website when working through this section: http://blog.agdunn.net/?p=383)

    S.M.A.R.T. monitoring checks the physical health of your drives, as does hddtemp. But we also want to keep tabs on the health of the actual RAID array. Fortunately, this is an exceptionally easy thing to set up. We simply need to tell mdadm where to send its emails by editing the mdadm.conf file. To do this, type:

    sudo pico /etc/mdadm/mdadm.conf
    Find the line that starts with:

    MAILADDR
    and make sure it points to your email address (so it looks something like this):

    MAILADDR youremail@yourdomain.com
    Save and close the file.

    Now let's run a quick test to make sure it is all working. Type:

    sudo mdadm --monitor --scan --test
    This will cause mdadm to send you a test email message. Note, you will not be automatically returned to the command line. Once you have verified that you have received the test email, you can press ctrl-c to stop mdadm from running in monitor mode.
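    If you ever want to build your own scripted health check on top of this, the array state can be pulled out of the output of mdadm's --detail mode. The snippet below only demonstrates the text parsing on a sample line (the array name /dev/md0 and the "clean" state are hypothetical; run sudo mdadm --detail /dev/md0 on your system to see your real output):

```shell
# A sample "State" line as printed by `mdadm --detail` (hypothetical values)
state_line='          State : clean'

# Split on " : " to trim the label, leaving just the state word
state=$(echo "$state_line" | awk -F ' : ' '{ print $2 }')
echo "$state"
# prints clean
```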




    Step 9: Install lm-sensors to monitor the temperature of your system.

    (I used this website as reference for this section: https://help.ubuntu.com/community/SensorInstallHowto)

    We also want to monitor the temperature of our processor (and any other sensors our main board supports). To do this, we need to install lm-sensors to track that information. To install, type:

    sudo apt-get install lm-sensors
    Once it is installed, we want to run a utility called sensors-detect. This will attempt to detect all the different sensors on your motherboard. To run this app, type:

    sudo sensors-detect
    and answer "yes" to ALL of the questions (there are an awful lot of them!). You especially want to answer YES when it asks:

    Do you want to add these lines automatically to /etc/modules? (yes/NO)
    as this will edit the /etc/modules file and append any sensors it found.

    Next we need to insert the new modules into the kernel. Do this by typing:

    sudo /etc/init.d/module-init-tools restart
    Now, when you type the command:

    sudo sensors
    you should get back a detailed set of voltages and temperatures from your system. Note that some figures may be completely wrong. This, apparently, is normal and you may need to edit your sensors.conf file to either adjust or ignore some of these bogus readings. I would help you out with that, but as it happens my board (intel d510mo) refuses to report back ANY real information. Apparently this is a known issue and updating the BIOS should have fixed it but I am still fighting with that.

    Just note that having lm-sensors installed is not enough to monitor your system. You need to somehow check the sensors from time to time and act upon the information received.

    We will again be using a script to keep track of the output of the sensors command. We will start by creating this script in our administrative user's home directory. We never want to create a script directly in the hourly.active directory because it should be fully tested before being placed there. To create this script, type the following:

    sudo pico ~/sensors_monitor.sh
    and then paste or type in the following (note, you may have to heavily edit this script depending on what kind of data you get back from the sensors command. I can try to help you with this if you have issues.):

    Code:
    #!/bin/bash
    LOG=/usr/bin/logger
    DOWN=/sbin/shutdown
    ALERT_LEVEL=80
    SENSORSCMD=sensors
    
    CORE0TEMP=$($SENSORSCMD | grep -i "Core 0" | awk '{print $3}' | awk -F '°' '{print $1}' | awk -F '+' '{print $2}' | awk -F '.' '{print $1}')
    CORE1TEMP=$($SENSORSCMD | grep -i "Core 1" | awk '{print $3}' | awk -F '°' '{print $1}' | awk -F '+' '{print $2}' | awk -F '.' '{print $1}')
    
    if [[ $CORE0TEMP =~ ^[0-9]+$ ]]; then
       if [ $CORE0TEMP -ge $ALERT_LEVEL ]; then
           echo "The server ubuntu is shutting down due to excessive Core 0 temp (it has reached a temperature of $CORE0TEMP°C and has crossed its limit of $ALERT_LEVEL°C)" | mail -s "ALERT! The server ubuntu is shutting down due to excessive CPU temperature" youremail@yourdomain.com
           $LOG "System going down as Core 0 has crossed its limit (temp=$CORE0TEMP°C, limit=$ALERT_LEVEL°C)"
           sync;sync
           $DOWN -h 0
       fi
    fi
    
    if [[ $CORE1TEMP =~ ^[0-9]+$ ]]; then
       if [ $CORE1TEMP -ge $ALERT_LEVEL ]; then
          echo "The server ubuntu is shutting down due to excessive Core 1 temp (it has reached a temperature of $CORE1TEMP°C and has crossed its limit of $ALERT_LEVEL°C)" | mail -s "ALERT! The server ubuntu is shutting down due to excessive CPU temperature" youremail@yourdomain.com
          $LOG "System going down as Core 1 has crossed its limit (temp=$CORE1TEMP°C, limit=$ALERT_LEVEL°C)"
          sync;sync
          $DOWN -h 0
       fi
    fi
    Note: you will want to change the message in the script to one that makes sense for you (mainly change the name of the server). Also, make sure you change the email to your email address. If you want to use a temperature threshold other than 80 degrees C, you should also alter the line that reads:

    ALERT_LEVEL=80
    Save and close the file when you are done.
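    The awk chain in this script works the same way as the one in the hddtemp monitor, with two extra steps to strip the leading plus sign and the decimal part. Here it is applied to a sample sensors line (the exact format of your sensors output may differ, so treat this purely as a demonstration):

```shell
# A typical core-temperature line from the `sensors` command (sample values)
line='Core 0:       +35.0°C  (high = +82.0°C, crit = +100.0°C)'

# Field 3 is "+35.0°C"; strip the degree sign, the plus, and the decimals
temp=$(echo "$line" | awk '{ print $3 }' | awk -F '°' '{ print $1 }' | awk -F '+' '{ print $2 }' | awk -F '.' '{ print $1 }')
echo "$temp"
# prints 35
```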

    Change the permissions on the file so that it becomes an executable script. Do this by typing:

    sudo chmod ugo+rwx ~/sensors_monitor.sh



    Again, because I am a big believer in testing our scripts, we should verify that this one actually works. To do this, we will temporarily set the temperature threshold to 1 degree C. Edit the file by typing:

    sudo pico ~/sensors_monitor.sh
    and change the line that reads:

    ALERT_LEVEL=80
    to something much lower (say, 1 degree). For example:

    ALERT_LEVEL=1
    and save and close the file.

    Note: testing this script will shut down your server immediately! It will boot you off of your ssh session. It will stop any running processes you may have. So don't test this if there is any reason why you shouldn't shut down the computer right now. That said, if you have been following this tutorial, there really isn't anything that should be running that would prevent you from shutting the machine down.

    To test the script, type:

    sudo ~/sensors_monitor.sh
    with any luck, your server will have shut down almost immediately and you will have received an alert email message. If so, go ahead and restart the server, and re-log in via ssh.

    Now, change the ALERT_LEVEL back up to 80 degrees C (or whatever value you want to use). To do that, edit the file by typing:

    sudo pico ~/sensors_monitor.sh
    and change the line that reads:

    ALERT_LEVEL=1
    back to 80 degrees:

    ALERT_LEVEL=80
    and save and close the file.

    Now move it into the hourly.active directory by typing (this will make it run once every hour):

    sudo mv ~/sensors_monitor.sh /usr/local/sbin/hourly.active/sensors_monitor.sh






    Section #14 - Power management
    ----------------------------------------------------------------
    ----------------------------------------------------------------

    Notes:

    - Anything in bold blue monotype is a command you type at the command prompt. bold green monotype is used to show prompts from the computer. Pink italics are used to show either the contents of text files or miscellaneous prompts.


    This machine is going to be living a very lazy life. 90% of the time it will have absolutely nothing to do. So why have it up and drives spinning this whole time? Answer: we won't. In this section we are going to set it up so that the machine spends most of its time trying to go to sleep. It will only stay awake if:

    1) there are other machines on the network that are awake
    2) it is currently performing an important task (e.g. backing itself up)
    3) it is a scheduled "awake" time (e.g. during system maintenance)


    Step 1: Install pm-utils.

    Most of our power management will be handled by a package called pm-utils. To install it, type:

    sudo apt-get install pm-utils
    Test to make sure the computer has the ability to go to sleep. But before you do, here is a tiny tutorial on the differences between suspend, hibernate, and hybrid sleep:

    Suspend:
    Suspend shuts down as much of the computer as it can, but keeps the RAM powered so as not to lose your system's state. In this state, your server is still drawing some power but much less than if it were fully awake. If there were a power failure, however, the contents of your RAM will be lost and the next time the computer is started there may be the normal issues of a system that was not properly shut down. The advantage of running in this "lighter sleep" is that the system wakes up fairly quickly.

    Hibernate:
    Hibernate copies the contents of the system's RAM to disk and then shuts the computer down to a nearly completely un-powered state (only the ethernet card should still be drawing power). Upon waking up, the machine will boot back into the last saved state by reading the contents of the saved RAM into memory. A power failure while the server is asleep does not cause any issues as the memory is completely stored in a nonvolatile state. So even after a power failure (while the server was asleep) your system will still boot back into the state it was in when it last went to sleep. The disadvantage is that it takes the system much longer to wake up than when merely suspended. On my system it takes about 45 seconds from dead sleep to fully awake.

    Hybrid:
    There is a third setting called pm-suspend-hybrid which acts as a combination of suspend and hibernate. Using this, the contents of RAM are written to disk just as though the hibernate command had been issued. But the computer itself only goes into a light sleep as though the suspend command had been issued. If there is no power outage while the machine is asleep it will merely wake up as though it had only been suspended. If, however, the system loses power while it is asleep, it will still manage to recover to its most recent state by booting from the hibernate image. The advantages of this are all the speed of the Suspend method, with the robustness of the Hibernate method. The disadvantage is that it takes more time to go to sleep (as it still has to write memory to disk) and it still uses more power than a fully hibernating system.

    For a few reasons, I have settled on the hibernate strategy. The first is that I am after as low power a system as possible. The second is that I prefer the robustness of this technique over the speed of the Suspend technique (I have no intense need to access the server within 45 seconds of sitting down at my computer). You should feel free to use whichever version your system supports and that makes sense to you.

    Test hibernation by typing:

    sudo pm-hibernate
    Be aware that this will leave your ssh session hanging. If you manage to wake the system up within a fairly short period of time (by pressing the power button on the server or sending it a magic packet) your ssh session will resume. If you wait too long, however, you may have to re-log in.



    Step 2: Install fping.

    Our server will be the last machine on the network to go to sleep. Specifically, it will periodically check the network to see if any other systems are on and awake. If it finds that other machines are up and about, it will simply continue to run. If, however, it finds that it is the last machine standing, it will put itself to sleep. No need to serve files when nobody else is up, eh? In order to accomplish this, it will need to be able to scan the network for other, non-sleeping machines.

    Normally, the standard way of checking to see if a computer is on a network and listening for commands is to send it a "ping" command. For example, if you wanted to see if your server were up and running, you could issue the following command from the command line on your Mac:

    ping 172.16.1.100 <-- of course, you would have to substitute the actual IP address of your server here.
    If your server is active, you should get back the following response:

    PING 172.16.1.100 (172.16.1.100): 56 data bytes
    64 bytes from 172.16.1.100: icmp_seq=0 ttl=64 time=1.708 ms
    64 bytes from 172.16.1.100: icmp_seq=1 ttl=64 time=1.538 ms
    64 bytes from 172.16.1.100: icmp_seq=2 ttl=64 time=1.527 ms
    (you will have to hit ctrl-c to stop it).

    In our case, we will want to scan our network for multiple computers at the same time. That is where fping comes in. It is able to simultaneously "ping" multiple IP addresses at the same time. To install it, type:

    sudo apt-get install fping
    Now, if we want to see what machines are up and about on our network, we would simply type:

    sudo fping -a -g 172.16.1.0/24
    (translated, this means send a ping to every machine on the 172.16.1.* network). You will get back something similar to this:

    172.16.1.8
    172.16.1.100
    172.16.1.101
    ICMP Host Unreachable from 172.16.1.100 for ICMP Echo sent to 172.16.1.2
    ICMP Host Unreachable from 172.16.1.100 for ICMP Echo sent to 172.16.1.2
    ICMP Host Unreachable from 172.16.1.100 for ICMP Echo sent to 172.16.1.3
    etc...
    This list will be very long and it will take a few moments before it stops spitting out those error messages. fping has just let you know that there are three devices on the network that are currently responding to pings and a whole lot of addresses where nobody is listening.

    For our purposes, we don't care about empty IP addresses, so let's modify the command we are issuing so that we only hear about active machines on the network.

    sudo fping -a -g 172.16.1.0/24 2> /dev/null
    the section that reads 2> /dev/null effectively tells the shell to discard any error messages the command may generate (by throwing them into the Linux equivalent of a black hole: /dev/null). Running this command will give you an output similar to this:

    172.16.1.8
    172.16.1.100
    172.16.1.101
    That's great. Now we know that there are three active devices on the network, and we are not besieged by millions of unnecessary error messages about unused addresses.

    Now think back to when you first started setting up the server. Remember the section where I told you to put your server on a static IP address (mine is set to 172.16.1.100)? Remember also that I suggested that any other non-client devices be assigned static addresses as well (my HP printer is set to 172.16.1.101)? Those actions are showing up here. I have three machines on the network, but two of them (the server itself at 172.16.1.100 and the printer at 172.16.1.101) are never going to be clients of the server. In fact, any IP address of 172.16.1.100 and above will never actually be a client of my server. Also, due to the way the Apple Airport Extreme works, it will always assign itself to the address 172.16.1.1. So instead of using fping to scan the entire network for listening devices, I am only going to scan the range of possible clients. This range is 172.16.1.2 - 172.16.1.99. So let's modify the fping command as follows:

    sudo fping -a -g 172.16.1.2 172.16.1.99 2> /dev/null
    We should now only get a list of IP addresses that are listening AND are not my server, router, printer, etc… For example:

    172.16.1.8
    Ta da! And that's it. We now have a way of getting a list of active clients on our network.



    Step 3: Write a script to have the server automatically go to sleep.

    Now let's convert this into a simple script that can periodically scan the network and put the server to sleep if there are no active clients listening. We will start by creating this script in our administrative user's home directory. We never want to create a script directly in the continuous.active directory because it should be fully tested before being placed there. To create this script, type the following:

    sudo pico ~/check_for_active_clients.sh
    and then paste or type in the following (note, you will have to edit this script depending on your specific network configuration - aka IP addresses. I can try to help you with this if you have issues.):

    Code:
    #! /bin/bash
    
    #check to see if any other devices are responding to pings
    #only check addresses 172.16.1.2 - 172.16.1.99 because
    #the Airport is always on address 172.16.1.1 and all of my
    #always on devices (that don't need server access) have
    #reserved IP address of 172.16.1.100 and above
    
    #if there are no responding clients in the ip range, go to sleep
    if [ `/usr/bin/fping -a -g 172.16.1.2 172.16.1.99 2> /dev/null | wc -l` -eq 0 ]; then
    	/usr/sbin/pm-hibernate
    fi
    Save and close the file when you are done.
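    The test at the heart of the script just counts the lines that fping prints. You can see the same pattern with any command (here printf stands in for a live fping run):

```shell
# wc -l counts lines of input; fping -a prints one line per responding
# host, so a count of 0 means nobody on the network answered
count=$(printf '172.16.1.8\n172.16.1.9\n' | wc -l)
echo "$count"
if [ "$count" -eq 0 ]; then
    echo "no clients awake"   # in the real script, this is where pm-hibernate runs
fi
```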

    Change the permissions on the file so that it becomes an executable script. Do this by typing:

    sudo chmod ugo+rwx ~/check_for_active_clients.sh
    Again, because I am a big believer in testing our scripts, we should check whether this one actually works. Unfortunately, it is difficult to test from another machine, simply because the existence of that other machine on the network will cause the script to do nothing. So we will need to test it directly from the console connected to the server.

    Start by making sure that every device on your network with a dynamically assigned IP address is turned off (or asleep).

    Next, physically log into your server from the console.

    Once you are logged in, type:

    sudo ~/check_for_active_clients.sh
    (you will be prompted for your password).

    The server should process for a few moments and then go to sleep. Once it has, hit the power button to wake it back up. Confirm that it wakes up back to the point where you had left it. If this works, log out of the console and return to your regular computer (which should still have ssh access. If not, ssh back into the server to continue following this tutorial).

    Now move this script into the continuous.active directory (this will make it run once every five minutes). Do this by typing:

    sudo mv ~/check_for_active_clients.sh /usr/local/sbin/continuous.active/check_for_active_clients.sh
    Test it again by putting your desktop computer (and any other devices using dynamically assigned IP addresses) to sleep. Within five minutes your server should go to sleep. If it does, wake it back up by hitting the power key. Wake your desktop machine as well. You should still be logged in via ssh as long as you did not let it sleep too long.



    Step 4: Get your client machines to automatically wake up the server.

    It is awesome that the server will automatically go to sleep when it is not needed, but we don't want to have to physically walk over to it and wake it up every time we wake one of our client machines. To remedy this, we will instruct our Macs to automatically send a Wake-On-LAN magic packet to the server whenever they start up or wake up. To accomplish this, we will use the excellent SleepWatcher tool, written by Bernhard Baehr (a man with an excellent first name, by the way), which runs on our Macs. You can get his tool from:

    http://www.bernhard-baehr.de/

    Note to Windows users: There must be similar tools available for your system. Alas I don't know what they are and I am not interested in finding out. I will update this post with any links if anyone has them to offer.

    Note: much of the following is being done on our Macs! Pay close attention to when you should be working on the command line on your Mac and when you should be working in the ssh session to your server!

    Mac: Download SleepWatcher 2.2 and install it as described in the associated ReadMe (I will leave it to you to properly install it. The instructions included with the tool are clear and all I would be doing is repeating them here).

    Next, we need a way of sending a magic packet from the command line on our Macs. The wol tool allows this (it is available for many other operating systems as well). Go to:

    http://www.gknw.net/wol.html

    and download the Mac OS X version of the tool. Once downloaded, save the wol file (you don't need the wol.c file) to /usr/local/sbin. To do this, make sure you are in your downloads directory in the terminal and then type:

    Mac: sudo mv ./wol /usr/local/sbin/

    Make sure it is executable by typing:

    Mac: sudo chmod ugo+rwx /usr/local/sbin/wol

    Now test the script to see if it works. Put your server to sleep by typing:

    Ubuntu: sudo pm-suspend

    Your server should go to sleep in a few moments. Once it has, let's try to wake it back up by typing (on your Mac):

    Mac: wol XX:XX:XX:XX:XX:XX <-- but replace the XX:XX:XX:XX:XX:XX with the MAC address of your server (you wrote it down ages ago when I told you to, right?)

    Your server should wake back up.

    So now let's write a script that will actually send this magic packet. On your Mac, create a new script called wakeUbuntu.sh by typing:

    Mac: sudo pico /usr/local/sbin/wakeUbuntu.sh

    and copy/type the following text into the file.

    Code:
    #! /bin/bash
    
    sleep 10
    /usr/local/sbin/wol XX:XX:XX:XX:XX:XX
    Note: you will have to replace the XX:XX:XX:XX:XX:XX with the actual MAC address of your server. Also, the sleep command near the top of the script merely tells the script to pause for 10 seconds before continuing. I added this because it can often take a little while before the Mac is fully connected to the wireless network, and a magic packet sent before the connection is up will fail.

    Save and close the file.

    Make sure the script is executable by typing:

    Mac: sudo chmod ugo+rwx /usr/local/sbin/wakeUbuntu.sh

    Test the script by putting your server to sleep:

    Ubuntu: sudo pm-suspend

    and calling this script from the command line on your Mac:

    Mac: /usr/local/sbin/wakeUbuntu.sh

    Your server should wake up.

    Now, finally, we need to tell SleepWatcher to call this script whenever our Mac wakes up. First run a test. Start by putting your server back to sleep:

    Ubuntu: sudo pm-suspend

    Next, type the following command on your Mac command line:

    Mac: /usr/local/sbin/sleepwatcher --verbose --wakeup /usr/local/sbin/wakeUbuntu.sh

    Then put your Mac to sleep. Wait a second or so to make sure it is fully sleeping, and then wake it up. You should see that it wakes up and, in a few moments, executes the wakeUbuntu.sh script which then wakes up your server. You should get an output similar to this on your Mac:

    /usr/local/sbin/wol: packet sent to EA60:FFFFFFFF-XX:XX:XX:XX:XX:XX
    sleepwatcher: wakeup: /usr/local/sbin/wakeUbuntu.sh: 0
    Press ctrl-c to cancel the sleepwatcher command.

    Now we have to make sure that sleepwatcher is set up to run in the background. To do this, we need to first edit the plist file that came with the tool. Go to the directory where you downloaded sleepwatcher and then cd into the config directory:

    Mac: cd ~/Downloads/sleepwatcher_2.2
    Mac: cd config

    Next, edit the plist file by typing:

    Mac: sudo pico de.bernhard-baehr.sleepwatcher-20compatibility.plist

    find the line that says:

    <string>-s /etc/rc.sleep</string>
    and delete it (we are not interested in having anything run when our Mac goes to sleep).

    Next, edit the line that says:

    <string>-w /etc/rc.wakeup</string>
    so that it now reads:

    <string>-w /usr/local/sbin/wakeUbuntu.sh</string>

    (we want it to run our wake ubuntu script when the Mac wakes up).

    Save and close the file.

    Now move it to the LaunchDaemons directory.

    Mac: sudo mv de.bernhard-baehr.sleepwatcher-20compatibility.plist /Library/LaunchDaemons

    and change its permissions to be readable by all, writable only by root:

    Mac: sudo chmod ugo-rwx /Library/LaunchDaemons/de.bernhard-baehr.sleepwatcher-20compatibility.plist
    Mac: sudo chmod ugo+r /Library/LaunchDaemons/de.bernhard-baehr.sleepwatcher-20compatibility.plist
    Mac: sudo chmod u+w /Library/LaunchDaemons/de.bernhard-baehr.sleepwatcher-20compatibility.plist

    Note: as a shortcut, you could also just type:

    Mac: sudo chmod 644 /Library/LaunchDaemons/de.bernhard-baehr.sleepwatcher-20compatibility.plist

    which has the same effect as all three lines above. It is just harder to remember (or derive) the exact numerical sequence for this pattern which is why I typically use the more verbose version above.
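    If you would rather derive the numbers than memorize them: each digit is the sum of r=4, w=2, and x=1, for user, group, and other respectively. A quick check with a throwaway file (stat -c is the GNU/Linux form of the command, so run this on the server rather than the Mac):

```shell
f=$(mktemp)            # throwaway file to experiment on
chmod 644 "$f"         # 6 = 4+2 (rw-), 4 = r--, 4 = r--
stat -c '%a %A' "$f"   # prints the numeric and symbolic modes: 644 -rw-r--r--
rm "$f"
```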

    Now change its ownership so that it is owned by root:

    Mac: sudo chown root /Library/LaunchDaemons/de.bernhard-baehr.sleepwatcher-20compatibility.plist

    Finally, tell launchctl to load sleepwatcher automatically:

    Mac: sudo launchctl load /Library/LaunchDaemons/de.bernhard-baehr.sleepwatcher-20compatibility.plist

    You should now be able to put your Mac to sleep, wait up to five minutes for the server to fall asleep, and then wake up your Mac (and have it wake your server within a few moments)!

    Repeat this process for each Mac on your network that you want to automatically wake your server.







    Section #15 - Security: Install and configure DenyHosts
    ----------------------------------------------------------------
    ----------------------------------------------------------------

    Notes:

    - Anything in bold blue monotype is a command you type at the command prompt. bold green monotype is used to show prompts from the computer. Pink italics are used to show either the contents of text files or miscellaneous prompts.



    Machines directly connected to the internet are the focus of an unbelievable number of attacks. One such attack, I have read, is to try to brute force crack a system. As far as I can tell, this involves repeatedly trying to log in on different ports with different user names and passwords. Based on comments I have read, such attacks can occur hundreds of times per day - all of them simply random attempts to break into your system.

    DenyHosts is a package which helps cut down on brute force attacks by watching your logs. Any IP's that try to log into your system (and fail) a repeated number of times (between 5 and 10 times depending on the user name) are automatically blocked so they cannot make any further attempts.



    Step 1: Install and configure DenyHosts.

    Start by installing the package.

    sudo apt-get install denyhosts
    Next we need to configure it. To do that, type:

    sudo nano /etc/denyhosts.conf
    Set:

    PURGE_DENY = 5d
    PURGE_THRESHOLD = 2
    BLOCK_SERVICE = ALL <--- you may also have to comment out the existing line that says BLOCK_SERVICE = sshd

    Finally, find the line that starts with:

    WORK_DIR
    And make a note of the directory it lists (most likely it will be /var/lib/denyhosts). Write it down. Don't be lazy now, just write it down.

    Save and close the file.
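    If you do lose that scrap of paper, you can always pull the value back out of the config file with grep:

```shell
# print just the WORK_DIR line from the DenyHosts config
grep '^WORK_DIR' /etc/denyhosts.conf    # e.g. WORK_DIR = /var/lib/denyhosts
```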



    Step 2: Build an exception for machines on your local network.

    DenyHosts, by default, will block ANY IP address if it fails to properly log in more than 10 times (5 times for non-valid user names). This includes your Mac on your local network. So, if you fat-finger your login attempt a bunch of times you could potentially lock yourself out of your own system. Is this likely? Not really, but it might be better if you set an exception to never block any of your local IP addresses.

    I imagine this might be a slight security risk in case your Mac is compromised. Of course, if that has happened then you have a whole host of other problems. That said, I just don't know if this is a good idea or not. If anyone has a particular opinion on this, I'd love to hear it. For the time being, however, I am going to make an exception for local IP addresses.

    To build a list of IP addresses that will never be blocked no matter what, we will create a file called allowed-hosts. This file will live in the WORK_DIR you took note of above.

    Most likely, the IP address range you will be adding can be derived from the IP address of your server. If it was, say:

    172.16.1.16
    then you would use:

    172.16.1.*
    (Just keep the first three numbers, and replace the last one with an asterisk.)
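    The transformation is mechanical enough that the shell can do it for you, using parameter expansion (the address below is just the example from above):

```shell
ip=172.16.1.16
# ${ip%.*} strips everything from the last dot onward, then we append .*
echo "${ip%.*}.*"    # prints 172.16.1.*
```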

    In the work directory that you wrote down in step 1 above, create a file named allowed-hosts:

    sudo nano /var/lib/denyhosts/allowed-hosts <--- replace /var/lib/denyhosts with the directory you wrote down in step 1 above.
    Add the following (change the ip to match the pattern used for your subnet)

    #always allow ssh from the local subnet
    172.16.1.*
    <--- You should change this to match the pattern of your local subnet
    Save and close the file.



    Step 3: Define some restricted user names.

    DenyHosts, by default, will lock out any IP address that tries to log in with a nonexistent user name after only 5 attempts. If the user name is valid, however, it will allow a total of 10 login attempts before blocking the offending IP address. However, there are a bunch of valid users on your system that have no business ever trying to log in remotely (or even locally).

    We will add these users to a list of restricted user names. By doing this, even though they are "valid" users on the system, they will be blocked after only a single failed attempt.

    Get a list of all users by typing:

    lastlog
    You might want to open a TextEdit document so that you can copy the list of names into it. In my case, the following users should not ever be logging in remotely:

    root
    daemon
    bin
    sys
    sync
    games
    man
    lp
    mail
    news
    uucp
    proxy
    www-data
    backup
    list
    irc
    gnats
    nobody
    libuuid
    syslog
    landscape
    sshd
    Note that I include root in this list. Root should never be logging in remotely and I suspect that it is one of the most common user names that automated break-in scripts try. That said, there is already a setting in DenyHosts that limits root specifically. Including root here is just overkill… but I am totally ok with that. Depending on the packages you have installed on your system, you may have one or two additional users in this list. The key is to include in this list all users who will never explicitly be logging into the system remotely (which is really all users except those you created specifically). Also, note that if you add additional packages after this step (ftp, etc.) you may find that there are some additional users you would want to add to this list. Feel free to do so, but also know that accidentally leaving a user off of this list isn't by itself a serious security threat.

    You might note that the ONLY name NOT in this list is my original administrative user (admiral in the examples used thus far).



    To mark these users as restricted, create a file named restricted-usernames in the work directory that you wrote down in step 1:

    sudo nano /var/lib/denyhosts/restricted-usernames <--- replace /var/lib/denyhosts with the directory you wrote down in step 1 above.
    Copy the names (just the names, not their last login time) into this file.

    Save and close the file.
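    As an aside, rather than copying the names out of lastlog's output by hand, you can strip off everything but the username column with awk (the test here simulates lastlog's output with printf; on the server you would pipe lastlog itself):

```shell
# keep only the username column, dropping lastlog's header line
lastlog | awk 'NR > 1 { print $1 }'
```

    You would still want to delete your own administrative user from the result before pasting it into restricted-usernames.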


    Restart DenyHosts

    sudo /etc/init.d/denyhosts restart







    Section #16 - Security: Configure various other settings to improve the default security
    ----------------------------------------------------------------
    ----------------------------------------------------------------




    There are a bunch of smaller things you can do to improve the security of your system. They aren't the first line of defense, but they can certainly help.

    I got much of this information from the following websites:

    http://www.upubuntu.com/2011/05/how-...er-ubuntu.html
    https://help.ubuntu.com/community/St...UnsafeDefaults


    Step 1: Turn off IPv6.

    Turn off IPv6 for now. It is not really being used much yet and I have no idea what the security implications are for having it on. We can turn it back on later if we need it and once we understand it better. Start by editing /etc/sysctl.conf:

    sudo nano /etc/sysctl.conf
    Add the following to the end of the file and then save:

    # IPv6
    net.ipv6.conf.all.disable_ipv6 = 1
    net.ipv6.conf.default.disable_ipv6 = 1
    net.ipv6.conf.lo.disable_ipv6 = 1
    Now, reload the configuration by typing:

    sudo sysctl -p
    You can check to see if IPv6 is disabled by typing:

    cat /proc/sys/net/ipv6/conf/all/disable_ipv6
    If it returns a 1, then IPv6 has been successfully disabled (this seems backwards at first, but remember the setting is named disable_ipv6, so a value of 1 means the disabling is turned on, i.e. IPv6 is off).
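    To check every interface at once instead of just "all", you can loop over the corresponding proc entries (a small sketch; the exact list of interfaces will vary by machine):

```shell
# print the disable_ipv6 flag for each interface
for f in /proc/sys/net/ipv6/conf/*/disable_ipv6; do
    printf '%s: %s\n' "$f" "$(cat "$f")"
done
```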











    Section #17: Security: Set up a firewall using ufw (Uncomplicated Firewall)
    ----------------------------------------------------------------
    ----------------------------------------------------------------




    Firewalls are a basic security tool, and most likely THE primary tool in your defense against intrusion. Ubuntu 11.10 uses iptables to control your firewall rules (i.e. which ports are open to the outside world, and to whom they are open). Unfortunately, iptables is notoriously difficult to work with. Luckily, Ubuntu ships with ufw (Uncomplicated Firewall), a tool that sits on top of iptables: you give ufw some fairly basic rules, and it translates them into actual iptables chains. ufw does not make firewalls stupidly simple, but it does bring them within range of the average person.

    ufw is installed on your server by default, though it is turned off. So there is no need to install it before use. To find out the current status of ufw, type:

    sudo ufw status
    You should get back a result telling you that it is currently turned off:

    Status: inactive
    We will need to configure it, and turn it on. Before we do, however, I want to warn you that if you make a mistake, you might lock yourself out of your server. If this happens, don't panic. You have only locked yourself out from remote access. You can always work directly on the machine itself to take steps necessary to re-establish a connection. That said, it is better to not lock yourself out, no? So take some time to research ufw beyond what I am explaining here.

    We are going to have an extremely restrictive set of rules. I intend to specifically allow only those few things the server was built to do, and disallow everything else. This only makes sense. Why leave any ports or protocols open if you do not intend to use them? So, to begin we will tell ufw to deny everything by default (i.e. block everything, both incoming and outgoing).

    sudo ufw default deny incoming
    sudo ufw default deny outgoing
    Now, I am going to selectively allow those services that I want to have remote access for.

    Start with ssh (remember you have it on a non-standard port):

    sudo ufw allow in 22512/tcp <-- replace 22512 with the port number you selected for ssh
    sudo ufw allow out 22512/tcp <-- replace 22512 with the port number you selected for ssh
    This tells ufw to allow anyone to connect to port 22512 (replace 22512 with the port you set up for ssh) using the tcp protocol. Now remember that ssh is pretty locked down already, so allowing access to this port does not mean anyone can ssh into your computer. It just means anyone has an opportunity to try to log in (but they would need to know the port number, have the key file, know your passphrase, and get all that within five tries before denyhosts kicks them out). Also note that we have to specify a direction (in/out) otherwise only the incoming traffic will be allowed.

    We also have to allow ping (or more specifically ICMP) through the firewall. By default, ufw blocks your server's ability to run ping (or, again, ICMP, upon which both ping and fping are built). This is kind of esoteric, but important. If you start up your firewall without allowing ICMP through it, your server will put itself to sleep within five minutes or less. Why? Remember the script we set up to check for active computers on the network? If ufw blocks ICMP and, by extension, fping, that script will think there are no other machines on the network and promptly put the server to sleep. Ask me how I know.

    So, to allow ICMP we have to edit the "before rules" of ufw. This is different from the usual allow commands because ICMP has no ports, so ufw's simple port-based allow syntax can't express it. In any event, I used this site as reference for the next section:

    http://www.kelvinism.com/howtos/enab...p-through-ufw/

    We need to edit the "before rules" of ufw. To do this, type:

    sudo pico /etc/ufw/before.rules
    Edit this file and add the following lines:

    # allow outbound icmp
    -A ufw-before-output -p icmp -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
    -A ufw-before-output -p icmp -m state --state ESTABLISHED,RELATED -j ACCEPT
    Save and close the file.

    Once you have allowed ssh and outgoing ICMP, you can actually turn on ufw (had you turned it on prior to allowing ssh, you would have been booted off the system. Had you turned it on before editing the before rules, your machine would have spontaneously gone to sleep within five minutes). To turn on ufw, type:

    sudo ufw enable

    I am now going to open port 25, which is the port most SMTP servers use to send email, and port 53, which the server needs for DNS lookups (it uses DNS to resolve smtp.whatever.com to an actual IP address). In some cases, your email provider may use a different port than 25 (mine, for example, is 2525); just substitute that value for 25. Also, note that we are only opening these ports in the out direction.

    sudo ufw allow out 25
    sudo ufw allow out 53

    Next, I am going to allow certain ports to accept any connections from the local network. Outside the local network, these services will not be accessible. Note that I am using the address 172.16.1.0/24 to represent my local network. This translates to an IP range of 172.16.1.0 through 172.16.1.255 (with .1 through .254 usable for hosts). If your network uses a different set of IP addresses (the usual private ranges are 192.168.x.x, 172.16.x.x, or 10.x.x.x) then substitute that. For example, if your server's IP address is, say, 192.168.0.10 then you would want to use: 192.168.0.0/24
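    For the curious, the /24 notation means the first 24 bits (three octets) of the address are fixed, leaving 8 bits for hosts. A quick bit of shell arithmetic shows where the address counts come from:

```shell
# 8 host bits give 2^8 addresses in a /24 network
echo $((2 ** 8))        # 256
# the lowest (.0, network) and highest (.255, broadcast) are reserved,
# leaving this many usable host addresses
echo $((2 ** 8 - 2))    # 254
```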

    Start with allowing the dhcp client access.

    sudo ufw allow in from 172.16.1.0/24 to any port 68
    sudo ufw allow out from 172.16.1.0/24 to any port 68
    Now, I am going to open the ports for Netatalk and Avahi (even though these have not yet been installed):


    Netatalk

    sudo ufw allow in proto tcp from 172.16.1.0/24 to any port 548
    sudo ufw allow out proto tcp from 172.16.1.0/24 to any port 548
    sudo ufw allow in proto tcp from 172.16.1.0/24 to any port 427
    sudo ufw allow out proto tcp from 172.16.1.0/24 to any port 427
    sudo ufw allow in proto tcp from 172.16.1.0/24 to any port 4700
    sudo ufw allow out proto tcp from 172.16.1.0/24 to any port 4700
    sudo ufw allow in proto tcp from 127.0.0.1 to any port 4700
    sudo ufw allow out proto tcp from 127.0.0.1 to any port 4700

    Avahi

    sudo ufw allow in proto udp from 172.16.1.0/24 to any port 5353
    sudo ufw allow out proto udp from 172.16.1.0/24 to any port 5353
    sudo ufw allow in proto udp from 172.16.1.0/24 to any port 56794
    sudo ufw allow out proto udp from 172.16.1.0/24 to any port 56794
    sudo ufw allow in proto udp from 172.16.1.0/24 to any port 42826
    sudo ufw allow out proto udp from 172.16.1.0/24 to any port 42826

    dhclient (needed for DHCP to work)

    sudo ufw allow in from 172.16.1.0/24 to any port 67
    sudo ufw allow out from 172.16.1.0/24 to any port 67

    Finally, I am going to allow port 4040, which will be needed for our Subsonic media server. Again, it will only be allowed from the local network.

    sudo ufw allow in proto tcp from 172.16.1.0/24 to any port 4040
    Remember, by using the format from 172.16.1.0/24 we are limiting these ports to only accept connections from the local network. Any attempts to connect from the wider world will be blocked.










    Section #18: Security: Install port knocking software
    ----------------------------------------------------------------
    ----------------------------------------------------------------

    Notes:

    - I referenced the following website for this section https://help.ubuntu.com/community/PortKnocking



    As you learned earlier, ports are like doors through which the server listens for requests and then allows authorized users through. We have turned off most ports on our system except those that we specifically intend to use (netatalk, avahi, and ssh, for example). That said, it would be even nicer if any ports that will eventually be open to the internet stayed shut until a secret code was submitted to open them up. Currently, if you try to connect to our server on the port you set up for ssh (22512 in the example we have been using up till now) you will be challenged for your ssh private key. Just this challenge would be enough for a malicious user to know that there is a locked door they could focus their efforts against.

    Port knocking is a mechanism where we actually camouflage the ports so that there is no response of any kind when you try to connect. Instead, you need to "knock" on a pre-defined set of ports in a pre-defined order in a pre-defined amount of time before you can successfully try to connect to the port you want to connect to. Once this secret pattern of knocks has been registered, the port you want to connect to is opened up and you may then try to connect to that port in the normal manner.

    For example, if you wanted to connect to ssh on port 22512, you would first "knock" on ports 1789, 20933, 12892, and 14999 in that order (this is just an example, you may set any pattern you wish) and do it within ten seconds (again, you may set any duration you want) before port 22512 is opened up. At that point, you have a few moments to connect to this port. Note that connecting still requires all of the ssh credentials you would have needed prior to installing the port knocking software. The nice thing about this is that there is no easy way to determine that port knocking software is even running on your server. If a malicious user knocks in the wrong order it will appear as though there are no open ports anywhere. There is no "rejection" of the knocks. Instead, nothing happens. This is indistinguishable from a machine that actually doesn't have any open ports or is even physically not there.

    We will be using the knockd tool to implement port knocking. It isn't the most secure of the port knocking tools as it does not use encryption and some other more sophisticated techniques, but it is easy to set up and use. Remember, we are not relying on port knocking for security, we are just adding a layer of camouflage to our system. If someone sniffs out our port knocking sequence, they still need to get past our other security measures. I think this compromise is sufficiently secure for the type of server we are protecting.

    Port knocking does make it a bit more annoying to log in remotely, but the tradeoff appears to me to be worth it. Again, we are talking about layered security here.




    Step 1: Install knockd on your server.

    To install port knocking software, type:

    sudo apt-get install knockd
    Once it is installed, edit the config file by typing:

    sudo nano /etc/knockd.conf
    And modify it so that it reads as follows (of course, you will want to change the 7000,8000,9000 sequence to a custom, secret sequence of your own. You don't have to limit yourself to just three ports, but I have not had any luck re-using ports so only use a port once.):

    Code:
    [options]
            logfile = /var/log/knockd.log
    
    [openSSH]
            sequence      = 7000,8000,9000
            seq_timeout   = 30
            start_command = /usr/sbin/ufw allow proto tcp from %IP% to any port 22512
            tcpflags      = syn
            cmd_timeout   = 20
            stop_command  = /usr/sbin/ufw delete allow proto tcp from %IP% to any port 22512
    Save and close the file.

    Next, edit the start up file by typing:

    sudo nano /etc/default/knockd
    Change the line that reads:

    START_KNOCKD=0
    to:

    START_KNOCKD=1
    And add the following line to the end of the file:

    KNOCKD_OPTS="--debug --verbose -i eth0"
    Save and close the file.




    Step 2: Tell ufw to deny the port we are using for ssh.

    Earlier we set up ufw to allow ssh access via a non-standard port (22512). But now that we have port knocking enabled, we have to make sure that any attempts to connect to our ssh port are denied by default (only after we have properly knocked will access be enabled).

    But before we close up this port, it will be handy to have some ssh shells open (they will not be terminated by changing the ufw rules, but any new attempts to connect will be blocked).

    Once you have some extra shells connected to the server via ssh, delete the open port in ufw. To do this, type:

    sudo ufw delete allow in 22512/tcp
    sudo ufw delete allow out 22512/tcp
    Of course, you will have to substitute the actual port number you set up for ssh.




    Step 3: Start knockd.

    Start knockd by typing:

    sudo service knockd restart
    you should be able to look at the log to see what it is doing. To do so, type:

    sudo cat /var/log/knockd.log
    And it will tell you the status of the knockd process. Of course, nothing is happening just yet because you still need to install a knocking client on your Mac.




    Step 4: Install the knocking software on your Mac.

    In order to get knockd to open up our ssh port, we need to be able to send it a series of knocks. There are multiple ways of doing this (I had considered writing a script that utilizes the nc command but had some difficulty with that), but the easiest way is to download a knocking client for your Mac.

    On your Mac, go to the following website:

    http://www.zeroflux.org/projects/knock

    and download the MacOS Client. (Note, there are also clients for Windows and iOS)

    Move the downloaded knock file (called knock) to your /usr/local/sbin directory. (Note: if you can only find a file called knock-macos.tar, it means you have not uncompressed it yet.) Move it by typing:

    Mac: sudo mv ~/Downloads/knock /usr/local/sbin/knock

    Make sure it is executable (but not world-writable, since anyone could then replace it) by typing:

    Mac: sudo chmod 755 /usr/local/sbin/knock

    Now create a combined knock/ssh script so that you can connect to your server with a single command. Create the script by typing:

    Mac: sudo nano /usr/local/sbin/ubs

    And copy or type the following into it:

    Code:
    #!/bin/bash
    
    /usr/local/sbin/knock 172.16.1.100 7000
    /usr/local/sbin/knock 172.16.1.100 8000
    /usr/local/sbin/knock 172.16.1.100 9000
    sleep 2
    ssh -l admiral -p 22512 172.16.1.100
    Note, you will have to customize the above script so that it:

    a) Uses your actual sequence of ports (not 7000 8000 9000)
    b) Uses your actual administrative user (not admiral)
    c) Uses your actual ssh port number (not 22512).

    Also, normally you would simply have a single line which lists all of your ports in order, but I found that it would rarely work. Instead I had a lot more luck by separating each knock out into a separate call to the knock script.

    Save and close the file.

    Make sure it is only accessible to the root user on your Mac because the proper sequence of ports is something like a combination… you want to keep it secret. To do this, type:

    Mac: sudo chown root /usr/local/sbin/ubs
    Mac: sudo chmod go-rwx /usr/local/sbin/ubs
    Mac: sudo chmod u+rwx /usr/local/sbin/ubs
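    For reference, the net effect of those three commands is that root owns the script and its mode ends up as 700 (read/write/execute for the owner, nothing for group or others). Here is a small sketch, purely for demonstration on a scratch file, that shows the octal mode notation (it uses GNU stat, so run it on the Linux server rather than the Mac):

```shell
#!/bin/bash

# Demonstration on a scratch file: chown root + go-rwx + u+rwx amounts to
# setting mode 700 (owner-only access).
f=$(mktemp)
chmod 700 "$f"
stat -c '%a' "$f"    # prints the octal mode, 700
```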

    Now test it to see if you can log into your server just by typing this one command (note: you must have /usr/local/sbin in your PATH variable. If it isn't, look up how to add it for the shell you are using - most likely bash):

    Mac: sudo ubs

    You will be prompted for your password and you may have to accept the server's RSA key fingerprint the first time.

    It works for me, but sometimes I need to run it more than once (that may simply be a case of some other process on the network hitting a port while the script is running and throwing off the sequence of knocks).
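    If you find yourself rerunning ubs by hand often, a small retry wrapper can do it for you. This is just a sketch of my own devising (the function name retry_connect and the retry count of 3 are made up, not part of the knock package); it runs whatever command you hand it until it succeeds or the attempts run out:

```shell
#!/bin/bash

# Hypothetical helper: run the given command up to 3 times until it
# succeeds. On the Mac you would call it as:
#   retry_connect /usr/local/sbin/ubs
retry_connect() {
    local tries=3 i
    for i in $(seq 1 "$tries"); do
        if "$@"; then
            return 0                     # the command succeeded
        fi
        echo "attempt $i failed; trying again..." >&2
        sleep 1
    done
    return 1                             # gave up after $tries attempts
}
```

    You could paste the function into the ubs script itself and wrap the knock/ssh block with it.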







    Section #19 - Set up your users and the various directories on the RAID
    ----------------------------------------------------------------
    ----------------------------------------------------------------

    Notes:

    - Anything in bold blue monotype is a command you type at the command prompt. bold green monotype is used to show prompts from the computer. Pink italics are used to show either the contents of text files or miscellaneous prompts.



    Now is the time to create all the regular users on the server. These are the users who will be using the server to store files, back up their computers, stream music, etc. You should include your regular login in this list. You will also create directories where your data will be stored (all on the RAID array).




    Step 1: Create your first user.

    Creating a user is easy. Assuming you want to add a user named 'bob' just type:

    sudo adduser bob <--- replace 'bob' with the name of the user you want to add
    You will see the following:

    Adding user `bob' ...
    Adding new group `bob' (1001) ...
    Adding new user `bob' (1001) with group `bob' ...
    Creating home directory `/home/bob' ...
    Copying files from `/etc/skel' ...
    Then you will be prompted to enter a password for this user:

    New password:
    Go ahead and assign a default password for the user (they can change it later). Note, if you put in too basic a password you may get a warning that looks like:

    BAD PASSWORD: it is based on a dictionary word
    Regardless, you will be asked to re-enter the password:

    Retype new password:
    Go ahead and re-enter the password. You will be notified that the password was successfully set.

    passwd: password updated successfully
    After this, you will be guided through the process of entering some basic user information:

    Changing the user information for bob
    Enter the new value, or press ENTER for the default

    Full Name []: Bob Smith
    Room Number []:
    Work Phone []:
    Home Phone []:
    Other []:
    Just answer the questions. Note that I elected to ignore some of the questions (Room Number for example) by just hitting enter when asked for information. Finally you will be asked:

    Is the information correct? [Y/n] y
    Assuming it is all correct, just hit:

    y

    and press enter.

    That's it. You have just created a user named 'bob'.




    Step 2: Lather, Rinse, Repeat.

    Repeat step 1 as many times as you have users that will be using the system.
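    If you have several users to add, a small loop can generate the commands for you. This sketch (the usernames are just examples) only prints each command; remove the leading "echo" to actually run them. The --gecos "" flag answers the Full Name/Room Number questions with blanks, and --disabled-password skips the password prompt so you can set each password afterwards with sudo passwd <user>:

```shell
#!/bin/bash

# Dry run: print one adduser command per user.
# Remove the leading "echo" to actually create the accounts.
users=(george ringo paul yoko)

for user in "${users[@]}"; do
    echo sudo adduser --disabled-password --gecos '""' "$user"
done
```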




    Step 3: Create the directory structure you want to use to organize data on your RAID array.

    I have my server set up in the following manner:

    1) A home directory on the RAID array for each user
    2) A media directory inside of which is my music, movies, etc...
    3) A shared directory called "Data" which is just open to anyone to store shared data
    4) A time machine directory for time machine backups
    You, of course, may set your system up in any way you like. To create these directories, do the following (substituting your own structure of course):

    sudo mkdir /mnt/nas/Users
    sudo mkdir /mnt/nas/Users/george
    sudo mkdir /mnt/nas/Users/ringo
    sudo mkdir /mnt/nas/Users/paul
    sudo mkdir /mnt/nas/Users/yoko
    sudo mkdir /mnt/nas/Data
    sudo mkdir /mnt/nas/Media
    sudo mkdir /mnt/nas/Media/Music
    sudo mkdir /mnt/nas/Media/Movies
    sudo mkdir /mnt/nas/TimeMachine
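    The same tree can be created in one pass with a loop and mkdir -p, which creates parent directories (Users/, Media/) automatically and makes case typos less likely. This is only a sketch: the BASE variable defaults to a scratch directory so you can try it safely anywhere; on the server you would point it at /mnt/nas and run it under sudo:

```shell
#!/bin/bash

# Sketch: create the directory tree in one pass. BASE defaults to a
# temporary directory for a safe dry run; on the server you would run
# something like:  sudo env BASE=/mnt/nas bash makedirs.sh
BASE="${BASE:-$(mktemp -d)}"

for dir in Users/george Users/ringo Users/paul Users/yoko \
           Data Media/Music Media/Movies TimeMachine; do
    mkdir -p "$BASE/$dir"    # -p creates Users/ and Media/ as needed
done

ls "$BASE"
```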








    Section #20 - Set up Netatalk (open source Apple File Protocol) and Avahi (open source Bonjour implementation)
    ----------------------------------------------------------------
    ----------------------------------------------------------------

    Notes:

    - Anything in bold blue monotype is a command you type at the command prompt. bold green monotype is used to show prompts from the computer. Pink italics are used to show either the contents of text files or miscellaneous prompts.
    - The info in this section came from: http://www.pronetworks.org/forums/co...s-t109371.html



    This server will act as a file server and Time Machine backup drive for a group of Macs (and one Dell hackintosh).

    This will require that we install Netatalk (which supplies an open source implementation of the Apple File Protocol) and Avahi (which is an open source implementation of Bonjour).

    Since I don't own any Windows machines, I am both unversed and uninterested in the details of making this server talk to them. That said, I am sure the process is equally simple. You should investigate Samba which is, as I understand it, an open source implementation of the Windows networking protocol.



    Step 1: Install Netatalk (open source implementation of Apple File Protocol).

    Netatalk is a service that will allow your Macs to talk natively to the server (vs. having to pretend they are Windows machines and communicate via Samba). The Ubuntu Server 11.10 package repository contains a version of netatalk that is compatible with OSX Lion; previous versions of Ubuntu (e.g. 11.04) do not.

    Note: If you are following along with this tutorial but trying to install an older version of Ubuntu you might want to meander over to http://andypeace.com/netatalk.html where a very nice fellow by the name of Andy Peace has compiled a version of netatalk that might work for you (if you are installing a 64 bit system). You can download the file from the command line by using the wget command (wget http://andypeace.com/netatalk_2.2.0-1_amd64.deb) and installing it using the dpkg command (sudo dpkg -i netatalk_2.2.0-1_amd64.deb)

    Install netatalk by typing:

    sudo apt-get install netatalk


    UPDATE 1/7/2012: The version of Netatalk currently available for install from the 11.10 repositories does not work with Lion (and may not work with previous versions of OSX either). In order to install the latest version, do the following:

    I used this thread to help diagnose the issues:
    https://bugs.launchpad.net/ubuntu/+s...lk/+bug/810732

    Start by setting up your server so that it can install packages outside of the official ones available for the 11.10 version of Ubuntu. These outside sources are referred to as PPAs (Personal Package Archives); the packages they hold are basically identical to ones you would install from the official repository, but are put together by members of the community and hosted elsewhere. Of course you want to be careful not to install just any package from just any source, but as long as you stick to respected sources you should be fine.

    To install the latest Netatalk PPA, we need to add its repository to our apt package management system's sources list. I used the following website as a reference on how to accomplish this: https://help.ubuntu.com/community/Re...es/CommandLine

    Go ahead and read that previous link for a very good description of what we will be doing (as I am going to go light on the description here). Note: In theory you should be able to add a repository simply by issuing the add-apt-repository command, but I was unable to get that to work (perhaps I would have needed to install the python-software-properties package first, but the following method works and so I never bothered trying to go any further with this command).

    Start by backing up your /etc/apt/sources.list file and then editing it. Do this by typing:

    sudo cp /etc/apt/sources.list /etc/apt/sources.list.backup
    sudo nano /etc/apt/sources.list

    Add the following line to the end of this file:

    Save and close the file.

    Update your repositories list by typing:

    sudo apt-get update

    Now you can install Netatalk by simply typing:

    sudo apt-get install netatalk



    Step 2: Configure Netatalk.

    Now we need to configure Netatalk so that it will work with your OSX machines.

    Before we do, it might be useful to go into a little background on some of the terminology here as Netatalk is a fairly complicated beast.

    OSX uses the concept of file IDs to track files (vs. the actual name/path of the file). From the Netatalk documentation (http://netatalk.sourceforge.net/2.2/...ml#id4164944):

    "Unlike other protocols like smb or nfs, the AFP protocol mostly refers to files and directories by ID and not by a path (the IDs are also called CNID, that means Catalog Node ID). A typical AFP request uses a directory ID and a filename, something like "server, please open the file named 'Test' in the directory with id 167". For example "Aliases" on the Mac basically work by ID (with a fallback to the absolute path in more recent AFP clients. But this applies only to Finder, not to applications)."

    The thing you should take away from this is that Netatalk has to maintain a database of Catalog Node IDs for each file you are storing on the server. This database (there are several options for which backend to use, but we will stick to the default, dbd) is responsible for tracking the CNIDs for every file transaction that occurs. Having this database is required for our Netatalk installation to work. Incidentally, this database is also one reason why it is a bad idea to mess with the files being served by Netatalk outside of Netatalk. It becomes very easy for your CNID database to get out of sync with what is actually on disk. If you do find you need to manipulate these files from a command line on your server, use the ad tool that was installed alongside Netatalk (http://netatalk.sourceforge.net/2.2/htmldocs/ad.1.html). We will actually be creating a special login for doing exactly that in a moment.

    Another thing to note is that the file protocol that we will be using is called AFP (Apple File Protocol). There is a rich history of different file services (serving files, print servers, time servers, etc.) that come with legacy Apple products (OS 9 and older primarily) and Netatalk seems capable of handling them all. But as we don't want any unnecessary complications, we will be disabling them all and limiting ourselves to just using AFP (via the afpd daemon).


    -------------------------------------
    Ok, now let's edit the Netatalk config file:

    sudo nano /etc/default/netatalk
    First, find the lines that start with:

    CNID_METAD_RUN
    AFPD_RUN
    And make sure they are both set to "yes". The first line turns on the default CNID database and has it run as a daemon. The second line turns on the Apple File Protocol and has it run as a daemon as well.


    Next, find the lines that start with:

    ATALKD_RUN
    PAPD_RUN
    TIMELORD_RUN
    A2BOOT_RUN
    And make sure they are all either set to "no" or commented out. These are all legacy protocols that we will not be using.

    Save and close the file.
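    After these edits, the relevant lines of /etc/default/netatalk should look roughly like this (a sketch; your file will contain other variables and comments as well):

```shell
# Relevant lines of /etc/default/netatalk after editing
CNID_METAD_RUN=yes    # CNID database daemon - required
AFPD_RUN=yes          # Apple File Protocol daemon - required
ATALKD_RUN=no         # legacy AppleTalk transport - off
PAPD_RUN=no           # legacy print server - off
TIMELORD_RUN=no       # legacy time server - off
A2BOOT_RUN=no         # legacy Apple II netboot - off
```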


    -------------------------------------
    Next we want to configure the Apple File Protocol service (afpd). To do this we need to edit the afpd config file:

    sudo nano /etc/netatalk/afpd.conf
    And add the following line to the end of the file:

    - -tcp -noddp -uamlist uams_dhx.so,uams_dhx2.so -nosavepassword


    UPDATE 1/7/2012: Apparently uams_dhx2.so does not play nice with OSX Lion, so we should update this line to use uams_dhx2_passwd.so

    - -tcp -noddp -uamlist uams_dhx.so,uams_dhx2_passwd.so -nosavepassword

    (Note that this line starts with a dash, space, dash). Save and close the file.

    Here is a breakdown of the settings and what they mean:

    -tcp makes Netatalk serve AFP over TCP (vs. the legacy AppleTalk transport).
    -noddp disables AFP-over-Appletalk. Appletalk is a legacy protocol no longer needed for OSX clients so we turn it off.
    -uamlist provides a list of possible authentication methods (how the server authenticates the user). We are going to limit it to uams_dhx.so and uams_dhx2.so (or uams_dhx2_passwd.so, per the update above), all of which are fairly modern and fairly secure.
    -nosavepassword is fairly self explanatory. It will not store the user's password once authentication has been completed.

    Note that all of the above settings are the default, so technically you could skip adding this line completely.


    -------------------------------------
    Finally, set up the shared volumes (the "drives" that your Macs will see when they log into the server). In my case, I want one drive for each user (yoko, paul, ringo, and george), one drive for media, one drive for general data, and one drive to act as a time machine drive. In other words, the following remote drives will be visible to the Macs. They will look like separate drives even though they are merely separate directories on your RAID array (not all of these drives will actually be visible to every user though; I'll explain in a moment):

    yoko
    paul
    ringo
    george
    Data
    Media
    TimeMachine


    To set up these drives we will have to edit the AppleVolumes.default file and add one line for each "drive".

    sudo nano /etc/netatalk/AppleVolumes.default

    Find the line that looks like:

    ~/ "Home Directory"

    And comment it out. We will not be allowing users to log into their Linux home directories on this server. Instead they will be using the specific directories we created per user on the RAID.

    Next, add the following lines to the end (adjust the usernames and paths as appropriate, and add any additional directories as desired). We are explicitly defining which directories are allowed access by which users. The pattern is:

    <path to directory> "<name to display>" <list of allowed users> <options>

    To accomplish this, I have added the following lines:

    /mnt/nas/TimeMachine "TimeMachine" allow:george,ringo,paul,yoko cnidscheme:dbd options:tm
    /mnt/nas/Data "Data" allow:george,ringo,paul,yoko cnidscheme:dbd
    /mnt/nas/Media "Media" allow:george,ringo,paul,yoko cnidscheme:dbd
    /mnt/nas/Users/george "george" allow:george cnidscheme:dbd
    /mnt/nas/Users/ringo "ringo" allow:ringo cnidscheme:dbd
    /mnt/nas/Users/paul "paul" allow:paul cnidscheme:dbd
    /mnt/nas/Users/yoko "yoko" allow:yoko cnidscheme:dbd

    Note the "allow" statements. Any drive allowed for a user here will appear in the Finder on a Mac where that user is logged in. Any drive that is not allowed for a user will not be visible.
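    Since a typo in an allow list silently hides a share, you may want a quick sanity check that every name mentioned in an allow: option matches a real account. This is a sketch of my own devising (the function name check_allows is made up); it pulls the names out of the file and looks each one up with getent:

```shell
#!/bin/bash

# Hypothetical helper: report any user named in an allow: list that does
# not exist on the system. On the server you would run:
#   check_allows /etc/netatalk/AppleVolumes.default
check_allows() {
    grep -o 'allow:[^ ]*' "$1" | cut -d: -f2 | tr ',' '\n' | sort -u |
    while read -r user; do
        getent passwd "$user" > /dev/null || echo "no such user: $user"
    done
}
```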



    Step 3: Install Avahi (an open source Bonjour service).

    Install avahi by typing:

    sudo apt-get install avahi-daemon

    Configure it by typing:

    sudo nano /etc/nsswitch.conf

    Find the line that starts with hosts and add

    mdns

    to the end of the line.

    Save and close the file.


    Create a new file for Avahi so it knows what services to broadcast to the Macs. Do this by typing:

    sudo nano /etc/avahi/services/afpd.service

    and paste in the following text

    Code:
    <?xml version="1.0" standalone='no'?><!--*-nxml-*-->
    <!DOCTYPE service-group SYSTEM "avahi-service.dtd">
    <service-group>
      <name replace-wildcards="yes">%h</name>
      <service>
        <type>_afpovertcp._tcp</type>
        <port>548</port>
      </service>
      <service>
        <type>_device-info._tcp</type>
        <port>0</port>
        <txt-record>model=Xserve</txt-record>
      </service>
    </service-group>
    You may replace the text "Xserve" with any of the following to get a different icon:

    ▪ iMac
    ▪ MacPro
    ▪ MacBook
    ▪ MacBookPro
    ▪ MacBookAir
    ▪ PowerBook
    ▪ PowerMac
    ▪ Macmini
    ▪ Xserve
    ▪ AirPort

    Save and close the file and then restart avahi by typing:

    sudo service avahi-daemon restart




    Step 4: Set up a script to list, copy, move, and delete netatalk managed files from the command line.

    It is a bad idea to move files around behind netatalk's back. As mentioned before, netatalk maintains a database of file IDs that has to stay in sync with the actual files on disk. This CNID database can easily get out of sync if you use either an ssh session or the console to move files that netatalk is managing, because netatalk will not be aware of the changes. The best approach is to use the Finder on your Mac (or the Terminal on your Mac, working through the mounted share, not via ssh) to manipulate files. However, there may be times when we want to use ssh or the console to manage our files. In these cases we will need to use tools other than ls, cp, mv, and rm. Instead we will use a netatalk-supplied tool called "ad" which allows us to do many of the same file manipulations, but in a manner that keeps netatalk in the loop.

    That said, it can be very easy to forget to use the correct tool at the correct time. To solve that, we are going to write a small script that checks to see whether we are in a netatalk managed directory. If we are, it will automatically use the ad tool to perform our requested operation. If not, it will fall back to the standard shell commands. It isn't a foolproof system, but it should cover nearly all of the situations we find ourselves in.

    Again, please note that these tools are only necessary should you want to move files around when logged into the server via ssh or directly from the console. If you are working from your Mac, you may use the Finder or the Terminal (not via ssh) to move files around, since all of those actions are filtered through netatalk already.

    Next, we will have to "wrap" the normal shell commands ls, mv, cp, and rm with our custom scripts to accomplish this. Let's start with 'ls'.




    Wrap 'ls'

    To create a new 'ls' command, type:

    sudo nano /usr/local/sbin/lstemp

    and copy or enter the following:

    Code:
    #!/bin/bash
    
    WDIR=`pwd`
    
    #if the .AppleDouble directory is present, it is safe to assume it is a netatalk managed dir
    if [ -d "$WDIR/.AppleDouble" ]; then
    
            newargs=""
            #check for valid args, discarding any that are not supported
            for arg in $@; do
                    if [[ $arg != --* ]]; then
                            if [[ $arg == -* ]]; then
                                    if [[ $arg == *l* ]]; then
                                            newargs="$newargs -l"
                                    fi
                                    if [[ $arg == *d* ]]; then
                                            newargs="$newargs -d"
                                    fi
                                    if [[ $arg == *R* ]]; then
                                            newargs="$newargs -R"
                                    fi
                                    if [[ $arg == *u* ]]; then
                                            newargs="$newargs -u"
                                    fi
                            fi
                    fi
            done
    
            #add any non-optional arguments (anything that does not start with a - )
            for arg in $@; do
                    if [[ $arg != -* ]]; then
                            newargs="$newargs $arg"
                    fi
            done
    
            #let the user know they are using a non-standard version of ls
            echo -e "\e[00;41mMAC MODE\e[00m"
    
            #execute it
            /usr/bin/ad ls $newargs 1>&1
    
            #let them know again that they are using a non-standard version of ls
            echo -e "\e[00;41mMAC MODE\e[00m"
    
    else
    
            #just run ls in the normal manner
            /bin/ls $@ 1>&1
    
    fi
    Save and close the file.

    Make it executable by typing:

    sudo chmod 755 /usr/local/sbin/lstemp

    Test the script by changing directories to different locations on your server. For example, type:

    cd /usr/local/sbin
    lstemp -l

    You should get a regular listing of the contents of that directory.

    Next, switch to a directory being managed by netatalk.

    cd /mnt/nas/
    lstemp -l

    You should get a slightly different output, one that is identified with red banners indicating that you are in the MAC MODE.

    If this all works, rename your script to ls. This way, whenever you are in a directory being managed by netatalk, you will use the ad tools, and whenever you are in a normal directory you will use the standard Linux ls tool.

    sudo mv /usr/local/sbin/lstemp /usr/local/sbin/ls

    Note, you can always use the original ls command at any time (even if you are in a netatalk managed directory) by typing:

    /bin/ls




    Wrap 'cp'

    Next, we will want to wrap the cp command. To create a new cp command, type:

    sudo nano /usr/local/sbin/cptemp

    and copy or enter the following:

    Code:
    #!/bin/bash
    
    WDIR=`pwd`
    
    #if the .AppleDouble directory is present, it is safe to assume it is a netatalk managed dir
    if [ -d "$WDIR/.AppleDouble" ]; then
    
            newargs=""
            #check for valid args, discarding any that are not supported
            for arg in $@; do
                    if [[ $arg != --* ]]; then
                            if [[ $arg == -* ]]; then
                                    if [[ $arg == *a* ]]; then
                                            newargs="$newargs -a"
                                    fi
                                    if [[ $arg == *f* ]]; then
                                            newargs="$newargs -f"
                                    fi
                                    if [[ $arg == *i* ]]; then
                                            newargs="$newargs -i"
                                    fi
                                    if [[ $arg == *n* ]]; then
                                            newargs="$newargs -n"
                                    fi
                                    if [[ $arg == *p* ]]; then
                                            newargs="$newargs -p"
                                    fi
                                    if [[ $arg == *R* ]]; then
                                            newargs="$newargs -R"
                                    fi
                                    if [[ $arg == *v* ]]; then
                                            newargs="$newargs -v"
                                    fi
                                    if [[ $arg == *x* ]]; then
                                            newargs="$newargs -x"
                                    fi
                            fi
                    fi
            done
    
            #add any non-optional arguments (anything that does not start with a - )
            for arg in $@; do
                    if [[ $arg != -* ]]; then
                            newargs="$newargs $arg"
                    fi
            done
    
            #let the user know they are using a non-standard version of cp
            echo -e "\e[00;41mCOPYING IN MAC MODE\e[00m"
    
            #execute it
            /usr/bin/ad cp $newargs 1>&1
    
    else
    
            #just run cp in the normal manner
            /bin/cp $@ 1>&1
    fi
    Save and close the file.

    Make it executable by typing:

    sudo chmod 755 /usr/local/sbin/cptemp

    Test the script by changing directories to different locations on your server. For example, type:

    cd /usr/local/sbin
    sudo cptemp cptemp deleteme

    If you now do a listing of this directory, you should see that you have successfully copied the cptemp file into a file called deleteme. Don't delete this file just yet as it will prove to be a good test subject for our next script.

    Next, switch to a directory being managed by netatalk (you may have to use your Finder to drag a couple of files into this directory so that you have files to try copying). Use cptemp to copy a file here as well (name the destination something like deleteme). You should be notified that you are doing a Mac Mode copy. Check that the file does indeed exist by using the ls command (note that it should use Mac Mode automatically as well). Again, do not delete this file just yet as it will be useful for our next script.

    If this all works, rename your script to cp. This way, whenever you are in a directory being managed by netatalk, you will use the ad tools, and whenever you are in a normal directory you will use the standard Linux cp tool.

    sudo mv /usr/local/sbin/cptemp /usr/local/sbin/cp

    Note, you can always use the original cp command at any time (even if you are in a netatalk managed directory) by typing:

    /bin/cp




    Wrap 'rm'

    Next, we will want to wrap the rm command. To create a new rm command, type:

    sudo nano /usr/local/sbin/rmtemp

    and copy or enter the following:

    Code:
    #!/bin/bash
    
    WDIR=`pwd`
    
    #if the .AppleDouble directory is present, it is safe to assume it is a netatalk managed dir
    if [ -d "$WDIR/.AppleDouble" ]; then
    
            newargs=""
            #check for valid args, discarding any that are not supported
            for arg in $@; do
                    if [[ $arg != --* ]]; then
                            if [[ $arg == -* ]]; then
                                    if [[ $arg == *R* ]]; then
                                            newargs="$newargs -R"
                                    fi
                                    if [[ $arg == *v* ]]; then
                                            newargs="$newargs -v"
                                    fi
                            fi
                    fi
            done
    
            #add any non-optional arguments (anything that does not start with a - )
            for arg in $@; do
                    if [[ $arg != -* ]]; then
                            newargs="$newargs $arg"
                    fi
            done
    
            #let the user know they are using a non-standard version of rm
            echo -e "\e[00;41mDELETING IN MAC MODE\e[00m"
    
            #execute it
            /usr/bin/ad rm $newargs 1>&1
    
    else
    
            #just run rm in the normal manner
            /bin/rm $@ 1>&1
    
    fi
    Save and close the file.

    Make it executable by typing:

    sudo chmod 755 /usr/local/sbin/rmtemp

    Test the script by returning to the directories where you had copied files in the previous example:

    cd /usr/local/sbin

    and delete the deleteme file you had created earlier by typing:

    sudo rmtemp deleteme

    If you now do a listing of this directory, you should see that you have successfully deleted the file called deleteme.

    Next, switch to the same directory you had used to test netatalk copying before. (This was the directory where you most likely had to copy some files into via the Finder so that you could test the cp command). You should have an extra file in here called something similar to deleteme. Use rmtemp to delete this file. It should be successfully deleted, and you should be notified that you were deleting in Mac Mode.

    If this all works, rename your script to rm. This way, whenever you are in a directory being managed by netatalk, you will use the ad tools, and whenever you are in a normal directory you will use the standard Linux rm tool.

    sudo mv /usr/local/sbin/rmtemp /usr/local/sbin/rm

    Note, you can always use the original rm command at any time (even if you are in a netatalk managed directory) by typing:

    /bin/rm




    Wrap 'mv'

    Finally, we will want to wrap the mv command. To create a new mv command, type:

    sudo nano /usr/local/sbin/mvtemp

    and copy or enter the following:

    Code:
    #!/bin/bash
    
    WDIR=`pwd`
    
    #if the .AppleDouble directory is present, it is safe to assume it is a netatalk managed dir
    if [ -d "$WDIR/.AppleDouble" ]; then
    
            newargs=""
            #check for valid args, discarding any that are not supported
            for arg in $@; do
                    if [[ $arg != --* ]]; then
                            if [[ $arg == -* ]]; then
                                    if [[ $arg == *f* ]]; then
                                            newargs="$newargs -f"
                                    fi
                                    if [[ $arg == *i* ]]; then
                                            newargs="$newargs -i"
                                    fi
                                    if [[ $arg == *n* ]]; then
                                            newargs="$newargs -n"
                                    fi
                                    if [[ $arg == *v* ]]; then
                                            newargs="$newargs -v"
                                    fi
                            fi
                    fi
            done
    
            #add any non-optional arguments (anything that does not start with a - )
            for arg in $@; do
                    if [[ $arg != -* ]]; then
                            newargs="$newargs $arg"
                    fi
            done
    
            #let the user know they are using a non-standard version of mv
            echo -e "\e[00;41mMOVING IN MAC MODE\e[00m"
    
            #execute it
            /usr/bin/ad mv $newargs 1>&1
    
    else
    
            #just run mv in the normal manner
            /bin/mv $@ 1>&1
    
    fi
    Save and close the file.

    Make it executable by typing:

    sudo chmod 755 /usr/local/sbin/mvtemp

    Test the script by typing the following:

    cd /usr/local/sbin
    sudo cp ls renameme

    If you now do a listing of this directory, you should see that you have successfully copied the file called ls into a file called renameme. This isn't new; you have already done this exact same experiment above. But now we will rename this file. Type:

    sudo mvtemp renameme deleteme

    If you list the files in this directory, you should now see that the file renameme has been renamed to deleteme. (No Mac Mode banner appears here because /usr/local/sbin is not a netatalk managed directory.) Go ahead and delete this file by typing:

    sudo rm deleteme

    Again, no Mac Mode banner appears because this directory is not managed by netatalk; the wrapper falls through to the standard rm.

    Next, switch to the same directory you had used to test netatalk copying before. (This was the directory where you most likely had to copy some files into via the Finder so that you could test the cp command). Go ahead and copy a file here as well. Name the file you are copying renameme. You should be notified that you are copying in Mac Mode. Now move this file by using the mvtemp tool (move it to a file called deleteme). If you list the contents of the directory you should see that the file has been renamed from renameme to deleteme. Go ahead and delete this new file using the rm tool (again, you should be notified that you are deleting in Mac Mode).

    If this all works, rename your script to mv. This way, whenever you are in a directory being managed by netatalk, you will use the ad tools, and whenever you are in a normal directory you will use the standard Linux mv tool.

    sudo mv /usr/local/sbin/mvtemp /usr/local/sbin/mv

    Note, you can always use the original mv command at any time (even if you are in a netatalk managed directory) by typing:

    /bin/mv
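    If you want to see the directory test these wrappers rely on in isolation, here is a minimal sketch (assuming, as the custom prompt later in this guide does, that a Netatalk-managed directory is identified by the presence of an .AppleDouble subdirectory):

```shell
# in_mac_mode DIR: report whether a directory is managed by Netatalk,
# using the presence of an .AppleDouble subdirectory as the marker.
in_mac_mode() {
    if [ -d "$1/.AppleDouble" ]; then
        echo "MAC MODE"
    else
        echo "normal"
    fi
}
```

    The wrappers effectively run this same test against the current directory to decide between calling /usr/bin/ad and the stock tool.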


    UPDATE 1/7/2012: To better make sure we know when we are in a Netatalk managed directory or not, we will also update the prompt on our bash shell.

    To create a custom prompt for our bash shell, edit your .bashrc file (this script runs automatically every time you start a new shell, whether via ssh or on the console, so it is where we put commands that should run in every new shell).

    To edit this file, type the following:

    nano ~/.bashrc

    Once the file is open, add the following line to the end of it:

    Code:
    PS1='$(if [ -d "$PWD/.AppleDouble" ]; then echo "\[\e[00;41m\]MAC MODE\[\e[00m\] \[\033[01;32m\][\w]\[\033[00m\]: "; else echo "\[\033[01;32m\][\w]\[\033[00m\]: "; fi)'
    Save and close the file.

    Now, whenever you create a new shell, your prompt will intelligently change depending on which directory you are currently in (showing the text "MAC MODE" in red whenever you are in a Netatalk managed directory).




    Section #21 - Set up time machine on your Mac
    ----------------------------------------------------------------
    ----------------------------------------------------------------


    There really isn't too much to this step. On your Mac, turn on Time Machine. When it lists the possible drives you could use, select the one on your Ubuntu server. Done.
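    One hedge: on some versions of OS X (notably pre-Lion), network volumes that Apple does not officially support will not appear in Time Machine's disk list until you enable them. If your server's share is missing, try running the following in Terminal on the Mac (this is a Mac-side setting; there is nothing to change on the server):

```shell
# Allow Time Machine to list unsupported network volumes (run on the Mac)
defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1
```

    Then reopen the Time Machine preference pane and the Netatalk share should be selectable.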






    Section #22 - Install Subsonic - a web-based media server
    ----------------------------------------------------------------
    ----------------------------------------------------------------


    iTunes sucks hard (and this is coming from a total Mac-head). That said, if you must use it you can serve music to iTunes devices by installing a daap server called forked-daapd. On the plus side, it will act as though it were another iTunes machine on your network, sharing all of its music, including serving up DRM protected tracks that you may have purchased from the iTunes store when you were younger and much dumber (too dumb to realize that buying DRM encumbered music is completely nuts). On the minus side, it will act as though it were another iTunes machine on your network, sharing all of its music. This means you cannot use cover flow. You cannot make playlists. You cannot see your music in any format other than a giant list of every track on the server. You cannot rate the music.

    So feel free to install forked-daapd if you want to use iTunes (I used it for a while back when I had FreeNAS installed, and I still have it installed on this Ubuntu server) but I am not going to bother walking you through it. It is fairly self explanatory and I am not interested in using it myself. That said, I do not wish to disparage forked-daapd itself. It is an excellent package that does exactly what it is supposed to do. It is just iTunes that sucks eggs.

    Instead, I am going to suggest you install Subsonic (http://subsonic.org/pages/index.jsp), which is a Java based music server that will allow you to stream music to any machine in the house and to any device out on the internet. The pluses are that you have much better control over your music from wherever you may be listening to it. It is web-based, so you really only need a browser to connect (though we will not actually be using it that way). It is much more flexible and much easier to use than iTunes. The minuses are that you have to install Java on your server; you have to have a web server operating (though it is built into Subsonic and isn't a separate web server like Apache), which has unknown ramifications for security; and, finally, the client-side web view of your music is so ugly it will make you want to scratch your eyes out. We are talking Android ugly here (and I am a HUGE Android fan by the way, but both it and Subsonic got hit with the ugly stick… hard).

    Finally, Subsonic is somewhere between shareware and donationware. You are free to use it on a fairly basic level without paying anything. If you feel it is worth something (and I do) you are encouraged to kick a tiny amount of money back to the developer. If you want to use any of the more advanced features, you are required to make a donation, but the amount is still flexible and up to you. I personally have donated some money (more than the minimum amount) because I truly appreciate the effort that went into making a tool that blows iTunes out of the water. Also, maybe if enough people donate the developer can hire a designer and make all of our lives better. (I'm teasing!)

    There may be more than one way to install Subsonic. I noticed that you can simply type:

    sudo apt-get install subsonic

    And perhaps this will work. I have no idea. The actual subsonic website has different instructions for installation and that is what I followed. Since then I have run the above command and all it does is tell me that my Subsonic package is up to date. Feel free to try this first method though. It may be all you need.

    Otherwise, to install Subsonic following the instructions on the website, do the following. First, install Java by typing:

    sudo apt-get install openjdk-6-jre

    This may take a while to install.

    Next, we want to download the latest version of Subsonic to our home directory. To do this we will use the wget command. wget very simply will download any file you point it to on the internet. In our case, we will want to go to Subsonic's download site (which is on SourceForge). To begin, on our Mac, go to the following URL:

    http://sourceforge.net/projects/subs...iles/subsonic/

    There we will see a list of different versions. Click on the latest version number that isn't a beta (4.6 in my case). You will then be given a list of files that could be downloaded. Don't click on any of them! Instead, right-click on the one that ends in .deb (a Debian package; the Ubuntu Software Center and apt-get are based on Debian's packaging system) and copy the URL.

    Now, in your ssh session type the following:

    cd ~
    wget <paste the URL here>

    You should get a notification that you are downloading the .deb package.

    Once it has downloaded, we need to actually install it. Do this by typing:

    sudo dpkg -i subsonic-4.6.deb

    It will now install the Subsonic server for you.
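    The server can take several seconds after installation (or a restart) before it is actually listening on port 4040, so a connection that fails immediately does not necessarily mean something is broken. Here is a small bash helper that polls for the port (a sketch using bash's built-in /dev/tcp; the port number and timeout are just this guide's defaults):

```shell
# wait_for_port PORT [TRIES]: poll localhost once a second until the
# port accepts a connection, or give up after TRIES attempts.
wait_for_port() {
    local port=$1 tries=${2:-30} i
    for ((i = 0; i < tries; i++)); do
        if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
            echo "port $port is up"
            return 0
        fi
        sleep 1
    done
    echo "port $port never came up"
    return 1
}
```

    For example, `wait_for_port 4040 && echo "Subsonic is ready"` will sit quietly until the web interface is reachable.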





    By default, Subsonic runs Java as root. This seems like it is a very bad idea and, luckily, it is changeable. I used the following website to help me through this: http://forum.subsonic.org/forum/view...php?f=6&t=6146

    achoo5000's description is so good that I am simply going to copy the first bit verbatim with just a few tiny edits to make it fit in here (hope he or she does not mind). I am going to deviate a bit when it comes to the last few steps.


    ------------

    First step is to install everything as normal using the .deb package.

    Now make another user (called subsonic in this case):

    sudo useradd subsonic

    We now have to give this user ownership of all the subsonic data:

    cd /var/subsonic/
    sudo chown -R subsonic:subsonic .

    I had a problem where mine would crash when trying to write to /tmp because there was already data there. You could just change the ownership as above, but I simply deleted it:

    cd /tmp
    sudo rm -rf subsonic/

    ------------

    From here I am deviating a bit from achoo's instructions (it may simply be that I have a newer install of Subsonic which does things in a different way).


    Edit the Subsonic startup script. To do this, type:

    sudo nano /etc/default/subsonic

    Near the end of the file, change the line that reads:

    SUBSONIC_USER=root

    to

    SUBSONIC_USER=subsonic

    Save and close the file.


    Restart Subsonic by typing:

    sudo service subsonic restart





    To listen to music on your Mac, point your browser to:

    http://172.16.1.120:4040

    (of course, you should substitute your server's actual IP address for 172.16.1.120)

    You should be asked to log into Subsonic. Go ahead and log in as instructed (you will be guided through the first time). Make a new user in the Subsonic UI. You can name this user anything you want. I simply chose "music" because I want both my fiancee and myself to be able to log in as the same user and manage and share the same playlists. But do whatever makes you happy.

    With regard to the nasty UI: I downloaded Submariner from the Apple Mac App Store. It is a desktop client for the Subsonic server and looks a lot better (plus you don't have to have a browser running). There are other desktop clients as well. At this point, just use whatever you like.







    Section #23 - Make your server visible to the outside world
    ----------------------------------------------------------------
    ----------------------------------------------------------------


    Generally, you want to keep your server locked up tight and hidden behind your router. But if you want to connect to it from outside your local network, you will have to open up your router a bit and forward ports to the server. I may go into more detail about this process in the future, but it has to be pretty generalized simply because the actual process varies from router to router. The basic idea is that any ports you want externally accessible will need to be forwarded from the router to your server.

    Here is a list of all the ports I think you will want to forward:

    ssh (in our example: 22512)
    Subsonic (port 4040)
    All of the "knocking" ports (7000, 8000, 9000 in our example)

    Some things to consider: If possible, forward a non-standard port to port 4040. In other words, have port 2101 (for example) forwarded to port 4040 on the server. This way you will connect to Subsonic on port 2101 (a non-standard port) when you are on an external network trying to listen to your music.

    You will also need some way of knowing the IP address of your router when you are out and about. There are services that allow you to set that up, but I intend to do it a bit differently (again, a bit of added security by obscurity). I do not have time to do that just now, so for the time being you can sign up for a service like http://www.no-ip.com/. Note: I have never used this service and I have no idea whether it is any good or whether there are better services out there. Again, I intend to roll my own but do not have time to do that just yet.
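    For the curious, the "roll my own" approach I am alluding to could be as small as a cron job that looks up the external address and appends it to a file you sync somewhere public. A sketch (the lookup service shown, icanhazip.com, is just an example placeholder; substitute whatever you trust, and obfuscate the output however you like):

```shell
# publish_ip FILE [LOOKUP_CMD]: record the current external IP, with a
# date stamp, in FILE. The lookup command is injectable so you can swap
# in any service (or a stub when testing).
publish_ip() {
    local outfile=$1
    local lookup_cmd=${2:-"curl -s http://icanhazip.com"}
    local ip
    ip=$($lookup_cmd) || return 1
    echo "$(date +%F) $ip" >> "$outfile"
}
```

    Run it hourly from cron, sync the file via Dropbox or your web host, and the other machine simply reads the newest line.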








    Section #24 - Back your server up to a duplicate machine across the country
    ----------------------------------------------------------------
    ----------------------------------------------------------------


    Well, this part is simply going to have to wait. I do not have the money to build a second machine right now. When I do, I will post back here (editing this post actually).

    If you want to take a stab at it, I expect I will be using the rsync utility (or Duplicity, which builds on top of rsync). I will also need a means by which each server can learn the IP address of the other (these will change over time and cannot be predicted). There are services which will translate a domain to a dynamic IP address, and I may use one of those. Alternatively, I may simply have each server periodically check its external IP address and post it somewhere public in an obfuscated format (like to Dropbox or to my hosted web server). I don't know.
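    To give a flavor of what that rsync call might eventually look like, here is the general shape, exercised locally between two directories (the remote form, with a hypothetical host and paths, would add -e "ssh -p 22512" and a user@host: prefix to the destination):

```shell
# Mirror src/ into dst/. Archive mode (-a) preserves permissions and
# times; --delete makes dst an exact mirror, removing anything that was
# deleted from src. The trailing slash on src/ means "contents of".
src=$(mktemp -d)
dst=$(mktemp -d)
echo "precious data" > "$src/file.txt"
rsync -a --delete "$src/" "$dst/"
cat "$dst/file.txt"    # prints: precious data
rm -rf "$src" "$dst"
```

    Be careful with --delete: if the source side is ever wrong (say, after an accidental wipe), the mirror faithfully reproduces the damage, which is exactly why a mirror alone is not a full backup strategy.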

    But do know this: RAID does not equal BACKED UP!

    Just because you have your data on a RAID set does not mean you have it backed up. First off, what happens if you accidentally issue a command to delete a file? Or a directory? What if your server gets compromised or you drunkenly issue the wrong command and wipe out the entire file system? What if there is a fire, or your house gets broken into? RAID only protects you against disk failure (and RAID 5 only protects against a single disk failing. If two fail, you are in trouble).

    YOU MUST BACK UP YOUR DATA SO THAT IT EXISTS IN MORE THAN ONE LOCATION. PREFERABLY NOT IN THE SAME BUILDING.

    Seriously. Do it. It is a little like my beautiful iMac. I had thought about installing theft tracking software like Prey (http://preyproject.com/) but always figured I had time to do it. Then one day I came home to find stuff all over the floor and my 2-month-old 27" iMac missing (the Kensington lock didn't do squat). All of a sudden it was too late. Luckily I had backed my system up, but it was too late to do anything else.

    Backups are like that. You don't really see the need for them until suddenly you do and then you will really be sorry. Back your data up now! (I am currently doing that by copying the data from the server to my new iMac but I will come up with a better solution soon). When I get it figured out, I will post here again.








    Section #25 - That's it! Hope it was helpful
    ----------------------------------------------------------------
    ----------------------------------------------------------------


    Let me know if you have any questions, comments, corrections, or suggestions. I will attempt to adjust the posts here to reflect them.

    Also, I would be very interested in any feedback, including typos and formatting issues. I want to have a quality product here.

    Finally, good luck! I really hope it helps some folks and I really hope it helps us newbies learn a thing or two about the wonderful world of Linux server administration. Once you have set up your own server, not only do you have total control over your own data, you can really start to understand what kinds of services you need and don't need - even if you eventually decide to switch away to a packaged NAS or even a cloud based system.
    Last edited by bvz; March 26th, 2012 at 07:32 AM. Reason: Updated 3/25/2012 to fix typos in sensors_monitor.sh

  2. #2
    Join Date
    Aug 2010
    Beans
    25

    Re: Step by step guide to setting up Ubuntu 11.10 server for Newbies!

    great tutorial. thank you! I haven't made it all the way through it yet but I am setting up Ubuntu server w/ a Ubuntu client. The only addition I've seen so far from your set up with the Mac is that for the Ubuntu client I had to issue the 'ssh-add' command to allow my private/public key pair to work properly.

  3. #3
    Join Date
    Sep 2007
    Beans
    75

    Re: Step by step guide to setting up Ubuntu 11.10 server for Newbies!

    Quote Originally Posted by jdawgvh View Post
    great tutorial. thank you! I haven't made it all the way through it yet but I am setting up Ubuntu server w/ a Ubuntu client. The only addition I've seen so far from your set up with the Mac is that for the Ubuntu client I had to issue the 'ssh-add' command to allow my private/public key pair to work properly.
    Glad the tutorial is of some help to you.

    Just a clarification: Did you have to issue the ssh-add command on your Ubuntu client or on the server?

  4. #4
    Join Date
    Aug 2010
    Beans
    25

    Re: Step by step guide to setting up Ubuntu 11.10 server for Newbies!

    On my client I ran these steps:

    ssh-keygen
    scp -P 22512 ~/.ssh/id_rsa.pub admiral@172.16.1.16:.ssh/authorized_keys2 (replacing your information for mine)
    ssh-add
    ssh -p 22512 -l admiral 172.16.1.16

  5. #5
    Join Date
    Sep 2009
    Beans
    9

    Re: Step by step guide to setting up Ubuntu 11.10 server for Newbies!

    If this was a quick set-up guide for a simple file server set-up, I would hate to see the advanced version. 80% of this had nothing to do with setting up a small plain file server.

  6. #6
    Join Date
    Sep 2007
    Beans
    75

    Re: Step by step guide to setting up Ubuntu 11.10 server for Newbies!

    Quote Originally Posted by AndesHelp View Post
    If this was a quick set-up guide for a simple file server set-up, I would hate to see the advanced version. 80% of this had nothing to do with setting up a small plain file server.
    Hey, I'm totally open to any criticisms, but they would have to be a bit more specific (and, given the amount of time I put into writing this, a bit higher quality than an unsubstantiated dismissal).

    Instead of taking the time to write something but then skimping out on offering anything constructive, why don't you help out by telling me what specifically would you cut from this tutorial (given that it is aimed at people new to setting up a server).

    Edit:
    Also, I am not sure where you got the idea that this was supposed to be a "quick set-up guide".
    Last edited by bvz; January 7th, 2012 at 10:32 PM. Reason: Made a new year's resolution to be more snarky

  7. #7
    Join Date
    Sep 2007
    Beans
    75

    Update 1-7-2012

    So I updated the original post to make the following changes (and I am including these as a separate post here for quick reference):



    UFW
    I added the following rules in UFW to make sure DHCP works:
    ----------------------------------------------------------------------------------------------------------------------------

    sudo ufw allow in from 172.16.1.0/24 to any port 67
    sudo ufw allow out from 172.16.1.0/24 to any port 67




    Section 20, Step 1: Install Netatalk
    I changed the way Netatalk is installed because the default version in Oneiric does not play nicely with OS X Lion.
    ----------------------------------------------------------------------------------------------------------------------------

    The version of Netatalk currently available for install from the 11.10 repositories does not work with Lion (and may not work with previous versions of OS X either). In order to install the latest version, do the following:

    I used this thread to help diagnose the issues:
    https://bugs.launchpad.net/ubuntu/+s...lk/+bug/810732

    Start by setting up your server so that it can install packages outside of the official ones available for the 11.10 version of Ubuntu. These outside packages are referred to as PPAs (Personal Package Archives) and are basically identical to a package you would install from the official repository, but they are put together by members of the community and hosted elsewhere. Of course you want to be careful not to install just any package from just any source, but as long as you stick to respected sources you should be fine.

    To install the latest Netatalk ppa, we need to add the repository to our apt package management system sources list. I used the following website as a reference as to how to accomplish this: https://help.ubuntu.com/community/Re...es/CommandLine

    Go ahead and read that previous link for a very good description of what we will be doing (as I am going to go light on the description here). Note: In theory you should be able to add a repository simply by issuing the add-apt-repository command, but I was unable to get that to work (perhaps I would have had to install the python-software-properties package first, but the following method works, so I never bothered trying to go any further with this command).

    Start by backing up your /etc/apt/sources.list file and then editing it. Do this by typing:

    sudo cp /etc/apt/sources.list /etc/apt/sources.list.backup
    sudo nano /etc/apt/sources.list

    Add the following line to the end of this file:

    Save and close the file.

    Update your repositories list by typing:

    sudo apt-get update

    Now you can install Netatalk by simply typing:

    sudo apt-get install netatalk



    Section 20, Step 4: Configure bash prompt for better interaction with Netatalk
    Finally, I added a custom prompt to my .bashrc file to better inform me when I was in a netatalk managed directory when using ssh or the console.
    ----------------------------------------------------------------------------------------------------------------------------

    To better make sure we know when we are in a Netatalk managed directory or not, we will also update the prompt on our bash shell.

    To create a custom prompt for our bash shell, edit your .bashrc file (this script runs automatically every time you start a new shell, whether via ssh or on the console, so it is where we put commands that should run in every new shell).

    To edit this file, type the following:

    nano ~/.bashrc

    Once the file is open, add the following line to the end of it:

    Code:
    PS1='$(if [ -d "$PWD/.AppleDouble" ]; then echo "\[\e[00;41m\]MAC MODE\[\e[00m\] \[\033[01;32m\][\w]\[\033[00m\]: "; else echo "\[\033[01;32m\][\w]\[\033[00m\]: "; fi)'
    Save and close the file.

    Now, whenever you create a new shell, your prompt will intelligently change depending on which directory you are currently in (showing the text "MAC MODE" in red whenever you are in a Netatalk managed directory).






    Those are the updates for now. As always, let me know if you have any questions or comments!

  8. #8
    Join Date
    Nov 2008
    Beans
    3

    Re: Step by step guide to setting up Ubuntu 11.10 server for Newbies!

    wonderful work, this is going to be useful.
    I'm planning to set up some services and this is a great starting point.

    Thanx for putting this together

  9. #9
    Join Date
    May 2007
    Beans
    117
    Distro
    Xubuntu 12.04 Precise Pangolin

    Re: Step by step guide to setting up Ubuntu 11.10 server for Newbies!

    Wow.

    Fantastic job. Great work - fantastically great production.

    Why? Plain English descriptions. Common sense strategic application.

    Yes, you have some extra verbiage - you bring this to us as a story - and you could trim that back by stating your goals at the top - then following up with the instruction sets. But hey, I don't mind the story - I like it. It makes it much more comfortable and friendly. So don't trim it.

    Inside that story you do a superior job of explaining security, key-based security, iptables, RAID, and more.

    I want to point this out by giving a negative example. A search on avahi gets me a link to this:
    The Avahi mDNS/DNS-SD daemon implements Apple's Zeroconf architecture (also known as "Rendezvous" or "Bonjour"). The daemon registers local IP addresses and static services using mDNS/DNS-SD and provides two IPC APIs for local programs to make use of the mDNS . .
    I've been mucking about with computers for over 20 years, and linux for the past 4 or 5. I'm an intelligent, college-educated person. That description means nothing to me at all. I have to look up about every 3rd word just to feel like I have SOME grasp of what the author tried to say - and I still felt real shaky about answering "What does this mean to ME?". Compare that to what you wrote: "an open-source Bonjour alternative". There I go look up one thing - Bonjour, and say "Oh, ok."

    You've done a good job, setting up some very good real-life parameters and goals. Not something written by network admins for network admins.

    Here is another thank you - your examples of IP ranges. I can't tell you how many hours I have spent researching iptables and stuff trying to make the firewalls work the way I want them to work in linux. And, in all that time I have NEVER seen possible ways to write ranges of IP addresses stated so clearly or simply. I KNOW I have still had trouble getting various linux gui iptables tools to accept IP ranges properly. After all the research I did, I did not know you could use 0-250 as an equivalent to /24. If it does and it works, then you may have solved an unrelated problem.

    Massive amount of work you put in here. Excellent recording.

    Thanks.
    Last edited by mbuell; January 10th, 2012 at 04:22 PM.

  10. #10
    Join Date
    Mar 2010
    Location
    Metro-ATL
    Beans
    Hidden!
    Distro
    Lubuntu 14.04 Trusty Tahr

    Re: Update 1-7-2012

    Sorry for the late response - didn't read anything after
    196.0.10.* where * can be any number between 1 and 255
    line. You probably realize this is incorrect now.
    https://www.arin.net/knowledge/address_filters.html or the RFC has the specifics.

    10.0.0.0/8 IP addresses: 10.0.0.0 -- 10.255.255.255
    172.16.0.0/12 IP addresses: 172.16.0.0 -- 172.31.255.255
    192.168.0.0/16 IP addresses: 192.168.0.0 -- 192.168.255.255

    Setting up a "server" can be as easy as during the install, choosing "LAMP" or "ssh" from the installation menu. Adding all the RAID stuff is good, but might scare "noobs" away.

    I like to add either fail2ban or denyhosts on any machine with internet access - good to quickly block unwanted ssh (and other) attempts, right?
