One question when dealing with old hardware is whether it is fast enough for the intended purpose. Posts 1 and 4 discuss this topic and offer ideas for getting higher performance. Another question is whether it is safe to store data on an old hard disk.
In mechanical equipment, errors often follow the shape of the bathtub curve. Since this is about old gear, the left-hand part of the bathtub is not a concern; the only question is how long we can use the hard disk before the right-hand increase appears - if it ever does.
I am often surprised by how long a hard disk can live on in an error-free state; some of mine are now celebrating their 20th birthday after being exposed to a lot of distrohopping and reinstalling.
However, if a sign of malfunction appears then one has to heed the warning and take action right away. This applies to hard disks in general and not only recycled gear. There may not be a second chance.
A hard disk can fail in various ways; here we focus on bad sectors, which can develop over time. Fortunately they tend to appear in clusters rather than randomly distributed over the disk.
Error-handling routines in the hard disk firmware are expected to take care of individual sectors when an error occurs. They are often quite effective, and many well-behaved hard disks carry a few bad sectors unbeknownst to the user.
However, when resurrecting old hardware one should at least do a little testing before installing.
If the hard disk is dubious, one can simply choose not to use it, expecting that more errors are on the way. When discarding a hopeless computer, remember to keep the hard disk so there is always a stack of spares available.
Another option is to keep using the damaged disk, directing the installation to the intact parts. This is what we are going to discuss here. If one decides to store all user files in Google Drive or another cloud service then there's no risk associated with a failing hard disk.
As always: Regardless of the hard disk condition always back up to a physically and digitally separated location. If your important data are not backed up they are obviously not important to you.
Let's take an example. The commands can be run in a live boot or in an already installed Ubuntu / Debian.
First we execute
Code:
sudo fdisk -l
The top line of the output could look like
Code:
Disk /dev/sda: 74.6 GiB, 80060424192 bytes, 156368016 sectors
It tells us the number of sectors and that the hard disk is recognized as /dev/sda.
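If the sector count is needed in a script, it can be pulled out of that header line. A small sketch - the line below is simply the sample output from above, stored in a variable for illustration:

```shell
# Extract the total sector count from the fdisk header line.
# The sample line is the hypothetical output shown above.
line='Disk /dev/sda: 74.6 GiB, 80060424192 bytes, 156368016 sectors'
echo "$line" | awk '{ print $(NF-1) }'   # prints the second-to-last field: 156368016
```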
We would like to investigate /dev/sda further. Next command is
Code:
sudo badblocks -v /dev/sda
which can take a long time to run. It searches the entire /dev/sda hard disk for bad sectors and prints their locations if any are found.
From an installed system the command can also be executed as
Code:
sudo badblocks -v /dev/sda > badblocks.txt
which saves the output as a text file. An empty file indicates that no errors were found.
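badblocks also has a built-in -o option for the same purpose. As a sketch it can be tried risk-free on an ordinary scratch file, since badblocks accepts regular files as well as block devices (the file names here are made up for the demonstration):

```shell
# badblocks -o writes the bad-block list to a file instead of stdout.
# Demonstrated on a scratch image; on the real disk the target would be /dev/sda.
truncate -s 4M /tmp/scratch.img
badblocks -o /tmp/scratch.txt /tmp/scratch.img
wc -c < /tmp/scratch.txt   # 0 bytes - an empty list means no bad blocks found
```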
When the command has finished (for a large disk it might need to run the whole night) investigate the output. Say it looks like
Code:
3040076
3040077
3040078
3040079
3040548
3040549
3040551
12558888
12558889
12559356
12559357
12559358
12559359
(and 58 more 125xxxxx-numbers)
In other words we have one cluster with sector numbers 304xxxx and another with numbers 125xxxxx.
The largest of the numbers is 125xxxxx. Let's define an area up to and including this sector (plus a bit more for safety) and keep the installation away from it.
Since the total disk size is 156368016 sectors, we have to give up at least 12600000 / 156368016 = 8 % of the disk capacity. With 512-byte sectors this equals about 6 GiB lost.
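The highest bad sector and the numbers above can be double-checked from the saved list. A sketch, using an abbreviated stand-in for the badblocks output shown earlier:

```shell
# Abbreviated sample of the badblocks output shown above
printf '3040076\n3040548\n12559359\n' > /tmp/badblocks.txt
sort -n /tmp/badblocks.txt | tail -1                         # highest bad sector: 12559359
# Fraction of the disk given up by skipping the first 12600000 sectors
awk 'BEGIN { printf "%.1f\n", 100 * 12600000 / 156368016 }'  # about 8 percent
awk 'BEGIN { printf "%.1f\n", 12600000 * 512 / 1024^3 }'     # about 6 GiB
```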
A basic Debian install takes around 4 GiB so we have plenty of room (remember, we have already decided to store all files in Google Drive). For safety we decide to skip the first 20 GiB.
There will still be 74.6 GiB - 20 GiB = 54.6 GiB left as usable space.
We decide to erase the hard disk completely before installing. From a live boot run the command
Code:
sudo dd if=/dev/zero of=/dev/sda bs=1M
again assuming that the hard disk is sda. This command too can take a long time, and only the hard disk indicator light shows that anything is happening.
When finished, the message "No space left on device" will appear. Though it might sound like an error, it only indicates that the process is complete.
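With GNU dd (the version shipped with Debian and Ubuntu) a progress report can be requested with status=progress, so one is not left staring at the indicator light. Sketched here on a harmless scratch file; on the real disk the target would be of=/dev/sda:

```shell
# status=progress makes GNU dd print bytes written while it runs.
# Demonstrated on a scratch file; the real command targets /dev/sda.
dd if=/dev/zero of=/tmp/wipe-demo.img bs=1M count=4 status=progress
stat -c %s /tmp/wipe-demo.img   # 4194304 bytes (4 MiB) were written
```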
Some consider the command overkill, but at the very least one should erase the master boot record and the partition table. This is done by the (quick) command
Code:
sudo dd if=/dev/zero of=/dev/sda bs=512 count=1
After this
Code:
sudo fdisk -l /dev/sda
shows an empty (unpartitioned) disk as expected.
During the Debian installation, the installer offers automatic partitioning. Normally this is a good option, but here we are going to take control and create our own.
First, a partition of around 20 GiB should be created and marked "do not use".
After this comes a swap partition. I prefer to make it double the size of the RAM, but other people might have different opinions.
The remainder is used as the / (root) partition, formatted as ext4. I don't see the purpose of a separate /home partition, but if one is desired, now is the time to create it.
A simple partitioning scheme could be done using only primary partitions but if you are planning to do many experiments then extended partitions might be necessary.
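If the partitioning tool asks for a size in sectors rather than GiB, the 20 GiB guard zone translates as follows (plain shell arithmetic, 512-byte sectors as reported by fdisk above):

```shell
# 20 GiB expressed in 512-byte sectors
echo $(( 20 * 1024 * 1024 * 1024 / 512 ))   # 41943040 sectors
# well clear of the 12600000-sector area we decided to give up
```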
The rest of the installation is standard.
Now
Code:
sudo fdisk -l /dev/sda
should show something (more or less) like
Code:
Disk /dev/sda: 74,6 GiB, 80060424192 bytes, 156368016 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: *
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2046 156366847 156364802 74,6G 5 Extended
/dev/sda5 2048 39061503 39059456 18,6G 83 Linux
/dev/sda6 39063552 136912895 97849344 46,7G 83 Linux
/dev/sda7 136914944 156366847 19451904 9,3G 82 Linux swap / Solaris
and
Code:
df -hT
shows
Code:
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 1,9G 0 1,9G 0% /dev
tmpfs tmpfs 379M 1,2M 377M 1% /run
### notice that /dev/sda5 is not mentioned here ###
/dev/sda6 ext4 46G 3,9G 40G 9% /
tmpfs tmpfs 1,9G 0 1,9G 0% /dev/shm
tmpfs tmpfs 5,0M 8,0K 5,0M 1% /run/lock
tmpfs tmpfs 379M 64K 379M 1% /run/user/1000
We see that sda5 spans 39059456 sectors (12600000 required) and is unused; only sda6 stores data for the operating system. The usable hard disk size is now 46 GB.
Finally, the command
Code:
sudo badblocks -v /dev/sda6
should yield a clean log like
Code:
Checking blocks 0 to 48924671
Checking for bad blocks (read-only test): done
Pass completed, 0 bad blocks found. (0/0/0 errors)
Once in a while people discuss which file system to use. I would go for ext4 unless there are strong reasons to do otherwise.