View Full Version : [ubuntu] 18.04 -- USB to SATA External Hard Drive not recognized
squeaky2
June 1st, 2019, 04:20 PM
Hey guys!
I've never moved files from one hard drive to another before, and so I'm not entirely sure what commands to enter, or even where to begin.
The hard drive I'm using is a Windows 10 600-ish GB SATA HDD that decided to start developing bad sectors. My current internal hard drive is 250 GB, so Clonezilla and most other software options are out the window.
I have it connected to my computer via a USB 3.0 to SATA cable. When I plug in the hard drive, I can hear it spinning up, and the lights on the cable indicate a solid connection. But it's not recognized in the GNOME Discs GUI, or the file explorer. I don't know what commands to enter into the terminal so I can access the files on this hard drive.
Thank you very much for your time,
--Squeaky
SeijiSensei
June 1st, 2019, 04:37 PM
Is the cable plugged into a USB 3.0 port? Does the USB/SATA device itself use 3.0? I have a similar device (https://www.newegg.com/vantec-cb-isatau2-usb-to-ide-sata/p/N82E16812232002?Item=N82E16812232002) that uses USB 2.0. When plugged into a machine, it's recognized automatically. In my experience USB 2.0 and USB 3.0 are generally not interchangeable.
Once the drives were connected, I could boot from the installation image, then install the gddrescue package and use ddrescue to clone the machine's hard drive to the new SATA device.
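If it helps, the basic recipe looks something like this (the device names /dev/sdX and /dev/sdY are placeholders; check which is which with lsblk before running anything, because getting source and target backwards will destroy the data you are trying to save):
$ sudo apt-get install gddrescue
$ sudo ddrescue -f -n /dev/sdX /dev/sdY rescue.map
The -n option skips the slow retry passes over bad areas on the first run, and the rescue.map file lets ddrescue resume where it left off if the copy is interrupted.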
oldfred
June 1st, 2019, 07:35 PM
I have a USB-to-SATA converter cable. It works well for my old SSD drive, which is why I purchased it.
But when I tried to use it with an old HDD, it just did not have enough power.
If your adapter has separate power, not just power from the USB port, then it may work.
Years ago with XP, I used a .bat file to copy my data files. Windows back then had my data in multiple places: some programs saved in the same folder as the program, some saved to a Windows data partition, and some let me save to my own folder, so multiple copy commands were required.
In Linux I now use a bash script and rsync for the same purpose.
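The script is nothing fancy, roughly along these lines (the paths are just examples you would adjust to your own layout):
#!/bin/bash
# mirror my data folders to the backup drive
rsync -av --delete /home/fred/Documents/ /media/fred/backup/Documents/
rsync -av --delete /home/fred/Pictures/ /media/fred/backup/Pictures/
The --delete flag makes the copy a true mirror, removing files on the target that no longer exist on the source, so leave it off if you just want to accumulate copies.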
Autodave
June 2nd, 2019, 12:45 AM
Definitely try a USB 2.0 port. And better yet, connect it through a powered USB hub.
squeaky2
June 2nd, 2019, 03:12 AM
Thanks for the feedback, guys!
It does have an external power supply that plugs directly into the wall. Unfortunately, this computer was salvaged when I got it, and as such is too old to have any 3.0 ports. I thought that 3.0 and 2.0 were cross-compatible? I could be mistaken, but I was told that for a 3.0 cable to run at full speed it has to be connected to a 3.0 port. Kind of like how a 2.0 cable will still connect to an old 1.1 port, but it just runs slow as molasses on the coldest day of January.
Will Rsync or GDDRescue install any packages or drivers that will let my computer recognize this thing from a USB port? Or do I need to go about this the long way and drag my stuff through a Google Drive and re-download it? I'd really prefer to do it this way, just because I have a lot of files that are rather big that I'd like to keep archived somewhere safe.
squeaky2
June 2nd, 2019, 05:21 AM
Great!
... I have no idea what fsarchiver is. To be straight honest with you guys, I have no clue what I am doing. I'm literally just plugging a hard drive into a USB slot with a weird cable set that has no discernible brand name and no real instruction manual. It powers on, I hear it spin up, but my computer says it doesn't exist. Could I get more detailed instructions on how to open it like a flash drive? Or configure it to run like a flash drive? All I really need are the personal files off of it.
Good god, Ubuntu is complicated. Why is it so complicated? I'm sorry... I feel so dumb right now.
Edit: You know what, I'm just going to try and run it again and risk it BSOD'ing again so I can get at least my pictures off of it. I'm sorry.
Edit again: Scratch that, I'm scared of it blue screening in mid-upload and losing everything.
I'm kinda desperate at this point. What do you think I should do? I don't even know if this cable setup works with Linux!
Edit 3: Okay, I just ran sudo fdisk -l, both with the hard drive connected and with it disconnected. Both outputs are identical; it's like it's completely invisible. But I do hear the USB connect/disconnect sound when I pull it out and plug it back in, so at least on *some* level it recognizes that there's something there. But I just can't get the terminal to recognize that it's a hard drive, or any kind of storage medium. Any other commands I should try?
squeaky2
June 2nd, 2019, 05:56 AM
Oh. Well. No wonder I feel dumb, it's because I am dumb. I unplugged it from the wall socket with my foot.
Welllll, that was silly of me.
Okay, now it's spinning up, and sudo fdisk -l is reading my Linux HDD, but it still isn't recognizing that the Windows drive plugged in over USB is there. I don't have to set this up as a RAID system, do I? Good god, that would be a headache.
squeaky2
June 2nd, 2019, 06:29 AM
Okay, ran 'lsusb' with absolutely no luck. I'm trying to find the make and model of this cable set so I can report it as incompatible, but apparently I can find no brand name or model number anywhere on these cables. It's so infuriating; the only thing the label says is that it's supposed to work with literally *any* 2.5 in or 3.5 in HDD, SATA or IDE. I have no idea why it's not even being recognized as a device anywhere in the terminal or file explorer. The closest thing I can find to the cable set is this.
< https://www.amazon.com/Warmstor-Converter-External-Adapter-Support/dp/B073GT6VJF/ref=sr_1_56?crid=1PIVWBQ49H91H&keywords=usb+3.0+to+sata+adapter+cable&qid=1559452989&s=gateway&sprefix=usb+3.0+to+SATA+%2Caps%2C549&sr=8-56 >
It looks almost identical, though some of the markings are off and the box looks completely different, but it's basically the same setup.
Autodave
June 2nd, 2019, 11:19 AM
You could have a bad cable set. You could also have a bad HD. Just because it spins doesn't mean that it should work. For instance, if the superblock of the HD is corrupted, you will not be able to get anything off of it. Also, since this was a Win10 drive, it more than likely was on a machine that had fast boot enabled, which means that Win10 has not released it.
yancek
June 2nd, 2019, 11:25 AM
If your Windows system is developing bad sectors, you should first run chkdsk, which must be done from Windows. If you can't do it from Windows, you may need to download some Windows tool to do it or use the Repair function on the Windows installation DVD (if you have one). There is a Linux tool called ntfsfix which might repair a very minor problem with an NTFS filesystem, but it is very limited on the proprietary NTFS filesystem, as one would expect.
As suggested above, if you have Windows 10, was this external drive attached to a Windows 10 system, and was that system hibernated or was fast boot on? Linux systems won't mount a filesystem which shows as hibernated.
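If the drive ever does show up as a device (say /dev/sdb1 in lsblk output; that name is just a placeholder), a read-only mount is the gentlest way to get at the files:
$ sudo mkdir -p /mnt/windrive
$ sudo mount -t ntfs-3g -o ro /dev/sdb1 /mnt/windrive
Read-only works even on a hibernated volume, whereas mounting read-write needs the remove_hiberfile option, which throws away the saved Windows session. The ntfsfix tool mentioned above is run against the unmounted partition directly, e.g. sudo ntfsfix /dev/sdb1.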
TheFu
June 2nd, 2019, 05:52 PM
With great power comes great responsibility. Linux has great power, and the OS expects its users to be responsible.
Linux skills aren't something you can expect to gain overnight, and Windows skills seldom transfer. When beginning with Linux, a strong foundation of understanding is very helpful. Windows users are used to the OS holding their hands and trying to stop them from attempting dumb things. Linux is different. The OS doesn't know if you are ignorant or a genius, so it assumes you know what you are doing and will actually do what is requested. If what you mean isn't the same as what you request, well, bad things can and do happen all the time.
Linux (like all Unix-based OSes) is multi-user. Each command runs with a specific effective userid and a specific effective groupid. These control access to directories and files on the system. Everything on Unix is either
a) a file
b) a process
everything. A process is a running program. Anything that isn't a running program is a file. Period. Files have an owner, a group, and permissions. These control all access.
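You can see the owner, group, and permissions of any file with ls -l. For example (the size and date will differ on your machine):
$ ls -l /etc/hosts
-rw-r--r-- 1 root root 221 Jun 1 12:00 /etc/hosts
That reads as: a regular file, owned by user root and group root, writable only by the owner, and readable by everyone.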
BTW, drives don't get mounted. File systems get mounted.
Drives hold partitions. Partitions can hold a few other things, one of which might be a file system, but that isn't actually required.
sda = drive, the entire drive.
sdb = drive, a different drive.
sda1 = the first partition on the drive, sda
sda2 = the 2nd partition on the drive, sda
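lsblk shows that layout as a tree. The output below is only illustrative; your sizes and partition numbers will differ, and the USB-attached Windows drive, if the adapter is working, would appear as a second disk such as sdb:
$ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 232.9G  0 disk
└─sda1   8:1    0 232.9G  0 part /
sdb      8:16   1 596.2G  0 disk
└─sdb1   8:17   1 596.2G  0 part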
File systems - there are many, many, different types of file systems. Each has strengths and weaknesses. In Windows, there are basically 3 file systems - NTFS, FAT32 and exFAT.
In Linux, there are 20+ popular file systems: ext2, ext3, ext4, zfs, xfs, jfs, ReiserFS, btrfs, f2fs, .... and in certain situations, NTFS, FAT32 and exFAT can be accessed by Linux systems, assuming the file system was properly "closed" by Windows before being disconnected.
Windows and Linux file systems have some major differences, mainly around user, group and file permissions.
In Windows, permissions were never really central to the OS security model. Whenever you copy a file, you get the data and that is pretty much all you need.
In Linux, file and directory permissions are ABSOLUTELY CRITICAL to the OS security model. When you copy a file, you need the data, but you also need the owner, group, permissions and ACLs which are connected to the file. Getting just the data won't leave you with a working file, unless it is just data.
This is meant to explain why copying files between different file systems doesn't always get everything you need. Going FROM Windows to Linux will probably be fine, since Windows thinks of all files as data. Going from a Linux file system to Windows will lose the extra parts of those files, which can be extremely important for making use of the file later.
Copying data usually happens either by imaging (bit-for-bit copying) or by file-level copies.
dd, clonezilla, ddrescue, partimage, and a few others are bit-for-bit copies. They have to be used at the partition or disk level to be most effective. Notice, I didn't say File System. By cloning a partition, we get the file system and all the files/directories included.
rsync, tar, rdiff-backup, and most other backup tools are file-level copies. They work at the individual file/directory level and allow copying 1 file or an entire file system full of files. This provides more flexibility in which specific areas of the directories are copied. But care is required to get not just the data, but also the ownership, group and file permissions.
So, with all that background, assuming you have access to the file systems for the SOURCE and TARGET directories on the 2 different disks, then I would use rsync to mirror all the files.
$ rsync -avz {SOURCE} {TARGET}
This only works after the source and target file systems are mounted. rsync will try to get the permissions and data. The owner will be the current userid running the rsync command.
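As a concrete example, with the Windows file system mounted read-only as above (the paths and user names here are placeholders for your own):
$ rsync -av /mnt/windrive/Users/YourName/Pictures/ /home/yourname/Pictures/from-windows/
The trailing slash on the source means "copy the contents of this directory" rather than the directory itself, and running the command under sudo helps if some of the source files are only readable by root.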
A quick review .... everything is a file if it isn't a process. That means you can run file commands on anything on the system if you have permissions which allow that access for the file.
A file is a file (hosts)
A directory is a file (/etc/)
A partition is a file (sda1)
An entire HDD is a file (sda)
The mouse device is a file
The screen device is a file
A network card is a file
A specific network port is a file
The keyboard is a file.
A virtual machine is a file, unless it is running, then it is both a file and a process. ;)
File permissions, ownership and group membership are central to all Unix security.
So, if you want to understand any Unix system (Linux, Android, OSX, iOS), then file access, file permissions, and file management are good things to master, yes?
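You can see this for yourself; the device nodes live under /dev like any other file (the dates and device numbers below are just illustrative):
$ ls -l /dev/sda /dev/sda1 /dev/input/mice
brw-rw---- 1 root disk   8,  0 Jun  2 09:00 /dev/sda
brw-rw---- 1 root disk   8,  1 Jun  2 09:00 /dev/sda1
crw-rw---- 1 root input 13, 63 Jun  2 09:00 /dev/input/mice
The leading b means block device (the whole drive and its partition), the c means character device (the mouse), and the usual owner, group, and permission rules apply to them just like any ordinary file.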
I didn't mean to scare anyone with this information; I'm just trying to point out that Unix is really very simple. From one simple idea, everything is a file, the entire OS has been built, and since everything is a file, some very basic capabilities go a very long way.
Oh ... and lsblk -f is a very helpful command to see the HDD, partitions, file systems and mount point for connected storage.