
20.04 SSH connection refused to server VM



merenze
September 23rd, 2020, 04:55 PM
I'm brand new to server management. I have installed Ubuntu 20.04 Server edition to a physical hard drive, which I am currently trying to set up in VirtualBox (Windows 10 host) before I plug her into the actual server hardware.

When I try to ssh user@server-ip into the guest machine, I get the message ssh: connect to host server-ip port 22: Connection refused.

On the guest machine, output of ufw status is

Status: Active

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
22/tcp (v6)                ALLOW       Anywhere (v6)

If there's other relevant information I need to provide, please let me know.

darkod
September 23rd, 2020, 08:07 PM
Make sure you have VBox set up correctly. VBox usually uses a virtual network, and the server IP might not be directly visible from another computer on the LAN.

This is why I always use VBox in bridged mode on my LAN. That way all guest machines receive a working IP on the LAN and are easily reachable.
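For example, the VM can be switched to bridged mode from the Windows host with VBoxManage while the VM is powered off (the VM and adapter names below are just placeholders; the same setting is under Settings > Network in the GUI):

VBoxManage modifyvm "UbuntuServer" --nic1 bridged --bridgeadapter1 "Name of Host Adapter"

If you would rather keep the default NAT network, a port forward to the guest's port 22 should also work, after which you ssh to the host on the forwarded port:

VBoxManage modifyvm "UbuntuServer" --natpf1 "guestssh,tcp,,2222,,22"
ssh -p 2222 user@localhost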

Also, I recommend using plain iptables instead of ufw. That is, if you need a firewall at all.

ajgreeny
September 23rd, 2020, 08:36 PM
Check that you have the ssh server running on the guest with the command
ps aux | grep sshd
If you do not see an sshd line (other than the grep process itself, which will also show), you should be able to start ssh with the command
sudo systemctl start ssh.service
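If it turns out openssh-server is not installed or enabled at all (the 20.04 installer only installs it if you tick the OpenSSH option), something along these lines should cover it on the guest:

sudo apt install openssh-server
# start it now and on every boot
sudo systemctl enable --now ssh
# confirm something is actually listening on port 22
sudo ss -tlnp | grep :22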

LHammonds
September 24th, 2020, 02:02 AM
I'm fairly certain darkod hit the nail on the head. Bridged mode in the VM's network settings in VirtualBox is a MUST if you intend for anything outside the VM to reach it.

LHammonds

scorp123
September 24th, 2020, 09:39 AM
I'm brand new to server management. I have installed Ubuntu 20.04 Server edition to a physical hard drive, which I am currently trying to set up in VirtualBox (Windows 10 host) before I plug her into the actual server hardware.

Not a good plan. The hardware inside VirtualBox and the hardware in the real server are bound to be very different: different network cards with different MAC addresses requiring different kernel modules ("drivers"), maybe different CPU features, probably different UEFI/BIOS features, and so on. I would expect you to run into a lot of problems if you do it this way.

You'd be better off installing the blank hard drive into the server right away and doing a clean, proper Ubuntu Server install directly on the server hardware. You'll have way fewer problems that way, and all the hardware the installer auto-detects (e.g. network cards and their MAC addresses) will still be correct afterwards.

SeijiSensei
September 24th, 2020, 02:44 PM
^In general, this.

I can see installing a copy of Server 20.04 into a VBox VM so you can get comfortable with managing a machine from the command-line. But a VM and bare metal hardware are very different creatures. You likely cannot mock up a server in VirtualBox and then transfer it seamlessly to a physical hard drive for use in a different machine. What you can do is prepare for the server installation by installing the Server ISO into a new VBox VM. Figure out what software you need to add to provide the services you want. Use bridged networking so your other machines can access the virtual server and identify any problems. After you've done all of that you should be ready to install Ubuntu to the clean hard drive in the server.

LHammonds
September 24th, 2020, 05:41 PM
I disagree with scorp123 about it not being a good plan.

It is true that installing on bare metal is vastly different from installing in a virtual machine. But it is primarily just the install on specific hardware that can be problematic.

Once you have a working install (which may be no problem at all bare-metal on that machine, or may not even be possible, due to drivers or Ubuntu compatibility), the setup of the application services should be nearly identical to what you do in the VM. Getting that knowledge and familiarity in the safety of an easy-to-install environment is crucial to new server admins so they are not overwhelmed with everything new.

Once you have a procedure for how you are going to configure the software on your server, it is just a matter of working out the specifics of the bare-metal install...which can be different on EVERY machine you install. ;)

If I found out that I could NOT install Ubuntu directly onto a server, I would try a hypervisor such as Proxmox or ESXi, and if either of those can be installed, then I have a virtual environment in which to install Ubuntu Server. Granted, that is a more complicated install, but it's an example of how you can still win in a difficult situation. But again, that is hardware-specific. Once you get past the hardware-specific steps, the rest tends to be the same steps.

LHammonds

TheFu
September 24th, 2020, 11:01 PM
I treat physical servers and virtual machines the same. The only difference is that VMs have really compatible hardware, so there are never any driver issues inside VMs. I use KVM as the hypervisor. It is maintained by the Linux kernel team and is what Amazon EC2 and most other huge VPS vendors around the world use.

When it is time to migrate a system - physical or not - my steps are basically the same. I always begin with a fresh install of the Ubuntu Server version that I want to run. Then I restore from my backups into that newly installed OS instance, slightly selectively. Some things are easy like web server configs, DBMS data files, and end-user data stuff. Other things are a little more complex - like networking, /etc/hosts, and some hardware specific settings around storage, volume management, etc.
In general, the restore process takes 30-45 minutes going to new hardware. If I'm going from a VM to a VM, then it is faster, since I don't need to worry about the drivers and settings being different unless the move is to a different release - say 16.04 --> 20.04.
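A rough sketch of that kind of selective restore (the backup mount point, paths, and service below are placeholders, not my actual tooling):

# web server configs and site data straight back into place
sudo rsync -aAX /mnt/backup/etc/nginx/ /etc/nginx/
sudo rsync -aAX /mnt/backup/var/www/ /var/www/
# DBMS data files only with the database stopped
sudo systemctl stop mariadb
sudo rsync -aAX /mnt/backup/var/lib/mysql/ /var/lib/mysql/
sudo systemctl start mariadb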
I did that last June for a box. It was pretty seamless. Just had to figure out which server configs needed to be modified due to new versions of nginx and mariaDB. I was burned by my backup tool due to the python3 vs python2 change. I ended up creating the python2 version for 20.04 to get past that issue. It took me a few days to finish that. As my other servers move from 16.04 to 20.04, there will be a point where moving to python3 for the backup tool will make sense. Now that I'm aware, it won't be a surprise like it was last time. Environments with mixed OSes get to have this sort of fun. ;)

Notice, I didn't mention any physical vs virtual issues? I started using VMs in the late 1990s. Where I worked in 2005, we mandated all new deployments be to virtual machines, period. Not going to a virtual machine needed an architectural variance signed by an SVP or higher and was limited to 12 months. It was that important to the business. At the time, average system utilization was 13%, so mandating VMs was perfectly sensible. Some vendors pushed back initially. We cancelled contracts for those. Word got around. All the remaining vendors (thousands of them) fell into line and would be proactive on asking which VM hypervisors we preferred for each platform so they could test for deployment issues.

I took all that to heart. In 2010, I started using a desktop that ran inside a VM. I've been moving that VM forward ever since. It is on 20.04 now, inside a VM. Accessible from anywhere in the world with internet and ssh. I seldom actually need to use it as a desktop, but email and web surfing happen on that system regardless of the system I happen to be typing on.

scorp123
September 24th, 2020, 11:43 PM
I disagree with scorp123 about it not being a good plan.

It is true that installing on bare metal is vastly different from installing in a virtual machine.


That's not the issue. He's planning a virtual-to-physical migration of an already installed OS, aka "V2P". And that is problematic and just asking for trouble of all kinds. I consider myself a "pro user" and even I would not do it like this. That is why I said "not a good plan". Moving a hard drive with an already installed OS on it between two potentially very different systems with very different hardware is just asking for problems that you would not have with a clean installation from scratch.



Getting that knowledge and familiarity in the safety of an easy-to-install environment is crucial to new server admins so they are not overwhelmed with everything new.

You can still do that without going through a potentially problematic "virtual to physical" migration. Just reinstall the physical server cleanly and replicate the steps you learned in the VM: copy over some config files, and reinstall the same packages you already have in the VM (or learn how to back up and restore your apt package selection and let it happen that way, as sketched below)... Or learn Ansible, write a playbook, and use that to make sure the new install is identical to the previous one and exactly the way you want it.
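One common way to carry the package selection across is the dpkg selections mechanism (a sketch; the file name is arbitrary, and any third-party repos used in the VM have to be added on the server first):

# on the VM: record which packages are installed
dpkg --get-selections > my-packages.txt
# on the freshly installed server: replay that selection
sudo dpkg --set-selections < my-packages.txt
sudo apt-get dselect-upgrade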

LHammonds
September 25th, 2020, 09:02 AM
That's not the issue. He's planning a virtual-to-physical migration of an already installed OS, aka "V2P". And that is problematic and just asking for trouble of all kinds. I consider myself a "pro user" and even I would not do it like this. That is why I said "not a good plan". Moving a hard drive with an already installed OS on it between two potentially very different systems with very different hardware is just asking for problems that you would not have with a clean installation from scratch.
Ah, it did not register with me that this was a V2P scenario (I would never do that), so if that's truly the case...yes, bad plan.

LHammonds

TheFu
September 25th, 2020, 02:08 PM
Every attempt at V2P or P2V I've made has failed. It was more work than just doing the normal backup, migration, and restore processes. The only time it was less effort was when I moved a Win7 VM that was my media center system from a really old KVM VM into a much newer KVM VM.

Windows thought the motherboard had been changed. I'd tried just a simple copy while manually setting up the virtual hardware identically. That made Windows puke. I ended up using sysprep to convince Windows the move wasn't the end of the world. I've never needed to do that with any Linux OS. The complexities of Windows licensing cause all sorts of unnecessary complications. Those don't exist for the Linux distros I use.

LHammonds
September 25th, 2020, 11:56 PM
The complexities of Windows licensing cause all sorts of unnecessary complications. Those don't exist for the Linux distros I use.

You can say that again. I absolutely love having a "template" Linux OS that is fully configured the way I do all my servers, and all I do is right-click, deploy to a new VM. Then it's a matter of changing the IP and name. Done.
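On an Ubuntu 20.04 clone, that rename/re-address step looks roughly like this (hostname, netplan file name, and addresses are placeholders; a clone should also get a fresh machine-id and SSH host keys):

sudo hostnamectl set-hostname newname
# replace the old name in /etc/hosts too
sudo nano /etc/hosts
# set the new static IP in the netplan config, then apply it
sudo nano /etc/netplan/00-installer-config.yaml
sudo netplan apply
# give the clone its own identity
sudo rm /etc/machine-id && sudo systemd-machine-id-setup
sudo rm /etc/ssh/ssh_host_* && sudo dpkg-reconfigure openssh-server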