Page 3 of 3
Results 21 to 29 of 29

Thread: seamless ssh through intermediate host?

  1. #21
    Join Date
    Jan 2008
    Location
    Malmö
    Beans
    132
    Distro
    Ubuntu 13.10 Saucy Salamander

    Re: seamless ssh through intermediate host?

    Great. If it's going to incur a running cost then the minimalistic option is the most interesting one for this purpose. As you say, no sense paying for unused space.

    Side note: I also tested an external connection from the SSH client on my Android phone over mobile Internet, and it worked brilliantly.

    PS. Marking this topic as SOLVED now as the initial topic has been covered more than thoroughly. DS.
    Last edited by anlag; September 19th, 2010 at 04:31 AM. Reason: PS

  2. #22
    Join Date
    Apr 2008
    Location
    Far, far away
    Beans
    2,148
    Distro
    Ubuntu 11.04 Natty Narwhal

    Re: seamless ssh through intermediate host?

    That's cool about the Android phone having ssh. I'll have to get one of those someday.

    Here's a quick rundown on using AWS at the cmd line. There is an ec2-api-tools package in the Ubuntu repos but I much prefer Tim Kay's perl script. It doesn't need java, doesn't use env vars, and has shortcut cmds. Plus I have made custom edits to use my own defaults.

    1. Download Tim's AWS perl script. In your terminal,

    wget http://github.com/timkay/aws/raw/master/aws

    The aws script uses curl at runtime, so if you don't have curl installed you will need that package,

    sudo apt-get install curl

    2. Run the install command, which will copy it to /usr/bin and create symlinks for the shortcuts.

    sudo perl aws --install

    3. Next you need to log in to your AWS account and visit the "Security Credentials" page we saw before. You will need the two values from your "Access Key": the ID and the secret key. The first is visible in the list; the second you have to click to show, and then copy. Back in the terminal, create the ~/.awssecret file and enter your ID on the first line and the secret key on the second line.

    nano ~/.awssecret

    When done it should look like this, (but with your key info),

    1B5JYHPQCXW13GWKHAG2
    2GAHKWG3+1wxcqyhpj5b1Ggqc0TIxj21DKkidjfz

    Save it. Type aws -h to see a command summary. Good?
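Since this file holds your secret key in plain text, it's worth locking the permissions down to your user only (a small precaution, not from the original post):

```shell
touch ~/.awssecret     # no-op if the file already exists
chmod 600 ~/.awssecret # owner read/write only, nobody else can peek
```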

    4. Try a test command to see that it can access EC2 without error,

    ec2din

    (this is the shortcut for ec2-describe-instances and should return with nothing, since you don't have any running instances)

    5. That's it. You're now able to manage EC2 and S3 from the command line. Here are a couple more tips. If you use a region other than US East then you will want to put some default settings in the ~/.awsrc file.

    nano ~/.awsrc

    and enter --region=eu-west-1 (for example). You can also add --simple to enable simpler output style (the tables are sometimes too much). Now you don't have to add the region for every command. Save.
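For example, after both of those settings the file might read (values here are illustrative):

```text
--region=eu-west-1
--simple
```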

    6. Let's try to start a spot instance.

    ec2rsi ami-1234de7b -p 0.01 -i t1.micro -k MyKey
    (use your choice ami and the name you gave your key on the AWS console)
    (this is the shortcut cmd for ec2-request-spot-instances)

    It should respond with info about the pending request. You can check on the request with,

    ec2dsir
    (short for ec2-describe-spot-instance-requests)

    If you're quick you can cancel the request with,

    ec2csir -id-
    (where id is the sir-xxxxx value you see in the responses above)
    (short for ec2-cancel-spot-instance-requests)

    You can check every 15-30 seconds on the status of your instance with,

    ec2din

    When it's up and running it will tell you the instance ID and the public DNS URL.

    7. When you're ready to shut down your instance you can use,

    ec2tin -id-
    (where id is the one given you in ec2din output).
    (short for ec2-terminate-instances)

    There's quite a variety of other EC2 commands to explore too, but these are the real basic ones (for using spot instances). I manually edited my /usr/bin/aws script to have my default ami, price, key built in so that I only need a simple ec2rsi to start an instance. The script is quite readable and so it's easy to adjust the defaults in the cmd table.
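Instead of editing /usr/bin/aws directly, a small wrapper function in ~/.bashrc can bake the same defaults in; the AMI, price, instance type and key name below are placeholders, not values from the post:

```shell
# Build the ec2rsi argument list with personal defaults baked in.
# Pass an AMI ID as the first argument to override the default.
spot_args() {
    local ami=${1:-ami-1234de7b}   # placeholder default AMI
    echo "$ami -p 0.01 -i t1.micro -k MyKey"
}

# Usage: ec2rsi $(spot_args)               - start with defaults
#        ec2rsi $(spot_args ami-feedbeef)  - override the AMI
spot_args
```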

    Hope this gets you started with some quick and easy commands to use EC2 instances. I'll post a tutorial on creating a custom AMI later...
    Last edited by BkkBonanza; September 19th, 2010 at 06:55 AM.

  3. #23
    Join Date
    Jan 2008
    Location
    Malmö
    Beans
    132
    Distro
    Ubuntu 13.10 Saucy Salamander

    Re: seamless ssh through intermediate host?

    Neat, all worked great. Saves all that web interface use. I like it. The only difficulty I had was finding the "Security Credentials" page -- kept looking for it in the EC2 control panel, before finding out I had to go higher up in the hierarchy so to speak.

    I'll definitely either edit the existing program or just add my own shell scripts to streamline the procedure a little. The AMI ID numbers could get a little tricky to remember...

    By the way, while testing things earlier I set up a dynamic DNS address (on dyndns.org) to point at the EC2 instance. That saves not only time and typing, but also makes it possible to run the exact same commands for different instances. For example, I was thinking of a script that rsyncs a setup script onto the EC2 instance and executes it there, doing things like editing sshd_config and making sure it's enabled, thus saving me having to do even that by hand. It would be a cheap way to quickly get a customized, ready-to-use system without having to pay even for small storage. It probably won't be practical once the configuration goes beyond a few simple steps, but for this ssh tunnel business it should work well enough, I'd think (not tried it yet though, need some sleep now...)

    The DNS needs to be edited for each instance of course, to update with the new address, which of course adds a manual step early on in the procedure. Unless I can find a nice way to do that too from the command line (scripted, more or less.)

    Still definitely interested in making AMI snapshots though, sounds like it will come in handy one way or the other. I mean I've been thinking of paying for an additional shell account somewhere for various uses, this could provide that both cheaper and with increased functionality. Well impressed so far. Will check back for the next tutorial whenever you have time to write that. No rush, mind you, I've got more than enough to work with for now. (Not to mention actual work, hehe.)
    Last edited by anlag; September 19th, 2010 at 07:22 AM.

  4. #24
    Join Date
    Apr 2008
    Location
    Far, far away
    Beans
    2,148
    Distro
    Ubuntu 11.04 Natty Narwhal

    Re: seamless ssh through intermediate host?

    That's a good idea. You can have a script which starts the instance, polls ec2din for its running status, logs in and appends the GatewayPorts config line, HUPs the sshd, then updates the dyndns IP (just use wget with the correct params and user/pwd), then exits. After that everything should be ready for the tunnel, and it likely only takes a few seconds to init all that.

    Actually it's not too hard. E.g., tested, but not thoroughly:

    Code:
    #!/bin/bash
    
    AMI=ami-1234de7b
    KEYNAME=MyKey
    KEYFILE=MyKey.pem
    DYNUSER=test
    DYNPWD=test
    DYNHOST=test.dyndns.org
    
    ec2rsi $AMI -p 0.01 -i t1.micro -k $KEYNAME
    
    URL=`ec2din |awk '{print $3}'`
    while [ "$URL" = "" ]; do
      echo "."
      sleep 15
      URL=`ec2din |awk '{print $3}'`
    done
    
    ssh -i .ssh/$KEYFILE ubuntu@$URL <<-EOF
    # quote the URL so the '&' isn't treated as a shell operator
    wget -q --user=$DYNUSER --password=$DYNPWD "https://members.dyndns.org/nic/update?hostname=$DYNHOST"
    sudo tee -a /etc/ssh/sshd_config >/dev/null <<-CFG
    GatewayPorts yes
    CFG
    sudo pkill -HUP sshd
    EOF
    
    ssh -i .ssh/$KEYFILE -fNR *:7777:localhost:22 ubuntu@$URL
    
    export URL
    echo $URL

  5. #25
    Join Date
    Jan 2008
    Location
    Malmö
    Beans
    132
    Distro
    Ubuntu 13.10 Saucy Salamander

    Re: seamless ssh through intermediate host?

    Excellent, that script ran just fine. Had a feeling you could do something like that with the dynamic DNS but no idea how... very useful. The script in general is certainly shorter than it would have been had I written it too...

    So that sets it up for me just fine. Tried tunneling in through 7777 and it works great. The only human intervention needed while running the script was accepting the RSA key for the AWS host address. Is there a flag for ssh to tell it to accept this automatically, only for this single instance? Had a look at the man page but couldn't find it; it's a bit lengthy though (for good reasons).

    Anyhow, that's not a big deal. Slightly more of a bother is that I can't use the dyndns address directly to ssh into the AWS host, since it has a conflicting fingerprint in ~/.ssh/known_hosts (that of my laptop, presumably) and kicks up a fuss about man-in-the-middle attacks and all.

    Now I really don't want to disable StrictHostKeyChecking, but again I wonder if there might be a just-this-once flag for ssh to tell it to ignore this. Security-wise I don't see that it would add much of a danger since these are always temporary servers and addresses anyway and I'm always looking at a new fingerprint no matter what.

    Googled for fixes a bit and found for instance this:

    http://people.uleth.ca/~daniel.odonn...-single-domain

    In short, the workaround is to script renaming known_hosts before connecting to the conflicting host, so the key gets added to a fresh file, then merge the new file with the old known_hosts into one. Apparently there's no problem having multiple entries for the same host.

    Suppose that would work, though it will spam the known_hosts pretty quickly if I do this a lot so if there's a skip-key-checking option for individual ssh instances I'd think that's still a preferred option.

    Or I could just use the full long public DNS for the times I want to connect specifically to the AWS host, but where's the fun in that?
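For what it's worth, ssh does accept per-invocation options via -o, so a throwaway connection can skip both the prompt and the known_hosts entry without touching the global config (the hostname below is a placeholder, and note this discards the man-in-the-middle protection for that one command):

```shell
# One-off connection: don't prompt for the host key, and don't
# record it anywhere (UserKnownHostsFile goes to /dev/null).
ssh -o StrictHostKeyChecking=no \
    -o UserKnownHostsFile=/dev/null \
    ubuntu@ec2-198-51-100-1.compute-1.amazonaws.com
```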


    EDIT:

    Forgot to say I found and tried another way that should be reasonably convenient too:

    http://www.cyberciti.biz/faq/warning...-and-solution/

    The solution presented there was to use 'ssh-keygen -R serveraddress' to remove any keys for a specific host. It didn't work for me though; it gave me an error message. Also not sure whether that would mean I have to accept the new key for the tunnel through 7777 again.
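One possible reason that fails (a guess, not from the original thread): keys learned through a non-standard port are stored under "[host]:port" in known_hosts, so the plain hostname form doesn't match them. The bracketed form does:

```shell
# Remove the entry recorded on the default port:
ssh-keygen -R test.dyndns.org

# Entries learned via a non-standard port (e.g. the 7777 tunnel)
# are stored as "[host]:port" and need the bracketed form:
ssh-keygen -R '[localhost]:7777'
```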
    Last edited by anlag; September 19th, 2010 at 06:55 PM. Reason: Another possibility

  6. #26
    Join Date
    Apr 2008
    Location
    Far, far away
    Beans
    2,148
    Distro
    Ubuntu 11.04 Natty Narwhal

    Re: seamless ssh through intermediate host?

    Yes, I've run into this and solved it this way...

    Create a ~/.ssh/config file for your personal custom config, and add

    Code:
    Host *.compute-1.amazonaws.com
    StrictHostKeyChecking no
    UserKnownHostsFile .ssh/AWSHosts
    IdentityFile .ssh/MyKey.pem
    This says - for these hosts don't prompt for keys and put them in a special file so they don't clog up my "real" host keys. If you add them to the normal file it soon becomes a mish-mash of irrelevant keys. The identity line makes it so you don't have to keep typing it manually for these hosts (use the right filename of course).

    You'll see the host key fingerprint message still but no prompt or wait.

    Actually this file is great for specifying non-standard ports for certain hosts too - then you don't have to keep putting -p 7777 to login.

    Code:
    Host example.com
    Port 7777
    I've got the "Custom AMI Shrink Tutorial" done. Just testing a bit before posting.
    Last edited by BkkBonanza; September 19th, 2010 at 08:22 PM.

  7. #27
    Join Date
    Apr 2008
    Location
    Far, far away
    Beans
    2,148
    Distro
    Ubuntu 11.04 Natty Narwhal

    Re: seamless ssh through intermediate host?

    How to customize a public AMI (15GB EBS size) and shrink to a personal AMI (1GB EBS size).

    This process lets you customize the root partition to have your own packages, config and init scripts and then move it to a smaller EBS root volume. If you need more space for data you can always have a second EBS partition that you mount during or after boot. Having a small root partition keeps your storage costs to a minimum when not running.

    We can do most of this process from the EC2 console, but there are two steps where we will need to log in by SSH. Let's get started.

    1. Log in to your AWS account and go to the Management Console, EC2 tab.

    2. For this step we have to use a "normal" priced instance because the spot instances don't support "stopping" an instance. Stopping is not the same as "terminating". A stopped instance can be resumed where it left off much like a suspend in desktop systems. So first thing we do is click "Instances" and then "Launch Instance". Select suitable options for the AMI that you want to use as your "base system".

    3. Note down its "Availability Zone". Log in to your instance with SSH and customize the system as you need. Install packages, configure to your needs and test.

    4. If you used a t1.micro instance: these don't support mounting instance-store (ephemeral) partitions, so you must check /etc/fstab and remove any "extra" mounts (there should only be "/"). If you don't do this then you won't be able to reboot or stop-start the instance.

    5. Once you have things the way you like, head back to the EC2 console, select your instance and click "Instance Actions", "Stop Instance". Unlike "Terminate" this will shut down the instance without losing its current state. Wait for the instance status to show "stopped".

    6. Now we create a second instance to use as a copier. Click "Launch Instance" and select options for a t1.micro AMI that we can use to copy files. Make sure it is in the same "Availability Zone" as your stopped instance. Wait for the status to show "running".

    7. Click the "Volumes" menu item, far left. You will see the volume that is attached to your stopped instance. Click "Detach Volume" to remove it from your instance.

    8. Next we create the small target volume we will migrate onto. Click "Create Volume" and select the size you want (1GB is smallest possible). Make sure the "Availability Zone" is the exact same as your stopped instance. "No Snapshot" should be default. Click Create.

    9. Both volumes should now be in the list and in an "available" state. Make note of what volume IDs they have for later.

    10. Click the volume from your "stopped" instance and click "Attach Volume". Choose your running instance (not the stopped one) and enter a device name: /dev/sdg - click "Attach".

    11. Do the same with your new small target volume. Attach it to your running instance and enter device name: /dev/sdh. Click "Attach".

    12. Now both your source and target volumes are attached to your "copy machine". Now login to this machine with SSH.

    13. In the terminal run the following commands to copy over the system,

    sudo mkdir /old /new
    sudo mount /dev/sdg /old
    (make sure you get the right one - check the contents)

    sudo ls -ls /old
    (should have the root from your stopped machine)
    (good? now continue)

    sudo mkfs.ext3 -F /dev/sdh
    sudo mount /dev/sdh /new

    (this formats the volume with ext3 filesystem and mounts it)

    sudo rsync -avx /old/ /new/
    (note the trailing slashes - this copies all the files over, including hidden ones; /old/* would miss dotfiles at the top level)
    (do a visual check that things look as they should in /new)

    sudo umount /new
    sudo umount /old

    (unmount the partitions again and we're done)
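Between the rsync and the umount steps, a quick extra sanity check can catch an incomplete copy; this compares file counts on the two trees (the /old and /new paths are the ones mounted in step 13):

```shell
# The counts should match if the copy is complete; -xdev keeps
# find from crossing into other mounted filesystems.
old_count=$(sudo find /old -xdev | wc -l)
new_count=$(sudo find /new -xdev | wc -l)
echo "source: $old_count entries, target: $new_count entries"
```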

    14. Return to the EC2 console, "Volumes" screen. Select and "Detach Volume" each of the two volumes you just attached to the running machine. Verify which ones with the IDs you wrote down before.

    15. Now select the small new volume and click "Attach Volume". Select the "stopped" machine and enter device name: /dev/sda1 - click "Attach".

    16. Super! Now we have the new volume attached to our stopped machine containing the same root files as we had originally on our big volume. Go to "Instances", select the stopped instance and click "Instance Actions", "Start".

    17. If everything has gone right then you can now login with SSH to your original "custom" machine and make sure it's working correctly (it will have a different IP now, use ec2din to find it). Test it.

    18. Finally, we want to snapshot this machine to create your custom AMI. Select the instance and click "Instance Actions", "Create Image (EBS AMI)". Enter a "name" for your new AMI. Click "Create". This starts a pending request to snapshot the running machine - it will take a while...

    19. Click on the "AMIs" menu item, far left. You should see your pending AMI creation task there (this seems to take forever). After it's done the status will indicate "available". I've had this step fail on me once; I went back, copied again, and it worked.

    That's it!

    Once your new AMI is ready you can start new instances (spot or normal) by choosing it as your AMI. It will show up in the "My AMIs" tab when launching an instance. If you need to customize it more then you simply make the changes and snapshot it again into a new AMI using the same "Create Image (EBS AMI)" action. You won't need to go through this whole detach/copy/attach process again. The Custom AMI that I created here uses 622MB on a 1GB EBS volume, and it works.

    Normal instances can be "stopped", and you don't get charged run fees while stopped. This is like having a machine you turn on and off, which remembers its state. "Terminating" is like reverting the system to its original AMI state.

    Spot instances can't be "stopped" - one of their drawbacks; every time, they start "fresh" in the original AMI state.

    BTW, you should probably delete leftover "volumes" after all this to save on storage fees. You only need the snapshot and your new AMI to launch new instances now.

    Have fun!
    Last edited by BkkBonanza; September 20th, 2010 at 02:56 AM.

  8. #28
    Join Date
    Jan 2008
    Location
    Malmö
    Beans
    132
    Distro
    Ubuntu 13.10 Saucy Salamander

    Re: seamless ssh through intermediate host?

    Quote Originally Posted by BkkBonanza View Post
    Yes, I've run into this and solved it this way...

    Create a ~/.ssh/config file for your personal custom config, and add

    Code:
    Host *.compute-1.amazonaws.com
    StrictHostKeyChecking no
    UserKnownHostsFile .ssh/AWSHosts
    IdentityFile .ssh/MyKey.pem
    This says - for these hosts don't prompt for keys and put them in a special file so they don't clog up my "real" host keys. If you add them to the normal file it soon becomes a mish-mash of irrelevant keys. The identity line makes it so you don't have to keep typing it manually for these hosts (use the right filename of course).

    You'll see the host key fingerprint message still but no prompt or wait.

    Actually this file is great for specifying non-standard ports for certain hosts too - then you don't have to keep putting -p 7777 to login.

    Code:
    Host example.com
    Port 7777
    I've got the "Custom AMI Shrink Tutorial" done. Just testing a bit before posting.
    Great, the address mask for me has to be *.compute.amazonaws.com or some such though. Not compute-1... but the method seems solid. That speeds things up.

    I'll try and get on the AMI making tonight, looks good.

  9. #29
    Join Date
    Apr 2008
    Location
    Far, far away
    Beans
    2,148
    Distro
    Ubuntu 11.04 Natty Narwhal

    Re: seamless ssh through intermediate host?

    You may want to check this out for a better way to init the instance when launched,

    https://help.ubuntu.com/community/CloudInit

    I just came across it because it currently causes micro instances to fail on second start/reboot, due to auto-adding an ephemeral-store volume in /etc/fstab. There is a fix for this problem described here (near the bottom of the page),

    http://stackoverflow.com/questions/3...grade-problems
