On rare occasions you may find yourself unable to communicate with or log into your AWS EC2 instance. A number of factors can cause a situation like this:
- Kernel version conflicts
- Failure to mount filesystems
- Grub misconfigurations
- Software firewalls blocking traffic
Currently the EC2 Dashboard does not provide a way to connect to a live console for troubleshooting an unreachable instance. Because of this, any investigation of an unreachable instance needs to be done through a secondary rescue instance. You should still look at the System Log or Instance Screenshot to see what might be keeping the instance from booting.
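If you prefer the command line, the same diagnostics are available through the AWS CLI. A minimal sketch, assuming the AWS CLI is installed and configured; the instance ID below is a placeholder:

```shell
# Hypothetical instance ID -- replace with your impaired instance's ID.
INSTANCE_ID="i-0123456789abcdef0"

# Only attempt the API calls if the AWS CLI is actually installed.
if command -v aws >/dev/null 2>&1; then
  # Fetch the serial console output (the same text as the System Log view).
  aws ec2 get-console-output --instance-id "$INSTANCE_ID" --output text
fi
```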
The goal of this article is to get you access to the impaired instance's filesystem so that you can get the instance back online.

Create a "rescue" instance
I would recommend a t2.micro running Amazon Linux for your rescue instance. This combination is cost effective and easy to use. If you need help launching an instance, refer to AWS's documentation.
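You can also launch the rescue instance from the CLI. A sketch under the assumption that the AWS CLI is configured; the AMI ID and key pair name are placeholders you would substitute with your own values:

```shell
# Placeholder values -- substitute a current Amazon Linux AMI ID for your
# region and your own key pair name.
AMI_ID="ami-xxxxxxxxxxxxxxxxx"
KEY_NAME="my-key-pair"

if command -v aws >/dev/null 2>&1; then
  # Launch a t2.micro tagged "rescue instance" so it is easy to find later.
  aws ec2 run-instances \
    --image-id "$AMI_ID" \
    --instance-type t2.micro \
    --key-name "$KEY_NAME" \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=rescue instance}]'
fi
```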
Attaching the impaired volume to your rescue instance
First you will need to stop the impaired instance by selecting it and clicking Actions > Instance State > Stop. Now is also a perfect time to Create an AMI as a backup of your instance in case you screw something up ;).
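The stop and backup steps can also be done from the CLI. A sketch assuming a configured AWS CLI; the instance ID and AMI name below are hypothetical:

```shell
# Hypothetical impaired instance ID -- replace with your own.
INSTANCE_ID="i-0123456789abcdef0"

if command -v aws >/dev/null 2>&1; then
  # Stop the impaired instance and block until it is fully stopped.
  aws ec2 stop-instances --instance-ids "$INSTANCE_ID"
  aws ec2 wait instance-stopped --instance-ids "$INSTANCE_ID"

  # Create a backup AMI before touching anything on the volume.
  aws ec2 create-image --instance-id "$INSTANCE_ID" --name "pre-rescue-backup"
fi
```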
Select the impaired instance and click on the EBS ID listed as its Root device.

This will bring you to the Volumes page listing the root volume for the impaired instance. Click Actions > Detach Volume. It should only take a few seconds to detach; if it doesn't, you may not have stopped the instance in the previous step. Once the volume is in the available state you can move to the next step.
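The detach-and-wait sequence maps directly onto two CLI calls. A sketch with a placeholder volume ID, assuming a configured AWS CLI:

```shell
# Hypothetical root volume ID -- replace with the EBS ID you noted earlier.
VOLUME_ID="vol-0123456789abcdef0"

if command -v aws >/dev/null 2>&1; then
  aws ec2 detach-volume --volume-id "$VOLUME_ID"
  # Block until the volume reaches the "available" state.
  aws ec2 wait volume-available --volume-ids "$VOLUME_ID"
fi
```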
Now click Actions > Attach Volume. You will be prompted to specify the instance you want to attach it to. I called mine rescue instance, so I will search by that tag. The default device name of /dev/sdf will be fine.
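The equivalent CLI call takes the volume ID, the rescue instance's ID, and the device name. Both IDs below are placeholders:

```shell
# Hypothetical IDs -- replace with your impaired volume and rescue instance.
VOLUME_ID="vol-0123456789abcdef0"
RESCUE_ID="i-0fedcba9876543210"

if command -v aws >/dev/null 2>&1; then
  # Attach the impaired root volume to the rescue instance as /dev/sdf.
  aws ec2 attach-volume \
    --volume-id "$VOLUME_ID" \
    --instance-id "$RESCUE_ID" \
    --device /dev/sdf
fi
```

Note that on many instance types the kernel renames /dev/sdf, which is why the next step checks lsblk for the actual device name.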

Mounting the impaired filesystem in the rescue instance
SSH into your rescue instance and run the following commands to mount the filesystem at the /mnt directory.
$ lsblk
xvda    202:0   0   8G 0 disk
└─xvda1 202:1   0   8G 0 part /
xvdf    202:80  0  10G 0 disk
└─xvdf1 202:81  0  10G 0 part
Based on the above output, we will want to use /dev/xvdf1 when running our mount command.
$ sudo mount /dev/xvdf1 /mnt
Running lsblk again will allow you to confirm that the mount command was successful.
$ lsblk
xvda    202:0   0   8G 0 disk
└─xvda1 202:1   0   8G 0 part /
xvdf    202:80  0  10G 0 disk
└─xvdf1 202:81  0  10G 0 part /mnt
Now you have access to the filesystem of the impaired instance. Keep in mind that since it's mounted at /mnt, you will need to make sure you are looking in that directory. For example, the fstab file for the impaired instance is actually at /mnt/etc/fstab.
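A quick sketch of what those prefixed paths look like in practice; the checks are guarded so they only run when the volume is actually mounted at /mnt:

```shell
# Everything on the impaired instance lives under this prefix.
MOUNT_POINT="/mnt"

if [ -f "$MOUNT_POINT/etc/fstab" ]; then
  # Review the impaired instance's fstab for bad or stale mount entries.
  cat "$MOUNT_POINT/etc/fstab"
fi

if [ -d "$MOUNT_POINT/var/log" ]; then
  # Scan the impaired instance's boot log (not the rescue instance's)
  # for recent errors.
  grep -i "error" "$MOUNT_POINT/var/log/boot.log" 2>/dev/null | tail -n 20
fi
```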
Further investigation and potential solutions
Now that you have access to the filesystem, you can review logs, modify configuration files, adjust permissions, and repair aspects of the boot process. You can refer to the following articles for potential solutions.