
Startup script for a Linux VM to configure SSH

6 min read · Apr 29, 2025

Chapter 1: Startup script

1.1 What is a Startup Script?

A Linux VM startup script is a piece of code (usually a shell script, like Bash) that you provide when you create a Virtual Machine (VM). This script is automatically executed by the VM the first time it boots up, or sometimes on subsequent boots depending on the configuration.

Think of it like giving instructions to a new computer before you even log in for the first time.

1.2 How Does it Work?

The exact mechanism depends heavily on the platform where you’re running the VM:

Virtualization Software (VMware, VirtualBox, KVM/libvirt):

  • cloud-init: You can often configure these platforms (especially with tools like Vagrant or direct libvirt configurations) to provide data that cloud-init inside the guest VM can consume, mimicking the cloud environment.
  • Guest Additions/Tools: Some platforms have specific tools (e.g., VMware Tools, VirtualBox Guest Additions) that might offer ways to run scripts, though cloud-init is more standard for initial setup.
  • Custom Boot Services: You could manually configure a systemd service or an older init.d script within your custom VM image to run on first boot, but this is less flexible than using mechanisms like cloud-init or user data.
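The "custom boot services" option above can be sketched as a oneshot systemd unit baked into the VM image; it runs a script once on first boot and then disables itself via a marker file. The unit name, script path, and marker file here are illustrative assumptions, not a standard:

```ini
# /etc/systemd/system/firstboot.service  (illustrative name and paths)
[Unit]
Description=Run first-boot setup script once
# Skip this unit once the marker file exists
ConditionPathExists=!/var/lib/firstboot.done
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/firstboot.sh
# Drop a marker so the unit never runs again on later boots
ExecStartPost=/usr/bin/touch /var/lib/firstboot.done

[Install]
WantedBy=multi-user.target
```

The unit would be enabled with `systemctl enable firstboot.service` while building the image. This illustrates why cloud-init is usually preferred: it provides this run-once plumbing for you.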

Cloud Providers (AWS, GCP, Azure, etc.):

  • User Data (AWS, GCP): You provide the script content as “User Data”. The cloud platform makes this data available to the VM during boot.
  • Metadata Startup Script (GCP): Similar to User Data, often used specifically for scripts.
  • Custom Script Extension (Azure): An Azure feature specifically designed to run scripts on VMs after provisioning.
  • cloud-init: This is the industry standard tool used by most Linux distributions designed for the cloud. It runs very early in the boot process, detects the cloud platform, reads the User Data/Metadata, and executes the script(s) provided. It’s responsible for many initial setup tasks beyond just running your script (like setting the hostname, network configuration, and managing SSH keys provided through platform features). Your startup script is often executed by cloud-init.
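For illustration, the same goal can be expressed declaratively as cloud-init user data instead of a raw shell script. A minimal `#cloud-config` sketch (the user name and key string are placeholders):

```yaml
#cloud-config
# Minimal sketch: cloud-init creates the user and installs the public key,
# handling the ~/.ssh directory, permissions, and ownership itself.
users:
  - name: admin                                     # placeholder user name
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAA... user@hostname  # placeholder key
```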

Chapter 2: SSH key configuration as a startup script

We can create a startup script for a Linux VM to automatically install an SSH public key, allowing passwordless login for a user with the corresponding private key.

2.1 Core Concepts

SSH Key Pair: SSH access relies on a pair of keys: a private key (kept secret by the user) and a public key (shared and placed on the server).

authorized_keys File: Linux SSH servers (like OpenSSH) look in a specific file within a user’s home directory: ~/.ssh/authorized_keys. If a public key listed in this file matches the private key presented by a connecting user, access is granted.

Permissions: SSH is very strict about file permissions for security. The ~/.ssh directory must be 700 (only the owner can read/write/execute), and the ~/.ssh/authorized_keys file must be 600 (only the owner can read/write). With incorrect permissions, the SSH server silently ignores the key and key authentication fails.
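These permission requirements can be demonstrated against a throwaway directory. A small sketch using a temporary directory as a stand-in for a real home directory:

```shell
#!/bin/bash
set -e
# Stand-in for the user's home directory (throwaway temp dir)
DEMO_HOME="$(mktemp -d)"
SSH_DIR="$DEMO_HOME/.ssh"
AUTH_KEYS="$SSH_DIR/authorized_keys"

mkdir -p "$SSH_DIR"
touch "$AUTH_KEYS"
chmod 700 "$SSH_DIR"     # drwx------ : only the owner may enter/list
chmod 600 "$AUTH_KEYS"   # -rw------- : only the owner may read/write

# stat prints the octal mode, so the result can be verified at a glance
stat -c '%a %n' "$SSH_DIR" "$AUTH_KEYS"
```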

User: You need to decide which user account on the VM the key should grant access to (e.g., root, admin, ubuntu, ec2-user, or a custom user).

2.2 Steps

  1. Generate SSH Key Pair (if needed): The user who needs access must have an SSH key pair. If they don’t, they can generate one typically using:
ssh-keygen -t rsa -b 4096 -C "your_email@example.com"

This creates ~/.ssh/id_rsa (private key — KEEP SAFE!) and ~/.ssh/id_rsa.pub (public key — this is what you need).

2. Obtain the Public Key: Get the content of the user’s public key file (id_rsa.pub or similar). It will look something like this:

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC... user@hostname
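Before installing a key, it helps to sanity-check that the string at least looks like an OpenSSH public key: a key-type prefix followed by a base64 blob. A minimal bash check (a rough pattern match, not a full parser; the function name is illustrative):

```shell
#!/bin/bash
# Rough check: known key-type prefix, whitespace, then base64-looking data.
is_ssh_pubkey() {
  [[ "$1" =~ ^(ssh-(rsa|ed25519|dss)|ecdsa-sha2-nistp(256|384|521))[[:space:]]+[A-Za-z0-9+/=]+ ]]
}

is_ssh_pubkey "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC user@hostname" && echo "looks valid"
is_ssh_pubkey "not a key" || echo "rejected"
```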

3. Choose Target User: Decide which user account on the VM will use this key (e.g., admin).

4. Create the Startup Script: This script will perform the necessary actions on the VM during boot.

5. Provide the Script to the VM: Use your cloud provider’s mechanism (like user data, metadata startup script, etc.) to inject this script so it runs on first boot. You’ll also need to provide the actual public key string to the script, typically via metadata attributes or embedding it (less ideal but possible for simple cases).

2.3 Startup Script Example (using Bash)

#!/bin/bash
set -e # Exit immediately if a command exits with a non-zero status.

# --- Configuration ---
# Option 1: Get key from metadata (RECOMMENDED for cloud environments)
# Adjust the command based on your cloud provider:
# GCP:
# PUBLIC_KEY=$(curl -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/attributes/ssh-public-key)
# Option 2: Hardcode the key (Less flexible, less secure - use only if necessary)
# Replace the placeholder key with the actual public key string. USE QUOTES!
# PUBLIC_KEY="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC..."

# !!! --- IMPORTANT: Define PUBLIC_KEY using ONE of the methods above --- !!!
# Example using a hardcoded key (replace!):
PUBLIC_KEY="ssh-rsa AAAAB3NzaC1yc2EAAA..." # PASTE ACTUAL PUBLIC KEY HERE

# Define the target user on the VM
TARGET_USER="admin" # Change to 'ubuntu', 'ec2-user', 'root', or your desired username

# --- Script Logic ---

echo ">>> Configuring SSH key for user '$TARGET_USER'"

# Validate if PUBLIC_KEY is set
if [ -z "$PUBLIC_KEY" ]; then
  echo "!!! ERROR: SSH Public Key is not defined. Exiting."
  exit 1
fi

# Validate if PUBLIC_KEY looks like a key (basic check)
if [[ ! "$PUBLIC_KEY" =~ ^(ssh-(rsa|ed25519)|ecdsa-sha2-) ]]; then
  echo "!!! ERROR: PUBLIC_KEY variable does not look like a valid SSH key format."
  exit 1
fi

# Determine the user's home directory
if [ "$TARGET_USER" = "root" ]; then
  USER_HOME="/root"
else
  USER_HOME="/home/$TARGET_USER"
fi

# Check if the target user exists; create it if not
if ! id "$TARGET_USER" &>/dev/null; then
  echo ">>> User '$TARGET_USER' does not exist. Creating it..."
  useradd -m -s /bin/bash "$TARGET_USER"
  echo ">>> User '$TARGET_USER' created."
fi

echo ">>> Target user home directory: $USER_HOME"

# Define SSH directory and authorized_keys file paths
SSH_DIR="$USER_HOME/.ssh"
AUTH_KEYS_FILE="$SSH_DIR/authorized_keys"

# Create the .ssh directory if it doesn't exist
echo ">>> Ensuring $SSH_DIR directory exists..."
mkdir -p "$SSH_DIR"

# Set correct permissions for the .ssh directory (owner only)
echo ">>> Setting permissions for $SSH_DIR..."
chmod 700 "$SSH_DIR"

# Append the public key to the authorized_keys file
# Use grep -qF to avoid adding duplicate keys if the script runs multiple times
echo ">>> Adding public key to $AUTH_KEYS_FILE..."
if grep -qF "$PUBLIC_KEY" "$AUTH_KEYS_FILE" 2>/dev/null; then
  echo ">>> Key already exists in $AUTH_KEYS_FILE. Skipping."
else
  echo "$PUBLIC_KEY" >> "$AUTH_KEYS_FILE"
  echo ">>> Key added."
fi

# Set correct permissions for the authorized_keys file (owner read/write only)
echo ">>> Setting permissions for $AUTH_KEYS_FILE..."
chmod 600 "$AUTH_KEYS_FILE"

# Set correct ownership for the .ssh directory and authorized_keys file
echo ">>> Setting ownership for $SSH_DIR and its contents..."
chown -R "$TARGET_USER":"$TARGET_USER" "$SSH_DIR" # Assumes group name is same as user name
# A more robust way if group name might differ:
# chown -R "$TARGET_USER":"$(id -g -n "$TARGET_USER")" "$SSH_DIR"

echo ">>> SSH key configuration for user '$TARGET_USER' completed successfully."

# Optional: Ensure SSH daemon is configured correctly (usually default is ok)
# Consider adding checks or commands to ensure PubkeyAuthentication is enabled
# in /etc/ssh/sshd_config and restarting the sshd service if changes are made.
# Example (use with caution, might restart SSH service):
# if ! grep -q "^PubkeyAuthentication yes" /etc/ssh/sshd_config; then
#   echo "PubkeyAuthentication yes" >> /etc/ssh/sshd_config
#   systemctl restart sshd
# fi

exit 0

2.4 How to Use with Cloud Providers (Examples)

GCP (Compute Engine):

  1. Store the public key content in a metadata key, e.g., ssh-public-key.
  2. Paste the script content into the startup-script metadata key.
  3. Modify the script to use the curl command shown in Option 1 (GCP).
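Putting the GCP steps together, a gcloud invocation along these lines could create the VM with both pieces of metadata (a sketch, not run here; the instance name, zone, and file paths are placeholders):

```shell
gcloud compute instances create my-vm \
  --zone=us-central1-a \
  --metadata-from-file=startup-script=startup.sh \
  --metadata=ssh-public-key="$(cat ~/.ssh/id_rsa.pub)"
```

Here `startup.sh` is the script from section 2.3, modified to read the key from the `ssh-public-key` metadata attribute via curl.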

2.5 Testing

After the VM boots and the script runs:

From the machine where the private key (id_rsa) resides:

ssh -i /path/to/your/private_key TARGET_USER@VM_IP_ADDRESS_OR_HOSTNAME

You should be logged in without being prompted for a password. If it fails, check the VM’s system logs (/var/log/cloud-init-output.log on many systems, or journalctl) for errors from the startup script and check SSH server logs (/var/log/auth.log or journalctl -u sshd) for connection issues. Common problems are incorrect permissions or ownership.

Enjoy learning!

This post is based on interaction with https://aistudio.corp.google.com.


Written by Dilip Kumar

A software engineer with 18+ years of experience who enjoys teaching, writing, and leading teams. For the last 4+ years, working at Google as a backend software engineer.
