Use the HashiCorp Vault AWS Secrets Engine with Multiple Accounts

Ned Bellavance

I received a question recently on how to properly configure the AWS secrets engine on HashiCorp Vault to work with multiple AWS accounts. It took me a bit, but I did figure out how to do it and what the limitations are. In this post, I will break down how the secrets engine works and how to use it to dynamically create credentials across multiple AWS accounts using the assumed_role credential type.

I’m going to assume for the purposes of this article that you are already familiar with HashiCorp Vault at a basic level. Like, you know what secrets engines and policies are. If not, check out my course on Pluralsight! And I’ll assume you know a little bit about AWS Identity and Access Management (IAM). Not expert level, of course (IAM still makes my head hurt on the best days), but you know what an IAM role, policy, and user are at the very least. With all that out of the way, let’s talk a little bit about the AWS secrets engine in Vault.

AWS Secrets Engine

The AWS secrets engine in Vault allows you to dynamically generate credentials in AWS through Vault. There are three credential types: iam_user, assumed_role, and federation_token. We’ll get back to those in a moment. When you enable an instance of the AWS secrets engine, you need to give Vault access to AWS so it can generate these dynamic credentials. You’ll do this by creating an IAM user and generating an access key and secret key; for this post, we will call this user vault-account. The IAM user assigned to Vault needs sufficient permissions to perform actions that relate to the type of credential being generated. For iam_user credentials, it will need permission to perform actions like CreateUser and CreateAccessKey. For the assumed_role type, the vault-account really only needs the sts:AssumeRole action on any AWS roles it will be creating credentials against. I’m deliberately going to ignore the federation_token type for the purposes of this post. You can find an example set of permissions for each credential type in the official AWS secrets engine docs.
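
To make that a bit more concrete, here is a trimmed-down sketch of the kind of IAM policy you might attach to the vault-account for the iam_user credential type. The account ID and the vault-* user path are placeholders, and the official docs list the full set of actions, so treat this as illustrative rather than definitive:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:CreateUser",
        "iam:CreateAccessKey",
        "iam:DeleteUser",
        "iam:DeleteAccessKey",
        "iam:PutUserPolicy",
        "iam:DeleteUserPolicy",
        "iam:AttachUserPolicy",
        "iam:DetachUserPolicy",
        "iam:ListAccessKeys",
        "iam:ListAttachedUserPolicies",
        "iam:ListGroupsForUser",
        "iam:RemoveUserFromGroup"
      ],
      "Resource": "arn:aws:iam::123456789012:user/vault-*"
    }
  ]
}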

Let’s talk about the first two credential types: iam_user and assumed_role. The iam_user type is probably the more typical and better documented of the two. In Vault, you create a role on the AWS secrets engine for each iam_user type you want. When a user requests credentials, an IAM user is dynamically created in the same AWS account as the vault-account and an access key is returned for the user. Revoking the credentials in Vault destroys the IAM user on AWS. This works really well for a single AWS account and secrets engine, but what if you are working with multiple AWS accounts? What approaches are available to you?

  1. Create an AWS secrets engine for each AWS account - This is feasible, but could quickly become a management nightmare, especially when it comes to policies.
  2. Grant IAM users cross-account roles - You can handle this entirely on the AWS side by creating roles in each AWS account and granting a group permission to assume that role. When the IAM user is created dynamically, it will be assigned as a member of the group.
  3. Use the assumed_role credential type, and grant vault-account permission to assume roles in different accounts.

All three of these are viable options, but I would argue the cleanest and easiest is probably using the assumed_role credential type. The first option suffers from engine sprawl as the number of accounts increases. The second spreads the management of permissions and policies across both AWS and Vault. The third keeps permissions management entirely on the Vault side and has less overall configuration. The generated credentials also have a shorter TTL, which helps with maintaining security.

With the assumed_role credential type, the vault-account requests temporary credentials for an AWS IAM role that is referenced in the Vault role definition. Yes, both AWS and Vault use the word role. And yes, it can be confusing. The AWS role can be in the same account as the vault-account or in a different account. You can create multiple Vault roles, and you can specify multiple AWS roles in a single Vault role definition. The requestor, assuming they have the proper permissions on Vault, performs a write operation against the Vault role and receives AWS credentials that are good for a limited amount of time, typically 60 minutes or less. You can restrict who has access to request credentials using Vault policies. For instance, you could have a policy that allows all developers to request credentials for development AWS accounts but grants no access to production AWS accounts.
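
As a rough sketch of what that could look like, the following Vault policy lets a developer request credentials only from Vault roles that follow a hypothetical dev- naming convention (the prefix and the policy name are assumptions for illustration). Requests for assumed_role credentials are write operations against aws/sts/, hence the update capability:

# Hypothetical developer policy: allow requesting credentials only from
# Vault roles whose names start with "dev-"
cat << EOF > dev-aws.hcl
path "aws/sts/dev-*" {
  capabilities = ["update"]
}
EOF

vault policy write dev-aws dev-aws.hcl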

With all that in mind, let’s actually get to the meat and potatoes of what we need to configure to get cross-account credentials from the assumed_role credential type.

Setting up cross-account credentials

We are going to be setting up our AWS environment and a dev instance of Vault server to get the cross-account credentials working. If you want to follow along, you will need the following:

  • Two AWS accounts - primary and secondary
  • Admin permissions in each AWS account
  • The Vault executable
  • The AWS CLI

We are going to be performing the following steps to get things working:

  1. Create the vault-account IAM user in the primary AWS account
  2. Create the IAM role in the secondary AWS account
  3. Grant the vault-account the AssumeRole permissions to the IAM role
  4. Start up a dev instance of the Vault server
  5. Enable the AWS secrets engine and configure it
  6. Create the Vault role and test it

Let’s start by getting our AWS environment set up.

Configure the AWS environment

First, we are going to set up our two AWS profiles, primary and secondary. Each will refer to a separate AWS account that you have admin access to.

aws configure --profile primary
aws configure --profile secondary

After each command, enter the Access Key, Secret Access Key, and default region for the profile.

Now we are going to create the vault-account IAM user and store the ARN in a variable for later use:

# Create the vault-account IAM user on the primary account

vaultacct=$(aws iam create-user --user-name=vault-account --profile=primary)
vaultarn=$(echo $vaultacct | jq .User.Arn -r)

In the secondary account, we will create a role called ec2-admin that has full admin permissions on EC2, and attach a trust policy that grants the vault-account permission to assume this role:

# Create the role with an assume policy in the secondary account
cat << EOF > assume_policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "$vaultarn"
      },
      "Action": "sts:AssumeRole",
      "Condition": {}
    }
  ]
}
EOF

ec2admin=$(aws iam create-role --role-name=ec2-admin --assume-role-policy-document=file://assume_policy.json --profile=secondary)

# Grant the role the AmazonEC2FullAccess permission

aws iam attach-role-policy --role-name=ec2-admin --policy-arn=arn:aws:iam::aws:policy/AmazonEC2FullAccess --profile=secondary

We are capturing the ec2-admin role ARN in a variable as well, since we will need it for a policy that will be attached to the vault-account.

# Create the allow policy in the primary account
ec2adminarn=$(echo $ec2admin | jq .Role.Arn -r)

cat << EOF > allow_role.json
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": "$ec2adminarn"
  }
}
EOF

allow_policy=$(aws iam create-policy --policy-name=allow-vault-ec2-admin --policy-document=file://allow_role.json --profile=primary)
allow_policy_arn=$(echo $allow_policy | jq .Policy.Arn -r)

aws iam attach-user-policy --user-name=vault-account --policy-arn=$allow_policy_arn --profile=primary

Now we have an IAM user in the primary account with permissions to assume a role in the secondary account. Lastly, we will need to generate an access key for the vault-account. That will be used when we configure the AWS secrets engine on Vault.

# Create an access key for the vault-account to use with Vault
access_key=$(aws iam create-access-key --user-name=vault-account --profile=primary)
key=$(echo $access_key | jq .AccessKey.AccessKeyId -r)
secret=$(echo $access_key | jq .AccessKey.SecretAccessKey -r)

Configure Vault

We now have everything set up correctly on the AWS side. It’s time to configure Vault. In this example, we are going to fire up a dev instance of the Vault server. Open a separate terminal window and run the following command:

vault server -dev

Make note of the root token, as we will need it to log into the Vault server. Back in your original terminal, run the following to set the address of the Vault server and log in using the root token:

export VAULT_ADDR='http://127.0.0.1:8200'

vault login

It’s probably a good idea to point out that we are using the root token for these operations because this is a dev server instance. In a real-world scenario, you would never use the root token for these operations. You probably already knew that, but I feel it bears repeating.

Now we are going to enable the AWS secrets engine on the default path and configure it to use the vault-account:

vault secrets enable aws

vault write aws/config/root \
    access_key=$key \
    secret_key=$secret \
    region=us-east-1

With the secrets engine configured, we can create a Vault role on the engine of type assumed_role and specify the ARN of the ec2-admin role we created in the secondary account:

vault write aws/roles/ec2-admin \
    role_arns=$ec2adminarn \
    credential_type=assumed_role

Finally, we can request credentials from this role on the aws/sts/ path by issuing a write command. You could create a Vault policy that restricts who has permission to write to this path or to this specific role. Since we are logged in with the root token, we can do anything we like. Let’s request a credential!

vault write aws/sts/ec2-admin ttl=60m

You should receive a response similar to this:

Key                Value
---                -----
lease_id           aws/sts/ec2-admin/KN8XkkYfHT0bLHbGBqLx9aQG
lease_duration     1h
lease_renewable    false
access_key         ASIAQWO6FK2M3DWVP4KL
secret_key         3070VHhjHNKvVVidiQ/5FQno2KzzJLjmdWMdwUV0
security_token     FwoGZXIvYXdzEEgaDGfxR/2ttX6FIw7dCSLIAXAM7eBF32NXZUD3E5rkPKa/XQVJgc4rZfMWaiaSNIMkehOzGsjdy008befX20mHYlANiYaYDLd2Jp66ceSa/FPR4ev5GAgt8+mNjNrPYmSCx3VbZ5Gygi72XmvS0/T4GhDPuaflHz9nHsUdGeun1hjoWAK4VtISN1oB/xuTCz9cZ+nvgKDX73q9ueomtFExvgDYQmg/bJfkNnloHHo+pDK6x24x4OT5NAlAyZtNr3x2ExIq9N4IFbomz6KL/mhgXH6EN6f69duaKMDWqfoFMi0sqzp0soP+aYF537YleaaAayIF5X3DtIKEuDgavZIe8KQtLxg80cSbnMYm9jM=

The lease duration is one hour, so these credentials will no longer be valid after 60 minutes, and they are not renewable.
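
If you want a different default or a hard cap on the STS credential lifetime, the Vault role itself accepts default_sts_ttl and max_sts_ttl parameters. The values below are just examples, not recommendations:

# Optional: set a default and a maximum TTL for STS credentials on the Vault role
vault write aws/roles/ec2-admin \
    role_arns=$ec2adminarn \
    credential_type=assumed_role \
    default_sts_ttl=30m \
    max_sts_ttl=1h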

We should be able to use these credentials to do something like list all the VPCs in the region for the secondary account. The easiest way to do that is to set environment variables for the access key, secret key, and session token, substituting the values returned by Vault:

export AWS_ACCESS_KEY_ID=ACCESS_KEY
export AWS_SECRET_ACCESS_KEY=SECRET_ACCESS_KEY
export AWS_SESSION_TOKEN=SESSION_TOKEN

aws ec2 describe-vpcs --region=us-east-1

You should receive a valid response that includes, at the very least, your default VPC in the region.

Conclusion

In this post we examined the different credential types available to the AWS secrets engine and explored how to configure the assumed_role type for cross-account access. While the example was simple, involving only two AWS accounts, you could easily expand this pattern out to tens or hundreds of accounts with little additional effort. Vault policies would be used to govern account access, and you could use a tool like Terraform to configure the necessary accounts and permissions in each AWS account.
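
For example, bringing a third account into the fold is just another IAM role whose trust policy lets the vault-account assume it, plus one more Vault role pointing at that ARN. The account ID and role name below are placeholders to show the shape of the pattern:

# Hypothetical third account: same pattern, different role ARN
vault write aws/roles/ec2-admin-third \
    role_arns=arn:aws:iam::333333333333:role/ec2-admin \
    credential_type=assumed_role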

You would also want to set up logging on both Vault and AWS to correlate who is requesting access and what they are doing with it.
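
On the Vault side, a minimal way to get that visibility is to enable an audit device (the log path here is just an example); on the AWS side, CloudTrail will record the sts:AssumeRole calls made by the vault-account.

# Log all Vault requests, including writes to aws/sts/, to a file
vault audit enable file file_path=/var/log/vault_audit.log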

That does it for this post. If you have additional questions or thoughts, please leave them in the comments. And check out my weekly Vault certification videos on YouTube.