Whether you are a developer or a system administrator, you will have to manage the issue of sharing “secrets” or secure information. In this context, a secret is any sensitive information that should be protected: if lost or stolen, your passwords, database credentials, or cloud provider keys could damage your business. Safely storing and sharing this information becomes more difficult with modern, complex infrastructures. In today’s post, we’re going to explore how to get started with HashiCorp Vault and how it can be used to manage secure information in a microservice, Docker-based environment.
The drawbacks of common approaches
To deal with the problem of managing secure information, developers and sysadmins can choose from a few common approaches:
- Stored in the image: While this approach is easy to achieve, it should be avoided in any production environment. Secrets are accessible to anyone who has access to the image, and because they persist in the image’s previous layers, they cannot be deleted.
- Environment variables: When starting up our containers, we can easily set environment variables using the -e docker run parameter. This approach is much better than the previous one, but it still has some drawbacks: the secrets are visible to anyone who can inspect the container, and they can easily leak into debug logs.
- Secrets mounted in volumes: We can create a file that stores our secrets and then mount it at container startup. This is easy to do and probably better than the previous approaches, but it still has limitations: in infrastructures with a large number of running containers, where each container needs only a small subset of secrets, this approach quickly becomes difficult to manage. (Both this and the previous approach are sketched just after this list.)
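As a quick illustration, here is a minimal sketch of the last two approaches. The image name my-web-app, the DB_PASSWORD variable, and the file paths are hypothetical placeholders, not part of the rest of this walkthrough:

# Secret passed as an environment variable (visible via docker inspect and easily leaked into logs)
$ docker run -d -e DB_PASSWORD=supersecret my-web-app

# Secret mounted read-only from a file on the host
$ docker run -d -v $(pwd)/db_password.txt:/run/secrets/db_password:ro my-web-app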
In addition to the cons mentioned above, all of these approaches share some common problems, including:
- Secrets are not managed by a single source. In complex infrastructures, this is a big problem and ideally, we want to manage and store all of our secrets from a single source.
- If secrets have an expiration time, we will be required to perform some manual actions to refresh them.
- We cannot share just a subset of our credentials to specific users or services.
- We do not have any audit logs to track who requested a particular secret and when, or any logs for failed requests. These are things that we should be aware of since they could represent potential external attacks.
- Even if we find an external attack, we don’t have an easy way to perform a break-glass procedure to stop secrets from being shared with external services or users.
All of the above problems can be mitigated and managed using a dedicated tool such as HashiCorp Vault. This makes particular sense in a microservice environment, where we want to manage secrets centrally and expose them as a service to any authorized service or user.
What is HashiCorp Vault?
From the official Vault documentation:
Vault secures, stores, and tightly controls access to tokens, passwords, certificates, API keys, and other secrets in modern computing. Vault handles leasing, key revocation, key rolling, and auditing. Through a unified API, users can access an encrypted Key/Value store and network encryption-as-a-service, or generate AWS IAM/STS credentials, SQL/NoSQL databases, X.509 certificates, SSH credentials, and more.
Using Vault, we can delegate the management of our secrets to a single tool. Vault takes care of encrypting each secret both at rest and in transit. It has built-in support for several authentication, storage, and audit backends, and it was built with high availability in mind. Vault also makes it easy to set up multi-datacenter replication.
Get started with HashiCorp Vault
Vault makes use of a storage backend to securely store and persist encrypted secrets. In today’s example, we’ll use the PostgreSQL backend. We will begin by starting a container named vault-storage-backend from the official PostgreSQL image with vault as database name, username, and password:
$ docker run -d -e POSTGRES_PASSWORD=vault -e POSTGRES_USER=vault -e POSTGRES_DB=vault --name vault-storage-backend postgres
Since Vault’s PostgreSQL storage backend will not automatically create anything once set up, we need to execute some simple SQL queries to create the required schema and indexes.
Let’s connect to the Docker container and open a PSQL session:
$ docker exec -it vault-storage-backend bash
$ su - postgres
$ psql vault
Required schema and indexes can be easily created by executing the following SQL statements:
CREATE TABLE vault_kv_store (
  parent_path TEXT COLLATE "C" NOT NULL,
  path        TEXT COLLATE "C",
  key         TEXT COLLATE "C",
  value       BYTEA,
  CONSTRAINT pkey PRIMARY KEY (path, key)
);

CREATE INDEX parent_path_idx ON vault_kv_store (parent_path);
We don’t need to do anything else inside the PostgreSQL container so we can close the session and go back to the host terminal.
Now that PostgreSQL is properly configured, we need to create a configuration file to inform Vault that its storage backend will be the Vault database inside the vault-storage-backend container. Let’s do that by defining the following configuration file named config.hcl.
# config.hcl
{
  "backend": {
    "postgresql": {
      "connection_url": "postgres://vault:vault@storage-backend:5432/vault?sslmode=disable"
    }
  },
  "listener": {
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 1
    }
  }
}
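Since HCL is a superset of JSON, Vault accepts its configuration in either syntax. For reference, the same configuration written in native HCL syntax would look roughly like this (an equivalent sketch of the JSON above, not a second file you need to create):

backend "postgresql" {
  connection_url = "postgres://vault:vault@storage-backend:5432/vault?sslmode=disable"
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = 1
}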
Using Vault, we can make use of Access Control Policies (ACLs) to allow or deny access to specific secrets. Before proceeding, let’s define a simple policy file that grants read-only access to every secret under the secret/web path to any authenticated user or service associated with that policy:
# web-policy.hcl
path "secret/web/*" {
  policy = "read"
}
Both files will be stored inside a Docker data container to be easily accessible from other linked containers. Let’s create the container by executing:
$ docker create -v /config -v /policies --name vault-config busybox
Next, we will copy both files into it:
$ docker cp config.hcl vault-config:/config/
$ docker cp web-policy.hcl vault-config:/policies/
Since we want to make use of Vault’s auditing capabilities and we want to make logs persistent, we will store them in a local folder on the host and then mount it in Vault’s container. Let’s create the local folder:
$ mkdir logs
Finally, we can start our Vault server by launching a container named vault-server:
$ docker run \
    -d \
    -p 8200:8200 \
    --cap-add=IPC_LOCK \
    --link vault-storage-backend:storage-backend \
    --volumes-from vault-config \
    -v $(pwd)/logs:/vault/logs \
    --name vault-server \
    vault server -config=/config/config.hcl
As you can see, we are using the official Vault image available on Docker Hub. Vault listens on port 8200 inside the container, and that port is published on port 8200 of the localhost. The PostgreSQL container is linked under the alias storage-backend, which is the same hostname used in the configuration file config.hcl. The config and policy volumes are mounted from the data container named vault-config, and the localhost’s logs folder is mounted at /vault/logs/ inside the container. Finally, Vault is started with the configuration defined in config.hcl.
To interact with Vault from the localhost, we can define an alias:
$ alias vault='docker exec -it vault-server vault "$@"'
$ export VAULT_ADDR=http://127.0.0.1:8200
We can then initialize Vault by executing:
$ vault init -address=${VAULT_ADDR}
We will receive an output similar to the following:
Unseal Key 1: QZdnKsOyGXaWoB2viLBBWLlIpU+tQrQy49D+Mq24/V0B
Unseal Key 2: 1pxViFucRZDJ+kpXAeefepdmLwU6QpsFZwseOIPqaPAC
Unseal Key 3: bw+yIvxrXR5k8VoLqS5NGW4bjuZym2usm/PvCAaMh8UD
Unseal Key 4: o40xl6lcQo8+DgTQ0QJxkw0BgS5n6XHNtWOgBbt7LKYE
Unseal Key 5: Gh7WPQ6rWgGTBRSMecuj8PR8IM0vMIFkSZtRNT4dw5MF
Initial Root Token: 5b781ff4-eee8-d6a1-ea42-88428a7e8815

Vault initialized with 5 keys and a key threshold of 3. Please
securely distribute the above keys. When the Vault is re-sealed,
restarted, or stopped, you must provide at least 3 of these keys
to unseal it again.

Vault does not store the master key. Without at least 3 keys,
your Vault will remain permanently sealed.
Vault has been successfully initialized and is now in a sealed state. Before we can start interacting with it, we first need to unseal it.
In the previous output, we can see five different unseal keys. This is because Vault makes use of Shamir’s Secret Sharing: at least three of the five generated keys must be provided to unseal the vault. That’s why each key should be given to a different person inside your organization or team; in this way, a single malicious person will never be able to access the vault to steal or modify your secrets. The number of generated and required keys can be modified when you initially set up your Vault.
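For example, a vault initialized with seven shares and a threshold of four could be created with something along these lines (just a sketch; in this walkthrough we stick with the defaults of 5 and 3):

$ vault init -address=${VAULT_ADDR} -key-shares=7 -key-threshold=4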
Let’s unseal our vault using three of the provided keys:
$ vault unseal -address=${VAULT_ADDR} QZdnKsOyGXaWoB2viLBBWLlIpU+tQrQy49D+Mq24/V0B
$ vault unseal -address=${VAULT_ADDR} bw+yIvxrXR5k8VoLqS5NGW4bjuZym2usm/PvCAaMh8UD
$ vault unseal -address=${VAULT_ADDR} Gh7WPQ6rWgGTBRSMecuj8PR8IM0vMIFkSZtRNT4dw5MF
The final output will be:
Sealed: false
Key Shares: 5
Key Threshold: 3
Unseal Progress: 0
Unseal Nonce:
This means that the vault has been correctly unsealed and we can finally start interacting with it.
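If we want to double-check the seal state at any later point, the vault status command reports it (an optional sanity check, not required for the rest of this walkthrough):

$ vault status -address=${VAULT_ADDR}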
In addition to the unseal keys, the previous vault init output contains an Initial Root Token. Authenticating to Vault with that token grants us root access. Let’s authenticate using it:
$ vault auth -address=${VAULT_ADDR} 5b781ff4-eee8-d6a1-ea42-88428a7e8815
The received output will be:
Successfully authenticated! You are now logged in.
First, we need to enable Vault’s audit backend. To do that, execute the following:
$ vault audit-enable -address=${VAULT_ADDR} file file_path=/vault/logs/audit.log
From this point forward, every interaction with the Vault will be audited and persisted in a log file inside the logs folder on the localhost.
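If we ever need to verify which audit backends are currently enabled, we should be able to list them with (an optional check):

$ vault audit-list -address=${VAULT_ADDR}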
We can now write and read our first secret:
$ vault write -address=${VAULT_ADDR} secret/hello value=world
$ vault read -address=${VAULT_ADDR} secret/hello
The output will be exactly what we expect:
Key                 Value
---                 -----
refresh_interval    768h0m0s
value               world
Next, let’s register the policy defined in the previous web-policy.hcl file so that we can verify that ACLs work as expected:
$ vault policy-write -address=${VAULT_ADDR} web-policy /policies/web-policy.hcl
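We can confirm that the policy has been registered by listing the policies known to Vault (an optional check):

$ vault policies -address=${VAULT_ADDR}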
Now we can write a new secret inside secret/web path:
$ vault write -address=${VAULT_ADDR} secret/web/web-apps db_password='password'
Vault has built-in support for many different authentication systems. For example, we can authenticate users using LDAP or GitHub. We want to keep things simple here, so we will make use of the Username & Password authentication backend. We first need to enable it:
$ vault auth-enable -address=${VAULT_ADDR} userpass
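Other authentication backends are enabled in much the same way. Purely as an aside, and as a rough sketch that is not part of this walkthrough (my-org is a placeholder GitHub organization), enabling the GitHub backend would look something like:

$ vault auth-enable -address=${VAULT_ADDR} github
$ vault write -address=${VAULT_ADDR} auth/github/config organization=my-org

For this walkthrough, though, we stick with userpass.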
Next, let’s create a new user, with web as both username and password, associated with the web-policy policy:
$ vault write -address=${VAULT_ADDR} auth/userpass/users/web password=web policies=web-policy
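If we want to double-check how the user was configured (associated policies, TTLs, and so on; the password is never returned), we should be able to read its configuration back as an optional verification step:

$ vault read -address=${VAULT_ADDR} auth/userpass/users/web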
Let’s authenticate this new user to Vault:
$ vault auth -address=${VAULT_ADDR} -method=userpass username=web password=web
Vault informs us that we have correctly authenticated, and since the policy associated with the user grants read-only access to the secret/web path, we are able to read the secrets inside that path by executing:
$ vault read -address=${VAULT_ADDR} secret/web/web-apps
However, if we try to execute:
$ vault read -address=${VAULT_ADDR} secret/hello
We will receive the following:
Error reading secret/hello: Error making API request.

URL: GET http://127.0.0.1:8200/v1/secret/hello
Code: 403. Errors:

* permission denied
This means that Vault’s ACL checks are working fine. We can also see the denied request in the audit logs by executing:
$ tail -f logs/audit.log
In fact, in the output we will see:
{ "time":"2017-03-21T15:32:44Z", "type":"request", "auth":{ "client_token":"", "accessor":"", "display_name":"", "policies":null, "metadata":null }, "request":{ "id":"e0c254e6-5701-79ac-2959-34db59d1c9cf", "operation":"read", "client_token":"hmac-sha256:3c0d732a6899fdae57018b4b341b08e1348e21cb866412e0a394ad48e3d4e8c4", "client_token_accessor":"hmac-sha256:48128e5b762f1ec376cebe9a3c41b85a2042d7e937b14b634f8c287a6deddd6c", "path":"secret/hello", "data":null, "remote_address":"127.0.0.1", "wrap_ttl":0, "headers":{ } }, "error":"permission denied" }
In this scenario, we could easily integrate external services such as AWS CloudWatch and AWS Lambda to revoke access to users or completely seal the vault.
For example, if we would like to revoke access for the web user, we could execute:
$ vault token-revoke -address=${VAULT_ADDR} -mode=path auth/userpass/users/web
Or if we would like to completely seal the vault, we can execute:
$ vault seal -address=${VAULT_ADDR}
Let’s now imagine that we have an external service running on a different container that needs access to some secrets stored with Vault. Let’s start a container from the official Python image and directly attach to its Bash.
$ docker run -it --link vault-server:vault-server python bash
To programmatically interact with Vault, we first need to install hvac, a widely used Python client for Vault.
$ pip install hvac
Let’s now try to access some secrets from this new container via Vault:
import hvac

client = hvac.Client(url='http://vault-server:8200')

# Authenticate to Vault as the web user
client.auth_userpass('web', 'web')

# This will work: web-policy grants read access to secret/web/*
print(client.read('secret/web/web-apps'))

# This will fail with a 403, since the authenticated user is only
# associated with the web-policy ACL
client.read('secret/hello')
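The same interaction can also be performed directly against Vault’s HTTP API. As a rough sketch (assuming curl and jq are available on the host, and using the port published on localhost), the web user could log in and read the secret like this:

# Log in as the web user and capture the returned client token
$ TOKEN=$(curl -s -X POST -d '{"password": "web"}' \
    http://127.0.0.1:8200/v1/auth/userpass/login/web | jq -r .auth.client_token)

# Read the secret using that token
$ curl -s -H "X-Vault-Token: ${TOKEN}" \
    http://127.0.0.1:8200/v1/secret/web/web-apps | jq .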
Summary
Today we have seen how secrets can be delegated to a single point of access and management using HashiCorp Vault and how it can be set up in a microservice, container-based environment. We have only scratched the surface of Vault’s features and capabilities.
To get started with the HashiCorp Vault course, sign in to your Cloud Academy account. I also highly recommend spending some time with the official Getting Started guide to go deeper into Vault’s concepts and functionalities.