
Using environment files over injected environment variables in Kubernetes

Published on

Last month I decided to finally dive in and learn Kubernetes. Since then I have been spending my mornings listening to podcasts and other audio resources on the topic. The other morning I listened to the Kubernetes Security with Liz Rice episode of Software Engineering Daily. The episode covers the security surface area of Kubernetes, which is fairly large given all of the components in a distributed system.

One of the topics was configuration and secret storage. Kubernetes uses etcd, a distributed key-value store for critical system data, to store configuration and secrets. In Kubernetes, the configuration that needs to be passed to a container is defined in a configuration map resource object, and secrets are their own resource object as well. Kubernetes allows you to inject configuration and secrets into Pods so that they are available to the container instances in a Pod.

If you're not familiar with Kubernetes

  • Configuration maps are resource objects for storing non-sensitive data to be used as configuration within containers on a Pod
  • Secrets are resource objects like configuration maps, but they are intended for sensitive data and store values obfuscated with base64 encoding
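It is worth emphasizing that base64 is an encoding, not encryption: anyone who can read the secret object can recover the value. A quick Python sketch:

```python
import base64

# base64 "obfuscation" is trivially reversible -- it is not encryption.
encoded = base64.b64encode(b"supersecret").decode()
print(encoded)                     # c3VwZXJzZWNyZXQ=
print(base64.b64decode(encoded))   # b'supersecret'
```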

Kubernetes allows you to provide configuration maps and secrets directly as environment variables in the container or through environment files that the application can mount. I honestly did not consider the difference between the two until Liz Rice made an interesting point.

  • Injected environment variables are always present in the container's environment and may end up as artifacts in logs across the entire system.
  • When an application loads an environment file at bootstrap, those secrets are only available within the application's runtime.

Environment files help reduce the attack surface. Application logging may still produce environment variable artifacts; however, those logging artifacts are easier to manage and control.

Let's say we have the following configuration map manifest. Note that the MySQL password is in the configuration map! Later we will move it to a secret.

apiVersion: v1
kind: ConfigMap
metadata:
  name: database-conn
  namespace: default
data:
  MYSQL_DATABASE: db
  MYSQL_HOSTNAME: db.service
  MYSQL_PASSWORD: supersecret
  MYSQL_PORT: "3306"
  MYSQL_USER: db

Using the following Pod manifest, we can take the configuration map and directly inject the variables into the environment.

apiVersion: v1
kind: Pod
metadata:
  name: env-inject-example
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "tail", "-f", "/dev/null" ]
      envFrom:
      - configMapRef:
          name: database-conn
  restartPolicy: Never

If you were to open a shell inside the container in our env-inject-example Pod (for example, with kubectl exec), you would see that the session has access to all of the configuration map values as injected environment variables. That means every process within the container has access to these environment variables.
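We can simulate this behavior locally. In the sketch below (plain Python, standing in for the container), a variable placed in the environment is inherited by every child process, whether or not that process needs it:

```python
import os
import subprocess
import sys

# Simulate an injected environment variable: every child process
# spawned from this environment inherits it automatically.
env = dict(os.environ, MYSQL_PASSWORD="supersecret")
child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['MYSQL_PASSWORD'])"],
    env=env, capture_output=True, text=True,
)
print(child.stdout.strip())  # supersecret
```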

Instead of having the configuration map injected as environment variables, we can mount it as a volume so that it is represented as files on disk. This creates environment files your application can then read. The following manifest mounts the configuration map at /etc/myapp/ inside the container.

apiVersion: v1
kind: Pod
metadata:
  name: env-mount-example
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "tail", "-f", "/dev/null" ]
      volumeMounts:
      - name: database-conn-volume
        mountPath: /etc/myapp/
  volumes:
    - name: database-conn-volume
      configMap:
        name: database-conn
  restartPolicy: Never

Liz Rice stated in the podcast that these configuration map volume mounts are ephemeral and are not written to actual persistent volume storage. However, there is one problem. With our current configuration map declaration, we have an individual data key for each configuration value, so the volume mount leaves us with an individual environment file for each configuration item.
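To illustrate, here is a local Python sketch of what the volume mount materializes, with a temporary directory standing in for /etc/myapp/ (one file per data key):

```python
import os
import tempfile

# Each data key in the configuration map becomes its own file
# under the mount path -- a local stand-in for /etc/myapp/.
data = {
    "MYSQL_DATABASE": "db",
    "MYSQL_HOSTNAME": "db.service",
    "MYSQL_PASSWORD": "supersecret",
}
mount_path = tempfile.mkdtemp()
for key, value in data.items():
    with open(os.path.join(mount_path, key), "w") as f:
        f.write(value)

print(sorted(os.listdir(mount_path)))
# ['MYSQL_DATABASE', 'MYSQL_HOSTNAME', 'MYSQL_PASSWORD']
```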

Our current configuration map declaration is perfect for injected environment variables, but not so much for reading from an application. It is a bit much to require our application to go out and source several separate environment files. We can rewrite the configuration map to put all of our configuration items into a single environment file. The following change puts all of the desired environment variables under the db_config key, which will generate a single db_config file containing all of our MYSQL_* configuration.

apiVersion: v1
kind: ConfigMap
metadata:
  name: database-conn
  namespace: default
data:
  db_config: |
    MYSQL_DATABASE: db
    MYSQL_HOSTNAME: db.service
    MYSQL_PASSWORD: supersecret
    MYSQL_PORT: 3306
    MYSQL_USER: db

After applying the manifest changes, our mounted Pod example has a /etc/myapp/db_config file containing the application's database connection settings, which it can read at startup to populate its environment.
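At bootstrap, the application just has to parse that file. Here is a minimal Python sketch (the load_env_file helper is illustrative, and a temporary file stands in for the mounted /etc/myapp/db_config; it parses the `KEY: value` lines exactly as written in the manifest above):

```python
import os
import tempfile

def load_env_file(path):
    """Parse 'KEY: value' lines (as written in our config map) into a dict."""
    config = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            key, _, value = line.partition(":")
            config[key.strip()] = value.strip()
    return config

# Stand-in for the mounted /etc/myapp/db_config file.
path = os.path.join(tempfile.mkdtemp(), "db_config")
with open(path, "w") as f:
    f.write("MYSQL_DATABASE: db\nMYSQL_HOSTNAME: db.service\nMYSQL_PASSWORD: supersecret\n")

config = load_env_file(path)
print(config["MYSQL_HOSTNAME"])  # db.service
```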

Okay, great! Our MySQL configuration is now in an environment file that our application can read. These values are no longer part of the process environment, so they are not automatically exposed to sidecar containers or to every process within our application container. But our database password is still just sitting there in plain text. We need to move it out of the configuration map and into a secret, which we will mount as an environment file just like the configuration map.

Note: Kubernetes does not support interpolating secrets into a configuration file via any templating-like format, so you need to mount your configuration map and secrets separately.

Here is an example of the secret's manifest. The password has been base64 encoded into the manifest and will be available as the db_password environment file.

apiVersion: v1
kind: Secret
metadata:
  name: database-pass
type: Opaque
data:
  db_password: c3VwZXJzZWNyZXQK
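One gotcha worth noting: the encoded value above decodes to supersecret followed by a trailing newline, a common artifact of encoding with echo instead of echo -n. The application should strip it when reading the mounted file; a quick Python check:

```python
import base64

# c3VwZXJzZWNyZXQK decodes to b'supersecret\n' -- the trailing newline
# is a common artifact of piping through `echo` without -n when encoding.
raw = base64.b64decode("c3VwZXJzZWNyZXQK")
print(raw)  # b'supersecret\n'
password = raw.decode().rstrip("\n")
print(password)  # supersecret
```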

Here is an updated manifest for our Pod, which mounts both the configuration map and the secret as environment files. The secret is attached as a volume, and the container specifies it as a volumeMount marked read-only.

apiVersion: v1
kind: Pod
metadata:
  name: env-mount-example
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "tail", "-f", "/dev/null" ]
      volumeMounts:
      - name: database-conn-volume
        mountPath: /etc/myapp/
      - name: database-pass-volume
        readOnly: true
        mountPath: "/etc/myapp_secrets"
  volumes:
    - name: database-conn-volume
      configMap:
        name: database-conn
    - name: database-pass-volume
      secret:
        secretName: database-pass
  restartPolicy: Never

When we access our Pod, we now have a configuration environment file and a secrets environment file for the database password. The application still has to read two files, however. It may be worthwhile to simply store the entire database connection configuration as a secret; that is up to you.
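The two-file bootstrap could look something like this sketch (the directory names stand in for the /etc/myapp and /etc/myapp_secrets mount paths; note that the secret volume exposes the already-decoded value, not the base64 text):

```python
import base64
import os
import tempfile

# Stand-ins for the two volume mounts.
app_dir = tempfile.mkdtemp()      # /etc/myapp
secret_dir = tempfile.mkdtemp()   # /etc/myapp_secrets
with open(os.path.join(app_dir, "db_config"), "w") as f:
    f.write("MYSQL_DATABASE: db\nMYSQL_HOSTNAME: db.service\n")
with open(os.path.join(secret_dir, "db_password"), "w") as f:
    # Kubernetes writes the decoded secret value into the file.
    f.write(base64.b64decode("c3VwZXJzZWNyZXQK").decode())

# Bootstrap: merge the non-sensitive config with the secret.
config = {}
with open(os.path.join(app_dir, "db_config")) as f:
    for line in f:
        key, _, value = line.partition(":")
        config[key.strip()] = value.strip()
with open(os.path.join(secret_dir, "db_password")) as f:
    config["MYSQL_PASSWORD"] = f.read().rstrip("\n")

print(config["MYSQL_PASSWORD"])  # supersecret
```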

There is still the problem that our secret is stored within the Kubernetes cluster itself alongside normal data. The risks associated are outlined in the Kubernetes secrets documentation.

  • Secret data is stored in etcd, which means etcd should be configured to encrypt it at rest
  • Communication with etcd should be encrypted (if you're on a managed cluster provider, such as DigitalOcean, this is probably taken care of for you)
  • Access control needs to be exercised over who has access to etcd

There are third-party solutions for off-site secret storage. One that I am familiar with is Lockr, which has been providing application-layer secret storage for some time now. It would be interesting to see how Lockr could be integrated into a Kubernetes cluster for secure off-site key storage. This way, secret management with Lockr could be moved out of the application layer and into Pod deployment.

Photo by Markus Spiske on Unsplash