Standalone MongoDB on a Kubernetes Cluster

Dilip Kumar
7 min read · Jul 21, 2019

Since the introduction of StatefulSets, it has become much easier to host MongoDB on a Kubernetes cluster. In this post, I am going to share the steps to install a standalone MongoDB on a Kubernetes cluster.

Prerequisites

Before you begin, please make sure of the following:

  • If you are developing locally, have Minikube or another Kubernetes cluster installed (see the quick check after this list)
  • I have used the node hostname `mongodb-node`; change it to match your own node selector
  • If you are testing on a private network (typically an office network), use your own Docker registry for the MongoDB Docker image
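
On a local Minikube setup, for example, you can start the cluster and find the node name that your nodeSelector must match (a quick sketch; the name on your cluster will differ from `mongodb-node`):

minikube start
kubectl get nodes
# use the NAME column value in place of mongodb-node in the manifests below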

Quick and Dirty Standalone MongoDB

A simple version of standalone MongoDB can be installed with the following constraints:

  • Hardcoded admin username and password
  • A headless service to access the standalone MongoDB from other containers
  • No persistent volume
  • No app user
Step 1: Use the following as statefulsets.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb-standalone
spec:
  serviceName: database
  replicas: 1
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
        selector: mongodb-standalone
    spec:
      containers:
        - name: mongodb-standalone
          image: mongo:4.0.8
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              value: admin
            - name: MONGO_INITDB_ROOT_PASSWORD
              value: password
      nodeSelector:
        kubernetes.io/hostname: mongodb-node
---

Step 2: Use the following headless service.yaml

apiVersion: v1
kind: Service
metadata:
  name: database
  labels:
    app: database
spec:
  clusterIP: None
  selector:
    app: database

Step 3: Simply apply these two templates

kubectl apply -f statefulsets.yaml
kubectl apply -f service.yaml
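
To verify that the pod is up before connecting (a quick check using the labels defined above):

kubectl get statefulset mongodb-standalone
kubectl get pods -l app=database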

Step 4: Use the following to connect to the running MongoDB

kubectl exec -it mongodb-standalone-0 sh
mongo mongodb://mongodb-standalone-0.database:27017

Step 5: Log in as the admin user

use admin
db.auth('admin','password')

This standalone MongoDB has the problems listed at the beginning. Now let's use Kubernetes Secrets instead of a hardcoded admin password in the template.

Standalone MongoDB with username/password stored in Kubernetes Secrets

Since we want to store the username and password as Secrets, we first need to add them to the Kubernetes cluster.

Step 1: Add Secrets to the Kubernetes cluster. The following is a sample template (secrets.yaml) for reference.

apiVersion: v1
kind: Secret
metadata:
  name: k8-training
type: Opaque
data:
  MONGO_ROOT_USERNAME: YWRtaW4K
  MONGO_ROOT_PASSWORD: cGFzc3dvcmQK
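
The values under data are plain base64. The samples above were produced with echo (note that echo appends a trailing newline, which is why they decode to admin and password followed by a newline; use echo -n if you want to avoid it):

echo 'admin' | base64      # YWRtaW4K
echo 'password' | base64   # cGFzc3dvcmQK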

Step 2: Use the following statefulsets.yaml, which reads the root credentials from the mounted Secret

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb-standalone
spec:
  serviceName: database
  replicas: 1
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
        selector: mongodb-standalone
    spec:
      containers:
        - name: mongodb-standalone
          image: mongo:4.0.8
          env:
            - name: MONGO_INITDB_ROOT_USERNAME_FILE
              value: /etc/k8-training/admin/MONGO_ROOT_USERNAME
            - name: MONGO_INITDB_ROOT_PASSWORD_FILE
              value: /etc/k8-training/admin/MONGO_ROOT_PASSWORD
          volumeMounts:
            - name: k8-training
              mountPath: /etc/k8-training
              readOnly: true
      nodeSelector:
        kubernetes.io/hostname: mongodb-node
      volumes:
        - name: k8-training
          secret:
            secretName: k8-training
            items:
              - key: MONGO_ROOT_USERNAME
                path: admin/MONGO_ROOT_USERNAME
                mode: 0444
              - key: MONGO_ROOT_PASSWORD
                path: admin/MONGO_ROOT_PASSWORD
                mode: 0444
---
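
The *_FILE variants of the environment variables tell the official mongo image to read the root credentials from the mounted files instead of from plain environment variables. Once the pod is running, you can confirm the Secret is mounted as expected (a quick check, assuming the pod name above):

kubectl exec -it mongodb-standalone-0 -- ls /etc/k8-training/admin
kubectl exec -it mongodb-standalone-0 -- cat /etc/k8-training/admin/MONGO_ROOT_USERNAME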

Step 3: Use the same headless service.yaml

apiVersion: v1
kind: Service
metadata:
  name: database
  labels:
    app: database
spec:
  clusterIP: None
  selector:
    app: database

Step 4: Apply the templates as below

kubectl apply -f secrets.yaml
kubectl apply -f statefulsets.yaml
kubectl apply -f service.yaml

Step 5: Use the following to connect to the running MongoDB

kubectl exec -it mongodb-standalone-0 sh
mongo mongodb://mongodb-standalone-0.database:27017

Step 6: Log in as the admin user

use admin
db.auth('admin','password')

This is good so far; however, to run it in production you would want to add an application user instead of using the default admin.

Standalone MongoDB with a configured app user

To add users other than admin, we need to mount a script into the `docker-entrypoint-initdb.d` folder that creates the new users.

Step 1: Add the following configmap.yaml file

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-standalone
data:
  ensure-users.js: |
    const targetDbStr = 'training';
    const rootUser = cat('/etc/k8-training/admin/MONGO_ROOT_USERNAME');
    const rootPass = cat('/etc/k8-training/admin/MONGO_ROOT_PASSWORD');
    const usersStr = cat('/etc/k8-training/MONGO_USERS_LIST');

    // auth against admin
    const adminDb = db.getSiblingDB('admin');
    adminDb.auth(rootUser, rootPass);
    print('Successfully authenticated admin user');

    // we'll create the users here
    const targetDb = db.getSiblingDB(targetDbStr);

    // user-defined roles should be stored in the admin db
    const customRoles = adminDb
      .getRoles({rolesInfo: 1, showBuiltinRoles: false})
      .map(role => role.role)
      .filter(Boolean);

    // parse the list of users, and create each user as needed
    usersStr
      .trim()
      .split(';')
      .map(s => s.split(':'))
      .forEach(user => {
        const username = user[0];
        const rolesStr = user[1];
        const password = user[2];

        if (!rolesStr || !password) {
          return;
        }

        const roles = rolesStr.split(',');
        const userDoc = {
          user: username,
          pwd: password,
        };

        userDoc.roles = roles.map(role => {
          if (!~customRoles.indexOf(role)) {
            // is this a user-defined role? no, it is built-in, just use the role name
            return role;
          }
          // yes, user-defined, specify the long format
          return {role: role, db: 'admin'};
        });

        try {
          targetDb.createUser(userDoc);
        } catch (err) {
          if (!~err.message.toLowerCase().indexOf('duplicate')) {
            // if not a duplicate user, rethrow
            throw err;
          }
        }
      });

Step 2: Use the following secrets.yaml for the app username, password, and roles

apiVersion: v1
kind: Secret
metadata:
  name: k8-training
type: Opaque
data:
  MONGO_ROOT_USERNAME: YWRtaW4K
  MONGO_ROOT_PASSWORD: cGFzc3dvcmQK
  MONGO_USERNAME: dHJhaW5pbmcK
  MONGO_PASSWORD: cGFzc3dvcmQK
  MONGO_USERS_LIST: dHJhaW5pbmc6ZGJBZG1pbixyZWFkV3JpdGU6cGFzc3dvcmQK
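
MONGO_USERS_LIST decodes to training:dbAdmin,readWrite:password, i.e. the user:role1,role2:password format that ensure-users.js parses (multiple users separated by ;). You can generate your own value the same way:

echo 'training:dbAdmin,readWrite:password' | base64
# dHJhaW5pbmc6ZGJBZG1pbixyZWFkV3JpdGU6cGFzc3dvcmQK

If the init script later fails to authenticate because of the trailing newline that echo adds, re-encode the values with echo -n instead.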

Note: This is not advisable for production. Please either use a vault to store secrets or add Secrets directly to the Kubernetes cluster. Don't manage them as source code.

Step 3: Use the following statefulsets.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb-standalone
spec:
  serviceName: database
  replicas: 1
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
        selector: mongodb-standalone
    spec:
      containers:
        - name: mongodb-standalone
          image: mongo:4.0.8
          env:
            - name: MONGO_INITDB_ROOT_USERNAME_FILE
              value: /etc/k8-training/admin/MONGO_ROOT_USERNAME
            - name: MONGO_INITDB_ROOT_PASSWORD_FILE
              value: /etc/k8-training/admin/MONGO_ROOT_PASSWORD
          volumeMounts:
            - name: k8-training
              mountPath: /etc/k8-training
              readOnly: true
            - name: mongodb-scripts
              mountPath: /docker-entrypoint-initdb.d
              readOnly: true
      nodeSelector:
        kubernetes.io/hostname: mongodb-node
      volumes:
        - name: k8-training
          secret:
            secretName: k8-training
            items:
              - key: MONGO_ROOT_USERNAME
                path: admin/MONGO_ROOT_USERNAME
                mode: 0444
              - key: MONGO_ROOT_PASSWORD
                path: admin/MONGO_ROOT_PASSWORD
                mode: 0444
              - key: MONGO_USERNAME
                path: MONGO_USERNAME
                mode: 0444
              - key: MONGO_PASSWORD
                path: MONGO_PASSWORD
                mode: 0444
              - key: MONGO_USERS_LIST
                path: MONGO_USERS_LIST
                mode: 0444
        - name: mongodb-scripts
          configMap:
            name: mongodb-standalone
            items:
              - key: ensure-users.js
                path: ensure-users.js
---

Step 4: Use the same service.yaml

apiVersion: v1
kind: Service
metadata:
  name: database
  labels:
    app: database
spec:
  clusterIP: None
  selector:
    app: database

Step 5: Apply these templates as below

kubectl apply -f secrets.yaml
kubectl apply -f configmap.yaml
kubectl apply -f statefulsets.yaml
kubectl apply -f service.yaml
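
The ensure-users.js script is run by the image's entrypoint only when the database is initialized for the first time (that is, when /data/db is empty, which is the case here because there is no persistent volume yet). A quick way to confirm it ran is to look for its print statement in the container log:

kubectl logs mongodb-standalone-0 | grep -i 'Successfully authenticated'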

Step 6: Use the following to connect to the running MongoDB

kubectl exec -it mongodb-standalone-0 sh
mongo mongodb://mongodb-standalone-0.database:27017

Step 7: Log in as the app user

use training
db.auth('training','password')
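
As a quick smoke test of the readWrite role, you can insert and read back a document (the collection name here is arbitrary):

db.example.insertOne({hello: 'world'})
db.example.find()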

So far so good :-) But one of the most important aspects of running a database is storing its data on external storage.

Use an external volume to store data outside of the container

Storing data externally is critical for any database. The following steps are needed to store the data externally:

  1. Define a StorageClass
  2. Define a PersistentVolume
  3. Define a PersistentVolumeClaim
  4. Define mongo.conf to set dbPath to /data/db
  5. Update the StatefulSet to mount the external volume at /data/db

Step 1: Use storageclass.yaml as below

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mongodb-standalone
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
---

Step 2: Use persistent-volume.yaml as below

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongodb-standalone
spec:
  capacity:
    storage: 2Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: mongodb-standalone
  local:
    path: /k8-training
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - mongodb-node
---
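
A local PersistentVolume does not create the backing directory for you; the path must already exist on the node named in nodeAffinity. On Minikube, for example, you could prepare it like this (a sketch; adjust for your own node):

minikube ssh
sudo mkdir -p /k8-training
exit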

Step 3: Use persistent-volume-claim.yaml as below

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongodb-standalone
spec:
  storageClassName: mongodb-standalone
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
---
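
Because the StorageClass uses volumeBindingMode: WaitForFirstConsumer, the claim will stay in Pending state until the MongoDB pod is scheduled; that is expected. You can watch its status with:

kubectl get pvc mongodb-standalone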

Step 4: Use configmap.yaml as below

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-standalone
data:
  mongo.conf: |
    storage:
      dbPath: /data/db
  ensure-users.js: |
    const targetDbStr = 'training';
    const rootUser = cat('/etc/k8-training/admin/MONGO_ROOT_USERNAME');
    const rootPass = cat('/etc/k8-training/admin/MONGO_ROOT_PASSWORD');
    const usersStr = cat('/etc/k8-training/MONGO_USERS_LIST'); // user1:role1A,role1B:pass1[;user2:role2A,role2B:pass2...]

    // auth against admin
    const adminDb = db.getSiblingDB('admin');
    adminDb.auth(rootUser, rootPass);
    print('Successfully authenticated admin user');

    // we'll create the users here
    const targetDb = db.getSiblingDB(targetDbStr);

    // user-defined roles should be stored in the admin db
    const customRoles = adminDb
      .getRoles({rolesInfo: 1, showBuiltinRoles: false})
      .map(role => role.role)
      .filter(Boolean);

    // parse the list of users, and create each user as needed
    usersStr
      .trim()
      .split(';')
      .map(s => s.split(':'))
      .forEach(user => {
        const username = user[0];
        const rolesStr = user[1];
        const password = user[2];

        if (!rolesStr || !password) {
          return;
        }

        const roles = rolesStr.split(',');
        const userDoc = {
          user: username,
          pwd: password,
        };

        userDoc.roles = roles.map(role => {
          if (!~customRoles.indexOf(role)) {
            // is this a user-defined role? no, it is built-in, just use the role name
            return role;
          }
          // yes, user-defined, specify the long format
          return {role: role, db: 'admin'};
        });

        try {
          targetDb.createUser(userDoc);
        } catch (err) {
          if (!~err.message.toLowerCase().indexOf('duplicate')) {
            // if not a duplicate user, rethrow
            throw err;
          }
        }
      });

Step 5: Use statefulsets.yaml as below

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb-standalone
spec:
  serviceName: database
  replicas: 1
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
        selector: mongodb-standalone
    spec:
      containers:
        - name: mongodb-standalone
          image: mongo:4.0.8
          env:
            - name: MONGO_INITDB_ROOT_USERNAME_FILE
              value: /etc/k8-training/admin/MONGO_ROOT_USERNAME
            - name: MONGO_INITDB_ROOT_PASSWORD_FILE
              value: /etc/k8-training/admin/MONGO_ROOT_PASSWORD
          volumeMounts:
            - name: k8-training
              mountPath: /etc/k8-training
              readOnly: true
            - name: mongodb-scripts
              mountPath: /docker-entrypoint-initdb.d
              readOnly: true
            - name: mongodb-conf
              mountPath: /config
              readOnly: true
            - name: mongodb-data
              mountPath: /data/db
      nodeSelector:
        kubernetes.io/hostname: mongodb-node
      volumes:
        - name: k8-training
          secret:
            secretName: k8-training
            items:
              - key: MONGO_ROOT_USERNAME
                path: admin/MONGO_ROOT_USERNAME
                mode: 0444
              - key: MONGO_ROOT_PASSWORD
                path: admin/MONGO_ROOT_PASSWORD
                mode: 0444
              - key: MONGO_USERNAME
                path: MONGO_USERNAME
                mode: 0444
              - key: MONGO_PASSWORD
                path: MONGO_PASSWORD
                mode: 0444
              - key: MONGO_USERS_LIST
                path: MONGO_USERS_LIST
                mode: 0444
        - name: mongodb-scripts
          configMap:
            name: mongodb-standalone
            items:
              - key: ensure-users.js
                path: ensure-users.js
        - name: mongodb-conf
          configMap:
            name: mongodb-standalone
            items:
              - key: mongo.conf
                path: mongo.conf
        - name: mongodb-data
          persistentVolumeClaim:
            claimName: mongodb-standalone
---
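
Note that the manifest above only mounts mongo.conf at /config; mongod is not told to read it, and the dbPath it sets (/data/db) is already the image default, so the setup works either way. If you do want mongod to load the file explicitly, one option (not part of the original manifests, shown only as a sketch) is to add args under the mongodb-standalone container; the official image's entrypoint prepends mongod when the first argument starts with a dash:

          args:
            - --config
            - /config/mongo.conf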

Step 6: Use the following secrets.yaml for the app username, password, and roles

apiVersion: v1
kind: Secret
metadata:
  name: k8-training
type: Opaque
data:
  MONGO_ROOT_USERNAME: YWRtaW4K
  MONGO_ROOT_PASSWORD: cGFzc3dvcmQK
  MONGO_USERNAME: dHJhaW5pbmcK
  MONGO_PASSWORD: cGFzc3dvcmQK
  MONGO_USERS_LIST: dHJhaW5pbmc6ZGJBZG1pbixyZWFkV3JpdGU6cGFzc3dvcmQK

Step 7: Use the same service.yaml

apiVersion: v1
kind: Service
metadata:
  name: database
  labels:
    app: database
spec:
  clusterIP: None
  selector:
    app: database

Step 8: Apply these templates as below

kubectl apply -f storageclass.yaml
kubectl apply -f persistent-volume.yaml
kubectl apply -f persistent-volume-claim.yaml
kubectl apply -f secrets.yaml
kubectl apply -f configmap.yaml
kubectl apply -f statefulsets.yaml
kubectl apply -f service.yaml

Step 9: Use the following to connect to the running MongoDB

kubectl exec -it mongodb-standalone-0 sh
mongo mongodb://mongodb-standalone-0.database:27017

Step 10: Log in as the app user and insert a document

use training
db.auth('training','password')
db.users.insert({name: 'your name'})

Step 11: Delete the StatefulSet and redeploy to check whether the data persists

kubectl delete statefulsets mongodb-standalone
kubectl apply -f statefulsets.yaml
kubectl exec -it mongodb-standalone-0 sh
mongo mongodb://mongodb-standalone-0.database:27017
use training
db.auth('training','password')
show collections
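
If the persistent volume is wired up correctly, the document inserted in Step 10 should still be there:

db.users.find()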

Congratulations, you have installed a standalone MongoDB for your local development. On a side note: don't use standalone MongoDB in production. Use either a replica set or a sharded cluster for production. To set up a replica set or sharded MongoDB, you would need the following:

  • An operator to configure the cluster

To get more insight into the MongoDB Docker image, please go through the official image documentation: https://hub.docker.com/_/mongo

Enjoy!!!
