
Scale your replicas

A replica set in MongoDB is a group of processes that maintain copies of the same data, making the database highly available. Replication provides redundancy: if one replica fails, the application can self-heal and continue serving data.

Disclaimer: This tutorial hosts all replicas on the same machine. This should never be done in a production environment.

To enable high availability in a production environment, replicas should be hosted on different servers to maintain isolation.

Summary

- Add replicas
- Return to original shell
- Remove replicas

Add replicas

You can add two replicas to your deployed MongoDB application with:

juju scale-application mongodb-k8s 3

The number is 3 because the scale-application command takes the desired final number of units, not the number of units you want to add. Since we already had one replica, the final count after adding two more is three.
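
For example, to grow this deployment to five replicas later on, you would pass the new total of 5, not the number of units you are adding:

juju scale-application mongodb-k8s 5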

It usually takes several minutes for the replicas to be added to the replica set. You’ll know that all three replicas are ready when juju status --watch 1s reports:

Model  Controller  Cloud/Region        Version  SLA          Timestamp
t6     overlord    microk8s/localhost  3.1.6    unsupported  12:49:05Z

App          Version  Status  Scale  Charm        Channel  Rev  Address         Exposed  Message
mongodb-k8s           active      3  mongodb-k8s  6/beta    37  10.152.183.161  no       Primary

Unit            Workload  Agent  Address      Ports  Message
mongodb-k8s/0*  active    idle   10.1.138.17         Primary
mongodb-k8s/1   active    idle   10.1.138.22                
mongodb-k8s/2   active    idle   10.1.138.26 

If you want to verify the replica set configuration, you can connect to MongoDB with the mongosh shell in the pod. Since your replica set now has two additional hosts, you will need to add those hosts to your URI. Set the new host addresses (the K8s endpoint names of the new units) with:

export HOST_IP_1="mongodb-k8s-1.mongodb-k8s-endpoints"
export HOST_IP_2="mongodb-k8s-2.mongodb-k8s-endpoints"
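
These endpoint names follow the pattern mongodb-k8s-<unit number>.mongodb-k8s-endpoints, so a small loop (a sketch, assuming a bash-compatible shell) can set both variables at once:

for i in 1 2; do
  export "HOST_IP_$i=mongodb-k8s-$i.mongodb-k8s-endpoints"
done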

Then recreate the URI using your new hosts and reuse the username, password, database name, and replica set name that you previously used when you first connected to MongoDB:

export URI=mongodb://$DB_USERNAME:$DB_PASSWORD@$HOST_IP,$HOST_IP_1,$HOST_IP_2:27017/$DB_NAME?replicaSet=$REPL_SET_NAME

Now print the URI and save the output for the next step:

echo $URI
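
The printed URI should have the following shape. The username, password, and database name are shown as placeholders here, and it assumes $HOST_IP was set to mongodb-k8s-0.mongodb-k8s-endpoints when you first connected; the replica set name mongodb-k8s matches the rs.status() output below:

mongodb://<username>:<password>@mongodb-k8s-0.mongodb-k8s-endpoints,mongodb-k8s-1.mongodb-k8s-endpoints,mongodb-k8s-2.mongodb-k8s-endpoints:27017/<database>?replicaSet=mongodb-k8s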

Like earlier, we access mongosh by SSHing into one of the Charmed MongoDB K8s units:

juju ssh --container=mongod mongodb-k8s/0

While connected to the mongodb-k8s/0 unit, we can open mongosh using the new URI that we saved above.

mongosh <saved URI>

Now type rs.status() and you should see your replica set configuration. It should look something like this:

{
  set: 'mongodb-k8s',
  date: ISODate("2023-11-09T12:47:12.176Z"),
  myState: 1,
  term: Long("1"),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long("2000"),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1699534030, i: 2 }), t: Long("1") },
    lastCommittedWallTime: ISODate("2023-11-09T12:47:10.212Z"),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1699534030, i: 2 }), t: Long("1") },
    appliedOpTime: { ts: Timestamp({ t: 1699534030, i: 2 }), t: Long("1") },
    durableOpTime: { ts: Timestamp({ t: 1699534030, i: 2 }), t: Long("1") },
    lastAppliedWallTime: ISODate("2023-11-09T12:47:10.212Z"),
    lastDurableWallTime: ISODate("2023-11-09T12:47:10.212Z")
  },

...

Return to original shell

Leave the MongoDB shell by typing exit.

You will be back in the container of Charmed MongoDB K8s (mongodb-k8s/0). Exit this container by typing exit again.

You are now at the original shell, where you can interact with Juju and MicroK8s.
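
For a quick non-interactive check from this shell, mongosh's --eval flag can print each replica set member's host and state. This is a sketch that assumes the same saved URI from earlier:

juju ssh --container=mongod mongodb-k8s/0 \
  mongosh "<saved URI>" --eval 'rs.status().members.forEach(m => print(m.name, m.stateStr))'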

Remove replicas

Removing a unit from the application scales the replica set down. Before scaling down, list all the units with juju status. You will see three units: mongodb-k8s/0, mongodb-k8s/1, and mongodb-k8s/2. Each of these units hosts a MongoDB replica.

To remove one unit, define a new final number of units for the scale-application command:

juju scale-application mongodb-k8s 2

You’ll know that the replica was successfully removed when juju status --watch 1s reports:

Model  Controller  Cloud/Region        Version  SLA          Timestamp
t6     overlord    microk8s/localhost  3.1.6    unsupported  12:49:05Z

App          Version  Status  Scale  Charm        Channel  Rev  Address         Exposed  Message
mongodb-k8s           active      2  mongodb-k8s  6/beta    37  10.152.183.161  no       Primary

Unit            Workload  Agent  Address      Ports  Message
mongodb-k8s/0*  active    idle   10.1.138.17         Primary
mongodb-k8s/1   active    idle   10.1.138.22                

You can check the pod status in K8s to confirm that a pod was removed:

kubectl get pods --namespace=tutorial

NAME                             READY   STATUS    RESTARTS   AGE
modeloperator-69977f8c44-zh7pc   1/1     Running   0          52m
mongodb-k8s-0                    2/2     Running   0          49m
mongodb-k8s-1                    2/2     Running   0          19m

You can also verify that the replica was successfully removed by connecting with a new URI that excludes the removed host.
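
A sketch of that trimmed URI, reusing the variables from earlier but dropping $HOST_IP_2, since mongodb-k8s/2 was the unit that was removed:

export URI=mongodb://$DB_USERNAME:$DB_PASSWORD@$HOST_IP,$HOST_IP_1:27017/$DB_NAME?replicaSet=$REPL_SET_NAME

Connecting with this URI as before should succeed, and rs.status() should now list only two members.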

Next step: 5. Manage passwords
