Version: 7.0

Database

Introduction

Database replication is a critical part of the fault-tolerant Passwork architecture. It keeps data synchronized across nodes. When the Primary node fails, the remaining nodes vote to elect a new Primary that application servers start using automatically.

Voting mechanism

How it works

  1. Every node has a vote — each node can participate in electing the Primary.
  2. Quorum — electing a new Primary requires a majority (>50%) of votes.
  3. Automatic election — when the current Primary fails, nodes vote automatically.
  4. Data synchronization — the new Primary must have up-to-date data.
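The quorum rule in steps 1-3 can be sketched in a few lines of Python (an illustration only; the helper names `majority` and `has_quorum` are invented here and are not part of MongoDB):

```python
def majority(total_nodes: int) -> int:
    """Smallest vote count that is strictly more than half of all nodes."""
    return total_nodes // 2 + 1

def has_quorum(reachable_nodes: int, total_nodes: int) -> bool:
    """A new Primary can be elected only if a majority of nodes can vote."""
    return reachable_nodes >= majority(total_nodes)

# With 3 nodes, 2 reachable nodes are enough to elect a Primary:
print(has_quorum(2, 3))  # True
# With 4 nodes split 2/2, neither side can elect one:
print(has_quorum(2, 4))  # False
```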

Why the node count matters

Minimum nodes: 3

To stay fault-tolerant, use an odd number of nodes (3, 5, 7) in the replica set.

Why 2 nodes are not enough

  • With 2 nodes: if one fails, the remaining node cannot reach quorum (50% is insufficient; you need >50%).
  • With 3 nodes: if one fails, the remaining 2 nodes form a majority (66%) and can elect a new Primary.

Why an odd number is preferred

With an even number of nodes (for example, 4), you risk a split-brain scenario:

  • If the network splits into two parts with 2 nodes each, neither side can reach majority (>50% required).
  • Both parts switch to read-only mode, and the system becomes unavailable.

Example issue with 4 nodes:

┌─────────────────────────────────────────────────────────────────────────────────┐
│                                  NETWORK SPLIT                                  │
│                                                                                 │
│  ┌──────────────┐   ┌──────────────┐       ┌──────────────┐   ┌──────────────┐  │
│  │   Node #1    │   │   Node #2    │       │   Node #3    │   │   Node #4    │  │
│  │              │   │              │       │              │   │              │  │
│  │   Vote: 1    │   │   Vote: 1    │       │   Vote: 1    │   │   Vote: 1    │  │
│  └──────┬───────┘   └──────┬───────┘       └──────┬───────┘   └──────┬───────┘  │
│         │                  │                      │                  │          │
│         └────────┬─────────┘                      └────────┬─────────┘          │
│                  │                                         │                    │
│        Part 1: 2 nodes (50%)                     Part 2: 2 nodes (50%)          │
│        — Cannot elect Primary                    — Cannot elect Primary         │
│        — Read-only mode                          — Read-only mode               │
│        — Passwork unavailable                    — Passwork unavailable         │
└─────────────────────────────────────────────────────────────────────────────────┘

Configuration comparison:

Nodes | 1 node fails | Network split        | Recommendation
------|--------------|----------------------|--------------------
2     | Read-only    | Read-only            | Not recommended
3     | Works        | Works (2 of 3)       | Minimum recommended
4     | Works        | Read-only (2 and 2)  | Not recommended
5     | Works        | Works (3 of 5)       | Recommended
6     | Works        | Read-only (3 and 3)  | Not recommended
7     | Works        | Works (4 of 7)       | Recommended
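The comparison above follows directly from the majority rule, so it can be reproduced with a short script (illustrative only; the function names are made up here):

```python
def majority(total: int) -> int:
    # A strict majority of all voting nodes.
    return total // 2 + 1

def survives_one_failure(total: int) -> bool:
    # After one node fails, do the remaining nodes still hold a majority?
    return total - 1 >= majority(total)

def survives_even_split(total: int) -> bool:
    # If the network splits as evenly as possible, can the larger part
    # still hold a strict majority of all votes?
    larger_part = total - total // 2  # e.g. 2 of 4, 3 of 5
    return larger_part >= majority(total)

for nodes in (2, 3, 4, 5, 6, 7):
    print(nodes, survives_one_failure(nodes), survives_even_split(nodes))
```

Even node counts fail the split test because an even split leaves each side with exactly 50%, which is not a strict majority.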

Voting diagram

┌──────────────────────────────────────────────────────────────┐
│                         REPLICA SET                          │
│                                                              │
│   ┌──────────────┐    ┌──────────────┐    ┌──────────────┐   │
│   │   Node #1    │    │   Node #2    │    │   Node #3    │   │
│   │  (Primary)   │    │ (Secondary)  │    │ (Secondary)  │   │
│   │              │    │              │    │              │   │
│   │   Vote: 1    │    │   Vote: 1    │    │   Vote: 1    │   │
│   └──────┬───────┘    └──────┬───────┘    └──────┬───────┘   │
│          │                   │                   │           │
│          └───────────────────┼───────────────────┘           │
│                              │                               │
│                    Voting between nodes                      │
│                 (Quorum: 2 of 3 = majority)                  │
└──────────────────────────────────────────────────────────────┘

Operating scenarios

Normal operation (3 nodes):

  • Primary handles all reads and writes.
  • Secondary nodes synchronize with the Primary.
  • All nodes participate in voting.

One node fails (2 nodes remain):

  • Remaining 2 nodes form a majority (66%).
  • A new Primary is elected automatically.
  • The system continues to work for reads and writes.

Two nodes fail (1 node remains):

  • The remaining node cannot reach quorum (33% < 50%).
  • The replica set switches to read-only mode.
  • Passwork becomes unavailable for writes.
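All three scenarios reduce to the same quorum check. A toy model (the function name `replica_set_mode` is invented for this sketch; real behaviour is decided by the replica set protocol):

```python
def replica_set_mode(alive: int, total: int = 3) -> str:
    """Toy model: a replica set accepts writes only while a strict
    majority of its voting nodes is alive."""
    if alive > total // 2:
        return "read-write"   # a Primary can be (re)elected
    return "read-only"        # no quorum: no Primary, writes blocked

print(replica_set_mode(3))  # read-write: normal operation
print(replica_set_mode(2))  # read-write: one node failed, quorum holds
print(replica_set_mode(1))  # read-only: two nodes failed, no quorum
```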

Read-only mode and Passwork availability

When read-only mode occurs

A replica set goes into read-only mode when:

  1. No quorum — half or more of the nodes are unavailable, so no majority remains.
  2. Network partition — the cluster splits into parts, none of which holds more than 50% of the nodes.

Impact on Passwork

When the database is in read-only mode, Passwork is fully unavailable. Any action in Passwork (sign-in, viewing data, creating or updating items) requires writes to the database to record activity history. With writes blocked, these operations cannot be completed.

What users see:

  • Connection errors when trying to reach the database
  • Log messages such as "read-only mode" or "no primary available"
  • Error messages in the UI when attempting to use the system

MongoDB Replica Set

Architecture

A MongoDB replica set consists of several nodes: one Primary and one or more Secondaries.

Node types:

  • Primary — handles all write and read operations.
  • Secondary — synchronizes data from the Primary and can optionally serve reads.
  • Arbiter (optional) — participates in elections but does not store data.

How it works

  1. Writes are performed only on the Primary node.
  2. Oplog (operation log) stores all write operations.
  3. Synchronization — Secondary nodes read the Oplog from the Primary and apply operations to their data.
  4. Voting — when the Primary fails, nodes vote to elect a new Primary.
  5. Automatic failover — a new Primary is chosen automatically from nodes with up-to-date data.
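The oplog-based replication in steps 1-3 can be pictured with a toy in-memory model (a simplification for intuition only; it is not how the MongoDB oplog is actually implemented):

```python
class ToyNode:
    """A drastically simplified node: a data dict plus an ordered oplog."""
    def __init__(self):
        self.data = {}
        self.oplog = []      # ordered list of (key, value) write operations
        self.applied = 0     # how many oplog entries this node has applied

    def write(self, key, value):
        # Writes go only to the Primary: record the op, then apply it.
        self.oplog.append((key, value))
        self.data[key] = value
        self.applied = len(self.oplog)

    def sync_from(self, primary):
        # A Secondary tails the Primary's oplog and applies missing entries.
        for key, value in primary.oplog[self.applied:]:
            self.data[key] = value
        self.oplog = list(primary.oplog)
        self.applied = len(self.oplog)

primary, secondary = ToyNode(), ToyNode()
primary.write("user:1", "alice")
primary.write("user:2", "bob")
secondary.sync_from(primary)
print(secondary.data)  # {'user:1': 'alice', 'user:2': 'bob'}
```

Because the oplog is ordered, a Secondary that falls behind can always catch up by replaying only the entries it has not yet applied.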

Connection string

All Passwork application servers connect to the replica set using a single connection string:

mongodb://node1:27017,node2:27017,node3:27017/pw?replicaSet=rs0

The MongoDB driver automatically:

  • Detects the current Primary node
  • After failover, routes queries to the new Primary elected by the replica set
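The multi-host format of the string can be unpacked with plain string handling (a hypothetical toy parser for illustration; real drivers ship their own, far more complete URI parsers):

```python
def parse_seed_list(uri: str):
    """Split a mongodb:// URI into its seed hosts and options (toy parser)."""
    body = uri.split("://", 1)[1]
    hosts_part, _, rest = body.partition("/")
    _, _, query = rest.partition("?")
    options = dict(pair.split("=", 1) for pair in query.split("&") if pair)
    return hosts_part.split(","), options

hosts, options = parse_seed_list(
    "mongodb://node1:27017,node2:27017,node3:27017/pw?replicaSet=rs0"
)
print(hosts)    # ['node1:27017', 'node2:27017', 'node3:27017']
print(options)  # {'replicaSet': 'rs0'}
```

The driver treats the host list as seeds: any reachable seed is enough to discover the full replica set topology.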

Requirements for node placement

Importance of independent sites

For maximum fault tolerance, use three independent physical sites (data centers).

Why this matters:

  1. Protection from disasters — if one site fails, others keep running.
  2. Independent infrastructure — each site has its own power, cooling, and network.
  3. Geographic distribution — nodes can be in different locations.

┌──────────────────────────────────────────────────────────────────────┐
│                       RECOMMENDED ARCHITECTURE                       │
│                                                                      │
│  ┌──────────────────┐   ┌──────────────────┐   ┌──────────────────┐  │
│  │      DC #1       │   │      DC #2       │   │      DC #3       │  │
│  │                  │   │                  │   │                  │  │
│  │  ┌────────────┐  │   │  ┌────────────┐  │   │  ┌────────────┐  │  │
│  │  │  MongoDB   │  │   │  │  MongoDB   │  │   │  │  MongoDB   │  │  │
│  │  │  Node #1   │  │   │  │  Node #2   │  │   │  │  Node #3   │  │  │
│  │  └────────────┘  │   │  └────────────┘  │   │  └────────────┘  │  │
│  │                  │   │                  │   │                  │  │
│  │   Independent    │   │   Independent    │   │   Independent    │  │
│  │  infrastructure  │   │  infrastructure  │   │  infrastructure  │  │
│  └─────────┬────────┘   └─────────┬────────┘   └─────────┬────────┘  │
│            │                      │                      │           │
│            └──────────────────────┼──────────────────────┘           │
│                                   │                                  │
│                          High-speed network                          │
│                        (for data replication)                        │
└──────────────────────────────────────────────────────────────────────┘

Network requirements

Database nodes need:

  • High-speed connections for replication
  • Low latency for fast synchronization
  • Stable links with minimal packet loss
  • Sufficient bandwidth for replication traffic

Minimum requirements

Minimum: 3 nodes across 3 independent sites

  • Each node on a separate physical site (data center)
  • High-speed network links between sites
  • Independent infrastructure per site

Alternative (not recommended):

  • 3 nodes in one data center but on different servers/racks
  • Less protection from disasters, but still tolerant to a single node failure

Connecting application servers

Single connection string

All Passwork application servers connect through one connection string.

MongoDB drivers discover the Primary automatically when you list all nodes:

mongodb://db-mongo-1,db-mongo-2,db-mongo-3/?replicaSet=rs0

Automatic Primary detection

  • The driver detects the current Primary during connection.
  • It monitors node health.
  • After an election, it switches traffic to the new Primary automatically.
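This detect-and-switch behaviour can be pictured as a scan over the seed list (a toy model of driver behaviour; real drivers use the server discovery and monitoring protocol, not this code, and `find_primary` is a name invented here):

```python
def find_primary(node_states):
    """Toy server discovery: ask every seed node for its role and
    return the first one that reports PRIMARY, if any."""
    for host, role in node_states.items():
        if role == "PRIMARY":
            return host
    return None  # no Primary: the replica set is read-only

# Before failover: node1 is Primary.
print(find_primary({"node1": "PRIMARY", "node2": "SECONDARY", "node3": "SECONDARY"}))
# After node1 fails and node2 wins the election:
print(find_primary({"node1": "DOWN", "node2": "PRIMARY", "node3": "SECONDARY"}))
```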

Recommendations

  • Use one shared connection string on all application servers.
  • List all nodes rather than pointing to a single host.
  • Set reasonable timeouts for connections and operations.
  • Monitor the replica set to track which node is Primary and verify elections.