
Raft Consensus Protocol

2015-04-30

Merovius

Fallacies of distributed computing

  • The network is reliable
  • Latency is zero
  • Bandwidth is infinite
  • The network is secure
  • Topology doesn’t change
  • There is one administrator
  • Transport cost is zero
  • The network is homogeneous
  • (Clocks are synchronous)

CAP Theorem

"Of Consistency, Availability and Partition tolerance, you must give up at least one. You cannot give up Partition tolerance."

CAP Theorem ("Proof")


Raft

Distributed consensus:

  • Termination
  • Validity
  • Integrity
  • Agreement

Short: Guarantee consistency

Architecture


Source: Heidi Howard - Analysis of Raft Consensus

Node states

Role       Description                     RPCs
Follower   Replicates updates              (responds to RequestVotes / AppendEntries)
Candidate  Gathers votes to become leader  RequestVotes(term, id, lastLog) → (term, voteGranted)
Leader     Receives client requests        AppendEntries(term, id, lastLog, entries, commit) → (term, success)
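The two RPC signatures in the table can be sketched as Go structs. This is an illustrative sketch, not RobustIRC's actual types; the field names simply spell out the slide's `(term, id, lastLog, ...)` parameters:

```go
package main

import "fmt"

// RequestVoteArgs / RequestVoteReply sketch RequestVotes(term, id, lastLog).
type RequestVoteArgs struct {
	Term         int // candidate's current term
	CandidateID  int // "id" in the slide's signature
	LastLogIndex int // "lastLog": index of the candidate's last entry
	LastLogTerm  int // term of the candidate's last entry
}

type RequestVoteReply struct {
	Term        int
	VoteGranted bool
}

// AppendEntriesArgs / AppendEntriesReply sketch
// AppendEntries(term, id, lastLog, entries, commit).
type AppendEntriesArgs struct {
	Term         int
	LeaderID     int
	PrevLogIndex int // "lastLog": the entry preceding the new ones
	PrevLogTerm  int
	Entries      []string
	LeaderCommit int // "commit": leader's commit index
}

type AppendEntriesReply struct {
	Term    int
	Success bool
}

func main() {
	args := RequestVoteArgs{Term: 3, CandidateID: 1, LastLogIndex: 7, LastLogTerm: 2}
	fmt.Printf("%+v\n", args)
}
```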

Leader election


Source: Heidi Howard - Analysis of Raft Consensus
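The winning condition of an election can be sketched in a few lines of Go: a candidate becomes leader once a strict majority of the cluster, counting its own vote, has granted it a vote. The function name and parameters are illustrative, not from any concrete implementation:

```go
package main

import "fmt"

// wonElection reports whether a candidate that received votesGranted
// votes from other nodes (plus its own vote) holds a strict majority
// of a cluster of clusterSize nodes.
func wonElection(votesGranted, clusterSize int) bool {
	return 2*(votesGranted+1) > clusterSize // +1: the candidate votes for itself
}

func main() {
	// 5-node cluster: 2 follower votes + own vote = 3 of 5, a majority.
	fmt.Println(wonElection(2, 5))
}
```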

Log replication (follower)

  1. AppendEntries RPC arrives
  2. If the term is lower than the saved term → return false
  3. If the parent of the new entries is unknown → return false
  4. If the new entries conflict with existing entries, delete the existing entries
  5. Append the new entries
  6. Apply all committed entries to the state machine
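Steps 2-6 above can be sketched as a Go handler. This is a minimal sketch with illustrative names; it truncates the log unconditionally at the parent (real Raft only deletes on an actual conflict) and omits persistence and applying entries to the state machine:

```go
package main

import "fmt"

type Entry struct {
	Term int
	Cmd  string
}

type Follower struct {
	CurrentTerm int
	Log         []Entry // Log[0] is a sentinel entry at index 0
	CommitIndex int
}

// AppendEntries handles one RPC. prevIndex/prevTerm identify the
// parent of the new entries; leaderCommit is the leader's commit index.
func (f *Follower) AppendEntries(term, prevIndex, prevTerm int, entries []Entry, leaderCommit int) bool {
	// Step 2: reject stale leaders.
	if term < f.CurrentTerm {
		return false
	}
	f.CurrentTerm = term
	// Step 3: reject if the parent of the new entries is unknown.
	if prevIndex >= len(f.Log) || f.Log[prevIndex].Term != prevTerm {
		return false
	}
	// Steps 4+5: drop everything after the parent, append the new entries.
	f.Log = append(f.Log[:prevIndex+1], entries...)
	// Step 6: advance the commit index (applying to the state machine omitted).
	if leaderCommit > f.CommitIndex {
		if leaderCommit < len(f.Log)-1 {
			f.CommitIndex = leaderCommit
		} else {
			f.CommitIndex = len(f.Log) - 1
		}
	}
	return true
}

func main() {
	f := &Follower{Log: []Entry{{0, ""}}}
	ok := f.AppendEntries(1, 0, 0, []Entry{{1, "set x=1"}}, 1)
	fmt.Println(ok, f.CommitIndex)
}
```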

Log replication (leader)

  1. Client sends command to leader
  2. Leader sends AppendEntries RPC to all nodes
  3. If an answer with a higher term arrives → become follower
  4. If a write failed, retry including earlier entries
  5. If a majority of followers replied true:
    1. Mark entries as committed
    2. Apply to state machine
    3. Send reply to client
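The majority rule in step 5 can be sketched in Go: an entry is committed once the leader plus a majority's worth of followers have replicated it. The names (`matchIndex` for the highest index known replicated on each follower) are illustrative; real Raft additionally only commits entries from the leader's current term this way:

```go
package main

import "fmt"

// majorityCommitted returns the highest log index replicated on a
// strict majority of the cluster. matchIndex holds, per follower, the
// highest index known replicated there; leaderLast is the leader's
// last log index (the leader always has its own entries).
func majorityCommitted(matchIndex []int, leaderLast, clusterSize int) int {
	commit := 0
	for idx := 1; idx <= leaderLast; idx++ {
		count := 1 // the leader itself
		for _, m := range matchIndex {
			if m >= idx {
				count++
			}
		}
		if 2*count > clusterSize {
			commit = idx
		}
	}
	return commit
}

func main() {
	// 5-node cluster: leader has entries up to index 3; the four
	// followers have replicated up to indices 3, 2, 1 and 0.
	// Indices 1 and 2 are on ≥3 of 5 nodes, index 3 only on 2.
	fmt.Println(majorityCommitted([]int{3, 2, 1, 0}, 3, 5))
}
```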

IRC

Drops consistency on partition ("netsplit")

RobustIRC

Chooses consistency instead (no "netsplits")

RobustIRC & Raft

  • Messages are log entries
  • Once they are committed, they are propagated to the other clients
  • Guarantees no dropped messages, no double sends, no out-of-order delivery
  • HTTP wrapper ("RobustSession") makes client sessions independent of TCP connections

Fin