Badger panics after db reaches 1.1T

We resolved it by adding another level. Our install is shared across 3 servers; a single node hit the limit and took down the whole cluster. I suppose I'm curious whether there is a more elegant solution to hitting this limit?
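For context, "adding another level" here means raising the maximum number of LSM levels Badger will use. A minimal sketch of what that looks like, assuming you open Badger directly through github.com/dgraph-io/badger/v3 rather than through Dgraph's own configuration, and with /data/badger standing in for the real data directory:

```go
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v3"
)

func main() {
	// Badger's default of 7 LSM levels effectively caps the total DB size;
	// allowing an 8th level lets the tree keep growing past that cap.
	opts := badger.DefaultOptions("/data/badger").
		WithMaxLevels(8)

	db, err := badger.Open(opts)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```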

The process we took was…

  1. Cluster crashed.
  2. Googled the error with “dgraph” in the search term. No hits.
  3. Googled the error without “dgraph” in the search term. BadgerDB error hits on this discuss forum.
  4. Addressed the issue.
  5. Restarted the swarm.
  6. Working again.

This just doesn’t seem like a production-level resolution to a predicate hitting 1.1 TB on a single node.

Are there plans in place to address storage size limits or predicate sharding across nodes?

Addressing this proposal, Splitting predicates into multiple groups - #13 by eugaia, seems like it could mitigate the issue substantially.

Thanks,
Ryan