Badger panics after db reaches 1.1T

Thanks Ryan. We just added some documentation about this based on your comments above; it's waiting in a PR for our next doc release. I appreciate your comments here, which will help people find this error if they do encounter it.

For other readers, I want to clarify that sharding individual predicates will only be needed if you have a single huge predicate taking up 1.1TB on disk. Dgraph already shards by moving predicates among node groups to keep things balanced. So you can scale vertically, as described above, by increasing the number of Badger levels to handle roughly 11.1TB per machine (though most machines don't scale to that level), or scale horizontally by adding new Alpha node groups, which splits the data among groups.
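For anyone using Badger directly (outside of Dgraph) and hitting this limit, here is a minimal sketch of raising the LSM level cap via Badger's options API. The path and the choice of 8 levels are illustrative assumptions, not a recommendation; Dgraph users would adjust the equivalent setting through Alpha's configuration rather than in code.

```go
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v3"
)

func main() {
	// The default MaxLevels is 7, which (with the default level-size
	// multiplier of 10) caps the LSM tree at roughly 1.1TB on disk.
	// Adding one more level raises that ceiling by about 10x.
	// "/path/to/badger" and the value 8 are placeholders.
	opts := badger.DefaultOptions("/path/to/badger").
		WithMaxLevels(8)

	db, err := badger.Open(opts)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```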

While most people will never see a single 1TB predicate, there is an existing roadmap ticket to shard individual predicates: Single predicate sharded across groups.