I recently ran a test with Membase, incrementing 60 million keys. Each key is 20-30 bytes, and each value is smaller than an integer. The cluster ran across three 16 GB boxes, with 15 GB dedicated to a single bucket (replication=1) in Membase. The build is membase-server-community_x86_64_188.8.131.52 on 64-bit Ubuntu Lucid boxes.
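For reference, the workload was roughly the following (a sketch using python-memcached against the bucket's memcached port; the host name and key format here are illustrative, not my exact test harness):

    # Sketch of the increment workload against a Membase bucket
    # (assumes python-memcached; host and key layout are placeholders).
    import memcache

    mc = memcache.Client(['membase-node1:11211'])

    NUM_KEYS = 60000000

    for i in range(NUM_KEYS):
        key = 'counter:%020d' % i       # keys in the 20-30 byte range, as in the test
        if mc.incr(key) is None:        # incr fails if the key does not exist yet
            mc.add(key, 1)              # seed the counter with a small integer value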
Initially, 10 million keys occupied 3 GB of memory (~3 million keys per GB). At 60 million keys, the dataset occupied 45 GB (~1.33 million keys per GB).
In comparison, Redis handles 9-10 million keys per GB at 60 million keys, and that ratio stays consistent regardless of dataset size.
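The Redis side of the comparison ran an equivalent loop (again a sketch, using redis-py; host and key format are assumptions):

    # Equivalent increment workload against Redis (redis-py).
    import redis

    r = redis.Redis(host='redis-node1', port=6379)

    NUM_KEYS = 60000000

    for i in range(NUM_KEYS):
        r.incr('counter:%020d' % i)     # INCR creates the key at 0 and bumps it to 1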
Membase does not seem to scale well with key-heavy datasets. Is there any tuning or configuration that could help Membase in this use case?
PS: I migrated from Redis to Membase because the latter seemed to offer more reliability against cache failure. However, this degradation in memory efficiency with large datasets is a bit too painful.