How do I check memory usage in my ElastiCache for Redis self-designed cluster and implement best practices to control high memory usage?

I want to check memory usage in my Amazon ElastiCache for Redis self-designed cluster and implement best practices to control high memory usage.

Short description

The following are causes of high memory usage in your ElastiCache for Redis self-designed cluster:

  • Recently added keys: Additional key-value pairs increase memory usage, as do additional elements on keys that already exist. To identify recent data changes on a node, check the SetTypeCmds metric. For more information, see the commandstats section on the INFO page of the Redis website.
  • Increase in buffer usage: Clients connect to Redis over the network. If a client, including a Pub/Sub client, doesn't read from the cache fast enough, then Redis keeps the response data in that client's output buffer. For more information, see Redis Pub/Sub and Output buffer limits on the Redis website. If there's a bottleneck in network bandwidth or the cluster is continuously under heavy load, then buffer usage might accumulate. This results in memory exhaustion and performance degradation. By default, ElastiCache for Redis doesn't restrict output buffer growth, and each client has its own buffer. To check buffer usage, use the CLIENT LIST command. For more information, see CLIENT LIST on the Redis website.
  • Large number of new connections: A large number of new connections might increase memory usage. All new connections create a file descriptor that consumes memory, and that aggregate memory consumption might lead to data eviction or OOM errors. To view the number of new connections, check the NewConnections metric.
  • High swap usage: It's normal behavior to have some swap usage on a cache node, even when there's free memory. However, too much swap usage might lead to decreased performance. High swap usage occurs on a node that's under high memory pressure and has low freeable memory. To monitor swap on the host, use the SwapUsage metric.
  • High memory fragmentation: High memory fragmentation indicates inefficiencies in the operating system memory management. Redis might not free up memory when keys are removed. To monitor the fragmentation ratio, use the MemoryFragmentationRatio metric. If you have fragmentation issues, then turn on the activedefrag parameter for active memory defragmentation.
  • Big keys: Big keys have a large data size or a large number of elements. Big keys might cause high memory usage. To detect big keys in your dataset, use the redis-cli --bigkeys command or the redis-cli --memkeys command. For more information, see Scanning for big keys and Scanning keys on the Redis website.
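Some of these checks can be scripted. As one illustration (a sketch, not AWS tooling), the following Python function parses raw CLIENT LIST output and flags clients whose output-buffer memory (the omem field, in bytes) exceeds a threshold. The sample output and the 64 KiB threshold are illustrative:

```python
def find_large_output_buffers(client_list_output, omem_threshold):
    """Parse CLIENT LIST output and return (addr, omem) pairs for
    clients whose output-buffer memory exceeds omem_threshold bytes."""
    flagged = []
    for line in client_list_output.strip().splitlines():
        # Each CLIENT LIST line is a space-separated list of field=value pairs.
        fields = dict(
            pair.split("=", 1) for pair in line.split() if "=" in pair
        )
        omem = int(fields.get("omem", 0))
        if omem > omem_threshold:
            flagged.append((fields.get("addr"), omem))
    return flagged

# Illustrative CLIENT LIST output: one idle client, one slow Pub/Sub reader.
sample = (
    "id=3 addr=10.0.0.5:52555 fd=8 name= omem=0 cmd=get\n"
    "id=4 addr=10.0.0.9:52556 fd=9 name= omem=1048576 cmd=subscribe\n"
)
print(find_large_output_buffers(sample, 65536))
# → [('10.0.0.9:52556', 1048576)]
```

In practice you would feed this the output of `redis-cli client list` run against your node; the Pub/Sub client with a 1 MiB output buffer is the kind of slow reader worth investigating.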

Resolution

Check memory usage

To check memory usage in your ElastiCache for Redis self-designed cluster, review the following Redis metrics:
Note: These metrics are published in Amazon CloudWatch for each node in a cluster.

  • BytesUsedForCache: This is the total number of bytes allocated by Redis for all purposes. This value is used to determine a cluster's memory utilization. To retrieve this metric, run the INFO command on a Redis node. For more information, see INFO on the Redis website.
  • FreeableMemory: This is a host-level metric that shows the amount of free memory available on the host. If memory usage increases because of cache data or overhead, then FreeableMemory decreases. A decrease in FreeableMemory indicates low memory on the host. Swapping might occur when FreeableMemory is too low.
  • DatabaseMemoryUsagePercentage: This is the percentage of memory that's used by a cluster node. When this metric reaches 100%, Redis applies the maxmemory eviction policy. For more information, see Key eviction on the Redis website. To retrieve this metric, run the INFO command on a Redis node. For more information, see INFO on the Redis website.

Note: By default, ElastiCache reserves 25% of the maxmemory for non-data usage such as failover and backup. If you don't specify enough reserved memory for non-data usage, then swapping might increase. For more information, see Managing reserved memory.
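The arithmetic behind these numbers can be shown with a short sketch. The helper names and the 6 GiB node size are hypothetical; the 25% reservation is the ElastiCache default described above:

```python
def available_data_memory(maxmemory_bytes, reserved_memory_percentage=25):
    """Bytes left for cache data after ElastiCache reserves memory
    for non-data usage such as failover and backup."""
    reserved = maxmemory_bytes * reserved_memory_percentage // 100
    return maxmemory_bytes - reserved

def database_memory_usage_percentage(bytes_used_for_cache, maxmemory_bytes):
    """Mirrors the DatabaseMemoryUsagePercentage metric: memory used
    by Redis as a percentage of the node's maxmemory."""
    return 100.0 * bytes_used_for_cache / maxmemory_bytes

# Example: a node with 6 GiB of maxmemory and the default 25% reservation.
maxmem = 6 * 1024**3
print(available_data_memory(maxmem))                          # 4.5 GiB for data
print(database_memory_usage_percentage(3 * 1024**3, maxmem))  # 50.0
```

The takeaway is that on a node with the default reservation, eviction pressure starts well before BytesUsedForCache reaches the node's full memory size.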

Best practices to control high memory usage

  • Use TTL on keys: To prevent storage of unnecessary keys and remove keys after they expire, specify a TTL on keys. For more information, see TTL on the Redis website. A high number of key evictions indicates that your node is under memory pressure. To avoid a large number of keys expiring in the same time window, add randomness to your TTL values.
  • Use an eviction policy: When cache memory becomes full, Redis removes keys to free up space based on the maxmemory-policy parameter. The default maxmemory-policy is volatile-lru. It's a best practice to choose an eviction policy that's appropriate for your workload.
  • Allocate reserved memory: To avoid issues during failover or backup, it's a best practice to set the reserved-memory-percent parameter to at least 25% for non-data usage. If there's not enough reserved memory to perform a failover or backup, then swap and performance issues occur.
  • Use connection pooling: Connection pooling allows you to control high numbers of new connections that are attempted by the Redis client. For more information, see How do I implement best practices for Redis clients and ElastiCache for Redis self-designed clusters?
  • Configure a server-side idle timeout: Open connections consume memory and usage increases over time, whether or not the client sends requests to ElastiCache for Redis. To minimize unnecessary memory usage by idle connections, configure the server-side timeout through the parameter group to close idle connections after a specified time period. To avoid early closure of connections, set the server-side idle timeout value higher than the client-side timeout within the client library.
  • Adjust output buffer size limits: Adjust the output buffer limits to control buffer space usage. ElastiCache for Redis parameter groups include parameters that start with client-output-buffer-limit to avoid high client output buffer usage. These parameters don't have a suggested limit. Make sure that you benchmark your workload and choose appropriate values for your output buffer limits.
  • Use hash mapping: Hash mapping helps with data structures that have a large number of keys. Also, to reduce the memory footprint compared to hash tables, use ziplist encoding. For more information, see Memory optimization on the Redis website. Note: Hash mapping relies on complex commands that might cause a spike in Redis engine CPU usage.
  • Scale the cluster: If you have increased memory pressure under an expected workload, then scale the cluster to decrease the memory pressure.
  • Set an alarm for memory usage: To initiate an alarm for memory usage that reaches a preset threshold, use CloudWatch alarms. When you create a CloudWatch alarm, use the BytesUsedForCache or DatabaseMemoryUsagePercentage metric.
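The TTL-randomness advice above can be sketched as a small helper. The helper name is illustrative, and the commented client call is an assumption about your Redis client library:

```python
import random

def jittered_ttl(base_ttl_seconds, jitter_fraction=0.1, rng=random):
    """Return the base TTL plus up to jitter_fraction of extra time,
    so keys written together don't all expire in the same window."""
    jitter = rng.uniform(0, base_ttl_seconds * jitter_fraction)
    return int(base_ttl_seconds + jitter)

# Hypothetical usage with a Redis client (any client library works):
#   r.set("session:42", payload, ex=jittered_ttl(3600))
for _ in range(3):
    print(jittered_ttl(3600))  # values spread across 3600-3960 seconds
```

Spreading expirations this way smooths out the expiration-driven deletion work and avoids a sudden burst of freed (and then refilled) memory when a batch of keys shares one TTL.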
AWS OFFICIAL
Updated 2 months ago