
Memcached


Store and retrieve data in memory (not persistent); items are located based on a specific hash function of the key.

Concepts

  • Slab: a group of pages that all hold chunks of the same size; a slab class keeps getting pages allocated to it as long as pages are available

  • Page: a memory area (1 MB by default) that is divided into as many chunks as will fit

  • Chunk: the minimum space allocated for a single item

  • LRU: the least-recently-used list, used to decide which items to evict

ref: Journey to the centre of memcached
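
To see these concepts on a running server, you can inspect the slab allocation over telnet. A trimmed sketch of the `stats slabs` output (the numbers are illustrative):

```text
stats slabs
STAT 1:chunk_size 96
STAT 1:chunks_per_page 10922
STAT 1:total_pages 1
STAT 1:total_chunks 10922
STAT 1:used_chunks 3
STAT active_slabs 1
STAT total_malloced 1048576
END
```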

We run out of memory once all the available pages have been allocated to slabs.

Memcached is designed to evict old/unused items in order to store new ones.

Every item operation (get, set, update or remove) requires the item in question to be locked.

When evicting, memcached only tries to remove the first 5 items of the LRU; after that it simply gives up and answers with OOM (out of memory).
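
Eviction activity shows up in the stats counters; a trimmed, illustrative sketch (the values are made up):

```text
stats
STAT evictions 125
END
stats items
STAT items:5:number 120
STAT items:5:evicted 45
STAT items:5:outofmemory 2
END
```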

Commands with telnet

  • get

  • set

  • add: store the key only if it does not exist yet, otherwise return NOT_STORED

  • replace: overwrite the key only if it already exists, otherwise return NOT_STORED

  • append, prepend

  • incr, decr

  • delete

  • flush_all

  • stats

  • version

  • quit
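
A minimal telnet session sketch against a local server (assuming memcached listens on localhost:11211; the set syntax is `set <key> <flags> <exptime> <bytes>`):

```text
$ telnet localhost 11211
set greeting 0 900 5
hello
STORED
get greeting
VALUE greeting 0 5
hello
END
add greeting 0 900 5
howdy
NOT_STORED
replace greeting 0 900 5
howdy
STORED
delete greeting
DELETED
quit
```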

Run Service

Docker image used: memcached
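
A minimal sketch of starting the service, assuming Docker and the official memcached image (the trailing -m 64 caps the cache at 64 MB):

```bash
docker run -d --name memcached -p 11211:11211 memcached -m 64
```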

Python client: pymemcache
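
A minimal pymemcache sketch, assuming the instance above is reachable on localhost:11211 (the key and value are just examples):

```python
from pymemcache.client.base import Client

client = Client(("localhost", 11211))

client.set("greeting", "hello", expire=900)  # store for 15 minutes
print(client.get("greeting"))                # b'hello' (values come back as bytes)
```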

Distributed Caching


Modulo Hashing

  • Pros: balances the key distribution evenly across the instances in the cluster
  • Cons: 1. data on an instance is lost if that instance goes down 2. hard to scale, because changing the number of instances remaps most keys
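
A minimal sketch of the modulo idea (illustrative only, not pymemcache's actual algorithm): the key is hashed and the remainder picks the target instance, so changing the number of instances remaps most keys.

```python
import zlib

servers = ["127.0.0.1:11211", "127.0.0.1:11212", "127.0.0.1:11213"]

def pick_server(key: str) -> str:
    # hash(key) mod number-of-servers selects the instance
    return servers[zlib.crc32(key.encode()) % len(servers)]

print(pick_server("user:42"))
```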

Example

Run three memcached instances, exposed on ports 11211, 11212, and 11213:
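
A sketch using Docker (the container names mc1/mc2/mc3 are arbitrary; each host port maps to the container's default 11211):

```bash
docker run -d --name mc1 -p 11211:11211 memcached
docker run -d --name mc2 -p 11212:11211 memcached
docker run -d --name mc3 -p 11213:11211 memcached
```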

Use the Python client to set a key, then check which instance the key was hashed to; pymemcache hashes keys with Murmur3:
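
A sketch with pymemcache's HashClient, which spreads keys across the three instances; here each instance is also queried directly to find out where the key actually landed ("greeting" is just an example key):

```python
from pymemcache.client.base import Client
from pymemcache.client.hash import HashClient

servers = [("127.0.0.1", 11211), ("127.0.0.1", 11212), ("127.0.0.1", 11213)]

# Distributed client: picks an instance per key based on the key's hash
client = HashClient(servers)
client.set("greeting", "hello")

# Ask every instance directly to see which one stored the key
for host, port in servers:
    print(f"{host}:{port} -> {Client((host, port)).get('greeting')}")
```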

Test with telnet by connecting to the instance that received the key; in this case, the third cache server:
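
An illustrative transcript, assuming the key landed on the third instance (port 11213):

```text
$ telnet 127.0.0.1 11213
get greeting
VALUE greeting 0 5
hello
END
```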

Consistent Hashing


Scaling up or down does not affect all the servers on the ring; only the keys between the changed node and its neighbour have to move.
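
A minimal hash-ring sketch in Python (illustrative only; pymemcache's HashClient actually uses a rendezvous hash rather than a ring):

```python
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, replicas=100):
        self.replicas = replicas   # virtual nodes per server, for a smoother balance
        self.ring = {}             # point on the ring -> server
        self.points = []
        for node in nodes:
            self.add(node)

    def _hash(self, key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, node: str) -> None:
        for i in range(self.replicas):
            point = self._hash(f"{node}#{i}")
            self.ring[point] = node
            bisect.insort(self.points, point)

    def get_node(self, key: str) -> str:
        # walk clockwise to the first virtual node at or after the key's hash
        idx = bisect.bisect(self.points, self._hash(key)) % len(self.points)
        return self.ring[self.points[idx]]

ring = HashRing(["127.0.0.1:11211", "127.0.0.1:11212", "127.0.0.1:11213"])
print(ring.get_node("user:42"))
# adding a fourth server only remaps the keys that fall on its new ring segments
```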

High Availability

  • Repcached: replicates data between masters
  • KeepAlive: port forwarding to the slave if the master goes down
