Redis - Interview Questions and a Complete Guide 2026

Sławomir Plamowski 27 min read

backend cache database nosql pytania-rekrutacyjne redis

"How would you implement caching in your application?" - this question opens the Redis discussion in most backend interviews. Redis has become the standard for caching, sessions, rate limiting, and real-time features. Interviewers expect not just familiarity with the basic commands, but an understanding of the data structures, the persistence strategies, and when Redis is the right choice at all.

In this guide you will find 50+ interview questions with answers, from Redis basics to advanced topics such as Cluster, Lua scripting, and memory optimization.

Redis Fundamentals - The Basics

The 30-second answer

"Redis is an in-memory data store used as a cache, database, and message broker. It keeps data in RAM, which yields sub-millisecond latencies. It supports multiple data structures - strings, lists, sets, sorted sets, hashes. Typical use cases are caching, sessions, rate limiting, leaderboards, and Pub/Sub."

The 2-minute answer

Redis differs from traditional databases in that the data lives in RAM, not on disk. That is the fundamental difference shaping its performance, but also its architecture and use cases.

┌─────────────────────────────────────────────────────────────────┐
│                    REDIS ARCHITECTURE                           │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Client Request                                                │
│       │                                                        │
│       ▼                                                        │
│  ┌─────────────────────────────────────────────────────────┐  │
│  │                    REDIS SERVER                          │  │
│  │  ┌───────────────────────────────────────────────────┐  │  │
│  │  │              Single-threaded Event Loop            │  │  │
│  │  │                                                    │  │  │
│  │  │  Command Parser → Command Execution → Response    │  │  │
│  │  └───────────────────────────────────────────────────┘  │  │
│  │                          │                               │  │
│  │                          ▼                               │  │
│  │  ┌───────────────────────────────────────────────────┐  │  │
│  │  │               IN-MEMORY DATA                       │  │  │
│  │  │                                                    │  │  │
│  │  │  Strings │ Lists │ Sets │ Hashes │ Sorted Sets   │  │  │
│  │  │          │       │      │        │ Streams       │  │  │
│  │  └───────────────────────────────────────────────────┘  │  │
│  │                          │                               │  │
│  │                          ▼                               │  │
│  │  ┌───────────────────────────────────────────────────┐  │  │
│  │  │              PERSISTENCE (optional)                │  │  │
│  │  │         RDB Snapshots │ AOF Log                   │  │  │
│  │  └───────────────────────────────────────────────────┘  │  │
│  └─────────────────────────────────────────────────────────┘  │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

Why is Redis so fast?

Factor                     Description
In-memory                  Data lives in RAM, no disk I/O
Single-threaded            No context switching, no locks
I/O multiplexing           epoll/kqueue handles thousands of connections
Efficient data structures  Optimized implementations
Simple protocol            RESP is trivial to parse

Typical latencies:

  • Redis: < 1ms (często 0.1-0.5ms)
  • PostgreSQL: 1-10ms
  • MongoDB: 1-5ms
  • Disk I/O: 5-15ms (SSD), 10-20ms (HDD)
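The "simple protocol" point is easy to make concrete: RESP frames every command as a count-prefixed array of length-prefixed bulk strings, so the server reads exact byte counts instead of scanning for delimiters. A minimal client-side encoder (an illustrative sketch, not taken from any client library):

```javascript
// Encode a Redis command as a RESP2 array of bulk strings.
// "*<n>" gives the element count; each "$<len>" prefixes a bulk string,
// so the parser always knows exactly how many bytes to read next.
function encodeCommand(...args) {
  let out = `*${args.length}\r\n`;
  for (const arg of args) {
    const s = String(arg);
    out += `$${Buffer.byteLength(s)}\r\n${s}\r\n`;
  }
  return out;
}

// encodeCommand('GET', 'key') → '*2\r\n$3\r\nGET\r\n$3\r\nkey\r\n'
```

This fixed, binary-safe framing is a large part of why a single thread can service so many connections.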

Data Structures - Interview Questions

1. What data structures does Redis offer, and when should each be used?

Answer:

Redis offers 8 core data structures, each with its own use cases:

┌─────────────────────────────────────────────────────────────────┐
│                    REDIS DATA STRUCTURES                        │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  STRING                          LIST                          │
│  ┌─────────────────┐            ┌─────────────────┐            │
│  │ key → "value"   │            │ key → [a,b,c,d] │            │
│  └─────────────────┘            └─────────────────┘            │
│  Cache, counters, flags         Queues, recent items           │
│                                                                 │
│  SET                             SORTED SET (ZSET)             │
│  ┌─────────────────┐            ┌─────────────────┐            │
│  │ key → {a,b,c}   │            │ key → {a:1,b:2} │            │
│  └─────────────────┘            └─────────────────┘            │
│  Tags, unique items             Leaderboards, ranking          │
│                                                                 │
│  HASH                            STREAM                        │
│  ┌─────────────────┐            ┌─────────────────┐            │
│  │ key → {f1:v1,   │            │ key → [entry1,  │            │
│  │        f2:v2}   │            │        entry2]  │            │
│  └─────────────────┘            └─────────────────┘            │
│  Objects, sessions              Event log, messaging           │
│                                                                 │
│  BITMAP                          HYPERLOGLOG                   │
│  ┌─────────────────┐            ┌─────────────────┐            │
│  │ key → 10110...  │            │ key → ~count    │            │
│  └─────────────────┘            └─────────────────┘            │
│  Feature flags, presence        Cardinality estimation         │
└─────────────────────────────────────────────────────────────────┘

Detailed overview:

Structure    Commands                  Use case                              Complexity
String       GET, SET, INCR, APPEND    Cache, counters, sessions             O(1)
List         LPUSH, RPOP, LRANGE       Queues, recent items, timelines       O(1) push/pop, O(n) access
Set          SADD, SMEMBERS, SINTER    Tags, unique visitors, relationships  O(1) add/remove
Sorted Set   ZADD, ZRANGE, ZRANK       Leaderboards, priority queues         O(log n)
Hash         HSET, HGET, HGETALL       Objects, user profiles                O(1) per field
Stream       XADD, XREAD, XGROUP       Event sourcing, messaging             O(1) add, O(n) read
Bitmap       SETBIT, GETBIT, BITCOUNT  Feature flags, daily active users     O(1) bit ops
HyperLogLog  PFADD, PFCOUNT            Unique counts (approximate)           O(1), ~0.81% error

2. How do you implement a leaderboard in Redis?

Weak answer: "I'd use a list and sort it."

Strong answer:

A Sorted Set (ZSET) is ideal for leaderboards - it automatically keeps members ordered by score, with O(log n) operations.

# Add or update a player's score
ZADD leaderboard 1500 "player:alice"
ZADD leaderboard 2300 "player:bob"
ZADD leaderboard 1800 "player:charlie"
ZADD leaderboard 2100 "player:diana"

# Top 10 players (highest scores)
ZREVRANGE leaderboard 0 9 WITHSCORES
# 1) "player:bob"     2) "2300"
# 3) "player:diana"   4) "2100"
# 5) "player:charlie" 6) "1800"
# 7) "player:alice"   8) "1500"

# Rank of a specific player (0-indexed)
ZREVRANK leaderboard "player:charlie"
# (integer) 2  → 3rd place

# Score of a specific player
ZSCORE leaderboard "player:alice"
# "1500"

# Increment a score (e.g. after a won game)
ZINCRBY leaderboard 100 "player:alice"
# "1600"

# Players within a score range
ZRANGEBYSCORE leaderboard 1500 2000 WITHSCORES
# Everyone with a score between 1500 and 2000

# Remove a player
ZREM leaderboard "player:bob"

# Number of players in the leaderboard
ZCARD leaderboard
# (integer) 3

Node.js implementation:

const Redis = require('ioredis');
const redis = new Redis();

class Leaderboard {
  constructor(name) {
    this.key = `leaderboard:${name}`;
  }

  async addScore(playerId, score) {
    // ZADD z NX/XX/GT/LT options (Redis 6.2+)
    // GT = only update if new score > current
    return redis.zadd(this.key, 'GT', score, playerId);
  }

  async incrementScore(playerId, amount) {
    return redis.zincrby(this.key, amount, playerId);
  }

  async getTopPlayers(count = 10) {
    const results = await redis.zrevrange(
      this.key, 0, count - 1, 'WITHSCORES'
    );

    // Convert to array of objects
    const players = [];
    for (let i = 0; i < results.length; i += 2) {
      players.push({
        playerId: results[i],
        score: parseFloat(results[i + 1]),
        rank: i / 2 + 1,
      });
    }
    return players;
  }

  async getPlayerRank(playerId) {
    const rank = await redis.zrevrank(this.key, playerId);
    return rank !== null ? rank + 1 : null; // 1-indexed
  }

  async getPlayerScore(playerId) {
    const score = await redis.zscore(this.key, playerId);
    return score ? parseFloat(score) : null;
  }

  async getAroundPlayer(playerId, range = 5) {
    const rank = await redis.zrevrank(this.key, playerId);
    if (rank === null) return null;

    const start = Math.max(0, rank - range);
    const end = rank + range;

    return redis.zrevrange(this.key, start, end, 'WITHSCORES');
  }
}

// Usage
const gameLeaderboard = new Leaderboard('daily-game');
await gameLeaderboard.addScore('player:123', 1500);
await gameLeaderboard.incrementScore('player:123', 100);
const top10 = await gameLeaderboard.getTopPlayers(10);

3. How do you implement a job queue in Redis?

Answer:

Redis Lists act as natural queues (FIFO) with atomic operations:

# Producer - push jobs onto the queue
LPUSH jobs '{"type":"email","to":"user@example.com"}'
LPUSH jobs '{"type":"resize","imageId":"123"}'

# Consumer - pop a job (blocking)
BRPOP jobs 30  # Wait at most 30 seconds
# 1) "jobs"
# 2) '{"type":"email","to":"user@example.com"}'

# Check the queue length
LLEN jobs

Reliable queue pattern (with a backup list):

┌─────────────────────────────────────────────────────────────────┐
│                 RELIABLE QUEUE PATTERN                          │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Producer              Redis                    Consumer        │
│     │                                              │            │
│     │  LPUSH jobs task                            │            │
│     │────────────────▶ [task3,task2,task1]        │            │
│     │                        │                     │            │
│     │                        │ BRPOPLPUSH          │            │
│     │                        │ jobs → processing   │            │
│     │                        ├────────────────────▶│            │
│     │                        │                     │            │
│     │              [task3,task2] jobs              │ Process    │
│     │              [task1] processing              │ task1      │
│     │                        │                     │            │
│     │                        │     LREM processing │            │
│     │                        │◀────────────────────│            │
│     │                        │     (on success)    │            │
│     │                                              │            │
│  If consumer crashes, task1 stays in processing   │            │
│  Cleanup job moves stuck items back to jobs       │            │
└─────────────────────────────────────────────────────────────────┘

const Redis = require('ioredis');
const crypto = require('crypto'); // crypto.randomUUID() is not a global on older Node versions
const redis = new Redis();

class ReliableQueue {
  constructor(name) {
    this.queueKey = `queue:${name}`;
    this.processingKey = `queue:${name}:processing`;
  }

  async enqueue(job) {
    const jobData = JSON.stringify({
      id: crypto.randomUUID(),
      data: job,
      createdAt: Date.now(),
    });
    return redis.lpush(this.queueKey, jobData);
  }

  async dequeue(timeout = 30) {
    // Atomically move from queue to processing
    const result = await redis.brpoplpush(
      this.queueKey,
      this.processingKey,
      timeout
    );

    if (!result) return null;
    return JSON.parse(result);
  }

  async complete(jobData) {
    // Remove from processing on success
    return redis.lrem(this.processingKey, 1, JSON.stringify(jobData));
  }

  async fail(jobData, error) {
    // Move back to queue or to dead letter queue
    const pipeline = redis.pipeline();
    pipeline.lrem(this.processingKey, 1, JSON.stringify(jobData));

    // Add to dead letter queue with error info
    const failedJob = { ...jobData, error: error.message, failedAt: Date.now() };
    pipeline.lpush(`${this.queueKey}:failed`, JSON.stringify(failedJob));

    return pipeline.exec();
  }

  async recoverStuckJobs(maxAge = 300000) {
    // Move jobs stuck in processing for > maxAge back to the queue.
    // Note: createdAt is set at enqueue time, so a job that waited long in
    // the queue may be "recovered" prematurely; production code should
    // stamp a processing-start time when the job is dequeued instead.
    const processing = await redis.lrange(this.processingKey, 0, -1);

    for (const jobStr of processing) {
      const job = JSON.parse(jobStr);
      if (Date.now() - job.createdAt > maxAge) {
        await redis.lrem(this.processingKey, 1, jobStr);
        await redis.rpush(this.queueKey, jobStr);
        console.log(`Recovered stuck job: ${job.id}`);
      }
    }
  }
}

// Worker
async function processJobs(queue) {
  while (true) {
    const job = await queue.dequeue();
    if (!job) continue;

    try {
      console.log(`Processing job ${job.id}`);
      await processJob(job.data);
      await queue.complete(job);
      console.log(`Completed job ${job.id}`);
    } catch (error) {
      console.error(`Failed job ${job.id}:`, error);
      await queue.fail(job, error);
    }
  }
}

4. Explain the difference between EXPIRE, TTL and PERSIST

Answer:

These commands manage a key's lifetime (Time To Live):

# SET with a TTL
SET session:123 "user_data" EX 3600  # Expires in 3600 seconds
SET session:123 "user_data" PX 3600000  # Expires in 3600000 milliseconds

# Or separately
SET session:123 "user_data"
EXPIRE session:123 3600  # Set TTL to 3600 seconds
EXPIREAT session:123 1704067200  # Expire at a Unix timestamp

# Check the remaining TTL
TTL session:123
# (integer) 3542  # Seconds remaining
# (integer) -1    # Key exists but has no TTL
# (integer) -2    # Key doesn't exist

PTTL session:123  # TTL in milliseconds

# Remove the TTL (make the key persistent)
PERSIST session:123
# (integer) 1  # Success
# (integer) 0  # Key didn't have a TTL

# Change the TTL conditionally (Redis 7.0+)
EXPIRE session:123 7200 XX  # Only if a TTL already exists
EXPIRE session:123 7200 NX  # Only if there is no TTL
EXPIRE session:123 7200 GT  # Only if new TTL > current
EXPIRE session:123 7200 LT  # Only if new TTL < current

Atomic operations with TTL:

// SET with GET (atomically replace and get old value)
const oldValue = await redis.set('key', 'new_value', 'EX', 3600, 'GET');

// GETEX - get and set/remove expiry atomically
const value = await redis.getex('key', 'EX', 3600);  // Get and set TTL
const value2 = await redis.getex('key', 'PERSIST');  // Get and remove TTL

// SETEX (deprecated, use SET EX)
await redis.setex('key', 3600, 'value');

// Check if key has TTL
const ttl = await redis.ttl('key');
const hasExpiry = ttl >= 0;
const keyExists = ttl !== -2;
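The -1 and -2 sentinel values above are easy to mishandle, so naming them explicitly in a tiny helper can be worthwhile (the function below is a hypothetical example, not a library API):

```javascript
// Map the documented sentinel return values of TTL to their meanings:
//   >= 0 → seconds until expiry
//   -1   → key exists but has no TTL
//   -2   → key does not exist
function describeTtl(ttl) {
  if (ttl === -2) return 'missing';
  if (ttl === -1) return 'persistent';
  return `expires in ${ttl}s`;
}

// describeTtl(3542) → 'expires in 3542s'
// describeTtl(-1)   → 'persistent'
```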

Caching Strategies

5. What caching strategies are there, and when should each be used?

Answer:

┌─────────────────────────────────────────────────────────────────┐
│                    CACHING STRATEGIES                           │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  CACHE-ASIDE (Lazy Loading)                                    │
│  ───────────────────────────                                   │
│  App ──▶ Cache miss? ──▶ DB ──▶ Update cache ──▶ Return       │
│      ◀── Cache hit ◀──────────────────────────────┘            │
│                                                                 │
│  Pros: Only requested data cached, resilient to cache failure  │
│  Cons: Cache miss = 3 round trips, stale data possible         │
│  Use: Read-heavy, tolerant of stale data                       │
│                                                                 │
│  ─────────────────────────────────────────────────────────────  │
│                                                                 │
│  WRITE-THROUGH                                                  │
│  ─────────────                                                  │
│  App ──▶ Write to Cache ──▶ Cache writes to DB ──▶ Return     │
│                                                                 │
│  Pros: Cache always consistent, no stale data                  │
│  Cons: Write latency (2 writes), unused data cached            │
│  Use: Write-heavy, consistency critical                        │
│                                                                 │
│  ─────────────────────────────────────────────────────────────  │
│                                                                 │
│  WRITE-BEHIND (Write-Back)                                     │
│  ─────────────────────────                                     │
│  App ──▶ Write to Cache ──▶ Return                            │
│                  │                                              │
│                  └──▶ Async batch write to DB                  │
│                                                                 │
│  Pros: Lowest write latency, batch DB writes                   │
│  Cons: Data loss risk if cache fails before DB write           │
│  Use: High write throughput, acceptable data loss window       │
│                                                                 │
│  ─────────────────────────────────────────────────────────────  │
│                                                                 │
│  READ-THROUGH                                                   │
│  ────────────                                                   │
│  App ──▶ Cache (auto-loads from DB on miss) ──▶ Return        │
│                                                                 │
│  Pros: Simple app code, automatic loading                      │
│  Cons: First read slow, requires cache library support         │
│  Use: When using caching library like Caffeine, Guava          │
└─────────────────────────────────────────────────────────────────┘

Cache-Aside implementation:

class CacheAside {
  constructor(redis, db, options = {}) {
    this.redis = redis;
    this.db = db;
    this.ttl = options.ttl || 3600;
    this.prefix = options.prefix || 'cache:';
  }

  async get(key, fetchFn) {
    const cacheKey = this.prefix + key;

    // Try cache first
    const cached = await this.redis.get(cacheKey);
    if (cached) {
      return JSON.parse(cached);
    }

    // Cache miss - fetch from source
    const data = await fetchFn();

    // Update cache (don't await - fire and forget)
    this.redis.setex(cacheKey, this.ttl, JSON.stringify(data))
      .catch(err => console.error('Cache write failed:', err));

    return data;
  }

  async invalidate(key) {
    return this.redis.del(this.prefix + key);
  }

  async invalidatePattern(pattern) {
    // NOTE: KEYS blocks the server and should be avoided in production;
    // prefer iterating with SCAN + MATCH on large keyspaces.
    const keys = await this.redis.keys(this.prefix + pattern);
    if (keys.length > 0) {
      return this.redis.del(...keys);
    }
  }
}

// Usage
const cache = new CacheAside(redis, db);

// Read with cache
const user = await cache.get(`user:${userId}`, async () => {
  return db.users.findById(userId);
});

// Invalidate on update
await db.users.update(userId, data);
await cache.invalidate(`user:${userId}`);

Write-Through implementation:

class WriteThrough {
  constructor(redis, db, options = {}) {
    this.redis = redis;
    this.db = db;
    this.ttl = options.ttl || 3600;
  }

  async write(key, data, dbWriteFn) {
    // Write to DB first
    const result = await dbWriteFn(data);

    // Then update cache
    await this.redis.setex(
      `cache:${key}`,
      this.ttl,
      JSON.stringify(result)
    );

    return result;
  }

  async read(key, dbReadFn) {
    const cached = await this.redis.get(`cache:${key}`);
    if (cached) {
      return JSON.parse(cached);
    }

    const data = await dbReadFn();
    if (data) {
      await this.redis.setex(`cache:${key}`, this.ttl, JSON.stringify(data));
    }
    return data;
  }
}

6. How do you solve the cache stampede / thundering herd problem?

Answer:

A cache stampede occurs when many requests simultaneously try to rebuild the cache after it expires:

┌─────────────────────────────────────────────────────────────────┐
│                    CACHE STAMPEDE PROBLEM                       │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Time: 12:00:00 - Cache expires                                │
│                                                                 │
│  Request 1 ──▶ Cache MISS ──▶ Query DB ─┐                      │
│  Request 2 ──▶ Cache MISS ──▶ Query DB ─┤                      │
│  Request 3 ──▶ Cache MISS ──▶ Query DB ─┼──▶ DB OVERLOADED!   │
│  Request 4 ──▶ Cache MISS ──▶ Query DB ─┤                      │
│  ...                                     │                      │
│  Request N ──▶ Cache MISS ──▶ Query DB ─┘                      │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

Solutions:

1. Locking (Mutex):

async function getWithLock(key, fetchFn, ttl = 3600) {
  const cacheKey = `cache:${key}`;
  const lockKey = `lock:${key}`;

  // Try cache first
  let data = await redis.get(cacheKey);
  if (data) return JSON.parse(data);

  // Try to acquire lock
  const lockAcquired = await redis.set(lockKey, '1', 'EX', 10, 'NX');

  if (lockAcquired) {
    try {
      // We got the lock - fetch and cache
      data = await fetchFn();
      await redis.setex(cacheKey, ttl, JSON.stringify(data));
      return data;
    } finally {
      await redis.del(lockKey);
    }
  } else {
    // Someone else is fetching - wait briefly and retry
    await sleep(100);
    return getWithLock(key, fetchFn, ttl);
  }
}

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

2. Probabilistic Early Expiration (PER):

async function getWithPER(key, fetchFn, ttl = 3600, beta = 1) {
  const cacheKey = `cache:${key}`;
  const result = await redis.get(cacheKey);

  if (result) {
    const { data, delta, expiry } = JSON.parse(result);
    const now = Date.now();

    // Probabilistic early recomputation
    // As we approach expiry, probability of recompute increases
    const random = Math.random();
    const shouldRecompute = (now - delta * beta * Math.log(random)) >= expiry;

    if (!shouldRecompute) {
      return data;
    }
  }

  // Fetch new data
  const start = Date.now();
  const data = await fetchFn();
  const delta = Date.now() - start;

  await redis.setex(cacheKey, ttl, JSON.stringify({
    data,
    delta,
    expiry: Date.now() + (ttl * 1000),
  }));

  return data;
}

3. Background Refresh:

async function getWithBackgroundRefresh(key, fetchFn, ttl = 3600) {
  const cacheKey = `cache:${key}`;
  const refreshKey = `refresh:${key}`;

  const data = await redis.get(cacheKey);

  if (data) {
    // Check if we should trigger background refresh
    const shouldRefresh = await redis.set(refreshKey, '1', 'EX', Math.floor(ttl / 2), 'NX');

    if (shouldRefresh) {
      // Trigger async refresh - don't await
      refreshInBackground(key, fetchFn, ttl);
    }

    return JSON.parse(data);
  }

  // Cache miss - fetch synchronously
  const freshData = await fetchFn();
  await redis.setex(cacheKey, ttl, JSON.stringify(freshData));
  return freshData;
}

async function refreshInBackground(key, fetchFn, ttl) {
  try {
    const data = await fetchFn();
    await redis.setex(`cache:${key}`, ttl, JSON.stringify(data));
  } catch (err) {
    console.error('Background refresh failed:', err);
  }
}

Persistence and High Availability

7. Explain the difference between RDB and AOF persistence

Answer:

Redis offers two persistence mechanisms:

┌─────────────────────────────────────────────────────────────────┐
│                    RDB vs AOF PERSISTENCE                       │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  RDB (Redis Database Backup)                                   │
│  ───────────────────────────                                   │
│  Point-in-time snapshot of dataset                             │
│                                                                 │
│  Timeline: ──●────────────●────────────●────────────●──        │
│              │            │            │            │           │
│           snapshot    snapshot    snapshot    snapshot          │
│                                                                 │
│  Pros:                          Cons:                          │
│  • Compact single file          • Data loss between snapshots  │
│  • Fast recovery               • Fork() can be slow on big data│
│  • Perfect for backups         • Not suitable for no-data-loss │
│                                                                 │
│  ─────────────────────────────────────────────────────────────  │
│                                                                 │
│  AOF (Append Only File)                                        │
│  ──────────────────────                                        │
│  Log of all write operations                                   │
│                                                                 │
│  Timeline: ──SET─INCR─LPUSH─DEL─SET─HSET─ZADD──────────        │
│              │    │     │    │   │    │    │                   │
│           Every operation appended to log                       │
│                                                                 │
│  Pros:                          Cons:                          │
│  • More durable (fsync options) • Larger file size             │
│  • Append-only = no corruption  • Slower recovery (replay)     │
│  • Rewritable to compact        • Slightly slower writes       │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

redis.conf configuration:

# RDB Configuration
save 900 1      # Snapshot if at least 1 key changed within 900 seconds
save 300 10     # Snapshot if at least 10 keys changed within 300 seconds
save 60 10000   # Snapshot if at least 10000 keys changed within 60 seconds

dbfilename dump.rdb
dir /var/lib/redis

# AOF Configuration
appendonly yes
appendfilename "appendonly.aof"

# fsync policy:
# appendfsync always    # Every write - slowest, safest
# appendfsync everysec  # Every second - good balance (RECOMMENDED)
# appendfsync no        # Let OS decide - fastest, least safe

appendfsync everysec

# AOF rewrite (compact the log)
auto-aof-rewrite-percentage 100  # Rewrite when AOF is 2x the size after last rewrite
auto-aof-rewrite-min-size 64mb   # Minimum size to trigger rewrite

Production recommendation:

# Use BOTH for maximum safety
appendonly yes
appendfsync everysec

# Keep RDB for backups
save 900 1
save 300 10

# AOF rewrite settings
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

Recovery priority:

  1. If AOF enabled → Load from AOF (more complete)
  2. If only RDB → Load from RDB snapshot

8. How does Redis Cluster differ from Sentinel?

Answer:

┌─────────────────────────────────────────────────────────────────┐
│              SENTINEL vs CLUSTER COMPARISON                     │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  REDIS SENTINEL (High Availability)                            │
│  ─────────────────────────────────                             │
│                                                                 │
│     ┌──────────┐  ┌──────────┐  ┌──────────┐                  │
│     │Sentinel 1│  │Sentinel 2│  │Sentinel 3│                  │
│     └────┬─────┘  └────┬─────┘  └────┬─────┘                  │
│          │             │             │                         │
│          └─────────────┼─────────────┘                         │
│                        │ Monitor & Failover                    │
│          ┌─────────────┼─────────────┐                         │
│          ▼             ▼             ▼                         │
│     ┌─────────┐   ┌─────────┐   ┌─────────┐                   │
│     │ Master  │──▶│ Replica │   │ Replica │                   │
│     │ (R/W)   │   │  (R/O)  │   │  (R/O)  │                   │
│     └─────────┘   └─────────┘   └─────────┘                   │
│                                                                 │
│  • All data on one master                                      │
│  • Automatic failover                                          │
│  • Read scaling (read from replicas)                          │
│  • No write scaling                                            │
│                                                                 │
│  ─────────────────────────────────────────────────────────────  │
│                                                                 │
│  REDIS CLUSTER (Horizontal Scaling + HA)                       │
│  ───────────────────────────────────────                       │
│                                                                 │
│     ┌─────────────────────────────────────────────────────┐   │
│     │               16384 Hash Slots                       │   │
│     │  [0-5460]      [5461-10922]      [10923-16383]      │   │
│     └─────┬───────────────┬──────────────────┬────────────┘   │
│           │               │                  │                 │
│     ┌─────▼─────┐   ┌─────▼─────┐   ┌───────▼───────┐        │
│     │  Master 1 │   │  Master 2 │   │   Master 3   │         │
│     │ Slots 0-  │   │ Slots 5461│   │ Slots 10923- │         │
│     │    5460   │   │   -10922  │   │    16383     │         │
│     └─────┬─────┘   └─────┬─────┘   └───────┬───────┘        │
│           │               │                  │                 │
│     ┌─────▼─────┐   ┌─────▼─────┐   ┌───────▼───────┐        │
│     │ Replica 1 │   │ Replica 2 │   │  Replica 3   │         │
│     └───────────┘   └───────────┘   └───────────────┘        │
│                                                                 │
│  • Data sharded across masters                                 │
│  • Horizontal write scaling                                    │
│  • Each master has replicas for HA                            │
│  • Minimum 6 nodes (3 masters + 3 replicas)                   │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

When to use which:

Scenario                  Sentinel    Cluster
Data fits in memory       ✓           ✓
Need horizontal scaling   ✗           ✓
Multi-key transactions    ✓           ⚠️ (same slot only)
Simple setup              ✓           ✗
< 25GB data               ✓           Overkill
> 25GB data               ✗           ✓
Lua scripts across keys   ✓           ⚠️ (same slot only)

Cluster hash slot calculation:

// Redis uses CRC16 to determine slot
// HASH_SLOT = CRC16(key) mod 16384

// Hash tags - force keys to same slot
// Keys: {user:1}:profile and {user:1}:sessions
// Both hash on "user:1" → same slot

// Node.js with ioredis
const Redis = require('ioredis');

const cluster = new Redis.Cluster([
  { host: 'node1', port: 6379 },
  { host: 'node2', port: 6379 },
  { host: 'node3', port: 6379 },
]);

// Hash tags for multi-key operations
await cluster.mset(
  '{user:123}:name', 'John',
  '{user:123}:email', 'john@example.com'
);

// This works because both keys are in same slot
await cluster.mget('{user:123}:name', '{user:123}:email');
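Under the hood the slot computation is just CRC16 mod 16384, with the hash-tag rule applied first. It can be reproduced in plain JavaScript (a sketch following the cluster specification, which publishes CRC16("123456789") = 0x31C3 as a reference value):

```javascript
// CRC16-CCITT (XMODEM variant) - the polynomial Redis Cluster uses.
function crc16(buf) {
  let crc = 0;
  for (const byte of buf) {
    crc ^= byte << 8;
    for (let i = 0; i < 8; i++) {
      crc = crc & 0x8000 ? ((crc << 1) ^ 0x1021) & 0xffff : (crc << 1) & 0xffff;
    }
  }
  return crc;
}

// HASH_SLOT = CRC16(key) mod 16384, hashing only the {hash tag}
// when the key contains a non-empty one.
function hashSlot(key) {
  const open = key.indexOf('{');
  if (open !== -1) {
    const close = key.indexOf('}', open + 1);
    if (close !== -1 && close !== open + 1) {
      key = key.slice(open + 1, close);
    }
  }
  return crc16(Buffer.from(key)) % 16384;
}

// hashSlot('{user:123}:name') === hashSlot('{user:123}:email') → true
```

This is why the MSET/MGET calls above succeed on a cluster: both keys hash on "user:123" and land in the same slot.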

Rate Limiting

9. How do you implement rate limiting in Redis?

Answer:

Several algorithms, each with different trade-offs:

1. Fixed Window Counter:

async function fixedWindowRateLimit(userId, limit = 100, windowSec = 60) {
  const key = `ratelimit:${userId}:${Math.floor(Date.now() / 1000 / windowSec)}`;

  const current = await redis.incr(key);

  // Set the TTL only on the first request in the window.
  // (If the process dies between INCR and EXPIRE the key leaks forever;
  // a Lua script makes the pair atomic.)
  if (current === 1) {
    await redis.expire(key, windowSec);
  }

  return {
    allowed: current <= limit,
    remaining: Math.max(0, limit - current),
    resetAt: Math.ceil(Date.now() / 1000 / windowSec) * windowSec,
  };
}

// Problem: Spike at window boundaries
// Time: 0:59 → 100 requests OK
// Time: 1:01 → 100 requests OK
// = 200 requests in 2 seconds!

2. Sliding Window Log:

async function slidingWindowLog(userId, limit = 100, windowSec = 60) {
  const key = `ratelimit:${userId}`;
  const now = Date.now();
  const windowStart = now - (windowSec * 1000);

  const pipeline = redis.pipeline();

  // Remove old entries
  pipeline.zremrangebyscore(key, 0, windowStart);

  // Count current entries
  pipeline.zcard(key);

  // Add current request
  pipeline.zadd(key, now, `${now}-${Math.random()}`);

  // Set expiry
  pipeline.expire(key, windowSec);

  const results = await pipeline.exec();
  const count = results[1][1];

  return {
    allowed: count < limit,
    remaining: Math.max(0, limit - count - 1),
  };
}

// Pros: Accurate, no boundary issues
// Cons: Memory grows with requests
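The prune-count-add sequence from the pipeline can be modeled in memory to see the semantics clearly (note that, as with the ZADD above, even denied requests are recorded in the log). `slidingLogAllow` is a hypothetical helper for illustration:

```javascript
// In-memory model of the sorted-set log: prune expired entries, check
// the count, then record the request. Denied requests are logged too,
// mirroring the unconditional ZADD in the Redis pipeline.
function slidingLogAllow(log, nowMs, limit, windowSec) {
  const cutoff = nowMs - windowSec * 1000;
  while (log.length && log[0] <= cutoff) log.shift();
  const allowed = log.length < limit;
  log.push(nowMs);
  return allowed;
}
```

One array per user plays the role of one sorted set per user; memory grows with request volume, just like the Redis version.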

3. Sliding Window Counter (best balance):

async function slidingWindowCounter(userId, limit = 100, windowSec = 60) {
  const now = Date.now();
  const currentWindow = Math.floor(now / 1000 / windowSec);
  const previousWindow = currentWindow - 1;

  const currentKey = `ratelimit:${userId}:${currentWindow}`;
  const previousKey = `ratelimit:${userId}:${previousWindow}`;

  const [currentCount, previousCount] = await redis.mget(currentKey, previousKey);

  // Weight previous window by overlap percentage
  const windowProgress = (now / 1000 % windowSec) / windowSec;
  const weightedCount = (parseInt(previousCount) || 0) * (1 - windowProgress) +
                        (parseInt(currentCount) || 0);

  if (weightedCount >= limit) {
    return { allowed: false, remaining: 0 };
  }

  // Increment current window
  const pipeline = redis.pipeline();
  pipeline.incr(currentKey);
  pipeline.expire(currentKey, windowSec * 2);
  await pipeline.exec();

  return {
    allowed: true,
    remaining: Math.floor(limit - weightedCount - 1),
  };
}
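The weighting step is the subtle part of this algorithm. Isolated as a pure function (the name `weightedCount` is illustrative), it is easy to sanity-check:

```javascript
// Weighted estimate used by the sliding window counter: the previous
// window's count is scaled by how much of that window still overlaps
// the sliding window that ends "now".
function weightedCount(prevCount, currCount, nowMs, windowSec) {
  const progress = (nowMs / 1000 % windowSec) / windowSec;
  return prevCount * (1 - progress) + currCount;
}
```

Halfway through a 60-second window, only half of the previous window's count is assumed to still be relevant; at the window boundary the previous count carries full weight.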

4. Token Bucket (Lua script for atomicity):

const tokenBucketScript = `
  local key = KEYS[1]
  local capacity = tonumber(ARGV[1])
  local refillRate = tonumber(ARGV[2])
  local now = tonumber(ARGV[3])
  local requested = tonumber(ARGV[4])

  local bucket = redis.call('HMGET', key, 'tokens', 'lastRefill')
  local tokens = tonumber(bucket[1]) or capacity
  local lastRefill = tonumber(bucket[2]) or now

  -- Refill tokens
  local elapsed = now - lastRefill
  local refill = elapsed * refillRate / 1000
  tokens = math.min(capacity, tokens + refill)

  local allowed = tokens >= requested
  if allowed then
    tokens = tokens - requested
  end

  -- Save state
  redis.call('HMSET', key, 'tokens', tokens, 'lastRefill', now)
  redis.call('EXPIRE', key, math.ceil(capacity / refillRate * 2))  -- EXPIRE requires an integer

  return {allowed and 1 or 0, tokens}
`;

async function tokenBucket(userId, capacity = 100, refillRate = 10) {
  const result = await redis.eval(
    tokenBucketScript,
    1,
    `ratelimit:${userId}`,
    capacity,
    refillRate,
    Date.now(),
    1
  );

  return {
    allowed: result[0] === 1,
    remaining: Math.floor(result[1]),
  };
}
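The refill arithmetic inside the Lua script can be exercised locally with an equivalent in-memory sketch, where a plain object stands in for the Redis hash:

```javascript
// In-memory equivalent of the Lua script's state machine: refill based
// on elapsed time, cap at capacity, then try to take tokens.
function tokenBucketTake(bucket, capacity, refillRatePerSec, nowMs, requested = 1) {
  const elapsed = nowMs - bucket.lastRefill;
  bucket.tokens = Math.min(capacity, bucket.tokens + elapsed * refillRatePerSec / 1000);
  bucket.lastRefill = nowMs;
  if (bucket.tokens >= requested) {
    bucket.tokens -= requested;
    return true;
  }
  return false;
}
```

The Lua version exists precisely because this read-modify-write must be atomic on a shared server; in a single process the plain function behaves identically.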

Pub/Sub and Messaging

10. How does Redis Pub/Sub work and when should you use it?

Answer:

Pub/Sub is fire-and-forget messaging - messages are not persisted:

┌─────────────────────────────────────────────────────────────────┐
│                    REDIS PUB/SUB                                │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Publisher 1 ──┐                                               │
│                │    ┌─────────────────┐                        │
│  Publisher 2 ──┼───▶│  Channel: news  │───┬──▶ Subscriber A   │
│                │    └─────────────────┘   │                    │
│  Publisher 3 ──┘                          └──▶ Subscriber B   │
│                                                                 │
│  • Fire and forget - no message persistence                    │
│  • No acknowledgment                                           │
│  • If no subscribers, message is lost                          │
│  • Subscribers receive only while connected                    │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

# Terminal 1 - Subscribe
SUBSCRIBE news
PSUBSCRIBE news:*  # Pattern subscribe

# Terminal 2 - Publish
PUBLISH news "Breaking news!"
PUBLISH news:sports "Team wins!"

# Pub/Sub commands
PUBSUB CHANNELS          # List active channels
PUBSUB NUMSUB news       # Number of subscribers

Node.js implementation:

const Redis = require('ioredis');

// Separate connections for pub and sub!
const publisher = new Redis();
const subscriber = new Redis();

// Subscribe
subscriber.subscribe('notifications', 'alerts');
subscriber.psubscribe('user:*:events');

subscriber.on('message', (channel, message) => {
  console.log(`${channel}: ${message}`);
});

subscriber.on('pmessage', (pattern, channel, message) => {
  console.log(`${pattern}${channel}: ${message}`);
});

// Publish
await publisher.publish('notifications', JSON.stringify({
  type: 'new_message',
  userId: '123',
  message: 'Hello!',
}));

// Publish to pattern-matched channel
await publisher.publish('user:123:events', 'logged_in');

When to use Pub/Sub vs Streams:

| Feature            | Pub/Sub                 | Streams                |
|--------------------|-------------------------|------------------------|
| Persistence        | ❌ No                   | ✅ Yes                 |
| Consumer groups    | ❌ No                   | ✅ Yes                 |
| Message replay     | ❌ No                   | ✅ Yes                 |
| Acknowledgment     | ❌ No                   | ✅ Yes                 |
| Delivery guarantee | At-most-once            | At-least-once          |
| Use case           | Real-time notifications | Event sourcing, queues |

Redis Streams (better for reliable messaging):

// Producer
await redis.xadd('events', '*',
  'type', 'order_created',
  'orderId', '123',
  'timestamp', Date.now()
);

// Consumer group (XGROUP CREATE fails with BUSYGROUP if the group already exists)
await redis.xgroup('CREATE', 'events', 'processors', '$', 'MKSTREAM')
  .catch(err => { if (!err.message.includes('BUSYGROUP')) throw err; });

// Consumer
async function processEvents(consumerId) {
  while (true) {
    const results = await redis.xreadgroup(
      'GROUP', 'processors', consumerId,
      'COUNT', 10,
      'BLOCK', 5000,
      'STREAMS', 'events', '>'
    );

    if (!results) continue;

    for (const [stream, messages] of results) {
      for (const [id, fields] of messages) {
        await processMessage(fields);
        await redis.xack('events', 'processors', id);
      }
    }
  }
}

Performance and Memory

11. How do you optimize memory usage in Redis?

Answer:

// 1. Use appropriate data structures
// BAD: Separate keys for user fields
SET user:123:name "John"
SET user:123:email "john@example.com"
SET user:123:age "30"
// Memory: 3 keys × overhead ≈ 200+ bytes

// GOOD: Hash for small objects
HSET user:123 name "John" email "john@example.com" age 30
// Memory: 1 key, fields stored efficiently ≈ 100 bytes

// 2. Tune compact-encoding thresholds
// redis.conf (Redis 7+ renames these to hash-max-listpack-*)
hash-max-ziplist-entries 512  # Use ziplist if < 512 fields
hash-max-ziplist-value 64     # Use ziplist if values < 64 bytes

// 3. Short key names in high-volume scenarios
// Instead of: user:session:authentication:token:abc123
// Use:       u:s:t:abc123

// 4. Use EXPIRE aggressively
SETEX cache:data 3600 "value"  # Auto-cleanup

// 5. Use bitmaps for boolean flags
// Instead of: SET user:123:feature:darkmode 1
SETBIT user:123:features 0 1  # darkmode
SETBIT user:123:features 1 0  # notifications
SETBIT user:123:features 2 1  # beta

// 6. Use HyperLogLog for cardinality
// Instead of: SADD unique_visitors "user1" "user2" ...
PFADD unique_visitors "user1" "user2"  # ~12KB regardless of count
PFCOUNT unique_visitors  # Approximate count

// 7. Compress large values
const compressed = zlib.gzipSync(JSON.stringify(largeObject));
await redis.setex('data', 3600, compressed.toString('base64'));
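SETBIT's addressing (bit offsets counted from the most significant bit of byte 0, returning the previous bit value) can be sketched over a plain Buffer - `setBit`/`getBit` here are illustrative models, not ioredis calls:

```javascript
// In-memory model of SETBIT/GETBIT: Redis addresses bits from the most
// significant bit of byte 0, so offset 0 is the 0x80 bit of buf[0].
function setBit(buf, offset, value) {
  const byteIdx = offset >> 3;
  const mask = 0x80 >> (offset & 7);
  const old = buf[byteIdx] & mask ? 1 : 0;
  if (value) buf[byteIdx] |= mask;
  else buf[byteIdx] &= ~mask;
  return old; // SETBIT returns the previous bit value
}

function getBit(buf, offset) {
  return buf[offset >> 3] & (0x80 >> (offset & 7)) ? 1 : 0;
}
```

This is why a million boolean flags fit in ~122 KB: one bit per flag plus key overhead, versus hundreds of bytes per flag as separate String keys.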

Memory analysis commands:

# Overall memory info
INFO memory
MEMORY STATS

# Memory usage of specific key
MEMORY USAGE user:123

# Find big keys
redis-cli --bigkeys

# Analyze key patterns
redis-cli --memkeys

# Check object encoding
OBJECT ENCODING mykey
# "ziplist" / "listpack" (efficient)
# "hashtable" (less efficient)

Eviction policies:

# redis.conf
maxmemory 2gb
maxmemory-policy allkeys-lru

# Policies:
# noeviction      - Return error when full
# allkeys-lru     - LRU on all keys (RECOMMENDED for cache)
# volatile-lru    - LRU only on keys with TTL
# allkeys-lfu     - LFU on all keys (better for skewed access)
# volatile-lfu    - LFU only on keys with TTL
# allkeys-random  - Random eviction
# volatile-random - Random on keys with TTL
# volatile-ttl    - Evict keys with shortest TTL
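The idea behind allkeys-lru can be sketched with a Map, whose insertion order gives exact recency tracking. Note this is only the concept - Redis actually uses an approximate, sampled LRU to avoid the bookkeeping cost:

```javascript
// Conceptual exact-LRU cache. Map preserves insertion order, so the
// first key is always the least recently used one.
class LRUCache {
  constructor(maxEntries) {
    this.maxEntries = maxEntries;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key);      // move to most-recently-used position
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    else if (this.map.size >= this.maxEntries) {
      this.map.delete(this.map.keys().next().value); // evict LRU entry
    }
    this.map.set(key, value);
  }
}
```

Redis samples a handful of keys (maxmemory-samples, default 5) and evicts the least recently used among them, trading exactness for O(1) cost per eviction.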

12. How do you debug Redis performance problems?

Answer:

# 1. SLOWLOG - find slow commands
CONFIG SET slowlog-log-slower-than 10000  # Log > 10ms
CONFIG SET slowlog-max-len 128

SLOWLOG GET 10  # Get last 10 slow commands
# 1) 1) (integer) 14           # ID
#    2) (integer) 1704067200   # Timestamp
#    3) (integer) 15234        # Duration (microseconds)
#    4) 1) "KEYS"              # Command
#       2) "*"

# 2. MONITOR - real-time commands (USE CAREFULLY - impacts perf)
MONITOR
# 1704067200.123456 [0 127.0.0.1:6379] "GET" "key"

# 3. CLIENT LIST - active connections
CLIENT LIST
# id=5 addr=127.0.0.1:6379 cmd=get ...

# 4. INFO - server statistics
INFO stats
# total_connections_received:1000
# instantaneous_ops_per_sec:5000
# rejected_connections:0
# expired_keys:123
# evicted_keys:0

INFO commandstats
# cmdstat_get:calls=1000,usec=5000,usec_per_call=5.00
# cmdstat_set:calls=500,usec=3000,usec_per_call=6.00

# 5. LATENCY - latency diagnosis
CONFIG SET latency-monitor-threshold 100  # Track > 100ms
LATENCY LATEST
LATENCY HISTORY command
LATENCY DOCTOR  # Human-readable diagnosis

# 6. DEBUG SLEEP - test latency (DON'T USE IN PROD)
DEBUG SLEEP 0.5

Common performance issues:

┌─────────────────────────────────────────────────────────────────┐
│              REDIS PERFORMANCE CHECKLIST                        │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  PROBLEM                      SOLUTION                          │
│  ─────────────────────────────────────────────────────────────  │
│  KEYS * command               Use SCAN instead                  │
│  Large values (>1MB)          Compress or split                 │
│  Too many small keys          Use hashes to group               │
│  Big DEL operations           Use UNLINK (async)                │
│  No TTL on cache keys         Add EXPIRE                        │
│  Single hot key               Shard or add local cache          │
│  Too many connections         Use connection pooling            │
│  Slow persistence             Tune RDB/AOF settings             │
│  Memory fragmentation         Schedule MEMORY PURGE             │
│  Blocking commands            Avoid BLPOP in main thread        │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

Node.js connection pooling:

const Redis = require('ioredis');

// ioredis automatically pools connections
const redis = new Redis({
  host: 'localhost',
  port: 6379,
  maxRetriesPerRequest: 3,
  retryDelayOnFailover: 100,

  // Connection pool settings
  lazyConnect: true,
  keepAlive: 10000,

  // For Cluster
  enableReadyCheck: true,
  scaleReads: 'slave',  // Read from replicas
});

// Pipeline for batch operations (reduces round trips)
const pipeline = redis.pipeline();
pipeline.get('key1');
pipeline.get('key2');
pipeline.set('key3', 'value3');
const results = await pipeline.exec();

// Transaction (MULTI/EXEC)
const multi = redis.multi();
multi.incr('counter');
multi.get('counter');
const txResults = await multi.exec();  // separate name - `results` is already declared above

Practical Exercises

Exercise 1: Session storage

Implement session storage backed by Redis.

const crypto = require('crypto');  // for crypto.randomUUID()

class SessionStore {
  constructor(redis, options = {}) {
    this.redis = redis;
    this.prefix = options.prefix || 'sess:';
    this.ttl = options.ttl || 86400; // 24 hours
  }

  async create(userId, data = {}) {
    const sessionId = crypto.randomUUID();
    const session = {
      id: sessionId,
      userId,
      data,
      createdAt: Date.now(),
    };

    await this.redis.setex(
      this.prefix + sessionId,
      this.ttl,
      JSON.stringify(session)
    );

    // Index by userId for logout all
    await this.redis.sadd(`user:${userId}:sessions`, sessionId);

    return sessionId;
  }

  async get(sessionId) {
    const data = await this.redis.get(this.prefix + sessionId);
    return data ? JSON.parse(data) : null;
  }

  async refresh(sessionId) {
    return this.redis.expire(this.prefix + sessionId, this.ttl);
  }

  async destroy(sessionId) {
    const session = await this.get(sessionId);
    if (session) {
      await this.redis.srem(`user:${session.userId}:sessions`, sessionId);
    }
    return this.redis.del(this.prefix + sessionId);
  }

  async destroyAllForUser(userId) {
    const sessionIds = await this.redis.smembers(`user:${userId}:sessions`);
    if (sessionIds.length > 0) {
      const keys = sessionIds.map(id => this.prefix + id);
      await this.redis.del(...keys);
      await this.redis.del(`user:${userId}:sessions`);
    }
  }
}

Exercise 2: Distributed lock

const crypto = require('crypto');  // for crypto.randomUUID()

class DistributedLock {
  constructor(redis) {
    this.redis = redis;
  }

  async acquire(lockName, ttlMs = 10000) {
    const lockId = crypto.randomUUID();
    const key = `lock:${lockName}`;

    const acquired = await this.redis.set(
      key, lockId, 'PX', ttlMs, 'NX'
    );

    if (acquired) {
      return { lockId, key };
    }
    return null;
  }

  async release(lock) {
    // Only release if we own the lock (Lua for atomicity)
    const script = `
      if redis.call("get", KEYS[1]) == ARGV[1] then
        return redis.call("del", KEYS[1])
      else
        return 0
      end
    `;
    return this.redis.eval(script, 1, lock.key, lock.lockId);
  }

  async withLock(lockName, fn, options = {}) {
    const { ttlMs = 10000, retries = 3, retryDelay = 100 } = options;

    for (let i = 0; i < retries; i++) {
      const lock = await this.acquire(lockName, ttlMs);

      if (lock) {
        try {
          return await fn();
        } finally {
          await this.release(lock);
        }
      }

      await new Promise(r => setTimeout(r, retryDelay));
    }

    throw new Error(`Failed to acquire lock: ${lockName}`);
  }
}

// Usage
const lock = new DistributedLock(redis);
await lock.withLock('process-order:123', async () => {
  // Critical section - only one process can run this
  await processOrder(123);
});
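The ownership check in the Lua release script reduces to an atomic compare-and-delete. Modeled in plain JS (a Map stands in for Redis), the invariant is easy to test - `releaseIfOwner` is an illustrative model, not an API:

```javascript
// Model of the release script: delete the lock only if the stored
// value matches our lockId, so a client whose lock expired and was
// re-acquired by someone else can never release the new owner's lock.
function releaseIfOwner(store, key, lockId) {
  if (store.get(key) === lockId) {
    store.delete(key);
    return 1; // released
  }
  return 0;   // not the owner
}
```

This is why the lock value is a random UUID rather than a constant: without a per-acquisition token, the ownership check would be meaningless.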

Summary

Redis is a fundamental tool for backend developers. In an interview, expect questions about:

  1. Data structures - when to use String vs Hash vs List vs Set vs ZSet
  2. Caching - strategies, cache stampede, invalidation
  3. Persistence - RDB vs AOF, trade-offs
  4. High Availability - Sentinel vs Cluster
  5. Rate limiting - algorithms, implementation
  6. Performance - memory optimization, debugging
  7. Use cases - sessions, queues, leaderboards, pub/sub

The key to success: understanding when to use Redis and how to scale it in production.


Article prepared by the Flipcards team - we create materials for learning programming and preparing for technical interviews.
