System design: Key System Design Concepts

Jayaprasanna Roddam · Oct 3 · Dev Community

Scalability
Availability
Reliability
Maintainability
Consistency and CAP theorem
Latency and throughput

Key System Design Concepts (Conversational and Practical)

Let's dive into some core system design concepts in a more relaxed, practical style. I’ll walk you through each one as if we’re having a chat about how these things work in real systems.


1. Scalability

When someone says "scalable system," they’re talking about how a system can handle growth—whether it’s more users, data, or traffic. Imagine you're building a food delivery app. You start small, but what happens when 10x more users join? Or 100x more?

Two types of scalability:

  1. Vertical Scaling (Scaling Up): This is like upgrading your laptop to handle more tabs or apps. You add more RAM, a faster processor, etc. In system terms, you're giving your server more CPU, more memory. But, there’s a limit—eventually, no matter how much you upgrade your server, you’ll hit a ceiling.
- **Example**: Say you have one powerful server that handles all orders on your app. In the beginning, this works. But as orders flood in, it starts to slow down. You can add more memory and processing power, but at some point, your server can’t get any bigger. Now what?
  2. Horizontal Scaling (Scaling Out): This is like adding more laptops instead of upgrading one. Instead of a single beefy server, you add more servers to spread the load. Each server handles a part of the total traffic.
- **Example**: When you're scaling your food delivery app, rather than making one server more powerful, you add 10 or 50 servers. One handles orders, another handles user profiles, and another serves restaurant info. A **load balancer** distributes users' requests across all servers to prevent overload (there's a small sketch of one after this list). If a server crashes, others still handle the traffic—nice, right?

- **Real-world tools**: Cloud providers like AWS, Azure, or GCP let you spin up new servers (instances) automatically when traffic spikes. It’s like having extra hands ready when things get busy.
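To make the horizontal-scaling idea concrete, here's a minimal sketch of a round-robin load balancer written with Go's standard library reverse proxy. The backend addresses are made-up placeholders, not part of any real setup.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func main() {
	// Hypothetical backend instances; with horizontal scaling you simply
	// add more of these as traffic grows.
	backends := []string{
		"http://app-server-1.internal:8080",
		"http://app-server-2.internal:8080",
		"http://app-server-3.internal:8080",
	}

	proxies := make([]*httputil.ReverseProxy, len(backends))
	for i, b := range backends {
		target, err := url.Parse(b)
		if err != nil {
			log.Fatal(err)
		}
		proxies[i] = httputil.NewSingleHostReverseProxy(target)
	}

	var next uint64
	// Round-robin: each incoming request goes to the next backend in turn,
	// so no single server takes all of the traffic.
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		i := atomic.AddUint64(&next, 1) % uint64(len(proxies))
		proxies[i].ServeHTTP(w, r)
	})

	log.Fatal(http.ListenAndServe(":8000", handler))
}
```

In practice you'd rarely write this yourself—managed load balancers (AWS ELB, GCP Load Balancing, nginx, HAProxy) do the same job with health checks and autoscaling built in—but the core idea is exactly this loop over a pool of servers.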

2. Availability

Here’s a question for you: how frustrating is it when an app crashes while you’re trying to do something important? That’s where availability comes in. A system is available when it's online and working for your users. If your food delivery app is down during peak lunch hours, people are going to switch to competitors.

How do you keep a system available?

  • Redundancy: You don’t want a single point of failure, so you make sure there’s always a backup for every critical part of your system. Imagine if every time your delivery app’s main server crashed, users couldn’t place an order. If you have multiple servers running, one can go down, but the others keep the service going.

    • Example: Let’s say your database server that stores customer info goes down. If you have a replica—a copy of that server—traffic automatically switches to the replica, and no one even notices.
  • Failover: Your app needs to be smart about switching to backups. In cloud systems, you can set up health checks that detect when a server is down and switch traffic to a working one without you doing a thing (there's a small health-check sketch after this list).

    • Example: If a data center in the US East region goes down, your app should be able to reroute traffic to a data center in Europe without users seeing any downtime.
  • Load Balancing: This is like having a traffic cop directing cars evenly across lanes so no one lane gets clogged. With load balancers, user requests get spread across multiple servers, ensuring no one server gets overwhelmed.

    • Example: When a food delivery app runs a promotion, user traffic spikes. Load balancers ensure all those users don’t overwhelm just one server by distributing them across all available servers.
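Here's a rough Go sketch of that health-check-and-failover idea: a background loop pings the primary's health endpoint and flips traffic to a backup when it stops answering. The URLs and the 5-second interval are invented for illustration; real cloud load balancers and DNS failover handle this for you.

```go
package main

import (
	"log"
	"net/http"
	"sync/atomic"
	"time"
)

// primaryHealthy flips to false when the health check stops getting
// 200 OK responses from the primary.
var primaryHealthy atomic.Bool

// healthCheck polls the primary's health endpoint every few seconds.
func healthCheck(url string) {
	client := &http.Client{Timeout: 2 * time.Second}
	for {
		resp, err := client.Get(url)
		ok := err == nil && resp.StatusCode == http.StatusOK
		if resp != nil {
			resp.Body.Close()
		}
		if !ok && primaryHealthy.Load() {
			log.Println("primary unhealthy, failing over to the replica")
		}
		primaryHealthy.Store(ok)
		time.Sleep(5 * time.Second)
	}
}

// currentTarget is what the routing layer would consult on each request.
func currentTarget() string {
	if primaryHealthy.Load() {
		return "https://us-east.example.com" // primary region
	}
	return "https://eu-west.example.com" // backup region
}

func main() {
	primaryHealthy.Store(true)
	go healthCheck("https://us-east.example.com/health")

	for {
		log.Println("routing traffic to:", currentTarget())
		time.Sleep(10 * time.Second)
	}
}
```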

3. Reliability

Reliability is the idea that your system behaves the way it's supposed to, even under stress or failure conditions. The best example is banking software—if it messes up even once, it’s catastrophic. Every transaction must happen exactly once—no more, no less.

How to make a system reliable:

  • Replication: Have copies of important data or services. In real life, if you lose your phone, you’re glad you have backups, right? Similarly, in systems, you keep copies of databases, services, or queues to ensure nothing gets lost if one part fails.

    • Example: Say someone places an order in your app, but right after, the payment service crashes. If that order data is backed up in a message queue (like Kafka or RabbitMQ), the system can process it later when the payment service is back online.
  • Retries and Idempotency: You might retry failed actions, like payments, but here's the thing—what happens if you accidentally charge a user twice? You need to ensure that retried operations don't have unintended consequences. This is where idempotency comes in—it guarantees that multiple retries of an action have the same result as doing it once (there's a small sketch of this after the list).

    • Example: If a user submits an order but their payment fails, your system can retry charging their card. With idempotent transactions, you ensure that the user isn’t charged twice if the retry succeeds.
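A common way to get idempotency is to attach a unique idempotency key to each payment attempt, so a retry of the same attempt is recognized and skipped. Here's a minimal in-memory sketch of that idea; `ChargeCard` and the key format are hypothetical stand-ins, and a real system would keep the processed keys in a shared store (a database or cache), not in process memory.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// processedKeys remembers which idempotency keys have already been charged.
var (
	mu            sync.Mutex
	processedKeys = map[string]bool{}
)

// ChargeCard is a stand-in for a real payment call that may fail.
func ChargeCard(orderID string, amountCents int) error {
	fmt.Printf("charged order %s for %d cents\n", orderID, amountCents)
	return nil
}

// ChargeOnce charges at most once per idempotency key, no matter how many
// times the caller retries after a timeout or crash.
func ChargeOnce(idempotencyKey, orderID string, amountCents int) error {
	mu.Lock()
	defer mu.Unlock()

	if processedKeys[idempotencyKey] {
		return nil // already charged; the retry is a no-op
	}
	if err := ChargeCard(orderID, amountCents); err != nil {
		return errors.New("charge failed, safe to retry: " + err.Error())
	}
	processedKeys[idempotencyKey] = true
	return nil
}

func main() {
	// The client retries with the same key; the user is charged only once.
	ChargeOnce("order-42-attempt-1", "order-42", 1999)
	ChargeOnce("order-42-attempt-1", "order-42", 1999)
}
```

The key is typically generated on the client for each logical operation (one order, one payment attempt), which is what lets the server tell "retry of the same thing" apart from "a genuinely new charge".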

4. Maintainability

Systems evolve. New features, bug fixes, and performance improvements happen all the time. Maintainability is all about how easy it is to change and update your system over time.

Good practices for maintainability:

  • Modularity: Break your system into small, independent modules. Think of it like building with LEGO blocks—you can add, replace, or remove a block without needing to tear down the entire structure.

    • Example: In your food delivery app, instead of having one giant codebase, you split it into microservices. There's a separate service for handling orders, another for payments, and one for user profiles. If you need to change how orders are processed, you only update the order service—nothing else is affected. (There's a small code sketch of this after the list.)
  • Good Documentation and Tests: You can’t predict who will work on your system in the future. So, writing good documentation is like leaving a map for future developers. Plus, with automated tests in place, they can make changes without breaking everything.

    • Example: If a new developer joins your team and needs to change how discounts work, documentation explains the existing logic, and tests ensure their changes don’t cause bugs elsewhere.
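One way to picture that modularity in code: the order logic depends only on a small interface, so the payment implementation can be swapped, redeployed, or faked in tests without touching the order service. The names below are illustrative, not from any particular codebase.

```go
package main

import "fmt"

// PaymentClient is the only thing the order module knows about payments.
type PaymentClient interface {
	Charge(orderID string, amountCents int) error
}

// OrderService handles orders; it doesn't care whether payments run
// in-process, as a separate microservice, or behind a third-party API.
type OrderService struct {
	Payments PaymentClient
}

func (s *OrderService) PlaceOrder(orderID string, amountCents int) error {
	// ...validate the order, reserve inventory, etc...
	return s.Payments.Charge(orderID, amountCents)
}

// fakePayments is a stand-in implementation, handy for tests.
type fakePayments struct{}

func (fakePayments) Charge(orderID string, amountCents int) error {
	fmt.Printf("charging %d cents for %s\n", amountCents, orderID)
	return nil
}

func main() {
	svc := &OrderService{Payments: fakePayments{}}
	svc.PlaceOrder("order-7", 1299)
}
```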

5. Consistency and CAP Theorem

When you hear the term CAP Theorem, it's usually in the context of databases and distributed systems. It stands for Consistency, Availability, and Partition Tolerance. The theorem says a distributed system can't guarantee all three at the same time. And because network partitions are a fact of life in any distributed system, the practical choice during a partition comes down to two: consistency or availability.

  • Consistency: Every read receives the most recent write.
  • Availability: Every request gets a response, even if it’s not the latest data.
  • Partition Tolerance: The system continues to function even if there’s a network split.

Example in the real world:
Imagine your app runs in two regions—Europe and the US. If the network connection between the two is interrupted (partition), you have to choose: either your app waits for the regions to sync (favoring consistency) or it allows orders to go through with slightly outdated data (favoring availability).
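Here's a toy Go sketch of that trade-off. During a partition, a consistency-favoring read refuses to answer unless it can reach a majority of replicas, while an availability-favoring read answers from whatever reachable (possibly stale) copy it finds. This is deliberately oversimplified—it only checks reachability, where a real system would also compare versions—but it shows the shape of the choice.

```go
package main

import (
	"errors"
	"fmt"
)

// Replica is a toy model of one region's copy of the data.
type Replica struct {
	Reachable bool
	Value     string // the last order data this replica has seen
}

// readConsistent favors consistency: unless a majority of replicas is
// reachable, it refuses to answer rather than risk returning stale data.
func readConsistent(replicas []Replica) (string, error) {
	reachable := 0
	value := ""
	for _, r := range replicas {
		if r.Reachable {
			reachable++
			value = r.Value
		}
	}
	if reachable <= len(replicas)/2 {
		return "", errors.New("partition: can't reach a majority, refusing to answer")
	}
	return value, nil
}

// readAvailable favors availability: answer from any reachable replica,
// even if its copy might be slightly out of date.
func readAvailable(replicas []Replica) (string, error) {
	for _, r := range replicas {
		if r.Reachable {
			return r.Value, nil
		}
	}
	return "", errors.New("no replica reachable at all")
}

func main() {
	// A network partition has cut this region off from the other two.
	replicas := []Replica{
		{Reachable: true, Value: "orders as of 12:01 (possibly stale)"},
		{Reachable: false},
		{Reachable: false},
	}

	fmt.Println(readConsistent(replicas)) // errors out: chooses consistency
	fmt.Println(readAvailable(replicas))  // answers anyway: chooses availability
}
```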


6. Latency and Throughput

Let’s wrap up with two important metrics: latency and throughput.

  • Latency: This is the time it takes for a request to travel through your system and get back a response.

    • Example: If a user opens the food delivery app and it takes 5 seconds to load restaurant options, that's high latency. You want that to be as low as possible—under 1 second ideally.
  • Throughput: This is the total number of requests your system can handle per unit of time.

    • Example: During peak hours, your app might need to handle thousands of orders per minute. Your system's throughput needs to be high enough to process these orders without slowing down.

To reduce latency and increase throughput, you can:

  • Use CDNs (Content Delivery Networks) to serve static content closer to users.
  • Optimize database queries with indexing, or cache frequent requests in something like Redis (there's a small caching sketch after this list).
  • Increase server capacity with horizontal scaling, as discussed earlier.
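To illustrate the caching point, here's a minimal cache-aside sketch using an in-memory map with a TTL. In production the cache would typically be Redis or Memcached, and `loadRestaurantsFromDB` is just a placeholder for your real query.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type entry struct {
	value   string
	expires time.Time
}

var (
	mu    sync.Mutex
	cache = map[string]entry{}
)

// loadRestaurantsFromDB stands in for the slow, expensive database query.
func loadRestaurantsFromDB(city string) string {
	time.Sleep(200 * time.Millisecond) // simulate query latency
	return "restaurants near " + city
}

// getRestaurants uses cache-aside: serve from the cache when possible,
// otherwise hit the database and cache the result for the next caller.
func getRestaurants(city string) string {
	mu.Lock()
	e, ok := cache[city]
	mu.Unlock()
	if ok && time.Now().Before(e.expires) {
		return e.value // cache hit: no database round trip
	}

	value := loadRestaurantsFromDB(city)

	mu.Lock()
	cache[city] = entry{value: value, expires: time.Now().Add(30 * time.Second)}
	mu.Unlock()
	return value
}

func main() {
	start := time.Now()
	getRestaurants("Berlin") // slow: goes to the "database"
	fmt.Println("first call:", time.Since(start))

	start = time.Now()
	getRestaurants("Berlin") // fast: served from the cache
	fmt.Println("second call:", time.Since(start))
}
```

Caching cuts latency for each individual request, and because cached requests barely touch the database, it also raises the total throughput the system can sustain.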

That's the rundown of these core concepts in system design. Now that you've seen some practical examples, you'll start spotting them in systems around you. Whether it's a food delivery app, an e-commerce site, or a social media platform, these principles shape how those systems perform under real-world conditions.
