When your app starts attracting large numbers of users, simply adding more power to a single server is usually not enough. The application needs to be built to allow for horizontal scaling across multiple servers.
Real-time web or mobile applications with Socket.IO
Socket.IO is a WebSocket-based library for Node.js that allows for real-time communication without the need for polling (although it can fall back to polling if WebSockets aren't supported). It's great for developing applications that require near-instant communication, such as multiplayer games or real-time chat applications.
Web and mobile apps
WebSockets, as implied by their name, are usually used to create real-time web applications, but they can also be used to develop native mobile applications for iOS (iPhone, iPad), Android, etc. Web apps can also be developed as mobile applications that are accessed via mobile browsers - more recent versions of the iOS and Android browsers support WebSockets.
Example: Web application developed with Socket.IO
The following example is for an extremely simple chat web application, where users log in with a username and can send messages to other users by entering the recipient's username and the message. A native mobile app could also be developed as a client to work with this server, without needing any change to the server. We keep a map of usernames to socket IDs, so that we can send messages to specific clients based on their username.
Server
// Start the server on port 80
var io = require('socket.io').listen(80);

// A map of usernames to socket IDs - so we can message clients individually
var users = {};

// When a client connects
io.sockets.on('connection', function (socket) {

    // Log in a user with their provided username
    socket.on('log in', function (data) {
        users[data.username] = socket.id;
    });

    // User wants to send a message
    socket.on('send message', function (data) {
        // Find the socket for the recipient, and send the message to them
        io.sockets.socket(users[data.username]).emit('new message', data.message);
    });
});
Client
// Get the user's username
var username = prompt('Enter your username');

// Connect to the server on port 80
// (assumes the Socket.IO client script has been included in the page)
var socket = io.connect('http://localhost:80');

// When connected to the server
socket.on('connect', function () {
    // Log in with the entered username
    socket.emit('log in', { username: username });
});

// Print received messages to the console
socket.on('new message', function (data) {
    console.log(data);
});

// Send a message to a user (this could be called from an HTML form)
function sendMessage(recipient, message) {
    socket.emit('send message', { username: recipient, message: message });
}
Scalability problems with Socket.IO
Node.js (and therefore Socket.IO) is single-threaded, meaning that it cannot take advantage of multi-core processors. To make use of every core, we need to run a separate Node.js instance for each one. However, these instances are completely independent and share no data. For example, if a real-time chat mobile app had two server instances running and a single user connected to each, those two users would have no way of communicating with each other.
This problem is apparent in the chat example above, where the users variable on the server would only contain the users connected to that particular server instance, as this variable is not shared between instances.
Separate server instances need a way to communicate and share data, so they can act like a single server whilst retaining the benefits of having multiple instances. Once the application takes full advantage of a single server, we can then move on to scaling it horizontally across multiple servers. If built right, the same technique scales the system vertically (more instances on a single machine) as well as horizontally.
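For illustration, here is a minimal sketch (not part of the original setup) of one common way to run an instance per core, using Node's built-in cluster module. Note that the forked workers still share no state - which is exactly the problem we need to solve:

var cluster = require('cluster');
var os = require('os');

if (cluster.isMaster) {
    // Fork one worker process per CPU core
    for (var i = 0; i < os.cpus().length; i++) {
        cluster.fork();
    }
} else {
    // Each worker is an independent process with its own memory,
    // so a 'users' map in one worker is invisible to the others
    var io = require('socket.io').listen(80);
    // ... set up the 'log in' and 'send message' handlers as above
}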
Interprocess communication with Redis
Redis is an extremely fast in-memory advanced key-value store, with optional persistence. It has support for several different data structures to be used as values, such as lists and sets. In some applications, it can be used to completely replace a traditional SQL database - providing a big performance boost. If an SQL database is still needed (e.g. if more advanced queries are required), Redis can be used as a cache to temporarily keep results in memory for subsequent accesses - taking load off the database.
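As a quick illustration of the caching idea, here's a sketch of a read-through cache - queryDatabase is a hypothetical stand-in for the SQL layer:

var redis = require('redis');
var client = redis.createClient();

// Try the cache first, falling back to the database on a miss
function getUser(username, callback) {
    client.get('cache:user:' + username, function (err, cached) {
        if (cached) {
            return callback(null, JSON.parse(cached));
        }
        // Cache miss - hit the database (hypothetical function)
        queryDatabase(username, function (err, user) {
            if (err) return callback(err);
            // Cache the result for 60 seconds to take load off the database
            client.setex('cache:user:' + username, 60, JSON.stringify(user));
            callback(null, user);
        });
    });
}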
Redis can also be used for communication, with its publish/subscribe functionality. Listeners can subscribe to a channel, and all of them will be notified whenever a publisher publishes a message to that channel. This is very useful for interprocess communication, such as in our chat application example. Each server instance could listen for newly published messages, and forward them on to the recipient client if that user is connected to that particular server. When a user sends a message, the server they are connected to would publish the message on the channel for all other listening servers.
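Here is a minimal pub/sub sketch with the node_redis client (illustrative only - Socket.IO will handle this for us below). A client in subscriber mode can't issue other commands, so publishing and subscribing need separate connections:

var redis = require('redis');

var sub = redis.createClient();
var pub = redis.createClient();

// Listen for messages published to the 'chat' channel
sub.subscribe('chat');
sub.on('message', function (channel, message) {
    console.log('Received on ' + channel + ': ' + message);
});

// Any process can publish - every subscriber will be notified
pub.publish('chat', 'Hello from another server instance');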
Luckily, Socket.IO can be easily configured to use Redis as its store.
Example: Running multiple instances of Socket.IO with Redis
The following example creates a Redis client for storing and accessing shared data between server instances. Pub and sub clients are used by Socket.IO for communication between different server instances.
The example simply uses the default Redis server configuration, which can be started from the command line with $ redis-server
If multiple instances are to be run on the same server, they cannot listen on the same port. This example allows a port to be given when starting the server from the command line. The Redis hostname/IP must also be provided, as it will no longer necessarily be on the same server. The server can be started as follows: $ node Server.js 80 127.0.0.1
The client is not shown, as it is almost exactly the same - which is a bonus if you need to scale up your application without having to update clients. The only thing that would need to change for this example is allowing the user to choose a server hostname and port to connect to - a problem we will address later.
Server
// Start the server on the provided port
var serverPort = parseInt(process.argv[2], 10);
var io = require('socket.io').listen(serverPort);

// Create a Redis client on the provided host (default Redis port 6379)
var redisHost = process.argv[3];
var redis = require('redis');
var client = redis.createClient(6379, redisHost);

// Create and use a Socket.IO Redis store
var RedisStore = require('socket.io/lib/stores/redis');
io.set('store', new RedisStore({
    redisPub: redis.createClient(6379, redisHost),
    redisSub: redis.createClient(6379, redisHost),
    redisClient: client
}));

// When a client connects
io.sockets.on('connection', function (socket) {

    // Log in a user with their provided username
    socket.on('log in', function (data) {
        client.set('user:' + data.username, socket.id);
    });

    // User wants to send a message
    socket.on('send message', function (data) {
        // Look up the socket ID for the recipient, and send the message to them
        client.get('user:' + data.username, function (err, socketId) {
            io.sockets.socket(socketId).emit('new message', data.message);
        });
    });
});
Notice that the map of users to socket IDs is now gone. This data is now stored in Redis, so all processes can access it. The keys use the format 'user:username', so to get the socket ID for the user 'dbennett' we would use the key 'user:dbennett'. Both the 'log in' and 'send message' events have been updated to reflect this change.
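You can inspect these keys directly with redis-cli - for example (the socket ID shown here is made up):

$ redis-cli
redis> GET user:dbennett
"16338237128085565011"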
Load balancing with HAProxy
Now the server can have many instances running simultaneously, across multiple ports and servers. Currently, the user would need to choose a server and port to connect to. This is far from ideal - the fact that the server is distributed across several instances should be invisible to the user. It would also mean that users could all choose to connect to the same instance, defeating the point of scaling. To solve these problems, we want to provide a single point of entry for all users, by using load balancing.
HAProxy (High Availability Proxy) is an open source TCP/HTTP load balancer, which is perfect for our needs. It is currently being used by some big sites, such as Stack Overflow, Reddit, Tumblr, and Twitter.
Example: Load balancing a web application with HAProxy
Here is an example haproxy.cfg that can be used to load balance between two instances of our server, named server1 and server2. Both servers are running on the same host as HAProxy, but could just as easily be on a different host by setting the IP addresses appropriately. server1 is running on port 30001, and server2 on port 30002.
HAProxy can be run from the command line with a configuration file as follows: $ haproxy -f haproxy.cfg
global
    nbproc 1       # Number of processes to run HAProxy on
    maxconn 65536  # Maximum number of connections

# Clients will be able to access the server on port 80 over unsecured HTTP
frontend unsecured *:80
    mode http
    timeout client 86400000  # WebSockets can be kept open for a long time, to save time reconnecting
    option httpclose
    default_backend socket_io_servers

# The backend Socket.IO servers
backend socket_io_servers
    mode http
    balance roundrobin  # Distribute clients evenly amongst servers
    timeout server 30000
    timeout connect 4000
    server server1 127.0.0.1:30001 weight 1 maxconn 10000 check  # Our first server instance
    server server2 127.0.0.1:30002 weight 1 maxconn 10000 check  # Our second server instance
With the load balancer in place, we can use the original client code to connect on port 80 and have clients be automatically distributed across servers, in a round-robin fashion.
HAProxy can also be used to handle HTTPS, so individual server instances don't have to. As this post is getting pretty long, I'll leave this bit out.
Each server can be given a weight, to indicate what proportion of clients should be sent to it. This is useful when some servers can cope with more clients than others - more capable servers can be given a higher weighting.
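For example, if server1 were roughly twice as capable as server2, the backend's server lines could be weighted so that about two thirds of clients go to server1:

server server1 127.0.0.1:30001 weight 2 maxconn 10000 check  # Receives ~2/3 of clients
server server2 127.0.0.1:30002 weight 1 maxconn 10000 check  # Receives ~1/3 of clients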
Conclusion
The system should be pretty reliable, as it can handle Socket.IO servers going offline. If a server goes offline, HAProxy will automatically detect this (via the check option on each server line) and distribute clients to the working servers, until the offline server becomes available again. As long as one server instance is up, the service will still be usable.
The only remaining reliability issue is that if the Redis server goes down, the whole system stops working. Redis can be run in a master/slave setup for redundancy: a single master is replicated across a number of slaves, which are exact duplicates. This setup can also be configured to automatically promote a slave to master if the original master goes down. Slaves are read-only by default, so all writes must be performed on the master, which then replicates them to the slaves. Each Socket.IO server could have its own Redis slave for read queries, to help take load off the master.
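Pointing a slave at the master is a one-line directive in the slave's redis.conf (the master address here is hypothetical):

slaveof 192.168.0.10 6379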
-David