
Event-Driven Practices- Classical OO Programmers Discovering "Callback Hell"

Joined
Jun 8, 2007
Messages
1,985
Reaction score
490
[Event-Driven Practices] Classical OO Programmers Discovering "Callback Hell"

I want to start by saying I won't talk about the disadvantages of threads, or the disadvantages of working in a single process, as the problems with both of these competing choices are well-known and have workarounds (except the second- I mean, to scale out, we inevitably have to make systems designed for one process somehow work on more than one computer). I want to focus on you- the programmer- and very little on the efficiency of your programs. If you're smart, you'll make something efficient enough in whatever style you choose. I'm going to describe, instead, a style of coding that is fairly straightforward, extensible, and requires very little thinking.

Let's first discuss our choices. Would we rather have an event-driven server, or a more classic type of server? There are problems and advantages with both. Let's explore the classical style of programming in general- be it PHP on top of a server like Nginx or Apache, or a custom server coded in Python, or Java.

In classical OOP languages, you often have to think a lot. Before you start coding features, you have to think about the project as a whole. Then, after you have an entire and complete mock-up of the features you need, an idea of where you are going to put those features in your classes, and a list of the classes you are going to make to describe different parts of your project, you can start coding. Naturally, you start with the lower-level logic from which the (several) higher-level classes will inherit. All of this is in your head- you must make the low level perfect, or the high-level parts will be hacky at best. If you work in a classical OOP language and you don't fall into this trap, you're doing something different. Further, the books we read, the design patterns we learn, and the popular culture we interact with encourage us classical programmers to fall into this trap. It's as if this trap is simply how programs should be crafted in classical OOP.

I'm not going to go so far as to say that classical OOP is an anti-pattern in itself, but, in my opinion, humans are prone to thinking up things that are imperfect, and the above style clearly demands conceiving perfection in our minds. The very fact that we often create imperfect low-level classes is a big reason the programs we produce are often different from the programs we had in mind when we first decided to create them. As we get past the low-level logic, and start inheriting that logic with high-level logic, we go into a trial-and-error phase where we get things working- changing parts of the low-level logic to accommodate the new high-level logic. This process continues until the project is good enough to consider complete. Design patterns make this process much easier on us, but let's face it- the solutions pretty much boil down to loose coupling, consistency, and clarity, among other good practices. It should be noted that following any one of those three practices in its entirety, while creating a program even remotely useful, is an irrational idea. Think about it- 100% loose coupling means no part of your program can access any other part. 100% consistency could be interpreted as everything being the same. And the bigger a project becomes, the less clearly one can interpret the entire system- no matter how well it's written. So we have to compromise good design for something that's actually practical. And that's OK. This is a world we can live in and actually get work done.

Now let's discuss event-driven systems- specifically Node.js, and the problems that come with that.

The idea with event-driven programs is that heavy operations should be non-blocking, and everything else should finish fast. If you read any article supporting this idea, they really make the event-driven model seem perfect for servers. Entry-level examples make this idea seem very workable- very easy to get started with. The truth is, the road to completing a project in an event-driven system isn't paved like it is for classical programmers. You may be disappointed much like settlers coming to early America, having heard rumors of roads paved with gold, when in fact many roads weren't paved at all. Many of the design patterns we relied on from classical OOP don't apply to these programs, and we create things that quickly become hard to think about- or worse, we think up a massive system, go to write our first low-level class, and if we somehow get past that, the evented ideology of "never block, and finish fast" falls apart. This is not OK. This is not a world we can live in and actually get work done.

So, eventually, we try to forget much of what we learned from classical OOP, and work with these callback things. Pretty soon we have one callback for the server, falling into another callback when we need to access the database. Oh, but we want to hash a string, so we run another callback- oh, but we want to check the user's password, so we use YET ANOTHER CALLBACK!! This is a well-known problem: Callback Hell.
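To make the shape concrete, here's a sketch of how just two requirements already nest two levels deep- the findUser and hashPassword helpers are hypothetical, and they invoke their callbacks synchronously only so the example stays testable:

```javascript
// Hypothetical async-style helpers, invoked synchronously for brevity.
function findUser(name, cb) { cb(null, { name: name, hash: 'h-secret' }); }
function hashPassword(pw, cb) { cb(null, 'h-' + pw); }

var status = 'pending';
findUser('sam', function (err, user) {              // callback #1: the database
    if (err) { status = 'error'; return; }
    hashPassword('secret', function (err, hash) {   // callback #2: hashing
        if (err) { status = 'error'; return; }
        if (hash === user.hash) {                   // callback #3 would go here...
            status = 'logged in';
        }
    });
});
```

Every new step (session lookup, logging, emailing) adds another level of indentation, which is exactly the pyramid this post is about.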

Ok. So the problem with classical OOP is clearly defined, and the problem with event-driven programs is clearly undefined.

Years have been spent solving the problem of getting overwhelmed by classical OOP. Why do we have design patterns? Because we have too much on our minds in classical programming. The less we have to think about the entire system, the easier it is to remain sane while programming features for that system. This is why structured programming became a big thing- and why it isn't such a big thing anymore. Anyway, let me get to my solution for event-driven servers. I'll reference classical OOP ideologies, as this post is aimed at classical programmers.

I came up with this structure before promises became a big thing, and I may use promises in sub-parts of this system, but I still prefer to structure my event-driven applications this way.

We're talking about servers- I don't care if the server is for HTTP or TCP- this system is for event-driven servers coded in Node.js. I use this for all kinds of fun stuff. For brevity, I'll describe an HTTP server that utilizes Web-Sockets- but you can use this for any event-driven server.

Start with server.js.
This is the entry file- the one that gets executed to start running your program. I'm going to say it straight- a little coupling is necessary. Now that that's said, create a variable called "main."

Code:
var main = {};

Main is the tool we use to allow different parts of this program to communicate. Now I'm going to break one more rule early on: do as many blocking operations as you reasonably need to during the initialization process. Want to load a config file? Do it now! Want to cache common web files into memory? Do it now. Want to load libraries into variables that main will contain? You get the idea. Blocking is bad, but we're talking about a long-running process- so if it takes a while to start up, that's OK. Nobody is connected to the server and waiting on a blocking operation at this point. Here's an example of a server.js:
Code:
var express = require('express');
var app = express();
var http = require('http').Server(app);
var io = require('socket.io')(http);
var events = require('events');
var fs = require('fs');
var port = 8080;


var main = {
    app: app,
    EventEmitter: events.EventEmitter,
    express: express,
    fs: fs,
    io: io,
    root: __dirname,
    session: {}
};


require('./private/init.js')(main);
I break some of my own rules in this example, as you'll see- but just like in classical OOP, compromises may be made- and are sometimes necessary. I should've put root in a namespace, and perhaps the Node.js libraries in another namespace, and maybe session doesn't deserve its own namespace. But this is an example of something rather simple, anyway.

Name collisions in the main variable are something you should keep strongly in mind. JavaScript objects can be nested, so use that: avoid putting any functions directly in main, and think of your main variable as a library of namespaces. It's your choice whether main becomes the evil global it can be, or the collection of loosely coupled namespaces it should be.

Create 2 or 3 folders- one named private, and one named public- If you want, go ahead and create one named protected which is for things that become public after authentication. The public folder is static stuff you want the public to know about. For HTTP servers, this would be static HTML files, Client-side JavaScript libraries, CSS files, images, and of course, sub-directories containing more of those public things. The idea is, we want everything in the public folder to be readily available to clients before any authentication. At this point, I'll allow the public and protected parts to remain in your imagination.
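Putting the folders described so far together, the layout looks something like this:

```
project/
├── server.js      # entry file
├── public/        # static files served to everyone
├── protected/     # served only after authentication
└── private/       # strictly server-side code
```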

The private folder is for things that are strictly server-side- things that never cross the eyes of the client. Let's venture into this private folder. First, create a folder 'private/extensions'. Next, create a file 'private/init.js'. Ironically, server.js shouldn't start firing up servers- we'll leave that to the extensions. init.js will load and call all of these extensions one time- and only one time. Each extension is passed the main variable, so design each extension module as a monolithic function. Remember how I said to do blocking operations during initialization? It's perfectly legitimate to read the directory containing all of the extensions at this time- remember, there isn't a server running right now, anyway. So why should I put a list of extensions in a config file, or use some tricky autoloading technique? Well, if your system grows to the point where there are so many extensions that autoloading and sub-directories of extensions become necessary, then by all means, design an autoloader and don't block when loading an extension- but I didn't design extensions to be that plentiful. This system encourages extensions to contain things like auto-loaders themselves. Don't think of an extension as a class; think of an extension as more of a sub-program that must be used for this system to operate- such as an HTTP server, a web-socket server, a ready database connection, an authorization library, etc. Here's an example of init.js:
Code:
module.exports = function (m) {
    var dir = m.fs.readdirSync('./private/extensions');
    var index;
    for (index in dir) {
        require('./extensions/' + dir[index])(m);
    }
};
It may be small, but this is a very important step in creating a maintainable system. By the way, 'm' is short for 'main.'

Now let's get into an example of an extension. You'll notice that init.js is designed exactly the same way an extension is designed. The module exports a function; that function takes one argument, which is expected to be the 'main' variable; and it modifies 'main' and returns nothing. This may seem like the opposite of something maintainable from a classical programmer's perspective- or even a functional programmer's- but please don't lose faith quite yet.

Notice I haven't done any callbacks, nor have I attempted to do anything non-blocking yet. Let's change that- Here's an example of a MongoDB connection, using the mongojs library.
Code:
var db = require('mongojs')('DBname', ['people', 'places', 'things', 'ideas']);
module.exports = function (m) {
    m.db = db;
};
db.on('error', function (err) {
    console.log('database error', err)
});


db.on('connect', function () {
    console.log('database connected')
});
Pretty simple. We create a variable named 'db' that connects to a mongo database named 'DBname', and loads the collections that are named like the different types of nouns, because this is an example. We use module.exports to reveal the extension as a function taking the main argument. We then add the variable 'db' to the main object, and that's it. Oh, and we create a listener for a DB error, and a listener for a DB connection. mongojs will connect to the database the first time it is asked to do an operation that requires a connection, and not until that point. When a connection is established, it fires the callback. Nothing real special going on here. One thing you don't have to do, though, is think about how other parts of the system are going to use this link to the DB. All the db extension needs to know is itself- and hope that main doesn't already have a db. If main does have another DB, this extension doesn't care, and if the db variable name was used before, it is simply overwritten. We could've also named the variable 'mongodb', or 'catInTheHat', for that matter. Again, the choice of making good design decisions is up to you with this framework skeleton.

Now let's see an HTTP router:
Code:
module.exports = function (m) {
    var express = m.express;
    var app = m.app;

    // Serve everything under /public as static files.
    app.use(express.static(m.root + '/public'));

    // Note: m.http and m.port must be provided by server.js.
    m.http.listen(m.port, function () {
        console.log('listening on *:' + m.port);
    });
};

In this example, we don't modify the main variable directly, but we surely modify the behavior of something in main, and also couple the router to m.app, m.express, m.root and m.port (which doesn't exist in my server.js example; bear with my poor examples ;)). The router is pretty bare-bones- all it does is serve an unoptimized static file-system, and open a listener for the HTTP server.

Mk, so now let's see a socket server, and lead into a whole new set of features coupled to sockets- channels!
Code:
var cookieParser = require('socket.io-cookie-parser');
module.exports = function (m) {
    var io = m.io;
    var channels = [];
    var dir = m.fs.readdirSync('./private/channels');
    var index;
    for (index in dir) {
        channels.push(require('../channels/' + dir[index]));
    }
    io.use(cookieParser());
    io.on('connection', function (socket) {
        var i;
        console.log("someone connected.");
        m.session[socket.id] = {
            socket: socket,
            event: new m.EventEmitter()
        };
        socket.on('disconnect', function () {
            console.log("someone disconnected");
            delete m.session[socket.id];
        });

        // Each cached channel runs once per connection.
        for (i in channels) {
            channels[i](m, m.session[socket.id]);
        }
    });
};

Ok, so during initialization, we read the 'private/channels' directory synchronously- don't panic, this happens less than a second after you start running the server, and only happens once. It's better to require all of the channels and cache them now than to auto-load them. Channels are necessary, as they are designed to be socket listeners/socket emitters/broadcasting stations. Channels are cached once, but they are called once for each socket connection. Also, we have a cookieParser that is required during initialization and used as middleware to populate socket.request.cookies for each client. We open up a session for each connection, using the socket.id as the session key. Session is kind of like a local version of main- but instead of being relative to the entire application, the session is only relative to the individual connection. When a connection fires the 'disconnect' event, we delete the session- but that's not to say a channel cannot restore data from a previous (for example, a dropped) connection using data from the client (like a cookie).

Needless to say, create a folder 'private/channels' now. Let's see a login channel- then I'll explain how to separate monolithic initialization operations, from per-connection operations.
Code:
var expected = {
    'username' : /^[a-zA-Z][a-zA-Z0-9-_\.]{1,20}$/,
    'password' : /^.{5,512}$/,
    'keepMeIn' : /^(true|false)$/
};
module.exports = function (m, session) {
    var socket = session.socket;

    if (socket.request.cookies.sessionID) {
        m.db.people.findOne({
            key: socket.request.cookies.sessionID
        }, function (err, user) {
            if (err) {
                console.log(err);
                return;
            }
            if (user !== null) {
                console.log("logged in via cookie");
                session.user = user;
                session.event.emit('logged_in', true);
            } else {
                console.log("cookie was invalid.");
            }
        });
    }

    socket.on("login", function (data) {
        socket.emit('login', {'status': "Processing Login.."});
        console.log("Processing Login..");
        console.log(data, socket.id);
        var result = m.form.process(expected, data);

        if (result === false) {
            console.log("Login Error: incompatible input");
            socket.emit('login', {'status': "(1) No Bueno.", 'code': 1});
            return;
        }
        result.keepMeIn = JSON.parse(result.keepMeIn);
        m.db.people.findOne({
            username: result.username
        }, function (err, user) {
            if (err) {
                console.log('login', {'status': "(2) No Bueno."});
                socket.emit('login', {'status': "(2) No Bueno.", 'code': 2});
                return;
            }

            if (user === null || !m.form.compare(result.password, user.password)) {
                console.log('login', {'status': "(3) Unable to find user."});
                socket.emit('login', {'status': "(3) Unable to find user.", 'code': 3});
                return;
            }
            session.user = user;
            console.log("User:", user.username);
            session.event.emit('logged_in', true);
            var myKey = false;
            if (result.keepMeIn) {
                myKey = m.form.hash(Date.now() + '' + Math.random() + socket.id);
                socket.emit('cookie', {'name': 'sessionID', 'value': myKey, 'days': 14});
            }
            // The collection is 'people' everywhere else in this example.
            m.db.people.update({username: result.username}, {$set: {key: myKey}}, {multi: false});
        });
    });
};
So, the first thing to notice is the expected variable sitting outside of the module function. This is created one time and stored in memory. The expected format of the data we'll be accepting from the user is the same for all connections, so it's a prime example of something we can keep monolithic, and thus store during the initialization process. Now we enter the login module.
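A quick aside about those patterns: regex alternation binds loosely, so a pattern written as /^true|false$/ actually means (^true)|(false$), and would accept strings like 'truex'. Grouping the alternation avoids that:

```javascript
// Ungrouped: matches anything that STARTS with "true" OR ENDS with "false".
console.log(/^true|false$/.test('truex'));      // true (surprising)

// Grouped: matches exactly "true" or exactly "false".
console.log(/^(true|false)$/.test('truex'));    // false
console.log(/^(true|false)$/.test('false'));    // true
```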

As soon as the user connects to the socket, we ask if there is a 'sessionID' cookie. If there is, we query the database for a user that has this key, and if one exists, store the user in the session and emit the logged_in event for that session. Really simple stuff here.

The second part of the login module is the handler for the web-socket login itself, naturally labelled 'login'. We send the client feedback JSON, saying the status is 'Processing Login..'. The client can do with this data what it wishes. We assume the client is sending us data in the format we are expecting, but we don't rely on that data- we rely on an extension I didn't mention: an authentication library extension that populates the main variable with a 'form' namespace, which is supposed to process data, generate hashes, and verify that data supplied by a user is of an expected form. If the data is not of an expected form, then form.process returns false. So we tell the client the login process failed, and give a message and an error code. Then we quit by returning- which does nothing except break out of the login handler and prevent further processing.

We parse keepMeIn into a boolean- as it is expected to be the string 'true' or 'false'. My client for this example converts a checkbox into this value; I'm not going into depth with client code here. Next, we try to find a user that matches the client's supplied username. If for some reason this database operation fails, we give the client an error code and stop further processing by returning. Notice we never tell the client why the operation failed, but we do give the client an error code- and, when coding our client, we can give the user information related to why the operation failed. A hacker may reverse-engineer this information to make sense of it, so for security reasons, giving a generic error for any possible error is not a bad idea- but I chose to give clues to hackers, and we have the right to choose, so I'm pointing that out in this example.

In MongoDB, if you try to findOne and you get null back, that means the database didn't find one. So we check if the user variable is equal to null, and we also check whether the user's password (from the database) matches the password (client-supplied), using form.compare- a function that compares a raw string with a hashed string, returning true on a match and false on a mismatch. I chose to treat an incorrect password exactly the same as a non-existing user, to make it harder for hackers to attack the users of my system. I don't mind giving hackers clues about my system, but when it comes to my users, I'd rather not. Unfortunately, this may confuse legit users who forgot their username for this system. That design choice is up to you- but I chose security over friendliness in this case. Lastly, if none of those checks fail, that must mean we have a user who typed in a username and a matching password, so we set session.user to this user, and emit the logged_in event for this session.
Finally, we check if the user wants to be logged in automatically (the keepMeIn variable was for this purpose). If they do, we attempt to generate a unique hash using the time, some randomness, and the socket id. We tell the client to store a cookie and give it some information about the cookie we want it to store- the client may use this information as it wishes, but this server isn't going to force cookies on clients directly. We update the database, setting the key to either false, or to the key in the cookie. Loop back to the first part of this module, and you'll notice the conditional for the sessionID is multi-purpose! If the sessionID variable is undefined, it is treated as false. Also, if a hacker tries to get into a user's account by supplying false as the value, it won't get through this condition. Finally, MongoDB treats the boolean false differently than the string 'false', as does JavaScript- so the string 'false' for the sessionID will pass the truthiness check in JavaScript, letting the hacker through to the next step, which is MongoDB- and MongoDB will not find a user with the session key 'false', because we set the value to the boolean false. I'm not proud of this last point, and I don't think this trick should be used in any production environment, for security and peace-of-mind reasons.
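The distinction at work there is easy to verify in a REPL- the string 'false' is truthy, the boolean false is not, and JSON.parse is what turns the client's string into a real boolean:

```javascript
console.log(Boolean('false'));              // true- any non-empty string is truthy
console.log(Boolean(false));                // false
console.log(JSON.parse('false') === false); // true- now it's a real boolean
```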

One question remains- what to do with the "logged_in" event? Well, that's up to you! More importantly, the login channel doesn't care! Loose coupling and a clear separation of roles at its finest :). This is what makes event-driven programming so great in terms of maintainability. We can add any number of listeners to this event- all we need to do is add more channels. Channels are designed on a per-connection basis. If we want to know whether one connected agent "logged in"- or, more to the point, do something when that happens- we add a new channel, listen for the "logged_in" event, and execute some code. Here's an example:

Code:
var content = {
    login: null,
    game: null
};


module.exports = function (m, session) {
    var socket = session.socket;
    socket.emit("content", {
        selector: "#canvas",
        html: content.login || (content.login = m.fs.readFileSync(m.root + '/protected/login.html', 'utf8'))
    });

    session.event.on("logged_in", function (success) {
        if (success === true) {
            socket.emit("content", {
                selector: "#canvas",
                html: content.game || (content.game = m.fs.readFileSync(m.root + '/protected/game.html', 'utf8'))
            });
        }
    });
};

The above code will send a login web-page from the protected folder that I mentioned earlier. If it has already been served once, future requests are served from memory. The first read is synchronous- you can fix that in your application; this one has poorly written fine details like that, lol. The same is true for the page served when the logged_in event fires: once a user logs in, that event triggers a function that emits 'content' containing the web page for a game, on that same user's socket.
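The caching trick in that channel is the `x || (x = load())` idiom: the expensive read runs only on first access, and every later call reuses the cached value. Isolated with a hypothetical loadPage stand-in:

```javascript
var cache = { page: null };
var reads = 0;

// Stand-in for fs.readFileSync- counts how many times it actually runs.
function loadPage() { reads++; return '<html>login</html>'; }

function serve() {
    // First call populates cache.page; later calls short-circuit on it.
    return cache.page || (cache.page = loadPage());
}

serve();
serve();
console.log(reads); // 1
```

One caveat worth knowing: the idiom re-runs the load whenever the cached value is falsy, so it's only safe when the loaded value is always truthy (a non-empty string is).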

So there you have it! I don't really feel like I'm in callback hell, I didn't use promises, and different parts of the system can communicate, but remain loosely coupled, and this is what I use to create small, maintainable servers. I hope you might use this framework skeleton, and deviate from my examples for your own personal use! I purposefully used minor bad practices that work, because I think real-life applications are imperfect, so I hope you can relate to my examples and see them for what they are- examples. Also note, the code won't work as-is, and is merely meant to be code that you might look at and relate to so you can create your own code, rather than actually use my code bit-for-bit.

* I reserve the right to edit this post in the future.
 
Experienced Elementalist
Joined
Dec 17, 2008
Messages
209
Reaction score
31
Re: Event-Driven Practices- Classical OO Programmers Discovering "Callback Hell"

s-p-n, I've seen your work a lot on this forum and you never seem to disappoint. Kudos to that.

I've just recently started getting into node.js and having like 10 nested callbacks just to perform a simple login or connection really bothered me, and so, this was very helpful.
 
Junior Spellweaver
Joined
Sep 5, 2014
Messages
141
Reaction score
65
Re: Event-Driven Practices- Classical OO Programmers Discovering "Callback Hell"

Another holy water for callback hell is a switch/step-based asynchronous recursive function.

I don't know if this pattern gives enough advantages to be used in production- but it helps me a lot to maintain and understand code that I wrote some months ago.

It's pretty simple, short and beautiful to my eyes.
 
Joined
Jun 8, 2007
Messages
1,985
Reaction score
490
Re: Event-Driven Practices- Classical OO Programmers Discovering "Callback Hell"

@develix - I didn't want to go into anything very confusing, so I didn't go into promises at all, but they come with a few different patterns, and one is a synchronous step pattern like your example. This is Promise-compatible code:
Code:
login().
    then(doStep1).
    then(doStep2).
    then(doStep3).
    then(function () { console.log('login complete.'); }).
    then(null, console.error);
console.log("login process started");

In this example, 'then' is a method of a promise that is executed once the previous promise is satisfied. There is also an error handler at the very end of the login chain: if any of the 5 previous steps errors, the error is passed to console.error()- and the server should remain online. Promises are very powerful, and I recommend using them- I use them. There are several tutorials and articles online about promises, and I strongly suggest checking them out. They work very well with the framework skeleton in this tutorial ;)
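For anyone who wants to run that chain, here's a self-contained version with stub steps (the step functions are hypothetical placeholders, and a native Promise implementation is assumed):

```javascript
// Each step receives the previous step's return value.
function login()      { return Promise.resolve([]); }
function doStep1(log) { log.push(1); return log; }
function doStep2(log) { log.push(2); return log; }
function doStep3(log) { log.push(3); return log; }

var p = login()
    .then(doStep1)
    .then(doStep2)
    .then(doStep3)
    .then(function (log) { console.log('login complete.'); return log; })
    .then(null, console.error);

// The chain resolves asynchronously, so this line prints first:
console.log('login process started');
```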
 
Junior Spellweaver
Joined
Sep 5, 2014
Messages
141
Reaction score
65
Re: Event-Driven Practices- Classical OO Programmers Discovering "Callback Hell"

Promises are neat, but browser support is a critical concern right now. If you want to keep your code free of libraries, I suggest using patterns like my posted example in client-side code- or use a transpiler. On the server side, promises can be used without worry, but I don't recommend using them in large or frequently-called operations, since they perform slowly. It's always a big challenge to write pretty code that also runs as fast as possible (like in animation frames or realtime tasks in MMOs).
 
Joined
Jun 8, 2007
Messages
1,985
Reaction score
490
Re: Event-Driven Practices- Classical OO Programmers Discovering "Callback Hell"

Hm, I use libraries (Phaser, jQuery) with the MMO I'm developing. I'm not really using promises client-side at the moment, and I don't have experience using them on the client. There are tools available, though, and I'm simply pointing out that those tools exist. You are, of course, free to use any style you choose when programming your applications. My goal is to spend as little time in the Abyss as possible, and I thank you for contributing to that effort. :)
 
Junior Spellweaver
Joined
Jul 11, 2006
Messages
188
Reaction score
184
Re: Event-Driven Practices- Classical OO Programmers Discovering "Callback Hell"

Just a note:
 
Junior Spellweaver
Joined
Jul 11, 2006
Messages
188
Reaction score
184
Re: Event-Driven Practices- Classical OO Programmers Discovering "Callback Hell"

That library looks fantastic! I'll have to try it sometime.

That solved most of my problems with ES5.

Now I'm using ES6/7/2015 (transpiled with Babel), which has async functions (<3 JavaScript with classes)
 