Distributed Mind


Musings on software development

Stop complaining or improve yourself

Written on 14. February 2015

Working in the IT sector has never offered more choices than it does today. You can pick from a wide range of hardware platforms supporting an even broader range of operating systems. On top of that, you can pick from a myriad of development platforms and languages.

On the other hand, more and more people working in the IT sector (I’ll focus mostly on software development in this rant) complain about so many things in their daily work with their hardware, OS or their development environment / programming language (yes, I’ve walked down that alley myself). If you complain publicly on Twitter or Facebook et al., you can even get retweets and likes if you show some creativity while doing it.

Let’s face the sad truth: it won’t change anything until you start acting.

If you don’t like JavaScript, stop complaining and try ES6. If you still don’t like it: ditch it and try something different. Maybe you should focus on HTML5 and CSS3 only. If you don’t like that either, stop doing web development as a whole. You’re not forced to do web development at all.

If you don’t like WPF (like me), don’t use it; maybe AngularJS is your frontend framework of choice. But if you make the switch, please don’t start complaining again. If you don’t like its behavior, try ReactJS or start contributing to AngularJS to improve it.

If you think running JavaScript on the server is plain wrong, simply don’t use Node.js/io.js.

If giving up referential integrity is blasphemy for you, don’t use (most of the) NoSQL databases.

If you don’t like Windows (any longer - like me), try Linux, OS X or build your own operating system. You may find it hard to change things, but it’s up to you to understand and learn things like bash scripting, VIM and all that stuff. Once you experience how hard it is to do better, you might understand why existing things are the way they are.

It’s your choice to improve yourself or stick with your habits. But if you stick with them, please do me a favour and stop complaining about them - they were your own choice.

#NodeJS / #ExpressJS: Adding routes dynamically at runtime

Written on 02. February 2015

When running an ExpressJS based API or website, you may want to add new routes without restarting your ExpressJS host.

Actually, adding routes to ExpressJS at runtime is pretty simple.

First, let’s create some “design time” routes for an API endpoint as well as some static content to be served by our ExpressJS application:

var express = require('express');
var app = express();

// development time route
app.get('/api/hello', function(request,response) {
    return response.status(200).send({"hello":"world"})
});

// static file handling
app.use(express.static(__dirname + '/client/app'));


app.listen(3000);

A call to http://localhost:3000/api/hello will return this:

{"hello":"world"}

And the call to http://localhost:3000 will render us some HTML:

<!DOCTYPE html>
<html>
<head lang="en">
    <meta charset="UTF-8">
    <title>Hello World!</title>
</head>
<body>
<h1>Hello World</h1>
</body>
</html>

Nothing special here, move along :)

Now, let’s assume we’re dropping a new ExpressJS controller into our ./controllers folder at runtime:

module.exports = {
    init : init
};

function init(app){
    app.get('/api/myruntimeroute', function(req,res) {
        res.send({"runtime" : "route"});
    })
}

When calling its route http://localhost:3000/api/myruntimeroute we’ll get a 404. Sad panda.

Now let’s drop this piece of code into our app.js before the app.listen(3000) line (this needs to be added before the host is started, of course):

// hook to initialize the dynamic route at runtime
app.post('/api/dynamic', function(req,res) {
    var dynamicController = require('./controllers/RuntimeController');
    dynamicController.init(app);
    res.status(200).send();
});

When sending a POST (curl -X POST http://localhost:3000/api/dynamic) to that endpoint, our controller, dropped in at runtime, now gets initialized and can register its routes.

Calling http://localhost:3000/api/myruntimeroute now GETs us some fancy JSON:

{"runtime" : "route"}

Looking back at the hookup code, you can see that the name of the controller is already known, which is of course just for the sake of simplicity in this sample.

In a real world implementation you might have a different hook-up, like a user action in your administration UI which looks for new controllers in a specific folder. Or you might monitor a specific folder for new controller files. You get the point.

Another approach to dynamic runtime routing is to have a catch-all route definition which kicks in when no other route matches.

A small example:

app.get('/api/:dynamicroute', function(req,res) {
    res.send({"param" : req.params.dynamicroute});
});

In this snippet, the part behind the /api/ path is dynamic, which means you can have some logic that distributes the requests based on decisions made at runtime.
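Such distribution logic could be as simple as a handler map consulted at request time, so new handlers can be added without touching the routing table. The handlers registry and its entries below are made-up examples:

```javascript
// Hypothetical registry of handlers, extendable at runtime.
var handlers = {
    time: function(req, res) {
        res.send({ now: new Date().toISOString() });
    }
};

// Resolve the dynamic part of the path to a handler at request time.
function dispatch(req, res) {
    var handler = handlers[req.params.dynamicroute];
    if (handler) {
        handler(req, res);
    } else {
        res.status(404).send({ error: 'no handler for ' + req.params.dynamicroute });
    }
}

// app.get('/api/:dynamicroute', dispatch);
```

Adding a new entry to handlers at runtime immediately makes the corresponding /api/... path respond.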

You can find the source code for this sample in this GitHub repository.

mongoose: Referencing schema in properties or arrays

Written on 01. February 2015

When using a NoSQL database like MongoDb, most of the time you’ll have documents that are self-contained. But there are also scenarios where you need a more relational approach and have to reference other documents by their ObjectIds.

This post will show you how to deal with these references using Node.js and the mongoose ODM.

Let’s assume we have a users collection and a posts collection, and thus a UserSchema as well as a PostSchema. Posts are written by users, and users can comment on them.

In this example, we’ll reference the users in posts and comments by their ObjectIds.

The UserSchema is straightforward and looks like this:

var mongoose = require('mongoose');

var UserSchema = new mongoose.Schema({
    name: String
});

module.exports = mongoose.model("User", UserSchema);

Besides the title property, the PostSchema defines an ObjectId reference for its postedBy property as well as for the postedBy property of each comment inside the comments array:

var mongoose = require('mongoose');

var PostSchema = new mongoose.Schema({
    title: String,
    postedBy: {
        type: mongoose.Schema.Types.ObjectId,
        ref: 'User'
    },
    comments: [{
        text: String,
        postedBy: {
            type: mongoose.Schema.Types.ObjectId,
            ref: 'User'
        }
    }]
});

module.exports = mongoose.model("Post", PostSchema);

Now let’s create two users:

require("./database");

var User = require('./User'),
    Post = require('./Post');


var alex = new User({
    name: "Alex"
});

var joe = new User({
    name: "Joe"
});

alex.save();
joe.save();

The interesting part, of course, is the creation of posts and, even more, the query for them. The post is created with the ObjectId references to the users.

var post = new Post({
    title: "Hello World",
    postedBy: alex._id,
    comments: [{
        text: "Nice post!",
        postedBy: joe._id
    }, {
        text: "Thanks :)",
        postedBy: alex._id
    }]
});

Now let’s save the Post and, once it has been created, query for all existing Posts.

post.save(function(error) {
    if (!error) {
        Post.find({})
            .populate('postedBy')
            .populate('comments.postedBy')
            .exec(function(error, posts) {
                console.log(JSON.stringify(posts, null, "\t"))
            })
    }
});

As you can see, we’re using the populate function of mongoose to join the documents when querying for Posts. The first call to populate joins the Users for the postedBy property of the posts, whereas the second one joins the Users for the comments.
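Conceptually, populate performs a client-side join: it collects the referenced ObjectIds, fetches the matching User documents and replaces each reference with the full document. A rough sketch of that idea in plain JavaScript (hypothetical in-memory data, no mongoose involved):

```javascript
// Hypothetical in-memory "collections" to illustrate what populate does.
var users = {
    u1: { _id: 'u1', name: 'Alex' },
    u2: { _id: 'u2', name: 'Joe' }
};

var post = {
    title: 'Hello World',
    postedBy: 'u1',
    comments: [
        { text: 'Nice post!', postedBy: 'u2' }
    ]
};

// Replace each ObjectId reference with the referenced document,
// mirroring .populate('postedBy') and .populate('comments.postedBy').
function populatePost(post, users) {
    post.postedBy = users[post.postedBy];
    post.comments.forEach(function(comment) {
        comment.postedBy = users[comment.postedBy];
    });
    return post;
}
```

The real populate of course issues an additional query against the users collection instead of using an in-memory map.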

The Post document in the database looks like this:

{
    "_id" : ObjectId("54cd6669d3e0fb1b302e54e6"),
    "title" : "Hello World",
    "postedBy" : ObjectId("54cd6669d3e0fb1b302e54e4"),
    "comments" : [
        {
            "text" : "Nice post!",
            "postedBy" : ObjectId("54cd6669d3e0fb1b302e54e5"),
            "_id" : ObjectId("54cd6669d3e0fb1b302e54e8")
        },
        {
            "text" : "Thanks :)",
            "postedBy" : ObjectId("54cd6669d3e0fb1b302e54e4"),
            "_id" : ObjectId("54cd6669d3e0fb1b302e54e7")
        }
    ],
    "__v" : 0
}

In contrast, the query result is a full document containing all resolved User references for the Posts.

[
    {
        "_id": "54cd6669d3e0fb1b302e54e6",
        "title": "Hello World",
        "postedBy": {
            "_id": "54cd6669d3e0fb1b302e54e4",
            "name": "Alex",
            "__v": 0
        },
        "__v": 0,
        "comments": [
            {
                "text": "Nice post!",
                "postedBy": {
                    "_id": "54cd6669d3e0fb1b302e54e5",
                    "name": "Joe",
                    "__v": 0
                },
                "_id": "54cd6669d3e0fb1b302e54e8"
            },
            {
                "text": "Thanks :)",
                "postedBy": {
                    "_id": "54cd6669d3e0fb1b302e54e4",
                    "name": "Alex",
                    "__v": 0
                },
                "_id": "54cd6669d3e0fb1b302e54e7"
            }
        ]
    }
]

You can find the source code for this sample in this GitHub repository.

MongoDB development environment journal size management using mongoctl

Written on 26. January 2015

On a developer machine you might have to install multiple versions of MongoDB or you might want to run multiple MongoDB server instances. Both can be done easily using mongoctl.

To install mongoctl on Ubuntu, just run:

sudo pip install mongoctl

In case you don’t have pip installed, run this before the above command:

git clone https://github.com/pypa/pip.git
cd pip
python setup.py install #might need sudo / root

To install the latest stable MongoDB release just run:

mongoctl install-mongodb

Besides installing MongoDB, this also creates the config file for your first MongoDB server, named “MyServer”. You can simply start it using:

mongoctl start MyServer

Stopping is as easy as starting it:

mongoctl stop MyServer

By default, journaling is enabled for MongoDB and may eat your hard disk if you’re running many MongoDB servers. Having installed MongoDB using mongoctl, the journal of “MyServer” can be found here:

~/mongodb-data/my-server/journal

Listing the directory contents shows that our instance uses 3 GB of hard disk space for journaling alone.

drwxrwxr-x  2 alexzeitler alexzeitler 4,0K Jan 25 00:09 .
drwxrwxr-x 10 alexzeitler alexzeitler 4,0K Jan 24 23:15 ..
-rw-------  1 alexzeitler alexzeitler 1,0G Jan 25 00:09 prealloc.0
-rw-------  1 alexzeitler alexzeitler 1,0G Nov 25 13:04 prealloc.1
-rw-------  1 alexzeitler alexzeitler 1,0G Nov 25 13:04 prealloc.2

While disabling the journal is a bad idea for production (it has been enabled by default since the 2.0 release), it is OK for a development environment.

To disable the journaling for “MyServer”, head over to ~/.mongoctl and edit the servers.config file.

Now find the “MyServer” configuration, which should be the first entry and look like this:

{
    "_id": "MyServer",

    "description" : "My server (single mongod)",

    "cmdOptions": {
        "port": 27017,
        "dbpath": "~/mongodb-data/my-server",
        "directoryperdb": true
    }
}

In order to disable MongoDB journaling, add the nojournal property to cmdOptions:

{
    "_id": "MyServer",

    "description" : "My server (single mongod)",

    "cmdOptions": {
        "port": 27017,
        "dbpath": "~/mongodb-data/my-server",
        "directoryperdb": true,
        "nojournal" : true
    }
}

Now stop your “MyServer” instance and delete the files from the ~/mongodb-data/my-server/journal directory (you’re doing this at your own risk, of course - and as said, not in a production environment!).

After starting “MyServer” again, the journal folder should stay empty. That’s it!

You can also limit the journal file size to 128MB by setting the smallfiles property to true.
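If you’d rather keep the journal but cap its size, the “MyServer” entry could look like this instead of the nojournal variant (a sketch based on the config shown above):

```json
{
    "_id": "MyServer",

    "description" : "My server (single mongod)",

    "cmdOptions": {
        "port": 27017,
        "dbpath": "~/mongodb-data/my-server",
        "directoryperdb": true,
        "smallfiles" : true
    }
}
```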

Farewell...

Written on 17. January 2015

The title of this post might sound a bit dramatic, but 2014 has indeed been a year of goodbyes for me - at least regarding software development. As almost every farewell is also a fresh start, lots of things have changed.

So let’s say farewell to…

… .NET-only software development

While I contributed to “Pro ASP.NET Web API” and did talks and projects on ASP.NET Web API in 2012 and 2013 out of conviction, 2014 became the year in which I decided to gain broader hands-on knowledge in the non-Microsoft space.

This means digging into Linux, Node.js and MongoDb and all those things like Gulp, Yeoman, Docker and …you name it.

I also nosed around Scala, Go and Erlang, but in the end I stuck with Node.js and its friends.

Things that felt bumpy in the .NET world (especially because of Windows, e.g. CMD vs. bash, Docker vs. …?) worked like a charm on the new stack and allowed me to overhaul my whole software development and deployment process.

Also the huge open source momentum in the Node.js ecosystem deeply impressed me.

On the other hand, in 2014 the .NET stack has also moved forward, and Node.js is not always the promised land. The biggest gain in ages has been open-sourcing the .NET Framework and making it a real cross-platform development environment.

As a .NET developer since the early days, I surely won’t leave the .NET ecosystem in the near future, but there may be requirements where I or my customers will prefer the Node.js environment in the future.

… blog.alexonasp.net

When you’re reading these lines, my old blog “Alex on ASP.NET” which has been around since 2003, is gone. As aforementioned, I’m no longer developing solely in the .NET space and thus my blog needs to be as open as I am.

… BlogEngine.NET

“Alex on ASP.NET” in its early days ran on “dasBlog”, which was later replaced by SubText. Since 2006 it has been running on BlogEngine.NET.

As blog posts are almost never modified after being published, running a database-backed blog engine no longer makes sense to me.

Because of this, I migrated my old blog to Wintersmith, a static site generator based on Node.js. It is now hosted on the DigitalOcean cloud platform and deployment is done using git.

I’ll cover this in an upcoming post for everyone who’s interested in that topic.