My notebook is now 4 years old, and when I look for a new one, it seems as if there are no better notebooks today. 4 years ago I paid 460€ for my Acer with an i3 4010U, 4GB of RAM, a 250GB SSD, and a 1080p 15.3” screen. Even after such a long time it is very hard to find something better for the same price, especially since I want a notebook of the same size and thickness. If I want more, I need to pay more.

And actually I want more, but when I spend just a little more, the improvement is too small to be worth it, and once the specs get noticeably better, the price quickly goes out of range. For 4 years I have lived with a broken left mouse button, so the quality of my notebook was already on a low level back then. So next time I really want to spend more on my PC, so that working on the machine feels better every day. As a software developer I work a lot on this machine; most of my open-source projects are developed on my notebook.

Actually I am happy. I am happy that it is not necessary to get a new device. And I like that coding usually requires so little hardware. I am happy that this hardware lasts so long and stays reliable. About 8-9 months ago I switched to Linux, and there the hardware seems to age much more slowly.

And I am not just thinking of myself. It is better for the environment: every device I don't buy is one device less on a mountain of trash. And it seems others are also using their devices longer. 10 years ago you could buy a new machine every two years, but now a 4-year-old computer is still good. I like that. Not to forget the social aspect: such an old machine is still capable of fulfilling the most important tasks. Most of those happen on the internet, and an internet-capable PC is very cheap to get today.

With that I want to thank Intel and AMD for a slower innovation cycle.

Recently I had to reinitialize my webserver, do some updates, and clean up all the stuff I had broken while testing on the previous installation. For the new system I decided that it should be hosted with Node.js, not with Apache. With that step, I also had to move from WordPress to something new. On the server I already had several Node.js services running that help me watch YouTube videos in China or store files right from the web.

setup

Now the main webserver had to be replaced. After checking some options, I decided to run the Node process on a port of my choice and redirect port 80 to that port using iptables. This way, the Node process doesn't need to be started with root privileges. With my previous Node.js services I had the problem that they always shut down after some time. The solution for that is shown in the following listing:

process.on('uncaughtException', (err)=>{
  console.log(err); // actually I log into a file
});

This snippet prevents the server from shutting down after an error, and later I can check out what happened. The underlying issue was that the servers logged to the console, the console buffer filled up, and the process broke. The issue was the same when using forever, pm2, or running the server in the background with ‘disown’.
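A rough sketch of what logging into a file instead of the console can look like (error.log is just an example path, not my actual setup):

const fs = require('fs');

process.on('uncaughtException', (err) => {
  // append the error to a log file instead of filling up the console
  fs.appendFile('error.log', new Date().toISOString() + ' ' + err.stack + '\n', () => {});
});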

system

The server that is running now is an Express server, serving static files from its public directory. First I began to prepare a basic layout, writing underscore templates managed by my template manager. But actually I didn't want to build a layout right now. That made me look at static page generators. On GitHub, many pages are made using Jekyll, but that is a system written in Ruby, and I wanted to use something built with Node.
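To sketch the basic setup described above (the port number is just an example):

const express = require('express');
const app = express();

// serve the files from the public directory
app.use(express.static(__dirname + '/public'));

// listen on an unprivileged port, no root privileges needed
app.listen(8080);

// on the server, port 80 is redirected to that port with iptables, roughly:
// iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080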

hexo

After a while of crawling some websites, I found Hexo: a static page generator with a CLI program to initialize a new project, import content from my WordPress page, and create new pages and posts. The structure with pages and posts was immediately familiar to me. For Hexo, there are about 50 themes on the official website. I chose one and tweaked it to look like a page for a software developer. The theme is already responsive and looks very clean.

I can create new posts and pages directly from the console. The CLI initializes a new markdown file for each, where I can directly write new content. To check locally how the page looks, the CLI tools provide a built-in webserver that regenerates the page live. For production, you use “hexo generate” to create the static HTML files. After generating, I simply copy the files to the public directory of my live host, and there is the new content. The system even generates archive pages that let the visitor browse the site by time. On the sidebar is a tag cloud that helps explore the website by its tags. This will be a great plus for search engine optimization.

For now I am very excited about Hexo. It is great for making simple websites, and it gives me a good separation from the rest of the code that I will add to the server.

Designing APIs is an important task. An API should be simple, follow good conventions, behave as expected, and be efficient. This post is about updating data in bulk operations.

Let's say you have a blog system with many authors. On the back end, the authors see a list of all posts and pages. They can select many and publish all of them at once. It is also possible to select a few and change their rating. Actually, I don't want to talk about a blog system specifically; it is about changing many items in a list.

In a standard RESTful API, the client sends a PATCH request for each of the items. This RESTful approach is simple to implement; in many cases the API can be generated almost automatically. For example, if you have MongoDB or MySQL, you can easily expose an HTTP REST endpoint using sails.js or loopback. Also on the client side that API is very easy to use: loop over the selected items, trigger the update call, and handle the response, one by one.
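A rough client-side sketch of that per-item approach could look like this (the endpoint /api/posts/:id and the fields are made up for the example):

// the items the user selected (example data)
const selectedPosts = [{ id: 12 }, { id: 15 }, { id: 23 }];

// one PATCH request per selected item
selectedPosts.forEach((post) => {
  fetch('/api/posts/' + post.id, {
    method: 'PATCH',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ published: true })
  }).then((res) => {
    // handle each response one by one
    console.log(post.id, res.ok ? 'updated' : 'failed');
  });
});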

But as comfortable as it is, it is also not efficient, because you always send the same information for different entries. On slow connections (mobile, rural areas, shared internet, China), it can hurt the user experience. For these cases the server can provide additional methods to handle bulk operations, like publishPosts or changeRatingTo. Both methods would in our case expect a list of postIds and the value to change to.
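Such a bulk call could then send the whole selection in one request, roughly like this (route name and payload shape are only one possible convention):

// one request for the whole selection
fetch('/api/posts/changeRatingTo', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    postIds: [12, 15, 23], // the selected items
    value: 4               // the rating to set on all of them
  })
});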

Having a list of items to change on the server, you face the same question again: do you execute a single update command on the database, or one update after the other? Can you check the permissions as a list operation? Then you can decide whether to refuse all item changes if one item fails some precondition. Last is the result reporting: do you need to send success states for each item, only for the successful items, or only for the failing item changes?
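With MongoDB, for example, the whole list can be handled in one update command instead of one update per item. A sketch of such a handler could look like this (collection and field names are made up, and it assumes a JSON body parser and a connected database):

// assumes `app` is an Express app and `db` a connected MongoDB database
app.post('/api/posts/changeRatingTo', (req, res) => {
  const { postIds, value } = req.body;
  // a single updateMany for all selected items
  db.collection('posts').updateMany(
    { _id: { $in: postIds } },
    { $set: { rating: value } }
  ).then((result) => {
    res.json({ modified: result.modifiedCount });
  });
});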

Whatever solution you choose, you should give a transparent report to the user and update the item list accordingly. To close, here are some questions you can ask when you get the task to implement an API with list operations:

  1. can each item get a different value in this operation?
  2. fail-one-fail-all or a detailed report?
  3. what should the error reporting look like? (one possible shape is sketched below)
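For question 3, one possible shape for such a per-item report could be (purely an example, not a fixed format):

// example response of a bulk update, reporting each item individually
const report = {
  updated: [12, 15],
  failed: [
    { id: 23, error: 'permission denied' }
  ]
};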

Last time I told you about txml, the fastest XML parser in JavaScript.

Now I have a new bonbon for you: tReeact, a framework that is inspired by Facebook's React. In React you compile JSX, letting you generate an object tree that represents the HTML DOM structure. In tReeact you generate an HTML string representing your UI. tReeact will parse that string (using txml) and do the reconciliation, updating the UI and changing only the elements that need an update.

This is good because you can directly reason about why your UI looks the way it does, based on your app data and the generation of the XML. Since updates are applied to the existing DOM elements, it is easy to add animations using CSS. The XML can easily be created with a templating language of your choice, or even be sent to you from your server.
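To illustrate the flow, here is a rough sketch: an underscore template produces the HTML string from the app data, and the result is handed to tReeact. The function name tReeact.update is only a placeholder; check the documentation on GitHub for the real API.

// assumes underscore is loaded as the global _
const appData = { todos: [{ title: 'write blog post' }] }; // example data

const template = _.template(
  '<ul class="todos"><% items.forEach(function (item) { %>' +
  '<li><%= item.title %></li><% }); %></ul>'
);

function render() {
  // generate the full HTML string from the current app data
  const html = template({ items: appData.todos });
  // tReeact parses the string (with txml) and patches only the DOM
  // elements that actually changed; the function name is a placeholder
  tReeact.update(document.getElementById('app'), html);
}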

Another important feature is components. They are used to handle UI events. In your app, you can provide some generic components that set a value on click or on change of an input, followed by a direct re-render. That provides two-way data binding between your UI and your app data.

So the idea and the usage are quickly described. If you like to generate your UI with a templating language of your choice, if you want a separation of your app logic and the UI development, if you want to handle your UI in a purely procedural way, or if you want to build your app with a Flux framework, you should give it a try and check it out on GitHub and npm.

tReeact is a great option that lets you scale from small apps to complex enterprise applications.

It is done: the fastest pure JavaScript XML parser is ready.

For a new UI framework that I am developing, I needed an XML DOM parser. The new framework, tReeact, was highly inspired by React.js but wants to render the UI using a templating language of your choice. The framework will then update the DOM elements in the browser in the fastest possible way and with a minimal number of direct DOM operations.

But the first step is the XML parser. Usually, people distinguish between two kinds of XML parsers: stream readers and DOM parsers. Stream readers are great for reading very large files, even bigger than a few GB. While parsing, they trigger events describing what was found on the XML stream. A DOM parser, on the other hand, takes an XML string and returns an object that represents the structure and data of that XML. Because that object and the entire XML string have to be in memory while parsing, the size of the parsed XML is limited. In return, the program using the DOM object can be written in a procedural way instead of an event-driven one, which makes it much easier to reason about, debug, and develop.

The new framework tReeact is meant to handle a web app's UI, and the HTML of an app is seldom bigger than 5MB. That's why tXML became a DOM parser. The development took several steps. First I made a basic version that could give me nodes, attributes, and text content. Then I used this tool to parse different sources: OpenStreetMap, several websites, their RSS feeds. I also compared the speed with other projects, like XML2JSON, sax, and the browser's native XML parser. In the end I was about 5-10% slower in Chrome than native. But the object I got in the end was a “plain old JavaScript object”, so accessing the data is much faster than using the DOM API in a browser. That will especially make tReeact faster, since it has to traverse the entire object for its comparison.
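To give an idea what that plain object looks like, here is a tiny sketch (depending on the version, the parser is exposed as the module export itself or as its parse function):

const tXml = require('txml');

// parse an XML string into a plain old JavaScript object tree
const dom = tXml.parse('<user status="active"><name>Tom</name></user>');

// the result is just nested arrays and objects with tagName,
// attributes and children, no DOM API needed to read it
console.log(dom[0].tagName);                 // 'user'
console.log(dom[0].children[0].children[0]); // 'Tom'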

In other situations the difference will be much more significant. In a direct comparison of sax vs. tXML, parsing the GitHub website and a big chunk of OSM data, the advantage was 5 to 10 times in terms of speed. When reading that, please keep in mind that this is a comparison of a stream reader vs. a DOM parser.

Motivated by seeing this advantage, I analysed the tXML parser again and thought about how I could improve the speed and usability for the most common cases. A great win for usability was to “simplify” the DOM object; for that I took PHP's SimpleXML as a model. With a simplify method, I return the same kind of object as if it was parsed by SimpleXML. This lets you access the data very comfortably.
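A small sketch of that simplified access (the exact output shape may vary a little):

const tXml = require('txml');

// simplify the parsed node tree into a SimpleXML-like structure
const simple = tXml.simplify(tXml.parse('<user><name>Tom</name><age>32</age></user>'));

// then the data can be read directly by tag name
console.log(simple.user.name); // 'Tom'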

By providing the functions “getElementsByClassName” and “getElementById”, the usability as well as the speed is increased, and the speed advantage can be enormous, because you use these functions directly on your XML string. That way tXML parses only the necessary elements, not the entire XML. These methods make tXML the perfect tool for parsing data from any website that does not officially provide an API. So, have fun hacking the web.
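Used on a raw HTML string, that looks roughly like this:

const tXml = require('txml');

// some downloaded page, as a plain string (example content)
const html = '<div id="readme" class="entry-title">hello</div>';

// look up elements directly on the string; only the matching
// parts get parsed, not the whole document
const readme = tXml.getElementById(html, 'readme');
const titles = tXml.getElementsByClassName(html, 'entry-title');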

If you are now interested in using the fastest XML parser for the best user experience in your application, get started and install the tXML parser with “npm install txml”, or download the standalone version for the browser on GitHub. On npm as well as on GitHub, you will find the documentation.

A short opinion at the end: if you can choose, use JSON instead of XML to persist and transfer data. It is much easier to access in all programming languages and also very fast in JS.