REST is a pattern for providing an API to manage resources on a server, offering a uniform way to create, access and change data. In a previous post I have shown the json-server. With that server, I made several tests and even extended its API. The json-server takes a JSON file and provides access to it through an auto-generated API.

But until recently I never made a complete app using that server, because every time I ran into issues that would require more coding than I was willing to spend on small side projects. Still, I could not get REST APIs out of my mind, so I studied some resources to answer the open questions. In this small series of posts I want to talk about some of them.

The first post, the one you are currently reading, is about authentication. The next will be about actual designs, meaning how parameters and responses should look. It will also point to good resources with good API definitions and actual implementations. After that, I will actually take a look at implementing a REST API in Node.js with auto-generated APIs, plus additional features: how objects get validated, security guaranteed and business rules applied.

Authentication

I often asked myself how I should do authentication and how resources that are related to the current user should look. Messages to me, messages from me, my photos, my results, orders, whatever. On top of that, I was asking myself how a RESTful API should actually look. The json-server is so simple that it is instantly fun to play with. But I quickly reached several points that raised questions which needed answers, yet were too big for small side projects.

Login

First was authentication: is creating a session actually RESTful? My answer is yes. Many people come to different opinions, but that is not too important. The important thing is that you know how to do authentication. Typically I had some auth module that provided an RPC method for login and another to get information about the current session.

For my first complete RESTful app, I provided a kind of virtual resource. Many frameworks would call that a Controller or API handler. The controller I made received login information through a POST request, provided the current session information on GET, and handled DELETE for logout.

token + signatures

When the API is not meant to be called by a browser, working with sessions is not comfortable, and when using web APIs I never saw it. Instead, the authentication is sent along with every request. Depending on the importance of the API, that was sometimes just a token, so the provider can monitor my requests and limit the results and the number of requests. Sometimes it is fine to just use HTTPS for encryption. But because many HTTP client implementations do not validate certificates, some APIs require adding a signature to each request.

implementation

Using Node.js with express, both types of authentication can be enforced using middleware that runs before the middleware for the actual API is executed. No matter whether you are using sails.js, json-server or another REST API framework, you can use standard middleware such as express-session, csurf or body-parser. With your application's specific authentication middleware you only need to invest once, and authentication is solved.
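A sketch of such an application-specific auth middleware, assuming a token sent in a request header; the token store is a made-up placeholder:

```javascript
// Hypothetical token store; a real app would check a database or session store.
const validTokens = { abc123: { user: 'alice' } };

// Express-style middleware: runs before the API routes and stops
// unauthorized requests with a 401.
function authMiddleware(req, res, next) {
  const session = validTokens[req.headers['x-api-token']];
  if (!session) {
    res.statusCode = 401;
    return res.end('unauthorized');
  }
  req.session = session; // later middleware can read the session
  next();
}

// wiring sketch:
// app.use(authMiddleware);    // authentication runs first
// app.use('/api', apiRouter); // the actual API only runs when authorized
```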

Programming server applications is very complex. One big discipline is using databases, and the most common kind are SQL databases. A best practice says: you should not write SQL in string notation mixed into your code. This post is about moving SQL queries out of your source code.

When I used J2EE, I also used JQL, the Java query language. There you write the queries right above your actual function and give them a name that you can then access inside that function. But in my opinion the query is then still inside the code, just at a different position, attached to some variable or object.

Another common approach is query builders. There you use functions to assemble the query string. In Node.js there are modules for that, squel or knex, to name two. In both, you write something like this: squel.select().from("tablename", "t").join("othertable", "t2", "t.c = t2.c").where("t.b > 2"). But actually, again, in my opinion the best practice is not met. The query is still written in between your application code (hopefully in a separate module). You don't write SQL, but you still write a complex syntax to query the database, which you have to learn on top of the database and SQL. I think the value of using those libraries directly is very limited.

But they are often used by a third kind of library that wants to solve this issue. ORMs often use query builders to work on different databases with different dialects of SQL. For example, Sequelize as well as Bookshelf use the knex query builder. In addition they provide handy methods to load, update and insert objects. But as soon as you need to execute a more complex query, you still use the underlying query builder and the problem is the same again: you need to learn an ORM plus its query builder.

Another type is query-mapping libraries. Actually, the mysql module for Node can already be seen as one of this kind. The result is mapped into an array of plain objects, where the values can be accessed by the same names as the fields of the table. But this alone does not help to move the query out of your JS or PHP file. At my company we are using bearcat-dao. It actually describes itself as a query-mapping framework. Instead of simple objects, it can also map to defined classes. In bearcat-dao the SQL queries are written into separate .sql files. When booting the app, bearcat-dao reads all .sql files and provides those queries by their names. So when you want to execute a query, you choose one by name in your code, and the query itself lives in an SQL file, which also gives you nice syntax highlighting in editors.

This actually meets the best practice of moving SQL queries into a separate file. But the implementation has a drawback: all queries across all SQL files share the same namespace, and if you give two queries the same name, you don't even get a warning. This led me to write an SQL-file loader that makes it possible to have a namespace for each single SQL file. My new module, tsqlreader, can load SQL files; it can handle comments and has support for templates or fragments.
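To illustrate the idea (this is not tsqlreader's actual API, just a sketch), a per-file loader can be written as a parser that takes query names from comment markers, so that each file becomes its own namespace:

```javascript
// Sketch of parsing one .sql file into a namespace of named queries.
// The "-- name: queryName" marker convention is invented for this example.
function parseSqlFile(text) {
  const queries = {};
  let current = null;
  for (const line of text.split('\n')) {
    const match = line.match(/^--\s*name:\s*(\w+)/);
    if (match) {
      current = match[1]; // start a new named query
      queries[current] = '';
    } else if (current && line.trim()) {
      queries[current] += (queries[current] ? ' ' : '') + line.trim();
    }
  }
  return queries;
}

// Each file becomes its own namespace, so equal names in different
// files cannot collide:
// const userQueries = parseSqlFile(fs.readFileSync('user.sql', 'utf8'));
// connection.query(userQueries.getById, [id], callback);
```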

So if you think using an ORM or query builder is not appropriate, you can try tsqlreader to load your SQL queries from separate files. But please try not to repeat yourself too much when you write your SQL. That is what the next post will be about.

Recently at my company, I had a task where I had to make changes to almost the entire server-side codebase, and of course I made many mistakes and produced bugs and errors along the way. In this post, I want to show good and bad errors. Actually, I don't care so much about the error.message; more important is the stacktrace, because how can an error be fixed when you don't know where it comes from? Along the way, I twice had to hack into a framework to get a complete stacktrace.

a good error

First, I have a good stacktrace. To produce it, I created a circular data structure, where A has a property B, which in turn has A. When you pass this to res.json(circular) in express, you get the following trace.

TypeError: Converting circular structure to JSON
at Object.stringify (native)
at ServerResponse.json (/testExpress/node_modules/express/lib/response.js:242:19)
at /testExpress/app.js:8:7
at Layer.handle [as handle_request] (/testExpress/node_modules/express/lib/router/layer.js:95:5)
at trim_prefix (/testExpress/node_modules/express/lib/router/index.js:312:13)
at /testExpress/node_modules/express/lib/router/index.js:280:7
at Function.process_params (/testExpress/node_modules/express/lib/router/index.js:330:12)
at next (/testExpress/node_modules/express/lib/router/index.js:271:10)
at expressInit (/testExpress/node_modules/express/lib/middleware/init.js:33:5)
at Layer.handle [as handle_request] (/testExpress/node_modules/express/lib/router/layer.js:95:5)

At line four it points into my app, so it is easy to see where I produced the circular data structure. In contrast I can name the pomelo framework: if you try to pass an object to the client there that cannot be serialized, you get an error whose stacktrace lies completely within the node_modules folder.
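The circular structure from that example can be reproduced without express, since res.json ultimately calls JSON.stringify:

```javascript
// A has a prop b, whose a points back to A -> circular
const a = { name: 'A' };
const b = { name: 'B', a: a };
a.b = b;

// JSON.stringify is what res.json calls under the hood; it throws
// "TypeError: Converting circular structure to JSON" on such input.
function tryStringify(value) {
  try {
    JSON.stringify(value);
    return null;
  } catch (err) {
    return err;
  }
}
```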

a bad error from bearcat-dao

Next is a longer stacktrace from bearcat-dao and the underlying mysql module. It appeared when I had to introduce a new parameter to the querying function but did not directly update all usages of that method.

{ [Error: ER_PARSE_ERROR: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '?' at line 1]
code: 'ER_PARSE_ERROR',
errno: 1064,
sqlState: '42000',
index: 0 }
Error: ER_PARSE_ERROR: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '?' at line 1
at Query.Sequence._packetToError (/app/node_modules/mysql/lib/protocol/sequences/Sequence.js:48:14)
at Query.ErrorPacket (/app/node_modules/mysql/lib/protocol/sequences/Query.js:83:18)
at Protocol._parsePacket (/app/node_modules/mysql/lib/protocol/Protocol.js:271:23)
at Parser.write (/app/node_modules/mysql/lib/protocol/Parser.js:77:12)
at Protocol.write (/app/node_modules/mysql/lib/protocol/Protocol.js:39:16)
at Socket.<anonymous> (/app/node_modules/mysql/lib/Connection.js:92:28)
at emitOne (events.js:77:13)
at Socket.emit (events.js:169:7)
at readableAddChunk (_stream_readable.js:146:16)
at Socket.Readable.push (_stream_readable.js:110:10)
at TCP.onread (net.js:523:20)
--------------------
at Protocol._enqueue (/app/node_modules/mysql/lib/protocol/Protocol.js:135:48)
at PoolConnection.Connection.query (/app/node_modules/mysql/lib/Connection.js:197:25)
at /app/node_modules/bearcat-dao/lib/template/sql/mysqlTemplate.js:344:26
at /app/node_modules/bearcat-dao/lib/connection/sql/mysqlConnectionManager.js:58:3
at /app/node_modules/bearcat-dao/lib/connection/sql/mysqlConnectionManager.js:116:4
at Ping.onPing [as _callback] (/app/node_modules/mysql/lib/Pool.js:94:5)
at Ping.Sequence.end (/app/node_modules/mysql/lib/protocol/sequences/Sequence.js:96:24)
at Ping.Sequence.OkPacket (/app/node_modules/mysql/lib/protocol/sequences/Sequence.js:105:8)
at Protocol._parsePacket (/app/node_modules/mysql/lib/protocol/Protocol.js:271:23)
at Parser.write (/app/node_modules/mysql/lib/protocol/Parser.js:77:12)
at Protocol.write (/app/node_modules/mysql/lib/protocol/Protocol.js:39:16)
at Socket.<anonymous> (/app/node_modules/mysql/lib/Connection.js:92:28)
at emitOne (events.js:77:13)
at Socket.emit (events.js:169:7)
at readableAddChunk (_stream_readable.js:146:16)
at Socket.Readable.push (_stream_readable.js:110:10)
at TCP.onread (net.js:523:20)

The problem is: I can see there is a missing parameter, but because I had changed so many places, I could not say which function was now being called with a missing parameter.

That made it necessary to look deeper into the bearcat-dao module. What I needed was a piece of stacktrace pointing out of bearcat, into my app and the wrong usage of the query method. That is actually what mysql already does when using a connection pool, which is why we have the line of dashes (--------------------). The difficulty in providing a full stacktrace lies in asynchronous code: bearcat-dao manages the connection pool itself, but does not provide the stacktrace. (I already made a pull request to bearcat-dao on GitHub; we will see when they fix that issue in the npm module.)

fix of the bearcat error stacktrace

To provide the stacktrace, I created a new Error object, but did not throw or return it. Only when an error occurred did I append its stack to the original error. That way, I can see where the error comes from and where the querying method was called, and fix the bug.
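The pattern can be sketched like this; queryFn stands in for the actual database call, and the separator line mimics the one mysql prints:

```javascript
// Capture the caller's stack *before* going async, and append it to any
// error coming back from the driver, so the trace points into the app.
function queryWithCallerStack(queryFn, sql, callback) {
  const caller = new Error('executeQuery Error'); // records the call site
  queryFn(sql, function (err, rows) {
    if (err) {
      err.stack += '\n--------------------\n' + caller.stack;
    }
    callback(err, rows);
  });
}
```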

The stack was then extended by the following stacktrace.

executeQuery Error
at MysqlTemplate.executeQuery (/app/node_modules/bearcat-dao/lib/template/sql/mysqlTemplate.js:330:41)
at DomainDaoSupport.getList (/app/node_modules/bearcat-dao/lib/core/domainDaoSupport.js:715:31)
at FileDao.getSharingChildren (/app/app/mysql/dao/fileDao.js:196:28)
at VFService.getSharingChildren (/app/app/mysql/service/vfService.js:963:19)
at Promise.post (/app/node_modules/q/q.js:1161:36)
at Promise.promise.promiseDispatch (/app/node_modules/q/q.js:788:41)
at /app/node_modules/q/q.js:1391:14
at runSingle (/app/node_modules/q/q.js:137:13)
at flush (/app/node_modules/q/q.js:125:13)
at doNTCallback0 (node.js:430:9)
at process._tickDomainCallback (node.js:400:13)

bad error in Q

Until now, we have seen frameworks that stand at the beginning and the end of a processing chain. That means express and pomelo provide a structure you can hook into, to provide an API to other machines, while mysql and bearcat-dao provide access to some resource, in this case a database.

Q is a library that helps you structure code and lives directly within your context, letting you write code using promises. We also use it to execute node-style async methods with promises. The problem: when you have an error, you can see where it occurred, but you cannot see where the method was called that caused the error.

In other situations, I realised that async.js provides the stack for both sides: you see where the error occurred and where the method was called from.

conclusion

Having done so much debugging work, I have learned a lot about error handling and what I expect from a good error. The error should not just describe what is wrong, it should also describe how the error occurred, and the stacktrace is a good vehicle for that.

When a library does not provide you a complete stacktrace, it is possible to read the module's code and adjust the error handling there. And actually, I think we all deserve libraries that provide a stacktrace pointing into our application, and when writing a library it should always be possible to provide that stack.

When working with databases, there is always the question of how to load objects. Here I will give you some thoughts that are important in general and especially when working with Node.js.

When loading objects of a kind by their id, it is obvious and trivial to load all their fields. To reduce traffic and memory usage, it is useful to use a projection; in SQL that means you select the fields of interest instead of loading the entire rows with *. But when selecting specific fields, you need a new field selection for every method where you load the same type of object. So usually I load the whole row, because during development it is much cleaner and you have fewer different query types. That can later be optimized more easily using a caching layer.

It becomes more complex when you need to load related objects. In Bookshelf or Sequelize this feature is called nesting, or loading nested objects. Other frameworks have features called lazy loading: when accessing the property that holds the related object, the framework loads that object behind the scenes. This feature is more popular in Java or PHP.

Independent of the framework or language, it is important to know what data is getting loaded, to make the code work efficiently and reliably provide the necessary nested objects. As a solution I now follow a naming convention. To get objects of some type, I call a get method, something like "var users = userDB.getByName(name)" to get one or more users with that name. This only loads the users' direct properties/fields. To load the profile pictures for some users, I implement a fetchProfilePics method on the userDB controller. This method takes a list of users, loads their profile pictures, extends the users by a profilePics property containing a list of pic objects, and returns the pic objects.

This process can be automated very easily, it is clear which objects get loaded, and it gives good control to load only the necessary objects.
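The convention can be sketched with in-memory stand-ins for the database tables (getByName, fetchProfilePics and the data below are illustrative, not the actual app):

```javascript
// Hypothetical in-memory tables standing in for a real database.
const usersTable = [
  { id: 1, name: 'alice' },
  { id: 2, name: 'alice' },
];
const picturesTable = [
  { id: 10, userId: 1, url: '/p/10.jpg' },
  { id: 11, userId: 2, url: '/p/11.jpg' },
];

// get*: loads only the objects' direct fields
function getByName(name) {
  return usersTable.filter(function (u) { return u.name === name; });
}

// fetch*: takes already-loaded objects, loads a related collection,
// attaches it as a property and returns the related objects
function fetchProfilePics(users) {
  const ids = users.map(function (u) { return u.id; });
  const pics = picturesTable.filter(function (p) { return ids.indexOf(p.userId) !== -1; });
  users.forEach(function (user) {
    user.profilePics = pics.filter(function (p) { return p.userId === user.id; });
  });
  return pics;
}
```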

update

The rule of getting objects by some information and then fetching related objects was very useful, but I saw that I would write a lot of code repeatedly for many types of objects. So I prepared a library for mysql that lets you give a schema for a class and provides you reasonable prepared get and fetch methods. You can try tmysqlpromisedao.

Extending HTML means implementing new behavior for existing elements in HTML. There are a number of default behaviors: a click on a link opens the page defined by the href attribute, a select can be changed by the user, a form can submit data, and so on. With extending HTML you define new behaviors and rules for when they get applied. Once the extension is defined, a developer can create elements of a certain type to get the new behavior applied. This helps to reduce code, compared to writing a page-load script that finds each button, applies the custom behavior and has a section for every element on the page with special requirements. It is useful if you have a website that delivers page by page from the server, as most PHP sites do, using systems like WordPress, Drupal and in most cases Symfony and Zend. I will discuss the advantages and disadvantages of such systems over single-page applications in a future post. A second improvement from building HTML extensions is the separation of concerns: you define the behavior in one place, and the composition and style of those components is defined in other places, like templates.

I heard about HTML extensions at a JavaScript meetup at The NetCircle in Shanghai. There was a talk given by Billy Shen; in his presentation he spoke about his implementation using Promises and jQuery. In the following I want to show the approach I would prefer for implementing HTML syntax extensions using jQuery, in a small example. The example allows defining a button that, when clicked, loads data from a given URL and displays it on some other element, given by a selector, but only if the user allows it.

// get some event on a certain type of element, in this case buttons with the class loadJSON
$(document).on("click", "button.loadJSON", function (e) {
  // when there was a click on a button that has the loadJSON class,
  // read parameters from the attributes and validate them before doing the logic
  var message = e.target.getAttribute('data-message') || 'should we do it?';
  var url = e.target.getAttribute('data-url');
  var destiny = e.target.getAttribute('data-destiny');
  if (url && destiny && confirm(message)) {
    $.get(url, function (data) {
      $(destiny).text(data);
    });
  }
});

As you can see, the click event is registered on the document. That means every click gets evaluated against the selector "button.loadJSON". This approach is good for adding some behaviors to an existing website, and I think it is fine as long as you don't have more than a hundred extensions. But for a single-page application, where basically every element can have some specific and unique functionality, you should bind the events directly on those elements when you create them, or let that be handled by some framework like Backbone.js, React, Angular, Ember, …

In conclusion, extending HTML just gives you a new perspective on how to implement simple components. So, can you be a great JavaScript UI developer without extending the HTML syntax? Yes. Do you need it to implement apps? No. Is it interesting, and could it probably save some time? Yes.