Software engineer's thoughts on technology - sometimes grumpy, sometimes depressive, sometimes happy (though not often).
Saturday, December 31, 2005
As always – at the very last minute.
I wish all my readers a Happy New Year! May all your dreams come true! In the coming year I will try to blog more often and write more interesting posts – at least that’s one of my resolutions. So, Happy New Year – and hope to see you all in 2006!
Thursday, December 22, 2005
Digg - first impressions
Today I tried using Digg – and I am not very impressed. I mean, the site as an application is great, and the content is there in abundance, but I still cannot figure out the right way to use it.
The main problem I have is filtering out the right news: there are just too many stories out there. As far as I understand the system, I can read either the front page, the top news, or the news by category. The categories are too coarse-grained for me: for example, I do not want to read all the programming news, since I do not program in C# or Python. I would greatly appreciate the ability to filter the news by some other criteria (tags, for example – this seems like an ideal place to use them!). I can’t even sort the news within a single category by ranking.
The top news section is a mixture of all categories – and I am not interested in half of them. The same goes for the front page.
So, I will probably make another attempt – but for now, I think Digg severely lacks filtering capabilities, and for me that is crucial.
Technorati tags: Digg, Web 2.0
Monday, December 19, 2005
Best of Web 2.0: two lists
Two lists of the best Web 2.0 applications of 2005. The lists give a pretty good idea of what Web 2.0 is and what the most prominent directions of its development are.
The first list was compiled by Mark Millerton and can be found here: http://www.articledashboard.com/Article/Top-10-Innovative-Web-2-0-Applications-of-2005/10891
The second list comes courtesy of Dion Hinchcliffe and is located here: http://web2.wsj2.com/the_best_web_20_software_of_2005.htm This list breaks Web 2.0 applications into categories and, besides naming the best, also provides several runners-up in each category. The discussion that follows the article is also quite interesting, as one can find even more interesting applications and sites there.
A curious thing: it seems that only Digg has made it onto both lists. I checked Digg out as soon as it was mentioned by eHub – but I definitely cannot consume any more news right now, so I still haven’t tried to use it. Now, though, I am thinking about giving it a shot – since both lists recommend it.
Technorati tag: Web 2.0
Thursday, December 15, 2005
Wikipedia trouble
On Monday O’Reilly Radar reported on a website that is orchestrating preparations for a class action lawsuit against Wikipedia (http://www.wikipediaclassaction.org/). The basic accusation is that the information posted on Wikipedia is inaccurate and defames some people.
The story behind this case is a long one. A summary of the events (from Wikipedia’s point of view) can be found here. It seems – at least after reading Wikipedia’s version – that the whole “class action suit” is just an attempt at retaliation by some people whose dubious business practices were accidentally uncovered by Wikipedia members. The case itself – if it ever reaches court – might set a precedent, one that is particularly important as we enter the world of Web 2.0 and user-generated content.
But there is another aspect of this case that interests me probably even more – the question of Wikipedia’s accuracy in general. The next post on O’Reilly Radar is dedicated to this very question: it describes an experiment carried out by the journal Nature. They ran 42 science articles from Britannica and from Wikipedia by a team of experts. It turned out that Britannica has on average about 3 inaccuracies per article, and Wikipedia about 4.
Though this may seem an impressive achievement for a team of voluntary, unpaid editors, I am still not convinced of Wikipedia’s quality. The problem that concerns me lies in the area of “common misconceptions”. Since the editing process is open to everybody, I am afraid that real facts which contradict some popular belief will be edited out in favor of incorrect but widespread opinions.
Which, in turn, brings up the changes that the Internet is making to the way we acquire knowledge. Not so long ago, the answer to the question “Where do I get knowledge about X?” was “Go read a book” or “Go ask an expert”. Now the answer is “Ask Google” or “Check Wikipedia”. The knowledge of experts is slowly being replaced by the knowledge of the crowd. The consequences of this process for our lives and our society may be quite deep and, I am afraid, quite negative.
Technorati tag: Wikipedia
Monday, December 12, 2005
Humane Interfaces
Martin Fowler recently published on his “bliki” (a hybrid between a blog and a wiki) a very interesting article about two different approaches to API design: “Humane Interface” and “Minimal Interface”. The article spurred quite a discussion – links to the various follow-ups are at the end of the original post. A summary of the debate can be found here.
Personally, I lean towards Humane Interfaces. No doubt it is harder to come up with just the right set of methods when you design such an API, and the maintenance is more difficult, but for the user it is a real blessing. It fits very well with my principle: everything should be possible to do, and common tasks should be simple to do. Basically, this is the same idea I was talking about when writing about the Java date and time classes.
According to Fowler, the term “Humane Interface” is very popular among Rubyists – one more reason to learn Ruby! I have more than enough reasons to do it by now – the only remaining question is where to find the time…
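To make the difference concrete, here is a small JavaScript sketch of my own – it is not taken from Fowler’s article, and HumaneList is an invented wrapper used purely for illustration:

// Minimal interface: only primitives; the caller assembles the common case by hand.
var items = ['a', 'b', 'c'];
var last = items[items.length - 1]; // 'c'

// Humane interface: the common case gets its own, intention-revealing method.
// HumaneList is a hypothetical wrapper, not an existing library.
function HumaneList(initial) {
    this.items = initial || [];
}
HumaneList.prototype.first = function () {
    return this.items[0];
};
HumaneList.prototype.last = function () {
    return this.items[this.items.length - 1];
};
HumaneList.prototype.isEmpty = function () {
    return this.items.length === 0;
};

var list = new HumaneList(['a', 'b', 'c']);
list.last();    // 'c'
list.isEmpty(); // false

The price, of course, is a wider interface to design, document and maintain – which is exactly the trade-off the whole debate is about.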
Wednesday, December 07, 2005
Jakob Nielsen: AJAX sucks. Is he right?
Jakob Nielsen has written an article called “Why Ajax Sucks”. He is definitely one of the grumpiest tech guys in the world – and, as usual, I have to agree with most of his statements. At the same time, though, he completely misses the point. He is so concerned with the usability issues that he just can’t see the forest for the trees.
The problem is not Ajax – or frames, or Flash… The problem is the Web itself. It has evolved from the hypertext model (the original design, which had the page as its main unit – see the beginning of Nielsen’s article) into an application platform. The paradigm has changed – but the tools have remained mostly the same, and that is the source of most of the usability problems that Nielsen attributes to Ajax.
The tools we use to work with the web are obsolete and outdated, because they still operate on pages. But most page attributes make little to no sense in the context of an application:
- URLs don’t work for applications: either the application simply cannot restore its state to the one “specified” by the URL, or it can – but the URL becomes such a monstrosity that we now have specialized on-line services that replace unwieldy long URLs with a short alias. The same goes for bookmarks.
- The “Back” button, that bane of web application designers, also has no place in the application world (and neither do “Forward” and “Reload”).
- Search engines cannot work with applications correctly.
So, the right – and the only – way to go is not to try and force the new paradigm into the Procrustes’ bed of old tools and metaphors, but to adapt the tools for the new world.
Here is what, in my opinion, should be done:
1. There should be a way for the client to tell an old-fashioned web site from an application. Of course, it will be the site designer’s responsibility to mark it correctly – but the server should somehow tell the browser (maybe with a new header?): “You are entering an application – switch to app mode”.
2. URLs as we know them should be replaced by a more generic object that can hold more data in a better format. This object – let’s call it a “neolink” – would tell the browser where to go and provide enough data for the application to restore the required state (if possible). Under the hood a neolink may be just an XML file. Bookmark managers would hold neolinks instead of URLs; a neolink could be sent by mail as well. URLs would be used only to address the old parts of the web and to point browsers at applications’ entry points.
3. Browser buttons should be customizable and should interact with the application. The easiest way: the button simply calls a JavaScript handler supplied by the application (see the sketch after this list).
4. A special protocol should be designed to define the interaction between an application and a search engine. Maybe the application could provide special pages for search engines that hold the new information and links, or provide a “search engine” login.
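To make items 2 and 3 a little more concrete, here is a purely hypothetical JavaScript sketch of how an application might plug into such a browser. None of these hooks exist today – onBackButton and onCreateNeolink are invented names, used only to illustrate the idea:

// An imaginary web-mail application object, standing in for "the app".
var mailApp = {
    currentMessage: 4711,
    showPreviousMessage: function () { this.currentMessage -= 1; }
};

// Item 3: the application supplies its own handler for the browser's "Back"
// button instead of relying on page-oriented history. (Invented hook.)
window.onBackButton = function () {
    mailApp.showPreviousMessage();
    return true; // tell the browser the application handled the event
};

// Item 2: when the user bookmarks, the browser asks the application for a
// "neolink" – a serialized blob of state (here XML) instead of a plain URL.
// (Invented hook.)
window.onCreateNeolink = function () {
    return '<neolink app="http://example.com/mail">' +
           '<state folder="Inbox" message="' + mailApp.currentMessage + '"/>' +
           '</neolink>';
};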
If – or, rather, when – these changes happen, the usability problems described by Jakob Nielsen will simply disappear. Of course, some new problems will arise – but we will learn about them only from Jakob Nielsen’s future articles.
Tuesday, December 06, 2005
Trick: Getting the Name of Enclosing Form (JS)
Here is a small JavaScript function that returns the name of the enclosing form for a given element. I found it quite useful while refactoring and supporting an old web application – some old JS code required a form name to access input fields, so I wrote this snippet to be able to use the old code in a more generic way.
// Returns the name of the form enclosing the element whose id is field_name,
// or an empty string if no such element or no enclosing form exists.
function getEnclosingFormName(field_name) {
    var obj = document.getElementById(field_name);
    if (obj == null) {
        return '';
    }
    // Walk up the DOM tree until we reach a FORM element or run out of parents.
    var node = obj.parentNode;
    while (node != null && node.tagName != 'FORM') {
        node = node.parentNode;
    }
    if (node == null) {
        return '';
    }
    return node.name;
}
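A small usage example (the form and field names here are hypothetical):

// Assume the page contains something like:
//   <form name="signup"> ... <input type="text" id="email" name="email"> ... </form>
var formName = getEnclosingFormName('email');   // returns 'signup'
if (formName != '') {
    // the old code that expects a form name can now be fed generically
    var emailField = document.forms[formName].elements['email'];
}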
Monday, December 05, 2005
Knowledge coordination
From time to time I find myself looking for an answer to what may be called a “knowledge coordination question”. For example:
- Does my company have a vector drawing tool?
- Do we have a Java library that implements the SFTP protocol?
- Has anyone already implemented a JSP tag that generates a date picker tied to our database?
It may sound strange, but most of the companies I’ve worked at didn’t address this problem in any way. So, the following are just my thoughts – or rather my plan, since I am going to try to implement this solution.
A good solution to this problem is a centralized repository for different types of knowledge. A wiki seems to be a good platform for implementing a “company-wide knowledge bank”. It is easy to create a separate page for each knowledge area, the most important being:
- Existing software – to catalog the tools which are currently used by the company;
- Libraries – to catalog currently used libraries;
- Reusable components – to track different in-house developed components, which are intended for reuse.