Saturday, December 31, 2005

Happy New Year!

As always – at the very last minute.

I wish all my readers a Happy New Year! May all your dreams come true! In the coming year I will try to blog more often and write more interesting posts – at least that’s one of my resolutions. So, Happy New Year – and I hope to see you all in 2006!

Thursday, December 22, 2005

Digg - first impressions

Today I tried using Digg – and I am not very impressed. I mean, the site as an application is great, and content is present in abundance, but I still cannot figure out the right way to use it.

The main problem I have is filtering out the right news. There is just too much news out there. As far as I understand the system, I can read either the front page, the top news, or news by category. The categories are too coarse-grained for me: for example, I do not want to read all programming news, since I do not program in C# or Python. I would greatly appreciate the ability to filter or sort the news by some other criteria (tags, for example – this seems like an ideal place to use them!). I can’t even sort the news within one category by ranking.

The top news section is a mixture of all categories – and I am not interested in half of them. The same goes for the front page.

So, I will probably make another attempt – but for now, I think that Digg severely lacks filtering capabilities, and for me that is crucial.


Monday, December 19, 2005

Best of Web 2.0: two lists

Two lists of the best Web 2.0 applications of 2005. The lists give a pretty good idea of what Web 2.0 is and what the most prominent directions of its development are.

The first list was compiled by Mark Millerton and can be found here: http://www.articledashboard.com/Article/Top-10-Innovative-Web-2-0-Applications-of-2005/10891

The second list comes courtesy of Dion Hinchcliffe and is located here: http://web2.wsj2.com/the_best_web_20_software_of_2005.htm This list breaks Web 2.0 applications into categories and, besides naming the best, also provides several runners-up for each category. The discussion that follows the article is also worth reading, as it mentions even more interesting applications and sites.

A curious thing: it seems to me that only Digg made it onto both lists. I checked Digg out as soon as it was mentioned by eHub – but I definitely cannot consume any more news right now, so I still haven’t tried to use it. Now, though, I am thinking about giving it a shot – since both lists recommend it.


Thursday, December 15, 2005

Wikipedia trouble

On Monday O’Reilly Radar reported on a website that is orchestrating preparations for a class action lawsuit against Wikipedia (http://www.wikipediaclassaction.org/). The basic claim is that the information posted on Wikipedia is inaccurate and defames some people.

The story behind this case is a long one. A summary of events (from Wikipedia’s point of view) can be found here. It seems – at least after reading Wikipedia’s version – that the whole “class action suit” is just an attempt at retaliation by people whose dubious business practices were accidentally uncovered by Wikipedia members. The case itself – if it ever reaches court – might set a precedent, particularly important as we enter the world of Web 2.0 and user-generated content.

But there is another aspect of this case that probably interests me even more: the question of Wikipedia’s accuracy in general. The next post on O’Reilly Radar is dedicated to this question; it talks about an experiment carried out by the journal Nature. They ran 42 science articles from Britannica and from Wikipedia past a team of experts. It turned out that Britannica has on average about 3 inaccuracies per article, and Wikipedia about 4.

Though this may seem like an impressive achievement for a team of voluntary, unpaid editors, I am still not convinced of Wikipedia’s quality. The problem that concerns me lies in the area of “common misconceptions”. Since the editing process is open to everybody, I am afraid that real facts that contradict some popular belief will be edited out in favor of incorrect but widespread opinions.

Which, in turn, brings up a discussion of the changes the Internet is bringing to the process of acquiring knowledge. Not so long ago, the answer to the question “Where do I get knowledge about X?” was “Go read a book” or “Go ask an expert”. Now the answer is “Ask Google” or “Check Wikipedia”. The knowledge of experts is slowly being replaced by the knowledge of the crowd. The consequences of this process for our lives and our society may be quite deep and, I am afraid, quite negative.


Monday, December 12, 2005

Humane Interfaces

Martin Fowler recently published on his “bliki” (a hybrid between a blog and a wiki) a very interesting article about two different approaches to API design: “Humane Interface” and “Minimal Interface”. The article spurred quite a discussion – links to the various follow-ups are at the end of the original post. A summary of the debate can be found here.

Personally, I lean towards Humane Interfaces. No doubt it is harder to come up with just the right set of methods when you design such an API, and the maintenance is more difficult, but for the user it is a real blessing. It fits very well with my principle: everything should be possible to do, and common tasks should be simple to do. Basically, this is the same idea I was talking about when writing about the Java date and time classes.
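
To make the difference concrete, here is a minimal sketch in Java (my own hypothetical RichList wrapper – not an example from Fowler’s article) contrasting the “minimal” style of java.util.List with a more “humane” one:

import java.util.List;

// Hypothetical "humane" wrapper around java.util.List.
// With the minimal interface the caller spells out the common case every time:
//     Object last = list.get(list.size() - 1);
// The humane version gives that common case a readable method of its own.
public class RichList {

    private final List items;

    public RichList(List items) {
        this.items = items;
    }

    public Object first() {
        return items.isEmpty() ? null : items.get(0);
    }

    public Object last() {
        return items.isEmpty() ? null : items.get(items.size() - 1);
    }
}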

According to Fowler, the term “Humane Interface” is very popular among Rubyists – one more reason to learn Ruby! I have more than enough reasons to do it now – the only remaining question is where to find the time…

Wednesday, December 07, 2005

Jakob Nielsen: AJAX sucks. Is he right?

Jakob Nielsen has written an article called “Why Ajax Sucks”. He is definitely one of the grumpiest tech guys in the world, but, as usual, I have to agree with most of his statements. At the same time, he completely misses the point. He is so concerned with the usability issues that he can’t see the forest for the trees.

The problem is not Ajax – or frames, or Flash… The problem is the Web itself. It has evolved from the hypertext model (the original design, which had the page as the main unit – see the beginning of Nielsen’s article) into the application platform model. The paradigm has changed, but the tools have remained mostly the same, and that is the source of most of the usability problems that Nielsen attributes to Ajax.

The tools we use to work with the web are obsolete and outdated, because they still work in terms of pages. But most page attributes make little to no sense in the context of an application:

  • URLs don’t work for applications: either the application simply cannot restore its state to the one “specified” by the URL, or it can – but the URL becomes such a monstrosity that we now have specialized online services that replace unwieldy long URLs with a short alias. The same goes for bookmarks.

  • The “Back” button, that bane of web application designers, also has no place in the application world (and neither do “Forward” and “Reload”).

  • Search engines cannot work with applications correctly.

So the right – and only – way to go is not to try to force the new paradigm into the Procrustean bed of old tools and metaphors, but to adapt the tools to the new world.

Here is what, in my opinion, should be done:
1. There should be a way for the client to tell an old-fashioned web site from an application. Of course it will be the site designer’s responsibility to mark it correctly – but the server should somehow tell the browser (maybe with a new header?): “You are entering an application – switch to app mode”.
2. URLs as we know them should be replaced by a more generic object that can hold more data in a better format. This object – let’s call it a “neolink” – should be used to tell the browser where to go and provide enough data for the application to restore the required state (if possible). Under the hood a neolink may be just an XML file. Bookmark managers would hold neolinks instead of URLs; a neolink could be sent by mail as well. URLs would be used only to address the old web and to point browsers at applications’ entry points.
3. Browser buttons should be customizable and should interact with the application. The easiest way: the button should simply call a JavaScript handler.
4. A special protocol should be designed to define the interaction between an application and a search engine. Maybe the application could provide special pages for search engines, holding the new information and links, or provide a “search engine” login.

If – or, rather, when – these changes happen, the usability problems described by Jakob Nielsen will simply disappear. Of course, some new problems will arise – but we will learn about them only from Jakob Nielsen’s future articles.

Tuesday, December 06, 2005

Trick: Getting the Name of Enclosing Form (JS)

Here is a small JavaScript function that returns the name of the enclosing form for an element. I found it quite useful while refactoring and supporting an old web application: some legacy JS code required a form name to access input fields, so I created this snippet to be able to use the old code in a more generic way.


function getEnclosingFormName(field_name) {
    // Find the element, then walk up the DOM until we reach its FORM ancestor.
    var obj = document.getElementById(field_name);
    if (obj == null) {
        return '';
    }
    var node = obj.parentNode;
    while (node != null && node.tagName != 'FORM' && node.tagName != 'HTML') {
        node = node.parentNode;
    }
    // No enclosing form found (we ran off the tree or hit the HTML root).
    if (node == null || node.tagName != 'FORM') {
        return '';
    }
    return node.name;
}

Monday, December 05, 2005

Knowledge coordination

From time to time I find myself looking for an answer to what may be called a “knowledge coordination question”. For example:
  • Does my company have a vector drawing tool?

  • Do we have a Java library which implements the SFTP protocol?

  • Has anyone already implemented a JSP tag that generates a date picker tied to our database?
Getting answers to these questions can be very tricky – that is, there are people in the company who know the answers (“Yes, we have a copy of Adobe Illustrator”), but finding them can be a royal pain. Quite often people just make a couple of feeble attempts to find the right “expert”, and then make their own decision – buy another tool, get a new library, implement the tag themselves (this one was my sin!). The results are obvious: a waste of money, a waste of time and a whole zoo of libraries, tools and components in the company – a nightmare for support.

It may sound strange, but most companies I’ve worked at didn’t address this problem in any way. So the following are just my thoughts – or rather my plan, since I am going to try to implement this solution.

A good solution to this problem is a centralized repository for different types of knowledge. A wiki seems to be a good platform for implementing a “company-wide knowledge bank”. It is easy to create a separate page for each knowledge area, the most important being:
  • Existing software – to catalog the tools which are currently used by the company;

  • Libraries – to catalog currently used libraries;

  • Reusable components – to track components developed in-house that are intended for reuse.
Filling this database initially is an enormous endeavor; however, if everyone just lists his or her own assets, the task becomes much less intimidating. An incentive from management can be a great help. Something like a monthly “Best knowledge base contributor” award might motivate people (especially if it comes with some prize) – but selling ideas to management is, unfortunately, not what I’m good at…

Monday, November 28, 2005

80 percent by convention

In an interview published recently by eWeek, David Heinemeier Hansson (creator of Ruby on Rails) made a statement that wonderfully summarizes my ideas on the way APIs, libraries and frameworks should be designed. It can also be considered an addition to my previous post – I was trying to say the same thing but was unable to put it so elegantly:

“As long as you do what most people want to do most of the time, you get a free ride. No configuration necessary. So get the 80 percent by convention, tailor the last 20 percent by hand.”


I should definitely try Ruby on Rails: it is always a great pleasure to use a tool when you and the tool’s designer share the same ideas.
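
Just to make the principle concrete, here is a tiny hypothetical Java sketch (nothing to do with Rails itself, and the TableNameResolver name is my own invention): the table name is derived by convention, and only the unusual case needs explicit configuration.

// Hypothetical sketch of "convention with an escape hatch".
public class TableNameResolver {

    // The hand-tailored 20 percent: an explicit name wins if one is given.
    // The 80 percent by convention: otherwise derive the name from the class.
    public String tableNameFor(Class entityClass, String explicitName) {
        if (explicitName != null) {
            return explicitName;
        }
        return entityClass.getSimpleName().toLowerCase() + "s";
    }

    public static void main(String[] args) {
        TableNameResolver resolver = new TableNameResolver();
        System.out.println(resolver.tableNameFor(String.class, null));     // "strings"
        System.out.println(resolver.tableNameFor(String.class, "legacy")); // "legacy"
    }
}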

Tuesday, November 15, 2005

Lesson of Java date and time classes

It so happened that I hadn’t used the Java date and time classes for a long time. I didn’t like them, and I was lucky enough not to need them either. Well, sooner or later it had to happen – today I needed to do some date arithmetic. I opened the Javadoc and spent the next half hour trying to understand how these classes should be used.

I succeeded, and the result was not as ugly as I expected. The question is, why are these classes so counterintuitive and overcomplicated? I had to use three different classes to do a relatively simple thing: the Date class to store the date in milliseconds, the Calendar class to perform the arithmetic, and the SimpleDateFormat class to print the date and parse string input.

There are many classes, and they are complex – yet some of the most mundane tasks are still not easy to do. How difficult is it for us to answer the question “What date is next Saturday?” Easy, right (if we have a calendar at hand, of course)? Then try doing it with the Java classes. Yes, it is possible – and not that difficult – but it is still a much more complex process than it should be.

The lesson here is simple: create a façade for complex classes so that simple, common tasks can be done easily – and for more challenging things (what if I need to print a Hebrew calendar in Klingon?) the more powerful tools are still there.
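
For illustration, here is a minimal sketch of such a façade for the “next Saturday” question, built on the standard Calendar and SimpleDateFormat classes (the DateFacade name is, of course, my own invention):

import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;

public class DateFacade {

    // Returns the date of the next Saturday, strictly after today.
    public static Date nextSaturday() {
        Calendar cal = Calendar.getInstance();
        int daysToAdd = (Calendar.SATURDAY - cal.get(Calendar.DAY_OF_WEEK) + 7) % 7;
        if (daysToAdd == 0) {
            daysToAdd = 7; // today is Saturday - "next" means a week from now
        }
        cal.add(Calendar.DAY_OF_MONTH, daysToAdd);
        return cal.getTime();
    }

    public static void main(String[] args) {
        // For the caller, the common task becomes a one-liner.
        SimpleDateFormat fmt = new SimpleDateFormat("EEEE, MMMM d, yyyy");
        System.out.println("Next Saturday is " + fmt.format(nextSaturday()));
    }
}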

Friday, November 11, 2005

Sony's DRM malware and Well-Mannered Software

Today I stumbled upon a very interesting article called “Sony, Rootkits and Digital Rights Management Gone Too Far”. The story is simple: the DRM software that comes on a music CD produced by Sony behaves exactly like malware: it silently installs itself on the PC, hides itself from the user, intercepts system calls, monitors running processes and provides no way to uninstall it. I do not want to discuss here how methods like this are supposed to attract users to buy legitimate CDs – though I personally will never buy a CD with that kind of protection.

I want to discuss something else – the concept of “well-mannered software”. The idea came to mind recently while I was installing several programs on my PC. All of a sudden my firewall told me that the installer wanted to connect to the Internet. There was no reason for the installer to do that – it had never mentioned “looking for updates” or “downloading additional components” – so I blocked the connection. Surprise, surprise: the installer didn’t complain; it just quietly completed the installation with no error messages. Well, actually, it also quietly attempted to make something run on startup, which was detected by SpyBot and likewise forbidden by me. The same happened with the other programs I installed – I just had to sit, watch the installers and keep slapping their hands when they tried to do something they were not supposed to do.

This seems to be the current state of things: software considers itself smarter than the user and doesn’t bother to tell the user about its actions. If a computer can be thought of as a house for software, then the user is the landlord and the applications are the tenants. Right now the tenants run the house, and the landlord is pushed aside. This makes some people angry, some people miserable, and it definitely makes all of us less secure.

So here I suggest a new trend, a new direction – “Well-Mannered Software” (WMS).
WMS should follow just one simple rule:
WMS should not do anything that is not necessary for its normal functioning without explaining the action to the user and getting the user’s permission.

An installer should not go online without telling the user “I am going to check for updates – will connect to the site blah.com. Is it OK?”

An image viewer should not add an item to your startup sequence without asking “I will install color manager to run on startup. OK?”

And a music CD should not install a poorly written piece of malware on your computer without…. No, it just shouldn’t do it at all.

Tuesday, October 11, 2005

Web 2.0 apps - too similar to each other

I am really amazed at the number of Web 2.0 sites (or should I say applications?) that appear daily. There is a great site (eHub) that hosts a constantly updated list of almost everything related to Web 2.0 – and almost every day new resources are added. While I am amazed at the execution and design of the majority of these sites, I am at the same time surprised by their similarity. It seems the developers are just remixing a few ideas, hoping that somehow they will eventually stumble upon the winning combination: “Friendster meets epinions”, “a mashup of del.icio.us, flickr and Yahoo news”, “sex offender data with Google Maps”… Every idea by itself sounds exciting and promising, but taken together they give the impression of the same pattern being repeated over and over again.
I do not see this as a fundamental problem with the Web 2.0 concept – I just think that for too many developers it is still just an opportunity to play with new toys and create a cool-looking, modern web application.

Thursday, October 06, 2005

HBO poisons BitTorrent

O’Reilly Radar writes about HBO poisoning BitTorrent downloads of the show “Rome”. It sounds to me like a near-criminal way of stopping an “officially criminal” activity, and thus is really disturbing. From a formal point of view, HBO probably does not violate any law, and the BitTorrent downloaders are definitely pirates. But from a common-sense position, I cannot wholeheartedly call the BitTorrent users criminals. If I just watch the show, I’m not a criminal. If I record the show on my DVR, I am also not a criminal. And if I remember that they taught me at school that “sharing is caring” and share the recording with a couple of my friends – do I become a criminal then? Where is the thin line that separates caring people from pirates?

And looking at HBO’s actions, I would say they are almost indistinguishable from hacking into a network and disrupting a data transfer, which is a crime. They would do better to spend their resources on developing a new distribution and business model – that, and not the poisoning, might actually reduce piracy of their shows.

Monday, October 03, 2005

Trick: "Report a problem" button

Today, I think, is a good time to stop being grumpy for a while. I want to share a trick I found very useful when I was working on a small application.

So here is the problem: there is a small application that works like a “wizard” – i.e. the users fill in some data on several pages, and at the end the application does something quite complex with the data (writes it to the database, creates some content – it doesn’t matter). The requirements are poorly specified, the business logic changes often, and there are not enough QA resources to test the application thoroughly. As a result, the application sometimes behaves strangely – sometimes because of a bug, sometimes as a result of user actions. The question is: how do you troubleshoot it? The users being mostly non-technical people, it became a tough problem for me to find out what exactly caused the application to misbehave. I was getting several calls per week, and quite often I had to spend a lot of time trying to re-create the bug and interrogating the user about what exactly she entered on each page before the problem became evident.

Then I came up with a solution. At the top of each page I placed a “Report a problem” button. When clicked, the button opened a pop-up with a textbox for a description of the problem and a “Send report” button. The users were instructed that in case of trouble they should click the button, type in a description of the problem and submit the form. Behind the scenes this small pop-up did a very useful job: it dumped all the relevant data (session variables, URL, request data etc.) and appended it to the problem description. I also added a small piece of code to every page that recorded the visit in a session variable, so I could get a trace of which pages were visited in the session and in which order. The message was then e-mailed to me.
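
The original application is long gone, but a minimal sketch of such a report handler could look like the servlet below (the ProblemReportServlet name and the sendReport() mailing helper are purely illustrative – the real code used the application’s own mail utilities):

import java.io.IOException;
import java.util.Enumeration;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

// Illustrative sketch only: collects the user's description plus the
// request and session state and hands everything to a mailing helper.
public class ProblemReportServlet extends HttpServlet {

    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        StringBuffer report = new StringBuffer();
        report.append("User description:\n")
              .append(req.getParameter("description")).append("\n\n");

        // Dump the request parameters.
        report.append("Request parameters:\n");
        for (Enumeration e = req.getParameterNames(); e.hasMoreElements();) {
            String name = (String) e.nextElement();
            report.append(name).append(" = ").append(req.getParameter(name)).append("\n");
        }

        // Dump the session attributes, including the page trace
        // that every page appends to a session variable.
        HttpSession session = req.getSession(false);
        if (session != null) {
            report.append("\nSession attributes:\n");
            for (Enumeration e = session.getAttributeNames(); e.hasMoreElements();) {
                String name = (String) e.nextElement();
                report.append(name).append(" = ").append(session.getAttribute(name)).append("\n");
            }
        }

        sendReport(report.toString()); // hypothetical helper that e-mails the report
        resp.getWriter().println("Thank you - the report has been sent.");
    }

    private void sendReport(String body) {
        // In the real application this would use JavaMail (or any mail utility)
        // to send the report to the developer's address.
    }
}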

This little hack, simple as it is, made troubleshooting a lot easier. The users were also happy – they liked the simple way to submit a bug report and the speed at which issues were resolved. Of course this solution probably won’t work as well for huge applications (too much data to dump), but for small-to-medium applications it is definitely worth trying.

Friday, September 30, 2005

A Server of Babel

Recently I’ve caught myself thinking more and more about the proliferation of languages in enterprise Java projects. If you look at a typical J2EE project you will notice that, besides the obvious Java and JSP, the project contains a huge amount of code written in various “configuration” languages: tag library descriptors, WSDL, XSL, Hibernate mappings… All these languages are XML-based, but each is still a language in its own right, with its own syntax and semantics. It seems to me there is a tendency here: the frameworks (platforms, technologies…) become more and more generic, and more and more of the actual logic moves from Java code into “configuration” files. Initially, maybe, that was not a bad idea – but by now it has turned J2EE into a monstrosity. The source files in those languages are hard to read, they are extremely error-prone, and, since they do not require compiling, problems quite often go undetected until run-time. Compare that to how Java sources are handled and you will see the difference immediately.

Personally, I am getting tired of this “server of Babel” situation. I feel I am ready to look for some other platform that is simpler and more elegant than modern J2EE. And – judging by the speed at which Ruby on Rails is gaining popularity – I am not the only one ready to move.

Tuesday, September 20, 2005

Do people need privacy?

Recently the company I work for encountered a problem with one of our web sites: for some reason cookies were being rejected by client browsers. After a short investigation we found the cause: an incorrectly configured P3P policy on one of our servers. The problem was quickly fixed – but somehow I keep thinking about this small incident.

The topic of online privacy is a hot one – and has been for quite some time. Internet users – and I’m no exception – were worried about the data that different sites collected about them. Several utilities were created over the years to address these worries (at one point I myself installed a program called IDCide to block tracking cookies). And I’ve read a multitude of articles in various online and offline magazines, all talking about the need to protect the innocent Internet user against data collection and privacy violations. It seemed like everybody wanted something to be done.

And then the issue was addressed by the W3C, and the Platform for Privacy Preferences (a.k.a. P3P) was created. Now almost all major sites and the major browsers support P3P. A site declares its policy on data collection and retention, and the browser can allow or deny certain actions based on how your preferences match the site’s profile. Terrific!
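
For the mechanically curious: in our case the fix amounted to sending the P3P “compact policy” header on responses that set cookies. A minimal sketch as a servlet filter might look like this (the P3PHeaderFilter name and the policy tokens are illustrative only – the real tokens must match the site’s actual, published privacy policy):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Adds a P3P compact-policy header to every response, so that browsers
// that check P3P (e.g. IE6) do not silently reject the site's cookies.
public class P3PHeaderFilter implements Filter {

    public void init(FilterConfig config) {
    }

    public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
            throws IOException, ServletException {
        // "CAO PSA OUR" is just an example compact policy.
        ((HttpServletResponse) resp).addHeader("P3P", "CP=\"CAO PSA OUR\"");
        chain.doFilter(req, resp);
    }

    public void destroy() {
    }
}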

The only question is: how many people know about it? I’ve asked several of my friends and colleagues – most of them had never heard of it. Somehow P3P and everything related to it passed by the majority of the people who need it most. There is something unwholesome about this situation; it suggests that people do not really need privacy and security – they just need to talk about privacy and security.

Friday, September 16, 2005

AJAX - new technology or a neat trick?

AJAX took the Web by storm and quickly became the buzzword of the day. On almost every site that has anything to do with web development there are discussions of AJAX technology, tutorials on how to use AJAX technology, new libraries that support AJAX technology... But once you discard all the buzz around AJAX, you can easily see that there is nothing new in this "technology". There were several ways to call the server asynchronously, without reloading the page, long before AJAX appeared. One way was to use a hidden frame as a container for the results returned by the server - I used it 6 years ago. Another was to use an invisible IFRAME for the same purpose (I used that 3 years ago).

Yes, yes - both of these approaches are much less elegant than AJAX, and certain things are possible with AJAX that were impossible with the old tricks (for example, retrieving data from another site that supports REST). But still, the basic functionality was the same, and the famous example that started the AJAX craze - Google Suggest - could easily have been implemented with either of these methods.

So why didn't the old methods become that popular? One reason is, of course, the timing: AJAX appeared at just the right time and was popularized by Google itself. But there is another, perhaps more important reason. It is very simple: "AJAX" is a catchy word, and "technology" is an important-sounding word. "AJAX technology" - this combination was simply doomed to succeed.