Friday, April 13, 2007

Business rules: hard-coding or soft-coding

A couple of days ago “Worse Than Failure” published (instead of its usual daily IT horror story) a very interesting article dedicated to “Soft Coding”. The article discussed a problem known to anyone who has ever done business-related software design: dealing with business rules.

The problem with business rules is that they are almost impossible to generalize, often follow some odd logic, and are subject to frequent and unpredictable changes. Usually business rules involve some arbitrary values, which programmers are reluctant to hard-code. The article gives quite a good example of the situation, but let me give one of my own here.

Let’s suppose that one of the business rules is:

“If a user account is inactive for more than 180 days it should be deleted, except for the cases when the user is located in New York or New Jersey, or when the user is an employee of XYZ Corp.”

The most straightforward way to implement the rule is just to write a couple of lines of code:


if (account.getInactiveTime() > 180) {
    User user = account.getUser();
    // Note: in Java, strings must be compared with equals(), not with != .
    if (!"NY".equals(user.getState())
            && !"NJ".equals(user.getState())
            && !"XYZ".equals(user.getCompany())) {
        account.delete();
    }
}

The classic approach tells us that this code is bad: it has hard-coded values, which means that changing one of those values will require a code change. The obvious solution is to move the values somewhere outside the source code, for example to a configuration file. But the logic of the rule itself is also subject to unexpected changes, so it seems natural that the logic should somehow be generalized and the concrete details should also be moved out. Unfortunately, this almost always leads to the creation of monstrous home-brew scripting languages, which turn the maintenance of such projects into a nightmare.
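As a sketch of the externalized-values approach, here is roughly how the arbitrary values from the rule above could live in a configuration file instead of the source. (This is only an illustration with hypothetical names – the property keys, the `RuleConfig` class, and the defaults are all my own invention, not anything from the article.)

```java
import java.util.Arrays;
import java.util.List;
import java.util.Properties;

// Hypothetical sketch: the rule's values come from a properties file, e.g.
//   inactivity.days=180
//   exempt.states=NY,NJ
//   exempt.companies=XYZ
public class RuleConfig {
    private final int inactivityDays;
    private final List<String> exemptStates;
    private final List<String> exemptCompanies;

    public RuleConfig(Properties props) {
        this.inactivityDays =
                Integer.parseInt(props.getProperty("inactivity.days", "180"));
        this.exemptStates =
                Arrays.asList(props.getProperty("exempt.states", "").split(","));
        this.exemptCompanies =
                Arrays.asList(props.getProperty("exempt.companies", "").split(","));
    }

    // The rule's logic stays in code; only the arbitrary values are external.
    public boolean shouldDelete(int inactiveDays, String state, String company) {
        return inactiveDays > inactivityDays
                && !exemptStates.contains(state)
                && !exemptCompanies.contains(company);
    }
}
```

When a new state is added to the exemption list, only the properties file changes; no rebuild or redeployment of code is needed.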

The article suggests that it’s much better to just leave the logic in the code: this way it’s easy to read and it’s implemented in a well-known programming language.

In my opinion, both approaches are equally good – or equally bad, depending upon the circumstances.

Having business logic in the code has some very serious disadvantages. Yes, the build process is no longer as expensive as it used to be; however, the necessity to change code whenever a business rule changes is still unpleasant:

  • Builds themselves are cheap now; however, deployment might be quite expensive. If you need to re-deploy an application to one or two servers, it’s easy; however, if the application runs in a complex environment with multiple server groups and clusters, that is quite a different story. And if the application is actually deployed on users’ desktops…

  • Changing code might cause ripple effects and requires regression testing – which can also be expensive.

  • Frequent minor code changes under time pressure usually cause code quality to deteriorate.

  • Last, but not least: with this approach, the developers become forever responsible for implementing rule changes.



In my experience, the right solution to the problem lies (as usual) somewhere between those two approaches. There is no silver bullet, and each group of business rules (and, sometimes, even each rule) has to be addressed separately. Here are the principles I try to follow when dealing with applications that involve business rules.


  1. Try to identify as many business rules as possible in the project you are working on. The purpose is to understand which part of the requirements (or the implementation) deals with core business logic, and which deals with arbitrary business rules.

  2. Estimate which elements of the business rules are going to change, and how often. It’s never possible to get an absolutely precise answer to this question; however, surprisingly often one can get a good estimate just by asking for it: “The states in this rule change constantly – last year we had 4 states, then two months ago we added two more, and a week ago a new regulation came out…” or “XYZ was always a special case – it’s our largest partner”.

  3. Frequently changing values should go into an external location (a file, a database…).

  4. Rarely changing values might also go there, or can be implemented as constants – whichever makes the code easier to maintain. Just do not leave them as “magic numbers”!

  5. The business rules logic should be moved to separate classes and, possibly, to separate modules. There are many ways to achieve this. The Observer pattern can be used when the rules are to be triggered by events; the Decorator and Strategy patterns are also helpful here. Another possible approach is aspect-oriented programming, moving some business rules into aspects. It might also be a good idea to implement certain groups of business rules as plugins and have the core system discover them automatically. (I did something similar in one of my projects, and it worked pretty well.) The basic idea behind all of this is to minimize ripple effects and make each required change as small as possible.

  6. If the business rules logic itself is subject to frequent changes, the situation becomes more serious. The design approaches suggested above will definitely alleviate the pain somewhat, but in general this type of situation calls for more drastic measures. Usually this involves adding some sort of scripting support and giving users the ability to write simple scripts and script snippets. One piece of advice for those who go down this path: do not invent a new scripting language – first try to use an existing one. It’s also a good idea to provide a UI which helps with writing snippets and putting them in the right place. Adding scripting support is definitely quite an effort, so before doing this it always makes sense to look outside of the project: sometimes frequent changes to the business rules logic can be prevented by fixing some business processes, or by working with users and stakeholders. It may sound unrealistic, but there are occasions when just explaining that the requested changes are not as minor as the requestor thinks, and do not come free of charge, helps significantly reduce the frequency of change requests.

  7. While following these principles, try to keep the system from turning into a chaos of different techniques. Group similar or related business rules together, and for each group use the technique needed for the rules with maximal volatility. For example, if you have 10 similar rules, and two of them change frequently enough to justify an external configuration file, use the file for all 10 rules. Just make sure your grouping is fine-grained enough.
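To illustrate point 5 above, here is a minimal Strategy-style sketch of isolating the deletion rule from the earlier example behind its own interface, so that a rule change touches only one small class. (All the names – `AccountRule`, `InactivityRule`, the `Account` stub – are hypothetical; this is one possible shape, not a prescribed design.)

```java
// Hypothetical sketch of point 5: each business rule lives in its own
// small class behind a common interface, so changes stay localized.
interface AccountRule {
    boolean shouldDelete(Account account);
}

class InactivityRule implements AccountRule {
    public boolean shouldDelete(Account account) {
        if (account.getInactiveTime() <= 180) {
            return false;
        }
        String state = account.getState();
        // Exemptions: NY/NJ users and XYZ employees are never deleted.
        if ("NY".equals(state) || "NJ".equals(state)) {
            return false;
        }
        return !"XYZ".equals(account.getCompany());
    }
}

// Minimal Account stub so the sketch is self-contained.
class Account {
    private final int inactiveTime;
    private final String state;
    private final String company;

    Account(int inactiveTime, String state, String company) {
        this.inactiveTime = inactiveTime;
        this.state = state;
        this.company = company;
    }

    int getInactiveTime() { return inactiveTime; }
    String getState() { return state; }
    String getCompany() { return company; }
}
```

The core system only ever calls `AccountRule.shouldDelete(...)`; when the exemption logic changes, the edit (and the regression testing) is confined to `InactivityRule`, and new rules can be added as new implementations of the same interface.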




Friday, March 23, 2007

Use one editor? I prefer three...

Recently I finished reading "The Pragmatic Programmer". The book is amazing and probably deserves a separate article. In short, it contains the wisdom and experience of veteran programmers in condensed and purified form. I wholeheartedly agree with most of the suggestions and advice the book gives; however, there are several topics on which I disagree with the authors. One such topic is the use of text editors.

The authors give the following advice:
"We think it is better to know one editor very well, and use it for all editing tasks: code, documentation, memos, system administration and so on."

In my daily work, however, I discovered that I get maximal productivity when I use not one, but three editors. Here is my setup.

1. IDE. I know some people who claim that they don’t need an IDE, and that they can achieve the same result using their favorite editor (EMACS, vi or some other). I think that those developers either never got their hands on a really good IDE, or have never done a reasonably complex project. Besides simple – though also convenient – features like syntax highlighting, integrated build support, project support, etc., a good IDE can provide much more complex language-oriented features. For example, my IDE of choice currently is Eclipse (I’m doing mostly Java now), and it provides me with such incredibly useful features as automated refactoring, language-oriented search, class hierarchy navigation, code generation helpers and many others. And for aspect-oriented programming, having an IDE with support for aspects (I use the AJDT plugin for Eclipse) is vital. In my opinion, trying to program using AOP without IDE support is pure suicide: how would you find out that the line of code you are about to change is, in fact, augmented by three aspects, and that your change will have unintended side effects?

2. Programmer’s editor. Besides writing Java code, I often need to work with other files which do not belong to my current project. I have to analyze logs, edit data files, write short scripts in other languages… For these tasks I use one of the so-called "programmer’s editors". These are complex and powerful text editors. They usually offer syntax support for multiple languages (usually meaning "syntax highlighting"), integrated FTP and, sometimes, version control support, file comparison, hex editing, built-in powerful macro languages and many other useful features. The reason why I use a separate editor for those tasks instead of the IDE is that (a) I don’t want to pollute my IDE workspace with unrelated files – I want to have only program-related stuff there, and (b) I don’t want to start the IDE every time I need to edit a file. For some time I was a big fan of Multi-Edit; then I switched to UltraEdit and Crimson Editor. (The latter is less feature-rich than UltraEdit, but is free.)
3. Notepad replacement. The third editor in my setup would be Notepad, if it weren’t so crippled. The idea behind having this third editor type is that I want something extremely lightweight and fast, so I can instantly do some simple editing on any file. I don’t have a favorite here. Any replacement will do, as long as it supports arbitrarily large files, handles both Windows and Unix line endings, and has decent search/replace capabilities.

This is my setup. All comments and suggestions are more than welcome.


Thursday, February 01, 2007

Joke becomes true...

Quite some time ago I heard a computer joke:

Bill Gates was demonstrating his latest speech-recognition software. He was just about ready to start the demonstration and asked everyone in the room to quiet down.

Just then someone in the back of the room yelled, "Format C: Return."

Someone else chimed in: "Yes, Return!"

Unfortunately, the software worked.


I thought it was hilarious, though highly improbable. Certainly anyone designing this kind of system would introduce some sort of protection against these kinds of “accidents”, right?...

Well, today I’ve read the following:

Vista can respond to vocal commands and concern has been raised about malicious audio on websites or sent via e-mail.
In one scenario outlined by users an MP3 file of voice instructions was used to tell the PC to delete documents.
Microsoft said the exploit was "technically possible" but there was no need to worry.


The full text of the article is here, and here is a response posted on The Microsoft Security Response Center Blog.

Now I am trying to recall other computer jokes – I have to know what to be prepared for, after all…


Friday, January 12, 2007

Vista and downloadable games

A couple of days ago Gamasutra published quite an interesting article by Alex St. John, founder and CEO of WildTangent. In the article (called “Vista Casts a Pall on PC Gaming”), he describes serious problems which Vista will present to independent game developers (and casual game developers in general).

Two main problem areas outlined by Alex are program installation and parental control.

Installation. According to Alex, the enhanced security system of Vista might require users to enter an administrative login and password every time they try to download and install a game. This might sharply reduce the number of installs (and, therefore, purchases), since people might just get tired and frustrated by all the hoops they have to jump through in order to just try out a game – and, therefore, try fewer games.

Parental Control. It turns out that Vista has something called Game Explorer – a place where games are registered, which allows parents to define the allowed ESRB rating level for the games their kids play, and which blocks attempts to start registered games from outside of Game Explorer. The problem here – again, according to Alex – is that since the ESRB grading process is expensive, most small and indie developers cannot afford it, which leaves their games “Not Rated”. Since from the protection standpoint all “Not Rated” material is unsafe, most parents will probably block it, thus locking out all small developers.

I haven’t installed Vista yet (and am not going to, until I have no other choice!), so I cannot validate Alex’s statements. But assuming he is right, this might indeed have very unpleasant consequences for game developers. I have no doubt that it will be possible to turn off all these over-protective features, or to circumvent them. The problem, however, is that the target audience for most casual games is not made up of tech-savvy people; most of them will probably run Vista with default settings.

An interesting fact is that the parental control system does not apply to web games. So, if downloadable games lose popularity, web games might gain it – and that, in turn, might lead to some quite interesting market shifts.


Wednesday, January 10, 2007

iPhone craze

It seems like everyone suddenly went crazy over the iPhone. The new gadget is being discussed in a multitude of blogs, newspapers publish articles on it, and Apple’s stock price has skyrocketed over the past two days. My coworkers show each other web pages with photos of the new device…

Well, I knew for quite some time that, when it comes to marketing, no one can beat the Apple guys. They are geniuses. And I am sure that the craze over this new gizmo will just increase over time, and, most probably, it will become one of the most wanted and hyped devices of this year.

But, frankly, I don’t understand what’s so great or special about this new thingy. Let’s cool down a little bit and look at the device more attentively. Yes, there are many nice touches: stylish design, a more or less decent amount of on-board storage (up to 8 Gigs), camera, Wi-Fi, Bluetooth, GPS – everything is included. You can take pictures, surf the web, play music and movies, maybe even play games. There are interesting new features, such as:

  • multi-touch UI;

  • different built-in sensors which, for example, detect when the phone is rotated and automatically switch between portrait and landscape mode (though I assume that might sometimes be annoying);

  • visual voicemail – a list of voice messages (I applaud Apple for this one!);

  • integration with Google Maps.


But there are also quite many drawbacks:

  • Operating a touchscreen with fingers means having grease, scratches and fingerprints all over it. A reporter from the NY Times states that “You still get finger streaks, but they’re relatively subtle and a quick wipe on your sleeve takes care of them”. The reporter was playing with the phone in an office, with clean hands. I hate to think what will happen to the screen on a hot and humid day.

  • The same reporter admits that “Typing is difficult. The letter keys are just pictures on the glass screen, so of course there’s no tactile feedback.” The difficulty is somewhat relieved by some ultra-smart bundled software – but, still, it’s not the same as having a real keyboard.

  • Speaking of software – according to Engadget, the phone allows first-party software only. In my view, that diminishes the appeal of the phone tenfold.

  • No removable battery

  • No expandable memory

  • No Exchange support

  • And a hefty price tag! 600 dollars for a phone (as far as I understand, with a 2-year contract) – isn’t that too much?


And, except for visual voicemail, there are no real phone innovations in this product! (Though this seems to be a problem of the mobile phone industry in general – all the new features have nothing to do with telephony.) Blacklisting and whitelisting of callers, scheduling of notification sound types (automatically switching to vibration only at night) – these and similar features existed in crude Russian Caller ID phones in the mid-1990s, yet none of them is present in the ultra-modern devices.

I will not rush for the iPhone. No doubt it will have an overwhelming success – but not with me.


Tuesday, January 02, 2007

Happy New Year!

(Yes, I know it's a little bit late - but better late than never, right?)

Happy New Year to all who read my blog! All the best wishes to you and your families.

One of my New Years resolutions is to blog more often - and I do hope I will be able to carry out this one.

Enjoy the life - and stay tuned!

Monday, December 11, 2006

Yet another idiot - and some other stuff...

First - after almost three weeks of silence, I'm back. I was pretty busy, and not only had no time to post anything interesting - I also had no time even to read comments. When I at last got a couple of minutes yesterday to check what's going on, I discovered that my blog had been infested with spam comments. Frankly, I don't understand the purpose of spamming blogs with totally unrelated comments - most probably they will be deleted almost immediately, and will only annoy people. (One of the comments I deleted was from "a 15 y.o. Sandra from Arabia" who is learning English and wants to talk to boys - I have no clue what the idea of this was, since the comment had no other information.) Anyway, I tried to delete as much of this garbage as possible, and I've turned CAPTCHA on - sorry for the inconvenience, but I need no more spam, thank you very much.

Now, about an idiot. Here's a funny story: a blogger received an e-mail from somebody with quite a request:

I have been running the site for over two years and we have been ranked very highly for the search term [edited].

On Thursday morning I checked our google positions and your site is now above us for this term. I haev checked your blog and it has nothing to do with [edited], so I think it would be best all round if you remove your blog from google for this search term.




You can read the rest of the e-mail here, and a follow-up here.

Well, of course we have an exemplary case of an idiot here. Funny, amusing - but not too interesting. What is interesting, though, is the whole situation with businesses (both large and small) striving for first place in Google searches. It's obvious that first place is extremely valuable in business terms. However, this precious commodity is not for sale (which is good!), and is granted by Google's system based on some mysterious factors. For example, as of today, if you search Google for "buy computers", first place is occupied by BestBuy. It is followed by buy.com - and CompUSA is in tenth place, last on the first page. Google in this case becomes something like a blind force of nature - powerful and unpredictable. I am not saying that this is bad - or good - I just find it curious and thought-provoking...


Tuesday, November 21, 2006

PS3 and Wii - first impressions not that euphoric...

So, at last, it has happened! Both the PS3 and the Wii have been released in the US. It was fun to read about the extremes some people would go to just to get a box on the first day of sales. However, according to the multitude of articles and blog posts published over the last couple of days, the first impressions of these two next-gen consoles are not all euphoric. There are bugs, problems with the new Wii controllers (some people find them poorly suited for games, while others claim that the motion-sensitive Wii controller broke off during play and cracked their TV screen (!)), and some incompatibility issues.

However, for me the most interesting piece was the NY Times article called “A Weekend Full of Quality Time With PlayStation 3”. The author is disappointed in the PS3’s usability, and summarizes his feelings:

And so it is a bit of a shock to realize that on the video game front Microsoft and Sony are moving in exactly the opposite directions one might expect given their roots. Microsoft, the prototypical PC company, has made the Xbox 360 into a powerful but intuitive, welcoming, people-friendly system. Sony’s PlayStation 3, on the other hand, often feels like a brawny but somewhat recalcitrant specialized computer. (Sony is even telling users to wait for future software patches to fix some of the PS3’s deficiencies.) The thing is, if people want to use a computer, they’ll use a computer.


It goes surprisingly well with my own thoughts.


Friday, November 17, 2006

Web 3.0

Here is a new buzzword: Web 3.0!

Well, the word itself is probably not that new – it seems it has been in use for quite some time, but almost always to describe just “something beyond Web 2.0”. However, an article published recently in the NY Times caught some attention. The article, written by John Markoff, basically puts an equals sign between this new buzzword and something called the “semantic Web”. The idea of the semantic web is simple but powerful: to make data stored on the WWW not only human-readable but also machine-readable; to enhance the markup so that automated processors will be able to “understand” the meaning of each piece of data and its relation to other pieces. It will thus become possible to do many exciting things with the data found on the web: to analyze and aggregate data from multiple unrelated sources, and to do extensive data mining.

Here are several more links to some quite interesting texts about semantic web:

“Minding The Planet -- The Meaning and Future of the Semantic Web” and a follow-up to Markoff’s article, “What is the Semantic Web, Actually?”, written by Nova Spivack, founder of Radar Networks, one of the few companies working on semantic web technologies.

So, should we say goodbye to Web 2.0 and switch to Web 3.0? Obviously not! The two concepts are quite orthogonal, so the name “Web 3.0” is probably as misleading as it gets. (It’s funny to try to search Wikipedia for “Web 3.0” – the article has been removed, because there is still no consensus about what “Web 3.0” is.)

Personally, I am quite happy about the development of the semantic web. New tools will mean more capabilities for Internet users; and a new paradigm will mean more work for programmers – clearly, a win-win situation for me!


Friday, November 03, 2006

More on interactive storytelling: Ernest Adams

While writing my previous post I totally forgot to mention an extremely interesting talk on interactive storytelling presented at GDC 2006 by Ernest Adams. Unfortunately, my notes on the lecture – which, by the way, was called “A New Vision for Interactive Stories” – are very brief, and I couldn’t find the full text of his speech on the web. (On his own site Ernest has the full text of his previous presentation on the same topic – but just a short paragraph about the latest one.) However, here you can read a pretty good summary of Ernest’s speech. It’s interesting to compare his ideas with those of Chris Crawford – similar and yet different at the same time (at least, judging by what I’ve read at the Storytron site).


Wednesday, November 01, 2006

Chris Crawford on Interactive Storytelling and Storytron

In September Dr. Dobb's Journal published a very interesting interview with Chris Crawford. (I discovered this interview just yesterday.) Chris Crawford, a prominent game designer and writer, talks about interactive storytelling. He shares his views on game design in general, but the bulk of the article is dedicated to his new brainchild: an interactive storytelling technology called Storytronics. I was excited when I found this conversation, for I am very interested in game design and interactive fiction. With discussions of how narrative in games should be designed being all over the place, I was anxious to hear what the famous game design guru would disclose.

Well, after reading the article I was somewhat disappointed. At the beginning of the conversation, Chris said that

The Sims is neither interactive storytelling nor a game. Will [Wright] considers himself a toy designer. It's the finest toy anybody ever developed, but it's not interactive storytelling.


But the more he told about his new system, the more I felt that he is actually building something very similar to “The Sims”. And, in the end, I thought that he plainly contradicts himself:

Basically, it's a social interaction simulator. In fact [it might be] better to think of it as a simulator, because the stories it generates are very different from conventional stories. They don't have plots.


Personally, I think that stories with no plot just aren’t stories. And Storytronics – at least as Chris described it – seems to be no different from “The Sims”. I was also surprised that Chris didn’t mention the whole genre of Interactive Fiction. Even on the page called “Different Approaches in the Quest for Interactive Storytelling” on his site he never mentions it – which is really strange, because IF is all about interactive storytelling, and can provide a humongous amount of useful information, experience and insights.

The Storytron site allows anyone to download a pre-alpha version of their software. I will definitely do so, because I respect Chris, and I don’t want to judge his ideas based on just one interview. As soon as I try his software, I will post my impressions.


Wednesday, October 18, 2006

EA goofs up with ads embedded in a game

The idea of embedding advertisements in software is not a new one. For quite some time it has been used by developers of shareware programs to help them get paid for their work while keeping the product "free" (at least, with no payments required from the user). Somehow, for a long time the idea was not introduced into the world of computer games; but recently the topic of "embedded ads" became a hot one. Many factors have made the idea of putting ads in vacant places in the game world a lucrative one: the growing time people spend playing games, the growth of the gaming population, the expanding demographics of players, the availability of internet connectivity… Interest in this topic is constantly growing, especially in the area of casual games. For example, at GDC 2006 WildTangent introduced its own platform for embedding advertisements, oriented toward downloadable games.

So, I’m not surprised that Electronic Arts decided to join the fun and released two games with built-in ads: Battlefield 2142 and Need for Speed: Carbon. But I’m still surprised at the total lack of market understanding which EA demonstrated with this launch. EA decided to get the best of both worlds: they charge the regular price for the game and make you watch their ads. It’s no wonder people are frustrated with this – usually it’s one or the other. I can pay for the game; I can also support the developer of a free game by watching ads instead of paying cash. But I really don’t understand why I have to do both!

"Joystiq" (from which I’ve got the information) in two posts (post1 and post2) provides a transcript of the letter, which, as I understand, comes with the game. Here is the most interesting part from this letter:

IF YOU DO NOT WANT IGA TO COLLECT, USE, STORE OR TRANSMIT THE DATA DESCRIBED IN THIS SECTION, DO NOT INSTALL OR PLAY THE SOFTWARE ON ANY PLATFORM THAT IS USED TO CONNECT TO THE INTERNET.


Basically, love it – or leave it. I am speechless…

I hope that EA will listen to the voice of the gamers and reconsider its policy. Basically, it has to do a very simple thing: let the users choose whether they want a free (or at least deeply discounted) game with ads, or a fully priced one with no ads.


Friday, October 13, 2006

Second Life: cyberpunk becomes real?

Second Life is a 3D online virtual world created by Linden Lab. According to Wikipedia, it currently has more than 300,000 active users and a total of more than 800,000 user accounts. Nothing spectacular – there are much more densely populated virtual worlds. So why am I writing about it? Well, because it seems that more and more people are starting to realize that virtual worlds can be used for more than just killing monsters and leveling up characters. And not just people – huge companies are paying close attention to Second Life. Here are three stories that I discovered today:

I have a feeling that the thrilling cyberpunk stories of Gibson and Stephenson are turning into reality much faster than anyone would think…


Wednesday, October 11, 2006

Office 2.0 is coming

A couple of interesting developments have happened in the area of web-based office productivity applications (so-called "Office 2.0"). First, Zoho just launched its "Virtual Office" – an integrated online suite of collaboration tools. Second, Google launched an integrated version of its Writely and Google Spreadsheets products called "Docs and Spreadsheets". I haven’t used Zoho yet – but I have used Google Spreadsheets, and can tell you that the application is nothing short of amazing. On the other hand, the list of applications offered by Zoho is absolutely overwhelming. I think one can say that Office 2.0 is almost here.

More information about Office 2.0 can be found here. Especially interesting for me was a section called "My office 2.0 setup", which provides a list of Office 2.0 tools together with their alternatives. I should confess that I didn’t know about half of the tools mentioned there!

With the new developments come new concerns. As usual, I am concerned about the security of Office 2.0. Here is a very interesting article called “Top 10 Web 2.0 attack vectors” – a must-read for anyone building Web 2.0 applications. This article, however, deals mostly with developer-side issues. As for user-side security, I have nothing specific to say yet – just an uneasy feeling about it.


Thursday, October 05, 2006

JavaScript Intranet Scanner

In "Other Things" blog I've found a link to a PDF document which describes a very disturbing security issue with JavaScript:

Imagine visiting a blog on a social site or checking your email on a portal like Yahoo’s Webmail. While you are reading the Web page JavaScript code is downloaded and executed by your Web browser. It scans your entire home network, detects and determines your Linksys router model number, and then sends commands to the router to turn on wireless networking and turn off all encryption. Now imagine that this happens to 1 million people across the United States in less than 24 hours.
This scenario is no longer one of fiction.


The document provides more information on how this can be achieved (though the link to their demo page doesn't work, so I can't guarantee that this is not another joke). If the approach described in this paper works, then it's scary. It seems like the only possible solution is to turn off JavaScript support in the browser and turn it on only for selected sites, which would make Ajax and other modern Web technologies significantly less appealing.

Again, I haven't verified the information yet - but the explanation in the document seems realistic enough.


Friday, September 29, 2006

Agile

For some reason, agile methodology became a hot topic last week. Many new postings in many blogs discussed the merits and shortcomings of agile development, and many old posts on the same topic were promoted to the front page on sites like DZone. Many people criticized Agile, and many defended it.

I, personally, have no experience doing Agile and, therefore, I am in no position to criticize it (though I find many of the ideas and methods of Agile contradicting my own experience and common sense in general). But I also want to add my 2 cents to the discussion.

I will not talk about Agile being better or worse than Waterfall or any other methodology. I will start with one simple statement: a methodology is a tool. It's not a religion, not a science - it's merely a way to organize production. And there is no such thing as a "universal tool" - every tool is good for something and bad for something else. There are exceptions - tools that are bad for everything - but Agile is clearly not one of them, since we've read some success stories, and I don't have any reason to call the inventors of Agile liars.

What that means is: it's absolutely useless to discuss whether Agile is good or bad in general; instead, Agile users (both haters and lovers) should spend their effort on analyzing their stories, in order to understand when Agile is good and when it is bad. I am absolutely sure that for certain types of projects (or teams, or environments, or combinations of those factors) Agile is a blessing - and for certain others it's a curse.

Thursday, September 28, 2006

Windows tips

Just found today a great discussion thread in The Joel on Software Discussion Group: Best tips that no one seems to know about. Basically, the whole thread is a collection of various simple Windows tips and tricks - mostly some interesting hotkey combinations. The conversation also mentions several quite useful links:

Firefox keyboard shortcuts
Windows XP keyboard shortcuts
117 run commands in Windows XP

Now, the only question I have is: how should I memorize all of these cool key combinations - or, at least, the most useful ones? Probably I should create some kind of printable list of my favorites and put it on my wall.

Tuesday, September 19, 2006

PS3: a hidden computer

Yesterday CNN published an article about using the PS3 for distributed computing. Scientists at Stanford University worked with Sony to port their Folding@home project to the PS3. The idea is that users will download a program to the hard drive of their PS3, and the program will perform complex scientific calculations while the console is not being used for games. It will upload the results to a central location, helping to find a cure for a number of diseases.

The article left me with mixed feelings. On one hand, this is definitely a creative use of the PS3, and the project is, no doubt, beneficial for all mankind. I applaud the scientists at Stanford and the people (techies and business folks) at Sony.

On the other hand, this new project demonstrates one very important thing: the PS3 is a device with a very powerful processor, local storage and connectivity. It can go online, download software, run it (maybe even in background mode), and send data back. In other words - yes, it is a full-scale computer, as we were told already. The question is - what about security?

I do believe that Sony did its best to implement various security features. But I also know that the PSP, for example, was hacked in a very short time. In a contest between Sony and hackers, I wouldn't bet on Sony.

You may say: "So what? Normal computers are also hacked into daily; there are tons of malware out there - but no one panics because of that." The problem here is not a technical one - it's, rather, a psychological issue. The majority of users now know that computers have to be protected against viruses, trojans and other dangers. People are learning to pay attention to unusual behavior of their computers, and they learn to protect their PCs by installing automatically updating antivirus software, firewalls and all sorts of protective tools (to say nothing of regular automated OS updates). They learn this about computers - but almost no one will ever perceive their gaming console - a toy - as a computer that requires an equal amount of protection. I strongly doubt that anyone will buy and install a firewall or antivirus for their PS3.

The next question is - what is the danger of compromising PS3 security? True, the console doesn't store any particularly sensitive data (though it might hold account names and passwords for some subscription-based online games). But it has exactly what the people at Stanford used: free horsepower and connectivity. So I can clearly see a botnet of infected PS3s used for distributed calculations (breaking keys, for example), spamming, or DDoS attacks. And even if the source of the problem is eventually traced to PS3s, it might be incredibly difficult to make people install patches or run cleaning software.

Maybe Sony has already addressed this problem somehow - I don't have enough information yet. But I can see an interesting and dangerous trend here: powerful computers hiding in ordinary gadgets, with no one even thinking about their true capabilities. It's not a new idea, but it seems less and less fantastic to me: we are close to the times when one might discover that his coffee maker is infected by a virus, and his vacuum cleaner is being used to crack some Pentagon codes...

Friday, September 15, 2006

Advice to interviewers

Lately I've seen posts on several blogs discussing different aspects of hiring. Most of them gave advice to candidates - so I thought I might also take part. However, instead of advising candidates on how to survive an interview, I'd rather give some advice to interviewers.

Phone interview. A couple of times I've discovered that the person interviewing me over the phone has a speech defect or an extremely heavy accent. I'm not a native speaker myself and probably shouldn't complain, but still - the fact that I was unable to understand questions the first time (and sometimes the second and third times as well) made those interviews extremely - and unnecessarily - stressful. So, advice number one: make sure that the person who does phone screening speaks clearly.

Also, do not make candidates read or listen to large pieces of code over the phone. It's inconvenient, ineffective and, frankly speaking, pretty stupid.

And the last one about phone screening: be flexible about who calls whom. I once encountered a person who insisted on calling me - in the middle of a working day at my office!

Pre-screening.

Do not ask the candidate to submit code examples from his (or her) previous job. This might be illegal, and it puts the candidate in an awkward position.

Test projects and online tests - well, I personally strongly dislike those practices. Nothing prevents the candidate from cheating - and a serious professional wouldn't want to spend his time on some bogus project.

On-site interview.

Always give your business card to the candidate. When going through interviews with five or six people in a row, it's hard to remember everyone's name and title - and it's so embarrassing later to admit that you've forgotten whom you talked with!

Don't turn the interview into an ego-booster. Don't ask questions whose only purpose is to prove that you know something better than the candidate does. As an example: in one company's list of "General SQL knowledge interview questions" I've seen a question based on a strange and probably incorrect behavior of MS SQL Server under certain circumstances. Do you think this is an appropriate question for measuring general knowledge of SQL? I don't.

And, please, try to give feedback. I remember one of my interviews: a man asked questions, I answered, he said "OK..." and continued with the next question. At some point I felt uncertain about my answer. The man said "OK..." and I asked him: "Was that the answer you expected?" He calmly replied: "No. As a matter of fact, it was a completely wrong answer." I asked for clarification and discovered that I had misunderstood his question. So, if the candidate answers your question incorrectly, tell him (or her) so - maybe they know the right answer but just didn't get you right.

Monday, September 11, 2006

Microformats

I discovered the idea of microformats quite recently, and I was immediately charmed by its simplicity and elegance. Simply put, microformats are about marking up semantically unified blocks of data so that they are easily understandable both by machines and by humans. Examples of data that would benefit from this approach are numerous. Microformats.org - a site dedicated to microformats - lists almost a dozen existing formats, including hCard, a format for representing people and organizations, and hCalendar, a format for events and calendar entries, among others. Microformats are based on XHTML, which allows them, on one hand, to be easily integrated into a web page and, on the other hand, to be easily extracted from the page and processed by any program.

There are already several tools - most of them still beta versions, though - that are able to detect the presence of microformatted data on a web page and extract it. One example of real-world usage of microformats is the way Technorati processes tags from blogs - rel-tag is one of the microformats!
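Just to illustrate how easily microformatted data can be machine-processed, here is a toy example of my own (not any actual tool's code) that pulls rel-tag values out of a piece of XHTML. Per the rel-tag convention, the tag is the last path segment of the link's href, not the link text. The naive regular expression is purely for illustration - a real parser would, of course, be far more robust.

```javascript
// Toy rel-tag extractor: finds <a rel="tag"> links in a chunk of
// (X)HTML and returns the tag names taken from each link's href.
function extractRelTags(html) {
  var tags = [];
  // Naive pattern: assumes rel comes before href and double quotes.
  var re = /<a\s+[^>]*rel="tag"[^>]*href="([^"]+)"[^>]*>/g;
  var match;
  while ((match = re.exec(html)) !== null) {
    // The tag is the last path segment of the href.
    var parts = match[1].replace(/\/+$/, "").split("/");
    tags.push(decodeURIComponent(parts[parts.length - 1]));
  }
  return tags;
}

var sample =
  '<p>Filed under: ' +
  '<a rel="tag" href="http://technorati.com/tag/microformats">microformats</a>, ' +
  '<a rel="tag" href="http://technorati.com/tag/web">web</a></p>';

// extractRelTags(sample) -> ["microformats", "web"]
```

The point is exactly what makes microformats attractive: the same markup that a human reads as "Filed under: microformats, web" is trivially consumable by a program.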

One thing I am afraid of, however, is an uncontrollable proliferation of incompatible microformats once the idea becomes popular. It's so easy to come up with your own format! This might render the whole idea unusable - but I do hope that this will not happen, and I am watching with interest all the new developments in this area.
