Roya,

“The number of people who participated in the study to train and test this tool is 32.” I find that number exceptionally small even if it were a reasonably distributed sample (and what are the chances of that?). Do you have any indication whether 32 is above or below typical training-set sizes? It is easy to understand why folks want a “silver bullet”. I wonder whether the purchasers of these solutions understand the concerns, constraints, and complexity inherent in their construction, or perhaps they do and don't care.
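
For a back-of-envelope sense of how little 32 buys you, here is a minimal sketch (the counts are hypothetical, not from the article): even a strong-looking 87.5% measured accuracy on 32 people carries a 95% confidence interval of roughly ±11 points under a simple normal approximation.

    import math

    def accuracy_ci(correct, n, z=1.96):
        # Normal-approximation 95% confidence interval for an accuracy estimate.
        p = correct / n
        half = z * math.sqrt(p * (1 - p) / n)
        return (max(0.0, p - half), min(1.0, p + half))

    # 28 of 32 correct looks like 87.5% accuracy, but the interval is wide:
    print(accuracy_ci(28, 32))  # ~ (0.76, 0.99)

At that sample size you genuinely cannot distinguish "pretty good" from "barely better than a coin flip with a lucky draw".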

Coincidentally, I just studied a limited application of image categorization and found a surprising number of type 1 and type 2 errors. Initially I was very optimistic about its application, but the study ruled out that bit of tech. The human face probably carries more information, but there is also much more variance, nature && nurture, and I can't imagine it succeeding at an acceptable rate.
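
For anyone blurring the terms, a minimal sketch of how type 1 (false positive) and type 2 (false negative) rates fall out of a confusion matrix; the counts below are hypothetical, not from my study:

    # Hypothetical confusion-matrix counts for a binary image classifier.
    tp, fp, fn, tn = 40, 25, 30, 55

    type_1_rate = fp / (fp + tn)  # false positive rate: 25/80 = 31.25%
    type_2_rate = fn / (fn + tp)  # false negative rate: 30/70 ~ 42.9%

    print(f"type 1 rate: {type_1_rate:.1%}, type 2 rate: {type_2_rate:.1%}")

Error rates anywhere near those numbers are what made the tech a non-starter for us.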

A 25-year software industry veteran with a passion for functional programming, architecture, mentoring / team development, XP/Agile, and doing the right thing.