Thursday 20 November 2014

How do you like your scores: raw or well done?

 

I do not think I could be considered a foodie. I enjoy good food, but that includes today’s lunchtime meal of bread, ham and cheese, then lemon cheesecake and raspberries. It was not elaborate, and the ingredients were not autochthonous. I doubt I could pass a blindfold test to distinguish this particular meal from similar breads, hams and cheeses. The meal was fine, and needs no further discussion.

Of more moment to me is how to deal with scores which arise from mental tasks. Here I have a strong preference for scores which are as raw as possible. This may be due to the teachings of Prof A. E. Maxwell, who said of his 1978 Basic Statistics: For Medical and Social Science Students: “It must be one of the simplest text books on elementary statistics ever written.”

http://link.springer.com/book/10.1007/978-94-009-5804-3

He was used to working with data sets by hand, which was extremely slow (his thesis was based on one factor analysis which took him three years) but allowed him to see how the actual bumps and declivities in performance scores translated into the final summary statistics. However, he was not entirely without guile: when he suggested applying log transformations to skewed data I was mildly shocked. To my puritan mind, the fact that the data were skewed was a material fact, an aspect of reality, and I did not want it erased from view by statistical trickery, however justified and openly admitted.

Naturally, although a log transform slightly cooks the data, like real cooking it brings indigestible observations within the purview of standard statistics, in that it meets the assumption of normally distributed data. A declared manipulation allows data processing, much as pounding and slow cooking make tough, stringy meat edible.
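To make the point concrete, here is a minimal sketch of my own (not Maxwell’s worked example) showing what the declared manipulation does: a strongly right-skewed set of scores is pulled back towards symmetry by taking logs, so that normal-theory statistics behave better. The lognormal data here are invented for illustration.

```python
# A minimal sketch (illustrative, not from the original post): a log transform
# pulls in the long right tail of a skewed distribution, reducing skewness.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
raw = rng.lognormal(mean=0.0, sigma=1.0, size=1000)   # hypothetical right-skewed scores
cooked = np.log(raw)                                   # the declared manipulation: take logs

print(f"skewness of raw scores:    {skew(raw):.2f}")     # large positive value
print(f"skewness of logged scores: {skew(cooked):.2f}")  # close to zero
```

The skewness has not vanished from reality, of course; it has simply been moved onto a scale where the standard machinery applies, which is exactly the openly admitted trick.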

I could also see the beauty of factor analysis, bringing orderly simplification to a maze of correlations. Dennis Child explained it with the simple analogy of vectors of force, which resolve into a single resultant line of movement for an object subject to several individual forces. For example, if an object is pulled by two equal forces at right angles to each other, it will move along a line at 45 degrees to those forces. The two forces act on the object in reality, and the vector is the actual single path it follows. A “factor” in a correlation matrix is the vector which results when all the variables have exerted their forces. The “loading” of each variable on the common factor (vector) shows how close each variable is to that larger, simplifying, resultant force.

http://www.amazon.co.uk/Essentials-Factor-Analysis-Dennis-Child/dp/0826480004
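A small illustration of the force analogy, my own rather than Child’s: two equal forces at right angles resolve into a resultant at 45 degrees, and in the geometric picture of factor analysis a variable’s loading on a factor is the cosine of the angle between the variable and that factor.

```python
# Illustrative sketch of the vector-of-forces analogy (assumed example, not from Child).
import math

fx, fy = 1.0, 1.0                          # two equal forces at right angles
angle = math.degrees(math.atan2(fy, fx))   # direction of the single resultant vector
print(f"resultant direction: {angle:.0f} degrees")        # 45 degrees

loading = math.cos(math.radians(angle))    # cosine of the angle = "loading" on the resultant
print(f"loading of each force on the resultant: {loading:.2f}")   # about 0.71
```

Each force lies 45 degrees from the resultant, so each “loads” on it at about 0.71; the closer a variable sits to the common vector, the nearer its loading is to 1.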

In that sense it is simpler and also more truthful to describe human abilities in terms of factors than to give a jumbled list of raw scores on many different tasks. g (for general intelligence) is the big vector which results from a whole lot of mental forces acting together. It is a distillation.

So, when people discuss the Flynn effect, they often argue about whether the effect “shows up on g”. If it does not, then there is a case for saying that the observed changes are a matter of IQ inflation rather than a real increase in ability. Dodgy tests, dodgy marking and distorted standardisation measures have confused us, say the g men. Rushton led this charge. Jan te Nijenhuis has continued it. I generally support this argument, and at the very least want to know whether the gains are “hollow” as regards g. Flynn has a counter-argument, which is that loading on g is not what defines a gain as real.

Allied to this discussion is the more technical one regarding “invariance”. Jelte Wicherts has done much on this, and Roberto Colom is right to draw attention to the changing g loadings in his sample (see comments on the Gignac paper).

However, we need an explanation as to why digit span should have become less intellectually demanding. The task itself has not changed. It might have been more novel years ago, and then become humdrum, but commentators point out that increased use of mobile phones means that numbers no longer have to be remembered, though that is a very recent phenomenon.

Some individuals, as a consequence of massed practice, have learned to “chunk” digits into groups for better recall. Indeed, as was clear to George Miller decades ago, musicians and chess players would have to be doing something like that in order to hold very long sequences in their minds. However, such massed practice is by no means the norm, which is just as well, because it brings few advantages outside very specific domains, and does not generalise well.
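A toy example of my own may make the chunking idea plain: the same twelve digits are far easier to hold if they are regrouped into three familiar units, because it is the chunks, not the digits, that occupy slots in working memory.

```python
# Illustrative sketch of chunking (invented digits, chosen to form familiar dates).
digits = "149217761066"

# As twelve separate digits: 1 4 9 2 1 7 7 6 1 0 6 6
singles = list(digits)

# Chunked into three familiar dates: 1492, 1776, 1066
chunks = [digits[i:i + 4] for i in range(0, len(digits), 4)]

print(len(singles), "items unchunked:", singles)
print(len(chunks), "items chunked:  ", chunks)
```

The trick works splendidly on material that fits the practised code, and hardly at all outside it, which is the point about narrow transfer.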

In summary, if someone can show me a decade-long data series on a ratio scale such as digits recalled or seconds taken to respond to a signal, then I am very interested in the raw data, even in the minimally cooked form of means, standard deviations, and skewness and kurtosis. For that reason I am more influenced by the relatively unchanging means shown by Gignac than by g loadings per se.
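For what it is worth, the “minimally cooked” summaries I have in mind are nothing more than the following; the digit-span figures below are hypothetical, standing in for one year of a raw ratio-scale series.

```python
# Minimal sketch: the only cooking applied to a raw ratio-scale series.
# The scores are invented for illustration (hypothetical digit spans, one sample).
import numpy as np
from scipy.stats import skew, kurtosis

digit_spans = np.array([5, 6, 7, 6, 8, 5, 7, 6, 6, 7, 9, 5, 6, 7, 6])

print(f"mean:     {digit_spans.mean():.2f}")
print(f"sd:       {digit_spans.std(ddof=1):.2f}")
print(f"skewness: {skew(digit_spans):.2f}")
print(f"kurtosis: {kurtosis(digit_spans):.2f}")   # excess kurtosis
```

Given a run of such summaries year by year, one can see whether the raw performance itself has moved, before any standardisation or factor extraction gets a chance to cook it further.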

However, I think I need to give Roberto Colom more space on this topic, so will do that after a break for a movie.
