SHOE Archives

Societies for the History of Economics

SHOE@YORKU.CA

From: [log in to unmask] (Mike Bradley)
Date: Tue Jun 26 13:01:08 2007
    Listing and ranking journals by various criteria--"AEA equivalent 
pages," citations, etc.--has been a common academic enterprise for the 
past two decades or so.  A few economists have built their reputations 
on these exercises.  I agree with Professor McCloskey that such rankings 
are of dubious scholarly merit, and are toxic when applied to assess the 
work of specific economists.
    Professor McCloskey is right on the money in suggesting the inclusion of dummy 
journals in any such sample.  I recall such a study by the AEA some time ago 
in which economists were asked to indicate how often they consulted 
various journals.  A rigorous-sounding dummy and a more 
historical/institutional-sounding dummy were included in the list of 
journals.  The historical/institutional dummy did quite badly--almost 
nobody indicated that they used it.  The analytical/theoretical dummy did 
better (as I recall) than some actual journals.
    I am also concerned about the accuracy of citations as measures of 
impact or quality of scholarship.  With the concentration of editors and 
authors in a relatively small number of universities, and an 
understandable bias toward citing the work of known economists, I have 
the distinct (but not empirically tested) suspicion that there is a good 
bit of quality scholarship that goes uncited.  Coupled with the 
omission of HET journals (until recently) from the major citation 
service, this makes citations a particularly thorny issue as the major or 
sole criterion for assessing the scholarly value of our work.
    On the other hand (there's always the other hand), the proliferation 
of journals of unknown quality and uncertain peer-review standards makes it 
possible to publish a lot of stuff (a technical term) that is of dubious 
quality--some of it quite bad.  This can lead to a mindless "counting 
articles" measure with no consideration of the quality of the work.
    The only resolution of this issue that I can hazard is not very 
imaginative.  We can assess the quality of scholarly journal articles by 
reading them ourselves, rather than relying on questionable quantitative 
substitutes for reading and thought.  I have not been able to convince 
many of my colleagues that this is a better approach to assessing our work.

Mike Bradley


