Making Sausage and Editing Scientific Journals
Eli gets mail
---------------------------------------------
Dear all:
A bit of a ramble from someone mid-career who spent ten years as an editor and then chief editor of a scientific journal. For various reasons I've decided to remain anonymous below, although nothing I'm saying is very controversial.
Remote Sensing is a very young journal (in the general scheme of journals and their history), with just a bit over two years of articles. It has a large editorial board of almost 50 scientists, some of whom I recognize as very well known in the broad remote sensing community, others of whom have only recently started their careers. Because it is fairly young, it is not yet indexed in Thomson's Web of Knowledge for its 'impact factor', a measure of the impact of a journal that scientists often take as a ranking of how good a journal is relative to similar ones [there are MANY remote sensing journals].
The first few years of a journal are always difficult. There are so many thousands of journals; how does one convince people to publish in 'your' journal? One way, of course, is to have a strong body of internationally well-recognized scientists on the editorial board. Journals vary, though, in how they handle papers. I was chief editor of a scientific journal for five years (editor before that), with many fewer editors on our editorial board (we started off with just 9-10 of us, then grew to about 20). There, each editor chose the reviewers of papers, based on author recommendations and the editor's own knowledge of people in the field. Choosing reviewers is definitely a very hard job. Often one needs to contact 10-15 people to get 2-3 'real' reviews, where the reviewers seem credible and knowledgeable, and actually critique the work appropriately.
A 'good' and experienced editor will look at the reviewers suggested by authors, and then check whether those reviewers seem to be in bed with the author or have other close ties. This is not difficult to do--check whether they've published together before, for instance. But an editor might be handling 10-20 papers in a year (although with a large editorial board, this number might be reduced), and so one might cut corners after the 10th "sorry, am overburdened with commitments on my time, cannot review this paper for you" and go back to the reviewers that the author suggested [I have not done this, but a 'new' editor, who has not been given proper mentorship in 'how' to be an editor, might]. For one paper, which happened to be controversial, I approached over 30 reviewers before getting three to agree, and only two of the reviews that came back were 'real'.
In the case of the journal I worked with, we let the editors decide, based on the reviewers' comments, how to proceed with a paper (minor changes, major changes, or rejection--rare, and it could be contested). As chief editor I kept an eye on things, but did not check 'every' paper as to what was happening. Other journals have more of a vetting system, whereby every decision of an editor must then be confirmed by an executive editor. And remember, none of us are paid for these roles. We are volunteers who do this in addition to our normal scientific duties (academic and/or research scientists). I personally enjoyed the whole reviewing process, and was careful to make sure that no paper was turned away or rejected only because it was controversial--even controversial ideas had to be given a very solid review, sometimes going the extra step of telling reviewers to put aside any biases.
The journal I worked on was fairly broad and, like Remote Sensing, might attract papers from a wide variety of disciplines. I would imagine that Remote Sensing could get a whole variety of papers--anything that uses remote sensing. As chief editor, I did ask that editors NOT take on any papers they were uncomfortable overseeing; they needed to have the expertise to oversee the paper. When papers came in that were outside the 20 editors' direct expertise, so long as they were within the broad remit of the journal, I would oversee the peer-review process myself, and then do a very careful job of finding and choosing reviewers by consulting with colleagues who had more knowledge of that particular area. These were often very time-consuming papers, as they were often controversial. And yes, we received papers on climate change related topics, where we really tried to be careful in the whole process, making sure the reviewers were giving 'real' reviews and not just rubber-stamp "publish as is" verdicts.
I don't know how the editorial system worked at Remote Sensing, or how much charge the founding editor-in-chief, Wolfgang Wagner, had in actually approving every paper, examining reviews for papers assigned to other editors, and so on. He did take responsibility, which I know I would have done in his place too. As a new journal, one is caught between trying to attract enough papers to justify the journal's existence and making sure the papers are of high quality so that they are well cited (which then plays into the eventual impact factor, which is determined by a couple of years' statistics: how many papers the journal publishes, and how often other articles cite them--see the sketch after the letter).
Signed.
A mid-career scientist and past chief editor of a nameless but decent
scientific journal
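[A rough sketch of the two-year impact factor arithmetic the letter mentions. The numbers are made up for a hypothetical young journal, and the real ISI calculation also has rules about which items count as 'citable', so treat this as an illustration only:]

```python
# Two-year impact factor: citations received in year N to articles
# published in years N-1 and N-2, divided by the number of citable
# items published in those two years. All numbers below are made up.

def impact_factor(citations_this_year, citable_items_prev_two_years):
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 120 papers in 2010, 150 in 2011, and 190
# citations during 2012 to those two cohorts of papers.
print(impact_factor(190, 120 + 150))  # -> roughly 0.70
```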
8 comments:
"Often one needs to contact 10-15 people to get 2-3 'real' reviews"
Huh. Does this mean that to be an industry-average reviewer, I'd be allowed to turn down 4 out of 5 review requests? I think so far I've agreed every time I've been asked (only about 5 or 6 papers so far, I think, of which one was a mistake, since the paper was an 80-page tome of a review which could have used proofreading by a native English speaker, along with a lot of tightening and error correction. This was the only paper I reviewed where I recommended rejection, though in the end, because it was an invited review, we went through 3 rounds of revise-and-resubmit, with lengthy comments by all three reviewers, to get it into acceptable shape. On the other hand, I did have one of my own papers go through a three-round revise-and-resubmit with amazingly helpful reviewers, so I guess I'm about even on that score...)
Anyway, thanks for publishing this e-mail; it is quite insightful. I continue to wonder what role Wagner had in this paper _before_ it was accepted the first time...
-M
Excuse me for making a comment oblique to the subject of the posting.
I think it is a bad custom to evaluate journals (also scientists or research institutions) by citations of papers from just the most recent two years. (It may be necessary for evaluating new journals such as "Remote Sensing", so excuse me for deflecting from the original issue.) My experience in climate science suggests that the important citations are usually to decade-old articles, sometimes to half-century-old articles, either to refute them or to reaffirm them. The IPCC cycles force hastened jobs on us, sometimes resulting in immature publications, but then the important citations are criticisms of papers from the previous cycle, 5 or 6 years earlier, rather than mutual citations among papers that are ideally parts of the same work. Perhaps informatics firms are fooled by the apparently rapid turnover of genome technologies etc. and lose sight of how science evolves. I think we should lobby Thomson Reuters (ISI) etc. to extend the time frame for counting citations.
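[To put some made-up numbers on that point, in the same spirit as the sketch above: older cohorts of papers keep accumulating citations, so lengthening the counting window can paint a rather different picture of the same hypothetical journal:]

```python
# Hypothetical journal: papers published per year, and citations each
# cohort receives during 2013 (all numbers invented for illustration).
papers     = {2008: 90, 2009: 100, 2010: 120, 2011: 150, 2012: 160}
cites_2013 = {2008: 80, 2009:  85, 2010:  70, 2011:  60, 2012:  40}

# A two-year window sees only the 2011-2012 cohorts...
two_year = (cites_2013[2011] + cites_2013[2012]) / (papers[2011] + papers[2012])
# ...while a five-year window credits older work that is still cited.
five_year = sum(cites_2013.values()) / sum(papers.values())

print(round(two_year, 2), round(five_year, 2))  # -> 0.32 0.54
```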
By 2-year citation count, Gregor Mendel was a complete failure, yes?
"I'd be allowed to turn down 4 out of 5 review requests? "
Not necessarily. The usual practice is either to get the request and do nothing, or to provide a three-word review. Useful reviews are another story.
Kooti, Eli seems to remember that ISI has a couple of rankings, only one of which is immediate impact, so there are longer-range ones to look at.
Explanation: Eli's place has SciFinder, and he uses Google Scholar. ISI is too expensive, so he only gets to use it once in a while at other places. So it really is "seems to remember".
Just to be clear, consider 2 journals not indexed by ISI:
1) E&E
2) Remote Sensing
Non-indexing *means* something in the first case.
It means nothing for a 2-year-old journal.
Making sausage more satisfying.