This is the second in my series of interviews inspired by the 2012 SSP session “Publishers! What Are They Good For?” If you’d like to learn more about why I’ve conducted these interviews you should read my previous conversation with Tom Reller of Elsevier. All interviewees were asked the same questions.
Our next STM professional is David Crotty, Senior Editor with Oxford University Press and panelist on the “Publishers! What Are They Good For?” session. David is a veteran of the scholarly publishing industry who currently oversees a suite of society-owned medical and life sciences journals. He previously served as an Executive Editor with Cold Spring Harbor Laboratory Press, creating, acquiring, and editing new science books, launching and running new journals, and managing the Press’ online content. David received his PhD in Genetics from Columbia University and did postdoctoral research at Caltech before moving from the bench to a science publishing house. As one of the Society for Scholarly Publishing’s “chefs” at The Scholarly Kitchen blog, David regularly writes about the intersection of technology and publishing.
ADAM ETKIN: Let’s start with the subject of Open Access vs. traditional publishing models. There was a lot of buzz recently due to Nature’s editor-in-chief Philip Campbell stating that open access to research “is inevitable.” However, he also said:
“You hear it said that the public has paid for it, the public should have access to it for free,” said Campbell. “At that point, I draw back and say, what do you mean by ‘it’? You mean something that has got quality in it that has been selected and copy-edited. Somebody has got to do all that stuff, so the ‘it’ you want usually has involved somebody acting like a publisher to make it useable and comprehensible.”
Who, other than existing publishers, could or should “act like a publisher”? What are the pros/cons of some alternate “publisher” or self-publishers taking on this role?
DAVID CROTTY: First, I think I disagree with Campbell that anything is “inevitable”. I think open access is highly desirable, and really the ideal way science should work. But the real world often rears its ugly head and makes things difficult. I think we’re in the midst of a great period of experimentation, and as is the nature of experiments, the outcome is unknown.
He’s right, though, that the argument that the public must have free access to anything paid for by taxpayers is specious. The line of logic fails long before it gets to the point Campbell mentions. My taxes pay to build and maintain the New York City subway system, yet I must pay for access. My taxes pay to fund research, yet the actual results of that research (not just the written reports of the results) are often locked away behind patent paywalls.
To answer your question though, I don’t think there needs to be a reinvention of the wheel. Most of the arguments in this sphere are about control: control of access to the material, control of copyright, editorial control, control of re-use, etc. If these are important factors for academics, then they already have the mechanism in place for achieving that control. There’s a functional system of university presses, not-for-profits, and society publishers that’s already owned and controlled by academia. Why not exercise that control to get better results, and why not support your own efforts rather than continuing to work hard to benefit somebody else?
Commercial interests have different goals than the Academy, and it’s understandable that there seem to be more and more conflicts between the goals of commerce and those of scholarship. But I don’t think the answer lies in taking that control away from one set of commercial interests and turning it over to another. In the end, they’re still looking to take money out of academia and put it in their shareholders’ pockets.
ADAM ETKIN: Continuing on this topic:
“If gold open access became the norm for the primary literature, Campbell said that the cost per article could be in excess of $10,000 to publish in highly selective journals such as Nature, Cell or Science.”
It seems a safe assumption that no author or funding agency will be willing to pay publishing fees of $10K (or even $4K-$5K) per article. In order to drive this fee down, OA publishers such as PLoS have adopted a peer review “light” system and publish 65-70% of articles submitted to them. What is your opinion of this model?
DAVID CROTTY: Why is that a safe assumption? Don’t Wellcome and HHMI already pay $5K for articles in Nature Communications? If I’m a researcher and I have the cash on hand, then $10K to publish a paper in Nature is a superb investment. I will likely see much more back from that paper in terms of further funding and career advancement than the initial $10K investment. The problem though, for most researchers, is that funds are so tight that only the select few have a spare $10K on hand. That results in a system where the rich get richer.
As far as the PLoS ONE “light” peer review goes (and I hesitate to call it that, because the term will sound pejorative to many—I’m still searching for a better descriptor), I think it presents both benefits and drawbacks. The key benefit is speed. If the journal accepts 70% of submissions, then if you submit something reasonable, you know it’s going to get in, and that you aren’t going to have to go through multiple rounds of peer review and rewrites. This, I’d be willing to bet, much more than open access, is the driving force behind many of PLoS ONE’s submissions.
The downside is that it puts more work on the reader by removing a filtering mechanism. Researchers are already overburdened, and asking them to do more work makes things even worse. There’s a balance that the research community must decide for itself. Is the filtering offered by the journal system of enough value that it’s worth paying for? Or is this activity something where we’d rather keep the money and instead spend extra time digging through a less-filtered literature? So far, given the continued growth in submissions to traditional journals, it appears the balance remains on the side of spending money to save time.
ADAM ETKIN: Do you think there is a business model where journals could maintain a selective acceptance rate (using more rigid peer review) which allows for the publication of the top research in the field while at the same time results in author fees which are reasonable?
DAVID CROTTY: It’s already possible in some fields. Nucleic Acids Research, as one example, has an impact factor around 8 and a high rejection rate. The author fee is $3K, which so far has proven acceptable to the research community. But that’s also a community that is well-funded, and the journal publishes a much higher number of papers than many others, despite its selectivity. So it’s hard to see that translating to smaller niche journals.
There are potential ways around it though. High-end journals could use submission fees to help cover the cost of rejection (which would also help by lowering the number of submissions). eLife is exploring a model where those costs are covered by a funding agency, where the journal isn’t necessarily expected to break even. Or one could follow the path being explored by F1000 Research, where author fees are paid before peer review takes place, thus eliminating the cost of rejection altogether.
ADAM ETKIN: The traditional peer review system seems to constantly be criticized, yet at the same time studies and surveys show the large majority of researchers view it as essential to any publication system. Do you agree with the criticism? How would you improve the current system?
DAVID CROTTY: I think much of the criticism is overblown, and often due to a misunderstanding of what peer review is supposed to achieve. People have done studies trying to quantitatively measure what is essentially a qualitative process.
I do think the system could be improved though, particularly in terms of speed and transparency.
ADAM ETKIN: Another target of much criticism is Impact Factor. Do you foresee a time when IF is less important? If so, how soon do you think this might be the case?
DAVID CROTTY: That’s entirely up to the research community. I don’t know anyone (aside from Thomson Reuters) who loves the Impact Factor. Publishers use it (and exploit it in some ways) because that’s what the research community has told us is important and where we should focus. If the research community chooses some other metric (or metrics) to value instead, publishers will follow.
It’s hard to see this happening in the short term, as the IF is so deeply ingrained in the system. Administrators seem particularly happy with a simple number they can use to rank performance, rather than trying to put together a nuanced picture of what a researcher’s work means. To me, the ideal solution would be a panel of metrics, each weighted to offset the others’ weaknesses. You’d still end up with one number that would satisfy administrators, but it would be a more meaningful number. The Impact Factor is a reasonable way to measure a journal, but a poor measurement of the quality of an individual scientist’s work.
ADAM ETKIN: Related to IF, Academia has created a culture where in order for researchers to gain tenure, promotions, grants etc. they must publish in high IF journals. Do you see this changing any time soon?
DAVID CROTTY: See the answer above. If anything, funding is getting tighter, jobs are getting fewer and farther between. A colleague at a major research institution tells me they get a minimum of 400 qualified applications every time they advertise a tenure track position. There’s a glut of talented researchers and nowhere to put them and no way to fund them. So I see competition and pressure increasing rather than decreasing. I would love to see better, more meaningful filtering systems in place than the Impact Factor, but we are likely going to see more hurdles that researchers must clear to continue their careers rather than fewer.
ADAM ETKIN: How important do you see social media measurements such as altmetrics being in the future? Are you concerned that such measurements have the potential to be “gamed” by purchasing tweets, likes, and so on? How might those concerns be addressed?
DAVID CROTTY: Altmetrics go well beyond just social metrics and there are many promising avenues currently being explored. The problem with social metrics is that they are social. They measure things like popularity and one’s ability to network. Those are interesting things to look at, but they aren’t necessarily a proxy for quality or impact.
And yes, they do present an easier path to gaming, something we’re likely to see more and more of given the increased competition for limited jobs and funding. Citation has its problems, but the barrier to gaming (publishing a new paper that cites the old paper) is much higher than that for social metrics (clicking a mouse).
ADAM ETKIN: What are the services that scholarly publishers offer that are the most valuable? Are there services these publishers provide which are not easily replaceable due to cost, expertise, efficiency or other reasons?
DAVID CROTTY: There are far more than most people realize, but I’ll start with three key services: time saving, neutrality and driving new technology.
Technology has reached a point where researchers can do for themselves much of what a journal publisher offers. The question is why they would want to spend their valuable time doing such mundane work. Researchers, scientists in particular, do an enormous amount of outsourcing. Their job is discovery, doing experiments, learning new things. As such, they pay someone to wash the test tubes in the lab rather than doing it themselves, because that’s time and effort taken away from experimentation.
Many labs buy pre-mixed solutions that they could spend the time to make from scratch at a lower cost, and much of biology is dominated by kits, prefabricated sets of reagents for performing a particular assay, again, faster, but much more expensive than making everything yourself. Labs farm out common activities like sequencing DNA, making constructs, breeding flies, or making transgenic mice. This is all done so researcher time can be concentrated on the cutting edge of garnering new knowledge.
So in an era where researchers are so time-pressed that they only do the experiments that they can’t hire someone else to do, does it make any sense to ask them to sacrifice an enormous amount of their time doing the job of publishers, something just as easily outsourced? Publishers provide a set of necessary services for the communication of results. These are often tedious and time-consuming services, and like the lab’s dishwasher, the sequencing center, or even the campus plumber, we are paid to do the necessary things in order to free up a researcher’s time to do research.
If you are self-publishing, you are axiomatically self-promoting, and that’s not credible in the scholarly realm. Your claims must be reviewed by others; what you say about your own work is suspect. Journals serve as that neutral third party, gathering expert opinions on the validity of your claims and passing final judgment. This process can’t be done by authors themselves, simply because no one trusts anyone else—that’s the nature of science. Given the enormous stakes and the scarcity of funding and careers, a system without neutral oversight would rapidly fall prey to manipulation and gaming.
The research community constantly demands new tools for interacting with the literature. But these tools come at a cost, both in funding and in effort. Publishers invest millions of dollars every year in semantic technologies, in building APIs, and in writing and rewriting metadata to meet new standards for new modes of use. Think of something like the Nature Network, a failed but fascinating experiment in social networking for science. Elsevier is doing some really interesting things with its SciVerse developer network, as another example. There seems to be a disconnect between those complaining about journal profits and those demanding new experiments and new technologies from publishers. Those profits are how those experiments get done and those technologies get built.
ADAM ETKIN: Do you have anything else to mention regarding the scholarly publishing industry and what publishers in this field are “good for?”
DAVID CROTTY: Publishing is a service industry. We’re good for whatever you need. Times change, technology changes, and the needs of researchers continue to change as well. Publishers will change the services offered to better match those needs. That may mean changing our peer review process or the way the literature is accessed. So be it. The publisher that succeeds is the publisher that best meets the needs of the researcher.
In the coming weeks I’ll continue this series of interviews with members of the STM publishing community. I look forward to any comments you may have.