Mark Johnston – Genes to Genomes
A blog from the Genetics Society of America

Learning to peer review
https://genestogenomes.org/learning-to-peer-review/ | Mon, 31 Jul 2017

GENETICS Editor-in-Chief Mark Johnston introduces a new peer review training program for early career scientists.


“Just tell them what you think of them.” That was the response of one of my mentors when I asked him how I should review grant applications. I was a newly-minted Assistant Professor and had been asked to sit on an NIH study section. I had only a vague idea of how to go about reviewing grant applications, so I turned to my trusted colleague for advice.

I got invited back to the study section. So I must have done something right. But it felt like being tossed into the deep end of the pool before having a swimming lesson. That’s one way to learn. But perhaps it’s not the best way.

Peer-reviewers are vital to the scientific enterprise. They provide a check and balance for science by critically evaluating the stories of the authors (their peers). They check that the data support the authors’ conclusions. Are the data convincing? Do they meet statistical standards? Have the authors done the necessary controls? By answering these questions in the affirmative, peer-reviewers validate the authors’ findings; by raising concerns about these points, peer-reviewers identify errors in the work that authors surely want to avoid. And peer-reviewers provide a check of the authors’ presentation. Is it clear? Is it persuasive? In my experience, peer-review almost always helps authors improve their articles.

Peer-reviewers help editors determine which stories should enter the scientific record. Reviewers must maintain high standards to protect the integrity of the literature, but they must also have reasonable expectations of authors (their peers). Science advances incrementally, after all, and reviewers and editors need to determine how much of an advance justifies readers’ attention—to judge when a story warrants becoming a brick in the Great Wall of Knowledge. It’s a big responsibility.

You’d think such an important task would require advanced training, but there’s no formal training that I know of. Many graduate programs provide their trainees with practice reviewing manuscripts and grant applications, but the scope and effectiveness of those exercises vary widely. Some, though far from all, faculty advisors give their students opportunities to review manuscripts, often on an informal basis. This patchy system inadvertently robs many students and postdocs of the chance to hone skills central to success in science: understanding the mindset and expectations of peer reviewers and editors, thinking critically, evaluating research, and providing feedback on scientific projects not directly related to their own.

But just because we’ve always done it this way doesn’t mean it’s the only way.

During her time serving on the GSA’s Publications Committee, Early Career Liaison Aleeza Gerstein drew our attention to this inequality and variability in peer review training. Aleeza works in a field (evolutionary genetics) in which senior students and postdocs traditionally get more opportunities for inclusion in the peer review process, so she was surprised when she learned her experience was not the norm. Across our field as a whole, students and postdocs report uneven experiences in training for peer-review. Aleeza suggested that the GSA is in a good position to help train the next generation of peer reviewers. The entire GSA community could serve as a valuable resource for our early career colleagues.

With the enthusiastic support of GENETICS Senior Editor David Greenstein (now GSA Secretary and Publications Committee Chair), the Editorial Board and the Publications Committee (particularly Elyse Hope and David Fay) are working with Sonia Hall, GSA’s Director of Engagement and Development, to develop a program that will give early career GSA members real-world peer review experience.

To pilot this program, we are currently recruiting the first group of GSA member graduate students, postdoctoral fellows, and junior faculty to serve as peer-reviewers for the journal.

Trainee reviewers will be introduced to the principles, purposes, and best practices of peer-review, along with guidelines and models for writing fair reviews that are helpful to both authors and editors. Participants will review manuscripts submitted to GENETICS that fall within their areas of interest and expertise. As with any other peer-reviewer, the participants’ reviews will be provided to the authors and considered by the editor in reaching a decision.

The trainee reviewers will receive feedback in two ways. First, they will read the other reviews and the decision letter. Seeing how other, more experienced reviewers do the job will reveal much about the process and nuances of the task, as well as illuminate the path of an academic paper from initial submission through to final publication. And seeing how the editor weighs the reviewers’ opinions and takes their comments into account in coming to a decision on the manuscript will demonstrate what is most salient in reviewers’ comments. Second, we want the trainee reviewers to benefit from the expertise of the GENETICS editorial board, so editors will provide feedback to the participants about their reviews. I’m hoping that will consist of more than just “tell them what you think of it.”

And beyond the world of publishing, we expect participants to benefit in many ways. Good peer reviewers are skilled at communicating specialist information in an accessible way, and they are able to give feedback that is constructive and fair. Opportunities to get feedback on those skills are remarkably rare, even though regularly critiquing the work of peers and mentees is an important part of being a scientist! Participants will demonstrate their understanding of responsible publication and authorship practices and their willingness to contribute to the discipline, along with many of those hard-to-show “soft” skills like workplace etiquette, knowing when to seek advice, time management, and reliably meeting deadlines.

Peer-review is a cornerstone of science. We should not leave training for such an important activity to chance. The editors of GENETICS look forward to working with our young colleagues to develop the journal’s next generation of peer-reviewers.

 

 

Learn more about the GENETICS Peer Review Training Program.

How to write titles that tempt
https://genestogenomes.org/how-to-write-titles-that-tempt/ | Tue, 01 Mar 2016

You slave over writing your paper, trying to make sure that the introduction sets up a compelling story, that the results provide clear and convincing evidence for your conclusions, and that your discussion of what it all means makes sense. You and your co-authors edit relentlessly, passing the manuscript back and forth, improving it with each round of tweaks.  When you realize the returns are diminishing, you decide the paper is ready to submit for publication.

But wait: what’s the title? You quickly write down what the paper is about — “Studies on the chemical nature of the substance inducing transformation of pneumococcal types: Induction of transformation by a desoxyribonucleic acid fraction isolated from Pneumococcus type III” — head to the journal’s submission website, dash off a cover letter, upload data to Dryad and FigShare, and click the submit button.

The first thing the editor reads is the title. She scratches her head and moves on to the abstract and cover letter to learn what the paper is about. She sees that it looks interesting and potentially significant, so she invites an expert to review the manuscript. He sees the title, scratches his head, and moves on to the abstract and cover letter to learn what the paper is about. He realizes there might be something interesting there, so he agrees to review it. The paper receives good reviews and eventually gets published. A reader comes across the title in the journal’s table of contents. He scratches his head and moves on to the next title.

Some say the abstract is the most important section of a paper because it’s the part that most people read and is widely available. But an even greater number of people will lay eyes on the title. A carefully crafted title can attract readers to your paper; a difficult or dull one is likely to turn them away.

It seems obvious that titles should be clear, concise, and compelling, but I admit to sometimes not having taken them seriously enough with my own papers. After all, titles should be easy to write: they’re just a handful of words. But I became hyperaware of the importance of interesting, inviting titles once I started helping put together the Table of Contents for GENETICS each month. I now frequently give authors suggestions for improving their titles to increase the impact and readership of their papers. Here are a few suggestions I’ve come up with along the way for composing titles:

  1. Don’t bury the lede. Start with the topic of the paper, not with the name of the gene or organism you studied. A title such as

    “Fibulin-1 interacts with type IV collagen and antagonizes GON-1/ADAMTS in shaping the C. elegans gonad”

    is unlikely to attract potential readers who don’t know what Fibulin-1 and GON-1/ADAMTS are (which is almost everybody), and “type IV collagen” will generate interest in only the most special of specialists.  Something like

    “Shaping of tissue architecture in the C. elegans gonad by interactions among fibulin-1, type IV collagen, and the ADAMTS extracellular protease”

    is more likely to snare a reader because the first thing she sees is “tissue architecture”, which sounds like an interesting topic. A title that begins with

    “The C. elegans transcriptional regulators LIN-15A and LIN-56 interact and function redundantly……….”

    will attract readers interested only in C. elegans, and even among them the only ones likely to look at the abstract are those who know something about LIN-15A and LIN-56. Put the specific stuff as far into the title as possible, after you’ve hooked the reader with terms of more general interest. Better yet, leave the specific stuff out if you can.

  2. Entice the reader. Make what you learned seem exciting (I assume you, the author, think it is exciting).

    “Transcriptome and Genetic Analyses Reveal that Abc1 and Def2 are Required for Glucagon Secretion”

    could become

    “Glucagon secretion requirements revealed by transcriptome and genetic analysis of glucagon-producing cells.”

    Readers are more likely to look at your paper if the first thing they see makes them think it will offer some insight into an interesting topic, rather than just some analysis of a bunch of data.

    “A novel method of genetic selection in yeast identifies the DNA binding site of NGFIB”

    entices the reader with the prospect of learning about a new, possibly innovative method.  By contrast, the real title of the paper—

    “Identification of the DNA binding site of NGFIB using genetic selection in yeast”

    —seems to offer (yawn) just another binding site.

  3. Avoid jargon. Jargon is hard to avoid in technical publications, but you should do your best to purge it from the title.

    “FLP-21/NPR-1 Signaling and the TRPV Channels OSM-9 and OCR-2 Independently Control Heat Avoidance in Caenorhabditis elegans”

    is better as something like

    “Regulation of Heat Avoidance in Caenorhabditis elegans by Peptide Signaling and Transient Receptor Potential (TRP) Channels.”

    Jargon turns off the majority of readers, who may not be familiar with the specialized terms. Avoid unnecessary abbreviations and acronyms too.

  4. Be concise. Readers have a limited attention span. Instead of

    “The Maize Zea mays stunter1 Mutation Causes a Reduction in Gametophyte Size, Has Maternal Effects on Seed Development, and Reveals that Endosperm Development is not Essential for Early Embryo Development”

    try something like

    “Effects on gametophyte development in maize of a maternal effect mutation in stunter1.”

    Thirteen words are more digestible than 30. In fact, most readers are unlikely to read past about a dozen words before their eyes wander away from your title.

  5. Don’t give away the ending. Some authors treat the title as a one-sentence abstract, but I think that’s a mistake. The purpose of the title is to entice readers with the question under investigation so they’ll want to read more, not to tell the whole story. Don’t give the conclusion of your story in the title. So,

    “MCM-related precondition gene mei-218 inhibits lig4-dependent repair and promotes checkpoint activation during Drosophila meiosis”

    might be better as

    “Multiple barriers to non-homologous DNA end joining during meiosis in Drosophila.”

    That makes me want to read on to learn what the multiple barriers are. And the declarative verbs that came into vogue among molecular biologists in the 1970s should be avoided, in my opinion.  Titles such as

    “The Type VI Secretion TssEFGK-VgrG Phage-Like Baseplate Is Recruited to the TssJLM Membrane Complex via Multiple Contacts and Serves As Assembly Platform for Tail Tube/Sheath Polymerization”

    give me the impression that I’ve just learned all I want to know about the paper. Leave the reader in some suspense, wanting to read on to see how the story ends.


Keep these suggestions in mind when you’re composing titles of your papers and I expect you’ll increase the impact of your publications. If Oswald Avery had done so, his paper (whose title is in the 2nd paragraph above) might have been titled

“Desoxyribonucleic acid as the carrier of heredity”

which might have brought a wider readership, and—who knows?—maybe the Nobel Prize he deserved.


Footnote:  I thank various GENETICS authors for donating their titles in progress (TIPs) to this post.

A glaring paradox clarified
https://genestogenomes.org/a-glaring-paradox-clarified/ | Tue, 10 Mar 2015

Last week, GENETICS published an editorial by Editor-in-Chief Mark Johnston about the influence of the Journal Impact Factor on science; the editorial also discussed an alternative metric that emphasizes the research experience of the journal’s editors. The following is Mark’s response to some of the feedback he’s received:

In my editorial I proposed a new metric for comparing journals: the “Journal Authority Factor” (JAF), in an attempt to highlight the flaws of the Journal Impact Factor (JIF) and the tendency for hiring, promotion, and funding committees to rely on it as a proxy for candidate quality. The JAF would use the average h-index (a personal citation index) of the journal’s editors as a rough indicator of their scientific experience and expertise.
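To make the arithmetic concrete, the proposed metric is nothing more than a simple average; here is a minimal sketch in LaTeX notation (the symbols N and h_i are my shorthand for this post, not terms from the editorial):

\[
\mathrm{JAF} = \frac{1}{N}\sum_{i=1}^{N} h_i
\]

where N is the number of editors on a journal’s editorial board and h_i is the h-index of the i-th editor.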

I’ve received much feedback from readers thanking me for addressing the current state of scientific publishing, but many remarked that the JAF is not a solution to the problem of reliance on impact factors.

Of course I agree.

I don’t think we should replace one flawed metric with another (and, for the record, neither does the Genetics Society of America). It is impossible to judge a journal and, by extension, the hundreds or thousands of articles it publishes every year, using a single metric.

I used the JAF as a device to illustrate the difference in research experience between the editors of the top-tier (high impact factor) journals and the editors of community-run journals. Do I think authors should concern themselves with the slight differences in the JAFs of peer-edited journals? No. But I do think the large differences between the JAFs of peer-edited and professionally-edited journals illustrate a significant problem with how the standards of our field are set. The point of the JAF is to underscore that publication decisions at many high impact factor journals are made by professional editors, and when we defer career-changing decisions to these journals, we are in effect giving those editors significant control not only of scientific publishing but of the entire scientific enterprise.

Importantly, I didn’t say, and didn’t mean to imply, that professional editors have somehow “failed” as scientists, or that they are not important contributors to our community. I believe they have an important role to play in science and scientific publishing. I just don’t think they should have such a disproportionately large influence on our fields.

It’s not only science, but individual authors who benefit from having their peers handle the review of their manuscripts. We regularly receive feedback from GENETICS and G3 authors who tell us that they benefit from the careful decisions our academic editors provide. In particular, they value the editors’ guidance, with decision letters that adjudicate and synthesize the reviews, explain the extent of changes or experiments that are required (or not) to make the story compelling, outline how to respond to reviews, and specify which comments are most important to address.

There’s no metric that will solve the problems we’re discussing. We should make hiring, promotion, and funding decisions based on candidates’ merits and promise as scientists. As I argued in a previous editorial (“We have met the enemy, and it is us”), the ultimate solution to this problem is for scientists to change our culture and stop allowing impact factors to weigh so heavily in decisions about who to hire, promote, and fund. My hope is that when we achieve this, as the influence of the impact factor diminishes, the influence of journals with academic editors will increase, not least because journals like GENETICS and G3 are directly accountable to their colleagues, to the field, and in many cases to the scientific societies that represent and advocate for us.
