How to Write a Neurosurgical Manuscript and Get it Published! A Forum of Ideas



- Hello, ladies and gentlemen. Thank you for joining us for another session of the AANS Grand Rounds. Writing a neurosurgical manuscript can be quite challenging at times, and therefore we have prepared a webinar for this purpose. During the AANS meeting in Los Angeles in 2017, the five chief editors of the main neurosurgical journals gave a talk on how to write an acceptable neurosurgical manuscript. The following webinar is a recording of that session. The speakers are Dr. Jim Rutka from the "White Journal," or the "Journal of Neurosurgery"; Dr. Nelson Oyesiku from the "Red Journal," or "Neurosurgery"; Dr. Anil Nanda from the "Skull Base Journal"; Dr. Tiit Mathiesen from "Acta Neurochirurgica"; and Dr. Ed Benzel from "World Neurosurgery." I hope that you find the following useful in your career. Thank you.

- Good morning, everybody. I'm Anil Nanda, and I'll be running this breakfast seminar: "So I Want to Write a Neurosurgical Manuscript and Make Sure It Gets Published." Our guarantee to you after you take this course is that every paper you ever write will be accepted by at least four editors immediately. They will not be peer reviewed. There'll be none of that. I'm just kidding. So that's the guarantee that comes with it. It sounds like a Trump real estate seminar, only kidding. So, you know, this is the whole pressure we face in academic existence, publish or perish, and here's somebody who published, but perished anyway. Publishing is, I guess, the elixir of immortality, but no matter how high your h-index is, it can be daunting. So we have a real panel of experts with all the major journals represented here, and we're very grateful. I just want you to make sure you thank the participants, because they're all very busy, and for them to give time this morning, we are all very grateful. So we have a real star lineup. And I just want to go over a little bit of history. Since the Edwin Smith Papyrus from 1600 BC, the diffusion of medical knowledge has occurred through written text. We're now in a period when, every five years, a huge amount of new data appears out there. And actually one of the British writers said, "Any paper, however bad, can now get published in a journal that claims to be peer reviewed." And I think that's something we should be cautious about. The one thing we all face is all these solicitations. Our emails are full of these journals that'll write, oh, we want you on the "Austin Journal," and every day there's like 15 emails that I'm sure many of you get. And they just say, "We will put you on this journal and we'll put you in." So mostly we just should be very cautious, and I think we'll address this in the panel discussion. And this was something I just thought I'd address.
One of our residents wrote a paper for a journal, "Spine," out of England. It has a decent impact factor, and the paper was something about economic outcomes. And lo and behold, the department gets a bill for $3,000 to have it published. And we had to withdraw it; it was like, we're not paying $3,000. But I think we should discuss that a little bit. And there are a lot of predatory emails out there. So some of the things we want to talk about later will be scientific integrity, the review process, and case reports. Everybody's going to speak for about 20 minutes, and then we'll make it very interactive with the audience. I have a bunch of questions. So we'll start off with Jim from the "Journal of Neurosurgery," Jim Rutka.

- Great, so thanks very much, Anil, for that introduction, and also to all of you for being here today. We hope this will be informative and educational for you. And I think, as Anil mentioned at the beginning, what do we actually leave behind when we're done with all of this in our careers and everything we've been striving to do? Obviously we look after our patients and take care of them, and there's a lot of gratification and satisfaction in that, but when all's said and done, what you leave behind on the professional side really are your publications and the things you may have accomplished, if you're in research, in your labs and so on. So it is important. And we hope that today you'll get some tips and some clues on best practices for how to publish your work in the journals. And really no question is either too complex or too simple to ask in the discussion period. I'll tell you a little bit about my perspective as editor-in-chief of the "Journal of Neurosurgery." Before I do, I wish to underscore the fact that you're very fortunate today: you have the editors-in-chief of the main neurosurgical journals here. So Ed Benzel is top left here, Nelson Oyesiku here; I show him with Murat Gunel for "Neurosurgery," Ed for "World Neurosurgery," Tiit Mathiesen for "Acta Neurochirurgica," and of course our fearless leader Anil Nanda shown in the bottom right. And Anil's been really good about leading this seminar over the past several years because he's published a huge amount himself in the literature and knows a lot about how to publish work from his own department. A little bit about the "Journal of Neurosurgery" review process and the editorial board. It's a fairly small editorial board, and you can see its members here. These are the print journals' editorial board members, which includes the "Journal of Neurosurgery," "Journal of Neurosurgery: Pediatrics," and "Journal of Neurosurgery: Spine."
And every year the board gets replenished as some members rotate off. The term of service for an editorial board member is roughly four to six years, depending on the journal. This is the staff. We are a self-published journal at the "Journal of Neurosurgery," unlike some of the other journals in neurosurgery, which means that we publish out of this office in Charlottesville. We're not part of a large conglomerate like Springer or Thieme or other large publishing groups, "Nature" and so on. So to some degree we have some limits and constraints, but we also have a certain amount of freedom, and we may be able to talk a little bit about that in the discussion period if there's interest. Just this past year, we published this report, an editorial on how to publish your best studies in neurosurgery. I did this with Doug Kondziolka, and we really want you to consider submitting your best work to the "Journal of Neurosurgery." In this particular editorial, we discussed a little bit about the review process itself for the "Journal of Neurosurgery." It begins with the allocation of manuscripts as they come into the office, by either myself or Doug Kondziolka, the associate editor; the manuscripts then go to the editorial board members, who render a decision. We also have a pool of ad hoc reviewers. If revisions are required, they'll be done, and then the manuscript comes back to me. There may be an opportunity to send a revised manuscript out for review again, and then it comes back to me. And finally a recommendation is made, either accept or reject, and that's the process. What's a little bit different about the "Journal of Neurosurgery" peer review process is that it's sequential peer review. By that, I mean that a manuscript goes from one reviewer to the next, to the next, in sequence, as opposed to simultaneous review. There are pluses and minuses to that review process and others, so we can maybe also talk about that in the discussion period.
Now these are the different categories under which you can submit your work to the "Journal of Neurosurgery," and they include clinical series, lab investigations, case reports, illustrations, and so on, as you can see here. All journals basically have instructions to authors for manuscript submissions that you should adhere to. We're seeing a lot of these; I know the other journal editors-in-chief are too. So, lots of systematic reviews and meta-analyses. And if you're going to do those types of studies, please try to adhere to these very important guidelines, and make sure that your meta-analyses are really done in conformity with the expectations. There are the PRISMA guidelines, which are for systematic reviews and meta-analyses. If you're doing reviews of clinical trials, there are the CONSORT guidelines. And if you're looking at epidemiology studies, you have the STROBE statement guidelines to adhere to. That really is very important because it tells the editorial board that you've done your homework and tried to fashion your manuscript according to the expectations of these guidelines. All right, so now let's get down to the basics of your manuscript and your submission. Writing should basically follow the four C's: clear, concise, consistent, and convincing. And I think as we go from one presentation to the next, there'll be a certain amount of repetition, which is good, underscoring the points on what you really should do to make your manuscripts as strong as possible coming in the door. On the importance of writing well: even the most novel and well-constructed study will be rejected if the writing is flawed.
I think you'll all agree that if you don't write in an articulate manner, if the reviewers have a hard time, if they're struggling with the language used to describe your work, they're going to have a more difficult time rendering an accurate opinion about your study. Here are some tips from my perspective on how you could write a manuscript. You might think, maybe I should start with the abstract, put that together, then go to the introduction, and just follow the sequence in which you actually read a manuscript. But in fact, to me, the most important part of your paper is your data, which exist in the results section. So I would say start there with your results. Let your results tell the whole story: bring your data, your figures, your imaging studies, everything together, and then write your whole story around the data. The methods are fairly mechanical. As you write your methods, there's only a certain number of words that can be used to describe certain methodologies, and a lot of them are already published to a degree. So it's a fairly mechanical exercise to write the methods, and you don't need to spend a lot of time there. What you don't want to do is copy and paste methods; you have to adjust the wording to a significant degree. The discussion is very important, and its first paragraph should ideally summarize the main findings of your entire manuscript; then go downstream from there and take yourself into other aspects of your study. And close to the end, the second-to-last paragraph should say "our study is limited because of..." and list the limitations, so that the reviewers know it's not a perfect study, and that you could've done more, but it would have taken a lot more time or more investigation and perhaps is beyond the scope of the current study. And then there's the introduction, which is the background.
You can write the abstract at the very end because you want a punchy abstract that really captures the essence of your work, so that the reviewers know, just by looking at the abstract, what they're going to see within the body of the manuscript. And then you have other supplemental materials. So spend a lot of time on your figures, making them look as sharp, as crisp, as clear as possible. These are just some images from some of my papers over the years, and I think you can see that it's very easy to understand what each one represents, adding on components of it on the right-hand side here. This is an animal study with different groups: a control group treated with saline, and other groups here with treatment. And you're looking at how the tumor responds over time. I mean, that's fairly intuitive. All you have to do is look at the figure and you kind of know what the story is; you don't really even need a figure legend for a figure like this. So I look at the figures very carefully in manuscript submissions, and I can tell whether you've slapped them together in a hurry or whether you've spent an appropriate amount of time bringing your figures together. So I would definitely spend the time. How you project your data and optimize it for presentation is also very important. On the left-hand side here is a bar graph, and it's really hard to determine the trends between these different groups over time. If you simplify that by going to some kind of linear graph model, then it's easy to see that this particular treatment group D has this kind of effect over time, and it just stands out more clearly. So there are many ways you can look at your data, and you may want to experiment, changing things from bar graph to line graph to other types of representations, to really know the best way to present your data. Okay, now let's talk about the title for a second.
So the title can catch the reviewer's attention, and here are some titles of manuscripts that were submitted to the "Journal of Neurosurgery." "The Life and Death of Lord Nelson: The Leader, the Patient, the Legend." Well, you want to read that one as soon as you read the title. "Can you see it? Retinal vessel analysis in the context of subarachnoid hemorrhage." Here was one that was submitted: "Under the Drapes." What does that mean? Well, that one didn't get accepted. But here's another piece of information for you: at least for our journal, we usually don't accept declarative titles, where you state something like "this gene causes cancer." We want you to change the wording to a degree so that you're saying something like "the gene for this particular protein is associated with cancer." So declarative titles aren't usually acceptable for the "Journal of Neurosurgery." Now a word about the abstract. The abstract is the single most important part of the manuscript, yet the most often poorly written, because, as I said, you often leave the abstract to the end. And if you do that, you're tired and you just want to get your work out; you're so close to submitting it, you're ready to push the button. And so you kind of throw your abstract together, and it may not be as well defined and orchestrated as it could be. So I would say spend time; take an extra day if you need to to get your abstract together, to make it look as good, as strong, and as representative of the data as you can, because a sloppily written abstract will torpedo your submission to most journals. Okay, the top eight reasons why your paper may not be accepted: too many words; unclear purpose or rationale; confusing structure or poor writing; poor data presentation; inappropriate statistical methods; conclusions not supported by the data, which is a very frequent cause for rejection; a lack of novelty or originality; and insufficient referencing.
So, not enough references used to stand behind the paper you've written. How else can you improve your manuscript submission? Less is more: those who have the most to say usually say it with the fewest words. So I'm really keen to see authors who have a paucity of words and put their story together in a very clear and, as I said earlier, concise manner. Now, congratulations, you have a manuscript, you've submitted it, and you've got a letter back from the editor-in-chief that says you can now revise your study. I don't know about the other journals, perhaps we'll hear from the others, but of the manuscripts submitted to the "Journal of Neurosurgery," fewer than 0.5% are accepted outright on the first pass. So you're really lucky if you get an acceptance letter out of the starting blocks; you're usually asked to revise your manuscript. And if you are, here are some principles to follow. One: ask whether you should resubmit your work to the same journal for the revisions, because the revisions may be hard. Personally, I've had revision letters come back to me where taking care of all the revisions was harder than writing the original manuscript itself. So you've got to ask yourself that question: do you want to do that? If you're concerned or have a quandary, ask the editor-in-chief for guidance, especially if there are unresolved issues. Prioritize the reviewers' comments; you'll be able to answer the vast majority of them. But don't treat the reviewer as an adversary. If you come out swinging and say "this reviewer missed the point altogether, and I can't believe the reviewer didn't understand the point we were making," there's a pretty good chance your manuscript won't get accepted, because the tone of your revision letter is not conducive to an exchange between authors and reviewers. You may disagree without being disagreeable, and I'll show you some examples of that.
Do most of the work that's asked of you. You don't have to do all of it, and you can argue against doing some of the work if that's appropriate. And wherever possible, shorten your manuscript. It's not what you say, but how you say it. So here's some of the phraseology, if you will, for a revised manuscript. "We have carefully considered the reviewers' comments and would like to respond to them point by point as follows." "Thank you for pointing out that we have labeled figure three incorrectly; we have now corrected this." "Reviewer number three has suggested that we shorten our discussion; accordingly, we have removed paragraphs four through six of our discussion section." "A reviewer has asked us to establish a new mouse model for cancer; while we agree that this would be an interesting experiment, it is beyond the scope of our current study." So you can actually argue your way out of doing major experiments if it's clear they're not within the boundaries of what you're trying to write about. "And we have shortened..." So I'm telling you, just keep shortening your work and everyone will be happy. The final paragraph of your revised manuscript's cover letter could look something like this: "We would like to take this opportunity to thank the reviewers once again for their thoughtful review of our manuscript. By attending to their many helpful suggestions, we believe that our manuscript has been improved significantly." You can use that type of wording, and I think it goes a long way toward satisfying both the reviewers and the editor-in-chief that you're sincere in your approach to tackling those revisions. There are many different ways you can structure your revision letter; some journals give you guidelines for that. Here's an example where the reviewer said this, in black, and the author of the revised manuscript says, "This comment is fully correct, and therefore we changed 'prospective' into 'retrospective,'" and so on.
That's a nice way for the editor-in-chief and the reviewers to see how you've tackled your revision, so I would say that's one way to structure your revision letter. It makes things stand out really nicely and easily for the editor-in-chief and also for the reviewers. Or this one's actually quite nice too, which is in table form. You've got the query: the calf spines are very dense compared to human adults. Here's the explanation for it. And then they describe the changes made in the manuscript. So tabular form is also a very straightforward, very nice way to do a revised cover letter as you're turning your manuscript in. Many different approaches, and some journals give you some guidance on how to do it. Okay, so your manuscript was rejected. Now what? If you think your manuscript was inadequately or poorly reviewed, you can call or write the editor-in-chief. You can describe how the peer review process was faulty. You should state precisely how the reviewers missed the point and the purpose of your study. You can prepare rebuttal letters to each of the reviewers' comments, and you may be invited to resubmit. It's worth a try. But really think carefully before you contact the editor-in-chief or send your manuscript back as a resubmission, because the reviewers often have very good things to say, and from what they say, you can understand why your manuscript did not get in the door. So sometimes it's worth a try, if you really feel strongly that your manuscript was inadequately reviewed. Well, what is the role of the editor-in-chief? The editor is really very important for mitigating bias in manuscripts that have been submitted. The editor-in-chief filters, selects, refines, and finalizes the manuscript submissions. The editor-in-chief provides a vision for where the journal is going and is at the forefront of handling misconduct issues.
And Anil talked a little bit about this with integrity. So I'll just say a very brief word, as I close, about integrity, and that is, you know, there are many different forms of research and scientific misconduct that make their way into the literature: fabrication, falsification, plagiarism. Here's a study that asks how many scientists fabricate and falsify research. Approximately 2% of researchers acknowledged falsifying data, and 34% admitted to other kinds of questionable research practices. So it's out there. How do you detect it and how do you find it? And here's a fabrication story, the biggest fabricator in science, and how this person got caught. This scientist was an anesthesiologist at one of Japan's top institutes. His name was Dr. Fujii, and he wrote about controlling nausea and vomiting after surgery with drugs. He began falsifying data in 1993. He was caught because his data were found to be not randomly distributed; they were almost too predictable, and another anesthesiologist called him out on this. So he had completely fabricated 126 papers, as if somebody were writing a novel on a research idea at a desk: 126 papers that were falsified and had to be retracted. So who's the watchdog of integrity? There's Retraction Watch. All of us as editors-in-chief face this all the time: if we have to retract a manuscript in the journal, it goes almost directly to Retraction Watch, a not-for-profit organization whose goal is to understand why these manuscripts were retracted. And they come after you, and they want to know: what was it about your peer review process that allowed this manuscript to be published in the first place? And were there any other issues that arose that led to this happening in the literature? So I've had to deal with Retraction Watch on at least a couple of occasions over the years. Nelson maybe has, and Tiit and Ed may talk about that.
But it is, to some degree, the bane of the existence of the editor-in-chief. As for scientific misconduct, all I'd say is the reverse of the Nike slogan: just don't do it. You'll get yourself in trouble. Your reputation will be smeared. You'll perhaps lose your academic appointment at your university. You'll not be taken seriously if you try to submit your work back to the journals. So just don't go in that direction. I'll conclude, Anil, by saying that we do have some tips for you in this webinar that was produced a couple of years ago now. You can go online to find it, and most of the teaching points I discussed today are embedded within that presentation. I'll conclude with that, Anil. I'll be very happy to take part in the discussion period afterwards. Thank you very much.

- Good morning, I'm Ed Benzel. I'm the editor-in-chief of "World Neurosurgery," and I'm going to talk to you a little bit about my points of view on how to manage manuscript writing. A lot of what is going to be said this morning is going to be a little bit repetitive, and I don't even remotely apologize for that because, in my opinion, repetition is good. The process at "World Neurosurgery" is not too dissimilar from what Dr. Rutka presented for the "Journal of Neurosurgery." Manuscripts are submitted to the managing editor. She then sends these manuscripts out to section editors, who send them out to reviewers they have selected from a list. After the authors revise the manuscript, it comes back to the editor-in-chief, and I either accept, revise, or reject, and sometimes will send it back to the section editors for their opinion. And if the manuscript is sent back for revision, it will come back to me and I will accept or reject. Louis Pasteur said, "Chance favors the prepared mind." So the process of writing really starts with the idea, and it takes some creativity and an open mind to observe the idea. The innovation process starts with the idea; there's usually some entrepreneurism involved, and then commercialization. Regarding the idea, sometimes we need to think outside the box and be creative. Write a history paper; use national databases, registries, and patient databases; use unique research strategies; look at economics, socioeconomics, healthcare reform. Anybody know who this is? This is a very famous person. Yes, it's Alexander Fleming, a microbiologist in the UK. In 1928, he came back from vacation and saw this botched experiment where a fungus had invaded his Petri dish and ruined the experiment. He could have easily just thrown the dish away, but instead he said, "There's something here." The substance being secreted by the fungus would come to be called penicillin.
He kind of dropped it there, though; he didn't take the idea much further. It took Howard Florey and Ernst Boris Chain to take advantage of the idea, develop it, and make it available for use through commercialization. And it actually saved countless lives in World War II due to the work of Florey and Chain. The bottom line is that the three of them won the Nobel Prize in medicine in 1945 for that work. But if it hadn't been for the latter two, Fleming's work would never have come to light. Now, we don't commercialize in this process, but we do practice a sort of entrepreneurism; that's a form of research. We get the idea, we do the research, and we then confirm or reject a hypothesis. So a hypothesis is very important. Then we gather the data, and then we ask questions. What is my motivation? Why do I want to do this experiment or write this paper? Is it information dissemination, academic advancement, conclusion-based work? Conclusion-based work means I am trying to prove a point; that may not be a good idea. Process-based research may be more appropriate, where I'm trying to find the truth. And what is the archival value of the manuscript? Be honest with yourself, and ask whether the work is worthy of dissemination. We get a lot of manuscripts sent to us that are really not of significant value or archival value. I've already addressed conclusion- versus process-based research, and I'm going to address it at the end of my talk as well. It is very important that we eliminate bias as much as possible. If we are doing conclusion-based research, in other words trying to prove a point, we're usually designing a methodology that may be flawed in order to prove that point. Surgical trials are particularly prone to this kind of flaw. So what essentially is the true value of the work? Have I optimally prepared this manuscript?
Well, lecturing and writing are roughly similar from the educational perspective of the author or the speaker: both are teaching tools and both are learning tools. We learn a lot, particularly as younger neurosurgeons, in writing a manuscript. As for writing, I recommend people write early and write often. Warm up with chapter-writing exercises, et cetera; there are plenty of opportunities to do that. Hone your skills. Use your faculty as instructors and mentors to learn how to put words together in an appropriate manner so that you convey your thoughts. And I've already stated that repetition is good; we're doing it here today. The more you write, the more effectively you will write. As learners, and we're all learners, we learn by seeing things and then hearing things. We then learn by reading and repeating. And then we learn by teaching. The process is iterative and repetitive, sort of like a spiral, so this academic process builds on itself, and you should strive to take advantage of that. Let's go to the manuscript itself. Dr. Rutka already addressed the title. It's very important to have a catchy but not too lengthy title, a title that defines the work being presented. Then we have the abstract, introduction, methods, results, discussion, and conclusion. And I would agree that the abstract is in reality the most important part of the manuscript. It's the mechanism by which you attract the reader to actually read the manuscript; if the abstract doesn't have significant eye-catching ability, the reader will not be interested in going further. Organize your thoughts. Once I've got my data and the results section is really formalized, I then want to write the manuscript, and I want to convey my thoughts so that people want to read it and can learn from it. I'm a strong believer in an outline: write an outline and then fill in the blanks.
The problem with many manuscripts is that there is information in one section that really belongs with information in another section, and the information is not clearly presented. The outline forces you to put information that relates to a heading or subheading in that section. Then you just expand the outline, fill in the blanks, and you have your manuscript, with subheadings and first-, second-, and third-order headings. That can be very useful in conveying the information. Then we have the methods section, results section, discussion, and conclusion; they can all have subheadings that compartmentalize the information. The reader is probably not familiar with what you're presenting, okay? So present the data to educate the reader who is not familiar; make them familiar. Don't waste words. We don't want to hear everything you know, okay? We want to know what we need to know as readers of the manuscript. Let me introduce you to a lady here, Nancy Bashook, whom I worked with in days gone by at the AANS. She was the director of education for the AANS at the time, and she had a master's degree in adult education. She said a couple of things to me that really resonated, and that is that adults have an increasingly short attention span. So when you're giving a talk, you want to give bullet points. When you're presenting a paper, you don't want to belabor points; you want to, bang, hit them with the information that is needed. Conclusions, as Dr. Rutka stated, should not be overstated. They should be derived from the data presented. They should be rational, and they should be useful and applicable to a clinical situation, usually. We have thresholds; every editor-in-chief has a threshold for their articles. One of the major thresholds we look at is case reports. Case reports usually are not cited very much.
They deteriorate and degrade the impact factor. But if they present unique information, rare (not just uncommon) cases, and unique lessons learned, we look at them; in general, though, we're going to be hard on the case report. And again, I keep emphasizing these two words: archival value. Is the information you're presenting in your manuscript worthy of putting in the literature? Bias and conflict of interest are a huge problem, probably much more common than we think. We can have bias at the journal end in the selection of the section editors. At "World Neurosurgery," the managing editor, who is not a neurosurgeon but a clerical person, actually selects the section editors in a sort of random process. Then the section editors randomly select from the reviewers. And the ultimate decision regarding publication of the manuscript is mine. The literature, particularly in surgical trials and in particular in spine surgery, is relatively flawed. There's a lot of anecdotal information, IDE studies, bias from non-randomized designs, systematic error in study design and conduct, and market and academic pressures that significantly influence these manuscripts. There's a tremendous amount of obvious and non-obvious conflict of interest that relates to bias. There can be surgeon bias: the surgeon can be biased in many ways, in designing the methodology, in actually running the trial, in selecting patients that need revision surgery versus non-revision surgery, et cetera. And then there's patient bias, winner/loser bias. If a patient wants to be in a trial for, say, an artificial disc because they want to get the artificial disc, and they get the artificial disc, they won the lottery; if they don't, they didn't. And the process can be very flawed in establishing what really is the truth. And I'd like to refer you to an article written by David Casarett in the "New England Journal of Medicine" in March of 2016, called "The Illusion of Control."
There he talked about the therapeutic illusion: "The outcome of virtually all medical decisions is at least partly outside the physician's control, and random chance can encourage physicians to embrace mistaken beliefs about causality." We have to be very cautious of that, and we as editors must keep this high on our radar screens and recognize that many manuscripts that on the surface look well polished and well done, deep down present misleading or erroneous information. That therapeutic illusion can lead to a confirmation bias, where we go down the wrong pathway and make bad assumptions about the results. So conclusion-based research, research done to prove a point, is very different from process-based research, which is research designed to find the truth. Usage and grammar are terms we throw around a lot. Grammar is anything that has to do with sentences, punctuation, or the correct way to write or speak a language, and usage is the way in which words or phrases are actually used, spoken, or written in a speech community. The bottom line is that if your manuscript doesn't read well, it's going to have a difficult time being published. So work on this. And if one of your coauthors or somebody else in your institution can help you in this regard, take advantage of that. Plagiarism is a dirty word: the practice of taking someone else's work or ideas and passing them off as one's own. In our line of work there are in reality two kinds of plagiarism: malicious plagiarism, which is plagiarism on purpose, and inadvertent plagiarism. Fortunately, the latter is much more common than the former. All journals, I believe, use some sort of cross-check mechanism to scan articles and look for lines that have been copied verbatim from other manuscripts. And here we see this paragraph copied from number four, from this manuscript right here.
Most plagiarism is inadvertent and most commonly comes from countries where the author's first language is not English. I believe what happens is that they want to make the paper look good, so they find a line from another journal and copy and paste it into their article. If it is excessive, we simply ask the authors to revise it in their own words and resubmit. Data fabrication, which Jim talked about, is not common, I think, but we must be very cognizant of it and watch for it. Dr. Mathiesen and I talk to each other, so if one journal picks something up and we see that it's going to another journal, we pursue that information and try to rid our literature of it. But it's a difficult process. The anesthesiologist Dr. Rutka spoke of published 126 papers before he was caught. So it's a sad, sad state of affairs, but it is something we must maintain a high level of consciousness over. There is a significant amount of heterogeneity in the responses of reviewers. You probably see the same thing, maybe not so much at the "Journal of Neurosurgery," where it's a sequential review, but at "World Neurosurgery" I'll often see an accept, a reject, and a revise from three different reviewers. And you kind of get caught in the middle. One reviewer might be adamant about rejecting a manuscript while another thinks it's the best thing since sliced bread. So sometimes I will even send these out for further review to get more consensus, because of the sanctity of the peer review process. It's not just a decision the editor-in-chief makes; the editor-in-chief must make it with the knowledge of what the authorities in the field think. Revision. One of the best ways to piss off an editor-in-chief is to be argumentative throughout the revision.
We want to see substantive changes in the manuscript, usually in red or yellow and designated in a way that lets us easily see what you did to revise it. Occasionally I will receive a manuscript back in which the author has just been argumentative about every point and didn't change a thing in the manuscript. That's either going to get rejected or sent back with a nasty letter, at least from me. What if it's rejected? We've already discussed that a bit, but ask yourself: what is my motivation? Is this really worthy of publication? Have I optimally prepared this manuscript? Do I trash it? Do I resubmit or submit elsewhere? Heed the critiques. Take them very seriously. Honestly revise the manuscript, self-assess, and communicate with the editor-in-chief. I've had people beg me. They say, "This is really a good manuscript, I know it is, and it's rejected. Can you reverse the process?" No, I can't. This is a peer review process, the sanctity of which we must maintain as editors-in-chief. So I say, "I welcome you to revise it and resubmit it to our journal, but it's going to go through the same process, probably with a different section editor and different reviewers." I think that's fair, and I've had several authors do that and meet with success. And then resubmit. Make sure you heed the critiques of the reviewers and be very, very thorough and honest with yourself. I thank you very much.

- Thank you, Ed. Next we have Tiit Mathiesen.

- Thank you very much. Of course, we overlap to a large extent, so I'm going to dwell on some topics and not so much on others, because naturally we have several points that are exactly the same. The mission of our journal is to increase neurological knowledge, to print relevant scientific articles, and to maintain a relevant discussion. I'm going to highlight a little how we look at science, and I'm also going to comment on the review process at "Acta." As for any journal, manuscripts go to the editorial office for a check of the basic formal requirements. Then they go to the editor, and I personally read all manuscripts briefly. Most manuscripts are then sent out to independent reviewers, usually one to five. Comments come back. Depending on the comments and on the possible agreement or disagreement of the reviewers, we may select a few more. Based on the comments, we make a decision. We usually like to have a dialogue with the authors. We recognize that the authors have put a lot of effort into writing their papers up, and we try to help find the essence that may be worth publishing. There are two sides to the evaluation of a manuscript: one is the content, the other is the form. I'll dwell a little on the content, and I'll give some examples; of course, both Jim and Ed discussed what we may consider and what we will not consider. We will certainly reject articles if there is not sufficient scientific novelty, if there is no relevant gap of knowledge, if there is a failure of scientific reasoning or method, or, of course, if there is a breach of publication ethics. As Ed mentioned, we get, as all journals do, a lot of case reports, and many of these we do not think are really relevant to publish. A typical case is: this is the 18th case of chordoid glioma; diagnosis and surgical management will be reviewed.
Now, honestly, I think the 18th case is probably not very different from the 17th or the 16th, so such a paper will be rejected by me as editor-in-chief. We get papers saying the unique combination of a chordoid glioma and an MCA aneurysm has not been published in the literature. It also will not be published in the literature. Of course there are many ways to deal with these rare cases, and we see many instances where it is the 18th case, but somebody presents all 17 previous cases, treats them as if they were their own case series, and draws conclusions. Again, not scientifically very valid. What we would publish is a case with something unique to it. For example: the 18th case of chordoid glioma; surgery was refused; we treated with an mTOR antagonist; complete remission. This we will publish, but you see the difference between what is unique and what is not. So, a case series without controls? Also, as the Japanese word has it: bad, don't publish. Low quality of science, if you follow evidence classification. But we would publish it if it is really a relevant test of a novel hypothesis. For example, if one retrospectively analyzed 20 consecutive patients who underwent a new treatment and all were free of recurrence at five years of follow-up, we would publish this, because it is unique. And if somebody else tries to publish a similar series and finds out that almost all patients died from hepatic failure, again, we will publish; this is relevant. Or if someone in another part of the world publishes a case series of five patients and finds the same outrageously unique finding, a corroboration, we would publish this even if it is, let's say, very low quality data. "We show our results and discuss management": this is the commonest structure we see, and it is usually not novel, and there is no defined gap of knowledge.
I have recently rejected a number of papers where people have looked at cavernomas. The cavernomas were found because of hemorrhage, they were treated with radiosurgery, and then they were followed for one or two years, and the authors find that the cavernomas did not bleed to a very large extent after treatment. This argument is common in the literature; many papers are published on this, and in the last month I rejected three papers like this. They said, "All patients were followed for a short time before they were treated. The annual bleeding rate was then 57%, because they were diagnosed from bleeds. Only two bled during one-year follow-up, so the bleeding rate was significantly reduced to a 20% annual bleeding rate." I'm sure you have seen papers like this, and I'm sure you realize this doesn't make sense. Suppose there were incidentally found cavernomas, you treated them with radiosurgery, and two bled during one year. Then you have increased the bleeding rate from zero to 20%. So you must look at the scientific reasoning. We want to publish novel findings. I think it's relevant to publish negative findings; too few negative findings are published. If it is a good study, negative findings can be extremely relevant, and I'll come back to why this is also philosophically and theoretically relevant. A well done randomized controlled trial for a relevant hypothesis, of course. Successful repeated trials, also very important for our science. Failure to repeat trials is important. And we must remember that novel science often depends on serendipity; best-evidence, confirmatory trials are usually not very novel, and they are quite boring. They are necessary to publish, but usually not very novel.
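The ascertainment-bias arithmetic in the cavernoma example can be sketched in a few lines. This is a toy calculation, not from the talk: the cohort size and observation periods are hypothetical, chosen only so the crude rates land near the 57% and 20% figures quoted above.

```python
# Toy illustration of the ascertainment bias described above.
# A cohort selected *because* each cavernoma bled looks as if it has a
# high pre-treatment bleeding rate; the same post-treatment rate would
# count as an *increase* in an incidentally discovered cohort.

def annual_bleed_rate(bleeds: int, patient_years: float) -> float:
    """Crude annualized bleeding rate: events per patient-year."""
    return bleeds / patient_years

# Hypothetical numbers: 10 patients, each observed ~1.75 years before
# treatment, and all 10 were diagnosed because of a bleed.
pre = annual_bleed_rate(bleeds=10, patient_years=17.5)   # ~0.57, the "57%"

# After radiosurgery: 2 of the 10 bleed during one year of follow-up.
post = annual_bleed_rate(bleeds=2, patient_years=10.0)   # 0.20, the "20%"

# Same post-treatment numbers in an *incidental* cohort, whose true
# baseline rate is near zero: 20% would be an increase, not a reduction.
print(f"pre-treatment rate: {pre:.0%}, post-treatment rate: {post:.0%}")
```

The apparent "reduction" comes entirely from the denominator of the pre-treatment rate, since every patient entered the series through a hemorrhage.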
All of this leads back to the growth of knowledge. Many people think that growth of knowledge is the constant accumulation of verified data or verified facts, and I don't think this is right. Instead, knowledge is constantly changing; beliefs are challenged. It's not that we necessarily have more verified facts today than we had a hundred years ago, but we have different ones. Science is wider than medical evidence. Sir Michael Rawlins wrote that "the notion that evidence can be reliably placed in hierarchies is illusory, and the findings of randomized controlled trials should be extrapolated with caution." Please look at this. This is important when we think about what we consider science and when we structure our articles. There is a very famous article by Ioannidis, published in 2005, which presented evidence for the lack of evidence for evidence. He looked into 49 highly cited clinical studies and found that 24% of these were never challenged, only 44% were replicated, 16% had much smaller effects than the original, and 16% were contradicted. So again, this leads to the idea that science comprises theories, not verified facts. These theories provide causality, an explanation of an underlying reality, and prediction of the future. That is what a scientific theory is: it allows us to predict the future. And when we do a scientific inquiry, we really investigate whether or not we are successful in predicting the future with the explanations we have. So a scientific theory is based on observations that lead to theories; these theories allow the deduction of hypotheses; the hypotheses can be tested experimentally; and the theory is either corroborated or falsified. This is what we want to see in a scientific article. Carl Hempel uses the example of, I'm sure you recognize him, this Hungarian gentleman, Ignaz Semmelweis.
He uses Semmelweis's work on puerperal fever at the Allgemeines Krankenhaus in Vienna as an example of scientific reasoning in medicine. I'm sure you know about this. There were two maternity clinics, and the death rate in the two clinics was very different. People in Vienna knew that the risk of death was very high in the first clinic, and they did everything they could not to give birth in the first, but in the second. So they tried to work out the cause of this difference. They had a number of explanations. The first was miasma: some sort of poisonous air that would be present and cause these deaths. This was easily refuted by the fact that the same air was present in both wards. Was it the number of patients in the wards? There were small differences; they adjusted the number of patients in the wards, but there was no effect on mortality. The patients were tended by medical students in clinic number one and by midwife students in clinic number two. There was a question of whether the medical students were unskilled, but going back to the theory, they found that the medical students were actually better trained, so it could not be a question of skill. The patients had different positions when giving birth: in one ward they gave birth lying on their backs, and in the other ward on their sides. How was this hypothesis tested? They changed the positions, but there was no effect on mortality. A very strange thing was that when somebody had died in one of the wards, there would be a small procession led by a priest and a choirboy holding a bell. And this small procession passed only through ward number one. So they thought maybe there was some psychological effect of this.
So they changed it and had the priest make a detour through both wards, which I think may be ethically strange if you think it would increase mortality in ward number two, but it did not: no effect. What finally happened is that Ignaz Semmelweis was doing an autopsy with one of his colleagues, Jakob Kolletschka, and Kolletschka happened to cut his finger. Three days later, he was dead, with exactly the same clinical picture that the young mothers were dying from. So it was sepsis, the same as puerperal fever. The explanation was that there is a substance in corpses which causes this, and the medical students in clinic number one were carrying this substance to the maternity wards, leading to the deaths of their patients. Now, this hypothesis was tested by a treatment for this poisonous substance: they started using a chlorine handwash. And you can see that the death rate sank directly to almost zero. So this was proof of the concept that there is a mystical substance in corpses which transmits death. This is the structure of scientific explanation. According to Karl Popper, there is no experimental verification of theories. You can only test your hypotheses, and as long as you cannot falsify them, the theories are corroborated. I'm sure you know about his discussion of the white swans and the idea of induction. If you have a hypothesis that all swans are white, you cannot really prove your point by looking for white swans, because you will find a number of white swans, but it is still not theoretically proven that there is not one single black swan anywhere. Okay, so what is experimentally, theoretically, philosophically strong is this: if you go to Australia and you find a black swan, then you have falsified your hypothesis. But you do not verify it by finding more white swans.
So Karl Popper writes that "the standards of objective truth and criticism may teach the individual man to try again and to think again, to challenge his own conclusions, and to use his imagination in trying to find whether and where his own conclusions are at fault. They may teach him to apply the method of trial and error in every field, especially in science, and thus, they may teach him how to learn from his mistakes, how to search for them. Those standards may help him to discover how little he knows and how much there is to know." The core principle that should guide all scientific efforts is disinterestedness: acting for the benefit of a common scientific enterprise. So there is an idea that there is a truth out there which we can find by challenging what we temporarily believe. The content should not depend on the form; it should depend on something else. You should not design your article from an accepted form but from the scientific inquiry. And here I do not agree with Jim: in my mind, the results are not the first and most important thing. I usually like to look at the gap of knowledge. How did you define the gap of knowledge at the end of the introduction? The introduction should be where you give an overview of the field and really define what we know, what we don't know, and why I am doing this inquiry. So I think it all starts from the gap of knowledge, and the gap of knowledge is where we define which methods and which data will be relevant to analyze it. So much for the content. Of course, we need a structure to communicate what we have found. At "Acta," and I'm sure Nelson will speak more about this kind of checklist, we have a checklist for authors to avoid some of the pitfalls of structure and to be able to publish their data in a good way. The important things in the introduction, I think, are the relevant background and a well-defined gap of knowledge.
That is really the essence that determines which data and which methods will be relevant. I don't have a slide for the methods, but the methods should in principle allow anybody who reads them to repeat what you have done, because reproducibility is really a cornerstone of the scientific method. Naturally, the results must be comprehensive. The discussion is the place where I think authors frequently make it difficult for readers and reviewers to appreciate why the study was done and what was found. A discussion should be symmetrical to the aim, and it should show very early how the gap of knowledge was filled. It is very nice to start the discussion with the brief main findings and then go on to critical analysis. Are the data reliable? Are the data relevant? Is there external validity? Do they agree or disagree with previous findings? And then, what are the implications? It's basically the same thinking Jim was describing, but it should be a critical analysis of the data and not a long essay based on the previous literature and your own findings. It shouldn't be a textbook chapter on the topic, and it should definitely not be a sales pitch for why you think your data are so great. It should be disinterested, objective reasoning. I'll also comment on publication ethics. Some of the things that come to the editorial office or to the reviewers we either spot early or find later on to be unacceptable for ethical reasons. One paper that I found a number of years ago came from a prestigious American institution: a number of consecutively treated, endoscopically operated colloid cysts, sizes between three and 40 millimeters, 35 patients, long-term follow-up. A couple of the reviewers commented on several things. The length of follow-up was a little unclear and was different in the text than in the tables.
We questioned a colloid cyst that was only three millimeters: what was the indication for surgery? So it was sent for revision. The revision came back and the data had changed. The three-millimeter cyst had been taken away and changed to a seven-millimeter cyst. One follow-up, which was stated to be 86 months, which was not physically possible with the dates that were given, was also changed, but without comment. And of course we rejected it, because we didn't believe this. It was later published in another journal. Here is a nice paper where the authors have fallen in love with a theory that in meningiomas you can find areas with different activity: you can find hotspots on a PET scan, which you see on the left, which then have a different proliferation index. So they showed images on the right side with MIB-1 staining, a specimen from the blue area above and one from the red area below. And you see, they do look different. And the reason they look different is that they are differently magnified. So I think we will just sum up. I had a little story here of a paper we are working on right now which is clearly fabricated; for the sake of time, shall I show that paper? Yes, it's a nice one. We had this paper that had been going back and forth to and from the reviewers. It's on experimental subarachnoid hemorrhage. It was a very ambitious paper, but it was a little difficult to understand how the study had been done. It was ambitious, as I said, because there were 56 rats and they had done a lot of things to the rats: experimental subarachnoid hemorrhage; they had taken CSF three times during two weeks from the cisterna magna of the rats and evaluated it; they had taken cortical samples of the brain and studied them, and a lot of other things.
But it was difficult to understand how many rats had which things done to them, and things like that. The reviewers asked for a flow chart, and it was still difficult. I started looking at it, and it was difficult to see what they had done. They had really perfect results; everything agreed with the hypothesis they had. This was so great that I wondered: this must be a big group doing a lot of good work. But they had referenced only one paper from their own group, from five or six years earlier. So I looked in PubMed. It turns out that the same group had published more than 20 papers during the previous 18 months on experimental subarachnoid hemorrhage. You'll see the beautiful pictures where they look at the basilar artery in the different groups. Looking at some of their previous papers on subarachnoid hemorrhage from those 18 months, you can see that they were reusing some of the arteries. Some were turned; some were upside down; they changed the contrast. This one we had not yet accepted, so we rejected it because the methods were not transparent. But we had published a previous paper from the same group. What really caught them is the ethics permission. They said that this study was in agreement with the Helsinki Declaration of ethics, which is very good, but the Helsinki Declaration does not deal with rats; it deals only with patients. So this clearly could not be true. And then there is a procedure to follow. We wrote a letter to the corresponding author and asked him to explain and to supply the ethics permission, all of these things. He or she did not reply. Then we wrote to all the authors. They also did not reply. So now we are writing to the university, asking them to investigate this. So I think we can help individuals to be virtuous, and I think maybe we can bring up how to deal with this.
I think a system for third-party data validation could be discussed. Thank you.

- My name is Nelson Oyesiku. I'm editor-in-chief of "Neurosurgery," and I'd like to provide the perspective of "Neurosurgery" to this panel. We have two main journals: "Neurosurgery," otherwise known as the "Red Journal," and "Operative Neurosurgery," which is the "Blue Journal." "Neurosurgery" has been in publication since 1977, so this July will be the ruby anniversary, the 40th anniversary, of the journal. The two journals are rather distinct; they provide two different options to authors. "Neurosurgery" is the main research arm of the publication group. The types of articles attracted to "Neurosurgery" are clinical studies, which may be institutional or individual-surgeon based, or consortia, multi-institution, or multi-national clinical trials, RCTs or even non-RCTs, but clinical trials nonetheless; and clinical protocols, which are really the recipe that antedates the actual data from clinical trials. So the protocols are the how-to, the what-we're-going-to-do bit, and if protocols have been reviewed by major granting agencies, then we would welcome publication of those. Then laboratory research, which may involve human or animal tissue, and research based on animal studies. Review articles may be qualitative or quantitative, and these may be invited, but not necessarily so; oftentimes we get unsolicited reviews, and those are welcome as well. Then there is a designated category known as special articles, and full-length commentaries that are typically provided by reviewers of papers that have been published. And then case reports. Whilst it's true that case reports are rarely published, it is also true that most of the first seminal observations of human disease started out with one case. So if you're lucky as a journal to be publishing the index case of a novel human condition, well, you are very lucky indeed. Rarely does that happen.
Letters to the editor, correspondence, and so forth bring up the last bit. For "Operative Neurosurgery," the article types are as follows. These mostly concentrate on the technical aspects of our specialty: multimedia surgical videos, which may be cadaveric, intraoperative, or combinations thereof; complications of neurosurgical procedures; assessments of new instrumentation, devices, or techniques; novel operative techniques or nuances to existing operative procedures; combinations of instrumentation and technique; surgical anatomy, particularly cadaveric, though in this day and age, because of all the digital tools, some anatomical presentations are virtual; technical case reports; and finally case series, again institution based or individual-surgeon based. We recently launched a new portal for "Operative Neurosurgery," a very robust digital platform known as The Surgeon's Armamentarium. It has a whole slew of content, including cases, compendiums, anatomical dissections and illustrations from Rhoton, puzzles, the human cerebrum, and all the content from "Operative Neurosurgery," "Neurosurgery," and our video library, all tagged, annotated, and eminently searchable, to deliver content digitally to the end user. Now, just some key facts about the journal that authors might want to know. These are averages, and averages mean just that: they're the means between the two extremes. On average, we try to get papers back to the authors, and we succeed, at about 21 days to first decision, and then posting online after acceptance within two weeks. We publish in all subspecialty categories, and the journal is read across the world in 96 countries. We have podcasts in nine languages, so the reach of the journal is very wide indeed. When I took over in 2009, we converted from our house style to the AMA style. AMA style is more recognized, more uniform, and used across the world.
Most people recognize the AMA style, so we've been very pleased with that, and we're going to stick with it. Not all papers are treated equally, and the most impactful papers are the ones we are most interested in. We have what is known as the High Impact Manuscript Services, HIMS. That is what the journal preferentially publishes. For those kinds of papers, we allocate the pride of place of rapid reviews. We provide author incentives: a waiver of publication charges, open access, cover consideration, and promotion in the lay press, on social media, and of course in print. So if you have a paper that qualifies for the HIMS program, it goes to the top of the pile. That is an overview of the journal from the standpoint of our philosophy; now I'll talk about our peer review system. We started the double-blind peer review paradigm about six years ago, and it is exactly what it says: a double-blind system. Neither the authors nor the reviewers are privy to the identity of the other party until the time of publication, when everything is unmasked. The editor, however, myself as editor-in-chief, or one of my section editors, is of course privy to both the reviewers and the authors. We believe this system, whilst not perfect, and there really is no perfect peer review system, provides some measure of fairness and confidentiality to preserve as much as possible the fair exchange of objective assessment back and forth. So we tend to stick with that. Our peer review goes in three fundamental stages. We start with what's known as a triage; anybody who's served on a grant panel knows exactly what a triage is. The triage is done by myself as editor-in-chief or one of our section editors, and the purpose of the triage is really very clear.
It's to determine whether or not we're interested. The reasons we might be interested may of course be legion, but the fundamental questions are: number one, are we interested in this paper, and number two, does this paper have legs? In other words, is it worth putting this paper through the grinder of the peer review system, with the cost and time involved, or is it better to notify the authors that this is not likely to see the light of day under any circumstances, so that we can all move on? When you get beyond the triage, we go into the peer review, and I'll describe our peer review system in a minute. And then there is the post-publication stage: once we've published a paper, we want to track it. We want to see whether or not that paper truly performed as predicted. So about a year out, we'll do what's called a post-publication review with our standard bibliometrics. What we want to know is the average number of citations and, if it's been cited, who's citing it. That gives us feedback year to year as to whether our predictions of whether this was a good paper hold true. We've been following that paradigm for the last five years, and it yields a lot of data, particularly for me and the section editors. Our section editors, as you might expect, are specialty based, in the usual subspecialty areas of neurosurgery, and their role is to advise and assist the editor-in-chief in both the review and the adjudication of manuscripts. The adjudication of manuscripts is both an art and a science, right, because it is weighing opinions, much as a chief justice might do with a panel of opinions on a court. In other words, weighing opinions and then rendering a decision is a very important role of the editor-in-chief and the section editors.
And so having that back and forth with the section editors is very important. We also have a biostatistician and a methodologist assigned to each section, and they bring in a different perspective. Their role is to advise us as to the quality of the study: was the study, as executed, done to standards, and have the data that were generated, the results and their analysis, been up to standard? They provide a different perspective; they're not reviewing the science per se, but rather the conduct of the clinical study as well as the data analysis and presentation. Then under each section we have associate editors, usually about five to seven, who assist each of the section editors and provide a consistent group of available reviewers within each subspecialty. In addition, each section can call upon a database of about 2,000 ad hoc reviewers from various areas of the allied clinical neurosciences that may not be within the knowledge base of the typical neurosurgical editorial board member: pathologists, neuroradiologists, and so forth. Once the review is done, reviewer recommendations are assigned in broad categories. The final category is the triage category, the withdraw/outright reject, which I talked about already. We recognize that many of our authors come from all over the world, and it is not the case that everybody speaks or writes English with facility. I don't write French with facility, nor do I read French with facility, and I shouldn't expect anybody who is French to do so for English. So we've provided English language services. If we're persuaded that the science is fine, we'll take care of the English: we'll refer the authors to an English service provider.
That provider will do the first massage of the paper, and the author will bear that cost separately. Then it comes to our copy editors and on to proofreading. We accept no responsibility for that interaction, but we do provide it as a service. I know this has been talked about by my colleagues: reporting guidelines have been around now for several years, and they've been a very important yardstick for the standard of reporting and the way papers are written. Thanks to the EQUATOR Network and like organizations, there is now a plethora of guidelines available for virtually every type of paper, including case reports, RCTs, and beyond. This does two things. It introduces a measure of discipline for the author, making sure they've covered all the pertinent bases and can direct the reviewer and the editor to the place in the paper where each item is dealt with. And it also helps us on our side, making sure that we've looked at every tiny nuance and detail and left no stone unturned. For RCTs, you have to adhere to CONSORT or similar guidelines; it's pretty straightforward and allows us to understand the design and so forth. For systematic reviews and meta-analyses, PRISMA is the go-to guideline, and again, for the same reasons that Jim alluded to. For observational studies that are not RCTs but are being reported as meta-analyses of reviews, MOOSE is the go-to guideline. And STROBE is now pretty much the workhorse for most clinical studies, because it covers cohort studies, case-control studies, cross-sectional studies, and so forth, which are a large proportion of what we see. We've already talked about the issue of plagiarism. Plagiarism may involve other people's work, but it can also be self-plagiarism; there are two aspects to that.
And it can lead, unfortunately sometimes, to publication ethics violations and discipline. Several years back, in about the second year of my tenure, one of these plagiarized papers came to our attention and ultimately had to be withdrawn. So we take that very seriously. And it's very easy these days to take a paper, run it through plagiarism-detection software, and in a matter of a few seconds it will grind out that information. I'll talk now about a few things regarding manuscript preparation, and then we'll close with the panel questions. In terms of the kinds of papers that get the most attention: novelty. But I'll also add that, with science being what it is, it's sometimes important to confirm other studies, because a reported study may be a black swan that does not represent generalizable information. So confirmatory studies, even though they're not novel, are sometimes quite important, particularly when they're larger than the initial study that reported an effect or a finding. So novelty, yes, but it has its limits. Mechanistic and descriptive studies are the other two things of interest to us. From the standpoint of clinical material, we see more descriptive work than mechanistic, and in the basic sciences, of course, we see more mechanistic work than purely descriptive. In general, once a decision has been made to conclude the study and send it off to a journal, I think it's important to go through the process and make sure everything is truly complete. In other words, has the hypothesis truly been tested? Are the data truly adequate? And are the study findings contestable or not? If you can crystallize all of that and deliver the message in one or two sentences, then you've probably got something to say.
We've talked about things not to do, and we've covered all of these. I'm not going to belabor this except to say that items five and six get very little attention. We sometimes take them for granted, and we shouldn't, because if a paper is published with ethics problems in animal use or human data procurement, that can also lead to trouble. So don't overlook those two items; they're very important, and many times authors will not provide information as to whether they have been adhered to properly. Reasons for rejection: obviously we've covered several of these. Poor writing is not as much of an issue in my mind; if it's poorly written, we can fix that, provided the science is fine. And of course, if it doesn't achieve a high enough priority for our journal, that's another issue. This is a topic that I'm very, very passionate about: brevity. There's no question that we can do a whole lot better on this. Every now and then I make a decision to accept on condition that the paper is trimmed by 15% or 20%, and every now and then I get an email back saying, "We cannot trim the information. It would ruin the paper." I say, "No, it won't. It'll make the paper better." This is from Strunk and White, one of my favorite books. It's a very small monograph about writing, and if you have never read Strunk and White, you should. I won't go into the details of that quote, but you get the message. Let me give you some examples. These are really cool examples, and I'm sure you can think of many of your own. "A considerable amount of" becomes "many": four words down to one, a 75% reduction. "On account of" becomes "because." "A number of" becomes "several." "Referred to as" becomes "called." "In a number of cases" becomes "some." "Has the capacity to" becomes "can." "It is clear that" becomes "clearly." "It is apparent that" becomes "apparently." The list goes on. And then you get the hyperbole business.
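As an aside for readers who edit electronically, the substitution list above can be sketched as a small script. The phrase-to-replacement table is taken from the talk; the `tighten` function name and the sample sentence are my own illustration. A mechanical pass like this only swaps phrases, and it won't repair grammar or capitalization around them, so treat it as a first draft of an edit, not a final one.

```python
import re

# Wordy phrases from the talk, mapped to tighter equivalents.
WORDY = {
    "a considerable amount of": "many",
    "on account of": "because",
    "a number of": "several",
    "referred to as": "called",
    "in a number of cases": "some",
    "has the capacity to": "can",
    "it is clear that": "clearly",
    "it is apparent that": "apparently",
}

def tighten(text: str) -> str:
    """Replace each wordy phrase with its shorter equivalent, case-insensitively."""
    # Try longer phrases first so "in a number of cases" wins over "a number of".
    for phrase in sorted(WORDY, key=len, reverse=True):
        text = re.sub(re.escape(phrase), WORDY[phrase], text, flags=re.IGNORECASE)
    return text

print(tighten("The technique referred to as embolization has the capacity to help."))
# -> The technique called embolization can help.
```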
You know, it's either important or it isn't. Extremely important? I don't really think so. Close proximity? How close, cheek by jowl? Summarize briefly? Well, a summary is already brief. Very deep? Just how deep? And so on. There are many examples such as these, and keeping manuscripts short saves pages, saves postage, and in the long run is healthy for everyone. The quintessential example is the paper by Watson and Crick in which they described the molecular structure of DNA; I think they barely filled the page. This is the beginning of the paper and these are the references, so one page. The story of life on one page. I think we could write papers on half a page if we had to. So this is a seminal example. The way I think a manuscript should begin is with the simplest part of it: the title. What's the title? That's the first thing you need to do. The second easiest part is the methods, because even for a clinical study you have a protocol, and for an experiment you have methods. The methods are known well in advance of when the data come out, so you can write your methods and your protocol the minute you decide to do a study, and you do that right after the title. Then, as you generate information and data, the results begin to accumulate, and you start grinding out your results section. Then come the introduction and the discussion, the two ends of the manuscript: the introduction framing the question based on the gaps in the body of knowledge, and the discussion putting everything into proper perspective and delivering the message. And as Jim has already alluded to, the abstract is that final piece, the real take-home, where you have to encapsulate everything in 250 words or less, and every word is priceless.
You've got to use every word, and every word has to carry a punch. And then of course, the final piece is the references. Active, crisp, informative titles really carry the day. Three times out of four, I can decide based on the title alone whether a paper is even going to go; you get better as you mature in the job, and my mind starts turning before I read the abstract. The abstract is critical, because from the abstract anyone should be able to get the gist of the story, the why, the what, and the wherefore, without further ado. If you can't say it in the abstract in 250 words or less, then it's going to be very difficult to be persuasive even after 50 pages of manuscript. The major conclusions and the significance of the work should be there, and the abstract should be written and rewritten until it's flawless. The introduction builds the case for the study, provides a brief background, and states the central hypothesis in a sentence or two. As I said before, the materials and methods are a gimme: you already know the protocol and the methods, so you can get that section done before you have a single data point. You can tweak it as experiments change or you modify things, but fundamentally the elements of the materials and methods are spoken for even before the experiments begin. Obviously there's only so much flexibility you have with established methods, so repetitive use of information there is very common and cannot be considered plagiarism per se, as long as it's appropriately referenced with footnotes and, if necessary, quotation marks. Results should be presented with as few words as possible; tables, figures, flow charts, and so on, the more the merrier. Color, lots of color, and enhancements are very useful there.
The discussion should really be broken up by subheadings, and the first order of business is to answer the question posed at the end of the introduction. Essentially, the last two sentences of the introduction should be addressed in the first paragraph of the discussion if it's done right. Then the conclusions and so forth are related to extant knowledge. Toward the end of the discussion is where you take up discrepancies with the literature, limitations, generalizability, and so forth. The key results in the discussion should likewise be set off by subheadings. And of course the shape depends on the type of paper: a laboratory experiment is going to read differently from a clinical study. Finally, the references: be highly selective about the use of references, and make sure you have truly read them and that they are appropriate for the citations. As for my pet peeves, I'm not going to go into all those details, but you'll see them in the online presentation. Responding to reviewers is very important. Do that when you're calm and collected. Be enthusiastic. Address all the points that have been made. Be tactful, and get help from all the other authors; it shouldn't just be the corresponding author who carries the water. Revise, revise, revise. Keep the paper on ice for a week, come back to it, and double-check everything. There's no excuse for spelling typos; that just shows laziness, because most computers can pick up spelling errors, and it shouldn't be the case that a spelling error passes through. So with that, I will stop, Anil, and hopefully we'll be able to do some questions. Thank you very much.