Wednesday, March 6, 2013

NEJM Open Access Round Table

As hours of lecture pass by, I, like most of my fellow students nowadays, flip through my smartphone to check Facebook, check my email, check Facebook again, check the weather for today and tomorrow while hoping for a nice weekend, and then zone back in to realize I've lost five minutes of a lecture critiquing a journal article.  As more medical schools emphasize evidence-based medicine and work the appraisal of research articles into their curricula, it was all the more striking to find on my smartphone a discussion we've been having on this blog for a year and a half.

I have a free app from the New England Journal of Medicine which provides weekly editions of its articles (again, free of charge), and I became engrossed and totally failed to pay attention to the appraisal of a paper on dexamethasone for treating bronchiolitis.  (I read the article, from NEJM ironically enough, and it's a solid one, but I digress.)  I failed to pay attention because my app showed me this week's NEJM (February 28, 2013; Vol. 368, No. 9; pages 785-793) topic of focus: open science!

I don't want this to become an advertisement for the free app, or to have OSK endorse NEJM over other publications (though, in my experience, they do provide electronic articles to readers free of charge).  But there was some fantastic commentary both for and against open access.

Check out the link.  And I'll try to pay more attention in class.

Friday, January 18, 2013

OSK solves the journal problem

I've enjoyed writing little pieces for OSK. Unfortunately, as I leave Kentucky, my writing days for Kentucky's open science source of record must end. I want to leave this last piece documenting thoughts my colleagues and I had when starting this endeavor a year or two ago. We tried to bring it to fruition, but we simply did not have the time or resources to do so. We hope one of you will.

This is not a comprehensive characterization, but it's the beginning of a comprehensive solution. Big ups to my partners on this project, Adam Robison, Keevin Bybee, and Patrick Bybee. Thanks to many others who've contributed time and mind.

---------------------------------------------------------------------------


The academic publishing enterprise is archaic at best, unethical at worst. Read commentary here.

Below is our solution to this problem. It would be a huge undertaking to create it and we don’t have the requisite skills/time to do so. We just want to float it on the web so someone else might pick it up and start the process of making it a reality.

Crowd-sourced rating systems, on a large enough scale, can be very effective. We envision a publishing system where scientists upload novel research to a cloud-based platform in somewhat traditional manuscript form, including introduction, methods, results, conclusions, limitations, etc. Peers will evaluate these manuscripts by the same criteria used today. Diverging from today, manuscripts will be linked to the underlying data supporting them. The value of a manuscript will be decided by the peer community, rather than by 3-5 anonymous “experts” requested by the manuscript’s author or selected by the journal’s editor. Non-anonymous researchers will evaluate the manuscript for quality/importance, leaving a “score” and supporting discussion for posterity (comments that will themselves be given a score by reviewers). Each manuscript, based on an algorithm, will be given a comprehensive score that is dynamic (more about the algorithm below). Good manuscripts will make the “front page” and gain visibility as they’re targeted to “high value” lists, while poor quality manuscripts will fall to the bottom. However, all manuscripts will remain visible/searchable, allowing an idea ahead of its time to receive proper recognition with age. Authors will be listed as they are now, and each author will be linked to a historical research profile. Each manuscript/comment score is factored into a historical user score, which will serve to weight the importance of comments left on other manuscripts/comments.
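
To make this concrete, here is a minimal sketch of what such a dynamic score might look like. We never pinned down the algorithm, so the names (Rating, manuscript_score) and the weighting scheme below are hypothetical illustrations, not a specification:

```python
from dataclasses import dataclass

# Hypothetical sketch only: the structures and weighting here are
# illustrative, not the actual algorithm (which remains to be designed).

@dataclass
class Rating:
    reviewer_id: str
    score: float            # quality/importance rating, e.g. on a 0-10 scale
    reviewer_weight: float  # the reviewer's historical user score

def manuscript_score(ratings: list[Rating]) -> float:
    """Weighted mean of peer ratings: reviewers with stronger track
    records count for more. Recomputed whenever a new rating arrives,
    so the score stays dynamic instead of being fixed at publication."""
    total_weight = sum(r.reviewer_weight for r in ratings)
    if total_weight == 0:
        return 0.0
    return sum(r.score * r.reviewer_weight for r in ratings) / total_weight
```

The key property is that the score is recomputed on every new rating, so a manuscript's standing can rise or fall for as long as it lives on the platform.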

In the end, manuscript submission is completely transparent, has supporting data, includes relevant critical commentary traditionally seen only by authors, eliminates the "third reviewer" phenomenon, rewards authors with novel ideas without regard to institution/financial backing/geography, and overcomes problems documented at the Polymath Project. It would have completely open access. It would eliminate user fees, freeing information for all. It would have other perks, but you get the idea.

We envision this system laid out in a three-tier model with a “dashboard” type user interface streamlining ease of use. The “bottom” level of this system would include all datasets. Datasets would include relevant epidemiological characteristics. Datasets would be linked to citations of data within articles, identifying which pieces of data, equations, and interpretations were used therein. Checking for accurate interpretation of data by peers would be simple, and limits could be built into citation software prohibiting inaccurate use of data. All new manuscripts would be tied to data in a dataset, though the dataset used might not be new. An independent user might identify something novel in a dataset that the original poster missed, and he/she could create a separate manuscript from it, giving full credit to the original submitter with data-links. Original posters would be rewarded each time a dataset was used successfully, via the historical user scores mentioned throughout this article (encouraging submission of data for community processing). This system would reward transparency and thoroughness on original submission.
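
Here is a rough sketch of that bottom level as a data model. All of the names are invented for illustration; a real system would need far richer metadata:

```python
from dataclasses import dataclass

# Hypothetical sketch of the "bottom level": every manuscript cites the
# specific pieces of a dataset it uses, and each reuse credits the
# dataset's original submitter.

@dataclass
class Dataset:
    dataset_id: str
    submitter_id: str            # original poster, credited on every reuse
    records: dict[str, object]   # keyed data points, equations, etc.

@dataclass
class DataLink:
    dataset_id: str
    record_keys: list[str]       # exactly which pieces of data were used

@dataclass
class Manuscript:
    manuscript_id: str
    author_ids: list[str]
    data_links: list[DataLink]   # every new manuscript is tied to datasets

def credit_submitters(ms: Manuscript, datasets: dict[str, Dataset]) -> set[str]:
    """Return the submitters owed credit when this manuscript is used,
    so reuse feeds back into their historical user scores."""
    return {datasets[link.dataset_id].submitter_id for link in ms.data_links}
```

Because every manuscript carries explicit data-links, crediting the original submitter on each reuse becomes a simple traversal rather than a citation-chasing exercise.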

The “middle” level of the system would include manuscripts and their associated comments. We envision this level as a split screen, allowing users to read a new manuscript while the older manuscripts it cites come directly into view on the second screen (this could be customizable, allowing multiple manuscripts, datasets, and outside sources to be viewed simultaneously for cross-referencing). Users could read new manuscripts, give them a rating, and leave comments with citations in the form of the discussion here. Comments could directly address issues with new manuscripts, or with the data underlying them. Consequently, rather than discrete manuscripts, new information would be disseminated as a novel dialogue, allowing important commentary on new ideas to be included in the historical record (ensuring accountability for new information, providing context, and facilitating quicker synthesis of new ideas with older ones). As above, good manuscripts/comments would float to the top via a strong score, rewarding the submitter and the scientific community.
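
A comment in this scheme might be modeled something like the sketch below (again, hypothetical names only). The important parts are that a comment can target either a manuscript or the data beneath it, carries its own citations, and is itself scored:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a middle-level comment. The dialogue around a
# manuscript becomes part of the permanent, scored record.

@dataclass
class Comment:
    comment_id: str
    author_id: str
    target_id: str               # a manuscript id, or a dataset record key
    text: str
    citations: list[str] = field(default_factory=list)  # cited manuscript ids
    ratings: list[float] = field(default_factory=list)  # scores from reviewers

    def score(self) -> float:
        """Comments are rated just like manuscripts, and the result
        feeds the author's historical user score."""
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.0
```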

The “top” level of this system would be an information synthesis similar to Wikipedia. This level would allow users to cite and synthesize information from manuscripts and comments (as well as, initially, from outside sources) in discrete articles “about” scientific topics. This level eliminates the need for independent external publishing sources, as every discrete piece of information at this level could be linked directly to the relevant manuscripts/comments discussing it, keeping “truth” one link away from its primary sources.
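
As a hypothetical sketch, a top-level article might simply be a list of claims, each linked to the manuscripts or comments that support it, which would make unsupported statements trivial to flag:

```python
from dataclasses import dataclass

# Hypothetical sketch of a top-level synthesis article: every claim
# cites the manuscript or comment it rests on.

@dataclass
class Claim:
    text: str
    source_ids: list[str]        # manuscript/comment ids supporting the claim

@dataclass
class SynthesisArticle:
    topic: str
    claims: list[Claim]

    def unsupported_claims(self) -> list[Claim]:
        """Flag statements with no link back to a primary source."""
        return [c for c in self.claims if not c.source_ids]
```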

Underlying this process is an algorithm accounting for the full spectrum of user contributions. A user’s individual score would account for all of his/her historical contributions to the project, including the scores of manuscripts provided, when/where/how often each manuscript is cited by other users, constructive comments left, and datasets submitted. Credentials (appointments, higher education, etc.) could similarly be factored into the individual user score. The value of comments he/she leaves on other manuscripts would take this historical user score into account, weighting those comments fairly for past contributions. Similarly, new manuscripts submitted by a user would receive a higher or lower baseline rating based on historical contribution. We envision this score being included on academic curricula vitae, replacing absurdly long lists of publications/abstracts that give no indication of the quality of individual manuscripts.
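
For illustration only, here is one way the historical user score could aggregate those contributions. The coefficients are placeholders, and choosing them well would be one of the hardest parts of building this:

```python
# Hypothetical aggregation of the historical user score. The weights are
# illustrative; the real coefficients are deliberately left open.

def user_score(manuscript_scores: list[float],
               citation_count: int,
               comment_scores: list[float],
               dataset_reuses: int,
               credential_bonus: float = 0.0) -> float:
    """Combine a user's contributions into one number that then weights
    their future comments and the baseline rating of new submissions."""
    def avg(xs: list[float]) -> float:
        return sum(xs) / len(xs) if xs else 0.0

    return (0.4 * avg(manuscript_scores)
            + 0.2 * citation_count ** 0.5    # diminishing returns on citations
            + 0.2 * avg(comment_scores)
            + 0.1 * dataset_reuses ** 0.5    # rewards successful dataset reuse
            + 0.1 * credential_bonus)
```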

The beauty of this system is that it allows complete transparency while offering each user a rating assigned by his/her peers. It would include translation software, eliminating linguistic barriers. It would allow new literature to be targeted directly to user preferences, letting users customize lists of manuscripts they would like to read and review by key words, discipline, the user score underlying a new manuscript, “hot” manuscripts receiving a lot of attention from readers/commenters, country/state/neighborhood of origin, etc. It would facilitate streamlined and effective literature searches, allowing users to search for key words across the entire body of scientific knowledge, including comments and datasets. It would force improvement of data quality by making data subject to inspection along with commentary on that data. It would facilitate high quality meta-analyses by allowing users to compile original data for processing, rather than processed data. It would eliminate siloed departmental research by opening commentary to all disciplines and encouraging interdisciplinary collaborations.
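
As a small sketch of those customizable lists, a reading-list filter might look like the following (the field names are hypothetical):

```python
# Hypothetical sketch of a customizable reading list: filter new
# manuscripts by keyword, discipline, and the submitter's user score,
# surfacing the highest-scored work first.

def reading_list(manuscripts, keywords=(), discipline=None, min_user_score=0.0):
    """Yield manuscripts matching a user's saved preferences,
    ordered so high-scoring work surfaces first."""
    matches = [
        m for m in manuscripts
        if (not keywords or any(k.lower() in m["title"].lower() for k in keywords))
        and (discipline is None or m["discipline"] == discipline)
        and m["submitter_score"] >= min_user_score
    ]
    return sorted(matches, key=lambda m: m["score"], reverse=True)
```

So, for example, a call like reading_list(all_manuscripts, keywords=("bronchiolitis",), min_user_score=2.0) would surface well-scored new work on a topic from contributors with solid track records.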

This system could be supported financially by grants, government funding, or other charitable contributions. Alternatively, we envision this system having the potential to support itself (and be quite lucrative) through advertising revenue. Because a user’s entire historical research archive would be posted, along with his/her search and review history, biographical sketch, and discipline, advertisers would have a wealth of information for targeting. For example, a manufacturer selling pipette tips would know that user X recently posted a dataset in which 30,000 pipette tips were used. That company would also know that user X searched for manuscripts from users completing similar research. Consequently, it would be clear that user X purchases many, many pipette tips, and advertising could reflect this.


As I said, this is not a complete characterization of this project. Leave your thoughts and questions in the comments!


Sunday, January 13, 2013

Open altruism...?

A quick post to link to a website called Admitting Failure.

This is a great site started by the guys at Engineers Without Borders that allows folks working in various altruistic fields (for lack of a more proper term) to post their stories of perceived personal failure. The premise is that, in the decentralized world of international NGOs, an excellent way to do better work is to learn from the mistakes of others.

Worth a look.

Take a stand against the academic publishing industry... Do it now

If you're reading this, you might have some interest in signing this list. If you're going to sign it, do it now.

Many of you reading this (my peers) are very, very early in your academic careers. I understand the fear of signing a pledge of this sort without the blanket of anonymity. However, in my opinion, the momentum is shifting away from the archaic medium that is the academic publishing industry; who wants to be caught falling behind a tide like this? Make a pledge to yourself to be a reformer. Promise yourself you'll overcome whatever challenges you face by doing the right thing. Signing this pledge indicates that you're among like-minded people who will follow you into this fight.

You'll find my name on it. I hope to find all of your names there.

RIP Aaron Swartz

If you're reading this blog, you're likely aware of Reddit and the passing of one of Reddit's founders, Aaron Swartz (link to Ars Technica here). I can't say I knew anything about Swartz prior to his death, but one remarkable achievement of his life is relevant to the Open Science Kentucky blog.

In 2011, Aaron Swartz was charged with "illegally" downloading millions of academic papers "owned" by JSTOR (a publishing middleman that owns its users' scholarly content once they submit it for "publishing"), with the intention of sharing them with the world. Swartz faced decades of imprisonment for this crime. If convicted, his sentence, no doubt, would've served as a stern warning to those who'd challenge lucrative publishing conglomerates and their undue influence on our civilized society.

We've written about the absurdity of the academic publishing industry here, here, here, here, here, and probably elsewhere too. I could link to thousands of conflicting articles, in competing journals, cited by proprietary news organizations owned by those profiting from sensationalism, all collectively indicating how huge a problem the academic publishing industry actually is. Nor am I alone in my disdain; there's nearly ubiquitous acrimony among those in the research community toward the highly flawed publishing industry (here's a tongue-in-cheek portrayal of this) and its lucrative stranglehold on the public record of knowledge.

So, Aaron Swartz committed a crime and would've been punished as dictated by law. But the law upvotes and downvotes; it does not decide right and wrong in the larger sense. Deciding legal versus illegal, right versus wrong, is up to us. We've decided, as a civilized society, that information (often created with public tax dollars) can be owned by individuals and corporations. We justify it with any number of platitudes and anecdotes about innovation and genius and the university and impact factor and curricula vitae. We hand our life's work to publishing houses and trust that they'll use it for the benefit of the communities we cherish. Again and again, we learn that our system is flawed. Yet we continue on this course, punishing those who challenge it with academic isolation and prison.

Aaron Swartz believed that information belongs to everyone. We agree.


Saturday, December 15, 2012

FOAM, not what you think

Free Open Access Medical Education: who doesn't already love the concept, regardless of the hundreds of thousands of dollars "invested" in one's education?
http://lifeinthefastlane.com/foam/, #FOAMed (Twitter, I guess)
Since I've mostly been reading emergency medicine blogs, I first stumbled across it there, but it seems to be a rapidly expanding movement promoted by the online leaders in the field (Scott Weingart, of EMCrit; Michelle Lin, of academiclifeinem).
I feel this is extremely pertinent to the Open Science Movement and wanted to keep everyone abreast of this new phenomenon!

Monday, October 1, 2012

One Scientist's Solution...

...to the question of where we get funding, if not from institutions and grants that award research money based on merit defined as publications, presentations, etc., is crowd-funding the fees.

A really neat article and video cover Ethan Perlstein's approach to the open science and open source model and his research in evolutionary pharmacology.  This is a fantastic idea and could serve as a model for other researchers out there.  At the very least, it could be a means for researchers to engage with open science, share data, and cultivate a public collective consciousness about scientific research and its funding.
Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 United States License.