Tuesday, November 29, 2011

Allen Institute for Brain Science Open for Science

Great WSJ article about the Allen Institute for Brain Science turning its data out to the public free of charge.

Edit: I notice the WSJ article requires a subscription (very 'un-open' of Paul Allen). I was still able to read it by searching Google News with the keywords "Open Science Paul Allen."

Saturday, November 26, 2011

"Climategate" and Openness

Here's a piece about "Climategate." While I, like everyone else, am tired of appending "-gate" to every scandal or perceived scandal, this one traces right back to a need for openness. I want to quote a passage that goes straight to the need for Open Science, albeit in a slightly politicized frame:

"Voters in a democracy do not argue about science. They argue about the authority of scientists. And scientists’ claim to authority comes from the perception that, in fact, they do not let their vanities and rivalries influence their work...scientists pursue only the truth...some prominent members of the climate-change establishment were not operating in a spirit of openness."

Friday, November 25, 2011

Facebook for Scientists and Open Science Study

Hope everyone has enjoyed his or her respective Thanksgiving traditions. Here is an interesting article on the prospects of Facebook for Scientists. Something perhaps more titillating is this project on science as a public enterprise. It has several links discussing open science in detail. (One is a link to The Lancet, for which you must register as a user [for free] to view the article.)

Tuesday, November 15, 2011

A Map of Science's Future

From the Institute for the Future, a literal map of the future of science. Those of us chanting the open science mantra are found in the upper right-hand corner under the heading of "Massively Multiplayer Data," and indeed across most of the right-hand side (the "interstellar clouds of creation").

This is only speculation, but it's pretty cool that open science is right there with decoding the human brain, finding extraterrestrial life, nanotechnology, etc.

Check it out:


The Toaster Project

Here's a fun piece about an amateur scientist (among other titles), Thomas Thwaites, who made his own toaster from scratch. I saw him on The Colbert Report tonight; you can all try to catch a rerun, or you can watch his TED talk here:


It's great to see inquisitive, thoughtful minds out there who want to know how things work, even if the end product is a ridiculous fire hazard, as in the case of this toaster. Amateur scientists are an untapped resource full of good ideas, kept unfairly silent by their unfamiliarity with the entrenched publication system of academia. Hopefully opening up science can provide the platform amateur scientists need to share their data and ideas.

Sunday, November 13, 2011

Open Science and Entrepreneurship

Let me tell you a true story of…luck. Very recently a group of MBA students at the University of Louisville has won a string of entrepreneurial competitions, including the 2011 Venture Labs Investment Competition (formerly, and still colloquially, known as Moot Corp.) and the 2011 Rice University Business Plan Super Bowl. This group, under the name TNG Pharmaceuticals, had the good fortune of discovering researchers at Auburn University with a “vaccine” that inactivates the blood-thinning factors in the horn fly’s saliva; the horn fly has a $1 billion impact on the cattle industry. TNG Pharmaceuticals is marketing the vaccine as FlyVax and is set up to make lots of money as long as the USDA gives it the green light.

How this group found these researchers is the more interesting point. The members began a web-based search through the Offices of Technology Transfer (OTTs) of various universities. These OTTs have several goals/functions. Most important developments in medical science begin in labs, but these developments have limited scope beyond meeting a narrow research goal. Developing these advances into impactful measures requires other steps including more R&D, testing, approval by appropriate regulatory bodies, manufacturing, and distribution. More or less OTTs function to carry out technology transfer mandates by retaining title to inventions and licensing these inventions to private entities to ensure use, commercialization, and public availability.

The word “luck” I used earlier is not my own; it comes from one of the collaborators at TNG Pharmaceuticals. It was luck that the research they found had been sitting on the shelf for more than five years. But that raises the question: what other research is out there sitting on the shelf, potentially highly lucrative and marketable? I doubt the cure for cancer is waiting, but valuable research can lie dormant for years until others bring it back to life or repurpose it. This is where Open Science can prove valuable. OTTs have a necessary function that Open Science is not intrinsically designed to cover, but Open Science can fill the gaps and offer good, inactive research a chance to be rekindled, reexamined, repurposed. Who knows, maybe the future online forums constituting the fruit of Open Science labors will hold some entrepreneur’s moneymaker!

Some links for those curious:




Thursday, November 10, 2011

The increasing rate of retractions

This article was published back in August of this year. After I read it, I was amazed. Fellow OSK readers, you have got to check it out.

Retractions of scientific studies are surging

Wednesday, November 9, 2011

Publication Bias

I have been ruminating lately on the idea of publication bias and some of its negative consequences.

For those of you who are not familiar with the term, publication bias refers to the fact that studies finding statistically significant results are published more frequently than those finding no significance. A term was coined for this back in 1979: the “file drawer problem”. It refers to the many studies that are conducted but never reported due to non-significant results (1). These studies languish in the proverbial file drawer, never seeing the light of day, unpublishable within the current model.

If researcher A goes through the effort of conducting a study evaluating the differences between drugs X and Y, and the null hypothesis is upheld (i.e. there is no difference between drug X and drug Y), then why spend all the time and money writing up a manuscript only to have it rejected? In the modern world of academic science, the age-old mantra of “publish or perish” still holds true. For a researcher's career to progress, he or she needs to produce a steady supply of manuscripts in a timely manner. In my opinion, it is not that people do not want to publish manuscripts with null results; it is that there is very limited real estate in journals.

Positive (i.e. significant) results are, flat out, sexy. I mean, what is more appealing: reading that drug A does not cure “incurable disease,” or that drug B does? Obviously, drug B makes a better story.
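The file drawer effect is easy to demonstrate with a quick simulation (a sketch only: the sample size, the 10,000-study count, and the |t| > 2 significance rule are illustrative assumptions, not empirical figures). Simulate thousands of trials comparing two identical drugs and "publish" only the significant ones; the handful of pure-noise studies that clear the bar report effect sizes several times larger than the true effect of zero:

```python
import random
import statistics

random.seed(42)

def run_null_study(n=20):
    """One two-arm trial of drug X vs. drug Y where the true difference is zero."""
    x = [random.gauss(0, 1) for _ in range(n)]
    y = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(x) - statistics.mean(y)
    se = ((statistics.variance(x) + statistics.variance(y)) / n) ** 0.5
    return diff, abs(diff / se) > 2.0  # roughly p < 0.05, two-sided

studies = [run_null_study() for _ in range(10_000)]
published = [d for d, significant in studies if significant]

print(f"fraction 'published': {len(published) / len(studies):.1%}")
print(f"mean |effect| in the journals:    {statistics.mean(map(abs, published)):.2f}")
print(f"mean |effect| in the file drawer: "
      f"{statistics.mean(abs(d) for d, sig in studies if not sig):.2f}")
```

Selective publication turns sampling noise into apparent discoveries; any reader (or meta-analysis) working only from the published record inherits that inflation.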

If journals’ only purpose were to keep us abreast of current developments in a given field, then this situation would be fine; that is the function of tabloids. But that is not journals’ sole purpose. Journals are supposed to act as the repository of emerging scientific knowledge, sexy or dull.

Anyone who has tried to learn something new knows we often learn the most from our mistakes. There is value not just in reading about proven solutions, but also in awareness of the various ideas that were not successful.

If you take this idea out to its logical conclusion, you can see a situation in which there are several hypothetical researchers working on the same problem. Let’s pretend their intention is to find a cure for breast cancer (since there really are many separate scientists working on this right now). The different labs are all working as hard as they can, trying different solutions independently. Yet, many of their failed attempts at cures will not make it into the journals for others to see. This means that others out there are making the same mistakes, completely unaware of the failed attempts of their peers.

This is very inefficient, costly, and dangerous.

A better solution would be an open science approach, where all attempts at science are published in a way that allows loose collaboration between labs. One lab could use the failed experiences of another, expediting the process and introducing a level of efficiency that doesn’t currently exist. In this model, there would be no more reinventing the wheel; reading that somebody in Kansas, Russia, or Nigeria has already attempted your failed idea would allow you to move your limited resources on to the next step.

(1) Rosenthal, Robert (May 1979). "The file drawer problem and tolerance for null results." Psychological Bulletin 86(3): 638–641. doi:10.1037/0033-2909.86.3.638

Monday, November 7, 2011

Another take on Open Science

I was perusing the internet and came across this article on the openscience.org blog. It contains an entry, written back in 2009, that gives a good definition of what open science is. It is always useful to get other perspectives on this topic.

Another Take on Open Science

Saturday, November 5, 2011

Another example of poor vetting

Why do we keep letting this happen? All it takes is the ability to fool two or three people, and up goes your "research". Nowhere in the article is there any mention of the responsibility the journals share in letting this man's work get published.

Psychologist admits faking data in dozens of studies

Tuesday, November 1, 2011

All in peer-reviewed journals...

Paying for the back-end; research is only the beginning of the cost

Let’s be realistic: new ideas are too expensive.

The research part is only the tip of the iceberg.

After years of meticulous research, you must publish, and this is enormously expensive. Manuscript publication may include various opaque “first-copy costs” and indexing costs, as well as back-end costs associated with sustaining editorial staff and infrastructure at many thousands of peer-reviewed journals. These costs reach you either as direct charges for publishing your manuscript or as license fees for viewing the published manuscripts you cite in your paper. You may also have to pay for translation to English, or from English to other languages. You might even purchase a subscription to an individual journal so you can offer a copy of your published manuscript to your mom and dad!

In addition to publishing your research in a journal, you will also try to present your research at a reputable conference, where you’ll be able to share with, and learn from, other experts in your field. In this case, you’ll be paying for airfare, housing, and transportation. You might even pay to submit your abstract to that conference for consideration! However, you will certainly pay to print a glossy 35”x55” poster to help your peers visually understand your research, unless you plan to be maligned for a lack of professionalism and credibility. There will be additional registration fees for attending the conference to hear selected research presentations. You will probably purchase a few drinks of KY bourbon to make these mounting costs easier to rationalize.

Succinctly: if you can’t afford to reference old journal articles, you can’t publish new manuscripts. If you can’t afford to support journal infrastructure, you can’t disseminate new ideas. If you don’t attend conferences, you aren’t part of the discussion. There is no good alternative to this process.

For every chunk of new information you create, you’ll move through this process once. If you’re good, these costs will be paid many, many times throughout your career.

Where does the money go? Who is making sure this money is spent effectively?

How much should it cost to share an important new idea with the world?

It’s easy to blame journals, conferences, airlines, etc. for these costs. They are prohibitively expensive, elitist in their assurance that only researchers with financial backing are taken seriously, and conservative in their support of the status quo over progressive science. Unfortunately, researchers are responsible for sustaining these exorbitant costs, too.

Scientists participate in a system that both requires and rewards dutiful, enduring payment of these costs. They are systemic, rationalized, and accepted, in that they are accounted for in research and development costs. Ultimately, they are anticipated by principal investigators at universities, in industrial settings, and at government laboratories. Thriving within this system is a major piece of achieving recognition as a scientist, securing research grants and a laboratory, and reaching tenure. Spending money to disseminate information in this way is part of being able to continue doing great science.

There is nothing sinister on either side; the system demands payment then rewards participation.

It is a monopoly.

At OSK, we believe all parties involved are doing the best they can to discover great things with as little waste, in as much haste, as the system allows. Despite the tradition that is this process, however, there are probably cheaper ways to do it.

We’d like to see more accountability across the board. We believe discussion about the merits of other information systems must emerge, allowing the community to identify the best ways to innovate. For a brief moment in human history, we believe both publishers and researchers should be held up to scrutiny to justify these exorbitant costs and to identify the best way to move forward with science.

The result of reducing/eliminating unnecessary back-end research costs will be fewer barriers to entry into research, less money wasted on infrastructure and bureaucracy, a greater breadth of big ideas, and higher quality science. Cutting these costs will ultimately require a concerted effort by publishers/conference facilitators and researchers.

(The above hyperlinks were the first examples I found illustrating these very typical publishing costs – no harm is meant to individual entities therein)

Tuesday, October 25, 2011

In Spite of Myself, I Cannot Fully Engage Open Science

When I left high school, I had no idea about my future except that it would inevitably involve science. During college I decided to focus on the goal of entering medical school. All young pre-med students are told ad nauseam: get good grades, volunteer, and do research. Of course there are some oscillations in the manner in which this information is disseminated and stressed, but the baseline is there: one of the significant components of a competitive medical school (or graduate school) application is conducting research. And if your research gets published, you’re in. So I inundated my CV with research experiences, some hopelessly drab (counting how often a rat moves its nose through 1x1 cm boxes can wear on a soul), others fruitful and edifying. In the end I managed to find a professor who loved using cheap undergraduate labor and had a penchant for churning out papers; I was published and all was right with the world. I couldn’t have cared less that Neurochemical Research had a small impact factor and that my PI was scraping the bottom of the barrel to get some data published; I was an author. In the end, I was accepted to medical school (albeit while the papers were still in the painfully sluggish review process).

I present this brief recollection as a blanket summary of what I, and innumerable pre-med students (and really anyone trying to advance in academic science), go through when trying to become a strong candidate. I should mention, however, that more than a year’s worth of slaving in a microscopy lab preparing immunocytochemistry slides was threatened because of the aforementioned PI’s increasingly tumultuous relationship with the scientific community. He and a particular grad student spawned heated contention about ownership of data and research that seeped out into national conventions. He openly speculated that the harsh critiques he was receiving on papers submitted to journals in which he had previously published with ease were due to this growing stigma. He had to shop around among journals with smaller impact factors to get the data published. This didn’t matter much to me at the time; I got published, and impact factor was an abstract concept that meant Nature, Science, and the New England Journal of Medicine were fantastic journals while my article’s journal was just okay.

Little did I know or appreciate, however, that the impetus for producing publications in high-impact-factor journals goes far beyond the pure aesthetic of “I'm an author” that I enjoyed. My PI was concerned about his job and possible tenure; grad students are concerned about getting good post-doc positions, and post-docs are concerned about getting tenure-track faculty positions; today I am concerned about my future residency spot. The current system of scientific publication has a stranglehold on the scientific community because it is the only manner in which we presently esteem research (and researchers) as valuable. A researcher with a strong publishing record is more likely to get prized grants, and this again affects job availability, progression, and security. It is how we grade a researcher’s progression through the ranks, making one’s work tangible so that accolades may be attributed to it (e.g. the National Medal of Science, induction into the National Academy of Sciences, the Nobel Prize). Academics treat the publication process with such sanctity because they have spent time developing an expertise in adroitly navigating its obfuscating terrain. It is not likely that (m)any of the established researchers in the current culture will enact or endorse the changes necessary for Open Science to blossom.

There are far too many faults in the current publication system for me not to want to wholeheartedly pursue the goals of Open Science. The entire review process needs to become more transparent, both to avoid conflicts of interest and to make clear why journals will not publish certain articles. The retraction and revision system needs to be renovated so that it elucidates the reasoning behind retractions and revisions, and so that it carries less stigma when honest mistakes are made. The world of science would benefit from a system permitting a “continuous stream, rather than a punctuated series of publications,” as Ivan Oransky, executive editor at Reuters Health and co-founder of the blog Retraction Watch, has put it. Researchers today must trudge through innumerable journal articles to glean important updates instead of enjoying a free-flowing discourse in scientific pursuits. And that free-flowing discourse is already discouraged by the basic natural selection of researchers: only the strong survive. The current system disincentivizes collaboration and open commentary (see Nature’s failed trial of open commentary from 2006: http://www.nature.com/nature/peerreview/) because, in the basic ecologic model, why would I give my direct competitor time, energy, and resources better suited to my own needs and goals?

Here is where I find myself conflicted: I like the notion of Open Science a lot, but I am not in any position to make a concerted effort to forward the movement. Instead I must navigate the current system, in all its faults, because I want to make myself a strong candidate for a competitive residency spot. It’s doubtful the head of almost any university department, however progressive, would value blogging data points to the masses as highly as the trusted standard of scientific publishing. I must continue my endeavors to receive a ‘first authorship’ and get published in the current system, because in the basic ecologic model I cannot choose to spurn available resources when doing so would indirectly advantage my competitors. The peer-review system is flawed, but we are ensconced in a decorum not soon to change without the appropriate catalysts.

Thankfully there have been some measures, including a policy enacted by the NIH requiring that all published NIH-funded research be made available to the public online within a particular timeline (http://www.earlham.edu/~peters/fos/2008/01/nih-releases-its-new-oa-policy.html). We also need to examine new ways to incentivize researchers to engage with the Open Science movement. A cultural change in how research is evaluated and shared needs to occur; the sharing component confronts the already confounding issue of intellectual property rights and of how authors can own knowledge. I really wish to make an impact in the Open Science movement, but I fear I will be hypocritical by investing far more time in the classic publication process. I ask the reader: how else should I act?

Nature: stealing from Peter?

This Nature article leaves me uncomfortable.

As Retraction Watch notes, Nature retracts articles far more often than most other journals.

Is it really time to congratulate ourselves for detecting more manuscripts that need to be retracted?

Or, rather, should we stop publishing these manuscripts in the first place?

Nature, you have a conflict of interest here.

Wednesday, October 19, 2011

The need for true transparency

A recent article in Pediatrics (1) stated that 1 in 10 parents do not completely vaccinate their kids. The MMR vaccine was cited as the one withheld most frequently. This is most likely due to the infamous 1998 manuscript in The Lancet claiming a “link” between the MMR vaccine and autism. Though the original manuscript has since been retracted from The Lancet due to falsified data, its effects linger. There are children out there who are not getting vaccinated because of this bad science.

Bad science does not just impact researchers’ understanding; it impacts people’s lives. This study illustrates what can happen when articles are published, but are then retracted after a larger audience has vetted the ideas therein.

I wish I could say that retractions are a relatively rare occurrence. Sadly, this is not the case.

A website called Retraction Watch publishes cases like this on a very regular basis. Retractions are posted from journals across the scientific gamut. You might be thinking that only minor journals, with a low impact factor, would be the major culprits. This is most definitely not the case. Major journals like Nature and Science are featured frequently.

The interesting thing is that the scientific community always vilifies the authors. However, there is no real outcry against journals publishing, then retracting, articles with spurious findings.

This is wrong.

There should clearly be some blame passed to the journal publishing the manuscript.

This goes back to one of our guiding principles at OSK: transparency.

The scientist’s book of holy writ is the peer-reviewed journal; once an article is published in a journal, it then, in essence, becomes “canonized”. However, there is no real discussion about the merits and flaws of this system. We need to take a deeper look into what actually constitutes peer-reviewed publishing. We need to ask ourselves if there is a better way to do this and why we did not seek it in the midst of obvious flaws in our publishing system.

Currently, all publication requires is an “okay” from a handful of peers. Similarly, a handful of “not okay”s can condemn an article to rejection. This small number of reviewers leaves open the possibility of numerous biases; we rigorously test for such biases in statistical analysis, yet never inquire about them in peer review. We likewise never learn of dissent among reviewers, leaving our faith in the journal to hand us only manuscripts that are unquestionably of the highest quality.

This whole process needs to be turned on its head. Most directly, we must include more reviewers in peer review. Dissenting reviews should be provided for the reader, as they offer an important tool with which to understand the context of a published manuscript. Probably more importantly, all data used to make conclusions should be subject to review, as it is fundamental to the validity of a manuscript. Limitations must be emphasized, as they are the most important part of any manuscript!
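To put a rough number on why reviewer count matters, here is a back-of-the-envelope sketch; the 40% per-reviewer catch rate is purely an illustrative assumption, not an empirical figure. If each reviewer independently has that chance of spotting a fatal flaw, the probability that a flawed manuscript slips past everyone falls geometrically as the panel grows:

```python
def miss_probability(reviewers, catch_rate=0.4):
    """Probability that NO reviewer catches a fatal flaw, assuming each
    reviewer independently spots it with probability catch_rate
    (an illustrative assumption, not an empirical figure)."""
    return (1 - catch_rate) ** reviewers

for k in (2, 3, 10):
    print(f"{k:2d} reviewers: a flawed paper slips through "
          f"{miss_probability(k):.1%} of the time")
# 2 reviewers -> 36.0%, 3 -> 21.6%, 10 -> 0.6%
```

Independence is generous here; shared blind spots among reviewers would make the real numbers worse, which only strengthens the case for larger, more diverse review panels.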

Once these solutions are implemented, the retraction rate will decrease. Ultimately, this will impact not only the researchers, rewarding truly high-quality research, but also the people their research is likely to affect.

I invite your comments below.

1. Dempsey AF, Schaffer S, Singer D, et al. Alternative vaccination schedule preferences among parents of young children. Pediatrics. 2011 Nov;128(5).

Monday, October 17, 2011

A must read

Former BMJ editor's commentary on peer review. We can't improve on this.

You're welcome.

Thursday, October 13, 2011

The State of Open Science

Admittedly, I’m hoping this post will be as informative for me as it is for the reader.

Adam and I started this organization because we were having a hard time identifying a cohesive movement illuminating the value of open science. Forward-thinking scholars had published wonderful commentary on problems associated with a closed science model, but there was no hub for open science! Who could I contact to volunteer my time and skill set to this cause?

Where we are:

Let’s give credit where it’s due: Michael Nielsen, scholar and major internet presence, has produced some of the most insightful commentary on this issue to date. We will defer to his opus, The Future of Science, for baseline commentary on the need for, and potential of, open science. It’s worth reading the comments section, as well, as it’s a treasure trove of commentary from other thought leaders in the field.

Sampling just a few lines from TFoS:

To create an open scientific culture that embraces new online tools, two challenging tasks must be achieved: (1) build superb online tools; and (2) cause the cultural changes necessary for those tools to be accepted. The necessity of accomplishing both these tasks is obvious, yet projects in online science often focus mostly on building tools, with cultural change an afterthought.

Nielsen concludes that, “to develop [superb online tools] requires a rare combination of strong design and technical skills, and a deep understanding of how science works,” and that culture change will come from two directions: a top-down and a bottom-up strategy. He points to a successful top-down intervention: the 2008 NIH declaration that every NIH-funded manuscript must be made open access. It is easy to see how top-down mandates like this will be necessary to inspire sustainable change. He concludes that bottom-up changes will require new ways of gauging the value of a scientist’s intellectual contributions if tools like blogging are eventually used to disseminate knowledge.

That Nielsen does not elaborate on bottom-up strategies is unfortunate; we learn so much from the rest of his piece! As thorough as the rest is, we surmise that this omission is due to the enormous challenge of creating an exhaustive list of the interventions necessary to make open science a reality. Ultimately, OSK believes that bottom-up changes will require much more than validating a researcher’s contribution among his peers and superiors.

Validation isn’t enough; open science must be convenient, making working outside open science media an absurdly wasteful proposition. It should be self-reinforcing, offering bigger opportunities as prestige within the new media grows; it must inspire allegiance. Visibility should be a priority; a researcher cannot use these media if he believes closed science media are the only way! The emerging standards for collaboration and publishing must strike the individual researcher as progressive and obvious. He/she must be able to justify using these media to himself/herself, philosophically. A bottom-up approach will require institutional changes precipitated by the demands of individual researchers.

Perhaps most importantly, open science should be ready for a young scientist at the genesis of his/her career. From the moment ambitious secondary schoolers, undergraduates, graduate students, and professional students begin thinking about research, they are immersed in a paradigm by their peers and superiors. The current paradigm includes data ownership, journal-based publishing, journal-oriented citations, anonymous peer review, the English language, publishing-volume-based promotions, and impact-factor-based personal “success”. Discussions of alternatives to this paradigm are few and far between, usually happen among progressive zealots, and are almost entirely grounded in the merits of projects like PLoS ONE and open access journals. Unfortunately, most discussions of open science happen among brilliant folks who are probably humble enough not to realize they’re in the top-down bracket of the movement!

This momentum promises to be the biggest challenge for open science (though the technical challenges are many as well); a functional paradigm with problems is invariably more inviting than the uncertainty of a revolution. There is currently little incentive for change, and truly no impetus among the researching majority.

OSK will take on these momentum challenges.

We think open science should keep a better record of its allies at every level of academia, industry, and government; we are ultimately a team! Open science must find ways to coexist initially with closed science, to become a necessary supplement to closed science, to continually improve and eventually supersede closed science in its scope. It must become part of the scientist’s consciousness, challenging him/her intellectually from secondary school to tenure. Undergraduates should discuss the merits and shortcomings of open science in class. Eventually, the press will see the merits of dialoguing the limitations of closed science.

Visibility is paramount.


Monday, October 10, 2011

What is open science?

We want to use this post to talk a little about why we started Open Science Kentucky. Let's call this our vision. You'll notice that our perspectives are distinct and not in complete agreement with one another! What we propose, first and foremost, is that rigorous discussion about this topic is needed. Please feel free to leave your perspective in the comments section. Cheers!

From Andrew:

To me, open science is the natural progression of information exchange in the digital age.

A variety of terms are used to describe open science, including open access, open data sharing, open peer review, open innovation, and open source. Each has a specific meaning and a very specific purpose. However, all reflect a call for efficiency, increased collaboration, greater transparency and accountability in research and publishing.

Open science is an exciting concept reflecting the enormous power of technology to solve problems. It emerged with the internet, with a pervasive understanding that great minds across the world working together can often solve problems individuals cannot. It identifies and accentuates the bonds tying research peers together in addressing increasingly consequential problems. It reflects the culture within which the world's youngest scientists were raised to excellence.

Open science begs researchers to consider new paradigms, to challenge their most fundamental understanding of scientific mores like data ownership, journal-based publishing, and even the scientific method. Such an idea troubles many historic institutions; progress has always done so. However, open science will ultimately benefit humanity; it will increase the speed of innovation, reward the highest-quality science, and allow the best solutions to emerge.

This change is inevitable. Establishing the rules of the road now will ensure the next generation of researchers is given a carefully considered, philosophically advanced system for innovation. A similarly historic shift occurred with the inception of the scientific method.

As scientists, we must begin this critical discussion.

From Adam:

The current model of manuscript peer review has not changed in hundreds of years. It has many documented flaws, and yet no successful alternative to the model has been proposed. Open science is an attempt to unify the ideas below to shift the paradigm by which new knowledge is incorporated into the collective knowledge of the scientific community. The key components fall into three categories: transparency, equality, and open access. Together, these will radically change the way information is disseminated.

First, transparency in open science refers to the idea that the process through which new knowledge becomes accepted and “canonized” in journals needs to be subject to scrutiny. Open science proposes that this be accomplished by releasing the names and comments of reviewers, increasing the number of reviewers who read and verify each article, and including the complete set of data points for each article. These things can be done easily with the new tools and capabilities that the internet and digital publishing allow.

Equality refers to the idea that all manuscripts and ideas should be considered with equal objectivity. The existing problem is that the barriers to entry in publishing can be prohibitive for some scientists: cost, negative results, naiveté about publishing, native language, variability in reviewer skill, and so forth. Again, we believe this problem can be solved with technology. Equality will allow new ideas a more even playing field on which to be discussed. It also acknowledges that anyone should be able to contribute significant ideas, regardless of his/her rank and status in academia.

Open access is the final concept. This principle describes the need for everyone to be able to access the newest discoveries in the shortest amount of time. Right now, most journals are controlled by publishing houses. For some reason, the scientific community still feels compelled to use this outdated model for reporting its ideas and discoveries. The very idea that the discoveries researchers and clinicians make should then be, in essence, owned by publishing houses seems utterly ridiculous. Open access will allow free publishing and free reading of novel ideas, and quicker dissemination of those ideas. The community should own these ideas, and thus they should be published by the community. Again, this is something easily attained via the internet and digital publishing.

We will continue to discuss open science and how it will change the current paradigm. This new model will mean a completely new way of thinking about how we report and discuss our ideas. Through the principles of transparency, equality, and open access, we hope to achieve improved quality, increased efficiency, reduced redundancy, and the higher quality of life that a better understanding of our natural world will bring.

Wednesday, October 5, 2011

Welcome to Open Science Kentucky


We hope this site will become your hub for information related to the open science movement in Kentucky.

Though Kentucky is a small state in terms of research output, we hope it will set a new standard for high-quality, efficient, and transparent research.

Please join us in representing the ideals of the open science movement here.

Adam and Andrew

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 United States License.