Funding, Deals & Partnerships: BIOLOGICS & MEDICAL DEVICES; BioMed e-Series; Medicine and Life Sciences Scientific Journal – http://PharmaceuticalIntelligence.com
Peer-reviewed journals retracted more than 110 papers over the last two years. Nature reports the grim details in "Publishing: the peer-review scam":
When a handful of authors were caught reviewing their own papers, it exposed weaknesses in modern publishing systems. Editors are trying to plug the holes.
The Hill reports that the FDA may lift its ban on blood donations from gay men. The American Red Cross has voiced its support for lifting of the ban.
Advisers for the Food and Drug Administration (FDA) will meet this week to decide whether gay men should be allowed to donate blood, the agency’s biggest step yet toward changing the 30-year-old policy.
If the FDA accepts the recommendation, it would roll back a policy that has been under strong pressure from LGBT advocates and some members of Congress for more than four years.
“We’ve got the ball rolling. I feel like this is a tide-turning vote,” said Ryan James Yezak, an LGBT activist who founded the National Gay Blood Drive and will speak at the meeting. “There’s been a lot of feet dragging and I think they’re realizing it now.”
Groups such as the American Red Cross and America’s Blood Centers also voiced support for the policy change this month, calling the ban “medically and scientifically unwarranted.”
The FDA will use the group’s recommendation to decide whether to change the policy.
“Following deliberations taking into consideration the available evidence, the FDA will issue revised guidance, if appropriate,” FDA spokeswoman Jennifer Rodriguez wrote in a statement.
This reporter has more than 20 years of blood-bank experience. The factor in favor of the recommendation is that HIV-1/2 and other testing is now accurate enough to make the question of donor lifestyle irrelevant. However, it remains to be seen whether the testing turnaround time is sufficient to prevent the release of potentially contaminated units prior to transfusion; this is especially problematic for platelets, which have short shelf lives. In all cases of donor infection, a positive finding leads either to withholding the unreleased product or to a recall.
Democrats made a strategic mistake by passing the Affordable Care Act, Sen. Charles Schumer (N.Y.), the third-ranking member of the Senate Democratic leadership, said Tuesday.
Schumer says Democrats “blew the opportunity the American people gave them” in the 2008 elections, a Democratic landslide, by focusing on healthcare reform instead of legislation to boost the middle class.
“After passing the stimulus, Democrats should have continued to propose middle class-oriented programs and built on the partial success of the stimulus,” he said in a speech at the National Press Club.
He said the plight of uninsured Americans caused by “unfair insurance company practices” needed to be addressed, but it wasn’t the change that people wanted when they elected Barack Obama as president.
“Americans were crying out for an end to the recession, for better wages and more jobs; not for changes in their healthcare,” he said.
This reader finds Senator Schumer’s observation very perceptive, regardless of whether, in hindsight, acting on it would have had a different political outcome. It has been noted that President Obama had a lot on his plate. Moreover, I have not seen such a poor record of legislation in my lifetime. Underlying differences in the worldviews of elected officials also contribute to these events.
THE PEER-REVIEW SCAM
BY CAT FERGUSON, ADAM MARCUS AND IVAN ORANSKY
Nature | 27 Nov 2014; Vol. 515: 480–82.
Most journal editors know how much effort it takes to persuade busy researchers to review a paper. That is why the editor of The Journal of Enzyme Inhibition and Medicinal Chemistry was puzzled by the reviews for manuscripts by one author — Hyung-In Moon, a medicinal-plant researcher then at Dongguk University in Gyeongju, South Korea.
The reviews themselves were not remarkable: mostly favourable, with some suggestions about how to improve the papers. What was unusual was how quickly they were completed — often within 24 hours. The turnaround was a little too fast, and Claudiu Supuran, the journal’s editor-in-chief, started to become suspicious.
In 2012, he confronted Moon, who readily admitted that the reviews had come in so quickly because he had written many of them himself. The deception had not been hard to set up. Supuran’s journal and several others published by Informa Healthcare in London
invite authors to suggest potential reviewers for their papers. So Moon provided names, sometimes of real scientists and sometimes pseudonyms, often with bogus e-mail addresses that would go directly to him or his colleagues. His confession led to the retraction of 28 papers by several Informa journals, and the resignation of an editor.
Moon’s was not an isolated case. In the past two years, journals have been forced to retract more than 110 papers in at least six instances of peer-review rigging.
PEER-REVIEW RING
Moon’s case is by no means the most spectacular instance of peer-review rigging in recent years. That honour goes to a case that came to light in May 2013, when Ali Nayfeh, then editor-in-chief of the Journal of Vibration and Control, received some troubling news. An author who had submitted a paper to the journal told Nayfeh that he had received e-mails about it from two people claiming to be reviewers. Reviewers do not normally have direct contact with authors, and — strangely — the e-mails came from generic-looking Gmail accounts rather than from the professional institutional accounts that many academics use (see ‘Red flags in review’).
Nayfeh alerted SAGE, the company in Thousand Oaks, California, that publishes the journal. The editors there e-mailed both the Gmail addresses provided by the tipster and the institutional addresses of the authors whose names had been used, asking for proof of identity and a list of their publications.
What all these cases had in common was that researchers exploited vulnerabilities in the publishers’ computerized systems to dupe editors into accepting manuscripts, often by doing their own reviews. The cases involved publishing behemoths Elsevier, Springer, Taylor & Francis, SAGE and Wiley, as well as Informa, and the flaws in at least one of the systems could make researchers vulnerable to even more serious identity theft. “For a piece of software that’s used by hundreds of thousands of academics worldwide, it really is appalling,” says Mark Dingemanse, a linguist at the Max Planck Institute for Psycholinguistics in Nijmegen, the Netherlands, who has used some of these programs to publish and review papers.
What followed was a 14-month investigation that came to involve about 20 people from SAGE’s editorial, legal and production departments. It showed that the Gmail addresses were each linked to accounts with Thomson Reuters’ ScholarOne, a publication-management system used by SAGE and several other publishers, including Informa. Editors were able to track every paper that the person or people behind these accounts had allegedly written or reviewed, says SAGE spokesperson Camille Gamboa. They also checked the wording of reviews, the details of author-nominated reviewers, reference lists and the turnaround time for reviews (in some cases, only a few minutes). This helped the investigators to ferret out further suspicious-looking accounts; they eventually found 130.
SAGE investigators came to realize that authors were both reviewing and citing each other at an anomalous rate. Eventually, 60 articles were found to have evidence of peer-review tampering, involvement in the citation ring or both. “Due to the serious nature of the findings, we wanted to ensure we had researched all avenues as carefully as possible before contacting any of the authors and reviewers,” says Gamboa. When the dust had settled, it turned out that there was one author in the centre of the ring: Peter Chen, an engineer then at the National Pingtung University of Education (NPUE) in Taiwan, who was a co-author on practically all of the papers in question.
PASSWORD LOOPHOLE
Moon and Chen both exploited a feature of ScholarOne’s automated processes. When a reviewer is invited to read a paper, he or she is sent an e-mail with login information. If that communication goes to a fake e-mail account, the recipient can sign into the system under whatever name was initially submitted, with no additional identity verification. Jasper Simons, vice-president of product and market strategy for Thomson Reuters in Charlottesville, Virginia, says that ScholarOne is a respected peer-review system and that it is the responsibility of journals and their editorial teams to invite properly qualified reviewers for their papers.
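The loophole can be sketched in a few lines of Python. This is a toy model, not ScholarOne’s actual code; all names, addresses, and functions below are hypothetical. The point it illustrates is that when the login token is delivered only by e-mail, whoever controls the inbox becomes the named reviewer, with no further identity check.

```python
import secrets

# Toy model of the loophole described above: the system e-mails login
# credentials to whatever reviewer address the author submitted, with no
# further identity verification. All names and addresses are hypothetical.
accounts = {}  # e-mail -> (display name, login token)

def invite_reviewer(name, email):
    """Create a reviewer account and 'send' a login token to the address."""
    token = secrets.token_hex(8)
    accounts[email] = (name, token)
    return token  # in the real flow, this arrives in the e-mail inbox

def login(email, token):
    """Whoever holds the token is treated as the named reviewer."""
    name, expected = accounts.get(email, (None, ""))
    return name if secrets.compare_digest(token, expected) else None

# An author nominates "Prof. Real Expert" but supplies an inbox they control:
token = invite_reviewer("Prof. Real Expert", "sockpuppet@example.com")
print(login("sockpuppet@example.com", token))  # Prof. Real Expert
```

The fix is equally simple to state: tie the invitation to an independently verified institutional identity, not just to an author-supplied address.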
ScholarOne is not the only publishing system with vulnerabilities. Editorial Manager, built by Aries Systems in North Andover, Massachusetts, is used by many societies and publishers, including Springer and PLOS. The American Association for the Advancement of Science in Washington DC uses a system developed in-house for its journals Science, Science Translational Medicine and Science Signaling, but its open-access offering, Science Advances, uses Editorial Manager. Elsevier, based in Amsterdam, uses a branded version of the same product, called the Elsevier Editorial System.
Usually, editors in the United States and Europe know the scientific community in those regions well enough to catch potential conflicts of interest between authors and reviewers. But Lindsay says that Western editors can find this harder with authors from Asia — “where often none of us knows the suggested reviewers”. In these cases, the journal insists on at least one independent reviewer, identified and invited by the editors.
Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson
Curators and Writer: Stephen J. Williams, Ph.D. with input from Curators Larry H. Bernstein, MD, FCAP, Dr. Justin D. Pearlman, MD, PhD, FACC and Dr. Aviva Lev-Ari, PhD, RN
(this discussion is in a three part series including:
Using Scientific Content Curation as a Method for Validation and Biocuration
Using Scientific Content Curation as a Method for Open Innovation)
Every month I get my Wired magazine (yes, in hard print; I still like to turn pages manually, and I don’t mind getting grease or wing sauce on my magazine rather than on my e-reader), and I always love reading articles written by Clive Thompson. He has a certain flair for understanding the techno world we live in and the human/technology interaction, writing about interesting ways in which we almost inadvertently integrate new technologies into our day-to-day living, generating new entrepreneurship and new value.
Clive gives a wonderful example in Ory Okolloh, a young Kenyan-born law student who, frustrated with the lack of coverage of problems back home, started a blog about Kenyan politics. Her blog not only drew interest from movie producers who were documenting female bloggers but also gained the attention of fellow Kenyans who, during the upheaval after the 2007 Kenyan elections, helped Ory develop a Google map for reporting violence (http://www.ushahidi.com/), which eventually became a global organization using open-source technology for crisis management. There are a multitude of examples of how networks, and the conversations within these circles, are fostering new ideas. As Clive states in the article:
Our ideas are PRODUCTS OF OUR ENVIRONMENT.
They are influenced by the conversations around us.
However the article got me thinking of how Science 2.0 and the internet is changing how scientists contribute, share, and make connections to produce new and transformative ideas.
But HOW MUCH Knowledge is OUT THERE?
Clive’s article listed some amazing facts about the mountains of posts, tweets, words etc. out on the internet EVERY DAY, all of which exemplifies the problem:
154.6 billion EMAILS per DAY
400 million TWEETS per DAY
1 million BLOG POSTS (including this one) per DAY
2 million COMMENTS on WordPress per DAY
16 million WORDS on Facebook per DAY
TOTAL 52 TRILLION WORDS per DAY
As he estimates, this would be 520 million books per DAY (assuming an average book of 100,000 words).
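The back-of-the-envelope conversion is easy to check:

```python
# Check of the estimate above: 52 trillion words per day,
# at roughly 100,000 words per book.
words_per_day = 52_000_000_000_000
words_per_book = 100_000
books_per_day = words_per_day // words_per_book
print(f"{books_per_day:,} books per day")  # 520,000,000 books per day
```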
A LOT of INFO. But, as he suggests, it is not the volume but how we create and share this information that is critical; as the science-fiction writer Theodore Sturgeon noted, “Ninety percent of everything is crap” (Sturgeon’s Law).
Internet live stats show how congested the internet is each day (http://www.internetlivestats.com/). Needless to say Clive’s numbers are a bit off. As of the writing of this article:
2.9 billion internet users
981 million websites (only 25,000 hacked today)
128 billion emails
385 million Tweets
> 2.7 million BLOG posts today (including this one)
The Good, The Bad, and the Ugly of the Scientific Internet (The Wild West?)
So how many science blogs are out there? Back in 2008, “grrlscientist” asked this question and turned up a total of 19,881 blogs; however, most were “pseudoscience” blogs, not written by Ph.D.- or M.D.-level scientists. A deeper search on Technorati using the search term “scientist PhD” turned up about 2,000 written by trained scientists.
So granted, there is a lot of
….. when it comes to scientific information on the internet!
I had recently re-posted, on this site, a great example of how bad science and medicine can get propagated throughout the internet:
Drs. Elena Cattaneo and Gilberto Corbellini document their long, hard fight against false and unvalidated medical claims made by some “clinicians” about the utility and medical benefits of certain stem-cell therapies, sacrificing their time to debunk medical pseudoscience.
Using Curation and Science 2.0 to build Trusted, Expert Networks of Scientists and Clinicians
Establishing networks of trusted colleagues has been a cornerstone of scientific discourse for centuries. For example, in the mid-1640s, the Royal Society began as:
“a meeting of natural philosophers to discuss promoting knowledge of the natural world through observation and experiment”, i.e. science.
The Society met weekly to witness experiments and discuss what we would now call scientific topics. The first Curator of Experiments was Robert Hooke.
Indeed as discussed in “Science 2.0/Brainstorming” by the originators of OpenWetWare, an open-source science-notebook software designed to foster open-innovation, the new search and aggregation tools are making it easier to find, contribute, and share information to interested individuals. This paradigm is the basis for the shift from Science 1.0 to Science 2.0. Science 2.0 is attempting to remedy current drawbacks which are hindering rapid and open scientific collaboration and discourse including:
Slow time frame of current publishing methods: reviews can take years to produce, leading to outdated material
Information dissemination is currently one-dimensional: peer review, highly polished work, conferences
Current publishing does not encourage open feedback and review
Articles edited for print do not take advantage of new web-based features such as tagging, search-engine optimization, interactive multimedia, and hyperlinks
Published data and methodology are often incomplete
Published data are not available in formats readily accessible across platforms: gene lists are now mandated to be supplied as files, but other data do not have to be supplied in file format
Curation in the Sciences: View from Scientific Content Curators Larry H. Bernstein, MD, FCAP, Dr. Justin D. Pearlman, MD, PhD, FACC and Dr. Aviva Lev-Ari, PhD, RN
Curation is an active filtering of the immense amount of relevant and irrelevant content found on the web and in the peer-reviewed literature; unfiltered, such content can be disruptive. In doing good curation, however, one does more than simply assign value by presenting creative work in a category. Great curators comment and share experience across content, authors and themes. Great curators may see patterns others don’t, or may challenge or debate complex and apparently conflicting points of view. Answers to specifically focused questions come from the hard work of many in laboratory settings creatively establishing answers to definitive questions, each a part of the larger knowledge base of reference. There are those rare “Einsteins” who imagine a whole universe, unlike the three blind men of the Sufi tale: one held the tail, another the trunk, another the ear, and they all said, “this is an elephant!”
In my reading, I learn that the optimal ratio of curation to creation may be as high as 90% curation to 10% creation. Creating content is expensive. Curation, by comparison, is much less expensive.
– Larry H. Bernstein, MD, FCAP
Curation is Uniquely Distinguished by the Historical Exploratory Ties that Bind –Larry H. Bernstein, MD, FCAP
The explosion of information by numerous media, hardcopy and electronic, written and video, has created difficulties tracking topics and tying together relevant but separated discoveries, ideas, and potential applications. Some methods to help assimilate diverse sources of knowledge include a content expert preparing a textbook summary, a panel of experts leading a discussion or think tank, and conventions moderating presentations by researchers. Each of those methods has value and an audience, but they also have limitations, particularly with respect to timeliness and pushing the edge. In the electronic data age, there is a need for further innovation, to make synthesis, stimulating associations, synergy and contrasts available to audiences in a more timely and less formal manner. Hence the birth of curation. Key components of curation include expert identification of data, ideas and innovations of interest, expert interpretation of the original research results, integration with context, digesting, highlighting, correlating and presenting in novel light.
Curation is a work of original expression: a methodology that, in the context of medical research findings, provides an exposition of synthesis and an interpretation of the significance of the results to clinical care …
… leading to new, curated, and collaborative works by networks of experts to generate (in this case) ebooks on most significant trends and interpretations of scientific knowledge as relates to medical practice.
In Summary: How Scientific Content Curation Can Help
Given the aforementioned problems of:
I. the complex and rapid deluge of scientific information
II. the need for a collaborative, open environment to produce transformative innovation
III. need for alternative ways to disseminate scientific findings
CURATION MAY OFFER SOLUTIONS
I. Curation exists beyond the review: curation decreases time for assessment of current trends adding multiple insights, analyses WITH an underlying METHODOLOGY (discussed below) while NOT acting as mere reiteration, regurgitation
II. Curation providing insights from WHOLE scientific community on multiple WEB 2.0 platforms
III. Curation makes use of new computational and Web-based tools to provide interoperability of data, reporting of findings (shown in Examples below)
Therefore a discussion is given on methodologies, definitions of best practices, and tools developed to assist the content curation community in this endeavor.
Methodology in Scientific Content Curation as Envisioned by Aviva Lev-Ari, PhD, RN
At Leaders in Pharmaceutical Business Intelligence, site owner and chief editor Aviva Lev-Ari, PhD, RN has been developing a strategy “for the facilitation of Global access to Biomedical knowledge rather than the access to sheer search results on Scientific subject matters in the Life Sciences and Medicine”. According to Aviva, “for the methodology to attain this complex goal it is to be dealing with popularization of ORIGINAL Scientific Research via Content Curation of Scientific Research Results by Experts, Authors, Writers using the critical thinking process of expert interpretation of the original research results.” The following post:
Cardiovascular Original Research: Cases in Methodology Design for Content Curation and Co-Curation
demonstrates two examples of how content co-curation attempts to achieve this aim and develop networks of scientist and clinician curators to aid in the active discussion of scientific and medical findings, and uses scientific content curation as a means of critique, offering a “new architecture for knowledge”. Indeed, popular search engines such as Google and Yahoo, and even scientific search engines such as NCBI’s PubMed and the OVID search engine, rely on keywords and Boolean algorithms …
which has created a need for more context-driven scientific search and discourse.
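The keyword-and-Boolean retrieval model is easy to sketch, and the sketch shows its blind spot. The mini-corpus and query below are invented for illustration: a query matches on term presence alone, so it cannot distinguish a paper endorsing a therapy from one debunking it.

```python
# Minimal sketch of Boolean keyword retrieval over a hypothetical mini-corpus.
docs = {
    1: "clinical trial of stem cell therapy in cardiac repair",
    2: "debunking unproven stem cell therapy claims",
    3: "cancer stem cell signaling pathways",
}

def boolean_search(docs, must=(), must_not=()):
    """Return ids of documents containing every 'must' term and no 'must_not' term."""
    hits = []
    for doc_id, text in docs.items():
        words = set(text.split())
        if all(w in words for w in must) and not any(w in words for w in must_not):
            hits.append(doc_id)
    return hits

# "stem AND cell NOT cancer" retrieves an endorsement and a debunking alike:
print(boolean_search(docs, must=("stem", "cell"), must_not=("cancer",)))  # [1, 2]
```

Documents 1 and 2 take opposite positions, yet the query cannot tell them apart; supplying that missing context is exactly the curator’s role.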
To address this need, human intermediaries, empowered by the participatory wave of web 2.0, naturally started narrowing down the information and providing an angle of analysis and some context. They are bloggers, regular Internet users or community managers – a new type of profession dedicated to the web 2.0. A new use of the web has emerged, through which the information, once produced, is collectively spread and filtered by Internet users who create hierarchies of information.
.. where Célya considers curation an essential practice to manage open science and this new style of research.
As mentioned above in her article, Dr. Lev-Ari represents two examples of how content curation expanded thought, discussion, and eventually new ideas.
Curator edifies content through analytic process = NEW form of writing and organizations leading to new interconnections of ideas = NEW INSIGHTS
The Life Cycle of Science 2.0. Due to Web 2.0, new paradigms of scientific collaboration are rapidly emerging. Originally, scientific discoveries were made by individual laboratories or “scientific silos” whose main methods of communication were peer-reviewed publication, meeting presentations, and ultimately news outlets and multimedia. In the digital era, data were organized for literature search and biocurated databases. In the era of social media and Web 2.0, a group of scientifically and medically trained “curators” organizes the piles of digitally generated data, fitting them into an organizational structure that can be shared, communicated, and analyzed in a holistic approach, launching new ideas due to changes in the organizational structure of data and in data analytics.
The result, in this case, is a collaborative written work beyond the scope of a review. Currently, review articles are written by experts in the field and summarize the state of a research area. However, using collaborative, trusted networks of experts, the result is a real-time synopsis and analysis of the field with the goal in mind to
In her paper, Curating e-Science Data, Maureen Pennock of The British Library emphasized the importance of a diligent, validated, reproducible, and cost-effective methodology for curation by e-science communities over the ‘Grid’:
“The digital data deluge will have profound repercussions for the infrastructure of research and beyond. Data from a wide variety of new and existing sources will need to be annotated with metadata, then archived and curated so that both the data and the programmes used to transform the data can be reproduced for use in the future. The data represent a new foundation for new research, science, knowledge and discovery”
— JISC Senior Management Briefing Paper, The Data Deluge (2004)
As she states proper data and content curation is important for:
Post-analysis
Data and research result reuse for new research
Validation
Preservation of data in newer formats to prolong life-cycle of research results
However she laments the lack of
Funding for such efforts
Training
Organizational support
Monitoring
Established procedures
Tatiana Aders wrote a nice article based on an interview with Microsoft’s Robert Scoble, in which he emphasized the need for curation in a world where “Twitter is the replacement of the Associated Press Wire Machine” and new technological platforms are knocking out old platforms at a rapid pace. In addition, he notes that curation is also a social art form whose primary concerns are understanding an audience and a niche.
Indeed, part of the reason the need for curation is unmet, as writes Mark Carrigan, is the lack of appreciation by academics of the utility of tools such as Pinterest, Storify, and Pearl Trees to effectively communicate and build collaborative networks.
And teacher Nancy White, in her article Understanding Content Curation on her blog Innovations in Education, shows how curation serves as an educational tool for students and teachers, demonstrating that students need to CONTEXTUALIZE what they collect to add enhanced value, using higher mental processes such as:
Although many tools are related to biocuration and database building, the common idea is curating data with indexing, analyses, and contextual value to provide an audience with NETWORKS OF NEW IDEAS.
“Nowadays, any organization should employ network scientists/analysts who are able to map and analyze complex systems that are of importance to the organization (e.g. the organization itself, its activities, a country’s economic activities, transportation networks, research networks).”
Creating Content Curation Communities: Breaking Down the Silos!
An article by Dr. Dana Rotman, “Facilitating Scientific Collaborations Through Content Curation Communities”, highlights how scientific information resources, traditionally created and maintained by paid professionals, are being crowdsourced to what she termed “content curation communities”: professionals and nonprofessional volunteers who create, curate, and maintain the various scientific database tools we use, such as Encyclopedia of Life, ChemSpider, biowikipedia, etc. Although very useful and openly available, these projects create their own challenges, such as
information integration (various types of data and formats)
social integration (marginalized by scientific communities, no funding, no recognition)
The authors set forth some ways to overcome these challenges of the content curation community including:
standardization in practices
visualization to document contributions
emphasizing role of information professionals in content curation communities
maintaining quality control to increase respectability
recognizing participation in professional communities
A few great presentations and papers from the 2012 DICOSE meeting are found below
Judith M. Brown, Robert Biddle, Stevenson Gossage, Jeff Wilson & Steven Greenspan. Collaboratively Analyzing Large Data Sets using Multitouch Surfaces. (PDF) NotesForBrown
Bill Howe, Cecilia Aragon, David Beck, Jeffrey P. Gardner, Ed Lazowska, Tanya McEwen. Supporting Data-Intensive Collaboration via Campus eScience Centers. (PDF) NotesForHowe
Kerk F. Kee & Larry D. Browning. Challenges of Scientist-Developers and Adopters of Existing Cyberinfrastructure Tools for Data-Intensive Collaboration, Computational Simulation, and Interdisciplinary Projects in Early e-Science in the U.S.. (PDF) NotesForKee
Betsy Rolland & Charlotte P. Lee. Post-Doctoral Researchers’ Use of Preexisting Data in Cancer Epidemiology Research. (PDF) NoteForRolland
Dana Rotman, Jennifer Preece, Derek Hansen & Kezia Procita. Facilitating scientific collaboration through content curation communities. (PDF) NotesForRotman
Nicholas M. Weber & Karen S. Baker. System Slack in Cyberinfrastructure Development: Mind the Gaps. (PDF) NotesForWeber
Indeed, the movement of Science 2.0 from Science 1.0 had originated because these “silos” had frustrated many scientists, resulting in changes in the area of publishing (Open Access) but also communication of protocols (online protocol sites and notebooks like OpenWetWare and BioProtocols Online) and data and material registries (CGAP and tumor banks). Some examples are given below.
1. This project looked at what motivates researchers to work in an open manner with regard to their data, results and protocols, and whether advantages are delivered by working in this way.
The case studies consider the benefits and barriers to using ‘open science’ methods, and were carried out between November 2009 and April 2010 and published in the report Open to All? Case studies of openness in research. The Appendices to the main report (pdf) include a literature review, a framework for characterizing openness, a list of examples, and the interview schedule and topics. Some of the case study participants kindly agreed to us publishing the transcripts. This zip archive contains transcripts of interviews with researchers in astronomy, bioinformatics, chemistry, and language technology.
2. cBio – cBio’s biological data curation group developed and operates a methodology called CIMS, the Curation Information Management System. CIMS is a comprehensive curation and quality-control process that efficiently extracts information from publications.
3. NIH Topic Maps – This website provides a database and web-based interface for searching and discovering the types of research awarded by the NIH. The database uses automated, computer generated categories from a statistical analysis known as topic modeling.
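To give a feel for what “computer-generated categories” means here: NIH Topic Maps uses a real statistical model (topic modeling), whereas the sketch below only captures the intuition of assigning an abstract to the category whose vocabulary it mentions most. The topics, keywords, and abstracts are all invented for illustration.

```python
# Toy illustration of the intuition behind topic labeling of grant abstracts.
# Real topic modeling (e.g., LDA) learns topics statistically; this sketch
# simply scores each hypothetical topic's keywords against the abstract.
TOPICS = {
    "neuroscience": {"neuron", "brain", "synaptic"},
    "oncology": {"tumor", "cancer", "metastasis"},
}

def label(abstract):
    """Return the topic whose keyword set overlaps the abstract the most."""
    words = abstract.lower().split()
    scores = {t: sum(w in kws for w in words) for t, kws in TOPICS.items()}
    return max(scores, key=scores.get)

print(label("synaptic plasticity in brain circuits"))        # neuroscience
print(label("tumor suppressor loss drives metastasis"))      # oncology
```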
4. SciKnowMine (USC)- We propose to create a framework to support biocuration called SciKnowMine (after ‘Scientific Knowledge Mine’), cyberinfrastructure that supports biocuration through the automated mining of text, images, and other amenable media at the scale of the entire literature.
5. OpenWetWare – OpenWetWare is an effort to promote the sharing of information, know-how, and wisdom among researchers and groups working in biology and biological engineering. If you would like edit access, would be interested in helping out, or want your lab website hosted on OpenWetWare, please join us. OpenWetWare is managed by the BioBricks Foundation. They also have a wiki about Science 2.0.
6. LabTrove: a lightweight, web based, laboratory “blog” as a route towards a marked up record of work in a bioscience research laboratory. Authors in PLOS One article, from University of Southampton, report the development of an open, scientific lab notebook using a blogging strategy to share information.
7. OpenScience Project– The OpenScience project is dedicated to writing and releasing free and Open Source scientific software. We are a group of scientists, mathematicians and engineers who want to encourage a collaborative environment in which science can be pursued by anyone who is inspired to discover something new about the natural world.
8. Open Science Grid is a multi-disciplinary partnership to federate local, regional, community and national cyberinfrastructures to meet the needs of research and academic communities at all scales.
9. Some ongoing biomedical knowledge (curation) projects at ISI
IICurate – This project is concerned with developing a curation and documentation system for information integration, in collaboration with the II Group at ISI, as part of the BIRN.
BioScholar – Its primary purpose is to provide software for experimental biomedical scientists that permits a single scientific worker (at the level of a graduate student or postdoctoral worker) to design, construct, and manage a shared knowledge repository for a research group, derived from a local store of PDF files. This project was funded by NIGMS from 2008–2012 (RO1-GM083871).
10. VIVO for scientific communities – In order to connect information about research activities across institutions and make it available to others, taking into account smaller players in the research landscape and addressing their need for specific information (for example, by providing non-conventional research objects), the open-source software VIVO, which publishes research information as linked open data (LOD), is used in many countries. So-called VIVO harvesters collect research information that is freely available on the web and convert the collected data in conformity with LOD standards. The VIVO ontology builds on prevalent LOD namespaces and, depending on the needs of the specialist community concerned, can be expanded.
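In the linked-open-data model VIVO builds on, each fact is a subject-predicate-object triple that machines can merge across institutions. Below is a minimal, dependency-free sketch of emitting such triples in N-Triples form; the researcher URI and lab URL are hypothetical, while the FOAF namespace is a real, widely used LOD vocabulary.

```python
# Minimal sketch of research information as linked open data (N-Triples).
# Hypothetical subject URIs; FOAF is a real vocabulary namespace.
def triple(s, p, o):
    """Serialize one subject-predicate-object statement as an N-Triples line.
    Crude heuristic for the sketch: objects starting with 'http' are URIs,
    everything else is a string literal."""
    obj = f"<{o}>" if o.startswith("http") else f'"{o}"'
    return f"<{s}> <{p}> {obj} ."

FOAF = "http://xmlns.com/foaf/0.1/"
person = "http://example.org/researcher/42"  # hypothetical identifier

triples = [
    triple(person, FOAF + "name", "Jane Doe"),
    triple(person, FOAF + "workplaceHomepage", "http://example.org/lab"),
]
print("\n".join(triples))
```

Because every harvester emits statements in this shared shape and vocabulary, triples from different institutions can simply be concatenated and queried together, which is the point of the VIVO approach.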
11. Examples of scientific curation in different areas of Science/Pharma/Biotech/Education
From Science 2.0 to Pharma 3.0: Q&A with Hervé Basset
Hervé Basset, a specialist librarian in the pharmaceutical industry and owner of the blog “Science Intelligence”, talks about the inspiration behind his recent book, “From Science 2.0 to Pharma 3.0”, published by Chandos Publishing and available on Amazon, and about how health-care companies need a social media strategy to communicate with and convince the health-care consumer, not just the practitioner.
How the Internet of Things is Promoting the Curation Effort
Update by Stephen J. Williams, PhD 3/01/19
Up till now, curation efforts like wikis (Wikipedia, Wikimedicine, Wormbase, GenBank, etc.) have been supported by a largely voluntary army of citizens, scientists, and data enthusiasts. I am sure all have seen the requests for donations to help keep Wikipedia and its related projects up and running. One of the lesser-known sister projects of Wikipedia, Wikidata, aims to curate and represent all information in a form that both machines and humans can converse in. An army of about 4 million contributors creates Wiki entries and maintains these databases.
Enter the Age of the Personal Digital Assistants (Hellooo Alexa!)
In a March 2019 WIRED article, “Encyclopedia Automata: Where Alexa Gets Its Information,” senior WIRED writer Tom Simonite reports on the need for new types of data structures, and on how curated databases are essential both to new fields of AI and to enabling personal digital assistants like Alexa or Google Assistant to decipher the meaning of a user’s request.
As Mr. Simonite noted, many of our libraries of knowledge are encoded in an “ancient technology largely opaque to machines: prose.” Search engines like Google have no problem with a question asked in prose, as they only have to find relevant links to pages. Yet this is a problem for Google Assistant, for instance, as machines can’t quickly extract meaning from the internet’s mess of “predicates, complements, sentences, and paragraphs. It requires a guide.”
Enter Wikidata. According to founder Denny Vrandecic,
Language depends on knowing a lot of common sense, which computers don’t have access to
A Wikidata entry (of which there are about 60 million) identifies every concept and item with a numeric code, the QID. These codes are integrated with tags (like the handles and hashtags you use on Twitter, or the tags used in WordPress for search engine optimization) so computers can recognize patterns and relationships among these codes.
Human entry into these databases is critical, as we add new facts and, in particular, meaning to each of these items. Otherwise, machines have trouble deciphering our meaning, as with Apple’s Siri, whose users have complained of dumb algorithms misinterpreting their requests.
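To make the QID idea concrete, here is a minimal sketch of how such a machine-readable store can be traversed: items are opaque identifiers, and human-readable labels and typed relations hang off them. The QIDs and the property ID below match real Wikidata entries, but the store itself is a toy for illustration, not the Wikidata software or API.

```python
# Toy model of the Wikidata idea: items are opaque identifiers (QIDs);
# labels and typed relations (claims) are attached to them so that both
# machines and humans can work with the same facts.

items = {
    "Q42": {"label": "Douglas Adams"},
    "Q5": {"label": "human"},
}

# Claims are (subject, property, object) triples; P31 is Wikidata's
# "instance of" property.
claims = {
    ("Q42", "P31", "Q5"),
}

def describe(qid):
    """Render an item's claims as human-readable sentences."""
    prop_labels = {"P31": "is an instance of"}
    return [
        f"{items[s]['label']} {prop_labels[p]} {items[o]['label']}"
        for (s, p, o) in claims
        if s == qid
    ]

print(describe("Q42"))  # ['Douglas Adams is an instance of human']
```

Because the codes are language-independent, the same triple can be rendered in any language simply by swapping the label tables, which is part of what makes this representation attractive for assistants serving users worldwide.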
The knowledge of future machines could be shaped by you and me, not just tech companies and PhDs.
But this effort needs money
Wikimedia’s executive director, Katherine Maher, has prodded and cajoled these megacorporations for tapping the free resources of the Wikis. In response, Amazon and Facebook have donated millions to Wikimedia projects, and Google recently gave $3.1 million.
Future postings on the relevance and application of scientific curation will include:
Using Scientific Content Curation as a Method for Validation and Biocuration
Using Scientific Content Curation as a Method for Open Innovation
Other posts on this site related to Content Curation and Methodology include:
Part 2 presents the views of two Curators on the transformation of Scientific Publishing and the functioning of the Scientific AGORA (the marketplace of the ancient Greek city of Athens).
Views of Thomas Lin, NYT, 1/17/2012 – Cracking Open the Scientific Process
Presented below is e-Recognition of Author Views on a pioneering launch of the ONE and ONLY web-based Open Access Online Scientific Journal on frontiers in Biomedical Technologies, Genomics, Biological Sciences, Healthcare Economics, Pharmacology, Pharmaceuticals & Medicine.
Friction-free Collaboration over the Internet: An Equity Sharing Venture for “Open Access to Curation of Scientific Research” launched THREE TYPES of Scientific Research Sharing
Type 1:
“Open Access to Curation of Scientific Research” – Online Scientific Journal
The venture, Leaders in Pharmaceutical Business Intelligence, operates as an online scientific intellectual EXCHANGE – an Open Access Online Scientific Journal for curation and reporting on frontiers in Biomedical, Genomics, Biological Sciences, Healthcare Economics, Pharmacology, Pharmaceutical & Medicine. The website, http://pharmaceuticalintelligence.com, is a scientific, medical and business multi-expert authoring environment in several domains of the LIFE SCIENCES, PHARMACEUTICAL, HEALTHCARE & MEDICINE INDUSTRIES.
A GLOBAL FORUM Ijad Madisch, 31, a virologist and computer scientist, founded ResearchGate, a Berlin-based social networking platform for scientists that has more than 1.3 million members.
The New England Journal of Medicine marks its 200th anniversary this year with a timeline celebrating the scientific advances first described in its pages: the stethoscope (1816), the use of ether for anesthesia (1846), and disinfecting hands and instruments before surgery (1867), among others.
LIKE, FOLLOW, COLLABORATE A staff meeting at ResearchGate. The networking site, modeled after Silicon Valley startups, houses 350,000 papers.
For centuries, this is how science has operated — through research done in private, then submitted to science and medical journals to be reviewed by peers and published for the benefit of other researchers and the public at large. But to many scientists, the longevity of that process is nothing to celebrate.
The system is hidebound, expensive and elitist, they say. Peer review can take months, journal subscriptions can be prohibitively costly, and a handful of gatekeepers limit the flow of information. It is an ideal system for sharing knowledge, said the quantum physicist Michael Nielsen, only “if you’re stuck with 17th-century technology.”
Dr. Nielsen and other advocates for “open science” say science can accomplish much more, much faster, in an environment of friction-free collaboration over the Internet. And despite a host of obstacles, including the skepticism of many established scientists, their ideas are gaining traction.
Open-access archives and journals like arXiv and the Public Library of Science (PLoS) have sprung up in recent years. GalaxyZoo, a citizen-science site, has classified millions of objects in space, discovering characteristics that have led to a raft of scientific papers.
On the collaborative blog MathOverflow, mathematicians earn reputation points for contributing to solutions; in another math experiment dubbed the Polymath Project, mathematicians commenting on the Fields medalist Timothy Gowers’s blog in 2009 found a new proof for a particularly complicated theorem in just six weeks.
And a social networking site called ResearchGate — where scientists can answer one another’s questions, share papers and find collaborators — is rapidly gaining popularity.
Editors of traditional journals say open science sounds good, in theory. In practice, “the scientific community itself is quite conservative,” said Maxine Clarke, executive editor of the commercial journal Nature, who added that the traditional published paper is still viewed as “a unit to award grants or assess jobs and tenure.”
Dr. Nielsen, 38, who left a successful science career to write “Reinventing Discovery: The New Era of Networked Science,” agreed that scientists have been “very inhibited and slow to adopt a lot of online tools.” But he added that open science was coalescing into “a bit of a movement.”
On Thursday, 450 bloggers, journalists, students, scientists, librarians and programmers will converge on North Carolina State University (and thousands more will join in online) for the sixth annual ScienceOnline conference. Science is moving to a collaborative model, said Bora Zivkovic, a chronobiology blogger who is a founder of the conference, “because it works better in the current ecosystem, in the Web-connected world.”
Indeed, he said, scientists who attend the conference should not be seen as competing with one another. “Lindsay Lohan is our competitor,” he continued. “We have to get her off the screen and get science there instead.”
Facebook for Scientists?
“I want to make science more open. I want to change this,” said Ijad Madisch, 31, the Harvard-trained virologist and computer scientist behind ResearchGate, the social networking site for scientists.
Started in 2008 with few features, it was reshaped with feedback from scientists. Its membership has mushroomed to more than 1.3 million, Dr. Madisch said, and it has attracted several million dollars in venture capital from some of the original investors of Twitter, eBay and Facebook.
A year ago, ResearchGate had 12 employees. Now it has 70 and is hiring. The company, based in Berlin, is modeled after Silicon Valley startups. Lunch, drinks and fruit are free, and every employee owns part of the company.
The Web site is a sort of mash-up of Facebook, Twitter and LinkedIn, with profile pages, comments, groups, job listings, and “like” and “follow” buttons (but without baby photos, cat videos and thinly veiled self-praise). Only scientists are invited to pose and answer questions — a rule that should not be hard to enforce, with discussion threads about topics like polymerase chain reactions that only a scientist could love.
Scientists populate their ResearchGate profiles with their real names, professional details and publications — data that the site uses to suggest connections with other members. Users can create public or private discussion groups, and share papers and lecture materials. ResearchGate is also developing a “reputation score” to reward members for online contributions.
ResearchGate offers a simple yet effective end run around restrictive journal access with its “self-archiving repository.” Since most journals allow scientists to link to their submitted papers on their own Web sites, Dr. Madisch encourages his users to do so on their ResearchGate profiles. In addition to housing 350,000 papers (and counting), the platform provides a way to search 40 million abstracts and papers from other science databases.
In 2011, ResearchGate reports, 1,620,849 connections were made, 12,342 questions answered and 842,179 publications shared. Greg Phelan, chairman of the chemistry department at the State University of New York, Cortland, used it to find new collaborators, get expert advice and read journal articles not available through his small university. Now he spends up to two hours a day, five days a week, on the site.
Dr. Rajiv Gupta, a radiology instructor who supervised Dr. Madisch at Harvard and was one of ResearchGate’s first investors, called it “a great site for serious research and research collaboration,” adding that he hoped it would never be contaminated “with pop culture and chit-chat.”
COME TOGETHER Bora Zivkovic, a chronobiology blogger, is a founder of the ScienceOnline conference.
Dr. Gupta called Dr. Madisch the “quintessential networking guy — if there’s a Bill Clinton of the science world, it would be him.”
The Paper Trade
Dr. Sönke H. Bartling, a researcher at the German Cancer Research Center who is editing a book on “Science 2.0,” wrote that for scientists to move away from what is currently “a highly integrated and controlled process,” a new system for assessing the value of research is needed. If open access is to be achieved through blogs, what good is it, he asked, “if one does not get reputation and money from them?”
Changing the status quo — opening data, papers, research ideas and partial solutions to anyone and everyone — is still far more idea than reality. As the established journals argue, they provide a critical service that does not come cheap.
“I would love for it to be free,” said Alan Leshner, executive publisher of the journal Science, but “we have to cover the costs.” Those costs hover around $40 million a year to produce his nonprofit flagship journal, with its more than 25 editors and writers, sales and production staff, and offices in North America, Europe and Asia, not to mention print and distribution expenses. (Like other media organizations, Science has responded to the decline in advertising revenue by enhancing its Web offerings, and most of its growth comes from online subscriptions.)
Similarly, Nature employs a large editorial staff to manage the peer-review process and to select and polish “startling and new” papers for publication, said Dr. Clarke, its editor. And it costs money to screen for plagiarism and spot-check data “to make sure they haven’t been manipulated.”
Peer-reviewed open-access journals, like Nature Communications and PLoS One, charge their authors publication fees — $5,000 and $1,350, respectively — to defray their more modest expenses.
The largest journal publisher, Elsevier, whose products include The Lancet, Cell and the subscription-based online archive ScienceDirect, has drawn considerable criticism from open-access advocates and librarians, who are especially incensed by its support for the Research Works Act, introduced in Congress last month, which seeks to protect publishers’ rights by effectively restricting access to research papers and data.
In an Op-Ed article in The New York Times last week, Michael B. Eisen, a molecular biologist at the University of California, Berkeley, and a founder of the Public Library of Science, wrote that if the bill passes, “taxpayers who already paid for the research would have to pay again to read the results.”
In an e-mail interview, Alicia Wise, director of universal access at Elsevier, wrote that “professional curation and preservation of data is, like professional publishing, neither easy nor inexpensive.” And Tom Reller, a spokesman for Elsevier, commented on Dr. Eisen’s blog, “Government mandates that require private-sector information products to be made freely available undermine the industry’s ability to recoup these investments.”
Mr. Zivkovic, the ScienceOnline co-founder and a blog editor for Scientific American, which is owned by Nature, was somewhat sympathetic to the big journals’ plight. “They have shareholders,” he said. “They have to move the ship slowly.”
Still, he added: “Nature is not digging in. They know it’s happening. They’re preparing for it.”
Science 2.0
Scott Aaronson, a quantum computing theorist at the Massachusetts Institute of Technology, has refused to conduct peer review for or submit papers to commercial journals. “I got tired of giving free labor,” he said, to “these very rich for-profit companies.”
Dr. Aaronson is also an active member of online science communities like MathOverflow, where he has earned enough reputation points to edit others’ posts. “We’re not talking about new technologies that have to be invented,” he said. “Things are moving in that direction. Journals seem noticeably less important than 10 years ago.”
Dr. Leshner, the publisher of Science, agrees that things are moving. “Will the model of science magazines be the same 10 years from now? I highly doubt it,” he said. “I believe in evolution.
“When a better system comes into being that has quality and trustability, it will happen. That’s how science progresses, by doing scientific experiments. We should be doing that with scientific publishing as well.”
Matt Cohler, the former vice president of product management at Facebook who now represents Benchmark Capital on ResearchGate’s board, sees a vast untapped market in online science.
“It’s one of the last areas on the Internet where there really isn’t anything yet that addresses core needs for this group of people,” he said, adding that “trillions” are spent each year on global scientific research. Investors are betting that a successful site catering to scientists could shave at least a sliver off that enormous pie.
Dr. Madisch, of ResearchGate, acknowledged that he might never reach many of the established scientists for whom social networking can seem like a foreign language or a waste of time. But wait, he said, until younger scientists weaned on social media and open-source collaboration start running their own labs.
“If you said years ago, ‘One day you will be on Facebook sharing all your photos and personal information with people,’ they wouldn’t believe you,” he said. “We’re just at the beginning. The change is coming.”
The Internet now makes it possible to publish and share billions of data items every day, accessible to over 2 billion people worldwide. This mass of information makes it difficult, when searching, to extract the relevant and useful information from the background noise. It should be added that these searches are time-consuming and can take much longer than the time we actually have to spend on them. Today, Google and specialized search engines such as Google Scholar are based on established algorithms. But are these algorithms sufficiently in line with users’ needs? What if the web needed a human brain to select and put forward the relevant information and not just the information based on “popularity” and lexical and semantic operations?
To address this need, human intermediaries, empowered by the participatory wave of web 2.0, naturally started narrowing down the information and providing an angle of analysis and some context. They are bloggers, regular Internet users or community managers – a new type of profession dedicated to the web 2.0. A new use of the web has emerged, through which the information, once produced, is collectively spread and filtered by Internet users who create hierarchies of information. This “popularization of the web” therefore paves the way to a user-centered Internet that plays a more active role in finding means to improve the dissemination of information and filter it with more relevance. Today, this new practice has also been categorized and is known as curation.
The term “curation” was borrowed from the world of fine arts. Curators are responsible for the exhibitions held in museums and galleries. They build these exhibitions and act as intermediaries between the public and works of art. In contemporary art, the curator’s role is also to interpret works of art and discover new artists and trends of the moment. In a similar way on the web, the tasks performed by content curators include the search, selection, analysis, editorial work and dissemination of information. Curators can also share online the most relevant information on a specific subject. Instead of acting as mere echo chambers, they provide some context for their searches. For example, they address niche topics and themes that do not stand out in a traditional search. They prioritize the information and are able to find new means of presenting it, new types of visualization. Their role is, therefore, to find new formats, faster and more direct means of consultation for Internet users, in a context in which the time we spend reading the information is more and more limited.

Curation on the web has a social and relational dimension that plays a central role in the curator’s work. Anyone can act as a curator and personalize information, providing an angle that he or she invites us to discover. This means that curation can be carried out by individuals who do not have an institutional footing. The expression “powered by people” exemplifies this possibility of democratizing information searches.
The world of scientific research and culture is no exception to this movement. The web 2.0 offers the scientific community and its surrounding spheres the opportunity to discover new tools that transform practices and uses, not only of researchers, but also of all the actors of scientific and technical culture (STC).
Curation: an Essential Practice to Manage “Open Science”
The web 2.0 gave birth to new practices motivated by the will to have broader and faster cooperation in a more free and transparent environment. We have entered the era of an “open” movement: “open data”, “open software”, etc. In science, expressions like “open access” (to scientific publications and research results) and “open science” are used more and more often.
The concept of “open science” emerged from the web and created bigger and bigger niches all around the planet. Open science and its derivatives such as open access make us dream of an era of open, collective expertise and innovation on an international scale. This catalyst in the field of science is only possible on one condition: that it be accompanied by the emergence of a reflection on the new practices and uses that are essential to its conservation and progress. Sharing information and data at the international level is very demanding in terms of management and organization. As a result, curation has established itself in the realm of science and technology, both in the research community and in the world of scientific and technical culture.
Curation: Collaborative Bibliographic Management for the Researcher 2.0
In the world of research, curation appears as a logical extension of the literature review and bibliographic search, the pillars of a researcher’s work. Curation on the web has brought a new dimension to this work of organizing and prioritizing information. It makes it easier for researchers to collaborate and share, while also bringing to light some works that had previously remained in the shadows.
Mendeley and Zotero are both search and bibliographic management tools that assist you in the creation of an online library. Thus, it is possible to navigate in this mass of bibliographic data, referenced by the researcher, through multiple gateways: keywords, authors’ names, date of publication, etc. In addition, these programs make it possible to automatically generate article bibliographies in the formats specified by each scientific journal. What is new about these tools, apart from the “logistical” aid they provide, is that they are based on collaboration and sharing. Mendeley and Zotero let you create private or public groups. These groups make it possible to share a bibliography with other researchers. They also give access to discussion forums that are useful for sharing with international researchers. Other tools, like EndNote and Papers, exist, but these paid programs are less collaborative.
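The “logistical” aid described above (formatting a library of references to a journal’s citation style) can be sketched as a few lines of code. This is a toy formatter under an invented, simplified author–year style, not the actual API of Mendeley, Zotero, or any citation-style standard.

```python
# Toy bibliographic formatter: turn structured reference entries into
# formatted citation strings, the kind of task Mendeley and Zotero
# automate. The "author-year" style here is a simplified illustration,
# not a real journal's specification.

def format_entry(entry, style="author-year"):
    """Format one reference entry in the requested style."""
    authors = ", ".join(entry["authors"])
    if style == "author-year":
        return f"{authors} ({entry['year']}). {entry['title']}. {entry['journal']}."
    raise ValueError(f"unknown style: {style}")

library = [
    {"authors": ["Smith J", "Doe A"], "year": 2011,
     "title": "An example paper", "journal": "PLoS ONE"},
    {"authors": ["Brown K"], "year": 2009,
     "title": "Another example", "journal": "Nature"},
]

# Sort by first author, then year, as many bibliography styles require,
# and print the formatted list.
for entry in sorted(library, key=lambda e: (e["authors"][0], e["year"])):
    print(format_entry(entry))
```

The point of the sketch is that once references are stored as structured data rather than prose, switching a whole bibliography to another journal’s format is a matter of swapping the formatting rule, which is exactly the convenience these tools sell.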
New platforms, real scientific social networks, have also appeared. The leading platform, ResearchGate, was founded in 2008 and now counts 1.9 million users (August 2012). It is an online search platform, but it is used above all for social interaction. Researchers can create a profile and discussion groups, make their work available online, job hunt, etc. Other professional social networks for researchers have emerged, among them MyScienceWork, which is devoted to open access.
Curation, in the era of open science, accelerates the dissemination of information and provides access to the most relevant parts. Post-publication comments add value to the content. Apart from the benefits for the community, these new practices change the role of researchers in society by offering them new public spaces for expression. Curation on the web opens the way towards the development of an e-reputation and a new form of celebrity in the world of international science. It gives everyone the opportunity to show the cornerstones of their work in the same way that the research notebooks of Hypothèses.org were used in the Humanities and Social Sciences. This system based on the dual role of “observer/observed” may also impose limits on researchers, who would have to be more thorough in the choice of the articles they list.
Have we entered the era of the “researcher 2.0”? Undoubtedly, even if it is still limited to a small group of people. The tools described above are widely used for bibliographic management but their collaborative function is still less used. It is difficult to change researchers’ practices and attitudes. To move from a closed science to an open science in a world of cutthroat competition, researchers will have to grope their way along. These new means of sharing are still sometimes perceived as a threat to the work of researchers or as an excessively long and tedious activity.
Curation and Scientific and Technical Culture: Creating Hybrid Networks
Another area, where there are most likely fewer barriers, is scientific and technical culture. This broad term involves different actors such as associations, companies, universities’ communication departments, CCSTI (French centers for scientific, technical and industrial culture), journalists, etc. A number of these actors do not limit their work to popularizing the scientific data; they also consider they have an authentic mission of “culturing” science. The curation practice thus offers a better organization and visibility to the information. The sought-after benefits will be different from one actor to the next. University communication departments are using the web 2.0 more and more to promote their values; this is the case, for example, for the French Université Paris 8. For companies, curation offers the opportunity to become a reference on the themes related to their corporate identity. MyScienceWork, for example, began curating three collections surrounding the key themes of its project and identity: essentially, open access, new uses and practices of the web 2.0 in the world of science, and “women in science”. It is essential to keep abreast of the latest news coming from large institutions and traditional media, but also to take into account bloggers’ articles and links that offer a different viewpoint.
Some tools have also been developed in order to meet the expectations of these various users. Pearltrees and Scoop.it are non-specialized curation tools that are widely used by the world of Scientific and Technical Culture. Pearltrees offers a visual representation in which each listed page is presented as a pearl connected to the others through branches. The result: a prioritized data tree. These mindmaps can be shared with one’s contacts. A good example of this is the work done by Sébastien Freudenthal, who uses this tool on a daily basis and offers rich content listed by theme in the field of Sciences and the Web. Scoop.it offers a more traditional presentation, with a nice page layout that looks like a magazine. It enables you to list articles quickly and almost automatically, thanks to a plugin, and also to share them. A tool specific to the world of Technical and Scientific Culture is the social network of scientific culture Knowtex, which, in addition to its referencing and link-assessment functions, seeks to create a space interconnecting journalists, artists, communicators, designers, bloggers, researchers, etc.
These different tools are used on a daily basis by various actors of technical and scientific culture, but also by researchers, teachers, etc. They gather these communities around a shared practice and favor multiple conversations. The development of these hybrid networks is surely a cornerstone in the building of open science, encouraging the creation of new ties between science and society that go beyond the traditional geographical limits.
Many thanks to Antoine Blanchard for his participation in, and proofreading of, this article.
This article has two parts, the first presents a pioneering experience in Curation of Scientific Research in an Open Access Online Scientific Journal, in a BioMed e-Books Series and in curation of a Scoop.it! Journal on Medical Imaging.
The second part presents the views of two Curators on the transformation of Scientific Publishing and the functioning of the Scientific AGORA (the marketplace of the ancient Greek city of Athens).
The CHANGES described above are irrevocable and foster progress of civilization by provision of ACCESS to the Scientific Process and Resources via collaboration among peers.
The scientists who were recruited to appear at a conference called Entomology-2013 thought they had been selected to make a presentation to the leading professional association of scientists who study insects.
But they found out the hard way that they were wrong. The prestigious, academically sanctioned conference they had in mind has a slightly different name: Entomology 2013 (without the hyphen). The one they had signed up for featured speakers who were recruited by e-mail, not vetted by leading academics. Those who agreed to appear were later charged a hefty fee for the privilege, and pretty much anyone who paid got a spot on the podium that could be used to pad a résumé.
“I think we were duped,” one of the scientists wrote in an e-mail to the Entomological Society.
Those scientists had stumbled into a parallel world of pseudo-academia, complete with prestigiously titled conferences and journals that sponsor them. Many of the journals and meetings have names that are nearly identical to those of established, well-known publications and events.
Steven Goodman, a dean and professor of medicine at Stanford and the editor of the journal Clinical Trials, which has its own imitators, called this phenomenon “the dark side of open access,” the movement to make scholarly publications freely available.
The number of these journals and conferences has exploded in recent years as scientific publishing has shifted from a traditional business model for professional societies and organizations built almost entirely on subscription revenues to open access, which relies on authors or their backers to pay for the publication of papers online, where anyone can read them.
Open access got its start about a decade ago and quickly won widespread acclaim with the advent of well-regarded, peer-reviewed journals like those published by the Public Library of Science, known as PLoS. Such articles were listed in databases like PubMed, which is maintained by the National Library of Medicine, and selected for their quality.
But some researchers are now raising the alarm about what they see as the proliferation of online journals that will print seemingly anything for a fee. They warn that nonexperts doing online research will have trouble distinguishing credible research from junk. “Most people don’t know the journal universe,” Dr. Goodman said. “They will not know from a journal’s title if it is for real or not.”
Researchers also say that universities are facing new challenges in assessing the résumés of academics. Are the publications they list in highly competitive journals or ones masquerading as such? And some academics themselves say they have found it difficult to disentangle themselves from these journals once they mistakenly agree to serve on their editorial boards.
The phenomenon has caught the attention of Nature, one of the most competitive and well-regarded scientific journals. In a news report published recently, the journal noted “the rise of questionable operators” and explored whether it was better to blacklist them or to create a “white list” of those open-access journals that meet certain standards. Nature included a checklist on “how to perform due diligence before submitting to a journal or a publisher.”
Jeffrey Beall, a research librarian at the University of Colorado in Denver, has developed his own blacklist of what he calls “predatory open-access journals.” There were 20 publishers on his list in 2010, and now there are more than 300. He estimates that there are as many as 4,000 predatory journals today, at least 25 percent of the total number of open-access journals.
“It’s almost like the word is out,” he said. “This is easy money, very little work, a low barrier start-up.”
Journals on what has become known as “Beall’s list” generally do not post the fees they charge on their Web sites and may not even inform authors of them until after an article is submitted. They barrage academics with e-mail invitations to submit articles and to be on editorial boards.
One publisher on Beall’s list, Avens Publishing Group, even sweetened the pot for those who agreed to be on the editorial board of The Journal of Clinical Trails & Patenting, offering 20 percent of its revenues to each editor.
One of the most prolific publishers on Beall’s list, Srinubabu Gedela, the director of the Omics Group, has about 250 journals and charges authors as much as $2,700 per paper. Dr. Gedela, who lists a Ph.D. from Andhra University in India, says on his Web site that he “learnt to devise wonders in biotechnology.”
Another Beall’s list publisher, Dove Press, says on its Web site, “There are no limits on the number or size of the papers we can publish.”
Open-access publishers say that the papers they publish are reviewed and that their businesses are legitimate and ethical.
“There is no compromise on quality review policy,” Dr. Gedela wrote in an e-mail. “Our team’s hard work and dedicated services to the scientific community will answer all the baseless and defamatory comments that have been made about Omics.”
But some academics say many of these journals’ methods are little different from spam e-mails offering business deals that are too good to be true.
Paulino Martínez, a doctor in Celaya, Mexico, said he was gullible enough to send two articles in response to an e-mail invitation he received last year from The Journal of Clinical Case Reports. They were accepted. Then came a bill saying he owed $2,900. He was shocked, having had no idea there was a fee for publishing. He asked to withdraw the papers, but they were published anyway.
“I am a doctor in a hospital in the province of Mexico, and I don’t have the amount they requested,” Dr. Martínez said. The journal offered to reduce his bill to $2,600. Finally, after a year and many e-mails and a phone call, the journal forgave the money it claimed he owed.
Some professors listed on the Web sites of journals on Beall’s list, and the associated conferences, say they made a big mistake getting involved with the journals and cannot seem to escape them.
Thomas Price, an associate professor of reproductive endocrinology and fertility at the Duke University School of Medicine, agreed to be on the editorial board of The Journal of Gynecology & Obstetrics because he saw the name of a well-respected academic expert on its Web site and wanted to support open-access journals. He was surprised, though, when the journal repeatedly asked him to recruit authors and submit his own papers. Mainstream journals do not do this because researchers ordinarily want to publish their papers in the best journal that will accept them. Dr. Price, appalled by the request, refused and asked repeatedly over three years to be removed from the journal’s editorial board. But his name was still there.
“They just don’t pay any attention,” Dr. Price said.
About two years ago, James White, a plant pathologist at Rutgers, accepted an invitation to serve on the editorial board of a new journal, Plant Pathology & Microbiology, not realizing the nature of the journal. Meanwhile, his name, photograph and résumé were on the journal’s Web site. Then he learned that he was listed as an organizer and speaker on a Web site advertising Entomology-2013.
“I am not even an entomologist,” he said.
He thinks the publisher of the plant journal, which also sponsored the entomology conference, simply pasted his name, photograph and résumé onto the conference Web site. Outraged that the conference and journal were “using a person’s credentials to rip off other unaware scientists,” Dr. White asked that his name be removed from both.
Weeks went by and nothing happened, he said. Last Monday, in response to this reporter’s e-mail to the conference organizers, Jessica Lincy, who said only that she was a conference member, wrote to explain that the conference had “technical problems” removing Dr. White’s name. On Tuesday, his name was gone. But it remained on the Web site of the journal.
Dr. Gedela, the publisher of the journals and sponsor of the conference, said in an e-mail on Thursday that Dr. Price’s and Dr. White’s names remained on the Web sites “because of communication gap between the EB member and the editorial assistant,” referring to editorial board members. That day, their names were gone from the journals’ Web sites.
“I really should have known better,” Dr. White said of his editorial board membership, adding that he did not fully realize how the publishing world had changed. “It seems like the Wild West now.”
This article has been revised to reflect the following correction:
Correction: April 8, 2013
An earlier version of this article misstated the name of a city in Mexico that is home to a doctor who sent articles to a pseudo-academic journal. It is Celaya, not Ceyala.