Archive for the ‘Academic Publishing’ Category

Will Web 3.0 Do Away With Science 2.0? Is Science Falling Behind?

Curator: Stephen J. Williams, Ph.D.

UPDATED 4/06/2022

A while back (actually many moons ago) I put up two posts on this site:

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

Twitter is Becoming a Powerful Tool in Science and Medicine

Each of these posts was about the importance of scientific curation of findings within the realm of social media and Web 2.0, a sub-environment known throughout the scientific communities as Science 2.0, in which expert networks collaborated to produce a massive new corpus of knowledge by sharing their views and insights on peer-reviewed scientific findings. Through this new media, the process of curation would itself generate new ideas and new directions for research and discovery.

The platform sort of looked like the image below:


This system lay above a platform of the original Science 1.0, made up of all the scientific journals, books, and traditional literature:

In the old Science 1.0 format, scientific dissemination was in the format of hard print journals, and library subscriptions were mandatory (and eventually expensive). Open Access has tried to ameliorate the expense problem.

Previous image source: PeerJ.com

To index the massive and voluminous research and papers beyond the old Dewey Decimal system, a process of curation was mandatory. Dissemination through the new social media was a natural fit, but the cost had to be spread out among numerous players. Journals, faced with the high cost of subscriptions, found that their only way into this new media was to become Open Access, a movement first sparked by journals like PLOS and PeerJ but then grudgingly adopted throughout the landscape. As with any movement or new adoption, one gets the Good, the Bad, and the Ugly (as described in the Clive Thompson article cited above). The bad side of Open Access journals was that

  1. costs are still assumed by the individual researcher, not by the journals
  2. numerous predatory journals have arisen


Even PeerJ, in a column celebrating a year's worth of Open Access success stories, lamented the key issues still facing Open Access in practice, which included the cost and the rise of predatory journals.

In essence, Open Access and Science 2.0 sprang full force BEFORE anyone had thought of a way to defray the costs.


Can Web 3.0 Finally Offer a Way to Right the Issues Facing High Costs of Scientific Publishing?

What is Web 3.0?

From Wikipedia: https://en.wikipedia.org/wiki/Web3

Web 1.0 and Web 2.0 refer to eras in the history of the Internet as it evolved through various technologies and formats. Web 1.0 refers roughly to the period from 1991 to 2004, where most websites were static webpages, and the vast majority of users were consumers, not producers, of content.[6][7] Web 2.0 is based around the idea of “the web as platform”,[8] and centers on user-created content uploaded to social-networking services, blogs, and wikis, among other services.[9] Web 2.0 is generally considered to have begun around 2004, and continues to the current day.[8][10][4]


The term “Web3”, specifically “Web 3.0”, was coined by Ethereum co-founder Gavin Wood in 2014.[1] In 2020 and 2021, the idea of Web3 gained popularity[citation needed]. Particular interest spiked towards the end of 2021, largely due to interest from cryptocurrency enthusiasts and investments from high-profile technologists and companies.[4][5] Executives from venture capital firm Andreessen Horowitz travelled to Washington, D.C. in October 2021 to lobby for the idea as a potential solution to questions about Internet regulation with which policymakers have been grappling.[11]

Web3 is distinct from Tim Berners-Lee's 1999 concept for a semantic web, which has also been called “Web 3.0”.[12] Some writers referring to the decentralized concept usually known as “Web3” have used the terminology “Web 3.0”, leading to some confusion between the two concepts.[2][3] Furthermore, some visions of Web3 also incorporate ideas relating to the semantic web.[13][14]


Web3 revolves around the idea of decentralization, which proponents often contrast with Web 2.0, wherein large amounts of the web’s data and content are centralized in the fairly small group of companies often referred to as Big Tech.[4]

Specific visions for Web3 differ, but all are heavily based in blockchain technologies, such as various cryptocurrencies and non-fungible tokens (NFTs).[4] Bloomberg described Web3 as an idea that “would build financial assets, in the form of tokens, into the inner workings of almost anything you do online”.[15] Some visions are based around the concepts of decentralized autonomous organizations (DAOs).[16] Decentralized finance (DeFi) is another key concept; in it, users exchange currency without bank or government involvement.[4] Self-sovereign identity allows users to identify themselves without relying on an authentication system such as OAuth, in which a trusted party has to be reached in order to assess identity.[17]
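Since all of these visions rest on blockchain technology, it may help to see the core mechanism in miniature. Below is a minimal, hypothetical Python sketch of a hash-linked ledger (the basic data structure underlying blockchains), not any real cryptocurrency: each block commits to its predecessor's hash, so tampering with any earlier record invalidates the chain. The transaction strings are invented for illustration.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, data: str) -> list:
    """Append a block whose hash commits to the previous block."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev_hash": prev}
    block["hash"] = block_hash(block)
    chain.append(block)
    return chain

def verify(chain: list) -> bool:
    """A chain is valid if every block's hash matches its contents
    and points at its predecessor; any tampering breaks the links."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, "token mint: 10 -> alice")
add_block(chain, "transfer: alice -> bob, 3")
assert verify(chain)

chain[0]["data"] = "token mint: 1000 -> mallory"  # tamper with history
assert not verify(chain)
```

Real systems add decentralized consensus (many parties hold copies and agree on which chain is valid), which is what removes the need for a single trusted intermediary.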


Technologists and journalists have described Web3 as a possible solution to concerns about the over-centralization of the web in a few “Big Tech” companies.[4][11] Some have expressed the notion that Web3 could improve data security, scalability, and privacy beyond what is currently possible with Web 2.0 platforms.[14] Bloomberg states that sceptics say the idea “is a long way from proving its use beyond niche applications, many of them tools aimed at crypto traders”.[15] The New York Times reported that several investors are betting $27 billion that Web3 “is the future of the internet”.[18][19]

Some companies, including Reddit and Discord, explored incorporating Web3 technologies into their platforms in late 2021.[4][20] The company’s CEO, Jason Citron, tweeted a screenshot suggesting it might be exploring integrating Web3 into their platform. This led some to cancel their paid subscriptions over their distaste for NFTs, and others expressed concerns that such a change might increase the amount of scams and spam they had already experienced on crypto-related Discord servers.[20] Two days later, after heavy user backlash, Citron tweeted that the company had no plans to integrate Web3 technologies into their platform, and said that it was an internal-only concept that had been developed in a company-wide hackathon.[21]

Some legal scholars quoted by The Conversation have expressed concerns over the difficulty of regulating a decentralized web, which they reported might make it more difficult to prevent cybercrime, online harassment, hate speech, and the dissemination of child abuse images.[13] But the news website also states that the “[decentralized web] represents the cyber-libertarian views and hopes of the past that the internet can empower ordinary people by breaking down existing power structures.” Some other critics of Web3 see the concept as a part of a cryptocurrency bubble, or as an extension of blockchain-based trends that they see as overhyped or harmful, particularly NFTs.[20] Some critics have raised concerns about the environmental impact of cryptocurrencies and NFTs. Others have expressed beliefs that Web3 and the associated technologies are a pyramid scheme.[5]

Kevin Werbach, author of The Blockchain and the New Architecture of Trust,[22] said that “many so-called ‘web3’ solutions are not as decentralized as they seem, while others have yet to show they are scalable, secure and accessible enough for the mass market”, adding that this “may change, but it’s not a given that all these limitations will be overcome”.[23]

David Gerard, author of Attack of the 50 Foot Blockchain,[24] told The Register that “web3 is a marketing buzzword with no technical meaning. It’s a melange of cryptocurrencies, smart contracts with nigh-magical abilities, and NFTs just because they think they can sell some monkeys to morons”.[25]

Below is an article from MarketWatch.com's Distributed Ledger series about the different forms and cryptocurrencies involved.

From Marketwatch: https://www.marketwatch.com/story/bitcoin-is-so-2021-heres-why-some-institutions-are-set-to-bypass-the-no-1-crypto-and-invest-in-ethereum-other-blockchains-next-year-11639690654?mod=home-page

by Frances Yue, Editor of Distributed Ledger, Marketwatch.com

Clayton Gardner, co-CEO of crypto investment management firm Titan, told Distributed Ledger that as crypto embraces broader adoption, he expects more institutions to bypass bitcoin and invest in other blockchains, such as Ethereum, Avalanche, and Terra, in 2022, all of which boast smart-contract features.

Bitcoin traditionally did not support complex smart contracts, which are computer programs stored on blockchains, though a major upgrade in November might have unlocked more potential.

“Bitcoin was originally seen as a macro speculative asset by many funds and for many it still is,” Gardner said. “If anything solidifies its use case, it’s a store of value. It’s not really used as originally intended, perhaps from a medium of exchange perspective.”

For institutions that are looking for blockchains that can “produce utility and some intrinsic value over time,” they might consider some other smart contract blockchains that have been driving the growth of decentralized finance and web 3.0, the third generation of the Internet, according to Gardner. 

“Bitcoin is still one of the most secure blockchains, but I think layer-one and layer-two blockchains beyond Bitcoin will handle the majority of transactions and activities, from NFTs (nonfungible tokens) to DeFi,” Gardner said. “So I think institutions see that, and insofar as they want to put capital to work in the coming months, I think that could be where they just pump the capital.”

Decentralized social media? 

The price of Decentralized Social, or DeSo, a cryptocurrency powering a blockchain that supports decentralized social media applications, surged roughly 74% to about $164 from $94 after DeSo was listed on Coinbase Pro on Monday, before it fell to about $95, according to CoinGecko.

In the eyes of Nader Al-Naji, head of the DeSo foundation, decentralized social media has the potential to be “a lot bigger” than decentralized finance.

“Today there are only a few companies that control most of what we see online,” Al-Naji told Distributed Ledger in an interview. But DeSo is “creating a lot of new ways for creators to make money,” Al-Naji said.

“If you find a creator when they’re small, or an influencer, you can invest in that, and then if they become bigger and more popular, you make money and they get capital early on to produce their creative work,” according to Al-Naji.

BitClout, the first application created by Al-Naji and his team on the DeSo blockchain, had initially drawn controversy, as some found that they had profiles on the platform without their consent, while the application’s users were buying and selling tokens representing their identities. Such tokens are called “creator coins.”

Al-Naji responded to the controversy, saying that DeSo now supports more than 200 social-media applications, including BitClout. “I think that if you don’t like those features, you now have the freedom to use any app you want. Some apps don’t have that functionality at all.”


But before I get to the “selling monkeys to morons” quote, I want to talk about where Science 2.0 might be headed.

My foray into Science 2.0, and my pondering of what the movement to a Science 3.0 might look like, led me to an article by Dr. Vladimir Teif, who studies gene regulation and the nucleosome, and who has created a worldwide group of scientists who discuss chromatin and gene regulation in a journal-club format.

For more information on this Fragile Nucleosome journal club see https://generegulation.org/fragile-nucleosome/.

Fragile Nucleosome is an international community of scientists interested in chromatin and gene regulation. Fragile Nucleosome is active in several spaces: one is the Discord server where several hundred scientists chat informally on scientific matters. You can join the Fragile Nucleosome Discord server. Another activity of the group is the organization of weekly virtual seminars on Zoom. Our webinars are usually conducted on Wednesdays 9am Pacific time (5pm UK, 6pm Central Europe). Most previous seminars have been recorded and can be viewed at our YouTube channel. The schedule of upcoming webinars is shown below. Our third activity is the organization of weekly journal clubs detailed at a separate page (Fragile Nucleosome Journal Club).


His lab site is at https://generegulation.org/, and he has published a paper describing what he feels the #science2_0 to #science3_0 transition will look like (see his blog page on this at https://generegulation.org/open-science/).

He had coined this concept of Science 3.0 back in 2009. As Dr. Teif has mentioned:

So essentially I first introduced this word Science 3.0 in 2009, and since then we did a lot to implement this in practice. The Twitter account @generegulation is also one of examples


This is curious, as we still have an ill-defined concept of what #science3_0 would look like, but it is a good read nonetheless.

His paper, entitled “Science 3.0: Corrections to the Science 2.0 paradigm”, is on the Cornell preprint server at https://arxiv.org/abs/1301.2522.



Science 3.0: Corrections to the Science 2.0 paradigm

The concept of Science 2.0 was introduced almost a decade ago to describe the new generation of online-based tools for researchers allowing easier data sharing, collaboration and publishing. Although technically sound, the concept still does not work as expected. Here we provide a systematic line of arguments to modify the concept of Science 2.0, making it more consistent with the spirit and traditions of science and Internet. Our first correction to the Science 2.0 paradigm concerns the open-access publication models charging fees to the authors. As discussed elsewhere, we show that the monopoly of such publishing models increases biases and inequalities in the representation of scientific ideas based on the author’s income. Our second correction concerns post-publication comments online, which are all essentially non-anonymous in the current Science 2.0 paradigm. We conclude that scientific post-publication discussions require special anonymization systems. We further analyze the reasons of the failure of the current post-publication peer-review models and suggest what needs to be changed in Science 3.0 to convert Internet into a large journal club. [bold face added]

In this paper it is important to note the transition from Science 1.0, which involved hard-copy journal publications usually accessible only in libraries, to a more digital 2.0 format in which data, papers, and ideas could be easily shared among networks of scientists.

As Dr. Teif states, the term “Science 2.0” was coined back in 2009, and several influential journals, including Science, Nature, and Scientific American, endorsed the term and encouraged scientists to move their discussions online. However, even though thousands of scientists are now on Science 2.0 platforms, Dr. Teif notes that membership in many Science 2.0 networking groups, such as those on LinkedIn and ResearchGate, has seemingly saturated over the years, with few new members in recent times.
The consensus is that Science 2.0 networking is beneficial because:
  1. it multiplies the efforts of many scientists, including experts, and adds scientific discourse unavailable in the 1.0 format
  2. online data sharing assists the process of discovery (evident in preprint servers, bio-curated databases, and GitHub projects)
  3. open-access publishing gives free access to professional articles and may become the only publishing format in the future (although this is highly debatable, as many journals are holding on to a “hybrid open access” format which is not truly open access)
  4. sharing unfinished works, critiques, and opinions creates visibility for scientists, who can receive credit for their expert commentary

Dr. Teif articulates a few concerns about Science 3.0:

A.  Science 3.0 Still Needs Peer Review

Peer review of scientific findings will always be imperative in the dissemination of well-done, properly controlled scientific discovery.  Just as Science 2.0 relies on an army of scientific volunteers, the peer-review process involves an army of scientific experts who give their time to safeguard the credibility of science, by ensuring that findings are reliable and data are presented fairly and properly.  It has been very evident, in this time of pandemic and the rapid increase in the volume of preprint-server papers on SARS-CoV-2, that peer review is critical.  Many of these papers on such preprint servers were later either retracted or failed a stringent peer-review process.

Many journals of the 1.0 format do not generally reward their peer reviewers beyond the self-credit that researchers list on their curricula vitae.  Some journals, like the MDPI journal family, do issue peer-reviewer credits, which can be used to defray the high publication costs of open access (one area many scientists lament about the open-access movement: the burden of publication cost lies on the individual researcher).

One issue highlighted is the potential for INFORMATION NOISE arising from the ability to self-publish on Science 2.0 platforms.


The NEW BREED was born in 4/2012

An ongoing effort on this platform, https://pharmaceuticalintelligence.com/, is to establish a scientific methodology for curating scientific findings, where one of the goals is to help quell the information noise that can result from the massive amounts of new informatics and data appearing in the biomedical literature.
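One mechanical step in such a curation methodology can be automated: flagging near-duplicate reports so a curator can review redundant findings together. Below is a hypothetical Python sketch using bag-of-words cosine similarity; the threshold, function names, and example texts are illustrative assumptions, not part of this site's actual workflow.

```python
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts, lowercased."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def flag_near_duplicates(abstracts, threshold=0.8):
    """Return index pairs of abstracts that look like redundant reports."""
    vecs = [vectorize(t) for t in abstracts]
    return [(i, j)
            for i in range(len(vecs))
            for j in range(i + 1, len(vecs))
            if cosine(vecs[i], vecs[j]) >= threshold]

abstracts = [
    "the gene regulates tumor growth",
    "the gene regulates tumor growth in mice",
    "completely unrelated finance report",
]
print(flag_near_duplicates(abstracts, threshold=0.7))  # → [(0, 1)]
```

A production pipeline would use richer representations (TF-IDF, embeddings), but even this simple filter illustrates how part of the noise-reduction burden can be shifted off human curators.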

B.  Open Access Publishing Model Leads to Biases and Inequalities in Idea Selection

The open-access publishing model has been compared to the model applied by the advertising industry years ago, when publishers considered journal articles as “advertisements”.  However, NOTHING could be further from the truth.  In advertising, the companies, not the consumer, pay for the ads.  In scientific open-access publishing, although the consumer (libraries) does not pay for access, the burden of BOTH the cost of doing the research and the cost of publishing the findings is now put on the individual researcher.  Some of these publishing costs can be as high as $4,000 USD per article, which is very high for most researchers.  Many universities will reimburse these publisher fees for open-access publishing, but that simply shifts the cost back to the institution and the individual researcher, limiting the cost savings to either.

This creates a situation in which young researchers, who in general are not well funded, struggle with publication costs, setting up a bias and an inequitable system that rewards well-funded senior researchers and bigger academic labs.

C. Post publication comments and discussion require online hubs and anonymization systems

Many recent publications stress the importance of a post-publication review process, yet although many big journals like Nature and Science have their own blogs and commentary systems, these are rarely used.  In fact, there is roughly one comment per 100 views of a journal article on these systems.  In the traditional journals, editors are the referees of comments and have the ability to censor comments or discourse.  The article laments that commenting on journal articles should be as easy as commenting on other social sites, yet scientists are still not offering their comments or opinions.

In my personal experience, a well-written commentary goes through editors, who often reject a comment as if they were rejecting an original research article.  Thus many scientists, I believe, after fashioning a well-researched and referenced reply, never see it get the light of day if it is not in the editor's interest.

Anonymity is therefore greatly needed, and its absence may be the hindrance keeping scientific discourse so limited on these types of Science 2.0 platforms.  Platforms that have had success in this arena include those allowing anonymity, like Wikipedia, and certain closed LinkedIn professional groups, whereas more open platforms like Google Knowledge have been failures.

A great example on this platform was a very spirited conversation on genomics, tumor heterogeneity, and personalized medicine, which we curated from a LinkedIn discussion (unfortunately LinkedIn has since closed many groups), seen here:

Issues in Personalized Medicine: Discussions of Intratumor Heterogeneity from the Oncology Pharma forum on LinkedIn



In this discussion, it was surprising that over a weekend so many scientists from all over the world contributed to a great discussion on the topic of tumor heterogeneity.

But many feel such discussions would be safer if they were anonymized.  However then researchers do not get any credit for their opinions or commentaries.

A major problem is how to take these intangible contributions and make them into tangible assets that would both promote the discourse and reward those who take the time to improve scientific discussion.

This is where something like NFTs or a decentralized network may become important!
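As a toy illustration of how such incentives might work, here is a hypothetical Python sketch of a curation-credit ledger in which expert commentary mints transferable credits. It is deliberately centralized and minimal; a real Web3 version would implement the same mint/transfer logic in an on-chain smart contract, and all names and credit values here are invented.

```python
from collections import defaultdict

class CurationLedger:
    """Toy ledger: contributions mint credits; credits are transferable,
    so a reviewer's accumulated record becomes a portable asset."""

    def __init__(self):
        self.balances = defaultdict(int)
        self.log = []  # append-only audit trail of all events

    def reward(self, reviewer: str, contribution: str, credits: int = 1):
        """Mint credits for a documented contribution (review, curation)."""
        self.balances[reviewer] += credits
        self.log.append(("reward", reviewer, contribution, credits))

    def transfer(self, sender: str, receiver: str, credits: int):
        """Move credits between parties, refusing overdrafts."""
        if self.balances[sender] < credits:
            raise ValueError("insufficient credits")
        self.balances[sender] -= credits
        self.balances[receiver] += credits
        self.log.append(("transfer", sender, receiver, credits))

ledger = CurationLedger()
ledger.reward("reviewer_a", "peer review of preprint 123", credits=2)
ledger.reward("reviewer_a", "curation of tumor-heterogeneity thread")
ledger.transfer("reviewer_a", "journal_x", 1)  # e.g., toward an APC discount
```

The point of the sketch is the incentive loop, not the technology: once commentary mints something transferable and auditable, reviewers keep a record of credit for their time, which is exactly what the anonymized Science 2.0 platforms above fail to provide.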




UPDATED 5/09/2022

Below is an online @TwitterSpace discussion we had with some young scientists who are just starting out, who gave their thoughts on what Science 3.0 and the future of the dissemination of science might look like, in light of this new metaverse.  However, we have to define each of these terms in light of science, and not treat the Internet as merely a decentralized marketplace for commonly held goods.

This online discussion was tweeted out and got a fair number of impressions (60) as well as interactors (50).

For the recording, on both Twitter as well as in an audio format, please see below:

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Set a reminder for my upcoming Space! <a href="https://t.co/7mOpScZfGN">https://t.co/7mOpScZfGN</a> <a href="https://twitter.com/Pharma_BI?ref_src=twsrc%5Etfw">@Pharma_BI</a> <a href="https://twitter.com/PSMTempleU?ref_src=twsrc%5Etfw">@PSMTempleU</a> <a href="https://twitter.com/hashtag/science3_0?src=hash&amp;ref_src=twsrc%5Etfw">#science3_0</a> <a href="https://twitter.com/science2_0?ref_src=twsrc%5Etfw">@science2_0</a></p>&mdash; Stephen J Williams (@StephenJWillia2) <a href="https://twitter.com/StephenJWillia2/status/1519776668176502792?ref_src=twsrc%5Etfw">April 28, 2022</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>



To introduce this discussion, first some start-off material to frame the discourse.


The Internet and the Web are rapidly adopting a new “Web 3.0” format, with decentralized networks, enhanced virtual experiences, and greater interconnection between people. Here we start the discussion of what the move will look like from Science 2.0, where the dissemination of scientific findings was revolutionized by piggybacking on Web 2.0 and social media, to a Science 3.0 format. What will it involve, and what paradigms will be turned upside down?

Old Science 1.0 is still the backbone of all scientific discourse, built on the massive corpus of experimental and review literature. That literature, however, was in analog format, and we have since moved to a more accessible, digital, open-access format for both publications and raw data. Science 1.0 had an organizing structure: the scientific method for organizing data and literature, with libraries (and indexing systems like the Dewey Decimal system) as the indexers. Science 2.0 made findings more accessible and easier to search thanks to the newer digital formats, but its organization relied on an army of mostly volunteers who had little incentive to co-curate and organize the findings and the massive literature.

Each version of science has its caveats: benefits as well as deficiencies. This curation, and the ongoing discussion, is meant to solidify the basis for the new format, along with definitions and a determination of structure.

We had high hopes for Science 2.0, in particular the smashing of data and knowledge silos. However, the digital age, along with the 2.0 platforms, seems to have somehow exacerbated this. We are still critically short on analysis!


We really need people and organizations to get on top of this new Web 3.0, or metaverse, so that similar issues do not get in the way: namely, we need to create an organizing structure (perhaps as knowledgebases), we need INCENTIVIZED co-curators, and we need ANALYSIS… lots of it!

Are these new technologies the cure or is it just another headache?


There were a few overarching themes, whether one was talking about AI, NLP, virtual reality, or other new technologies with respect to this new metaverse, and a consensus of Decentralized, Incentivized, and Integrated was commonly expressed among the attendees.

The following are some slides from representative presentations.

Other articles of note on this topic in this Open Access Scientific Journal include:

Electronic Scientific AGORA: Comment Exchanges by Global Scientists on Articles published in the Open Access Journal @pharmaceuticalintelligence.com – Four Case Studies

eScientific Publishing a Case in Point: Evolution of Platform Architecture Methodologies and of Intellectual Property Development (Content Creation by Curation) Business Model 

e-Scientific Publishing: The Competitive Advantage of a Powerhouse for Curation of Scientific Findings and Methodology Development for e-Scientific Publishing – LPBI Group, A Case in Point

@PharmaceuticalIntelligence.com –  A Case Study on the LEADER in Curation of Scientific Findings

Real Time Coverage @BIOConvention #BIO2019: Falling in Love with Science: Championing Science for Everyone, Everywhere

Old Industrial Revolution Paradigm of Education Needs to End: How Scientific Curation Can Transform Education


Read Full Post »


Medical Startups – Artificial Intelligence (AI) Startups in Healthcare

Reporters: Stephen J. Williams, PhD, Aviva Lev-Ari, PhD, RN, and Shraga Rottem, MD, DSc

The motivation for this post is threefold:

First, we are presenting an application of AI, NLP, and DL to our own medical text in the genomics space. Here we present the first section of Part 1 of the following book. Part 1 has six subsections that yielded 12 plots. The entire book is represented by 38 x 2 = 76 plots.

Second, we bring to the attention of the e-Reader the list of 276 Medical Startups – Artificial Intelligence (AI) Startups in Healthcare as a hot universe of R&D activity in Human Health.

Third, we highlight one academic center with an AI focus.

Dear friends of the ETH AI Center,

We would like to provide you with some exciting updates from the ETH AI Center and its growing community.

The ETH AI Center now comprises 110 research groups in the faculty and 20 corporate partners, and has led to nine AI startups.

As the Covid-19 restrictions in Switzerland have recently been lifted, we would like to hear from you what kind of events you would like to see in 2022! Participate in the survey to suggest event formats and topics that you would enjoy being a part of. We are already excited to learn what we can achieve together this year.

We already have many interesting events coming up, we look forward to seeing you at our main and community events!





LPBI Group is applying AI for Medical Text Analysis with Machine Learning and Natural Language Processing: Statistical and Deep Learning

Our Book 

Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS & BioInformatics, Simulations and the Genome Ontology

Medical text analysis of this book shows the following results, obtained by Madison Davis by applying Wolfram NLP for biological languages to our own text. See an example below:

Part 1: Next Generation Sequencing (NGS)


1.1 The NGS Science

1.1.1 BioIT Aspect


Hypergraph Plot #1 and Tree Diagram Plot #1 for Section 1.1.1, based on 16 articles and 12 keywords:

protein, cancer, dna, genes, rna, survival, immune, tumor, patients, human, genome, expression
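A keyword list like the one above can be produced by simple term-frequency counting over a corpus of article texts. The sketch below is a minimal Python stand-in (not the Wolfram NLP pipeline actually used); the stopword list and example abstracts are invented for illustration.

```python
import re
from collections import Counter

# Tiny illustrative stopword list; real pipelines use much larger ones.
STOPWORDS = {"the", "of", "and", "in", "a", "to", "is", "for", "with", "on", "by"}

def top_keywords(texts, n=12):
    """Rank terms across a set of article texts by raw frequency,
    after dropping stopwords and very short tokens."""
    counts = Counter()
    for text in texts:
        counts.update(w for w in re.findall(r"[a-z]+", text.lower())
                      if w not in STOPWORDS and len(w) > 2)
    return [term for term, _ in counts.most_common(n)]

abstracts = [
    "Tumor suppressor genes and RNA expression in cancer patients",
    "DNA repair and gene expression in the human genome",
    "Immune response and survival of cancer patients",
]
print(top_keywords(abstracts, n=5))
```

Hypergraph and tree-diagram plots then visualize which keywords co-occur within the same articles, turning these counts into the kind of structure shown in the book's figures.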


Read Full Post »

@MIT Artificial intelligence system rapidly predicts how two proteins will attach: The model, called EquiDock, focuses on rigid body docking, which occurs when two proteins attach by rotating or translating in 3D space while their shapes don't squeeze or bend

Reporter: Aviva Lev-Ari, PhD, RN

This paper introduces a novel SE(3) equivariant graph matching network, along with a keypoint discovery and alignment approach, for the problem of protein-protein docking, with a novel loss based on optimal transport. The overall consensus is that this is an impactful solution to an important problem, whereby competitive results are achieved without the need for templates, refinement, and are achieved with substantially faster run times.
28 Sept 2021 (modified: 18 Nov 2021) · ICLR 2022 Spotlight

Keywords: protein complexes, protein structure, rigid body docking, SE(3) equivariance, graph neural networks

Abstract: Protein complex formation is a central problem in biology, being involved in most of the cell’s processes, and essential for applications such as drug design or protein engineering. We tackle rigid body protein-protein docking, i.e., computationally predicting the 3D structure of a protein-protein complex from the individual unbound structures, assuming no three-dimensional flexibility during binding. We design a novel pairwise-independent SE(3)-equivariant graph matching network to predict the rotation and translation to place one of the proteins at the right location and the right orientation relative to the second protein. We mathematically guarantee that the predicted complex is always identical regardless of the initial placements of the two structures, avoiding expensive data augmentation. Our model approximates the binding pocket and predicts the docking pose using keypoint matching and alignment through optimal transport and a differentiable Kabsch algorithm. Empirically, we achieve significant running time improvements over existing protein docking software and predict qualitatively plausible protein complex structures despite not using heavy sampling, structure refinement, or templates.

One-sentence summary: We perform rigid protein docking using a novel independent SE(3)-equivariant message passing mechanism that guarantees the same resulting protein complex independent of the initial placement of the two 3D structures.
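The abstract mentions alignment via a differentiable Kabsch algorithm. As background, here is a plain NumPy sketch of the classic (non-differentiable) Kabsch step: given matched 3D keypoints from two structures, it recovers the optimal rigid rotation and translation. This illustrates only the alignment idea, not EquiDock's actual implementation; the point clouds below are synthetic.

```python
import numpy as np

def kabsch(P: np.ndarray, Q: np.ndarray):
    """Optimal rotation R and translation t minimizing ||R @ P + t - Q||
    over rigid motions (the Kabsch algorithm). P, Q: (3, N) point clouds
    with matched columns, e.g. paired binding-pocket keypoints."""
    p_mean = P.mean(axis=1, keepdims=True)
    q_mean = Q.mean(axis=1, keepdims=True)
    H = (P - p_mean) @ (Q - q_mean).T          # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # avoid improper reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Recover a known rigid motion from noiseless matched points.
rng = np.random.default_rng(0)
P = rng.standard_normal((3, 10))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[1.0], [2.0], [3.0]])
Q = R_true @ P + t_true
R, t = kabsch(P, Q)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```

In the paper's setting this step is made differentiable so that gradients can flow back through the alignment into the keypoint-prediction network.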

MIT researchers created a machine-learning model that can directly predict the complex that will form when two proteins bind together. Their technique is between 80 and 500 times faster than state-of-the-art software methods, and often predicts protein structures that are closer to actual structures that have been observed experimentally.

This technique could help scientists better understand some biological processes that involve protein interactions, like DNA replication and repair; it could also speed up the process of developing new medicines.

“Deep learning is very good at capturing interactions between different proteins that are otherwise difficult for chemists or biologists to write experimentally. Some of these interactions are very complicated, and people haven’t found good ways to express them. This deep-learning model can learn these types of interactions from data,” says Octavian-Eugen Ganea, a postdoc in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-lead author of the paper.

Ganea’s co-lead author is Xinyuan Huang, a graduate student at ETH Zurich. MIT co-authors include Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health in CSAIL, and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering in CSAIL and a member of the Institute for Data, Systems, and Society. The research will be presented at the International Conference on Learning Representations.

Significance of the Scientific Development by the @MIT Team

EquiDock’s wide applicability:

  • Our method can be integrated end-to-end to boost the quality of other models (see above discussion on runtime importance). Examples are predicting functions of protein complexes [3] or their binding affinity [5], de novo generation of proteins binding to specific targets (e.g., antibodies [6]), modeling backbone and side-chain flexibility [4], or devising methods for non-binary multimers. See the updated discussion in the “Conclusion” section of our paper.


Advantages over previous methods:

  • Our method does not rely on templates or heavy candidate sampling [7], aiming instead at the ambitious goal of predicting the complex pose directly. This matters for the generalization (to unseen structures) and scalability of docking models, as well as for their applicability to various other tasks (discussed above).


  • Our method obtains competitive quality without explicitly using previous geometric (e.g., 3D Zernike descriptors [8]) or chemical (e.g., hydrophilicity) features [3]. Future EquiDock extensions could find creative ways to leverage these different signals and thus obtain further improvements.


Novelty of theory:

  • Our work is the first to formalize the notion of pairwise independent SE(3)-equivariance. Previous work (e.g., [9,10]) has incorporated only single object Euclidean-equivariances into deep learning models. For tasks such as docking and binding of biological objects, it is crucial that models understand the concept of multi-independent Euclidean equivariances.

  • All propositions in Section 3 are our novel theoretical contributions.

  • We have rewritten the Contribution and Related Work sections to clarify this aspect.


Footnote [a]: We have fixed an important bug in the cross-attention code. We have done a more extensive hyperparameter search and understood that layer normalization is crucial in the layers used in Eqs. 5 and 9, but not on the h embeddings as originally shown in Eq. 10. We have seen benefits from training our models with a longer patience in the early-stopping criterion (30 epochs for DIPS and 150 epochs for DB5). Increasing the learning rate to 2e-4 is important to speed up training. Using an intersection loss weight of 10 leads to improved results compared to the default of 1.
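The patience-based early stopping described in the footnote (30 epochs for DIPS, 150 for DB5) can be sketched as below. `step` and `validate` are hypothetical stand-ins for the actual training and validation routines, which this post does not show.

```python
def train_with_early_stopping(step, validate, patience=30, max_epochs=1000):
    """Stop training once the validation loss has not improved
    for `patience` consecutive epochs.
    `step` runs one training epoch; `validate` returns a loss."""
    best, best_epoch = float("inf"), 0
    for epoch in range(max_epochs):
        step(epoch)
        loss = validate(epoch)
        if loss < best:
            # New best validation loss: reset the patience window
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            break
    return best, best_epoch

# Toy run: validation loss improves until epoch 5, then plateaus
losses = [1.0, 0.8, 0.6, 0.5, 0.45, 0.44] + [0.44] * 100
best, best_epoch = train_with_early_stopping(
    step=lambda e: None,
    validate=lambda e: losses[e],
    patience=30,
)
print(best, best_epoch)  # → 0.44 5
```

With patience 30, training here runs on until epoch 35 before giving up, which is exactly the trade-off the footnote describes: longer patience costs compute but avoids stopping on a noisy plateau.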



[1] Protein-ligand blind docking using QuickVina-W with inter-process spatio-temporal integration, Hassan et al., 2017

[2] GNINA 1.0: molecular docking with deep learning, McNutt et al., 2021

[3] Protein-protein and domain-domain interactions, Kangueane and Nilofer, 2018

[4] Side-chain Packing Using SE(3)-Transformer, Jindal et al., 2022

[5] Contacts-based prediction of binding affinity in protein–protein complexes, Vangone et al., 2015

[6] Iterative refinement graph neural network for antibody sequence-structure co-design, Jin et al., 2021

[7] Hierarchical, rotation-equivariant neural networks to select structural models of protein complexes, Eismann et al., 2020

[8] Protein-protein docking using region-based 3D Zernike descriptors, Venkatraman et al., 2009

[9] SE(3)-transformers: 3D roto-translation equivariant attention networks, Fuchs et al., 2020

[10] E(n) equivariant graph neural networks, Satorras et al., 2021

[11] Fast end-to-end learning on protein surfaces, Sverrisson et al., 2020



Read Full Post »

Emergence of a new SARS-CoV-2 variant from GR clade with a novel S glycoprotein mutation V1230L in West Bengal, India

Authors: Rakesh Sarkar, Ritubrita Saha, Pratik Mallick, Ranjana Sharma, Amandeep Kaur, Shanta Dutta, Mamta Chawla-Sarkar

Reporter and Original Article Co-Author: Amandeep Kaur, B.Sc. , M.Sc.

Since its emergence in late 2019, SARS-CoV-2 has evolved, resulting in the appearance of various variants in different countries. These variants have spread worldwide, resulting in a devastating second wave of the COVID-19 pandemic in many countries, including India, since the beginning of 2021. To control this pandemic, continuous mutational surveillance and genomic epidemiology of circulating strains are very important. In this study, we performed mutational analysis of the protein-coding genes of SARS-CoV-2 strains (n=2000) collected from January 2021 to March 2021. Our data revealed the emergence of a new variant in West Bengal, India, characterized by the presence of 11 co-existing mutations, including D614G, P681H and V1230L in the S glycoprotein. This new variant was identified in 70 out of 412 sequences submitted from West Bengal. Interestingly, among these 70 sequences, 16 also harbored E484K in the S glycoprotein. Phylogenetic analysis revealed that strains of this new variant emerged from the GR clade (B.1.1) and formed a new cluster. We propose naming this variant GRL, or lineage B.1.1/S:V1230L, owing to the presence of V1230L in the S glycoprotein along with GR-clade-specific mutations. The co-occurrence of P681H, previously observed in the UK variant, and E484K, previously observed in the South African and California variants, demonstrates the convergent evolution of SARS-CoV-2 mutations. V1230L, which lies within the transmembrane domain of the S2 subunit of the S glycoprotein, has not yet been reported from any country. Substituting valine with the more hydrophobic amino acid leucine at position 1230 of the transmembrane domain, which has a role in anchoring the S protein to the viral envelope, could strengthen the interaction of the S protein with the viral envelope, increase the deposition of S protein in the viral envelope, and thus positively regulate virus infection.
The P681H and E484K mutations have already been shown to favor increased infectivity and immune evasion, respectively. Therefore, the new variant carrying D614G, P681H, V1230L and E484K is expected to have greater infectivity, transmissibility and immune evasion, which may pose an additional threat, alongside B.1.617, in the ongoing COVID-19 pandemic in India.
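Mutational surveillance of this kind boils down to comparing aligned protein sequences against a reference and reporting substitutions in the standard <ref><position><alt> notation (e.g., V1230L). A toy sketch follows; the sequence windows below are invented for illustration and are not the actual spike sequence.

```python
def call_substitutions(ref, var, offset=1):
    """Compare two aligned protein sequences and report substitutions
    in the conventional <ref><position><alt> notation (e.g. V1230L).
    `offset` is the 1-based position of the first residue in the window."""
    assert len(ref) == len(var), "sequences must be aligned"
    return [f"{r}{i + offset}{v}"
            for i, (r, v) in enumerate(zip(ref, var)) if r != v]

# Hypothetical 13-residue window numbered from position 1224,
# so the single difference falls at position 1230
ref_window = "IAGLIAVVMVTIM"
var_window = "IAGLIALVMVTIM"
print(call_substitutions(ref_window, var_window, offset=1224))  # → ['V1230L']
```

Real pipelines first align each submitted genome to the Wuhan reference and translate the S gene before a comparison like this; the sketch shows only the final reporting step.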

Reference: Sarkar, R. et al. (2021) Emergence of a new SARS-CoV-2 variant from GR clade with a novel S glycoprotein mutation V1230L in West Bengal, India. medRxiv. https://doi.org/10.1101/2021.05.24.21257705 https://www.medrxiv.org/content/10.1101/2021.05.24.21257705v1

Other related articles were published in this Open Access Online Scientific Journal, including the following:

Fighting Chaos with Care, community trust, engagement must be cornerstones of pandemic response

Reporter: Amandeep Kaur


T cells recognize recent SARS-CoV-2 variants

Reporter: Aviva Lev-Ari, PhD, RN


Need for Global Response to SARS-CoV-2 Viral Variants

Reporter: Aviva Lev-Ari, PhD, RN


Identification of Novel genes in human that fight COVID-19 infection

Reporter: Amandeep Kaur, B.Sc., M.Sc.


Mechanism of Thrombosis with AstraZeneca and J & J Vaccines: Expert Opinion by Kate Chander Chiang & Ajay Gupta, MD

Reporter & Curator: Dr. Ajay Gupta, MD


Read Full Post »

Developing Machine Learning Models for Prediction of Onset of Type-2 Diabetes

Reporter: Amandeep Kaur, B.Sc., M.Sc.

A recent study reports the development of an advanced AI algorithm that predicts the onset of type 2 diabetes up to five years in advance using routinely collected medical data. The researchers described their AI model as notable and distinctive because it is specifically designed to perform assessments at the population level.

The first author, Mathieu Ravaut, M.Sc., of the University of Toronto, and other team members stated, “The main purpose of our model was to inform population health planning and management for the prevention of diabetes that incorporates health equity. It was not our goal for this model to be applied in the context of individual patient care.”

The research group collected data from 2006 to 2016 on approximately 2.1 million patients treated in the same healthcare system in Ontario, Canada. Even though the patients belonged to the same region, the authors highlighted that Ontario encompasses a large and diverse population.

The newly developed algorithm was trained on data from approximately 1.6 million patients, validated on data from about 243,000 patients, and evaluated on data from more than 236,000 patients. The data used to train the algorithm included each patient’s medical history from the previous two years: prescriptions, medications, lab tests and demographic information.

When predicting the onset of type 2 diabetes within five years, the model reached a test area under the receiver operating characteristic (ROC) curve of 80.26%.
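An area under the ROC curve of 80.26 (percent) means that a randomly chosen patient who went on to develop diabetes is ranked above a randomly chosen patient who did not roughly 80% of the time. A minimal sketch of that rank-based (Mann-Whitney) computation, with made-up toy data rather than anything from the study:

```python
def auroc(y_true, scores):
    """Area under the ROC curve via the Mann-Whitney formulation:
    the probability that a randomly chosen positive case is scored
    above a randomly chosen negative case (ties count as 0.5)."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = developed diabetes within 5 years, 0 = did not
y = [1, 1, 1, 0, 0, 0, 0, 1]
risk = [0.9, 0.7, 0.4, 0.35, 0.2, 0.6, 0.1, 0.8]
print(auroc(y, risk))  # → 0.9375
```

Production work would use a vetted implementation such as scikit-learn's `roc_auc_score`, which handles ties and large cohorts efficiently; the loop above just makes the probabilistic meaning of the metric explicit.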

The authors reported that “Our model showed consistent calibration across sex, immigration status, racial/ethnic and material deprivation, and a low to moderate number of events in the health care history of the patient. The cohort was representative of the whole population of Ontario, which is itself among the most diverse in the world. The model was well calibrated, and its discrimination, although with a slightly different end goal, was competitive with results reported in the literature for other machine learning–based studies that used more granular clinical data from electronic medical records without any modifications to the original test set distribution.”

This model could potentially improve the healthcare systems of countries equipped with thorough administrative databases and help target specific cohorts at risk of poor outcomes.

The research group stated, “Because our machine learning model included social determinants of health that are known to contribute to diabetes risk, our population-wide approach to risk assessment may represent a tool for addressing health disparities.”




Ravaut M, Harish V, Sadeghi H, et al. Development and Validation of a Machine Learning Model Using Administrative Health Data to Predict Onset of Type 2 Diabetes. JAMA Netw Open. 2021;4(5):e2111315. doi:10.1001/jamanetworkopen.2021.11315 https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2780137

Other related articles were published in this Open Access Online Scientific Journal, including the following:

AI in Drug Discovery: Data Science and Core Biology @Merck &Co, Inc., @GNS Healthcare, @QuartzBio, @Benevolent AI and Nuritas

Reporters: Aviva Lev-Ari, PhD, RN and Irina Robu, PhD


Can Blockchain Technology and Artificial Intelligence Cure What Ails Biomedical Research and Healthcare

Curator: Stephen J. Williams, Ph.D.


HealthCare focused AI Startups from the 100 Companies Leading the Way in A.I. Globally

Reporter: Aviva Lev-Ari, PhD, RN


AI in Psychiatric Treatment – Using Machine Learning to Increase Treatment Efficacy in Mental Health

Reporter: Aviva Lev- Ari, PhD, RN


Vyasa Analytics Demos Deep Learning Software for Life Sciences at Bio-IT World 2018 – Vyasa’s booth (#632)

Reporter: Aviva Lev-Ari, PhD, RN


New Diabetes Treatment Using Smart Artificial Beta Cells

Reporter: Irina Robu, PhD


Read Full Post »

Thriving Vaccines and Research: Weizmann Institute Coronavirus Research Development

Reporter: Amandeep Kaur, B.Sc., M.Sc.

In early February, Prof. Eran Segal noted in one of his tweets: “We say with caution, the magic has started.”

The article reported that Prof. Segal’s statement reflected the decline in COVID-19 cases, severe infections and hospitalizations brought about by the rapid vaccination campaign throughout Israel. In another tweet, Prof. Segal urged the country to remain cautious, noting that there is still a long way to go in the search for scientific solutions.

A daylong webinar entitled “COVID-19: The epidemic that rattles the world” was a notable initiative by the Weizmann Institute to share scientific knowledge about the infection among Israeli institutions and scientists. Prof. Gideon Schreiber and Dr. Ron Diskin organized the event with the support of the Weizmann Coronavirus Response Fund and the Israel Society for Biochemistry and Molecular Biology. Speakers invited from the Hebrew University of Jerusalem, Tel-Aviv University, the Israel Institute for Biological Research (IIBR), and Kaplan Medical Center addressed the molecular structure and infection biology of the virus, treatments and medications for COVID-19, and the positive and negative effects of the pandemic.

The article reported that with the emergence of the pandemic, scientists at Weizmann started more than 60 projects to explore the virus from a wide range of perspectives. Funds raised by communities worldwide for the Weizmann Coronavirus Response Fund supported scientists and investigators in elucidating the chemistry, physics and biology behind SARS-CoV-2 infection.

Prof. Avi Levy, the coordinator of the Weizmann Institute’s coronavirus research efforts, said, “The vaccines are here, and they will drastically reduce infection rates. But the coronavirus can mutate, and there are many similar infectious diseases out there to be dealt with. All of this research is critical to understanding all sorts of viruses and to preempting any future pandemics.”

The following are a few important projects with recent updates reported in the article.

Mapping a hijacker’s methods

Dr. Noam Stern-Ginossar studied the strategies the virus uses to invade healthy cells and hijack the cell’s systems in order to divide and reproduce. The article reported that viruses take over the genetic translation system, and mainly the ribosomes, to produce viral proteins. Dr. Stern-Ginossar used a novel approach known as ‘ribosome profiling’ to create a map locating the translational events taking place along the viral genome, which in turn maps the full repertoire of viral proteins produced inside the host.

She and her team members joined forces with Weizmann’s de Botton Institute for Protein Profiling and with researchers at IIBR to understand the hijacking instructions of the coronavirus and to develop tools for treatment and therapies. The scientists generated a high-resolution map of the coding regions in the SARS-CoV-2 genome using ribosome-profiling techniques, which allowed them to quantify the expression of vital zones along the virus genome that regulate the translation of viral proteins. The study, published in Nature in January, explains the hijacking process and reports that the virus produces more translation instructions, in the form of viral mRNA, than the host and thus dominates the host cell’s translation process. The researchers also corrected a common misconception: the virus does not force the host cell to translate viral mRNA more efficiently than the host’s own transcripts; rather, the sheer quantity of viral translation instructions drives the hijacking. This study provides valuable insights for the development of effective vaccines and drugs against COVID-19 infection.
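Conceptually, ribosome profiling quantifies translation by counting ribosome-protected footprints that map within each coding region. A toy sketch of that per-ORF quantification; the counts and ORF coordinates below are invented, and real pipelines work from millions of mapped sequencing reads:

```python
def orf_density(footprint_counts, orfs):
    """Quantify translation of each ORF as ribosome-footprint
    reads per nucleotide within the ORF (toy ribosome profiling).
    `footprint_counts` maps genome position -> read count;
    `orfs` maps ORF name -> (start, end) half-open coordinates."""
    densities = {}
    for name, (start, end) in orfs.items():
        reads = sum(footprint_counts.get(pos, 0) for pos in range(start, end))
        densities[name] = reads / (end - start)
    return densities

# Toy viral genome: two hypothetical ORFs and a sparse footprint profile
counts = {5: 10, 6: 12, 7: 8, 20: 2, 21: 1}
orfs = {"ORF-A": (0, 10), "ORF-B": (15, 25)}
print(orf_density(counts, orfs))  # → {'ORF-A': 3.0, 'ORF-B': 0.3}
```

Comparing such densities between viral and host ORFs is what supports the study's conclusion that the virus wins by volume of mRNA rather than by more efficient translation per message.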

Like chutzpah, some things don’t translate

Prof. Igor Ulitsky and his team worked on the untranslated regions of the viral genome. The article reported that not all parts of the viral transcript are translated into protein; some instead play important, still unknown roles in protein production and infection. These regions may affect the molecular environment of the translated zones. The Ulitsky group set out to characterize how the genetic sequence of regions that are not translated into proteins directly or indirectly affects the stability and efficiency of the translating sequences.

Initially, the scientists created a library of about 6,000 untranslated-region sequences in order to study their functions. In collaboration with Dr. Noam Stern-Ginossar’s lab, Ulitsky’s team focused on the Nsp1 protein and on the mechanism by which such regions affect Nsp1 production, which in turn enhances virulence. After solving some technical difficulties, the researchers developed a new, more robust protocol that involves infecting cells with variants from the initial library. Within a few months, the researchers expect to obtain a more detailed map of how specific sequences in the untranslated regions affect the stability of Nsp1 protein production.

The landscape of elimination

The article reported that the body’s immune surveillance relies on two main factors: HLA (human leukocyte antigen) molecules and T cells. HLA molecules are proteins on the cell surface that bring peptide fragments from inside the infected cell to the surface. These peptide fragments are recognized and targeted for destruction by the T cells of the immune system. Samuels’ group tried to answer the question of how the body’s surveillance system recognizes the appropriate virus-derived peptide and destroys it. They isolated and analyzed the ‘HLA peptidome’, the complete set of peptides bound to the HLA proteins, from SARS-CoV-2-infected cells.

After analyzing the infected cells, they found 26 class-I and 36 class-II HLA peptides that are present in 99% of the population worldwide. Two class-I HLA peptides were commonly present on the cell surface, and two other peptides were derived from rare coronavirus proteins, meaning these specific coronavirus peptides are marked for easy detection. Among the identified peptides, two were novel discoveries and seven others had previously been shown to induce an immune response. These results will help in developing vaccines against new coronavirus variants.

Gearing up ‘chain terminators’ to battle the coronavirus

Prof. Rotem Sorek and his lab discovered a family of enzymes in bacteria that produce novel antiviral molecules. These small molecules, manufactured by bacteria, act as ‘chain terminators’ against viruses that invade the bacteria. The study, published in Nature in January, reported that these molecules trigger a chemical reaction that halts the virus’s ability to replicate. The new molecules are modified nucleotide derivatives that integrate into the virus at the molecular level and obstruct its workings.

Prof. Sorek and his group hypothesize that these new molecules could serve as potential antiviral drugs, based on the chain-termination mechanism already used by antiviral drugs in current clinical treatment. Yeda Research and Development has licensed these small novel molecules to a company for testing their antiviral activity against SARS-CoV-2 infection. Such discoveries provide evidence that the bacterial immune system is a potential repository of many natural antiviral molecules.

Resolving borderline diagnoses

Currently, real-time polymerase chain reaction (RT-PCR) is the method of choice and is used extensively for diagnosing COVID-19 around the globe. Despite its benefits, RT-PCR has problems: false-negative and false-positive results, and a limited ability to detect new mutations in the virus and emerging variants worldwide. Prof. Eran Elinav’s lab and Prof. Ido Amit’s lab are working collaboratively to develop a massively parallel, next-generation sequencing technique that tests more effectively and precisely than RT-PCR. This technique can characterize emerging SARS-CoV-2 mutations; co-occurring viral, bacterial and fungal infections; and response patterns in the human host.

The scientists identified viral variants and distinctive host signatures that help differentiate infected from non-infected individuals, and patients with mild symptoms from those with severe symptoms.

At the Hadassah-Hebrew University Medical Center, Profs. Elinav and Amit are running trials of the pipeline to test its accuracy in borderline cases, where RT-PCR gives ambiguous or incorrect results. For proper diagnosis and patient stratification, the researchers calibrated their severity-prediction matrix. Collectively, the scientists aim to develop a reliable system that resolves borderline RT-PCR cases, identifies new virus variants with known and novel mutations, and uses data from the human host to distinguish patients who need close observation and intensive treatment from those who have mild complications and can be managed conservatively.

Moon shot consortium refining drug options

The ‘Moon shot’ consortium was launched almost a year ago with the goal of developing a novel antiviral drug against SARS-CoV-2. It is led by Dr. Nir London of the Department of Chemical and Structural Biology at Weizmann, Prof. Frank von Delft of Oxford University, and the UK’s Diamond Light Source synchrotron facility.

To advance this series of novel molecules from conception to evidence of antiviral activity within a year, the scientists gathered support, guidance, expertise and resources from researchers around the world. The article reported that the researchers have built an alternative, fully transparent template for drug discovery that avoids the hindrances of intellectual property and red tape.

The new molecules discovered by the scientists inhibit a protease, a SARS-CoV-2 protein that plays an important role in virus replication. The team collaborated with the Israel Institute for Biological Research and several other labs across the globe to demonstrate the efficacy of the molecules not only in vitro but also against live virus.

Further research is underway, including assays of the safety and efficacy of these potential drugs in living models. The first trial in mice started in March. In addition, further drugs are being optimized and nominated for preclinical testing as drug candidates.

Source: https://www.weizmann.ac.il/WeizmannCompass/sections/features/the-vaccines-are-here-and-research-abounds

Other related articles were published in this Open Access Online Scientific Journal, including the following:

Identification of Novel genes in human that fight COVID-19 infection

Reporter: Amandeep Kaur, B.Sc., M.Sc. (ept. 5/2021)


Fighting Chaos with Care, community trust, engagement must be cornerstones of pandemic response

Reporter: Amandeep Kaur, B.Sc., M.Sc. (ept. 5/2021)


T cells recognize recent SARS-CoV-2 variants

Reporter: Aviva Lev-Ari, PhD, RN


Need for Global Response to SARS-CoV-2 Viral Variants

Reporter: Aviva Lev-Ari, PhD, RN


Mechanistic link between SARS-CoV-2 infection and increased risk of stroke using 3D printed models and human endothelial cells

Reporter: Adina Hazan, PhD


Read Full Post »

Two brothers with MEPAN Syndrome: A Rare Genetic Disorder

Reporter: Amandeep Kaur

In their early 40s, a married couple, Danny and Nikki, had a normal pregnancy and delivered their first child, Carson, in October 2011. The couple was elated after his birth because they had been uncertain about even conceiving a baby. Soon after the birth, the parents began having difficulty feeding the newborn and endured wakeful nights they used to call “witching hours”. For the first six months, they had no idea that something was wrong with their infant. Then they noticed that Carson had trouble moving, sitting, and crawling. They spent the next half year visiting several behavioral specialists and pediatricians, with no conclusion other than the reassurance that children grow at different rates and there was nothing to panic about.

Later, in early 2013, Carson was diagnosed with cerebral palsy at a local regional center. The diagnosis was based on his inability to talk and his delayed motor development. At the same time, Carson had his first MRI, which showed no abnormalities. The parents convinced themselves that their child’s condition could be resolved with therapy, and so started physical and occupational therapies. The couple’s second son, Chase, was born in 2013. Initially, nothing seemed wrong with Chase either, but after nine months, Chase showed the same delays in motor development as his elder brother, and it was suspected that he too had cerebral palsy. Over roughly a year, both boys went through a battery of diagnostic tests, from karyotyping and metabolic screens to tests for Fragile X syndrome, lysosomal storage disorders, Friedreich ataxia and spinocerebellar ataxia. Gene panel tests for mitochondrial DNA and oxidative phosphorylation (OXPHOS) deficiencies were also performed. No conclusion could be drawn, because every diagnostic test came back negative.

Over the years, the boys’ condition deteriorated: their movements became stiffer and more ataxic, and they could no longer crawl. By the end of 2015, an MRI showed symmetric anomalies in their basal ganglia, indicating a metabolic condition. Whole-exome sequencing also failed to explain Carson’s and Chase’s symptoms, returning no positive result. The grievous journey of neurologist visits, diagnostic tests and inconclusive results led the parents to wonder whether they had done something wrong: was it their lifestyle, insufficient vitamin intake during pregnancy, or exposure to toxic agents that had left their sons in this situation?

During the diagnostic odyssey, Danny spent many restless, sleepless nights searching PubMed for recent cases with symptoms similar to his sons’, and eventually came across the NIH’s Undiagnosed Diseases Network (UDN), which gave the demoralized family a ray of hope. As soon as Danny discovered the network, he gathered all of his sons’ medical documents and submitted an application. The application, submitted in late 2015, was accepted a year later in December 2016, and the family got their first appointment in early 2017 at the UDN site at Stanford. At Stanford, the boys went through whole-genome sequencing and a series of examinations, which again came back inconclusive. Finally, in February 2018, the family received conclusive results: both boys suffer from MEPAN syndrome, caused by pathogenic mutations in the MECR gene.

  • MEPAN stands for Mitochondrial Enoyl CoA reductase Protein-Associated Neurodegeneration
  • MEPAN syndrome is a rare genetic neurological disorder
  • MEPAN syndrome is associated with symptoms of ataxia, optic atrophy and dystonia
  • The wild-type MECR gene encodes a mitochondrial protein involved in metabolic processes
  • The prevalence of MEPAN syndrome is about 1 in 1 million
  • Currently, there are 17 known MEPAN syndrome patients worldwide

Carson’s and Chase’s symptoms of early-onset motor impairment with no appropriate biomarkers, together with T2 hyperintensity in the basal ganglia, matched the seven MEPAN patients known at the time. The agonizing five-year journey concluded with the diagnosis of a rare genetic disorder.

Despite the advances in genetic testing and its decreasing cost, many families still suffer and remain undiagnosed for years. To shorten the diagnostic journey of undiagnosed patients, whole-exome and whole-genome sequencing can be used as first-line tools. More research is needed to find appropriate treatments for genetic disorders and therapies that reduce the suffering of patients and families. It is also necessary to close the gap between researchers and clinicians to stimulate progress in diagnosis, treatment and drug development for rare genetic disorders.

The family started the “MEPAN Foundation” (https://www.mepan.org) to reach out to the world and educate people about mutations in the MECR gene. By creating awareness among communities, clinicians and researchers worldwide, patients with this rare genetic disorder can connect and share information to improve their condition and quality of life.

Reference: Danny Miller, The diagnostic odyssey: our family’s story, The American Journal of Human Genetics, Volume 108, Issue 2, 2021, Pages 217-218, ISSN 0002-9297, https://doi.org/10.1016/j.ajhg.2021.01.003 (https://www.sciencedirect.com/science/article/pii/S0002929721000033)





Other related articles were published in this Open Access Online Scientific Journal, including the following:

Effect of mitochondrial stress on epigenetic modifiers

Larry H. Bernstein, MD, FCAP, Curator, LPBI


The Three Parent Technique to Avoid Mitochondrial Disease in Embryo

Reporter and Curator: Dr. Sudipta Saha, Ph.D.


New Insights into mtDNA, mitochondrial proteins, aging, and metabolic control

Larry H. Bernstein, MD, FCAP, Curator, LPBI


Mitochondrial Isocitrate Dehydrogenase and Variants

Writer and Curator: Larry H. Bernstein, MD, FCAP


Update on mitochondrial function, respiration, and associated disorders

Larry H. Bernstein, MD, FCAP, Curator and Writer


Read Full Post »

Papers citing PharmaceuticalIntelligence.com

Reporter: Stephen J. Williams, PhD

(10 papers, 5 peer reviewed, 2 web page, 2 teaching lectures, 1 preprint) Search from years 2012 – 2018

UPDATED on 5/22/2022


Peer Reviewed

  1. Yan-Fang Guan, Gai-Rui Li, Rong-Jiao Wang, Yu-Ting Yi, Ling Yang, Dan Jiang, Xiao-Ping Zhang, and Yin Peng. Application of next-generation sequencing in clinical oncology to advance personalized treatment of cancer. Chin J Cancer. 2012 Oct; 31(10): 463–470. Citing (as their reference 44): Lev-Ari A. Sunitinib brings Adult acute lymphoblastic leukemia (ALL) to Remission-RNA Sequencing-FLT3 Receptor Blockade. 2012. Available at: http://pharmaceuticalintelligence.com/2012/07/09/sunitinib-brings-adult-all-to-remission-rna-sequencing/
  2. Pedro A. Moreno. Bioinformática multifractal: Una propuesta hacia la interpretación no-lineal del genoma [Multifractal bioinformatics: A proposal for the nonlinear interpretation of the genome]. Ing. compet. vol.16 no.1 Cali Jan./June 2014. Citing (http://pharmaceuticalintelligence.com/tag/fractal-geometry/).


  1. Gökmen Altay and David E. Neal. Genome-wide differential gene network analysis R software and its application in LnCap prostate cancer, at https://www.biorxiv.org/content/biorxiv/early/2017/04/24/129742.full.pdf, citing us (via an R package in the paper) in reference to CXC4 inhibitors as (http://pharmaceuticalintelligence.com/2015/12/15/are-cxc4-antagonists-making-a-comebackin-cancer-chemotherapy, 2015).


  1. D. Sairam, Course: Fundamentals of Biology- Life Sciences, Course Code: BSBT-201, Course Instructor: Dr. Subhabrata Kar, citing in a presentation, Comparison of Glycolysis between a Normal Tissue and Tumour/Proliferated Tissue: http://pharmaceuticalintelligence.com/2012/10/17/is-the-warburg-effect-the-cause-or-the-effect-of-cancer-a-21st-century-view/


  1. Sasaki, Takaaki, Scott J. Rodig, Lucian R. Chirieac, and Pasi A. Janne. “The Biology and Treatment of EML4-ALK Non-Small Cell Lung Cancer.” NCBI. US National Library of Medicine, 24 Apr. 2010. In website https://prezi.com/rbtv_450s3wd/eml4-alk-fusion-gene/ citing us as https://pharmaceuticalintelligence.files.wordpress.com/2014/08/membrane_receptor_tk.jpg?w=500&h=326
  2. Github project on GENEREGULATION,Deciphering the code of gene regulation using massively parallel assays of designed sequence at https://github.com/arquivo/Research-Websites-Preservation/blob/master/classifier/projects_classification.csv.3 citing us as http://pharmaceuticalintelligence.com/tag/transcription/page/2/,0


  1. Website https://tginnovations.wordpress.com/2012/05/08/cancer/ citing us as Cancer Metastasis (pharmaceuticalintelligence.com) {one of our chapters in Cancer Volume 1}


  1. Website https://www.minipiginfo.com/mini-pig-cancer.html citing us as https://pharmaceuticalintelligence.com/the-scid-pig-how-pigs-are-becoming-a-great-alternate-model-for-cancer-research
  2. Shashi Shekhar Anand, Navgeet, Balraj Singh Gill. In https://www.pharmatutor.org/articles/breakthroughs-in-epigenetics, citing us as the figure source for Fig 5, Phosphorylation of serine chain.


  1. Larry H Bernstein. Ca2+-Stimulated Exocytosis: The Role of Calmodulin and Protein Kinase C in Ca2+ Regulation of Hormone and Neurotransmitter in https://www.archivesofmedicine.com/medicine/ca2stimulated-exocytosis-the-role-of-calmodulin-and-protein-kinase-c-in-ca2-regulation-of-hormone-and-neurotransmitter.php?aid=7058


  1. Description: Multidimensional Representation of Concepts as Cognitive Engrams in the Human Brain. Clinical Trial NCT02834716 on Cognitive Impairment using https://pharmaceuticalintelligence.com/2013/02/27/ustekinumab-new-drug-therapy-for-cognitive-decline-resulting-from-neuroinflammatory-cytokine-signaling-and-alzheimers-disease/
    Description: Ustekinumab New Drug Therapy for Cognitive Decline resulting from Neuroinflammatory Cytokine Signaling and Alzheimer’s Disease


  1. Beckman Coulter promotional material. In Post Translational Modification at https://www.beckman.com/resources/sample-type/bio-molecules/post-translational-modification

Using our material as reference “Overview of Posttranslational Modification (PTM) | Leaders in Pharmaceutical Business Intelligence (LPBI) Group.” Leaders in Pharmaceutical Business Intelligence (LPBI) Group, 29 July 2014, https://pharmaceuticalintelligence.com/2014/07/29/overview-of-posttranslational-modification-ptm/.



Chin J Cancer. 2012 Oct; 31(10): 463–470.

doi: 10.5732/cjc.012.10216

PMCID: PMC3777453

PMID: 22980418

Application of next-generation sequencing in clinical oncology to advance personalized treatment of cancer

Yan-Fang Guan, Gai-Rui Li, Rong-Jiao Wang, Yu-Ting Yi, Ling Yang, Dan Jiang, Xiao-Ping Zhang, and Yin Peng



With the development and improvement of new sequencing technology, next-generation sequencing (NGS) has been applied increasingly in cancer genomics research over the past decade. More recently, NGS has been adopted in clinical oncology to advance personalized treatment of cancer. NGS is used to identify novel and rare cancer mutations, detect familial cancer mutation carriers, and provide molecular rationale for appropriate targeted therapy. Compared to traditional sequencing, NGS holds many advantages, such as the ability to fully sequence all types of mutations for a large number of genes (hundreds to thousands) in a single test at a relatively low cost. However, significant challenges, particularly with respect to the requirement for simpler assays, more flexible throughput, shorter turnaround time, and most importantly, easier data analysis and interpretation, will have to be overcome to translate NGS to the bedside of cancer patients. Overall, continuous dedication to apply NGS in clinical oncology practice will enable us to be one step closer to personalized medicine.
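The "data analysis and interpretation" challenge the abstract highlights is concrete even at the simplest level. As a minimal sketch (the thresholds and variant records below are hypothetical illustrations, not a clinical pipeline), one early interpretation step is filtering candidate mutations by read depth and variant allele frequency (VAF):

```python
# Minimal sketch of one NGS interpretation step: filtering candidate
# variants by read depth and variant allele frequency (VAF).
# Thresholds and variant records are hypothetical illustrations.

def filter_variants(variants, min_depth=100, min_vaf=0.05):
    """Keep variants with enough supporting reads to be credible calls."""
    kept = []
    for v in variants:
        vaf = v["alt_reads"] / v["depth"]
        if v["depth"] >= min_depth and vaf >= min_vaf:
            kept.append({**v, "vaf": round(vaf, 3)})
    return kept

candidates = [
    {"gene": "EGFR", "depth": 500, "alt_reads": 60},   # passes both filters
    {"gene": "KRAS", "depth": 80,  "alt_reads": 40},   # fails depth filter
    {"gene": "TP53", "depth": 300, "alt_reads": 6},    # fails VAF filter
]

calls = filter_variants(candidates)  # only the EGFR variant survives
```

Real clinical pipelines layer on quality scores, population databases, and annotation, which is exactly the interpretation burden the authors describe.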

  1. Lev-Ari A. Sunitinib brings Adult acute lymphoblastic leukemia (ALL) to Remission-RNA Sequencing-FLT3 Receptor Blockade. 2012. Available at: http://pharmaceuticalintelligence.com/2012/07/09/sunitinib-brings-adult-all-to-remission-rna-sequencing/


Ingeniería y competitividad

Print version ISSN 0123-3033

Ing. compet. vol.16 no.1 Cali Jan./June 2014


Bioinformática multifractal: Una propuesta hacia la interpretación no-lineal del genoma

Multifractal bioinformatics: A proposal to the nonlinear interpretation of genome

Pedro A. Moreno
Escuela de Ingeniería de Sistemas y Computación, Facultad de Ingeniería, Universidad del Valle, Cali, Colombia
E-mail: pedro.moreno@correounivalle.edu.co

Thematic area: System engineering
Received: September 19, 2012
Accepted: December 16, 2013


The first draft of the human genome (HG) sequence was published in 2001 by two competing consortia. Since then, several structural and functional characteristics of HG organization have been revealed. Today, more than 2,000 human genomes have been sequenced, and these findings are strongly impacting academia and public health. Despite all this, a major bottleneck, called genome interpretation, persists: the lack of a theory that integrates and explains the complex puzzle of coding and non-coding features that compose the HG as a whole. Ten years after the HG was sequenced, two recent studies, approached within the multifractal formalism, allow proposing a nonlinear theory that helps interpret the structural and functional variation of the genetic information of genomes. The present review article discusses this new approach, called "Multifractal Bioinformatics".

Keywords: Omic sciences, bioinformatics, human genome, multifractal analysis.

  1. Introduction

Omic Sciences and Bioinformatics

In order to study genomes, their life properties, and the pathological consequences of their impairment, the Human Genome Project (HGP) was created in 1990. Since then, about 500 Gbp (EMBL) represented in thousands of prokaryotic genomes and tens of different eukaryotic genomes have been sequenced (NCBI, 1000 Genomes, ENCODE). Today, genomics is defined as the set of sciences and technologies dedicated to the comprehensive study of the structure, function, and origin of genomes. Several types of genomics have arisen as a result of the expansion and application of genomics to the study of the Central Dogma of Molecular Biology (CDMB), Figure 1 (above). The catalog of different types of genomics uses the suffix "-omics," meaning "set of," to denote the new massive approaches of the omics sciences (Moreno et al., 2009). Given the large amount of genomic information available in the databases and the urgency of its actual interpretation, the balance has begun to lean heavily toward the bioinformatics infrastructure requirements of research laboratories, Figure 1 (below).

Bioinformatics, or computational biology, is defined as the application of computer and information technology to the analysis of biological data (Mount, 2004). It is an interdisciplinary science that draws on computing, applied mathematics, statistics, computer science, artificial intelligence, biophysics, biochemistry, genetics, and molecular biology. Bioinformatics was born from the need to understand the sequences of nucleotide or amino acid symbols that make up DNA and proteins, respectively. These analyses are made possible by the development of powerful algorithms that predict and reveal an infinity of structural and functional features in genomic sequences, such as gene location, discovery of homologies between macromolecule databases (BLAST), algorithms for phylogenetic analysis, for regulatory analysis, or for the prediction of protein folding, among others. This great development has created a multiplicity of approaches giving rise to new types of bioinformatics, such as the Multifractal Bioinformatics (MFB) proposed here.

1.1 Multifractal Bioinformatics and Theoretical Background

MFB is a proposal to analyze the information content of genomes and their life properties in a non-linear way. It is part of a specialized sub-discipline called "nonlinear bioinformatics," which uses a number of related techniques for the study of nonlinearity (fractal geometry, Hurst exponents, power laws, wavelets, among others) applied to the study of biological problems (http://pharmaceuticalintelligence.com/tag/fractal-geometry/). Its application requires detailed knowledge of the structure of the genome to be analyzed and an appropriate command of multifractal analysis.
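As a toy illustration of one of these techniques (not the article's method; the random sequence and the scaling estimator below are simplifying assumptions), the classic "DNA walk" maps purines to +1 and pyrimidines to -1, and a Hurst exponent can be estimated from how the walk's displacement scales with window size:

```python
import math
import random

def dna_walk(seq):
    """Classic DNA walk: purines (A, G) step +1, pyrimidines (C, T) step -1."""
    return [1 if base in "AG" else -1 for base in seq]

def hurst(steps, window_sizes=(4, 8, 16, 32, 64)):
    """Estimate the Hurst exponent H from the scaling of the RMS displacement
    of non-overlapping windows: displacement ~ n**H."""
    xs, ys = [], []
    for n in window_sizes:
        sums = [sum(steps[i:i + n]) for i in range(0, len(steps) - n + 1, n)]
        rms = math.sqrt(sum(s * s for s in sums) / len(sums))
        xs.append(math.log(n))
        ys.append(math.log(rms))
    # least-squares slope of log(rms) versus log(n)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

random.seed(0)
random_seq = "".join(random.choice("ACGT") for _ in range(20000))
H = hurst(dna_walk(random_seq))  # ~0.5: an uncorrelated sequence has no long-range memory
```

Long-range correlated genomic regions push H away from 0.5; the multifractal analysis the article reviews generalizes this single exponent to a whole spectrum of exponents.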


Genome-wide differential gene network analysis R software and its application in LnCap prostate cancer

Gökmen Altay* and David E. Neal
La Jolla Institute for Allergy and Immunology, CA, USA; Nuffield Department of Surgical Sciences, University of Oxford, Headington, OX3 7DQ, Oxford, United Kingdom. *Corresponding author: altay@lji.org

Abstract: We introduce an R software package for condition-specific gene regulatory network analysis based on the DC3NET algorithm, and present an application to a real prostate dataset that demonstrates the benefit of the software. We performed genome-wide differential gene network analysis with the software on the LnCap androgen-stimulated and androgen-deprived prostate cancer gene expression datasets (GSE18684) and inferred the androgen-stimulated prostate cancer-specific differential network. As an outstanding result, CXCR7 along with CXCR4 appeared to have the most important role in the androgen-stimulated prostate-specific genome-wide differential network. This blind estimation is strongly supported by the literature. The critical roles of CXCR4, a receptor over-expressed in many cancers, and CXCR7 in mediating tumor metastasis, along with their contributions as biomarkers of tumor behavior and as potential therapeutic targets, have been studied in several other types of cancers. In fact, a pharmaceutical company had already developed a therapy that inhibits CXCR4 to block non-cancerous immuno-suppressive and pro-angiogenic cells from populating the tumor, disrupting the cancer environment and restoring normal immune surveillance functions. Considering this strong confirmation, our inferred regulatory network might reveal the driving mechanism of LnCap androgen-stimulated prostate cancer, because CXCR4 appeared to be at the center of the largest subnetwork of our inferred differential network. Moreover, the largest subnetwork was significantly enriched for axon guidance, Fc gamma R-mediated phagocytosis, and endocytosis. This also conforms with the recent literature in the field of prostate cancer. We demonstrate how to derive condition-specific gene targets from expression datasets at the genome-wide level using differential gene network analysis. Our results show that differential gene network analysis worked well in a prostate cancer dataset, which suggests the use of this approach as an essential part of current expression data processing. Availability: the introduced R software package is available in CRAN at https://cran.r-project.org/web/packages/dc3net and also at https://github.com/altayg/dc3net
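The workflow the abstract describes (infer a network per condition, take the set difference, rank hubs by connectivity) can be sketched in miniature. This is not the DC3NET algorithm, which uses mutual-information estimates; the sketch below substitutes simple correlation thresholding and toy expression values just to show the set-difference and hub-ranking steps:

```python
import math
from itertools import combinations

def corr(x, y):
    """Pearson correlation of two equal-length expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def infer_edges(expr, threshold=0.9):
    """Draw an edge between genes whose profiles are strongly (anti)correlated.
    A simplified stand-in for DC3NET's mutual-information network inference."""
    return {frozenset((g1, g2))
            for g1, g2 in combinations(expr, 2)
            if abs(corr(expr[g1], expr[g2])) >= threshold}

def differential_hubs(expr_a, expr_b):
    """Edges present only in condition A, with genes ranked by degree (hubness)."""
    diff = infer_edges(expr_a) - infer_edges(expr_b)
    degree = {}
    for edge in diff:
        for gene in edge:
            degree[gene] = degree.get(gene, 0) + 1
    return sorted(degree.items(), key=lambda kv: -kv[1])

# Toy data: G2's correlations exist only under the "stimulated" condition.
stimulated = {"HUB": [1, 2, 3, 4, 5],
              "G1":  [2, 4, 6, 8, 10],
              "G2":  [5, 4, 3, 2, 1]}
deprived = {"HUB": [1, 2, 3, 4, 5],
            "G1":  [2, 4, 6, 8, 10],
            "G2":  [2, 5, 1, 6, 3]}

hubs = differential_hubs(stimulated, deprived)  # G2 tops the differential network
```

In the paper's real analysis the analogous condition-specific hub roles fall to genes such as CXCR7.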

  1. Discussion

By employing a differential gene network analysis approach, the present study aims to investigate the molecular mechanisms that may drive disease progression in prostate cancer using our presented software, dc3net. The top four hub nodes identified in the present study have been strongly associated with the prostate cancer metastatic process: CXCR7, STK39, ELOVL7 and ACSL3. Hub nodes are genes that are highly connected with other genes, and they have been proposed to have important roles in biological development. Since hub nodes have more complex interactions than other genes, they may have crucial roles in the underlying mechanisms of disease (Guo, 2015). Identification of hub genes involved in the progression of prostate cancer may lead to the development of better diagnostic methods and therapeutic approaches. According to our analysis, CXCR7 (chemokine (C-X-C motif) receptor 7) is by far the top hub gene in the androgen-stimulated differential network, and it is also part of the largest independent subnetwork, as seen in Figure 3. In (Wang, 2008), it is reported that staining of high-density tissue microarrays shows that levels of CXCR7/RDC1 expression increase as tumors become more aggressive. In vitro and in vivo studies with prostate cancer cell lines also suggest that alterations in CXCR7/RDC1 expression are associated with enhanced invasive and adhesive activities in addition to a survival advantage. Among other papers on CXCR7 (Zheng, 2010), it was shown that increased CXCR7 expression was found in hepatocellular carcinoma (HCC) tissues. Knockdown of CXCR7 expression by transfection with CXCR7 shRNA significantly inhibited SMMC-7721 angiogenesis, adhesion and cell invasion. Moreover, down-regulation of CXCR7 expression leads to a reduction of tumor growth in a xenograft model of HCC (Zheng, 2010).

Another study demonstrated that the IL8-regulated chemokine receptor CXCR7 stimulates EGFR signaling to promote prostate cancer growth (Singh, 2011). In a study conducted by Yun et al., it is reported that CXCR7 expression is increased in most tumor cells compared with normal cells and is involved in cell proliferation, migration, survival, invasion and angiogenesis during the initiation and progression of many cancer types, including prostate cancer (Yun, 2015). A more recent study indicated that there appears to be a disconnect in the effect of DHT on the CXCL12/CXCR4/CXCR7 chemokine axis between the transcriptional and translational machinery in the androgen-responsive LnCaP cell line. There are many other studies showing the strong role of CXCR7 in metastatic cancer, which strongly validates that our foremost blind prediction is very likely to be true and thus warrants further experimental work on the targets we inferred in this study. It was also observed that CXCR7/RDC1 levels are regulated by CXCR4 (Singh, 2011). This is very interesting supporting information from the literature for our blind estimation, because in our predicted largest independent subnetwork, as shown in Figure 3, CXCR7 and CXCR4 appear to be very close, interacting over only one gene. Although CXCR4 is not a hub gene, it appears to act as a bridge that connects both halves of the largest subnetwork. According to KEGG analysis, CXCR4 was found in the gene lists of two different significantly enriched KEGG pathways, axon guidance and endocytosis, which are strongly associated with prostate cancer (Table 3).

Considering that the prediction was made at the global level, this literature confirmation seems assuring and not a coincidence. Therefore, this relation is worth experimental follow-up in LnCap cancer too. It is also reported (Shanmugam, 2011) that inhibition of the CXCR4/CXCL12 signaling axis by ursolic acid leads to suppression of metastasis in the transgenic adenocarcinoma of mouse prostate model, and that CXCR4 induced a more aggressive phenotype in prostate cancer (Miki, 2007). In another study, it is reported that CXCR4 and CXCR7 have critical roles in mediating tumor metastasis in various types of cancers, both being receptors for an important α-chemokine, CXCL12 (Sun, 2010). Furthermore, a more recent study concluded that CXCR4 plays a crucial role in cancer proliferation, dissemination and invasion, and that inhibition of CXCR4 strongly affects prostate cancer metastatic disease (Gravina, 2015). The chief officer of the Massachusetts-based X4 Pharmaceuticals company recently stated that the CXCR4 protein "acts as a beacon to attract cells to surround a tumor, effectively hiding the tumor from the body's T cells that would otherwise destroy them". He indicated that X4 is beginning human trials of CXCR4 inhibitors, aiming to develop a therapy to block the protein (http://pharmaceuticalintelligence.com/2015/12/15/are-cxc4-antagonists-making-a-comebackin-cancer-chemotherapy, 2015).


  1. Warburg Effect. Presented by: D. Sairam. Course: Fundamentals of Biology (Life Sciences), Course Code: BSBT-201. Course Instructor: Dr. Subhabrata Kar. Presentation Code: U3P1
  2. Overview: Introduction; Causes of the Warburg Effect; Significance of the Warburg Effect; References
  3. Introduction: Warburg, considered by many the pre-eminent biochemist of the first half of the twentieth century, made vital contributions to many other areas of biochemistry, including respiration, photosynthesis, and the enzymology of intermediary metabolism. In the mid-1920s, Warburg and co-workers showed that, under aerobic conditions, tumour tissues metabolize approximately tenfold more glucose to lactate in a given time than normal tissues. This phenomenon later came to be known as the "Warburg Effect". Warburg purified and crystallized seven of the enzymes of glycolysis. He used a tool called the Warburg manometer, which measured the consumption of oxygen directly by monitoring changes in gas volume, and therefore allowed quantitative measurement of any enzyme with oxidase activity.
  4. Note: the Warburg Effect in plant physiology and in oncology are different phenomena. In plant physiology, the Warburg Effect refers to oxygen acting as a competitive inhibitor of carbon dioxide fixation by RuBisCO, which initiates photosynthesis. Source: Bild-1928
  5. Comparison of glycolysis between a normal tissue and a tumour/proliferated tissue: http://pharmaceuticalintelligence.com/2012/10/17/is-the-warburg-effect-the-cause-or-the-effect-of-cancer-a-21st-century-view/


EML4-ALK fusion gene


Published with reusable license by Camille Kawawa-Beaudan

July 2, 2015

Sasaki, Takaaki, Scott J. Rodig, Lucian R. Chirieac, and Pasi A. Janne. "The Biology and Treatment of EML4-ALK Non-Small Cell Lung Cancer." NCBI, US National Library of Medicine, 24 Apr. 2010. Web. 30 June 2015. <http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2888755/>.

Shaw, Alice T., and Benjamin Solomon. "Targeting Anaplastic Lymphoma Kinase in Lung Cancer." Clinical Cancer Research, 2 Feb. 2011. Web. 30 June 2015. <http://clincancerres.aacrjournals.org/content/17/8/2081.full>.

Webb, Thomas R., Jake Slavish, et al. "Anaplastic Lymphoma Kinase: Role in Cancer Pathogenesis and Small-molecule Inhibitor Development for Therapy." Expert Review of Anticancer Therapy, U.S. National Library of Medicine, 1 Jan. 2010. Web. 30 June 2015. <http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2780428/>.



GENEREGULATION: Deciphering the code of gene regulation using massively parallel assays of designed sequence libraries, citing http://pharmaceuticalintelligence.com/tag/transcription/page/2/





Cancer is a broad group of diseases involving unregulated cell growth, medically known as malignant neoplasm. In cancer, cells divide and grow uncontrollably and invade nearby parts of the body. The cancer may also spread to more distant parts of the body through the lymphatic system or bloodstream, a process called metastasis. However, not all tumors are cancerous. Tumors that do not grow uncontrollably, do not invade neighboring tissues, and do not spread throughout the body are called benign tumors.

There are more than 100 types of cancer.


Main sites of metastases for some common cancer types. Primary cancers are denoted by "…cancer" and their main metastasis sites by "…metastases". (Photo credit: Wikipedia)

  • Commander Selvam (drcommanderselvamsiddhar28.wordpress.com)
  • Cancer Cells Identified and Highly Preventable (guardianlv.com)
  • Cancer Metastasis (pharmaceuticalintelligence.com)


Cancer and pigs

Pig cancer has been widely studied because pigs resemble humans in anatomy, physiology, and genetic makeup, and because new methods to manipulate the pig genome are available. So a lot of research continues to be done on pigs and cancer. Some of the articles are older and additional research has been done, but those studies have not been published publicly, so I am including links to studies that can be viewed by anyone. I do have access to some studies that are published on sites only healthcare personnel can access. These are just a few things you should be aware of when adding a pig to your family. This does not mean your pig will get cancer, but it is certainly a possibility. In the past 15 years (2001–2016), the incidence of reported neoplasia in pet pot-bellied pigs has increased, mostly as a consequence of increased life spans resulting from improved veterinary medical care.

Pigs are predisposed to some genetic cancers as well as other types of cancer. Reproductive cancer is extremely common in pigs that were never spayed or neutered, especially in females. I have seen research studies indicating that as many as 75% of ALL females will develop some kind of neoplasm or tumor in the reproductive organs as they age, which is why we stress the importance of spaying and neutering your pig. In one study, age data were available for 27 of the 32 pigs and ranged from 4 months to 19 years. The study is linked below from Sage Pub.




But, one key distinction between the Biologics Price Act and the Hatch-Waxman Act that deserves some attention is that even though the current FDA draft guidelines are silent on the issue of labeling requirements for follow-on biologic drugs, it is most likely that the FDA will not only permit, but will require, follow-on biologic drug manufacturers to maintain warning labels that are unique. (117) What this means is that brand-name biologic drug

  1. It is worth emphasizing that there is a normative debate amongst industry as to whether follow-on biologics should have unique names. See Biosimilars: Intellectual Property Creation and Protection by Pioneer and by Biosimilar Manufacturers, LEADERS IN PHARMACEUTICAL BUS. INTELLIGENCE, http://pharmaceuticalintelligence.com/tag/BiologicsPriceAct/ (last visited Oct. 27, 2014) (“Having unique names will avoid unintended substitution, minimize risk of medication errors, allow for essential elements of pharmacovigilance such as traceability and follow-up of adverse drug reactions, as well as facilitate prescriber-patient decision making . . . .”). However, the debate seems to be tipping in favor of requiring unique labeling as opposed to identical labeling. See id. (“[W]hile all biologics should be uniquely tracked, biosimilars should not require unique International Nonproprietary Names (INNs) from their reference products.


Shashi Shekhar Anand, Navgeet, Balraj Singh Gill*
Centre for Biosciences,
School of Basic and Applied Sciences,
Central University of Punjab, Bathinda, India

The word ‘epigenetics’ was first coined by Conrad Waddington in 1946. It deals with functionally relevant modifications to the genome that do not involve a change in the nucleotide sequence. To date, work has focused on the functions of genome sequences and how their regulation occurs. Emerging epigenetic changes, and the interactions between cis-acting elements and protein factors, play a central role in gene regulation and give insight into various diseases. To evaluate the crosstalk of DNA and protein across the whole genome, a new technique called ChIP-chip combines chromatin immunoprecipitation with microarray analysis. ChIP-chip has recently been used in basic biological studies and may be improved further and made useful for other areas, like human diseases. Nowadays, the large number of discoveries made by ChIP-chip and other high-throughput techniques like it may be connected with evolving bioinformatics to add to our knowledge of life and diseases.
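The principle behind ChIP-chip can be sketched numerically: each microarray probe reports an immunoprecipitated (IP) signal and an input (control) signal, and probes where the IP/input ratio is high mark candidate protein-DNA binding sites. The probe intensities and the 2-fold cutoff below are hypothetical, and real analyses add normalization and statistical testing:

```python
import math

def enriched_probes(ip, control, min_log2_ratio=1.0):
    """Flag probes whose IP signal is at least 2-fold above the input
    control (log2 ratio >= 1): a simplified stand-in for peak calling."""
    hits = []
    for probe_index, (sig, ctrl) in enumerate(zip(ip, control)):
        log2_ratio = math.log2(sig / ctrl)
        if log2_ratio >= min_log2_ratio:
            hits.append((probe_index, round(log2_ratio, 2)))
    return hits

ip_signal = [105, 98, 430, 520, 110, 95]       # hypothetical probe intensities
input_signal = [100, 100, 100, 100, 100, 100]  # matched input control

peaks = enriched_probes(ip_signal, input_signal)  # probes 2 and 3 are enriched
```

Runs of adjacent enriched probes, rather than single probes, are what a real pipeline would report as binding regions.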

Histone Phosphorylation
Histone phosphorylation is the modification in which a phosphate group is added. Phosphorylation is catalyzed by various specific protein kinases, whereas phosphatases mediate removal of the phosphate group (Figure 5). The most studied sites of histone phosphorylation are serine residues. Phosphorylation of histones can also be a vital regulatory signal. The H2A variant H2AX helps in DNA repair through its phosphorylation; it has an important role in the DNA damage response and DNA repair.

Fig 5. Phosphorylation of serine chain

Histone H3 phosphorylation has been reported to play important roles in both transcription and chromatin condensation during mitosis.


Archives of Medicine


Ca2+-Stimulated Exocytosis: The Role of Calmodulin and Protein Kinase C in Ca2+ Regulation of Hormone and Neurotransmitter

Larry H Bernstein*

New York Methodist Hospital, Brooklyn, New York, USA

Corresponding Author:

Larry H Bernstein
New York Methodist Hospital
Brooklyn, New York, USA
Tel: 2032618671
E-mail: larry.bernstein@gmail.com



This is a review of the role of calmodulin and protein kinase C in regulation of Ca2+-stimulated secretion. The molecular mechanisms underlying the Ca2+ regulation of hormone and neurotransmitter release are largely unknown. Using a reconstituted [3H]norepinephrine release assay in permeabilized PC12 cells, we found that essential proteins supporting the triggering stage of Ca2+-stimulated exocytosis are enriched in an EGTA extract of brain membranes. Fractionation of this extract allowed purification of two factors that stimulate secretion in the absence of any other cytosolic proteins. These are calmodulin and protein kinase Cα (PKCα). Their effects on secretion were confirmed using commercial and recombinant proteins. Calmodulin enhances secretion in the absence of ATP, whereas PKC requires ATP to increase secretion, suggesting that phosphorylation is involved in PKC-mediated stimulation but not calmodulin-mediated stimulation. Both proteins modulate secretion dose-dependently; the half-maximal increase was elicited by 3 nM PKC and 75 nM calmodulin. These results suggest that calmodulin and PKC increase Ca2+-activated exocytosis by directly modulating the membrane- or cytoskeleton-attached exocytic machinery downstream of Ca2+ elevation.
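The half-maximal concentrations quoted above (3 nM PKC, 75 nM calmodulin) are EC50 values. A small sketch of the Hill equation (with a cooperativity of n = 1 assumed for illustration; the abstract does not state one) shows what half-maximal means:

```python
def hill(conc_nM, ec50_nM, n=1.0):
    """Fractional response at a given concentration (Hill equation)."""
    return conc_nM ** n / (ec50_nM ** n + conc_nM ** n)

# EC50 values quoted in the abstract; the dose-response shape (n = 1) is assumed.
pkc_half = hill(3.0, ec50_nM=3.0)     # at its EC50, the response is exactly half-maximal
cam_half = hill(75.0, ec50_nM=75.0)
pkc_low = hill(0.3, ec50_nM=3.0)      # tenfold below EC50: small response
```

Fitting this curve to release measured at several concentrations is how such EC50 values are typically extracted.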

Ca2+-Stimulated Exocytosis:  The Role of Calmodulin and Protein Kinase C in Ca2+ Regulation of Hormone and Neurotransmitter

Writer and Curator: Larry H Bernstein, MD, FCAP
Curator and Content Editor: Aviva Lev-Ari, PhD, RN



Government Repositories Using our Curations





One of our articles was used as a reference for this clinical trial entry


see below


Links: URL: http://print.ispub.com/api/0/ispub-article/7521
Description: Multidimensional Representation of Concepts as Cognitive Engrams in the Human Brain
URL: http://ispub.com/IJH/8/1/9567
Description: Evaluation of Cognitive Impairment
URL: https://pharmaceuticalintelligence.com/2013/02/27/ustekinumab-new-drug-therapy-for-cognitive-decline-resulting-from-neuroinflammatory-cytokine-signaling-and-alzheimers-disease/
Description: Ustekinumab New Drug Therapy for Cognitive Decline resulting from Neuroinflammatory Cytokine Signaling and Alzheimer’s Disease
Available IPD/Information:

Study NCT02835716
Submitted Date:  September 10, 2016 (v3)

Study Identification
Unique Protocol ID: PCD=OO ALZ
Brief Title: Pre-Clinical (Alzheimers) Diagnosis PCD = Optimum Outcomes OO (PCD=OOALZ)
Official Title: Pre-Clinical Alzheimer’s (ALZ) Diagnosis (PCD) = Optimum Outcomes (OO)
Secondary IDs:
Study Status
Record Verification: September 2016
Overall Status: Unknown status [Previously: Recruiting]
Study Start: September 2016
Primary Completion: September 2019 [Anticipated]
Study Completion: September 2020 [Anticipated]
First Submitted: July 10, 2016
First Submitted that Met QC Criteria: July 13, 2016
First Posted: July 18, 2016 [Estimate]
Last Update Submitted that Met QC Criteria: September 10, 2016
Last Update Posted: September 13, 2016 [Estimate]
Sponsor/Collaborators
Sponsor: Millennium Magnetic Technologies, LLC
Responsible Party: Sponsor
Oversight
U.S. FDA-regulated Drug:
U.S. FDA-regulated Device:
Data Monitoring: No
Study Description
Brief Summary: This Observational protocol will attempt to verify two recent and very critical concepts in ALZ Clinical Research by studying high-risk individuals who already are taking medications which may prevent the onset of ALZ.

  • It may be possible to determine the future development of ALZ in a preclinical state in a cognitively normal but high-risk individual at least 18-24 months before any symptoms of cognitive impairment develop.
  • Early treatment of these cognitively normal, high-risk persons with subclinical pre-ALZ can prevent or delay the occurrence and severity of ALZ.


Neither this protocol nor the fMRI imaging is designed or intended to diagnose or treat ALZ, nor to develop or use medications or diagnostic neuroimaging outside of already approved and accepted parameters. Persons who volunteer to be study subjects in this observational protocol will be under the care of their primary care/specialty physician, who will order tests and treatments as they see appropriate.

Although there is a very large body of peer-reviewed scientific literature demonstrating that certain functional MRI patterns are associated with certain neurologic conditions, the utilization of fMRI for the evaluation of neurologic disorders is still considered an emerging science and therefore in the investigational stage. Although this protocol will report on brain patterns of certain neurologic conditions such as cognitive impairment and Alzheimer’s disease, based on patterns published in peer-reviewed journals, such findings are not considered stand alone or diagnostic per se and should always be considered by the PMD in conjunction with the patient’s clinical condition. These data should only be used as additional information to add to the PMD’s diagnostic impression.

Detailed Description: Prospective observational study participants aged 50-75 years are identified who are currently (<1 month) taking, or will soon (within 3 months) start taking, study medications of interest through their own physician.

It is explained to prospective observational study participants how a person can be cognitively normal but at high risk for developing clinical ALZ, and how this risk can be prospectively identified.

They are advised that there is some rationale that medications they are or will soon be taking may have a protective effect in delaying the onset of ALZ, and that protective effect can be monitored.

They are asked if they would like to participate in a protocol that monitors their prospective risk for developing ALZ in the short term, and whether certain of their prescribed medications may have a protective effect. Those who agree to participate are then enrolled in the study.

Enrollees are tested for risk factors for having pre-clinical ALZ. Individuals identified as being at risk at baseline are followed at 6 month intervals for a 24 month period using psychometric testing and functional neuroimaging. Their maintenance of cognitive stability or cognitive decline is monitored while under the care of their PMD and while taking medications of interest.

Conditions
Conditions: Alzheimer Disease
Keywords: Phosphodiesterase type 4 inhibitor
P40 subunit
Interleukin IL-12
Interleukin IL-23
Apoprotein e4
Interleukin IL-17A
Study Design
Study Type: Observational [Patient Registry]
Observational Study Model: Case-Only
Time Perspective: Prospective
Biospecimen Retention:
Biospecimen Description:
Enrollment: 150 [Anticipated]
Number of Groups/Cohorts 0
Target Follow-Up Duration: 5 Years
Groups and Interventions
Intervention Details:
Drug: roflumilast

DALIRESP is used in adults with severe chronic obstructive pulmonary disease (COPD) to decrease the number of flare-ups or the worsening of COPD symptoms (exacerbations).
Other Names:

  • Daliresp
Biological: ustekinumab

STELARA is approved to treat adults 18 years and older with moderate or severe plaque psoriasis that involves large areas or many areas of their body
Other Names:

  • Stelara
Outcome Measures
Primary Outcome Measures:
1. Number of Participants who Develop Cognitive Decline
[ Time Frame: 18-24 months ] Cognitive decline is defined as a reduction from baseline performance within each individual of at least one standard deviation (SD) on at least one of the three principal outcome indices (DRS-2, RAVLT Sum of Trials 1-5 [T1-5], RAVLT Delayed Recall [DR]).
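The decline rule in the primary outcome above is purely arithmetic, so it can be sketched as a small predicate. This is a hypothetical illustration; the scores and SD values below are invented placeholders, not study data.

```python
# Sketch of the primary-outcome rule: decline = a drop of >= 1 SD from
# baseline on at least one of the three indices (DRS-2, RAVLT T1-5, RAVLT DR).
# All numeric values below are illustrative placeholders, not study parameters.

INDICES = ("DRS-2", "RAVLT T1-5", "RAVLT DR")

def shows_cognitive_decline(baseline, followup, sd):
    """Return True if any index dropped by at least one SD from baseline.

    baseline, followup, sd: dicts mapping index name -> score / SD value.
    """
    return any(baseline[i] - followup[i] >= sd[i] for i in INDICES)

# Hypothetical participant: only RAVLT Delayed Recall dropped by >= 1 SD.
baseline = {"DRS-2": 140, "RAVLT T1-5": 50, "RAVLT DR": 11}
followup = {"DRS-2": 139, "RAVLT T1-5": 48, "RAVLT DR": 7}
sd       = {"DRS-2": 5,   "RAVLT T1-5": 8,  "RAVLT DR": 3}

print(shows_cognitive_decline(baseline, followup, sd))  # True: 11 - 7 = 4 >= 3
```

Note the rule is a disjunction: a participant counts as declining if any single index crosses its 1-SD threshold, even if the other two are stable.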
Eligibility
Study Population: Adults at high risk of developing ALZ within 18-48 months who are currently cognitively normal for their age- and sex-matched group. Risk factors include a parental history of ALZ and/or positive APOE4 alleles.
Sampling Method: Non-Probability Sample
Minimum Age: 50 Years
Maximum Age: 75 Years
Sex: All
Gender Based:
Accepts Healthy Volunteers: Yes
Criteria: Inclusion Criteria:

  • 50 to 75 years old,
  • Normal baseline cognitive function by standard psychometric testing.
  • Able to privately fund psychometric and neuroimaging studies if not covered by their insurance,
  • Able to give written Informed Consent,
  • Assent for collaboration by their primary care or specialty physician,
  • Currently taking (<1 month) or planning to take (within the next 3 months) medications which are identified in the study group of interest, prescribed by and under the care of a PMD or specialty physician.

Exclusion Criteria:

• Inability to undergo MR imaging: claustrophobia, certain metal implants.
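Taken together, the inclusion and exclusion rules above amount to a simple boolean predicate. A minimal sketch (field names are invented for illustration; the protocol text itself is authoritative):

```python
# Illustrative restatement of the eligibility rules above as a predicate.
# Parameter names are invented for this sketch.

def is_eligible(age, normal_cognition, can_fund, consented,
                physician_assent, on_study_medication, mri_contraindicated):
    inclusion = (
        50 <= age <= 75
        and normal_cognition       # normal baseline psychometric testing
        and can_fund               # can cover testing if insurance does not
        and consented              # written informed consent given
        and physician_assent       # PMD/specialist agrees to collaborate
        and on_study_medication    # taking (<1 mo) or starting (<3 mo) a study drug
    )
    exclusion = mri_contraindicated  # claustrophobia, certain metal implants
    return inclusion and not exclusion

print(is_eligible(62, True, True, True, True, True, False))  # True
print(is_eligible(62, True, True, True, True, True, True))   # False: cannot undergo MRI
```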

Contacts/Locations
Study Officials: Donald H Marks, MD PhD
Principal Investigator
Locations: United States, Connecticut
Millennium Magnetic Technologies, LLC
Westport, Connecticut, United States, 06880
Contact: Steve Levy, MD 203-423-9494 SL@MilMag.net
Contact: Donald H Marks, MD PhD 973-307-0364 DHM@MilMag.net
IPD Sharing
Plan to Share IPD: Yes
Supporting Information:

Time Frame:

Access Criteria:

References
Citations: Bangen KJ, Restom K, Liu TT, Wierenga CE, Jak AJ, Salmon DP, Bondi MW. Assessment of Alzheimer’s disease risk with functional magnetic resonance imaging: an arterial spin labeling study. J Alzheimers Dis. 2012;31 Suppl 3:S59-74. PubMed 22531427
Caltagirone C, Ferrannini L, Marchionni N, Nappi G, Scapagnini G, Trabucchi M. The potential protective effect of tramiprosate (homotaurine) against Alzheimer’s disease: a review. Aging Clin Exp Res. 2012 Dec;24(6):580-7. doi: 10.3275/8585. Epub 2012 Sep 5. Review. PubMed 22961121
Cavedo E, Lista S, Khachaturian Z, Aisen P, Amouyel P, Herholz K, Jack CR Jr, Sperling R, Cummings J, Blennow K, O’Bryant S, Frisoni GB, Khachaturian A, Kivipelto M, Klunk W, Broich K, Andrieu S, de Schotten MT, Mangin JF, Lammertsma AA, Johnson K, Teipel S, Drzezga A, Bokde A, Colliot O, Bakardjian H, Zetterberg H, Dubois B, Vellas B, Schneider LS, Hampel H. The Road Ahead to Cure Alzheimer’s Disease: Development of Biological Markers and Neuroimaging Methods for Prevention Trials Across all Stages and Target Populations. J Prev Alzheimers Dis. 2014 Dec;1(3):181-202. PubMed 26478889
Chincarini A, Bosco P, Calvini P, Gemme G, Esposito M, Olivieri C, Rei L, Squarcia S, Rodriguez G, Bellotti R, Cerello P, De Mitri I, Retico A, Nobili F; Alzheimer’s Disease Neuroimaging Initiative. Local MRI analysis approach in the diagnosis of early and prodromal Alzheimer’s disease. Neuroimage. 2011 Sep 15;58(2):469-80. doi: 10.1016/j.neuroimage.2011.05.083. Epub 2011 Jun 16. PubMed 21718788
Chiang K, Koo EH. Emerging therapeutics for Alzheimer’s disease. Annu Rev Pharmacol Toxicol. 2014;54:381-405. doi: 10.1146/annurev-pharmtox-011613-135932. Review. PubMed 24392696
Chao S, Roberts JS, Marteau TM, Silliman R, Cupples LA, Green RC. Health behavior changes after genetic risk assessment for Alzheimer disease: The REVEAL Study. Alzheimer Dis Assoc Disord. 2008 Jan-Mar;22(1):94-7. doi: 10.1097/WAD.0b013e31815a9dcc. PubMed 18317253
Deardorff WJ, Grossberg GT. Targeting neuroinflammation in Alzheimer’s disease: evidence for NSAIDs and novel therapeutics. Expert Rev Neurother. 2017 Jan;17(1):17-32. Epub 2016 Jun 24. PubMed 27293026
García-Osta A, Cuadrado-Tejedor M, García-Barroso C, Oyarzábal J, Franco R. Phosphodiesterases as therapeutic targets for Alzheimer’s disease. ACS Chem Neurosci. 2012 Nov 21;3(11):832-44. doi: 10.1021/cn3000907. Epub 2012 Oct 1. Review. PubMed 23173065
Galluzzi S, Marizzoni M, Babiloni C, Albani D, Antelmi L, Bagnoli C, Bartres-Faz D, Cordone S, Didic M, Farotti L, Fiedler U, Forloni G, Girtler N, Hensch T, Jovicich J, Leeuwis A, Marra C, Molinuevo JL, Nobili F, Pariente J, Parnetti L, Payoux P, Del Percio C, Ranjeva JP, Rolandi E, Rossini PM, Schönknecht P, Soricelli A, Tsolaki M, Visser PJ, Wiltfang J, Richardson JC, Bordet R, Blin O, Frisoni GB; PharmaCog Consortium. Clinical and biomarker profiling of prodromal Alzheimer’s disease in workpackage 5 of the Innovative Medicines Initiative PharmaCog project: a ‘European ADNI study’. J Intern Med. 2016 Jun;279(6):576-91. doi: 10.1111/joim.12482. Epub 2016 Mar 4. PubMed 26940242
Griffin WS. Neuroinflammatory cytokine signaling and Alzheimer’s disease. N Engl J Med. 2013 Feb 21;368(8):770-1. doi: 10.1056/NEJMcibr1214546. PubMed 23425171
Gurney ME, D’Amato EC, Burgin AB. Phosphodiesterase-4 (PDE4) molecular pharmacology and Alzheimer’s disease. Neurotherapeutics. 2015 Jan;12(1):49-56. doi: 10.1007/s13311-014-0309-7. Review. PubMed 25371167
Christensen KD, Roberts JS, Whitehouse PJ, Royal CD, Obisesan TO, Cupples LA, Vernarelli JA, Bhatt DL, Linnenbringer E, Butson MB, Fasaye GA, Uhlmann WR, Hiraki S, Wang N, Cook-Deegan R, Green RC; REVEAL Study Group*. Disclosing Pleiotropic Effects During Genetic Risk Assessment for Alzheimer Disease: A Randomized Trial. Ann Intern Med. 2016 Feb 2;164(3):155-63. doi: 10.7326/M15-0187. Epub 2016 Jan 26. PubMed 26810768
Green RC, Christensen KD, Cupples LA, Relkin NR, Whitehouse PJ, Royal CD, Obisesan TO, Cook-Deegan R, Linnenbringer E, Butson MB, Fasaye GA, Levinson E, Roberts JS; REVEAL Study Group. A randomized noninferiority trial of condensed protocols for genetic risk disclosure of Alzheimer’s disease. Alzheimers Dement. 2015 Oct;11(10):1222-30. doi: 10.1016/j.jalz.2014.10.014. Epub 2014 Dec 9. PubMed 25499536
Haller S, Nguyen D, Rodriguez C, Emch J, Gold G, Bartsch A, Lovblad KO, Giannakopoulos P. Individual prediction of cognitive decline in mild cognitive impairment using support vector machine-based analysis of diffusion tensor imaging data. J Alzheimers Dis. 2010;22(1):315-27. doi: 10.3233/JAD-2010-100840. PubMed 20847435
Heckman PR, Wouters C, Prickaerts J. Phosphodiesterase inhibitors as a target for cognition enhancement in aging and Alzheimer’s disease: a translational overview. Curr Pharm Des. 2015;21(3):317-31. Review. PubMed 25159073
Li NC, Lee A, Whitmer RA, Kivipelto M, Lawler E, Kazis LE, Wolozin B. Use of angiotensin receptor blockers and risk of dementia in a predominantly male population: prospective cohort analysis. BMJ. 2010 Jan 12;340:b5465. doi: 10.1136/bmj.b5465. PubMed 20068258
Langbaum JB, Fleisher AS, Chen K, Ayutyanont N, Lopera F, Quiroz YT, Caselli RJ, Tariot PN, Reiman EM. Ushering in the study and treatment of preclinical Alzheimer disease. Nat Rev Neurol. 2013 Jul;9(7):371-81. doi: 10.1038/nrneurol.2013.107. Epub 2013 Jun 11. Review. Erratum in: Nat Rev Neurol. 2013 Aug;9(8):418. PubMed 23752908
Lazarczyk MJ, Hof PR, Bouras C, Giannakopoulos P. Preclinical Alzheimer disease: identification of cases at risk among cognitively intact older individuals. BMC Med. 2012 Oct 25;10:127. doi: 10.1186/1741-7015-10-127. Review. PubMed 23098093
Peters KR, Lynn Beattie B, Feldman HH, Illes J. A conceptual framework and ethics analysis for prevention trials of Alzheimer Disease. Prog Neurobiol. 2013 Nov;110:114-23. doi: 10.1016/j.pneurobio.2012.12.001. Epub 2013 Jan 21. Review. PubMed 23348495
Riedel WJ. Preventing cognitive decline in preclinical Alzheimer’s disease. Curr Opin Pharmacol. 2014 Feb;14:18-22. doi: 10.1016/j.coph.2013.10.002. Epub 2013 Nov 13. Review. PubMed 24565007
Rizk-Jackson A, Insel P, Petersen R, Aisen P, Jack C, Weiner M. Early indications of future cognitive decline: stable versus declining controls. PLoS One. 2013 Sep 9;8(9):e74062. doi: 10.1371/journal.pone.0074062. eCollection 2013. PubMed 24040166
Zhou Y, Tan C, Wen D, Sun H, Han W, Xu Y. The Biomarkers for Identifying Preclinical Alzheimer’s Disease via Structural and Functional Magnetic Resonance Imaging. Front Aging Neurosci. 2016 Apr 27;8:92. doi: 10.3389/fnagi.2016.00092. eCollection 2016. PubMed 27199739
Vernarelli JA, Roberts JS, Hiraki S, Chen CA, Cupples LA, Green RC. Effect of Alzheimer disease genetic risk disclosure on dietary supplement use. Am J Clin Nutr. 2010 May;91(5):1402-7. doi: 10.3945/ajcn.2009.28981. Epub 2010 Mar 10. PubMed 20219963
Vom Berg J, Prokop S, Miller KR, Obst J, Kälin RE, Lopategui-Cabezas I, Wegner A, Mair F, Schipke CG, Peters O, Winter Y, Becher B, Heppner FL. Inhibition of IL-12/IL-23 signaling reduces Alzheimer’s disease-like pathology and cognitive decline. Nat Med. 2012 Dec;18(12):1812-9. doi: 10.1038/nm.2965. Epub 2012 Nov 25. PubMed 23178247
Woodard JL, Seidenberg M, Nielson KA, Smith JC, Antuono P, Durgerian S, Guidotti L, Zhang Q, Butts A, Hantke N, Lancaster M, Rao SM. Prediction of cognitive decline in healthy older adults using fMRI. J Alzheimers Dis. 2010;21(3):871-85. doi: 10.3233/JAD-2010-091693. PubMed 20634590
Xekardaki A, Rodriguez C, Montandon ML, Toma S, Tombeur E, Herrmann FR, Zekry D, Lovblad KO, Barkhof F, Giannakopoulos P, Haller S. Arterial spin labeling may contribute to the prediction of cognitive deterioration in healthy elderly individuals. Radiology. 2015 Feb;274(2):490-9. doi: 10.1148/radiol.14140680. Epub 2014 Oct 7. PubMed 25291458

Description: Multidimensional Representation of Concepts as Cognitive Engrams in the Human Brain

Description: Evaluation of Cognitive Impairment

Description: Ustekinumab New Drug Therapy for Cognitive Decline resulting from Neuroinflammatory Cytokine Signaling and Alzheimer’s Disease


Companies using our curations in promotional material

Beckman Coulter Life Science, a division of Beckman Coulter with $3 billion a year in revenue, has used our reference in the following promotional material.


Post Translational Modification
Beyond what genes can express

While most recent estimates of the human genome’s size fall between 20,000 and 25,000 genes, that number is dwarfed by the human proteome encoded by our genome, which comprises more than 1 million proteins.

Single genes are responsible for the synthesis of multiple proteins, and one way this is accomplished is through the process of post-translational modification (PTM).

Protein PTMs have been shown to play a key role in cellular processes such as differentiation, and regulation of gene expression.

PTMs expand the range of gene products possible via the chemical addition or subtraction of functional groups to proteins already synthesized from an mRNA template without instructions for such alterations. These modifications play critical roles in the regulation of a protein’s activity and type of interaction with other molecules such as lipids, nucleic acids, enzymatic cofactors and other proteins.

Types of post-translational modification

Some of the most common and important PTMs so far identified include:

  • Phosphorylation — The addition of a phosphate group to tyrosine, serine or threonine amino acid residues; accomplished by kinase enzymes; one of the most important and well-studied PTMs that occur within all organisms

  • Methylation — The addition of a methyl group, via methyltransferase enzyme, to an arginine or lysine residue; methylation of histone (a DNA packaging protein) can either activate or repress genes depending on the amino acid modified

  • Acetylation — The addition of an acetyl group to a protein; gene regulation is largely mediated by histone acetylation and deacetylation; acetylation of the p53 protein is essential to its tumor-suppressing activity

  • Glycosylation — The addition of a glycan oligosaccharide to asparagine, serine or threonine residues; involved in antigen recognition and inflammatory response

  • Ubiquitination — The addition of ubiquitin protein to a lysine residue, either singly (monoubiquitination) or in a chain (polyubiquitination); proteins modified in the latter fashion are tagged for degradation

  • Lipidation — The addition of a lipid group to a cysteine or glycine residue; involved in biological regulation, membrane association and apoptosis (programmed cell death)

  • Proteolysis — The splitting of proteins into constituent polypeptides or amino acids; removing an N-terminal methionine residue, for example, can activate an inactive protein

Phosphorylation: adding a phosphate to one of a protein’s amino acid side chains. Phosphate groups carry two negative charges, and their addition to a protein results in a conformational change of the protein’s structure.
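Each PTM adds (or removes) a fixed chemical group, so it shifts a protein's mass by a characteristic amount, which is how PTMs are commonly detected in mass spectrometry. A minimal sketch using rounded monoisotopic mass deltas from standard references (illustrative values, not from this article):

```python
# Approximate monoisotopic mass shifts (Da) for a few of the PTMs listed above.
# Values rounded from standard references; illustrative only.
PTM_MASS_SHIFT = {
    "phosphorylation": 79.966,   # +HPO3 on Ser/Thr/Tyr
    "methylation":     14.016,   # +CH2 on Lys/Arg
    "acetylation":     42.011,   # +C2H2O on Lys
    "ubiquitination":  114.043,  # +Gly-Gly remnant on Lys after tryptic digest
}

def modified_mass(peptide_mass, ptms):
    """Peptide mass (Da) after applying a list of PTM names."""
    return peptide_mass + sum(PTM_MASS_SHIFT[p] for p in ptms)

# A hypothetical 1000.000 Da peptide carrying one phosphate group:
print(round(modified_mass(1000.000, ["phosphorylation"]), 3))  # 1079.966
```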


Deepthi, Surat P., Ph. D. Reviewed. “Types of Protein Post-Translational Modification.” News-Medical.Net, News-Medical, 10 May 2018, https://www.news-medical.net/life-sciences/Types-of-Protein-Post-Translational-Modification.aspx.
“New Human Gene Tally Reignites Debate.” Nature, Macmillan Publishers Limited, part of Springer Nature, 19 June 2018, https://www.nature.com/articles/d41586-018-05462-w.
“Overview of Post-Translational Modification | Thermo Fisher Scientific – US.” Thermo Fisher Scientific – US, https://www.thermofisher.com/us/en/home/life-science/protein-biology/protein-biology-learning-center/protein-biology-resource-library/pierce-protein-methods/overview-post-translational-modification.html.
“Overview of Posttranslational Modification (PTM) | Leaders in Pharmaceutical Business Intelligence (LPBI) Group.” Leaders in Pharmaceutical Business Intelligence (LPBI) Group, 29 July 2014, https://pharmaceuticalintelligence.com/2014/07/29/overview-of-posttranslational-modification-ptm/.
“Posttranslational Modification – an Overview | ScienceDirect Topics.” ScienceDirect.Com | Science, Health and Medical Journals, Full Text Articles and Books., https://www.sciencedirect.com/topics/neuroscience/posttranslational-modification.
“Post-Translational Modifications and Quality Control in the Rough ER – Molecular Cell Biology – NCBI Bookshelf.” National Center for Biotechnology Information, https://www.ncbi.nlm.nih.gov/books/NBK21741/.
“Protein Phosphorylation vs. Ubiquitination in Drug Development.” GenScript – Make Research Easy – The Leader in Molecular Cloning and Gene Synthesis, Peptide Synthesis, Protein and Antibody Engineering., https://www.genscript.com/protein-phosphorylation-vs-ubiquitination.html.


Read Full Post »

Multiple Major Scientific Journals Will Fully Adopt Open Access Under Plan S

Curator: Stephen J. Williams, PhD

More university library systems have been pressuring major scientific publishing houses to adopt an open-access strategy in order to reduce the library systems’ budgetary burdens.  In fact, some major universities, like the University of California system (and other publicly funded universities in the state), Oxford University in the UK, and even MIT, have decided to become their own publishing houses in a concerted effort to fight back against soaring journal subscription costs, as well as the costs burdening individual scientists and laboratories (some charges to publish one paper can run as high as $8,000 USD while the journal still retains all rights of distribution of the information).  Therefore more and more universities, as well as concerted efforts by the European Union and the US government, are mandating that scientific literature be published in an open-access format.

The results of this pressure are evident now, as major journals like Nature, JBC, and others plan to go fully open access in 2021.  Below is a listing of news reports on some of these journals’ plans to undertake a full open-access format.


Nature to join open-access Plan S, publisher says

09 APRIL 2020 UPDATE 14 APRIL 2020

Springer Nature says it commits to offering researchers a route to publishing open access in Nature and most Nature-branded journals from 2021.

Richard Van Noorden

After a change in the rules of the bold open-access (OA) initiative known as Plan S, publisher Springer Nature said on 8 April that many of its non-OA journals — including Nature — were now committed to joining the plan, pending discussion of further technical details.

This means that Nature and other Nature-branded journals that publish original research will now look to offer an immediate OA route after January 2021 to scientists who want it, or whose funders require it, a spokesperson says. (Nature is editorially independent of its publisher, Springer Nature.)

“We are delighted that Springer Nature is committed to transitioning its journals to full OA,” said Robert Kiley, head of open research at the London-based biomedical funder Wellcome, and the interim coordinator for Coalition S, a group of research funders that launched Plan S in 2018.

But Lisa Hinchliffe, a librarian at the University of Illinois at Urbana–Champaign, says the changed rules show that publishers have successfully pushed back against Plan S, softening its guidelines and expectations — in particular in the case of hybrid journals, which publish some content openly and keep other papers behind paywalls. “The coalition continues to take actions that rehabilitate hybrid journals into compliance rather than taking the hard line of unacceptability originally promulgated,” she says.





What is Plan S?

The goal of Plan S is to make scientific and scholarly works free to read as soon as they are published. So far, 17 national funders, mostly in Europe, have joined the initiative, as have the World Health Organization and two of the world’s largest private biomedical funders — the Bill & Melinda Gates Foundation and Wellcome. The European Commission will also implement an OA policy that is aligned with Plan S. Together, this covers around 7% of scientific articles worldwide, according to one estimate. A 2019 report published by the publishing-services firm Clarivate Analytics suggested that 35% of the research content published in Nature in 2017 acknowledged a Plan S funder (see ‘Plan S papers’).


Journal                       Total papers in 2017    % acknowledging Plan S funder
Nature                        290                     35%
Science                       235                     31%
Proc. Natl Acad. Sci. USA     639                     20%

Source: The Plan S footprint: Implications for the scholarly publishing landscape (Institute for Scientific Information, 2019)
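Reading the table at face value, the percentages can be turned into rough absolute counts of papers acknowledging a Plan S funder. This is my own back-of-envelope arithmetic from the listed figures, not numbers from the Clarivate report:

```python
# Rough absolute counts implied by the table above (papers published in 2017).
plan_s_footprint = {
    "Nature":                    (290, 0.35),
    "Science":                   (235, 0.31),
    "Proc. Natl Acad. Sci. USA": (639, 0.20),
}

for journal, (total, share) in plan_s_footprint.items():
    # e.g. Nature: roughly a third of its 2017 research papers
    print(f"{journal}: ~{round(total * share)} of {total} papers")
```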


Source: https://www.nature.com/articles/d41586-020-01066-5

Opening ASBMB publications freely to all


Lila M. Gierasch, Editor-in-Chief, Journal of Biological Chemistry

Nicholas O. Davidson

Kerry-Anne Rye, Editors-in-Chief, Journal of Lipid Research and 

Alma L. Burlingame, Editor-in-Chief, Molecular and Cellular Proteomics


We are extremely excited to announce on behalf of the American Society for Biochemistry and Molecular Biology (ASBMB) that the Journal of Biological Chemistry (JBC), Molecular & Cellular Proteomics (MCP), and the Journal of Lipid Research (JLR) will be published as fully open-access journals beginning in January 2021. This is a landmark decision that will have huge impact for readers and authors. As many of you know, many researchers have called for journals to become open access to facilitate scientific progress, and many funding agencies across the globe are either already requiring or considering a requirement that all scientific publications based on research they support be published in open-access journals. The ASBMB journals have long supported open access, making the accepted author versions of manuscripts immediately and permanently available, allowing authors to opt in to the immediate open publication of the final version of their paper, and endorsing the goals of the larger open-access movement (1). However, we are no longer satisfied with these measures. To live up to our goals as a scientific society, we want to freely distribute the scientific advances published in JBC, MCP, and JLR as widely and quickly as possible to support the scientific community. How better can we facilitate the dissemination of new information than to make our scientific content freely open to all?

For ASBMB journals and others who have contemplated or made the transition to publishing all content open access, achieving this milestone generally requires new financial mechanisms. In the case of the ASBMB journals, the transition to open access is being made possible by a new partnership with Elsevier, whose established capabilities and economies of scale make the costs associated with open-access publication manageable for the ASBMB (2). However, we want to be clear: The ethos of ASBMB journals will not change as a consequence of this new alliance. The journals remain society journals: The journals are owned by the society, and all scientific oversight for the journals will remain with ASBMB and its chosen editors. Peer review will continue to be done by scientists reviewing the work of scientists, carried out by editorial board members and external referees on behalf of the ASBMB journal leadership. There will be no intervention in this process by the publisher.

Although we will be saying “goodbye” to many years of self-publishing (115 in the case of JBC), we are certain that we are taking this big step for all the right reasons. The goal for JBC, MCP, and JLR has always been and will remain to help scientists advance their work by rapidly and effectively disseminating their results to their colleagues and facilitating the discovery of new findings (1, 3), and open access is only one of many innovations and improvements in science publishing that could help the ASBMB journals achieve this goal. We have been held back from fully exploring these options because of the challenges of “keeping the trains running” with self-publication. In addition to allowing ASBMB to offer all the content in its journals to all readers freely and without barriers, the new partnership with Elsevier opens many doors for ASBMB publications, from new technology for manuscript handling and production, to facilitating reader discovery of content, to deploying powerful analytics to link content within and across publications, to new opportunities to improve our peer review mechanisms. We have all dreamed of implementing these innovations and enhancements (4, 5) but have not had the resources or infrastructure needed.

A critical aspect of moving to open access is how this decision impacts the cost to authors. Like most publishers that have made this transition, we have been extremely worried that achieving open-access publishing would place too big a financial burden on our authors. We are pleased to report the article-processing charges (APCs) to publish in ASBMB journals will be on the low end within the range of open-access fees: $2,000 for members and $2,500 for nonmembers. While slightly higher than the cost an author incurs now if the open-access option is not chosen, these APCs are lower than the current charges for open access on our existing platform.


1. Gierasch, L. M., Davidson, N. O., Rye, K.-A., and Burlingame, A. L. (2019) For the sake of science. J. Biol. Chem. 294, 2976

2. Gierasch, L. M. (2017) On the costs of scientific publishing. J. Biol. Chem. 292, 16395–16396

3. Gierasch, L. M. (2020) Faster publication advances your science: The three R’s. J. Biol. Chem. 295, 672

4. Gierasch, L. M. (2017) JBC is on a mission to facilitate scientific discovery. J. Biol. Chem. 292, 6853–6854

5. Gierasch, L. M. (2017) JBC’s New Year’s resolutions: Check them off! J. Biol. Chem. 292, 21705–21706


Source: https://www.jbc.org/content/295/22/7814.short?ssource=mfr&rss=1


Open access publishing under Plan S to start in 2021


BMJ 2019;365:l2382. doi: https://doi.org/10.1136/bmj.l2382 (Published 31 May 2019)

From 2021, all research funded by public or private grants should be published in open access journals, according to a group of funding agencies called cOAlition S.

The plan is the final version of a draft that was put to public consultation last year and attracted 344 responses from institutions, almost half of them from the UK. The responses have been considered and some changes made to the new system, called Plan S, a briefing at the Science Media Centre in London was told on 29 May.

The main change has been to delay implementation for a year, to 1 January 2021, to allow more time for those involved—researchers, funders, institutions, publishers, and repositories—to make the necessary changes, said John-Arne Røttingen, chief executive of the Research Council of Norway.

“All research contracts signed after that date should include the obligation to publish in an open access journal,” he said. …

(Please note, in a huge bit of irony, this article is NOT open access and sits behind a paywall. Yes, an article about an announcement to go open access is not itself open access.)

Source: https://www.bmj.com/content/365/bmj.l2382.full



Plan S

From Wikipedia, the free encyclopedia


Plan S is an initiative for open-access science publishing launched in 2018[1][2] by “cOAlition S”,[3] a consortium of national research agencies and funders from twelve European countries. The plan requires scientists and researchers who benefit from state-funded research organisations and institutions to publish their work in open repositories or in journals that are available to all by 2021.[4] The “S” stands for “shock”.[5]

Principles of the plan

The plan is structured around ten principles.[3] The key principle states that by 2021, research funded by public or private grants must be published in open-access journals or platforms, or made immediately available in open access repositories without an embargo. The ten principles are:

  1. authors should retain copyright on their publications, which must be published under an open license such as Creative Commons;
  2. the members of the coalition should establish robust criteria and requirements for compliant open access journals and platforms;
  3. they should also provide incentives for the creation of compliant open access journals and platforms if they do not yet exist;
  4. publication fees should be covered by the funders or universities, not individual researchers;
  5. such publication fees should be standardized and capped;
  6. universities, research organizations, and libraries should align their policies and strategies;
  7. for books and monographs, the timeline may be extended beyond 2021;
  8. open archives and repositories are acknowledged for their importance;
  9. hybrid open-access journals are not compliant with the key principle;
  10. members of the coalition should monitor and sanction non-compliance.



Other articles on Open Access on this Open Access Journal Include:

MIT, guided by open access principles, ends Elsevier negotiations, an act followed by other University Systems in the US and in Europe

Open Access e-Scientific Publishing: Elected among 2018 Nature’s 10 Top Influencers – ROBERT-JAN SMITS: A bureaucrat launched a drive to transform science publishing

Electronic Scientific AGORA: Comment Exchanges by Global Scientists on Articles published in the Open Access Journal @pharmaceuticalintelligence.com – Four Case Studies

Mozilla Science Lab Promotes Data Reproduction Through Open Access: Report from 9/10/2015 Online Meeting

Elsevier’s Mendeley and Academia.edu – How We Distribute Scientific Research: A Case in Advocacy for Open Access Journals

The Fatal Self Distraction of the Academic Publishing Industry: The Solution of the Open Access Online Scientific Journals

PeerJ Model for Open Access Scientific Journal

“Open Access Publishing” is becoming the mainstream model: “Academic Publishing” has changed Irrevocably

Open-Access Publishing in Genomics

Read Full Post »

Recent Grim COVID-19 Statistics in U.S. and Explanation from Dr. John Campbell: Why We Need to be More Proactive

Reporter: Stephen J. Williams, Ph.D.

In case you have not been following the excellent daily YouTube sessions on COVID-19 by Dr. John Campbell I am posting his latest video on how grim the statistics have become and the importance of using proactive measures (like consistent use of facial masks, proper social distancing) instead of relying on reactive measures (e.g. lockdowns after infection spikes).  In addition, below the video are some notes from his presentation and some links to sites discussed within the video.


Notes from the video:

  • approaching 5 million confirmed cases in the US, though this is probably an underestimate
  • 160,000 deaths as of 8/08/2020

From the University of Washington Institute for Health Metrics and Evaluation in Seattle WA

  • 295,000 US COVID-19 related deaths estimated by December 1, 2020
  • however if 95% of people in US consistently and properly wear masks could save 66,000 lives
  • however, this still leaves a projected 228,271 total deaths, which is a depressing statistic
  • Dr. John Campbell agrees with Dr. Christopher Murray, director of the Institute for Health Metrics, that “people’s inconsistent use of these measures (face masks, social distancing) is a serious problem”
  • States with increasing transmission (Colorado, Idaho, Kansas, Kentucky, Mississippi, Missouri, Ohio, Oklahoma, Oregon, and Virginia) are advised to lock down when the death rate reaches 8 deaths per million population; however, it seems we should also be focusing on population densities rather than geographic states
  • Dr. Campbell and Dr. Murray stress proactive measures over reactive ones like lockdowns
  • if mask usage were to increase to 95%, reimposition of shutdowns could be delayed 6 to 8 weeks


New IHME COVID-19 Forecasts See Nearly 300,000 Deaths by December 1

SEATTLE (August 6, 2020) – America’s COVID-19 death toll is expected to reach nearly 300,000 by December 1; however, consistent mask-wearing beginning today could save about 70,000 lives, according to new data from the Institute for Health Metrics and Evaluation (IHME) at the University of Washington’s School of Medicine.

The US forecast totals 295,011 deaths by December. As of today, when, thus far, 158,000 have died, IHME is projecting approximately 137,000 more deaths. However, starting today, if 95% of the people in the US were to wear masks when leaving their homes, that total number would decrease to 228,271 deaths, a drop of 49%. And more than 66,000 lives would be saved.

Masks and other protective measures against transmission of the virus are essential to staying COVID-free, but people’s inconsistent use of those measures is a serious problem, said IHME Director Dr. Christopher Murray.

“We’re seeing a rollercoaster in the United States,” Murray said. “It appears that people are wearing masks and socially distancing more frequently as infections increase, then after a while as infections drop, people let their guard down and stop taking these measures to protect themselves and others – which, of course, leads to more infections. And the potentially deadly cycle starts over again.”

Murray noted that there appear to be fewer transmissions of the virus in Arizona, California, Florida, and Texas, but deaths are rising and will continue to rise for the next week or two. The drop in infections appears to be driven by the combination of local mandates for mask use, bar and restaurant closures, and more responsible behavior by the public.

“The public’s behavior had a direct correlation to the transmission of the virus and, in turn, the numbers of deaths,” Murray said. “Such efforts to act more cautiously and responsibly will be an important aspect of COVID-19 forecasting and the up-and-down patterns in individual states throughout the coming months and into next year.”

Murray said that based on cases, hospitalizations, and deaths, several states are seeing increases in the transmission of COVID-19, including Colorado, Idaho, Kansas, Kentucky, Mississippi, Missouri, Ohio, Oklahoma, Oregon, and Virginia.

“These states may experience increasing cases for several weeks and then may see a response toward more responsible behavior,” Murray said.

In addition, since July 15, several states have added mask mandates. IHME’s statistical analysis suggests that mandates with no penalties increase mask wearing by 8 percentage points. But mandates with penalties increase mask wearing by 15 percentage points.

“These efforts, along with media coverage and public information efforts by state and local health agencies and others, have led to an increase in the US rate of mask wearing by about 5 percentage points since mid-July,” Murray said. Mask-wearing increases have been larger in states with larger epidemics, he said.

IHME’s model assumes that states will reimpose a series of mandates, including non-essential business closures and stay-at-home orders, when the daily death rate reaches 8 per million. This threshold is based on data regarding when states and/or communities imposed mandates in March and April, and implies that many states will have to reimpose mandates.
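The 8-deaths-per-million trigger described above can be sketched as a simple check. This is an illustrative sketch only, not IHME’s actual model code; the function name and the example state figures are hypothetical.

```python
# Illustrative check of the mandate-reimposition threshold assumed by the
# IHME model: mandates return when the daily death rate reaches 8 per million.
THRESHOLD_PER_MILLION = 8.0

def mandate_trigger(daily_deaths: float, population: int) -> bool:
    """Return True when the daily death rate reaches the threshold."""
    rate_per_million = daily_deaths / population * 1_000_000
    return rate_per_million >= THRESHOLD_PER_MILLION

# Hypothetical example: a state of 7 million residents reporting 60 deaths/day
# has a rate of about 8.6 per million, so the model would assume mandates return.
print(mandate_trigger(60, 7_000_000))
```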

As a result, the model suggests which states will need to reimpose mandates and when:

  • August – Arizona, Florida, Mississippi, and South Carolina
  • September – Georgia and Texas
  • October – Colorado, Kansas, Louisiana, Missouri, Nevada, North Carolina, and Oregon
  • November – Alabama, Arkansas, California, Iowa, New Mexico, Oklahoma, Utah, Washington, and Wisconsin

However, if mask use is increased to 95%, the re-imposition of stricter mandates could be delayed 6 to 8 weeks on average.

Source: http://www.healthdata.org/news-release/new-ihme-covid-19-forecasts-see-nearly-300000-deaths-december-1
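A quick arithmetic check, using only numbers quoted in the release above, shows how its figures reconcile: the “drop of 49%” refers to the projected *additional* deaths after the release date, not to the total.

```python
# Reconciling the IHME figures quoted in the press release.
baseline_total = 295_011   # projected deaths by Dec 1 without universal masking
with_masks = 228_271       # projected deaths by Dec 1 with 95% mask use
deaths_so_far = 158_000    # deaths as of the release date (Aug 6, 2020)

lives_saved = baseline_total - with_masks             # 66,740 -> "more than 66,000"
additional_baseline = baseline_total - deaths_so_far  # ~137,000 more deaths projected
additional_masks = with_masks - deaths_so_far         # ~70,000 more deaths with masks
drop = 1 - additional_masks / additional_baseline     # fraction of additional deaths avoided

print(lives_saved, round(drop * 100))  # 66740 49
```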


