Funding, Deals & Partnerships: BIOLOGICS & MEDICAL DEVICES; BioMed e-Series; Medicine and Life Sciences Scientific Journal – http://PharmaceuticalIntelligence.com
Each of these posts was on the importance of scientific curation of findings within the realm of social media and Web 2.0, a sub-environment known throughout the scientific communities as Science 2.0, in which expert networks collaborate to produce a massive new corpus of knowledge by sharing their views and insights on peer-reviewed scientific findings. Through this new media, the process of curation would itself generate new ideas and new directions for research and discovery.
The platform looked something like the image below:
This system lay above the platform of the original Science 1.0, made up of all the scientific journals, books, and traditional literature:
In the old Science 1.0 format, scientific dissemination took the form of hard-print journals, and library subscriptions were mandatory (and eventually expensive). Open Access has tried to ameliorate the expense problem.
Previous image source: PeerJ.com
To index the massive and voluminous research and papers beyond the old Dewey Decimal system, a process of curation was mandatory. Dissemination through the new social media was a natural fit, but the cost had to be spread out among numerous players. For journals facing high subscription costs, the only way to use this new media as an outlet was to become Open Access, a movement first sparked by journals like PLOS and PeerJ and then begrudgingly adopted throughout the landscape. But with any movement or new adoption one gets the Good, the Bad, and the Ugly (as described in the Clive Thompson article cited above). The bad sides of Open Access journals were:
costs are still assumed by the individual researcher, not by the journals
the rise of numerous predatory journals
Even PeerJ, in a column celebrating a year’s worth of Open Access success stories, lamented the key issues still facing Open Access in practice, which included the cost and the rise of predatory journals.
In essence, Open Access and Science 2.0 sprung full force BEFORE anyone thought of a way to defray the costs
Can Web 3.0 Finally Offer a Way to Right the Issues Facing High Costs of Scientific Publishing?
Web 1.0 and Web 2.0 refer to eras in the history of the Internet as it evolved through various technologies and formats. Web 1.0 refers roughly to the period from 1991 to 2004, where most websites were static webpages, and the vast majority of users were consumers, not producers, of content.[6][7] Web 2.0 is based around the idea of “the web as platform”,[8] and centers on user-created content uploaded to social-networking services, blogs, and wikis, among other services.[9] Web 2.0 is generally considered to have begun around 2004, and continues to the current day.[8][10][4]
The term “Web3”, specifically “Web 3.0”, was coined by Ethereum co-founder Gavin Wood in 2014.[1] In 2020 and 2021, the idea of Web3 gained popularity. Particular interest spiked towards the end of 2021, largely due to interest from cryptocurrency enthusiasts and investments from high-profile technologists and companies.[4][5] Executives from venture capital firm Andreessen Horowitz travelled to Washington, D.C. in October 2021 to lobby for the idea as a potential solution to questions about Internet regulation with which policymakers have been grappling.[11]
Web3 is distinct from Tim Berners-Lee‘s 1999 concept for a semantic web, which has also been called “Web 3.0”.[12] Some writers referring to the decentralized concept usually known as “Web3” have used the terminology “Web 3.0”, leading to some confusion between the two concepts.[2][3] Furthermore, some visions of Web3 also incorporate ideas relating to the semantic web.[13][14]
Web3 revolves around the idea of decentralization, which proponents often contrast with Web 2.0, wherein large amounts of the web’s data and content are centralized in the fairly small group of companies often referred to as Big Tech.[4]
Specific visions for Web3 differ, but all are heavily based in blockchain technologies, such as various cryptocurrencies and non-fungible tokens (NFTs).[4] Bloomberg described Web3 as an idea that “would build financial assets, in the form of tokens, into the inner workings of almost anything you do online”.[15] Some visions are based around the concepts of decentralized autonomous organizations (DAOs).[16] Decentralized finance (DeFi) is another key concept; in it, users exchange currency without bank or government involvement.[4] Self-sovereign identity allows users to identify themselves without relying on an authentication system such as OAuth, in which a trusted party has to be reached in order to assess identity.[17]
Technologists and journalists have described Web3 as a possible solution to concerns about the over-centralization of the web in a few “Big Tech” companies.[4][11] Some have expressed the notion that Web3 could improve data security, scalability, and privacy beyond what is currently possible with Web 2.0 platforms.[14] Bloomberg states that sceptics say the idea “is a long way from proving its use beyond niche applications, many of them tools aimed at crypto traders”.[15] The New York Times reported that several investors are betting $27 billion that Web3 “is the future of the internet”.[18][19]
In late 2021, some companies, including Reddit and Discord, explored incorporating Web3 technologies into their platforms.[4][20] Discord’s CEO, Jason Citron, tweeted a screenshot suggesting the company might be exploring integrating Web3 into its platform. This led some users to cancel their paid subscriptions over their distaste for NFTs, and others expressed concerns that such a change might increase the amount of scams and spam they had already experienced on crypto-related Discord servers.[20] Two days later, after heavy user backlash, Citron tweeted that the company had no plans to integrate Web3 technologies into its platform, and said that the screenshot came from an internal-only concept developed in a company-wide hackathon.[21]
Some legal scholars quoted by The Conversation have expressed concerns over the difficulty of regulating a decentralized web, which they reported might make it more difficult to prevent cybercrime, online harassment, hate speech, and the dissemination of child abuse images.[13] But the news website also states that the decentralized web “represents the cyber-libertarian views and hopes of the past that the internet can empower ordinary people by breaking down existing power structures.” Some other critics of Web3 see the concept as part of a cryptocurrency bubble, or as an extension of blockchain-based trends that they see as overhyped or harmful, particularly NFTs.[20] Some critics have raised concerns about the environmental impact of cryptocurrencies and NFTs. Others have expressed beliefs that Web3 and the associated technologies are a pyramid scheme.[5]
Kevin Werbach, author of The Blockchain and the New Architecture of Trust,[22] said that “many so-called ‘web3’ solutions are not as decentralized as they seem, while others have yet to show they are scalable, secure and accessible enough for the mass market”, adding that this “may change, but it’s not a given that all these limitations will be overcome”.[23]
David Gerard, author of Attack of the 50 Foot Blockchain,[24] told The Register that “web3 is a marketing buzzword with no technical meaning. It’s a melange of cryptocurrencies, smart contracts with nigh-magical abilities, and NFTs just because they think they can sell some monkeys to morons”.[25]
Below is an article from MarketWatch.com Distributed Ledger series about the different forms and cryptocurrencies involved
by Frances Yue, Editor of Distributed Ledger, Marketwatch.com
Clayton Gardner, co-CEO of crypto investment management firm Titan, told Distributed Ledger that as crypto embraces broader adoption, he expects more institutions in 2022 to bypass bitcoin and invest in other blockchains, such as Ethereum, Avalanche, and Terra, which all boast smart-contract features.
Bitcoin traditionally did not support complex smart contracts, which are computer programs stored on blockchains, though a major upgrade in November might have unlocked more potential.
“Bitcoin was originally seen as a macro speculative asset by many funds and for many it still is,” Gardner said. “If anything solidifies its use case, it’s a store of value. It’s not really used as originally intended, perhaps from a medium of exchange perspective.”
Institutions looking for blockchains that can “produce utility and some intrinsic value over time” might consider some of the other smart-contract blockchains that have been driving the growth of decentralized finance and Web 3.0, the third generation of the Internet, according to Gardner.
“Bitcoin is still one of the most secure blockchains, but I think layer-one, layer-two blockchains beyond Bitcoin, will handle the majority of transactions and activities from NFT (nonfungible tokens) to DeFi,“ Gardner said. “So I think institutions see that and insofar as they want to put capital to work in the coming months, I think that could be where they just pump the capital.”
Decentralized social media?
The price of Decentralized Social, or DeSo, a cryptocurrency powering a blockchain that supports decentralized social media applications, surged roughly 74% to about $164 from $94 after DeSo was listed on Coinbase Pro on Monday, before it fell back to about $95, according to CoinGecko.
In the eyes of Nader Al-Naji, head of the DeSo foundation, decentralized social media has the potential to be “a lot bigger” than decentralized finance.
“Today there are only a few companies that control most of what we see online,” Al-Naji told Distributed Ledger in an interview. But DeSo is “creating a lot of new ways for creators to make money,” Al-Naji said.
“If you find a creator when they’re small, or an influencer, you can invest in that, and then if they become bigger and more popular, you make money and they get capital early on to produce their creative work,” according to Al-Naji.
BitClout, the first application created by Al-Naji and his team on the DeSo blockchain, had initially drawn controversy, as some people found they had profiles on the platform without their consent while the application’s users were buying and selling tokens representing their identities. Such tokens are called “creator coins.”
Al-Naji responded to the controversy by noting that DeSo now supports more than 200 social-media applications, including BitClout. “I think that if you don’t like those features, you now have the freedom to use any app you want. Some apps don’t have that functionality at all.”
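The creator-coin mechanic described above is easiest to see with a small sketch. Below is a generic bonding-curve model in Python, in which the minting price rises with circulating supply, so early backers pay less per coin than later ones. The quadratic curve, the constant k, and the CreatorCoin class are all invented for illustration; this is not DeSo’s actual implementation.

```python
# Toy bonding-curve sketch of a "creator coin". Generic illustration only:
# the curve shape (p = k * s^2) and the constant k are assumptions,
# not DeSo's actual parameters.

class CreatorCoin:
    def __init__(self, k=0.003):
        self.k = k          # curve steepness (assumed constant)
        self.supply = 0.0   # coins currently in circulation

    def spot_price(self):
        """Instantaneous price grows quadratically with supply."""
        return self.k * self.supply ** 2

    def buy(self, coins):
        """Cost to mint `coins` new coins: the integral of k*s^2
        from the current supply to the new supply."""
        s0, s1 = self.supply, self.supply + coins
        cost = self.k * (s1 ** 3 - s0 ** 3) / 3
        self.supply = s1
        return cost

coin = CreatorCoin()
early = coin.buy(10)   # cost of the first 10 coins
late = coin.buy(10)    # cost of the next 10 coins
print(early, late)     # later buyers pay more for the same number of coins
```

Under this assumed curve, investing early and selling after the creator becomes popular is exactly the dynamic Al-Naji describes: the later the purchase, the higher the price per coin.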
But before I get to the “selling monkeys to morons” quote,
I want to talk about
THE GOOD, THE BAD, AND THE UGLY
THE GOOD
My foray into Science 2.0, and my pondering of what a movement into Science 3.0 would entail, led me to an article by Dr. Vladimir Teif, who studies gene regulation and the nucleosome and has created a worldwide group of scientists who discuss chromatin and gene regulation in a journal-club format.
Fragile Nucleosome is an international community of scientists interested in chromatin and gene regulation. Fragile Nucleosome is active in several spaces: one is the Discord server where several hundred scientists chat informally on scientific matters. You can join the Fragile Nucleosome Discord server. Another activity of the group is the organization of weekly virtual seminars on Zoom. Our webinars are usually conducted on Wednesdays 9am Pacific time (5pm UK, 6pm Central Europe). Most previous seminars have been recorded and can be viewed at our YouTube channel. The schedule of upcoming webinars is shown below. Our third activity is the organization of weekly journal clubs detailed at a separate page (Fragile Nucleosome Journal Club).
Dr. Teif had coined this concept of Science 3.0 back in 2009. As he mentioned:
So essentially I first introduced this word Science 3.0 in 2009, and since then we did a lot to implement this in practice. The Twitter account @generegulation is also one of examples
This is curious, as we still have an ill-defined concept of what #science3_0 would look like, but it is a good read nonetheless.
The concept of Science 2.0 was introduced almost a decade ago to describe the new generation of online-based tools for researchers allowing easier data sharing, collaboration and publishing. Although technically sound, the concept still does not work as expected. Here we provide a systematic line of arguments to modify the concept of Science 2.0, making it more consistent with the spirit and traditions of science and Internet. Our first correction to the Science 2.0 paradigm concerns the open-access publication models charging fees to the authors. As discussed elsewhere, we show that the monopoly of such publishing models increases biases and inequalities in the representation of scientific ideas based on the author’s income. Our second correction concerns post-publication comments online, which are all essentially non-anonymous in the current Science 2.0 paradigm. We conclude that scientific post-publication discussions require special anonymization systems. We further analyze the reasons of the failure of the current post-publication peer-review models and suggest what needs to be changed in Science 3.0 to convert Internet into a large journal club. [bold face added]
In this paper it is important to note the transition from Science 1.0, which involved hard-copy journal publications usually accessible only in libraries, to a more digital 2.0 format in which data, papers, and ideas could be easily shared among networks of scientists.
As Dr. Teif states, the term “Science 2.0” had been coined back in 2009, and several influential journals including Science, Nature, and Scientific American endorsed the term and suggested scientists move their discussions online. However, even though thousands of scientists are now on Science 2.0 platforms, Dr. Teif notes that membership in many Science 2.0 networking groups, such as those on LinkedIn and ResearchGate, appears to have saturated over the years, with few new members in recent times.
The consensus on Science 2.0 is that:
networking is good because it multiplies the efforts of many scientists, including experts, and adds to scientific discourse unavailable in a 1.0 format
online data sharing is good because it assists the process of discovery (evident in preprint servers, bio-curated databases, and GitHub projects)
open-access publishing is beneficial because it gives free access to professional articles, and open access may become the only publishing format in the future (although this is highly debatable, as many journals are holding on to a “hybrid open access” format which is not truly open access)
sharing of unfinished works, critiques, and opinions is good because it creates visibility for scientists, who can receive credit for their expert commentary
Dr. Teif articulates a few concerns about Science 3.0:
A. Science 3.0 Still Needs Peer Review
Peer review of scientific findings will always be imperative in the dissemination of well-done, properly controlled scientific discovery. As Science 2.0 relies on an army of scientific volunteers, the peer review process likewise involves an army of scientific experts who give their time to safeguard the credibility of science by ensuring that findings are reliable and data are presented fairly and properly. It has been very evident in this time of pandemic, with the rapid increase in the volume of preprint-server papers on SARS-CoV-2, that peer review is critical: many of these preprint papers were later retracted or failed a stringent peer review process.
Many journals of the 1.0 format do not generally reward their peer reviewers beyond the credit researchers can claim on their curricula vitae. Some journals, like the MDPI journal family, do issue peer-reviewer credits which can be used to defray the high publication costs of open access (one area many scientists lament about the open access movement, where the burden of publication cost lies on the individual researcher).
One issue highlighted is the potential for INFORMATION NOISE arising from the ability to self-publish on Science 2.0 platforms.
The NEW BREED was born in 4/2012
An ongoing effort on this platform, https://pharmaceuticalintelligence.com/, is to establish a scientific methodology for curating scientific findings, where one of the goals is to quell the information noise that can result from the massive amounts of new informatics and data appearing in the biomedical literature.
B. The open access publishing model leads to biases and inequalities in idea selection
The open access publishing model has been compared to the model the advertising industry applied years ago, when publishers considered journal articles “advertisements.” However, NOTHING could be further from the truth. In advertising, the companies, not the consumers, pay for the ads. In scientific open access publishing, although the consumer (libraries) does not pay for access, the burden of BOTH the cost of doing the research and the cost of publishing the findings is now put on the individual researcher. These publishing costs can be as high as $4,000 USD per article, which is very high for most researchers. Many universities will reimburse some open access publishing fees, but the cost is then simply shifted to the institution and the individual researcher, limiting the savings to either.
However, this sets up a situation in which young researchers, who in general are not well funded, struggle with publication costs; the result is a biased, inequitable system that rewards well-funded senior researchers and bigger academic labs.
C. Post publication comments and discussion require online hubs and anonymization systems
Many recent publications stress the importance of a post-publication review process or system, yet although many big journals like Nature and Science have their own blogs and commentary systems, these are rarely used; there is roughly one comment per 100 views of a journal article on these systems. In traditional journals, editors are the referees of comments and have the ability to censor comments or discourse. The article laments that commenting on journals should be as easy as commenting on other social sites, yet scientists are still not offering their comments or opinions.
In my personal experience, a well-written commentary goes through editors, who often reject a comment as if they were rejecting an original research article. Thus many well-researched and well-referenced replies, I believe, never see the light of day if they are not in the editor’s interest.
Anonymity, therefore, is greatly needed, and its absence may be the hindrance that keeps scientific discourse so limited on these types of Science 2.0 platforms. Platforms that have had success in this arena include anonymous ones like Wikipedia and certain closed LinkedIn professional groups, while more open platforms like Google Knowledge have been a failure.
A great example on this platform was a very spirited LinkedIn conversation on genomics, tumor heterogeneity, and personalized medicine, which we curated from the LinkedIn discussion (unfortunately LinkedIn has since closed many groups), seen here:
It was surprising that, over a single weekend, so many scientists from all over the world contributed to a great discussion on the topic of tumor heterogeneity.
But many feel such discussions would be safer if they were anonymized. Then, however, researchers get no credit for their opinions or commentaries.
A major problem is how to turn these intangible contributions into tangible assets, which would both promote the discourse and reward those who take the time to improve scientific discussion.
This is where something like NFTs or a decentralized network may become important!
Below is an online Twitter Spaces discussion we had with some young scientists who are just starting out, giving their thoughts on what SCIENCE 3.0 and the future of the dissemination of science might look like in light of this new metaverse. However, we have to define each of these terms in light of science, and not treat the Internet as merely a decentralized marketplace for commonly held goods.
This online discussion was tweeted out and received a fair number of impressions (60) as well as interactors (50).
To introduce this discussion, here is some starting material to frame the discourse.
The Internet and the Web are rapidly adopting a new “Web 3.0” format, with decentralized networks, enhanced virtual experiences, and greater interconnection between people. Here we start the discussion of what the move to a Science 3.0 format will look like, from Science 2.0, where the dissemination of scientific findings was revolutionized by piggybacking on Web 2.0 and social media. What will it involve, and what paradigms will be turned upside down?
Old Science 1.0 is still the backbone of all scientific discourse, built on the massive amount of experimental and review literature. That literature, however, was in analog format, and we have since moved to a more accessible, digital, open access format for both publications and raw data. Just as there was a structure for 1.0, like the Dewey Decimal System and indexing, 2.0 made science more accessible and easier to search thanks to the newer digital formats. Yet both needed an organizing structure: for 1.0 it was the scientific method of data and literature organization, with libraries as the indexers; in 2.0 it relied on an army of mostly volunteers who had little incentive to co-curate and organize the findings and the massive literature.
Each version of Science has its caveats: benefits as well as deficiencies. This curation and the ongoing discussion are meant to solidify the basis for the new format, along with definitions and a determination of its structure.
We had high hopes for Science 2.0, in particular the smashing of data and knowledge silos. However, the digital age along with 2.0 platforms seemed somehow to exacerbate this. We are still critically short on analysis!
We really need people and organizations to get on top of this new Web 3.0, or metaverse, so that similar issues do not get in the way: namely, we need to create an organizing structure (perhaps as knowledgebases), we need INCENTIVIZED co-curators, and we need ANALYSIS… lots of it!!
Are these new technologies the cure or is it just another headache?
There were a few overarching themes, whether one was talking about AI, NLP, virtual reality, or other new technologies with respect to this new metaverse; a consensus of Decentralized, Incentivized, and Integrated was commonly expressed among the attendees.
The following are some slides from representative presentations.
Other articles of note on this topic in this Open Access Scientific Journal include:
There is growing interest in the field of glycobiology given the fact that epitopes with physiological and pathological relevance have glyco moieties. We believe that another “omics” revolution is on the horizon—the study of the glyco modifications on the surface of cells and their potential as biomarkers and therapeutic targets in many disease classes. Not much industry tracking of this field has taken place. Thus, we sought to map this landscape by examining the entire ensemble of academic publications in this space and teasing apart the trends operative in this field from a qualitative and quantitative perspective. We believe that this methodology of en masse capture and publication and annotation provides an effective approach to evaluate this early-stage field.
Identification and Growth of Glycobiology Publications
For this article, we identified 7,000 publications in the broader glycobiology space and analyzed them in detail. It is important to frame glycobiology in the context of genomics and proteomics as a means to assess the scale of the field. Figure 1 presents the relative sizes of these fields as assessed by publications from 1975 to 2015.
Note that the relative scale of genomics versus proteomics and glycobiology/glycomics in this graph strongly suggests that glycobiology is a nascent space, and thus a driver for us to map its landscape today and as it evolves over the coming years.
Figure 2. (A) Segmentation of the glycobiology landscape. (B) Glycobiology versus glycomics publication growth.
To examine closely the various components of the glycobiology space, we segmented the publications database, presented in Figure 2A. Note the relative sizes and growth rates (slopes) of the various segments.
Clearly, glycoconjugates currently are the majority of this space and account for the bulk of the publications. Glycobiology and glycomics are small but expanding and therefore can be characterized as “nascent market segments.” These two spaces are characterized in more detail in Figure 2B, which presents their publication growth rates.
Note the very recent increased attention directed at these spaces and hence our drive to initiate industry coverage of these spaces. Figure 2B presents the overall growth and timeline of expansion of these fields—especially glycobiology—but it provides no information about the qualitative nature of these fields.
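The growth rates (slopes) referred to in Figure 2 can be estimated by an ordinary least-squares fit of yearly publication counts against publication year. A minimal Python sketch of that computation follows; the yearly counts in `glycomics_counts` are invented placeholders, not the authors’ actual database.

```python
# Estimate a segment's publication growth rate (papers per year) as the
# ordinary least-squares slope of yearly counts versus year.

def growth_slope(years, counts):
    """OLS slope of counts vs. years."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(counts) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, counts))
    var = sum((x - mean_x) ** 2 for x in years)
    return cov / var

# Hypothetical yearly counts for one segment (placeholder data):
glycomics_counts = {2010: 40, 2011: 55, 2012: 75, 2013: 90, 2014: 120, 2015: 150}
slope = growth_slope(list(glycomics_counts), list(glycomics_counts.values()))
print(round(slope, 1))  # papers added per year, on average
```

Comparing these slopes across segments is what distinguishes the mature glycoconjugates space from the faster-expanding nascent segments.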
Figure 2C. Word cloud based on titles of publications in the glycobiology and glycomics spaces.
To understand the focus of publications in this field, and indeed the nature of this field, we constructed a word cloud based on titles of the publications that comprise this space presented in Figure 2C.
There is a marked emphasis on terms such as oligosaccharides and an emphasis on cells (this is after all glycosylation on the surface of cells). Overall, a pictorial representation of the types and classes of modifications that comprise this field emerge in this word cloud, demonstrating the expansion of the glycobiology and to a lesser extent the glycomics spaces as well as the character of these nascent but expanding spaces.
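A word cloud like Figure 2C boils down to term frequencies computed over the publication titles: each term’s count sets its font size in the rendered cloud. Below is a minimal Python sketch of that counting step. The titles and the stop-word list are invented placeholders, not the actual 7,000-publication dataset.

```python
# Count term frequencies across publication titles; in a word cloud these
# counts determine the relative font sizes of the terms.
import re
from collections import Counter

STOPWORDS = {"of", "the", "in", "and", "a", "an", "for", "on", "by"}

def term_frequencies(titles):
    counts = Counter()
    for title in titles:
        for word in re.findall(r"[a-z]+", title.lower()):
            if word not in STOPWORDS:
                counts[word] += 1
    return counts

# Placeholder titles standing in for the real publications database:
titles = [
    "Oligosaccharides on the surface of tumor cells",
    "Glycosylation of cell surface receptors",
    "Oligosaccharides and cell adhesion",
]
print(term_frequencies(titles).most_common(3))
```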
Characterization of the Glycobiology Space in Journals
Figure 3A. Breakout of publications in the glycobiology/glycomics fields.
Having framed the overall growth of the glycobiology field, we wanted to understand its structure and the classes of researchers as well as publications that comprise this field. To do this, we segmented the publications that constitute this field into the various journals in which glycobiology research is published. Figure 3A presents the breakout of publications by journal to illustrate the “scope” of this field.
The distribution of glycobiology publications across the various journals suggests a very concentrated marketplace that is very technically focused. The majority of the publications segregate into specialized journals on this topic, a pattern very indicative of a field in the very early stages of development—a truly nascent marketplace.
Figure 3B. Origin of publications in the glycobiology/glycomics fields.
We also sought to understand the “origin” of these publications—the breakout between academic- versus industry-derived journals. Figure 3B presents this breakout and shows that these publications are overwhelmingly (92.3%) derived from the academic sector. This is again a testimonial to the early nascent nature of this marketplace without significant engagement by the commercial sector and therefore is an important field to characterize and track from the ground up.
Select Biosciences, Inc. further analyzed the growth trajectory of the glycobiology papers in Figure 3C as a means to examine closely the publications trajectory. Although there appears to be some wobble along the way, overall the trajectory is upward, and of late it is expanding significantly.
In Summary
Figure 3C. Trajectory of the glycobiology space.
Glycobiology is the study of what coats living cells—glycans, or carbohydrates, and glycoconjugates. This is an important field of study with medical applications because it is known that tumor cells alter their glycosylation pattern, which may contribute to their metastatic potential as well as potential immune evasion.
At this point, glycobiology is largely basic research and thus it pales in comparison with the field of genomics. But in 10 years, we predict the study of glycobiology and glycomics will be ubiquitous and in the mainstream.
We started our analysis of this space because we’ve been focusing on many other classes of analytes, such as microRNAs, long non-coding RNAs, oncogenes, and tumor suppressor genes, whose potential as biomarkers is becoming established. Glycobiology, on the other hand, represents an entirely new space—a whole new category of modifications that could be analyzed for diagnostic potential and perhaps also for therapeutic targeting.
Today, glycobiology and glycomics are where genomics was at the start of the Human Genome Project. They represent a nascent space with full headroom for growth. Select Biosciences will continue to track this exciting field for research developments as well as the development of biomarkers based on glyco-epitopes.
Enal Razvi, Ph.D., conducted his doctoral work on viral immunology and, subsequent to receiving his Ph.D., went on to the Rockefeller University in New York to serve as Aaron Diamond Post-doctoral Fellow under Professor Ralph Steinman (Nobel Prize winner in 2011 for his discovery of dendritic cells in the early 1970s with Zanvil Cohn). Subsequently, Dr. Razvi completed his research fellowship at Harvard Medical School. For the last two decades Dr. Razvi has worked with small and large companies and consulted for more than 100 clients worldwide. He currently serves as Biotechnology Analyst and Managing Director of SelectBio U.S. He can be reached at enal@selectbio.us. Gary M. Oosta holds a Ph.D. in Biophysics from the Massachusetts Institute of Technology and a B.A. in Chemistry from Eastern Michigan University. He has 25 years of industrial research experience in various technology areas including medical diagnostics, thin-layer coating, bio-effects of electromagnetic radiation, and blood coagulation. Dr. Oosta has authored 20 technical publications and is an inventor on 77 patents worldwide. In addition, he has managed research groups that were responsible for many other patented innovations. Dr. Oosta has a long-standing interest in using patents and publications as strategic technology indicators for future technology selection and new product development.
Dr. Henry Bourne has trained graduate students and postdocs at UCSF for over 40 years. In his iBiology talk, he discusses the imminent need for change in graduate education. With time to degrees getting longer, the biomedical community needs to create experimental graduate programs to find more effective and low cost ways to train future scientists and run successful laboratories. If we don’t start looking for solutions, the future of the biomedical enterprise will grow increasingly unstable.
Henry Bourne is Professor Emeritus and former chair of the Department of Pharmacology at the University of California – San Francisco. His research focused on trimeric G-proteins, G-protein coupled receptors, and the cellular signals responsible for polarity and direction-finding of human leukocytes. He is the author of several books including a memoir, Ambition and Delight, and has written extensively about graduate training and biomedical workforce issues. Now Dr. Bourne’s research focuses on the organization and founding of US biomedical research in the early 20th century.
Alternative CRISPR discovered @MIT, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 2: CRISPR for Gene Editing and DNA Repair
Alternative CRISPR Discovered @MIT
Reporter & Curator: Larry H. Bernstein, MD, FCAP
New breakthrough! – A better alternative CRISPR system just identified
The CRISPR-Cas9 system has revolutionized the field of genome editing since its first applications were reported in 2012. A recent publication in Cell reported the identification of a different CRISPR system with the potential for even simpler and more precise genome editing. The newly identified CRISPR-Cpf1 system mediates robust DNA interference with features distinct from those of Cas9, and Cpf1 possesses several advantages over the currently used Cas9 system:
The Cpf1 system is simpler than the Cas9 system: it requires only a single RNA for its DNA-cutting enzymatic activity.
A Cpf1 cut leaves short, staggered overhangs on the exposed ends, allowing more efficient and precise genome engineering, whereas a Cas9 cut produces blunt ends that often acquire mutations when rejoined.
Cpf1 is smaller than Cas9, and thus easier to deliver into cells or tissues.
Cpf1 cuts far from its recognition site, leaving room for further editing if a mutation occurs at the cut site.
The Cpf1 complex recognizes very different PAM sequences from those of Cas9, adding flexibility in choosing target sites.
These properties of Cpf1, and its potential for more precise gene editing, expand the application scope of CRISPR from gene knock-outs and knock-ins and genomic deletions to gene therapy.
• Cpf1 is a CRISPR-associated two-component RNA-programmable DNA nuclease
• Targeted DNA is cleaved as a 5-nt staggered cut distal to a 5′ T-rich PAM
• Two Cpf1 orthologs exhibit robust nuclease activity in human cells
Summary
The microbial adaptive immune system CRISPR mediates defense against foreign genetic elements through two classes of RNA-guided nuclease effectors. Class 1 effectors utilize multi-protein complexes, whereas class 2 effectors rely on single-component effector proteins such as the well-characterized Cas9. Here, we report characterization of Cpf1, a putative class 2 CRISPR effector. We demonstrate that Cpf1 mediates robust DNA interference with features distinct from Cas9. Cpf1 is a single RNA-guided endonuclease lacking tracrRNA, and it utilizes a T-rich protospacer-adjacent motif. Moreover, Cpf1 cleaves DNA via a staggered DNA double-stranded break. Out of 16 Cpf1-family proteins, we identified two candidate enzymes from Acidaminococcus and Lachnospiraceae, with efficient genome-editing activity in human cells. Identifying this mechanism of interference broadens our understanding of CRISPR-Cas systems and advances their genome editing applications.
Almost all archaea and many bacteria achieve adaptive immunity through a diverse set of CRISPR-Cas (clustered regularly interspaced short palindromic repeats and CRISPR-associated proteins) systems, each of which consists of a combination of Cas effector proteins and CRISPR RNAs (crRNAs) (Makarova et al., 2011, Makarova et al., 2015). The defense activity of the CRISPR-Cas systems includes three stages: (1) adaptation, when a complex of Cas proteins excises a segment of the target DNA (known as a protospacer) and inserts it into the CRISPR array (where this sequence becomes a spacer); (2) expression and processing of the precursor CRISPR (pre-cr) RNA resulting in the formation of mature crRNAs; and (3) interference, when the effector module—either another Cas protein complex or a single large protein—is guided by a crRNA to recognize and cleave target DNA (or in some cases, RNA) (Horvath and Barrangou, 2010, Sorek et al., 2013, Barrangou and Marraffini, 2014). The adaptation stage is mediated by the complex of the Cas1 and Cas2 proteins, which are shared by all known CRISPR-Cas systems, and sometimes involves additional Cas proteins. Diversity is observed at the level of processing of the pre-crRNA to mature crRNA guides, proceeding via either a Cas6-related ribonuclease or a housekeeping RNase III that specifically cleaves double-stranded RNA hybrids of pre-crRNA and tracrRNA. Moreover, the effector modules differ substantially among the CRISPR-Cas systems (Makarova et al., 2011, Makarova et al., 2015, Charpentier et al., 2015). In the latest classification, the diverse CRISPR-Cas systems are divided into two classes according to the configuration of their effector modules: class 1 CRISPR systems utilize several Cas proteins and the crRNA to form an effector complex, whereas class 2 CRISPR systems employ a large single-component Cas protein in conjunction with crRNAs to mediate interference (Makarova et al., 2015).
Multiple class 1 CRISPR-Cas systems, which include the type I and type III systems, have been identified and functionally characterized in detail, revealing the complex architecture and dynamics of the effector complexes (Brouns et al., 2008, Marraffini and Sontheimer, 2008, Hale et al., 2009, Sinkunas et al., 2013, Jackson et al., 2014, Mulepati et al., 2014). Several class 2 CRISPR-Cas systems have also been identified and experimentally characterized, but they are all type II and employ homologous RNA-guided endonucleases of the Cas9 family as effectors (Barrangou et al., 2007, Garneau et al., 2010, Deltcheva et al., 2011, Sapranauskas et al., 2011, Jinek et al., 2012, Gasiunas et al., 2012). A second, putative class 2 CRISPR system, tentatively assigned to type V, has been recently identified in several bacterial genomes (http://www.jcvi.org/cgi-bin/tigrfams/HmmReportPage.cgi?acc=TIGR04330) (Schunder et al., 2013, Vestergaard et al., 2014, Makarova et al., 2015). The putative type V CRISPR-Cas systems contain a large, ∼1,300 amino acid protein called Cpf1 (CRISPR from Prevotella and Francisella 1). It remains unknown, however, whether Cpf1-containing CRISPR loci indeed represent functional CRISPR systems. Given the broad applications of Cas9 as a genome-engineering tool (Hsu et al., 2014, Jiang and Marraffini, 2015), we sought to explore the function of Cpf1-based putative CRISPR systems.
Here, we show that Cpf1-containing CRISPR-Cas loci of Francisella novicida U112 encode functional defense systems capable of mediating plasmid interference in bacterial cells guided by the CRISPR spacers. Unlike Cas9 systems, Cpf1-containing CRISPR systems have three distinguishing features. First, Cpf1-associated CRISPR arrays are processed into mature crRNAs without the requirement of an additional trans-activating crRNA (tracrRNA) (Deltcheva et al., 2011, Chylinski et al., 2013). Second, Cpf1-crRNA complexes efficiently cleave target DNA preceded by a short T-rich protospacer-adjacent motif (PAM), in contrast to the G-rich PAM following the target DNA in Cas9 systems. Third, Cpf1 introduces a staggered DNA double-stranded break with a 4- or 5-nt 5′ overhang.
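To make the blunt-versus-staggered distinction concrete, here is a toy Python sketch (invented sequence and cut coordinates, not the paper's data) contrasting a Cas9-style blunt cut with a Cpf1-style staggered cut that leaves a 5-nt overhang:

```python
def cut(top: str, cut_top: int, cut_bottom: int):
    """Sever the top strand after `cut_top` bases and the bottom strand
    after `cut_bottom` bases (both counted in top-strand coordinates)."""
    bottom = top.translate(str.maketrans("ACGT", "TGCA"))  # complementary strand
    left = (top[:cut_top], bottom[:cut_bottom])
    right = (top[cut_top:], bottom[cut_bottom:])
    return left, right

target = "TTTAGCCGATTACGGATCCA"  # invented 20-bp target

# Cas9-style blunt cut: both strands severed at the same coordinate.
blunt_left, _ = cut(target, 10, 10)

# Cpf1-style staggered cut: strands severed 5 nt apart, leaving an
# overhang that can support directional, more precise re-ligation.
stag_left, _ = cut(target, 13, 18)
overhang = len(stag_left[1]) - len(stag_left[0])
print(f"blunt overhang: {len(blunt_left[1]) - len(blunt_left[0])} nt; "
      f"staggered overhang: {overhang} nt")
```

The asymmetric cut positions are what distinguish the two enzyme styles here; everything else about the fragments is identical.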
To explore the suitability of Cpf1 for genome-editing applications, we characterized the RNA-guided DNA-targeting requirements for 16 Cpf1-family proteins from diverse bacteria, and we identified two Cpf1 enzymes from Acidaminococcus sp. BV3L6 and Lachnospiraceae bacterium ND2006 that are capable of mediating robust genome editing in human cells. Collectively, these results establish Cpf1 as a class 2 CRISPR-Cas system that includes an effective single RNA-guided endonuclease with distinct properties that has the potential to substantially advance our ability to manipulate eukaryotic genomes.
Results
Figure 1
The Francisella novicida U112 Cpf1 CRISPR Locus Provides Immunity against Transformation of Plasmids Containing Protospacers Flanked by a 5′-TTN PAM
(A) Organization of two CRISPR loci found in Francisella novicida U112 (NC_008601). The domain architectures of FnCas9 and FnCpf1 are compared.
(B) Schematic illustrating the plasmid depletion assay for discovering the PAM position and identity. Competent E. coli harboring either the heterologous FnCpf1 locus plasmid (pFnCpf1) or the empty vector control were transformed with a library of plasmids containing the matching protospacer flanked by randomized 5′ or 3′ PAM sequences and selected with antibiotic to deplete plasmids carrying successfully targeted PAMs. Plasmids from surviving colonies were extracted and sequenced to determine depleted PAM sequences.
(C and D) Sequence logo for the FnCpf1 PAM as determined by the plasmid depletion assay. Letter height at each position is measured by information content (C) or frequency (D); error bars show 95% Bayesian confidence interval.
(E) E. coli harboring pFnCpf1 provides robust interference against plasmids carrying 5′-TTN PAMs (n = 3; error bars represent mean ± SEM).
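As a hedged illustration of how the letter heights in such a sequence logo are computed, the sketch below derives per-position information content from hypothetical depleted-PAM base counts (the numbers are invented, not Figure 1's data):

```python
import math

# Hypothetical base counts at two PAM positions upstream of the
# protospacer (invented numbers), showing a strong T preference
# consistent with a 5'-TTN PAM.
counts = [
    {"A": 2, "C": 3, "G": 1, "T": 94},   # position -3
    {"A": 5, "C": 4, "G": 6, "T": 85},   # position -2
]

def information_content(freqs):
    """Per-position information content in bits (max 2 for DNA):
    IC = 2 + sum_b p_b * log2(p_b)."""
    total = sum(freqs.values())
    probs = [n / total for n in freqs.values() if n > 0]
    return 2 + sum(p * math.log2(p) for p in probs)

for i, pos in enumerate(counts, start=-3):
    print(f"position {i}: {information_content(pos):.2f} bits")
```

A position where all four bases occur equally scores 0 bits; a perfectly conserved position scores the maximum 2 bits, which is why conserved PAM letters tower over the rest of the logo.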
Cpf1-Containing CRISPR Loci Are Active Bacterial Immune Systems
The Cpf1-Associated CRISPR Array Is Processed Independent of TracrRNA
Cpf1 Is a Single crRNA-Guided Endonuclease
The RuvC-like Domain of Cpf1 Mediates RNA-Guided DNA Cleavage
Sequence and Structural Requirements for the Cpf1 crRNA
Cpf1-Family Proteins from Diverse Bacteria Share Common crRNA Structures and PAMs
Cpf1 Can Be Harnessed to Facilitate Genome Editing in Human Cells
In this work, we characterize the Cpf1-containing class 2 CRISPR system, classified as type V, and show that its effector protein, Cpf1, is a single RNA-guided endonuclease. Cpf1 substantially differs from Cas9—to date, the only other experimentally characterized class 2 effector—in terms of structure and function and might provide important advantages for genome-editing applications. Specifically, Cpf1 contains a single identified nuclease domain, in contrast to the two nuclease domains present in Cas9. The results presented here show that, in FnCpf1, inactivation of the RuvC-like domain abolishes cleavage of both DNA strands. Conceivably, FnCpf1 forms a homodimer (Figure S2B), with the RuvC-like domains of each of the two subunits cleaving one DNA strand. However, we cannot rule out that FnCpf1 contains a second yet-to-be-identified nuclease domain. Structural characterization of Cpf1-RNA-DNA complexes will allow testing of these hypotheses and elucidation of the cleavage mechanism.
February 8, 2016 | When a geneticist stares down the 3 billion DNA base pairs of the human genome, searching for a clue to what’s gone awry in a single patient, it helps to narrow the field. One of the most popular places to look is the exome, the tiny fraction of our DNA―less than 2%―that actually codes for proteins. For patients with rare genetic diseases, which might be fully explained by one key mutation, many studies sequence the whole exome and leave all the noncoding DNA out. Similarly, personalized cancer tests, which can help bring to light unexpected treatment options, often sequence the tumor exome, or a smaller panel of protein-coding genes.
Unfortunately, we know that’s not the whole picture. “There are a substantial number of noncoding regions that are just as effective at turning off a gene as a mutation in the gene itself,” says Richard Sherwood, a geneticist at Brigham and Women’s Hospital in Boston. “Exome sequencing is not going to be a good proxy for what genes are working.”
Sherwood studies regulatory DNA, the vast segment of the genome that governs which genes are turned on or off in any cell at a given time. It’s a confounding area of genetics; we don’t even know how much of the genome is made up of these regulatory elements. While genes can be recognized by the presence of “start” and “stop” codons―sequences of three DNA letters that tell the cell’s molecular machinery which stretches of DNA to transcribe into RNA, and eventually into protein―there are no definite signs like this for regulatory DNA.
Instead, studies to discover new regulatory elements have been somewhat trial-and-error. If you suspect a gene’s activity might be regulated by a nearby DNA element, you can inhibit that element in a living cell, and see if your gene shuts down with it.
With these painstaking experiments, scientists can slowly work their way through potential regulatory regions―but they can’t sweep across the genome with the kind of high-throughput testing that other areas of genetics thrive on. “Previously, you couldn’t do these sorts of tests in a large form, like 4,000 of them at once,” says David Gifford, a computational biologist at MIT. “You would really need to have a more hypothesis-directed methodology.”
Recently, Gifford and Sherwood collaborated on a paper, published in Nature Biotechnology, which presents a new method for testing thousands of DNA loci for regulatory activity at once. Their assay, called MERA (multiplexed editing regulatory assay), is built on the recent technology boom in CRISPR-Cas9 gene editing, which lets scientists quickly and easily cut specific sequences of DNA out of the genome.
So far, their team, including lead author Nisha Rajagopal from Gifford’s lab, has used MERA to study the regulation of four genes involved in the development of embryonic stem cells. Already, the results have defied the accepted wisdom about regulatory DNA. Many areas of the genome flagged by MERA as important factors in gene expression do not fall into any known categories of regulatory elements, and would likely never have been tested with previous-generation methods.
“Our approach allows you to look away from the lampposts,” says Sherwood. “The more unbiased you can be, the more we’ll actually know.”
A New Kind of CRISPR Screen
In the past three years, CRISPR-Cas9 experiments have taken all areas of molecular biology by storm, and Sherwood and Gifford are far from the first to use the technology to run large numbers of tests in parallel. CRISPR screens are an excellent way to learn which genes are involved in a cellular process, like tumor growth or drug resistance. In these assays, scientists knock out entire genes, one by one, and see what happens to cells without them.
This kind of CRISPR screen, however, operates on too small a scale to study the regulatory genome. For each gene knocked out in a CRISPR screen, you have to engineer a strain of virus to deliver a “guide RNA” into the cellular genome, showing the vicelike Cas9 molecule which DNA region to cut. That works well if you know exactly where a gene lies and only need to cut it once—but in a high-throughput regulatory test, you would want to blanket vast stretches of DNA with cuts, not knowing which areas will turn out to contain regulatory elements. Creating a new virus for each of these cuts is hugely impractical.
The insight behind MERA is that, with the right preparation, most of the genetic engineering can be done in advance. Gifford and Sherwood’s team used a standard viral vector to put a “dummy” guide RNA sequence, one that wouldn’t tell Cas9 to cut anything, into an embryonic stem cell’s genome. Then they grew plenty of cells with this prebuilt CRISPR system inside, and attacked each one with a Cas9 molecule targeted to the dummy sequence, chopping out the fake guide.
Normally, the result would just be a gap in the CRISPR system where the guide once was. But along with Cas9, the researchers also exposed the cells to new, “real” guide RNA sequences. Through a DNA repair mechanism called homologous recombination, the cells dutifully patched over the gaps with new guides, whose sequences were very similar to the missing dummy code. At the end of the process, each cell had a unique guide sequence ready to make cuts at a specific DNA locus—just like in a standard CRISPR screen, but with much less hands-on engineering.
By using a large enough library of guide RNA molecules, a MERA screen can include thousands of cuts that completely tile a broad region of the genome, providing an agnostic look at anywhere regulatory elements might be hiding. “It’s a lot easier [than a typical CRISPR screen],” says Sherwood. “The day the library comes in, you just perform one PCR reaction, and the cells do the rest of the work.”
In the team’s first batch of MERA screens, they created almost 4,000 guide RNAs for each gene they studied, covering roughly 40,000 DNA bases of the “cis-regulatory region,” or the area surrounding the gene where most regulatory elements are thought to lie. It’s unclear just how large any gene’s cis-regulatory region is, but 40,000 bases is a big leap from the highly targeted assays that have come before.
“We’re now starting to do follow-up studies where we increase the number of guide RNAs,” Sherwood adds. “Eventually, what you’d like is to be able to tile an entire chromosome.”
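A minimal sketch of what tiling a region with guides might look like computationally: scan for SpCas9-style NGG PAM sites and take the 20-nt protospacer upstream of each. The sequence, coordinates, and function name are invented for illustration, not taken from the MERA pipeline:

```python
# Enumerate candidate guide RNAs that tile a region: every 20-nt
# protospacer immediately upstream of an NGG PAM is a potential guide.
def tile_guides(region: str, guide_len: int = 20):
    guides = []
    for i in range(guide_len, len(region) - 2):   # i = PAM start index
        if region[i + 1 : i + 3] == "GG":         # PAM = N-G-G
            guides.append((i - guide_len, region[i - guide_len : i]))
    return guides

region = "ACGTACGGTACGTTAGGCATCGATGGCATTAGCGGATCCTAGG" * 3  # toy region
library = tile_guides(region)
print(f"{len(library)} candidate guides tile a {len(region)}-bp toy region")
```

Scaling the same scan to a 40,000-base cis-regulatory window is what yields the several-thousand-guide libraries described above.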
Far From the Lampposts
Sherwood and Gifford tried to focus their assays on regions that would be rich in regulatory elements. To that end, they made sure their guide RNAs covered parts of the genome with well-known signs of regulatory activity, like histone markers and transcription factor binding sites. For many of these areas, Cas9 cuts did, in fact, shut down gene expression in the MERA screens.
But the study also targeted regions around each gene that were empty of any known regulatory features. “We tiled some other regions that we thought might serve as negative controls,” explains Gifford. “But they turned out not to be negative at all.”
The study’s most surprising finding was that several cuts to seemingly random areas of the genome caused genes to become nonfunctional. The authors named these DNA regions “unmarked regulatory elements,” or UREs. They were especially prevalent around the genes Tdgf1 and Zfp42, and in many cases, seemed to be every bit as necessary to gene activity as more predictable hits on the MERA screen.
These results caught the researchers so off guard that it was natural to wonder if MERA screens are prone to false positives. Yet follow-up experiments strongly supported the existence of UREs. Switching the guide RNAs from a Tdgf1 MERA screen and a Zfp42 screen, for example, produced almost no positive results: the UREs’ regulatory effects were indeed specific to the genes near them.
In a more specific test, the researchers chose a particular URE connected to Tdgf1, and cut it out of a brand new population of cells for a closer look. “We showed that, if we deleted that region from the genome, the cells lost expression of the gene,” says Sherwood. “And then when we put it back in, the gene became expressed again. Which was good proof to us that the URE itself was responsible.”
From these results, it seems likely that follow-up MERA screens will find even more unknown stretches of regulatory DNA. Gifford and Sherwood’s experiments didn’t try to cover as much ground around their target genes as they might have, because the researchers assumed that MERA would mostly confirm what was already known. At best, they hoped MERA would rule out some suspected regulatory regions, and help show which regulatory elements have the biggest effect on gene expression.
“We tended to prioritize regions that had been known before,” Sherwood says. “Unfortunately, in the end, our datasets weren’t ideally suited to discovering these UREs.”
Getting to Basic Principles
MERA could open up huge swaths of the regulatory genome to investigation. Compared to an ordinary CRISPR screen, says Sherwood, “there’s only upside,” as MERA is cheaper, easier, and faster to run.
Still, interpreting the results is not trivial. Like other CRISPR screens, MERA makes cuts at precise points in the genome, but does not tell cells to repair those cuts in any particular way. As a result, a population of cells all carrying the same guide RNA can have a huge variety of different gaps and scars in their genomes, typically deletions in the range of 10 to 100 bases long. Gifford and Sherwood created up to 100 cells for each of their guides, and sometimes found that gene expression was affected in some but not all of them; only sequencing the genomes of their mutated cells could reveal exactly what changes had been made.
By repeating these experiments many times, and learning which mutations affect gene expression, it will eventually be possible to pin down the exact DNA bases that make up each regulatory element. Future studies might even be able to distinguish between regulatory elements with small and large effects on gene expression. In Gifford and Sherwood’s MERA screens, the target genes were altered to produce a green fluorescent protein, so the results were read in terms of whether cells gave off fluorescent light. But a more precise, though expensive, approach would be to perform RNA sequencing, to learn which cuts reduced the cell’s ability to transcribe a gene into RNA, and by how much.
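As a hedged sketch of the per-base analysis this implies (all cell records are invented), one could intersect the deletions of expression-losing cells to localize a candidate regulatory element:

```python
from collections import Counter

# (deletion_start, deletion_end, expression_lost) for sequenced cells
# that all carried the same guide RNA; values are invented.
cells = [
    (100, 130, True),
    (105, 160, True),
    (150, 200, False),
    (95, 115, True),
    (170, 210, False),
]

# Count, for each base, how many expression-losing cells deleted it.
loss_hits = Counter()
for start, end, lost in cells:
    if lost:
        for base in range(start, end):
            loss_hits[base] += 1

# Bases deleted in every expression-losing cell are the best candidates
# for the functional element.
n_lost = sum(1 for *_, lost in cells if lost)
candidates = sorted(b for b, n in loss_hits.items() if n == n_lost)
print(f"candidate element spans bases {candidates[0]} to {candidates[-1]}")
```

Repeating this across many guides and many cells is what would narrow a 10-to-100-base scar down to the exact bases that matter.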
A MERA screen offers a rich volume of data on the behavior of the regulatory genome. Yet, as with so much else in genetics, there are few robust principles to let scientists know where they should be focusing their efforts. Histone markers provide only a very rough sketch of regulatory elements, often proving to be red herrings on closer examination. And the existence of UREs, if confirmed by future experiments, shows that we don’t yet even know which areas of the genome to rule out in the hunt for regulatory regions.
“Every dataset we get comes closer and closer to computational principles that let us predict these regions,” says Sherwood. As more studies are conducted, patterns may emerge in the DNA sequences of regulatory elements that link UREs together, or reveal which histone markers truly point toward regulatory effects. There might also be functional clues hidden in these sequences, hinting at what is happening on a molecular level as regulatory elements turn genes on and off in the course of a cell’s development.
For now, however, the data is still rough and disorganized. For better and for worse, high-throughput tools like MERA are becoming the foundation for most discoveries in genetics—and that means there is a lot more work to do before the regulatory genome begins to come into focus.
CORRECTED 2/9/16: Originally, this story incorrectly stated that only certain cell types could be assayed with MERA for reasons related to homologous recombination. In fact, the authors see no reason MERA could not be applied to any in vitro cell line, and hope to perform screens in a wide range of cell types. The text has been edited to correct the error.
Gene Editing for Exon 51: Why CRISPR Snipping might be better than Exon Skipping for DMD
By Lauren Martz, Senior Writer
Published on Thursday, January 21, 2016
As if to preempt the regulatory setbacks in Duchenne muscular dystrophy (DMD) that last week disappointed the field, a trio of preclinical studies emerged two weeks earlier showing that cutting out DMD mutations with gene editing might offer a viable alternative to the exon-skipping strategies that have dominated the pipeline. Now, the question is whether there’s reason to believe the mouse studies will translate any better to the clinic.
The studies, published Dec. 31 in Science, provide in vivo proof of concept for the first time that CRISPR-Cas9 used postnatally can have a disease-modifying effect. Despite the hype around its therapeutic promise, the technology has so far proved itself primarily in research applications, for example, in modifying cells for in vitro screening or creating animal models of disease.
RNA interference (RNAi) silences, or knocks down, the translation of a gene by inducing degradation of a gene target’s transcript. To advance RNAi applications, Thermo Fisher Scientific has developed two types of small RNA molecules: short interfering RNAs and microRNAs. The company also offers products for RNAi analysis in vitro and in vivo, including libraries for high-throughput applications.
Genes can be knocked down with RNA interference (RNAi) or knocked out with CRISPR-Cas9. RNAi, the screening workhorse, knocks down the translation of genes by inducing rapid degradation of a gene target’s transcript.
CRISPR-Cas9, the new but already celebrated genome-editing technology, cleaves specific DNA sequences to render genes inoperative. Although mechanistically different, the two techniques complement one another, and when used together facilitate discovery and validation of scientific findings.
RNAi technologies along with other developments in functional genomics screening were discussed by industry leaders at the recent Discovery on Target conference. The conference, which emphasized the identification and validation of novel drug targets and the exploration of unknown cellular pathways, included a symposium on the development of CRISPR-based therapies.
RNAi screening can be performed in either pooled or arrayed formats. Pooled screening provides an affordable benchtop option, but requires back-end deconvolution and deep sequencing to identify the shRNA causing the specific phenotype. Targets are much easier to identify using the arrayed format since each shRNA clone is in an individual well.
“CRISPR complements RNAi screens,” commented Ryan Raver, Ph.D., global product manager of functional genomics at Sigma-Aldrich. “You can do a whole genome screen with either small interfering RNA (siRNA) or small hairpin RNA (shRNA), then validate with individual CRISPRs to ensure it is a true result.”
A powerful validation method for knockdown or knockout studies is to use lentiviral open reading frames (ORFs) for gene re-expression, for rescue experiments, or to detect whether the wild-type phenotype is restored. However, the ORF integrates randomly into the genome, and expression is not driven by the endogenous promoter, so physiologically relevant levels of the gene may not be expressed unless controlled via an inducible system.
In the future, CRISPR activators may provide more efficient ways to express not only wild-type but also mutant forms of genes under the endogenous promoter.
Choice of screening technique depends on the researcher and the research question. Whole gene knockout may be necessary to observe a phenotype, while partial knockdown might be required to investigate functions of essential or lethal genes. Use of both techniques is recommended to identify all potential targets.
For example, a recent whole-genome siRNA screen on a human glioblastoma cell line identified the gene FAT1 as a negative regulator of apoptosis. A CRISPR-mediated knockout of the gene also conferred sensitivity to death receptor–induced apoptosis, with an even stronger phenotype, thereby validating FAT1’s newly identified role in extrinsic apoptosis.
Dr. Raver indicated that next-generation RNAi libraries that are microRNA-adapted might have a more robust knockdown of gene expression, up to 90–95% in some cases. Ultracomplex shRNA libraries help to minimize both false-negative and false-positive rates by targeting each gene with ~25 independent shRNAs and by including thousands of negative-control shRNAs.
Recently, a relevant paper emerged from the laboratory of Jonathan Weissman, Ph.D., a professor of cellular and molecular pharmacology at the University of California, San Francisco. The paper described how a new ultracomplex pooled shRNA library was optimized by means of a microRNA-adapted system. This system, which was able to achieve high specificity in the detection of hit genes, produced robust results. In fact, they were comparable to results obtained with a CRISPR pooled screen. Members of the Weissman group systematically optimized the promoter and microRNA contexts for shRNA expression along with a selection of guide strands.
Using a sublibrary of proteostasis genes (targeting 2,933 genes), the investigators compared CRISPR and RNAi pooled screens. Data showed 48 hits unique to RNAi, 40 unique to CRISPR, and an overlap of 21 hits (with a 5% false discovery rate cut-off). Together, the technologies provided a more complete research story.
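The overlap arithmetic behind those numbers is simple set logic; a sketch with invented gene identifiers reproduces the reported counts:

```python
# Toy hit lists (invented identifiers) arranged to match the reported
# screen comparison: 48 RNAi-only hits, 40 CRISPR-only hits, 21 shared.
rnai_hits   = {"GENE%03d" % i for i in range(69)}        # 48 unique + 21 shared
crispr_hits = {"GENE%03d" % i for i in range(48, 109)}   # 21 shared + 40 unique

shared = rnai_hits & crispr_hits
print(len(rnai_hits - crispr_hits), len(shared), len(crispr_hits - rnai_hits))
```

Comparing hit sets this way is also how one would quantify the agreement between any two screening modalities before deciding which hits to validate.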
Arrayed CRISPR Screens
Synthetic crRNA:tracrRNA reagents can be used in a similar manner to siRNA reagents for assessment of phenotypes in a cell population. Top row: A reporter cell line stably expressing Cas9 nuclease was transfected with GE Dharmacon’s Edit-R synthetic crRNA:tracrRNA system, which was used to target three positive control genes (PSMD7, PSMD14, and VCP) and a negative control gene (PPIB). Green cells indicate EGFP signaling occurring as a result of proteasome pathway disruption. Bottom row: A siGENOME siRNA pool targeting the same genes was used in the same reporter cell line.
“RNA screens are well accepted and will continue to be used, but it is important biologically that researchers step away from the RNA mechanism to further study and validate their hits to eliminate potential bias,” explained Louise Baskin, senior product manager, Dharmacon, part of GE Healthcare. “The natural progression is to adopt CRISPR in the later stages.”
RNAi uses the cell’s endogenous mechanism. All of the components required for gene knockdown are already within the cell, and the delivery of the siRNA starts the process. With the CRISPR gene-editing system, which is derived from a bacterial immune defense system, delivery of both the guide RNA and the Cas9 nuclease, often the rate limiter in terms of knockout efficiency, are required.
In pooled approaches, the cell has to either drop out or overexpress so that it is sortable, limiting the types of addressable biological questions. A CRISPR-arrayed approach opens up the door for use of other analytical tools, such as high-content imaging, to identify hits of interest.
To facilitate use of CRISPR, GE recently introduced Dharmacon Edit-R synthetic CRISPR RNA (crRNA) libraries that can be used to carry out high-throughput arrayed analysis of multiple genes. Rather than a vector- or plasmid-based gRNA to guide the targeting of the Cas9 cleavage, a synthetic crRNA and tracrRNA are used. These algorithm-designed crRNA reagents can be delivered into the cells very much like siRNA, opening up the capability to screen multiple target regions for many different genes simultaneously.
GE showed a very strong overlap between CRISPR and RNAi using this arrayed approach to validate RNAi screen hits with synthetic crRNA. The data concluded that CRISPR can be used for medium- or high-throughput validation of knockdown studies.
“We will continue to see a lot of cooperation between RNAi and gene editing,” declared Baskin. “Using the CRISPR mechanism to knockin could introduce mutations to help understand gene function at a much deeper level, including a more thorough functional analysis of noncoding genes.
“These regulatory RNAs often act in the nucleus to control translation and transcription, so to knockdown these genes with RNAi would require export to the cytoplasm. Precision gene editing, which takes place in the nucleus, will help us understand the noncoding transcriptome and dive deeper into how those genes regulate disease progression, cellular development and other aspects of human health and biology.”
Tool Selection
Schematic of a pooled shRNA screening workflow developed by Transomic Technologies. Cells are transduced, and positive or negative selection screens are performed. PCR amplification and sequencing of the shRNA integrated into the target cell genome allows the determination of shRNA representation in the population.
The functional genomics tool should fit the specific biology; the biology should not be forced to fit the tool. Points to consider include the regulation of expression, the cell line or model system, as well as assay scale and design. For example, there may be a need for regulatable expression. There may be limitations around the cell line or model system. And assay scale and design may include delivery conditions and timing to optimally complete perturbation and reporting.
“Both RNAi- and CRISPR-based gene modulation strategies have pros and cons that should be considered based on the biology of the genes being studied,” commented Gwen Fewell, Ph.D., chief commercial officer, Transomic Technologies.
RNAi reagents, which can produce hypomorphic or transient gene-suppression states, are well known for their use in probing drug targets. In addition, these reagents are enriching studies of gene function. CRISPR-Cas9 reagents, which produce clean heterozygous and null mutations, are important for studying tumor suppressors and other genes where complete loss of function is desired.
Timing to readout the effects of gene perturbation must be considered to ensure that the biological assay is feasible. RNAi gene knockdown effects can be seen in as little as 24–72 hours, and inducible and reversible gene knockdown can be realized. CRISPR-based gene knockout effects may become complete and permanent only after 10 days.
Both RNAi and CRISPR reagents work well for pooled positive selection screens; however, for negative selection screens, RNAi is the more mature tool. Current versions of CRISPR pooled reagents can produce mixed populations containing a fraction of non-null mutations, which can reduce the overall accuracy of the readout.
To meet the needs of varied and complex biological questions, Transomic Technologies has developed both RNAi and CRISPR tools with options for optimal expression, selection, and assay scale. For example, the company’s shERWOOD-UltramiR shRNA reagents incorporate advances in design and small RNA processing to produce increased potency and specificity of knockdown, particularly important for pooled screens.
Sequence-verified pooled shRNA screening libraries provide flexibility in promoter choice, in vitro formats, in vivo formats, and a choice of viral vectors for optimal delivery and expression in biologically relevant cell lines, primary cells or in vivo.
The company’s line of lentiviral-based CRISPR-Cas9 reagents has variable selectable markers such that guide RNA- and Cas9-expressing vectors, including inducible Cas9, can be co-delivered and selected for in the same cell to increase editing efficiency. Promoter options are available to ensure expression across a range of cell types.
“Researchers are using RNAi and CRISPR reagents individually and in combination as cross-validation tools, or to engineer CRISPR-based models to perform RNAi-based assays,” informs Dr. Fewell. “Most exciting are parallel CRISPR and RNAi screens that have tremendous potential to uncover novel biology.”
Converging Technologies
The convergence of RNAi technology with genome-editing tools, such as CRISPR-Cas9 and transcription activator-like effector nucleases, combined with next-generation sequencing will allow researchers to dissect biological systems in a way not previously possible.
“From a purely technical standpoint, the challenges for traditional RNAi screens come down to efficient delivery of the RNAi reagents and having those reagents provide significant, consistent, and lasting knockdown of the target mRNAs,” states Ross Whittaker, Ph.D., a product manager for genome editing products at Thermo Fisher Scientific. “We have approached these challenges with a series of reagents and siRNA libraries designed to increase the success of RNAi screens.”
Thermo Fisher provides lipid-transfection RNAiMAX reagents, which effectively deliver siRNA. In addition, the company’s Silencer and Silencer Select siRNA libraries provide consistent and significant knockdown of the target mRNAs. These siRNA libraries utilize highly stringent bioinformatic designs that ensure accurate and potent targeting for gene-silencing studies. The Silencer Select technology adds a higher level of efficacy and specificity due to chemical modifications with locked nucleic acid (LNA) chemistry.
The libraries alleviate concerns about false-positive or false-negative data. Their high potency allows less reagent to be used per assay; thus, more screens or validations can be conducted per library.
Dr. Whittaker believes that researchers will migrate regularly between RNAi and CRISPR-Cas9 technology in the future. CRISPR-Cas9 will be used to create engineered cell lines not only to validate RNAi hits but also to follow up on the underlying mechanisms. Cell lines engineered with CRISPR-Cas9 will be utilized in RNAi screens. In the long term, CRISPR-Cas9 screening will likely replace RNAi screening in many cases, especially with the introduction of arrayed CRISPR libraries.
Validating Antibodies with RNAi
Unreliable antibody specificity is a widespread problem for researchers, but RNAi is assuaging scientists’ concerns as a validation method.
The procedure introduces short hairpin RNAs (shRNAs) to reduce expression levels of a targeted protein, and the antibody is then tested against the knockdown sample. With its target protein knocked down, a truly specific antibody shows a dramatically reduced signal, or none at all, on a Western blot. Short of knockout animal models, RNAi is arguably the most effective method of validating research antibodies.
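The knockdown readout can be made quantitative by densitometry: each lane's target-band intensity is normalized to its own loading control, and the shRNA lane is compared with the untreated control. A minimal sketch, using invented band intensities for illustration:

```python
def knockdown_efficiency(target_kd, loading_kd, target_ctrl, loading_ctrl):
    """Percent reduction in target signal on a Western blot.

    Each lane's target band is first normalized to its own loading
    control (e.g., GAPDH) to correct for uneven sample loading.
    """
    normalized_kd = target_kd / loading_kd        # shRNA-treated lane
    normalized_ctrl = target_ctrl / loading_ctrl  # untreated control lane
    return 100.0 * (1.0 - normalized_kd / normalized_ctrl)

# Illustrative densitometry values: a truly specific antibody should
# report a large signal reduction after shRNA knockdown of its target.
efficiency = knockdown_efficiency(target_kd=600, loading_kd=1200,
                                  target_ctrl=5000, loading_ctrl=1000)
# → 90.0 (percent knockdown)
```

A nonspecific antibody, by contrast, would show little change in normalized signal, since its off-target binding is unaffected by the knockdown.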
The method is not common among antibody suppliers—time and cost being the chief barriers to its adoption, although some companies are beginning to embrace RNAi validation.
“In the interest of fostering better science, Proteintech felt it was necessary to implement this practice,” said Jason Li, Ph.D., founder and CEO of Proteintech Group, which made RNAi standard protocol in February 2015. “When researchers can depend on reproducibility, they execute more thorough experiments and advance the treatment of human diseases and conditions.”
Junk DNA Kept in Good Repair by Nuclear Membrane
Heterochromatin has the dubious distinction of being called the “dark matter” of DNA, and it has even suffered the indignity of being dismissed as “junk DNA.” But it seems to get more respectful treatment inside the nucleus, where it has the benefit of a special repair mechanism. This mechanism, discovered by scientists based at the University of Southern California (USC), transports broken heterochromatin sequences from the hurly-burly of the heterochromatin domain so that they can be repaired in the relative peace and quiet of the nuclear periphery.
This finding suggests that the nuclear membrane is more versatile than is generally appreciated. Yes, it serves as a protective container for nuclear material, and it uses its pores to manage the transport of molecules in and out of the nucleus. But it may also play a special role in maintaining the integrity of heterochromatin, which tends to be overlooked because it consists largely of noncoding DNA, including repetitive stretches of no apparent function.
“Scientists are now starting to pay a lot of attention to this mysterious component of the genome,” said Irene E. Chiolo, Ph.D., an assistant professor at USC. “Heterochromatin is not only essential for chromosome maintenance during cell division; it also poses specific threats to genome stability. Heterochromatin is potentially one of the most powerful driving forces for cancer formation, but it is the ‘dark matter’ of the genome. We are just beginning to unravel how repair works here.”
Dr. Chiolo led an effort to understand how heterochromatin stays in good repair, even though it is particularly vulnerable to a kind of repair error called ectopic recombination. This kind of error is apt to occur when breaks in repeated sequences undergo homologous recombination (HR) by means of double-strand break (DSB) repair: repeated sequences tend to recombine with each other during DNA repair.
Working with the fruit fly Drosophila melanogaster, Dr. Chiolo’s team observed that breaks in heterochromatin are repaired after damaged sequences move away from the rest of the chromosome to the inner wall of the nuclear membrane. There, a trio of proteins mends the break in a safe environment, where it cannot accidentally get tangled up with incorrect chromosomes.
The details appeared October 26 in Nature Cell Biology, in an article entitled, “Heterochromatic breaks move to the nuclear periphery to continue recombinational repair.”
“[Heterochromatic] DSBs move to the nuclear periphery to continue HR repair,” the authors wrote. “Relocalization depends on nuclear pores and inner nuclear membrane proteins (INMPs) that anchor repair sites to the nuclear periphery through the Smc5/6-interacting proteins STUbL/RENi. Both the initial block to HR progression inside the heterochromatin domain, and the targeting of repair sites to the nuclear periphery, rely on SUMO and SUMO E3 ligases.”
“We knew that nuclear membrane dysfunctions are common in cancer cells,” Dr. Chiolo said. “Our studies now suggest how these dysfunctions can affect heterochromatin repair and have a causative role in cancer progression.”
This study may help reveal how and why organisms become more predisposed to cancer as they age—the nuclear membrane progressively deteriorates as an organism ages, removing this bulwark against genome instability.
Next, Dr. Chiolo and her team will explore how the movement of broken sequences is accomplished and regulated, and what happens in cells and organisms when this membrane-based repair mechanism fails. Their ultimate goal is to understand how this mechanism functions in human cells and to identify new strategies to prevent its catastrophic failure and cancer formation.
Gene Found that Regulates Stem Cell Number Production
The gene Prkci promotes the generation of differentiated cells (red). However if Prkci activity is reduced or absent, neural stem cells (green) are promoted. [In Kyoung Mah]
A scientific team from the University of Southern California (USC) and the University of California, San Diego has described an important gene that maintains a critical balance between producing too many and too few stem cells. Called Prkci, the gene influences whether stem cells self-renew to produce more stem cells or differentiate into more specialized cell types, such as blood or nerves.
When it comes to stem cells, too much of a good thing isn’t necessarily a benefit: producing too many new stem cells may lead to cancer; making too few inhibits the repair and maintenance of the body.
In their experiments, the researchers grew mouse embryonic stem cells, which lacked Prkci, into embryo-like structures in the laboratory. Without Prkci, the stem cells favored self-renewal, generating large numbers of stem cells and, subsequently, an abundance of secondary structures.
Upon closer inspection, the stem cells lacking Prkci had many activated genes typical of stem cells, and some activated genes typical of neural, cardiac, and blood-forming cells. Therefore, the loss of Prkci can also encourage stem cells to differentiate into the progenitor cells that form neurons, heart muscle, and blood.
Prkci achieves these effects by activating or deactivating a well-known group of interacting genes that are part of the Notch signaling pathway. In the absence of Prkci, the Notch pathway produces a protein that signals to stem cells to make more stem cells. In the presence of Prkci, the Notch pathway remains silent, and stem cells differentiate into specific cell types.
These findings have implications for developing patient therapies. Even though Prkci can be active in certain skin cancers, inhibiting it might lead to unintended consequences, such as tumor overgrowth. However, for patients with certain injuries or diseases, it could be therapeutic to use small molecule inhibitors to block the activity of Prkci, thus boosting stem cell production.
“We expect that our findings will be applicable in diverse contexts and make it possible to easily generate stem cells that have typically been difficult to generate,” said Francesca Mariani, Ph.D., principal investigator at the Eli and Edythe Broad Center for Regenerative Medicine and Stem Cell Research at USC.
Their study (“Atypical PKC-iota Controls Stem Cell Expansion via Regulation of the Notch Pathway”) was published in Stem Cell Reports.
Atypical PKC-iota Controls Stem Cell Expansion via Regulation of the Notch Pathway
In Kyoung Mah, Rachel Soloff, Stephen M. Hedrick, and Francesca V. Mariani
The number of stem/progenitor cells available can profoundly impact tissue homeostasis and the response to injury or disease. Here, we propose that an atypical PKC, Prkci, is a key player in regulating the switch from an expansion to a differentiation/maintenance phase via regulation of Notch, thus linking the polarity pathway with the control of stem cell self-renewal. Prkci is known to influence symmetric cell division in invertebrates; however, a definitive role in mammals has not yet emerged. Using a genetic approach, we find that loss of Prkci results in a marked increase in the number of various stem/progenitor cells. The mechanism used likely involves inactivation and symmetric localization of NUMB, leading to the activation of NOTCH1 and its downstream effectors. Inhibition of atypical PKCs may be useful for boosting the production of pluripotent stem cells, multipotent stem cells, or possibly even primordial germ cells by promoting the stem cell/progenitor fate.
The control of asymmetric versus symmetric cell division in stem and progenitor cells balances self-renewal and differentiation to mediate tissue homeostasis and repair and involves key proteins that control cell polarity. In the case of excess symmetric division, too many stem-cell-like daughter cells are generated that can lead to tumor initiation and growth. Conversely, excess asymmetric cell division can severely limit the number of cells available for homeostasis and repair (Gómez-López et al., 2014; Inaba and Yamashita, 2012). The Notch pathway has been implicated in controlling stem cell self-renewal in a number of different contexts (Hori et al., 2013). However, how cell polarity, asymmetric cell division, and the activation of determinants ultimately impinge upon the control of stem cell expansion and maintenance is not fully understood. In this study, we examine the role of an atypical protein kinase C (aPKC), PRKCi, in stem cell self-renewal and, in particular, determine whether PRKCi acts via the Notch pathway. PKCs are serine-threonine kinases that control many basic cellular processes and are typically classified into three subgroups—conventional, novel, and the aPKCs iota and zeta, which, in contrast to the others, are not activated by diacylglyceride or calcium. The aPKC proteins are best known for being central components of an evolutionarily conserved Par3-Par6-aPKC trimeric complex that controls cell polarity in C. elegans, Drosophila, Xenopus, zebrafish, and mammalian cells (Suzuki and Ohno, 2006).
Before Notch influences stem cell self-renewal, the regulation of cell polarity, asymmetric versus symmetric cell division, and the segregation of cell fate determinants such as NUMB may first be required (Knoblich, 2008). For example, mutational analysis in Drosophila has demonstrated that the aPKC-containing trimeric complex is required for maintaining polarity and for mediating asymmetric cell division during neurogenesis via activation and segregation of NUMB (Wirtz-Peitz et al., 2008). NUMB then functions as a cell fate determinant by inhibiting Notch signaling and preventing self-renewal (Wang et al., 2006). In mammals, the PAR3-PAR6-aPKC complex also can bind and phosphorylate NUMB in epithelial cells and can regulate the unequal distribution of Numb during asymmetric cell division (Smith et al., 2007). During mammalian neurogenesis, asymmetric division is also thought to involve the PAR3-PAR6-aPKC complex, NUMB segregation, and NOTCH activation (Bultje et al., 2009).
Mice deficient in Prkcz are grossly normal, with mild defects in secondary lymphoid organs (Leitges et al., 2001). In contrast, deficiency of the Prkci isozyme results in early embryonic lethality at embryonic day (E)9.5 (Seidl et al., 2013; Soloff et al., 2004). A few studies have investigated the conditional inactivation of Prkci; however, no dramatic changes in progenitor generation were detected in hematopoietic stem cells (HSCs) or the brain (Imai et al., 2006; Sengupta et al., 2011), although one study found evidence of a role for Prkci in controlling asymmetric cell division in the skin (Niessen et al., 2013). Analysis may be complicated by functional redundancy between the iota and zeta isoforms and/or because further studies perturbing aPKCs in specific cell lineages and/or at specific developmental stages are needed.
Here, we investigate the requirement of Prkci in mouse cells using an in vitro system that bypasses early embryonic lethality. Embryonic stem (ES) cells are used to make embryoid bodies (EBs) that develop like the early post-implantation embryo in terms of lineage specification and morphology and can also be maintained in culture long enough to observe advanced stages of cellular differentiation (Desbaillets et al., 2000). Using this approach, we provide genetic evidence that inactivation of Prkci signaling leads to enhanced generation of pluripotent cells and some types of multipotent stem cells, including cells with primordial germ cell (PGC) characteristics. In addition, we provide evidence that aPKCs ultimately regulate stem cell fate via the Notch pathway.
Figure 1. Prkci−/− EBs Contain Cells with Pluripotency Characteristics. (A and A′) Day (d)-12 heterozygous EBs have few OCT4/E-CAD double-positive cells, while null EBs contain many, in clusters at the EB periphery. Inset: OCT4 (nucleus)/E-CAD (cytoplasm) double-positive cells. (B and B′) Adjacent sections in a null EB show that OCT4+ cells are likely also SSEA1+. (C) Dissociated day-12 Prkci−/− EBs contain five to six times more OCT4+ and approximately three times more SSEA1+ cells than heterozygous EBs (three independent experiments). (D and D′) After 2 days in ES cell culture, no colonies are visible in null SSEA1− cultures, while colonies are present in null SSEA1+ cultures (red arrows). (E–E″) SSEA1+ sorted cells can be maintained for many passages (27+). (E) Prkci+/− sorted cells make colonies with differentiated cells at the outer edges (n = 27/35). (E′) Null cells form colonies with distinct edges (n = 39/45). (E″) The percentage of undifferentiated colonies is shown. ***p < 0.001. (F) Sorted null cells express stem cell and differentiation markers at levels similar to normal ES cells (versus heterozygous EBs) (three independent experiments). (G) EBs made from null SSEA1+ sorted cells express germ layer marker genes at the indicated days. Error bars indicate mean ± SEM, three independent experiments. Scale bars, 100 μm in (A, D, and E); 25 μm in (B). See also Figure S1.
RESULTS
Prkci−/− Cultures Have More Pluripotent Cells Even under Differentiation Conditions
First, we compared Prkci null EB development to that of Prkci−/− embryos. Consistent with another null allele (Seidl et al., 2013), both null embryos and EBs fail to properly cavitate (Figures S1A and S1B). The failure to cavitate is unlikely to be due to the inability to form one of the three germ layers, as null EBs express germ-layer-specific genes (Figure S1E). A failure of cavitation could alternatively be caused by an accumulation of pluripotent cells. For example, EBs generated from Timeless knockdown cells do not cavitate and contain large numbers of OCT4-expressing cells (O’Reilly et al., 2011). In addition, EBs generated with Prkcz isoform knockdown cells contain OCT4+ cells under differentiation conditions (Dutta et al., 2011; Rajendran et al., 2013). Thus, we first evaluated ES colony differentiation by alkaline phosphatase (AP) staining. After 4 days without leukemia inhibitory factor (LIF), Prkci−/− ES cell colonies retained crisp boundaries and strong AP staining. In contrast, Prkci+/− colonies had uneven colony boundaries with diffuse AP staining (Figures S1F–S1F″). To definitively detect pluripotent cells, day-12 EBs were assayed for OCT4 and E-CADHERIN (E-CAD) protein expression. Prkci+/− EBs had very few OCT4/E-CAD double-positive cells (Figure 1A); however, null EBs contained large clusters of OCT4/E-CAD double-positive cells, concentrated in a peripheral zone (Figure 1A′). By examining adjacent sections, we found that OCT4+ cells could also be positive for stage-specific embryonic antigen 1 (SSEA1) (Figures 1B and 1B′). Quantification by fluorescence-activated cell sorting (FACS) analysis showed that day-12 Prkci−/− EBs had more OCT4+ and SSEA1+ cells than Prkci+/− EBs (Figure 1C).
We did not find any difference between heterozygous and wild-type cells with respect to the number of OCT4+ or SSEA1+ cells or in their levels of expression for Oct4, Nanog, and Sox2 (Figures S1I, S1I′, and S1J). However, we did find that Oct4, Nanog, and Sox2 were highly upregulated in OCT4+ null cells (Figure S1G). Thus, together, these data indicate that Prkci−/− EBs contain large numbers of pluripotent stem cells, despite being cultured under differentiation conditions.
Functional Pluripotency Tests
If primary EBs have a pluripotent population with the capacity to undergo self-renewal, they can easily form secondary EBs (O’Reilly et al., 2011). Using this assay, we found that more secondary EBs could be generated from Prkci−/− versus Prkci+/− EBs, especially at days 6, 10, and 16, even when plated at a low density to control for aggregation (Figure S1H). To test whether SSEA1+ cells could maintain pluripotency long term, FACS-sorted Prkci−/− SSEA1+ and SSEA1− cells were plated at a low density and maintained under ES cell culture conditions. SSEA1− cells were never able to form identifiable colonies and could not be maintained in culture (Figure 1D). SSEA1+ cells, however, formed many distinct colonies after 2 days of culture, and these cells could be maintained for over 27 passages (Figures 1D′, 1E′, and 1E″). Prkci+/− SSEA1+ cells formed colonies that easily differentiated at the outer edge, even in the presence of LIF (Figure 1E). In contrast, Prkci−/− SSEA1+ cells maintained distinct round colonies (Figure 1E′). Next, we determined whether null SSEA1+ cells expressed pluripotency and differentiation markers similarly to normal ES cells. Indeed, we found that Oct4, Nanog, and Sox2 were upregulated in both null SSEA1+ EB cells and heterozygous ES cells. In addition, differentiated markers (Fgf5, T, Wnt3, and Afp) and tissue stem/progenitor cell markers (neural: Nestin, Sox1, and NeuroD; cardiac: Nkx2-5 and Isl1; and hematopoietic: Gata1 and Hba-x) were downregulated in both SSEA1+ cells and heterozygous ES cells (Figure 1F). SSEA1+ cells likely have a wide range of potential, since EBs generated from these cells expressed markers for all three germ layers (Figure 1G).
Figure 2. Prkci and Pluripotency Pathways. (A) ERK1/2 phosphorylation (Y202/Y204) is reduced in null ES cells and early (day-6) null EBs compared to heterozygous EBs, and strongly increased at later stages. The first lane shows ES cells activated (A) by serum treatment 1 day after serum depletion. (B) Quantification of pERK1/2 normalized to non-phosphorylated ERK1/2 (three independent experiments; mean ± SEM; **p < 0.01). (C) pERK1/2 (Y202/Y204) is strongly expressed in the columnar epithelium of heterozygous EBs that have just cavitated. Null EBs have lower expression. OCT4 and pERK1/2 expression do not co-localize. Scale bar, 100 μm. (D) pERK1/2 (Y202/Y204) levels are lower in null SSEA1+ sorted cells than in heterozygous or null day-12 EBs that have undergone further differentiation. pSTAT3 and STAT3 levels are unchanged. See also Figure S2.
ERK1/2 Signaling during EB Development
Stem cell self-renewal has been shown to require the activation of the JAK/STAT3 and PI3K/AKT pathways and the inhibition of the ERK1/2 and GSK3 pathways (Kunath et al., 2007; Niwa et al., 1998; Sato et al., 2004; Watanabe et al., 2006). We found that both STAT3 and phosphorylated STAT3 levels were not grossly altered and that the pSTAT3/STAT3 ratio was similar between heterozygous and null ES cells and EBs (Figures S2A and S2B). In addition, we did not see any difference in AKT, pAKT, or β-CATENIN levels when comparing heterozygous to null ES cells or EBs (Figures S2A and S2C). Thus, the effects observed upon the loss of Prkci are unlikely to be due to a significant alteration in the JAK/STAT3, PI3K/AKT, or GSK3 pathways.
Next, we investigated ERK1/2 expression and activation. Consistent with other studies showing ERK1/2 activation to be downstream of Prkci in some mammalian cell types (Boeckeler et al., 2010; Litherland et al., 2010), pERK1/2 was markedly inactivated in Prkci null versus heterozygous ES cells. In addition, during differentiation, null EBs displayed strong pERK1/2 inhibition early (until day 6). Later, pERK1/2 was strongly activated as the EB began differentiating (Figures 2A and 2B). By immunofluorescence, pERK1/2 was strongly enriched in the columnar epithelium of control EBs, while overall levels were much lower in Prkci−/− EBs (Figure 2C). In addition, high OCT4 expression correlated with a marked inactivation of pERK1/2 (Figure 2C). Next, we examined Prkci−/− SSEA1+ cells by western blot. We found that SSEA1+ cells isolated from day-12 null EBs had pSTAT3 expression levels similar to whole EBs, while pERK1/2 levels were low (Figure 2D). Thus, these experiments indicate that the higher numbers of pluripotent cells in null EBs correlate with a strong inactivation of ERK1/2.
Neural Stem Cell Fate Is Favored in Prkci−/− EBs
It is well known that ERK/MEK inhibition is not sufficient for pluripotent stem cell maintenance (Ying et al., 2008); thus, other pathways are likely involved. Therefore, we used a TaqMan Mouse Stem Cell Pluripotency Panel (#4385363) on an OpenArray platform to investigate the mechanism of Prkci action. Day-13 and day-20 Prkci−/− EBs expressed high levels of pluripotency and stemness markers versus heterozygous EBs, including Oct4, Utf1, Nodal, Xist, Fgf4, Gal, Lefty1, and Lefty2. Interestingly, however, null EBs also expressed markers for differentiated cell types and tissue stem cells, including Sst, Syp, and Sycp3 (neural-related genes), Isl1 (a cardiac progenitor marker), and Hba-x and Cd34 (hematopoietic markers). Based on this first-pass test, we sought to determine whether loss of Prkci might favor the generation of neural, cardiac, and hematopoietic cell types and/or their progenitors.
Figure 3. Neural Stem Cell Populations Are Increased in Null EBs. (A–C′) Prkci−/− EBs (B) have more NESTIN-positive cells than Prkci+/− EBs (A). (C and C′) MAP2 and TUJ1 are expressed in null EBs, similarly to heterozygous EBs (data not shown). (D) EBs were assessed for PAX6 expression, and the images were used for quantification (Figures S3A and S3B). The pixel count ratio of PAX6+ cells in null EBs (green) is substantially higher than that found in heterozygous EBs (black) (three independent experiments; mean ± SEM; *p < 0.05). (E–F‴) At day 4 after RA treatment, Prkci−/− EBs have more NESTIN- than TUJ1-positive neurons (E and F). However, null cells can still terminally differentiate into NEUROD-, NEUN-, and MAP2-positive cells (F′–F‴). Scale bars, 25 μm in (A and C) and 50 μm in (E). See also Figure S3.
The Generation of Cardiomyocyte and Erythrocyte Progenitors Is Also Favored
Next, we examined ISL1 expression (a cardiac stem cell marker) by immunofluorescence and found that Prkci−/− EBs contained larger ISL1+ clusters compared with Prkci+/− EBs; this was confirmed using an image quantification assay (Figures 4A, 4A′, and 4C). Differentiated cardiac cells and ventral spinal neurons can also express ISL1 (Ericson et al., 1992); therefore, we also examined Nkx2-5 expression, a better stem cell marker and regulator of cardiac progenitor determination (Brown et al., 2004), by RT-PCR and immunofluorescence. In null EBs, Nkx2-5 was upregulated (Figure 4D). In addition, in response to RA, which can promote cardiac fates in vitro (Niebruegge et al., 2008), cells expressing NKX2-5 were more prevalent in null versus heterozygous EBs (Figures 4B and 4B′). The abundant cardiac progenitors found in null EBs were still capable of undergoing differentiation (Figures 4E–4F′).
Figure 4. Cardiomyocyte and Erythrocyte Progenitors Are Increased in Prkci−/− EBs. (A–F′) In (A, A′, E, and E′), Prkci−/− EBs cultured without LIF have more ISL1-positive (cardiac progenitor marker) and α-ACTININ-positive cells compared to heterozygous EBs. (C) At day (d) 9, the pixel count ratio for ISL1 expression indicates that null EBs (green) have larger ISL1+ populations than heterozygous EBs (black) (three independent experiments, n = 20 heterozygous and 21 null EBs total; mean ± SEM; *p < 0.05). In (B, B′, D, F, and F′), RA treatment induces more NKX2-5 (both nuclear and cytoplasmic) and α-ACTININ expression in null EBs. Arrows point to fibers in (F′). (G) Null EBs (green) generate more beating EBs with RA treatment compared to heterozygous EBs (black) (four independent experiments; mean ± SEM; *p < 0.05, ***p < 0.001). (H) Dissociated null EBs of different stages (green) generate more erythrocytes in a colony-forming assay (CFU-E) (four independent experiments; mean ± SEM; **p < 0.01). (I) Examples of red colonies. (J) Gene expression for primitive HSC markers is upregulated in null EBs (relative to heterozygous EBs) (three independent experiments; mean ± SEM). Scale bars, 50 μm in (A, B, and E); 100 μm in (F); and 25 μm in (I). See also Figure S4.
Hba-x expression is restricted to yolk sac blood islands and primitive erythrocyte populations (Lux et al., 2008; Trimborn et al., 1999). Cd34 is also a primitive HSC marker (Sutherland et al., 1992). Next, we determined whether the elevated expression of these markers observed with OpenArray might represent higher numbers of primitive hematopoietic progenitors. Using a colony-forming assay (Baum et al., 1992), we found that red colonies (indicative of erythrocyte differentiation; examples in Figure 4I) were produced significantly earlier and more readily from cells isolated from null versus heterozygous EBs (Figure 4H). By quantitative real-time PCR, upregulation of Hba-x and Cd34 genes confirmed the OpenArray results (Figure 4J). In addition, we found Gata1, an erythropoiesis-specific factor, and Epor, an erythropoietin receptor that mediates erythroid cell proliferation and differentiation (Chiba et al., 1991), to be highly upregulated in null versus heterozygous EBs (Figure 4J). These data suggest that the loss of Prkci promotes the generation of primitive erythroid progenitors that can differentiate into erythrocytes.
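Fold changes like the Hba-x, Cd34, Gata1, and Epor upregulation reported here are conventionally computed from quantitative real-time PCR Ct values with the Livak 2^-ΔΔCt method; a sketch with made-up Ct values (the raw Ct data are not given in this excerpt):

```python
def fold_change_ddct(ct_target_null, ct_ref_null, ct_target_het, ct_ref_het):
    """Livak 2^-ddCt: relative expression of a target gene in null vs.
    heterozygous samples, normalized to a reference (housekeeping) gene."""
    dct_null = ct_target_null - ct_ref_null  # normalize null sample
    dct_het = ct_target_het - ct_ref_het     # normalize heterozygous sample
    ddct = dct_null - dct_het
    return 2.0 ** (-ddct)

# Hypothetical Ct values: the target amplifies 3 cycles earlier in null EBs
# while the reference gene is unchanged, i.e., a 2**3 = 8-fold upregulation.
fc = fold_change_ddct(ct_target_null=25.0, ct_ref_null=18.0,
                      ct_target_het=28.0, ct_ref_het=18.0)
# → 8.0
```

The method assumes roughly equal amplification efficiencies for target and reference; otherwise an efficiency-corrected model is needed.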
To determine whether the tissue stem cells identified above were represented in the OCT4+ population described earlier, we examined the expression of PAX6, ISL1, and OCT4 in adjacent EB sections. We found that cells expressing OCT4 appeared to represent a distinct population from those expressing PAX6 and ISL1 (although some cells were PAX6 and ISL1 double-positive) (Figures S4A–S4C).
Prkci−/− Cells Are More Likely to Inherit NUMB/aNOTCH1 Symmetrically
The enhanced production of both pluripotent and tissue stem cells suggests that the mechanism underlying the action of Prkci in these different contexts is fundamentally similar. Because the Notch pathway controls stem cell self-renewal in many contexts (Hori et al., 2013), and because previous studies implicated a connection between PRKCi function and the Notch pathway (Bultje et al., 2009; Smith et al., 2007), we examined the localization and activation of a key player in the Notch pathway, NUMB (Inaba and Yamashita, 2012). Differences in NUMB expression were first evident in whole EBs, where polarized expression was evident in the ectodermal and endodermal epithelia of heterozygous EBs, while Prkci−/− EBs exhibited a more even distribution (Figures 5A–5B′). To more definitively determine the inheritance of NUMB during cell division, doublets undergoing telophase or cytokinesis were scored for symmetric (evenly distributed in both cells) or asymmetric (unequally distributed) NUMB localization (examples: Figures 5C and 5C′).
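The doublet scoring described above reduces to a proportion comparison: classify each doublet, then report the ratio of symmetric to asymmetric calls, with 1 marking equal inheritance (the red line in Figure 5). A toy sketch with invented counts, not the paper's actual data:

```python
def symmetry_ratio(calls):
    """calls: list of 'sym'/'asym' classifications for scored doublets.
    Returns (% symmetric) / (% asymmetric); 1.0 means equal proportions.
    The shared denominator cancels, so this is simply sym / asym."""
    sym = calls.count("sym")
    asym = calls.count("asym")
    return sym / asym

# Invented counts for illustration: null doublets skew toward symmetric
# NUMB inheritance, heterozygous doublets toward asymmetric.
null_ratio = symmetry_ratio(["sym"] * 60 + ["asym"] * 40)  # above the red line
het_ratio = symmetry_ratio(["sym"] * 40 + ["asym"] * 60)   # below the red line
```

In practice the genotypes would be compared across independent experiments with an appropriate statistical test, as in the figure legend (mean ± SEM, p values).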
Because NUMB can be directly phosphorylated by aPKCs (both PRKCi and PRKCz) (Smith et al., 2007; Zhou et al., 2011), loss of Prkci might be expected to lead to decreased NUMB phosphorylation. Three NUMB phosphorylation sites (Ser7, Ser276, and Ser295) could be aPKC mediated (Smith et al., 2007). By immunofluorescence, we found that phosphorylation at one of the best-characterized sites (Ser276) was strongly reduced in null versus heterozygous EBs, especially in the core (Figures 5F and 5G). Western analysis also confirmed that the levels of pNUMB (Ser276) were decreased in null versus heterozygous EBs (Figure S5F). Thus, genetic inactivation of Prkci leads to a marked decrease in the phosphorylation status of NUMB.
Notch pathway inhibition by NUMB has been observed in flies and mammals (Berdnik et al., 2002; French et al., 2002). Therefore, we investigated whether reduced NUMB activity in Prkci-/- EBs might lead to enhanced NOTCH1 activity and the upregulation of the downstream transcriptional readouts (Meier-Stiegen et al., 2010). An overall increase in NOTCH1 activation was supported by western blot analysis showing that the level of activated NOTCH1 (aNOTCH1) was strongly increased in day-6 and day-10 null versus heterozygous EBs (Figure S5G). This was supported by immunofluorescence in EBs, where widespread strong expression of aNOTCH1 was seen in most null cells (Figures 5I and 5I′), while in heterozygous EBs this pattern was observed only in the OCT4+ cells (Figures 5H and 5H′).
Figure 5. Prkci-/- Cells Preferentially Inherit Symmetric Localization of NUMB and aNOTCH1, and Notch Signaling Is Required for Stem Cell Self-Renewal in Null Cells
(A–B′) In (A and B), day (d)-7 heterozygous EBs have polarized NUMB localization within epithelia and strong expression in the endoderm, while null EBs have a more even distribution. (A′ and B′) Enlarged views. (C and C′) Examples of asymmetric and symmetric NUMB expression. (D) Doublets from day-10 null EBs show more symmetric inheritance than day-10 heterozygous doublets (three independent experiments; mean ± SEM; **p < 0.01). A red line indicates a ratio of 1 (equal percentages symmetric and asymmetric). (E) CD24high null doublets exhibited more symmetric NUMB inheritance than CD24high heterozygous doublets (three independent experiments; mean ± SEM; *p < 0.05). A red line indicates where the ratio is 1. (F and G) Decreased pNUMB (Ser276) is evident in the core of null versus heterozygous EBs (n = 10 of each genotype). (H–I′) In (H and I), aNOTCH1 is strongly expressed in null EBs, including both OCT4+ and OCT4− cells, while strong aNOTCH1 expression is predominant in OCT4+ cells of heterozygous EBs (n = 10 of each genotype). (H′ and I′) Enlarged views of boxed regions. OCT4+ cells are demarcated with dotted lines. (J and J′) OCT4+ cells express HES5 strongly in the nucleus (three independent experiments). (K) Null doublets from dissociated EBs show more symmetric aNOTCH1 inheritance than heterozygous doublets (three independent experiments; mean ± SEM; **p < 0.01). A red line indicates where the ratio is 1. (L) CD24high Prkci-/- doublets exhibit more symmetric aNOTCH1 than CD24high heterozygous doublets (three independent experiments; mean ± SEM; *p < 0.05). A red line indicates where the ratio is 1. (M and M′) Examples of asymmetric and symmetric aNOTCH1 localization. (N and O) Day-3 DMSO-treated null ES colonies show strong AP staining all the way to the colony edge in (N).
Treatment with 3 μM DAPT led to more differentiation in (O). (P–R) OCT4 is strongly expressed in day-4 DMSO-treated null ES cultures (P). With DAPT (Q and R), OCT4 expression is decreased. (S) Working model: in daughter cells that undergo differentiation, PRKCi can associate with PAR3 and PAR6. NUMB is recruited and directly phosphorylated. The activation of NUMB then leads to inhibition of NOTCH1 activation and stimulation of a differentiation/maintenance program. In the absence of Prkci, the PAR3/PAR6 complex cannot assemble (although it may do so minimally with Prkcz). NUMB asymmetric localization and phosphorylation are reduced. Low levels of pNUMB are not sufficient to block NOTCH1 activation, and activated NOTCH1 preserves the stem cell self-renewal program. We suggest that PRKCi functions to drive differentiation by pushing the switch from a symmetric expansion phase to a predominantly asymmetric differentiation and/or maintenance phase. In situations of low or absent PRKCi, we propose that the expansion phase is prolonged. Scale bars, 50 μm in (A, B, F, G, H, I, J, J′, and P–R); 200 μm in (A′ and B′); 25 μm in (C, C′, M, and M′); and 100 μm in (H′, I′, N, and O). See also Figure S5.
Figure 6. Additional Inhibition of PRKCz Results in an Even Higher Percentage of OCT4-, SSEA1-, and STELLA-Positive Cells
(A and A′) After day 4 without LIF, heterozygous ES cells undergo differentiation in the presence of Gö6983, while null ES cells remain as distinct colonies in (A′). (B and B′) Gö6983 stimulates an increase in OCT4+ populations in heterozygous EBs and an even larger OCT4+ population in null EBs in (B′; insets: green and red channels separately). (C–D′) An even higher percentage of cells are OCT4+ (C and C′) and SSEA1+ (D and D′) with Gö6983 treatment (day 12, three independent experiments). (E and F) More STELLA+ clusters containing a larger number of cells are present in drug-treated heterozygous EBs. (G and H) Null EBs also have more STELLA+ clusters and cells. Drug-treated null EBs exhibit a dramatic increase in the number of STELLA+ cells. (I–K) Some cells are double positive for STELLA and VASA in drug-treated null EBs (yellow arrows). There are also VASA-only (green arrows) and STELLA-only cells (red arrows) (three independent experiments). (L–P) Treatment with ZIP results in an increase in OCT4+ and STELLA+ cells. ZIP treatment also results in more cells that are VASA+ (three independent experiments; n = 11 for Prkci+/-, and n = 13 for Prkci+/- + ZIP; n = 14 for Prkci-/-, and n = 20 for Prkci-/- + ZIP; eight EBs assayed for both STELLA and VASA expression). Scale bars, 100 μm in (A and A′); 50 μm in (B and B′); and 25 μm in (E, I, and L).
DISCUSSION

In this report, we suggest that Prkci controls the balance between stem cell expansion and differentiation/maintenance by regulating the activation of NUMB, NOTCH1, and Hes/Hey downstream effector genes. In the absence of Prkci, the pluripotent cell fate is favored, even without LIF, yet cells still retain a broad capacity to differentiate. In addition, loss of Prkci results in enhanced generation of tissue progenitors such as neural stem cells and cardiomyocyte and erythrocyte progenitors. In contrast to recent findings on Prkcz (Dutta et al., 2011), loss of Prkci does not appear to influence STAT3, AKT, or GSK3 signaling but results in decreased ERK1/2 activation. We hypothesize that, in the absence of Prkci, although ERK1/2 inhibition may be involved, it is the decreased NUMB phosphorylation and increased NOTCH1 activation that promote stem and progenitor cell fate. Thus, we conclude that PRKCi, a protein known to be required for cell polarity, also plays an essential role in controlling stem cell fate and generation by regulating NOTCH1 activation.
Notch Activation Drives the Decision to Self-Renew versus Differentiate

Notch plays an important role in balancing stem cell self-renewal and differentiation in a variety of stem cell types and may be one of the key downstream effectors of Prkci signaling. Sustained Notch1 activity in embryonic neural progenitors has been shown to maintain their undifferentiated state (Jadhav et al., 2006). Similarly, sustained constitutive activation of NOTCH1 stimulates the proliferation of immature cardiomyocytes in the rat myocardium (Collesi et al., 2008). In HSCs, overexpression of constitutively active NOTCH1 in hematopoietic progenitors and stem cells supports both primitive and definitive HSC self-renewal (Stier et al., 2002). Together, these studies suggest that activation and/or sustained Notch signaling can lead to an increase in certain tissue stem cell populations. Thus, a working model for how tissue stem cell populations are favored in the absence of Prkci involves a sequence of events that ultimately leads to Notch activation. Recent studies have shown that aPKCs can be found in a complex with NUMB in both Drosophila and mammalian cells (Smith et al., 2007; Zhou et al., 2011); hence, in our working model (Figure 5S), we propose that the localization and phosphorylation of NUMB are highly dependent on the activity of PRKCi. When Prkci is downregulated or absent (as shown here), cell polarity is not promoted, leading to diffuse distribution and decreased phosphorylation of NUMB. Without active NUMB, NOTCH1 activation is enhanced, Hes/Hey genes are upregulated, and stem/progenitor fate generation is favored. To initiate differentiation, polarization could be stochastically determined but could also depend on external cues such as the presentation of certain ligands or extracellular matrix (ECM) proteins (Habib et al., 2013). When PRKCi is active and the cell becomes polarized, a trimeric complex is formed with PRKCi, PAR3, and PAR6.
NUMB is then recruited and phosphorylated, leading to Notch inactivation, repression of downstream Hes/Hey genes, and a bias toward differentiation (see Figure 5S). Support for this working model comes from studies in Drosophila showing that the aPKC complex is essential for Numb activation and asymmetric localization (Knoblich, 2008; Smith et al., 2007; Wang et al., 2006). Additional studies on mouse neural progenitors show that regulating Numb localization and Notch activation is critical for maintaining the proper number of stem/progenitor cells in balance with differentiation (Bultje et al., 2009). Thus, an important function of PRKCi may be to regulate the switch from symmetric expansion of stem/progenitor cells to an asymmetric differentiation/maintenance phase. In situations of low or absent PRKCi, we propose that the expansion phase is favored. Thus, temporarily blocking either, or both, of the aPKC isozymes may be a powerful approach for expanding specific stem/progenitor populations for use in basic research or for therapeutic applications.
Although we do not see changes in the activation status of the STAT3, AKT, or GSK3 pathways, loss of Prkci results in an inhibition of ERK1/2 (Figures 2A and 2B). This result is consistent with findings that ERK1/2 inhibition both correlates with and directly increases ES cell self-renewal (Burdon et al., 1999). Modulation of ERK1/2 activity by Prkci has been observed in cancer cells and chondrocytes (Litherland et al., 2010; Murray et al., 2011). Although it is not clear whether a direct interaction exists between Prkci and ERK1/2, Prkcz directly interacts with ERK1/2 in the mouse liver and in hypoxia-exposed cells (Das et al., 2008; Peng et al., 2008). The Prkcz isozyme is still expressed in Prkci null cells but evidently cannot sufficiently compensate and activate the pathway normally. Furthermore, knocking down Prkcz function in ES cells does not result in ERK1/2 inhibition, suggesting that this isozyme does not impact ERK1/2 signaling in ES cells (Dutta et al., 2011). Therefore, although PRKCi may interact with ERK1/2 and be directly required for its activation, ERK1/2 inhibition could also be a readout for cells that are more stem-like. Further studies will be needed to address this question.
Utility of Inhibiting aPKC Function

Loss of Prkci resulted in EBs that contained slightly more STELLA+ cells than EBs made from +/- cells. Furthermore, inhibition of both aPKC isozymes by treating Prkci null cells with the PKC inhibitor Gö6983, or with the more specific inhibitor ZIP, strongly promoted the generation of large clusters of STELLA+ and VASA+ cells, suggesting that inhibition of both isozymes is important for PGC progenitor expansion (Figure 6). It is unclear what the mechanism for this might be; however, one possibility is that blocking both aPKCs is necessary to promote NOTCH1 activation in PGCs or in PGC progenitor cells that may ordinarily have strong inhibitions to expansion (Feng et al., 2014). Regardless of mechanism, generating PGC-like cells in culture is notoriously challenging, and our results provide a method for future studies on PGC specification and differentiation. Expansion of stem/progenitor pools may not be desirable in the context of cancer. Prkci has been characterized as a human oncogene, a useful prognostic cancer marker, and a therapeutic target for cancer treatment. Overexpression of Prkci is found in epithelial cancers (Fields and Regala, 2007), and Prkci inhibitors are being evaluated as candidate cancer therapies (Atwood et al., 2013; Mansfield et al., 2013). However, because our results show that Prkci inhibition leads to enhanced stem cell production in vitro, Prkci inhibitor treatment as a cancer therapy might have unintended consequences (tumor overgrowth), depending on the context and treatment regimen. Thus, extending our findings to human stem and cancer stem cells is needed.
In summary, we demonstrate here that loss of Prkci leads to the generation of abundant pluripotent cells, even under differentiation conditions. In addition, we show that tissue stem cells such as neural stem cells, primitive erythrocytes, and cardiomyocyte progenitors can also be abundantly produced in the absence of Prkci. These increases in stem cell production correlate with decreased NUMB activation and symmetric NUMB localization, and they require Notch signaling. Further inhibition of Prkcz may have an additive effect and can enhance the production of PGC-like cells. Thus, Prkci (along with Prkcz) may play key roles in stem cell self-renewal and differentiation by regulating the Notch pathway. Furthermore, inhibition of Prkci and/or Prkcz activity with specific small-molecule inhibitors might be a powerful method to boost stem cell production in the context of injury or disease.
The hypothesis that potassium ponds were the cradles of life enriches the gamut of ideas about the possible conditions of pre-biological evolution on the primeval Earth, but it does not bring us closer to solving the real problem of the origin of life. The gist of the matter lies in the mechanism that delimits two environments: the intracellular environment and the habitat of protocells. Since the sodium-potassium pump (Na+/K+-ATPase) was discovered, no molecular model has been proposed for a predecessor of the modern sodium pump. This gap gave rise to the idea of the potassium pond, in which protocells would not need a sodium pump. However, current notions of the operation of living cells come into conflict even with physical laws when one tries to use them to explain the origin and functioning of protocells. Thus, habitual explanations of the physical properties of living cells prove inapplicable to the corresponding properties of Sidney Fox's microspheres. Likewise, existing approaches to the problem of the origin of life see no need for the comparative study of living cells and cell models, i.e., assemblies of biological and artificial small molecules and macromolecules under physical conditions conducive to the origin of life. The time has come to conduct comprehensive research into the fundamental physical properties of protocells and to create a new discipline, protocell physiology or protophysiology, which should bring us much closer to solving the problem of the origin of life.
There is a statement we constantly come across in the scientific and popular-science literature: the ion composition of the internal environment of the body of humans and animals, in which all of its cells are immersed, is close to that of seawater. This observation appeared in the literature as early as 100 years ago, when it first became possible to investigate the ion composition of biological liquids.
This similarity between the internal environment of the body and the sea is quite obvious: in both seawater and blood plasma there are one to two orders of magnitude more Na+ ions than K+. It is this composition that suggests life originated in the primeval ocean (the memory of which has since been sustained by the internal environment of the body), and that the first cells delineated themselves from seawater using a weakly permeable membrane, so that their internal environment became special, suitable for the chemical and physical processes needed to sustain life. Indeed, the ratio of these cations in the cytoplasm is the exact reverse of their ratio in seawater: there is much more K+ in it than Na+. In fact, physiological processes are possible only in an environment where potassium prevails over sodium. Therefore, any theory of the origin of life must explain how such a deep delimitation (distinction) between the two environments could occur: the intracellular environment, in which vitally important processes take place, and the external environment, which provides the cell with necessary materials and conditions.
For the protocell to separate from seawater, a mechanism must arise that creates and maintains the ion asymmetry between the primeval cell and its environs. We normally picture a mechanism of this kind as an isolating lipid membrane with a molecular ion pump, the Na+/K+-ATPase, built into it. If life originated in seawater, the origin of the first cell inevitably comes down to the origin of the sodium pump and the structure supporting it, the lipid membrane, without which the work of any pump would make little sense. It would seem, then, that life was born under conditions genuinely adverse, even ruinous, to it.
The great basic question of science: is a membrane compartment or a non-membrane phase compartment (biophase) the physical basis for the origin of life?
1. If life originated in seawater, the origin of the first cell inevitably comes down to the origin of the sodium pump and any structure supporting it – the lipid membrane – without which the work of any pump would make little sense.
2. Since the sodium-potassium pump (Na+/K+-ATPase) was discovered, no molecular model has been proposed for a predecessor of the modern sodium pump. Neither Miller’s electrical charges, nor Fox’s amino-acid condensation, nor building ready-made biomolecules into coacervates; none of this has managed to lead to the self-origination of the progenitor of the ion pump even in favourable lab conditions.
3. In 2007, two articles were released simultaneously positing that life originated not in seawater, as previously thought, but in smaller bodies of water with a K+/Na+ ratio suited to sustaining life. Under such conditions a sodium pump is not needed, and the pump could originate later. But why would the pump be needed at all if the K+/Na+ ratio is already favourable? The origin of the sodium pump under conditions where there is no natural need for it may require the agency of Providence.
4. Potassium Big Bang on Earth instead of potassium ponds.
5. Fox’s microspheres do not need potassium ponds.
6. Although Fox's microspheres have no fully functional membrane with sodium pumps and specific ion channels, they generate action potentials similar to those produced by nerve cells and, in addition, have ion channels that open and close spontaneously. This ability of the microspheres contradicts the generally accepted ideas about the mechanism of generation of biological electrical potentials.
7. The Hodgkin-Huxley model of action potentials is equally compatible with both the nerve cell and Fox's microsphere.
8. Biophase as the main subject of protophysiology. In the past they considered the living cell as a non-membrane phase compartment with different physical properties in comparison to the surrounding medium, and this physical difference plays a key role in cell function. According to a new take on an old phase, non-membrane phase compartments play an important role in the functioning of the cell nucleus, nuclear envelope and then of cytoplasm. Somebody sees the compartments even as temporary organelles. According to available data, the phase compartments can play a key role in cell signaling. In this historical context, studies in recent years dedicated to non-membrane phase compartments in the living cells sound sensational.
9. It is essentially a Protocell World which weaves the known RNA World, DNA World and Protein World into unity.
10. In the view of the non-membrane phase approach, the use of liposomes and other membrane (non-biophase) cell models to solve the issue of the origin of life is a dead end for investigation. I would be grat
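Point 7 above can be made concrete: the Hodgkin-Huxley equations describe excitability purely in terms of voltage-dependent conductances and say nothing about what physical structure carries them, which is why the same formalism fits a neuron or, in principle, a microsphere. A minimal sketch using the standard squid-axon parameters follows; the stimulus amplitude and timing are illustrative choices, not values taken from the text.

```python
import math

# Textbook Hodgkin-Huxley parameters (squid giant axon, modern convention
# with resting potential near -65 mV).
C_M = 1.0                                 # membrane capacitance (uF/cm^2)
G_NA, G_K, G_L = 120.0, 36.0, 0.3         # peak conductances (mS/cm^2)
E_NA, E_K, E_L = 50.0, -77.0, -54.387     # reversal potentials (mV)

def alpha_m(v):
    u = v + 40.0
    return 1.0 if abs(u) < 1e-6 else 0.1 * u / (1.0 - math.exp(-u / 10.0))

def beta_m(v):  return 4.0 * math.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))

def alpha_n(v):
    u = v + 55.0
    return 0.1 if abs(u) < 1e-6 else 0.01 * u / (1.0 - math.exp(-u / 10.0))

def beta_n(v):  return 0.125 * math.exp(-(v + 65.0) / 80.0)

def simulate(i_amp=10.0, t_stop=50.0, dt=0.01, t_on=5.0):
    """Forward-Euler integration; returns the membrane-voltage trace (mV)."""
    v = -65.0
    m = alpha_m(v) / (alpha_m(v) + beta_m(v))   # gates start at rest
    h = alpha_h(v) / (alpha_h(v) + beta_h(v))
    n = alpha_n(v) / (alpha_n(v) + beta_n(v))
    trace = [v]
    for k in range(round(t_stop / dt)):
        t = k * dt
        i_ext = i_amp if t >= t_on else 0.0     # step-current stimulus
        i_ion = (G_NA * m**3 * h * (v - E_NA)
                 + G_K * n**4 * (v - E_K)
                 + G_L * (v - E_L))
        v += dt * (i_ext - i_ion) / C_M
        m += dt * (alpha_m(v) * (1.0 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1.0 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1.0 - n) - beta_n(v) * n)
        trace.append(v)
    return trace

trace = simulate()
print(f"peak voltage: {max(trace):.1f} mV")   # spikes overshoot 0 mV
```

Nothing in these equations refers to a lipid bilayer or a pump as such; any structure exhibiting comparable voltage-dependent conductances would produce the same waveform, which is the point the essay makes about Fox's microspheres.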
One hundred years ago this Wednesday, Albert Einstein gave the last of a series of presentations to the Prussian Academy of Sciences, which marks the official completion of his General Theory of Relativity. This anniversary is generating a good deal of press and various celebratory events, such as the premiere of a new documentary special. If you prefer your physics explanations in the plainest language possible, there’s even an “Up Goer Five” version (personally, I don’t find these all that illuminating, but lots of people seem to love it).
Einstein is, of course, the most iconic scientist in history, and much of the attention to this week’s centennial will center on the idea of his singular genius. Honestly, general relativity is esoteric enough that were it not for Einstein’s personal fame, there probably wouldn’t be all that much attention paid to this outside of the specialist science audience.
But, of course, while the notion of Einstein as a lone, unrecognized genius is a big part of his myth, he didn't create relativity entirely on his own, as this article in Nature News makes clear. The genesis of relativity is a single simple idea, but even in the early stages, when he developed Special Relativity while working as a patent clerk, he honed his ideas through frequent discussions with friends and colleagues. Most notable among these was probably Michele Besso, whom Einstein later referred to as "the best sounding board in Europe."
And most of the work on General Relativity came not when Einstein was toiling in obscurity, but after he had begun to climb the academic ladder in Europe. In the ten years between the Special and General theories, he went through a series of faculty jobs of increasing prestige. He also laboriously learned a great deal of mathematics in order to reach the final form of the theory, largely with the assistance of his friend Marcel Grossmann. The path to General Relativity was neither simple nor solitary, and the Nature piece documents both the mis-steps along the way and the various people who helped out.
While Einstein wasn’t working alone, though, the Nature piece also makes an indirect case for his status as a genius worth celebrating. Not because of the way he solved the problem, but through the choice of problem to solve. Einstein pursued a theory that would incorporate gravitation into relativity with dogged determination through those years, but he was one of a very few people working on it. There were a couple of other theories kicking around, particularly Gunnar Nordström’s, but these didn’t generate all that much attention. The mathematician David Hilbert nearly scooped Einstein with the final form of the field equations in November of 1915 (some say he did get there first), but Hilbert was a latecomer who only got interested in the problem of gravitation after hearing about it from Einstein, and his success was a matter of greater familiarity with the necessary math. One of the books I used when I taught a relativity class last year quoted Hilbert as saying that “every child in the streets of Göttingen knows more about four-dimensional geometry than Einstein,” but that Einstein’s physical insight got him to the theory before superior mathematicians.
16 November 2015 (corrected 17 November 2015). Nature, Nov 2015; 527(7578).
Lesser-known and junior colleagues helped the great physicist to piece together his general theory of relativity, explain Michel Janssen and Jürgen Renn.
Marcel Grossmann (left) and Michele Besso (right), university friends of Albert Einstein (centre), both made important contributions to general relativity.
A century ago, in November 1915, Albert Einstein published his general theory of relativity in four short papers in the proceedings of the Prussian Academy of Sciences in Berlin [1]. The landmark theory is often presented as the work of a lone genius. In fact, the physicist received a great deal of help from friends and colleagues, most of whom never rose to prominence and have been forgotten [2, 3, 4, 5]. (For full reference details of all Einstein texts mentioned in this piece, see Supplementary Information.)
Here we tell the story of how their insights were woven into the final version of the theory. Two friends from Einstein’s student days — Marcel Grossmann and Michele Besso — were particularly important. Grossmann was a gifted mathematician and organized student who helped the more visionary and fanciful Einstein at crucial moments. Besso was an engineer, imaginative and somewhat disorganized, and a caring and lifelong friend to Einstein. A cast of others contributed too.
Einstein met Grossmann and Besso at the Swiss Federal Polytechnical School in Zurich6 — later renamed the Swiss Federal Institute of Technology (Eidgenössische Technische Hochschule; ETH) — where, between 1896 and 1900, he studied to become a school teacher in physics and mathematics. Einstein also met his future wife at the ETH, classmate Mileva Marić. Legend has it that Einstein often skipped class and relied on Grossmann’s notes to pass exams.
Grossmann’s father helped Einstein to secure a position at the patent office in Berne in 1902, where Besso joined him two years later. Discussions between Besso and Einstein earned the former the sole acknowledgment in the most famous of Einstein’s 1905 papers, the one introducing the special theory of relativity. As well as publishing the papers that made 1905 his annus mirabilis, Einstein completed his dissertation that year to earn a PhD in physics from the University of Zurich.
In 1907, while still at the patent office, he started to think about extending the principle of relativity from uniform to arbitrary motion through a new theory of gravity. Presciently, Einstein wrote to his friend Conrad Habicht — whom he knew from a reading group in Berne mockingly called the Olympia Academy by its three members — saying that he hoped that this new theory would account for a discrepancy of about 43″ (seconds of arc) per century between Newtonian predictions and observations of the motion of Mercury's perihelion, the point of its orbit closest to the Sun.
Einstein started to work in earnest on this new theory only after he left the patent office in 1909, to take up professorships first at the University of Zurich and two years later at the Charles University in Prague. He realized that gravity must be incorporated into the structure of space-time, such that a particle subject to no other force would follow the straightest possible trajectory through a curved space-time.
In 1912, Einstein returned to Zurich and was reunited with Grossmann at the ETH. The pair joined forces to generate a fully fledged theory. The relevant mathematics was Gauss's theory of curved surfaces, which Einstein probably learned from Grossmann's notes. As we know from recollected conversations, Einstein told Grossmann [7]: "You must help me, or else I'll go crazy."
Their collaboration, recorded in Einstein’s ‘Zurich notebook‘, resulted in a joint paper published in June 1913, known as the Entwurf (‘outline’) paper. The main advance between this 1913 Entwurf theory and the general relativity theory of November 1915 are the field equations, which determine how matter curves space-time. The final field equations are ‘generally covariant’: they retain their form no matter what system of coordinates is chosen to express them. The covariance of the Entwurf field equations, by contrast, was severely limited.
In May 1913, as he and Grossmann put the finishing touches to their Entwurf paper, Einstein was asked to lecture at the annual meeting of the Society of German Natural Scientists and Physicians to be held that September in Vienna, an invitation that reflects the high esteem in which the 34-year-old was held by his peers.
In July 1913, Max Planck and Walther Nernst, two leading physicists from Berlin, came to Zurich to offer Einstein a well-paid and teaching-free position at the Prussian Academy of Sciences in Berlin, which he swiftly accepted and took up in March 1914. Gravity was not a pressing problem for Planck and Nernst; they were mainly interested in what Einstein could do for quantum physics. (It was Walther Nernst who advised that Germany could not engage in WWI and win unless it was a short war).
Several new theories had been proposed in which gravity, like electromagnetism, was represented by a field in the flat space-time of special relativity. A particularly promising one came from the young Finnish physicist Gunnar Nordström. In his Vienna lecture, Einstein compared his own Entwurf theory to Nordström’s theory. Einstein worked on both theories between May and late August 1913, when he submitted the text of his lecture for publication in the proceedings of the 1913 Vienna meeting.
In the summer of 1913, Nordström visited Einstein in Zurich. Einstein convinced him that the source of the gravitational field in both their theories should be constructed out of the ‘energy–momentum tensor’: in pre-relativistic theories, the density and the flow of energy and momentum were represented by separate quantities; in relativity theory, they are combined into one quantity with ten different components.
ETH Zurich, where Einstein met friends with whom he worked on general relativity.
This energy–momentum tensor made its first appearance in 1907–8 in the special-relativistic reformulation of the theory of electrodynamics of James Clerk Maxwell and Hendrik Antoon Lorentz by Hermann Minkowski. It soon became clear that an energy–momentum tensor could be defined for physical systems other than electromagnetic fields. The tensor took centre stage in the new relativistic mechanics presented in the first textbook on special relativity, Das Relativitätsprinzip, written by Max Laue in 1911. In 1912, a young Viennese physicist, Friedrich Kottler, generalized Laue's formalism from flat to curved space-time. Einstein and Grossmann relied on this generalization in their formulation of the Entwurf theory. During his Vienna lecture, Einstein called for Kottler to stand up and be recognized for this work [8].
Einstein also worked with Besso that summer to investigate whether the Entwurf theory could account for the missing 43″ per century of Mercury's perihelion motion. Unfortunately, they found that it could explain only 18″. Nordström's theory, Besso checked later, gave 7″ in the wrong direction. These calculations are preserved in the 'Einstein–Besso manuscript' of 1913.
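The full 43″ that the Entwurf theory fell short of is what the final general-relativistic formula for perihelion advance, Δφ = 6πGM/(c²a(1−e²)) per orbit, delivers. A quick numerical check, using standard published constants and Mercury's orbital elements (this is a sketch of the modern textbook calculation, not a reproduction of the Einstein-Besso manuscript):

```python
import math

# First-order general-relativistic perihelion advance per orbit:
#   dphi = 6*pi*G*M / (c^2 * a * (1 - e^2))
GM_SUN = 1.32712440018e20   # heliocentric gravitational parameter (m^3/s^2)
C = 2.99792458e8            # speed of light (m/s)
A = 5.7909e10               # Mercury's semi-major axis (m)
ECC = 0.20563               # Mercury's orbital eccentricity
PERIOD_DAYS = 87.969        # Mercury's orbital period (days)

dphi = 6.0 * math.pi * GM_SUN / (C**2 * A * (1.0 - ECC**2))  # rad per orbit
orbits_per_century = 36525.0 / PERIOD_DAYS
arcsec = dphi * orbits_per_century * (180.0 / math.pi) * 3600.0
print(f"{arcsec:.1f} arcsec per century")   # ~43, the anomalous residual
```

The result, about 43 arcseconds per century, matches the observed anomaly with no free parameters, which is why the November 1915 perihelion calculation convinced Einstein he finally had the right field equations.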
Besso contributed significantly to the calculations and raised interesting questions. He wondered, for instance, whether the Entwurf field equations have an unambiguous solution that uniquely determines the gravitational field of the Sun. Historical analysis of extant manuscripts suggests that this query gave Einstein the idea for an argument that reconciled him with the restricted covariance of the Entwurf equations. This 'hole argument' seemed to show that generally covariant field equations cannot uniquely determine the gravitational field and are therefore inadmissible [9].
Einstein and Besso also checked whether the Entwurf equations hold in a rotating coordinate system. In that case the inertial forces of rotation, such as the centrifugal force we experience on a merry-go-round, can be interpreted as gravitational forces. The theory seemed to pass this test. In August 1913, however, Besso warned him that it did not. Einstein did not heed the warning, which would come back to haunt him.
In his lecture in Vienna in September 1913, Einstein concluded his comparison of the two theories with a call for experiment to decide. The Entwurf theory predicts that gravity bends light, whereas Nordström’s does not. It would take another five years to find out. Erwin Finlay Freundlich, a junior astronomer in Berlin with whom Einstein had been in touch since his days in Prague, travelled to Crimea for the solar eclipse of August 1914 to determine whether gravity bends light but was interned by the Russians just as the First World War broke out. Finally, in 1919, English astronomer Arthur Eddington confirmed Einstein’s prediction of light bending by observing the deflection of distant stars seen close to the Sun’s edge during another eclipse, making Einstein a household name [10].
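The figure Eddington confirmed can be checked with a back-of-envelope calculation. The final (1915) theory predicts a deflection of δ = 4GM/(c²R) for a light ray grazing the Sun, about 1.75 arcseconds (the earlier Entwurf theory predicted half this value). The formula and the physical constants below are standard textbook values, not taken from the article itself:

```python
import math

# Standard physical constants, SI units (assumed, not from the article)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m

# Full general-relativistic deflection for a ray grazing the solar limb:
# delta = 4GM / (c^2 R), in radians
delta_rad = 4 * G * M_sun / (c**2 * R_sun)

# Convert radians to arcseconds (1 rad ≈ 206264.8")
delta_arcsec = delta_rad * 206264.8
print(f"{delta_arcsec:.2f} arcsec")  # ~1.75
```

Eddington’s 1919 measurements were consistent with this full value rather than the half-sized Newtonian/Entwurf prediction, which is what made the eclipse result decisive.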
Back in Zurich, after the Vienna lecture, Einstein teamed up with another young physicist, Adriaan Fokker, a student of Lorentz, to reformulate the Nordström theory using the same kind of mathematics that he and Grossmann had used to formulate the Entwurf theory. Einstein and Fokker showed that in both theories the gravitational field can be incorporated into the structure of a curved space-time. This work also gave Einstein a clearer picture of the structure of the Entwurf theory, which helped him and Grossmann in a second joint paper on the theory. By the time it was published in May 1914, Einstein had left for Berlin.
Turmoil erupted soon after the move. Einstein’s marriage fell apart and Mileva moved back to Zurich with their two young sons. Albert renewed the affair he had started and broken off two years before with his cousin Elsa Löwenthal (née Einstein). The First World War began. Berlin’s scientific elite showed no interest in the Entwurf theory, although renowned colleagues elsewhere did, such as Lorentz and Paul Ehrenfest in Leiden, the Netherlands. Einstein soldiered on.
By the end of 1914, his confidence had grown enough to write a long exposition of the theory. But in the summer of 1915, after a series of his lectures in Göttingen had piqued the interest of the great mathematician David Hilbert, Einstein started to have serious doubts. He discovered to his dismay that the Entwurf theory does not make rotational motion relative. Besso was right. Einstein wrote to Freundlich for help: his “mind was in a deep rut”, so he hoped that the young astronomer as “a fellow human being with unspoiled brain matter” could tell him what he was doing wrong. Freundlich could not help him.
The problem, Einstein soon realized, lay with the Entwurf field equations. Worried that Hilbert might beat him to the punch, Einstein rushed new equations into print in early November 1915, modifying them the following week and again two weeks later in subsequent papers submitted to the Prussian Academy. The field equations were generally covariant at last.
In the first November paper, Einstein wrote that the theory was “a real triumph” of the mathematics of Carl Friedrich Gauss and Bernhard Riemann. He recalled in this paper that he and Grossmann had considered the same equations before, and suggested that if only they had allowed themselves to be guided by pure mathematics rather than physics, they would never have accepted equations of limited covariance in the first place.
Other passages in the first November paper, however, as well as his other papers and correspondence in 1913–15, tell a different story. It was thanks to the elaboration of the Entwurf theory, with the help of Grossmann, Besso, Nordström and Fokker, that Einstein saw how to solve the problems with the physical interpretation of these equations that had previously defeated him.
In setting out the generally covariant field equations in the second and fourth papers, he made no mention of the hole argument. Only when Besso and Ehrenfest pressed him a few weeks after the final paper, dated 25 November, did Einstein find a way out of this bind — by realizing that only coincident events and not coordinates have physical meaning. Besso had suggested a similar escape two years earlier, which Einstein had brusquely rejected [2].
In his third November paper, Einstein returned to the perihelion motion of Mercury. Inserting the astronomical data supplied by Freundlich into the formula he derived using his new theory, Einstein arrived at the result of 43″ per century and could thus fully account for the difference between Newtonian theory and observation. “Congratulations on conquering the perihelion motion,” Hilbert wrote to him on 19 November. “If I could calculate as fast as you can,” he quipped, “the hydrogen atom would have to bring a note from home to be excused for not radiating.”
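Einstein’s 43″ can be reproduced from his 1915 perihelion formula, which gives a relativistic advance per orbit of Δφ = 24π³a²/(T²c²(1−e²)). The orbital data below are standard modern values for Mercury, used here as stand-ins for the figures Freundlich supplied:

```python
import math

# Orbital data for Mercury (standard modern values, assumed)
a = 5.791e10         # semi-major axis, m
e = 0.2056           # orbital eccentricity
T = 87.969 * 86400   # orbital period, s
c = 2.998e8          # speed of light, m/s

# Einstein's 1915 result for the relativistic perihelion advance per orbit:
# d_phi = 24 pi^3 a^2 / (T^2 c^2 (1 - e^2)), in radians
dphi = 24 * math.pi**3 * a**2 / (T**2 * c**2 * (1 - e**2))

# Accumulate over a century and convert radians to arcseconds
orbits_per_century = 100 * 365.25 * 86400 / T
advance = dphi * orbits_per_century * 206264.8
print(f"{advance:.1f} arcsec per century")  # ~43
```

The Entwurf equations, by contrast, yield only the 18″ that Einstein and Besso found in 1913, which is why this calculation served as such a sharp test of the new field equations.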
Einstein kept quiet on why he had been able to do the calculations so fast. They were minor variations on the ones he had done with Besso in 1913. He probably enjoyed giving Hilbert a taste of his own medicine: in a letter to Ehrenfest written in May 1916, Einstein characterized Hilbert’s style as “creating the impression of being superhuman by obfuscating one’s methods”.
Einstein emphasized that his general theory of relativity built on the work of Gauss and Riemann, giants of the mathematical world. But it also built on the work of towering figures in physics, such as Maxwell and Lorentz, and on the work of researchers of lesser stature, notably Grossmann, Besso, Freundlich, Kottler, Nordström and Fokker. As with many other major breakthroughs in the history of science, Einstein was standing on the shoulders of many scientists, not just the proverbial giants [4].
Berlin’s physics elite (Fritz Haber, Walther Nernst, Heinrich Rubens, Max Planck) and Einstein’s old and new family (Mileva Einstein-Marić and their sons Eduard and Hans Albert; Elsa Einstein-Löwenthal and her daughters Ilse and Margot) are watching as Einstein pursues his new theory of gravity and his idée fixe of generalizing the relativity principle, carried by giants of both physics and mathematics (Isaac Newton, James Clerk Maxwell, Carl Friedrich Gauss, Bernhard Riemann) and scientists of lesser stature (Marcel Grossmann, Gunnar Nordström, Erwin Finlay Freundlich, Michele Besso).
Twitter, Google, LinkedIn Enter in the Curation Foray: What’s Up With That?
Reporter: Stephen J. Williams, Ph.D.
Recently Twitter announced a new feature which it hopes will increase engagement on its platform. Originally dubbed Project Lightning and now called Moments, this feature relies on many human curators who aggregate and curate tweets surrounding individual live events (which used to be under #Live).
Madhu Muthukumar (@justmadhu), Twitter’s Product Manager, described Moments in a blog post:
“Every day, people share hundreds of millions of tweets. Among them are things you can’t experience anywhere but on Twitter: conversations between world leaders and celebrities, citizens reporting events as they happen, cultural memes, live commentary on the night’s big game, and many more,” the blog post noted. “We know finding these only-on-Twitter moments can be a challenge, especially if you haven’t followed certain accounts. But it doesn’t have to be.”
Moments is a new tab on Twitter’s mobile and desktop home screens where the company will curate trending topics as they’re unfolding in real-time — from citizen-reported news to cultural memes to sports events and more. Moments will fall into five total categories, including “Today,” “News,” “Sports,” “Entertainment” and “Fun.” (Source: Fox)
What’s a challenge for Google is a direct threat to Twitter’s existence.
For all the talk about what doesn’t work in journalism, curation works. Following the news, collecting it and commenting, and encouraging discussion, is the “secret sauce” for companies like Buzzfeed, Vox, Vice and The Huffington Post, which often wind up getting more traffic from a story at, say, The New York Times (NYSE:NYT), than the Times does as a result.
Curation is, in some ways, a throwback to the pre-Internet era. It’s done by people. (At least I think I’m a people.) So as odd as it is for Twitter (NYSE:TWTR) to announce it will curate live events it’s even odder to see Google (NASDAQ:GOOG) (NASDAQ:GOOGL) doing it in a project called YouTube Newswire.
Buzzfeed, Google’s content curation platform, made for desktop as well as a mobile app, allows sharing of curated news, viral videos.
The feel of both Twitter’s and Google’s content curation will be like a newspaper’s, with an army of human content curators determining what is the trendiest news to read or videos to watch.
BuzzFeed articles, or at least, the headlines can easily be mined from any social network but reading the whole article still requires that you open the link within the app or outside using a mobile web browser. Loading takes some time–a few seconds longer. Try browsing the BuzzFeed feed on the app and you’ll notice the obvious difference.
Google News: Less focused on social signals than textual ones, Google News uses its analytic tools to group together related stories and highlight the biggest ones. Unlike Techmeme, it’s entirely driven by algorithms, and that means it often makes weird choices. I’ve heard that Google uses social sharing signals from Google+ to help determine which stories appear on Google News, but have never heard definitive confirmation of that — and now that Google+ is all but dead, it’s mostly moot. I find Google News an unsatisfying home page, but it is a good place to search for news once you’ve found it.
Now WordPress Too!
There is also a WordPress content curation plugin called Curation Traffic. According to its description:
You Own the Platform, You Benefit from the Traffic
“The Curation Traffic™ System is a complete WordPress based content curation solution. Giving you all the tools and strategies you need to put content curation into action.
It is push-button simple and seamlessly integrates with any WordPress site or blog.
With Curation Traffic™, curating your first post is as easy as clicking “Curate” and the same post that may originally only been sent to Facebook or Twitter is now sent to your own site that you control, you benefit from, and still goes across all of your social sites.”
The theory: the more you share on your own platform, the more engagement you generate and the better the marketing experience. And with all the WordPress users out there, there is already an army of human curators.
So That’s Great For News But What About Science and Medicine?
The news and trendy topics such as fashion and music are common to most people’s experience. However, more technical areas such as science, medicine and engineering are outside most people’s domain, so aggregated content needs a process of peer review to sort, basically, fact from fiction. On social media this is extremely important, as sensational stories of breakthroughs can spread virally without proper vetting and can even influence patients’ decisions about their own care.
Expertise Depends on Experience
In steps the human experience. On this site (www.pharmaceuticalintelligence.com) we attempt to do just this. A consortium of M.D.s, Ph.D.s and other medical professionals spend their own time not only aggregating topics of interest but also curating specific topics, adding further insight from reputable sources across the web.
For instance, using the Twitter platform, we curate #live meeting notes and tweets from meeting attendees (please see links below and links within) to provide live conference coverage; this curation and analysis yields not only meeting engagement but also unique insights into the presentations.
In addition, the use of a WordPress platform allows easy sharing among many different social platforms, including Twitter, Google+, LinkedIn, Pinterest, etc.
Hopefully, this will catch on with the big powers of Twitter, Google and Facebook, leading them to realize that armies of niche curation communities exist which they can draw on for expert curation in the biosciences.