Eight Subcellular Pathologies driving Chronic Metabolic Diseases – Methods for Mapping Bioelectronic Adjustable Measurements as potential new Therapeutics: Impact on Pharmaceuticals in Use
In this curation we wish to present two breakthrough goals:
Goal 1:
Exposition of a new direction of research leading to a more comprehensive understanding of the Metabolic Dysfunctional Diseases implicated in the emergence of the two leading causes of human mortality in the world in 2023: (a) Cardiovascular Diseases, and (b) Cancer
Goal 2:
Development of Methods for Mapping Bioelectronic Adjustable Measurements as potential new Therapeutics for these eight subcellular causes of chronic metabolic diseases. It is anticipated that this will have an impact on the Pharmaceuticals of the future, representing a change from the current treatment protocols for Metabolic Dysfunctional Diseases.
According to Dr. Robert Lustig, MD, an American pediatric endocrinologist and Professor Emeritus of Pediatrics in the Division of Endocrinology at the University of California, San Francisco, where he specialized in neuroendocrinology and childhood obesity, there are eight subcellular pathologies that drive chronic metabolic diseases.
These eight subcellular pathologies cannot be measured at the present time.
In this curation we will attempt to explore methods of measurement for each of these eight pathologies by harnessing the promise of the emerging field known as Bioelectronics.
The eight currently unmeasurable subcellular pathologies that drive chronic metabolic diseases
Glycation
Oxidative Stress
Mitochondrial dysfunction [beta-oxidation Ac CoA malonyl fatty acid]
Insulin resistance/sensitivity [more important than BMI], a known driver of cancer development
Membrane instability
Inflammation in the gut [mucin layer and tight junctions]
Epigenetics/Methylation
Autophagy [AMPKbeta1 improvement in health span]
Diseases that are not Diseases: there are no drugs for them; only diet modification will help
Robert Lustig, M.D. on the Subcellular Processes That Belie Chronic Disease
These eight Subcellular Pathologies driving Chronic Metabolic Diseases are becoming our focus for exploration of the promise of Bioelectronics for two pursuits:
Will Bioelectronics be deemed helpful in the measurement of each of the eight pathological processes that underlie and drive the chronic metabolic syndrome(s) and disease(s)?
IF we are able to suggest new measurements for currently unmeasurable, health-harming processes, THEN we will attempt to conceptualize new therapeutic targets and new modalities for therapeutics delivery – WE ARE HOPEFUL
In the Bioelectronics domain we are inspired by the work of the following three research sources:
Michael Levin is an American developmental and synthetic biologist at Tufts University, where he is the Vannevar Bush Distinguished Professor. Levin is a director of the Allen Discovery Center at Tufts University and Tufts Center for Regenerative and Developmental Biology. Wikipedia
THE VOICE of Dr. Justin D. Pearlman, MD, PhD, FACC
PENDING
THE VOICE of Stephen J. Williams, PhD
Ten Takeaway Points of Dr. Lustig’s talk on the role of diet in the incidence of Type II Diabetes
25% of US children have fatty liver
Type II diabetes can develop from fatty liver; 151 million people worldwide are affected, projected to rise to 568 million in 7 years
A common myth is that diabetes is due to an overweight condition driving the metabolic disease
There is a trend of ‘lean’ diabetes, or diabetes in lean people; therefore body mass index is not a reliable biomarker of diabetes risk
Thirty percent of ‘obese’ people just have high subcutaneous fat; visceral fat is more problematic
There are people who are ‘fat’ but insulin sensitive because they have growth hormone receptor defects. This points to issues related to metabolic state other than insulin, and potentially to the insulin-like growth factors
At any BMI some patients are insulin sensitive while some resistant
Visceral fat accumulation may be more due to chronic stress condition
Fructose can decrease liver mitochondrial function
A methionine and choline deficient diet can lead to rapid NASH development
The Use of ChatGPT in the World of BioInformatics and Cancer Research and Development of BioGPT by Microsoft
Curator: Stephen J. Williams, Ph.D.
Chatbots are being used in multiple interdisciplinary areas in research and medicine, so it was a natural progression to incorporate artificial intelligence (AI), natural language processing (NLP), and chatbot technology like ChatGPT into bioinformatic analysis.
“In domains like informatics, management and marketing, media and communication science, languages and philosophy, psychology and sociology, engineering, design, and human-computer interaction, the fast expanding body of chatbot study is clearly interdisciplinary.”
The field of bioinformatics is a natural fit for incorporating this technology. The curated information below shows some examples of this technology in use in bioinformatics related to cancer research.
How can ChatGPT be used in bioinformatics research?
ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model that was designed to generate human-like text in a conversational setting. It is not directly related to bioinformatics, which is the field of study that deals with the storage, retrieval, and analysis of biological data, particularly in the context of genetics and genomics. However, ChatGPT could potentially be used in a bioinformatics context as a tool for generating natural language descriptions of genetic or genomic data, or for generating responses to queries about such data.
For example, a researcher could use ChatGPT to generate descriptions of genetic variants or gene expression patterns based on input data. This could be useful for generating summaries of results for reports or papers, or for generating explanations of complex concepts for non-expert audiences. ChatGPT could also be used to generate responses to queries about genetic or genomic data, such as “What is the function of gene X?” or “What are the potential consequences of variant Y?”
It’s worth noting that ChatGPT is just one of many tools and techniques that can be used in bioinformatics, and it is unlikely to be the most suitable or effective option in every situation. There are many specialized tools and resources available for working with biological data, and it is often necessary to use a combination of these tools to fully analyze and understand the data.
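To make this concrete, below is a minimal sketch (our own illustration, not part of the quoted text) of how a researcher might pose such a query programmatically. It assumes the openai Python package (pre-1.0 interface), an API key stored in the environment, and a placeholder gene symbol; answers would still need expert verification.

```python
# A minimal sketch of posing the kind of query described above through the
# OpenAI Python package (pre-1.0 interface). The key, model, and gene are placeholders.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes the key is set in the environment

def ask_about_gene(gene_symbol: str) -> str:
    """Ask ChatGPT a plain-language question about a gene and return its answer."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful bioinformatics assistant."},
            {"role": "user", "content": f"What is the function of the gene {gene_symbol}?"},
        ],
        temperature=0.2,  # keep answers relatively factual
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_about_gene("TP53"))  # example gene symbol; output must be verified by an expert
```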
Generating descriptions of genetic or genomic data: ChatGPT could be used to generate natural language descriptions of genetic or genomic data based on input data. For example, suppose a researcher has a dataset containing information about gene expression levels in different tissues. The researcher could use ChatGPT to generate a description of the data, such as:
“Gene X is highly expressed in the liver and kidney, with moderate expression in the brain and heart. Gene Y, on the other hand, shows low expression in all tissues except for the lung, where it is highly expressed.”
Thereby ChatGPT, at its simplest level, could be used to ask general questions like “What is the function of gene product X?” and give a reasonable response without the scientist having to browse through even highly curated databases like GeneCards, UniProt, or GenBank. Or even “What are potential interactors of Gene X, validated by yeast two-hybrid?” without going to the curated interactome databases or using expensive software like Genie.
Summarizing results: ChatGPT could be used to generate summaries of results from genetic or genomic studies. For example, a researcher might use ChatGPT to generate a summary of a study that found an association between a particular genetic variant and a particular disease. The summary might look something like this:
“Our study found that individuals with the variant form of gene X are more likely to develop disease Y. Further analysis revealed that this variant is associated with changes in gene expression that may contribute to the development of the disease.”
It’s worth noting that ChatGPT is just one tool that could potentially be used in these types of applications, and it is likely to be most effective when used in combination with other bioinformatics tools and resources. For example, a researcher might use ChatGPT to generate a summary of results, but would also need to use other tools to analyze the data and confirm the findings.
ChatGPT is a variant of the GPT (Generative Pre-training Transformer) language model that is designed for open-domain conversation. It is not specifically designed for generating descriptions of genetic variants or gene expression patterns, but it can potentially be used for this purpose if you provide it with a sufficient amount of relevant training data and fine-tune it appropriately.
To use ChatGPT to generate descriptions of genetic variants or gene expression patterns, you would first need to obtain a large dataset of examples of descriptions of genetic variants or gene expression patterns. You could use this dataset to fine-tune the ChatGPT model on the task of generating descriptions of genetic variants or gene expression patterns.
Here’s an example of how you might use ChatGPT to generate a description of a genetic variant:
First, you would need to pre-process your dataset of descriptions of genetic variants to prepare it for use with ChatGPT. This might involve splitting the descriptions into individual sentences or phrases, and encoding them using a suitable natural language processing (NLP) library or tool.
Next, you would need to fine-tune the ChatGPT model on the task of generating descriptions of genetic variants. This could involve using a tool like Hugging Face’s Transformers library to load the ChatGPT model and your pre-processed dataset, and then training the model on the task of generating descriptions of genetic variants using an appropriate optimization algorithm.
Once the model has been fine-tuned, you can use it to generate descriptions of genetic variants by providing it with a prompt or seed text and asking it to generate a response. For example, you might provide the model with the prompt “Generate a description of a genetic variant associated with increased risk of breast cancer,” and ask it to generate a response. The model should then generate a description of a genetic variant that is associated with increased risk of breast cancer.
It’s worth noting that generating high-quality descriptions of genetic variants or gene expression patterns is a challenging task, and it may be difficult to achieve good results using a language model like ChatGPT without a large amount of relevant training data and careful fine-tuning.
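As a concrete illustration of the workflow sketched in the steps above, here is a minimal, hedged example using Hugging Face’s Transformers library. Because ChatGPT itself cannot be fine-tuned locally, a GPT-2 checkpoint stands in for it, and variant_descriptions.txt (one description per line) is a hypothetical dataset file.

```python
# A minimal sketch of fine-tuning a GPT-style model on variant descriptions.
# "variant_descriptions.txt" is a hypothetical corpus, one description per line.
from transformers import (
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Step 1: load and tokenize the corpus of variant descriptions.
dataset = load_dataset("text", data_files={"train": "variant_descriptions.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
    batched=True,
    remove_columns=["text"],
)

# Step 2: fine-tune with a causal language-modeling objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="variant-gpt2", num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()

# Step 3: generate a description from a prompt.
prompt = "Describe a genetic variant associated with increased risk of breast cancer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```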
To train a language model like chatGPT to extract information about specific genes or diseases from research papers, you would need to follow these steps:
Gather a large dataset of research papers that contain information about the specific genes or diseases you are interested in. This dataset should be diverse and representative of the types of papers you want the model to be able to extract information from.
Preprocess the text data in the research papers by tokenizing the text and creating a vocabulary. You may also want to consider lemmatizing or stemming the text to reduce the dimensionality of the dataset.
Train the language model on the preprocessed text data. You may want to fine-tune a pre-trained model such as chatGPT on your specific dataset, or you can train a new model from scratch.
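A minimal sketch of the preprocessing step above might look like the following; it assumes the paper text has already been collected into a list of strings and uses NLTK’s standard tokenizer and lemmatizer (the resources shown would need to be downloaded once).

```python
# A minimal sketch of tokenizing and lemmatizing paper text and building a vocabulary.
from collections import Counter
import nltk
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer

nltk.download("punkt", quiet=True)     # standard NLTK tokenizer data
nltk.download("wordnet", quiet=True)   # lemmatizer data

lemmatizer = WordNetLemmatizer()

def preprocess(papers: list[str], min_count: int = 2):
    """Tokenize and lemmatize paper text, then build a frequency-filtered vocabulary."""
    tokenized_papers = []
    for text in papers:
        tokens = [lemmatizer.lemmatize(tok.lower())
                  for tok in word_tokenize(text) if tok.isalpha()]
        tokenized_papers.append(tokens)
    counts = Counter(tok for toks in tokenized_papers for tok in toks)
    vocabulary = {tok for tok, n in counts.items() if n >= min_count}
    return tokenized_papers, vocabulary

papers = ["BRCA1 mutations increase the risk of hereditary breast and ovarian cancer."]
tokens, vocab = preprocess(papers, min_count=1)
print(tokens[0], sorted(vocab), sep="\n")
```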
ChatGPT could also be useful for sequence analysis
A few examples of sequence analysis tasks for which ChatGPT could be useful include:
Protein structure
Identifying functional regions of a protein
Predicting protein-protein interactions
Identifying protein homologs
Generating protein alignments
All of this could be done without access to UNIX servers or proprietary software, or knowledge of GCG coding (see the sketch below)
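For example, the sketch below (our own illustration, not from the source) shows the kind of short script one might ask ChatGPT to draft for the last item in the list: a simple pairwise protein alignment with Biopython, run locally with no UNIX server or GCG package. The sequences and scoring parameters are toy choices.

```python
# A minimal sketch of a local pairwise protein alignment with Biopython.
from Bio.Align import PairwiseAligner, substitution_matrices

aligner = PairwiseAligner()
aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
aligner.open_gap_score = -10     # typical gap-open penalty for protein alignments
aligner.extend_gap_score = -0.5  # typical gap-extension penalty

seq1 = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"   # toy protein sequences for illustration
seq2 = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIE"

alignments = aligner.align(seq1, seq2)
print(f"Score: {alignments.score}")
print(alignments[0])   # print the best-scoring alignment
```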
ChatGPT in biomedical research
There are several potential ways that ChatGPT or other natural language processing (NLP) models could be applied in biomedical research:
Text summarization: ChatGPT or other NLP models could be used to summarize large amounts of text, such as research papers or clinical notes, in order to extract key information and insights more quickly.
Data extraction: ChatGPT or other NLP models could be used to extract structured data from unstructured text sources, such as research papers or clinical notes. For example, the model could be trained to extract information about specific genes or diseases from research papers, and then used to create a database of this information for further analysis.
Literature review: ChatGPT or other NLP models could be used to assist with literature review tasks, such as identifying relevant papers, extracting key information from papers, or summarizing the main findings of a group of papers.
Predictive modeling: ChatGPT or other NLP models could be used to build predictive models based on large amounts of text data, such as electronic health records or research papers. For example, the model could be trained to predict the likelihood of a patient developing a particular disease based on their medical history and other factors.
It’s worth noting that while NLP models like ChatGPT have the potential to be useful tools in biomedical research, they are only as good as the data they are trained on, and it is important to carefully evaluate the quality and reliability of any results generated by these models.
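To illustrate the data-extraction idea in the list above, here is a deliberately simple, rule-based stand-in (not an NLP model) that shows the kind of structured output such a pipeline would target. The gene-symbol pattern, disease vocabulary, and example abstract are toy assumptions.

```python
# A rule-based stand-in for the "data extraction" idea: pull candidate gene-disease
# co-mentions out of free text and store them as structured rows.
import re
import sqlite3

GENE_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]{2,6}\b")          # crude HGNC-like symbols
DISEASES = ["breast cancer", "prostate cancer", "diabetes"]    # toy disease vocabulary

def extract_pairs(abstract: str):
    """Return (gene, disease) pairs co-mentioned in one abstract."""
    genes = set(GENE_PATTERN.findall(abstract))
    diseases = [d for d in DISEASES if d in abstract.lower()]
    return [(g, d) for g in genes for d in diseases]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mentions (gene TEXT, disease TEXT)")
abstract = "Germline BRCA1 and BRCA2 variants confer a high lifetime risk of breast cancer."
conn.executemany("INSERT INTO mentions VALUES (?, ?)", extract_pairs(abstract))
print(conn.execute("SELECT * FROM mentions").fetchall())
```

A trained NLP model would replace the regular expression and fixed disease list, but the target schema (a table of gene-disease mentions for further analysis) would look the same.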
ChatGPT in text mining of biomedical data
ChatGPT could potentially be used for text mining in the biomedical field in a number of ways. Here are a few examples:
Extracting information from scientific papers: ChatGPT could be trained on a large dataset of scientific papers in the biomedical field, and then used to extract specific pieces of information from these papers, such as the names of compounds, their structures, and their potential uses.
Generating summaries of scientific papers: ChatGPT could be used to generate concise summaries of scientific papers in the biomedical field, highlighting the main findings and implications of the research.
Identifying trends and patterns in scientific literature: ChatGPT could be used to analyze large datasets of scientific papers in the biomedical field and identify trends and patterns in the data, such as emerging areas of research or common themes among different papers.
Generating questions for further research: ChatGPT could be used to suggest questions for further research in the biomedical field based on existing scientific literature, by identifying gaps in current knowledge or areas where further investigation is needed.
Generating hypotheses for scientific experiments: ChatGPT could be used to generate hypotheses for scientific experiments in the biomedical field based on existing scientific literature and data, by identifying potential relationships or associations that could be tested in future research.
PLEASE WATCH VIDEO
In this video, a bioinformatician describes the ways he uses ChatGPT to increase his productivity in writing bioinformatic code and conducting bioinformatic analyses.
He describes a series of uses of ChatGPT in his day-to-day work as a bioinformatician:
Using ChatGPT as a search engine: He finds more useful and relevant search results than with a standard Google or Yahoo search. This saves time, as one does not have to pore through multiple pages to find information. However, a caveat is that ChatGPT does NOT return sources, as highlighted in previous postings on this page. This feature of ChatGPT is probably why Microsoft invested heavily in OpenAI in order to incorporate ChatGPT into its Bing search engine, as well as Office Suite programs
ChatGPT to help with coding projects: Bioinformaticians will spend multiple hours searching for and altering openly available code in order to run certain functions, like determining the G/C content of DNA (although much UNIX-based code has already been established for these purposes; a commented example of such a script appears after this list). One can use ChatGPT to find such code and then assist in debugging that code for any flaws
ChatGPT to document and add coding comments: When writing code, it is useful to add comments periodically to help other users determine how the code works and how the program flow works, including returned variables.
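As promised above, here is a commented example (our own sketch, not taken from the video) of the sort of small script involved: a G/C-content calculation of the kind ChatGPT can help draft, debug, and document.

```python
# A commented example of a routine bioinformatics task: computing G/C content of a DNA sequence.
def gc_content(sequence: str) -> float:
    """Return the G/C content of a DNA sequence as a percentage.

    Ambiguous or non-ACGT characters are ignored in the denominator.
    """
    seq = sequence.upper()
    valid = [base for base in seq if base in "ACGT"]   # drop Ns and other ambiguity codes
    if not valid:
        raise ValueError("Sequence contains no unambiguous A/C/G/T bases")
    gc = sum(base in "GC" for base in valid)           # count guanine + cytosine
    return 100.0 * gc / len(valid)

if __name__ == "__main__":
    print(f"GC content: {gc_content('ATGCGCGTTAANNAT'):.1f}%")
```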
One of the comments was interesting and directed one to use BIOGPT instead of ChatGPT
0:54 oh dear. You cannot use chatgpt like that in Bioinformatics as it is rn without double checking the info from it. You should be using biogpt instead for paper summarisation. ChatGPT goes for human-like responses over precise information recal. It is quite good for debugging though and automating boring awkward scripts
The BioGPT model was proposed in BioGPT: generative pre-trained transformer for biomedical text generation and mining by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. BioGPT is a domain-specific generative pre-trained Transformer language model for biomedical text generation and mining. BioGPT follows the Transformer language model backbone, and is pre-trained on 15M PubMed abstracts from scratch.
The abstract from the paper is the following:
Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.
Tips:
BioGPT is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left.
BioGPT was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows BioGPT to generate syntactically coherent text as it can be observed in the run_generation.py example script.
The model can take the past_key_values (for PyTorch) as input, which is the previously computed key/value attention pairs. Using this (past_key_values or past) value prevents the model from re-computing pre-computed values in the context of text generation. For PyTorch, see past_key_values argument of the BioGptForCausalLM.forward() method for more information on its usage.
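Putting these tips together, a minimal generation sketch using the checkpoint published on the Hugging Face Hub might look like the following; the prompt and decoding settings are illustrative choices, not the project’s reference example.

```python
# A minimal sketch of loading BioGPT from the Hugging Face Hub and generating text.
import torch
from transformers import BioGptTokenizer, BioGptForCausalLM

tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")
model.eval()

prompt = "Bicalutamide is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Beam search tends to give more fluent biomedical descriptions than greedy decoding.
    output_ids = model.generate(
        **inputs,
        max_new_tokens=60,
        num_beams=5,
        early_stopping=True,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```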
This model was contributed by kamalkraj. The original code can be found here.
BioGPT is MIT-licensed. The license applies to the pre-trained models as well.
Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
As of right now this does not seem to be fully open access; a sign-up is required.
We provide our pre-trained BioGPT model checkpoints along with fine-tuned checkpoints for downstream tasks, available both through URL download as well as through the Hugging Face 🤗 Hub.
Luo R, Sun L, Xia Y, Qin T, Zhang S, Poon H, Liu TY. BioGPT: generative pre-trained transformer for biomedical text generation and mining. Brief Bioinform. 2022 Nov 19;23(6):bbac409. doi: 10.1093/bib/bbac409. PMID: 36156661
We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. … Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.
Here they show the advantage of BioGPT over GPT-2 in examples of text generation for biomedical terms.
Table 7. Example of generated text
Input: Bicalutamide
GPT-2: Bicalutamide, a synthetic bicalutamide derivative, was found to increase the cellular proliferation of C. elegans cells. This effect was not observed with the other synthetic bicalutamide derivatives. This is in line with our previous work which showed that bicalutamide increases the proliferation of C. elegans cells but not in the presence of synthetic bicalutamide.
BioGPT: Bicalutamide (Casodex) is an androgen receptor (AR) antagonist approved for the treatment of metastatic castration-resistant prostate cancer (mCRPC) in patients who have progressed on or are ineligible for docetaxel chemotherapy, as well as for the treatment of early-stage prostate cancer in men who have not progressed on or are ineligible for docetaxel chemotherapy, as well as for the treatment of metastatic castration-sensitive prostate cancer (mCSPC) in men who have not progressed on or are ineligible for docetaxel chemotherapy.

Input: Janus kinase 3 (JAK-3)
GPT-2: Janus kinase 3 (JAK-3) mediates the conversion of glucose to glycogen in skeletal muscle. The increase in glucose uptake by skeletal muscle is believed to be associated with a reduction in muscle protein breakdown and with the release of ketone bodies, which contribute to the metabolism of glucose.
BioGPT: Janus kinase 3 (JAK-3) is a member of the Janus kinase (JAK) family of non-receptor tyrosine kinases and plays an important role in the regulation of cell proliferation, differentiation, survival, migration and angiogenesis.

Input: Apricitabine
GPT-2: Apricitabine, duloxetine and citalopram) were investigated in patients with a history of treatment-resistant depression (n = 10). We found that treatment with the three drugs significantly decreased depressive symptoms, but not depressive symptoms over the next 12 weeks. We conclude that antidepressant drugs that reduce depressive symptoms and reduce depressive symptoms over time may be beneficial in patients with recurrent depression.
BioGPT: Apricitabine is an oral prodrug of 5-aza-2 ’-deoxycytidine (5-aza-CdR), a DNA methyltransferase (DNMT) inhibitor, which has been approved by the US Food and Drug Administration (FDA) for the treatment of myelodysplastic syndrome (MDS) and acute myeloid leukemia (AML) in combination with low-dose cytarabine (Ara-C) and granulocyte colony-stimulating factor (G-CSF) for patients with intermediate-2 or high-risk MDS or AML.

Input: Xylazine
GPT-2: Xylazine, the ‘active ingredient’ in ‘bath salts’ (see above) has been linked to numerous deaths, including those of former University of California, Berkeley, professor Andrew Anglin, who posted a video of himself having a bath filled with it.
BioGPT: Xylazine is an alpha 2-adrenoceptor agonist which has been used as a sedative and analgesic in veterinary medicine for many years, but its effects on the cardiovascular system have not been extensively studied in the dog, and its effects on the central nervous system (CNS) have not been well characterized in the dog, despite the fact that xylazine has been widely used as a sedative and analgesic in veterinary medicine for more than 30 years.

Input: Psoralen
GPT-2: Psoralen, the city of the spiritualist, and had been given to him by God, who wished to show him the way of salvation, and to draw him back to the true life of Christ, and to bring him into communion with his fellow-man.
BioGPT: Psoralen photochemotherapy (PUVA) is a well-established treatment for psoriasis, but its use is limited by the risk of skin cancer, particularly squamous cell carcinoma (SCC) of the head and neck (H & N), which is the most common site of skin cancer in the United States (US)

Input: CP-673451
GPT-2: CP-673451) and 4-5-6-3-1-2-3-4-6-7-8-9-10-11-12-13-14-15-16-17-18-19-20-21-22-23-24-25-26-27-
BioGPT: CP-673451 is a potent, selective, and orally active inhibitor of human neutrophil elastase (HNE) and human cathepsin G (CatG) with in vitro and in vivo anti-inflammatory activity in a variety of animal models of inflammation and in a model of acute lung injury (ALI) in the rat induced by intratracheal instillation of lipopolysaccharide (LPS) and tumor necrosis factor-alpha (TNF-alpha), a model of acute lung injury (ALI) in which neutrophils play an important role.

Input: BIIB-021
GPT-2: BIIB-021A-2-1.pdf. Wright, C. (2002). ‘The War on Terror: A Legal Theory of Terrorist Attacks,’ in L.E.R. Wright, S.R. Warrick and J.H. Rothman (Eds.), The War on Terror: Essays in Honor of Michael T. Klare (New York: Oxford University Press), 143-57.
BioGPT: BIIB-021 is a novel, orally active, non-peptide bradykinin B2 receptor antagonist with potent and long-lasting anti-inflammatory activity in animal models of acute and chronic inflammation and in a rat model of adjuvant-induced arthritis (AIA), an animal model of rheumatoid arthritis (RA) and in a rat model of collagen-induced arthritis (CIA), an animal model of collagen-induced arthritis (CIA), in which arthritis is induced by immunization with bovine type II collagen (CII).
Huang L, Lin J, Li X, Song L, Zheng Z, Wong KC. Brief Bioinform. 2022 Jan 17;23(1):bbab451. doi: 10.1093/bib/bbab451. PMID: 34791012
The rapid growth in literature accumulates diverse and yet comprehensive biomedical knowledge hidden to be mined such as drug interactions. However, it is difficult to extract the heterogeneous knowledge to retrieve or even discover the latest and novel knowledge in an efficient manner. To address such a problem, we propose EGFI for extracting and consolidating drug interactions from large-scale medical literature text data. Specifically, EGFI consists of two parts: classification and generation. In the classification part, EGFI encompasses the language model BioBERT which has been comprehensively pretrained on biomedical corpus. In particular, we propose the multihead self-attention mechanism and packed BiGRU to fuse multiple semantic information for rigorous context modeling. In the generation part, EGFI utilizes another pretrained language model BioGPT-2 where the generation sentences are selected based on filtering rules.
Results: We evaluated the classification part on the ‘DDIs 2013’ dataset and the ‘DTIs’ dataset, achieving F1 scores of 0.842 and 0.720, respectively. Moreover, we applied the classification part to distinguish high-quality generated sentences and verified them against the existing ground truth to confirm the filtered sentences. The generated sentences that are not recorded in DrugBank and the DDIs 2013 dataset demonstrated the potential of EGFI to identify novel drug relationships.
Jin Q, Yang Y, Chen Q, Lu Z. ArXiv. 2023 May 16: arXiv:2304.09667v3. Preprint. PMID: 37131884. Free PMC article.
While large language models (LLMs) have been successfully applied to various tasks, they still face challenges with hallucinations. Augmenting LLMs with domain-specific tools such as database utilities can facilitate easier and more precise access to specialized knowledge. In this paper, we present GeneGPT, a novel method for teaching LLMs to use the Web APIs of the National Center for Biotechnology Information (NCBI) for answering genomics questions. Specifically, we prompt Codex to solve the GeneTuring tests with NCBI Web APIs by in-context learning and an augmented decoding algorithm that can detect and execute API calls. Experimental results show that GeneGPT achieves state-of-the-art performance on eight tasks in the GeneTuring benchmark with an average score of 0.83, largely surpassing retrieval-augmented LLMs such as the new Bing (0.44), biomedical LLMs such as BioMedLM (0.08) and BioGPT (0.04), as well as GPT-3 (0.16) and ChatGPT (0.12). Our further analyses suggest that: (1) API demonstrations have good cross-task generalizability and are more useful than documentations for in-context learning; (2) GeneGPT can generalize to longer chains of API calls and answer multi-hop questions in GeneHop, a novel dataset introduced in this work; (3) Different types of errors are enriched in different tasks, providing valuable insights for future improvements.
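For context, the sketch below shows the kind of raw NCBI E-utilities call that GeneGPT is taught to emit, issued here directly with the requests library and no LLM involved; the gene and organism in the query are illustrative.

```python
# A minimal sketch of a direct NCBI E-utilities query: look up the NCBI Gene ID for human BRCA1.
# The endpoint is public; an api_key parameter can be added for higher rate limits.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

params = {
    "db": "gene",
    "term": "BRCA1[gene] AND Homo sapiens[orgn]",
    "retmode": "json",
}
response = requests.get(ESEARCH, params=params, timeout=30)
response.raise_for_status()

id_list = response.json()["esearchresult"]["idlist"]
print("NCBI Gene IDs:", id_list)   # expected to include 672, the human BRCA1 record
```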
PLEASE WATCH THE FOLLOWING VIDEOS ON BIOGPT
This one entitled
Microsoft’s BioGPT Shows Promise as the Best Biomedical NLP
gives a good general description of this new Microsoft Research project (released under an MIT license) and its usefulness in scanning 15 million articles on PubMed while returning ChatGPT-like answers.
Please note one of the comments which is VERY IMPORTANT
bioGPT is difficult for non-developers to use, and Microsoft researchers seem to default that all users are proficient in Python and ML.
Much like Microsoft Azure, it seems this BioGPT is meant for developers who have advanced programming skills. It seems odd, then, to be paying programmers high salaries when one or two Key Opinion Leaders from the medical field might suffice, but I am sure Microsoft will figure this out.
ALSO VIEW VIDEO
This is a talk from Microsoft on BioGPT
Other Relevant Articles on Natural Language Processing in BioInformatics, Healthcare and ChatGPT for Medicine on this Open Access Scientific Journal Include
Infertility is a major reproductive health issue that affects about 12% of women of reproductive age in the United States. Aneuploidy in eggs accounts for a significant proportion of early miscarriage and in vitro fertilization failure. Recent studies have shown that genetic variants in several genes affect chromosome segregation fidelity and predispose women to a higher incidence of egg aneuploidy. However, the exact genetic causes of aneuploid egg production remain unclear, making it difficult to diagnose infertility based on individual genetic variants in the mother’s genome. Although age is a predictive factor for aneuploidy, it is not a highly accurate gauge because aneuploidy rates within individuals of the same age can vary dramatically.
Researchers described a technique combining genomic sequencing with machine-learning methods to predict the possibility a woman will undergo a miscarriage because of egg aneuploidy—a term describing a human egg with an abnormal number of chromosomes. The scientists were able to examine genetic samples of patients using a technique called “whole exome sequencing,” which allowed researchers to home in on the protein coding sections of the vast human genome. Then they created software using machine learning, an aspect of artificial intelligence in which programs can learn and make predictions without following specific instructions. To do so, the researchers developed algorithms and statistical models that analyzed and drew inferences from patterns in the genetic data.
As a result, the scientists were able to create a specific risk score based on a woman’s genome. They also identified three genes—MCM5, FGGY and DDX60L—that, when mutated, are highly associated with a risk of producing eggs with aneuploidy. The report thus demonstrated that sequencing data can be mined to predict patients’ aneuploidy risk, thereby improving clinical diagnosis. The candidate genes and pathways identified in the present study are promising targets for future aneuploidy studies. Identifying genetic variations with more predictive power will serve women and their treating clinicians with better information.
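The sketch below is a toy illustration of that general approach (not the authors’ actual model): a binary genotype matrix over candidate genes is fed to a logistic-regression classifier that outputs a per-patient risk score. The gene names come from the study, while the data and labels are synthetic placeholders.

```python
# A toy sketch of learning a per-patient aneuploidy risk score from exome-derived features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

genes = ["MCM5", "FGGY", "DDX60L"]                       # candidate genes named in the study
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, len(genes)))           # 1 = rare variant present in that gene (synthetic)
y = (X.sum(axis=1) + rng.normal(0, 0.5, 200) > 1.5).astype(int)   # synthetic outcome labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

risk_scores = model.predict_proba(X_test)[:, 1]          # probability of high aneuploidy risk
print("Held-out accuracy:", model.score(X_test, y_test))
print("Example risk scores:", np.round(risk_scores[:5], 2))
```

A real pipeline would of course start from whole-exome variant calls and validate on clinical outcomes rather than synthetic labels.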
The Human Genome Gets Fully Sequenced: A Simplistic Take on Century Long Effort
Curator: Stephen J. Williams, PhD
Article ID #295: The Human Genome Gets Fully Sequenced: A Simplistic Take on Century Long Effort. Published on 6/14/2022
WordCloud Image Produced by Adam Tubman
Ever since the hard work by Rosalind Franklin to deduce the structure of DNA and the complementary work by Francis Crick and James Watson, who modeled the basic building blocks of DNA, DNA has been considered the basic unit of heredity and life, with the “Central Dogma” (DNA to RNA to Protein) at its core. These mid-twentieth-century discoveries helped drive the transformational shift of biological experimentation, from protein isolation and characterization, to cloning protein-encoding genes, to characterizing how the genes are expressed temporally, spatially, and contextually.
Rosalind Franklin, whose crystallographic data led to the determination of DNA structure. Shown on a 1953 Time cover as Time Person of the Year
Dr Francis Crick and James Watson in front of their model structure of DNA
Up to this point (the 1970s to mid-80s), it was felt that genetic information was rather static, and the goal was still to understand and characterize protein structure and function, while an understanding of the underlying genetic information was more important for efforts like linkage analysis of genetic defects and as a tool for the rapidly developing field of molecular biology. But the development of the aforementioned molecular biology tools, including DNA cloning, sequencing, and synthesis, gave scientists the idea that a complete sequence of the human genome might be possible and worth the effort.
How the Human Genome Project Expanded our View of Genes, Genetic Material, and Biological Processes
The Human Genome Project (HGP) refers to the international 13-year effort, formally begun in October 1990 and completed in 2003, to discover all the estimated 20,000-25,000 human genes and make them accessible for further biological study. Another project goal was to determine the complete sequence of the 3 billion DNA subunits (bases) in the human genome. As part of the HGP, parallel studies were carried out on selected model organisms such as the bacterium E. coli and the mouse to help develop the technology and interpret human gene function. The DOE Human Genome Program and the NIH National Human Genome Research Institute (NHGRI) together sponsored the U.S. Human Genome Project.
Please see the following for goals, timelines, and funding for this project
Timeline: Major Events in the Human Genome Project
It is interesting to note that multiple pieces of government legislation are credited with the funding of such a massive project, including:
Project Enabling Legislation
The Atomic Energy Act of 1946 (P.L. 79-585) provided the initial charter for a comprehensive program of research and development related to the utilization of fissionable and radioactive materials for medical, biological, and health purposes.
The Atomic Energy Act of 1954 (P.L. 83-706) further authorized the AEC “to conduct research on the biologic effects of ionizing radiation.”
The Energy Reorganization Act of 1974 (P.L. 93-438) provided that responsibilities of the Energy Research and Development Administration (ERDA) shall include “engaging in and supporting environmental, biomedical, physical, and safety research related to the development of energy resources and utilization technologies.”
The Federal Non-nuclear Energy Research and Development Act of 1974 (P.L. 93-577) authorized ERDA to conduct a comprehensive non-nuclear energy research, development, and demonstration program to include the environmental and social consequences of the various technologies.
The DOE Organization Act of 1977 (P.L. 95-91) mandated the Department “to assure incorporation of national environmental protection goals in the formulation and implementation of energy programs; and to advance the goal of restoring, protecting, and enhancing environmental quality, and assuring public health and safety,” and to conduct “a comprehensive program of research and development on the environmental effects of energy technology and program.”
It should also be emphasized that the project was not JUST funded through the NIH but also through the Department of Energy.
For a great read on Dr. Craig Venter, with interviews with the scientist, see Dr. Larry Bernstein’s excellent post The Human Genome Project
By 2003 we had gained much information about the structure of DNA, genes, exons, and introns, which allowed us to gain more insight into the diversity of genetic material and the underlying protein-coding genes, as well as many of the gene-expression regulatory elements. However, there was much uninvestigated material dispersed between genes, then called “junk DNA,” and up to 2003 not much was known about the function of this ‘junk DNA’. In addition there were two other problems:
The reference DNA used was derived from only a handful of individuals rather than a diverse population (the competing Celera assembly was based largely on the DNA of Craig Venter, who led that effort)
Multiple gaps in the DNA sequence existed, and needed to be filled in
It is important to note that a tremendous amount of protein diversity has been revealed by both transcriptomic and proteomic studies. Although only about 20,000 to 25,000 coding genes exist, the human proteome contains about 600,000 proteoforms (due to alternative splicing, posttranslational modifications, etc.).
This expansion of proteoforms, via alternative splicing into isoforms and gene duplication into paralogs, has been shown to have major effects on, for example, cellular signaling pathways (1)
However, just recently it has been reported that the FULL human genome has been sequenced and is complete and verified. This was the focus of a recent issue of the journal Science.
Since its initial release in 2000, the human reference genome has covered only the euchromatic fraction of the genome, leaving important heterochromatic regions unfinished. Addressing the remaining 8% of the genome, the Telomere-to-Telomere (T2T) Consortium presents a complete 3.055 billion–base pair sequence of a human genome, T2T-CHM13, that includes gapless assemblies for all chromosomes except Y, corrects errors in the prior references, and introduces nearly 200 million base pairs of sequence containing 1956 gene predictions, 99 of which are predicted to be protein coding. The completed regions include all centromeric satellite arrays, recent segmental duplications, and the short arms of all five acrocentric chromosomes, unlocking these complex regions of the genome to variational and functional studies.
The current human reference genome was released by the Genome Reference Consortium (GRC) in 2013 and most recently patched in 2019 (GRCh38.p13) (1). This reference traces its origin to the publicly funded Human Genome Project (2) and has been continually improved over the past two decades. Unlike the competing Celera effort (3) and most modern sequencing projects based on “shotgun” sequence assembly (4), the GRC assembly was constructed from sequenced bacterial artificial chromosomes (BACs) that were ordered and oriented along the human genome by means of radiation hybrid, genetic linkage, and fingerprint maps. However, limitations of BAC cloning led to an underrepresentation of repetitive sequences, and the opportunistic assembly of BACs derived from multiple individuals resulted in a mosaic of haplotypes. As a result, several GRC assembly gaps are unsolvable because of incompatible structural polymorphisms on their flanks, and many other repetitive and polymorphic regions were left unfinished or incorrectly assembled (5).
Fig. 1. Summary of the complete T2T-CHM13 human genome assembly. (A) Ideogram of T2T-CHM13v1.1 assembly features. For each chromosome (chr), the following information is provided from bottom to top: gaps and issues in GRCh38 fixed by CHM13 overlaid with the density of genes exclusive to CHM13 in red; segmental duplications (SDs) (42) and centromeric satellites (CenSat) (30); and CHM13 ancestry predictions (EUR, European; SAS, South Asian; EAS, East Asian; AMR, ad-mixed American). Bottom scale is measured in Mbp. (B and C) Additional (nonsyntenic) bases in the CHM13 assembly relative to GRCh38 per chromosome, with the acrocentrics highlighted in black (B) and by sequence type (C). (Note that the CenSat and SD annotations overlap.) RepMask, RepeatMasker. (D) Total nongap bases in UCSC reference genome releases dating back to September 2000 (hg4) and ending with T2T-CHM13 in 2021. Mt/Y/Ns, mitochondria, chrY, and gaps.
Note in Figure 1D the exponential growth in genetic information.
Also very important is the ability to determine all the paralogs, isoforms, areas of potential epigenetic regulation, gene duplications, and transposable elements that exist within the human genome.
Analyses and resources
A number of companion studies were carried out to characterize the complete sequence of a human genome, including comprehensive analyses of centromeric satellites (30), segmental duplications (42), transcriptional (49) and epigenetic profiles (29), mobile elements (49), and variant calls (25). Up to 99% of the complete CHM13 genome can be confidently mapped with long-read sequencing, opening these regions of the genome to functional and variational analysis (23) (fig. S38 and table S14). We have produced a rich collection of annotations and omics datasets for CHM13—including RNA sequencing (RNA-seq) (30), Iso-seq (21), precision run-on sequencing (PRO-seq) (49), cleavage under targets and release using nuclease (CUT&RUN) (30), and ONT methylation (29) experiments—and have made these datasets available via a centralized University of California, Santa Cruz (UCSC), Assembly Hub genome browser (54).
To highlight the utility of these genetic and epigenetic resources mapped to a complete human genome, we provide the example of a segmentally duplicated region of the chromosome 4q subtelomere that is associated with facioscapulohumeral muscular dystrophy (FSHD) (55). This region includes FSHD region gene 1 (FRG1), FSHD region gene 2 (FRG2), and an intervening D4Z4 macrosatellite repeat containing the double homeobox 4 (DUX4) gene that has been implicated in the etiology of FSHD (56). Numerous duplications of this region throughout the genome have complicated past genetic analyses of FSHD.
The T2T-CHM13 assembly reveals 23 paralogs of FRG1 spread across all acrocentric chromosomes as well as chromosomes 9 and 20 (Fig. 5A). This gene appears to have undergone recent amplification in the great apes (57), and approximate locations of FRG1 paralogs were previously identified by FISH (58). However, only nine FRG1 paralogs are found in GRCh38, hampering sequence-based analysis.
Future of the human reference genome
The T2T-CHM13 assembly adds five full chromosome arms and more additional sequence than any genome reference release in the past 20 years (Fig. 1D). This 8% of the genome has not been overlooked because of a lack of importance but rather because of technological limitations. High-accuracy long-read sequencing has finally removed this technological barrier, enabling comprehensive studies of genomic variation across the entire human genome, which we expect to drive future discovery in human genomic health and disease. Such studies will necessarily require a complete and accurate human reference genome.
CHM13 lacks a Y chromosome, and homozygous Y-bearing CHMs are nonviable, so a different sample type will be required to complete this last remaining chromosome. However, given its haploid nature, it should be possible to assemble the Y chromosome from a male sample using the same methods described here and supplement the T2T-CHM13 reference assembly with a Y chromosome as needed.
Extending beyond the human reference genome, large-scale resequencing projects have revealed genomic variation across human populations. Our reanalyses of the 1KGP (25) and SGDP (42) datasets have already shown the advantages of T2T-CHM13, even for short-read analyses. However, these studies give only a glimpse of the extensive structural variation that lies within the most repetitive regions of the genome assembled here. Long-read resequencing studies are now needed to comprehensively survey polymorphic variation and reveal any phenotypic associations within these regions.
Although CHM13 represents a complete human haplotype, it does not capture the full diversity of human genetic variation. To address this bias, the Human Pangenome Reference Consortium (59) has joined with the T2T Consortium to build a collection of high-quality reference haplotypes from a diverse set of samples. Ideally, all genomes could be assembled at the quality achieved here, but automated T2T assembly of diploid genomes presents a difficult challenge that will require continued development. Until this goal is realized, and any human genome can be completely sequenced without error, the T2T-CHM13 assembly represents a more complete, representative, and accurate reference than GRCh38.
This paper was the focus of a Time article and the basis for making the lead authors part of Time’s 100 People of the Year.
The first human genome was mapped in 2001 as part of the Human Genome Project, but researchers knew it was neither complete nor completely accurate. Now, scientists have produced the most completely sequenced human genome to date, filling in gaps and correcting mistakes in the previous version.
The sequence is the most complete reference genome for any mammal so far. The findings from six new papers describing the genome, which were published in Science, should lead to a deeper understanding of human evolution and potentially reveal new targets for addressing a host of diseases.
A more precise human genome
“The Human Genome Project relied on DNA obtained through blood draws; that was the technology at the time,” says Adam Phillippy, head of genome informatics at the National Institutes of Health’s National Human Genome Research Institute (NHGRI) and senior author of one of the new papers. “The techniques at the time introduced errors and gaps that have persisted all of these years. It’s nice now to fill in those gaps and correct those mistakes.”
“We always knew there were parts missing, but I don’t think any of us appreciated how extensive they were, or how interesting,” says Michael Schatz, professor of computer science and biology at Johns Hopkins University and another senior author of the same paper.
The work is the result of the Telomere to Telomere consortium, which is supported by NHGRI and involves genetic and computational biology experts from dozens of institutes around the world. The group focused on filling in the 8% of the human genome that remained a genetic black hole from the first draft sequence. Since then, geneticists have been trying to add those missing portions bit by bit. The latest group of studies identifies about an entire chromosome’s worth of new sequences, representing 200 million more base pairs (the letters making up the genome) and 1,956 new genes.
NOTE: In 2001 many scientists postulated there were as many as 100,000 coding human genes; however, we now understand there are about 20,000 to 25,000 human coding genes. This does not, however, take into account the additional diversity arising from alternative splicing, gene duplications, SNPs, and chromosomal rearrangements.
Scientists were also able to sequence the long stretches of DNA that contained repeated sequences, which genetic experts originally thought were similar to copying errors and dismissed as so-called “junk DNA”. These repeated sequences, however, may play roles in certain human diseases. “Just because a sequence is repetitive doesn’t mean it’s junk,” says Eichler. He points out that critical genes are embedded in these repeated regions—genes that contribute to machinery that creates proteins, genes that dictate how cells divide and split their DNA evenly into their two daughter cells, and human-specific genes that might distinguish the human species from our closest evolutionary relatives, the primates. In one of the papers, for example, researchers found that primates have different numbers of copies of these repeated regions than humans, and that they appear in different parts of the genome.
“These are some of the most important functions that are essential to live, and for making us human,” says Eichler. “Clearly, if you get rid of these genes, you don’t live. That’s not junk to me.”
Deciphering what these repeated sections mean, if anything, and how the sequences of previously unsequenced regions like the centromeres will translate to new therapies or a better understanding of human disease, is just starting, says Deanna Church, a vice president at Inscripta, a genome engineering company, who wrote a commentary accompanying the scientific articles. Having the full sequence of a human genome is different from decoding it; she notes that currently, among people with suspected genetic disorders whose genomes are sequenced, about half can be traced to specific changes in their DNA. That means much of what the human genome does still remains a mystery.
The investigators in the Telomere to Telomere Consortium made the Time 100 People of the Year.
Michael Schatz, Karen Miga, Evan Eichler, and Adam Phillippy
Illustration by Brian Lutz for Time (Source Photos: Will Kirk—Johns Hopkins University; Nick Gonzales—UC Santa Cruz; Patrick Kehoe; National Human Genome Research Institute)
BY JENNIFER DOUDNA
MAY 23, 2022 6:08 AM EDT
Ever since the draft of the human genome became available in 2001, there has been a nagging question about the genome’s “dark matter”—the parts of the map that were missed the first time through, and what they contained. Now, thanks to Adam Phillippy, Karen Miga, Evan Eichler, Michael Schatz, and the entire Telomere-to-Telomere Consortium (T2T) of scientists that they led, we can see the full map of the human genomic landscape—and there’s much to explore.
In the scientific community, there wasn’t a consensus that mapping these missing parts was necessary. Some in the field felt there was already plenty to do using the data in hand. In addition, overcoming the technical challenges to getting the missing information wasn’t possible until recently. But the more we learn about the genome, the more we understand that every piece of the puzzle is meaningful.
I admire the T2T group’s willingness to grapple with the technical demands of this project and their persistence in expanding the genome map into uncharted territory. The complete human genome sequence is an invaluable resource that may provide new insights into the origin of diseases and how we can treat them. It also offers the most complete look yet at the genetic script underlying the very nature of who we are as human beings.
Doudna is a biochemist and winner of the 2020 Nobel Prize in Chemistry
P. Scalia, A. Giordano, C. Martini, S. J. Williams, Isoform- and Paralog-Switching in IR-Signaling: When Diabetes Opens the Gates to Cancer. Biomolecules 10, (Nov 30, 2020).
Gian M. Volpicelli is a senior writer at WIRED, where he covers cryptocurrency, decentralization, politics, and technology regulation. He received a master’s degree in journalism from City University of London after studying politics and international relations in Rome. He lives in London.
The duo behind Twitter Crypto say NFT profile pics and crypto tipping are just the beginning.
YOU MIGHT HAVE heard of crypto Twitter, the corner of the social network where accounts have Bored Apes as profile pictures, posts are rife with talk of tokens, blockchains, and buying the Bitcoin dip, and Elon Musk is venerated.
Then again, you might have heard of Twitter Crypto, the business unit devoted to developing the social network’s strategy for cryptocurrency, blockchains, and that grab-bag of decentralized technologies falling under the rubric of Web3. The team’s unveiling came in November 2021 via a tweet from the newly hired project lead, Tess Rinearson, a Berlin-based American computer scientist whose career includes stints at blockchain companies such as Tendermint and Interchain.
Rinearson joined Twitter at a crucial moment. Jack Dorsey, the vociferously pro-Bitcoin company CEO, would leave a few weeks later, to be replaced by CTO Parag Agrawal. Agrawal had played an instrumental role in Bluesky, a Twitter-backed project to create a protocol—possibly with blockchain components—to build decentralized social networks.
As crypto went mainstream globally and crypto Twitter burgeoned, the company tried to dominate the space. Under the stewardship of product manager Esther Crawford, in September 2021 Twitter introduced a “tipping” feature that helps creators on Twitter to receive Bitcoin contributions through Lightning—a network for fast Bitcoin payments. In January, Twitter allowed subscribers of Twitter’s premium service, Twitter Blue, to flaunt their NFTs as hexagonal profile pictures, through a partnership with NFT marketplace OpenSea.
Twitter Crypto is just getting started. While Rinearson works with people all across the company, her team is still under 10 people, although more hires are in the pipeline, judging from recent job postings. So it’s worth asking what is next. I caught up over a video call with Rinearson and Crawford to talk about where Twitter Crypto is headed.
The conversation has been edited for clarity and brevity.
WIRED: Let’s start with the basics. Why does Twitter have a crypto unit?
Tess Rinearson: We really see crypto—and what we’re now calling Web3— as something that could be this incredibly powerful tool that would unlock a lot for our users. The whole crypto world is like an internet of money, an internet of value that our users can potentially tap into to create new ways of owning their content, monetizing their content, owning their own identity, and even relating to each other.
One of my goals is to build Twitter’s crypto unit in such a way that it caters to communities that go beyond just that core crypto community. I love the crypto Twitter space, obviously—I’m a very proud member of the crypto community. And at the same time, I recognize that people who are really deep in the crypto space may not relate to concepts, like for instance blockchain’s immutability, in the same way that someone who’s less intensely involved might feel about those things.
So a lot of what we try to think about is, what can we learn from this group of people who are super engaged and really, really, creative? And then, how can we translate some of that stuff into a format or a mechanism or a product that’s a little bit more accessible to people who don’t have that background?
How are you learning from crypto Twitter? Do you just follow a lot of accounts, do you actually talk to them? How does that learning experience play out?
Esther Crawford: It’s a combination. We have an amazing research team that sets up panel interviews and surveys. But we’re also embedded in the community itself and follow a bunch of accounts, sit on Twitter spaces, go to conferences and events, engage with customers in that way. That’s the way the research piece of it works. But we also encounter it as end users: Twitter is the discovery platform today for all things crypto.
One of the things we do differently at Twitter is we build out in the open. And so this means having dialog with customers in real time—designers will take something that is very early-stage and post it as a tweet and then get real-time feedback. They’ll hop into spaces with product managers and engineering managers, talk about it live with real customers, and then incorporate that feedback into the designs and what ultimately we end up launching.
Rinearson: One of the things I wanted to make sure of before I came to Twitter was to know that we would be able to build features in the open and solicit feedback and show rough drafts. And so this is something I asked Parag Agrawal, who’s now the CEO, and was the person who hired me. Pretty early in the job interview process, I said this was going to be really important, and he said, “If you think it’s important to the success of this work, great, do it—thumbs up.” He also shares that openness.
As you said, Tess, you come from crypto. When you were out there, what did you think Twitter was getting right? What did you think Twitter was getting wrong?
Rinearson: I had been a Twitter power user for a really long time. The thing that I saw was a lot of aesthetic alignment between how Twitter exists in the world and the way that crypto exists in the world. Twitter has decentralized user experiences in its DNA. And, this is a bit cheesy, but people use Twitter sometimes in ways that they use a public blockchain, as a public database where everything’s time stamped and people can agree on what happened.
And for most people it’s open, it is there for public conversation. And then obviously it was also the place—a place—where the crypto community really found its footing. I think it’s been a place where an enormous amount of discovery happens, and education and learning for the whole community. I joined when there were some murmurings about Twitter starting to do crypto stuff, mostly stuff Esther had led actually, and I was excited to see where it was going. And then Twitter’s investment in Bluesky also gave me a lot of confidence.
Let’s talk about the two main things you have delivered so far: The crypto tipping feature and NFT pictures. Can you give me just a potted history of how each came about and why?
Crawford: Those are our first set of early explorations, and the reason why we started there was we really wanted to make sure that what we built benefited creators, their audiences, and then all the conversations that are happening on Twitter. For creators in particular, we know that they rely on platforms like Twitter to monetize and earn a living, and not all people are able to use traditional currencies. Not everybody has a traditional banking account setup.
And so we wanted to provide an opportunity for a borderless payment solution, and that’s why we decided to go ahead and use Bitcoin Lightning as our first big integration. One of the reasons we chose Bitcoin Lightning was also because of the low transaction fees. And we have Bitcoin and Ethereum addresses that you can also put in there [on your Twitter “tipping jar”]. We noticed that people were actually adding information about their crypto wallet addresses in their profiles. And so we wanted to make a more seamless experience, so that people could just tip through the platform, so that it felt native.
With NFT profile pictures, the way that came about was, again, looking at user behavior. People were adding NFTs that they owned as avatars, but you didn’t really know whether they owned those NFTs or not. So we decided to go ahead and build out that feature so that one could actually prove ownership.
That’s similar to how other things developed on Twitter, right? The hashtag, or even the retweet, were initially just things users invented—by adding the # sign, or by pasting other users’ tweets—and then Twitter made that a feature.
Crawford: Yeah, exactly. Many of the best ideas come from watching user behavior on the platform, and then we just productize that.
Rinearson: Sometimes I’ve heard people call that the “help wanted signs,” and like, keeping an eye out for the “help wanted signs” across the platform. The NFT profile picture was a clear example of that.
How do all these things—these two things and possibly other crypto features coming further down the line—really help Twitter’s bottom line?
Crawford: With creator monetization, our goal was to help creators get paid, not Twitter. Twitter does take a really small cut of earnings, and for more successful creators, we take a larger percentage. The way we think about this is, it is part of our revenue diversification.
Twitter today is a wholly ad-based business. In the future we imagine Twitter making money from a variety of different product areas. So Twitter Blue is one of those products—you can pay $2.99 a month and you get additional features, such as the NFT profile pictures. We really think that revenue diversification sits across a variety of areas, and creator monetization is one really small component of that.
As you said, these are just early experiments. Where is Twitter Crypto going next? What’s your vision for crypto technology’s role within Twitter?
Rinearson: The real trick here is to find the right parts of Twitter to decentralize, and to not try to decentralize everything at once—or, you know, make every user suddenly responsible for taking care of some private keys or something like that.
We have to find the right ways to open up some access to a decentralized economic layer, or give people ways that they can take their identity with them, without relying on a single centralized service.
We’re really early in these explorations, and even looking at things like Bitcoin tipping or the NFT profile pictures—we view those features as experiments themselves in a lot of ways and learning experiences. We’re learning things about how our users relate to these concepts, what they understand about them, what they find confusing, and what’s most useful to them. We really want to try to use this technology to bring utility to people and you know, not just like, sprinkle a little blockchain on it for the sake of it. So creator monetization is an area that I’m really excited about because I think there’s a really clear path forward. But again, we’re looking beyond that: We’re also looking at using crypto technology in fields like [digital] identity and [digital] ownership space and also figuring out how we can better serve crypto communities on the platform.
Are you going to put Twitter verified users’ blue ticks on a blockchain, then?
[Laughter]
No?
[More laughter]
OK, moving on. How does the kind of work you do dovetail with Bluesky’s plan to create a protocol for a decentralized social media platform? Is there any synergy there?
Rinearson: I have known Jay [Graber], the Bluesky lead, for a long time, and she and I are in pretty close contact. We check in with each other regularly and talk a lot about problems we might have in common that we’ll both need to solve. There’s an overlap looking at things in the identity area, but at the end of the day, it’s a separate project. She’s pretty focused on hiring her team, and they’re very focused on building a prototype of a protocol. That is different from what Esther and I are thinking about, which is like: There are all these blockchain protocols that exist, and we need to figure out how to make them useful and accessible for real people.
And when I say “real people,” I mean that in a sort of tongue-in-cheek contrast to hardcore crypto nerds like me. Jay is thinking much more about building for people who are creating decentralized networks. That is a very different focus area. Beyond that, I would just say it’s too early to say what Bluesky will mean for Twitter as a product. We are in touch, we have aligned values. But at the end of the day—separate teams.
Why is a centralized Silicon Valley company like Twitter the right place to start to bring more decentralization to internet users? Don’t we just have to start from scratch and build a new platform that is already decentralized?
Rinearson: I started in crypto in 2015, and I have a very vivid memory from those years of watching some of my coworkers—crypto engineers—trying to figure out how to secure some of their Bitcoin like before one of the Bitcoin forks [in which the Bitcoin blockchain split, creating new currencies], and they were panicking and freaking out. I thought there was no way that a normal person would be able to handle this in a way that would be safe. And so I was a little bit disillusioned with crypto, especially from a consumer perspective.
And then last year, I started seeing more interest from people whom I’d known for a long time and who weren’t crypto people. They were just starting to perk their heads up and take notice and start creating NFTs or start talking about DAOs. And I thought that that was interesting, that we were coming around a corner, and it might be time to start thinking about what this could mean for people beyond that hardcore crypto group.
And that was when Twitter reached out. You know, I don’t think that just any centralized platform would be able to bring crypto to the masses, so to speak. But I think Twitter has the right stuff. I think you have to meet people where they are with new technologies: find ways to onboard them and bring them along, show them what this might mean for them, and make things accessible. And it’s really, really hard to do that with just a protocol. You need to have some kind of community, you need to have some kind of user base, you need to have some kind of platform. And Twitter’s just right there.
I don’t think I would say that a centralized platform is definitely the way to “bring crypto to the masses.” I do think that Twitter is the way to do it.
But why do the masses need crypto right now?
Rinearson: I don’t know that anyone needs crypto, and our goal is not to get everyone into crypto. Let’s be clear about that. But I do think that crypto is a potentially very powerful tool for people. And so I think what we are trying to do is show people how powerful it is and unlock those possibilities. It’s also possible that we create some products and features, where people actually don’t even really know what’s happening under the hood.
Like maybe we’re using crypto as a payment rail or again as an identity layer—users don’t necessarily need to know all of those implementation details. And that’s actually something we come back to a lot: What level of abstraction are we talking about with users? What story are we telling them about what’s happening under the hood? But yeah, I would just like to reiterate that the goal is not to just shovel everyone into crypto. We want to provide value for people.
Do you think there is a case for Twitter to launch its own cryptocurrency— a Twittercoin?
Rinearson: I think there’s a case for a lot of things—honestly, there’s a case for a lot of things. We’re trying to think really, really broadly about it.
Crawford: We’re actively exploring a lot of things. It’s not something we would be making an announcement about.
Rinearson: I think it is really important to stress that when you say “Twittercoin” you probably have a slightly different idea of what it is than we do. And are we exploring those ideas? Yes, we want to think about all of them. Do we have road maps for them? No. But are we trying to think about things really creatively and be really, really open-minded? Yes. We have this new economic technology that we think could unlock a lot of things for people. And we want to go down a bunch of rabbit holes and see what we come up with.
Gian M. Volpicelli is a senior writer at WIRED, where he covers cryptocurrency, decentralization, politics, and technology regulation. He received a master’s degree in journalism from City University of London after studying politics and international relations in Rome. He lives in London.
Highlighted Text in BLUE, BLACK, GREEN, RED by Aviva Lev-Ari, PhD, RN
In this article, I will list 9 free Harvard courses that you can take to learn data science from scratch. Feel free to skip any of these courses if you already possess knowledge of that subject.
Step 1: Programming
The first step you should take when learning data science is to learn to code. You can choose to do this with your choice of programming language, ideally Python or R.
If you’d like to learn R, Harvard offers an introductory R course created specifically for data science learners, called Data Science: R Basics.
This program will take you through R concepts like variables, data types, vector arithmetic, and indexing. You will also learn to wrangle data with libraries like dplyr and create plots to visualize data.
If you prefer Python, you can choose to take CS50’s Introduction to Programming with Python offered for free by Harvard. In this course, you will learn concepts like functions, arguments, variables, data types, conditional statements, loops, objects, methods, and more.
Both programs above are self-paced. However, the Python course is more detailed than the R program, and requires a longer time commitment to complete. Also, the rest of the courses in this roadmap are taught in R, so it might be worth learning R to be able to follow along easily.
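To give a flavor of the kind of basics the Python option covers, here is a small, self-contained snippet (illustrative only, not course material) touching on functions, arguments, variables, data types, conditionals, and loops:

```python
# A tiny taste of programming basics: functions, arguments, variables,
# data types, a conditional statement, and a loop. Not from the course itself.

def describe_number(n: int) -> str:
    """Return a short description of an integer using a conditional."""
    if n % 2 == 0:
        return f"{n} is even"
    return f"{n} is odd"

def main() -> None:
    numbers = [1, 2, 3, 4, 5]          # a list (data type) stored in a variable
    for n in numbers:                  # a loop
        print(describe_number(n))      # calling a function with an argument

if __name__ == "__main__":
    main()
```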
Step 2: Data Visualization
Visualization is one of the most powerful techniques with which you can translate your findings in data to another person.
With Harvard’s Data Visualization program, you will learn to build visualizations using the ggplot2 library in R, along with the principles of communicating data-driven insights.
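The course itself teaches ggplot2 in R; as a rough Python analogue of the same idea (mapping data columns onto a plot), a minimal scatter plot with pandas and matplotlib might look like the sketch below. The column names and values are invented for illustration.

```python
# A minimal visualization sketch: plot one column against another.
# The "dose"/"response" data are made up purely for illustration.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({
    "dose": [1, 2, 4, 8, 16],
    "response": [3.1, 4.0, 5.2, 6.9, 8.5],
})

plt.scatter(df["dose"], df["response"])
plt.xlabel("Dose")
plt.ylabel("Response")
plt.title("A minimal data visualization")
plt.show()
```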
Step 3: Probability
In this course, you will learn essential probability concepts that are fundamental to conducting statistical tests on data. The topics taught include random variables, independence, Monte Carlo simulations, expected values, standard errors, and the Central Limit Theorem.
The concepts above will be introduced with the help of a case study, which means that you will be able to apply everything you learned to an actual real-world dataset.
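As a hedged illustration of the Monte Carlo and standard-error ideas (in Python rather than the course’s R), the sketch below repeatedly samples from a skewed population and compares the simulated standard error of the sample mean with the theoretical sigma/sqrt(n) predicted by the Central Limit Theorem. The population and sample sizes are arbitrary.

```python
# Monte Carlo sketch: the spread of sample means shrinks like sigma/sqrt(n).
import numpy as np

rng = np.random.default_rng(42)
population = rng.exponential(scale=2.0, size=100_000)  # a skewed population

def simulate_sample_means(sample_size: int, n_simulations: int = 10_000) -> np.ndarray:
    """Draw repeated samples and return the distribution of their means."""
    samples = rng.choice(population, size=(n_simulations, sample_size))
    return samples.mean(axis=1)

for n in (5, 30, 200):
    means = simulate_sample_means(n)
    print(f"n={n:>3}  mean of means={means.mean():.3f}  "
          f"simulated SE={means.std(ddof=1):.3f}  "
          f"theoretical sigma/sqrt(n)={population.std(ddof=1) / np.sqrt(n):.3f}")
```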
Step 4: Statistics
After learning probability, you can take this course to learn the fundamentals of statistical inference and modelling.
This program will teach you to define population estimates and margins of error, introduce you to Bayesian statistics, and provide you with the fundamentals of predictive modeling.
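For a concrete sense of what a population estimate and its margin of error look like, here is a minimal Python sketch using the normal approximation for a proportion; the poll numbers are invented.

```python
# Point estimate and 95% margin of error for a proportion (normal approximation).
import math

successes, n = 520, 1000                  # e.g., 520 of 1000 respondents agree (invented)
p_hat = successes / n                     # point estimate of the population proportion
se = math.sqrt(p_hat * (1 - p_hat) / n)   # standard error of the estimate
margin_of_error = 1.96 * se               # ~95% confidence multiplier

print(f"estimate = {p_hat:.3f} ± {margin_of_error:.3f} "
      f"(95% CI: {p_hat - margin_of_error:.3f} to {p_hat + margin_of_error:.3f})")
```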
Step 5: Productivity Tools (Optional)
I’ve included this project management course as optional since it isn’t directly related to learning data science. Rather, you will be taught to use Unix/Linux for file management, GitHub and version control, and how to create reports in R.
The ability to do the above will save you a lot of time and help you better manage end-to-end data science projects.
Step 6: Data Pre-Processing
The next course in this list is called Data Wrangling, and will teach you to prepare data and convert it into a format that is easily digestible by machine learning models.
You will learn to import data into R, tidy data, process string data, parse HTML, work with date-time objects, and mine text.
As a data scientist, you often need to extract data that is publicly available on the Internet in the form of a PDF document, HTML webpage, or a Tweet. You will not always be presented with clean, formatted data in a CSV file or Excel sheet.
By the end of this course, you will learn to wrangle and clean data to come up with critical insights from it.
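The course is taught in R (tidyverse); the pandas sketch below is only a rough analogue of a few of the tasks it describes: importing data, cleaning strings, and parsing dates. The CSV content is made up.

```python
# A rough pandas analogue of basic data wrangling: import, clean strings, parse dates.
import io
import pandas as pd

raw_csv = io.StringIO(
    "patient, visit_date,  weight_kg\n"
    "  A-001 ,2021-03-05, 71.2\n"
    "A-002,2021-03-06,  64.8\n"
)

df = pd.read_csv(raw_csv)
df.columns = [c.strip() for c in df.columns]          # tidy up header whitespace
df["patient"] = df["patient"].str.strip()             # clean string data
df["visit_date"] = pd.to_datetime(df["visit_date"])   # work with date-time objects

print(df.dtypes)
print(df)
```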
Step 7: Linear Regression
Linear regression is a machine learning technique that is used to model a linear relationship between two or more variables. It can also be used to identify and adjust the effect of confounding variables.
This course will teach you the theory behind linear regression models, how to examine the relationship between two variables, and how confounding variables can be detected and removed before building a machine learning algorithm.
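The sketch below (Python rather than the course’s R) simulates a confounded exposure–outcome relationship and shows how including the confounder in an ordinary-least-squares fit recovers an effect close to the true value, while the naive fit is biased. All data are simulated.

```python
# Linear regression with and without adjusting for a confounder.
import numpy as np

rng = np.random.default_rng(0)
n = 500
confounder = rng.normal(size=n)                  # e.g., age
exposure = 0.8 * confounder + rng.normal(size=n)
outcome = 2.0 * exposure + 1.5 * confounder + rng.normal(size=n)

# Naive model: outcome ~ exposure (confounded estimate)
X_naive = np.column_stack([np.ones(n), exposure])
beta_naive, *_ = np.linalg.lstsq(X_naive, outcome, rcond=None)

# Adjusted model: outcome ~ exposure + confounder
X_adj = np.column_stack([np.ones(n), exposure, confounder])
beta_adj, *_ = np.linalg.lstsq(X_adj, outcome, rcond=None)

print(f"naive exposure effect:    {beta_naive[1]:.2f}  (biased upward)")
print(f"adjusted exposure effect: {beta_adj[1]:.2f}  (close to the true 2.0)")
```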
Step 8: Machine Learning
Finally, the course you’ve probably been waiting for! Harvard’s machine learning program will teach you the basics of machine learning, techniques to mitigate overfitting, supervised and unsupervised modelling approaches, and recommendation systems.
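As a small, hedged illustration of the overfitting idea covered in the course, the Python sketch below fits polynomials of increasing degree to simulated data and compares training error with held-out error; the data and degrees are arbitrary.

```python
# Overfitting in miniature: flexible models can fit the training data well
# yet generalize worse to held-out data than simpler ones.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, size=80)
y = np.sin(x) + rng.normal(scale=0.3, size=80)

train_x, test_x = x[:60], x[60:]
train_y, test_y = y[:60], y[60:]

for degree in (1, 3, 15):
    coeffs = np.polyfit(train_x, train_y, deg=degree)
    train_mse = np.mean((np.polyval(coeffs, train_x) - train_y) ** 2)
    test_mse = np.mean((np.polyval(coeffs, test_x) - test_y) ** 2)
    print(f"degree {degree:>2}: train MSE={train_mse:.3f}  test MSE={test_mse:.3f}")
```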
Step 9: Capstone Project
After completing all the above courses, you can take Harvard’s data science capstone project, where your skills in data visualization, probability, statistics, data wrangling, data organization, regression, and machine learning will be assessed.
With this final project, you will get the opportunity to put together all the knowledge learnt from the above courses and gain the ability to complete a hands-on data science project from scratch.
Note: All the courses above are available on an online learning platform from edX and can be audited for free. If you want a course certificate, however, you will have to pay for one.
Each of these posts was on the importance of scientific curation of findings within the realm of social media and Web 2.0; a sub-environment known throughout the scientific communities as Science 2.0, in which expert networks collaborate to produce a massive new corpus of knowledge by sharing their views and insights on peer-reviewed scientific findings. And through this new media, the process of curation would, in itself, generate new ideas and new directions for research and discovery.
The platform sort of looked like the image below:
This system lay above a platform of the original Science 1.0, made up of all the scientific journals, books, and traditional literature:
In the old Science 1.0 format, scientific dissemination was in the format of hard print journals, and library subscriptions were mandatory (and eventually expensive). Open Access has tried to ameliorate the expense problem.
Previous image source: PeerJ.com
To index the massive and voluminous research and papers beyond the old Dewey Decimal system, a process of curation was mandatory. Dissemination through the new social media was a natural fit, but the cost had to be spread out among numerous players. Journals, faced with the high costs of subscriptions, found that their only way into this new media as an outlet was to become Open Access, a movement first sparked by journals like PLOS and PeerJ but then begrudgingly adopted throughout the landscape. But with any movement or new adoption one gets the Good, the Bad, and the Ugly (as described in the Clive Thompson article cited above). The downsides of Open Access journals were:
costs are still assumed by the individual researcher, not by the journals
the rise of numerous predatory journals
Even PeerJ, in their column celebrating a year’s worth of Open Access success stories, lamented the key issues still facing Open Access in practice, which included the cost and the rise of predatory journals.
In essence, Open Access and Science 2.0 sprang up in full force BEFORE anyone thought of a way to defray the costs.
Can Web 3.0 Finally Offer a Way to Right the Issues Facing High Costs of Scientific Publishing?
Web 1.0 and Web 2.0 refer to eras in the history of the Internet as it evolved through various technologies and formats. Web 1.0 refers roughly to the period from 1991 to 2004, where most websites were static webpages, and the vast majority of users were consumers, not producers, of content.[6][7] Web 2.0 is based around the idea of “the web as platform”,[8] and centers on user-created content uploaded to social-networking services, blogs, and wikis, among other services.[9] Web 2.0 is generally considered to have begun around 2004, and continues to the current day.[8][10][4]
The term “Web3”, specifically “Web 3.0”, was coined by Ethereum co-founder Gavin Wood in 2014.[1] In 2020 and 2021, the idea of Web3 gained popularity. Particular interest spiked towards the end of 2021, largely due to interest from cryptocurrency enthusiasts and investments from high-profile technologists and companies.[4][5] Executives from venture capital firm Andreessen Horowitz travelled to Washington, D.C. in October 2021 to lobby for the idea as a potential solution to questions about Internet regulation with which policymakers have been grappling.[11]
Web3 is distinct from Tim Berners-Lee‘s 1999 concept for a semantic web, which has also been called “Web 3.0”.[12] Some writers referring to the decentralized concept usually known as “Web3” have used the terminology “Web 3.0”, leading to some confusion between the two concepts.[2][3] Furthermore, some visions of Web3 also incorporate ideas relating to the semantic web.[13][14]
Web3 revolves around the idea of decentralization, which proponents often contrast with Web 2.0, wherein large amounts of the web’s data and content are centralized in the fairly small group of companies often referred to as Big Tech.[4]
Specific visions for Web3 differ, but all are heavily based in blockchain technologies, such as various cryptocurrencies and non-fungible tokens (NFTs).[4] Bloomberg described Web3 as an idea that “would build financial assets, in the form of tokens, into the inner workings of almost anything you do online”.[15] Some visions are based around the concepts of decentralized autonomous organizations (DAOs).[16] Decentralized finance (DeFi) is another key concept; in it, users exchange currency without bank or government involvement.[4] Self-sovereign identity allows users to identify themselves without relying on an authentication system such as OAuth, in which a trusted party has to be reached in order to assess identity.[17]
Technologists and journalists have described Web3 as a possible solution to concerns about the over-centralization of the web in a few “Big Tech” companies.[4][11] Some have expressed the notion that Web3 could improve data security, scalability, and privacy beyond what is currently possible with Web 2.0 platforms.[14] Bloomberg states that sceptics say the idea “is a long way from proving its use beyond niche applications, many of them tools aimed at crypto traders”.[15] The New York Times reported that several investors are betting $27 billion that Web3 “is the future of the internet”.[18][19]
Some companies, including Reddit and Discord, have explored incorporating Web3 technologies into their platforms in late 2021.[4][20] After heavy user backlash, Discord later announced they had no plans to integrate such technologies.[21] The company’s CEO, Jason Citron, tweeted a screenshot suggesting it might be exploring integrating Web3 into their platform. This led some to cancel their paid subscriptions over their distaste for NFTs, and others expressed concerns that such a change might increase the amount of scams and spam they had already experienced on crypto-related Discord servers.[20] Two days later, Citron tweeted that the company had no plans to integrate Web3 technologies into their platform, and said that it was an internal-only concept that had been developed in a company-wide hackathon.[21]
Some legal scholars quoted by The Conversation have expressed concerns over the difficulty of regulating a decentralized web, which they reported might make it more difficult to prevent cybercrime, online harassment, hate speech, and the dissemination of child abuse images.[13] But, the news website also states that, “[decentralized web] represents the cyber-libertarian views and hopes of the past that the internet can empower ordinary people by breaking down existing power structures.” Some other critics of Web3 see the concept as a part of a cryptocurrency bubble, or as an extension of blockchain-based trends that they see as overhyped or harmful, particularly NFTs.[20] Some critics have raised concerns about the environmental impact of cryptocurrencies and NFTs. Others have expressed beliefs that Web3 and the associated technologies are a pyramid scheme.[5]
Kevin Werbach, author of The Blockchain and the New Architecture of Trust,[22] said that “many so-called ‘web3’ solutions are not as decentralized as they seem, while others have yet to show they are scalable, secure and accessible enough for the mass market”, adding that this “may change, but it’s not a given that all these limitations will be overcome”.[23]
David Gerard, author of Attack of the 50 Foot Blockchain,[24] told The Register that “web3 is a marketing buzzword with no technical meaning. It’s a melange of cryptocurrencies, smart contracts with nigh-magical abilities, and NFTs just because they think they can sell some monkeys to morons”.[25]
Below is an article from MarketWatch.com Distributed Ledger series about the different forms and cryptocurrencies involved
by Frances Yue, Editor of Distributed Ledger, Marketwatch.com
Clayton Gardner, co-CEO of crypto investment management firm Titan, told Distributed Ledger that as crypto embraces broader adoption, he expects more institutions to bypass bitcoin and invest in other blockchains, such as Ethereum, Avalanche, and Terra, in 2022, all of which boast smart-contract features.
Bitcoin traditionally did not support complex smart contracts, which are computer programs stored on blockchains, though a major upgrade in November might have unlocked more potential.
“Bitcoin was originally seen as a macro speculative asset by many funds and for many it still is,” Gardner said. “If anything solidifies its use case, it’s a store of value. It’s not really used as originally intended, perhaps from a medium of exchange perspective.”
For institutions that are looking for blockchains that can “produce utility and some intrinsic value over time,” they might consider some other smart contract blockchains that have been driving the growth of decentralized finance and web 3.0, the third generation of the Internet, according to Gardner.
“Bitcoin is still one of the most secure blockchains, but I think layer-one, layer-two blockchains beyond Bitcoin, will handle the majority of transactions and activities from NFT (nonfungible tokens) to DeFi,“ Gardner said. “So I think institutions see that and insofar as they want to put capital to work in the coming months, I think that could be where they just pump the capital.”
Decentralized social media?
The price of Decentralized Social, or DeSo, a cryptocurrency powering a blockchain that supports decentralized social media applications, surged roughly 74% to about $164 from $94, after Deso was listed at Coinbase Pro on Monday, before it fell to about $95, according to CoinGecko.
In the eyes of Nader Al-Naji, head of the DeSo foundation, decentralized social media has the potential to be “a lot bigger” than decentralized finance.
“Today there are only a few companies that control most of what we see online,” Al-Naji told Distributed Ledger in an interview. But DeSo is “creating a lot of new ways for creators to make money,” Al-Naji said.
“If you find a creator when they’re small, or an influencer, you can invest in that, and then if they become bigger and more popular, you make money and they get capital early on to produce their creative work,” according to Al-Naji.
BitClout, the first application created by Al-Naji and his team on the DeSo blockchain, had initially drawn controversy, as some found that they had profiles on the platform without their consent, while the application’s users were buying and selling tokens representing their identities. Such tokens are called “creator coins.”
Al-Naji responded to the controversy by saying that DeSo now supports more than 200 social-media applications, including BitClout. “I think that if you don’t like those features, you now have the freedom to use any app you want. Some apps don’t have that functionality at all.”
But before I get to the “selling monkeys to morons” quote,
I want to talk about
THE GOOD, THE BAD, AND THE UGLY
THE GOOD
My foray into Science 2.0, and my pondering of what the movement into a Science 3.0 would look like, led me to an article by Dr. Vladimir Teif, who studies gene regulation and the nucleosome and has created a worldwide group of scientists who discuss matters of chromatin and gene regulation in a journal-club format.
Fragile Nucleosome is an international community of scientists interested in chromatin and gene regulation. Fragile Nucleosome is active in several spaces: one is the Discord server where several hundred scientists chat informally on scientific matters. You can join the Fragile Nucleosome Discord server. Another activity of the group is the organization of weekly virtual seminars on Zoom. Our webinars are usually conducted on Wednesdays 9am Pacific time (5pm UK, 6pm Central Europe). Most previous seminars have been recorded and can be viewed at our YouTube channel. The schedule of upcoming webinars is shown below. Our third activity is the organization of weekly journal clubs detailed at a separate page (Fragile Nucleosome Journal Club).
He had coined this concept of Science 3.0 back in 2009. As Dr. Teif mentioned:
So essentially I first introduced this word Science 3.0 in 2009, and since then we did a lot to implement this in practice. The Twitter account @generegulation is also one of examples
This is curious, as we still have an ill-defined concept of what #science3_0 would look like, but it is a good read nonetheless.
The concept of Science 2.0 was introduced almost a decade ago to describe the new generation of online-based tools for researchers allowing easier data sharing, collaboration and publishing. Although technically sound, the concept still does not work as expected. Here we provide a systematic line of arguments to modify the concept of Science 2.0, making it more consistent with the spirit and traditions of science and Internet. Our first correction to the Science 2.0 paradigm concerns the open-access publication models charging fees to the authors. As discussed elsewhere, we show that the monopoly of such publishing models increases biases and inequalities in the representation of scientific ideas based on the author’s income. Our second correction concerns post-publication comments online, which are all essentially non-anonymous in the current Science 2.0 paradigm. We conclude that scientific post-publication discussions require special anonymization systems. We further analyze the reasons of the failure of the current post-publication peer-review models and suggest what needs to be changed in Science 3.0 to convert Internet into a large journal club. [bold face added]
In this paper it is important to note the transition from Science 1.0, which involved hard-copy journal publications usually accessible only in libraries, to a more digital 2.0 format in which data, papers, and ideas could be easily shared among networks of scientists.
As Dr. Teif states, the term “Science 2.0” had been coined back in 2009, and several influential journals, including Science, Nature, and Scientific American, endorsed the term and encouraged scientists to move their discussions online. However, even though thousands of scientists are now on Science 2.0 platforms, Dr. Teif notes that membership of many Science 2.0 networking groups, such as those on LinkedIn and ResearchGate, has seemingly saturated over the years, with few new members in recent times.
The consensus is that:
Science 2.0 networking is good because it multiplies the efforts of many scientists, including experts, and adds scientific discourse unavailable in a 1.0 format
online data sharing is good because it assists the process of discovery (evident with preprint servers, bio-curated databases, and GitHub projects)
open-access publishing is beneficial because it provides free access to professional articles, and open access may become the only publishing format in the future (although this is highly debatable, as many journals are holding on to a “hybrid open access” format which is not truly open access)
sharing of unfinished works, critiques, and opinions is good because it creates visibility for scientists, who can receive credit for their expert commentary
There are a few concerns about Science 3.0 that Dr. Teif articulates:
A. Science 3.0 Still Needs Peer Review
Peer review of scientific findings will always be imperative in the dissemination of well-done, properly controlled scientific discovery. As Science 2.0 relies on an army of scientific volunteers, the peer review process also involves an army of scientific experts who give their time to safeguard the credibility of science by ensuring that findings are reliable and data are presented fairly and properly. It has been very evident, in this time of pandemic and the rapid increase in the volume of preprint-server papers on SARS-CoV-2, that peer review is critical. Many of the papers on such preprint servers were later either retracted or failed a stringent peer review process.
Many journals of the 1.0 format do not generally reward their peer reviewers beyond the credit that researchers can claim on their curricula vitae. Some journals, like the MDPI journal family, do issue peer-reviewer credits, which can be used to defray the high publication costs of open access (one area that many scientists lament about the open access movement, where the burden of publication cost lies on the individual researcher).
An issue highlighted here is the potential for INFORMATION NOISE arising from the ability to self-publish on Science 2.0 platforms.
The NEW BREED was born in 4/2012
An ongoing effort on this platform, https://pharmaceuticalintelligence.com/, is to establish a scientific methodology for curating scientific findings, where one of the goals is to help quell the information noise that can result from the massive amounts of new informatics and data appearing in the biomedical literature.
B. Open Access Publishing Model leads to biases and inequalities in the idea selection
The open access publishing model has been compared to the model applied by the advertising industry years ago, when publishers considered journal articles as “advertisements”. However, NOTHING could be further from the truth. In advertising, the companies, not the consumer, pay for the ads. In scientific open access publishing, by contrast, although the consumer (libraries) does not pay for access, the burden of BOTH the cost of doing the research and the cost of publishing the findings is now placed on the individual researcher. Some of these publishing costs can be as high as $4,000 USD per article, which is very high for most researchers. Many universities do reimburse open access publication fees, but then the cost still falls on the institution and the individual researcher, limiting the savings to either.
However, this sets up a situation in which young researchers, who in general are not well funded, struggle with the publication costs, creating a biased and inequitable system which rewards well-funded, established researchers and bigger academic labs.
C. Post publication comments and discussion require online hubs and anonymization systems
Many recent publications stress the importance of a post-publication review process or system, yet although many big journals like Nature and Science have their own blogs and commentary systems, these are rarely used. In fact, they show that there is just one comment per 100 views of a journal article on these systems. In traditional journals, editors are the referees of comments and have the ability to censor comments or discourse. The article laments that commenting on journals should be as easy as commenting on other social sites; however, scientists are not offering their comments or opinions on the matter.
In my personal experience,
a well-written commentary goes through editors, who often reject a comment as if they were rejecting an original research article. Thus many scientists, I believe, after fashioning a well-researched and referenced reply, never see it get the light of day if it is not in the editor’s interest.
Therefore anonymity is greatly needed, and its absence may be the hindrance that keeps scientific discourse so limited on these types of Science 2.0 platforms. Platforms that have had success in this arena include anonymous platforms like Wikipedia and certain closed LinkedIn professional groups, but more open platforms like Google Knowledge have been a failure.
A great example on this platform was a very spirited LinkedIn conversation on genomics, tumor heterogeneity, and personalized medicine, which we curated from the LinkedIn discussion (unfortunately LinkedIn has closed many groups), seen here:
In this discussion, it was surprising that over a weekend so many scientists from all over the world contributed to a great discussion on the topic of tumor heterogeneity.
But many feel such discussions would be safer if they were anonymized. However, researchers then do not get any credit for their opinions or commentaries.
A major problem is how to take these intangible contributions and turn them into tangible assets that would both promote discourse and reward those who take the time to improve scientific discussion.
This is where something like NFTs or a decentralized network may become important!
Below is an online @TwitterSpace discussion we had with some young scientists who are just starting out and who gave their thoughts on what SCIENCE 3.0 and the future of the dissemination of science might look like, in light of this new metaverse. However, we have to define each of these terms in light of Science, and not treat the Internet as merely a decentralized marketplace for commonly held goods.
This online discussion was tweeted out and got a fair amount of impressions (60) as well as interactors (50).
To introduce this discussion, here first is a little starting material that will frame the discourse.
The Internet and the Web are rapidly adopting a new “Web 3.0” format, with decentralized networks, enhanced virtual experiences, and greater interconnection between people. Here we start the discussion of what the move will look like from Science 2.0, in which the dissemination of scientific findings was revolutionized by piggybacking on Web 2.0 and social media, to a Science 3.0 format. What will it involve, and which paradigms will be turned upside down?
Old Science 1.0 is still the backbone of all scientific discourse, built on the massive amount of experimental and review literature. However, this literature was in analog format, and we have since moved to a more accessible, digital, open access format for both publications and raw data. Where 1.0 had a structure, like the Dewey Decimal System and indexing, 2.0 made science more accessible and easier to search thanks to the newer digital formats. Yet both needed an organizing structure: for 1.0 it was the scientific method of data and literature organization, with libraries as the indexers; 2.0 relied on an army of mostly volunteers who did not have much in the way of incentivization to co-curate and organize the findings and the massive literature.
Each version of Science has its caveats: benefits as well as deficiencies. This curation and the ongoing discussion are meant to solidify the basis for the new format, along with definitions and a determination of its structure.
We had high hopes for Science 2.0, in particular the smashing of data and knowledge silos. However, the digital age along with 2.0 platforms seemed somehow to exacerbate this. We are still critically short on analysis!
We really need people and organizations to get on top of this new Web 3.0 or metaverse so that similar issues do not get in the way: namely, we need to create an organizing structure (perhaps as knowledgebases), we need INCENTIVIZED co-curators, and we need ANALYSIS… lots of it!!
Are these new technologies the cure or is it just another headache?
There were a few overarching themes, whether one was talking about AI, NLP, Virtual Reality, or other new technologies with respect to this new metaverse, and a consensus of Decentralized, Incentivized, and Integrated was commonly expressed among the attendees.
The Following are some slides from representative Presentations
Other articles of note on this topic in this Open Access Scientific Journal include:
Medical Startups – Artificial Intelligence (AI) Startups in Healthcare
Reporters: Stephen J. Williams, PhD and Aviva Lev-Ari, PhD, RN and Shraga Rottem, MD, DSc,
The motivation for this post is three-fold:
First, we are presenting an application of AI, NLP, DL to our own medical text in the Genomics space. Here we present the first section of Part 1 in the following book. Part 1 has six subsections that yielded 12 plots. The entire Book is represented by 38 x 2 = 76 plots.
Second, we bring to the attention of the e-Reader the list of 276 Medical Startups – Artificial Intelligence (AI) Startups in Healthcare as a hot universe of R&D activity in Human Health.
Third, to highlight one academic center with an AI focus
Dear friends of the ETH AI Center,
We would like to provide you with some exciting updates from the ETH AI Center and its growing community.
As the Covid-19 restrictions in Switzerland have recently been lifted, we would like to hear from you what kind of events you would like to see in 2022! Participate in the survey to suggest event formats and topics that you would enjoy being a part of. We are already excited to learn what we can achieve together this year.
We already have many interesting events coming up, we look forward to seeing you at our main and community events!
LPBI Group is applying AI for Medical Text Analysis with Machine Learning and Natural Language Processing: Statistical and Deep Learning
Our Book
Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS & BioInformatics, Simulations and the Genome Ontology
Medical Text Analysis of this Book shows the following results, obtained by Madison Davis by applying Wolfram NLP for Biological Languages to our own text. See an example below:
With the explosive development of decentralized finance, we witness a phenomenal growth in tokenization of all kinds of assets, including equity, funds, debt, and real estate. By taking advantage of blockchain technology, digital assets are broadly grouped into fungible and non-fungible tokens (NFT). Here non-fungible tokens refer to those with unique and non-substitutable properties. NFT has widely attracted attention, and its protocols, standards, and applications are developing exponentially. It has been successfully applied to digital fantasy artwork, games, collectibles, etc. However, there is a lack of research in utilizing NFT in issues such as Intellectual Property. Applying for a patent and trademark is not only a time-consuming and lengthy process but also costly. NFT has considerable potential in the intellectual property domain. It can promote transparency and liquidity and open the market to innovators who aim to commercialize their inventions efficiently. The main objective of this paper is to examine the requirements of presenting intellectual property assets, specifically patents, as NFTs. Hence, we offer a layered conceptual NFT-based patent framework. Furthermore, a series of open challenges about NFT-based patents and the possible future directions are highlighted. The proposed framework provides fundamental elements and guidance for businesses in taking advantage of NFTs in real-world problems such as grant patents, funding, biotechnology, and so forth.
Introduction
Distributed ledger technologies (DLTs) such as blockchain are emerging technologies posing a threat to existing business models. Traditionally, most companies used centralized authorities in various aspects of their business, such as financial operations and setting up a trust with their counterparts. By the emergence of blockchain, centralized organizations can be substituted with a decentralized group of resources and actors. The blockchain mechanism was introduced in the Bitcoin white paper in 2008, which lets users generate transactions and spend their money without the intervention of banks [1]. Ethereum, which is a second generation of blockchain, was introduced in 2014, allowing developers to run smart contracts on a distributed ledger. With smart contracts, developers and businesses can create financial applications that use cryptocurrencies and other forms of tokens for applications such as decentralized finance (DeFi), crowdfunding, decentralized exchanges, data record keeping, etc. [2]. Recent advances in distributed ledger technology have developed concepts that lead to cost reduction and the simplification of value exchange. Nowadays, by leveraging the advantages of blockchain and taking into account the governance issues, digital assets can be represented as tokens existing in the blockchain network, which facilitates their transmission and traceability, increases their transparency, and improves their security [3].
In the landscape of blockchain technology, two types of tokens can be defined: fungible tokens, in which all the tokens have equal value, and non-fungible tokens (NFTs), which feature unique characteristics and are not interchangeable. Non-fungible tokens are digital assets with a unique identifier that is stored on a blockchain [4]. NFT was initially suggested in Ethereum Improvement Proposal (EIP) 721 [5], and it was later expanded in EIP-1155 [6]. NFTs became one of the most widespread applications of blockchain technology and reached worldwide attention in early 2021. They can be digital representations of real-world objects. NFTs are tradable rights to digital assets (pictures, music, films, and virtual creations) where ownership is recorded in blockchain smart contracts [7].
In particular, fungibility is the ability to exchange one item for another of the same kind, an essential feature of currency. A non-fungible token is unique and therefore cannot be substituted [8]. Recently, blockchain enthusiasts have indicated significant interest in various types of NFTs. They enthusiastically participate in NFT-related games or trades. CryptoPunks [9], one of the first NFT projects on Ethereum, developed almost 10,000 collectible punks and helped popularize the ERC-721 standard. With the gamification of its breeding mechanics, CryptoKitties [10] officially placed NFTs at the forefront of the market in 2017. CryptoKitties is an early blockchain game that enables users to buy, sell, collect, and digitally breed cats. Another example is NBA Top Shot [11], an NFT trading platform for buying and selling digital short films of NBA events.
NFTs are developing remarkably and have provided many applications such as artist royalties, in-game assets, educational certificates, etc. However, it is a relatively new concept, and many areas of application need to be explored. Intellectual Property, including patent, trademark, and copyright, is an important area where NFTs can be applied usefully and solve existing problems.
Although NFTs have had many applications so far, they have rarely been used to solve real-world problems. In fact, NFT is an exciting concept for Intellectual Property (IP). Applying for a patent or trademark is not only a time-consuming and lengthy process but also a costly one: registering a copyright or trademark may take months, while securing a patent can take years. On the contrary, with the help of the unique features of NFT technology, it is possible to accelerate this process with considerable confidence and assurance about protecting the ownership of an IP. NFTs can offer IP protection while an applicant waits for the government to grant the more formal protection. There is cause for excitement among people who believe NFTs and blockchain would make buying and selling patents easier, offering new opportunities for companies, universities, and inventors to make money off their innovations [12]. Patent holders would benefit from such innovation. It would give them the ability to ‘tokenize’ their patents. Because every transaction would be logged on a blockchain, it would be much easier to trace patent ownership changes. NFTs would also facilitate the revenue generation of patents by democratizing patent licensing via NFT. NFTs support the intellectual property market by embedding automatic royalty-collecting methods inside inventors’ works, providing them with financial benefits any time their innovation is licensed. For example, each inventor’s patent would be minted as an NFT, and these NFTs would be joined together to form a commercial IP portfolio and minted as a compounded NFT. Each investor would then automatically get their fair share of royalties whenever licensing revenue is generated, without needing to be tracked down.
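To make the royalty idea sketched above more concrete, here is a toy, plain-Python illustration of a pro-rata split of licensing revenue across patent NFTs joined into a compounded portfolio. It is not the authors' design and not a smart contract; all names, shares, and amounts are invented.

```python
# Toy illustration: split licensing revenue pro-rata across patent NFTs in a portfolio.
from dataclasses import dataclass

@dataclass
class PatentNFT:
    patent_id: str
    inventor: str
    share: float          # fraction of the compounded portfolio NFT (invented numbers)

def distribute_royalties(portfolio: list[PatentNFT], licensing_revenue: float) -> dict[str, float]:
    """Split licensing revenue among inventors in proportion to their shares."""
    total_share = sum(p.share for p in portfolio)
    payouts: dict[str, float] = {}
    for p in portfolio:
        payouts[p.inventor] = payouts.get(p.inventor, 0.0) + licensing_revenue * p.share / total_share
    return payouts

portfolio = [
    PatentNFT("US-0000001", "Inventor A", share=0.5),   # hypothetical patent IDs
    PatentNFT("US-0000002", "Inventor B", share=0.3),
    PatentNFT("US-0000003", "Inventor C", share=0.2),
]
print(distribute_royalties(portfolio, licensing_revenue=10_000.0))
```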
In [13], an overview of NFTs’ applications in different areas such as gambling, games, and collectibles is given. In addition, [4] provides a prototype for an event-tracking application based on an Ethereum smart contract, and NFT as a solution for art and real-estate auction systems is described in [14]. However, these studies do not discuss existing standards or a generalized architecture enabling NFTs to be applied in diverse applications. For example, the authors in [15] provide two general design patterns for creating and trading NFTs and discuss existing token standards for NFTs. However, the proposed designs are limited to Ethereum, and other blockchains are not considered [16]. Moreover, different technologies for each step of the proposed procedure are not discussed. In [8], the authors provide a conceptual framework for token design and management and discuss five views: token view, wallet view, transaction view, user interface view, and protocol view. However, no research provides a generalized conceptual framework for generating, recording, and tracing NFT-based IP in a blockchain network.
Even with the clear benefits that NFT-backed patents offer, there are a number of impediments to actually achieving such a system. For example, convincing patent owners to put current ownership records for their patents into NFTs poses an initial obstacle. Because there is no reliable framework for NFT-based patents, this paper provides a conceptual framework for presenting NFT-based patents with a comprehensive discussion on many aspects, ranging from the background, model components, token standards to application domains and research challenges. The main objective of this paper is to provide a layered conceptual NFT-based patent framework that can be used to register patents in a decentralized, tamper-proof, and trustworthy peer-to-peer network to trade and exchange them in the worldwide market. The main contributions of this paper are highlighted as follows:
Providing a comprehensive overview on tokenization of IP assets to create unique digital tokens.
Discussing the components of a distributed and trustworthy framework for minting NFT-based patents.
Highlighting a series of open challenges of NFT-based patents and enlightening the possible future trends.
The rest of the paper is structured as follows: the “Background” section describes the background of NFTs and non-fungible token standards. The NFT-based patent framework is described in the “NFT-based patent framework” section. The discussion and challenges are presented in the “Discussion” section. Lastly, conclusions are given in the “Conclusion” section.
Background
Colored Coins could be considered the first step toward NFTs, designed on top of the Bitcoin network. Bitcoins are fungible, but it is possible to mark them so that they are distinguishable from other bitcoins. These marked coins have special properties representing real-world assets like cars and stocks, and owners can prove their ownership of physical assets through the colored coins. By utilizing Colored Coins, users can transfer ownership of their marked coins like a usual transaction and benefit from Bitcoin’s decentralized network [17]. Colored Coins had limited functionality due to the Bitcoin script limitations. Rare Pepe built on Pepe, a green frog meme originated by Matt Furie: users define tokens for Pepes and trade them through the Counterparty platform, and the tokens created from Pepe pictures are then judged on whether they are rare enough. Rare Pepe allows users to preserve scarcity, manage ownership, and transfer their purchased Pepes.
In 2017, Larva Labs developed the first Ethereum-based NFT project, named CryptoPunks. It contains 10,000 unique human-like characters generated randomly. The official ownership of each character is stored in an Ethereum smart contract, and owners can trade characters. The CryptoPunks project inspired the CryptoKitties project. CryptoKitties attracted attention to NFTs and is a pioneer of blockchain games and NFTs, launched in late 2017. CryptoKitties is a blockchain-based virtual game in which users collect and trade characters with unique features that shape kitties. The game was developed as an Ethereum smart contract, and it pioneered the ERC-721 token, which was the first standard token in the Ethereum blockchain for NFTs. After the 2017 hype around NFTs, many projects started in this context. Due to the increased attention to NFT use cases and a growing market cap, different blockchains like EOS, Algorand, and Tezos started to support NFTs, and various marketplaces like SuperRare, Rarible, and OpenSea were developed to help users trade NFTs. As mentioned, assets are generally categorized into two main classes: fungible and non-fungible assets. Fungible assets are ones that another similar asset can replace. Fungible items have two main characteristics: replicability and divisibility.
Currency is a fungible item because a ten-dollar bill can be exchanged for another ten-dollar bill or divided into ten one-dollar bills. Unlike fungible items, non-fungible items are unique and distinguishable. They cannot be divided or exchanged for another identical item. The first tweet on Twitter is a non-fungible item with the mentioned characteristics. Another tweet cannot replace it, and it is unique and not divisible. An NFT is a non-fungible cryptographic asset that is declared in a standard token format and has a unique set of attributes. Because of the transparency, proof of ownership, and traceable transactions in the blockchain network, NFTs are created using blockchain technology.
Blockchain-based NFTs help enthusiasts create NFTs in a standard token format on the blockchain, transfer the ownership of their NFTs to a buyer, assure the uniqueness of NFTs, and manage NFTs completely. In addition, there are semi-fungible tokens that have characteristics of both fungible and non-fungible tokens. Semi-fungible tokens are fungible within the same class or a specific time and non-fungible across other classes or different times. A plane ticket can be considered a semi-fungible token because a charter ticket can be exchanged for another charter ticket but cannot be exchanged for a first-class ticket. The concept of semi-fungible tokens plays a main role in blockchain-based games and reduces NFT overhead. In Fig. 1, we illustrate fungible, non-fungible, and semi-fungible tokens. The main properties of NFTs are described as follows [15]:
Figure 1. Fungible, non-fungible, and semi-fungible tokens (illustration).
Ownership: Because of the blockchain layer, the owner of NFT can easily prove the right of possession by his/her keys. Other nodes can verify the user’s ownership publicly.
Transferable: Users can freely transfer ownership of their NFTs to others on dedicated markets.
Transparency: By using blockchain, all transactions are transparent, and every node in the network can confirm and trace the trades.
Fraud Prevention: Fraud is one of the key problems in trading assets; hence, using NFTs ensures buyers buy a non-counterfeit item.
Immutability: Metadata, token ID, and history of transactions of NFTs are recorded in a distributed ledger, and it is impossible to change the information of the purchased NFTs.
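To make the properties listed above concrete, here is a purely illustrative, in-memory Python sketch of an NFT-like ledger with unique token IDs, verifiable ownership, owner-only transfers, and an append-only history. It is a toy model, not a blockchain, and none of it comes from the paper.

```python
# Toy NFT ledger: unique token IDs, provable ownership, transfers, append-only history.
from dataclasses import dataclass, field

@dataclass
class ToyNFTLedger:
    owners: dict[int, str] = field(default_factory=dict)                  # token_id -> owner
    history: list[tuple[str, int, str, str]] = field(default_factory=list)  # (action, id, from, to)

    def mint(self, token_id: int, owner: str) -> None:
        if token_id in self.owners:                 # token IDs must be unique
            raise ValueError("token already exists")
        self.owners[token_id] = owner
        self.history.append(("mint", token_id, "", owner))

    def owner_of(self, token_id: int) -> str:       # anyone can check ownership
        return self.owners[token_id]

    def transfer(self, token_id: int, sender: str, recipient: str) -> None:
        if self.owners.get(token_id) != sender:     # only the current owner may transfer
            raise PermissionError("sender does not own this token")
        self.owners[token_id] = recipient
        self.history.append(("transfer", token_id, sender, recipient))

ledger = ToyNFTLedger()
ledger.mint(1, "alice")
ledger.transfer(1, "alice", "bob")
print(ledger.owner_of(1), ledger.history)
```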
Non-fungible standards
The Ethereum blockchain pioneered the implementation of NFTs. The ERC-721 token was the first standard token for NFTs accepted in the Ethereum network. With the increase in popularity of NFTs, developers started developing and enhancing NFT standards on different blockchains like EOS, Algorand, and Tezos. This section provides a review of the NFT standards implemented on these blockchains.
Ethereum
ERC-721 was the first standard for NFTs developed on Ethereum; it is a free and open-source standard. ERC-721 is an interface that a smart contract should implement to have the ability to transfer and manage NFTs. Each ERC-721 token has unique properties and a different token ID. ERC-721 tokens include the owner’s information, a list of approved addresses, a transfer function that implements transferring tokens from owner to buyer, and other useful functions [5].
In ERC-721, smart contracts can group tokens with the same configuration, and each token has different properties, so ERC-721 does not support fungible tokens. However, ERC-1155 is another standard on Ethereum, developed by Enjin, with richer functionality than ERC-721: it supports fungible, non-fungible, and semi-fungible tokens. In ERC-1155, IDs define the class of assets, so different IDs correspond to different classes of assets, and each ID may contain different assets of the same class. Using ERC-1155, a user can transfer different types of tokens in a single transaction and mix multiple fungible and non-fungible types of tokens in a single smart contract [6]. ERC-721 and ERC-1155 both support operators, through which the owner can let an operator originate the transfer of a token.
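As a hedged sketch of how an application might call the standard ERC-721 view functions mentioned above, the snippet below uses the web3.py library with a minimal hand-written ABI. The RPC endpoint, contract address, and token ID are placeholders, not real values.

```python
# Reading ERC-721 state with web3.py; endpoint and address below are placeholders.
from web3 import Web3

ERC721_MIN_ABI = [
    {"name": "ownerOf", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "tokenId", "type": "uint256"}],
     "outputs": [{"name": "", "type": "address"}]},
    {"name": "balanceOf", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "owner", "type": "address"}],
     "outputs": [{"name": "", "type": "uint256"}]},
]

w3 = Web3(Web3.HTTPProvider("https://example-ethereum-rpc.invalid"))  # placeholder endpoint
nft = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # placeholder contract address
    abi=ERC721_MIN_ABI,
)

token_id = 1                                          # placeholder token ID
owner = nft.functions.ownerOf(token_id).call()        # who owns this token?
balance = nft.functions.balanceOf(owner).call()       # how many tokens does that owner hold?
print(owner, balance)
```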
EOSIO
EOSIO is an open-source blockchain platform released in 2018 that claims to eliminate transaction fees and increase transaction throughput. EOSIO differs from Ethereum in its wallet creation algorithm and its procedure for handling transactions. dGoods is a free standard developed on the EOS blockchain for assets, and it focuses on large-scale use cases. It supports a hierarchical naming structure in smart contracts. Each contract has a unique symbol and a list of categories, and each category contains a list of token names. Therefore, a single contract in dGoods can contain many tokens, which makes transferring a group of tokens efficient. Using this hierarchy, dGoods supports fungible, non-fungible, and semi-fungible tokens. It also supports batch transferring, where the owner can transfer many tokens in one operation [18].
Algorand
Algorand is a new high-performance public blockchain launched in 2019. It provides scalability while maintaining security and decentralization, and it supports smart contracts and tokens for representing assets19. Algorand defines the Algorand Standard Assets (ASA) concept to create and manage assets on the Algorand blockchain. Using ASA, users can define fungible and non-fungible tokens. In Algorand, users can create NFTs or FTs without writing smart contracts; they only need to run a single transaction on the Algorand blockchain. Each transaction contains some mutable and some immutable properties20.
Each account in Algorand can create up to 1000 assets, and for every asset an account creates or receives, the minimum balance of the account increases by 0.1 Algos. Algorand also supports fractional NFTs by splitting an NFT into a group of divided FTs or NFTs, each of which can be exchanged independently21. Algorand uses a Clawback Address that operates like an operator in ERC-1155: it is allowed to transfer tokens on behalf of an owner who has granted it permission.
Tezos
Tezos is another decentralized open-source blockchain. Tezos supports the meta-consensus concept: in addition to using a consensus protocol on the ledger’s state, as Bitcoin and Ethereum do, it also attempts to reach consensus about how nodes and the protocol itself should change or upgrade22. FA2 (TZIP-12) is a standard for a unified token contract interface in the Tezos blockchain. FA2 supports different token types, including fungible, non-fungible, and fractionalized NFT contracts. In Tezos, tokens are identified by a pair consisting of a token contract address and a token ID. Tezos also supports batch token transfers, which reduce the cost of transferring multiple tokens.
Flow
Flow was developed by Dapper Labs to overcome the scalability limitations of the Ethereum blockchain. Flow is a fast, decentralized blockchain that focuses on games and digital collectibles. Its architecture improves throughput and scalability without sharding. Flow supports smart contracts using Cadence, a resource-oriented programming language. In Cadence, an NFT can be described as a resource with a unique ID. Resources have important rules for ownership management: a resource has exactly one owner and cannot be copied or lost. These features provide assurance to the NFT owner. In Flow, NFT metadata, including images and documents, can be stored off-chain or on-chain. In addition, Flow defines a Collection concept, in which each collection is an NFT resource that can include a list of resources: a dictionary whose keys are resource IDs and whose values are the corresponding NFTs.
The Collection concept enables batch transferring of NFTs. In addition, an NFT can own another NFT; for instance, in CryptoKitties, a unique cat (an NFT) can own a unique hat (another NFT). Flow uses Cadence’s second layer of access control to allow designated operators to access certain fields of an NFT23. Table 1 compares the standards described above in terms of support for fungible tokens, non-fungible tokens, batch transferring (the owner can transfer multiple tokens in one operation), operator support (the owner can approve an operator to initiate token transfers), and fractionalized NFTs (an NFT can be divided into separate tokens, each of which can be exchanged independently).
Table 1: Comparing NFT standards.
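Returning to the Collection concept described above, the idea of a dictionary keyed by resource ID, with deposit, withdraw, and batch transfer operations, can be sketched in Python as a conceptual analogy. This is not Cadence code, and all names here are our own; it only illustrates the "one owner, never copied" behaviour of resources.

```python
# Conceptual analogy of a Flow-style Collection: a dictionary keyed by
# resource ID whose values are the NFT resources it holds.

class NFTResource:
    def __init__(self, resource_id: int, metadata: dict):
        self.resource_id = resource_id
        self.metadata = metadata

class Collection:
    def __init__(self):
        self.owned = {}  # resource_id -> NFTResource

    def deposit(self, nft: NFTResource) -> None:
        self.owned[nft.resource_id] = nft

    def withdraw(self, resource_id: int) -> NFTResource:
        # Moving a resource out removes it here, mimicking "one owner, never copied".
        return self.owned.pop(resource_id)

    def batch_transfer(self, ids, recipient: "Collection") -> None:
        for resource_id in ids:
            recipient.deposit(self.withdraw(resource_id))

alice, bob = Collection(), Collection()
alice.deposit(NFTResource(1, {"name": "patent-draft"}))
alice.deposit(NFTResource(2, {"name": "patent-figure"}))
alice.batch_transfer([1, 2], bob)
assert list(bob.owned) == [1, 2] and not alice.owned
```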
In this section, we propose a framework for presenting NFT-based patents. We describe details of the proposed distributed and trustworthy framework for minting NFT-based patents, as shown in Fig. 2. The proposed framework includes five main layers: Storage Layer, Authentication Layer, Verification Layer, Blockchain Layer, and Application Layer. Details of each layer and the general concepts are presented as follows.
Figure 2
Storage layer
The continuous growth of data in blockchain technology is moving various information systems toward decentralized storage networks, which were created to bring additional benefits to the technological world24. Among these benefits: (1) cost savings are achieved by making optimal use of existing storage, and (2) multiple copies are kept on various nodes, avoiding bottlenecks on central servers and speeding up downloads. This foundation layer provides the infrastructure required for storage. The items on NFT platforms have unique characteristics that must be included for identification.
Non-fungible token metadata provides information that describes a particular token ID. NFT metadata is stored either on-chain or off-chain. On-chain means direct incorporation of the metadata into the NFT’s smart contract, which represents the token; off-chain storage means hosting the metadata separately25.
Blockchains provide decentralization but are expensive for data storage and never allow data to be removed. For example, because of the Ethereum blockchain’s current storage limits and high maintenance costs, many projects’ metadata is maintained off-chain. Developers use the ERC-721 standard, which features a method known as tokenURI; this method is implemented to let applications know the location of the metadata for a specific item. Currently, there are three common solutions for off-chain storage: the InterPlanetary File System (IPFS), Pinata, and Filecoin.
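To make the on-chain/off-chain split concrete, the sketch below shows a typical shape of off-chain NFT metadata (a JSON document in the spirit of the ERC-721 metadata convention, with name, description, and image fields) and a minimal mapping from token ID to its URI, which is what a tokenURI-style lookup returns. The field values and the ipfs:// URIs are invented for illustration.

```python
import json

# Hypothetical off-chain metadata document for one token, in the spirit of the
# ERC-721 metadata JSON convention (name / description / image). Values are invented.
patent_metadata = {
    "name": "Patent #12345",
    "description": "NFT representing ownership of patent application 12345.",
    "image": "ipfs://QmExampleImageHash/figure1.png",
}

# On-chain, the contract typically stores only a pointer per token ID.
token_uri = {12345: "ipfs://QmExampleMetadataHash/12345.json"}

def resolve_token_uri(token_id: int) -> str:
    """Mimics a tokenURI-style lookup: returns where the metadata lives, not the metadata itself."""
    return token_uri[token_id]

print(resolve_token_uri(12345))
print(json.dumps(patent_metadata, indent=2))
```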
IPFS
The InterPlanetary File System (IPFS) is a peer-to-peer hypermedia protocol for decentralized media content storage. Because of the high cost of storing NFT-related media files on the blockchain, IPFS can be the most affordable and efficient solution. IPFS combines multiple technologies inspired by Git and BitTorrent, such as a block exchange system, distributed hash tables (DHT), and a version control system26. On a peer-to-peer network, the DHT is used to coordinate and maintain metadata.
In other words, hash values must be mapped to the objects they represent. When storing an object such as a file, IPFS generates a hash value that starts with the prefix Qm and acts as a reference to that specific item. Objects larger than 256 KB are divided into smaller blocks of up to 256 KB, and a hash tree is used to interconnect all the blocks that belong to the same object. IPFS uses the Kademlia DHT. The block exchange system, BitSwap, is a BitTorrent-inspired mechanism used to exchange blocks. Asymmetric encryption can be used to prevent unauthorized access to content stored on IPFS27.
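A rough illustration of this content-addressing idea: split an object into fixed-size blocks, hash each block, and combine the block hashes into a single root digest that identifies the whole object. Real IPFS CIDs are multihash-encoded and produced by IPFS's own DAG layout, so the digests below will not match actual CIDs; the sketch only demonstrates the mechanism, and every name in it is our own.

```python
import hashlib

BLOCK_SIZE = 256 * 1024  # 256 KiB, the block size mentioned above

def chunk(data: bytes, size: int = BLOCK_SIZE):
    """Split an object into fixed-size blocks."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def content_address(data: bytes) -> str:
    """Hash each block, then hash the concatenated block hashes into one root digest.
    This mirrors the 'hash tree over blocks' idea; it is NOT the real CID algorithm."""
    block_hashes = [hashlib.sha256(block).digest() for block in chunk(data)]
    if len(block_hashes) == 1:
        return block_hashes[0].hex()
    return hashlib.sha256(b"".join(block_hashes)).hexdigest()

document = b"patent specification text ..." * 50_000  # roughly 1.4 MB, so several blocks
print(content_address(document))
```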
Pinata
Pinata is a popular platform for managing and uploading files on IPFS. It provides secure and verifiable files for NFTs. Most NFTs store their data off-chain, with the NFT on the blockchain pointing to a URL of the data. The main problem here is that the content behind the URL can change.
This means that an NFT that is supposed to describe a certain patent can be changed without anyone knowing, defeating the purpose of the NFT in the first place. This is where Pinata comes in. Pinata uses IPFS to create content-addressable hashes of data, known as Content Identifiers (CIDs). These CIDs serve both as a way of retrieving data and as a means of ensuring data validity. Those looking to retrieve data simply ask the IPFS network for the data associated with a certain CID, and if any node on the network contains that data, it is returned to the requester. The data is automatically rehashed on the requester’s computer upon retrieval to make sure it matches the original CID that was requested. This process ensures that the data received is exactly what was asked for; if a malicious node attempts to send fake data, the resulting CID on the requester’s end will be different, alerting the requester that they are receiving incorrect data28.
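The integrity check described above can be shown in a few lines: the requester rehashes whatever bytes it receives and compares the result with the identifier it asked for. In this sketch a plain SHA-256 hex digest stands in for a real CID, which would additionally carry multihash and codec prefixes.

```python
import hashlib

def verify_retrieved_data(expected_digest: str, received: bytes) -> bool:
    """Rehash the received bytes and compare with the content identifier that was requested.
    A mismatch means the data was altered or a node returned fake content."""
    return hashlib.sha256(received).hexdigest() == expected_digest

original = b"NFT metadata for patent 12345"
digest = hashlib.sha256(original).hexdigest()

assert verify_retrieved_data(digest, original)          # honest node: data matches
assert not verify_retrieved_data(digest, b"tampered!")  # malicious node is detected
```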
Filecoin
Another decentralized storage network is Filecoin. It is built on top of IPFS and is designed to store the most important data, such as media files. Truffle Suite has also launched an NFT development template with Filecoin Box. NFT.Storage (free decentralized storage for NFTs)29 allows users to easily and securely store their NFT content and metadata using IPFS and Filecoin. NFT.Storage is a service backed by Protocol Labs and Pinata specifically for storing NFT data. Through content addressing and decentralized storage, NFT.Storage allows developers to protect their NFT assets and associated metadata, ensuring that all NFTs follow best practices and stay accessible for the long term. It makes minting NFTs that follow these practices frictionless through resilient persistence on IPFS and Filecoin, and it lets developers store NFT data on decentralized networks quickly, safely, and for free. Anyone can leverage the power of IPFS and Filecoin to ensure the persistence of their NFTs. The details of this system are as follows30:
Content addressing
Once users upload data to NFT.Storage, they receive a CID, an IPFS hash of the content. CIDs are the data’s unique fingerprints: universal addresses that can be used to refer to the content regardless of how or where it is stored. Using CIDs to reference NFT data avoids problems such as weak links and “rug pulls”, since CIDs are generated from the content itself.
Provable storage
NFT.Storage uses Filecoin for long-term decentralized data storage. Filecoin uses cryptographic proofs to assure the NFT data’s durability and persistence over time.
Resilient retrieval
Data stored via IPFS and Filecoin can be fetched directly in the browser via any public IPFS gateway.
Authentication layer
The second layer is the authentication layer, whose functions we briefly highlight in this section. The Decentralized Identity (DID) approach helps users collect credentials from a variety of issuers, such as governments, educational institutions, or employers, and save them in a digital wallet. A verifier then uses these credentials to verify a person’s validity by following the identity and access management (IAM) process on a blockchain-based ledger. DID therefore allows users to remain in control of their identity. A lack of NFT verifiability also causes intellectual property and copyright infringements; of course, the chain of custody may be traced back to the creator’s public address to check whether a similar patent was filed using that address. However, there is no quick and foolproof way to check an NFT creator’s legitimacy. Without such verification built into the NFT, an NFT proves ownership only over that NFT itself and nothing more.
Self-sovereign identity (SSI)31 is a solution to this problem. SSI is a new series of standards that will guide a new identity architecture for the Internet. With a focus on privacy, security, and interoperability, SSI applications use public-key cryptography with public blockchains to generate persistent identities for people with private and selective information disclosure. Blockchain technology offers a way to establish trust and transparency and to provide a secure and publicly verifiable KYC (Know Your Customer) process. The blockchain architecture allows information from various service providers to be collected into a single, cryptographically secure, and immutable database that does not need a third party to verify its authenticity.
The proposed platform generates patent-related smart contracts that act as programs running on the blockchain to receive and send transactions. These contracts are unalterable and privately identify clients through a thorough KYC process; after KYC approval, an NFT is minted on the blockchain as a certificate of verification32. At this layer, we use a decentralized authentication solution. Such solutions have been used for various blockchain applications (e.g., smart cities and the Internet of Things)33,34, but here we apply one to the proposed framework (patents as NFTs). Details of this solution are presented below.
Decentralized authentication
This section presents an authentication layer, similar to that of35, that builds validated communication in a secure and decentralized manner via blockchain technology. As shown in Fig. 3, the authentication protocol comprises two processes: registration and login.
Figure 3
Registration
In the registration process of the suggested authentication protocol, we first initialize a user’s public key as their identity key (UserName). We then upload this identity key to the blockchain, where transactions can later be verified by other users. Finally, the user generates an identity transaction.
Login
After registration, a user logs in to the system. The login process is described as follows (a minimal sketch of the signature exchange is given after the list):
1. The user commits identity information and imports their secret key into the service application to log in.
2. A user who needs to log in sends a login request to the network’s service provider.
3. The service provider analyzes the login request, extracts the hash, queries the blockchain, and obtains identity information from an identity list (identity transactions).
4. The service provider responds with an authentication request when the above process is completed. A timestamp (to avoid a replay attack), the user’s UserName, and a signature are all included in the authentication request.
5. The user creates a signature over five parameters: the timestamp, the user’s UserName and PK, and the service provider’s UserName and PK. This signature serves as the user’s authentication credential.
6. The service provider verifies the received information, and if the received information is valid, the authentication succeeds; otherwise, the authentication fails, and the user’s login is denied.
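A minimal sketch of steps 4 through 6, assuming Ed25519 keys and using the third-party pyca/cryptography package: the user signs the timestamp plus both parties' identifiers and public keys, and the service provider verifies the signature and rejects stale timestamps to blunt replay attacks. The message layout, freshness window, and all names are our own choices, not prescribed by the protocol above.

```python
import time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

MAX_AGE_SECONDS = 60  # freshness window against replay attacks (our assumption)

def build_message(timestamp: int, user_name: str, user_pk: bytes,
                  sp_name: str, sp_pk: bytes) -> bytes:
    # Concatenate the five parameters from step 5 into one byte string to sign.
    return b"|".join([str(timestamp).encode(), user_name.encode(), user_pk,
                      sp_name.encode(), sp_pk])

# Key pairs for the user and the service provider (public keys double as identities).
user_key, sp_key = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
user_pk = user_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
sp_pk = sp_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

# Step 5: the user signs the authentication message.
ts = int(time.time())
message = build_message(ts, "alice", user_pk, "patent-service", sp_pk)
credential = user_key.sign(message)

# Step 6: the service provider checks freshness and verifies the signature against
# the public key it obtained from the user's identity transaction on the blockchain.
def authenticate(timestamp: int, msg: bytes, signature: bytes, registered_pk) -> bool:
    if abs(time.time() - timestamp) > MAX_AGE_SECONDS:
        return False  # stale request, possible replay
    try:
        registered_pk.verify(signature, msg)
        return True
    except InvalidSignature:
        return False

print(authenticate(ts, message, credential, user_key.public_key()))  # True
```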
A patent application must be assessed by the World Intellectual Property Organization (WIPO) and multiple target patent offices in various nations or regions, resulting in inefficiency, high costs, and uncertainty. This study presents a conceptual NFT-based patent framework for issuing, validating, and sharing patent certificates. The platform aims to support counterfeit protection as well as secure access to and management of certificates according to the needs of inventors, companies, patent offices, and certification authorities.
Here, the certification authority (CA) is used to authenticate patent offices. The procedure will first validate a patent if it is provided with a digital certificate that meets the X.509 standard. Certificate authorities are introduced into the system to authenticate both the nodes and clients connected to the blockchain network.
Verification layer
In permissioned blockchains, only identified nodes can read and write in the distributed ledger. Nodes can act in different roles and have various permissions. Therefore, a distributed system can be designed in which patent-granting offices act as the identified nodes. Here the system is described conceptually at a high level. Figure 4 illustrates the sequence diagram of this layer, which includes four levels, as described below:
Figure 4
Digitalization
For a patent to be published as an NFT on the blockchain, it must be in a digitalized format. This level corresponds to the “filing step” in traditional patent registration. An application could be designed in the application layer to allow users to enter the various patent information online.
Recording
Patents provide valuable information and can bring financial benefits to their owners. If they are published openly on a blockchain network, miners may reject the patent and take the innovation for themselves; at the very least, this can weaken consensus reliability and encourage miners to misbehave. To prevent this, the inventor should first record the innovation privately using proof of existence. The inventor generates the hash of the patent document and records it on the blockchain. As soon as it is recorded, the timestamp and the hash are publicly available, and the inventor can later prove the existence of the patent document whenever needed.
Furthermore, using methods like Decision Thinking36, an inventor can record each phase of patent development separately. In each stage, the user generates the hash of the finished part and publishes it together with the hash of the previous part. The result is a linked series of hashes that documents the patent’s development, and the inventor can prove the existence of each phase using the original related documents. This step prevents others from abusing the patent and claiming it for themselves, and the inventor can be sure that the patent document is recorded confidentially and immutably37.
Different hash algorithms exist, with different architectures, time complexities, and security properties. A hash function used here should satisfy two main requirements. Pre-image resistance: it should be computationally hard to find the input of the hash function when the output and the algorithm are publicly known. Collision resistance: it should be computationally hard to find two arbitrary inputs, x and y, that have the same hash output. Both requirements are vital for recording patents. First, pre-image resistance makes it infeasible for others to recover the patent documentation from the published hash; otherwise, anyone could read the patent even before its official publication. Second, collision resistance precludes users from changing their document after recording it; otherwise, a user could record one document and later replace it with a different one that produces the same hash.
Various hash algorithms exist, and the MD and SHA families are the most widely used. According to38, collisions have been found for the MD2, MD4, MD5, SHA-0, and SHA-1 hash functions, so they are not a good choice for recording patents. The SHA-2 hash algorithm is secure, and no collision has been found for it. Although SHA-2 is noticeably slower than earlier hash algorithms, the recording phase is not highly time-sensitive, so SHA-2 is the better choice and provides strong security for users.
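A minimal sketch of the proof-of-existence chaining described above, using SHA-256 from Python's standard library: each development phase is hashed together with the previous phase's hash, so the final digest commits to the whole history while revealing nothing about the documents themselves. Only the hashes (and their blockchain timestamps) would be published; the phase documents and names below are invented for illustration.

```python
import hashlib

def phase_hash(document: bytes, previous_hash: str = "") -> str:
    """Hash a phase document together with the previous phase's hash,
    producing a linked chain of commitments (only the hashes go on-chain)."""
    return hashlib.sha256(previous_hash.encode() + document).hexdigest()

phases = [b"phase 1: initial disclosure",
          b"phase 2: prototype results",
          b"phase 3: full claims"]

chain, prev = [], ""
for doc in phases:
    prev = phase_hash(doc, prev)
    chain.append(prev)

# Later, the inventor reveals the original documents; anyone can recompute the
# chain and compare it with the hashes recorded (and timestamped) on the blockchain.
recomputed, prev = [], ""
for doc in phases:
    prev = phase_hash(doc, prev)
    recomputed.append(prev)
assert recomputed == chain
```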
Validating
In this phase, inventors first create an NFT for their patent and publish it to the miners/validators. Miners are identified nodes that validate NFTs before they are recorded on the blockchain. Because patent validation requires specialized expertise, miners cannot be inexpert members of the public. In addition, there are too few patent offices to make the network fully decentralized. Therefore, the miners can be specialists certified by the patent offices; they receive a digital certificate from a patent office attesting to their eligibility to referee a patent.
Digital certificate
Digital certificates are digital credentials used to verify the online identities of networked entities. They usually include a public key as well as the owner’s identification, and they are issued by Certification Authorities (CAs), which must verify the certificate holder’s identity. Certificates contain cryptographic keys for signing, encryption, and decryption. X.509 is a standard that defines the format of public-key certificates signed by a certificate authority. The X.509 standard has multiple fields, and its structure is shown in Fig. 5:
Version: Indicates the version of the X.509 standard. X.509 has multiple versions, each with a different structure; depending on the CA, validators can choose their desired version.
Serial Number: Distinguishes a certificate from other certificates; each certificate has a unique serial number.
Signature Algorithm Identifier: Indicates the cryptographic algorithm used by the certificate authority to sign the certificate.
Issuer Name: The name of the issuer, which is generally the certificate authority.
Validity Period: Each certificate is valid for a defined period. This limited period partly protects certificates against exposure of the CA’s private key.
Subject Name: The name of the requester; in our proposed framework, the validator’s name.
Subject Public Key Info: The public key of the entity to which the certificate is issued.
These fields are identical among all versions of the X.509 standard39.
Figure 5
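The fields listed above can be summarized as a simple record; the sketch below models them as a Python dataclass with a validity-period check. It mirrors the field names of the standard but is not a parser for real DER-encoded certificates (libraries such as pyca/cryptography handle that), and all example values are invented.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class X509CertificateFields:
    """The core X.509 fields described above, as a plain record (not a DER parser)."""
    version: int
    serial_number: int
    signature_algorithm: str       # e.g. an RSA- or ECDSA-with-SHA-256 identifier
    issuer_name: str               # the certificate authority (here, a patent office)
    not_before: datetime
    not_after: datetime            # the validity period limits exposure of the CA key
    subject_name: str              # in this framework, the validator's name
    subject_public_key: bytes

    def is_currently_valid(self, now: Optional[datetime] = None) -> bool:
        now = now or datetime.now(timezone.utc)
        return self.not_before <= now <= self.not_after

cert = X509CertificateFields(
    version=3, serial_number=1001, signature_algorithm="sha256WithRSAEncryption",
    issuer_name="Patent Office CA", subject_name="Validator Jane Doe",
    not_before=datetime(2024, 1, 1, tzinfo=timezone.utc),
    not_after=datetime(2026, 1, 1, tzinfo=timezone.utc),
    subject_public_key=b"...raw public key bytes...",
)
print(cert.is_currently_valid())
```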
Certificate authority
A Certificate Authority (CA) issues digital certificates. The CA signs the certificate with its private key, which is not public, and others can verify the certificate using the CA’s public key.
Here, the patent office creates a certificate for each requested patent referee. The patent office writes the validator’s information into the certificate and signs it with the patent office’s private key. The validator can use the certificate to assure others of their eligibility, and other nodes can check the requesting node’s information by verifying the certificate with the patent office’s public key. In this way, individuals can join the network’s miners/validators using their credentials. In this phase, miners perform formal examinations, prior-art research, and substantive examinations, and vote to grant or refuse the patent.
Miners then reach consensus about the patent and record it on the blockchain. The NFT is recorded on the blockchain with corresponding comments indicating whether it is granted or requires revisions. If the miners detect the NFT as a malicious request, they do not record it on the blockchain.
Blockchain layer
This layer acts as middleware between the Verification Layer and the Application Layer in the patents-as-NFTs architecture. The main purpose of the blockchain layer in the proposed architecture is to provide IP management. Transitioning to a blockchain-based patents-as-NFTs record system enables many previously suggested improvements to current patent systems in a flexible, scalable, and transparent manner.
Multiple blockchain platforms can be used for this layer, including Ethereum, EOS, Flow, and Tezos. Blockchain systems can be broadly classified into two major types based on their consensus mechanism: permissionless (public) and permissioned (private) blockchains. In a public blockchain, any node can participate in the peer-to-peer network, and the blockchain is fully decentralized; a node can leave the network without any consent from the other nodes.
Bitcoin is one of the most popular examples of a public, permissionless blockchain. Proof of Work (PoW), Proof of Stake (PoS), and directed acyclic graph (DAG) protocols are some examples of consensus algorithms in permissionless blockchains. Bitcoin and Ethereum, two famous and trusted blockchain networks, use the PoW consensus mechanism, while blockchain platforms like Cardano and EOS adopt PoS consensus40.
Nodes require specific access or permission to get network authentication in a private blockchain. Hyperledger is among the most popular private blockchains, allowing only permissioned members to join the network after authentication. This provides security to a group of entities that do not completely trust one another but want to achieve a common objective, such as exchanging information. Entities in a permissioned blockchain network can use Byzantine-fault-tolerant (BFT) consensus. Hyperledger Fabric has a membership identity service that manages user IDs and verifies network participants.
Therefore, members are aware of each other’s identity while maintaining privacy and confidentiality, because they are unaware of each other’s activities41. Due to their more secure nature, private blockchains have sparked great interest among banking and financial organizations, which believe that these platforms can disrupt current centralized systems. Hyperledger, Quorum, Corda, and EOS are some examples of permissioned blockchains42.
Reaching consensus in a distributed environment is a challenge. Blockchain is a decentralized network with no central node to observe and check all transactions, so protocols are needed to ensure that all transactions are valid. Consensus algorithms are therefore considered the core of each blockchain43. In distributed systems, consensus is the problem of getting all network members (nodes) to agree on accepting or rejecting a block. When all network members accept a new block, it can be appended to the previous block.
As mentioned, the main concern in blockchains is how to reach consensus among network members. A wide range of consensus algorithms has been designed, each with its own pros and cons42. Blockchain consensus algorithms are mainly classified into the three groups shown in Table 2. The first group, proof-based consensus algorithms, requires nodes joining the verifying network to demonstrate their qualification to do the appending task. The second group, voting-based consensus, requires validators in the network to share their results of validating a new block or transaction before making the final decision. The third group, DAG-based consensus, is a new class of consensus algorithms that allows several different blocks to be published and recorded simultaneously on the network.
Table 2: Consensus algorithms in blockchain networks.
The proposed patents-as-NFTs platform builds blockchain-based intellectual property management and empowers the entire patent ecosystem. It is a solution that removes barriers by addressing fundamental issues within the traditional patent ecosystem. Blockchain can efficiently handle patents and trademarks by effectively reducing approval wait times and other required resources. The user entities involved in intellectual property management are creators, patent consumers, and copyright-managing entities. Patent creators are the users who own the original data, e.g., inventors, writers, and researchers. Patent consumers are the users who are willing to consume the content and support the creator’s work. Users responsible for protecting the creators’ intellectual property are the copyright-managing entities, e.g., lawyers. The patents-as-NFTs solution for IP management in the blockchain layer works by implementing the following steps62:
Creators sign up to the platform
Creators need to sign up on the blockchain platform to patent their creative work. Identity information is required when signing up.
Creators upload IP on the blockchain network
Next, the creator adds the intellectual property for which a patent application is required, uploading the IP-related information and data to the blockchain network. Blockchain ensures traceability and auditability to prevent data duplication and manipulation. The patent becomes visible to all network members once it is uploaded to the blockchain.
Consumers generate request to use the content
Consumers who want to access the content must first register on the blockchain network. After signing up, consumers can ask creators to grant access to the patented content. Before the patent owner authorizes the request, a smart contract is created to allow customers to access information such as the owner’s data. Furthermore, consumers are required to pay fees, in either fiat money or platform tokens, in order to use the creator’s original information. When the creator approves the request, an NDA (non-disclosure agreement) is produced and signed by both parties. Blockchain manages the agreement and guarantees that all parties comply with the terms and conditions filed.
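A minimal sketch of this request/approve flow as a contract-like state machine, written in Python for illustration: a consumer requests access and escrows a fee, the owner approves (which stands in here for both parties signing the NDA), and only then is access granted. All names, the fee amount, and the escrow behaviour are our assumptions, not part of any specific smart-contract standard.

```python
# Illustrative contract-like state machine for patent-content access requests.
class PatentAccessContract:
    def __init__(self, owner: str, fee: int):
        self.owner = owner
        self.fee = fee                 # fee in fiat or platform tokens (assumption)
        self.escrow = {}               # consumer -> escrowed amount
        self.access_granted = set()

    def request_access(self, consumer: str, payment: int) -> None:
        if payment < self.fee:
            raise ValueError("insufficient fee")
        self.escrow[consumer] = payment  # held until the owner approves

    def approve(self, caller: str, consumer: str) -> None:
        # Approval stands in for both parties signing the NDA recorded on-chain.
        if caller != self.owner:
            raise PermissionError("only the patent owner can approve requests")
        if consumer not in self.escrow:
            raise KeyError("no pending request from this consumer")
        self.escrow.pop(consumer)        # fee released to the owner
        self.access_granted.add(consumer)

contract = PatentAccessContract(owner="inventor", fee=100)
contract.request_access("company-x", 100)
contract.approve("inventor", "company-x")
print("company-x" in contract.access_granted)  # True
```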
Patent management entities leverage blockchain to protect copyrights and solve related disputes
Blockchain assists the patent management entities in resolving a variety of disputes, which may include sharing confidential information, establishing proof of authorship, transferring IP rights, and making defensive publications. Suppose a person used an invention from a patent for their company without the inventor’s consent; the inventor can report it to the patent office and claim ownership of that invention.
Application layer
The patent platform’s global marketplace technology would allow many enterprises, governments, universities, and small and medium-sized enterprises (SMEs) worldwide to tokenize patents as NFTs, creating an infrastructure for storing patent records on a blockchain-based network and developing a decentralized marketplace in which patent holders could easily sell or otherwise monetize their patents. An NFT-based patent can use smart contracts to set a price for a license or purchase.
Any buyer satisfied with the conditions can pay and immediately unlock the rights to the patent without either party ever having to interact directly. While patents are currently regulated jurisdictionally around the world, a blockchain-based patent marketplace using NFTs can reduce the geographical barriers between patent systems with a tool as simple as a search query. The ease of access to patents globally can help aspiring inventors accelerate the innovation process by building upon others’ patented inventions through licenses. There is a wide variety of use cases for patent NFTs, such as SMEs, patent organizations, grants and funding, and fundraising or transferring information relating to patents. These applications keep growing over time, and new ways to use these tokens are constantly being found. Some of the most common applications are described below.
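The fixed-price licensing flow just described (pay the set price, immediately receive the license, no direct interaction with the patent holder) can be sketched as follows. The class, field names, and license-record format are hypothetical; a production system would implement this as an on-chain smart contract.

```python
# Hypothetical fixed-price license sale: payment alone unlocks the license.
class PatentLicenseListing:
    def __init__(self, patent_id: int, seller: str, price: int):
        self.patent_id = patent_id
        self.seller = seller
        self.price = price
        self.licensees = []     # public, auditable list of granted licenses
        self.proceeds = 0

    def buy_license(self, buyer: str, payment: int) -> dict:
        if payment < self.price:
            raise ValueError("payment below the listed price")
        self.proceeds += payment
        license_record = {"patent_id": self.patent_id, "licensee": buyer, "price": payment}
        self.licensees.append(license_record)
        return license_record   # the buyer's rights take effect immediately

listing = PatentLicenseListing(patent_id=12345, seller="inventor", price=500)
print(listing.buy_license("startup-y", 500))
```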
SMEs
The aim is to move intellectual property assets onto a digital, centralized, and secure blockchain network, enabling easier commercialization of patents, especially for small or medium enterprises (SMEs). Smart contracts can be attached to NFTs so terms of use and ownership can be outlined and agreed upon without incurring as many legal fees as traditional IP transfers. This is believed to help SMEs secure funding, as they could more easily leverage the previously undisclosed value of their patent portfolios63.
Transfer ownership of patents
NFTs can be used to transfer ownership of patents. The blockchain can be used to keep track of patent owners, and tokens would include self-executing contracts that transfer the legal rights associated with patents when the tokens are transferred. A partnership between IBM and IPwe has spearheaded the use of NFTs to secure patent ownership; these two companies have teamed up to build the infrastructure for an NFT-based patent marketplace.
Discussion
There are exciting proposals in the legal and economic literature that suggest seemingly straightforward solutions to many of the issues plaguing current patent systems. However, most solutions would constitute major administrative disruptions and place significant and continuous financial burdens on patent offices or their users. An NFT-based patent system not only makes many of these ideas administratively feasible but can also be examined in a step-wise, scalable, and very public manner.
Furthermore, NFT-based patents may facilitate reliable information sharing among offices and patentees worldwide, reducing the burden on examiners and perhaps even accelerating harmonization efforts. NFT-based patents also have additional transparency and archival attributes built in. A patent should be a privilege bestowed on those who take resource-intensive risks to explore the frontier of technological capabilities, and as a reward for their achievements, full transparency of these rewards is very much in the public interest: it is society that pays for the administrative and economic inefficiencies of today’s systems. NFT-based patents can enhance this transparency. From an organizational perspective, NFT-based patents can remove current bottlenecks in patent processes by making these processes more efficient, rapid, and convenient for applicants without compromising the quality of granted patents.
The proposed framework faces some challenges that should be solved to reach a mature patent verification platform. First, there are technical problems. The consensus method used in the verification layer is not addressed in detail; given the permissioned structure of miners in the NFT-based patent system, consensus algorithms designed for permissioned blockchains, such as PBFT, Federated Consensus, and Round Robin Consensus, can be applied. Also, miners/validators spend time validating patents, so a protocol should be designed to compensate them. Challenges such as proving the miners’ time and effort, the price inventors should pay to miners, and other economic trade-offs should be considered.
Different NFT standards were discussed. If various patent services use different NFT standards, cross-platform problems will arise; for instance, transferring an NFT from the Ethereum blockchain (an ERC-721 token) to the EOS blockchain is not a straightforward task and requires additional considerations. Also, people usually trade NFTs in marketplaces such as Rarible and OpenSea; these marketplaces are centralized and may create challenges because of their centralized nature. Other types of challenges also exist, such as the novelty of NFT-based patents and blockchain services.
A blockchain-based patent service has not been tested before, and the registration procedure and concepts of the patents-as-NFTs system may be unclear to people who still prefer conventional centralized patent systems over decentralized ones. There are also open questions in the mining part: miners should receive certificates from accepted organizations, and determining these organizations and how they accept referees as validators needs more consideration. Some types of inventions are prohibited in some countries and cannot be registered; in an NFT-based patent system, inventors can register their patents publicly, which may create conflicts between inventors and governments. There are also misunderstandings about NFT ownership rights: it is not clear exactly which rights a person acquires when buying an NFT, for instance whether they receive property rights, moral rights, or both.
Conclusion
Blockchain technology provides strong timestamping, the potential for smart contracts, and proof of existence. It enables the creation of a transparent, distributed, cost-effective, and resilient environment that is open to all and in which every transaction is auditable. Blockchain is therefore a clear boon to the IP industry, benefiting patent owners: when blockchain technology’s intrinsic characteristics are applied to the IP domain, it helps protect copyrights. This paper provided a conceptual framework for presenting NFT-based patents, with a comprehensive discussion of many aspects, from background, model components, and token standards to application areas and research challenges. The proposed framework includes five main layers: Storage Layer, Authentication Layer, Verification Layer, Blockchain Layer, and Application Layer. The primary purpose of this patent framework was to provide an NFT-based concept that could be used to build a decentralized, tamper-resistant, and reliable network for patent trade and exchange around the world. Finally, we addressed several open challenges facing NFT-based patents.
References
Nakamoto, S. Bitcoin: A peer-to-peer electronic cash system. https://bitcoin.org/bitcoin.pdf (2008).
Buterin, V. A next-generation smart contract and decentralized application platform. White Paper 3 (2014).
Nofer, M., Gomber, P., Hinz, O. & Schiereck, D. Blockchain. Business & Information Systems Engineering 59, 183–187 (2017).
Entriken, W., Shirley, D., Evans, J. & Sachs, N. EIP 721: ERC-721 non-fungible token standard. Ethereum Improvement Proposals. https://eips.ethereum.org/EIPS/eip-721 (2018).
Radomski, W. et al. EIP 1155: ERC-1155 multi token standard. Ethereum Improvement Proposals (2018).
Fairfield, J. Tokenized: The law of non-fungible tokens and unique digital property. Indiana Law Journal (forthcoming) (2021).
Chevet, S. Blockchain technology and non-fungible tokens: Reshaping value chains in creative industries. Available at SSRN 3212662 (2018).
Bal, M. & Ner, C. NFTracer: A non-fungible token tracking proof-of-concept using Hyperledger Fabric. arXiv preprint arXiv:1905.04795 (2019).
Wang, Q., Li, R., Wang, Q. & Chen, S. Non-fungible token (NFT): Overview, evaluation, opportunities and challenges. arXiv preprint arXiv:2105.07447 (2021).
Qu, Q., Nurgaliev, I., Muzammal, M., Jensen, C. S. & Fan, J. On spatio-temporal blockchain query processing. Future Gener. Comput. Syst. 98, 208–218 (2019).
Rosenfeld, M. Overview of colored coins. White paper, bitcoil.co.il 41, 94 (2012).
Benisi, N. Z., Aminian, M. & Javadi, B. Blockchain-based decentralized storage networks: A survey. J. Netw. Comput. Appl. 162, 102656 (2020).
NFTReview. On-chain vs. off-chain metadata (2021).
Nizamuddin, N., Salah, K., Azad, M. A., Arshad, J. & Rehman, M. Decentralized document version control using Ethereum blockchain and IPFS. Comput. Electr. Eng. 76, 183–197 (2019).
Tut, K. Who Is Responsible for NFT Data? (2020).
nft.storage. Free storage for NFTs. Retrieved 16 May 2021, from https://nft.storage/ (2021).
Psaras, Y. & Dias, D. In 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks-Supplemental Volume (DSN-S), 80–80 (IEEE, 2020).
Tanner, J. & Roelofs, C. NFTs and the need for self-sovereign identity (2021).
Martens, D., Tuyll van Serooskerken, A. V. & Steenhagen, M. Exploring the potential of blockchain for KYC. J. Digit. Bank. 2, 123–131 (2017).
Hammi, M. T., Bellot, P. & Serhrouchni, A. In 2018 IEEE Wireless Communications and Networking Conference (WCNC), 1–6 (IEEE, 2018).
Khalid, U. et al. A decentralized lightweight blockchain-based authentication mechanism for IoT systems. Cluster Comput., 1–21 (2020).
Zhong, Y. et al. Distributed blockchain-based authentication and authorization protocol for smart grid. Wirel. Commun. Mobile Comput. (2021).
Schönhals, A., Hepp, T. & Gipp, B. In Proceedings of the 1st Workshop on Cryptocurrencies and Blockchains for Distributed Systems, 105–110.
Verma, S. & Prajapati, G. A survey of cryptographic hash algorithms and issues. Int. J. Comput. Secur. Source Code Anal. (IJCSSCA) 1, 17–20 (2015).
SDK, I. X.509 Certificates (1996).
Helliar, C. V., Crawford, L., Rocca, L., Teodori, C. & Veneziani, M. Permissionless and permissioned blockchain diffusion. Int. J. Inf. Manag. 54, 102136 (2020).
Frizzo-Barker, J. et al. Blockchain as a disruptive technology for business: A systematic review. Int. J. Inf. Manag. 51, 102029 (2020).
Bamakan, S. M. H., Motavali, A. & Bondarti, A. B. A survey of blockchain consensus algorithms performance evaluation criteria. Expert Syst. Appl. 154, 113385 (2020).
Bamakan, S. M. H., Bondarti, A. B., Bondarti, P. B. & Qu, Q. Blockchain technology forecasting by patent analytics and text mining. Blockchain Res. Appl., 100019 (2021).
Castro, M. & Liskov, B. Practical Byzantine fault tolerance and proactive recovery. ACM Trans. Comput. Syst. (TOCS) 20, 398–461 (2002).
Muratov, F., Lebedev, A., Iushkevich, N., Nasrulin, B. & Takemiya, M. YAC: BFT consensus algorithm for blockchain. arXiv preprint arXiv:1809.00554 (2018).
Bessani, A., Sousa, J. & Alchieri, E. E. In 2014 44th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, 355–362 (IEEE, 2014).
Todd, P. Ripple protocol consensus algorithm review (2015).
Ongaro, D. & Ousterhout, J. In 2014 USENIX Annual Technical Conference (USENIX ATC 14), 305–319.
Dziembowski, S., Faust, S., Kolmogorov, V. & Pietrzak, K. In Annual Cryptology Conference, 585–605 (Springer).
Bentov, I., Lee, C., Mizrahi, A. & Rosenfeld, M. Proof of activity: Extending Bitcoin’s proof of work via proof of stake. IACR Cryptology ePrint Archive 2014, 452 (2014).
Bramas, Q. The stability and the security of the Tangle (2018).
Baird, L. The Swirlds hashgraph consensus algorithm: Fair, fast, Byzantine fault tolerance. Swirlds Tech Report SWIRLDS-TR-2016-01 (2016).
LeMahieu, C. Nano: A feeless distributed cryptocurrency network. https://nano.org/en/whitepaper (2018).
Casino, F., Dasaklis, T. K. & Patsakis, C. A systematic literature review of blockchain-based applications: Current status, classification and open issues. Telematics Inform. 36, 55–81 (2019).
bigredawesomedodo. Helping Small Businesses Survive and Grow With Marketing. Retrieved 3 June 2021, from https://bigredawesomedodo.com/nft/ (2020).
This work has been partially supported by CAS President’s International Fellowship Initiative, China [grant number 2021VTB0002, 2021] and National Natural Science Foundation of China (No. 61902385).
Author information
Affiliations
Seyed Mojtaba Hosseini Bamakan, Department of Industrial Management, Yazd University, Yazd, Iran
Nasim Nezhadsistani, Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan, Iran
Omid Bodaghi, School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran
Seyed Mojtaba Hosseini Bamakan & Qiang Qu, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
From: Heidi Rehm et al. GA4GH: International policies and standards for data sharing across genomic research and healthcare. Cell Genomics, Volume 1, Issue 2 (2021).
Siloing genomic data in institutions/jurisdictions limits learning and knowledge
GA4GH policy frameworks enable responsible genomic data sharing
GA4GH technical standards ensure interoperability, broad access, and global benefits
Data sharing across research and healthcare will extend the potential of genomics
Summary
The Global Alliance for Genomics and Health (GA4GH) aims to accelerate biomedical advances by enabling the responsible sharing of clinical and genomic data through both harmonized data aggregation and federated approaches. The decreasing cost of genomic sequencing (along with other genome-wide molecular assays) and increasing evidence of its clinical utility will soon drive the generation of sequence data from tens of millions of humans, with increasing levels of diversity. In this perspective, we present the GA4GH strategies for addressing the major challenges of this data revolution. We describe the GA4GH organization, which is fueled by the development efforts of eight Work Streams and informed by the needs of 24 Driver Projects and other key stakeholders. We present the GA4GH suite of secure, interoperable technical standards and policy frameworks and review the current status of standards, their relevance to key domains of research and clinical care, and future plans of GA4GH. Broad international participation in building, adopting, and deploying GA4GH standards and frameworks will catalyze an unprecedented effort in data sharing that will be critical to advancing genomic medicine and ensuring that all populations can access its benefits.
For genomic and personalized medicine to come to fruition, it is imperative that data siloes around the world be broken down, allowing international collaboration in the collection, storage, transfer, access, and analysis of molecular and health-related data.
We have discussed in numerous articles on this site the problems data siloes produce. By data siloes we mean that not only data but also intellectual thought are held behind physical, electronic, and intellectual walls, inaccessible to scientists who do not belong to a particular institution or collaborative network.
Standardization and harmonization of data are key to this effort to share electronic records. The EU has taken bold action in this matter. The following section is about the General Data Protection Regulation of the EU and can be found at the following link:
The data protection package adopted in May 2016 aims at making Europe fit for the digital age. More than 90% of Europeans say they want the same data protection rights across the EU and regardless of where their data is processed.
The General Data Protection Regulation (GDPR)
Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data. This text includes the corrigendum published in the OJEU of 23 May 2018.
The regulation is an essential step to strengthen individuals’ fundamental rights in the digital age and facilitate business by clarifying rules for companies and public bodies in the digital single market. A single law will also do away with the current fragmentation in different national systems and unnecessary administrative burdens.
Directive (EU) 2016/680 on the protection of natural persons regarding processing of personal data connected with criminal offences or the execution of criminal penalties, and on the free movement of such data.
The directive protects citizens’ fundamental right to data protection whenever personal data is used by criminal law enforcement authorities for law enforcement purposes. It will in particular ensure that the personal data of victims, witnesses, and suspects of crime are duly protected and will facilitate cross-border cooperation in the fight against crime and terrorism.
The directive entered into force on 5 May 2016 and EU countries had to transpose it into their national law by 6 May 2018.
The following paper by the organization The Global Alliance for Genomics and Health discusses these types of collaborative efforts to break down data silos in personalized medicine. This organization has over 2,000 subscribers in over 90 countries, encompassing over 60 organizations.
Enabling responsible genomic data sharing for the benefit of human health
The Global Alliance for Genomics and Health (GA4GH) is a policy-framing and technical standards-setting organization, seeking to enable responsible genomic data sharing within a human rights framework.
The Global Alliance for Genomics and Health (GA4GH) is an international, nonprofit alliance formed in 2013 to accelerate the potential of research and medicine to advance human health. Bringing together 600+ leading organizations working in healthcare, research, patient advocacy, life science, and information technology, the GA4GH community is working together to create frameworks and standards to enable the responsible, voluntary, and secure sharing of genomic and health-related data. All of our work builds upon the Framework for Responsible Sharing of Genomic and Health-Related Data.
GA4GH Connect is a five-year strategic plan that aims to drive uptake of standards and frameworks for genomic data sharing within the research and healthcare communities in order to enable responsible sharing of clinical-grade genomic data by 2022. GA4GH Connect links our Work Streams with Driver Projects—real-world genomic data initiatives that help guide our development efforts and pilot our tools.
The Global Alliance for Genomics and Health (GA4GH) is a worldwide alliance of genomics researchers, data scientists, healthcare practitioners, and other stakeholders. We are collaborating to establish policy frameworks and technical standards for responsible, international sharing of genomic and other molecular data as well as related health data. Founded in 2013,3 the GA4GH community now consists of more than 1,000 individuals across more than 90 countries working together to enable broad sharing that transcends the boundaries of any single institution or country (see https://www.ga4gh.org).
In this perspective, we present the strategic goals of GA4GH and detail current strategies and operational approaches to enable responsible sharing of clinical and genomic data, through both harmonized data aggregation and federated approaches, to advance genomic medicine and research. We describe technical and policy development activities of the eight GA4GH Work Streams and implementation activities across 24 real-world genomic data initiatives (“Driver Projects”). We review how GA4GH is addressing the major areas in which genomics is currently deployed, including rare disease, common disease, cancer, and infectious disease. Finally, we describe differences between genomic sequence data that are generated for research versus healthcare purposes, and define strategies for meeting the unique challenges of responsibly enabling access to data acquired in the clinical setting.
GA4GH organization
GA4GH has partnered with 24 real-world genomic data initiatives (Driver Projects) to ensure its standards are fit for purpose and driven by real-world needs. Driver Projects make a commitment to help guide GA4GH development efforts and pilot GA4GH standards (see Table 2). Each Driver Project is expected to dedicate at least two full-time equivalents to GA4GH standards development, which takes place in the context of GA4GH Work Streams (see Figure 1). Work Streams are the key production teams of GA4GH, tackling challenges in eight distinct areas across the data life cycle (see Box 1). Work Streams consist of experts from their respective sub-disciplines and include membership from Driver Projects as well as hundreds of other organizations across the international genomics and health community.
Figure 1: Matrix structure of the Global Alliance for Genomics and Health
Box 1: GA4GH Work Stream focus areas
The GA4GH Work Streams are the key production teams of the organization. Each tackles a specific area in the data life cycle, as described below (URLs listed in the web resources).
(1) Data use & researcher identities: Develops ontologies and data models to streamline global access to datasets generated in any country9,10
(2) Genomic knowledge standards: Develops specifications and data models for exchanging genomic variant observations and knowledge18
(3) Cloud: Develops federated analysis approaches to support the statistical rigor needed to learn from large datasets
(4) Data privacy & security: Develops guidelines and recommendations to ensure identifiable genomic and phenotypic data remain appropriately secure without sacrificing their analytic potential
(5) Regulatory & ethics: Develops policies and recommendations for ensuring individual-level data are interoperable with existing norms and follow core ethical principles
(6) Discovery: Develops data models and APIs to make data findable, accessible, interoperable, and reusable (FAIR)
(7) Clinical & phenotypic data capture & exchange: Develops data models to ensure genomic data is most impactful through rich metadata collected in a standardized way
(8) Large-scale genomics: Develops APIs and file formats to ensure harmonized technological platforms can support large-scale computing
For more articles on Open Access, Science 2.0, and Data Networks for Genomics on this Open Access Scientific Journal see: