
Archive for the ‘Intelligent Information Systems’ Category

The Health Care Dossier on Clarivate PLC: How Cortellis Is Changing the Life Sciences Industry

Curator: Stephen J. Williams, Ph.D.

Source: https://en.wikipedia.org/wiki/Clarivate 

Clarivate Plc is a British-American publicly traded analytics company that operates a collection of subscription-based services in the areas of bibliometrics and scientometrics; business/market intelligence and competitive profiling for pharmacy and biotech; patents and regulatory compliance; and trademark, domain, and brand protection. In academia and the scientific community, Clarivate is known as the company that calculates the impact factor,[4] using data from its Web of Science product family, which also includes services/applications such as Publons, EndNote, EndNote Click, and ScholarOne. Its other product families are Cortellis, DRG, CPA Global, Derwent, MarkMonitor, CompuMark, and Darts-ip,[3] as well as the various ProQuest products and services.

Clarivate was formed in 2016, following the acquisition of Thomson Reuters’ Intellectual Property and Science business by Onex Corporation and Baring Private Equity Asia. Clarivate has acquired various companies since then, including, notably, ProQuest in 2021.

 

Further information: Thomson Scientific

Clarivate was formerly the Intellectual Property and Science division of Thomson Reuters. Before 2008, it was known as Thomson Scientific. In 2016, Thomson Reuters struck a $3.55 billion deal in which it spun the division off as an independent company and sold it to private-equity firms Onex Corporation and Baring Private Equity Asia.

In May 2019, Clarivate merged with the Churchill Capital Corp SPAC to obtain a public listing on the New York Stock Exchange (NYSE). It currently trades under the symbol NYSE:CLVT.

 

Acquisitions

  • June 1, 2017: Publons, a platform for researchers to share recognition for peer review.
  • April 10, 2018: Kopernio, an AI-tech startup providing the ability to search for full-text versions of selected scientific journal articles.
  • October 30, 2018: TrademarkVision, provider of Artificial Intelligence (AI) trademark research applications.
  • September 9, 2019: SequenceBase, provider of patent sequence information and search technology to the biotech, pharmaceutical and chemical industries.
  • December 2, 2019: Darts-ip, provider of case law data and analytics for intellectual property (IP) professionals.
  • January 17, 2020: Decision Resources Group (DRG), a leading healthcare research and consulting company, providing high-value healthcare industry analysis and insights.
  • June 22, 2020: CustomersFirst Now, in intellectual property (“IP”) software and tech-enabled services.
  • October 1, 2020: CPA Global, intellectual property (“IP”) software and tech-enabled services.
  • December 1, 2021: ProQuest, software, data and analytics provider to academic, research and national institutions.[27] It was acquired for $5.3 billion from Cambridge Information Group in what was described as a “huge deal in the library and information publishing world”. The company said that the operational concept behind the acquisition was integrating ProQuest’s products and applications with Web of Science. Chairman of ProQuest Andy Snyder became the vice chairman of Clarivate. The Scholarly Publishing and Academic Resources Coalition, an advocacy group for open access to scholarship, voiced antitrust concerns. The acquisition had been delayed mid-year due to a Federal Trade Commission antitrust probe.

Divestments

 

How Clarivate Has Changed Since 2019

2019 Strategy

From the Management Discussion section of the 2019 annual report

We are a leading global information services and analytics company serving the scientific research, intellectual property and life sciences end-markets. We provide structured information and analytics to facilitate the discovery, protection and commercialization of scientific research, innovations and brands. Our product portfolio includes well-established market-leading brands such as Web of Science, Derwent Innovation, Life Sciences, CompuMark and MarkMonitor (which they later divested). We believe that the strong value proposition of our content, user interfaces, visualization and analytical tools, combined with the integration of our products and services into customers’ daily workflows, leads to our substantial customer loyalty as evidenced by their willingness to renew subscriptions with us.

Our structure, enabling a sharp focus on cross-selling opportunities within markets, is comprised of two product groups:

  • Science Group: consists of Web of Science and Life Science Product Lines
  • Intellectual Property Group: consists of Derwent, CompuMark and MarkMonitor

Corporations, government agencies, universities, and law firms depend on our high-value curated content, analytics and services. Unstructured data has grown exponentially over the last decade. This trend has resulted in a critical need for unstructured data to be meaningfully filtered, analyzed and curated into relevant information that facilitates key operational and strategic decision making. Our highly curated, proprietary information, created through our sourcing, aggregation, verification, translation, and categorization (ontology) of data, has resulted in our solutions being embedded in our customers’ workflow and decision-making processes.


 

Figure.  Overview of Clarivate PLC five year strategy in 2019. Note that in 2019 the Science Group accounted for 56.2% of revenue! This was driven by their product Cortellis!

Also note that nowhere in the M&A discussion in the years before 2023 was anything mentioned concerning AI or large language models.

 

The Clarivate of Today:  Built for Life Sciences with Cortellis

Clarivate PLC has integrated multiple platforms into its Cortellis offering, which incorporates AI and LLMs into structured knowledge bases (see more at https://clarivate.com/products/cortellis-family/).

“Life sciences organizations are tasked, now more than ever, to discover and develop treatments that challenge the status quo, increase ROI, and improve patient lives. However, it’s become increasingly difficult to find, integrate and analyze the key data your teams need to make critical decisions and get your products to patients faster.

The Cortellis solutions help research and development, portfolio strategy and business development, and regulatory and compliance professionals gather and assess the information you need to discover innovative drugs, differentiate your treatments, and increase the chances of successful regulatory approval.

Some of the Cortellis solutions include:

  1. Cortellis Competitive Intelligence: maximize ROI and improve patient outcomes
  2. Cortellis Deals Intelligence: Portfolio Strategy and Business Development (find best deal)
  3. Cortellis Clinical Intelligence: Clinical Trial Support and Regulatory
  4. Cortellis Digital Health Intelligence: understand digital health ecosystem
  5. Cortellis Drug Discovery: improve drug development speed and efficiency
  6. MetaBase and MetaCore: integrated omics knowledge bases for drug discovery
  7. Cortellis Regulatory: help with filings
  9. Cortellis HTA: health technology assessment and reimbursement intelligence
  9. CMC Intelligence: new drug marketing
  10. Generics Intelligence
  11. Drug Safety Intelligence: both preclinical safety and post-marketing pharmacovigilance

 

 

Watch Videos on Cortellis for Drug Discovery

 

 

 

Watch the video on the Qiagen site to see how Cortellis integrates the Qiagen omics platform IPA with Clarivate MetaCore to gain more insights into genomic and proteomic data

https://digitalinsights.qiagen.com/products-overview/discovery-insights-portfolio/analysis-and-visualization/qiagen-ipa/

From the Qiagen website on Ingenuity Pathway Analysis: https://digitalinsights.qiagen.com/products-overview/discovery-insights-portfolio/analysis-and-visualization/qiagen-ipa/ 

Understand complex ‘omics data to accelerate your research

Discover why QIAGEN Ingenuity Pathway Analysis (IPA) is the leading pathway analysis application among the life science research community and is cited in tens of thousands of articles for the analysis, integration and interpretation of data derived from ‘omics experiments. Such experiments include:

  • RNA-seq
  • Small RNA-seq
  • Metabolomics
  • Proteomics
  • Microarrays including miRNA and SNP
  • Small-scale experiments

With QIAGEN IPA you can predict downstream effects and identify new targets or candidate biomarkers. QIAGEN Ingenuity Pathway Analysis helps you perform insightful data analysis and interpretation to understand your experimental results within the context of various biological systems.

 

Articles Relevant to Drug Development, Natural Language Processing in Drug Development, and Clarivate on this Open Access Scientific Journal Include:

The Use of ChatGPT in the World of BioInformatics and Cancer Research and Development of BioGPT by MIT

 

From High-Throughput Assay to Systems Biology: New Tools for Drug Discovery

 

Medical Startups – Artificial Intelligence (AI) Startups in Healthcare

 

New York Academy of Sciences Symposium: The New Wave of AI in Healthcare 2024. May 1-2, 2024 New York City, NY

 

Clarivate Analytics – a Powerhouse in IP assets and in Pharmaceuticals Informercials

 

Read Full Post »

The Use of ChatGPT in the World of BioInformatics and Cancer Research and Development of BioGPT by MIT

Curator: Stephen J. Williams, Ph.D.

Chatbots are being used in multiple interdisciplinary areas in research and medicine, so it was a natural progression to incorporate artificial intelligence (AI), natural language processing (NLP), and chatbot technology like ChatGPT into bioinformatic analysis.

“In domains like informatics, management and marketing, media and communication science, languages and philosophy, psychology and sociology, engineering, design, and human-computer interaction, the fast expanding body of chatbot study is clearly interdisciplinary.

The field of bioinformatics is a natural fit for incorporating this technology.  The curated information below shows some examples of the use of this technology in bioinformatics related to cancer research.

The text in this reference source was actually generated by ChatGPT (https://chat.openai.com/chat)

Source: https://omicstutorials.com/using-chatgpt-in-bioinformatics-and-biomedical-research/

How can ChatGPT be used in bioinformatics research?
ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model that was designed to generate human-like text in a conversational setting. It is not directly related to bioinformatics, which is the field of study that deals with the storage, retrieval, and analysis of biological data, particularly in the context of genetics and genomics. However, ChatGPT could potentially be used in a bioinformatics context as a tool for generating natural language descriptions of genetic or genomic data, or for generating responses to queries about such data.

For example, a researcher could use ChatGPT to generate descriptions of genetic variants or gene expression patterns based on input data. This could be useful for generating summaries of results for reports or papers, or for generating explanations of complex concepts for non-expert audiences. ChatGPT could also be used to generate responses to queries about genetic or genomic data, such as “What is the function of gene X?” or “What are the potential consequences of variant Y?”

It’s worth noting that ChatGPT is just one of many tools and techniques that can be used in bioinformatics, and it is unlikely to be the most suitable or effective option in every situation. There are many specialized tools and resources available for working with biological data, and it is often necessary to use a combination of these tools to fully analyze and understand the data.

Generating descriptions of genetic or genomic data: ChatGPT could be used to generate natural language descriptions of genetic or genomic data based on input data. For example, suppose a researcher has a dataset containing information about gene expression levels in different tissues. The researcher could use ChatGPT to generate a description of the data, such as:
“Gene X is highly expressed in the liver and kidney, with moderate expression in the brain and heart. Gene Y, on the other hand, shows low expression in all tissues except for the lung, where it is highly expressed.”
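As a sketch of how such a description could be requested programmatically (this is not from the source; the model name, prompt, and expression summary are illustrative assumptions, and it presumes the openai Python client with an API key in the environment):

```python
# Illustrative sketch: asking a chat model to describe a gene-expression
# result in plain language. Model name and prompt are examples only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

expression_summary = "GeneX TPM: liver=120, kidney=95, brain=30, heart=25"

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # example model name
    messages=[
        {"role": "system", "content": "You summarize gene expression data for non-experts."},
        {"role": "user", "content": f"Describe this expression pattern in two sentences: {expression_summary}"},
    ],
)

print(response.choices[0].message.content)
```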

 

Thereby ChatGPT, at its simplest level, could be used to ask general questions like “What is the function of gene product X?” and give a reasonable response without the scientist having to browse through even highly curated databases like GeneCards, UniProt, or GenBank.  Or even “What are potential interactors of Gene X, validated by yeast two-hybrid?” without going to the curated interactome databases or using expensive software like Genie.

Summarizing results: ChatGPT could be used to generate summaries of results from genetic or genomic studies. For example, a researcher might use ChatGPT to generate a summary of a study that found an association between a particular genetic variant and a particular disease. The summary might look something like this:
“Our study found that individuals with the variant form of gene X are more likely to develop disease Y. Further analysis revealed that this variant is associated with changes in gene expression that may contribute to the development of the disease.”

It’s worth noting that ChatGPT is just one tool that could potentially be used in these types of applications, and it is likely to be most effective when used in combination with other bioinformatics tools and resources. For example, a researcher might use ChatGPT to generate a summary of results, but would also need to use other tools to analyze the data and confirm the findings.

ChatGPT is a variant of the GPT (Generative Pre-training Transformer) language model that is designed for open-domain conversation. It is not specifically designed for generating descriptions of genetic variants or gene expression patterns, but it can potentially be used for this purpose if you provide it with a sufficient amount of relevant training data and fine-tune it appropriately.

To use ChatGPT to generate descriptions of genetic variants or gene expression patterns, you would first need to obtain a large dataset of examples of descriptions of genetic variants or gene expression patterns. You could use this dataset to fine-tune the ChatGPT model on the task of generating descriptions of genetic variants or gene expression patterns.

Here’s an example of how you might use ChatGPT to generate a description of a genetic variant:

First, you would need to pre-process your dataset of descriptions of genetic variants to prepare it for use with ChatGPT. This might involve splitting the descriptions into individual sentences or phrases, and encoding them using a suitable natural language processing (NLP) library or tool.

Next, you would need to fine-tune the ChatGPT model on the task of generating descriptions of genetic variants. This could involve using a tool like Hugging Face’s Transformers library to load the ChatGPT model and your pre-processed dataset, and then training the model on the task of generating descriptions of genetic variants using an appropriate optimization algorithm.

Once the model has been fine-tuned, you can use it to generate descriptions of genetic variants by providing it with a prompt or seed text and asking it to generate a response. For example, you might provide the model with the prompt “Generate a description of a genetic variant associated with increased risk of breast cancer,” and ask it to generate a response. The model should then generate a description of a genetic variant that is associated with increased risk of breast cancer.

It’s worth noting that generating high-quality descriptions of genetic variants or gene expression patterns is a challenging task, and it may be difficult to achieve good results using a language model like ChatGPT without a large amount of relevant training data and careful fine-tuning.
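A compressed sketch of that fine-tuning workflow, using the Hugging Face Transformers Trainer on a GPT-style causal language model, might look like the following. The base checkpoint, file name, column name, and hyperparameters are placeholders rather than recommendations:

```python
# Minimal fine-tuning sketch (illustrative; file name, column name and
# hyperparameters are placeholders, not taken from the article).
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "gpt2"  # stand-in for a GPT-style base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# One text column named "description", e.g. curated variant descriptions.
dataset = load_dataset("csv", data_files="variant_descriptions.csv")

def tokenize(batch):
    return tokenizer(batch["description"], truncation=True, max_length=256)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["description"])

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="variant-lm", num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```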

 

To train a language model like chatGPT to extract information about specific genes or diseases from research papers, you would need to follow these steps:

Gather a large dataset of research papers that contain information about the specific genes or diseases you are interested in. This dataset should be diverse and representative of the types of papers you want the model to be able to extract information from.

Preprocess the text data in the research papers by tokenizing the text and creating a vocabulary. You may also want to consider lemmatizing or stemming the text to reduce the dimensionality of the dataset.

Train the language model on the preprocessed text data. You may want to fine-tune a pre-trained model such as chatGPT on your specific dataset, or you can train a new model from scratch.
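For the preprocessing step above, a toy sketch (standard-library Python only, purely illustrative) of tokenizing abstracts and building a frequency-ranked vocabulary might look like this:

```python
# Toy preprocessing sketch: tokenize abstracts, lowercase them, and build
# a frequency-ranked vocabulary (illustrative only).
import re
from collections import Counter

abstracts = [
    "BRCA1 variants are associated with hereditary breast cancer.",
    "Expression of TP53 is frequently altered in many tumour types.",
]

def tokenize(text):
    # crude word-level tokenizer; a real pipeline would use a library tokenizer
    return re.findall(r"[a-z0-9]+", text.lower())

token_counts = Counter(tok for abstract in abstracts for tok in tokenize(abstract))

# keep the most frequent tokens as the model vocabulary
vocab = {tok: idx for idx, (tok, _) in enumerate(token_counts.most_common(10_000))}

encoded = [[vocab[tok] for tok in tokenize(a) if tok in vocab] for a in abstracts]
print(encoded[0][:10])
```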

ChatGPT could also be useful for sequence analysis

A few examples of sequence analysis for which ChatGPT could be useful include:

  1. Protein structure
  2. Identifying functional regions of a protein
  3. Predicting protein-protein interactions
  4. Identifying protein homologs
  5. Generating Protein alignments

All of this could be done without having access to UNIX servers, proprietary software, or knowledge of GCG coding.
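For instance, item 5 in the list above might yield a snippet along these lines: an illustrative sketch of the kind of Biopython pairwise-alignment code a chat assistant could return, with made-up sequences.

```python
# Illustrative sketch: a simple protein pairwise alignment with Biopython,
# the sort of snippet a chat assistant might return. Sequences are made up.
from Bio.Align import PairwiseAligner, substitution_matrices

aligner = PairwiseAligner()
aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
aligner.open_gap_score = -10
aligner.extend_gap_score = -0.5

seq_a = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
seq_b = "MKTAYIAKQRQISFVKSHFARQLEERLGLIEVQ"

alignment = aligner.align(seq_a, seq_b)[0]
print(alignment)
print("score:", alignment.score)
```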

ChatGPT in biomedical research
There are several potential ways that ChatGPT or other natural language processing (NLP) models could be applied in biomedical research:

Text summarization: ChatGPT or other NLP models could be used to summarize large amounts of text, such as research papers or clinical notes, in order to extract key information and insights more quickly.

Data extraction: ChatGPT or other NLP models could be used to extract structured data from unstructured text sources, such as research papers or clinical notes. For example, the model could be trained to extract information about specific genes or diseases from research papers, and then used to create a database of this information for further analysis.

Literature review: ChatGPT or other NLP models could be used to assist with literature review tasks, such as identifying relevant papers, extracting key information from papers, or summarizing the main findings of a group of papers.

Predictive modeling: ChatGPT or other NLP models could be used to build predictive models based on large amounts of text data, such as electronic health records or research papers. For example, the model could be trained to predict the likelihood of a patient developing a particular disease based on their medical history and other factors.

It’s worth noting that while NLP models like ChatGPT have the potential to be useful tools in biomedical research, they are only as good as the data they are trained on, and it is important to carefully evaluate the quality and reliability of any results generated by these models.
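As a concrete illustration of the text-summarization use case, a minimal sketch with the Hugging Face pipeline API might look like the following; the checkpoint name is an example of a public general-purpose summarization model, not a biomedical one:

```python
# Sketch of the text-summarization use case (illustrative; the checkpoint
# name is an example of a public summarization model, not a recommendation).
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

abstract = (
    "Pre-trained language models have attracted increasing attention in the "
    "biomedical domain, inspired by their great success in the general natural "
    "language domain. Among the two main branches of pre-trained language models, "
    "BERT and GPT, the first has been extensively studied in the biomedical domain."
)

summary = summarizer(abstract, max_length=60, min_length=20, do_sample=False)
print(summary[0]["summary_text"])
```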

 

ChatGPT in text mining of biomedical data
ChatGPT could potentially be used for text mining in the biomedical field in a number of ways. Here are a few examples:

Extracting information from scientific papers: ChatGPT could be trained on a large dataset of scientific papers in the biomedical field, and then used to extract specific pieces of information from these papers, such as the names of compounds, their structures, and their potential uses.

Generating summaries of scientific papers: ChatGPT could be used to generate concise summaries of scientific papers in the biomedical field, highlighting the main findings and implications of the research.

Identifying trends and patterns in scientific literature: ChatGPT could be used to analyze large datasets of scientific papers in the biomedical field and identify trends and patterns in the data, such as emerging areas of research or common themes among different papers.

Generating questions for further research: ChatGPT could be used to suggest questions for further research in the biomedical field based on existing scientific literature, by identifying gaps in current knowledge or areas where further investigation is needed.

Generating hypotheses for scientific experiments: ChatGPT could be used to generate hypotheses for scientific experiments in the biomedical field based on existing scientific literature and data, by identifying potential relationships or associations that could be tested in future research.

 

PLEASE WATCH VIDEO

 

In this video, a bioinformatician describes the ways he uses ChatGPT to increase his productivity in writing bioinformatic code and conducting bioinformatic analyses.

He describes a series of uses of ChatGPT in his day-to-day work as a bioinformatician:

  1. Using ChatGPT as a search engine: He finds more useful and relevant search results than a standard Google or Yahoo search.  This saves time, as one does not have to pore through multiple pages to find information.  However, a caveat is that ChatGPT does NOT return sources, as highlighted in previous postings on this page.  This feature of ChatGPT is probably why Microsoft invested heavily in OpenAI, in order to incorporate ChatGPT into its Bing search engine as well as Office suite programs

 

  2. ChatGPT to help with coding projects: Bioinformaticians will spend multiple hours searching for and altering openly available code in order to run certain functions, like determining the G/C content of DNA (although much UNIX-based code has already been established for these purposes). One can use ChatGPT to find such code and then assist in debugging it for any flaws (see the sketch after this list)

 

  3. ChatGPT to document and add coding comments: When writing code it is useful to add comments periodically to assist other users in understanding how the code works and how the program flow proceeds, including returned variables.
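As referenced in item 2 above, the sketch below shows the kind of small helper ChatGPT typically returns for the G/C-content task; it is illustrative, not taken from the video.

```python
# Illustrative example of the kind of helper ChatGPT returns for item 2:
# compute the G/C content of a DNA sequence (standard-library Python only).
def gc_content(sequence: str) -> float:
    """Return the fraction of G and C bases in a DNA sequence."""
    seq = sequence.upper()
    if not seq:
        raise ValueError("empty sequence")
    gc = sum(1 for base in seq if base in "GC")
    return gc / len(seq)

print(f"GC content: {gc_content('ATGCGCGTATTA'):.2%}")  # -> 41.67%
```

The same pattern extends to other small utilities a bioinformatician might ask for, such as reverse complements or codon counts.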

 

One of the comments was interesting and directed one to use BIOGPT instead of ChatGPT

 

@tzvi7989

1 month ago (edited)

0:54 oh dear. You cannot use chatgpt like that in Bioinformatics as it is rn without double checking the info from it. You should be using biogpt instead for paper summarisation. ChatGPT goes for human-like responses over precise information recal. It is quite good for debugging though and automating boring awkward scripts

So what is BIOGPT?

BioGPT https://github.com/microsoft/BioGPT

 

The BioGPT model was proposed in BioGPT: generative pre-trained transformer for biomedical text generation and mining by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. BioGPT is a domain-specific generative pre-trained Transformer language model for biomedical text generation and mining. BioGPT follows the Transformer language model backbone, and is pre-trained on 15M PubMed abstracts from scratch.

The abstract from the paper is the following:

Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.

Tips:

  • BioGPT is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left.
  • BioGPT was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows BioGPT to generate syntactically coherent text as it can be observed in the run_generation.py example script.
  • The model can take the past_key_values (for PyTorch) as input, which is the previously computed key/value attention pairs. Using this (past_key_values or past) value prevents the model from re-computing pre-computed values in the context of text generation. For PyTorch, see past_key_values argument of the BioGptForCausalLM.forward() method for more information on its usage.

This model was contributed by kamalkraj. The original code can be found here.
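Putting the tips above together, a minimal generation sketch with the Transformers BioGPT classes might look like this (the checkpoint name is the public microsoft/biogpt model on the Hugging Face Hub; the prompt is an example):

```python
# Minimal generation sketch with the Hugging Face BioGPT integration
# (checkpoint name is the public microsoft/biogpt model on the Hub).
from transformers import BioGptTokenizer, BioGptForCausalLM, pipeline, set_seed

tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForCausalLM.from_pretrained("microsoft/biogpt")

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
set_seed(42)

result = generator("Bicalutamide is", max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])
```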

 

This repository contains the implementation of BioGPT: Generative Pre-trained Transformer for Biomedical Text Generation and Mining, by Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon and Tie-Yan Liu. BioGPT is a GitHub project developed by Microsoft Research and released under the MIT license. It is implemented in Python.

License

BioGPT is MIT-licensed. The license applies to the pre-trained models as well.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

As of right now this does not seem to be fully open access, as a sign-up is required.

We provide our pre-trained BioGPT model checkpoints along with fine-tuned checkpoints for downstream tasks, available both through URL download as well as through the Hugging Face 🤗 Hub.

Model: Description (each checkpoint is available via URL download and on the Hugging Face 🤗 Hub)

  • BioGPT: pre-trained BioGPT model checkpoint
  • BioGPT-Large: pre-trained BioGPT-Large model checkpoint
  • BioGPT-QA-PubMedQA-BioGPT: fine-tuned BioGPT for the question answering task on PubMedQA
  • BioGPT-QA-PubMedQA-BioGPT-Large: fine-tuned BioGPT-Large for the question answering task on PubMedQA
  • BioGPT-RE-BC5CDR: fine-tuned BioGPT for the relation extraction task on BC5CDR
  • BioGPT-RE-DDI: fine-tuned BioGPT for the relation extraction task on DDI
  • BioGPT-RE-DTI: fine-tuned BioGPT for the relation extraction task on KD-DTI
  • BioGPT-DC-HoC: fine-tuned BioGPT for the document classification task on HoC

 

To date (6/12/2023), there are three papers referencing BioGPT on PubMed (abstracts and links below).

 

BioGPT: generative pre-trained transformer for biomedical text generation and mining.

Luo R, Sun L, Xia Y, Qin T, Zhang S, Poon H, Liu TY. Brief Bioinform. 2022 Nov 19;23(6):bbac409. doi: 10.1093/bib/bbac409. PMID: 36156661

We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. …Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fl …

Here they show the advantages of BioGPT over the general-purpose GPT-2 in examples of text generated about biomedical terms

Table 7. Example of generated text

Input: Bicalutamide

GPT-2: Bicalutamide, a synthetic bicalutamide derivative, was found to increase the cellular proliferation of C. elegans cells. This effect was not observed with the other synthetic bicalutamide derivatives. This is in line with our previous work which showed that bicalutamide increases the proliferation of C. elegans cells but not in the presence of synthetic bicalutamide.

BioGPT: Bicalutamide (Casodex) is an androgen receptor (AR) antagonist approved for the treatment of metastatic castration-resistant prostate cancer (mCRPC) in patients who have progressed on or are ineligible for docetaxel chemotherapy, as well as for the treatment of early-stage prostate cancer in men who have not progressed on or are ineligible for docetaxel chemotherapy, as well as for the treatment of metastatic castration-sensitive prostate cancer (mCSPC) in men who have not progressed on or are ineligible for docetaxel chemotherapy.

Input: Janus kinase 3 (JAK-3)

GPT-2: Janus kinase 3 (JAK-3) mediates the conversion of glucose to glycogen in skeletal muscle. The increase in glucose uptake by skeletal muscle is believed to be associated with a reduction in muscle protein breakdown and with the release of ketone bodies, which contribute to the metabolism of glucose.

BioGPT: Janus kinase 3 (JAK-3) is a member of the Janus kinase (JAK) family of non-receptor tyrosine kinases and plays an important role in the regulation of cell proliferation, differentiation, survival, migration and angiogenesis.

Input: Apricitabine

GPT-2: Apricitabine, duloxetine and citalopram) were investigated in patients with a history of treatment-resistant depression (n = 10). We found that treatment with the three drugs significantly decreased depressive symptoms, but not depressive symptoms over the next 12 weeks. We conclude that antidepressant drugs that reduce depressive symptoms and reduce depressive symptoms over time may be beneficial in patients with recurrent depression.

BioGPT: Apricitabine is an oral prodrug of 5-aza-2′-deoxycytidine (5-aza-CdR), a DNA methyltransferase (DNMT) inhibitor, which has been approved by the US Food and Drug Administration (FDA) for the treatment of myelodysplastic syndrome (MDS) and acute myeloid leukemia (AML) in combination with low-dose cytarabine (Ara-C) and granulocyte colony-stimulating factor (G-CSF) for patients with intermediate-2 or high-risk MDS or AML.

Input: Xylazine

GPT-2: Xylazine, the ‘active ingredient’ in ‘bath salts’ (see above) has been linked to numerous deaths, including those of former University of California, Berkeley, professor Andrew Anglin, who posted a video of himself having a bath filled with it.

BioGPT: Xylazine is an alpha 2-adrenoceptor agonist which has been used as a sedative and analgesic in veterinary medicine for many years, but its effects on the cardiovascular system have not been extensively studied in the dog, and its effects on the central nervous system (CNS) have not been well characterized in the dog, despite the fact that xylazine has been widely used as a sedative and analgesic in veterinary medicine for more than 30 years.

Input: Psoralen

GPT-2: Psoralen, the city of the spiritualist, and had been given to him by God, who wished to show him the way of salvation, and to draw him back to the true life of Christ, and to bring him into communion with his fellow-man.

BioGPT: Psoralen photochemotherapy (PUVA) is a well-established treatment for psoriasis, but its use is limited by the risk of skin cancer, particularly squamous cell carcinoma (SCC) of the head and neck (H & N), which is the most common site of skin cancer in the United States (US)

Input: CP-673451

GPT-2: CP-673451) and 4-5-6-3-1-2-3-4-6-7-8-9-10-11-12-13-14-15-16-17-18-19-20-21-22-23-24-25-26-27-28-29-30-31-32-33-34-35-36-37-38-39-40-41-42-43-44-45-46-47-48-49-50…

BioGPT: CP-673451 is a potent, selective, and orally active inhibitor of human neutrophil elastase (HNE) and human cathepsin G (CatG) with in vitro and in vivo anti-inflammatory activity in a variety of animal models of inflammation and in a model of acute lung injury (ALI) in the rat induced by intratracheal instillation of lipopolysaccharide (LPS) and tumor necrosis factor-alpha (TNF-alpha), a model of acute lung injury (ALI) in which neutrophils play an important role.

Input: BIIB-021

GPT-2: BIIB-021A-2-1.pdf. Wright, C. (2002). ‘The War on Terror: A Legal Theory of Terrorist Attacks,’ in L.E.R. Wright, S.R. Warrick and J.H. Rothman (Eds.), The War on Terror: Essays in Honor of Michael T. Klare (New York: Oxford University Press), 143-57.

BioGPT: BIIB-021 is a novel, orally active, non-peptide bradykinin B2 receptor antagonist with potent and long-lasting anti-inflammatory activity in animal models of acute and chronic inflammation and in a rat model of adjuvant-induced arthritis (AIA), an animal model of rheumatoid arthritis (RA) and in a rat model of collagen-induced arthritis (CIA), an animal model of collagen-induced arthritis (CIA), in which arthritis is induced by immunization with bovine type II collagen (CII).

Note how BioGPT is more descriptive and accurate!

EGFI: drug-drug interaction extraction and generation with fusion of enriched entity and sentence information.

Huang L, Lin J, Li X, Song L, Zheng Z, Wong KC. Brief Bioinform. 2022 Jan 17;23(1):bbab451. doi: 10.1093/bib/bbab451. PMID: 34791012

The rapid growth in literature accumulates diverse and yet comprehensive biomedical knowledge hidden to be mined such as drug interactions. However, it is difficult to extract the heterogeneous knowledge to retrieve or even discover the latest and novel knowledge in an efficient manner. To address such a problem, we propose EGFI for extracting and consolidating drug interactions from large-scale medical literature text data. Specifically, EGFI consists of two parts: classification and generation. In the classification part, EGFI encompasses the language model BioBERT which has been comprehensively pretrained on biomedical corpus. In particular, we propose the multihead self-attention mechanism and packed BiGRU to fuse multiple semantic information for rigorous context modeling. In the generation part, EGFI utilizes another pretrained language model BioGPT-2 where the generation sentences are selected based on filtering rules.

Results: We evaluated the classification part on ‘DDIs 2013’ dataset and ‘DTIs’ dataset, achieving the F1 scores of 0.842 and 0.720 respectively. Moreover, we applied the classification part to distinguish high-quality generated sentences and verified with the existing ground truth to confirm the filtered sentences. The generated sentences that are not recorded in DrugBank and DDIs 2013 dataset demonstrated the potential of EGFI to identify novel drug relationships.

Availability: Source code are publicly available at https://github.com/Layne-Huang/EGFI.

 

GeneGPT: Augmenting Large Language Models with Domain Tools for Improved Access to Biomedical Information.

Jin Q, Yang Y, Chen Q, Lu Z. ArXiv. 2023 May 16:arXiv:2304.09667v3. Preprint. PMID: 37131884. Free PMC article.

While large language models (LLMs) have been successfully applied to various tasks, they still face challenges with hallucinations. Augmenting LLMs with domain-specific tools such as database utilities can facilitate easier and more precise access to specialized knowledge. In this paper, we present GeneGPT, a novel method for teaching LLMs to use the Web APIs of the National Center for Biotechnology Information (NCBI) for answering genomics questions. Specifically, we prompt Codex to solve the GeneTuring tests with NCBI Web APIs by in-context learning and an augmented decoding algorithm that can detect and execute API calls. Experimental results show that GeneGPT achieves state-of-the-art performance on eight tasks in the GeneTuring benchmark with an average score of 0.83, largely surpassing retrieval-augmented LLMs such as the new Bing (0.44), biomedical LLMs such as BioMedLM (0.08) and BioGPT (0.04), as well as GPT-3 (0.16) and ChatGPT (0.12). Our further analyses suggest that: (1) API demonstrations have good cross-task generalizability and are more useful than documentations for in-context learning; (2) GeneGPT can generalize to longer chains of API calls and answer multi-hop questions in GeneHop, a novel dataset introduced in this work; (3) Different types of errors are enriched in different tasks, providing valuable insights for future improvements.
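For context, the sketch below shows the kind of direct NCBI E-utilities Web API call that GeneGPT learns to issue. The endpoints follow NCBI's public E-utilities interface; the gene query and the JSON field access are illustrative assumptions, not code from the paper.

```python
# Sketch of the kind of NCBI E-utilities call GeneGPT learns to issue
# (endpoints per NCBI's public E-utilities; the gene query is an example,
# and field names follow the E-utilities JSON document summary format).
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

# 1) Find the NCBI Gene ID for human BRCA1
search = requests.get(f"{EUTILS}/esearch.fcgi", params={
    "db": "gene", "term": "BRCA1[sym] AND human[orgn]", "retmode": "json",
}).json()
gene_id = search["esearchresult"]["idlist"][0]

# 2) Fetch a summary record for that gene
summary = requests.get(f"{EUTILS}/esummary.fcgi", params={
    "db": "gene", "id": gene_id, "retmode": "json",
}).json()
print(summary["result"][gene_id]["description"])
```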

PLEASE WATCH THE FOLLOWING VIDEOS ON BIOGPT

This one entitled

Microsoft’s BioGPT Shows Promise as the Best Biomedical NLP

 

gives a good general description of this new Microsoft Research project and its usefulness in scanning 15 million articles on PubMed while returning ChatGPT-like answers.

 

Please note one of the comments which is VERY IMPORTANT


@rufus9322

2 months ago

bioGPT is difficult for non-developers to use, and Microsoft researchers seem to default that all users are proficient in Python and ML.

 

Much like Microsoft Azure, it seems BioGPT is meant for developers with advanced programming skills.  It seems odd, then, to be paying programmers large salaries when one or two key opinion leaders from the medical field might suffice, but I am sure Microsoft will figure this out.

 

ALSO VIEW VIDEO

 

 

This is a talk from Microsoft on BioGPT

 

Other Relevant Articles on Natural Language Processing in BioInformatics, Healthcare and ChatGPT for Medicine on this Open Access Scientific Journal Include

Medicine with GPT-4 & ChatGPT
Explanation on “Results of Medical Text Analysis with Natural Language Processing (NLP) presented in LPBI Group’s NEW GENRE Edition: NLP” on Genomics content, standalone volume in Series B and NLP on Cancer content as Part B New Genre Volume 1 in Series C

Proposal for New e-Book Architecture: Bi-Lingual eTOCs, English & Spanish with NLP and Deep Learning results of Medical Text Analysis – Phase 1: six volumes

From High-Throughput Assay to Systems Biology: New Tools for Drug Discovery

Machine Learning (ML) in cancer prognosis prediction helps the researcher to identify multiple known as well as candidate cancer diver genes

 

20 articles in Natural Language Processing

142 articles in BioIT: BioInformatics

111 articles in BioIT: BioInformatics, NGS, Clinical & Translational, Pharmaceutical R&D Informatics, Clinical Genomics, Cancer Informatics

 

Read Full Post »

Relevance of Twitter.com forthcoming Payment System for Scientific Content Promotion and Monetization

Highlighted Text in BLUE, BLACK, GREEN, RED by Aviva Lev-Ari, PhD, RN


Gian M. Volpicelli

SENIOR WRITER

Gian M. Volpicelli is a senior writer at WIRED, where he covers cryptocurrency, decentralization, politics, and technology regulation. He received a master’s degree in journalism from City University of London after studying politics and international relations in Rome. He lives in London.

SOURCE

https://www.wired.com/story/twitter-crypto-strategy/

BUSINESS

APR 5, 2022 7:00 AM

What Twitter Is Really Planning for Crypto

The duo behind Twitter Crypto say NFT profile pics and crypto tipping are just the beginning.

YOU MIGHT HAVE heard of crypto Twitter, the corner of the social network where accounts have Bored Apes as profile pictures, posts are rife with talk of tokens, blockchains, and buying the Bitcoin dip, and Elon Musk is venerated.

Then again, you might have heard of Twitter Crypto, the business unit devoted to developing the social network’s strategy for cryptocurrency, blockchains, and that grab-bag of decentralized technologies falling under the rubric of Web3. The team’s unveiling came in November 2021 via a tweet from the newly hired project lead, Tess Rinearson, a Berlin-based American computer scientist whose career includes stints at blockchain companies such as Tendermint and Interchain.

Rinearson joined Twitter at a crucial moment. Jack Dorsey, the vociferously pro-Bitcoin company CEO, would leave a few weeks later, to be replaced by CTO Parag Agrawal. Agrawal had played an instrumental role in Bluesky, a Twitter-backed project to create a protocol—possibly with blockchain components—to build decentralized social networks.

As crypto went mainstream globally and crypto Twitter burgeoned, the company tried to dominate the space. Under the stewardship of product manager Esther Crawford, in September 2021 Twitter introduced a “tipping” feature that helps creators on Twitter to receive Bitcoin contributions through Lightning—a network for fast Bitcoin payments. In January, Twitter allowed subscribers of Twitter’s premium service, Twitter Blue, to flaunt their NFTs as hexagonal profile pictures, through a partnership with NFT marketplace OpenSea.

Twitter Crypto is just getting started. While Rinearson works with people all across the company, her team is still under 10 people, although more hires are in the pipeline, judging from recent job postings. So it’s worth asking what is next. I caught up over a video call with Rinearson and Crawford to talk about where Twitter Crypto is headed. 

The conversation has been edited for clarity and brevity.

WIRED: Let’s start with the basics. Why does Twitter have a crypto unit?

Tess Rinearson: We really see crypto—and what we’re now calling Web3— as something that could be this incredibly powerful tool that would unlock a lot for our users. The whole crypto world is like an internet of money, an internet of value that our users can potentially tap into to create new ways of owning their content, monetizing their content, owning their own identity, and even relating to each other.

One of my goals is to build Twitter’s crypto unit in such a way that it caters to communities that go beyond just that core crypto community. I love the crypto Twitter space, obviously—I’m a very proud member of the crypto community. And at the same time, I recognize that people who are really deep in the crypto space may not relate to concepts, like for instance blockchain’s immutability, in the same way that someone who’s less intensely involved might feel about those things.

So a lot of what we try to think about is, what can we learn from this group of people who are super engaged and really, really, creative? And then, how can we translate some of that stuff into a format or a mechanism or a product that’s a little bit more accessible to people who don’t have that background?

How are you learning from crypto Twitter? Do you just follow a lot of accounts, do you actually talk to them? How does that learning experience play out?

Esther Crawford: It’s a combination. We have an amazing research team that sets up panel interviews and surveys. But we’re also embedded in the community itself and follow a bunch of accounts, sit on Twitter spaces, go to conferences and events, engage with customers in that way. That’s the way the research piece of it works. But we also encounter it as end users: Twitter is the discovery platform today for all things crypto.

One of the things we do differently at Twitter is we build out in the open. And so this means having dialog with customers in real time—designers will take something that is very early-stage and post it as a tweet and then get real-time feedback. They’ll hop into spaces with product managers and engineering managers, talk about it live with real customers, and then incorporate that feedback into the designs and what ultimately we end up launching.

Rinearson: One of the things I wanted to make sure of before I came to Twitter was to know that we would be able to build features in the open and solicit feedback and show rough drafts. And so this is something I asked Parag Agrawal, who’s now the CEO, and was the person who hired me. Pretty early in the job interview process, I said this was going to be really important, and he said, “If you think it’s important to the success of this work, great, do it—thumbs up.” He also shares that openness.

As you said, Tess, you come from crypto. When you were out there, what did you think Twitter was getting right? What did you think Twitter was getting wrong?

Rinearson: I had been a Twitter power user for a really long time. The thing that I saw was a lot of aesthetic alignment between how Twitter exists in the world and the way that crypto exists in the world. Twitter has decentralized user experiences in its DNA. And, this is a bit cheesy, but people use Twitter sometimes in ways that they use a public blockchain, as a public database where everything’s time stamped and people can agree on what happened.

And for most people it’s open, it is there for public conversation. And then obviously it was also the place—a place—where the crypto community really found its footing. I think it’s been a place where an enormous amount of discovery happens, and education and learning for the whole community. I joined when there were some murmurings about Twitter starting to do crypto stuff, mostly stuff Esther had led actually, and I was excited to see where it was going. And then Twitter’s investment in Bluesky also gave me a lot of confidence.

Let’s talk about the two main things you have delivered so far: The crypto tipping feature and NFT pictures. Can you give me just a potted history of how each came about and why?

Crawford: Those are our first set of early explorations, and the reason why we started there was we really wanted to make sure that what we built benefited creators, their audiences, and then all the conversations that are happening on Twitter. For creators in particular, we know that they rely on platforms like Twitter to monetize and earn a living, and not all people are able to use traditional currencies. Not everybody has a traditional banking account setup.

And so we wanted to provide an opportunity for a borderless payment solution, and that’s why we decided to go ahead and use Bitcoin Lightning as our first big integration. One of the reasons we chose Bitcoin Lightning was also because of the low transaction fees. And we have Bitcoin and Ethereum addresses that you can also put in there [on your Twitter “tipping jar”]. We noticed that people were actually adding information about their crypto wallet addresses in their profiles. And so we wanted to make a more seamless experience, so that people could just tip through the platform, so that it felt native.

With NFT profile pictures, the way that came about was, again, looking at user behavior. People were adding NFTs that they owned as avatars, but you didn’t really know whether they owned those NFTs or not. So we decided to go ahead and build out that feature so that one could actually prove ownership.

That’s similar to how other things developed on Twitter, right? The hashtag, or even the retweet, were initially just things users invented—by adding the # sign, or by pasting other users’ tweets—and then Twitter made that a feature.

Crawford: Yeah, exactly. Many of the best ideas come from watching user behavior on the platform, and then we just productize that.

Rinearson: Sometimes I’ve heard people call that the “help wanted signs,” and like, keeping an eye out for the “help wanted signs” across the platform. The NFT profile picture was a clear example of that.

How do all these things—these two things and possibly other crypto features coming further down the line—really help Twitter’s bottom line?

Crawford: With creator monetization our goal was to help creators get paid, not Twitter. But Twitter takes a really small cut of earnings. For more successful creators, we take a larger percentage. The way we think about this is, it is part of our revenue diversification.

Twitter today is a wholly ad-based business. In the future we imagine Twitter making money from a variety of different product areas. So Twitter Blue is one of those products—you can pay $2.99 a month and you get additional features, such as the NFT profile pictures. We really think that revenue diversification sits across a variety of areas, and creator monetization is one really small component of that.

As you said, these are just early experiments. Where is Twitter Crypto going next? What’s your vision for crypto technology’s role within Twitter?

Rinearson: The real trick here is to find the right parts of Twitter to decentralize, and to not try to decentralize everything at once—or, you know, make every user suddenly responsible for taking care of some private keys or something like that.

We have to find the right ways to open up some access to a decentralized economic layer, or give people ways that they can take their identity with them, without relying on a single centralized service.

We’re really early in these explorations, and even looking at things like Bitcoin tipping or the NFT profile pictures—we view those features as experiments themselves in a lot of ways and learning experiences. We’re learning things about how our users relate to these concepts, what they understand about them, what they find confusing, and what’s most useful to them. We really want to try to use this technology to bring utility to people and you know, not just like, sprinkle a little blockchain on it for the sake of it. So creator monetization is an area that I’m really excited about because I think there’s a really clear path forward. But again, we’re looking beyond that: We’re also looking at using crypto technology in fields like [digital] identity and [digital] ownership space and also figuring out how we can better serve crypto communities on the platform.

Are you going to put Twitter verified users’ blue ticks on a blockchain, then?

[Laughter]

No?

[More laughter]

OK, moving on. How does the kind of work you do dovetail with Bluesky’s plan to create a protocol for a decentralized social media platform? Is there any synergy there?

Rinearson: I have known Jay [Graber], the Bluesky lead, for a long time, and she and I are in pretty close contact. We check in with each other regularly and talk a lot about problems we might have in common that we’ll both need to solve. There’s an overlap looking at things in the identity area, but at the end of the day, it’s a separate project. She’s pretty focused on hiring her team, and they’re very focused on building a prototype of a protocol. That is different from what Esther and I are thinking about, which is like: There are all these blockchain protocols that exist, and we need to figure out how to make them useful and accessible for real people.

And when I say “real people,” I mean that in a sort of tongue-in-cheek contrast to hardcore crypto nerds like me. Jay is thinking much more about building for people who are creating decentralized networks. That is a very different focus area. Beyond that, I would just say it’s too early to say what Bluesky will mean for Twitter as a product. We are in touch, we have aligned values. But at the end of the day—separate teams.

Why is a centralized Silicon Valley company like Twitter the right place to start to bring more decentralization to internet users? Don’t we just have to start from scratch and build a new platform that is already decentralized?

Rinearson: I started in crypto in 2015, and I have a very vivid memory from those years of watching some of my coworkers—crypto engineers—trying to figure out how to secure some of their Bitcoin like before one of the Bitcoin forks [in which the Bitcoin blockchain split, creating new currencies], and they were panicking and freaking out. I thought there was no way that a normal person would be able to handle this in a way that would be safe. And so I was a little bit disillusioned with crypto, especially from a consumer perspective.

And then last year, I started seeing more interest from people whom I’ve known for a long time and weren’t crypto people. They were just starting to perk their heads up and take notice and start creating NFTs or start talking about DAOs. And I thought that that was interesting, that we were coming around a corner, and it might be time to start thinking about what this could mean for people beyond that hardcore crypto group.

And that was when Twitter reached out. You know, I don’t think that just any centralized platform would be able to bring crypto to the masses, so to speak. But I think Twitter has the right stuff. I think you have to meet people where they are with new technologies: find ways to onboard them and bring them along, show them what this might mean for them, and make things accessible. And it’s really, really hard to do that with just a protocol. You need to have some kind of community, you need to have some kind of user base, you need to have some kind of platform. And Twitter’s just right there.

I don’t think I would say that a centralized platform is definitely the way to “bring crypto to the masses.” I do think that Twitter is the way to do it.

But why do the masses need crypto right now?

Rinearson: I don’t know that anyone  needs crypto, and our goal is not to get everyone into crypto. Let’s be clear about that. But I do think that crypto is a potentially very powerful tool for people. And so I think what we are trying to do is show people how powerful it is and unlock those possibilities. It’s also possible that we create some products and features, where people actually don’t even really know what’s happening under the hood.

Like maybe we’re using crypto as a payment rail or again as an identity layer—users don’t necessarily need to know all of those implementation details. And that’s actually something we come back to a lot: What level of abstraction are we talking about with users? What story are we telling them about what’s happening under the hood? But yeah, I would just like to reiterate that the goal is not to just shovel everyone into crypto. We want to provide value for people.

Do you think there is a case for Twitter to launch its own cryptocurrency— a Twittercoin?

Rinearson: I think there’s a case for a lot of things—honestly, there’s a case for a lot of things. We’re trying to think really, really broadly about it.

Crawford: We’re actively exploring a lot of things. It’s not something we would be making an announcement about.

Rinearson: I think it is really important to stress that when you say “Twittercoin” you probably have a slightly different idea of what it is than we do. And are we exploring those ideas? Yes, we want to think about all of them. Do we have road maps for them? No. But are we trying to think about things really creatively and be really, really open-minded? Yes. We have this new economic technology that we think could unlock a lot of things for people. And we want to go down a bunch of rabbit holes and see what we come up with.


Highlighted Text in BLUE, BLACK, GREEN, RED by Aviva Lev-Ari, PhD, RN

SOURCE

https://www.wired.com/story/twitter-crypto-strategy/

Read Full Post »

Data Science: Step by Step – A Resource for LPBI Group One-Year Internship in IT, IS, DS

Reporter: Aviva Lev-Ari, PhD, RN

9 free Harvard courses: learning Data Science

In this article, I will list 9 free Harvard courses that you can take to learn data science from scratch. Feel free to skip any of these courses if you already possess knowledge of that subject.

Step 1: Programming

The first step you should take when learning data science is to learn to code. You can choose to do this with your choice of programming language, ideally Python or R.

If you’d like to learn R, Harvard offers an introductory R course created specifically for data science learners, called Data Science: R Basics.

This program will take you through R concepts like variables, data types, vector arithmetic, and indexing. You will also learn to wrangle data with libraries like dplyr and create plots to visualize data.

If you prefer Python, you can choose to take CS50’s Introduction to Programming with Python offered for free by Harvard. In this course, you will learn concepts like functions, arguments, variables, data types, conditional statements, loops, objects, methods, and more.

Both programs above are self-paced. However, the Python course is more detailed than the R program, and requires a longer time commitment to complete. Also, the rest of the courses in this roadmap are taught in R, so it might be worth learning R to be able to follow along easily.

Step 2: Data Visualization

Visualization is one of the most powerful techniques for communicating your findings from data to another person.

With Harvard’s Data Visualization program, you will learn to build visualizations using the ggplot2 library in R, along with the principles of communicating data-driven insights.

Step 3: Probability

In this course, you will learn essential probability concepts that are fundamental to conducting statistical tests on data. The topics taught include random variables, independence, Monte Carlo simulations, expected values, standard errors, and the Central Limit Theorem.

The concepts above will be introduced with the help of a case study, which means that you will be able to apply everything you learned to an actual real-world dataset.
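To make the Monte Carlo and Central Limit Theorem ideas concrete, here is a short, self-contained Python sketch (not material from the course above; the distribution and numbers are chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n, trials = 100, 10_000          # sample size and number of simulated samples

# Draw many samples from a skewed (exponential) distribution and record each mean
sample_means = rng.exponential(scale=2.0, size=(trials, n)).mean(axis=1)

print("Expected value of the mean:", 2.0)
print("Average of simulated means:", sample_means.mean().round(3))
print("Theoretical standard error:", round(2.0 / np.sqrt(n), 3))
print("Simulated standard error:  ", sample_means.std(ddof=1).round(3))
```

Even though the underlying distribution is skewed, the simulated sample means cluster tightly around the true mean with a standard error close to sigma/sqrt(n), which is the Central Limit Theorem at work.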

Step 4: Statistics

After learning probability, you can take this course to learn the fundamentals of statistical inference and modelling.
This program will teach you to define population estimates and margins of error, introduce you to Bayesian statistics, and provide you with the fundamentals of predictive modeling.

Step 5: Productivity Tools (Optional)

I’ve included this project management course as optional since it isn’t directly related to learning data science. Rather, it teaches you to use Unix/Linux for file management, Git and GitHub for version control, and R for creating reports.

The ability to do the above will save you a lot of time and help you better manage end-to-end data science projects.

Step 6: Data Pre-Processing

The next course in this list is called Data Wrangling, and will teach you to prepare data and convert it into a format that is easily digestible by machine learning models.

You will learn to import data into R, tidy data, process string data, parse HTML, work with date-time objects, and mine text.

As a data scientist, you often need to extract data that is publicly available on the Internet in the form of a PDF document, HTML webpage, or a Tweet. You will not always be presented with clean, formatted data in a CSV file or Excel sheet.

By the end of this course, you will learn to wrangle and clean data to come up with critical insights from it.
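The course itself teaches these steps in R with the tidyverse; purely as an illustration, the following Python/pandas sketch performs the same kind of wrangling on a tiny invented dataset (the column names and values are made up):

```python
import io
import pandas as pd

# Messy input as it might arrive from a scrape or export (entirely invented)
raw = io.StringIO(
    'name, registered_on, amount\n'
    '  Alice , 2021-03-05, "$1,200"\n'
    '  bob , 2021-04-10, "$950"\n'
)

df = pd.read_csv(raw, skipinitialspace=True)

# Tidy the string column: strip stray whitespace and normalize capitalization
df["name"] = df["name"].str.strip().str.title()

# Parse the date strings into proper datetime objects
df["registered_on"] = pd.to_datetime(df["registered_on"])

# Clean the currency strings and convert them to numbers
df["amount"] = df["amount"].str.replace(r"[$,]", "", regex=True).astype(float)

print(df.dtypes)
print(df)
```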

Step 7: Linear Regression

Linear regression is a machine learning technique that is used to model a linear relationship between two or more variables. It can also be used to identify and adjust the effect of confounding variables.

This course will teach you the theory behind linear regression models, how to examine the relationship between two variables, and how confounding variables can be detected and removed before building a machine learning algorithm.
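As a rough illustration of why adjusting for a confounder matters, the following Python sketch fits a naive and an adjusted linear regression on synthetic data (the data-generating process and coefficients are invented for the example; this is not code from the course):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic example: z confounds the relationship between x and y
z = rng.normal(size=n)                        # confounder
x = 0.8 * z + rng.normal(size=n)              # exposure, partly driven by z
y = 2.0 * x + 3.0 * z + rng.normal(size=n)    # outcome depends on both

# Naive fit: y ~ x (confounded estimate of x's effect)
naive = LinearRegression().fit(x.reshape(-1, 1), y)

# Adjusted fit: y ~ x + z (confounder included)
adjusted = LinearRegression().fit(np.column_stack([x, z]), y)

print("Naive slope for x:   ", naive.coef_[0].round(2))     # biased upward
print("Adjusted slope for x:", adjusted.coef_[0].round(2))  # close to the true value 2.0
```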

Step 8: Machine Learning

Finally, the course you’ve probably been waiting for! Harvard’s machine learning program will teach you the basics of machine learning, techniques to mitigate overfitting, supervised and unsupervised modelling approaches, and recommendation systems.
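As a hedged sketch of the overfitting-mitigation idea, the scikit-learn snippet below compares an unpenalized high-degree polynomial fit with a ridge-regularized one on synthetic data (the degree, penalty, and dataset are arbitrary choices for illustration, not the course's own exercises):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=40)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# A high-degree polynomial with no penalty tends to chase noise in the training set
plain = make_pipeline(PolynomialFeatures(degree=12), LinearRegression())
plain.fit(X_train, y_train)

# The same features with an L2 penalty (ridge) are typically far better behaved
ridged = make_pipeline(PolynomialFeatures(degree=12), Ridge(alpha=1.0))
ridged.fit(X_train, y_train)

print("plain R^2 train/test:",
      round(plain.score(X_train, y_train), 2), round(plain.score(X_test, y_test), 2))
print("ridge R^2 train/test:",
      round(ridged.score(X_train, y_train), 2), round(ridged.score(X_test, y_test), 2))
```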

Step 9: Capstone Project

After completing all the above courses, you can take Harvard’s data science capstone project, where your skills in data visualization, probability, statistics, data wrangling, data organization, regression, and machine learning will be assessed.

With this final project, you will get the opportunity to put together all the knowledge learnt from the above courses and gain the ability to complete a hands-on data science project from scratch.

Note: All the courses above are available on an online learning platform from edX and can be audited for free. If you want a course certificate, however, you will have to pay for one.

Building a data science learning roadmap with free courses offered by MIT.

8 Free MIT Courses to Learn Data Science Online

I enrolled in an undergraduate computer science program and decided to major in data science. I spent over $25K in tuition fees over the span of three years, only to graduate and realize that I wasn’t equipped with the skills necessary to land a job in the field.

I barely knew how to code, and was unclear about the most basic machine learning concepts.

I took some time out to try and learn data science myself — with the help of YouTube videos, online courses, and tutorials. I realized that all of this knowledge was publicly available on the Internet and could be accessed for free.

It came as a surprise that even Ivy League universities started making many of their courses accessible to students worldwide, for little to no charge. This meant that people like me could learn these skills from some of the best institutions in the world, instead of spending thousands of dollars on a subpar degree program.

In this article, I will provide you with a data science roadmap I created using only freely available MIT online courses.

Step 1: Learn to code

I highly recommend learning a programming language before going deep into the math and theory behind data science models. Once you learn to code, you will be able to work with real-world datasets and get a feel of how predictive algorithms function.

MIT Open Courseware offers a beginner-friendly Python program called Introduction to Computer Science and Programming.

This course is designed to help people with no prior coding experience to write programs to tackle useful problems.

Step 2: Statistics

Statistics is at the core of every data science workflow — it is required when building a predictive model, analyzing trends in large amounts of data, or selecting useful features to feed into your model.

MIT Open Courseware offers a beginner-friendly course called Introduction to Probability and Statistics. After taking this course, you will learn the basic principles of statistical inference and probability. Some concepts covered include conditional probability, Bayes theorem, covariance, central limit theorem, resampling, and linear regression.

This course will also walk you through statistical analysis using the R programming language, which is useful as it adds on to your tool stack as a data scientist.

Another useful program offered by MIT for free is called Statistical Thinking and Data Analysis. This is another elementary course in the subject that will take you through different data analysis techniques in Excel, R, and Matlab.

You will learn about data collection, analysis, different types of sampling distributions, statistical inference, linear regression, multiple linear regression, and nonparametric statistical methods.
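As an informal illustration of the resampling idea mentioned above, the short Python sketch below bootstraps a confidence interval for a mean on invented data (this is not MIT course code):

```python
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(loc=50, scale=10, size=200)   # pretend these are observed measurements

# Bootstrap: resample the data with replacement many times and recompute the mean
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(5_000)
])

low, high = np.percentile(boot_means, [2.5, 97.5])
print(f"Sample mean: {data.mean():.2f}")
print(f"95% bootstrap confidence interval: ({low:.2f}, {high:.2f})")
```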

Step 3: Foundational Math Skills

Calculus and linear algebra are two other branches of math that are used in the field of machine learning. Taking a course or two in these subjects will give you a different perspective on how predictive models function and the workings of the underlying algorithms.

To learn calculus, you can take Single Variable Calculus offered by MIT for free, followed by Multivariable Calculus.

Then, you can take this Linear Algebra class by Prof. Gilbert Strang to get a strong grasp of the subject.

All of the above courses are offered by MIT Open Courseware, and are paired with lecture notes, problem sets, exam questions, and solutions.
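To connect the linear algebra to the machine learning that follows, here is a small, purely illustrative NumPy sketch that recovers regression coefficients by solving the normal equations, the kind of computation these courses prepare you for:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # design matrix with intercept
true_beta = np.array([1.0, -2.0, 0.5])
y = X @ true_beta + rng.normal(scale=0.1, size=n)

# Solve the normal equations (X^T X) beta = X^T y directly
beta_normal = np.linalg.solve(X.T @ X, X.T @ y)

# Cross-check with NumPy's least-squares routine (a more stable SVD-based approach)
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print("Normal equations:", beta_normal.round(3))
print("np.linalg.lstsq: ", beta_lstsq.round(3))
```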

Step 4: Machine Learning

Finally, you can use the knowledge gained in the courses above to take MIT’s Introduction to Machine Learning course. This program will walk you through the implementation of predictive models in Python.

The core focus of this course is on supervised and reinforcement learning problems, and you will be taught concepts such as generalization and how overfitting can be mitigated. Apart from working with structured datasets, you will also learn to process image and sequential data.

MIT’s machine learning program cites three pre-requisites — Python, linear algebra, and calculus, which is why it is advisable to take the courses above before starting this one.

Are These Courses Beginner-Friendly?

Even if you have no prior knowledge of programming, statistics, or mathematics, you can take all the courses listed above.

MIT has designed these programs to take you through the subject from scratch. However, unlike many MOOCs out there, the pace builds up pretty quickly and the courses go into considerable depth.

Due to this, it is advisable to do all the exercises that come with the lectures and work through all the reading material provided.

SOURCE

Natassha Selvaraj is a self-taught data scientist with a passion for writing. You can connect with her on LinkedIn.

https://www.kdnuggets.com/2022/03/8-free-mit-courses-learn-data-science-online.html

Read Full Post »

From the journal Nature: NFT, Patents, and Intellectual Property: Potential Design

Reporter: Stephen J. Williams, Ph.D.

 

From the journal Nature

Source: https://www.nature.com/articles/s41598-022-05920-6

Patents and intellectual property assets as non-fungible tokens; key technologies and challenges

Scientific Reports volume 12, Article number: 2178 (2022)

Abstract

With the explosive development of decentralized finance, we witness a phenomenal growth in tokenization of all kinds of assets, including equity, funds, debt, and real estate. By taking advantage of blockchain technology, digital assets are broadly grouped into fungible and non-fungible tokens (NFT). Here non-fungible tokens refer to those with unique and non-substitutable properties. NFT has widely attracted attention, and its protocols, standards, and applications are developing exponentially. It has been successfully applied to digital fantasy artwork, games, collectibles, etc. However, there is a lack of research in utilizing NFT in issues such as Intellectual Property. Applying for a patent and trademark is not only a time-consuming and lengthy process but also costly. NFT has considerable potential in the intellectual property domain. It can promote transparency and liquidity and open the market to innovators who aim to commercialize their inventions efficiently. The main objective of this paper is to examine the requirements of presenting intellectual property assets, specifically patents, as NFTs. Hence, we offer a layered conceptual NFT-based patent framework. Furthermore, a series of open challenges about NFT-based patents and the possible future directions are highlighted. The proposed framework provides fundamental elements and guidance for businesses in taking advantage of NFTs in real-world problems such as grant patents, funding, biotechnology, and so forth.

Introduction

Distributed ledger technologies (DLTs) such as blockchain are emerging technologies posing a threat to existing business models. Traditionally, most companies used centralized authorities in various aspects of their business, such as financial operations and establishing trust with their counterparts. With the emergence of blockchain, centralized organizations can be substituted with a decentralized group of resources and actors. The blockchain mechanism was introduced in the Bitcoin white paper in 2008, and it lets users generate transactions and spend their money without the intervention of banks [1]. Ethereum, a second generation of blockchain, was introduced in 2014, allowing developers to run smart contracts on a distributed ledger. With smart contracts, developers and businesses can create financial applications that use cryptocurrencies and other forms of tokens for applications such as decentralized finance (DeFi), crowdfunding, decentralized exchanges, data record keeping, etc. [2]. Recent advances in distributed ledger technology have developed concepts that lead to cost reduction and the simplification of value exchange. Nowadays, by leveraging the advantages of blockchain and taking governance issues into account, digital assets can be represented as tokens that exist on the blockchain network, which facilitates their transmission and traceability, increases their transparency, and improves their security [3].

In the landscape of blockchain technology, two types of tokens can be defined: fungible tokens, in which all tokens have equal value, and non-fungible tokens (NFTs), which feature unique characteristics and are not interchangeable. Non-fungible tokens are digital assets with a unique identifier that is stored on a blockchain [4]. NFTs were initially suggested in Ethereum Improvement Proposal (EIP) 721 [5] and later expanded in EIP-1155 [6]. NFTs became one of the most widespread applications of blockchain technology, reaching worldwide attention in early 2021. They can be digital representations of real-world objects. NFTs are tradable rights to digital assets (pictures, music, films, and virtual creations) where ownership is recorded in blockchain smart contracts [7].

In particular, fungibility, the ability to exchange one item for another of the same kind, is an essential feature of currency. A non-fungible token is unique and therefore cannot be substituted [8]. Recently, blockchain enthusiasts have shown significant interest in various types of NFTs, enthusiastically participating in NFT-related games and trades. CryptoPunks [9], one of the first NFT projects on Ethereum, generated almost 10,000 collectible punks and helped popularize the ERC-721 standard. With the gamification of its breeding mechanics, CryptoKitties [10] officially placed NFTs at the forefront of the market in 2017. CryptoKitties is an early blockchain game that enables users to buy, sell, collect, and digitally breed cats. Another example is NBA Top Shot [11], an NFT trading platform for buying and selling digital short films of NBA events.

NFTs are developing remarkably and have provided many applications such as artist royalties, in-game assets, educational certificates, etc. However, it is a relatively new concept, and many areas of application need to be explored. Intellectual Property, including patent, trademark, and copyright, is an important area where NFTs can be applied usefully and solve existing problems.

Although NFTs have had many applications so far, they have rarely been used to solve real-world problems. Yet NFTs are an exciting prospect for Intellectual Property (IP). Applying for a patent or trademark is not only a time-consuming and lengthy process but also a costly one: registering a copyright or trademark may take months, while securing a patent can take years. With the help of the unique features of NFT technology, it is possible to accelerate this process with considerable confidence and assurance about protecting ownership of an IP. NFTs can offer IP protection while an applicant waits for the government to grant more formal protection. Proponents believe NFTs and blockchain would make buying and selling patents easier, offering new opportunities for companies, universities, and inventors to make money off their innovations [12]. Patent holders would benefit from such innovation: it would give them the ability to ‘tokenize’ their patents, and because every transaction would be logged on a blockchain, it would be much easier to trace changes in patent ownership. NFTs would also facilitate revenue generation from patents by democratizing patent licensing. NFTs support the intellectual property market by embedding automatic royalty-collecting methods inside inventors’ works, providing them with financial benefits anytime their innovation is licensed. For example, each inventor’s patent would be minted as an NFT, and these NFTs could be joined together to form a commercial IP portfolio and minted as a compounded NFT. Each investor would automatically get their fair share of royalties whenever licensing revenue is generated, without anyone having to track them down.

In [13], the authors provide an overview of NFTs’ applications in different areas such as gambling, games, and collectibles. In addition, [4] provides a prototype for an event-tracking application based on an Ethereum smart contract, and NFTs as a solution for art and real estate auction systems are described in [14]. However, these studies have not discussed existing standards or a generalized architecture enabling NFTs to be applied in diverse applications. For example, the authors in [15] provide two general design patterns for creating and trading NFTs and discuss existing token standards for NFTs. However, the proposed designs are limited to Ethereum, and other blockchains are not considered [16]. Moreover, different technologies for each step of the proposed procedure are not discussed. In [8], the authors provide a conceptual framework for token design and management and discuss five views: token view, wallet view, transaction view, user interface view, and protocol view. However, no research provides a generalized conceptual framework for generating, recording, and tracing NFT-based IP in a blockchain network.

Even with the clear benefits that NFT-backed patents offer, there are a number of impediments to actually achieving such a system. For example, convincing patent owners to put current ownership records for their patents into NFTs poses an initial obstacle. Because there is no reliable framework for NFT-based patents, this paper provides a conceptual framework for presenting NFT-based patents with a comprehensive discussion on many aspects, ranging from the background, model components, token standards to application domains and research challenges. The main objective of this paper is to provide a layered conceptual NFT-based patent framework that can be used to register patents in a decentralized, tamper-proof, and trustworthy peer-to-peer network to trade and exchange them in the worldwide market. The main contributions of this paper are highlighted as follows:

  • Providing a comprehensive overview on tokenization of IP assets to create unique digital tokens.
  • Discussing the components of a distributed and trustworthy framework for minting NFT-based patents.
  • Highlighting a series of open challenges of NFT-based patents and enlightening the possible future trends.

The rest of the paper is structured as follows: “Background” section describes the Background of NFTs, Non-Fungible Token Standards. The NFT-based patent framework is described in “NFT-based patent framework” section. The Discussion and challenges are presented in “Discussion” section. Lastly, conclusions are given in “Conclusion” section.

Background

Colored Coins, built on top of the Bitcoin network, could be considered the first step toward NFTs. Bitcoins are fungible, but it is possible to mark them so that they are distinguishable from other bitcoins. These marked coins have special properties representing real-world assets like cars and stocks, and owners can prove ownership of physical assets through the colored coins. By utilizing Colored Coins, users can transfer ownership of their marked coins like a usual transaction and benefit from Bitcoin’s decentralized network [17]. Colored Coins had limited functionality due to the limitations of Bitcoin script. Pepe is a green frog meme created by Matt Furie; users define tokens for Pepes and trade them through the Counterparty platform, and newly created Pepe tokens are assessed for whether they are rare enough. Rare Pepe allows users to preserve scarcity, manage ownership, and transfer their purchased Pepes.

In 2017, Larva Labs developed the first Ethereum-based NFT project, named CryptoPunks. It contains 10,000 unique human-like characters generated randomly. The official ownership of each character is stored in an Ethereum smart contract, and owners can trade characters. The CryptoPunks project inspired CryptoKitties. Launched in late 2017, CryptoKitties drew attention to NFTs and is a pioneer in blockchain games. CryptoKitties is a blockchain-based virtual game in which users collect and trade characters with unique features that shape kitties. The game was developed as an Ethereum smart contract, and it pioneered the ERC-721 token, the first standard token for NFTs on the Ethereum blockchain. After the 2017 hype around NFTs, many projects started in this context. Due to increased attention to NFT use-cases and a growing market cap, different blockchains like EOS, Algorand, and Tezos started to support NFTs, and various marketplaces like SuperRare, Rarible, and OpenSea were developed to help users trade NFTs. As mentioned, assets are generally categorized into two main classes: fungible and non-fungible assets. Fungible assets are those that another similar asset can replace. Fungible items have two main characteristics: replicability and divisibility.

Currency is a fungible item because a ten-dollar bill can be exchanged for another ten-dollar bill or divided into ten one-dollar bills. Unlike fungible items, non-fungible items are unique and distinguishable: they cannot be divided or exchanged for another identical item. The first tweet on Twitter is a non-fungible item with these characteristics; another tweet cannot replace it, and it is unique and not divisible. An NFT is a non-fungible cryptographic asset that is declared in a standard token format and has a unique set of attributes. Because of the transparency, proof of ownership, and traceable transactions of blockchain networks, NFTs are created using blockchain technology.

Blockchain-based NFTs help enthusiasts create NFTs in a standard token format on the blockchain, transfer ownership of their NFTs to a buyer, assure the uniqueness of NFTs, and manage NFTs completely. In addition, there are semi-fungible tokens that have characteristics of both fungible and non-fungible tokens. Semi-fungible tokens are fungible within the same class or specific time and non-fungible across other classes or different times. A plane ticket can be considered a semi-fungible token because a charter ticket can be exchanged for another charter ticket but cannot be exchanged for a first-class ticket. The concept of semi-fungible tokens plays a main role in blockchain-based games and reduces NFT overhead. In Fig. 1, we illustrate fungible, non-fungible, and semi-fungible tokens. The main properties of NFTs are described as follows [15]:

Figure 1: Fungible, non-fungible, and semi-fungible tokens.

  • Ownership: Because of the blockchain layer, the owner of an NFT can easily prove the right of possession with his/her keys. Other nodes can verify the user’s ownership publicly.

  • Transferable: Users can freely transfer ownership of their NFTs to others on dedicated markets.
  • Transparency: By using blockchain, all transactions are transparent, and every node in the network can confirm and trace the trades.
  • Fraud Prevention: Fraud is one of the key problems in trading assets; hence, using NFTs ensures buyers buy a non-counterfeit item.
  • Immutability: Metadata, token ID, and history of transactions of NFTs are recorded in a distributed ledger, and it is impossible to change the information of the purchased NFTs.

Non-fungible standards

The Ethereum blockchain pioneered the implementation of NFTs. The ERC-721 token was the first standard token accepted in the Ethereum network. With the increase in popularity of NFTs, developers started developing and enhancing NFT standards on different blockchains like EOS, Algorand, and Tezos. This section provides a review of the NFT standards implemented on these blockchains.

Ethereum

ERC-721 was the first standard for NFTs developed on Ethereum; it is free and open source. ERC-721 is an interface that a smart contract should implement to have the ability to transfer and manage NFTs. Each ERC-721 token has unique properties and a different token ID. ERC-721 tokens include the owner’s information, a list of approved addresses, a transfer function that implements transferring tokens from owner to buyer, and other useful functions [5].

In ERC-721, smart contracts can group tokens with the same configuration, but each token has different properties, so ERC-721 does not support fungible tokens. ERC-1155 is another standard on Ethereum, developed by Enjin, with richer functionality than ERC-721: it supports fungible, non-fungible, and semi-fungible tokens. In ERC-1155, IDs define the class of assets, so different IDs correspond to different classes of assets, and each ID may contain different assets of the same class. Using ERC-1155, a user can transfer different types of tokens in a single transaction and mix multiple fungible and non-fungible types of tokens in a single smart contract [6]. ERC-721 and ERC-1155 both support operators, in which the owner can let an operator initiate transfers of the token. A conceptual sketch of this bookkeeping follows.
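The sketch below is not Solidity and not the ERC-721 reference implementation; it is a minimal, in-memory Python model of the bookkeeping the standard describes (per-token ownership, single-token approvals, and operators), intended only to make the interface above concrete:

```python
class SimpleNFTLedger:
    """Minimal, in-memory model of ERC-721-style bookkeeping (illustrative only)."""

    def __init__(self):
        self.owner_of = {}        # token_id -> owner address
        self.approved = {}        # token_id -> address approved for that single token
        self.operators = set()    # (owner, operator) pairs approved for all of owner's tokens

    def mint(self, to: str, token_id: int):
        assert token_id not in self.owner_of, "token already exists"
        self.owner_of[token_id] = to

    def approve(self, owner: str, spender: str, token_id: int):
        assert self.owner_of[token_id] == owner, "only the owner can approve"
        self.approved[token_id] = spender

    def set_approval_for_all(self, owner: str, operator: str):
        self.operators.add((owner, operator))

    def transfer_from(self, caller: str, frm: str, to: str, token_id: int):
        owner = self.owner_of[token_id]
        allowed = (
            caller == owner
            or self.approved.get(token_id) == caller
            or (owner, caller) in self.operators
        )
        assert owner == frm and allowed, "caller is not authorized to transfer"
        self.owner_of[token_id] = to
        self.approved.pop(token_id, None)   # single-token approvals are cleared on transfer


ledger = SimpleNFTLedger()
ledger.mint("0xInventor", token_id=1)
ledger.set_approval_for_all("0xInventor", "0xMarketplace")   # operator, as in ERC-721/1155
ledger.transfer_from("0xMarketplace", "0xInventor", "0xBuyer", token_id=1)
print(ledger.owner_of[1])   # 0xBuyer
```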

EOSIO

EOSIO is an open-source blockchain platform released in 2018 that claims to eliminate transaction fees and increase transaction throughput. EOSIO differs from Ethereum in its wallet-creation algorithm and its procedure for handling transactions. dGoods is a free standard developed on the EOS blockchain for assets, and it focuses on large-scale use cases. It supports a hierarchical naming structure in smart contracts. Each contract has a unique symbol and a list of categories, and each category contains a list of token names. Therefore, a single contract in dGoods can contain many tokens, which makes transferring a group of tokens efficient. Using this hierarchy, dGoods supports fungible, non-fungible, and semi-fungible tokens. It also supports batch transferring, where the owner can transfer many tokens in one operation [18].

Algorand

Algorand is a new high-performance public blockchain launched in 2019. It provides scalability while maintaining security and decentralization, and it supports smart contracts and tokens for representing assets [19]. Algorand defines the Algorand Standard Assets (ASA) concept to create and manage assets on the Algorand blockchain. Using ASA, users are able to define fungible and non-fungible tokens. In Algorand, users can create NFTs or FTs without writing smart contracts; they need only run a single transaction on the Algorand blockchain. Each transaction contains some mutable and immutable properties [20].

Each account in Algorand can create up to 1000 assets, and for every asset an account creates or receives, the minimum balance of the account increases by 0.1 Algos. Algorand also supports fractional NFTs by splitting an NFT into a group of divided FTs or NFTs, each of which can be exchanged independently [21]. Algorand uses a Clawback Address that operates like an operator in ERC-1155; it is allowed to transfer tokens of an owner who has permitted the operator.

Tezos

Tezos is another decentralized open-source blockchain. Tezos supports the meta-consensus concept: in addition to using a consensus protocol on the ledger’s state like Bitcoin and Ethereum, it also attempts to reach consensus about how nodes and the protocol should change or upgrade [22]. FA2 (TZIP-12) is a standard for a unified token contract interface on the Tezos blockchain. FA2 supports different token types, including fungible, non-fungible, and fractionalized NFT contracts. In Tezos, tokens are identified by a token contract address and token ID pair. Tezos also supports batch token transfers, which reduces the cost of transferring multiple tokens.

Flow

Flow was developed by Dapper Labs to remove the scalability limitation of the Ethereum blockchain. Flow is a fast and decentralized blockchain that focuses on games and digital collectibles. It improves throughput and scalability without sharding, due to its architecture. Flow supports smart contracts using Cadence, a resource-oriented programming language. In Cadence, NFTs can be described as resources with a unique ID. Resources have important rules for ownership management: a resource has just one owner and cannot be copied or lost. These features reassure the NFT owner. NFTs’ metadata, including images and documents, can be stored off-chain or on-chain in Flow. In addition, Flow defines a Collection concept, in which each collection is an NFT resource that can include a list of resources. It is a dictionary in which the key is the resource ID and the value is the corresponding NFT.

The collection concept provides batch transferring of NFTs. Besides, users can define an NFT for an FT. For instance, in CryptoKitties, a unique cat as an NFT can own a unique hat (another NFT). Flow uses Cadence’s second layer of access control to allow some operators to access some fields of an NFT [23]. In Table 1, we provide a comparison of the standards described above. They are compared on support for fungible tokens, non-fungible tokens, batch transferring (the owner can transfer multiple tokens in one operation), operator support (the owner can approve an operator to initiate token transfers), and fractionalized NFTs (an NFT can be divided into different tokens, each exchanged independently).
Table 1: Comparing NFT standards.


NFT-based patent framework

In this section, we propose a framework for presenting NFT-based patents. We describe details of the proposed distributed and trustworthy framework for minting NFT-based patents, as shown in Fig. 2. The proposed framework includes five main layers: Storage Layer, Authentication Layer, Verification Layer, Blockchain Layer, and Application Layer. Details of each layer and the general concepts are presented as follows.

Figure 2: The proposed NFT-based patent framework and its five layers.

Storage layer

The continuous rise of data in blockchain technology is moving various information systems towards the use of decentralized storage networks. Decentralized storage networks were created to provide more benefits to the technological world [24]. Some of the benefits of using decentralized storage systems are: (1) cost savings are achieved by making optimal use of current storage; (2) multiple copies are kept on various nodes, avoiding bottlenecks on central servers and speeding up downloads. This foundation layer implicitly provides the infrastructure required for storage. The items on NFT platforms have unique characteristics that must be included for identification.

Non-fungible token metadata provides information that describes a particular token ID. NFT metadata is represented either on-chain or off-chain. On-chain means direct incorporation of the metadata into the NFT’s smart contract, which represents the tokens. Off-chain storage means hosting the metadata separately [25].

Blockchains provide decentralization but are expensive for data storage and never allow data to be removed. For example, because of the Ethereum blockchain’s current storage limits and high maintenance costs, many projects’ metadata is maintained off-chain. Developers utilize the ERC721 Standard, which features a method known as tokenURI. This method is implemented to let applications know the location of the metadata for a specific item. Currently, there are three solutions for off-chain storage, including InterPlanetary File System (IPFS), Pinata, and Filecoin.

IPFS

InterPlanetary File System (IPFS) is a peer-to-peer hypermedia protocol for decentralized media content storage. Because of the high cost of storing media files related to NFTs on a blockchain, IPFS can be the most affordable and efficient solution. IPFS combines multiple technologies inspired by Git and BitTorrent, such as the Block Exchange System, Distributed Hash Tables (DHT), and the Version Control System [26]. On a peer-to-peer network, the DHT is used to coordinate and maintain metadata.

In other words, the hash values must be mapped to the objects they represent. When storing an object like a file, IPFS generates a hash value that starts with the prefix Qm and acts as a reference to that specific item. Objects larger than 256 KB are divided into smaller blocks of up to 256 KB, and a hash tree is used to interconnect all the blocks that are part of the same object. IPFS uses the Kademlia DHT. The Block Exchange System, or BitSwap, is a BitTorrent-inspired system used to exchange blocks. It is possible to use asymmetric encryption to prevent unauthorized access to content stored on IPFS [27].

Pinata

Pinata is a popular platform for managing and uploading files on IPFS. It provides secure and verifiable files for NFTs. Most NFTs store their data off-chain, with the NFT on the blockchain pointing to a URL for that data. The main problem here is that the content behind that URL can change.

This means that an NFT supposed to describe a certain patent could be changed without anyone knowing, which defeats the purpose of the NFT in the first place. This is where Pinata comes in handy. Pinata uses IPFS to create content-addressable hashes of data, also known as Content Identifiers (CIDs). These CIDs serve as both a way of retrieving data and a means of ensuring data validity. Those looking to retrieve data simply ask the IPFS network for the data associated with a certain CID, and if any node on the network contains that data, it is returned to the requester. The data is automatically rehashed on the requester’s computer when it is retrieved, to make sure that it matches the original CID they asked for. This process ensures the data received is exactly what was asked for; if a malicious node attempts to send fake data, the resulting CID on the requester’s end will be different, alerting the requester that they are receiving incorrect data [28].
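As a simplified illustration of this rehash check: real IPFS CIDs are multihash/multibase encoded and large files are chunked into blocks, but the plain SHA-256 stand-in below shows why content addressing makes tampering detectable:

```python
import hashlib

def fake_cid(data: bytes) -> str:
    """Simplified stand-in for an IPFS CID: a plain SHA-256 digest of the content.
    Real CIDs are multihash/multibase encoded and large files are split into blocks."""
    return hashlib.sha256(data).hexdigest()

metadata = b'{"name": "Patent #123", "inventor": "A. Inventor"}'
cid = fake_cid(metadata)             # what the NFT would reference on-chain

# Later, a requester fetches the bytes and re-hashes them to validate the content
received = metadata                  # an honest node returns the original bytes
assert fake_cid(received) == cid     # matches: the data is exactly what was asked for

tampered = received.replace(b"A. Inventor", b"Someone Else")
print(fake_cid(tampered) == cid)     # False: any change yields a different address
```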

Filecoin

Another decentralized storage network is Filecoin. It is built on top of IPFS and is designed to store the most important data, such as media files. Truffle Suite has also launched an NFT Development Template with Filecoin Box. NFT.Storage (free decentralized storage for NFTs) [29] allows users to easily and securely store their NFT content and metadata using IPFS and Filecoin. NFT.Storage is a service backed by Protocol Labs and Pinata specifically for storing NFT data. Through content addressing and decentralized storage, NFT.Storage allows developers to protect their NFT assets and associated metadata, ensuring that all NFTs follow best practices to stay accessible for the long term. NFT.Storage makes it frictionless to mint NFTs following best practices through resilient persistence on IPFS and Filecoin, and it lets developers store NFT data on decentralized networks quickly, safely, and for free. Anyone can leverage the power of IPFS and Filecoin to ensure the persistence of their NFTs. The details of this system are stated as follows [30]:

Content addressing

Once users upload data to NFT.Storage, they receive a CID, which is an IPFS hash of the content. CIDs are the data’s unique fingerprints: universal addresses that can be used to refer to the data regardless of how or where it is stored. Using CIDs to reference NFT data avoids problems such as weak links and “rug pulls,” since CIDs are generated from the content itself.

Provable storage

NFT.Storage uses Filecoin for long-term decentralized data storage. Filecoin uses cryptographic proofs to assure the NFT data’s durability and persistence over time.

Resilient retrieval

Data stored via IPFS and Filecoin can be fetched directly in the browser via any public IPFS gateway.

Authentication Layer

The second layer is the authentication layer, whose functions we briefly highlight in this section. The Decentralized Identity (DID) approach assists users in collecting credentials from a variety of issuers, such as the government, educational institutions, or employers, and saving them in a digital wallet. The verifier then uses these credentials to verify a person’s validity by using a blockchain-based ledger to follow the “identity and access management (IAM)” process. Therefore, DID allows users to be in control of their identity. A lack of NFT verifiability also causes intellectual property and copyright infringements; of course, the chain of custody may be traced back to the creator’s public address to check whether a similar patent was filed using that address. However, there is no quick and foolproof way to check an NFT creator’s legitimacy. Without such verification built into the NFT, an NFT proves ownership only over that NFT itself and nothing more.

Self-sovereign identity (SSI) [31] is a solution to this problem. SSI is a new series of standards that will guide a new identity architecture for the Internet. With a focus on privacy, security, and interoperability, SSI applications use public-key cryptography with public blockchains to generate persistent identities for people, with private and selective information disclosure. Blockchain technology offers a solution to establish trust and transparency and to provide a secure and publicly verifiable KYC (Know Your Customer) process. The blockchain architecture allows information from various service providers to be collected into a single cryptographically secure and unchanging database that does not need a third party to verify the authenticity of the information.

The proposed platform generates patent-related smart contracts that act as programs running on the blockchain to receive and send transactions. They are unalterable and privately identify clients through a thorough KYC process. After KYC approval, an NFT is minted on the blockchain as a certificate of verification [32]. This article uses a decentralized authentication solution at this layer. This solution has been used for various applications in the field of blockchain (e.g., smart cities, the Internet of Things) [33, 34], but we use it here for the proposed framework (patents as NFTs). Details of this solution are presented in the following.

Decentralized authentication

This section presents an authentication layer, similar to [35], to build validated communication in a secure and decentralized manner via blockchain technology. As shown in Fig. 3, the authentication protocol comprises two processes: registration and login.

Figure 3: The authentication protocol, comprising registration and login.
Registration

In the registration process of a suggested authentication protocol, we first initialize a user’s public key as their identity key (UserName). Then, we upload this identity key on a blockchain, in which transactions can be verified later by other users. Finally, the user generates an identity transaction.

Login

After registration, a user logs in to the system. The login process is described as follows:

  • 1. The user commits identity information and imports their secret key into the service application to log in.
  • 2. A user who needs to log in sends a login request to the network’s service provider.
  • 3. The service provider analyzes the login request, extracts the hash, queries the blockchain, and obtains identity information from an identity list (identity transactions).
  • 4. The service provider responds with an authentication request when the above process is completed. A timestamp (to avoid a replay attack), the user’s UserName, and a signature are all included in the authentication request.
  • 5. The user creates a signature over five parameters: the timestamp, the user’s UserName and PK, and the service provider’s UserName and PK. This signature is used as the user’s authentication credential (a minimal signing sketch follows this list).
  • 6. The service provider verifies the received information, and if the received information is valid, the authentication succeeds; otherwise, the authentication fails, and the user’s login is denied.
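A minimal sketch of steps 5 and 6, assuming Ed25519 signatures from the third-party Python cryptography package and omitting the blockchain lookup of the identity key; the message layout is illustrative only:

```python
import time
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def pub_hex(private_key: Ed25519PrivateKey) -> str:
    """Raw public key bytes, hex-encoded; used here as the identity key (UserName)."""
    raw = private_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    return raw.hex()

# Registration: each party's public key acts as their identity key
user_key = Ed25519PrivateKey.generate()
sp_key = Ed25519PrivateKey.generate()            # service provider's key pair

# Step 5: the user signs the timestamp plus both parties' identity keys
timestamp = str(int(time.time()))                # included to resist replay attacks
message = "|".join([timestamp, pub_hex(user_key), pub_hex(sp_key)]).encode()
credential = user_key.sign(message)

# Step 6: the service provider verifies the credential against the user's identity key
user_key.public_key().verify(credential, message)   # raises InvalidSignature if forged
print("authentication succeeded")
```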

The World Intellectual Property Organization (WIPO) and multiple target patent offices in various nations or regions should assess a patent application, resulting in inefficiency, high costs, and uncertainty. This study presented a conceptual NFT-based patent framework for issuing, validating, and sharing patent certificates. The platform aims to support counterfeit protection as well as secure access and management of certificates according to the needs of learners, companies, education institutions, and certification authorities.

Here, the certification authority (CA) is used to authenticate patent offices. The procedure will first validate a patent if it is provided with a digital certificate that meets the X.509 standard. Certificate authorities are introduced into the system to authenticate both the nodes and clients connected to the blockchain network.

Verification layer

In permissioned blockchains, only identified nodes can read and write in the distributed ledger. Nodes can act in different roles and have various permissions. Therefore, a distributed system can be designed in which the identified nodes are patent-granting offices. Here the system is described conceptually at a high level. Figure 4 illustrates the sequence diagram of this layer. This layer includes four levels, as described below:

Figure 4: Sequence diagram of the verification layer.

Digitalization

For a patent to be published as an NFT on the blockchain, it must be in a digitalized format. This level is the “filing step” of traditional patent registration. An application could be designed in the application layer to allow users to enter the different patent information online.

Recording

Patents provide valuable information and can bring financial benefits to their owner. If they are publicly published in a blockchain network, miners may refuse the patent and take the innovation for themselves; at the very least, this can weaken consensus reliability and encourage miners to misbehave. To prevent this, the inventor should first record the innovation privately using proof of existence. The inventor generates the hash of the patent document and records it in the blockchain. As soon as it is recorded, the timestamp and the hash are publicly available. The inventor can then prove the existence of the patent document whenever it is needed.

Furthermore, using methods like Decision Thinking [36], an inventor can record each phase of patent development separately. In each stage, the user generates the hash of the finished part and publishes that hash coupled to the previous part’s hash. The result is a linked series of hashes that documents the patent’s development, and the inventor can prove the existence of each phase using the original related documents. This level prevents others from abusing the patent and taking it for themselves. The inventor can be sure that the patent document is recorded confidentially and immutably [37].

Different hash algorithms exist, with different architectures, time complexities, and security considerations. Hash functions should satisfy two main requirements:

  • Pre-image resistance: it should be computationally hard to find the input of a hash function while the output and the hash algorithm are known publicly.
  • Collision resistance: it should be computationally hard to find two arbitrary inputs, x and y, that have the same hash output.

These requirements are vital for recording patents. First, the hash function should be pre-image resistant to make it impossible for others to recover the patent documentation; otherwise, everybody could read the patent even before its official publication. Second, the hash function should be collision resistant to preclude users from changing their document after recording; otherwise, users could upload one document and later replace it with another.

There are various hash algorithms, and the MD and SHA families are the most widely used. According to [38], collisions have been found for the MD2, MD4, MD5, SHA-0, and SHA-1 hash functions, so they are not a good choice for recording patents. The SHA-2 hash algorithm is secure, and no collision has been found. Although SHA-2 is noticeably slower than earlier hash algorithms, the recording phase is not highly time-sensitive, so it is a better choice and provides excellent security for users. A minimal sketch of such a linked, hash-based record is shown below.
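A minimal Python sketch of this linked, proof-of-existence record, using SHA-256 in line with the SHA-2 recommendation above; the phase documents and field names are invented for illustration:

```python
import hashlib
import json
import time

def record_phase(document: bytes, previous_hash: str) -> dict:
    """Hash one phase of the patent document, coupled to the previous phase's hash.
    Only this record (not the document itself) would be published on-chain."""
    digest = hashlib.sha256(previous_hash.encode() + document).hexdigest()
    return {"previous": previous_hash, "hash": digest, "timestamp": int(time.time())}

chain = []
prev = "0" * 64                                   # genesis value for the first phase
for phase_doc in [b"initial idea and claims draft",
                  b"experimental results appended",
                  b"final specification and figures"]:
    record = record_phase(phase_doc, prev)
    chain.append(record)
    prev = record["hash"]

print(json.dumps(chain, indent=2))

# Later, the inventor can prove existence of phase 2 by revealing the original
# document; anyone can recompute the hash and match it against the public record.
claimed = hashlib.sha256(chain[0]["hash"].encode() + b"experimental results appended").hexdigest()
print(claimed == chain[1]["hash"])   # True
```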

Validating

In this phase, the inventors first create NFTs for their patents and publish them to the miners/validators. Miners are identified nodes that validate NFTs for recording in the blockchain. Because patent validation requires specialized expertise, miners cannot be inexpert members of the public. In addition, there are too few patent offices to make the network fully decentralized. Therefore, the miners can be specialists certified by the patent offices. They should receive a digital certificate from a patent office showing their eligibility to referee a patent.

Digital certificate

Digital certificates are digital credentials used to verify networked entities’ online identities. They usually include a public key as well as the owner’s identification. They are issued by Certification Authorities (CAs), which must verify the certificate holder’s identity. Certificates contain cryptographic keys for signing, encryption, and decryption. X.509 is a standard that defines the format of public-key certificates signed by a certificate authority. The X.509 standard has multiple fields, and its structure is shown in Fig. 5 [39]:

  • Version: indicates the version of the X.509 standard. X.509 has multiple versions, each with a different structure; validators can choose their desired version according to the CA.
  • Serial Number: distinguishes a certificate from other certificates; each certificate has a unique serial number.
  • Signature Algorithm Identifier: indicates the cryptographic algorithm used by the certificate authority.
  • Issuer Name: the issuer’s name, which is generally the certificate authority.
  • Validity Period: each certificate is valid for a defined period. This limited period partly protects certificates against exposure of the CA’s private key.
  • Subject Name: the name of the requester; in our proposed framework, the validator’s name.
  • Subject Public Key Info: the public key of the subject to whom the certificate was issued.

These fields are identical among all versions of the X.509 standard.

Figure 5: Structure of the X.509 certificate standard.

Certificate authority

A Certificate Authority (CA) issues digital certificates. CAs sign each certificate with their private key, which is not public, and others can verify the certificate using the CA’s public key.

Here, the patent office creates a certificate for requested patent referees. The patent office writes the validator’s information into the certificate and signs it with the patent office’s private key. The validator can use the certificate to assure others of their eligibility, and other nodes can check the requesting node’s information by verifying the certificate with the public key of the patent office. Therefore, persons can join the network’s miners/validators using their credentials. In this phase, miners perform formal examinations, prior-art research, and substantive examinations, and vote to grant or refuse the patent.
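As a toy illustration of this issuance flow, assuming the third-party Python cryptography package, the sketch below has a hypothetical “Patent Office CA” sign a validator certificate and then verifies it with the CA’s public key; names and validity periods are invented:

```python
from datetime import datetime, timedelta
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

ca_key = ec.generate_private_key(ec.SECP256R1())          # patent office's private key
validator_key = ec.generate_private_key(ec.SECP256R1())   # validator's own key pair

issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Patent Office CA")])
subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Validator 001")])

cert = (
    x509.CertificateBuilder()
    .subject_name(subject)                        # Subject Name field
    .issuer_name(issuer)                          # Issuer Name field
    .public_key(validator_key.public_key())       # Subject Public Key Info field
    .serial_number(x509.random_serial_number())   # Serial Number field
    .not_valid_before(datetime.utcnow())          # Validity Period
    .not_valid_after(datetime.utcnow() + timedelta(days=365))
    .sign(ca_key, hashes.SHA256())                # signed with the CA's private key
)

# Any node can check the certificate against the patent office's public key
ca_key.public_key().verify(cert.signature, cert.tbs_certificate_bytes,
                           ec.ECDSA(cert.signature_hash_algorithm))
print(cert.subject.rfc4514_string(), "issued by", cert.issuer.rfc4514_string())
```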

Miners reach a consensus about the patent and record it in the blockchain. The NFT is then recorded in the blockchain with corresponding comments on granting or required revisions. If the miners detect the NFT as a malicious request, they do not record it in the blockchain.

Blockchain layer

This layer acts as middleware between the Verification Layer and the Application Layer in the patents-as-NFTs architecture. The main purpose of the blockchain layer in the proposed architecture is to provide IP management. We find that transitioning to a blockchain-based patents-as-NFTs record system enables many previously suggested improvements to current patent systems in a flexible, scalable, and transparent manner.

On the other hand, we can use multiple blockchain platforms, including Ethereum, EOS, Flow, and Tezos. Blockchain Systems can be mainly classified into two major types: Permissionless (public) and Permissioned (private) Blockchains based on their consensus mechanism. In a public blockchain, any node can participate in the peer-to-peer network, where the blockchain is fully decentralized. A node can leave the network without any consent from the other nodes in the network.

Bitcoin is one of the most popular examples that fall under the public and permissionless blockchain category. Proof of Work (PoW), Proof of Stake (PoS), and directed acyclic graph (DAG) schemes are some examples of consensus algorithms in permissionless blockchains. Bitcoin and Ethereum, two famous and trusted blockchain networks, use the PoW consensus mechanism. Blockchain platforms like Cardano and EOS adopt PoS consensus [40].

Nodes require specific access or permission to get network authentication in a private blockchain. Hyperledger is among the most popular private blockchains, allowing only permissioned members to join the network after authentication. This provides security to a group of entities that do not completely trust one another but want to achieve a common objective, such as exchanging information. All entities of a permissioned blockchain network can use Byzantine-fault-tolerant (BFT) consensus. Fabric has a membership identity service that manages user IDs and verifies network participants.

Therefore, members are aware of each other’s identity while maintaining privacy and secrecy, because they are unaware of each other’s activities [41]. Due to their more secure nature, private blockchains have sparked large interest among banking and financial organizations, which believe that these platforms can disrupt current centralized systems. Hyperledger, Quorum, Corda, and EOS are some examples of permissioned blockchains [42].

Reaching consensus in a distributed environment is a challenge. Blockchain is a decentralized network with no central node to observe and check all transactions, so there is a need for protocols that ensure all transactions are valid. Consensus algorithms are therefore considered the core of each blockchain [43]. In distributed systems, consensus is the problem of all network members (nodes) agreeing to accept or reject a block. When all network members accept the new block, it can be appended to the previous block.

As mentioned, the main concern in blockchains is how to reach consensus among network members. A wide range of consensus algorithms has been designed, each with its own pros and cons [42]. Blockchain consensus algorithms are mainly classified into the three groups shown in Table 2. The first group, proof-based consensus algorithms, requires the nodes joining the verifying network to demonstrate their qualification to do the appending task. The second group is voting-based consensus, which requires validators in the network to share their results of validating a new block or transaction before making the final decision. The third group is DAG-based consensus, a new class of consensus algorithms that allows several different blocks to be published and recorded simultaneously on the network.
Table 2: Consensus algorithms in blockchain networks.


The proposed patents-as-NFTs platform builds blockchain-based intellectual property management and empowers the entire patent ecosystem. It is a solution that removes barriers by addressing fundamental issues within the traditional patent ecosystem. Blockchain can efficiently handle patents and trademarks by effectively reducing approval wait times and other required resources. The user entities involved in intellectual property management are creators, patent consumers, and copyright-managing entities. Users with ownership of the original data are the patent creators, e.g., inventors, writers, and researchers. Patent consumers are the users who are willing to consume the content and support the creator’s work. Users responsible for protecting the creators’ intellectual property are the copyright-managing entities, e.g., lawyers. The patents-as-NFTs solution for IP management in the blockchain layer works by implementing the following steps [62]:

Creators sign up to the platform

Creators need to sign up on the blockchain platform to patent their creative work. The identity information will be required while signing up.

Creators upload IP on the blockchain network

Next, the creator adds the intellectual property for which a patent application is required. The creator uploads the information related to the IP and the data to the blockchain network. Blockchain ensures traceability and auditability to protect the data from duplication and manipulation. The patent becomes visible to all network members once it is uploaded to the blockchain.

Consumers generate request to use the content

Consumers who want to access the content must first register on the blockchain network. After Signing up, consumers can ask creators to grant access to the patented content. Before the patent owner authorizes the request, a Smart Contract is created to allow customers to access information such as the owner’s data. Furthermore, consumers are required to pay fees in either fiat money or unique tokens in order to use the creator’s original information. When the creator approves the request, an NDA (Non-Disclosure Agreement) is produced and signed by both parties. Blockchain manages the agreement and guarantees that all parties agree to the terms and conditions filed.

Patent management entities leverage blockchain to protect copyrights and solve related disputes

Blockchain assists the patent management entities in resolving a variety of disputes that may include: sharing confidential information, establishing proof of authorship, transferring IP rights, and making defensive publications, etc. Suppose a person used an Invention from a patent for his company without the inventor’s consent. The inventor can report it to the patent office and claim that he is the owner of that invention.

Application layer

The patent platform global marketplace technology would allow enterprises, governments, universities, and small and medium-sized enterprises (SMEs) worldwide to tokenize patents as NFTs, creating an infrastructure for storing patent records on a blockchain-based network and developing a decentralized marketplace in which patent holders could easily sell or otherwise monetize their patents. NFT-based patents can use smart contracts to set a fixed price for a license or purchase.

Any buyer satisfied with the conditions can pay and immediately unlock the rights to the patent without either party ever having to interact directly. While patents are currently regulated jurisdictionally around the world, a blockchain-based patent marketplace using NFTs can reduce the geographical barriers between patent systems using as simple a tool as a search query. The ease of access to patents globally can help aspiring inventors accelerate the innovative process by building upon others’ patented inventions through licenses. There are a wide variety of use cases for patent NFTs such as SMEs, Patent Organization, Grant & Funding, and fundraising/transferring information relating to patents. These applications keep growing as time progresses, and we are constantly finding new ways to utilize these tokens. Some of the most commonly used applications can be seen as follows.
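Before turning to those use cases, here is a toy, plain-Python sketch of the fixed-price licensing flow just described; it is not Solidity or an actual on-chain contract, and all names and amounts are invented:

```python
class FixedPriceLicense:
    """Mock of the licensing logic a smart contract could encode on-chain:
    the patent holder lists a price, and any buyer who pays it is granted
    a license automatically, with no direct interaction between the parties."""

    def __init__(self, patent_nft_id: str, licensor: str, price: int):
        self.patent_nft_id = patent_nft_id
        self.licensor = licensor
        self.price = price                        # e.g., in a token's smallest unit
        self.licensees = set()
        self.balance = {licensor: 0}

    def buy_license(self, buyer: str, payment: int) -> str:
        if payment < self.price:
            raise ValueError("payment below the listed price")
        self.balance[self.licensor] += payment    # royalty routed to the patent holder
        self.licensees.add(buyer)                 # rights unlocked immediately
        return f"license to {self.patent_nft_id} granted to {buyer}"


offer = FixedPriceLicense("PATENT-NFT-0042", licensor="0xInventor", price=1_000)
print(offer.buy_license("0xManufacturer", payment=1_000))
print(offer.balance)
```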

SMEs

The aim is to move intellectual property assets onto a digital, centralized, and secure blockchain network, enabling easier commercialization of patents, especially for small or medium enterprises (SMEs). Smart contracts can be attached to NFTs so terms of use and ownership can be outlined and agreed upon without incurring as many legal fees as traditional IP transfers. This is believed to help SMEs secure funding, as they could more easily leverage the previously undisclosed value of their patent portfolios [63].

Transfer ownership of patents

NFTs can be used to transfer ownership of patents. The blockchain can be used to keep track of patent owners, and tokens would include self-executing contracts that transfer the legal rights associated with patents when the tokens are transferred. A partnership between IBM and IPwe has spearheaded the use of NFTs to secure patent ownership. These two companies have teamed together to build the infrastructure for an NFT-based patent marketplace.

Discussion

There are exciting proposals in the legal and economic literature that suggest seemingly straightforward solutions to many of the issues plaguing current patent systems. However, most solutions would constitute major administrative disruptions and place significant and continuous financial burdens on patent offices or their users. An NFT-based patents system not only makes many of these ideas administratively feasible but can also be examined in a step-wise, scalable, and very public manner.

Furthermore, NFT-based patents may facilitate reliable information sharing among offices and patentees worldwide, reducing the burden on examiners and perhaps even accelerating harmonization efforts. NFT-based patents also have additional transparency and archival attributes baked in. A patent should be a privilege bestowed on those who take resource-intensive risks to explore the frontier of technological capabilities, and full transparency of these rewards is very much in the public interest: it is society that pays for the administrative and economic inefficiencies that exist in today’s systems. NFT-based patents can enhance this transparency. From an organizational perspective, an NFT-based patent can remove current bottlenecks in patent processes by making these processes more efficient, rapid, and convenient for applicants without compromising the quality of granted patents.

The proposed framework encounters some challenges that should be solved to reach a fully developed patent verification platform. First, there are technical problems. The consensus method used in the verification layer is not addressed in detail. Because of the permissioned structure of miners in NFT-based patents, consensus algorithms designed for permissioned blockchains, such as PBFT, Federated Consensus, and Round Robin Consensus, can be applied. Also, miners/validators spend time validating the patents; hence, a protocol should be designed to compensate them. Challenges such as proving the miners’ time and effort, the price that inventors should pay to miners, and other economic trade-offs should be considered.
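As a hedged illustration of one of the permissioned options mentioned above, the sketch below captures the core idea of a Round Robin scheme: certified validators take turns proposing blocks, and a proposal is committed once a simple majority of validators approves it. It is a conceptual simplification, not PBFT and not any specific framework’s implementation; the validator names, quorum rule, and record fields are assumptions.

```python
# Minimal round-robin proposer rotation with a majority vote (conceptual sketch).
validators = ["patent_office_A", "university_B", "certified_examiner_C", "patent_office_D"]

def proposer_for(block_height):
    """Validators take turns proposing blocks in a fixed order."""
    return validators[block_height % len(validators)]

def commit_block(block_height, patent_record, votes):
    """Accept the block if a simple majority of the permissioned validators approves it."""
    quorum = len(validators) // 2 + 1
    approvals = sum(1 for v, ok in votes.items() if ok and v in validators)
    if approvals >= quorum:
        return {"height": block_height,
                "proposer": proposer_for(block_height),
                "record": patent_record,
                "status": "committed"}
    return {"height": block_height, "status": "rejected"}

# Example: the validator whose turn it is proposes at height 2; three of four approve.
votes = {"patent_office_A": True, "university_B": True,
         "certified_examiner_C": True, "patent_office_D": False}
print(commit_block(2, {"patent_id": "hypothetical-001", "hash": "0xabc..."}, votes))
```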

Different NFT standards were discussed. If different patent services use different NFT standards, there will be cross-platform problems. For instance, transferring an NFT from the Ethereum blockchain (an ERC-721 token) to the EOS blockchain is not a straightforward task and requires careful consideration. Also, people usually trade NFTs in marketplaces such as Rarible and OpenSea; these marketplaces are centralized, which may introduce challenges of its own. There are also other types of challenges, such as the novelty of NFT-based patents and blockchain services.

A blockchain-based patent service has not been tested before, and the registration procedure and the concept of a patent-as-NFT system may seem ambiguous to people who still prefer conventional, centralized patent systems over decentralized ones. There are also open questions on the mining side: miners should receive certificates from accepted organizations, and determining which organizations qualify and how they accept referees as validators needs more consideration. Furthermore, some types of inventions are prohibited in some countries and inventors cannot register them there; because an NFT-based system lets inventors register their patents publicly, conflicts may arise between inventors and governments. Finally, there are misunderstandings about NFT ownership rights: it is not clear exactly which rights a buyer acquires, for instance whether they receive property rights, moral rights, or both.

Conclusion

Blockchain technology provides strong timestamping, the potential for smart contracts, and proof-of-existence. It enables the creation of a transparent, distributed, cost-effective, and resilient environment that is open to all and in which every transaction is auditable. Blockchain is therefore a boon to the IP industry, benefitting patent owners, and when its intrinsic characteristics are applied to the IP domain it also supports copyright. This paper provided a conceptual framework for an NFT-based patent with a comprehensive discussion of many aspects: background, model components, token standards, application areas, and research challenges. The proposed framework includes five main layers: Storage Layer, Authentication Layer, Verification Layer, Blockchain Layer, and Application Layer. The primary purpose of this patent framework was to provide an NFT-based concept that could be used to create a decentralized, tamper-resistant, and reliable network for trading and exchanging patents around the world. Finally, we addressed several open challenges facing NFT-based inventions.

References

  1. Nakamoto, S. Bitcoin: A peer-to-peer electronic cash system. Decent. Bus. Rev. 21260, https://bitcoin.org/bitcoin.pdf (2008).
  2. Buterin, V. A next-generation smart contract and decentralized application platform. White Pap. 3 (2014).
  3. Nofer, M., Gomber, P., Hinz, O. & Schiereck, D. Blockchain. Business & Information Systems Engineering 59, 183–187 (2017).
  4. Regner, F., Urbach, N. & Schweizer, A. NFTs in practice—non-fungible tokens as core component of a blockchain-based event ticketing application. https://www.researchgate.net/publication/336057493_NFTs_in_Practice_-_Non-Fungible_Tokens_as_Core_Component_of_a_Blockchain-based_Event_Ticketing_Application (2019).
  5. Entriken, W., Shirley, D., Evans, J. & Sachs, N. EIP 721: ERC-721 non-fungible token standard. Ethereum Improv. Propos. https://eips.ethereum.org/EIPS/eip-721 (2018).
  6. Radomski, W. et al. Eip 1155: Erc-1155 multi token standard. In Ethereum, Standard (2018).
  7. Dowling, M. Is non-fungible token pricing driven by cryptocurrencies? Finance Res. Lett. 44, 102097. https://doi.org/10.1016/j.frl.2021.102097 (2021).
  8. Lesavre, L., Varin, P. & Yaga, D. Blockchain Networks: Token Design and Management Overview. (National Institute of Standards and Technology, 2020).
  9. Larva-Labs. About Cryptopunks, Retrieved 13 May, 2021, from https://www.larvalabs.com/cryptopunks (2021).
  10. Cryptokitties. About Cryptokitties, Retrieved 28 May, 2021, from https://www.cryptokitties.co/ (2021).
  11. nbatopshot. About Nba top shot, Retrieved 4 April, 2021, from https://nbatopshot.com/terms (2021).
  12. Fairfield, J. Tokenized: The law of non-fungible tokens and unique digital property. Indiana Law J. forthcoming (2021).
  13. Chevet, S. Blockchain technology and non-fungible tokens: Reshaping value chains in creative industries. Available at SSRN 3212662 (2018).
  14. Bal, M. & Ner, C. NFTracer: a Non-Fungible token tracking proof-of-concept using Hyperledger Fabric. arXiv preprint arXiv:1905.04795 (2019).
  15. Wang, Q., Li, R., Wang, Q. & Chen, S. Non-fungible token (NFT): Overview, evaluation, opportunities and challenges. arXiv preprint arXiv:2105.07447 (2021).
  16. Qu, Q., Nurgaliev, I., Muzammal, M., Jensen, C. S. & Fan, J. On spatio-temporal blockchain query processing. Future Gener. Comput. Syst. 98, 208–218 (2019).
  17. Rosenfeld, M. Overview of colored coins. White paper, bitcoil. co. il 41, 94 (2012).
  18. Obsidian-Labs. dGoods Standard, Retrieved 29 April, 2021, from https://docs.eosstudio.io/contracts/dgoods/standard.html. (2021).
  19. Algorand. Algorand Core Technology Innovation, Retrieved 10 March, 2021, from https://www.algorand.com/technology/core-blockchain-innovation. (2021).
  20. Weathersby, J. Building NFTs on Algorand, Retrieved 15 April, 2021, from https://developer.algorand.org/articles/building-nfts-on-algorand/. (2021).
  21. Algorand. How Algorand Democratizes the Access to the NFT Market with Fractional NFTs, Retrieved 7 April, 2021, from https://www.algorand.com/resources/blog/algorand-nft-market-fractional-nfts. (2021).
  22. Tezos. Welcome to the Tezos Developer Documentation, Retrieved 16 May, 2021, from https://tezos.gitlab.io. (2021).
  23. flowdocs. Non-Fungible Tokens, Retrieved 20 May, 2021, from https://docs.onflow.org/cadence/tutorial/04-non-fungible-tokens/. (2021).
  24. Benisi, N. Z., Aminian, M. & Javadi, B. Blockchain-based decentralized storage networks: A survey. J. Netw. Comput. Appl. 162, 102656 (2020).
  25. NFTReview. On-chain vs. Off-chain Metadata (2021).
  26. Benet, J. Ipfs-content addressed, versioned, p2p file system. arXiv preprint arXiv:1407.3561 (2014).
  27. Nizamuddin, N., Salah, K., Azad, M. A., Arshad, J. & Rehman, M. Decentralized document version control using ethereum blockchain and IPFS. Comput. Electr. Eng. 76, 183–197 (2019).
  28. Tut, K. Who Is Responsible for NFT Data? (2020).
  29. nft.storage. Free Storage for NFTs, Retrieved 16 May, 2021, from https://nft.storage/. (2021).
  30. Psaras, Y. & Dias, D. in 2020 50th Annual IEEE-IFIP International Conference on Dependable Systems and Networks-Supplemental Volume (DSN-S). 80–80 (IEEE).
  31. Tanner, J. & Roelofs, C. NFTs and the need for Self-Sovereign Identity (2021).
  32. Martens, D., Tuyll van Serooskerken, A. V. & Steenhagen, M. Exploring the potential of blockchain for KYC. J. Digit. Bank. 2, 123–131 (2017).
  33. Hammi, M. T., Bellot, P. & Serhrouchni, A. In 2018 IEEE Wireless Communications and Networking Conference (WCNC). 1–6 (IEEE).
  34. Khalid, U. et al. A decentralized lightweight blockchain-based authentication mechanism for IoT systems. Cluster Comput. 1–21 (2020).
  35. Zhong, Y. et al. Distributed blockchain-based authentication and authorization protocol for smart grid. Wirel. Commun. Mobile Comput. (2021).
  36. Schönhals, A., Hepp, T. & Gipp, B. In Proceedings of the 1st Workshop on Cryptocurrencies and Blockchains for Distributed Systems. 105–110.
  37. Verma, S. & Prajapati, G. A Survey of Cryptographic Hash Algorithms and Issues. International Journal of Computer Security & Source Code Analysis (IJCSSCA) 1, 17–20, (2015).
  38. Verma, S. & Prajapati, G. A survey of cryptographic hash algorithms and issues. Int. J. Comput. Secur. Source Code Anal. (IJCSSCA) 1 (2015).
  39. SDK, I. X.509 Certificates (1996).
  40. Helliar, C. V., Crawford, L., Rocca, L., Teodori, C. & Veneziani, M. Permissionless and permissioned blockchain diffusion. Int. J. Inf. Manag. 54, 102136 (2020).
  41. Frizzo-Barker, J. et al. Blockchain as a disruptive technology for business: A systematic review. Int. J. Inf. Manag. 51, 102029 (2020).
  42. Bamakan, S. M. H., Motavali, A. & Bondarti, A. B. A survey of blockchain consensus algorithms performance evaluation criteria. Expert Syst. Appl. 154, 113385 (2020).
  43. Bamakan, S. M. H., Bondarti, A. B., Bondarti, P. B. & Qu, Q. Blockchain technology forecasting by patent analytics and text mining. Blockchain Res. Appl. 100019 (2021).
  44. Castro, M. & Liskov, B. Practical Byzantine fault tolerance and proactive recovery. ACM Trans. Comput. Syst. (TOCS) 20, 398–461 (2002).
  45. Muratov, F., Lebedev, A., Iushkevich, N., Nasrulin, B. & Takemiya, M. YAC: BFT consensus algorithm for blockchain. arXiv preprint arXiv:1809.00554 (2018).
  46. Bessani, A., Sousa, J. & Alchieri, E. E. In 2014 44th Annual IEEE/IFIP International Conference on Dependable Systems and Networks. 355–362 (IEEE).
  47. Todd, P. Ripple protocol consensus algorithm review. May 11th (2015).
  48. Ongaro, D. & Ousterhout, J. In 2014 {USENIX} Annual Technical Conference ({USENIX}{ATC} 14). 305–319.
  49. Larimer, D. Delegated proof-of-stake (DPoS). BitShares whitepaper. Retrieved March 31, 2019, from http://docs.bitshares.org/bitshares/dpos.html (2014).
  50. Turner, B. (October, 2007).
  51. De Angelis, S. et al. PBFT vs proof-of-authority: Applying the CAP theorem to permissioned blockchain (2018).
  52. King, S. & Nadal, S. Ppcoin: Peer-to-peer crypto-currency with proof-of-stake. self-published paper, August 19 (2012).
  53. Hyperledger. PoET 1.0 Specification (2017).
  54. Buntinx, J. What Is Proof-of-Weight? Retrieved March 31, 2019, from https://nulltx.com/what-is-proof-of-weight/# (2018).
  55. P4Titan. A Peer-to-Peer Crypto-Currency with Proof-of-Burn. Retrieved March 10, 2019, from https://github.com/slimcoin-project/slimcoin-project.github.io/raw/master/whitepaperSLM.pdf (2014).
  56. Dziembowski, S., Faust, S., Kolmogorov, V. & Pietrzak, K. In Annual Cryptology Conference. 585–605 (Springer).
  57. Bentov, I., Lee, C., Mizrahi, A. & Rosenfeld, M. Proof of Activity: Extending Bitcoin’s Proof of Work via Proof of Stake. IACR Cryptology ePrint Archive 2014, 452 (2014).
  58. NEM, T. NEM technical reference. https://nem.io/wpcontent/themes/nem/files/NEM_techRef.pdf (2018).
  59. Bramas, Q. The Stability and the Security of the Tangle (2018).
  60. Baird, L. The swirlds hashgraph consensus algorithm: Fair, fast, byzantine fault tolerance. In Swirlds Tech Reports SWIRLDS-TR-2016–01, Tech. Rep (2016).
  61. LeMahieu, C. Nano: A feeless distributed cryptocurrency network. Nano [Online resource]. https://nano.org/en/whitepaper (date of access: 24.03. 2018) 16, 17 (2018).
  62. Casino, F., Dasaklis, T. K. & Patsakis, C. A systematic literature review of blockchain-based applications: Current status, classification and open issues. Telematics Inform. 36, 55–81 (2019).
  63. bigredawesomedodo. Helping Small Businesses Survive and Grow With Marketing, Retrieved 3 June, 2021, from https://bigredawesomedodo.com/nft/. (2020).

Download references

Acknowledgements

This work has been partially supported by CAS President’s International Fellowship Initiative, China [grant number 2021VTB0002, 2021] and National Natural Science Foundation of China (No. 61902385).

Author information

Affiliations

  1. Department of Industrial Management, Yazd University, Yazd, Iran: Seyed Mojtaba Hosseini Bamakan
  2. Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan, Iran: Nasim Nezhadsistani
  3. School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran: Omid Bodaghi
  4. Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China: Seyed Mojtaba Hosseini Bamakan & Qiang Qu
  5. Huawei Blockchain Lab, Huawei Cloud Tech Co., Ltd., Shenzhen, China: Qiang Qu

Contributions

NFT: Redefined Format of IP Assets

The collaboration between National Center for Advancing Translational Sciences (NCATS) at NIH and BurstIQ

2.0 LPBI is a Very Unique Organization 

 

Read Full Post »

Reporter: Stephen J. Williams, Ph.D.

From: Heidi Rehm et al. GA4GH: International policies and standards for data sharing across genomic research and healthcare. Cell Genomics (2021), Volume 1, Issue 2.

Source: DOI:https://doi.org/10.1016/j.xgen.2021.100029

Highlights

  • Siloing genomic data in institutions/jurisdictions limits learning and knowledge
  • GA4GH policy frameworks enable responsible genomic data sharing
  • GA4GH technical standards ensure interoperability, broad access, and global benefits
  • Data sharing across research and healthcare will extend the potential of genomics

Summary

The Global Alliance for Genomics and Health (GA4GH) aims to accelerate biomedical advances by enabling the responsible sharing of clinical and genomic data through both harmonized data aggregation and federated approaches. The decreasing cost of genomic sequencing (along with other genome-wide molecular assays) and increasing evidence of its clinical utility will soon drive the generation of sequence data from tens of millions of humans, with increasing levels of diversity. In this perspective, we present the GA4GH strategies for addressing the major challenges of this data revolution. We describe the GA4GH organization, which is fueled by the development efforts of eight Work Streams and informed by the needs of 24 Driver Projects and other key stakeholders. We present the GA4GH suite of secure, interoperable technical standards and policy frameworks and review the current status of standards, their relevance to key domains of research and clinical care, and future plans of GA4GH. Broad international participation in building, adopting, and deploying GA4GH standards and frameworks will catalyze an unprecedented effort in data sharing that will be critical to advancing genomic medicine and ensuring that all populations can access its benefits.

In order for genomic and personalized medicine to come to fruition, it is imperative that data siloes around the world are broken down, allowing international collaboration in the collection, storage, transfer, access, and analysis of molecular and health-related data.

We have talked on this site in numerous articles about the problems data siloes produce. By data siloes we mean that the collection and storage of not only data but also intellectual thought are held behind physical, electronic, and intellectual walls, inaccessible to scientists who do not belong to a particular institution or to a given collaborative network.

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

Standardization and harmonization of data is key to this effort to sharing electronic records. The EU has taken bold action in this matter. The following section is about the General Data Protection Regulation of the EU and can be found at the following link:

https://ec.europa.eu/info/law/law-topic/data-protection/data-protection-eu_en

Fundamental rights

The EU Charter of Fundamental Rights stipulates that EU citizens have the right to protection of their personal data.

Protection of personal data

Legislation

The data protection package adopted in May 2016 aims at making Europe fit for the digital age. More than 90% of Europeans say they want the same data protection rights across the EU and regardless of where their data is processed.

The General Data Protection Regulation (GDPR)

Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data. This text includes the corrigendum published in the OJEU of 23 May 2018.

The regulation is an essential step to strengthen individuals’ fundamental rights in the digital age and facilitate business by clarifying rules for companies and public bodies in the digital single market. A single law will also do away with the current fragmentation in different national systems and unnecessary administrative burdens.

The regulation entered into force on 24 May 2016 and applies since 25 May 2018. More information for companies and individuals.

Information about the incorporation of the General Data Protection Regulation (GDPR) into the EEA Agreement.

EU Member States notifications to the European Commission under the GDPR

The Data Protection Law Enforcement Directive

Directive (EU) 2016/680 on the protection of natural persons regarding processing of personal data connected with criminal offences or the execution of criminal penalties, and on the free movement of such data.

The directive protects citizens’ fundamental right to data protection whenever personal data is used by criminal law enforcement authorities for law enforcement purposes. It will in particular ensure that the personal data of victims, witnesses, and suspects of crime are duly protected and will facilitate cross-border cooperation in the fight against crime and terrorism.

The directive entered into force on 5 May 2016 and EU countries had to transpose it into their national law by 6 May 2018.

The following paper by the organization The Global Alliance for Genomics and Health discusses these types of collaborative efforts to break down data silos in personalized medicine. This organization has over 2000 subscribers in over 90 countries encompassing over 60 organizations.

Enabling responsible genomic data sharing for the benefit of human health

The Global Alliance for Genomics and Health (GA4GH) is a policy-framing and technical standards-setting organization, seeking to enable responsible genomic data sharing within a human rights framework.

The Global Alliance for Genomics and Health (GA4GH) is an international, nonprofit alliance formed in 2013 to accelerate the potential of research and medicine to advance human health. Bringing together 600+ leading organizations working in healthcare, research, patient advocacy, life science, and information technology, the GA4GH community is working together to create frameworks and standards to enable the responsible, voluntary, and secure sharing of genomic and health-related data. All of our work builds upon the Framework for Responsible Sharing of Genomic and Health-Related Data.

GA4GH Connect is a five-year strategic plan that aims to drive uptake of standards and frameworks for genomic data sharing within the research and healthcare communities in order to enable responsible sharing of clinical-grade genomic data by 2022. GA4GH Connect links our Work Streams with Driver Projects—real-world genomic data initiatives that help guide our development efforts and pilot our tools.

From the article on Cell Genomics GA4GH: International policies and standards for data sharing across genomic research and healthcare

Source: Open Access. DOI: https://doi.org/10.1016/j.xgen.2021.100029

The Global Alliance for Genomics and Health (GA4GH) is a worldwide alliance of genomics researchers, data scientists, healthcare practitioners, and other stakeholders. We are collaborating to establish policy frameworks and technical standards for responsible, international sharing of genomic and other molecular data as well as related health data. Founded in 2013 [3], the GA4GH community now consists of more than 1,000 individuals across more than 90 countries working together to enable broad sharing that transcends the boundaries of any single institution or country (see https://www.ga4gh.org). In this perspective, we present the strategic goals of GA4GH and detail current strategies and operational approaches to enable responsible sharing of clinical and genomic data, through both harmonized data aggregation and federated approaches, to advance genomic medicine and research. We describe technical and policy development activities of the eight GA4GH Work Streams and implementation activities across 24 real-world genomic data initiatives (“Driver Projects”). We review how GA4GH is addressing the major areas in which genomics is currently deployed including rare disease, common disease, cancer, and infectious disease. Finally, we describe differences between genomic sequence data that are generated for research versus healthcare purposes, and define strategies for meeting the unique challenges of responsibly enabling access to data acquired in the clinical setting.

GA4GH organization

GA4GH has partnered with 24 real-world genomic data initiatives (Driver Projects) to ensure its standards are fit for purpose and driven by real-world needs. Driver Projects make a commitment to help guide GA4GH development efforts and pilot GA4GH standards (see Table 2). Each Driver Project is expected to dedicate at least two full-time equivalents to GA4GH standards development, which takes place in the context of GA4GH Work Streams (see Figure 1). Work Streams are the key production teams of GA4GH, tackling challenges in eight distinct areas across the data life cycle (see Box 1). Work Streams consist of experts from their respective sub-disciplines and include membership from Driver Projects as well as hundreds of other organizations across the international genomics and health community.

Figure 1. Matrix structure of the Global Alliance for Genomics and Health.


Box 1
GA4GH Work Stream focus areas
The GA4GH Work Streams are the key production teams of the organization. Each tackles a specific area in the data life cycle, as described below (URLs listed in the web resources).

  • (1) Data use & researcher identities: Develops ontologies and data models to streamline global access to datasets generated in any country [9,10]
  • (2) Genomic knowledge standards: Develops specifications and data models for exchanging genomic variant observations and knowledge [18]
  • (3) Cloud: Develops federated analysis approaches to support the statistical rigor needed to learn from large datasets
  • (4) Data privacy & security: Develops guidelines and recommendations to ensure identifiable genomic and phenotypic data remain appropriately secure without sacrificing their analytic potential
  • (5) Regulatory & ethics: Develops policies and recommendations for ensuring individual-level data are interoperable with existing norms and follow core ethical principles
  • (6) Discovery: Develops data models and APIs to make data findable, accessible, interoperable, and reusable (FAIR)
  • (7) Clinical & phenotypic data capture & exchange: Develops data models to ensure genomic data is most impactful through rich metadata collected in a standardized way
  • (8) Large-scale genomics: Develops APIs and file formats to ensure harmonized technological platforms can support large-scale computing

For more articles on Open Access, Science 2.0, and Data Networks for Genomics on this Open Access Scientific Journal see:

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

Icelandic Population Genomic Study Results by deCODE Genetics come to Fruition: Curation of Current genomic studies

eScientific Publishing a Case in Point: Evolution of Platform Architecture Methodologies and of Intellectual Property Development (Content Creation by Curation) Business Model 

UK Biobank Makes Available 200,000 whole genomes Open Access

Systems Biology Analysis of Transcription Networks, Artificial Intelligence, and High-End Computing Coming to Fruition in Personalized Oncology

Read Full Post »

UK Biobank Makes Available 200,000 whole genomes Open Access

Reporter: Stephen J. Williams, Ph.D.

The following is a summary of an article by Jocelyn Kaiser, published in the November 26, 2021 issue of the journal Science.

To see the full article please go to https://www.science.org/content/article/200-000-whole-genomes-made-available-biomedical-studies-uk-effort

The UK Biobank (UKBB) this week unveiled to scientists the entire genomes of 200,000 people who are part of a long-term British health study.

The trove of genomes, each linked to anonymized medical information, will allow biomedical scientists to scour the full 3 billion base pairs of human DNA for insights into the interplay of genes and health that could not be gleaned from partial sequences or scans of genome markers. “It is thrilling to see the release of this long-awaited resource,” says Stephen Glatt, a psychiatric geneticist at the State University of New York Upstate Medical University.

Other biobanks have also begun to compile vast numbers of whole genomes, 100,000 or more in some cases (see table, below). But UKBB stands out because it offers easy access to the genomic information, according to some of the more than 20,000 researchers in 90 countries who have signed up to use the data. “In terms of availability and data quality, [UKBB] surpasses all others,” says physician and statistician Omar Yaxmehen Bello-Chavolla of the National Institute for Geriatrics in Mexico City.

Enabling your vision to improve public health

Data drives discovery. We have curated a uniquely powerful biomedical database that can be accessed globally for public health research. Explore data from half a million UK Biobank participants to enable new discoveries to improve public health.

Data Showcase

Future data releases

The UKBB represents genomes collected from 500,000 middle-aged and elderly participants recruited from 2006 to 2010. The genomes are mostly of European descent. Other large-scale genome sequencing ventures exist; Iceland’s deCODE, for example, which collected over 100,000 genomes, is now a subsidiary of Amgen and mostly behind IP protection, not Open Access as this database is.

UK Biobank is a large-scale biomedical database and research resource, containing in-depth genetic and health information from half a million UK participants. The database is regularly augmented with additional data and is globally accessible to approved researchers undertaking vital research into the most common and life-threatening diseases. It is a major contributor to the advancement of modern medicine and treatment and has enabled several scientific discoveries that improve human health.

A summary of some large scale genome sequencing projects are show in the table below:

Biobank | Completed Whole Genomes | Release Information
UK Biobank | 200,000 | 300,000 more in early 2023
Trans-Omics for Precision Medicine | 161,000 | NIH requires project-specific request
Million Veterans Program | 125,000 | Non-Veterans Affairs researchers get first access
100,000 Genomes Project | 120,000 | Researchers must join Genomics England collaboration
All of Us | 90,000 | NIH expects to release in 2022

Other Related Articles on Genome Biobank Projects in this Open Access Online Scientific Journal Include the Following:

Icelandic Population Genomic Study Results by deCODE Genetics come to Fruition: Curation of Current genomic studies

Exome Aggregation Consortium (ExAC), generated the largest catalogue so far of variation in human protein-coding regions: Sequence data of 60,000 people, NOW is a publicly accessible database

Systems Biology Analysis of Transcription Networks, Artificial Intelligence, and High-End Computing Coming to Fruition in Personalized Oncology

Diversity and Health Disparity Issues Need to be Addressed for GWAS and Precision Medicine Studies

Read Full Post »

Developing Machine Learning Models for Prediction of Onset of Type-2 Diabetes

Reporter: Amandeep Kaur, B.Sc., M.Sc.

A recent study reports the development of an advanced AI algorithm which predicts the onset of type 2 diabetes up to five years in advance by utilizing routinely collected medical data. The researchers described their AI model as notable and distinctive based on its specific design, which performs assessments at the population level.

The first author Mathieu Ravaut, M.Sc. of the University of Toronto and other team members stated that “The main purpose of our model was to inform population health planning and management for the prevention of diabetes that incorporates health equity. It was not our goal for this model to be applied in the context of individual patient care.”

The research group collected data from 2006 to 2016 on approximately 2.1 million patients treated within the same healthcare system in Ontario, Canada. Even though the patients belonged to the same region, the authors highlighted that Ontario encompasses a large and diverse population.

The newly developed algorithm was trained with data from approximately 1.6 million patients, validated with data from about 243,000 patients, and evaluated with data from more than 236,000 patients. The data used to develop the algorithm included each patient’s medical history from the previous two years: prescriptions, medications, lab tests, and demographic information.

When predicting the onset of type 2 diabetes within five years, the model reached a test area under the receiver operating characteristic (ROC) curve of 80.26.
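For readers unfamiliar with the metric, the area under the ROC curve (AUC) summarizes how well a model ranks patients who develop diabetes above those who do not; an AUC of 80.26 corresponds to 0.8026 on the usual 0–1 scale. The sketch below shows, in heavily simplified form, how such discrimination is typically evaluated on tabular data with scikit-learn; the synthetic features and gradient-boosting model are assumptions for illustration and do not reproduce the authors’ pipeline.

```python
# Illustrative evaluation of discrimination (ROC AUC) on synthetic tabular data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for administrative features (labs, prescriptions, demographics).
X, y = make_classification(n_samples=20000, n_features=50, n_informative=10,
                           weights=[0.95, 0.05], random_state=0)  # outcome is rare

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]          # predicted risk of onset
print(f"test ROC AUC: {roc_auc_score(y_test, probs):.4f}")
```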

The authors reported that “Our model showed consistent calibration across sex, immigration status, racial/ethnic and material deprivation, and a low to moderate number of events in the health care history of the patient. The cohort was representative of the whole population of Ontario, which is itself among the most diverse in the world. The model was well calibrated, and its discrimination, although with a slightly different end goal, was competitive with results reported in the literature for other machine learning–based studies that used more granular clinical data from electronic medical records without any modifications to the original test set distribution.”

This model could potentially improve the healthcare systems of countries equipped with thorough administrative databases and could be aimed at specific cohorts that may be at risk of poor outcomes.

Research group stated that “Because our machine learning model included social determinants of health that are known to contribute to diabetes risk, our population-wide approach to risk assessment may represent a tool for addressing health disparities.”

Sources:

https://www.cardiovascularbusiness.com/topics/prevention-risk-reduction/new-ai-model-healthcare-data-predict-type-2-diabetes?utm_source=newsletter

Reference:

Ravaut M, Harish V, Sadeghi H, et al. Development and Validation of a Machine Learning Model Using Administrative Health Data to Predict Onset of Type 2 Diabetes. JAMA Netw Open. 2021;4(5):e2111315. doi:10.1001/jamanetworkopen.2021.11315 https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2780137

Other related articles were published in this Open Access Online Scientific Journal, including the following:

AI in Drug Discovery: Data Science and Core Biology @Merck &Co, Inc., @GNS Healthcare, @QuartzBio, @Benevolent AI and Nuritas

Reporters: Aviva Lev-Ari, PhD, RN and Irina Robu, PhD

https://pharmaceuticalintelligence.com/2020/08/27/ai-in-drug-discovery-data-science-and-core-biology-merck-co-inc-gns-healthcare-quartzbio-benevolent-ai-and-nuritas/

Can Blockchain Technology and Artificial Intelligence Cure What Ails Biomedical Research and Healthcare

Curator: Stephen J. Williams, Ph.D.

https://pharmaceuticalintelligence.com/2018/12/10/can-blockchain-technology-and-artificial-intelligence-cure-what-ails-biomedical-research-and-healthcare/

HealthCare focused AI Startups from the 100 Companies Leading the Way in A.I. Globally

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/01/18/healthcare-focused-ai-startups-from-the-100-companies-leading-the-way-in-a-i-globally/

AI in Psychiatric Treatment – Using Machine Learning to Increase Treatment Efficacy in Mental Health

Reporter: Aviva Lev- Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/06/04/ai-in-psychiatric-treatment-using-machine-learning-to-increase-treatment-efficacy-in-mental-health/

Vyasa Analytics Demos Deep Learning Software for Life Sciences at Bio-IT World 2018 – Vyasa’s booth (#632)

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/05/10/vyasa-analytics-demos-deep-learning-software-for-life-sciences-at-bio-it-world-2018-vyasas-booth-632/

New Diabetes Treatment Using Smart Artificial Beta Cells

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2017/11/08/new-diabetes-treatment-using-smart-artificial-beta-cells/

Read Full Post »

Systems Biology analysis of Transcription Networks, Artificial Intelligence, and High-End Computing Coming to Fruition in Personalized Oncology

Curator: Stephen J. Williams, Ph.D.

In the June 2020 issue of the journal Science, writer Roxanne Khamsi has an interesting article, “Computing Cancer’s Weak Spots; An algorithm to unmask tumors’ molecular linchpins is tested in patients” [1], describing some early successes in the incorporation of cancer genome sequencing in conjunction with artificial intelligence algorithms toward personalized clinical treatment decisions for various tumor types.  In 2016, oncologist Amy Tiersten collaborated with systems biologist Andrea Califano and cell biologist Jose Silva at Mount Sinai Hospital to develop a systems biology approach to determine that the drug ruxolitinib, a STAT3 inhibitor, would be effective for one of her patient’s aggressively recurring, Herceptin-resistant breast tumors.  Dr. Califano, instead of defining networks of driver mutations, focused on identifying a few transcription factors that act as ‘linchpins’ or master controllers of transcriptional networks within tumor cells, in doing so hoping, in essence, to ‘bottleneck’ the transcriptional machinery of potential oncogenic products. As Dr. Califano states

“targeting those master regulators and you will stop cancer in its tracks, no matter what mutation initially caused it.”

It is important to note that this approach also relies on the ability to sequence tumors  by RNA-seq to determine the underlying mutations which alter which master regulators are pertinent in any one tumor.  And given the wide tumor heterogeneity in tumor samples, this sequencing effort may have to involve multiple biopsies (as discussed in earlier posts on tumor heterogeneity in renal cancer).

As stated in the article, Califano co-founded a company called DarwinHealth in 2015 to guide doctors by identifying the key transcription factors in a patient’s tumor and suggesting personalized therapeutics against those identified molecular targets (OncoTarget™).  He has collaborated with the Jackson Laboratory and, most recently, Columbia University to conduct a $15 million, 3,000-patient clinical trial.  This was a bit of a stretch from his initial training as a physicist; in 1986, IBM hired him for artificial intelligence projects, and in 2003 he landed at Columbia, where he has been working on identifying the transcriptional nodes that govern cancer survival and tumorigenicity.  Dr. Califano had figured that the number of genetic mutations that could potentially be drivers was too vast:

A 2018 study which analyzed more than 9000 tumor samples reported over 1.5 million mutations[2]

and impossible to develop therapeutics against.  He reasoned that you would just have to identify the common connections between these pathways or transcriptional nodes and termed them master regulators.

A Pan-Cancer Analysis of Enhancer Expression in Nearly 9000 Patient Samples

Chen H, Li C, Peng X, et al. Cell. 2018;173(2):386-399.e12.

Abstract

The role of enhancers, a key class of non-coding regulatory DNA elements, in cancer development has increasingly been appreciated. Here, we present the detection and characterization of a large number of expressed enhancers in a genome-wide analysis of 8928 tumor samples across 33 cancer types using TCGA RNA-seq data. Compared with matched normal tissues, global enhancer activation was observed in most cancers. Across cancer types, global enhancer activity was positively associated with aneuploidy, but not mutation load, suggesting a hypothesis centered on “chromatin-state” to explain their interplay. Integrating eQTL, mRNA co-expression, and Hi-C data analysis, we developed a computational method to infer causal enhancer-gene interactions, revealing enhancers of clinically actionable genes. Having identified an enhancer ∼140 kb downstream of PD-L1, a major immunotherapy target, we validated it experimentally. This study provides a systematic view of enhancer activity in diverse tumor contexts and suggests the clinical implications of enhancers.

 

A diagram of how concentrating on these transcriptional linchpins or nodes may be more therapeutically advantageous as only one pharmacologic agent is needed versus multiple agents to inhibit the various upstream pathways:

 

 

From: Khamsi R: Computing cancer’s weak spots. Science 2020, 368(6496):1174-1177.

 

VIPER Algorithm (Virtual Inference of Protein activity by Enriched Regulon Analysis)

The algorithm that Califano and DarwinHealth developed is a systems biology approach that uses a tumor’s RNA-Seq data to determine controlling nodes of transcription.  They have recently used the VIPER algorithm to look at RNA-Seq data from more than 10,000 tumor samples from TCGA and identified 407 transcription factor genes that acted as these linchpins across all tumor types.  Only 20 to 25 of them were implicated in just one tumor type, so these potential nodes are common in many forms of cancer.

Other institutions, like the Cold Spring Harbor Laboratory, have been using VIPER in their patient tumor analyses.  Linchpins for other tumor types have been found; for instance, VIPER identified the transcription factors IKZF1 and IKZF3 as linchpins in multiple myeloma.  But currently approved therapeutics are hard to come by for targets that are transcription factors, as most pharma has concentrated on inhibiting easier targets like kinases and their associated activity.  In general, developing transcription factor inhibitors is a more difficult undertaking, for multiple reasons.

Network-based inference of protein activity helps functionalize the genetic landscape of cancer. Alvarez MJ, Shen Y, Giorgi FM, Lachmann A, Ding BB, Ye BH, Califano A. Nature genetics 2016, 48(8):838-847 [3]

Abstract

Identifying the multiple dysregulated oncoproteins that contribute to tumorigenesis in a given patient is crucial for developing personalized treatment plans. However, accurate inference of aberrant protein activity in biological samples is still challenging as genetic alterations are only partially predictive and direct measurements of protein activity are generally not feasible. To address this problem we introduce and experimentally validate a new algorithm, VIPER (Virtual Inference of Protein-activity by Enriched Regulon analysis), for the accurate assessment of protein activity from gene expression data. We use VIPER to evaluate the functional relevance of genetic alterations in regulatory proteins across all TCGA samples. In addition to accurately inferring aberrant protein activity induced by established mutations, we also identify a significant fraction of tumors with aberrant activity of druggable oncoproteins—despite a lack of mutations, and vice-versa. In vitro assays confirmed that VIPER-inferred protein activity outperforms mutational analysis in predicting sensitivity to targeted inhibitors.

 

 

 

 

Figure 1. Schematic overview of the VIPER algorithm. From: Alvarez MJ, Shen Y, Giorgi FM, Lachmann A, Ding BB, Ye BH, Califano A: Functional characterization of somatic mutations in cancer using network-based inference of protein activity. Nature genetics 2016, 48(8):838-847.

(a) Molecular layers profiled by different technologies. Transcriptomics measures steady-state mRNA levels; Proteomics quantifies protein levels, including some defined post-translational isoforms; VIPER infers protein activity based on the protein’s regulon, reflecting the abundance of the active protein isoform, including post-translational modifications, proper subcellular localization and interaction with co-factors. (b) Representation of VIPER workflow. A regulatory model is generated from ARACNe-inferred context-specific interactome and Mode of Regulation computed from the correlation between regulator and target genes. Single-sample gene expression signatures are computed from genome-wide expression data, and transformed into regulatory protein activity profiles by the aREA algorithm. (c) Three possible scenarios for the aREA analysis, including increased, decreased or no change in protein activity. The gene expression signature and its absolute value (|GES|) are indicated by color scale bars, induced and repressed target genes according to the regulatory model are indicated by blue and red vertical lines. (d) Pleiotropy Correction is performed by evaluating whether the enrichment of a given regulon (R4) is driven by genes co-regulated by a second regulator (R4∩R1). (e) Benchmark results for VIPER analysis based on multiple-samples gene expression signatures (msVIPER) and single-sample gene expression signatures (VIPER). Boxplots show the accuracy (relative rank for the silenced protein), and the specificity (fraction of proteins inferred as differentially active at p < 0.05) for the 6 benchmark experiments (see Table 2). Different colors indicate different implementations of the aREA algorithm, including 2-tail (2T) and 3-tail (3T), Interaction Confidence (IC) and Pleiotropy Correction (PC).
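To make the notion of inferring protein activity from a regulon more concrete, here is a minimal, purely illustrative sketch: it scores a regulator by the signed average of its targets’ values in a single-sample gene expression signature. This is a deliberate simplification for intuition only, not the aREA statistic or the published VIPER implementation (distributed as a Bioconductor package), and the gene names, signature values, and regulon below are hypothetical.

```python
# Toy "protein activity" score: signed mean of target-gene signature values.
import numpy as np

# Hypothetical single-sample gene expression signature (e.g., z-scores vs. controls).
signature = {"GENE_A": 2.1, "GENE_B": 1.7, "GENE_C": -0.4,
             "GENE_D": -1.9, "GENE_E": 0.2, "GENE_F": 2.8}

# Hypothetical regulon for transcription factor "TF_X": targets with mode of regulation
# (+1 = activated by TF_X, -1 = repressed by TF_X).
regulon_tf_x = {"GENE_A": +1, "GENE_B": +1, "GENE_D": -1, "GENE_F": +1}

def activity_score(signature, regulon):
    """Higher score = the targets behave as expected if the regulator is active."""
    contributions = [mode * signature[g] for g, mode in regulon.items() if g in signature]
    return float(np.mean(contributions))

print(f"inferred TF_X activity (toy score): {activity_score(signature, regulon_tf_x):.2f}")
```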

 Other articles from Andrea Califano on VIPER algorithm in cancer include:

Resistance to neoadjuvant chemotherapy in triple-negative breast cancer mediated by a reversible drug-tolerant state.

Echeverria GV, Ge Z, Seth S, Zhang X, Jeter-Jones S, Zhou X, Cai S, Tu Y, McCoy A, Peoples M, Sun Y, Qiu H, Chang Q, Bristow C, Carugo A, Shao J, Ma X, Harris A, Mundi P, Lau R, Ramamoorthy V, Wu Y, Alvarez MJ, Califano A, Moulder SL, Symmans WF, Marszalek JR, Heffernan TP, Chang JT, Piwnica-Worms H.Sci Transl Med. 2019 Apr 17;11(488):eaav0936. doi: 10.1126/scitranslmed.aav0936.PMID: 30996079

An Integrated Systems Biology Approach Identifies TRIM25 as a Key Determinant of Breast Cancer Metastasis.

Walsh LA, Alvarez MJ, Sabio EY, Reyngold M, Makarov V, Mukherjee S, Lee KW, Desrichard A, Turcan Ş, Dalin MG, Rajasekhar VK, Chen S, Vahdat LT, Califano A, Chan TA.Cell Rep. 2017 Aug 15;20(7):1623-1640. doi: 10.1016/j.celrep.2017.07.052.PMID: 28813674

Inhibition of the autocrine IL-6-JAK2-STAT3-calprotectin axis as targeted therapy for HR-/HER2+ breast cancers.

Rodriguez-Barrueco R, Yu J, Saucedo-Cuevas LP, Olivan M, Llobet-Navas D, Putcha P, Castro V, Murga-Penas EM, Collazo-Lorduy A, Castillo-Martin M, Alvarez M, Cordon-Cardo C, Kalinsky K, Maurer M, Califano A, Silva JM.Genes Dev. 2015 Aug 1;29(15):1631-48. doi: 10.1101/gad.262642.115. Epub 2015 Jul 30.PMID: 26227964

Master regulators used as breast cancer metastasis classifier.

Lim WK, Lyashenko E, Califano A.Pac Symp Biocomput. 2009:504-15.PMID: 19209726 Free

 

Additional References

 

  1. Khamsi R: Computing cancer’s weak spots. Science 2020, 368(6496):1174-1177.
  2. Chen H, Li C, Peng X, Zhou Z, Weinstein JN, Liang H: A Pan-Cancer Analysis of Enhancer Expression in Nearly 9000 Patient Samples. Cell 2018, 173(2):386-399 e312.
  3. Alvarez MJ, Shen Y, Giorgi FM, Lachmann A, Ding BB, Ye BH, Califano A: Functional characterization of somatic mutations in cancer using network-based inference of protein activity. Nature genetics 2016, 48(8):838-847.

 

Other articles of Note on this Open Access Online Journal Include:

Issues in Personalized Medicine in Cancer: Intratumor Heterogeneity and Branched Evolution Revealed by Multiregion Sequencing

 

Read Full Post »

Live Notes, Real Time Conference Coverage AACR 2020: Tuesday June 23, 2020 3:00 PM-5:30 PM Educational Sessions

Reporter: Stephen J. Williams, PhD

Follow Live in Real Time using

#AACR20

@pharma_BI

@AACR

Register for FREE at https://www.aacr.org/

Tuesday, June 23

3:00 PM – 5:00 PM EDT

Virtual Educational Session
Tumor Biology, Bioinformatics and Systems Biology

The Clinical Proteomic Tumor Analysis Consortium: Resources and Data Dissemination

This session will provide information regarding methodologic and computational aspects of proteogenomic analysis of tumor samples, particularly in the context of clinical trials. Availability of comprehensive proteomic and matching genomic data for tumor samples characterized by the National Cancer Institute’s Clinical Proteomic Tumor Analysis Consortium (CPTAC) and The Cancer Genome Atlas (TCGA) program will be described, including data access procedures and informatic tools under development. Recent advances on mass spectrometry-based targeted assays for inclusion in clinical trials will also be discussed.

Amanda G Paulovich, Shankha Satpathy, Meenakshi Anurag, Bing Zhang, Steven A Carr

Methods and tools for comprehensive proteogenomic characterization of bulk tumor to needle core biopsies

Shankha Satpathy
  • TCGA has 11,000 cancers with >20,000 somatic alterations but only 128 proteins, as proteomics was still a young field
  • CPTAC is the NCI proteomic effort
  • Chemical labeling approach is now the method of choice for quantitative proteomics
  • Looked at ovarian and breast cancers: to measure PTMs like phosphorylation, the sample preparation is critical

 

Data access and informatics tools for proteogenomics analysis

Bing Zhang
  • Raw and processed data (raw MS data) with linked clinical data can be extracted in CPTAC
  • Python scripts are available for bioinformatic programming
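As a rough illustration of the kind of programmatic access described in these notes, the sketch below joins a protein-abundance table with a clinical annotation table using pandas. The tables, sample IDs, and column names are invented for the example; this is not the CPTAC data portal’s actual API or file layout.

```python
# Joining proteomic quantifications with clinical annotations (illustrative only).
import pandas as pd

# Hypothetical inputs: rows = samples; columns = proteins or clinical variables.
proteins = pd.DataFrame({"sample_id": ["S1", "S2", "S3"],
                         "ERBB2": [1.8, 0.2, 2.4],
                         "TP53": [-0.5, 1.1, 0.3]})
clinical = pd.DataFrame({"sample_id": ["S1", "S2", "S3"],
                         "subtype": ["HER2+", "Luminal A", "HER2+"],
                         "stage": ["II", "I", "III"]})

merged = proteins.merge(clinical, on="sample_id", how="inner")
print(merged.groupby("subtype")["ERBB2"].mean())   # mean ERBB2 abundance per subtype
```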

 

Pathways to clinical translation of mass spectrometry-based assays

Meenakshi Anurag

  • Using kinase inhibitor pulldown (KIP) assay to identify unique kinome profiles
  • Found single-strand break repair defects in endometrial luminal cases, especially with immune checkpoint prognostic tumors
  • Paper: JNCI 2019 analyzed 20,000 genes correlated with ET resistance in luminal B cases (selected for a list of 30 genes)
  • Validated in METABRIC dataset
  • KIP assay uses magnetic beads to pull out kinases to determine druggable kinases
  • Looked in xenografts and was able to pull out differential kinomes
  • Matched with PDX data so good clinical correlation
  • Were able to detect ESR1 fusion correlated with ER+ tumors

Tuesday, June 23

3:00 PM – 5:00 PM EDT

Virtual Educational Session
Survivorship

Artificial Intelligence and Machine Learning from Research to the Cancer Clinic

The adoption of omic technologies in the cancer clinic is giving rise to an increasing number of large-scale high-dimensional datasets recording multiple aspects of the disease. This creates the need for frameworks for translatable discovery and learning from such data. Like artificial intelligence (AI) and machine learning (ML) for the cancer lab, methods for the clinic need to (i) compare and integrate different data types; (ii) scale with data sizes; (iii) prove interpretable in terms of the known biology and batch effects underlying the data; and (iv) predict previously unknown experimentally verifiable mechanisms. Methods for the clinic, beyond the lab, also need to (v) produce accurate actionable recommendations; (vi) prove relevant to patient populations based upon small cohorts; and (vii) be validated in clinical trials. In this educational session we will present recent studies that demonstrate AI and ML translated to the cancer clinic, from prognosis and diagnosis to therapy.
NOTE: Dr. Fish’s talk is not eligible for CME credit to permit the free flow of information of the commercial interest employee participating.

Ron C. Anafi, Rick L. Stevens, Orly Alter, Guy Fish

Overview of AI approaches in cancer research and patient care

Rick L. Stevens
  • Deep learning is less likely to saturate as data increases
  • Deep learning attempts to learn multiple layers of information
  • The ultimate goal is prediction but this will be the greatest challenge for ML
  • ML models can integrate data validation and cross database validation
  • What limits the performance of cross validation is the internal noise of data (reproducibility)
  • Learning curves: it is not simply more data but more reproducible data that is important
  • Neural networks can outperform classical methods
  • Important to measure validation accuracy against the training set; class weighting can assist in developing the training data set, especially for unbalanced data sets
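The note on class weighting for unbalanced data sets can be illustrated with a short, generic scikit-learn sketch (not any model discussed in the session); the synthetic data and logistic-regression classifier are assumptions chosen only to show the effect of the class_weight option.

```python
# Class weighting on an imbalanced data set (illustrative sketch).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

for weighting in (None, "balanced"):
    clf = LogisticRegression(class_weight=weighting, max_iter=1000).fit(X_tr, y_tr)
    score = balanced_accuracy_score(y_te, clf.predict(X_te))
    print(f"class_weight={weighting}: balanced accuracy = {score:.3f}")
```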

Discovering genome-scale predictors of survival and response to treatment with multi-tensor decompositions

Orly Alter
  • Finding patterns using SVD component analysis. Gene and SVD patterns match 1:1
  • Comparative spectral decompositions can be used for global datasets
  • Validation of CNV data using this strategy
  • Found Ras, Shh and Notch pathways with altered CNV in glioblastoma which correlated with prognosis
  • These predictors were significantly better than independent prognostic indicators like age at diagnosis
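A minimal sketch of the kind of SVD-based pattern finding mentioned above, applied to a synthetic genes-by-patients matrix with one planted pattern. This is a generic illustration of singular value decomposition, not the comparative spectral or multi-tensor decompositions presented in the talk.

```python
# SVD on a synthetic genes-by-patients matrix (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
genes, patients = 500, 40
X = rng.normal(size=(genes, patients))
X[:50, :20] += 3.0          # plant one block pattern: 50 genes high in 20 patients

# Center each gene across patients, then decompose.
U, s, Vt = np.linalg.svd(X - X.mean(axis=1, keepdims=True), full_matrices=False)

print("fraction of variance in leading component:",
      round(float(s[0]**2 / np.sum(s**2)), 3))
print("leading patient pattern (sign sums, first 20 vs last 20 patients):",
      np.sign(Vt[0, :20]).sum(), "vs", np.sign(Vt[0, 20:]).sum())
```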

 

Identifying targets for cancer chronotherapy with unsupervised machine learning

Ron C. Anafi
  • Many clinicians have noticed that some patients do better when chemo is given at certain times of the day and felt there may be a circadian rhythm or chronotherapeutic effect with respect to side effects or with outcomes
  • ML used to determine if there is indeed this chronotherapy effect or can we use unstructured data to determine molecular rhythms?
  • Found a circadian transcription in human lung
  • Most cancer datasets come from a single clinical trial, so more trials might need to be conducted to take circadian rhythms into consideration

Stratifying patients by live-cell biomarkers with random-forest decision trees


Guy Fish CEO Cellanyx Diagnostics

 

Tuesday, June 23

3:00 PM – 5:00 PM EDT

Virtual Educational Session
Tumor Biology, Molecular and Cellular Biology/Genetics, Bioinformatics and Systems Biology, Prevention Research

The Wound Healing that Never Heals: The Tumor Microenvironment (TME) in Cancer Progression

This educational session focuses on the chronic wound healing, fibrosis, and cancer “triad.” It emphasizes the similarities and differences seen in these conditions and attempts to clarify why sustained fibrosis commonly supports tumorigenesis. Importance will be placed on cancer-associated fibroblasts (CAFs), vascularity, extracellular matrix (ECM), and chronic conditions like aging. Dr. Dvorak will provide an historical insight into the triad field focusing on the importance of vascular permeability. Dr. Stewart will explain how chronic inflammatory conditions, such as the aging tumor microenvironment (TME), drive cancer progression. The session will close with a review by Dr. Cukierman of the roles that CAFs and self-produced ECMs play in enabling the signaling reciprocity observed between fibrosis and cancer in solid epithelial cancers, such as pancreatic ductal adenocarcinoma.

Harold F Dvorak, Sheila A Stewart, Edna Cukierman

 

The importance of vascular permeability in tumor stroma generation and wound healing

Harold F Dvorak

Aging in the driver’s seat: Tumor progression and beyond

Sheila A Stewart

Why won’t CAFs stay normal?

Edna Cukierman

 

Tuesday, June 23

3:00 PM – 5:00 PM EDT

Other Articles on this Open Access  Online Journal on Cancer Conferences and Conference Coverage in Real Time Include

Press Coverage
Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 28, 2020 Symposium: New Drugs on the Horizon Part 3 12:30-1:25 PM
Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 28, 2020 Session on NCI Activities: COVID-19 and Cancer Research 5:20 PM
Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 28, 2020 Session on Evaluating Cancer Genomics from Normal Tissues Through Metastatic Disease 3:50 PM
Live Notes, Real Time Conference Coverage 2020 AACR Virtual Meeting April 28, 2020 Session on Novel Targets and Therapies 2:35 PM

Read Full Post »

Older Posts »