
Archive for the ‘BioIT: BioInformatics, NGS, Clinical & Translational, Pharmaceutical R&D Informatics, Clinical Genomics, Cancer Informatics’ Category

Genomic data can predict miscarriage and IVF failure

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

Infertility is a major reproductive health issue that affects about 12% of women of reproductive age in the United States. Aneuploidy in eggs accounts for a significant proportion of early miscarriages and in vitro fertilization failures. Recent studies have shown that genetic variants in several genes affect chromosome segregation fidelity and predispose women to a higher incidence of egg aneuploidy. However, the exact genetic causes of aneuploid egg production remain unclear, making it difficult to diagnose infertility based on individual genetic variants in the mother’s genome. Although age is a predictive factor for aneuploidy, it is not a highly accurate gauge because aneuploidy rates within individuals of the same age can vary dramatically.

Researchers described a technique combining genomic sequencing with machine-learning methods to predict the likelihood that a woman will experience a miscarriage because of egg aneuploidy, a term describing a human egg with an abnormal number of chromosomes. The scientists examined genetic samples from patients using a technique called “whole exome sequencing,” which allowed them to home in on the protein-coding sections of the vast human genome. They then created software using machine learning, an aspect of artificial intelligence in which programs can learn and make predictions without following specific instructions. To do so, the researchers developed algorithms and statistical models that analyzed and drew inferences from patterns in the genetic data.
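To make the approach concrete, the sketch below shows how exome-derived variant features could, in principle, be fed to a machine-learning classifier to produce a per-patient aneuploidy risk score. This is a hedged illustration only: the file name, feature columns, and model choice are assumptions for demonstration and do not reproduce the study’s actual pipeline.

```python
# Illustrative sketch only: the study's actual features, model, and data are not
# reproduced here. We assume a table of per-patient exome-derived variant counts
# (e.g., damaging variants per candidate gene) plus age, and a binary label for
# a high egg-aneuploidy rate.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical input file and column names
df = pd.read_csv("exome_variant_features.csv")   # one row per patient
X = df.drop(columns=["patient_id", "high_aneuploidy"])
y = df["high_aneuploidy"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# "Risk score" = predicted probability of producing aneuploid eggs
risk_scores = model.predict_proba(X_test)[:, 1]
print("ROC AUC:", roc_auc_score(y_test, risk_scores))
```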

As a result, the scientists were able to create a specific risk score based on a woman’s genome. The scientists also identified three genes (MCM5, FGGY, and DDX60L) that, when mutated, are highly associated with a risk of producing eggs with aneuploidy. The report thus demonstrated that sequencing data can be mined to predict patients’ aneuploidy risk, thereby improving clinical diagnosis. The candidate genes and pathways identified in the present study are promising targets for future aneuploidy studies. Identifying genetic variations with more predictive power will serve women and their treating clinicians with better information.

References:

https://medicalxpress-com.cdn.ampproject.org/c/s/medicalxpress.com/news/2022-06-miscarriage-failure-vitro-fertilization-genomic.amp

https://pubmed.ncbi.nlm.nih.gov/35347416/

https://pubmed.ncbi.nlm.nih.gov/31552087/

https://pubmed.ncbi.nlm.nih.gov/33193747/

https://pubmed.ncbi.nlm.nih.gov/33197264/

Read Full Post »

The Human Genome Gets Fully Sequenced: A Simplistic Take on Century Long Effort

 

Curator: Stephen J. Williams, PhD

Ever since the hard work by Rosalind Franklin to deduce the structure of DNA and the concurrent work by Francis Crick and James Watson, who modeled the basic building blocks of DNA, DNA has been considered the basic unit of heredity and life, with the “Central Dogma” (DNA to RNA to protein) at its core. These mid-twentieth-century discoveries helped drive the transformational shift of biological experimentation, from protein isolation and characterization to cloning protein-encoding genes to characterizing how the genes are expressed temporally, spatially, and contextually.

Rosalind Franklin, whose crystallographic data led to the determination of the structure of DNA. Shown on a 1953 Time cover as Time Person of the Year.

Dr. Francis Crick and James Watson in front of their model of the structure of DNA


Up to this point (1970s to mid-1980s), it was felt that genetic information was rather static, and the goal was still to understand and characterize protein structure and function, while an understanding of the underlying genetic information was more important for efforts like linkage analysis of genetic defects and as tools for the rapidly developing field of molecular biology.  But the development of the aforementioned molecular biology tools, including DNA cloning, sequencing, and synthesis, gave scientists the idea that a whole recording of the human genome might be possible and worth the effort.

How the Human Genome Project Expanded Our View of Genes, Genetic Material, and Biological Processes


From the Human Genome Project Information Archive

Source:  https://web.ornl.gov/sci/techresources/Human_Genome/project/hgp.shtml

History of the Human Genome Project

The Human Genome Project (HGP) refers to the international 13-year effort, formally begun in October 1990 and completed in 2003, to discover all the estimated 20,000-25,000 human genes and make them accessible for further biological study. Another project goal was to determine the complete sequence of the 3 billion DNA subunits (bases) in the human genome. As part of the HGP, parallel studies were carried out on selected model organisms such as the bacterium E. coli and the mouse to help develop the technology and interpret human gene function. The DOE Human Genome Program and the NIH National Human Genome Research Institute (NHGRI) together sponsored the U.S. Human Genome Project.

 

Please see the following for goals, timelines, and funding for this project

 

History of the Project

It is interesting to note that multiple pieces of government legislation are credited with funding such a massive project, including:

Project Enabling Legislation

  • The Atomic Energy Act of 1946 (P.L. 79-585) provided the initial charter for a comprehensive program of research and development related to the utilization of fissionable and radioactive materials for medical, biological, and health purposes.
  • The Atomic Energy Act of 1954 (P.L. 83-706) further authorized the AEC “to conduct research on the biologic effects of ionizing radiation.”
  • The Energy Reorganization Act of 1974 (P.L. 93-438) provided that responsibilities of the Energy Research and Development Administration (ERDA) shall include “engaging in and supporting environmental, biomedical, physical, and safety research related to the development of energy resources and utilization technologies.”
  • The Federal Non-nuclear Energy Research and Development Act of 1974 (P.L. 93-577) authorized ERDA to conduct a comprehensive non-nuclear energy research, development, and demonstration program to include the environmental and social consequences of the various technologies.
  • The DOE Organization Act of 1977 (P.L. 95-91) mandated the Department “to assure incorporation of national environmental protection goals in the formulation and implementation of energy programs; and to advance the goal of restoring, protecting, and enhancing environmental quality, and assuring public health and safety,” and to conduct “a comprehensive program of research and development on the environmental effects of energy technology and program.”

It should also be emphasized that the project was not JUST funded through the NIH but also through the Department of Energy.

Project Sponsors

For a great read on Dr. Craig Venter, with interviews with the scientist, see Dr. Larry Bernstein’s excellent post The Human Genome Project.

 

By 2003 we had gained much information about the structure of DNA, genes, exons, and introns, which allowed us to gain more insight into the diversity of genetic material and the underlying protein-coding genes as well as many of the gene-expression regulatory elements.  However, there was much uninvestigated material dispersed between genes, the then-called “junk DNA,” and up to 2003 not much was known about the function of this ‘junk DNA’.  In addition there were two other problems:

  • The reference DNA used was actually from one person (Craig Venter, who was a lead initiator of the project)
  • Multiple gaps in the DNA sequence existed, and needed to be filled in

It is important to note that a tremendous amount of protein diversity has been revealed by both transcriptomic and proteomic studies.  Although only about 20,000 to 25,000 coding genes exist, the human proteome contains about 600,000 proteoforms (due to alternative splicing, posttranslational modifications, etc.).

This expansion of proteoforms, via alternative splicing into isoforms and gene duplication into paralogs, has been shown to have major effects on, for example, cellular signaling pathways (1).

However, it has just recently been reported that the FULL human genome has been sequenced, complete and verified.  This was the focus of a recent issue of the journal Science.

Source: https://www.science.org/doi/10.1126/science.abj6987

Abstract

Since its initial release in 2000, the human reference genome has covered only the euchromatic fraction of the genome, leaving important heterochromatic regions unfinished. Addressing the remaining 8% of the genome, the Telomere-to-Telomere (T2T) Consortium presents a complete 3.055 billion–base pair sequence of a human genome, T2T-CHM13, that includes gapless assemblies for all chromosomes except Y, corrects errors in the prior references, and introduces nearly 200 million base pairs of sequence containing 1956 gene predictions, 99 of which are predicted to be protein coding. The completed regions include all centromeric satellite arrays, recent segmental duplications, and the short arms of all five acrocentric chromosomes, unlocking these complex regions of the genome to variational and functional studies.

 

The current human reference genome was released by the Genome Reference Consortium (GRC) in 2013 and most recently patched in 2019 (GRCh38.p13) (1). This reference traces its origin to the publicly funded Human Genome Project (2) and has been continually improved over the past two decades. Unlike the competing Celera effort (3) and most modern sequencing projects based on “shotgun” sequence assembly (4), the GRC assembly was constructed from sequenced bacterial artificial chromosomes (BACs) that were ordered and oriented along the human genome by means of radiation hybrid, genetic linkage, and fingerprint maps. However, limitations of BAC cloning led to an underrepresentation of repetitive sequences, and the opportunistic assembly of BACs derived from multiple individuals resulted in a mosaic of haplotypes. As a result, several GRC assembly gaps are unsolvable because of incompatible structural polymorphisms on their flanks, and many other repetitive and polymorphic regions were left unfinished or incorrectly assembled (5).

 

Fig. 1. Summary of the complete T2T-CHM13 human genome assembly.
(A) Ideogram of T2T-CHM13v1.1 assembly features. For each chromosome (chr), the following information is provided from bottom to top: gaps and issues in GRCh38 fixed by CHM13 overlaid with the density of genes exclusive to CHM13 in red; segmental duplications (SDs) (42) and centromeric satellites (CenSat) (30); and CHM13 ancestry predictions (EUR, European; SAS, South Asian; EAS, East Asian; AMR, ad-mixed American). Bottom scale is measured in Mbp. (B and C) Additional (nonsyntenic) bases in the CHM13 assembly relative to GRCh38 per chromosome, with the acrocentrics highlighted in black (B) and by sequence type (C). (Note that the CenSat and SD annotations overlap.) RepMask, RepeatMasker. (D) Total nongap bases in UCSC reference genome releases dating back to September 2000 (hg4) and ending with T2T-CHM13 in 2021. Mt/Y/Ns, mitochondria, chrY, and gaps.

Note in Figure 1D the exponential growth in genetic information.

Also very important is the ability to determine all the paralogs, isoforms, areas of potential epigenetic regulation, gene duplications, and transposable elements that exist within the human genome.
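As a concrete, hedged illustration of the “non-gap bases” metric plotted in Fig. 1D, the short sketch below tallies non-N bases per chromosome from an assembly FASTA using Biopython. The file name is an assumption; the T2T-CHM13 assembly itself is distributed by the consortium and via the UCSC browser hub mentioned below.

```python
# Minimal sketch (not from the paper): count non-gap (non-N) bases per chromosome
# in a locally downloaded assembly FASTA. The file name below is illustrative.
from Bio import SeqIO

total_nongap = 0
for record in SeqIO.parse("chm13_assembly.fasta", "fasta"):
    seq = str(record.seq).upper()
    nongap = len(seq) - seq.count("N")   # assembly gaps are written as runs of N
    total_nongap += nongap
    print(f"{record.id}\t{nongap:,} non-gap bases")

print(f"Total non-gap bases: {total_nongap:,}")
```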

Analyses and resources

A number of companion studies were carried out to characterize the complete sequence of a human genome, including comprehensive analyses of centromeric satellites (30), segmental duplications (42), transcriptional (49) and epigenetic profiles (29), mobile elements (49), and variant calls (25). Up to 99% of the complete CHM13 genome can be confidently mapped with long-read sequencing, opening these regions of the genome to functional and variational analysis (23) (fig. S38 and table S14). We have produced a rich collection of annotations and omics datasets for CHM13—including RNA sequencing (RNA-seq) (30), Iso-seq (21), precision run-on sequencing (PRO-seq) (49), cleavage under targets and release using nuclease (CUT&RUN) (30), and ONT methylation (29) experiments—and have made these datasets available via a centralized University of California, Santa Cruz (UCSC), Assembly Hub genome browser (54).

 

To highlight the utility of these genetic and epigenetic resources mapped to a complete human genome, we provide the example of a segmentally duplicated region of the chromosome 4q subtelomere that is associated with facioscapulohumeral muscular dystrophy (FSHD) (55). This region includes FSHD region gene 1 (FRG1), FSHD region gene 2 (FRG2), and an intervening D4Z4 macrosatellite repeat containing the double homeobox 4 (DUX4) gene that has been implicated in the etiology of FSHD (56). Numerous duplications of this region throughout the genome have complicated past genetic analyses of FSHD.

The T2T-CHM13 assembly reveals 23 paralogs of FRG1 spread across all acrocentric chromosomes as well as chromosomes 9 and 20 (Fig. 5A). This gene appears to have undergone recent amplification in the great apes (57), and approximate locations of FRG1 paralogs were previously identified by FISH (58). However, only nine FRG1 paralogs are found in GRCh38, hampering sequence-based analysis.

Future of the human reference genome

The T2T-CHM13 assembly adds five full chromosome arms and more additional sequence than any genome reference release in the past 20 years (Fig. 1D). This 8% of the genome has not been overlooked because of a lack of importance but rather because of technological limitations. High-accuracy long-read sequencing has finally removed this technological barrier, enabling comprehensive studies of genomic variation across the entire human genome, which we expect to drive future discovery in human genomic health and disease. Such studies will necessarily require a complete and accurate human reference genome.

CHM13 lacks a Y chromosome, and homozygous Y-bearing CHMs are nonviable, so a different sample type will be required to complete this last remaining chromosome. However, given its haploid nature, it should be possible to assemble the Y chromosome from a male sample using the same methods described here and supplement the T2T-CHM13 reference assembly with a Y chromosome as needed.

Extending beyond the human reference genome, large-scale resequencing projects have revealed genomic variation across human populations. Our reanalyses of the 1KGP (25) and SGDP (42) datasets have already shown the advantages of T2T-CHM13, even for short-read analyses. However, these studies give only a glimpse of the extensive structural variation that lies within the most repetitive regions of the genome assembled here. Long-read resequencing studies are now needed to comprehensively survey polymorphic variation and reveal any phenotypic associations within these regions.

Although CHM13 represents a complete human haplotype, it does not capture the full diversity of human genetic variation. To address this bias, the Human Pangenome Reference Consortium (59) has joined with the T2T Consortium to build a collection of high-quality reference haplotypes from a diverse set of samples. Ideally, all genomes could be assembled at the quality achieved here, but automated T2T assembly of diploid genomes presents a difficult challenge that will require continued development. Until this goal is realized, and any human genome can be completely sequenced without error, the T2T-CHM13 assembly represents a more complete, representative, and accurate reference than GRCh38.

 

This paper was the focus of a Time article and the basis for naming the lead authors to the TIME100 list of the most influential people of the year.

From TIME

The Human Genome Is Finally Fully Sequenced

Source: https://time.com/6163452/human-genome-fully-sequenced/

 

The first human genome was mapped in 2001 as part of the Human Genome Project, but researchers knew it was neither complete nor completely accurate. Now, scientists have produced the most completely sequenced human genome to date, filling in gaps and correcting mistakes in the previous version.

The sequence is the most complete reference genome for any mammal so far. The findings from six new papers describing the genome, which were published in Science, should lead to a deeper understanding of human evolution and potentially reveal new targets for addressing a host of diseases.

A more precise human genome

“The Human Genome Project relied on DNA obtained through blood draws; that was the technology at the time,” says Adam Phillippy, head of genome informatics at the National Institutes of Health’s National Human Genome Research Institute (NHGRI) and senior author of one of the new papers. “The techniques at the time introduced errors and gaps that have persisted all of these years. It’s nice now to fill in those gaps and correct those mistakes.”

“We always knew there were parts missing, but I don’t think any of us appreciated how extensive they were, or how interesting,” says Michael Schatz, professor of computer science and biology at Johns Hopkins University and another senior author of the same paper.

The work is the result of the Telomere to Telomere consortium, which is supported by NHGRI and involves genetic and computational biology experts from dozens of institutes around the world. The group focused on filling in the 8% of the human genome that remained a genetic black hole from the first draft sequence. Since then, geneticists have been trying to add those missing portions bit by bit. The latest group of studies identifies about an entire chromosome’s worth of new sequences, representing 200 million more base pairs (the letters making up the genome) and 1,956 new genes.

 

NOTE: In 2001 many scientists postulated that there were as many as 100,000 protein-coding human genes; however, we now understand there are about 20,000 to 25,000 human coding genes.  This does not, however, take into account the additional diversity obtained from alternative splicing, gene duplications, SNPs, and chromosomal rearrangements.

Scientists were also able to sequence the long stretches of DNA that contained repeated sequences, which genetic experts originally thought were similar to copying errors and dismissed as so-called “junk DNA”. These repeated sequences, however, may play roles in certain human diseases. “Just because a sequence is repetitive doesn’t mean it’s junk,” says Eichler. He points out that critical genes are embedded in these repeated regions—genes that contribute to machinery that creates proteins, genes that dictate how cells divide and split their DNA evenly into their two daughter cells, and human-specific genes that might distinguish the human species from our closest evolutionary relatives, the primates. In one of the papers, for example, researchers found that primates have different numbers of copies of these repeated regions than humans, and that they appear in different parts of the genome.

“These are some of the most important functions that are essential to live, and for making us human,” says Eichler. “Clearly, if you get rid of these genes, you don’t live. That’s not junk to me.”

Deciphering what these repeated sections mean, if anything, and how the sequences of previously unsequenced regions like the centromeres will translate to new therapies or better understanding of human disease, is just starting, says Deanna Church, a vice president at Inscripta, a genome engineering company who wrote a commentary accompanying the scientific articles. Having the full sequence of a human genome is different from decoding it; she notes that currently, of people with suspected genetic disorders whose genomes are sequenced, about half can be traced to specific changes in their DNA. That means much of what the human genome does still remains a mystery.

The investigators in the Telomere-to-Telomere Consortium were named to the TIME100 list of the most influential people of the year.

Michael Schatz, Karen Miga, Evan Eichler, and Adam Phillippy

Illustration by Brian Lutz for Time (Source Photos: Will Kirk—Johns Hopkins University; Nick Gonzales—UC Santa Cruz; Patrick Kehoe; National Human Genome Research Institute)

BY JENNIFER DOUDNA

MAY 23, 2022 6:08 AM EDT

Ever since the draft of the human genome became available in 2001, there has been a nagging question about the genome’s “dark matter”—the parts of the map that were missed the first time through, and what they contained. Now, thanks to Adam Phillippy, Karen Miga, Evan Eichler, Michael Schatz, and the entire Telomere-to-Telomere Consortium (T2T) of scientists that they led, we can see the full map of the human genomic landscape—and there’s much to explore.

In the scientific community, there wasn’t a consensus that mapping these missing parts was necessary. Some in the field felt there was already plenty to do using the data in hand. In addition, overcoming the technical challenges to getting the missing information wasn’t possible until recently. But the more we learn about the genome, the more we understand that every piece of the puzzle is meaningful.

I admire the T2T group’s willingness to grapple with the technical demands of this project and their persistence in expanding the genome map into uncharted territory. The complete human genome sequence is an invaluable resource that may provide new insights into the origin of diseases and how we can treat them. It also offers the most complete look yet at the genetic script underlying the very nature of who we are as human beings.

Doudna is a biochemist and winner of the 2020 Nobel Prize in Chemistry

Source: https://time.com/collection/100-most-influential-people-2022/6177818/evan-eichler-karen-miga-adam-phillippy-michael-schatz/

Other articles on the Human Genome Project and Junk DNA in this Open Access Scientific Journal Include:

 

International Award for Human Genome Project

 

Cracking the Genome – Inside the Race to Unlock Human DNA – quotes in newspapers

 

The Human Genome Project

 

Junk DNA and Breast Cancer

 

A Perspective on Personalized Medicine


Additional References

 

  1. P. Scalia, A. Giordano, C. Martini, S. J. Williams, Isoform- and Paralog-Switching in IR-Signaling: When Diabetes Opens the Gates to Cancer. Biomolecules 10 (Nov 30, 2020).


Read Full Post »

Relevance of Twitter.com forthcoming Payment System for Scientific Content Promotion and Monetization

Highlighted Text in BLUE, BLACK, GREEN, RED by Aviva Lev-Ari, PhD, RN


Gian M. Volpicelli

SENIOR WRITER

Gian M. Volpicelli is a senior writer at WIRED, where he covers cryptocurrency, decentralization, politics, and technology regulation. He received a master’s degree in journalism from City University of London after studying politics and international relations in Rome. He lives in London.

SOURCE

https://www.wired.com/story/twitter-crypto-strategy/

 

BUSINESS

APR 5, 2022 7:00 AM

What Twitter Is Really Planning for Crypto

The duo behind Twitter Crypto say NFT profile pics and crypto tipping are just the beginning.

 

YOU MIGHT HAVE heard of crypto Twitter, the corner of the social network where accounts have Bored Apes as profile pictures, posts are rife with talk of tokens, blockchains, and buying the Bitcoin dip, and Elon Musk is venerated.

Then again, you might have heard of Twitter Crypto, the business unit devoted to developing the social network’s strategy for cryptocurrency, blockchains, and that grab-bag of decentralized technologies falling under the rubric of Web3. The team’s unveiling came in November 2021 via a tweet from the newly hired project lead, Tess Rinearson, a Berlin-based American computer scientist whose career includes stints at blockchain companies such as Tendermint and Interchain.

Rinearson joined Twitter at a crucial moment. Jack Dorsey, the vociferously pro-Bitcoin company CEO, would leave a few weeks later, to be replaced by CTO Parag Agrawal. Agrawal had played an instrumental role in Bluesky, a Twitter-backed project to create a protocol—possibly with blockchain components—to build decentralized social networks.

As crypto went mainstream globally and crypto Twitter burgeoned, the company tried to dominate the space. Under the stewardship of product manager Esther Crawford, in September 2021 Twitter introduced a “tipping” feature that helps creators on Twitter to receive Bitcoin contributions through Lightning—a network for fast Bitcoin payments. In January, Twitter allowed subscribers of Twitter’s premium service, Twitter Blue, to flaunt their NFTs as hexagonal profile pictures, through a partnership with NFT marketplace OpenSea.

Twitter Crypto is just getting started. While Rinearson works with people all across the company, her team is still under 10 people, although more hires are in the pipeline, judging from recent job postings. So it’s worth asking what is next. I caught up over a video call with Rinearson and Crawford to talk about where Twitter Crypto is headed. 

The conversation has been edited for clarity and brevity.

WIRED: Let’s start with the basics. Why does Twitter have a crypto unit?

Tess Rinearson: We really see crypto—and what we’re now calling Web3— as something that could be this incredibly powerful tool that would unlock a lot for our users. The whole crypto world is like an internet of money, an internet of value that our users can potentially tap into to create new ways of owning their content, monetizing their content, owning their own identity, and even relating to each other.

One of my goals is to build Twitter’s crypto unit in such a way that it caters to communities that go beyond just that core crypto community. I love the crypto Twitter space, obviously—I’m a very proud member of the crypto community. And at the same time, I recognize that people who are really deep in the crypto space may not relate to concepts, like for instance blockchain’s immutability, in the same way that someone who’s less intensely involved might feel about those things.

So a lot of what we try to think about is, what can we learn from this group of people who are super engaged and really, really, creative? And then, how can we translate some of that stuff into a format or a mechanism or a product that’s a little bit more accessible to people who don’t have that background?

How are you learning from crypto Twitter? Do you just follow a lot of accounts, do you actually talk to them? How does that learning experience play out?

Esther Crawford: It’s a combination. We have an amazing research team that sets up panel interviews and surveys. But we’re also embedded in the community itself and follow a bunch of accounts, sit on Twitter spaces, go to conferences and events, engage with customers in that way. That’s the way the research piece of it works. But we also encounter it as end users: Twitter is the discovery platform today for all things crypto.

One of the things we do differently at Twitter is we build out in the open. And so this means having dialog with customers in real time—designers will take something that is very early-stage and post it as a tweet and then get real-time feedback. They’ll hop into spaces with product managers and engineering managers, talk about it live with real customers, and then incorporate that feedback into the designs and what ultimately we end up launching.

Rinearson: One of the things I wanted to make sure of before I came to Twitter was to know that we would be able to build features in the open and solicit feedback and show rough drafts. And so this is something I asked Parag Agrawal, who’s now the CEO, and was the person who hired me. Pretty early in the job interview process, I said this was going to be really important, and he said, “If you think it’s important to the success of this work, great, do it—thumbs up.” He also shares that openness.

As you said, Tess, you come from crypto. When you were out there, what did you think Twitter was getting right? What did you think Twitter was getting wrong?

Rinearson: I had been a Twitter power user for a really long time. The thing that I saw was a lot of aesthetic alignment between how Twitter exists in the world and the way that crypto exists in the world. Twitter has decentralized user experiences in its DNA. And, this is a bit cheesy, but people use Twitter sometimes in ways that they use a public blockchain, as a public database where everything’s time stamped and people can agree on what happened.

And for most people it’s open, it is there for public conversation. And then obviously it was also the place—a place—where the crypto community really found its footing. I think it’s been a place where an enormous amount of discovery happens, and education and learning for the whole community. I joined when there were some murmurings about Twitter starting to do crypto stuff, mostly stuff Esther had led actually, and I was excited to see where it was going. And then Twitter’s investment in Bluesky also gave me a lot of confidence.

Let’s talk about the two main things you have delivered so far: The crypto tipping feature and NFT pictures. Can you give me just a potted history of how each came about and why?

Crawford: Those are our first set of early explorations, and the reason why we started there was we really wanted to make sure that what we built benefited creators, their audiences, and then all the conversations that are happening on Twitter. For creators in particular, we know that they rely on platforms like Twitter to monetize and earn a living, and not all people are able to use traditional currencies. Not everybody has a traditional banking account setup.

And so we wanted to provide an opportunity for a borderless payment solution, and that’s why we decided to go ahead and use Bitcoin Lightning as our first big integration. One of the reasons we chose Bitcoin Lightning was also because of the low transaction fees. And we have Bitcoin and Ethereum addresses that you can also put in there [on your Twitter “tipping jar”]. We noticed that people were actually adding information about their crypto wallet addresses in their profiles. And so we wanted to make a more seamless experience, so that people could just tip through the platform, so that it felt native.

With NFT profile pictures, the way that came about was, again, looking at user behavior. People were adding NFTs that they owned as avatars, but you didn’t really know whether they owned those NFTs or not. So we decided to go ahead and build out that feature so that one could actually prove ownership.
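NOTE: The on-chain part of an “NFT ownership” check can be illustrated with a short, hedged sketch: calling the standard ERC-721 ownerOf function and comparing the result to the wallet a user has linked. This is not Twitter’s actual implementation (which relied on an OpenSea integration); the RPC endpoint, contract address, and token ID below are placeholders.

```python
# Generic sketch of verifying NFT ownership on Ethereum via the ERC-721 standard.
# Not Twitter's implementation; endpoint, address, and token ID are placeholders.
from web3 import Web3

ERC721_ABI = [{
    "name": "ownerOf",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "tokenId", "type": "uint256"}],
    "outputs": [{"name": "", "type": "address"}],
}]

w3 = Web3(Web3.HTTPProvider("https://example-ethereum-rpc.invalid"))  # placeholder RPC
nft = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",  # placeholder NFT contract
    abi=ERC721_ABI,
)

claimed_owner = "0x0000000000000000000000000000000000000001"  # wallet linked to the profile
actual_owner = nft.functions.ownerOf(1234).call()             # on-chain owner of token 1234

print("ownership verified:", actual_owner.lower() == claimed_owner.lower())
```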

That’s similar to how other things developed on Twitter, right? The hashtag, or even even the retweet, were initially just things users invented—by adding the # sign, or by pasting other users’ tweets—and then Twitter made that a feature.

Crawford: Yeah, exactly. Many of the best ideas come from watching user behavior on the platform, and then we just productize that.

Rinearson: Sometimes I’ve heard people call that the “help wanted signs,” and like, keeping an eye out for the “help wanted signs” across the platform. The NFT profile picture was a clear example of that.

How do all these things—these two things and possibly other crypto features coming further down the line—really help Twitter’s bottom line?

Crawford: With creator monetization our goal was to help creators get paid, not Twitter. But Twitter takes a really small cut of earnings. For more successful creators, we take a larger percentage. The way we think about this is, it is part of our revenue diversification.

Twitter today is a wholly ad-based business. In the future we imagine Twitter making money from a variety of different product areas. So Twitter Blue is one of those products—you can pay $2.99 a month and you get additional features, such as the NFT profile pictures. We really think that revenue diversification sits across a variety of areas, and creator monetization is one really small component of that.

As you said, these are just early experiments. Where is Twitter Crypto going next? What’s your vision for crypto technology’s role within Twitter?

Rinearson: The real trick here is to find the right parts of Twitter to decentralize, and to not try to decentralize everything at once—or, you know, make every user suddenly responsible for taking care of some private keys or something like that.

We have to find the right ways to open up some access to a decentralized economic layer, or give people ways that they can take their identity with them, without relying on a single centralized service.

We’re really early in these explorations, and even looking at things like Bitcoin tipping or the NFT profile pictures—we view those features as experiments themselves in a lot of ways and learning experiences. We’re learning things about how our users relate to these concepts, what they understand about them, what they find confusing, and what’s most useful to them. We really want to try to use this technology to bring utility to people and you know, not just like, sprinkle a little blockchain on it for the sake of it. So creator monetization is an area that I’m really excited about because I think there’s a really clear path forward. But again, we’re looking beyond that: We’re also looking at using crypto technology in fields like [digital] identity and [digital] ownership space and also figuring out how we can better serve crypto communities on the platform.

Are you going to put Twitter verified users’ blue ticks on a blockchain, then?

[Laughter]

No?

[More laughter]

OK, moving on. How does the kind of work you do dovetail with Bluesky’s plan to create a protocol for a decentralized social media platform? Is there any synergy there?

Rinearson: I have known Jay [Graber], the Bluesky lead, for a long time, and she and I are in pretty close contact. We check in with each other regularly and talk a lot about problems we might have in common that we’ll both need to solve. There’s an overlap looking at things in the identity area, but at the end of the day, it’s a separate project. She’s pretty focused on hiring her team, and they’re very focused on building a prototype of a protocol. That is different from what Esther and I are thinking about, which is like: There are all these blockchain protocols that exist, and we need to figure out how to make them useful and accessible for real people.

And when I say “real people,” I mean that in a sort of tongue-in-cheek contrast to hardcore crypto nerds like me. Jay is thinking much more about building for people who are creating decentralized networks. That is a very different focus area. Beyond that, I would just say it’s too early to say what Bluesky will mean for Twitter as a product. We are in touch, we have aligned values. But at the end of the day—separate teams.

Why is a centralized Silicon Valley company like Twitter the right place to start to bring more decentralization to internet users? Don’t we have just to start from scratch, build a new platform that is already decentralized?

Rinearson: I started in crypto in 2015, and I have a very vivid memory from those years of watching some of my coworkers—crypto engineers—trying to figure out how to secure some of their Bitcoin like before one of the Bitcoin forks [in which the Bitcoin blockchain split, creating new currencies], and they were panicking and freaking out. I thought there was no way that a normal person would be able to handle this in a way that would be safe. And so I was a little bit disillusioned with crypto, especially from a consumer perspective.

And then last year, I started seeing more interest from people whom I’ve known for a long time and weren’t crypto people. They were just starting to perk their heads up and take notice and start creating NFTs or start talking about DAOs. And I thought that that was interesting, that we were coming around a corner, and it might be time to start thinking about what this could mean for people beyond that hardcore crypto group.

And that was when Twitter reached out. You know, I don’t think that just any centralized platform would be able to bring crypto to the masses, so to speak. But I think Twitter has the right stuff. I think you have to meet people where they are with new technologies: find ways to onboard them and bring them along, show them what this might mean for them, and make things accessible. And it’s really, really hard to do that with just a protocol. You need to have some kind of community, you need to have some kind of user base, you need to have some kind of platform. And Twitter’s just right there.

I don’t think I would say that a centralized platform is definitely the way to “bring crypto to the masses.” I do think that Twitter is the way to do it.

But why do the masses need crypto right now?

Rinearson: I don’t know that anyone  needs crypto, and our goal is not to get everyone into crypto. Let’s be clear about that. But I do think that crypto is a potentially very powerful tool for people. And so I think what we are trying to do is show people how powerful it is and unlock those possibilities. It’s also possible that we create some products and features, where people actually don’t even really know what’s happening under the hood.

Like maybe we’re using crypto as a payment rail or again as an identity layer—users don’t necessarily need to know all of those implementation details. And that’s actually something we come back to a lot: What level of abstraction are we talking about with users? What story are we telling them about what’s happening under the hood? But yeah, I would just like to reiterate that the goal is not to just shovel everyone into crypto. We want to provide value for people.

Do you think there is a case for Twitter to launch its own cryptocurrency— a Twittercoin?

Rinearson: I think there’s a case for a lot of things—honestly, there’s a case for a lot of things. We’re trying to think really, really broadly about it.

Crawford: We’re actively exploring a lot of things. It’s not something we would be making an announcement about.

Rinearson: I think it is really important to stress that when you say “Twittercoin” you probably have a slightly different idea of what it is than we do. And are we exploring those ideas? Yes, we want to think about all of them. Do we have road maps for them? No. But are we trying to think about things really creatively and be really, really open-minded? Yes. We have this new economic technology that we think could unlock a lot of things for people. And we want to go down a bunch of rabbit holes and see what we come up with.


 

Highlighted Text in BLUE, BLACK, GREEN, RED by Aviva Lev-Ari, PhD, RN

SOURCE

https://www.wired.com/story/twitter-crypto-strategy/

Read Full Post »

Data Science: Step by Step – A Resource for LPBI Group One-Year Internship in IT, IS, DS

Reporter: Aviva Lev-Ari, PhD, RN

9 free Harvard courses: learning Data Science

In this article, I will list 9 free Harvard courses that you can take to learn data science from scratch. Feel free to skip any of these courses if you already possess knowledge of that subject.

Step 1: Programming

The first step you should take when learning data science is to learn to code. You can choose to do this with your choice of programming language, ideally Python or R.

If you’d like to learn R, Harvard offers an introductory R course created specifically for data science learners, called Data Science: R Basics.

This program will take you through R concepts like variables, data types, vector arithmetic, and indexing. You will also learn to wrangle data with libraries like dplyr and create plots to visualize data.

If you prefer Python, you can choose to take CS50’s Introduction to Programming with Python offered for free by Harvard. In this course, you will learn concepts like functions, arguments, variables, data types, conditional statements, loops, objects, methods, and more.

Both programs above are self-paced. However, the Python course is more detailed than the R program, and requires a longer time commitment to complete. Also, the rest of the courses in this roadmap are taught in R, so it might be worth learning R to be able to follow along easily.
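For a flavor of the concepts both introductory courses cover (variables, data types, conditionals, loops, and functions), here is a minimal Python snippet; it is illustrative only and not taken from either course.

```python
# A few intro-programming basics: variables, data types, conditionals, loops, functions.
def describe_number(n: int) -> str:
    """Return a short description of an integer."""
    if n % 2 == 0:
        return f"{n} is even"
    return f"{n} is odd"

numbers = [1, 2, 3, 4, 5]          # a list, one of Python's core data types
for n in numbers:
    print(describe_number(n))

total = sum(numbers)
print("sum:", total, "| mean:", total / len(numbers))
```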

Step 2: Data Visualization

Visualization is one of the most powerful techniques with which you can translate your findings in data to another person.

With Harvard’s Data Visualization program, you will learn to build visualizations using the ggplot2 library in R, along with the principles of communicating data-driven insights.
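The course itself teaches ggplot2 in R; as a rough, language-swapped sketch of the same idea (encode data as visual elements, label the axes, communicate one message per plot), here is an analogous histogram in Python with matplotlib, using simulated data.

```python
# Analogous sketch in Python (the Harvard course uses ggplot2 in R):
# plot the distribution of a simulated variable.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
heights = rng.normal(loc=170, scale=10, size=1000)   # simulated data

plt.hist(heights, bins=30, edgecolor="black")
plt.xlabel("Height (cm)")
plt.ylabel("Count")
plt.title("Distribution of simulated heights")
plt.show()
```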

Step 3: Probability

In this course, you will learn essential probability concepts that are fundamental to conducting statistical tests on data. The topics taught include random variables, independence, Monte Carlo simulations, expected values, standard errors, and the Central Limit Theorem.

The concepts above will be introduced with the help of a case study, which means that you will be able to apply everything you learned to an actual real-world dataset.
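A hedged Monte Carlo sketch of the kind of idea the course covers: sample means from a skewed distribution settle around the expected value, with a standard error close to the Central Limit Theorem prediction. The numbers are illustrative.

```python
# Monte Carlo illustration of the Central Limit Theorem: means of repeated
# samples from a skewed (exponential) distribution are approximately normal,
# with standard error close to sigma / sqrt(n).
import numpy as np

rng = np.random.default_rng(42)
n, trials = 50, 10_000

sample_means = rng.exponential(scale=2.0, size=(trials, n)).mean(axis=1)

print("expected value of the mean: ", 2.0)
print("simulated mean of means:    ", round(sample_means.mean(), 3))
print("theoretical standard error: ", round(2.0 / np.sqrt(n), 3))
print("simulated standard error:   ", round(sample_means.std(ddof=1), 3))
```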

Step 4: Statistics

After learning probability, you can take this course to learn the fundamentals of statistical inference and modeling. This program will teach you to define population estimates and margins of error, introduce you to Bayesian statistics, and provide you with the fundamentals of predictive modeling.
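As a small, illustrative example of a population estimate with a margin of error (not course material), the sketch below computes a 95% confidence interval for a proportion from a simulated poll, using the normal approximation.

```python
# Sketch of a population estimate with a margin of error: a 95% confidence
# interval for a proportion from a simulated poll.
import numpy as np

rng = np.random.default_rng(1)
true_p, n = 0.52, 1_000
sample = rng.random(n) < true_p          # simulated yes/no responses

p_hat = sample.mean()
se = np.sqrt(p_hat * (1 - p_hat) / n)
margin = 1.96 * se                        # normal approximation

print(f"estimate: {p_hat:.3f} ± {margin:.3f} (95% CI)")
```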

Step 5: Productivity Tools (Optional)

I’ve included this project management course as optional since it isn’t directly related to learning data science. Rather, you will be taught to use Unix/Linux for file management, GitHub and version control, and how to create reports in R.

The ability to do the above will save you a lot of time and help you better manage end-to-end data science projects.

Step 6: Data Pre-Processing

The next course in this list is called Data Wrangling, and will teach you to prepare data and convert it into a format that is easily digestible by machine learning models.

You will learn to import data into R, tidy data, process string data, parse HTML, work with date-time objects, and mine text.

As a data scientist, you often need to extract data that is publicly available on the Internet in the form of a PDF document, HTML webpage, or a Tweet. You will not always be presented with clean, formatted data in a CSV file or Excel sheet.

By the end of this course, you will learn to wrangle and clean data to come up with critical insights from it.
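The course works in R’s tidyverse; as a hedged, Python-flavored sketch of the same wrangling steps (importing, parsing dates, cleaning strings, reshaping into tidy form), here is a pandas example. The file and column names are assumptions for illustration.

```python
# Data-wrangling sketch in Python with pandas (the Harvard course uses R's
# tidyverse); file and column names are illustrative.
import pandas as pd

df = pd.read_csv("raw_survey.csv")                     # hypothetical raw export

df["signup_date"] = pd.to_datetime(df["signup_date"])  # parse date-time strings
df["country"] = df["country"].str.strip().str.title()  # clean string data
df = df.dropna(subset=["age"]).query("age > 0")        # drop malformed rows

# Tidy / reshape: one row per (user, month) observation
tidy = df.melt(id_vars=["user_id"], value_vars=["jan_spend", "feb_spend"],
               var_name="month", value_name="spend")
print(tidy.head())
```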

Step 7: Linear Regression

Linear regression is a machine learning technique that is used to model a linear relationship between two or more variables. It can also be used to identify and adjust the effect of confounding variables.

This course will teach you the theory behind linear regression models, how to examine the relationship between two variables, and how confounding variables can be detected and removed before building a machine learning algorithm.
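A minimal sketch of the core idea, on simulated data: including a confounder as a covariate lets ordinary least squares recover the true effect of x. This uses statsmodels in Python rather than the course’s R, and the coefficients are invented for illustration.

```python
# Sketch of a linear regression that adjusts for a confounder, using
# statsmodels on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
confounder = rng.normal(size=n)
x = 0.8 * confounder + rng.normal(size=n)        # x is partly driven by the confounder
y = 2.0 * x + 1.5 * confounder + rng.normal(size=n)

# Adjusted model: include the confounder as a covariate
X = sm.add_constant(np.column_stack([x, confounder]))
model = sm.OLS(y, X).fit()
print(model.params)   # intercept, effect of x (~2.0), effect of confounder (~1.5)
```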

Step 8: Machine Learning

Finally, the course you’ve probably been waiting for! Harvard’s machine learning program will teach you the basics of machine learning, techniques to mitigate overfitting, supervised and unsupervised modelling approaches, and recommendation systems.
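As an illustrative taste (not course code), the sketch below fits a supervised classifier with two standard guards against overfitting: a held-out test set and a regularization strength chosen by cross-validation, using scikit-learn.

```python
# Supervised learning with overfitting mitigation: hold-out evaluation plus
# regularization strength chosen by cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    LogisticRegressionCV(cv=5, max_iter=5000),   # regularization picked by CV
)
model.fit(X_train, y_train)

print("train accuracy:", round(model.score(X_train, y_train), 3))
print("test accuracy: ", round(model.score(X_test, y_test), 3))
```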

Step 9: Capstone Project

After completing all the above courses, you can take Harvard’s data science capstone project, where your skills in data visualization, probability, statistics, data wrangling, data organization, regression, and machine learning will be assessed.

With this final project, you will get the opportunity to put together all the knowledge learnt from the above courses and gain the ability to complete a hands-on data science project from scratch.

Note: All the courses above are available on an online learning platform from edX and can be audited for free. If you want a course certificate, however, you will have to pay for one.

Building a data science learning roadmap with free courses offered by MIT.

8 Free MIT Courses to Learn Data Science Online

I enrolled in an undergraduate computer science program and decided to major in data science. I spent over $25K in tuition fees over the span of three years, only to graduate and realize that I wasn’t equipped with the skills necessary to land a job in the field.

I barely knew how to code, and was unclear about the most basic machine learning concepts.

I took some time out to try and learn data science myself — with the help of YouTube videos, online courses, and tutorials. I realized that all of this knowledge was publicly available on the Internet and could be accessed for free.

It came as a surprise that even Ivy League universities started making many of their courses accessible to students worldwide, for little to no charge. This meant that people like me could learn these skills from some of the best institutions in the world, instead of spending thousands of dollars on a subpar degree program.

In this article, I will provide you with a data science roadmap I created using only freely available MIT online courses.

Step 1: Learn to code

I highly recommend learning a programming language before going deep into the math and theory behind data science models. Once you learn to code, you will be able to work with real-world datasets and get a feel of how predictive algorithms function.

MIT Open Courseware offers a beginner-friendly Python program, called Introduction to Computer Science and Programming.

This course is designed to help people with no prior coding experience to write programs to tackle useful problems.

Step 2: Statistics

Statistics is at the core of every data science workflow — it is required when building a predictive model, analyzing trends in large amounts of data, or selecting useful features to feed into your model.

MIT Open Courseware offers a beginner-friendly course called Introduction to Probability and Statistics. After taking this course, you will learn the basic principles of statistical inference and probability. Some concepts covered include conditional probability, Bayes theorem, covariance, central limit theorem, resampling, and linear regression.

This course will also walk you through statistical analysis using the R programming language, which is useful as it adds on to your tool stack as a data scientist.

Another useful program offered by MIT for free is called Statistical Thinking and Data Analysis. This is another elementary course in the subject that will take you through different data analysis techniques in Excel, R, and Matlab.

You will learn about data collection, analysis, different types of sampling distributions, statistical inference, linear regression, multiple linear regression, and nonparametric statistical methods.
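As a small worked example of the conditional-probability material these courses cover (the numbers are invented for illustration), here is Bayes’ theorem applied to a diagnostic-test question in Python:

```python
# Worked Bayes' theorem example (numbers are illustrative): probability of
# having a disease given a positive test result.
prevalence = 0.01        # P(disease)
sensitivity = 0.95       # P(positive | disease)
false_positive = 0.05    # P(positive | no disease)

p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive test) = {p_disease_given_positive:.3f}")  # ~0.161
```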

Step 3: Foundational Math Skills

Calculus and linear algebra are two other branches of math that are used in the field of machine learning. Taking a course or two in these subjects will give you a different perspective of how predictive models function, and the working behind the underlying algorithm.

To learn calculus, you can take Single Variable Calculus offered by MIT for free, followed by Multivariable Calculus.

Then, you can take this Linear Algebra class by Prof. Gilbert Strang to get a strong grasp of the subject.

All of the above courses are offered by MIT Open Courseware, and are paired with lecture notes, problem sets, exam questions, and solutions.
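For a hedged taste of why this math shows up in machine learning, the snippet below uses NumPy to solve a small linear system and inspect the eigenvalues of a symmetric matrix; the matrices are arbitrary examples.

```python
# A small taste of the linear algebra used throughout machine learning:
# solving a linear system and inspecting eigenvalues with NumPy.
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)          # solve Ax = b
eigvals = np.linalg.eigvalsh(A)    # eigenvalues of a symmetric matrix

print("solution x:", x)            # [2. 3.]
print("eigenvalues:", eigvals)
print("check Ax ≈ b:", np.allclose(A @ x, b))
```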

Step 4: Machine Learning

Finally, you can use the knowledge gained in the courses above to take MIT’s Introduction to Machine Learning course. This program will walk you through the implementation of predictive models in Python.

The core focus of this course is in supervised and reinforcement learning problems, and you will be taught concepts such as generalization and how overfitting can be mitigated. Apart from just working with structured datasets, you will also learn to process image and sequential data.

MIT’s machine learning program cites three prerequisites (Python, linear algebra, and calculus), which is why it is advisable to take the courses above before starting this one.

Are These Courses Beginner-Friendly?

Even if you have no prior knowledge of programming, statistics, or mathematics, you can take all the courses listed above.

MIT has designed these programs to take you through the subject from scratch. However, unlike many MOOCs out there, the pace builds up pretty quickly and the courses go into considerable depth.

Due to this, it is advisable to do all the exercises that come with the lectures and work through all the reading material provided.

SOURCE

Natassha Selvaraj is a self-taught data scientist with a passion for writing. You can connect with her on LinkedIn.

https://www.kdnuggets.com/2022/03/8-free-mit-courses-learn-data-science-online.html

Read Full Post »

Will Web 3.0 Do Away With Science 2.0? Is Science Falling Behind?

Curator: Stephen J. Williams, Ph.D.

UPDATED 4/06/2022

A while back (actually many moons ago) I put up two posts on this site:

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

Twitter is Becoming a Powerful Tool in Science and Medicine

Each of these posts was on the importance of scientific curation of findings within the realm of social media and the Web 2.0, a sub-environment known throughout the scientific communities as Science 2.0, in which expert networks collaborate to produce a massive new corpus of knowledge by sharing their views and insights on peer-reviewed scientific findings. And through this new media, this process of curation would, in itself, generate new ideas and new directions for research and discovery.

The platform sort of looked like the image below:

 

This system sat above a platform of the original Science 1.0, made up of all the scientific journals, books, and traditional literature:

In the old Science 1.0 format, scientific dissemination took the form of hard-print journals, and library subscriptions were mandatory (and eventually expensive). Open Access has tried to ameliorate the expense problem.

Previous image source: PeerJ.com

To index the massive and voluminous research and papers beyond the old Dewey Decimal system, a process of curation was mandatory. Dissemination through the new social media was a natural fit; however, the cost had to be spread out among numerous players. For journals, faced with the high costs of subscriptions, the only way to access this new media as an outlet was to become Open Access, a movement first sparked by journals like PLOS and PeerJ but then begrudgingly adopted throughout the landscape. But with any movement or new adoption one gets the Good, the Bad, and the Ugly (as described in my Clive Thompson article cited above). The downsides of Open Access journals were:

  1. costs are still assumed by the individual researcher, not by the journals
  2. the rise of numerous predatory journals

 

Even PeerJ, in their column celebrating an anniversary of a year’s worth of Open Access success stories, lamented the key issues still facing Open Access in practice

  • which included the cost and the rise of predatory journals.

In essence, Open Access and Science 2.0 sprang into full force BEFORE anyone thought of a way to defray the costs.

 

Can Web 3.0 Finally Offer a Way to Right the Issues Facing High Costs of Scientific Publishing?

What is Web 3.0?

From Wikipedia: https://en.wikipedia.org/wiki/Web3

Web 1.0 and Web 2.0 refer to eras in the history of the Internet as it evolved through various technologies and formats. Web 1.0 refers roughly to the period from 1991 to 2004, where most websites were static webpages, and the vast majority of users were consumers, not producers, of content.[6][7] Web 2.0 is based around the idea of “the web as platform”,[8] and centers on user-created content uploaded to social-networking services, blogs, and wikis, among other services.[9] Web 2.0 is generally considered to have begun around 2004, and continues to the current day.[8][10][4]

Terminology

The term “Web3”, specifically “Web 3.0”, was coined by Ethereum co-founder Gavin Wood in 2014.[1] In 2020 and 2021, the idea of Web3 gained popularity. Particular interest spiked towards the end of 2021, largely due to interest from cryptocurrency enthusiasts and investments from high-profile technologists and companies.[4][5] Executives from venture capital firm Andreessen Horowitz travelled to Washington, D.C. in October 2021 to lobby for the idea as a potential solution to questions about Internet regulation with which policymakers have been grappling.[11]

Web3 is distinct from Tim Berners-Lee‘s 1999 concept for a semantic web, which has also been called “Web 3.0”.[12] Some writers referring to the decentralized concept usually known as “Web3” have used the terminology “Web 3.0”, leading to some confusion between the two concepts.[2][3] Furthermore, some visions of Web3 also incorporate ideas relating to the semantic web.[13][14]

Concept

Web3 revolves around the idea of decentralization, which proponents often contrast with Web 2.0, wherein large amounts of the web’s data and content are centralized in the fairly small group of companies often referred to as Big Tech.[4]

Specific visions for Web3 differ, but all are heavily based in blockchain technologies, such as various cryptocurrencies and non-fungible tokens (NFTs).[4] Bloomberg described Web3 as an idea that “would build financial assets, in the form of tokens, into the inner workings of almost anything you do online”.[15] Some visions are based around the concepts of decentralized autonomous organizations (DAOs).[16] Decentralized finance (DeFi) is another key concept; in it, users exchange currency without bank or government involvement.[4] Self-sovereign identity allows users to identify themselves without relying on an authentication system such as OAuth, in which a trusted party has to be reached in order to assess identity.[17]

Reception

Technologists and journalists have described Web3 as a possible solution to concerns about the over-centralization of the web in a few “Big Tech” companies.[4][11] Some have expressed the notion that Web3 could improve data security, scalability, and privacy beyond what is currently possible with Web 2.0 platforms.[14] Bloomberg states that sceptics say the idea “is a long way from proving its use beyond niche applications, many of them tools aimed at crypto traders”.[15] The New York Times reported that several investors are betting $27 billion that Web3 “is the future of the internet”.[18][19]

Some companies, including Reddit and Discord, have explored incorporating Web3 technologies into their platforms in late 2021.[4][20] After heavy user backlash, Discord later announced they had no plans to integrate such technologies.[21] The company’s CEO, Jason Citron, tweeted a screenshot suggesting it might be exploring integrating Web3 into their platform. This led some to cancel their paid subscriptions over their distaste for NFTs, and others expressed concerns that such a change might increase the amount of scams and spam they had already experienced on crypto-related Discord servers.[20] Two days later, Citron tweeted that the company had no plans to integrate Web3 technologies into their platform, and said that it was an internal-only concept that had been developed in a company-wide hackathon.[21]

Some legal scholars quoted by The Conversation have expressed concerns over the difficulty of regulating a decentralized web, which they reported might make it more difficult to prevent cybercrime, online harassment, hate speech, and the dissemination of child abuse images.[13] But, the news website also states that, “[decentralized web] represents the cyber-libertarian views and hopes of the past that the internet can empower ordinary people by breaking down existing power structures.” Some other critics of Web3 see the concept as a part of a cryptocurrency bubble, or as an extension of blockchain-based trends that they see as overhyped or harmful, particularly NFTs.[20] Some critics have raised concerns about the environmental impact of cryptocurrencies and NFTs. Others have expressed beliefs that Web3 and the associated technologies are a pyramid scheme.[5]

Kevin Werbach, author of The Blockchain and the New Architecture of Trust,[22] said that “many so-called ‘web3’ solutions are not as decentralized as they seem, while others have yet to show they are scalable, secure and accessible enough for the mass market”, adding that this “may change, but it’s not a given that all these limitations will be overcome”.[23]

David Gerard, author of Attack of the 50 Foot Blockchain,[24] told The Register that “web3 is a marketing buzzword with no technical meaning. It’s a melange of cryptocurrencies, smart contracts with nigh-magical abilities, and NFTs just because they think they can sell some monkeys to morons”.[25]

Below is an article from MarketWatch.com's Distributed Ledger series about the different blockchains and cryptocurrencies involved.

From Marketwatch: https://www.marketwatch.com/story/bitcoin-is-so-2021-heres-why-some-institutions-are-set-to-bypass-the-no-1-crypto-and-invest-in-ethereum-other-blockchains-next-year-11639690654?mod=home-page

by Frances Yue, Editor of Distributed Ledger, Marketwatch.com

Clayton Gardner, co-CEO of crypto investment management firm Titan, told Distributed Ledger that as crypto embraces broader adoption, he expects more institutions to bypass bitcoin and invest in other blockchains, such as Ethereum, Avalanche, and Terra, which all boast smart-contract features, in 2022.

Bitcoin traditionally did not support complex smart contracts, which are computer programs stored on blockchains, though a major upgrade in November might have unlocked more potential.

“Bitcoin was originally seen as a macro speculative asset by many funds and for many it still is,” Gardner said. “If anything solidifies its use case, it’s a store of value. It’s not really used as originally intended, perhaps from a medium of exchange perspective.”

For institutions that are looking for blockchains that can “produce utility and some intrinsic value over time,” they might consider some other smart contract blockchains that have been driving the growth of decentralized finance and web 3.0, the third generation of the Internet, according to Gardner. 

“Bitcoin is still one of the most secure blockchains, but I think layer-one, layer-two blockchains beyond Bitcoin will handle the majority of transactions and activities from NFT (nonfungible tokens) to DeFi,” Gardner said. “So I think institutions see that and insofar as they want to put capital to work in the coming months, I think that could be where they just pump the capital.”

Decentralized social media? 

The price of Decentralized Social, or DeSo, a cryptocurrency powering a blockchain that supports decentralized social media applications, surged roughly 74% to about $164 from $94 after DeSo was listed on Coinbase Pro on Monday, before it fell back to about $95, according to CoinGecko.

In the eyes of Nader Al-Naji, head of the DeSo foundation, decentralized social media has the potential to be “a lot bigger” than decentralized finance.

“Today there are only a few companies that control most of what we see online,” Al-Naji told Distributed Ledger in an interview. But DeSo is “creating a lot of new ways for creators to make money,” Al-Naji said.

“If you find a creator when they're small, or an influencer, you can invest in that, and then if they become bigger and more popular, you make money and they make money and get capital early on to produce their creative work,” according to Al-Naji.

BitClout, the first application created by Al-Naji and his team on the DeSo blockchain, initially drew controversy, as some found that they had profiles on the platform without their consent, while the application's users were buying and selling tokens representing their identities. Such tokens are called “creator coins.”

Al-Naji responded to the controversy, saying that DeSo now supports more than 200 social-media applications, including BitClout. “I think that if you don't like those features, you now have the freedom to use any app you want. Some apps don't have that functionality at all.”

 

But before I get to the “selling monkeys to morons” quote,

I want to talk about

THE GOOD, THE BAD, AND THE UGLY

THE GOOD

My foray into Science 2.0, and my pondering of what the movement toward a Science 3.0 would look like, led me to an article by Dr. Vladimir Teif, who studies gene regulation and the nucleosome and has created a worldwide group of scientists who discuss matters of chromatin and gene regulation in a journal-club-type format.

For more information on this Fragile Nucleosome journal club see https://generegulation.org/fragile-nucleosome/.

Fragile Nucleosome is an international community of scientists interested in chromatin and gene regulation. Fragile Nucleosome is active in several spaces: one is the Discord server where several hundred scientists chat informally on scientific matters. You can join the Fragile Nucleosome Discord server. Another activity of the group is the organization of weekly virtual seminars on Zoom. Our webinars are usually conducted on Wednesdays 9am Pacific time (5pm UK, 6pm Central Europe). Most previous seminars have been recorded and can be viewed at our YouTube channel. The schedule of upcoming webinars is shown below. Our third activity is the organization of weekly journal clubs detailed at a separate page (Fragile Nucleosome Journal Club).

 

His lab site is at https://generegulation.org/, and he has published a paper describing what he felt the #science2_0 to #science3_0 transition would look like (see his blog page on this at https://generegulation.org/open-science/).

He had coined this concept of Science 3.0 back in 2009. As Dr. Teif mentioned:

So essentially I first introduced this word Science 3.0 in 2009, and since then we did a lot to implement this in practice. The Twitter account @generegulation is also one of examples

 

This is curious, as we still have an ill-defined concept of what #science3_0 would look like, but it is a good read nonetheless.

His paper, entitled “Science 3.0: Corrections to the Science 2.0 paradigm,” is on the Cornell-hosted arXiv preprint server at https://arxiv.org/abs/1301.2522.

 

Abstract

Science 3.0: Corrections to the Science 2.0 paradigm

The concept of Science 2.0 was introduced almost a decade ago to describe the new generation of online-based tools for researchers allowing easier data sharing, collaboration and publishing. Although technically sound, the concept still does not work as expected. Here we provide a systematic line of arguments to modify the concept of Science 2.0, making it more consistent with the spirit and traditions of science and Internet. Our first correction to the Science 2.0 paradigm concerns the open-access publication models charging fees to the authors. As discussed elsewhere, we show that the monopoly of such publishing models increases biases and inequalities in the representation of scientific ideas based on the author’s income. Our second correction concerns post-publication comments online, which are all essentially non-anonymous in the current Science 2.0 paradigm. We conclude that scientific post-publication discussions require special anonymization systems. We further analyze the reasons of the failure of the current post-publication peer-review models and suggest what needs to be changed in Science 3.0 to convert Internet into a large journal club. [bold face added]
In this paper it is important to note the transition from Science 1.0, which involved hard-copy journal publications usually accessible only in libraries, to a more digital 2.0 format in which data, papers, and ideas could be easily shared among networks of scientists.
As Dr. Teif states, the term “Science 2.0” had been coined back in 2009, and several influential journals, including Science, Nature, and Scientific American, endorsed this term and suggested that scientists move their work and discussions online. However, even though thousands of scientists are now on Science 2.0 platforms, Dr. Teif notes that membership in many Science 2.0 networking groups, such as those on LinkedIn and ResearchGate, has seemingly saturated over the years, with few new members in recent times.
The consensus is that Science 2.0 networking is beneficial in several ways:
  1. it multiplies the efforts of many scientists, including experts, and adds to scientific discourse that was unavailable in a 1.0 format
  2. online data sharing assists the process of discovery (evident with preprint servers, bio-curated databases, and GitHub projects)
  3. open-access publishing is beneficial because of free access to professional articles, and open access will be the only publishing format in the future (although this is highly debatable, as many journals are holding on to a type of "hybrid open access" format that is not truly open access)
  4. sharing of unfinished works, critiques, or opinions is good because it creates visibility for scientists, where they can receive credit for their expert commentary

There are a few concerns about Science 3.0 that Dr. Teif articulates:

A.  Science 3.0 Still Needs Peer Review

Peer review of scientific findings will always be an imperative in the dissemination of well-done, properly controlled scientific discovery.  As Science 2.0 relies on an army of scientific volunteers, the peer review process also involves an army of scientific experts who give their time to safeguard the credibility of science, by ensuring that findings are reliable and data is presented fairly and properly.  It has been very evident, in this time of pandemic and the rapid increase of volumes of preprint server papers on Sars-COV2, that peer review is critical.  Many of these papers on such preprint servers were later either retracted or failed a stringent peer review process.

Many journals of the 1.0 format do not generally reward their peer reviewers other than with the self-credit that researchers list on their curricula vitae. Some journals, like the MDPI journal family, do issue peer-reviewer credits, which can be used to defray the high publication costs of open access (one area that many scientists lament about the open-access movement, where the burden of publication cost lies on the individual researcher).

Another issue highlighted is the potential for INFORMATION NOISE arising from the ability to self-publish on Science 2.0 platforms.

 

The NEW BREED was born in 4/2012

An ongoing effort on this platform, https://pharmaceuticalintelligence.com/, is to establish a scientific methodology for curating scientific findings, where one of the goals is to help quell the information noise that can result from the massive amounts of new informatics and data appearing in the biomedical literature.

B.  Open Access Publishing Model leads to biases and inequalities in the idea selection

The open-access publishing model has been compared to the model applied by the advertising industry years ago, when publishers considered journal articles to be "advertisements." However, NOTHING could be further from the truth. In advertising, the companies, not the consumers, pay for the ads. In scientific open-access publishing, although the consumer (libraries) does not pay for access, the burden of BOTH the cost of doing the research and the cost of publishing the findings is now placed on the individual researcher. Some of these publishing costs can be as high as $4,000 USD per article, which is very high for most researchers. Although many universities will reimburse these publisher fees for open-access publishing, the cost is still borne by the consumer (the institution) or the individual researcher, limiting the savings to either.

This creates a situation in which young researchers, who in general are not well funded, struggle with publication costs, producing a biased and inequitable system that rewards well-funded older researchers and bigger academic labs.

C. Post publication comments and discussion require online hubs and anonymization systems

Many recent publications stress the importance of a post-publication review process or system, yet although many big journals like Nature and Science have their own blogs and commentary systems, these are rarely used. In fact, the paper shows that there is just one comment per 100 views of a journal article on these systems. In traditional journals, editors are the referees of comments and have the ability to censor comments or discourse. The article laments that commenting on journal articles should be as easy as commenting on other social sites; however, scientists are still not offering their comments or opinions.

In my personal experience, a well-written commentary goes through editors, who usually reject a comment much as they would reject an original research article. Thus many scientists, I believe, after fashioning a well-researched and referenced reply, never see it get the light of day if it is not in the editor's interest.

Therefore anonymity is greatly needed, and the lack of it may be the reason scientific discourse is so limited on these types of Science 2.0 platforms. Platforms that have had success in this arena include anonymous platforms like Wikipedia and certain closed LinkedIn professional groups, whereas more open platforms like Google Knowledge have been a failure.

A great example on this platform was a very spirited LinkedIn conversation about genomics, tumor heterogeneity, and personalized medicine, which we curated from the LinkedIn discussion (unfortunately LinkedIn has closed many groups), seen here:

Issues in Personalized Medicine: Discussions of Intratumor Heterogeneity from the Oncology Pharma forum on LinkedIn

 

In this discussion, it was surprising that over a weekend so many scientists from all over the world contributed to a great discussion on the topic of tumor heterogeneity.

But many feel such discussions would be safer if they were anonymized.  However then researchers do not get any credit for their opinions or commentaries.

A major problem is how to take these intangible contributions and turn them into tangible assets that would both promote the discourse and reward those who take the time to improve scientific discussion.

This is where something like NFTs or a decentralized network may become important!

See

https://pharmaceuticalintelligence.com/portfolio-of-ip-assets/

 

UPDATED 5/09/2022

Below is an online @TwitterSpace discussion we had with some young scientists who are just starting out and who gave their thoughts on what Science 3.0 and the future of the dissemination of science might look like in light of this new metaverse. However, we have to define each of these terms in light of science, and not just the Internet as merely a decentralized marketplace for commonly held goods.

This online discussion was tweeted out and got a fair amount of impressions (60) as well as interactors (50).

For the recording, on Twitter as well as in audio format, please see below:

"Set a reminder for my upcoming Space! https://t.co/7mOpScZfGN @Pharma_BI @PSMTempleU #science3_0 @science2_0" — Stephen J Williams (@StephenJWillia2), April 28, 2022: https://twitter.com/StephenJWillia2/status/1519776668176502792

 

 

To introduce this discussion, first some starting material that will frame this discourse.

 






The Internet and the Web are rapidly adopting a new "Web 3.0" format, with decentralized networks, enhanced virtual experiences, and greater interconnection between people. Here we start the discussion of what the move will look like from Science 2.0, where the dissemination of scientific findings was revolutionized by piggybacking on Web 2.0 and social media, to a Science 3.0 format. What will it involve, and what paradigms will be turned upside down?

Old Science 1.0 is still the backbone of all scientific discourse, built on the massive amount of experimental and review literature. However, this literature was in analog format, and we moved to a more accessible, digital, open-access format for both publications and raw data. While there was a structure for 1.0, like the Dewey decimal system and indexing, 2.0 made science more accessible and easier to search due to the newer digital formats. Yet both needed an organizing structure: for 1.0 that was the scientific method of data and literature organization, with libraries as the indexers; in 2.0 this relied on an army of mostly volunteers who did not have much in the way of incentivization to co-curate and organize the findings and massive literature.

Each version of Science has its caveats: its benefits as well as its deficiencies. This curation and the ongoing discussion are meant to solidify the basis for the new format, along with definitions and a determination of structure.

We had high hopes for Science 2.0, in particular the smashing of data and knowledge silos. However, the digital age, along with 2.0 platforms, seemed to exacerbate this somehow. We are still critically short on analysis!

 

We really need people and organizations to get on top of this new Web 3.0 or metaverse so that similar issues do not get in the way: namely, we need to create an organizing structure (maybe as knowledgebases), we need INCENTIVIZED co-curators, and we need ANALYSIS… lots of it!!

Are these new technologies the cure or is it just another headache?

 

There were a few overarching themes, whether one was talking about AI, NLP, virtual reality, or other new technologies with respect to this new metaverse, and a consensus of Decentralized, Incentivized, and Integrated was commonly expressed among the attendees.

The following are some slides from representative presentations.
Other articles of note on this topic in this Open Access Scientific Journal include:

Electronic Scientific AGORA: Comment Exchanges by Global Scientists on Articles published in the Open Access Journal @pharmaceuticalintelligence.com – Four Case Studies

eScientific Publishing a Case in Point: Evolution of Platform Architecture Methodologies and of Intellectual Property Development (Content Creation by Curation) Business Model 

e-Scientific Publishing: The Competitive Advantage of a Powerhouse for Curation of Scientific Findings and Methodology Development for e-Scientific Publishing – LPBI Group, A Case in Point

@PharmaceuticalIntelligence.com –  A Case Study on the LEADER in Curation of Scientific Findings

Real Time Coverage @BIOConvention #BIO2019: Falling in Love with Science: Championing Science for Everyone, Everywhere

Old Industrial Revolution Paradigm of Education Needs to End: How Scientific Curation Can Transform Education

 

Read Full Post »

 

Medical Startups – Artificial Intelligence (AI) Startups in Healthcare

Reporters: Stephen J. Williams, PhD and Aviva Lev-Ari, PhD, RN and Shraga Rottem, MD, DSc,

The motivation for this post is threefold:

First, we are presenting an application of AI, NLP, DL to our own medical text in the Genomics space. Here we present the first section of Part 1 in the following book. Part 1 has six subsections that yielded 12 plots. The entire Book is represented by 38 x 2 = 76 plots.

Second, we bring to the attention of the e-Reader the list of 276 Medical Startups – Artificial Intelligence (AI) Startups in Healthcare as a hot universe of R&D activity in Human Health.

Third, we highlight one academic center with an AI focus.

Dear friends of the ETH AI Center,

We would like to provide you with some exciting updates from the ETH AI Center and its growing community. The ETH AI Center now comprises 110 research groups in the faculty, 20 corporate partners and has led to nine AI startups.

As the Covid-19 restrictions in Switzerland have recently been lifted, we would like to hear from you what kind of events you would like to see in 2022! Participate in the survey to suggest event formats and topics that you would enjoy being a part of. We are already excited to learn what we can achieve together this year.

We already have many interesting events coming up, we look forward to seeing you at our main and community events!

SOURCE

https://news.ethz.ch/html_mail.jsp?params=%2FUnFXUQJ%2FmiOP6akBq8eHxaXG%2BRdNmeoVa9gX5ArpTr6mX74xp5d78HhuIHTd9V6AHtAfRahyx%2BfRGrzVL1G8Jy5e3zykvr1WDtMoUC%2B7vILoHCGQ5p1rxaPzOsF94ID

 

 

LPBI Group is applying AI for Medical Text Analysis with Machine Learning and Natural Language Processing: Statistical and Deep Learning

Our Book 

Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS & BioInformatics, Simulations and the Genome Ontology

Medical Text Analysis of this Book shows the following results, obtained by Madison Davis by applying Wolfram NLP for Biological Languages on our own text. See an example below:

Part 1: Next Generation Sequencing (NGS)

 

1.1 The NGS Science

1.1.1 BioIT Aspect

 

Hypergraph Plot #1 and Tree Diagram Plot #1

for 1.1.1 based on 16 articles & on 12 keywords

protein, cancer, dna, genes, rna, survival, immune, tumor, patients, human, genome, expression


Read Full Post »

From the journal Nature: NFT, Patents, and Intellectual Property: Potential Design

Reporter: Stephen J. Williams, Ph.D.

 

From the journal Nature

Source: https://www.nature.com/articles/s41598-022-05920-6

Patents and intellectual property assets as non-fungible tokens; key technologies and challenges

Scientific Reports volume 12, Article number: 2178 (2022)

Abstract

With the explosive development of decentralized finance, we witness a phenomenal growth in tokenization of all kinds of assets, including equity, funds, debt, and real estate. By taking advantage of blockchain technology, digital assets are broadly grouped into fungible and non-fungible tokens (NFT). Here non-fungible tokens refer to those with unique and non-substitutable properties. NFT has widely attracted attention, and its protocols, standards, and applications are developing exponentially. It has been successfully applied to digital fantasy artwork, games, collectibles, etc. However, there is a lack of research in utilizing NFT in issues such as Intellectual Property. Applying for a patent and trademark is not only a time-consuming and lengthy process but also costly. NFT has considerable potential in the intellectual property domain. It can promote transparency and liquidity and open the market to innovators who aim to commercialize their inventions efficiently. The main objective of this paper is to examine the requirements of presenting intellectual property assets, specifically patents, as NFTs. Hence, we offer a layered conceptual NFT-based patent framework. Furthermore, a series of open challenges about NFT-based patents and the possible future directions are highlighted. The proposed framework provides fundamental elements and guidance for businesses in taking advantage of NFTs in real-world problems such as grant patents, funding, biotechnology, and so forth.

Introduction

Distributed ledger technologies (DLTs) such as blockchain are emerging technologies posing a threat to existing business models. Traditionally, most companies used centralized authorities in various aspects of their business, such as financial operations and setting up a trust with their counterparts. By the emergence of blockchain, centralized organizations can be substituted with a decentralized group of resources and actors. The blockchain mechanism was introduced in Bitcoin white paper in 2008, which lets users generate transactions and spend their money without the intervention of banks1. Ethereum, which is a second generation of blockchain, was introduced in 2014, allowing developers to run smart contracts on a distributed ledger. With smart contracts, developers and businesses can create financial applications that use cryptocurrencies and other forms of tokens for applications such as decentralized finance (DeFi), crowdfunding, decentralized exchanges, data records keeping, etc.2. Recent advances in distributed ledger technology have developed concepts that lead to cost reduction and the simplification of value exchange. Nowadays, by leveraging the advantages of blockchain and taking into account the governance issues, digital assets could be represented as tokens that existed in the blockchain network, which facilitates their transmission and traceability, increases their transparency, and improves their security3.

In the landscape of blockchain technology, two types of tokens can be defined: fungible tokens, in which all tokens have equal value, and non-fungible tokens (NFTs), which feature unique characteristics and are not interchangeable. Non-fungible tokens are digital assets with a unique identifier that is stored on a blockchain4. NFTs were initially suggested in Ethereum Improvement Proposal (EIP) 721 (ref. 5) and later expanded in EIP-1155 (ref. 6). NFTs became one of the most widespread applications of blockchain technology and reached worldwide attention in early 2021. They can be digital representations of real-world objects. NFTs are tradable rights to digital assets (pictures, music, films, and virtual creations) whose ownership is recorded in blockchain smart contracts7.

In particular, fungibility is the ability to exchange one item for another of the same kind, an essential feature of currency. A non-fungible token is unique and therefore cannot be substituted8. Recently, blockchain enthusiasts have shown significant interest in various types of NFTs and enthusiastically participate in NFT-related games and trades. CryptoPunks9, one of the first NFT projects on Ethereum, generated almost 10,000 collectible punks and helped popularize the ERC-721 standard. With the gamification of breeding mechanics, CryptoKitties10 officially placed NFTs at the forefront of the market in 2017. CryptoKitties is an early blockchain game that enables users to buy, sell, collect, and digitally breed cats. Another example is NBA Top Shot11, an NFT trading platform for buying and selling digital short films of NBA events.

NFTs are developing remarkably and have provided many applications such as artist royalties, in-game assets, educational certificates, etc. However, it is a relatively new concept, and many areas of application need to be explored. Intellectual Property, including patent, trademark, and copyright, is an important area where NFTs can be applied usefully and solve existing problems.

Although NFTs have had many applications so far, they have rarely been used to solve real-world problems. In fact, NFTs are an exciting concept for Intellectual Property (IP). Applying for a patent or trademark is not only a time-consuming and lengthy process but also costly. Registering a copyright or trademark may take months, while securing a patent can take years. On the contrary, with the help of the unique features of NFT technology, it is possible to accelerate this process with considerable confidence and assurance about protecting the ownership of an IP. NFTs can offer IP protection while an applicant waits for the government to grant more formal protection. Many believe NFTs and blockchain would make buying and selling patents easier, offering new opportunities for companies, universities, and inventors to make money off their innovations12. Patent holders would benefit from such innovation: it would give them the ability to 'tokenize' their patents. Because every transaction would be logged on a blockchain, it would be much easier to trace patent ownership changes. NFTs would also facilitate revenue generation from patents by democratizing patent licensing. NFTs support the intellectual property market by embedding automatic royalty-collecting methods inside inventors' works, providing them with financial benefits anytime their innovation is licensed. For example, each inventor's patent would be minted as an NFT, and these NFTs could be joined together to form a commercial IP portfolio and minted as a compounded NFT. Each investor would automatically get their fair share of royalties whenever licensing revenue is generated, without anyone having to track them down.
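To make the royalty idea above concrete, here is a minimal off-chain sketch (our illustration, not part of the cited paper) of pro-rata royalty distribution across a compounded IP-portfolio NFT; the inventor names and share weights are hypothetical.

```python
# Minimal sketch (not from the paper): pro-rata royalty distribution for a
# compounded IP-portfolio NFT. Inventor names and share weights are hypothetical.

def split_royalties(license_revenue: float, patent_shares: dict[str, float]) -> dict[str, float]:
    """Return each inventor's payout, proportional to their share weight."""
    total_weight = sum(patent_shares.values())
    if total_weight == 0:
        raise ValueError("portfolio has no shares")
    return {
        inventor: license_revenue * weight / total_weight
        for inventor, weight in patent_shares.items()
    }

if __name__ == "__main__":
    # Hypothetical portfolio: three patents minted as NFTs, weighted by valuation.
    portfolio = {"inventor_A": 5.0, "inventor_B": 3.0, "inventor_C": 2.0}
    print(split_royalties(10_000.0, portfolio))
    # {'inventor_A': 5000.0, 'inventor_B': 3000.0, 'inventor_C': 2000.0}
```

On-chain, this same split would be enforced by the smart contract that wraps the compounded NFT, so payouts do not depend on anyone running the calculation manually.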

The authors in13 provide an overview of NFT applications in different areas such as gambling, games, and collectibles. In addition,4 provides a prototype for an event-tracking application based on an Ethereum smart contract, and NFT as a solution for art and real estate auction systems is described in14. However, these studies have not discussed existing standards or a generalized architecture enabling NFTs to be applied in diverse applications. For example, the authors in15 provide two general design patterns for creating and trading NFTs and discuss existing token standards for NFTs. However, the proposed designs are limited to Ethereum, and other blockchains are not considered16. Moreover, different technologies for each step of the proposed procedure are not discussed. In8, the authors provide a conceptual framework for token design and management and discuss five views: token view, wallet view, transaction view, user interface view, and protocol view. However, no research provides a generalized conceptual framework for generating, recording, and tracing NFT-based IP in a blockchain network.

Even with the clear benefits that NFT-backed patents offer, there are a number of impediments to actually achieving such a system. For example, convincing patent owners to put current ownership records for their patents into NFTs poses an initial obstacle. Because there is no reliable framework for NFT-based patents, this paper provides a conceptual framework for presenting NFT-based patents with a comprehensive discussion on many aspects, ranging from the background, model components, token standards to application domains and research challenges. The main objective of this paper is to provide a layered conceptual NFT-based patent framework that can be used to register patents in a decentralized, tamper-proof, and trustworthy peer-to-peer network to trade and exchange them in the worldwide market. The main contributions of this paper are highlighted as follows:

  • Providing a comprehensive overview on tokenization of IP assets to create unique digital tokens.
  • Discussing the components of a distributed and trustworthy framework for minting NFT-based patents.
  • Highlighting a series of open challenges of NFT-based patents and enlightening the possible future trends.

The rest of the paper is structured as follows: “Background” section describes the Background of NFTs, Non-Fungible Token Standards. The NFT-based patent framework is described in “NFT-based patent framework” section. The Discussion and challenges are presented in “Discussion” section. Lastly, conclusions are given in “Conclusion” section.

Background

Colored Coins could be considered the first steps toward NFTs, designed on top of the Bitcoin network. Bitcoins are fungible, but it is possible to mark them to be distinguishable from other bitcoins. These marked coins have special properties representing real-world assets like cars and stocks, and owners can prove their ownership of physical assets through the colored coins. By utilizing Colored Coins, users can transfer their marked coins' ownership like a usual transaction and benefit from Bitcoin's decentralized network17. Colored Coins had limited functionality due to the Bitcoin script limitations. Pepe is a green frog meme originated by Matt Furie; users defined tokens for Pepes and traded them through the Counterparty platform. Tokens created from Pepe images were then judged on whether they were rare enough. Rare Pepe allows users to preserve scarcity, manage ownership, and transfer their purchased Pepes.

In 2017, Larva Labs developed the first Ethereum-based NFT, named CryptoPunks. It contains 10,000 unique human-like characters generated randomly. The official ownership of each character is stored in an Ethereum smart contract, and owners can trade characters. The CryptoPunks project inspired the CryptoKitties project. CryptoKitties attracted attention to NFTs and is a pioneer in blockchain games and NFTs, launched in late 2017. CryptoKitties is a blockchain-based virtual game in which users collect and trade characters with unique features that shape kitties. This game was developed as an Ethereum smart contract, and it pioneered the ERC-721 token, which was the first standard token in the Ethereum blockchain for NFTs. After the 2017 hype in NFTs, many projects started in this context. Due to increased attention to NFT use-cases and growing market cap, different blockchains like EOS, Algorand, and Tezos started to support NFTs, and various marketplaces like SuperRare, Rarible, and OpenSea were developed to help users trade NFTs. As mentioned, in general, assets are categorized into two main classes, fungible and non-fungible assets. Fungible assets are the ones that another similar asset can replace. Fungible items could have two main characteristics: replicability and divisibility.

Currency is a fungible item because a ten-dollar bill can be exchanged for another ten-dollar bill or divided into ten one-dollar bills. Despite fungible items, non-fungible items are unique and distinguishable. They cannot be divided or exchanged by another identical item. The first tweet on Twitter is a non-fungible item with mentioned characteristics. Another tweet cannot replace it, and it is unique and not divisible. NFT is a non-fungible cryptographic asset that is declared in a standard token format and has a unique set of attributes. Due to transparency, proof of ownership, and traceable transactions in the blockchain network, NFTs are created using blockchain technology.

Blockchain-based NFTs help enthusiasts create NFTs in the standard token format in blockchain, transfer the ownership of their NFTs to a buyer, assure uniqueness of NFTs, and manage NFTs completely. In addition, there are semi-fungible tokens that have characteristics of both fungible and non-fungible tokens. Semi-fungible tokens are fungible in the same class or specific time and non-fungible in other classes or different times. A plane ticket can be considered a semi-fungible token because a charter ticket can be exchanged by another charter ticket but cannot be exchanged by a first-class ticket. The concept of semi-fungible tokens plays the main role in blockchain-based games and reduces NFTs overhead. In Fig. 1, we illustrate fungible, non-fungible, and semi-fungible tokens. The main properties of NFTs are described as follows15:

Figure 1: Fungible, non-fungible, and semi-fungible tokens.

  • Ownership: Because of the blockchain layer, the owner of an NFT can easily prove the right of possession by his/her keys. Other nodes can verify the user's ownership publicly.

  • Transferable: Users can freely transfer owned NFTs ownership to others on dedicated markets.
  • Transparency: By using blockchain, all transactions are transparent, and every node in the network can confirm and trace the trades.
  • Fraud Prevention: Fraud is one of the key problems in trading assets; hence, using NFTs ensures buyers buy a non-counterfeit item.
  • Immutability: Metadata, token ID, and history of transactions of NFTs are recorded in a distributed ledger, and it is impossible to change the information of the purchased NFTs.

Non-fungible standards

The Ethereum blockchain pioneered the implementation of NFTs. The ERC-721 token was the first standard token accepted in the Ethereum network. With the increase in popularity of NFTs, developers started developing and enhancing NFT standards on different blockchains like EOS, Algorand, and Tezos. This section provides a review of the NFT standards implemented on these blockchains.

Ethereum

ERC-721 was the first Standard for NFTs developed in Ethereum, a free and open-source standard. ERC-721 is an interface that a smart contract should implement to have the ability to transfer and manage NFTs. Each ERC-721 token has unique properties and a different Token Id. ERC-721 tokens include the owner’s information, a list of approved addresses, a transfer function that implements transferring tokens from owner to buyer, and other useful functions5.

In ERC-721, smart contracts can group tokens with the same configuration, and each token has different properties, so ERC-721 does not support fungible tokens. ERC-1155 is another standard on Ethereum, developed by Enjin, with richer functionality than ERC-721; it supports fungible, non-fungible, and semi-fungible tokens. In ERC-1155, IDs define the class of assets: different IDs correspond to different classes of assets, and each ID may contain different assets of the same class. Using ERC-1155, a user can transfer different types of tokens in a single transaction and mix multiple fungible and non-fungible token types in a single smart contract6. ERC-721 and ERC-1155 both support operators, whereby the owner can let an operator originate transfers of the token.
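As a rough illustration of the bookkeeping these standards describe (ownership per token ID, operator approval, transfer, and a pointer to off-chain metadata), here is a hedged, in-memory Python sketch; it is not Solidity and not the actual ERC-721/ERC-1155 interfaces, and all names in it are ours.

```python
# Illustrative sketch only: an in-memory, ERC-721-style ownership ledger in Python.
# The real standard is a Solidity interface; this just mirrors its core bookkeeping
# (ownerOf, approve, transferFrom, tokenURI). All names here are hypothetical.

class SimpleNFTLedger:
    def __init__(self):
        self.owner_of = {}    # token_id -> owner address
        self.token_uri = {}   # token_id -> off-chain metadata location (e.g. an IPFS CID)
        self.approved = {}    # token_id -> operator allowed to transfer it

    def mint(self, token_id: str, owner: str, uri: str) -> None:
        if token_id in self.owner_of:
            raise ValueError("token already exists")
        self.owner_of[token_id] = owner
        self.token_uri[token_id] = uri

    def approve(self, caller: str, operator: str, token_id: str) -> None:
        if self.owner_of.get(token_id) != caller:
            raise PermissionError("only the owner may approve an operator")
        self.approved[token_id] = operator

    def transfer_from(self, caller: str, new_owner: str, token_id: str) -> None:
        owner = self.owner_of.get(token_id)
        if caller != owner and caller != self.approved.get(token_id):
            raise PermissionError("caller is neither owner nor approved operator")
        self.owner_of[token_id] = new_owner
        self.approved.pop(token_id, None)   # approvals reset on transfer
```

An ERC-1155-style variant would replace the single owner_of mapping with per-ID balances, so that one ID can represent an entire class of fungible units while other IDs remain unique.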

EOSIO

EOSIO is an open-source blockchain platform released in 2018 that claims to eliminate transaction fees and increase transaction throughput. EOSIO differs from Ethereum in its wallet creation algorithm and its procedure for handling transactions. dGoods is a free standard developed on the EOS blockchain for assets, and it focuses on large-scale use cases. It supports a hierarchical naming structure in smart contracts. Each contract has a unique symbol and a list of categories, and each category contains a list of token names. Therefore, a single contract in dGoods can contain many tokens, which makes transferring a group of tokens efficient. Using this hierarchy, dGoods supports fungible, non-fungible, and semi-fungible tokens. It also supports batch transferring, where the owner can transfer many tokens in one operation18.

Algorand

Algorand is a new high-performance public blockchain launched in 2019. It provides scalability while maintaining security and decentralization. It supports smart contracts and tokens for representing assets19. Algorand defines Algorand Standard Assets (ASA) concept to create and manage assets in the Algorand blockchain. Using ASA, users are able to define fungible and non-fungible tokens. In Algorand, users can create NFTs or FTs without writing smart contracts, and they should run just a single transaction in the Algorand blockchain. Each transaction contains some mutable and immutable properties20.

Each account in Algorand can create up to 1000 assets, and for every asset an account creates or receives, the minimum balance of the account increases by 0.1 Algos. Also, Algorand supports fractional NFTs by splitting an NFT into a group of FTs or NFTs, and each part can be exchanged independently21. Algorand uses a Clawback Address that operates like an operator in ERC-1155 and is allowed to transfer tokens of an owner who has permitted it.

Tezos

Tezos is another decentralized open-source blockchain. Tezos supports the meta-consensus concept. In addition to using a consensus protocol on the ledger’s state like Bitcoin and Ethereum, It also attempts to reach a consensus about how nodes and the protocol should change or upgrade22. FA2 (TZIP-12) is a standard for a unified token contract interface in the Tezos blockchain. FA2 supports different token types like fungible, non-fungible, and fractionalized NFT contracts. In Tezos, tokens are identified with a token contract address and token ID pair. Also, Tezos supports batch token transferring, which reduces the cost of transferring multiple tokens.

Flow

Flow was developed by Dapper Labs to remove the scalability limitation of the Ethereum blockchain. Flow is a fast and decentralized blockchain that focuses on games and digital collectibles. It improves throughput and scalability without sharding due to its architecture. Flow supports smart contracts using Cadence, which is a resource-oriented programming language. NFTs can be described as a resource with a unique id in Cadence. Resources have important rules for ownership management; that is, resources have just one owner and cannot be copied or lost. These features assure the NFT owner. NFTs’ metadata, including images and documents, can be stored off-chain or on-chain in Flow. In addition, Flow defines a Collection concept, in which each collection is an NFT resource that can include a list of resources. It is a dictionary that the key is resource id, and the value is corresponding NFT.

The collection concept provides batch transferring of NFTs. Besides, users can define an NFT for an FT. For instance, in CryptoKitties, a unique cat as an NFT can own a unique hat (another NFT). Flow uses Cadence's second layer of access control to allow some operators to access some fields of the NFT23. In Table 1, we provide a comparison between the explained standards. They are compared on support for fungible tokens, non-fungible tokens, batch transferring (the owner can transfer multiple tokens in one operation), operator support (the owner can approve an operator to originate a token transfer), and fractionalized NFTs (an NFT can be divided into different tokens, each exchangeable independently).

Table 1: Comparing NFT standards.


NFT-based patent framework

In this section, we propose a framework for presenting NFT-based patents. We describe details of the proposed distributed and trustworthy framework for minting NFT-based patents, as shown in Fig. 2. The proposed framework includes five main layers: Storage Layer, Authentication Layer, Verification Layer, Blockchain Layer, and Application Layer. Details of each layer and the general concepts are presented as follows.

Figure 2: The five layers of the proposed NFT-based patent framework.

Storage layer

The continuous rise of the data in blockchain technology is moving various information systems towards the use of decentralized storage networks. Decentralized storage networks were created to provide more benefits to the technological world24. Some of the benefits of using decentralized storage systems are explained: (1) Cost savings are achieved by making optimal use of current storage. (2) Multiple copies are kept on various nodes, avoiding bottlenecks on central servers and speeding up downloads. This foundation layer implicitly provides the infrastructure required for the storage. The items on NFT platforms have unique characteristics that must be included for identification.

Non-fungible token metadata provides information that describes a particular token ID. NFT metadata is either represented on the On-chain or Off-chain. On-chain means direct incorporation of the metadata into the NFT’s smart contract, which represents the tokens. On the other hand, off-chain storage means hosting the metadata separately25.

Blockchains provide decentralization but are expensive for data storage and never allow data to be removed. For example, because of the Ethereum blockchain’s current storage limits and high maintenance costs, many projects’ metadata is maintained off-chain. Developers utilize the ERC721 Standard, which features a method known as tokenURI. This method is implemented to let applications know the location of the metadata for a specific item. Currently, there are three solutions for off-chain storage, including InterPlanetary File System (IPFS), Pinata, and Filecoin.

IPFS

InterPlanetary File System (IPFS) is a peer-to-peer hypermedia protocol for decentralized media content storage. Because of the high cost of storing media files related to NFTs on a blockchain, IPFS can be the most affordable and efficient solution. IPFS combines multiple technologies inspired by Git and BitTorrent, such as the Block Exchange System, Distributed Hash Tables (DHT), and a Version Control System26. On a peer-to-peer network, DHT is used to coordinate and maintain metadata.

In other words, the hash values must be mapped to the objects they represent. IPFS generates a hash value that starts with the prefix Qm and acts as a reference to a specific item when storing an object like a file. Objects larger than 256 KB are divided into smaller blocks of up to 256 KB. Then a hash tree is used to interconnect all the blocks that are part of the same object. IPFS uses the Kademlia DHT. The Block Exchange System, or BitSwap, is a BitTorrent-inspired system that is used to exchange blocks. It is possible to use asymmetric encryption to prevent unauthorized access to stored content on IPFS27.
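The chunk-and-link idea can be sketched in a few lines of Python. This is a simplification for intuition only: real IPFS builds a Merkle DAG with multihash-encoded CIDs and Kademlia routing, whereas here we merely split a file into 256 KB blocks, hash each block with SHA-256, and hash the concatenated block hashes into a single root identifier. The file name is hypothetical.

```python
# Simplified stand-in for IPFS content addressing: chunk, hash blocks, link them
# under one root hash. Not the real CID/multihash format.

import hashlib

BLOCK_SIZE = 256 * 1024  # 256 KB, as in the description above

def block_hashes(data: bytes) -> list[str]:
    return [
        hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
        for i in range(0, len(data), BLOCK_SIZE)
    ]

def root_identifier(data: bytes) -> str:
    """Root hash linking all blocks of one object (a crude stand-in for a CID)."""
    hashes = block_hashes(data)
    if len(hashes) == 1:
        return hashes[0]
    return hashlib.sha256("".join(hashes).encode()).hexdigest()

if __name__ == "__main__":
    with open("patent_document.pdf", "rb") as fh:  # hypothetical file name
        print("root id:", root_identifier(fh.read()))
```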

Pinata

Pinata is a popular platform for managing and uploading files on IPFS. It provides secure and verifiable files for NFTs. Most NFTs store their data off-chain, with a URL pointing to the data stored in the NFT on the blockchain. The main problem here is that the content behind that URL can change.

This indicates that an NFT supposed to describe a certain patent can be changed without anyone knowing. This defeats the purpose of the NFT in the first place. This is where Pinata comes in handy. Pinata uses the IPFS to create content-addressable hashes of data, also known as Content-Identifiers (CIDs). These CIDs serve as both a way of retrieving data and a means to ensure data validity. Those looking to retrieve data simply ask the IPFS network for the data associated with a certain CID, and if any node on the network contains that data, it will be returned to the requester. The data is automatically rehashed on the requester’s computer when the requester retrieves it to make sure that the data matches back up with the original CID they asked for. This process ensures the data that’s received is exactly what was asked for; if a malicious node attempts to send fake data, the resulting CID on the requester’s end will be different, alerting the requester that they’re receiving incorrect data28.
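The verify-on-retrieval step described above can be illustrated with a short sketch: the requester re-hashes whatever bytes come back and compares the digest with the identifier it asked for. Real CIDs are multihash-encoded rather than bare SHA-256 hex digests, so treat this only as an analogy.

```python
# Minimal sketch of content-addressed retrieval checking: data is accepted only
# if it hashes back to the identifier that was requested.

import hashlib

def verify_retrieved(expected_id: str, retrieved_bytes: bytes) -> bool:
    """True only if the returned data hashes back to the identifier requested."""
    return hashlib.sha256(retrieved_bytes).hexdigest() == expected_id

# Example: a tampered payload is rejected because its hash no longer matches.
original = b"patent metadata v1"
cid_like = hashlib.sha256(original).hexdigest()
assert verify_retrieved(cid_like, original)
assert not verify_retrieved(cid_like, b"patent metadata v1 (tampered)")
```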

Filecoin

Another decentralized storage network is Filecoin. It is built on top of IPFS and is designed to store the most important data, such as media files. Truffle Suite has also launched NFT Development Template with Filecoin Box. NFT.Storage (Free Decentralized Storage for NFTs)29 allows users to easily and securely store their NFT content and metadata using IPFS and Filecoin. NFT.Storage is a service backed by Protocol Labs and Pinata specifically for storing NFT data. Through content addressing and decentralized storage, NFT.Storage allows developers to protect their NFT assets and associated metadata, ensuring that all NFTs follow best practices to stay accessible for the long term. NFT.Storage makes it completely frictionless to mint NFTs following best practices through resilient persistence on IPFS and Filecoin. NFT.Storage allows developers to quickly, safely, and for free store NFT data on decentralized networks. Anyone can leverage the power of IPFS and Filecoin to ensure the persistence of their NFTs. The details of this system are stated as follows30:

Content addressing

Once users upload data to NFT.Storage, they receive a CID, which is an IPFS hash of the content. CIDs are the data's unique fingerprints, universal addresses that can be used to refer to the data regardless of how or where it is stored. Using CIDs to reference NFT data avoids problems such as weak links and "rug pulls," since CIDs are generated from the content itself.

Provable storage

NFT.Storage uses Filecoin for long-term decentralized data storage. Filecoin uses cryptographic proofs to assure the NFT data’s durability and persistence over time.

Resilient retrieval

Data stored via IPFS and Filecoin can be fetched directly in the browser via any public IPFS gateway.

Authentication Layer

The second layer is the authentication layer, whose functions we briefly highlight in this section. The Decentralized Identity (DID) approach assists users in collecting credentials from a variety of issuers, such as the government, educational institutions, or employers, and saving them in a digital wallet. The verifier then uses these credentials to verify a person's validity by using a blockchain-based ledger to follow the "identity and access management (IAM)" process. Therefore, DID allows users to be in control of their identity. A lack of NFT verifiability also causes intellectual property and copyright infringements; of course, the chain of custody may be traced back to the creator's public address to check whether a similar patent is filed using that address. However, there is no quick and foolproof way to check an NFT creator's legitimacy. Without such verification built into the NFT, an NFT proves ownership only over that NFT itself and nothing more.

Self-sovereign identity (SSI)31 is a solution to this problem. SSI is a new series of standards that will guide a new identity architecture for the Internet. With a focus on privacy, security, and interoperability, SSI applications use public-key cryptography with public blockchains to generate persistent identities for people, with private and selective disclosure of information. Blockchain technology offers a solution to establish trust and transparency and provide a secure and publicly verifiable KYC (Know Your Customer) process. The blockchain architecture allows information from various service providers to be collected into a single cryptographically secure and unchanging database that does not need a third party to verify the authenticity of the information.

The proposed platform generates patent-related smart contracts, programs that run on the blockchain to receive and send transactions. They are unalterable and privately identify clients through a thorough KYC process. After KYC approval, an NFT is minted on the blockchain as a certificate of verification32. This article uses a decentralized authentication solution at this layer. This solution has been used for various applications in the blockchain field (e.g., smart cities and the Internet of Things)3334, but we use it here for the proposed framework (patents as NFTs). Details of this solution are presented in the following.

Decentralized authentication

This section presents the authentication layer, similar to35, built to provide validated communication in a secure and decentralized manner via blockchain technology. As shown in Fig. 3, the authentication protocol comprises two processes: registration and login.

Figure 3: The registration and login processes of the authentication protocol.
Registration

In the registration process of a suggested authentication protocol, we first initialize a user’s public key as their identity key (UserName). Then, we upload this identity key on a blockchain, in which transactions can be verified later by other users. Finally, the user generates an identity transaction.

Login

After registration, a user logs in to the system. The login process is described as follows (a minimal illustrative sketch of the signature check follows the list):

  • 1. The user commits identity information and imports their secret key into the service application to log in.
  • 2. A user who needs to log in sends a login request to the network’s service provider.
  • 3. The service provider analyzes the login request, extracts the hash, queries the blockchain, and obtains identity information from an identity list (identity transactions).
  • 4. The service provider responds with an authentication request when the above process is completed. A timestamp (to avoid a replay attack), the user’s UserName, and a signature are all included in the authentication request.
  • 5. The user creates a signature over five parameters: the timestamp, the user's UserName and PK, and the service provider's UserName and PK. This signature is used as the user's authentication credential.
  • 6. The service provider verifies the received information, and if the received information is valid, the authentication succeeds; otherwise, the authentication fails, and the user’s login is denied.
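A minimal sketch of steps 5-6 is shown below, assuming Ed25519 signatures via the pyca/cryptography package. The paper does not prescribe a signature scheme, and the exact message layout (timestamp plus both parties' UserNames and public keys) and the 60-second replay window are our assumptions.

```python
# Hedged sketch of the challenge-response check in steps 5-6; scheme and layout assumed.
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat


def build_message(timestamp: int, user_name: bytes, user_pk: bytes,
                  sp_name: bytes, sp_pk: bytes) -> bytes:
    # Concatenate the five parameters from step 5 into one byte string.
    return b"|".join([str(timestamp).encode(), user_name, user_pk, sp_name, sp_pk])


# Key material; the user's public key would already be recorded on-chain as their identity.
user_key = Ed25519PrivateKey.generate()
user_pk = user_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
sp_key = Ed25519PrivateKey.generate()            # service provider's key pair
sp_pk = sp_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

# User side (step 5): sign the challenge.
ts = int(time.time())
msg = build_message(ts, b"alice", user_pk, b"patent-portal", sp_pk)
credential = user_key.sign(msg)                  # signature used as the login credential

# Service provider side (step 6): check freshness, then verify the signature.
try:
    assert abs(time.time() - ts) < 60            # crude replay-attack window (assumption)
    user_key.public_key().verify(credential, msg)  # public key looked up from the ledger
    print("authentication succeeded")
except (AssertionError, InvalidSignature):
    print("authentication failed")
```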

A patent application must be assessed by the World Intellectual Property Organization (WIPO) and multiple target patent offices in various nations or regions, resulting in inefficiency, high costs, and uncertainty. This study presents a conceptual NFT-based patent framework for issuing, validating, and sharing patent certificates. The platform aims to support counterfeit protection as well as secure access and management of certificates according to the needs of learners, companies, education institutions, and certification authorities.

Here, the certification authority (CA) is used to authenticate patent offices. The procedure will first validate a patent if it is provided with a digital certificate that meets the X.509 standard. Certificate authorities are introduced into the system to authenticate both the nodes and clients connected to the blockchain network.

Verification layer

In permissioned blockchains, only identified nodes can read and write to the distributed ledger. Nodes can act in different roles and have various permissions. Therefore, a distributed system can be designed in which the identified nodes are the patent-granting offices. Here the system is described conceptually at a high level. Figure 4 illustrates the sequence diagram of this layer. This layer includes four levels, as described below:

Figure 4: Sequence diagram of the verification layer.

Digitalization

For a patent to be published as an NFT in the blockchain, it must be in a digitalized format. This level is the "filing step" in traditional patent registration. An application could be designed in the application layer to allow users to enter patent information online.

Recording

Patents provide valuable information and would bring financial benefits for their owner. If they are publicly published in a blockchain network, miners may refuse the patent and take the innovation for themselves. At least it can weaken consensus reliability and encourage miners to misbehave. The inventor should record his innovation privately first using proof of existence to prevent this. The inventor generates the hash of the patent document and records it in the blockchain. As soon as it is recorded in the blockchain, the timestamp and the hash are available for others publicly. Then, the inventor can prove the existence of the patent document whenever it is needed.

Furthermore, using methods like Decision Thinking36, an inventor can record each phase of patent development separately. In each stage, the user generates the hash of the finished part and publishes it together with the previous part's hash. Finally, they have a coupled series of hashes that documents the patent's development, and they can prove the existence of each phase using the original related documents. This should be done to prevent others from abusing the patent and taking it for themselves. The inventor can make sure that their patent document is recorded confidentially and immutably37.

Different hash algorithms exist with different architecture, time complexity, and security considerations. Hash functions should satisfy two main requirements: Pre-Image Resistance: This means that it should be computationally hard to find the input of a hash function while the output and the hash algorithm are known publicly. Collision Resistance: This means that it is computationally hard to find two arbitrary inputs, x, and y, that have the same hash output. These requirements are vital for recording patents. First, the hash function should be Pre-Image Resistance to make it impossible for others to calculate the patent documentation. Otherwise, everybody can read the patent, even before its official publication. Second, the hash function should satisfy Collision Resistance to preclude users from changing their document after recording. Otherwise, users can upload another document, and after a while, they can replace it with another one.

There are various hash algorithms, of which the MD and SHA families are the most widely used. According to38, collisions have been found for the MD2, MD4, MD5, SHA-0, and SHA-1 hash functions, so they are not a good choice for recording patents. No collision has been found for the SHA-2 family: although SHA-2 is noticeably slower than the earlier algorithms, the recording phase is not highly time-sensitive, so SHA-2 is the better choice and provides strong security for users.

Validating

In this phase, inventors first create an NFT for their patent and publish it to the miners/validators. Miners are identified nodes that validate NFTs before they are recorded on the blockchain. Because patent validation requires specialist knowledge, the miners cannot be arbitrary members of the public; at the same time, there are too few patent offices to make the network fully decentralized on their own. The miners can therefore be qualified specialists certified by the patent offices, from which they receive a digital certificate attesting to their eligibility to referee a patent.

Digital certificate

Digital certificates are digital credentials used to verify the online identities of networked entities. They usually include a public key as well as the owner's identification, and they are issued by Certification Authorities (CAs), which must verify the certificate holder's identity. Certificates contain cryptographic keys for signing, encryption, and decryption. X.509 is a standard that defines the format of public-key certificates signed by a certificate authority; its structure is shown in Fig. 5 and contains the following fields. Version: indicates the version of the X.509 standard; the standard has multiple versions with different structures, and validators can choose the version their CA supports. Serial Number: a unique number that distinguishes the certificate from all others issued by the CA. Signature Algorithm Identifier: indicates the cryptographic algorithm used by the certificate authority to sign the certificate. Issuer Name: the name of the issuer, generally the certificate authority. Validity Period: the period during which the certificate is valid; this limited lifetime partly protects certificates against exposure of the CA's private key. Subject Name: the name of the requester, which in the proposed framework is the validator's name. Subject Public Key Info: the public key of the certificate's subject. These fields are identical among all versions of the X.509 standard39.

[Figure 5: Structure of an X.509 certificate]

Certificate authority

A Certificate Authority (CA) issues digital certificates. The CA signs each certificate with its private key, which is never made public, and anyone holding the CA's public key can verify that signature.

Here, the patent office creates a certificate for each requested patent referee. The patent office writes the validator's information into the certificate and signs it with the patent office's private key. The validator can then use the certificate to assure others of their eligibility, and other nodes can check the requesting node's information by verifying the certificate with the patent office's public key. In this way, qualified persons can join the network as miners/validators using their credentials. In this phase, miners perform formal examinations, prior-art research, and substantive examinations, and vote to grant or refuse the patent.
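As an illustration of this issuance-and-verification flow, the sketch below uses the Python cryptography library to let a hypothetical patent-office CA sign an examiner's certificate, and lets any node verify that signature with the CA's public key. All names, the key type, and the one-year validity period are assumptions for the sketch; the real X.509 profile would be defined by the patent office.

# Illustrative use of the 'cryptography' library: a hypothetical patent-office
# CA issues a validator certificate, and any node verifies the CA's signature.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def make_cert(subject_cn, issuer_cn, subject_key, signing_key):
    return (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, subject_cn)]))
        .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, issuer_cn)]))
        .public_key(subject_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
        .sign(signing_key, hashes.SHA256())
    )

ca_key = ec.generate_private_key(ec.SECP256R1())
validator_key = ec.generate_private_key(ec.SECP256R1())
validator_cert = make_cert(u"Patent Examiner 42", u"Patent Office CA", validator_key, ca_key)

# Any node holding the CA's public key can check the examiner's eligibility.
ca_key.public_key().verify(
    validator_cert.signature,
    validator_cert.tbs_certificate_bytes,
    ec.ECDSA(validator_cert.signature_hash_algorithm),
)
print("validator certificate verified")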

The miners then reach consensus on the patent and record it on the blockchain, together with comments indicating whether it is granted or needs revisions. If the miners judge the NFT to be a malicious request, they do not record it on the blockchain.

Blockchain layer

This layer acts as middleware between the Verification Layer and the Application Layer in the patents-as-NFTs architecture. Its main purpose is to provide IP management. Transitioning to a blockchain-based patents-as-NFTs record system enables many previously suggested improvements to current patent systems in a flexible, scalable, and transparent manner.

Several blockchain platforms could host such a system, including Ethereum, EOS, Flow, and Tezos. Blockchain systems are mainly classified into two major types according to their access and consensus model: permissionless (public) and permissioned (private) blockchains. In a public blockchain, any node can participate in the peer-to-peer network and can leave it without the consent of the other nodes; the ledger is fully decentralized.

Bitcoin is one of the most popular examples of a public, permissionless blockchain. Proof of Work (PoW), Proof of Stake (PoS), and directed acyclic graph (DAG) schemes are examples of consensus approaches used in permissionless blockchains. Bitcoin and Ethereum, two famous and widely trusted blockchain networks, use the PoW consensus mechanism, while platforms like Cardano and EOS adopt PoS consensus40.

In a private blockchain, nodes require specific access or permission to be authenticated on the network. Hyperledger is among the most popular private blockchains, allowing only permissioned members to join the network after authentication. This provides security for a group of entities that do not completely trust one another but want to achieve a common objective, such as exchanging information. Entities in a permissioned blockchain network can use Byzantine-fault-tolerant (BFT) consensus, and Hyperledger Fabric has a membership identity service that manages user IDs and authenticates network participants.

Members therefore know each other's identities while privacy and secrecy are maintained, because they cannot see each other's activities41. Owing to this more controlled security model, private blockchains have attracted considerable interest from banking and financial organizations, which believe these platforms can disrupt current centralized systems. Hyperledger, Quorum, Corda, and EOS are examples of permissioned blockchains42.

Reaching consensus in a distributed environment is a challenge. A blockchain is a decentralized network with no central node to observe and check all transactions, so protocols must be designed to ensure that all recorded transactions are valid; consensus algorithms are therefore considered the core of every blockchain43. In distributed systems, consensus is the problem of getting all network members (nodes) to agree on accepting or rejecting a block. Once all members accept a new block, it can be appended to the previous block.

As mentioned, the main concern in blockchains is how to reach consensus among network members. A wide range of consensus algorithms has been designed, each with its own pros and cons42. Blockchain consensus algorithms fall mainly into the three groups shown in Table 2. The first group, proof-based consensus, requires the nodes joining the verifying network to demonstrate their qualification for the appending task. The second group, voting-based consensus, requires validators in the network to share their results of validating a new block or transaction before the final decision is made; a minimal sketch of such a voting round is given after Table 2. The third group, DAG-based consensus, is a newer class of algorithms that allows several different blocks to be published and recorded on the network simultaneously.

Table 2: Consensus algorithms in blockchain networks.

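As a rough, non-authoritative illustration of the voting-based family, the sketch below tallies validator votes on a single patent NFT under an assumed two-thirds approval threshold; the Vote and tally_votes names, and the threshold itself, are not taken from the paper.

# Minimal sketch of a voting-based validation round among certified validators.
# The two-thirds supermajority threshold is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class Vote:
    validator_id: str   # identity proven by an X.509 certificate (see above)
    approve: bool       # True = grant the patent NFT, False = refuse it
    comment: str = ""   # e.g. requested reformations

def tally_votes(votes: list[Vote], total_validators: int) -> str:
    """Return 'granted', 'refused', or 'pending' for one patent NFT."""
    approvals = sum(v.approve for v in votes)
    rejections = len(votes) - approvals
    threshold = (2 * total_validators) // 3 + 1   # assumed 2/3 supermajority
    if approvals >= threshold:
        return "granted"
    if rejections >= threshold:
        return "refused"
    return "pending"

votes = [Vote("examiner-1", True), Vote("examiner-2", True), Vote("examiner-3", False, "prior art found")]
print(tally_votes(votes, total_validators=4))   # 'pending' until a 2/3 majority is reached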

The proposed patents-as-NFTs platform builds blockchain-based intellectual property management that empowers the entire patent ecosystem. It removes barriers by addressing fundamental issues within the traditional patent system: blockchain can handle patents and trademarks efficiently, reducing approval wait times and other required resources. The user entities involved in intellectual property management are creators, patent consumers, and copyright-managing entities. Creators are users who own the original data, e.g., inventors, writers, and researchers. Patent consumers are users who wish to consume the content and support the creator's work. Copyright-managing entities, e.g., lawyers, are the users responsible for protecting the creators' intellectual property. The patents-as-NFTs solution for IP management in the blockchain layer works by implementing the following steps62:

Creators sign up to the platform

Creators must sign up on the blockchain platform to patent their creative work; identity information is required at sign-up.

Creators upload IP on the blockchain network

The creator then adds the intellectual property for which a patent application is required, uploading the IP-related information and data to the blockchain network. The blockchain ensures traceability and auditability, protecting the data against duplication and manipulation. Once uploaded, the patent becomes visible to all network members.

Consumers generate request to use the content

Consumers who want to access the content must first register on the blockchain network. After signing up, consumers can ask creators to grant access to the patented content. Before the patent owner authorizes the request, a smart contract is created that allows customers to access information such as the owner's details. Consumers are also required to pay fees, in either fiat money or dedicated tokens, in order to use the creator's original information. When the creator approves the request, an NDA (non-disclosure agreement) is produced and signed by both parties; the blockchain manages the agreement and guarantees that all parties accept the terms and conditions filed.

Patent management entities leverage blockchain to protect copyrights and solve related disputes

Blockchain assists the patent management entities in resolving a variety of disputes, including sharing confidential information, establishing proof of authorship, transferring IP rights, and making defensive publications. Suppose, for example, that someone uses a patented invention in their company without the inventor's consent; the inventor can report this to the patent office and use the blockchain record to establish ownership of the invention.

Application layer

A global patent marketplace platform would allow enterprises, governments, universities, and small and medium-sized enterprises (SMEs) worldwide to tokenize patents as NFTs, creating an infrastructure for storing patent records on a blockchain-based network and a decentralized marketplace in which patent holders could easily sell or otherwise monetize their patents. An NFT-based patent can use smart contracts to set a fixed price for a license or purchase.

Any buyer satisfied with the conditions can pay and immediately unlock the rights to the patent without either party ever having to interact directly. While patents are currently regulated jurisdiction by jurisdiction around the world, a blockchain-based patent marketplace using NFTs can reduce the geographical barriers between patent systems with a tool as simple as a search query. Easy global access to patents can help aspiring inventors accelerate the innovation process by building on others' patented inventions through licenses. There is a wide variety of use cases for patent NFTs, including SMEs, patent organizations, grants and funding, and fundraising or transferring information relating to patents. These applications keep growing over time, and new ways to utilize these tokens are constantly being found. Some of the most common applications are described below.

SMEs

The aim is to move intellectual property assets onto a digital, centralized, and secure blockchain network, enabling easier commercialization of patents, especially for small or medium enterprises (SMEs). Smart contracts can be attached to NFTs so terms of use and ownership can be outlined and agreed upon without incurring as many legal fees as traditional IP transfers. This is believed to help SMEs secure funding, as they could more easily leverage the previously undisclosed value of their patent portfolios63.

Transfer ownership of patents

NFTs can be used to transfer ownership of patents. The blockchain keeps track of patent owners, and the tokens would include self-executing contracts that transfer the legal rights associated with a patent when the token itself is transferred. A partnership between IBM and IPwe has spearheaded the use of NFTs to secure patent ownership; the two companies have teamed up to build the infrastructure for an NFT-based patent marketplace. A minimal sketch of a fixed-price license purchase and an ownership transfer is given below.
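As a rough, non-authoritative illustration of these two use cases, the following Python sketch mimics the behaviour a patent-NFT smart contract would enforce: a fixed license price set by the owner, automatic granting of a license once that price is paid, and transfer of the token's ownership. The class and method names (PatentNFT, buy_license, transfer) and the example values are assumptions; a real deployment would implement this logic in an on-chain smart contract (for instance an ERC-721 contract) rather than in application code.

# Minimal sketch of fixed-price licensing and ownership transfer for a patent NFT.
class PatentNFT:
    def __init__(self, token_id: str, owner: str, license_price: int):
        self.token_id = token_id
        self.owner = owner
        self.license_price = license_price      # price in the platform's token units
        self.licensees: set[str] = set()

    def buy_license(self, buyer: str, payment: int) -> None:
        """Grant a license automatically once the price set by the owner is paid."""
        if payment < self.license_price:
            raise ValueError("payment below the price set by the patent holder")
        # In a real contract the payment would be transferred to the owner here.
        self.licensees.add(buyer)

    def transfer(self, new_owner: str) -> None:
        """Transfer ownership of the patent NFT itself."""
        self.owner = new_owner

patent = PatentNFT("PAT-001", owner="inventor-A", license_price=100)
patent.buy_license("sme-B", payment=100)     # buyer unlocks rights without direct negotiation
patent.transfer("company-C")                 # ownership changes hands with the token
print(patent.owner, patent.licensees)        # company-C {'sme-B'}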

Discussion

There are exciting proposals in the legal and economic literature that suggest seemingly straightforward solutions to many of the issues plaguing current patent systems. However, most of these solutions would constitute major administrative disruptions and place significant, continuous financial burdens on patent offices or their users. An NFT-based patent system not only makes many of these ideas administratively feasible but also allows them to be examined in a step-wise, scalable, and very public manner.

Furthermore, NFT-based patents may facilitate reliable information sharing among offices and patentees worldwide, reducing the burden on examiners and perhaps even accelerating harmonization efforts. NFT-based patents also have additional transparency and archival attributes baked in. A patent should be a privilege bestowed on those who take resource-intensive risks to explore the frontier of technological capabilities. Full transparency about how these rewards are granted is very much in the public interest, since it is society that pays for the administrative and economic inefficiencies of today's systems; NFT-based patents can enhance this transparency. From an organizational perspective, an NFT-based patent system can remove current bottlenecks in patent processes by making them more efficient, rapid, and convenient for applicants without compromising the quality of granted patents.

The proposed framework faces several challenges that must be solved before a mature patent verification platform can be deployed. First, there are technical problems. The consensus method used in the verification layer is not specified in detail; given the permissioned structure of the miners in NFT-based patents, consensus algorithms designed for permissioned blockchains, such as PBFT, Federated Consensus, and Round Robin consensus, could be applied. Miners/validators also spend time validating patents, so an incentive protocol is needed; challenges such as proving the miners' time and effort, the fee inventors should pay to miners, and other economic trade-offs must be considered.

Different NFT standards were discussed above. If various patent services adopt different standards, cross-platform problems will arise: transferring an NFT from the Ethereum blockchain (an ERC-721 token) to the EOS blockchain, for instance, is not straightforward and needs careful consideration. People also usually trade NFTs in marketplaces such as Rarible and OpenSea, which are centralized and may introduce challenges of their own. Beyond these technical issues, there are challenges stemming from the sheer novelty of NFT-based patents and blockchain services.

A blockchain-based patent service has not been tested before, and the registration procedure and concepts of the patent-as-NFT system may seem ambiguous to people who still prefer conventional centralized patent systems over decentralized ones. There are also open problems on the mining side: miners must receive certificates from accepted organizations, and determining which organizations qualify and how they accredit referees as validators needs more consideration. Some types of inventions are prohibited in certain countries and cannot be registered there; because NFT-based patents let inventors register their patents publicly, conflicts between inventors and governments may arise. Finally, there are misunderstandings about NFT ownership rights: it is not clear exactly which rights a person acquires when buying an NFT, for instance whether they obtain property rights alone or also moral rights.

Conclusion

Blockchain technology provides strong timestamping, proof of existence, and the potential for smart contracts. It enables a transparent, distributed, cost-effective, and resilient environment that is open to all and in which every transaction is auditable. Blockchain is thus a definite boon to the IP industry, benefitting patent owners; when its intrinsic characteristics are applied to the IP domain, it strengthens the protection of copyrights and patents. This paper provided a conceptual framework for NFT-based patents with a comprehensive discussion of many aspects: background, model components, token standards, application areas, and research challenges. The proposed framework comprises five main layers: the Storage Layer, Authentication Layer, Verification Layer, Blockchain Layer, and Application Layer. Its primary purpose is to provide an NFT-based concept for registering patents on a decentralized, tamper-resistant, and reliable network for trade and exchange around the world. Finally, we addressed several open challenges facing NFT-based patents.

References

  1. Nakamoto, S. Bitcoin: A peer-to-peer electronic cash system. Decent. Bus. Rev. 21260, https://bitcoin.org/bitcoin.pdf (2008).
  2. Buterin, V. A next-generation smart contract and decentralized application platform. White Pap. 3 (2014).
  3. Nofer, M., Gomber, P., Hinz, O. & Schiereck, D. Blockchain. Business & Information Systems Engineering 59, 183–187 (2017).
  4. Regner, F., Urbach, N. & Schweizer, A. NFTs in practice—non-fungible tokens as core component of a blockchain-based event ticketing application. https://www.researchgate.net/publication/336057493_NFTs_in_Practice_-_Non-Fungible_Tokens_as_Core_Component_of_a_Blockchain-based_Event_Ticketing_Application (2019).
  5. Entriken, W., Shirley, D., Evans, J. & Sachs, N. EIP 721: ERC-721 non-fungible token standard. Ethereum Improv. Propos.https://eips.ethereum.org/EIPS/eip-721 (2018).
  6. Radomski, W. et al. Eip 1155: Erc-1155 multi token standard. In Ethereum, Standard (2018).
  7. Dowling, M. Is non-fungible token pricing driven by cryptocurrencies? Finance Res. Lett. 44, 102097. https://doi.org/10.1016/j.frl.2021.102097 (2021).
  8. Lesavre, L., Varin, P. & Yaga, D. Blockchain Networks: Token Design and Management Overview. (National Institute of Standards and Technology, 2020).
  9. Larva-Labs. About Cryptopunks, Retrieved 13 May, 2021, from https://www.larvalabs.com/cryptopunks (2021).
  10. Cryptokitties. About Cryptokitties, Retrieved 28 May, 2021, from https://www.cryptokitties.co/ (2021).
  11. nbatopshot. About Nba top shot, Retrieved 4 April, 2021, from https://nbatopshot.com/terms (2021).
  12. Fairfield, J. Tokenized: The law of non-fungible tokens and unique digital property. Indiana Law J. forthcoming (2021).
  13. Chevet, S. Blockchain technology and non-fungible tokens: Reshaping value chains in creative industries. Available at SSRN 3212662 (2018).
  14. Bal, M. & Ner, C. NFTracer: a Non-Fungible token tracking proof-of-concept using Hyperledger Fabric. arXiv preprint arXiv:1905.04795 (2019).
  15. Wang, Q., Li, R., Wang, Q. & Chen, S. Non-fungible token (NFT): Overview, evaluation, opportunities and challenges. arXiv preprint arXiv:2105.07447 (2021).
  16. Qu, Q., Nurgaliev, I., Muzammal, M., Jensen, C. S. & Fan, J. On spatio-temporal blockchain query processing. Future Gener. Comput. Syst. 98, 208–218 (2019).
  17. Rosenfeld, M. Overview of colored coins. White paper, bitcoil. co. il 41, 94 (2012).
  18. Obsidian-Labs. dGoods Standard, Retrieved 29 April, 2021, from https://docs.eosstudio.io/contracts/dgoods/standard.html. (2021).
  19. Algorand. Algorand Core Technology Innovation, Retrieved 10 March, 2021, from https://www.algorand.com/technology/core-blockchain-innovation. (2021).
  20. Weathersby, J. Building NFTs on Algorand, Retrieved 15 April, 2021, from https://developer.algorand.org/articles/building-nfts-on-algorand/. (2021).
  21. Algorand. How Algorand Democratizes the Access to the NFT Market with Fractional NFTs, Retrieved 7 April, 2021, from https://www.algorand.com/resources/blog/algorand-nft-market-fractional-nfts. (2021).
  22. Tezos. Welcome to the Tezos Developer Documentation, Retrieved 16 May, 2021, from https://tezos.gitlab.io. (2021).
  23. flowdocs. Non-Fungible Tokens, Retrieved 20 May, 2021, from https://docs.onflow.org/cadence/tutorial/04-non-fungible-tokens/. (2021).
  24. Benisi, N. Z., Aminian, M. & Javadi, B. Blockchain-based decentralized storage networks: A survey. J. Netw. Comput. Appl. 162, 102656 (2020).
  25. NFTReview. On-chain vs. Off-chain Metadata (2021).
  26. Benet, J. Ipfs-content addressed, versioned, p2p file system. arXiv preprint arXiv:1407.3561 (2014).
  27. Nizamuddin, N., Salah, K., Azad, M. A., Arshad, J. & Rehman, M. Decentralized document version control using ethereum blockchain and IPFS. Comput. Electr. Eng. 76, 183–197 (2019).
  28. Tut, K. Who Is Responsible for NFT Data? (2020).
  29. nft.storage. Free Storage for NFTs, Retrieved 16 May, 2021, from https://nft.storage/. (2021).
  30. Psaras, Y. & Dias, D. in 2020 50th Annual IEEE-IFIP International Conference on Dependable Systems and Networks-Supplemental Volume (DSN-S). 80–80 (IEEE).
  31. Tanner, J. & Roelofs, C. NFTs and the need for Self-Sovereign Identity (2021).
  32. Martens, D., Tuyll van Serooskerken, A. V. & Steenhagen, M. Exploring the potential of blockchain for KYC. J. Digit. Bank. 2, 123–131 (2017).
  33. Hammi, M. T., Bellot, P. & Serhrouchni, A. In 2018 IEEE Wireless Communications and Networking Conference (WCNC). 1–6 (IEEE).
  34. Khalid, U. et al. A decentralized lightweight blockchain-based authentication mechanism for IoT systems. Cluster Comput. 1–21 (2020).
  35. Zhong, Y. et al. Distributed blockchain-based authentication and authorization protocol for smart grid. Wirel. Commun. Mobile Comput. (2021).
  36. Schönhals, A., Hepp, T. & Gipp, B. In Proceedings of the 1st Workshop on Cryptocurrencies and Blockchains for Distributed Systems. 105–110.
  37. Verma, S. & Prajapati, G. A Survey of Cryptographic Hash Algorithms and Issues. International Journal of Computer Security & Source Code Analysis (IJCSSCA) 1, 17–20, (2015).
  38. Verma, S. & Prajapati, G. A survey of cryptographic hash algorithms and issues. Int. J. Comput. Secur. Source Code Anal. (IJCSSCA) 1 (2015).
  39. SDK, I. X.509 Certificates (1996).
  40. Helliar, C. V., Crawford, L., Rocca, L., Teodori, C. & Veneziani, M. Permissionless and permissioned blockchain diffusion. Int. J. Inf. Manag. 54, 102136 (2020).
  41. Frizzo-Barker, J. et al. Blockchain as a disruptive technology for business: A systematic review. Int. J. Inf. Manag. 51, 102029 (2020).
  42. Bamakan, S. M. H., Motavali, A. & Bondarti, A. B. A survey of blockchain consensus algorithms performance evaluation criteria. Expert Syst. Appl. 154, 113385 (2020).
  43. Bamakan, S. M. H., Bondarti, A. B., Bondarti, P. B. & Qu, Q. Blockchain technology forecasting by patent analytics and text mining. Blockchain Res. Appl. 100019 (2021).
  44. Castro, M. & Liskov, B. Practical Byzantine fault tolerance and proactive recovery. ACM Trans. Comput. Syst. (TOCS) 20, 398–461 (2002).Article Google Scholar 
  45. Muratov, F., Lebedev, A., Iushkevich, N., Nasrulin, B. & Takemiya, M. YAC: BFT consensus algorithm for blockchain. arXiv preprint arXiv:1809.00554 (2018).
  46. Bessani, A., Sousa, J. & Alchieri, E. E. In 2014 44th Annual IEEE/IFIP International Conference on Dependable Systems and Networks. 355–362 (IEEE).
  47. Todd, P. Ripple protocol consensus algorithm review. May 11th (2015).
  48. Ongaro, D. & Ousterhout, J. In 2014 {USENIX} Annual Technical Conference ({USENIX}{ATC} 14). 305–319.
  49. Larimer, D. Delegated proof-of-stake (dpos). Bitshare whitepaper, Reterived March 31, 2019, from http://docs.bitshares.org/bitshares/dpos.html (2014).
  50. Turner, B. (October, 2007).
  51. De Angelis, S. et al. PBFT vs proof-of-authority: Applying the CAP theorem to permissioned blockchain (2018).
  52. King, S. & Nadal, S. Ppcoin: Peer-to-peer crypto-currency with proof-of-stake. self-published paper, August 19 (2012).
  53. Hyperledger. PoET 1.0 Specification (2017).
  54. Buntinx, J. What Is Proof-of-Weight? Reterived March 31, 2019, from https://nulltx.com/what-is-proof-of-weight/# (2018).
  55. P4Titan. A Peer-to-Peer Crypto-Currency with Proof-of-Burn. Reterived March 10, 2019, from https://github.com/slimcoin-project/slimcoin-project.github.io/raw/master/whitepaperSLM.pdf (2014).
  56. Dziembowski, S., Faust, S., Kolmogorov, V. & Pietrzak, K. In Annual Cryptology Conference. 585–605 (Springer).
  57. Bentov, I., Lee, C., Mizrahi, A. & Rosenfeld, M. Proof of activity: Extending Bitcoin's proof of work via proof of stake. IACR Cryptology ePrint Archive 2014, 452 (2014).
  58. NEM, T. Nem technical referencehttps://nem.io/wpcontent/themes/nem/files/NEM_techRef.pdf (2018).
  59. Bramas, Q. The Stability and the Security of the Tangle (2018).
  60. Baird, L. The swirlds hashgraph consensus algorithm: Fair, fast, byzantine fault tolerance. In Swirlds Tech Reports SWIRLDS-TR-2016–01, Tech. Rep (2016).
  61. LeMahieu, C. Nano: A feeless distributed cryptocurrency network. Nano [Online resource]. https://nano.org/en/whitepaper (date of access: 24.03. 2018) 16, 17 (2018).
  62. Casino, F., Dasaklis, T. K. & Patsakis, C. A systematic literature review of blockchain-based applications: Current status, classification and open issues. Telematics Inform. 36, 55–81 (2019).
  63. bigredawesomedodo. Helping Small Businesses Survive and Grow With Marketing, Retrieved 3 June, 2021, from https://bigredawesomedodo.com/nft/. (2020).


Acknowledgements

This work has been partially supported by CAS President’s International Fellowship Initiative, China [grant number 2021VTB0002, 2021] and National Natural Science Foundation of China (No. 61902385).

Author information

Affiliations

  1. Department of Industrial Management, Yazd University, Yazd City, Iran: Seyed Mojtaba Hosseini Bamakan
  2. Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan City, Iran: Nasim Nezhadsistani
  3. School of Electrical and Computer Engineering, University of Tehran, Tehran City, Iran: Omid Bodaghi
  4. Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China: Seyed Mojtaba Hosseini Bamakan & Qiang Qu
  5. Huawei Blockchain Lab, Huawei Cloud Tech Co., Ltd., Shenzhen, China: Qiang Qu

Contributions

NFT: Redefined Format of IP Assets

The collaboration between National Center for Advancing Translational Sciences (NCATS) at NIH and BurstIQ

2.0 LPBI is a Very Unique Organization 

 

Read Full Post »

Reporter: Stephen J. Williams, Ph.D.

From: Heidi Rehm et al. GA4GH: International policies and standards for data sharing across genomic research and healthcare. Cell Genomics, Volume 1, Issue 2 (2021).

Source: DOI:https://doi.org/10.1016/j.xgen.2021.100029

Highlights

  • Siloing genomic data in institutions/jurisdictions limits learning and knowledge
  • GA4GH policy frameworks enable responsible genomic data sharing
  • GA4GH technical standards ensure interoperability, broad access, and global benefits
  • Data sharing across research and healthcare will extend the potential of genomics

Summary

The Global Alliance for Genomics and Health (GA4GH) aims to accelerate biomedical advances by enabling the responsible sharing of clinical and genomic data through both harmonized data aggregation and federated approaches. The decreasing cost of genomic sequencing (along with other genome-wide molecular assays) and increasing evidence of its clinical utility will soon drive the generation of sequence data from tens of millions of humans, with increasing levels of diversity. In this perspective, we present the GA4GH strategies for addressing the major challenges of this data revolution. We describe the GA4GH organization, which is fueled by the development efforts of eight Work Streams and informed by the needs of 24 Driver Projects and other key stakeholders. We present the GA4GH suite of secure, interoperable technical standards and policy frameworks and review the current status of standards, their relevance to key domains of research and clinical care, and future plans of GA4GH. Broad international participation in building, adopting, and deploying GA4GH standards and frameworks will catalyze an unprecedented effort in data sharing that will be critical to advancing genomic medicine and ensuring that all populations can access its benefits.

In order for genomic and personalized medicine to come to fruition, it is imperative that data siloes around the world are broken down, allowing international collaboration on the collection, storage, transfer, access, and analysis of molecular and health-related data.

We have discussed in numerous articles on this site the problems data siloes produce. By data siloes we mean that not only data but also intellectual thought are collected and stored behind physical, electronic, and intellectual walls, inaccessible to scientists who do not belong to a particular institution or collaborative network.

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

Standardization and harmonization of data are key to this effort to share electronic records. The EU has taken bold action on this matter. The following section concerns the General Data Protection Regulation of the EU and can be found at the following link:

https://ec.europa.eu/info/law/law-topic/data-protection/data-protection-eu_en

Fundamental rights

The EU Charter of Fundamental Rights stipulates that EU citizens have the right to protection of their personal data.

Protection of personal data

Legislation

The data protection package adopted in May 2016 aims at making Europe fit for the digital age. More than 90% of Europeans say they want the same data protection rights across the EU and regardless of where their data is processed.

The General Data Protection Regulation (GDPR)

Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data. This text includes the corrigendum published in the OJEU of 23 May 2018.

The regulation is an essential step to strengthen individuals’ fundamental rights in the digital age and facilitate business by clarifying rules for companies and public bodies in the digital single market. A single law will also do away with the current fragmentation in different national systems and unnecessary administrative burdens.

The regulation entered into force on 24 May 2016 and applies since 25 May 2018. More information for companies and individuals.

Information about the incorporation of the General Data Protection Regulation (GDPR) into the EEA Agreement.

EU Member States notifications to the European Commission under the GDPR

The Data Protection Law Enforcement Directive

Directive (EU) 2016/680 on the protection of natural persons regarding processing of personal data connected with criminal offences or the execution of criminal penalties, and on the free movement of such data.

The directive protects citizens’ fundamental right to data protection whenever personal data is used by criminal law enforcement authorities for law enforcement purposes. It will in particular ensure that the personal data of victims, witnesses, and suspects of crime are duly protected and will facilitate cross-border cooperation in the fight against crime and terrorism.

The directive entered into force on 5 May 2016 and EU countries had to transpose it into their national law by 6 May 2018.

The following paper by the organization The Global Alliance for Genomics and Health discusses these types of collaborative efforts to break down data silos in personalized medicine. This organization has over 2000 subscribers in over 90 countries, encompassing over 60 organizations.

Enabling responsible genomic data sharing for the benefit of human health

The Global Alliance for Genomics and Health (GA4GH) is a policy-framing and technical standards-setting organization, seeking to enable responsible genomic data sharing within a human rights framework.

The Global Alliance for Genomics and Health (GA4GH) is an international, nonprofit alliance formed in 2013 to accelerate the potential of research and medicine to advance human health. Bringing together 600+ leading organizations working in healthcare, research, patient advocacy, life science, and information technology, the GA4GH community is working together to create frameworks and standards to enable the responsible, voluntary, and secure sharing of genomic and health-related data. All of our work builds upon the Framework for Responsible Sharing of Genomic and Health-Related Data.

GA4GH Connect is a five-year strategic plan that aims to drive uptake of standards and frameworks for genomic data sharing within the research and healthcare communities in order to enable responsible sharing of clinical-grade genomic data by 2022. GA4GH Connect links our Work Streams with Driver Projects—real-world genomic data initiatives that help guide our development efforts and pilot our tools.

From the article on Cell Genomics GA4GH: International policies and standards for data sharing across genomic research and healthcare

Source: Open Access. DOI: https://doi.org/10.1016/j.xgen.2021.100029

The Global Alliance for Genomics and Health (GA4GH) is a worldwide alliance of genomics researchers, data scientists, healthcare practitioners, and other stakeholders. We are collaborating to establish policy frameworks and technical standards for responsible, international sharing of genomic and other molecular data as well as related health data. Founded in 2013,3 the GA4GH community now consists of more than 1,000 individuals across more than 90 countries working together to enable broad sharing that transcends the boundaries of any single institution or country (see https://www.ga4gh.org). In this perspective, we present the strategic goals of GA4GH and detail current strategies and operational approaches to enable responsible sharing of clinical and genomic data, through both harmonized data aggregation and federated approaches, to advance genomic medicine and research. We describe technical and policy development activities of the eight GA4GH Work Streams and implementation activities across 24 real-world genomic data initiatives (“Driver Projects”). We review how GA4GH is addressing the major areas in which genomics is currently deployed including rare disease, common disease, cancer, and infectious disease. Finally, we describe differences between genomic sequence data that are generated for research versus healthcare purposes, and define strategies for meeting the unique challenges of responsibly enabling access to data acquired in the clinical setting.

GA4GH organization

GA4GH has partnered with 24 real-world genomic data initiatives (Driver Projects) to ensure its standards are fit for purpose and driven by real-world needs. Driver Projects make a commitment to help guide GA4GH development efforts and pilot GA4GH standards (see Table 2). Each Driver Project is expected to dedicate at least two full-time equivalents to GA4GH standards development, which takes place in the context of GA4GH Work Streams (see Figure 1). Work Streams are the key production teams of GA4GH, tackling challenges in eight distinct areas across the data life cycle (see Box 1). Work Streams consist of experts from their respective sub-disciplines and include membership from Driver Projects as well as hundreds of other organizations across the international genomics and health community.

[Figure 1: Matrix structure of the Global Alliance for Genomics and Health]


Box 1. GA4GH Work Stream focus areas
The GA4GH Work Streams are the key production teams of the organization. Each tackles a specific area in the data life cycle, as described below (URLs listed in the web resources).

  • (1) Data use & researcher identities: Develops ontologies and data models to streamline global access to datasets generated in any country9,10
  • (2) Genomic knowledge standards: Develops specifications and data models for exchanging genomic variant observations and knowledge18
  • (3) Cloud: Develops federated analysis approaches to support the statistical rigor needed to learn from large datasets
  • (4) Data privacy & security: Develops guidelines and recommendations to ensure identifiable genomic and phenotypic data remain appropriately secure without sacrificing their analytic potential
  • (5) Regulatory & ethics: Develops policies and recommendations for ensuring individual-level data are interoperable with existing norms and follow core ethical principles
  • (6) Discovery: Develops data models and APIs to make data findable, accessible, interoperable, and reusable (FAIR)
  • (7) Clinical & phenotypic data capture & exchange: Develops data models to ensure genomic data is most impactful through rich metadata collected in a standardized way
  • (8) Large-scale genomics: Develops APIs and file formats to ensure harmonized technological platforms can support large-scale computing

For more articles on Open Access, Science 2.0, and Data Networks for Genomics on this Open Access Scientific Journal see:

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

Icelandic Population Genomic Study Results by deCODE Genetics come to Fruition: Curation of Current genomic studies

eScientific Publishing a Case in Point: Evolution of Platform Architecture Methodologies and of Intellectual Property Development (Content Creation by Curation) Business Model 

UK Biobank Makes Available 200,000 whole genomes Open Access

Systems Biology Analysis of Transcription Networks, Artificial Intelligence, and High-End Computing Coming to Fruition in Personalized Oncology

Read Full Post »

UK Biobank Makes Available 200,000 whole genomes Open Access

Reporter: Stephen J. Williams, Ph.D.

The following is a summary of an article by Jocelyn Kaiser, published in the November 26, 2021 issue of the journal Science.

To see the full article please go to https://www.science.org/content/article/200-000-whole-genomes-made-available-biomedical-studies-uk-effort

The UK Biobank (UKBB) this week unveiled to scientists the entire genomes of 200,000 people who are part of a long-term British health study.

The trove of genomes, each linked to anonymized medical information, will allow biomedical scientists to scour the full 3 billion base pairs of human DNA for insights into the interplay of genes and health that could not be gleaned from partial sequences or scans of genome markers. “It is thrilling to see the release of this long-awaited resource,” says Stephen Glatt, a psychiatric geneticist at the State University of New York Upstate Medical University.

Other biobanks have also begun to compile vast numbers of whole genomes, 100,000 or more in some cases (see table, below). But UKBB stands out because it offers easy access to the genomic information, according to some of the more than 20,000 researchers in 90 countries who have signed up to use the data. “In terms of availability and data quality, [UKBB] surpasses all others,” says physician and statistician Omar Yaxmehen Bello-Chavolla of the National Institute for Geriatrics in Mexico City.

Enabling your vision to improve public health

Data drives discovery. We have curated a uniquely powerful biomedical database that can be accessed globally for public health research. Explore data from half a million UK Biobank participants to enable new discoveries to improve public health.


The UK Biobank represents genomes collected from 500,000 middle-aged and elderly participants between 2006 and 2010; the genomes are mostly of European descent. Other large-scale genome sequencing ventures, like Iceland's deCODE, which collected over 100,000 genomes, are now subsidiaries of Amgen and largely behind IP protection rather than Open Access, as this database is.

UK Biobank is a large-scale biomedical database and research resource, containing in-depth genetic and health information from half a million UK participants. The database is regularly augmented with additional data and is globally accessible to approved researchers undertaking vital research into the most common and life-threatening diseases. It is a major contributor to the advancement of modern medicine and treatment and has enabled several scientific discoveries that improve human health.

A summary of some large-scale genome sequencing projects is shown in the table below:

Biobank | Completed whole genomes | Release information
UK Biobank | 200,000 | 300,000 more in early 2023
Trans-Omics for Precision Medicine | 161,000 | NIH requires project-specific request
Million Veterans Program | 125,000 | Non-Veterans Affairs researchers get first access
100,000 Genomes Project | 120,000 | Researchers must join Genomics England collaboration
All of Us | 90,000 | NIH expects to release in 2022

Other Related Articles on Genome Biobank Projects in this Open Access Online Scientific Journal Include the Following:

Icelandic Population Genomic Study Results by deCODE Genetics come to Fruition: Curation of Current genomic studies

Exome Aggregation Consortium (ExAC), generated the largest catalogue so far of variation in human protein-coding regions: Sequence data of 60,000 people, NOW is a publicly accessible database

Systems Biology Analysis of Transcription Networks, Artificial Intelligence, and High-End Computing Coming to Fruition in Personalized Oncology

Diversity and Health Disparity Issues Need to be Addressed for GWAS and Precision Medicine Studies

Read Full Post »

#TUBiol5227: Biomarkers & Biotargets: Genetic Testing and Bioethics

Curator: Stephen J. Williams, Ph.D.

The advent of direct to consumer (DTC) genetic testing and the resultant rapid increase in its popularity as well as companies offering such services has created some urgent and unique bioethical challenges surrounding this niche in the marketplace. At first, most DTC companies like 23andMe and Ancestry.com offered non-clinical or non-FDA approved genetic testing as a way for consumers to draw casual inferences from their DNA sequence and existence of known genes that are linked to disease risk, or to get a glimpse of their familial background. However, many issues arose, including legal, privacy, medical, and bioethical issues. Below are some articles which will explain and discuss many of these problems associated with the DTC genetic testing market as well as some alternatives which may exist.

‘Direct-to-Consumer (DTC) Genetic Testing Market to hit USD 2.5 Bn by 2024’ by Global Market Insights

This post includes the following link to the market analysis of the DTC market (https://www.gminsights.com/pressrelease/direct-to-consumer-dtc-genetic-testing-market). Below are the highlights of the report.

As you can see, this market segment appears poised to expand into the nutritional consulting business as well as targeted biomarkers for specific diseases.

Rising incidence of genetic disorders across the globe will augment the market growth

Increasing prevalence of genetic disorders will propel the demand for direct-to-consumer genetic testing and will augment industry growth over the projected timeline. Increasing cases of genetic diseases such as breast cancer, achondroplasia, colorectal cancer and other diseases have elevated the need for cost-effective and efficient genetic testing avenues in the healthcare market.
 

For instance, according to the World Cancer Research Fund (WCRF), in 2018, over 2 million new cases of cancer were diagnosed across the globe. Also, breast cancer is stated as the second most commonly occurring cancer. Availability of superior quality and advanced direct-to-consumer genetic testing has drastically reduced the mortality rates in people suffering from cancer by providing vigilant surveillance data even before the onset of the disease. Hence, the aforementioned factors will propel the direct-to-consumer genetic testing market over the forecast timeline.
 

DTC Genetic Testing Market By Technology

 

Nutrigenomic Testing will provide robust market growth

The nutrigenomic testing segment was valued at over USD 220 million in 2019, and its market will witness tremendous growth over 2020-2028. The growth of the market segment is attributed to increasing research activities related to nutritional aspects. Moreover, obesity is another major factor that will boost demand in the direct-to-consumer genetic testing market.
 

Nutrigenomics testing enables professionals to recommend nutritional guidance and personalized diets to obese people and help them keep their weight under control while maintaining a healthy lifestyle. Hence, the above-mentioned factors are anticipated to augment the demand and adoption rate of direct-to-consumer genetic testing through 2028.
 

Browse key industry insights spread across 161 pages with 126 market data tables & 10 figures & charts from the report, “Direct-To-Consumer Genetic Testing Market Size By Test Type (Carrier Testing, Predictive Testing, Ancestry & Relationship Testing, Nutrigenomics Testing), By Distribution Channel (Online Platforms, Over-the-Counter), By Technology (Targeted Analysis, Single Nucleotide Polymorphism (SNP) Chips, Whole Genome Sequencing (WGS)), Industry Analysis Report, Regional Outlook, Application Potential, Price Trends, Competitive Market Share & Forecast, 2020 – 2028” in detail along with the table of contents:
https://www.gminsights.com/industry-analysis/direct-to-consumer-dtc-genetic-testing-market
 

Targeted analysis techniques will drive the market growth over the foreseeable future

Based on technology, the DTC genetic testing market is segmented into whole genome sequencing (WGS), targeted analysis, and single nucleotide polymorphism (SNP) chips. The targeted analysis market segment is projected to witness around 12% CAGR over the forecast period. The segmental growth is attributed to recent advancements in genetic testing methods that have revolutionized the detection and characterization of genetic codes.
 

Targeted analysis is mainly utilized to determine any defects in genes that are responsible for a disorder or a disease. Also, growing demand for personalized medicine among the population suffering from genetic diseases will boost the demand for targeted analysis technology. As the technology is relatively cheap, it is a highly preferred method in direct-to-consumer genetic testing procedures. These advantages of targeted analysis are expected to enhance market growth over the foreseeable future.
 

Over-the-counter segment will experience a notable growth over the forecast period

The over-the-counter distribution channel is projected to witness around 11% CAGR through 2028. The segmental growth is attributed to the ease of purchasing a test kit for consumers living in rural areas of developing countries. Consumers prefer the over-the-counter distribution channel as the kits are directly examined by regulatory agencies, making them safer to use, thereby driving market growth over the forecast timeline.
 

Favorable regulations provide lucrative growth opportunities for direct-to-consumer genetic testing

The Europe direct-to-consumer genetic testing market held around a 26% share in 2019 and was valued at around USD 290 million. The regional growth is due to elevated government spending on healthcare to provide easy access to genetic testing avenues. Furthermore, European regulatory bodies are working on improving the regulations governing direct-to-consumer genetic testing methods. Hence, the above-mentioned factors will play a significant role in the market growth.
 

Focus of market players on introducing innovative direct-to-consumer genetic testing devices will offer several growth opportunities

A few of the eminent players operating in the direct-to-consumer genetic testing market include Ancestry, Color Genomics, Living DNA, Mapmygenome, Easy DNA, FamilytreeDNA (Gene By Gene), Full Genome Corporation, Helix OpCo LLC, Identigene, Karmagenes, MyHeritage, Pathway Genomics, Genesis Healthcare, and 23andMe. These market players have undertaken various business strategies to enhance their financial stability and help them evolve as leading companies in the direct-to-consumer genetic testing industry.
 

For example, in November 2018, Helix launched a new genetic testing product, the DNA Discovery Kit, that allows customers to delve into their ancestry. This development expanded the firm's product portfolio, thereby propelling industry growth in the market.

The following posts discuss bioethical issues related to genetic testing and personalized medicine from a clinician's and scientist's perspective:

Question: Each of these articles discusses certain bioethical issues, although each focuses on personalized medicine and treatment. Given your understanding of the robust process involved in validating clinical biomarkers and the current state of the DTC market, how could DTC testing results misinform patients and create mistrust in the physician-patient relationship?

Personalized Medicine, Omics, and Health Disparities in Cancer:  Can Personalized Medicine Help Reduce the Disparity Problem?

Diversity and Health Disparity Issues Need to be Addressed for GWAS and Precision Medicine Studies

Genomics & Ethics: DNA Fragments are Products of Nature or Patentable Genes?

The following posts discuss the bioethical concerns of genetic testing from a patient’s perspective:

Ethics Behind Genetic Testing in Breast Cancer: A Webinar by Laura Carfang of survivingbreastcancer.org

Ethical Concerns in Personalized Medicine: BRCA1/2 Testing in Minors and Communication of Breast Cancer Risk

23andMe Product can be obtained for Free from a new app called Genes for Good: UMich’s Facebook-based Genomics Project

Question: If you are developing a targeted treatment with a companion diagnostic, what bioethical concerns would you address during the drug development process to ensure fair, equitable and ethical treatment of all patients, in trials as well as post market?

Articles on Genetic Testing, Companion Diagnostics and Regulatory Mechanisms

Centers for Medicare & Medicaid Services announced that the federal healthcare program will cover the costs of cancer gene tests that have been approved by the Food and Drug Administration

Real Time Coverage @BIOConvention #BIO2019: Genome Editing and Regulatory Harmonization: Progress and Challenges

New York Times vs. Personalized Medicine? PMC President: Times’ Critique of Streamlined Regulatory Approval for Personalized Treatments ‘Ignores Promising Implications’ of Field

Live Conference Coverage @Medcitynews Converge 2018 Philadelphia: Early Diagnosis Through Predictive Biomarkers, NonInvasive Testing

Protecting Your Biotech IP and Market Strategy: Notes from Life Sciences Collaborative 2015 Meeting

Question: What types of regulatory concerns should one have during the drug development process with regard to the use of biomarker testing? From the last article on Protecting Your IP, how important is it, as a drug developer, to involve all payers during the drug development process?

Read Full Post »

