
Cancer Policy Related News from Washington DC and New NCI Appointments

Reporter: Stephen J. Williams, PhD.

Biden to announce appointees to Cancer Panel, part of initiative to cut death rate

The president first launched the initiative in 2016 as vice president.

By Mary Kekatos

July 13, 2022, 3:00 PM


President Joe Biden will announce Wednesday his appointees to the President’s Cancer Panel, ABC News can exclusively reveal.

The Cancer Panel is part of Biden’s Cancer Moonshot Initiative, which was relaunched in February with a goal of slashing the national cancer death rate by 50% over the next 25 years.

Biden will appoint Dr. Elizabeth Jaffee, Dr. Mitchel Berger and Dr. Carol Brown to the panel, which will advise him and the White House on how to use resources of the federal government to advance cancer research and reduce the burden of cancer in the United States.

Jaffee, who will serve as chair of the panel, is an expert in cancer immunology and pancreatic cancer, according to the White House. She is currently the deputy director of the Sidney Kimmel Comprehensive Cancer Center at Johns Hopkins University and previously led the American Association for Cancer Research.

PHOTO: In this Sept. 8, 2016, file photo, Dr. Elizabeth M. Jaffee of the Pancreatic Dream Team attends Stand Up To Cancer (SU2C), a program of the Entertainment Industry Foundation (EIF), in Hollywood, Calif. (ABC Handout via Getty Images, FILE)

Berger, a neurological surgeon, directs the University of California, San Francisco Brain Tumor Center and previously spent 23 years at the school as a professor of neurological surgery.

Brown, a gynecologic oncologist, is the senior vice president and chief health equity officer at Memorial Sloan Kettering Cancer Center in New York City. According to the White House, much of her career has been focused on eliminating cancer care disparities due to racial, ethnic, cultural or socioeconomic factors.

Additionally, First Lady Jill Biden, members of the Cabinet and other administration officials are holding a meeting Wednesday of the Cancer Cabinet, made up of officials across several governmental departments and agencies, the White House said.

The Cabinet will introduce new members and discuss priorities in the battle against cancer, including closing the screening gap, addressing potential environmental exposures, reducing the number of preventable cancers and expanding access to cancer research.

It is the second meeting of the cabinet since Biden relaunched the initiative in February, which he originally began in 2016 when he was vice president.

Both Jaffee and Berger were members of the Blue Ribbon Panel for the Cancer Moonshot Initiative led by Biden.

The initiative has personal meaning for Biden, whose son, Beau, died of glioblastoma — one of the most aggressive forms of brain cancer — in 2015.

“I committed to this fight when I was vice president,” Biden said at the time, during an event at the White House announcing the relaunch. “It’s one of the reasons why, quite frankly, I ran for president. Let there be no doubt, now that I am president, this is a presidential, White House priority. Period.”

The initiative has several priority actions including diagnosing cancer sooner; preventing cancer; addressing inequities; and supporting patients, caregivers and survivors.

PHOTO: In this June 14, 2016, file photo, Dr. Carol Brown, physician at Memorial Sloan Kettering Cancer Center, gives a presentation at The White House Summit on The United State of Women, in Washington, D.C. (NurPhoto via Getty Images, FILE)

The White House has also issued a call to action to get cancer screenings back to pre-pandemic levels.

More than 9.5 million cancer screenings that would have taken place in 2020 were missed due to the COVID-19 pandemic, according to the National Institutes of Health.

“We have to get cancer screenings back on track and make sure they’re accessible to all Americans,” Biden said at the time.

Since the first meeting of the Cancer Cabinet, the Centers for Disease Control and Prevention has issued more than $200 million in grants to cancer prevention programs, the Centers for Medicare & Medicaid Services implemented a new model to reduce the cost of cancer care, and the U.S. Patent and Trademark Office said it will fast-track applications for cancer immunotherapies.

ABC News’ Sasha Pezenik contributed to this report.

Biden to tap prominent Harvard cancer surgeon to head National Cancer Institute

Monica Bertagnolli brings leadership experience in cancer clinical trials funded by the $7 billion research agency

Monica Bertagnolli (ASCO; Glenn Davenport)


President Joe Biden is expected to pick cancer surgeon Monica Bertagnolli as the next director of the National Cancer Institute (NCI). Bertagnolli, a physician-scientist at Brigham and Women’s Hospital, the Dana-Farber Cancer Center, and Harvard Medical School, specializes in gastrointestinal cancers and is well known for her expertise in clinical trials. She will replace Ned Sharpless, who stepped down as NCI director in April after nearly 5 years.

The White House has not yet announced the selection, first reported by STAT, but several cancer research organizations closely watching for the nomination have issued statements supporting Bertagnolli’s expected selection. She is “a national leader” in clinical cancer research and “a great person to take the job,” Sharpless told ScienceInsider.

With a budget of $7 billion, NCI is the largest component of the National Institutes of Health (NIH) and the world’s largest funder of cancer research. Its director is the only NIH institute director selected by the president. Bertagnolli’s expected appointment, which does not require Senate confirmation, drew applause from the cancer research community.

Margaret Foti, CEO of the American Association for Cancer Research, praised Bertagnolli’s “appreciation for … basic research” and “commitment to ensuring that such treatment innovations reach patients … across the United States.” Ellen Sigal, chair and founder of Friends of Cancer Research, says Bertagnolli “brings expertise the agency needs at a true inflection point for cancer research.”

Bertagnolli, 63, will be the first woman to lead NCI. Her lab research on tumor immunology and the role of a gene called APC in colorectal cancer led to a landmark trial she headed showing that an anti-inflammatory drug can help prevent this cancer. In 2007, she became the chief of surgery at the Dana-Farber Brigham Cancer Center.

She served as president of the American Society of Clinical Oncology in 2018 and currently chairs the Alliance for Clinical Trials in Oncology, which is funded by NCI’s National Clinical Trials Network. The network is a “complicated” program, and “Monica will have a lot of good ideas on how to make it work better,” Sharpless says.


One of Bertagnolli’s first tasks will be to shape NCI’s role in Biden’s reignited Cancer Moonshot, which aims to slash the U.S. cancer death rate in half within 25 years. NCI’s new leader also needs to sort out how the agency will mesh with a new NIH component that will fund high-risk, goal-driven research, the Advanced Research Projects Agency for Health (ARPA-H).

Bertagnolli will also head NCI efforts already underway to boost grant funding rates, diversify the cancer research workplace, and reduce higher death rates for Black people with cancer.

The White House recently nominated applied physicist Arati Prabhakar to fill another high-level science position, director of the White House Office of Science and Technology Policy (OSTP). But still vacant is the NIH director slot, which Francis Collins, acting science adviser to the president, left in December 2021. And the administration hasn’t yet selected the inaugural director of ARPA-H.

Correction, 22 July, 9 a.m.: This story has been updated to reflect that Francis Collins is acting science adviser to the president, not acting director of the White House Office of Science and Technology Policy.


From the journal Nature: NFT, Patents, and Intellectual Property: Potential Design

Reporter: Stephen J. Williams, Ph.D.

 

From the journal Nature

Source: https://www.nature.com/articles/s41598-022-05920-6

Patents and intellectual property assets as non-fungible tokens; key technologies and challenges

Scientific Reports volume 12, Article number: 2178 (2022)

Abstract

With the explosive development of decentralized finance, we witness a phenomenal growth in the tokenization of all kinds of assets, including equity, funds, debt, and real estate. By taking advantage of blockchain technology, digital assets are broadly grouped into fungible and non-fungible tokens (NFTs), where non-fungible tokens are those with unique, non-substitutable properties. NFTs have attracted wide attention, and their protocols, standards, and applications are developing exponentially. They have been successfully applied to digital fantasy artwork, games, collectibles, etc. However, there is a lack of research on utilizing NFTs for issues such as Intellectual Property. Applying for a patent or trademark is not only a time-consuming and lengthy process but also a costly one. NFTs have considerable potential in the intellectual property domain: they can promote transparency and liquidity and open the market to innovators who aim to commercialize their inventions efficiently. The main objective of this paper is to examine the requirements of presenting intellectual property assets, specifically patents, as NFTs. Hence, we offer a layered conceptual NFT-based patent framework. Furthermore, a series of open challenges about NFT-based patents and possible future directions are highlighted. The proposed framework provides fundamental elements and guidance for businesses in taking advantage of NFTs in real-world problems such as patent grants, funding, biotechnology, and so forth.

Introduction

Distributed ledger technologies (DLTs) such as blockchain are emerging technologies posing a threat to existing business models. Traditionally, most companies used centralized authorities in various aspects of their business, such as financial operations and setting up trust with their counterparts. With the emergence of blockchain, centralized organizations can be substituted with a decentralized group of resources and actors. The blockchain mechanism was introduced in the Bitcoin white paper in 2008, letting users generate transactions and spend their money without the intervention of banks1. Ethereum, a second-generation blockchain, was introduced in 2014, allowing developers to run smart contracts on a distributed ledger. With smart contracts, developers and businesses can create financial applications that use cryptocurrencies and other forms of tokens for applications such as decentralized finance (DeFi), crowdfunding, decentralized exchanges, data record keeping, etc.2. Recent advances in distributed ledger technology have produced concepts that lead to cost reduction and the simplification of value exchange. Nowadays, by leveraging the advantages of blockchain and taking governance issues into account, digital assets can be represented as tokens that exist on the blockchain network, which facilitates their transmission and traceability, increases their transparency, and improves their security3.

In the landscape of blockchain technology, two types of tokens can be defined: fungible tokens, in which all tokens have equal value, and non-fungible tokens (NFTs), which feature unique characteristics and are not interchangeable. Non-fungible tokens are digital assets with a unique identifier that is stored on a blockchain4. NFTs were initially suggested in Ethereum Improvement Proposal EIP-721 (ref. 5) and later expanded in EIP-1155 (ref. 6). NFTs became one of the most widespread applications of blockchain technology and reached worldwide attention in early 2021. They can be digital representations of real-world objects. NFTs are tradable rights to digital assets (pictures, music, films, and virtual creations) whose ownership is recorded in blockchain smart contracts7.

Fungibility, the ability to exchange one item for another of the same kind, is an essential feature of currency; a non-fungible token, by contrast, is unique and therefore cannot be substituted8. Recently, blockchain enthusiasts have shown significant interest in various types of NFTs and enthusiastically participate in NFT-related games and trades. CryptoPunks9, one of the first NFT projects on Ethereum, created almost 10,000 collectible punks and helped popularize the ERC-721 standard. With the gamification of breeding mechanics, CryptoKitties10 officially placed NFTs at the forefront of the market in 2017. CryptoKitties is an early blockchain game that enables users to buy, sell, collect, and digitally breed cats. Another example is NBA Top Shot11, an NFT trading platform for buying and selling short digital clips of NBA moments.

NFTs are developing remarkably and have found many applications, such as artist royalties, in-game assets, educational certificates, etc. However, they are a relatively new concept, and many areas of application remain to be explored. Intellectual Property, including patents, trademarks, and copyright, is an important area where NFTs can be usefully applied to solve existing problems.

Although NFTs have had many applications so far, they have rarely been used to solve real-world problems. Intellectual Property (IP) is one area where the concept is particularly promising. Applying for a patent or trademark is not only a time-consuming and lengthy process but also a costly one: registering a copyright or trademark may take months, while securing a patent can take years. On the contrary, with the help of the unique features of NFT technology, it is possible to accelerate this process with considerable confidence and assurance about protecting the ownership of an IP. NFTs can offer IP protection while an applicant waits for the government to grant more formal protection. Many believe NFTs and blockchain would make buying and selling patents easier, offering new opportunities for companies, universities, and inventors to make money from their innovations12. Patent holders would benefit from such innovation: it would give them the ability to ‘tokenize’ their patents, and because every transaction would be logged on a blockchain, it would be much easier to trace patent ownership changes. NFTs would also facilitate revenue generation from patents by democratizing patent licensing. NFTs support the intellectual property market by embedding automatic royalty-collecting methods inside inventors’ works, providing them with financial benefits anytime their innovation is licensed. For example, each inventor’s patent would be minted as an NFT, and these NFTs would be joined together to form a commercial IP portfolio and minted as a compounded NFT. Each inventor would automatically receive their fair share of royalties whenever licensing revenue is generated, without having to track it down.
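To make the royalty-splitting idea concrete, here is a minimal Python sketch (illustrative only; the inventor names, share weights, and revenue figure are hypothetical and not from the paper) that distributes licensing revenue pro rata across the patent NFTs joined into a compounded portfolio NFT.

```python
from decimal import Decimal

def split_royalties(licensing_revenue, portfolio_shares):
    """Distribute licensing revenue pro rata across the patents in a compounded NFT.

    portfolio_shares maps an inventor id to the share weight of their patent NFT
    inside the portfolio. All names and numbers here are illustrative.
    """
    total_weight = sum(portfolio_shares.values())
    return {
        inventor: (Decimal(licensing_revenue) * weight / total_weight).quantize(Decimal("0.01"))
        for inventor, weight in portfolio_shares.items()
    }

# Example: three patent NFTs joined into one portfolio NFT.
print(split_royalties(10_000, {"inventor_a": 5, "inventor_b": 3, "inventor_c": 2}))
```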

In13, the authors discuss an overview of NFT applications in different areas such as gambling, games, and collectibles. In addition,4 provides a prototype for an event-tracking application based on an Ethereum smart contract, and NFTs as a solution for art and real-estate auction systems are described in14. However, these studies have not discussed existing standards or a generalized architecture that would enable NFTs to be applied in diverse applications. For example, the authors in15 provide two general design patterns for creating and trading NFTs and discuss existing token standards for NFTs; however, the proposed designs are limited to Ethereum, and other blockchains are not considered16. Moreover, the technologies for each step of the proposed procedure are not discussed. In8, the authors provide a conceptual framework for token design and management and discuss five views: token view, wallet view, transaction view, user interface view, and protocol view. However, no research provides a generalized conceptual framework for generating, recording, and tracing NFT-based IP in a blockchain network.

Even with the clear benefits that NFT-backed patents offer, there are a number of impediments to actually achieving such a system. For example, convincing patent owners to put current ownership records for their patents into NFTs poses an initial obstacle. Because there is no reliable framework for NFT-based patents, this paper provides a conceptual framework for presenting NFT-based patents with a comprehensive discussion of many aspects, ranging from the background and model components to token standards, application domains, and research challenges. The main objective of this paper is to provide a layered conceptual NFT-based patent framework that can be used to register patents in a decentralized, tamper-proof, and trustworthy peer-to-peer network and to trade and exchange them in the worldwide market. The main contributions of this paper are highlighted as follows:

  • Providing a comprehensive overview of the tokenization of IP assets to create unique digital tokens.
  • Discussing the components of a distributed and trustworthy framework for minting NFT-based patents.
  • Highlighting a series of open challenges of NFT-based patents and outlining possible future trends.

The rest of the paper is structured as follows: “Background” section describes the Background of NFTs, Non-Fungible Token Standards. The NFT-based patent framework is described in “NFT-based patent framework” section. The Discussion and challenges are presented in “Discussion” section. Lastly, conclusions are given in “Conclusion” section.

Background

Colored Coins, designed on top of the Bitcoin network, can be considered the first step toward NFTs. Bitcoins are fungible, but it is possible to mark them so that they are distinguishable from other bitcoins. These marked coins have special properties representing real-world assets like cars and stocks, and owners can prove their ownership of physical assets through the colored coins. By utilizing Colored Coins, users can transfer ownership of their marked coins like a usual transaction and benefit from Bitcoin’s decentralized network17. However, Colored Coins had limited functionality due to the Bitcoin script limitations. Rare Pepe builds on Pepe, a green frog meme created by Matt Furie: users define tokens for Pepes and trade them through the Counterparty platform, where tokens created from Pepe images are judged on whether they are rare enough. Rare Pepe allows users to preserve scarcity, manage ownership, and transfer their purchased Pepes.

In 2017, Larva Labs developed the first Ethereum-based NFT project, CryptoPunks. It contains 10,000 unique human-like characters generated randomly. The official ownership of each character is stored in an Ethereum smart contract, and owners can trade characters. The CryptoPunks project inspired CryptoKitties, a pioneer of blockchain games and NFTs launched in late 2017 that drew widespread attention to NFTs. CryptoKitties is a blockchain-based virtual game in which users collect and trade kitties, characters with unique features. The game was developed as an Ethereum smart contract and pioneered ERC-721, the first standard token for NFTs on the Ethereum blockchain. After the 2017 hype around NFTs, many projects started in this context. Due to the increased attention to NFT use cases and the growing market cap, blockchains such as EOS, Algorand, and Tezos started to support NFTs, and marketplaces such as SuperRare, Rarible, and OpenSea were developed to help users trade NFTs. In general, assets are categorized into two main classes: fungible and non-fungible. Fungible assets are those that another similar asset can replace; fungible items have two main characteristics, replicability and divisibility.

Currency is a fungible item because a ten-dollar bill can be exchanged for another ten-dollar bill or divided into ten one-dollar bills. Unlike fungible items, non-fungible items are unique and distinguishable; they cannot be divided or exchanged for another identical item. The first tweet on Twitter is a non-fungible item with these characteristics: another tweet cannot replace it, and it is unique and indivisible. An NFT is a non-fungible cryptographic asset declared in a standard token format with a unique set of attributes. NFTs are created using blockchain technology because the blockchain network provides transparency, proof of ownership, and traceable transactions.

Blockchain-based NFTs help enthusiasts create NFTs in a standard token format on a blockchain, transfer ownership of their NFTs to a buyer, ensure the uniqueness of NFTs, and manage them completely. In addition, there are semi-fungible tokens that have characteristics of both fungible and non-fungible tokens: they are fungible within the same class or at a specific time and non-fungible across classes or at different times. A plane ticket can be considered a semi-fungible token because a charter ticket can be exchanged for another charter ticket but cannot be exchanged for a first-class ticket. The concept of semi-fungible tokens plays a major role in blockchain-based games and reduces NFT overhead. In Fig. 1, we illustrate fungible, non-fungible, and semi-fungible tokens. The main properties of NFTs are described as follows15:

Figure 1. Fungible, non-fungible, and semi-fungible tokens.

  • Ownership: Because of the blockchain layer, the owner of an NFT can easily prove possession with their keys, and other nodes can verify the user’s ownership publicly.
  • Transferability: Users can freely transfer ownership of their NFTs to others on dedicated markets.
  • Transparency: By using blockchain, all transactions are transparent, and every node in the network can confirm and trace the trades.
  • Fraud prevention: Fraud is one of the key problems in trading assets; using NFTs ensures buyers purchase a genuine item.
  • Immutability: Metadata, token IDs, and the transaction history of NFTs are recorded in a distributed ledger, and it is impossible to change the information of purchased NFTs (a minimal registry sketch illustrating these properties follows this list).
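As a rough illustration of these properties, the toy Python registry below (not from the paper; a real system would anchor this state on-chain) models single-owner tokens, owner-only transfers, and an append-only transfer history.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class NFTRegistry:
    """Toy in-memory registry illustrating ownership, transferability, and an
    append-only transfer history. Purely illustrative, not an on-chain contract."""
    owners: Dict[int, str] = field(default_factory=dict)               # token_id -> owner address
    history: List[Tuple[int, str, str]] = field(default_factory=list)  # (token_id, from, to)

    def mint(self, token_id: int, owner: str) -> None:
        if token_id in self.owners:
            raise ValueError("token already exists")                    # uniqueness
        self.owners[token_id] = owner
        self.history.append((token_id, "0x0", owner))

    def transfer(self, token_id: int, sender: str, recipient: str) -> None:
        if self.owners.get(token_id) != sender:
            raise PermissionError("only the current owner can transfer")  # ownership check
        self.owners[token_id] = recipient
        self.history.append((token_id, sender, recipient))

    def owner_of(self, token_id: int) -> str:
        return self.owners[token_id]                                    # public verification

registry = NFTRegistry()
registry.mint(1, "alice")
registry.transfer(1, "alice", "bob")
assert registry.owner_of(1) == "bob"
```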

Non-fungible standards

The Ethereum blockchain pioneered the implementation of NFTs, and ERC-721 was the first standard token accepted on the Ethereum network. As NFTs grew in popularity, developers started creating and enhancing NFT standards on other blockchains such as EOS, Algorand, and Tezos. This section reviews the NFT standards implemented on these blockchains.

Ethereum

ERC-721, a free and open-source standard, was the first standard for NFTs developed on Ethereum. It is an interface that a smart contract should implement to be able to transfer and manage NFTs. Each ERC-721 token has unique properties and a unique token ID. ERC-721 tokens include the owner’s information, a list of approved addresses, a transfer function that implements transferring tokens from owner to buyer, and other useful functions5.

In ERC-721, smart contracts can group tokens with the same configuration, but each token has different properties, so ERC-721 does not support fungible tokens. ERC-1155, another Ethereum standard developed by Enjin, has richer functionality than ERC-721 and supports fungible, non-fungible, and semi-fungible tokens. In ERC-1155, IDs define asset classes: different IDs refer to different classes, and each ID may contain multiple assets of the same class. Using ERC-1155, a user can transfer different types of tokens in a single transaction and mix multiple fungible and non-fungible token types in a single smart contract6. ERC-721 and ERC-1155 both support operators, whereby an owner can authorize an operator to initiate token transfers.
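As a rough illustration of the ERC-1155 balance-per-ID model and batch transfer described above, the following Python sketch (a toy in-memory ledger, not the actual Ethereum standard or a smart contract) treats an ID with a supply of one as non-fungible and an ID with a larger supply as fungible.

```python
from collections import defaultdict
from typing import Dict, List

class MultiTokenLedger:
    """Toy ERC-1155-style ledger: each token ID has its own balance table, so an ID
    with supply 1 behaves like an NFT and an ID with larger supply like a fungible
    token. Illustrative sketch only."""

    def __init__(self) -> None:
        self.balances: Dict[int, Dict[str, int]] = defaultdict(lambda: defaultdict(int))

    def mint(self, token_id: int, to: str, amount: int) -> None:
        self.balances[token_id][to] += amount

    def safe_batch_transfer(self, sender: str, recipient: str,
                            ids: List[int], amounts: List[int]) -> None:
        # Move several token IDs in one operation, mirroring ERC-1155 batch transfer.
        for token_id, amount in zip(ids, amounts):
            if self.balances[token_id][sender] < amount:
                raise ValueError(f"insufficient balance for id {token_id}")
            self.balances[token_id][sender] -= amount
            self.balances[token_id][recipient] += amount

ledger = MultiTokenLedger()
ledger.mint(1, "alice", 1)      # non-fungible: supply of one
ledger.mint(2, "alice", 100)    # fungible: supply of one hundred
ledger.safe_batch_transfer("alice", "bob", [1, 2], [1, 25])
```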

EOSIO

EOSIO is an open-source blockchain platform released in 2018 that claims to eliminate transaction fees and increase transaction throughput. EOSIO differs from Ethereum in its wallet creation algorithm and its procedure for handling transactions. dGoods is a free asset standard developed on the EOS blockchain that focuses on large-scale use cases. It supports a hierarchical naming structure in smart contracts: each contract has a unique symbol and a list of categories, and each category contains a list of token names. Therefore, a single dGoods contract can contain many tokens, which makes transferring a group of tokens efficient. Using this hierarchy, dGoods supports fungible, non-fungible, and semi-fungible tokens. It also supports batch transfers, where the owner can transfer many tokens in one operation18.

Algorand

Algorand is a high-performance public blockchain launched in 2019. It provides scalability while maintaining security and decentralization, and it supports smart contracts and tokens for representing assets19. Algorand defines the Algorand Standard Assets (ASA) concept to create and manage assets on the Algorand blockchain. Using ASA, users can define fungible and non-fungible tokens. In Algorand, users can create NFTs or FTs without writing smart contracts; they only need to submit a single transaction on the Algorand blockchain. Each transaction contains some mutable and some immutable properties20.

Each account in Algorand can create up to 1000 assets, and for every asset an account creates or receives, the minimum balance of the account increases by 0.1 Algos. Algorand also supports fractional NFTs by splitting an NFT into a group of FTs or NFTs, and each part can be exchanged independently21. Algorand uses a Clawback Address that operates like an operator in ERC-1155: it is allowed to transfer tokens on behalf of an owner who has authorized it.
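A minimal sketch of the fractionalization idea, assuming a simple share-allocation helper (this is not Algorand's ASA API; real fractional NFTs on Algorand are configured through an asset's total supply and decimals):

```python
def fractionalize(total_shares: int, holders: dict) -> dict:
    """Split a single NFT into `total_shares` fungible shares and allocate them.

    `holders` maps an address to a fraction of the whole (values should sum to 1).
    Purely illustrative; not Algorand's actual fractional-NFT mechanism."""
    allocation = {addr: round(total_shares * fraction) for addr, fraction in holders.items()}
    assert sum(allocation.values()) == total_shares, "shares must add up exactly"
    return allocation

print(fractionalize(1_000, {"alice": 0.5, "bob": 0.3, "carol": 0.2}))
```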

Tezos

Tezos is another decentralized open-source blockchain. Tezos supports the meta-consensus concept: in addition to using a consensus protocol on the ledger’s state like Bitcoin and Ethereum, it also attempts to reach consensus about how nodes and the protocol should change or upgrade22. FA2 (TZIP-12) is a standard for a unified token contract interface on the Tezos blockchain. FA2 supports different token types, including fungible, non-fungible, and fractionalized NFT contracts. In Tezos, tokens are identified by a token contract address and token ID pair. Tezos also supports batch token transfers, which reduces the cost of transferring multiple tokens.

Flow

Flow was developed by Dapper Labs to remove the scalability limitations of the Ethereum blockchain. Flow is a fast and decentralized blockchain that focuses on games and digital collectibles. Its architecture improves throughput and scalability without sharding. Flow supports smart contracts using Cadence, a resource-oriented programming language. In Cadence, an NFT can be described as a resource with a unique ID. Resources have important rules for ownership management: a resource has exactly one owner and cannot be copied or lost. These features give assurance to the NFT owner. NFT metadata, including images and documents, can be stored off-chain or on-chain in Flow. In addition, Flow defines a Collection concept, in which each collection is an NFT resource that can include a list of resources. It is a dictionary whose keys are resource IDs and whose values are the corresponding NFTs.

The collection concept provides batch transfers of NFTs. Besides, users can define an NFT that owns another NFT; for instance, in CryptoKitties, a unique cat NFT can own a unique hat (another NFT). Flow uses Cadence’s second layer of access control to allow some operators to access some fields of the NFT23. In Table 1, we compare the standards described above in terms of support for fungible tokens, non-fungible tokens, batch transfers (the owner can move multiple tokens in one operation), operator support (the owner can approve an operator to initiate token transfers), and fractionalized NFTs (an NFT can be divided into different tokens, each exchanged independently).

Table 1. Comparing NFT standards.


NFT-based patent framework

In this section, we propose a framework for presenting NFT-based patents. We describe details of the proposed distributed and trustworthy framework for minting NFT-based patents, as shown in Fig. 2. The proposed framework includes five main layers: Storage Layer, Authentication Layer, Verification Layer, Blockchain Layer, and Application Layer. Details of each layer and the general concepts are presented as follows.

Figure 2. The proposed NFT-based patent framework.

Storage layer

The continuous rise of the data in blockchain technology is moving various information systems towards the use of decentralized storage networks. Decentralized storage networks were created to provide more benefits to the technological world24. Some of the benefits of using decentralized storage systems are explained: (1) Cost savings are achieved by making optimal use of current storage. (2) Multiple copies are kept on various nodes, avoiding bottlenecks on central servers and speeding up downloads. This foundation layer implicitly provides the infrastructure required for the storage. The items on NFT platforms have unique characteristics that must be included for identification.

Non-fungible token metadata provides information that describes a particular token ID. NFT metadata is represented either on-chain or off-chain. On-chain means incorporating the metadata directly into the NFT’s smart contract, which represents the tokens; off-chain storage means hosting the metadata separately25.

Blockchains provide decentralization but are expensive for data storage and never allow data to be removed. For example, because of the Ethereum blockchain’s current storage limits and high maintenance costs, many projects’ metadata is maintained off-chain. Developers utilize the ERC721 Standard, which features a method known as tokenURI. This method is implemented to let applications know the location of the metadata for a specific item. Currently, there are three solutions for off-chain storage, including InterPlanetary File System (IPFS), Pinata, and Filecoin.
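For illustration, the sketch below builds an off-chain metadata document for a hypothetical patent NFT following the common ERC-721 metadata convention (name, description, image, attributes); the CID strings are placeholders, and a real deployment would pin this JSON on IPFS and have tokenURI return its address.

```python
import json

# Hypothetical patent metadata following the common ERC-721 metadata convention
# (name / description / image, plus free-form attributes). Values are placeholders.
patent_metadata = {
    "name": "Patent NFT #1",
    "description": "NFT representing a patent filing and its ownership record.",
    "image": "ipfs://<image-CID-placeholder>",
    "attributes": [
        {"trait_type": "filing_date", "value": "2022-01-15"},
        {"trait_type": "jurisdiction", "value": "WO"},
    ],
}

# The JSON document would be hosted off-chain (e.g., pinned on IPFS); tokenURI then
# returns a pointer such as "ipfs://<metadata-CID>" for this token ID.
print(json.dumps(patent_metadata, indent=2))
```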

IPFS

InterPlanetary File System (IPFS) is a peer-to-peer hypermedia protocol for decentralized media content storage. Because of the high cost of storing media files related to NFTs on a blockchain, IPFS can be the most affordable and efficient solution. IPFS combines multiple technologies inspired by Git and BitTorrent, such as a block exchange system, distributed hash tables (DHT), and version control26. On a peer-to-peer network, the DHT is used to coordinate and maintain metadata.

In other words, the hash values must be mapped to the objects they represent. When storing an object such as a file, IPFS generates a hash value that starts with the prefix Qm and acts as a reference to that specific item. Objects larger than 256 KB are divided into smaller blocks of up to 256 KB, and a hash tree is used to interconnect all the blocks that are part of the same object. IPFS uses the Kademlia DHT. The block exchange system, or BitSwap, is a BitTorrent-inspired system used to exchange blocks. Asymmetric encryption can be used to prevent unauthorized access to content stored on IPFS27.
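The chunk-and-link mechanism can be sketched in a few lines of Python (a simplification: real IPFS builds a Merkle DAG and encodes roots as base58 multihash CIDs beginning with Qm, rather than the flat SHA-256 scheme used here):

```python
import hashlib

CHUNK_SIZE = 256 * 1024  # 256 KB, matching the block size described above

def chunk_and_link(data: bytes) -> str:
    """Split data into <= 256 KB blocks, hash each block, then hash the concatenated
    block hashes to obtain a single root reference. Illustrative only."""
    blocks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    block_hashes = [hashlib.sha256(block).digest() for block in blocks]
    return hashlib.sha256(b"".join(block_hashes)).hexdigest()

# Roughly 1.6 MB of stand-in content, so several blocks are produced and linked.
print(chunk_and_link(b"example patent document contents " * 50_000))
```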

Pinata

Pinata is a popular platform for managing and uploading files on IPFS. It provides secure and verifiable files for NFTs. Most NFTs store their data off-chain, with the NFT on the blockchain pointing to a URL for that data. The main problem is that the content behind a URL can change.

This indicates that an NFT supposed to describe a certain patent can be changed without anyone knowing. This defeats the purpose of the NFT in the first place. This is where Pinata comes in handy. Pinata uses the IPFS to create content-addressable hashes of data, also known as Content-Identifiers (CIDs). These CIDs serve as both a way of retrieving data and a means to ensure data validity. Those looking to retrieve data simply ask the IPFS network for the data associated with a certain CID, and if any node on the network contains that data, it will be returned to the requester. The data is automatically rehashed on the requester’s computer when the requester retrieves it to make sure that the data matches back up with the original CID they asked for. This process ensures the data that’s received is exactly what was asked for; if a malicious node attempts to send fake data, the resulting CID on the requester’s end will be different, alerting the requester that they’re receiving incorrect data28.
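A minimal sketch of that re-verification step, using a bare SHA-256 hex digest as a stand-in for a real CID (real CIDs are multihash/base58 identifiers):

```python
import hashlib

def verify_retrieved_content(expected_digest: str, retrieved: bytes) -> bool:
    """Recompute the content hash on the requester's side and compare it with the
    digest that was asked for, mirroring the CID re-check described above."""
    return hashlib.sha256(retrieved).hexdigest() == expected_digest

original = b"patent metadata pinned via IPFS"          # illustrative content
expected = hashlib.sha256(original).hexdigest()

assert verify_retrieved_content(expected, original)              # honest node
assert not verify_retrieved_content(expected, b"tampered data")  # malicious node detected
```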

Filecoin

Another decentralized storage network is Filecoin. It is built on top of IPFS and is designed to store the most important data, such as media files. Truffle Suite has also launched an NFT development template with Filecoin Box. NFT.Storage (free decentralized storage for NFTs)29 allows users to easily and securely store their NFT content and metadata using IPFS and Filecoin. NFT.Storage is a service backed by Protocol Labs and Pinata specifically for storing NFT data. Through content addressing and decentralized storage, it allows developers to protect their NFT assets and associated metadata, ensuring that all NFTs follow best practices and stay accessible for the long term. NFT.Storage makes it frictionless to mint NFTs following best practices through resilient persistence on IPFS and Filecoin, and it lets developers quickly, safely, and freely store NFT data on decentralized networks. Anyone can leverage the power of IPFS and Filecoin to ensure the persistence of their NFTs. The details of this system are as follows30:

Content addressing

Once users upload data to NFT.Storage, they receive a CID, an IPFS hash of the content. CIDs are the data’s unique fingerprints, universal addresses that can be used to refer to the content regardless of how or where it is stored. Using CIDs to reference NFT data avoids problems such as weak links and “rug pulls”, since CIDs are generated from the content itself.

Provable storage

NFT.Storage uses Filecoin for long-term decentralized data storage. Filecoin uses cryptographic proofs to assure the NFT data’s durability and persistence over time.

Resilient retrieval

Data stored via IPFS and Filecoin can be fetched directly in the browser through any public IPFS gateway.

Authentication layer

The second layer is the authentication layer, whose functions we briefly highlight in this section. The Decentralized Identity (DID) approach assists users in collecting credentials from a variety of issuers, such as the government, educational institutions, or employers, and saving them in a digital wallet. The verifier then uses these credentials to verify a person’s validity by using a blockchain-based ledger to follow the “identity and access management (IAM)” process. Therefore, DID allows users to remain in control of their identity. A lack of NFT verifiability also causes intellectual property and copyright infringements; of course, the chain of custody may be traced back to the creator’s public address to check whether a similar patent has been filed using that address. However, there is no quick and foolproof way to check an NFT creator’s legitimacy. Without such verification built into the NFT, an NFT proves ownership only over that NFT itself and nothing more.

Self-sovereign identity (SSI)31 is a solution to this problem. SSI is a new series of standards that will guide a new identity architecture for the Internet. With a focus on privacy, security, and interoperability, SSI applications use public-key cryptography with public blockchains to generate persistent identities for people with private and selective information disclosure. Blockchain technology offers a way to establish trust and transparency and provide a secure and publicly verifiable KYC (Know Your Customer) process. The blockchain architecture allows information from various service providers to be collected into a single cryptographically secure and immutable database that does not need a third party to verify its authenticity.

The proposed platform generates patent-related smart contracts, programs that run on the blockchain to receive and send transactions. They are unalterable and privately identify clients through a thorough KYC process; after KYC approval, an NFT is minted on the blockchain as a certificate of verification32. This article uses a decentralized authentication solution at this layer. The solution has been used for various applications in the blockchain field (e.g., smart cities, the Internet of Things)33,34, but here we apply it to the proposed framework (patents as NFTs). Details of this solution are presented in the following.

Decentralized authentication

This section presents an authentication layer, similar to35, that builds validated communication in a secure and decentralized manner via blockchain technology. As shown in Fig. 3, the authentication protocol comprises two processes: registration and login.

Figure 3. The decentralized authentication protocol: registration and login.
Registration

In the registration process of the suggested authentication protocol, we first initialize a user’s public key as their identity key (UserName). Then, we upload this identity key to the blockchain, where transactions can later be verified by other users. Finally, the user generates an identity transaction.

Login

After registration, a user logs in to the system. The login process is described as follows:

  • 1. The user commits identity information and imports their secret key into the service application to log in.
  • 2. A user who needs to log in sends a login request to the network’s service provider.
  • 3. The service provider analyzes the login request, extracts the hash, queries the blockchain, and obtains identity information from an identity list (identity transactions).
  • 4. The service provider responds with an authentication request when the above process is completed. A timestamp (to avoid a replay attack), the user’s UserName, and a signature are all included in the authentication request.
  • 5. The user creates a signature over five parameters: the timestamp, the user’s UserName and PK, and the service provider’s UserName and PK. This signature serves as the user’s authentication credential (a minimal signing sketch follows this list).
  • 6. The service provider verifies the received information; if it is valid, the authentication succeeds, otherwise the authentication fails and the user’s login is denied.
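A minimal sketch of the signing and verification steps (5) and (6), using Ed25519 from the Python cryptography package; the user and provider names are hypothetical, and in the real protocol the public keys would come from the identity transactions recorded during registration:

```python
import time
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519

# Hypothetical identities; registration would have put these public keys on-chain.
user_key = ed25519.Ed25519PrivateKey.generate()
provider_key = ed25519.Ed25519PrivateKey.generate()

def raw_public_bytes(private_key) -> bytes:
    return private_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)

def make_login_signature(user_sk, timestamp, user_name, user_pk,
                         provider_name, provider_pk) -> bytes:
    """Sign the five parameters from step 5: timestamp, the user's UserName and PK,
    and the service provider's UserName and PK."""
    message = b"|".join([str(timestamp).encode(), user_name, user_pk,
                         provider_name, provider_pk])
    return user_sk.sign(message)

timestamp = int(time.time())                 # freshness, mitigating replay attacks (step 4)
user_pk = raw_public_bytes(user_key)
provider_pk = raw_public_bytes(provider_key)

signature = make_login_signature(user_key, timestamp, b"alice", user_pk,
                                 b"patent-service", provider_pk)

# The service provider verifies with the user's public key (step 6);
# verify() raises InvalidSignature if the credential is not valid.
user_key.public_key().verify(
    signature,
    b"|".join([str(timestamp).encode(), b"alice", user_pk, b"patent-service", provider_pk]),
)
print("authentication succeeded")
```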

The World Intellectual Property Organization (WIPO) and multiple target patent offices in various nations or regions must each assess a patent application, resulting in inefficiency, high costs, and uncertainty. This study presents a conceptual NFT-based patent framework for issuing, validating, and sharing patent certificates. The platform aims to support counterfeit protection as well as secure access and management of certificates according to the needs of learners, companies, education institutions, and certification authorities.

Here, the certification authority (CA) is used to authenticate patent offices. The procedure will first validate a patent if it is provided with a digital certificate that meets the X.509 standard. Certificate authorities are introduced into the system to authenticate both the nodes and clients connected to the blockchain network.

Verification layer

In permissioned blockchains, only identified nodes can read and write to the distributed ledger. Nodes can act in different roles and have various permissions. Therefore, a distributed system can be designed in which the identified nodes are patent-granting offices. Here the system is described conceptually at a high level; Figure 4 illustrates the sequence diagram of this layer. The layer includes four levels, as described below:

Figure 4. Sequence diagram of the verification layer.

Digitalization

For a patent to be published as an NFT on the blockchain, it must be in a digitalized format. This level corresponds to the “filing step” in traditional patent registration. An application could be designed in the application layer to allow users to enter patent information online.

Recording

Patents provide valuable information and bring financial benefits to their owner. If they are published openly on a blockchain network, miners may reject the patent and take the innovation for themselves; at the very least, this can weaken consensus reliability and encourage miners to misbehave. To prevent this, the inventor should first record the innovation privately using proof of existence: the inventor generates the hash of the patent document and records it on the blockchain. As soon as it is recorded, the timestamp and the hash are publicly available, and the inventor can prove the existence of the patent document whenever needed.

Furthermore, using methods like Decision Thinking36, an inventor can record each phase of patent development separately. In each stage, the user generates the hash of the finished part and publishes it linked to the previous part’s hash. The result is a coupled series of hashes that traces patent development, and the existence of each phase can be proven using the original documents. This level prevents others from abusing the patent and taking it for themselves; the inventor can be sure their patent document is recorded confidentially and immutably37.
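A minimal sketch of such phase-by-phase hash chaining (illustrative only; in practice only the hashes would be published on-chain and the blockchain would supply the trusted timestamp):

```python
import hashlib
import time
from typing import List, Tuple

def record_phase(prev_hash: str, phase_document: bytes) -> Tuple[str, float]:
    """Hash the finished phase together with the previous phase's hash, producing a
    coupled chain of commitments. Only the hashes (not the documents) are published."""
    digest = hashlib.sha256(prev_hash.encode() + phase_document).hexdigest()
    return digest, time.time()   # stand-in for the blockchain's trusted timestamp

chain: List[Tuple[str, float]] = []
prev = ""
for doc in [b"phase 1: initial idea", b"phase 2: prototype", b"phase 3: final claims"]:
    prev, ts = record_phase(prev, doc)
    chain.append((prev, ts))

# Later, the inventor reveals the original documents; anyone can recompute the chain
# and check it against the published hashes to verify each development phase existed.
print(chain)
```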

Different hash algorithms exist, with different architectures, time complexities, and security properties. Hash functions should satisfy two main requirements. Pre-image resistance: it should be computationally hard to find the input of a hash function when the output and the hash algorithm are publicly known. Collision resistance: it should be computationally hard to find two arbitrary inputs, x and y, that have the same hash output. These requirements are vital for recording patents. First, the hash function should be pre-image resistant so that others cannot recover the patent documentation; otherwise, anyone could read the patent even before its official publication. Second, the hash function should be collision resistant to preclude users from changing their document after recording; otherwise, users could upload one document and later replace it with another.

There are various hash algorithms; the MD and SHA families are the most widely used. According to38, collisions have been found for the MD2, MD4, MD5, SHA-0, and SHA-1 hash functions, so they are not a good choice for recording patents. The SHA-2 hash algorithm is secure, and no collision has been found. Although SHA-2 is noticeably slower than earlier hash algorithms, the recording phase is not highly time-sensitive, so it is the better choice and provides excellent security for users.

Validating

In this phase, inventors first create NFTs for their patents and publish them to the miners/validators. Miners are identified nodes that validate NFTs for recording on the blockchain. Because patent validation is a specialized task, miners cannot be inexpert members of the public; at the same time, there are too few patent offices to make the network fully decentralized. Therefore, the miners can be specialists certified by the patent offices. They should receive a digital certificate from a patent office attesting to their eligibility to referee a patent.

Digital certificate

Digital certificates are digital credentials used to verify networked entities’ online identities. They usually include a public key as well as the owner’s identification, and they are issued by Certification Authorities (CAs), who must verify the certificate holder’s identity. Certificates contain cryptographic keys for signing, encryption, and decryption. X.509 is a standard that defines the format of public-key certificates signed by a certificate authority. The X.509 standard has multiple fields, and its structure is shown in Fig. 5:

  • Version: indicates the version of the X.509 standard. X.509 has multiple versions, each with a different structure; validators can choose their desired version according to the CA.
  • Serial Number: distinguishes a certificate from all other certificates; each certificate has a unique serial number.
  • Signature Algorithm Identifier: indicates the cryptographic algorithm used by the certificate authority.
  • Issuer Name: the name of the issuer, generally the certificate authority.
  • Validity Period: each certificate is valid for a defined period. This limited period partly protects certificates against exposure of the CA’s private key.
  • Subject Name: the name of the requester; in our proposed framework, the validator’s name.
  • Subject Public Key Info: the certificate subject’s public key and the algorithm with which it is used.

These fields are identical among all versions of the X.509 standard39.

Figure 5. Structure of an X.509 certificate.

Certificate authority

A Certificate Authority (CA) issues digital certificates. The CA signs each certificate with its private key, which is not public, and others can verify the certificate using the CA’s public key.

Here, the patent office creates a certificate for each requested patent referee. The patent office writes the validator’s information into the certificate and signs it with the patent office’s private key. The validator can use the certificate to assure others of their eligibility, and other nodes can check the requesting node’s information by verifying the certificate with the patent office’s public key. Therefore, individuals can join the network’s miners/validators using their credentials. In this phase, miners perform formal examination, prior-art research, and substantive examination and vote to grant or refuse the patent.
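As a sketch of how a patent office acting as a CA might issue an X.509 certificate to a prospective validator, here is an example using the Python cryptography package; the names “Patent Office CA” and “validator-01”, the key type, and the one-year validity period are all illustrative assumptions:

```python
from datetime import datetime, timedelta
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Illustrative keys: the patent office (CA) and a prospective validator.
ca_key = ec.generate_private_key(ec.SECP256R1())
validator_key = ec.generate_private_key(ec.SECP256R1())

certificate = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "validator-01")]))
    .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Patent Office CA")]))
    .public_key(validator_key.public_key())                     # Subject Public Key Info
    .serial_number(x509.random_serial_number())                 # Serial Number
    .not_valid_before(datetime.utcnow())                        # Validity Period start
    .not_valid_after(datetime.utcnow() + timedelta(days=365))   # Validity Period end
    .sign(ca_key, hashes.SHA256())                              # signed with the CA's private key
)

# Other nodes verify the certificate against the patent office's public key.
ca_key.public_key().verify(
    certificate.signature,
    certificate.tbs_certificate_bytes,
    ec.ECDSA(certificate.signature_hash_algorithm),
)
print(certificate.subject.rfc4514_string())
```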

Miners reach consensus about the patent and record it on the blockchain. The NFT is then recorded on the blockchain with the corresponding comments, either granting the patent or requesting revisions. If the miners identify the NFT as a malicious request, they do not record it on the blockchain (a toy vote-tallying sketch follows).
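A toy vote-tallying sketch for the certified validators (the two-thirds threshold is an illustrative assumption; the paper leaves the concrete permissioned consensus algorithm open):

```python
from fractions import Fraction

def tally_patent_votes(votes: dict, threshold=Fraction(2, 3)) -> str:
    """Grant the patent NFT only if the share of 'grant' votes meets the threshold.
    Illustrative only; a real deployment would use a permissioned consensus
    protocol such as PBFT rather than this simple count."""
    grants = sum(1 for decision in votes.values() if decision == "grant")
    if Fraction(grants, len(votes)) >= threshold:
        return "record NFT as granted"
    return "reject or request reformations"

validator_votes = {"validator-01": "grant", "validator-02": "grant",
                   "validator-03": "refuse", "validator-04": "grant"}
print(tally_patent_votes(validator_votes))   # 3/4 >= 2/3 -> granted
```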

Blockchain layer

This layer acts as middleware between the Verification Layer and the Application Layer in the patents-as-NFTs architecture. Its main purpose in the proposed architecture is to provide IP management. We find that transitioning to a blockchain-based patents-as-NFTs record system enables many previously suggested improvements to current patent systems in a flexible, scalable, and transparent manner.

Multiple blockchain platforms can be used, including Ethereum, EOS, Flow, and Tezos. Blockchain systems are mainly classified into two major types based on their consensus mechanism: permissionless (public) and permissioned (private). In a public blockchain, any node can participate in the peer-to-peer network, and the blockchain is fully decentralized; a node can leave the network without any consent from the other nodes.

Bitcoin is one of the most popular examples of a public, permissionless blockchain. Proof of Work (PoW), Proof of Stake (PoS), and directed acyclic graphs (DAG) are some examples of consensus mechanisms in permissionless blockchains. Bitcoin and Ethereum, two famous and trusted blockchain networks, use the PoW consensus mechanism, while blockchain platforms such as Cardano and EOS adopt PoS consensus40.

In a private blockchain, nodes require specific access or permission to obtain network authentication. Hyperledger is among the most popular private blockchains, allowing only permissioned members to join the network after authentication. This provides security to a group of entities that do not completely trust one another but want to achieve a common objective, such as exchanging information. Entities in a permissioned blockchain network can use Byzantine fault-tolerant (BFT) consensus. Hyperledger Fabric has a membership identity service that manages user IDs and verifies network participants.

Therefore, members are aware of each other’s identity while maintaining privacy and secrecy, because they are unaware of each other’s activities41. Due to their more secure nature, private blockchains have sparked large interest among banking and financial organizations, which believe these platforms can disrupt current centralized systems. Hyperledger, Quorum, Corda, and EOS are some examples of permissioned blockchains42.

Reaching consensus in a distributed environment is a challenge. Blockchain is a decentralized network with no central node to observe and check all transactions, so protocols must be designed to ensure that all transactions are valid; consensus algorithms are thus considered the core of each blockchain43. In distributed systems, consensus is the problem of having all network members (nodes) agree on accepting or rejecting a block. When all network members accept the new block, it can be appended to the previous block.

As mentioned, the main concern in blockchains is how to reach consensus among network members. A wide range of consensus algorithms has been designed, each with its own pros and cons42. Blockchain consensus algorithms are mainly classified into the three groups shown in Table 2. The first group, proof-based consensus algorithms, requires the nodes joining the verifying network to demonstrate their qualification to do the appending task. The second group, voting-based consensus, requires validators in the network to share their results of validating a new block or transaction before making the final decision. The third group, DAG-based consensus, is a new class of consensus algorithms that allows several different blocks to be published and recorded simultaneously on the network.

Table 2. Consensus algorithms in blockchain networks.


The proposed patents-as-NFTs platform, which builds blockchain-based intellectual property, empowers the entire patent ecosystem. It removes barriers by addressing fundamental issues within the traditional patent ecosystem; blockchain can efficiently handle patents and trademarks by reducing approval wait times and other required resources. The user entities involved in Intellectual Property management are creators, patent consumers, and copyright-managing entities. Patent creators are users who own the original data, e.g., inventors, writers, and researchers. Patent consumers are users who are willing to consume the content and support the creator’s work. Copyright-managing entities, e.g., lawyers, are responsible for protecting the creators’ Intellectual Property. The patents-as-NFTs solution for IP management in the blockchain layer works through the following steps62:

Creators sign up to the platform

Creators need to sign up on the blockchain platform to patent their creative work. The identity information will be required while signing up.

Creators upload IP on the blockchain network

Next, the creator adds the intellectual property for which a patent application is required, uploading the IP information and data to the blockchain network. Blockchain ensures traceability and auditability, preventing data duplication and manipulation. The patent becomes visible to all network members once it is uploaded to the blockchain.

Consumers generate request to use the content

Consumers who want to access the content must first register on the blockchain network. After signing up, consumers can ask creators to grant access to the patented content. Before the patent owner authorizes the request, a smart contract is created to allow customers to access information such as the owner’s data. Furthermore, consumers are required to pay fees in either fiat money or unique tokens in order to use the creator’s original information. When the creator approves the request, an NDA (Non-Disclosure Agreement) is produced and signed by both parties. Blockchain manages the agreement and guarantees that all parties agree to the terms and conditions filed.

Patent management entities leverage blockchain to protect copyrights and solve related disputes

Blockchain assists the patent management entities in resolving a variety of disputes, which may involve sharing confidential information, establishing proof of authorship, transferring IP rights, making defensive publications, and so on. Suppose a person uses an invention from a patent for their company without the inventor’s consent; the inventor can report it to the patent office and claim ownership of that invention.

Application layer

The patent platform’s global marketplace technology would allow enterprises, governments, universities, and small and medium-sized enterprises (SMEs) worldwide to tokenize patents as NFTs, creating an infrastructure for storing patent records on a blockchain-based network and a decentralized marketplace in which patent holders could easily sell or otherwise monetize their patents. NFT-based patents can use smart contracts to set a price for a license or purchase.

Any buyer satisfied with the conditions can pay and immediately unlock the rights to the patent without either party ever having to interact directly (a minimal licensing sketch follows below). While patents are currently regulated jurisdictionally around the world, a blockchain-based patent marketplace using NFTs can reduce the geographical barriers between patent systems with as simple a tool as a search query. The ease of global access to patents can help aspiring inventors accelerate the innovation process by building upon others’ patented inventions through licenses. There are a wide variety of use cases for patent NFTs, such as SMEs, patent organizations, grants and funding, and fundraising or transferring information relating to patents. These applications keep growing as time progresses, and new ways to utilize these tokens are constantly being found. Some of the most common applications are as follows.
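A minimal sketch of that pay-to-unlock licensing flow (a plain Python stand-in for an on-chain smart contract; the patent ID, price, and party names are hypothetical):

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class PatentLicenseContract:
    """Toy stand-in for an on-chain licensing contract: the owner sets a price, and
    any buyer who pays at least that price immediately receives a license, with no
    direct interaction between the parties. Names and flow are illustrative."""
    patent_id: int
    owner: str
    license_price: int                      # price in the marketplace's token units
    licensees: Dict[str, int] = field(default_factory=dict)

    def buy_license(self, buyer: str, payment: int) -> str:
        if payment < self.license_price:
            raise ValueError("payment below the listed license price")
        self.licensees[buyer] = payment     # on-chain this would transfer tokens to the owner
        return f"license for patent {self.patent_id} granted to {buyer}"

contract = PatentLicenseContract(patent_id=42, owner="inventor", license_price=500)
print(contract.buy_license("acme-corp", 500))
```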

SMEs

The aim is to move intellectual property assets onto a digital, centralized, and secure blockchain network, enabling easier commercialization of patents, especially for small or medium enterprises (SMEs). Smart contracts can be attached to NFTs so terms of use and ownership can be outlined and agreed upon without incurring as many legal fees as traditional IP transfers. This is believed to help SMEs secure funding, as they could more easily leverage the previously undisclosed value of their patent portfolios63.

Transfer ownership of patents

NFTs can be used to transfer ownership of patents. The blockchain can be used to keep track of patent owners, and tokens would include self-executing contracts that transfer the legal rights associated with patents when the tokens are transferred. A partnership between IBM and IPwe has spearheaded the use of NFTs to secure patent ownership. These two companies have teamed together to build the infrastructure for an NFT-based patent marketplace.

Discussion

There are exciting proposals in the legal and economic literature that suggest seemingly straightforward solutions to many of the issues plaguing current patent systems. However, most solutions would constitute major administrative disruptions and place significant and continuous financial burdens on patent offices or their users. An NFT-based patent system not only makes many of these ideas administratively feasible but can also be examined in a step-wise, scalable, and very public manner.

Furthermore, NFT-based patents may facilitate reliable information sharing among offices and patentees worldwide, reducing the burden on examiners and perhaps even accelerating harmonization efforts. NFT-based patents also have additional transparency and archival attributes baked in. A patent should be a privilege bestowed on those who take resource-intensive risks to explore the frontier of technological capabilities. As a reward for their achievements, full transparency of these rewards is very much in the public interest, since it is society that pays for the administrative and economic inefficiencies that exist in today’s systems. NFT-based patents can enhance this transparency. From an organizational perspective, an NFT-based patent can remove current bottlenecks in patent processes by making these processes more efficient, rapid, and convenient for applicants without compromising the quality of granted patents.

The proposed framework faces some challenges that should be solved to reach a mature patent verification platform. First, consider the technical problems. The consensus method used in the verification layer is not addressed in detail. Given the permissioned structure of miners in the NFT-based patent system, consensus algorithms designed for permissioned blockchains, such as PBFT, Federated Consensus, and Round Robin Consensus, can be applied. Also, miners/validators spend time validating the patents, so a protocol should be designed to compensate them. Challenges such as proving the miners’ time and effort, the price that inventors should pay to miners, and other economic trade-offs should be considered.
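For intuition, here is a small sketch of two ingredients such a permissioned verification layer could use: Round Robin selection of certified referees and a PBFT-style 2f + 1 approval quorum. The paper does not prescribe a specific algorithm, so the functions and numbers below are purely illustrative.

```python
def select_validator(round_number: int, certified_validators: list) -> str:
    """Round Robin: certified referees take turns proposing the next block of patents."""
    return certified_validators[round_number % len(certified_validators)]


def pbft_quorum_reached(approvals: int, total_validators: int) -> bool:
    """PBFT-style rule: tolerating f faulty validators (f = (n - 1) // 3),
    a patent block commits once it collects at least 2f + 1 approvals."""
    f = (total_validators - 1) // 3
    return approvals >= 2 * f + 1


# Example with 7 certified referees: up to 2 may be faulty, so 5 approvals are needed.
print(select_validator(3, ["r1", "r2", "r3", "r4", "r5", "r6", "r7"]))  # "r4"
print(pbft_quorum_reached(5, 7))                                        # True
```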

Different NFT standards were discussed. If various patent services use different NFT standards, cross-platform problems will arise. For instance, transferring an NFT from the Ethereum blockchain (an ERC-721 token) to the EOS blockchain is not straightforward and requires some care. Also, people usually trade NFTs in marketplaces such as Rarible and OpenSea. These marketplaces are centralized and may introduce challenges because of their centralized nature. Beyond these, other types of challenges exist, such as the novelty of NFT-based patents and blockchain services.

A blockchain-based patent service has not been tested before. The patent registration procedure and the concepts of the patent-as-NFT system may be unclear to people who still prefer conventional centralized patent systems over decentralized ones. There are also open problems in the mining part: miners should receive certificates from accepted organizations, and determining these organizations and how they accept referees as validators needs more consideration. Some types of inventions are prohibited in some countries, and inventors cannot register them; in an NFT-based patent system, inventors can register their patents publicly, and conflicts may arise between inventors and governments. There are also misunderstandings about NFT ownership rights: it is not clear exactly which rights a person acquires when buying an NFT, for instance whether they obtain property rights alone or moral rights as well.

Conclusion

Blockchain technology provides strong timestamping, the potential for smart contracts, and proof-of-existence. It enables the creation of a transparent, distributed, cost-effective, and resilient environment that is open to all and in which every transaction is auditable. Blockchain is thus a definite boon to the IP industry, benefitting patent owners; when blockchain technology’s intrinsic characteristics are applied to the IP domain, it helps protect copyrights. This paper provided a conceptual framework for an NFT-based patent with a comprehensive discussion of many aspects: background, model components, token standards, application areas, and research challenges. The proposed framework includes five main layers: Storage Layer, Authentication Layer, Verification Layer, Blockchain Layer, and Application Layer. The primary purpose of this patent framework is to provide an NFT-based concept for patenting on a decentralized, tamper-resistant, and reliable network for trade and exchange around the world. Finally, we addressed several open challenges facing NFT-based inventions.

References

  1. Nakamoto, S. Bitcoin: A peer-to-peer electronic cash system. Decent. Bus. Rev. 21260, https://bitcoin.org/bitcoin.pdf (2008).
  2. Buterin, V. A next-generation smart contract and decentralized application platform. White Pap. 3 (2014).
  3. Nofer, M., Gomber, P., Hinz, O. & Schiereck, D. Blockchain. Business & Information Systems Engineering 59, 183–187 (2017).
  4. Regner, F., Urbach, N. & Schweizer, A. NFTs in practice—non-fungible tokens as core component of a blockchain-based event ticketing application. https://www.researchgate.net/publication/336057493_NFTs_in_Practice_-_Non-Fungible_Tokens_as_Core_Component_of_a_Blockchain-based_Event_Ticketing_Application (2019).
  5. Entriken, W., Shirley, D., Evans, J. & Sachs, N. EIP 721: ERC-721 non-fungible token standard. Ethereum Improv. Propos.https://eips.ethereum.org/EIPS/eip-721 (2018).
  6. Radomski, W. et al. Eip 1155: Erc-1155 multi token standard. In Ethereum, Standard (2018).
  7. Dowling, M. Is non-fungible token pricing driven by cryptocurrencies? Finance Res. Lett. 44, 102097. https://doi.org/10.1016/j.frl.2021.102097 (2021).
  8. Lesavre, L., Varin, P. & Yaga, D. Blockchain Networks: Token Design and Management Overview. (National Institute of Standards and Technology, 2020).
  9. Larva-Labs. About Cryptopunks, Retrieved 13 May, 2021, from https://www.larvalabs.com/cryptopunks (2021).
  10. Cryptokitties. About Cryptokitties, Retrieved 28 May, 2021, from https://www.cryptokitties.co/ (2021).
  11. nbatopshot. About Nba top shot, Retrieved 4 April, 2021, from https://nbatopshot.com/terms (2021).
  12. Fairfield, J. Tokenized: The law of non-fungible tokens and unique digital property. Indiana Law J. forthcoming (2021).
  13. Chevet, S. Blockchain technology and non-fungible tokens: Reshaping value chains in creative industries. Available at SSRN 3212662 (2018).
  14. Bal, M. & Ner, C. NFTracer: a Non-Fungible token tracking proof-of-concept using Hyperledger Fabric. arXiv preprint arXiv:1905.04795 (2019).
  15. Wang, Q., Li, R., Wang, Q. & Chen, S. Non-fungible token (NFT): Overview, evaluation, opportunities and challenges. arXiv preprint arXiv:2105.07447 (2021).
  16. Qu, Q., Nurgaliev, I., Muzammal, M., Jensen, C. S. & Fan, J. On spatio-temporal blockchain query processing. Future Gener. Comput. Syst. 98, 208–218 (2019).
  17. Rosenfeld, M. Overview of colored coins. White paper, bitcoil. co. il 41, 94 (2012).
  18. Obsidian-Labs. dGoods Standard, Retrieved 29 April, 2021, from https://docs.eosstudio.io/contracts/dgoods/standard.html. (2021).
  19. Algorand. Algorand Core Technology Innovation, Retrieved 10 March, 2021, from https://www.algorand.com/technology/core-blockchain-innovation. (2021).
  20. Weathersby, J. Building NFTs on Algorand, Retrieved 15 April, 2021, from https://developer.algorand.org/articles/building-nfts-on-algorand/. (2021).
  21. Algorand. How Algorand Democratizes the Access to the NFT Market with Fractional NFTs, Retrieved 7 April, 2021, from https://www.algorand.com/resources/blog/algorand-nft-market-fractional-nfts. (2021).
  22. Tezos. Welcome to the Tezos Developer Documentation, Retrieved 16 May, 2021, from https://tezos.gitlab.io. (2021).
  23. flowdocs. Non-Fungible Tokens, Retrieved 20 May, 2021, from https://docs.onflow.org/cadence/tutorial/04-non-fungible-tokens/. (2021).
  24. Benisi, N. Z., Aminian, M. & Javadi, B. Blockchain-based decentralized storage networks: A survey. J. Netw. Comput. Appl. 162, 102656 (2020).
  25. NFTReview. On-chain vs. Off-chain Metadata (2021).
  26. Benet, J. Ipfs-content addressed, versioned, p2p file system. arXiv preprint arXiv:1407.3561 (2014).
  27. Nizamuddin, N., Salah, K., Azad, M. A., Arshad, J. & Rehman, M. Decentralized document version control using ethereum blockchain and IPFS. Comput. Electr. Eng. 76, 183–197 (2019).
  28. Tut, K. Who Is Responsible for NFT Data? (2020).
  29. nft.storage. Free Storage for NFTs, Retrieved 16 May, 2021, from https://nft.storage/. (2021).
  30. Psaras, Y. & Dias, D. in 2020 50th Annual IEEE-IFIP International Conference on Dependable Systems and Networks-Supplemental Volume (DSN-S). 80–80 (IEEE).
  31. Tanner, J. & Roelofs, C. NFTs and the need for Self-Sovereign Identity (2021).
  32. Martens, D., Tuyll van Serooskerken, A. V. & Steenhagen, M. Exploring the potential of blockchain for KYC. J. Digit. Bank. 2, 123–131 (2017).
  33. Hammi, M. T., Bellot, P. & Serhrouchni, A. In 2018 IEEE Wireless Communications and Networking Conference (WCNC). 1–6 (IEEE).
  34. Khalid, U. et al. A decentralized lightweight blockchain-based authentication mechanism for IoT systems. Cluster Comput. 1–21 (2020).
  35. Zhong, Y. et al. Distributed blockchain-based authentication and authorization protocol for smart grid. Wirel. Commun. Mobile Comput. (2021).
  36. Schönhals, A., Hepp, T. & Gipp, B. In Proceedings of the 1st Workshop on Cryptocurrencies and Blockchains for Distributed Systems. 105–110.
  37. Verma, S. & Prajapati, G. A Survey of Cryptographic Hash Algorithms and Issues. International Journal of Computer Security & Source Code Analysis (IJCSSCA) 1, 17–20, (2015).
  38. Verma, S. & Prajapati, G. A survey of cryptographic hash algorithms and issues. Int. J. Comput. Secur. Source Code Anal. (IJCSSCA) 1 (2015).
  39. SDK, I. X.509 Certificates (1996).
  40. Helliar, C. V., Crawford, L., Rocca, L., Teodori, C. & Veneziani, M. Permissionless and permissioned blockchain diffusion. Int. J. Inf. Manag. 54, 102136 (2020).
  41. Frizzo-Barker, J. et al. Blockchain as a disruptive technology for business: A systematic review. Int. J. Inf. Manag. 51, 102029 (2020).
  42. Bamakan, S. M. H., Motavali, A. & Bondarti, A. B. A survey of blockchain consensus algorithms performance evaluation criteria. Expert Syst. Appl. 154, 113385 (2020).
  43. Bamakan, S. M. H., Bondarti, A. B., Bondarti, P. B. & Qu, Q. Blockchain technology forecasting by patent analytics and text mining. Blockchain Res. Appl. 100019 (2021).
  44. Castro, M. & Liskov, B. Practical Byzantine fault tolerance and proactive recovery. ACM Trans. Comput. Syst. (TOCS) 20, 398–461 (2002).
  45. Muratov, F., Lebedev, A., Iushkevich, N., Nasrulin, B. & Takemiya, M. YAC: BFT consensus algorithm for blockchain. arXiv preprint arXiv:1809.00554 (2018).
  46. Bessani, A., Sousa, J. & Alchieri, E. E. In 2014 44th Annual IEEE/IFIP International Conference on Dependable Systems and Networks. 355–362 (IEEE).
  47. Todd, P. Ripple protocol consensus algorithm review. May 11th (2015).
  48. Ongaro, D. & Ousterhout, J. In 2014 {USENIX} Annual Technical Conference ({USENIX}{ATC} 14). 305–319.
  49. Larimer, D. Delegated proof-of-stake (dpos). Bitshare whitepaper, Reterived March 31, 2019, from http://docs.bitshares.org/bitshares/dpos.html (2014).
  50. Turner, B. (October, 2007).
  51. De Angelis, S. et al. PBFT vs proof-of-authority: Applying the CAP theorem to permissioned blockchain (2018).
  52. King, S. & Nadal, S. Ppcoin: Peer-to-peer crypto-currency with proof-of-stake. self-published paper, August 19 (2012).
  53. Hyperledger. PoET 1.0 Specification (2017).
  54. Buntinx, J. What Is Proof-of-Weight? Reterived March 31, 2019, from https://nulltx.com/what-is-proof-of-weight/# (2018).
  55. P4Titan. A Peer-to-Peer Crypto-Currency with Proof-of-Burn. Reterived March 10, 2019, from https://github.com/slimcoin-project/slimcoin-project.github.io/raw/master/whitepaperSLM.pdf (2014).
  56. Dziembowski, S., Faust, S., Kolmogorov, V. & Pietrzak, K. In Annual Cryptology Conference. 585–605 (Springer).
  57. Bentov, I., Lee, C., Mizrahi, A. & Rosenfeld, M. Proof of Activity: Extending Bitcoin’s Proof of Work via Proof of Stake. IACR Cryptology ePrint Archive 2014, 452 (2014).
  58. NEM, T. Nem technical referencehttps://nem.io/wpcontent/themes/nem/files/NEM_techRef.pdf (2018).
  59. Bramas, Q. The Stability and the Security of the Tangle (2018).
  60. Baird, L. The swirlds hashgraph consensus algorithm: Fair, fast, byzantine fault tolerance. In Swirlds Tech Reports SWIRLDS-TR-2016–01, Tech. Rep (2016).
  61. LeMahieu, C. Nano: A feeless distributed cryptocurrency network. Nano [Online resource]. https://nano.org/en/whitepaper (date of access: 24.03. 2018) 16, 17 (2018).
  62. Casino, F., Dasaklis, T. K. & Patsakis, C. A systematic literature review of blockchain-based applications: Current status, classification and open issues. Telematics Inform. 36, 55–81 (2019).
  63. bigredawesomedodo. Helping Small Businesses Survive and Grow With Marketing, Retrieved 3 June, 2021, from https://bigredawesomedodo.com/nft/. (2020).


Acknowledgements

This work has been partially supported by CAS President’s International Fellowship Initiative, China [grant number 2021VTB0002, 2021] and National Natural Science Foundation of China (No. 61902385).

Author information

Affiliations

  1. Department of Industrial Management, Yazd University, Yazd City, Iran: Seyed Mojtaba Hosseini Bamakan
  2. Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan City, Iran: Nasim Nezhadsistani
  3. School of Electrical and Computer Engineering, University of Tehran, Tehran City, Iran: Omid Bodaghi
  4. Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China: Seyed Mojtaba Hosseini Bamakan & Qiang Qu
  5. Huawei Blockchain Lab, Huawei Cloud Tech Co., Ltd., Shenzhen, China: Qiang Qu

Contributions

NFT: Redefined Format of IP Assets

The collaboration between National Center for Advancing Translational Sciences (NCATS) at NIH and BurstIQ

2.0 LPBI is a Very Unique Organization 

 

Read Full Post »

Reporter: Stephen J. Williams, Ph.D.

From: Heidi Rheim et al. GA4GH: International policies and standards for data sharing across genomic research and healthcare. (2021): Cell Genomics, Volume 1 Issue 2.

Source: DOI:https://doi.org/10.1016/j.xgen.2021.100029

Highlights

  • Siloing genomic data in institutions/jurisdictions limits learning and knowledge
  • GA4GH policy frameworks enable responsible genomic data sharing
  • GA4GH technical standards ensure interoperability, broad access, and global benefits
  • Data sharing across research and healthcare will extend the potential of genomics

Summary

The Global Alliance for Genomics and Health (GA4GH) aims to accelerate biomedical advances by enabling the responsible sharing of clinical and genomic data through both harmonized data aggregation and federated approaches. The decreasing cost of genomic sequencing (along with other genome-wide molecular assays) and increasing evidence of its clinical utility will soon drive the generation of sequence data from tens of millions of humans, with increasing levels of diversity. In this perspective, we present the GA4GH strategies for addressing the major challenges of this data revolution. We describe the GA4GH organization, which is fueled by the development efforts of eight Work Streams and informed by the needs of 24 Driver Projects and other key stakeholders. We present the GA4GH suite of secure, interoperable technical standards and policy frameworks and review the current status of standards, their relevance to key domains of research and clinical care, and future plans of GA4GH. Broad international participation in building, adopting, and deploying GA4GH standards and frameworks will catalyze an unprecedented effort in data sharing that will be critical to advancing genomic medicine and ensuring that all populations can access its benefits.

In order for genomic and personalized medicine to come to fruition, it is imperative that data siloes around the world be broken down, allowing international collaboration on the collection, storage, transfer, access, and analysis of molecular and health-related data.

We have talked on this site in numerous articles about the problems data siloes produce. By data siloes we mean that the collection and storage of not only data but intellectual thought are held behind physical, electronic, and intellectual walls, inaccessible to other scientists not belonging to a particular institution or even a collaborative network.

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

Standardization and harmonization of data are key to this effort to share electronic records. The EU has taken bold action in this matter. The following section is about the General Data Protection Regulation of the EU and can be found at the following link:

https://ec.europa.eu/info/law/law-topic/data-protection/data-protection-eu_en

Fundamental rights

The EU Charter of Fundamental Rights stipulates that EU citizens have the right to protection of their personal data.

Protection of personal data

Legislation

The data protection package adopted in May 2016 aims at making Europe fit for the digital age. More than 90% of Europeans say they want the same data protection rights across the EU and regardless of where their data is processed.

The General Data Protection Regulation (GDPR)

Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data. This text includes the corrigendum published in the OJEU of 23 May 2018.

The regulation is an essential step to strengthen individuals’ fundamental rights in the digital age and facilitate business by clarifying rules for companies and public bodies in the digital single market. A single law will also do away with the current fragmentation in different national systems and unnecessary administrative burdens.

The regulation entered into force on 24 May 2016 and applies since 25 May 2018. More information for companies and individuals.

Information about the incorporation of the General Data Protection Regulation (GDPR) into the EEA Agreement.

EU Member States notifications to the European Commission under the GDPR

The Data Protection Law Enforcement Directive

Directive (EU) 2016/680 on the protection of natural persons regarding processing of personal data connected with criminal offences or the execution of criminal penalties, and on the free movement of such data.

The directive protects citizens’ fundamental right to data protection whenever personal data is used by criminal law enforcement authorities for law enforcement purposes. It will in particular ensure that the personal data of victims, witnesses, and suspects of crime are duly protected and will facilitate cross-border cooperation in the fight against crime and terrorism.

The directive entered into force on 5 May 2016 and EU countries had to transpose it into their national law by 6 May 2018.

The following paper by the organization The Global Alliance for Genomics and Health discusses these types of collaborative efforts to break down data silos in personalized medicine. This organization has over 2,000 subscribers in over 90 countries, encompassing over 60 organizations.

Enabling responsible genomic data sharing for the benefit of human health

The Global Alliance for Genomics and Health (GA4GH) is a policy-framing and technical standards-setting organization, seeking to enable responsible genomic data sharing within a human rights framework.

The Global Alliance for Genomics and Health (GA4GH) is an international, nonprofit alliance formed in 2013 to accelerate the potential of research and medicine to advance human health. Bringing together 600+ leading organizations working in healthcare, research, patient advocacy, life science, and information technology, the GA4GH community is working together to create frameworks and standards to enable the responsible, voluntary, and secure sharing of genomic and health-related data. All of our work builds upon the Framework for Responsible Sharing of Genomic and Health-Related Data.

GA4GH Connect is a five-year strategic plan that aims to drive uptake of standards and frameworks for genomic data sharing within the research and healthcare communities in order to enable responsible sharing of clinical-grade genomic data by 2022. GA4GH Connect links our Work Streams with Driver Projects—real-world genomic data initiatives that help guide our development efforts and pilot our tools.

From the article on Cell Genomics GA4GH: International policies and standards for data sharing across genomic research and healthcare

Source: Open Access DOI:https://doi.org/10.1016/j.xgen.2021.100029PlumX Metrics

The Global Alliance for Genomics and Health (GA4GH) is a worldwide alliance of genomics researchers, data scientists, healthcare practitioners, and other stakeholders. We are collaborating to establish policy frameworks and technical standards for responsible, international sharing of genomic and other molecular data as well as related health data. Founded in 2013 [3], the GA4GH community now consists of more than 1,000 individuals across more than 90 countries working together to enable broad sharing that transcends the boundaries of any single institution or country (see https://www.ga4gh.org).

In this perspective, we present the strategic goals of GA4GH and detail current strategies and operational approaches to enable responsible sharing of clinical and genomic data, through both harmonized data aggregation and federated approaches, to advance genomic medicine and research. We describe technical and policy development activities of the eight GA4GH Work Streams and implementation activities across 24 real-world genomic data initiatives (“Driver Projects”). We review how GA4GH is addressing the major areas in which genomics is currently deployed including rare disease, common disease, cancer, and infectious disease. Finally, we describe differences between genomic sequence data that are generated for research versus healthcare purposes, and define strategies for meeting the unique challenges of responsibly enabling access to data acquired in the clinical setting.

GA4GH organization

GA4GH has partnered with 24 real-world genomic data initiatives (Driver Projects) to ensure its standards are fit for purpose and driven by real-world needs. Driver Projects make a commitment to help guide GA4GH development efforts and pilot GA4GH standards (see Table 2). Each Driver Project is expected to dedicate at least two full-time equivalents to GA4GH standards development, which takes place in the context of GA4GH Work Streams (see Figure 1). Work Streams are the key production teams of GA4GH, tackling challenges in eight distinct areas across the data life cycle (see Box 1). Work Streams consist of experts from their respective sub-disciplines and include membership from Driver Projects as well as hundreds of other organizations across the international genomics and health community.

Figure 1. Matrix structure of the Global Alliance for Genomics and Health.


Box 1. GA4GH Work Stream focus areas

The GA4GH Work Streams are the key production teams of the organization. Each tackles a specific area in the data life cycle, as described below (URLs are listed in the web resources).

  • (1) Data use & researcher identities: Develops ontologies and data models to streamline global access to datasets generated in any country [9,10]
  • (2) Genomic knowledge standards: Develops specifications and data models for exchanging genomic variant observations and knowledge [18]
  • (3) Cloud: Develops federated analysis approaches to support the statistical rigor needed to learn from large datasets
  • (4) Data privacy & security: Develops guidelines and recommendations to ensure identifiable genomic and phenotypic data remain appropriately secure without sacrificing their analytic potential
  • (5) Regulatory & ethics: Develops policies and recommendations for ensuring individual-level data are interoperable with existing norms and follow core ethical principles
  • (6) Discovery: Develops data models and APIs to make data findable, accessible, interoperable, and reusable (FAIR)
  • (7) Clinical & phenotypic data capture & exchange: Develops data models to ensure genomic data is most impactful through rich metadata collected in a standardized way
  • (8) Large-scale genomics: Develops APIs and file formats to ensure harmonized technological platforms can support large-scale computing

For more articles on Open Access, Science 2.0, and Data Networks for Genomics on this Open Access Scientific Journal see:

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

Icelandic Population Genomic Study Results by deCODE Genetics come to Fruition: Curation of Current genomic studies

eScientific Publishing a Case in Point: Evolution of Platform Architecture Methodologies and of Intellectual Property Development (Content Creation by Curation) Business Model 

UK Biobank Makes Available 200,000 whole genomes Open Access

Systems Biology Analysis of Transcription Networks, Artificial Intelligence, and High-End Computing Coming to Fruition in Personalized Oncology

Read Full Post »

UK Biobank Makes Available 200,000 whole genomes Open Access

Reporter: Stephen J. Williams, Ph.D.

The following is a summary of an article by Jocelyn Kaiser, published in the November 26, 2021 issue of the journal Science.

To see the full article please go to https://www.science.org/content/article/200-000-whole-genomes-made-available-biomedical-studies-uk-effort

The UK Biobank (UKBB) this week unveiled to scientists the entire genomes of 200,000 people who are part of a long-term British health study.

The trove of genomes, each linked to anonymized medical information, will allow biomedical scientists to scour the full 3 billion base pairs of human DNA for insights into the interplay of genes and health that could not be gleaned from partial sequences or scans of genome markers. “It is thrilling to see the release of this long-awaited resource,” says Stephen Glatt, a psychiatric geneticist at the State University of New York Upstate Medical University.

Other biobanks have also begun to compile vast numbers of whole genomes, 100,000 or more in some cases (see table, below). But UKBB stands out because it offers easy access to the genomic information, according to some of the more than 20,000 researchers in 90 countries who have signed up to use the data. “In terms of availability and data quality, [UKBB] surpasses all others,” says physician and statistician Omar Yaxmehen Bello-Chavolla of the National Institute for Geriatrics in Mexico City.

Enabling your vision to improve public health

Data drives discovery. We have curated a uniquely powerful biomedical database that can be accessed globally for public health research. Explore data from half a million UK Biobank participants to enable new discoveries to improve public health.


This UKBB biobank represents genomes collected from 500,000 middle-aged and elderly participants from 2006 to 2010. The genomes are mostly of European descent. Other large-scale genome sequencing ventures exist; for example, Iceland’s deCODE, which collected over 100,000 genomes, is now a subsidiary of Amgen, and its data are mostly behind IP protection, not open access as this database is.

UK Biobank is a large-scale biomedical database and research resource, containing in-depth genetic and health information from half a million UK participants. The database is regularly augmented with additional data and is globally accessible to approved researchers undertaking vital research into the most common and life-threatening diseases. It is a major contributor to the advancement of modern medicine and treatment and has enabled several scientific discoveries that improve human health.

A summary of some large-scale genome sequencing projects is shown in the table below:

| Biobank | Completed whole genomes | Release information |
| --- | --- | --- |
| UK Biobank | 200,000 | 300,000 more in early 2023 |
| Trans-Omics for Precision Medicine | 161,000 | NIH requires project-specific request |
| Million Veterans Program | 125,000 | Non-Veterans Affairs researchers get first access |
| 100,000 Genomes Project | 120,000 | Researchers must join Genomics England collaboration |
| All of Us | 90,000 | NIH expects to release in 2022 |

Other Related Articles on Genome Biobank Projects in this Open Access Online Scientific Journal Include the Following:

Icelandic Population Genomic Study Results by deCODE Genetics come to Fruition: Curation of Current genomic studies

Exome Aggregation Consortium (ExAC), generated the largest catalogue so far of variation in human protein-coding regions: Sequence data of 60,000 people, NOW is a publicly accessible database

Systems Biology Analysis of Transcription Networks, Artificial Intelligence, and High-End Computing Coming to Fruition in Personalized Oncology

Diversity and Health Disparity Issues Need to be Addressed for GWAS and Precision Medicine Studies

Read Full Post »

#TUBiol5227: Biomarkers & Biotargets: Genetic Testing and Bioethics

Curator: Stephen J. Williams, Ph.D.

The advent of direct-to-consumer (DTC) genetic testing, and the resulting rapid increase in its popularity and in the number of companies offering such services, has created some urgent and unique bioethical challenges surrounding this niche of the marketplace. At first, most DTC companies like 23andMe and Ancestry.com offered non-clinical or non-FDA-approved genetic testing as a way for consumers to draw casual inferences from their DNA sequence and the existence of known genes linked to disease risk, or to get a glimpse of their familial background. However, many issues arose, including legal, privacy, medical, and bioethical ones. Below are some articles which explain and discuss many of these problems associated with the DTC genetic testing market, as well as some alternatives which may exist.

‘Direct-to-Consumer (DTC) Genetic Testing Market to hit USD 2.5 Bn by 2024’ by Global Market Insights

This post links to the market analysis of the DTC market (https://www.gminsights.com/pressrelease/direct-to-consumer-dtc-genetic-testing-market). Below are the highlights of the report.

As you can see, this market segment appears poised to expand into the nutritional consulting business as well as targeted biomarkers for specific diseases.

Rising incidence of genetic disorders across the globe will augment the market growth

Increasing prevalence of genetic disorders will propel the demand for direct-to-consumer genetic testing and will augment industry growth over the projected timeline. Increasing cases of genetic diseases such as breast cancer, achondroplasia, colorectal cancer and other diseases have elevated the need for cost-effective and efficient genetic testing avenues in the healthcare market.
 

For instance, according to the World Cancer Research Fund (WCRF), in 2018, over 2 million new cases of cancer were diagnosed across the globe. Also, breast cancer is cited as the second most commonly occurring cancer. Availability of superior-quality, advanced direct-to-consumer genetic testing has drastically reduced mortality rates in people suffering from cancer by providing vigilant surveillance data even before the onset of the disease. Hence, the aforementioned factors will propel the direct-to-consumer genetic testing market over the forecast timeline.
 

DTC Genetic Testing Market By Technology

 

Nutrigenomic Testing will provide robust market growth

The nutrigenomic testing segment was valued at over USD 220 million in 2019, and the segment will witness tremendous growth over 2020-2028. The growth of this market segment is attributed to increasing research activities related to nutritional aspects. Moreover, obesity is another major factor that will boost demand in the direct-to-consumer genetic testing market.
 

Nutrigenomics testing enables professionals to recommend nutritional guidance and personalized diets to obese people and helps them keep their weight under control while maintaining a healthy lifestyle. Hence, the above-mentioned factors are anticipated to augment the demand for and adoption rate of direct-to-consumer genetic testing through 2028.
 

Browse key industry insights spread across 161 pages with 126 market data tables & 10 figures & charts from the report, “Direct-To-Consumer Genetic Testing Market Size By Test Type (Carrier Testing, Predictive Testing, Ancestry & Relationship Testing, Nutrigenomics Testing), By Distribution Channel (Online Platforms, Over-the-Counter), By Technology (Targeted Analysis, Single Nucleotide Polymorphism (SNP) Chips, Whole Genome Sequencing (WGS)), Industry Analysis Report, Regional Outlook, Application Potential, Price Trends, Competitive Market Share & Forecast, 2020 – 2028” in detail along with the table of contents:
https://www.gminsights.com/industry-analysis/direct-to-consumer-dtc-genetic-testing-market
 

Targeted analysis techniques will drive the market growth over the foreseeable future

Based on technology, the DTC genetic testing market is segmented into whole genome sequencing (WGS), targeted analysis, and single nucleotide polymorphism (SNP) chips. The targeted analysis market segment is projected to witness around 12% CAGR over the forecast period. The segmental growth is attributed to recent advancements in genetic testing methods that have revolutionized the detection and characterization of genetic codes.
 

Targeted analysis is mainly utilized to determine any defects in genes that are responsible for a disorder or a disease. Also, growing demand for personalized medicine among the population suffering from genetic diseases will boost demand for targeted analysis technology. As the technology is relatively cheap, it is a highly preferred method in direct-to-consumer genetic testing procedures. These advantages of targeted analysis are expected to enhance market growth over the foreseeable future.
 

Over-the-counter segment will experience a notable growth over the forecast period

The over-the-counter distribution channel is projected to witness around 11% CAGR through 2028. The segmental growth is attributed to the ease of purchasing a test kit for consumers living in rural areas of developing countries. Consumers prefer the over-the-counter distribution channel because the kits are directly examined by regulatory agencies, making them safer to use, thereby driving market growth over the forecast timeline.
 

Favorable regulations provide lucrative growth opportunities for direct-to-consumer genetic testing

The Europe direct-to-consumer genetic testing market held around a 26% share in 2019 and was valued at around USD 290 million. The regional growth is due to elevated government spending on healthcare to provide easy access to genetic testing avenues. Furthermore, European regulatory bodies are working on improving the regulations governing direct-to-consumer genetic testing methods. Hence, the above-mentioned factors will play a significant role in the market growth.
 

Focus of market players on introducing innovative direct-to-consumer genetic testing devices will offer several growth opportunities

A few of the eminent players operating in the direct-to-consumer genetic testing market include Ancestry, Color Genomics, Living DNA, Mapmygenome, Easy DNA, FamilytreeDNA (Gene By Gene), Full Genome Corporation, Helix OpCo LLC, Identigene, Karmagenes, MyHeritage, Pathway Genomics, Genesis Healthcare, and 23andMe. These market players have undertaken various business strategies to enhance their financial stability and help them evolve as leading companies in the direct-to-consumer genetic testing industry.
 

For example, in November 2018, Helix launched a new genetic testing product, the DNA Discovery Kit, that allows customers to delve into their ancestry. This development expanded the firm’s product portfolio, thereby propelling industry growth in the market.

The following posts discuss bioethical issues related to genetic testing and personalized medicine from a clinician’s and scientist’s perspective:

Question: Each of these articles discusses certain bioethical issues, although each focuses on personalized medicine and treatment. Given your understanding of the robust process involved in validating clinical biomarkers and the current state of the DTC market, how could DTC testing results misinform patients and create mistrust in the physician-patient relationship?

Personalized Medicine, Omics, and Health Disparities in Cancer:  Can Personalized Medicine Help Reduce the Disparity Problem?

Diversity and Health Disparity Issues Need to be Addressed for GWAS and Precision Medicine Studies

Genomics & Ethics: DNA Fragments are Products of Nature or Patentable Genes?

The following posts discuss the bioethical concerns of genetic testing from a patient’s perspective:

Ethics Behind Genetic Testing in Breast Cancer: A Webinar by Laura Carfang of survivingbreastcancer.org

Ethical Concerns in Personalized Medicine: BRCA1/2 Testing in Minors and Communication of Breast Cancer Risk

23andMe Product can be obtained for Free from a new app called Genes for Good: UMich’s Facebook-based Genomics Project

Question: If you are developing a targeted treatment with a companion diagnostic, what bioethical concerns would you address during the drug development process to ensure fair, equitable and ethical treatment of all patients, in trials as well as post market?

Articles on Genetic Testing, Companion Diagnostics and Regulatory Mechanisms

Centers for Medicare & Medicaid Services announced that the federal healthcare program will cover the costs of cancer gene tests that have been approved by the Food and Drug Administration

Real Time Coverage @BIOConvention #BIO2019: Genome Editing and Regulatory Harmonization: Progress and Challenges

New York Times vs. Personalized Medicine? PMC President: Times’ Critique of Streamlined Regulatory Approval for Personalized Treatments ‘Ignores Promising Implications’ of Field

Live Conference Coverage @Medcitynews Converge 2018 Philadelphia: Early Diagnosis Through Predictive Biomarkers, NonInvasive Testing

Protecting Your Biotech IP and Market Strategy: Notes from Life Sciences Collaborative 2015 Meeting

Question: What type of regulatory concerns should one have during the drug development process with regard to the use of biomarker testing? Based on the last article, on protecting your IP, how important is it, as a drug developer, to involve all payers during the drug development process?

Read Full Post »

Science Policy Forum: Should we trust healthcare explanations from AI predictive systems?

Some in industry voice their concerns

Curator: Stephen J. Williams, PhD

Post on AI healthcare and explainable AI

   In a Policy Forum article in Science, “Beware explanations from AI in health care”, Boris Babic, Sara Gerke, Theodoros Evgeniou, and Glenn Cohen discuss the caveats of relying on explainable versus interpretable artificial intelligence (AI) and machine learning (ML) algorithms to make complex health decisions.  The FDA has already approved some AI/ML algorithms for the analysis of medical images for diagnostic purposes.  These have been discussed in prior posts on this site, as well as issues arising from multi-center trials.  The authors of this perspective article argue that the choice of algorithm type (explainable versus interpretable) may have far-reaching consequences in health care.

Summary

Artificial intelligence and machine learning (AI/ML) algorithms are increasingly developed in health care for diagnosis and treatment of a variety of medical conditions (1). However, despite the technical prowess of such systems, their adoption has been challenging, and whether and how much they will actually improve health care remains to be seen. A central reason for this is that the effectiveness of AI/ML-based medical devices depends largely on the behavioral characteristics of its users, who, for example, are often vulnerable to well-documented biases or algorithmic aversion (2). Many stakeholders increasingly identify the so-called black-box nature of predictive algorithms as the core source of users’ skepticism, lack of trust, and slow uptake (3, 4). As a result, lawmakers have been moving in the direction of requiring the availability of explanations for black-box algorithmic decisions (5). Indeed, a near-consensus is emerging in favor of explainable AI/ML among academics, governments, and civil society groups. Many are drawn to this approach to harness the accuracy benefits of noninterpretable AI/ML such as deep learning or neural nets while also supporting transparency, trust, and adoption. We argue that this consensus, at least as applied to health care, both overstates the benefits and undercounts the drawbacks of requiring black-box algorithms to be explainable.

Source: https://science.sciencemag.org/content/373/6552/284?_ga=2.166262518.995809660.1627762475-1953442883.1627762475

Types of AI/ML Algorithms: Explainable and Interpretable algorithms

  1. Interpretable AI: A typical AI/ML task requires constructing an algorithm from vector inputs and generating an output related to an outcome (like diagnosing a cardiac event from an image). Generally the algorithm has to be trained on past data with known parameters. When an algorithm is called interpretable, this means that it uses a transparent or “white box” function which is easily understandable. An example might be a linear function in which relationships are determined by simple, uncomplicated parameters. Although interpretable algorithms may not be as accurate as the more complex explainable AI/ML algorithms, they are open, transparent, and easily understood by their operators.
  2. Explainable AI/ML: This type of algorithm depends upon multiple complex parameters. It takes a first round of predictions from a “black box” model and then uses a second algorithm, based on an interpretable function, to approximate the outputs of the first model. The second, interpretable algorithm is trained not on the original data but on the predictions of the black-box model, over multiple iterations of computing. This method is therefore more accurate, or deemed more reliable, in prediction; however, it is very complex and not easily understandable. Many medical devices that use an AI/ML algorithm are of this type. Examples include deep learning and neural networks.

The purpose of both these methodologies is to deal with the problem of opacity, namely that AI predictions generated from a black box undermine trust in the AI.
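To make the distinction concrete, here is a minimal, self-contained sketch in Python, assuming NumPy and scikit-learn are installed; the data, model choices, and variable names are illustrative rather than drawn from the cited article. It fits a transparent linear model whose coefficients can be read directly (interpretable AI), and separately approximates a black-box model with an interpretable surrogate trained on the black box's predictions, which is the core idea behind post hoc explainability.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                 # e.g., age, BMI, years smoking, career code
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Interpretable AI: a transparent linear function whose coefficients can be read directly.
white_box = LinearRegression().fit(X, y)
print("interpretable coefficients:", white_box.coef_)

# Explainable AI: a complex black-box model ...
black_box = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# ... approximated by a second, interpretable surrogate trained on the black box's
# predictions rather than on the original labels.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print("surrogate fidelity (R^2 vs black box):", surrogate.score(X, black_box.predict(X)))
```

The design point is that the surrogate explains the black box only approximately: its fidelity score tells you how faithfully the simple model tracks the complex one, which is exactly the gap the Science authors caution about.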

For a deeper understanding of these two types of algorithms see here:

https://www.kdnuggets.com/2018/12/machine-learning-explainability-interpretability-ai.html

or https://www.bmc.com/blogs/machine-learning-interpretability-vs-explainability/

(a longer read but great explanation)

From the above blog post of Jonathan Johnson

  • How interpretability is different from explainability
  • Why a model might need to be interpretable and/or explainable
  • Who is working to solve the black box problem—and how

What is interpretability?

Does Chipotle make your stomach hurt? Does loud noise accelerate hearing loss? Are women less aggressive than men? If a machine learning model can create a definition around these relationships, it is interpretable.

All models must start with a hypothesis. Human curiosity propels a being to intuit that one thing relates to another. “Hmm…multiple black people shot by policemen…seemingly out of proportion to other races…something might be systemic?” Explore.

People create internal models to interpret their surroundings. In the field of machine learning, these models can be tested and verified as either accurate or inaccurate representations of the world.

Interpretability means that the cause and effect can be determined.

What is explainability?

ML models are often called black-box models because they allow a pre-set number of empty parameters, or nodes, to be assigned values by the machine learning algorithm. Specifically, the back-propagation step is responsible for updating the weights based on its error function.

To predict when a person might die—the fun gamble one might play when calculating a life insurance premium, and the strange bet a person makes against their own life when purchasing a life insurance package—a model will take in its inputs, and output a percent chance the given person has at living to age 80.

Below is an image of a neural network. The inputs are the yellow; the outputs are the orange. Like a rubric to an overall grade, explainability shows how significant each of the parameters, all the blue nodes, contribute to the final decision.

In this neural network, the hidden layers (the two columns of blue dots) would be the black box.

For example, we have these data inputs:

  • Age
  • BMI score
  • Number of years spent smoking
  • Career category

If this model had high explainability, we’d be able to say, for instance:

  • The career category is about 40% important
  • The number of years spent smoking weighs in at 35% important
  • The age is 15% important
  • The BMI score is 10% important

Explainability: important, not always necessary

Explainability becomes significant in the field of machine learning because, often, it is not apparent. Explainability is often unnecessary. A machine learning engineer can build a model without ever having considered the model’s explainability. It is an extra step in the building process—like wearing a seat belt while driving a car. It is unnecessary for the car to perform, but offers insurance when things crash.

The benefit a deep neural net offers to engineers is it creates a black box of parameters, like fake additional data points, that allow a model to base its decisions against. These fake data points go unknown to the engineer. The black box, or hidden layers, allow a model to make associations among the given data points to predict better results. For example, if we are deciding how long someone might have to live, and we use career data as an input, it is possible the model sorts the careers into high- and low-risk career options all on its own.

Perhaps we inspect a node and see it relates oil rig workers, underwater welders, and boat cooks to each other. It is possible the neural net makes connections between the lifespan of these individuals and puts a placeholder in the deep net to associate these. If we were to examine the individual nodes in the black box, we could note this clustering interprets water careers to be a high-risk job.

In the previous chart, each one of the lines connecting from the yellow dot to the blue dot can represent a signal, weighing the importance of that node in determining the overall score of the output.

  • If that signal is high, that node is significant to the model’s overall performance.
  • If that signal is low, the node is insignificant.

With this understanding, we can define explainability as:

Knowledge of what one node represents and how important it is to the model’s performance.
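As a rough illustration of that definition, the following sketch (assuming scikit-learn is available; the feature names mirror the blog's example, while the data, model, and resulting numbers are synthetic and purely hypothetical) uses permutation importance to estimate how much each input contributes to a model's performance and reports the result as percentages, in the spirit of the figures quoted above.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
features = ["age", "bmi", "years_smoking", "career_category"]
X = rng.normal(size=(1000, 4))
# Synthetic target: probability of living to 80, driven mostly by smoking and age.
logits = -1.5 * X[:, 2] - 0.8 * X[:, 0] + 0.3 * X[:, 1]
y = (logits + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Normalize to percentages so the output reads like "career is about 40% important".
importances = result.importances_mean.clip(min=0)
weights = importances / importances.sum()
for name, w in sorted(zip(features, weights), key=lambda pair: -pair[1]):
    print(f"{name}: {100 * w:.0f}% of the explained importance")
```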

So how does choice of these two different algorithms make a difference with respect to health care and medical decision making?

The authors argue: 

“Regulators like the FDA should focus on those aspects of the AI/ML system that directly bear on its safety and effectiveness – in particular, how does it perform in the hands of its intended users?”

Their suggestions include:

  • Enhanced more involved clinical trials
  • Provide individuals added flexibility when interacting with a model, for example inputting their own test data
  • More interaction between user and model generators
  • Determining which situations call for interpretable AI versus explainable AI (for instance, predicting which patients will require dialysis after kidney damage)

Other articles on AI/ML in medicine and healthcare on this Open Access Journal include

Applying AI to Improve Interpretation of Medical Imaging

Real Time Coverage @BIOConvention #BIO2019: Machine Learning and Artificial Intelligence #AI: Realizing Precision Medicine One Patient at a Time

LIVE Day Three – World Medical Innovation Forum ARTIFICIAL INTELLIGENCE, Boston, MA USA, Monday, April 10, 2019

Cardiac MRI Imaging Breakthrough: The First AI-assisted Cardiac MRI Scan Solution, HeartVista Receives FDA 510(k) Clearance for One Click™ Cardiac MRI Package

 

Read Full Post »

Personalized Medicine, Omics, and Health Disparities in Cancer:  Can Personalized Medicine Help Reduce the Disparity Problem?

Curator: Stephen J. Williams, PhD

In a Science Perspective article, Timothy Rebbeck highlights the health disparities, specifically the cancer disparities, that exist in sub-Saharan African (SSA) nations, comparing cancer incidence there with cancer incidence in high-income areas of the world [1].  The sub-Saharan African nations display a much higher incidence of prostate, breast, and cervical cancer, and these cancers are predicted to double within the next twenty years, according to IARC [2].  Most importantly,

 the histopathologic and demographic features of these tumors differ from those in high-income countries

meaning that the differences seen in incidence may reflect a true health disparity, as increased rates of these cancers are not seen in high-income countries (HICs).

The most frequent male cancers in SSA include prostate, lung, liver, leukemia, non-Hodgkin’s lymphoma, and Kaposi’s sarcoma (a cancer frequently seen in HIV-infected patients [3]).  In SSA women, breast and cervical cancer are the most common, and these display higher rates than seen in high-income countries.  In fact, liver cancer is seen in SSA females at twice, and in SSA males at almost three times, the rate seen in high-income countries.

 

 

 

 

 

 

Reasons for cancer disparity in SSA

Patients with cancer are often diagnosed at a late stage in SSA countries.  This contrasts with patients from high-income countries, whose cancers are usually diagnosed at an earlier stage, and with many cancers, like breast [4], ovarian [5, 6], and colon, detecting the tumor in the early stages is critical for a favorable outcome and prognosis [7-10].  In addition, late diagnosis also limits many therapeutic options for the cancer patient, and disease at later stages is much harder to manage, especially with respect to unresponsiveness and/or resistance to many therapies.  Moreover, treatments have to be performed in low-resource settings in SSA, where availability of clinical lab work and imaging technologies may be limited.

Molecular differences in SSA versus HIC cancers which may account for disparities

Emerging evidence suggests that there are distinct molecular signatures in SSA tumors with respect to histotype and pathology.  For example, Dr. Rebbeck mentions that Nigerian breast cancers were defined by increased mutational signatures associated with deficiency of the homologous recombination DNA repair pathway, pervasive mutations in the tumor suppressor gene TP53, mutations in GATA binding protein 3 (GATA3), and a greater mutational burden, compared with breast tumors from African Americans or Caucasians [11].  However, more research will be required to understand the etiology and causal factors behind this distinction in mutational spectra.

It is believed that there is a higher rate of hereditary cancers in SSA, and many SSA cancers exhibit a more aggressive phenotype than in other parts of the world.  For example, breast tumors in SSA black cases are twice as likely as SSA Caucasian cases to be of the triple-negative phenotype, which is generally more aggressive and tougher to detect and treat, as triple-negative cancers are HER2-negative and therefore not candidates for Herceptin.  Also, BRCA1/2 mutations are more frequent in black SSA cases than in Caucasian SSA cases [12, 13].

Initiatives to Combat Health Disparities in SSA

Multiple initiatives have been proposed or are in action to bring personalized medicine to the sub-Saharan African nations.  These include:

H3Africa empowers African researchers to be competitive in genomic sciences, establishes and nurtures effective collaborations among African researchers on the African continent, and generates unique data that could be used to improve both African and global health.

There is currently a global effort to apply genomic science and associated technologies to further the understanding of health and disease in diverse populations. These efforts work to identify individuals and populations who are at risk for developing specific diseases, and to better understand underlying genetic and environmental contributions to that risk. Given the large amount of genetic diversity on the African continent, there exists an enormous opportunity to utilize such approaches to benefit African populations and to inform global health.

The Human Heredity and Health in Africa (H3Africa) consortium facilitates fundamental research into diseases on the African continent while also developing infrastructure, resources, training, and ethical guidelines to support a sustainable African research enterprise – led by African scientists, for the African people. The initiative consists of 51 African projects that include population-based genomic studies of common, non-communicable disorders such as heart and renal disease, as well as communicable diseases such as tuberculosis. These studies are led by African scientists and use genetic, clinical, and epidemiologic methods to identify hereditary and environmental contributions to health and disease. To establish a foundation for African scientists to continue this essential work into the future work, the consortium also supports many crucial capacity building elements, such as: ethical, legal, and social implications research; training and capacity building for bioinformatics; capacity for biobanking; and coordination and networking.

The World Economic Forum’s Leapfrogging with Precision Medicine project 

This project is part of the World Economic Forum’s Shaping the Future of Health and Healthcare Platform

The Challenge

Advancing precision medicine in a way that is equitable and beneficial to society means ensuring that healthcare systems can adopt the most scientifically and technologically appropriate approaches to a more targeted and personalized way of diagnosing and treating disease. In certain instances, countries or institutions may be able to bypass, or “leapfrog”, legacy systems or approaches that prevail in developed country contexts.

The World Economic Forum’s Leapfrogging with Precision Medicine project will develop a set of tools and case studies demonstrating how a precision medicine approach in countries with greenfield policy spaces can potentially transform their healthcare delivery and outcomes. Policies and governance mechanisms that enable leapfrogging will be iterated and scaled up to other projects.

Successes in personalized genomic research in SSA

As Dr. Rebbeck states:

 Because of the underlying genetic and genomic relationships between Africans and members of the African diaspora (primarily in North America and Europe), knowledge gained from research in SSA can be used to address health disparities that are prevalent in members of the African diaspora.

For example, West African heritage and genomic ancestry have been reported to confer the highest genomic risk for prostate cancer of any worldwide population [14].

 

PERSPECTIVE | GLOBAL HEALTH

Cancer in sub-Saharan Africa

Timothy R. Rebbeck


Science, 03 Jan 2020: Vol. 367, Issue 6473, pp. 27-28. DOI: 10.1126/science.aay474

Summary/Abstract

Cancer is an increasing global public health burden. This is especially the case in sub-Saharan Africa (SSA); high rates of cancer—particularly of the prostate, breast, and cervix—characterize cancer in most countries in SSA. The number of these cancers in SSA is predicted to more than double in the next 20 years (1). Both the explanations for these increasing rates and the solutions to address this cancer epidemic require SSA-specific data and approaches. The histopathologic and demographic features of these tumors differ from those in high-income countries (HICs). Basic knowledge of the epidemiology, clinical features, and molecular characteristics of cancers in SSA is needed to build prevention and treatment tools that will address the future cancer burden. The distinct distribution and determinants of cancer in SSA provide an opportunity to generate knowledge about cancer risk factors, genomics, and opportunities for prevention and treatment globally, not only in Africa.

 

References

  1. Rebbeck TR: Cancer in sub-Saharan Africa. Science 2020, 367(6473):27-28.
  2. Parkin DM, Ferlay J, Jemal A, Borok M, Manraj S, N’Da G, Ogunbiyi F, Liu B, Bray F: Cancer in Sub-Saharan Africa: International Agency for Research on Cancer; 2018.
  3. Chinula L, Moses A, Gopal S: HIV-associated malignancies in sub-Saharan Africa: progress, challenges, and opportunities. Current opinion in HIV and AIDS 2017, 12(1):89-95.
  4. Colditz GA: Epidemiology of breast cancer. Findings from the nurses’ health study. Cancer 1993, 71(4 Suppl):1480-1489.
  5. Hamilton TC, Penault-Llorca F, Dauplat J: [Natural history of ovarian adenocarcinomas: from epidemiology to experimentation]. Contracept Fertil Sex 1998, 26(11):800-804.
  6. Garner EI: Advances in the early detection of ovarian carcinoma. J Reprod Med 2005, 50(6):447-453.
  7. Brockbank EC, Harry V, Kolomainen D, Mukhopadhyay D, Sohaib A, Bridges JE, Nobbenhuis MA, Shepherd JH, Ind TE, Barton DP: Laparoscopic staging for apparent early stage ovarian or fallopian tube cancer. First case series from a UK cancer centre and systematic literature review. European journal of surgical oncology : the journal of the European Society of Surgical Oncology and the British Association of Surgical Oncology 2013, 39(8):912-917.
  8. Kolligs FT: Diagnostics and Epidemiology of Colorectal Cancer. Visceral medicine 2016, 32(3):158-164.
  9. Rocken C, Neumann U, Ebert MP: [New approaches to early detection, estimation of prognosis and therapy for malignant tumours of the gastrointestinal tract]. Zeitschrift fur Gastroenterologie 2008, 46(2):216-222.
  10. Srivastava S, Verma M, Henson DE: Biomarkers for early detection of colon cancer. Clinical cancer research : an official journal of the American Association for Cancer Research 2001, 7(5):1118-1126.
  11. Pitt JJ, Riester M, Zheng Y, Yoshimatsu TF, Sanni A, Oluwasola O, Veloso A, Labrot E, Wang S, Odetunde A et al: Characterization of Nigerian breast cancer reveals prevalent homologous recombination deficiency and aggressive molecular features. Nature communications 2018, 9(1):4181.
  12. Zheng Y, Walsh T, Gulsuner S, Casadei S, Lee MK, Ogundiran TO, Ademola A, Falusi AG, Adebamowo CA, Oluwasola AO et al: Inherited Breast Cancer in Nigerian Women. Journal of clinical oncology : official journal of the American Society of Clinical Oncology 2018, 36(28):2820-2825.
  13. Rebbeck TR, Friebel TM, Friedman E, Hamann U, Huo D, Kwong A, Olah E, Olopade OI, Solano AR, Teo SH et al: Mutational spectrum in a worldwide study of 29,700 families with BRCA1 or BRCA2 mutations. Human mutation 2018, 39(5):593-620.
  14. Lachance J, Berens AJ, Hansen MEB, Teng AK, Tishkoff SA, Rebbeck TR: Genetic Hitchhiking and Population Bottlenecks Contribute to Prostate Cancer Disparities in Men of African Descent. Cancer research 2018, 78(9):2432-2443.

Other articles on Cancer Health Disparities and Genomics on this Online Open Access Journal Include:

Gender affects the prevalence of the cancer type
The Rutgers Global Health Institute, part of Rutgers Biomedical and Health Sciences, Rutgers University, New Brunswick, New Jersey – A New Venture Designed to Improve Health and Wellness Globally
Breast Cancer Disparities to be Sponsored by NIH: NIH Launches Largest-ever Study of Breast Cancer Genetics in Black Women
War on Cancer Needs to Refocus to Stay Ahead of Disease Says Cancer Expert
Ethical Concerns in Personalized Medicine: BRCA1/2 Testing in Minors and Communication of Breast Cancer Risk
Ethics Behind Genetic Testing in Breast Cancer: A Webinar by Laura Carfang of survivingbreastcancer.org
Live Notes from @HarvardMed Bioethics: Authors Jerome Groopman, MD & Pamela Hartzband, MD, discuss Your Medical Mind
Testing for Multiple Genetic Mutations via NGS for Patients: Very Strong Family History of Breast & Ovarian Cancer, Diagnosed at Young Ages, & Negative on BRCA Test
Study Finds that Both Women and their Primary Care Physicians Confusion over Ovarian Cancer Symptoms May Lead to Misdiagnosis

 

Read Full Post »

From @Harvardmed Center for Bioethics: The Medical Ethics of the Corona Virus Crisis

Reporter: Stephen J. Williams, Ph.D.

From Harvard Medical School Center for Bioethics

source: https://bioethics.hms.harvard.edu/news/medical-ethics-corona-virus-crisis

The Medical Ethics of the Corona Virus Crisis

Executive Director Christine Mitchell discusses the importance of institutions talking through the implications of their decisions with the New Yorker.

Center Executive Director Christine Mitchell spoke with the New Yorker’s Isaac Chotiner about the decisions that may need to be made on limiting movement and, potentially, rationing supplies and hospital space.

“So, in the debate about allocating resources in a pandemic, we have to work with our colleagues around what kind of space is going to be made available—which means that other people and other services have to be dislocated—what kind of supplies we’re going to have, whether we’re going to reuse them, how we will reallocate staff, whether we can have staff who are not specialists take care of patients because we have way more patients than the number of specialized staff,” says Mitchell.

Read the full Q&A in the New Yorker.

 

Note: The following is taken from the Interview in the New Yorker.

As the novel coronavirus, COVID-19, spreads across the globe, governments have been taking increasingly severe measures to limit the virus’s infection rate. China, where it originated, has instituted quarantines in areas with a large number of cases, and Italy—which is now facing perhaps the most serious threat outside of China—is entirely under quarantine. In the United States, the National Guard has been deployed to manage a “containment area” in New Rochelle, New York, where one of the country’s largest clusters has emerged. As the number of cases rises, we will soon face decisions on limiting movement and, potentially, rationing supplies and hospital space. These issues will be decided at the highest level by politicians, but they are often influenced by medical ethicists, who advise governments and other institutions about the way to handle medical emergencies.

One of those ethicists, with whom I recently spoke by phone, is Christine Mitchell, the executive director at the Center for Bioethics at Harvard Medical School. Mitchell, who has master’s degrees in nursing and philosophical and religious ethics, has been a clinical ethicist for thirty years. She founded the ethics program at Boston Children’s Hospital, and has served on national and international medical-ethics commissions. During our conversation, which has been edited for length and clarity, we discussed what ethicists tend to focus on during a health crisis, how existing health-care access affects crisis response, and the importance of institutions talking through the ethical implications of their decisions.

What coronavirus-related issue has most occupied your mental space over the past weeks?

One of the things I think about but that we don’t often have an opportunity to talk about, when we are mostly focussing on what clinicians are doing and trying to prepare for, is the more general ways this affects our society. People get sick out there in the real world, and then they come to our hospitals, but, when they are sick, a whole bunch of them don’t have health insurance, or are afraid to come to a hospital, or they don’t have coverage for sick time or taking a day off when their child is sick, so they send their child to school. So these all have very significant influences on our ability to manage population health and community transmission that aren’t things that nurses and physicians and people who work in acute-care hospitals and clinics can really affect. They are elements of the way our society is structured and has failed to meet the needs of our general population, and they influence our ability to manage a crisis like this.

Is there anything specifically about a pandemic or something like coronavirus that makes these issues especially acute?

If a person doesn’t have health insurance and doesn’t come to be tested or treated, and if they don’t have sick-time coverage and can’t leave work, so they teach at a school, or they work at a restaurant, or do events that have large numbers of people, these are all ways in which the spread of a virus like this has to be managed—and yet can’t be managed effectively because of our social-welfare policies, not just our health-care resources.

Just to take a step back, and I want to get back to coronavirus stuff, but what got you interested in medical ethics?

What got me interested were the actual kinds of problems that came up when I was taking care of patients, starting as early as when I was in nursing school and was taking care of a patient who, as a teen-ager, had a terminal kind of cancer that his parents didn’t want him to know about, and which the health-care team had decided to defer to the parents. And yet I was spending every day taking care of him, and he was really puzzled about why he was so sick and whether he was going to get better, and so forth. And so of course I was faced with this question of, What do I do if he asks me? Which, of course, he did.

And this question about what you should tell an adolescent and whether the deference should be to his parents’ judgment about what’s best for him, which we would ordinarily respect, and the moral demands of the relationship that you have with a patient, was one of the cases that reminded me that there’s a lot more to being a nurse or a health-care provider than just knowing how to give cancer chemotherapy and change a bed, or change a dressing, or whatever. That a lot of it is in the relationship you have with a patient and the kinds of ethical choices they and their families are facing. They need your information, but also your help as they think things through. That’s the kind of thing that got me interested in it. There are a whole host of those kinds of cases, but they’re more individual cases.

As I began to work in a hospital as an ethicist, I began to worry about the broader organizational issues, like emergency preparedness. Some years ago, here in Boston, I had a joint appointment running the ethics program at Children’s Hospital and doing clinical ethics at Harvard Medical School. We pulled together a group, with the Department of Public Health and the emergency-preparedness clinicians in the Harvard-affiliated hospitals, to look at what the response within the state of Massachusetts should be to big, major disasters or rolling pandemics, and worked on some guidelines together.

When you looked at the response of our government, in a place like Washington State or in New York City, what things, from a medical-ethics perspective, are you noticing that are either good or maybe not so good?

To be candid and, probably, to use language that’s too sharp for publication, I’m appalled. We didn’t get ourselves ready. We’ve had outbreaks—SARS in 2003, H1N1 in 2009, Ebola in 2013, Zika in 2016. We’ve known, and the general population in some ways has known. They even have movies like “Contagion” that did a great job of sharing publicly what this is like, although it is fictional, and that we were going to have these kinds of infectious diseases in a global community that we have to be prepared to handle. And we didn’t get ourselves as ready, in most cases, as we should have. There have been all these cuts to the C.D.C. budget, and the person who was the Ebola czar no longer exists in the new Administration.

And it’s not just this Administration. But the thing about this Administration that perhaps worries me the most is a fundamental lack of respect for science and the facts. Managing the crisis from a public-relations perspective and an economic, Dow Jones perspective are important, but they shouldn’t be fudging the facts. And that’s the piece that makes me feel most concerned—and not just as an ethicist. And then, of course, I want to see public education and information that’s forthright and helps people get the treatment that they need. But the disrespect for the public, and not providing honest information, is . . . yeah, that’s pretty disconcerting.

SOURCE

https://www.newyorker.com/news/q-and-a/the-medical-ethics-of-the-coronavirus-crisis

See more on this and #COVID19 on this Online Open Access Journal at our Coronavirus Portal at

https://pharmaceuticalintelligence.com/coronavirus-portal/

Read Full Post »

Live Notes from @HarvardMed Bioethics: Authors Jerome Groopman, MD & Pamela Hartzband, MD, discuss Your Medical Mind

Writer: Stephen J. Williams, Ph.D.

As part of the Harvard Medical School Center for Bioethics Program’s Critical Reading of Contemporary Books in Bioethics Series, author, clinician, and professor Jerome Groopman, MD, and Pamela Hartzband, MD, gave an online discussion of their book “Your Medical Mind.” The Contemporary Authors in Bioethics series brings together authors and the community to discuss books that explore new and developing topics in the field. The event was held as an online Zoom meeting on March 26, 2020 at 5 pm EST and could be followed on Twitter using #HarvardBioethics.  A recording of the discussion will be made available at the Harvard Med School Center for Bioethics.

 

Available at Amazon: From the Amazon book description:

An entirely new way to make the best medical decisions.

Making the right medical decisions is harder than ever. We are overwhelmed by information from all sides—whether our doctors’ recommendations, dissenting experts, confusing statistics, or testimonials on the Internet. Now Doctors Groopman and Hartzband reveal that each of us has a “medical mind,” a highly individual approach to weighing the risks and benefits of treatments.  Are you a minimalist or a maximalist, a believer or a doubter, do you look for natural healing or the latest technology?  The authors weave vivid narratives of real patients with insights from recent research to demonstrate the power of the medical mind. After reading this groundbreaking book, you will know how to arrive at choices that serve you best.

 

Doctors Groopman and Hartzband began the discussion by recapping medical research studies that had reported conflicting results and medical panels that had reversed their recommendations. These included studies on the benefits of statin therapy in cholesterol management, studies on whether Vitamin D therapy is beneficial for postmenopausal women, the ongoing controversy over how frequently women should get mammograms, and the predictive value of Prostate Specific Antigen (PSA) for prostate cancer screening. The authors singled out the reports and panels reviewing the PSA data, in which the same medical panel first came out in support of using PSA levels to screen for prostate cancer and then later, after reconvening, recommended that PSA was not useful for mass prostate cancer screening.

In fact, both authors were completely surprised by the diametrically opposed views within and between panels, given that similar data had been presented to those medical professionals.

The authors then asked: Why would the same medical panel reverse its decision, and, more importantly, why are there such disparate conclusions from the same medical data sets, leading to varied clinical decision-making?

In general, Drs. Groopman and Hartzband asked: How do physicians and patients make their decisions?

To answer this, they looked at the work Daniel Bernoulli had done to model the economics of risk aversion in the marketplace. Bernoulli’s theorem correlated market expectation with probability and outcomes

expectation = probability x utility of outcome

However, in medicine, one can measure probability (or risk) but it is very hard to measure utility (which is the value or worth of the outcome).

For example, a person born blind who is offered a risky procedure to regain sight values their quality of life from their own perspective and might feel that, as their life is worthwhile as it is, they would not undergo the risky procedure. However, a person who had suddenly lost their sight might value sight more and be willing to undergo the risky procedure.

Three methods are used to put a value on utility (the worth of an outcome) with regard to medical decisions:

  1. Linear scale: rate the health state on a scale from 0 (death) to 1 (full health)
  2. Time trade-off: e.g., how many years of life would you give up to live in better health
  3. Standard gamble: e.g., what risk would you accept for a chance at a cure (“let’s try it”)

All of these methods, however, are flawed, because one cannot know one’s future medical condition (e.g., new information about the disease may emerge) and because people’s values and perceptions change over time. (A small numeric sketch of the expectation formula follows below.)
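To make the expectation formula concrete, here is a minimal sketch in Python. The probabilities and utility values are hypothetical illustrations only, not figures from the authors’ talk.

# Minimal sketch of "expectation = probability x utility of outcome",
# using hypothetical numbers for a risky procedure versus declining it.

def expected_utility(outcomes):
    """Sum probability * utility over mutually exclusive outcomes."""
    return sum(p * u for p, u in outcomes)

# Hypothetical risky procedure: 70% success (utility 0.9 on a 0-to-1 scale),
# 25% no change (utility 0.5), 5% serious complication (utility 0.1).
procedure = [(0.70, 0.9), (0.25, 0.5), (0.05, 0.1)]

# Declining: the patient keeps the current health state. One patient may
# value it at 0.5; another (like the person born blind in the example above)
# may value the same state at 0.8 because life feels worthwhile as it is.
decline_low_value = [(1.0, 0.5)]
decline_high_value = [(1.0, 0.8)]

print(f"{expected_utility(procedure):.2f}")           # 0.76
print(f"{expected_utility(decline_low_value):.2f}")   # 0.50 -> the procedure looks better
print(f"{expected_utility(decline_high_value):.2f}")  # 0.80 -> declining looks better

The arithmetic is trivial; the hard part, as the authors emphasized, is assigning the utility values, which is what the three elicitation methods above attempt, and why the resulting decisions differ between individuals and shift over time.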

Examples of how these methods are used in medical decision-making include:

  • In the United Kingdom, the system uses a time trade-off method to determine value and the appropriate course of action, which may inadvertently result in rationed care
  • In the United States, the medical community uses the time trade-off to determine cost-effectiveness

 

Therefore, after conducting multiple interviews with patients and physicians, Drs. Groopman and Hartzband were able to categorize medical decision-making into groups of mindsets:

  1. Maximalist: proactive behavior; wants to stay ahead of the curve
  2. Minimalist: believes less intervention is more; more hesitant to try any suggested therapy
  3. Naturalist: more prone to choose nature-based therapies or home remedies
  4. Tech-oriented: wants to try the latest therapies and is more apt to trust branded and FDA-approved therapeutics
  5. Believer: trusts the physician’s suggestions (and, for physicians, trusts medical panels’ recommendations)
  6. Doubter: naturally inquisitive and more prone to investigate the risks and benefits of any suggested therapy

The authors also identified many Cognitive Traps that both physicians and patients may fall into including:

  • Relative versus absolute numbers: for instance, emphasizing one kind of number without regard to context, such as looking at population-level disease numbers without taking individual risk into consideration (a numeric sketch follows this list)
  • Availability: the availability, or lack, of information; the authors noted that whether you fall into this trap depends on whether you are a Minimalist or a Maximalist
  • Framing: for example, when people talk to others about their conditions and hear stories about other people’s treatments and conditions, i.e., mainly anecdotal evidence
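As an illustration of the first trap, the short Python sketch below uses hypothetical trial numbers (assumptions for illustration only, not data cited by the authors) to show how the same result sounds very different when framed in relative versus absolute terms.

# Hypothetical trial: the same result framed as a relative versus an
# absolute risk reduction.
control_risk = 0.02   # 2% of untreated patients experience the event
treated_risk = 0.01   # 1% of treated patients experience the event

absolute_reduction = control_risk - treated_risk        # 1 percentage point
relative_reduction = absolute_reduction / control_risk  # "cuts risk in half"
number_needed_to_treat = 1 / absolute_reduction         # patients treated per event avoided

print(f"Relative risk reduction: {relative_reduction:.0%}")      # 50%
print(f"Absolute risk reduction: {absolute_reduction:.1%}")      # 1.0%
print(f"Number needed to treat:  {number_needed_to_treat:.0f}")  # 100

A headline of “cuts risk by 50%” and a headline of “reduces risk by one percentage point” describe the same data; without the patient’s individual baseline risk, neither framing alone supports a good decision.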

Stories can be helpful, but they sometimes inflate our estimates of risk or benefit, so framing the information is very important for both the patient and the physician (even for doctors as patients).

Both authors have noticed a big shift in the US toward minimalism, probably because of the rising costs of healthcare.

How do these mindsets affect the patient-physician relationship?

A University of Michigan study revealed that patients who would be characterized as maximalists pushed their physicians to do more therapy and were more prone to seek outside advice.

Physicians need to listen to and understand their patients during the patient’s first visit and determine which medical mindset the patient has.

About the authors:

Jerome Groopman, M.D. is the Dina and Raphael Recanati Professor of Medicine at Harvard Medical School, Chief of Experimental Medicine at Beth Israel Deaconess Medical Center, and one of the world’s leading researchers in cancer and AIDS. He is a staff writer for The New Yorker and has written for The New York Times, The Wall Street Journal, The Washington Post and The New Republic. He is author of The Measure of Our Days (1997), Second Opinions (2000), Anatomy of Hope (2004), How Doctors Think (2007), and the recently released Your Medical Mind.

Dr. Pamela Hartzband is an Assistant Professor at the Harvard Medical School and Attending Physician in the Division of Endocrinology at the Beth Israel Deaconess Medical Center in Boston. She specializes in disorders of the thyroid and pituitary glands. A magna cum laude graduate of Radcliffe College, Harvard University, she received her M.D. from Harvard Medical School. She served her internship and residency in internal medicine at the Massachusetts General Hospital, and her specialty fellowships in endocrinology and metabolism at UCLA.

More articles on BioEthics and Patient experiences in this Online Open Access Journal Include:

Ethics Behind Genetic Testing in Breast Cancer: A Webinar by Laura Carfang of survivingbreastcancer.org

Tweets and Re-Tweets by @Pharma_BI ‏and @AVIVA1950 at 2019 Petrie-Flom Center Annual Conference: Consuming Genetics: Ethical and Legal Considerations of New Technologies, Friday, May 17, 2019 from 8:00 AM to 5:00 PM EDT @Harvard_Law

Innovation + Technology = Good Patient Experience

Drivers of Patient Experience

Factors in Patient Experience

Patient Experience Survey

Please also see our offering on Amazon at https://www.amazon.com/dp/B076HGB6MZ

“The VOICES of Patients, Hospital CEOs, Health Care Providers, Caregivers and Families: Personal Experience with Critical Care and Invasive Medical Procedures,”

Read Full Post »

US Responses to Coronavirus Outbreak Expose Many Flaws in Our Medical System

Curator: Stephen J. Williams, Ph.D.

The coronavirus pandemic has affected almost every country on every continent; however, after months of novel COVID-19 cases, it has become apparent that the varied clinical responses (and outcomes) in this epidemic have laid bare some of the strong and weak aspects of both our worldwide capability to mount a coordinated global response to infectious outbreaks and individual countries’ responses to their localized epidemics.

 

Some nations, like Israel, have initiated coordinated, system-wide government-private-health action plans and have shown success in limiting both new cases and COVID-19-related deaths.  After the initial outbreak in Wuhan, China, the Chinese government closed borders and initiated health-related measures, including the building of new hospitals. As of this writing, Wuhan has experienced no new cases of COVID-19 for two straight days.

 

However, the response in the US has been perplexing and has highlighted some glaring problems that this crisis has augmented. In my view, formulated after discussions with members in the field, these issues center on three major areas of deficiency in the United States that have hindered a rapid and successful response to this current crisis and to potential future crises of this nature.

 

 

  1. The mistrust or misunderstanding of science in the United States
  2. Lack of communication and connection between patients and those involved in the healthcare industry
  3. Socio-geographical inequalities within the US healthcare system

 

1. The mistrust or misunderstanding of science in the United States

 

For the past decade, anyone involved in science, whether as an active bench scientist, a regulatory scientist, a scientist involved in science and health policy, or an environmental scientist, can attest to the constant pressure not only to defend their profession but also to defend the entire scientific process and community from an onslaught of misinformation, mistrust, and anxiety toward the field of science.  This can be seen in many editorials in scientific publications, including the journal Science and Scientific American (as shown below)

 

Stepping Away from Microscopes, Thousands Protest War on Science

Boston rally coincides with annual American Association for the Advancement of Science (AAAS) conference and is a precursor to the March for Science in Washington, D.C.

by Lauren McCauley, staff writer

Responding to the troubling suppression of science under the Trump administration, thousands of scientists, allies, and frontline communities are holding a rally in Boston’s Copley Square on Sunday.


 

“Science serves the common good,” reads the call to action. “It protects the health of our communities, the safety of our families, the education of our children, the foundation of our economy and jobs, and the future we all want to live in and preserve for coming generations.”

It continues: 

But it’s under attack—both science itself, and the unalienable rights that scientists help uphold and protect. 

From the muzzling of scientists and government agencies, to the immigration ban, the deletion of scientific data, and the de-funding of public science, the erosion of our institutions of science is a dangerous direction for our country. Real people and communities bear the brunt of these actions.

The rally was planned to coincide with the annual American Association for the Advancement of Science (AAAS) conference, which draws thousands of science professionals, and is a precursor to the March for Science in Washington, D.C. and in cities around the world on April 22.

 

Source: https://www.commondreams.org/news/2017/02/19/stepping-away-microscopes-thousands-protest-war-science

https://images.app.goo.gl/UXizCsX4g5wZjVtz9

 

https://www.washingtonpost.com/video/c/embed/85438fbe-278d-11e7-928e-3624539060e8

 

 

The American Association for Cancer Research (AACR) also held marches for public awareness of science and meaningful science policy at its annual conference in Washington, D.C. in 2017 (see here for free recordings of some talks, including Joe Biden’s announcement of the Cancer Moonshot program) and sponsored events such as the Rally for Medical Research.  This patient advocacy effort is led by cancer clinicians and scientific researchers to rally public support for cancer research for the benefit of those affected by the disease.

Source: https://leadingdiscoveries.aacr.org/cancer-patients-front-and-center/

 

 

However, some feel that scientists are being too sensitive and that science policy and science-based decision-making may not be under that much of a threat in this country. Yet even those who think there is no actual war on science or on scientists recognize that the public is not engaged in science and may not be sympathetic to the scientific process or trust scientists’ opinions.

 

   

From Scientific American: Is There Really a War on Science? People who oppose vaccines, GMOs and climate change evidence may be more anxious than antagonistic

 

Certainly, opponents of genetically modified crops, vaccinations that are required for children and climate science have become louder and more organized in recent times. But opponents typically live in separate camps and protest single issues, not science as a whole, said science historian and philosopher Roberta Millstein of the University of California, Davis. She spoke at a standing-room only panel session at the American Association for the Advancement of Science’s annual meeting, held in Washington, D.C. All the speakers advocated for a scientifically informed citizenry and public policy, and most discouraged broadly applied battle-themed rhetoric.

 

Source: https://www.scientificamerican.com/article/is-there-really-a-war-on-science/

 

In general, there appears to be a major public misunderstanding of the scientific process and the principles of scientific discovery, which may be the fault of miscommunication by scientists or of agendas aimed at subverting or misdirecting public policy decisions away from scientific discourse and investigation.

 

This can lead to an information vacuum which, in this age of rapid social media communication, can quickly perpetuate misinformation.

 

This perpetuation of misinformation was very evident in a Twitter discussion with Dr. Eric Topol, M.D. (cardiologist and Founder and Director of the Scripps Research Translational Institute) about the US President’s tweet on the use of the antimalarial drug hydroxychloroquine, which referenced a single study in the International Journal of Antimicrobial Agents.  The Twitter thread became a sort of “scientific journal club,” with input from international scientists discussing and critiquing the results in the paper.

 

Please note that when we scientists CRITIQUE a paper, it does not mean we CRITICIZE it.  A critique is merely an in-depth analysis of the results and conclusions, with an open discussion of the paper.  This is part of the normal peer-review process.

 

Below is the original Tweet by Dr. Eric Topol as well as the ensuing tweet thread

 

https://twitter.com/EricTopol/status/1241442247133900801?s=20

 

Within the tweet thread, some of the limitations and study-design flaws of the referenced paper were discussed, leading the scientists in this impromptu discussion to conclude that the study could not reasonably establish that hydroxychloroquine was a reliable therapeutic for this coronavirus strain.

 

The lesson: The public has to realize CRITIQUE does not mean CRITICISM.

 

Scientific discourse has to occur to allow for the proper critique of results.  When this is allowed, science becomes better and more robust, and we protect ourselves from heading down an incorrect path, which in this case could have major impacts on clinical outcomes.

 

 

2.  Lack of communication and connection between patients and those involved in the healthcare industry

 

In normal times, it is imperative for the patient-physician relationship to be intact so that the physician can communicate proper information to the patient during and after therapy or care.  In these critical times, this relationship and good communication skills become even more important.

 

Recently, I have had multiple communications, through Twitter, Facebook, and other social media outlets, with cancer patients, cancer advocacy groups, and cancer survivorship forums concerning their risks of getting infected with the coronavirus and how they should handle various aspects of their therapy, whether they were currently undergoing therapy or just about to start chemotherapy.  This made me realize that there was a huge subset of patients who were not receiving all the information and support they needed; namely, patients who are immunocompromised.

 

These patients include:

  1. Cancer patients undergoing, or about to start, chemotherapy
  2. Patients taking immunosuppressive drugs: organ transplant recipients, patients with autoimmune diseases, multiple sclerosis patients
  3. Patients with immunodeficiency disorders

 

These concerns prompted me to write a posting curating the guidance to cancer patients from National Cancer Institute (NCI)-designated cancer centers concerning their risk from COVID-19 (which can be found here).

 

Surprisingly, only 14 of the 51 US NCI-designated Cancer Centers had posted guidance (either their own or from organizations such as the NCI or the National Comprehensive Cancer Network (NCCN)).  Most of the guidance to patients stemmed from a paper written by Dr. Markham of the Fred Hutchinson Cancer Center in Seattle, Washington, the first major US city impacted by COVID-19.

 

I was also surprised at the reactions to this posting, with patients and oncologists enthusiastic to discuss concerns around the coronavirus problem.  This led to additional contact with patients and oncologists who, to my surprise, are not having these conversations with each other or are totally confused about courses of action during this pandemic.  There was a true need for both parties, patients/caregivers and physicians/oncologists, to be able to communicate with each other and disseminate good information.

 

Last night there was a Twitter conversation, #OTChat, sponsored by @OncologyTimes.  A few tweets are included below

https://twitter.com/OncologyTimes/status/1242611841613864960?s=20

https://twitter.com/OncologyTimes/status/1242616756658753538?s=20

https://twitter.com/OncologyTimes/status/1242615906846547978?s=20

 

The Lesson: Rapid communication of vital information in times of stress is crucial to maintaining a good patient/physician relationship and preventing misinformation.

 

3.  Socio-geographical Inequalities in the US Healthcare System

It has become very clear that the US healthcare system is fractured, and multiple inequalities (based on race, sex, geography, socio-economic status, and age) exist across the whole healthcare system.  These inequalities are exacerbated in times of stress, especially when access to care is limited.

 

An example:

 

On May 12, 2015, an Amtrak Northeast Regional train from Washington, D.C. bound for New York City derailed and wrecked on the Northeast Corridor in the Port Richmond neighborhood of Philadelphia, Pennsylvania. Of 238 passengers and 5 crew on board, 8 were killed and over 200 injured, 11 critically. The train was traveling at 102 mph (164 km/h) in a 50 mph (80 km/h) zone of curved tracks when it derailed.[3]

Some of the passengers had to be extricated from the wrecked cars. Many of the passengers and local residents helped first responders during the rescue operation. Five local hospitals treated the injured. The derailment disrupted train service for several days. 

(Source Wikipedia https://en.wikipedia.org/wiki/2015_Philadelphia_train_derailment)

What was not reported were the difficulties that first responders, namely paramedics, had in finding emergency rooms capable of taking on the massive load of patients.  In the years prior to this accident, several hospitals had, for financial reasons, closed their emergency rooms or reduced them in size. In addition, only two hospitals in Philadelphia were capable of accepting gunshot victims (Temple University Hospital, the closest to the derailment, was one of them). This was important because the Temple University ER, located in North Philadelphia, is usually very busy on any given night.  The stress on the local health system revealed how one disaster could easily overburden many hospitals.

 

Over the past decade many hospitals, especially rural hospitals, have been shuttered or consolidated into bigger health systems.  The graphic below shows this trend.

From Bloomberg: US Hospital Closings Leave Patients with Nowhere to go

 

 

https://images.app.goo.gl/JdZ6UtaG3Ra3EA3J8

 

Note the huge swath of hospital closures in the Midwest, especially in rural areas.  This has become an ongoing problem as the healthcare system deals with rising costs.

 

Lesson: An epidemic stresses an already stressed US healthcare system

 

Please see our Coronavirus Portal at

https://pharmaceuticalintelligence.com/coronavirus-portal/

 

for more up-to-date scientific and clinical information, as well as personal stories, videos, interviews, and economic impact analyses

and @pharma_BI

Read Full Post »

Older Posts »
