Funding, Deals & Partnerships: BIOLOGICS & MEDICAL DEVICES; BioMed e-Series; Medicine and Life Sciences Scientific Journal – http://PharmaceuticalIntelligence.com
Eight Subcellular Pathologies driving Chronic Metabolic Diseases – Methods for Mapping Bioelectronic Adjustable Measurements as potential new Therapeutics: Impact on Pharmaceuticals in Use
In this curation we wish to present two breakthrough goals:
Goal 1:
Exposition of a new direction of research leading to a more comprehensive understanding of Metabolic Dysfunctional Diseases, which are implicated in the emergence of the two leading causes of human mortality in the world in 2023: (a) Cardiovascular Diseases and (b) Cancer
Goal 2:
Development of Methods for Mapping Bioelectronic Adjustable Measurements as potential new Therapeutics for these eight subcellular causes of chronic metabolic diseases. It is anticipated that this will have an impact on the future of Pharmaceuticals in use, a change from the current treatment protocols for Metabolic Dysfunctional Diseases.
According to Dr. Robert Lustig, M.D., an American pediatric endocrinologist and Professor Emeritus of Pediatrics in the Division of Endocrinology at the University of California, San Francisco, where he specialized in neuroendocrinology and childhood obesity, there are eight subcellular pathologies that drive chronic metabolic diseases.
These eight subcellular pathologies cannot be measured at the present time.
In this curation we will attempt to explore methods of measurement for each of these eight pathologies by harnessing the promise of the emerging field known as Bioelectronics.
The eight currently unmeasurable subcellular pathologies that drive chronic metabolic diseases
Glycation
Oxidative Stress
Mitochondrial dysfunction [beta-oxidation Ac CoA malonyl fatty acid]
Insulin resistance/sensitivity [more important than BMI], a known driver of cancer development
Membrane instability
Inflammation in the gut [mucin layer and tight junctions]
Epigenetics/Methylation
Autophagy [AMPKbeta1 improvement in health span]
Diseases that are not Diseases: no drugs exist for them; only diet modification will help
Robert Lustig, M.D. on the Subcellular Processes That Belie Chronic Disease
These eight Subcellular Pathologies driving Chronic Metabolic Diseases are becoming our focus for exploration of the promise of Bioelectronics for two pursuits:
Will Bioelectronics be deemed helpful in measuring each of the eight pathological processes that underlie and drive the chronic metabolic syndrome(s) and disease(s)?
IF we are able to suggest new measurements for currently unmeasurable, health-harming processes, THEN we will attempt to conceptualize new therapeutic targets and new modalities for therapeutics delivery – WE ARE HOPEFUL
In the Bioelectronics domain we are inspired by the work of the following three research sources:
Michael Levin is an American developmental and synthetic biologist at Tufts University, where he is the Vannevar Bush Distinguished Professor. Levin is a director of the Allen Discovery Center at Tufts University and the Tufts Center for Regenerative and Developmental Biology. (Wikipedia)
THE VOICE of Dr. Justin D. Pearlman, MD, PhD, FACC
PENDING
THE VOICE of Stephen J. Williams, PhD
Ten Takeaway Points from Dr. Lustig’s talk on the role of diet in the incidence of Type II Diabetes
25% of US children have fatty liver
Type II diabetes can be manifested from fatty liver; 151 million people worldwide are affected, projected to rise to 568 million in 7 years
A common myth is that diabetes is due to being overweight, with the overweight condition driving the metabolic disease
There is a trend of ‘lean’ diabetes, or diabetes in lean people; therefore body mass index is not a reliable biomarker of diabetes risk
Thirty percent of ‘obese’ people just have high subcutaneous fat; the visceral fat is more problematic
There are people who are ‘fat’ but insulin sensitive because they have growth hormone receptor defects; this points to issues in the metabolic state other than insulin, and potentially to the insulin-like growth factors
At any BMI, some patients are insulin sensitive while others are resistant
Visceral fat accumulation may be more due to chronic stress condition
Fructose can decrease liver mitochondrial function
A methionine and choline deficient diet can lead to rapid NASH development
OpenAI and ChatGPT face unique legal challenges over Copyright Laws
Reporter: Stephen J. Williams, PhD
In previous weeks on this page and on the sister page ChatGPT applied to Cancer & Oncology, a comparison between ChatGPT, OpenAI, and Google large language model-based search reveals a major difference between the algorithms with respect to citation and author credit. In essence, while Google returns a hyperlink to the information used to form an answer, ChatGPT and OpenAI are agnostic in crediting or citing the sources of information used to generate answers to queries. With ChatGPT, the source data, or more specifically the training set used for the AI algorithm, is never properly cited in the query results.
This, as outlined below, is creating a major problem when it comes to copyright law and intellectual property. Last week a major lawsuit was filed over incorrect citation, referencing, and attribution of ownership of intellectual property.
As Miles Klee reports in Rolling Stone:
“OpenAI faces allegations of privacy invasion and violating authors’ copyright — but this may be just the tip of the iceberg”
The burgeoning AI industry has just crossed another major milestone, with two new class-action lawsuits calling into question whether this technology violates privacy rights, scrapes intellectual property without consent and negatively affects the public at large. Experts believe they’re likely to be the first in a wave of legal challenges to companies working on such products. Both suits were filed on Wednesday and target OpenAI, a research lab consisting of both a nonprofit arm and a corporation, over ChatGPT software, a “large language model” capable of generating human-like responses to text input. One, filed by Clarkson, a public interest law firm, is wide-ranging and invokes the potentially “existential” threat of AI itself. The other, filed by the Joseph Saveri Law Firm and attorney Matthew Butterick, is focused on two established authors, Paul Tremblay and Mona Awad, who claim that their books were among those ChatGPT was trained on — a violation of copyright, according to the complaint. (Saveri and Butterick are separately pursuing legal action against OpenAI, GitHub and Microsoft over GitHub Copilot, an AI-based coding product that they argue “appears to profit from the work of open-source programmers by violating the conditions of their open-source licenses.”)
Saveri and Butterick’s latest suit goes after OpenAI for direct copyright infringement as well as violations of the Digital Millennium Copyright Act (DMCA). Tremblay (who wrote the novel The Cabin at the End of the World) and Awad (author of 13 Ways of Looking at a Fat Girl and Bunny) are the representatives of a proposed class of plaintiffs who would seek damages as well as injunctive relief in the form of changes to ChatGPT. The filing includes ChatGPT’s detailed responses to user questions about the plots of Tremblay’s and Awad’s books — evidence, the attorneys argue, that OpenAI is unduly profiting off of infringed materials, which were scraped by the chat bot. While the suits venture into uncharted legal territory, they were more or less inevitable, according to those who research AI tech and privacy or practice law around those issues.
“[AI companies] should have and likely did expect these types of challenges,” says Ben Winters, senior counsel at the Electronic Privacy Information Center and head of the organization’s AI and Human Rights Project. He points out that OpenAI CEO Sam Altman mentioned a few prior “frivolous” suits against the company during his congressional testimony on artificial intelligence in May. “Whenever you create a tool that implicates so much personal data and can be used so widely for such harmful and otherwise personal purposes, I would be shocked there is not anticipated legal fire,” Winters says. “Particularly since they allow this sort of unfettered access for third parties to integrate their systems, they end up getting more personal information and more live information that is less publicly available, like keystrokes and browser activity, in ways the consumer could not at all anticipate.”
They say that OpenAI defendants “profit richly” from the use of their copyrighted materials and yet the authors never consented to the use of their copyrighted materials without credit or compensation.
ChatGPT lawsuit says OpenAI has previously utilized illegal ‘shadow libraries’ for AI training datasets
Although many types of material are used to train large language models, “books offer the best examples of high-quality longform writing,” according to the ChatGPT lawsuit.
OpenAI has previously utilized books for its AI training datasets, including unpublished novels (the majority of which were under copyright) available on a website that provides the materials for free. The plaintiffs suggest that OpenAI may have utilized copyrighted materials from “flagrantly illegal shadow libraries.”
Tremblay and Awad note that OpenAI’s March 2023 paper introducing GPT-4 failed to include any information about the training dataset. However, they say that ChatGPT was able to generate highly accurate summaries of their books when prompted, suggesting that their copyrighted material was used in the training dataset without their consent.
They filed the ChatGPT class action lawsuit on behalf of themselves and a proposed class of U.S. residents and entities that own a U.S. copyright for any work used as training data for the OpenAI language models during the class period.
Earlier this year, a tech policy group urged federal regulators to block OpenAI’s GPT-4 AI product because it does not meet federal standards.
What is the general consensus among legal experts on generative AI and copyright?
Given the hype around ChatGPT and the speculation that it could be widely used, it is important to understand the legal implications of the technology. First, do copyright owners of the text used to train ChatGPT have a copyright infringement claim against OpenAI? Second, can the output of ChatGPT be protected by copyright and, if so, who owns that copyright?
To answer these questions, we need to understand the application of US copyright law.
Copyright Law Basics
Based on rights in Article I, Section 8 of the Constitution, Congress passed the first copyright law in 1790. It has been amended several times. Today, US copyright law is governed by the Copyright Act of 1976. This law grants authors of original works exclusive rights to reproduce, distribute, and display their work. Copyright protection applies from the moment of creation, and, for most works, the copyright term is the life of the author plus 70 years after the author’s death. Under copyright law, the copyright holder has the exclusive right to make copies of the work, distribute it, display it publicly, and create derivative works based on it. Others who want to use the work must obtain permission from the copyright holder or use one of the exceptions to copyright law, such as fair use.
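The "life of the author plus 70 years" term described above can be illustrated with a small sketch. This is a simplified model (assuming a single human author and a work created on or after January 1, 1978, the effective date of the 1976 Act); actual term calculations have many special cases for older, anonymous, and corporate works. The function name `public_domain_year` is ours, for illustration only.

```python
# Simplified sketch: when does a post-1977 work by a single human author
# enter the public domain under the "life plus 70 years" rule?
# Copyright terms run through December 31 of their final year, so the
# work enters the public domain on January 1 of the following year.

def public_domain_year(author_death_year: int, term_years: int = 70) -> int:
    """Return the first full year the work is in the public domain."""
    return author_death_year + term_years + 1

# An author who died in 1950: works are protected through the end of
# 2020 and enter the public domain on January 1, 2021.
print(public_domain_year(1950))  # 2021
```

The "+ 1" reflects the year-end convention: the term expires at the close of the 70th year after death, not mid-year.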
The purpose of copyright law is to incentivize authors to create novel and creative works. It does this by granting authors exclusive rights to control the use of their work, thus allowing them to financially benefit from their works. Copyright law also encourages the dissemination of knowledge by allowing others to use copyrighted works under certain conditions, such as through the fair use doctrine, which allows for limited use of copyrighted material for the purposes of criticism, commentary, news reporting, teaching, scholarship, or research. By protecting the rights of authors and creators while also allowing for the use of copyrighted works for the public benefit, copyright law aims to strike a balance between the interests of authors and the public.
Inputs – Training ChatGPT with Copyrighted Material
ChatGPT was trained on a large training dataset sourced from the internet, including a vast amount of text from websites, articles, books, social media posts, and academic papers. Importantly, ChatGPT was not aware of the source of the data; it only saw the text and learned the patterns and relationships between the words, phrases and sentences. The vast majority of the text used to train ChatGPT was certainly subject to copyright protection—except for text that is in the public domain, like facts or discoveries, or works whose term of copyright protection has ended.
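The point that the model "only saw the text" can be made concrete with a toy sketch. This is not OpenAI's actual pipeline; a real large language model learns via neural network training, but even this minimal bigram counter shows the structural issue: the source label is discarded at training time, so nothing in the resulting model can say which document taught it a given association.

```python
# Toy illustration (not OpenAI's method): training keeps word-to-word
# statistics from raw text but retains no record of provenance.
from collections import Counter, defaultdict

# Hypothetical corpus: (source, text) pairs.
corpus = [
    ("site_a.example", "the cat sat on the mat"),
    ("site_b.example", "the cat chased the dog"),
]

# Only the text enters the "model"; the source label is thrown away.
bigrams = defaultdict(Counter)
for _source, text in corpus:
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

# The model can predict a likely next word...
print(bigrams["the"].most_common(1)[0][0])  # "cat"
# ...but it cannot cite which document that association came from:
# no source information survives the training step.
```

This is why post-hoc citation is hard for such systems: attribution data would have to be carried through training deliberately, and in the standard setup it simply is not.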
Did the ChatGPT training process violate the copyright of the original authors? To answer that question we must determine: (1) whether, during the training process, OpenAI copied the underlying works—because copyright infringement requires an act of copying works that are protected by copyright; (2) whether the output of ChatGPT is a “derivative work” under copyright law; and (3) whether such copying or production of a derivative work falls under the “fair use” exception of copyright law.
Copying During Training
We are not aware of publicly available information regarding whether, during the training process, the training text for ChatGPT was copied or simply referenced. Therefore, we asked ChatGPT and received this answer:
Prompt: During the ChatGPT training process, was the underlying training text copied into a OpenAI database?
Answer: During the training process, text from various sources is used to train the ChatGPT model. It is likely that the text was copied into an internal database at OpenAI for the purpose of training the model. However, the training text is typically preprocessed to remove any sensitive information such as personal identifiers, copyrighted text, etc. In addition, the text is also used to train the model and not stored in a database. The model uses the text to learn patterns and structure of the language and once the training is done, it doesn’t require the text anymore.
So, it seems that during the training process, text was copied. It is also interesting that ChatGPT stated that the training text was “preprocessed” to remove any copyrighted text. That seems highly unlikely since otherwise nearly all text would have been removed.
Is ChatGPT Output a Derivative Work?
Under US copyright law, the owner of a copyright has the exclusive right “to prepare derivative works based upon the copyrighted work.” A “derivative work” is “a work based upon one or more preexisting works.” ChatGPT is trained on preexisting works and generates output based on that training.
As Daniel Gervais, a professor at Vanderbilt Law School who specializes in intellectual property law, says, the definition of a derivative work under copyright law “could loosely be used as a definition of machine learning when applied to the creation of literary and artistic productions because AI machines can produce literary and artistic content (output) that is almost necessarily ‘based upon’ a dataset consisting of preexisting works.” Under this view, it seems that all ChatGPT output is a derivative work under copyright law.
On a related point, it is worth noting that in producing its output, ChatGPT is not “copying” anything. ChatGPT generates text based on the context of the input and the words and phrase patterns it was trained on. ChatGPT is not “copying” and then changing text.
What About Fair Use?
Let’s assume that the underlying text was copied in some way during the ChatGPT training process. Let’s further assume that outputs from ChatGPT are, at least sometimes, derivative works under copyright law. If that is the case, do copyright owners of the original works have a copyright infringement claim against OpenAI? Not if the copying and the output generation are covered by the doctrine of “fair use.” If a use qualifies as fair use, then actions that would otherwise be prohibited would not be deemed an infringement of copyright.
In determining whether the use made of a work in any particular case is a fair use, the factors include:
The purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes.
The nature of the copyrighted work.
The amount and substantiality of the portion used in relation to the copyrighted work as a whole.
The effect of the use upon the potential market for or value of the copyrighted work.
In this case, assuming OpenAI copied copyrighted text as part of the ChatGPT training process, such copying was not for a commercial purpose and had no economic impact on the copyright owner. Daniel Gervais says “it is much more likely than not” that training systems on copyrighted data will be covered by fair use.
In determining if a commercial use will be considered “fair use,” the courts will primarily look at the scope and purpose of the use and the economic impact of such use. Does the use in question change the nature of the underlying copyright material in some material way (described as a “transformative” use) and does it economically impact the original copyright holder?
Without a specific example, it is difficult to determine whether a given output from ChatGPT would be fair use. Given that ChatGPT does not copy and has been trained on millions of underlying works, it seems likely that most output would be fair use—without using significant portions of any one protected work. In addition, because of the vast corpus of text used to train ChatGPT, it seems unlikely that ChatGPT output will have a negative economic impact on any one copyright holder. But, given the capabilities of ChatGPT, that might not always be the case.
Imagine if you asked ChatGPT to “Write a long-form, coming of age, story in the style of J.K. Rowling, using the characters from Harry Potter and the Chamber of Secrets.” In that case, it would seem that the argument for fair use would be weak. This story could be sold to the public and could conceivably have a negative economic impact on J.K. Rowling. A person who wants to read a story about Harry Potter might buy this story instead of buying a book by J.K. Rowling.
Finally, it is worth noting that OpenAI is a non-profit entity that is an “AI research and deployment company.” It seems that OpenAI is the type of research company, and ChatGPT is the type of research project, that would have a strong argument for fair use. This practice has been criticized as “AI Data Laundering”: shielding commercial entities from liability by using a non-profit research institution to create the data set and train AI engines that might later be used in commercial applications.
Outputs – Can the Output of ChatGPT be Protected by Copyright
Is the output of ChatGPT protected by copyright law and, if so, who is the owner? As an initial matter, does the ChatGPT textual output fit within the definition of what is covered under copyright law: “original works of authorship fixed in any tangible medium of expression.”
The text generated by ChatGPT is the type of subject matter that, if created by a human, would be covered by copyright. However, most scholars have opined, and the US Copyright Office has ruled, that the output of generative AI systems like ChatGPT is not protectable under US copyright law, because a work must be an original, creative work of a human author.
In 2022, the US Copyright Office, ruling on whether a picture generated completely autonomously by AI could be registered as a valid copyright, stated “[b]ecause copyright law as codified in the 1976 Act requires human authorship, the [AI Generated] Work cannot be registered.” The U.S. Copyright Office has issued several similar statements, informing creators that it will not register copyright for works produced by a machine or computer program. The human authorship requirement of the US Copyright Office is set forth as follows:
The Human Authorship Requirement – The U.S. Copyright Office will register an original work of authorship, provided that the work was created by a human being. The copyright law only protects “the fruits of intellectual labor” that “are founded in the creative powers of the mind.” Trade-Mark Cases, 100 U.S. 82, 94 (1879).
While such policies are not binding on the courts, the stance by the US Copyright Office seems to be in line with the purpose of copyright law flowing from the Constitution: to incentivize humans to produce creative works by giving them a monopoly over their creations for a limited period of time. Machines, of course, need and have no such motivation. In fact, copyright law expressly allows a corporation or other legal entity to be the owner of a copyright under the “work made for hire” doctrine. However, to qualify as a work made for hire, the work must be either work prepared by an employee within the scope of his or her employment, or be prepared by a party who “expressly agrees in a written instrument signed by them that the work shall be considered a work made for hire.” Only humans can be employees and only humans or corporations can enter a legally binding contract—machines cannot.
Other articles of note in this Open Access Scientific Journal on ChatGPT and OpenAI include:
The Vibrant Philly Biotech Scene: Recent Happenings & Deals
Curator: Stephen J. Williams, Ph.D.
As the office and retail commercial real estate market has been drying up since the COVID pandemic, commercial real estate developers in the Philadelphia area have been turning to the health science industry to suit their lab space needs. This includes refurbishing old office space as well as new construction.
Gattuso secures $290M construction loan for life sciences building on Drexel campus
By Ryan Mulligan – Reporter, Philadelphia Business Journal
Dec 19, 2022
Gattuso Development Partners and Vigilant Holdings of New York have secured a $290 million construction loan for a major life sciences building set to be developed on Drexel University’s campus.
The funding comes from Houston-based Corebridge Financial, with an additional equity commitment from Boston-based Baupost Group, which is also a partner on the project. JLL’s Capital Markets group arranged the loan.
Plans for the University City project at 3201 Cuthbert St. carry a price tag of $400 million. The 11-story building will total some 520,000 square feet, making it the largest life sciences research and lab space in the city when it comes online.
The building at 3201 Cuthbert will rise on what had served as a recreation field used by Drexel and is located next to the Armory. Gattuso Development, which will lease the parcel from Drexel, expects to complete the project by fall 2024. Robert A.M. Stern Architects designed the building.
A rendering of a $400 million lab and research facility Drexel University and Gattuso Development Partners plan to build at 3201 Cuthbert St. in Philadelphia.
The building is 45% leased by Drexel and SmartLabs, an operator of life sciences labs. Drexel plans to occupy about 60,000 square feet, while SmartLabs will lease two floors totaling 117,000 square feet.
“We believe the project validates Philadelphia’s emergence as a global hub for life sciences research, and we are excited to begin construction,” said John Gattuso, the co-founder and president of Philadelphia-based Gattuso Development.
Ryan Ade, Brett Segal and Christopher Peck of JLL arranged the financing.
Tmunity CEO Usman Azam departing to lead ‘stealth’ NYC biotech firm
By John George – Senior Reporter, Philadelphia Business Journal
Feb 7, 2022
The CEO of one of Philadelphia’s oldest cell therapy companies is departing to take a new job in the New York City area.
Usman “Oz” Azam, who has been CEO of Tmunity Therapeutics since 2016, will lead an unnamed biotechnology company currently operating in stealth mode.
In a posting on his LinkedIn page, Azam said, “After a decade immersed in cell therapies and immuno-oncology, I am now turning my attention to a new opportunity, and will be going back to where I started my life sciences career in neurosciences.”
Tmunity, a University of Pennsylvania spinout, is looking to apply CAR T-cell therapy, which has proved to be successful in treating liquid cancers, for the treatment of solid tumors.
Last summer, Tmunity suspended clinical testing of its lead cell therapy candidate targeting prostate cancer after two patients in the study died. Azam, in an interview with the Business Journal in June, said the company, which had grown to about 50 employees since its launch in 2015, laid off an undisclosed number of employees as a result of the setback.
Azam said on LinkedIn he is still a big believer in CAR T-cell therapy, noting Tmunity co-founder Dr. Carl June and his colleagues at Penn just published in Nature the 10-year landmark clinical outcomes study with the first CD19 CAR-T patients and programs.
“It’s just the beginning,” he stated. “I’m excited about the prospect of so many new cell- and gene-based therapies emerging in the next five to 10 years to tackle many solid and liquid tumors, and I hope we all continue to see the remarkable impact this makes on patients and families around the world.”
Azam could not be reached for comment Monday. Tmunity has engaged a search firm to identify his successor.
Tmunity, which is based in Philadelphia, has its own manufacturing operations in East Norriton. Tmunity’s founders include June and fellow Penn cell therapy pioneer Bruce Levine, who led the development of a CAR T-cell therapy now marketed by Novartis as Kymriah, a treatment for certain types of blood cancers.
In therapy using CAR-T cells, a patient’s T cells — part of their immune system — are removed and genetically modified in the laboratory. After they are re-injected into a patient, the T cells are better able to attack and destroy tumors. CAR is an acronym for chimeric antigen receptor. Chimeric antigen receptors are receptor proteins that have been engineered to give T cells their improved ability to target tumors.
Jodie Harris is a Philadelphia native who has spent the last 15 years in the U.S. Department of Treasury.
By Ryan Mulligan – Reporter, Philadelphia Business Journal
The Philadelphia Industrial Development Corp. has tapped U.S. Department of Treasury veteran Jodie Harris to be its next president.
Harris succeeds Anne Bovaird Nevins, who spent 15 years in the organization and took over as president in January 2020 before stepping down at the end of last year. Executive Vice President Sam Rhoads has been interim president.
Harris, a Philadelphia native who currently serves as director of the Community Development Financial Institutions Fund for the Department of Treasury, was picked after a regional and national search and will begin her tenure as president on June 1. She becomes the 12th head of PIDC and the first African-American woman to lead the organization.
PIDC is a public-private economic development corporation founded by the city and the Chamber of Commerce for Greater Philadelphia in 1958. It mainly uses industrial and commercial real estate projects to attract jobs, foster business opportunities and spur overall community growth. The organization has spurred over $18.5 billion in financing across its 65 years.
In a statement, Harris said that it is “a critical time for Philadelphia’s economy.”
“I’m especially excited for the opportunity to lead such an important and impactful organization in my hometown of Philadelphia,” Harris said. “As head of the CDFI Fund, I know first-hand what it takes to drive meaningful, sustainable, and equitable economic growth, especially in historically underserved communities.”
Harris is a graduate of the University of Maryland and received an MBA and master of public administration from New York University. In the Treasury Department, Harris’ most recent work aligns with PIDC’s economic development mission. At the Community Development Financial Institutions Fund, she oversaw a $331 million budget, mainly comprised of grant and administrative funding for various economic programs. Under Harris’ watch, the fund distributed over $3 billion in pandemic recovery funding, its highest level of appropriated grants ever.
Harris has been a part of the Treasury Department for 15 years, including as director of community and economic development policy.
In addition to government work, Harris has previously spent time in the private, academic and nonprofit sectors. At the beginning of her career, Harris worked at Meridian Bank and Accenture before becoming a social and education policy researcher at New York University. She also spent two years as president of the Urban Business Assistance Corporation in New York.
Mayor Jim Kenney said that Philadelphia is “poised for long-term growth” and Harris will help drive it.
Real estate company SkyREM plans to spend $250 million converting the historic Quartermaster site in South Philadelphia to a life sciences campus with restaurants and a hotel.
The redevelopment would feature wet and dry lab space for research, development and bio-manufacturing.
The renamed Quartermaster Science + Technology Park is near the southwest corner of Oregon Avenue and South 20th Street in the city’s Girard Estates neighborhood. It’s east of the Quartermaster Plaza retail center, which sold last year for $100 million.
The 24-acre campus is planned to have six acres of green space and an Aldi grocery store opening by March, and it is already the headquarters for Indego, the bicycle-share program in Philadelphia.
Six buildings totaling 1 million square feet of space would be used for research and development labs. There’s 500,000 square feet of vacant space available for life sciences and high technology companies with availabilities as small as 1,000 square feet up to 250,000 square feet contiguous. There’s also 150,000 square feet of retail space available.
The office park has 200,000 square feet already occupied by tenants. The Philadelphia Job Corps Center and Delaware Valley Intelligence Center are tenants at the site.
A rendering shows part of the Quartermaster Science + Technology Park as a redeveloped mixed-use life science campus.
QUARTERMASTER / PROVIDED BY FIFTEEN
The campus was previously used by the military as a place to produce clothing, footwear and personal equipment during World War I and II. The clothing factory closed in 1994. The Philadelphia Quartermaster Depot was listed on the National Register of Historic Places in 2010.
“We had a vision to preserve the legacy of this built-to-last historic Philadelphia landmark and transform it to create a vibrant space where the best and brightest want to innovate, collaborate, and work,” SkyREM CEO and Founder Alex Dembitzer said in a statement.
SkyREM, a real estate investor and developer, has corporate offices in New York and Philadelphia. The company acquired the site in 2001.
Vered Nohi, SkyREM’s regional executive director of new business development, called the redevelopment “transformational” for Philadelphia.
SkyREM announced the redevelopment of the Quartermaster campus in South Philadelphia into a life sciences campus with restaurants and a hotel. This rendering looks across Oregon Avenue toward the southwest corner of Oregon and 21st Street.
Quartermaster would join a wave of new life sciences projects being developed in the surrounding area and across the region.
The site is near both interstates 76 and 95 and is about 2 miles north of the Philadelphia Navy Yard, which has undergone a similar transformation from a military hub to a major life sciences and mixed-use redevelopment project. The Philadelphia Industrial Development Corp. is also in the process of selecting a developer to create a massive cell and gene therapy manufacturing complex across two sites totaling about 40 acres on Southwest Philadelphia’s Lower Schuylkill riverfront.
A rendering shows part of the future Quartermaster Science and Technology Park in South Philadelphia. The 24-acre campus is planned to have six buildings with 1 million square feet of life science space.
QUARTERMASTER / PROVIDED BY FIFTEEN
SkyREM is working with Maryland real estate firm Scheer Partners to lease the science and technology space. Philadelphia’s MPN Realty will handle leasing of the retail space. Architecture firm Fifteen is working on the project’s design.
Scheer Partners Senior Vice President Tim Conrey said the Quartermaster conversion will help companies solve for “speed to market” as demand for life science space in the region has been strong.
Brandywine Realty Trust originally planned to redevelop a Radnor medical office into lab and office space, split 50-50 between the two uses.
After changes in demand for lab and office space, Brandywine (NYSE: BDN) recently completed the 168,000-square-foot, four-story building at 250 King of Prussia Road in Radnor fully for life sciences.
“The pipeline is now 100% life sciences, which, while requiring more capital, is also generating longer term leases at a higher return on cost,” Brandywine CEO Jerry Sweeney of the project said during the company’s fourth-quarter earnings call on Thursday.
At the same time, Brandywine is holding off on developing new office buildings unless it has a tenant lined up in advance.
The shift reflects how Philadelphia-based Brandywine continues to lean into, and bet big on, life sciences.
Brandywine is the city’s largest owner of trophy office buildings and has several major development projects in the works. The company is planning to eventually develop 3 million square feet of life sciences space. For now, 800,000 square feet of life sciences space is under development, including a 12-story, 417,000-square-foot life sciences building at 3151 Market St. and a 29-story building with 200,000 square feet of life sciences space at 3025 John F. Kennedy Blvd. Both are part of the multi-phase Schuylkill Yards project underway near 30th Street Station in University City.
Once its existing projects are completed, Brandywine would have 800,000 square feet of life sciences space, making up 8% of its portfolio. Sweeney said the company wants to grow that figure to 21%.
Brandywine is developing a 145,000-square-foot, build-to-suit office building at 155 King of Prussia Road in Radnor for Arkema, a France-based global supplier of specialty materials. The building will be Arkema’s North American headquarters. Construction began in January and is scheduled to be completed in late 2024.
Brandywine reported that since November it raised over $705 million through fourth-quarter asset sales, an unsecured bond transaction and a secured loan. The company has “complete availability” on its $600 million unsecured line of credit, Sweeney said.
Brandywine sold a 95% leased, 86,000-square-foot office building at 200 Barr Harbor Drive in West Conshohocken for $30.5 million. The company also sold its 50% ownership interest in the 1919 Market joint venture for $83.2 million to an undisclosed buyer. 1919 Market St. is a 29-story building with apartments, office and commercial space. Brandywine co-developed the property with LCOR and the California State Teachers' Retirement System.
Brandywine declined to comment and LCOR could not be reached.
Brandywine’s core portfolio is 91% leased.
The project at 250 King of Prussia Road cost $103.7 million and was recently completed. The renovation included 12-foot-high floor-to-ceiling glass on the second floor; a new roof, lobby and elevator core; a common area with a skylight; and an added structured parking deck.
The building sits in the Radnor Life Science Center, a new campus with nearly 1 million square feet of lab, research and office space that Sweeney called a “magnet” for biotech companies. Avantor, a global manufacturer and distributor of life sciences products, is headquartered in the complex.
Sweeney said Brandywine is “very confident” demand will stay strong for life sciences in Radnor. The building at 250 King of Prussia Road is projected to be fully leased by early 2024.
“Larger users we’re talking to, they just tend to take a little bit more time than we would like as they go through technical requirements and space planning requirements,” Sweeney said.
While Brandywine is aiming to increase its life sciences footprint, the company is being selective about what it builds next and may steer away from developments other than life sciences. The Schuylkill Yards project, for example, features a significant life sciences portion in University City.
“Other than fully leased build-to-suit opportunities, our future development starts are on hold,” Sweeney said, “pending more leasing on the existing joint venture pipeline and more clarity on the cost of debt capital and cap rates.”
Brandywine said about 70% to 75% of suburban tenants have returned to offices, while that number has been around 50% in Philadelphia. So far, though, the disparity hasn't affected leasing demand. Some tenants, for example, have moved out of the city while others have moved in.
In the fourth quarter, Brandywine had $55.7 million in funds from operations, or 32 cents per share. That's down from $60.4 million, or 35 cents per share, in the fourth quarter of 2021. Brandywine generated $129 million in revenue in the fourth quarter, up slightly from $125.5 million in the year-ago period.
Brandywine stock is up 6.4% since the start of the year to $6.70 per share on Monday afternoon.
Many of Brandywine's properties are in desirable locations, where demand has remained strong despite the challenges facing offices, in line with broader industry trends.
Brandywine's 12-story, 417,000-square-foot building at 3151 Market St. is on budget at $308 million and on schedule to be completed in the second quarter of 2024. Sweeney said Brandywine anticipates entering a construction loan in the second half of 2023, which would help complete the project. The building, being developed along with a global institutional investor, would be used for life sciences, innovation and office space as part of the larger Schuylkill Yards development in University City.
The company's 29-story building at 3025 John F. Kennedy Blvd., with 200,000 square feet of life sciences space and 326 luxury apartments, is also on budget at $287.3 million and on time, with completion expected in the third quarter of this year.
Clarivate Analytics – a Powerhouse in IP Assets and in Pharmaceuticals Infomercials
Curator and Reporter: Aviva Lev-Ari, PhD, RN
We have addressed in several past articles the emergence of Clarivate in its new life after the Reuters years, which culminated in a SPAC IPO in 2019. These articles included:
Clarivate Analytics expanded IP data leadership by new acquisition of the leading provider of intellectual property case law and analytics Darts-ip
That moment in June in which Mr Klein and his partners cashed in more than $60m came after the stock had doubled to more than $20, in part thanks to a deal to buy the intellectual property management and technology company CPA Global. Onex and Barings, the two private equity firms that owned Clarivate before it went public via Mr Klein’s Spac, also sold stock at the same time.
Clarivate’s share price has since risen to $27.69, so the value of Mr Klein’s remaining stake has continued to swell and his investor group still holds shares worth $395m. The group also has separate warrants on top further augmenting their potential profit.
See Figure
SPAC IPO, 11/2018, $10/share
→ Merger, 3/2019, $10/share
→ Clarivate PLC, 11/2020, $27/share
… asymmetry of Spac mathematics: the risk in Spacs falls most heavily on outside shareholders even as the return on investment for sponsors looks very promising indeed.
It is worth exploring the synergies embedded in a potential acquisition of LPBI Group's portfolio of IP assets by Clarivate Analytics, a publishing company with the infrastructure needed to promote LPBI Group's content in pharmaceutical media, medicine, life sciences and healthcare, and to monetize this content.
With the explosive development of decentralized finance, we witness phenomenal growth in the tokenization of all kinds of assets, including equity, funds, debt, and real estate. By taking advantage of blockchain technology, digital assets are broadly grouped into fungible and non-fungible tokens (NFTs), where non-fungible tokens are those with unique, non-substitutable properties. NFTs have attracted wide attention, and their protocols, standards, and applications are developing exponentially. They have been successfully applied to digital fantasy artwork, games, collectibles, etc. However, there is a lack of research on utilizing NFTs for issues such as Intellectual Property. Applying for a patent or trademark is not only a time-consuming and lengthy process but also a costly one. NFTs have considerable potential in the intellectual property domain: they can promote transparency and liquidity and open the market to innovators who aim to commercialize their inventions efficiently. The main objective of this paper is to examine the requirements for presenting intellectual property assets, specifically patents, as NFTs. Hence, we offer a layered conceptual NFT-based patent framework. Furthermore, a series of open challenges for NFT-based patents and possible future directions are highlighted. The proposed framework provides fundamental elements and guidance for businesses in taking advantage of NFTs for real-world problems such as patent grants, funding, biotechnology, and so forth.
Introduction
Distributed ledger technologies (DLTs) such as blockchain are emerging technologies that pose a threat to existing business models. Traditionally, most companies used centralized authorities in various aspects of their business, such as financial operations and setting up trust with their counterparts. With the emergence of blockchain, centralized organizations can be replaced by a decentralized group of resources and actors. The blockchain mechanism was introduced in the Bitcoin white paper in 2008, which lets users generate transactions and spend their money without the intervention of banks1. Ethereum, a second generation of blockchain, was introduced in 2014, allowing developers to run smart contracts on a distributed ledger. With smart contracts, developers and businesses can create financial applications that use cryptocurrencies and other forms of tokens for applications such as decentralized finance (DeFi), crowdfunding, decentralized exchanges, data record-keeping, etc.2. Recent advances in distributed ledger technology have produced concepts that lead to cost reduction and the simplification of value exchange. Nowadays, by leveraging the advantages of blockchain and taking governance issues into account, digital assets can be represented as tokens existing on the blockchain network, which facilitates their transmission and traceability, increases their transparency, and improves their security3.
In the landscape of blockchain technology, two types of tokens can be defined: fungible tokens, in which all tokens have equal value, and non-fungible tokens (NFTs), which feature unique characteristics and are not interchangeable. Non-fungible tokens are digital assets with a unique identifier stored on a blockchain4. NFTs were initially suggested in Ethereum Improvement Proposals (EIP)-7215 and later expanded in EIP-11556. NFTs became one of the most widespread applications of blockchain technology and reached worldwide attention in early 2021. They can be digital representations of real-world objects. NFTs are tradable rights to digital assets (pictures, music, films, and virtual creations) whose ownership is recorded in blockchain smart contracts7.
In particular, fungibility is the ability to exchange one item for another of the same kind, an essential feature of currency. A non-fungible token is unique and therefore cannot be substituted8. Recently, blockchain enthusiasts have shown significant interest in various types of NFTs, enthusiastically participating in NFT-related games and trades. CryptoPunks9, one of the first NFTs on Ethereum, comprises almost 10,000 collectible punks and helped popularize the ERC-721 standard. With the gamification of its breeding mechanics, CryptoKitties10 officially placed NFTs at the forefront of the market in 2017. CryptoKitties is an early blockchain game that enables users to buy, sell, collect, and digitally breed cats. Another example is NBA Top Shot11, an NFT trading platform for buying and selling digital short films of NBA events.
NFTs are developing remarkably and have enabled many applications, such as artist royalties, in-game assets, educational certificates, etc. However, they are a relatively new concept, and many areas of application remain to be explored. Intellectual Property, including patents, trademarks, and copyright, is an important area where NFTs can be usefully applied to solve existing problems.
Although NFTs have had many applications so far, they have rarely been used to solve real-world problems. NFTs are, in fact, an exciting prospect for Intellectual Property (IP). Applying for a patent or trademark is not only a time-consuming and lengthy process but also a costly one: registering a copyright or trademark may take months, while securing a patent can take years. By contrast, the unique features of NFT technology make it possible to accelerate this process with considerable confidence and assurance about protecting the ownership of an IP. NFTs can offer IP protection while an applicant waits for the government to grant more formal protection. Proponents believe NFTs and blockchain would make buying and selling patents easier, offering new opportunities for companies, universities, and inventors to make money off their innovations12. Patent holders would benefit from such innovation, which would give them the ability to 'tokenize' their patents. Because every transaction would be logged on a blockchain, it would be much easier to trace changes in patent ownership. NFTs would also facilitate revenue generation from patents by democratizing patent licensing. NFTs support the intellectual property market by embedding automatic royalty-collecting methods inside inventors' works, providing them with financial benefits anytime their innovation is licensed. For example, each inventor's patent would be minted as an NFT, and these NFTs would be joined together to form a commercial IP portfolio minted as a compounded NFT. Each inventor would automatically receive their fair share of royalties whenever licensing revenue is generated, without anyone having to track them down.
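The compound-portfolio royalty idea above can be sketched in a few lines: each patent NFT carries a share weight, and licensing revenue is split pro rata, much as a smart contract would do automatically. All names here (PatentNFT, distribute_royalties) are illustrative, not part of any real standard.

```python
# Sketch of automatic royalty splitting for a compound patent NFT.
# All names are hypothetical; a real implementation would live in a
# smart contract, not off-chain Python.
from dataclasses import dataclass

@dataclass
class PatentNFT:
    token_id: int
    inventor: str
    shares: int  # relative weight inside the compound portfolio NFT

def distribute_royalties(portfolio: list[PatentNFT], revenue: float) -> dict[str, float]:
    """Split licensing revenue pro rata by shares."""
    total = sum(p.shares for p in portfolio)
    return {p.inventor: revenue * p.shares / total for p in portfolio}

portfolio = [PatentNFT(1, "alice", 3), PatentNFT(2, "bob", 1)]
print(distribute_royalties(portfolio, 1000.0))  # → {'alice': 750.0, 'bob': 250.0}
```

Each time revenue arrives, the same deterministic split runs, so no inventor has to be tracked down manually.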
In13, the authors discuss an overview of NFTs' applications in different areas such as gambling, games, and collectibles. In addition,4 provides a prototype for an event-tracking application based on an Ethereum smart contract, and NFT as a solution for art and real estate auction systems is described in14. However, these studies have not discussed existing standards or a generalized architecture enabling NFTs to be applied in diverse applications. For example, the authors in15 provide two general design patterns for creating and trading NFTs and discuss existing token standards for NFTs. However, the proposed designs are limited to Ethereum, and other blockchains are not considered16. Moreover, the different technologies for each step of the proposed procedure are not discussed. In8, the authors provide a conceptual framework for token design and management and discuss five views: token view, wallet view, transaction view, user interface view, and protocol view. However, no research provides a generalized conceptual framework for generating, recording, and tracing NFT-based IP in a blockchain network.
Even with the clear benefits that NFT-backed patents offer, there are a number of impediments to actually achieving such a system. For example, convincing patent owners to put current ownership records for their patents into NFTs poses an initial obstacle. Because there is no reliable framework for NFT-based patents, this paper provides a conceptual framework for presenting NFT-based patents with a comprehensive discussion on many aspects, ranging from the background, model components, token standards to application domains and research challenges. The main objective of this paper is to provide a layered conceptual NFT-based patent framework that can be used to register patents in a decentralized, tamper-proof, and trustworthy peer-to-peer network to trade and exchange them in the worldwide market. The main contributions of this paper are highlighted as follows:
Providing a comprehensive overview on tokenization of IP assets to create unique digital tokens.
Discussing the components of a distributed and trustworthy framework for minting NFT-based patents.
Highlighting a series of open challenges of NFT-based patents and enlightening the possible future trends.
The rest of the paper is structured as follows: “Background” section describes the Background of NFTs, Non-Fungible Token Standards. The NFT-based patent framework is described in “NFT-based patent framework” section. The Discussion and challenges are presented in “Discussion” section. Lastly, conclusions are given in “Conclusion” section.
Background
Colored Coins could be considered the first step toward NFTs, designed on top of the Bitcoin network. Bitcoins are fungible, but it is possible to mark them so they are distinguishable from other bitcoins. These marked coins have special properties representing real-world assets like cars and stocks, and owners can prove their ownership of physical assets through the colored coins. By utilizing Colored Coins, users can transfer their marked coins' ownership like a usual transaction and benefit from Bitcoin's decentralized network17. Colored Coins had limited functionality due to the Bitcoin script's limitations. Pepe is a green frog meme created by Matt Furie; users define tokens for Pepes and trade them through the Counterparty platform. Tokens created from Pepe images are then assessed for whether they are rare enough. Rare Pepe allows users to preserve scarcity, manage ownership, and transfer their purchased Pepes.
In 2017, Larva Labs developed the first Ethereum-based NFT, named CryptoPunks. It contains 10,000 unique human-like characters generated randomly. The official ownership of each character is stored in an Ethereum smart contract, and owners can trade characters. The CryptoPunks project inspired the CryptoKitties project. CryptoKitties, launched in late 2017, drew attention to NFTs and is a pioneer among blockchain games and NFTs. CryptoKitties is a blockchain-based virtual game in which users collect and trade characters with unique features that shape kitties. The game was developed as an Ethereum smart contract, and it pioneered the ERC-721 token, the first standard token for NFTs on the Ethereum blockchain. After the 2017 hype around NFTs, many projects started in this context. Due to increased attention to NFT use cases and a growing market cap, different blockchains like EOS, Algorand, and Tezos started to support NFTs, and various marketplaces like SuperRare, Rarible, and OpenSea were developed to help users trade NFTs. In general, assets are categorized into two main classes: fungible and non-fungible assets. Fungible assets are those that another similar asset can replace. Fungible items have two main characteristics: replicability and divisibility.
Currency is a fungible item because a ten-dollar bill can be exchanged for another ten-dollar bill or divided into ten one-dollar bills. Unlike fungible items, non-fungible items are unique and distinguishable. They cannot be divided or exchanged for another identical item. The first tweet on Twitter is a non-fungible item with these characteristics: another tweet cannot replace it, and it is unique and not divisible. An NFT is a non-fungible cryptographic asset that is declared in a standard token format and has a unique set of attributes. Thanks to the transparency, proof of ownership, and traceable transactions of the blockchain network, NFTs are created using blockchain technology.
Blockchain-based NFTs help enthusiasts create NFTs in the standard token format in blockchain, transfer the ownership of their NFTs to a buyer, assure uniqueness of NFTs, and manage NFTs completely. In addition, there are semi-fungible tokens that have characteristics of both fungible and non-fungible tokens. Semi-fungible tokens are fungible in the same class or specific time and non-fungible in other classes or different times. A plane ticket can be considered a semi-fungible token because a charter ticket can be exchanged by another charter ticket but cannot be exchanged by a first-class ticket. The concept of semi-fungible tokens plays the main role in blockchain-based games and reduces NFTs overhead. In Fig. 1, we illustrate fungible, non-fungible, and semi-fungible tokens. The main properties of NFTs are described as follows15:
Figure 1
Ownership: Because of the blockchain layer, the owner of NFT can easily prove the right of possession by his/her keys. Other nodes can verify the user’s ownership publicly.
Transferable: Users can freely transfer ownership of their NFTs to others on dedicated markets.
Transparency: By using blockchain, all transactions are transparent, and every node in the network can confirm and trace the trades.
Fraud Prevention: Fraud is one of the key problems in trading assets; hence, using NFTs ensures buyers buy a non-counterfeit item.
Immutability: Metadata, token ID, and history of transactions of NFTs are recorded in a distributed ledger, and it is impossible to change the information of the purchased NFTs.
Non-fungible standards
The Ethereum blockchain pioneered the implementation of NFTs. The ERC-721 token was the first standard token accepted in the Ethereum network. With the increase in popularity of NFTs, developers started developing and enhancing NFT standards on different blockchains like EOS, Algorand, and Tezos. This section provides a review of the NFT standards implemented on these blockchains.
Ethereum
ERC-721, a free and open-source standard, was the first standard for NFTs developed on Ethereum. ERC-721 is an interface that a smart contract should implement to be able to transfer and manage NFTs. Each ERC-721 token has unique properties and a different token ID. ERC-721 tokens include the owner's information, a list of approved addresses, a transfer function that implements transferring tokens from owner to buyer, and other useful functions5.
In ERC-721, smart contracts can group tokens with the same configuration, but each token has different properties, so ERC-721 does not support fungible tokens. ERC-1155, another standard on Ethereum developed by Enjin, has richer functionality than ERC-721, supporting fungible, non-fungible, and semi-fungible tokens. In ERC-1155, IDs define the class of assets: different IDs represent different classes of assets, and each ID may contain different assets of the same class. Using ERC-1155, a user can transfer different types of tokens in a single transaction and mix multiple fungible and non-fungible token types in a single smart contract6. ERC-721 and ERC-1155 both support operators, whereby the owner can let an operator originate token transfers.
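The ERC-721 mechanics described above (unique token IDs, an owner record, approved operators, a transfer function) can be sketched in plain Python. This is a toy model of the interface's behavior, not Solidity and not the standard's actual code:

```python
# Minimal Python sketch of ERC-721-style ownership with operator approval.
# Real ERC-721 lives in a Solidity contract and also emits events; this
# toy class only models the ownership and authorization logic.

class ERC721Like:
    def __init__(self):
        self.owner_of: dict[int, str] = {}        # token_id -> owner address
        self.operators: dict[str, set[str]] = {}  # owner -> approved operators

    def mint(self, token_id: int, owner: str) -> None:
        assert token_id not in self.owner_of, "token already exists"
        self.owner_of[token_id] = owner

    def set_approval_for_all(self, owner: str, operator: str) -> None:
        self.operators.setdefault(owner, set()).add(operator)

    def transfer_from(self, caller: str, frm: str, to: str, token_id: int) -> None:
        assert self.owner_of.get(token_id) == frm, "not the owner"
        assert caller == frm or caller in self.operators.get(frm, set()), "not authorized"
        self.owner_of[token_id] = to

c = ERC721Like()
c.mint(42, "alice")
c.set_approval_for_all("alice", "op")
c.transfer_from("op", "alice", "bob", 42)  # operator-originated transfer
print(c.owner_of[42])  # → bob
```

Note how the operator, once approved, can originate the transfer on the owner's behalf, which is the behavior both ERC-721 and ERC-1155 share.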
EOSIO
EOSIO is an open-source blockchain platform released in 2018 that claims to eliminate transaction fees and increase transaction throughput. EOSIO differs from Ethereum in its wallet-creation algorithm and its procedure for handling transactions. dGoods is a free standard developed on the EOS blockchain for assets, focused on large-scale use cases. It supports a hierarchical naming structure in smart contracts. Each contract has a unique symbol and a list of categories, and each category contains a list of token names. Therefore, a single contract in dGoods can contain many tokens, which makes transferring a group of tokens efficient. Using this hierarchy, dGoods supports fungible, non-fungible, and semi-fungible tokens. It also supports batch transferring, where the owner can transfer many tokens in one operation18.
Algorand
Algorand is a new high-performance public blockchain launched in 2019. It provides scalability while maintaining security and decentralization. It supports smart contracts and tokens for representing assets19. Algorand defines Algorand Standard Assets (ASA) concept to create and manage assets in the Algorand blockchain. Using ASA, users are able to define fungible and non-fungible tokens. In Algorand, users can create NFTs or FTs without writing smart contracts, and they should run just a single transaction in the Algorand blockchain. Each transaction contains some mutable and immutable properties20.
Each account in Algorand can create up to 1000 assets, and for every asset an account creates or receives, the minimum balance of the account increases by 0.1 Algos. Algorand also supports fractional NFTs: an NFT can be split into a group of divided FTs or NFTs, and each part can be exchanged independently21. Algorand uses a Clawback Address that operates like an operator in ERC-1155 and is allowed to transfer tokens of an owner who has permitted the operator.
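The minimum-balance rule is simple arithmetic and can be checked directly. The 0.1-Algo increment per asset and the 1000-asset cap come from the text; the 0.1-Algo base minimum for a bare account is our assumption here:

```python
# Sketch of Algorand's minimum-balance rule as described in the text:
# each asset an account creates or holds raises the minimum balance by
# 0.1 Algos. The base minimum for a bare account is an assumption.
# Amounts are kept in integer microAlgos to avoid float rounding.

BASE_MIN = 100_000     # microAlgos for a bare account (our assumption)
PER_ASSET = 100_000    # 0.1 Algos per created/received asset, per the text
MAX_ASSETS = 1000      # per-account asset cap, per the text

def min_balance_microalgos(num_assets: int) -> int:
    assert 0 <= num_assets <= MAX_ASSETS, "asset count out of range"
    return BASE_MIN + PER_ASSET * num_assets

print(min_balance_microalgos(10) / 1_000_000)  # → 1.1 (Algos for 10 assets)
```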
Tezos
Tezos is another decentralized open-source blockchain. Tezos supports the meta-consensus concept: in addition to using a consensus protocol on the ledger's state like Bitcoin and Ethereum, it also attempts to reach consensus about how nodes and the protocol should change or upgrade22. FA2 (TZIP-12) is a standard for a unified token contract interface in the Tezos blockchain. FA2 supports different token types, including fungible, non-fungible, and fractionalized NFT contracts. In Tezos, tokens are identified by a token contract address and token ID pair. Tezos also supports batch token transferring, which reduces the cost of transferring multiple tokens.
Flow
Flow was developed by Dapper Labs to remove the scalability limitations of the Ethereum blockchain. Flow is a fast and decentralized blockchain that focuses on games and digital collectibles. Thanks to its architecture, it improves throughput and scalability without sharding. Flow supports smart contracts using Cadence, a resource-oriented programming language. In Cadence, NFTs can be described as resources with a unique id. Resources have important rules for ownership management: a resource has exactly one owner and cannot be copied or lost. These features give assurance to the NFT owner. NFTs' metadata, including images and documents, can be stored off-chain or on-chain in Flow. In addition, Flow defines a Collection concept, in which each collection is an NFT resource that can include a list of resources. It is a dictionary in which the key is the resource id and the value is the corresponding NFT.
The Collection concept enables batch transferring of NFTs. Besides, users can define an NFT that itself owns another NFT. For instance, in CryptoKitties, a unique cat as an NFT can own a unique hat (another NFT). Flow uses Cadence's second layer of access control to allow some operators to access some fields of the NFT23. In Table 1, we compare the standards described above in terms of: support for fungible tokens; support for non-fungible tokens; batch transferring, where the owner can transfer multiple tokens in one operation; operator support, in which the owner can approve an operator to originate token transfers; and fractionalized NFTs, where an NFT can be divided into different tokens, each exchangeable independently.
Table 1. Comparing NFT standards.
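Flow's Collection idea (a resource that maps ids to NFTs and supports batch withdrawal and deposit) can be approximated as follows. Cadence enforces single ownership of resources at the language level, which Python can only imitate:

```python
# Sketch of Flow's Collection concept: a dictionary keyed by resource id,
# supporting batch withdraw/deposit. Cadence's move semantics (a resource
# has exactly one owner, cannot be copied or lost) are only approximated
# here by dict.pop().

class NFTResource:
    def __init__(self, res_id: int, metadata: str):
        self.res_id, self.metadata = res_id, metadata

class Collection:
    def __init__(self):
        self.owned: dict[int, NFTResource] = {}  # resource id -> NFT resource

    def deposit(self, nft: NFTResource) -> None:
        self.owned[nft.res_id] = nft

    def batch_withdraw(self, ids: list[int]) -> list[NFTResource]:
        # pop() removes the resource from this collection, crudely
        # mimicking a move rather than a copy
        return [self.owned.pop(i) for i in ids]

alice, bob = Collection(), Collection()
alice.deposit(NFTResource(1, "cat"))
alice.deposit(NFTResource(2, "hat"))
for nft in alice.batch_withdraw([1, 2]):  # batch transfer to bob
    bob.deposit(nft)
print(sorted(bob.owned))  # → [1, 2]
```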
NFT-based patent framework
In this section, we propose a framework for presenting NFT-based patents. We describe the details of the proposed distributed and trustworthy framework for minting NFT-based patents, as shown in Fig. 2. The proposed framework includes five main layers: Storage Layer, Authentication Layer, Verification Layer, Blockchain Layer, and Application Layer. The details of each layer and the general concepts are presented as follows.
Figure 2
Storage layer
The continuous rise of data in blockchain technology is moving various information systems toward the use of decentralized storage networks. Decentralized storage networks were created to provide more benefits to the technological world24. Two benefits of using decentralized storage systems: (1) cost savings are achieved by making optimal use of current storage; (2) multiple copies are kept on various nodes, avoiding bottlenecks on central servers and speeding up downloads. This foundation layer provides the infrastructure required for storage. The items on NFT platforms have unique characteristics that must be included for identification.
Non-fungible token metadata provides information that describes a particular token ID. NFT metadata is represented either on-chain or off-chain. On-chain means direct incorporation of the metadata into the NFT's smart contract, which represents the tokens. Off-chain storage, on the other hand, means hosting the metadata separately25.
Blockchains provide decentralization but are expensive for data storage and never allow data to be removed. For example, because of the Ethereum blockchain’s current storage limits and high maintenance costs, many projects’ metadata is maintained off-chain. Developers utilize the ERC721 Standard, which features a method known as tokenURI. This method is implemented to let applications know the location of the metadata for a specific item. Currently, there are three solutions for off-chain storage, including InterPlanetary File System (IPFS), Pinata, and Filecoin.
IPFS
InterPlanetary File System (IPFS) is a peer-to-peer hypermedia protocol for decentralized media content storage. Because of the high cost of storing media files related to NFTs on a blockchain, IPFS can be the most affordable and efficient solution. IPFS combines multiple technologies inspired by Git and BitTorrent, such as the Block Exchange System, Distributed Hash Tables (DHT), and the Version Control System26. On a peer-to-peer network, the DHT is used to coordinate and maintain metadata.
In other words, hash values must be mapped to the objects they represent. When storing an object like a file, IPFS generates a hash value that starts with the prefix Qm and acts as a reference to the specific item. Objects larger than 256 KB are divided into smaller blocks of up to 256 KB, and a hash tree is used to interconnect all the blocks that are part of the same object. IPFS uses the Kademlia DHT. The Block Exchange System, or BitSwap, is a BitTorrent-inspired system used to exchange blocks. It is possible to use asymmetric encryption to prevent unauthorized access to content stored on IPFS27.
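The chunk-and-link scheme can be approximated as follows: split content into 256 KB blocks, hash each block, and derive a root reference over the block hashes. Real IPFS builds a Merkle DAG with multihash-encoded CIDs, so this flat sketch will not reproduce actual IPFS identifiers:

```python
# Toy content addressing in the spirit of IPFS: chunk data into 256 KB
# blocks, hash each block, and derive a root hash over the block hashes.
# Real IPFS uses a Merkle DAG and multihash CIDs; this is a flat sketch.

import hashlib

BLOCK_SIZE = 256 * 1024  # 256 KB, per the chunking rule described above

def chunk(data: bytes) -> list[bytes]:
    """Split data into blocks of at most 256 KB (at least one block)."""
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)] or [b""]

def root_hash(data: bytes) -> str:
    """Hash each block, then hash the concatenated block hashes."""
    block_hashes = [hashlib.sha256(b).digest() for b in chunk(data)]
    if len(block_hashes) == 1:  # small objects fit in a single block
        return block_hashes[0].hex()
    return hashlib.sha256(b"".join(block_hashes)).hexdigest()

data = b"x" * (600 * 1024)   # 600 KB of content
print(len(chunk(data)))      # → 3 (two full 256 KB blocks plus a remainder)
```

Because the root reference is derived purely from the content, any change to any block changes the root, which is what makes the reference content-addressed.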
Pinata
Pinata is a popular platform for managing and uploading files on IPFS. It provides secure and verifiable files for NFTs. Most NFTs store their data off-chain, with the NFT on the blockchain pointing to a URL for the data. The main problem here is that the information behind that URL can change.
This indicates that an NFT supposed to describe a certain patent can be changed without anyone knowing. This defeats the purpose of the NFT in the first place. This is where Pinata comes in handy. Pinata uses the IPFS to create content-addressable hashes of data, also known as Content-Identifiers (CIDs). These CIDs serve as both a way of retrieving data and a means to ensure data validity. Those looking to retrieve data simply ask the IPFS network for the data associated with a certain CID, and if any node on the network contains that data, it will be returned to the requester. The data is automatically rehashed on the requester’s computer when the requester retrieves it to make sure that the data matches back up with the original CID they asked for. This process ensures the data that’s received is exactly what was asked for; if a malicious node attempts to send fake data, the resulting CID on the requester’s end will be different, alerting the requester that they’re receiving incorrect data28.
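The rehash-on-retrieval check described above is easy to sketch: recompute the hash of the returned bytes and compare it to the CID that was requested. A hex SHA-256 digest stands in for a real multibase/multihash CID here:

```python
# Sketch of the rehash-on-retrieval check described above: the requester
# recomputes the hash of the returned bytes and compares it to the CID it
# asked for. Real CIDs are multihash/multibase encoded; a hex SHA-256
# digest stands in here.

import hashlib

def cid_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def fetch_and_verify(requested_cid: str, returned: bytes) -> bytes:
    """Accept the response only if its hash matches the requested CID."""
    if cid_of(returned) != requested_cid:
        raise ValueError("CID mismatch: node returned tampered or wrong data")
    return returned

patent_doc = b"%PDF... patent document bytes ..."
cid = cid_of(patent_doc)
assert fetch_and_verify(cid, patent_doc) == patent_doc  # honest node
try:
    fetch_and_verify(cid, b"malicious bytes")           # tampered response
except ValueError as e:
    print("rejected:", e)
```

A malicious node therefore cannot substitute fake data: the substituted bytes produce a different CID, and the requester detects the mismatch locally.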
Filecoin
Another decentralized storage network is Filecoin. It is built on top of IPFS and is designed to store the most important data, such as media files. Truffle Suite has also launched an NFT Development Template with Filecoin Box. NFT.Storage (Free Decentralized Storage for NFTs)29 lets users easily and securely store NFT content and metadata using IPFS and Filecoin. NFT.Storage is a service backed by Protocol Labs and Pinata specifically for storing NFT data. Through content addressing and decentralized storage, it allows developers to protect their NFT assets and associated metadata, ensuring that all NFTs follow best practices and remain accessible for the long term. NFT.Storage makes it frictionless to mint NFTs following best practices through resilient persistence on IPFS and Filecoin, letting developers store NFT data quickly, safely, and for free on decentralized networks. Anyone can leverage the power of IPFS and Filecoin to ensure the persistence of their NFTs. The details of this system are as follows30:
Content addressing
Once users upload data to NFT.Storage, they receive a CID, an IPFS hash of the content. CIDs are the data's unique fingerprints: universal addresses that can be used to refer to the content regardless of how or where it is stored. Using CIDs to reference NFT data avoids problems such as broken links and "rug pulls", since CIDs are generated from the content itself.
Provable storage
NFT.Storage uses Filecoin for long-term decentralized data storage. Filecoin uses cryptographic proofs to assure the NFT data’s durability and persistence over time.
Resilient retrieval
Data stored via IPFS and Filecoin can be fetched directly in the browser via any public IPFS gateway.
Authentication Layer
The second layer is the authentication layer, whose functions we briefly highlight in this section. The Decentralized Identity (DID) approach helps users collect credentials from a variety of issuers, such as governments, educational institutions, or employers, and save them in a digital wallet. A verifier then uses these credentials to verify a person's validity, following the "identity and access management (IAM)" process on a blockchain-based ledger. DID thus allows users to remain in control of their identity. A lack of NFT verifiability also causes intellectual property and copyright infringements; the chain of custody can, of course, be traced back to the creator's public address to check whether a similar patent was filed from that address. However, there is no quick and foolproof way to check an NFT creator's legitimacy. Without such verification built into the NFT, an NFT proves ownership only over that NFT itself and nothing more.
Self-sovereign identity (SSI)31 is a solution to this problem. SSI is a new series of standards guiding a new identity architecture for the Internet. With a focus on privacy, security, and interoperability, SSI applications use public-key cryptography with public blockchains to generate persistent identities for people, with private and selective information disclosure. Blockchain technology offers a way to establish trust and transparency and to provide a secure and publicly verifiable KYC (Know Your Customer) process. The blockchain architecture allows information from various service providers to be collected into a single, cryptographically secure, and immutable database that needs no third party to verify the authenticity of the information.
The proposed platform generates patent-related smart contracts, programs that run on the blockchain to receive and send transactions. These contracts are unalterable, and clients are privately identified through a thorough KYC process. After KYC approval, an NFT is minted on the blockchain as a certificate of verification32. At this layer, the framework uses a decentralized authentication solution. This solution has been applied in various blockchain domains (e.g., smart cities and the Internet of Things33,34), but we use it here for the proposed framework (patents as NFTs). Details of this solution are presented below.
Decentralized authentication
This section presents the authentication layer, which, similar to35, builds validated communication in a secure and decentralized manner via blockchain technology. As shown in Fig. 3, the authentication protocol comprises two processes: registration and login.
Figure 3
Registration
In the registration process of the suggested authentication protocol, we first initialize a user's public key as their identity key (UserName). We then upload this identity key to the blockchain, where transactions can later be verified by other users. Finally, the user generates an identity transaction.
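The registration step above can be sketched as a toy model. This is illustrative only: the "keys" here are simplified stand-ins (there is no real asymmetric cryptography in the Python standard library), and the blockchain is just a list.

```python
# Toy sketch of registration: derive a public identity key, publish it
# on a simulated ledger as an identity transaction. Illustrative only.
import hashlib
import secrets

blockchain = []  # the public ledger of identity transactions

def register(username_seed: str):
    secret_key = secrets.token_hex(32)                      # kept private
    identity_key = hashlib.sha256(
        (username_seed + secret_key).encode()).hexdigest()  # public UserName
    tx = {"type": "identity", "UserName": identity_key}
    blockchain.append(tx)                                   # publish on-chain
    return secret_key, identity_key

sk, user = register("alice")
assert any(t["UserName"] == user for t in blockchain)
```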
Login
After registration, a user logs in to the system. The login process is described as follows:
1. The user commits identity information and imports their secret key into the service application to log in.
2. A user who needs to log in sends a login request to the network’s service provider.
3. The service provider analyzes the login request, extracts the hash, queries the blockchain, and obtains identity information from an identity list (identity transactions).
4. The service provider responds with an authentication request when the above process is completed. A timestamp (to avoid a replay attack), the user’s UserName, and a signature are all included in the authentication request.
5. The user creates a signature over five parameters: the timestamp, their own UserName and PK, and the UserName and PK of the service provider. This signature serves as the user's authentication credential.
6. The service provider verifies the received information; if it is valid, authentication succeeds. Otherwise, authentication fails and the user's login is denied.
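The six-step exchange above, including the replay-attack check, can be sketched as follows. Signatures are simulated with a keyed hash so the example stays self-contained; a real deployment would use asymmetric signatures checked against the identity key recorded on-chain.

```python
# Toy sketch of the login exchange: the credential signs the five listed
# parameters, and the provider checks freshness (anti-replay) plus validity.
import hashlib
import time

MAX_SKEW = 30  # seconds; stale requests are rejected to stop replay attacks

def sign(secret_key: str, *fields) -> str:
    payload = "|".join(str(f) for f in fields)
    return hashlib.sha256((secret_key + payload).encode()).hexdigest()

def make_credential(user_sk, ts, user_name, user_pk, svc_name, svc_pk):
    # Step 5: sign the five parameters listed above.
    return sign(user_sk, ts, user_name, user_pk, svc_name, svc_pk)

def verify_login(user_sk, credential, ts, user_name, user_pk,
                 svc_name, svc_pk, now=None):
    # Step 6: check freshness and the signature. (The provider can
    # recompute the signature only because this toy scheme is symmetric.)
    now = int(time.time()) if now is None else now
    fresh = abs(now - ts) <= MAX_SKEW
    expected = sign(user_sk, ts, user_name, user_pk, svc_name, svc_pk)
    return fresh and credential == expected

ts = int(time.time())
cred = make_credential("alice-sk", ts, "alice", "alice-pk", "svc", "svc-pk")
assert verify_login("alice-sk", cred, ts, "alice", "alice-pk", "svc", "svc-pk")
# A replayed credential with an old timestamp is rejected:
assert not verify_login("alice-sk", cred, ts, "alice", "alice-pk",
                        "svc", "svc-pk", now=ts + 3600)
```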
The World Intellectual Property Organization (WIPO) and multiple target patent offices in various nations or regions must each assess a patent application, resulting in inefficiency, high costs, and uncertainty. This study presents a conceptual NFT-based patent framework for issuing, validating, and sharing patent certificates. The platform aims to support counterfeit protection as well as secure access and management of certificates according to the needs of learners, companies, educational institutions, and certification authorities.
Here, a certification authority (CA) is used to authenticate patent offices. The procedure first validates a patent if it is provided with a digital certificate that meets the X.509 standard. Certificate authorities are introduced into the system to authenticate both the nodes and the clients connected to the blockchain network.
Verification layer
In permissioned blockchains, only identified nodes can read and write the distributed ledger. Nodes can act in different roles and hold various permissions. A distributed system can therefore be designed in which the identified nodes are patent-granting offices. Here the system is described conceptually at a high level. Figure 4 illustrates the sequence diagram of this layer, which includes the four levels below:
Figure 4
Digitalization
For a patent to be published as an NFT on the blockchain, it must be in a digitized format. This level corresponds to the "filing step" in traditional patent registration. An application could be designed in the application layer to allow users to enter patent information online.
Recording
Patents provide valuable information and bring financial benefits to their owners. If they were publicly published on a blockchain network, miners could reject the patent and take the innovation for themselves; at a minimum, this would weaken consensus reliability and encourage miners to misbehave. To prevent this, the inventor should first record the innovation privately using proof of existence: the inventor generates the hash of the patent document and records it on the blockchain. As soon as it is recorded, the timestamp and the hash are publicly available, and the inventor can prove the existence of the patent document whenever needed.
Furthermore, using methods like Decision Thinking36, an inventor can record each phase of patent development separately. In each stage, the user generates the hash of the finished part and publishes it together with the previous part's hash. The result is a coupled series of hashes that traces the patent's development, and the inventor can prove the existence of each phase using the original documents. This level prevents others from abusing the patent and claiming it as their own, and the inventor can be sure the patent document is recorded confidentially and immutably37.
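The phased proof-of-existence scheme above amounts to a hash chain: each phase hash incorporates the previous one. A minimal sketch, with phase documents and the on-chain list as illustrative stand-ins:

```python
# Sketch of phased proof of existence: only hashes are published, and each
# phase hash is coupled to the previous one, forming a verifiable chain.
import hashlib

def phase_hash(document: bytes, prev_hash: str = "") -> str:
    return hashlib.sha256(prev_hash.encode() + document).hexdigest()

phases = [b"initial idea", b"prototype results", b"final claims"]
chain = []
prev = ""
for doc in phases:
    prev = phase_hash(doc, prev)
    chain.append(prev)  # only these hashes are recorded on-chain

# Later, the inventor reveals the original documents to prove each phase.
prev = ""
for doc, recorded in zip(phases, chain):
    prev = phase_hash(doc, prev)
    assert prev == recorded  # each phase checks out against the ledger
```

Because each hash depends on its predecessor, no phase can be inserted, reordered, or altered after the fact without breaking the chain.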
Different hash algorithms exist, with different architectures, time complexities, and security properties. A hash function should satisfy two main requirements. Pre-image resistance: it should be computationally hard to find the input of the hash function even when the output and the algorithm are publicly known. Collision resistance: it should be computationally hard to find two arbitrary inputs x and y with the same hash output. Both requirements are vital for recording patents. First, pre-image resistance makes it impossible for others to recover the patent documentation from its hash; otherwise, anybody could read the patent even before its official publication. Second, collision resistance precludes users from changing their document after recording it; otherwise, a user could upload one document and later replace it with another.
Among the many hash algorithms, the MD and SHA families are the most widely used. According to38, collisions have been found for the MD2, MD4, MD5, SHA-0, and SHA-1 hash functions, so they are not a good choice for recording patents. No collisions have been found for SHA-2. Although SHA-2 is noticeably slower than earlier hash algorithms, the recording phase is not highly time-sensitive, so SHA-2 is the better choice and provides excellent security for users.
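In practice, the recording step reduces to computing a SHA-2 digest of the patent document; Python's `hashlib` includes the SHA-2 family. The document bytes below are a placeholder.

```python
# Recording a patent privately: publish only the SHA-256 digest on-chain.
import hashlib

document = b"Full text of the patent application..."  # placeholder content
digest = hashlib.sha256(document).hexdigest()

# Pre-image resistance: the 64-hex-character digest reveals nothing usable
# about the document. Collision resistance: any edit changes the digest.
assert len(digest) == 64
assert hashlib.sha256(b"Full text of the patent application!").hexdigest() != digest
```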
Validating
In this phase, inventors first create NFTs for their patents and publish them to the miners/validators. Miners are identified nodes that validate NFTs for recording on the blockchain. Because patent validation is a specialized task, miners cannot be inexpert members of the public. In addition, patent offices are too few on their own to make the network fully decentralized. The miners can therefore be specialists certified by the patent offices: they receive a digital certificate from a patent office attesting to their eligibility to referee a patent.
Digital certificate
Digital certificates are digital credentials used to verify the online identities of networked entities. They usually include a public key as well as the owner's identification, and they are issued by Certification Authorities (CAs), who must verify the certificate holder's identity. Certificates contain cryptographic keys for signing, encryption, and decryption. X.509 is a standard that defines the format of public-key certificates signed by a certificate authority. The X.509 standard has multiple fields, and its structure is shown in Fig. 5:

Version: the version of the X.509 standard. X.509 has multiple versions, each with a different structure; depending on the CA, validators can choose their desired version.

Serial Number: distinguishes a certificate from all others; each certificate has a unique serial number.

Signature Algorithm Identifier: the cryptographic algorithm used by the certificate authority to sign the certificate.

Issuer Name: the issuer's name, generally the certificate authority.

Validity Period: the defined period for which the certificate is valid. This limited lifetime partly protects certificates against exposure of the CA's private key.

Subject Name: the name of the requester; in our proposed framework, the validator's name.

Subject Public Key Info: the public key of the subject to whom the certificate was issued.

These fields are identical among all versions of the X.509 standard39.
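A minimal data model mirroring the X.509 fields listed above might look as follows. This is illustrative only: real certificates are DER/PEM-encoded ASN.1 structures (parsed, for example, by the `cryptography` package), and all field values below are made up.

```python
# Illustrative data model of the X.509 fields discussed above.
from dataclasses import dataclass

@dataclass
class X509Certificate:
    version: int               # X.509 version (v1/v2/v3)
    serial_number: int         # unique per certificate
    signature_algorithm: str   # algorithm the CA used to sign
    issuer_name: str           # the certificate authority
    validity_not_before: str   # start of validity period
    validity_not_after: str    # end of validity period
    subject_name: str          # here: the validator's name
    subject_public_key: str    # public key of the certified subject

cert = X509Certificate(
    version=3,
    serial_number=1001,
    signature_algorithm="sha256WithRSAEncryption",
    issuer_name="Patent Office CA",        # hypothetical issuer
    validity_not_before="2023-01-01",
    validity_not_after="2024-01-01",
    subject_name="Validator A",            # hypothetical validator
    subject_public_key="<base64 key>",
)
assert cert.issuer_name == "Patent Office CA"
```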
Figure 5
Certificate authority
A Certificate Authority (CA) issues digital certificates. The CA signs each certificate with its private key, which is not public, and others can verify the certificate using the CA's public key.
Here, the patent office creates certificates for the requested patent referees. The patent office writes the validator's information into the certificate and signs it with the patent office's private key. The validator can use the certificate to assure others of their eligibility, and other nodes can check the requesting node's information by verifying the certificate with the patent office's public key. In this way, individuals can join the network's miners/validators using their credentials. In this phase, miners perform formal examinations, prior-art research, and substantive examinations, and vote to grant or refuse the patent.
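The issue-and-check flow above can be sketched as a toy model. Signatures are simulated with a keyed hash so the example is self-contained; a real CA would sign with an asymmetric private key so that anyone could verify with the public key alone.

```python
# Toy model of certificate issuance and checking by a patent office CA.
import hashlib

CA_SECRET = "patent-office-private-key"  # stand-in for the CA's private key

def issue_certificate(validator_name: str) -> dict:
    body = f"validator={validator_name};issuer=Patent Office"
    signature = hashlib.sha256((CA_SECRET + body).encode()).hexdigest()
    return {"body": body, "signature": signature}

def check_certificate(cert: dict) -> bool:
    # In this toy scheme verification re-uses the secret; with real
    # public-key crypto, anyone could verify with the CA's public key.
    expected = hashlib.sha256((CA_SECRET + cert["body"]).encode()).hexdigest()
    return cert["signature"] == expected

ca_cert = issue_certificate("Validator A")
assert check_certificate(ca_cert)           # genuine certificate passes
ca_cert["body"] = "validator=Mallory;issuer=Patent Office"
assert not check_certificate(ca_cert)       # tampered certificate fails
```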
Miners perform a consensus about the patent and record the patent in the blockchain. After that, the NFT is recorded in the blockchain with corresponding comments in granting or needing reformations. If the miners detect the NFT as a malicious request, they do not record it in the blockchain.
Blockchain layer
This layer acts as middleware between the Verification Layer and the Application Layer in the patents-as-NFTs architecture. Its main purpose in the proposed architecture is to provide IP management. Transitioning to a blockchain-based patents-as-NFTs record system enables many previously suggested improvements to current patent systems in a flexible, scalable, and transparent manner.
Multiple blockchain platforms can be used, including Ethereum, EOS, Flow, and Tezos. Blockchain systems can be classified into two major types based on their consensus mechanism: permissionless (public) and permissioned (private) blockchains. In a public blockchain, any node can participate in the peer-to-peer network, and the blockchain is fully decentralized; a node can leave the network without any consent from the other nodes.
Bitcoin is one of the most popular examples that fall under the public and permissionless blockchain. Proof of Work (POW), Proof-of-Stake (POS), and directed acyclic graph (DAG) are some examples of consensus algorithms in permissionless blockchains. Bitcoin and Ethereum, two famous and trustable blockchain networks, use the PoW consensus mechanism. Blockchain platforms like Cardano and EOS adopt the PoS consensus40.
In a private blockchain, nodes require specific access or permission for network authentication. Hyperledger is among the most popular private blockchains; it allows only permissioned members to join the network after authentication. This provides security to a group of entities that do not completely trust one another but want to achieve a common objective, such as exchanging information. All entities of a permissioned blockchain network can use Byzantine-fault-tolerant (BFT) consensus. Hyperledger Fabric has a membership identity service that manages user IDs and verifies network participants.
Members are therefore aware of each other's identity while privacy and secrecy are maintained, because they are unaware of each other's activities41. Due to their more secure nature, private blockchains have sparked considerable interest among banking and financial organizations, which believe these platforms can disrupt current centralized systems. Hyperledger, Quorum, Corda, and EOS are some examples of permissioned blockchains42.
Reaching consensus in a distributed environment is a challenge. Blockchain is a decentralized network with no central node to observe and check all transactions, so protocols are needed to ensure that all recorded transactions are valid. Consensus algorithms are therefore considered the core of each blockchain43. In distributed systems, consensus is the problem of having all network members (nodes) agree to accept or reject a block; once all members accept the new block, it can be appended to the previous block.
As mentioned, the main concern in blockchains is how to reach consensus among network members. A wide range of consensus algorithms has been designed, each with its own pros and cons42. Blockchain consensus algorithms fall into three main groups, shown in Table 2. The first group, proof-based consensus algorithms, requires nodes joining the verifying network to demonstrate their qualification to do the appending task. The second group, voting-based consensus, requires validators in the network to share their results of validating a new block or transaction before making the final decision. The third group is DAG-based consensus, a new class of consensus algorithms that allows several different blocks to be published and recorded simultaneously on the network.

Table 2 Consensus algorithms in blockchain networks.
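The voting-based family described above can be sketched in a few lines: each validator votes on a candidate block, and the block is appended only with a qualified majority. The 2/3 threshold and validator names are illustrative assumptions.

```python
# Sketch of voting-based consensus: append a block only if a qualified
# majority of validators voted to accept it.
from collections import Counter

def voting_consensus(votes: dict, threshold: float = 2 / 3) -> bool:
    """votes maps validator id -> True (accept) / False (reject)."""
    tally = Counter(votes.values())
    return tally[True] / len(votes) >= threshold

chain = ["genesis"]
votes = {"patent-office-1": True,
         "patent-office-2": True,
         "patent-office-3": False}
if voting_consensus(votes):      # 2 of 3 accept -> meets the 2/3 threshold
    chain.append("block-1")
assert chain == ["genesis", "block-1"]
```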
The proposed patents-as-NFTs platform builds intellectual property management on the blockchain and empowers the entire patent ecosystem, removing barriers by addressing fundamental issues within the traditional patent ecosystem. Blockchain can efficiently handle patents and trademarks by reducing approval wait times and other required resources. The user entities involved in intellectual property management are creators, patent consumers, and copyright-managing entities. Patent creators are users with ownership of the original data, e.g., inventors, writers, and researchers. Patent consumers are users who are willing to consume the content and support the creator's work. Copyright-managing entities, e.g., lawyers, are the users responsible for protecting the creators' intellectual property. The patents-as-NFTs solution for IP management in the blockchain layer works by implementing the following steps62:
Creators sign up to the platform
Creators need to sign up on the blockchain platform to patent their creative work. The identity information will be required while signing up.
Creators upload IP on the blockchain network
Next, the creator adds the intellectual property for which a patent application is required, uploading the information related to the IP and the data to the blockchain network. Blockchain ensures traceability and auditability to protect the data from duplication and manipulation. Once uploaded to the blockchain, the patent becomes visible to all network members.
Consumers generate request to use the content
Consumers who want to access the content must first register on the blockchain network. After signing up, consumers can ask creators to grant access to the patented content. Before the patent owner authorizes the request, a smart contract is created to allow customers to access information such as the owner's data. Furthermore, consumers are required to pay fees in either fiat money or dedicated tokens in order to use the creator's original information. When the creator approves the request, an NDA (Non-Disclosure Agreement) is produced and signed by both parties. Blockchain manages the agreement and guarantees that all parties adhere to the terms and conditions filed.
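The access-grant logic such a smart contract would encode can be sketched in Python (the contract itself would normally live on-chain, e.g. in Solidity; all names, the fee, and the NDA model here are illustrative assumptions):

```python
# Sketch of the access-grant flow: fee payment plus a signed NDA from both
# parties unlock access to the patented content. Illustrative only.
class PatentAccessContract:
    def __init__(self, owner: str, fee: int):
        self.owner = owner
        self.fee = fee
        self.nda_signed = set()   # parties that signed the NDA
        self.granted = set()      # consumers with unlocked access

    def sign_nda(self, party: str) -> None:
        self.nda_signed.add(party)

    def request_access(self, consumer: str, payment: int) -> bool:
        both_signed = {self.owner, consumer} <= self.nda_signed
        if payment >= self.fee and both_signed:
            self.granted.add(consumer)
            return True
        return False

contract = PatentAccessContract(owner="inventor", fee=100)
contract.sign_nda("inventor")
contract.sign_nda("consumer")
assert contract.request_access("consumer", 100)        # fee + NDA -> granted
assert not contract.request_access("freeloader", 100)  # no NDA signed
```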
Patent management entities leverage blockchain to protect copyrights and solve related disputes
Blockchain assists the patent management entities in resolving a variety of disputes, which may involve sharing confidential information, establishing proof of authorship, transferring IP rights, and making defensive publications. Suppose a person uses an invention from a patent for their company without the inventor's consent; the inventor can report it to the patent office and claim ownership of that invention.
Application layer
The Patent Platform Global Marketplace technology would allow many enterprises, governments, universities, and small and medium-sized enterprises (SMEs) worldwide to tokenize patents as NFTs, creating an infrastructure for storing patent records on a blockchain-based network and developing a decentralized marketplace in which patent holders could easily sell or otherwise monetize their patents. An NFT-based patent can use smart contracts to set a price for a license or purchase.
Any buyer satisfied with the conditions can pay and immediately unlock the rights to the patent without either party ever having to interact directly. While patents are currently regulated jurisdictionally around the world, a blockchain-based patent marketplace using NFTs can reduce the geographical barriers between patent systems with a tool as simple as a search query. Global access to patents can help aspiring inventors accelerate the innovative process by building upon others' patented inventions through licenses. There is a wide variety of use cases for patent NFTs, such as SMEs, patent organizations, grants and funding, and fundraising or transferring information relating to patents. These applications keep growing as time progresses, and new ways to utilize these tokens are constantly being found. Some of the most common applications are as follows.
SMEs
The aim is to move intellectual property assets onto a digital, centralized, and secure blockchain network, enabling easier commercialization of patents, especially for small or medium enterprises (SMEs). Smart contracts can be attached to NFTs so terms of use and ownership can be outlined and agreed upon without incurring as many legal fees as traditional IP transfers. This is believed to help SMEs secure funding, as they could more easily leverage the previously undisclosed value of their patent portfolios63.
Transfer ownership of patents
NFTs can be used to transfer ownership of patents. The blockchain can be used to keep track of patent owners, and tokens would include self-executing contracts that transfer the legal rights associated with patents when the tokens are transferred. A partnership between IBM and IPwe has spearheaded the use of NFTs to secure patent ownership. These two companies have teamed together to build the infrastructure for an NFT-based patent marketplace.
Discussion
There are exciting proposals in the legal and economic literature that suggest seemingly straightforward solutions to many of the issues plaguing current patent systems. However, most solutions would constitute major administrative disruptions and place significant and continuous financial burdens on patent offices or their users. An NFT-based patents system not only makes many of these ideas administratively feasible but can also be examined in a step-wise, scalable, and very public manner.
Furthermore, NFT-based patents may facilitate reliable information sharing among offices and patentees worldwide, reducing the burden on examiners and perhaps even accelerating harmonization efforts. NFT-based patents also have transparency and archival attributes baked in. A patent should be a privilege bestowed on those who take resource-intensive risks to explore the frontier of technological capabilities, and full transparency of these rewards is very much in the public interest: it is society that pays for the administrative and economic inefficiencies of today's systems, and NFT-based patents can enhance this transparency. From an organizational perspective, NFT-based patents can remove current bottlenecks in patent processes by making them more efficient, rapid, and convenient for applicants without compromising the quality of granted patents.
The proposed framework faces some challenges that must be solved to reach a mature patent verification platform. First, there are technical problems. The consensus method used in the verification layer is not addressed in detail; given the permissioned structure of miners in the NFT-based patent system, consensus algorithms designed for permissioned blockchains, such as PBFT, Federated Consensus, and Round-Robin Consensus, can be applied. Also, miners/validators spend time validating patents, so a protocol should be designed to compensate them. Challenges such as proving the miners' time and effort, the price inventors should pay to miners, and other economic trade-offs must be considered.
Different NFT standards were discussed. If various patent services use different NFT standards, cross-platform problems will arise; for instance, transferring an NFT from the Ethereum blockchain (an ERC-721 token) to the EOS blockchain is not straightforward and requires careful consideration. People also usually trade NFTs in marketplaces such as Rarible and OpenSea; these marketplaces are centralized and may introduce challenges because of their centralized nature. Other challenges stem from the novelty of NFT-based patents and blockchain services.
A blockchain-based patent service has not been tested before, and the patent registration procedure and the concepts of the patents-as-NFTs system may be ambiguous for people who still prefer conventional centralized patent systems over decentralized ones. There are also problems in the mining part: miners must receive certificates from accepted organizations, and determining these organizations and how they accept referees as validators needs more consideration. Some types of inventions are prohibited in some countries and cannot be registered; in NFT-based patents, inventors can register their patents publicly, so conflicts may arise between inventors and governments. There are also misunderstandings about NFT ownership rights: it is not clear exactly which rights a buyer of an NFT acquires, for instance whether they obtain property rights, moral rights, or both.
Conclusion
Blockchain technology provides strong timestamping, the potential for smart contracts, and proof-of-existence. It enables a transparent, distributed, cost-effective, and resilient environment that is open to all and in which each transaction is auditable. Blockchain is thus a definite boon to the IP industry, benefitting patent owners; when blockchain technology's intrinsic characteristics are applied to the IP domain, they also help protect copyrights. This paper provided a conceptual framework for an NFT-based patent with a comprehensive discussion of many aspects: background, model components, token standards, application areas, and research challenges. The proposed framework includes five main layers: the Storage Layer, Authentication Layer, Verification Layer, Blockchain Layer, and Application Layer. The primary purpose of this patent framework is to provide an NFT-based concept for a decentralized, tamper-proof, and reliable network for patent trade and exchange around the world. Finally, we addressed several open challenges of NFT-based patents.
References
Nakamoto, S. Bitcoin: A peer-to-peer electronic cash system. Decent. Bus. Rev. 21260, https://bitcoin.org/bitcoin.pdf (2008).
Buterin, V. A next-generation smart contract and decentralized application platform. White Pap.3 (2014).
Nofer, M., Gomber, P., Hinz, O. & Schiereck, D. Blockchain. Bus. Inf. Syst. Eng. 59, 183–187 (2017).
Entriken, W., Shirley, D., Evans, J. & Sachs, N. EIP 721: ERC-721 non-fungible token standard. Ethereum Improv. Propos.. https://eips.ethereum.org/EIPS/eip-721 (2018).
Radomski, W. et al. Eip 1155: Erc-1155 multi token standard. In Ethereum, Standard (2018).
Fairfield, J. Tokenized: The law of non-fungible tokens and unique digital property. Indiana Law J. forthcoming (2021).
Chevet, S. Blockchain technology and non-fungible tokens: Reshaping value chains in creative industries. Available at SSRN 3212662 (2018).
Bal, M. & Ner, C. NFTracer: a Non-Fungible token tracking proof-of-concept using Hyperledger Fabric. arXiv preprint arXiv:1905.04795 (2019).
Wang, Q., Li, R., Wang, Q. & Chen, S. Non-fungible token (NFT): Overview, evaluation, opportunities and challenges. arXiv preprint arXiv:2105.07447 (2021).
Qu, Q., Nurgaliev, I., Muzammal, M., Jensen, C. S. & Fan, J. On spatio-temporal blockchain query processing. Future Gener. Comput. Syst. 98, 208–218 (2019).
Rosenfeld, M. Overview of colored coins. White paper, bitcoil. co. il41, 94 (2012).
Benisi, N. Z., Aminian, M. & Javadi, B. Blockchain-based decentralized storage networks: A survey. J. Netw. Comput. Appl. 162, 102656 (2020).
NFTReview. On-chain vs. Off-chain Metadata (2021).
Nizamuddin, N., Salah, K., Azad, M. A., Arshad, J. & Rehman, M. Decentralized document version control using ethereum blockchain and IPFS. Comput. Electr. Eng. 76, 183–197 (2019).
Tut, K. Who Is Responsible for NFT Data? (2020).
nft.storage. Free Storage for NFTs, Retrieved 16 May, 2021, from https://nft.storage/. (2021).
Psaras, Y. & Dias, D. in 2020 50th Annual IEEE-IFIP International Conference on Dependable Systems and Networks-Supplemental Volume (DSN-S). 80–80 (IEEE).
Tanner, J. & Roelofs, C. NFTs and the need for Self-Sovereign Identity (2021).
Martens, D., Tuyll van Serooskerken, A. V. & Steenhagen, M. Exploring the potential of blockchain for KYC. J. Digit. Bank. 2, 123–131 (2017).
Hammi, M. T., Bellot, P. & Serhrouchni, A. In 2018 IEEE Wireless Communications and Networking Conference (WCNC). 1–6 (IEEE).
Khalid, U. et al. A decentralized lightweight blockchain-based authentication mechanism for IoT systems. Cluster Comput. 1–21 (2020).
Zhong, Y. et al. Distributed blockchain-based authentication and authorization protocol for smart grid. Wirel. Commun. Mobile Comput. (2021).
Schönhals, A., Hepp, T. & Gipp, B. In Proceedings of the 1st Workshop on Cryptocurrencies and Blockchains for Distributed Systems. 105–110.
Verma, S. & Prajapati, G. A survey of cryptographic hash algorithms and issues. Int. J. Comput. Secur. Source Code Anal. (IJCSSCA) 1, 17–20 (2015).
SDK, I. X.509 Certificates (1996).
Helliar, C. V., Crawford, L., Rocca, L., Teodori, C. & Veneziani, M. Permissionless and permissioned blockchain diffusion. Int. J. Inf. Manag. 54, 102136 (2020).
Frizzo-Barker, J. et al. Blockchain as a disruptive technology for business: A systematic review. Int. J. Inf. Manag. 51, 102029 (2020).
Bamakan, S. M. H., Motavali, A. & Bondarti, A. B. A survey of blockchain consensus algorithms performance evaluation criteria. Expert Syst. Appl. 154, 113385 (2020).
Bamakan, S. M. H., Bondarti, A. B., Bondarti, P. B. & Qu, Q. Blockchain technology forecasting by patent analytics and text mining. Blockchain Res. Appl. 100019 (2021).
Castro, M. & Liskov, B. Practical Byzantine fault tolerance and proactive recovery. ACM Trans. Comput. Syst. (TOCS) 20, 398–461 (2002).
Muratov, F., Lebedev, A., Iushkevich, N., Nasrulin, B. & Takemiya, M. YAC: BFT consensus algorithm for blockchain. arXiv preprint arXiv:1809.00554 (2018).
Bessani, A., Sousa, J. & Alchieri, E. E. In 2014 44th Annual IEEE/IFIP International Conference on Dependable Systems and Networks. 355–362 (IEEE).
Todd, P. Ripple protocol consensus algorithm review. May 11th (2015).
Ongaro, D. & Ousterhout, J. In 2014 {USENIX} Annual Technical Conference ({USENIX}{ATC} 14). 305–319.
Dziembowski, S., Faust, S., Kolmogorov, V. & Pietrzak, K. In Annual Cryptology Conference. 585–605 (Springer).
Bentov, I., Lee, C., Mizrahi, A. & Rosenfeld, M. Proof of Activity: Extending Bitcoin’s Proof of Work via Proof of Stake. IACR Cryptology ePrint Archive 2014, 452 (2014).
Bramas, Q. The Stability and the Security of the Tangle (2018).
Baird, L. The swirlds hashgraph consensus algorithm: Fair, fast, byzantine fault tolerance. In Swirlds Tech Reports SWIRLDS-TR-2016–01, Tech. Rep (2016).
LeMahieu, C. Nano: A feeless distributed cryptocurrency network. Nano whitepaper. https://nano.org/en/whitepaper (accessed 24 March 2018).
Casino, F., Dasaklis, T. K. & Patsakis, C. A systematic literature review of blockchain-based applications: Current status, classification and open issues. Telematics Inform. 36, 55–81 (2019).
bigredawesomedodo. Helping Small Businesses Survive and Grow With Marketing. https://bigredawesomedodo.com/nft/ (accessed 3 June 2021).
This work has been partially supported by CAS President’s International Fellowship Initiative, China [grant number 2021VTB0002, 2021] and National Natural Science Foundation of China (No. 61902385).
Author information
Affiliations
Seyed Mojtaba Hosseini Bamakan: Department of Industrial Management, Yazd University, Yazd, Iran
Nasim Nezhadsistani: Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan, Iran
Omid Bodaghi: School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran
Seyed Mojtaba Hosseini Bamakan & Qiang Qu: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
#TUBiol5227: Biomarkers & Biotargets: Genetic Testing and Bioethics
Curator: Stephen J. Williams, Ph.D.
The advent of direct to consumer (DTC) genetic testing and the resultant rapid increase in its popularity as well as companies offering such services has created some urgent and unique bioethical challenges surrounding this niche in the marketplace. At first, most DTC companies like 23andMe and Ancestry.com offered non-clinical or non-FDA approved genetic testing as a way for consumers to draw casual inferences from their DNA sequence and existence of known genes that are linked to disease risk, or to get a glimpse of their familial background. However, many issues arose, including legal, privacy, medical, and bioethical issues. Below are some articles which will explain and discuss many of these problems associated with the DTC genetic testing market as well as some alternatives which may exist.
As you can see, this market segment appears poised to expand into nutritional consulting as well as targeted biomarkers for specific diseases.
Rising incidence of genetic disorders across the globe will augment the market growth
The increasing prevalence of genetic disorders will propel demand for direct-to-consumer genetic testing and augment industry growth over the projected timeline. Rising cases of genetic diseases such as breast cancer, achondroplasia, and colorectal cancer have elevated the need for cost-effective and efficient genetic testing avenues in the healthcare market.
For instance, according to the World Cancer Research Fund (WCRF), over 2 million new cases of cancer were diagnosed across the globe in 2018, with breast cancer the second most commonly occurring cancer. The availability of high-quality, advanced direct-to-consumer genetic testing has drastically reduced mortality rates in people suffering from cancer by providing vigilant surveillance data even before the onset of the disease. Hence, the aforementioned factors will propel the direct-to-consumer genetic testing market over the forecast timeline.
Nutrigenomic Testing will provide robust market growth
The nutrigenomic testing segment was valued at over USD 220 million in 2019 and will witness tremendous growth over 2020–2028. The segment's growth is attributed to increasing research activity related to nutrition. Moreover, obesity is another major factor that will boost demand in the direct-to-consumer genetic testing market.
Nutrigenomics testing enables professionals to recommend nutritional guidance and personalized diets to obese people, helping them keep their weight under control while maintaining a healthy lifestyle. Hence, the above-mentioned factors are anticipated to augment the demand for, and adoption of, direct-to-consumer genetic testing through 2028.
Browse key industry insights spread across 161 pages with 126 market data tables & 10 figures & charts from the report, “Direct-To-Consumer Genetic Testing Market Size By Test Type (Carrier Testing, Predictive Testing, Ancestry & Relationship Testing, Nutrigenomics Testing), By Distribution Channel (Online Platforms, Over-the-Counter), By Technology (Targeted Analysis, Single Nucleotide Polymorphism (SNP) Chips, Whole Genome Sequencing (WGS)), Industry Analysis Report, Regional Outlook, Application Potential, Price Trends, Competitive Market Share & Forecast, 2020 – 2028” in detail along with the table of contents: https://www.gminsights.com/industry-analysis/direct-to-consumer-dtc-genetic-testing-market
Targeted analysis techniques will drive the market growth over the foreseeable future
Based on technology, the DTC genetic testing market is segmented into whole genome sequencing (WGS), targeted analysis, and single nucleotide polymorphism (SNP) chips. The targeted analysis segment is projected to witness around 12% CAGR over the forecast period. This growth is attributed to recent advancements in genetic testing methods that have revolutionized the detection and characterization of genetic codes.
Targeted analysis is mainly utilized to detect defects in the genes responsible for a disorder or disease. Growing demand for personalized medicine among people suffering from genetic diseases will also boost demand for targeted analysis technology. Because the technology is relatively cheap, it is a highly preferred method in direct-to-consumer genetic testing procedures. These advantages are expected to drive the segment's growth over the foreseeable future.
Over-the-counter segment will experience a notable growth over the forecast period
The over-the-counter distribution channel is projected to witness around 11% CAGR through 2028. The segmental growth is attributed to the ease of purchasing a test kit for consumers living in rural areas of developing countries. Consumers prefer the over-the-counter channel because the kits are directly examined by regulatory agencies, making them safer to use, thereby driving market growth over the forecast timeline.
Favorable regulations provide lucrative growth opportunities for direct-to-consumer genetic testing
The Europe direct-to-consumer genetic testing market held around a 26% share in 2019 and was valued at around USD 290 million. The regional growth is due to elevated government spending on healthcare to provide easy access to genetic testing. Furthermore, European regulatory bodies are working on improving the regulations governing direct-to-consumer genetic testing methods. Hence, the above-mentioned factors will play a significant role in the market's growth.
Focus of market players on introducing innovative direct-to-consumer genetic testing devices will offer several growth opportunities
A few of the eminent players operating in the direct-to-consumer genetic testing market include Ancestry, Color Genomics, Living DNA, Mapmygenome, Easy DNA, FamilytreeDNA (Gene By Gene), Full Genome Corporation, Helix OpCo LLC, Identigene, Karmagenes, MyHeritage, Pathway Genomics, Genesis Healthcare, and 23andMe. These players have undertaken various business strategies to enhance their financial stability and evolve into leading companies in the direct-to-consumer genetic testing industry.
For example, in November 2018, Helix launched a new genetic testing product, the DNA Discovery Kit, that allows customers to delve into their ancestry. This development expanded the firm's product portfolio, thereby propelling industry growth.
The following posts discuss bioethical issues related to genetic testing and personalized medicine from a clinician's and scientist's perspective.
Question: Each of these articles discusses certain bioethical issues, although each focuses on personalized medicine and treatment. Given your understanding of the robust process involved in validating clinical biomarkers and the current state of the DTC market, how could DTC testing results misinform patients and create mistrust in the physician-patient relationship?
Question: If you are developing a targeted treatment with a companion diagnostic, what bioethical concerns would you address during the drug development process to ensure fair, equitable and ethical treatment of all patients, in trials as well as post market?
Articles on Genetic Testing, Companion Diagnostics and Regulatory Mechanisms
Question: What type of regulatory concerns should one have during the drug development process in regards to the use of biomarker testing? From the last article on Protecting Your IP, how important is it, as a drug developer, to involve all payers during the drug development process?
MIT Technology Review announced list of “Innovators Under 35, 2020”
Reporter: Aviva Lev-Ari, PhD, RN
Innovators Under 35, 2020
In chaotic times it can be reassuring to see so many people working toward a better world. That’s true for medical professionals fighting a pandemic and for ordinary citizens fighting for social justice. And it’s true for those among us striving to employ technology to address those problems and many others.
The 35 young innovators in these pages aren’t all working to fight a pandemic, though some are: see Omar Abudayyeh and Andreas Puschnik. And they’re not all looking to remedy social injustices, though some are: see Inioluwa Deborah Raji and Mohamed Dhaouafi. But even those who aren’t tackling those specific problems are seeking ways to use technology to help people. They’re trying to solve our climate crisis, find a cure for Parkinson’s, or make drinking water available to those who are desperate for it.
We’ve been presenting our list of innovators under 35 for the past 20 years. We do it to highlight the things young innovators are working on, to show at least some of the possible directions that technology will take in the coming decade. This contest generates more than 500 nominations each year. The editors then face the task of picking 100 semifinalists to put in front of our 25 judges, who have expertise in artificial intelligence, biotechnology, software, energy, materials, and so on. With the invaluable help of these rankings, the editors pick the final list of 35.
Inventors
Their innovations point toward a future with new types of batteries, solar panels, and microchips.
Omar Abudayyeh
He’s working to use CRISPR as a covid-19 test that you could take at home.
Christina Boville
She modifies enzymes to enable production of new compounds for industry.
Manuel Le Gallo
He uses novel computer designs to make AI less power hungry.
Nadya Peek
She builds novel modular machines that can do just about anything you can imagine.
Leila Pirhaji
She developed an AI-based system that can identify more small molecules in a patient’s body, faster than ever before.
Randall Jeffrey Platt
His recording tool provides a video of genes turning on or off.
Rebecca Saive
She found a way to make solar panels cheaper and more efficient.
Venkat Viswanathan
His work on a new type of battery could make EVs much cheaper.
Anastasia Volkova
Her platform uses remote sensing and other techniques to monitor crop health—helping farmers focus their efforts where they’re most needed.
Sihong Wang
His stretchable microchips promise to make all sorts of new devices possible.
Entrepreneurs
Their technological innovations bust up the status quo and lead to new ways of doing business.
Jiwei Li
In the last few months, Google and Facebook have both released new chatbots. Jiwei Li’s techniques are at the heart of both.
Atima Lui
She’s using technology to correct the cosmetics industry’s bias toward light skin.
Tony Pan
His company revamps an old device to allow you to generate electricity in your own home.
Visionaries
Their innovations are leading to breakthroughs in AI, quantum computing, and medical implants.
Leilani Battle
Her program sifts through data faster so scientists can focus more on science.
Morgan Beller
She was a key player behind the idea of a Facebook cryptocurrency.
Eimear Dolan
Medical implants are often thwarted as the body grows tissue to defend itself. She may have found a drug-free fix for the problem.
Rose Faghih
Her sensor-laden wristwatch would monitor your brain states.
Bo Li
By devising new ways to fool AI, she is making it safer.
Zlatko Minev
His discovery could reduce errors in quantum computing.
Miguel Modestino
He is reducing the chemical industry’s carbon footprint by using AI to optimize reactions with electricity instead of heat.
Inioluwa Deborah Raji
Her research on racial bias in data used to train facial recognition systems is forcing companies to change their ways.
Adriana Schulz
Her tools let anyone design products without having to understand materials science or engineering.
Dongjin Seo
He is designing computer chips to seamlessly connect human brains and machines.
Humanitarians
They’re using technology to cure diseases and make water, housing, and prosthetics available to all.
Mohamed Dhaouafi
His company’s artificial limbs are not only high-functioning but cheap enough for people in low-income countries.
Alex Le Roux
A massive 3D-printing project in Mexico could point the way to the future of affordable housing.
Katharina Volz
A loved one’s diagnosis led her to employ machine learning in the search for a Parkinson’s cure.
David Warsinger
His system could alleviate the drawbacks of existing desalination plants.
Pioneers
Their innovations lead the way to biodegradable plastics, textiles that keep you cool, and cars that “see.”
Ghena Alhanaee
Heavy dependence on infrastructure like oil rigs, nuclear reactors, and desalination plants can be catastrophic in a crisis. Her data-driven framework could help nations prepare.
Avinash Manjula Basavanna
His biodegradable plastic protects against extreme chemicals, but heals itself using water.
Lili Cai
She created energy-efficient textiles to break our air-conditioning habit.
Gregory Ekchian
He invented a way to make radiation therapy for cancer safer and more effective.
Jennifer Glick
If quantum computers work, what can we use them for? She’s working to figure that out.
Andrej Karpathy
He’s employing neural networks to allow automated cars to “see.”
Siddharth Krishnan
A tiny, powerful sensor for making disease diagnosis cheaper, faster, and easier.
Andreas Puschnik
Seeking a universal treatment for viral diseases, he might leave us much better prepared for the next pandemic.
Predicting the Protein Structure of Coronavirus: Inhibition of Nsp15 can slow viral replication and Cryo-EM – Spike protein structure (experimentally verified) vs AI-predicted protein structures (not experimentally verified) of DeepMind (Parent: Google) aka AlphaFold
Curators: Stephen J. Williams, PhD and Aviva Lev-Ari, PhD, RN
This illustration, created at the Centers for Disease Control and Prevention (CDC), reveals ultrastructural morphology exhibited by coronaviruses. Note the spikes that adorn the outer surface of the virus, which impart the look of a corona surrounding the virion, when viewed electron microscopically. A novel coronavirus virus was identified as the cause of an outbreak of respiratory illness first detected in Wuhan, China in 2019.
The authors would like to note that the first eight authors are listed alphabetically.
Abstract
During its first month, the recently emerged 2019 Wuhan novel coronavirus (2019-nCoV) has already infected many thousands of people in mainland China and worldwide and taken hundreds of lives. However, the swiftly spreading virus has also prompted an unprecedentedly rapid response from a research community facing an unknown health challenge of potentially enormous proportions. Unfortunately, experimental research to understand the molecular mechanisms behind the viral infection and to design vaccines or antivirals is costly and takes months. To expedite the advancement of our knowledge, we leverage data about the related coronaviruses that are readily available in public databases and integrate these data into a single computational pipeline. As a result, we provide comprehensive structural genomics and interactomics roadmaps of 2019-nCoV and use this information to infer possible functional differences and similarities with the related SARS coronavirus. All data are made publicly available to the research community at http://korkinlab.org/wuhan
Figure 2. Structurally characterized non-structural proteins of 2019-nCoV. Highlighted in pink are mutations found when aligning the proteins against their homologs from the closest related coronaviruses: 2019-nCoV and human SARS, bat coronavirus, and another bat betacoronavirus BtRf-BetaCoV. The structurally resolved part of wNsp7 is sequentially identical to its homolog.
Figure 3. Structurally characterized structural proteins and an ORF of 2019-nCoV. Highlighted in pink are mutations found when aligning the proteins against their homologs from the closest related coronaviruses: 2019-nCoV and human SARS, bat coronavirus, and another bat betacoronavirus BtRf-BetaCoV. Highlighted in yellow are novel protein inserts found in wS.
Figure 4. Structurally characterized intra-viral and host-viral protein-protein interaction complexes of 2019-nCoV. Human proteins (colored in orange) are identified through their gene names. For each intra-viral structure, the number of subunits involved in the interaction is specified.
Figure 5. Evolutionary conservation of functional sites in 2019-nCoV proteins. A. Fully conserved protein binding sites (PBS, light orange) of wNsp12 in its interaction with wNsp7 and wNsp8 while other parts of the protein surface shows mutations (magenta); B. Both major monoclonal antibody binding site (light orange) and ACE2 receptor binding site (dark green) of wS are heavily mutated (binding site mutations are shown in red) compared to the same binding sites in other coronaviruses; mutations not located on the two binding sites are shown in magenta; C. Nearly intact protein binding site (light orange) of wNsp (papain-like protease PLpro domain) for its putative interaction with human ubiquitin-aldehyde (binding site mutations for the only two residues are shown in red, non-binding site mutations are shown in magenta); D. Fully conserved inhibitor ligand binding site (LBS, green) for wNsp5; non-binding site mutations are shown in magenta.
According to the World Health Organization, coronaviruses make up a large family of viruses named for the crown-like spikes found on their surface (Figure 1). They carry their genetic material in single strands of RNA and cause respiratory problems and fever. Like HIV, coronaviruses can be transmitted between animals and humans. Coronaviruses have been responsible for the Severe Acute Respiratory Syndrome (SARS) pandemic in the early 2000s and the Middle East Respiratory Syndrome (MERS) outbreak in South Korea in 2015. While the most recent coronavirus, COVID-19, has caused international concern, accessible and inexpensive sequencing is helping us understand COVID-19 and respond to the outbreak quickly.
Figure 1. Coronaviruses with the characteristic spikes as seen under a microscope.
First studies that explore genetic susceptibility to COVID-19 are now being published. The first results indicate that COVID-19 infects cells using the ACE2 cell-surface receptor. Genetic variants in the ACE2 receptor gene are thus likely to influence how effectively COVID-19 can enter the cells in our bodies. Researchers hope to discover genetic variants that confer resistance to a COVID-19 infection, similar to how some variants in the CCR5 receptor gene make people immune to HIV. At Nebula Genomics, we are monitoring the latest COVID-19 research and will add any relevant discoveries to the Nebula Research Library in a timely manner.
The Role of Genomics in Responding to COVID-19
Scientists in China sequenced COVID-19’s genome just a few weeks after the first case was reported in Wuhan. This stands in contrast to SARS, which was discovered in late 2002 but was not sequenced until April of 2003. It is through inexpensive genome-sequencing that many scientists across the globe are learning and sharing information about COVID-19, allowing us to track the evolution of COVID-19 in real-time. Ultimately, sequencing can help remove the fear of the unknown and allow scientists and health professionals to prepare to combat the spread of COVID-19.
Next-generation DNA sequencing technology has enabled us to determine that the COVID-19 genome is ~30,000 bases long. Moreover, researchers in China determined that COVID-19 is almost identical to a coronavirus found in bats and is very similar to SARS. These insights have been critical in aiding the development of diagnostics and vaccines. For example, the Centers for Disease Control and Prevention developed a diagnostic test to detect COVID-19 RNA from nose or mouth swabs.
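To illustrate the kind of whole-genome comparison behind statements like "almost identical to a coronavirus found in bats," here is a minimal sketch of percent identity between two already-aligned sequences. The sequences below are toy strings, not real viral genomes; real analyses first align the full ~30,000-base genomes.

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percentage of positions that match between two aligned, equal-length sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Toy example: one mismatch across 12 aligned bases
ref   = "ATGGTACCTGAA"
query = "ATGGTACCCGAA"
print(f"{percent_identity(ref, query):.1f}%")  # 91.7%
```

Reported bat-coronavirus similarity figures come from exactly this kind of tally, computed over an alignment of the complete genomes rather than a 12-base fragment.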
Moreover, a number of government agencies and pharmaceutical companies are in the process of developing COVID-19 vaccines to stop the virus from infecting more people. To protect humans from infection, inactivated virus particles or parts of the virus (e.g., viral proteins) can be injected into humans. The immune system recognizes the inactivated virus as foreign, priming the body to build immunity against possible future infection. Of note, Moderna Inc., the National Institute of Allergy and Infectious Diseases, and the Coalition for Epidemic Preparedness Innovations identified a COVID-19 vaccine candidate in a record 42 days. This vaccine will be tested in human clinical trials starting in April.
For more information about COVID-19, please refer to the World Health Organization website.
The problem w/ visionaries is that we don’t recognize them in a timely manner (too late). Ralph Baric @UNCpublichealth and Vineet Menachery deserve recognition for being 5 yrs ahead of #COVID19: https://nature.com/articles/nm.3985 (@NatureMedicine), https://pnas.org/content/113/11/3048 (@PNASNews) via @hoondy
Senior, A. W., Evans, R., Jumper, J. et al. Improved protein structure prediction using potentials from deep learning. Nature 577, 706–710 (2020). https://doi.org/10.1038/s41586-019-1923-7
Abstract
Protein structure prediction can be used to determine the three-dimensional shape of a protein from its amino acid sequence [1]. This problem is of fundamental importance as the structure of a protein largely determines its function [2]; however, protein structures can be difficult to determine experimentally. Considerable progress has recently been made by leveraging genetic information. It is possible to infer which amino acid residues are in contact by analysing covariation in homologous sequences, which aids in the prediction of protein structures [3]. Here we show that we can train a neural network to make accurate predictions of the distances between pairs of residues, which convey more information about the structure than contact predictions. Using this information, we construct a potential of mean force [4] that can accurately describe the shape of a protein. We find that the resulting potential can be optimized by a simple gradient descent algorithm to generate structures without complex sampling procedures. The resulting system, named AlphaFold, achieves high accuracy, even for sequences with fewer homologous sequences. In the recent Critical Assessment of Protein Structure Prediction [5] (CASP13)—a blind assessment of the state of the field—AlphaFold created high-accuracy structures (with template modelling (TM) scores [6] of 0.7 or higher) for 24 out of 43 free modelling domains, whereas the next best method, which used sampling and contact information, achieved such accuracy for only 14 out of 43 domains. AlphaFold represents a considerable advance in protein-structure prediction. We expect this increased accuracy to enable insights into the function and malfunction of proteins, especially in cases for which no structures for homologous proteins have been experimentally determined [7].
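The abstract's core idea, turning predicted inter-residue distances into a smooth potential and minimizing it by gradient descent, can be sketched in a few lines of NumPy. This is a toy illustration under invented data, not the AlphaFold implementation: the "predicted" distances here are taken from a random toy structure, and the potential is a plain squared error rather than a learned potential of mean force.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                            # toy chain of 8 "residues"
pts = rng.normal(size=(n, 3))                    # a hidden "true" structure
target = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)  # its distance matrix

coords = rng.normal(size=(n, 3))                 # random starting coordinates

def potential(x):
    """Squared error between current pairwise distances and the target distances."""
    d = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
    return 0.5 * np.sum((d - target) ** 2)

def grad(x):
    """Analytic gradient of `potential` with respect to the coordinates."""
    diff = x[:, None] - x[None, :]
    d = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(d, 1.0)                     # avoid 0/0 on the diagonal
    coef = (d - target) / d
    np.fill_diagonal(coef, 0.0)
    return 2.0 * np.sum(coef[..., None] * diff, axis=1)

e_start = potential(coords)
for _ in range(500):                             # simple gradient descent, as in the abstract
    coords -= 0.01 * grad(coords)
e_end = potential(coords)
print(e_end < e_start)                           # the chain relaxes toward the target distances
```

The real system replaces the random target matrix with neural-network distance predictions and the squared error with a statistically derived potential, but the optimization step is the same "no complex sampling" gradient descent described above.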
The scientific community has galvanised in response to the recent COVID-19 outbreak, building on decades of basic research characterising this virus family. Labs at the forefront of the outbreak response shared genomes of the virus in open access databases, which enabled researchers to rapidly develop tests for this novel pathogen. Other labs have shared experimentally-determined and computationally-predicted structures of some of the viral proteins, and still others have shared epidemiological data. We hope to contribute to the scientific effort using the latest version of our AlphaFold system by releasing structure predictions of several under-studied proteins associated with SARS-CoV-2, the virus that causes COVID-19. We emphasise that these structure predictions have not been experimentally verified, but hope they may contribute to the scientific community’s interrogation of how the virus functions, and serve as a hypothesis generation platform for future experimental work in developing therapeutics. We’re indebted to the work of many other labs: this work wouldn’t be possible without the efforts of researchers across the globe who have responded to the COVID-19 outbreak with incredible agility.
Knowing a protein’s structure provides an important resource for understanding how it functions, but experiments to determine the structure can take months or longer, and some prove to be intractable. For this reason, researchers have been developing computational methods to predict protein structure from the amino acid sequence. In cases where the structure of a similar protein has already been experimentally determined, algorithms based on “template modelling” are able to provide accurate predictions of the protein structure. AlphaFold, our recently published deep learning system, focuses on predicting protein structure accurately when no structures of similar proteins are available, called “free modelling”. We’ve continued to improve these methods since that publication and want to provide the most useful predictions, so we’re sharing predicted structures for some of the proteins in SARS-CoV-2 generated using our newly-developed methods.
It’s important to note that our structure prediction system is still in development and we can’t be certain of the accuracy of the structures we are providing, although we are confident that the system is more accurate than our earlier CASP13 system. We confirmed that our system provided an accurate prediction for the experimentally determined SARS-CoV-2 spike protein structure shared in the Protein Data Bank, and this gave us confidence that our model predictions on other proteins may be useful. We recently shared our results with several colleagues at the Francis Crick Institute in the UK, including structural biologists and virologists, who encouraged us to release our structures to the general scientific community now. Our models include per-residue confidence scores to help indicate which parts of the structure are more likely to be correct. We have only provided predictions for proteins which lack suitable templates or are otherwise difficult for template modeling. While these understudied proteins are not the main focus of current therapeutic efforts, they may add to researchers’ understanding of SARS-CoV-2.
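As a hedged illustration of how per-residue confidence scores might be consumed downstream, the sketch below filters a predicted structure's residues by a confidence cutoff. It assumes, as is common for predicted models distributed in PDB format, that the per-residue score is stored in the fixed-width B-factor column; the ATOM records are fabricated toy data, not actual DeepMind output.

```python
# Fabricated PDB-style ATOM records; per-residue confidence sits in the B-factor column
pdb_text = """\
ATOM      1  CA  MET A   1      11.104  13.207   2.100  1.00 92.50           C
ATOM      2  CA  ALA A   2      12.560  14.100   3.300  1.00 45.10           C
ATOM      3  CA  GLY A   3      14.020  15.900   4.700  1.00 78.30           C
"""

def confident_residues(pdb: str, min_conf: float = 70.0):
    """Return (residue number, confidence) for CA atoms scoring at or above min_conf."""
    kept = []
    for line in pdb.splitlines():
        if line.startswith("ATOM") and line[12:16].strip() == "CA":
            resnum = int(line[22:26])   # fixed-width residue-number field
            conf = float(line[60:66])   # fixed-width B-factor field
            if conf >= min_conf:
                kept.append((resnum, conf))
    return kept

print(confident_residues(pdb_text))  # [(1, 92.5), (3, 78.3)]
```

A researcher might use such a filter to restrict docking or mutation analysis to the parts of a predicted model that are more likely to be correct, exactly the use the per-residue scores are meant to support.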
Normally we’d wait to publish this work until it had been peer-reviewed for an academic journal. However, given the potential seriousness and time-sensitivity of the situation, we’re releasing the predicted structures as we have them now, under an open license so that anyone can make use of them.
Interested researchers can download the structures here, and can read more technical details about these predictions in a document included with the data. The protein structure predictions we’re releasing are for SARS-CoV-2 membrane protein, protein 3a, Nsp2, Nsp4, Nsp6, and Papain-like proteinase (C terminal domain). To emphasise, these are predicted structures which have not been experimentally verified. Work on the system continues for us, and we hope to share more about it in due course.
DeepMind has shared its results with researchers at the Francis Crick Institute, a biomedical research lab in the UK, as well as offering it for download from its website.
“Normally we’d wait to publish this work until it had been peer-reviewed for an academic journal. However, given the potential seriousness and time-sensitivity of the situation, we’re releasing the predicted structures as we have them now, under an open license so that anyone can make use of them,” it said. [ALA added bold face]
There are 93,090 cases of COVID-19, and 3,198 deaths, spread across 76 countries, according to the latest report from the World Health Organization at time of writing.
MHC content – The spike protein is thought to be the key to binding to cells via the angiotensin-converting enzyme 2 (ACE2) receptor; MHC is the major mechanism the immune system uses to distinguish self from non-self
Preliminary Identification of Potential Vaccine Targets for the COVID-19 Coronavirus (SARS-CoV-2) Based on SARS-CoV Immunological Studies
Syed Faraz Ahmed 1,†, Ahmed A. Quadeer 1,*,† and Matthew R. McKay 1,2,*
1 Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China; sfahmed@connect.ust.hk
2 Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
Received: 9 February 2020; Accepted: 24 February 2020; Published: 25 February 2020
Abstract:
The beginning of 2020 has seen the emergence of the COVID-19 outbreak caused by a novel coronavirus, Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2). There is an imminent need to better understand this new virus and to develop ways to control its spread. In this study, we sought to gain insights for vaccine design against SARS-CoV-2 by considering the high genetic similarity between SARS-CoV-2 and SARS-CoV, which caused the outbreak in 2003, and leveraging existing immunological studies of SARS-CoV. By screening the experimentally determined SARS-CoV-derived B cell and T cell epitopes in the immunogenic structural proteins of SARS-CoV, we identified a set of B cell and T cell epitopes derived from the spike (S) and nucleocapsid (N) proteins that map identically to SARS-CoV-2 proteins. As no mutation has been observed in these identified epitopes among the 120 available SARS-CoV-2 sequences (as of 21 February 2020), immune targeting of these epitopes may potentially offer protection against this novel virus. For the T cell epitopes, we performed a population coverage analysis of the associated MHC alleles and proposed a set of epitopes that is estimated to provide broad coverage globally, as well as in China. Our findings provide a screened set of epitopes that can help guide experimental efforts towards the development of vaccines against SARS-CoV-2.
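A rough sketch of the screening logic described in the abstract: keep only those SARS-CoV-derived epitope peptides that occur verbatim, i.e. with no observed mutation, in a SARS-CoV-2 protein sequence. The protein fragment and the candidate peptides below are illustrative placeholders, not the curated epitope sets used in the paper.

```python
# Illustrative spike-protein fragment and candidate epitope peptides (toy data)
spike_fragment = "MFVFLVLLPLVSSQCVNLTTRTQLPPAYTNSFTRGVYYPDKVFRSSVLHS"
candidate_epitopes = ["VNLTTRTQLP", "YYPDKVFRSS", "QQQQQQQQ"]

def conserved_epitopes(protein: str, epitopes: list[str]) -> list[str]:
    """Keep epitopes found verbatim in the protein, i.e. with no mutation."""
    return [e for e in epitopes if e in protein]

print(conserved_epitopes(spike_fragment, candidate_epitopes))
# ['VNLTTRTQLP', 'YYPDKVFRSS']
```

The paper's pipeline adds two steps this sketch omits: the check is repeated across all available SARS-CoV-2 sequences (120 at the time) so that a single observed mutation disqualifies an epitope, and the surviving T cell epitopes are then weighted by MHC-allele population coverage.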
Re: Protein structure prediction has been done for ages…
Not quite, Natural Selection does not measure methods, it measures outputs, usually at the organism level.
Sure correct folding is necessary for much protein function and we have prions and chaperone proteins to get it wrong and right.
The only way NS measures methods and mechanisms is if they are very energetically wasteful. But there are some very wasteful ones out there; Beta-Catenin at the end point of Wnt signalling comes particularly to mind.
“Determining the structure of the virus proteins might also help in developing a molecule that disrupts the operation of just those proteins, and not anything else in the human body.”
Well it might, but predicting whether a ‘drug’ will NOT interact with any of the other 20,000+ proteins in complex organisms is well beyond current science. If we could do that we could predict/avoid toxicity and other non-mechanism-related side-effects & mostly we can’t.
There are 480 structures on PDBe resulting from a search on ‘coronavirus,’ the top hits from MERS and SARS. PR stunt or not, they did win the most recent CASP ‘competition’, so arguably it’s probably our best shot right now – and I am certainly not satisfied that they have been sufficiently open in explaining their algorithms though I have not checked in the last few months. No one is betting anyone’s health on this, and it is not like making one wrong turn in a series of car directions. Latest prediction algorithms incorporate contact map predictions, so it’s not like a wrong dihedral angle sends the chain off in the wrong direction. A decent model would give something to run docking algorithms against with a series of already approved drugs, then we take that shortlist into the lab. A confirmed hit could be an instantly available treatment, no two year wait as currently estimated. [ALA added bold face]
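The repurposing workflow this comment sketches — dock a library of already-approved drugs against the predicted model, keep the best scorers, take that shortlist into the lab — reduces, at its simplest, to filtering and sorting by docking score. A toy sketch (the drug names and scores are invented; real pipelines use docking tools such as AutoDock Vina, whose energy-like scores in kcal/mol are more negative for stronger predicted binding):

```python
# Toy version of the "dock approved drugs, shortlist the best" step.
# Scores mimic docking energies in kcal/mol: more negative = better.

def shortlist(docking_scores, cutoff=-8.0):
    """Keep compounds scoring at or below cutoff, best (most negative) first."""
    hits = [(drug, s) for drug, s in docking_scores.items() if s <= cutoff]
    return sorted(hits, key=lambda pair: pair[1])

scores = {"drug_A": -9.2, "drug_B": -6.1, "drug_C": -8.5}  # invented values
print(shortlist(scores))  # [('drug_A', -9.2), ('drug_C', -8.5)]
```

The point of the comment stands regardless of tooling: even an imperfect structural model gives the screen something to rank against, and only the shortlist needs expensive wet-lab confirmation.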
Re: these structure predictions have not been experimentally verified
Naaaah. Can’t possibly be a stupid marketing stunt.
Well yes, a good possibility. But it can also be trying to build on the open-source model of putting it out there for others to build and improve upon. Essentially opening that “peer review” to a larger audience quicker. [ALA added bold face]
What bothers me, besides the obvious PR stunt, is that they say this prediction is licensed. How can a prediction from software be protected by, I presume, patents? And if this can be protected without even verifying which predictions actually work, what’s to stop someone spitting out millions of random, untested predictions just in case they can claim ownership later when one of them is proven to work? [ALA added bold face]
AI-predicted protein structures could unlock vaccine for Wuhan coronavirus… if correct… after clinical trials It’s not quite DeepMind’s ‘Come with me if you want to live’ moment, but it’s close, maybe
Experimentally derived by a group of scientists at the University of Texas at Austin and the National Institute of Allergy and Infectious Diseases, an agency under the US National Institutes of Health. They both feature a “Spike protein structure.”
Cryo-EM structure of the 2019-nCoV spike in the prefusion conformation
Other related articles published in this Open Access Online Scientific Journal include the following:
Group of Researchers @ University of California, Riverside, the University of Chicago, the U.S. Department of Energy’s Argonne National Laboratory, and Northwestern University solve COVID-19 Structure and Map Potential Therapeutics
Reporters: Stephen J Williams, PhD and Aviva Lev-Ari, PhD, RN
Old Industrial Revolution Paradigm of Education Needs to End: How Scientific Curation Can Transform Education
Curator: Stephen J. Williams, PhD.
Dr. Cathy N. Davidson from Duke University gives a talk entitled: Now You See It. Why the Future of Learning Demands a Paradigm Shift
In this talk, shown below, Dr. Davidson shows how our current education system was designed to educate students for Industrial Age careers and skills, and how this educational paradigm is failing to prepare students for the challenges they will face in their future careers.
Or as Dr. Davidson summarizes
Designing education not for your past but for their future
As the video is almost an hour I will summarize some of the main points below
PLEASE WATCH VIDEO
Summary of talk
Dr. Davidson starts the talk with a thesis: that Institutions tend to preserve the problems they were created to solve.
All the current work, teaching paradigms that we use today were created for the last information age (19th century)
Our job is to remake the institutions of education to work for the future, not the past we inherited
Four information ages or technologies that radically changed communication
advent of writing – in ancient Mesopotamia (B.C.); allowed us to record and transfer knowledge and ideas
movable type – first seen in 11th-century China
steam powered press – allowed books to be mass produced and available to the middle class. First time middle class was able to have unlimited access to literature
internet – ability to publish and share ideas worldwide
Interestingly, in the early phases of each of these information ages, the same four complaints were heard about the new technology/methodology of disseminating information
ruins memory
creates a distraction
ruins interpersonal dialogue and authority
reduces complexity of thought
She gives the example of Socrates, who hated writing and frequently stated that writing ruins memory, creates a distraction, and, worst of all, fixes ideas in a written form that cannot be changed or altered, and so destroys ‘free thinking’.
She discusses how our educational institutions are designed for the industrial age.
The need for collaborative (group) learning AND teaching
Designing education not for your past but for the future
In other words, preparing students for THEIR future, not your past – including future careers that do not exist today.
In the West we were all taught to answer silently and alone. In Japan, however, education is arranged in the han, or group, utilizing the best talents of each member of the group. In Japan you are placed in such groups at an early age. The concept is that each member of the group contributes their unique talent and skill for the betterment of the whole group. The goal is to demonstrate that the group worked well together.
In the 19th century, institutions had to solve a problem: how to get people off the farm and into the factory, and/or out of the shop and into the firm
It takes a lot of regulation and institutionalization to convince people that independent thought is not the best way in the corporation
keywords for an industrial age
timeliness
attention to task
standards, standardization
hierarchy
specialization, expertise
metrics (measures, management)
two cultures: separating curriculum into STEM versus artistic tracks, or dividing the world of science from the world of art
This effort led to the concept of scientific labor management, derived from this old paradigm in education: an educational system controlled, and its success measured, using
grades (A,B,C,D)
multiple choice tests
keywords for our age
workflow
multitasking attention
interactive process (Prototype, Feedback)
data mining
collaboration by difference
Can a methodology such as scientific curation move higher education toward this goal of teaching students to collaborate in an interactive process, using data mining to create a new workflow for any given problem? Can a methodology of scientific curation effect the changes needed in academic departments to achieve the above goal?
This will be the subject of future curations tested using real-world in class examples.
However, it is important to first discern that scientific content curation takes material from peer-reviewed and other expert-vetted sources. This distinguishes it from other types of content curation, which draw from varied sources, some of which are not expert-reviewed or vetted, or may even be ‘fake news’ or highly edited material such as altered video and audio. In this respect, the expert acts not only as curator but as referee. In addition, collaboration is necessary, and even compulsory, for the methodology of scientific content curation, positioning the curator not as the sole expert but revealing the CONTENT from experts as the main focus for learning and edification.
Other articles of note on this subject in this Open Access Online Scientific Journal include:
The above articles will give a good background on this NEW Conceived Methodology of Scientific Curation and its Applicability in various areas such as Medical Publishing, and as discussed below Medical Education.
To understand the new paradigm in medical communication and the impact curative networks have or will play in this arena please read the following:
This article discusses the history of medical communication, and how science and medical communication initially moved from discussions among select individuals to the current open, accessible and cooperative structure using Web 2.0 as a platform.
The Digital Age Gave Rise to New Definitions – New Benchmarks were born on the World Wide Web for the Intangible Asset of Firm’s Reputation: Pay a Premium for buying e-Reputation
Curator: Aviva Lev–Ari, PhD, RN
UPDATED on 4/4/2022
Analytics for e-Reputation based on LinkedIn 1st Degree Connections, +7,500 of LPBI Group’s Founder, 2012-2022: An Intangible Asset – Connections’ Position Seniority & Biotech / Pharma Focus
Author: Aviva Lev-Ari, PhD, RN, Founder of 1.0 LPBI, 2012-2020 & 2.0 LPBI, 2021-2025 and Data Scientist, Research Assistant III: Tianzuo George Li
Direct reputation, feedback reputation and signaling effects are present; and shows that better sellers are always more likely to brand stretch. The comparative statics with respect to the initial reputation level, however, are not obvious. … a higher reputation firm can earn a higher direct reputation effect premium. But a higher reputation firm also has more to lose. The trade-off between using one’s reputation and protecting it can go both ways.
Luís M B Cabral, New York University and CEPR, 2005
Part 1: A Digital Business Defined and the Intangible Asset of Firm’s Reputation
Claiming Distinction
Recognition Bestowed
The Technology
The Sphere of Influence
The Industrial Benefactors in Potential
The Actors at Play – Experts, Authors, Writers – Life Sciences & Medicine as it applies to HEALTH CARE
1st Level Connection on LinkedIn = +7,100 and Endorsements = +1,500
The DIGITAL REPUTATION of our Venture – Twitter for the Professional and for Institutions
Growth in Twitter Followers and in Global Reach: Who are the NEW Followers? They are OUR COMPETITION and other Media Establishments – that is the definition of a Trend Setter, Opinion Leader and Source for Emulation
Business Aspects of the Brick & Mortar World rendered OBSOLETE
Part 2: Business Perspectives on Reputation
Part 3: Economics Perspectives on Reputation
Part 1: A Digital Business Defined and the Intangible Asset of Firm’s Reputation
This curation attempts to teach-by-example the new reality of the Intangible Asset of Firm’s Reputation when the business is 100% in the cloud, 100% electronic in nature (paperless), the customers are the Global Universe and the organization is 100% Global and 100% virtual.
A Case in Point: Intellectual Property Production Process of Health Care Digital Content using electronic Media Channels
Optimal Testimonial of e-Product Quality and Reputation for an Open Access Online Scientific Journal pharmaceuticalintelligence.com
On 8/17/2018, Dr. Lev-Ari, PhD, RN was contacted by the President-elect of the Massachusetts Academy of Sciences (MAS), Prof. Katya Ravid of Boston University, School of Medicine, to join MAS in the role of Liaison to the Biotechnology and eScientific Publishing industries for the term of August 2018–July 2021. In the MAS, Dr. Lev-Ari serves as a Board member, Fellow, and Advisor to the Governing Board.
The LPBI Platform is being used by GLOBAL Communities of Scientists for interactive dialogue on SCIENCE – four case studies are presented in the link below
Electronic Scientific AGORA: Comment Exchanges by Global Scientists on Articles published in the Open Access Journal @pharmaceuticalintelligence.com – Four Case Studies
Curator and Editor-in-Chief: Journal and BioMed e-Series, Aviva Lev-Ari, PhD, RN
translate research into life-changing Global manufactured Medical Products – drugs, devices, biotech, combination; anything requiring FDA approval #MedProdDev
INmune Bio, Inc. is developing therapies that harness patient’s #immunesystem to treat #cancer. Our focus is on #NKcells and #myeloid derived suppressor cells.
Thomas Pfeiffer 1,2,4,*, Lily Tran 5, Coco Krumme 5 and David G Rand 1,3,*
1 Program for Evolutionary Dynamics, FAS, 2 School of Applied Sciences and Engineering, and 3 Department of Psychology, Harvard University, Cambridge MA 02138, USA; 4 New Zealand Institute for Advanced Study, Massey University, Auckland 0745, New Zealand; 5 MIT Media Laboratory, Cambridge MA 02139, USA
Reputation plays a central role in human societies.
Empirical and theoretical work indicates that a good reputation is valuable in that it increases one’s expected payoff in the future. Here, we explore a game that couples a repeated Prisoner’s Dilemma (PD), in which participants can earn and can benefit from a good reputation, with a market in which reputation can be bought and sold. This game allows us to investigate how the trading of reputation affects cooperation in the PD, and how participants assess the value of having a good reputation. We find that depending on how the game is set up, trading can have a positive or a negative effect on the overall frequency of cooperation. Moreover, we show that the more valuable a good reputation is in the PD, the higher the price at which it is traded in the market. Our findings have important implications for the use of reputation systems in practice.
Keywords: evolution of cooperation; reciprocal altruism; indirect reciprocity; reputation
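The game structure described in the abstract — a repeated Prisoner's Dilemma in which a good reputation is valuable because partners condition their cooperation on it — can be sketched in a few lines. The payoff values and the simple "discriminator" strategy below are standard illustrative choices from the indirect-reciprocity literature, not the parameters used in the actual experiment:

```python
# Minimal sketch of a reputation-conditioned Prisoner's Dilemma round.
# Payoffs are standard illustrative PD values: T=5 > R=3 > P=1 > S=0.

PAYOFFS = {  # (my_move, partner_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def move(strategy, partner_reputation):
    """A discriminator cooperates only with good-reputation partners."""
    if strategy == "discriminator":
        return "C" if partner_reputation == "good" else "D"
    return strategy  # "C" or "D" unconditionally

def play_round(strat1, rep1, strat2, rep2):
    m1, m2 = move(strat1, rep2), move(strat2, rep1)
    return PAYOFFS[(m1, m2)], PAYOFFS[(m2, m1)]

# Two discriminators with good reputations sustain mutual cooperation:
print(play_round("discriminator", "good", "discriminator", "good"))  # (3, 3)
# A bad reputation invites defection:
print(play_round("discriminator", "good", "discriminator", "bad"))   # (5, 0)
```

This makes concrete why a good reputation has a price in the paper's market: carrying one raises your expected payoff in future PD rounds against discriminating partners.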
Important note: The notes in this section are essentially limited to the ideas discussed in the present version of these lecture notes. They cannot therefore be considered a survey of the literature. There are dozens of articles on the economics of reputation which I do not include here. In a future version of the text, I hope to provide a more complete set of notes on the literature. The notes below follow the order in which topics are presented.
Bootstrap models. The bootstrap mechanism for trust is based on a general result known as the folk theorem (known as such because of its uncertain origins). For a fairly general statement of the theorem (and its proof) see Fudenberg and Maskin (1986). One of the main areas of application of the folk theorem has been the problem of (tacit or explicit) collusion in oligopoly. This is a typical problem of trust (or lack thereof): all firms would prefer prices to be high and output to be low; but each firm, individually, has an incentive to drop price and increase output. Friedman (1971) presents one of the earliest formal applications of the folk theorem to oligopoly collusion. He considers the case when firms set prices and history is perfectly observable. Both of the extensions presented in Section 2.2 were first developed with oligopoly collusion applications in mind. The case of trust with noisy signals (2.2.1) was first developed by Green and Porter (1984). A long series of papers have been written on this topic, including the influential work by Abreu, Pearce and Stacchetti (1990). Rotemberg and Saloner (1986) proposed a model of oligopoly collusion with fluctuating market demand. In this case, the intuition presented in Section 2.2.2 implies that firms collude on a lower price during periods of higher demand. This suggests that prices are counter-cyclical in markets where firms collude. Rotemberg and Saloner (1986) present supporting evidence from the cement industry. A number of papers have built on Rotemberg and Saloner’s analysis. Kandori (1992) shows that the i.i.d. assumption simplifies the analysis but is not crucial. Harrington (19??) considers a richer demand model and looks at how prices vary along the business cycle. The basic idea of repetition as a form of ensuring seller trustworthiness is developed in Klein and Leffler (1981). See also Telser (1980) and Shapiro (1983).
When considering the problem of free entry, Klein and Leffler (1981) propose advertising as a solution, whereas Shapiro (1983) suggests low introductory prices. Section ?? is based on my own research notes. The general analysis of self-reinforcing agreements when there is an outside option of the kind considered here may be found in Ray (2002). Watson (1999, 2002) also considers models where the level of trust starts at a low level and gradually increases.
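The trigger-strategy logic behind these folk-theorem applications reduces to one inequality. With per-period profits pi_c under collusion, pi_d from a one-shot deviation, and pi_p under permanent punishment (Nash reversion), grim-trigger collusion is self-enforcing when the discount factor satisfies delta >= (pi_d - pi_c) / (pi_d - pi_p). A sketch with purely illustrative duopoly numbers:

```python
# Critical discount factor for sustaining collusion with grim-trigger
# strategies: collude forever vs. deviate once, then be punished forever.
#   value of colluding:  pi_c / (1 - delta)
#   value of deviating:  pi_d + delta * pi_p / (1 - delta)
# Collusion is sustainable iff delta >= (pi_d - pi_c) / (pi_d - pi_p).

def critical_delta(pi_c, pi_d, pi_p):
    """Smallest discount factor at which collusion is self-enforcing."""
    return (pi_d - pi_c) / (pi_d - pi_p)

# Illustrative numbers: half of monopoly profit is 10 per period,
# a deviator grabs 18 for one period, Nash punishment yields 4 per period.
print(critical_delta(10, 18, 4))  # ≈ 0.571
```

This also gives the intuition for Rotemberg and Saloner: when current demand (and hence the deviation gain pi_d) is temporarily high, the inequality tightens, so colluding firms must settle on a lower price in booms.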
Bayesian models. The seminal contributions to the study of Bayesian models of reputation are Kreps and Wilson (1982) and Milgrom and Roberts (1982). The model in Section 3.2.1 includes elements from these papers as well as from Diamond (1989). Holmström (1982/1999) makes the point that separation leads to reduced incentives to invest in reputation. The issue of reputation with separation and changing types is treated in detail in the forthcoming book by Mailath and Samuelson (2006). In Section 3.3, I presented a series of models that deal with names as carriers of reputations. The part on changing names (Section 3.3.1) reflects elements from a variety of models, though, to the best of my knowledge, no study exists that models the process of secret, costless name changes in an infinite-period adverse selection context. The study of markets for names follows the work by Tadelis (1999) and Mailath and Samuelson (2001). All of these papers are based on the Bayesian updating paradigm. Kreps (1990) presents an argument for trading reputations in a bootstrap type of model. The analysis of brand stretching (Section 3.3.3) is adapted from Cabral (2000). The paper considers a more general framework where the direct reputation, feedback reputation and signalling effects are present; and shows that better sellers are always more likely to brand stretch. The comparative statics with respect to the initial reputation level, however, are not obvious. As we saw above, a higher reputation firm can earn a higher direct reputation effect premium. But a higher reputation firm also has more to lose. The trade-off between using one’s reputation and protecting it can go both ways. For other papers on brand stretching and umbrella branding see Choi (1998) and Andersson (2002).
Bibliography
Abreu, Dilip, David Pearce and Ennio Stacchetti (1990), “Toward a Theory of Discounted Repeated Games with Imperfect Monitoring,” Econometrica 58, 1041–1064.
Andersson, Fredrik (2002), “Pooling reputations,” International Journal of Industrial Organization 20, 715–730.
Bernheim, B. Douglas and Michael D. Whinston (1990), “Multimarket Contact and Collusive Behavior,” Rand Journal of Economics 21, 1–26.
Cabral, Luís M B (2000), “Stretching Firm and Brand Reputation,” Rand Journal of Economics 31, 658–673.
Choi, J.P. (1998), “Brand Extension and Informational Leverage,” Review of Economic Studies 65, 655–669.
Diamond, Douglas W (1989), “Reputation Acquisition in Debt Markets,” Journal of Political Economy 97, 828–862.
Ely, Jeffrey C., and Juuso Välimäki (2003), “Bad Reputation,” The Quarterly Journal of Economics 118, 785–814.
Fishman, A., and R. Rob (2005), “Is Bigger Better? Customer Base Expansion through Word of Mouth Reputation,” forthcoming in Journal of Political Economy.
Friedman, James (1971), “A Noncooperative Equilibrium for Supergames,” Review of Economic Studies 28, 1–12.
Fudenberg, Drew and Eric Maskin (1986), “The Folk Theorem in Repeated Games with Discounting or with Imperfect Public Information,” Econometrica 54, 533–556.
Green, Ed and Robert Porter (1984), “Noncooperative Collusion Under Imperfect Price Information,” Econometrica 52, 87–100.
Holmström, Bengt (1999), “Managerial Incentive Problems: A Dynamic Perspective,” Review of Economic Studies 66, 169–182. (Originally (1982) in Essays in Honor of Professor Lars Wahlback.)
Kandori, Michihiro (1992), “Repeated Games Played by Overlapping Generations of Players,” Review of Economic Studies 59, 81–92.
Klein, B, and K Leffler (1981), “The Role of Market Forces in Assuring Contractual Performance,” Journal of Political Economy 89, 615–641.
Kreps, David (1990), “Corporate Culture and Economic Theory,” in J Alt and K Shepsle (Eds), Perspectives on Positive Political Economy, Cambridge: Cambridge University Press, 90–143.
Kreps, David M., Paul Milgrom, John Roberts and Robert Wilson (1982), “Rational Cooperation in the Finitely Repeated Prisoners’ Dilemma,” Journal of Economic Theory 27, 245–252.
Kreps, David M., and Robert Wilson (1982), “Reputation and Imperfect Information,” Journal of Economic Theory 27, 253–279.
Mailath, George J, and Larry Samuelson (1998), “Your Reputation Is Who You’re Not, Not Who You’d Like To Be,” University of Pennsylvania and University of Wisconsin.
Mailath, George J, and Larry Samuelson (2001), “Who Wants a Good Reputation?,” Review of Economic Studies 68, 415–441.
Mailath, George J, and Larry Samuelson (2006), Repeated Games and Reputations: Long-Run Relationships, Oxford: Oxford University Press.
Milgrom, Paul, and John Roberts (1982), “Predation, Reputation, and Entry Deterrence,” Journal of Economic Theory 27, 280–312.
Phelan, Christopher (2001), “Public Trust and Government Betrayal,” forthcoming in Journal of Economic Theory.
Ray, Debraj (2002), “The Time Structure of Self-Enforcing Agreements,” Econometrica 70, 547–582.
Rotemberg, Julio, and Garth Saloner (1986), “A Supergame-Theoretic Model of Price Wars During Booms,” American Economic Review 76, 390–407.
Shapiro, Carl (1983), “Premiums for High Quality Products as Rents to Reputation,” Quarterly Journal of Economics 98, 659–680.
Tadelis, S. (1999), “What’s in a Name? Reputation as a Tradeable Asset,” American Economic Review 89, 548–563.
Tadelis, Steven (2002), “The Market for Reputations as an Incentive Mechanism,” Journal of Political Economy 92, 854–882.
Telser, L G (1980), “A Theory of Self-enforcing Agreements,” Journal of Business 53, 27–44.
Tirole, Jean (1996), “A Theory of Collective Reputations (with applications to the persistence of corruption and to firm quality),” Review of Economic Studies 63, 1–22.
Watson, Joel (1999), “Starting Small and Renegotiation,” Journal of Economic Theory 85, 52–90.
Watson, Joel (2002), “Starting Small and Commitment,” Games and Economic Behavior 38, 176–199.
Wernerfelt, Birger (1988), “Umbrella Branding as a Signal of New Product Quality: An Example of Signalling by Posting a Bond,” Rand Journal of Economics 19, 458–466.
Paper in collection COVID-19 SARS-CoV-2 preprints from medRxiv and bioRxiv