
Archive for the ‘Artificial Intelligence – General’ Category

Artificial Intelligence (AI) Used to Successfully Determine Most Likely Repurposed Antibiotic Against Deadly Superbug Acinetobacter baumannii

Reporter: Stephen J. Williams, Ph.D.

The World Health Organization has identified three superbugs, infective microorganisms that display resistance to common antibiotics and multidrug resistance, as threats to humanity:

Three bacteria were listed as critical:

  • Acinetobacter baumannii, bacteria that are resistant to important antibiotics called carbapenems. Acinetobacter baumannii are highly drug-resistant bacteria that can cause a range of infections in hospitalized patients, including pneumonia, wound infections, or blood infections.
  • Pseudomonas aeruginosa, which are resistant to carbapenems. Pseudomonas aeruginosa can cause skin rashes and ear infections in healthy people, but also severe blood infections and pneumonia when contracted by sick people in the hospital.
  • Enterobacteriaceae — a family of bacteria that live in the human gut — that are resistant to both carbapenems and another class of antibiotics, cephalosporins.

 

Development of new antibiotics against these pathogens has been designated a critical need. Now researchers at McMaster University, together with collaborators in the US, have used artificial intelligence (AI) to screen a library of over 7,000 chemicals to find a drug that could be repurposed to kill the pathogen.

Liu et al. (1) recently published in Nature Chemical Biology the results of an AI screen to narrow down potential chemicals that could work against Acinetobacter baumannii.

Abstract

Acinetobacter baumannii is a nosocomial Gram-negative pathogen that often displays multidrug resistance. Discovering new antibiotics against A. baumannii has proven challenging through conventional screening approaches. Fortunately, machine learning methods allow for the rapid exploration of chemical space, increasing the probability of discovering new antibacterial molecules. Here we screened ~7,500 molecules for those that inhibited the growth of A. baumannii in vitro. We trained a neural network with this growth inhibition dataset and performed in silico predictions for structurally new molecules with activity against A. baumannii. Through this approach, we discovered abaucin, an antibacterial compound with narrow-spectrum activity against A. baumannii. Further investigations revealed that abaucin perturbs lipoprotein trafficking through a mechanism involving LolE. Moreover, abaucin could control an A. baumannii infection in a mouse wound model. This work highlights the utility of machine learning in antibiotic discovery and describes a promising lead with targeted activity against a challenging Gram-negative pathogen.

Schematic workflow for incorporating AI into antibiotic drug discovery for A. baumannii. From: Liu, G., Catacutan, D.B., Rathod, K. et al. Deep learning-guided discovery of an antibiotic targeting Acinetobacter baumannii. Nat Chem Biol (2023). https://doi.org/10.1038/s41589-023-01349-8

Figure source: https://www.nature.com/articles/s41589-023-01349-8

Article Source: https://www.nature.com/articles/s41589-023-01349-8

  1. Liu, G., Catacutan, D.B., Rathod, K. et al. Deep learning-guided discovery of an antibiotic targeting Acinetobacter baumannii. Nat Chem Biol (2023). https://doi.org/10.1038/s41589-023-01349-8

 

 

For reference to the WHO list of the world's most dangerous superbugs, see https://www.scientificamerican.com/article/who-releases-list-of-worlds-most-dangerous-superbugs/

The finding was first reported by the BBC.

Source: https://www.bbc.com/news/health-65709834

By James Gallagher

Health and science correspondent

Scientists have used artificial intelligence (AI) to discover a new antibiotic that can kill a deadly species of superbug.

The AI helped narrow down thousands of potential chemicals to a handful that could be tested in the laboratory.

The result was a potent, experimental antibiotic called abaucin, which will need further tests before being used.

The researchers in Canada and the US say AI has the power to massively accelerate the discovery of new drugs.

It is the latest example of how the tools of artificial intelligence can be a revolutionary force in science and medicine.

Stopping the superbugs

Antibiotics kill bacteria. However, there has been a lack of new drugs for decades and bacteria are becoming harder to treat, as they evolve resistance to the ones we have.

More than a million people a year are estimated to die from infections that resist treatment with antibiotics.

The researchers focused on one of the most problematic species of bacteria – Acinetobacter baumannii, which can infect wounds and cause pneumonia.

You may not have heard of it, but it is one of the three superbugs the World Health Organization has identified as a “critical” threat.

It is often able to shrug off multiple antibiotics and is a problem in hospitals and care homes, where it can survive on surfaces and medical equipment.

Dr Jonathan Stokes, from McMaster University, describes the bug as “public enemy number one” as it’s “really common” to find cases where it is “resistant to nearly every antibiotic”.

 

Artificial intelligence

To find a new antibiotic, the researchers first had to train the AI. They took thousands of drugs where the precise chemical structure was known, and manually tested them on Acinetobacter baumannii to see which could slow it down or kill it.

This information was fed into the AI so it could learn the chemical features of drugs that could attack the problematic bacterium.

The AI was then unleashed on a list of 6,680 compounds whose effectiveness was unknown. The results – published in Nature Chemical Biology – showed it took the AI an hour and a half to produce a shortlist.

The researchers tested 240 in the laboratory, and found nine potential antibiotics. One of them was the incredibly potent antibiotic abaucin.
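
For readers curious about the mechanics of such a screen, here is a minimal sketch in Python of the general idea: train a model on molecules labeled by measured growth inhibition, then rank an unscreened library by predicted activity and send the top hits to the lab. It is a generic scikit-learn stand-in, not the authors' pipeline; the feature matrices and labels below are made up, and only the dataset sizes (roughly 7,500 training molecules, 6,680 library compounds, 240 lab-tested hits) echo the numbers reported above.

```python
# Illustrative stand-in for an AI-guided antibiotic screen (not the authors' pipeline).
# features_* would normally be molecular descriptors (e.g., fingerprints); here they
# are random numbers so the sketch runs end to end.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# ~7,500 screened molecules with binary growth-inhibition labels (made-up data)
features_train = rng.random((7500, 1024))
labels_train = rng.integers(0, 2, size=7500)        # 1 = inhibited A. baumannii growth

# 6,680 unscreened library compounds (made-up data)
features_library = rng.random((6680, 1024))

# Train a small neural network on the growth-inhibition dataset
X_tr, X_val, y_tr, y_val = train_test_split(
    features_train, labels_train, test_size=0.2, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(128,), max_iter=100, random_state=0)
model.fit(X_tr, y_tr)
print("validation accuracy:", round(model.score(X_val, y_val), 3))

# Rank the unscreened library by predicted probability of activity
scores = model.predict_proba(features_library)[:, 1]
shortlist = np.argsort(scores)[::-1][:240]          # top 240 candidates for lab testing
print("indices of top-ranked compounds:", shortlist[:10])
```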

Laboratory experiments showed it could treat infected wounds in mice and was able to kill A. baumannii samples from patients.

However, Dr Stokes told me: “This is when the work starts.”

The next step is to perfect the drug in the laboratory and then perform clinical trials. He expects the first AI-discovered antibiotics may not be available for prescription until around 2030.

Curiously, this experimental antibiotic had no effect on other species of bacteria, and works only on A. baumannii.

Many antibiotics kill bacteria indiscriminately. The researchers believe the precision of abaucin will make it harder for drug-resistance to emerge, and could lead to fewer side-effects.

 

In principle, the AI could screen tens of millions of potential compounds – something that would be impractical to do manually.

“AI enhances the rate, and in a perfect world decreases the cost, with which we can discover these new classes of antibiotic that we desperately need,” Dr Stokes told me.

The researchers tested the principles of AI-aided antibiotic discovery in E. coli in 2020, but have now used that knowledge to focus on the big nasties. They plan to look at Staphylococcus aureus and Pseudomonas aeruginosa next.

“This finding further supports the premise that AI can significantly accelerate and expand our search for novel antibiotics,” said Prof James Collins, from the Massachusetts Institute of Technology.

He added: “I’m excited that this work shows that we can use AI to help combat problematic pathogens such as A. baumannii.”

Prof Dame Sally Davies, the former chief medical officer for England and government envoy on anti-microbial resistance, told Radio 4’s The World Tonight: “We’re onto a winner.”

She said the idea of using AI was “a big game-changer, I’m thrilled to see the work he (Dr Stokes) is doing, it will save lives”.

Other related articles and books published in this Online Scientific Journal include the following:

Series D: e-Books on BioMedicine – Metabolomics, Immunology, Infectious Diseases, Reproductive Genomic Endocrinology

(3 book series: Volume 1, 2&3, 4)

https://www.amazon.com/gp/product/B08VVWTNR4?ref_=dbs_p_pwh_rwt_anx_b_lnk&storeType=ebooks


  • The Immune System, Stress Signaling, Infectious Diseases and Therapeutic Implications:

 

  • Series D, VOLUME 2

Infectious Diseases and Therapeutics

and

  • Series D, VOLUME 3

The Immune System and Therapeutics

(Series D: BioMedicine & Immunology) Kindle Edition.

On Amazon.com since September 4, 2017

(English Edition) Kindle Edition – as one Book

https://www.amazon.com/dp/B075CXHY1B $115

 

Bacterial multidrug resistance problem solved by a broad-spectrum synthetic antibiotic

The Journey of Antibiotic Discovery

FDA cleared Clever Culture Systems’ artificial intelligence tech for automated imaging, analysis and interpretation of microbiology culture plates speeding up Diagnostics

Artificial Intelligence: Genomics & Cancer

Read Full Post »

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

The female reproductive lifespan, defined as the interval between menarche and menopause, is approximately 35 years in length on average and is governed by the menstrual cycle. Based on current average human life expectancy figures, and excluding fertility issues, this means that the female body can bear children for almost half of its lifetime. Thus, within this time span many individuals may consider contraception at some point in their reproductive life. A wide variety of contraceptive methods are now available, which are broadly classified into hormonal and non-hormonal approaches. A normal menstrual cycle is controlled by a delicate interplay of hormones, including estrogen, progesterone, follicle-stimulating hormone (FSH) and luteinizing hormone (LH), among others. These molecules are produced by the various glands in the body that make up the endocrine system.

Hormonal contraceptives – including the contraceptive pill, some intrauterine devices (IUDs) and hormonal implants – utilize exogenous (synthetic) hormones to block or suppress ovulation, the phase of the menstrual cycle in which an egg is released from the ovary. Beyond their use as methods to prevent pregnancy, hormonal contraceptives are also increasingly used to suppress ovulation as a treatment for premenstrual syndromes. Hormonal contraceptives composed of exogenous estrogen and/or progesterone are commonly administered means of birth control. Despite their many benefits, adverse side effects associated with high doses, such as thrombosis and myocardial infarction, make some users hesitant to adopt them.

Scientists at the University of the Philippines and Roskilde University are exploring methods to optimize the dosage of exogenous hormones in such contraceptives. Their overall aim is to create patient-specific, dose-minimizing schedules that prevent the adverse side effects that can be associated with hormonal contraceptive use and empower individuals in their contraceptive journey. Their data showed that the doses of exogenous hormones in certain contraceptive methods could be reduced while still ensuring that ovulation is suppressed. According to the model, reducing the total exogenous hormone dose by 92% in estrogen-only contraceptives, or by 43% in progesterone-only contraceptives, still prevented ovulation. In contraceptives combining estrogen and progesterone, the doses could be reduced even further.
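
As a rough illustration of the kind of question such dose-optimization work asks, the toy sketch below checks how far a once-daily dose could be lowered in a simple one-compartment kinetic model while keeping hormone levels above an assumed ovulation-suppression threshold. All parameters are hypothetical; this is not the authors' published menstrual-cycle model, and the printed numbers carry no clinical meaning.

```python
# Toy dose-minimization check with hypothetical parameters (illustrative only).
import numpy as np

def concentration_profile(daily_dose_ug, half_life_h=24.0, volume_l=5.0,
                          days=28, steps_per_day=24):
    """Hourly concentration (ug/L) for once-daily dosing, by superposition of doses."""
    k = np.log(2) / half_life_h                      # first-order elimination rate
    t = np.arange(days * steps_per_day) * (24.0 / steps_per_day)   # hours
    conc = np.zeros_like(t)
    for dose_time in np.arange(0.0, days * 24.0, 24.0):
        since = t - dose_time
        conc += np.where(since >= 0,
                         (daily_dose_ug / volume_l) * np.exp(-k * since), 0.0)
    return conc

THRESHOLD = 2.0          # hypothetical suppression threshold, ug/L
SETTLE = 7 * 24          # ignore the first week while levels accumulate

def suppresses(dose):
    return concentration_profile(dose)[SETTLE:].min() > THRESHOLD

full_dose = 30.0
lowest = next(d for d in np.arange(1.0, full_dose + 1.0, 0.5) if suppresses(d))
print("full dose suppresses ovulation in this toy model:", suppresses(full_dose))
print(f"lowest toy dose that still suppresses: {lowest:.1f} ug "
      f"({100 * (1 - lowest / full_dose):.0f}% reduction)")
```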

References:

https://www.technologynetworks.com/drug-discovery/news/hormone-doses-in-contraceptives-could-be-reduced-by-as-much-as-92-372088?utm_campaign=NEWSLETTER_TN_Breaking%20Science%20News

https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1010073

https://www.medicalnewstoday.com/articles/birth-control-with-up-to-92-lower-hormone-doses-could-still-be-effective

https://www.ncbi.nlm.nih.gov/books/NBK441576/

https://www.sciencedirect.com/science/article/pii/S0277953621005797

Read Full Post »

Verily announced other organizational changes, 1/13/2023

Reporter: Aviva Lev-Ari, PhD, RN

The layoffs come just a few months after Verily raised $1 billion in an investment round led by Alphabet. At the time of the investment round, Verily said the $1 billion would be used to expand its business in precision health. 

In addition to the layoffs, Verily announced other organizational changes.

“We are making changes that refine our strategy, prioritize our product portfolio and simplify our operating model,” Gillett said in his email. “We will advance fewer initiatives with greater resources. In doing so, Verily will move from multiple lines of business to one centralized product organization with increasingly connected healthcare solutions.”

The company will specifically focus on AI and data science to accelerate learning and improving outcomes, with advancing precision health being the top overarching goal. In addition, the company will simplify how it works, “designing complexity out of Verily.” 

Among its product portfolio, Verily plans to “do fewer things” and focus its efforts within research and care. The company is “discontinuing the development of Verily Value Suite and some early-stage products, including our work in remote patient monitoring for heart failure and microneedles for drug delivery,” Gillett said. By eliminating Verily Value Suite, some staff will be redeployed elsewhere, while others will leave the company, Gillett said.

The 15% of eliminated staff include roles within discontinued programs and redundancy within the new, simplified organization. Gillett also announced leadership changes, including expanding the role of Amy Abernethy to become president of product development and chief medical officer. Scott Burke will expand his responsibilities as chief technology officer, adding hardware engineering and devices teams to his responsibilities, as well as serving as the bridge between product development and customer needs. Lisa Greenbaum will expand her responsibilities in a new chief commercial officer role, overseeing sales, marketing and corporate strategy teams.

Related Content

Google Health partners with iCAD in commercial AI imaging push
Former Google company Verily raises $1B
Google Health is no more?
Google’s Verily enters drug trials with big pharma
Google, Verily’s diabetes machine learning algorithm gets clinical testing
Walgreens teams up with Verily to tackle chronic conditions

SOURCE

https://healthexec.com/topics/patient-care/precision-medicine/verily-lays-15-workers-months-after-raising-1b?utm_source=newsletter&utm_medium=he_news

Read Full Post »

Use of Systems Biology for Design of inhibitor of Galectins as Cancer Therapeutic – Strategy and Software

Curator: Stephen J. Williams, Ph.D.

Below is a slide representation of the overall mission to produce a PROTAC to inhibit Galectins 1, 3, and 9.

 

Using A Priori Knowledge of Galectin Receptor Interaction to Create a BioModel of Galectin 3 Binding

Now, after collecting literature from PubMed on “galectin-3” AND “binding” to identify articles containing kinetic data, we generate a WordCloud from those articles.
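
A minimal sketch of how such a word cloud can be generated in Python, assuming Biopython's Entrez utilities for the PubMed query and the wordcloud package for rendering; the query settings, result count, and extra stop words below are illustrative choices, not the exact settings used for the figure.

```python
# Sketch: fetch PubMed abstracts for '"galectin-3" AND "binding"' and build a word cloud.
from Bio import Entrez
from wordcloud import WordCloud, STOPWORDS

Entrez.email = "your.name@example.org"   # NCBI requires a contact address

# 1. Search PubMed for matching article IDs
search = Entrez.read(Entrez.esearch(db="pubmed",
                                    term='"galectin-3" AND "binding"',
                                    retmax=200))
ids = search["IdList"]

# 2. Download the abstracts as plain text
handle = Entrez.efetch(db="pubmed", id=",".join(ids),
                       rettype="abstract", retmode="text")
corpus = handle.read()
handle.close()

# 3. Build and save the word cloud
stop = STOPWORDS | {"galectin", "binding", "using", "study", "results"}
cloud = WordCloud(width=1200, height=600, background_color="white",
                  stopwords=stop).generate(corpus)
cloud.to_file("galectin3_binding_wordcloud.png")
print(f"{len(ids)} abstracts used")
```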

This following file contains the articles needed for BioModels generation.

https://pharmaceuticalintelligence.com/wp-content/uploads/2022/12/Curating-Galectin-articles-for-Biomodels.docx

 

From the WordCloud we can see that this corpus of articles describes galectin binding to the CRD (carbohydrate recognition domain). Interestingly, many articles describe van der Waals interactions as well as electrostatic interactions. Certain carbohydrate modifications, such as LacNAc and Gal 1,4 linkages, may be important. Many articles describe bonding as well as surface interactions. Many studies have been performed with galectin inhibitors such as thio-digalactosides (TDGs), including TAZTDG (3-deoxy-3-(4-[m-fluorophenyl]-1H-1,2,3-triazol-1-yl)-thio-digalactoside). This led to an interesting article:

Dual thio-digalactoside-binding modes of human galectins as the structural basis for the design of potent and selective inhibitors

Sci Rep. 2016 Jul 15;6:29457. doi: 10.1038/srep29457. Free PMC article.

Abstract

Human galectins are promising targets for cancer immunotherapeutic and fibrotic disease-related drugs. We report herein the binding interactions of three thio-digalactosides (TDGs) including TDG itself, TD139 (3,3′-deoxy-3,3′-bis-(4-[m-fluorophenyl]-1H-1,2,3-triazol-1-yl)-thio-digalactoside, recently approved for the treatment of idiopathic pulmonary fibrosis), and TAZTDG (3-deoxy-3-(4-[m-fluorophenyl]-1H-1,2,3-triazol-1-yl)-thio-digalactoside) with human galectins-1, -3 and -7 as assessed by X-ray crystallography, isothermal titration calorimetry and NMR spectroscopy. Five binding subsites (A-E) make up the carbohydrate-recognition domains of these galectins. We identified novel interactions between an arginine within subsite E of the galectins and an arene group in the ligands. In addition to the interactions contributed by the galactosyl sugar residues bound at subsites C and D, the fluorophenyl group of TAZTDG preferentially bound to subsite B in galectin-3, whereas the same group favored binding at subsite E in galectins-1 and -7. The characterised dual binding modes demonstrate how binding potency, reported as decreased Kd values of the TDG inhibitors from μM to nM, is improved and also offer insights to development of selective inhibitors for individual galectins.
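
To put that potency gain in thermodynamic terms, the binding free energy follows ΔG = RT ln(Kd), so a 1,000-fold improvement in Kd corresponds to roughly 4 kcal/mol of additional binding energy. A short worked calculation with generic 1 µM and 1 nM values (not specific numbers from the paper):

```python
# Convert dissociation constants to binding free energies: dG = RT * ln(Kd).
# The 1 uM and 1 nM values below are generic illustrations, not the paper's data.
import math

R = 8.314      # J/(mol*K)
T = 298.15     # K

def delta_g_kcal(kd_molar):
    return R * T * math.log(kd_molar) / 4184.0   # J/mol -> kcal/mol

for label, kd in [("TDG-like, ~1 uM", 1e-6), ("optimized, ~1 nM", 1e-9)]:
    print(f"{label:>18}: Kd = {kd:.0e} M, dG = {delta_g_kcal(kd):6.1f} kcal/mol")

gain = delta_g_kcal(1e-6) - delta_g_kcal(1e-9)
print(f"a 1000-fold Kd improvement tightens binding by about {gain:.1f} kcal/mol")
```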

Figures 1–3 are available in the original article.

 

 

Read Full Post »

Technion #1 in Europe in Field of AI for 2nd Straight Year

Reporter: Aviva Lev-Ari, PhD, RN

For the second year in a row, the Technion is ranked first in Europe in the field of artificial intelligence (AI) according to CSRankings, which are highly regarded for their metrics-based ranking of top computer science institutions. The repeat win further solidifies the Technion’s position as a leading institution in AI. It was also ranked 16th in the world in AI and 10th in the world in the subfield of learning systems. 

The Technion recruits researchers and students from all Technion units for interdisciplinary AI research, increasing the number of new programs and initiatives in its various fields with leading companies, top universities, and research institutions around the world. It is also establishing its own AI community to empower the student body and researchers in all fields of AI and to deepen their collaborations with others doing related work.

The Technion’s Tech.AI Center for Artificial Intelligence, established in 2020, is the greatest source of AI innovation and research on campus. Tech.AI includes approximately 150 researchers and aims to apply advanced methodologies and tools at the forefront of AI in a variety of fields including data science, medical research, mechanical engineering, civil engineering, architecture, biology, and more.  

To further facilitate AI research and collaborations, a recent agreement was signed to establish a Zimin Institute at the Technion for AI Solutions in Healthcare that will operate as part of Tech.AI. The Institute will promote interdisciplinary projects and work to develop technologies based on big data and computational learning in order to improve human health and healthcare, with an emphasis on proposals that have an applied AI component.  

https://ats.org/our-impact/technion-1-in-europe-in-field-of-ai-for-2nd-straight-year/?utm_medium=email&utm_source=ats-newsletter&utm_campaign=enews&s_src=enews

Read Full Post »

Explanation on “Results of Medical Text Analysis with Natural Language Processing (NLP) presented in LPBI Group’s NEW GENRE Edition: NLP” on Genomics content, standalone volume in Series B and NLP on Cancer content as Part B New Genre Volume 1 in Series C

NEW GENRE Edition, Editor-in-Chief: Aviva Lev-Ari, PhD, RN

Series B: Frontiers in Genomics Research NEW GENRE Audio English-Spanish

https://pharmaceuticalintelligence.com/audio-english-spanish-biomed-e-series/new-genre-audio-english-spanish-series-b-frontiers-in-genomics-research/new-genre-volume-two-latest-in-genomics-methodologies-for-therapeutics-gene-editing-ngs-and-bioinformatics-simulations-and-the-genome-ontology-series-b-volume-2/

PART A: The eTOCs in Spanish in Audio format AND the eTOCs in Bi-lingual format: Spanish and English in Text format

PART C: The Editorials of the original e-Books in English in Audio format

However,

PART B: The graphical results of Machine Learning (ML), Deep Learning (DL) and Natural Language Processing (NLP) algorithms AND the Domain Knowledge Expert (DKE) interpretation of the results in Text format – PART B IS ISSUED AS A STANDALONE VOLUME, named

https://pharmaceuticalintelligence.com/audio-english-spanish-biomed-e-series/new-genre-audio-english-spanish-series-b-frontiers-in-genomics-research/genomics-volume-2-results-of-medical-text-analysis-with-natural-language-processing-nlp/

 See only Graphic results in

Genomics, Volume 3: NLP results – 38 or 39 Hypergraph Plots and 38 or 39 Tree diagram Plots by Madison Davis

https://pharmaceuticalintelligence.com/biomed-e-books/genomics-orientations-for-personalized-medicine/genomics-volume-2-nlp-results-38-or-39-hypergraph-plots-and-38-or-39-tree-diagram-plots-by-madison-davis/

Series C: e-Books on Cancer & Oncology NEW GENRE Audio English-Spanish

https://pharmaceuticalintelligence.com/audio-english-spanish-biomed-e-series/new-genre-audio-english-spanish-series-c-e-books-on-cancer-oncology/new-genre-volume-one-cancer-biology-and-genomics-for-disease-diagnosis-series-c-volume-1%ef%bf%bc/

PART A:

PART A.1: The eTOCs in Spanish in Audio format AND

PART A.2: The eTOCs in Bi-lingual format: Spanish and English in Text format

PART B:

The graphical results of Medical Text Analysis with Machine Learning (ML), Deep Learning (DL) and Natural Language Processing (NLP) algorithms AND the Domain Knowledge Expert (DKE) interpretation of the results in Text format

See only graphics in

https://pharmaceuticalintelligence.com/biomed-e-books/series-c-e-books-on-cancer-oncology/cancer-volume-1-nlp-results-12-hypergraph-plots-and-12-tree-diagram-plots-by-madison-davis/

PART C:

The Editorials of the original e-Book in English in Audio format

Read Full Post »

Genomic data can predict miscarriage and IVF failure

Reporter and Curator: Dr. Sudipta Saha, Ph.D.

Infertility is a major reproductive health issue that affects about 12% of women of reproductive age in the United States. Aneuploidy in eggs accounts for a significant proportion of early miscarriages and in vitro fertilization failures. Recent studies have shown that genetic variants in several genes affect chromosome segregation fidelity and predispose women to a higher incidence of egg aneuploidy. However, the exact genetic causes of aneuploid egg production remain unclear, making it difficult to diagnose infertility based on individual genetic variants in the mother's genome. Although age is a predictive factor for aneuploidy, it is not a highly accurate gauge because aneuploidy rates within individuals of the same age can vary dramatically.

Researchers described a technique combining genomic sequencing with machine-learning methods to predict the possibility a woman will undergo a miscarriage because of egg aneuploidy—a term describing a human egg with an abnormal number of chromosomes. The scientists were able to examine genetic samples of patients using a technique called “whole exome sequencing,” which allowed researchers to home in on the protein coding sections of the vast human genome. Then they created software using machine learning, an aspect of artificial intelligence in which programs can learn and make predictions without following specific instructions. To do so, the researchers developed algorithms and statistical models that analyzed and drew inferences from patterns in the genetic data.

As a result, the scientists were able to create a specific risk score based on a woman's genome. They also identified three genes (MCM5, FGGY and DDX60L) that, when mutated, are highly associated with a risk of producing eggs with aneuploidy. The report thus demonstrated that sequencing data can be mined to predict patients' aneuploidy risk, thereby improving clinical diagnosis. The candidate genes and pathways identified in the present study are promising targets for future aneuploidy studies. Identifying genetic variations with more predictive power will serve women and their treating clinicians with better information.
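
The general shape of such a risk model can be sketched as follows: per-patient, gene-level variant burdens derived from whole exome sequencing become features, and a classifier fit to the observed aneuploidy outcome yields a per-patient risk score. The cohort, burden encoding, and classifier below are illustrative assumptions (only MCM5, FGGY, and DDX60L come from the report; the remaining gene names are placeholders), not the authors' published pipeline.

```python
# Illustrative sketch of a genomic aneuploidy-risk score, not the published model.
# Features: per-gene counts of rare deleterious variants from exome data (simulated).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
genes = ["MCM5", "FGGY", "DDX60L", "GENE_X", "GENE_Y"]   # last two are placeholders

# Hypothetical cohort: variant burden per gene plus observed high-aneuploidy outcome
n = 300
X = pd.DataFrame(rng.poisson(0.3, size=(n, len(genes))), columns=genes)
y = rng.integers(0, 2, size=n)          # 1 = high proportion of aneuploid eggs

model = LogisticRegression(max_iter=1000)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print("cross-validated AUC on the toy cohort:", round(auc, 3))

# Fit on the full cohort and report a risk score for a new patient
model.fit(X, y)
new_patient = pd.DataFrame([[1, 0, 2, 0, 0]], columns=genes)
risk = model.predict_proba(new_patient)[0, 1]
print(f"predicted aneuploidy risk score: {risk:.2f}")
```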

References:

https://medicalxpress-com.cdn.ampproject.org/c/s/medicalxpress.com/news/2022-06-miscarriage-failure-vitro-fertilization-genomic.amp

https://pubmed.ncbi.nlm.nih.gov/35347416/

https://pubmed.ncbi.nlm.nih.gov/31552087/

https://pubmed.ncbi.nlm.nih.gov/33193747/

https://pubmed.ncbi.nlm.nih.gov/33197264/

Read Full Post »

Data Science: Step by Step – A Resource for LPBI Group One-Year Internship in IT, IS, DS

Reporter: Aviva Lev-Ari, PhD, RN

9 free Harvard courses: learning Data Science

In this article, I will list 9 free Harvard courses that you can take to learn data science from scratch. Feel free to skip any of these courses if you already possess knowledge of that subject.

Step 1: Programming

The first step you should take when learning data science is to learn to code. You can choose to do this with your choice of programming language, ideally Python or R.

If you’d like to learn R, Harvard offers an introductory R course created specifically for data science learners, called Data Science: R Basics.

This program will take you through R concepts like variables, data types, vector arithmetic, and indexing. You will also learn to wrangle data with libraries like dplyr and create plots to visualize data.

If you prefer Python, you can choose to take CS50’s Introduction to Programming with Python offered for free by Harvard. In this course, you will learn concepts like functions, arguments, variables, data types, conditional statements, loops, objects, methods, and more.

Both programs above are self-paced. However, the Python course is more detailed than the R program, and requires a longer time commitment to complete. Also, the rest of the courses in this roadmap are taught in R, so it might be worth learning R to be able to follow along easily.
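
As a taste of the constructs these introductory programming courses cover (functions, arguments, data types, conditionals, and loops), here is a small self-contained Python example; the BMI calculation itself is just an arbitrary illustration.

```python
# Small example of introductory-programming concepts: functions, arguments,
# conditionals, loops, and basic data types.
def bmi(weight_kg: float, height_m: float) -> float:
    """Return body-mass index."""
    return weight_kg / height_m ** 2

def classify(value: float) -> str:
    if value < 18.5:
        return "underweight"
    elif value < 25:
        return "normal"
    else:
        return "overweight"

patients = [("A", 70, 1.75), ("B", 50, 1.80), ("C", 95, 1.70)]
for name, w, h in patients:
    b = bmi(w, h)
    print(f"patient {name}: BMI {b:.1f} ({classify(b)})")
```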

Step 2: Data Visualization

Visualization is one of the most powerful techniques with which you can translate your findings in data to another person.

With Harvard’s Data Visualization program, you will learn to build visualizations using the ggplot2 library in R, along with the principles of communicating data-driven insights.
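
The Harvard course teaches ggplot2 in R; a close Python analogue is the plotnine library, which follows the same grammar-of-graphics idea. A minimal example with a made-up data frame:

```python
# Grammar-of-graphics plot in Python with plotnine (a ggplot2-style library);
# the data frame here is made up for illustration.
import pandas as pd
from plotnine import ggplot, aes, geom_point, geom_smooth, labs

df = pd.DataFrame({
    "dose_mg": [1, 2, 4, 8, 16, 32],
    "response": [5, 9, 16, 30, 52, 80],
})

plot = (
    ggplot(df, aes(x="dose_mg", y="response"))
    + geom_point()
    + geom_smooth(method="lm")
    + labs(x="Dose (mg)", y="Response", title="Dose-response (illustrative)")
)
plot.save("dose_response.png", width=6, height=4, dpi=150)
```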

Step 3: Probability

In this course, you will learn essential probability concepts that are fundamental to conducting statistical tests on data. The topics taught include random variables, independence, Monte Carlo simulations, expected values, standard errors, and the Central Limit Theorem.

The concepts above will be introduced with the help of a case study, which means that you will be able to apply everything you learned to an actual real-world dataset.
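
Two of those ideas, Monte Carlo simulation and the Central Limit Theorem, can be illustrated numerically in a few lines: averages of many draws from a skewed distribution come out approximately normal, with standard error close to sigma divided by the square root of n.

```python
# Monte Carlo illustration of the Central Limit Theorem: sample means of an
# exponential (skewed) distribution are approximately normal with SE = sigma/sqrt(n).
import numpy as np

rng = np.random.default_rng(42)
n, reps = 50, 10_000
sample_means = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)

print("theoretical mean:", 1.0)
print("simulated mean  :", round(sample_means.mean(), 4))
print("theoretical SE  :", round(1.0 / np.sqrt(n), 4))
print("simulated SE    :", round(sample_means.std(ddof=1), 4))
```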

Step 4: Statistics

After learning probability, you can take this course to learn the fundamentals of statistical inference and modelling. This program will teach you to define population estimates and margins of error, introduce you to Bayesian statistics, and provide you with the fundamentals of predictive modeling.
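
For example, a population-proportion estimate and its 95% margin of error, one of the quantities the course asks you to define, can be computed directly (the survey numbers below are made up):

```python
# Point estimate and 95% margin of error for a population proportion (made-up survey).
import math

successes, n = 540, 1000          # e.g., 540 of 1,000 respondents answered "yes"
p_hat = successes / n
se = math.sqrt(p_hat * (1 - p_hat) / n)
margin = 1.96 * se                 # 95% confidence

print(f"estimate: {p_hat:.3f} +/- {margin:.3f}")
print(f"95% CI: ({p_hat - margin:.3f}, {p_hat + margin:.3f})")
```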

Step 5: Productivity Tools (Optional)

I’ve included this project management course as optional since it isn’t directly related to learning data science. Rather, you will be taught to use Unix/Linux for file management, Github, version control, and creating reports in R.

The ability to do the above will save you a lot of time and help you better manage end-to-end data science projects.

Step 6: Data Pre-Processing

The next course in this list is called Data Wrangling, and will teach you to prepare data and convert it into a format that is easily digestible by machine learning models.

You will learn to import data into R, tidy data, process string data, parse HTML, work with date-time objects, and mine text.

As a data scientist, you often need to extract data that is publicly available on the Internet in the form of a PDF document, HTML webpage, or a Tweet. You will not always be presented with clean, formatted data in a CSV file or Excel sheet.

By the end of this course, you will learn to wrangle and clean data to come up with critical insights from it.
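
The same wrangling steps look like this in Python with pandas (importing, string cleaning, tidying to long format, and date handling); the file name and column names are hypothetical.

```python
# Typical wrangling steps with pandas; 'measurements.csv' and its columns are hypothetical.
import pandas as pd

# Import and parse dates on the way in
df = pd.read_csv("measurements.csv", parse_dates=["collected_on"])

# Clean string columns (trim whitespace, normalize case)
df["site"] = df["site"].str.strip().str.lower()

# Tidy: wide replicate columns -> long format
tidy = df.melt(id_vars=["site", "collected_on"],
               value_vars=["rep1", "rep2", "rep3"],
               var_name="replicate", value_name="value")

# Derive date parts and summarize
tidy["month"] = tidy["collected_on"].dt.to_period("M")
summary = tidy.groupby(["site", "month"])["value"].mean().reset_index()
print(summary.head())
```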

Step 7: Linear Regression

Linear regression is a machine learning technique that is used to model a linear relationship between two or more variables. It can also be used to identify and adjust the effect of confounding variables.

This course will teach you the theory behind linear regression models, how to examine the relationship between two variables, and how confounding variables can be detected and removed before building a machine learning algorithm.
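
That last point, adjusting for a confounder, is easy to demonstrate on simulated data with statsmodels: the naive regression of y on x shows a spurious slope that disappears once the confounder enters the model.

```python
# Linear regression with and without a confounder, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 500
confounder = rng.normal(size=n)
x = 0.8 * confounder + rng.normal(size=n)             # x is driven partly by the confounder
y = 2.0 * confounder + 0.0 * x + rng.normal(size=n)   # y depends on the confounder, not on x

df = pd.DataFrame({"x": x, "y": y, "confounder": confounder})

naive = smf.ols("y ~ x", data=df).fit()
adjusted = smf.ols("y ~ x + confounder", data=df).fit()

print("naive slope for x    :", round(naive.params["x"], 3))     # spuriously non-zero
print("adjusted slope for x :", round(adjusted.params["x"], 3))  # close to the true 0
```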

Step 8: Machine Learning

Finally, the course you’ve probably been waiting for! Harvard’s machine learning program will teach you the basics of machine learning, techniques to mitigate overfitting, supervised and unsupervised modelling approaches, and recommendation systems.
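
One of the techniques mentioned, mitigating overfitting, can be demonstrated with a held-out test set and a regularized model on synthetic data:

```python
# Overfitting demo: an unregularized high-degree polynomial fit vs. a ridge-regularized
# one, compared on held-out data (synthetic example).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(120, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=120)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

plain = make_pipeline(PolynomialFeatures(degree=15), LinearRegression()).fit(X_tr, y_tr)
ridge = make_pipeline(PolynomialFeatures(degree=15), Ridge(alpha=1.0)).fit(X_tr, y_tr)

print("unregularized train/test R^2:", round(plain.score(X_tr, y_tr), 2),
      round(plain.score(X_te, y_te), 2))
print("ridge         train/test R^2:", round(ridge.score(X_tr, y_tr), 2),
      round(ridge.score(X_te, y_te), 2))
```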

Step 9: Capstone Project

After completing all the above courses, you can take Harvard’s data science capstone project, where your skills in data visualization, probability, statistics, data wrangling, data organization, regression, and machine learning will be assessed.

With this final project, you will get the opportunity to put together all the knowledge learnt from the above courses and gain the ability to complete a hands-on data science project from scratch.

Note: All the courses above are available on an online learning platform from edX and can be audited for free. If you want a course certificate, however, you will have to pay for one.

Building a data science learning roadmap with free courses offered by MIT.

8 Free MIT Courses to Learn Data Science Online

I enrolled in an undergraduate computer science program and decided to major in data science. I spent over $25K in tuition fees over the span of three years, only to graduate and realize that I wasn't equipped with the skills necessary to land a job in the field.

I barely knew how to code, and was unclear about the most basic machine learning concepts.

I took some time out to try and learn data science myself — with the help of YouTube videos, online courses, and tutorials. I realized that all of this knowledge was publicly available on the Internet and could be accessed for free.

It came as a surprise that even Ivy League universities started making many of their courses accessible to students worldwide, for little to no charge. This meant that people like me could learn these skills from some of the best institutions in the world, instead of spending thousands of dollars on a subpar degree program.

In this article, I will provide you with a data science roadmap I created using only freely available MIT online courses.

Step 1: Learn to code

I highly recommend learning a programming language before going deep into the math and theory behind data science models. Once you learn to code, you will be able to work with real-world datasets and get a feel of how predictive algorithms function.

MIT Open Courseware offers a beginner-friendly Python program for beginners, called Introduction to Computer Science and Programming.

This course is designed to help people with no prior coding experience to write programs to tackle useful problems.

Step 2: Statistics

Statistics is at the core of every data science workflow — it is required when building a predictive model, analyzing trends in large amounts of data, or selecting useful features to feed into your model.

MIT Open Courseware offers a beginner-friendly course called Introduction to Probability and Statistics. After taking this course, you will learn the basic principles of statistical inference and probability. Some concepts covered include conditional probability, Bayes theorem, covariance, central limit theorem, resampling, and linear regression.

This course will also walk you through statistical analysis using the R programming language, which is useful as it adds on to your tool stack as a data scientist.

Another useful program offered by MIT for free is called Statistical Thinking and Data Analysis. This is another elementary course in the subject that will take you through different data analysis techniques in Excel, R, and Matlab.

You will learn about data collection, analysis, different types of sampling distributions, statistical inference, linear regression, multiple linear regression, and nonparametric statistical methods.
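
Bayes' theorem, one of the first concepts covered in these statistics courses, becomes concrete with a small diagnostic-testing calculation; the prevalence and test characteristics below are made up.

```python
# Bayes' theorem with made-up numbers: P(disease | positive test).
prevalence = 0.01          # P(D)
sensitivity = 0.95         # P(+ | D)
specificity = 0.90         # P(- | not D)

p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_disease_given_pos = sensitivity * prevalence / p_pos

print(f"P(positive test)           = {p_pos:.4f}")
print(f"P(disease | positive test) = {p_disease_given_pos:.3f}")   # ~0.088 despite the test "looking" accurate
```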

Step 3: Foundational Math Skills

Calculus and linear algebra are two other branches of math that are used in the field of machine learning. Taking a course or two in these subjects will give you a different perspective of how predictive models function, and the working behind the underlying algorithm.

To learn calculus, you can take Single Variable Calculus offered by MIT for free, followed by Multivariable Calculus.

Then, you can take this Linear Algebra class by Prof. Gilbert Strang to get a strong grasp of the subject.

All of the above courses are offered by MIT Open Courseware, and are paired with lecture notes, problem sets, exam questions, and solutions.
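
The concepts from those math courses translate directly into code; for instance, solving a linear system and inspecting eigenvalues with NumPy:

```python
# Linear algebra basics with NumPy: solve Ax = b and inspect eigenvalues.
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)          # solution of Ax = b
eigenvalues = np.linalg.eigvals(A) # eigenvalues of A

print("x =", x)                    # [2. 3.]
print("check A @ x =", A @ x)      # reproduces b
print("eigenvalues =", eigenvalues)
```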

Step 4: Machine Learning

Finally, you can use the knowledge gained in the courses above to take MIT’s Introduction to Machine Learning course. This program will walk you through the implementation of predictive models in Python.

The core focus of this course is in supervised and reinforcement learning problems, and you will be taught concepts such as generalization and how overfitting can be mitigated. Apart from just working with structured datasets, you will also learn to process image and sequential data.

MIT’s machine learning program cites three pre-requisites — Python, linear algebra, and calculus, which is why it is advisable to take the courses above before starting this one.

Are These Courses Beginner-Friendly?

Even if you have no prior knowledge of programming, statistics, or mathematics, you can take all the courses listed above.

MIT has designed these programs to take you through the subject from scratch. However, unlike many MOOCs out there, the pace does build up pretty quickly and the courses cover a large depth of information.

Due to this, it is advisable to do all the exercises that come with the lectures and work through all the reading material provided.

SOURCE

Natassha Selvaraj is a self-taught data scientist with a passion for writing. You can connect with her on LinkedIn.

https://www.kdnuggets.com/2022/03/8-free-mit-courses-learn-data-science-online.html

Read Full Post »

Tweet Collection of 2022 #EmTechDigital @MIT, March 29-30, 2022

Tweet Author: Aviva Lev-Ari, PhD, RN

Selective Tweet Retweets for The Technology Review: Aviva Lev-Ari, PhD, RN

 

UPDATED on 4/11/2022

Analytics for @AVIVA1950 Tweeting at #EmTechDigital

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2022/04/11/analytics-for-aviva1950-tweeting-at-emtechdigital/

 


Aviva Lev-Ari

@AVIVA1950

Mar 30

#EmTechDigital

@AVIVA1950

@pharma_BI

@techreview

FRONTIER OF #AI follow my tweets of this event more than few tweets per speaker

Aviva Lev-Ari

@AVIVA1950

Mar 29

#EmTechDigital

@AVIVA1950

@pharma_BI

@techReview

#error in programmatic labeling use auto #ml aggregate #transactions

Aviva Lev-Ari

@AVIVA1950

Mar 29

#EmTechDigital

@AVIVA1950

@pharma_BI

@techReview

RajivShah Snorkel AI #programmatic #labelling solution #heuristics converted #code #tagging integration of #labelled data #classification algorithms #scores #BERT improve ing quality of data labeling #functions #knowledge #graphs

Aviva Lev-Ari

@AVIVA1950

Mar 29

#EmTechDigital

@AVIVA1950

@pharma_BI

@techReview

@AndrewYNg

#NLP #customization of #tools data #standardization in #healthcare and trucking #datasystem #heterogeneity is highest #data life cycle of #ML

Aviva Lev-Ari

@AVIVA1950

Mar 29

#EmTechDigital

@AVIVA1950

@pharma_BI

@techReview

@AndrewYNg

in last decade #ML advanced #opencode frees effort to #dataset avoid #label inconsistency #images #small vs #big #data-centric #ai #system #dataset #slice #data #curation #teams develop #tools #storage #migration #Legacy

Aviva Lev-Ari

@AVIVA1950

Mar 28

2022 EmTechDigital

@MIT

, March 29-30, 2022 https://pharmaceuticalintelligence.com/2022/03/28/2022-emtech-digital-mit/… via

@pharma_BI

Real Time Coverage: Aviva Lev-Ari, PhD, RN #EmTechDigital

@AVIVA1950

@techReview

pharmaceuticalintelligence.com

2022 EmTech Digital @MIT

2022 EmTech Digital @MIT Real Time Coverage: Aviva Lev-Ari, PhD, RN  SPEAKERS Ali Alvi Turing Group Program Manager Microsoft Refik Anadol CEO, RAS Lab; Lecturer UCLA Lauren Bennett Group Software …

Aviva Lev-Ari

@AVIVA1950

Mar 28

2022 EmTech Digital

@MIT

https://pharmaceuticalintelligence.com/2022/03/28/2022-emtech-digital-mit/… via

@pharma_BI

@AVIVA1950

#EmTechDigital

pharmaceuticalintelligence.com

2022 EmTech Digital @MIT

2022 EmTech Digital @MIT Real Time Coverage: Aviva Lev-Ari, PhD, RN  SPEAKERS Ali Alvi Turing Group Program Manager Microsoft Refik Anadol CEO, RAS Lab; Lecturer UCLA Lauren Bennett Group Software …

Aviva Lev-Ari

@AVIVA1950

Mar 28

2022 EmTech Digital

@MIT

pharmaceuticalintelligence.com

2022 EmTech Digital @MIT

2022 EmTech Digital @MIT Real Time Coverage: Aviva Lev-Ari, PhD, RN  SPEAKERS Ali Alvi Turing Group Program Manager Microsoft Refik Anadol CEO, RAS Lab; Lecturer UCLA Lauren Bennett Group Software …

Aviva Lev-Ari

@AVIVA1950

Mar 26

#EmTech2022

@MIT

Quote Tweet

Stephen J Williams

@StephenJWillia2

  • Mar 25

@AVIVA1950 #EMT twitter.com/Pharma_BI/stat…

You Retweeted

MIT Technology Review

@techreview

Mar 30

That’s a wrap on #EmTechDigital 2022! Thanks for joining us in-person and online.


You Retweeted

LANDING AI

@landingAI

Mar 29

If you missed

@AndrewYNg

’s #EmTechDigital session, you can still learn more about #DataCentricAI here: https://bit.ly/3iM8bPq

@techreview

@strwbilly

You Retweeted

Mark Weber

@markRweber

Mar 29

On #bias embedded in historical data. #syntheticdata can help us build models for the world we aspire to rather than the prejudiced one of the past. Paraphrasing

@danny_lange

of

@unity

at #EmTechDigital #generativeai

Selective Tweets and Retweets from @StephenJWillia2

 

 

Read Full Post »

Will Web 3.0 Do Away With Science 2.0? Is Science Falling Behind?

Curator: Stephen J. Williams, Ph.D.

UPDATED 4/06/2022

A while back (actually many moons ago) I had put on two posts on this site:

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

Twitter is Becoming a Powerful Tool in Science and Medicine

Each of these posts was on the importance of scientific curation of findings within the realm of social media and Web 2.0; a sub-environment known throughout the scientific communities as Science 2.0, in which expert networks collaborated to produce a massive new corpus of knowledge by sharing their views and insights on peer-reviewed scientific findings. Through this new medium, the process of curation would itself generate new ideas and new directions for research and discovery.

The platform sort of looked like the image below:

 

This system sat above the platform of the original Science 1.0, made up of all the scientific journals, books, and traditional literature:

In the old Science 1.0 format, scientific dissemination was in the format of hard print journals, and library subscriptions were mandatory (and eventually expensive). Open Access has tried to ameliorate the expense problem.

Previous image source: PeerJ.com

To index the massive volume of research and papers beyond the old Dewey Decimal system, a process of curation was mandatory. Dissemination through the new social media was a natural fit; however, the cost had to be spread out among numerous players. Journals, faced with the high costs of subscriptions, found that their only way to access this new media as an outlet was to become Open Access, a movement first sparked by journals like PLOS and PeerJ but then begrudgingly adopted throughout the landscape. But with any movement or new adoption one gets the Good, the Bad and the Ugly (as described in the Clive Thompson article cited above). The downsides of Open Access journals were:

  1. costs are still assumed by the individual researcher, not by the journals
  2. the rise of numerous predatory journals

 

Even PeerJ, in their column celebrating an anniversary of a year’s worth of Open Access success stories, lamented the key issues still facing Open Access in practice

  • which included the cost and the rise of predatory journals.

In essence, Open Access and Science 2.0 sprang into full force BEFORE anyone thought of a way to defray the costs.

 

Can Web 3.0 Finally Offer a Way to Right the Issues Facing High Costs of Scientific Publishing?

What is Web 3.0?

From Wikipedia: https://en.wikipedia.org/wiki/Web3

Web 1.0 and Web 2.0 refer to eras in the history of the Internet as it evolved through various technologies and formats. Web 1.0 refers roughly to the period from 1991 to 2004, where most websites were static webpages, and the vast majority of users were consumers, not producers, of content.[6][7] Web 2.0 is based around the idea of “the web as platform”,[8] and centers on user-created content uploaded to social-networking services, blogs, and wikis, among other services.[9] Web 2.0 is generally considered to have begun around 2004, and continues to the current day.[8][10][4]

Terminology[edit]

The term “Web3”, specifically “Web 3.0”, was coined by Ethereum co-founder Gavin Wood in 2014.[1] In 2020 and 2021, the idea of Web3 gained popularity[citation needed]. Particular interest spiked towards the end of 2021, largely due to interest from cryptocurrency enthusiasts and investments from high-profile technologists and companies.[4][5] Executives from venture capital firm Andreessen Horowitz travelled to Washington, D.C. in October 2021 to lobby for the idea as a potential solution to questions about Internet regulation with which policymakers have been grappling.[11]

Web3 is distinct from Tim Berners-Lee‘s 1999 concept for a semantic web, which has also been called “Web 3.0”.[12] Some writers referring to the decentralized concept usually known as “Web3” have used the terminology “Web 3.0”, leading to some confusion between the two concepts.[2][3] Furthermore, some visions of Web3 also incorporate ideas relating to the semantic web.[13][14]

Concept[edit]

Web3 revolves around the idea of decentralization, which proponents often contrast with Web 2.0, wherein large amounts of the web’s data and content are centralized in the fairly small group of companies often referred to as Big Tech.[4]

Specific visions for Web3 differ, but all are heavily based in blockchain technologies, such as various cryptocurrencies and non-fungible tokens (NFTs).[4] Bloomberg described Web3 as an idea that “would build financial assets, in the form of tokens, into the inner workings of almost anything you do online”.[15] Some visions are based around the concepts of decentralized autonomous organizations (DAOs).[16] Decentralized finance (DeFi) is another key concept; in it, users exchange currency without bank or government involvement.[4] Self-sovereign identity allows users to identify themselves without relying on an authentication system such as OAuth, in which a trusted party has to be reached in order to assess identity.[17]

Reception[edit]

Technologists and journalists have described Web3 as a possible solution to concerns about the over-centralization of the web in a few “Big Tech” companies.[4][11] Some have expressed the notion that Web3 could improve data security, scalability, and privacy beyond what is currently possible with Web 2.0 platforms.[14] Bloomberg states that sceptics say the idea “is a long way from proving its use beyond niche applications, many of them tools aimed at crypto traders”.[15] The New York Times reported that several investors are betting $27 billion that Web3 “is the future of the internet”.[18][19]

Some companies, including Reddit and Discord, have explored incorporating Web3 technologies into their platforms in late 2021.[4][20] After heavy user backlash, Discord later announced they had no plans to integrate such technologies.[21] The company’s CEO, Jason Citron, tweeted a screenshot suggesting it might be exploring integrating Web3 into their platform. This led some to cancel their paid subscriptions over their distaste for NFTs, and others expressed concerns that such a change might increase the amount of scams and spam they had already experienced on crypto-related Discord servers.[20] Two days later, Citron tweeted that the company had no plans to integrate Web3 technologies into their platform, and said that it was an internal-only concept that had been developed in a company-wide hackathon.[21]

Some legal scholars quoted by The Conversation have expressed concerns over the difficulty of regulating a decentralized web, which they reported might make it more difficult to prevent cybercrime, online harassment, hate speech, and the dissemination of child abuse images.[13] But, the news website also states that, “[decentralized web] represents the cyber-libertarian views and hopes of the past that the internet can empower ordinary people by breaking down existing power structures.” Some other critics of Web3 see the concept as a part of a cryptocurrency bubble, or as an extension of blockchain-based trends that they see as overhyped or harmful, particularly NFTs.[20] Some critics have raised concerns about the environmental impact of cryptocurrencies and NFTs. Others have expressed beliefs that Web3 and the associated technologies are a pyramid scheme.[5]

Kevin Werbach, author of The Blockchain and the New Architecture of Trust,[22] said that “many so-called ‘web3’ solutions are not as decentralized as they seem, while others have yet to show they are scalable, secure and accessible enough for the mass market”, adding that this “may change, but it’s not a given that all these limitations will be overcome”.[23]

David Gerard, author of Attack of the 50 Foot Blockchain,[24] told The Register that “web3 is a marketing buzzword with no technical meaning. It’s a melange of cryptocurrencies, smart contracts with nigh-magical abilities, and NFTs just because they think they can sell some monkeys to morons”.[25]

Below is an article from MarketWatch.com Distributed Ledger series about the different forms and cryptocurrencies involved

From Marketwatch: https://www.marketwatch.com/story/bitcoin-is-so-2021-heres-why-some-institutions-are-set-to-bypass-the-no-1-crypto-and-invest-in-ethereum-other-blockchains-next-year-11639690654?mod=home-page

by Frances Yue, Editor of Distributed Ledger, Marketwatch.com

Clayton Gardner, co-CEO of crypto investment management firm Titan, told Distributed Ledger that as crypto embraces broader adoption, he expects more institutions to bypass bitcoin and invest in other blockchains, such as Ethereum, Avalanche, and Terra in 2022. which all boast smart-contract features.

Bitcoin traditionally did not support complex smart contracts, which are computer programs stored on blockchains, though a major upgrade in November might have unlocked more potential.

“Bitcoin was originally seen as a macro speculative asset by many funds and for many it still is,” Gardner said. “If anything solidifies its use case, it’s a store of value. It’s not really used as originally intended, perhaps from a medium of exchange perspective.”

For institutions that are looking for blockchains that can “produce utility and some intrinsic value over time,” they might consider some other smart contract blockchains that have been driving the growth of decentralized finance and web 3.0, the third generation of the Internet, according to Gardner. 

“Bitcoin is still one of the most secure blockchains, but I think layer-one, layer-two blockchains beyond Bitcoin will handle the majority of transactions and activities from NFT (nonfungible tokens) to DeFi,” Gardner said. “So I think institutions see that and insofar as they want to put capital to work in the coming months, I think that could be where they just pump the capital.”

Decentralized social media? 

The price of Decentralized Social, or DeSo, a cryptocurrency powering a blockchain that supports decentralized social media applications, surged roughly 74% to about $164 from $94, after Deso was listed at Coinbase Pro on Monday, before it fell to about $95, according to CoinGecko.

In the eyes of Nader Al-Naji, head of the DeSo foundation, decentralized social media has the potential to be “a lot bigger” than decentralized finance.

“Today there are only a few companies that control most of what we see online,” Al-Naji told Distributed Ledger in an interview. But DeSo is “creating a lot of new ways for creators to make money,” Al-Naji said.

“If you find a creator when they’re small, or an influencer, you can invest in that, and then if they become bigger and more popular, you make money and they make and they get capital early on to produce their creative work,” according to Al-Naji.

BitClout, the first application that was created by Al-Naji and his team on the DeSo blockchain, had initially drawn controversy, as some found that they had profiles on the platform without their consent, while the application’s users were buying and selling tokens representing their identities. Such tokens are called “creator coins.”

Al-Naji responded to the controversy, saying that DeSo now supports more than 200 social-media applications including BitClout. “I think that if you don’t like those features, you now have the freedom to use any app you want. Some apps don’t have that functionality at all.”

 

But Before I get to the “selling monkeys to morons” quote,

I want to talk about

THE GOOD, THE BAD, AND THE UGLY


THE GOOD

My foray into Science 2.0, and pondering what the movement to a Science 3.0 would look like, led me to an article by Dr. Vladimir Teif, who studies gene regulation and the nucleosome, and who created a worldwide group of scientists who discuss matters of chromatin and gene regulation in a journal-club format.

For more information on this Fragile Nucleosome journal club see https://generegulation.org/fragile-nucleosome/.

Fragile Nucleosome is an international community of scientists interested in chromatin and gene regulation. Fragile Nucleosome is active in several spaces: one is the Discord server where several hundred scientists chat informally on scientific matters. You can join the Fragile Nucleosome Discord server. Another activity of the group is the organization of weekly virtual seminars on Zoom. Our webinars are usually conducted on Wednesdays 9am Pacific time (5pm UK, 6pm Central Europe). Most previous seminars have been recorded and can be viewed at our YouTube channel. The schedule of upcoming webinars is shown below. Our third activity is the organization of weekly journal clubs detailed at a separate page (Fragile Nucleosome Journal Club).

 

His lab site is at https://generegulation.org/, and he has published a paper describing what he felt the #science2_0 to #science3_0 transition would look like (see his blog page on this at https://generegulation.org/open-science/).

He had coined this concept of Science 3.0 back in 2009. As Dr. Teif mentioned:

So essentially I first introduced this word Science 3.0 in 2009, and since then we did a lot to implement this in practice. The Twitter account @generegulation is also one of examples

 

This is curious, as we still have an ill-defined concept of what #science3_0 would look like, but it is a good read nonetheless.

His paper, entitled “Science 3.0: Corrections to the Science 2.0 paradigm”, is on the Cornell preprint server (arXiv) at https://arxiv.org/abs/1301.2522.

 

Abstract

Science 3.0: Corrections to the Science 2.0 paradigm

The concept of Science 2.0 was introduced almost a decade ago to describe the new generation of online-based tools for researchers allowing easier data sharing, collaboration and publishing. Although technically sound, the concept still does not work as expected. Here we provide a systematic line of arguments to modify the concept of Science 2.0, making it more consistent with the spirit and traditions of science and Internet. Our first correction to the Science 2.0 paradigm concerns the open-access publication models charging fees to the authors. As discussed elsewhere, we show that the monopoly of such publishing models increases biases and inequalities in the representation of scientific ideas based on the author’s income. Our second correction concerns post-publication comments online, which are all essentially non-anonymous in the current Science 2.0 paradigm. We conclude that scientific post-publication discussions require special anonymization systems. We further analyze the reasons of the failure of the current post-publication peer-review models and suggest what needs to be changed in Science 3.0 to convert Internet into a large journal club. [bold face added]

In this paper it is important to note the transition from Science 1.0, which involved hard-copy journal publications usually accessible only in libraries, to a more digital 2.0 format in which data, papers, and ideas could be easily shared among networks of scientists.

As Dr. Teif states, the term “Science 2.0” had been coined back in 2009, and several influential journals including Science, Nature and Scientific American endorsed this term and suggested that scientists move their discussions online. However, even though thousands of scientists are now on Science 2.0 platforms, Dr. Teif notes that membership in many Science 2.0 networking groups, such as those on LinkedIn and ResearchGate, has seemingly saturated over the years, with few new members in recent times.

The consensus is that Science 2.0 networking is:

  1. good because it multiplies the efforts of many scientists, including experts, and adds scientific discourse unavailable in a 1.0 format
  2. that online data sharing is good because it assists in the process of discovery (as is evident with preprint servers, bio-curated databases, and GitHub projects)
  3. open-access publishing is beneficial because free access to professional articles is expected to be the only publishing format in the future (although this is highly debatable, as many journals are holding on to a type of “hybrid open access” format which is not truly open access)
  4. sharing of unfinished works, critiques, or opinions is good because it creates visibility for scientists, who can receive credit for their expert commentary

Dr. Teif articulates a few concerns that Science 3.0 must address:

A.  Science 3.0 Still Needs Peer Review

Peer review of scientific findings will always be imperative for the dissemination of well-done, properly controlled scientific discovery.  Just as Science 2.0 relies on an army of scientific volunteers, the peer-review process also involves an army of scientific experts who give their time to safeguard the credibility of science by ensuring that findings are reliable and data are presented fairly and properly.  It has been very evident, in this time of pandemic and the rapid increase in the volume of preprint-server papers on SARS-CoV-2, that peer review is critical.  Many of the papers on such preprint servers were later either retracted or failed a stringent peer-review process.

Many journals of the 1.0 format do not generally reward their peer reviewers beyond the credit that researchers list on their curricula vitae.  Some journals, like the MDPI journal family, do issue peer-reviewer credits, which can be used to defray the high publication costs of open access (one area that many scientists lament about the open-access movement: the burden of publication cost lies on the individual researcher).

One issue that is highlighted is the potential for INFORMATION NOISE arising from the ability to self-publish on Science 2.0 platforms.

 

The NEW BREED was born in April 2012

An ongoing effort on this platform, https://pharmaceuticalintelligence.com/, is to establish a scientific methodology for curating scientific findings, where one of the goals is to help quell the information noise that can result from the massive amounts of new informatics and data appearing in the biomedical literature.

B.  The Open-Access Publishing Model Leads to Biases and Inequalities in Idea Selection

The open-access publishing model has been compared to the model applied by the advertising industry years ago, with publishers treating the journal articles as “advertisements.”  However, NOTHING could be further from the truth.  In advertising, the companies, not the consumers, pay for the ads.  In scientific open-access publishing, although the consumer (libraries) does not pay for access, the burden of BOTH the cost of doing the research and the cost of publishing the findings now falls on the individual researcher.  Some of these publishing costs can be as high as $4,000 USD per article, which is very high for most researchers.  Many universities try to cover the publishers' open-access fees, so it still costs the consumer (the institution) as well as the individual researcher, limiting the cost savings to either.

This creates a situation in which young researchers, who in general are not well funded, struggle with publication costs, setting up a bias, or an inequitable system, that rewards well-funded senior researchers and bigger academic labs.

C.  Post-Publication Comments and Discussion Require Online Hubs and Anonymization Systems

Many recent publications stress the importance of a post-publication review process, yet, although many big journals like Nature and Science have their own blogs and commentary systems, these are rarely used.  In fact, there is roughly one comment per 100 views of a journal article on these systems.  In traditional journals, editors referee the comments and have the ability to censor comments or discourse.  The article laments that commenting on journal articles should be as easy as commenting on other social sites, yet scientists are still not offering their comments or opinions.

In my personal experience, a well-written commentary must go through editors, who often reject a comment much as they would reject an original research article.  Thus many well-researched and referenced replies from scientists, I believe, never see the light of day if they are not in the editor's interest.

Therefore anonymity is greatly needed, and its absence may be the hindrance that keeps scientific discourse so limited on these types of Science 2.0 platforms.  Platforms that have had success in this arena include anonymous platforms like Wikipedia and certain closed LinkedIn professional groups, whereas more open platforms like Google Knowledge have been a failure.

A great example on this platform was a very spirited LinkedIn conversation on genomics, tumor heterogeneity, and personalized medicine, which we curated from the LinkedIn discussion (unfortunately LinkedIn has since closed many groups), seen here:

Issues in Personalized Medicine: Discussions of Intratumor Heterogeneity from the Oncology Pharma forum on LinkedIn

 

 


In this discussion, it was surprising how many scientists from all over the world contributed, over a single weekend, to a great exchange on the topic of tumor heterogeneity.

Many feel such discussions would be safer if they were anonymized.  However, researchers then do not get any credit for their opinions or commentaries.

A major problem is how to take these intangible contributions and turn them into tangible assets that both promote the discourse and reward those who take the time to improve scientific discussion.

This is where something like NFTs or a decentralized network may become important!

See

https://pharmaceuticalintelligence.com/portfolio-of-ip-assets/
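To make the NFT/decentralized-network idea a bit more concrete, here is a minimal, purely illustrative Python sketch. It does not describe any existing system on this platform, nor a real blockchain or NFT standard; the hash-chained ledger, the ORCID-style identifier, and all function names are hypothetical. The point it illustrates is a simple commitment scheme: a comment can be posted pseudonymously now, while the hidden identity can later be revealed by its author to claim credit, which speaks to both the anonymity and the credit problems discussed above.

```python
# Hypothetical sketch only: a toy, append-only "ledger" of post-publication
# comments in which the author is hidden behind a hash commitment and can
# later prove authorship to claim credit. Not a real decentralized network.

import hashlib
import json
import time
from dataclasses import dataclass, field


def commitment(author_id: str, secret: str) -> str:
    """Hash of identity plus a private secret; hides the author until they choose to reveal."""
    return hashlib.sha256(f"{author_id}:{secret}".encode()).hexdigest()


@dataclass
class ContributionRecord:
    article_doi: str          # article being discussed
    comment_text: str         # the post-publication comment itself
    author_commitment: str    # pseudonymous commitment, not the author's name
    prev_hash: str            # hash of the previous record (a simple chain)
    timestamp: float = field(default_factory=time.time)

    def record_hash(self) -> str:
        payload = json.dumps(self.__dict__, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()


class ContributionLedger:
    """Append-only list of records; a toy stand-in for a decentralized registry."""

    def __init__(self) -> None:
        self.records: list = []

    def add(self, article_doi: str, comment_text: str, author_commitment: str) -> ContributionRecord:
        prev = self.records[-1].record_hash() if self.records else "genesis"
        record = ContributionRecord(article_doi, comment_text, author_commitment, prev)
        self.records.append(record)
        return record

    def claim_credit(self, record: ContributionRecord, author_id: str, secret: str) -> bool:
        """The author reveals their secret later to prove authorship and claim credit."""
        return commitment(author_id, secret) == record.author_commitment


# Example: comment anonymously now, claim credit later (all values hypothetical).
ledger = ContributionLedger()
rec = ledger.add(
    article_doi="10.1000/hypothetical-article",
    comment_text="The dose-response data would benefit from an untreated control arm.",
    author_commitment=commitment("orcid:0000-0000-0000-0000", "my-private-secret"),
)
print(ledger.claim_credit(rec, "orcid:0000-0000-0000-0000", "my-private-secret"))  # True
```

Whether such records live on an NFT registry, a blockchain, or an ordinary database is a separate design question; the sketch only shows that anonymity during discussion and credit after the fact are not mutually exclusive.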

 

UPDATED 5/09/2022

Below is an online Twitter Space discussion we had with some young scientists who are just starting out; they gave their thoughts on what SCIENCE 3.0 and the future of the dissemination of science might look like in light of this new metaverse.  However, we have to define each of these terms in the context of science, and not treat the Internet merely as a decentralized marketplace for commonly held goods.

This online discussion was tweeted out and received a fair number of impressions (60) as well as interactions (50).

For the recording, both on Twitter and in audio format, please see below.

Tweet from Stephen J Williams (@StephenJWillia2), April 28, 2022: “Set a reminder for my upcoming Space! https://t.co/7mOpScZfGN @Pharma_BI @PSMTempleU #science3_0 @science2_0” (https://twitter.com/StephenJWillia2/status/1519776668176502792)

 

 

To introduce this discussion, first some start-off material that will frame the discourse.

 






The Internet and the Web are rapidly adopting a new “Web 3.0” format, with decentralized networks, enhanced virtual experiences, and greater interconnection between people. Here we start the discussion of what the move will look like from Science 2.0, where the dissemination of scientific findings was revolutionized by piggybacking on Web 2.0 and social media, to a Science 3.0 format. What will it involve, and which paradigms will be turned upside down?

Old Science 1.0 is still the backbone of all scientific discourse, built on the massive body of experimental and review literature. However, this literature was in analog format, and we have since moved to a more accessible, digital, open-access format for both publications and raw data. Whereas 1.0 had an organizing structure, such as the Dewey decimal system and indexing, 2.0 made science more accessible and easier to search thanks to the newer digital formats. Yet both needed an organizing structure: for 1.0 it was the scientific method of data and literature organization, with libraries as the indexers; in 2.0 it relied on an army of mostly volunteers who had little incentive to co-curate and organize the findings and the massive literature.

Each version of Science has its caveats: its benefits as well as its deficiencies. This curation and the ongoing discussion are meant to solidify the basis for the new format, along with definitions and a determination of its structure.

We had high hopes for Science 2.0, in particular the smashing of data and knowledge silos. However, the digital age, along with 2.0 platforms, somehow seemed to exacerbate the problem. We are still critically short on analysis!

 

We really need people and organizations to get on top of this new Web 3.0, or metaverse, so that similar issues do not get in the way: namely, we need to create an organizing structure (maybe as knowledgebases), we need INCENTIVIZED co-curators, and we need ANALYSIS… lots of it!

Are these new technologies the cure, or are they just another headache?
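To ground the call above for an organizing structure, the sketch below shows what a minimal knowledgebase entry with curator attribution might look like. This is an illustrative assumption only, not an existing schema on this platform; all field names and the example entry are hypothetical. Even a crude credit count like this makes the co-curation work visible, which is a precondition for incentivizing it.

```python
# Hypothetical sketch of a knowledgebase of curated findings with curator
# attribution. Field names and the example entry are illustrative only.

from dataclasses import dataclass, field
from datetime import date


@dataclass
class CuratedEntry:
    topic: str                 # e.g., "intratumor heterogeneity"
    claim: str                 # the curated scientific statement
    sources: list              # DOIs or URLs of the primary literature
    analysis: str              # the curator's added interpretation (the scarce resource)
    curators: list             # who did the curation work
    tags: list = field(default_factory=list)
    curated_on: date = field(default_factory=date.today)


@dataclass
class Knowledgebase:
    entries: list = field(default_factory=list)

    def add(self, entry: CuratedEntry) -> None:
        self.entries.append(entry)

    def by_tag(self, tag: str) -> list:
        """Simple retrieval; a real system would index, rank, and cross-link entries."""
        return [e for e in self.entries if tag in e.tags]

    def curator_credits(self) -> dict:
        """Count entries per curator -- one crude way to make co-curation work visible."""
        credits: dict = {}
        for entry in self.entries:
            for curator in entry.curators:
                credits[curator] = credits.get(curator, 0) + 1
        return credits


# Example usage with a hypothetical entry
kb = Knowledgebase()
kb.add(CuratedEntry(
    topic="intratumor heterogeneity",
    claim="Intratumor heterogeneity complicates biomarker-guided therapy selection.",
    sources=["https://doi.org/10.1000/hypothetical"],
    analysis="Summarizes points raised in the curated LinkedIn discussion above.",
    curators=["curator_A"],
    tags=["personalized medicine", "oncology"],
))
print(kb.curator_credits())  # {'curator_A': 1}
```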

 

There were a few overarching themes, whether one was talking about AI, NLP, virtual reality, or other new technologies with respect to this new metaverse, and a consensus of Decentralized, Incentivized, and Integrated was commonly expressed among the attendees.

The following are some slides from representative presentations:

[Presentation slide images were displayed here.]
Other articles of note on this topic in this Open Access Scientific Journal include:

Electronic Scientific AGORA: Comment Exchanges by Global Scientists on Articles published in the Open Access Journal @pharmaceuticalintelligence.com – Four Case Studies

eScientific Publishing a Case in Point: Evolution of Platform Architecture Methodologies and of Intellectual Property Development (Content Creation by Curation) Business Model 

e-Scientific Publishing: The Competitive Advantage of a Powerhouse for Curation of Scientific Findings and Methodology Development for e-Scientific Publishing – LPBI Group, A Case in Point

@PharmaceuticalIntelligence.com –  A Case Study on the LEADER in Curation of Scientific Findings

Real Time Coverage @BIOConvention #BIO2019: Falling in Love with Science: Championing Science for Everyone, Everywhere

Old Industrial Revolution Paradigm of Education Needs to End: How Scientific Curation Can Transform Education

 
