Funding, Deals & Partnerships: BIOLOGICS & MEDICAL DEVICES; BioMed e-Series; Medicine and Life Sciences Scientific Journal – http://PharmaceuticalIntelligence.com
ChatGPT Searches and Advent of Meta Threads: What it Means for Social Media and Science 3.0
Curator: Stephen J. Williams, PhD
The following explains how popular ChatGPT has become and how the newest social media platforms, including Meta’s (Facebook’s) new platform Threads, are becoming as popular as or more popular than older social platforms. In fact, in the short time since its inception (Threads launched 7/07/2023), Threads is threatening Twitter for dominance in that market.
U.S. searches for ChatGPT overtake TikTok, Pinterest, and Zoom
Google searches for ChatGPT have overtaken TikTok in the U.S., jumping to 7.1 million monthly searches compared to 5.1 million
The term ‘ChatGPT’ is now one of the top 100 search terms in the U.S., ranking 92nd, according to Ahrefs data
ChatGPT is now searched more than most major social networks, including LinkedIn, Pinterest, TikTok, and Reddit
Analysis of Google search data reveals that online searches for ChatGPT, the popular AI chatbot, have overtaken most popular social networks in the U.S. This comes when search interest in artificial intelligence is at its highest point in history.
The findings by Digital-adoption.com reveal that US-based searches for ChatGPT have exploded and overtaken those for popular social networks, such as LinkedIn, Pinterest, and TikTok, some by millions.
Ranking | Keyword   | US Search Volume (Monthly)
1       | Facebook  | 70,920,000
2       | YouTube   | 69,260,000
3       | Twitter   | 15,440,000
4       | Instagram | 12,240,000
5       | ChatGPT   | 7,130,000
6       | LinkedIn  | 6,990,000
7       | Pinterest | 5,790,000
8       | TikTok    | 5,130,000
9       | Reddit    | 4,060,000
10      | Snapchat  | 1,280,000
11      | WhatsApp  | 936,000
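The figures in the table can be sanity-checked with a few lines of code; this sketch simply re-ranks the Ahrefs volumes listed above and computes ChatGPT's lead over TikTok:

```python
# Monthly U.S. search volumes (Ahrefs, June 2023), copied from the table above.
volumes = {
    "Facebook": 70_920_000, "YouTube": 69_260_000, "Twitter": 15_440_000,
    "Instagram": 12_240_000, "ChatGPT": 7_130_000, "LinkedIn": 6_990_000,
    "Pinterest": 5_790_000, "TikTok": 5_130_000, "Reddit": 4_060_000,
    "Snapchat": 1_280_000, "WhatsApp": 936_000,
}

# Rank terms by volume, then report ChatGPT's position and its lead over TikTok.
ranked = sorted(volumes, key=volumes.get, reverse=True)
chatgpt_rank = ranked.index("ChatGPT") + 1
lead_over_tiktok = volumes["ChatGPT"] - volumes["TikTok"]
print(chatgpt_rank, lead_over_tiktok)  # 5 2000000
```

Consistent with the story: ChatGPT sits fifth overall and leads TikTok by exactly the 2 million monthly searches (7.1 million vs. 5.1 million) cited above.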
Since its release in November 2022, searches for ChatGPT have overtaken those of most major social networks. According to the latest June search figures by search tool Ahrefs, searches for ‘ChatGPT’ and ‘Chat GPT’ are made 7,130,000 times monthly in the U.S.
That’s more than the monthly search volume for most of the top ten social networks, including LinkedIn, Pinterest, and TikTok. TikTok is one of the fastest-growing social media apps, reaching 100 million users in just a year.
Searches for ChatGPT have eclipsed other major networks in the U.S., such as Reddit, by millions.
Everyday search terms such as ‘maps’ and ‘flights’ have seen their search volumes pale in comparison to the rising popularity of ChatGPT. ‘Maps’ is currently searched 440,000 fewer times than the chatbot each month, and ‘flights’ is now Googled 2.2 million fewer times.
2023 has been a breakout year for AI, as searches for the term have more than doubled from 17 million in January 2023 to 42 million in May. In comparison, there were 7.9 million searches in January 2022. There has been an 825% increase in searches for ‘AI’ in the US compared to the average over the last five years.
There is a correlation between the uptick and the public releases of accessible AI chatbots such as ChatGPT, released on November 30, 2022, and Bing AI and Google Bard, released in May 2023.
According to TikTok data, interest in artificial intelligence has soared tenfold since 2020, and virtual reality has more than tripled.
AI has been a big topic of conversation this year as accessible AI chatbots and new technologies were released and sparked rapid adoption, prompting tech leaders like Elon Musk to call for AI regulation.
A spokesperson from Digital-adoption.com commented on the findings: “There has been a massive surge in AI interest this year. Apple’s announcement of Vision Pro has captured audiences at the right time, when new AI technologies, like ChatGPT, have become accessible to almost anyone. The rapid adoption of ChatGPT is surprising, with it becoming one of the fastest-growing tools available”.
All data was gathered from Ahrefs and Google Trends.
If using this story, please include a link to https://www.digital-adoption.com/, which conducted this study. A linked credit allows us to keep supplying you with content that you may find useful in the future.
Updated July 10, 2023 9:00 am ET / Original July 10, 2023 7:44 am ET
The launch of Meta Platforms’ Threads looks to have outpaced even the viral success of ChatGPT in terms of signing up users. The next challenge will be keeping them around.
Since its launch on Thursday 7/07/2023, Meta’s new Threads platform has been signing up new users at an astonishing rate. On rollout day 5 million signed up, then 30 million by the next morning, and as of today (7/10/2023) Threads has over 100 million signups. Compare that to Twitter’s 436 million users, who are tweeting on average 25% less than a few years ago, and it is easy to see why many social media pundits are calling Threads the new Twitter-killer app.
Here are a few notes from the New York Times podcast The Daily
Last week, Meta, the parent company of Facebook and Instagram, released Threads, a social media platform to compete with Twitter. In just 16 hours, Threads was downloaded more than 30 million times.
Mike Isaac, who covers tech companies and Silicon Valley for The Times, explains how Twitter became so vulnerable and discusses the challenges Meta faces to create a less toxic alternative.
Guest: Mike Isaac, a technology correspondent for The New York Times.
Background reading:
Threads is on pace to exceed 100 million users within two months, a feat achieved only by ChatGPT.
Here’s what to know about Threads and how it differs from Twitter.
Here are a few notes from the podcast:
Mike Isaac lamented that Twitter has become user unfriendly for a host of reasons. These include:
The instant ‘reply guys’ – people who reply but don’t really follow you or your thread
Your followers or following are not pushed to top of thread
The auto bots – the automated Twitter bots
Spam feeds
The changes in service and all the new fees: Twitter’s push to monetize everything – like the airlines
Elon Musk wanted to transform Twitter, but his history is one of cutting – not just trimming the excess; he is known to eliminate entire departments because he either doesn’t want to pay or CAN’T pay. At Twitter he gutted content moderation.
Twitter’s ad business is plummeting, but Musk wants to make Twitter a subscription business (the Blue check mark)
Twitter only gets a couple of million dollars per month from Twitter Blue, but Musk has to pay billions just to cover the interest on the loans from the Twitter purchase
It is known that Musk is not paying rent on some California offices (some suggest he defaulted on leases), and Musk has been selling Tesla stock to pay for Twitter expenses (the consensus explanation for why TSLA stock has been falling)
Twitter is the largest compendium of natural-language conversations, and Musk wanted to limit bots from scraping Twitter data to run AI and NLP on Twitter threads. This is also a grievance against other companies: that these ‘scrapers’ are not paying enough for Twitter data. However, as Mike asks, why should small Twitter users have to pay, either in fees or in cutbacks to service? (The reason Elon is limiting views per day is to limit these bots from scraping Twitter for data.)
Another problem is that Twitter does not have its own servers, so it pays Google and AWS heavily for server space. It appears Elon and Twitter are running out of money.
META and THREADS
Zuckerberg has spent billions on infrastructure and created a massive advertising ecosystem. This is part of the thinking behind his push into this space. Zuckerberg actually wanted to buy Twitter a decade ago.
Usage and growth: Threads launched on Thursday 7-07-23. There were 2 million initial signups, then 30 million overnight by the next morning. Today, Monday 7-10-23, there are 100 million, rivaling Twitter’s 436 million accounts. And as Musk keeps canceling Twitter accounts and angering users over fees and usage restrictions, people are looking for a good alternative platform. Mastodon is too technical and is not seeing the adoption that Meta’s Threads is having. Mike Isaac hopes Threads will not go the way of Google Hangouts or Google Plus, but Google’s strategy did not center on social media the way Facebook’s does.
Signup and issues: Signing up on Threads is easy, but you need to go through Instagram. Some people have concerns about their Instagram feed being mixed into their Threads feed, but Mike has talked to people at Meta, and they are working to allow users to keep the feeds separate, mainly because Meta understands that the Instagram and Twitter social cultures are different and users may want to keep Threads more business-like.
Important issues for LPBI: By the end of May 2023, Twitter had decided to end its relationship with WordPress’s Jetpack service, through which WordPress posts could automatically be posted to your Twitter account and feed. Twitter is making users like WordPress pay for this API, and WordPress said it would be too expensive, as Twitter is charging not a flat fee but a per-usage fee. This is a major hindrance, even though the Twitter social-share button is still active on posts.
Initial conversations between META and WordPress have indicated META will keep this API service free for WordPress.
So a little background on Meta Threads and signup features from Meta (Facebook) website:
Takeaways
Threads is a new app, built by the Instagram team, for sharing text updates and joining public conversations.
You log in using your Instagram account and posts can be up to 500 characters long and include links, photos, and videos up to 5 minutes in length.
We’re working to soon make Threads compatible with the open, interoperable social networks that we believe can shape the future of the internet.
Mark Zuckerberg just announced the initial version of Threads, an app built by the Instagram team for sharing with text. Whether you’re a creator or a casual poster, Threads offers a new, separate space for real-time updates and public conversations. We are working toward making Threads compatible with the open, interoperable social networks that we believe can shape the future of the internet.
Instagram is where billions of people around the world connect over photos and videos. Our vision with Threads is to take what Instagram does best and expand that to text, creating a positive and creative space to express your ideas. Just like on Instagram, with Threads you can follow and connect with friends and creators who share your interests – including the people you follow on Instagram and beyond. And you can use our existing suite of safety and user controls.
Join the Conversation from Instagram
It’s easy to get started with Threads: simply use your Instagram account to log in. Your Instagram username and verification will carry over, with the option to customize your profile specifically for Threads.
Everyone who is under 16 (or under 18 in certain countries) will be defaulted into a private profile when they join Threads. You can choose to follow the same accounts you do on Instagram, and find more people who care about the same things you do. The core accessibility features available on Instagram today, such as screen reader support and AI-generated image descriptions, are also enabled on Threads.
Your feed on Threads includes threads posted by people you follow, and recommended content from new creators you haven’t discovered yet. Posts can be up to 500 characters long and include links, photos, and videos up to 5 minutes in length. You can easily share a Threads post to your Instagram story, or share your post as a link on any other platform you choose.
Tune Out the Noise
We built Threads with tools to enable positive, productive conversations. You can control who can mention you or reply to you within Threads. Like on Instagram, you can add hidden words to filter out replies to your threads that contain specific words. You can unfollow, block, restrict or report a profile on Threads by tapping the three-dot menu, and any accounts you’ve blocked on Instagram will automatically be blocked on Threads.
As with all our products, we’re taking safety seriously, and we’ll enforce Instagram’s Community Guidelines on content and interactions in the app. Since 2016 we’ve invested more than $16 billion in building up the teams and technologies needed to protect our users, and we remain focused on advancing our industry-leading integrity efforts and investments to protect our community.
Compatible with Interoperable Networks
Soon, we are planning to make Threads compatible with ActivityPub, the open social networking protocol established by the World Wide Web Consortium (W3C), the body responsible for the open standards that power the modern web. This would make Threads interoperable with other apps that also support the ActivityPub protocol, such as Mastodon and WordPress – allowing new types of connections that are simply not possible on most social apps today. Other platforms including Tumblr have shared plans to support the ActivityPub protocol in the future.
We’re committed to giving you more control over your audience on Threads – our plan is to work with ActivityPub to provide you the option to stop using Threads and transfer your content to another service. Our vision is that people using compatible apps will be able to follow and interact with people on Threads without having a Threads account, and vice versa, ushering in a new era of diverse and interconnected networks. If you have a public profile on Threads, this means your posts would be accessible from other apps, allowing you to reach new people with no added effort. If you have a private profile, you’d be able to approve users on Threads who want to follow you and interact with your content, similar to your experience on Instagram.
The benefits of open social networking protocols go well beyond the ways people can follow each other. Developers can build new types of features and user experiences that can easily plug into other open social networks, accelerating the pace of innovation and experimentation. Each compatible app can set its own community standards and content moderation policies, meaning people have the freedom to choose spaces that align with their values. We believe this decentralized approach, similar to the protocols governing email and the web itself, will play an important role in the future of online platforms.
Threads is Meta’s first app envisioned to be compatible with an open social networking protocol – we hope that by joining this fast-growing ecosystem of interoperable services, Threads will help people find their community, no matter what app they use.
What’s Next
We’re rolling out Threads today in more than 100 countries for iOS and Android, and people in those countries can download the app from the Apple App Store and Google Play Store.
In addition to working toward making Threads compatible with the ActivityPub protocol, soon we’ll be adding a number of new features to help you continue to discover threads and creators you’re interested in, including improved recommendations in feed and a more robust search function that makes it easier to follow topics and trends in real time.
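For readers curious what ActivityPub compatibility means in practice: on existing fediverse servers such as Mastodon, following an account starts with a WebFinger lookup (RFC 7033) that resolves a handle into the account's ActivityPub actor document. A minimal sketch of that discovery step follows; note that Threads' own federation endpoints were not live at the time of writing, so a generic Mastodon handle is used as the example.

```python
# Sketch of the WebFinger discovery step (RFC 7033) used by ActivityPub servers
# such as Mastodon to resolve a handle like '@user@host' into an actor URL.
from urllib.parse import quote

def webfinger_url(handle: str) -> str:
    """Build the WebFinger lookup URL for a fediverse handle."""
    user, host = handle.lstrip("@").split("@")
    resource = quote(f"acct:{user}@{host}", safe="")
    return f"https://{host}/.well-known/webfinger?resource={resource}"

# Fetching this URL returns JSON whose 'links' entry of type
# 'application/activity+json' points at the account's ActivityPub actor.
print(webfinger_url("@user@mastodon.social"))
# https://mastodon.social/.well-known/webfinger?resource=acct%3Auser%40mastodon.social
```

Once the actor document is retrieved, the follow/post interactions happen over the ActivityPub protocol itself; this lookup is only the address-book step, but it is what would let a Mastodon or WordPress user find a Threads account once federation goes live.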
Should Science Migrate over to Threads Instead of Twitter?
I have written multiple times about the impact of social media, Science and Web 2.0, and the new Science and Web 3.0, including:
As of this writing, it does not appear crucial that scientific institutions migrate over to Threads yet, although the impetus is certainly there. Many of the signups have, of course, been through Instagram (which is the only way to sign up for now), and a search of @Threads does not show that large scientific organizations have signed up so far.
A search for NIH, NCBI, AACR, and the Personalized Medicine Coalition (PMC, the big MGH organization on personalized medicine) appears to return nothing yet. Pfizer and most big pharma are on @Threads now, but that is because they maintain a marketing presence on Instagram. How necessary @Threads is for communicating science over a Science 3.0 platform remains to be seen. In addition, how will @Threads be used for real-time scientific conference coverage? Will Meta be able to integrate it with virtual reality?
Other articles of Note on this Open Access Scientific Journal Include:
Science Has A Systemic Problem, Not an Innovation Problem
Curator: Stephen J. Williams, Ph.D.
A recent email asking me to submit a survey got me thinking about the malaise that scientists and industry professionals frequently bemoan: that innovation has been stymied for some reason and that all sorts of convoluted processes must be altered to spur this mythical void of great new discoveries… and it got me thinking about our current state of science, what the perceived issue is, and whether this desert of innovation actually exists or is more a fundamental problem we have created.
The email was from an NIH committee asking for opinions on recreating the grant review process – and it arrived the same day someone complained to me about a shoddy and perplexing grant review they had received.
The email was sent out to multiple researchers involved in NIH grant review on both sides, as well as those who had been involved in previous questionnaires and studies on grant review and bias. It asked researchers to fill out a survey on the grant review process and how best to change it to increase innovation of ideas as well as inclusivity. In recent years there have been multiple such survey requests, accompanied by multiple confusing procedural changes to grant format and content requirements, adding administrative burden for scientists.
The email from the Center for Scientific Review (one of the divisions a grant goes to before review; they set up review study sections and decide which section a grant should be assigned to) was as follows:
Update on Simplifying Review Criteria: A Request for Information
NIH has issued a request for information (RFI) seeking feedback on revising and simplifying the peer review framework for research project grant applications. The goal of this effort is to facilitate the mission of scientific peer review – identification of the strongest, highest-impact research. The proposed changes will allow peer reviewers to focus on scientific merit by evaluating 1) the scientific impact, research rigor, and feasibility of the proposed research without the distraction of administrative questions and 2) whether or not appropriate expertise and resources are available to conduct the research, thus mitigating the undue influence of the reputation of the institution or investigator.
Currently, applications for research project grants (RPGs, such as R01s, R03s, R15s, R21s, R34s) are evaluated based on five scored criteria: Significance, Investigators, Innovation, Approach, and Environment (derived from NIH peer review regulations 42 C.F.R. Part 52h.8; see Definitions of Criteria and Considerations for Research Project Grant Critiques for more detail) and a number of additional review criteria such as Human Subject Protections.
NIH gathered input from the community to identify potential revisions to the review framework. Given longstanding and often-heard concerns from diverse groups, CSR decided to form two working groups to the CSR Advisory Council—one on non-clinical trials and one on clinical trials. To inform these groups, CSR published a Review Matters blog, which was cross-posted on the Office of Extramural Research blog, Open Mike. The blog received more than 9,000 views by unique individuals and over 400 comments. Interim recommendations were presented to the CSR Advisory Council in a public forum (March 2020 video, slides; March 2021 video, slides). Final recommendations from the CSRAC (report) were considered by the major extramural committees of the NIH that included leadership from across NIH institutes and centers. Additional background information can be found here. This process produced many modifications and the final proposal presented below. Discussions are underway to incorporate consideration of a Plan for Enhancing Diverse Perspectives (PEDP) and rigorous review of clinical trials RPGs (~10% of RPGs are clinical trials) within the proposed framework.
Simplified Review Criteria
NIH proposes to reorganize the five review criteria into three factors, with Factors 1 and 2 receiving a numerical score. Reviewers will be instructed to consider all three factors (Factors 1, 2 and 3) in arriving at their Overall Impact Score (scored 1-9), reflecting the overall scientific and technical merit of the application.
Factor 1: Importance of the Research (Significance, Innovation), numerical score (1-9)
Factor 2: Rigor and Feasibility (Approach), numerical score (1-9)
Factor 3: Expertise and Resources (Investigator, Environment), assessed and considered in the Overall Impact Score, but not individually scored
Within Factor 3 (Expertise and Resources), Investigator and Environment will be assessed in the context of the research proposed. Investigator(s) will be rated as “fully capable” or “additional expertise/capability needed”. Environment will be rated as “appropriate” or “additional resources needed.” If a need for additional expertise or resources is identified, written justification must be provided. Detailed descriptions of the three factors can be found here.
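The proposed framework reduces to a small amount of structure: two factors scored 1-9, two pass/flag assessments, and a written justification required whenever a need is flagged. The sketch below is purely illustrative – the field names and validation rules are my paraphrase of the proposal, not an NIH artifact:

```python
from dataclasses import dataclass

@dataclass
class SimplifiedReview:
    importance: int          # Factor 1: Significance + Innovation, scored 1-9
    rigor_feasibility: int   # Factor 2: Approach, scored 1-9
    investigator_ok: bool    # Factor 3: "fully capable" vs. expertise needed
    environment_ok: bool     # Factor 3: "appropriate" vs. resources needed
    justification: str = ""  # required if either Factor 3 need is flagged

    def validate(self) -> None:
        # Factors 1 and 2 must carry numerical scores in the 1-9 range.
        for score in (self.importance, self.rigor_feasibility):
            if not 1 <= score <= 9:
                raise ValueError("scored factors must be in the 1-9 range")
        # A flagged need for expertise or resources requires written justification.
        if not (self.investigator_ok and self.environment_ok) and not self.justification:
            raise ValueError("written justification required when a need is flagged")

review = SimplifiedReview(importance=2, rigor_feasibility=3,
                          investigator_ok=True, environment_ok=True)
review.validate()  # passes: both scored factors in range, no flagged needs
```

Factor 3 feeds the Overall Impact Score without receiving its own number, which is why it appears here only as boolean assessments plus a justification field.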
Some of the comments were very illuminating:
I strongly support streamlining the five current main review criteria into three, and the present five additional criteria into two. This will bring clarity to applicants and reduce the workload on both applicants and reviewers. Blinding reviewers to the applicants’ identities and institutions would be a helpful next step, and would do much to reduce the “rich-getting-richer” / “good ole girls and good ole boys” / “big science” elitism that plagues the present review system, wherein pedigree and connections often outweigh substance and creativity.
I support the proposed changes. The shift away from “innovation” will help reduce the tendency to create hype around a proposed research direction. The shift away from Investigator and Environment assessments will help reduce bias toward already funded investigators in large well-known institutions.
As a reviewer for 5 years, I believe that the proposed changes are a step in the right direction, refocusing the review on whether the science SHOULD be done and whether it CAN BE DONE WELL, while eliminating burdensome and unhelpful sections of review that are better handled administratively. I particularly believe that the de-emphasis of innovation (which typically focuses on technical innovation) will improve evaluation of the overall science, and de-emphasis of review of minor technical details will, if implemented correctly, reduce the “downward pull” on scores for approach. The above comments reference blinded reviews, but I did not see this in the proposed recommendations. I do not believe this is a good idea for several reasons: 1) Blinding of the applicant and institution is not likely feasible for many of the reasons others have described (e.g., self-referencing of prior work), 2) Blinding would eliminate the potential to review investigators’ biosketches and budget justifications, which are critically important in review, 3) Making review blinded would make determination of conflicts of interest harder to identify and avoid, 4) Evaluation of “Investigator and Environment” would be nearly impossible.
Most of the comments were in favor of the proposed changes; however, many admitted that they add confusion on top of the many administrative changes to the formats and content of grant sections.
Being a Stephen Covey devotee, and having just listened to The Four Principles of Execution, it became more apparent that the issues hindering many great ideas from coming to fruition, especially in science, result from systemic problems in the process, not from the individual researchers or small companies trying to get their innovations funded or noticed. In summary, Dr. Covey states that most issues related to the success of any initiative lie NOT in the strategic planning but in the failure to adhere to a few EXECUTION principles. Primary among these failures of strategic plans is a lack of accounting for what Dr. Covey calls the ‘whirlwind’: those important but recurring tasks that take us away from achieving the wildly important goals. In addition, failure to determine lead and lag measures of success hinders such plans.
In this case, the lag measure is INNOVATION. It appears we have created such a whirlwind, and such a focus on lag measures, that we are incapable of translating great discoveries into INNOVATION.
In the following post, I will focus on how issues relating to Open Access publishing and the dissemination of scientific discovery may be costing us TIME to INNOVATION. And it appears there are systemic reasons why we seem stuck in a rut, so to speak.
The first indication is from a paper published by Johan Chu and James Evans in 2021 in PNAS:
Slowed canonical progress in large fields of science
Chu JSG, Evans JA. Slowed canonical progress in large fields of science. Proc Natl Acad Sci U S A. 2021 Oct 12;118(41):e2021636118. doi: 10.1073/pnas.2021636118. PMID: 34607941; PMCID: PMC8522281
Abstract
In many academic fields, the number of papers published each year has increased significantly over time. Policy measures aim to increase the quantity of scientists, research funding, and scientific output, which is measured by the number of papers produced. These quantitative metrics determine the career trajectories of scholars and evaluations of academic departments, institutions, and nations. Whether and how these increases in the numbers of scientists and papers translate into advances in knowledge is unclear, however. Here, we first lay out a theoretical argument for why too many papers published each year in a field can lead to stagnation rather than advance. The deluge of new papers may deprive reviewers and readers the cognitive slack required to fully recognize and understand novel ideas. Competition among many new ideas may prevent the gradual accumulation of focused attention on a promising new idea. Then, we show data supporting the predictions of this theory. When the number of papers published per year in a scientific field grows large, citations flow disproportionately to already well-cited papers; the list of most-cited papers ossifies; new papers are unlikely to ever become highly cited, and when they do, it is not through a gradual, cumulative process of attention gathering; and newly published papers become unlikely to disrupt existing work. These findings suggest that the progress of large scientific fields may be slowed, trapped in existing canon. Policy measures shifting how scientific work is produced, disseminated, consumed, and rewarded may be called for to push fields into new, more fertile areas of study.
So the summary of this paper is:
The authors examined 1.8 billion citations among 90 million papers over 241 subjects
found that the corpus of papers does not lead to turnover of new ideas in a field, but rather to the ossification or entrenchment of canonical (older) ideas
this is mainly because older papers are cited more frequently than new papers with new ideas, potentially because authors are trying to get their own papers cited more often for funding and exposure purposes
The authors suggest that “fundamental progress may be stymied if quantitative growth of scientific endeavors is not balanced by structures fostering disruptive scholarship and focusing attention on novel ideas”
The authors note that, in most cases, science policy reinforces this “more is better” philosophy, where metrics of publication productivity are either the number of publications or impact measured by citation rankings. However, an analysis of citation changes in large versus smaller fields makes it apparent that this process favors older, more established papers and a recirculation of older canonical ideas.
“Rather than resulting in faster turnover of field paradigms, the massive amounts of new publications entrenches the ideas of top-cited papers.” New ideas are pushed to the bottom of the citation list and potentially lost in the literature. The authors suggest this problem will intensify as the “annual mass” of new publications in each field grows, especially in large fields. The issue is exacerbated by the deluge of new online ‘open access’ journals, in which authors tend to cite the more highly cited literature.
We may be at a critical juncture: if many papers are published in a short time, new ideas will not be considered as carefully as older ones. In addition, the proliferation of journals and the blurring of journal hierarchies due to online article-level access can exacerbate this problem.
As a counterpoint, the authors do note that even though many highly cited molecular biology articles date to 1976, there has been a great deal of innovation since then; however, it may take far more experiments and money to reach the citation levels those papers achieved, and hence measured scientific productivity is lower.
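The entrenchment mechanism Chu and Evans describe behaves like preferential attachment: new citations flow to papers in proportion to the citations they already have. The toy simulation below illustrates that dynamic only; it is not a reproduction of the paper's analysis of 1.8 billion citations:

```python
# Toy preferential-attachment model of citation entrenchment.
import random

random.seed(0)
citations = [1] * 10              # seed the field with 10 papers
for _ in range(5000):
    # Each new citation lands on an existing paper with probability
    # proportional to that paper's current citation count.
    r = random.uniform(0, sum(citations))
    acc = 0
    for i, c in enumerate(citations):
        acc += c
        if r <= acc:
            citations[i] += 1
            break
    citations.append(1)           # a new paper also enters the field

top_share = max(citations) / sum(citations)
print(f"most-cited paper holds {top_share:.1%} of all citations")
```

Even in this tiny model, the early and already well-cited papers absorb a disproportionate share of new citations while late entrants rarely climb – the “ossification” pattern described above.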
This issue is seen in the field of economics as well
Ellison, Glenn. “Is peer review in decline?” Economic Inquiry, vol. 49, no. 3, July 2011, pp. 635+. Gale Academic OneFile, link.gale.com/apps/doc/A261386330/AONE?u=temple_main&sid=bookmark-AONE&xid=f5891002. Accessed 12 Dec. 2022.
Abstract:
Over the past decade, there has been a decline in the fraction of papers in top economics journals written by economists from the highest-ranked economics departments. This paper documents this fact and uses additional data on publications and citations to assess various potential explanations. Several observations are consistent with the hypothesis that the Internet improves the ability of high-profile authors to disseminate their research without going through the traditional peer-review process. (JEL A14, 030)
The facts part of this paper documents two main facts:
1. Economists in top-ranked departments now publish very few papers in top field journals. There is a marked decline in such publications between the early 1990s and early 2000s.
2. Comparing the early 2000s with the early 1990s, there is a decline in both the absolute number of papers and the share of papers in the top general interest journals written by Harvard economics department faculty.
Although the second fact just concerns one department, I see it as potentially important to understanding what is happening because it comes at a time when Harvard is widely regarded (I believe correctly) as having ascended to the top position in the profession.
The “decline-of-peer-review” theory I allude to in the title is that the necessity of going through the peer-review process has lessened for high-status authors: in the old days peer-reviewed journals were by far the most effective means of reaching readers, whereas with the growth of the Internet high-status authors can now post papers online and exploit their reputation to attract readers.
Many alternate explanations are possible. I focus on four theories: the decline-in-peer-review theory and three alternatives.
1. The trends could be a consequence of top-school authors’ being crowded out of the top journals by other researchers. Several such stories have an optimistic message, for example, there is more talent entering the profession, old pro-elite biases are being broken down, more schools are encouraging faculty to do cutting-edge research, and the Internet is enabling more cutting-edge research by breaking down informational barriers that had hampered researchers outside the top schools. (2)
2. The trends could be a consequence of the growth of revisions at economics journals discussed in Ellison (2002a, 2002b). In this more pessimistic theory, highly productive researchers must abandon some projects and/or seek out faster outlets to conserve the time now required to publish their most important works.
3. The trends could simply reflect that field journals have declined in quality in some relative sense and become a less attractive place to publish. This theory is also meant to encompass the rise of new journals, which is not obviously desirable or undesirable.
The majority of this paper is devoted to examining various data sources that provide additional details about how economics publishing has changed over the past decade. These are intended both to sharpen understanding of the facts to be explained and to provide tests of auxiliary predictions of the theories. Two main sources of information are used: data on publications and data on citations. The publication data include department-level counts of publications in various additional journals, an individual-level dataset containing records of publications in a subset of journals for thousands of economists, and a very small dataset containing complete data on a few authors’ publication records. The citation data include citations at the paper level for 9,000 published papers and less well-matched data that is used to construct measures of citations to authors’ unpublished works, to departments as a whole, and to various journals.
Inside Job or Deep Impact? Extramural Citations and the Influence of Economic Scholarship
Josh Angrist, Pierre Azoulay, Glenn Ellison, Ryan Hill, Susan Feng Lu. Inside Job or Deep Impact? Extramural Citations and the Influence of Economic Scholarship.
JOURNAL OF ECONOMIC LITERATURE
Abstract
Does academic economic research produce material of general scientific value, or do academic economists write only for peers? Is economics scholarship uniquely insular? We address these questions by quantifying interactions between economics and other disciplines. Changes in the influence of economic scholarship are measured here by the frequency with which other disciplines cite papers in economics journals. We document a clear rise in the extramural influence of economic research, while also showing that economics is increasingly likely to reference other social sciences. A breakdown of extramural citations by economics fields shows broad field influence. Differentiating between theoretical and empirical papers classified using machine learning, we see that much of the rise in economics’ extramural influence reflects growth in citations to empirical work. This growth parallels an increase in the share of empirical cites within economics. At the same time, some disciplines that primarily cite economic theory have also recently increased citations of economics scholarship.
Citation
Angrist, Josh, Pierre Azoulay, Glenn Ellison, Ryan Hill, and Susan Feng Lu. 2020. “Inside Job or Deep Impact? Extramural Citations and the Influence of Economic Scholarship.” Journal of Economic Literature, 58 (1): 3–52. DOI: 10.1257/jel.20181508
So if innovation is present but buried under a massive amount of heavily cited older literature, do we see evidence of this in other fields, such as medicine?
Why Isn’t Innovation Helping Reduce Health Care Costs?
National health care expenditures (NHEs) in the United States continue to grow at rates outpacing the broader economy: Inflation- and population-adjusted NHEs have increased 1.6 percent faster than the gross domestic product (GDP) between 1990 and 2018. US national health expenditure growth as a share of GDP far outpaces comparable nations in the Organization for Economic Cooperation and Development (17.2 versus 8.9 percent).
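To make the 1.6 percent differential concrete, here is a minimal back-of-the-envelope sketch in Python; the 28-year horizon comes from the 1990–2018 window quoted above, while the assumption that the differential compounds annually is ours:

```python
# Compound effect of NHE growth outpacing GDP by 1.6 percentage points
# annually over 1990-2018, per the figures quoted above.

years = 2018 - 1990            # 28-year window
excess = 0.016                 # NHE growth minus GDP growth, per the text

# Ratio of the NHE share of GDP at the end vs. the start of the period,
# assuming the differential compounds annually.
share_ratio = (1 + excess) ** years
print(round(share_ratio, 2))   # prints 1.56: the NHE share of GDP grows ~56%
```

Even a modest-sounding annual differential, compounded over three decades, yields a substantially larger slice of the economy devoted to health care.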
Multiple recent analyses have proposed that growth in the prices and intensity of US health care services—rather than in utilization rates or demographic characteristics—is responsible for the disproportionate increases in NHEs relative to global counterparts. The consequences of ever-rising costs amid ubiquitous underinsurance in the US include price-induced deferral of care leading to excess morbidity relative to comparable nations.
These patterns exist despite a robust innovation ecosystem in US health care—implying that novel technologies, in isolation, are insufficient to bend the health care cost curve. Indeed, studies have documented that novel technologies directly increase expenditure growth.
Why is our prolific innovation ecosystem not helping reduce costs? The core issue relates to its apparent failure to enhance net productivity—the relative output generated per unit resource required. In this post, we decompose the concept of innovation to highlight situations in which inventions may not increase net productivity. We begin by describing how this issue has taken on increased urgency amid resource constraints magnified by the COVID-19 pandemic. In turn, we describe incentives for the pervasiveness of productivity-diminishing innovations. Finally, we provide recommendations to promote opportunities for low-cost innovation.
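The net-productivity concept defined above (relative output per unit resource) can be sketched in a few lines; the function name and the sample figures below are illustrative assumptions, not from the source:

```python
# Minimal sketch of "net productivity": output generated per unit of
# resource required. Numbers are illustrative, not from the source.

def net_productivity(output_units: float, resource_units: float) -> float:
    """Relative output generated per unit resource required."""
    return output_units / resource_units

# An innovation that raises output 10% but requires 30% more resources
# lowers net productivity even though it "improves performance":
baseline = net_productivity(100, 100)      # 1.0
with_gadget = net_productivity(110, 130)   # ~0.85
print(with_gadget < baseline)              # prints True
```

This is the sense in which an invention can be performance-enhancing and productivity-diminishing at the same time: the denominator grows faster than the numerator.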
Net Productivity During The COVID-19 Pandemic
The issue of productivity-enhancing innovation is timely, as health care systems have been overwhelmed by COVID-19. Hospitals in Italy, New York City, and elsewhere have lacked adequate capital resources to care for patients with the disease, sufficient liquidity to invest in sorely needed resources, and enough staff to perform all of the necessary tasks.
The critical constraint in these settings is not technology: In fact, the most advanced technology required to routinely treat COVID-19—the mechanical ventilator—was invented nearly 100 years ago in response to polio (the so-called iron lung). Rather, the bottleneck relates to the total financial and human resources required to use the technology—the denominator of net productivity. The clinical implementation of ventilators has been illustrative: Health care workers are still required to operate ventilators on a nearly one-to-one basis, just like in the mid-twentieth century.
The high levels of resources required to implement health care technologies constrain the scalability of patient care, particularly during respiratory disease outbreaks such as COVID-19. Thus, research to reduce health care costs is the very research we urgently require to promote health care access for patients with COVID-19.
Types Of Innovation And Their Relationship To Expenditure Growth
The widespread use of novel medical technologies has been highlighted as a central driver of NHE growth in the US. We believe that the continued expansion of health care costs is largely the result of innovation that tends to have low productivity (exhibit 1). We argue that these archetypes—novel widgets tacked on to existing workflows to reinforce traditional care models—have exactly the wrong properties for reducing NHEs at the systemic level.
Exhibit 1: Relative productivity of innovation subtypes
Content innovations (the novel tests, therapeutics, and devices themselves) may be contrasted with process innovations, which address the organized sequences of activities that implement content. Classically, these include clinical pathways and protocols. They can address the delivery of care for acute conditions, such as central line infections, sepsis, or natural disasters. Alternatively, they can target chronic conditions through initiatives such as team-based management of hypertension and hospital-at-home models for geriatric care. Other processes include hiring staff, delegating labor, and supply chain management.
Performance-Enhancing Versus Cost-Reducing Innovation
Performance-enhancing innovations frequently create incremental outcome gains in diagnostic characteristics, such as sensitivity or specificity, or in therapeutic characteristics, such as biomarkers for disease status. Their performance gains often lead to higher prices compared to existing alternatives.
Performance-enhancing innovations can be contrasted with “non-inferior” innovations, which achieve outcomes approximating those of existing alternatives but at reduced cost. Industries outside of medicine, such as computing, have relied heavily on the ability to reduce costs while retaining performance.
In health care, though, this pattern of innovation is rare. Since passage of the 2010 “Biosimilars” Act, aimed at stimulating non-inferior innovation and competition in therapeutics markets, only 17 agents have been approved, and only seven have reached the market. More than three-quarters of all drugs receiving new patents between 2005 and 2015 were “reissues”: they had already been approved, and the new patent reflected changes to the previously approved formula. Meanwhile, the costs of approved drugs have increased over time, at rates between 4 percent and 7 percent annually.
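To see what the quoted 4–7 percent annual price growth implies when compounded, a quick sketch; the ten-year horizon is an illustrative assumption, not stated in the source:

```python
# Compounding the 4-7 percent annual drug-price growth cited above
# over an illustrative ten-year horizon.

low_rate, high_rate = 1.04, 1.07

decade_low = low_rate ** 10    # ~1.48: prices up ~48% at the low end
decade_high = high_rate ** 10  # ~1.97: prices roughly double at the high end
print(round(decade_low, 2), round(decade_high, 2))  # prints 1.48 1.97
```

At the upper end of the quoted range, prices for already-approved drugs roughly double within a decade.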
Moreover, performance-enhancing diagnostic and therapeutic innovations tend to address narrow patient cohorts (such as rare diseases or cancer subtypes), with limited clear clinical utility in broader populations. For example, the recently approved eculizumab is a monoclonal antibody approved for paroxysmal nocturnal hemoglobinuria—which affects 1 in 10 million individuals. At the time of its launch, eculizumab was priced at more than $400,000 per year, making it the most expensive drug in modern history. For clinical populations with no available alternatives, drugs such as eculizumab may be cost-effective, pending society’s willingness to pay, and morally desirable, given a society’s values. But such drugs are certainly not cost-reducing.
Additive Versus Substitutive Innovation
Additive innovations are those that append to preexisting workflows, while substitutive innovations reconfigure preexisting workflows. In this way, additive innovations increase the use of precedent services, whereas substitutive innovations decrease precedent service use.
For example, previous analyses have found that novel imaging modalities are additive innovations, as they tend not to diminish use of preexisting modalities. Similarly, novel procedures tend to incompletely replace traditional procedures. In the case of therapeutics and devices, off-label uses in disease groups outside of the approved indication(s) can prompt innovation that is additive. This is especially true, given that off-label prescriptions classically occur after approved methods are exhausted.
Eculizumab once again provides an illustrative example. As of February 2019, the drug had been used for 39 indications (only three of which had been approved by that time), and 69 percent of those uses lacked any form of evidence of real-world effectiveness. Meanwhile, the drug generated nearly $4 billion in sales in 2019. Again, these expenditures may be something for which society chooses to pay—but they are nonetheless additive, rather than substitutive.
Sustaining Versus Disruptive Innovation
Competitive market theory suggests that incumbents and disruptors innovate differently. Incumbents seek sustaining innovations capable of perpetuating their dominance, whereas disruptors pursue innovations capable of redefining traditional business models.
In health care, while disruptive innovations hold the potential to reduce overall health expenditures, often they run counter to the capabilities of market incumbents. For example, telemedicine can deliver care asynchronously, remotely, and virtually, but large-scale brick-and-mortar medical facilities invest enormous capital in the delivery of synchronous, in-house, in-person care (incentivized by facility fees).
The connection between incumbent business models and the innovation pipeline is particularly relevant given that 58 percent of total funding for biomedical research in the US is now derived from private entities, compared with 46 percent a decade prior. It follows that the growing influence of eminent private organizations may favor innovations supporting their market dominance—rather than innovations that are societally optimal.
Incentives And Repercussions Of High-Cost Innovation
Taken together, these observations suggest that innovation in health care is preferentially designed for revenue expansion rather than for cost reduction. While offering incremental improvements in patient outcomes, therefore creating theoretical value for society, these innovations rarely deliver incremental reductions in short- or long-term costs at the health system level.
For example, content-based, performance-enhancing, additive, sustaining innovations tend to add layers of complexity to the health care system—which in turn require additional administration to manage. The net result is employment growth in excess of outcome improvement, leading to productivity losses. This gap leads to continuously increasing overall expenditures in turn passed along to payers and consumers.
Nonetheless, high-cost innovations are incentivized across health care stakeholders (exhibit 2). From the supply side of innovation, for academic researchers, “breakthrough” and “groundbreaking” innovations constitute the basis for career advancement via funding and tenure. This is despite stakeholders’ frequent inability to generalize early successes to become cost-effective in the clinical setting. As previously discussed, the increasing influence of private entities in setting the medical research agenda is also likely to stimulate innovation benefitting single stakeholders rather than the system.
Source: Authors’ analysis adapted from Hofmann BM. Too much technology. BMJ. 2015 Feb 16.
From the demand side of innovation (providers and health systems), a combined allure (to provide “cutting-edge” patient care), imperative (to leave “no stone unturned” in patient care), and profit-motive (to amplify fee-for-service reimbursements) spur participation in a “technological arms-race.” The status quo thus remains as Clay Christensen has written: “Our major health care institutions…together overshoot the level of care actually needed or used by the vast majority of patients.”
Christensen’s observations have been validated during the COVID-19 pandemic, as treatment of the disease relies predominantly on century-old technology. By continually adopting innovation that routinely overshoots the needs of most patients, layer by layer, health care institutions are accruing costs that quickly become the burden of society writ large.
Recommendations To Reduce The Costs Of Health Care Innovation
Henry Aaron wrote in 2002 that “…the forces that have driven up costs are, if anything, intensifying. The staggering fecundity of biomedical research is increasing…[and] always raises expenditures.” With NHEs spiraling ever-higher, urgency to “bend the cost curve” is mounting. Yet, since much biomedical innovation targets the “flat of the [productivity] curve,” alternative forms of innovation are necessary.
The shortcomings in net productivity revealed by the COVID-19 pandemic highlight the urgent need for redesign of health care delivery in this country, and reevaluation of the innovation needed to support it. Specifically, efforts supporting process redesign are critical to promote cost-reducing, substitutive innovations that can inaugurate new and disruptive business models.
Process redesign rarely involves novel gizmos, so much as rejiggering the wiring of, and connections between, existing gadgets. It targets operational changes capable of streamlining workflows, rather than technical advancements that complicate them. As described above, precisely these sorts of “frugal innovations” have led to productivity improvements yielding lower costs in other high-technology industries, such as the computing industry.
Shrank and colleagues recently estimated that nearly one-third of NHEs—almost $1 trillion—were due to preventable waste. Four of the six categories of waste enumerated by the authors—failure in care delivery, failure in care coordination, low-value care, and administrative complexity—represent ripe targets for process innovation, accounting for $610 billion in waste annually, according to Shrank.
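The Shrank figures quoted above imply that the four process-amenable waste categories account for roughly three-fifths of estimated waste; a quick check:

```python
# Arithmetic behind the Shrank et al. figures quoted above:
# ~$1 trillion of NHEs attributed to preventable waste, of which the
# four categories amenable to process innovation total ~$610 billion.

total_waste_b = 1000        # total estimated waste, in $ billions
process_targets_b = 610     # failure in care delivery + care coordination
                            # + low-value care + administrative complexity

share = process_targets_b / total_waste_b
print(f"{share:.0%}")       # prints "61%"
```

In other words, well over half of the estimated waste sits in categories that process innovation, rather than new content, is positioned to address.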
Health systems adopting process redesign methods such as continuous improvement and value-based management have exhibited outcome enhancement and expense reduction simultaneously. Internal processes addressed have included supply chain reconfiguration, operational redesign, outlier reconciliation, and resource standardization.
Despite the potential of process innovation, focus on this area (often bundled into “health services” or “quality improvement” research) occupies only a minute fraction of wallet- or mind-share in the biomedical research landscape, accounting for 0.3 percent of research dollars in medicine. This may be due to a variety of barriers beyond minimal funding. One set of barriers is academic, relating to negative perceptions around rigor and a lack of outlets in which to publish quality improvement research. To achieve health care cost containment over the long term, this dimension of innovation must be destigmatized relative to more traditional manners of innovation by the funders and institutions determining the conditions of the research ecosystem.
Another set of barriers is financial: Innovations yielding cost reduction are less “reimbursable” than are innovations fashioned for revenue expansion. This is especially the case in a fee-for-service system where reimbursement is tethered to cost, which creates perverse incentives for health care institutions to overlook cost increases. However, institutions investing in low-cost innovation will be well-positioned in a rapidly approaching future of value-based care—in which the solvency of health care institutions will rely upon their ability to provide economically efficient care.
Innovating For Cost Control Necessitates Frugality Over Novelty
Restraining US NHEs represents a critical step toward health promotion. Innovation for innovation’s sake—that is, content-based, incrementally effective, additive, and sustaining innovation—is unlikely to constrain continually expanding NHEs.
In contrast, process innovation offers opportunities to reduce costs while maintaining high standards of patient care. As COVID-19 stress-tests health care systems across the world, the importance of cost control and productivity amplification for patient care has become apparent.
As such, frugality, rather than novelty, may hold the key to health care cost containment. Redesigning the innovation agenda to stem the tide of ever-rising NHEs is an essential strategy to promote widespread access to care—as well as high-value preventive care—in this country. In the words of investors across Silicon Valley: cost-reducing innovation is no longer a “nice-to-have,” but a “need-to-have” for the future of health and overall well-being in this country.
So Do We Need A New Way of Disseminating Scientific Information? Can Curation Help?
We had high hopes for Science 2.0, in particular the smashing of data and knowledge silos. However, the digital age, along with 2.0 platforms, seemed somehow to exacerbate this siloing. We are still critically short on analysis!
Old Science 1.0 is still the backbone of all scientific discourse, built on the massive body of experimental and review literature. That literature, however, was in analog format; we have since moved to more accessible, digital, open-access formats for both publications and raw data. Science 1.0 had an organizing structure: the scientific method for organizing data and literature, with libraries (using indexing systems such as the Dewey Decimal Classification) as the indexers. Science 2.0 made science more accessible and easier to search thanks to these newer digital formats, but its organization relied largely on an army of volunteers who had little incentive to co-curate and organize the findings and the massive literature.
The Internet and the Web are rapidly adopting a new “Web 3.0” format, with decentralized networks, enhanced virtual experiences, and greater interconnection between people. Here we begin the discussion of the move from Science 2.0, in which dissemination of scientific findings was revolutionized by piggybacking on Web 2.0 social media, to a Science 3.0 format. What will it involve, and which paradigms will be turned upside down?