
Relevance of Twitter.com's Forthcoming Payment System for Scientific Content Promotion and Monetization

Highlighted Text in BLUE, BLACK, GREEN, RED by Aviva Lev-Ari, PhD, RN

Gian M. Volpicelli

SENIOR WRITER

Gian M. Volpicelli is a senior writer at WIRED, where he covers cryptocurrency, decentralization, politics, and technology regulation. He received a master’s degree in journalism from City University of London after studying politics and international relations in Rome. He lives in London.

SOURCE

https://www.wired.com/story/twitter-crypto-strategy/

 

BUSINESS

APR 5, 2022 7:00 AM

What Twitter Is Really Planning for Crypto

The duo behind Twitter Crypto say NFT profile pics and crypto tipping are just the beginning.

 

YOU MIGHT HAVE heard of crypto Twitter, the corner of the social network where accounts have Bored Apes as profile pictures, posts are rife with talk of tokens, blockchains, and buying the Bitcoin dip, and Elon Musk is venerated.

Then again, you might have heard of Twitter Crypto, the business unit devoted to developing the social network’s strategy for cryptocurrency, blockchains, and that grab-bag of decentralized technologies falling under the rubric of Web3. The team’s unveiling came in November 2021 via a tweet from the newly hired project lead, Tess Rinearson, a Berlin-based American computer scientist whose career includes stints at blockchain companies such as Tendermint and Interchain.

Rinearson joined Twitter at a crucial moment. Jack Dorsey, the vociferously pro-Bitcoin company CEO, would leave a few weeks later, to be replaced by CTO Parag Agrawal. Agrawal had played an instrumental role in Bluesky, a Twitter-backed project to create a protocol—possibly with blockchain components—to build decentralized social networks.

As crypto went mainstream globally and crypto Twitter burgeoned, the company tried to dominate the space. Under the stewardship of product manager Esther Crawford, in September 2021 Twitter introduced a “tipping” feature that helps creators on Twitter to receive Bitcoin contributions through Lightning—a network for fast Bitcoin payments. In January, Twitter allowed subscribers of Twitter’s premium service, Twitter Blue, to flaunt their NFTs as hexagonal profile pictures, through a partnership with NFT marketplace OpenSea.

Twitter Crypto is just getting started. While Rinearson works with people all across the company, her team is still under 10 people, although more hires are in the pipeline, judging from recent job postings. So it’s worth asking what is next. I caught up over a video call with Rinearson and Crawford to talk about where Twitter Crypto is headed. 

The conversation has been edited for clarity and brevity.

WIRED: Let’s start with the basics. Why does Twitter have a crypto unit?

Tess Rinearson: We really see crypto—and what we’re now calling Web3—as something that could be this incredibly powerful tool that would unlock a lot for our users. The whole crypto world is like an internet of money, an internet of value that our users can potentially tap into to create new ways of owning their content, monetizing their content, owning their own identity, and even relating to each other.

One of my goals is to build Twitter’s crypto unit in such a way that it caters to communities that go beyond just that core crypto community. I love the crypto Twitter space, obviously—I’m a very proud member of the crypto community. And at the same time, I recognize that people who are really deep in the crypto space may not relate to concepts, like for instance blockchain’s immutability, in the same way that someone who’s less intensely involved might feel about those things.

So a lot of what we try to think about is, what can we learn from this group of people who are super engaged and really, really, creative? And then, how can we translate some of that stuff into a format or a mechanism or a product that’s a little bit more accessible to people who don’t have that background?

How are you learning from crypto Twitter? Do you just follow a lot of accounts, do you actually talk to them? How does that learning experience play out?

Esther Crawford: It’s a combination. We have an amazing research team that sets up panel interviews and surveys. But we’re also embedded in the community itself and follow a bunch of accounts, sit on Twitter spaces, go to conferences and events, engage with customers in that way. That’s the way the research piece of it works. But we also encounter it as end users: Twitter is the discovery platform today for all things crypto.

One of the things we do differently at Twitter is we build out in the open. And so this means having dialog with customers in real time—designers will take something that is very early-stage and post it as a tweet and then get real-time feedback. They’ll hop into spaces with product managers and engineering managers, talk about it live with real customers, and then incorporate that feedback into the designs and what ultimately we end up launching.

Rinearson: One of the things I wanted to make sure of before I came to Twitter was to know that we would be able to build features in the open and solicit feedback and show rough drafts. And so this is something I asked Parag Agrawal, who’s now the CEO, and was the person who hired me. Pretty early in the job interview process, I said this was going to be really important, and he said, “If you think it’s important to the success of this work, great, do it—thumbs up.” He also shares that openness.

As you said, Tess, you come from crypto. When you were out there, what did you think Twitter was getting right? What did you think Twitter was getting wrong?

Rinearson: I had been a Twitter power user for a really long time. The thing that I saw was a lot of aesthetic alignment between how Twitter exists in the world and the way that crypto exists in the world. Twitter has decentralized user experiences in its DNA. And, this is a bit cheesy, but people use Twitter sometimes in ways that they use a public blockchain, as a public database where everything’s time stamped and people can agree on what happened.

And for most people it’s open, it is there for public conversation. And then obviously it was also the place—a place—where the crypto community really found its footing. I think it’s been a place where an enormous amount of discovery happens, and education and learning for the whole community. I joined when there were some murmurings about Twitter starting to do crypto stuff, mostly stuff Esther had led actually, and I was excited to see where it was going. And then Twitter’s investment in Bluesky also gave me a lot of confidence.

Let’s talk about the two main things you have delivered so far: The crypto tipping feature and NFT pictures. Can you give me just a potted history of how each came about and why?

Crawford: Those are our first set of early explorations, and the reason why we started there was we really wanted to make sure that what we built benefited creators, their audiences, and then all the conversations that are happening on Twitter. For creators in particular, we know that they rely on platforms like Twitter to monetize and earn a living, and not all people are able to use traditional currencies. Not everybody has a traditional banking account setup.

And so we wanted to provide an opportunity for a borderless payment solution, and that’s why we decided to go ahead and use Bitcoin Lightning as our first big integration. One of the reasons we chose Bitcoin Lightning was also because of the low transaction fees. And we have Bitcoin and Ethereum addresses that you can also put in there [on your Twitter “tipping jar”]. We noticed that people were actually adding information about their crypto wallet addresses in their profiles. And so we wanted to make a more seamless experience, so that people could just tip through the platform, so that it felt native.

With NFT profile pictures, the way that came about was, again, looking at user behavior. People were adding NFTs that they owned as avatars, but you didn’t really know whether they owned those NFTs or not. So we decided to go ahead and build out that feature so that one could actually prove ownership.

That’s similar to how other things developed on Twitter, right? The hashtag, or even the retweet, were initially just things users invented—by adding the # sign, or by pasting other users’ tweets—and then Twitter made that a feature.

Crawford: Yeah, exactly. Many of the best ideas come from watching user behavior on the platform, and then we just productize that.

Rinearson: Sometimes I’ve heard people call that the “help wanted signs,” and like, keeping an eye out for the “help wanted signs” across the platform. The NFT profile picture was a clear example of that.

How do all these things—these two things and possibly other crypto features coming further down the line—really help Twitter’s bottom line?

Crawford: With creator monetization our goal was to help creators get paid, not Twitter. But Twitter takes a really small cut of earnings. For more successful creators, we take a larger percentage. The way we think about this is, it is part of our revenue diversification.

Twitter today is a wholly ad-based business. In the future we imagine Twitter making money from a variety of different product areas. So Twitter Blue is one of those products—you can pay $2.99 a month and you get additional features, such as the NFT profile pictures. We really think that revenue diversification sits across a variety of areas, and creator monetization is one really small component of that.

As you said, these are just early experiments. Where is Twitter Crypto going next? What’s your vision for crypto technology’s role within Twitter?

Rinearson: The real trick here is to find the right parts of Twitter to decentralize, and to not try to decentralize everything at once—or, you know, make every user suddenly responsible for taking care of some private keys or something like that.

We have to find the right ways to open up some access to a decentralized economic layer, or give people ways that they can take their identity with them, without relying on a single centralized service.

We’re really early in these explorations, and even looking at things like Bitcoin tipping or the NFT profile pictures—we view those features as experiments themselves in a lot of ways and learning experiences. We’re learning things about how our users relate to these concepts, what they understand about them, what they find confusing, and what’s most useful to them. We really want to try to use this technology to bring utility to people and you know, not just like, sprinkle a little blockchain on it for the sake of it. So creator monetization is an area that I’m really excited about because I think there’s a really clear path forward. But again, we’re looking beyond that: We’re also looking at using crypto technology in fields like [digital] identity and [digital] ownership space and also figuring out how we can better serve crypto communities on the platform.

Are you going to put Twitter verified users’ blue ticks on a blockchain, then?

[Laughter]

No?

[More laughter]

OK, moving on. How does the kind of work you do dovetail with Bluesky’s plan to create a protocol for a decentralized social media platform? Is there any synergy there?

Rinearson: I have known Jay [Graber], the Bluesky lead, for a long time, and she and I are in pretty close contact. We check in with each other regularly and talk a lot about problems we might have in common that we’ll both need to solve. There’s an overlap looking at things in the identity area, but at the end of the day, it’s a separate project. She’s pretty focused on hiring her team, and they’re very focused on building a prototype of a protocol. That is different from what Esther and I are thinking about, which is like: There are all these blockchain protocols that exist, and we need to figure out how to make them useful and accessible for real people.

And when I say “real people,” I mean that in a sort of tongue-in-cheek contrast to hardcore crypto nerds like me. Jay is thinking much more about building for people who are creating decentralized networks. That is a very different focus area. Beyond that, I would just say it’s too early to say what Bluesky will mean for Twitter as a product. We are in touch, we have aligned values. But at the end of the day—separate teams.

Why is a centralized Silicon Valley company like Twitter the right place to start bringing more decentralization to internet users? Don’t we just have to start from scratch and build a new platform that is already decentralized?

Rinearson: I started in crypto in 2015, and I have a very vivid memory from those years of watching some of my coworkers—crypto engineers—trying to figure out how to secure some of their Bitcoin before one of the Bitcoin forks [in which the Bitcoin blockchain split, creating new currencies], and they were panicking and freaking out. I thought there was no way that a normal person would be able to handle this in a way that would be safe. And so I was a little bit disillusioned with crypto, especially from a consumer perspective.

And then last year, I started seeing more interest from people whom I’d known for a long time and who weren’t crypto people. They were just starting to perk their heads up and take notice and start creating NFTs or talking about DAOs. And I thought that was interesting, that we were coming around a corner, and it might be time to start thinking about what this could mean for people beyond that hardcore crypto group.

And that was when Twitter reached out. You know, I don’t think that just any centralized platform would be able to bring crypto to the masses, so to speak. But I think Twitter has the right stuff. I think you have to meet people where they are with new technologies: find ways to onboard them, bring them along, show them what this might mean for them, and make things accessible. And it’s really, really hard to do that with just a protocol. You need to have some kind of community, you need to have some kind of user base, you need to have some kind of platform. And Twitter’s just right there.

I don’t think I would say that a centralized platform is definitely the way to “bring crypto to the masses.” I do think that Twitter is the way to do it.

But why do the masses need crypto right now?

Rinearson: I don’t know that anyone needs crypto, and our goal is not to get everyone into crypto. Let’s be clear about that. But I do think that crypto is a potentially very powerful tool for people. And so I think what we are trying to do is show people how powerful it is and unlock those possibilities. It’s also possible that we create some products and features, where people actually don’t even really know what’s happening under the hood.

Like maybe we’re using crypto as a payment rail or again as an identity layer—users don’t necessarily need to know all of those implementation details. And that’s actually something we come back to a lot: What level of abstraction are we talking about with users? What story are we telling them about what’s happening under the hood? But yeah, I would just like to reiterate that the goal is not to just shovel everyone into crypto. We want to provide value for people.

Do you think there is a case for Twitter to launch its own cryptocurrency— a Twittercoin?

Rinearson: I think there’s a case for a lot of things—honestly, there’s a case for a lot of things. We’re trying to think really, really broadly about it.

Crawford: We’re actively exploring a lot of things. It’s not something we would be making an announcement about.

Rinearson: I think it is really important to stress that when you say “Twittercoin” you probably have a slightly different idea of what it is than we do. And are we exploring those ideas? Yes, we want to think about all of them. Do we have road maps for them? No. But are we trying to think about things really creatively and be really, really open-minded? Yes. We have this new economic technology that we think could unlock a lot of things for people. And we want to go down a bunch of rabbit holes and see what we come up with.


Data Science: Step by Step – A Resource for LPBI Group One-Year Internship in IT, IS, DS

Reporter: Aviva Lev-Ari, PhD, RN

9 Free Harvard Courses for Learning Data Science

In this article, I will list 9 free Harvard courses that you can take to learn data science from scratch. Feel free to skip any of these courses if you already possess knowledge of that subject.

Step 1: Programming

The first step you should take when learning data science is to learn to code. You can choose to do this with your choice of programming language — ideally Python or R.

If you’d like to learn R, Harvard offers an introductory R course created specifically for data science learners, called Data Science: R Basics.

This program will take you through R concepts like variables, data types, vector arithmetic, and indexing. You will also learn to wrangle data with libraries like dplyr and create plots to visualize data.

If you prefer Python, you can choose to take CS50’s Introduction to Programming with Python offered for free by Harvard. In this course, you will learn concepts like functions, arguments, variables, data types, conditional statements, loops, objects, methods, and more.
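As a flavor of those concepts, here is a minimal sketch in Python (an invented illustration, not course material) covering functions, arguments, conditionals, and loops:

```python
# Illustrative only: functions, arguments, conditionals, and loops.

def grade(score):
    """Map a numeric score to a letter grade."""
    if score >= 90:
        return "A"
    elif score >= 80:
        return "B"
    else:
        return "C or below"

scores = [95, 82, 71]
for s in scores:
    print(s, "->", grade(s))
```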

Both programs above are self-paced. However, the Python course is more detailed than the R program, and requires a longer time commitment to complete. Also, the rest of the courses in this roadmap are taught in R, so it might be worth learning R to be able to follow along easily.

Step 2: Data Visualization

Visualization is one of the most powerful techniques with which you can translate your findings in data to another person.

With Harvard’s Data Visualization program, you will learn to build visualizations using the ggplot2 library in R, along with the principles of communicating data-driven insights.
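The course itself teaches ggplot2 in R. Purely as a sketch of the underlying idea, mapping data columns to visual elements and labeling them to communicate an insight, here is a rough Python analogue using matplotlib with made-up data:

```python
import matplotlib.pyplot as plt

# Hypothetical data: hours studied vs. exam score
hours = [1, 2, 3, 4, 5, 6]
scores = [52, 60, 68, 74, 81, 88]

plt.scatter(hours, scores)      # map the data to points
plt.xlabel("Hours studied")     # label axes so the chart communicates on its own
plt.ylabel("Exam score")
plt.title("Scores rise with study time")
plt.show()
```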

Step 3: Probability

In this course, you will learn essential probability concepts that are fundamental to conducting statistical tests on data. The topics taught include random variables, independence, Monte Carlo simulations, expected values, standard errors, and the Central Limit Theorem.

The concepts above will be introduced with the help of a case study, which means that you will be able to apply everything you learned to an actual real-world dataset.
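For instance, the Central Limit Theorem can be demonstrated with a few lines of Monte Carlo simulation. The course works in R; this is a small sketch in Python with invented parameters:

```python
import random
import statistics

# Draw 10,000 sample means, each from 30 uniform(0, 1) draws.
# The CLT predicts the sample means are approximately normal around 0.5.
sample_means = [
    statistics.mean(random.uniform(0, 1) for _ in range(30))
    for _ in range(10_000)
]

print("mean of sample means:", round(statistics.mean(sample_means), 3))  # ~0.5
print("standard error:", round(statistics.stdev(sample_means), 3))       # ~0.053
```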

Step 4: Statistics

After learning probability, you can take this course to learn the fundamentals of statistical inference and modeling.
This program will teach you to define population estimates and margins of error, introduce you to Bayesian statistics, and provide you with the fundamentals of predictive modeling.
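As a small worked illustration of a population estimate with a margin of error (a hypothetical poll, computed in Python rather than the course's R):

```python
import math

# Hypothetical poll: 540 of 1,000 respondents favor a proposal.
n, successes = 1000, 540
p_hat = successes / n

# 95% margin of error for a proportion (normal approximation)
se = math.sqrt(p_hat * (1 - p_hat) / n)
margin = 1.96 * se

print(f"estimate: {p_hat:.3f} +/- {margin:.3f}")  # about 0.540 +/- 0.031
```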

Step 5: Productivity Tools (Optional)

I’ve included this project management course as optional since it isn’t directly related to learning data science. Rather, you will be taught to use Unix/Linux for file management and GitHub for version control, and to create reports in R.

The ability to do the above will save you a lot of time and help you better manage end-to-end data science projects.

Step 6: Data Pre-Processing

The next course in this list is called Data Wrangling, and will teach you to prepare data and convert it into a format that is easily digestible by machine learning models.

You will learn to import data into R, tidy data, process string data, parse HTML, work with date-time objects, and mine text.

As a data scientist, you often need to extract data that is publicly available on the Internet in the form of a PDF document, HTML webpage, or a Tweet. You will not always be presented with clean, formatted data in a CSV file or Excel sheet.

By the end of this course, you will learn to wrangle and clean data to come up with critical insights from it.
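As a taste of what such wrangling looks like in practice, here is a short hypothetical example in Python's pandas (the course itself works in R); it tidies string data and parses inconsistent date formats:

```python
import pandas as pd

# Hypothetical messy input: untidy strings and mixed date formats
raw = pd.DataFrame({
    "name": ["  Alice ", "BOB", "carol"],
    "signup": ["2022-01-05", "2022/02/10", "March 3, 2022"],
})

raw["name"] = raw["name"].str.strip().str.title()    # clean up string data
raw["signup"] = raw["signup"].apply(pd.to_datetime)  # parse each date string

print(raw)
```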

Step 7: Linear Regression

Linear regression is a machine learning technique that is used to model a linear relationship between two or more variables. It can also be used to identify and adjust the effect of confounding variables.

This course will teach you the theory behind linear regression models, how to examine the relationship between two variables, and how confounding variables can be detected and removed before building a machine learning algorithm.
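To make the confounding point concrete, here is a small simulated sketch in Python (illustrative only; the course works in R). Regressing the outcome on the exposure alone gives a biased coefficient, while including the confounder recovers the true effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated data: the outcome depends on the exposure AND a confounder,
# and the confounder also drives the exposure.
confounder = rng.normal(size=n)
exposure = 0.8 * confounder + rng.normal(size=n)
outcome = 2.0 * exposure + 3.0 * confounder + rng.normal(size=n)

X_naive = np.column_stack([np.ones(n), exposure])                 # exposure only
X_adjusted = np.column_stack([np.ones(n), exposure, confounder])  # + confounder

beta_naive, *_ = np.linalg.lstsq(X_naive, outcome, rcond=None)
beta_adjusted, *_ = np.linalg.lstsq(X_adjusted, outcome, rcond=None)

print("naive exposure effect:   ", round(beta_naive[1], 2))     # biased upward
print("adjusted exposure effect:", round(beta_adjusted[1], 2))  # close to 2.0
```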

Step 8: Machine Learning

Finally, the course you’ve probably been waiting for! Harvard’s machine learning program will teach you the basics of machine learning, techniques to mitigate overfitting, supervised and unsupervised modelling approaches, and recommendation systems.
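As a minimal illustration of spotting and mitigating overfitting, here is a sketch in Python with scikit-learn and synthetic data (the course itself works in R): a held-out test set exposes the gap between training and test error, and ridge regularization narrows it:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic data: only the first of 50 features matters, so an
# unregularized fit happily memorizes noise.
X = rng.normal(size=(100, 50))
y = 2.0 * X[:, 0] + rng.normal(scale=2.0, size=100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for alpha in (0.001, 10.0):  # tiny alpha is close to ordinary least squares
    model = Ridge(alpha=alpha).fit(X_tr, y_tr)
    print(f"alpha={alpha}: "
          f"train MSE={mean_squared_error(y_tr, model.predict(X_tr)):.2f}, "
          f"test MSE={mean_squared_error(y_te, model.predict(X_te)):.2f}")
```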

Step 9: Capstone Project

After completing all the above courses, you can take Harvard’s data science capstone project, where your skills in data visualization, probability, statistics, data wrangling, data organization, regression, and machine learning will be assessed.

With this final project, you will get the opportunity to put together all the knowledge learnt from the above courses and gain the ability to complete a hands-on data science project from scratch.

Note: All the courses above are available on an online learning platform from edX and can be audited for free. If you want a course certificate, however, you will have to pay for one.

Building a data science learning roadmap with free courses offered by MIT.

8 Free MIT Courses to Learn Data Science Online

I enrolled in an undergraduate computer science program and decided to major in data science. I spent over $25K in tuition fees over the span of three years, only to graduate and realize that I wasn’t equipped with the skills necessary to land a job in the field.

I barely knew how to code, and was unclear about the most basic machine learning concepts.

I took some time out to try and learn data science myself — with the help of YouTube videos, online courses, and tutorials. I realized that all of this knowledge was publicly available on the Internet and could be accessed for free.

It came as a surprise that even Ivy League universities started making many of their courses accessible to students worldwide, for little to no charge. This meant that people like me could learn these skills from some of the best institutions in the world, instead of spending thousands of dollars on a subpar degree program.

In this article, I will provide you with a data science roadmap I created using only freely available MIT online courses.

Step 1: Learn to code

I highly recommend learning a programming language before going deep into the math and theory behind data science models. Once you learn to code, you will be able to work with real-world datasets and get a feel of how predictive algorithms function.

MIT Open Courseware offers a beginner-friendly Python program called Introduction to Computer Science and Programming.

This course is designed to help people with no prior coding experience to write programs to tackle useful problems.

Step 2: Statistics

Statistics is at the core of every data science workflow — it is required when building a predictive model, analyzing trends in large amounts of data, or selecting useful features to feed into your model.

MIT Open Courseware offers a beginner-friendly course called Introduction to Probability and Statistics. After taking this course, you will learn the basic principles of statistical inference and probability. Some concepts covered include conditional probability, Bayes theorem, covariance, central limit theorem, resampling, and linear regression.
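As a quick worked example of the kind of Bayes' theorem computation covered here (all numbers are invented for illustration):

```python
# Bayes' theorem for a diagnostic test:
# 1% prevalence, 95% sensitivity, 90% specificity.
prevalence = 0.01
sensitivity = 0.95        # P(positive | disease)
false_positive = 0.10     # 1 - specificity

p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive) = {p_disease_given_positive:.3f}")  # about 0.088
```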

This course will also walk you through statistical analysis using the R programming language, which is useful as it adds to your tool stack as a data scientist.

Another useful program offered by MIT for free is called Statistical Thinking and Data Analysis. This is another elementary course in the subject that will take you through different data analysis techniques in Excel, R, and Matlab.

You will learn about data collection, analysis, different types of sampling distributions, statistical inference, linear regression, multiple linear regression, and nonparametric statistical methods.

Step 3: Foundational Math Skills

Calculus and linear algebra are two other branches of math that are used in the field of machine learning. Taking a course or two in these subjects will give you a different perspective on how predictive models function and the workings of the underlying algorithms.

To learn calculus, you can take Single Variable Calculus offered by MIT for free, followed by Multivariable Calculus.

Then, you can take this Linear Algebra class by Prof. Gilbert Strang to get a strong grasp of the subject.
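As a hint of why linear algebra matters for data science, here is a tiny NumPy sketch of two operations that sit at the heart of many machine learning algorithms, solving a linear system and computing eigenvalues (the matrix and vector are arbitrary examples):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = np.linalg.solve(A, b)            # solve Ax = b
eigenvalues = np.linalg.eigvals(A)   # eigenvalues of A

print("solution x:", x)              # approximately [0.091, 0.636]
print("eigenvalues:", eigenvalues)
```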

All of the above courses are offered by MIT Open Courseware, and are paired with lecture notes, problem sets, exam questions, and solutions.

Step 4: Machine Learning

Finally, you can use the knowledge gained in the courses above to take MIT’s Introduction to Machine Learning course. This program will walk you through the implementation of predictive models in Python.

The core focus of this course is on supervised and reinforcement learning problems, and you will be taught concepts such as generalization and how overfitting can be mitigated. Apart from just working with structured datasets, you will also learn to process image and sequential data.

MIT’s machine learning program cites three prerequisites — Python, linear algebra, and calculus — which is why it is advisable to take the courses above before starting this one.

Are These Courses Beginner-Friendly?

Even if you have no prior knowledge of programming, statistics, or mathematics, you can take all the courses listed above.

MIT has designed these programs to take you through the subject from scratch. However, unlike many MOOCs out there, the pace builds up pretty quickly and the courses cover material in considerable depth.

Due to this, it is advisable to do all the exercises that come with the lectures and work through all the reading material provided.

SOURCE

Natassha Selvaraj is a self-taught data scientist with a passion for writing. You can connect with her on LinkedIn.

https://www.kdnuggets.com/2022/03/8-free-mit-courses-learn-data-science-online.html


Will Web 3.0 Do Away With Science 2.0? Is Science Falling Behind?

Curator: Stephen J. Williams, Ph.D.

UPDATED 4/06/2022

A while back (actually many moons ago) I had put up two posts on this site:

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

Twitter is Becoming a Powerful Tool in Science and Medicine

Each of these posts was about the importance of scientific curation of findings within the realm of social media and Web 2.0; a sub-environment known throughout the scientific communities as Science 2.0, in which expert networks collaborated to produce a massive new corpus of knowledge by sharing their views and insights on peer-reviewed scientific findings. And through this new media, the process of curation would itself generate new ideas and new directions for research and discovery.

The platform sort of looked like the image below:

 

This system sat above a platform of the original Science 1.0, made up of all the scientific journals, books, and traditional literature:

In the old Science 1.0 format, scientific dissemination took the form of hard-print journals, and library subscriptions were mandatory (and eventually expensive). Open Access has tried to ameliorate the expense problem.

Previous image source: PeerJ.com

To index the massive and voluminous research and papers beyond the old Dewey Decimal system, a process of curation was mandatory. Dissemination through the new social media was a natural fit; however, the cost had to be spread out among numerous players. Journals, faced with the high costs of subscriptions, found that their only way to access this new media as an outlet was to become Open Access, a movement first sparked by journals like PLOS and PeerJ but then begrudgingly adopted throughout the landscape. But with any movement or new adoption one gets the Good, the Bad, and the Ugly (as described in my Clive Thompson article cited above). The downsides of Open Access journals were:

  1. costs are still assumed by the individual researcher, not by the journals
  2. the rise of numerous predatory journals

 

Even PeerJ, in their column celebrating an anniversary of a year’s worth of Open Access success stories, lamented the key issues still facing Open Access in practice

  • which included the cost and the rise of predatory journals.

In essence, Open Access and Science 2.0 sprang into full force BEFORE anyone thought of a way to defray the costs.

 

Can Web 3.0 Finally Offer a Way to Right the Issues Facing High Costs of Scientific Publishing?

What is Web 3.0?

From Wikipedia: https://en.wikipedia.org/wiki/Web3

Web 1.0 and Web 2.0 refer to eras in the history of the Internet as it evolved through various technologies and formats. Web 1.0 refers roughly to the period from 1991 to 2004, where most websites were static webpages, and the vast majority of users were consumers, not producers, of content.[6][7] Web 2.0 is based around the idea of “the web as platform”,[8] and centers on user-created content uploaded to social-networking services, blogs, and wikis, among other services.[9] Web 2.0 is generally considered to have begun around 2004, and continues to the current day.[8][10][4]

Terminology

The term “Web3”, specifically “Web 3.0”, was coined by Ethereum co-founder Gavin Wood in 2014.[1] In 2020 and 2021, the idea of Web3 gained popularity. Particular interest spiked towards the end of 2021, largely due to interest from cryptocurrency enthusiasts and investments from high-profile technologists and companies.[4][5] Executives from venture capital firm Andreessen Horowitz travelled to Washington, D.C. in October 2021 to lobby for the idea as a potential solution to questions about Internet regulation with which policymakers have been grappling.[11]

Web3 is distinct from Tim Berners-Lee’s 1999 concept for a semantic web, which has also been called “Web 3.0”.[12] Some writers referring to the decentralized concept usually known as “Web3” have used the terminology “Web 3.0”, leading to some confusion between the two concepts.[2][3] Furthermore, some visions of Web3 also incorporate ideas relating to the semantic web.[13][14]

Concept

Web3 revolves around the idea of decentralization, which proponents often contrast with Web 2.0, wherein large amounts of the web’s data and content are centralized in the fairly small group of companies often referred to as Big Tech.[4]

Specific visions for Web3 differ, but all are heavily based in blockchain technologies, such as various cryptocurrencies and non-fungible tokens (NFTs).[4] Bloomberg described Web3 as an idea that “would build financial assets, in the form of tokens, into the inner workings of almost anything you do online”.[15] Some visions are based around the concepts of decentralized autonomous organizations (DAOs).[16] Decentralized finance (DeFi) is another key concept; in it, users exchange currency without bank or government involvement.[4] Self-sovereign identity allows users to identify themselves without relying on an authentication system such as OAuth, in which a trusted party has to be reached in order to assess identity.[17]

Reception

Technologists and journalists have described Web3 as a possible solution to concerns about the over-centralization of the web in a few “Big Tech” companies.[4][11] Some have expressed the notion that Web3 could improve data security, scalability, and privacy beyond what is currently possible with Web 2.0 platforms.[14] Bloomberg states that sceptics say the idea “is a long way from proving its use beyond niche applications, many of them tools aimed at crypto traders”.[15] The New York Times reported that several investors are betting $27 billion that Web3 “is the future of the internet”.[18][19]

Some companies, including Reddit and Discord, have explored incorporating Web3 technologies into their platforms in late 2021.[4][20] After heavy user backlash, Discord later announced they had no plans to integrate such technologies.[21] The company’s CEO, Jason Citron, tweeted a screenshot suggesting it might be exploring integrating Web3 into their platform. This led some to cancel their paid subscriptions over their distaste for NFTs, and others expressed concerns that such a change might increase the amount of scams and spam they had already experienced on crypto-related Discord servers.[20] Two days later, Citron tweeted that the company had no plans to integrate Web3 technologies into their platform, and said that it was an internal-only concept that had been developed in a company-wide hackathon.[21]

Some legal scholars quoted by The Conversation have expressed concerns over the difficulty of regulating a decentralized web, which they reported might make it more difficult to prevent cybercrime, online harassment, hate speech, and the dissemination of child abuse images.[13] But the news website also states that, “[decentralized web] represents the cyber-libertarian views and hopes of the past that the internet can empower ordinary people by breaking down existing power structures.” Some other critics of Web3 see the concept as a part of a cryptocurrency bubble, or as an extension of blockchain-based trends that they see as overhyped or harmful, particularly NFTs.[20] Some critics have raised concerns about the environmental impact of cryptocurrencies and NFTs. Others have expressed beliefs that Web3 and the associated technologies are a pyramid scheme.[5]

Kevin Werbach, author of The Blockchain and the New Architecture of Trust,[22] said that “many so-called ‘web3’ solutions are not as decentralized as they seem, while others have yet to show they are scalable, secure and accessible enough for the mass market”, adding that this “may change, but it’s not a given that all these limitations will be overcome”.[23]

David Gerard, author of Attack of the 50 Foot Blockchain,[24] told The Register that “web3 is a marketing buzzword with no technical meaning. It’s a melange of cryptocurrencies, smart contracts with nigh-magical abilities, and NFTs just because they think they can sell some monkeys to morons”.[25]

Below is an article from MarketWatch.com Distributed Ledger series about the different forms and cryptocurrencies involved

From Marketwatch: https://www.marketwatch.com/story/bitcoin-is-so-2021-heres-why-some-institutions-are-set-to-bypass-the-no-1-crypto-and-invest-in-ethereum-other-blockchains-next-year-11639690654?mod=home-page

by Frances Yue, Editor of Distributed Ledger, Marketwatch.com

Clayton Gardner, co-CEO of crypto investment management firm Titan, told Distributed Ledger that as crypto embraces broader adoption, he expects more institutions to bypass bitcoin in 2022 and invest in other blockchains, such as Ethereum, Avalanche, and Terra, which all boast smart-contract features.

Bitcoin traditionally did not support complex smart contracts, which are computer programs stored on blockchains, though a major upgrade in November might have unlocked more potential.

“Bitcoin was originally seen as a macro speculative asset by many funds and for many it still is,” Gardner said. “If anything solidifies its use case, it’s a store of value. It’s not really used as originally intended, perhaps from a medium of exchange perspective.”

For institutions that are looking for blockchains that can “produce utility and some intrinsic value over time,” they might consider some other smart contract blockchains that have been driving the growth of decentralized finance and web 3.0, the third generation of the Internet, according to Gardner. 

“Bitcoin is still one of the most secure blockchains, but I think layer-one, layer-two blockchains beyond Bitcoin will handle the majority of transactions and activities, from NFTs (nonfungible tokens) to DeFi,” Gardner said. “So I think institutions see that and, insofar as they want to put capital to work in the coming months, I think that could be where they just pump the capital.”

Decentralized social media? 

The price of Decentralized Social, or DeSo, a cryptocurrency powering a blockchain that supports decentralized social media applications, surged roughly 74% to about $164 from $94 after DeSo was listed on Coinbase Pro on Monday, before it fell back to about $95, according to CoinGecko.

In the eyes of Nader Al-Naji, head of the DeSo foundation, decentralized social media has the potential to be “a lot bigger” than decentralized finance.

“Today there are only a few companies that control most of what we see online,” Al-Naji told Distributed Ledger in an interview. But DeSo is “creating a lot of new ways for creators to make money,” Al-Naji said.

“If you find a creator when they’re small, or an influencer, you can invest in that, and then if they become bigger and more popular, you make money and they get capital early on to produce their creative work,” according to Al-Naji.

BitClout, the first application that was created by Al-Naji and his team on the DeSo blockchain, had initially drawn controversy, as some found that they had profiles on the platform without their consent, while the application’s users were buying and selling tokens representing their identities. Such tokens are called “creator coins.”

Al-Naji responded to the controversy by saying that DeSo now supports more than 200 social-media applications, including BitClout. “I think that if you don’t like those features, you now have the freedom to use any app you want. Some apps don’t have that functionality at all.”

 

But before I get to the “selling monkeys to morons” quote,

I want to talk about

THE GOOD, THE BAD, AND THE UGLY

THE GOOD

My foray into Science 2.0, and pondering what the movement to Science 3.0 would look like, led me to an article by Dr. Vladimir Teif, who studies gene regulation and the nucleosome, and who created a worldwide group of scientists who discuss matters of chromatin and gene regulation in a journal-club-type format.

For more information on this Fragile Nucleosome journal club see https://generegulation.org/fragile-nucleosome/.

Fragile Nucleosome is an international community of scientists interested in chromatin and gene regulation. Fragile Nucleosome is active in several spaces: one is the Discord server where several hundred scientists chat informally on scientific matters. You can join the Fragile Nucleosome Discord server. Another activity of the group is the organization of weekly virtual seminars on Zoom. Our webinars are usually conducted on Wednesdays 9am Pacific time (5pm UK, 6pm Central Europe). Most previous seminars have been recorded and can be viewed at our YouTube channel. The schedule of upcoming webinars is shown below. Our third activity is the organization of weekly journal clubs detailed at a separate page (Fragile Nucleosome Journal Club).

 

His lab site is at https://generegulation.org/, and he published a paper describing what he felt the #science2_0 to #science3_0 transition would look like (see his blog page on this at https://generegulation.org/open-science/).

He had coined this concept of Science 3.0 back in 2009. As Dr. Teif mentioned:

So essentially I first introduced this word Science 3.0 in 2009, and since then we did a lot to implement this in practice. The Twitter account @generegulation is also one of examples

 

This is curious, as we still have an ill-defined concept of what #science3_0 will look like, but it is a good read nonetheless.

His paper, entitled “Science 3.0: Corrections to the Science 2.0 paradigm,” is on the Cornell preprint server at https://arxiv.org/abs/1301.2522

 

Abstract

Science 3.0: Corrections to the Science 2.0 paradigm

The concept of Science 2.0 was introduced almost a decade ago to describe the new generation of online-based tools for researchers allowing easier data sharing, collaboration and publishing. Although technically sound, the concept still does not work as expected. Here we provide a systematic line of arguments to modify the concept of Science 2.0, making it more consistent with the spirit and traditions of science and Internet. Our first correction to the Science 2.0 paradigm concerns the open-access publication models charging fees to the authors. As discussed elsewhere, we show that the monopoly of such publishing models increases biases and inequalities in the representation of scientific ideas based on the author’s income. Our second correction concerns post-publication comments online, which are all essentially non-anonymous in the current Science 2.0 paradigm. We conclude that scientific post-publication discussions require special anonymization systems. We further analyze the reasons of the failure of the current post-publication peer-review models and suggest what needs to be changed in Science 3.0 to convert Internet into a large journal club. [bold face added]
In this paper it is important to note the transition from Science 1.0, which involved hard-copy journal publications usually accessible only in libraries, to a more digital 2.0 format in which data, papers, and ideas could be easily shared among networks of scientists.

As Dr. Teif states, the term “Science 2.0” had been coined back in 2009, and several influential journals, including Science, Nature, and Scientific American, endorsed this term and suggested scientists move their discussions online. However, even though at present there are thousands on Science 2.0 platforms, Dr. Teif notes that the number of scientists subscribed to many Science 2.0 networking groups, such as those on LinkedIn and ResearchGate, has seemingly saturated over the years, with few new members in recent times.
The consensus is that Science 2.0 networking is:

  1. good because it multiplies the efforts of many scientists, including experts, and adds to scientific discourse unavailable in a 1.0 format
  2. good because online data sharing assists in the process of discovery (evident with preprint servers, bio-curated databases, and GitHub projects)
  3. beneficial because open-access publishing provides free access to professional articles, and open access will be the only publishing format in the future (although this is highly debatable, as many journals are holding on to a type of “hybrid open access format” which is not truly open access)
  4. good because sharing unfinished works, critiques, or opinions creates visibility for scientists, who can receive credit for their expert commentary

There are a few concerns about Science 3.0 that Dr. Teif articulates:

A.  Science 3.0 Still Needs Peer Review

Peer review of scientific findings will always be an imperative in the dissemination of well-done, properly controlled scientific discovery. As Science 2.0 relies on an army of scientific volunteers, the peer review process also involves an army of scientific experts who give their time to safeguard the credibility of science by ensuring that findings are reliable and data is presented fairly and properly. It has been very evident, in this time of pandemic and the rapid increase in the volume of preprint-server papers on SARS-CoV-2, that peer review is critical. Many of these papers on such preprint servers were later either retracted or failed a stringent peer review process.

Many journals of the 1.0 format do not generally reward their peer reviewers other than the self-credit that researchers list on their curricula vitae. Some journals, like the MDPI journal family, do issue peer-reviewer credits, which can be used to defray the high publication costs of open access (one area that many scientists lament about the open access movement, where the burden of publication cost lies on the individual researcher).

An issue which is highlighted is the potential for INFORMATION NOISE arising from the ability to self-publish on Science 2.0 platforms.

 

The NEW BREED was born in 4/2012

An ongoing effort on this platform, https://pharmaceuticalintelligence.com/, is to establish a scientific methodology for curating scientific findings, where one of the goals is to help quell the information noise that can result from the massive amounts of new informatics and data accruing in the biomedical literature.

B.  Open Access Publishing Model leads to biases and inequalities in the idea selection

The open access publishing model has been compared to the model applied by the advertising industry years ago, when publishers considered journal articles as “advertisements.” However, NOTHING could be further from the truth. In advertising, the publishers claim, the companies, not the consumer, pay for the ads. In scientific open access publishing, however, although the consumer (libraries) does not pay for access, the burden of BOTH the cost of doing the research and of publishing the findings is now placed on the individual researcher. Some of these publishing costs can be as high as $4,000 USD per article, which is very high for most researchers. Many universities try to reimburse researchers for open access publishing fees, but it still costs the consumer and the individual researcher, limiting the cost savings to either.

However, this sets up a situation in which young researchers, who in general are not well funded, are struggling with the publication costs, and this sets up a bias or inequitable system which rewards the well funded older researchers and bigger academic labs.

C. Post publication comments and discussion require online hubs and anonymization systems

Many recent publications stress the importance of a post-publication review process or system, yet although many big journals like Nature and Science have their own blogs and commentary systems, these are rarely used. In fact, there is just one comment per 100 views of a journal article on these systems. In traditional journals, editors are the referees of comments and have the ability to censor comments or discourse. The article laments that commenting should be as easy on journals as it is on other social sites, yet scientists are not offering their comments or opinions on the matter.

In a personal experience, a well-written commentary goes through editors who usually reject a comment much as they would reject an original research article. Thus many scientists, I believe, after fashioning a well-researched and referenced reply, never see it published if it is not in the editor’s interest.

Therefore anonymity is greatly needed, and the lack of it may be the reason why scientific discourse is so limited on these types of Science 2.0 platforms. Platforms that have had success in this arena include anonymized platforms like Wikipedia and certain closed LinkedIn professional groups, while more open platforms like Google Knowledge have been failures.

A great example on this platform was a very spirited conversation on LinkedIn on genomics, tumor heterogeneity and personalized medicine which we curated from the LinkedIn discussion (unfortunately LinkedIn has closed many groups) seen here:

Issues in Personalized Medicine: Discussions of Intratumor Heterogeneity from the Oncology Pharma forum on LinkedIn

 

In this discussion, it was surprising that over a weekend so many scientists from all over the world contributed to a great discussion on the topic of tumor heterogeneity.

But many feel such discussions would be safer if they were anonymized.  However then researchers do not get any credit for their opinions or commentaries.

A major problem is how to take these intangibles and make them into tangible assets, which would both promote the discourse and reward those who take their time to improve scientific discussion.

This is where something like NFTs or a decentralized network may become important!

See

https://pharmaceuticalintelligence.com/portfolio-of-ip-assets/

 

UPDATED 5/09/2022

Below is an online @TwitterSpace discussion we had with some young scientists who are just starting out and gave their thoughts on what SCIENCE 3.0 and the future of the dissemination of science might look like, in light of this new Metaverse. However, we have to define each of these terms in light of science, and not just treat the Internet as merely a decentralized marketplace for commonly held goods.

This online discussion was tweeted out and got a fair amount of impressions (60) as well as interactors (50).

For the recording, both on Twitter and in audio format, please see below.

Set a reminder for my upcoming Space! https://t.co/7mOpScZfGN @Pharma_BI @PSMTempleU #science3_0 @science2_0 — Stephen J Williams (@StephenJWillia2), April 28, 2022 (https://twitter.com/StephenJWillia2/status/1519776668176502792)

 

 

To introduce this discussion, first some start-off material which will frame this discourse.

The Internet and the Web are rapidly adopting a new “Web 3.0” format, with decentralized networks, enhanced virtual experiences, and greater interconnection between people. Here we start the discussion of what the move will look like from Science 2.0, where dissemination of scientific findings was revolutionized by piggybacking on Web 2.0 and social media, to a Science 3.0 format. What will it involve, and what paradigms will be turned upside down?

Old Science 1.0 is still the backbone of all scientific discourse, built on the massive amount of experimental and review literature. However, this literature was in analog format, and we have moved to a more accessible, digital, open access format for both publications and raw data. Whereas 1.0 had a structure, like the Dewey Decimal system and indexing, 2.0 made science more accessible and easier to search due to the newer digital formats. Yet both needed an organizing structure: for 1.0 that was the scientific method of data and literature organization, with libraries as the indexers; in 2.0 this relied on an army mostly of volunteers who did not have much in the way of incentivization to co-curate and organize the findings and the massive literature.

Each version of Science has its caveats: its benefits as well as its deficiencies. This curation and the ongoing discussion are meant to solidify the basis for the new format, along with definitions and determination of structure.

We had high hopes for Science 2.0, in particular the smashing of data and knowledge silos. However, the digital age along with 2.0 platforms seemed to exacerbate this somehow. We are still critically short on analysis!

 

We really need people and organizations to get on top of this new Web 3.0 or metaverse so that similar issues do not get in the way: namely, we need to create an organizing structure (maybe as knowledgebases), we need INCENTIVIZED co-curators, and we need ANALYSIS… lots of it!!

Are these new technologies the cure or is it just another headache?

 

There were a few overarching themes, whether one was talking about AI, NLP, virtual reality, or other new technologies with respect to this new metaverse, and a consensus of Decentralized, Incentivized, and Integrated was commonly expressed among the attendees.

The following are some slides from representative presentations (slide images not reproduced in this text version).

Other articles of note on this topic in this Open Access Scientific Journal include:

Electronic Scientific AGORA: Comment Exchanges by Global Scientists on Articles published in the Open Access Journal @pharmaceuticalintelligence.com – Four Case Studies

eScientific Publishing a Case in Point: Evolution of Platform Architecture Methodologies and of Intellectual Property Development (Content Creation by Curation) Business Model 

e-Scientific Publishing: The Competitive Advantage of a Powerhouse for Curation of Scientific Findings and Methodology Development for e-Scientific Publishing – LPBI Group, A Case in Point

@PharmaceuticalIntelligence.com –  A Case Study on the LEADER in Curation of Scientific Findings

Real Time Coverage @BIOConvention #BIO2019: Falling in Love with Science: Championing Science for Everyone, Everywhere

Old Industrial Revolution Paradigm of Education Needs to End: How Scientific Curation Can Transform Education

 


 

Medical Startups – Artificial Intelligence (AI) Startups in Healthcare

Reporters: Stephen J. Williams, PhD, Aviva Lev-Ari, PhD, RN, and Shraga Rottem, MD, DSc

The motivation for this post is threefold:

First, we are presenting an application of AI, NLP, DL to our own medical text in the Genomics space. Here we present the first section of Part 1 in the following book. Part 1 has six subsections that yielded 12 plots. The entire Book is represented by 38 x 2 = 76 plots.

Second, we bring to the attention of the e-Reader the list of 276 Medical Startups – Artificial Intelligence (AI) Startups in Healthcare as a hot universe of R&D activity in Human Health.

Third, we highlight one academic center with an AI focus.

Dear friends of the ETH AI Center,

We would like to provide you with some exciting updates from the ETH AI Center and its growing community. The ETH AI Center now comprises 110 research groups in the faculty, 20 corporate partners and has led to nine AI startups.

As the Covid-19 restrictions in Switzerland have recently been lifted, we would like to hear from you what kind of events you would like to see in 2022! Participate in the survey to suggest event formats and topics that you would enjoy being a part of. We are already excited to learn what we can achieve together this year.

We already have many interesting events coming up; we look forward to seeing you at our main and community events!

SOURCE

https://news.ethz.ch/html_mail.jsp?params=%2FUnFXUQJ%2FmiOP6akBq8eHxaXG%2BRdNmeoVa9gX5ArpTr6mX74xp5d78HhuIHTd9V6AHtAfRahyx%2BfRGrzVL1G8Jy5e3zykvr1WDtMoUC%2B7vILoHCGQ5p1rxaPzOsF94ID

 

 

LPBI Group is applying AI for Medical Text Analysis with Machine Learning and Natural Language Processing: Statistical and Deep Learning

Our Book 

Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS & BioInformatics, Simulations and the Genome Ontology

Medical Text Analysis of this Book shows the following results, obtained by Madison Davis by applying Wolfram NLP for Biological Languages to our own text. See below an example:

Part 1: Next Generation Sequencing (NGS)

 

1.1 The NGS Science

1.1.1 BioIT Aspect

 

Hypergraph Plot #1 and Tree Diagram Plot #1

for 1.1.1, based on 16 articles and 12 keywords:

protein, cancer, dna, genes, rna, survival, immune, tumor, patients, human, genome, expression
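For readers curious about what such keyword-based text analysis involves, below is a minimal illustrative sketch in Python of building a keyword co-occurrence matrix, the kind of structure that underlies hypergraph and tree plots. This is not the Wolfram NLP pipeline used for the book, and the article snippets are invented stand-ins:

```python
from sklearn.feature_extraction.text import CountVectorizer

# Invented snippets standing in for the analyzed articles
articles = [
    "Tumor immune response alters gene expression in cancer patients.",
    "RNA sequencing of the human genome reveals survival-linked genes.",
    "Protein markers predict survival of patients receiving immune therapy.",
]
keywords = ["protein", "cancer", "dna", "genes", "rna", "survival",
            "immune", "tumor", "patients", "human", "genome", "expression"]

vectorizer = CountVectorizer(vocabulary=keywords)
counts = vectorizer.fit_transform(articles)   # articles x keywords matrix

# Co-occurrence: entry (i, j) counts articles mentioning both keywords
binary = (counts > 0).astype(int)
cooccurrence = (binary.T @ binary).toarray()
print(cooccurrence)
```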

(more…)

Read Full Post »

From the journal Nature: NFT, Patents, and Intellectual Property: Potential Design

Reporter: Stephen J. Williams, Ph.D.

 

From the journal Nature

Source: https://www.nature.com/articles/s41598-022-05920-6

Patents and intellectual property assets as non-fungible tokens; key technologies and challenges

Scientific Reports volume 12, Article number: 2178 (2022)

Abstract

With the explosive development of decentralized finance, we witness a phenomenal growth in tokenization of all kinds of assets, including equity, funds, debt, and real estate. By taking advantage of blockchain technology, digital assets are broadly grouped into fungible and non-fungible tokens (NFT). Here non-fungible tokens refer to those with unique and non-substitutable properties. NFT has widely attracted attention, and its protocols, standards, and applications are developing exponentially. It has been successfully applied to digital fantasy artwork, games, collectibles, etc. However, there is a lack of research in utilizing NFT in issues such as Intellectual Property. Applying for a patent and trademark is not only a time-consuming and lengthy process but also costly. NFT has considerable potential in the intellectual property domain. It can promote transparency and liquidity and open the market to innovators who aim to commercialize their inventions efficiently. The main objective of this paper is to examine the requirements of presenting intellectual property assets, specifically patents, as NFTs. Hence, we offer a layered conceptual NFT-based patent framework. Furthermore, a series of open challenges about NFT-based patents and the possible future directions are highlighted. The proposed framework provides fundamental elements and guidance for businesses in taking advantage of NFTs in real-world problems such as grant patents, funding, biotechnology, and so forth.

Introduction

Distributed ledger technologies (DLTs) such as blockchain are emerging technologies posing a threat to existing business models. Traditionally, most companies used centralized authorities in various aspects of their business, such as financial operations and setting up trust with their counterparts. With the emergence of blockchain, centralized organizations can be substituted with a decentralized group of resources and actors. The blockchain mechanism was introduced in the Bitcoin white paper in 2008, which lets users generate transactions and spend their money without the intervention of banks1. Ethereum, a second generation of blockchain, was introduced in 2014, allowing developers to run smart contracts on a distributed ledger. With smart contracts, developers and businesses can create financial applications that use cryptocurrencies and other forms of tokens for applications such as decentralized finance (DeFi), crowdfunding, decentralized exchanges, data record keeping, etc.2. Recent advances in distributed ledger technology have developed concepts that lead to cost reduction and the simplification of value exchange. Nowadays, by leveraging the advantages of blockchain and taking into account the governance issues, digital assets can be represented as tokens that exist on the blockchain network, which facilitates their transmission and traceability, increases their transparency, and improves their security3.

In the landscape of blockchain technology, two types of tokens can be defined: fungible tokens, in which all tokens have equal value, and non-fungible tokens (NFTs), which feature unique characteristics and are not interchangeable. In other words, non-fungible tokens are digital assets with a unique identifier that is stored on a blockchain4. NFT was initially suggested in Ethereum Improvement Proposal (EIP)-7215, and it was later expanded in EIP-11556. NFTs became one of the most widespread applications of blockchain technology and reached worldwide attention in early 2021. They can be digital representations of real-world objects. NFTs are tradable rights to digital assets (pictures, music, films, and virtual creations) where ownership is recorded in blockchain smart contracts7.

In particular, fungibility is the ability to exchange one item with another of the same kind, an essential feature of currency. A non-fungible token is unique and therefore cannot be substituted8. Recently, blockchain enthusiasts have shown significant interest in various types of NFTs, and they enthusiastically participate in NFT-related games and trades. CryptoPunks9, one of the first NFTs on Ethereum, generated almost 10,000 collectible punks and helped popularize the ERC-721 standard. With the gamification of its breeding mechanics, CryptoKitties10 officially placed NFTs at the forefront of the market in 2017. CryptoKitties is an early blockchain game that enables users to buy, sell, collect, and digitally breed cats. Another example is NBA Top Shot11, an NFT trading platform for buying and selling short digital films of NBA events.

NFTs are developing remarkably and have provided many applications such as artist royalties, in-game assets, educational certificates, etc. However, it is a relatively new concept, and many areas of application need to be explored. Intellectual Property, including patent, trademark, and copyright, is an important area where NFTs can be applied usefully and solve existing problems.

Although NFTs have had many applications so far, they have rarely been used to solve real-world problems. NFTs are, in fact, an exciting prospect for Intellectual Property (IP). Applying for a patent or trademark is not only a time-consuming and lengthy process but also a costly one: registering a copyright or trademark may take months, while securing a patent can take years. On the contrary, with the help of the unique features of NFT technology, it is possible to accelerate this process with considerable confidence and assurance about protecting the ownership of an IP. NFTs can offer IP protection while an applicant waits for the government to grant more formal protection. Some believe NFTs and blockchain would make buying and selling patents easier, offering new opportunities for companies, universities, and inventors to make money off their innovations12. Patent holders would benefit from such innovation: it would give them the ability to 'tokenize' their patents. Because every transaction would be logged on a blockchain, it would be much easier to trace changes in patent ownership. NFTs would also facilitate the revenue generation of patents by democratizing patent licensing. NFTs support the intellectual property market by embedding automatic royalty-collecting methods inside inventors' works, providing them with financial benefits anytime their innovation is licensed. For example, each inventor's patent would be minted as an NFT, and these NFTs would be joined together to form a commercial IP portfolio and minted as a compounded NFT. Each investor would automatically get their fair share of royalties whenever licensing revenue is generated, without anyone having to track them down.

In13, the authors discuss an overview of NFT applications in different areas such as gambling, games, and collectibles. In addition,4 provides a prototype for an event-ticketing application based on Ethereum smart contracts, and NFT as a solution for art and real estate auction systems is described in14. However, these studies have not discussed existing standards or a generalized architecture enabling NFTs to be applied in diverse applications. The authors in15 provide two general design patterns for creating and trading NFTs and discuss existing token standards for NFTs; however, the proposed designs are limited to Ethereum, and other blockchains are not considered16. Moreover, the different technologies for each step of the proposed procedure are not discussed. In8, the authors provide a conceptual framework for token design and management and discuss five views: token view, wallet view, transaction view, user interface view, and protocol view. However, no research provides a generalized conceptual framework for generating, recording, and tracing NFT-based IP in a blockchain network.

Even with the clear benefits that NFT-backed patents offer, there are a number of impediments to actually achieving such a system. For example, convincing patent owners to put current ownership records for their patents into NFTs poses an initial obstacle. Because there is no reliable framework for NFT-based patents, this paper provides a conceptual framework for presenting NFT-based patents with a comprehensive discussion on many aspects, ranging from the background, model components, token standards to application domains and research challenges. The main objective of this paper is to provide a layered conceptual NFT-based patent framework that can be used to register patents in a decentralized, tamper-proof, and trustworthy peer-to-peer network to trade and exchange them in the worldwide market. The main contributions of this paper are highlighted as follows:

  • Providing a comprehensive overview on tokenization of IP assets to create unique digital tokens.
  • Discussing the components of a distributed and trustworthy framework for minting NFT-based patents.
  • Highlighting a series of open challenges of NFT-based patents and enlightening the possible future trends.

The rest of the paper is structured as follows: “Background” section describes the Background of NFTs, Non-Fungible Token Standards. The NFT-based patent framework is described in “NFT-based patent framework” section. The Discussion and challenges are presented in “Discussion” section. Lastly, conclusions are given in “Conclusion” section.

Background

Colored Coins can be considered the first step toward NFTs, designed on top of the Bitcoin network. Bitcoins are fungible, but it is possible to mark them so they are distinguishable from other bitcoins. These marked coins have special properties representing real-world assets like cars and stocks, and owners can prove their ownership of physical assets through the colored coins. By utilizing Colored Coins, users can transfer ownership of their marked coins like a usual transaction and benefit from Bitcoin's decentralized network17. However, Colored Coins had limited functionality due to the limitations of the Bitcoin script. Pepe is a green frog meme created by Matt Furie; users define tokens for Pepes and trade them through the Counterparty platform, where newly created Pepe tokens are assessed for whether they are rare enough. Rare Pepe allows users to preserve scarcity, manage ownership, and transfer their purchased Pepes.

In 2017, Larva Labs developed the first Ethereum-based NFT, named CryptoPunks. It contains 10,000 unique human-like characters generated randomly. The official ownership of each character is stored in an Ethereum smart contract, and owners can trade characters. The CryptoPunks project inspired CryptoKitties, which drew attention to NFTs and pioneered blockchain games when it launched in late 2017. CryptoKitties is a blockchain-based virtual game in which users collect and trade characters with unique features that shape kitties. The game was developed as an Ethereum smart contract, and it pioneered ERC-721, the first standard token for NFTs on the Ethereum blockchain. After the 2017 hype around NFTs, many projects started in this context. Due to the increased attention to NFT use cases and the growing market cap, different blockchains like EOS, Algorand, and Tezos started to support NFTs, and various marketplaces such as SuperRare, Rarible, and OpenSea were developed to help users trade NFTs. As mentioned, assets are generally categorized into two main classes: fungible and non-fungible assets. Fungible assets are those that another similar asset can replace; fungible items have two main characteristics: replicability and divisibility.

Currency is a fungible item because a ten-dollar bill can be exchanged for another ten-dollar bill or divided into ten one-dollar bills. Unlike fungible items, non-fungible items are unique and distinguishable: they cannot be divided or exchanged for another identical item. The first tweet on Twitter is a non-fungible item with these characteristics; another tweet cannot replace it, and it is unique and not divisible. An NFT is a non-fungible cryptographic asset that is declared in a standard token format and has a unique set of attributes. NFTs are created using blockchain technology because the blockchain network provides transparency, proof of ownership, and traceable transactions.
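The bookkeeping difference can be made concrete with a small illustrative sketch (ours, not from the paper): a fungible ledger only needs a balance per account, while a non-fungible ledger must map each unique token ID to exactly one owner, mirroring the ERC-20 versus ERC-721 distinction.

# Minimal sketch contrasting fungible and non-fungible bookkeeping.
# Fungible units are interchangeable, so only a balance per account matters;
# non-fungible items are unique, so each token ID maps to exactly one owner.

class FungibleLedger:
    def __init__(self):
        self.balances = {}  # account -> amount (divisible, replicable units)

    def transfer(self, sender, receiver, amount):
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

class NonFungibleLedger:
    def __init__(self):
        self.owner_of = {}  # token_id -> owner (each token is unique)

    def transfer(self, sender, receiver, token_id):
        if self.owner_of.get(token_id) != sender:
            raise ValueError("sender does not own this token")
        self.owner_of[token_id] = receiver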

Blockchain-based NFTs help enthusiasts create NFTs in a standard token format on the blockchain, transfer the ownership of their NFTs to a buyer, ensure the uniqueness of NFTs, and manage NFTs completely. In addition, there are semi-fungible tokens that have characteristics of both fungible and non-fungible tokens. Semi-fungible tokens are fungible within the same class or at a specific time and non-fungible across other classes or at different times. A plane ticket can be considered a semi-fungible token because a charter ticket can be exchanged for another charter ticket but cannot be exchanged for a first-class ticket. The concept of semi-fungible tokens plays a key role in blockchain-based games and reduces NFT overhead. Fig. 1 illustrates fungible, non-fungible, and semi-fungible tokens. The main properties of NFTs are described as follows15:

[Figure 1: Fungible, non-fungible, and semi-fungible tokens.]

  • Ownership: Because of the blockchain layer, the owner of an NFT can easily prove the right of possession with their keys; other nodes can verify the user's ownership publicly.
  • Transferability: Users can freely transfer the ownership of their NFTs to others on dedicated markets.
  • Transparency: By using blockchain, all transactions are transparent, and every node in the network can confirm and trace the trades.
  • Fraud prevention: Fraud is one of the key problems in trading assets; using NFTs ensures buyers buy a non-counterfeit item.
  • Immutability: Metadata, token IDs, and the transaction history of NFTs are recorded in a distributed ledger, and it is impossible to change the information of purchased NFTs.

Non-fungible standards

The Ethereum blockchain pioneered the implementation of NFTs, and ERC-721 was the first standard token accepted on the Ethereum network. With the increasing popularity of NFTs, developers started developing and enhancing NFT standards on different blockchains like EOS, Algorand, and Tezos. This section reviews the NFT standards implemented on these blockchains.

Ethereum

ERC-721, a free and open-source standard, was the first standard for NFTs developed on Ethereum. ERC-721 is an interface that a smart contract must implement to be able to transfer and manage NFTs. Each ERC-721 token has unique properties and a distinct token ID. ERC-721 tokens include the owner's information, a list of approved addresses, a transfer function that implements transferring tokens from owner to buyer, and other useful functions5.

In ERC-721, smart contracts can group tokens with the same configuration, but each token has different properties, so ERC-721 does not support fungible tokens. ERC-1155 is another Ethereum standard, developed by Enjin, that offers richer functionality than ERC-721 by supporting fungible, non-fungible, and semi-fungible tokens. In ERC-1155, IDs define classes of assets: different IDs denote different classes of assets, and each ID may contain different assets of the same class. Using ERC-1155, a user can transfer different types of tokens in a single transaction and mix multiple fungible and non-fungible token types in a single smart contract6. Both ERC-721 and ERC-1155 support operators, whom the owner can authorize to initiate token transfers on their behalf.
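To make the interface concrete, here is a minimal, hypothetical web3.py sketch that reads two of the standard ERC-721 views from a deployed contract. The RPC endpoint and contract address are placeholders, and the ABI fragment covers only the two calls used; this is a sketch, not a complete client.

# Hypothetical sketch: reading ERC-721 state with web3.py.
from web3 import Web3

ERC721_ABI = [  # minimal fragment of the standard interface
    {"name": "ownerOf", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "tokenId", "type": "uint256"}],
     "outputs": [{"name": "", "type": "address"}]},
    {"name": "tokenURI", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "tokenId", "type": "uint256"}],
     "outputs": [{"name": "", "type": "string"}]},
]

w3 = Web3(Web3.HTTPProvider("https://example-rpc.invalid"))  # placeholder endpoint
nft = w3.eth.contract(
    address="0x0000000000000000000000000000000000000000",   # placeholder address
    abi=ERC721_ABI,
)

token_id = 1
print(nft.functions.ownerOf(token_id).call())   # current owner's address
print(nft.functions.tokenURI(token_id).call())  # where the metadata lives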

EOSIO

EOSIO is an open-source blockchain platform released in 2018 that claims to eliminate transaction fees and increase transaction throughput. EOSIO differs from Ethereum in its wallet-creation algorithm and its procedure for handling transactions. dGoods is a free standard developed on the EOS blockchain for assets, focusing on large-scale use cases. It supports a hierarchical naming structure in smart contracts: each contract has a unique symbol and a list of categories, and each category contains a list of token names. Therefore, a single dGoods contract can contain many tokens, which makes transferring a group of tokens efficient. Using this hierarchy, dGoods supports fungible, non-fungible, and semi-fungible tokens. It also supports batch transferring, whereby the owner can transfer many tokens in one operation18.

Algorand

Algorand is a high-performance public blockchain launched in 2019. It provides scalability while maintaining security and decentralization, and it supports smart contracts and tokens for representing assets19. Algorand defines the Algorand Standard Assets (ASA) concept for creating and managing assets on the Algorand blockchain. Using ASA, users can define fungible and non-fungible tokens. In Algorand, users can create NFTs or FTs without writing smart contracts; they need to run just a single transaction on the Algorand blockchain. Each transaction contains some mutable and some immutable properties20.

Each account in Algorand can create up to 1000 assets, and for every asset an account creates or receives, the minimum balance of the account increases by 0.1 Algos. Algorand also supports fractional NFTs by splitting an NFT into a group of divided FTs or NFTs, where each part can be exchanged independently21. Algorand uses a Clawback Address that operates like an operator in ERC-1155: it is allowed to transfer tokens of an owner who has authorized it.
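As an illustration, a minimal sketch of expressing an NFT as an ASA with py-algorand-sdk might look as follows. The node address, API token, creator account, and metadata URL are placeholders, and the module path assumes a recent version of the SDK; treat this as a sketch under those assumptions rather than a definitive recipe.

# Hypothetical sketch: an NFT as an Algorand Standard Asset (ASA),
# i.e., a single indivisible unit (total=1, decimals=0).
from algosdk.v2client import algod
from algosdk import transaction

client = algod.AlgodClient("placeholder-token", "https://example-node.invalid")
params = client.suggested_params()

txn = transaction.AssetConfigTxn(
    sender="CREATOR_ADDRESS",        # placeholder account
    sp=params,
    total=1,                         # one indivisible unit -> non-fungible
    decimals=0,
    default_frozen=False,
    unit_name="PATENT",
    asset_name="Patent #12345",
    url="ipfs://<metadata-cid>",     # off-chain metadata reference
    strict_empty_address_check=False,
)
# signed = txn.sign(creator_private_key); client.send_transaction(signed)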

Tezos

Tezos is another decentralized open-source blockchain. Tezos supports the meta-consensus concept: in addition to using a consensus protocol on the ledger's state, like Bitcoin and Ethereum, it also attempts to reach consensus about how nodes and the protocol should change or upgrade22. FA2 (TZIP-12) is a standard for a unified token contract interface on the Tezos blockchain. FA2 supports different token types, including fungible, non-fungible, and fractionalized NFT contracts. In Tezos, tokens are identified by a pair of token contract address and token ID. Tezos also supports batch token transferring, which reduces the cost of transferring multiple tokens.

Flow

Flow was developed by Dapper Labs to remove the scalability limitations of the Ethereum blockchain. Flow is a fast, decentralized blockchain that focuses on games and digital collectibles. Thanks to its architecture, it improves throughput and scalability without sharding. Flow supports smart contracts written in Cadence, a resource-oriented programming language. In Cadence, an NFT can be described as a resource with a unique id. Resources have important rules for ownership management: a resource has exactly one owner and cannot be copied or lost. These features provide assurance to the NFT owner. In Flow, NFT metadata, including images and documents, can be stored off-chain or on-chain. In addition, Flow defines a Collection concept, in which each collection is an NFT resource that can include a list of resources. It is a dictionary in which the key is the resource id and the value is the corresponding NFT.
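Cadence itself is not shown here; instead, a rough Python analogy (ours, not Cadence code) illustrates the Collection idea just described: a dictionary keyed by resource id, with "move" semantics so a resource never exists in two places at once.

# Rough Python analogy (not Cadence) of Flow's Collection concept.

class NFTResource:
    def __init__(self, resource_id, metadata):
        self.id = resource_id
        self.metadata = metadata

class Collection:
    def __init__(self):
        self._owned = {}  # resource id -> NFTResource

    def deposit(self, nft):
        self._owned[nft.id] = nft

    def withdraw(self, resource_id):
        # pop() removes the resource here, mimicking Cadence's move semantics:
        # after a transfer the resource exists only in the receiving collection.
        return self._owned.pop(resource_id)

alice, bob = Collection(), Collection()
alice.deposit(NFTResource(7, {"name": "unique hat"}))
bob.deposit(alice.withdraw(7))  # a batch transfer would loop over many ids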

The Collection concept provides batch transferring of NFTs. Besides, an NFT can own other NFTs; for instance, in CryptoKitties, a unique cat (an NFT) can own a unique hat (another NFT). Flow uses Cadence's second layer of access control to allow some operators to access some fields of the NFT23. In Table 1, we provide a comparison of the standards described above. They are compared on support for fungible tokens, non-fungible tokens, batch transferring (the owner can transfer multiple tokens in one operation), operator support (the owner can approve an operator to initiate token transfers), and fractionalized NFTs (an NFT can be divided into different tokens, each exchangeable independently).

[Table 1: Comparing NFT standards.]


NFT-based patent framework

In this section, we propose a framework for presenting NFT-based patents. We describe details of the proposed distributed and trustworthy framework for minting NFT-based patents, as shown in Fig. 2. The proposed framework includes five main layers: Storage Layer, Authentication Layer, Verification Layer, Blockchain Layer, and Application Layer. Details of each layer and the general concepts are presented as follows.

[Figure 2: The proposed layered framework for NFT-based patents.]

Storage layer

The continuous growth of data in blockchain technology is moving various information systems towards the use of decentralized storage networks. Decentralized storage networks were created to provide more benefits to the technological world24. Among their benefits: (1) cost savings are achieved by making optimal use of current storage; (2) multiple copies are kept on various nodes, avoiding bottlenecks on central servers and speeding up downloads. This foundation layer provides the infrastructure required for storage. The items on NFT platforms have unique characteristics that must be included for identification.

Non-fungible token metadata provides information that describes a particular token ID. NFT metadata is represented either on-chain or off-chain. On-chain means direct incorporation of the metadata into the NFT's smart contract, which represents the tokens; off-chain storage means hosting the metadata separately25.

Blockchains provide decentralization but are expensive for data storage and never allow data to be removed. For example, because of the Ethereum blockchain's current storage limits and high maintenance costs, many projects' metadata is maintained off-chain. Developers utilize the ERC-721 standard, which features a method known as tokenURI that lets applications know the location of the metadata for a specific item. Currently, three common solutions for off-chain storage are the InterPlanetary File System (IPFS), Pinata, and Filecoin.

IPFS

The InterPlanetary File System (IPFS) is a peer-to-peer hypermedia protocol for decentralized storage of media content. Because of the high cost of storing NFT-related media files on a blockchain, IPFS can be the most affordable and efficient solution. IPFS combines multiple technologies inspired by Git and BitTorrent, such as a block exchange system, distributed hash tables (DHT), and a version control system26. On a peer-to-peer network, the DHT is used to coordinate and maintain metadata.

In other words, the hash values must be mapped to the objects they represent. When storing an object like a file, IPFS generates a hash value that starts with the prefix Qm and acts as a reference to that specific item. Objects larger than 256 KB are divided into smaller blocks of up to 256 KB, and a hash tree is used to interconnect all the blocks that are part of the same object. IPFS uses the Kademlia DHT. The block exchange system, BitSwap, is a BitTorrent-inspired system used to exchange blocks. It is possible to use asymmetric encryption to prevent unauthorized access to content stored on IPFS27.
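The chunk-and-hash idea can be sketched in a few lines of Python. Note this is a simplification for illustration only: real IPFS builds a Merkle DAG over the blocks and encodes the root as a multihash CID rather than a plain SHA-256 hex digest.

# Simplified sketch of IPFS-style content addressing (illustrative only).
import hashlib

BLOCK_SIZE = 256 * 1024  # objects larger than 256 KB are split into blocks

def content_address(data: bytes) -> str:
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    block_hashes = [hashlib.sha256(b).digest() for b in blocks]
    # Hash the concatenated block hashes as a stand-in for the DAG root.
    return hashlib.sha256(b"".join(block_hashes)).hexdigest()

print(content_address(b"patent document bytes" * 100000))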

Pinata

Pinata is a popular platform for managing and uploading files on IPFS. It provides secure and verifiable files for NFTs. Most NFTs store their data off-chain, with the NFT on the blockchain merely pointing to a URL of the data. The main problem here is that the content behind a URL can change.

This indicates that an NFT supposed to describe a certain patent could be changed without anyone knowing, which defeats the purpose of the NFT in the first place. This is where Pinata comes in handy. Pinata uses IPFS to create content-addressable hashes of data, also known as Content Identifiers (CIDs). These CIDs serve both as a way of retrieving data and as a means to ensure data validity. Those looking to retrieve data simply ask the IPFS network for the data associated with a certain CID, and if any node on the network contains that data, it is returned to the requester. The data is automatically rehashed on the requester's computer upon retrieval to make sure that it matches the original CID they asked for. This process ensures the data received is exactly what was asked for; if a malicious node attempts to send fake data, the resulting CID on the requester's end will be different, alerting the requester that they are receiving incorrect data28.
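A minimal sketch of this verify-on-retrieve pattern, with plain SHA-256 standing in for the real multihash CID comparison a production client would perform:

# Verify-on-retrieve: rehash the returned bytes and compare against the
# identifier that was requested; a mismatch signals tampering.
import hashlib

def fetch_and_verify(expected_digest: str, fetch) -> bytes:
    data = fetch()  # e.g., an HTTP GET against any IPFS gateway
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_digest:
        raise ValueError("content does not match its identifier: possible tampering")
    return data

honest = lambda: b"original NFT metadata"
expected = hashlib.sha256(honest()).hexdigest()
fetch_and_verify(expected, honest)                    # passes
# fetch_and_verify(expected, lambda: b"forged data")  # would raise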

Filecoin

Another decentralized storage network is Filecoin. It is built on top of IPFS and is designed to store the most important data, such as media files. Truffle Suite has also launched an NFT Development Template with Filecoin Box. NFT.Storage (free decentralized storage for NFTs)29 lets users easily and securely store their NFT content and metadata using IPFS and Filecoin. NFT.Storage is a service backed by Protocol Labs and Pinata specifically for storing NFT data. Through content addressing and decentralized storage, NFT.Storage allows developers to protect their NFT assets and associated metadata, ensuring that all NFTs follow best practices and stay accessible for the long term. It makes minting NFTs following best practices frictionless through resilient persistence on IPFS and Filecoin, letting anyone store NFT data on decentralized networks quickly, safely, and for free. The details of this system are as follows30:

Content addressing

Once users upload data to NFT.Storage, they receive a CID, an IPFS hash of the content. CIDs are the data's unique fingerprints: universal addresses that can be used to refer to the data regardless of how or where it is stored. Using CIDs to reference NFT data avoids problems such as weak links and "rug pulls", since CIDs are generated from the content itself.

Provable storage

NFT.Storage uses Filecoin for long-term decentralized data storage. Filecoin uses cryptographic proofs to assure the NFT data’s durability and persistence over time.

Resilient retrieval

Data stored via IPFS and Filecoin can be fetched directly in the browser via any public IPFS gateway.

Authentication layer

The second layer is the authentication layer, whose functions we briefly highlight in this section. The Decentralized Identity (DID) approach assists users in collecting credentials from a variety of issuers, such as governments, educational institutions, or employers, and saving them in a digital wallet. A verifier then uses these credentials to verify a person's validity, following the "identity and access management (IAM)" process on a blockchain-based ledger. DID therefore allows users to remain in control of their identity. A lack of NFT verifiability also causes intellectual property and copyright infringements; of course, the chain of custody may be traced back to the creator's public address to check whether a similar patent was filed using that address. However, there is no quick and foolproof way to check an NFT creator's legitimacy. Without such verification built into the NFT, an NFT proves ownership only over that NFT itself and nothing more.

Self-sovereign identity (SSI)31 is a solution to this problem. SSI is a new series of standards that will guide a new identity architecture for the Internet. With a focus on privacy, security, and interoperability, SSI applications use public-key cryptography with public blockchains to generate persistent identities for people, with private and selective information disclosure. Blockchain technology offers a way to establish trust and transparency and provides a secure and publicly verifiable KYC (Know Your Customer) process. The blockchain architecture allows information from various service providers to be collected into a single cryptographically secure and unchanging database that does not need a third party to verify the authenticity of the information.

The proposed platform generates patent-related smart contracts that act as programs running on the blockchain to receive and send transactions. They are unalterable, and they privately identify clients through a thorough KYC process. After KYC approval, an NFT is minted on the blockchain as a certificate of verification32. At this layer, we use a decentralized authentication solution. This solution has been used for various applications in the blockchain field (e.g., smart cities and the Internet of Things)33,34, but we use it here for the proposed framework (patents as NFTs). Details of this solution are presented in the following.

Decentralized authentication

This section presents an authentication layer, similar to35, for building validated communication in a secure and decentralized manner via blockchain technology. As shown in Fig. 3, the authentication protocol comprises two processes: registration and login.

[Figure 3: The authentication protocol: registration and login.]
Registration

In the registration process of the suggested authentication protocol, we first initialize a user's public key as their identity key (UserName). Then, we upload this identity key to the blockchain, where other users can later verify transactions against it. Finally, the user generates an identity transaction.

Login

After registration, a user logs in to the system. The login process is described as follows:

  1. The user commits identity information and imports their secret key into the service application to log in.
  2. The user sends a login request to the network's service provider.
  3. The service provider analyzes the login request, extracts the hash, queries the blockchain, and obtains identity information from an identity list (identity transactions).
  4. When the above process is completed, the service provider responds with an authentication request, which includes a timestamp (to avoid a replay attack), the user's UserName, and a signature.
  5. The user creates a signature over five parameters: the timestamp, their own UserName and PK, and the UserName and PK of the service provider. This signature serves as the user's authentication credential (see the sketch below).
  6. The service provider verifies the received information; if it is valid, the authentication succeeds. Otherwise, the authentication fails, and the user's login is denied.
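A minimal sketch of steps 5 and 6, assuming an Ed25519 signature scheme and a '|'-delimited message layout (both are our assumptions; the paper fixes neither), using the Python 'cryptography' package:

# Hypothetical sketch: signing the five authentication parameters.
import time
from cryptography.hazmat.primitives.asymmetric import ed25519

user_key = ed25519.Ed25519PrivateKey.generate()
user_pk = user_key.public_key()

def auth_message(timestamp, user_name, user_pk_bytes, sp_name, sp_pk_bytes):
    # '|'-delimited layout; any unambiguous encoding would work equally well.
    return b"|".join([str(timestamp).encode(), user_name,
                      user_pk_bytes, sp_name, sp_pk_bytes])

msg = auth_message(int(time.time()), b"alice", b"<alice-pk>",
                   b"service-provider", b"<sp-pk>")
credential = user_key.sign(msg)  # the user's authentication credential (step 5)
user_pk.verify(credential, msg)  # the service provider's check (step 6);
                                 # raises InvalidSignature on a forgery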

The World Intellectual Property Organization (WIPO) and multiple target patent offices in various nations or regions must each assess a patent application, resulting in inefficiency, high costs, and uncertainty. This study presents a conceptual NFT-based patent framework for issuing, validating, and sharing patent certificates. The platform aims to support counterfeit protection as well as secure access and management of certificates according to the needs of learners, companies, education institutions, and certification authorities.

Here, the certification authority (CA) is used to authenticate patent offices. The procedure will first validate a patent if it is provided with a digital certificate that meets the X.509 standard. Certificate authorities are introduced into the system to authenticate both the nodes and clients connected to the blockchain network.

Verification layer

In permissioned blockchains, only identified nodes can read and write to the distributed ledger. Nodes can act in different roles and have various permissions. Therefore, a distributed system can be designed in which patent-granting offices serve as the identified nodes. Here the system is described conceptually at a high level. Figure 4 illustrates the sequence diagram of this layer, which includes four levels, as below:

[Figure 4: Sequence diagram of the verification layer.]

Digitalization

For a patent to be published as an NFT on the blockchain, it must be in a digitalized format. This level is the "filing step" in traditional patent registration. An application could be designed in the application layer to allow users to enter the different patent information online.

Recording

Patents provide valuable information and bring financial benefits to their owners. If they were publicly published on a blockchain network, miners might reject the patent and take the innovation for themselves; at the least, this can weaken consensus reliability and encourage miners to misbehave. To prevent this, the inventor should first record the innovation privately using proof of existence: the inventor generates the hash of the patent document and records it on the blockchain. As soon as it is recorded, the timestamp and the hash are publicly available to others, and the inventor can prove the existence of the patent document whenever needed.

Furthermore, using methods like Decision Thinking36, an inventor can record each phase of patent development separately. In each stage, the user generates the hash of the finished part and publishes it linked to the previous part's hash. The result is a chained series of hashes that documents the patent's development, and the inventor can prove the existence of each phase using the original related documents (see the sketch below). This level prevents others from abusing the patent and taking it for themselves, and the inventor can be sure that the patent document is recorded confidentially and immutably37.
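A minimal sketch of this proof-of-existence chain: each phase hash commits to both the document of that phase and the previous hash, and only the hex digests would be timestamped on-chain.

# Recording patent-development phases as a hash chain.
import hashlib

def phase_hash(document: bytes, previous_hash: bytes = b"") -> bytes:
    return hashlib.sha256(previous_hash + document).digest()

phases = [b"idea sketch", b"prototype notes", b"final claims"]
chain, prev = [], b""
for doc in phases:
    prev = phase_hash(doc, prev)
    chain.append(prev.hex())  # each digest is what gets timestamped on-chain

print(chain)  # publishing only these digests reveals nothing about the documents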

Different hash algorithms exist, with different architectures, time complexities, and security considerations. Hash functions should satisfy two main requirements. Pre-image resistance: it should be computationally hard to find the input of a hash function while the output and the hash algorithm are publicly known. Collision resistance: it should be computationally hard to find two arbitrary inputs x and y that have the same hash output. These requirements are vital for recording patents. First, the hash function should be pre-image resistant, making it impossible for others to recover the patent documentation; otherwise, everybody could read the patent even before its official publication. Second, the hash function should be collision resistant, to preclude users from changing their document after recording; otherwise, a user could upload one document and later replace it with another.

Among the various hash algorithms, the MD and SHA families are the most widely used. According to38, collisions have been found for the MD2, MD4, MD5, SHA-0, and SHA-1 hash functions, so they are not a good choice for recording patents. The SHA-2 hash algorithm is secure, and no collision has been found for it. Although SHA-2 is noticeably slower than prior hash algorithms, the recording phase is not highly time-sensitive, so SHA-2 is the better choice and provides excellent security for users.

Validating

In this phase, inventors first create NFTs for their patents and publish them to the miners/validators. Miners are identified nodes that validate NFTs for recording on the blockchain. Because patent validation is a specialized task, miners cannot be inexpert members of the public. In addition, patent offices are too few to make the network fully decentralized on their own. Therefore, the miners can be specialists certified by the patent offices: they should receive a digital certificate from a patent office attesting to their eligibility to referee a patent.

Digital certificate

Digital certificates are digital credentials used to verify networked entities' online identities. They usually include a public key as well as the owner's identification, and they are issued by Certification Authorities (CAs), who must verify the certificate holder's identity. Certificates contain cryptographic keys for signing, encryption, and decryption. X.509 is a standard that defines the format of public-key certificates signed by a certificate authority. The X.509 standard has multiple fields, and its structure is shown in Fig. 5:

  • Version: indicates the version of the X.509 standard. X.509 has multiple versions, each with a different structure; according to the CA, validators can choose their desired version.
  • Serial Number: distinguishes a certificate from other certificates; each certificate has a unique serial number.
  • Signature Algorithm Identifier: indicates the cryptographic algorithm used by the certificate authority to sign the certificate.
  • Issuer Name: indicates the issuer's name, which is generally the certificate authority.
  • Validity Period: each certificate is valid for a defined period. This limited period partly protects certificates against exposure of the CA's private key.
  • Subject Name: the name of the requester; in our proposed framework, the validator's name.
  • Subject Public Key Info: contains the subject's public key (in our framework, the validator's public key).

These fields are identical among all versions of the X.509 standard39.

[Figure 5: Structure of the X.509 standard.]

Certificate authority

A Certificate Authority (CA) issues digital certificates. CAs sign each certificate with their private key, which is not public, and others can verify the certificate using the CA's public key.

Here, the patent office creates a certificate for the requested patent referees. The patent office writes the validator's information into the certificate and signs it with the patent office's private key. The validator can use the certificate to assure others of their eligibility, and other nodes can check the requesting node's information by verifying the certificate with the patent office's public key (see the sketch below). Therefore, individuals can join the network's miners/validators using their credentials. In this phase, miners perform formal examinations, prior art research, and substantive examinations, and vote to grant or refuse the patent.
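A short sketch, assuming Python's 'cryptography' package, of a patent office CA issuing such an X.509 certificate to a validator and of another node verifying it. The names, key type, and one-year lifetime are placeholders; a production CA would add extensions and proper key management.

# Hypothetical sketch: patent office CA issues a validator certificate.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

ca_key = ec.generate_private_key(ec.SECP256R1())         # patent office key pair
validator_key = ec.generate_private_key(ec.SECP256R1())  # validator key pair

cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "validator-01")]))
    .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Patent Office CA")]))
    .public_key(validator_key.public_key())              # Subject Public Key Info
    .serial_number(x509.random_serial_number())          # unique Serial Number
    .not_valid_before(datetime.datetime.utcnow())        # Validity Period start
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .sign(ca_key, hashes.SHA256())                       # CA signs with its private key
)

# Any node verifies eligibility against the patent office's public key;
# this raises InvalidSignature if the certificate was not issued by the CA.
ca_key.public_key().verify(cert.signature, cert.tbs_certificate_bytes,
                           ec.ECDSA(hashes.SHA256()))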

Miners reach consensus about the patent and record it on the blockchain. The NFT is then recorded on the blockchain with corresponding comments on granting or on required reformations. If the miners detect the NFT as a malicious request, they do not record it on the blockchain.

Blockchain layer

This layer acts as middleware between the Verification Layer and the Application Layer in the patents-as-NFTs architecture. The main purpose of the blockchain layer in the proposed architecture is to provide IP management. We find that transitioning to a blockchain-based patents-as-NFTs record system enables many previously suggested improvements to current patent systems in a flexible, scalable, and transparent manner.

Multiple blockchain platforms can be used, including Ethereum, EOS, Flow, and Tezos. Blockchain systems are mainly classified into two major types based on their permission model: permissionless (public) and permissioned (private) blockchains. In a public blockchain, any node can participate in the peer-to-peer network, and the blockchain is fully decentralized; a node can leave the network without any consent from the other nodes.

Bitcoin is one of the most popular examples of a public, permissionless blockchain. Proof of Work (PoW), Proof of Stake (PoS), and directed acyclic graph (DAG) approaches are some examples of consensus mechanisms in permissionless blockchains. Bitcoin and Ethereum, two famous and trusted blockchain networks, use the PoW consensus mechanism, while platforms like Cardano and EOS adopt PoS consensus40.

In a private blockchain, nodes require specific access or permission to get network authentication. Hyperledger is among the most popular private blockchains, allowing only permissioned members to join the network after authentication. This provides security to a group of entities that do not completely trust one another but want to achieve a common objective, such as exchanging information. Entities in a permissioned blockchain network can use Byzantine fault tolerant (BFT) consensus. Hyperledger Fabric has a membership identity service that manages user IDs and verifies network participants.

Therefore, members are aware of each other's identity while maintaining privacy and secrecy, because they are unaware of each other's activities41. Due to their more secure nature, private blockchains have sparked large interest among banking and financial organizations, which believe that these platforms can disrupt current centralized systems. Hyperledger, Quorum, Corda, and EOS are some examples of permissioned blockchains42.

Reaching consensus in a distributed environment is a challenge. Blockchain is a decentralized network with no central node to observe and check all transactions, so protocols must be designed to ensure that all transactions are valid. Consensus algorithms are therefore considered the core of each blockchain43. In distributed systems, consensus is the problem of having all network members (nodes) agree on accepting or rejecting a block; when all members accept the new block, it can be appended to the previous block.

As mentioned, the main concern in blockchains is how to reach consensus among network members. A wide range of consensus algorithms has been designed, each with its own pros and cons42. Blockchain consensus algorithms are mainly classified into the three groups shown in Table 2. The first group, proof-based consensus algorithms, requires the nodes joining the verifying network to demonstrate their qualification to do the appending task. The second group, voting-based consensus, requires validators in the network to share their results of validating a new block or transaction before making the final decision. The third group is DAG-based consensus, a new class of consensus algorithms that allows several different blocks to be published and recorded simultaneously on the network.

[Table 2: Consensus algorithms in blockchain networks.]


The proposed patents-as-NFTs platform builds blockchain-based intellectual property management and empowers the entire patent ecosystem. It is a solution that removes barriers by addressing fundamental issues within the traditional patent ecosystem. Blockchain can efficiently handle patents and trademarks by effectively reducing approval wait times and other required resources. The user entities involved in intellectual property management are creators, patent consumers, and copyright-managing entities. Users with ownership of the original data are the patent creators, e.g., inventors, writers, and researchers. Patent consumers are the users who are willing to consume the content and support the creator's work. Users responsible for protecting the creators' intellectual property are the copyright-managing entities, e.g., lawyers. The patents-as-NFTs solution for IP management in the blockchain layer works by implementing the following steps62:

Creators sign up to the platform

Creators need to sign up on the blockchain platform to patent their creative work. The identity information will be required while signing up.

Creators upload IP on the blockchain network

Next, the creator adds the intellectual property for which a patent application is required, uploading the IP-related information and data to the blockchain network. Blockchain ensures traceability and auditability, protecting the data from duplication and manipulation. The patent becomes visible to all network members once it is uploaded to the blockchain.

Consumers generate request to use the content

Consumers who want to access the content must first register on the blockchain network. After signing up, consumers can ask creators to grant access to the patented content. Before the patent owner authorizes the request, a smart contract is created to allow customers to access information such as the owner's data. Furthermore, consumers are required to pay fees, in either fiat money or unique tokens, to use the creator's original information. When the creator approves the request, an NDA (non-disclosure agreement) is produced and signed by both parties. Blockchain manages the agreement and guarantees that all parties adhere to the terms and conditions filed (see the sketch below).
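The flow can be summarized with a conceptual sketch in plain Python; this is illustrative bookkeeping only, not an actual on-chain smart contract, and all names are ours.

# Conceptual sketch of the consumer-request flow: request access, pay the
# fee into escrow, and record an agreement once the creator approves.

class PatentLicenseContract:
    def __init__(self, owner, fee):
        self.owner, self.fee = owner, fee
        self.requests, self.agreements = {}, []

    def request_access(self, consumer, payment):
        if payment < self.fee:
            raise ValueError("fee not met")
        self.requests[consumer] = payment       # held until the owner approves

    def approve(self, consumer):
        payment = self.requests.pop(consumer)   # release escrow to the owner
        self.agreements.append((self.owner, consumer, payment, "NDA signed"))
        return self.agreements[-1]

contract = PatentLicenseContract(owner="inventor", fee=100)
contract.request_access("consumer-1", payment=100)
print(contract.approve("consumer-1"))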

Patent management entities leverage blockchain to protect copyrights and solve related disputes

Blockchain assists the patent-management entities in resolving a variety of disputes, including sharing confidential information, establishing proof of authorship, transferring IP rights, and making defensive publications. Suppose a person used an invention from a patent in their company without the inventor's consent; the inventor can report it to the patent office and claim ownership of that invention.

Application layer

The patent platform global marketplace technology would allow many enterprises, governments, universities, and small and medium-sized enterprises (SMEs) worldwide to tokenize patents as NFTs, creating an infrastructure for storing patent records on a blockchain-based network and developing a decentralized marketplace in which patent holders can easily sell or otherwise monetize their patents. An NFT-based patent can use smart contracts to determine a set price for a license or purchase.

Any buyer satisfied with the conditions can pay and immediately unlock the rights to the patent without either party ever having to interact directly. While patents are currently regulated jurisdictionally around the world, a blockchain-based patent marketplace using NFTs can reduce the geographical barriers between patent systems with a tool as simple as a search query. The ease of global access to patents can help aspiring inventors accelerate the innovation process by building upon others' patented inventions through licenses. There is a wide variety of use cases for patent NFTs, including SMEs, patent organizations, grants and funding, and fundraising/transferring information relating to patents. These applications keep growing as time progresses, and new ways to utilize these tokens are constantly being found. Some of the most common applications are as follows.

SMEs

The aim is to move intellectual property assets onto a digital, centralized, and secure blockchain network, enabling easier commercialization of patents, especially for small or medium enterprises (SMEs). Smart contracts can be attached to NFTs so terms of use and ownership can be outlined and agreed upon without incurring as many legal fees as traditional IP transfers. This is believed to help SMEs secure funding, as they could more easily leverage the previously undisclosed value of their patent portfolios63.

Transfer ownership of patents

NFTs can be used to transfer ownership of patents. The blockchain can be used to keep track of patent owners, and tokens would include self-executing contracts that transfer the legal rights associated with patents when the tokens are transferred. A partnership between IBM and IPwe has spearheaded the use of NFTs to secure patent ownership; these two companies have teamed up to build the infrastructure for an NFT-based patent marketplace.

Discussion

There are exciting proposals in the legal and economic literature that suggest seemingly straightforward solutions to many of the issues plaguing current patent systems. However, most would constitute major administrative disruptions and place significant, continuing financial burdens on patent offices or their users. An NFT-based patent system not only makes many of these ideas administratively feasible but can also be examined in a step-wise, scalable, and very public manner.

Furthermore, NFT-based patents may facilitate reliable information sharing among offices and patentees worldwide, reducing the burden on examiners and perhaps even accelerating harmonization efforts. NFT-based patents also have additional transparency and archival attributes baked in. A patent should be a privilege bestowed on those who take resource-intensive risks to explore the frontier of technological capabilities, and as a reward for their achievements, full transparency of these rewards is very much in the public interest. It is society that pays for the administrative and economic inefficiencies that exist in today's systems, and NFT-based patents can enhance this transparency. From an organizational perspective, NFT-based patents can remove current bottlenecks in patent processes by making these processes more efficient, rapid, and convenient for applicants without compromising the quality of granted patents.

The proposed framework encounters some challenges that must be solved to reach a mature patent verification platform. First, there are technical problems. The consensus method used in the verification layer is not addressed in detail; given the permissioned structure of miners in the NFT-based patent system, consensus algorithms designed for permissioned blockchains, such as PBFT, Federated Consensus, and Round Robin Consensus, can be applied. Also, miners/validators spend time validating the patents, so a protocol should be designed to compensate them. Challenges such as proving the miners' time and effort, the price that inventors should pay to miners, and other economic trade-offs should be considered.

Different NFT standards were discussed. If various patent services use different NFT standards, cross-platform problems will arise. For instance, transferring an NFT from the Ethereum blockchain (an ERC-721 token) to the EOS blockchain is not straightforward and needs some consideration. Also, people usually trade NFTs in marketplaces such as Rarible and OpenSea; these marketplaces are centralized and may prompt challenges because of their centralized nature. Besides, there are other types of challenges, for example, those stemming from the novelty of NFT-based patents and blockchain services.

Blockchain-based patent services have not been tested before, and the patent registration procedure and the concepts of the patents-as-NFTs system may be ambiguous for people who still prefer conventional centralized patent systems over decentralized ones. It should also be noted that there are problems in the mining part: miners should receive certificates from accepted organizations, and determining these organizations and how they accept referees as validators needs more consideration. Some types of inventions are prohibited in some countries, and inventors cannot register them; since inventors can register their patents publicly in an NFT-based system, conflicts may occur between inventors and governments. There are also misunderstandings about NFT ownership rights: it is not clear exactly which rights a person acquires when buying an NFT, for instance, whether they receive property rights or moral rights as well.

Conclusion

Blockchain technology provides strong timestamping, the potential for smart contracts, and proof of existence. It enables the creation of a transparent, distributed, cost-effective, and resilient environment that is open to all and in which each transaction is auditable. Blockchain is a definite boon to the IP industry, benefiting patent owners; when blockchain technology's intrinsic characteristics are applied to the IP domain, it helps copyright as well. This paper provided a conceptual framework for presenting NFT-based patents with a comprehensive discussion of many aspects: background, model components, token standards, application areas, and research challenges. The proposed framework includes five main layers: Storage Layer, Authentication Layer, Verification Layer, Blockchain Layer, and Application Layer. The primary purpose of this patent framework is to provide an NFT-based concept that can be used to register patents in a decentralized, tamper-proof, and trustworthy network for trade and exchange around the world. Finally, we addressed several open challenges of NFT-based patents.

References

  1. Nakamoto, S. Bitcoin: A peer-to-peer electronic cash system. Decentralized Business Review 21260. https://bitcoin.org/bitcoin.pdf (2008).
  2. Buterin, V. A next-generation smart contract and decentralized application platform. White Paper 3 (2014).
  3. Nofer, M., Gomber, P., Hinz, O. & Schiereck, D. Blockchain. Business & Information Systems Engineering 59, 183–187 (2017).
  4. Regner, F., Urbach, N. & Schweizer, A. NFTs in practice—non-fungible tokens as core component of a blockchain-based event ticketing application. https://www.researchgate.net/publication/336057493_NFTs_in_Practice_-_Non-Fungible_Tokens_as_Core_Component_of_a_Blockchain-based_Event_Ticketing_Application (2019).
  5. Entriken, W., Shirley, D., Evans, J. & Sachs, N. EIP-721: ERC-721 non-fungible token standard. Ethereum Improvement Proposals. https://eips.ethereum.org/EIPS/eip-721 (2018).
  6. Radomski, W. et al. EIP-1155: ERC-1155 multi token standard. Ethereum Standard (2018).
  7. Dowling, M. Is non-fungible token pricing driven by cryptocurrencies? Finance Res. Lett. 44, 102097. https://doi.org/10.1016/j.frl.2021.102097 (2021).
  8. Lesavre, L., Varin, P. & Yaga, D. Blockchain Networks: Token Design and Management Overview (National Institute of Standards and Technology, 2020).
  9. Larva Labs. About CryptoPunks. Retrieved 13 May 2021, from https://www.larvalabs.com/cryptopunks (2021).
  10. CryptoKitties. About CryptoKitties. Retrieved 28 May 2021, from https://www.cryptokitties.co/ (2021).
  11. NBA Top Shot. About NBA Top Shot. Retrieved 4 April 2021, from https://nbatopshot.com/terms (2021).
  12. Fairfield, J. Tokenized: The law of non-fungible tokens and unique digital property. Indiana Law J. (forthcoming, 2021).
  13. Chevet, S. Blockchain technology and non-fungible tokens: Reshaping value chains in creative industries. Available at SSRN 3212662 (2018).
  14. Bal, M. & Ner, C. NFTracer: A non-fungible token tracking proof-of-concept using Hyperledger Fabric. arXiv preprint arXiv:1905.04795 (2019).
  15. Wang, Q., Li, R., Wang, Q. & Chen, S. Non-fungible token (NFT): Overview, evaluation, opportunities and challenges. arXiv preprint arXiv:2105.07447 (2021).
  16. Qu, Q., Nurgaliev, I., Muzammal, M., Jensen, C. S. & Fan, J. On spatio-temporal blockchain query processing. Future Gener. Comput. Syst. 98, 208–218 (2019).
  17. Rosenfeld, M. Overview of colored coins. White paper, bitcoil.co.il 41, 94 (2012).
  18. Obsidian Labs. dGoods Standard. Retrieved 29 April 2021, from https://docs.eosstudio.io/contracts/dgoods/standard.html (2021).
  19. Algorand. Algorand Core Technology Innovation. Retrieved 10 March 2021, from https://www.algorand.com/technology/core-blockchain-innovation (2021).
  20. Weathersby, J. Building NFTs on Algorand. Retrieved 15 April 2021, from https://developer.algorand.org/articles/building-nfts-on-algorand/ (2021).
  21. Algorand. How Algorand Democratizes the Access to the NFT Market with Fractional NFTs. Retrieved 7 April 2021, from https://www.algorand.com/resources/blog/algorand-nft-market-fractional-nfts (2021).
  22. Tezos. Welcome to the Tezos Developer Documentation. Retrieved 16 May 2021, from https://tezos.gitlab.io (2021).
  23. Flow. Non-Fungible Tokens. Retrieved 20 May 2021, from https://docs.onflow.org/cadence/tutorial/04-non-fungible-tokens/ (2021).
  24. Benisi, N. Z., Aminian, M. & Javadi, B. Blockchain-based decentralized storage networks: A survey. J. Netw. Comput. Appl. 162, 102656 (2020).
  25. NFT Review. On-chain vs. Off-chain Metadata (2021).
  26. Benet, J. IPFS: Content addressed, versioned, P2P file system. arXiv preprint arXiv:1407.3561 (2014).
  27. Nizamuddin, N., Salah, K., Azad, M. A., Arshad, J. & Rehman, M. Decentralized document version control using Ethereum blockchain and IPFS. Comput. Electr. Eng. 76, 183–197 (2019).
  28. Tut, K. Who Is Responsible for NFT Data? (2020).
  29. NFT.Storage. Free Storage for NFTs. Retrieved 16 May 2021, from https://nft.storage/ (2021).
  30. Psaras, Y. & Dias, D. In 2020 50th Annual IEEE-IFIP International Conference on Dependable Systems and Networks, Supplemental Volume (DSN-S), 80–80 (IEEE, 2020).
  31. Tanner, J. & Roelofs, C. NFTs and the Need for Self-Sovereign Identity (2021).
  32. Martens, D., Tuyll van Serooskerken, A. V. & Steenhagen, M. Exploring the potential of blockchain for KYC. J. Digit. Bank. 2, 123–131 (2017).
  33. Hammi, M. T., Bellot, P. & Serhrouchni, A. In 2018 IEEE Wireless Communications and Networking Conference (WCNC), 1–6 (IEEE, 2018).
  34. Khalid, U. et al. A decentralized lightweight blockchain-based authentication mechanism for IoT systems. Cluster Comput. 1–21 (2020).
  35. Zhong, Y. et al. Distributed blockchain-based authentication and authorization protocol for smart grid. Wirel. Commun. Mobile Comput. (2021).
  36. Schönhals, A., Hepp, T. & Gipp, B. In Proceedings of the 1st Workshop on Cryptocurrencies and Blockchains for Distributed Systems, 105–110.
  37. Verma, S. & Prajapati, G. A survey of cryptographic hash algorithms and issues. Int. J. Comput. Secur. Source Code Anal. (IJCSSCA) 1, 17–20 (2015).
  38. Verma, S. & Prajapati, G. A survey of cryptographic hash algorithms and issues. Int. J. Comput. Secur. Source Code Anal. (IJCSSCA) 1 (2015).
  39. SDK, I. X.509 Certificates (1996).
  40. Helliar, C. V., Crawford, L., Rocca, L., Teodori, C. & Veneziani, M. Permissionless and permissioned blockchain diffusion. Int. J. Inf. Manag. 54, 102136 (2020).
  41. Frizzo-Barker, J. et al. Blockchain as a disruptive technology for business: A systematic review. Int. J. Inf. Manag. 51, 102029 (2020).
  42. Bamakan, S. M. H., Motavali, A. & Bondarti, A. B. A survey of blockchain consensus algorithms performance evaluation criteria. Expert Syst. Appl. 154, 113385 (2020).
  43. Bamakan, S. M. H., Bondarti, A. B., Bondarti, P. B. & Qu, Q. Blockchain technology forecasting by patent analytics and text mining. Blockchain Res. Appl. 100019 (2021).
  44. Castro, M. & Liskov, B. Practical Byzantine fault tolerance and proactive recovery. ACM Trans. Comput. Syst. (TOCS) 20, 398–461 (2002).
  45. Muratov, F., Lebedev, A., Iushkevich, N., Nasrulin, B. & Takemiya, M. YAC: BFT consensus algorithm for blockchain. arXiv preprint arXiv:1809.00554 (2018).
  46. Bessani, A., Sousa, J. & Alchieri, E. E. In 2014 44th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, 355–362 (IEEE, 2014).
  47. Todd, P. Ripple protocol consensus algorithm review (2015).
  48. Ongaro, D. & Ousterhout, J. In 2014 USENIX Annual Technical Conference (USENIX ATC 14), 305–319 (2014).
  49. Larimer, D. Delegated proof-of-stake (DPOS). BitShares whitepaper. Retrieved 31 March 2019, from http://docs.bitshares.org/bitshares/dpos.html (2014).
  50. Turner, B. (October, 2007).
  51. De Angelis, S. et al. PBFT vs proof-of-authority: Applying the CAP theorem to permissioned blockchain (2018).
  52. King, S. & Nadal, S. PPCoin: Peer-to-peer crypto-currency with proof-of-stake. Self-published paper, 19 August (2012).
  53. Hyperledger. PoET 1.0 Specification (2017).
  54. Buntinx, J. What Is Proof-of-Weight? Retrieved 31 March 2019, from https://nulltx.com/what-is-proof-of-weight/# (2018).
  55. P4Titan. A Peer-to-Peer Crypto-Currency with Proof-of-Burn. Retrieved 10 March 2019, from https://github.com/slimcoin-project/slimcoin-project.github.io/raw/master/whitepaperSLM.pdf (2014).
  56. Dziembowski, S., Faust, S., Kolmogorov, V. & Pietrzak, K. In Annual Cryptology Conference, 585–605 (Springer).
  57. Bentov, I., Lee, C., Mizrahi, A. & Rosenfeld, M. Proof of activity: Extending Bitcoin's proof of work via proof of stake. IACR Cryptology ePrint Archive 2014, 452 (2014).
  58. NEM. NEM Technical Reference. https://nem.io/wpcontent/themes/nem/files/NEM_techRef.pdf (2018).
  59. Bramas, Q. The Stability and the Security of the Tangle (2018).
  60. Baird, L. The Swirlds hashgraph consensus algorithm: Fair, fast, Byzantine fault tolerance. Swirlds Tech Reports SWIRLDS-TR-2016-01 (2016).
  61. LeMahieu, C. Nano: A feeless distributed cryptocurrency network. Nano [Online resource]. https://nano.org/en/whitepaper (date of access: 24.03. 2018) 16, 17 (2018).
  62. Casino, F., Dasaklis, T. K. & Patsakis, C. A systematic literature review of blockchain-based applications: Current status, classification and open issues. Telematics Inform. 36, 55–81 (2019).Article Google Scholar 
  63. bigredawesomedodo. Helping Small Businesses Survive and Grow With Marketing, Retrieved 3 June, 2021, from https://bigredawesomedodo.com/nft/. (2020).


Acknowledgements

This work has been partially supported by CAS President’s International Fellowship Initiative, China [grant number 2021VTB0002, 2021] and National Natural Science Foundation of China (No. 61902385).

Author information

Affiliations

  1. Seyed Mojtaba Hosseini Bamakan, Department of Industrial Management, Yazd University, Yazd, Iran
  2. Nasim Nezhadsistani, Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan, Iran
  3. Omid Bodaghi, School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran
  4. Seyed Mojtaba Hosseini Bamakan & Qiang Qu, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
  5. Qiang Qu, Huawei Blockchain Lab, Huawei Cloud Tech Co., Ltd., Shenzhen, China

Contributions

NFT: Redefined Format of IP Assets

The collaboration between National Center for Advancing Translational Sciences (NCATS) at NIH and BurstIQ

2.0 LPBI is a Very Unique Organization 

 

Read Full Post »

Reporter: Stephen J. Williams, Ph.D.

From: Heidi Rehm et al. GA4GH: International policies and standards for data sharing across genomic research and healthcare. (2021): Cell Genomics, Volume 1, Issue 2.

Source: DOI: https://doi.org/10.1016/j.xgen.2021.100029

Highlights

  • Siloing genomic data in institutions/jurisdictions limits learning and knowledge
  • GA4GH policy frameworks enable responsible genomic data sharing
  • GA4GH technical standards ensure interoperability, broad access, and global benefits
  • Data sharing across research and healthcare will extend the potential of genomics

Summary

The Global Alliance for Genomics and Health (GA4GH) aims to accelerate biomedical advances by enabling the responsible sharing of clinical and genomic data through both harmonized data aggregation and federated approaches. The decreasing cost of genomic sequencing (along with other genome-wide molecular assays) and increasing evidence of its clinical utility will soon drive the generation of sequence data from tens of millions of humans, with increasing levels of diversity. In this perspective, we present the GA4GH strategies for addressing the major challenges of this data revolution. We describe the GA4GH organization, which is fueled by the development efforts of eight Work Streams and informed by the needs of 24 Driver Projects and other key stakeholders. We present the GA4GH suite of secure, interoperable technical standards and policy frameworks and review the current status of standards, their relevance to key domains of research and clinical care, and future plans of GA4GH. Broad international participation in building, adopting, and deploying GA4GH standards and frameworks will catalyze an unprecedented effort in data sharing that will be critical to advancing genomic medicine and ensuring that all populations can access its benefits.

In order for genomic and personalized medicine to come to fruition, it is imperative that data silos around the world are broken down, allowing international collaboration on the collection, storage, transfer, access, and analysis of molecular and health-related data.

We have discussed in numerous articles on this site the problems data silos produce. By data silos we mean that not only data but also intellectual output are held behind physical, electronic, and intellectual walls, inaccessible to scientists who do not belong to a particular institution or collaborative network.

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

Standardization and harmonization of data is key to this effort to sharing electronic records. The EU has taken bold action in this matter. The following section is about the General Data Protection Regulation of the EU and can be found at the following link:

https://ec.europa.eu/info/law/law-topic/data-protection/data-protection-eu_en

Fundamental rights

The EU Charter of Fundamental Rights stipulates that EU citizens have the right to protection of their personal data.

Protection of personal data

Legislation

The data protection package adopted in May 2016 aims at making Europe fit for the digital age. More than 90% of Europeans say they want the same data protection rights across the EU, regardless of where their data is processed.

The General Data Protection Regulation (GDPR)

Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data. This text includes the corrigendum published in the OJEU of 23 May 2018.

The regulation is an essential step to strengthen individuals’ fundamental rights in the digital age and facilitate business by clarifying rules for companies and public bodies in the digital single market. A single law will also do away with the current fragmentation in different national systems and unnecessary administrative burdens.

The regulation entered into force on 24 May 2016 and has applied since 25 May 2018. More information is available for companies and individuals.

Information about the incorporation of the General Data Protection Regulation (GDPR) into the EEA Agreement.

EU Member States notifications to the European Commission under the GDPR

The Data Protection Law Enforcement Directive

Directive (EU) 2016/680 on the protection of natural persons regarding processing of personal data connected with criminal offences or the execution of criminal penalties, and on the free movement of such data.

The directive protects citizens’ fundamental right to data protection whenever personal data is used by criminal law enforcement authorities for law enforcement purposes. It will in particular ensure that the personal data of victims, witnesses, and suspects of crime are duly protected and will facilitate cross-border cooperation in the fight against crime and terrorism.

The directive entered into force on 5 May 2016 and EU countries had to transpose it into their national law by 6 May 2018.

The following paper by the organization The Global Alliance for Genomics and Health discusses these types of collaborative efforts to break down data silos in personalized medicine. This organization has more than 1,000 individual members in over 90 countries, encompassing 600+ organizations.

Enabling responsible genomic data sharing for the benefit of human health

The Global Alliance for Genomics and Health (GA4GH) is a policy-framing and technical standards-setting organization, seeking to enable responsible genomic data sharing within a human rights framework.

The Global Alliance for Genomics and Health (GA4GH) is an international, nonprofit alliance formed in 2013 to accelerate the potential of research and medicine to advance human health. Bringing together 600+ leading organizations working in healthcare, research, patient advocacy, life science, and information technology, the GA4GH community is working together to create frameworks and standards to enable the responsible, voluntary, and secure sharing of genomic and health-related data. All of our work builds upon the Framework for Responsible Sharing of Genomic and Health-Related Data.

GA4GH Connect is a five-year strategic plan that aims to drive uptake of standards and frameworks for genomic data sharing within the research and healthcare communities in order to enable responsible sharing of clinical-grade genomic data by 2022. GA4GH Connect links our Work Streams with Driver Projects—real-world genomic data initiatives that help guide our development efforts and pilot our tools.

From the Cell Genomics article "GA4GH: International policies and standards for data sharing across genomic research and healthcare":

Source: Open Access. DOI: https://doi.org/10.1016/j.xgen.2021.100029

The Global Alliance for Genomics and Health (GA4GH) is a worldwide alliance of genomics researchers, data scientists, healthcare practitioners, and other stakeholders. We are collaborating to establish policy frameworks and technical standards for responsible, international sharing of genomic and other molecular data as well as related health data. Founded in 2013,3 the GA4GH community now consists of more than 1,000 individuals across more than 90 countries working together to enable broad sharing that transcends the boundaries of any single institution or country (see https://www.ga4gh.org).

In this perspective, we present the strategic goals of GA4GH and detail current strategies and operational approaches to enable responsible sharing of clinical and genomic data, through both harmonized data aggregation and federated approaches, to advance genomic medicine and research. We describe technical and policy development activities of the eight GA4GH Work Streams and implementation activities across 24 real-world genomic data initiatives ("Driver Projects"). We review how GA4GH is addressing the major areas in which genomics is currently deployed including rare disease, common disease, cancer, and infectious disease. Finally, we describe differences between genomic sequence data that are generated for research versus healthcare purposes, and define strategies for meeting the unique challenges of responsibly enabling access to data acquired in the clinical setting.

GA4GH organization

GA4GH has partnered with 24 real-world genomic data initiatives (Driver Projects) to ensure its standards are fit for purpose and driven by real-world needs. Driver Projects make a commitment to help guide GA4GH development efforts and pilot GA4GH standards (see Table 2). Each Driver Project is expected to dedicate at least two full-time equivalents to GA4GH standards development, which takes place in the context of GA4GH Work Streams (see Figure 1). Work Streams are the key production teams of GA4GH, tackling challenges in eight distinct areas across the data life cycle (see Box 1). Work Streams consist of experts from their respective sub-disciplines and include membership from Driver Projects as well as hundreds of other organizations across the international genomics and health community.

Figure 1. Matrix structure of the Global Alliance for Genomics and Health.


Box 1
GA4GH Work Stream focus areas

The GA4GH Work Streams are the key production teams of the organization. Each tackles a specific area in the data life cycle, as described below (URLs listed in the web resources).

  • (1) Data use & researcher identities: Develops ontologies and data models to streamline global access to datasets generated in any country9,10
  • (2) Genomic knowledge standards: Develops specifications and data models for exchanging genomic variant observations and knowledge18
  • (3) Cloud: Develops federated analysis approaches to support the statistical rigor needed to learn from large datasets
  • (4) Data privacy & security: Develops guidelines and recommendations to ensure identifiable genomic and phenotypic data remain appropriately secure without sacrificing their analytic potential
  • (5) Regulatory & ethics: Develops policies and recommendations for ensuring individual-level data are interoperable with existing norms and follow core ethical principles
  • (6) Discovery: Develops data models and APIs to make data findable, accessible, interoperable, and reusable (FAIR); a minimal query sketch follows this list
  • (7) Clinical & phenotypic data capture & exchange: Develops data models to ensure genomic data is most impactful through rich metadata collected in a standardized way
  • (8) Large-scale genomics: Develops APIs and file formats to ensure harmonized technological platforms can support large-scale computing
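To make the Discovery Work Stream's API standards concrete, here is a minimal sketch of a Beacon-style discovery query: a yes/no question about whether any participating dataset contains a given variant, answered without returning individual-level records. The base URL and the exact parameter and response field names are illustrative assumptions in the spirit of the GA4GH Beacon protocol, not a reference to any specific deployed service.

```python
# Hypothetical sketch of a GA4GH Beacon-style discovery query.
# The endpoint and field names below are illustrative placeholders.
import requests

BEACON_BASE = "https://beacon.example.org/api"  # hypothetical endpoint


def variant_exists(chromosome: str, start: int, alt: str) -> bool:
    """Ask a Beacon-style service whether any dataset contains a variant."""
    resp = requests.get(
        f"{BEACON_BASE}/g_variants",
        params={
            "referenceName": chromosome,
            "start": start,
            "alternateBases": alt,
            "assemblyId": "GRCh38",
        },
        timeout=30,
    )
    resp.raise_for_status()
    # Beacon-style responses report only whether matching records exist,
    # which is what makes them safe for broad discovery use.
    return bool(resp.json().get("responseSummary", {}).get("exists", False))


if __name__ == "__main__":
    print(variant_exists("17", 43057051, "T"))
```

The design point mirrors the Work Stream's goal: the service answers "does this exist anywhere?" so data remain findable and accessible without any individual-level records leaving their home institution.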

For more articles on Open Access, Science 2.0, and Data Networks for Genomics on this Open Access Scientific Journal see:

Scientific Curation Fostering Expert Networks and Open Innovation: Lessons from Clive Thompson and others

Icelandic Population Genomic Study Results by deCODE Genetics come to Fruition: Curation of Current genomic studies

eScientific Publishing a Case in Point: Evolution of Platform Architecture Methodologies and of Intellectual Property Development (Content Creation by Curation) Business Model 

UK Biobank Makes Available 200,000 whole genomes Open Access

Systems Biology Analysis of Transcription Networks, Artificial Intelligence, and High-End Computing Coming to Fruition in Personalized Oncology

Read Full Post »

Developing Machine Learning Models for Prediction of Onset of Type-2 Diabetes

Reporter: Amandeep Kaur, B.Sc., M.Sc.

A recent study reports the development of an advanced AI algorithm that predicts the onset of type 2 diabetes up to five years in advance using routinely collected medical data. The researchers described their AI model as notable and distinctive because it is specifically designed to perform assessments at the population level.

The first author, Mathieu Ravaut, M.Sc., of the University of Toronto, and other team members stated: "The main purpose of our model was to inform population health planning and management for the prevention of diabetes that incorporates health equity. It was not our goal for this model to be applied in the context of individual patient care."

The research group collected data from 2006 to 2016 on approximately 2.1 million patients treated within the same healthcare system in Ontario, Canada. Even though the patients belonged to the same region, the authors highlighted that Ontario encompasses a large and diverse population.

The newly developed algorithm was trained on data from approximately 1.6 million patients, validated on data from about 243,000 patients, and tested on data from more than 236,000 patients. The data used to train the algorithm included each patient's medical history from the previous two years: prescriptions, medications, lab tests, and demographic information.

When predicting the onset of type 2 diabetes within five years, the model reached a test area under the receiver operating characteristic (ROC) curve of 80.26%.
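To make that evaluation concrete, the sketch below reproduces the pattern the study describes: a train/validate/test split scored by area under the ROC curve. The data and the logistic-regression model are synthetic stand-ins, since the Ontario administrative records and the study's actual model are not public; only the evaluation structure, not the numbers, mirrors the paper.

```python
# Minimal sketch of a train / validate / test evaluation scored by ROC AUC.
# Synthetic data and a logistic regression stand in for the (non-public)
# Ontario administrative records and the study's actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 20))                          # stand-in features
y = (X[:, 0] + rng.normal(size=10_000) > 1.0).astype(int)  # stand-in labels

# Hold out validation and test sets, echoing the study's
# ~1.6M train / ~243K validation / ~236K test structure.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.25, random_state=0
)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
val_auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
test_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"validation AUC: {val_auc:.4f}")
print(f"test AUC:       {test_auc:.4f}")
```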

The authors reported that “Our model showed consistent calibration across sex, immigration status, racial/ethnic and material deprivation, and a low to moderate number of events in the health care history of the patient. The cohort was representative of the whole population of Ontario, which is itself among the most diverse in the world. The model was well calibrated, and its discrimination, although with a slightly different end goal, was competitive with results reported in the literature for other machine learning–based studies that used more granular clinical data from electronic medical records without any modifications to the original test set distribution.”

This model could potentially improve the healthcare systems of countries equipped with thorough administrative databases and could be aimed at specific cohorts that may otherwise face poor outcomes.

The research group stated: "Because our machine learning model included social determinants of health that are known to contribute to diabetes risk, our population-wide approach to risk assessment may represent a tool for addressing health disparities."

Sources:

https://www.cardiovascularbusiness.com/topics/prevention-risk-reduction/new-ai-model-healthcare-data-predict-type-2-diabetes?utm_source=newsletter

Reference:

Ravaut M, Harish V, Sadeghi H, et al. Development and Validation of a Machine Learning Model Using Administrative Health Data to Predict Onset of Type 2 Diabetes. JAMA Netw Open. 2021;4(5):e2111315. doi:10.1001/jamanetworkopen.2021.11315 https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2780137

Other related articles were published in this Open Access Online Scientific Journal, including the following:

AI in Drug Discovery: Data Science and Core Biology @Merck &Co, Inc., @GNS Healthcare, @QuartzBio, @Benevolent AI and Nuritas

Reporters: Aviva Lev-Ari, PhD, RN and Irina Robu, PhD

https://pharmaceuticalintelligence.com/2020/08/27/ai-in-drug-discovery-data-science-and-core-biology-merck-co-inc-gns-healthcare-quartzbio-benevolent-ai-and-nuritas/

Can Blockchain Technology and Artificial Intelligence Cure What Ails Biomedical Research and Healthcare

Curator: Stephen J. Williams, Ph.D.

https://pharmaceuticalintelligence.com/2018/12/10/can-blockchain-technology-and-artificial-intelligence-cure-what-ails-biomedical-research-and-healthcare/

HealthCare focused AI Startups from the 100 Companies Leading the Way in A.I. Globally

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/01/18/healthcare-focused-ai-startups-from-the-100-companies-leading-the-way-in-a-i-globally/

AI in Psychiatric Treatment – Using Machine Learning to Increase Treatment Efficacy in Mental Health

Reporter: Aviva Lev- Ari, PhD, RN

https://pharmaceuticalintelligence.com/2019/06/04/ai-in-psychiatric-treatment-using-machine-learning-to-increase-treatment-efficacy-in-mental-health/

Vyasa Analytics Demos Deep Learning Software for Life Sciences at Bio-IT World 2018 – Vyasa’s booth (#632)

Reporter: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2018/05/10/vyasa-analytics-demos-deep-learning-software-for-life-sciences-at-bio-it-world-2018-vyasas-booth-632/

New Diabetes Treatment Using Smart Artificial Beta Cells

Reporter: Irina Robu, PhD

https://pharmaceuticalintelligence.com/2017/11/08/new-diabetes-treatment-using-smart-artificial-beta-cells/

Read Full Post »

re:Invent 2020 – Virtual 3-Week Conference, Nov. 30 – Dec. 18, 2020: How Healthcare & Life Sciences leaders are using AWS to transform their businesses and innovate on behalf of their customers.

 

Preview the tracks that will be available, along with the general agenda, on the website. 

 

https://reinvent.awsevents.com/

 

https://virtual.awsevents.com/agenda

 

awsreinvent-support@amazon.com

Marketplace

How to sell on AWS Marketplace
It covers how to list your software product on AWS Marketplace.

Migrate & build faster on AWS with services in AWS Marketplace
Learn how freelancers and consulting firms can get work done on AWS, and about Professional Services in AWS Marketplace.

Natural Language Processing

Enterprise Intelligent Search with Amazon Kendra
Facilitate real-time access to crucial data with natural language processing, machine learning, and …

[NEW LAUNCH!] Introducing Amazon QuickSight Q: Ask questions on data & get answers in seconds
QuickSight Q uses natural language processing and semantic data understanding techniques to make sense of the data.
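As a concrete illustration of the natural-language enterprise search these listings describe, here is a hedged boto3 sketch of a Kendra query. The index ID, region, and question are placeholders, and configured AWS credentials are assumed.

```python
# Hedged sketch of querying an Amazon Kendra index with boto3.
# The index ID and query text are placeholders; AWS credentials are assumed.
import boto3

kendra = boto3.client("kendra", region_name="us-east-1")

response = kendra.query(
    IndexId="00000000-0000-0000-0000-000000000000",  # hypothetical index ID
    QueryText="What is our telehealth onboarding process?",
)

for item in response.get("ResultItems", []):
    # Each result item carries a type (e.g., ANSWER, DOCUMENT) and a title.
    title = item.get("DocumentTitle", {}).get("Text", "")
    print(item["Type"], "-", title)
```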

BLOCKCHAIN

Nestlé brings supply chain transparency with Amazon Managed Blockchain
Using AWS and Amazon Managed Blockchain, Nestlé can store supply chain transactions in ways that are …

What's new in Amazon Managed Blockchain
Create and manage scalable blockchain networks using popular open-source technologies (a short boto3 sketch follows this list).

Trust and transparency in live virtual events
Build trusted experiences using the Amazon Interactive Video Service, Amazon Chime, Amazon QLDB, and Amazon Managed Blockchain.

Introduction to Blockchain and Amazon Managed Blockchain
Blockchain can help you overcome those challenges.

Building complex compute workflows with AWS Step Functions and AWS Fargate
AWS offers a range of solutions for managing large-scale distributed workloads.

Financial Services Technology 2020 and Beyond: Embracing Disruption

In this report, we look at the technology forces that matter, from FinTech to Blockchain to robotics
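For readers who want a feel for the Managed Blockchain service mentioned in the list above, here is a minimal boto3 sketch that enumerates the Hyperledger Fabric networks visible to an account. It assumes configured AWS credentials with read permissions; no network names or IDs here are real.

```python
# Minimal sketch: enumerate Amazon Managed Blockchain networks with boto3.
# Assumes configured AWS credentials; output depends on your account.
import boto3

amb = boto3.client("managedblockchain", region_name="us-east-1")

# List Hyperledger Fabric networks visible to this account.
networks = amb.list_networks(Framework="HYPERLEDGER_FABRIC")
for net in networks.get("Networks", []):
    print(net["Id"], net["Name"], net["Status"])
```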

AWS re:Invent 2020 – Life Sciences Attendee Guide

AWS re:Invent routinely fills several Las Vegas venues with standing-room-only crowds, but we are bringing it to you as an all-virtual, free event this year. This year's conference is gearing up to be our biggest yet, and we have an exciting program planned, with five keynotes, 18 leadership sessions, and over 500 breakout sessions beginning November 30. Hear how AWS experts and inspiring members of the Life Sciences & Genomics industry are using cloud technology to transform their businesses and innovate on behalf of their customers. For Life Sciences attendees looking to get the most out of their experience, follow these steps:

  • Register for re:Invent.
  • Take a look at all of the Life Sciences sessions available, as well as lots of other information and additional activities, in our curated Life Sciences Attendee Guide coming soon!
  • Check back on this post regularly, as we’ll continually update it to reflect the newest information.

Life Sciences at re:Invent 2020

AWS enables pharma and biotech companies to transform every stage of the pharma value chain, with services that enhance data liquidity, operational excellence, and customer engagement. AWS is the trusted technology provider with the cost-effective compute and storage, machine learning capabilities, and customer-centric know-how to help companies embrace innovation and bring differentiated therapeutics to market faster.

Dates and Presentation Title

Blockchain Architecture

DEC 1, 2020 | 12:00 AM – 12:20 AM EST
Nestlé brings supply chain transparency with Amazon Managed Blockchain
Chain of origin is the Nestlé answer to complete supply chain transparency, from crop to coffee cup, with technology at its heart. Today, consumers want to know about the quality of their product and know where it is sourced from. Using AWS and Amazon Managed Blockchain, Nestlé can store supply chain transactions in ways that are …
Tags: Blockchain, Retail

DEC 10, 2020 | 3:15 PM – 3:45 PM EST (rebroadcast DEC 10 11:15 PM – 11:45 PM and DEC 11 7:15 AM – 7:45 AM EST)
Trust and transparency in live virtual events
This session demonstrates how to build interactive, trusted, and transparent live virtual experiences using the Amazon Interactive Video Service, Amazon Chime, Amazon QLDB, and Amazon Managed Blockchain to capture cryptographic, immutable, and verifiable records of real-time audience interactions.
Tags: Architecture, Blockchain

DEC 15, 2020 | 4:00 PM – 4:30 PM EST (rebroadcast DEC 16 12:00 AM – 12:30 AM and DEC 16 8:00 AM – 8:30 AM EST)
What's new in Amazon Managed Blockchain
Amazon Managed Blockchain is a fully managed service that makes it easy for you to create and manage scalable blockchain networks using popular open-source technologies. Blockchain technologies enable groups of organizations to securely transact and …
Tags: Blockchain, Financial Services

Life Science and Healthcare

DEC 1, 2020 | 12:00 AM – 12:20 AM EST
Building robots to help hospitals become safer and smarter
In hospitals and other healthcare venues, robots increasingly perform contactless delivery and autonomous maintenance services to reduce the risk of exposing patients and medical staff to harmful viruses and bacteria. In this session, see how Solaris JetBrain and M…
Tags: Robotics, IoT

Huami builds global cloud-based health services on AWS

Best practices in building a data lake for healthcare

DEC 1, 2020 | 5:15 PM – 5:45 PM EST (rebroadcast DEC 2 1:15 AM – 1:45 AM and DEC 2 9:15 AM – 9:45 AM EST)
Transform research environments with Service Workbench on AWS
Re-envision how research environments are spun up by reducing wait times from days to minutes. Service Workbench on AWS promotes repeatability, multi-institutional collaboration, and transparency in the research process. In this session, learn how Harvard Medical School is procuring and deploying domain-specific data, tools, and secure IT environments to accelerate research.
Tags: Public Sector, Healthcare

DEC 2, 2020 | 11:00 AM – 11:30 AM EST (rebroadcast DEC 2 7:00 PM – 7:30 PM and DEC 3 3:00 AM – 3:30 AM EST)
Reinventing medical imaging with machine learning on AWS
It is hard to imagine the future of medical imaging without machine learning (ML) as its central innovation engine. Countless researchers, developers, startups, and larger enterprises are engaged in building, training, and deploying ML solutions for medical imaging …
Tags: Public Sector, Healthcare

DEC 2, 2020 | 2:45 PM – 3:15 PM EST (rebroadcast DEC 2 10:45 PM – 11:15 PM and DEC 3 6:45 AM – 7:15 AM EST)
Making healthcare more personal with MetroPlus Health
COVID has made a huge impact across the world, and organizations have had to adapt quickly to changing requirements as a result. Learn how MetroPlus Health, a New York City health plan covering over half a million people, leveraged AWS technology to quickly …
Tags: Healthcare, Public Sector

DEC 2, 2020 | 4:15 PM – 4:45 PM EST (rebroadcast DEC 3 12:15 AM – 12:45 AM and DEC 3 8:15 AM – 8:45 AM EST)
Securing protected health information and high-risk datasets
Join this session featuring Jonathan Cook, Chief Technology Officer at Arcadia, for a discussion around securing mission-critical and high-risk datasets such as personal health information (PHI) in the cloud. Learn how Arcadia developed a HITRUST CSF-certified platform …
Tags: Healthcare, Life Sciences

DEC 2, 2020 | 5:45 PM – 6:15 PM EST (rebroadcast DEC 3 1:45 AM – 2:15 AM and DEC 3 9:45 AM – 10:15 AM EST)
ML and analytics addressing nationwide COVID-19 impact and recovery (sponsored by Intel)
In this session, learn how Fractal.AI delivered a platform to help analyze data and make decisions related to COVID-19 progression for the government of Telangana, India, deploying it on AWS in five days. The solution, based on Intel processors, delivered more than 100 dashboards using anonymized government and public datasets with hundreds of thousands of COVID-19 d…
Tags: Public Sector, Partner Solutions for Business

DEC 2, 2020 | 6:30 PM – 7:00 PM EST (rebroadcast DEC 3 2:30 AM – 3:00 AM and DEC 3 10:30 AM – 11:00 AM EST)
How Vyaire uses AWS analytics to scale ventilator production & save lives
When COVID-19 hit the US, Vyaire Medical, one of the country’s only ventilator manufacturers, knew it would have to scale rapidly while still offering quality machinery. In order to scale to 20 times more than its usual production and gain insights from all its new …
Tags: Analytics, Healthcare


DEC 3, 2020 | 4:15 PM – 4:45 PM EST (rebroadcast DEC 4 12:15 AM – 12:45 AM and DEC 4 8:15 AM – 8:45 AM EST)
Productionizing R workloads using Amazon SageMaker, featuring Siemens
R language and its 16,000+ packages dedicated to statistics and ML are used by statisticians and data scientists in industries such as energy, healthcare, life science, and financial services. Using R, you can run simulations and ML securely and at scale with Amazon SageMaker …
Tags: Artificial Intelligence & Machine Learning, Enterprise/Migration

DEC 3, 2020 | 6:15 PM – 6:45 PM EST (rebroadcast DEC 4 2:15 AM – 2:45 AM and DEC 4 10:15 AM – 10:45 AM EST)
Accelerating the transition to telehealth with AWS
Learn how AWS is helping healthcare organizations develop and deploy telehealth solutions quickly and at scale. Join speakers from AWS and MedStar Health as they discuss their experience developing and deploying two call centers in less than a week using Amazon …
Tags: Healthcare, Public Sector


DEC 8, 2020 | 2:30 PM – 3:00 PM EST (rebroadcast DEC 8 10:30 PM – 11:00 PM and DEC 9 6:30 AM – 7:00 AM EST)
How Erickson Living built a COVID-19 outbreak management solution
As innovators in independent living, assisted living, and skilled nursing care, Erickson Living responded to the COVID-19 outbreak by building an infectious disease management system with the AWS Data Lab to prevent the spread of the virus. This session focuses on …
Tags: Analytics, Life Sciences

DEC 8, 2020 | 5:30 PM – 6:00 PM EST (rebroadcast DEC 9 1:30 AM – 2:00 AM and DEC 9 9:30 AM – 10:00 AM EST)
Rapidly deploying social services on Amazon Connect
Organizations that respond to disruptive, large-scale events need the ability to rapidly scale and iterate on their contact centers to provide services to their constituents. Amazon Connect can be set up in minutes and scale to handle virtually any number of contact …
Tags: Public Sector, Business Apps (including Connect)

DEC 8, 2020 | 5:30 PM – 6:00 PM EST (rebroadcast DEC 9 1:30 AM – 2:00 AM and DEC 9 9:30 AM – 10:00 AM EST)
Improving data liquidity in Roche’s personalized healthcare platform
Roche’s personalized healthcare mission is to accelerate drug discovery and transform the patient journey by using digital technologies and advanced analytics to facilitate greater scientific collaboration and insight sharing. In this session, Roche shares how …
Tags: Life Sciences, Artificial Intelligence & Machine Learning


DEC 9, 2020 | 10:30 AM – 11:00 AM EST (rebroadcast DEC 9 6:30 PM – 7:00 PM and DEC 10 2:30 AM – 3:00 AM EST)
BlueJeans’ explosive growth journey with AWS during the pandemic
Global video provider BlueJeans (a Verizon company) supports employees working from home, healthcare providers shifting to telehealth, and educators moving to distance learning. With so many people now working from home, BlueJeans saw explosive growth …
Tags: Telecommunications, Media & Entertainment

DEC 9, 2020 | 5:15 PM – 5:45 PM EST (rebroadcast DEC 10 1:15 AM – 1:45 AM and DEC 10 9:15 AM – 9:45 AM EST)
Using AI to automate clinical workflows
Learn how healthcare organizations can harness the power of AI and machine learning to automate clinical workflows, digitize medical information, extract and summarize medical information, protect patient data, and more. Using AWS document understanding …
Tags: Artificial Intelligence & Machine Learning, Healthcare

DEC 9, 2020 | 5:45 PM – 6:15 PM EST (rebroadcast DEC 10 1:45 AM – 2:15 AM and DEC 10 9:45 AM – 10:15 AM EST)
Building patient-centric virtualized trials
As clinical trials increasingly become decentralized and virtual, engaging effectively with patients can be challenging. In this session, hear how Evidation Health architects on AWS to create patient-centric experiences, ingests data from millions of devices in real time …
Tags: Life Sciences, Healthcare


DEC 10, 2020 | 2:30 PM – 3:00 PM EST (rebroadcast DEC 10 10:30 PM – 11:00 PM and DEC 11 6:30 AM – 7:00 AM EST)
Edge computing innovation with AWS Snowcone and AWS Snowball Edge
In this session, learn how the AWS Snow Family can help you run operations in harsh, non-data center environments and in locations where there is a lack of consistent network connectivity. The AWS Snow Family, comprised of AWS Snowcone and AWS Snowball Edge, offers a number of physical devices and capacity points with built-in computing capabilities. This session …
Tags: Storage, Business Apps (including Connect)

DEC 10, 2020 | 4:00 PM – 4:30 PM EST (rebroadcast DEC 11 12:00 AM – 12:30 AM and DEC 11 8:00 AM – 8:30 AM EST)
Healthcare executive outlook: Accelerating transformation
Join AWS Healthcare and Life Science Leader Shez Partovi, MD, for a look into how cloud technology can reshape the future of healthcare. In this executive overview, Dr. Partovi shares a vision of a digitally enhanced, data-driven future. Learn how AWS is working …
Tags: Healthcare, Life Sciences


DEC 15, 2020 | 1:00 PM – 1:30 PM EST (rebroadcast DEC 15 9:00 PM – 9:30 PM and DEC 16 5:00 AM – 5:30 AM EST)
Intelligent document processing for the insurance industry
Organizations in the insurance industry, both for healthcare and financial services, extract sensitive information such as names, dates, claims, or medical procedures from scanned images and documents to perform their business operations. These organizations …
Tags: Artificial Intelligence & Machine Learning, Healthcare

DEC 15, 2020 | 3:00 PM – 3:30 PM EST (rebroadcast DEC 15 11:00 PM – 11:30 PM and DEC 16 7:00 AM – 7:30 AM EST)
AWS Partners driving innovation amidst COVID-19
This session is open to anyone, but it is intended for current and potential AWS Partners. The COVID-19 pandemic has been a formative event affecting our world physically, emotionally, and economically. Despite the challenges created, AWS Partners have responded quickly and proven their resiliency by enabling digital transformation at unprecedented rates. In this session …
Tags: Global Partner Summit (GPS) Session

DEC 15, 2020 | 9:00 PM – 9:30 PM EST
Intelligent document processing for the insurance industry (rebroadcast)

DEC 15, 2020 | 11:00 PM – 11:30 PM EST
AWS Partners driving innovation amidst COVID-19 (rebroadcast)

Wednesday, December 16
DEC 16, 2020 | 5:00 AM – 5:30 AM EST
Intelligent document processing for the insurance industry (rebroadcast)

DEC 16, 2020 | 7:00 AM – 7:30 AM EST
AWS Partners driving innovation amidst COVID-19 (rebroadcast)

DEC 16, 2020 | 11:45 AM – 12:15 PM EST
Simplify healthcare compliance with third-party solutions (Marketplace; Healthcare)
Sensitive health data must be protected to ensure patient privacy, and healthcare organizations need to ensure that IT infrastructure is compliant with changing policies and regulations. In this session, learn how healthcare providers can address compliance …

DEC 16, 2020 | 12:45 PM – 1:15 PM EST
An introduction to healthcare interoperability and FHIR Works on AWS (Public Sector; Healthcare)
Fast Healthcare Interoperability Resources (FHIR) is gaining popularity around the world as the standard to use for exchanging healthcare data, and it is being increasingly adopted in Europe and Australasia. In the US, it is actually mandated in the 21st Century Cures Act …

DEC 16, 2020 | 1:15 PM – 1:45 PM EST
Achieving healthcare interoperability with FHIR Works on AWS (Public Sector; Healthcare)
The Fast Healthcare Interoperability Resources (FHIR) standard has become increasingly necessary for enabling interoperability between healthcare applications and organizations. Join this session for a deep dive into how the FHIR Works on AWS open-source toolkit …
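
FHIR models clinical data as resources (Patient, Observation, and so on) exchanged over a REST API, so a deployed FHIR Works on AWS endpoint can be searched like any FHIR server. A minimal sketch; the base URL and bearer token are placeholders, not real values from the project:

import requests

FHIR_BASE = "https://example.execute-api.us-east-1.amazonaws.com/dev"  # placeholder
HEADERS = {"Authorization": "Bearer <token>"}                          # placeholder

# Standard FHIR search: GET [base]/Patient?name=smith returns a Bundle resource.
resp = requests.get(f"{FHIR_BASE}/Patient", params={"name": "smith"},
                    headers=HEADERS, timeout=30)
resp.raise_for_status()
for entry in resp.json().get("entry", []):
    patient = entry["resource"]
    print(patient["id"], patient.get("name"))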

DEC 16, 2020 | 6:00 PM – 6:30 PM EST
AWS at the edge: Using AWS IoT to optimize Amazon wind farms (IoT; Manufacturing)
AWS IoT and edge computing solutions move data processing and analysis closer to where data is generated to enable customers to innovate and achieve more sustainable operations. In this session, learn how Amazon renewable energy projects use AWS IoT to collect …

DEC 16, 2020 | 7:45 PM – 8:15 PM EST
Simplify healthcare compliance with third-party solutions (rebroadcast)

DEC 16, 2020 | 8:45 PM – 9:15 PM EST
An introduction to healthcare interoperability and FHIR Works on AWS (rebroadcast)

DEC 16, 2020 | 9:15 PM – 9:45 PM EST
Achieving healthcare interoperability with FHIR Works on AWS (rebroadcast)

Thursday, December 17
DEC 17, 2020 | 2:00 AM – 2:30 AM EST
AWS at the edge: Using AWS IoT to optimize Amazon wind farms (rebroadcast)

DEC 17, 2020 | 3:45 AM – 4:15 AM EST
Simplify healthcare compliance with third-party solutions (rebroadcast)

DEC 17, 2020 | 4:45 AM – 5:15 AM EST
An introduction to healthcare interoperability and FHIR Works on AWS (rebroadcast)

DEC 17, 2020 | 5:15 AM – 5:45 AM EST
Achieving healthcare interoperability with FHIR Works on AWS (rebroadcast)

DEC 17, 2020 | 10:00 AM – 10:30 AM EST
AWS at the edge: Using AWS IoT to optimize Amazon wind farms (rebroadcast)

DEC 17, 2020 | 11:15 AM – 11:45 AM EST
Share information by removing language barriers with Amazon Translate (Artificial Intelligence & Machine Learning; Healthcare)
In this session, see how to use Amazon Translate and Amazon Transcribe to create subtitles for educational videos and translate them to the language of the consumer’s choice. The session includes a demonstration of how this process reduced the time it took for information about COVID-19 to be translated to many languages, thus spreading accurate information quickly.
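
Transcribe jobs are asynchronous, but the translation step the session describes is a single call. A minimal sketch of that step with boto3 (the sample subtitle line is made up):

import boto3

translate = boto3.client("translate")
result = translate.translate_text(
    Text="Wash your hands and watch for symptoms.",  # e.g., one subtitle line
    SourceLanguageCode="en",
    TargetLanguageCode="es",
)
print(result["TranslatedText"])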

DEC 17, 2020 | 7:15 PM – 7:45 PM EST
Share information by removing language barriers with Amazon Translate (rebroadcast)

Friday, December 18
DEC 18, 2020 | 3:15 AM – 3:45 AM EST
Share information by removing language barriers with Amazon Translate (rebroadcast)

Life Sciences sessions

Bookmark this blog and check back for direct links to each session, and add them to your re:Invent schedule as soon as the session catalog is released:

LFS201: Life Sciences Industry: Executive Outlook
Learn how AWS technology is helping organizations improve their data liquidity, achieve operational excellence, and enhance customer engagement.

LFS202: Improving data liquidity in Roche’s personalized healthcare platform
Learn how Roche’s personalized healthcare platform is accelerating drug discovery and transforming the patient journey with digital technology.

LFS302: AstraZeneca genomics on AWS: from petabytes to new medicines
Learn how AstraZeneca built an industry-leading genomics pipeline on AWS to analyze 2 million genomes in support of precision medicine.

LFS303: Building patient-centric virtualized trials
Learn how Evidation Health architects on AWS to create patient-centric experiences in decentralized and virtual clinical trials.

LFS304: Streamlining manufacturing and supply chain at Novartis
Learn how Novartis is creating real-time analytics and transparency in the pharma manufacturing process and supply chain to bring innovative medicines to market.

LFS305: Accelerating regulatory assessments in life sciences manufacturing
Learn how Merck leveraged Amazon Machine Learning to build an evaluation and recommendation engine for streamlining pharma manufacturing change requests.

Other related sessions of interest:

ENT203: How BMS automates GxP compliance for SAP systems on AWS

GPS211: AWS Partners driving innovation amidst COVID-19

HLC203: Securing Personal Health Information and High Risk Data Sets

WPS202: Transform research environments with Service Workbench on AWS

AIM310: Intelligent document processing for healthcare organizations

Healthcare Attendee Guide

AWS re:Invent routinely fills several Las Vegas venues with standing-room-only crowds, but we are bringing it to you as an all-virtual and free event this year. This year’s conference is gearing up to be our biggest yet, and we have an exciting program planned for the Healthcare industry, with five keynotes, 18 leadership sessions, and over 500 breakout sessions beginning November 30. See how AWS experts and talented members of the Healthcare industry are using cloud technology to transform their businesses and innovate on behalf of their customers. For Healthcare attendees looking to get the most out of their experience, follow these steps:

  • Register for re:Invent.
  • Take a look at all of the Healthcare sessions available, as well as lots of other information and additional activities, in our curated Healthcare Attendee Guide coming soon!
  • Check back on this post regularly, as we’ll continually update it to reflect the newest information.

Healthcare at re:Invent 2020

AWS is the trusted technology partner to the global healthcare industry. For over 12 years, AWS has established itself as the most mature, comprehensive, and broadly adopted cloud platform, trusted by thousands of healthcare customers around the world—including the fastest-growing startups, the largest enterprises, and leading government agencies. The secure and compliant AWS technology enables the highly regulated healthcare industry to improve outcomes and lower costs by providing the tools to unlock the potential of healthcare data, predict healthcare events, and build closer relationships with patients and consumers. The healthcare track at re:Invent 2020 will feature customer-led sessions focused on each of these critical components, accelerating the transformation of healthcare.

Healthcare sessions

Learn more and bookmark each Healthcare session:

HLC201: Healthcare Executive Outlook: Accelerating Transformation
Learn how AWS is working with industry leaders to increase their pace of innovation, unlock the potential of their healthcare data, help predict patient health events, and personalize the healthcare journey for their patients, consumers, and members.

HLC202: Making Healthcare More Personal with MetroPlus Health
Learn how MetroPlus Health leveraged AWS technology to quickly build and deploy an application that personally and proactively reached out to its members during a time of critical need.

HLC203: Securing Personal Health Information and High Risk Data Sets
Learn how Arcadia developed a HITRUST CSF Certified platform by leveraging AWS technology to enable the secure management of data from over 100 million patients.

HLC204: Accelerating the Transition to Virtual Care with AWS
Learn how MedStar Health developed and deployed two call centers in less than a week that are supporting more than 3,500 outpatient telehealth sessions a day.

WPS202: Transform research environments with Service Workbench on AWS
Learn how Harvard Medical School is using AWS to procure and deploy domain-specific data, tools, and secure IT environments to accelerate research.

WPS209: Reinventing medical imaging with machine learning on AWS
Learn how Radboud University Medical Center uses AWS to power its machine learning imaging platform with 45,000+ registered researchers and clinicians from all over the world.

WPS211: An introduction to healthcare interoperability and FHIR Works on AWS
Learn about FHIR Works on AWS, an open-source project designed to accelerate the industry’s use of the interoperability standard Fast Healthcare Interoperability Resources (FHIR).

WPS304: Achieving healthcare interoperability with FHIR Works on AWS
Learn how Black Pear Software leveraged AWS to build an integration toolkit to help their customers share healthcare data more effectively.

Extras you won’t want to miss out on!

LFS201: Life Sciences Industry: Executive outlook

LFS302: AstraZeneca genomics on AWS: From petabytes to new medicines

LFS303: Building patient-centric virtualized trials

AIM303: Using AI to automate clinical workflows

AIM310: Intelligent document processing for the insurance industry

INO204: Solving societal challenges with digital innovation on AWS

ZWL208: Using cloud-based genomic research to reduce health care disparities

GPS211: AWS Partners driving innovation amidst COVID-19

Read Full Post »

Tweets & Retweets by @pharma_BI and @AVIVA1950 at #BioIT20, 19th Annual Bio-IT World 2020 Conference, October 6-8, 2020 in Boston

 

Virtual Conference coverage in Real Time: Aviva Lev-Ari, PhD, RN

 

Amazing conference ended at 2PM on October 8, 2020

e-Proceedings 19th Annual Bio-IT World 2020 Conference, October 6-8, 2020 Boston

Virtual Conference coverage in Real Time: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2020/03/26/19th-annual-bio-it-world-2020-conference-october-6-8-2020-in-boston/

Review Tweets and Retweets

#BioIT20 Plenary Keynote: cutting innovative approach to #Science #Game On: How #AI, #CitizenScience #HumanComputation are facilitating the next leap forward in #Genomics and in #Biology may be in #PrecisionMedicine in the Future @pharma_BI @AVIVA1950 pic.twitter.com/L52qktkeYc

NIH Office of Data Science Strategy
@NIHDataScience

We’ve made progress with #FAIRData, but we still have a ways to go and our future is bright. #BioIT20 #NIHData


Aviva Lev-Ari
@AVIVA1950

#BioIT20

Driving Scientific Discovery with Data Digitization great ideas shared by moderator Timothy Gardner

#CEO Inspiration from History Total Quality Implementation is key for BioScience Data #AI won’t solve the problem #Data #Quality will


Rob Lalonde
@HPC_Cloud_Rob

My #BioIT20 talk, “#Bioinformatics in the #Cloud Age,” is tomorrow at 3:30pm. I discuss cloud migration trends in life sciences and #HPC. Join us! A panel with … and … follows the talk.

Jean Marois
@JeanMarois

My team is participating in Bio-IT World Virtual 2020, October 6-8. Join me! Use discount code 20NUA to save 20%! invt.io/1tdbae9s8lp

#BioIT20


NIH Office of Data Science Strategy
@NIHDataScience

One of the challenges we face today: we need an algorithm that can search across the 36+ PB of Sequence Read Archive (SRA) data now in the cloud. Imagine what we could do! #BioIT20 #NIHdata #SRAdata


NCBI Staff
@NCBI

NCBI’s virtual #BioIT20 booth will open in 15 minutes. There, you can watch videos, grab some flyers and even speak with an expert! bio-itworld.pathable.co/organizations/ The booth will close at 4:15 PM, but we’ll be back tomorrow, Oct 7 and Thursday, Oct 8 at 9AM.

PERCAYAI
@percayai

Happening soon at #BioIT20: Join our faculty inventor Professor Rich Head’s invited talk “CompBio: An Augmented Intelligence System for Comprehensive Interpretation of Biological Data.”

Wendy Anne Warr
@WendyAnneWarr

This was a good discussion
Quote Tweet – Cambridge Innovation @CIInstitute: RT percayai: We’ve put together what’s sure to be a thought-provoking discussion group for #BioIT20 “Why Current Approaches Using #AI in #…

Cambridge Innovation
@CIInstitute

RT VishakhaSharma_: Excited to speak and moderate a panel on Emerging #AI technologies bioitworld #BioIT20

Titian Software
@TitianSoftware

Meet Titian at #BioIT20 on 6-8th October and discover the latest research, science and solutions for exploring the world of precision medicine and the technologies that are powering it: bit.ly/2GjCj4B


PERCAYAI
@percayai

Thanks for joining us, Wendy! You’ve done a great job summing up key points from the discussion. #BioIT20

Aviva Lev-Ari
@AVIVA1950

#NIHhealthInitiative #BioItWorld20

Outstanding Plenary Keynote on #DataScience

CONNECTED DATA ECOSYSTEM – FAIR: Findable, Accessible, Interoperable, Reusable

Read Full Post »

e-Proceedings 19th Annual Bio-IT World 2020 Conference, October 6-8, 2020 Boston

https://bio-itworld.pathable.co/meetings/virtual/3T3SuWw9J2Bceei9s

 Virtual Conference coverage in Real Time: Aviva Lev-Ari, PhD, RN

 

Tweets & Retweets by @pharma_BI and @AVIVA1950 at #BioIT20, 19th Annual Bio-IT World 2020 Conference, October 6-8, 2020 in Boston

Virtual Conference coverage in Real Time: Aviva Lev-Ari, PhD, RN

https://pharmaceuticalintelligence.com/2020/10/08/tweets-retweets-by-pharma_bi-and-aviva1950-for-bioit20-19th-annual-bio-it-world-2020-conference-october-6-8-2020-in-boston/

October 6, 2020

  • Susan Gregurick

    NIH

    Associate Director for Data Science

  • Connected Data Ecosystem – project is FAIR
  • Data shareable
  • NIH agenda on data – diverse data sets: images of MRI, cells, organs, communities
  • Share images and link them to tables
  • METADATA: 34 PB enabled for search – moving data to clouds for large-scale analysis
  • Sequence Read Archive (SRA) – DNA sequence data
  • COVID-19 SRA data from around the world in the cloud; partnerships enabled
  • Open Science – enhance software tools for making research cloud-ready
  • NIH has 12 centers: genomics, neuro-imaging
  • SCH – Smart & Connected Health
  • IT, sensor system hardware, effective usability, medical interpretation, transformative data science
  • Cancer, Alzheimer’s, genomics, medical imaging, brain circuits
  • Coding it Forward: students come to NIH virtually from home to join the Civic Digital Fellowship
  • COVID-19: repositories of data for researchers:
  1. Treatment interventions
  2. Long-term sequelae
  3. Clinical platforms: BioData Catalyst, All of Us, ADSO, National COVID Cohort Collaborative
  4. Across platforms: workflow after the RAS August deploy – Passport for researchers to access data faster, privacy-preserving tokens, interoperability across clinical COVID databases
  5. Super-rich metadata to link to other new data sources is a challenging issue to solve across studies

Scott Parker

Sinequa Corp

Director of Product Marketing

  • Disconnect between R&D & IT
  • Intelligent search applications for sensitive information: Sinequa is a leader
  • Shared single index: cost per document goes down & productivity increases

Rebecca Baker

NIH OD

Dir HEAL Initiative

  • END ADDICTION Project – NIH HEAL Initiative: 20 NIH institutes collaborating on studies
  • National overdose deaths from opioid drugs – synthetic fentanyl
  • Heroin, cocaine, methamphetamine
  • Overdoses increased during the COVID pandemic
  • Increase in drug use overall, and a 67% increase for fentanyl
  • Chronic pain: daily severe pain – can’t go to work – 25 million affected
  • $500 million/year sustained research investment, 25+ HEAL research programs
  • HEAL Initiative: pain management, translating research, new prevention, enhanced outcomes for affected newborns, novel medication options, pre-clinical translational research in pain management
  • Improving treatments for opioid misuse & addiction
  • People with opioid use disorder do not receive treatment: justice community, collaborative care, ER, pregnant mothers
  • Medication-based treatment – patients do not stay long enough to achieve long-term recovery
  • People experience pain differently: muscular, neurological – biomarkers, endpoints, signatures; test non-addictive treatments for specific pains
  • Pain control – balancing the risks of long-term opioid therapy
  • HEAL research – infants born after exposure to opioids in utero: affected brain growth, born with withdrawal syndromes
  • Diversity of data under the HEAL Initiative –>> harmonize the data
  • Common Data Elements in HEAL clinical research in pain management
  • CORE CDE & supplemental CDE
  • Making HEAL data FAIR: Findable, Accessible, Interoperable, Reusable
  • LINK HEAL data with community studies, predict behaviors
  • Data sharing made available to the public
  • HEAL Data Lifecycle
  • Effect of change due to change in dosage used – if data is not collected, we are not able to explore the relationships
  • Use the data to advance research beyond the current understanding of the problem
  • #NIHhealthInitiative

 

Ari Berman

BioTeam Inc

Chief Executive Officer

  • Distributed Questions from the Audience to the speakers

10:00 AM – 11:25 AM EDT on Tuesday, October 6

How to Hold on to Your Knowledge in an Agile World

Etzard Stolte

Roche Pharma

Global Head

October 7, 2020

The Chicagoland COVID-19 Commons: A Regional Data Commons Powering Research to Support Public Health Efforts

  • Matthew Trunnell

    VP & Chief Data Officer

9:00 AM – 9:20 AM EDT on Wednesday, October 7

  • Seattle & COVID – samples from the Seattle Flu Study
  • Public health practice vs research – data from human subjects: avoid diluting the control
  • Chicagoland COVID-19 Data Commons – in Chicago
  1. Neighborhood level in Chicago
  2. Common data model
  3. Power predictive-modeling efforts: case rate, total confirmed cases, deaths
  4. Legal agreement of the Consortium
  5. https://chicagoland.pandemic
  • Commons – resources held in common, not-for-profit
  • Data Commons: cloud-based software platforms that co-locate data, computing infrastructure, and applications
  • Level 1: Basic; Level 2: Repeatable; Level 3: Governance; Level 4: Interoperability; Level 5: Sustainable
  • COVID-19 Data Commons: public health authorities collect data – not available to the research community
  • The research community needs access to public health authorities’ data
  • Regional COVID-19 Data Commons rationale: public health decisions are LOCAL and specific to the region
  • Fundraising in the communities
  • Data 1: Clinical data for healthcare – summary of incidence, signals of ethnic dependencies and co-morbidities
  1. Safe harbor: removal of 18 identifiers
  2. Expert determination
  • Data 2: Public data: environmental
  • Data 3: Resident-reported data on iPhones: multiple languages supported; early reports of people feeling unwell

CompBio: An Augmented Intelligence System for Comprehensive Interpretation of Biological Data

Richard Head

Washington Univ

Prof & Dir Genome Technology Access Ctr

9:20 AM – 9:40 AM EDT on Wednesday, October 7

  • Formatting, data scrubbing
  • Replace data fabric with a simplified version
  • Create a “Memory Model” – machine learning does classification of patterns
  • Dimensions are the variables
  • “Hyper-dimensional” – ingestion of abstracts and articles
  • Example: IL^ – aggregate memories to create a NORMALIZED aggregate memory
  • Relationships explored
  • Complex knowledge patterns generated by the PCMM: compared utilization
  • Augmented AI system: combination of PCMM with AI
  • Literature mining with CompBio
  • Evidence of utility: PCMM – accepted or published research leveraging PCMM applications
  • Example 1: Cell Metabolism CompBio – a person formulates a hypothesis
  • Example 2: Analysis of RNA-Seq of a rare mutational subtype of GBM
  1. Hypothesis –>> BioExplorer –>> multiple relations revealed
  2. Example 3: Animal models to human disease: CompBio – Crohn’s Assertion Engine

Summary – Augmented AI Platform for Biological Discovery

  • PCMM – memory model – hyperdimensional
  • AAI infrastructure
  • Knowledge map libraries
  • In development: medical discoveries

PercayAI Team – commercial Development

Kingdom Capital

 

Precision Cancer Medicine

  • Jeffrey Rosenfeld

    Rutgers Univ

    Asst Prof

9:40 AM – 10:00 AM EDT on Wednesday, October 7

 

  1. Hereditary cancer sequencing – BRCA
  2. Tumor cancer sequencing
  • Panel sizes – 500–1000x – the bigger the panel, the more computational time and the more data to investigate
  1. Hotspot panels
  2. Gene panels
  3. Exomes
  • Cell-free DNA testing – liquid biopsy
  1. Apoptosis
  2. Necrosis
  • FoundationOne
  • Patient results: ALL mutations found, mutation burden
  • Gene EGFR – no mutation
  • For every mutation, what therapy is recommended among approved drugs
  • Clinical trials for the mutations
  • VARIANTS of unknown significance
  • WORKFLOW: many MDs send a sample and get a 38-page report
  • Genomic classification and prognosis in AML: mutation subsets and therapies available
  • Paradigm shift in classification
  1. 2013 – lung adenocarcinoma
  2. 2011 – another cancer

 

mTOR System: A Database for Systems-Level Biomarker Discovery in Cancer

  • Iman Tavassoly – CANCELLED

    C2i Genomics

    Physician Scientist

10:20 AM – 10:40 AM EDT on Wednesday, October 7
Add to Calendar

mTOR system is a database I have designed for exploring biomarkers and systems-level data related to mTOR pathway in cancer. This database consists of different layers of molecular markers and quantitative parameters assigned to them through a current mathematical model. This database is an example of merging systems-level data with mathematical models for precision oncology.

FAIR and the (Tr)end of Data Lakes

  • Kees Van Bochove

    The Hyve

    Founder & Owner

10:20 AM – 10:40 AM EDT on Wednesday, October 7

Normalizing Regulatory Data Using Natural Language Processing (NLP)

  • Qais Hatim, Dr.

    FDA CDER

    Visiting Assoc

David Milward

Linguamatics

Senior Director, NLP Technology

10:40 AM – 11:00 AM EDT on Wednesday, October 7

  • ML focus on disease
  • NLP – different words have the same meaning, different expressions have the same meaning; grammar & meaning
  • Normalizes output
  1. Disease
  2. Genes
  3. Dates
  4. Mutations
  • Transform unstructured data into structured data
  • Identifying gaps in adverse-event labelling: pain and opioids
  • Improve drug safety
  • ChemAxon

Supplemental Approval Letters

Coding for adverse events: “derived values of possible interest”

  • Use of prominent terminologies used at the FDA: UNII – translation into an ANSI thesaurus standard
  • Matching to the variation found within real text: synonyms
  • Using ML for normalization in a disease context
  • Deep-learning PRE-TRAINING APPROACH for annotated data = supervised learning
  • A set of rules to handle overlapping entities
  • Normalized the extracted concepts
  • BERN and terminologies: BioBERT, PubMed Central, PubMed articles
  • NER – Named Entity Recognition
  • Evaluation of the approach

Conclusions

NLP, ML, hybrid methods: terminology + ML methods (a toy normalization sketch follows)
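
As a toy illustration of the terminology side of that hybrid approach: map surface variants found in free text onto one canonical concept. The synonym table below is invented for illustration; real systems pair curated terminologies (UNII, MedDRA) with ML-based NER such as BioBERT/BERN.

import re

SYNONYMS = {  # surface form -> canonical concept (toy table, invented)
    "heart attack": "myocardial infarction",
    "mi": "myocardial infarction",
    "high blood pressure": "hypertension",
    "hypertension": "hypertension",
}

def normalize(text):
    """Return canonical concepts mentioned in free text (longest match first)."""
    found = set()
    for surface in sorted(SYNONYMS, key=len, reverse=True):
        if re.search(r"\b%s\b" % re.escape(surface), text, flags=re.IGNORECASE):
            found.add(SYNONYMS[surface])
    return sorted(found)

print(normalize("Patient reported a heart attack and high blood pressure."))
# -> ['hypertension', 'myocardial infarction']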

Building an Artificial Intelligence-Based Vaccine Discovery System: Applications in Infectious Diseases & Personalized Neoantigen-Related Immunotherapy for Treatment of Cancers

  • Kamal Rawal

    Amity Univ

    Assoc Prof

10:40 AM – 11:00 AM EDT on Wednesday, October 7

  • Classification of proteins
  • Data collection
  • Feature selection – the most important of 1447 features
  • Deep learning model, Vaxi-DL: layers, compilation
  • Strategy against overfitting the model
  • Balancing imbalanced data
  • Hyperparameter tuning: internal parameters of the model
  • Stratified K-fold training and validation
  • Ensembling approach: many weak classifiers combined to create a STRONG classifier (see the sketch after this list)
  • ROC curve: ensemble by consensus
  • Before and after calibration
  • Benchmarking the system: Vaxi-DL ensemble by average vs by consensus
  • SYSTEM developed: type in a protein – get results
  • The rare disease CHARGE syndrome was used for validation
  • Application to COVID-19 – methodology
  • Application to cancer: which peptide can be used as an antigen; prediction of immunogenic peptides
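
A minimal sketch (not the Vaxi-DL code) of two ideas from the talk, stratified K-fold validation and an ensemble of weak classifiers, using scikit-learn on synthetic data:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in for protein feature vectors (the talk mentions 1447 features).
X, y = make_classification(n_samples=500, n_features=50, weights=[0.8, 0.2],
                           random_state=0)

# "Ensemble by consensus": majority vote over several weak learners.
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("nb", GaussianNB()),
                ("dt", DecisionTreeClassifier(max_depth=3))],
    voting="hard")

# Stratified folds preserve the class imbalance in every split.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(ensemble, X, y, cv=cv)
print("fold accuracies:", np.round(scores, 3))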

 

Using GPU Computing to Evaluate Variant Calling Strategies

  • George Vacek

    NVIDIA Corp

    Sequencing Strategic Development

  • Eriks Sasha Paegle

    Dell EMC

    Senior Business Development Manager

11:15 AM – 11:30 AM EDT on Wednesday, October 7

  • NVIDIA: 100 Genomes Cohort generated at the NY Genome Center, NHGRI
  • NVIDIA Parabricks mentioned Azure
  • Dell EMC test environment: Dell Technology Cloud Storage for Multi-Cloud – resources across GCP, AWS, Azure in Northern Virginia regions
  • Multi-cloud ease of use: without multi-cloud vs with Faction multi-cloud
  • Ease of use
  • Deep Averaging Network (DAN)
  • NVIDIA CLARA PARABRICKS TOOLKIT: short & long reads, deep learning, data analytics, ML
  • Reference applications – a host of customized applications, 3rd-party apps, libraries
  • GPU-accelerated drop-in tools for somatic pipelines: Clara Parabricks v3.5
  • Partnership of NVIDIA and PetaGene announced at BioIT20 – NGS data compression
  • PetaGene technology allows lossless compression to reduce storage costs
  • Project with the Sanger Institute – optimizing Mutographs identification
  • Completed a run in 24 hours instead of 31 days
  • Parabricks is a joint project of Dell EMC and NVIDIA

PLENARY KEYNOTE: Game On: How AI, Citizen Science, and Human Computation Are Facilitating the Next Leap Forward

12:30 PM – 1:55 PM EDT on Wednesday, October 7

  • Allison Proffitt

    BioIT World & Diagnostics World

    Editorial Dir

Seth Cooper

Northeastern Univ

Asst Prof

  • Foldit – scientific discovery using video games in the domain of protein structures and folding
  • Combine human with machine
  • Score based on competition among players for higher scores, and collaboration in groups
  • Problem: chemistry gives the input
  • Puzzles available for one week on the Internet; games ongoing
  • Solution analysis – continually IMPROVE the structure of protein folding
  • Foldit tutorials offered online
  • Player accomplishments: articles by scientists
  • Development of algorithms, discovery
  • Electron density fitting
  • Enzyme re-design
  • De novo protein design – players named as authors on a paper – the scientific process
  • Future work: coronavirus spike protein
  • Small-molecule design
  • Narrative
  • Virtual reality – 3D protein structure for manipulation
  • http://fold.it/ Educator Mode
  • http://fold.it/ standalone
  • http://fold.it/
  • seth.cooper@gmail.com

Lee Lancashire, CIO

Cohen Veterans Bioscience – not for profit – advancing Brain health

  • Biotyping and stratification
  • Biomarkers
  • Omics data
  • All meet in the commons – the Brain Commons: clinician, geneticist, scientist, bioinformatician; RStudio, Python, JupyterHub
  • Multidimensional Biomarkers in Multiple Sclerosis

 

Pietro Michelucci

Human Computation Institute

Director

  • Why machines can’t tackle AI problems on their own, and why AI can’t do Precision Medicine on its own
  • Young people more than others; N of 1 – Precision Medicine
  • Scandinavians and Russians are immune
  • AI & Precision Medicine: can’t solve the complexity of messy data vs big data
  • Messy data: heterogeneous, multidimensional, too many combinations to explore – select which combinations to explore vs let the machine generate all the combinations, analyze them all, and discover a PATTERN
  • Causal vs spurious
  • Logical reasoning; right-brain abstraction and shortcuts – the human brain does this routinely
  • Humans do better on context: not all info is in pixels, such as context
  • #ADS – SBIR suspected the hypothesis to be tested
  • Improving crowd-wisdom methods: 20 inputs by different people PLUS the machine
  • Combine crowd answers with the machine: faster and improved accuracy (see the sketch after this list)
  • Machines have no intuition – the bias of humans and of machines is similar
  • Wisdom of the crowd: bootstrapping hybrid intelligence: CIVIUM
  • bit.ly/civiumintro
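
A toy version of that crowd-plus-machine combination is a weighted majority vote; the weights here are invented for illustration, and CIVIUM itself is not shown:

from collections import Counter

def hybrid_vote(crowd_answers, machine_answer, machine_weight=3.0):
    """Weighted majority vote: each crowd member counts 1, the machine more."""
    votes = Counter()
    for ans in crowd_answers:
        votes[ans] += 1.0
    votes[machine_answer] += machine_weight
    return votes.most_common(1)[0][0]

# 20 crowd inputs plus one machine prediction, as in the note above.
crowd = ["causal"] * 10 + ["spurious"] * 10
print(hybrid_vote(crowd, "spurious"))  # machine tips a tied crowd -> 'spurious'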

 

 

Jerome Waldispuehl

McGill Univ

Assoc Prof

  • Visualization of nucleotides – tools for it
  • http://phylo.cs.mcgill.ca
  • GAME: Phylo DNA Puzzles: goal 202, score, top score
  • Whole-genome multiple alignment
  • Phylo: 350,000 participants, 1MM solutions; improved 40 to 95% of computer alignments
  • Education & science outreach – reaching out to the public
  • Borderlands Science + game designers: 1MM participants, 50MM solutions
  • Joint initiative with a major science project
  • Improvement of 16S rRNA alignment
  • MMOS, a company in science games

Towards AI-Guided Cell Profiling of Drugs with Automated High-Content Imaging

Ola Spjuth

Uppsala Univ

Professor

2:10 PM – 2:30 PM EDT on Wednesday, October 7

  • Accelerate drug discovery using AI and automation, in collaboration with AstraZeneca
  • Closed-loop (autonomous) experimentation
  • Collect the best data at the minimal cost
  • Active learning: query the active-learning model (see the uncertainty-sampling sketch after this list)
  • Exploitation [best predictions from given data] vs exploration
  • Automation in life science: micro-plates, stacks of micro-plates
  • Robot scientist: comes up with hypotheses and conducts research
  • High-throughput biology: robots vs disease
  • Cell Painting: imaging with multiplexed dyes; genetic or chemical perturbations
  • Classify images into biological mechanisms
  • Combinations of toxicants
  • A discovery engine: toxicity, efficacy, mechanism combinations
  • Automating our cell-based lab: fixed setup
  • Open-source lab automation suite on GitHub: https://github.com/pharmbio/imagedb
  • Dealing with large-scale data [TensorFlow]
  • STACKn.com – AI modeling life cycle
  • HASTE: Hierarchical Analysis of Spatial and Temporal data
  • https://pharmb.io
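
The exploitation-vs-exploration loop above can be sketched as uncertainty sampling: retrain, then "run the experiment" on the pool point the model is least sure about. A minimal sketch on synthetic data, not Spjuth's actual pipeline:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
rng = np.random.default_rng(1)
labeled = [int(i) for i in rng.choice(len(X), size=20, replace=False)]  # seed set
pool = [i for i in range(len(X)) if i not in set(labeled)]

model = LogisticRegression(max_iter=1000)
for _ in range(10):
    model.fit(X[labeled], y[labeled])
    # Query the pool point the model is least certain about (p closest to 0.5).
    proba = model.predict_proba(X[pool])[:, 1]
    query = pool[int(np.argmin(np.abs(proba - 0.5)))]
    labeled.append(query)  # reveal its label, i.e., collect the best next data point
    pool.remove(query)
print("accuracy:", round(model.score(X, y), 3))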

Advanced Imaging and AI Technologies Providing New Image and Data Analysis Challenges and Opportunities

Richard Goodwin

AstraZeneca

Dir & Head of Imaging & AI

2:30 PM – 2:50 PM EDT on Wednesday, October 7

AstraZeneca is empowering its scientists to see the complexity of a disease in unprecedented detail to enable effective development and selection of new medicines. This is enabled through the use of an extensive range of cutting-edge imaging technologies that support studies into the efficacy and safety of drugs through the R&D pipeline. This presentation will introduce the range of novel in vivo and ex vivo imaging technologies employed, describe the data challenges associated with scaling up the use of molecular imaging technologies, and address the new data integration and mining challenges. Novel computational methods are required for large cohort imaging studies that involve tissue-based multi-omics analysis, which integrate spatial relationships in unprecedented detail.

  • Small molecules – not suitable for complex diseases
  • Focus on quality vs quantity
  • Compounds for commercial value
  • Right safety
  • Imaging supports R&D: molecular, medical, big data, and AI
  • Convergence of ML for decision making
  • Spatial imaging: morphology
  • Multiplex imaging, like MRI
  • Multimodal analysis: tissue data and in vivo; holistic understanding of drug delivery
  • Spatial transcriptomics and proteomics: imaging platforms in R&D
  • AZ invests in imaging technologies already impacting projects: AI-empowered imaging delivering subcellular resolution
  • Mass spec imaging (MSI) – ex vivo imaging technique for the spatial distribution of molecules
  • Cartography of cancer: drug metabolite distribution – NEW understanding of disease and drug distribution in tissue
  • DATA: digitization, integration, analysis, exploration
  • Digital pathology and beyond – AI image analysis – AI outperforms pathologists and radiologists
  • Data volume and dimensionality: challenge and opportunity
  • Data volume and dimensionality: complete image
  • AZ Oncology – disease is understood for drug discovery using imaging technology

PANEL: Framework and Approach to Unlock the Potential of Quantum Computing in Drug Discovery

  • Brian Martin

    AbbVie Inc

    Research Fellow & Head

Philipp Harbach

Merck KGaA

Head of In Silico Research in Germany

  • Chemistry and manufacturing with QC – end users in pharmaceuticals
  • The VC arm at Merck asked experts in Merck to guide Merck’s investment in QC
  • 50 people across Merck’s three areas [Pharmaceutics, Animal Health, Diagnostics]

Celia Merzbacher

SRI Intl

Assoc Dir Quantum Economic Dev Consortium (QEDC)

  • Methodology from Pistoia to be used in QC
  • QC R&D developed in parallel
  • Simulation of all the components is possible

John Wise

Pistoia Alliance Inc (2007)

We are a global, not-for-profit members’ organization working to lower barriers to innovation in life science and healthcare R&D through pre-competitive collaboration.

Consultant

  • How Pharmaceutical Industry can benefit from quantum computing
  • 9 of 10 big Pharma are members of the Pistoia Alliance
  • IP created on specifications

 

Zahid Tharia

Pistoia Alliance Inc

Consultant

  • A key barrier to adoption of quantum computing (QC) in pharma is training of staff and skills in the IT aspects of QC

3:10 PM – 4:00 PM EDT on Wednesday, October 7

In 2019, major life sciences companies mobilized to form a pre-competitive, collaborative quantum computing working group (QuPharm) and delineate a framework and approach to accelerate realizing the potential of quantum acceleration in drug discovery. Learn from industry thought leaders on how to valuate and map problems into quantum algorithms, set up organizations to enable and scale quantum computing pilots and establish effective cross-industry, tech, and start-up collaborations.

Session Wrap-Up Panel Discussion

Etzard Stolte, PhD

Roche Pharma

Global Head

  • No official policy
  • In 2020 it became important enough to be mentioned by management as a potential use in automation
  • Continual updates needed – it is manual, and a disillusion without a business case
  • Roche tries to commoditize AI tools such as classifiers and automation

Samiul Hasan

GlaxoSmithKline

Scientific Analytics and Visualization Director

  • AI is perceived as having the potential to take off on its own
  • POC – demonstrate the value
  • Proof of concept – a semantic report – a story vs a one-off
  • Demonstration of value is needed and is continuous

 

 

Bin Li

Millennium The Takeda Oncology Co

Dir Computational Biology & Translational Medicine

  • ML community at Takeda
  • Positive to have; not very successful yet – not used much yet
  • Some models are pretty good and do not need improvement

Jens Hoefkens

Accenture

Industry Principal Director

  • Future of AI as support for human intuition vs replacement of humans
  • Automation, like pathology classification
  • Machine and human working together – not as the maker of decisions in clinical settings
  • POC cycles prevent conversion to production
  • Find where the highest value for production is, and deploy with scale
  • AI-assisted sifting of genomics data
  • BERT term extraction from Google technology to make sense of data and assist the user
  • ML
  • RPA – robotic concept extraction – 80% accuracy needed by scientists

4:00 PM – 4:20 PM EDT on Wednesday, October 7

October 8, 2020

Trends from the Trenches

Kevin Davies, PhD

CRISPR Journal

Exec VP & Exec Editor

Timothy Cutts

Wellcome Sanger Institute

Head

  • Collaborations with scientists in sub-Saharan Africa
  • Pay for data analysis – ownership issues
  • In the UK, 6 labs for the entire country: all send their data to the Wellcome Sanger Institute for analysis
  • Metadata is the problem – coordinating each of the 6 labs to send the metadata created problems

 

  • Cindy Crowninshield

    Cambridge Healthtech Institute

    Executive Event Director

Vivien Bonazzi

Deloitte Consulting LLP

Managing Dir & Chief Biomedical Data Scientist

  • How organizations use bioscience data
  • Data ecosystem: hardware and software; cloud and other options
  • Operationalize the two trends:
  1. Platforms: end-to-end solutions resulting in SILOS; systems are native: data ingestion
  2. Data Commons: open architecture, open source – integration and interdependence issues
  • Biomedical agencies in NIH, various organizations in the private sector: sharing data must be more effective
  • IT, data science, management – COVID reduced barriers
  • Leadership: different voices from different people
  • Data strategies & governance: not the whole but small pieces; incentives to share data

Chris Dagdigian

BioTeam Inc

Sr Dir

  • 10th anniversary of Trends from the Trenches
  • IT infrastructure changes
  • Research IT:
  1. Genomics & bioinformatics
  2. Image-based data acquisition and analysis: CryoEM, 3D microscopy, fMRI image analysis
  3. ML and AI – GPUs, FPGAs, neural processors: driven in organizations from the bottom up
  4. Chemistry & molecular dynamics
  5. Storage and exploitation of data for insights
  6. 2020 hype vs reality
  7. Scientific data: managing and understanding, data movement, federated access
  8. Big data: data storage, management & governance standards vs human-curated data
  9. IT needs guidance and decisions from the science team
  10. Culture change for joint management by science & IT: data fidelity, attribution, allocation top-down
  11. NERSC file system quotas & purging overview; silos & so…
  12. Petabytes of open-access data, collaborative research resources: data-rich environments
  13. Data lakes: Gen3 Data Commons
  14. Data hygiene: metadata is on the science side vs IT
  15. Biased data: model & data bias
  • Failed predictions:
  1. Compilers matter again – not true
  2. CPU benchmarking is back – WRONG
  3. AMD vs Intel, arm64 vs both
  4. Policy-driven auto-tiering storage – wrong; USER self-service for tiering, movement, and archive decisions. Let researchers tier/move/archive based on project, experiment, or group
  5. Single storage namespace – wrong. Data-intensive science: scientists must do some IT jobs themselves

Kjiersten Fagnan

Lawrence Berkeley Natl Lab

CIO

  • Genome project of DOE
  • Data management with other agencies
  • COVID: collaborations, breaking down barriers; small labs and big labs ALL generate data and share it
  • That collaboration is needed regardless of COVID – it does not happen
  • If too big, one lab can’t handle it all
  • Funding and training do not support collaborations, because the next round of funding depends on individual publications – which requires silos
  • Data cleaning and data management: standards are annoying and painful – not needed for publishing results as soon as possible, just so that someone else will be able to use the data
  • Facebook has hundreds of curators – the curation of scientific data requires the same hundreds of curators who are SCIENTISTS and data scientists

Matthew Trunnell

Pandemic Response Commons, Seattle

VP & Chief Data Officer

  • Data commons for intra- and inter-mural data sharing
  • ML is needed for data commons
  • Progress in FAIRness; NIH efforts driven by Susan Gregurick across all NIH centers
  • Large amount of B-to-B data sharing; Uber shares with the jurisdictions where it operates
  • Snowflake – new cloud technology
  • COVID plays an accelerator role
  • Cancer vs COVID – transfer knowledge from COVID to cancer

9:00 AM – 10:40 AM EDT on Thursday, October 8

The “Trends from the Trenches” will celebrate its 10th Anniversary at Bio-IT! Since 2010, the “Trends from the Trenches” presentation, given by Chris Dagdigian, has been one of the most popular annual traditions on the Bio-IT Program. The intent of the talk is to deliver a candid (and occasionally blunt) assessment of the best, the worthwhile, and the most overhyped information technologies (IT) for life sciences. The presentation has helped scientists, leadership, and IT professionals understand the basic topics related to computing, storage, data transfer, networks, and cloud that are involved in supporting data-intensive science. In 2020, Chris will give the “Trends from the Trenches” presentation in its original “state-of-the-state address” followed by guest speakers giving podium talks on relevant topics. An interactive Q&A moderated discussion with the audience follows. Come prepared with your questions and commentary for this informative and lively session.

Q&A

  • Project vs enterprise – sequencing for internal research vs for clients’ data
  • Tension in governmental agencies – no robust solutions: IT, science, management
  • Different use cases need different infrastructure, HW & SW: storage and data exploration
  • Data lakes: rule-based, enterprise – training is an issue in organizations
  • Management, scientists, IT in enterprises – terabytes of storage, budget issues; conversations on the limits that IT can offer put more burden on the scientists for triage and quotas – business and scientific value
  • New capabilities in organizations: hands-on, tactical data management – not IT but data engineering
  • Citizen science: privacy vs plants and microbes – no privacy issues
  • Incentives need to be changed for data citations in addition to papers
  • Curation citations as authorship citations
  • Data sharing in cancer: Gen3 – NCI Data Commons, data governance and data permission (access) – NCI does work in data commons – much data outside this space
  • EBI – in the UK, the Sanger Institute has the infrastructure in one place
  • Migrating project-based data structures involves scientist decisions; it should not be a quota (“storage is full”) in the IT space
  • Human-to-human communication vs tools for data migration
  • Which organizations get data curation and annotation right: subject-matter expertise from day 1 – hard to teach vs data engineering skills; a TEAM approach to problem solving is critical; in the biomedical space there are no incentives
  • BBC – meta-tagging system is outstanding
  • NCATS Translator – across organizations
  • Changing incentives – MORE organizations will do that task better
  • Common metadata across domains to predict future uses of data – collaboration of CS to create tagging in science organizations, like at the BBC
  • Chris Anderson

    Clinical OMICs

    Editor in Chief

Ian Fore

NIH NCI

Sr Biomedical Informatics Program Mgr

  • NCI – Cancer Data Commons – concierge services to organizations on data services

Ravi Madduri – CVD large cohort

Univ of Chicago

Scientist

 

  • Lara Mangravite

    Sage Bionetworks

    President

  • Kees Van Bochove

    The Hyve

    Founder & Owner

11:10 AM – 11:30 AM EDT on Thursday, October 8

 

BREAKOUT: Driving Scientific Discovery with Data / Digitization

  • Timothy Gardner

    Riffyn Inc

    CEO

11:35 AM – 12:00 PM EDT on Thursday, October 8

 

PLENARY KEYNOTE – 12:00 PM – 1:25 PM EDT on Thursday, October 8

Robert Green

Brigham & Womens Hospital

Co-founder of Genome Medical

Prof & Dir G2P Research

  • Combining data to rapidly analyze COVID-19 patients
  • Identify BIOMARKERS for vulnerability
  • Preventive genomics – Angelina Jolie’s mastectomy as a preventive clinical decision
  • Patients’ access to their own genomic data
  • Population screening – to predict risks
  • Direct-to-consumer genetic testing: preventive genomics; conflated genotyping/sequencing and labs/care providers
  • Direct-to-consumer genetic testing: COSTS & benefits – UNCLEAR
  1. Diagnosis of unsuspected genetic disease
  2. Stratification for surveillance
  3. Which pieces of the puzzle need to be brought to bear in patient care
  4. Categories and reporting criteria: gene–disease validity vs variant pathogenicity –>> clinic
  5. MedSeq Project: 10MM randomized study – all genome info shared with the patient in one arm; only selective genome data shared with the patient in the other arm: 100 patients, 20% carry a monogenic condition: polygenic risk scores
  6. CAD – high cholesterol biomarker, AFib, DM2; 52% women, 48% men
  7. No high-risk errors by PCPs discussing and disclosing the results of the sequence
  8. Filtering the results: indication-based testing vs screening
  9. BabySeq Project: sequencing INFANTS to prevent disease: 11% carry a mutation in a gene for a monogenic condition, like an abnormally narrowed aorta
  10. MDR – Monogenic Disease Risk
  11. MilSeq Project: US Air Force – military active duty
  12. 5, 8, 10 – are all polygenic studies
  13. Polygenic risk scores – high risk (a toy PRS computation is sketched after this list)
  14. Classification needs to be repeated every few years (re-sequence every 2 years) due to changes in health and to efficiencies of new discoveries in curated data, which keep improving
  • Risk–benefit – UTILITY – Partners Biobank return of genomic results
  • No interest in knowing by the public; NCCN criteria on chart review: 20%
  • Brigham preventive genomics via telemedicine – first in the country
  • APC mutation after colonoscopy – obstruction diagnosed
  • @robertgreen
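
A polygenic risk score is just a weighted sum of risk-allele dosages, PRS_i = sum_j beta_j * x_ij. A toy computation (the effect sizes and genotypes below are made up):

import numpy as np

effect_sizes = np.array([0.12, -0.05, 0.30, 0.08])  # beta_j from a GWAS (hypothetical)
# Rows: individuals; columns: risk-allele dosage (0, 1, or 2 copies) per variant.
dosages = np.array([[2, 1, 0, 1],
                    [0, 2, 1, 0],
                    [1, 1, 2, 2]])

prs = dosages @ effect_sizes  # one score per individual
print(np.round(prs, 3))       # higher score -> higher estimated risk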

 

Juergen Klenk

Deloitte Consulting LLP

Principal

  • Bradykinin hypothesis for COVID-19
  • Liberate the data: people, data, risk

 

Natalija Jovanovic

Sanofi

Chief Digital Officer

  • AI in Pharma
  • Vaccine-preventable diseases – produce 1 billion vaccines a year
  1. Reduction of incidence: pertussis – 92% eradication
  • Manage the risk profile
  • Science mechanisms translatable to machines
  1. Highly automated, ingestible data for AI
  2. Digital is about people: good data, good algorithms, good GUI

Vivien Bonazzi

Deloitte Consulting LLP

Managing Dir & Chief Biomedical Data Scientist

12:00 PM – 1:25 PM EDT on Thursday, October 8
Add to Calendar

12:00 Organizer’s Remarks

Cindy Crowninshield, RDN, LDN, Executive Event Director, Cambridge Healthtech Institute

12:05 Keynote Introduction

Juergen A. Klenk, PhD, Principal, Deloitte Consulting LLP

12:15 Toward Preventive Genomics: Lessons from MedSeq and BabySeq

Robert Green, MD, MPH, Professor of Medicine (Genetics) and Director, G2P Research Program/Preventive Genomics Clinic, Brigham & Women’s Hospital, Broad Institute, and Harvard Medical School

12:40 AI in Pharma: Where We Are Today and How We Will Succeed in the Future

Natalija Jovanovic, PhD, Chief Digital Officer, Sanofi Pasteur

1:05 LIVE Q&A: Session Wrap-Up Panel Discussion

PANEL MODERATORS:

Juergen A. Klenk, PhD, Principal, Deloitte Consulting LLP

Vivien R. Bonazzi, PhD, Managing Director & Chief Biomedical Data Scientist, Deloitte Consulting LLP

Below are sessions that are NOT covered above; I covered ONLY the sessions above.

Session Availability

1. PLENARY KEYNOTE PRESENTATION

10:15 am ET – NIH’s Strategic Vision for Data Science

Susan K. Gregurick, PhD, Associate Director, Data Science (ADDS) and Director, Office of Data Science Strategy (ODSS), National Institutes of Health

Rebecca Baker, PhD, Director, HEAL (Helping to End Addiction Long-term) Initiative, Office of the Director, National Institutes of Health

2. WORKSHOPS

11:55 am ET – W1: Data Management for Biologics: Registration and Beyond

Monica Wang, PhD, Principal Technology Lead, Scientific Informatics, Takeda

Sebastian Schlicker, Head, Biologics Business Operations, Genedata AG

11:55 am ET – W2: A Crash Course in AI: 0-60 in Three

Peter V. Henstock, PhD, Machine Learning & AI Lead, Software Engineering & Statistics & Visualization, Pfizer Inc.

11:55 am ET – W3: Data Science Driving Better Informed Decisions

Meghan Raman, Director, R&D Data Lake & Analytics, Bristol Myers Squibb Co.

Nigel Greene, PhD, Director & Head Data Science & Artificial Intelligence, Drug Safety & Metabolism, AstraZeneca Pharmaceuticals

2:15 pm ET – W4: Digital Biomarkers and Wearables in Pharma R&D and Clinical Trials

Danielle Bradnan, MS, Research Associate, Digital Health and Wellness, Lux Research

Graham Jones, PhD, Director, Innovation, Technical Research and Development, Novartis

Ariel Dowling, PhD, Director of Digital Strategy, Data Sciences Institute, Research and Development, Takeda Pharmaceuticals

2:15 pm ET – W5: AI-Celerating R&D: Foundational Approaches to How Emerging Technologies Can Create Value

Brian Martin, Head of AI, R&D Information Research, Senior Principal Data Scientist, AbbVie

2:15 pm ET – W6: Dealing with Instrument Data at Scale: Challenges and Solutions

Rachana Ananthakrishnan, Executive Director, Globus, University of Chicago

Michael A. Cianfrocco, PhD, Assistant Professor, Department of Biological Chemistry and Research Assistant Professor, Life Sciences Institute, University of Michigan

Brigitte E. Raumann, Product Manager, Globus, University of Chicago

3. Connect with peers from across the industry during these dedicated networking times.

9:25 am ET – Virtual Exhibit Hall Open

1:00 pm ET – Speed Networking

Looking to meet fellow attendees and have meaningful conversations – just as you would at an in-person event? This is the perfect way to achieve just that. Get to know your fellow attendees by joining this interactive speed networking event. To participate, each attendee will be paired at random with another attendee and given a chance to interact for 7 minutes in a private Zoom room. Once the 7 minutes are up, you will move on to meet another selected attendee. Maximize your networking at the meeting and join in.

2:00 pm ET – Stretch Break

Take a minute to revitalize and join our friends from VOS Fitness for a stretch break. The professional trainer from VOS will take you through some easy moves that will help with screen fatigue and ease your muscles after a long day of sitting at the computer. All moves can be done right at your desk and are appropriate for all fitness levels.

4. Game On!

Earn points by completing the activities listed on our Game tab. Some activities will only award points once, but others will award points every time you do them – so the more involved you are in the virtual event, the more points you will earn! You can start earning points one week before the event – so get ready to start sending meeting invitations, exploring our virtual expo, and planning your schedule.

Attendees in the top 5% of points earned when the game closes at the end of the conference will be eligible to win a US$200 gift card!
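For concreteness, here is a small back-of-the-envelope sketch of how a "top 5% of points" cutoff can be computed, with one-time versus repeatable activities as described above. The activity names, point values, and attendee data are invented for illustration and do not reflect the event's actual scoring.

```python
import math

# Hypothetical point values; the description only says some activities
# award points once while others award points every time.
ONE_TIME = {"complete_profile": 50}
REPEATABLE = {"visit_booth": 10, "send_meeting_invite": 5}

def score(activity_log):
    """Total points for one attendee from a list of activity names."""
    total, done_once = 0, set()
    for activity in activity_log:
        if activity in ONE_TIME and activity not in done_once:
            total += ONE_TIME[activity]   # awarded only once
            done_once.add(activity)
        elif activity in REPEATABLE:
            total += REPEATABLE[activity]  # awarded every time
    return total

def top_five_percent(scores):
    """Attendees in the top 5% of points are eligible for the prize draw."""
    n_eligible = max(1, math.ceil(0.05 * len(scores)))
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:n_eligible]

logs = {
    "attendee_a": ["complete_profile", "visit_booth", "visit_booth"],
    "attendee_b": ["send_meeting_invite"],
    "attendee_c": ["visit_booth"],
}
points = {name: score(log) for name, log in logs.items()}
print(points, "->", top_five_percent(points))
```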

5. Take part in 1-on-1 networking with an easy-to-navigate profile search and scheduling platform.

  • Check out your recommended connections flagged as “Want to Meet” in the People tab. These connections were chosen based on similarities in your roles, companies, and conference program interests.
  • Take a moment to add relevant interest tags to your profile. Then search and connect with participants who have the same interests.
  • Engage with technology leaders in their booths and view relevant videos and demos.
  • Take part in live Q&A with speakers and participants following each educational session.
  • Create and join in ad hoc group discussions throughout the event.
  • Watch our quick tutorial on how to maximize networking opportunities: CII’s Virtual Event Platform – Networking

10:00 AM – 11:25 AM EDT on Tuesday, October 6

10:00 Welcome Remarks

Cindy Crowninshield, RDN, LDN, Executive Event Director, Cambridge Healthtech Institute

10:05 Keynote Introduction

Scott Parker, Director of Product Marketing, Marketing, Sinequa

10:15 PLENARY KEYNOTE PRESENTATION: NIH’s Strategic Vision for Data Science

Susan K. Gregurick, PhD, Associate Director, Data Science (ADDS) and Director, Office of Data Science Strategy (ODSS), National Institutes of Health

Rebecca Baker, PhD, Director, HEAL (Helping to End Addiction Long-term) Initiative, Office of the Director, National Institutes of Health

11:05 LIVE Q&A: Session Wrap-Up Panel Discussion

PANEL MODERATOR:

Ari E. Berman, PhD, CEO, BioTeam, Inc.

Session Availability

Wednesday, October 7

9:00 AM EDT
  • TRACK 7: AI FOR DRUG DISCOVERY

    The Emergence of the AI-Augmented Drug Discoverer

    9:00 AM – 9:20 AM EDT
    PRESENTATION · ON DEMAND · RECORDED · SESSION PASS

    Mark Davies

    BenevolentAI

9:20 AM EDT
  • TRACK 7: AI FOR DRUG DISCOVERY

    Generative Chemistry and Generative Biology for AI-Powered Drug Discovery

    9:20 AM – 9:40 AM EDT
    PRESENTATION · ON DEMAND · RECORDED · SESSION PASS

    Alex Zhavoronkov

    Insilico Medicine

9:40 AM EDT
  • TRACK 7: AI FOR DRUG DISCOVERY

    Talk Title to be Announced

    9:40 AM – 11:00 AM EDT
    PRESENTATION · ON DEMAND · RECORDED · SESSION PASS

    Grace Wenjia You

    EMD Serono

11:00 AM EDT
  • TRACK 7: AI FOR DRUG DISCOVERY

    Coupling AI and Network Biology to Generate Insights for Disease Understanding and Target ID

    11:00 AM – 11:30 AM EDT
    Cortellis, a Clarivate Analytics solution
    PRESENTATION · ON DEMAND · RECORDED · SESSION PASS

    Alexander Ivliev

    Clarivate

11:30 AM EDT
  • TRACK 7: AI FOR DRUG DISCOVERY

    Session Wrap-Up Panel Discussion

    11:30 AM – 11:50 AM EDT
    PANEL · ON DEMAND · LIVE · SESSION PASS

 

@@@@@

OLD Material

http://www.giiconference.com/chi909998/

Welcome to Bio-IT World 2020

In the spirit of open collaboration, the world’s premier bio-IT conference will bring together the community to focus on how we are using technologies and analytic approaches to solve problems, accelerate science, and drive the future of precision medicine. With a focus on AI, data science, and other “data-driven” technologies that are advancing biomedical research, drug discovery, and healthcare, the Bio-IT World Conference & Expo ’20 will bring more than 3,000 participants to the Seaport World Trade Center in Boston, October 6-8, 2020.

Participants will have the chance to meet and share research and ideas with leading life sciences, pharmaceutical, clinical, healthcare, informatics, and technology experts.

BROCHURE

http://www.giiconference.com/chi909998/catalog.pdf?20200122

2020 CONFERENCE PROGRAMS

TRACK 1 Data Storage and Transport

TRACK 2 Data and Metadata Management

TRACK 3 Data Science and Analytics Technologies

TRACK 4 Software Applications and Services

TRACK 5 Data Security and Compliance

TRACK 6 Cloud Computing

TRACK 7 AI for Drug Discovery

TRACK 8 Emerging AI Technologies

TRACK 9 AI: Business Value Outcomes

TRACK 10 Data Visualization Tools

TRACK 11 Bioinformatics

TRACK 12 Pharmaceutical R&D Informatics

TRACK 13 Genome Informatics

TRACK 14 Clinical Research and Translational Informatics

TRACK 15 Cancer Informatics

TRACK 16 Open Access and Collaborations

 

2020 Plenary Keynote Speakers

Rebecca Baker, PhD

Director, HEAL (Helping to End Addiction Long-term) Initiative, Office of the Director, National Institutes of Health

Vivien Bonazzi, PhD

Chief Biomedical Data Scientist, Managing Director, Deloitte

Tim Cutts, PhD

Head, Scientific Computing, Wellcome Trust Sanger Institute

Chris Dagdigian

Co-Founder and Senior Director, Infrastructure, BioTeam, Inc

Kevin Davies, PhD

Executive Editor, The CRISPR Journal, Mary Ann Liebert, Inc.

Kjiersten Fagnan, PhD

Chief Informatics Officer, Data Science and Informatics Leader, DOE Joint Genome Institute, Lawrence Berkeley National Laboratory

Robert Green, MD, MPH

Professor of Medicine (Genetics) and Director, G2P Research Program/Preventive Genomics Clinic, Brigham & Women’s Hospital, Broad Institute, and Harvard Medical School

Susan K. Gregurick, PhD

Associate Director, Data Science (ADDS) and Director, Office of Data Science Strategy (ODSS), National Institutes of Health

Natalija Jovanovic, PhD

Chief Digital Officer, Sanofi Pasteur

Pietro Michelucci, PhD

Director, Human Computation Institute

Matthew Trunnell

Vice President and Chief Data Officer, Fred Hutchinson Cancer Research Center

3,200+ Industry Professionals
160+ Sponsors & Exhibitors
250+ Scientific Presentations
16 Diverse Conference Tracks
