Posts Tagged ‘Technology’

The Future of Speech-Based Human-Computer Interaction

Reporter: Ethan Coomber, Research Assistant III

2021 LPBI Summer Internship in Data Science and Podcast Library Development
This article reports on research conducted at the Tokyo Institute of Technology, published on 9 June 2021.

As technology continues to advance, the human-computer relationship develops alongside it. As researchers and developers find new ways to improve a computer’s ability to recognize the distinct pitches that compose a human voice, technology keeps pushing back the boundaries of what people previously thought possible. This constant improvement has also brought to light new challenges in voice-based interaction with technology.

When humans interact with one another, we do not convey our message with our voices alone. There are many complexities of our emotional states and personalities that cannot be captured simply through the sound coming out of our mouths. Aspects of communication such as rhythm, tone, and pitch are essential to our understanding of one another. This presents a challenge for artificial intelligence, because current systems struggle to pick up on these cues.

https://www.eurekalert.org/pub_releases/2021-06/tiot-tro060121.php

In the modern day, our interactions with voice-based devices and services continue to increase. In this light, researchers at Tokyo Institute of Technology and RIKEN, Japan, have performed a meta-synthesis to understand how we perceive and interact with the voice (and the body) of various machines. Their findings have generated insights into human preferences, and can be used by engineers and designers to develop future vocal technologies.

– Katie Seaborn

While it will always be difficult for technology to perfectly replicate human interaction, the inclusion of filler terms such as “I mean…”, “um” and “like…” has been shown to improve people’s comfort when communicating with technology. Humans prefer communicating with agents that match their personality and overall communication style. The illusion of making the artificial intelligence appear human has a dramatic effect on the overall comfort of the person interacting with the technology. Communication has also been shown to improve when the artificial intelligence comes across as happy or empathetic and speaks with a higher-pitched voice.

Using machine learning, computers are able to recognize patterns within human speech rather than requiring explicit programming for each specific pattern. This allows the technology to adapt to human tendencies as it continues to encounter them. Over time, humans develop nuances in the way they speak and communicate, which frequently results in a tendency to shorten certain words. One of the more common examples is the expression “I don’t know”, which is frequently reduced to “dunno”. Using machine learning, a computer can recognize this pattern and infer the speaker’s intention.
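
As a rough illustration of this idea (not code from the study; the phrase table, cutoff value and function name below are hypothetical), mapping colloquial shortenings back to their canonical phrases can be sketched as a lookup table combined with fuzzy matching:

    # Minimal sketch (hypothetical, not from the cited research): normalizing
    # colloquial shortenings such as "dunno" back to their canonical phrases.
    import difflib

    # Hypothetical lookup table a speech system might learn from usage data.
    CANONICAL_PHRASES = {
        "dunno": "I don't know",
        "gonna": "going to",
        "wanna": "want to",
        "lemme": "let me",
    }

    def normalize_token(token: str, cutoff: float = 0.8) -> str:
        """Return the canonical phrase for a shortened form, if one is close enough."""
        token = token.lower()
        if token in CANONICAL_PHRASES:
            return CANONICAL_PHRASES[token]
        # Fuzzy match tolerates small recognition errors (e.g., "duno").
        close = difflib.get_close_matches(token, CANONICAL_PHRASES, n=1, cutoff=cutoff)
        return CANONICAL_PHRASES[close[0]] if close else token

    if __name__ == "__main__":
        print(normalize_token("dunno"))   # -> I don't know
        print(normalize_token("duno"))    # -> I don't know (fuzzy match)
        print(normalize_token("maybe"))   # -> maybe (left unchanged)

In a real voice assistant such a mapping would be learned from usage data rather than hard-coded, but the normalization it performs is the same.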

With advances in technology and the growing presence of voice assistants in our lives, we are expanding our interactions to include computer interfaces and environments. While many advances are still needed to reach a natural level of communication, developers have identified the steps required to achieve that kind of human-computer interaction.

Sources:

Tokyo Institute of Technology. “The role of computer voice in the future of speech-based human-computer interaction.” ScienceDaily. ScienceDaily, 9 June 2021.

Rev. “Speech Recognition Trends to Watch in 2021 and Beyond: Responsible AI.” Rev, 2 June 2021, http://www.rev.com/blog/artificial-intelligence-machine-learning-speech-recognition.

“The Role of Computer Voice in the Future of Speech-Based Human-Computer Interaction.” EurekAlert!, 1 June 2021, http://www.eurekalert.org/pub_releases/2021-06/tiot-tro060121.php.

Other related articles published in this Open Access Online Scientific Journal include the following:

Deep Medicine: How Artificial Intelligence Can Make Health Care Human Again
Reporter: Aviva Lev-Ari, PhD, RN
https://pharmaceuticalintelligence.com/2020/11/11/deep-medicine-how-artificial-intelligence-can-make-health-care-human-again/

Supporting the elderly: A caring robot with ‘emotions’ and memory
Reporter: Aviva Lev-Ari, PhD, RN
https://pharmaceuticalintelligence.com/2015/02/10/supporting-the-elderly-a-caring-robot-with-emotions-and-memory/

Developing Deep Learning Models (DL) for Classifying Emotions through Brainwaves
Reporter: Abhisar Anand, Research Assistant I
https://pharmaceuticalintelligence.com/2021/06/22/developing-deep-learning-models-dl-for-classifying-emotions-through-brainwaves/

Evolution of the Human Cell Genome Biology Field of Gene Expression, Gene Regulation, Gene Regulatory Networks and Application of Machine Learning Algorithms in Large-Scale Biological Data Analysis
Reporter: Aviva Lev-Ari, PhD, RN
https://pharmaceuticalintelligence.com/2019/12/08/evolution-of-the-human-cell-genome-biology-field-of-gene-expression-gene-regulation-gene-regulatory-networks-and-application-of-machine-learning-algorithms-in-large-scale-biological-data-analysis/

The Human Genome Project
Reporter: Larry H Bernstein, MD, FCAP, Curator
https://pharmaceuticalintelligence.com/2015/09/09/the-human-genome-project/

Read Full Post »

 A Revolution in Medicine: Medical 3D BioPrinting

Curated by: Irina Robu, PhD

Imagine a scenario where, years from now, one of your organs stops working properly. What would you do? The current option is to wait in line for a transplant, hoping that the donor is a match. But what if you could get an organ made ready for you, with no chance of rejection? Even though it may sound like science fiction at the moment, organ 3D bioprinting could revolutionize medicine and health care.

I have always found the field of tissue engineering and 3D bioprinting fascinating. What interests me about 3D bioprinting is its capacity to be a game changer: it would make organs widely available to those who need them and eliminate the need for a living or deceased donor. Moreover, it would benefit pediatric patients with specific problems that current bio-prosthetic options may not address. It would minimize the risk of rejection, and the components would be customized to size.

There have been advances in the field of 3D bioprinting; one example is the 3D-printed cranium used by neurosurgeons at the University Medical Centre Utrecht. The patient was a young woman who suffered from a chronic bone disorder, and the 3D reconstruction of her skull minimized the brain damage that might have occurred had doctors used a durable plastic cranium.

So, what exactly is bioprinting? 3D bioprinting is an additive manufacturing procedure where biomaterials, such as cells and growth factors, are combined to generate tissue-like structures that duplicate natural tissues. At its core, bioprinting works in a similar way to conventional 3D printing. A digital model becomes a physical 3D object layer-by-layer.  However, in the case of bioprinting, a living cell suspension is used instead of a thermoplastic.

The procedure mostly involves preparation, printing, maturation and application and can be summarized in three steps:

  1. Pre-bioprinting: a digital model is created from computed tomography (CT) and magnetic resonance imaging (MRI) scans and then fed to the printer (a minimal sketch of this step follows the list).
  2. Bioprinting: the actual printing takes place; the bioink is loaded into a printer cartridge and deposited according to the digital model.
  3. Post-bioprinting: the printed parts are mechanically and chemically stimulated to create stable biostructures that can ultimately be implanted.
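
As a minimal sketch of the pre-bioprinting step only (the synthetic volume, intensity threshold and variable names are illustrative assumptions rather than any particular printer's workflow), a stack of CT slices can be turned into a printable surface mesh with a marching-cubes reconstruction:

    # Illustrative sketch: turning a CT volume into a surface mesh for the
    # pre-bioprinting step. The synthetic volume and threshold are placeholders.
    import numpy as np
    from skimage import measure

    # Stand-in for a stacked CT scan: a 3D intensity volume (slices x rows x cols).
    rng = np.random.default_rng(0)
    volume = rng.random((64, 128, 128))

    # Extract an isosurface at a chosen intensity threshold; in practice the
    # threshold would come from segmenting the target tissue.
    verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)

    print(f"mesh: {len(verts)} vertices, {len(faces)} triangular faces")

The resulting vertices and faces would then be exported to a printable format such as STL and sliced into the layer-by-layer deposition paths the bioprinter follows.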

3D bioprinting can produce microarchitectures that provide mechanical stability and promote cell ingrowth, while avoiding the homogeneity problems, such as uneven cell placement, that occur with conventional cell seeding. Immediate vascularization of implanted scaffolds is critical, because it provides an influx of nutrients and an outflow of by-products, preventing necrosis. Homogeneously seeded scaffolds integrate faster into the host tissue, support uniform cell growth in vivo, and carry a lower risk of rejection.

To address the limitations of commercially available technology for producing tissue implants, researchers are working to develop a 3D bioprinter that fits inside a laminar flow hood, is ultra-low cost, and can be customized for different organs. For bioprinting to be applied in a clinical setting, where the ultimate goal is to implant 3D-bioprinted structures into patients, printing solutions must remain sterile and the process must be accurate enough to reproduce the cell-to-cell distances of complex tissues.

The ultimate aim of bioprinting is to provide an alternative to autologous and allogeneic tissue implants and to replace animal testing in the study of disease and the development of treatments. In the short term, the goal is to create alternatives to animal testing and to speed up drug testing; in the long term, it is to change the status quo and develop personalized organs made from a patient’s own cells. However, some ethical challenges remain regarding ownership of such organs.

A powerful starting point is the creation of tissue components for the heart, liver, pancreas, and other vital organs. Each small step forward brings 3D bioprinting closer to making organs widely available to those who need them, instead of leaving patients to wait years for a transplant to become available.

I invite you to read a biomedical e-book that I had the pleasure of authoring along with several other scientists, Medical 3D BioPrinting – The Revolution in Medicine Technologies for Patient-centered Medicine: From R&D in Biologics to New Medical Devices (Series E: Patient-Centered Medicine Book 4).

 

 

 

Read Full Post »

Using A.I. to Detect Lung Cancer gets an A!

Reporter: Irina Robu, PhD

3.3.19   Using A.I. to Detect Lung Cancer gets an A!, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 2: CRISPR for Gene Editing and DNA Repair

Google researchers hypothesized that computers can be as good as or better than doctors at detecting tiny lung cancers on CT scans. A CT scan combines data from several X-rays to produce a detailed image of structures inside the body; it yields two-dimensional images of slices of the body, and the data can also be used to construct 3-D images.

The study, published in Nature Medicine, offers a glimpse of the future of artificial intelligence in medicine. By feeding vast amounts of data from medical imaging into systems called artificial neural networks, scientists can teach computers to identify patterns linked to a specific condition, like pneumonia, cancer or a wrist fracture, that would be hard for a person to see. The system follows an algorithm, or set of instructions, and learns as it goes. The more data it receives, the better it becomes at interpretation.

The process, known as deep learning, enables computers to identify objects and understand speech; it has also produced systems that help pathologists read microscope slides to diagnose cancer and help ophthalmologists detect eye disease in people with diabetes. In their recent study, the scientists applied artificial intelligence to the CT scans used to screen people for lung cancer, which caused 160,000 deaths in the United States last year and 1.7 million worldwide. The scans are recommended for people at high risk because of a long history of smoking.
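
To make the idea of feeding labeled scans into a neural network concrete, here is a minimal, hedged sketch of a tiny 3-D convolutional classifier trained on CT volumes with known diagnoses. The tensor sizes, synthetic data and two-class setup are illustrative assumptions; the model described in the study is far larger and trained on real screening scans.

    # Minimal sketch (illustrative only): a tiny 3-D CNN that learns to label
    # CT volumes as cancer / no-cancer from examples with known diagnoses.
    import torch
    import torch.nn as nn

    class TinyCTNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            )
            self.classifier = nn.Linear(16 * 8 * 16 * 16, 2)  # for 32x64x64 inputs

        def forward(self, x):
            x = self.features(x)
            return self.classifier(x.flatten(1))

    # Synthetic stand-ins for CT volumes (batch, channel, depth, height, width)
    # and their known diagnoses (0 = benign, 1 = malignant).
    scans = torch.randn(4, 1, 32, 64, 64)
    labels = torch.randint(0, 2, (4,))

    model = TinyCTNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(3):  # real training would loop over thousands of scans
        optimizer.zero_grad()
        loss = loss_fn(model(scans), labels)
        loss.backward()
        optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.3f}")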

Screening studies have shown that such scans can reduce the risk of dying from lung cancer and can also identify spots that might later become cancerous, so that radiologists can categorize patients into risk groups and decide whether they need biopsies or more frequent follow-up scans to keep track of the suspect regions.

However, the test has errors. It can miss tumors or mistake benign spots for malignancies and shove patients into invasive, risky procedures like lung biopsies or surgery.

SOURCE

https://www.nytimes.com/2019/05/20/health/cancer-artificial-intelligence-ct-scans.html

Other related articles published in this Online Scientific Open Access Journal include the following:

https://pharmaceuticalintelligence.com/2019/07/21/multiple-barriers-identified-which-may-hamper-use-of-artificial-intelligence-in-the-clinical-setting/

https://pharmaceuticalintelligence.com/2019/06/28/ai-system-used-to-detect-lung-cancer/

Read Full Post »

Artificial throat may give voice to the voiceless

Reporter: Irina Robu, PhD

Flexible sensors have attracted more and more attention as a fundamental component of anthropomorphic robot research, medical diagnosis and physical health monitoring. The fundamental mechanism of such a sensor is based on the triboelectric effect, which induces electrostatic charges on the surfaces of two different materials in contact. Much like a parallel-plate capacitor, small mechanical disturbances cause the capacitance to fluctuate, and this fluctuation produces the output current/voltage.
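
A simplified way to see this (a textbook idealization, not the device's exact model): treat the sensor as a parallel-plate capacitor with plate area A, permittivity ε and gap d(t), held at a bias voltage V. A mechanical disturbance that changes the gap changes the capacitance, and the induced current follows from the stored charge Q(t) = C(t)V:

    \[
    C(t) = \frac{\varepsilon A}{d(t)}, \qquad
    i(t) = \frac{dQ}{dt} = V\,\frac{dC(t)}{dt}
    \]

so a small change in plate separation appears directly as a measurable output current.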

Chinese scientists have combined ultrasensitive motion detectors with thermal sound-emitting technology to create an “artificial throat” that could enable speech in people with damaged or non-functioning vocal cords. Team members from a university in Beijing fabricated a homemade circuit board on which to build their dual-mode system, combining detection and emission technologies.

Graphene is considered a wonder material: it is the thinnest material known and among the strongest ever measured. A one-atom-thick layer of graphite, it possesses a high Young’s modulus as well as superior thermal and electrical conductivity. Graphene-based sensors have attracted much attention in recent years because of their variety of structures, unique sensing performance, room-temperature operation, and broad application prospects.

The skin-like device, a wearable artificial graphene throat (WAGT), feels much like a temporary tattoo to the wearer. To make the device functional and flexible, the scientists laser-scribed graphene onto a thin sheet of polyvinyl alcohol film. The device is about the size of two thumbnails side by side; water is used to attach the film to the skin over the volunteer’s throat, and electrodes connect it to a small armband containing a circuit board, microcomputer, power amplifier and decoder. In the development phase, the system transformed subtle throat movements into simple sounds like “OK” and “No.” During trials of the device, volunteers imitated the throat motions of speech and the device converted these movements into single-syllable words.

The researchers believe that people without functioning vocal cords could be trained to generate signals with their throats, which the device would then translate into speech.

SOURCE
https://www.aiin.healthcare/topics/robotics/artificial-throat-may-give-voice-voiceless?utm_source=newsletter

Read Full Post »

AI System Used to Detect Lung Cancer

Reporter: Irina Robu, PhD

3.3.13   AI System Used to Detect Lung Cancer, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 2: CRISPR for Gene Editing and DNA Repair

Lung cancer is characterized by uncontrolled cell growth in tissues of the lung. The growth can spread beyond the lung by the process of metastasis into nearby tissue or other parts of the body. The most common symptoms are coughing (including coughing up blood), weight loss, shortness of breath, and chest pains. The two main types of lung cancer are small-cell lung carcinoma (SCLC) and non-small-cell lung carcinoma (NSCLC). Lung cancer may be seen on chest radiographs and computed tomography (CT) scans. However, according to scientists from Google, computers seem to be as good as or better than doctors at detecting tiny lung cancers on CT scans.

By feeding huge amounts of medical imaging data into the system, the Google researchers trained their AI to interpret images much as humans read microscope slides, X-rays, MRIs and other medical scans, teaching the computer to recognize patterns linked to a specific condition.

In a new Google study, the scientists applied artificial intelligence to CT scans used to screen people for lung cancer. Current studies have shown that screening can reduce the risk of dying from lung cancer and can also identify spots that might later become malignant.

The researchers created a neural network with multiple layers of processing and trained the AI by giving it many CT scans from patients whose diagnoses were known. This allows radiologists to sort patients into risk groups and decide whether biopsies or more frequent follow-up scans are needed to keep track of the suspect regions. Although the technology seems promising, it has pitfalls: it can miss tumors, mistake benign spots for malignancies, and push patients into risky procedures.
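
As a toy illustration of the risk-grouping step (the cutoff values and recommendations below are hypothetical, not those used in the study or in any clinical guideline), a predicted malignancy probability can be bucketed into follow-up actions:

    # Hypothetical sketch: bucketing a model's malignancy probability into
    # risk groups with made-up thresholds (not clinical guidance).
    def risk_group(p_malignant: float) -> str:
        if p_malignant < 0.02:
            return "low risk: routine annual screening"
        if p_malignant < 0.15:
            return "intermediate risk: short-interval follow-up CT"
        return "high risk: refer for biopsy / further work-up"

    for p in (0.01, 0.08, 0.40):
        print(f"p={p:.2f} -> {risk_group(p)}")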

Yet the ability to process vast amounts of data may make it possible for artificial intelligence to recognize subtle patterns that humans simply cannot see. It is well understood that such systems should be studied extensively before being put to general use. The lung-screening neural network is not ready for the clinic yet.

SOURCE

A.I. Took Test To Detect Lung Cancer And Smashed It

Read Full Post »

Sepsis Detection using an Algorithm More Efficient than Standard Methods

Reporter: Irina Robu, PhD

3.3.15   Sepsis Detection using an Algorithm More Efficient than Standard Methods, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 2: CRISPR for Gene Editing and DNA Repair

Sepsis is a complication of severe infection characterized by a systemic inflammatory response, with mortality rates of 25% to 30% for severe sepsis and 40% to 70% for septic shock. The most common sites of infection are the respiratory, genitourinary, and gastrointestinal systems, as well as the skin and soft tissue. The first manifestation of sepsis is often fever, with pneumonia the most common underlying infection; initial treatment consists of respiratory stabilization followed by aggressive fluid resuscitation. When fluid resuscitation fails to restore mean arterial pressure and organ perfusion, vasopressor therapy is indicated.

However, a machine-learning algorithm tested by Christopher Barton, MD, from UC San Francisco has outperformed the standard methods used for catching sepsis early in hospital patients, giving clinicians up to 48 hours to intervene before the condition has a chance to turn dangerous. The standard comparators included the Systemic Inflammatory Response Syndrome (SIRS) criteria, the Sequential (Sepsis-Related) Organ Failure Assessment (SOFA) and the Modified Early Warning System (MEWS). The data sets were divided between two far-flung institutions so that the algorithm could be trained and tested on demographically diverse patient populations.

The patients involved in the study were admitted to hospital without sepsis, and all had at least one recording of each of six vital signs: blood oxygen level, heart rate, respiratory rate, temperature, systolic blood pressure and diastolic blood pressure. Although admitted without sepsis, some patients contracted it during their stay while others did not. The researchers compared their algorithm’s detection against the standard methods at sepsis onset and at 24 and 48 hours prior.

Sepsis affects at least 1.7 million adults a year, mostly arising outside hospital settings, and nearly 270,000 of them die. The researchers hope the algorithm will allow clinicians to intervene before the condition becomes deadly.
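
As a minimal sketch of how an early-warning model could be built from the six vital signs mentioned above (the synthetic data, feature layout and choice of a gradient-boosting classifier are assumptions for illustration; the published algorithm's actual design is not reproduced here):

    # Illustrative sketch: training a classifier on six vital signs to flag
    # patients at risk of sepsis. Data here are synthetic placeholders.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(42)
    n = 1000
    # Columns: SpO2, heart rate, respiratory rate, temperature, systolic BP, diastolic BP
    X = np.column_stack([
        rng.normal(96, 2, n),      # oxygen saturation (%)
        rng.normal(85, 15, n),     # heart rate (bpm)
        rng.normal(18, 4, n),      # respiratory rate (breaths/min)
        rng.normal(37.0, 0.6, n),  # temperature (C)
        rng.normal(120, 15, n),    # systolic BP (mmHg)
        rng.normal(75, 10, n),     # diastolic BP (mmHg)
    ])
    # Synthetic label loosely tied to tachycardia plus fever, purely for demonstration.
    y = ((X[:, 1] > 100) & (X[:, 3] > 37.5)).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y)
    model = GradientBoostingClassifier().fit(X_train, y_train)
    probs = model.predict_proba(X_test)[:, 1]
    print(f"AUROC on held-out synthetic data: {roc_auc_score(y_test, probs):.2f}")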

SOURCE
https://www.aiin.healthcare/topics/diagnostics/sepsis-diagnosis-machine-learning

Read Full Post »

AI App for People with Digestive Disorders

Reporter: Irina Robu, PhD

3.3.14   AI App for People with Digestive Disorders, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 2: CRISPR for Gene Editing and DNA Repair

Artificial intelligence (AI) encompasses machine learning and deep learning, which allow computers to learn without being explicitly programmed at every step. The basic principle is that AI is machine intelligence that arrives at the best outcome when given a problem. This sets AI up well for life-science applications, where it can be taught to differentiate cells, support higher-quality imaging techniques, and analyze genomic data.

This type of technology serves a clear function and removes the need for explicit programming. Digital therapeutics will have an essential role in treating individuals with gastrointestinal disorders such as IBS. Deep learning is the favorite among AI approaches in biology. Its structure has roots in the structure of the human brain: layers of nodes connected to one another through which data is passed, with each layer extracting some information. In cell images, for example, one layer may analyze the cell membrane, the next an organelle, and so on until the cell can be identified.

A Berlin-based startup, Cara Care, uses AI to help people manage their chronic digestive problems and intends to spend the funding it has raised getting the app into the hands of gastrointestinal patients in the U.S. The company says its app has already helped up to 400,000 people in Germany and the U.S. manage widespread GI conditions such as reflux, irritable or inflammatory bowel, food intolerances, Crohn’s disease and ulcerative colitis “with a 78.8% treatment success rate.” Cara Care will also use the funding to conduct research and expand collaborations with companies in the pharmaceutical, diagnostics and food-production industries.

SOURCE
https://www.aiin.healthcare/topics/connected-care/ai-app-digestive-disorders-raises-7m?utm_source=newsletter

Read Full Post »

First Surgical Robot Making Surgeon’s Life More Efficient

Reporter: Irina Robu, PhD

A team of microsurgeons and engineers has developed a high-precision robotic assistant called MUSA, which is clinically and commercially available. The robotic assistant is compatible with current operating techniques, workflows and instruments. Microsure, the medical device company behind it, is based in the Netherlands and was founded by Eindhoven University of Technology and Maastricht University Medical Center in 2016. Microsure’s focus is to improve patients’ quality of life by developing robotic systems for microsurgery.

Microsure’s MUSA enhances surgical performance by stabilizing and scaling down the surgeon’s movements during complex microsurgical procedures at sub-millimeter scale. The surgical robot allows lymphatic surgery on lymph vessels smaller than 0.3 mm in diameter. Microsure has received ISO 13485 certification, which assures that the company adheres to the highest standards of quality management and regulatory compliance in developing, manufacturing, and testing its products and services.
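
As a conceptual sketch of what stabilizing and scaling down means in practice (the 10:1 scale factor, filter window and simulated tremor are illustrative assumptions, not Microsure's actual control scheme):

    # Conceptual sketch: scale down a surgeon's hand displacement and smooth
    # high-frequency tremor with a moving average. Numbers are illustrative.
    import numpy as np

    def stabilize_and_scale(hand_positions_mm, scale=10.0, window=15):
        """Map hand motion (mm) to instrument-tip motion (mm) at reduced scale."""
        x = np.asarray(hand_positions_mm, dtype=float)
        kernel = np.ones(window) / window
        smoothed = np.convolve(x, kernel, mode="same")  # crude tremor filter
        return smoothed / scale                          # 10:1 motion scaling

    t = np.linspace(0, 2, 400)
    intended = 5 * np.sin(2 * np.pi * 0.5 * t)   # slow, deliberate motion
    tremor = 0.4 * np.sin(2 * np.pi * 9 * t)     # ~9 Hz physiological tremor
    tip = stabilize_and_scale(intended + tremor)
    print(f"hand range {np.ptp(intended + tremor):.2f} mm -> tip range {np.ptp(tip):.2f} mm")

Scaling divides every hand displacement by a fixed factor, while the moving average suppresses the fast, involuntary component of the motion; a real controller would use a properly designed filter and safety limits.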

MUSA provides superhuman precision for microsurgeons, enabling new interventions that are currently impossible to perform by hand.

SOURCE

https://www.businesswire.com/news/home/20190607005175/en/

Read Full Post »

Cell Therapy Market to Grow Beyond Oncology As Big Pharma Expands Investments

Reporter: Irina Robu, PhD

Collaborations between Big Pharma and small to mid-size companies are currently focusing R&D on precision medicine. The market was valued at $2.70 billion in 2017 and is expected to reach $8.21 billion by 2025. A varied therapeutic focus and the implementation of advanced manufacturing technologies, such as single-use bioreactors, will pave the way for unique cell-gene and stem cell-gene combination therapies.

Novartis and Gilead are the first companies to adopt pay-for-performance business models for their CAR-T cell therapies. In addition to innovative pricing models, pharma companies are also showing a preference for risk-sharing and fast-to-market models in order to support the development of novel therapies. Moreover, developments in cell-culturing techniques, alongside the use of different stem cells such as adipose-derived stem cells, mesenchymal stem cells, and induced pluripotent stem cells, will reinforce the market with superior treatment options for non-oncological conditions such as neurological, musculoskeletal, and dermatological conditions.

With high demand for cell therapies, numerous growth opportunities arise, such as:

  • With more than 959 ongoing regenerative medicine clinical trials, the market finds opportunity across both stem cell and non-stem cell-based therapies.
  • Curative combination therapies, which find application in identifying the right patients as well as predicting the immune response in cancer patients.
  • Implementation of IT solutions and single-use manufacturing techniques for optimizing small-volume, high-value manufacturing of novel cell therapies, thus radically reducing time to market.
  • Emerging business models that help market players focus on academic, research and industry collaborations to support therapeutic and technological innovation.

Source

https://www.newswire.ca/news-releases/cell-therapy-market-to-grow-beyond-oncology-as-big-pharma-expands-investments-826628110.html

 

Read Full Post »

Reporter: Gail S. Thornton

 

From The Wall Street Journal (www.wsj.com)

Published January 9, 2019

Health-Care CEOs Outline Strategies at J.P. Morgan Conference

Chiefs at Johnson & Johnson, CVS discuss what’s next on a range of industry issues

One of the biggest health conferences of the year for investors, the J.P. Morgan Health-Care Conference, is taking place this week in San Francisco. Here are some of the hot topics covered at the four-day event, which wraps up Thursday.

BioMarin Mulls Payment Plans

BioMarin Pharmaceutical Inc. CEO Jean-Jacques Bienaimé said he would consider pursuing installment payment arrangements for the biotech’s experimental gene therapy for hemophilia. At the conference, Mr. Bienaimé told the Wall Street Journal that the one-time infusion, Valrox, is likely to cost in the millions because studies have shown it can eliminate bleeding episodes in patients, and current hemophilia treatments taken chronically can cost millions over several years. “We’re not trying to charge more than existing therapies,” he said. “We want to offer a better treatment at the same or lower cost.”

Johnson & Johnson Warns on Pricing

As politicians hammer drug prices, Johnson & Johnson CEO Alex Gorsky suggested companies need to police themselves. At the conference, Mr. Gorsky told investors that drug companies should price drugs reasonably and be transparent. “If we don’t do this as an industry, I think there will be other alternatives that will be more onerous for us,” Mr. Gorsky says. Some drugmakers pulled back from price increases in mid-2018 amid heightened political scrutiny, but prices went up for many drugs at the start of 2019.

Marijuana-Derived Drugs Show Promise

 

CVS Discusses New Stores

CVS Health Corp. Chief Executive Larry Merlo began showing initial concepts the company will be testing as it begins piloting new models of its drugstores that incorporate its Aetna combination. The first new test store will open next month in Houston, he told investors, and it will include expanded health-care services including a new concierge who will help patients with questions. 

Aetna Savings On the Way

Mr. Merlo also spelled out when the company will achieve the initial $750 million in synergies it has promised from the CVS-Aetna deal. In the first quarter, he said the company will see benefits from consolidating corporate functions. Savings from procurement and aligning lists of covered drugs should be seen in the first half, he says. Medical-cost savings will start affecting results toward the end of the year, he noted. 

Lilly Cuts Price

Drugmaker Eli Lilly & Co. expects average net US pricing for its drugs–after rebates and discounts–to decline in the low- to mid-single digits on a percentage basis this year, Chief Financial Officer Josh Smiley told the Journal. Lilly’s net prices had risen during the first half of 2018, but dropped in the third quarter as the company took a “restrained approach,” Mr. Smiley said. Lilly, which hasn’t yet reported fourth-quarter results, took some list price increases for cancer drugs in late December but hasn’t raised prices in the new year, he said.

Peter Loftus at peter.loftus@wsj.com and Anna Wilde Mathews at anna.mathews@wsj.com

Read Full Post »
