Pancreatic cancer survival is determined by ratio of two enzymes, Volume 2 (Volume Two: Latest in Genomics Methodologies for Therapeutics: Gene Editing, NGS and BioInformatics, Simulations and the Genome Ontology), Part 1: Next Generation Sequencing (NGS)
Reporter and Curator: Dr. Sudipta Saha, Ph.D.
Protein kinase C (PKC) isozymes function as tumor suppressors in an increasing number of contexts. These enzymes are crucial for a number of cellular activities, including cell survival, proliferation and migration — functions that must be carefully controlled lest cells grow out of control and form a tumor. In contrast to oncogenic kinases, whose function is acutely regulated by transient phosphorylation, PKC is constitutively phosphorylated following biosynthesis to yield a stable, autoinhibited enzyme that is reversibly activated by second messengers. Researchers at University of California San Diego School of Medicine found that another enzyme, called PHLPP1, acts as a “proofreader” to keep careful tabs on PKC.
The researchers discovered that in pancreatic cancer high PHLPP1 levels lead to low PKC levels, which is associated with poor patient survival. They reported that the phosphatase PHLPP1 opposes PKC phosphorylation during maturation, leading to the degradation of aberrantly active species that do not become autoinhibited. They discovered that any time an over-active PKC is inadvertently produced, the PHLPP1 “proofreader” tags it for destruction. That means the amount of PHLPP1 in a patient’s cells determines the amount of PKC, and it turns out those enzyme levels are especially important in pancreatic cancer.
This team of researchers reversed a 30-year paradigm when they reported evidence that PKC actually suppresses, rather than promotes, tumors. For decades before this revelation, many researchers had attempted to develop drugs that inhibit PKC as a means to treat cancer. Their study implied that anti-cancer drugs would actually need to do the opposite — boost PKC activity. This study sets the stage for clinicians to one day use a pancreatic cancer patient’s PHLPP1/PKC levels as a predictor for prognosis, and for researchers to develop new therapeutic drugs that inhibit PHLPP1 and boost PKC as a means to treat the disease.
The ratio — high PHLPP1/low PKC — correlated with poor prognoses: no pancreatic cancer patient with low PKC in the database survived longer than five and a half years. On the flip side, 50 percent of the patients with low PHLPP1/high PKC survived longer than that. While still at the earliest stages, the researchers hope that this information might one day aid pancreatic diagnostics and treatment. The researchers next plan to screen chemical compounds to find those that inhibit PHLPP1 and restore PKC levels in low-PKC pancreatic cancer cells in the lab. These might form the basis of a new therapeutic drug for pancreatic cancer.
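To make the kind of analysis behind these numbers concrete, here is a minimal sketch, in Python with the lifelines library, of stratifying patients by a PHLPP1/PKC expression ratio and comparing survival between the strata. Every value, cutoff and variable name below is invented for illustration; this is not the study’s actual data or code.

```python
# Hypothetical sketch: stratify patients by a PHLPP1/PKC expression ratio
# and compare survival between strata. All numbers are invented.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 80
phlpp1 = rng.lognormal(mean=0.0, sigma=0.5, size=n)  # hypothetical expression levels
pkc = rng.lognormal(mean=0.0, sigma=0.5, size=n)
ratio = phlpp1 / pkc
high = ratio > np.median(ratio)                      # high PHLPP1 / low PKC stratum

# Invented survival times: worse outcomes in the high-ratio stratum
months = rng.exponential(scale=np.where(high, 18.0, 48.0))
observed = rng.random(n) < 0.8                       # ~80% of events observed

km = KaplanMeierFitter()
km.fit(months[high], observed[high], label="high PHLPP1 / low PKC")
print(km.median_survival_time_)
km.fit(months[~high], observed[~high], label="low PHLPP1 / high PKC")
print(km.median_survival_time_)

# Log-rank test for a survival difference between the two strata
res = logrank_test(months[high], months[~high],
                   event_observed_A=observed[high],
                   event_observed_B=observed[~high])
print(f"log-rank p = {res.p_value:.3g}")
```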
Irreconcilable Dissonance in Physical Space and Cellular Metabolic Conception
Curator: Larry H. Bernstein, MD, FCAP
Pasteur Effect – Warburg Effect – What their history can teach us today.
José Eduardo de Salles Roselino
The Warburg effect, in reality the “Pasteur effect”, was the first example of metabolic regulation to be described: a decrease in the carbon flux from the sugar molecule towards the end products of the catabolic pathway, ethanol and carbon dioxide, observed when yeast cells were transferred from an anaerobic environment to an aerobic one. In Pasteur’s studies, sugar metabolism was measured mainly by the decrease in sugar concentration observed in the yeast growth media after a measured period of time. The sugar concentration in the media falls at great speed in yeast grown in anaerobiosis (oxygen deficiency), and this speed is greatly reduced by transfer of the yeast culture to an aerobic condition. This finding was very important for the wine industry of France in Pasteur’s time, since most of the undesirable outcomes in the industrial use of yeast were perceived when yeast cells took a very long time to create a rather selective anaerobic condition. This selective culture media was characterized by the higher carbon dioxide levels produced by fast-growing yeast cells and by a higher alcohol content in the yeast culture media.
In biochemical terms, this finding was required to understand Lavoisier’s results indicating that chemical and biological oxidation of sugars produce the same calorimetric (heat generation) results. This observation requires a control mechanism (metabolic regulation) to prevent living cells from being burned by the rapid heat released by the biological oxidation of sugar (metabolism). In addition, Lavoisier’s results were the first indication that both processes happen inside the same thermodynamic limits. In very summarized form, these observations indicate the major reasons that led Warburg to test for failure of control mechanisms in cancer cells in comparison with those observed in normal cells.
[It might be added that the availability of O2 and CO2 and climatic conditions over 750 million years that included volcanic activity, tectonic movements of the earth’s crust, and glaciation, and more recently the use of carbon fuels and the extensive deforestation of our land masses, have had a large role in determining biological speciation over time, in sea and on land. O2 is generated by plants utilizing energy from the sun and conversion of CO2. Remove the plants and we tip the balance. A large source of CO2 is from beneath the earth’s surface.]
Biology inside classical thermodynamics poses some challenges to scientists. For instance, all classical thermodynamics must be measured in reversible thermodynamic conditions. In an isolated system, an increase in P (pressure) leads to a decrease in V (volume), all this occurring in a condition in which infinitesimal changes in one affect the other in the same way, a continuum response. Not even a quantum of energy will stand beyond those parameters.
In a reversible system, a decrease in V, under the same conditions, will lead to an increase in P. In biochemistry, reversible usually indicates a reaction that easily goes either from A to B or from B to A. For instance, when it was required to search for an anti-ischemic effect of chlorpromazine in an extrahepatic obstructed liver, it was necessary to use a system in which biliary pressure was increased in a reversible manner, in order to exclude a direct effect of the drug on the pressure inducer of the biological system (bile secretion) (Braz. J. Med. Biol. Res. 1989; 22: 889-893). Frequently, these details are skipped over by those who read biology in ATGC letters.
Very important observations can be made in this regard when neutral mutations are taken into consideration, since after several mutations (not affecting previous activity and function) a last mutant may provide a new transcript RNA for a protein and elicit a new function. For example, consider a lamb prion protein (PrP C) that becomes similar to the bovine PrP C while preserving its normal role in the lamb, yet acquiring the ability to convert the human PrP C (Stanley Prusiner).
This observation is good enough to confirm one of the most important contributions of Erwin Schrödinger in his What Is Life?:
“This little book arose from a course of public lectures, delivered by a theoretical physicist to an audience of about four hundred which did not substantially dwindle, though warned at the outset that the subject matter was a difficult one and that the lectures could not be termed popular, even though the physicist’s most dreaded weapon, mathematical deduction, would hardly be utilized. The reason for this was not that the subject was simple enough to be explained without mathematics, but rather that it was much too involved to be fully accessible to mathematics.”
After Hans Krebs’ description of the cyclic nature of citrate metabolism, and after his followers described its requirement for aerobic catabolism, two major lines of research started the search for an understanding of the mechanism of energy transfer that explains how ADP is converted into ATP. One followed the organic chemistry line of reasoning and therefore searched for a mechanism that could explain how the breakdown of the carbon-carbon link could have its energy transferred to ATP synthesis. One of the major leaders of this research line was Britton Chance. He took into account that relatively early in the series of Krebs cycle reactions, two carbon atoms of acetyl were released as carbon dioxide (in fact, not the real acetyl carbons but those on the opposite side of the citrate molecule). In stoichiometric terms, it was not important whether the released carbons were exactly those originating from glucose. His research aimed to find a proteinaceous intermediary that could act as an energy reservoir, storing in a phosphorylated amino acid the energy of carbon-carbon bond breakdown. This activated amino acid could then transfer its phosphate group to ADP, producing ATP. A key intermediate involved in the transfer was identified by Kaplan and Lipmann at Johns Hopkins as acetyl coenzyme A, for which Fritz Lipmann received a Nobel Prize.
Alternatively, possibly under the influence of the excellent results of Hodgkin and Huxley, a second line of research appeared. The work of Hodgkin & Huxley demonstrated the storage of electrical potential energy in transmembrane ionic asymmetries and presented the explanation for the change from resting to action potential in excitable cells. This second line of research, under the leadership of Peter Mitchell, postulated the transfer of the oxido-reductive power of organic molecule oxidation through electron transfer as the key energetic transfer mechanism required for ATP synthesis.
This diverted attention from the high-energy (~P) phosphate bond to the transfer of electrons. During most of the harsh period of confrontation between the two points of view, Paul Boyer and followers attempted to act as a conciliatory third party, without good results, according to personal accounts heard in Latin America from those few of our scientists who were able to follow the major scientific events held in the USA and present them to us later. Paul Boyer was eventually able to show how the energy is transduced by a molecular machine that changes conformation in a series of three steps while rotating in one direction to produce ATP, and in the opposite direction to produce ADP plus Pi from ATP (reversibility).
However, Peter Mitchell had earlier emerged victorious in the conceptual dispute over Britton Chance’s point of view, after he used E. coli mutants to show H+ gradients across the cell membrane and their use as an energy source, for which he received a Nobel Prize. This outcome represented such a blow to Chance’s previous work that it seems to have cast a shadow over very important findings obtained earlier in his career, findings that should not be affected by one or another form of energy transfer mechanism. For instance, Britton Chance developed the simple and rapid polarographic assay method for oxidative phosphorylation and the idea of control of energy metabolism that brings us back to Pasteur.
This alternative metabolic result seems to have been neglected in the recent years of obesity epidemics, which led to a search for a single molecular mechanism to explain the accumulation of chemical reserve (adipose tissue) in our body. This does not mean that the role of the central nervous system is neglected here. In short, in respiring mitochondria the rate of electron transport, and thus the rate of ATP production, is determined primarily by the relative concentrations of ADP, ATP and phosphate in the external medium (cytosol), and not by the concentration of respiratory substrate such as pyruvate. Therefore, when the yield of ATP is high, as it is in aerobiosis, and the cellular use of ATP is unchanged, the oxidation of pyruvate, and therefore glycolysis, is quickly throttled down to the resting state, without any change in gene expression. The dependence of respiratory rate on ADP concentration is also seen in intact cells. A muscle at rest, using no ATP, has a very low respiratory rate. [When skeletal muscle is stressed by high exertion, the lactic acid produced is released into the circulation and is metabolized aerobically by the heart at the end of the activity.]
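The “throttling” described here can be caricatured with a simple saturation curve relating respiration rate to cytosolic ADP. This is only a toy sketch: the hyperbolic form and the Km value are assumptions chosen for illustration, not parameters from the original work.

```python
# Toy illustration of respiratory ("acceptor") control by ADP: respiration
# rises with cytosolic ADP and saturates, so a resting cell with low ADP
# turnover respires slowly regardless of pyruvate supply.
def respiration_rate(adp_mM, vmax=1.0, km_mM=0.03):
    """Michaelis-Menten-style dependence of O2 consumption on [ADP] (illustrative Km)."""
    return vmax * adp_mM / (km_mM + adp_mM)

for adp in (0.001, 0.01, 0.03, 0.1, 1.0):   # mM, from resting to working muscle
    print(f"[ADP] = {adp:6.3f} mM -> relative respiration {respiration_rate(adp):.2f}")
```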
This respiratory control of metabolism leads to the preservation of the body’s carbon reserves and, in the case of high caloric intake, also to an increase in the fat reserves that were essential for our biological ancestors’ survival (and that today feed our obesity epidemic). No matter how important this observation is, it is only one focal point of metabolic control. We cannot reduce the problem of obesity to the existence of metabolic control. There are numerous other factors, but on the other hand we cannot neglect or remove this vital process in order to correct obesity, and we cannot explain obesity while ignoring this metabolic control. This topic is so neglected in modern times that we cannot follow major research lines of the past that were interrupted by the emerging molecular biology techniques and the vain belief that a dogmatic vision of biology could replace all previous knowledge with a new one based upon ATGC readings. To display the bad consequences derived from ignorance of these old scientific facts, we can take into account, for instance, how ion movements across membranes affect membrane protein conformation and therefore contradict the wrongly framed central dogma of molecular biology. This change in protein conformation (with unchanged amino acid sequence), and/or the lack of such change, is linked to the factors that affect vital processes such as the heartbeat. This modern ignorance could also explain some major pitfalls seen in clinical trials of new drugs and, on a smaller scale, in bad medical practice.
The work of Britton Chance and of Peter Mitchell has deep and sound scientific roots: it was done with excellent techniques, supported by excellent scientific reasoning, and produced a large series of very important intermediary results. Their sole difference was that they aimed at very different scientific explanations as their goals (they had different teleologies in mind, shaped by their previous experiences). When, with the use of mutants obtained in microorganisms, P. Mitchell’s goal was found to survive and B. Chance’s to succumb to the experimental evidence, all those excellent findings of B. Chance and his followers were consigned to the dustbin of scientific history as an example of lack of scientific consideration. [On the one hand, the Mitchell model used a unicellular organism; on the other, Chance’s work was with eukaryotic cells, quite relevant to the discussion.]
We can summarize the challenge faced by these two great scientists in the following form: the first conceptual unification in bioenergetics, achieved in the 1940s, is inextricably bound up with the name of Fritz Lipmann. Its central feature was the recognition that adenosine triphosphate, ATP, serves as a universal energy “currency” much as money serves as economic currency. In a nutshell, the purpose of metabolism is to support the synthesis of ATP. In microorganisms, this is perfect! In humans, mammals or vertebrates, for the same reason that we cannot consider gene expression equivalent to protein function (an acceptable error in the case of microorganisms), this oversimplifies the metabolic requirement, with a huge error. However, if our concern is ATP chemistry only, metabolism produces ATP and the hydrolysis of ATP pays for the performance of almost all kinds of work. It is possible to presume that finding out how the flow of metabolism (carbon flow) leads to ATP production was a major focal point of research for the two contenders. Consequently, what could have been a minor fall for one of the contenders, if we take into account all that was found during his entire life of research, the real failure of B. Chance’s final goal, was amplified far beyond what reason would allow!
Another aspect must be taken into account: both contenders had very sound roots in the scientific past. Metabolism may produce two forms of energy currency (I personally don’t like this expression* and use it here only because it was used by both groups to express their findings; together with simplistic thermodynamics, this expression conveys wrong ideas). The second kind of energy currency is the current of ions passing from one side of a membrane to the other. The P. Mitchell scientific root undoubtedly had the work of Hodgkin & Huxley, Huxley & Huxley, and Huxley & Simmons (1940s to 1972) as a solid support. B. Chance had the enzymologists involved in clarifying how ATP could be produced directly from NADH + H+ oxido-reductive metabolic reactions or from the hydrolysis of an enolpyruvate intermediary. Both competitors had their work supported by different but sound scientific roots, and both produced very important scientific results while trying to establish their hypothetical points of view.
*ATP is produced under the guidance of cell needs and not by its yield. When glucose yields only 2 ATPs per molecule, it is oxidized at very high speed (anaerobiosis), as required to match cellular needs. On the other hand, when it may yield (in thermodynamic terms) 38 ATPs, the same molecule is oxidized at low speed. It would be as if an investor chose the least money-yielding form for his investment.
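As a quick check of the footnote’s arithmetic, here is a short sketch using the classical textbook yields it cites (2 ATP per glucose anaerobically, 38 aerobically): to meet the same ATP demand per unit time, glycolysis alone must consume glucose about 19 times faster.

```python
# Back-of-envelope arithmetic for the footnote above: matching the same
# ATP demand from 2 ATP/glucose (anaerobic) vs 38 ATP/glucose (aerobic,
# the classical figure used in the text).
atp_demand = 38            # arbitrary units of ATP needed per unit time
aerobic_yield = 38         # ATP per glucose
anaerobic_yield = 2        # ATP per glucose

glucose_aerobic = atp_demand / aerobic_yield      # 1 glucose per unit time
glucose_anaerobic = atp_demand / anaerobic_yield  # 19 glucose per unit time
print(glucose_anaerobic / glucose_aerobic)        # -> 19.0x faster glucose consumption
```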
Before the winning results of P. Mitchell were displayed, one line of defense used by B. Chance’s followers was to create a conflict between what would be expected from the restrictive role of proteins, through their specific ionic interactions, and the general ability of ionic asymmetries to be associated with mitochondrial ATP production. Chemically catalyzed protein activities do not have perfect specificity, but an outstanding degree of selective interaction was presented by the lock-and-key model of enzyme interaction. A large group of outstanding “mitochondriologists” were able to show ATP synthesis associated with Na+, K+, Ca2+… asymmetries across mitochondrial membranes, and each time they did this, P. Mitchell had to demonstrate the existence of antiporters that exchange X for hydrogen as the final common source of chemiosmotic energy used by mitochondria for ATP synthesis.
This conceptual battle generated an enormous body of knowledge that was laid to rest, its line of research somehow discontinued, when the final E. coli mutant studies presented convincing evidence in favor of P. Mitchell’s point of view.
Not surprisingly, a “wise anonymous” later pointed out: “No matter what you are doing, you will always be better off in case you have a mutant”
(Principles of Medical Genetics, T. D. Gelehrter & F. S. Collins, Chapter 7, 1990).
However, let’s take the example of a mechanical wristwatch. When the watch is working in an acceptable way, its normal functioning is clearly not the result of one of its isolated components, nor something that can be shown by a reductionist molecular view. Usually it will be considered to work in an acceptable way if its accuracy falls inside a normal functional range, for instance one or two standard deviations below or above the mean value for normal function, depending upon the rigor wisely adopted. Only when it has a faulty component (a genetic inborn error) can we indicate a single isolated piece as the cause of its failure (a reductionist molecular view).
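The “normal functional range” criterion mentioned above can be written down in a few lines. This is a minimal sketch with invented daily-error readings; the two-standard-deviation cutoff is just the example the text itself suggests.

```python
# Minimal sketch of the "normal functional range" idea: flag a watch (or an
# analyte) as abnormal when it falls outside mean +/- 2 SD of a reference
# population. The readings below are invented.
import statistics

reference = [0.2, -0.1, 0.4, 0.0, -0.3, 0.1, 0.2, -0.2]  # daily error, s/day
mu = statistics.mean(reference)
sd = statistics.stdev(reference)

def within_normal_range(value, k=2):
    """True if value lies within mean +/- k standard deviations."""
    return abs(value - mu) <= k * sd

print(within_normal_range(0.3))   # True: inside the range, "working fine"
print(within_normal_range(5.0))   # False: a faulty component is suspected
```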
We need to teach in medicine, first, the major reasons why the watch works fine (not simply saying it is “automatic”). Functions may cross the reversible-to-irreversible regulatory limit faster than we can imagine. Later, when these ideas about the normal are held very clearly in the mindset of medical doctors (not medical technicians), we may address the inborn errors and what we may have learned from them. A modern medical technician may cause admiration when he uses an “innocent” virus to correct a faulty gene (a rather impressive technological advance). However, in case the virus later shows signs indicating that it was not so innocent, a real medical doctor will be called upon to put things in their correct place again.
Among the missing parts of normal evolution in biochemistry, a lot about ion fluxes can be found. Even those oscillatory changes in Ca2+ that were shown to affect gene expression (C. de Duve) were laid to rest, since they clearly indicate a source of biological information that, despite not changing nucleotide order in the DNA, shows a flux of biological information opposing the dogma (DNA to RNA to proteins). Another line of work has shown a hierarchy in the use of the mitochondrial membrane potential: first the potential is used for Ca2+ uptake, and only afterwards is it used for the conversion of ADP into ATP (A. L. Lehninger). In fact, the real idea of A. L. Lehninger was by far more complex, since according to him mitochondria work as a buffer for intracellular calcium, releasing it to the outside in case of a deep decrease in cytosolic levels or capturing it from the cytosol when facing a transient increase in Ca2+ load. As some Krebs cycle dehydrogenases are activated by Ca2+, this finding was used to propose a new control factor in addition to that of ADP (B. Chance). All this was discontinued with the wrong use of calculus in biochemistry (today we could indicate bioinformatics in a similar role), which assigned less importance to the mitochondrial role after comparative kinetics that today are seen as faulty.
It is important to combat dogmatic reasoning and restore sound scientific foundations in basic medical courses, urgently reversing the faulty trend that tries to impose a view going from the detail towards the generalization instead of the correct form, which goes from the well-understood general finding towards its molecular details. The view that led to curious subjects in medical courses, such as bioinformatics taught as training in sequence-finding activities, can only be explained by its commercial value. The usual form of scientific thinking respects the limits of our ability to grasp new knowledge and relies on the reproducibility of scientific results as a way to surpass the lack of a mathematical equation defining the relationship of variables and the determination of its functional domains. It also uses old scientific roots as its sound support and never replaces existing knowledge with dogmatic and/or wishful thinking. When sequencing DNA was found to be a technical advance for finding the amino acid sequence of proteins, it was just that: a technical advance. This technical advance by no means could be presented as an indication that DNA sequences alone have replaced the need to study protein chemistry and its responses to microenvironmental changes, in order to understand its multiple conformations and changes in activity and function. As E. Schrödinger correctly described, the chemical structure responsible for the stored, coded form of genetic information must have minimal interaction with its microenvironment in order to endure for hundreds and hundreds of years, as seen in the Habsburg lip. Only magical reasoning assumes that it is possible to find in non-reactive chemical structures the properties of the reactive ones.
For instance, knowledge of the reactions of the Krebs cycle clearly indicates a role for the solvent, which can no longer be considered an inert bath for the catalytic activity of the enzymes once the transfer of energy includes a role for hydrogen transport. The great increase in understanding of this change in chemical reactivity arrived with the concept of conformational energy.
Again, even a rather simplistic view of this atomic property (conformational energy) is enough to confirm, once more, one of the most important contributions of E. Schrödinger in his What Is Life?:
“This little book arose from a course of public lectures, delivered by a theoretical physicist to an audience of about four hundred which did not substantially dwindle, though warned at the outset that the subject matter was a difficult one and that the lectures could not be termed popular, even though the physicist’s most dreaded weapon, mathematical deduction, would hardly be utilized. The reason for this was not that the subject was simple enough to be explained without mathematics, but rather that it was much too involved to be fully accessible to mathematics.”
In a very simplistic view, while energy manifests itself through the ability to perform work, conformational energy, as a property derived from our atomic structure, can be neutral, positive or negative (no effect upon, increase of, or decrease of reactivity, measured as work).
Also:
“I mean the fact that we, whose total being is entirely based on a marvellous interplay of this very kind, yet all possess the power of acquiring considerable knowledge about it. I think it possible that this knowledge may advance to little short of a complete understanding — of the first marvel. The second may well be beyond human understanding.”
In fact, scientific knowledge allows us to understand how biological evolution may or may not have occurred, and yet it does not present a proof of how it actually occurred. It will always be an indication of the possible against the highly unlikely, and never a scientifically proven fact about the real form of its occurrence.
As was the case with B. Chance and his bioenergetics findings, we may obtain very important results that point toward directions later shown to be wrong, as happened to him, or that redirect us toward our past.
The Skeleton of Physical Time – Quantum Energies in Relative Space of S-labs
By Radoslav S. Bozov, Independent Researcher
WSEAS, Biology and BioSystems of Biomedicine
Space does not equate to distance, the displacement of an object by classically defined forces – electromagnetic, gravity or inertia. In perceiving quantum open systems, a quantum, a package of energy, displaces properties of wave interference and statistical outcomes of sums of paths of particles detected by a design of S-labs.
The notion of S-labs, space labs, deals with inherent problems of an operational module, R(i+1), where an imaginary number ‘struggles’ to work under roots of a negative sign, a reflection of an observable set of sums reaching beyond the limits of a human organ, the eye or another foundational signal-processing system.
While heavenly bodies, planets, star systems, and other exotic forms of light-reflecting and/or emitting objects observable with the naked eye have been deduced to operate under numerical systems that calculate a periodic displacement of one relative to another, atomic clocks of nanospace open our eyes to ever-expanding energy spaces, where matrices of interactive variables point to the problem of the infinity of variations in scalar spaces, while defining the properties of minute universes as a mirror image of an astronomical system. The first and foremost problem is essentially the same as in those mathematical methodologies deduced by Isaac Newton and Albert Einstein for processing a surface. I will introduce you to a surface interference method by describing undetermined objective space in terms of determined subjective time.
Therefore, the moment will be an outcome of statistical sums of a numerical system extending from near zero to near one. Three strings hold down a dual system entangled via the interference of two waves, where a single wave is a product of the momentum of three particles (today named according to either weak or strong interactions).
The above described system emerges from duality into trinity the objective space value of physical realities. The triangle of physical observables – charge, gravity and electromagnetism, is an outcome of interference of particles, strings and waves, where particles are not particles, or are strings strings, or are waves waves of an infinite character in an open system which we attempt to define to predict outcomes of tomorrow’s parameters, either dependent or independent as well as both subjective to time simulations.
We now know that aging of a biological organism cannot be defined within singularity. Thereafter, clocks are subjective to apparatuses measuring oscillation of defined parameters which enable us to calculate both amplitude and a period, which we know to be dependent on phase transitions.
The problem of phase was solved by the applicability of carbon relative systems. A piece of diamond does not get wet, yet it holds water’s light-entangled property. Water is the dark force of light. To formulate such a statement, we have been searching for truth by examining cooling objects where the Maxwell demon is translated into information, a data complex system.
Modern perspectives in computing quantum-based matrices, 0+1 = 1 and/or 0+0 = 1, and/or 1+1 = 0, will be reduced by applying a conceptual frame of Aladdin’s flying anti-gravity carpet, unwrapping both past and future by sending a photon to both, placing the present always near zero. Thus, in each parallel quantum computation of a natural system approaching the limit of a vibration of a string defining 0, 0 does not equal 0, and 1 does not equal 1. In any case, if our method gives 1+1 = 1, then 1 is not 1 at time i+1. This will set the fundamentals of an operational module, called the labris operator or, in simplicity, S-labs. Note that 1 as a result is an event predictable to the future, while the interacting parameters of the addition 1+1 may be both 1 as an observable past and 1 as an imaginary system, or 1+1 displaced interactive parameters of past observable events. This is the foundation of Future Quantum Relative Systems Interference (QRSI), taking analytical technologies of the future as a result of a data-matrix compressing principle relative to carbon as a reference matter rational to water-based properties.
Gödel’s concept of loops exists therefore only upon discrete relative space uniting to parallel absolute continuity of time ‘lags’. (Gödel, Escher, Bach: An Eternal Golden Braid. A Metaphorical Fugue on Minds and Machines in the Spirit of Lewis Carroll. D. Hofstadter. Chapter XX: Strange Loops, Or Tangled Hierarchies. A grand windup of many of the ideas about hierarchical systems and self-reference. It is concerned with the snarls which arise when systems turn back on themselves – for example, science probing science, government investigating governmental wrongdoing, art violating the rules of art, and finally, humans thinking about their own brains and minds. Does Gödel’s Theorem have anything to say about this last “snarl”? Are free will and the sensation of consciousness connected to Gödel’s Theorem? The chapter ends by tying Gödel, Escher, and Bach together once again.) The struggle in-between time creates dark spaces within which strings manage to obey light properties – entangled bosons of information carrying future outcomes of a system’s processing consciousness. Therefore, Albert Einstein was correct in his quantum time realities when rejecting a dissolving cube of sugar within a cup of tea (Henri Bergson, the 19th-century philosopher. Bergson’s concept of multiplicity attempts to unify in a consistent way two contradictory features: heterogeneity and continuity. Many philosophers today think that this concept of multiplicity, despite its difficulty, is revolutionary.) However, the unity of time and space could not be achieved by reducing time to charge, gravity and electromagnetic properties of energy and mass.
Charge is further deduced to interference of particles/strings/waves, contrary to the Hawking idea of irreducibility of chemical energy carrying ‘units’, and gravity is accounted for by intrinsic properties of anti-gravity carbon systems processing light, an electromagnetic force, that I have deduced towards ever expanding discrete energy space-energies rational to compressing mass/time. The role of loops seems to operate to control formalities where boundaries of space fluctuate as a result of what we called above – dark time-spaces.
Indeed, the concept of horizon is a constant due to ever expanding observables. Thus, it fails to acquire a rational approach towards space-time issues.
Richard Feynman touched on issues of the touching of space, sums of paths of a particle traveling through time. In a way he resolved an important paradigm, storing information and possibly studying it by opening a black box. Schrödinger’s cat is alive again, but incapable of climbing a tree when chased by a dog. Every time a cat climbs a garden tree, a fruit falls on hedgehogs carried away parallel to living wormholes whose purpose of generating information lies upon carbon units resolving light.
In order to deal with such a paradigm, we will introduce i+1 under a square root in relativity, therefore taking negative one (-1 = sqrt(i+1)), an operational module R dealing with Wheeler’s foam squeezed by light, releasing water – dark spaces. Thousand words down!
What is a number? Is that a name or some kind of language or both? Is the issue of number theory possibly accountable to the value of the concept of entropic timing? Light penetrating a pyramid holding bean seeds on a piece of paper and a piece of slice of bread, a triple set, where a church mouse has taken a drop of tear, but a blood drop. What an amazing physics! The magic of biology lies above egoism, above pride, and below Saints.
We will set up the twelve parameters seen through 3+1 in classic realities:
– discrete absolute energies/forces – no contradiction for now between Newtonian and Albert Einstein mechanics
– mass absolute continuity – conservational law of physics in accordance to weak and strong forces
– quantum relative spaces – issuing a paradox of Albert Einstein’s space-time resolved by the uncertainty principle
– parallel continuity of multiple time/universes – resolving uncertainty of united space and energy through evolving statistical concepts of scalar relative space expansion and vector quantum energies by compressing relative continuity of matter in it, ever compressing flat surfaces – finding the inverse link between deterministic mechanics of displacement and imaginary space, where spheres fit within surface of triangles as time unwraps past by pulling strings from future.
To us, common human beings, with an extra curiosity overloaded by real dreams, value happens to play in the intricate foundation of life – the garden of love, its carbon management in mind, collecting pieces of squeezed cooling time.
The infinite interference of each operational module to another composing ever emerging time constrains unified by the Solar system, objective to humanity, perhaps answers that a drop of blood and a drop of tear is united by a droplet of a substance separating negative entropy to time courses of a physical realities as defined by an open algorithm where chasing power subdue to space becomes an issue of time.
Radoslav S. Bozov Independent Researcher
If we were to use preventative measures of medical science, the instruments of medical science must predict future outcomes based on observable parameters of history….. There are several key issues arising: 1. Despite pinning a difference on the genomic scale, say pieces of information, we do not know how to change that – that is, shift the methylome occupying genome surfaces in a precise manner.. 2. Living systems’ operational quo DO NOT work by the vector gravity physics of ‘building blocks’, which projects the delusional concept of a masonry trick that has not worked by cornerstones and ever-shifting momenta… Assuming genomic assembling worked, that is, dealing with inferences through data mining and annotation, we are not in a position to read the future in real time, and we never will be, because of the rtPCR technology’s self-restriction in data-time processing.. We know of existing post-translational modalities… 3. We don’t know what we don’t know, and that is foundational to future medicine – that is, dealing with biological clocks, behavior, and various daily life inputs ranging from radiation to water systems, food quality, drugs…
This e-Book is a comprehensive review of recent Original Research on METABOLOMICS and related opportunities for Targeted Therapy, written by Experts, Authors and Writers. This is the first volume of Series D: e-Books on BioMedicine – Metabolomics, Immunology, Infectious Diseases. It is written for comprehension at the third-year medical student level, or as a reference for licensing board exams, but it is also written for the education of a first-time baccalaureate-degree reader in the biological sciences. Hopefully, it can be read with great interest by the undergraduate student who is undecided in the choice of a career. The results of Original Research gain added value for the e-Reader through the Methodology of Curation. The e-Book’s articles have been published in the Open Access Online Scientific Journal since April 2012. All new articles on this subject will continue to be incorporated, as published, with periodical updates.
We invite e-Readers to write an Article Review on Amazon for this e-Book.
All forthcoming BioMed e-Book Titles can be viewed at:
Leaders in Pharmaceutical Business Intelligence launched, in April 2012, an Open Access Online Scientific Journal: a scientific, medical and business multi-expert authoring environment in several domains of the life sciences, pharmaceutical, healthcare & medicine industries. The venture operates as an online scientific intellectual exchange at its website http://pharmaceuticalintelligence.com, curating and reporting on frontiers in the biomedical and biological sciences, healthcare economics, pharmacology, pharmaceuticals & medicine. In addition, the venture publishes a Medical E-book Series available on Amazon’s Kindle platform.
Analyzing and sharing the vast and rapidly expanding volume of scientific knowledge has never been so crucial to innovation in the medical field. We are addressing the need to overcome this scientific information overload by:
delivering curation and summary interpretations of the latest findings and innovations on an open-access, Web 2.0 platform, with the goal of providing primarily concept-driven search in the near future
providing a social platform for scientists and clinicians to enter into discussion using social media
compiling recent discoveries and issues in yearly-updated Medical E-book Series on Amazon’s mobile Kindle platform
This curation offers better organization and visibility to the critical information useful for the next innovations in academic, clinical, and industrial research by providing these hybrid networks.
Table of Contents for Metabolic Genomics & Pharmaceutics, Vol. I
Chapter 1: Metabolic Pathways
Chapter 2: Lipid Metabolism
Chapter 3: Cell Signaling
Chapter 4: Protein Synthesis and Degradation
Chapter 5: Sub-cellular Structure
Chapter 6: Proteomics
Chapter 7: Metabolomics
Chapter 8: Impairments in Pathological States: Endocrine Disorders; Stress Hypermetabolism and Cancer
Chapter 9: Genomic Expression in Health and Disease
Introduction to Impairments in Pathological States: Endocrine Disorders, Stress Hypermetabolism and Cancer
Author and Curator: Larry H. Bernstein, MD, FCAP
This leads into a series of presentations on the metabolic imbalance central to findings in endocrine, metabolic, inflammatory and immune diseases and cancer. All of this has been a result of discoveries based on the methods of study of genomics, proteomics, transcriptomics, and metabolomics that preceded this volume. In some cases there has been the use of knockout methods. The completion of the human genome and other catalogues has been instrumental in the past few years. In all cases there has been thorough guidance by a biological concept of mechanism based on gene expression, metabolic disturbance, signaling pathways, and up- or down-regulation of metabolic circuits. It is interesting to recall that a concept of metabolic circuits was not yet formulated in mid-20th-century physiology, except perhaps with respect to the coagulation pathways and, to some extent, glycolysis, gluconeogenesis, the hexose monophosphate shunt, and mitochondrial respiration, which were linear strings of enzyme-substrate reactions that intersected and that had flow restraints not then understood at the level of complexity we now appreciate. We did know the importance of cytochrome c, the adenine and pyridine nucleotides, and the energy balance. Electron microscopy opened the door to understanding the mechanism of contraction of skeletal muscle and myocardium, but it also opened the door to understanding kidney structure and function, explaining the “mesangium”. The first cardiac marker was discovered by Arthur Karmen in the serum alanine and aspartate aminotransferases, with a consequent differentiation between hepatic and myocardial damage. This was followed by lactic dehydrogenase and the H- and M-type isoenzymes in the 1960s, and in the next decade by the MB isoenzyme of creatine kinase. Troponin T and then troponin I would not be introduced until the mid-1980s, and they have become a gold standard for the diagnosis of myocardial infarction.
In the 1980s we also saw the development of antiplatelet therapy that rapidly advanced interventional cardiology. But advances in surgical as well as medical intervention also proceeded as the understanding of lipid metabolism was opened by the work of Brown and Goldstein at the UT Southwestern Medical Campus, and major advances in treatment came at Baylor and the UT Medical Center in Houston, and at the Cleveland Clinic. The next important advance came with the discovery of the role of nitric oxide synthase in endothelium and oxidative stress. The field of endocrinology saw advances as well over a comparable period of 30 years for the adrenals, thyroid, and pituitary glands, for the understanding of the male and female sex hormones, and in discoveries in breast, ovarian, and prostate cancer. There were cancer markers, such as CA125, CA15-3, and PSA. These had more of an impact on timely surgical intervention and, if not that, on post-surgical followup. Despite a long time into the war on cancer, introduced by President Richard Nixon, the fundamental knowledge needed was not sufficient. In the meantime, there were advances in the treatment of diabetes, with the eventual introduction of the insulin pump for type 1 diabetes. The problem of type 2 DM increased in prevalence, reaching into the childhood age group with rising obesity. An epidemiological pattern of disease comorbidities emerged. Our population has aged, and with it we are seeing an increase in dementias, especially Alzheimer’s disease. But knowledge of the brain has lagged far behind.
What follows is a series of chapters that address what has currently been advanced with respect to the alignment of our knowledge of the last decade and pharmaceutical discovery. Pharmaceuticals were adequate for bacterial infections until the 1990s, when we saw the rise of resistance to penicillins and vancomycin, and we had issues with gram-negative Enterobacter, Salmonella, and E. coli strains. That has been, and remains, a significant challenge. The elucidation of the gut microbiome in recent years will help to relieve this problem. The problem of the variety of different aggressive types of cancer has been another challenge. The door has been opened to better diagnostic tools with respect to imaging and targeted biomarkers for localization. I am not dealing with imaging, which is not the subject here.
HLA targeting efficiency correlates with human T-cell response magnitude and with mortality from influenza A infection
The carriage frequencies of the alleles with the lowest targeting efficiencies, A*24, were associated with pH1N1 mortality (r = 0.37, P = 0.031) and are common in certain indigenous populations in which increased pH1N1 morbidity has been reported. HLA efficiency scores and HLA use are associated with CD8 T-cell magnitude in humans after influenza infection. The computational tools used in this study may be useful predictors of potential morbidity and may identify immunologic differences of new variant influenza strains more accurately than evolutionary sequence comparisons. Population-based studies of the relative frequency of these alleles in severe vs. mild influenza cases might advance clinical practices for severe H1N1 infections among genetically susceptible populations.
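As a sketch of the statistical method behind the reported association (r = 0.37, P = 0.031), the following computes a Pearson correlation between allele carriage frequency and mortality. The paired values are invented placeholders; only the method, not the data, comes from the study.

```python
# Sketch of a carriage-frequency vs. mortality correlation. The eight
# paired values below are invented; only the method is illustrated.
from scipy.stats import pearsonr

carriage_freq = [0.05, 0.12, 0.22, 0.31, 0.40, 0.18, 0.27, 0.35]  # A*24 carriage
mortality = [1.2, 2.0, 3.5, 3.1, 4.8, 2.2, 2.9, 4.1]              # deaths per 100k

r, p = pearsonr(carriage_freq, mortality)
print(f"r = {r:.2f}, P = {p:.3f}")
```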
A deeper look into cholesterol synthesis
By Swathi Parasuraman
The human body needs cholesterol to maintain membrane fluidity, and it acts as a precursor molecule for several important biochemical pathways. Its production requires strict control, as cholesterol can cause problems if produced in excess. In 1964, Konrad Bloch received a Nobel Prize for his work elucidating the mechanisms of cholesterol synthesis. His work eventually contributed to the discovery of statins, drugs used today to lower blood cholesterol levels.

The biosynthesis of cholesterol is a complex process with more than 20 steps. One of the first enzymes in the pathway is 3-hydroxy-3-methylglutaryl-CoA reductase, also known as HMGCR, the main target of statins. As links between intermediates in cholesterol synthesis and various diseases are discovered continually, more information about the regulatory role of the post-HMGCR pathway is needed. In a recent minireview in The Journal of Biological Chemistry, Laura Sharpe and Andrew Brown of the University of New South Wales describe multiple ways in which enzymes other than HMGCR are implicated in the modulation of cholesterol synthesis. One such enzyme is squalene monooxygenase, which, like HMGCR, can be destroyed by the proteasome when cholesterol levels are high.

The minireview also explains how pathway intermediates can have functions distinct from those of cholesterol. For example, the intermediate 7-dehydrocholesterol usually is converted to cholesterol by the enzyme DHCR7 but is also a vitamin D precursor.

To synthesize the enzymes necessary to make cholesterol, SREBPs, short for sterol regulatory element binding proteins, have special functions. Along with transcriptional cofactors, they activate gene expression in response to low sterol levels and, conversely, are suppressed when there is enough cholesterol around (a toy sketch of this feedback loop follows below). Additionally, SREBPs control production of nicotinamide adenine dinucleotide phosphate, or NADPH, which is the reducing agent required to carry out the different steps in the pathway.

Lipid carrier proteins also can facilitate cholesterol synthesis. One example is SPF, or supernatant protein factor, which transfers substrate from an inactive to an active pool or from one enzyme site to another. Furthermore, translocation of several cholesterogenic enzymes from the endoplasmic reticulum to other cell compartments can occur under various conditions, thereby regulating the levels and sites of intracellular cholesterol accumulation.
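Here is the toy feedback sketch promised above: SREBP activity is modeled as high when sterols are scarce and suppressed as cholesterol accumulates, so synthesis settles at a set point. The functional form and all rate constants are invented for illustration and make no claim about the pathway’s real kinetics.

```python
# Toy negative-feedback loop for SREBP-driven cholesterol synthesis:
# SREBP activity falls as cholesterol accumulates, throttling synthesis.
# All constants are invented; this is not a quantitative pathway model.
def srebp_activity(chol, chol_half=1.0, hill=4):
    """High when sterols are low, suppressed when cholesterol is ample."""
    return 1.0 / (1.0 + (chol / chol_half) ** hill)

chol = 0.1                     # arbitrary starting cholesterol level
k_syn, k_deg, dt = 0.5, 0.2, 0.1
for step in range(200):        # simple Euler integration of the feedback loop
    dchol = k_syn * srebp_activity(chol) - k_deg * chol
    chol += dchol * dt
print(f"steady-state cholesterol ~ {chol:.2f}")  # settles near the set point
```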
Immunology in the gut mucosa:
20 Feb 2013 by Kausik Datta, posted in Immunology, Science (Nature)
The human gut can be the scene for devastating conditions such as inflammatory bowel disease, which arises through an improperly controlled immune response. The gut is often the body’s first point of contact with microbes; every mouthful of food is accompanied by a cargo of micro-organisms that go on to encounter the mucosa, the innermost layer of the gut. Most microbes are destroyed by the harsh acidic environment in the stomach, but a hardy few make it through to the intestines.
The intestinal surface is covered with finger-like protrusions called villi, whose primary function is the absorption of nutrients. These structures and the underlying tissues host the body’s largest population of immune cells. Scattered along the intestinal mucosa are dome-like structures called Peyer’s patches. These are enriched in lymphoid tissue, making them key sites for coordinating immune responses to pathogens whilst promoting tolerance to harmless microbes and food.
The villi contain a network of blood vessels to transport nutrients from food to the rest of the body. Lymphatics from both the Peyer’s patches and the villi drain into the mesenteric lymph nodes. Within the villi is a network of loose connective tissue called the lamina propria, and at the base of the villi are the crypts, which host the stem cells that replenish the epithelium. The epithelium, together with its overlying mucus, forms a barrier against microbial invasion.
A mix of immune cells, including T- and B-lymphocytes, macrophages, and dendritic cells, is embedded within the matrix of the Peyer’s patches. A key function of the Peyer’s patch is the sampling of antigens present in the gut. The Peyer’s patch has a thin mucous layer and specialized phagocytic cells, called M-cells, which transport material across the epithelial barrier via a process called transcytosis. Dendritic cells extend dendrites between epithelial cells to sample antigens, which are then broken down and used for presentation to lymphocytes. Sampling antigens in this way typically results in so-called tolerogenic activation, in which the immune system initiates an anti-inflammatory response. With their cargo of antigens, these dendritic cells then traffic to the T-cell zones of the Peyer’s patch. Upon encounter with specific T-cells, the dendritic cells convert them into an immunomodulatory cell called the regulatory T-cell, or T-reg.
Defects in the function of these cells are associated with inflammatory bowel disease in both animals and humans. The T-regs migrate to the lamina propria of the villi via the lymphatics. Here, the T-regs secrete a molecule called interleukin (IL)-10, which exerts a suppressive action on immune cells within the lamina propria and upon the epithelial layer itself. IL-10 is, therefore, critical in maintaining immune quiescence and preventing unnecessary inflammation. However, a breakdown in this process of immune homeostasis results in gut pathology, and when this occurs over a prolonged period and in an uncontrolled manner, it can lead to inflammatory bowel disease.
Chemical, mechanical or pathogen-triggered barrier disruption, coupled with particular genetic susceptibilities, may all combine to set off inflammation. Epithelium coming into contact with bacteria is activated, leading to bacterial influx. Alarm molecules released by the epithelium activate immune cells, and T-regs in the vicinity scale down their IL-10 secretion to enable an immune response to proceed. Dendritic cells are also activated by this environment and start to release key inflammatory molecules, such as IL-6, IL-12, and IL-23. Effector T-cells also appear on the scene, and these coordinate an escalation of the immune response by secreting their own inflammatory molecules: tumor necrosis factor (TNF)-α, interferon (IFN)-γ and IL-17.
Soon after the effector T-cells arrive, a voracious phagocyte called the neutrophil is recruited. Neutrophils are critical for the clearance of the bacteria. One weapon in the neutrophil armory is the ability to undergo self-destruction, which leaves behind a jumble of DNA saturated with enzymes, called the neutrophil extracellular trap. Although this can effectively destroy the bacterial invaders and plug any breaches in the epithelial wall, it also causes collateral damage to tissues. Slowly the tide begins to turn and the bacterial invasion is repulsed. Any remaining neutrophils die off and are cleared by macrophages.
Epithelial integrity is restored by replacement of damaged cells with new ones from the intestinal crypts. Finally T-regs are recruited once again to calm the immune response.
Targeting the molecules involved in gut pathology is leading to effective therapies for inflammatory bowel disease.
Notes:
T- and B-lymphocytes, Macrophages, and Dendritic Cells: These are all important immune effector cells. Macrophages and Dendritic cells are primary defence cells that can eat up (‘phagocytosis’) microbes and destroy them; they also can present parts of these microbes to lymphocytes. T-lymphocytes or T-cells help B-lymphocytes or B-cells recognize the antigen and form antibodies against it. Other types of T-cells can themselves kill microbes. All these cells also secrete various chemical substances, called cytokines and chemokines, which act as molecular messengers in recruiting various immune cells, coordinating and fine-tuning the immune response. Some of these cytokines are called Interleukins, shortened to IL.
Anti-inflammatory response: A type of immune response in which molecular messengers are used to scale down heavy-handed immune cell activity and switch off processes that recruit immune cells. This helps the body recognize and selectively tolerate beneficial agents such as the commensal microbes that live in the gut.
Neutrophils: These are highly versatile immune effector cells. Usually, they are one of the first cells recruited to the site of infection or tissue damage via message spread by molecular messengers. Neutrophils can themselves elaborate cytokines and chemokines, and have the ability to directly kill microbes.
Oxazoloisoindolinones with in vitro antitumor activity selectively activate a p53-pathway through potential inhibition of the p53-MDM2 interaction.
The metabolic network of a cell represents the catabolic and anabolic reactions that interconvert small molecules (metabolites) through the activity of enzymes, transporters and non-catalyzed chemical reactions. Our understanding of individual metabolic networks is increasing as we learn more about the enzymes that are active in particular cells under particular conditions and as technologies advance to allow detailed measurements of the cellular metabolome.
Metabolic network databases are important in allowing us to contextualise data sets emerging from transcriptomic, proteomic and metabolomic experiments.
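As a minimal illustration of what such a database encodes, the sketch below represents a fragment of a metabolic network as metabolites (nodes) connected by enzyme-catalyzed reactions (edges). The two glycolytic steps shown are real reactions, but the structure is a toy, not the schema of any actual resource.

```python
# Toy representation of a metabolic network: metabolites as nodes,
# enzyme-catalyzed reactions as labeled edges. Illustration only.
network = {
    "glucose": [("hexokinase", "glucose-6-phosphate")],
    "glucose-6-phosphate": [("phosphoglucose isomerase", "fructose-6-phosphate")],
}

def downstream(metabolite):
    """List (enzyme, product) pairs reachable in one enzymatic step."""
    return [(enzyme, product) for enzyme, product in network.get(metabolite, [])]

print(downstream("glucose"))  # [('hexokinase', 'glucose-6-phosphate')]
```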
This concludes a long step-by-step journey into rediscovering biological processes, from the genome as a framework to the remodeled and reconstituted cell, through a number of posttranscriptional and posttranslational processes that modify the proteome and determine the metabolome. The remodeling process continues over a lifetime. The process requires a balance between nutrient intake, energy utilization for work in the lean body mass, energy reserves, endocrine, paracrine and autocrine mechanisms, and autophagy. It is true when we look at this in its full scope: what a creature is man!
Smith CA, O’Maille G, Want EJ, Qin C, Trauger SA, Brandon TR, Custodio DE, Abagyan R, Siuzdak G. METLIN: a metabolite mass spectral database. Therapeutic Drug Monitoring, 2005, 27(6), 747-751.
Metabolomics is the scientific study of chemical processes involving metabolites. Specifically, metabolomics is the “systematic study of the unique chemical fingerprints that specific cellular processes leave behind”, the study of their small-molecule metabolite profiles.[1] The metabolome represents the collection of all metabolites in a biological cell, tissue, organ or organism, which are the end products of cellular processes.[2] mRNA gene expression data and proteomic analyses reveal the set of gene products being produced in the cell, data that represent one aspect of cellular function. Conversely, metabolic profiling can give an instantaneous snapshot of the physiology of that cell. One of the challenges of systems biology and functional genomics is to integrate proteomic, transcriptomic, and metabolomic information to provide a better understanding of cellular biology.
The term “metabolic profile” was introduced by Horning et al. in 1971 after they demonstrated that gas chromatography-mass spectrometry (GC-MS) could be used to measure compounds present in human urine and tissue extracts. The Horning group, along with those of Linus Pauling and Arthur B. Robinson, led the development of GC-MS methods to monitor the metabolites present in urine through the 1970s.
Concurrently, NMR spectroscopy, which was discovered in the 1940s, was also undergoing rapid advances. In 1974, Seeley et al. demonstrated the utility of using NMR to detect metabolites in unmodified biological samples. This first study, on muscle, highlighted the value of NMR in that it was determined that 90% of cellular ATP is complexed with magnesium. As sensitivity has improved with the evolution of higher magnetic field strengths and magic angle spinning, NMR continues to be a leading analytical tool to investigate metabolism. Efforts to utilize NMR for metabolomics have been influenced by the laboratory of Dr. Jeremy Nicholson at Birkbeck College, University of London and later at Imperial College London. In 1984, Nicholson showed that 1H NMR spectroscopy could potentially be used to diagnose diabetes mellitus, and later pioneered the application of pattern recognition methods to NMR spectroscopic data.
In 2005, the first metabolomics web database, METLIN, for characterizing human metabolites was developed in the Siuzdak laboratory at The Scripps Research Institute and contained over 10,000 metabolites and tandem mass spectral data. As of September 2012, METLIN contains over 60,000 metabolites as well as the largest repository of tandem mass spectrometry data in metabolomics.
On 23 January 2007, the Human Metabolome Project, led by Dr. David Wishart of the University of Alberta, Canada, completed the first draft of the human metabolome, consisting of a database of approximately 2500 metabolites, 1200 drugs and 3500 food components. Similar projects had been underway for several years in several plant species, most notably Medicago truncatula and Arabidopsis thaliana.
As late as mid-2010, metabolomics was still considered an “emerging field”. Further progress in the field was seen to depend in large part on the technical evolution of mass spectrometry instrumentation to address otherwise “irresolvable technical challenges”.
Metabolome refers to the complete set of small-molecule metabolites (such as metabolic intermediates, hormones and other signaling molecules, and secondary metabolites) to be found within a biological sample, such as a single organism. The word was coined in analogy with transcriptomics and proteomics; like the transcriptome and the proteome, the metabolome is dynamic, changing from second to second. Although the metabolome can be defined readily enough, it is not currently possible to analyse the entire range of metabolites by a single analytical method. The first metabolite database (called METLIN) for searching m/z values from mass spectrometry data was developed by scientists at The Scripps Research Institute in 2005. In January 2007, scientists at the University of Alberta and the University of Calgary completed the first draft of the human metabolome. They catalogued approximately 2500 metabolites, 1200 drugs and 3500 food components that can be found in the human body, as reported in the literature. This information, available at the Human Metabolome Database (www.hmdb.ca) and based on analysis of information available in the current scientific literature, is far from complete.
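The core operation behind searching m/z values against such a database can be sketched in a few lines. This is an illustrative reconstruction, not METLIN’s or HMDB’s actual API; the metabolite masses are a tiny hypothetical subset:

```python
# Match an observed m/z value against a metabolite mass list within a stated
# ppm tolerance, assuming a protonated [M+H]+ adduct.
PROTON = 1.007276  # mass of a proton, Da

# Hypothetical monoisotopic masses (Da) for a few metabolites
DB = {"glucose": 180.0634, "lactate": 90.0317, "citrate": 192.0270}

def search_mz(mz, adduct_shift=PROTON, tol_ppm=10.0):
    """Return metabolites whose [M+H]+ m/z lies within tol_ppm of `mz`."""
    hits = []
    for name, mono in DB.items():
        theoretical = mono + adduct_shift
        ppm = abs(mz - theoretical) / theoretical * 1e6
        if ppm <= tol_ppm:
            hits.append((name, round(ppm, 2)))
    return hits

print(search_mz(181.0707))  # -> [('glucose', ...)]
```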
Each type of cell and tissue has a unique metabolic ‘fingerprint’ that can elucidate organ or tissue-specific information, while the study of biofluids can give more generalized though less specialized information. Commonly used biofluids are urine and plasma, as they can be obtained non-invasively or relatively non-invasively, respectively. The ease of collection facilitates high temporal resolution, and because they are always at dynamic equilibrium with the body, they can describe the host as a whole.
Metabolites are the intermediates and products of metabolism. Within the context of metabolomics, a metabolite is usually defined as any molecule less than 1 kDa in size.
A primary metabolite is directly involved in normal growth, development, and reproduction. A secondary metabolite is not directly involved in those processes. By contrast, in human-based metabolomics, it is more common to describe metabolites as being either endogenous (produced by the host organism) or exogenous. Metabolites of foreign substances such as drugs are termed xenometabolites. The metabolome forms a large network of metabolic reactions, where outputs from one enzymatic chemical reaction are inputs to other chemical reactions.
Metabonomics is defined as “the quantitative measurement of the dynamic multiparametric metabolic response of living systems to pathophysiological stimuli or genetic modification”. The word origin is from the Greek μεταβολή meaning change and nomos meaning a rule set or set of laws. This approach was pioneered by Jeremy Nicholson at Imperial College London and has been used in toxicology, disease diagnosis and a number of other fields. Historically, the metabonomics approach was one of the first methods to apply the scope of systems biology to studies of metabolism.
There is a growing consensus that ‘metabolomics’ places a greater emphasis on metabolic profiling at a cellular or organ level and is primarily concerned with normal endogenous metabolism. ‘Metabonomics’ extends metabolic profiling to include information about perturbations of metabolism caused by environmental factors (including diet and toxins), disease processes, and the involvement of extragenomic influences, such as gut microflora. This is not a trivial difference; metabolomic studies should, by definition, exclude metabolic contributions from extragenomic sources, because these are external to the system being studied.
Toxicity assessment/toxicology. Metabolic profiling (especially of urine or blood plasma samples) detects the physiological changes caused by toxic insult of a chemical (or mixture of chemicals).
Functional genomics. Metabolomics can be an excellent tool for determining the phenotype caused by a genetic manipulation, such as gene deletion or insertion. Sometimes this can be a sufficient goal in itself—for instance, to detect any phenotypic changes in a genetically-modified plant intended for human or animal consumption. More exciting is the prospect of predicting the function of unknown genes by comparison with the metabolic perturbations caused by deletion/insertion of known genes.
Nutrigenomics is a generalised term which links genomics, transcriptomics, proteomics and metabolomics to human nutrition. In general, a metabolome in a given body fluid is influenced by endogenous factors such as age, sex, body composition and genetics, as well as underlying pathologies. The large bowel microflora are also a very significant potential confounder of metabolic profiles and could be classified as either an endogenous or exogenous factor. The main exogenous factors are diet and drugs. Diet can then be broken down to nutrients and non-nutrients.
Coordination of the transcriptome and metabolome by the circadian clock PNAS 2012
Conformational changes leading to substrate efflux.
The cellular response is defined by a network of chemogenomic response signatures.
We have completed a series of discussions on proteomics, a scientific endeavor that is essentially 15 years old. It is quite remarkable what has been accomplished in that time. The interest is abetted by an understanding of the limitations of the genomic venture that preceded it. Thorough yet incomplete knowledge of the genome has clarified its limits. It is the coding for all that lives, but all that lives has evolved to meet a demanding and changing environment with respect to
availability of nutrients
salinity
temperature
radiation exposure
toxicities in the air, water, and food
stresses – both internal and external
We have seen how both transcription and translation of the code result in a protein, lipoprotein, or other complex different from the initial transcript (mRNA). What you see in the DNA is not what you get in the functioning cell, organ, or organism. There are similarities as well as significant differences among plants, prokaryotes, and eukaryotes. There is extensive variation. The variation goes beyond genomic expression, and includes the functioning cell, organ type, and species.
Here, I return to the introductory discussion. Proteomics is a goal-directed, sophisticated science that uses a combination of methods to find the answers to biological questions. Graves PR and Haystead TAJ. Molecular Biologist’s Guide to Proteomics. Microbiol Mol Biol Rev. Mar 2002; 66(1): 39–63. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC120780/
Peptide mass tag searching
Peptide mass tag searching. Shown is a schematic of how information from an unknown peptide (top) is matched to a peptide sequence in a database (bottom) for protein identification. The partial amino acid sequence or “tag” obtained by MS/MS is combined with the peptide mass (parent mass), the mass of the peptide at the start of the sequence (mass tag 1), and the mass of the peptide at the end of the sequence (mass tag 2). The specificity of the protease used (trypsin is shown) can also be included in the search.
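The matching logic described in this legend can be sketched compactly. The following is a simplified illustration (assumptions: monoisotopic residue masses, tryptic cleavage after K/R, and an invented protein sequence), not the algorithm of any particular search engine:

```python
# Peptide mass tag searching, simplified: an unknown peptide is matched to a
# database peptide by its parent mass plus a short internal sequence tag.
MONO = {  # monoisotopic residue masses, Da (subset sufficient for the example)
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "L": 113.08406, "K": 128.09496, "R": 156.10111,
    "E": 129.04259, "D": 115.02694, "F": 147.06841,
}
WATER = 18.01056

def peptide_mass(seq):
    return sum(MONO[a] for a in seq) + WATER

def tryptic_peptides(protein):
    """Cleave after K or R (no missed cleavages), mimicking trypsin specificity."""
    pep, out = "", []
    for aa in protein:
        pep += aa
        if aa in "KR":
            out.append(pep)
            pep = ""
    if pep:
        out.append(pep)
    return out

def match_tag(parent_mass, tag, protein, tol=0.5):
    """Find database peptides consistent with both the parent mass and the tag."""
    return [p for p in tryptic_peptides(protein)
            if tag in p and abs(peptide_mass(p) - parent_mass) <= tol]

protein = "GASPVLKDEFRLVSAK"          # hypothetical database protein
print(match_tag(peptide_mass("DEFR"), "EF", protein))  # -> ['DEFR']
```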
ICAT method for measuring differential protein expression
The ICAT method for measuring differential protein expression. (A) Structure of the ICAT reagent. ICAT consists of a biotin affinity group, a linker region that can incorporate heavy (deuterium) or light (hydrogen) atoms, and a thiol-reactive end group for linkage to cysteines. (B) ICAT strategy. Proteins are harvested from two different cell states and labeled on cysteine residues with either the light or heavy form of the ICAT reagent. Following labeling, the two protein samples are mixed and digested with a protease such as trypsin. Peptides labeled with the ICAT reagent can be purified by virtue of the biotin tag by using avidin chromatography. Following purification, ICAT-labeled peptides can be analyzed by MS to quantitate the peak ratios and proteins can be identified by sequencing the peptides with MS/MS.
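The quantitation step at the end of this workflow reduces to comparing peak intensities of the heavy- and light-labeled forms of the same peptide. A hedged sketch follows; the peptide names and intensities are invented for illustration:

```python
# ICAT-style quantitation sketch: report heavy/light expression ratios between
# the two cell states from paired MS peak intensities.
pairs = {
    # peptide: (light intensity, heavy intensity) -- hypothetical values
    "VLDECK": (1.2e6, 2.9e6),
    "AGCFPK": (8.5e5, 8.1e5),
}

for peptide, (light, heavy) in pairs.items():
    ratio = heavy / light
    flag = "changed" if ratio > 2 or ratio < 0.5 else "unchanged"
    print(f"{peptide}: heavy/light = {ratio:.2f} ({flag})")
```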
Strategies for determination of phosphorylation sites in proteins
Strategies for determination of phosphorylation sites in proteins. Proteins phosphorylated in vitro or in vivo can be isolated by protein electrophoresis and analyzed by MS. (A) Identification of phosphopeptides by peptide mass fingerprinting. In this method, phosphopeptides are identified by comparing the mass spectrum of an untreated sample to that of a sample treated with phosphatase. In the phosphatase-treated sample, potential phosphopeptides are identified by a decrease in mass due to loss of a phosphate group (80 Da). (B) Phosphorylation sites can be identified by peptide sequencing using MS/MS. (C) Edman degradation can be used to monitor the release of inorganic 32P to provide information about phosphorylation sites in peptides.
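Strategy (A) above amounts to looking for mass pairs that differ by one phosphate group. A minimal sketch, with invented peptide masses, is shown here:

```python
# Compare peptide masses before and after phosphatase treatment and flag pairs
# differing by ~80 Da (loss of HPO3), marking candidate phosphopeptides.
PHOSPHO = 79.96633  # monoisotopic mass of HPO3, Da

untreated = [907.45, 1187.60, 1502.71]   # masses, phosphatase-untreated sample
treated   = [907.45, 1107.63, 1502.71]   # same sample after phosphatase

def candidate_phosphopeptides(before, after, tol=0.05):
    hits = []
    for m in before:
        # a peak is a candidate if (mass - 80) reappears in the treated sample
        if any(abs((m - PHOSPHO) - n) <= tol for n in after):
            hits.append(m)
    return hits

print(candidate_phosphopeptides(untreated, treated))  # -> [1187.6]
```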
protein mining strategy
Proteome-mining strategy. Proteins are isolated on affinity column arrays from a cell line, organ, or animal source and purified to remove nonspecific adherents. Then, compound libraries are passed over the array and the proteins eluted are analyzed by protein electrophoresis. Protein information obtained by MS or Edman degradation is then used to search DNA and protein databases. If a relevant target is identified, a sublibrary of compounds can be evaluated to refine the lead. From this method a protein target and a drug lead can be simultaneously identified.
Although the technology for the analysis of proteins is rapidly progressing, it is still not feasible to study proteins on a scale equivalent to that of the nucleic acids. Most of proteomics relies on methods, such as protein purification or PAGE, that are not high-throughput. Even performing MS can require considerable time in either data acquisition or analysis. Although hundreds of proteins can be analyzed quickly and in an automated fashion by a MALDI-TOF mass spectrometer, the quality of data is sacrificed and many proteins cannot be identified. Much higher quality data can be obtained for protein identification by MS/MS, but this method requires considerable time in data interpretation. In our opinion, new computer algorithms are needed to allow more accurate interpretation of mass spectra without operator intervention. In addition, to access unannotated DNA databases across species, these algorithms should be error tolerant, to allow for sequencing errors, polymorphisms, and conservative substitutions (a toy version of such tolerance is sketched below). New technologies will have to emerge before protein analysis on a large scale (such as mapping the human proteome) becomes a reality.
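As one hedged illustration of what “error tolerant” can mean here (this is my sketch, not the authors’ algorithm): accept a database peptide if the observed sequence tag aligns to it with at most one substitution, which would cover a sequencing error, a polymorphism, or a conservative substitution:

```python
# Error-tolerant tag matching: allow up to `max_mismatches` substitutions when
# aligning a short sequence tag against windows of a candidate peptide.
def tolerant_match(tag, candidate, max_mismatches=1):
    """True if `tag` aligns to any window of `candidate` with few mismatches."""
    k = len(tag)
    for i in range(len(candidate) - k + 1):
        window = candidate[i:i + k]
        mismatches = sum(a != b for a, b in zip(tag, window))
        if mismatches <= max_mismatches:
            return True
    return False

print(tolerant_match("DEFR", "LLDEYRAK"))  # True: one substitution (F -> Y)
```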
Another major challenge for proteomics is the study of low-abundance proteins. In some eukaryotic cells, the amounts of the most abundant proteins can be 106-fold greater than those of the low-abundance proteins. Many important classes of proteins (that may be important drug targets) such as transcription factors, protein kinases, and regulatory proteins are low-copy proteins. These low-copy proteins will not be observed in the analysis of crude cell lysates without some purification. Therefore, new methods must be devised for subproteome isolation.
Tissue Proteomics for the Next Decade? Towards a Molecular Dimension in Histology
R. Longuespée, M. Fléron, C. Pottier, F. Quesada-Calvo, Marie-Alice Meuwis, et al.
OMICS: A Journal of Integrative Biology 2014; 18: 9. http://dx.doi.org/10.1089/omi.2014.0033
The concept of tissues appeared more than 200 years ago, when textures and attendant differences were described among whole-organism components. Instrumental developments in optics and biochemistry subsequently paved the way for the transition from classical to molecular histology in order to decipher the molecular contexts associated with the physiological or pathological development or function of a tissue. In 1941, Coons and colleagues performed the first systematic integrated examination of classical histology and biochemistry when they localized pneumonia antigens in infected tissue sections. Most recently, in the early 21st century, mass spectrometry (MS) has progressively become one of the most valuable tools to analyze biomolecular compounds. Currently, sampling methods, biochemical procedures, and MS instrumentation allow scientists to perform “in depth” analysis of the protein content of any type of tissue of interest. This article reviews the salient issues in proteomics analysis of tissues. We first outline technical and analytical considerations for sampling and biochemical processing of tissues, and subsequently the instrumental possibilities for proteomics analysis, such as shotgun proteomics in an anatomical context. Specific attention concerns formalin fixed and paraffin embedded (FFPE) tissues that are potential “gold mines” for histopathological investigations. In all, matrix assisted laser desorption/ionization (MALDI) MS imaging, which allows for differential mapping of hundreds of compounds on a tissue section, is currently the most striking evidence of linkage and transition between “classical” and “molecular” histology. Tissue proteomics represents a veritable field of research and investment activity for modern biomarker discovery and development for the next decade.
Progressively, tissue analyses evolved towards the description of the whole molecular content of a given sample. Currently, mass spectrometry (MS) is the most versatile analytical tool for protein identification and has proven its great potential for biological and clinical applications. “Omics” fields, and especially proteomics, are of particular interest since they allow the analysis of a biomolecular picture associated with a given physiological or pathological state. Biochemical techniques were then adapted for optimal extraction of several classes of biocompounds from tissues of different natures.
Laser capture microdissection (LCM) is used to select and isolate tissue areas of interest for further analysis. Developments in MS instrumentation have definitively transformed the scientific scene, pushing back detection and identification limits ever further. Over the past few decades, new analytical approaches have appeared that use tissue sections dropped on glass slides as starting material. Two types of analyses can then be applied to tissue sections: shotgun proteomics and the very promising MS imaging (MSI) using matrix assisted laser desorption/ionization (MALDI) sources. Also known as “molecular histology,” MSI is the most striking link between histology and molecular analysis. In practice, this method allows visualization of the spatial distribution of proteins, peptides, drugs, or other analytes directly on tissue sections. This technique paved new ways of research, especially in the field of histopathology, since this approach is complementary to conventional histology.
Tissue processing workflows for molecular analyses
Tissue processing workflows for molecular analyses. Tissues can either be processed in solution or directly on tissue sections. In solution, processing involves protein extraction from tissue pieces in order to perform 2D gel separation and identification of proteins, shotgun proteomics, or MALDI analyses. Extracts can also be obtained from selected tissue areas by protein extraction after laser microdissection, or by on-tissue processing. Imaging techniques are dedicated to the morphological characterization or molecular mapping of tissue sections. Histology can either be conducted by hematoxylin/eosin staining or by molecular mapping using antibodies with IHC. Finally, mass spectrometry imaging allows the cartography of numerous compounds in a single analysis. This approach is a modern form of “molecular histology” as it grafts, with the use of mathematical calculations, a molecular dimension onto classical histology. (AR, antigen retrieval; FFPE, formalin fixed and paraffin embedded; fr/fr, fresh frozen; IHC, immunohistochemistry; LCM, laser capture microdissection; MALDI, matrix assisted laser desorption/ionization; MSI, mass spectrometry imaging; PTM, post translational modification.)
Analysis of tissue proteomes has greatly evolved with separation methods and mass spectrometry instrumentation. The choice of workflow strongly depends on whether a bottom-up or a top-down analysis is to be performed downstream. In-gel or off-gel processing principally differentiates proteomic workflows. The almost simultaneous discoveries of the MS ionization sources (Nobel Prize awarded) MALDI (Hillenkamp and Karas, 1990; Tanaka et al., 1988) and electrospray ionization (ESI) (Fenn et al., 1989) paved the way for analysis of intact proteins and peptides. Separation methods such as two-dimensional electrophoresis (2DE) (Fey and Larsen, 2001) and nanoscale reverse phase liquid chromatography (nanoRP-LC) (Deterding et al., 1991) lead to efficient preparation of proteins for top-down and bottom-up strategies, respectively. A huge panel of developments was then achieved, mostly for LC-MS based proteomics, to improve ion fragmentation approaches and peptide identification throughput relying on database interrogation. Moreover, approaches were developed to analyze post-translational modifications (PTMs) such as phosphorylation (Ficarro et al., 2002; Oda et al., 2001; Zhou et al., 2001) or glycosylation (Zhang et al., 2003), along with different quantification procedures. Regarding instrumentation, the most cutting-edge improvements are the gain in mass accuracy for optimal detection of eluted peptides during LC-MS runs (Mann and Kelleher, 2008; Michalski et al., 2011) and the increase in scanning speed, for example with the use of Orbitrap analyzers (Hardman and Makarov, 2003; Makarov et al., 2006; Makarov et al., 2009; Olsen et al., 2009). Ion transfer efficiency was also drastically improved with the conception of ion funnels that homogenize ion transmission across m/z ranges (Kelly et al., 2010; Kim et al., 2000; Page et al., 2006; Shaffer et al., 1998), or by performing electrospray ionization under low vacuum (Marginean et al., 2010; Page et al., 2008; Tang et al., 2011). Besides collision induced dissociation (CID), which is used for many applications (Li et al., 2009; Wells and McLuckey, 2005), new fragmentation methods were investigated, such as higher-energy collisional dissociation (HCD), especially for phosphoproteomic applications (Nagaraj et al., 2010), and electron transfer dissociation (ETD) and electron capture dissociation (ECD), which are suited for phospho- and glycoproteomics (An et al., 2009; Boersema et al., 2009; Wiesner et al., 2008). Methods for data-independent MS2 analysis, based on peptide fragmentation in given m/z windows without precursor selection or prior knowledge, also improve identification throughput (Panchaud et al., 2009; Venable et al., 2004), especially with MS instruments of high resolution and high mass accuracy (Panchaud et al., 2011). Gas-phase fractionation methods such as ion mobility (IM) can also be used as a supplementary separation dimension, enabling more efficient peptide identification (Masselon et al., 2000; Shvartsburg et al., 2013; Shvartsburg et al., 2011).
Microdissection relies on a laser ablation principle. The tissue section is dropped on a plastic membrane covering a glass slide, and the preparation is placed into a microscope equipped with a laser. A highly focused beam is then guided by the user along the external limit of the area of interest. This area, composed of the plastic membrane and the tissue section, is then ejected from the glass slide and collected into a tube cap for further processing. This mode of microdissection is the most widely used due to its ease of handling and the large panel of devices offered by manufacturers: Leica Microsystems proposed the Leica LMD system (Kolble, 2000); Molecular Machines and Industries, the MMI laser microdissection system Microcut, which was used in combination with IHC (Buckanovich et al., 2006); Applied Biosystems developed the Arcturus microdissection system; and Carl Zeiss patented the P.A.L.M. MicroBeam technology (Braakman et al., 2011; Espina et al., 2006a; Espina et al., 2006b; Liu et al., 2012; Micke et al., 2005). LCM represents a very adequate link between classical histology and sampling methods for molecular analyses, as it is simply a customized microscope. Indeed, optical lenses of different magnifications can be used and the method is compatible with classical IHC (Buckanovich et al., 2006). Only the laser and the tube holder need to be added to the instrumentation.
After microdissection, the tissue pieces can be processed for analyses using different available MS devices and strategies. The simplest consists of direct analysis of protein profiles by MALDI-TOF-MS (MALDI-time of flight-MS). The microdissected tissues are dropped on a MALDI target and directly covered by the MALDI matrix (Palmer-Toy et al., 2000; Xu et al., 2002). This approach has been used to classify breast cancer tumor types (Sanders et al., 2008), identify intestinal neoplasia protein biomarkers (Xu et al., 2009), and determine differential profiles in glomerulosclerosis (Xu et al., 2005).
Currently, the most common proteomic approach for LCM tissue analysis is LC-MS/MS. Label-free LC-MS approaches have been used to study several cancers, such as head and neck squamous cell carcinomas (Baker et al., 2005), esophageal cancer (Hatakeyama et al., 2006), dysplasic cervical cells (Gu et al., 2007), breast carcinoma tumors (Hill et al., 2011; Johann et al., 2009), tamoxifen-resistant breast cancer cells (Umar et al., 2009), ER+/− breast cancer cells (Rezaul et al., 2010), Barrett’s esophagus (Stingl et al., 2011), and ovarian endometrioid cancer (Alkhas et al., 2011). Different isotope labeling methods have been used to compare protein expression. ICAT was first used to investigate proteomes of hepatocellular carcinoma (Li et al., 2004; 2008). O16/O18 isotopic labeling was then used for proteomic analysis of ductal carcinoma of the breast (Zang et al., 2004).
Currently, the smallest number of cells collected for a relevant single analysis using fresh frozen (fr/fr) breast cancer tissue is 3000–4000 (Braakman et al., 2012; Liu et al., 2012; Umar et al., 2007). With a Q-Exactive (Thermo, Waltham) mass spectrometer coupled to LC, Braakman was able to identify up to 1800 proteins from 4000 cells. Processing of FFPE microdissected tissues of limited size remains an issue, which is being addressed by our team.
Among direct tissue analysis modes, two categories of investigation can be conducted. MALDI profiling consists of the study of the molecular localization of compounds and can be combined with parallel shotgun proteomic methods. Imaging methods give less detailed molecular information but are more focused on accurate mapping of the detected compounds across a tissue area. In 2007, the concept of direct tissue proteomics (DTP) was proposed for high-throughput examination of tissue microarray samples. However, contrary to the classical workflow, tissue section chemical treatment involved a first step of scraping each FFPE tissue spot from the glass slide with a razor blade. The tissues were then transferred into a tube, processed with RIPA buffer, and finally submitted to boiling as an AR step (Hwang et al., 2007). Afterward, several teams proved that it was possible to perform AR directly on tissue sections. These applications were mainly dedicated to MALDI imaging analyses (Bonnel et al., 2011; Casadonte and Caprioli, 2011; Gustafsson et al., 2010). More recently, Longuespée used citric acid antigen retrieval (CAAR) before shotgun proteomics associated with global profiling proteomics (Longuespée et al., 2013).
MALDI imaging workflow
MALDI imaging workflow. For MALDI imaging experiments, tissue sections are dropped on conductive glass slides. Sample preparations are then adapted depending on the nature of the tissue sample (FFPE or fr/fr). Matrix is then uniformly deposited on the tissue section using dedicated devices. A laser beam subsequently irradiates the preparation following a given step length, and a MALDI spectrum is acquired for each position. Using adapted software, the different detected ions are then mapped across the tissue section as a function of their differential intensities. The resulting “molecular maps” are called images. (FFPE, formalin fixed and paraffin embedded; fr/fr, fresh frozen; MALDI, matrix assisted laser desorption ionization.)
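The mapping step in this workflow (one spectrum per raster position, one intensity map per ion of interest) can be sketched as follows. This is a toy reconstruction; the grid size, m/z values, and randomly generated spectra are all hypothetical:

```python
# Build an ion "image" from per-pixel MALDI spectra: extract the intensity of
# one m/z of interest at each raster position across the tissue section.
import numpy as np

rows, cols = 3, 4
rng = np.random.default_rng(0)

# Hypothetical per-pixel spectra, represented as {m/z: intensity}
def fake_spectrum():
    return {1296.7: rng.uniform(0, 100), 1570.6: rng.uniform(0, 100)}

spectra = [[fake_spectrum() for _ in range(cols)] for _ in range(rows)]

def ion_image(target_mz, tol=0.3):
    img = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            for mz, inten in spectra[r][c].items():
                if abs(mz - target_mz) <= tol:
                    img[r, c] += inten
    return img

print(ion_image(1296.7))  # intensity map of one ion across the section
```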
Proteomics instrumentation, specific biochemical preparations, and sampling methods such as LCM altogether allow for the deep exploration and comparison of different proteomes between regions of interest in tissues, with up to 10^4 detected proteins. MALDI MS imaging, which allows for differential mapping of hundreds of compounds on a tissue section, is currently the most striking illustration of the association between “classical” and “molecular” histology.
Novel serum protein biomarker panel revealed by mass spectrometry and its prognostic value in breast cancer
Introduction: Serum profiling using proteomic techniques has great potential to detect biomarkers that might improve diagnosis and predict outcome for breast cancer patients (BC). This study used surface-enhanced laser desorption/ionization time-of-flight (SELDI-TOF) mass spectrometry (MS) to identify differentially expressed proteins in sera from BC and healthy volunteers (HV), with the goal of developing a new prognostic biomarker panel.
Methods: Training set serum samples from 99 BC and 51 HV subjects were applied to four adsorptive chip surfaces (anion-exchange, cation-exchange, hydrophobic, and metal affinity) and analyzed by time-of-flight MS. For validation, 100 independent BC serum samples and 70 HV samples were analyzed similarly. Cluster analysis of protein spectra was performed to identify protein patterns related to BC and HV groups. Univariate and multivariate statistical analyses were used to develop a protein panel to distinguish breast cancer sera from healthy sera, and its prognostic potential was evaluated.
Results: From 51 protein peaks that were significantly up- or downregulated in BC patients by univariate analysis, binary logistic regression yielded five protein peaks that together classified BC and HV with a receiver operating characteristic (ROC) area-under-the-curve value of 0.961. Validation on an independent patient cohort confirmed the five-protein parameter (ROC value 0.939). The five-protein parameter showed positive association with large tumor size (P = 0.018) and lymph node involvement (P = 0.016). By matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) MS, immunoprecipitation and western blotting, the proteins were identified as a fragment of apolipoprotein H (ApoH), ApoCI, complement C3a, transthyretin, and ApoAI. Kaplan-Meier analysis on 181 subjects after a median follow-up of >5 years demonstrated that the panel significantly predicted disease-free survival (P = 0.005), its efficacy apparently greater in women with estrogen receptor (ER)-negative tumors (n = 50, P = 0.003) compared to ER-positive (n = 131, P = 0.161), although the influence of ER status needs to be confirmed after longer follow-up.
Conclusions: Protein mass profiling by MS has revealed five serum proteins which, in combination, can distinguish between serum from women with breast cancer and healthy control subjects with high sensitivity and specificity. The five-protein panel significantly predicts recurrence-free survival in women with ER-negative tumors and may have value in the management of these patients.
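The statistical core of the panel-building step described in the Methods above (logistic regression over peak intensities, evaluated by ROC AUC) can be illustrated in a few lines. This sketch uses simulated data, not the study’s; only the training group sizes (99 BC, 51 HV) are taken from the abstract:

```python
# Fit a binary logistic regression on simulated "peak intensities" and compute
# the ROC area-under-the-curve, mirroring the study's analysis structure.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n_peaks = 5

# Simulated intensities; cases (y = 1) get a small mean shift on every peak
y = np.concatenate([np.ones(99), np.zeros(51)]).astype(int)
X = rng.normal(size=(len(y), n_peaks)) + 0.8 * y[:, None]

model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]
print(f"training ROC AUC = {roc_auc_score(y, scores):.3f}")
```

In practice the validation AUC (here, on an independent cohort) is the figure that matters, since training AUC is optimistically biased.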
Cellular prion protein is required for neuritogenesis: fine-tuning of multiple signaling pathways involved in focal adhesions and actin cytoskeleton dynamics
Aurélie Alleaume-Butaux, et al. Cell Health and Cytoskeleton 2013:5 1–12
Neuritogenesis is a dynamic phenomenon associated with neuronal differentiation that allows a rather spherical neuronal stem cell to develop dendrites and axon, a prerequisite for the integration and transmission of signals. The acquisition of neuronal polarity occurs in three steps:
(1) neurite sprouting, which consists of the formation of buds emerging from the postmitotic neuronal soma;
(2) neurite outgrowth, which represents the conversion of buds into neurites, their elongation and evolution into axon or dendrites; and
(3) the stability and plasticity of neuronal polarity.
In neuronal stem cells, remodeling and activation of focal adhesions (FAs), associated with deep modifications of the actin cytoskeleton, is a prerequisite for neurite sprouting and subsequent neurite outgrowth. A multiple set of growth factors and interactors located in the extracellular matrix and the plasma membrane orchestrate neuritogenesis by acting on intracellular signaling effectors, notably small G proteins such as RhoA, Rac, and Cdc42, which are involved in actin turnover and the dynamics of FAs.

The cellular prion protein (PrPC), a glycosylphosphatidylinositol (GPI)-anchored membrane protein mainly known for its role in a group of fatal neurodegenerative diseases, has emerged as a central player in neuritogenesis. Here, we review the contribution of PrPC to neuronal polarization and detail the current knowledge on the signaling pathways fine-tuned by PrPC to promote neurite sprouting, outgrowth, and maintenance. We emphasize that PrPC-dependent neurite sprouting is a process in which PrPC governs the dynamics of FAs and the actin cytoskeleton via β1 integrin signaling. The presence of PrPC is necessary to render neuronal stem cells competent to respond to neuronal inducers and to develop neurites. In differentiating neurons, PrPC exerts a facilitator role towards neurite elongation. This function relies on the interaction of PrPC with a set of diverse partners such as elements of the extracellular matrix, plasma membrane receptors, adhesion molecules, and soluble factors that control actin cytoskeleton turnover through Rho-GTPase signaling. Once neurons have reached their terminal stage of differentiation and acquired their polarized morphology, PrPC also takes part in the maintenance of neurites. By acting on tissue nonspecific alkaline phosphatase, or matrix metalloproteinase type 9, PrPC stabilizes interactions between neurites and the extracellular matrix.
Fusion-pore expansion during syncytium formation is restricted by an actin network
Background: Role of uncoupling proteins (UCP) in the brain is unclear.
Results: UCP, present in astrocytes, mediate the intra-mitochondrial acidification leading to a decrease in mitochondrial ATP production.
Conclusion: Astrocyte pH regulation promotes ATP synthesis by glycolysis whose final product, lactate, increases neuronal survival.
Significance: We describe a new role for a brain uncoupling protein.
Brain activity is energetically costly and requires a steady and highly regulated flow of energy equivalents between neural cells. It is believed that a substantial share of cerebral glucose, the major source of energy for the brain, is preferentially metabolized in astrocytes via aerobic glycolysis. The aim of this study was to evaluate whether uncoupling proteins (UCPs), located in the inner membrane of mitochondria, play a role in setting up the metabolic response pattern of astrocytes. UCPs are believed to mediate the transmembrane transfer of protons, resulting in the uncoupling of oxidative phosphorylation from ATP production. UCPs are therefore potentially important regulators of energy fluxes. The main UCP isoforms expressed in the brain are UCP2, UCP4, and UCP5. We examined in particular the role of UCP4 in neuron-astrocyte metabolic coupling and measured a range of functional metabolic parameters
including mitochondrial electrical potential and pH,
reactive oxygen species production,
NAD/NADH ratio,
ATP/ADP ratio,
CO2 and lactate production, and
oxygen consumption rate (OCR).
In brief, we found that UCP4 regulates the intra-mitochondrial pH of astrocytes, which acidifies as a consequence of glutamate uptake, with the main consequence of reducing the efficiency of mitochondrial ATP production. The diminished ATP production is effectively compensated by enhancement of glycolysis, and this non-oxidative production of energy is not associated with deleterious H2O2 production. We show that astrocytes expressing more UCP4 produced more lactate, which is used as an energy source by neurons, and had the ability to enhance neuronal survival.
José Eduardo de Salles Roselino
The problem with genomics was that it was put forward as an explanation for everything. When something is genetic in nature, genomic reasoning works fine: whenever an inborn error is found, and only in this case, genomic knowledge can indicate what is wrong. That is not the same as turning biology upside down by reading everything, genetic and non-genetic problems alike, from the DNA.
In the introduction to this series of discussions I pointed out JEDS Roselino’s observation about the construction of a complex molecule of acetyl coenzyme A, and the amount of genetic coding that had to go into it. Furthermore, he observes: millions of years later, as soon as the information of interaction leading to activity and regulation could be found in RNA, proteins like reverse transcriptase moved this information to a more stable form (DNA). In this way it is easier to understand the use of CoA to make two-carbon molecules more reactive.
acetyl-CoA
In the tutorial that follows we find support for the view that mechanisms and examples from the current literature, which give insight into developments in cell metabolism, are achieving a separation from the inconsistent views introduced by the classical model of molecular biology and genomics, toward a more functional cellular dynamics that is not dependent on the classic view. The classical view fits a rigid framework that is to genomics and metabolomics as Mendelian genetics is to multidimensional, multifactorial genetics. The inherent difficulty lies in two places:
Interactions between differently weighted determinants
A large part of the genome is concerned with regulatory function, not expression of the code
The goal of the tutorial was to achieve an understanding of how cell signaling occurs in a cell. Completion of the tutorial would provide
a basic understanding of signal transduction and
the role of phosphorylation in signal transduction.
Regulation of the integrity of endothelial cell–cell contacts by phosphorylation of VE-cadherin
In addition, detailed knowledge of:
the role of tyrosine kinases and
G protein-coupled receptors in cell signaling.
serine/threonine protein kinase
We are constantly receiving and interpreting signals from our environment, which can come in the form of light, heat, odors, touch or sound. The cells of our bodies are also constantly receiving signals from other cells. These signals are important to keep cells alive and functioning, as well as to stimulate important events such as cell division and differentiation. Signals are most often chemicals that can be found in the extracellular fluid around cells. These chemicals can come from distant locations in the body (endocrine signaling by hormones), from nearby cells (paracrine signaling), or can even be secreted by the same cell (autocrine signaling).
Signaling molecules may trigger any number of cellular responses, including changing the metabolism of the cell receiving the signal, changing gene expression (transcription) within the nucleus of the cell, or both, as well as controlling the output of ribosomes.
To which I would now add..
result in either an inhibitory or a stimulatory effect
Cell signaling can be divided into three stages:
Reception: A cell detects a signaling molecule from the outside of the cell.
Transduction: When the signaling molecule binds the receptor it changes the receptor protein in some way. This change initiates the process of transduction. Signal transduction is usually a pathway of several steps. Each relay molecule in the signal transduction pathway changes the next molecule in the pathway.
Response: Finally, the signal triggers a specific cellular response.
Signal Transduction – ligand binds to surface receptor
Membrane receptors function by binding the signal molecule (ligand) and causing the production of a second signal (also known as a second messenger) that then causes a cellular response. These types of receptors transmit information from the extracellular environment to the inside of the cell by changing shape or by joining with another protein once a specific ligand binds to them.
Examples of membrane receptors include G protein-coupled receptors and receptor tyrosine kinases. Understanding these receptors and identifying their ligands and the resulting signal transduction pathways represent a major conceptual advance.
Intracellular receptors are found inside the cell, either in the cytoplasm or in the nucleus of the target cell (the cell receiving the signal).
Note that although a change in gene expression is stated, this does not imply a change in the genetic information itself, such as a mutation; that need not occur in the normal homeostatic case.
This point is the differentiating case between what JEDS Roselino has referred to as a fast, adaptive reaction, which is a feature of protein molecules, and a one-to-one transcription of the genetic code.
The rate of transcription can be controlled, or it can be blocked. This is in large part in response to the metabolites in the immediate interstitium.
This might only be a change in the rate of transcription, a suppression of expression through RNA, or a conformational change in an enzyme.
Swinging domains in HECT E3 enzymes
Since signaling systems need to be responsive to small concentrations of chemical signals and act quickly, cells often use a multi-step pathway that transmits the signal quickly while amplifying it to numerous molecules at each step; a toy calculation below illustrates the arithmetic.
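The numbers in this sketch are assumed, not taken from the text, but they show why a cascade of even a few amplifying steps is so powerful:

```python
# Toy cascade-amplification calculation: if each step activates ~100 downstream
# molecules, a handful of ligand-bound receptors mobilizes millions of effectors.
receptors_bound = 10
amplification_per_step = 100
steps = 3  # e.g. receptor -> effector enzyme -> second messenger -> kinase

active_molecules = receptors_bound * amplification_per_step ** steps
print(f"{active_molecules:,} molecules activated downstream")  # 10,000,000
```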
Signal transduction pathways are shown (simplified):
Signal Transduction
Signal transduction occurs when an extracellular signaling molecule activates a specific receptor located on the cell surface or inside the cell. In turn, this receptor triggers a biochemical chain of events inside the cell, creating a response. Depending on the cell, the response alters the cell’s metabolism, shape, gene expression, or ability to divide.
The signal can be amplified at any step. Thus, one signaling molecule can cause many responses.
In 1970, Martin Rodbell examined the effects of glucagon on a rat’s liver cell membrane receptor. He noted that guanosine triphosphate (GTP) dissociated glucagon from this receptor and stimulated the G-protein, which strongly influenced the cell’s metabolism. Thus, he deduced that the G-protein is a transducer that accepts glucagon molecules and affects the cell. For this, he shared the 1994 Nobel Prize in Physiology or Medicine with Alfred G. Gilman.
Guanosine monophosphate structure
In 2007, a total of 48,377 scientific papers—including 11,211 review papers—were published on the subject. The term first appeared in a paper’s title in 1979. Widespread use of the term has been traced to a 1980 review article by Rodbell. Research papers focusing on signal transduction first appeared in large numbers in the late 1980s and early 1990s.
Signal transduction involves the binding of extracellular signaling molecules and ligands to cell-surface receptors that trigger events inside the cell. The combination of messenger with receptor causes a change in the conformation of the receptor, known as receptor activation.
This activation is always the initial step (the cause) leading to the cell’s ultimate responses (effect) to the messenger. Despite the myriad of these ultimate responses, they are all directly due to changes in particular cell proteins. Intracellular signaling cascades can also be started through cell-substratum interactions; examples are integrins, which bind ligands in the extracellular matrix, and steroids.
Integrin
Most steroid hormones have receptors within the cytoplasm and act by stimulating the binding of their receptors to the promoter region of steroid-responsive genes.
steroid hormone receptor
Various environmental stimuli exist that initiate signal transmission processes in multicellular organisms; examples include photons hitting cells in the retina of the eye, and odorants binding to odorant receptors in the nasal epithelium. Certain microbial molecules, such as viral nucleotides and protein antigens, can elicit an immune system response against invading pathogens mediated by signal transduction processes. This may occur independent of signal transduction stimulation by other molecules, as is the case for the toll-like receptor. It may occur with help from stimulatory molecules located at the cell surface of other cells, as with T-cell receptor signaling. Receptors can be roughly divided into two major classes: intracellular receptors and extracellular receptors.
Signal transduction cascades amplify the signal output
G protein-coupled receptors (GPCRs) are a family of integral transmembrane proteins that possess seven transmembrane domains and are linked to a heterotrimeric G protein. Many receptors are in this family, including adrenergic receptors and chemokine receptors.
Arrestin binding to active GPCR kinase (GRK)-phosphorylated GPCRs blocks G protein coupling
signal transduction pathways
Signal transduction by a GPCR begins with an inactive G protein coupled to the receptor; it exists as a heterotrimer consisting of Gα, Gβ, and Gγ. Once the GPCR recognizes a ligand, the conformation of the receptor changes to activate the G protein, causing Gα to bind a molecule of GTP and dissociate from the other two G-protein subunits.
The dissociation exposes sites on the subunits that can interact with other molecules. The activated G protein subunits detach from the receptor and initiate signaling from many downstream effector proteins such as phospholipases and ion channels, the latter permitting the release of second messenger molecules.
Receptor tyrosine kinases (RTKs) are transmembrane proteins with an intracellular kinase domain and an extracellular domain that binds ligands; examples include growth factor receptors such as the insulin receptor.
insulin receptor and insulin receptor signaling pathway (IRS)
To perform signal transduction, RTKs need to form dimers in the plasma membrane; the dimer is stabilized by ligands binding to the receptor.
RTKs
The interaction between the cytoplasmic domains stimulates the autophosphorylation of tyrosines within the domains of the RTKs, causing conformational changes.
Subsequent to this, the receptors’ kinase domains are activated, initiating phosphorylation signaling cascades of downstream cytoplasmic molecules that facilitate various cellular processes such as cell differentiation and metabolism.
Signal transduction pathway
As is the case with GPCRs, proteins that bind GTP play a major role in signal transduction from the activated RTK into the cell. In this case, the G proteins are members of the Ras, Rho, and Raf families, referred to collectively as small G proteins. They act as molecular switches usually tethered to membranes by isoprenyl groups linked to their carboxyl ends. Upon activation, they assign proteins to specific membrane subdomains where they participate in signaling. Activated RTKs in turn activate small G proteins that activate guanine nucleotide exchange factors such as SOS1. Once activated, these exchange factors can activate more small G proteins, thus amplifying the receptor’s initial signal.
The mutation of certain RTK genes, as with that of GPCRs, can result in the expression of receptors that exist in a constitutively activated state; such mutated genes may act as oncogenes.
Integrin
Integrin-mediated signal transduction
An overview of integrin-mediated signal transduction, adapted from Hehlgens et al. (2007).
Integrins are produced by a wide variety of cells; they play a role in cell attachment to other cells and the extracellular matrix, and in the transduction of signals from extracellular matrix components such as fibronectin and collagen. Ligand binding to the extracellular domain of integrins changes the protein’s conformation, clustering it at the cell membrane to initiate signal transduction.
Integrins lack kinase activity; hence, integrin-mediated signal transduction is achieved through a variety of intracellular protein kinases and adaptor molecules, the main coordinator being integrin-linked kinase.
As shown in the picture, cooperative integrin-RTK signaling determines the timing of cellular survival, apoptosis, proliferation, and differentiation.
integrin-mediated signal transduction
Integrin signaling
ion channel
A ligand-gated ion channel, upon binding with a ligand, changes conformation to open a channel in the cell membrane through which ions relaying signals can pass. An example of this mechanism is found in the receiving cell of a neural synapse. The influx of ions that occurs in response to the opening of these channels induces action potentials, such as those that travel along nerves, by depolarizing the membrane of post-synaptic cells, resulting in the opening of voltage-gated ion channels.
RyR and Ca2+ release from SR
An example of an ion allowed into the cell during a ligand-gated ion channel opening is Ca2+; it acts as a second messenger, initiating signal transduction cascades and altering the physiology of the responding cell. This results in amplification of the synapse response between synaptic cells by remodelling the dendritic spines involved in the synapse.
In eukaryotic cells, most intracellular proteins activated by a ligand/receptor interaction possess an enzymatic activity; examples include tyrosine kinase and phosphatases. Some of them create second messengers such as cyclic AMP and IP3,
cAMP
Inositol 1,4,5-trisphosphate (IP3)
the latter controlling the release of intracellular calcium stores into the cytoplasm.
Many adaptor proteins and enzymes activated as part of signal transduction possess specialized protein domains that bind to specific secondary messenger molecules. For example, calcium ions bind to the EF hand domains of calmodulin, allowing it to bind and activate calmodulin-dependent kinase.
calcium movement and RyR2 receptor
PIP3 and other phosphoinositides do the same thing to the Pleckstrin homology domains of proteins such as the kinase protein AKT.
Signals can be generated within organelles, such as chloroplasts and mitochondria, modulating nuclear gene expression in a process called retrograde signaling.
Recently, integrative genomics approaches, in which correlation analysis has been applied to transcript and metabolite profiling data of Arabidopsis thaliana, have led to the identification of metabolites that putatively act as mediators of nuclear gene expression.
Omega-3 (ω-3) fatty acids are one of the two main families of long chain polyunsaturated fatty acids (PUFA). The main omega-3 fatty acids in the mammalian body are α-linolenic acid (ALA), docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA). Central nervous tissues of vertebrates are characterized by a high concentration of omega-3 fatty acids. Moreover, in the human brain, DHA is considered the main structural omega-3 fatty acid, comprising about 40% of the PUFAs in total.
DHA deficiency may be the cause of many disorders such as depression, inability to concentrate, excessive mood swings, anxiety, cardiovascular disease, type 2 diabetes, dry skin and so on.
On the other hand, zinc is the most abundant trace metal in the human brain. There are many scientific studies linking zinc, especially excess amounts of free zinc, to cellular death. Neurodegenerative diseases, such as Alzheimer’s disease, are characterized by altered zinc metabolism. Both animal model studies and human cell culture studies have shown a possible link between omega-3 fatty acids, zinc transporter levels and free zinc availability at cellular levels.
Many other studies have also suggested a possible omega-3 and zinc effect on neurodegeneration and cellular death. Therefore, in this review, we will examine the effect of omega-3 fatty acids on zinc transporters and the importance of free zinc for human neuronal cells. Moreover, we will evaluate the collective understanding of the mechanism(s) for the interaction of these elements in neuronal research and their significance for the diagnosis and treatment of neurodegeneration.
Epidemiological studies have linked high intake of fish and shellfish as part of the daily diet to a reduction in the incidence and/or severity of Alzheimer’s disease (AD) and senile mental decline in the elderly.
Omega-3 fatty acids are one of the two main families of a broader group of fatty acids referred to as polyunsaturated fatty acids (PUFAs). The other main family of PUFAs encompasses the omega-6 fatty acids. In general, PUFAs are essential in many biochemical events, especially in early post-natal development processes such as cellular differentiation, photoreceptor membrane biogenesis and active synaptogenesis. Despite the significance of these two families, mammals cannot synthesize PUFAs de novo, so they must be ingested from dietary sources. Though belonging to the same group, omega-3 and omega-6 fatty acids are metabolically and functionally distinct and have opposing physiological effects. In the human body, high concentrations of omega-6 fatty acids are known to increase the formation of prostaglandins and thereby increase inflammatory processes [10]; the reverse is seen with increased omega-3 fatty acids in the body.
Many other factors, such as
thromboxane A2 (TXA2),
leukotriene B4 (LTB4),
IL-1,
IL-6,
tumor necrosis factor (TNF) and
C-reactive protein,
which are implicated in various health conditions, have been shown to be increased with high omega-6 fatty acids but decreased with omega-3 fatty acids in the human body.
Dietary fatty acids have been identified as protective factors in coronary heart disease, and PUFA levels are known to play a critical role in
immune responses,
gene expression and
intercellular communications.
Omega-3 fatty acids are known to be vital in the prevention of fatal ventricular arrhythmias, and are also known to reduce thrombus formation propensity by decreasing platelet aggregation, blood viscosity and fibrinogen levels. Since omega-3 fatty acids are prevalent in the nervous system, it seems logical that a deficiency may result in neuronal problems, and this is indeed what has been identified and reported.
In another study, conducted with individuals 65 years of age or older (n = 6158), it was found that only high fish consumption, but not dietary omega-3 acid intake, had a protective effect on cognitive decline.
In 2005, based on a meta-analysis of the available epidemiology and preclinical studies, clinical trials were conducted to assess the effects of omega-3 fatty acids on cognitive protection. Four of the completed trials showed a protective effect of omega-3 fatty acids only among those with mild cognitive impairment. A trial of subjects with mild memory complaints demonstrated an improvement with 900 mg of DHA.
We review key findings on the effect of the omega-3 fatty acid DHA on zinc transporters and the importance of free zinc to human neuronal cells.
DHA is the most abundant fatty acid in neural membranes, imparting appropriate fluidity and other properties, and is thus considered the most important fatty acid in neuronal studies. DHA is well conserved across mammalian species despite their dietary differences. It is mainly concentrated in membrane phospholipids at synapses and in retinal photoreceptors, and also in the testis and sperm. In the adult rat brain, DHA comprises approximately 17% of the total fatty acid weight, and in the retina it is as high as 33%.
DHA is believed to have played a major role in the evolution of the modern human, in particular the well-developed brain. Premature babies fed a DHA-rich formula show improvements in vocabulary and motor performance.
Analyses of human cadaver brains have shown that people with AD have less DHA in their frontal lobe and hippocampus compared with unaffected individuals.
Furthermore, studies in mice have added support for the protective role of omega-3 fatty acids. Mice administered a dietary intake of DHA showed an increase in DHA levels in the hippocampus. Errors in memory were decreased in these mice, and they demonstrated reduced peroxide and free radical levels, suggesting a role in antioxidant defense.
Another study, conducted with the Tg2576 mouse model of AD, demonstrated that dietary DHA supplementation had a protective effect against reduction in drebrin (an actin-associated protein), elevated oxidation and, to some extent, apoptosis via decreased caspase activity.
Zinc
Zinc is a trace element, which is indispensable for life, and it is the second most abundant trace element in the body. It is known to be related to
growth,
development,
differentiation,
immune response,
receptor activity,
DNA synthesis,
gene expression,
neuro-transmission,
enzymatic catalysis,
hormonal storage and release,
tissue repair,
memory,
the visual process
and many other cellular functions. Moreover, the indispensability of zinc to the body can be seen in several other respects, as:
a component of over 300 different enzymes
an integral component of metallothioneins
a gene regulatory protein.
Approximately 3% of all proteins contain zinc-binding motifs.
The broad biological functionality of zinc is thought to be due to its stable chemical and physical properties. Zinc is considered to have three different functions in enzymes: catalytic, coactive and structural. Indeed, it is the only metal found in all six different subclasses of enzymes. The essential nature of zinc to the human body can be clearly displayed by studying the wide range of pathological effects of zinc deficiency. Anorexia, embryonic and post-natal growth retardation, alopecia, skin lesions, difficulties in wound healing, increased hemorrhage tendency and severe reproductive abnormalities, emotional instability, irritability and depression are just some of the detrimental effects of zinc deficiency.
Proper development and function of the central nervous system (CNS) is highly dependent on zinc levels. Among mammalian organs, zinc is mainly concentrated in the brain, at around 150 μM. However, free zinc in the mammalian brain is calculated to be only around 10 to 20 nM, and the rest exists in protein-, enzyme- or nucleotide-bound form. The brain–zinc relationship is thought to be mediated through glutamate receptors, and zinc inhibits both excitatory and inhibitory receptors. Vesicular localization of zinc in pre-synaptic terminals is a characteristic feature of brain-localized zinc, and its release is dependent on neural activity.
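Taken together, the two concentrations above imply that almost all brain zinc is in the bound pool; a quick back-of-the-envelope check (total and free values from the text, with the midpoint of the 10–20 nM range assumed for the free pool):

```python
# Back-of-the-envelope check on the bound vs. free zinc figures cited above.
total_zinc_um = 150.0   # μM, total zinc concentration in brain tissue
free_zinc_um = 0.015    # μM, i.e. ~15 nM, midpoint of the 10–20 nM range

fraction_free = free_zinc_um / total_zinc_um
print(f"free fraction ≈ {fraction_free:.3%}")  # ≈ 0.010%: >99.9% of brain zinc is bound
```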
Retardation of the growth and development of CNS tissues has been linked to low zinc levels. Peripheral neuropathy, spina bifida, hydrocephalus, anencephaly, epilepsy and Pick’s disease have been linked to zinc deficiency. However, the body cannot tolerate excessive amounts of zinc either.
The relationship between zinc and neurodegeneration, specifically AD, has been interpreted in several ways. One study has proposed that β-amyloid has a greater propensity to form insoluble amyloid in the presence of high physiological levels of zinc. Insoluble amyloid is thought to aggregate to form plaques, which is a main pathological feature of AD. Further studies have shown that chelation of zinc ions can deform and disaggregate plaques.
In AD, the most prominent injuries are found in hippocampal pyramidal neurons, acetylcholine-containing neurons in the basal forebrain, and somatostatin-containing neurons in the forebrain. All of these neurons are known to favor rapid and direct entry of zinc in high concentration, leaving them frequently exposed to high doses of zinc. This is thought to promote neuronal cell damage through oxidative stress and mitochondrial dysfunction. Excessive levels of zinc are also capable of inhibiting Ca2+ and Na+ voltage-gated channels and up-regulating the cellular levels of reactive oxygen species (ROS).
High levels of zinc are found in Alzheimer’s brains, indicating possible zinc-related neurodegeneration. A study conducted with mouse neuronal cells has shown that even a 24-h exposure to high levels of zinc (40 μM) is sufficient to degenerate cells.
If the human diet is deficient in zinc, the body efficiently conserves zinc at the tissue level through compensatory cellular mechanisms that delay the effects of dietary deficiency. These include a reduced cellular growth rate, reduced zinc excretion, and redistribution of available zinc to more zinc-dependent cells and organs.
A novel method of measuring metallothionein (MT) levels has been introduced as a biomarker for assessing the zinc status of individuals and populations. In humans, erythrocyte metallothionein (E-MT) levels may be considered an indicator of zinc depletion and repletion, as E-MT levels are sensitive to dietary zinc intake. It should be noted here that MT plays an important role in zinc homeostasis by acting as a target for zinc ion binding and thus assisting in the trafficking of zinc ions through the cell, in a manner that may be similar to that of zinc transporters.
Zinc Transporters
Deficient or excess amounts of zinc in the body can be catastrophic to the integrity of cellular biochemical and biological systems. The gastrointestinal system controls the absorption, excretion and distribution of zinc, although the hydrophilic, highly charged character of the zinc ion is not favorable for passive diffusion across cell membranes. Zinc movement is known to occur via transmembrane proteins and zinc transporter (ZnT) proteins. These transporters are mainly categorized under two metal transporter families, Zip (ZRT/IRT-like proteins) and CDF/ZnT (Cation Diffusion Facilitator), also known by their SLC (Solute Linked Carrier) gene family names: Zip (SLC39) and ZnT (SLC30). More than 20 zinc transporters have been identified and characterized over the last two decades (14 Zips and 8 ZnTs).
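For orientation, the two families and their gene symbols can be laid out as a small lookup table. A minimal sketch (the mapping follows the standard ZIP/SLC39 and ZnT/SLC30 naming convention; the helper function is ours, not from the source):

```python
# Minimal sketch of the two transporter families named above, keyed by the
# standard SLC gene symbols (ZIP1–14 = SLC39A1–14; ZnT1–8 = SLC30A1–8).
ZIP_FAMILY = {f"ZIP{i}": f"SLC39A{i}" for i in range(1, 15)}  # zinc influx into cytosol
ZNT_FAMILY = {f"ZnT{i}": f"SLC30A{i}" for i in range(1, 9)}   # zinc efflux/sequestration

def zinc_flux_direction(name: str) -> str:
    """Direction of zinc movement relative to the cytosol for a transporter."""
    if name in ZIP_FAMILY:
        return "influx: raises cytosolic zinc"
    if name in ZNT_FAMILY:
        return "efflux/sequestration: lowers cytosolic zinc"
    raise ValueError(f"unknown zinc transporter: {name}")

print(zinc_flux_direction("ZnT3"))  # ZnT3 loads zinc into synaptic vesicles
```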
Members of the SLC39 family have been identified as the putative facilitators of zinc influx into the cytosol, either from the extracellular environment or from intracellular compartments (Figure 1).
The identification of this transporter family was the result of gene sequencing of known Zip1 protein transporters in plants, yeast and human cells. In contrast, the SLC30 family facilitates the opposite process, namely zinc efflux from the cytosol to the extracellular environment or into luminal compartments such as secretory granules, endosomes and synaptic vesicles, thus decreasing intracellular zinc availability (Figure 1). ZnT3 is the most important in the brain, where it is responsible for the transport of zinc into the synaptic vesicles of glutamatergic neurons in the hippocampus and neocortex.
Figure 1: Subcellular localization and direction of transport of the zinc transporter families, ZnT and ZIP. Arrows show the direction of zinc mobilization for the ZnT (green) and ZIP (red) proteins. A net gain in cytosolic zinc is achieved by the transportation of zinc from the extracellular region and organelles such as the endoplasmic reticulum (ER) and Golgi apparatus by the ZIP transporters. Cytosolic zinc is mobilized into early secretory compartments such as the ER and Golgi apparatus by the ZnT transporters. Figures were produced using Servier Medical Art, http://www.servier.com/. http://www.hindawi.com/journals/jnme/2012/173712.fig.001.jpg
Figure 2: Early zinc signaling (EZS) and late zinc signaling (LZS). EZS involves transcription-independent mechanisms where an extracellular stimulus directly induces an increase in zinc levels within several minutes by releasing zinc from intracellular stores (e.g., endoplasmic reticulum). LZS is induced several hours after an external stimulus and is dependent on transcriptional changes in zinc transporter expression. Components of this figure were produced using Servier Medical Art, http://www.servier.com/ and adapted from Fukada et al. [30].
The three main omega-3 fatty acids in the mammalian body are α-linolenic acid (ALA), docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA). In general, seafood is rich in omega-3 fatty acids, more specifically DHA and EPA (Table 1). Thus far, nine separate epidemiological studies have suggested a possible link between increased fish consumption and reduced risk of AD, and eight out of ten studies have reported a link between higher blood omega-3 levels and a reduced risk of AD.
DHA and Zinc Homeostasis
Many studies have identified possible associations between DHA levels, zinc homeostasis, neuroprotection and neurodegeneration. Dietary DHA deficiency resulted in increased zinc levels in the hippocampus and elevated expression of the putative zinc transporter, ZnT3, in the rat brain.
Altered zinc metabolism in neuronal cells has been linked to neurodegenerative conditions such as AD. A study conducted with transgenic mice has shown a significant link between ZnT3 transporter levels and cerebral amyloid plaque pathology. When the ZnT3 transporter was silenced in transgenic mice expressing cerebral amyloid plaque pathology, a significant reduction in plaque load and in insoluble amyloid was observed. In addition to the decrease in plaque load, ZnT3-silenced mice also exhibited a significant reduction in free zinc availability in the hippocampus and cerebral cortex. Collectively, the findings from this study indicate a clear connection between zinc availability and amyloid plaque formation, suggesting a possible link to AD.
DHA supplementation has also been reported to limit the following:
amyloid presence,
synaptic marker loss,
hyper-phosphorylation of Tau,
oxidative damage and
cognitive deficits in a transgenic mouse model of AD.
In addition, studies by Stoltenberg, Flinn and colleagues report on the modulation of zinc and its effects in transgenic mouse models of AD. Given that all of these are classic pathological features of AD, and considering the limiting nature of DHA in these processes, it can be argued that DHA is a key candidate for preventing or even curing this debilitating disease.
In order to better understand the possible links and pathways of zinc and DHA with neurodegeneration, we designed a study that incorporates all three of these aspects, to study their effects at the cellular level. In this study, we were able to demonstrate a possible link between omega-3 fatty acid (DHA) concentration, zinc availability and zinc transporter expression levels in cultured human neuronal cells.
When treated with DHA over 48 h, the human neuroblastoma M17 cell line showed markedly reduced ZnT3 levels. Moreover, in the same study, we were able to propose a possible neuroprotective mechanism of DHA, which we believe is exerted through a reduction in cellular zinc levels (through altering zinc transporter expression levels) that in turn inhibits apoptosis. DHA-supplemented M17 cells also showed a marked depletion of zinc uptake (up to 30%), and free zinc levels in the cytosol were significantly lower than in controls.
This reduction in free zinc availability was specific to DHA; cells treated with EPA showed no significant change in free zinc levels (unpublished data). Moreover, DHA-repleted cells had low levels of active caspase-3 and high Bcl-2 levels compared with the control treatment. These findings are consistent with previously published data and further strengthen the possible correlation between zinc, DHA and neurodegeneration.
On the other hand, recent studies using ZnT3 knockout (ZnT3KO) mice have shown the importance of ZnT3 in memory and AD pathology. For example, Sindreu and colleagues have used ZnT3KO mice to establish the important role of ZnT3 in the zinc homeostasis that modulates presynaptic MAPK signaling required for hippocampus-dependent memory. Results from these studies indicate a possible zinc-transporter-expression-level-dependent mechanism for DHA neuroprotection.
Selected References to Signaling and Metabolic Pathways in PharmaceuticalIntelligence.com
Curator: Larry H. Bernstein, MD, FCAP
This is an added selection of articles in Leaders in Pharmaceutical Intelligence after the third portion of the discussion in a series of articles that began with signaling and signaling pathways. There are fine features on the functioning of enzymes and proteins, on sequential changes in a chain reaction, and on conformational changes that we shall return to. These are critical to developing a more complete understanding of life processes. I have indicated that many of the protein-protein interactions or protein-membrane interactions and associated regulatory features have been referred to previously, but the focus of the discussion or points made were different.
Signaling and signaling pathways
Signal transduction tutorial.
Carbohydrate metabolism
Lipid metabolism
Protein synthesis and degradation
Subcellular structure
Impairments in pathological states: endocrine disorders; stress hypermetabolism; cancer.
Selected References to Signaling and Metabolic Pathways published in this Open Access Online Scientific Journal include the following:
Update on mitochondrial function, respiration, and associated disorders
The Centrality of Ca(2+) Signaling and Cytoskeleton Involving Calmodulin Kinases and Ryanodine Receptors in Cardiac Failure, Arterial Smooth Muscle, Post-ischemic Arrhythmia, Similarities and Differences, and Pharmaceutical Targets
Author and Curator: Larry H Bernstein, MD, FCAP, Author, and Content Consultant to e-SERIES A: Cardiovascular Diseases: Justin Pearlman, MD, PhD, FACC And Curator: Aviva Lev-Ari, PhD, RN
Signal transmission by a gas that is produced by one cell, penetrates through membranes and regulates the function of another cell represents an entirely new principle for signaling in biological systems. All compounds that inhibit endothelium-derived relaxing factor (EDRF) have one property in common, redox activity, which accounts for their inhibitory action on EDRF. One exception is hemoglobin, which inactivates EDRF by binding to it. Furchgott, Ignarro and Murad received the 1998 Nobel Prize in Physiology or Medicine for the discovery of EDRF and for demonstrating that it might be nitric oxide (NO), based on a study of the transient relaxations of endothelium-denuded rings of rabbit aorta. Working independently, these investigators demonstrated that NO is indeed produced by mammalian cells and that NO has specific biological roles in the human body. These studies highlighted the role of NO in the cardiovascular, nervous and immune systems. In the cardiovascular system, NO was shown to relax vascular smooth muscle cells, causing vasodilatation; in the nervous system, NO acts as a signaling molecule; and in the immune system, it is used against pathogens by phagocytic cells. These pioneering studies opened the path to investigating the role of NO in biology.
NO modulates vascular tone, fibrinolysis, blood pressure and proliferation of vascular smooth muscle. In the cardiovascular system, disruption of NO pathways or alterations in NO production can result in a predisposition to hypertension, hypercholesterolemia, diabetes mellitus, atherosclerosis and thrombosis. The three enzyme isoforms of the NO synthase family are responsible for generating NO in different tissues under various circumstances.
Reduction in NO production is implicated as one of the initial factors in initiating endothelial dysfunction. This reduction could be due to
reduction in eNOS production
reduction in eNOS enzymatic activity
reduced bioavailability of NO
Nitric oxide is one of the smallest molecules involved in physiological functions in the body. It readily forms chemical bonds with its targets. Nitric oxide can exert its effects principally in two ways:
Direct
Indirect
Direct actions, as the name suggests, result from direct chemical interaction of NO with its targets, e.g., metal complexes and radical species. These actions occur at relatively low NO concentrations (<200 nM).
Indirect actions result from the effects of reactive nitrogen species (RNS) such as NO2 and N2O3. These reactive species are formed by the interaction of NO with superoxide or molecular oxygen. RNS are generally formed at relatively high NO concentrations (>400 nM).
Although it can be tempting for scientists to believe that RNS will always have deleterious effects and NO anabolic effects, this is not entirely true, as certain RNS-mediated actions constitute important signalling steps; e.g., thiol oxidation and nitrosation of proteins mediate cell proliferation and survival, and apoptosis, respectively.
Cells subjected to NO concentrations between 10 and 30 nM were associated with cGMP-dependent phosphorylation of ERK.
Cells subjected to NO concentrations between 30 and 60 nM were associated with Akt phosphorylation.
Concentrations nearing 100 nM resulted in stabilisation of hypoxia-inducible factor-1. (These concentration regimes are summarized in the sketch below.)
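A minimal sketch of the concentration regimes collected above, expressed as a classifier; the thresholds come from the text, but the band edges are approximate, not sharp biochemical cutoffs:

```python
# Map an NO concentration (nM) onto the signaling regimes described above.
# Thresholds come from the text; band edges are approximate assumptions.
def no_signaling_regime(conc_nm: float) -> list:
    regimes = []
    if conc_nm < 200:
        regimes.append("direct actions dominate (<200 nM)")
    if conc_nm > 400:
        regimes.append("indirect, RNS-mediated actions (>400 nM)")
    if 10 <= conc_nm <= 30:
        regimes.append("cGMP-dependent ERK phosphorylation")
    if 30 <= conc_nm <= 60:
        regimes.append("Akt phosphorylation")
    if 60 < conc_nm <= 200:
        regimes.append("HIF-1 stabilisation (~100 nM)")
    return regimes

print(no_signaling_regime(100.0))
```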
Recent data suggest that other NO-containing compounds, such as S- or N-nitrosoproteins and iron-nitrosyl complexes, can be reduced back to produce NO. These NO-containing compounds can serve as storage and can reach distant tissues via the blood circulation, remote from their place of origin. Hence NO can have both paracrine and ‘endocrine’ effects.
Intracellularly, the oxidants present in the cytosol determine how much bioactivity NO exerts. NO can travel roughly 100 microns from the NOS enzymes where it is produced.
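That ~100-micron figure is consistent with a simple diffusion-length estimate. A back-of-the-envelope sketch, where both the diffusion coefficient and the effective lifetime are assumed values, not figures from the text:

```python
import math

# Order-of-magnitude check on the ~100 μm travel distance cited above.
# Both numbers below are assumptions, not values from the text.
D_no = 3300.0  # μm²/s, approximate aqueous diffusion coefficient of NO
tau = 1.0      # s, assumed effective lifetime before NO is consumed

diffusion_length = math.sqrt(4 * D_no * tau)
print(f"≈ {diffusion_length:.0f} μm")  # ≈ 115 μm, consistent with ~100 μm
```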
NO itself, at low concentrations, has a protective action on mitochondrial signaling of cell death.
The aerobic cell was an advance in evolutionary development, but despite the energetic advantage of using oxygen, the associated toxicity of oxygen abundance required adaptive changes.
Oxidation–reduction reactions, which are necessary for catabolic and synthetic processes, can cumulatively damage the organism; such damage is associated with cancer, cardiovascular disease, neurodegenerative disease, and inflammatory overload. The normal balance between production of pro-oxidant species and their destruction by antioxidant defenses is upset in favor of overproduction of the toxic species, which leads to oxidative stress and disease.
We reviewed the complex interactions and underlying regulatory balances and imbalances of vasorelaxation and vasoconstriction in the vascular endothelium, mediated by nitric oxide (NO) and prostacyclin, in response to oxidative stress and intimal injury.
Nitric oxide has a ubiquitous role in the regulation of glycolysis, with a concomitant influence on mitochondrial function. This influence on mitochondrial function is active in endothelium, platelets, vascular smooth muscle and neural cells, and the resulting balance has a role in chronic inflammation, asthma, hypertension, sepsis and cancer.
Potential cytotoxic mediators of endothelial cell (EC) apoptosis include increased formation of reactive oxygen and nitrogen species (ROS/RNS) during the atherosclerotic process. Nitric oxide (NO) has a biphasic action on oxidative cell killing, with low concentrations protecting against cell death, whereas higher concentrations are cytotoxic.
ROS induces mitochondrial DNA damage in ECs, and this damage is accompanied by a decrease in mitochondrial RNA (mtRNA) transcripts, mitochondrial protein synthesis, and cellular ATP levels.
NO and circulatory diseases
Blood vessels arise from endothelial precursors that are thin, flat cells lining the inside of blood vessels forming a monolayer throughout the circulatory system. ECs are defined by specific cell surface markers that characterize their phenotype.
Scientists at the University of Helsinki, Finland, wanted to find out if there exists a rare vascular endothelial stem cell (VESC) population that is capable of producing very high numbers of endothelial daughter cells, and can lead to neovascular growth in adults.
The VESCs discovered reside in the blood vessel wall endothelium and constitute a small population of CD117+ ECs capable of self-renewal. These cells can undergo clonal expansion, unlike the surrounding ECs, which bear limited proliferative potential. A single VESC isolated from the endothelial population was able to generate functional blood vessels.
Among the many important roles of nitric oxide (NO), one key action is vasodilation, which maintains cardiovascular health. Induction of NO is regulated by signals in the tissue as well as the endothelium.
Chen et al. (Med. Biol. Eng. Comput., 2011) developed a 3-D model consisting of two branched arterioles and nine capillaries surrounding the vessels. Their model takes into account not only the 3-D volume but also branching effects on blood flow.
The model indicates that changes in wall shear stress, which depend upon the distribution of RBCs in the microcirculation, lead to differential production of NO along the vascular network (a simple wall-shear relation is sketched below).
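The link between flow and NO production runs through wall shear stress. As a much simpler stand-in for the 3-D model, the classical Poiseuille relation τ_w = 4μQ/(πr³) shows how strongly shear depends on vessel radius; all parameter values below are illustrative assumptions, not values from Chen et al.:

```python
import math

# Classical Poiseuille wall shear stress for steady laminar flow in a tube:
# tau_w = 4*mu*Q / (pi * r^3). A textbook relation, not the 3-D model of
# Chen et al.; the parameter values are illustrative assumptions.
def wall_shear_stress(q_m3_s: float, r_m: float, mu_pa_s: float) -> float:
    return 4.0 * mu_pa_s * q_m3_s / (math.pi * r_m ** 3)

mu = 3.0e-3   # Pa·s, approximate apparent viscosity of blood
r = 25e-6     # m, assumed arteriole radius
q = 2.0e-12   # m³/s, assumed volumetric flow

print(f"τ_w ≈ {wall_shear_stress(q, r, mu):.2f} Pa")  # shear scales as 1/r³
```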
Endothelial dysfunction, the hallmark of which is reduced activity of endothelial cell derived nitric oxide (NO), is a key factor in developing atherosclerosis and cardiovascular disease. Vascular endothelial cells play a pivotal role in modulation of leukocyte and platelet adherence, thrombogenicity, anticoagulation, and vessel wall contraction and relaxation, so that endothelial dysfunction has become almost a synonym for vascular disease. A single layer of endothelial cells is the only constituent of capillaries, which differ from other vessels, which contain smooth muscle cells and adventitia. Capillaries directly mediate nutritional supply as well as gas exchange within all organs. The failure of the microcirculation leads to tissue apoptosis/necrosis.
Summary – Volume 4, Part 2: Translational Medicine in Cardiovascular Diseases
Author and Curator: Larry H Bernstein, MD, FCAP
We have covered a large amount of material that involves
the development,
application, and
validation of outcomes of medical and surgical procedures
that are based on translation of science from the laboratory to the bedside, improving the standards of medical practice at an accelerated pace in the last quarter century, and in the last decade. Encouraging enabling developments have been:
1. The establishment of national and international outcomes databases for procedures by specialist medical societies
2. The identification of problem areas, particularly in activation of the prothrombotic pathways, infection control to an extent, and targeting of pathways leading to progression or to arrhythmogenic complications.
3. This has become possible because of the advances in our knowledge of key related pathogenetic mechanisms involving gene expression and cellular regulation of complex mechanisms.
This completes what has been presented in Part 2, Vol. 4, and the supporting references for the main points found in the Leaders in Pharmaceutical Intelligence Cardiovascular book. Part 1 was concerned with posttranslational modification of proteins, vital for understanding cellular regulation and dysregulation. Part 2 was concerned with translational medical therapeutics: the efficacy of medical and surgical decisions based on bringing knowledge gained from the laboratory and from clinical trials into the realm of best practice. In the past, this transfer into practice has taken roughly a generation of physicians. That was in part related to the busy workload of physicians and the difficulty of accessing specialty literature as its volume and complexity increased. This had the effect of making lifetime access of a family to a single primary care provider less likely than in the period from post-WWII into the 1980s.
However, the growth of knowledge in the specialties has accelerated since the 1980s, so that the use of physician referral in time became a concern about the cost of medical care. That is not a matter for discussion here. It is also true that scientific advances and improvements in available technology have had a great impact on medical outcomes. The one unrelated issue is that of healthcare delivery, which is not up to the standard set by serial advances in therapeutics, and which is accompanied by high cost due to development costs, marketing costs, and the development of drug resistance.
I shall identify continuing developments in cardiovascular diagnostics, therapeutics, and bioengineering that have been and are still emerging.
Abstract: Genome-wide characterization of the in vivo cellular response to perturbation is fundamental to understanding how cells survive stress. Identifying the proteins and pathways perturbed by small molecules affects biology and medicine by revealing the mechanisms of drug action. We used a yeast chemogenomics platform that quantifies the requirement for each gene for resistance to a compound in vivo to profile 3250 small molecules in a systematic and unbiased manner. We identified 317 compounds that specifically perturb the function of 121 genes and characterized the mechanism of specific compounds. Global analysis revealed that the cellular response to small molecules is limited and described by a network of 45 major chemogenomic signatures. Our results provide a resource for the discovery of functional interactions among genes, chemicals, and biological processes.
In order to identify how chemical compounds target genes and affect the physiology of the cell, tests of the perturbations that occur when cells are treated with a range of pharmacological chemicals are required. Using the haploinsufficiency profiling (HIP) and homozygous profiling (HOP) chemogenomic platforms, Lee et al. (p. 208) analyzed the response of yeast to thousands of different small molecules, with genetic, proteomic, and bioinformatic analyses. Over 300 compounds were identified that targeted 121 genes within 45 cellular response signature networks. These networks were used to extrapolate the likely effects of related chemicals and their impact upon genetic pathways, and to identify putative gene functions.
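The underlying readout of such chemogenomic screens is strain abundance: barcoded deletion strains that drop out under compound treatment flag the genes required for resistance. A toy sketch of that scoring step, with hypothetical counts (this is an illustration of the idea, not the authors' actual pipeline):

```python
import math

# Toy HIP/HOP-style sensitivity score (not the authors' actual pipeline).
# Each barcoded deletion strain is counted in drug-treated vs. control pools;
# strains that drop out under treatment point to the compound's target pathway.
def log2_ratio(treated: int, control: int, pseudo: float = 1.0) -> float:
    return math.log2((treated + pseudo) / (control + pseudo))

# Hypothetical barcode counts per gene-deletion strain (assumed numbers).
control_counts = {"ERG11": 980, "TUB1": 1010, "HIS3": 1005}
treated_counts = {"ERG11": 110, "TUB1": 990, "HIS3": 1020}

scores = {gene: log2_ratio(treated_counts[gene], control_counts[gene])
          for gene in control_counts}
most_sensitive = min(scores, key=scores.get)
print(most_sensitive, round(scores[most_sensitive], 2))  # ERG11, strongly depleted
```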
A team of cardiovascular researchers from the Cardiovascular Research Center at Icahn School of Medicine at Mount Sinai, Sanford-Burnham Medical Research Institute, and University of California, San Diego have identified a small but powerful new player in the onset and progression of heart failure. Their findings, published in the journal Nature on March 12, also show how they successfully blocked the newly discovered culprit.
Investigators identified a tiny piece of RNA called miR-25 that blocks a gene known as SERCA2a, which regulates the flow of calcium within heart muscle cells. Decreased SERCA2a activity is one of the main causes of poor contraction of the heart and enlargement of heart muscle cells leading to heart failure.
Using a functional screening system developed by researchers at Sanford-Burnham, the research team discovered miR-25 acts pathologically in patients suffering from heart failure, delaying proper calcium uptake in heart muscle cells. According to co-lead study authors Christine Wahlquist and Dr. Agustin Rojas Muñoz, developers of the approach and researchers in Mercola’s lab at Sanford-Burnham, they used high-throughput robotics to sift through the entire genome for microRNAs involved in heart muscle dysfunction.
Subsequently, the researchers at the Cardiovascular Research Center at Icahn School of Medicine at Mount Sinai found that injecting a small piece of RNA to inhibit the effects of miR-25 dramatically halted heart failure progression in mice. It also improved their cardiac function and survival.
“In this study, we have not only identified one of the key cellular processes leading to heart failure, but have also demonstrated the therapeutic potential of blocking this process,” says co-lead study author Dr. Dongtak Jeong, a post-doctoral fellow at the Cardiovascular Research Center at Icahn School of Medicine at Mount Sinai in the laboratory of the study’s co-senior author Dr. Roger J. Hajjar.
Publication: Inhibition of miR-25 improves cardiac contractility in the failing heart. Christine Wahlquist, Dongtak Jeong, Agustin Rojas-Muñoz, Changwon Kho, Ahyoung Lee, Shinichi Mitsuyama, Alain Van Mil, Woo Jin Park, Joost P. G. Sluijter, Pieter A. F. Doevendans, Roger J. Hajjar & Mark Mercola. Nature (March 2014) http://www.nature.com/nature/journal/vaop/ncurrent/full/nature13073.html
“Junk” DNA Tied to Heart Failure
Deep RNA Sequencing Reveals Dynamic Regulation of Myocardial Noncoding RNAs in Failing Human Heart and Remodeling With Mechanical Circulatory Support
The myocardial transcriptome is dynamically regulated in advanced heart failure and after LVAD support. The expression profiles of lncRNAs, but not mRNAs or miRNAs, can discriminate failing hearts of different pathologies and are markedly altered in response to LVAD support. These results suggest an important role for lncRNAs in the pathogenesis of heart failure and in reverse remodeling observed with mechanical support.
Junk DNA was long thought to have no important role in heredity or disease because it doesn’t code for proteins. But emerging research in recent years has revealed that many of these sections of the genome produce noncoding RNA molecules that still have important functions in the body. They come in a variety of forms, some more widely studied than others. Of these, about 90% are called long noncoding RNAs (lncRNAs), and exploration of their roles in health and disease is just beginning.
‘Junk’ Genome Regions Linked to Heart Failure
In a recent issue of the journal Circulation, Washington University investigators report results from the first comprehensive analysis of all RNA molecules expressed in the human heart. The researchers studied nonfailing hearts and failing hearts before and after patients received pump support from left ventricular assist devices (LVAD). The LVADs increased each heart’s pumping capacity while patients waited for heart transplants.
“We took an unbiased approach to investigating which types of RNA might be linked to heart failure,” said senior author Jeanne Nerbonne, the Alumni Endowed Professor of Molecular Biology and Pharmacology. “We were surprised to find that long noncoding RNAs stood out.”
In the new study, the investigators found that unlike other RNA molecules, expression patterns of long noncoding RNAs could distinguish between two major types of heart failure and between failing hearts before and after they received LVAD support.
“We don’t know whether these changes in long noncoding RNAs are a cause or an effect of heart failure,” Nerbonne said. “But it seems likely they play some role in coordinating the regulation of multiple genes involved in heart function.”
Nerbonne pointed out that all types of RNA molecules they examined could make the obvious distinction: telling the difference between failing and nonfailing hearts. But only expression of the long noncoding RNAs was measurably different between heart failure associated with a heart attack (ischemic) and heart failure without the obvious trigger of blocked arteries (nonischemic). Similarly, only long noncoding RNAs significantly changed expression patterns after implantation of left ventricular assist devices.
Heart failure is a complex disease with a broad spectrum of pathological features. Despite significant advancement in clinical diagnosis through improved imaging modalities and hemodynamic approaches, reliable molecular signatures for better differential diagnosis and better monitoring of heart failure progression remain elusive. The few known clinical biomarkers for heart failure, such as plasma brain natriuretic peptide and troponin, have been shown to have limited use in defining the cause or prognosis of the disease.1,2 Consequently, current clinical identification and classification of heart failure remain descriptive, mostly based on functional and morphological parameters. Therefore, defining the pathogenic mechanisms for hypertrophic versus dilated, or ischemic versus nonischemic, cardiomyopathies in the failing heart remains a major challenge for both basic science and clinical researchers. In recent years, mechanical circulatory support using left ventricular assist devices (LVADs) has assumed a growing role in the care of patients with end-stage heart failure.3 During the earlier years of LVAD application as a bridge to transplant, it became evident that some patients exhibit substantial recovery of ventricular function, structure, and electrical properties.4 This led to the recognition that reverse remodeling is potentially an achievable therapeutic goal using LVADs. However, the underlying mechanism for the reverse remodeling in LVAD-treated hearts is unclear, and its discovery would likely hold great promise to halt or even reverse the progression of heart failure.
Efficacy and Safety of Dabigatran Compared With Warfarin in Relation to Baseline Renal Function in Patients With Atrial Fibrillation: A RE-LY (Randomized Evaluation of Long-term Anticoagulation Therapy) Trial Analysis
In patients with atrial fibrillation, impaired renal function is associated with a higher risk of thromboembolic events and major bleeding. Oral anticoagulation with vitamin K antagonists reduces thromboembolic events but raises the risk of bleeding. The new oral anticoagulant dabigatran has 80% renal elimination, and its efficacy and safety might, therefore, be related to renal function. In this prespecified analysis from the Randomized Evaluation of Long-term Anticoagulation Therapy (RE-LY) trial, outcomes with dabigatran versus warfarin were evaluated in relation to 4 estimates of renal function: equations based on creatinine levels (Cockcroft-Gault, Modification of Diet in Renal Disease [MDRD], and Chronic Kidney Disease Epidemiology Collaboration [CKD-EPI]) and cystatin C. The rates of stroke or systemic embolism were lower with dabigatran 150 mg and similar with 110 mg twice daily, irrespective of renal function. Rates of major bleeding were lower with dabigatran 110 mg and similar with 150 mg twice daily across the entire range of renal function. However, when the CKD-EPI or MDRD equations were used, there was a significantly greater relative reduction in major bleeding with both doses of dabigatran than with warfarin in patients with estimated glomerular filtration rate ≥80 mL/min. These findings show that dabigatran can be used with the same efficacy and adequate safety in patients with a wide range of renal function, and that a more accurate estimate of renal function might be useful for improved tailoring of anticoagulant treatment in patients with atrial fibrillation and an increased risk of stroke.
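Of the creatinine-based estimates named above, Cockcroft-Gault is the simplest to reproduce. A minimal sketch of the standard formula, with a hypothetical patient for illustration (the function name and example values are ours, not from the trial):

```python
def cockcroft_gault(age_years: float, weight_kg: float,
                    serum_creatinine_mg_dl: float, female: bool) -> float:
    """Estimated creatinine clearance (mL/min), Cockcroft-Gault equation."""
    crcl = (140.0 - age_years) * weight_kg / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# Hypothetical patient: 75-year-old woman, 70 kg, serum creatinine 1.1 mg/dL.
print(f"CrCl ≈ {cockcroft_gault(75, 70, 1.1, female=True):.0f} mL/min")  # ≈ 49
```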
Aldosterone Regulates MicroRNAs in the Cortical Collecting Duct to Alter Sodium Transport.
ABSTRACT A role for microRNAs (miRs) in the physiologic regulation of sodium transport in the kidney has not been established. In this study, we investigated the potential of aldosterone to alter miR expression in mouse cortical collecting duct (mCCD) epithelial cells. Microarray studies demonstrated the regulation of miR expression by aldosterone in both cultured mCCD and isolated primary distal nephron principal cells.
Aldosterone regulation of the most significantly downregulated miRs, mmu-miR-335-3p, mmu-miR-290-5p, and mmu-miR-1983, was confirmed by quantitative RT-PCR. Reducing the expression of these miRs separately or in combination increased epithelial sodium channel (ENaC)-mediated sodium transport in mCCD cells, without mineralocorticoid supplementation. Artificially increasing the expression of these miRs by transfection with plasmid precursors or miR mimic constructs blunted aldosterone stimulation of ENaC transport.
Using a newly developed computational approach, termed ComiR, we predicted potential gene targets for the aldosterone-regulated miRs and confirmed ankyrin 3 (Ank3) as a novel aldosterone and miR-regulated protein.
A dual-luciferase assay demonstrated direct binding of the miRs with the Ank3-3′ untranslated region. Overexpression of Ank3 increased and depletion of Ank3 decreased ENaC-mediated sodium transport in mCCD cells. These findings implicate miRs as intermediaries in aldosterone signaling in principal cells of the distal kidney nephron.
2. Diagnostic Biomarker Status
A prospective study of the impact of serial troponin measurements on the diagnosis of myocardial infarction and hospital and 6-month mortality in patients admitted to ICU with non-cardiac diagnoses.
ABSTRACT Troponin T (cTnT) elevation is common in patients in the Intensive Care Unit (ICU) and is associated with morbidity and mortality. Our aim was to determine the epidemiology of raised cTnT levels and contemporaneous electrocardiogram (ECG) changes suggesting myocardial infarction (MI) in ICU patients admitted for non-cardiac reasons.
cTnT and ECGs were recorded daily during week 1 and on alternate days during week 2 until discharge from ICU or death. ECGs were interpreted independently for the presence of ischaemic changes. Patients were classified into 4 groups (a rule sketched in code after this abstract): (i) definite MI (cTnT ≥15 ng/L and contemporaneous changes of MI on ECG), (ii) possible MI (cTnT ≥15 ng/L and contemporaneous ischaemic changes on ECG), (iii) troponin rise alone (cTnT ≥15 ng/L), or (iv) normal. Medical notes were screened independently by two ICU clinicians for evidence that the clinical teams had considered a cardiac event.
Data from 144 patients were analysed [42% female; mean age 61.9 (SD 16.9)]. 121 patients (84%) had at least one cTnT level ≥15 ng/L. A total of 20 patients (14%) had a definite MI, 27% had a possible MI, 43% had a cTnT rise without contemporaneous ECG changes, and 16% had no cTnT rise. ICU, hospital and 180-day mortality were significantly higher in patients with a definite or possible MI. Only 20% of definite MIs were recognised by the clinical team. There was no significant difference in mortality between recognised and non-recognised events. At the time of cTnT rise, 100 patients (70%) were septic and 58% were on vasopressors. Patients who were septic when cTnT was elevated had an ICU mortality of 28% compared to 9% in patients without sepsis. ICU mortality of patients who were on vasopressors at the time of cTnT elevation was 37% compared to 1.7% in patients not on vasopressors.
The majority of critically ill patients (84%) had a cTnT rise, and 41% met criteria for a possible or definite MI, of whom only 20% were recognised clinically. Mortality up to 180 days was higher in patients with a cTnT rise.
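As referenced in the abstract, the four-group rule reduces to a small decision function. A minimal sketch, using the cTnT ≥ 15 ng/L threshold and group labels from the abstract (function and argument names are ours):

```python
# Sketch of the four-group rule from the abstract above; the threshold
# (cTnT >= 15 ng/L) and group labels are as stated, function names are ours.
def classify_patient(ctnt_ng_l: float, mi_changes_on_ecg: bool,
                     ischaemic_changes_on_ecg: bool) -> str:
    if ctnt_ng_l >= 15.0 and mi_changes_on_ecg:
        return "definite MI"
    if ctnt_ng_l >= 15.0 and ischaemic_changes_on_ecg:
        return "possible MI"
    if ctnt_ng_l >= 15.0:
        return "troponin rise alone"
    return "normal"

print(classify_patient(42.0, mi_changes_on_ecg=False,
                       ischaemic_changes_on_ecg=True))  # possible MI
```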
Prognostic performance of high-sensitivity cardiac troponin T kinetic changes adjusted for elevated admission values and the GRACE score in an unselected emergency department population.
ABSTRACT To test the prognostic performance of rising and falling kinetic changes of high-sensitivity cardiac troponin T (hs-cTnT) and the GRACE score.
Rising and falling hs-cTnT changes in an unselected emergency department population were compared.
635 patients with a hs-cTnT >99th percentile admission value were enrolled. Of these, 572 patients qualified for evaluation, with rising patterns (n=254, 44.4%), falling patterns (n=224, 39.2%), or falling patterns following an initial rise (n=94, 16.4%). During 407 days of follow-up, we observed 74 deaths, 17 recurrent AMIs, and 79 subjects with a composite of death/AMI. Admission values >14 ng/L were associated with a higher rate of adverse outcomes (OR [95% CI]: death 12.6 [1.8–92.1], p=0.01; death/AMI 6.7 [1.6–27.9], p=0.01). Neither rising nor falling changes increased the AUC of baseline values (AUC: rising 0.562 vs 0.561, p=ns; falling 0.533 vs 0.575, p=ns). A GRACE score ≥140 points indicated a higher risk of death (OR [95% CI]: 3.14 [1.84–5.36]), AMI (1.56 [0.59–4.17]), or death/AMI (2.49 [1.51–4.11]). Hs-cTnT changes did not improve the prognostic performance of a GRACE score ≥140 points (AUC [95% CI]: death 0.635 [0.570–0.701] vs 0.560 [0.470–0.649], p=ns; AMI 0.555 [0.418–0.693] vs 0.603 [0.424–0.782], p=ns; death/AMI 0.610 [0.545–0.676] vs 0.538 [0.454–0.622], p=ns). Coronary angiography was performed earlier in patients with rising than with falling kinetics (median [IQR], hours: 13.7 [5.5–28.0] vs 20.8 [6.3–59.0], p=0.01).
Neither rising nor falling hs-cTnT changes improve the prognostic performance of elevated hs-cTnT admission values or of the GRACE score. However, rising values are more likely to be associated with the decision for an earlier invasive strategy.
ABSTRACT: Under normal circumstances, most intracellular troponin is part of the muscle contractile apparatus, and only a small percentage (< 2-8%) is free in the cytoplasm. The presence of a cardiac-specific troponin in the circulation at levels above normal is good evidence of damage to cardiac muscle cells, such as myocardial infarction, myocarditis, trauma, unstable angina, cardiac surgery or other cardiac procedures. Troponins are released as complexes leading to various cut-off values depending on the assay used. This makes them very sensitive and specific indicators of cardiac injury. As with other cardiac markers, observation of a rise and fall in troponin levels in the appropriate time-frame increases the diagnostic specificity for acute myocardial infarction. They start to rise approximately 4-6 h after the onset of acute myocardial infarction and peak at approximately 24 h, as is the case with creatine kinase-MB. They remain elevated for 7-10 days giving a longer diagnostic window than creatine kinase. Although the diagnosis of various types of acute coronary syndrome remains a clinical-based diagnosis, the use of troponin levels contributes to their classification. This Editorial elaborates on the nature of troponin, its classification, clinical use and importance, as well as comparing it with other currently available cardiac markers.
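The rise-and-fall criterion described above lends itself to a simple serial-sampling check. A toy sketch, assuming a ≥20% relative change between samples as the significant delta (a commonly used figure in practice, but an assumption here, not a value from this abstract):

```python
# Toy check for a rise-and-fall pattern across serial troponin samples.
# The 20% relative-delta threshold is an assumption for illustration only;
# it is not a value given in this abstract.
def rise_and_fall(samples_ng_l: list, rel_delta: float = 0.20) -> bool:
    peak_idx = samples_ng_l.index(max(samples_ng_l))
    first, peak, last = samples_ng_l[0], samples_ng_l[peak_idx], samples_ng_l[-1]
    rose = peak_idx > 0 and peak >= first * (1 + rel_delta)
    fell = peak_idx < len(samples_ng_l) - 1 and last <= peak * (1 - rel_delta)
    return rose and fell

# Serial values (ng/L) drawn every ~6 h after symptom onset (hypothetical):
print(rise_and_fall([18.0, 95.0, 240.0, 130.0, 60.0]))  # True: classic rise/fall
```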
ABSTRACT: Although a redefinition of acute myocardial infarction (AMI) was proposed a few years ago, to date it has not been universally adopted by many institutions. The purpose of this study was to evaluate the diagnostic, prognostic and economic impact of the new diagnostic criteria for AMI. Patients consecutively admitted to the emergency department with suspected acute coronary syndromes were enrolled in this study. Troponin T (cTnT) was measured in samples collected for routine CK-MB analyses, and results were not available to physicians. Patients without AMI by traditional criteria and with cTnT ≥ 0.035 ng/mL were coded as redefined AMI. Clinical outcomes were hospital death, major cardiac events and revascularization procedures. In-hospital management and reimbursement rates were also analyzed. Among 363 patients, 59 (16%) had AMI by conventional criteria, whereas an additional 75 (21%) had redefined AMI, an increase of 127% in incidence. Patients with redefined AMI were significantly older, more frequently male, with atypical chest pain and more risk factors. In multivariate analysis, redefined AMI was associated with 3.1-fold higher hospital mortality (95% CI: 0.6-14) and 5.6-fold more cardiac events (95% CI: 2.1-15) compared with those without AMI. From the hospital perspective, based on the DRG payment system, adoption of the AMI redefinition would increase the reimbursement rate by 12% [3552 Int dollars per 100 patients evaluated]. The redefined criteria result in a substantial increase in AMI cases and allow identification of high-risk patients. Efforts should be made to reinforce the adoption of the AMI redefinition, which may result in more qualified and efficient management of ACS.
Acellular Biomaterials: An Evolving Alternative to Cell-Based Therapies
Acellular biomaterials can stimulate the local environment to repair tissues without the regulatory and scientific challenges of cell-based therapies. A greater understanding of the mechanisms of such endogenous tissue repair is furthering the design and application of these biomaterials. We discuss recent progress in acellular materials for tissue repair, using cartilage and cardiac tissues as examples of applications with substantial intrinsic hurdles, but where human translation is now occurring.
Instructive Nanofiber Scaffolds with VEGF Create a Microenvironment for Arteriogenesis and Cardiac Repair
Angiogenic therapy is a promising approach for tissue repair and regeneration. However, recent clinical trials with protein delivery or gene therapy to promote angiogenesis have failed to provide therapeutic effects. A key factor for achieving effective revascularization is the durability of the microvasculature and the formation of new arterial vessels. Accordingly, we carried out experiments to test whether intramyocardial injection of self-assembling peptide nanofibers (NFs) combined with vascular endothelial growth factor (VEGF) could create an intramyocardial microenvironment with prolonged VEGF release to improve post-infarct neovascularization in rats. Our data showed that when injected with NF, VEGF delivery was sustained within the myocardium for up to 14 days, and the side effects of systemic edema and proteinuria were significantly reduced to the same level as that of control. NF/VEGF injection significantly improved angiogenesis, arteriogenesis, and cardiac performance 28 days after myocardial infarction. NF/VEGF injection not only allowed controlled local delivery but also transformed the injected site into a favorable microenvironment that recruited endogenous myofibroblasts and helped achieve effective revascularization. The engineered vascular niche further attracted a new population of cardiomyocyte-like cells to home to the injected sites, suggesting cardiomyocyte regeneration. Follow-up studies in pigs also revealed healing benefits consistent with observations in rats. In summary, this study demonstrates a new strategy for cardiovascular repair with potential for future clinical translation.
Along with scientific and regulatory issues, the translation of cell and tissue therapies into routine clinical practice needs to address standardization and cost-effectiveness through the definition of suitable manufacturing paradigms.