
    The influence of gastric atrophy on Helicobacter pylori antibiotics resistance in therapy-naïve patients

    Background: The susceptibility of Helicobacter pylori to antibiotics may vary among different niches of the stomach. The progression of chronic H. pylori gastritis to atrophy changes intragastric physiology and may influence the selection of resistant strains. Aim: To study the antibiotic resistance of H. pylori taking the severity of atrophic gastritis in the antrum and corpus into account. Methods: Helicobacter pylori-positive patients (n = 110, m = 32, mean age 52.6 ± 13.9 years) without prior H. pylori eradication undergoing upper gastrointestinal (GI) endoscopy for dyspeptic symptoms were included in a prospective study. Patients were stratified into three groups depending on the grade of atrophy: no atrophy (OLGA Stage 0), mild atrophy (OLGA Stage I–II) and moderate/severe atrophy (OLGA Stage III–IV). Two biopsies each from the antrum and the corpus and one from the angulus were taken and assessed according to the updated Sydney system. H. pylori strains were isolated from antrum and corpus biopsies and tested for antibiotic susceptibility (AST) to amoxicillin, clarithromycin, metronidazole, levofloxacin, tetracycline, and rifampicin by the agar dilution method. A Chi-square test of independence with a 95% confidence interval was used to detect differences in the proportion of patients with susceptible and resistant H. pylori strains. Results: Among 110 patients, primary clarithromycin resistance (R) was 30.0% in both the antrum and corpus; metronidazole resistance was 36.4% and 34.5% in the antrum and corpus; and levofloxacin resistance was 19.1% and 22.7% in the antrum and corpus, respectively. Resistance rates to amoxicillin, tetracycline, and rifampicin were below 5%. The dual antibiotic resistance rate was 21.8%, and the triple resistance rate was 9.1%. There was a significant difference in the resistance rate distribution in the antrum (p < 0.0001) and corpus (p < 0.0001). 
With increasing severity of atrophy according to OLGA stages, there was a significant increase in clarithromycin-R and metronidazole-R. Conclusion: In treatment-naïve patients, antibiotic resistance and heteroresistance were related to the severity of atrophy. The high clarithromycin resistance in atrophic gastritis suggests that H. pylori antibiotic susceptibility testing should always be performed in this condition before selecting the eradication regimen.
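The chi-square test of independence named in the Methods can be sketched as follows. This is a minimal illustration, not the study's analysis: the contingency table below is reconstructed from the reported resistance percentages (n = 110) and the function computes only the Pearson statistic, leaving the p-value to a chi-square table.

```python
# Pearson chi-square test of independence, a minimal sketch of the method
# named in the abstract. The counts below are derived from the reported
# resistance rates (30.0%, 36.4%, 19.1% of n = 110) for illustration only.

def chi_square_stat(table):
    """Return the Pearson chi-square statistic for a contingency table
    given as a list of rows of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows = antibiotics, columns = (susceptible, resistant) in the antrum
table = [
    [77, 33],   # clarithromycin: 30.0% resistant
    [70, 40],   # metronidazole:  36.4% resistant
    [89, 21],   # levofloxacin:   19.1% resistant
]
stat = chi_square_stat(table)
# Compare stat against the chi-square critical value for
# df = (rows - 1) * (cols - 1) = 2 at alpha = 0.05 (about 5.99).
```

The statistic alone suffices for a significance decision against tabulated critical values; a full analysis would also report the exact p-value.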

    An Epidemiological Systematic Review with Meta-Analysis on Biomarker Role of Circulating MicroRNAs in Breast Cancer Incidence

    Breast cancer (BC) is a multifactorial disease caused by an interaction between genetic predisposition and environmental exposures. MicroRNAs are a group of small non-coding RNA molecules, which seem to act either as tumor suppressor genes or oncogenes and appear to be related to cancer risk factors. We conducted a systematic review and meta-analysis to identify circulating microRNAs related to BC diagnosis, paying special attention to methodological problems in this research field. Seventy-five studies were included in the systematic review. A meta-analysis was performed for microRNAs analyzed in at least three independent studies where sufficient data for analysis were presented. Seven studies were included in the MIR21 and MIR155 meta-analyses, while four studies were included in the MIR10b meta-analysis. The pooled sensitivity and specificity for BC diagnosis were 0.86 (95%CI 0.76-0.93) and 0.84 (95%CI 0.71-0.92) for MIR21, 0.83 (95%CI 0.72-0.91) and 0.90 (95%CI 0.69-0.97) for MIR155, and 0.56 (95%CI 0.32-0.71) and 0.95 (95%CI 0.88-0.98) for MIR10b, respectively. Several other microRNAs were found to be dysregulated, distinguishing BC patients from healthy controls. However, there was little consistency between the included studies, making it difficult to identify specific microRNAs useful for diagnosis.
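The idea behind pooled sensitivity and specificity can be illustrated with a deliberately simplified sketch. Real diagnostic meta-analyses typically fit bivariate random-effects models rather than summing counts, and the per-study counts below are hypothetical, not taken from the included studies.

```python
# Naive pooling of diagnostic accuracy across studies: a simplified sketch
# of the concept only. Published meta-analyses use bivariate random-effects
# models; the study counts here are hypothetical.

def pooled_sensitivity(studies):
    """studies: list of (true_positives, false_negatives) per study."""
    tp = sum(s[0] for s in studies)
    fn = sum(s[1] for s in studies)
    return tp / (tp + fn)

def pooled_specificity(studies):
    """studies: list of (true_negatives, false_positives) per study."""
    tn = sum(s[0] for s in studies)
    fp = sum(s[1] for s in studies)
    return tn / (tn + fp)

# Hypothetical (TP, FN) counts from three studies of one microRNA
sens = pooled_sensitivity([(80, 20), (45, 5), (60, 15)])
```

Summing counts weights studies by size and ignores between-study heterogeneity, which is exactly the inconsistency problem the review highlights.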

    Cohort profile: the Turin prostate cancer prognostication (TPCP) cohort

    Introduction: Prostate cancer (PCa) is the most frequent tumor among men in Europe and has both indolent and aggressive forms. There are several treatment options, the choice of which depends on multiple factors. To further improve current prognostication models, we established the Turin Prostate Cancer Prognostication (TPCP) cohort, an Italian retrospective biopsy cohort of patients with PCa and long-term follow-up. This work presents this new cohort with its main characteristics and the distributions of some of its core variables, along with its potential contributions to PCa research. Methods: The TPCP cohort includes consecutive non-metastatic patients with a first positive biopsy for PCa performed between 2008 and 2013 at the main hospital in Turin, Italy. The follow-up ended on December 31st 2021. The primary outcome is the occurrence of metastasis; death from PCa and overall mortality are the secondary outcomes. In addition to numerous clinical variables, the study’s prognostic variables include histopathologic information assigned by a centralized uropathology review using a digital pathology software system specialized for the study of PCa, tumor DNA methylation in candidate genes, and features extracted from digitized slide images via Deep Neural Networks. Results: The cohort includes 891 patients followed up for a median of 10 years. During this period, 97 patients progressed to metastatic disease and 301 died; of these, 56 died from PCa. In total, 65.3% of the cohort has a Gleason score less than or equal to 3 + 4, and 44.5% has a clinical stage cT1. Consistent with previous studies, age and clinical stage at diagnosis are important prognostic factors: the crude cumulative incidence of metastatic disease during the 14 years of follow-up increases from 9.1% among patients younger than 64 to 16.2% for patients aged 75-84, and from 6.1% for stage cT1 to 27.9% for stage cT3. 
Discussion: This study stands to be an important resource for updating existing prognostic models for PCa on an Italian cohort. In addition, the integrated collection of multi-modal data will allow development and/or validation of new models including new histopathological, digital, and molecular markers, with the goal of better directing clinical decisions to manage patients with PCa.
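The crude cumulative incidence reported above is, in essence, the fraction of patients in each stratum who experienced the event during follow-up. The sketch below illustrates that calculation on hypothetical records; it deliberately ignores censoring and competing risks (death from other causes), which a full survival analysis must handle.

```python
# Crude cumulative incidence by stratum: the proportion of patients in each
# group with the event (metastasis) during follow-up. Hypothetical records;
# censoring and competing risks are ignored in this sketch.

from collections import defaultdict

def crude_cumulative_incidence(patients):
    """patients: iterable of (stratum, had_event) tuples.
    Returns {stratum: proportion of patients with the event}."""
    totals = defaultdict(int)
    events = defaultdict(int)
    for stratum, had_event in patients:
        totals[stratum] += 1
        if had_event:
            events[stratum] += 1
    return {s: events[s] / totals[s] for s in totals}

# Hypothetical records stratified by clinical stage at diagnosis
records = [("cT1", True), ("cT1", False), ("cT1", False),
           ("cT3", True), ("cT3", True), ("cT3", False)]
incidence = crude_cumulative_incidence(records)
```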

    Using naso- and oro-intestinal catheters in physiological research for intestinal delivery and sampling in vivo: practical and technical aspects to be considered

    Intestinal catheters have been used for decades in human nutrition, physiology, pharmacokinetics, and gut microbiome research, facilitating the delivery of compounds directly into the intestinal lumen or the aspiration of intestinal fluids in human subjects. Such research provides insights about (local) dynamic metabolic and other intestinal luminal processes, but working with catheters might pose challenges to biomedical researchers and clinicians. Here, we provide an overview of practical and technical aspects of applying naso- and oro-intestinal catheters for delivery of compounds and sampling luminal fluids from the jejunum, ileum, and colon in vivo. The recent literature was extensively reviewed, and combined with experiences and insights we gained through our own clinical trials. We included 60 studies that involved a total of 720 healthy subjects and 42 patients. Most of the studies investigated multiple intestinal regions (24 studies), followed by studies investigating only the jejunum (21 studies), ileum (13 studies), or colon (2 studies). The ileum and colon used to be relatively inaccessible regions in vivo. Custom-made state-of-the-art catheters are available with numerous options for the design, such as multiple lumina, side holes, and inflatable balloons for catheter progression or isolation of intestinal segments. These allow for multiple controlled sampling and compound delivery options in different intestinal regions. Intestinal catheters were often used for delivery (23 studies), sampling (10 studies), or both (27 studies). Sampling speed decreased with increasing distance from the sampling syringe to the specific intestinal segment (i.e., speed highest in duodenum, lowest in ileum/colon). No serious adverse events were reported in the literature, and a dropout rate of around 10% was found for these types of studies. 
This review is highly relevant for researchers who are active in various research areas and want to expand their research with the use of intestinal catheters in humans in vivo.

    Circles, an experimental approach to film music composition through sonification of moving images

    In this paper an experimental approach to film music composition through sonification is discussed. Sonification [7], the practice of transforming data into sound, is not a new concept. There have been several attempts to present data as sound. This technique is called “data sonification” and it is the equivalent of the more established practice of “data visualization”. From stock market data to volcanic activity, from gravitational waves to urban pollution, all kinds of data have been treated with sonification. Here the aim is to apply sonification to a video, a film or a documentary by extracting data that can be converted into a musical piece. By doing so, the film composer can potentially find new sources of artistic inspiration and new composing techniques and approaches that could lead to unexpected and evocative musical results. 1. INTRODUCTION: 1.1 Goals The aim of this research is to explore the possibility of letting the video compose its own music. That would accomplish several interesting outcomes. First, it would create unpredictable artistic results by forcing the composer to deviate from the usual creative workflow of watching the film, gathering musical themes, harmonies and ideas, and then starting to compose the score. Second, it would speed up the process of music creation because the length of the video would not influence the duration of the writing process: once the sonification of the video data is set, the algorithm creates the music automatically and in real time. Third, the suggested approach could be extended beyond the sonification of a video. By using video cameras, computer vision, artificial intelligence systems and real-time object detection devices, several interactive synesthetic experiences could be created for a general audience by capturing human body-movement data and transforming it into music. 
This form of movement interpretation could help explain the meaning of sound, movement and music in relation to everyday physical experience. Fourth, this research could lead to new software, algorithms or plug-ins that could enhance the creative workflow of composers, video makers, production companies and others that could benefit from automated music-creation tools driven by video data. 1.2 Challenges The first challenge was to find meaningful ways to extract usable data from a video. Many software tools are available today; here the choice was Max/MSP and specifically the set of cv.jit objects designed by Jean-Marc Pelletier [6]. By creating several patches and algorithms in Max/MSP it was possible to extract numeric values from visual parameters such as brightness, horizontal and vertical position of various objects, size, movement, speed, contrast and saturation. The second challenge was to find ways to attribute a musical meaning to the collected data. The biggest challenge was creating a musical, melodic, harmonic and sonic vocabulary that could use those data artistically. This work aimed to achieve three main goals: first, to create a music piece that is meaningful, pleasant and understandable, not just random and chaotic. Second, to create a music piece able to enhance and comment on the story of the video in a narrative way, exactly as any traditional film composer would do. Third, to come up with a music piece that is an aural representation of its visual counterpart. A certain level of similarity between what we see and what we hear needed to be achieved. That required a thoughtful understanding of how people perceive sounds and images in their everyday physical experience in a multisensory way. 2. BACKGROUND: The idea of using a picture, a drawing or a moving image like a film as a source for music composition is not new. 
There are several examples of this practice, from composer Sylvano Bussotti's graphical scores to the “clavier à lumières” [1] (“keyboard with lights”), a synesthetic musical instrument invented by the composer Alexander Scriabin for his work Prometheus: Poem of Fire. The work of Conlon Nancarrow, his Studies for Player Piano, his graphical scores and his extensive use of self-playing instruments are valuable examples of sonification too [3]. The ANS synthesizer created by Russian engineer Evgeny Murzin from 1937 to 1957 is another attempt to convert a graphical image, a drawing or a drawn sound spectrum into a piece of music [4]. Other machines, such as the “Oramics” machine designed in 1957 by musician Daphne Oram [5] or the Variophone developed by Evgeny Sholpo in 1930, are all examples of graphical sound techniques designed to create a more literal relationship between visual and audio material. The common characteristic of those early projects is that they all used a static image as a source for sonification. The image was usually scanned from left to right to produce sound; unfortunately, the relationship between sound and time was lost. By contrast, if a video is used as the source of sonification, the interactivity between what we see in a specific moment and what we hear is guaranteed because the process of sonification happens in real time. 3. COLLECTING DATA FROM IMAGES: For the preliminary step of extracting data from the video, two main approaches were designed. The first is called “Centroid Blobs” and the second “Pixel Mosaic”. 3.1 “Centroid Blobs” This approach uses the main features of the various objects and shapes present in the video by identifying clusters of similar pixels from one frame to the next. Whenever the algorithm identifies a corner, a line, a mass or a salient feature it applies a “centroid”, a “blob” and a “label”. The recognition process operates on a black-and-white version of the video. 
Additional controls for saturation, brightness and contrast can modify the behavior of the algorithm. Each blob corresponds to an object or a recognizable feature and produces three numeric values at any moment: horizontal position, vertical position and mass size. Those three values are converted into MIDI information; each blob is assigned a specific instrument and a MIDI channel, routed from Max/MSP to Ableton Live through several MIDI ports. 3.2 From raw data to music: The flux of horizontal and vertical movement data and the size of each blob is converted into MIDI. The translation of those raw numbers into a musical vocabulary aims to preserve the most obvious correspondence between how people perceive sounds and their physical and bodily experience. The horizontal position of each blob can be effectively translated into a panning value, from left to right and vice versa. It makes sense to place a sound on the left, center or right side of the stereo field if the corresponding object is in the same visual position in the video. For this, the MIDI continuous controller “cc10” (panning) seemed to be the best option. The vertical position of each blob can be translated into pitch variations, from low to high, bottom-up or top-down. This translation seems quite natural too: in music a sound is described as “low” (low pitch) or “high” (high pitch). High frequencies tend to be perceived as higher (closer to our head) and smaller than low frequencies (which tend to be perceived as bigger, heavier and lower, closer to our guts or feet). The mass (size) of each blob can be translated into a variation of volume and loudness; the best continuous controller here is “cc7” (MIDI volume). It is worth noticing that our brain tends to perceive the size of a sound not just in terms of volume variation (a bigger or closer object will sound louder and vice versa); a variation of the frequency spectrum can suggest a variation in size too. 
In fact, sounds with fewer low frequencies tend to be perceived as “thinner” and therefore “smaller”, whereas sounds with more low frequencies are perceived as “fatter” and “bigger”. It was found that translating the blob mass value into MIDI control of low-pass and high-pass filters can convey an effective perception of size. As mentioned before, the blobs' MIDI data are routed into Ableton Live. Here, a certain level of artistic freedom is guaranteed: each blob can be associated with a specific scale (diatonic, chromatic) and musical key or mode. The choice of sounds is free as well; several patches and variations have been designed using stock synthesizers in Ableton Live as well as Native Instruments' Kontakt sound banks, pure sine waves or wavetables. 3.3 “Pixel Mosaic”: The second approach is called “Pixel Mosaic”. Its data extraction technique and video interpretation are completely different from the “Centroid Blobs” system. Here the video canvas is treated almost like a digital musical instrument. The video is converted to black and white and downscaled to a matrix of forty by eighty pixels, for a total of three thousand two hundred pixels. Each pixel represents a sound, either a pure sine wave (in an additive synthesis setup) or a filtering frequency (in a subtractive synthesis setup). After the black-and-white conversion, the system uses the luma value (brightness) to control the loudness of each pixel-sound. Each pixel can range from complete silence (black) to full volume (white) with all the in-between nuances on a grey scale. In this way, the video acts like a score for the music. Depending on its content, some pixels will be brighter and some darker, and the musical result will be different every time. 3.4 From raw data to music: The 3200 pixels are divided into eighty vertical columns. 
Each column contains forty pixels (and consequently forty pure sounds or pass-filter bands). For a clearer correspondence between what we see and what we hear, column one is panned hard left and column eighty is panned hard right, with all the other columns reflecting their relative visual panning position in the video. That gives the most natural audio-visual correspondence. Each column is tuned in the same manner, and the tuning follows common musical keys and scales (C major, D Dorian, E minor, a C minor triad in root position or its inversions, et cetera). The tuning of each column can be changed thanks to a sub-patch called “Transposing Machine” that can be triggered via specific buttons corresponding to the various keys and scales or via MIDI keyboard input. The sound is generated inside Max/MSP using sound modules such as oscillator banks, pink noise, white noise, multiband filters or wavetables. 4. CONCLUSIONS: Several conclusions can be drawn from experimentation with the presented framework. First of all, one aim of this research was to compose the music for a full-length narrative documentary called “Circles”, a forty-five-minute film. The documentary narrates the alternation of life and death, the spirituality of human beings and the meaning of our existence. The composition was planned around extensive use of the various sonification algorithms presented in this paper. It was one of the focal points of this research to discover whether the designed systems of algorithmic and automated composition could be useful and valuable in a real-world scenario of composing an entire soundtrack for a full movie. The artistic results seem to answer that question positively. More specifically, the following elements were discovered: - The algorithm can only comment on the film musically in a simple, literal and linear relationship: we hear what we see. 
The system cannot decide what melody, harmony, sound or scale is more appropriate for a specific scene. In other words, the job of determining the best musical vocabulary is still left to the composer. - The proposed technique can be a valuable tool for designing new and fresh sonic landscapes and audio palettes that interact very well with their visual counterpart. The concept of “mickey-mousing” (following every movement in a video with a musical gesture) is particularly emphasized here. This could be a positive or a negative element depending on the aesthetic and artistic results that the composer wishes to achieve. 5. REFERENCES: [1] John Harrison (2001), Synaesthesia: The Strangest Thing, ISBN 0-19-263245-0. [2] Zimmerman, Walter, Desert Plants – Conversations with 23 American Musicians, Berlin: Beginner Press in cooperation with Mode Records, 2020 (originally published in 1976 by A.R.C., Vancouver). [3] Gann, Kyle (2006). The Music of Conlon Nancarrow, p. 38. ISBN 978-0521028073. [4] Levin, Thomas (2003). “Tones from out of Nowhere: Rudolf Pfenninger and the Archaeology of Synthetic Sound”. Grey Room 12 (Fall 2003): pp. 32-79. [5] Daphne Oram (1972), An Individual Note: Of Music, Sound and Electronics, Galliard, ISBN 978-0-8524-9109-6. [6] Pelletier, J.M. “Sonified Motion Flow Fields as a Means of Musical Expression”, in Proceedings of the International Conference on New Interfaces for Musical Expression, Genova, Italy, 2008, pp. 158-163. [7] Kramer, Gregory, ed. (1994). Auditory Display: Sonification, Audification, and Auditory Interfaces. Santa Fe Institute Studies in the Sciences of Complexity, Proceedings Volume XVIII. Reading, MA: Addison-Wesley. ISBN 978-0-201-62603-2.
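The two mappings described in Section 3 (blob features to MIDI, and pixel luma to amplitude) can be sketched in code as follows. This is an illustrative sketch only: the C major scale, note range and frame size are assumptions made here, while the original patches run in Max/MSP, routing MIDI to Ableton Live.

```python
# Sketch of the two sonification mappings described in the paper. All
# concrete choices (scale, base note, frame size) are illustrative
# assumptions; the originals are Max/MSP patches driving Ableton Live.

C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the diatonic scale

def blob_to_midi(x, y, mass, base_note=48, octaves=3):
    """'Centroid Blobs' mapping: x, y, mass normalized to [0, 1], with
    y = 0 at the bottom of the frame. Returns (pan_cc10, note, volume_cc7),
    each in the MIDI range 0-127."""
    pan = round(x * 127)                       # left edge -> hard left (cc10)
    steps = octaves * len(C_MAJOR) - 1
    degree = round(y * steps)                  # higher on screen -> higher pitch
    note = base_note + 12 * (degree // len(C_MAJOR)) + C_MAJOR[degree % len(C_MAJOR)]
    volume = round(mass * 127)                 # bigger blob -> louder (cc7)
    return pan, note, volume

def frame_to_amplitudes(frame, max_luma=255):
    """'Pixel Mosaic' mapping: frame is rows of luma values (0 = black =
    silent, max_luma = white = full volume). Returns one amplitude list per
    column; the column index maps to the pan position (first column hard
    left, last column hard right)."""
    columns = zip(*frame)                      # one column per pan position
    return [[luma / max_luma for luma in col] for col in columns]

# A blob at the top-left corner with half the maximum mass
pan, note, vol = blob_to_midi(0.0, 1.0, 0.5)

# A tiny 2x3 hypothetical frame: brighter pixels -> louder partials
amps = frame_to_amplitudes([[0, 128, 255],
                            [255, 128, 0]])
```

Quantizing the vertical position to scale degrees, rather than raw semitones, mirrors the paper's point that each blob can be bound to a chosen scale or mode.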

    Assemblage of a functional and versatile endoscopy trainer reusing medical waste: Step‐by‐step video tutorial

    Endoscopy simulators are progressively being integrated into training programs since they provide a safe and controlled learning environment for trainees to acquire and refine the endoscopic skills necessary for complex interventions.1-3 While several validated endoscopy trainers have been developed, their widespread availability can be limited by local resources.4 Here we provide a step-by-step guide to assemble a simple and inexpensive endoscopy trainer using medical waste and expired clinic materials (Fig. 1). This project was developed within the “Take Instead of Discard” program at University Hospital LMU Munich, a sustainability initiative incentivizing the reuse of medical equipment packaging for various purposes.

    Nuove strategie per competere (Approcci, metodologie e strumenti per vincere nel mercato)

    The volume "Nuove strategie per competere" ("New strategies to compete"), developed together with authoritative academics in the field, aims to offer managers and entrepreneurs not only a journey through today's most representative theories of strategic innovation, but above all gives the reader the opportunity to quickly learn and explore the methodologies and tools that can be applied in a company, making it possible to move from "understanding the present to the ability to define the future of competition". The goal of the volume's authors, academics and consultants, is to offer the reader theory and practice, methodologies and tools, and the approaches and means to successfully execute and implement corporate strategy. Following paths already proposed by authoritative authors such as Robert S. Kaplan and David Norton, the book outlines the substantive and procedural path an organization must follow to improve its results, starting from the definition and innovation of strategy through to its practical execution, and providing tools for managing change and for leveraging knowledge as the key distinctive factor in achieving a sustainable competitive advantage over time. The volume is intended as a guide that managers can consult clearly and quickly and, at the same time, use in a practical and pragmatic way to redesign, execute, and better manage the strategy of their own organization