175 research outputs found
Byte Pair Encoding for Symbolic Music
When used with deep learning, the symbolic music modality is often coupled with language model architectures. To do so, the music needs to be tokenized, i.e. converted into a sequence of discrete tokens. This can be achieved with different approaches, as music can be composed of simultaneous tracks of simultaneous notes, each with several attributes. To date, the proposed tokenizations have relied on small vocabularies of tokens describing note attributes and time events, resulting in fairly long token sequences and a sub-optimal use of the embedding space of language models. Recent research has focused on reducing the overall sequence length by merging embeddings or combining tokens. In this paper, we show that Byte Pair Encoding (BPE), a compression technique widely used for natural language, significantly decreases the sequence length while increasing the vocabulary size. By doing so, we leverage the embedding capabilities of such models with more expressive tokens, resulting in both better results and faster inference for generation and classification tasks. The source code is shared on GitHub, along with a companion website. Finally, BPE is directly implemented in MidiTok, allowing the reader to easily benefit from this method.
Comment: EMNLP 2023, source code: https://github.com/Natooz/BPE-Symbolic-Musi
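To illustrate the core idea, the following is a minimal toy sketch of the BPE merge loop applied directly to one sequence of symbolic-music-style tokens: the most frequent adjacent token pair is repeatedly merged into a new, more expressive token, shortening the sequence while growing the vocabulary. The token names, the example sequence, and the learn_bpe helper are illustrative assumptions, not the paper's or MidiTok's actual implementation (which learns merges over a corpus before applying them).

from collections import Counter

def learn_bpe(sequence, num_merges):
    """Toy BPE: repeatedly merge the most frequent adjacent token pair."""
    merges = []
    seq = list(sequence)
    for _ in range(num_merges):
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:
            break  # nothing left worth merging
        merged = f"{a}+{b}"
        merges.append((a, b, merged))
        # Replace every occurrence of the pair with the merged token.
        new_seq, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
                new_seq.append(merged)
                i += 2
            else:
                new_seq.append(seq[i])
                i += 1
        seq = new_seq
    return seq, merges

# Hypothetical note-attribute and time-event tokens, purely for illustration.
tokens = ["Pitch_60", "Velocity_64", "Duration_4",
          "Pitch_64", "Velocity_64", "Duration_4",
          "Pitch_60", "Velocity_64", "Duration_4"]
compressed, merges = learn_bpe(tokens, num_merges=3)
print(len(tokens), "->", len(compressed), "tokens;", len(merges), "merges learned")

On this example the 9 base tokens collapse to 4 merged tokens after 2 merges, which is the sequence-length reduction and vocabulary growth the abstract describes.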
miditok: A Python package for MIDI file tokenization
Recent progress in natural language processing has been adapted to the symbolic music modality. Language models, such as Transformers, have been used with symbolic music for a variety of tasks, including music generation, modeling, and transcription, with state-of-the-art performance. These models are beginning to be used in production. To encode and decode music for the backbone model, they need to rely on tokenizers, whose role is to serialize music into sequences of distinct elements called tokens. MidiTok is an open-source library for tokenizing symbolic music with great flexibility and extended features. It implements the most popular music tokenizations under a unified API, and is designed to be easy to use and to extend for everyone.
Comment: Updated and comprehensive report. Original ISMIR 2021 document at https://archives.ismir.net/ismir2021/latebreaking/000005.pd
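As a rough illustration of the unified tokenizer API described above, the sketch below tokenizes a MIDI file and converts the tokens back with MidiTok's REMI tokenizer. The configuration values, the file paths, and the exact call signatures are assumptions based on recent MidiTok versions; method names may differ between releases.

from pathlib import Path

from miditok import REMI, TokenizerConfig  # assumed available in recent MidiTok versions

# Configure a tokenizer; the parameters shown are illustrative, not prescriptive.
config = TokenizerConfig(num_velocities=32, use_chords=True)
tokenizer = REMI(config)

# Hypothetical input file; replace with a real MIDI path.
midi_path = Path("example.mid")

# Tokenize: serialize the MIDI file into a sequence of discrete tokens.
tokens = tokenizer(midi_path)

# Detokenize: convert the token sequence back into a symbolic music object
# and write it out (the dump method name may vary across versions).
score = tokenizer(tokens)
score.dump_midi(Path("roundtrip.mid"))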
Selective time-dependent changes in activity and cell-specific gene expression in human postmortem brain.
As a means to understand human neuropsychiatric disorders from human brain samples, we compared the transcription patterns and histological features of postmortem brain to fresh human neocortex isolated immediately following surgical removal. Compared to a number of neuropsychiatric disease-associated postmortem transcriptomes, the fresh human brain transcriptome had an entirely distinct transcriptional pattern. To understand this difference, we measured genome-wide transcription as a function of time after fresh tissue removal to mimic the postmortem interval. Within a few hours, a selective reduction in the number of neuronal activity-dependent transcripts occurred, with relative preservation of housekeeping genes commonly used as a reference for RNA normalization. Gene clustering indicated a rapid reduction in neuronal gene expression with a reciprocal time-dependent increase in astroglial and microglial gene expression that continued for at least 24 h after tissue resection. Predicted transcriptional changes were confirmed histologically on the same tissue, demonstrating that while neurons were degenerating, glial cells underwent an outgrowth of their processes. The rapid loss of neuronal genes and the reciprocal expression of glial genes highlight the highly dynamic transcriptional and cellular changes that occur during the postmortem interval. Understanding these time-dependent changes in gene expression in postmortem brain samples is critical for the interpretation of research studies on human brain disorders.
Increased GABAA Receptor ή-Subunit Expression on Ventral Respiratory Column Neurons Protects Breathing during Pregnancy
GABAergic signaling is essential for proper respiratory function. Potentiation of this signaling with allosteric modulators such as anesthetics, barbiturates, and neurosteroids can lead to respiratory arrest. Paradoxically, pregnant animals continue to breathe normally despite nearly 100-fold increases in circulating neurosteroids. ή subunit-containing GABAARs are insensitive to positive allosteric modulation; thus, we hypothesized that pregnant rats increase ή subunit-containing GABAAR expression on brainstem neurons of the ventral respiratory column (VRC). In vivo, pregnancy rendered respiratory motor output insensitive to otherwise lethal doses of pentobarbital, a barbiturate previously used to categorize the ή subunit. Using electrode array recordings in vitro, we demonstrated that putative respiratory neurons of the preBötzinger Complex (preBötC) were also rendered insensitive to the effects of pentobarbital during pregnancy, but unit activity in the VRC was rapidly inhibited by the GABAAR agonist muscimol. VRC unit activity from virgin and post-partum females was potently inhibited by both pentobarbital and muscimol. Brainstem ή subunit mRNA and protein levels were increased in pregnant rats, and GABAAR ή subunit expression co-localized with a marker of rhythm-generating neurons (neurokinin 1 receptors) in the preBötC. These data support the hypothesis that pregnancy renders respiratory motor output and respiratory neuron activity insensitive to barbiturates, most likely via increased ή subunit-containing GABAAR expression on respiratory rhythm-generating neurons. Increased ή subunit expression may be critical to preserve respiratory function (and life) despite increased neurosteroid levels during pregnancy.
Description of the Method for Evaluating Digital Endpoints in Alzheimer Disease Study: Protocol for an Exploratory, Cross-sectional Study
BACKGROUND: More sensitive and less burdensome efficacy end points are urgently needed to improve the effectiveness of clinical drug development for Alzheimer disease (AD). Although conventional end points lack sensitivity, digital technologies hold promise for amplifying the detection of treatment signals and capturing cognitive anomalies at earlier disease stages. Using digital technologies and combining several test modalities allow for the collection of richer information about cognitive and functional status, which is not ascertainable via conventional paper-and-pencil tests. OBJECTIVE: This study aimed to assess the psychometric properties, operational feasibility, and patient acceptance of 10 promising technologies that are to be used as efficacy end points to measure cognition in future clinical drug trials. METHODS: The Method for Evaluating Digital Endpoints in Alzheimer Disease study is an exploratory, cross-sectional, noninterventional study that will evaluate the ability of 10 digital technologies to accurately classify participants into 4 cohorts according to the severity of cognitive impairment and dementia. Moreover, this study will assess the psychometric properties of each of the tested digital technologies, including the acceptable range (to assess ceiling and floor effects), concurrent validity (to correlate digital outcome measures with traditional paper-and-pencil tests in AD), reliability (to compare test and retest), and responsiveness (to evaluate the sensitivity to change in a mild cognitive challenge model). This study included 50 eligible male and female participants (aged between 60 and 80 years), of whom 13 (26%) were amyloid-negative, cognitively healthy participants (controls); 12 (24%) were amyloid-positive, cognitively healthy participants (presymptomatic); 13 (26%) had mild cognitive impairment (predementia); and 12 (24%) had mild AD (mild dementia). The study involved 4 in-clinic visits. During the initial visit, all participants completed all conventional paper-and-pencil assessments. During the following 3 visits, the participants underwent a series of novel digital assessments. RESULTS: Participant recruitment and data collection began in June 2020 and continued until June 2021; hence, data collection occurred during the COVID-19 pandemic. Data were successfully collected from all digital technologies to evaluate statistical and operational performance and patient acceptance. This paper reports the baseline demographics and characteristics of the studied population as well as the study's progress during the pandemic. CONCLUSIONS: This study was designed to generate feasibility insights and validation data to help advance novel digital technologies in clinical drug development.
The lessons learned from this study will help guide future methods for assessing novel digital technologies and inform clinical drug trials in early AD, aiming to enhance clinical end point strategies with digital technologies. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/35442.
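Since the protocol evaluates each technology's psychometric properties, the sketch below illustrates two of the named analyses, test-retest reliability and concurrent validity, as simple correlations on made-up scores. The data, variable names, and choice of Pearson correlation are illustrative assumptions, not the study's actual statistical plan.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scores for 50 participants: a digital measure at two visits,
# plus a conventional paper-and-pencil score (all made-up data).
digital_visit1 = rng.normal(50, 10, size=50)
digital_visit2 = digital_visit1 + rng.normal(0, 3, size=50)   # retest with noise
paper_pencil = 0.8 * digital_visit1 + rng.normal(0, 6, size=50)

# Test-retest reliability: agreement between the two administrations.
test_retest_r = np.corrcoef(digital_visit1, digital_visit2)[0, 1]

# Concurrent validity: correlation with the conventional reference test.
concurrent_r = np.corrcoef(digital_visit1, paper_pencil)[0, 1]

print(f"test-retest r = {test_retest_r:.2f}, concurrent validity r = {concurrent_r:.2f}")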
Evolution of coastal zone vulnerability to marine inundation in a global change context. Application to Languedoc-Roussillon (France)
The coastal system is likely to face increasing coastal risk in a global change context. Managing it requires considering those risks within a holistic approach to the different vulnerability components of the coastal zone, by improving knowledge of hazard and exposure and by analyzing and quantifying present-day and future territorial vulnerability. The ANR/VMC2007/MISEEVA project (2008-2011) applied this approach to the Languedoc-Roussillon region of France. The MISEEVA approach relies on several scenarios for 2030 and 2100 in terms of meteorology (the driver of coastal hazard) and sea level rise, while also considering trends in demography and economy and possible adaptation strategies. Hazard was modeled (with SWAN, MARS and SURFWB) on the basis of the present-day situation, sea level rise hypotheses, and existing or modeled data on extreme meteorological drivers. This made it possible to assess the range of possible surges and to map coastal zone exposure to:
- permanent inundation (considering sea level rise in 2030 and 2100),
- recurrent inundation (considering sea level rise and extreme tidal range),
- exceptional inundation (adding extreme storm surge to sea level rise and tidal range).
In 2030, exposure will be comparable to present-day exposure; in 2100, extreme conditions will affect a larger zone. The present-day social and economic components of the coastal zone were analyzed in terms of vulnerability and potential damage. Adaptation capacity was assessed through public surveys and interviews with stakeholders and policy makers, based on existing planning documents. The knowledge of the present-day system is then compared with the management strategies that could be chosen in the future, so as to anticipate how vulnerability to marine inundation would evolve under these possible strategies.
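To make the three exposure scenarios above concrete, the sketch below combines hypothetical sea level rise, tidal range, and storm surge contributions into the corresponding still-water reference levels. All numbers and variable names are illustrative placeholders and are not taken from the MISEEVA project.

# Illustrative still-water levels (in metres) for the three inundation scenarios.
# All values are made-up placeholders, not MISEEVA results.
sea_level_rise = {"2030": 0.2, "2100": 0.6}   # hypothetical SLR hypotheses
extreme_tidal_range = 0.3                      # hypothetical high-tide contribution
extreme_storm_surge = 1.0                      # hypothetical surge contribution

for horizon, slr in sea_level_rise.items():
    permanent = slr                                   # permanent inundation: SLR alone
    recurrent = slr + extreme_tidal_range             # recurrent: SLR + extreme tide
    exceptional = recurrent + extreme_storm_surge     # exceptional: add storm surge
    print(f"{horizon}: permanent={permanent:.1f} m, "
          f"recurrent={recurrent:.1f} m, exceptional={exceptional:.1f} m")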
Search for dark matter produced in association with bottom or top quarks in √s = 13 TeV pp collisions with the ATLAS detector
A search for weakly interacting massive particle dark matter produced in association with bottom or top quarks is presented. Final states containing third-generation quarks and missing transverse momentum are considered. The analysis uses 36.1 fb⁻¹ of proton–proton collision data recorded by the ATLAS experiment at √s = 13 TeV in 2015 and 2016. No significant excess of events above the estimated backgrounds is observed. The results are interpreted in the framework of simplified models of spin-0 dark-matter mediators. For colour-neutral spin-0 mediators produced in association with top quarks and decaying into a pair of dark-matter particles, mediator masses below 50 GeV are excluded assuming a dark-matter candidate mass of 1 GeV and unitary couplings. For scalar and pseudoscalar mediators produced in association with bottom quarks, the search sets limits on the production cross-section of 300 times the predicted rate for mediators with masses between 10 and 50 GeV, assuming a dark-matter mass of 1 GeV and unitary coupling. Constraints on colour-charged scalar simplified models are also presented. Assuming a dark-matter particle mass of 35 GeV, mediator particles with mass below 1.1 TeV are excluded for couplings yielding a dark-matter relic density consistent with measurements.
- …