Spinal column shortening versus revision detethering for recurrent adult tethered cord syndrome: a preliminary comparison of perioperative and clinical outcomes.
OBJECTIVE: Recurrent tethered cord syndrome (TCS), believed to result from tension on the distal portion of the spinal cord, causes a constellation of neurological symptoms. Detethering surgery has been the traditional treatment for TCS. However, in cases of recurrent TCS, there is a risk of new neurological deficits developing, and subsequent retethering is difficult to prevent. Spinal column shortening has been proposed as an alternative technique to reduce the tension on the spinal cord without incurring the morbidity of revision surgery on the spinal cord. The authors compared the perioperative outcomes and morbidity of patients who were treated with one or the other procedure.
METHODS: The medical records of 16 adult patients with recurrent TCS who were treated between 2005 and 2018 were reviewed. Eight patients underwent spinal column shortening, and 8 patients underwent revision detethering surgery. Patient demographics, clinical outcomes, and perioperative factors were analyzed. The authors include a video to illustrate their technique of spinal column shortening.
RESULTS: Within the spinal column shortening group, no patients experienced any complications, and all 8 patients either improved or stabilized with regard to lower-extremity and bowel and bladder function. Within the revision detethering group, 2 patients had worsening of lower-extremity strength, 3 patients had worsening of bowel and bladder function, and 1 patient had improvement in bladder function. Also, 3 patients had wound-related complications. The median estimated blood loss was 731 ml in the shortening group and 163 ml in the revision detethering group. The median operative time was 358 minutes in the shortening group and 226 minutes in the revision detethering group.
CONCLUSIONS: Clinical outcomes were comparable between the groups, but none of the spinal column shortening patients experienced worsening, whereas 3 of the revision detethering patients did and also experienced wound-related complications. Although the operative times and blood loss were higher in the spinal column shortening group, this procedure may be an alternative to revision detethering in extremely scarred or complex wound revision cases.
Chinese Open Instruction Generalist: A Preliminary Release
Instruction tuning is widely recognized as a key technique for building
generalist language models, which has attracted the attention of researchers
and the public with the release of InstructGPT~\citep{ouyang2022training} and
ChatGPT\footnote{\url{https://chat.openai.com/}}. Despite impressive progress
in English-oriented large language models (LLMs), it remains under-explored
whether English-based foundation LLMs, given well-designed instruction tuning,
can perform as well on multilingual tasks as on English tasks, and how the
corpora needed for such tuning can be constructed. To remedy this gap, we
propose this project as an attempt to create a Chinese instruction dataset
with various methods adapted to the intrinsic characteristics of four
sub-tasks. We collect around 200k Chinese instruction tuning samples,
which have been manually checked to guarantee high quality. We also summarize
the existing English and Chinese instruction corpora and briefly describe some
potential applications of the newly constructed Chinese instruction corpora.
The resulting \textbf{C}hinese \textbf{O}pen \textbf{I}nstruction
\textbf{G}eneralist (\textbf{COIG}) corpora are available on
Hugging Face\footnote{\url{https://huggingface.co/datasets/BAAI/COIG}} and
GitHub\footnote{\url{https://github.com/FlagOpen/FlagInstruct}}, and will be
continuously updated.
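The manual quality checking mentioned above can be partly automated before human review. A minimal sketch of such a filter (hypothetical field names and rules, not the authors' actual pipeline):

```python
def keep_sample(sample, seen):
    """Minimal quality filter for instruction-tuning samples:
    drop empty fields and exact duplicates (hypothetical schema)."""
    instruction = sample.get("instruction", "").strip()
    output = sample.get("output", "").strip()
    if not instruction or not output:
        return False            # missing instruction or answer
    key = (instruction, output)
    if key in seen:
        return False            # exact duplicate
    seen.add(key)
    return True

samples = [
    {"instruction": "翻译成英文:你好", "output": "Hello"},  # "Translate to English: hello"
    {"instruction": "翻译成英文:你好", "output": "Hello"},  # duplicate
    {"instruction": "", "output": "missing instruction"},
]
seen = set()
kept = [s for s in samples if keep_sample(s, seen)]
```

Checks like these only catch mechanical defects; semantic quality still requires the manual pass the abstract describes.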
Ethyne Reducing Metal-Organic Frameworks to Control Fabrications of Core/shell Nanoparticles as Catalysts
An approach using cobalt metal-organic frameworks (Co-MOF) as precursors is established for the fabrication of cobalt nanoparticles in porous carbon shells (core/shell Co@C). Chemical vapor deposition of ethyne is used to control the reduction of cobalt nanoclusters in the MOF and the spontaneous formation of the porous carbon shells. The metallic cobalt cores formed are up to 4–6 nm, with the crystal phase varying between hexagonally close-packed (hcp) and face-centred cubic (fcc). The porous carbon shells change from amorphous to graphene as the ethyne deposition temperature increases from 400 to 600 °C. The core/shell Co@C nanoparticles exhibit high catalytic activity in selectively converting syngas (cobalt-time yield, CTY: 254.1–312.1 μmol_CO·g_Co^-1·s^-1) into hydrocarbons (4.0–5.2 g_HC·g_cat^-1·h^-1) at 260 °C. As well as the crystal size and phase, the coordination numbers of cobalt to oxygen and to other cobalt atoms on the surface of the cobalt nanoparticles, and the permeability of the porous carbon shell, have been related to the catalytic performance in Fischer–Tropsch synthesis (FTS) reactions.
LyricWhiz: Robust Multilingual Zero-shot Lyrics Transcription by Whispering to ChatGPT
We introduce LyricWhiz, a robust, multilingual, and zero-shot automatic
lyrics transcription method achieving state-of-the-art performance on various
lyrics transcription datasets, even in challenging genres such as rock and
metal. Our novel, training-free approach utilizes Whisper, a weakly supervised
robust speech recognition model, and GPT-4, today's most performant chat-based
large language model. In the proposed method, Whisper functions as the "ear" by
transcribing the audio, while GPT-4 serves as the "brain," acting as an
annotator with a strong performance for contextualized output selection and
correction. Our experiments show that LyricWhiz significantly reduces Word
Error Rate compared to existing methods in English and can effectively
transcribe lyrics across multiple languages. Furthermore, we use LyricWhiz to
create the first publicly available, large-scale, multilingual lyrics
transcription dataset under a CC BY-NC-SA license, based on
MTG-Jamendo, and offer a human-annotated subset for noise level estimation and
evaluation. We anticipate that our proposed method and dataset will advance the
development of multilingual lyrics transcription, a challenging and emerging
task.
Comment: 9 pages, 2 figures, 5 tables, accepted by ISMIR 202
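The "ear"/"brain" split above can be approximated without an LLM in the loop: generate several candidate transcriptions and keep the one that agrees most with the others. A minimal sketch of such a consensus fallback, with a string-similarity vote standing in for GPT-4's contextual selection (illustrative only, not the LyricWhiz method):

```python
import difflib

def select_transcription(candidates):
    """Pick the candidate most similar, on average, to all the others.

    A hypothetical consensus fallback; in LyricWhiz proper, GPT-4
    performs this selection with access to lyrical context.
    """
    def avg_similarity(i):
        return sum(
            difflib.SequenceMatcher(None, candidates[i], other).ratio()
            for j, other in enumerate(candidates)
            if j != i
        ) / (len(candidates) - 1)
    return max(range(len(candidates)), key=avg_similarity)

candidates = [
    "when the night falls down",
    "when the night falls down",
    "one that nine fall stound",  # a noisier decoding of the same line
]
best = candidates[select_transcription(candidates)]
```

A similarity vote rewards agreement but cannot fix errors shared by all candidates, which is where a contextual model earns its keep.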
On the Effectiveness of Speech Self-supervised Learning for Music
Self-supervised learning (SSL) has shown promising results in various speech
and natural language processing applications. However, its efficacy in music
information retrieval (MIR) remains largely unexplored. While previous
SSL models pre-trained on music recordings have mostly been closed-sourced,
recent speech models such as wav2vec2.0 have shown promise in music modelling.
Nevertheless, research exploring the effectiveness of applying speech SSL
models to music recordings has been limited. We explore the music adaptation of
SSL with two distinctive speech-related models, data2vec1.0 and HuBERT,
referring to them as music2vec and musicHuBERT, respectively. We train SSL
models with 95M parameters under various pre-training configurations and
systematically evaluate performance on 13 different MIR tasks.
Our findings suggest that training with music data can generally improve
performance on MIR tasks, even when models are trained using paradigms designed
for speech. However, we identify limitations of these existing
speech-oriented designs, especially in modelling polyphonic information. Based
on the experimental results, we also give empirical suggestions for designing
future musical SSL strategies and paradigms.
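Evaluations like those above typically freeze the SSL backbone, mean-pool its frame-level features into a clip-level vector, and fit a lightweight probe per MIR task. A minimal sketch of that probing protocol on synthetic features (standing in for real model outputs; not the paper's evaluation code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for frozen SSL features: 100 clips, 50 frames, 64 dims.
n_clips, n_frames, dim, n_classes = 100, 50, 64, 4
labels = rng.integers(0, n_classes, size=n_clips)
class_means = rng.normal(size=(n_classes, dim))
frames = class_means[labels][:, None, :] \
    + 0.5 * rng.normal(size=(n_clips, n_frames, dim))

# Mean-pool frame features to one vector per clip, as in typical MIR probing.
X = frames.mean(axis=1)

# Closed-form ridge-regression probe onto one-hot targets (backbone frozen).
Y = np.eye(n_classes)[labels]
W = np.linalg.solve(X.T @ X + 1e-3 * np.eye(dim), X.T @ Y)
pred = (X @ W).argmax(axis=1)
accuracy = (pred == labels).mean()
```

Because only the probe is trained, scores reflect what the frozen representation already encodes about each task.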
Relationship Between Outdoor Air Pollutant Exposure and Premature Delivery in China: A Systematic Review and Meta-Analysis
Objective: Preterm birth (PTB) is considered a public health problem and one of the main risk factors related to the global disease burden. This study aims to explore the influence of exposure to major air pollutants at different stages of pregnancy on PTB.
Methods: Studies on the relationship between air pollutants and PTB in China were collected from cohort studies and case-control studies published before 30 April 2022. Meta-analysis was carried out with STATA 15.0 software.
Results: A total of 2,115 papers were retrieved, of which 18 met the inclusion criteria. The comprehensive effects of pollutant exposure on PTB were calculated. PM2.5 exposure during the entire pregnancy and O3 exposure during the third trimester were positively associated with preterm birth. Every 10 μg/m3 increase in the average concentration of PM2.5 during the whole pregnancy increases the risk of premature delivery by 4%, and every 10 μg/m3 increase in the average concentration of O3 in the third trimester increases the risk by 1%.
Conclusion: Exposure to PM2.5 during the entire pregnancy and to O3 in the third trimester is associated with an increased risk of preterm birth.
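Pooled estimates like the 4% per 10 μg/m3 figure are typically obtained by inverse-variance weighting of per-study effects on the log scale. A minimal fixed-effect sketch with hypothetical study values (illustrative numbers, not the studies pooled in this review):

```python
import math

# Hypothetical per-study relative risks (per 10 ug/m3 PM2.5) with 95% CIs.
studies = [
    (1.03, 1.00, 1.06),
    (1.05, 1.01, 1.09),
    (1.04, 0.99, 1.10),
]

# Fixed-effect inverse-variance pooling on the log scale:
# SE is recovered from the CI width, weight = 1 / SE^2.
num = den = 0.0
for rr, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    w = 1.0 / se ** 2
    num += w * math.log(rr)
    den += w
pooled_rr = math.exp(num / den)
```

A random-effects model (as software like STATA's `metan` offers) would additionally add a between-study variance term to each weight; the fixed-effect version above is the simplest case.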
MERT: Acoustic Music Understanding Model with Large-Scale Self-supervised Training
Self-supervised learning (SSL) has recently emerged as a promising paradigm
for training generalisable models on large-scale data in the fields of vision,
text, and speech. Although SSL has been proven effective in speech and audio,
its application to music audio has yet to be thoroughly explored. This is
primarily due to the distinctive challenges associated with modelling musical
knowledge, particularly the tonal and pitched characteristics of music. To
address this research gap, we propose an acoustic Music undERstanding model
with large-scale self-supervised Training (MERT), which incorporates teacher
models to provide pseudo labels for masked language modelling (MLM)-style
acoustic pre-training. In our exploration, we identified a combination of
teacher models that outperforms conventional speech and audio approaches.
This combination includes an acoustic teacher based on
Residual Vector Quantization - Variational AutoEncoder (RVQ-VAE) and a musical
teacher based on the Constant-Q Transform (CQT). These teachers effectively
guide our student model, a BERT-style transformer encoder, to better model
music audio. In addition, we introduce an in-batch noise mixture augmentation
to enhance the representation robustness. Furthermore, we explore a wide range
of settings to overcome the instability in acoustic language model
pre-training, which allows our designed paradigm to scale from 95M to 330M
parameters. Experimental results indicate that our model can generalise and
perform well on 14 music understanding tasks and attains state-of-the-art
(SOTA) overall scores. The code and models are online:
https://github.com/yizhilll/MERT
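The MLM-style acoustic pre-training described above can be sketched as: mask contiguous spans of frames and train the student to predict teacher-assigned codes at the masked positions. A toy version with a nearest-neighbour quantizer standing in for the RVQ-VAE acoustic teacher (not the actual MERT setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_spans(n_frames, span=5, ratio=0.3):
    """Choose masked frame indices in contiguous spans (MLM-style)."""
    masked = np.zeros(n_frames, dtype=bool)
    while masked.mean() < ratio:
        start = rng.integers(0, n_frames - span)
        masked[start:start + span] = True
    return masked

# Stand-in teacher: assign each frame its nearest codebook entry, a
# simplified proxy for RVQ-VAE acoustic targets (hypothetical sizes).
codebook = rng.normal(size=(16, 32))      # 16 codes, 32-dim frames
frames = rng.normal(size=(200, 32))       # one "clip" of 200 frames
dists = ((frames[:, None, :] - codebook[None]) ** 2).sum(axis=-1)
targets = dists.argmin(axis=1)

mask = mask_spans(len(frames))
# The student (a BERT-style transformer encoder) would be trained to
# predict targets[mask] from the unmasked context frames.
```

In MERT proper, a CQT-based musical teacher contributes a second reconstruction objective alongside these discrete acoustic targets.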
A New 4D Hyperchaotic System and Its Generalized Function Projective Synchronization
A new four-dimensional hyperchaotic system is investigated. Numerical and analytical studies are carried out on its basic dynamical properties, such as equilibrium points, Lyapunov exponents, Poincaré maps, and chaotic dynamical behaviors. We verify the realizability of the new system in an electronic circuit using Multisim software. Furthermore, a generalized function projective synchronization scheme for two different hyperchaotic systems with uncertain parameters is proposed, which includes some existing projective synchronization schemes, such as generalized projective synchronization and function projective synchronization, as special cases. Based on Lyapunov stability theory, a controller with parameter update laws is designed to realize synchronization. Using this controller, we realize the synchronization between the hyperchaotic Chen system and the new system to verify the validity and feasibility of our method.
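Numerical studies of such systems typically start from fixed-step integration of the ODEs, from which Lyapunov exponents and Poincaré maps are then computed. Since the paper's equations are not reproduced in this abstract, the sketch below integrates the classical hyperchaotic Rössler system as a stand-in:

```python
import numpy as np

def rk4(f, x0, dt, steps):
    """Fixed-step 4th-order Runge-Kutta integration of dx/dt = f(x)."""
    traj = np.empty((steps + 1, len(x0)))
    traj[0] = x0
    x = np.asarray(x0, dtype=float)
    for i in range(steps):
        k1 = f(x)
        k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2)
        k4 = f(x + dt * k3)
        x = x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        traj[i + 1] = x
    return traj

def hyper_rossler(s, a=0.25, b=3.0, c=0.5, d=0.05):
    """Hyperchaotic Rossler system (a 4D example, not the paper's system)."""
    x, y, z, w = s
    return np.array([-y - z, x + a * y + w, b + x * z, -c * z + d * w])

traj = rk4(hyper_rossler, [-10.0, -6.0, 0.0, 10.0], dt=0.01, steps=500)
```

The same integrator can drive a coupled drive-response pair once a synchronization controller is added to the response system's right-hand side.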