Coexistence of bounded and unbounded geometry for area-preserving maps
The geometry of the period doubling Cantor sets of strongly dissipative
infinitely renormalizable H\'enon-like maps has been shown to be unbounded by
M. Lyubich, M. Martens and A. de Carvalho, although the measure of unbounded
"spots" in the Cantor set has been demonstrated to be zero.
We show that an even more extreme situation takes place for infinitely
renormalizable area-preserving H\'enon-like maps: bounded and unbounded
geometries coexist, with both phenomena occurring on subsets of positive
measure in the Cantor sets.
Polyphenolic C-glucosidic ellagitannins present in oak-aged wine inhibit HIV-1 nucleocapsid protein
HIV-1 nucleocapsid protein (NC) is a nucleic acid chaperone implicated in several steps of the virus replication cycle and an attractive new target for drug development. In reverse transcription, NC destabilizes nucleic acid secondary structures and catalyzes the annealing of HIV-1 TAR RNA to its DNA copy (cTAR) to form the heteroduplex TAR/cTAR. A screening program led to the identification of the plant polyphenols acutissimins A and B as potent inhibitors of NC in different assays. These two flavano-ellagitannins, which are found in wine aged in oak barrels, exhibited different mechanisms of protein inhibition and higher potency relative to their epimers, epiacutissimins A and B, and to simpler structures, notably hydrolytic fragments and metabolites thereof.
Clinical and biological factors with prognostic value in acute pancreatitis
Acute pancreatitis is an acute inflammatory process of the pancreas, which can remain localized at the level of the gland or can extend to the peripancreatic and retroperitoneal tissues. The use and interpretation of paraclinical examinations at the onset can predict the form of evolution of acute pancreatitis (mild or severe). Depending on the evolution, these data are useful in determining the type of surgical intervention that might be necessary based on severity.
We present a retrospective study of 118 patients diagnosed with and hospitalized for acute pancreatitis during 2016-2020 in the Surgery I section of the Sibiu County Emergency Clinical Hospital. Several parameters were recorded at admission, including age, sex, environment of origin, etiology of pancreatitis, and clinical signs, together with biochemical parameters repeated at 24 and 72 hours and at discharge. We also review the surgeries performed depending on the severity of pancreatitis, specifying their complications.
Sparse Fine-tuning for Inference Acceleration of Large Language Models
We consider the problem of accurate sparse fine-tuning of large language
models (LLMs), that is, fine-tuning pretrained LLMs on specialized tasks, while
inducing sparsity in their weights. On the accuracy side, we observe that
standard loss-based fine-tuning may fail to recover accuracy, especially at
high sparsities. To address this, we perform a detailed study of
distillation-type losses, identifying an L2-based distillation approach we term
SquareHead, which enables accurate recovery even at higher sparsities, across
all model types. On the practical efficiency side, we show that sparse LLMs can
be executed with speedups by taking advantage of sparsity, for both CPU and GPU
runtimes. While the standard approach is to leverage sparsity for computational
reduction, we observe that in the case of memory-bound LLMs sparsity can also
be leveraged for reducing memory bandwidth. We exhibit end-to-end results
showing speedups due to sparsity, while recovering accuracy, on T5 (language
translation), Whisper (speech translation), and an open GPT-type model (MPT, for text
generation). For MPT text generation, we show for the first time that sparse
fine-tuning can reach 75% sparsity without accuracy drops, provide notable
end-to-end speedups for both CPU and GPU inference, and highlight that sparsity
is also compatible with quantization approaches. Models and software for
reproducing our results are provided in Section 6.
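The layer-wise L2 distillation idea behind SquareHead can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; in particular, the per-layer normalization by teacher feature magnitude is an assumption about the formulation.

```python
import numpy as np

def squarehead_loss(student_feats, teacher_feats, eps=1e-6):
    # Layer-wise L2 (MSE) distillation in the spirit of SquareHead: each
    # student layer's features are pulled toward the corresponding teacher
    # layer's features. Each layer's loss is normalized by the teacher
    # feature magnitude so layers of different scales contribute comparably
    # (the normalization is our assumption, not necessarily the paper's
    # exact formulation).
    per_layer = []
    for s, t in zip(student_feats, teacher_feats):
        per_layer.append(np.mean((s - t) ** 2) / (np.mean(t ** 2) + eps))
    return sum(per_layer) / len(per_layer)
```

In practice this loss would be added to (or replace) the standard task loss during sparse fine-tuning, with `student_feats`/`teacher_feats` taken from matching transformer layers.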
Optical fibers and optical fiber sensors used in radiation monitoring
By their very nature, optical fibers and, by extension, intrinsic and extrinsic optical fiber-based
sensors are promising devices to be used in very different and complex environments
considering their characteristics such as: capabilities to work under strong electromagnetic
fields; possibility to carry multiplexed signals (time, wavelength multiplexing); small size
and low mass; ability to handle multi-parameter measurements in distributed configuration;
possibility to monitor sites far away from the controller; their ability to be incorporated
into the monitored structure; wide bandwidth for communication applications. In the case
of the optical fibers, the possibility to be incorporated into various types of sensors and
actuators, free of additional hazards (i.e. fire, explosion), made them promising candidates
to operate in special or adverse conditions as those required by space or terrestrial
applications (spacecraft on board instrumentation, nuclear facilities, future fusion
installations, medical treatment and diagnostics premises, medical equipment sterilization).
Major advantages to be considered in using optical fibers/optical fiber sensors for radiation
detection and monitoring refer to: real-time interrogation capabilities, possibility to design
spatially resolved solutions (the capability to build array detectors), in-vivo investigations
(i.e. inside-the-body measurements).
Empirical Analysis Of International Mutual Fund Performance
Since 1990 there has been tremendous growth in investment in international mutual funds. This growth is likely to continue as the domestic stock market cools down and more U.S. investors seek higher returns as well as the diversification benefits of foreign assets. Investors are also attracted to international funds in the belief that such funds earn abnormally high returns because of the previous relative inefficiency in those markets. This study examines annual risk-adjusted returns, using Sharpe's Index, for ten portfolios of international mutual funds for the period September 2000 through September 2006. The international funds were analyzed by combining the funds into individual portfolios based on sector, geographic region, and company size. The benchmark for comparison was the U.S. mutual fund performance reported by Morningstar. The risk-adjusted returns were then determined and compared to each other and to the U.S. market. During this period, nine out of ten of the international mutual fund portfolios outperformed the U.S. market. The portfolio that contained all International Mutual Funds (IMF) significantly outperformed, on a risk-adjusted basis, the fund that was made up of all of the U.S. stock mutual funds (All U.S. Stock Funds, USSF). Additionally, the Foreign Small Value (FSV), Foreign Small Growth (FSG), Emerging Markets (EM), Latin America (LA), and Pacific Asia without Japan (PA-J) portfolios all had average annual returns (not adjusted for risk) that exceeded the USSF's returns by more than 10 percent.
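The risk-adjusted comparison above rests on Sharpe's Index, which can be computed as a minimal sketch; the study's exact return frequency and annualization conventions are not specified here, and the sample returns below are hypothetical.

```python
import statistics

def sharpe_index(returns, risk_free_rate):
    # Sharpe's Index: (mean portfolio return - risk-free rate) divided by
    # the standard deviation of returns. `returns` is a list of periodic
    # returns; the result is in the same periodicity as the inputs.
    mean_r = statistics.mean(returns)
    sd = statistics.stdev(returns)  # sample standard deviation
    return (mean_r - risk_free_rate) / sd
```

For example, returns of 10%, 12%, and 8% against a 3% risk-free rate give a mean excess return of 7% over a 2% standard deviation, i.e. a Sharpe Index of 3.5.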
Using the WinQSB Software in Critical Path Analysis
The present paper applies submodules of the PERT/CPM module of the WinQSB software to solve scheduling problems. Keywords: CPM, PERT, critical path, normal duration, crash duration.
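The critical-path computation that WinQSB's CPM submodule automates can be sketched directly: a forward pass gives earliest finish times, a backward pass gives latest finish times, and zero-slack activities form the critical path. The activity names and durations below are hypothetical.

```python
def critical_path(activities):
    # activities: {name: (duration, [predecessor names])}; assumed to be a
    # valid DAG with positive durations.
    # Forward pass: earliest finish time of each activity.
    earliest = {}
    while len(earliest) < len(activities):
        for name, (dur, preds) in activities.items():
            if name not in earliest and all(p in earliest for p in preds):
                earliest[name] = max((earliest[p] for p in preds), default=0) + dur
    makespan = max(earliest.values())
    # Backward pass: latest finish times, processed in reverse topological
    # order (decreasing earliest finish works for positive durations).
    latest = {}
    for name in sorted(earliest, key=earliest.get, reverse=True):
        succs = [n for n, (_, preds) in activities.items() if name in preds]
        latest[name] = min((latest[s] - activities[s][0] for s in succs),
                           default=makespan)
    # Critical activities have zero slack (earliest finish == latest finish).
    critical = [n for n in earliest if earliest[n] == latest[n]]
    return makespan, critical
```

For a network A(3) preceding B(2) and C(4), both preceding D(1), the makespan is 8 and the critical path is A-C-D.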
Accurate Neural Network Pruning Requires Rethinking Sparse Optimization
Obtaining versions of deep neural networks that are both highly-accurate and
highly-sparse is one of the main challenges in the area of model compression,
and several high-performance pruning techniques have been investigated by the
community. Yet, much less is known about the interaction between sparsity and
the standard stochastic optimization techniques used for training sparse
networks, and most existing work uses standard dense schedules and
hyperparameters for training sparse networks. In this work, we examine the
impact of high sparsity on model training using the standard computer vision
and natural language processing sparsity benchmarks. We begin by showing that
using standard dense training recipes for sparse training is suboptimal, and
results in under-training. We provide new approaches for mitigating this issue
for both sparse pre-training of vision models (e.g. ResNet50/ImageNet) and
sparse fine-tuning of language models (e.g. BERT/GLUE), achieving
state-of-the-art results in both settings in the high-sparsity regime, and
providing detailed analyses for the difficulty of sparse training in both
scenarios. Our work sets a new threshold in terms of the accuracies that can be
achieved under high sparsity, and should inspire further research into
improving sparse model training, not only to reach higher accuracies under
high sparsity but also to do so efficiently.
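As a point of reference for the sparsity levels discussed above, one-shot global magnitude pruning, a standard baseline in the compression literature rather than the paper's training recipe, can be sketched as:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    # Zero out the fraction `sparsity` of weights with the smallest absolute
    # value. Ties at the threshold may prune slightly more than requested;
    # production pruners break ties explicitly.
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask
```

The paper's point is precisely that applying such masks and then training with dense-era schedules under-trains the network; the mask is only the starting point for sparse optimization.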
Neural reactivation during human sleep
Sleep promotes memory consolidation: the process by which newly acquired memories are stabilised, strengthened, and integrated into long-term storage. Pioneering research in rodents has revealed that memory reactivation in sleep is a primary mechanism underpinning sleep's beneficial effect on memory. In this review, we consider evidence for memory reactivation processes occurring in human sleep. Converging lines of research support the view that memory reactivation occurs during human sleep, and is functionally relevant for consolidation. Electrophysiology studies have shown that memory reactivation is tightly coupled to the cardinal neural oscillations of non-rapid eye movement sleep, namely slow oscillation-spindle events. In addition, functional imaging studies have found that brain regions recruited during learning become reactivated during post-learning sleep. In sum, the current evidence paints a strong case for a mechanistic role of neural reactivation in promoting memory consolidation during human sleep.
The estimation of multivariate extreme value models from choice-based samples
We consider an estimation procedure for discrete choice models in general and Multivariate Extreme Value (MEV) models in particular. It is based on a pseudo-likelihood function, generalizing the Conditional Maximum Likelihood (CML) estimator by Manski and McFadden (1981) and the Weighted Exogenous Sample Maximum Likelihood (WESML) estimator by Manski and Lerman (1977). We show that the property of Multinomial Logit (MNL) models, that consistent estimates of all parameters but the constants can be obtained from an Exogenous Sample Maximum Likelihood (ESML) estimation, does not hold in general for MEV models. We propose a new estimator for the more general case. This new estimator estimates the selection bias directly from the data. We illustrate the new estimator on pseudo-synthetic and real data.
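The WESML idea referenced above can be sketched for a simple MNL model: each observation's log-likelihood contribution is reweighted by the ratio of the chosen alternative's population share to its share in the choice-based sample. The alternative names, utilities, and shares below are hypothetical, and utilities are held fixed across observations for brevity.

```python
import math

def wesml_loglik(observations, utilities, pop_shares, sample_shares):
    # Weighted Exogenous Sample Maximum Likelihood (Manski & Lerman, 1977):
    # weight each observation by Q(i)/H(i), where Q(i) is the chosen
    # alternative's population share and H(i) its share in the
    # choice-based sample. `utilities` maps alternative -> systematic
    # utility (identical across observations in this simplified sketch).
    denom = sum(math.exp(v) for v in utilities.values())
    total = 0.0
    for chosen in observations:
        p = math.exp(utilities[chosen]) / denom  # MNL choice probability
        total += (pop_shares[chosen] / sample_shares[chosen]) * math.log(p)
    return total
```

Maximizing this weighted log-likelihood over the utility parameters recovers consistent estimates under choice-based sampling for MNL; the abstract's point is that the analogous shortcut does not carry over to general MEV models.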
…