RACOFI: A Rule-Applying Collaborative Filtering System
In this paper we give an overview of the RACOFI (Rule-Applying Collaborative Filtering) multidimensional rating system and its related technologies. This will be exemplified with RACOFI Music, an implemented collaboration agent that assists on-line users in the rating and recommendation of audio (Learning) Objects. It lets users rate contemporary Canadian music in the five dimensions of impression, lyrics, music, originality, and production. The collaborative filtering algorithms STI Pearson, STIN2, and the Per Item Average algorithms are then employed together with RuleML-based rules to recommend music objects that best match user queries. RACOFI has been on-line since August 2003 at http://racofi.elg.ca.
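Of the algorithms the abstract names, the Per Item Average baseline is the simplest: predict an item's score in each rating dimension as the mean of all users' scores for that item. Below is a minimal sketch under an assumed tuple-based data layout (the names and data are hypothetical, not from the RACOFI implementation):

```python
# Hedged sketch of the "Per Item Average" collaborative-filtering baseline:
# predict an (item, dimension) rating as the mean of all observed ratings
# for that pair. Data layout and names are illustrative assumptions.

from collections import defaultdict


def per_item_average(ratings):
    """ratings: list of (user, item, dimension, score) tuples."""
    totals = defaultdict(lambda: [0.0, 0])  # (item, dim) -> [sum, count]
    for _user, item, dim, score in ratings:
        entry = totals[(item, dim)]
        entry[0] += score
        entry[1] += 1
    return {key: s / n for key, (s, n) in totals.items()}


ratings = [
    ("alice", "song1", "lyrics", 4), ("bob", "song1", "lyrics", 2),
    ("alice", "song1", "music", 5), ("bob", "song2", "music", 3),
]
preds = per_item_average(ratings)
```

The Pearson-based variants (STI Pearson, STIN2) would instead weight other users' ratings by their correlation with the querying user, but the per-item mean already serves as the fallback prediction when no correlated users exist.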
Local Linear Convergence of ISTA and FISTA on the LASSO Problem
We establish local linear convergence bounds for the ISTA and FISTA
iterations on the model LASSO problem. We show that FISTA can be viewed as an
accelerated ISTA process. Using a spectral analysis, we show that, when close
enough to the solution, both iterations converge linearly, but FISTA slows down
compared to ISTA, making it advantageous to switch to ISTA toward the end of
the iteration process. We illustrate the results with some synthetic numerical
examples.
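The two iterations compared in the abstract can be written in a few lines of NumPy; the sketch below uses a toy LASSO instance and the standard step size 1/L (the problem data and parameters are illustrative, not from the paper):

```python
# Hedged sketch of ISTA and FISTA on the LASSO problem
#   min_x 0.5*||A x - b||^2 + lam*||x||_1.
# FISTA is ISTA plus a momentum (extrapolation) step on the iterates.

import numpy as np


def soft(x, t):
    # Soft-thresholding: proximal operator of t*||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)


def ista(A, b, lam, steps):
    t = 1.0 / np.linalg.norm(A, 2) ** 2  # step 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = soft(x - t * A.T @ (A @ x - b), t * lam)
    return x


def fista(A, b, lam, steps):
    t = 1.0 / np.linalg.norm(A, 2) ** 2
    x = z = np.zeros(A.shape[1])
    s = 1.0
    for _ in range(steps):
        x_new = soft(z - t * A.T @ (A @ z - b), t * lam)
        s_new = (1 + np.sqrt(1 + 4 * s * s)) / 2
        z = x_new + ((s - 1) / s_new) * (x_new - x)  # momentum step
        x, s = x_new, s_new
    return x


rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = rng.standard_normal(20) * (rng.random(20) < 0.2)  # sparse signal
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_ista = ista(A, b, lam=0.1, steps=500)
x_fista = fista(A, b, lam=0.1, steps=500)
```

The hybrid strategy suggested by the abstract would run `fista` until the iterates stabilize and then finish with plain `ista` steps, exploiting the faster local linear rate of the unaccelerated iteration near the solution.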
Effects of Hearing Aid Amplification on Robust Neural Coding of Speech
Hearing aids are able to restore some hearing abilities for people with auditory impairments, but background noise remains a significant problem. Unfortunately, we know very little about how speech is encoded in the auditory system, particularly in impaired systems with prosthetic amplifiers. There is growing evidence that relative timing in the neural signals (known as spatiotemporal coding) is important for speech perception, but there is little research that relates spatiotemporal coding and hearing aid amplification.
This research uses a combination of computational modeling and physiological experiments to characterize how hearing aids affect vowel coding in noise at the level of the auditory nerve. The results indicate that sensorineural hearing impairment degrades the temporal cues transmitted from the ear to the brain. Two hearing aid strategies (linear gain and wide dynamic-range compression) were used to amplify the acoustic signal. Although appropriate gain was shown to improve temporal coding for individual auditory nerve fibers, neither strategy improved spatiotemporal cues. Previous work has attempted to correct the relative timing by adding frequency-dependent delays to the acoustic signal (e.g., within a hearing aid). We show that, although this strategy can affect the timing of auditory nerve responses, it is unlikely to improve the relative timing as intended.
We have shown that existing hearing aid technologies do not improve some of the neural cues that we think are important for perception, and it is important to understand these limitations. Our hope is that this knowledge can be used to develop new technologies to improve auditory perception in difficult acoustic environments.
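The two amplification strategies the abstract compares differ only in how output level depends on input level, which a small input-output sketch makes concrete (the gain, knee point, and compression ratio below are illustrative values, not those used in the study):

```python
# Hedged sketch contrasting the two amplification strategies named above:
# linear gain applies a fixed dB gain at every input level, while wide
# dynamic-range compression (WDRC) reduces gain as the input level rises
# above a knee point. All parameter values are illustrative assumptions.

def linear_gain_db(level_db, gain_db=20.0):
    # Same gain regardless of input level.
    return level_db + gain_db


def wdrc_db(level_db, gain_db=20.0, knee_db=45.0, ratio=3.0):
    # Below the knee: linear gain. Above it, each extra dB of input
    # yields only 1/ratio dB of output, so loud sounds get less gain.
    if level_db <= knee_db:
        return level_db + gain_db
    return knee_db + gain_db + (level_db - knee_db) / ratio


for lvl in (30.0, 45.0, 60.0, 75.0):
    print(lvl, linear_gain_db(lvl), wdrc_db(lvl))
```

The point of WDRC is to fit a wide acoustic range into the listener's reduced dynamic range; the study's finding is that while such gain can help temporal coding in individual auditory nerve fibers, neither strategy restores the spatiotemporal (cross-fiber timing) cues.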
Bayes beats Cross Validation: Efficient and Accurate Ridge Regression via Expectation Maximization
We present a novel method for tuning the regularization hyper-parameter,
λ, of a ridge regression that is faster to compute than leave-one-out
cross-validation (LOOCV) while yielding estimates of the regression parameters
of equal, or, particularly in the setting of sparse covariates, superior
quality to those obtained by minimising the LOOCV risk. The LOOCV risk can
suffer from multiple and bad local minima for finite n and thus requires the
specification of a set of candidate λ values, which can fail to provide good
solutions. In contrast, we show that the proposed method is guaranteed to find
a unique optimal solution for large enough n, under relatively mild
conditions, without requiring the specification of any difficult-to-determine
hyper-parameters. This is based on a Bayesian formulation of ridge regression
that we prove to have a unimodal posterior for large enough n, allowing for
both the optimal λ and the regression coefficients to be jointly
learned within an iterative expectation maximization (EM) procedure.
Importantly, we show that by utilizing an appropriate preprocessing step, a
single iteration of the main EM loop can be implemented in O(min(n, p))
operations, for input data with n rows and p columns. In contrast, evaluating
a single candidate value of λ using fast LOOCV costs the same O(min(n, p))
operations when using the same preprocessing, so the advantage amounts to an
asymptotic improvement of a factor of l for l candidate values of λ
(in the regime where n and p are large relative to l and the number of
regression targets).
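The joint learning of λ and the coefficients can be sketched with a generic EM scheme for the underlying Bayesian model (Gaussian likelihood with noise variance s2, Gaussian prior with variance t2, so λ = s2/t2). This is a textbook-style EM update for that model, not the paper's exact algorithm, and it does not include the preprocessing that yields the O(min(n, p)) per-iteration cost:

```python
# Hedged sketch: EM for Bayesian ridge regression,
#   y ~ N(X b, s2 I),  b ~ N(0, t2 I),  lam = s2 / t2.
# E-step: Gaussian posterior over b given (s2, t2).
# M-step: closed-form updates of s2 and t2 from the posterior moments.
# Generic scheme for illustration; not the paper's exact procedure.

import numpy as np


def em_ridge(X, y, iters=200):
    n, p = X.shape
    s2, t2 = 1.0, 1.0            # noise and prior variances
    G = X.T @ X
    Xty = X.T @ y
    for _ in range(iters):
        lam = s2 / t2
        C = np.linalg.inv(G + lam * np.eye(p))   # (X'X + lam I)^{-1}
        m = C @ Xty                              # posterior mean of b
        # M-step: posterior covariance is s2 * C, so
        # E||b||^2 = m'm + s2 tr(C) and E||y - Xb||^2 uses tr(G C).
        t2 = (m @ m + s2 * np.trace(C)) / p
        r = y - X @ m
        s2 = (r @ r + s2 * np.trace(G @ C)) / n
    return m, s2 / t2


rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5))
beta = np.array([1.0, -2.0, 0.0, 0.5, 3.0])
y = X @ beta + 0.1 * rng.standard_normal(100)
m, lam = em_ridge(X, y)
```

Unlike LOOCV over a grid of l candidate λ values, this loop needs no candidate set at all: λ is re-estimated each iteration from the current posterior, which is where the factor-of-l advantage claimed in the abstract comes from.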