
    Speech Separation Using Partially Asynchronous Microphone Arrays Without Resampling

    We consider the problem of separating speech sources captured by multiple spatially separated devices, each of which has multiple microphones and samples its signals at a slightly different rate. Most asynchronous array processing methods rely on sample rate offset estimation and resampling, but these offsets can be difficult to estimate if the sources or microphones are moving. We propose a source separation method that requires neither offset estimation nor signal resampling. Instead, we divide the distributed array into several synchronous subarrays. All subarrays are used jointly to estimate the time-varying signal statistics, and those statistics are used to design a separate time-varying spatial filter in each subarray. We demonstrate the method for speech mixtures recorded on both stationary and moving microphone arrays. Comment: To appear at the International Workshop on Acoustic Signal Enhancement (IWAENC 2018)
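    The per-subarray filtering idea can be sketched compactly: each synchronous subarray applies its own mask-based multichannel Wiener filter, with a shared time-frequency mask standing in for the jointly estimated signal statistics. This is a simplified illustration, not the authors' implementation: the paper's filters are time-varying, whereas this sketch uses one filter per frequency, and the function name and mask-based estimator are assumptions.

    ```python
    import numpy as np

    def mask_based_mwf(X, mask):
        """Sketch of per-subarray filtering (assumed interface, not the paper's code).
        X:    (M, T, F) complex STFT of one synchronous subarray's M microphones.
        mask: (T, F) target activity estimate, shared across subarrays as a
              stand-in for the jointly estimated signal statistics.
        Returns a (T, F) estimate of the target at reference microphone 0."""
        M, T, F = X.shape
        Y = np.zeros((T, F), dtype=complex)
        for f in range(F):
            Xf = X[:, :, f]                                  # (M, T) frames at bin f
            Rx = (Xf @ Xf.conj().T) / T                      # mixture spatial covariance
            w = mask[:, f]                                   # per-frame target weights
            Rs = (Xf * w) @ Xf.conj().T / max(w.sum(), 1e-8) # masked target covariance
            h = np.linalg.pinv(Rx) @ Rs[:, 0]                # Wiener filter toward mic 0
            Y[:, f] = h.conj() @ Xf                          # filtered output per frame
        return Y
    ```

    Because each filter only combines channels within one synchronous subarray, running this independently per subarray avoids any cross-device resampling, which is the point of the proposed method.
    
    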

    Acoustic Impulse Responses for Wearable Audio Devices

    We present an open-access dataset of over 8,000 acoustic impulse responses from 160 microphones spread across the body and affixed to wearable accessories. The data can be used to evaluate audio capture and array processing systems using wearable devices such as hearing aids, headphones, eyeglasses, jewelry, and clothing. We analyze the acoustic transfer functions of different parts of the body, measure the effects of clothing worn over microphones, compare measurements from a live human subject to those from a mannequin, and simulate the noise-reduction performance of several beamformers. The results suggest that arrays of microphones spread across the body are more effective than those confined to a single device. Comment: To appear at ICASSP 201
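    As a toy illustration of the beamformer evaluation described above, a delay-and-sum beamformer needs only per-microphone delays, which could be read off the peaks of measured impulse responses. This is a generic sketch under assumed inputs, not code from the dataset's tools; the function name and interface are invented.

    ```python
    import numpy as np

    def delay_and_sum(signals, delays, fs):
        """Align each microphone signal by its delay and average.
        signals: (M, N) array of time-domain recordings.
        delays:  per-microphone delays in seconds (e.g. from the peak
                 of each measured impulse response).
        fs:      sample rate in Hz.
        Uses integer-sample circular shifts (np.roll wraps around),
        which is acceptable for a short illustrative sketch."""
        M, N = signals.shape
        out = np.zeros(N)
        for m in range(M):
            d = int(round(delays[m] * fs))   # delay in samples
            out += np.roll(signals[m], -d)   # advance to undo the delay
        return out / M                       # average the aligned channels
    ```

    Averaging aligned copies of the target preserves it while uncorrelated noise at each microphone is attenuated by roughly the number of channels, which is the effect the beamformer simulations in the paper quantify.
    
    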

    Linear MMSE-Optimal Turbo Equalization Using Context Trees

    Formulations of the turbo equalization approach to iterative equalization and decoding vary greatly when channel knowledge is either partially or completely unknown. Maximum a posteriori probability (MAP) and minimum mean square error (MMSE) approaches leverage channel knowledge to make explicit use of soft information (priors over the transmitted data bits) in a manner that is distinctly nonlinear, appearing either in a trellis formulation (MAP) or inside an inverted matrix (MMSE). To date, nearly all adaptive turbo equalization methods either estimate the channel or use a direct adaptation equalizer in which estimates of the transmitted data are formed from an expressly linear function of the received data and soft information, with this latter formulation being most common. We study a class of direct adaptation turbo equalizers that are both adaptive and nonlinear functions of the soft information from the decoder. We introduce piecewise linear models based on context trees that can adaptively approximate the nonlinear dependence of the equalizer on the soft information, choosing both the partition regions and the locally linear equalizer coefficients in each region independently, with computational complexity that remains of the order of a traditional direct adaptive linear equalizer. This approach is guaranteed to asymptotically achieve the performance of the best piecewise linear equalizer. We quantify the MSE performance of the resulting algorithm and the convergence of its MSE to that of the linear minimum MSE estimator as the depth of the context tree and the data length increase. Comment: Submitted to the IEEE Transactions on Signal Processing
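    A hard-partition simplification of the context-tree idea can be sketched as follows: the sign pattern of the first D soft inputs selects one of 2^D regions, and each region adapts its own linear equalizer via LMS. The paper's algorithm additionally weights all prunings of the tree and carries performance guarantees; this sketch (class name, update rule, and partitioning choice are all assumptions) only illustrates the piecewise-linear structure.

    ```python
    import numpy as np

    class ContextTreeEqualizer:
        """Simplified piecewise-linear adaptive equalizer (illustrative only).
        The sign pattern of the first `depth` soft inputs indexes one of
        2**depth regions; each region holds an independent LMS-adapted
        linear filter, so per-symbol cost stays linear in the tap count."""

        def __init__(self, depth, taps, mu=0.01):
            self.depth = depth
            self.mu = mu                          # LMS step size
            self.w = np.zeros((2 ** depth, taps)) # one filter per region

        def _leaf(self, soft):
            # Map the sign pattern of the soft information to a region index.
            bits = (np.asarray(soft[: self.depth]) > 0).astype(int)
            return int("".join(map(str, bits)), 2)

        def step(self, x, soft, d):
            """x: received samples (taps,); soft: decoder soft information;
            d: training symbol. Returns the estimate and updates only the
            active region's filter."""
            k = self._leaf(soft)
            y = self.w[k] @ x                     # locally linear estimate
            self.w[k] += self.mu * (d - y) * x    # LMS update in this region
            return y
    ```

    Restricting each update to the active region is what lets the partition and the local filters adapt independently, mirroring the independence property claimed for the context-tree equalizer.
    
    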

    Investigating periphyton biofilm response to changing phosphorus concentrations in UK rivers using within-river flumes

    The excessive growth of benthic algal biofilms in UK rivers is a widespread problem, resulting in loss of plant communities and wider ecological damage. Elevated nutrient concentrations (particularly phosphorus) are often implicated, as P is usually considered the limiting nutrient in most rivers. Phosphorus loadings to rivers in the UK have rapidly decreased in the last decade, due to improvements in sewage treatment and changes to agricultural practices. However, in many cases, these improvements in water quality have not resulted in a reduction in nuisance algal growth. It is therefore vital that catchment managers know what phosphorus concentrations need to be achieved in order to meet the UK’s obligations to attain good ecological status under the EU’s Water Framework Directive. This study has developed a novel methodology, using within-river mesocosms, which allows P concentrations of river water to be either increased or decreased, and the effect on biofilm accrual rate to be quantified. These experiments identify the phosphorus concentrations at which algae become P-limited, which can be used to determine knowledge-based P targets for rivers. The ability to reduce P concentrations in river water enables algal nutrient limitation to be studied in nutrient-enriched rivers for the first time.

    Personalized medicine: the impact on chemistry

    An effective strategy for personalized medicine requires a major conceptual change in the development and application of therapeutics. In this article, we argue that further advances in this field should be made with reference to another conceptual shift, that of network pharmacology. We examine the intersection of personalized medicine and network pharmacology to identify strategies for the development of personalized therapies that are fully informed by network pharmacology concepts. This provides a framework for discussion of the impact personalized medicine will have on chemistry in terms of drug discovery, formulation and delivery, the adaptations and changes in ideology required, and the contribution chemistry is already making. New ways of conceptualizing chemistry’s relationship with medicine will lead to new approaches to drug discovery and hold promise of delivering safer and more effective therapies.

    Unsupervised Opinion Aggregation -- A Statistical Perspective

    Complex decision-making systems rarely have direct access to the current state of the world; they instead rely on opinions to form an understanding of what the ground truth could be. Even in problems where experts provide opinions without any intention to manipulate the decision maker, it is challenging to decide which expert's opinion is more reliable, a challenge that is further amplified when the decision maker has limited, delayed, or no access to the ground truth after the fact. This paper explores a statistical approach to infer the competence of each expert based on their opinions, without any need for the ground truth. Echoing the logic behind what is commonly referred to as "the wisdom of crowds," we propose measuring the competence of each expert by how likely they are to agree with their peers. We further show that the more reliable an expert is, the more likely they are to agree with their peers. We leverage this fact to propose a completely unsupervised version of the naïve Bayes classifier and show that the proposed technique is asymptotically optimal for a large class of problems. In addition to aggregating a large block of opinions, we further apply our technique to online opinion aggregation and to decision-making based on a limited number of opinions. Comment: This research was conducted during Noyan Sevuktekin's time at the University of Illinois at Urbana-Champaign, and the results were first presented in Chapter 3 of his dissertation, entitled "Learning From Opinions". Permalink: https://hdl.handle.net/2142/11081
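    The peer-agreement idea admits a compact sketch: estimate each expert's competence from their average agreement with the other experts, map that estimate to naïve-Bayes log-odds weights, and take a weighted majority. The competence estimator below is a crude proxy chosen for illustration, and the function name and interface are assumptions; the paper's estimator and optimality analysis differ in detail.

    ```python
    import numpy as np

    def aggregate_opinions(votes):
        """Unsupervised opinion aggregation sketch (assumed interface).
        votes: (E, Q) array of +/-1 opinions from E experts on Q questions.
        Competence is proxied by each expert's mean agreement with peers;
        weights follow the naive-Bayes log-odds form. Returns (Q,) decisions."""
        E, Q = votes.shape
        agree = (votes @ votes.T) / Q                     # pairwise agreement in [-1, 1]
        peer = (agree.sum(axis=1) - 1.0) / (E - 1)        # mean agreement with peers
        p = np.clip((peer + 1.0) / 2.0, 1e-3, 1 - 1e-3)   # crude competence proxy
        w = np.log(p / (1.0 - p))                         # naive-Bayes log-odds weights
        return np.sign(w @ votes)                         # weighted-majority decision
    ```

    No ground truth enters the computation: unreliable experts agree with the crowd less often, receive smaller log-odds weights, and therefore contribute less to each decision, which is the mechanism the abstract describes.
    
    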