
    Psychophysical Models for Signal Detection with Time Varying Uncertainty

    Psychophysical models for the behavior of the human operator in detection tasks are developed, covering changes in detectability, correlation between observations, and deferred decisions. Classical Signal Detection Theory (SDT) is discussed, and its emphasis on sensory processes is contrasted with decision strategies. The analysis of decision strategies uses detection tasks with time-varying signal strength. The classical theory is modified to include such tasks, and several optimal decision strategies are explored. Two methods of classifying strategies are suggested: the first is similar to the analysis of ROC curves, while the second is based on the relation between the criterion level (CL) and the detectability. Experiments are designed to verify the analysis of tasks with changes in signal strength. The results show that subjects are aware of changes in detectability and tend to use strategies that involve changes in the CL.
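    Since the abstract hinges on the relation between detectability and the criterion level, a small worked example may help. The following sketch assumes the standard equal-variance Gaussian SDT model rather than anything specific to this thesis, and the hit/false-alarm rates are invented for illustration.

```python
# Standard equal-variance Gaussian SDT indices: detectability d' and
# criterion c, estimated from hit and false-alarm rates. The rates below
# are hypothetical, not data from the thesis.
from statistics import NormalDist

def sdt_indices(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Return (d_prime, criterion) under the equal-variance Gaussian model."""
    z = NormalDist().inv_cdf                       # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)             # separation of signal and noise
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # placement of the decision cut
    return d_prime, criterion

# As detectability drops, an observer aware of the change may relax the
# criterion to hold the hit rate, trading it against more false alarms.
print(sdt_indices(0.85, 0.20))  # higher-detectability block
print(sdt_indices(0.70, 0.35))  # lower-detectability block, relaxed criterion
```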

    Advanced analyses of physiological signals and their role in Neonatal Intensive Care

    Preterm infants admitted to the neonatal intensive care unit (NICU) face an array of life-threatening diseases requiring procedures such as resuscitation and invasive monitoring, as well as other risks related to exposure to the hospital environment, all of which may have lifelong implications. This thesis examined a range of applications for advanced signal analyses in the NICU, from identifying physiological patterns associated with neonatal outcomes to evaluating the impact of certain treatments on physiological variability. Firstly, the thesis examined the potential to identify infants at risk of developing intraventricular haemorrhage, which is often interrelated with factors leading to preterm birth, mechanical ventilation, hypoxia, and prolonged apnoeas. The thesis then characterised the cardiovascular impact of caffeine therapy, which is often administered to prevent and treat apnoea of prematurity, finding greater pulse pressure variability and enhanced responsiveness of the autonomic nervous system. Cerebral autoregulation maintains cerebral blood flow despite fluctuations in arterial blood pressure and is an important consideration for preterm infants, who are especially vulnerable to brain injury. Using various time- and frequency-domain correlation techniques, the thesis found acute changes in the cerebral autoregulation of preterm infants following caffeine therapy. Nutrition in early life may also affect neurodevelopment and morbidity in later life; this thesis developed models for identifying malnutrition risk using anthropometry and near-infrared interactance features. This thesis has presented a range of ways in which advanced analyses, including time series analysis, feature selection, and model development, can be applied to neonatal intensive care. There is a clear role for such analyses in the early detection of clinical outcomes, characterising the effects of relevant treatments or pathologies, and identifying infants at risk of later morbidity.
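    To make the time-domain correlation approach to cerebral autoregulation concrete, here is an illustrative sketch: a moving-window Pearson correlation between mean arterial pressure and a cerebral oxygenation signal. The signal names, sampling rate, window length, and synthetic data are assumptions for illustration, not the thesis's actual methods.

```python
# Moving-window Pearson correlation between mean arterial pressure (MAP)
# and cerebral oxygenation, a common time-domain proxy for cerebral
# autoregulation: values near 0 suggest intact autoregulation, values
# near +1 suggest pressure-passive cerebral blood flow. Illustrative only.
import numpy as np

def moving_correlation(map_signal: np.ndarray,
                       cerebral_signal: np.ndarray,
                       window: int = 300) -> np.ndarray:
    """Pearson r over consecutive non-overlapping windows of `window` samples."""
    n = min(len(map_signal), len(cerebral_signal))
    r = []
    for start in range(0, n - window + 1, window):
        a = map_signal[start:start + window]
        b = cerebral_signal[start:start + window]
        r.append(np.corrcoef(a, b)[0, 1])
    return np.array(r)

# Usage with synthetic 1 Hz data and 5-minute windows:
rng = np.random.default_rng(0)
map_sig = 40 + 5 * rng.standard_normal(3600)          # one hour of MAP samples
cerebral = 0.3 * map_sig + rng.standard_normal(3600)  # partly pressure-passive
print(moving_correlation(map_sig, cerebral).round(2))
```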

    Decentralized Narrowband and Wideband Spectrum Sensing with Correlated Observations

    This dissertation evaluates the utility of several approaches to the design of effective distributed sensing systems for both narrowband and wideband spectrum sensing problems with correlated sensor observations.
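    Because the abstract is terse, a baseline may help frame the problem. The sketch below shows the classical decentralized narrowband baseline of per-sensor energy detection with hard-decision OR fusion; it treats observations as independent, whereas the dissertation's designs specifically target correlated observations. All numbers and thresholds are illustrative assumptions.

```python
# Classical decentralized narrowband sensing baseline (not the
# dissertation's design): each sensor makes a one-bit energy-detector
# decision; a fusion center combines them with an OR rule.
import numpy as np

def energy_decision(samples: np.ndarray, threshold: float) -> bool:
    """Local one-bit decision: is the average received energy above threshold?"""
    return float(np.mean(np.abs(samples) ** 2)) > threshold

def or_fusion(local_decisions: list[bool]) -> bool:
    """Fusion center declares the band occupied if any sensor says so."""
    return any(local_decisions)

# Four sensors observing noise plus (optionally) a weak signal.
rng = np.random.default_rng(1)
signal_present = True
decisions = []
for _ in range(4):
    x = rng.standard_normal(256)                         # unit-variance noise
    if signal_present:
        x = x + 0.5 * rng.standard_normal(256)           # weak signal component
    decisions.append(energy_decision(x, threshold=1.1))  # illustrative threshold
print(decisions, "->", or_fusion(decisions))
```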

    In pursuit of high resolution radar using pursuit algorithms

    Radar receivers typically employ matched filters designed to maximize the signal-to-noise ratio (SNR) in a single-target environment. In a multi-target environment, however, matched-filter estimates of the target environment often contain spurious targets caused by radar signal sidelobes. As a result, matched filters are not suitable for use in high-resolution radars operating in multi-target environments. Assuming a point-target model, we show that the radar problem can be formulated as an under-determined linear system with a sparse solution, which suggests that radar can be treated as a sparse signal recovery problem. However, it is shown that the sensing matrix obtained using common radar signals does not usually satisfy the mutual coherence condition, which implies that recovery techniques from the compressed sensing literature may not yield the optimal solution. In this thesis, we focus on the greedy algorithm approach to solving the problem and show that it naturally yields a quantitative measure of radar resolution. In addition, we show that the limitations of the greedy algorithms can be attributed to the close relation between greedy matching pursuit algorithms and the matched filter. This suggests that the resolution of the greedy pursuit algorithms can be improved by using a mismatched signal dictionary. In some cases, unlike the mismatched filter, the proposed mismatched pursuit algorithm is shown to offer improved resolution and stability without any noticeable difference in detection performance. Further improvements in resolution are proposed by using greedy algorithms in a radar system with multiple transmit waveforms. It is shown that while using the greedy algorithms together with linear channel combining can yield significant resolution improvement, a greedy approach using nonlinear channel combining also shows some promise. Finally, a forward-backward greedy algorithm is proposed for target environments comprising point targets as well as extended targets.
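    Since the thesis builds on greedy matching pursuit, a minimal Orthogonal Matching Pursuit (OMP) sketch for the sparse model y = Ax + n may be useful, where the columns of A are delayed copies of the transmit waveform and the nonzeros of x are point-target reflectivities. The dictionary, waveform, and stopping rule below are illustrative assumptions, not the thesis's proposed algorithms.

```python
# Minimal OMP for the sparse radar model y = A x + n; the greedy
# correlation step is exactly a matched-filter peak pick, which is why
# OMP inherits the matched filter's resolution limits.
import numpy as np

def omp(A: np.ndarray, y: np.ndarray, k: int) -> np.ndarray:
    """Greedy recovery of a k-sparse x from y ≈ A x."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))  # matched-filter pick
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef                     # re-fit, deflate
    x[support] = coef
    return x

# Toy example: a shifted-waveform dictionary and two closely spaced targets.
n, m = 128, 64
rng = np.random.default_rng(2)
pulse = np.sinc(np.linspace(-4, 4, 17))                 # illustrative waveform
A = np.stack([np.roll(np.pad(pulse, (0, n - 17)), d) for d in range(m)], axis=1)
A /= np.linalg.norm(A, axis=0)                          # unit-norm columns
y = 1.0 * A[:, 20] + 0.8 * A[:, 24] + 0.01 * rng.standard_normal(n)
print(np.nonzero(omp(A, y, k=2))[0])                    # ideally delays 20 and 24
```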

    Economic Design of X-bar and CUSUM Charts as Applied to Non-Normal Processes


    Discovering robust dependencies from data

    Science revolves around forming hypotheses, designing experiments, collecting data, and testing. It was not until recently, with the advent of modern hardware and data analytics, that science shifted towards a big-data-driven paradigm that has led to unprecedented success across various fields. Perhaps the most astounding feature of this new era is that interesting hypotheses can now be automatically discovered from observational data. This dissertation investigates knowledge discovery procedures that do exactly this. In particular, we seek algorithms that discover the most informative models able to compactly “describe” aspects of the phenomena under investigation, in both supervised and unsupervised settings. We consider interpretable models in the form of subsets of the original variable set. We want the models to capture all possible interactions (e.g., linear, non-linear) between all types of variables (e.g., discrete, continuous), and lastly, we want their quality to be meaningfully assessed. For this, we employ information-theoretic measures: the fraction of information for the supervised setting, and the normalized total correlation for the unsupervised. The former measures the uncertainty reduction of the target variable conditioned on a model, and the latter measures the information overlap of the variables included in a model. Without access to the true underlying data-generating process, we estimate the aforementioned measures from observational data. This process is prone to statistical errors, and in our case, the errors manifest as biases towards larger models. This can lead to situations where the results are utterly random, hindering further analysis. We correct this behavior with notions from statistical learning theory. In particular, we propose regularized estimators that are unbiased under the hypothesis of independence, leading to robust estimation from limited data samples and arbitrary dimensionalities. Moreover, we do this for models consisting of both discrete and continuous variables. Lastly, to discover the top-scoring models, we derive effective optimization algorithms for exact, approximate, and heuristic search. These algorithms are powered by admissible, tight, and efficient-to-compute bounding functions for our proposed estimators that can be used to greatly prune the search space. Overall, the products of this dissertation can successfully assist data analysts with data exploration, discovering powerful description models, or concluding that no satisfactory models exist, implying that new experiments and data are required for the phenomena under investigation. This statement is supported by Materials Science researchers who corroborated our discoveries.
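    To ground the supervised score, here is a hedged sketch of a plug-in fraction of information F(X; Y) = I(X; Y) / H(Y) for discrete data, with a simple permutation-based correction whose value is close to zero in expectation when X and Y are independent. The correction illustrates the idea of an estimator unbiased under the independence hypothesis; it is not necessarily the dissertation's exact estimator.

```python
# Plug-in fraction of information with a permutation-based bias
# correction (illustrative; assumes discrete variables and H(Y) > 0).
from collections import Counter
from math import log2
import random

def entropy(values) -> float:
    n = len(values)
    return -sum(c / n * log2(c / n) for c in Counter(values).values())

def fraction_of_information(x, y) -> float:
    """Plug-in F = (H(X) + H(Y) - H(X, Y)) / H(Y)."""
    h_y = entropy(y)
    return (entropy(x) + h_y - entropy(list(zip(x, y)))) / h_y

def corrected_fraction(x, y, permutations: int = 100, seed: int = 0) -> float:
    """Subtract the mean score over random permutations of x (independence)."""
    rng = random.Random(seed)
    xs, baseline = list(x), 0.0
    for _ in range(permutations):
        rng.shuffle(xs)                        # break any X-Y dependence
        baseline += fraction_of_information(xs, y)
    return fraction_of_information(x, y) - baseline / permutations

x = [0, 0, 1, 1, 0, 1, 0, 1] * 10
y = [0, 0, 1, 1, 0, 1, 1, 0] * 10              # y mostly copies x
print(round(corrected_fraction(x, y), 3))      # stays high: real dependence
```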