54 research outputs found

    Postmortem Brain Imaging in Alzheimer's Disease and Related Dementias: The South Texas Alzheimer's Disease Research Center Repository

    Background: Neuroimaging bears the promise of providing new biomarkers that could refine the diagnosis of dementia. Still, obtaining the pathology data required to validate the relationship between neuroimaging markers and neurological changes is challenging. Existing data repositories are focused on a single pathology, are too small, or do not precisely match neuroimaging and pathology findings. Objective: The new data repository introduced in this work, the South Texas Alzheimer's Disease Research Center repository, was designed to address these limitations. Our repository covers a broad diversity of dementias, spans a wide age range, and was specifically designed to draw exact correspondences between neuroimaging and pathology data. Methods: Using four different MRI sequences, we are reaching a sample size that allows for validating multimodal neuroimaging biomarkers and studying comorbid conditions. Our imaging protocol was designed to capture markers of cerebrovascular disease and related lesions. Quantification of these lesions is currently underway with MRI-guided histopathological examination. Results: A total of 139 postmortem brains (70 female) with a mean age of 77.9 years were collected, with 71 brains fully analyzed. Of these, only 3% showed evidence of AD-only pathology, while 76% showed multiple pathologies contributing to the clinical diagnosis. Conclusion: This repository offers a large (and growing) sample covering a wide range of neurodegenerative disorders and employs advanced imaging protocols and MRI-guided histopathological analysis to help disentangle the effects of comorbid disorders, refine diagnosis and prognosis, and better understand neurodegenerative disorders.

    Fitting phase type distribution to service process with sequential phases

    The work of this thesis is concerned with fitting hypo-exponential and Erlang phase-type distributions for modeling real-life processes with non-exponential service times. There exist situations where exponential distributions cannot properly explain the distribution of service times. This thesis presents the application of two traditional statistical estimation techniques to approximate the service distributions of processes with a coefficient of variation less than one. It also presents an algorithm for fitting hypo-exponential distributions in complex situations that cannot be handled properly with traditional estimation techniques. The results show the effect of varying the sample size and other parameters on the efficiency of the estimation techniques by comparing their respective outputs. Furthermore, it checks how accurately the proposed algorithm approximates a given distribution.
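
The moment-matching idea for processes with a coefficient of variation below one can be illustrated with a small sketch. This is a simplified illustration, not the thesis's algorithm: for an Erlang-k distribution the squared coefficient of variation equals 1/k, so k and the per-phase rate follow directly from the sample mean and variance.

```python
import random

def fit_erlang_moments(samples):
    """Method-of-moments fit of an Erlang(k, rate) distribution.

    For Erlang-k the squared coefficient of variation is 1/k, so k
    is chosen as round(1/cv^2) and the per-phase rate follows from
    the mean (mean = k / rate)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    cv2 = var / mean ** 2          # squared coefficient of variation
    k = max(1, round(1.0 / cv2))   # number of exponential phases
    rate = k / mean                # per-phase rate
    return k, rate

# Recover parameters from synthetic Erlang-3 data with rate 2.0:
rng = random.Random(42)
data = [sum(rng.expovariate(2.0) for _ in range(3)) for _ in range(20000)]
k, rate = fit_erlang_moments(data)
```

With enough samples the fit recovers k = 3 and a rate close to 2.0; hypo-exponential fitting generalizes this by allowing distinct phase rates.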

    Scalable Approach to Uncertainty Quantification and Robust Design of Interconnected Dynamical Systems

    The development of robust dynamical systems and networks, such as autonomous aircraft systems capable of accomplishing complex missions, faces challenges due to dynamically evolving uncertainties arising from model uncertainty, the need to operate in hostile, cluttered urban environments, and the distributed and dynamic nature of communication and computation resources. Model-based robust design is difficult because of the complexity of the hybrid dynamic models, which include continuous vehicle dynamics and discrete models of computation and communication, and because of the size of the problem. We overview recent advances in methodology and tools to model, analyze, and design robust autonomous aerospace systems operating in uncertain environments, with emphasis on efficient uncertainty quantification and robust design, using case studies of missions including model-based target tracking and search, and trajectory planning in uncertain urban environments. To show that the methodology applies generally to uncertain dynamical systems, we also show applications of the new methods to efficient uncertainty quantification of energy usage in buildings and to stability assessment of interconnected power networks.
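
As an illustration of the kind of uncertainty quantification discussed above, the sketch below propagates a normally distributed parameter through a toy scalar dynamical system by plain Monte Carlo sampling. The system, distribution, and sample size are all hypothetical and far simpler than the hybrid models in the text; the point is only the forward-propagation pattern.

```python
import random
import statistics

def simulate(k, x0=1.0, dt=0.01, steps=500):
    """Forward-Euler integration of the scalar decay system x' = -k*x."""
    x = x0
    for _ in range(steps):
        x += dt * (-k * x)
    return x

def monte_carlo_uq(n=5000, seed=0):
    """Propagate uncertainty in the decay rate k ~ N(1.0, 0.1^2)
    through the dynamics and summarize the output distribution."""
    rng = random.Random(seed)
    outputs = [simulate(rng.gauss(1.0, 0.1)) for _ in range(n)]
    return statistics.mean(outputs), statistics.stdev(outputs)

mean, std = monte_carlo_uq()
```

Efficient alternatives such as polynomial chaos expansions aim to obtain the same output statistics with far fewer model evaluations, which is what makes large interconnected systems tractable.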

    Contributions from the United States National Herbarium

    v.42 (2002)

    Energy-Efficient and Timely Event Reporting Using Wireless Sensor Networks

    This thesis investigates the suitability of state-of-the-art protocols for large-scale, long-term environmental event monitoring using wireless sensor networks, based on the application scenario of early forest-fire detection. By suitably combining energy-efficient protocol mechanisms, a novel communication protocol, referred to as the cross-layer message-merging protocol (XLMMP), is developed. Qualitative and quantitative protocol analyses are carried out to confirm that XLMMP is particularly suitable for this application area. The quantitative analysis is mainly based on finite-source retrial queues with multiple unreliable servers. While this queueing model is widely applicable in various research areas even beyond communication networks, this thesis is the first to determine the distribution of the response time in this model. The model evaluation is mainly carried out using Markovian analysis and the method of phases. The quantitative results show that XLMMP is a feasible basis for designing scalable wireless sensor networks that (1) may comprise hundreds of thousands of tiny sensor nodes with reduced node complexity, (2) are suitable for monitoring an area of tens of square kilometers, and (3) achieve a lifetime of several years. The deduced quantifiable relationships between key network parameters (e.g., node size, node density, size of the monitored area, desired lifetime, and the maximum end-to-end communication delay) enable application-specific optimization of the protocol.
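
The finite-source retrial queue underlying the quantitative analysis can be illustrated with a minimal event-driven simulation. The sketch below is deliberately simplified to a single reliable server (the thesis analyzes multiple unreliable servers analytically), and all rates are hypothetical: each idle source generates a request; a request that finds the server busy joins an orbit and retries after an exponential delay.

```python
import heapq
import random

def simulate_retrial_queue(n_sources=50, lam=0.01, mu=1.0, nu=0.5,
                           horizon=10_000.0, seed=1):
    """Finite-source retrial queue, single reliable server.
    Returns the sample mean response time (generation to completion)."""
    rng = random.Random(seed)
    exp = lambda rate: rng.expovariate(rate)
    events = []                      # (time, kind, source_id)
    for s in range(n_sources):
        heapq.heappush(events, (exp(lam), "arrive", s))
    server_busy = False
    birth = {}                       # source -> generation time of its request
    responses = []
    while events:
        t, kind, s = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrive":
            birth[s] = t
            kind = "try"             # a fresh request immediately tries the server
        if kind == "try":
            if server_busy:
                # blocked: join the orbit and retry after Exp(nu)
                heapq.heappush(events, (t + exp(nu), "try", s))
            else:
                server_busy = True
                heapq.heappush(events, (t + exp(mu), "done", s))
        elif kind == "done":
            server_busy = False
            responses.append(t - birth[s])
            # the source becomes idle and generates its next request
            heapq.heappush(events, (t + exp(lam), "arrive", s))
    return sum(responses) / len(responses)

mean_response = simulate_retrial_queue()
```

Such a simulation gives only sample statistics of the response time; the thesis's contribution is the exact distribution via Markovian analysis and the method of phases.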

    Using a new generation of remote sensing to monitor Peru’s mountain glaciers

    Remote sensing technologies are integral to monitoring mountain glaciers in a warming world. Tropical glaciers, of which around 70% are located in Peru, are particularly at risk from climate warming. Satellite missions and field-based platforms have transformed understanding of the processes driving mountain glacier dynamics and the associated emergence of hazards (e.g. avalanches, floods, landslides), yet are seldom specialised to overcome the unique challenges of acquiring data in mountainous environments. A ‘new generation’ of remote sensing, marked by open access to powerful cloud computing and large datasets, high-resolution satellite missions, and low-cost science-grade field sensors, looks set to revolutionise the way we monitor the mountain cryosphere. In this thesis, three novel remote sensing techniques and their applicability to monitoring the glaciers of the Peruvian Cordillera Vilcanota are examined. Using novel processing chains and image archives generated by the ASTER satellite, the first mass balance estimate for the Cordillera Vilcanota is calculated (-0.48 ± 0.07 m w.e. yr-1) and ELA change of up to 32.8 m per decade in the neighbouring Cordillera Vilcabamba is quantified. The performance of the new satellite altimetry missions Sentinel-3 and ICESat-2 is assessed, with the tracking mode of Sentinel-3 proving a key limitation on its potential use over mountain environments. Although ICESat-2 is currently limited in its ability to extract widespread mass balance measurements over mountain glaciers, its other applications for long-term monitoring of mountain glaciers include quantifying surface elevation change, identifying large accumulation events, and monitoring lake bathymetry. Finally, a novel low-cost method of performing timelapse photogrammetry using Raspberry Pi camera sensors is created and compared to 3D models generated by a UAV.
The mean difference between the Raspberry Pi and UAV sensors is 0.31 ± 0.74 m, giving promise to the use of these sensors for long-term monitoring of recession and short-term warning of hazards at glacier calving fronts. Together, this ‘new generation’ of remote sensing looks set to provide new glaciological insights and opportunities for regular monitoring of data-scarce mountainous regions. The techniques discussed in this thesis could benefit communities and societal programmes in rapidly deglaciating environments, including across the Cordillera Vilcanota.
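
The geodetic mass-balance figure quoted above comes from converting a surface-elevation change rate into water equivalent. A minimal sketch of that conversion, assuming the commonly used volume-to-mass conversion density of 850 kg/m^3 (the actual processing chain and uncertainty treatment in the thesis are far more involved):

```python
def geodetic_mass_balance(dh_dt_m_per_yr, conversion_density=850.0,
                          water_density=1000.0):
    """Convert a glacier-wide elevation change rate (m/yr) into a
    specific mass balance in metres water equivalent per year
    (m w.e. yr-1) using a fixed volume-to-mass conversion density."""
    return dh_dt_m_per_yr * conversion_density / water_density

# An elevation-change rate of -0.565 m/yr corresponds to roughly
# -0.48 m w.e. yr-1, matching the Cordillera Vilcanota estimate.
mb = geodetic_mass_balance(-0.565)
```

The 850 kg/m^3 figure is an assumption carrying its own uncertainty, which is why published estimates such as the one above quote an error term.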

    A High-Throughput Byzantine Fault-Tolerant Protocol

    State-machine replication (SMR) is a software technique for tolerating failures and providing high availability in large-scale systems through the use of commodity hardware. A replicated state machine comprises a number of replicas, each of which runs an agreement protocol, with the goal of ensuring a consistent state across all replicas. In hostile environments such as the Internet, Byzantine fault-tolerant state-machine replication (BFT) is an important technique for providing robust services. During the past decade, we have seen the emergence of various BFT protocols. To be adopted, a BFT protocol must provide good performance in addition to correctness. Consequently, all of the new protocols focus on improving performance under various conditions. However, a closer look at the performance of state-of-the-art BFT protocols reveals that, even in best-case execution scenarios, they still remain far behind their theoretical maximum. Based on exhaustive evaluation and monitoring of existing BFT protocols, we highlight a few impediments to their scalability. These obstructions include the use of IP multicast, bottlenecks due to asymmetric replica processing, and unbalanced network bandwidth utilization. The goal of this thesis is to evaluate the actual impact of these scalability impediments and to offer a high-throughput BFT protocol for the case in which the network itself is the bottleneck. To that end, we have developed Ring, a new BFT protocol that circumvents the aforementioned impediments. As its name suggests, Ring uses a ring communication topology in the fault-free case. In the ring topology, each replica performs point-to-point communication with only two other replicas, namely its neighbors on the ring. Moreover, all replicas accept requests from clients equally and perform symmetric processing.
Our performance evaluation shows that, with the network as the bottleneck, Ring outperforms all other state-of-the-art BFT protocols. Ring achieves 118 Mbps on Fast Ethernet, a 24% improvement in throughput over previous protocols. Finally, we conducted an extensive practical and analytical evaluation of Ring. To analyse the benefits (and drawbacks) of Ring and other protocols under different settings without resorting to costly experimentation, we developed an analytical performance model. Our performance model is based on queueing theory and relies on only a handful of protocol-agnostic measurements of the environment.
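
The bandwidth argument for the ring topology can be sketched with a back-of-the-envelope model (a deliberate simplification, not the thesis's queueing-theoretic model, and ignoring batching and duplex effects): in a leader-based protocol the leader's single outgoing link must carry every request once per other replica, while on a ring every link carries each request exactly once.

```python
def leader_throughput_bound(link_bandwidth, n_replicas):
    """Leader-based dissemination: the leader's outgoing link carries
    each request (n_replicas - 1) times, capping overall throughput."""
    return link_bandwidth / (n_replicas - 1)

def ring_throughput_bound(link_bandwidth):
    """Ring dissemination: each link carries each request exactly once,
    so a single link's bandwidth is the only cap."""
    return link_bandwidth

# With 4 replicas (tolerating f = 1 Byzantine fault) on 100-unit links:
leader = leader_throughput_bound(100.0, 4)  # bound of 100/3 units
ring = ring_throughput_bound(100.0)         # bound of 100 units
```

This idealized gap of (n - 1)x is what symmetric, balanced link utilization in Ring sets out to exploit.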

    Architecture-Level Software Performance Models for Online Performance Prediction

    Proactive performance and resource management of modern IT infrastructures requires the ability to predict, at run time, how the performance of running services would be affected if the workload or the system changes. This thesis presents modeling and prediction facilities that enable online performance prediction during system operation. Analyses of the impact of reconfigurations and workload trends can be conducted at the model level, without executing expensive performance tests.
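
The idea of answering "what if the workload changes?" on the model level can be illustrated with the simplest possible performance model, an M/M/1 queue. This is a minimal stand-in for the architecture-level models the thesis describes; the service and arrival rates below are hypothetical.

```python
def mm1_response_time(service_rate, arrival_rate):
    """Mean response time of an M/M/1 queue: R = 1 / (mu - lambda).
    Rates are in requests per second; the result is in seconds."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

# Predict the effect of a workload increase before it happens:
current = mm1_response_time(100.0, 60.0)    # 0.025 s at 60 req/s
predicted = mm1_response_time(100.0, 90.0)  # 0.1 s if load rises to 90 req/s
```

Even this toy model shows the value of online prediction: the fourfold response-time degradation is visible on the model level, with no load test executed.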

    Decisive Routing and Admission Control According to Quality of Service Constraints

    This research effort examines, models, and proposes options to enhance command and control for decision makers as applied to the communications network. My goal is to investigate the viability of combining, expanding, and enhancing three students’ past research efforts. The scope of this research covers predicting a snapshot of the communications network, context-aware routing between network nodes, and Quality of Service-based routing optimization, in order to create an intelligent routing protocol platform. It consolidates efforts from an Intelligent Agent Based Framework to Maximize Information Utility by Captain John Pecarina, Dialable Cryptography for Wireless Networks by Major Marnita Eaddie, and Stochastic Estimation and Control of Queues within a Computer Network by Captain Nathan Stuckey. My research effort creates a framework that is greater than the sum of its individual parts. The framework takes predictions about the health of the network and the priority level of a commodity to be routed, and uses this information to route the commodity intelligently so as to optimize the flow of network traffic. Developing this framework will ensure that the forward commander and decision makers can make sound judgments at the right time, using the most accurate information, on the proper communications network.
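
One common way to combine routing with Quality of Service constraints, sketched below, is to prune links that cannot satisfy an admission constraint (here, a bandwidth floor) and then run Dijkstra on the surviving links over a delay metric. The topology and thresholds are hypothetical, and this is only a generic illustration of QoS-constrained routing, not the framework's actual protocol.

```python
import heapq

def qos_route(graph, src, dst, min_bandwidth):
    """Dijkstra over delay after pruning links below min_bandwidth.
    graph: {node: [(neighbor, delay, bandwidth), ...]}
    Returns (path, total_delay) or None if no admissible path exists."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, delay, bw in graph.get(u, []):
            if bw < min_bandwidth:
                continue  # admission control: link cannot carry the flow
            nd = d + delay
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Hypothetical 4-node network: (neighbor, delay_ms, bandwidth_mbps)
net = {
    "A": [("B", 5, 100), ("C", 2, 10)],
    "B": [("D", 5, 100)],
    "C": [("D", 1, 10)],
}
```

For a high-priority, high-bandwidth commodity, `qos_route(net, "A", "D", 50)` avoids the low-capacity A-C-D path even though it has lower delay, while a low-bandwidth request is free to take it.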

    Knowledge-defined networking: a machine learning based approach for network and traffic modeling

    The research community has previously considered applying Machine Learning (ML) techniques to control and operate networks. A notable example is the Knowledge Plane proposed by D. Clark et al. However, such techniques have not yet been extensively prototyped or deployed in the field. In this thesis, we explore the reasons for this lack of adoption and posit that the rise of two recent paradigms, Software-Defined Networking (SDN) and Network Analytics (NA), will facilitate the adoption of ML techniques in the context of network operation and control. We describe a new paradigm that accommodates and exploits SDN, NA, and ML, and present relevant use cases that illustrate its applicability and benefits. We refer to this new paradigm as Knowledge-Defined Networking (KDN). In this context, ML can be used as a network modeling technique to build models that estimate network performance. Network modeling is a technique central to many networking functions, for instance in the field of optimization. One of the objectives of this thesis is to answer the following question: can neural networks accurately model the performance of a computer network as a function of the input traffic? We focus mainly on modeling the average delay, but also on estimating jitter and packet loss. For this, we treat the network as a black box that takes a traffic matrix as input and produces the desired performance matrix as output. We then train different regressors, including deep neural networks, and evaluate their accuracy under different fundamental network characteristics: topology, size, traffic intensity, and routing. Moreover, we also study the impact of having multiple traffic flows between each pair of nodes. We also explore the use of ML techniques in other network-related fields. One relevant application is traffic forecasting. 
Accurate forecasting enables scaling resources up or down to efficiently accommodate the traffic load. Such forecasts are typically based on traditional time-series models such as ARMA or ARIMA. We propose a new methodology that combines an ARIMA model with an artificial neural network (ANN). The neural network greatly improves the ARIMA estimates by modeling complex and nonlinear dependencies, particularly for outliers. To train the neural network and improve the estimation of outliers, we use external information: weather, events, holidays, etc. The main hypothesis is that network traffic depends on the behavior of end users, which in turn depends on external factors. We evaluate the accuracy of our methodology using real-world data from an egress Internet link of a campus network. The analysis shows that the model works remarkably well, outperforming standard ARIMA models. Another relevant application is Network Function Virtualization (NFV). The NFV paradigm makes networks more flexible by using Virtual Network Functions (VNFs) instead of dedicated hardware. The main advantage is the flexibility offered by these virtual elements; however, the use of virtual nodes makes such networks harder to model. This problem may also be addressed with ML techniques, both to model and to control such networks. As a first step, we focus on modeling the performance of a single VNF as a function of its input traffic. In this thesis, we demonstrate that the CPU consumption of a VNF can be estimated as a function of the input traffic characteristics alone.
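
The structure of the ARIMA-plus-ANN hybrid can be sketched in a strongly simplified form: fit an autoregressive baseline, then learn a correction to its residuals from an external feature. The sketch below replaces ARIMA with an AR(1) least-squares fit and the neural network with a single least-squares term on a hypothetical holiday flag, so it only illustrates the shape of the hybrid, not the thesis's actual models or data.

```python
import random

def fit_ar1(series):
    """Least-squares AR(1) coefficient: y_t ~ phi * y_{t-1}."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den

def hybrid_forecast(series, external, phi):
    """Correct the AR(1) residuals with a least-squares term on an
    external 0/1 feature (e.g. a holiday flag) -- a linear stand-in
    for the neural-network residual model described above."""
    resid = [series[t] - phi * series[t - 1] for t in range(1, len(series))]
    flags = external[1:]                       # align flags with residuals
    on = [r for r, f in zip(resid, flags) if f]
    beta = sum(on) / len(on) if on else 0.0    # mean residual on flagged steps
    def predict(last_value, flag):
        return phi * last_value + beta * flag
    return predict

# Synthetic traffic: AR(1) dynamics plus an additive spike on "holidays".
rng = random.Random(7)
y, flags = [0.0], [0]
for t in range(1, 3000):
    f = 1 if t % 7 == 0 else 0
    y.append(0.8 * y[-1] + 5.0 * f + rng.gauss(0.0, 0.5))
    flags.append(f)
phi = fit_ar1(y)
predict = hybrid_forecast(y, flags, phi)
```

The baseline alone systematically misses the flagged spikes; the residual term recovers them, which mirrors how the ANN in the methodology improves the ARIMA estimates specifically on outliers driven by external factors.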