
    Point process modeling as a framework to dissociate intrinsic and extrinsic components in neural systems

    Understanding the factors shaping neuronal spiking is a central problem in neuroscience. Neurons may have complicated sensitivity and are often embedded in dynamic networks whose ongoing activity may influence their likelihood of spiking. One approach to characterizing neuronal spiking is the point process generalized linear model (GLM), which decomposes spike probability into explicit factors. This model represents a higher level of abstraction than biophysical models, such as Hodgkin-Huxley, but benefits from principled approaches for estimation and validation. Here we address how to infer factors affecting neuronal spiking in different types of neural systems. We first extend the point process GLM, most commonly used to analyze single neurons, to model population-level voltage discharges recorded during human seizures. Both GLMs and descriptive measures reveal rhythmic bursting and directional wave propagation. However, we show that GLM estimates account for covariance between these features in a way that pairwise measures do not. Failure to account for this covariance leads to confounded results. We interpret the GLM results to speculate about the mechanisms of seizure and to suggest new therapies. The second chapter highlights the flexibility of the GLM. We use this single framework to analyze enhancement, a statistical phenomenon, in three distinct systems. Here we define the enhancement score, a simple measure of shared information between spike factors in a GLM. We demonstrate how to estimate the score, including confidence intervals, using simulated data. In real data, we find that enhancement occurs prominently during human seizure, while redundancy tends to occur in mouse auditory networks. We discuss implications for physiology, particularly during seizure. In the third part of this thesis, we apply point process modeling to spike trains recorded from single units in vitro under external stimulation.
We re-parameterize models in a low-dimensional and physically interpretable way; namely, we represent their effects in principal component space. We show that this approach successfully separates the neurons observed in vitro into different classes consistent with their gene expression profiles. Taken together, this work contributes a statistical framework for analyzing neuronal spike trains and demonstrates how it can be applied to create new insights into clinical and experimental data sets.
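The decomposition of spike probability into explicit factors can be sketched as a minimal conditional-intensity model. The covariates and weights below (a baseline rate, one stimulus term, one spike-history term) are hypothetical illustrations, not the factors estimated in the thesis:

```python
import math

# Hypothetical log-rate weights: baseline, stimulus gain, and a negative
# spike-history weight modeling refractoriness.
beta0, beta_stim, beta_hist = math.log(10.0), 0.8, -2.0
dt = 0.001  # 1 ms time bins

def conditional_intensity(stim, spiked_recently):
    """Point-process GLM: log of the intensity is a linear sum of covariates."""
    log_lam = beta0 + beta_stim * stim + beta_hist * (1.0 if spiked_recently else 0.0)
    return math.exp(log_lam)

def spike_probability(stim, spiked_recently):
    """Probability of at least one spike in a bin of width dt under a Poisson model."""
    lam = conditional_intensity(stim, spiked_recently)
    return 1.0 - math.exp(-lam * dt)

# A recent spike suppresses the intensity, lowering the spike probability.
p_rest = spike_probability(0.0, False)
p_refractory = spike_probability(0.0, True)
```

Because the factors enter additively on the log scale, each covariate's contribution to spiking can be read off directly from its fitted weight, which is what makes the decomposition explicit.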

    ECG compression for Holter monitoring

    Cardiologists can gain useful insight into a patient's condition when they can correlate the patient's symptoms and activities with the recorded ECG. For this purpose, a Holter Monitor is often used: a portable electrocardiogram (ECG) recorder worn by the patient for a period of 24-72 hours. Preferably, the monitor is not cumbersome to the patient, so it should be designed to be as small and light as possible; however, the storage requirements for such a long signal are very large and can significantly increase the recorder's size and cost, so signal compression is often employed. At the same time, the decompressed signal must contain enough detail for the cardiologist to be able to identify irregularities. "Lossy" compressors may obscure such details, whereas a "lossless" compressor preserves the signal exactly as captured. The purpose of this thesis is to develop a platform upon which a Holter Monitor can be built, including a hardware-assisted lossless compression method that avoids the signal quality penalties of a lossy algorithm. The objective is to develop and implement a low-complexity lossless ECG encoding algorithm capable of at least a 2:1 compression ratio in an embedded system for use in a Holter Monitor. Different lossless compression techniques were evaluated in terms of coding efficiency, suitability for ECG waveforms, random access within the signal, and complexity of the decoding operation. To reduce the physical circuit size, a System On a Programmable Chip (SOPC) design was utilized. A coder based on a library of linear predictors and Rice coding was chosen and found to give a compression ratio of at least 2:1, and as high as 3:1, on the real-world signals tested, while having low decoder complexity and fast random access to arbitrary parts of the signal.
In the hardware-assisted implementation, encoding was four to five times faster than a software encoder running on the same CPU, while leaving the CPU free to perform other tasks during the encoding process.
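The predictor-plus-Rice-coding scheme can be sketched as follows. The first-order predictor and the fixed Rice parameter k are illustrative simplifications; the thesis selects from a library of predictors rather than using a single one:

```python
def zigzag(x):
    # Map signed prediction residuals to non-negative integers: 0, -1, 1, -2, ...
    return (x << 1) if x >= 0 else ((-x << 1) - 1)

def rice_encode(n, k):
    """Rice code for non-negative n: unary quotient, '0' stop bit,
    then the remainder in k binary digits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, "b").zfill(k) if k else "")

def encode_ecg(samples, k=3):
    """Encode a sample stream: predict each sample from the previous one
    (first-order linear prediction) and Rice-code the residual."""
    bits, prev = [], 0
    for s in samples:
        residual = s - prev
        bits.append(rice_encode(zigzag(residual), k))
        prev = s
    return "".join(bits)
```

Small residuals from a good predictor yield short unary quotients, which is where the 2:1 to 3:1 compression on smooth ECG segments comes from; the self-terminating codes also support the fast random access the thesis requires.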

    New product development system fit for innovative performance

    Innovation is an imperative for long-term health and shareholder returns in firms dependent on product development. Yet most companies struggle with the tension between creative ideation and implementation. Effectively finding new sources of revenue while improving and profiting from existing products and services reflects a product development ambidexterity challenge. Surveys were collected from 212 development team members representing 31 teams from the transportation, aerospace, and chemical sectors to understand whether a new product development (NPD) system's ambidexterity supported team-level innovation performance. When NPD systems are perceived by development team users as ambidextrous structures, their combined ideation and implementation strength contributes directly to team innovation performance. This research supports past findings in contextual ambidexterity and provides a new measure for assessing the ideation and implementation characteristics of NPD systems.

    Joint University Program for Air Transportation Research, 1986

    The research conducted under the NASA/FAA sponsored Joint University Program for Air Transportation Research is summarized. The Joint University Program is a coordinated set of three grants sponsored by NASA and the FAA, one each with the Mass. Inst. of Tech., Ohio Univ., and Princeton Univ. Completed works, status reports, and bibliographies are presented for research topics, which include computer science, guidance and control theory and practice, aircraft performance, flight dynamics, and applied experimental psychology. An overview of activities is presented.

    Dependence-driven techniques in system design

    Burstiness in workloads is often found in multi-tier architectures, storage systems, and communication networks. This feature is extremely important in system design because it can significantly degrade system performance and availability. This dissertation focuses on how to use knowledge of burstiness to develop new techniques and tools for performance prediction, scheduling, and resource allocation under bursty workload conditions. For multi-tier enterprise systems, burstiness in the service times is catastrophic for performance. Via detailed experimentation, we identify the cause of performance degradation as the persistent bottleneck switch among various servers. This results in unstable behavior that cannot be captured by existing capacity planning models. In this dissertation, beyond identifying the cause and effects of bottleneck switch in multi-tier systems, we also propose modifications to the classic TPC-W benchmark to emulate bursty arrivals in multi-tier systems. This dissertation also demonstrates how burstiness can be used to improve system performance. Two dependence-driven scheduling policies, SWAP and ALoC, are developed. These general scheduling policies counteract burstiness in workloads and maintain high availability by delaying selected requests that contribute to burstiness. Extensive experiments show that both SWAP and ALoC achieve good estimates of service times based on the knowledge of burstiness in the service process. As a result, SWAP successfully approximates shortest-job-first (SJF) scheduling without requiring a priori information of job service times. ALoC adaptively controls system load by indefinitely delaying only a small fraction of the incoming requests. The knowledge of burstiness can also be used to forecast the length of idle intervals in storage systems. In practice, background activities are scheduled during system idle times.
The scheduling of background jobs is crucial both for limiting the performance degradation imposed on foreground jobs and for utilizing idle times well. In this dissertation, new background scheduling schemes are designed to determine when and for how long idle times can be used for serving background jobs without violating predefined performance targets of foreground jobs. Extensive trace-driven simulation results illustrate that the proposed schemes are effective and robust across a wide range of system conditions. Furthermore, if there is burstiness within idle times, then maintenance features such as disk scrubbing and intra-disk data redundancy can be successfully scheduled as background activities during idle times.
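The intuition behind a delay-based policy such as SWAP can be shown with a toy sketch: defer requests whose estimated service time exceeds a threshold so that short requests run first, approximating shortest-job-first. The threshold, the assumption that estimates are available, and the function names are all illustrative, not the actual SWAP algorithm:

```python
def swap_schedule(requests, delay_threshold):
    """Toy delay-based policy: requests whose estimated service time exceeds
    the threshold are deferred so short requests run first, approximating
    shortest-job-first without exact a priori service times."""
    short = [r for r in requests if r <= delay_threshold]
    deferred = [r for r in requests if r > delay_threshold]
    return short + deferred

def avg_wait(service_times_in_order):
    """Mean waiting time when jobs run back-to-back in the given order."""
    elapsed, waits = 0, []
    for s in service_times_in_order:
        waits.append(elapsed)
        elapsed += s
    return sum(waits) / len(waits)
```

Running short jobs ahead of long ones lowers the mean waiting time relative to first-come-first-served, which is the benefit the dissertation obtains by using burstiness to estimate which requests are long.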

    Storing single photons in broadband vapor cell quantum memories

    Single photons are an essential resource for realizing quantum technologies. Together with compatible quantum memories granting control over when a photon arrives, they form a foundational component both of quantum communication and quantum information processing. Quality solid-state single photon sources deliver on the high bandwidths and rates required for scalable quantum technology, but require memories that match these operational parameters. In this thesis, I report on quantum memories based on electromagnetically induced transparency and built in warm rubidium vapor, with such fast and high bandwidth interfaces in mind. I also present work on a heralded single photon source based on parametric downconversion in an optical cavity, operated in a bandwidth regime of a few hundred megahertz. The systems are characterized on their own and together in a functional interface. As the photon generation process is spontaneous, the memory is implemented as a fully reactive device, capable of storing and retrieving photons in response to an asynchronous external trigger. The combined system is used to demonstrate the storage and retrieval of single photons in and from the quantum memory. Using polarization selection rules in the Zeeman substructure of the atoms, the read-out noise of the memory is considerably reduced from what is common in ground-state storage schemes in warm vapor. Critically, the quantum signature in the photon number statistics of the retrieved photons is successfully maintained, proving that the emission from the memory is dominated by single photons. We observe a retrieved single-photon autocorrelation of g^(2)_{c,ret} = 0.177(23) for short storage times, which remains g^(2)_{c,ret} < 0.5 throughout the memory lifetime of 680(50) ns.
The end-to-end efficiency of the memory interfaced with the photon source is η_e2e = 1.1(2)%, which will be further improved in the future by optimizing the operating regime. With its operation bandwidth of 370 MHz, our system opens up new possibilities for single-photon synchronization and local quantum networking experiments at high repetition rates.
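The heralded autocorrelation quoted above is conventionally computed from photon coincidence counts. A minimal sketch, assuming the standard three-detector formula with one herald and two signal detectors; the count values in the demo are made up for illustration:

```python
def heralded_g2(n_herald, n_12, n_13, n_123):
    """Heralded second-order autocorrelation from coincidence counts:
    g2 = N_123 * N_herald / (N_12 * N_13), where detector 1 is the herald
    and 2, 3 sit behind a beam splitter on the signal arm. Values well
    below 0.5 indicate operation in the single-photon regime."""
    return (n_123 * n_herald) / (n_12 * n_13)

# Made-up counts: many herald events, very few triple coincidences.
g2 = heralded_g2(n_herald=10**6, n_12=10**4, n_13=10**4, n_123=15)
```

Since a true single photon cannot split toward both signal detectors at once, triple coincidences N_123 are rare and the ratio falls below the classical bound.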

    Intelligent detectors

    This thesis provides a basis for the development of on-board software for astronomical satellites. It serves as a guide and reference work and shows, using the Herschel/PACS and SPICA/SAFARI projects, how space-qualified flight software is built from the fundamentals. This includes understanding the scientific purpose (what is to be measured, how, and to what end), knowledge of the physical properties of the detector, command of the mathematical operations used to process the data, and consideration of the technical and observational circumstances under which the detector is operated. The thesis collects the knowledge, and a good deal of the experience, necessary for the development of such astronomical on-board software for satellites.

    System Engineering Applied to Fuenmayor Karst Aquifer (San Julián de Banzo, Huesca) and Collins Glacier (King George Island, Antarctica)

    Systems engineering, generally defined as the art and science of creating integral solutions to complex problems, is applied here to two natural systems, namely a karst aquifer and a glacier system, from a hydrological perspective. System identification techniques, typically developed in engineering to represent artificial systems through linear and nonlinear models, can be applied to the study of natural systems in which coupling phenomena between the climate and the hydrosphere occur. The methods evolve to address new fields of identification, where strategies are required to find a suitable model adapted to the peculiarities of the system. In this regard, special consideration has been given to wavelet-transform-based tools used in the preparation of time series, signal smoothing, spectral analysis, cross-correlation, and prediction, among others. Under this approach, one application worth mentioning among those treated in this thesis is the analytical determination of the seasonal effective core (SEC) through the study of the wavelet coherence between air temperature and glacier discharge, which establishes a set of acceptably coherent sampling periods from which the models of the glacier system are built. The study is aimed specifically at estimating the influence of precipitation on the discharge of the Fuenmayor karst aquifer in San Julián de Banzo, Huesca, Spain. Likewise, it addresses the effect of air temperature on glacier ice melt, which manifests itself in the drainage stream of the Collins glacier, King George Island, Antarctica. In the parametric and non-parametric identification process, the models that best represent the internal dynamics of the system are sought.
This leads to iterative testing, in which models are created and systematically verified against real sampled data according to a given efficiency criterion. In the cases treated, the best-rated solutions point to block-structured models. This thesis constitutes a formal exposition of engineering system identification methodology in the context of natural systems, improving on the results obtained in many karst hydrology studies that commonly relied on ad hoc statistical methods; likewise, the approaches proposed for the glaciology cases, using wavelet analysis and data-driven models rarely considered in the literature, reveal essential information where it is impossible to specify the full physics governing the system. Notable results are obtained in characterizing the response of the Fuenmayor spring and its correlation with precipitation from a linear-system perspective, complemented by identification methods based on nonlinear techniques. Likewise, the model implemented for the Collins glacier, also obtained through black-box identification methods, can reveal an instability in the boundaries of the active discharge periods, and consequently variability in the current trend of global climate change.
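A minimal stand-in for the black-box identification step is a first-order ARX model fit by least squares, relating an input (precipitation or air temperature) to discharge. The model order and the coefficients in the demo are illustrative, not those identified in the thesis:

```python
def fit_arx1(u, y):
    """Least-squares fit of y[t] = a*y[t-1] + b*u[t], a minimal black-box
    model of an input-driven system with memory. Solves the 2x2 normal
    equations for (a, b) directly."""
    s_yy = s_uu = s_yu = s_ty = s_tu = 0.0
    for t in range(1, len(y)):
        y1, ut, yt = y[t - 1], u[t], y[t]
        s_yy += y1 * y1; s_uu += ut * ut; s_yu += y1 * ut
        s_ty += yt * y1; s_tu += yt * ut
    det = s_yy * s_uu - s_yu * s_yu
    a = (s_ty * s_uu - s_tu * s_yu) / det
    b = (s_tu * s_yy - s_ty * s_yu) / det
    return a, b

# Demo on noise-free synthetic data generated with known coefficients
# a = 0.8, b = 0.5 (hypothetical values, chosen so the fit can recover them).
u = [0, 1, 0, 0, 1, 1, 0, 1, 0, 0]
y = [0.0]
for t in range(1, len(u)):
    y.append(0.8 * y[-1] + 0.5 * u[t])
a, b = fit_arx1(u, y)
```

The fitted model is then verified against held-out sampled data under an efficiency criterion, which is the iterative create-and-verify loop the abstract describes.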

    Estimating the Amount of Information Conveyed by a Population of Neurons

    Recent advances in electrophysiological recording technology have allowed for the collection of data from large populations of neurons simultaneously. Yet despite these advances, methods for the estimation of the amount of information conveyed by multiple neurons have been stymied by the "curse of dimensionality": as the number of included neurons increases, so too does the dimensionality of the data necessary for such measurements, leading to an exponential and, therefore, intractable increase in the amount of data required for valid measurements. Here we put forth a novel method for the estimation of the amount of information transmitted by the discharge of a large population of neurons, a method which exploits the little-known fact that (under certain constraints) the Fourier coefficients of variables such as neural spike trains follow a Gaussian distribution. This fact enables an accurate measure of information even with limited data. The method, which we call the Fourier Method, is presented in detail, tested for robustness, and its application is demonstrated with both simulated and real spike trains.
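The Gaussian approximation to Fourier coefficients is what makes the estimate tractable: for Gaussian variables, information reduces to a closed-form sum over frequency bins rather than a high-dimensional histogram. A sketch under the simplifying assumption of independent bins with known signal and noise power (the thesis's Fourier Method differs in detail):

```python
import math

def fourier_info_bits(signal_power, noise_power):
    """Gaussian-channel information estimate: treating the Fourier
    coefficients in each frequency bin as independent Gaussians, each bin
    contributes log2(1 + SNR_k) bits."""
    return sum(math.log2(1.0 + s / n) for s, n in zip(signal_power, noise_power))
```

Because only per-bin power estimates are needed, the data requirement grows with the number of frequency bins rather than exponentially with the number of neurons.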

    Detecting Events and Patterns in Large-Scale User Generated Textual Streams with Statistical Learning Methods

    A vast amount of textual web content is influenced by events or phenomena emerging in the real world. The social web forms an excellent modern paradigm, where unstructured user generated content is published on a regular basis and in most occasions is freely distributed. The present Ph.D. thesis deals with the problem of inferring information - or patterns in general - about events emerging in real life based on the contents of this textual stream. We show that it is possible to extract valuable information about social phenomena, such as an epidemic or even rainfall rates, by automatic analysis of the content published in Social Media, and in particular Twitter, using Statistical Machine Learning methods. An important intermediate task regards the formation and identification of features which characterise a target event; we select and use those textual features in several linear, non-linear and hybrid inference approaches, achieving good performance in terms of the applied loss function. By examining further this rich data set, we also propose methods for extracting various types of mood signals, revealing how affective norms - at least within the social web's population - evolve during the day and how significant events emerging in the real world influence them. Lastly, we present some preliminary findings showing several spatiotemporal characteristics of this textual information as well as the potential of using it to tackle tasks such as the prediction of voting intentions. (PhD thesis, 238 pages, 9 chapters, 2 appendices, 58 figures, 49 tables.)
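The linear inference over selected textual features can be sketched as a weighted sum of term occurrences in a stream window. The feature weights here are hypothetical, standing in for coefficients learned by the statistical methods described:

```python
def event_score(tokens, weights):
    """Linear text-to-signal inference: sum the learned weights of the
    selected textual features present in a stream window; unselected
    tokens contribute nothing."""
    return sum(weights.get(tok, 0.0) for tok in tokens)

# Hypothetical feature weights for an epidemic signal, standing in for
# learned regression coefficients.
flu_weights = {"flu": 2.0, "fever": 1.5, "cough": 1.0}
```

In practice the weights would be fitted by regressing windowed term frequencies against a ground-truth signal (e.g. official epidemic or rainfall rates), with the loss function quantifying the fit.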