5 research outputs found

    Visualizing simulated electrical fields from electroencephalography and transcranial electric brain stimulation: a comparative evaluation

    pre-print
    Electrical activity of neuronal populations is a crucial aspect of brain activity. This activity is not measured directly but recorded as electrical potential changes using head surface electrodes (electroencephalogram, EEG). Head surface electrodes can also be used to inject electrical currents in order to modulate brain activity (transcranial electric stimulation) for therapeutic and neuroscientific purposes. In electroencephalography and noninvasive electric brain stimulation, electrical fields mediate between electrical signal sources and regions of interest (ROI). These fields can be very complicated in structure and are influenced in a complex way by the conductivity profile of the human head. Visualization techniques play a central role in grasping the nature of those fields because they convey complex data effectively and enable quick qualitative and quantitative assessment. Visualization can be used to examine the volume conduction effects of particular head model parameterizations (e.g., skull thickness and layering), of brain anomalies (e.g., holes in the skull, tumors), of the location and extent of active brain areas (e.g., high concentrations of current densities), and of the region around current-injecting electrodes. Here, we evaluate a number of widely used visualization techniques based on either the potential distribution or the current flow. In particular, we focus on the extractability of quantitative and qualitative information from the obtained images, their effective integration of anatomical context information, and their interaction. We present illustrative examples from clinically and neuroscientifically relevant cases and discuss the pros and cons of the various visualization techniques.
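The two families of techniques the abstract distinguishes are based on the potential distribution V and on the current density J. A minimal sketch of where those quantities come from, assuming the textbook case of a single current dipole in an unbounded homogeneous conductor (real head models use realistic conductivity profiles and FEM/BEM solvers; all numbers here are assumed for illustration):

```python
import numpy as np

sigma = 0.33                 # conductivity in S/m (typical value assumed for brain tissue)
p = np.array([0.0, 1e-8])    # dipole moment in A*m (assumed value)

# 2D slice through the volume, dipole at the origin.
xs = np.linspace(-0.05, 0.05, 101)
X, Y = np.meshgrid(xs, xs, indexing="xy")
R2 = X**2 + Y**2
R2[R2 == 0] = np.inf         # avoid the singularity at the dipole location

# V(r) = p . r / (4 pi sigma |r|^3) for an infinite homogeneous medium.
# Isocontours of V give potential-distribution visualizations.
V = (p[0] * X + p[1] * Y) / (4 * np.pi * sigma * R2**1.5)

# Current density J = -sigma * grad(V); streamlines of J give
# current-flow visualizations.
dVdy, dVdx = np.gradient(V, xs, xs)
Jx, Jy = -sigma * dVdx, -sigma * dVdy
```

Both visualization families are therefore views of the same underlying field: one shows level sets of V, the other integral curves of J.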

    THE CHRONOBIOLOGICAL HYPOTHESIS AND PSYCHOSOCIAL AND CULTURAL GENOMIC THERAPIES: FROM THE EXAMPLE OF PARKINSON’S DISEASE AND NEURODEGENERATIVE PROCESS

    Recent research on Parkinson’s Disease (PD) has revealed common etiopathogenic mechanisms underlying a large range of chronic diseases, most notably a deregulation of chronobiological rhythms that may begin at the very onset of such pathological developments. Contrary to the bottom-up tradition, post-genomic biology now provides a much more favorable zeitgeist for integrating holistic interventions into both biomedical research and mainstream medical practice. Such integrative efforts may not only improve our understanding of the system-level dynamics of health and disease, but also open larger possibilities for new clinical options that bring positive changes in pathological evolution while globally increasing the quality of life of patients, caregivers, and their families. To facilitate the transition, we renew the invitation to join our international open-source research project in widely testing the Creative Psychosocial Genomic Healing Experience (CPGHE), an evidence-based construction of a holistic and naturalist intervention, with a standardized version specially designed for easy adaptation to scientific investigation toward an evidence-based integration of holistic intervention possibilities.

    Applied Visualization in the Neurosciences and the Enhancement of Visualization through Computer Graphics

    The complexity and size of measured and simulated data in many fields of science are increasing constantly. Technical evolution allows for capturing smaller features and more complex structures in the data. To make this data accessible to scientists, efficient and specialized visualization techniques are required. Maximum efficiency and value for the user can only be achieved by adapting visualization to the specific application area and the specific requirements of the scientific field. Part I: In the first part of my work, I address visualization in the neurosciences. Neuroscience tries to understand the human brain, from its smallest parts up to its global infrastructure. To achieve this ambitious goal, neuroscience uses a combination of three-dimensional data from a myriad of sources, such as MRI, CT, or functional MRI. To handle this diversity of data types and sources, neuroscientists need specialized and well-evaluated visualization techniques. As a start, I introduce an extensive software package called "OpenWalnut". It forms the common base for developing and using visualization techniques with our neuroscientific collaborators. Using OpenWalnut, standard and novel visualization approaches are available to neuroscientific researchers as well. Afterwards, I introduce a very specialized method to illustrate the causal relations between brain areas, which were previously representable only via abstract graph models. I finalize the first part of my work with an evaluation of several standard visualization techniques in the context of simulated electrical fields in the brain. The goal of this evaluation was to clarify the advantages and disadvantages of the visualization techniques for the neuroscientific community. We exemplified these using clinically relevant scenarios.
    Part II: Besides data preprocessing, which plays a tremendous role in visualization, the final graphical representation of the data is essential to understand its structure and features. The graphical representation of data can be seen as the interface between the data and the human mind. The second part of my work focuses on improving the structural and spatial perception of visualization, that is, on improving this interface. Unfortunately, visual improvements using computer graphics methods from the computer game industry are often viewed skeptically. In the second part, I show that such methods can be applied to existing visualization techniques to improve spatiality and to emphasize structural details in the data. I use a computer graphics paradigm called "screen space rendering". Its advantage, amongst others, is its seamless applicability as a post-processing step to nearly every visualization technique. I start with two methods that improve the perception of mesh-like structures on arbitrary surfaces. Those mesh structures represent second-order tensors and are generated by a method named "TensorMesh". Afterwards, I show a novel approach to optimally shade line and point data renderings. With this technique it is possible for the first time to emphasize both local details and global spatial relations in dense line and point data.
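The appeal of screen space rendering is that shading is computed only from per-pixel buffers of an already rendered image, which is why it can be bolted onto nearly any visualization as a post-process. A minimal CPU sketch of that idea (illustrative only, not the thesis implementation, which runs on the GPU): reconstruct normals from a depth buffer and apply Lambertian shading.

```python
import numpy as np

def screen_space_shade(depth, light=(0.0, 0.0, 1.0)):
    """Shade an image purely from its depth buffer (H x W, values in [0, 1]).

    No scene geometry is needed: normals are reconstructed from the
    screen-space depth gradients, so this works as a post-process on any
    visualization that can output a depth buffer.
    """
    dz_dy, dz_dx = np.gradient(depth.astype(float))
    # View-space normal per pixel from the local depth slopes.
    n = np.stack([-dz_dx, -dz_dy, np.ones_like(depth, dtype=float)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    l = np.asarray(light, float)
    l = l / np.linalg.norm(l)
    # Lambertian diffuse term, clamped to [0, 1].
    return np.clip(n @ l, 0.0, 1.0)

# A flat depth buffer has zero depth gradients everywhere, so every
# reconstructed normal faces the head-on light: uniform full brightness.
shading = screen_space_shade(np.full((8, 8), 0.5))
```

Production screen-space techniques sample a neighborhood per pixel (e.g., ambient-occlusion-style darkening), but the data dependency is the same: only image-space buffers, never the original geometry.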

    Bioelectrical User Authentication

    There has been tremendous growth in mobile devices, including mobile phones and tablets, in recent years. The use of mobile phones is especially prevalent due to their increasing functionality and capacity. Most mobile phones available now are smart phones with considerable processing capability, and they are therefore used to process large volumes of information. The information contained in these smart phones needs to be protected against unauthorised persons getting hold of personal data. To verify a legitimate user before granting access to the phone's information, the user authentication mechanism should be robust enough to meet present security challenges. The present approach to user authentication is cumbersome and fails to consider the human factor. The point-of-entry mechanism is intrusive, forcing users to authenticate every time, irrespective of the time since the last authentication. The use of biometrics is identified as a more reliable method for implementing transparent and non-intrusive user authentication. Transparent authentication using biometrics provides the opportunity for more convenient and secure authentication than secret-knowledge or token-based approaches. The ability to apply biometrics in a transparent manner improves authentication security by providing a reliable way to authenticate smart phone users. As such, research is required to investigate new modalities that would easily operate within the constraints of a continuous and transparent authentication system. This thesis explores the use of bioelectrical signals and contextual information for a non-intrusive approach to authenticating a user of a mobile device. From the fusion of bioelectrical signals and context-awareness information, three algorithms were created to discriminate subjects, with overall Equal Error Rates (EER) of 3.4%, 2.04% and 0.27%, respectively.
    Based on the analysis of the multi-algorithm implementation, a novel architecture is proposed using a multi-algorithm biometric authentication system to authenticate a user of a smart phone. The framework is designed to be continuous and transparent, with the application of advanced intelligence to further improve the authentication result. The proposed framework removes the inconvenience of memorising a password or passphrase, carrying a token, or capturing a biometric sample in an intrusive manner. The framework is evaluated through simulation with the application of a voting scheme. The simulation of the voting scheme using majority voting improved the performance of the combined algorithm (security level 2) to an FRR of 22% and FAR of 0%, the Active algorithm (security level 2) to an FRR of 14.33% and FAR of 0%, and the Non-active algorithm (security level 3) to an FRR of 10.33% and FAR of 0%.
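The majority-voting fusion and the FRR/FAR figures above can be sketched as follows (illustrative only; the sample decisions are hypothetical and not taken from the thesis data): each algorithm independently accepts or rejects an authentication attempt, the fused decision is the majority vote, and the error rates are computed over genuine and impostor attempts.

```python
from collections import Counter

def majority_vote(decisions):
    """Fuse per-algorithm accept (True) / reject (False) decisions by majority.

    Ties count as reject, the conservative choice for authentication.
    """
    votes = Counter(decisions)
    return votes[True] > votes[False]

def error_rates(genuine_decisions, impostor_decisions):
    """FRR: fraction of genuine attempts rejected.
    FAR: fraction of impostor attempts accepted."""
    frr = sum(not d for d in genuine_decisions) / len(genuine_decisions)
    far = sum(bool(d) for d in impostor_decisions) / len(impostor_decisions)
    return frr, far

# Two of three algorithms accept, so the fused decision is accept.
fused = majority_vote([True, True, False])

# Hypothetical fused decisions: 4 genuine attempts (one rejected),
# 4 impostor attempts (all rejected) -> FRR 25%, FAR 0%.
frr, far = error_rates([True, True, False, True], [False, False, False, False])
```

A FAR of 0% at the cost of a higher FRR, as reported above, is the expected trade-off of requiring agreement among algorithms: impostors rarely fool a majority, but genuine users are rejected more often.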