
    Transcranial Magnetic Stimulation-coil design with improved focality

    Transcranial Magnetic Stimulation (TMS) is a neuromodulation technique that can be used as a non-invasive therapy for various neurological disorders. In TMS, a time-varying magnetic field generated by an electromagnetic coil placed on the scalp induces an electric field inside the brain. The coil geometry plays an important role in determining the focality and depth of penetration of the induced electric field responsible for stimulation. Clinicians and basic scientists are interested in stimulating a localized area of the brain while minimizing the stimulation of surrounding neural networks. In this paper, a novel coil, the Quadruple Butterfly Coil (QBC), is proposed with improved focality over the commercial Figure-8 coil. Finite element simulations were conducted with both the QBC and the conventional Figure-8 coil. The two coils' stimulation profiles were assessed with 50 anatomically realistic MRI-derived head models. The coils were positioned on the vertex and on the scalp over the dorsolateral prefrontal cortex to stimulate the brain. Computer modeling of the coils was used to determine the parameters of interest: volume of stimulation, maximum electric field, location of the maximum electric field, and area of stimulation across all 50 head models for both coils.
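    The stimulation mechanism summarized above (a time-varying coil current inducing an electric field in the tissue) can be illustrated with a rough free-space calculation. The following is a minimal sketch, not the paper's finite element pipeline: it evaluates only the primary field E = -dA/dt of an idealized figure-8 coil via a discretized line integral, ignoring the secondary fields from tissue boundaries that the 50 MRI-derived head models capture; the coil radius, number of segments, and dI/dt value are assumptions chosen for illustration.

```python
# Hedged sketch: primary (free-space) electric field induced by an idealized
# figure-8 TMS coil, approximated as E = -dA/dt with A from a discretized
# Biot-Savart-style line integral. Coil dimensions and dI/dt are assumed values.
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def figure8_coil(radius=0.035, n_seg=200):
    """Segment midpoints and dl vectors for two coplanar, counter-wound circular loops."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_seg, endpoint=False)
    mids, dls = [], []
    for center_x, sense in ((-radius, 1.0), (radius, -1.0)):  # opposite winding senses
        loop = np.stack([center_x + radius * np.cos(theta),
                         radius * np.sin(theta),
                         np.zeros_like(theta)], axis=1)
        seg = np.roll(loop, -1, axis=0) - loop   # chord vectors between consecutive points
        mids.append(loop + 0.5 * seg)
        dls.append(sense * seg)
    return np.vstack(mids), np.vstack(dls)

def primary_E(points, coil_mids, coil_dls, dIdt=1.0e8):
    """Peak primary field E = -(mu0 / 4*pi) * dI/dt * sum(dl / |r - r'|), in V/m."""
    E = np.zeros_like(points, dtype=float)
    for mid, dl in zip(coil_mids, coil_dls):
        dist = np.linalg.norm(points - mid, axis=1, keepdims=True)
        E += dl / dist
    return -(MU0 / (4.0 * np.pi)) * dIdt * E

mids, dls = figure8_coil()
targets = np.array([[0.0, 0.0, -0.02], [0.03, 0.0, -0.02]])  # points 2 cm below the coil plane
print(np.linalg.norm(primary_E(targets, mids, dls), axis=1))  # field magnitudes in V/m
```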

    Performance evaluation considering iterations per phase and SA temperature in WMN-SA system

    One of the key advantages of Wireless Mesh Networks (WMNs) is their ability to provide cost-efficient broadband connectivity. Achieving network connectivity and user coverage raises issues related to the node placement problem. In this work, we consider the Simulated Annealing (SA) temperature and the number of iterations per phase for the router node placement problem in WMNs. We want to find the distribution of router nodes that provides the best network connectivity and the best coverage of a set of normally distributed clients. From simulation results, we found how to optimize both the size of the giant component and the number of covered mesh clients. When the number of iterations per phase is large, the WMN-SA system performs better. Regarding the SA temperature, the performance is almost the same for temperatures of 0 and 1; for temperatures of 2, 3, or more, the performance decreases because there are many kick-ups (accepted worsening moves).
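    Since the abstract describes an SA search in which both the phase length and the temperature control how often worse placements (kick-ups) are accepted, here is a minimal, self-contained sketch of that idea. It is not the WMN-SA system itself: the area size, coverage radius, node counts, neighbour move, and cooling factor are all assumptions, and the fitness simply adds the giant component size to the number of covered clients.

```python
# Hedged sketch of SA-based mesh router placement (not the paper's WMN-SA system):
# illustrates the objective (giant component size + covered clients) and the roles
# of the SA temperature and the number of iterations per phase.
import math, random

AREA, RADIUS, N_ROUTERS, N_CLIENTS = 100.0, 20.0, 16, 48
random.seed(0)
clients = [(min(max(random.gauss(AREA / 2, AREA / 6), 0), AREA),
            min(max(random.gauss(AREA / 2, AREA / 6), 0), AREA)) for _ in range(N_CLIENTS)]

def giant_component(routers):
    """Size of the largest set of routers connected by overlapping coverage radii."""
    parent = list(range(len(routers)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(routers)):
        for j in range(i + 1, len(routers)):
            if math.dist(routers[i], routers[j]) <= 2 * RADIUS:
                parent[find(i)] = find(j)
    sizes = {}
    for i in range(len(routers)):
        sizes[find(i)] = sizes.get(find(i), 0) + 1
    return max(sizes.values())

def covered(routers):
    """Number of clients within RADIUS of at least one router."""
    return sum(any(math.dist(c, r) <= RADIUS for r in routers) for c in clients)

def fitness(routers):
    return giant_component(routers) + covered(routers)

def anneal(temperature=1.0, iters_per_phase=200, phases=30):
    cur = [(random.uniform(0, AREA), random.uniform(0, AREA)) for _ in range(N_ROUTERS)]
    cur_f = fitness(cur)
    best_f, t = cur_f, temperature
    for _ in range(phases):
        for _ in range(iters_per_phase):
            cand = list(cur)
            k = random.randrange(N_ROUTERS)
            cand[k] = (min(max(cand[k][0] + random.gauss(0, 5), 0), AREA),
                       min(max(cand[k][1] + random.gauss(0, 5), 0), AREA))
            cand_f = fitness(cand)
            # Always accept improvements; accept worse moves (kick-ups) with a
            # probability that grows with the temperature.
            if cand_f >= cur_f or (t > 0 and random.random() < math.exp((cand_f - cur_f) / t)):
                cur, cur_f = cand, cand_f
                best_f = max(best_f, cur_f)
        t *= 0.9  # cool down between phases
    return best_f

print(anneal(temperature=1.0, iters_per_phase=200))
```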

    Benchmark Analysis of Representative Deep Neural Network Architectures

    This work presents an in-depth analysis of the majority of the deep neural networks (DNNs) proposed in the state of the art for image recognition. For each DNN, multiple performance indices are observed, such as recognition accuracy, model complexity, computational complexity, memory usage, and inference time. The behavior of these performance indices and some combinations of them is analyzed and discussed. To measure the indices, we experiment with the DNNs on two different computer architectures: a workstation equipped with an NVIDIA Titan X Pascal and an embedded system based on an NVIDIA Jetson TX1 board. This experimentation allows a direct comparison between DNNs running on machines with very different computational capacities. This study is useful for researchers, to gain a complete view of what solutions have been explored so far and which research directions are worth exploring in the future, and for practitioners, to select the DNN architecture(s) that best fit the resource constraints of practical deployments and applications. To complete this work, all the DNNs, as well as the software used for the analysis, are available online. Comment: Will appear in IEEE Access.
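    As a small illustration of how two of the listed indices, model complexity and inference time, can be measured in practice, here is a hedged sketch using PyTorch and torchvision; it is not the authors' released analysis software, and the model choices, batch size, warm-up and run counts, and the weights=None constructor argument (torchvision >= 0.13) are assumptions.

```python
# Hedged sketch (not the paper's tooling): measure parameter count and average
# inference time for a couple of torchvision models on the available device.
import time
import torch
import torchvision.models as models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def benchmark(model, batch=1, size=224, warmup=5, runs=20):
    model = model.to(device).eval()
    x = torch.randn(batch, 3, size, size, device=device)
    n_params = sum(p.numel() for p in model.parameters())
    with torch.no_grad():
        for _ in range(warmup):           # warm-up iterations are excluded from timing
            model(x)
        if device.type == "cuda":
            torch.cuda.synchronize()      # wait for queued GPU work before timing
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
        if device.type == "cuda":
            torch.cuda.synchronize()
    return n_params / 1e6, (time.perf_counter() - start) / runs * 1e3

for name, ctor in [("resnet18", models.resnet18), ("mobilenet_v2", models.mobilenet_v2)]:
    params_m, ms = benchmark(ctor(weights=None))
    print(f"{name}: {params_m:.1f} M params, {ms:.1f} ms per inference on {device.type}")
```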