11 research outputs found

    Interplay between astrocytic and neuronal networks during virtual navigation in the mouse hippocampus

    Encoding of spatial information in hippocampal place cells is believed to contribute to spatial cognition during navigation. Whether the processing of spatial information is limited exclusively to neurons or also involves other cell types in the brain, e.g. glial cells, is currently unknown. In this thesis work, I developed an analysis pipeline to tackle this question using statistical methods and Information Theory approaches. I applied these analytical tools to two experimental data sets in which neuronal place cells in the hippocampus were imaged using two-photon microscopy, while selectively manipulating astrocytic calcium dynamics with pharmacogenetics during virtual navigation. Using custom analytical methods, we observed that pharmacogenetic perturbation of astrocytic calcium dynamics, through clozapine-N-oxide (CNO) injection, induced a significant increase in neuronal place field and response profile width compared to control conditions. The distributions of neuronal place field and response profile centers were also significantly different upon perturbation of astrocytic calcium dynamics compared to control conditions. Moreover, we found contrasting effects of astrocytic calcium dynamics perturbation on the neuronal content of spatial information in the two data sets. In the first data set, we found that CNO injection resulted in a significant increase in the average information content across all neurons. In the second data set, we instead found that mutual information values were not significantly different upon CNO application compared to controls. Although the presented results are still preliminary and more experiments and analyses are needed, these findings suggest that astrocytic calcium dynamics may actively shape the way hippocampal neuronal networks encode spatial information during virtual navigation. These data thus suggest a complex and tight interplay between neuronal and astrocytic networks during higher cognitive functions.
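    As an illustration of the information-theoretic quantities mentioned above, a minimal sketch of discrete mutual information between binned position and binarized neural activity might look as follows (a generic estimator with hypothetical variable names, not the thesis' actual pipeline):

```python
import numpy as np

def mutual_information(x, y):
    """Discrete mutual information I(X;Y) in bits between two
    integer-coded variables (e.g. binned position vs. binarized
    neural activity)."""
    # build the joint distribution from co-occurrence counts
    xs, ys = np.unique(x), np.unique(y)
    joint = np.zeros((xs.size, ys.size))
    for xi, yi in zip(x, y):
        joint[np.searchsorted(xs, xi), np.searchsorted(ys, yi)] += 1
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over position
    py = pxy.sum(axis=0, keepdims=True)   # marginal over activity
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())
```

    With such an estimator, perfectly coupled variables yield 1 bit (for two equiprobable states) while independent variables yield 0, which is the kind of contrast the CNO-versus-control comparison would probe.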

    Route discovery schemes in Mobile Ad hoc Networks with variable-range transmission power

    Broadcasting in MANETs is important for route discovery but consumes significant amounts of power, which is difficult to renew for devices that rely heavily on batteries. Most existing routing protocols make use of a broadcast scheme known as simple flooding. In such an on-demand routing protocol (e.g. AODV), the source node originates a Route Request (RREQ) packet that is blindly rebroadcast via neighbouring nodes to all nodes in the network. Simple flooding leads to serious redundancy, together with contention and collisions, which is often called the broadcast storm problem. This thesis proposes two improvement strategies to reduce energy consumption: topology control (adjusting transmission power) and reduced retransmissions (suppressing redundant rebroadcasts). The main idea is to reduce the energy consumed per broadcast during route discovery. An Energy Efficient Adaptive Forwarding Algorithm (EEAFA) is proposed to reduce the impact of RREQ packet flooding in on-demand routing protocols. The algorithm operates in two phases: 1) a topology construction phase, which establishes a more scalable and energy-efficient network structure in which nodes can adjust their transmission power range dynamically based on their local density; and 2) a forwarding node determination phase, which utilises network information provided by the constructed topology, so that nodes independently decide whether to forward an RREQ packet without relying on GPS or any distance calculations. A further Enhanced EEAFA (E-EEAFA) algorithm is also proposed, which combines two techniques: graph colouring and sectoring. Graph colouring increases awareness at network nodes to improve the determination of forwarding nodes, while sectoring divides neighbours into different forwarding sectors. This helps to reduce overlap between forwarding nodes and to select suitable nodes in each sector to forward RREQ packets. These techniques are employed in a distributed manner and collaborate to reduce the number of forwarding nodes, which in turn reduces the volume of RREQ packets populating the network. The effectiveness of these algorithms has been validated through NS2 simulation studies that are detailed in the thesis.
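    The density-adaptive forwarding idea can be sketched as a purely local decision rule; the inputs and thresholds below are illustrative assumptions, not the EEAFA specification:

```python
def should_forward(neighbor_count, already_covered, density_threshold=8):
    """Decide locally whether to rebroadcast an RREQ.

    In a dense neighborhood, a node rebroadcasts only if the forward
    would still reach a substantial number of uncovered neighbors;
    this suppresses the redundant transmissions behind the
    'broadcast storm' problem. All thresholds are illustrative."""
    if neighbor_count == 0:
        return False                      # nothing to reach
    uncovered = neighbor_count - already_covered
    if neighbor_count >= density_threshold:
        # dense region: forward only if most neighbors are still uncovered
        return uncovered > neighbor_count // 2
    # sparse region: forward whenever anyone new would be reached
    return uncovered > 0
```

    A real protocol would derive `already_covered` from overheard broadcasts of one-hop neighbours; the point of the sketch is that no GPS or distance computation is involved.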

    A Survey on the Application of Evolutionary Algorithms for Mobile Multihop Ad Hoc Network Optimization Problems

    Evolutionary algorithms are metaheuristic algorithms that provide quasi-optimal solutions in a reasonable time. They have been applied to many optimization problems in a large number of scientific areas. In this survey paper, we focus on the application of evolutionary algorithms to solve optimization problems related to a type of complex network: mobile multihop ad hoc networks. Since their origin, mobile multihop ad hoc networks have evolved, causing new types of multihop networks to appear, such as vehicular ad hoc networks and delay-tolerant networks, and leading to new issues and optimization problems. In this survey, we review the main work presented for each type of mobile multihop ad hoc network, and we also present some innovative ideas and open challenges to guide further research on this topic.
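    As background, the class of evolutionary algorithm surveyed here can be sketched as a minimal binary genetic algorithm with tournament selection, one-point crossover, and bit-flip mutation; all parameters and the fitness interface are illustrative, not drawn from any surveyed work:

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=30,
                      generations=100, p_mut=0.05, seed=1):
    """Minimal binary GA: tournament selection (size 2), one-point
    crossover, bit-flip mutation, with elitist tracking of the best
    individual seen. Returns the best bitstring found."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def pick():
            a, b = rng.sample(pop, 2)     # binary tournament
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut) for b in child]  # mutation
            nxt.append(child)
        pop = nxt
        best = max(pop + [best], key=fitness)
    return best
```

    On a toy objective such as OneMax (maximize the number of 1-bits), this loop converges quickly; the surveyed papers replace the bitstring encoding and fitness with network-specific formulations.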

    A Multi-Objective Genetic Algorithm-Based Adaptive Weighted Clustering Protocol in VANET

    Vehicular Ad hoc NETworks (VANETs) are a key component in the recent development of Intelligent Transportation Systems (ITSs). A VANET has a highly dynamic and partitioned network topology due to the constant and rapid movement of vehicles. Recently, clustering algorithms have been widely used as control schemes to make VANET topology less dynamic for MAC, routing, and security protocols. An efficient clustering algorithm must take into consideration all the necessary information related to node mobility. In this paper, we propose an Adaptive Weighted Clustering Protocol (AWCP), specially designed for vehicular networks, which takes the highway ID, direction of vehicles, position, speed, and number of neighboring vehicles into account in order to enhance network topology stability. However, the multiple control parameters of AWCP make parameter tuning a non-trivial problem. In order to optimize the protocol, we define a multi-objective problem whose inputs are the AWCP's parameters and whose objectives are: providing as stable a cluster structure as possible, maximizing the data delivery rate, and reducing the clustering overhead. We then address this multi-objective problem with the Non-dominated Sorting Genetic Algorithm II (NSGA-II). We evaluate and compare its performance with that of other multi-objective optimization techniques: Multi-objective Particle Swarm Optimization (MOPSO) and Multi-objective Differential Evolution (MODE). The experimental analysis reveals that NSGA-II improves on the results of MOPSO and MODE in terms of the spacing, spread, ratio of non-dominated solutions, and generational distance metrics used for comparison.
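    A weighted clustering metric of the kind described can be sketched as a linear combination of mobility features, where a lower score marks a better cluster-head candidate; the weights and feature set below are illustrative assumptions, not the actual AWCP parameters:

```python
def cluster_head_score(speed_dev, avg_distance, n_neighbors,
                       w_speed=0.5, w_dist=0.3, w_deg=0.2,
                       ideal_degree=10):
    """Weighted suitability score for cluster-head election (lower is
    better). Combines the vehicle's deviation from the neighborhood
    mean speed, its mean distance to neighbors, and its deviation
    from an ideal node degree. Weights are illustrative only."""
    return (w_speed * speed_dev
            + w_dist * avg_distance
            + w_deg * abs(n_neighbors - ideal_degree))
```

    Tuning the weights (and thresholds like `ideal_degree`) is exactly the kind of parameter-optimization problem the paper hands to the multi-objective genetic algorithm.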

    Workload-sensitive Timing Behavior Analysis for Fault Localization in Software Systems

    Software timing behavior measurements, such as response times, often show high statistical variance. This variance can make analysis difficult or even threaten the applicability of statistical techniques. This thesis introduces a method for improving the analysis of software response time measurements that show high variance. Our approach can find relations between timing behavior variance and both trace shape information and workload intensity information. This relation is used to effectively remove variance from timing behavior measurements, which can make timing behavior analysis more robust (e.g., improved confidence and precision) and faster (e.g., fewer simulation runs and shorter monitoring periods). The thesis contributes TracSTA (Trace-Context-Sensitive Timing Behavior Analysis) and WiSTA (Workload-Intensity-Sensitive Timing Behavior Analysis). TracSTA uses trace shape information (i.e., the shape of the control flow corresponding to a software operation execution) and WiSTA uses workload intensity metrics (e.g., the number of concurrent software executions) to create context-specific timing behavior profiles. Both the applicability and the effectiveness are evaluated in several case studies and field studies. The evaluation shows a strong relation between timing behavior and the metrics considered by TracSTA and WiSTA. Additionally, a fault localization approach for enterprise software systems is presented as an application scenario; it uses the timing behavior data provided by TracSTA and WiSTA for anomaly detection.
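    The core idea, partitioning response-time measurements by context so that each partition shows less variance than the pooled data, can be sketched as follows (a toy illustration with hypothetical data shapes, not the thesis implementation):

```python
from collections import defaultdict
from statistics import mean, pvariance

def variance_by_context(samples):
    """Partition (context, response_time) samples by context and
    compare the overall variance with the mean within-context
    variance. The context may be a trace shape (as in TracSTA) or a
    workload-intensity level (as in WiSTA)."""
    groups = defaultdict(list)
    for ctx, rt in samples:
        groups[ctx].append(rt)
    overall = pvariance([rt for _, rt in samples])
    within = mean(pvariance(g) for g in groups.values())
    return overall, within
```

    When the context strongly determines the response time, the within-context variance is far smaller than the pooled variance, which is the effect TracSTA and WiSTA exploit for more precise profiles and anomaly detection.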

    Energy-aware scheduling in heterogeneous computing systems

    In the last decade, grid computing systems emerged as useful providers of the computing power required for solving complex problems. The classic formulation of the scheduling problem in heterogeneous computing systems is NP-hard, so approximation techniques are required for solving real-world scenarios of this problem. This thesis tackles the problem of scheduling tasks in a heterogeneous computing environment in reduced execution times, considering the schedule length and the total energy consumption as the optimization objectives. An efficient multithreading local search algorithm for solving the multi-objective scheduling problem in heterogeneous computing systems, named ME-MLS, is presented. The proposed method follows a fully multi-objective approach, applying a Pareto-based dominance search that is executed in parallel using several threads. The experimental analysis demonstrates that the new multithreading algorithm outperforms a set of fast and accurate two-phase deterministic heuristics based on the traditional MinMin. The new ME-MLS method achieves significant improvements in both the makespan and energy consumption objectives in reduced execution times for a large set of testbed instances, while exhibiting very good scalability. ME-MLS was evaluated solving instances comprised of up to 2048 tasks and 64 machines. In order to scale the problem dimension even further and tackle large-sized instances, the Graphics Processing Unit (GPU) architecture is considered. This line of future work has been initially tackled with gPALS, a hybrid CPU/GPU local search algorithm for efficiently tackling a single-objective heterogeneous computing scheduling problem. gPALS shows very promising results, being able to tackle instances of up to 32768 tasks and 1024 machines in reasonable execution times.
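    A Pareto-based dominance search of the kind described rests on a dominance test over the minimization objectives (here, makespan and energy); a minimal sketch of that test and of non-dominated filtering, not the actual ME-MLS code:

```python
def dominates(a, b):
    """True if objective tuple a Pareto-dominates b (all objectives
    minimized): a is no worse in every objective and strictly better
    in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """Keep the non-dominated solutions from a list of objective
    tuples, e.g. (makespan, energy)."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]
```

    A multi-objective local search maintains such a front as its archive: each thread proposes neighboring schedules and the archive keeps only those no other solution dominates.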

    Information Theoretical Prediction of Alternative Splicing with Application to Type-2 Diabetes Mellitus.

    For basic biomedical research, it is of particular interest to determine the activity of genes in different tissues of an organism. Gene activity is determined here by the quantity of a gene's direct products, the transcripts. Transcript abundance is quantified using experimental technologies and is referred to as gene expression. A gene, however, does not always produce only one transcript: it can generate several transcripts through parallel encoding, so-called alternative splicing. Such a mechanism is needed to explain the large number of proteins relative to the comparatively small number of genes: about 25,000 genes in humans versus 20,000 in the nematode Caenorhabditis elegans. Alternative splicing controls the expression of different transcript variants under different conditions, and it is not surprising that even small errors in splicing can have pathological effects, i.e. trigger diseases. Since organisms such as humans possess roughly 25,000 different genes, high-throughput methods of data generation had to be developed to analyse global gene expression, and with alternative splicing each of these genes corresponds to several transcripts. Only recently has it become possible to generate the necessary amount of data, using technologies such as microarrays or next-generation sequencing. Data analysis methods must keep pace with this technical progress in order to address new research questions. In this thesis, a software pipeline is presented for the analysis of alternative splicing as well as differential gene expression. It was developed and implemented in the statistical programming language and environment R with BioConductor, and comprises the steps of quality control, preprocessing, statistical analysis of expression changes, and gene set analysis. For the detection of alternative splicing, information theory is introduced into the field of gene expression. The proposed solution consists of an extension of Shannon entropy to the detection of altered transcript abundances, called ARH (Alternative splicing Robust prediction by entropy). The utility of the developed methods and implementations is demonstrated on data for type-2 diabetes mellitus. Marker genes are determined via data integration and meta-analysis of different data sources, with a focus on differential expression. Alternative splicing is then investigated with special focus on the marker genes and on functional gene sets, i.e. metabolic pathways.
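    ARH builds on Shannon entropy over relative transcript abundances; a minimal sketch of that underlying quantity (the base entropy only, not the ARH extension itself) could look like this:

```python
import math

def transcript_entropy(abundances):
    """Shannon entropy (in bits) of the relative transcript
    abundances of one gene. A gene expressing a single transcript has
    entropy 0; equal use of k splice variants gives log2(k)."""
    total = sum(abundances)
    probs = [a / total for a in abundances if a > 0]  # drop zero counts
    return -sum(p * math.log2(p) for p in probs)
```

    Shifts in this entropy between conditions (e.g. diabetic versus control tissue) indicate a change in how a gene distributes expression across its splice variants.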

    Proceedings of the inaugural construction management and economics ‘Past, Present and Future’ conference CME25, 16-18 July 2007, University of Reading, UK

    This conference was an unusual and interesting event. Celebrating 25 years of Construction Management and Economics provides us with an opportunity to reflect on the research that has been reported over the years, to consider where we are now, and to think about the future of academic research in this area. Hence the sub-title of this conference: “past, present and future”. Looking through these papers, some things are clear. First, the range of topics considered interesting has expanded hugely since the journal was first published. Second, the research methods are also more diverse. Third, the involvement of wider groups of stakeholders is evident. There is a danger that this might lead to dilution of the field. But my instinct has always been to argue against the notion that Construction Management and Economics represents a discipline, as such. Granted, there are plenty of university departments around the world that would justify the idea of a discipline. But the vast majority of academic departments that contribute to the life of this journal carry different names. Indeed, the range and breadth of methodological approaches to the research reported in Construction Management and Economics indicate that there are several different academic disciplines being brought to bear on the construction sector. Some papers are based on economics, some on psychology and others on operational research, sociology, law, statistics, information technology, and so on. This is why I maintain that construction management is not an academic discipline, but a field of study to which a range of academic disciplines are applied. This may be why it is so interesting to be involved in this journal. The problems to which the papers are applied develop and grow. But the broad topics of the earliest papers in the journal are still relevant today.
What has changed a lot is our interpretation of the problems that confront the construction sector all over the world, and the methodological approaches to resolving them. There is a constant difficulty in dealing with topics as inherently practical as these. While the demands of the academic world are driven by the need for the rigorous application of sound methods, the demands of the practical world are quite different. It can be difficult to meet the needs of both sets of stakeholders at the same time. However, increasing numbers of postgraduate courses in our area result in larger numbers of practitioners with a deeper appreciation of what research is all about, and how to interpret and apply the lessons from research. It also seems that there are contributions coming not just from construction-related university departments, but also from departments with identifiable methodological traditions of their own. I like to think that our authors can publish in journals beyond the construction-related areas, to disseminate their theoretical insights into other disciplines, and to contribute to the strength of this journal by citing our articles in more mono-disciplinary journals. This would contribute to the future of the journal in a very strong and developmental way. The greatest danger we face is in excessive self-citation, i.e. referring only to sources within the CM&E literature or, worse, referring only to other articles in the same journal. The only way to ensure a strong and influential position for journals and university departments like ours is to be sure that our work is informing other academic disciplines. This is what I would see as the future, our logical next step. 
If, as a community of researchers, we are not producing papers that challenge and inform the fundamentals of research methods and analytical processes, then no matter how practically relevant our output is to the industry, it will remain derivative and secondary, based on the methodological insights of others. The balancing act between methodological rigour and practical relevance is a difficult one, but not, of course, a balance that has to be struck in every single paper.

    Annual Report
