
    Learning preferences for personalisation in a pervasive environment

    With ever-increasing access to technological devices, services and applications, there is an increasing burden on the end user to manage and configure such resources. This burden will continue to grow as the vision of pervasive environments, with ubiquitous access to a plethora of resources, becomes a reality. It is therefore essential that appropriate mechanisms are developed to relieve the user of such burdens. These mechanisms include personalisation systems that can adapt resources on behalf of the user in an appropriate way, based on the user's current context and goals. The key knowledge base of many personalisation systems is the set of user preferences that indicate which adaptations should be performed under which contextual situations. This thesis investigates the challenges of developing a system that can learn such preferences by monitoring user behaviour within a pervasive environment. Based on the findings of related work and experience from EU project research, several key design requirements for such a system are identified. These requirements drive the design of a system that can learn accurate and up-to-date preferences for personalisation in a pervasive environment. A standalone prototype of the preference learning system has been developed. In addition, the preference learning system has been integrated into a pervasive platform developed through an EU research project. The preference learning system is fully evaluated in terms of its machine learning performance and its utility in a pervasive environment with real end users.
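    As a concrete illustration of the idea, the sketch below shows one simple way context-to-action preferences could be derived from monitored behaviour (a majority vote per observed context). It is a minimal Python example for illustration only; the names Observation and learn_preferences are hypothetical, and the thesis's actual learning algorithm is not reproduced here.

# Illustrative sketch only: derive context -> action preference rules from a
# log of monitored user behaviour by taking the most frequent action per context.
from collections import Counter, defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Observation:
    context: tuple   # e.g. (("location", "office"), ("time", "morning"))
    action: str      # e.g. "volume=low"

def learn_preferences(history):
    """Map each observed context to its most frequently chosen action."""
    counts = defaultdict(Counter)
    for obs in history:
        counts[obs.context][obs.action] += 1
    return {ctx: actions.most_common(1)[0][0] for ctx, actions in counts.items()}

history = [
    Observation((("location", "office"), ("time", "morning")), "volume=low"),
    Observation((("location", "office"), ("time", "morning")), "volume=low"),
    Observation((("location", "home"), ("time", "evening")), "volume=high"),
]
print(learn_preferences(history))
# {(('location', 'office'), ('time', 'morning')): 'volume=low', ...}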

    Second CLIPS Conference Proceedings, volume 1

    The topics covered at the Second CLIPS Conference, held at the Johnson Space Center, September 23-25, 1991, are given. Topics include rule groupings, fault detection using expert systems, decision making using expert systems, knowledge representation, computer-aided design, and debugging expert systems.

    Vitruv: Specifying Temporal Aspects of Multimedia Presentations - A Transformational Approach based on Intervals

    The development of large multimedia applications reveals problems similar to those of developing large software systems. This is not surprising, as multimedia applications are a special kind of software system. Our experience within the Altenberg Cathedral Project showed, however, that particular problems arise during the development of multimedia applications which do not appear in traditional software development. This is the starting point of the research reported in this thesis. In this introduction, we start with a report on the Altenberg Cathedral Project (sec. 1.1), resulting in a problem statement and a list of requirements for possible solutions. After that we propose our solution, named Vitruv (sec. 1.2 on page 11), and explain how it works in general (sec. 1.3 on page 12). This is followed by a discussion of key aspects of Vitruv and its relations to other approaches (sec. 1.4 on page 14). The introduction closes with a brief outline of the thesis.

    Assessment of goal-directed closed-loop management in intensive care medicine

    Given an aging population, a shortage of nursing staff and a continuously increasing workload, automation in the medical sector is an important aspect of future intensive care. Although automation and machine learning are current research topics, progress is still very limited in comparison to other application areas. Probably one of the most serious problems is data shortage in a heterogeneous landscape of medical devices with limited interfaces and various protocols. In addition, the recording of data or, even more so, the evaluation of automation is limited by a complex legal framework. Given these complications and the sensitive legal nature of medical records, only very limited data is accessible for further analysis and development of automated systems. For this reason, within the context of this thesis, various solutions for data acquisition and automation were developed and evaluated alongside two clinical studies utilizing a large animal model in a realistic intensive care setting at the University Hospital Tübingen. Foremost, to overcome the problems of data availability and interconnection of medical devices, a software framework for data collection and remote control using a client-server architecture was developed, and significant amounts of research data could be collected in a central database. Furthermore, a closed-loop controller based on fuzzy logic was developed and used for management of end-tidal CO2, glucose and other parameters to stabilize the animal subjects during therapy and reduce caregivers' workload. In addition to the fuzzy controller, closed-loop management of temperature and anticoagulation could be established by developing hardware interfaces for a forced-air warming unit and a point-of-care analysis device, respectively. Besides further reducing caregivers' workload, such systems can provide additional patient safety and allow management in settings where human supervision may not be present at all times. One general problem encountered for closed-loop control in a medical setting is the limited availability of measurements, especially if manual blood withdrawals are required. As an initial step to address this problem, measured parameters from other devices were evaluated as potential surrogates in a comparison between different regression approaches. The required training data, a matched set of blood gas and monitoring parameters, was obtained by utilizing a developed algorithm for automated detection of withdrawal events. Beyond any specific implementations and analyses, many general aspects regarding the physical implementation of such a system and its interaction with caregivers could be evaluated in the experimental setting and might guide further development of clinical automation.
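    To make the fuzzy-logic control step more tangible, the following minimal Python sketch evaluates triangular membership functions over the end-tidal CO2 error and combines the rule outputs with a weighted average (zero-order Sugeno style) to suggest a respiratory-rate adjustment. The membership bounds, rule set and output gains are invented for illustration and are not the controller parameters used in the thesis.

# Illustrative sketch only: a minimal fuzzy control step for end-tidal CO2.
# All numeric bounds and gains below are assumptions for the example.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_vent_adjustment(etco2_mmhg, target=40.0):
    error = etco2_mmhg - target           # positive -> CO2 too high
    low  = tri(error, -15, -8, 0)         # degree "CO2 too low"
    ok   = tri(error, -5, 0, 5)           # degree "CO2 on target"
    high = tri(error, 0, 8, 15)           # degree "CO2 too high"
    # Rule outputs: decrease, hold, increase respiratory rate (breaths/min)
    weights, outputs = [low, ok, high], [-2.0, 0.0, +2.0]
    total = sum(weights)
    return sum(w * o for w, o in zip(weights, outputs)) / total if total else 0.0

print(fuzzy_vent_adjustment(48))   # suggests raising the respiratory rate
print(fuzzy_vent_adjustment(40))   # roughly no change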

    Students' language in computer-assisted tutoring of mathematical proofs

    Truth and proof are central to mathematics. Proving (or disproving) seemingly simple statements often turns out to be one of the hardest mathematical tasks. Yet, doing proofs is rarely taught in the classroom. Studies on cognitive difficulties in learning to do proofs have shown that pupils and students not only often do not understand or cannot apply basic formal reasoning techniques and do not know how to use formal mathematical language, but, at a far more fundamental level, they also do not understand what it means to prove a statement or even do not see the purpose of proof at all. Since insight into the importance of proof and doing proofs as such cannot be learnt other than by practice, learning support through individualised tutoring is in demand. This volume presents part of an interdisciplinary project, set at the intersection of pedagogical science, artificial intelligence, and (computational) linguistics, which investigated issues involved in providing computer-based tutoring of mathematical proofs through dialogue in natural language. The ultimate goal in this context, addressing the above-mentioned need for learning support, is to build intelligent automated tutoring systems for mathematical proofs. The research presented here focuses on the language that students use while interacting with such a system: its linguistic properties and computational modelling. Contributions are made at three levels: first, an analysis of language phenomena found in students' input to a (simulated) proof tutoring system is conducted and the variety of students' verbalisations is quantitatively assessed; second, a general computational processing strategy for informal mathematical language and methods of modelling prominent language phenomena are proposed; and third, the prospects for natural language as an input modality for proof tutoring systems are evaluated based on the collected corpora.

    Artificial general intelligence: Proceedings of the Second Conference on Artificial General Intelligence, AGI 2009, Arlington, Virginia, USA, March 6-9, 2009

    Artificial General Intelligence (AGI) research focuses on the original and ultimate goal of AI – to create broad human-like and transhuman intelligence – by exploring all available paths, including theoretical and experimental computer science, cognitive science, neuroscience, and innovative interdisciplinary methodologies. Due to the difficulty of this task, for the last few decades the majority of AI researchers have focused on what has been called narrow AI – the production of AI systems displaying intelligence regarding specific, highly constrained tasks. In recent years, however, more and more researchers have recognized the necessity – and feasibility – of returning to the original goals of the field. Increasingly, there is a call for a transition back to confronting the more difficult issues of human-level intelligence and, more broadly, artificial general intelligence.

    A new connectivity strategy for wireless mesh networks using dynamic spectrum access

    The introduction of Dynamic Spectrum Access (DSA) marked an important juncture in the evolution of wireless networks. DSA is a spectrum assignment paradigm in which devices are able to make real-time adjustments to their spectrum usage and adapt to changes in their spectral environment to meet performance objectives. DSA allows spectrum to be used more efficiently and may be considered a viable response to the ever-increasing demand for spectrum in urban areas and the need for coverage extension to unconnected communities. While DSA can be applied to any spectrum band, the initial focus has been on the Ultra-High Frequency (UHF) band traditionally used for television broadcast, because the band is lightly occupied and also happens to be ideal spectrum for sparsely populated rural areas. Wireless access in general is said to offer the most hope in extending connectivity to rural and unconnected peri-urban communities. Wireless Mesh Networks (WMNs) in particular offer several attractive characteristics, such as multi-hopping, ad-hoc networking, and self-organising and self-healing capabilities, hence the focus on WMNs. Motivated by the desire to leverage DSA for mesh networking, this research revisits the aspect of connectivity in WMNs with DSA. Combining DSA with mesh networking not only builds on the benefits of both but also creates additional challenges. The study seeks to address the connectivity challenge across three key dimensions, namely network formation, link metric and multi-link utilisation. To start with, one of the conundrums faced in WMNs with DSA is that the current 802.11s mesh standard provides limited support for DSA, while DSA-related standards such as 802.22 provide limited support for mesh networking. This gap in standardisation complicates the integration of DSA in WMNs, as several issues are left outside the scope of the applicable standard. This dissertation highlights the inadequacy of the current MAC protocol in ensuring TVWS regulation compliance in multi-hop environments and proposes a logical link MAC sub-layer procedure to fill the gap. A network is considered compliant in this context if each node operates on a channel that it is allowed to use, as determined, for example, by the spectrum database. Using a combination of prototypical experiments, simulation and numerical analysis, it is shown that the proposed protocol ensures network formation is accomplished in a manner compliant with TVWS regulation. Having tackled the compliance problem at the mesh formation level, the next logical step was to explore avenues for performance improvement. Considering the importance of routing in WMNs, the study evaluates link characterisation to determine a suitable metric for routing purposes. Along this dimension, the research makes two main contributions. Firstly, the A-link-metric (Augmented Link Metric) approach for WMNs with DSA is proposed. A-link-metric reinforces existing metrics to factor in the characteristics of a DSA channel, which is essential to improve the routing protocol's ranking of links for optimal path selection. Secondly, in response to the question of which metric is the suitable one, the Dynamic Path Metric Selection (DPMeS) concept is introduced. The principal idea is to mechanise the routing protocol such that it assesses the network via a distributed probing mechanism and dynamically binds the routing metric.
Using DPMeS, a routing metric is selected to match the network type and prevailing conditions, which is vital as each routing metric thrives or recedes in performance depending on the scenario. DPMeS is aimed at unifying years' worth of prior studies on routing metrics in WMNs. Simulation results indicate that A-link-metric achieves up to 83.4% and 34.6% performance improvement in terms of throughput and end-to-end delay respectively, compared to the corresponding base metric (i.e. the non-augmented variant). With DPMeS, the routing protocol is expected to yield better performance consistently compared to the fixed-metric approach, whose performance fluctuates amid changes in network setup and conditions. By and large, DSA-enabled WMN nodes will require access to some fixed spectrum to fall back on when opportunistic spectrum is unavailable. In the absence of fully functional integrated-chip cognitive radios to enable DSA, the feasible interim solution is single hardware platforms fitted with multiple transceivers. This configuration results in a multi-band multi-radio node capability that lends itself to a variety of link options in terms of transmit/receive radio functionality. The dissertation reports on the experimental performance evaluation of radios operating in the 5 GHz and UHF-TVWS bands for hybrid back-haul links. It is found that individual radios perform differently depending on the operating parameter settings, namely channel, channel-width and transmission power, subject to prevailing environmental (both spectral and topographical) conditions. When aggregated, if the radios' data-rates are approximately equal, there is a throughput and round-trip time performance improvement of 44.5-61.8% and 7.5-41.9% respectively. For hybrid links comprising radios with significantly unequal data-rates, this study proposes an adaptive round-robin (ARR) based algorithm for efficient multi-link utilisation. Numerical analysis indicates that ARR provides a 75% throughput improvement. These results indicate that overall network optimisation requires both time and frequency division duplexing. Based on the experimental test results, this dissertation presents a three-layered routing framework for multi-link utilisation. The top layer represents the node's logical interface to the WMN, while the bottom layer corresponds to the underlying physical wireless network interface cards (WNICs). The middle layer is an abstract and reductive representation of the possible and available transmission and reception options between node pairs, which depend on the number and type of WNICs. Drawing on the experimental results and the insight gained, the study builds criteria towards a mechanism for automatic selection of the optimal link option. Overall, this study is anticipated to serve as a springboard to stimulate the adoption and integration of DSA in WMNs, and further development of multi-link utilisation strategies to increase capacity. Ultimately, it is hoped that this work will contribute towards attaining the global goal of extending connectivity to the unconnected.
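    To illustrate the link-metric augmentation idea, the short Python sketch below scales a base routing metric (ETX is used here as an example) by an estimated DSA channel availability, so that links on channels likely to be reclaimed rank worse. The weighting scheme and the channel factors (channel_availability, switch_penalty) are assumptions for the example, not the dissertation's A-link-metric formula.

# Illustrative sketch only: augmenting a base routing metric with DSA channel
# characteristics, in the spirit of the A-link-metric idea described above.

def etx(forward_delivery: float, reverse_delivery: float) -> float:
    """Expected transmission count for a link (base metric)."""
    return 1.0 / (forward_delivery * reverse_delivery)

def a_link_metric(forward_delivery: float, reverse_delivery: float,
                  channel_availability: float, switch_penalty: float = 0.5) -> float:
    """Base ETX inflated when the DSA (TVWS) channel is less likely to stay usable.

    channel_availability: estimated probability (0-1] that the channel remains
    available; lower availability increases the link cost.
    """
    base = etx(forward_delivery, reverse_delivery)
    return base / channel_availability + switch_penalty * (1.0 - channel_availability)

# A link on a stable channel ranks better than one on a channel likely to vanish.
print(a_link_metric(0.9, 0.9, channel_availability=0.95))
print(a_link_metric(0.9, 0.9, channel_availability=0.60))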

    STABLE ADAPTIVE STRATEGY of HOMO SAPIENS and EVOLUTIONARY RISK of HIGH TECH. Transdisciplinary essay

    A co-evolutionary concept of the three-modal stable evolutionary strategy of Homo sapiens is developed. The concept is based on the principle of evolutionary complementarity of anthropogenesis: the value of evolutionary risk and the path of human evolution are defined simultaneously by a descriptive parameter (evolutionary efficiency) and a creative-teleological parameter (evolutionary correctness), neither of which can be instrumentally reduced to the other. The resulting magnitude of both parameters defines the trends of biological, social, cultural and techno-rationalistic human evolution through a two-gear mechanism: gene-cultural co-evolution and the techno-humanitarian balance. The resultant of each can be estimated by the ratio of socio-psychological predispositions towards humanization/dehumanization in mentality. An explanatory model and a methodology for evaluating the creative-teleological component of the evolutionary risk of the NBIC technological complex are proposed. An integral part of the model is evolutionary semantics (a time-varying semantic code governing the compliance of the biological, socio-cultural and techno-rationalist adaptive modules of the human stable evolutionary strategy).