
    A web-based approach to engineering adaptive collaborative applications

    Current methods for developing collaborative applications must make decisions and assumptions in advance about the environment in which the application will operate, the network infrastructure that will be used, and the device types the application will run on. These assumptions about the environment in which collaborative applications are designed to work are not ideal. Such methods produce collaborative applications that are characterised as inflexible, working only on homogeneous networks and single platforms, requiring pre-existing knowledge of the data and information types they need to use, and having a rigid choice of architecture. Future collaborative applications, by contrast, are required to be flexible: to work in highly heterogeneous environments and to adapt to different networks and a range of device types. This research investigates the role that the Web and its various pervasive technologies, along with a component-based Grid middleware, can play in addressing these concerns. The aim is to develop an approach to building adaptive collaborative applications that can operate in heterogeneous and changing environments. This work proposes a four-layer model that developers can use to build adaptive collaborative applications. The four-layer model is populated with Web technologies such as Scalable Vector Graphics (SVG), the Resource Description Framework (RDF), the SPARQL Protocol and RDF Query Language (SPARQL), and Gridkit, a middleware infrastructure based on the Open Overlays concept. The Middleware layer (the first layer of the four-layer model) addresses network and operating-system heterogeneity; the Group Communication layer enables collaboration and data sharing; the Knowledge Representation layer proposes an interoperable RDF data-modelling language and a flexible storage facility with an adaptive architecture for heterogeneous data storage; and finally, the Presentation and Interaction layer proposes a framework (Oea) for scalable and adaptive user interfaces. The four-layer model has been successfully used to build a collaborative application, called Wildfurt, that overcomes challenges facing collaborative applications. This research has demonstrated new applications for cutting-edge Web technologies in the area of building collaborative applications. SVG has been used to develop superior adaptive and scalable user interfaces that can operate on different device types. RDF and RDF Schema (RDFS) have also been used to design and model collaborative applications, providing a mechanism to define classes, properties, and the relationships between them. A flexible and adaptable storage facility that is able to change its architecture based on the surrounding environment and requirements has also been achieved by combining the RDF technology with the Open Overlays middleware, Gridkit.
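    As a concrete illustration of the Knowledge Representation layer's ingredients, the following minimal sketch models classes and properties in RDFS and queries them with SPARQL using the Python rdflib library; the vocabulary (ex:Session, ex:hasParticipant) is invented for illustration and is not taken from Wildfurt.

    ```python
    # Hedged sketch: RDFS modelling plus a SPARQL query, using rdflib.
    # All class/property names are illustrative assumptions.
    from rdflib import Graph, Namespace, RDF, RDFS

    EX = Namespace("http://example.org/collab#")
    g = Graph()
    g.bind("ex", EX)

    # Define classes and a property relating them, as RDFS permits.
    g.add((EX.Session, RDF.type, RDFS.Class))
    g.add((EX.Participant, RDF.type, RDFS.Class))
    g.add((EX.hasParticipant, RDF.type, RDF.Property))
    g.add((EX.hasParticipant, RDFS.domain, EX.Session))
    g.add((EX.hasParticipant, RDFS.range, EX.Participant))

    # Instance data for one collaborative session.
    g.add((EX.session1, RDF.type, EX.Session))
    g.add((EX.alice, RDF.type, EX.Participant))
    g.add((EX.session1, EX.hasParticipant, EX.alice))

    # SPARQL query over the shared model.
    q = """
    PREFIX ex: <http://example.org/collab#>
    SELECT ?who WHERE { ex:session1 ex:hasParticipant ?who . }
    """
    for row in g.query(q):
        print(row.who)
    ```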

    Antennas and Electromagnetics Research via Natural Language Processing.

    Advanced natural language processing (NLP) techniques are utilised to devise a pioneering methodology for collecting and analysing data derived from the scientific literature. Despite significant advances in automated database generation and analysis in the domains of materials chemistry and physics, the application of NLP techniques to metamaterial discovery, antenna design, and wireless communications remains in its early stages. This thesis proposes several novel approaches to advance research in material science. Firstly, an NLP method has been developed to automatically extract keywords from large-scale unstructured texts in the area of metamaterial research. This enables the uncovering of trends and relationships between keywords, facilitating the establishment of future research directions. Additionally, a trained neural network model based on the encoder-decoder Long Short-Term Memory (LSTM) architecture has been developed to predict future research directions and provide insights into the influence of metamaterials research. This model lays the groundwork for a research roadmap of metamaterials. Furthermore, a novel weighting system has been designed to evaluate article attributes in antenna and propagation research, enabling more accurate assessment of the impact of each scientific publication. This approach goes beyond conventional numeric metrics to produce more meaningful predictions. Secondly, a framework has been proposed that leverages text summarisation, one of the primary NLP tasks, to enhance the quality of scientific reviews. It has been applied to review recent developments in antennas and propagation for body-centric wireless communications, and the validation has been made available for comparison against well-referenced text-summarisation datasets. Lastly, the effectiveness of automated database building in the domain of tunable materials and their properties has been presented. The collected database will be used as input for training a surrogate machine learning model in an iterative active learning cycle. This model will be utilised to facilitate high-throughput material processing, with the ultimate goal of discovering novel materials exhibiting high tunability. The approaches proposed in this thesis will help accelerate the discovery of new materials and enhance their applications in antennas, which has the potential to transform electromagnetic material research.
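    As a hedged illustration of the keyword-extraction step, the sketch below ranks terms in a toy corpus by TF-IDF weight using scikit-learn; it shows the general technique only and is not the thesis's actual pipeline.

    ```python
    # Hedged sketch: TF-IDF keyword extraction from a small corpus of abstracts.
    from sklearn.feature_extraction.text import TfidfVectorizer

    abstracts = [
        "metamaterial unit cells for reconfigurable antenna arrays",
        "body-centric wireless communications and wearable antennas",
        "tunable dielectric materials for beam-steering antennas",
    ]

    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(abstracts)
    terms = vectorizer.get_feature_names_out()

    # The top-weighted terms per document stand in for extracted keywords.
    for i, row in enumerate(tfidf.toarray()):
        top = sorted(zip(terms, row), key=lambda t: t[1], reverse=True)[:3]
        print(i, [t for t, w in top if w > 0])
    ```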

    Strategies for Context Reasoning in Assisted Living Environments for the Elderly

    Leveraging our experience with the traditional approach to ambient assisted living (AAL), which relies on a wide array of heterogeneous technologies in deployments, this thesis studies the possibility of a more stripped-down and complementary approach, in which only a reduced hardware subset is deployed, probing a transfer of complexity towards the software side and enhancing the large-scale deployability of the solution. Focused on the reasoning aspects of AAL systems, this work identified a semantic inference engine suited to the particular use in these systems, responding to a need in this scientific community. Considering the coarse granularity of the situational data available with such an approach, dedicated rule-sets with adapted inference strategies are proposed, implemented, and validated using this engine. A novel semantic reasoning mechanism is proposed, based on a cognitively inspired reasoning architecture. Finally, the whole reasoning system is integrated into a fully featured context-aware service framework, powering its context awareness by performing live event processing through complex ontological manipulation. The overall system is validated through in-situ deployments over several months in a nursing home as well as in private homes, which is itself notable in a mainly laboratory-bound research domain.
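    To make the rule-based inference concrete, here is a minimal sketch of naive forward-chaining over coarse situational facts; the facts and rules are invented for illustration and are not the thesis's dedicated rule-sets, which operate over ontological data.

    ```python
    # Hedged sketch: forward-chaining to a fixed point over coarse facts.
    # Facts and rules are illustrative assumptions.
    facts = {("kitchen", "motion"), ("stove", "on")}

    # Each rule: (set of required facts, fact to infer).
    rules = [
        ({("kitchen", "motion"), ("stove", "on")}, ("resident", "cooking")),
        ({("resident", "cooking")}, ("context", "mealtime")),
    ]

    changed = True
    while changed:  # iterate until no rule adds anything new
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # now includes the two inferred situational facts
    ```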

    An Evolutionary Approach to Adaptive Image Analysis for Retrieving and Long-term Monitoring Historical Land Use from Spatiotemporally Heterogeneous Map Sources

    Land use changes have become a major contributor to anthropogenic global change. The ongoing dispersion and concentration of the human species, unprecedented in their orders of magnitude, have indisputably altered Earth’s surface and atmosphere. The effects are so salient and irreversible that a new geological epoch, following the interglacial Holocene, has been announced: the Anthropocene. While some scholars date its onset back to the Neolithic revolution, it is commonly placed in the late 18th century. The rapid development since the industrial revolution and its implications gave rise to an increasing awareness of extensive anthropogenic land change and led to an urgent need for sustainable strategies for land use and land management. By preserving landscape and settlement patterns at discrete points in time, archival geospatial data sources such as remote sensing imagery and, in particular, historical geotopographic maps can give evidence of the dynamic land use change during this crucial period. In this context, this thesis set out to explore the potential of retrospective geoinformation for monitoring, communicating, modeling and eventually understanding the complex and gradually evolving processes of land cover and land use change. Currently, large amounts of geospatial data sources such as archival maps are being made accessible online worldwide by libraries and national mapping agencies. Despite their abundance and relevance, the use of historical land use and land cover information in research is still often hindered by laborious visual interpretation, limiting the temporal and spatial coverage of studies. Thus, the core of the thesis is dedicated to the computational acquisition of geoinformation from archival map sources by means of digital image analysis. Based on a comprehensive review of the literature as well as the data and proposed algorithms, two major challenges for long-term retrospective information acquisition and change detection were identified: first, the diversity of geographical entity representations over space and time, and second, the uncertainty inherent to both the data source itself and its utilization for land change detection. To address the former challenge, image segmentation is considered a global non-linear optimization problem, and the segmentation methods and parameters are adjusted using a metaheuristic, evolutionary approach. For preserving adaptability in high-level image analysis, a hybrid model- and data-driven strategy, combining a knowledge-based and a neural net classifier, is recommended. To address the second challenge, a probabilistic object- and field-based change detection approach is developed for modeling the positional, thematic, and temporal uncertainty inherent to both data and processing. Experimental results indicate the suitability of the methodology in support of land change monitoring. In conclusion, potential applications and directions for further research are given.
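    As a sketch of the metaheuristic idea, the toy (mu + lambda) evolutionary loop below tunes two hypothetical segmentation parameters against a placeholder fitness function; the thesis instead optimizes an empirical segmentation-quality measure on map imagery.

    ```python
    # Hedged sketch: evolutionary tuning of segmentation parameters.
    # The fitness function is a stand-in with an optimum at (50, 0.5).
    import random

    def fitness(params):
        scale, threshold = params
        return -((scale - 50) ** 2 + (threshold - 0.5) ** 2)

    def mutate(params):
        scale, threshold = params
        return (scale + random.gauss(0, 5), threshold + random.gauss(0, 0.05))

    # (mu + lambda) loop: parents compete with mutated offspring.
    population = [(random.uniform(0, 100), random.random()) for _ in range(10)]
    for generation in range(50):
        offspring = [mutate(random.choice(population)) for _ in range(10)]
        population = sorted(population + offspring, key=fitness, reverse=True)[:10]

    print(max(population, key=fitness))  # best parameter pair found
    ```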

    OGRS2012 Symposium Proceedings

    Do you remember the Open Source Geospatial Research and Education Symposium (OGRS) in Nantes? "Les Machines de l’Île", the Big Elephant, the "Storm Boat" with Claramunt, Petit et al. (2009), and "le Biniou et la Bombarde"? A second edition of OGRS was promised, and that promise is now fulfilled in OGRS 2012, Yverdon-les-Bains, Switzerland, October 24-26, 2012. OGRS is a meeting dedicated to sharing knowledge, new solutions, methods, practices, ideas and trends in the field of geospatial information through the development and use of free and open source software in both research and education. In recent years, the development of geospatial free and open source software (GFOSS) has breathed new life into the geospatial domain. GFOSS has been extensively promoted by FOSS4G events, which have evolved from meetings that gathered interested GFOSS development communities into a standard business conference. More in line with the academic side of the FOSS4G conferences, OGRS is a rather neutral forum whose goal is to assemble a community mainly concerned with finding new solutions by sharing knowledge and methods free of software license limits. This is why OGRS is primarily concerned with the academic world, though it also involves public institutions, organizations and companies interested in geospatial innovation. This symposium is therefore not an exhibition for presenting existing industrial software solutions, but an event we hope will act as a catalyst for research, innovation and new collaborations between research teams, public agencies and industry. An educational aspect has recently been added to the content of the symposium. This important addition examines the knowledge triangle - research, education, and innovation - through the lens of how open source methods can improve the efficiency of education. Based on their experience, OGRS contributors bring to the table ideas on how open source training is likely to offer pedagogical advantages that equip students with the skills and knowledge necessary to succeed in tomorrow’s geospatial labour market. OGRS brings together a large collection of current innovative research projects from around the world, with the goal of examining how research uses and contributes to open source initiatives. By presenting their research, OGRS contributors shed light on how the open source approach impacts research, and vice versa. The organizers of the symposium wish to demonstrate how the use and development of open source software strengthen education, research and innovation in geospatial fields. To support this approach, the present proceedings propose thirty short papers grouped under the following thematic headings: Education, Earth Science & Landscape, Data, Remote Sensing, Spatial Analysis, Urban Simulation, and Tools. These papers are preceded by the contributions of the four keynote speakers: Prof Helena Mitasova, Dr Gérard Hégron, Prof Sergio Rey and Prof Robert Weibel, who share their expertise in research and education in order to highlight the decisive advantages of openness over the limits imposed by the closed-source license system.

    AIS: A Data Source for Analysing Activities at Sea

    This contribution presents methodological elements for describing human activities at sea from a management-support perspective. Several procedures, combining the exploitation of spatio-temporal databases built from archived AIS data with spatial analyses within a GIS, are tested in order to characterise maritime transport in the Iroise Sea (Brittany, France) in spatial, temporal, and quantitative terms over the course of one year.
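    A minimal sketch of the kind of spatio-temporal aggregation described, using pandas; the column names, grid step, and sample records are illustrative assumptions rather than the study's actual data.

    ```python
    # Hedged sketch: gridded monthly density of archived AIS position reports.
    import pandas as pd

    ais = pd.DataFrame({
        "mmsi": [227001000, 227001000, 228002000],
        "timestamp": pd.to_datetime(
            ["2010-01-01 10:00", "2010-01-01 10:06", "2010-01-01 10:00"]),
        "lat": [48.30, 48.32, 48.45],
        "lon": [-4.95, -4.93, -5.10],
    })

    # Bin positions onto a 0.1-degree grid and count reports per cell and
    # month: a crude spatio-temporal density of maritime traffic.
    ais["cell_lat"] = (ais["lat"] / 0.1).round() * 0.1
    ais["cell_lon"] = (ais["lon"] / 0.1).round() * 0.1
    density = (ais.groupby([ais["timestamp"].dt.to_period("M"),
                            "cell_lat", "cell_lon"])
                  .size().rename("reports"))
    print(density)
    ```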

    32. Forum Bauinformatik 2021

    The Forum Bauinformatik is an annual conference and an important part of construction informatics in the German-speaking world. It offers early-career researchers in particular the opportunity to present their research work, discuss problems specific to the field, and learn about the latest state of research. It is an excellent opportunity to enter the scientific community in the field of construction informatics and to establish contacts with other researchers.

    Text Similarity Between Concepts Extracted from Source Code and Documentation

    Context: The constant evolution of software systems often results in their documentation losing sync with the content of the source code. The traceability research field has long aimed to recover links between code and documentation when the two fall out of sync. Objective: The aim of this paper is to compare the concepts contained within the source code of a system with those extracted from its documentation, in order to detect how similar these two sets are. If they are vastly different, the gap might indicate considerable ageing of the documentation and a need to update it. Methods: In this paper we reduce the source code of 50 software systems to a set of key terms, each set containing the concepts of one of the systems sampled. At the same time, we reduce the documentation of each system to another set of key terms. We then use four different approaches for set comparison to detect how similar the sets are. Results: Using the well-known Jaccard index as the benchmark for the comparisons, we found that cosine distance has excellent comparative power, depending on the pre-training of the machine learning model. In particular, the SpaCy and FastText embeddings offer similarity scores of up to 80% and 90%, respectively. Conclusion: For most of the sampled systems, the source code and the documentation tend to contain very similar concepts. Given the accuracy of one pre-trained model (e.g., FastText), it also becomes evident that a few systems show a measurable drift between the concepts contained in the documentation and in the source code.
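    The two comparison measures named above can be made concrete with a small worked example: the Jaccard index operates on the raw key-term sets, while cosine similarity is computed here over simple binary term-presence vectors rather than the SpaCy or FastText embeddings used in the study.

    ```python
    # Hedged sketch: Jaccard vs. cosine on toy key-term sets.
    import math

    code_terms = {"parser", "token", "grammar", "cache"}
    doc_terms = {"parser", "token", "grammar", "install"}

    # Jaccard index: |intersection| / |union|.
    jaccard = len(code_terms & doc_terms) / len(code_terms | doc_terms)

    # Cosine similarity over binary term-presence vectors.
    vocab = sorted(code_terms | doc_terms)
    u = [1 if t in code_terms else 0 for t in vocab]
    v = [1 if t in doc_terms else 0 for t in vocab]
    dot = sum(a * b for a, b in zip(u, v))
    cosine = dot / (math.sqrt(sum(a * a for a in u))
                    * math.sqrt(sum(b * b for b in v)))

    print(f"Jaccard={jaccard:.2f}, cosine={cosine:.2f}")
    ```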

    Across Space and Time. Papers from the 41st Conference on Computer Applications and Quantitative Methods in Archaeology, Perth, 25-28 March 2013

    This volume presents a selection of the best papers presented at the forty-first annual Conference on Computer Applications and Quantitative Methods in Archaeology. The theme of the conference was "Across Space and Time", and the papers explore a multitude of topics related to that concept, including databases, the semantic Web, geographical information systems, data collection and management, and more.