1,868 research outputs found

    Easing the Creation Process of Mobile Applications for Non-Technical Users

    In this day and age, the mobile phone is becoming one of the most indispensable personal computing devices. People no longer use it just for communication (e.g. calling, sending messages) but for many other aspects of their lives as well. Because of this rise in demand for different and innovative applications, mobile companies (i.e. handset manufacturers and mobile network providers) and organizations have realized the power of collaborative software development and have changed their business strategies. Instead of hiring specific organizations to do the programming, they are now opening up their APIs and tools to allow ordinary people to create their own mobile applications, either for personal use or for profit. The problem with this approach is that there are people who have good ideas of their own but lack the technical expertise to create applications implementing them. The goal of this research is to find ways to simplify the creation of mobile applications for non-technical people by applying model-driven software development, in particular domain-specific modeling, combined with techniques from the field of human-computer interaction (HCI), in particular iterative, user-centered system design. As proof of concept, we concentrate on the development of applications in the domain of mHealth and use the Android Framework as the target platform for code generation. The iterative, user-centered design and development of the front-end tool, called the Mobia Modeler, eventually led us to create a tool featuring a configurable-component-based design and an integrated modeless environment that simplify the different development tasks of end-users. The Mobia models feature both constructs specialized for specific domains (e.g. sensor component, special component) and constructs applicable to any type of domain (e.g. structure component, basic component). To accommodate the different needs of end-users, a clear separation between the front-end tools (i.e. the Mobia Modeler) and the underlying code generator (i.e. the Mobia Processor) is recommended, as long as a consistent model in between serves as a bridge between the different tools.
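
    The abstract names four component kinds and a modeler/processor split bridged by a shared model. Purely as an illustrative sketch (the class, field, and function names below are assumptions, not Mobia's actual APIs), the separation can be pictured in Python as a generator that consumes nothing but the shared component model:

        from dataclasses import dataclass, field

        # Hypothetical rendering of a Mobia-style component model; the four
        # component kinds come from the abstract, everything else is assumed.
        @dataclass
        class Component:
            kind: str                                   # "sensor", "special", "structure", or "basic"
            name: str
            config: dict = field(default_factory=dict)  # the configurable-component design

        @dataclass
        class MobiaModel:
            app_name: str
            components: list[Component] = field(default_factory=list)

        def generate_android_stub(model: MobiaModel) -> str:
            """Stand-in for a Mobia Processor: it sees only the shared model,
            so the front-end (the Mobia Modeler) can evolve independently."""
            lines = [f"// Generated app: {model.app_name}"]
            for c in model.components:
                lines.append(f"// emit Android code for {c.kind} component '{c.name}'")
            return "\n".join(lines)

        model = MobiaModel("HeartRateDiary", [
            Component("sensor", "heartRateMonitor", {"samplingRate": "1Hz"}),
            Component("basic", "dailyLogScreen"),
        ])
        print(generate_android_stub(model))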

    Conservation GIS: Ontology and spatial reasoning for commonsense knowledge.

    Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies. Geographic information available from multiple sources is moving beyond its local context, widening the semantic differences between sources. The major challenge emerging with the ubiquity of geographic information, evolving geospatial technology, and location-aware services is dealing with semantic interoperability. Although the use of ontology aims at capturing a shared conceptualization of geospatial information, the human perception of the world is not adequately addressed in geospatial ontology. This study proposes a 'Conservation GIS Ontology' that comprises the spatial knowledge of non-expert conservationists in the context of Chitwan National Park, Nepal. The discussion is presented in four parts: exploration of commonsense spatial knowledge about conservation; development of a conceptual ontology that conceptualizes the domain knowledge; formal representation of the conceptualization in the Web Ontology Language (OWL); and quality assessment of the ontology development tasks. Elicitation of commonsense spatial knowledge is performed with the notion of a cognitive view of semantics. Emphasis is given to investigating observations of wildlife movement and habitat change scenarios. Conceptualization is carried out on the foundation of the top-level ontology DOLCE and existing geospatial ontologies. The Protégé 4.1 ontology editor is employed for the ontology engineering tasks. Quality assessment is accomplished based on the intrinsic approach of ontology evaluation. (...)
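
    As a flavor of the OWL formalization step described above, the following minimal Python sketch uses the owlready2 library to define and save a tiny habitat/wildlife fragment. The class and property names are hypothetical stand-ins, not taken from the actual Conservation GIS Ontology, and the IRI is a placeholder:

        from owlready2 import Thing, ObjectProperty, get_ontology

        # Placeholder IRI; the real ontology's namespace is not given in the abstract.
        onto = get_ontology("http://example.org/conservation-gis.owl")

        with onto:
            class SpatialRegion(Thing):    # loosely in the spirit of a DOLCE-aligned region
                pass
            class Habitat(SpatialRegion):  # e.g. grassland, riverine forest
                pass
            class Wildlife(Thing):         # e.g. rhinoceros, tiger
                pass
            class occupiesHabitat(ObjectProperty):
                domain = [Wildlife]
                range = [Habitat]

        onto.save(file="conservation-gis.owl", format="rdfxml")

    The resulting RDF/XML file can then be loaded into an editor such as Protégé, mirroring the toolchain the study describes.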

    Semantically defined Analytics for Industrial Equipment Diagnostics

    In this age of digitalization, industries everywhere accumulate massive amounts of data, so much so that data has become the lifeblood of the global economy. This data may come from heterogeneous equipment, components, sensors, systems, and applications in many varieties (diversity of sources), velocities (high rate of change), and volumes (sheer data size). Despite significant advances in the ability to collect, store, manage, and filter data, the real value lies in the analytics. Raw data is meaningless unless it is properly processed into actionable (business) insights. Those who know how to harness data effectively have a decisive competitive advantage: they raise performance by making faster and smarter decisions, improve short- and long-term strategic planning, offer more user-centric products and services, and foster innovation. Two distinct paradigms can be discerned in the practice of analytics: semantic-driven (deductive) and data-driven (inductive). The first emphasizes logic as a way of representing domain knowledge encoded in rules or ontologies, which are often carefully curated and maintained; however, these models are often highly complex and require intensive knowledge-processing capabilities. Data-driven analytics employ machine learning (ML) to learn a model directly from the data with minimal human intervention; however, these models are tuned to the training data and context, making them difficult to adapt. Industries that want to create value from data must master these paradigms in combination. There is thus a great need in data analytics to seamlessly combine semantic-driven and data-driven processing techniques in an efficient and scalable architecture that allows extracting actionable insights from an extreme variety of data. In this thesis, we address these needs by providing:
    ‱ A unified representation of domain-specific and analytical semantics, in the form of ontology models called the TechOnto Ontology Stack. It is a highly expressive, platform-independent formalism for capturing the conceptual semantics of industrial systems, such as technical system hierarchies and component partonomies, together with their analytical functional semantics.
    ‱ A new ontology language, Semantically defined Analytical Language (SAL), on top of the ontology model, which extends existing DatalogMTL (a Horn fragment of Metric Temporal Logic) with analytical functions as first-class citizens.
    ‱ A method to generate semantic workflows using our SAL language. It helps in authoring, reusing, and maintaining complex analytical tasks and workflows in an abstract fashion.
    ‱ A multi-layer architecture that fuses knowledge-driven and data-driven analytics into a federated and distributed solution.
    To our knowledge, the work in this thesis is one of the first to introduce and investigate the use of semantically defined analytics in an ontology-based data access setting for industrial analytical applications. The reason for focusing our work and evaluation on industrial data is (i) the adoption of semantic technology by industry in general, and (ii) the common need, in the literature and in practice, to let domain expertise drive data analytics over semantically interoperable sources while still harnessing the power of analytics to enable real-time data insights. Given the evaluation results of three use-case studies, our approach surpasses state-of-the-art approaches in most application scenarios.
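
    To make the SAL idea concrete: DatalogMTL combines Datalog rules with metric temporal operators, and SAL, as described above, additionally admits analytical functions as first-class citizens in rule bodies. A purely hypothetical rule in this flavor (illustrative notation only, not taken from the thesis) could flag overheating equipment when the average of a temperature signal over the past ten time units exceeds a threshold:

        \mathit{Overheating}(x) \leftarrow \mathsf{avg}_{[0,10]}\bigl(\mathit{Temperature}(x)\bigr) > 90

    In plain DatalogMTL the body could only use metric operators such as \boxminus_{[0,10]} ("held continuously throughout the past 10 time units"); embedding an aggregate like \mathsf{avg} directly in the rule body is the kind of extension the abstract describes.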

    INTEROP deliverable DTG 6.2 : Method repository

    This deliverable presents the INTEROP method chunks repository (MCR), its architecture and the services it provides. It includes the definition of a reusable method chunk and its structure, illustrated with examples of method chunks stored in the repository, together with guidelines for method chunk definition and characterisation, covering tasks TG6.2 and TG6.3 of the task group's work plan. The main result is the definition of the structure of the method chunk repository, emphasizing the link to interoperability. Interoperability is a first-class concept in the structure of the method chunk repository: it characterizes not only method chunks, i.e. procedures to solve interoperability problems, but also interoperability cases, i.e. presentations of actual problems involving interoperability issues. TG6 has produced three MCR prototypes. Two experiments were undertaken using the Metis system and one using ConceptBase. The task group attended an intense two-day workshop on Metis. As a result, two experiments with Metis as the platform for the method chunk repository are under way and are reported in this deliverable. One realizes the structure of the MCR as specified in this report; the other is an alternative approach that serves as a benchmark and is reported in the appendix. The ConceptBase prototype utilizes the metamodel presented in this deliverable. We have analysed three cases involving various aspects of interoperability: one about establishing a broker platform for insurance agents, the second about linking the information systems in the public utility sector, and the third about relating the ATHENA Model-Driven Interoperability Framework to the goals of the MCR. The results of TG6 have been published at the ISD conference 2006 and the ER conference 2006; copies of the papers are included in the appendix. The report of the example session with the method chunk repository has been moved to deliverable TG6.3 (Tutorial of the MCR), which is the more logical place. We want to emphasize that TG6 was not only busy drafting concepts, exploring the state of the art, and analysing cases: we are actually experimenting with a prototype and consider this a valuable contribution to the network. As soon as the prototype is stable, knowledge about interoperability solutions can be coded in this repository and can guide designers of interoperable systems with experience-based knowledge.
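
    The abstract refers to the structure of a reusable method chunk without spelling it out here. Purely as a hedged illustration of what such a structure might look like in code (the field names below follow common situational method engineering vocabulary and are assumptions, not the deliverable's actual metamodel), consider this Python sketch:

        from dataclasses import dataclass, field

        @dataclass
        class MethodChunk:
            """A reusable method fragment as it might be stored in an MCR."""
            name: str
            intention: str                  # the goal the chunk helps achieve
            situation: str                  # the context in which it applies
            guideline: str                  # how to carry the chunk out
            interoperability_issues: list[str] = field(default_factory=list)

        @dataclass
        class InteroperabilityCase:
            """An actual problem involving interoperability, linked to candidate chunks."""
            description: str
            candidate_chunks: list[MethodChunk] = field(default_factory=list)

        case = InteroperabilityCase(
            description="Broker platform for insurance agents",
            candidate_chunks=[MethodChunk(
                name="Map message schemas",
                intention="Achieve data interoperability between partner systems",
                situation="Partners exchange orders with mismatched schemas",
                guideline="Derive a shared canonical schema, then generate mappings",
                interoperability_issues=["data", "semantics"],
            )],
        )

    Making interoperability first-class then amounts to indexing both chunks and cases by their interoperability issues, so that a new case can be matched against candidate chunks.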

    AH 2003 : workshop on adaptive hypermedia and adaptive web-based systems

    Adaptive hypertext and hypermedia : workshop : proceedings, 3rd, Sonthofen, Germany, July 14, 2001 and Aarhus, Denmark, August 15, 2001

    This paper presents two empirical usability studies based on techniques from human-computer interaction (HCI) and software engineering, which were used to elicit requirements for the design of a hypertext generation system. Here we discuss the findings of these studies, which were used to motivate the choice of adaptivity techniques. The results showed dependencies between the different ways of adapting the explanation content and the document length and formatting, so the system's architecture had to be modified to cope with this requirement. In addition, the system had to be made adaptable, as well as adaptive, in order to satisfy the elicited user preferences.
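
    The adaptive/adaptable distinction above can be made concrete with a small sketch. In the hypothetical Python below (the names are illustrative, not the paper's system), the adaptive part infers a presentation choice from observed behaviour, while the adaptable part lets an explicit user preference override that inference:

        def infer_detail_level(pages_read: int) -> str:
            """Adaptive: infer a detail level from observed user behaviour."""
            return "short" if pages_read < 5 else "full"

        def choose_detail_level(user_preference: str | None, pages_read: int) -> str:
            """Adaptable: an explicit user preference overrides the inference."""
            return user_preference if user_preference is not None else infer_detail_level(pages_read)

        def render_explanation(text: str, detail: str) -> str:
            # Content adaptation interacts with length and formatting, as the
            # studies found: the short variant also needs different formatting.
            return text[:200] + " ..." if detail == "short" else text

        explanation = "Adaptive hypermedia systems tailor content to each user. " * 20
        print(render_explanation(explanation, choose_detail_level(None, pages_read=3)))
        print(render_explanation(explanation, choose_detail_level("full", pages_read=3)))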

    A note on organizational learning and knowledge sharing in the context of communities of practice

    Please cite this publication as: Antonova, A. & Gourova, E. (2006). A note on organizational learning and knowledge sharing in the context of communities of practice. Proceedings of International Workshop in Learning Networks for Lifelong Competence Development, TENCompetence Conference. September 12th, Sofia, Bulgaria: TENCompetence. Retrieved June 30th, 2006, from http://dspace.learningnetworks.org
    The knowledge management (KM) literature emphasizes the impact of human factors on the successful implementation of KM within the organization. Isolated initiatives for promoting a learning organization and team collaboration, undertaken without considering the limitations and constraints on knowledge sharing, can defeat further development of a KM culture. Communities of practice (CoP) are emerging as an effective instrument for knowledge sharing that overcomes these constraints and fosters human collaboration. This work has been sponsored by the EU project TENCompetence.
    • 

    corecore