633 research outputs found

    Ubiquitous and context-aware computing modelling : study of devices integration in their environment

    Dissertation presented as a partial requirement for obtaining a Master's degree in Information Management, specialization in Information Systems and Technologies Management. In an almost imperceptible way, ubiquitous and context-aware computing have become part of our everyday lives, as the world has developed into an interconnected web of humans and technological devices. This interconnectedness raises the need to integrate humans' interaction with the different devices they use across different social contexts and environments. The proposed research suggests developing new scenarios based on a current ubiquitous computing model dedicated to environmental context-awareness. We also follow previous research on a formal computational model, based on social paradigm theory and developed by Santos (2012/2015), dedicated to embedding devices into different contextual environments with social roles. Furthermore, several socially relevant context scenarios are identified and studied. Once identified, we gather and document the requirements that devices should meet, according to the model, in order to achieve a correct integration into their contextual environment.
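    To make the kind of model described above concrete, here is a minimal Python sketch of devices, social roles, and a capability-based integration check. All names and the requirement structure are illustrative assumptions; they are not taken from Santos's model.

```python
from dataclasses import dataclass, field

@dataclass
class SocialRole:
    """A role a device can play in a social context (hypothetical structure)."""
    name: str
    required_capabilities: set

@dataclass
class Device:
    name: str
    capabilities: set = field(default_factory=set)

    def can_fulfil(self, role: SocialRole) -> bool:
        # A device integrates correctly into a context if it covers
        # all capabilities the role requires.
        return role.required_capabilities <= self.capabilities

# Example scenario: a meeting room where a display takes the "presenter" role.
presenter = SocialRole("presenter", {"screen", "wireless_casting"})
smart_display = Device("smart_display", {"screen", "wireless_casting", "camera"})
print(smart_display.can_fulfil(presenter))  # True
```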

    Forum Session at the First International Conference on Service Oriented Computing (ICSOC03)

    The First International Conference on Service Oriented Computing (ICSOC) was held in Trento, December 15-18, 2003. The focus of the conference, Service Oriented Computing (SOC), is an emerging paradigm for distributed computing and e-business processing that has evolved from object-oriented and component computing to enable the building of agile networks of collaborating business applications distributed within and across organizational boundaries. Of the 181 papers submitted to the ICSOC conference, 10 were selected for the forum session, which took place on December 16th, 2003. The papers were chosen for their technical quality, originality, relevance to SOC, and suitability for a poster presentation or a demonstration. This technical report contains the 10 papers presented during the forum session at the ICSOC conference. In particular, the last two papers in the report were submitted as industrial papers.

    Web-based discovery and dissemination of multidimensional geographic information

    A spatial data clearinghouse is an electronic facility for searching, viewing, transferring, ordering, advertising, and disseminating spatial data from numerous sources via the Internet. Governments and other institutions have been implementing spatial data clearinghouses to minimise data duplication and thus reduce the cost of spatial data acquisition. Underlying these clearinghouses are geoportals and databases of geospatial metadata. A geoportal is an access point of a spatial data clearinghouse, and metadata is data that describes data. The success of a clearinghouse's spatial data discovery system depends on its ability to communicate the contents of geospatial metadata by providing both visual and analytical assistance to a user. The model currently adopted by the geographic information community was inherited from generic information systems and thus, to an extent, ignores the spatial characteristics of geographic data. Consequently, research in Geographic Information Retrieval (GIR) has focussed on the spatial aspects of web-based data discovery and acquisition. This thesis considers how the process of GIR from geoportals can be enhanced through multidimensional visualisation served by web-based geographic data sources. An approach is proposed for the presentation of search results in ontology-assisted GIR. Also proposed is an approach for the visualisation of multidimensional geographic data from web-based data sources. These approaches are implemented in two prototypes, the Geospatial Database Online Visualisation Environment (GeoDOVE) and the Spatio-Temporal Ontological Relevance Model (STORM). A discussion of their design, implementation and evaluation is presented. The results suggest that ontology-assisted visualisation can improve a user's ability to identify the most relevant multidimensional geographic datasets from a set of search results. Additional results suggest that it is possible to offer the proposed visualisation approaches on existing geoportal frameworks. The implication of the results is that multidimensional visualisation should be considered by the wider geographic information community as an alternative to historic approaches for presenting search results on geoportals, such as the textual ranked list and two-dimensional maps.
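    As an illustration of spatially aware ranking of geoportal search results, the following minimal sketch scores metadata records by how much of the query extent their bounding boxes cover. The record names, fields, and scoring rule are hypothetical; this is not the GeoDOVE or STORM implementation.

```python
from dataclasses import dataclass

@dataclass
class BBox:
    """Geographic bounding box in degrees (west, south, east, north)."""
    west: float
    south: float
    east: float
    north: float

    def area(self) -> float:
        return max(0.0, self.east - self.west) * max(0.0, self.north - self.south)

    def intersect(self, other: "BBox") -> "BBox":
        return BBox(max(self.west, other.west), max(self.south, other.south),
                    min(self.east, other.east), min(self.north, other.north))

def spatial_relevance(query: BBox, record: BBox) -> float:
    # Fraction of the query extent covered by the record's footprint.
    return query.intersect(record).area() / query.area() if query.area() else 0.0

# Rank hypothetical metadata records against a query around Newcastle upon Tyne.
query = BBox(-1.8, 54.9, -1.4, 55.1)
records = {"uk_land_cover": BBox(-8.2, 49.9, 1.8, 60.9),
           "tyne_flood_zones": BBox(-1.6, 54.8, -1.3, 55.2)}
ranked = sorted(records, key=lambda r: spatial_relevance(query, records[r]),
                reverse=True)
print(ranked)  # ['uk_land_cover', 'tyne_flood_zones']
```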

    Building a Multi-Agent Whisky Recommender System

    MAS (Multi-Agent Systems), classifiers and other AI (Artificial Intelligence) techniques are becoming increasingly common. This thesis explores the capability of an MAS to handle complex and advanced problems. An MAS duty-free shopping recommender system was built for this purpose. The MAS was part of a larger system built by the AmbieSense project, which had also built a prototype that was tested at Oslo Airport (OSL) Gardermoen. As a test case, the duty-free shopping system was set to classify and recommend whiskies. The system incorporated several AI techniques, such as agents, an ontology, a knowledge base and classifiers. The MAS was built using the JADE-LEAP framework, and Protégé was used for constructing the knowledge base. Various tests were performed on the system. Firstly, the agent communication was monitored using a Sniffer Agent. Secondly, the system's ability to run on mobile devices was tested on a PDA and various mobile phones. Thirdly, the MAS's abilities compared to a 'normal' computer program were tested by replacing agents at run-time, by using several JADE platforms, and through the experience gathered during development and use of the system. Lastly, the recommendations were cross-validated against Dr. Wishart's whisky classification. Weka was employed as a tool for testing different features and classifiers. Different classification algorithms are explained, such as NNR (Nearest-Neighbour Rule), Bayesian, CBR (Case-Based Reasoning), cluster analysis and self-organizing feature maps. Two classification algorithms were tested: NNR and Bayesian. Features were evaluated using two feature-evaluation algorithms: information gain and ReliefF. The accuracy of the classification was tested using 10-fold cross-validation. The testing showed that it is possible to build an MAS that handles complex and advanced problems. It was also shown that an MAS has some benefits in the areas of reliability, extensibility, computational efficiency and maintainability when compared to a 'normal' program. The error rate produced by the classifier was 56%, a figure too high for a recommendation system. Improvements could probably be achieved by finding better features or by selecting a different classifier. The system developed does not necessarily have to be used for duty-free shopping; it could also be used for other shopping items.
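    The thesis itself used Weka and the Java-based JADE-LEAP framework; purely to illustrate the evaluation protocol named in the abstract (Nearest-Neighbour Rule with 10-fold cross-validation), here is a small scikit-learn sketch on synthetic flavour-profile data. The feature and class dimensions are assumptions, not the thesis's dataset.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for whisky flavour profiles: 120 whiskies, 12 flavour
# features scored 0-4 (body, sweetness, smokiness, ...), 10 style classes.
# These dimensions are illustrative assumptions.
X = rng.integers(0, 5, size=(120, 12)).astype(float)
y = np.repeat(np.arange(10), 12)  # 12 whiskies per class, balanced

# Nearest-Neighbour Rule (k=1), evaluated with 10-fold cross-validation,
# mirroring the evaluation protocol described in the abstract.
clf = KNeighborsClassifier(n_neighbors=1)
scores = cross_val_score(clf, X, y, cv=10)
print(f"10-fold CV error rate: {1.0 - scores.mean():.0%}")
```

    On random synthetic profiles the error rate approaches chance level, which illustrates how uninformative features can drive error rates like the 56% reported above.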

    Analyzing epigenomic data in a large-scale context

    While large amounts of epigenomic data are publicly available, their retrieval in a form suitable for downstream analysis is a bottleneck in current research. In a typical analysis, users are required to download huge files that span the entire genome, even if they are only interested in a small subset (e.g., promoter regions) or an aggregation thereof. Moreover, complex operations on genome-level data are not always feasible on a local computer due to resource limitations. The DeepBlue Epigenomic Data Server mitigates this issue by providing a robust server that affords a powerful API for searching, filtering, transforming, aggregating, enriching, and downloading data from several epigenomic consortia. Furthermore, its main component implements data storage and manipulation methods that scale with the increasing amount of epigenetic data, thereby making it the ideal resource for researchers who seek to integrate epigenomic data into their analysis workflows. This work also presents companion tools that utilize the DeepBlue API to enable users not proficient in scripting or programming languages to analyze epigenomic data in a user-friendly way: (i) an R/Bioconductor package that integrates DeepBlue into the R analysis workflow; the extracted data are automatically converted into suitable R data structures for downstream analysis and visualization within the Bioconductor framework; (ii) a web portal that enables users to search, select, filter and download the epigenomic data available on the DeepBlue Server; this interface provides elements such as data tables, grids, and data selections, developed to empower users to find the required epigenomic data in a straightforward interface; (iii) DIVE, a web data analysis tool that allows researchers to perform large-scale epigenomic data analysis in a programming-free environment. DIVE enables users to compare their datasets to the datasets available on the DeepBlue Server in an intuitive interface, which summarizes the comparison of hundreds of datasets in a simple chart. Given the large amount of data available in DIVE, methods are also provided that suggest the most similar datasets for a comparative analysis. Furthermore, these tools are integrated and capable of sharing results among themselves, creating a powerful large-scale epigenomic data analysis environment. The DeepBlue Epigenomic Data Server and its ecosystem were well received by the International Human Epigenome Consortium and have already attracted much attention from the epigenomic research community, with currently 160 registered users and more than three million anonymous workflow processing requests since its release.
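    For readers who do want to script against the server directly, DeepBlue exposes its API over XML-RPC. The sketch below shows the general request pattern (select data, trigger a region request, poll, download); the endpoint, anonymous key, and method names follow DeepBlue's public examples but should be verified against the current API documentation, and the experiment name is a placeholder.

```python
import time
import xmlrpc.client

# Endpoint and anonymous key as used in DeepBlue's public examples;
# verify both, and the method signatures, against the current documentation.
server = xmlrpc.client.ServerProxy(
    "http://deepblue.mpi-inf.mpg.de/xmlrpc", allow_none=True)
user_key = "anonymous_key"

# 1. Select the regions of one experiment on chr1 (placeholder name).
status, query_id = server.select_experiments(
    "some_chipseq_peaks.bed", "chr1", 0, 2000000, user_key)

# 2. Ask the server to materialize the selected regions in a tabular format.
status, request_id = server.get_regions(
    query_id, "CHROMOSOME,START,END", user_key)

# 3. Poll until the asynchronous request finishes, then download the result.
while server.info(request_id, user_key)[1][0]["state"] not in ("done", "failed"):
    time.sleep(1)
status, regions = server.get_request_data(request_id, user_key)
print(regions[:200])
```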

    7th SC@RUG 2010 proceedings: Student Colloquium 2009-2010


    Building Blocks for IoT Analytics: Internet-of-Things Analytics

    Internet-of-Things (IoT) analytics is an integral element of most IoT applications, as it provides the means to extract knowledge, drive actuation services and optimize decision making. IoT analytics will be a major contributor to IoT business value in the coming years, as it will enable organizations to process and fully leverage the large amounts of IoT data that are nowadays largely underutilized. Building Blocks for IoT Analytics is devoted to presenting the main technology building blocks that comprise advanced IoT analytics systems. It introduces IoT analytics as a special case of BigData analytics and accordingly presents leading-edge technologies that can be deployed to successfully confront the main challenges of IoT analytics applications. Special emphasis is placed on technologies for IoT streaming and semantic interoperability across diverse IoT streams. Furthermore, the role of cloud computing and BigData technologies in IoT analytics is presented, along with practical tools for implementing, deploying and operating non-trivial IoT applications. Alongside the main building blocks of IoT analytics systems and applications, the book presents a series of practical applications which illustrate the use of these technologies. Technical topics discussed in the book include:
    - Cloud Computing and BigData for IoT analytics
    - Searching the Internet of Things
    - Development Tools for IoT Analytics Applications
    - IoT Analytics-as-a-Service
    - Semantic Modelling and Reasoning for IoT Analytics
    - IoT analytics for Smart Buildings
    - IoT analytics for Smart Cities
    - Operationalization of IoT analytics
    - Ethical aspects of IoT analytics
    The book contains both research-oriented and applied articles on IoT analytics, including several articles reflecting work undertaken in recent European Commission funded projects under the FP7 and H2020 programmes. These articles present the results of these projects on IoT analytics platforms and applications. Even though the articles have been contributed by different authors, they are structured in a well-thought-out order that facilitates the reader either to follow the evolution of the book or to focus on specific topics depending on his or her background and interest in IoT and IoT analytics technologies. The compilation of these articles in this edited volume has been largely motivated by the close collaboration of the co-authors in working groups and IoT events organized by the Internet-of-Things Research Cluster (IERC), which is currently part of the EU's Alliance for Internet of Things Innovation (AIOTI).
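    As a simplified picture of what stream-oriented IoT analytics involves, the sketch below aggregates a stream of sensor readings into fixed (tumbling) time windows. The reading format and window size are assumptions for illustration and are not taken from the book.

```python
from collections import defaultdict

def windowed_mean(readings, window_s=60):
    """Aggregate (timestamp, sensor_id, value) readings into per-sensor
    means over fixed (tumbling) windows of `window_s` seconds."""
    buckets = defaultdict(list)
    for ts, sensor, value in readings:
        # Key each reading by sensor and by which window its timestamp falls in.
        buckets[(sensor, int(ts // window_s))].append(value)
    return {key: sum(vals) / len(vals) for key, vals in buckets.items()}

# Hypothetical temperature stream: (unix timestamp, sensor id, degrees C).
stream = [(0, "t1", 20.5), (30, "t1", 21.1), (65, "t1", 21.8), (10, "t2", 19.0)]
print(windowed_mean(stream))
# {('t1', 0): 20.8, ('t1', 1): 21.8, ('t2', 0): 19.0}
```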
