
    Locational wireless and social media-based surveillance

    The number of smartphones and tablets, as well as the volume of traffic these devices generate, has grown steadily over the past decade, and this growth is predicted to continue at an increasing rate over the next five years. Numerous native features built into contemporary smart devices enable highly accurate digital fingerprinting techniques. Furthermore, software developers have taken advantage of the locational capabilities of these devices by building applications and social media services that enable convenient sharing of information tied to geographical locations. Mass online sharing has resulted in a large volume of locational and personal data being publicly available for extraction. A number of researchers have used this opportunity to design and build tools for a variety of uses – both respectable and nefarious. Furthermore, due to peculiarities of the IEEE 802.11 specification, wireless-enabled smart devices disclose a number of attributes that can be observed via passive monitoring. These attributes, coupled with the information that can be extracted using social media APIs, present an opportunity for research into locational surveillance, device fingerprinting and device user identification techniques. This paper presents an in-progress research study and details the findings to date.
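    The passive-monitoring workflow described above can be sketched in a few lines: group the attributes each 802.11 frame discloses (source MAC, probed SSID, supported rates) into per-device fingerprints. The frames below are invented records for illustration; a live capture would obtain these fields with a tool such as scapy rather than from hard-coded data.

    ```python
    from collections import defaultdict

    # Hypothetical parsed probe-request records; field names are illustrative,
    # not those of any particular capture library.
    frames = [
        {"mac": "aa:bb:cc:00:11:22", "ssid": "HomeNet",    "rates": (1, 2, 5.5, 11)},
        {"mac": "aa:bb:cc:00:11:22", "ssid": "CoffeeShop", "rates": (1, 2, 5.5, 11)},
        {"mac": "dd:ee:ff:33:44:55", "ssid": "HomeNet",    "rates": (6, 9, 12)},
    ]

    def fingerprint(records):
        """Group observed SSIDs and capability attributes per source MAC."""
        devices = defaultdict(lambda: {"ssids": set(), "rates": set()})
        for r in records:
            dev = devices[r["mac"]]
            dev["ssids"].add(r["ssid"])
            dev["rates"].add(r["rates"])
        return devices

    devs = fingerprint(frames)
    for mac, attrs in sorted(devs.items()):
        print(mac, sorted(attrs["ssids"]))
    ```

    The probed-SSID sets alone already distinguish the two devices; combining them with the social-media data the paper mentions is what enables user identification.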

    Trusted operational scenarios - Trust building mechanisms and strategies for electronic marketplaces.

    This document presents and describes the trusted operational scenarios resulting from the research and work carried out in the SEAMLESS project. The report presents identified collaboration habits of small and medium enterprises with low e-skills; trust-building mechanisms and issues as the main enablers of online business relationships on an electronic marketplace; a questionnaire analysis of the level of acceptance and necessity of trust-building mechanisms; a proposal for the development of different strategies for the different types of trust mechanisms; and recommended actions for the SEAMLESS project and other B2B marketplaces.
    Keywords: trust-building mechanisms, trust, B2B networks, e-marketplaces

    Core Services in the Architecture of the National Digital Library for Science Education (NSDL)

    Full text link
    We describe the core components of the architecture for the National Science, Mathematics, Engineering, and Technology Education Digital Library (NSDL). Over time the NSDL will include heterogeneous users, content, and services. To accommodate this, a design for a technical and organizational infrastructure has been formulated based on the notion of a spectrum of interoperability. This paper describes the first phase of the interoperability infrastructure, including the metadata repository, search and discovery services, rights management services, and user interface portal facilities.
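    As an illustration of how a harvesting-based metadata repository of this kind is typically populated, the sketch below builds an OAI-PMH ListRecords request, the standard verb for pulling records from a data provider. The base URL and set name are hypothetical, not NSDL's actual endpoints.

    ```python
    from urllib.parse import urlencode

    # Hypothetical data-provider endpoint; a real harvester would iterate
    # over many such base URLs.
    BASE = "https://example.org/oai"

    def list_records_url(base, metadata_prefix="oai_dc", set_spec=None):
        """Build an OAI-PMH ListRecords request URL."""
        params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
        if set_spec:
            params["set"] = set_spec
        return f"{base}?{urlencode(params)}"

    url = list_records_url(BASE, set_spec="physics")
    print(url)
    ```

    Repeating such requests (following the protocol's resumption tokens) is how a central repository aggregates metadata from heterogeneous collections.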

    Design and implementation of a platform for predicting pharmacological properties of molecules

    Master's thesis, Bioinformatics and Computational Biology, Universidade de Lisboa, Faculdade de Ciências, 2019.
    The drug discovery and design process is expensive, time-consuming and resource-intensive. Various in silico methods are used to make the process more efficient and productive. Methods such as virtual screening often take advantage of QSAR machine learning models to more easily pinpoint the most promising drug candidates from large pools of compounds. QSAR (Quantitative Structure Activity Relationship) is a ligand-based method in which structural information about known ligands of a specific target is used to predict the biological activity of another molecule against that target. QSAR models are also used to improve an existing molecule's pharmacological potential by elucidating which structural features confer the desirable properties. Several researchers create and develop QSAR machine learning models for a variety of therapeutic targets. However, their use is limited by lack of access to the models themselves. Beyond access, there are often difficulties in using published software, given the need to manage dependencies and replicate the development environment. The application documented here was designed and developed to address this issue. In this centralized platform, researchers can access several QSAR machine learning models and test their own datasets for interaction with various therapeutic targets. The platform accepts widespread molecule identifiers as input, such as SMILES and InChI, and handles their conversion into the appropriate molecular descriptors to be used by the models.
    The platform can be accessed through a web application with a full graphical user interface developed with the R package Shiny, and through a REST API developed with the Flask-RESTful package for Python. The complete application is packaged in container technology, specifically Docker. The main goal of this platform is to grant widespread access to the QSAR models developed by the scientific community, by concentrating them in a single location and removing the user's need to install or set up unfamiliar software. This aims to foster knowledge creation and facilitate the research process.
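    The request flow the abstract describes (molecule identifier in, descriptor computation, model prediction out) can be sketched in pure Python. The descriptor function and linear model below are invented stand-ins; a real deployment would compute descriptors with a cheminformatics toolkit such as RDKit and serve trained QSAR models behind the Flask-RESTful endpoint.

    ```python
    import json

    def to_descriptors(smiles):
        """Toy 'descriptor' vector: string length plus a few character counts.
        A real platform would derive chemically meaningful descriptors."""
        return [len(smiles), smiles.count("C"), smiles.count("="), smiles.count("O")]

    def predict_activity(descriptors, weights=(0.1, 0.5, 0.3, 0.2)):
        """Hypothetical linear QSAR model score (weights are invented)."""
        return sum(w * d for w, d in zip(weights, descriptors))

    def handle_request(payload):
        """Sketch of the REST endpoint's logic: identifier in, JSON prediction out."""
        desc = to_descriptors(payload["smiles"])
        return json.dumps({"smiles": payload["smiles"],
                           "score": round(predict_activity(desc), 3)})

    resp = handle_request({"smiles": "CC(=O)Oc1ccccc1C(=O)O"})  # aspirin SMILES
    print(resp)
    ```

    Keeping identifier parsing, descriptor generation and prediction behind one endpoint is what lets users test datasets without installing any cheminformatics software themselves.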

    Publication, Testing and Visualization with EFES : A tool for all stages of the EpiDoc XML editing process

    EpiDoc is a set of recommendations, schema and other tools for the encoding of ancient texts, especially inscriptions and papyri, in TEI XML, that is now used by upwards of a hundred projects around the world, and large numbers of scholars seek training in EpiDoc encoding every year. The EpiDoc Front-End Services tool (EFES) was designed to fill the important need for a publication solution for researchers and editors who have produced EpiDoc encoded texts but do not have access to digital humanities support or a well-funded IT service to produce a publication for them. This paper will discuss the use of EFES not only for final publication, but as a tool in the editing and publication workflow, by editors of inscriptions, papyri and similar texts including those on coins and seals. The edition visualisations, indexes and search interface produced by EFES are able to serve as part of the validation, correction and research apparatus for the author of an epigraphic corpus, iteratively improving the editions long before final publication. As we will argue, this research process is a key component of epigraphic and papyrological editing practice, and studying these needs will help us to further enhance the effectiveness of EFES as a tool. To this end we also plan to add three major functionalities to the EFES toolbox: (1) date visualisation and filter—building on the existing “date slider,” and inspired by partner projects such as Pelagios and Godot; (2) geographic visualization features, again building on Pelagios code, allowing the display of locations within a corpus or from a specific set of search results in a map; (3) export of information and metadata from the corpus as Linked Open Data, following the recommendations of projects such as the Linked Places format, SNAP, Chronontology and Epigraphy.info, to enable the semantic sharing of data within and beyond the field of classical and historical editions. 
    Finally, we will discuss the kinds of collaboration that will be required to bring about desired enhancements to the EFES toolset, especially in this age of research-focussed, short-term funding. Embedding essential infrastructure work of this kind in research applications for specific research and publication projects will almost certainly need to be part of the solution.
    Peer reviewed
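    The index-generation step that underpins the validation workflow described above can be illustrated with a minimal sketch: extracting indexable terms (here, place and person names) from an EpiDoc-style TEI fragment. The fragment and its text are invented for illustration; a real edition carries a full TEI header and apparatus.

    ```python
    import xml.etree.ElementTree as ET

    TEI = "{http://www.tei-c.org/ns/1.0}"

    # Invented minimal EpiDoc-style edition division.
    doc = """<div xmlns="http://www.tei-c.org/ns/1.0" type="edition">
      <ab>Dedicated by <persName>Marcus</persName> at
          <placeName>Aphrodisias</placeName> to the people of
          <placeName>Ephesus</placeName>.</ab>
    </div>"""

    def build_index(xml_text, tag):
        """Collect indexed terms the way a publication tool might build its
        place/person indexes from the encoded text."""
        root = ET.fromstring(xml_text)
        return sorted({el.text for el in root.iter(TEI + tag)})

    places = build_index(doc, "placeName")
    print(places)  # ['Aphrodisias', 'Ephesus']
    ```

    An editor scanning such generated indexes quickly spots inconsistent or misspelled encodings, which is exactly the iterative correction role the paper attributes to EFES's visualisations.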

    Intelligent Data Networking for the Earth System Science Community

    Earth system science (ESS) research is generally very data-intensive. To enable detailed discovery of, and transparent access to, the data stored in heterogeneous and organisationally separated data centres, common data and metadata community interfaces are needed. This paper describes the development of a coherent data discovery and data access infrastructure for the ESS community in Germany. The ISO 19115 standard is adopted to comprehensively and consistently describe the characteristics of geographic data required for their discovery (discovery metadata) and for their usage (use metadata). Web-service technology is used to hide the details of heterogeneous data access mechanisms and preprocessing implementations. The commitment to international standards and the modular character of the approach facilitate the expandability of the infrastructure as well as interoperability with international partners and other communities.
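    A minimal sketch of the discovery side, assuming toy records that carry a few ISO 19115-style core elements (title, keywords, geographic extent as a lon/lat bounding box); the datasets and field names are invented for illustration.

    ```python
    # Invented discovery-metadata records with ISO 19115-style core elements.
    records = [
        {"title": "North Atlantic SST 1990-2000",
         "keywords": {"ocean", "temperature"},
         "bbox": (-60.0, 30.0, 0.0, 65.0)},   # (west, south, east, north)
        {"title": "Alpine precipitation grids",
         "keywords": {"precipitation"},
         "bbox": (5.0, 44.0, 17.0, 48.0)},
    ]

    def discover(recs, keyword=None, point=None):
        """Toy discovery service: filter records by keyword and/or by a
        (lon, lat) point falling inside the dataset's extent."""
        hits = []
        for r in recs:
            if keyword and keyword not in r["keywords"]:
                continue
            if point:
                w, s, e, n = r["bbox"]
                lon, lat = point
                if not (w <= lon <= e and s <= lat <= n):
                    continue
            hits.append(r["title"])
        return hits

    print(discover(records, keyword="temperature"))
    ```

    Because every data centre exposes the same discovery elements, one query interface can span all of them, which is the interoperability point the abstract makes.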

    A Semantic Data Grid for Satellite Mission Quality Analysis

    The combination of Semantic Web and Grid technologies and architectures eases the development of applications that share heterogeneous resources (data and computing elements) belonging to several organisations. The aerospace domain has an extensive and heterogeneous network of facilities and institutions, with a strong need to share both data and computational resources for complex processing tasks. One such task is monitoring and data analysis for satellite missions. This paper presents a Semantic Data Grid for satellite missions, where flexibility, scalability, interoperability, extensibility and efficient development have been considered the key issues to be addressed.

    Multi-Objective Approaches to Markov Decision Processes with Uncertain Transition Parameters

    Markov decision processes (MDPs) are a popular model for performance analysis and optimization of stochastic systems. The parameters describing the stochastic behavior of an MDP are estimated from empirical observations of a system, so their values are not known precisely. Different types of MDPs with uncertain, imprecise or bounded transition rates or probabilities and rewards exist in the literature. Commonly, analysis of models with uncertainties amounts to searching for the most robust policy, meaning the goal is to generate a policy with the greatest lower bound on performance (or, symmetrically, the lowest upper bound on costs). However, hedging against an unlikely worst case may lead to losses in other situations. In general, one is interested in policies that behave well in all situations, which results in a multi-objective view on decision making. In this paper, we consider policies for the expected discounted reward measure of MDPs with uncertain parameters. In particular, the approach is defined for bounded-parameter MDPs (BMDPs) [8]. In this setting the worst-, best- and average-case performances of a policy are analyzed simultaneously, which yields a multi-scenario multi-objective optimization problem. The paper presents and evaluates approaches to compute the pure Pareto optimal policies in the value vector space.
    Comment: 9 pages, 5 figures, preprint for VALUETOOLS 201
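    The worst- and best-case analysis can be illustrated on a toy bounded-parameter MDP: for a fixed policy, interval value iteration picks, at each Bellman update, the transition distribution inside the probability intervals that minimizes (or maximizes) the expected next-state value. All numbers below are invented, and this is a sketch of the standard robust update for a fixed policy, not the paper's multi-objective algorithm.

    ```python
    GAMMA = 0.9  # discount factor

    # Toy 2-state problem under a fixed policy (all numbers invented).
    rewards = [1.0, 0.0]
    lo = [[0.6, 0.1], [0.2, 0.5]]   # lower transition-probability bounds P(s'|s)
    hi = [[0.9, 0.4], [0.5, 0.8]]   # upper transition-probability bounds

    def extreme_dist(low, high, values, worst=True):
        """Distribution inside the intervals that minimizes (worst) or maximizes
        (best) expected next-state value: start from the lower bounds, then push
        the remaining mass toward the least (or most) valuable successors."""
        p = list(low)
        slack = 1.0 - sum(low)
        for i in sorted(range(len(values)), key=lambda j: values[j],
                        reverse=not worst):
            add = min(high[i] - low[i], slack)
            p[i] += add
            slack -= add
        return p

    def interval_value_iteration(worst=True, iters=500):
        """Fixed-policy robust value iteration over the interval model."""
        v = [0.0, 0.0]
        for _ in range(iters):
            v = [rewards[s] + GAMMA * sum(
                     p * vn
                     for p, vn in zip(extreme_dist(lo[s], hi[s], v, worst), v))
                 for s in range(len(v))]
        return v

    v_worst = interval_value_iteration(worst=True)   # guaranteed lower bounds
    v_best = interval_value_iteration(worst=False)   # optimistic upper bounds
    print(v_worst, v_best)
    ```

    The gap between the two value vectors is exactly what motivates the paper's multi-objective view: a policy optimal for `v_worst` may give away much of the achievable `v_best`.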