1,511 research outputs found

    Large-Scale Data Management and Analysis (LSDMA) - Big Data in Science

    B!SON: A Tool for Open Access Journal Recommendation

    Finding a suitable open access journal to publish scientific work is a complex task: researchers have to navigate a constantly growing number of journals, institutional agreements with publishers, funders’ conditions and the risk of predatory publishers. To help with these challenges, we introduce a web-based journal recommendation system called B!SON. It is developed based on a systematic requirements analysis, is built on open data, gives publisher-independent recommendations and works across domains. It suggests open access journals based on the title, abstract and references provided by the user. The recommendation quality has been evaluated using a large test set of 10,000 articles. Development by two German scientific libraries ensures the longevity of the project.
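    B!SON's actual ranking pipeline is not described in the abstract above. Purely as an illustration of the general idea of matching a manuscript's title and abstract against journal content, the following is a minimal TF-IDF/cosine-similarity sketch; the journal names, profile texts and the recommend() helper are invented for this example.

```python
# Minimal sketch of text-similarity journal recommendation (illustrative only,
# not B!SON's actual method). Each candidate journal is represented by a text
# profile, e.g. concatenated titles/abstracts of articles it has published.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical journal profiles.
journal_profiles = {
    "Journal of Open Data Studies": "open data repositories metadata curation sharing",
    "Scholarly Communication Review": "open access publishing peer review journal policy",
}

def recommend(title: str, abstract: str, top_n: int = 5):
    """Rank candidate journals by TF-IDF cosine similarity to the manuscript."""
    names = list(journal_profiles)
    corpus = [journal_profiles[n] for n in names] + [f"{title} {abstract}"]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    scores = cosine_similarity(tfidf[-1], tfidf[:-1]).ravel()
    return sorted(zip(names, scores), key=lambda x: x[1], reverse=True)[:top_n]

print(recommend("Open access journal recommendation",
                "We match manuscripts to journals using open metadata."))
```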

    A Linux Implementation of Temporal Access Controls

    Control of access to information based upon temporal attributes can add another dimension to access control. To demonstrate the feasibility of operating system-level support for temporal access controls, the Time Interval File Protection System (TIFPS), a prototype of the Time Interval Access Control (TIAC) model, has been implemented by modifying Linux extended attributes to include temporal metadata associated both with files and users. The Linux Security Module was used to provide hooks for temporal access control logic. In addition, a set of utilities was modified to be TIFPS-aware. These tools permit users to view and manage the temporal attributes associated with their files and directories. Functional, performance, and concurrency testing were conducted. The ability of TIFPS to grant or revoke access in the future, as well as to limit access to specific time intervals, enhances traditional information control and sharing.
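    TIFPS enforces its checks inside the kernel via Linux Security Module hooks; the sketch below is only a user-space illustration of the underlying idea, namely storing a time interval in an extended attribute and testing the current time against it. The attribute name user.tifps.interval and the timestamp encoding are assumptions, not the prototype's actual format.

```python
# User-space illustration (Linux-only) of interval-based file access control.
# The real TIFPS prototype performs this check in the kernel; the xattr name
# and encoding used here are hypothetical.
import os
import time

TIFPS_ATTR = "user.tifps.interval"  # hypothetical extended-attribute name

def set_access_interval(path: str, start: float, end: float) -> None:
    """Store an allowed [start, end] interval (epoch seconds) on the file."""
    os.setxattr(path, TIFPS_ATTR, f"{start}:{end}".encode())

def access_allowed(path: str, now=None) -> bool:
    """Return True if the current time falls inside the stored interval."""
    try:
        start, end = map(float, os.getxattr(path, TIFPS_ATTR).split(b":"))
    except OSError:
        return True  # no temporal restriction recorded on this file
    now = time.time() if now is None else now
    return start <= now <= end
```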

    Automatic generation of software interfaces for supporting decision-making processes. An application of domain engineering & machine learning

    Data analysis is a key process to foster knowledge generation in particular domains or fields of study. With a strong informative foundation derived from the analysis of collected data, decision-makers can make strategic choices with the aim of obtaining valuable benefits in their specific areas of action. However, given the steady growth of data volumes, data analysis needs to rely on powerful tools to enable knowledge extraction. Information dashboards offer a software solution to analyze large volumes of data visually, to identify patterns and relations, and to make decisions according to the presented information. But decision-makers may have different goals and, consequently, different necessities regarding their dashboards. Moreover, the variety of data sources, structures, and domains can hamper the design and implementation of these tools. This Ph.D. thesis tackles the challenge of improving the development process of information dashboards and data visualizations while enhancing their quality and features in terms of personalization, usability, and flexibility, among others. Several research activities have been carried out to support this thesis. First, a systematic literature mapping and review was performed to analyze different methodologies and solutions related to the automatic generation of tailored information dashboards. The outcomes of the review led to the selection of a model-driven approach in combination with the software product line paradigm to deal with the automatic generation of information dashboards. In this context, a meta-model was developed following a domain engineering approach. This meta-model represents the skeleton of information dashboards and data visualizations through the abstraction of their components and features, and it has been the backbone of the subsequent generative pipeline for these tools. The meta-model and generative pipeline have been tested through their integration in different scenarios, both theoretical and practical. Regarding the theoretical dimension of the research, the meta-model has been successfully integrated with another meta-model to support knowledge generation in learning ecosystems, and used as a framework to conceptualize and instantiate information dashboards in different domains. In terms of practical applications, the focus has been put on how to transform the meta-model into an instance adapted to a specific context, and how to finally transform this latter model into code, i.e., the final, functional product. These practical scenarios involved the automatic generation of dashboards in the context of a Ph.D. programme, the application of Artificial Intelligence algorithms in the process, and the development of a graphical instantiation platform that combines the meta-model and the generative pipeline into a visual generation system. Finally, different case studies have been conducted in the employment and employability, health, and education domains. The number of applications of the meta-model across theoretical and practical dimensions and domains is itself a result. Every outcome associated with this thesis is driven by the dashboard meta-model, which also proves its versatility and flexibility when it comes to conceptualizing, generating, and capturing knowledge related to dashboards and data visualizations.
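    The thesis's meta-model and generative pipeline are far richer than anything shown here; the toy sketch below only illustrates the model-driven principle of transforming an abstract dashboard description into a concrete artifact. The DashboardModel and Visualization classes and the HTML generator are invented for this illustration.

```python
# Toy model-to-code transformation: a declarative dashboard model is turned
# into a concrete artifact (a trivial HTML skeleton). Illustrative only.
from dataclasses import dataclass

@dataclass
class Visualization:
    title: str
    kind: str         # e.g. "bar", "line", "table"
    data_source: str  # e.g. a dataset or query identifier

@dataclass
class DashboardModel:
    name: str
    visualizations: list

def generate_html(model: DashboardModel) -> str:
    """Transform the abstract dashboard model into an HTML stub."""
    panels = "\n".join(
        f'  <section class="panel {v.kind}" data-source="{v.data_source}">'
        f"<h2>{v.title}</h2></section>"
        for v in model.visualizations
    )
    return f"<html><body>\n<h1>{model.name}</h1>\n{panels}\n</body></html>"

model = DashboardModel("PhD Programme KPIs", [
    Visualization("Enrolments per year", "bar", "enrolments_by_year"),
    Visualization("Average completion time", "line", "completion_time"),
])
print(generate_html(model))
```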

    Virtualisation and Thin Client: A Survey of Virtual Desktop Environments

    This survey examines some of the leading commercial Virtualisation and Thin Client technologies. Reference is made to a number of academic research sources and to prominent industry specialists and commentators. A basic virtualisation laboratory model is assembled to demonstrate fundamental Thin Client operations and to clarify potential problem areas.

    Runtime Adaptation of Scientific Service Workflows

    Software landscapes are subject to change rather than being complete once they have been built. Changes may be caused by modified customer behavior, a shift to new hardware resources, or otherwise changed requirements. In such situations, several challenges arise: new architectural models have to be designed and implemented, existing software has to be integrated, and, finally, the new software has to be deployed, monitored, and, where appropriate, optimized during runtime under realistic usage scenarios. All of these situations often demand manual intervention, which makes them error-prone. This thesis addresses these types of runtime adaptation. Based on service-oriented architectures, an environment is developed that enables the integration of existing software (i.e., the wrapping of legacy software as web services). A workflow modeling tool is provided that aims at an easy-to-use approach by separating the role of the workflow expert from the role of the domain expert. For the phase after workflow development, tools are presented that observe the executing infrastructure and perform automatic scale-in and scale-out operations. Infrastructure-as-a-Service providers are used to scale the infrastructure in a transparent and cost-efficient way, and the deployment of the necessary middleware tools is automated. The use of a distributed infrastructure can lead to communication problems; in order to keep workflows robust, these exceptional cases need to be treated. Handling them inside the workflow, however, mixes the process logic with infrastructural details and bloats it, which increases its complexity. In this work, a module is presented that deals automatically with infrastructural faults and thereby allows the separation of these two layers to be kept. When services or their components are hosted in a distributed environment, some requirements need to be addressed at each service separately. Techniques such as object-oriented programming or design patterns like the interceptor pattern ease the adaptation of service behavior or structure, but these methods still require modifying the configuration or the implementation of each individual service. Aspect-oriented programming, on the other hand, allows functionality to be woven into existing code even without having its source. Since the functionality needs to be woven into the code, it depends on the specific implementation; in a service-oriented architecture, where the implementation of a service is unknown, this approach clearly has its limitations. The request/response aspects presented in this thesis overcome this obstacle and provide a new, SOA-compliant method to weave functionality into the communication layer of web services. The main contributions of this thesis are the following. Shifting towards a service-oriented architecture: the generic and extensible Legacy Code Description Language and the corresponding framework allow existing software to be wrapped, e.g., as web services, which afterwards can be composed into a workflow with SimpleBPEL without overburdening the domain expert with technical details, which are instead handled by a workflow expert. Runtime adaptation: based on the standardized Business Process Execution Language, an automatic scheduling approach is presented that monitors all used resources and is able to automatically provision new machines in case a scale-out becomes necessary; if the resources' load drops, e.g., because of fewer workflow executions, a scale-in is also performed automatically. The scheduling algorithm takes the data transfer between the services into account in order to prevent allocations that would increase the workflow's makespan due to unnecessary or disadvantageous data transfers. Furthermore, a multi-objective scheduling algorithm based on a genetic algorithm is able to additionally consider cost, so that a user can define her own preferences between optimized workflow execution times and minimized costs. Possible communication errors are automatically detected and, subject to certain constraints, corrected. Adaptation of communication: the presented request/response aspects allow functionality to be woven into the communication of web services. Because the pointcut language relies only on the exchanged documents, the implementation of the services must neither be known nor be available. The weaving process itself is modeled using web services; in this way, the concept of request/response aspects is naturally embedded into a service-oriented architecture.
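    As a rough illustration of the scale-out/scale-in decision described above (not the thesis's actual BPEL-engine integration or scheduling algorithm), a minimal threshold-based autoscaling step could look like the following; the thresholds and the monitoring/provisioning callbacks are placeholders.

```python
# Minimal threshold-based autoscaling step (illustrative placeholders only).
SCALE_OUT_LOAD = 0.80  # average utilisation above which a machine is added
SCALE_IN_LOAD = 0.30   # average utilisation below which a machine is removed
MIN_MACHINES = 1

def autoscale(get_average_load, provision, decommission, machines: int) -> int:
    """One scheduling step: adjust the machine count to the observed load."""
    load = get_average_load()
    if load > SCALE_OUT_LOAD:
        provision()        # e.g. request a new IaaS instance and deploy middleware
        machines += 1
    elif load < SCALE_IN_LOAD and machines > MIN_MACHINES:
        decommission()     # e.g. drain and release an idle instance
        machines -= 1
    return machines

# Example with stubbed infrastructure calls:
machines = 2
machines = autoscale(lambda: 0.9, lambda: print("scale out"),
                     lambda: print("scale in"), machines)
```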

    Métodos computacionais para otimização de desempenho em redes de imagem médica [Computational methods for performance optimization in medical imaging networks]

    Over the last few years, medical imaging has consolidated its position as a major means of clinical diagnosis. The amount of data generated by medical imaging practice is increasing tremendously. As a result, repositories are turning into rich databanks of semi-structured data related to patients, ailments, equipment and other stakeholders involved in the medical imaging panorama. The exploration of these repositories for secondary uses of data promises to elevate the quality standards and efficiency of medical practice. However, supporting these advanced usage scenarios in traditional institutional systems raises many technical challenges that are yet to be overcome. Moreover, the reported poor performance of standard protocols has opened doors to the general usage of proprietary solutions, compromising the interoperability necessary for supporting these advanced scenarios. This thesis has researched, developed, and now proposes a series of computational methods and architectures intended to maximize the performance of multi-institutional medical imaging environments. The methods are intended to improve the performance of standard protocols for medical imaging content discovery and retrieval. The main goal is to use them to increase the acceptance of vendor-neutral solutions through the improvement of their performance. Moreover, the thesis intends to promote the adoption of such standard technologies in advanced scenarios that are still a mirage nowadays, such as clinical research or data analytics directly on top of live institutional repositories. Finally, these achievements will facilitate the cooperation between healthcare institutions and researchers, resulting in an increment of healthcare quality and institutional efficiency.

    The various medical imaging modalities have been consolidating their dominant position as a complementary means of diagnosis. The number of procedures performed and the volume of data generated have increased significantly in recent years, putting pressure on the networks and systems that archive and distribute these studies. Imaging study repositories are rich data sources containing semi-structured data related to patients, pathologies, procedures and equipment. Exploring these repositories for research and business intelligence purposes has the potential to improve the quality standards and efficiency of clinical practice. However, these advanced scenarios are difficult to accommodate in the current reality of institutional systems and networks. The poor performance of some standard protocols used in production environments has led to the use of proprietary solutions in these application niches, limiting system interoperability and the integration of data sources. This doctoral work has investigated, developed and now proposes a set of computational methods whose objective is to maximize the performance of current medical imaging networks in content search and retrieval services, promoting their use in environments with demanding application requirements. The proposals were instantiated on top of an open-source platform and are expected to help promote its widespread use as a vendor-neutral solution. The methodologies were also instantiated and validated in advanced usage scenarios. Finally, the work developed is expected to facilitate research in live hospital environments, thereby promoting an increase in the quality and efficiency of services. Programa Doutoral em Engenharia Informática.

    Workshop proceedings: Information Systems for Space Astrophysics in the 21st Century, volume 1

    The Astrophysical Information Systems Workshop was one of the three Integrated Technology Planning workshops. Its objectives were to develop an understanding of future mission requirements for information systems, the potential role of technology in meeting these requirements, and the areas in which NASA investment might have the greatest impact. Workshop participants were briefed on the astrophysical mission set, with an emphasis on those missions that drive information systems technology, the existing NASA space-science operations infrastructure, and the ongoing and planned NASA information systems technology programs. Program plans and recommendations were prepared in five technical areas: Mission Planning and Operations; Space-Borne Data Processing; Space-to-Earth Communications; Science Data Systems; and Data Analysis, Integration, and Visualization.