
    Decentralized Learning Infrastructures for Community Knowledge Building

    Learning in Communities of Practice (CoPs) makes up a significant portion of today's knowledge gain. However, little technological support is tailored specifically to CoPs and their particular strengths and challenges. Worse, CoPs often do not possess the resources to host or develop a software ecosystem to support their activities. In this contribution, we describe a decentralized learning infrastructure for community knowledge building. It accounts for the constant change of these communities by providing a lightweight and scalable infrastructure that requires no central coordination or facilitation. As a real use case, we implement a question-based dialog application for inquiry-based learning and ignorance modeling on our infrastructure. Additionally, we explore the possibility of using social bots to connect the services provided by the decentralized infrastructure to communication tools already present in most communities (e.g. chat platforms). Following a design science approach, we describe a multi-step evaluation of both the infrastructure and the application, together with the improvements made to the resulting artifacts at each step. Our results indicate the relevance of our approach, which may serve as an example of how decentralized learning infrastructures can be applied by CoPs for knowledge building outside of formal settings
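    To make the social-bot idea concrete, the following minimal Python sketch shows how such a bot might relay open questions from a decentralized learning service into a community chat channel. The service endpoint, webhook URL, and payload fields are illustrative assumptions, not components of the described system.

```python
"""Minimal sketch of a social bot bridging a decentralized learning
service to a community chat platform. The service endpoint and webhook
URL below are hypothetical placeholders."""
import time
import requests

SERVICE_URL = "https://example.org/questions/unanswered"   # hypothetical
CHAT_WEBHOOK = "https://chat.example.org/hooks/community"  # hypothetical

def relay_open_questions() -> None:
    """Fetch open questions from the learning service and post them to chat."""
    questions = requests.get(SERVICE_URL, timeout=10).json()
    for q in questions:
        requests.post(CHAT_WEBHOOK,
                      json={"text": f"Open question: {q['title']}"},
                      timeout=10)

if __name__ == "__main__":
    while True:
        relay_open_questions()
        time.sleep(300)  # poll every five minutes
```

    A webhook-based bridge like this keeps the bot loosely coupled: the infrastructure's services and the community's chat platform need not know about each other.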

    EGI user forum 2011: book of abstracts


    Scientific Workflows for Metabolic Flux Analysis

    Metabolic engineering is a highly interdisciplinary research domain that interfaces biology, mathematics, computer science, and engineering. Metabolic flux analysis with carbon tracer experiments (13C-MFA) is a particularly challenging metabolic engineering application that consists of several tightly interwoven building blocks such as modeling, simulation, and experimental design. While several general-purpose workflow solutions have emerged in recent years to support the realization of complex scientific applications, these approaches are only partially transferable to 13C-MFA workflows. While problems in other research fields (e.g., bioinformatics) are primarily centered around scientific data processing, 13C-MFA workflows have more in common with business workflows. For instance, many bioinformatics workflows are designed to identify, compare, and annotate genomic sequences by "pipelining" them through standard tools like BLAST. Typically, the next workflow task in the pipeline can be automatically determined by the outcome of the previous step. Five computational challenges have been identified in the endeavor of conducting 13C-MFA studies: organization of heterogeneous data, standardization of processes and the unification of tools and data, interactive workflow steering, distributed computing, and service orientation. The outcome of this thesis is a scientific workflow framework (SWF) that is custom-tailored to the specific requirements of 13C-MFA applications. The proposed approach – namely, designing the SWF as a collection of loosely coupled modules that are glued together with web services – facilitates the realization of 13C-MFA workflows by offering several features. By design, existing tools are integrated into the SWF using web service interfaces and foreign programming language bindings (e.g., Java or Python). Although the attributes "easy-to-use" and "general-purpose" are rarely associated with distributed computing software, the presented use cases show that the proposed Hadoop MapReduce framework eases the deployment of computationally demanding simulations on cloud and cluster computing resources. An important building block for allowing interactive, researcher-driven workflows is the ability to track all data that is needed to understand and reproduce a workflow. The standardization of 13C-MFA studies using a folder structure template and the corresponding services and web interfaces improves the exchange of information within a group of researchers. Finally, several auxiliary tools are developed in the course of this work to complement the SWF modules, ranging from simple helper scripts to visualization and data conversion programs. This solution distinguishes itself from other scientific workflow approaches by offering a system of loosely coupled components that are flexibly arranged to match the typical requirements of the metabolic engineering domain. Because the framework is modern and service-oriented, new applications are easily composed by reusing existing components
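    As an illustration of the loosely coupled, web-service-glued module design described above, the following Python sketch wraps a hypothetical command-line simulator behind a small HTTP interface using Flask. The tool name, route, and request fields are assumptions for demonstration, not the thesis's actual components.

```python
"""Sketch of exposing an existing simulation tool as a loosely coupled
web-service module, in the spirit of the SWF described above. The
'mfa-simulator' binary and its flags are hypothetical stand-ins."""
import subprocess
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/simulate", methods=["POST"])
def simulate():
    # Accept a flux model description and delegate to the external tool.
    model = request.get_json()
    result = subprocess.run(
        ["mfa-simulator", "--model", model["path"]],  # hypothetical tool
        capture_output=True, text=True, check=True,
    )
    return jsonify({"stdout": result.stdout})

if __name__ == "__main__":
    app.run(port=8080)
```

    Wrapping tools this way lets a workflow engine compose heterogeneous simulators over plain HTTP, regardless of their implementation language.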

    A Model for User-centric Information Security Risk Assessment and Response

    Managing and assessing information security risks in organizations is a well-understood and accepted approach, with literature providing a vast array of proposed tools, methods and techniques. These are, however, tailored to organizations, with little literature addressing how the same can be achieved for end-users, i.e. users who are solely responsible for their devices, their data and their own security decisions. Technical countermeasures alone have been found insufficient to protect such users, as they can be misused and leave users vulnerable to various threats. This research therefore focuses on a better understanding of human behavior, which is vital for ensuring an effective information security environment. Motivated by the fact that different users react differently to the same stimuli, identifying the reasons behind variations in security behavior, and why certain users could be "at risk" more than others, is a step towards developing techniques that can enhance users' behavior and protect them against security attacks. A user survey was undertaken to explore users' security behavior in several domains and to investigate the correlation between users' characteristics and their risk-taking behavior. Analysis of the results demonstrated that users' characteristics do play a significant role in affecting their security behavior risk levels. Based upon these findings, this study proposes a user-centric model that is intended to provide a comprehensive framework for assessing and communicating information security risks to users from the general public, with the aim of monitoring, assessing and responding to users' behavior in a continuous, individualized and timely manner. The proposed approach is built upon two components: assessing risks and communicating them. Alongside the traditional risk assessment formula, three risk estimation models are proposed: a user-centric model, a system-based model and an aggregated model that together create an individualized risk profile. As part of its novelty, both user-centric and behavioral factors are considered in the assessment, resulting in an individualized and timely risk assessment in granular form. In place of the traditional one-message/one-size-fits-all risk communication approach, a gradual response mechanism is proposed to respond to risk individually and persuasively and to educate users about their risk-taking behavior. Two experiments and a scenario-based simulation of users with varying user-centric factors were implemented to exercise the proposed model, show how it works, and evaluate its effectiveness and usefulness. The proposed approach worked as expected. Analysis of the experimental results indicated that risk can be assessed differently for the same behavior based upon a number of user-centric and behavioral factors, yielding an individualized, granular risk score and level. This granular risk assessment, moving beyond coarse high/medium/low ratings, provided a more insightful evaluation of both risk and response. The analysis also demonstrated that risk is not the same for all users and that the proposed model is effective in adapting to differences between users, offering a novel approach to assessing information security risks
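    As a rough illustration of how user-centric factors can make the classic likelihood-times-impact formula produce individualized, granular scores, consider the following Python sketch. The factor names, weights, and sample values are illustrative assumptions, not the thesis's actual estimation models.

```python
"""Illustrative sketch of a granular, user-centric risk score: the
traditional likelihood x impact formula modulated by per-user factors.
Factor names and weights are assumptions for demonstration."""
from dataclasses import dataclass

@dataclass
class UserProfile:
    awareness: float      # 0 (low security awareness) .. 1 (high)
    risk_appetite: float  # 0 (cautious) .. 1 (risk-seeking)

def risk_score(likelihood: float, impact: float, user: UserProfile) -> float:
    """Classic risk formula scaled by the user's behavioral profile."""
    base = likelihood * impact                       # traditional formula
    modifier = 1 + user.risk_appetite - user.awareness
    return max(0.0, base * modifier)

# Same behavior, different users, different granular scores.
cautious = UserProfile(awareness=0.9, risk_appetite=0.1)
careless = UserProfile(awareness=0.2, risk_appetite=0.8)
print(risk_score(0.6, 0.7, cautious))  # lower individualized risk
print(risk_score(0.6, 0.7, careless))  # higher individualized risk
```

    The point of the sketch is the shape of the computation: identical behavior yields different continuous scores per user, rather than one high/medium/low label for everyone.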

    Web observations: analysing Web data through automated data extraction

    In this thesis, a generic architecture for Web observations is introduced. Beginning with fundamental data aspects and technologies for building Web observations, requirements and architectural designs are outlined. Because Web observations are basic tools for collecting information from any Web resource, legal perspectives are discussed in order to give an understanding of recent regulations, e.g. the General Data Protection Regulation (GDPR). The general idea of Web observatories, their concepts, and experiments are presented to identify the best solution for Web data collection and, based thereon, for visualisation of data from any kind of Web resource. With the help of several Web observation scenarios, data sets were collected, analysed and eventually published in a machine-readable or visual form for users to interpret. The main research goal was to create a Web observation architecture that is able to collect information from any given Web resource, in order to make sense of a broad amount of as yet untapped information sources. To find this generally applicable architectural structure, several research projects with different designs were conducted. Eventually, the container-based building-block architecture emerged from these initial designs as the most flexible architectural structure. Thanks to these considerations and architectural designs, a flexible and easily adaptable architecture was created that is able to collect data from all kinds of Web resources. With such broad Web data collections, users can gain a more comprehensive understanding of and insight into real-life problems and the efficiency and profitability of services, as well as valuable information on the changes of a Web resource
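    The following Python sketch illustrates the basic shape of an automated Web observation task: fetch a resource, extract one value, and emit a timestamped, machine-readable record. The URL, CSS selector, and record fields are placeholders rather than components of the thesis's architecture.

```python
"""Minimal sketch of a single Web observation: fetch a Web resource,
extract a data point, and emit a timestamped record. URL and selector
are illustrative placeholders."""
import json
import time
import requests
from bs4 import BeautifulSoup

def observe(url: str, selector: str) -> dict:
    """Collect one observation from a Web resource."""
    html = requests.get(url, timeout=10).text
    node = BeautifulSoup(html, "html.parser").select_one(selector)
    return {
        "url": url,
        "observed_at": time.time(),
        "value": node.get_text(strip=True) if node else None,
    }

if __name__ == "__main__":
    record = observe("https://example.org", "h1")
    # Machine-readable output for later analysis or visualisation.
    print(json.dumps(record))
```

    In a container-based building-block architecture such as the one described, a scraper like this would run as one container, with storage and visualisation handled by separate, interchangeable blocks.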

    The Ticker, February 2, 2015

    The Ticker is the student newspaper of Baruch College. It has been published continuously since 1932, when the Baruch College campus was the School of Business and Civic Administration of the City College of New York

    Building a Simple Smart Factory

    This thesis describes (a) a survey of smart factories and their enabling technologies, (b) a methodology to build or retrofit a smart factory and (c) the building and operation of a simple smart factory using that methodology. A factory is an industrial site with large buildings and a collection of machines, operated by people to manufacture goods and deliver services. Such factories are made smart by incorporating sensing, processing, and autonomous responding capabilities. Developments in four main areas have contributed significantly to this incorporation of smartness: (a) sensor capabilities, (b) communication capabilities, (c) storage and processing of huge amounts of data and (d) better utilization of technology in management and further development. There is a flurry of literature on each of these four topics and their combinations. The findings from the literature can be summarized as follows. A sensor detects or measures a physical property and records, indicates, or otherwise responds to it; in real time, sensors can make a very large number of observations. The Internet is a global computer network providing a variety of information and communication facilities, and the Internet of Things (IoT) is the interconnection, via the Internet, of computing devices embedded in everyday objects, enabling them to send and receive data. Big data handling and the provision of data services are achieved through cloud computing. Due to the availability of computing power, big data can be handled and analyzed under different classifications using several different analytics, whose results can be used to trigger the autonomous responsive actions that make the factory smart. Having thus comprehended the literature, a seven-step methodology for building or retrofitting a smart factory was established. The seven steps are (a) situation analysis, where the condition of the current technology is studied, (b) breakdown prevention analysis, (c) sensor selection, (d) data transmission and storage selection, (e) data processing and analytics, (f) autonomous action network and (g) integration with the plant units. Experience in a cement factory highlighted that wear in a journal bearing causes plant stoppages and thus warrants a smart system to monitor it and make decisions. This experience was used to develop a laboratory-scale smart factory monitoring the wear of half-journal bearings. To mimic a plant unit, a load-carrying shaft supported by two half-journal bearings was chosen, and to mimic a factory with two plant units, two such shafts were used, giving four half-journal bearings to monitor. A USB Logitech C920 webcam operating at full-HD 1080p was used to take pictures at specified intervals, which were then analyzed to study the wear at those intervals. After the preliminary analysis, wear-versus-time data for all four bearings are available; now the 'making it smart' activity begins. Autonomous activities are based on various analyses of the wear-time data under different classifications. Remaining life, bearing-specific wear coefficients, weekly variation in wear and the condition of adjacent bearings are some of the characteristics that can be obtained from the analytics. These can then be used to send a message to the maintenance and supplies division alerting them to the need for a replacement in the near future, or about other bearings approaching the end of their life, so that a major overhaul can be planned if needed
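    To illustrate one of the analytics mentioned above, the following Python sketch estimates remaining bearing life by linearly extrapolating wear-versus-time measurements to an assumed wear limit. The limit, alert threshold, and sample data are illustrative values, not figures from the thesis.

```python
"""Sketch of a remaining-life analytic for a monitored bearing:
fit wear vs. time and extrapolate to an assumed wear limit.
The limit, threshold, and sample data are illustrative."""
import numpy as np

WEAR_LIMIT_MM = 2.0  # assumed allowable wear before replacement

def remaining_life_hours(hours: np.ndarray, wear_mm: np.ndarray) -> float:
    """Fit a linear wear model and extrapolate to the wear limit."""
    rate, offset = np.polyfit(hours, wear_mm, 1)  # slope, intercept
    return (WEAR_LIMIT_MM - offset) / rate - hours[-1]

# Example wear measurements, e.g. derived from periodic webcam images.
hours = np.array([0, 100, 200, 300, 400], dtype=float)
wear = np.array([0.00, 0.15, 0.31, 0.44, 0.62])

life = remaining_life_hours(hours, wear)
print(f"Estimated remaining life: {life:.0f} h")
if life < 200:  # assumed alert threshold
    print("Alert maintenance: replacement needed soon")
```

    An autonomous action network would run an analytic like this per bearing and route the alert to the maintenance and supplies division automatically.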

    Linked Research on the Decentralised Web

    This thesis is about research communication in the context of the Web. I analyse literature which reveals how researchers are making use of Web technologies for knowledge dissemination, as well as how individuals are disempowered by the centralisation of certain systems, such as academic publishing platforms and social media. I share my findings on the feasibility of a decentralised and interoperable information space where researchers can control their identifiers whilst fulfilling the core functions of scientific communication: registration, awareness, certification, and archiving. The contemporary research communication paradigm operates under a diverse set of sociotechnical constraints, which influence how units of research information and personal data are created and exchanged. Economic forces and non-interoperable system designs mean that researcher identifiers and research contributions are largely shaped and controlled by third-party entities; participation requires the use of proprietary systems. From a technical standpoint, this thesis takes a deep look at the semantic structure of research artifacts, and how they can be stored, linked and shared in a way that is controlled by individual researchers, or delegated to trusted parties. Further, I find that the ecosystem was lacking a technical Web standard able to fulfill the awareness function of research communication. Thus, I contribute a new communication protocol, Linked Data Notifications (published as a W3C Recommendation), which enables decentralised notifications on the Web, and provide implementations pertinent to the academic publishing use case. So far we have seen decentralised notifications applied in research dissemination and collaboration scenarios, as well as in archival activities and scientific experiments. Another core contribution of this work is a Web standards-based implementation of a client-side tool, dokieli, for decentralised article publishing, annotations and social interactions. dokieli can be used to fulfill the scholarly functions of registration, awareness, certification, and archiving, all in a decentralised manner, returning control of research contributions and discourse to individual researchers. The overarching conclusion of the thesis is that Web technologies can be used to create a fully functioning ecosystem for research communication. Using the framework of Web architecture, and loosely coupling the four functions, an accessible and inclusive ecosystem can be realised whereby users are able to use and switch between interoperable applications without interfering with existing data. Technical solutions alone do not suffice, of course, so this thesis also takes into account the need for a change in the traditional mode of thinking amongst scholars, and presents the Linked Research initiative as an ongoing effort toward researcher autonomy in a social system, and universal access to human- and machine-readable information. Outcomes of this outreach work so far include an increase in the number of individuals self-hosting their research artifacts, workshops publishing accessible proceedings on the Web, in-the-wild experiments with open and public peer review, and semantic graphs of contributions to conference proceedings and journals (the Linked Open Research Cloud).
Some of the future challenges include: addressing the social implications of decentralised Web publishing, as well as the design of ethically grounded interoperable mechanisms; cultivating privacy-aware information spaces; personal or community-controlled on-demand archiving services; and further design of decentralised applications that are aware of the core functions of scientific communication
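    Since Linked Data Notifications is a published W3C Recommendation, its basic flow, discovering a receiver's inbox and POSTing a JSON-LD notification to it, can be sketched directly. The following Python sketch shows that flow; the target URL and notification payload are illustrative, and a real receiver may advertise its inbox in the resource body rather than a Link header.

```python
"""Minimal sketch of sending a Linked Data Notification (W3C LDN):
discover the target's ldp:inbox, then POST a JSON-LD notification.
Target URL and payload are illustrative."""
import requests

def discover_inbox(target: str) -> str:
    """Find the inbox advertised in the target's Link headers."""
    resp = requests.head(target, timeout=10)
    links = requests.utils.parse_header_links(resp.headers.get("Link", ""))
    for link in links:
        if link.get("rel") == "http://www.w3.org/ns/ldp#inbox":
            return link["url"]
    raise ValueError("No LDN inbox advertised by target")

def send_notification(target: str, payload: dict) -> int:
    inbox = discover_inbox(target)
    resp = requests.post(inbox, json=payload,
                         headers={"Content-Type": "application/ld+json"},
                         timeout=10)
    return resp.status_code  # 201 Created on success

notification = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Announce",
    "object": "https://example.org/article",  # hypothetical article IRI
}
print(send_notification("https://example.org/article", notification))
```

    Decoupling inbox discovery from delivery is what lets senders, receivers, and consumers be implemented and swapped independently, which is the interoperability property the thesis builds on.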