
    Manufacturing Process Optimization Using Edge Analytics

    Most manufacturing plants collect some amount of time-series sensor data: streams of values and time stamps. In its raw form, however, this data is of little use to most types of analytics or machine learning aimed at process optimization. This thesis presents a novel solution to the problem: a software stack that uses the Predix Complex Event Processing Engine (Edge Analytics) to condition the data, combined with RFID for serialization. Each step in the construction of the solution is documented, from connecting equipment to analyzing and ingesting the data produced by the edge analytic. The solution was developed and piloted at the GE Grid Solutions plant in Clearwater, FL.
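
    A minimal Python sketch of the conditioning idea follows. It is an assumption-laden illustration, not the thesis's Predix implementation: the function name, record layouts, and the simple time-window join are hypothetical stand-ins for what the Complex Event Processing Engine and RFID serialization actually provide.

        from collections import defaultdict
        from statistics import mean

        def condition_readings(readings, rfid_events):
            """Key raw sensor readings to RFID-serialized units (hypothetical).

            readings    : iterable of (station, timestamp, value)
            rfid_events : iterable of (station, start_ts, end_ts, serial)
            """
            by_serial = defaultdict(list)
            for station, ts, value in readings:
                # Attribute the reading to whichever unit the RFID reader
                # saw at that station when the value was produced.
                for ev_station, start, end, serial in rfid_events:
                    if station == ev_station and start <= ts < end:
                        by_serial[serial].append(value)
                        break
            # One conditioned record per serialized unit, ready for ingestion
            # by downstream analytics or machine learning.
            return {serial: {"count": len(vals), "mean": mean(vals), "max": max(vals)}
                    for serial, vals in by_serial.items()}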

    Crowdsourcing Linked Data on listening experiences through reuse and enhancement of library data

    Research has approached the practice of musical reception in a multitude of ways, such as the analysis of professional critique, sales figures, and the psychological processes activated by the act of listening. Studies in the Humanities, on the other hand, have been hindered by the lack of structured evidence of actual experiences of listening as reported by the listeners themselves, a concern that has been voiced since the early Web era. It was nonetheless assumed that such evidence existed, albeit in purely textual form, and could not be leveraged until it was digitised and aggregated. The Listening Experience Database (LED) responds to this research need by providing a centralised hub for evidence of listening in the literature. Not only does LED support search and reuse across nearly 10,000 records, but it also provides machine-readable structured data on the contexts of listening. To take advantage of the mass of formal knowledge that already exists on the Web concerning these contexts, the entire framework adopts Linked Data principles and technologies. This also allows LED to directly reuse open data from the British Library for source documentation that is already published. Reused data are re-published as open data with enhancements obtained by expanding the model of the original data, such as the partitioning of published books and collections into individual stand-alone documents. The database was populated through crowdsourcing and has seamlessly incorporated data reuse from the very early data-entry phases. As the sources of the evidence often contain vague, fragmentary, or uncertain information, facilities were put in place to generate structured data out of such fuzziness. Alongside elaborating on these functionalities, this article provides insights into the most recent features of the latest instalment of the dataset and portal, such as the interlinking with the MusicBrainz database, the relaxation of geographical input constraints through text mining, and the plotting of key locations in an interactive geographical browser.
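
    As an illustration of the Linked Data publishing pattern described above, the following Python sketch (using rdflib) builds a toy listening-experience record and interlinks it with an external authority. The led: namespace, the property names, and the MusicBrainz identifier are placeholders, not LED's actual vocabulary or data.

        from rdflib import Graph, Literal, Namespace, URIRef
        from rdflib.namespace import RDF

        # Hypothetical namespace; the real LED vocabulary and URIs differ.
        LED = Namespace("http://example.org/led/")
        MB = Namespace("https://musicbrainz.org/artist/")

        g = Graph()
        g.bind("led", LED)

        exp = URIRef("http://example.org/led/experience/42")
        g.add((exp, RDF.type, LED.ListeningExperience))
        g.add((exp, LED.listener, Literal("An anonymous diarist")))
        g.add((exp, LED.heardWork, Literal("Symphony No. 5")))
        # Interlinking: point at an external authority (here MusicBrainz,
        # with a placeholder identifier) instead of duplicating artist data.
        g.add((exp, LED.composer, URIRef(MB + "0000-placeholder-mbid")))

        print(g.serialize(format="turtle"))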

    Verification and Configuration of Software-based Networks

    The innovative trends of Network Function Virtualization (NFV) and Software Defined Networking (SDN) have opened unprecedented opportunities in production environments such as data centers. While NFV decouples the software implementation of network functions (e.g., DPI and NAT) from their physical counterparts, SDN is in charge of dynamically chaining those functions to create network paths. One new opportunity of such software-based networks is to make service-provisioning models more flexible by enabling users to build their own service graphs: users can select the Virtual Network Functions (VNFs) to use and can specify how packets are processed and forwarded in their networks. This PhD thesis mainly covers topics related to the verification and configuration of service graphs. Concerning the challenges of network verification, our aim is to explore strategies that overcome the limitations of traditional techniques, which generally exploit complex modelling approaches and require considerable verification time. We therefore envision verification techniques based on simpler modelling approaches, making them much more efficient than existing proposals. Under these conditions, such novel approaches may work at run time and, in particular, may be performed before deploying the service graphs, in order to avoid unexpected network behaviours and detect errors as early as possible. Another requirement is that verification should take a reasonable amount of time from a VNF Orchestrator's point of view, with fair processing resources (e.g., CPU and memory). This is because we are in the context of flexible services, where the reconfiguration of network functions can be triggered frequently, both on user request and on management events. The first contribution of this thesis lies in the specification of service graphs by means of forwarding policies (i.e., a high-level specification of how packet flows are forwarded). While the majority of SDN verification tools operate on OpenFlow configurations, we have defined a formal model to detect a set of anomalies in forwarding policies (i.e., erroneous specifications that may cause misleading network conditions and states). The key factors that distinguish our work from existing approaches are the early detection of policy anomalies (i.e., before translating such policies into OpenFlow entries), which speeds up the fixing phase without even starting service deployment, and a scalable approach that achieves verification times in the order of milliseconds for medium- to large-sized networks. Another advancement in network verification has been the ability to verify networks that include stateful VNFs, i.e., functions that may dynamically change the forwarding path of a traffic flow according to their local algorithms and states (e.g., IDSs). Our second contribution is thus a verification approach that models the network and the involved (possibly stateful) VNFs as a set of first-order logic (FOL) formulas. These formulas are passed to the off-the-shelf SMT (Satisfiability Modulo Theories) solver Z3 in order to verify reachability-based properties. The proposed solution has been implemented in a tool named VeriGraph, released under the AGPLv3 license, which takes the functional configurations of all deployed VNFs (e.g., filtering rules on firewalls) into account when checking the network.
    The adopted approach achieves verification times in the order of milliseconds, which is compatible with the timing constraints of a VNF Orchestrator. Finally, concerning the configuration of VNFs, service graph deployment should include a strategy to deploy VNF configurations in order to fix bugs in case of verification failures. Here we face several challenges, such as the different mechanisms a network function may require for its configuration (REST API, CLI, etc.) and the configuration semantics, which depend on the function itself (e.g., router parameters are clearly different from firewall ones). We conclude this thesis by proposing a model-based configuration approach, which means defining a representation of the main configuration parameters of a VNF. This VNF model is then automatically processed by further software modules in the VNF architecture, which translate the configuration parameters into the particular format required by a VNF and deliver the produced configuration to the VNF through one of the configuration strategies (e.g., REST, configuration file) already supported by the function. The achieved results of this last work, with respect to the current state of the art, are the exploitation of a model-driven approach, which achieves higher flexibility, and the insertion of non-VNF-specific software modules, which avoids changes in the VNF implementation.
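
    To give a flavour of the SMT-based verification described above, the following Python sketch encodes a toy client-firewall-server chain as Boolean formulas and asks Z3 whether the server remains reachable under a blocking firewall configuration. The encoding is a deliberately simplified stand-in for VeriGraph's actual FOL model, and the variable names are invented for illustration.

        from z3 import And, Bool, Implies, Not, Solver, sat

        # reach_X means "a packet of this flow can arrive at X";
        # fw_blocks encodes the firewall's configured filtering rule.
        reach_fw, reach_srv = Bool("reach_fw"), Bool("reach_srv")
        fw_blocks = Bool("fw_blocks_flow")

        s = Solver()
        s.add(reach_fw)                                    # flow enters the chain
        s.add(Implies(And(reach_fw, Not(fw_blocks)), reach_srv))
        s.add(Implies(fw_blocks, Not(reach_srv)))
        s.add(fw_blocks)                                   # VNF configuration under test

        # Reachability property: can the flow still arrive at the server?
        s.add(reach_srv)
        print("reachable" if s.check() == sat else "unreachable")  # -> unreachable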

    Ground Systems Development Environment (GSDE) interface requirements analysis: Operations scenarios

    This report is a preliminary assessment of the functional and data interface requirements for the link between the GSDE GS/SPF (Amdahl) and the Integration, Verification, and Test Environments (IVTEs) of the Space Station Control Center (SSCC) and the Space Station Training Facility (SSTF). These interfaces will be involved in ground software development for both the control center and the simulation and training systems. We describe our understanding of the configuration management (CM) interface and the expected functional characteristics of the Amdahl-IVTE interface, and present a set of assumptions and questions that need to be considered and resolved in order to complete the definition of the interface functional and data requirements. The report includes a listing of the information items defined to describe software configuration items in the GSDE CM system, as well as listings of standard reports of CM information and of CM-related tools in the GSDE.
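
    For illustration only, the following Python sketch shows one plausible shape for a CM configuration-item record and a simple standard report over a set of such records. The field names and the report criterion are assumptions, not the actual GSDE information items, which the report itself enumerates.

        from dataclasses import dataclass, field

        @dataclass
        class ConfigurationItem:
            """Hypothetical CM record; real GSDE items are more extensive."""
            ci_id: str                  # unique identifier within the CM system
            name: str
            version: str
            facility: str               # e.g. "SSCC" or "SSTF"
            status: str = "development"
            dependencies: list = field(default_factory=list)

        def transfer_report(items):
            """Mimic a standard CM report: items staged for Amdahl-IVTE transfer."""
            return [ci for ci in items if ci.status == "verified"]

        items = [ConfigurationItem("CI-001", "telemetry-decoder", "1.2", "SSCC",
                                   status="verified")]
        print(transfer_report(items))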

    Managing the bazaar: commercialization and peripheral participation in mature, community-led free/open source software projects

    The thesis investigates two fundamental dynamics of participation and collaboration in mature, community-led Free/Open Source (F/OS) software projects: commercialization and peripheral participation. The aim of the thesis is to examine whether the power relations that underlie the F/OS model of development are indicative of a new form of power relations supported by ICTs. Theoretically, the thesis is located within the Communities of Practice (CoP) literature and draws upon Michel Foucault's ideas about the historical and relational character of power. It also mobilizes, to a lesser extent, Erving Goffman's notion of 'face-work'. This framework supports a methodology that questions the rationality of how F/OS is organized and examines the relations between employed coders and volunteers, experienced and inexperienced coders, and programmers and non-programmers. The thesis examines discursive and structural dimensions of collaboration and employs quantitative and qualitative methods. Structural characteristics are considered in the light of arguments about embeddedness. The thesis contributes insights into how the gift economy is embedded in the exchange economy and into the role of peripheral contributors. The analysis indicates that community-integrated paid developers have a key role in project development, maintaining the infrastructure aspects of the code base. It also suggests that programming and non-programming contributors are distinct in their make-up, priorities, and rhythms of participation, and that learning plays an important role in controlling access. The results show that volunteers are important drivers of peripheral activities, such as translation and documentation. The term 'autonomous peripherality' is used to capture the unique characteristics of these activities. These findings support the argument that centrality and peripherality are associated with the division of labour, which, in turn, is associated with employment relations and frameworks of institutional support. The thesis shows how the tensions produced by commercialization and peripheral participation are interwoven with values of meritocracy, ritual and strategic enactment of the idea of community, and tools and techniques developed to address the emergence of a set of problems specific to management and governance. These are characterized as 'technologies of communities'. It is argued that the emerging topology of F/OS participation, seen as a 'relational meshwork', is indicative of a redefinition of the relationship between sociality and economic production within mature, community-led F/OS projects.

    Next Steps in Signaling (NSIS): Framework

    Inferring undesirable behavior from P2P traffic analysis

    While peer-to-peer (P2P) systems have grown in popularity in recent years, their large scale and complexity make them difficult to reason about. In this paper, we argue that systematic analysis of the traffic characteristics of P2P systems can reveal a wealth of information about their behavior and highlight potential undesirable activities that such systems may exhibit. As a first step to this end, we present an offline and semi-automated approach to detect undesirable behavior. Our analysis is applied to real traffic traces collected at a Point-of-Presence (PoP) of a nation-wide ISP in which over 70% of the total traffic is due to eMule, a popular P2P file-sharing system. Flow-level measurements are aggregated into "samples" describing the activity of each host during a time interval. We then employ a clustering technique to automatically and coarsely identify similar behavior across samples, and extensively use domain knowledge to interpret and analyze the resulting clusters. Our analysis shows several examples of undesirable behavior, including evidence of DDoS attacks exploiting live P2P clients, significant amounts of unwanted traffic that may harm network performance, and instances where the performance of participating peers may be subverted by maliciously deployed servers. Identifying such patterns can benefit network operators, P2P system developers, and end users.
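
    As a rough illustration of the aggregation-and-clustering step, the following Python sketch builds per-host "samples" from a few hypothetical flow-level features and groups them with k-means. The feature set and values are invented, and the paper's actual samples use richer flow statistics; cluster interpretation would still rely on domain knowledge, as described above.

        import numpy as np
        from sklearn.cluster import KMeans

        # Each row is one host in one time interval:
        # [flows_started, distinct_peers, upload/download ratio, failed-flow fraction]
        samples = np.array([
            [120,  95, 0.9, 0.05],   # ordinary file-sharing behaviour
            [110, 100, 1.1, 0.04],
            [900, 850, 0.1, 0.80],   # many failed flows to many peers: DDoS-like
            [880, 870, 0.1, 0.85],
            [ 15,   3, 5.0, 0.01],   # few peers, heavy upload: server-like host
        ])

        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(samples)
        for row, label in zip(samples, labels):
            print(label, row)   # clusters are then interpreted with domain knowledge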