
    Recording accurate process documentation in the presence of failures

    Scientific and business communities place unprecedented requirements on provenance, where the provenance of a data item is the process that led to that item. Previous work conceived a computer-based representation of past executions for determining provenance, termed process documentation, and developed a protocol, PReP, to record process documentation in service-oriented architectures. However, PReP assumes a failure-free environment. In the presence of failures, the recorded process documentation may become inaccurate: it no longer reflects reality and hence cannot be trusted or used. This paper outlines our solution, F-PReP, a protocol for recording accurate process documentation in the presence of failures.

    Recording Process Documentation in the Presence of Failures

    Scientific and business communities place unprecedented requirements on provenance, where the provenance of a data item is the process that led to that item. Previous work conceived a computer-based representation of past executions for determining provenance, termed process documentation, and developed a protocol, PReP, to record process documentation in service-oriented architectures. However, PReP assumes a failure-free environment. Under failures, process documentation may fail to be recorded at all, losing the evidence that a process occurred. This is unacceptable for applications that rely on process documentation and can have disastrous consequences. This paper describes our solution, F-PReP, a protocol for recording process documentation in the presence of failures. A complete formalization of the protocol using Abstract State Machines is also presented.
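    Neither abstract spells out the recording mechanism, so the following is only a minimal sketch of one general failure-tolerance idea, buffered recording with acknowledgment and retry. Every name here (ProvenanceStore, record_with_retry, and so on) is a hypothetical stand-in, not PReP's or F-PReP's actual interface or message format, which the papers define precisely.

```python
import time

class ProvenanceStoreUnavailable(Exception):
    """Raised when the provenance store cannot accept an assertion."""

class ProvenanceStore:
    """Hypothetical in-memory store; fails the first `fail_times` calls
    to simulate the transient failures a protocol like F-PReP must survive."""
    def __init__(self, fail_times=0):
        self._fail_times = fail_times
        self.assertions = []

    def record(self, assertion):
        if self._fail_times > 0:
            self._fail_times -= 1
            raise ProvenanceStoreUnavailable("store temporarily down")
        self.assertions.append(assertion)

def record_with_retry(store, assertion, retries=3, backoff_s=0.1):
    """Keep the assertion with the asserting actor until the store
    acknowledges it, so a transient failure delays, rather than loses,
    the evidence that a process step occurred."""
    for attempt in range(retries + 1):
        try:
            store.record(assertion)
            return
        except ProvenanceStoreUnavailable:
            time.sleep(backoff_s * (2 ** attempt))  # exponential backoff
    raise RuntimeError(f"assertion could not be recorded: {assertion!r}")

store = ProvenanceStore(fail_times=2)
record_with_retry(store, {"asserter": "serviceA", "event": "invoke"})
print(store.assertions)  # assertion survives two transient store failures
```

    The point of the sketch is the invariant rather than the mechanics: an assertion stays buffered at its source until the store acknowledges it, which is what keeps the documentation accurate and complete despite failures.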

    SANTO: Social Aerial NavigaTion in Outdoors

    In recent years, advances in remote connectivity, the miniaturization of electronic components, and computing power have led to the integration of these technologies into everyday devices such as cars and aerial vehicles. Among these, a consumer-grade option that has gained popularity is the drone, or unmanned aerial vehicle, namely the quadrotor. Although until recently drones were not used for commercial applications, their inherent potential for tasks that call for small, intelligent devices is huge. However, while the integrated hardware has advanced exponentially, the software used in these applications has not yet been refined to the same degree. Recently, this shift has become visible in the improvement of common robotics tasks such as object tracking and autonomous navigation. Moreover, these challenges grow when the dynamic nature of the real world is taken into account, where knowledge about the current environment is constantly changing. These settings matter for improving human-robot interaction, where the potential use of these devices is clear and algorithms are being developed to improve the situation. Using the latest advances in artificial intelligence, the behavior of the human brain is simulated by so-called neural networks, so that the computing system behaves as similarly to a human as possible. To this end, the system learns by trial and error, which, much like human learning, requires a considerable set of prior experiences for the algorithm to retain the desired behavior. Applying these technologies to human-robot interaction does narrow the gap. Even so, from a bird's-eye view, a noticeable share of the time spent applying these technologies goes into curating a high-quality dataset, to ensure that the learning process is optimal and that no wrong actions are retained. It is therefore essential to have a development platform in place that enforces these principles throughout the whole process of creating and optimizing the algorithm. In this work, multiple existing handicaps found in pipelines of this computational scale are exposed, and each is approached in an independent and simple manner, so that the proposed solutions can be leveraged by as many workflows as possible. On one side, this project concentrates on reducing the number of bugs introduced by flawed data, helping researchers focus on developing more sophisticated models. On the other side, it addresses the shortage of integrated development systems for this kind of pipeline, paying special attention to those that use simulated or controlled environments, with the goal of easing the continuous iteration of these pipelines.

    Thanks to the increasing popularity of drones, the research and development of autonomous capabilities has become easier. However, due to the challenge of integrating multiple technologies, the available software stack for this task is restricted. In this thesis, we highlight the divergences among unmanned-aerial-vehicle simulators and propose a platform that allows faster and more in-depth prototyping of machine learning algorithms for these drones.
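    The abstract's central practical concern is keeping flawed data out of the learning loop. As a minimal sketch of that idea (the Sample schema, the image_path and velocity_cmd fields, and the label range [-1, 1] are illustrative assumptions, not the project's actual data format), a validation pass over a dataset of drone frames might look like this:

```python
import os
from dataclasses import dataclass
from typing import Iterable, List, Set, Tuple

@dataclass
class Sample:
    image_path: str       # frame captured by the drone or a simulator
    velocity_cmd: float   # labelled control command, assumed to lie in [-1, 1]

def validate(samples: Iterable[Sample]) -> Tuple[List[Sample], List[str]]:
    """Filter flawed records out of the dataset before training, so the
    model cannot retain behaviour learned from corrupt or duplicate data."""
    clean: List[Sample] = []
    errors: List[str] = []
    seen: Set[str] = set()
    for s in samples:
        if not os.path.isfile(s.image_path):
            errors.append(f"missing file: {s.image_path}")
        elif not -1.0 <= s.velocity_cmd <= 1.0:
            errors.append(f"label out of range: {s.velocity_cmd}")
        elif s.image_path in seen:
            errors.append(f"duplicate frame: {s.image_path}")
        else:
            seen.add(s.image_path)
            clean.append(s)
    return clean, errors
```

    Rejecting bad records up front, before any training run, keeps wrong experiences out of the trial-and-error loop, which is exactly the failure mode the abstract warns about.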

    Autonomic networks: engineering the self-healing property


    System configuration and executive requirements specifications for reusable shuttle and space station/base


    Architectures v/s Microservices

    As technology has evolved, it has continually found better ways to build applications and improve their efficiency. New techniques have been learned by adapting old technologies and observing how markets shift toward new trends to satisfy their customers and shareholders. Building on Service Oriented Architecture (SOA) and evolving techniques in cloud computing, Web 2.0 gave rise to a new architectural pattern, evolved from the conventional monolithic approach, known as microservice architecture (MSA). This pattern develops an application by breaking a substantial application into a group of smaller ones, each running in its own process and communicating through an API. This style of application development suits many infrastructures, especially cloud environments. The pattern evolved to satisfy the concepts of domain-driven design, continuous integration, and automated infrastructure more effectively. MSA has created a way to develop and deploy small, scalable applications, which allows enterprise-level applications to adjust dynamically to their resources. This paper discusses what the architecture is, what makes it necessary, what factors affect best-fit architecture choices, how microservices-based architecture has evolved, and what factors are driving service-based architectures, in addition to comparing SOA and microservices. By analyzing a few popular architectures, the factors that help in choosing an architecture design are compared with MSA to show the benefits and challenges that may arise as an enterprise shifts its development architecture to microservices.
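    To make the pattern concrete, here is a minimal, self-contained sketch of a single microservice using only the Python standard library. The inventory domain, the port, and the JSON shape are illustrative assumptions, not drawn from the paper; the only point is that the service owns its state, runs in its own process, and exposes a small API.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative in-memory state owned exclusively by this service.
INVENTORY = {"sku-1": 12, "sku-2": 0}

class InventoryHandler(BaseHTTPRequestHandler):
    """Single-responsibility service: answer stock queries, nothing else."""

    def do_GET(self):
        sku = self.path.strip("/")
        if sku in INVENTORY:
            body = json.dumps({"sku": sku, "stock": INVENTORY[sku]}).encode()
            self.send_response(200)
        else:
            body = json.dumps({"error": "unknown sku"}).encode()
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # e.g. curl http://127.0.0.1:8001/sku-1  ->  {"sku": "sku-1", "stock": 12}
    HTTPServer(("127.0.0.1", 8001), InventoryHandler).serve_forever()
```

    In an MSA, a second service (orders, billing) would run as its own process and call this HTTP API rather than share the inventory service's data store; that process boundary is what lets each piece be scaled, deployed, and replaced independently.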

    “Computing” Requirements for Open Source Software: A Distributed Cognitive Approach

    Most requirements engineering (RE) research has been conducted in the context of structured and agile software development. Software, however, is increasingly developed in open source software (OSS) forms, which have several unique characteristics. In this study, we approach OSS RE as a sociotechnical, distributed cognitive process in which distributed actors “compute” requirements—i.e., transform requirements-related knowledge into forms that foster a shared understanding of what the software is going to do and how it can be implemented. Such computation takes place through the social sharing of knowledge and the use of heterogeneous artifacts. To illustrate the value of this approach, we conduct a case study of a popular OSS project, Rubinius—a runtime environment for the Ruby programming language—and identify ways in which the cognitive workload associated with RE becomes distributed socially, structurally, and temporally across actors and artifacts. We generalize our observations into an analytic framework of OSS RE, which delineates three stages of requirements computation: excavation, instantiation, and testing-in-the-wild. We show how the distributed, dynamic, and heterogeneous computational structure underlying OSS development builds an effective mechanism for managing requirements. Our study contributes to sorely needed theorizing of appropriate RE processes within highly distributed environments, as it identifies and articulates several novel mechanisms that undergird the cognitive processes associated with distributed forms of RE.