
    Making sense of actor behaviour: an algebraic filmstrip pattern and its implementation

    Sense-making for actor-based systems is challenging because of the non-determinism arising from concurrent behaviour. One strategy is to produce a trace of event histories that can be processed post-execution. Given a semantic domain, the histories can be translated into visual representations of the semantics in the form of filmstrips. This paper proposes a general pattern for the production of filmstrips from actor histories that can be implemented independently of the particular data types used to represent the events, semantics and graphical displays. We demonstrate the pattern with respect to a simulation involving predators and prey, a typical agent-based application.
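The data-type-independent pattern the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the paper's actual design: the `Filmstrip` class, the tuple-based predator-prey state and the string-valued frames are all hypothetical stand-ins for whatever event, semantic and display types a real implementation would use.

```python
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar

E = TypeVar("E")  # event type (one entry of an actor history)
S = TypeVar("S")  # semantic state in the chosen semantic domain
F = TypeVar("F")  # frame type (one picture of the filmstrip)

@dataclass
class Filmstrip(Generic[E, S, F]):
    """Generic filmstrip pattern: parametric in events, semantics and display."""
    step: Callable[[S, E], S]  # semantic interpretation of a single event
    render: Callable[[S], F]   # map a semantic state to a displayable frame

    def run(self, initial: S, history: list[E]) -> list[F]:
        # Fold the event history into successive states, rendering each one.
        frames = [self.render(initial)]
        state = initial
        for event in history:
            state = self.step(state, event)
            frames.append(self.render(state))
        return frames

# Hypothetical predator-prey instance: state is (prey, predators),
# events are population deltas, frames are plain strings.
strip = Filmstrip(
    step=lambda s, e: (s[0] + e[0], s[1] + e[1]),
    render=lambda s: f"prey={s[0]} predators={s[1]}",
)
frames = strip.run((10, 2), [(-1, 1), (-2, 0)])
# → ['prey=10 predators=2', 'prey=9 predators=3', 'prey=7 predators=3']
```

Because `Filmstrip` only composes the two supplied functions, swapping in a different semantic domain or graphical display changes the type parameters, not the pattern itself.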

    NIAS Annual Report 2018-2019


    NIAS Annual Report 2019-2020


    Environmental Management

    Environmental Management - Pollution, Habitat, Ecology, and Sustainability includes sixteen chapters that discuss pressing environmental issues in diverse locations around the world. Chapters discuss methods, technologies, analyses, and actions that may enlighten and enable decision-makers and managers in their quests for control of environmental problems. The authors present the facts and the challenges behind the assorted issues and offer new perspectives for contending with natural, social, economic, and political aspects of management.

    On the enhancement of Big Data Pipelines through Data Preparation, Data Quality, and the distribution of Optimisation Problems

    Nowadays, data are fundamental for companies, providing operational support by facilitating daily transactions. Data have also become the cornerstone of strategic decision-making processes in businesses. For this purpose, numerous techniques exist for extracting knowledge and value from data. For example, optimisation algorithms excel at supporting decision-making processes to improve the use of resources, time and costs within an organisation. In the current industrial context, organisations usually rely on business processes to orchestrate their daily activities while collecting large amounts of information from heterogeneous sources. The support of Big Data technologies (which are based on distributed environments) is therefore required given the volume, variety and velocity of the data. To extract value from the data, a set of techniques or activities is then applied in an ordered fashion at different stages. This set of techniques or activities, which facilitates the acquisition, preparation, and analysis of data, is known in the literature as a Big Data pipeline. This thesis tackles the improvement of three stages of Big Data pipelines: Data Preparation, Data Quality assessment, and Data Analysis. These improvements can be addressed from an individual perspective, by focussing on each stage, or from a more complex and global perspective, implying the coordination of these stages to create data workflows. The first stage to improve is Data Preparation, by supporting the preparation of data with complex structures (i.e., data with several levels of nesting, such as arrays). Shortcomings have been found in the literature and in current technologies for transforming complex data in a simple way. Therefore, this thesis aims to improve the Data Preparation stage through Domain-Specific Languages (DSLs). Specifically, two DSLs are proposed for different use cases.
While one of them is a general-purpose data transformation language, the other is a DSL aimed at extracting event logs in a standard format for process mining algorithms. The second area for improvement is the assessment of Data Quality. Depending on the type of Data Analysis algorithm, poor-quality data can seriously skew the results. Optimisation algorithms are a clear example: if the data are not sufficiently accurate and complete, the search space can be severely affected. Therefore, this thesis formulates a methodology for modelling Data Quality rules adjusted to the context of use, as well as a tool that facilitates the automation of their assessment. This makes it possible to discard data that do not meet the quality criteria defined by the organisation. In addition, the proposal includes a framework that helps select actions to improve the usability of the data. The third and last proposal involves the Data Analysis stage. Here, the thesis faces the challenge of supporting the use of optimisation problems in Big Data pipelines. There is a lack of methodological solutions for computing exhaustive optimisation problems in distributed environments (i.e., those optimisation problems that guarantee finding an optimal solution by exploring the whole search space). Solving this type of problem in the Big Data context is computationally complex and can be NP-complete, for two reasons. On the one hand, the search space can grow significantly as the amount of data to be processed by the optimisation algorithms increases. This challenge is addressed through a technique for generating and grouping problems with distributed data. On the other hand, processing optimisation problems with complex models and large search spaces in distributed environments is not trivial. Therefore, a proposal is presented for a particular case of this type of scenario.
As a result, this thesis develops methodologies that have been published in scientific journals and conferences. The methodologies have been implemented in software tools that are integrated with the Apache Spark data processing engine. The solutions have been validated through tests and use cases with real datasets.
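The context-dependent Data Quality rules described above might be modelled along the following lines. This is a hypothetical sketch in plain Python, not the thesis's actual tooling (which integrates with Apache Spark); the `QualityRule` type and the specific completeness and accuracy checks are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class QualityRule:
    """A named predicate over one record, defined for a context of use."""
    name: str
    check: Callable[[dict], bool]

def assess(records: list[dict], rules: list[QualityRule]):
    """Split records into usable ones and rejected ones (with failed rule names)."""
    usable, rejected = [], []
    for record in records:
        failed = [rule.name for rule in rules if not rule.check(record)]
        if failed:
            rejected.append((record, failed))
        else:
            usable.append(record)
    return usable, rejected

# Illustrative rules for a hypothetical sales dataset:
rules = [
    QualityRule("completeness", lambda r: r.get("price") is not None),
    QualityRule("accuracy", lambda r: isinstance(r.get("price"), (int, float)) and r["price"] >= 0),
]
records = [{"price": 10.0}, {"price": None}, {"price": -3.0}]
usable, rejected = assess(records, rules)
# usable keeps only {"price": 10.0}; the others are rejected with their failed rules.
```

The per-record list of failed rule names is what would feed a framework for selecting improvement actions, since it identifies *why* each record fell below the organisation's quality criteria rather than merely discarding it.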

    Challenges for engineering students working with authentic complex problems

    Engineers are important participants in solving societal, environmental and technical problems. However, due to the increasing complexity of these problems, new interdisciplinary competences are needed in engineering. Instead of having students work on monodisciplinary problems, a situation where students work on authentic complex problems in interdisciplinary teams together with a company may scaffold the development of new competences. The question is: what are the challenges for students structuring the work on authentic interdisciplinary problems? This study explores a three-day event where seven students from four faculties at Aalborg University (AAU) and one student from University College North Denmark (UCN), all in their 6th-10th semester, worked in two groups at a large Danish company, solving authentic complex problems. The event was structured as a hackathon in which the students spent three days on problem identification and problem analysis, finishing with a pitch competition where they presented their findings. During the event the students had workshops to support the work, and they had the opportunity to use employees from the company as facilitators. It was an extracurricular activity held during the summer holiday season. The methodology used for data collection was qualitative, comprising both observations and participants' reflection reports. The students were observed throughout the event. Findings from this part of a larger study indicated that students struggle to transfer and transform project competences from their previous disciplinary experience to an interdisciplinary setting.

    Exploring the practical use of a collaborative robot for academic purposes

    This article presents a set of experiences related to the setup and exploration of potential educational uses of a collaborative robot (cobot). Three basic principles have guided the work. First and foremost, studying all the functionalities offered by the robot and exploring its potential academic uses, both in subjects focused on industrial robotics and in subjects of related disciplines (automation, communications, computer vision). Second, achieving the full integration of the cobot into the laboratory, seeking not only stand-alone uses but also applications (laboratory practices) in which the cobot interacts with other devices already present in the laboratory (other industrial robots and a flexible manufacturing system). Third, reusing available components and minimizing the number and associated cost of new components required. The experiences, carried out following a project-based learning methodology within the framework of bachelor's and master's subjects and theses, have focused on the integration of mechanical, electronic and programming aspects in new design solutions (end effector, cooperative workspace, artificial vision system integration) and case studies (advanced task programming, cybersecure communication, remote access). These experiences have consolidated the students' acquisition of skills for the transition to professional life through close collaboration between university faculty and experts from the robotics company.