
    The hierarchic treatment of marine ecological information from spatial networks of benthic platforms

    Measuring biodiversity simultaneously in different locations, at different temporal scales, and over wide spatial scales is of strategic importance for improving our understanding of how marine ecosystems function and for conserving their biodiversity. Monitoring networks of cabled observatories, along with other docked autonomous systems (e.g., Remotely Operated Vehicles [ROVs], Autonomous Underwater Vehicles [AUVs], and crawlers), are being conceived and established at spatial scales capable of tracking energy fluxes across benthic and pelagic compartments, as well as across geographic ecotones. At the same time, optoacoustic imaging is undergoing an unprecedented expansion in marine ecological monitoring, enabling the acquisition of new biological and environmental data at appropriate spatiotemporal scales. At this stage, one of the main obstacles to the effective application of these technologies is the processing, storage, and treatment of the complex ecological information acquired. Here, we provide a conceptual overview of the technological developments in the multiparametric generation, storage, and automated hierarchic treatment of the biological and environmental information required to capture the spatiotemporal complexity of a marine ecosystem. In doing so, we present a pipeline for ecological data acquisition and processing, divided into steps amenable to automation. We also give an example of computing population biomass, community richness, and biodiversity data (as indicators of ecosystem functionality) with an Internet Operated Vehicle (a mobile crawler). Finally, we discuss the software requirements for automated data processing at the level of cyber-infrastructures, including sensor calibration and control, data banking, and ingestion into large data portals.
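
As a sketch of the indicator-computation step described in the abstract, the following assumes per-species counts have already been extracted from image annotations; the species names, mean weights, and function names are illustrative, not from the article:

```python
from math import log

def richness(counts):
    """Species richness: number of species observed at least once."""
    return sum(1 for c in counts.values() if c > 0)

def shannon_diversity(counts):
    """Shannon index H' = -sum(p_i * ln p_i) over observed species."""
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * log(c / total) for c in counts.values() if c > 0)

def total_biomass(counts, mean_weights):
    """Population biomass: abundance times mean individual weight per species."""
    return sum(counts[s] * mean_weights.get(s, 0.0) for s in counts)

# Hypothetical per-species counts from one crawler transect
counts = {"Nephrops norvegicus": 12, "Munida": 30, "Pagurus": 8}
weights = {"Nephrops norvegicus": 45.0, "Munida": 6.0, "Pagurus": 10.0}
print(richness(counts))                        # 3
print(round(shannon_diversity(counts), 3))     # 0.942
print(total_biomass(counts, weights))          # 12*45 + 30*6 + 8*10 = 800.0
```

In an automated pipeline these functions would sit downstream of the image-classification step, consuming its count tables per site and time window.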

    Agent-Based Computational Economics

    Agent-based computational economics (ACE) is the computational study of economies modeled as evolving systems of autonomous interacting agents. Starting from initial conditions specified by the modeler, the computational economy evolves over time as its constituent agents repeatedly interact with each other and learn from these interactions. ACE is therefore a bottom-up, culture-dish approach to the study of economic systems. This study discusses the key characteristics and goals of the ACE methodology. Eight currently active research areas are highlighted for concrete illustration. Potential advantages and disadvantages of the ACE methodology are considered, along with open questions and possible directions for future research. Keywords: agent-based computational economics; autonomous agents; interaction networks; learning; evolution; mechanism design; computational economics; object-oriented programming.
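
The ACE loop — initial conditions set by the modeler, then repeated interaction and learning among autonomous agents — can be sketched minimally. The bilateral-trade setting and the adaptive-expectations learning rule below are illustrative assumptions, not taken from the article:

```python
import random

class Trader:
    """An autonomous agent holding an adaptive price expectation."""
    def __init__(self, price, learning_rate=0.3):
        self.price = price
        self.lr = learning_rate

    def learn(self, observed_price):
        # Adaptive expectations: move toward the last realized trade price.
        self.price += self.lr * (observed_price - self.price)

def spread(agents):
    prices = [a.price for a in agents]
    return max(prices) - min(prices)

# Initial conditions specified by the modeler; dynamics then emerge bottom-up.
rng = random.Random(42)
agents = [Trader(rng.uniform(1.0, 100.0)) for _ in range(20)]
initial = spread(agents)

for _ in range(200):
    buyer, seller = rng.sample(agents, 2)       # random bilateral encounter
    price = (buyer.price + seller.price) / 2    # realized trade price
    buyer.learn(price)
    seller.learn(price)

final = spread(agents)
print(f"spread of price expectations: {initial:.2f} -> {final:.2f}")
```

No coordination is imposed from above: the narrowing spread of expectations is an emergent, culture-dish outcome of repeated pairwise interaction.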

    A new model to support the personalised management of a quality e-commerce service

    The paper presents a decision-aiding model to support the management of a high-quality e-commerce service. The approach focuses on the service-quality aspects related to customer relationship management (CRM). Knowing the individual characteristics of a customer makes it possible to supply a personalised, high-quality service. A segmentation model, based on the evolution of the relationship between users and the Web site, is developed. The method permits a specific service-management policy to be provided for each user segment. Finally, some preliminary experimental results from a sport-clothing industry application are described.
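
The abstract does not specify the segmentation rules, so the following sketch only illustrates the idea of "relationship evolution" stages; the stage names, thresholds, and input features (visit and purchase counts) are hypothetical:

```python
def segment(visits, purchases):
    """Assign a user to a relationship-evolution stage (hypothetical thresholds)."""
    if purchases == 0:
        return "prospect" if visits < 5 else "browser"
    if purchases < 3:
        return "new customer"
    return "loyal customer" if visits >= 10 else "occasional customer"

# Toy user log: user id -> (visits, purchases)
users = {"u1": (2, 0), "u2": (8, 0), "u3": (6, 2), "u4": (15, 7)}
for uid, (v, p) in users.items():
    print(uid, "->", segment(v, p))
```

Each resulting segment would then be mapped to its own service-management policy, e.g. different recommendation or contact strategies per stage.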

    Attribute Identification and Predictive Customisation Using Fuzzy Clustering and Genetic Search for Industry 4.0 Environments

    Today's factories involve more services and customisation. A paradigm shift towards “Industry 4.0” (i4) aims at realising mass customisation at a mass-production cost. However, there is a lack of tools for customer informatics. This paper addresses this issue and develops a predictive analytics framework integrating big-data analysis and business informatics, using Computational Intelligence (CI). In particular, fuzzy c-means is used for pattern recognition, as well as for managing relevant big data that feed potential customer needs and wants into the design stage, improving productivity in customised mass production. The selection of patterns from big data is performed using a genetic algorithm combined with fuzzy c-means, which helps with clustering and the selection of optimal attributes. The case study shows that fuzzy c-means is able to assign new clusters as knowledge of customer needs and wants grows. The dataset has three types of entities: the specification of various characteristics, the assigned insurance risk rating, and normalised losses in use compared with other cars. The fuzzy c-means tool offers a number of features suitable for smart design in an i4 environment.
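
The abstract does not give the algorithmic details, so here is a minimal, self-contained sketch of fuzzy c-means on toy records shaped like the dataset described (risk rating, normalised losses). The data and the deterministic initialisation are illustrative, and the genetic attribute-selection step is omitted:

```python
def fuzzy_c_means(points, c=2, m=2.0, iters=50):
    """Minimal fuzzy c-means; returns (cluster centres, membership matrix)."""
    n, dim = len(points), len(points[0])
    # Initialise centres from c points spread across the data
    # (a deterministic choice for this sketch; real code would randomise).
    step = max(1, (n - 1) // (c - 1)) if c > 1 else 1
    centers = [list(points[i * step]) for i in range(c)]
    u = [[0.0] * c for _ in range(n)]
    for _ in range(iters):
        # Membership update: inverse-distance weighting with fuzzifier m.
        for i in range(n):
            d = [max(1e-12, sum((points[i][k] - centers[j][k]) ** 2
                                for k in range(dim)) ** 0.5) for j in range(c)]
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / d[l]) ** (2.0 / (m - 1.0))
                                    for l in range(c))
        # Centre update: membership-weighted mean of the points.
        for j in range(c):
            w = [u[i][j] ** m for i in range(n)]
            tot = sum(w)
            centers[j] = [sum(w[i] * points[i][k] for i in range(n)) / tot
                          for k in range(dim)]
    return centers, u

# Hypothetical car records: (insurance risk rating, normalised losses)
data = [[1.0, 1.1], [1.2, 0.9], [0.9, 1.0], [5.0, 5.2], [5.1, 4.9], [4.8, 5.0]]
centers, u = fuzzy_c_means(data, c=2)
lo, hi = sorted(ctr[0] for ctr in centers)
print(f"cluster centres near x = {lo:.2f} and x = {hi:.2f}")
```

Unlike hard k-means, each record keeps a graded membership in every cluster, which is what lets new customer records shift cluster assignments gradually as knowledge grows.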

    Society-in-the-Loop: Programming the Algorithmic Social Contract

    Recent rapid advances in Artificial Intelligence (AI) and Machine Learning have raised many questions about the regulatory and governance mechanisms for autonomous machines. Many commentators, scholars, and policy-makers now call for ensuring that algorithms governing our lives are transparent, fair, and accountable. Here, I propose a conceptual framework for the regulation of AI and algorithmic systems. I argue that we need tools to program, debug, and maintain an algorithmic social contract, a pact between various human stakeholders, mediated by machines. To achieve this, we can adapt the concept of human-in-the-loop (HITL) from the fields of modeling and simulation, and interactive machine learning. In particular, I propose an agenda I call society-in-the-loop (SITL), which combines the HITL control paradigm with mechanisms for negotiating the values of various stakeholders affected by AI systems, and for monitoring compliance with the agreement. In short, `SITL = HITL + Social Contract.' Comment: (in press), Ethics and Information Technology, 201

    Cybersecurity in Educational Networks

    The paper discusses the possible impact of digital space on humans, as well as human-related directions in cybersecurity analysis in education: levels of cybersecurity, the role of social engineering in the cybersecurity of education, and “cognitive vaccination”. “A human” is considered in a general sense, mainly as a learner. The analysis is provided on the basis of the experience of hybrid war in Ukraine, which has demonstrated a change in the target of military operations from military personnel and critical infrastructure to humans in general. Young people are the vulnerable group that can be the main target of cognitive operations in the long-term perspective, and they are the weakest link of the system.