Analysing Wiki Quality using Probabilistic Model Checking
Wikis represent a new work tool in enterprises and are spreading everywhere. Indeed, they are often used as internal documentation for various in-house systems and applications, as well as powerful tools for collaboration and knowledge sharing. As happens with software, the continual growth of a wiki may lead to its degradation. The quality of wikis, especially in enterprise contexts, should therefore not be treated as a trivial concern. Software quality is a widely discussed topic, but there are few studies on the quality of wikis. We propose a probabilistic model to represent wikis and to investigate their quality. Given their similarity to the World Wide Web, it is natural to apply the popular Google PageRank algorithm (with minor modifications) to calculate transition probabilities between pages. Each wiki category, a set of wiki pages, is modelled in the PRISM language in order to verify specific properties expressed in PCTL*. Experiments conducted on an adequate number of (enterprise) wikis assess the validity of our methodology.
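As an aside, the plain PageRank computation that the model builds on can be sketched in a few lines. The wiki link graph, page names, and damping factor below are hypothetical, and the paper's own modifications and PRISM encoding are not reproduced here.

```python
# Illustrative sketch: standard PageRank power iteration over a small
# wiki link graph. Assumes every page has at least one outgoing link
# (no dangling nodes), so ranks keep summing to 1.
def pagerank(links, damping=0.85, iterations=100):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {}
        for p in pages:
            # Rank flowing into p from every page that links to it,
            # split evenly among each linker's outgoing links.
            incoming = sum(rank[q] / len(links[q])
                           for q in pages if p in links[q])
            new[p] = (1 - damping) / n + damping * incoming
        rank = new
    return rank

# Hypothetical three-page wiki category: Home links to both other
# pages, and each of them links back to Home.
wiki = {"Home": ["Setup", "FAQ"], "Setup": ["Home"], "FAQ": ["Home"]}
ranks = pagerank(wiki)
```

As expected, the hub page accumulates the highest rank, since both other pages funnel their entire rank back to it.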
A framework to maximise the communicative power of knowledge visualisations
Knowledge visualisation, in the field of information systems, is both a process and a product, informed by the closely aligned fields of information visualisation and knowledge management. Knowledge visualisation has untapped potential within the purview of knowledge communication. Even so, knowledge visualisations are infrequently deployed due to a lack of evidence-based guidance. To improve this situation, we carried out a systematic literature review to derive a number of “lenses” that can be used to reveal the essential perspectives to feed into the visualisation production process. We propose a conceptual framework which incorporates these lenses to guide producers of knowledge visualisations. This framework uses the different lenses to reveal critical perspectives that need to be considered during the design process. We conclude by demonstrating how this framework could be used to produce an effective knowledge visualisation.
A Model Based Framework for IoT-Aware Business Process Management
Business Processes (BPs) that exchange data with Internet of Things (IoT) devices, briefly referred to as IoT-aware BPs, are gaining momentum in the BPM field. Introducing IoT technologies from the early stages of the BP development process requires dealing with the complexity and heterogeneity of such technologies at design and analysis time. This paper analyzes widely used IoT frameworks and ontologies to introduce a BPMN extension that improves the expressiveness of relevant BP modeling notations and allows an appropriate representation of IoT devices from both an architectural and a behavioral perspective. In the BP management field, the use of simulation-based approaches is recognized as an effective technique for analyzing BPs. Simulation models need to be parameterized according to relevant properties of the process under study. Unfortunately, such parameters may change during the process operational life, thus making the simulation model invalid with respect to the actual process behavior. To ease the analysis of IoT-aware BPs, this paper introduces a model-driven method for the automated development of digital twins of actual business processes. The proposed method also exploits data retrieved by IoT sensors to automatically reconfigure the simulation model, keeping the digital twin continuously coherent and compliant with its actual counterpart.
Complex Event Processing as a Service in Multi-Cloud Environments
The rise of mobile technologies and the Internet of Things, combined with advances in Web technologies, have created a new Big Data world in which the volume and velocity of data generation have achieved an unprecedented scale. As a technology created to process continuous streams of data, Complex Event Processing (CEP) has been often related to Big Data and used as a tool to obtain real-time insights. However, despite this recent surge of interest, the CEP market is still dominated by solutions that are costly and inflexible or too low-level and hard to operate.
To address these problems, this research proposes the creation of a CEP system that can be offered as a service and used over the Internet. Such a CEP as a Service (CEPaaS) system would give its users CEP functionalities associated with the advantages of the services model, such as no up-front investment and low maintenance cost. Nevertheless, creating such a service involves challenges that are not addressed by current CEP systems. This research proposes solutions for three open problems that exist in this context.
First, to address the problem of understanding and reusing existing CEP management procedures, this research introduces the Attributed Graph Rewriting for Complex Event Processing Management (AGeCEP) formalism as a technology- and language-agnostic representation of queries and their reconfigurations. Second, to address the problem of evaluating CEP query management and processing strategies, this research introduces CEPSim, a simulator of cloud-based CEP systems. Finally, this research also introduces a CEPaaS system based on a multi-cloud architecture, container management systems, and an AGeCEP-based multi-tenant design.
To demonstrate its feasibility, AGeCEP was used to design an autonomic manager and a selected set of self-management policies. Moreover, CEPSim was thoroughly evaluated by experiments that showed it can simulate existing systems with accuracy and low execution overhead. Finally, additional experiments validated the CEPaaS system and demonstrated it achieves the goal of offering CEP functionalities as a scalable and fault-tolerant service. In tandem, these results confirm that this research significantly advances the CEP state of the art and provides novel tools and methodologies that can be applied to CEP research.
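The core AGeCEP idea of treating a CEP query as an attributed graph subject to rewriting can be pictured with a deliberately simplified sketch. The operator attributes and the "merge adjacent filters" rule below are invented for illustration and are not the formalism's actual definitions.

```python
# Minimal sketch: a CEP query as a directed graph of attributed
# operators, plus one reconfiguration expressed as a graph rewrite.
query = {
    "nodes": {
        "src":  {"type": "producer"},
        "f1":   {"type": "filter", "predicate": "temp > 30"},
        "f2":   {"type": "filter", "predicate": "humidity < 40"},
        "sink": {"type": "consumer"},
    },
    "edges": [("src", "f1"), ("f1", "f2"), ("f2", "sink")],
}

def merge_adjacent_filters(q):
    """Rewrite rule: fuse two directly connected filter operators."""
    for a, b in q["edges"]:
        na, nb = q["nodes"].get(a), q["nodes"].get(b)
        if na and nb and na["type"] == nb["type"] == "filter":
            # Combine the predicates into one operator, then rewire
            # every edge touching b so it touches a instead.
            na["predicate"] = f'({na["predicate"]}) and ({nb["predicate"]})'
            q["edges"] = [(a if u == b else u, a if v == b else v)
                          for u, v in q["edges"] if (u, v) != (a, b)]
            del q["nodes"][b]
            return True   # one rule application performed
    return False

merge_adjacent_filters(query)
```

After the rewrite the pipeline is `src -> f1 -> sink`, with `f1` carrying the conjunction of both predicates; representing such reconfigurations independently of any concrete CEP engine is the point of a technology-agnostic formalism.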
A Game-Theory Based Incentive Framework for an Intelligent Traffic System as Part of a Smart City Initiative
Intelligent Transportation Systems (ITSs) can inform and incentivize travellers, helping them make cognizant choices about their trip routes and transport modalities for daily travel while also achieving more sustainable societal and transport-authority goals. However, in practice, it is challenging for an ITS to generate incentives that are context-driven and personalized while supporting multi-dimensional travel goals. This is because an ITS has to address the situation where different travellers have different travel preferences and constraints for route and modality, in the face of dynamically varying traffic conditions. Furthermore, personalized incentive generation also needs to dynamically satisfy the travel goals of multiple travellers whose conduct mixes both competitive and cooperative behaviours. To address this challenge, this paper proposes a Rule-based Incentive Framework (RIF) that utilizes both decision trees and evolutionary game theory to process travel information and intelligently generate personalized incentives for travellers. The travel information processed includes travellers' mobile patterns, travellers' modality preferences and route traffic volume information. A series of MATLAB simulations of RIF was undertaken to validate it, showing that it is potentially an effective way to incentivize travellers to change travel routes and modalities as an essential smart city service.
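To give a flavour of the evolutionary-game ingredient, the sketch below iterates replicator dynamics for a two-route population game with congestion-dependent payoffs and an incentive term for the underused route. All numbers (costs, incentive, step size) are invented for illustration; this is not RIF's actual model.

```python
# Illustrative replicator dynamics: a population of travellers split
# between routes A and B. Payoffs fall with congestion, and a small
# incentive subsidises route B to rebalance the traffic.
def replicator_step(x_a, incentive_b=0.0, step=0.1):
    """x_a: fraction of travellers on route A; returns updated fraction."""
    # Congestion-dependent payoffs: more users on a route -> lower payoff.
    payoff_a = 1.0 - 0.8 * x_a
    payoff_b = 1.0 - 0.8 * (1.0 - x_a) + incentive_b
    mean = x_a * payoff_a + (1.0 - x_a) * payoff_b
    # Replicator equation: a route's share grows when its payoff
    # exceeds the population-average payoff.
    return x_a + step * x_a * (payoff_a - mean)

x = 0.9                      # start with route A heavily overloaded
for _ in range(200):
    x = replicator_step(x, incentive_b=0.1)
```

With these made-up coefficients the dynamics settle where the two payoffs are equal (x_a = 0.4375): the incentive shifts the equilibrium slightly below an even split, which is the basic mechanism an incentive framework can exploit.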
A survey on Near Field Communication
PubMed ID: 26057043. Near Field Communication (NFC) is an emerging short-range wireless communication technology that offers great and varied promise in services such as payment, ticketing, gaming, crowd sourcing, voting, navigation, and many others. NFC technology enables the integration of services from a wide range of applications into one single smartphone. NFC technology has emerged recently, and consequently not much academic data are available yet, although the number of academic research studies carried out in the past two years has already surpassed the total number of the prior works combined. This paper presents the concept of NFC technology in a holistic approach from different perspectives, including hardware improvement and optimization, communication essentials and standards, applications, secure elements, privacy and security, usability analysis, and ecosystem and business issues. Further research opportunities from both academic and business points of view are also explored and discussed at the end of each section. This comprehensive survey will be a valuable guide for researchers and academics, as well as for businesses in the NFC technology and ecosystem.
Teamwork collaboration around CAE models in an industrial context
2015 - 2016
Medium and large companies must compete every day in a global context. To achieve greater efficiency in their products and processes, they are forced to globalize by opening multiple locations in geographically distant places. In this context, people from the same team or from different teams must work together regardless of their time zone and location. A "virtual" team therefore consists of groups of geographically distant people who coordinate with the help of new technologies. The tools and methodologies supporting Computer Supported Cooperative Work (CSCW) can facilitate collaboration by reducing distance- and time-related issues. The main goals CSCW aims to achieve within a complex organization are listed below:
• Schedule, track, and chart the steps in a project as it is being completed (Project Management)
• Share, review, approve or reject project proposals from other workgroup members (Authoring
Systems)
• Collaborative management of tasks and documents within a knowledge-based business process
(Workflow Management)
• Collect, organize, manage, and share various forms of information (Knowledge Management)
• Collaborative bookmarking engine to tag, organize, share, and search enterprise data
(Enterprise Bookmarking)
• Collect, organize, manage and share information associated with the delivery of a project
(Extranet Systems)
• Quickly share company information with members within a company (Intranet
Systems)
• Organize social relations of groups (Social Network)
• Collaborate on and share structured data and information (Online Spreadsheets)
This work is based on the main objectives outlined above, pursued through a specific research experience that verifies their fulfilment and ensures their applicability. The real context consists of a virtual team of engineers and the way they cooperate within the automotive industry. The research path can be summarized as follows: (1) the main collaborative and engineering requirements were identified by referring to a real use case within Fiat Chrysler Automobiles; (2) each requirement was met by implementing an integrated, modular and extensible architecture; (3) the Floasys platform for collecting, centralizing and sharing simulations was designed, implemented and tested; (4) a tool called ExploraTool was designed to visually explore a simulation repository within Floasys; (5) possible extensions of the platform were identified in terms of multidisciplinarity and multisectorality; (6) downstream of the whole process, all the requirements a CSCW system is intended to meet were verified.
The initial phase of the work focused on collecting the collaborative requirements and related needs that emerge when different virtual teams collaborate to pursue a common result. The collaborative requirements identified to support collaboration between geographically remote teams are: centralizing simulation data, annotating files and adding metadata to them, providing a search engine for simulations completed by other analysts, and providing data versioning and sharing. In line with the identified requirements, a collaborative platform (CSCW) prototype called Floasys was developed. Floasys targets all industries that use CAE simulations to design their products: the automotive, aeronautical and naval industries, among others. Floasys collects simulation data, stores them in an open XML format and centralizes them in a shared repository; it also provides additional services on the collected data, such as the ability to annotate files or to search the repository regardless of the simulator with which the data were generated.
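The simulator-neutral storage and search idea can be sketched as follows; the XML element names, attributes, and tags below are hypothetical, not Floasys's actual schema.

```python
# Sketch: simulation metadata kept in an open XML format and searched
# by annotation tag, independently of which CFD tool produced the run.
import xml.etree.ElementTree as ET

def make_entry(name, solver, tags):
    """Build one repository entry with annotation tags."""
    sim = ET.Element("simulation", {"name": name, "solver": solver})
    for t in tags:
        ET.SubElement(sim, "tag").text = t
    return sim

# Hypothetical shared repository holding runs from two different solvers.
repo = ET.Element("repository")
repo.append(make_entry("run-001", "SolverA", ["aerodynamics", "sedan"]))
repo.append(make_entry("run-002", "SolverB", ["aerodynamics", "suv"]))

def search(repository, tag):
    """Names of simulations annotated with the tag, whatever the solver."""
    return [s.get("name") for s in repository.iter("simulation")
            if any(t.text == tag for t in s.iter("tag"))]

hits = search(repo, "aerodynamics")
```

Because the query inspects only the open metadata layer, a run produced by `SolverA` and one produced by `SolverB` are found by the same search, which is precisely the vendor-neutrality the text describes.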
It is extremely useful to be able to retrieve simulations from other members of the same team or of different teams in order to compare them with the performance of a current project. To provide these services, various aspects must be considered: the services listed above must be embedded in an existing business environment with established practices, workflows and software systems. As a concrete example, merely centralizing simulation data involves communicating with existing simulation software while mitigating the problem of vendor lock-in, i.e. the strong dependence on the simulators themselves. From an architectural point of view, Floasys meets the non-functional requirements of extensibility and modularity. This way the system can be tailored to customers' needs, remain open to future needs and be used in other departments. The modular and extensible Floasys architecture is based on the concept of plug-ins. Although the research activity directly concerns the automotive industry, the requirements and difficulties described are common to other sectors, as reported in the literature. Many of the considerations made in this work, and the solutions adopted, can therefore be reused for other types of simulation as well as for data obtained from experiments.
Finally, an interactive tool called "ExploraTool" was integrated within Floasys for viewing, exploring, and querying simulation repositories. Although the idea for this tool was born in the context of simulation repository navigation, it is generic and can be used with any dataset. The tool is based on Euler-Venn diagrams: the universe is the set of all simulations stored in one or more repositories, and simulation groups are represented by nested ellipses. Using this tool, analysts can explore the repository through drill-down and roll-up operations to obtain more or less detail. Going down the hierarchy, the user filters the items within the dataset and performs a graphical query, finally obtaining two or more simulations to be compared. After the design and implementation phases, the tool was tested with real users to gather data on its usability. [edited by author]
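The drill-down exploration described above amounts to intersecting the universe of simulations with successive attribute filters, each step drawing a smaller nested ellipse. The records and attribute names below are invented for illustration.

```python
# Sketch: the universe is every stored simulation; each drill-down step
# adds one attribute filter, narrowing the nested set the user sees.
simulations = [
    {"id": 1, "project": "sedan", "kind": "aerodynamics", "year": 2015},
    {"id": 2, "project": "sedan", "kind": "thermal",      "year": 2015},
    {"id": 3, "project": "suv",   "kind": "aerodynamics", "year": 2016},
    {"id": 4, "project": "sedan", "kind": "aerodynamics", "year": 2016},
]

def drill_down(universe, **criteria):
    """Graphical query: keep only simulations matching every criterion."""
    return [s for s in universe
            if all(s.get(k) == v for k, v in criteria.items())]

level1 = drill_down(simulations, project="sedan")     # outer ellipse
level2 = drill_down(level1, kind="aerodynamics")      # nested ellipse
```

Rolling up is simply returning to a previous, broader level; here two drill-down steps leave exactly two simulations, the kind of comparable pair the text says the analyst is after.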
Complex event processing as a service in multi-cloud environments
Advisors: Luiz Fernando Bittencourt, Miriam Akemi Manabe Capretz. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação.
Abstract: The rise of mobile technologies and the Internet of Things, combined with advances in Web technologies, have created a new Big Data world in which the volume and velocity of data generation have achieved an unprecedented scale. As a technology created to process continuous streams of data, Complex Event Processing (CEP) has often been related to Big Data and used as a tool to obtain real-time insights. However, despite this recent surge of interest, the CEP market is still dominated by solutions that are costly and inflexible or too low-level and hard to operate. To address these problems, this research proposes the creation of a CEP system that can be offered as a service and used over the Internet. Such a CEP as a Service (CEPaaS) system would give its users CEP functionalities associated with the advantages of the service model, such as no up-front investment and low maintenance cost. Nevertheless, creating such a service involves challenges that are not addressed by current CEP systems. This research proposes solutions for three open problems that exist in this context. First, to address the problem of understanding and reusing existing CEP management procedures, it introduces the Attributed Graph Rewriting for Complex Event Processing Management (AGeCEP) formalism as a technology- and language-agnostic representation of queries and their reconfigurations. Second, to address the problem of evaluating CEP query management and processing strategies, it introduces CEPSim, a simulator of cloud-based CEP systems. Finally, it also introduces a CEPaaS system based on a multi-cloud architecture, container management systems, and an AGeCEP-based multi-tenant design. To demonstrate its feasibility, AGeCEP was used to design an autonomic manager and a selected set of self-management policies. Moreover, CEPSim was thoroughly evaluated by experiments that showed it can simulate existing systems with accuracy and low execution overhead. Finally, additional experiments validated the CEPaaS system and demonstrated it achieves the goal of offering CEP functionalities as a scalable and fault-tolerant service. In tandem, these results confirm that this research significantly advances the CEP state of the art and provides novel tools and methodologies that can be applied to CEP research. Doctorate in Computer Science, CNPq grant 140920/2012-9.
Teamwork collaboration around simulation data in an industrial context
2013 - 2014
Nowadays ever more small, medium and large enterprises operate world-wide and compete in a global market. To face the new challenges, industries have multiple co-located and geographically dispersed teams that work across time, space, and organisational boundaries. A virtual team, or dispersed team, is a group of geographically, organisationally and/or temporally dispersed knowledge workers who coordinate their work using electronic technologies to accomplish a common goal. The advent of the Internet and of Computer Supported Cooperative Work (CSCW) technologies can reduce the distances between these teams and support the collaboration among them. The topic of this thesis is engineering dispersed teams and their collaboration within enterprises. In this context, the contributions of this thesis are the following: I was able to (1) identify the key collaborative requirements by analysing a real use case of two engineering dispersed teams within Fiat Chrysler Automobiles; (2) address each of them with an integrated, extensible and modular architecture; (3) implement a working industrial prototype called Floasys to collect, centralise, search, and share simulations as well as automate repetitive, error-prone and time-consuming tasks such as document generation; (4) design a tool called ExploraTool to visually explore the repository of simulations provided by Floasys; and (5) identify possible extensions of this work to other contexts (such as the aeronautic, rail and naval sectors).
The first research aim of this work is the analysis of the key collaborative requirements within a real industrial use case of geographically dispersed teams. To gather these requirements, I worked closely with two geographically separated engineering teams at Fiat Chrysler Automobiles (FCA): one team located in Pomigliano d'Arco (Italy) and the other in Torino (Italy). Both teams use Computational Fluid Dynamics (CFD) simulations to design vehicle products, simulating physical phenomena such as vehicle aerodynamics and its drag coefficient, or the internal flow for passengers' thermal comfort. The methodology applied to collect the collaborative and engineering requirements is based on an extensive literature review, direct on-site observations, stakeholder interviews and a user survey. The key collaborative requirements identified as actions to improve collaboration among dispersed teams are: centralise simulation data, provide metadata over simulation data, provide a search facility, simulation data versioning, and data sharing... [edited by Author]
Approach, models and multi-agent tools for the engineering of cyber-physical collectives
We call a Collective Cyber-Physical System (CCPS) a system consisting of numerous autonomous execution units achieving tasks of control, communication, data processing or acquisition. These nodes are autonomous in decision making and can cooperate to overcome gaps in knowledge or individual skills when pursuing goals, including goals concerning the state of their physical environment. The design of such collective systems presents many challenges. This Habilitation thesis discusses various aspects of the engineering of these systems, which we model according to a multi-agent approach. First, a complete CCPS analysis and design method is proposed, and its special features are discussed with regard to the challenges mentioned above. Agent models and collective models suited to constrained communications and changing environments are then proposed to simplify the design of CCPS. Finally, a tool that enables the simulation and deployment of mixed hardware/software collective systems is presented. These contributions have been applied in several academic and industrial projects, whose experience feedback is discussed.