18 research outputs found
A Cloud-Based System for Improving Retention Marketing Loyalty Programs in Industry 4.0: A Study on Big Data Storage Implications
Nowadays, the growing global economy and the demand for customized products are moving the manufacturing industry from a sellers' market toward a buyers' market. In this context, the smart manufacturing enabled by Industry 4.0 is changing the whole production cycle of companies specialized in different kinds of products. On the one hand, the advent of cloud computing and social media makes the customers' experience more and more inclusive; on the other hand, cyber-physical system technologies help industries change the production cycle in real time according to customers' needs. "Retention" marketing strategies, aimed not only at the acquisition of new customers but also at the profitability of existing ones, therefore allow industries to apply specific production strategies so as to maximize their revenues. This is made possible by analysing various kinds of information about customers, products, purchases, and so on. In this paper, we focus on customer loyalty programs. In particular, we propose a cloud-based Software-as-a-Service architecture that stores and analyses big data related to purchases and product rankings in order to provide customers with a list of recommended products. Experiments focus on a prototype of a human-to-machine workflow for the pre-selection of customers, deployed in both private and hybrid cloud scenarios.
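As a rough illustration of the kind of analysis such a Software-as-a-Service component could run, the sketch below builds a recommendation list by combining a customer's co-purchase counts with global product rankings. All names and the scoring rule are hypothetical; the paper does not specify its recommendation algorithm at this level of detail.

```python
from collections import Counter
from typing import Dict, List, Set

def recommend(purchases: Dict[str, Set[str]], product_rank: Dict[str, float],
              customer: str, top_n: int = 5) -> List[str]:
    """Suggest products the customer has not bought yet, scored by how often
    they co-occur with the customer's purchases, weighted by product rank.
    Illustrative only; not the paper's actual pipeline."""
    own = purchases.get(customer, set())
    co_counts: Counter = Counter()
    for other, items in purchases.items():
        if other == customer or not (items & own):
            continue
        for item in items - own:              # products bought by similar customers
            co_counts[item] += 1
    scored = {item: co_counts[item] * product_rank.get(item, 1.0) for item in co_counts}
    return sorted(scored, key=scored.get, reverse=True)[:top_n]

# Toy usage with hypothetical data
purchases = {"alice": {"p1", "p2"}, "bob": {"p1", "p3"}, "carol": {"p2", "p3", "p4"}}
ranks = {"p3": 4.5, "p4": 3.0}
print(recommend(purchases, ranks, "alice"))
```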
AMACoT: a marketplace architecture for trading Cloud of Things resources
Cloud of Things (CoT) is increasingly viewed as a paradigm that can satisfy the diverse requirements of emerging IoT applications. The potential of CoT is not yet realised due to challenges in sharing and reusing IoT physical resources across multiple applications; existing approaches provide only small-scale and hardware-dependent shared access to IoT resources. This paper considers using market mechanisms to commoditise CoT resources as an approach to enable shared access to CoT resources and to improve their reusability. To achieve this, the requirements for trading CoT resources are discussed to conceptualise the proposed approach, and a generic description model for CoT resources is introduced to quantify their value. A marketplace architecture for trading CoT resources, referred to as AMACoT, is then proposed. By formulating the trading of CoT resources as an optimisation problem, the proposed approach is experimentally validated. The evaluation measures the system performance and verifies the optimisation problem using three evolutionary algorithms, demonstrating the optimality of the resulting CoT resource trading solutions in terms of resource cost, resource utilisation, provider lock-in and provider profit.
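As a hedged sketch of how such a trading problem might be expressed for an evolutionary algorithm, the fragment below defines a weighted-sum objective over the four reported criteria (resource cost, utilisation, provider lock-in and provider profit). The data structure, weights and sign conventions are assumptions, not the paper's actual formulation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CoTResource:
    cost: float         # price charged by the provider
    utilisation: float  # expected utilisation in [0, 1]
    lock_in: float      # provider lock-in penalty in [0, 1]
    profit: float       # provider profit margin

def fitness(allocation: List[CoTResource],
            w_cost: float = 0.4, w_util: float = 0.3,
            w_lock: float = 0.2, w_profit: float = 0.1) -> float:
    """Weighted-sum objective to minimise: prefer low cost and lock-in,
    high utilisation and provider profit. Weights are illustrative."""
    n = max(len(allocation), 1)
    cost = sum(r.cost for r in allocation)
    util = sum(r.utilisation for r in allocation) / n
    lock = sum(r.lock_in for r in allocation) / n
    profit = sum(r.profit for r in allocation)
    return w_cost * cost + w_lock * lock - w_util * util - w_profit * profit
```

A genetic algorithm or similar evolutionary search would then look for candidate allocations that minimise this objective.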
Sustainability Benefits Analysis of CyberManufacturing Systems
Confronted with growing sustainability awareness, mounting environmental pressure, the demands of modern customers and the need to develop stronger market competitiveness, the manufacturing industry is striving to address sustainability-related issues in manufacturing. A new manufacturing system called the CyberManufacturing System (CMS) has great potential for addressing sustainability issues by handling manufacturing tasks differently and better than traditional manufacturing systems. CMS is an advanced manufacturing system in which physical components are fully integrated and seamlessly networked with computational processes. Recent developments in the Internet of Things, Cloud Computing, Fog Computing, Service-Oriented Technologies, etc., all contribute to the development of CMS. Under this new manufacturing paradigm, every manufacturing resource or capability is digitized, registered and shared with all networked users and stakeholders directly or through the Internet. The CMS infrastructure enables intelligent behaviors of manufacturing components and systems such as self-monitoring, self-awareness, self-prediction, self-optimization, self-configuration, self-scalability, self-remediation and self-reuse. Sustainability benefits of CMS are mentioned in general terms in the existing research. However, existing sustainability studies of CMS focus on a narrow scope of CMS (e.g., standalone machines or specific industrial domains) or on partial aspects of sustainability analysis (e.g., solely from an energy consumption or material consumption perspective), and thus no research has comprehensively addressed the sustainability analysis of CMS. The proposed research intends to address these gaps by developing a comprehensive definition, architecture and functionality study of CMS for sustainability benefits analysis. A sustainability assessment framework based on the Distance-to-Target methodology is developed to comprehensively and objectively evaluate manufacturing systems' sustainability performance. Three practical cases are used as examples for instantiating all CMS functions and analyzing the advancements of CMS in addressing concrete sustainability issues. As a result, CMS is shown to deliver substantial sustainability benefits in terms of (i) increases in productivity, production quality, profitability and facility utilization and (ii) reductions in Work-In-Process (WIP) inventory levels and material consumption compared with the alternative traditional manufacturing system paradigms.
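For orientation, one simplified reading of a Distance-to-Target assessment weights each sustainability indicator by how far its current value is from its target. The snippet below is only an illustrative formulation with invented indicators; it is not the assessment framework developed in this research.

```python
from typing import Dict, Optional

def distance_to_target_scores(current: Dict[str, float],
                              target: Dict[str, float]) -> Dict[str, float]:
    """Per-indicator distance-to-target ratio (current / target).
    A value above 1 means the target has not yet been reached.
    Simplified, illustrative formulation only."""
    return {k: current[k] / target[k] for k in target}

def aggregate(scores: Dict[str, float],
              weights: Optional[Dict[str, float]] = None) -> float:
    """Aggregate indicator scores with optional relative weights (default: equal)."""
    weights = weights or {k: 1.0 for k in scores}
    total_w = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total_w

# Toy example: energy use and scrap rate compared with their targets
current = {"energy_kwh_per_unit": 12.0, "scrap_rate": 0.08}
target = {"energy_kwh_per_unit": 10.0, "scrap_rate": 0.05}
print(round(aggregate(distance_to_target_scores(current, target)), 3))
```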
Towards Protection Against Low-Rate Distributed Denial of Service Attacks in Platform-as-a-Service Cloud Services
Nowadays, the variety of technology available to perform daily tasks is abundant, and different businesses and people benefit from this diversity. The more technology evolves, the more useful it becomes; in contrast, it also becomes a target for malicious users. Cloud Computing is one of the technologies that has been adopted by companies worldwide over the years. Its popularity is essentially due to its characteristics and the way it delivers its services. This Cloud expansion also means that malicious users may try to exploit it, as the research studies presented throughout this work reveal. According to these studies, the Denial of Service attack is a type of threat that is constantly trying to take advantage of Cloud Computing services.
Several companies have moved or are moving their services to hosted environments provided by Cloud Service Providers and are using several applications based on those services. The literature on the subject brings to attention that, because of this expansion of Cloud adoption, the use of applications has increased. Therefore, DoS threats are increasingly aimed at the Application Layer, and more advanced variations are being used, such as Low-Rate Distributed Denial of Service attacks. Some research is being conducted specifically on the detection and mitigation of this kind of threat, and the significant problem found with this DDoS variant is the difficulty of differentiating malicious traffic from legitimate user traffic. The main goal of this attack is to exploit the communication behaviour of the HTTP protocol, sending legitimate-looking traffic with small changes that slowly fill a server's request capacity, almost stopping real users from accessing the server's resources during the attack.
This kind of attack usually has a small time window, but in order to be more efficient it is launched from infected computers that form a network of attackers, turning it into a distributed attack. In this work, the idea for battling Low-Rate Distributed Denial of Service attacks is to integrate different technologies into a Hybrid Application whose main goal is to identify and separate malicious traffic from legitimate traffic. First, a study is performed to observe the behaviour of each type of Low-Rate attack, in order to gather specific information about its characteristics while the attack is executing in real time. Then, Tshark filters are used to collect that packet information. The next step is to build combinations of specific fields obtained from the packet filtering and compare them; finally, each packet is analyzed against these combination patterns. A log file is created to store the data gathered after the entropy calculation in a friendly format.
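To make the entropy step concrete, the sketch below computes Shannon entropy over windows of a packet field exported from a capture (for example with `tshark -T fields`) and flags windows whose entropy deviates from a baseline. The field layout, window size, baseline and deviation threshold are assumptions for illustration, not the thesis's exact parameters.

```python
import csv
import math
from collections import Counter
from typing import List, Tuple

def shannon_entropy(values: List[str]) -> float:
    """Shannon entropy (bits) of the value distribution within one window."""
    counts = Counter(values)
    total = len(values)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def analyse_capture(csv_path: str, window: int = 200,
                    baseline: float = 4.0, deviation: float = 1.5) -> List[Tuple[int, float]]:
    """Read packet fields exported from a capture, compute the entropy of one
    field per window, and flag windows that deviate from the baseline, which
    may indicate slow, repetitive request patterns. Assumes column 0 holds the
    field of interest (e.g. ip.src); all parameters are illustrative."""
    with open(csv_path, newline="") as fh:
        rows = list(csv.reader(fh))
    values = [row[0] for row in rows if row]
    suspicious = []
    for start in range(0, len(values), window):
        chunk = values[start:start + window]
        if len(chunk) < window:
            break
        h = shannon_entropy(chunk)
        if abs(h - baseline) > deviation:
            suspicious.append((start, h))     # candidate attack window
    return suspicious
```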
In order to test the efficiency of the application, a Cloud virtual infrastructure was built using OpenNebula Sandbox and the Apache Web Server. Two tests were run against this infrastructure: the first aimed to verify the effectiveness of the tool against the Cloud environment created. Based on the results of this test, a second test was proposed to demonstrate how the Hybrid Application performs against the attacks carried out. The conclusion of the tests showed how disruptive the different types of Low-Rate DDoS can be and also exhibited promising results for the performance of the Hybrid Application against Low-Rate Distributed Denial of Service attacks. The Hybrid Application was successful in identifying each type of Low-Rate DDoS and separating the traffic, generating few false positives in the process. The results are presented in the form of parameters and graphs.
Towards Interoperable Research Infrastructures for Environmental and Earth Sciences
This open access book summarises the latest developments in data management in the EU H2020 ENVRIplus project, which brought together more than 20 environmental and Earth science research infrastructures into a single community. It provides readers with a systematic overview of the common challenges faced by research infrastructures and of how a ‘reference model guided’ engineering approach can be used to achieve greater interoperability among such infrastructures in the environmental and Earth sciences. The 20 contributions in this book are structured in five parts on the design, development, deployment, operation and use of research infrastructures. Part one provides an overview of the state of the art of research infrastructure and relevant e-Infrastructure technologies, part two discusses the reference model guided engineering approach, the third part presents the software and tools developed for common data management challenges, the fourth part demonstrates the software via several use cases, and the last part discusses sustainability and future directions.
Systematization of the General Requirements for Software Development in Industry 4.0 (Sistematização dos requisitos gerais para o desenvolvimento de software na indústria 4.0)
The identification and implementation of technologies and innovations, through the development of business strategies, are factors that contribute to organizational success. In this context, Industry 4.0 can contribute to the development of industries that seek technological and innovative differentials. The objective of this research is to develop a systematic approach for software development in Industry 4.0. As these are current topics tied to technology and innovation, they are still underdeveloped in academic studies. Through a bibliographic review using bibliometrics and content analysis techniques, the general requirements for software development in Industry 4.0 were identified and, for each requirement, the associated challenges, risks, gaps, advantages and trends. The systematic approach was reviewed by academic experts and is considered theoretical with incorporations from practice. The AHP method was used to prioritize the requirements, resulting in the adjusted systematic approach, with the following elements and relative scores from the specialists: analysis of opportunities 34.98%, operational efficiency 2.0 19.41%, optimization of the business model 15.57%, information technology 20.26%, IT integration 12.29%, IT security management 7.97%, new interfaces and data 17.93%, data analysis and management 6.90%, new intellectual property management 6.69%, cloud-based applications 4.34%, integrated and intelligent management 10.75%, intelligent supply chain 3.85%, lifecycle management 3.81%, intelligent logistics 3.09%, change and learning 16.08%, corporate business 9.33%, organizational learning 6.75%. Among the criteria, the requirement that received the most weight was the analysis of opportunities, and among the sub-criteria those with the highest weight were operational efficiency, optimization of the business model and IT integration. Finally, an analysis by specialists from software development companies was applied to understand how the requirements are used in these companies, leading to the final systematic approach. The AHP results showed the importance of opportunity analysis and operational efficiency in organizations, indicating that companies are concerned with identifying business opportunities and managing processes efficiently. These results also showed the preparation stage of the systematic approach to be the most relevant. The interviews demonstrated a concern with cybersecurity and IT integration tools, but revealed a gap in relation to the development of integrated and intelligent management, with emphasis on intelligent logistics.
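For context, the snippet below shows how AHP-style priority weights can be derived from a reciprocal pairwise comparison matrix via its principal eigenvector; the three criteria and judgement values are invented for illustration and are not the study's data.

```python
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """Derive AHP priority weights as the normalised principal eigenvector
    of a reciprocal pairwise comparison matrix."""
    eigvals, eigvecs = np.linalg.eig(pairwise)
    principal = eigvecs[:, np.argmax(eigvals.real)].real
    return principal / principal.sum()

# Hypothetical pairwise judgements for three criteria:
# opportunity analysis vs. operational efficiency vs. IT integration
matrix = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])
print(ahp_weights(matrix).round(3))
```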
Self-managed Workflows for Cyber-physical Systems
Workflows are a well-established concept for describing business logic and processes in web-based applications and enterprise application integration scenarios on an abstract, implementation-agnostic level. Applying Business Process Management (BPM) technologies to increase autonomy and automate sequences of activities in Cyber-physical Systems (CPS) promises various advantages, including higher flexibility and simplified programming, more efficient resource usage, and easier integration and orchestration of CPS devices. However, traditional BPM notations and engines have not been designed to be used in the context of CPS, which raises new research questions arising from the close coupling of the virtual and physical worlds. Among these challenges are the interaction with complex compounds of heterogeneous sensors, actuators, things and humans; the detection and handling of errors in the physical world; and the synchronization of the cyber and physical process execution models. Novel factors related to the interaction with the physical world, including real-world obstacles, inconsistencies and inaccuracies, may jeopardize the successful execution of workflows in CPS and may lead to unanticipated situations.
This thesis investigates the properties and requirements of CPS relevant for the introduction of BPM technologies into cyber-physical domains. We discuss existing BPM systems and related work regarding the integration of sensors and actuators into workflows, the development of a Workflow Management System (WfMS) for CPS, and the synchronization of the virtual and physical process execution as part of self-* capabilities for WfMSes. Based on the identified research gap, we present concepts and prototypes for the development of a CPS WfMS with respect to all phases of the BPM lifecycle. First, we introduce a CPS workflow notation that supports modelling the interaction of complex sensors, actuators, humans, dynamic services and WfMSes on the business process level. In addition, the effects of the workflow execution can be specified in the form of goals defining success and error criteria for the execution of individual process steps. Along with that, we introduce the notion of Cyber-physical Consistency. Following this, we present a system architecture for a corresponding WfMS (PROtEUS) to execute the modelled processes, also in distributed execution settings and with a focus on interactive process management. Subsequently, the integration of a cyber-physical feedback loop to increase the resilience of the process execution at runtime is discussed. Within this MAPE-K loop, sensor and context data are related to the effects of the process execution, deviations from expected behaviour are detected, and compensations are planned and executed. The execution of this feedback loop can be scaled depending on the required level of precision and consistency. Our implementation of the MAPE-K loop proves to be a general framework for adding self-* capabilities to WfMSes. The evaluation of our concepts within a smart home case study shows expected behaviour, reasonable execution times, reduced error rates and high coverage of the identified requirements, which makes our CPS WfMS a suitable system for introducing workflows on top of the systems, devices, things and applications of CPS.
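As a minimal sketch of the kind of MAPE-K feedback loop described here, the code below monitors sensor data, analyses deviations from expected workflow effects, plans compensations and executes them against a shared knowledge base. The class and method names are assumptions for illustration and do not reflect the PROtEUS implementation.

```python
from typing import Callable, Dict, List

class MapeKLoop:
    """Minimal MAPE-K style feedback loop: Monitor sensor/context data,
    Analyse deviations from expected workflow effects, Plan compensations,
    Execute them, all against a shared Knowledge base. Illustrative only."""

    def __init__(self, read_sensors: Callable[[], Dict[str, float]],
                 expected_effects: Dict[str, float], tolerance: float = 0.1):
        self.read_sensors = read_sensors                 # Monitor source
        self.knowledge = {"expected": expected_effects, "history": []}
        self.tolerance = tolerance

    def analyse(self, observed: Dict[str, float]) -> List[str]:
        """Return the names of effects deviating beyond the relative tolerance."""
        expected = self.knowledge["expected"]
        return [k for k, v in expected.items()
                if abs(observed.get(k, 0.0) - v) > self.tolerance * abs(v)]

    def plan(self, deviations: List[str]) -> List[str]:
        """Map each detected deviation to a compensation action (placeholder)."""
        return [f"compensate:{name}" for name in deviations]

    def execute(self, actions: List[str]) -> None:
        for action in actions:
            print("executing", action)                   # stand-in for actuator commands

    def run_once(self) -> None:
        observed = self.read_sensors()                   # Monitor
        actions = self.plan(self.analyse(observed))      # Analyse + Plan
        self.execute(actions)                            # Execute
        self.knowledge["history"].append(observed)       # update Knowledge
```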
Table of contents:
1. Introduction
1.1. Motivation
1.2. Research Issues
1.3. Scope & Contributions
1.4. Structure of the Thesis
2. Workflows and Cyber-physical Systems
2.1. Introduction
2.2. Two Motivating Examples
2.3. Business Process Management and Workflow Technologies
2.4. Cyber-physical Systems
2.5. Workflows in CPS
2.6. Requirements
3. Related Work
3.1. Introduction
3.2. Existing BPM Systems in Industry and Academia
3.3. Modelling of CPS Workflows
3.4. CPS Workflow Systems
3.5. Cyber-physical Synchronization
3.6. Self-* for BPM Systems
3.7. Retrofitting Frameworks for WfMSes
3.8. Conclusion & Deficits
4. Modelling of Cyber-physical Workflows with Consistency Style Sheets
4.1. Introduction
4.2. Workflow Metamodel
4.3. Knowledge Base
4.4. Dynamic Services
4.5. CPS-related Workflow Effects
4.6. Cyber-physical Consistency
4.7. Consistency Style Sheets
4.8. Tools for Modelling of CPS Workflows
4.9. Compatibility with Existing Business Process Notations
5. Architecture of a WfMS for Distributed CPS Workflows
5.1. Introduction
5.2. PROtEUS Process Execution System
5.3. Internet of Things Middleware
5.4. Dynamic Service Selection via Semantic Access Layer
5.5. Process Distribution
5.6. Ubiquitous Human Interaction
5.7. Towards a CPS WfMS Reference Architecture for Other Domains
6. Scalable Execution of Self-managed CPS Workflows
6.1. Introduction
6.2. MAPE-K Control Loops for Autonomous Workflows
6.3. Feedback Loop for Cyber-physical Consistency
6.4. Feedback Loop for Distributed Workflows
6.5. Consistency Levels, Scalability and Scalable Consistency
6.6. Self-managed Workflows
6.7. Adaptations and Meta-adaptations
6.8. Multiple Feedback Loops and Process Instances
6.9. Transactions and ACID for CPS Workflows
6.10. Runtime View on Cyber-physical Synchronization for Workflows
6.11. Applicability of Workflow Feedback Loops to other CPS Domains
6.12. A Retrofitting Framework for Self-managed CPS WfMSes
7. Evaluation
7.1. Introduction
7.2. Hardware and Software
7.3. PROtEUS Base System
7.4. PROtEUS with Feedback Service
7.5. Feedback Service with Legacy WfMSes
7.6. Qualitative Discussion of Requirements and Additional CPS Aspects
7.7. Comparison with Related Work
7.8. Conclusion
8. Summary and Future Work
8.1. Summary and Conclusion
8.2. Advances of this Thesis
8.3. Contributions to the Research Area
8.4. Relevance
8.5. Open Questions
8.6. Future Work
Bibliography
Acronyms
List of Figures
List of Tables
List of Listings
Appendices