Multi-level Autonomic Business Process Management
The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-38484-4_14

Nowadays, business processes are becoming increasingly complex
and heterogeneous. Autonomic Computing principles can reduce this complexity
by autonomously managing the software systems and the running processes,
their states and evolution. Business processes capable of managing themselves
are referred to as Autonomic Business Processes (ABPs). However, a key challenge
is to keep the models of such ABPs understandable and expressive in
increasingly complex scenarios. This paper discusses the design aspects of an
autonomic business process management system able to self-manage processes
based on operational adaptation. The goal is to minimize human intervention
during the process definition and execution phases. This novel approach, named
MABUP, provides four well-defined levels of abstraction to express business
and operational knowledge and to guide the management activity; namely, Organizational
Level, Technological Level, Operational Level and Service Level.
A real example is used to illustrate our proposal.

Research supported by CAPES, CNPQ and the Spanish Ministry of Science and Innovation.

Oliveira, K.; Castro, J.; España Cubillo, S.; Pastor López, O. (2013). Multi-level Autonomic Business Process Management. In: Enterprise, Business-Process and Information Systems Modeling. Springer, pp. 184-198. doi:10.1007/978-3-642-38484-4_14
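The self-management behaviour described in this abstract follows the classic autonomic control loop from the autonomic computing literature: monitor, analyze, plan, execute (MAPE). The sketch below is a rough illustration of that generic loop only; the process representation, metric names, and scale-out policy are hypothetical assumptions, not MABUP's actual design.

```python
class AutonomicManager:
    """Minimal monitor-analyze-plan-execute (MAPE) loop sketch.

    The managed "process" here is just a dict of runtime metrics; a real
    autonomic BPM system would attach sensors and effectors to a process engine.
    """

    def __init__(self, target_latency):
        # Operational goal: keep process latency below this threshold.
        self.target_latency = target_latency

    def monitor(self, process):
        # Collect runtime metrics from the managed process (hypothetical fields).
        return {"latency": process["latency"], "workers": process["workers"]}

    def analyze(self, metrics):
        # Detect a violation of the operational goal.
        return metrics["latency"] > self.target_latency

    def plan(self, metrics):
        # Choose an operational adaptation: scale out in proportion to the overload.
        factor = metrics["latency"] / self.target_latency
        return {"workers": max(metrics["workers"] + 1,
                               int(metrics["workers"] * factor))}

    def execute(self, process, change):
        # Apply the planned adaptation to the running process.
        process.update(change)
        return process

    def loop(self, process):
        # One pass of the MAPE cycle; a real manager would run this continuously.
        metrics = self.monitor(process)
        if self.analyze(metrics):
            process = self.execute(process, self.plan(metrics))
        return process
```

In a full design the loop would also consult a knowledge base (MAPE-K), which is where MABUP's four abstraction levels would supply business and operational knowledge.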
Real-time co-ordinated resource management in a computational environment
Design co-ordination is an emerging engineering design management philosophy with its emphasis on timeliness and appropriateness. Furthermore, a key element of design co-ordination has been identified as resource management, the aim of which is to facilitate the optimised use of resources throughout a dynamic and changeable process. An approach to operational design co-ordination has been developed, which incorporates the appropriate techniques to ensure that the aim of co-ordinated resource management can be fulfilled. This approach has been realised within an agent-based software system, called the Design Coordination System (DCS), such that a computational design analysis can be managed in a coherent and co-ordinated manner. The DCS is applied to a computational analysis for turbine blade design provided by industry. The application of the DCS involves resources, i.e. workstations within a computer network, being utilised to perform the computational analysis, which uses a suite of software tools to calculate stress and vibration characteristics of turbine blades. Furthermore, the application of the system shows that the utilisation of resources can be optimised throughout the computational design analysis despite the variable nature of the computer network.
Activity-Centric Computing Systems
• Activity-Centric Computing (ACC) addresses deep-rooted information management problems in traditional application-centric computing by providing a unifying computational model for human goal-oriented "activity," cutting across system boundaries.
• We provide a historical review of the motivation for and development of ACC systems, and highlight the need to broaden this research topic to include low-level system research and development.
• ACC concepts and technology relate to many facets of computing; they are relevant for researchers working on new computing models and operating systems, as well as for application designers seeking to incorporate these technologies in domain-specific applications.
A performance study of anomaly detection using entropy method
An experiment to study the entropy method for an anomaly detection system has
been performed. The study was conducted using real data generated from the
distributed sensor networks at the Intel Berkeley Research Laboratory. The
experimental results were compared with the elliptical method and analyzed on
two-dimensional data sets acquired from temperature and humidity sensors
across 52 microcontrollers. Using binary classification to determine the upper
and lower boundaries for each series of sensors, it was shown that the entropy
method detects more out-of-range sensor nodes than the elliptical method. The
better result is arguably due to a limitation of the elliptical approach,
which requires a certain correlation between two sensor series, whereas the
entropy approach treats each sensor series independently. This is particularly
important in the present case, where the two sensor series are not correlated
with each other.

Comment: Proceeding of the International Conference on Computer, Control,
Informatics and its Applications (2017) pp. 137-14
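The entropy-based classification described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the equal-width binning, the mean ± 2 standard-deviation boundaries, and the node names are all assumptions.

```python
import math
from collections import Counter

def shannon_entropy(values, bins=10):
    """Discretize a sensor series into equal-width bins and return its
    Shannon entropy in bits; a stuck or low-variability sensor scores low."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0  # guard against a constant series
    counts = Counter(min(int((v - lo) / width), bins - 1) for v in values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def entropy_bounds(entropies, k=2.0):
    """Upper and lower boundaries as mean +/- k standard deviations
    of the per-node entropies (an assumed boundary rule)."""
    n = len(entropies)
    mean = sum(entropies) / n
    std = (sum((e - mean) ** 2 for e in entropies) / n) ** 0.5
    return mean - k * std, mean + k * std

def flag_anomalous_nodes(series_by_node, k=2.0):
    """Binary classification: a node is flagged if its series entropy
    falls outside the boundaries. Each series is treated independently,
    so no cross-sensor correlation is required."""
    ents = {node: shannon_entropy(vals) for node, vals in series_by_node.items()}
    lo, hi = entropy_bounds(list(ents.values()), k)
    return [node for node, e in ents.items() if e < lo or e > hi]
```

For example, ten nodes reporting noisy readings plus one node stuck at a constant value would see the stuck node's entropy collapse toward zero and fall below the lower boundary.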
DRS: Dynamic Resource Scheduling for Real-Time Analytics over Fast Streams
In a data stream management system (DSMS), users register continuous queries,
and receive result updates as data arrive and expire. We focus on applications
with real-time constraints, in which the user must receive each result update
within a given period after the update occurs. To handle fast data, the DSMS is
commonly placed on top of a cloud infrastructure. Because stream properties
such as arrival rates can fluctuate unpredictably, cloud resources must be
dynamically provisioned and scheduled accordingly to ensure real-time response.
It is essential for both existing systems and future developments to be able
to schedule resources dynamically according to the current workload, in order
to avoid wasting resources or failing to deliver correct results on time.
Motivated by this, we propose DRS, a novel dynamic
resource scheduler for cloud-based DSMSs. DRS overcomes three fundamental
challenges: (a) how to model the relationship between the provisioned resources
and query response time; (b) where to best place resources; and (c) how to
measure system load with minimal overhead. In particular, DRS includes an
accurate performance model based on the theory of \emph{Jackson open queueing
networks} and is capable of handling \emph{arbitrary} operator topologies,
possibly with loops, splits and joins. Extensive experiments with real data
confirm that DRS achieves real-time response with close to optimal resource
consumption.

Comment: This is our latest version with certain modificatio
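The Jackson open queueing network model referenced above has a compact closed form when each operator is treated as an M/M/1 queue: the effective arrival rates solve the traffic equations λ = γ + λP (which handles loops, splits, and joins via the routing matrix P), and the mean end-to-end response time follows from Little's law. The sketch below is a textbook illustration under those standard assumptions (Poisson external arrivals, exponential service, probabilistic routing), not DRS's actual code.

```python
def effective_rates(external, routing, iters=1000):
    """Solve the traffic equations lambda = gamma + lambda . P by fixed-point
    iteration. routing[i][j] is the probability that a job leaving operator i
    is routed to operator j (rows may sum to < 1; the remainder exits the
    system). Loops are allowed as long as the network is open and stable."""
    n = len(external)
    lam = list(external)
    for _ in range(iters):
        lam = [external[j] + sum(lam[i] * routing[i][j] for i in range(n))
               for j in range(n)]
    return lam

def mean_response_time(external, routing, service_rates):
    """Jackson's theorem: each node behaves like an independent M/M/1 queue,
    so the mean number of jobs at node i is lam_i / (mu_i - lam_i); the mean
    end-to-end response time then follows from Little's law, L = lambda * T."""
    lam = effective_rates(external, routing)
    for l, mu in zip(lam, service_rates):
        if l >= mu:
            raise ValueError("unstable: arrival rate >= service rate at an operator")
    total_jobs = sum(l / (mu - l) for l, mu in zip(lam, service_rates))
    return total_jobs / sum(external)
```

A scheduler built on such a model can test a candidate resource allocation (the service rates) against the real-time constraint before committing to it.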
Knowledge dependencies in fuzzy information systems evaluation
Experience and research within the field of Information Systems Evaluation (ISE) has traditionally centered on providing tools and techniques for investment justification and appraisal, based upon explicit knowledge which encodes financial and other direct situational factors (such as accounting, costing and risk metrics). However, such approaches tend not to include additional causal interdependencies that are based upon tacit knowledge and are inherent within such a decision-making task. The authors show the results of applying a cognitive mapping approach, in the guise of a Fuzzy Cognitive Mapping (FCM) simulation, i.e. Fuzzy Information Systems Evaluation (F-ISE), in order to highlight the usefulness of such a technique. In an exploratory sense, the authors highlight the contingent and necessary knowledge dependencies that relate to the investment appraisal decision-making task, in terms of the interplay between tacit and explicit knowledge.
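A Fuzzy Cognitive Map simulation of the kind applied here iterates an activation vector over a signed weight matrix until the concept values stabilise. The sketch below uses one common FCM update rule with a sigmoid squashing function; the concepts, weights, and convergence settings are hypothetical, not those of the F-ISE study.

```python
import math

def fcm_step(state, weights, f=lambda x: 1 / (1 + math.exp(-x))):
    """One FCM update: each concept's next activation is the squashed
    weighted sum of the concepts influencing it, where weights[i][j] is
    the (possibly negative) causal influence of concept i on concept j."""
    n = len(state)
    return [f(sum(state[i] * weights[i][j] for i in range(n)))
            for j in range(n)]

def fcm_simulate(state, weights, max_iters=100, tol=1e-6):
    """Iterate until the activation vector reaches a fixed point (or until
    max_iters); the fixed point is read as the scenario's steady outcome."""
    for _ in range(max_iters):
        nxt = fcm_step(state, weights)
        if max(abs(a - b) for a, b in zip(nxt, state)) < tol:
            return nxt
        state = nxt
    return state
```

In an ISE setting, each concept would stand for a tacit or explicit appraisal factor (e.g. budget pressure, perceived risk, approval likelihood), and the converged activations indicate how those interdependent factors settle for a given investment scenario.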
A comparative analysis of business process modelling techniques
Business process modelling is an increasingly popular research area for both organisations and academia, owing to its usefulness in facilitating human understanding and communication. Several modelling techniques have been proposed and used to capture the characteristics of business processes. However, available techniques view business processes from different perspectives and have different features and capabilities. Furthermore, to date, limited guidelines exist for selecting appropriate modelling techniques based on the characteristics of the problem and its requirements. This paper presents a comparative analysis of some popular business process modelling techniques. The comparative framework is based on five criteria: flexibility, ease of use, understandability, simulation support and scope. The study highlights some of the major paradigmatic differences between the techniques. The proposed framework can serve as the basis for evaluating further modelling techniques and generating selection procedures.
Toward optimal multi-objective models of network security: Survey
Information security is an important aspect of a successful business today. However, financial difficulties and budget cuts create the problem of selecting appropriate security measures while keeping networked systems up and running. Economic models proposed in the literature do not address the challenging problem of security countermeasure selection. We have classified security models, based on the methodologies they use, that can be used to harden a system in a cost-effective manner. In addition, we have identified the challenges of the simplified risk assessment approaches used in the economic models and made recommendations on how these challenges can be addressed in order to support decision makers.
Contested modelling
We suggest that the role and function of expert computational modelling in real-world decision-making need scrutiny and that practices need to change. We discuss some empirical and theory-based improvements to the coupling between the modelling process and the real world, including social and behavioural processes, which we have expressed as a set of questions that we believe need to be answered by all projects engaged in such modelling. These are based on a systems analysis of four research initiatives, covering different scales and timeframes and addressing the complexity of intervention in a sustainability context. Our proposed improvements require new approaches for analysing the relationship between a project's models and its publics. They reflect what we believe is a necessary and beneficial dialogue between the realms of expert scientific modelling and systems thinking. This paper is an attempt to start that process, itself reflecting a robust dialogue between two practitioners situated within differing traditions, puzzling over how to integrate perspectives and achieve wider participation in researching this problem space.