    The evolution of business analytics: based on case study research

    While business analytics is becoming more significant and more widely used by companies across a growing number of industries, for many the concept remains complex and elusive. The field of business analytics is highly generic and fragmented, leaving managers confused and ultimately inhibited from making valuable decisions. This paper presents an evolutionary depiction of business analytics, using real-world case studies to provide a distinct overview of where the phenomenon originated, where it currently stands, and where it is heading. The paper provides eight case studies, representing three different eras: yesterday (1950s to 1990s), today (2000s to 2020s), and tomorrow (2030s to 2050s). Through cross-case analysis we have identified patterns that serve as the foundation for a discussion of future developments within business analytics. Based on our findings, we argue that the automation of business processes will most likely continue to increase. AI is expanding into numerous areas, each specializing in a complex task previously reserved for professionals. However, the patterns show that new occupations linked to artificial intelligence will most probably be created. For the training of intelligent systems, data will most likely be in greater demand than ever. This growing volume of data will likely strain current data infrastructures, creating the need for stronger networks and systems that can process, store, and manage large amounts of diverse data types in real time while maintaining high security. Furthermore, data privacy concerns have become more significant in recent years, although the case study research indicates that this has not limited corporations' access to data. On the contrary, corporations, people, and devices will most likely become even more connected than ever before.

    A Semantic Model for Enhancing Network Services Management and Auditing

    The road toward ubiquity, heterogeneity and virtualization of network services and resources calls for a formal and systematic approach to network management tasks. In particular, the semantic characterization and modeling of services provided to users play an essential role in fostering autonomic service management, service negotiation and auditing. This paper is centered on the definition of an ontology for multiservice IP networks which intends to address multiple service management goals, namely: (i) to foster client and service provider interoperability; (ii) to manage network service contracts, facilitating dynamic negotiation between clients and ISPs; (iii) to access and query SLA/SLS data on an individual or aggregated basis to assist service provisioning in the network; and (iv) to sustain service monitoring and auditing. In order to take full advantage of the proposed semantic model, a service model API is provided to allow service management platforms to access the ontological contents. This ontological development also takes advantage of SWRL to discover new knowledge, enriching the possibilities of systems described using this model.
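
    As a sketch of how a management platform might consume such an ontology, the following Java snippet loads an OWL model and queries SLA data with SPARQL via Apache Jena. The file name, namespace and property names are hypothetical placeholders; the paper's actual schema and service model API may differ.

        import org.apache.jena.query.QueryExecution;
        import org.apache.jena.query.QueryExecutionFactory;
        import org.apache.jena.query.QuerySolution;
        import org.apache.jena.query.ResultSet;
        import org.apache.jena.rdf.model.Model;
        import org.apache.jena.rdf.model.ModelFactory;

        public class SlaQuerySketch {
            public static void main(String[] args) {
                // Load the (hypothetical) multiservice IP network ontology.
                Model model = ModelFactory.createDefaultModel();
                model.read("network-services.owl");

                // Query SLAs and their guaranteed throughput on an individual basis.
                String sparql =
                    "PREFIX ns: <http://example.org/netservice#> " +
                    "SELECT ?sla ?throughput WHERE { " +
                    "  ?sla a ns:ServiceLevelAgreement ; " +
                    "       ns:guaranteedThroughput ?throughput . }";

                try (QueryExecution qe = QueryExecutionFactory.create(sparql, model)) {
                    ResultSet results = qe.execSelect();
                    while (results.hasNext()) {
                        QuerySolution row = results.next();
                        System.out.println(row.get("sla") + " -> " + row.get("throughput"));
                    }
                }
            }
        }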

    StarMX: A Framework for Developing Self-Managing Software Systems

    The scale of computing systems has grown extensively over the past few decades in order to satisfy emerging business requirements. As a result of this evolution, the complexity of these systems has increased significantly, which has led to many difficulties in managing and administering them. The solution to this problem is to build systems that are capable of managing themselves, given high-level objectives. This vision is also known as Autonomic Computing. A self-managing system is governed by a closed control loop, which is responsible for dynamically monitoring the underlying system, analyzing the observed situation, planning recovery actions, and executing the plan to maintain the system's equilibrium. The realization of such systems poses several developmental and operational challenges, including developing their architecture, constructing the control loop, and creating services that enable dynamic adaptation behavior. Software frameworks are effective in addressing these challenges: they can simplify the development of such systems by reducing design and implementation effort, and they provide runtime services for supporting self-managing behavior. This dissertation presents a novel software framework, called StarMX, for developing adaptive and self-managing Java-based systems. It is a generic, configurable framework based on standards and well-established principles, and it provides the features and facilities required for the development of such systems. It extensively supports Java Management Extensions (JMX) and is capable of integrating with different policy engines, allowing the developer to incorporate these techniques in the design of a control loop in a flexible manner. The control loop is created as a chain of entities, called processes, such that each process represents one or more functions of the loop (monitoring, analyzing, planning, and executing). A process is implemented in either a policy language or the Java language. At runtime, the framework invokes the chain of processes in the control loop, providing each one with the required set of objects for monitoring and effecting. An open source Java-based Voice over IP system, called CC2, is selected as the case study in a set of experiments that aim to establish a solid understanding of the framework's suitability for developing adaptive systems and to improve its feature set. The experiments are also used to evaluate the performance overhead incurred by the framework at runtime. The performance analysis shows the execution time spent in different components, including the framework itself, the policy engine, and the sensors/effectors. The results reveal that the time spent in the framework is negligible and has no considerable impact on the system's overall performance.
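
    The abstract describes a closed monitor-analyze-plan-execute loop built on JMX. The sketch below is not StarMX's actual API, only a minimal illustration of such a loop in plain Java, using a standard platform MBean as the sensor and a simple threshold policy for analysis and planning.

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryMXBean;
        import java.lang.management.MemoryUsage;

        public class ControlLoopSketch {
            public static void main(String[] args) throws InterruptedException {
                // Sensor: the standard platform memory MBean (StarMX exposes
                // application-specific sensors/effectors via JMX in the same spirit).
                MemoryMXBean sensor = ManagementFactory.getMemoryMXBean();
                while (true) {
                    MemoryUsage heap = sensor.getHeapMemoryUsage();          // monitor
                    long max = heap.getMax();                                // -1 if undefined
                    if (max > 0) {
                        double utilization = (double) heap.getUsed() / max;  // analyze
                        if (utilization > 0.8) {                             // plan: threshold policy
                            System.gc();                                     // execute: effector action
                        }
                    }
                    Thread.sleep(5_000);                                     // sampling period
                }
            }
        }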

    On Design and Realization of New Generation Mission-Critical Application Systems

    A mission-critical system typically refers to a project or system whose success is vital to the mission of the underlying organization. The failure or delayed completion of tasks in mission-critical systems may cause severe financial loss, or even human casualties. For example, the failure to forecast Hurricane Rita accurately and in time in September 2005 caused enormous financial loss and several deaths. As such, real-time guarantees and reliability have always been two key foci of mission-critical system design. Many factors affect real-time guarantees and reliability. From the software design perspective, which is the focus of this dissertation, three aspects are most important: how to design a single application to effectively support real-time requirements and improve reliability; how to integrate different applications in a cluster environment to guarantee real-time requirements and improve reliability; and how to effectively coordinate distributed applications to support real-time requirements and improve reliability. Following these three aspects, this dissertation proposes and implements three novel methodologies: real-time component-based single-node application development, real-time workflow-based cluster application integration, and real-time distributed admission control. For ease of understanding, we introduce these three methodologies and their implementations in three real-world mission-critical application systems: a single-node mission-critical system, a cluster-environment mission-critical system, and a wide-area-network mission-critical system. We study the full-scale design and implementation of these systems. More specifically: 1) For the single-node system, we introduce a real-time component-based application model and a novel design methodology, and based on them we implement a real-time component-based Enterprise JavaBeans (EJB) system. Through component-based design and efficient resource management and scheduling, we show that our model and design methodology can effectively improve system reliability and guarantee real-time requirements. 2) For the cluster environment, we introduce a new application model and a real-time workflow-based application integration methodology, and based on them we implement a data center management system for the Southeastern Universities Research Association (SURA) project. We show that our methodology can greatly simplify the design of such a system and make it easier to meet deadline requirements, while improving system reliability through the reuse of fully tested legacy models. 3) For the wide-area-network system, we narrow our focus to a representative VoIP system and introduce a general distributed real-time VoIP system model, a novel system design methodology, and an implementation. We show that with our new model and architectural design mechanism, we can provide effective real-time guarantees for Voice over Internet Protocol (VoIP) systems.
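
    The dissertation's admission control design is not detailed in the abstract. Purely as an illustration of the general idea of distributed admission control for VoIP, the sketch below admits a call only while enough of a fixed bandwidth budget remains; the class name and capacity figures are assumptions.

        import java.util.concurrent.Semaphore;

        public class AdmissionController {
            // Remaining bandwidth budget of this node, in kbps (illustrative figure).
            private final Semaphore bandwidthKbps;

            public AdmissionController(int capacityKbps) {
                this.bandwidthKbps = new Semaphore(capacityKbps);
            }

            // Admit a new call only if its bandwidth demand still fits; non-blocking.
            public boolean admit(int callKbps) {
                return bandwidthKbps.tryAcquire(callKbps);
            }

            // Return the bandwidth when the call ends.
            public void release(int callKbps) {
                bandwidthKbps.release(callKbps);
            }

            public static void main(String[] args) {
                AdmissionController node = new AdmissionController(1_000); // 1 Mbps budget
                System.out.println(node.admit(64));    // e.g. a G.711 call (~64 kbps) -> true
                System.out.println(node.admit(2_000)); // exceeds remaining budget -> false
            }
        }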

    Abstractions to Support Dynamic Adaptation of Communication Frameworks for User-Centric Communication

    The convergence of data, audio and video on IP networks is changing the way individuals, groups and organizations communicate. This diversity of communication media presents opportunities for creating synergistic collaborative communications. This form of collaborative communication is, however, not without its challenges. The increasing number of communication service providers, coupled with a combinatorial mix of offered services, varying Quality-of-Service and oscillating pricing of services, increases the complexity for users of managing and maintaining 'always best' priced or performing services. Consumers have to manually manage and adapt their communication in line with differences in services across devices, networks and media while ensuring that the usage remains consistent with their intended goals. This dissertation proposes a novel user-centric approach to address this problem. The proposed approach aims to reduce this complexity for the user by (1) providing high-level abstractions and a policy-based methodology for the automated selection of communication services guided by high-level user policies, and (2) providing services through the seamless integration of multiple communication service providers, together with an extensible framework to support that integration. The approach was implemented in the Communication Virtual Machine (CVM), a model-driven technology for realizing communication applications. The CVM includes the Network Communication Broker (NCB), the layer responsible for providing a network-independent API to the upper layers of the CVM. The initial prototype of the NCB supported only a single communication framework, which limited the number, quality and types of services available. Experimental evaluation shows that the additional overhead of the approach is minimal compared to the individual communication service frameworks. Additionally, the proposed automated approach outperformed the individual communication service frameworks for cross-framework switching.
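
    As a hedged sketch of policy-guided service selection (the types and policy encoding below are hypothetical, not the CVM or NCB API), the snippet picks the cheapest provider that still satisfies a user-level QoS constraint:

        import java.util.Comparator;
        import java.util.List;
        import java.util.Optional;

        public class ProviderSelector {
            // Hypothetical view of a communication service provider's offer.
            record Provider(String name, double pricePerMinute, int maxVideoKbps) {}

            // Policy: among providers meeting the QoS constraint, pick the cheapest.
            static Optional<Provider> select(List<Provider> providers, int requiredVideoKbps) {
                return providers.stream()
                        .filter(p -> p.maxVideoKbps() >= requiredVideoKbps)
                        .min(Comparator.comparingDouble(Provider::pricePerMinute));
            }

            public static void main(String[] args) {
                List<Provider> offers = List.of(
                        new Provider("FrameworkA", 0.02, 512),
                        new Provider("FrameworkB", 0.01, 256));
                // User policy: "always best priced" video call at 384 kbps.
                System.out.println(select(offers, 384)); // -> FrameworkA
            }
        }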

    Ensuring interoperability between network elements in next generation networks

    Next Generation Networks (NGNs), based on the Internet Protocol (IP), implement services such as IP-based telephony and are beginning to replace classic telephony systems. Due to the development and implementation of new, powerful services, these systems are becoming increasingly complex. Implementing these new services (typically software-based network elements) is often accompanied by unexpected and erratic behaviour which can manifest as interoperability problems. The reason for this is insufficient testing at the developing companies. Testing such products is by nature a costly and time-consuming exercise and is therefore cut down to what is considered the maximum acceptable level. Ensuring interoperability between network elements is a known challenge; however, no established concept exists for which testing methods should be utilised to achieve an acceptable level of quality. The objective of this thesis was to improve the interoperability between network elements in NGNs by creating a testing scheme comprising three complementary testing methods: conformance testing, interoperability testing and post-hoc analysis. In the first project, a novel conformance testing methodology for developing sets of conformance test cases for service specifications in NGNs was proposed. This methodology significantly improves the chance of interoperability and provides a considerable enhancement over the currently used interoperability tests. It was evaluated by successfully applying it to the Presence Service. The second project proposed a post-hoc methodology which enables the identification of the root causes of interoperability problems in an NGN in daily operation. The new methods were implemented in the tool IMPACT (IP-Based Multi Protocol Posthoc Analyzer and Conformance Tester), which stores all messages exchanged between network elements in a database. Using SQL queries, the causes of errors can be found efficiently. Overall, the presented testing scheme significantly improves the chance that network elements interoperate successfully. Beyond that, the quality of the software product is raised by mapping these methods to phases in a process model, with well-defined guidance on which test method is best suited at each stage.
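
    The abstract notes that IMPACT stores all exchanged messages in a database and locates error causes via SQL. The sketch below illustrates that style of post-hoc query from Java over a hypothetical schema; the table, columns and JDBC URL are assumptions, not IMPACT's actual design.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class PostHocQuerySketch {
            public static void main(String[] args) throws SQLException {
                // Hypothetical schema: one row per captured message between elements.
                String sql = "SELECT call_id, src_element, dst_element, status_code "
                           + "FROM sip_messages "
                           + "WHERE status_code >= 500 "   // responses indicating failures
                           + "ORDER BY received_at";
                // Any JDBC database works; an embedded H2 file is assumed here.
                try (Connection conn = DriverManager.getConnection("jdbc:h2:./impact");
                     PreparedStatement ps = conn.prepareStatement(sql);
                     ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.printf("%s: %s -> %s failed with %d%n",
                                rs.getString("call_id"), rs.getString("src_element"),
                                rs.getString("dst_element"), rs.getInt("status_code"));
                    }
                }
            }
        }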

    Service composition based on SIP peer-to-peer networks

    Today the telecommunication market is faced with customers requesting new telecommunication services, especially value-added services. The concept of Next Generation Networks (NGN) appears to be a solution, and it is finding its way into the telecommunication area. These customer expectations have emerged in the context of NGN and the associated migration of telecommunication networks from traditional circuit-switched towards packet-switched networks. One fundamental aspect of the NGN concept is to move the intelligence of services out of the switching plane onto separate Service Delivery Platforms, using SIP (Session Initiation Protocol) to provide the required signalling functionality. Driven by this migration process towards NGN, SIP has emerged as the major signalling protocol for IP (Internet Protocol) based NGNs. In contrast to ISDN (Integrated Services Digital Network) and IN (Intelligent Network), this leads to significantly lower dependencies between the network and its services and enables new services to be implemented much more easily and quickly. In addition, concepts from IT (Information Technology), namely SOA (Service-Oriented Architecture), have strongly influenced the telecommunication sector, driven by the amalgamation of IT and telecommunications. The benefit of applying SOA to telecommunication services is the acceleration of service creation and delivery. The main features of SOA are that services are reusable, discoverable, combinable and independently accessible from any location. The integration of these features offers broader flexibility and efficiency for varying service demands. This thesis proposes a novel framework for service provisioning and composition in SIP-based peer-to-peer networks applying the principles of SOA. One key contribution of the framework is an approach that enables the provisioning and composition of services using SIP itself. Based on this, the framework provides a flexible and fast way to request the creation of composite services. Furthermore, the framework enables requesting and combining multimodal value-added services, meaning they are no longer limited to particular media types such as audio, video and text. The proposed framework has been validated by a prototype implementation.
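
    In the thesis itself, composition is requested and performed via SIP signalling, which the abstract does not detail. Purely to illustrate the SOA chaining idea behind composite value-added services, the sketch below combines two hypothetical atomic services (speech-to-text and translation) into one composite service:

        import java.util.List;
        import java.util.function.Function;

        public class CompositionSketch {
            public static void main(String[] args) {
                // Two hypothetical atomic value-added services over a media session.
                Function<String, String> transcription = audio -> "transcript(" + audio + ")";
                Function<String, String> translation = text -> "translated(" + text + ")";

                // Composite service: chain the atomic services in order.
                Function<String, String> composite = List.of(transcription, translation)
                        .stream()
                        .reduce(Function.identity(), Function::andThen);

                System.out.println(composite.apply("audio-stream"));
                // -> translated(transcript(audio-stream))
            }
        }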