
    Interpreting Business Strategy and Market Dynamics: A Multi-Method AI Approach

    Get PDF
    This research paper presents an integrated approach that combines Long Short-Term Memory (LSTM), Q-Learning, Monte Carlo methods, and Text-to-Text Transfer Transformer (T5) to analyze and evaluate the business strategies of public companies. Leveraging a large and diverse dataset sourced from multiple reliable sources, the study examines corporate strategies and their impact on market dynamics. LSTM and Q-Learning are employed to process sequential data, enabling informed decision-making in simulated market environments and providing insights into potential outcomes of different strategies. The Monte Carlo method manages uncertainty, allowing for a comprehensive analysis of risks and rewards associated with specific strategies. T5 interprets textual data from earnings calls, press releases, and industry reports, offering a deeper understanding of strategic changes and market sentiments. The integration of these techniques enhances the evaluation of business strategies, enabling decision-makers to anticipate future market scenarios and make informed strategic shifts. Overall, this integrated approach provides a comprehensive framework for evaluating and anticipating market dynamics, enhancing the assessment and adjustment of public companies' business decisions.
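
    As a rough illustration of how two of the named ingredients fit together, the sketch below pairs tabular Q-Learning in a toy simulated market with Monte Carlo rollouts over the learned strategy; the market regimes, strategy names, and reward model are invented for illustration and are not the paper's actual setup.

```python
import numpy as np

# Toy illustration: tabular Q-Learning over a simulated market, plus Monte Carlo
# rollouts to estimate the risk/reward spread of the learned strategy.
# States, actions and the reward model are invented for illustration only.
rng = np.random.default_rng(0)
states = ["bear", "flat", "bull"]         # simplified market regimes
actions = ["expand", "hold", "divest"]    # simplified corporate strategies

def step(s, a):
    """Hypothetical transition/reward model of the simulated market (a is an action index)."""
    drift = (0.3, 0.0, -0.2)[a]           # expand / hold / divest
    s_next = int(np.clip(s + rng.choice([-1, 0, 1], p=[0.3, 0.4, 0.3]), 0, 2))
    reward = (s_next - 1) + drift + rng.normal(scale=0.1)
    return s_next, reward

# Q-Learning: learn action values from simulated experience.
Q = np.zeros((len(states), len(actions)))
alpha, gamma, eps = 0.1, 0.9, 0.1
s = 1
for _ in range(5000):
    a = int(rng.integers(len(actions))) if rng.random() < eps else int(Q[s].argmax())
    s_next, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

# Monte Carlo: estimate the return distribution of the greedy policy.
def rollout(horizon=12):
    s, total = 1, 0.0
    for _ in range(horizon):
        s, r = step(s, int(Q[s].argmax()))
        total += r
    return total

returns = np.array([rollout() for _ in range(2000)])
print("greedy strategy per regime:", [actions[int(a)] for a in Q.argmax(axis=1)])
print(f"expected return {returns.mean():.2f}, 5th-percentile return {np.percentile(returns, 5):.2f}")
```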

    Leveraging Multi-Perspective A priori Knowledge in Predictive Business Process Monitoring

    Get PDF
    Predictive business process monitoring is an area dedicated to exploiting past process execution data in order to predict the future unfolding of a currently executed business process instance. Most of the research done in this domain focuses on exploiting past process execution data only, neglecting additional a priori knowledge that might become available at runtime. Recently, an approach was proposed that allows leveraging a priori knowledge about the control flow in the form of LTL rules. However, cases exist in which more granular a priori knowledge becomes available about perspectives that go beyond the pure control flow, such as data, time and resources (multi-perspective a priori knowledge). In this thesis, we propose a technique that makes it possible to leverage multi-perspective a priori knowledge when making predictions of complex sequences, i.e., sequences of events with a subset of the data attributes attached to them. The results, obtained by applying the proposed technique to 20 synthetic logs and 1 real-life log, show that the proposed technique is able to outperform state-of-the-art approaches by successfully leveraging multi-perspective a priori knowledge.
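
    A minimal sketch of the general idea (not the thesis' actual algorithm): candidate continuations of a running case are ranked by a stand-in model score and filtered against a multi-perspective rule that constrains both the control flow and a data attribute. The activities, the 'amount' attribute, the scores, and the rule are all hypothetical.

```python
# Toy sketch: rank the candidate continuations (suffixes) of a running case and
# keep only those that satisfy multi-perspective a priori knowledge, i.e. a rule
# spanning both the control flow and a data attribute. Everything here is invented.

# Candidate suffixes as (suffix, model_score); the score stands in for the
# likelihood a learned sequence model would assign to each continuation.
candidates = [
    ([("approve", 1500), ("notify", 1500)], 0.45),
    ([("approve",  500), ("notify",  500)], 0.35),
    ([("reject",   500)],                   0.20),
]

def satisfies_apriori(suffix):
    """A priori knowledge: 'approve' must eventually occur, and only with amount < 1000."""
    approvals = [amount for activity, amount in suffix if activity == "approve"]
    return bool(approvals) and all(amount < 1000 for amount in approvals)

compliant = [(suffix, p) for suffix, p in candidates if satisfies_apriori(suffix)]
best_suffix, best_p = max(compliant, key=lambda pair: pair[1])
print("predicted continuation:", best_suffix, "with score", best_p)
# Without the multi-perspective rule, the top-scored but non-compliant suffix would win.
```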

    Human Computation and Convergence

    Full text link
    Humans are the most effective integrators and producers of information, directly and through the use of information-processing inventions. As these inventions become increasingly sophisticated, the substantive role of humans in processing information will tend toward capabilities that derive from our most complex cognitive processes, e.g., abstraction, creativity, and applied world knowledge. Through the advancement of human computation - methods that leverage the respective strengths of humans and machines in distributed information-processing systems - formerly discrete processes will combine synergistically into increasingly integrated and complex information-processing systems. These new, collective systems will exhibit an unprecedented degree of predictive accuracy in modeling physical and techno-social processes, and may ultimately coalesce into a single unified predictive organism, with the capacity to address society's most wicked problems and achieve planetary homeostasis. Comment: Pre-publication draft of chapter. 24 pages, 3 figures; added references to pages 1 and 3, and corrected typos

    Encoding & Characterization of process models for Deep Predictive Process Monitoring.

    Get PDF
    Ever-increasing digitalization of all aspects of life modifies the operative execution of most human tasks and produces a huge wealth of information, in the form of data logs, that could be leveraged to further improve the general quality of such executions. One way of leveraging such information is to predict how the execution of such tasks will unfold until their completion, so as to be capable of supporting managers in determining, for example, whether to intervene to prevent undesired process outcomes or how to best allocate resources. In the present thesis, an approach is proposed that uses the information about the parallelism among activities for typical Predictive Process Monitoring tasks, by representing process executions with their corresponding Instance Graph and processing them using deep graph convolutional neural networks. Also, to define the scope in which such an approach works best, a novel metric is devised that effectively measures the parallelism in a business process model. Lastly, a set of metrics is presented that describes the execution context of an activity inside a process in order to represent the activity itself. This is used both to define a querying mechanism for activities in processes and to introduce the notion of "location" as a further relevant prediction target for Predictive Process Monitoring techniques. The proposed techniques have been experimentally evaluated using several real-world datasets and the results are promising.
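
    The sketch below shows one graph-convolution step over a toy instance graph of a case with two parallel activities, roughly in the spirit of the approach described above; the graph, node features, and weights are illustrative and not the thesis' actual architecture.

```python
import numpy as np

# Minimal sketch of one graph-convolution step over a toy process instance graph
# (nodes = executed activities, edges = ordering, including parallel branches).
# Instance graph of a case with two parallel activities B and C:
#   A -> B -> D,  A -> C -> D
A = np.array([[0, 1, 1, 0],    # adjacency: A->B, A->C
              [0, 0, 0, 1],    # B->D
              [0, 0, 0, 1],    # C->D
              [0, 0, 0, 0]], dtype=float)
X = np.eye(4)                  # one-hot node features (activity labels)

# Symmetric normalisation in the style of Kipf & Welling: D^{-1/2} (A + I) D^{-1/2},
# here applied to the undirected graph with self loops so information can propagate.
A_hat = A + A.T + np.eye(4)
deg = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(deg, deg))

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))                  # layer weights (random here, trained in practice)
H = np.maximum(A_norm @ X @ W, 0.0)          # ReLU(A_norm X W): per-node embeddings
graph_embedding = H.mean(axis=0)             # pooled representation of the running case
print(graph_embedding.round(3))              # would feed a prediction head (next activity, outcome, ...)
```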

    A Literature Review on Predictive Monitoring of Business Processes

    Get PDF
    The goal of predictive monitoring is to help businesses achieve their goals, take the right business path, predict outcomes, estimate delivery times, and make business processes risk-aware. In this thesis, we have carefully collected and reviewed in detail the literature that falls into this process mining category. The objective of the thesis is to design a Predictive Monitoring Framework and classify the different predictive monitoring techniques. The framework acts as a guide for researchers who are investigating this field and for businesses that want to apply these techniques in their respective fields.

    Next challenges for adaptive learning systems

    Get PDF
    Learning from evolving streaming data has become a 'hot' research topic in the last decade and many adaptive learning algorithms have been developed. This research was stimulated by rapidly growing amounts of industrial, transactional, sensor and other business data that arrives in real time and needs to be mined in real time. Under such circumstances, constant manual adjustment of models is inefficient and, with increasing amounts of data, is becoming infeasible. Nevertheless, adaptive learning models are still rarely employed in business applications in practice. In the light of rapidly growing structurally rich 'big data', a new generation of parallel computing solutions and cloud computing services, as well as recent advances in portable computing devices, this article aims to identify the current key research directions to be taken to bring adaptive learning closer to application needs. We identify six forthcoming challenges in designing and building adaptive learning (prediction) systems: making adaptive systems scalable, dealing with realistic data, improving usability and trust, integrating expert knowledge, taking into account various application needs, and moving from adaptive algorithms towards adaptive tools. Those challenges are critical for the evolving stream settings, as the process of model building needs to be fully automated and continuous.
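
    As a minimal sketch of what 'adaptive' means here (assuming a simple simulated stream, not any system from the article), the online linear model below is updated one example at a time and therefore keeps tracking the target even after the data-generating process drifts.

```python
import numpy as np

# Minimal sketch of an adaptive learner on a drifting stream: an online linear
# model trained one example at a time with SGD, so it re-adapts after the
# underlying concept changes. Data and the drift point are simulated.
rng = np.random.default_rng(0)
w = np.zeros(3)
lr = 0.05

def true_weights(t):
    """Simulated concept drift: the data-generating process changes at t = 2000."""
    return np.array([1.0, -2.0, 0.5]) if t < 2000 else np.array([-1.0, 0.5, 2.0])

errors = []
for t in range(4000):
    x = rng.normal(size=3)
    y = true_weights(t) @ x + rng.normal(scale=0.1)   # one new example arrives
    y_hat = w @ x                                     # predict first ...
    w += lr * (y - y_hat) * x                         # ... then adapt immediately
    errors.append((y - y_hat) ** 2)

print("MSE before drift:     ", round(float(np.mean(errors[1500:2000])), 3))
print("MSE right after drift:", round(float(np.mean(errors[2000:2100])), 3))
print("MSE after re-adapting:", round(float(np.mean(errors[3500:])), 3))
```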

    Incremental Predictive Process Monitoring: How to Deal with the Variability of Real Environments

    Full text link
    A characteristic of existing predictive process monitoring techniques is to first construct a predictive model based on past process executions, and then use it to predict the future of new ongoing cases, without the possibility of updating it with new cases when they complete their execution. This can make predictive process monitoring too rigid to deal with the variability of processes working in real environments that continuously evolve and/or exhibit new variant behaviors over time. As a solution to this problem, we propose the use of algorithms that allow the incremental construction of the predictive model. These incremental learning algorithms update the model whenever new cases become available, so that the predictive model evolves over time to fit the current circumstances. The algorithms have been implemented using different case encoding strategies and evaluated on a number of real and synthetic datasets. The results provide first evidence of the potential of incremental learning strategies for predictive process monitoring in real environments, and of the impact of different case encoding strategies in this setting.
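
    A minimal sketch of the incremental idea, under assumptions of my own (a toy event log, a simple frequency encoding, and scikit-learn's SGDClassifier as the incremental learner rather than the authors' actual models): the outcome classifier is updated with partial_fit each time a case completes, instead of being retrained from scratch.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Minimal sketch of incremental predictive process monitoring: completed cases
# arrive over time, each prefix is frequency-encoded, and the outcome classifier
# is updated incrementally. The event log, encoding and labels are invented.
ACTIVITIES = ["register", "check", "approve", "reject", "notify"]
IDX = {a: i for i, a in enumerate(ACTIVITIES)}

def frequency_encode(prefix):
    """Encode a case prefix as activity occurrence counts (one simple encoding strategy)."""
    x = np.zeros(len(ACTIVITIES))
    for activity in prefix:
        x[IDX[activity]] += 1
    return x

model = SGDClassifier(random_state=0)
classes = np.array([0, 1])                      # e.g. 0 = normal, 1 = deviant outcome

# Stream of completed cases (trace, outcome); in practice these come from the running system.
completed_cases = [
    (["register", "check", "approve", "notify"], 0),
    (["register", "check", "reject"], 1),
    (["register", "check", "check", "reject"], 1),
    (["register", "approve", "notify"], 0),
]
for trace, outcome in completed_cases:
    X = np.array([frequency_encode(trace[:k]) for k in range(1, len(trace) + 1)])
    y = np.full(len(X), outcome)
    model.partial_fit(X, y, classes=classes)    # incremental update, no full retraining

running_prefix = ["register", "check", "check"]
print("predicted outcome:", int(model.predict(frequency_encode(running_prefix).reshape(1, -1))[0]))
```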

    Learning and Management for Internet-of-Things: Accounting for Adaptivity and Scalability

    Get PDF
    Internet-of-Things (IoT) envisions an intelligent infrastructure of networked smart devices offering task-specific monitoring and control services. The unique features of IoT include extreme heterogeneity, a massive number of devices, and unpredictable dynamics, partially due to human interaction. These call for foundational innovations in network design and management. Ideally, it should allow efficient adaptation to changing environments, and low-cost implementation scalable to a massive number of devices, subject to stringent latency constraints. To this end, the overarching goal of this paper is to outline a unified framework for online learning and management policies in IoT through joint advances in communication, networking, learning, and optimization. From the network architecture vantage point, the unified framework leverages a promising fog architecture that enables smart devices to have proximity access to cloud functionalities at the network edge, along the cloud-to-things continuum. From the algorithmic perspective, key innovations target online approaches adaptive to different degrees of nonstationarity in IoT dynamics, and their scalable model-free implementation under limited feedback that motivates blind or bandit approaches. The proposed framework aspires to offer a stepping stone that leads to systematic designs and analysis of task-specific learning and management schemes for IoT, along with a host of new research directions to build on. Comment: Submitted on June 15 to Proceeding of IEEE Special Issue on Adaptive and Scalable Communication Network
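
    A minimal sketch of bandit-style management under limited feedback, with invented offloading options and latency distributions (not the paper's formulation): an epsilon-greedy controller only observes the latency of the option it actually selects, yet converges to preferring the best one.

```python
import numpy as np

# Toy epsilon-greedy bandit: each slot, pick one of several hypothetical offloading
# options and observe only the latency of the chosen option (bandit feedback).
rng = np.random.default_rng(0)
options = ["local", "fog_node", "cloud"]          # hypothetical offloading choices
true_mean_latency = np.array([8.0, 5.0, 6.5])     # unknown to the controller

counts = np.zeros(len(options))
est_latency = np.zeros(len(options))
eps = 0.1
total = 0.0

for t in range(5000):
    if rng.random() < eps:
        a = int(rng.integers(len(options)))       # explore
    else:
        a = int(est_latency.argmin())             # exploit the current estimate
    latency = true_mean_latency[a] + rng.normal(scale=1.0)    # only this feedback is seen
    counts[a] += 1
    est_latency[a] += (latency - est_latency[a]) / counts[a]  # running-mean update
    total += latency

print("estimated latencies:", est_latency.round(2))
print("preferred option:", options[int(est_latency.argmin())])
print("average latency per slot:", round(total / 5000, 2))
```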

    Identifying smart design attributes for Industry 4.0 customization using a clustering Genetic Algorithm

    Get PDF
    Industry 4.0 aims at achieving mass customization at a mass production cost. A key component to realizing this is accurate prediction of customer needs and wants, which is, however, a challenging issue due to the lack of smart analytics tools. This paper investigates this issue in depth and then develops a predictive analytic framework for integrating cloud computing, big data analysis, business informatics, communication technologies, and digital industrial production systems. Computational intelligence in the form of a cluster k-means approach is used to manage relevant big data for feeding potential customer needs and wants to smart designs for targeted productivity and customized mass production. The identification of patterns from big data is achieved with cluster k-means and with the selection of optimal attributes using genetic algorithms. A car customization case study shows how it may be applied and where to assign new clusters with growing knowledge of customer needs and wants. This approach offers a number of features suitable to smart design in realizing Industry 4.0.
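
    A minimal sketch combining the two named ingredients on synthetic data (the 'customer' attributes, GA settings, and fitness choice are assumptions, not the paper's implementation): k-means clusters customers while a small genetic algorithm searches for the attribute subset that yields the best-separated clusters.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Toy sketch: k-means to find customer clusters, and a small genetic algorithm to
# pick which attributes to cluster on. Data and GA settings are illustrative only.
rng = np.random.default_rng(0)
n, n_attr = 300, 8
X = rng.normal(size=(n, n_attr))
X[:, 0] += np.repeat([0, 4, 8], n // 3)        # attributes 0 and 1 carry the real structure
X[:, 1] += np.repeat([0, 4, 8], n // 3)

def fitness(mask):
    """Cluster on the selected attributes and score the separation (higher is better)."""
    if mask.sum() < 2:
        return -1.0
    selected = X[:, mask.astype(bool)]
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(selected)
    return silhouette_score(selected, labels)

# Tiny genetic algorithm over binary attribute masks.
pop = rng.integers(0, 2, size=(10, n_attr))
for generation in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-4:]]                 # keep the 4 fittest masks
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(4)], parents[rng.integers(4)]
        cut = rng.integers(1, n_attr)
        child = np.concatenate([a[:cut], b[cut:]])         # one-point crossover
        flip = rng.random(n_attr) < 0.1                    # mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected attributes:", np.flatnonzero(best))
```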

    Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World

    Get PDF
    This report documents the program and the outcomes of GI-Dagstuhl Seminar 16394 "Software Performance Engineering in the DevOps World". The seminar addressed the problem of performance-aware DevOps. Both DevOps and performance engineering have been growing trends over the past one to two years, in no small part due to the rise in importance of identifying performance anomalies in the operations (Ops) of cloud and big data systems and feeding these back to the development (Dev). However, so far, the research community has treated software engineering, performance engineering, and cloud computing mostly as individual research areas. We aimed to identify cross-community collaboration, and to set the path for long-lasting collaborations towards performance-aware DevOps. The main goal of the seminar was to bring together young researchers (PhD students in a later stage of their PhD, as well as PostDocs or Junior Professors) in the areas of (i) software engineering, (ii) performance engineering, and (iii) cloud computing and big data to present their current research projects, to exchange experience and expertise, to discuss research challenges, and to develop ideas for future collaborations.