
    Using big data for customer centric marketing

    This chapter deliberates on “big data” and provides a short overview of business intelligence and emerging analytics. It underlines the importance of data for customer-centricity in marketing. The contribution contends that businesses ought to adopt marketing automation tools and apply them to create relevant, targeted customer experiences. Today’s businesses increasingly rely on digital media and mobile technologies, as on-demand, real-time marketing has become more personalised than ever. Companies and brands are therefore striving to nurture fruitful and long-lasting relationships with their customers. In a nutshell, this chapter explains why companies should recognise the value of data analysis and mobile applications as tools that drive consumer insights and engagement. It suggests that a strategic approach to big data could shape consumer preferences and may also help to improve organisational performance.

    Data granulation by the principles of uncertainty

    Research in granular modeling has produced a variety of mathematical models, such as intervals, (higher-order) fuzzy sets, rough sets, and shadowed sets, all of which are suitable for characterizing so-called information granules. Modeling the uncertainty of the input data is recognized as a crucial aspect of information granulation. Moreover, uncertainty is a well-studied concept in many mathematical settings, such as probability theory, fuzzy set theory, and possibility theory. This fact suggests that an appropriate quantification of the uncertainty expressed by the information granule model could be used to define an invariant property to be exploited in practical situations of information granulation. In this perspective, a procedure of information granulation is effective if the uncertainty conveyed by the synthesized information granule is in a monotonically increasing relation with the uncertainty of the input data. In this paper, we present a data granulation framework that elaborates on the principles of uncertainty introduced by Klir. Since uncertainty is a mesoscopic descriptor of systems and data, such principles can be applied regardless of the input data type and the specific mathematical setting adopted for the information granules. The proposed framework is conceived (i) to offer a guideline for the synthesis of information granules and (ii) to build a groundwork for comparing and quantitatively judging different data granulation procedures. To provide a suitable case study, we introduce a new data granulation technique based on the minimum sum of distances, which is designed to generate type-2 fuzzy sets. We analyze the procedure by performing different experiments on two distinct data types: feature vectors and labeled graphs. Results show that the uncertainty of the input data is suitably conveyed by the generated type-2 fuzzy set models. Comment: 16 pages, 9 figures, 52 references
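
A minimal sketch of the monotonicity requirement described above: a granulation procedure is considered effective if the uncertainty of the synthesized granule grows with the uncertainty of the input data. The snippet below is only a toy illustration under assumed simplifications, using plain interval granules and interval width as the uncertainty measure instead of the paper's type-2 fuzzy sets and Klir-style measures; all function names are invented for the example.

```python
import numpy as np

def interval_granule(data, coverage=0.9):
    """Build a simple interval granule covering the central `coverage`
    fraction of 1-D input data (a stand-in for richer granule models)."""
    lo = np.quantile(data, (1 - coverage) / 2)
    hi = np.quantile(data, 1 - (1 - coverage) / 2)
    return lo, hi

def granule_uncertainty(granule):
    """Quantify the uncertainty conveyed by an interval granule as its
    width (a basic non-specificity measure)."""
    lo, hi = granule
    return hi - lo

def input_uncertainty(data):
    """Quantify the uncertainty of the input data via the sample standard deviation."""
    return float(np.std(data))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Datasets with increasing spread: an effective granulation procedure should
    # yield granules whose uncertainty increases monotonically with it.
    for sigma in (0.5, 1.0, 2.0):
        data = rng.normal(0.0, sigma, size=1000)
        granule = interval_granule(data)
        print(f"sigma={sigma:.1f}  input={input_uncertainty(data):.3f}  "
              f"granule={granule_uncertainty(granule):.3f}")
```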

    Dealing with Data for RE: Mitigating Challenges while using NLP and Generative AI

    Across today’s dynamic business landscape, enterprises face an ever-increasing range of challenges. These include the constantly evolving regulatory environment, the growing demand for personalization within software applications, and the heightened emphasis on governance. In response to these multifaceted demands, large enterprises have been adopting automation that spans from the optimization of core business processes to the enhancement of customer experiences. Indeed, Artificial Intelligence (AI) has emerged as a pivotal element of modern software systems. In this context, data plays an indispensable role. AI-centric software systems based on supervised learning and operating at an industrial scale require large volumes of training data to perform effectively. Moreover, the incorporation of generative AI has led to a growing demand for adequate evaluation benchmarks. Our experience in this field has revealed that the need for large datasets for training and evaluation introduces a host of intricate challenges. This book chapter explores the evolving landscape of Software Engineering (SE) in general, and Requirements Engineering (RE) in particular, in this era marked by AI integration. We discuss challenges that arise while integrating Natural Language Processing (NLP) and generative AI into enterprise-critical software systems. The chapter provides practical insights, solutions, and examples to equip readers with the knowledge and tools necessary for effectively building solutions with NLP at their core. We also reflect on how these text data-centric tasks sit together with the traditional RE process, and we highlight new RE tasks that may be necessary for handling the increasingly important text data-centricity involved in developing software systems. Comment: 24 pages, 2 figures, to be published in NLP for Requirements Engineering Book

    Stochastic information granules extraction for graph embedding and classification

    Graphs are data structures able to efficiently describe real-world systems and, as such, have been extensively used in recent years by many branches of science, including machine learning engineering. However, the design of efficient graph-based pattern recognition systems is bottlenecked by the intrinsic problem of how to properly match two graphs. In this paper, we investigate a granular computing approach to the design of a general-purpose graph-based classification system. The overall framework relies on the extraction of meaningful pivotal substructures, on top of which an embedding space can be built and in which classification can be performed without limitations. Due to its importance, we address whether information can be preserved by performing a stochastic extraction on the training data instead of an exhaustive extraction procedure, which is likely to be unfeasible for large datasets. Tests on benchmark datasets show that stochastic extraction can lead to a meaningful set of pivotal substructures with a much lower memory footprint and overall computational burden, making the proposed strategies also suitable for dealing with big datasets. (Baldini, Luca; Martino, Alessio; Rizzi, Antonello)
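
A minimal sketch of the stochastic substructure extraction and embedding idea described above, assuming the networkx library; it uses random-walk-style sampling and binary substructure-occurrence indicators in place of the paper's granulation-based embedding, and all function names are illustrative.

```python
import random
from networkx.algorithms.isomorphism import GraphMatcher

def sample_substructures(graphs, n_samples=20, max_size=4, seed=0):
    """Stochastically draw small connected substructures from the training
    graphs instead of exhaustively enumerating every candidate."""
    rng = random.Random(seed)
    pivots = []
    for _ in range(n_samples):
        g = rng.choice(graphs)
        nodes = {rng.choice(list(g.nodes))}
        while len(nodes) < max_size:
            frontier = set().union(*(set(g.neighbors(n)) for n in nodes)) - nodes
            if not frontier:
                break
            nodes.add(rng.choice(list(frontier)))
        pivots.append(g.subgraph(nodes).copy())
    return pivots

def embed(graph, pivots):
    """Map a graph to a vector: component i is 1 if pivot i occurs in the
    graph as an isomorphic substructure, 0 otherwise."""
    return [int(GraphMatcher(graph, p).subgraph_is_isomorphic()) for p in pivots]
```

Once every graph is embedded this way, any standard vector-based classifier can be trained on the resulting representations.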

    Impact of Service-Centric Computing on Business and Education

    Service-centric computing is one of the new IT paradigms that are transforming the way corporations organize their information resources. However, research and teaching activities in the IS community are lagging behind the recent advances in the corporate world. This paper investigates the impact of service-centric computing on business and education. We first examine the transformative impacts of service-centric computing on business and education in the foreseeable future. Then, we discuss opportunities and challenges in new research directions and instructional innovations with respect to service-centric computing. We believe that this article will serve as a good starting point for our IS colleagues to explore this exciting and emerging area of research and teaching.

    Justified granulation aided noninvasive liver fibrosis classification system

    According to the World Health Organization (WHO), 130–150 million people globally are chronically infected with the hepatitis C virus. The virus is responsible for chronic hepatitis that may ultimately cause liver cirrhosis and death. The disease is progressive; however, antiviral treatment may slow down or stop its development. It is therefore important to estimate the severity of liver fibrosis for diagnostic, therapeutic and prognostic purposes. Liver biopsy provides a highly accurate diagnosis, but it is a painful and invasive procedure. Recently, there has been a surge of non-invasive tests (biological and physical) aiming to determine the severity of liver fibrosis, yet the commonly used FibroTest®, according to independent research, may in some cases have an accuracy lower than 50%. In this paper, a data mining and classification technique is proposed to determine the stage of liver fibrosis using easily accessible laboratory data. Methods: The research was carried out on archival records of routine laboratory blood tests (morphology, coagulation, biochemistry, protein electrophoresis), with histopathology records of liver biopsy used as the reference. As a result, a granular model is proposed that contains a series of intervals representing the influence of individual blood attributes on the liver fibrosis stage. The model determines the final diagnosis for a patient using an aggregation method and a voting procedure, and it is robust to missing or corrupted data. Results: The results were obtained on data from 290 patients with hepatitis C virus collected over 6 years. The model has been validated using training and test data. The overall accuracy of the solution is 67.9%. The intermediate liver fibrosis stages are hard to distinguish, due to the limitations of biopsy itself. Additionally, the method was verified against a dataset obtained from 365 patients with liver disease of various etiologies, and the model proved to be robust to new data. Notably, the misclassification error rate between the first stage and the last stage is below 6.5% for all analyzed datasets. Conclusions: The proposed system supports the physician in determining the stage of liver fibrosis in chronic hepatitis C. The biggest advantage of the solution is its human-centric approach based on intervals, which can be verified by a specialist before the final decision is given. Moreover, it is robust to missing data. The system can be used as a powerful support tool for diagnosis in real treatment.
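
The interval-and-voting scheme described above can be sketched as follows. This is only an illustrative outline: the quantile-based construction of the intervals and the way attributes and stages are represented here are assumptions for the example, not the paper's exact procedure, and all names are hypothetical.

```python
import numpy as np
from collections import Counter, defaultdict

def learn_intervals(samples, coverage=0.8):
    """samples: list of (attributes_dict, stage) pairs. For each
    (attribute, stage) pair, learn an interval covering the central
    `coverage` fraction of the training values observed for that stage."""
    values = defaultdict(list)
    for attrs, stage in samples:
        for name, v in attrs.items():
            values[(name, stage)].append(v)
    lo_q, hi_q = (1 - coverage) / 2, 1 - (1 - coverage) / 2
    return {key: (np.quantile(v, lo_q), np.quantile(v, hi_q))
            for key, v in values.items()}

def diagnose(attrs, intervals, stages):
    """Each available attribute votes for every stage whose interval contains
    its value; missing attributes simply never vote, which keeps the scheme
    robust to incomplete records. The most-voted stage wins."""
    votes = Counter()
    for name, v in attrs.items():
        for stage in stages:
            lo, hi = intervals.get((name, stage), (np.inf, -np.inf))
            if lo <= v <= hi:
                votes[stage] += 1
    return votes.most_common(1)[0][0] if votes else None
```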

    Combining heterogeneous classifiers via granular prototypes.

    In this study, a novel framework for combining multiple classifiers in an ensemble system is introduced. Here we exploit the concept of information granules to construct granular prototypes for each class from the outputs of an ensemble of base classifiers. In the proposed method, the uncertainty in the outputs of the base classifiers on the training observations is captured by an interval-based representation. To predict the class label for a new observation, we first determine the distances between the base classifiers' output for this observation and the class prototypes; the predicted class label is then the one associated with the shortest distance. In the experimental study, we combine several learning algorithms to build the ensemble system and conduct experiments on the UCI, colon cancer, and selected CLEF2009 datasets. The experimental results demonstrate that the proposed framework outperforms several benchmark algorithms, including two trainable combining methods (Decision Template and Two Stages Ensemble System), AdaBoost, Random Forest, L2-loss Linear Support Vector Machine, and Decision Tree.
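
A minimal sketch of the granular-prototype idea described above, assuming per-class interval prototypes built as per-dimension min/max bounds over the base classifiers' outputs; the paper's prototype construction and distance may differ, and the function names are illustrative.

```python
import numpy as np

def build_prototypes(meta_outputs, labels):
    """meta_outputs: (n_samples, n_dims) array of concatenated base-classifier
    support outputs; labels: one class label per sample. For each class, build
    an interval prototype as per-dimension [min, max] bounds, capturing the
    uncertainty of the base classifiers on that class."""
    meta_outputs = np.asarray(meta_outputs, dtype=float)
    labels = np.asarray(labels)
    return {c: (meta_outputs[labels == c].min(axis=0),
                meta_outputs[labels == c].max(axis=0))
            for c in np.unique(labels)}

def interval_gap(x, lo, hi):
    """Per-dimension distance from a point to an interval: zero inside,
    otherwise the gap to the nearest bound."""
    return np.maximum(lo - x, 0.0) + np.maximum(x - hi, 0.0)

def predict(meta_output, prototypes):
    """Assign the class whose granular prototype is closest to the new
    observation's vector of base-classifier outputs."""
    x = np.asarray(meta_output, dtype=float)
    dists = {c: np.linalg.norm(interval_gap(x, lo, hi))
             for c, (lo, hi) in prototypes.items()}
    return min(dists, key=dists.get)
```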

    Information protection in content-centric networks

    Information-centric networks have distinct advantages with regard to securing sensitive content as a result of their new approaches to managing data in potential future internet architectures. Because of their data-centric perspective, these systems provide the opportunity to embed policy-centric content management components that can address looming problems in information distribution that both companies and federal agencies are beginning to face with respect to sensitive content. This information-centricity facilitates the application of security techniques that are very difficult, and in some cases impossible, to apply in traditional packetized networks. This work addresses the current state of the art in both these kinds of cross-domain systems and information-centric networking in general. It then covers other related work, outlining why information-centric networks are more powerful than traditional packetized networks with regard to usage management. Next, it introduces a taxonomy of policy-centric, usage-managed information network systems and an associated methodology for evaluating the individual taxonomic elements. Finally, it presents an experimental evaluation of the defined architectural options and compares the experimental results with the anticipated outcomes.

    Aggregation of classifiers: a justifiable information granularity approach.

    In this paper, we introduce a new approach to combining multiple classifiers in a heterogeneous ensemble system. Instead of combining numerical membership values, we construct interval membership values for each class prediction from the meta-data of the observations using the concept of information granules. In the proposed method, the uncertainty (diversity) of the predictions produced by the base classifiers is quantified by interval-based information granules. The decision model is then generated by considering both the bounds and the lengths of the intervals. Extensive experimentation on the UCI datasets has demonstrated the superior performance of our algorithm over other algorithms, including six fixed combining methods, one trainable combining method, AdaBoost, bagging, and random subspace.
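
The principle of justifiable information granularity is commonly formulated as building an interval that maximizes the product of coverage (how much of the data falls inside) and specificity (how narrow the interval is). The one-dimensional sketch below follows that common formulation and is not necessarily the exact construction used in the paper; the names and the candidate-search resolution are assumptions.

```python
import numpy as np

def justifiable_interval(values, resolution=100):
    """Build an interval around the median of 1-D values by maximizing
    coverage (fraction of points inside) times specificity (1 minus the
    normalized half-width), searching each bound independently."""
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    span = float(values.max() - values.min()) or 1.0

    def best_bound(candidates, upper):
        best, best_score = med, -1.0
        for b in candidates:
            if upper:
                coverage = np.mean((values >= med) & (values <= b))
                specificity = 1.0 - (b - med) / span
            else:
                coverage = np.mean((values >= b) & (values <= med))
                specificity = 1.0 - (med - b) / span
            score = coverage * specificity
            if score > best_score:
                best, best_score = b, score
        return best

    upper = best_bound(np.linspace(med, values.max(), resolution), upper=True)
    lower = best_bound(np.linspace(values.min(), med, resolution), upper=False)
    return lower, upper
```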