
    A probabilistic analysis of selected notions of iterated conditioning under coherence

    It is well known that basic conditionals satisfy some desirable logical and probabilistic properties, such as the compound probability theorem. However, checking the validity of these properties becomes trickier when we switch to compound and iterated conditionals. Herein we consider de Finetti's notion of conditional both as a three-valued object and as a conditional random quantity in the betting framework. We begin by recalling the notions of conjunction and disjunction among conditionals in selected trivalent logics. Then we analyze the notions of iterated conditioning in the frameworks of the specific three-valued logics introduced by Cooper-Calabrese, by de Finetti, and by Farrell. By computing some probability propagation rules we show that the compound probability theorem and other important properties are not always preserved by these formulations. Then, for each trivalent logic, we introduce an iterated conditional as a suitable random quantity which satisfies the compound prevision theorem as well as some other desirable properties. We also check the validity of two generalized versions of Bayes' Rule for iterated conditionals. We study the p-validity of generalized versions of Modus Ponens and two-premise centering for iterated conditionals. Finally, we observe that all the basic properties are satisfied within the framework of iterated conditioning followed in recent papers by Gilio and Sanfilippo in the setting of conditional random quantities.
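    The three-valued conditional and its betting-framework counterpart can be sketched in a few lines (illustrative Python, not code from the paper; the joint distribution below is hypothetical):

```python
# A hedged sketch of de Finetti's three-valued conditional: B|A is true when
# A and B hold, false when A holds but B fails, and "void" when A fails.
# In the betting framework the void case takes the value of the conditional
# probability itself, giving a conditional random quantity.

TRUE, FALSE, VOID = 1, 0, None

def definetti_conditional(a: bool, b: bool):
    """Three-valued truth value of B|A."""
    if not a:
        return VOID          # bet called off
    return TRUE if b else FALSE

def conditional_random_quantity(a: bool, b: bool, p: float) -> float:
    """Numeric value of B|A in the betting framework: A*B + p*(1-A)."""
    return (1.0 if (a and b) else 0.0) + p * (0.0 if a else 1.0)

def prevision(joint, f):
    """Expectation of f(A, B) under a joint distribution on (A, B)."""
    return sum(prob * f(a, b) for (a, b), prob in joint.items())

# Checking the compound prevision theorem P((B|A) * A) = P(B|A) * P(A)
# numerically on a hypothetical joint distribution:
joint = {(True, True): 0.3, (True, False): 0.2,
         (False, True): 0.1, (False, False): 0.4}
p_a = 0.5
p_b_given_a = 0.3 / 0.5
lhs = prevision(joint, lambda a, b:
                conditional_random_quantity(a, b, p_b_given_a) * (1 if a else 0))
rhs = p_b_given_a * p_a
assert abs(lhs - rhs) < 1e-12
```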

    The Supply Chain Management for Perishable Products: A Literature Review

    In recent years, food loss has emerged as a global concern, with research indicating that between 20% and 60% of total production is lost within the food supply chain. Consequently, both researchers and practitioners have increasingly directed their attention towards maximizing the availability of food products for society. As a result, researchers have employed various operations research tools to optimize the food supply chain and facilitate decision-making processes. This paper provides a literature review of modeling and optimization approaches in perishable supply chain management, with a specific focus on minimizing losses throughout the supply chain. Our primary emphasis is on perishable foods, and we analyze selected research papers based on their objectives, employed models, and solution approaches. Through this analysis, we identify potential avenues for future research in the field of perishable product supply chains, with the overarching goal of reducing losses along the entire supply chain.
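    As a toy illustration of the kind of optimization model surveyed (not taken from any reviewed paper; all quantities are hypothetical), a newsvendor-style choice of order quantity trades purchase cost against spoilage and lost sales:

```python
# Choose an order quantity Q for a perishable product to minimise expected
# cost = purchase cost + spoilage loss + lost-sales penalty, given a
# discrete demand distribution.  All numbers below are hypothetical.

def expected_cost(q, demand_dist, unit_cost, spoil_loss, shortage_pen):
    cost = 0.0
    for demand, prob in demand_dist.items():
        spoiled = max(q - demand, 0)      # unsold units perish
        short = max(demand - q, 0)        # unmet demand
        cost += prob * (unit_cost * q + spoil_loss * spoiled + shortage_pen * short)
    return cost

demand_dist = {80: 0.2, 100: 0.5, 120: 0.3}   # hypothetical demand scenarios
best_q = min(range(0, 201),
             key=lambda q: expected_cost(q, demand_dist, 1.0, 2.0, 5.0))
print(best_q)   # → 100
```

    With these costs the critical ratio (5 − 1) / (5 − 1 + 1 + 2) ≈ 0.57 falls between the 0.2 and 0.7 demand quantiles, so ordering for the median scenario is optimal.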

    Online semi-supervised learning in non-stationary environments

    Existing Data Stream Mining (DSM) algorithms assume the availability of labelled and balanced data, immediately or after some delay, to extract worthwhile knowledge from continuous and rapid data streams. However, in many real-world applications such as robotics, weather monitoring, fraud detection systems, cyber security, and computer network traffic flow, an enormous amount of high-speed data is generated by Internet of Things sensors and real-time data on the Internet. Manually labelling these data streams is not practical because of the time required and the need for domain expertise. Another challenge is learning under Non-Stationary Environments (NSEs), which occur due to changes in the data distributions of a set of input variables and/or class labels. The problem of Extreme Verification Latency (EVL) under NSEs is referred to as an Initially Labelled Non-Stationary Environment (ILNSE). This is a challenging task because the learning algorithms have no direct access to the true class labels when the concept evolves. Several approaches exist that deal with NSE and EVL in isolation, but few algorithms address both issues simultaneously. This research responds directly to the ILNSE challenge by proposing two novel algorithms: the “Predictor for Streaming Data with Scarce Labels” (PSDSL) and the Heterogeneous Dynamic Weighted Majority (HDWM) classifier. PSDSL is an Online Semi-Supervised Learning (OSSL) method for real-time DSM and is closely related to label scarcity issues in online machine learning. The key capabilities of PSDSL include learning from a small amount of labelled data in an incremental or online manner and being available to predict at any time. To achieve this, PSDSL utilises both labelled and unlabelled data to train its prediction models, meaning it continuously learns from incoming data and updates the model as new labelled or unlabelled data becomes available over time.
    Furthermore, it can predict under NSE conditions despite the scarcity of class labels. PSDSL is built on top of the HDWM classifier, which preserves the diversity of the classifiers. PSDSL and HDWM can intelligently switch and adapt to the conditions. PSDSL adapts its learning state between self-learning, micro-clustering and CGC, whichever approach is beneficial, based on the characteristics of the data stream. HDWM makes use of “seed” learners of different types in an ensemble to maintain its diversity. Ensembles are simply combinations of predictive models grouped to improve on the predictive performance of a single classifier. PSDSL is empirically evaluated against COMPOSE, LEVELIW, SCARGC and MClassification on benchmark NSE datasets as well as Massive Online Analysis (MOA) data streams and real-world datasets. The results showed that PSDSL performed significantly better than existing approaches on most real-time data streams, including randomised data instances. PSDSL also performed significantly better than ‘Static’, i.e. a classifier that is not updated after being trained on the first examples in the data stream. When applied to MOA-generated data streams, PSDSL ranked highest (1.5) and thus performed significantly better than SCARGC, while SCARGC performed the same as Static. PSDSL achieved better average prediction accuracies in a shorter time than SCARGC. The HDWM algorithm is evaluated on artificial and real-world data streams against existing well-known approaches such as the heterogeneous Weighted Majority Algorithm (WMA) and the homogeneous Dynamic Weighted Majority (DWM) algorithm. The results showed that HDWM performed significantly better than WMA and DWM. Also, when recurring concept drifts were present, the predictive performance of HDWM showed an improvement over DWM.
    In both drift and real-world streams, significance tests and post hoc comparisons found significant differences between algorithms: HDWM performed significantly better than DWM and WMA when applied to MOA data streams and four real-world datasets (Electric, Spam, Sensor and Forest Cover). The seeding mechanism and the dynamic inclusion of new base learners in the HDWM algorithm benefit from both forgetting and retaining models. The algorithm also provides the independence of selecting the optimal base classifier in its ensemble depending on the problem. A new approach, Envelope-Clustering, is introduced to resolve cluster overlap conflicts during the cluster labelling process. In this process, PSDSL transforms the centroid information of micro-clusters into micro-instances and generates new clusters called Envelopes. The nearest envelope clusters assist the conflicted micro-clusters and successfully guide the cluster labelling process after concept drifts in the absence of true class labels. PSDSL has been evaluated on the real-world ‘keystroke dynamics’ problem, and the results show that PSDSL achieved higher prediction accuracy (85.3%) than SCARGC (81.6%), while Static (49.0%) significantly degraded performance due to changes in the users’ typing patterns. Furthermore, the predictive accuracies of SCARGC were found to fluctuate widely (between 41.1% and 81.6%) depending on the value of the parameter ‘k’ (number of clusters), while PSDSL automatically determines the best value for this parameter.
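    The basic dynamic-weighted-majority scheme that HDWM builds on can be sketched as follows (a simplified illustration; the thesis's seeding mechanism and heterogeneous base learners are not shown):

```python
# A simplified Dynamic-Weighted-Majority-style ensemble: experts that err are
# penalised, low-weight experts are pruned, and a fresh expert is added when
# the ensemble as a whole makes a mistake.  Base learners are modelled here
# as plain callables x -> label for brevity.

class SimpleDWM:
    def __init__(self, make_learner, beta=0.5, theta=0.01):
        self.make_learner = make_learner   # factory for a new base learner
        self.beta = beta                   # weight penalty on a mistake
        self.theta = theta                 # pruning threshold
        self.experts = [(make_learner(), 1.0)]

    def predict(self, x):
        votes = {}
        for learner, w in self.experts:
            label = learner(x)
            votes[label] = votes.get(label, 0.0) + w
        return max(votes, key=votes.get)

    def update(self, x, y):
        # Penalise experts that were wrong, then normalise and prune.
        self.experts = [(l, w * (self.beta if l(x) != y else 1.0))
                        for l, w in self.experts]
        total = sum(w for _, w in self.experts)
        self.experts = [(l, w / total) for l, w in self.experts
                        if w / total >= self.theta]
        if self.predict(x) != y:           # global mistake: add a fresh expert
            self.experts.append((self.make_learner(), 1.0))

# Demo with a trivial base-learner factory that always predicts class 0:
always_zero = lambda: (lambda x: 0)
ens = SimpleDWM(always_zero)
ens.update([1.0], 1)   # wrong prediction -> penalise and grow the ensemble
```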

    Fuzzy Norm-Explicit Product Quantization for Recommender Systems

    As data resources grow, providing recommendations that best meet user demands has become a vital requirement in business and in life for overcoming the information overload problem. However, building a system that suggests relevant recommendations has always been a point of debate. One of the most cost-efficient techniques for producing relevant recommendations at low complexity is Product Quantization (PQ), and PQ approaches have continued to develop in recent years. The crucial challenge for such systems is improving product quantization performance in terms of recall without compromising complexity. This makes the algorithm suitable for problems that require retrieving the greatest possible number of potentially relevant items without disregarding others, at high speed and low cost to keep up with traffic. This is the case for online shops, where on-purpose recommendations are important although customers may be open to exploring other products. A recent line of work exploits the notion of norm sub-vectors encoded in product quantizers. This research proposes a fuzzy approach to norm-based product quantization. Type-2 Fuzzy Sets (T2FSs) define the codebook, allowing sub-vectors to be associated with more than one element of the codebook, and the norm calculus is then resolved by means of integration. Our method improves the recall measure, making the algorithm suitable for problems that require querying as many potentially relevant items as possible without disregarding others. The proposed approach is tested on three public recommender benchmark datasets and compared against seven PQ approaches for Maximum Inner-Product Search (MIPS). The proposed method outperforms PQ approaches such as NEQ, PQ, and RQ by up to +6%, +5%, and +8%, achieving recalls of 94%, 69%, and 59% on the Netflix, Audio, and Cifar60k datasets, respectively.
    Moreover, computing time and complexity nearly equal those of the most computationally efficient existing PQ method in the state of the art.
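    Vanilla product quantization, which the proposed fuzzy norm-explicit variant builds on, can be sketched as follows (toy hand-written codebooks, pure Python; the T2FS machinery is not shown):

```python
# Minimal PQ: split each vector into sub-vectors, replace every sub-vector by
# the index of its nearest codeword, and answer queries with per-subspace
# lookup tables (asymmetric distance computation).

def split(vec, m):
    """Split a vector into m equal sub-vectors."""
    d = len(vec) // m
    return [vec[i * d:(i + 1) * d] for i in range(m)]

def sqdist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def encode(vec, codebooks):
    """Replace each sub-vector by the index of its nearest codeword."""
    return [min(range(len(cb)), key=lambda j: sqdist(sub, cb[j]))
            for sub, cb in zip(split(vec, len(codebooks)), codebooks)]

def asymmetric_sqdist(query, code, codebooks):
    """Distance from an uncompressed query to a PQ code, one lookup per subspace."""
    return sum(sqdist(sub, cb[j])
               for sub, j, cb in zip(split(query, len(codebooks)), code, codebooks))

# Two subspaces, two codewords each (hypothetical codebooks):
codebooks = [[[0.0, 0.0], [1.0, 1.0]],
             [[0.0, 1.0], [1.0, 0.0]]]
code = encode([0.9, 1.1, 0.1, 0.9], codebooks)
d = asymmetric_sqdist([0.9, 1.1, 0.1, 0.9], code, codebooks)
print(code)   # → [1, 0]
```

    In practice the codebooks are learned per subspace (typically by k-means), and norm-explicit variants quantize vector norms separately from directions.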

    Life settlement pricing with fuzzy parameters

    Existing literature asserts that the growth of life settlement (LS) markets, where they exist, is hampered by limited policyholder participation, and suggests that, to foster this growth, appropriate pricing of LS transactions is crucial. The pricing of LSs relies on quantifying two key variables: the insured's mortality multiplier and the internal rate of return (IRR). However, the available information on these parameters is often scarce and vague. To address this issue, this article proposes a novel framework that models these variables using triangular fuzzy numbers (TFNs). This modelling approach aligns with how mortality multiplier and IRR data are typically provided in insurance markets and has the advantage of offering a natural interpretation for practitioners. When both the mortality multiplier and the IRR are represented as TFNs, the resulting LS price becomes a fuzzy number (FN) that no longer retains the triangular shape. Therefore, the paper introduces three alternative triangular approximations to simplify computation and enhance interpretation of the price. Additionally, six criteria are proposed to evaluate the effectiveness of each approximation method. These criteria go beyond the typical approach of assessing the quality of the approximation to the FN itself; they also consider usability and comprehensibility for financial analysts with no prior knowledge of FNs. In summary, the framework presented in this paper represents a significant advancement in LS pricing. By incorporating TFNs, offering several triangular approximations and proposing goodness criteria for them, it addresses the challenges posed by limited and vague data while also considering the practical needs of industry practitioners.
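    The effect of fuzzy inputs on the price can be illustrated with a minimal TFN sketch (hypothetical toy price function, not the paper's LS pricing model):

```python
# A triangular fuzzy number is fully described by three vertices; its
# alpha-cuts are nested intervals.  Pushing a TFN through a nonlinear price
# function yields a fuzzy number whose sides are no longer straight lines,
# which is why triangular approximations of the price are useful.

from dataclasses import dataclass

@dataclass
class TFN:
    left: float    # smallest plausible value
    peak: float    # most plausible value
    right: float   # largest plausible value

    def alpha_cut(self, alpha):
        """Interval of values with membership >= alpha, 0 <= alpha <= 1."""
        return (self.left + alpha * (self.peak - self.left),
                self.right - alpha * (self.right - self.peak))

def price(irr):
    """Toy present value of 100 received in 10 years at rate irr."""
    return 100.0 / (1.0 + irr) ** 10

irr = TFN(0.04, 0.06, 0.09)   # hypothetical fuzzy IRR

# Image of the support under a decreasing price function:
lo, hi = irr.alpha_cut(0.0)
print(price(hi), price(lo))
# A naive triangular approximation keeps only the images of the vertices:
approx = TFN(price(irr.right), price(irr.peak), price(irr.left))
```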

    Current and Future Challenges in Knowledge Representation and Reasoning

    Knowledge Representation and Reasoning is a central, longstanding, and active area of Artificial Intelligence. Over the years it has evolved significantly; more recently it has been challenged and complemented by research in areas such as machine learning and reasoning under uncertainty. In July 2022, a Dagstuhl Perspectives workshop was held on Knowledge Representation and Reasoning. The goal of the workshop was to describe the state of the art in the field, including its relation with other areas, its shortcomings and strengths, together with recommendations for future progress. We developed this manifesto based on the presentations, panels, working groups, and discussions that took place at the Dagstuhl workshop. It is a declaration of our views on Knowledge Representation: its origins, goals, milestones, and current foci; its relation to other disciplines, especially to Artificial Intelligence; and on its challenges, along with key priorities for the next decade.

    MalBoT-DRL: Malware botnet detection using deep reinforcement learning in IoT networks

    In the dynamic landscape of cyber threats, multi-stage malware botnets have surfaced as a significant concern. These sophisticated threats can exploit Internet of Things (IoT) devices to undertake an array of cyberattacks, ranging from basic infections to complex operations such as phishing, cryptojacking, and distributed denial of service (DDoS) attacks. Existing machine learning solutions are often constrained by their limited generalizability across datasets and their inability to adapt to the mutable patterns of malware attacks in real-world environments, a challenge known as model drift. This limitation highlights the pressing need for adaptive Intrusion Detection Systems (IDS) capable of adjusting to evolving threat patterns and to new or unseen attacks. This paper introduces MalBoT-DRL, a robust malware botnet detector using deep reinforcement learning. Designed to detect botnets throughout their entire lifecycle, MalBoT-DRL has better generalizability and offers a resilient solution to model drift. The model integrates damped incremental statistics with an attention reward mechanism, a combination that has not been extensively explored in the literature. This integration enables MalBoT-DRL to dynamically adapt to the ever-changing malware patterns within IoT environments. The performance of MalBoT-DRL has been validated via trace-driven experiments using two representative datasets, MedBIoT and N-BaIoT, resulting in exceptional average detection rates of 99.80% and 99.40% in the early and late detection phases, respectively. To the best of our knowledge, this is one of the first studies to investigate the efficacy of reinforcement learning in enhancing the generalizability of IDS.
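    Damped incremental statistics, one of the ingredients above, can be sketched as follows (a simplified illustration; the decay scheme and parameter names are assumptions, not the paper's exact formulation):

```python
# Running weight / mean / variance of a traffic feature where the influence
# of old observations decays exponentially with elapsed time, so the
# statistics track the current behaviour of a stream in O(1) memory.

class DampedStats:
    def __init__(self, lam=0.1):
        self.lam = lam        # decay rate (assumed; higher forgets faster)
        self.w = 0.0          # damped count
        self.ls = 0.0         # damped linear sum
        self.ss = 0.0         # damped squared sum
        self.t_last = None

    def insert(self, x, t):
        if self.t_last is not None:
            decay = 2.0 ** (-self.lam * (t - self.t_last))
            self.w, self.ls, self.ss = decay * self.w, decay * self.ls, decay * self.ss
        self.t_last = t
        self.w += 1.0
        self.ls += x
        self.ss += x * x

    def mean(self):
        return self.ls / self.w

    def variance(self):
        return self.ss / self.w - self.mean() ** 2

ds = DampedStats(lam=0.1)
ds.insert(2.0, t=0.0)
ds.insert(4.0, t=0.0)   # same timestamp: no decay, ordinary mean/variance
print(ds.mean(), ds.variance())
```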

    AN AUTOMATED, DEEP LEARNING APPROACH TO SYSTEMATICALLY & SEQUENTIALLY DERIVE THREE-DIMENSIONAL KNEE KINEMATICS DIRECTLY FROM TWO-DIMENSIONAL FLUOROSCOPIC VIDEO

    Total knee arthroplasty (TKA), also known as total knee replacement, is a surgical procedure to replace damaged parts of the knee joint with artificial components. It aims to relieve pain and improve knee function. TKA can improve knee kinematics and reduce pain, but it may also cause altered joint mechanics and complications. Proper patient selection, implant design, and surgical technique are important for successful outcomes. Kinematics analysis plays a vital role in TKA by evaluating knee joint movement and mechanics. It helps assess surgery success, guides implant and technique selection, informs implant design improvements, detects problems early, and improves patient outcomes. However, evaluating the kinematics of patients using conventional approaches presents significant challenges. The reliance on 3D CAD models limits applicability, as not all patients have access to such models. Moreover, the manual and time-consuming nature of the process makes it impractical for timely evaluations. Furthermore, the evaluation is confined to laboratory settings, limiting its feasibility in various locations. This study aims to address these limitations by introducing a new methodology for analyzing in vivo 3D kinematics using an automated deep learning approach. The proposed methodology involves several steps, starting with image segmentation of the femur and tibia using a robust deep learning approach. Subsequently, 3D reconstruction of the implants is performed, followed by automated registration. Finally, efficient knee kinematics modeling is conducted. The final kinematics results showed potential for reducing workload and increasing efficiency. The algorithms demonstrated high speed and accuracy, which could enable real-time TKA kinematics analysis in the operating room or clinical settings. 
    Unlike previous studies that relied on sponsorships and limited patient samples, this algorithm allows the analysis of any patient, anywhere, and at any time, accommodating larger subject populations and complete fluoroscopic sequences. Although further improvements can be made, the study showcases the potential of machine learning to expand access to TKA analysis tools and advance biomedical engineering applications.
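    The registration step can be illustrated with a toy 2D rigid alignment (brute-force search over a small grid; real 2D/3D fluoroscopic registration optimises a full six-degree-of-freedom implant pose against the X-ray image):

```python
# Find a rigid 2D transform that best aligns projected model points with
# segmented contour points by nearest-neighbour least squares.

import math

def transform(points, angle, tx, ty):
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

def cost(model, target, angle, tx, ty):
    """Sum of squared nearest-neighbour distances after transforming."""
    moved = transform(model, angle, tx, ty)
    return sum(min((mx - px) ** 2 + (my - py) ** 2 for px, py in target)
               for mx, my in moved)

model = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
target = transform(model, 0.0, 2.0, 3.0)       # ground truth: pure translation

best = min(((a / 10, tx, ty)
            for a in range(-3, 4)
            for tx in range(-5, 6)
            for ty in range(-5, 6)),
           key=lambda p: cost(model, target, *p))
print(best)   # → (0.0, 2, 3)
```

    Gradient-based optimisers replace the grid search in practice, and the cost is computed against image intensities or silhouettes rather than sparse points.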

    Fuzzy modeling to define corrosivity potential in oil pipelines

    In this work, a fuzzy logic model was developed using the Fuzzy Logic Toolbox™ of the MATLAB® software for monitoring the corrosivity potential in oil pipelines whose corrosion mechanism is predominantly microbiological. Using operational parameters, the model presents itself as an alternative to conventional monitoring methods, allowing the corrosion rate in the pipeline, and therefore its corrosivity potential, to be inferred. The model was applied to an oil pipeline and its results were compared with those of conventional monitoring methods. Analysis of the results showed that the model can be used as a monitoring method for pipelines with this predominant corrosion mechanism, helping to manage the integrity of oil pipelines.
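    A Mamdani-style inference of the kind the toolbox performs can be sketched in pure Python (the membership functions, universes, and rules below are hypothetical illustrations, not the model's actual parameters):

```python
# Fuzzify crisp operational inputs with triangular membership functions,
# fire simple AND rules with min, and defuzzify by a weighted average.

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def corrosivity(sulfate_reducers, flow_velocity):
    """Infer a corrosivity potential in [0, 1] from two toy inputs."""
    # Fuzzify (hypothetical universes: bacteria count 0-100, velocity 0-2 m/s)
    bacteria_high = tri(sulfate_reducers, 40, 100, 160)
    flow_low = tri(flow_velocity, -1.0, 0.0, 1.0)
    # Rules (min for AND), each firing toward a crisp output level
    rule_high = min(bacteria_high, flow_low)        # stagnant + high SRB -> high
    rule_low = 1.0 - max(bacteria_high, flow_low)   # neither condition -> low
    # Weighted-average defuzzification over output levels 0.9 (high), 0.1 (low)
    total = rule_high + rule_low
    return (0.9 * rule_high + 0.1 * rule_low) / total if total else 0.5

print(round(corrosivity(80, 0.2), 3))
```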