
    C3Ro: An efficient mining algorithm of extended-closed contiguous robust sequential patterns in noisy data

    Sequential pattern mining has been the focus of many works, but mining large databases remains challenging for both the efficiency of the mining and the apprehensibility of its resulting set. To overcome these issues, the most promising direction taken by the literature relies on the use of constraints, including the well-known closedness constraint. However, such mining is not resistant to noise, a characteristic of most real-world data. The main research question raised in this paper is thus: how to efficiently mine an apprehensible set of sequential patterns from noisy data? To address this research question, we introduce 1) two original constraints designed for the mining of noisy data: the robustness and the extended-closedness constraints, and 2) a generic pattern mining algorithm, C3Ro, designed to mine a wide range of sequential patterns, ranging from closed or maximal contiguous sequential patterns to closed or maximal regular sequential patterns. C3Ro is dedicated to practitioners and is able to manage their multiple constraints. C3Ro is also the first sequential pattern mining algorithm that is this generic and parameterizable. Extensive experiments reveal the high efficiency of C3Ro, especially on large datasets, compared with well-known algorithms from the literature. Additional experiments were conducted on a noisy real-world dataset of job offers, with the goal of mining activities. This experiment offers a more thorough insight into the C3Ro algorithm: job market experts confirm that the constraints we introduced have a significant positive impact on the apprehensibility of the set of mined activities.
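
    The paper defines its robustness and extended-closedness constraints formally; purely as an illustration of what noise-tolerant contiguous pattern mining looks like, the minimal level-wise sketch below tolerates a fraction of mismatched symbols per occurrence. The `mismatch_ratio` knob and the naive candidate generation are our assumptions, not C3Ro's actual definitions or pruning strategy.

        def occurs(pattern, seq, mismatch_ratio):
            """True if `pattern` matches some contiguous window of `seq`,
            tolerating a fraction of mismatched symbols (our stand-in for
            'robustness'); the budget scales with pattern length."""
            k = len(pattern)
            budget = int(k * mismatch_ratio)
            for i in range(len(seq) - k + 1):
                if sum(a != b for a, b in zip(pattern, seq[i:i + k])) <= budget:
                    return True
            return False

        def mine_contiguous(sequences, min_support, max_len, mismatch_ratio=0.0):
            """Level-wise miner: grow contiguous patterns one symbol at a time,
            keeping those whose noise-tolerant support clears the threshold."""
            alphabet = sorted({s for seq in sequences for s in seq})
            frequent, level = [], [(s,) for s in alphabet]
            while level and len(level[0]) <= max_len:
                counts = {p: sum(occurs(p, s, mismatch_ratio) for s in sequences)
                          for p in level}
                kept = [p for p, c in counts.items() if c >= min_support]
                frequent += [(p, counts[p]) for p in kept]
                level = [p + (a,) for p in kept for a in alphabet]
            return frequent

        # "abxbc" and "abayc" are noisy variants of "ababc":
        logs = ["ababc", "abxbc", "abayc"]
        print(mine_contiguous(logs, min_support=2, max_len=5, mismatch_ratio=0.25))

    A closedness-style post-pass would then keep only patterns that have no extension with equal support, shrinking the result set for apprehensibility.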

    Generating Knowledge in Maintenance from Experience Feedback

    Knowledge is nowadays considered a significant source of performance improvement, but may be difficult to identify, structure, analyse and reuse properly. A possible source of knowledge lies in the data and information stored in the various modules of industrial information systems, such as CMMS (Computerized Maintenance Management Systems) for maintenance. In that context, the main objective of this paper is to propose a framework for managing and generating knowledge from information on past experiences, in order to improve decisions related to the maintenance activity. To that end, we suggest an original Experience Feedback process dedicated to maintenance, which makes it possible to capitalize on past interventions by i) formalizing the domain knowledge and experiences using a visual knowledge representation formalism with a logical foundation (Conceptual Graphs); ii) extracting new knowledge through association rule mining algorithms, using an innovative interactive approach; and iii) interpreting and evaluating this new knowledge through the reasoning operations of Conceptual Graphs. The suggested method is illustrated on a case study based on real data dealing with the maintenance of overhead cranes.
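
    As a toy illustration of step ii), the sketch below mines pairwise association rules (support and confidence) from hypothetical CMMS intervention records; the record fields and thresholds are invented for the example, and the paper's interactive approach and Conceptual Graph reasoning are not reproduced here.

        from itertools import combinations
        from collections import Counter

        def pairwise_rules(transactions, min_support, min_confidence):
            """Naive miner: frequent item pairs -> rules {a} => {b},
            scored by support (pair frequency) and confidence."""
            n = len(transactions)
            item_counts = Counter(i for t in transactions for i in set(t))
            pair_counts = Counter(frozenset(p) for t in transactions
                                  for p in combinations(sorted(set(t)), 2))
            rules = []
            for pair, c in pair_counts.items():
                if c / n < min_support:
                    continue
                a, b = tuple(pair)
                for lhs, rhs in ((a, b), (b, a)):
                    conf = c / item_counts[lhs]
                    if conf >= min_confidence:
                        rules.append((lhs, rhs, c / n, conf))
            return rules

        # Hypothetical CMMS records: one set of tags per past intervention.
        interventions = [
            {"symptom:hoist_vibration", "component:gearbox", "action:replace_bearing"},
            {"symptom:hoist_vibration", "component:gearbox", "action:lubricate"},
            {"symptom:limit_switch_fail", "component:trolley", "action:replace_switch"},
        ]
        for lhs, rhs, sup, conf in pairwise_rules(interventions, 0.3, 0.9):
            print(f"{lhs} => {rhs}  support={sup:.2f} confidence={conf:.2f}")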

    Delivering IoT Services in Smart Cities and Environmental Monitoring through Collective Awareness, Mobile Crowdsensing and Open Data

    The Internet of Things (IoT) is the paradigm that allows us to interact with the real world by means of networking-enabled devices and to convert physical phenomena into valuable digital knowledge. Such a rapidly evolving field has leveraged the explosion of a number of technologies, standards and platforms. Consequently, different IoT ecosystems behave as closed islands and do not interoperate with each other, so the potential of the number of connected objects in the world is far from being fully unleashed. Typically, research efforts in tackling this challenge tend to propose new IoT platforms or standards; however, such solutions struggle to keep up with the pace at which the field is evolving. Our work is different, in that it originates from the following observation: in use cases that depend on common phenomena, such as Smart Cities or environmental monitoring, much of the data that applications need is already in place somewhere, or devices capable of collecting such data are already deployed. For such scenarios, we propose and study the use of Collective Awareness Paradigms (CAP), which offload data collection to a crowd of participants. We bring three main contributions: (1) we study the feasibility of using Open Data coming from heterogeneous sources, focusing particularly on crowdsourced and user-contributed data, which has the drawback of being incomplete, and we propose a state-of-the-art algorithm that automatically classifies raw crowdsourced sensor data; (2) we design a data collection framework that uses Mobile Crowdsensing (MCS) and puts the participants and the stakeholders in a coordinated interaction, together with a distributed data collection algorithm that prevents the users from collecting too much or too little data; and (3) we design a Service Oriented Architecture that constitutes a single interface to the raw data collected through CAPs by aggregating it into ad-hoc services; moreover, we provide a prototype implementation.
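
    The abstract only summarizes the distributed collection algorithm of contribution (2); as one plausible reading of the coordination idea, the sketch below caps both over- and under-collection with per-cell quotas. All names (`reports_per_cell`, `quota`, the one-sample-per-user rule) are our assumptions for illustration, not the paper's algorithm.

        from collections import defaultdict

        def assign_sensing_tasks(reports_per_cell, participants_by_cell, quota):
            """Request new samples only where a cell is below `quota`, and stop
            tasking a cell once it is covered, capping both under- and
            over-collection."""
            tasks = defaultdict(list)
            for cell, users in participants_by_cell.items():
                deficit = quota - reports_per_cell.get(cell, 0)
                for user in users[:max(deficit, 0)]:  # one sample per user
                    tasks[user].append(cell)
            return dict(tasks)

        coverage = {"cell_a": 5, "cell_b": 1}          # samples already collected
        available = {"cell_a": ["u1", "u2"], "cell_b": ["u3", "u4", "u5"]}
        print(assign_sensing_tasks(coverage, available, quota=3))
        # -> {'u3': ['cell_b'], 'u4': ['cell_b']}; cell_a is already covered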

    Understanding and Mitigating Multi-sided Exposure Bias in Recommender Systems

    Fairness is a critical system-level objective in recommender systems that has been the subject of extensive recent research. It is especially important in multi-sided recommendation platforms, where it may be crucial to optimize utilities not just for the end user, but also for other actors such as item sellers or producers who desire a fair representation of their items. Existing solutions do not properly address various aspects of multi-sided fairness in recommendations, as they may either take a solely one-sided view (i.e., improving fairness only for one side) or fail to appropriately measure the fairness for each actor involved in the system. In this thesis, I first investigate the impact of unfair recommendations on the system and how they can negatively affect its major actors. Then, I propose solutions to tackle this unfairness. I propose a rating transformation technique that works as a pre-processing step, before building the recommendation model, to alleviate the inherent popularity bias in the input data and consequently mitigate exposure unfairness for items and suppliers in the recommendation lists. As another solution, I propose a general graph-based method that works as a post-processing approach, after recommendation generation, for mitigating multi-sided exposure bias in the recommendation results. For evaluation, I introduce several metrics for measuring exposure fairness for items and suppliers, and show that these metrics better capture the fairness properties of the recommendation results. I perform extensive experiments to evaluate the effectiveness of the proposed solutions. The experiments on different publicly available datasets and comparisons with various baselines confirm the superiority of the proposed solutions in improving exposure fairness for items and suppliers. Comment: Doctoral thesis
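
    The thesis defines its own fairness metrics; as a stand-in, the sketch below computes a common position-discounted exposure (1 / log2(rank + 1)) per item and per supplier, plus a Gini coefficient to summarize how skewed that exposure is. The discount choice and the Gini summary are our assumptions, not necessarily the metrics introduced in the thesis.

        import math
        from collections import defaultdict

        def exposure(recommendation_lists, item_to_supplier):
            """Position-discounted exposure (1 / log2(rank + 1)) summed over
            all users' ranked lists, aggregated per item and per supplier."""
            item_exp = defaultdict(float)
            supplier_exp = defaultdict(float)
            for ranked in recommendation_lists:
                for rank, item in enumerate(ranked, start=1):
                    e = 1.0 / math.log2(rank + 1)
                    item_exp[item] += e
                    supplier_exp[item_to_supplier[item]] += e
            return dict(item_exp), dict(supplier_exp)

        def gini(values):
            """0 = exposure spread equally, 1 = concentrated on one actor."""
            v = sorted(values)
            n, total = len(v), sum(v)
            return sum((2 * i - n - 1) * x for i, x in enumerate(v, 1)) / (n * total)

        lists = [["i1", "i2", "i3"], ["i1", "i3", "i2"]]
        _, by_supplier = exposure(lists, {"i1": "s1", "i2": "s1", "i3": "s2"})
        print(by_supplier, gini(list(by_supplier.values())))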

    Longterm schedule optimization of an underground mine under geotechnical and ventilation constraints using SOT

    Long-term mine scheduling is complex as well as time- and labour-intensive. Yet in the mainstream of the mining industry there is no computing program for schedule optimization and, in consequence, schedules are still created manually. The objective of this study was to compare a base-case schedule generated with the Enhanced Production Scheduler (EPS®) and an optimized schedule generated with the Schedule Optimization Tool (SOT). The intent of having an optimized schedule is to improve the project value for underground mines. This study shows that SOT generates mine schedules that improve the Net Present Value (NPV) associated with orebody extraction. It does so by systematically and automatically exploring options to vary the sequence and timing of mine activities, subject to constraints. First, a conventional scheduling method (EPS®) was adopted to identify a schedule of mining activities that satisfied basic sets of constraints, including physical adjacencies of mining activities and operational resource capacity. Additional constraint scenarios explored were geotechnical and ventilation constraints, which negatively affect development rates. Next, the automated SOT procedure was applied to determine whether the schedules could be improved upon. It was demonstrated that SOT permitted rapid re-assessment of project value when new constraint scenarios were applied. This study showed that automated schedule optimization added value to the project every time it was applied; in addition, re-optimization and re-evaluation were achieved quickly. Therefore, the tool used in this research produced better-optimized schedules than those produced using conventional scheduling methods. Master of Applied Science (MASc) in Natural Resources Engineering
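
    The value criterion here is standard discounted cash flow; the toy sketch below shows how merely resequencing activities (subject to constraints, not modelled here) changes the NPV that SOT optimizes. The cash flows and discount rate are invented for illustration, not taken from the study.

        def schedule_npv(activities, discount_rate):
            """NPV of a schedule: each activity is (period, cash flow); the
            same cash is worth less the later it is scheduled."""
            return sum(cash / (1 + discount_rate) ** period
                       for period, cash in activities)

        base        = [(1, -2.0e6), (2, 4.0e6), (3, 5.0e6)]  # development, then ore
        resequenced = [(1, -2.0e6), (2, 5.0e6), (3, 4.0e6)]  # richer stope first
        print(schedule_npv(base, 0.10), schedule_npv(resequenced, 0.10))

    Total undiscounted cash is identical in both schedules, yet pulling the richer stope forward raises the NPV; SOT searches over such resequencings while respecting adjacency, resource, geotechnical and ventilation constraints.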

    Unveiling the frontiers of deep learning: innovations shaping diverse domains

    Deep learning (DL) enables the development of computer models that are capable of learning, visualizing, optimizing, refining, and predicting data. In recent years, DL has been applied in a range of fields, including audio-visual data processing, agriculture, transportation prediction, natural language, biomedicine, disaster management, bioinformatics, drug design, genomics, face recognition, and ecology. To explore the current state of deep learning, it is necessary to investigate its latest developments and applications in these disciplines. However, the literature is lacking in exploring the applications of deep learning across all potential sectors. This paper therefore extensively investigates the potential applications of deep learning across all major fields of study, as well as the associated benefits and challenges. As evidenced in the literature, DL exhibits high accuracy in prediction and analysis, which makes it a powerful computational tool, and it has the ability to articulate itself and to optimize, making it effective at processing data without prior training. At the same time, deep learning necessitates massive amounts of data for effective analysis and processing. To handle the challenge of compiling huge amounts of medical, scientific, healthcare, and environmental data for use in deep learning, gated architectures such as LSTMs and GRUs can be utilized. For multimodal learning, neurons shared across all tasks in the neural network, together with specialized neurons for particular tasks, are necessary. Comment: 64 pages, 3 figures, 3 tables
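
    To make the mention of gated architectures concrete, here is a minimal PyTorch sketch of a GRU classifier over long sequences (e.g. clinical time series); the layer sizes and single-layer design are arbitrary choices for illustration.

        import torch
        import torch.nn as nn

        class GRUClassifier(nn.Module):
            """Gated recurrent network for long sequences: the update/reset
            gates let gradients survive many time steps."""
            def __init__(self, n_features, hidden, n_classes):
                super().__init__()
                self.gru = nn.GRU(n_features, hidden, batch_first=True)
                self.head = nn.Linear(hidden, n_classes)

            def forward(self, x):            # x: (batch, time, n_features)
                _, h_last = self.gru(x)      # h_last: (1, batch, hidden)
                return self.head(h_last.squeeze(0))

        model = GRUClassifier(n_features=12, hidden=64, n_classes=3)
        logits = model(torch.randn(8, 500, 12))  # 8 records, 500 time steps each
        print(logits.shape)                      # torch.Size([8, 3])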