12 research outputs found

    Preference Mining Using Neighborhood Rough Set Model on Two Universes

    Preference mining plays an important role in e-commerce and on video websites for enhancing user satisfaction and loyalty. Many classical methods cannot handle the cold-start problem, in which the user or the item is new. In this paper, we propose a new model, called parametric neighborhood rough set on two universes (NRSTU), to describe the user and item data structures. Furthermore, the neighborhood lower approximation operator is used to define the preference rules. Then, we provide the means for recommending items to users by using these rules. Finally, we give an experimental example to show the details of NRSTU-based preference mining for the cold-start problem. The parameters of the model are also discussed. The experimental results show that the proposed method presents an effective solution for preference mining. In particular, NRSTU improves recommendation accuracy by about 19% compared to the traditional method.
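    To make the lower-approximation step above concrete, here is a minimal sketch of a neighborhood rough set lower approximation in Python. It is a simplified single-universe illustration, not the paper's two-universe NRSTU model; the toy item features, the set of liked items, and the delta threshold are all assumptions.

```python
# Simplified sketch: neighborhood lower approximation (single universe).
# The sample data and the delta threshold are illustrative assumptions.
import numpy as np

def neighborhood(samples: np.ndarray, i: int, delta: float) -> set:
    """Indices of samples within Euclidean distance delta of sample i."""
    dists = np.linalg.norm(samples - samples[i], axis=1)
    return {j for j, d in enumerate(dists) if d <= delta}

def lower_approximation(samples: np.ndarray, target: set, delta: float) -> set:
    """Samples whose entire delta-neighborhood lies inside the target set."""
    return {i for i in range(len(samples))
            if neighborhood(samples, i, delta) <= target}

# Toy usage: items described by two features; target = items the user liked.
items = np.array([[0.10, 0.20], [0.15, 0.25], [0.90, 0.80], [0.85, 0.75]])
liked = {0, 1}
print(lower_approximation(items, liked, delta=0.2))  # items certainly preferred
```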

    Arabic Rule-Based Named Entity Recognition Systems Progress and Challenges

    Rule-based approaches use human-crafted rules to extract Named Entities (NEs); alongside machine learning, they are among the most widely used ways to extract NEs. Named Entity Recognition (NER) is the task of identifying personal names, locations, organizations, and many other entities. For the Arabic language, Big Data challenges have driven rapid development of Arabic NER for extracting useful information from text. The current paper sheds light on research progress in rule-based systems via a diagnostic comparison of linguistic resources, entity types, domains, and performance. We also highlight the challenges of processing Arabic NEs with rule-based systems. Good NER performance is expected to benefit other modern fields such as semantic web search, question answering, machine translation, information retrieval, and abstracting systems.
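    As a small illustration of the rule-based approach surveyed above, the following sketch applies two hand-made rules: a gazetteer lookup and a title pattern. The gazetteer, the patterns, and the sample sentence are illustrative assumptions (in English for readability), not rules taken from any of the surveyed Arabic systems.

```python
# Minimal rule-based NER sketch; gazetteer, patterns, and sample text are assumed.
import re

GAZETTEER = {"Cairo": "LOCATION", "Al-Azhar University": "ORGANIZATION"}
TITLE_PATTERN = re.compile(r"\b(?:Dr\.|Prof\.|Eng\.)\s+[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*")

def extract_entities(text: str):
    entities = []
    # Rule 1: gazetteer lookup for known locations and organizations.
    for name, label in GAZETTEER.items():
        for match in re.finditer(re.escape(name), text):
            entities.append((match.group(), label))
    # Rule 2: a personal title followed by capitalized words marks a PERSON.
    for match in TITLE_PATTERN.finditer(text):
        entities.append((match.group(), "PERSON"))
    return entities

print(extract_entities("Dr. Ahmed Hassan visited Al-Azhar University in Cairo."))
```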

    Effect of nano black rice husk ash on the chemical and physical properties of porous concrete pavement

    Black rice husk is a waste product of the rice industry, and the major inorganic component of rice husk is silica. In this study, the effect of nano-sized black rice husk ash (BRHA) on the chemical and physical properties of porous concrete pavement was investigated. The BRHA, produced by uncontrolled burning at a rice factory, was collected and then ground in a laboratory mill with steel balls and steel rods. Four different grinding grades of BRHA were examined. A rice husk ash dosage of 10% by weight of binder was used throughout the experiments. The chemical and physical properties of the nano BRHA mixtures were evaluated using a fineness test, X-ray fluorescence (XRF) spectrometry, and X-ray diffraction (XRD). In addition, a compressive strength test was used to evaluate the performance of the porous concrete pavement. Overall, the results show that the optimum grinding time was 63 hours; nano BRHA ground for 63 hours produced concrete with good strength.

    Trustworthiness in Mobile Cyber Physical Systems

    Computing and communication capabilities are increasingly embedded in diverse objects and structures in the physical environment, linking the 'cyberworld' of computing and communications with the physical world. These applications are called cyber physical systems (CPS). The increased involvement of real-world entities leads to a greater demand for trustworthy systems; hence, we use "system trustworthiness" here to mean the ability to guarantee continuous service in the presence of internal errors or external attacks. Mobile CPS (MCPS) is a prominent subcategory of CPS in which the physical component has no permanent location. Mobile Internet devices already provide ubiquitous platforms for building novel MCPS applications. The objective of this Special Issue is to contribute to research on modern and future trustworthy MCPS, including design, modeling, simulation, dependability, and so on. It is imperative to address the issues that are critical to their mobility, report significant advances in the underlying science, and discuss the challenges of development and implementation in various applications of MCPS.

    Intelligent Sensors for Human Motion Analysis

    The book "Intelligent Sensors for Human Motion Analysis" contains 17 articles published in the Special Issue of the Sensors journal. These articles deal with many aspects of the analysis of human movement. New techniques and methods for pose estimation, gait recognition, and fall detection have been proposed and verified. Some of them will trigger further research, and some may become the backbone of commercial systems.

    Intelligent Transportation Related Complex Systems and Sensors

    Built around innovative services related to different modes of transport and traffic management, intelligent transport systems (ITS) are being widely adopted worldwide to improve the efficiency and safety of transportation. They enable users to be better informed and to make safer, more coordinated, and smarter decisions about the use of transport networks. Current ITSs are complex systems made up of several components/sub-systems characterized by time-dependent interactions among themselves. Examples of these transportation-related complex systems include road traffic sensors, autonomous/automated cars, smart cities, smart sensors, virtual sensors, traffic control systems, smart roads, logistics systems, smart mobility systems, and many others emerging from niche areas. The efficient operation of these complex systems requires: i) efficient solutions to the issues of the sensors/actuators used to capture and control the physical parameters of these systems, as well as the quality of the data collected from them; ii) tackling complexity using simulation and analytical modelling techniques; and iii) applying optimization techniques to improve their performance. This Special Issue includes twenty-four papers, which cover scientific concepts, frameworks, architectures, and various other ideas on analytics, trends, and applications of transportation-related data.

    Data Science and Knowledge Discovery

    Data Science (DS) is gaining significant importance in decision processes because it combines several areas, including Computer Science, Machine Learning, mathematics and statistics, domain/business knowledge, software development, and traditional research. In the business field, applying DS allows the use of scientific methods, processes, algorithms, and systems to extract knowledge and insights from structured and unstructured data to support the decision process. After collecting the data, it is crucial to discover knowledge from it. In this step, Knowledge Discovery (KD) tasks are used to create knowledge from structured and unstructured sources (e.g., text, data, and images). The output needs to be in a readable and interpretable format, and it must represent knowledge in a manner that facilitates inferencing. KD is applied in several areas, such as education, health, accounting, energy, and public administration. This book includes fourteen excellent articles which discuss this trending topic and present innovative solutions to show the importance of Data Science and Knowledge Discovery to researchers, managers, industry, society, and other communities. The chapters address several topics, including data mining, deep learning, data visualization and analytics, semantic data, geospatial and spatio-temporal data, data augmentation, and text mining.

    Using integer linear programming in timetabling

    Automated timetabling is a challenging research area that includes school and university timetables. This thesis gives an overview of the timetabling problem and formulates and solves an example problem with two different cost functions. Timetabling means assigning learning events and resources, i.e. teachers, student groups, and teaching rooms, to time slots (lessons) while respecting certain constraints. Automated timetable generation can make use of, for example, binary integer linear programming. In such an optimization model, constraints and a cost function are defined, and the cost function is optimized subject to the constraints. The constraints can be either hard or soft and may vary from case to case. Hard constraints must be satisfied for a timetable to be feasible at all, whereas the soft constraints included in the cost function determine the quality of the resulting timetable. Different soft constraints can be given different weights according to how important their satisfaction is considered relative to the other soft constraints. In the example case of this thesis, the cost function minimizes the number of gap hours and the costs arising from the use of teaching rooms. When there are relatively few learning events, as in this work, these two objectives are not sufficient to produce sensible timetables: the resulting timetables contain days with only a single learning event, which is undesirable. Therefore, to create high-quality timetables, other soft constraints must also be included in the cost function.
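    To make the formulation above concrete, here is a minimal sketch of a binary ILP timetabling model using the PuLP library. It is not the thesis's actual model: the events, slots, rooms, weights, and the late-slot penalty (a crude stand-in for the gap-hour objective) are all illustrative assumptions.

```python
# Minimal binary ILP timetabling sketch with PuLP; data and weights are assumed.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, PULP_CBC_CMD

events = ["math", "physics", "chemistry"]   # learning events to schedule
slots = [0, 1, 2, 3]                        # time slots (lessons)
rooms = {"lab": 3.0, "classroom": 1.0}      # room -> usage cost per lesson

prob = LpProblem("timetable", LpMinimize)

# x[e, s, r] = 1 if event e is held in slot s in room r
x = {(e, s, r): LpVariable(f"x_{e}_{s}_{r}", cat=LpBinary)
     for e in events for s in slots for r in rooms}

# Hard constraint: every event is scheduled exactly once.
for e in events:
    prob += lpSum(x[e, s, r] for s in slots for r in rooms) == 1

# Hard constraint: at most one event per room in each slot.
for s in slots:
    for r in rooms:
        prob += lpSum(x[e, s, r] for e in events) <= 1

# Soft preferences in the cost: weighted room-usage cost plus a late-slot
# penalty standing in for the gap-hour objective described above.
w_room, w_late = 1.0, 0.5
prob += lpSum((w_room * rooms[r] + w_late * s) * x[e, s, r]
              for e in events for s in slots for r in rooms)

prob.solve(PULP_CBC_CMD(msg=False))
for (e, s, r), var in x.items():
    if var.value() and var.value() > 0.5:
        print(f"{e}: slot {s}, room {r}")
```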

    Uncertain Multi-Criteria Optimization Problems

    Most real-world search and optimization problems naturally involve multiple criteria as objectives. Generally, symmetry, asymmetry, and anti-symmetry are basic characteristics of the binary relations used when modeling optimization problems. Moreover, the notion of symmetry has appeared in many articles about the uncertainty theories employed in multi-criteria problems. Different solutions may produce trade-offs (conflicting scenarios) among different objectives: a solution that is better with respect to one objective may compromise other objectives. Various factors need to be considered to address such problems in multidisciplinary research, which is critical for the overall sustainability of human development and activity. In this regard, in recent decades, decision-making theory has been the subject of intense research activity due to its wide applications in different areas, and it has become an important means of providing real-time solutions to uncertainty problems. Theories available in the existing literature, such as probability theory, fuzzy set theory, type-2 fuzzy set theory, rough set theory, and uncertainty theory, deal with such uncertainties. Nevertheless, the uncertain multi-criteria characteristics of such problems have not yet been explored in depth, and there is much left to be achieved in this direction. Hence, different mathematical models of real-life multi-criteria optimization problems can be developed in various uncertain frameworks with special emphasis on optimization problems.
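    As a small illustration of the trade-off notion above, here is a minimal sketch of a Pareto dominance check under a minimization assumption; the sample objective vectors are illustrative assumptions.

```python
# Minimal sketch of Pareto dominance for multi-criteria minimization.
# The sample objective vectors (cost, risk) are illustrative assumptions.
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

solutions = {"s1": (2.0, 5.0), "s2": (3.0, 4.0), "s3": (4.0, 6.0)}
pareto_front = [name for name, obj in solutions.items()
                if not any(dominates(other, obj)
                           for o, other in solutions.items() if o != name)]
print(pareto_front)  # s1 and s2 trade off against each other; s3 is dominated
```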

    Agnostic content ontology design patterns for a multi-domain ontology

    This research project aims to address the semantic heterogeneity problem. Semantic heterogeneity mimics cancer in that it unnecessarily consumes resources from its host, the enterprise, and may even affect lives. A number of authors report that semantic heterogeneity may cost a significant portion of an enterprise's IT budget, and it also hinders pharmaceutical and medical research by consuming valuable research funds. The RA-EKI architecture model comprises a multi-domain ontology, a cross-industry agnostic construct composed of rich axioms, notably for data integration. A multi-domain ontology composed of axiomatized agnostic data model patterns would drive a cognitive data integration application system usable in any industry sector. This project's objective is to elicit agnostic data model patterns, here considered as content ontology design patterns. The first research question of this project pertains to the existence of agnostic patterns and their capacity to solve the semantic heterogeneity problem. Due to the theory-building role of this project, a qualitative research approach is the appropriate way to conduct the research. Contrary to theory-testing quantitative methods, which rely on well-established validation techniques to determine the reliability of a study's outcome, theory-building qualitative methods do not possess standardized techniques to ascertain the reliability of a study. The second research question therefore inquires into a dual-method theory-building approach that may demonstrate trustworthiness. The first method, a qualitative Systematic Literature Review (SLR), induces the sought knowledge from 69 publications retained using a practical screen. The second method, a phenomenological research protocol, elicits the agnostic concepts from semi-structured interviews with 22 senior practitioners averaging 21 years of experience in conceptualization. The SLR retains a set of 89 agnostic concepts from 2009 through 2017; the phenomenological study in turn retains 83 agnostic concepts. During the synthesis stage of both studies, data saturation was calculated for each retained concept at the point where the concept was selected for a second time. The quantification of data saturation constitutes an element of the transferability criterion of trustworthiness. It can be argued that this effort to establish trustworthiness, i.e. credibility, dependability, confirmability, and transferability, is extensive and that this research track is promising. Data saturation for both studies has still not been reached. The assessment performed in the course of establishing the trustworthiness of this project's dual-method qualitative research approach yields very interesting findings, including two sets of agnostic data model patterns obtained from research protocols using radically different data sources, i.e. publications vs. experienced practitioners, yet with striking similarities. Further work is required: using exactly the same protocols for each method, expanding the year range for the SLR, and recruiting new co-researchers for the phenomenological protocol. This work will continue until these protocols no longer elicit new theory material. At that point, new protocols for both methods will be designed and executed with the intent to measure theoretical saturation. For both methods, this entails formulating new research questions that may, for example, focus on agnostic themes such as finance, infrastructure, relationships, and classifications. For this exploration project, the road ahead involves the design of new questionnaires for semi-structured interviews, and the project will need to engage in new knowledge elicitation techniques such as focus groups. The project will also conduct other qualitative research methods, such as action research, to elicit new knowledge and know-how from the actual development and operation of an ontology-based cognitive application. Finally, a mixed qualitative-quantitative methods approach would prepare the transition toward theory testing using hypothetico-deductive techniques.