
    Identifying Design-Build Decision-Making Factors and Providing Future Research Guidelines: Social Network and Association Rule Analysis

    There is a dire need to rebuild existing infrastructure with strategic and efficient methods. Design-build (DB) has emerged as a potential solution, offering fast-tracked delivery as a more time- and cost-efficient project delivery method. Past research has studied factors influencing DB but without providing a holistic analytic approach; this paper fills that knowledge gap. First, a systematic literature review is performed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, and a set of factors affecting DB projects is identified and clustered, using k-means clustering, based on the literature as a whole. Second, a graph-theoretic approach, social network analysis (SNA), is applied methodically to detect understudied factors. Third, the clustered factors are analyzed using association rule (AR) analysis to identify factors that have not been cross-examined together. The findings highlight the need to investigate a group of important but understudied factors affecting DB decision-making and procedures, including those related to management, decision-making and executive methods, and stakeholder- and team-related aspects, among others. Also, while the majority of existing research has focused on theoretical efforts, there is far less work on computational/mathematical approaches that develop actual DB frameworks. Accordingly, future research is recommended to tackle this critical need by developing models that can assess DB performance, success, and implementation, among other aspects. Furthermore, since none of the studies evaluated DB while factoring in all 34 identified relevant factors, it is recommended that future research simultaneously incorporate most, if not all, of these factors to provide a well-rounded and comprehensive analysis for DB decision-making. In addition, future studies need to tackle broader sectors rather than repeatedly focusing on already saturated ones. As such, this study consolidates past literature and critically uses it to offer robust support for the advancement of DB knowledge within the construction industry.
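
    As a hedged illustration of the analytic pipeline the abstract describes (not the authors' actual code), the sketch below clusters a toy paper-by-factor incidence matrix with k-means and computes pairwise support/confidence in the spirit of AR analysis; the factor names and all data are synthetic stand-ins.

```python
# Hypothetical sketch: k-means over factor co-occurrence profiles, then
# simple association metrics over papers. All data here are synthetic.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
factors = [f"factor_{i:02d}" for i in range(34)]       # 34 factors, as in the paper
papers = pd.DataFrame(rng.integers(0, 2, (120, 34)),   # toy paper x factor matrix
                      columns=factors)

# Cluster factors by their co-occurrence profiles across papers.
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(papers.T.values)
factor_clusters = dict(zip(factors, km.labels_))

# Association-rule-style metrics: support P(i and j) and confidence P(j | i).
X = papers.values.astype(float)
support = (X.T @ X) / len(X)
confidence = support / X.mean(axis=0)[:, None]

# Factor pairs with near-zero support have rarely been cross-examined
# together: candidates for the understudied combinations the paper flags.
i, j = np.unravel_index(np.argmin(support + np.eye(34)), support.shape)
print(f"least co-studied pair: {factors[i]}, {factors[j]} "
      f"(support={support[i, j]:.2f})")
```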

    Explainable fault prediction using learning fuzzy cognitive maps

    IoT sensors capture different aspects of the environment and generate high-throughput data streams. Beyond capturing these streams and reporting monitoring information, there is significant potential in adopting deep learning to identify valuable insights for predictive, preventive maintenance. One specific class of applications uses Long Short-Term Memory networks (LSTMs) to predict faults occurring in the near future. However, despite their remarkable performance, LSTMs can be very opaque. This paper addresses that issue by applying Learning Fuzzy Cognitive Maps (LFCMs) to develop simplified auxiliary models that provide greater transparency. To evaluate the idea, an LSTM model is developed to predict faults in industrial bearings from vibration-sensor readings, and an LFCM is then used to imitate the performance of this baseline LSTM. Through static and dynamic analyses, we demonstrate that the LFCM can highlight (i) which members of a sequence of readings contribute to the prediction result and (ii) which values could be controlled to prevent possible faults. We also compare the LFCM with state-of-the-art methods reported in the literature, including decision trees and SHAP values; the experiments show that the LFCM offers some advantages over these methods and, by conducting a what-if analysis, can provide more information about the black-box model. To the best of our knowledge, this is the first time LFCMs have been used to simplify a deep learning model to offer greater explainability.
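
    To make the fuzzy-cognitive-map idea concrete, here is a minimal, self-contained sketch with hand-picked toy weights and hypothetical concept names (not the learned LFCM from the paper); the what-if analysis perturbs one sensor concept and reads off the change in the fault concept.

```python
# Minimal fuzzy-cognitive-map sketch: concept activations update as
# a(t+1) = sigmoid(W @ a(t)), with sensor concepts clamped to their
# observed values. Weights and concepts are illustrative, not learned.
import numpy as np

def sigmoid(x, lam=2.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

concepts = ["vibration_rms", "vibration_peak", "temperature", "fault_risk"]
W = np.array([          # W[i, j]: influence of concept j on concept i
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.7, 0.5, 0.3, 0.0],   # fault_risk driven by the three sensor concepts
])

def simulate(sensor_values, steps=10):
    a = np.array(list(sensor_values) + [0.0])
    for _ in range(steps):
        a = sigmoid(W @ a)
        a[:3] = sensor_values        # clamp the sensor concepts
    return a

baseline = simulate([0.4, 0.3, 0.2])
what_if = simulate([0.9, 0.3, 0.2])  # what if vibration RMS rises?
print(f"fault_risk: baseline={baseline[-1]:.3f}, what-if={what_if[-1]:.3f}")
```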

    ABC: Adaptive, Biomimetic, Configurable Robots for Smart Farms - From Cereal Phenotyping to Soft Fruit Harvesting

    Currently, numerous factors, such as demographics, migration patterns, and economics, are leading to critical labour shortages in low-skilled and physically demanding parts of agriculture; robotics can be developed for the agricultural sector to address these shortages. This study aims to develop an adaptive, biomimetic, and configurable modular robotics architecture that can be applied to multiple tasks (e.g., phenotyping, cutting, and picking), various crop varieties (e.g., wheat, strawberry, and tomato), and different growing conditions. These robotic solutions cover the entire perception–action–decision-making loop, targeting the phenotyping of cereals and the harvesting of fruit in natural environments. The primary contributions of this thesis are as follows. a) A high-throughput method for imaging field-grown wheat in three dimensions is presented, along with an accompanying unsupervised measuring method for obtaining individual wheat spike data. The unsupervised method analyses the 3D point cloud of each trial plot, containing hundreds of wheat spikes, and calculates the average spike size and total spike volume per plot. Experimental results show that the proposed algorithm can effectively identify spikes from wheat crops, down to individual spikes. b) Unlike cereal, soft fruit is typically harvested by manual selection and picking. To enable robotic harvesting, the initial perception system uses conditional generative adversarial networks to identify ripe fruits using synthetic data. To determine whether a strawberry is surrounded by obstacles, a cluster-complexity-based perception system is further developed to classify the harvesting complexity of ripe strawberries. c) Once harvest-ready fruit is localised using point cloud data generated by a stereo camera, the platform's action system coordinates the arm to reach and cut the stem using the passive motion paradigm framework, as inspired by studies on the neural control of movement in the brain. Results from field trials are presented for strawberry detection, reaching/cutting the fruit stem with a mean error of less than 3 mm, and extensions to analysing complex canopy structures and bimanual (searching/picking) coordination. Although this thesis focuses on strawberry harvesting, ongoing research is adapting the architecture to other crops. The agricultural food industry remains a labour-intensive sector with low margins and a business model built on cost- and time-efficiency. The concepts presented herein can serve as a reference for future agricultural robots that are adaptive, biomimetic, and configurable.
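
    The unsupervised spike-measurement step can be pictured with a short, hypothetical sketch: cluster a plot's 3D point cloud into spike candidates and estimate per-spike and total volume. Synthetic Gaussian blobs stand in for real field scans, and DBSCAN plus convex hulls are one plausible choice, not necessarily the thesis's exact algorithm.

```python
# Toy plot: three "spikes" as small 3D blobs (metres); cluster, then
# estimate each candidate spike's volume from its convex hull.
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)
centres = np.array([[0.0, 0.0, 1.0], [0.3, 0.1, 1.1], [0.6, 0.0, 0.9]])
cloud = np.vstack([c + rng.normal(scale=0.02, size=(200, 3)) for c in centres])

labels = DBSCAN(eps=0.05, min_samples=10).fit_predict(cloud)
volumes = [ConvexHull(cloud[labels == k]).volume
           for k in set(labels) if k != -1]          # -1 = noise points
print(f"spikes found: {len(volumes)}, "
      f"mean volume: {np.mean(volumes):.6f} m^3, total: {np.sum(volumes):.6f} m^3")
```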

    A Machine Learning-Based Empirical Evaluation of Cyber Threat Actors' High-Level Attack Patterns over Low-Level Attack Patterns in Attributing Attacks

    Cyber threat attribution is the process of identifying the actor behind an attack incident in cyberspace. Accurate and timely threat attribution plays an important role in deterring future attacks by enabling appropriate and timely defense mechanisms. Manual analysis of attack patterns gathered from honeypot deployments, intrusion detection systems, firewalls, and trace-back procedures is still security analysts' preferred method of cyber threat attribution. Such attack patterns are low-level Indicators of Compromise (IOCs), reflecting the Tactics, Techniques, and Procedures (TTPs) and software tools used by adversaries in their campaigns. Adversaries rarely re-use them, and they can be manipulated, resulting in false and unfair attribution. To empirically evaluate and compare the effectiveness of both kinds of IOC, two problems need to be addressed. First, recent research has discussed the ineffectiveness of low-level IOCs for cyber threat attribution only intuitively; an empirical evaluation of their effectiveness on a real-world dataset is missing. Second, the available dataset for high-level IOCs has a single instance per predictive class label and therefore cannot be used directly to train machine learning models. To address these problems, we empirically evaluate the effectiveness of low-level IOCs on a real-world dataset built specifically for comparative analysis with high-level IOCs. The experimental results show that the high-level IOC-trained models attribute cyberattacks with an accuracy of 95%, compared to 40% for the low-level IOC-trained models.
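
    The comparison can be illustrated with a small, fully synthetic experiment: the same classifier is trained once on stable, TTP-like "high-level" features and once on rarely reused "low-level" features. All names and numbers below are toy stand-ins, not the paper's dataset, so the accuracies will only qualitatively mirror the reported gap.

```python
# Synthetic attribution experiment: stable per-actor profiles (high-level)
# vs mostly non-reused noise (low-level), same classifier for both.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n, n_actors = 600, 6
y = rng.integers(0, n_actors, n)

profiles = rng.normal(size=(n_actors, 20))            # stable TTP profiles
X_high = profiles[y] + rng.normal(scale=0.5, size=(n, 20))
X_low = rng.normal(size=(n, 20)) + 0.1 * profiles[y]  # rarely reused signals

for name, X in [("high-level IOC", X_high), ("low-level IOC", X_low)]:
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
    print(f"{name}: accuracy = {accuracy_score(yte, clf.predict(Xte)):.2f}")
```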

    PIKS: A Technique to Identify Actionable Trends for Policy-Makers Through Open Healthcare Data

    With calls for increasing transparency, governments are releasing greater amounts of data in multiple domains, including finance, education, and healthcare. The efficient exploratory analysis of healthcare data constitutes a significant challenge. Key concerns in public health include the quick identification and analysis of trends and the detection of outliers, which allow policies to be rapidly adapted to changing circumstances. We present an efficient outlier detection technique, termed PIKS (pruned iterative k-means searchlight), which combines an iterative k-means algorithm with a pruned searchlight-based scan. We apply this technique to identify outliers in two publicly available healthcare datasets, from the New York Statewide Planning and Research Cooperative System and California's Office of Statewide Health Planning and Development, and compare it with three existing outlier detection techniques: auto-encoders, isolation forests, and feature bagging. We identified outliers in conditions including suicide rates, immunity disorders, social admissions, cardiomyopathies, and pregnancy in the third trimester. The PIKS technique produces results consistent with other techniques such as the auto-encoder; however, the auto-encoder needs to be trained and requires several parameters to be tuned, whereas PIKS has far fewer parameters to tune, making it advantageous for fast, "out-of-the-box" data exploration. The PIKS technique is scalable, can readily ingest new datasets, and can therefore provide valuable, up-to-date insights to citizens, patients, and policy-makers. We have made our code open source, and with the availability of open data, other researchers can easily reproduce and extend our work. This will help promote a deeper understanding of healthcare policies and public health issues.
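
    A simplified sketch of the iterative k-means core of the technique (the pruned searchlight scan is omitted, and the threshold, cluster count, and data are illustrative assumptions, not the published algorithm's settings): cluster, flag the points farthest from their centroids, prune them, and repeat on the remainder.

```python
# Iterative k-means outlier flagging: each round prunes the points
# farthest from their assigned centroid, then re-fits on the rest.
import numpy as np
from sklearn.cluster import KMeans

def iterative_kmeans_outliers(X, k=3, rounds=2, quantile=0.98):
    mask = np.ones(len(X), dtype=bool)      # points still considered inliers
    for _ in range(rounds):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X[mask])
        d = np.linalg.norm(X[mask] - km.cluster_centers_[km.labels_], axis=1)
        keep = d <= np.quantile(d, quantile)
        idx = np.flatnonzero(mask)
        mask[idx[~keep]] = False            # prune this round's outliers
    return ~mask                            # True = flagged outlier

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(8, 1, (200, 2)),
               rng.uniform(-10, 20, (8, 2))])   # two dense groups + 8 anomalies
print("flagged indices:", np.flatnonzero(iterative_kmeans_outliers(X)))
```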

    Endogenous measures for contextualising large-scale social phenomena: a corpus-based method for mediated public discourse

    This work presents an interdisciplinary methodology for developing endogenous measures of group membership through analysis of pervasive linguistic patterns in public discourse. Focusing on political discourse, it critiques the conventional approach to the study of political participation, which is premised on decontextualised, exogenous measures to characterise groups. Given the theoretical and empirical weaknesses of decontextualised approaches to large-scale social phenomena, this work suggests that contextualisation using endogenous measures might provide a complementary perspective that mitigates those weaknesses. It develops a sociomaterial perspective on political participation in mediated discourse as affiliatory action performed through language. While the affiliatory function of language is often performed consciously (such as in statements of identity), this work is concerned with unconscious features (such as patterns in lexis and grammar). It argues that pervasive patterns in such features, which emerge through socialisation, are resistant to change and manipulation, and might therefore serve as endogenous measures of sociopolitical contexts, and hence of groups. In terms of method, the work takes a corpus-based approach to data from the Twitter messaging service, in which patterns in users' speech are examined statistically in order to trace potential community membership. The method is applied to the US state of Michigan during the second half of 2018, with the midterm (i.e. non-Presidential) elections held on 6 November. The corpus is assembled from the original posts of 5,889 users, who are nominally geolocalised to 417 municipalities, and these users are clustered according to pervasive language features. Comparing the linguistic clusters by the municipalities they represent reveals regular sociodemographic differentials across clusters. This is understood as an indication of social structure, suggesting that endogenous measures derived from pervasive patterns in language may indeed offer a complementary, contextualised perspective on large-scale social phenomena.
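
    As a toy illustration of the corpus-based step, the sketch below represents each user by relative frequencies over a small, hypothetical function-word vocabulary and clusters the resulting profiles; a real run would use the 5,889-user corpus and a far richer feature set, and the texts and vocabulary here are invented.

```python
# Cluster users by relative frequencies of function words: unconscious
# lexico-grammatical features rather than topical content.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import normalize
from sklearn.cluster import KMeans

users = {  # hypothetical concatenated posts per user
    "u1": "well i mean we should just not do that you know",
    "u2": "we must not and we shall not accept this at all",
    "u3": "i mean you know it is just like that really",
    "u4": "they shall not pass and we must hold the line",
}
function_words = ["i", "we", "you", "not", "just", "shall", "must",
                  "mean", "know", "the", "it", "that"]
vec = CountVectorizer(vocabulary=function_words)
X = normalize(vec.fit_transform(list(users.values())), norm="l1")  # rel. freq.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(dict(zip(users, labels)))   # candidate community assignments
```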

    No Ground Truth at Sea – Developing High-Accuracy AI Decision-Support for Complex Environments

    As AI decision-support systems are increasingly developed for applications outside traditional organizational confines, developers are confronted with new sources of complexity they need to address. However, we know little about how AI applications are developed for natural use domains with high environmental complexity stemming from physical influences outside the developers' control. This study investigates what challenges emerge from such complexity and how developers mitigate them. Drawing upon a rich longitudinal single-case study of the development of AI decision-support for maritime navigation, the findings show that achieving high output accuracy is complicated by the physical environment hindering the creation of training data. Further, the developers chose to reduce output accuracy and adapt the HMI design in order to successfully situate the AI application in an existing sociotechnical context. This study contributes to the IS literature, following recent calls for phenomenon-based examination of emerging challenges when extending the scope frontier of AI, and provides practical recommendations for developing AI decision-support for complex environments.

    Utilizing artificial intelligence in perioperative patient flow: systematic literature review

    The purpose of this thesis was to map the existing landscape of artificial intelligence (AI) applications used in secondary healthcare, with a focus on perioperative care. The goal was to find out what systems have been developed and how capable they are of controlling perioperative patient flow. The review was guided by the following research question: how is AI currently utilized in patient flow management in the context of perioperative care? This systematic literature review examined the current evidence on the use of AI in perioperative patient flow. A comprehensive search was conducted in four databases, resulting in 33 articles meeting the inclusion criteria. The findings demonstrated that AI technologies, such as machine learning (ML) algorithms and predictive analytics tools, have shown somewhat promising outcomes in optimizing perioperative patient flow. Specifically, AI systems have proven effective in predicting surgical case durations, assessing risks, planning treatments, supporting diagnosis, improving bed utilization, reducing cancellations and delays, and enhancing communication and collaboration among healthcare providers. However, several challenges were identified, including the need for accurate and reliable data sources, ethical considerations, and the potential for biased algorithms. Further research is needed to validate and optimize the application of AI in perioperative patient flow. The contribution of this thesis is a summary of the current state of AI applications in perioperative patient flow, providing information about the features of perioperative patient flow and the clinical tasks addressed by previously identified AI applications.
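
    One recurring application from the review, surgical case-duration prediction, can be sketched with fully synthetic data as below; the features, coefficients, and error level are invented for illustration and do not come from the reviewed studies.

```python
# Toy case-duration model: gradient boosting on simulated scheduling features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(0, 5, n),          # procedure type (coded)
    rng.integers(20, 90, n),        # patient age
    rng.integers(1, 4, n),          # ASA physical status class
])
minutes = 30 + 25 * X[:, 0] + 0.3 * X[:, 1] + 10 * X[:, 2] + rng.normal(0, 10, n)

Xtr, Xte, ytr, yte = train_test_split(X, minutes, test_size=0.25, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(Xtr, ytr)
print(f"MAE: {mean_absolute_error(yte, model.predict(Xte)):.1f} minutes")
```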

    The nexus between e-marketing, e-service quality, e-satisfaction and e-loyalty: a cross-sectional study within the context of online SMEs in Ghana

    The spread of the Internet, the proliferation of mobile devices, and the onset of the COVID-19 pandemic have given impetus to online shopping in Ghana and the subregion. This situation has also created opportunities for SMEs to take advantage of online marketing technologies. However, there is a dearth of studies on the link between e-marketing and e-loyalty in online shopping, creating a policy gap around the prospects for business success for online SMEs in Ghana. The purpose of the study was therefore to examine the relationship between the main independent variable, e-marketing, and the main dependent variable, e-loyalty, as well as the mediating roles of e-service quality and e-satisfaction in the link between e-marketing and e-loyalty. The study adopted a positivist stance with a quantitative method and was cross-sectional in nature, employing a descriptive correlational design. A structural equation modelling approach was used to examine the associations between the independent, mediating, and dependent variables, and a sensitivity analysis was conducted to control for the potential confounding effects of demographic factors. A sample of 1,293 residents of Accra, Ghana, who had previously shopped online responded to a structured questionnaire in an online survey via Google Docs, and the data were analysed with the IBM SPSS Amos 24 software. Positive associations were found between the key constructs in the study: e-marketing, e-service quality, e-satisfaction, and e-loyalty. The findings gave further backing to the diffusion of innovation theory, the resource-based view theory, and the technology acceptance model. In addition, e-service quality and e-satisfaction individually and jointly mediated the relationship between e-marketing and e-loyalty; however, these mediations were partial rather than the originally anticipated full mediation. In terms of value and contribution, this is the first study in a developing-economy context to undertake a holistic examination of the key marketing performance variables within an online shopping context, and it uniquely tested the mediating roles of both e-service quality and e-satisfaction in the link between e-marketing and e-loyalty. The findings are novel in the e-marketing literature, as they unearth key antecedents of e-loyalty for online SMEs in a developing economy. The study suggested areas for further related research and also highlighted its limitations.
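
    The parallel-mediation logic tested in the study can be sketched with simulated data: regress each mediator on the predictor, regress the outcome on everything, and estimate each indirect effect as a product of path coefficients. This is a regression-based approximation of the SEM the study ran in IBM SPSS Amos, and every coefficient below is invented.

```python
# Parallel mediation sketch: e-marketing -> {e-service quality,
# e-satisfaction} -> e-loyalty, with indirect effects as a*b products.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1293                               # matches the study's sample size
e_mkt = rng.normal(size=n)
e_sq = 0.6 * e_mkt + rng.normal(scale=0.8, size=n)    # mediator 1
e_sat = 0.5 * e_mkt + rng.normal(scale=0.8, size=n)   # mediator 2
e_loy = (0.2 * e_mkt + 0.3 * e_sq + 0.4 * e_sat
         + rng.normal(scale=0.8, size=n))

a1 = sm.OLS(e_sq, sm.add_constant(e_mkt)).fit().params[1]   # e_mkt -> e_sq
a2 = sm.OLS(e_sat, sm.add_constant(e_mkt)).fit().params[1]  # e_mkt -> e_sat
fit = sm.OLS(e_loy, sm.add_constant(
    np.column_stack([e_mkt, e_sq, e_sat]))).fit()
direct, b1, b2 = fit.params[1], fit.params[2], fit.params[3]

print(f"direct effect:                 {direct:.2f}")
print(f"indirect via e-service quality: {a1 * b1:.2f}")
print(f"indirect via e-satisfaction:    {a2 * b2:.2f}")
# A nonzero direct effect alongside nonzero indirect effects corresponds
# to the partial (rather than full) mediation the study reports.
```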