
    Ono: an open platform for social robotics

    In recent times, the focal point of research in robotics has shifted from industrial robots toward robots that interact with humans in an intuitive and safe manner. This evolution has resulted in the subfield of social robotics, which pertains to robots that function in a human environment and that can communicate with humans in an intuitive way, e.g. with facial expressions. Social robots have the potential to impact many different aspects of our lives, but one particularly promising application is the use of robots in therapy, such as the treatment of children with autism. Unfortunately, many existing social robots are suited neither for practical use in therapy nor for large-scale studies, mainly because they are expensive, one-of-a-kind robots that are hard to modify to suit a specific need. We created Ono, a social robotics platform, to tackle these issues. Ono is composed entirely of off-the-shelf components and cheap materials, and can be built at a local FabLab at a fraction of the cost of other robots. Ono is also entirely open source, and its modular design further encourages modification and reuse of parts of the platform.

    Strategic Project Portfolio Management by Predicting Project Performance and Estimating Strategic Fit

    Candidate project selections are crucial for infrastructure construction companies. First, they determine how well the planned strategy will be realized during the following years. If the selected projects do not align with the competences of the organization, major losses can occur during the projects’ execution phase. Second, participating in tendering competitions is costly manual labour, and losing a bid directly increases the overhead costs of the organization. Still, contractors rarely utilize statistical methods to select projects that are more likely to be successful. In response to these two issues, a tool for the project portfolio selection phase was developed based on existing literature on strategic fit estimation and project performance prediction. One way to define the strategic fit of a project is to evaluate the alignment between the characteristics of a project and the strategic objectives of an organisation. Project performance, on the other hand, can be measured with various financial, technical, production, risk or human-resource-related criteria. Depending on which measure is highlighted, the likelihood of succeeding with regard to a performance measure can be predicted with numerous machine learning methods, of which decision trees were used in this study. By combining the strategic fit and likelihood-of-success measures, a two-by-two matrix was formed. The matrix can be used to categorize project opportunities into four categories (ignore, analyse, cash-in and focus) that can guide candidate project selections. To test and demonstrate the performance of the matrix, the case company’s CRM data was used to estimate strategic fit and the likelihood of succeeding in tendering competitions. First, the projects were plotted on the matrix and their position and accuracy were analysed per quartile. Afterwards, the project selections were simulated and compared against the case company’s real selections during a six-month period.
The first implication after plotting the projects on the matrix was that only a handful of projects were positioned in the focus category, which indicates a discrepancy between the planned strategy and the competences of the case company in tendering competitions. Second, the tendering competition outcomes were easier to predict in the low strategic fit quartiles, as the project selections in them were more accurate than in the high strategic fit categories. Finally, the matrix also quite accurately filtered the worst low strategic fit projects out of the market. The simulation was done in two stages. First, by emphasizing the likelihood-of-success predictions, the matrix increased the hit rate and average strategic fit of the selected project portfolio. When strategic fit values were emphasized, on the other hand, the simulation did not yield useful results. The study contributes to the project portfolio management literature by developing a practice-oriented tool that emphasizes the strategic and statistical perspectives of the candidate project selection phase.
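    The two-by-two selection matrix described above can be sketched as a simple quadrant lookup. The thresholds, the project data and the exact mapping of quadrants to the ignore/analyse/cash-in/focus labels are illustrative assumptions, not taken from the study:

```python
# Hypothetical sketch of the two-by-two project selection matrix.
# Threshold values and quadrant-to-label mapping are assumed, not from the paper.

def categorize(strategic_fit: float, win_probability: float,
               fit_threshold: float = 0.5, win_threshold: float = 0.5) -> str:
    """Map a candidate project onto one of the four matrix categories."""
    if strategic_fit >= fit_threshold:
        # High strategic fit: pursue if winnable, otherwise analyse why not.
        return "focus" if win_probability >= win_threshold else "analyse"
    # Low strategic fit: winnable bids still bring revenue ("cash-in").
    return "cash-in" if win_probability >= win_threshold else "ignore"

candidates = [  # (name, estimated strategic fit, predicted win probability)
    ("Bridge renovation", 0.8, 0.7),
    ("Road maintenance", 0.3, 0.9),
    ("Tunnel project", 0.9, 0.2),
    ("Small paving job", 0.2, 0.1),
]
for name, fit, p_win in candidates:
    print(f"{name}: {categorize(fit, p_win)}")
```

In the study itself the win probability would come from a trained decision tree and the strategic fit from an alignment score against the organisation's strategic objectives; here both are invented numbers.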

    Critical analysis for big data studies in construction: significant gaps in knowledge

    Purpose The purpose of this paper is to identify the gaps and potential future research avenues in big data research, specifically in the construction industry. Design/methodology/approach The paper adopts a systematic literature review (SLR) approach to observe and understand trends and extant patterns/themes in the big data analytics (BDA) research area, particularly in construction-specific literature. Findings A significant rise in construction big data research is identified, with an increasing trend in the number of yearly articles. The main themes discussed were big data as a concept, big data analytical methods/techniques, big data opportunities and challenges, and big data applications. The paper emphasises “the implication of big data into overall sustainability” as a gap that needs to be addressed. These implications are categorised as social, economic and environmental aspects. Research limitations/implications The SLR covers construction technology and management research for the period 2007–2017 in the Scopus and Emerald databases only. Practical implications The paper enables practitioners to explore the key themes discussed around big data research as well as the practical applicability of big data techniques. The advances in existing big data research inform practitioners of the current social, economic and environmental implications of big data, which would ultimately help them incorporate these into their strategies to pursue competitive advantage. Identification of knowledge gaps helps academic research move forward as a continuously evolving body of knowledge. The suggested new research avenues will point future researchers to trending and untouched areas for research. Social implications Identification of knowledge gaps helps academic research move forward through continuous improvement and learning.
The continuously evolving body of knowledge is an asset to society in terms of revealing the truth about emerging technologies. Originality/value There is currently no comprehensive review that addresses the social, economic and environmental implications of big data in the construction literature. This paper identifies and addresses these gaps in an understandable way, establishing them as key issues to consider for the continuous future improvement of big data research in the context of the construction industry.

    End-to-end GRU model for construction crew management

    Crew management is critical to improving construction task productivity. Traditional methods for crew management on site are heavily dependent on the experience of site managers. This paper proposes an end-to-end Gated Recurrent Unit (GRU)-based framework which provides site managers with a more reliable and robust method for managing crews and improving productivity. The proposed framework predicts the task productivity of all possible crew combinations, within a given size, from the pool of available workers using an advanced GRU model. The model has been trained on an existing database of masonry work and was found to outperform other machine learning models. The results of the framework suggest which crew combinations have the highest predicted productivity and can be used by superintendents and project managers to improve construction task productivity and better plan future projects.
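    The crew-ranking step of such a framework can be sketched as follows: enumerate all crew combinations of a given size and sort them by a predicted productivity score. The per-worker ratings and the scoring heuristic below are hypothetical stand-ins for the paper's trained GRU model and masonry database:

```python
# Sketch of ranking crew combinations by predicted productivity.
# The scoring function is a toy placeholder for the paper's GRU model.
from itertools import combinations

skill = {"Ana": 0.9, "Ben": 0.7, "Cho": 0.8, "Dee": 0.6}  # hypothetical ratings

def predict_productivity(crew):
    # Stand-in heuristic: average of (invented) per-worker skill ratings.
    return sum(skill[w] for w in crew) / len(crew)

crew_size = 2
# Enumerate every possible crew of the given size from the worker pool,
# then sort best-first by the predicted productivity score.
ranked = sorted(combinations(skill, crew_size),
                key=predict_productivity, reverse=True)
print(ranked[0])  # highest-scoring crew combination
```

In practice the scoring function would be the trained sequence model, and the enumeration would be bounded by the available worker pool and crew size, exactly as the abstract describes.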

    The impact of technology on data collection: Case studies in privacy and economics

    Technological advancement can often act as a catalyst for scientific paradigm shifts. Today, the ability to collect and process large amounts of data about individuals is arguably a paradigm-shift-enabling technology in action. One manifestation of this technology within the sciences is the ability to study historically qualitative fields with a more granular quantitative lens than ever before. Despite the potential of this technology, wide adoption is accompanied by some risks. In this thesis, I present two case studies. The first focuses on the impact of machine learning in a cheapest-wins motor insurance market, studied by designing a competition-based data collection mechanism. Pricing models in the insurance industry are changing from statistical methods to machine learning. In this game, close to 2,000 participants, acting as insurance companies, trained and submitted pricing models to compete for profit using real motor insurance policies, with a roughly equal split between legacy and advanced models. With this trend towards machine learning in motion, preliminary analysis of the results suggests that future markets might realise cheaper prices for consumers. Additionally, legacy models competing against modern algorithms may experience a reduction in earning stability, accelerating machine learning adoption. Overall, the results of this field experiment demonstrate the potential of digital competition-based studies of markets in the future. The second case study examines the privacy risks of data collection technologies. Despite a large body of research on the re-identification of anonymous data, the question remains: if a dataset were big enough, would records become anonymous by being "lost in the crowd"? Using 3 months of location data, we show that the risk of re-identification decreases slowly with dataset size.
This risk is modelled and extrapolated to larger populations, with 93% of people being uniquely identifiable using 4 points of auxiliary information among 60M people. These results show that the privacy of individuals is very unlikely to be preserved even in country-scale location datasets, and that alternative paradigms of data sharing are still required.
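    The unicity measurement underlying this result can be sketched on toy data: sample k points from one person's location trace and count how often those points match exactly one trace in the dataset. The traces below are invented; the actual study used 3 months of real location data:

```python
# Sketch of a unicity estimate: the fraction of sampled traces that are
# uniquely pinned down by k known (auxiliary) points. Toy data, not the
# study's dataset.
import random

traces = {  # user id -> set of (location, time-slot) points, all invented
    "u1": {(1, 9), (2, 9), (3, 9)},
    "u2": {(1, 9), (2, 9), (4, 9)},
    "u3": {(5, 9), (6, 9), (7, 9)},
}

def unicity(traces, k, trials=200, seed=0):
    rng = random.Random(seed)
    unique = 0
    for _ in range(trials):
        user = rng.choice(list(traces))
        aux = rng.sample(sorted(traces[user]), k)  # k auxiliary points
        # How many traces in the dataset contain all k auxiliary points?
        matches = [u for u, t in traces.items() if set(aux) <= t]
        unique += (len(matches) == 1)
    return unique / trials

print(unicity(traces, k=2))
```

The thesis's finding is that this fraction falls only slowly as the dataset grows, which is what makes country-scale extrapolation (93% unique at 4 points among 60M people) plausible.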

    Parametric cost modelling of components for turbomachines: Preliminary study

    Get PDF
    The ever-increasing competitiveness due to market globalisation has forced industries to modify their design and production strategies. Hence, it is crucial to estimate and optimise costs as early as possible, since any later changes will negatively impact the redesign effort and lead time. This paper aims to compare different parametric cost estimation methods that can be used for analysing mechanical components. The current work presents a cost estimation methodology which uses non-historical data for the database population. The database is populated using should-cost data obtained from analytical cost models implemented in cost estimation software. Then, the paper compares different parametric cost modelling techniques (artificial neural networks, deep learning, random forest and linear regression) to define the best one for industrial components. These methods have been tested on nine axial compressor discs of different dimensions. Then, by considering other materials and batch sizes, it was possible to reach a training dataset of 90 records. From the analysis carried out in this work, it is possible to conclude that machine learning techniques are a valid alternative to traditional linear regression.
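    As a minimal sketch of the parametric approach, the linear regression baseline can be illustrated by fitting cost against a single driving parameter. The disc diameters and should-cost values below are invented, and a single feature stands in for the paper's full set of design parameters:

```python
# Toy parametric cost model: ordinary least squares of cost vs. one
# driving parameter (disc diameter). Data values are invented.

def fit_linear(xs, ys):
    """Fit y = a + b*x by ordinary least squares; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

diameters = [200, 250, 300, 350, 400]     # mm, hypothetical discs
costs = [1100, 1300, 1500, 1700, 1900]    # EUR, hypothetical should-costs

a, b = fit_linear(diameters, costs)
print(round(a + b * 320))  # predicted cost for a 320 mm disc -> 1580
```

The machine learning alternatives compared in the paper (neural networks, random forest) replace this closed-form fit with learned nonlinear regressors over the same kind of parameter-to-cost records.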


    A sub-sector approach to cost-benefit analysis: Small-scale sisal processing in Tanzania

    Get PDF
    project appraisal; cost-benefit analysis; sisal decorticating technology; rural innovations