    The State-of-the-Art of Set Visualization

    Sets comprise a generic data model that has been used in a variety of data analysis problems. Such problems involve analysing and visualizing set relations between multiple sets defined over the same collection of elements. However, visualizing sets is a non-trivial problem due to the large number of possible relations between them. We provide a systematic overview of state-of-the-art techniques for visualizing different kinds of set relations. We classify these techniques into six main categories according to the visual representations they use and the tasks they support. We compare the categories to provide guidance for choosing an appropriate technique for a given problem. Finally, we identify challenges in this area that need further research and propose possible directions to address them. Further resources on set visualization are available at http://www.setviz.net.

    Robotic ubiquitous cognitive ecology for smart homes

    Robotic ecologies are networks of heterogeneous robotic devices pervasively embedded in everyday environments, where they cooperate to perform complex tasks. While their potential makes them increasingly popular, one fundamental problem is how to make them both autonomous and adaptive, so as to reduce the amount of preparation, pre-programming and human supervision that they require in real-world applications. The RUBICON project develops learning solutions that yield cheaper, more adaptive and more efficient coordination of robotic ecologies. The approach we pursue builds upon a unique combination of methods from cognitive robotics, machine learning, planning and agent-based control, and wireless sensor networks. This paper illustrates the innovations advanced by RUBICON on each of these fronts before describing how the resulting techniques have been integrated and applied to a smart home scenario. The resulting system is able to provide useful services and proactively assist the users in their activities. RUBICON learns through an incremental and progressive approach driven by the feedback received from its own activities and from the user, while also self-organizing the manner in which it uses available sensors, actuators and other functional components in the process. This paper summarises some of the lessons learned by adopting such an approach and outlines promising directions for future work.

    Framework of active robot learning

    A thesis submitted to the University of Bedfordshire, in fulfilment of the requirements for the degree of Master of Science by research. In recent years, cognitive robots have become an attractive research area of Artificial Intelligence (AI). High-order beliefs for cognitive robots concern the robots' thoughts about their users' intentions and preferences. Existing approaches to developing such beliefs through machine learning rely on particular social cues or specifically defined reward functions, so their applications can be limited. This study carried out primary research on active robot learning (ARL), which enables a robot to develop high-order beliefs by actively collecting or discovering the evidence it needs. The emphasis is on active learning, not teaching; hence, social cues and reward functions are not necessary. In this study, the framework of ARL was developed. Fuzzy logic was employed in the framework for controlling the robot and for identifying high-order beliefs. A simulation environment was set up in which a human and a cognitive robot were modelled using MATLAB, and ARL was implemented through simulation. Simulations were also performed in which the human and the robot tried to jointly lift a stick and keep it level. The simulation results show that under the framework a robot is able to discover the evidence it needs to confirm its user's intention.
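    As a rough illustration of the fuzzy-logic component described above (the thesis itself used MATLAB; the membership functions, linguistic terms and thresholds below are invented for the sketch), graded beliefs about the user's intention in the stick-lifting scenario might be derived from fuzzified sensor readings like this:

```python
# Minimal sketch, not the thesis code: fuzzify stick tilt and user force,
# then apply Mamdani-style rules (AND = min) to grade two candidate
# high-order beliefs about the user's intention. All numbers are invented.
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def belief_update(tilt_deg, user_force):
    tilt_up    = tri(tilt_deg, 2.0, 12.0, 25.0)   # stick rising on user's side
    tilt_level = tri(tilt_deg, -5.0, 0.0, 5.0)
    pushing    = tri(user_force, 2.0, 8.0, 15.0)  # upward force from the user

    return {
        "user intends to lift": min(tilt_up, pushing),
        "user intends to hold level": min(tilt_level, 1.0 - pushing),
    }

print(belief_update(tilt_deg=8.0, user_force=7.0))
```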

    Agro-ecological evaluation of sustainable area for citrus crop production in Ramsar District, Iran

    Citrus growing is regarded as an important cash crop in Ramsar, Iran. Ramsar District lies in a temperate climate zone, while citrus is a sub-tropical fruit. Few studies worldwide have examined citrus crops in terms of negative environmental factors. This study aims to integrate a Geographical Information System (GIS) and the Analytical Network Process (ANP) model to determine citrus suitability zones. It evaluates agro-ecological suitability and determines the potentials and constraints of the region based on effective criteria using the ANP model. The ANP model was used to delineate suitable, moderate and unsuitable areas based on (i) socio-economic, morphometric and hydro-climatic factors, represented by 15 layers weighted according to experts' opinion; (ii) an Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) satellite image from 2003 with 98.45% overall accuracy; and (iii) a Multiple Linear Regression (MLR) model developed for citrus yield prediction. A weighted overlay of the 15 factors was then computed in GIS. The citrus orchards map of 2003 was compared with the new map of citrus areas of 2014, namely the Citrus State Development Program (CSDP) of the study area. The results demonstrated: (i) suitable (risk-free) areas with respect to negative environmental factors, i.e. areas amenable to citrus plantation; (ii) high-risk areas that are unsuitable for citrus plantation; and (iii) that the highest ANP weights were assigned to altitude, frost and minimum temperature. The MLR model predicted citrus yield in Ramsar District with 10% error and can propose optimum citrus production areas. In conclusion, the main outcome of this study could help growers and decision makers enhance current citrus management activities and future citrus planning.
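    To make the overlay step concrete, the sketch below shows how ANP-derived weights could combine normalized factor rasters into a suitability surface; the layer names, weights and class breaks are hypothetical stand-ins, not the study's actual 15 layers or ANP output:

```python
# Minimal sketch of a GIS weighted overlay, assuming each factor has been
# rasterized to a common grid and rescaled to [0, 1] suitability. Random
# arrays stand in for real layers; weights and class breaks are invented.
import numpy as np

rng = np.random.default_rng(0)
shape = (100, 100)  # stand-in raster grid

layers = {
    "altitude":        rng.random(shape),
    "frost_risk":      rng.random(shape),
    "min_temperature": rng.random(shape),
}
weights = {"altitude": 0.45, "frost_risk": 0.35, "min_temperature": 0.20}

suitability = sum(w * layers[name] for name, w in weights.items())

# Classify into the study's three zones with illustrative breaks:
# 0 = unsuitable, 1 = moderate, 2 = suitable.
zones = np.digitize(suitability, bins=[0.33, 0.66])
print(np.bincount(zones.ravel(), minlength=3))
```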

    Metasemantics and fuzzy mathematics

    The present thesis is an inquiry into the metasemantics of natural languages, with a particular focus on the philosophical motivations for countenancing degreed formal frameworks for both psychosemantics and truth-conditional semantics. Chapter 1 sets out to offer a bird's eye view of our overall research project and the key questions that we set out to address. Chapter 2 provides a self-contained overview of the main empirical findings in the cognitive science of concepts and categorisation. This scientific background is offered in light of the fact that most variants of psychologically-informed semantics see our network of concepts as providing the raw materials on which lexical and sentential meanings supervene. Consequently, the metaphysical study of internalistically-construed meanings and the empirical study of our mental categories are overlapping research projects. Chapter 3 closely investigates a selection of species of conceptual semantics, together with reasons for adopting or disavowing them. We note that our ultimate aim is not to defend these perspectives on the study of meaning, but to argue that the project of making them formally precise naturally invites the adoption of degreed mathematical frameworks (e.g. probabilistic or fuzzy). In Chapter 4, we switch to the orthodox framework of truth-conditional semantics, and we present the limitations of a philosophical position that we call "classicism about vagueness". In the process, we develop an empirical hypothesis for the psychological pull of the inductive soritical premiss and we raise an original objection to the epistemicist position, based on computability theory. Chapter 5 makes a different case for the adoption of degreed semantic frameworks, based on their (quasi-)superior treatment of the paradoxes of vagueness. Hence, the adoption of tools that allow for graded membership is well-motivated under both semantic internalism and semantic externalism. At the end of this chapter, we defend a previously unexplored view of vagueness that we call "practical fuzzicism". Chapter 6, the final chapter, is a metamathematical enquiry into both fuzzy model-theoretic semantics and fuzzy Davidsonian semantics for formal languages of type-free truth in which precise truth-predications can be expressed.
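    As a toy illustration of the graded-membership tools the thesis motivates (the predicate and its membership function are invented; min, max and 1 - x are the standard Zadeh connectives, one of several degreed options at issue):

```python
# Illustration only: a vague predicate treated with graded membership.
# The "tall" ramp is invented; Zadeh connectives (min, max, 1 - x) are
# one standard fuzzy choice among the degreed frameworks discussed.
def tall(height_cm):
    """Degree of tallness: linear ramp from 170 cm (0) to 190 cm (1)."""
    return min(1.0, max(0.0, (height_cm - 170.0) / 20.0))

def f_and(a, b):  # conjunction
    return min(a, b)

def f_not(a):     # negation
    return 1.0 - a

h = 180.0  # borderline case
# "Tall and not tall" comes out half-true rather than plainly false,
# which is how degreed frameworks soften the sorites pressure.
print(tall(h), f_and(tall(h), f_not(tall(h))))
```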

    Using iterative cluster merging with improved gap statistics to perform online phenotype discovery in the context of high-throughput RNAi screens

    Background: The recent emergence of high-throughput automated image acquisition technologies has forever changed how cell biologists collect and analyze data. Historically, the interpretation of cellular phenotypes in different experimental conditions has depended upon the expert opinions of well-trained biologists. Such qualitative analysis is particularly effective in detecting subtle, but important, deviations in phenotypes. However, while the rapid and continuing development of automated microscope-based technologies now facilitates the acquisition of images of trillions of cells in thousands of diverse experimental conditions, such as in the context of RNA interference (RNAi) or small-molecule screens, the massive size of these datasets precludes human analysis. Thus, the development of automated methods that identify novel and biologically relevant phenotypes online is one of the major challenges in high-throughput image-based screening. Ideally, phenotype discovery methods should be designed to utilize prior/existing information and tackle three challenging tasks: recovering pre-defined, biologically meaningful phenotypes; differentiating novel phenotypes from known ones; and distinguishing novel phenotypes from each other. Arbitrarily extracted information causes biased analysis, while combining the complete existing datasets with each new image is intractable in high-throughput screens.
    Results: Here we present the design and implementation of a novel and robust online phenotype discovery method with broad applicability that can be used in diverse experimental contexts, especially high-throughput RNAi screens. This method features phenotype modelling and iterative cluster merging using improved gap statistics. A Gaussian Mixture Model (GMM) is employed to estimate the distribution of each existing phenotype, which is then used as the reference distribution in the gap statistics. The method is broadly applicable to image-based datasets derived from a wide spectrum of experimental conditions and is suitable for adaptively processing new images that are continuously added to existing datasets. Validations were carried out on different datasets, including a published RNAi screen using Drosophila embryos [Additional files 1, 2], a dataset for cell-cycle phase identification using HeLa cells [Additional files 1, 3, 4] and a synthetic dataset using polygons; our method tackled the three aforementioned tasks effectively, with an accuracy range of 85%–90%. When implemented in the context of a Drosophila genome-scale RNAi image-based screen of cultured cells aimed at identifying the contribution of individual genes to the regulation of cell shape, the method efficiently discovers meaningful new phenotypes and provides novel biological insight. We also propose a two-step procedure to modify a novelty detection method based on one-class SVM so that it can be used for online phenotype discovery. Across various datasets and conditions, our method consistently outperformed the SVM-based method in at least two of the three tasks by 2% to 5%. These results demonstrate that our method can better identify novel phenotypes in image-based datasets from a wide range of conditions and organisms.
    Conclusion: We demonstrate that our method can detect various novel phenotypes effectively in complex datasets. Experimental results also validate that the method performs consistently under different orders of image input, variations in starting conditions (including the number and composition of existing phenotypes) and datasets from different screens. The proposed method is suitable for online phenotype discovery in diverse high-throughput image-based genetic and chemical screens.
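    The following sketch (not the authors' implementation; it also simplifies by using a uniform reference distribution rather than the GMM-based reference described above) shows the general shape of choosing a cluster count with a gap-statistic comparison over GMM fits:

```python
# Minimal sketch: fit a GMM per candidate k and pick the k whose
# within-cluster dispersion drops most relative to a reference
# distribution (here uniform over the bounding box, a simplification
# of the paper's GMM-based reference). Data is synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])

def log_dispersion(data, k):
    labels = GaussianMixture(n_components=k, random_state=0).fit_predict(data)
    w = 0.0
    for j in range(k):
        pts = data[labels == j]
        if len(pts):
            w += ((pts - pts.mean(axis=0)) ** 2).sum()
    return np.log(w)

def gap(data, k, n_refs=10):
    lo, hi = data.min(axis=0), data.max(axis=0)
    ref = [log_dispersion(rng.uniform(lo, hi, data.shape), k)
           for _ in range(n_refs)]
    return float(np.mean(ref) - log_dispersion(data, k))

gaps = {k: gap(X, k) for k in (1, 2, 3, 4)}
print(max(gaps, key=gaps.get))  # estimated number of phenotypes, likely 2
```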

    Constructing 3D faces from natural language interface

    This thesis presents a system by which 3D images of human faces can be constructed using a natural language interface. The driving force behind the project was the need for a system whereby a machine could produce artistic images from verbal or composed descriptions. This research is the first to look at constructing and modifying facial image artwork using a natural language interface. Specialised modules have been developed to control the geometry of 3D polygonal head models in a commercial modeller from natural language descriptions. These modules were produced from research on human physiognomy, 3D modelling techniques and tools, facial modelling, and natural language processing. [Continues.

    Integrated CO2e assessment and decision support model for supplier selections

    Fast-growing stakeholder interest in sustainability has led to increased attention to both the ecological and social dimensions of industrial companies and their products. While in the past the focus lay predominantly on the environmental impact of the product use phase, it has recently shifted towards the manufacturing phase. Hence, both focal companies and supply chain members are obliged to create and apply new strategies to reduce greenhouse gas (GHG) emissions. From a purchasing perspective, the selection of more environmentally efficient suppliers is one way to significantly reduce CO2e emissions. This requires transparency in the form of site-specific, comparable data on suppliers' environmental performance. Such data is lacking, and detailed environmental performance criteria have not yet been integrated into supplier selection decisions. In this dissertation a model is developed and applied to close this transparency gap and to integrate CO2e as an additional supplier selection criterion in decision-making. For this purpose, a multi-criteria decision analysis approach is developed to derive criteria weights and a supplier ranking based on expert opinion and quantitative supplier performance data. As decision-making based on expert consultation involves a certain level of subjectivity, a sensitivity analysis is performed to evaluate the robustness of the model and its results. By means of 'what-if' scenario simulations, the dynamic behavior of the model is further investigated to examine how decisions may change when CO2e is formulated and considered as a new criterion. In addition, a systematic and modular Life Cycle Assessment (LCA) based approach is developed to enable an efficient evaluation and comparison of the sustainability performance of raw material suppliers at the production site level, based on publicly available data. The model combines a bottom-up calculation of technical process flows with top-down reported site-specific CO2 emissions, and explicitly considers technical restrictions and the trading of intermediate products. The developed site-specific performance model is applied in two case studies, for primary steel production sites in Europe and primary aluminum sites in Germany. The results, which were validated with industry experts, differ by 58% between the most and least efficient steel production sites and by 9% for the examined aluminum production sites, showing an opportunity to reduce GHG emissions by selecting more environmentally efficient suppliers. The combined, integrated CO2e assessment and decision support model is subsequently applied to an automotive case study for the selection of the most adequate supplier for a powertrain part from an environmental and economic efficiency perspective. The results show that in some cases the integration of CO2e performance can have a significant impact on the ranking of the most preferable supplier, despite the initially low importance investigated for the new CO2e decision criterion.
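    A simple weighted-sum aggregation can stand in for the dissertation's MCDA method to show how adding CO2e as a criterion may flip a ranking; the suppliers, normalized scores and weights below are invented for illustration:

```python
# Minimal sketch: weighted-sum supplier ranking with a 'what-if' sweep of
# the CO2e weight. Suppliers, scores (in [0, 1], higher is better) and
# weights are invented; the thesis derives weights from expert judgement.
import numpy as np

criteria  = ["cost", "quality", "co2e"]
base_w    = np.array([0.6, 0.4, 0.0])  # baseline: CO2e not yet considered
suppliers = {
    "A": np.array([0.9, 0.7, 0.4]),
    "B": np.array([0.7, 0.8, 0.9]),
    "C": np.array([0.8, 0.6, 0.6]),
}

def rank(w):
    scores = {name: float(v @ w) for name, v in suppliers.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Sweep the CO2e weight, renormalizing the remaining weights each time.
for w_co2 in (0.0, 0.2, 0.4):
    w = base_w / base_w.sum() * (1.0 - w_co2)
    w[2] = w_co2
    print(f"co2e weight {w_co2:.1f}: {rank(w)}")
```

    In this toy run the top supplier changes once the CO2e weight grows, mirroring the dissertation's finding that the new criterion can reorder the ranking despite a low initial weight.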

    A perspective on neuroscience data standardization with Neurodata Without Borders

    Neuroscience research has evolved to generate increasingly large and complex experimental data sets, and advanced data science tools are taking on central roles in neuroscience research. Neurodata Without Borders (NWB), a standard language for neurophysiology data, has recently emerged as a powerful solution for data management, analysis, and sharing. Here we discuss our efforts to implement NWB data science pipelines. We describe general principles and specific use cases that illustrate successes, challenges, and non-trivial decisions in software engineering. We hope that our experience can provide guidance for the neuroscience community and help bridge the gap between experimental neuroscience and data science. (19 pages, 8 figures)
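    For readers unfamiliar with NWB, a minimal write with pynwb (the reference Python API for the standard) looks like the following; the session metadata and signal are placeholders:

```python
# Minimal pynwb example: create an NWB file, attach one acquisition
# TimeSeries, and write it to disk. Metadata and data are placeholders.
from datetime import datetime
from dateutil.tz import tzlocal
import numpy as np
from pynwb import NWBFile, NWBHDF5IO, TimeSeries

nwbfile = NWBFile(
    session_description="example recording session",
    identifier="demo-0001",
    session_start_time=datetime.now(tzlocal()),
)
nwbfile.add_acquisition(TimeSeries(
    name="voltage",
    data=np.random.randn(1000),
    unit="volts",
    rate=1000.0,  # Hz; regularly sampled, so no explicit timestamps needed
))
with NWBHDF5IO("demo.nwb", "w") as io:
    io.write(nwbfile)
```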

    Integrating site-specific environmental impact assessment in supplier selection: exemplary application to steel procurement

    In times of fast-growing stakeholder interest in sustainability, the ecological and social performance of industrial companies and their products is gaining importance. In particular, the emission of greenhouse gases (GHG) in the automotive industry has come to the forefront of public and governmental attention. The transport sector accounts for 27% of all European GHG emissions and constitutes the largest emitter of CO2e (CO2 equivalents) among all energy-demanding technologies. With increasingly efficient combustion engines and technological innovation towards e-mobility, the emissions from car manufacturing are gaining in importance. So far, little focus has been placed on the emissions created throughout the production process in automotive supply chains from a purchasing perspective. Purchasing raw material from environmentally efficient suppliers is one way to significantly reduce CO2e emissions in automotive supply chains and thus contribute to the two-degree global warming goal. Supplier selection decisions, which cover approximately 75% of the value-adding process of a car, are today mainly cost- and quality-driven. In order to integrate CO2e as a decision criterion for supplier selection, site-specific, comparable data on CO2e emissions from the upstream supply chain is necessary but currently lacking. To estimate the CO2e emissions of steel suppliers' production sites, a model has been developed that characterises manufacturing processes at a site-specific level without requiring confidential primary data. The model is applied to 22 integrated steel mills in the EU-15. The results, which can be transferred to various products and industries, e.g. the construction industry, demonstrate the partly large disparities in manufacturing efficiency regarding CO2e emissions among steel manufacturers, owing to different levels of process integration and internal process know-how. A range between 1879 and 2990 kg CO2e/t crude steel was revealed. Finally, the estimated data on suppliers' CO2e performance is applied in a supplier selection case study of a German automobile manufacturer to simulate environmental as well as economic effects.
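    Taking the reported range at face value, a back-of-the-envelope calculation shows the leverage of site selection; the purchase volume and CO2 price below are assumptions, not figures from the study:

```python
# Back-of-the-envelope using the paper's reported EU-15 range; tonnage
# and carbon price are assumed values, not from the study.
best, worst = 1879.0, 2990.0   # kg CO2e per tonne of crude steel (reported)
tonnes      = 10_000.0         # hypothetical annual steel purchase
co2_price   = 80.0             # EUR per tonne CO2e, assumed

avoided_t = (worst - best) * tonnes / 1000.0  # tonnes CO2e avoided
print(f"avoided emissions: {avoided_t:,.0f} t CO2e")
print(f"implied carbon value: EUR {avoided_t * co2_price:,.0f}")
```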