1,868 research outputs found

    Self-Playing Labyrinth Game Using Camera and Industrial Control System

    In this master’s thesis, an industrial control system together with a network camera and servo motors was used to automate a ball-and-plate labyrinth system. The two servo motors, each with its own servo drive, were connected by joint arms to the plate, which rested on two interconnected gimbal frames, one per axis. A background-subtraction-based tracking algorithm was developed to measure the ball position using the camera. The camera acted as a sensor node in a control network, where a programmable logic controller together with the servo drives implemented a cascaded PID control loop to control the ball position. The ball reference position could either be set by user input from a tablet device, or automatically, making the labyrinth self-playing. The resulting system was able to steer the ball through the labyrinth using the camera for position feedback.
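The cascade described above can be sketched in a few lines. This is a minimal illustration, not the thesis implementation: the gains, sample time, and the `cascade_step` helper are hypothetical, and in the real system the inner (angle) loop runs inside the servo drives.

```python
class PID:
    """Textbook PID controller with a fixed sample time."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Outer loop: ball-position error -> plate-angle setpoint.
# Inner loop (normally handled by the servo drive): angle error -> motor command.
outer = PID(kp=2.0, ki=0.1, kd=0.8, dt=0.02)
inner = PID(kp=5.0, ki=0.0, kd=0.2, dt=0.02)

def cascade_step(ball_ref, ball_pos, plate_angle):
    angle_ref = outer.step(ball_ref - ball_pos)   # outer PID output feeds...
    return inner.step(angle_ref - plate_angle)    # ...the inner PID's setpoint
```

One axis is shown; the real system runs one such cascade per gimbal axis, with the camera supplying `ball_pos` at the frame rate.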

    Obstacle and Change Detection Using Monocular Vision

    We explore change detection using videos of change-free paths to detect any changes that occur when the same paths are travelled in the future. This approach benefits from learning the background model of the given path as preprocessing, detecting changes starting from the first frame, and determining the current location in the path. Two approaches are explored: a geometry-based approach and a deep learning approach. In our geometry-based approach, we use feature points to match testing frames to training frames. Matched frames are used to determine the current location within the training video. The frames are then processed by first registering the test frame onto the training frame through a homography of the previously matched feature points. Finally, a comparison is made to determine changes using a region of interest (ROI) covering the direct path of the robot in both frames. This approach performs well in many tests with various floor patterns, textures and complexities in the background of the path. In our deep learning approach, we use an ensemble of unsupervised dimensionality reduction models. We first extract feature points within a ROI and extract small frame samples around the feature points. The frame samples are used as training inputs and labels for our unsupervised models. The approach aims at learning a compressed feature representation of the frame samples in order to have a compact representation of the background. We use the distribution of the training samples to directly compare the learned background to test samples, classifying each as background or change using a majority vote. This approach performs well using just two models in the ensemble and achieves an overall accuracy of 98.0%, a 4.1% improvement over the geometry-based approach.

    Proceedings of the 9th Arab Society for Computer Aided Architectural Design (ASCAAD) international conference 2021 (ASCAAD 2021): architecture in the age of disruptive technologies: transformation and challenges.

    The ASCAAD 2021 conference theme is Architecture in the age of disruptive technologies: transformation and challenges. The theme addresses the gradual shift of computational design away from prototypical, morphogenesis-centred associations in architectural discourse. This imminent shift of focus is increasingly stirring debate in the architectural community and is provoking a much-needed critical questioning of the role of computation in architecture: from a sole embodiment and enactment of technical dimensions into one that deliberately pursues and embraces the humanities as an ultimate aspiration.

    Codesign of edge intelligence and automated guided vehicle control

    Abstract. In recent years, edge Artificial Intelligence (AI), coupled with other technologies such as autonomous systems, has gained considerable attention. This work presents a harmonised design of Automated Guided Vehicle (AGV) control, edge intelligence, and human input to enable autonomous transportation in industrial environments. The AGV can navigate between a source and destinations and pick/place objects. The human input implicitly provides the preferred destination and exact drop point, which are derived by the AI at the network edge and shared with the AGV over a wireless network. The design and integration of autonomous AGV control, edge intelligence, and the communication between them are carried out in this work and presented as a unified demonstration. The demonstration indicates that the proposed hardware, software, and intelligence design achieves a Technology Readiness Level (TRL) of 4–5.

    Field Testing of a Stochastic Planner for ASV Navigation Using Satellite Images

    We introduce a multi-sensor navigation system for autonomous surface vessels (ASV) intended for water-quality monitoring in freshwater lakes. Our mission planner uses satellite imagery as a prior map, formulating offline a mission-level policy for global navigation of the ASV and enabling autonomous online execution via local perception and local planning modules. A significant challenge is posed by the inconsistencies in traversability estimation between satellite images and real lakes, due to environmental effects such as wind, aquatic vegetation, shallow waters, and fluctuating water levels. Hence, we specifically modelled these traversability uncertainties as stochastic edges in a graph and optimized for a mission-level policy that minimizes the expected total travel distance. To execute the policy, we propose a modern local planner architecture that processes sensor inputs and plans paths to execute the high-level policy under uncertain traversability conditions. Our system was tested on three km-scale missions on a Northern Ontario lake, demonstrating that our GPS-, vision-, and sonar-enabled ASV system can effectively execute the mission-level policy and disambiguate the traversability of stochastic edges. Finally, we provide insights gained from practical field experience and offer several future directions to enhance the overall reliability of ASV navigation systems.

    Comment: 33 pages, 20 figures. Project website: https://pcctp.github.io. arXiv admin note: text overlap with arXiv:2209.1186
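To make the stochastic-edge idea concrete, here is a small sketch of computing an expected travel distance over a graph whose uncertain edges are each traversable with some probability. Note this averages shortest-path costs over all traversability outcomes (an omniscient baseline), whereas the paper optimizes a mission-level *policy* that only learns an edge's state on arrival; the function names are hypothetical.

```python
import heapq
from itertools import product

def dijkstra(adj, src, dst):
    """Shortest path cost in an undirected graph; adj: {node: [(nbr, w), ...]}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def expected_distance(edges, stochastic, src, dst):
    """edges: {(u, v): w} always traversable; stochastic: {(u, v): (w, p_open)}.
    Averages the shortest-path cost over every stochastic-edge outcome."""
    keys = list(stochastic)
    total = 0.0
    for outcome in product([0, 1], repeat=len(keys)):
        adj, prob = {}, 1.0
        def add(u, v, w):
            adj.setdefault(u, []).append((v, w))
            adj.setdefault(v, []).append((u, w))
        for (u, v), w in edges.items():
            add(u, v, w)
        for bit, k in zip(outcome, keys):
            w, p = stochastic[k]
            prob *= p if bit else (1.0 - p)
            if bit:
                add(*k, w)
        total += prob * dijkstra(adj, src, dst)
    return total
```

In the paper's setting the planner must commit to a policy (e.g. "try the shortcut; if blocked, backtrack"), which is why the optimized objective is an expectation over policy executions rather than this clairvoyant average.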

    Visual Place Recognition in Changing Environments

    Localization is an essential capability of mobile robots, and place recognition is an important component of it. Only with precise localization can robots reliably plan, navigate and understand the environment around them. The main task of a visual place recognition algorithm is to recognize, based on visual input, whether the robot has previously seen a given place in the environment. Cameras are among the most popular sensors from which robots obtain information. They are lightweight, affordable, and provide detailed descriptions of the environment in the form of images. Cameras have proven useful for a wide variety of emerging applications, from virtual and augmented reality to autonomous cars or even fleets of autonomous cars. All of these applications need precise localization. Nowadays, state-of-the-art methods are able to reliably estimate the position of a robot from image streams. One of the big remaining challenges is localizing a camera, given an image stream, in the presence of drastic visual appearance changes in the environment. Visual appearance changes may be caused by a variety of factors: camera-related factors, such as changes in exposure time; camera-position-related factors, e.g. when the scene is observed from a different position or viewing angle, or under occlusions; and natural factors, for example seasonal changes, different weather conditions, and illumination changes. These effects change the way the same place in the environment appears in the image and can lead to situations where it becomes hard even for humans to recognize a place. The performance of traditional visual localization approaches, such as FABMAP or DBow, also decreases dramatically in the presence of strong visual appearance changes.
The techniques presented in this thesis aim at improving visual place recognition capabilities for robotic systems in the presence of dramatic visual appearance changes. To reduce the effect of visual changes on image matching performance, we exploit sequences of images rather than individual images. This becomes possible as robotic systems collect data sequentially and not in random order. We formulate the visual place recognition problem under strong appearance changes as a problem of matching image sequences collected by a robotic system at different points in time. A key insight here is that matching sequences reduces the ambiguities in the data associations. This allows us to establish image correspondences between different sequences and thus recognize if two images represent the same place in the environment. To perform a search for image correspondences, we construct a graph that encodes the potential matches between the sequences and at the same time preserves the sequentiality of the data. The shortest path through such a data association graph provides the valid image correspondences between the sequences. Robots operating reliably in an environment should be able to recognize a place in an online manner, not only after having recorded all data beforehand. As opposed to collecting image sequences and then determining the associations between them offline, a real-world system should be able to make a decision for every incoming image. In this thesis, we therefore propose an algorithm that performs visual place recognition in changing environments in an online fashion between the query and the previously recorded reference sequences. For every incoming query image, our algorithm checks whether the robot is in the previously seen environment, i.e. whether a matching image exists in the reference sequence, as well as whether the current measurement is consistent with previously obtained query images.
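The core idea of a sequence-consistent search can be sketched with a dynamic program over a pairwise cost matrix: a minimum-cost monotone path through the matrix plays the role of the shortest path through the data association graph. This is a simplified illustration under assumed global image descriptors, not the thesis algorithm, which additionally handles online operation and partial overlaps.

```python
import numpy as np

def match_sequences(query_desc, ref_desc):
    """query_desc: (n, d), ref_desc: (m, d) global image descriptors.
    Returns the reference index matched to each query image along the
    min-cost monotone path through the pairwise cost matrix."""
    q = query_desc / np.linalg.norm(query_desc, axis=1, keepdims=True)
    r = ref_desc / np.linalg.norm(ref_desc, axis=1, keepdims=True)
    cost = 1.0 - q @ r.T              # cosine distance between all image pairs
    n, m = cost.shape
    acc = np.full((n, m), np.inf)
    acc[0] = cost[0]                  # the match may start anywhere in the reference
    for i in range(1, n):
        for j in range(m):
            # sequentiality: either stay at ref image j or advance from j-1
            acc[i, j] = cost[i, j] + acc[i - 1, max(0, j - 1):j + 1].min()
    # backtrack from the cheapest end column
    j = int(acc[-1].argmin())
    path = [j]
    for i in range(n - 1, 0, -1):
        lo = max(0, j - 1)
        j = lo + int(acc[i - 1, lo:j + 1].argmin())
        path.append(j)
    return path[::-1]
```

The monotonicity constraint is what encodes the "sequences, not individual images" insight: an isolated spurious match cannot drag the whole path off course, because every correspondence must be reachable from its predecessor.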
Additionally, to be able to recognize places in an online manner, a robot needs to recognize the fact that it has left the previously mapped area, as well as relocalize when it re-enters the area covered by the reference sequence. Thus, we relax the assumption that the robot always travels within the previously mapped area and propose an improved graph-based matching procedure that allows for visual place recognition in the case of partially overlapping image sequences. To achieve long-term autonomy, we further increase the robustness of our place recognition algorithm by incorporating information from multiple image sequences, collected along different overlapping and non-overlapping routes. This allows us to grow the coverage of the environment in terms of area as well as scene appearances. The reference dataset then contains more images to match against, which increases the probability of finding a matching image and can lead to improved localization. To deploy a robot that performs localization in large-scale environments over extended periods of time, however, collecting a reference dataset may be a tedious, resource-consuming and in some cases intractable task. Avoiding an explicit map collection stage fosters faster deployment of robotic systems in the real world, since no map has to be collected beforehand. With our visual place recognition approach the map collection stage can be skipped, as its general formulation lets us incorporate information from a publicly available source, e.g. Google Street View. This enables us to perform place recognition on already existing publicly available data and thus avoid a costly mapping phase.
In this thesis, we additionally show how to organize the images from a publicly available source into sequences to perform out-of-the-box visual place recognition at city scale, without previously collecting the otherwise required reference image sequences. All approaches described in this thesis have been published in peer-reviewed conference papers and journal articles. In addition, most of the presented contributions have been released publicly as open source software.

    Dynamic shading systems: A review of design parameters, platforms and evaluation strategies

    Advancements in software and hardware technologies provide opportunities for solar shading systems to function dynamically within their context. This development has helped dynamic shading systems respond to variable environmental parameters such as sun angles and solar insolation. However, the technical understanding of system design, mechanisms and control methods presents a challenge for architects and designers. Therefore, this study aims to review the current applications and trends of dynamic shading systems to clarify the potential and limitations of enhancing system performance based on integrated design objectives. The study assessed several systems on the basis of a critical review to identify different models, applications and methodologies. It is divided into two main sections: (i) design elements and platforms that engage with specific methods in creating a dynamic shading system and (ii) evaluation strategies to examine system performance. The systems were investigated based on the multiplicity and integration of the parameters involved through various components, such as architectural, mechanical, operational and automation components. The review analysed the studies on two bases: (1) geometric-based analysis, which distinguishes between simple and complex shading models, and (2) performance-based analysis, which assesses the shading systems using two groups of methodologies, namely theoretical and experimental. The outcome of the review reflects a clear classification of shading models and a comprehensive analysis of their performance. Overall, this study provides a systematic framework for architects based on thorough research and investigation. Finally, the study introduces several findings and recommendations to improve the performance of dynamic shading systems.