
    Object Recognition Using Convolutional Neural Networks

    This chapter presents the main techniques for detecting objects within images. In recent years there have been remarkable advances in areas such as machine learning and pattern recognition, both driven by convolutional neural networks (CNNs). This progress is largely due to the increased parallel processing power provided by graphics processing units (GPUs). In this chapter, the reader will understand the details of the state-of-the-art algorithms for object detection in images, namely the faster region-based convolutional neural network (Faster R-CNN), You Only Look Once (YOLO), and the Single Shot MultiBox Detector (SSD). We present the advantages and disadvantages of each technique based on a series of comparative tests, using criteria such as accuracy, training difficulty, and ease of implementation. With this chapter, we intend to contribute to a better understanding of the state of the art in machine learning and convolutional networks for solving problems in computer vision and object detection.
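    All three detector families compared in this chapter (Faster R-CNN, YOLO, SSD) share a common post-processing step: non-maximum suppression (NMS), which collapses overlapping candidate boxes into a single detection. A minimal sketch, independent of any particular framework:

    ```python
    def iou(a, b):
        """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter) if inter else 0.0

    def non_max_suppression(boxes, scores, iou_threshold=0.5):
        """Keep the highest-scoring box in each cluster of overlapping detections."""
        order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
        keep = []
        while order:
            best = order.pop(0)
            keep.append(best)
            # Drop every remaining box that overlaps the kept one too much.
            order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
        return keep
    ```

    Production implementations vectorize this loop on the GPU, but the logic is the same: sort by confidence, keep the best box, discard its near-duplicates.
    
    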

    Quantitative Behavior Tracking of Xenopus laevis Tadpoles for Neurobiology Research

    Xenopus laevis tadpoles are a useful animal model for neurobiology research because they provide a means to study the development of the brain in a species that is both physiologically well understood and logistically easy to maintain in the laboratory. For behavioral studies, however, their individual and social swimming patterns represent a largely untapped trove of data, owing to the lack of a computational tool that can accurately track multiple tadpoles at once in video feeds. This paper presents a system developed to accomplish this task, which can reliably track up to six tadpoles in a controlled environment, thereby enabling new research studies that were previously not feasible.
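    The core difficulty in multi-animal tracking is data association: linking each tadpole's track to the right detection in the next video frame. The paper's method is not reproduced here, but a minimal sketch of the standard greedy nearest-neighbor assignment (with a distance gate so distant detections start new tracks) illustrates the idea:

    ```python
    import math

    def assign_tracks(tracks, detections, max_dist=50.0):
        """Greedily match each track's last known (x, y) position to the nearest
        new detection within a distance gate. Returns {track_id: detection_index};
        unmatched detections would seed new tracks."""
        assignments, used = {}, set()
        # Consider all track/detection pairs from closest to farthest.
        pairs = sorted(
            ((math.dist(pos, det), tid, di)
             for tid, pos in tracks.items()
             for di, det in enumerate(detections)),
            key=lambda p: p[0],
        )
        for dist, tid, di in pairs:
            if dist > max_dist:
                break  # remaining pairs are even farther apart
            if tid not in assignments and di not in used:
                assignments[tid] = di
                used.add(di)
        return assignments
    ```

    Real trackers typically add motion prediction and optimal (Hungarian) assignment, which matters when tadpoles cross paths; the greedy version shows the basic bookkeeping.
    
    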

    AlphaTracker: a multi-animal tracking and behavioral analysis tool

    Computer vision has emerged as a powerful tool to elevate behavioral research. This protocol describes a computer vision machine learning pipeline called AlphaTracker, which has minimal hardware requirements and produces reliable tracking of multiple unmarked animals, as well as behavioral clustering. AlphaTracker pairs top-down pose-estimation software with unsupervised clustering to facilitate behavioral motif discovery, accelerating behavioral research. All steps of the protocol are provided as open-source software with graphical user interfaces or implementable with command-line prompts. Users with a graphics processing unit (GPU) can model and analyze animal behaviors of interest in less than a day. AlphaTracker greatly facilitates the analysis of the mechanisms of individual and social behavior and of group dynamics.
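    The "unsupervised clustering" step in pipelines of this kind groups pose-derived feature vectors into recurring behavioral motifs without labels. As a hedged illustration (not AlphaTracker's actual algorithm), a plain k-means over feature vectors captures the idea:

    ```python
    import random

    def kmeans(points, k, iters=50, seed=0):
        """Plain k-means: cluster pose-derived feature vectors into k motifs.
        Returns (centers, labels)."""
        rng = random.Random(seed)
        centers = rng.sample(points, k)

        def nearest(p):
            return min(range(k),
                       key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))

        for _ in range(iters):
            # Assign each point to its nearest center.
            groups = [[] for _ in range(k)]
            for p in points:
                groups[nearest(p)].append(p)
            # Move each center to the mean of its assigned points.
            centers = [
                tuple(sum(vals) / len(g) for vals in zip(*g)) if g else centers[j]
                for j, g in enumerate(groups)
            ]
        return centers, [nearest(p) for p in points]
    ```

    In practice, each "point" would be a short window of keypoint trajectories, and the discovered clusters are then inspected and named by the experimenter.
    
    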

    Deep Learning Framework for Controlling Work Sequence in Collaborative Human–Robot Assembly Processes

    Project UIDB/EMS/00667/2020 (UNIDEMI). The human–robot collaboration (HRC) solutions presented so far have the disadvantage that the interaction between humans and robots is based on the human's state or on specific gestures purposely performed by the human, increasing the time required to perform a task and slowing the pace of human labor, which makes such solutions unattractive. In this study, a different HRC concept is introduced: an HRC framework for managing assembly processes that are executed simultaneously or individually by humans and robots. This framework, based on deep learning models, uses only one type of data, RGB camera data, to make predictions about the collaborative workspace and human actions, and consequently to manage the assembly process. To validate the framework, an industrial HRC demonstrator was built to assemble a mechanical component. Four variants of the framework were created based on different convolutional neural network (CNN) model structures: Faster R-CNN with ResNet-50 and ResNet-101, YOLOv2, and YOLOv3. The variant with the YOLOv3 structure performed best, achieving a mean average precision (mAP) of 72.26%, and allowed the HRC industrial demonstrator to successfully complete all assembly tasks within the desired time window. The HRC framework has proven effective for industrial assembly applications.
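    The abstract describes using detector output to manage an assembly sequence. As a minimal sketch with hypothetical class labels and step names (not the paper's actual plan), the control logic can be a simple gated state machine: the next step is released only when the camera-based model reports the objects that step requires.

    ```python
    # Hypothetical assembly plan; "requires" lists object classes the RGB-camera
    # detector must report before that step may start.
    ASSEMBLY_PLAN = [
        {"step": "place_base", "actor": "robot", "requires": {"base_part"}},
        {"step": "insert_screws", "actor": "human", "requires": {"screwdriver", "hand"}},
        {"step": "attach_cover", "actor": "robot", "requires": {"cover_part"}},
    ]

    def next_action(completed_steps, detected_labels):
        """Return the first pending plan entry whose required objects are all
        currently detected, or None if we must wait for the scene to change."""
        detected = set(detected_labels)
        for entry in ASSEMBLY_PLAN:
            if entry["step"] in completed_steps:
                continue
            if entry["requires"] <= detected:
                return entry
            return None  # enforce the sequence: never skip ahead
        return None  # plan finished
    ```

    The real framework couples such sequencing with action recognition so human and robot steps can also run in parallel; the sketch shows only the detection-gated ordering.
    
    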

    Artificial intelligence in construction asset management: a review of present status, challenges and future opportunities

    The built environment is responsible for roughly 40% of global greenhouse gas emissions, making the sector a crucial factor for climate change and sustainability. Meanwhile, other sectors (like manufacturing) have adopted Artificial Intelligence (AI) to solve complex, non-linear problems and reduce waste, inefficiency, and pollution. Therefore, many research efforts in the Architecture, Engineering, and Construction community have recently tried introducing AI into building asset management (AM) processes. Since AM encompasses a broad set of disciplines, an overview of its AI applications, current research gaps, and trends is needed. In this context, this study conducted the first state-of-the-art review on AI for building asset management. A total of 578 papers were analyzed with bibliometric tools to identify prominent institutions, topics, and journals. The quantitative analysis helped determine the most researched areas of AM and which AI techniques are applied to them. These areas were further investigated through an in-depth reading of the 83 most relevant studies, selected by screening the abstracts of the articles identified in the bibliometric analysis. The results reveal many applications in the Energy Management, Condition Assessment, Risk Management, and Project Management areas. Finally, the literature review identified three main trends that can serve as a reference point for future studies by practitioners or researchers: Digital Twin, Generative Adversarial Networks (with synthetic images) for data augmentation, and Deep Reinforcement Learning.

    Multi-scale cellular imaging of DNA double strand break repair

    Live-cell and high-resolution fluorescence microscopy are powerful tools to study the organization and dynamics of DNA double-strand break repair foci and specific repair proteins in single cells. This requires specific induction of DNA double-strand breaks and fluorescent markers to follow the DNA lesions in living cells. In this review, focused on mammalian cell studies, we discuss different methods to induce DNA double-strand breaks and how to visualize and quantify repair foci in living cells. We describe different (live-cell) imaging modalities that can reveal details of the DNA double-strand break repair process across multiple time and spatial scales. In addition, recent developments in super-resolution imaging and single-molecule tracking are discussed, and how these technologies can be applied to elucidate details of the structural composition or dynamics of DNA double-strand break repair.