
    Development of bent-up triangular tab shear transfer (BTTST) enhancement in cold-formed steel (CFS)-concrete composite beams

    Cold-formed steel (CFS) sections have been recognised as an important contributor to environmentally responsible and sustainable structures in developed countries, and CFS framing is considered a sustainable 'green' construction material for low-rise residential and commercial buildings. However, there is still a lack of data and information on the behaviour and performance of CFS beams in composite construction. The use of CFS has been limited to structural roof trusses and a host of non-structural applications. One of the limiting features of CFS is the thinness of its section (usually between 1.2 and 3.2 mm thick), which makes it susceptible to torsional, distortional, lateral-torsional, lateral-distortional and local buckling. Hence, a reasonable solution is to resort to composite construction of a structural CFS section and a reinforced concrete deck slab, which minimises the distance from the neutral axis to the top of the deck and reduces the compressive bending stress in the CFS sections. Also, arranging two CFS channel sections back-to-back restores symmetry and suppresses lateral-torsional and, to a lesser extent, lateral-distortional buckling. The two-fold advantages promised by the system promote the use of CFS sections in a wider range of structural applications. An efficient and innovative floor system of built-up CFS sections acting compositely with a concrete deck slab was developed to provide an alternative composite system for floors and roofs in buildings. The system, called the Precast Cold-Formed Steel-Concrete Composite System, is designed to rely on composite action between the CFS sections and a reinforced concrete deck, where shear forces between them are effectively transmitted via an innovative shear transfer enhancement mechanism called the bent-up triangular tab shear transfer (BTTST). The study comprises two major components, i.e. experimental and theoretical work. The experimental work involved small-scale and large-scale laboratory testing: sixty-eight push-out test specimens and fifteen large-scale CFS-concrete composite beam specimens were tested in this program. In the small-scale tests, push-out tests were carried out to determine the strength and behaviour of the shear transfer enhancement between the CFS and concrete. Four major parameters were studied: compressive strength of concrete, CFS strength, dimensions (size and angle) of the BTTST and CFS thickness. The results from the push-out tests were used to develop an expression to predict the shear capacity of the innovative shear transfer enhancement mechanism, BTTST, in CFS-concrete composite beams. The value of shear capacity was used to calculate the theoretical moment capacity of CFS-concrete composite beams, and the theoretical moment capacities were used to validate the large-scale test results. The large-scale test specimens were tested using a four-point bending test. The push-out test results show that specimens employing BTTST achieved higher shear capacities than those relying only on the natural bond between cold-formed steel and concrete and than specimens with the Lakkavalli and Liu bent-up tab (LYLB). Load capacities for push-out test specimens with BTTST are 91% to 135% higher than for the equivalent control specimens, and 12% to 16% higher than for the LYLB specimens.
    In addition, the shear capacity of BTTST also increases with increasing dimensions (size and angle) of the BTTST, CFS thickness and concrete compressive strength. An equation was developed to determine the shear capacity of BTTST, and its predictions are in good agreement with the observed test values. The average absolute difference between the test values and predicted values was found to be 8.07%. The arithmetic mean of the test/predicted ratio (μ) for this equation is 0.9954. The standard deviation (σ) and the coefficient of variation (CV) for the proposed equation were 0.09682 and 9.7%, respectively. The proposed equation is recommended for the design of BTTST in CFS-concrete composite beams. In the large-scale tests, specimens employing BTTST showed increased strength capacities and reduced deflections. The experimental moment capacities, Mu,exp, for all specimens are above the theoretical values, Mu,theory, and show good agreement, with calculated ratios greater than 1.00. It is also found that the strength capacities of CFS-concrete composite beams increase with increasing dimensions (size and angle) of the BTTST, CFS thickness and concrete compressive strength, and that a CFS-concrete composite beam can practically be designed with partial shear connection for equal moment capacity by reducing the number of BTTSTs. It is concluded that the proposed BTTST shear transfer enhancement in CFS-concrete composite beams has sufficient strength and is feasible. Finally, a standard table of characteristic resistance, Ptab, of BTTST in normal-weight concrete was also developed to simplify the design calculation of CFS-concrete composite beams.
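    The accuracy statistics reported for the proposed shear-capacity equation (mean test/predicted ratio, standard deviation and coefficient of variation) follow directly from the paired test and predicted values; a minimal Python sketch of that calculation, using hypothetical numbers rather than the thesis data:

```python
import numpy as np

# Hypothetical paired push-out results in kN; the thesis data are not reproduced here.
shear_test = np.array([31.2, 28.4, 35.9, 40.1])   # measured shear capacities
shear_pred = np.array([30.5, 29.0, 36.8, 41.0])   # capacities from the proposed BTTST equation

ratio = shear_test / shear_pred                    # test/predicted ratio per specimen
mean_ratio = ratio.mean()                          # arithmetic mean (reported as 0.9954)
std_dev = ratio.std(ddof=1)                        # sample standard deviation (reported as 0.09682)
cov_percent = std_dev / mean_ratio * 100           # coefficient of variation (reported as 9.7%)
mean_abs_diff = (np.abs(shear_test - shear_pred) / shear_test * 100).mean()  # reported as 8.07%

print(mean_ratio, std_dev, cov_percent, mean_abs_diff)
```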

    Oncoming Vehicle Detection with Variable-Focus Liquid Lens

    Computer vision plays an important role in autonomous vehicles, robotics and manufacturing. Depth perception in computer vision requires stereo vision, or fusing a single camera with other depth sensors such as radar and lidar. Depth from focus using an adjustable lens has not been applied to autonomous vehicles. The goal of this paper is to investigate the application of depth from focus to oncoming vehicle detection. A liquid lens is used to adjust the optical power while acquiring images with the camera. The distance of an oncoming vehicle can be estimated by measuring the vehicle's sharpness in images taken with known lens settings. The results show the system detecting oncoming vehicles at ±2 m and ±4 m using the depth-from-focus technique. The distance of oncoming vehicles beyond 4 m can be estimated by analysing the relative size of the detected vehicle.
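    As a rough illustration of the depth-from-focus idea (not the authors' implementation), one can sweep the liquid-lens optical power, score the sharpness of the vehicle region at each setting, and map the sharpest setting to a distance. The focus measure below is the common variance-of-Laplacian metric; the power-to-distance calibration values are assumptions:

```python
import cv2
import numpy as np

def sharpness(gray_roi: np.ndarray) -> float:
    """Variance of the Laplacian: a standard focus measure (higher means sharper)."""
    return cv2.Laplacian(gray_roi, cv2.CV_64F).var()

def estimate_distance(frames_by_diopter: dict, roi: tuple) -> float:
    """frames_by_diopter: {total lens power in diopters: grayscale frame} from a focus sweep.
    roi: (x, y, w, h) window around the oncoming vehicle (assumed provided by a detector)."""
    x, y, w, h = roi
    scores = {p: sharpness(img[y:y + h, x:x + w]) for p, img in frames_by_diopter.items()}
    best_power = max(scores, key=scores.get)       # lens power giving the sharpest vehicle image

    # Thin-lens relation 1/f = 1/d_object + 1/d_image: with the sensor (image distance) fixed,
    # the object distance follows from the total power at best focus. The offset is a
    # hypothetical calibration constant, not a value from the paper.
    fixed_image_power = 250.0                      # assumed 1/d_image in diopters
    object_power = best_power - fixed_image_power  # 1/d_object in diopters
    return 1.0 / object_power if object_power > 0 else float("inf")
```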

    Driver tracking and posture detection using low-resolution infrared sensing

    Intelligent sensors are playing an ever-increasing role in automotive safety. This paper describes the development of a low-resolution infrared (IR) imaging system for continuous tracking and identification of driver postures and movements. The resolution of the imager is unusually low at 16 x 16 pixels. An image processing technique has been developed using neural networks operating on a segmented thermographic image to categorize driver postures. The system is able to reliably identify 18 different driver positions, and results have been verified experimentally with 20 subjects driving in a car simulator. IR imaging offers several advantages over visual sensors; it will operate in any lighting conditions and is less intrusive in terms of the privacy of the occupants. Hardware costs for the low-resolution sensor are an order of magnitude lower than those of conventional IR imaging systems. The system has been shown to have the potential to play a significant role in future intelligent safety systems.
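    The processing chain described above (segment the 16 x 16 thermographic frame, then classify the posture with a neural network) could look roughly like the following sketch; the threshold-based segmentation and the layer sizes are illustrative assumptions, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

NUM_POSTURES = 18  # the paper reports 18 distinguishable driver positions

def segment_occupant(frame: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Crude segmentation: keep only pixels warmer than a threshold (frame is 16x16, normalised 0..1)."""
    return torch.where(frame > threshold, frame, torch.zeros_like(frame))

class PostureNet(nn.Module):
    """Small classifier for 16x16 thermal images; layer sizes are illustrative only."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                  # 16 * 16 = 256 inputs
            nn.Linear(256, 64),
            nn.ReLU(),
            nn.Linear(64, NUM_POSTURES),   # one logit per posture class
        )

    def forward(self, x):
        return self.net(x)

# Usage: classify a single segmented frame (placeholder data, untrained weights).
frame = torch.rand(16, 16)
logits = PostureNet()(segment_occupant(frame).unsqueeze(0))
posture_id = logits.argmax(dim=1).item()
```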

    Unobtrusive and pervasive video-based eye-gaze tracking

    Eye-gaze tracking has long been considered a desktop technology that finds its use inside the traditional office setting, where the operating conditions may be controlled. Nonetheless, recent advancements in mobile technology and a growing interest in capturing natural human behaviour have motivated an emerging interest in tracking eye movements within unconstrained real-life conditions, referred to as pervasive eye-gaze tracking. This critical review focuses on emerging passive and unobtrusive video-based eye-gaze tracking methods in recent literature, with the aim of identifying the different research avenues being followed in response to the challenges of pervasive eye-gaze tracking. Different eye-gaze tracking approaches are discussed in order to bring out their strengths and weaknesses, and to identify any limitations, within the context of pervasive eye-gaze tracking, that have yet to be considered by the computer vision community.

    Trash and recyclable material identification using convolutional neural networks (CNN)

    The aim of this research is to improve municipal trash collection using image processing algorithms and deep learning technologies for detecting trash in public spaces. This research will help to improve trash management systems and create a smart city. Two convolutional neural networks (CNNs), both based on the AlexNet architecture, were developed to search for trash objects in an image and to separate recyclable items from landfill trash objects, respectively. The two-stage CNN system was first trained and tested on the benchmark TrashNet indoor image dataset and performed well, proving the concept. The system was then trained and tested on outdoor images taken by the authors in the intended usage environment. Using the outdoor image dataset, the first CNN achieved a preliminary 93.6% accuracy in identifying trash and non-trash items in a database of assorted trash images. A second CNN was then trained to distinguish trash destined for landfill from recyclable items, with an accuracy ranging from 89.7% to 93.4%, and 92% overall. A future goal is to integrate this image-processing-based trash identification system into a smart trashcan robot with a camera that takes real-time photos, so that it can detect and collect the trash all around it.
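    A minimal sketch of the two-stage AlexNet-based pipeline described above, assuming a binary decision at each stage and standard torchvision components (the class labels, preprocessing and training are assumptions; both networks would still need to be trained on the respective datasets):

```python
import torch
import torch.nn as nn
import torchvision.models as models

def alexnet_classifier(num_classes: int) -> nn.Module:
    """AlexNet backbone with its final layer resized for the given number of classes."""
    net = models.alexnet()                          # randomly initialised; weights come from training
    net.classifier[6] = nn.Linear(4096, num_classes)
    return net

# Stage 1: trash vs. non-trash. Stage 2: recyclable vs. landfill.
stage1 = alexnet_classifier(num_classes=2)
stage2 = alexnet_classifier(num_classes=2)

def classify(image: torch.Tensor) -> str:
    """image: a preprocessed 3x224x224 tensor (AlexNet's expected input size)."""
    x = image.unsqueeze(0)
    if stage1(x).argmax(1).item() == 0:             # assumed label 0 = "not trash"
        return "not trash"
    return "recyclable" if stage2(x).argmax(1).item() == 0 else "landfill"
```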

    Development of Cognitive Capabilities in Humanoid Robots

    Building intelligent systems with a human level of competence is the ultimate grand challenge for science and technology in general, and especially for the computational intelligence community. Recent theories in autonomous cognitive systems have focused on the close integration (grounding) of communication with perception, categorisation and action. Cognitive systems are essential for integrated multi-platform systems that are capable of sensing and communicating. This thesis presents a cognitive system for a humanoid robot that integrates abilities such as object detection and recognition, merged with natural language understanding and refined motor controls. The work includes three studies: (1) the generic manipulation of objects using the NMFT algorithm, successfully testing the extension of NMFT to the control of robot behaviour; (2) the development of a robotic simulator; (3) robotic simulation experiments showing that a humanoid robot is able to acquire complex behavioural, cognitive, and linguistic skills through individual and social learning. The robot is able to learn to handle and manipulate objects autonomously, to cooperate with human users, and to adapt its abilities to changes in internal and environmental conditions. The model and the experimental results reported in this thesis emphasise the importance of embodied cognition, i.e. the physical interaction between the humanoid robot's body and the environment.

    Modeling and applications of the focus cue in conventional digital cameras

    The focus of digital cameras plays a fundamental role in both the quality of the acquired images and the perception of the imaged scene. This thesis studies the focus cue in conventional cameras with focus control, such as cellphone cameras, photography cameras, webcams and the like. A deep review of the theoretical concepts behind focus in conventional cameras reveals that, despite its usefulness, the widely known thin lens model has several limitations for solving different focus-related problems in computer vision. In order to overcome these limitations, the focus profile model is introduced as an alternative to classic concepts, such as the near and far limits of the depth-of-field. The new concepts introduced in this dissertation are exploited for solving diverse focus-related problems, such as efficient image capture, depth estimation, visual cue integration and image fusion. The results obtained through an exhaustive experimental validation demonstrate the applicability of the proposed models.
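    For context, the classic thin-lens background that the thesis argues is too limited can be written in its standard form; the notation below (focal length f, f-number N, circle of confusion c, focus distance s) is the usual photographic convention rather than anything taken from the thesis itself:

```latex
% Thin-lens imaging equation relating focal length f, object distance s and image distance s'
\frac{1}{f} = \frac{1}{s} + \frac{1}{s'}

% Hyperfocal distance and the near/far depth-of-field limits around the focus distance s
H = \frac{f^2}{N c} + f, \qquad
D_{\mathrm{near}} = \frac{s\,(H - f)}{H + s - 2f}, \qquad
D_{\mathrm{far}} = \frac{s\,(H - f)}{H - s} \quad (s < H,\ \text{otherwise } D_{\mathrm{far}} \to \infty)
```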

    High-quality face capture, animation and editing from monocular video

    Digitization of virtual faces in movies requires complex capture setups and extensive manual work to produce superb animations and video-realistic editing. This thesis pushes the boundaries of the digitization pipeline by proposing automatic algorithms for high-quality 3D face capture and animation, as well as photo-realistic face editing. These algorithms reconstruct and modify faces in 2D videos recorded in uncontrolled scenarios and illumination. In particular, advances in three main areas offer solutions for the lack of depth and the overall uncertainty in video recordings. First, contributions in capture include model-based reconstruction of detailed, dynamic 3D geometry that exploits optical and shading cues, multilayer parametric reconstruction of accurate 3D models in unconstrained setups based on inverse rendering, and regression-based 3D lip shape enhancement from high-quality data. Second, advances in animation include video-based face reenactment based on robust appearance metrics and temporal clustering, performance-driven retargeting of detailed facial models in sync with audio, and the automatic creation of personalized controllable 3D rigs. Finally, advances in plausible photo-realistic editing include dense face albedo capture and mouth interior synthesis using image warping and 3D teeth proxies. High-quality results attained on challenging application scenarios confirm the contributions and show great potential for the automatic creation of photo-realistic 3D faces.