
    Robot Localization Using Visual Image Mapping

    One critical step in giving the Air Force the capability to explore unknown environments is for an autonomous agent to be able to determine its own location. The calculation of the robot's pose is an optimization problem that uses the robot's internal navigation sensors and data fusion of range sensor readings to find the most likely pose. This data fusion process requires the simultaneous generation of a map, which the autonomous vehicle can then use to avoid obstacles, communicate with other agents in the same environment, and locate targets. Our solution entails mounting a Class 1 laser on an ERS-7 AIBO. The laser projects a horizontal line on obstacles in the AIBO camera's field of view. Range readings are determined by capturing and processing multiple image frames, resolving the laser line relative to the horizon, and extracting distance information to each obstacle. This range data is then used in conjunction with mapping and localization software to accurately navigate the AIBO.
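
    The triangulation step is not spelled out above; the following is a minimal sketch of how range can be recovered from the image row at which the projected laser line appears, assuming a level camera and a horizontal laser plane offset vertically from the optical center. The function name and all parameter values are illustrative, not the system's actual calibration.

        import math

        def laser_line_range(pixel_row, image_height=160, vfov_deg=45.0,
                             baseline_m=0.05):
            """Estimate the range to an obstacle from the image row at which
            the projected laser line appears (camera/laser triangulation).

            Assumes the laser plane is horizontal and sits baseline_m below
            the camera's optical center; all values here are illustrative.
            """
            # Convert the vertical field of view to a focal length in pixels.
            focal_px = (image_height / 2) / math.tan(math.radians(vfov_deg) / 2)
            # Angle of the observed row below the optical axis.
            offset_px = pixel_row - image_height / 2
            ray_angle = math.atan2(offset_px, focal_px)
            if ray_angle <= 1e-6:
                return float('inf')  # at or above the horizon: no nearby obstacle
            # The camera ray meets the laser plane at depth d = b / tan(angle).
            return baseline_m / math.tan(ray_angle)

    Rows farther below the horizon correspond to closer obstacles, which is why resolving the line's position relative to the horizon yields per-obstacle distances.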

    Aerospace medicine and biology: A continuing bibliography with indexes, supplement 183

    This bibliography lists 273 reports, articles, and other documents introduced into the NASA scientific and technical information system in July 1978.

    Biomechanics of predator–prey arms race in lion, zebra, cheetah and impala

    The fastest and most manoeuvrable terrestrial animals are found in savannah habitats, where predators chase and capture running prey. Hunt outcome and success rate are critical to survival, so both predator and prey should evolve to be faster and/or more manoeuvrable. Here we compare locomotor characteristics in two pursuit predator–prey pairs, lion–zebra and cheetah–impala, in their natural savannah habitat in Botswana. We show that although cheetahs and impalas were universally more athletic than lions and zebras in terms of speed, acceleration, and turning, within each predator–prey pair the predators had 20% higher muscle fibre power, 37% greater acceleration capacity, and 72% greater deceleration capacity than their prey. We simulated hunt dynamics with these data and showed that hunts at lower speeds enable prey to use their maximum manoeuvring capacity, favouring prey survival, and that the predator needs to be more athletic than its prey to sustain a viable success rate.
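
    The hunt-dynamics model itself is not given above, but the reason lower-speed hunts favour prey follows from basic turning mechanics: with a fixed grip-limited lateral acceleration, the minimum turn radius grows with the square of speed while the achievable turn rate falls with speed. A minimal illustration with hypothetical numbers (not the measured values from the study):

        def turn_capacity(speed_ms, max_lateral_accel_ms2):
            """Turning capability at a given speed: radius = v^2 / a, rate = a / v."""
            radius_m = speed_ms ** 2 / max_lateral_accel_ms2
            turn_rate_rad_s = max_lateral_accel_ms2 / speed_ms
            return radius_m, turn_rate_rad_s

        # Illustrative values only: a slower animal can turn far more sharply.
        for v in (5.0, 10.0, 15.0):
            r, w = turn_capacity(v, max_lateral_accel_ms2=8.0)
            print(f"{v:4.1f} m/s -> turn radius {r:6.1f} m, turn rate {w:4.2f} rad/s")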

    Mobile Wound Assessment and 3D Modeling from a Single Image

    The prevalence of camera-enabled mobile phones has made mobile wound assessment a viable treatment option for millions of previously difficult-to-reach patients. We have designed a complete mobile wound assessment platform to ameliorate the many challenges of chronic wound care. Chronic wounds and infections are the most severe, costly, and fatal types of wounds, placing them at the center of mobile wound assessment. Wound physicians assess thousands of single-view wound images from all over the world, and it may be difficult to determine the location of the wound on the body, for example when the image is taken at close range. In our solution, end-users capture an image of the wound with their mobile camera. The wound image is segmented and classified using modern convolutional neural networks and is stored securely in the cloud for remote tracking. We use an interactive, semi-automated approach to allow users to specify the location of the wound on the body. To accomplish this we have created, to the best of our knowledge, the first 3D human surface anatomy labeling system, based on the current NYU and Anatomy Mapper labeling systems. To interactively view wounds in 3D, we present an efficient projective texture mapping algorithm for texturing wounds onto a 3D human anatomy model. In so doing, we demonstrate an approach to 3D wound reconstruction that works even for a single wound image.
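
    Projective texture mapping can be summarized as projecting each mesh vertex through a virtual camera matching the photo's viewpoint and using the resulting pixel coordinates as texture coordinates. The sketch below makes that concrete; the function name, signature, and plain pinhole model are assumptions for illustration, not the paper's implementation.

        import numpy as np

        def projective_uvs(vertices, world_to_cam, focal_px, img_w, img_h):
            """Per-vertex UVs from projecting mesh vertices through a pinhole
            camera aligned with the wound photo's viewpoint (illustrative).

            vertices: (N, 3) world-space mesh vertices
            world_to_cam: 4x4 rigid transform into camera space
            Returns (N, 2) UVs in [0, 1]; vertices behind the camera get NaN.
            """
            n = vertices.shape[0]
            homo = np.hstack([vertices, np.ones((n, 1))])
            cam = (world_to_cam @ homo.T).T[:, :3]
            z = cam[:, 2]
            uv = np.full((n, 2), np.nan)
            front = z > 1e-6                      # only vertices in front
            # Pinhole projection to pixels, then normalize to texture space.
            px = focal_px * cam[front, 0] / z[front] + img_w / 2
            py = focal_px * cam[front, 1] / z[front] + img_h / 2
            uv[front] = np.stack([px / img_w, py / img_h], axis=1)
            return uv  # v may need flipping depending on texture convention

    Visibility handling (occluded vertices should not receive wound texels) is omitted here; a depth test against the camera would typically follow.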

    Automatic 3D human modeling: an initial stage towards 2-way inside interaction in mixed reality

    3D human models play an important role in computer graphics applications across a wide range of domains, including education, entertainment, medical care simulation, and military training. In many situations, we want the 3D model to have a visual appearance that matches that of a specific living person and to be controllable by that person in a natural manner. Among other uses, this approach supports the notion of human surrogacy, where the virtual counterpart provides a remote presence for the human who controls the virtual character's behavior. In this dissertation, a human modeling pipeline is proposed for the problem of creating a 3D digital model of a real person. Our solution involves reshaping a 3D human template with a 2D contour of the participant and then mapping the captured texture of that person to the generated mesh. Our method produces an initial contour of a participant by extracting the user image from a natural background. One particularly novel contribution is the manner in which we improve the initial vertex estimate: we do so through a variant of the ShortStraw corner-finding algorithm commonly used in sketch-based systems. Here, we develop improvements to ShortStraw, presenting an algorithm called IStraw, and then adapt this improved version to create a corner-based contour segmentation algorithm. This algorithm provides significant improvements in contour matching over previously developed systems, and does so with low computational complexity.

    The system presented here advances the state of the art in the following aspects. First, the human modeling process is triggered automatically by matching the participant's pose with an initial pose through a tracking device and software; in our case, the pose capture and skeletal model are provided by the Microsoft Kinect and its associated SDK. Second, color images, depth data, and human tracking information from the Kinect and its SDK are used to automatically extract the contour of the participant and then generate a 3D human model with a skeleton. Third, using the pose and the skeletal model, we segment the contour into eight parts and match the contour points of each segment to a corresponding anchor set associated with a 3D human template. Finally, we map the color image of the person to the 3D model as its texture map. The whole modeling process takes only a few seconds, and the resulting model resembles the real person: the geometry of the 3D model matches the contour of the real person, and the model has a photorealistic texture. Furthermore, the mesh of the human model is attached to the skeleton provided in the template, so the model can support programmed animations or be controlled by real people, commonly through a literal mapping (motion capture) or a gesture-based puppetry system.

    Our ultimate goal is to create a mixed reality (MR) system in which participants can manipulate virtual objects and in which these virtual objects can affect the participants, e.g., by restricting their mobility. This MR system prototype design motivated the work of this dissertation, since a realistic 3D human model of the participant is an essential part of implementing this vision.
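
    As a point of reference for the corner-finding discussion, here is a minimal sketch of the core ShortStraw idea on an evenly resampled contour. The IStraw refinements described above (and ShortStraw's own post-processing passes, such as line and curve tests) are omitted, and the window and threshold values are the commonly published defaults rather than this dissertation's settings.

        import math

        def shortstraw_corners(points, window=3, threshold_ratio=0.95):
            """Corner indices in an evenly resampled list of 2D points.

            A 'straw' at point i is the chord length between the points
            `window` steps before and after i; corners show up as local
            minima of straw length below a median-based threshold.
            """
            def dist(a, b):
                return math.hypot(a[0] - b[0], a[1] - b[1])

            n = len(points)
            straws = [dist(points[i - window], points[i + window])
                      for i in range(window, n - window)]
            median = sorted(straws)[len(straws) // 2]
            cutoff = median * threshold_ratio

            corners = []
            for j in range(1, len(straws) - 1):
                if (straws[j] < cutoff
                        and straws[j] <= straws[j - 1]
                        and straws[j] <= straws[j + 1]):
                    corners.append(j + window)   # map back to `points` index
            return corners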

    Fall Prediction and Controlled Fall for Humanoid Robots

    Humanoids, which resemble humans in their body structure and degrees of freedom, are anticipated to work like them within infrastructures and environments constructed for humans. In such scenarios, even humans, who have exceptional manipulation, balancing, and locomotion skills, are vulnerable to falling; humanoids, being their approximate imitators, are no exception. Furthermore, their high center of gravity relative to their small support polygon makes them more prone to falling than other robots such as quadrupeds. The consequences of these falls are so devastating that a single fall can instantly destroy both the robot and its surroundings. This has become one of the major stumbling blocks that humanoids must overcome to operate in real environments. In this thesis, we therefore address the imminent fall over of humanoids by developing different control techniques. The fall over problem can be divided into three sub-issues: fall prediction, controlled fall, and fall recovery. The presented work addresses the first two, in three parts.

    First, we define what fall over means for humanoids, the different sources from which it can arise, the effects it has on both the robot and its surroundings, and how to deal with it. We then briefly introduce the overall system, including the hardware and software components used throughout the work.

    Second, we address fall prediction by proposing a generic method to predict the falling over of humanoid robots in a reliable, robust, and agile manner across various terrains and amidst arbitrary disturbances. We pursue these characteristics with a prediction principle inspired by the human balance sensory systems: we fuse multiple sensors, namely an inertial measurement unit with gyroscope (IMU), foot pressure sensors (FPS), joint encoders, and a stereo vision sensor, which correspond to the human vestibular, proprioception, and vision systems. We first define a set of feature-based fall indicator variables (FIVs) from the different sensors and extract thresholds for these FIVs analytically for four major disturbance scenarios; an online threshold interpolation technique and an impulse-adaptive counter limit are proposed to handle more generic disturbances. For the generalized prediction process, both the instantaneous value and the cumulative sum of each FIV are normalized, and a suitable value is set as the critical limit for predicting the fall. To determine the usefulness of multiple sensors and their best combination, prediction performance is evaluated on four types of terrain in three configurations: each feature individually with its respective FIVs, an intuitive performance-based (PF) fusion, and a Kalman-filter-based (KF) fusion, the latter two using multiple features. For the PF and KF techniques, performance is evaluated with and without added noise. Overall, KF performs better than PF and the individual sensor features under different conditions. The method's ability to predict falls during simple dynamic motions of the robot is also tested and verified in simulation, and the proposed prediction method is verified experimentally on flat and uneven terrains with the WALK-MAN humanoid robot.
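
    Abstracting away the specific features and fusion weights, the prediction loop reduces to normalizing each FIV by its threshold, accumulating evidence over time, and flagging a fall when either the instantaneous or the cumulative fused value crosses a critical limit. The sketch below is a simplified illustration: the feature names, equal-weight averaging, and leaky accumulator stand in for the thesis's performance-based and Kalman-filter fusion.

        class FallPredictor:
            """Threshold-based fall prediction from fall indicator
            variables (FIVs); names and fusion weights are illustrative."""

            def __init__(self, thresholds, critical_limit=1.0, decay=0.95):
                self.thresholds = thresholds          # per-feature limits
                self.cumsum = {k: 0.0 for k in thresholds}
                self.critical_limit = critical_limit
                self.decay = decay                    # old evidence fades

            def update(self, fivs, dt):
                instant = []
                for name, value in fivs.items():
                    norm = abs(value) / self.thresholds[name]
                    self.cumsum[name] = self.cumsum[name] * self.decay + norm * dt
                    instant.append(norm)
                # Equal-weight fusion of all features (a simplification).
                fused_inst = sum(instant) / len(instant)
                fused_cum = sum(self.cumsum.values()) / len(self.cumsum)
                return (fused_inst > self.critical_limit
                        or fused_cum > self.critical_limit)

        # Illustrative usage with made-up feature names and thresholds:
        predictor = FallPredictor({"imu_pitch": 0.4, "zmp_margin": 0.05})
        falling = predictor.update({"imu_pitch": 0.5, "zmp_margin": 0.02}, dt=0.01)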
    Finally, for the second sub-issue, the controlled fall, we propose two novel fall control techniques based on energy concepts that can be applied online to mitigate the impact forces incurred when a humanoid falls over. Both techniques are inspired by break-fall motions, in particular the Ukemi motion practiced in martial arts. The first technique reduces the total energy using a nonlinear control tool called energy shaping (ES) and distributes the reduced energy over multiple contacts by means of energy distribution polygons (EDP). We also include an effective orientation control to safeguard the end-effectors against ground impacts. The performance of the proposed method is evaluated numerically in dynamic simulations of the sudden falling over of a humanoid robot, for both lateral and sagittal falls, and the effectiveness of the ES and EDP concepts is verified by comparative simulations of total energy, its distribution, and the resulting impact forces. Building on this, we propose a second controller that generates an online rolling over motion, based on the hypothesis that multi-contact motions can reduce the impact forces even further. To generate an efficient rolling motion, the critical parameters, contact positions and attack angles, are identified from insights drawn from a study of rolling. In addition, an energy-injection velocity is proposed as an auxiliary rolling parameter to ensure sequential multiple contacts during the roll. An online rolling controller is synthesized to compute optimal values of these rolling parameters: the first two construct a polyhedron from suitable contacts around the humanoid's body, which distributes the energy gradually across multiple contacts and is hence called the energy distribution polyhedron, while the last injects additional energy into the system during the fall to overcome energy drought and tip over successive contacts. The proposed controller, incorporating the energy injection, minimization, and distribution techniques, produces a rolling-like motion that significantly reduces the impact forces, as verified in numerical experiments with a segmented planar robot and a full humanoid model.
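
    The benefit of distributing fall energy over multiple contacts can be illustrated with the work-energy relation: the average force at a contact is roughly the energy it must absorb divided by the compliance distance over which it stops. Splitting the same total energy across several sequential contacts therefore lowers the load at each one. The numbers and the even split below are hypothetical; the thesis shapes and distributes the energy with the ES/EDP controllers rather than splitting it evenly.

        def avg_contact_force(energy_j, stopping_distance_m):
            """Average force needed to absorb an energy over a compliance
            distance (work-energy theorem): F = E / d."""
            return energy_j / stopping_distance_m

        # Illustrative comparison: one contact vs. four sequential contacts
        # absorbing the same total fall energy (made-up values).
        total_energy_j = 500.0   # roughly a 60 kg robot falling from ~0.85 m
        d = 0.05                 # 5 cm of compliance per contact
        single = avg_contact_force(total_energy_j, d)       # 10 kN at one contact
        multi = avg_contact_force(total_energy_j / 4, d)    # 2.5 kN at each of four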