
    Aquaculture and marketing of the Florida Bay Scallop in Crystal River, Florida

    The overall goal of this study was to develop a new fishery resource product through open-water aquaculture for the west coast of Florida that would compete as a non-traditional product through market development. Specific objectives were as follows: 1. To grow a minimum of 50,000 juvenile scallops to a minimum market size of 40 mm in a cage and float system in the off-shore waters of Crystal River, Florida. 2. To determine the growth rate, survival, and time to market size for the individuals in this system, and to compare this area to other similar projects, such as those in Virginia. 3. To introduce local fishermen and the aquaculture students at Crystal River High School to the hatchery, nursery, and grow-out techniques. 4. To determine the economic and financial characteristics of bay scallop culture in Florida and assess the sensitivity of projected costs and earnings to changes in key technical, managerial, and market-related parameters. 5. To determine the market acceptability and necessary marketing strategy for whole bay scallop product in Florida. (PDF has 99 pages.)

    Automated interpretation of benthic stereo imagery

    Autonomous benthic imaging reduces human risk and increases the amount of collected data. However, manually interpreting these high volumes of data is onerous, time-consuming and, in many cases, infeasible. The objective of this thesis is to improve the scientific utility of these large image datasets. Fine-scale terrain complexity is typically quantified by rugosity and measured by divers using chains and tape measures. This thesis proposes a new technique for measuring terrain complexity from 3D stereo image reconstructions, which is non-contact and can be calculated at multiple scales over large spatial extents. Using robots, terrain complexity can be measured without endangering humans, beyond scuba depths. Results show that this approach is more robust, flexible and easily repeatable than traditional methods. These proposed terrain complexity features are combined with visual colour and texture descriptors and applied to classifying imagery. New multi-dataset feature selection methods are proposed for performing feature selection across multiple datasets, and are shown to improve the overall classification performance. The results show that the most informative predictors of benthic habitat types are the new terrain complexity measurements. This thesis presents a method that aims to reduce human labelling effort while maximising classification performance by combining pre-clustering with active learning. The results support that utilising the structure of the unlabelled data in conjunction with uncertainty sampling can significantly reduce the number of labels required for a given level of accuracy. Typically 0.00001–0.00007% of image data is annotated and processed for science purposes (20–50 points in 1–2% of the images).
This thesis proposes a framework that uses existing human-annotated point labels to train a superpixel-based automated classification system, which can extrapolate the classified results to every pixel across all the images of an entire survey.
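The rugosity measure mentioned above, traditionally a chain-over-tape ratio measured by divers, has a simple digital analogue once a 3D reconstruction exists: the ratio of true surface area to projected planar area over a patch. The following is a toy sketch of that idea on a hypothetical gridded height field, not the thesis's actual multi-scale method:

```python
import numpy as np

def rugosity(z, cell=1.0):
    """Surface rugosity of a gridded height field `z`: ratio of 3D
    surface area to planar (projected) area. Flat terrain gives 1.0;
    more complex terrain gives values > 1.0."""
    surf = 0.0
    rows, cols = z.shape
    # Split each grid cell into two triangles and sum their 3D areas.
    for i in range(rows - 1):
        for j in range(cols - 1):
            p00 = np.array([j * cell, i * cell, z[i, j]])
            p01 = np.array([(j + 1) * cell, i * cell, z[i, j + 1]])
            p10 = np.array([j * cell, (i + 1) * cell, z[i + 1, j]])
            p11 = np.array([(j + 1) * cell, (i + 1) * cell, z[i + 1, j + 1]])
            surf += 0.5 * np.linalg.norm(np.cross(p01 - p00, p10 - p00))
            surf += 0.5 * np.linalg.norm(np.cross(p01 - p11, p10 - p11))
    planar = (rows - 1) * (cols - 1) * cell ** 2
    return surf / planar

flat = np.zeros((8, 8))                                        # toy data
rough = np.fromfunction(lambda i, j: np.sin(i) * np.cos(j), (8, 8))
print(rugosity(flat))   # 1.0 for flat terrain
print(rugosity(rough))  # > 1.0 for undulating terrain
```

Because the patch comes from imagery rather than a physical chain, the same computation can be repeated at any window size, which is what makes the multi-scale analysis described above possible.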

    Resource Allocation Framework: Validation of Numerical Models of Complex Engineering Systems against Physical Experiments

    An increasing reliance on complex numerical simulations for high consequence decision making is the motivation for experiment-based validation and uncertainty quantification to assess, and when needed, to improve the predictive capabilities of numerical models. Uncertainties and biases in model predictions can be reduced by taking two distinct actions: (i) increasing the number of experiments in the model calibration process, and/or (ii) improving the physics sophistication of the numerical model. Therefore, decision makers must select between further code development and experimentation while allocating the finite amount of available resources. This dissertation presents a novel framework to assist in this selection between experimentation and code development for model validation strictly from the perspective of predictive capability. The reduction and convergence of discrepancy bias between model prediction and observation, computed using a suitable convergence metric, play a key role in the conceptual formulation of the framework. The proposed framework is demonstrated using two non-trivial case study applications on the Preston-Tonks-Wallace (PTW) code, which is a continuum-based plasticity approach to modeling metals, and the ViscoPlastic Self-Consistent (VPSC) code which is a mesoscopic plasticity approach to modeling crystalline materials. Results show that the developed resource allocation framework is effective and efficient in path selection (i.e. experimentation and/or code development) resulting in a reduction in both model uncertainties and discrepancy bias. The framework developed herein goes beyond path selection in the validation of numerical models by providing a methodology for the prioritization of optimal experimental settings and an algorithm for prioritization of code development. 
If the path selection algorithm selects the experimental path, optimal selection of the settings at which these physical experiments are conducted, as well as the sequence of these experiments, is vital to maximize the gain in predictive capability of a model. Batch Sequential Design (BSD) is the methodology utilized in this work to select the optimal experimental settings. A new BSD selection criterion, Coverage Augmented Expected Improvement for Predictive Stability (C-EIPS), is developed to balance the reduction of the model discrepancy bias with the coverage of the experiments within the domain of applicability. The functional form of the new criterion, C-EIPS, is demonstrated to outperform its predecessor, the EIPS criterion, and the distance-based criterion when discrepancy bias is high and coverage is low, while exhibiting comparable performance to the distance-based criterion in efficiently maximizing the predictive capability of the VPSC model as discrepancy decreases and coverage increases. If the path selection algorithm selects the code development path, the developed framework provides an algorithm for the prioritization of code development efforts. In coupled systems, the predictive accuracy of the simulation hinges on the accuracy of the individual constituent models. The potential improvement in the predictive accuracy of the simulation that can be gained through improving a constituent model depends not only on the relative importance of that constituent, but also on its inherent uncertainty and inaccuracy. As such, a unique and quantitative code prioritization index (CPI) is proposed to accomplish the task of prioritizing code development efforts, and its application is demonstrated on a case study of a steel frame with semi-rigid connections.
Findings show that the CPI is effective in identifying the most critical constituent of the coupled system, whose improvement leads to the highest overall enhancement of the predictive capability of the coupled model.
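The abstract does not give the CPI formula, so the following is a purely hypothetical sketch of the stated idea: a constituent's priority should grow with both its relative importance to the coupled prediction and its own uncertainty and inaccuracy. All names and numbers below are illustrative, not taken from the dissertation:

```python
# Hypothetical illustration of a code prioritization index (CPI).
# The idea: a constituent's priority grows with both its influence on
# the coupled output (importance) and its own error (uncertainty, bias).
constituents = {
    # name: (relative importance, predictive uncertainty, discrepancy bias)
    "connection_model": (0.6, 0.30, 0.20),
    "beam_element":     (0.3, 0.05, 0.02),
    "column_element":   (0.1, 0.10, 0.05),
}

def cpi(importance, uncertainty, bias):
    # Importance scales how much a constituent's error pollutes the
    # coupled prediction; uncertainty and bias quantify that error.
    return importance * (uncertainty + bias)

scores = {name: cpi(*vals) for name, vals in constituents.items()}
priority = max(scores, key=scores.get)
print(priority)  # the constituent whose improvement helps most
```

With these toy numbers the semi-rigid connection model dominates, mirroring the case-study finding that the most critical constituent drives the coupled model's predictive capability.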

    Computer vision

    The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of describing two- and three-dimensional worlds are discussed. The representation of such features as texture, edges, curves, and corners is detailed. Recognition methods are described in which cross-correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.
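The recognition-by-correlation approach mentioned above can be illustrated with a minimal template matcher that maximizes the normalized cross-correlation coefficient over all window offsets. This is a brute-force sketch on synthetic data, not code from the survey:

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation coefficient in [-1, 1]."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom else 0.0

def match(image, template):
    """Slide `template` over `image`; return the (row, col) offset
    maximizing the correlation coefficient, plus that coefficient."""
    th, tw = template.shape
    best, best_rc = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            score = ncc(image[r:r + th, c:c + tw], template)
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc, best

rng = np.random.default_rng(0)
img = rng.random((20, 20))
tmpl = img[5:9, 7:11].copy()   # template cut from a known location
loc, score = match(img, tmpl)
print(loc, round(score, 3))    # recovers (5, 7) with score near 1.0
```

Production systems use FFT-based correlation or library routines rather than this O(n^4) loop, but the maximized quantity is the same coefficient.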

    Designing Inclusive Playscapes Across Sensorial + Socio-Spatial Boundaries

    The emotional experience of public environments is often considered superficial, although their configurations impact how well we can see, hear, move around, and interact in them daily. ‘Lonely, but not alone’ describes many of today’s urban dwellers. For some people, participation in civic life can be challenging, especially since the barriers (physical, psychological, etc.) faced by some are not always apparent to others, even to designers. This Major Research Project explores the relationship between the level of playfulness expressed in an urban space and user experience. Along with case study investigations and the Delphi method, 42 citizens (estimated to be 21 years of age or older) participated via interviews in Toronto, Canada. An urban design framework of 64 playful design features, The Multi-Playscape Toolkit, which can be used by urban designers and architects, has been developed and now contributes to the knowledge base. Using the Toronto context, recommendations are provided to promote more urban playfulness, more lenient policymaking, and more inclusive design practices in our public spaces.

    Crack Control and Bond Performance of Alternative Coated Reinforcements in Concrete

    Concrete cracking in structures is a ubiquitous problem which can lead to the deterioration of the structure. Beyond affecting the strength of a structure, cracking impacts the serviceability criteria as well. Although cracking is practically inevitable in any structure, it has to be minimized in order to maintain a structure’s service life effectively. Cracking in reinforced concrete structures is related to the bond strength developed between the bar and the concrete. It also depends on the ability of the bar to resist the stresses due to shrinkage so as to minimize the crack. Another important aspect is the resistance offered by the reinforcement to minimize the residual crack width after withdrawal of high loads beyond or near the yielding capacity. All these parameters were considered and studied as part of this dissertation through experimental testing. The variables used in the tests were alternative coated reinforcements such as textured epoxy, hot-dipped galvanized, and continuously galvanized reinforcements. Variables also included uncoated (black) and conventional epoxy (smooth epoxy) reinforcements, which have been used in structures for many decades. Considering all the tests conducted, an overview analysis was done to determine the best-performing bar coating for crack control and rebar-concrete bond. The results show that textured epoxy bars were the best performer in 47% of tests. On the other hand, smooth epoxy bars were the worst performer in 47% of tests. Uncoated, hot-dipped galvanized, and continuously galvanized bars were typically in between textured and smooth epoxy bars in their performance. This dissertation also analytically evaluated the bond mechanics associated with the variable bar coatings considered in the experimental program. Two different models of bar force variation at and around a crack location were considered to calculate the length over which forces transfer between the bar and concrete.
The calculated lengths were compared to data from an associated peer study. The results suggest that a small portion of a bar is de-bonded adjacent to the cracks and that forces transfer gradually at locations beyond the debonding. This inference applies to all the bar coatings in the data except the continuously galvanized reinforcement, for which no conclusions could be drawn because of the limited amount and randomness of the data.
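The dissertation's two bar-force models are not reproduced here, but the quantity they target, the force-transfer length, can be illustrated with a textbook equilibrium estimate: the bar force at a crack, A_s·f_s, is balanced by an average bond stress u acting on the bar perimeter over a length l_t, giving l_t = f_s·d_b/(4·u). The numbers below are hypothetical, for illustration only:

```python
def transfer_length(f_s, d_b, u_avg):
    """Length (same units as d_b) over which a bar stress f_s is
    transferred to the concrete through an average bond stress u_avg.
    Equilibrium: (pi * d_b * l_t) * u_avg = (pi * d_b**2 / 4) * f_s,
    so l_t = f_s * d_b / (4 * u_avg)."""
    return f_s * d_b / (4.0 * u_avg)

# Hypothetical values: a 16 mm bar stressed to 300 MPa with an
# assumed average bond stress of 5 MPa.
l_t = transfer_length(f_s=300.0, d_b=16.0, u_avg=5.0)
print(l_t)  # 240.0 mm
```

A stronger bond (larger u_avg, as with textured coatings) shortens this length, which is consistent with the link the abstract draws between bond strength and crack control.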

    Image analysis for extracapsular hip fracture surgery

    PhD Thesis. During the implant insertion phase of extracapsular hip fracture surgery, a surgeon visually inspects digital radiographs to infer the best position for the implant. The inference is made by “eye-balling”. This clearly leaves room for trial and error, which is not ideal for the patient. This thesis presents an image analysis approach to estimating the ideal positioning for the implant using a variant of the deformable templates model known as the Constrained Local Model (CLM). The model is a synthesis of shape and local appearance models learned from a set of annotated landmarks and their corresponding local patches extracted from digital femur x-rays. The CLM in this work highlights both Principal Component Analysis (PCA) and Probabilistic PCA (PPCA) as regularisation components; the PPCA variant is a novel adaptation of the CLM framework that accounts for landmark annotation error, which the PCA version does not. Our CLM implementation is used to articulate two clinical metrics, namely the Tip-Apex Distance and Parker’s Ratio (routinely used by clinicians to assess the positioning of the surgical implant during hip fracture surgery), within the image analysis framework. With our model, we were able to automatically localise significant landmarks on the femur, which were subsequently used to measure Parker’s Ratio directly from digital radiographs and determine an optimal placement for the surgical implant in 87% of the instances; thereby achieving fully automatic measurement of Parker’s Ratio as opposed to manual measurements currently performed in the surgical theatre during hip fracture surgery.
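Once landmarks are localised, metrics such as Parker's Ratio reduce to simple geometry: the implant axis's crossing point is expressed as a fraction of the femoral-head diameter. The sketch below is a simplified, hypothetical landmark-based version of that measurement, not the thesis's CLM pipeline, and the coordinates are invented:

```python
import numpy as np

def parkers_ratio(head_edge_a, head_edge_b, screw_point):
    """Position of the implant axis along the femoral-head diameter,
    as a percentage (0% at one cortical edge, 100% at the other).
    Inputs are 2D landmark points from a radiograph."""
    a, b, p = map(np.asarray, (head_edge_a, head_edge_b, screw_point))
    axis = b - a
    # Project the screw point onto the diameter a -> b.
    t = np.dot(p - a, axis) / np.dot(axis, axis)
    return 100.0 * t

# Hypothetical landmark coordinates (pixels), for illustration only.
ratio = parkers_ratio((100, 200), (180, 200), (140, 196))
print(ratio)  # 50.0: implant centred along the head diameter
```

In the automated pipeline described above, the CLM supplies these landmark positions, so the metric is computed without any manual measurement in theatre.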

    Ergonomic standards for pedestrian areas for disabled people: literature review and consultations

    As part of the project for the Transport and Road Research Laboratory concerned with the development of design guidance for pedestrian areas and footways to satisfy the needs of disabled and elderly people, a thorough examination of the literature was required. In addition, the literature search was to be complemented by a wide-ranging series of discussions with local authorities, organisations representing the interests of elderly and disabled people, and other interested agencies. This Working Paper sets out the findings of this exercise. The objective of the literature review and the consultations was to identify the key impediments for elderly and disabled people when using pedestrian areas and footways. The current guidelines and standards relating to footways, pedestrianised areas and access to buildings were to be identified and their adequacy commented upon, as were the conflicts such recommendations raise between various groups of disabled people and with able-bodied people. The consultations were intended to provide greater insights into what the literature highlighted, and to suggest possible solutions. The literature review produced over 400 key references and a list of 35 impediments. A more detailed examination of the literature and the consultations reduced this list to six key impediments, namely: parking; public transport waiting areas; movement distances; surface conditions; ramps; and information provision. The type and scale of problem created by the above impediments for various groups of disabled and elderly people are discussed, together with their measurement and assessment. The type and adequacy of existing design standards and guidance relating to these impediments are also outlined.

    ULTRA CLOSE-RANGE DIGITAL PHOTOGRAMMETRY AS A TOOL TO PRESERVE, STUDY, AND SHARE SKELETAL REMAINS

    Skeletal collections around the world hold valuable and intriguing knowledge about humanity. Their potential value could be fully exploited by overcoming current limitations in documenting and sharing them. Virtual anthropology provides effective ways to study and value skeletal collections using three-dimensional (3D) data, e.g. allowing powerful comparative and evolutionary studies, along with specimen preservation and dissemination. CT- and laser scanning are the most used techniques for three-dimensional reconstruction. However, they are resource-intensive and, therefore, difficult to apply to large samples or skeletal collections. Ultra close-range digital photogrammetry (UCR-DP) enables photorealistic 3D reconstructions from simple photographs of the specimen. However, it is the least used method in skeletal anthropology, and the lack of appropriate protocols often limits the quality of its outcomes. This Ph.D. thesis explored UCR-DP application in skeletal anthropology. The state of the art of this technique was studied, and a new approach based on cloud computing was proposed and validated against current gold standards. This approach relies on the processing capabilities of remote servers and a free-for-academic-use software environment; it proved to produce measurements equivalent to those of osteometry and, in many cases, more precise than those of CT-scanning. Cloud-based UCR-DP allowed the processing of multiple 3D models at once, leading to low-cost, quick, and effective 3D production. The technique was successfully used to digitally preserve an initial sample of 534 crania from the skeletal collections of the Museo Sardo di Antropologia ed Etnografia (MuSAE, Università degli Studi di Cagliari).
Best practices in using the technique for skeletal collection dissemination were studied, and several applications were developed, including MuSAE online virtual tours, virtual physical anthropology labs and distance learning, durable online dissemination, and values-led, participatorily designed interactive and immersive exhibitions at the MuSAE. The sample will be used in a future population study of Sardinian skeletal characteristics from the Neolithic to modern times. In conclusion, cloud-based UCR-DP offers many significant advantages over other 3D scanning techniques: greater versatility in terms of application range and technical implementation, scalability, photorealistic restitution, and reduced requirements relating to hardware, labour, time, and cost. It is, therefore, the best choice to document and value effectively large skeletal samples and collections.