
    UMSL Bulletin 2023-2024

    Get PDF
    The 2023-2024 Bulletin and Course Catalog for the University of Missouri St. Louis.

    Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques

    Full text link
    The rapid growth of demanding applications in domains applying multimedia processing and machine learning has marked a new era for edge and cloud computing. These applications involve massive data and compute-intensive tasks, so the typical computing paradigms in embedded systems and data centers are strained to meet the worldwide demand for high performance. Concurrently, the semiconductor landscape of the last 15 years has established power as a first-class design concern. As a result, the computing-systems community has been forced to find alternative design approaches that deliver high-performance and/or power-efficient computing. Among the solutions examined, Approximate Computing has attracted ever-increasing interest, with research works applying approximations across the entire traditional computing stack, i.e., at the software, hardware, and architectural levels. Over the last decade, a plethora of approximation techniques has emerged in software (programs, frameworks, compilers, runtimes, languages), hardware (circuits, accelerators), and architectures (processors, memories). The current article is Part I of our comprehensive survey on Approximate Computing: it reviews its motivation, terminology, and principles, and it classifies and presents the technical details of state-of-the-art software and hardware approximation techniques.
    Comment: Under Review at ACM Computing Surveys
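    One of the software-level approximation techniques such surveys typically classify is loop perforation. As a minimal illustrative sketch (not code from the survey itself), skipping a fraction of loop iterations trades accuracy for work:

```python
def mean_exact(xs):
    return sum(xs) / len(xs)

def mean_perforated(xs, skip=2):
    # Loop perforation: process only every `skip`-th element.
    # The error introduced depends on the data distribution.
    sampled = xs[::skip]
    return sum(sampled) / len(sampled)

xs = list(range(1000))
exact = mean_exact(xs)                # 499.5
approx = mean_perforated(xs, skip=4)  # 498.0: ~4x less work, ~0.3% error
rel_error = abs(approx - exact) / exact
```

The appeal of such techniques is that the quality/effort knob (`skip` here) can be tuned per application to stay within an acceptable error bound.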

    TabR: Unlocking the Power of Retrieval-Augmented Tabular Deep Learning

    Full text link
    Deep learning (DL) models for tabular data problems are receiving increasing attention, while algorithms based on gradient-boosted decision trees (GBDT) remain a strong go-to solution. Following recent trends in other domains, such as natural language processing and computer vision, several retrieval-augmented tabular DL models have recently been proposed. For a given target object, a retrieval-based model retrieves other relevant objects, such as its nearest neighbors, from the available (training) data and uses their features or even labels to make a better prediction. However, we show that the existing retrieval-based tabular DL solutions provide only minor, if any, benefits over properly tuned retrieval-free baselines. Thus, it remains unclear whether the retrieval-based approach is a worthy direction for tabular DL. In this work, we give a strong positive answer to this question. We start by incrementally augmenting a simple feed-forward architecture with an attention-like retrieval component similar to those of many (tabular) retrieval-based models. Then, we highlight several details of the attention mechanism that turn out to have a massive impact on performance on tabular data problems but were not explored in prior work. As a result, we design TabR -- a simple retrieval-based tabular DL model which, on a set of public benchmarks, demonstrates the best average performance among tabular DL models, becomes the new state of the art on several datasets, and even outperforms GBDT models on the recently proposed ``GBDT-friendly'' benchmark (see the first figure).
    Comment: Code: https://github.com/yandex-research/tabular-dl-tab
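    The retrieval-plus-attention pattern the abstract describes can be sketched in a few lines of numpy. This is a generic nearest-neighbour weighting, not TabR's actual architecture; all names, shapes, and data here are illustrative assumptions:

```python
import numpy as np

def retrieval_prediction(x, X_train, y_train, k=3, temperature=1.0):
    # Retrieve the k training objects nearest to the target x.
    dists = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(dists)[:k]
    # Attention-like weighting: softmax over negative distances,
    # so closer neighbours contribute more to the prediction.
    logits = -dists[idx] / temperature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return float(weights @ y_train[idx])

# Toy 1-D regression data: label equals the feature value.
X_train = np.array([[0.0], [1.0], [2.0], [10.0]])
y_train = np.array([0.0, 1.0, 2.0, 10.0])
pred = retrieval_prediction(np.array([1.1]), X_train, y_train, k=2)
```

In a trained model the distances and weights would be computed in a learned embedding space; the softmax-over-similarities structure is the common core.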

    Eunomia: Enabling User-specified Fine-Grained Search in Symbolically Executing WebAssembly Binaries

    Full text link
    Although existing techniques have proposed automated approaches to alleviate the path-explosion problem of symbolic execution, users still need to optimize symbolic execution by carefully applying various search strategies. Because existing approaches mainly support only coarse-grained global search strategies, they cannot efficiently traverse complex code structures. In this paper, we propose Eunomia, a symbolic execution technique that allows users to specify local domain knowledge to enable fine-grained search. In Eunomia, we design an expressive DSL, Aes, that lets users precisely assign local search strategies to different parts of the target program. To further optimize local search strategies, we design an interval-based algorithm that automatically isolates the context of variables for different local search strategies, avoiding conflicts between strategies over the same variable. We implement Eunomia as a symbolic execution platform targeting WebAssembly, which enables us to analyze applications written in various languages (such as C and Go) that can be compiled to WebAssembly. To the best of our knowledge, Eunomia is the first symbolic execution engine that supports the full feature set of the WebAssembly runtime. We evaluate Eunomia with a dedicated microbenchmark suite for symbolic execution and six real-world applications. Our evaluation shows that Eunomia accelerates bug detection in real-world applications by up to three orders of magnitude. According to the results of a comprehensive user study, users can significantly improve the efficiency and effectiveness of symbolic execution by writing a simple and intuitive Aes script. Besides verifying six known real-world bugs, Eunomia also detected two new zero-day bugs in a popular open-source project, Collections-C.
    Comment: Accepted by ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA) 202
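    The general idea of region-scoped search strategies can be sketched generically: each code region gets its own scoring function over pending symbolic states, and the explorer always picks the best-scoring state. The snippet below is a hypothetical Python illustration of that idea only; it is not the Aes DSL, whose actual syntax is defined in the paper:

```python
# User-specified local strategies map a code region to a scoring
# function over pending symbolic states (lower score = explored first).
def depth_first(state):       # prefer deeper states in this region
    return -state["depth"]

def breadth_first(state):     # prefer shallower states elsewhere
    return state["depth"]

REGION_STRATEGIES = {"parser_loop": depth_first}  # hypothetical region name
DEFAULT_STRATEGY = breadth_first

def pick_next(worklist):
    # Score each state with the strategy local to its code region,
    # then remove and return the best-scoring state.
    scored = [(REGION_STRATEGIES.get(s["region"], DEFAULT_STRATEGY)(s), i)
              for i, s in enumerate(worklist)]
    _, best = min(scored)
    return worklist.pop(best)

states = [{"region": "parser_loop", "depth": 5},
          {"region": "main", "depth": 1},
          {"region": "parser_loop", "depth": 9}]
first = pick_next(states)   # the deepest parser_loop state
```

A real engine would additionally need the context isolation the paper describes, so that strategies scoped to different regions do not interfere over shared variables.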

    Optimising water quality outcomes for complex water resource systems and water grids

    Get PDF
    As the world progresses, water resources are likely to be subjected to much greater pressures than in the past. Even though the principal water problem revolves around inadequate and uncertain water supplies, water quality management plays an equally important role. The availability of good-quality water is paramount to the sustainability of human populations as well as the environment. Achieving water quality and quantity objectives can be conflicting and becomes more complicated with challenges like climate change, growing populations, and changed land uses. Maintaining adequate water quality in a reservoir is complicated by multiple inflows with different water quality levels, often resulting in poor stored water quality. Hence, it is fundamental to approach this issue in a more systematic, comprehensive, and coordinated fashion. Most previous studies of water resources management focused on water quantity and considered water quality separately. This research, however, considered water quantity and quality objectives simultaneously in a single model to explore and understand the relationship between them in a reservoir system. A case study area with water quantity and quality challenges was identified in Western Victoria, Australia: Taylors Lake in the Grampians system receives water from multiple sources of differing quality and quantity and exhibits the problems described above. A combined simulation and optimisation approach was adopted to carry out the analysis, and a multi-objective optimisation approach was applied to achieve optimal water availability and quality in the storage. The multi-objective optimisation model included three objective functions: water volume and two water quality parameters, salinity and turbidity. Results showed the competing nature of the water quantity and quality objectives and established the trade-offs.
It further showed that it was possible to generate a range of optimal solutions to effectively manage those trade-offs. The trade-off analysis showed that selective harvesting of inflows is effective for improving water quality in storage; however, under strict water quality restrictions there is a considerable loss in water volume. The robustness of the optimisation approach used in this study was confirmed through sensitivity and uncertainty analysis. The research also incorporated various spatio-temporal scenario analyses to systematically articulate long-term and short-term operational planning strategies, establishing operational decisions around possible harvesting regimes that achieve optimal water quantity and quality while meeting all water demands. The climate change analysis revealed that optimal management of water quantity and quality in storage becomes extremely challenging under future climate projections: the large reduction in storage volume will lead to challenges such as water supply shortfalls and an inability to undertake selective harvesting due to reduced water quality levels. In this context, selective harvesting of inflows based on water quality will no longer be an option for managing water quantity and quality optimally in storage. Significant conclusions of this research include the establishment of trade-offs between water quality and quantity objectives particular to this configuration of water supply system. The work demonstrated that selective harvesting of inflows can improve stored water quality, and this finding, along with the approach used, is a significant contribution for decision makers in the water sector. The simulation-optimisation approach is very effective in providing a range of optimal solutions, which can be used to make more informed decisions about achieving optimal water quality and quantity in storage.
It was further demonstrated that there is a range of planning periods, both long-term (>10 years) and short-term (<1 year), all of which offer distinct advantages and provide useful insights, making this an additional key contribution of the work. Importantly, climate change was also considered, and it was found that diminishing water resources, particularly in this geographic location, make it increasingly difficult to optimise both quality and quantity in storage.
    Doctor of Philosophy
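    The trade-off analysis described above rests on the notion of Pareto optimality: a solution is kept only if no other solution is at least as good in every objective and strictly better in one. A minimal sketch of this non-dominated filtering over hypothetical (made-up) harvesting solutions, with volume maximised and salinity and turbidity minimised:

```python
def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly
    # better in at least one (volume maximised; salinity and
    # turbidity minimised).
    no_worse = (a["volume"] >= b["volume"] and
                a["salinity"] <= b["salinity"] and
                a["turbidity"] <= b["turbidity"])
    better = (a["volume"] > b["volume"] or
              a["salinity"] < b["salinity"] or
              a["turbidity"] < b["turbidity"])
    return no_worse and better

def pareto_front(solutions):
    # Keep only the non-dominated solutions (the trade-off curve).
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions)]

candidates = [
    {"volume": 100, "salinity": 0.8, "turbidity": 30},  # high volume, poor quality
    {"volume": 70,  "salinity": 0.3, "turbidity": 12},  # balanced
    {"volume": 60,  "salinity": 0.4, "turbidity": 15},  # dominated by the one above
]
front = pareto_front(candidates)  # the first two candidates survive
```

Evolutionary multi-objective algorithms such as NSGA-II automate this filtering at scale, presenting decision makers with the whole trade-off curve rather than a single answer.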

    Multimodal spatio-temporal deep learning framework for 3D object detection in instrumented vehicles

    Get PDF
    This thesis presents the utilization of multiple modalities, such as image and lidar, to incorporate spatio-temporal information from sequence data into deep learning architectures for 3D object detection in instrumented vehicles. The race to autonomy in instrumented vehicles, or self-driving cars, has stimulated significant research into developing advanced driver-assistance system (ADAS) technologies, particularly perception systems. Object detection plays a crucial role in perception systems by providing spatial information to subsequent modules; hence, accurate detection is a significant task supporting autonomous driving. The advent of deep learning in computer vision applications and the availability of multiple sensing modalities such as 360° imaging, lidar, and radar have led to state-of-the-art 2D and 3D object detection architectures. Most current state-of-the-art 3D object detection frameworks consider a single-frame reference; these methods do not utilize temporal information associated with the objects or scenes in the sequence data. Thus, the present research hypothesizes that multimodal temporal information can help bridge the gap between 2D and 3D metric space by improving the accuracy of deep learning frameworks for 3D object estimation. The thesis first examines multimodal data representations and hyper-parameter selection using public datasets such as KITTI and nuScenes, with Frustum-ConvNet as a baseline architecture. Secondly, an attention mechanism was employed along with a convolutional LSTM to extract spatio-temporal information from sequence data, improving 3D estimates and helping the architecture focus on salient lidar point cloud features. Finally, various fusion strategies were applied to fuse the modalities and temporal information into the architecture to assess their effect on performance and computational complexity.
Overall, this thesis has established the importance and utility of multimodal systems for refined 3D object detection and proposed a pipeline incorporating spatial, temporal, and attention mechanisms to improve specific- and general-class accuracy, demonstrated on key autonomous driving datasets.
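    The temporal weighting idea the thesis explores can be illustrated with a generic scaled dot-product attention over per-frame features. This sketch is not the thesis's convolutional-LSTM architecture; the shapes and data are illustrative:

```python
import numpy as np

def temporal_attention(frame_feats):
    # frame_feats: (T, D) array of features from T consecutive frames.
    # Score each frame against the most recent one and blend, so the
    # fused feature emphasises frames most similar to the current view.
    query = frame_feats[-1]
    scores = frame_feats @ query / np.sqrt(frame_feats.shape[1])
    weights = np.exp(scores - scores.max())   # softmax over frames
    weights /= weights.sum()
    return weights @ frame_feats              # (D,) fused feature

feats = np.array([[1.0, 0.0],    # older frame, dissimilar
                  [0.0, 1.0],
                  [0.0, 1.0]])   # most recent frame
fused = temporal_attention(feats)
```

Frames resembling the current one dominate the fused feature, which is the behaviour attention adds over a plain average of the sequence.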

    A Simulation of the Impacts of Climate Change on Civil Aircraft Takeoff Performance

    Get PDF
    Climate change affects the near-surface environmental conditions that prevail at airports worldwide. Among these, air density and headwind speed are major determinants of takeoff performance, and their sensitivity to global warming carries potential operational and economic implications for the commercial air transport industry. Previous archival and prospective research observed a weakening in headwind strength and predicted an increase in near-surface temperatures, respectively, resulting in increased takeoff distances and weight restrictions. The main purpose of the present study was to update and generalize the extant prospective research using a more representative sample of worldwide airports, a wider range of climate scenarios, and next-generation climate models. The research questions included how much additional thrust and payload removal will be required to offset the centurial changes in takeoff conditions. This study relied on a quantitative method using a simulation instrument. Forecast climate data corresponding to four shared socioeconomic pathways (SSP1‒2.6, SSP2‒4.5, SSP3‒7.0, and SSP5‒8.5) over the available 2015‒2100 period were sourced from a high-resolution CMIP6 general circulation model. These data were used to characterize the six-hourly near-surface environmental conditions prevailing at all 881 airports worldwide with at least one million passengers in pre-COVID‒19 traffic. The missing air density was numerically derived from the air temperature, pressure, and humidity variables, while the headwind speed for each airport’s active runway configuration was triangulated from the wind vector components. Separately, a direct takeoff-dynamics simulation model was developed from first principles and calibrated against published performance data under international standard atmospheric conditions for two narrowbody and two widebody aircraft.
The model was used to simulate 1.8 billion unique takeoffs, each initiated at 75% of maximum takeoff thrust and 100% of maximum takeoff mass. When the resulting takeoff distance required exceeded that available, the takeoff thrust was gradually increased to 100%, after which the takeoff mass was gradually decreased to an estimated breakeven load factor. In total, 65 billion takeoff iterations were simulated. Longitudinal changes to takeoff thrust, distance, and payload were recorded and examined by aircraft type, climate scenario, and climate zone. The results show that despite a marked centurial increase in the global mean air temperature of 9.4%‒18.0% relative to the year 2015 under SSP2‒4.5 and SSP3‒7.0, air density will decrease by only 0.6%‒1.1% owing to its weak sensitivity to temperature. Likewise, mean headwinds were observed to remain almost unchanged relative to the 2015 baseline. As a result, the global mean takeoff thrust was found to increase by no more than 0.3 percentage points, while payload removals did not exceed 1.1 passengers. Significant deviations from the mean were observed at climatic outlier airports, including those located around the Siberian plateau, where takeoff operations may become more difficult. This study contributes to the air transport climate adaptation body of knowledge by providing contrasting results relative to earlier research that reported strong impacts of global warming on takeoff performance.
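    The air-density derivation mentioned above (from temperature, pressure, and humidity) can be sketched with the standard ideal-gas treatment of moist air. The constants and the Magnus saturation-pressure approximation are textbook values, not necessarily the exact formulation used in the study:

```python
import math

R_DRY = 287.058     # J/(kg*K), specific gas constant of dry air
R_VAPOUR = 461.495  # J/(kg*K), specific gas constant of water vapour

def saturation_vapour_pressure(t_celsius):
    # Magnus approximation for saturation vapour pressure, in Pa.
    return 610.94 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

def moist_air_density(t_celsius, pressure_pa, rel_humidity):
    # Treat moist air as an ideal-gas mixture of dry air and vapour.
    t_kelvin = t_celsius + 273.15
    p_vapour = rel_humidity * saturation_vapour_pressure(t_celsius)
    p_dry = pressure_pa - p_vapour
    return p_dry / (R_DRY * t_kelvin) + p_vapour / (R_VAPOUR * t_kelvin)

# ISA sea level (15 °C, 101325 Pa), dry air: about 1.225 kg/m^3.
rho = moist_air_density(15.0, 101325.0, 0.0)
```

Because water vapour is lighter than dry air, humid air is slightly less dense at the same temperature and pressure, which is why humidity enters the takeoff-performance calculation at all.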

    Making friends with failure in STS

    Get PDF

    Real-Time Hybrid Visual Servoing of a Redundant Manipulator via Deep Reinforcement Learning

    Get PDF
    Fixtureless assembly may be necessary in some manufacturing tasks and environments due to various constraints but poses challenges for automation due to non-deterministic characteristics not favoured by traditional approaches to industrial automation. Visual servoing methods of robotic control could be effective for sensitive manipulation tasks where the desired end-effector pose can be ascertained via visual cues. Visual data is complex and computationally expensive to process, but deep reinforcement learning has shown promise for robotic control in vision-based manipulation tasks. However, these methods are rarely used in industry due to the resources and expertise required to develop application-specific systems and prohibitive training costs. Training reinforcement learning models in simulated environments offers a number of benefits for the development of robust robotic control algorithms by reducing training time and costs, and by providing repeatable benchmarks on which algorithms can be tested, developed, and eventually deployed to real robotic control environments. In this work, we present a new simulated reinforcement learning environment for developing accurate robotic manipulation control systems in fixtureless environments. Our environment incorporates a contemporary collaborative industrial robot, the KUKA LBR iiwa, with the goal of positioning its end effector in a generic fixtureless environment based on a visual cue. Observational inputs comprise the robotic joint positions and velocities, as well as two cameras whose positioning reflects hybrid visual servoing: one camera is attached to the robotic end-effector and the other observes the workspace. We propose a state-of-the-art deep reinforcement learning approach to solving the task environment and make preliminary assessments of the efficacy of this approach to hybrid visual servoing for the defined problem environment.
We also conduct a series of experiments exploring the hyperparameter space of the proposed reinforcement learning method. Although our initial results could not prove the efficacy of a deep reinforcement learning approach to solving the task environment, we remain confident that such an approach could be feasible for this industrial manufacturing challenge and that our contributions in this work, in terms of the novel software, provide a good basis for exploring reinforcement learning approaches to hybrid visual servoing in accurate manufacturing contexts.
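    The observation layout described (joint positions and velocities plus two camera views) might be bundled as below. All shapes and names are hypothetical assumptions, not the environment's actual specification; the seven-joint dimension reflects the KUKA LBR iiwa's 7-DoF arm:

```python
import numpy as np

NUM_JOINTS = 7           # the KUKA LBR iiwa has seven joints
IMG_SHAPE = (64, 64, 3)  # assumed camera resolution

def make_observation(joint_pos, joint_vel, eye_in_hand_img, workspace_img):
    # Bundle the hybrid visual-servoing observation: one camera on the
    # end-effector ("eye in hand") and one observing the workspace.
    assert joint_pos.shape == (NUM_JOINTS,)
    assert joint_vel.shape == (NUM_JOINTS,)
    assert eye_in_hand_img.shape == IMG_SHAPE
    assert workspace_img.shape == IMG_SHAPE
    return {
        "proprio": np.concatenate([joint_pos, joint_vel]),  # (14,)
        "eye_in_hand": eye_in_hand_img,
        "workspace": workspace_img,
    }

obs = make_observation(np.zeros(NUM_JOINTS), np.zeros(NUM_JOINTS),
                       np.zeros(IMG_SHAPE), np.zeros(IMG_SHAPE))
```

A dictionary observation of this shape maps naturally onto the `Dict` spaces used by common RL environment interfaces, with separate encoders for the proprioceptive vector and each camera stream.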