
    Effect of Confinement and Temperature on the Behavior of EPS Geofoam

    EPS geofoam blocks underlying compacted soil and structural loads are subjected to multi-axial loading. Effects of confining pressure on the stress-strain behavior of EPS geofoam have been investigated in previous studies. Some studies found that increases in confining stress lead to corresponding decreases in both modulus and compressive strength, while others have reported that increasing confining stress results in higher compressive strength. Regardless of the sense and attributed significance of the effects of confinement on EPS geofoam behavior, the implied effects on performance are generally not considered in practice. A series of triaxial compression tests were conducted on EPS geofoams of different densities over a range of confining pressures. Results from the investigation indicate that increases in confinement lead to decreases in yield stress and post-yield compressive resistance, depending on the EPS density and the range of confining pressures. The practical significance of confining stress effects is discussed, and an approach for incorporating the more significant effects of confining stress on EPS geofoam behavior is considered. Evaluations of EPS-soil-structure interaction require a reasonable representation of stress-strain relationships for numerical modeling. A method proposed in this work uses the density of the geofoam block and the resin material properties to represent the stress-strain response of EPS geofoam. The stress-strain curves obtained from this representation are compared with results from laboratory tests and with models by others; the curves generated by the proposed method predict the relations very well, especially for denser geofoams. A modified hyperbolic stress-strain relationship that can account for confining stress effects is also proposed. The modified hyperbolic model requires only three parameters, which can be obtained from triaxial tests. The prediction accuracy of this model is evaluated against data from triaxial tests that were not part of the data sets used to obtain the model parameters, and against other models proposed by different authors; the stress-strain relationships obtained by this approach predict the test data well. Characteristics of inherent and stress-induced anisotropy of EPS geofoam were investigated by triaxial tests conducted on pre-stressed EPS geofoam; induced anisotropy was observed to reduce the modulus significantly. A series of creep tests were performed on different densities of EPS geofoam with and without confining pressure. The results showed that confining pressure can significantly affect the creep response of EPS geofoam, with the effects on creep deformation more pronounced for lower densities. Creep tests were also performed in a temperature-controlled chamber to evaluate the effects of cyclic temperatures, and the coupled effects of temperature and creep were studied for different stress levels. Comparisons were made with actual field observations and FLAC model results. Strains and induced stresses from seasonal temperature variations were relatively small.
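    The abstract does not reproduce the modified hyperbolic relationship itself. As a rough illustration of the kind of three-parameter hyperbolic form such models typically take, the sketch below assumes a Duncan-Chang-style formulation with an initial modulus, an asymptotic stress, and a hypothetical confinement-dependent reduction factor; all parameter names and values are invented for illustration and are not the model proposed in this work.

        import numpy as np

        def hyperbolic_stress(strain, E_i, sigma_ult, k_conf=0.0, sigma_3=0.0):
            """Hyperbolic stress-strain sketch (Duncan-Chang-style form).

            strain    : axial strain (decimal, e.g. 0.05 for 5 %)
            E_i       : initial tangent modulus (kPa)
            sigma_ult : asymptotic compressive stress (kPa)
            k_conf    : hypothetical factor reducing stiffness/strength with confinement
            sigma_3   : confining pressure (kPa)
            Returns compressive stress (kPa).
            """
            # Hypothetical confinement correction: stiffness and strength drop as
            # confining pressure rises, consistent with the trend reported above
            # for yield and post-yield resistance.
            reduction = 1.0 - k_conf * sigma_3 / (sigma_3 + 100.0)
            E_eff = E_i * reduction
            s_eff = sigma_ult * reduction
            return strain / (1.0 / E_eff + strain / s_eff)

        # Purely illustrative parameters for a low-density block
        eps = np.linspace(0.0, 0.10, 50)
        print(hyperbolic_stress(eps, E_i=4000.0, sigma_ult=120.0, k_conf=0.3, sigma_3=20.0)[:5])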

    Detection of seismic site effects: Development of experimental methods and application to the city of Nice

    Site effects represent a critical challenge for earthquake prevention, since they can seriously exacerbate damage when an earthquake strikes. Knowledge of the soil response to seismic activity makes it possible to adapt earthquake protection regulations to these constraints when defining microzones and producing Risk Prevention Plans. The experimental methods used to determine site effects seek to obtain seismic amplification parameters by measuring earthquakes or seismic background noise. This article presents the main results obtained up to 2005 by the "Seismic risk" ERA research team in developing these techniques and in their comparative application to the city of Nice, which constituted the main field application site thanks to multiple seismic recording campaigns. Many earthquakes were first analyzed with the transfer function method. The "H/V background noise" method was then applied to more than 600 points, and the results were spatially interpolated. A geotechnical model of the subsoil subsequently made it possible to compare these results with the geology and with numerical simulations of wave propagation. The complementarity of the methods and their respective advantages are highlighted for this high-stakes territory.
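    As a rough illustration of the "H/V background noise" technique mentioned above, the sketch below computes a horizontal-to-vertical Fourier amplitude ratio from a three-component ambient noise record; the windowing, smoothing, and component handling are simplifying assumptions and not the team's actual processing chain.

        import numpy as np

        def hv_ratio(north, east, vertical, fs):
            """Crude H/V spectral ratio from three-component ambient noise.

            north, east, vertical : equal-length 1-D arrays (ground motion)
            fs                    : sampling frequency (Hz)
            Returns (frequencies, H/V amplitude ratio).
            """
            n = len(vertical)
            window = np.hanning(n)
            freqs = np.fft.rfftfreq(n, d=1.0 / fs)
            a_n = np.abs(np.fft.rfft(north * window))
            a_e = np.abs(np.fft.rfft(east * window))
            a_v = np.abs(np.fft.rfft(vertical * window))
            # Quadratic mean of the horizontals divided by the vertical spectrum;
            # the peak frequency is read as the fundamental resonance of the soil column.
            horizontal = np.sqrt((a_n**2 + a_e**2) / 2.0)
            return freqs, horizontal / np.maximum(a_v, 1e-12)

        # Synthetic demonstration on random noise
        fs = 100.0
        rng = np.random.default_rng(0)
        samples = int(60 * fs)
        f, hv = hv_ratio(rng.normal(size=samples), rng.normal(size=samples), rng.normal(size=samples), fs)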

    VANET Applications: Hot Use Cases

    Current challenges for car manufacturers are to make roads safe, to achieve free-flowing traffic with little congestion, and to reduce pollution through effective fuel use. To reach these goals, many improvements are made in-car, but more and more approaches rely on connected cars with communication capabilities between cars, with an infrastructure, or with IoT devices. Monitoring and coordinating vehicles then makes it possible to compute intelligent ways of transportation. Connected cars have introduced a new way of thinking about cars: not only as a means for a driver to go from A to B, but as smart cars, a user extension much like the smartphone today. In this report, we introduce concepts and specific vocabulary in order to classify current innovations and ideas on the emerging topic of the smart car. We present a graphical categorization showing this evolution as a function of societal evolution. Different perspectives are adopted: a vehicle-centric view, a vehicle-network view, and a user-centric view, described through simple and complex use cases and illustrated by a list of emerging and current projects from the academic and industrial worlds. We identified an empty space in innovation between the user and the car: paradoxically, even though the two interact, they are separated by different application uses. The future challenge is to interweave the social concerns of the user with intelligent and efficient driving.

    Editorial: Volume 9, Issue 1

    This year has been an exciting year for the Journal of Financial Therapy's sponsoring organization, the Financial Therapy Association. An idea was sparked many years ago by the FTA Board of Directors that a designation or credential should be created. With the ushering in of the CFT-Iℱ, this is a critical moment for further research that will continue to inform the practice of financial therapy. Now, more than ever, we must connect the areas of practice, research, and theory, not only to inform best practices in financial therapy but also to legitimize the work. Meaningful research to inform the field cannot be conducted without collaboration with practitioners, and credible practice cannot be done without theoretical and empirical support.

    Regression and Singular Value Decomposition in Dynamic Graphs

    Most real-world graphs are dynamic, i.e., they change over time. However, while problems such as regression and Singular Value Decomposition (SVD) have been studied for static graphs, they have not yet been investigated for dynamic graphs. In this paper, we introduce, motivate, and study regression and SVD over dynamic graphs. First, we present the notion of an update-efficient matrix embedding, which defines the conditions sufficient for a matrix embedding to be used for the dynamic graph regression problem (under the l_2 norm). We prove that, given an n × m update-efficient matrix embedding (e.g., the adjacency matrix), after an update operation in the graph the optimal solution of the graph regression problem for the revised graph can be computed in O(nm) time. We also study dynamic graph regression under least absolute deviation. Then, we characterize a class of matrix embeddings that can be used to efficiently update the SVD of a dynamic graph. For the adjacency matrix and the Laplacian matrix, we study the graph update operations for which the SVD (and low-rank approximation) can be updated efficiently.
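    The paper's own update rule for update-efficient embeddings is not given in this abstract. The sketch below only illustrates, under standard least-squares assumptions, why a small change to the embedding matrix (here, one node's row) allows the regression solution to be refreshed cheaply: the inverse normal matrix is corrected with the textbook Sherman-Morrison identity instead of being refit from scratch. The class and function names are hypothetical.

        import numpy as np

        def sherman_morrison(M_inv, u, v):
            """Return (M + u v^T)^{-1}, given M^{-1}, in O(m^2) time."""
            Mu = M_inv @ u
            vM = v @ M_inv
            return M_inv - np.outer(Mu, vM) / (1.0 + v @ Mu)

        class IncrementalLeastSquares:
            """x = argmin ||A x - b||_2 with cheap single-row updates to A.

            State kept between updates: M = (A^T A)^{-1} and c = A^T b.  Replacing
            one row of A amounts to two rank-one corrections of A^T A, so the
            refreshed solution costs O(m^2) here instead of a full refit.
            """
            def __init__(self, A, b):
                self.A = A.astype(float).copy()
                self.b = b.astype(float).copy()
                self.M = np.linalg.inv(self.A.T @ self.A)
                self.c = self.A.T @ self.b

            def update_row(self, i, new_row):
                old = self.A[i].copy()
                new = np.asarray(new_row, dtype=float)
                self.c += (new - old) * self.b[i]
                self.M = sherman_morrison(self.M, new, new)    # add  new new^T
                self.M = sherman_morrison(self.M, -old, old)   # drop old old^T
                self.A[i] = new

            def solve(self):
                return self.M @ self.c

        # Tiny check against a full refit after changing one entry of the embedding
        rng = np.random.default_rng(1)
        A, b = rng.normal(size=(8, 3)), rng.normal(size=8)
        inc = IncrementalLeastSquares(A, b)
        inc.update_row(2, A[2] + np.array([0.0, 1.0, 0.0]))
        A2 = A.copy(); A2[2, 1] += 1.0
        print(np.allclose(inc.solve(), np.linalg.lstsq(A2, b, rcond=None)[0]))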

    A specified procedure for distress identification and assessment for urban road surfaces based on PCI

    In this paper, a simplified procedure for assessing pavement structural integrity and level of service for urban road surfaces is presented. A sample of 109 Asphalt Concrete (AC) urban pavements from an Italian road network was considered to validate the methodology. As part of this research, the most recurrent defects, those never encountered, and those not defined in the list collected in ASTM D6433 were determined by statistical analysis. The goal of this research is to improve the ASTM D6433 distress identification catalogue so that it is adapted to urban road surfaces. The presented methodology includes the implementation of a Visual Basic for Applications (VBA) program that computerizes the Pavement Condition Index (PCI) calculation, with interpolation by parametric cubic splines of all of the density/deduct-value curves for the ASTM D6433 distress types. In addition, two new distress definitions (for manholes and for tree roots) and new density/deduct curve values were proposed, yielding a new distress identification manual for urban road pavements. To validate the presented methodology, the PCI was calculated for the 109 urban pavements using both the new distress catalogue and ASTM D6433 as implemented in PAVERℱ; the results of the linear regression between the two and its statistical parameters are presented in this paper. The comparison shows that the proposed method is suitable for identifying and assessing observed distress in urban pavement surfaces on the PCI scale.
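    The ASTM D6433 density/deduct-value charts are published as curves read by eye; the sketch below shows, in principle, how a cubic spline fitted to digitised control points of one such curve can return a deduct value for an arbitrary observed density, as in the computerized PCI calculation described above. The control-point values here are invented for illustration and are not the ASTM curves.

        import numpy as np
        from scipy.interpolate import CubicSpline

        # Hypothetical control points for one distress type and severity level:
        # distress density (% of sample unit area) versus deduct value.
        density = np.array([0.1, 1.0, 5.0, 10.0, 30.0, 70.0])   # %
        deduct  = np.array([2.0, 8.0, 22.0, 33.0, 52.0, 68.0])  # deduct points

        # Interpolate in log-density space, since the ASTM charts use a log axis.
        curve = CubicSpline(np.log10(density), deduct)

        def deduct_value(observed_density_pct):
            """Deduct value for an observed density, clamped to the chart range."""
            d = np.clip(observed_density_pct, density[0], density[-1])
            return float(curve(np.log10(d)))

        print(deduct_value(12.5))   # e.g. 12.5 % density of this distress type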

    MULTIMAP AND MULTISET DATA STRUCTURES IN STAPL

    The Standard Template Adaptive Parallel Library (STAPL) is an efficient programming framework whose components make it easier to implement parallel applications that can utilize multiple processors to solve large problems concurrently [1]. STAPL is developed using the C++ programming language and provides parallel equivalents of many algorithms and data structures (containers) found in its Standard Template Library (STL). Although STAPL contains a large collection of parallel data structures and algorithms, there are still many algorithms and containers that are not yet implemented in STAPL. Multimap and multiset are two associative containers that are included in STL but not yet implemented in STAPL. The goal of this work is to design and implement the parallel multimap and parallel multiset containers that provide the same functionality as their STL counterparts while enabling parallel computation on large-scale data.

    Dynamic Multiple Work Stealing Strategy for Flexible Load Balancing

    Lazy-task creation is an efficient method of overcoming the grain-size overhead problem in parallel computing, and work stealing is an effective load balancing strategy for parallel computing. In this paper, we present dynamic work stealing strategies within a lazy-task-creation technique for efficient fine-grain task scheduling. The basic idea is to control load balancing granularity depending on the number of task parents in a stack. The dynamic-length strategy of work stealing uses run-time information, namely the load of the victim, to determine the number of tasks that a thief is allowed to steal. We compare it with the bottommost-first work stealing strategy used in StackThreads/MP and with the fixed-length strategy of work stealing, where a thief requests a fixed number of tasks, as well as with other multithreaded frameworks such as Cilk and OpenMP task implementations. The experiments show that the dynamic-length strategy performs well on irregular workloads such as the UTS benchmark, as well as on regular workloads such as Fibonacci, Strassen's matrix multiplication, FFT, and sparse LU factorization. The dynamic-length strategy works better than the fixed-length strategy because it is more flexible: it can avoid the load imbalance caused by overstealing.
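    The exact control rule of the dynamic-length strategy is not spelled out in this abstract. The sketch below only contrasts the general idea with a fixed-length policy: the thief inspects the victim's current load and takes a variable number of tasks (here, roughly half, as a stand-in heuristic), whereas a fixed-length thief always requests a constant number. The data structures and the steal-half heuristic are assumptions, not the paper's algorithm.

        from collections import deque

        def steal_fixed(victim: deque, k: int = 1):
            """Fixed-length strategy: always take up to k tasks from the victim."""
            return [victim.popleft() for _ in range(min(k, len(victim)))]

        def steal_dynamic(victim: deque):
            """Dynamic-length sketch: the steal size depends on the victim's load.

            The thief takes about half of the victim's queued tasks; the paper's
            strategy instead bases the amount on the number of task parents in
            the victim's stack.
            """
            amount = len(victim) // 2 if len(victim) > 1 else len(victim)
            return [victim.popleft() for _ in range(amount)]

        victim = deque(range(10))       # ten queued fine-grain tasks
        print(steal_fixed(victim, 1))   # [0]
        print(steal_dynamic(victim))    # takes about half of the remaining nine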

    Aging-Aware Request Scheduling for Non-Volatile Main Memory

    Modern computing systems are embracing non-volatile memory (NVM) to implement high-capacity and low-cost main memory. Elevated operating voltages of NVM accelerate the aging of CMOS transistors in the peripheral circuitry of each memory bank. Aggressive device scaling increases power density and temperature, which further accelerates aging, challenging the reliable operation of NVM-based main memory. We propose HEBE, an architectural technique to mitigate the circuit aging-related problems of NVM-based main memory. HEBE is built on three contributions. First, we propose a new analytical model that can dynamically track the aging in the peripheral circuitry of each memory bank based on the bank's utilization. Second, we develop an intelligent memory request scheduler that exploits this aging model at run time to de-stress the peripheral circuitry of a memory bank only when its aging exceeds a critical threshold. Third, we introduce an isolation transistor to decouple parts of a peripheral circuit operating at different voltages, allowing the decoupled logic blocks to undergo long-latency de-stress operations independently and off the critical path of memory read and write accesses, improving performance. We evaluate HEBE with workloads from the SPEC CPU2017 benchmark suite. Our results show that HEBE significantly improves both performance and lifetime of NVM-based main memory.
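    HEBE's analytical aging model and scheduler are not reproduced in this abstract. The sketch below is only a schematic of the scheduling decision described above: accumulate a per-bank aging estimate driven by utilization, and divert requests away from a bank once the estimate crosses a threshold so that its long-latency de-stress operation stays off the critical path. All names, units, and increments are hypothetical.

        from dataclasses import dataclass, field

        @dataclass
        class Bank:
            aging: float = 0.0          # accumulated aging estimate (arbitrary units)
            destressing: bool = False   # bank currently undergoing de-stress

        @dataclass
        class AgingAwareScheduler:
            banks: list = field(default_factory=lambda: [Bank() for _ in range(8)])
            threshold: float = 10.0         # hypothetical critical aging threshold
            aging_per_access: float = 0.5   # hypothetical utilization-driven increment

            def issue(self, bank_id: int, request: str) -> str:
                bank = self.banks[bank_id]
                if bank.destressing:
                    # De-stress stays off the critical path: the request is deferred
                    # (a real scheduler would redirect or queue it) instead of stalling.
                    return f"deferred {request}: bank {bank_id} is de-stressing"
                bank.aging += self.aging_per_access
                if bank.aging > self.threshold:
                    bank.destressing = True   # trigger de-stress once past the threshold
                    bank.aging = 0.0          # completion of de-stress is omitted here
                return f"served {request} on bank {bank_id}"

        sched = AgingAwareScheduler()
        for i in range(25):
            print(sched.issue(bank_id=0, request=f"read#{i}"))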
    • 

    corecore