6,695 research outputs found

    Automated Pin-Dot Marking Effects on A709-Gr50 Steel Plate Fatigue Capacity

    During fabrication of multi-piece steel bridge assemblies, markings are often made on the steel surface to identify and track individual pieces or to provide reference for fabrication layout or later erection. Automated marking methods such as computer numerically controlled (CNC) pin-dot marking offer fabrication efficiencies; however, for marked steel sections subjected to frequent or repeated loading (e.g., bridge girders), many code specifications require experimental testing to verify any marking effects on fatigue capacity. In this study, the effects of automated pin-dot markings on the fatigue capacity of A709-Gr50 bridge steel are experimentally investigated using 13 specimens, considering 2 marking frequencies (corresponding to marking speeds of 50 in./min and 10 in./min), 2 applied stress ranges (35 ksi and 45 ksi), and 2 material orientations (both longitudinal and transverse plate rolling directions). Results from the 13 high-cycle fatigue tests, along with other fatigue test results from the literature, indicate that the surface markings from the automated marking systems have no effect on the fatigue capacity of the A709-Gr50 plate. All marked specimens achieved higher fatigue capacities than would be expected for unmarked specimens meeting the AASHTO fatigue detail category 'A' designation.
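    As a point of reference, the finite-life portion of an AASHTO fatigue design curve follows N = A / (Δf)^3. A minimal sketch of the nominal unmarked-specimen life at the two tested stress ranges, assuming the commonly tabulated detail category 'A' constant A = 250 × 10^8 ksi^3 (verify against the current AASHTO LRFD edition before use):

```python
# Nominal AASHTO finite-life fatigue capacity, N = A / (stress range)^3.
# A_CATEGORY_A is the commonly tabulated Category 'A' constant (an
# assumption here; confirm against the governing AASHTO LRFD edition).
A_CATEGORY_A = 250e8  # ksi^3

def expected_cycles(stress_range_ksi: float) -> float:
    """Nominal fatigue life in cycles at a given constant-amplitude stress range."""
    return A_CATEGORY_A / stress_range_ksi ** 3

for sr in (35.0, 45.0):  # the two stress ranges tested in the study
    print(f"{sr} ksi -> {expected_cycles(sr):,.0f} cycles")
```

    The reported result, that every marked specimen exceeded this Category 'A' expectation, is the basis for concluding the markings had no fatigue effect.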

    A Distributed Method for Trust-Aware Recommendation in Social Networks

    This paper details a distributed trust-aware recommendation system. Trust-based recommenders have received considerable attention recently. The main aim of trust-based recommendation is to address the problems of traditional Collaborative Filtering recommenders, including cold-start users, vulnerability to attacks, etc. Our proposed method is a distributed approach and can easily be deployed on social networks or real-world networks such as sensor networks or peer-to-peer networks.
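    The core idea behind trust-based recommendation can be illustrated with a minimal sketch (a generic trust-weighted aggregation, not the paper's distributed algorithm; all names are hypothetical): a user's predicted rating for an item is the trust-weighted average of ratings from trusted neighbors.

```python
# Generic trust-weighted rating prediction (illustrative only, not the
# paper's method): weight each neighbor's rating by how much the target
# user trusts that neighbor.
def predict_rating(neighbor_ratings, trust):
    """neighbor_ratings: {user: rating}; trust: {user: trust weight in [0, 1]}."""
    num = sum(trust[u] * r for u, r in neighbor_ratings.items() if u in trust)
    den = sum(trust[u] for u in neighbor_ratings if u in trust)
    return num / den if den else None  # None: cold-start user, no trusted raters

ratings = {"alice": 4.0, "bob": 2.0}
trust = {"alice": 0.9, "bob": 0.1}
print(predict_rating(ratings, trust))  # dominated by the highly trusted rater
```

    Note how the `None` branch makes the cold-start problem concrete: a new user with no trust links gets no prediction at all, which is exactly the gap trust propagation over the social network is meant to fill.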

    Adaptive Partitioning for Large-Scale Dynamic Graphs

    In recent years, large-scale graph processing has gained increasing attention, with most recent systems placing particular emphasis on latency. One possible technique to improve runtime performance in a distributed graph processing system is to reduce network communication. The most notable way to achieve this goal is to partition the graph by minimizing the number of edges that connect vertices assigned to different machines, while keeping the load balanced. However, real-world graphs are highly dynamic, with vertices and edges being constantly added and removed. Carefully updating the partitioning of the graph to reflect these changes is necessary to avoid the introduction of an extensive number of cut edges, which would gradually worsen computation performance. In this paper we show that performance degradation in dynamic graph processing systems can be avoided by continuously adapting the graph partitions as the graph changes. We present a novel, highly scalable adaptive partitioning strategy, and show a number of refinements that make it work under the constraints of a large-scale distributed system. The partitioning strategy is based on iterative vertex migrations, relying only on local information. We have implemented the technique in a graph processing system, and we show through three real-world scenarios how adapting the graph partitioning reduces execution time by over 50% when compared to commonly used hash partitioning.
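    The flavor of iterative, local-information vertex migration can be sketched as follows (an illustrative greedy variant, not the paper's exact strategy): each vertex moves to the partition holding most of its neighbors, subject to a per-partition capacity, which greedily reduces the number of cut edges.

```python
# Greedy local vertex migration (illustrative sketch): move each vertex
# toward the partition where most of its neighbors live, if that partition
# has spare capacity. Repeating until no vertex moves reduces cut edges.
from collections import Counter

def migrate_step(adj, part, capacity):
    """adj: {v: [neighbors]}; part: {v: partition id}. Mutates part; returns #moves."""
    load = Counter(part.values())
    moved = 0
    for v, neighbors in adj.items():
        counts = Counter(part[n] for n in neighbors)
        best, best_cnt = max(counts.items(), key=lambda kv: kv[1])
        if best != part[v] and best_cnt > counts.get(part[v], 0) and load[best] < capacity:
            load[part[v]] -= 1
            load[best] += 1
            part[v] = best
            moved += 1
    return moved

# A triangle (0-1-2) split across partitions, plus a separate pair (3-4).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4], 4: [3]}
part = {0: 0, 1: 0, 2: 1, 3: 1, 4: 1}
while migrate_step(adj, part, capacity=3):
    pass
print(part)  # the triangle ends up co-located, eliminating its cut edges
```

    Each decision uses only a vertex's immediate neighborhood, which is what makes this style of strategy workable in a distributed setting; the capacity check is the simplest possible stand-in for the load-balance constraint.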

    Robustness of Multistory Buildings with Masonry Infill


    Chapter 4: Infrastructure Considerations for CO2 Utilization, in: Carbon Dioxide Utilization Markets and Infrastructure Status and Opportunities: A First Report

    This chapter describes considerations for developing infrastructure for carbon dioxide (CO2) utilization, taking into account the CO2-derived products identified in Chapter 3 and the existing infrastructure discussed in Chapter 2. Infrastructure needs throughout the CO2 utilization value chain are examined, from capture to purification, transportation, conversion, and, where applicable, transportation of the CO2-derived product. Requirements for enabling infrastructure, namely clean electricity, hydrogen, water, land, and energy storage, are also considered.

    Quantifying Information Overload in Social Media and its Impact on Social Contagions

    Information overload has become a ubiquitous problem in modern society. Social media users and microbloggers receive an endless flow of information, often at a rate far higher than their cognitive abilities to process it. In this paper, we conduct a large-scale quantitative study of information overload and evaluate its impact on information dissemination on the Twitter social media site. We model social media users as information processing systems that queue incoming information according to some policies, process information from the queue at some unknown rates, and decide to forward some of the incoming information to other users. We show how timestamped data about tweets received and forwarded by users can be used to uncover key properties of their queueing policies and estimate their information processing rates and limits. Such an understanding of users' information processing behaviors allows us to infer whether and to what extent users suffer from information overload. Our analysis provides empirical evidence of information processing limits for social media users and the prevalence of information overloading. The most active and popular social media users are often the ones that are overloaded. Moreover, we find that the rate at which users receive information impacts their processing behavior, including how they prioritize information from different sources, how much information they process, and how quickly they process information. Finally, the susceptibility of a social media user to social contagions depends crucially on the rate at which she receives information. An exposure to a piece of information, be it an idea, a convention or a product, is much less effective for users that receive information at higher rates, meaning they need more exposures to adopt a particular contagion.
    Comment: To appear at ICSWM '1
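    The queueing view of a user can be made concrete with a toy simulation (an illustration of the modeling idea only, not the paper's estimator): tweets arrive with some per-step probability and the user reads with another; when the arrival rate exceeds the processing rate, the backlog grows without bound, i.e. the user is overloaded.

```python
# Toy discrete-time queue model of a social media user (illustrative
# sketch): each step, one tweet arrives with probability arrival_p and,
# if the queue is non-empty, one tweet is read with probability service_p.
import random

def simulate_backlog(arrival_p, service_p, steps, seed=0):
    """Return the unread backlog after `steps` time steps."""
    rng = random.Random(seed)
    backlog = 0
    for _ in range(steps):
        backlog += rng.random() < arrival_p
        if backlog and rng.random() < service_p:
            backlog -= 1
    return backlog

print(simulate_backlog(0.9, 0.3, 10_000))  # overloaded: backlog keeps growing
print(simulate_backlog(0.3, 0.9, 10_000))  # keeps up: backlog stays small
```

    The paper's contribution is essentially the inverse problem: inferring a user's (unknown) processing rate and queueing policy from the observable timestamps of received and forwarded tweets, rather than simulating them forward.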

    On the Capacity Degradation in Broadband MIMO Satellite Downlinks with Atmospheric Impairments

    We investigate the impact of atmospheric impairments on the theoretical bandwidth efficiency of Multiple-Input Multiple-Output (MIMO) geostationary satellite links which are shaped to optimize the channel bandwidth efficiency. We analyze the impairments caused by precipitation, since this is the most severe atmospheric effect causing capacity degradations. In theory, the MIMO channel capacity is strongly affected by signal attenuation as well as signal phase shifts that might reduce the number and strength of spatial subchannels (eigenmodes). We will show, however, that the characteristics of the phase disturbances prevent a loss of capacity. Regarding the additional attenuation, which the signals may encounter passing through the troposphere, we will quantify outage values for several levels of link capacity degradation. Although a loss of capacity cannot be avoided entirely, it still turns out that MIMO systems outperform conventional Single-Input Single-Output (SISO) designs in terms of reliability. Even in the presence of atmospheric perturbations, MIMO systems still provide enormous capacity gains and vast reliability improvements. Thus, the MIMO satellite systems presented are perfectly suited to establish the backbone network of future broadband wireless standards (e.g. DVB-SH), supporting high data rates for a variety of worldwide services.
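    The capacity comparison rests on the textbook MIMO formula C = log2 det(I + (SNR/Nt) H H^H). A minimal sketch evaluating it with and without a rain-attenuation factor on the channel matrix (illustrative numbers and a toy full-rank channel, not the paper's link model):

```python
# Textbook MIMO capacity with equal power allocation, compared against a
# SISO baseline and against the same channel under extra rain attenuation.
# The 2x2 channel and the 3 dB rain fade are illustrative assumptions.
import numpy as np

def mimo_capacity(H, snr_linear):
    """Capacity in bit/s/Hz: log2 det(I + (SNR/Nt) * H H^H)."""
    nr, nt = H.shape
    m = np.eye(nr) + (snr_linear / nt) * (H @ H.conj().T)
    return float(np.log2(np.linalg.det(m).real))

# Orthogonal-phase 2x2 channel (full rank, two equal eigenmodes).
H = np.array([[1, 1], [1, -1]], dtype=complex)
snr = 10 ** (10 / 10)        # 10 dB SNR
rain_att = 10 ** (-3 / 20)   # 3 dB power fade applied to the field amplitude

c_clear = mimo_capacity(H, snr)
c_rain = mimo_capacity(rain_att * H, snr)
c_siso = float(np.log2(1 + snr))  # SISO baseline at the same clear-sky SNR
print(c_clear, c_rain, c_siso)
```

    With these toy numbers the rain-faded MIMO link still exceeds the clear-sky SISO capacity, which mirrors the abstract's qualitative claim that MIMO retains its gains under atmospheric perturbations.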