
    Neuromuscular adaptations in endurance-trained boys and men

    Competitive sports participation in youth is becoming increasingly common in the Western world. It is widely accepted that sports participation, specifically endurance training, is beneficial for the physical, psychomotor, and social development of children. Research on the effect of endurance training in children has focused mainly on health-related benefits and physiological adaptations, particularly maximal oxygen uptake. However, corresponding research on neuromuscular adaptations to endurance training, and the latter's possible effects on muscle strength in youth, is lacking. In children and adults, resistance training can enhance strength and increase muscle activation. However, data on the effect of endurance training on strength and neuromuscular adaptations are limited. While some evidence exists demonstrating increased muscle activation and possibly increased strength in endurance athletes compared with untrained adults, the neuromuscular adaptations to endurance training in children have not been examined. Thus, the purpose of this study was to examine maximal isometric torque and rate of torque development (RTD), along with the pattern of muscle activation during elbow and knee flexion and extension, in endurance-trained and untrained men and boys. Subjects included 65 males: untrained boys (n=18), endurance-trained boys (n=12), untrained men (n=20) and endurance-trained men (n=15). Maximal isometric torque and RTD were measured using an isokinetic dynamometer (Biodex III), and neuromuscular activation was assessed using surface electromyography (sEMG). Muscle strength and activation were assessed in the dominant arm and leg, in a cross-balanced fashion, during elbow and knee flexion and extension. The main variables included peak torque (T), RTD, rate of muscle activation (Q30), electromechanical delay (EMD), time to peak RTD and co-activation index. Age differences in T, RTD, EMD and Q30 were consistently observed in the four contractions tested. Additionally, Q30, normalized for peak EMG amplitude, was consistently higher in the endurance-trained men compared with untrained men. Co-activation index was generally low in all contractions. For example, during maximal voluntary isometric knee extension, men were stronger and had higher RTD and Q30, whether absolute or normalized values were used. Moreover, boys exhibited longer EMD (64.8 ± 18.5 ms vs. 56.6 ± 15.3 ms for boys and men, respectively) and time to peak RTD (112.4 ± 33.4 ms vs. 100.8 ± 39.1 ms for boys and men, respectively). In addition, endurance-trained men had lower T compared with untrained men, yet they also exhibited significantly higher normalized Q30 (1.9 ± 1.2 vs. 1.1 ± 0.7 for endurance-trained and untrained men, respectively). No training effect was apparent in the boys. In conclusion, the findings demonstrate muscle strength and activation to be lower in children compared with adults, regardless of training status. The higher Q30 of the endurance-trained men suggests neural adaptations, similar to those expected in response to resistance training. The lower peak torque may suggest a higher relative involvement of type I muscle fibres in the endurance-trained athletes. Future research is required to better understand the effect of growth and development on muscle strength and activation patterns during dynamic and sub-maximal isometric contractions. Furthermore, training intervention studies could reveal the effects of endurance training during different developmental stages, as well as in different muscle groups.
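    The reported metrics can be made concrete with a small sketch. The following Python fragment shows one plausible way to derive peak torque, peak RTD, time to peak RTD, EMD and Q30 from sampled torque and rectified sEMG traces; the sampling rate, 5% onset thresholds and 30 ms Q30 window are common conventions assumed for illustration, not the study's actual processing pipeline.

```python
import numpy as np

def contraction_metrics(torque, emg, fs=2000.0):
    """Derive peak torque, peak RTD, time to peak RTD, EMD and Q30 from
    a torque trace and a rectified sEMG trace sampled at fs Hz. The 5%
    onset thresholds and the Q30 window are conventions assumed here;
    they are not taken from the study itself."""
    t = np.arange(len(torque)) / fs
    peak_torque = torque.max()

    # Rate of torque development: time derivative of the torque trace.
    rtd = np.gradient(torque, 1.0 / fs)
    peak_rtd = rtd.max()
    time_to_peak_rtd_ms = t[rtd.argmax()] * 1000.0

    # Onsets: first sample exceeding 5% of each signal's peak.
    emg_onset_idx = np.argmax(emg > 0.05 * emg.max())
    torque_onset_idx = np.argmax(torque > 0.05 * peak_torque)

    # Electromechanical delay: lag from EMG onset to torque onset.
    emd_ms = (t[torque_onset_idx] - t[emg_onset_idx]) * 1000.0

    # Q30: integrated rectified EMG over the first 30 ms after EMG onset
    # (one common operationalization of the rate of muscle activation).
    window = emg[emg_onset_idx:emg_onset_idx + int(0.030 * fs)]
    q30 = window.sum() / fs

    return peak_torque, peak_rtd, time_to_peak_rtd_ms, emd_ms, q30
```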

    LUD as an Instrument for (Sub)Metropolitanization: The 1000-District in Rishon-Lezion, Israel as a Case Study

    Interest in the role of large urban development (LUD) projects in the regeneration efforts of cities has risen in recent years. Studies of their planning process have often focused on global cities, examining challenges associated with their joint (public–private) governance structure, as well as those emanating from the need to balance local and global needs and interests. With few exceptions, the ways in which these projects fit in with the metropolitan aspirations of small and medium cities have largely been overlooked. In this article, we explore how a large-scale project was used by local authorities to reposition a secondary city as a sub-metropolitan center. Using the case of the 1000-District (Mitcham HaElef) in the Israeli city of Rishon-Lezion, we argue that while the project was originally designed to resolve the city’s scarce employment problem, it was gradually used to endow the city with higher-order urban qualities, re-situating it as a sub-metropolitan center in the Tel-Aviv area. To support our argument, we focus on the project’s housing and employment components, including the changes they were subjected to along the planning process, as well as on the marketing campaign, which sought to re-present the city as a viable sub-metropolitan alternative. Drawing on qualitative methods, including personal interviews and content analysis, the article illustrates how one city’s large project is instrumentalized to attain metro-scale objectives. In so doing, it contributes to a nuanced understanding of the complexity of LUD planning, its stated objectives at various scales, and its implications for actors in and beyond metropolitan jurisdictions.

    Constructing cost-effective infrastructure networks

    Reliable and low-cost infrastructure is crucial in today's world; however, achieving both reliability and low cost at the same time is often challenging. Traditionally, infrastructure networks are designed with a radial topology that lacks redundancy, which makes them vulnerable to disruptions. As a result, network topologies have evolved towards a ring topology with only one redundant edge and, from there, to more complex mesh networks. However, we prove that large rings are unreliable. Our research shows that a sparse mesh network with a small number of redundant edges that follow certain design rules can significantly improve reliability while remaining cost-effective. Moreover, using the SAIDI index, which measures the expected number of consumers disconnected from the source node, we have identified the key areas where adding redundant edges impacts network reliability the most. These findings offer network planners a valuable tool for quickly identifying and addressing reliability issues without the need for complex simulations. Properly planned sparse mesh networks can thus provide a reliable and cost-effective solution to modern infrastructure challenges.
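    As a rough illustration of the SAIDI-style criterion, the sketch below scores a network by the expected number of nodes disconnected from the source under uniformly random single-edge failures, and greedily picks the redundant edge whose addition lowers that score the most. The uniform single-failure model and the networkx representation are simplifying assumptions: real SAIDI weights outages by customers affected and by duration, and the single-failure model favors rings, whereas the paper's analysis of simultaneous failures is what shows large rings to be unreliable.

```python
import itertools
import networkx as nx

def expected_disconnected(G, source):
    """SAIDI-style score: expected number of nodes cut off from `source`,
    averaged over uniformly random single-edge failures. A deliberately
    simplified proxy for the SAIDI index described in the abstract."""
    total = 0
    for e in list(G.edges()):
        H = G.copy()
        H.remove_edge(*e)
        reachable = nx.node_connected_component(H, source)
        total += G.number_of_nodes() - len(reachable)
    return total / G.number_of_edges()

def best_redundant_edge(G, source):
    """Pick the single non-edge whose addition most reduces the score."""
    candidates = [(u, v) for u, v in itertools.combinations(G.nodes(), 2)
                  if not G.has_edge(u, v)]
    def score(edge):
        H = G.copy()
        H.add_edge(*edge)
        return expected_disconnected(H, source)
    return min(candidates, key=score)

# Example: a radial (path) feeder of 8 nodes rooted at node 0.
G = nx.path_graph(8)
print(best_redundant_edge(G, source=0))  # closes a loop back to the source
```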

    A dynamic pattern of local auxin sources is required for root regeneration

    Following removal of its stem cell niche, the root meristem can regenerate by recruitment of remnant cells from the stump. Regeneration is initiated by rapid accumulation of auxin near the injury site, but the source of this auxin is unknown. Here, we show that auxin accumulation arises from the activity of multiple auxin biosynthetic sources that are newly specified near the cut site, and that their continuous activity is required for the regeneration process. Auxin synthesis is highly localized, and PIN-mediated transport is dispensable for auxin accumulation and tip regeneration. Roots lacking the activity of the regeneration competence factor ERF115, or that are dissected at a zone of low regeneration potential, fail to activate local auxin sources. Remarkably, restoring auxin supply is sufficient to confer regeneration capacity on these recalcitrant tissues. We suggest that regeneration competence relies on the ability to specify new local auxin sources in a precise spatio-temporal pattern.

    Temperature measurement in the Intel® Core™ Duo Processor

    Modern CPUs with increasing core frequency and power are rapidly reaching the point where CPU frequency and performance are limited by the amount of heat that can be extracted by the cooling technology. In the mobile environment this issue is even more apparent, as form factors become thinner and lighter; mobile platforms often trade CPU performance in order to reduce power and manage the box thermals. Most of today's high-performance CPUs provide a thermal sensor on the die to allow thermal management, typically in the form of an analog thermal diode. Operating system algorithms and platform embedded controllers read the temperature and control the processor power. In addition to full temperature reading, some products implement digital sensors with a fixed temperature threshold, intended for fail-safe operation. Temperature measurements using the diode suffer some inherent inaccuracies:
    - Measurement accuracy: an external device connects to the diode and performs the A/D conversion. The combination of diode behavior, electrical noise and conversion accuracy results in measurement error.
    - Distance to the die hot spot: due to routing restrictions, the diode is not placed at the hottest spot on the die. The temperature difference between the diode and the hot spot varies with the workload, so the reported temperature does not accurately represent the die's maximum temperature. This offset increases as the power density of the CPU increases. Multi-core CPUs pose an even harder problem, as the workload and the thermal distribution change with the set of active cores.
    - Manufacturing temperature accuracy: inaccuracies in the test environment induce an additional discrepancy between the measured temperature and the actual temperature.
    As a result of these effects, the thermal control algorithm must add a temperature guard band to account for the control feedback errors. These guard bands impact the performance and reliability of the silicon. In order to address these thermal control issues, the Intel® Core™ Duo introduced a new on-die digital temperature reading capability. Multiple thermal sensors are distributed on the die at the different possible hot spots. A/D logic built around these sensors translates the temperature into a digital value, accessible to operating system thermal control software or driver-based control mechanisms. Providing a high-accuracy temperature reading requires a calibration process: during high-volume manufacturing, each sensor is calibrated to provide good accuracy and linearity. The die specification and reliability limits are defined by the hottest spot on the die, and the calibration of the sensor is done under the same test conditions as the specification testing. Any test control inaccuracy is eliminated, because the part is guaranteed to meet specifications at maximum temperature as measured by the digital thermometer. As a result, the use of the integrated thermal sensor enables improved reliability and performance at high workloads while meeting specifications at any time. In this paper we present the implementation and calibration details of the digital thermometer, show some studies of the temperature distribution on die, and compare traditional diode-based measurement to the digital sensor implementation.
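    The calibration and readout mechanism described in the paper lives in the silicon and manufacturing flow, but the calibrated digital readings are what OS-level thermal control software ultimately consumes. As a minimal sketch, assuming a Linux system where the coretemp driver exposes these digital thermal sensors through the hwmon sysfs interface (values in millidegrees Celsius), per-core temperatures can be read like this:

```python
from pathlib import Path

def read_coretemp():
    """Collect per-core digital thermal sensor readings exposed by the
    Linux coretemp driver via sysfs (values are millidegrees Celsius).
    Assumes a standard hwmon layout; returns {} if the driver is absent."""
    readings = {}
    for hwmon in Path("/sys/class/hwmon").glob("hwmon*"):
        name = hwmon / "name"
        if not name.exists() or name.read_text().strip() != "coretemp":
            continue
        for temp_input in hwmon.glob("temp*_input"):
            label_file = temp_input.with_name(
                temp_input.name.replace("_input", "_label"))
            label = (label_file.read_text().strip()
                     if label_file.exists() else temp_input.name)
            readings[label] = int(temp_input.read_text()) / 1000.0
    return readings

print(read_coretemp())  # e.g., {'Core 0': 47.0, 'Core 1': 45.0, ...}
```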

    Characterization of Secure Multiparty Computation Without Broadcast

    A major challenge in the study of cryptography is characterizing the necessary and sufficient assumptions required to carry out a given cryptographic task. The focus of this work is the necessity of a broadcast channel for securely computing symmetric functionalities (where all the parties receive the same output) when one third of the parties, or more, might be corrupted. Assuming all parties are connected via a peer-to-peer network, but no broadcast channel (nor a secure setup phase) is available, we prove the following characterization:
    - A symmetric n-party functionality can be securely computed facing n/3 ≤ t < n/2 corruptions (i.e., honest majority) if and only if it is (n-2t)-dominated; a functionality is k-dominated if any k-size subset of its input variables can be set to determine its output.
    - Assuming the existence of one-way functions, a symmetric n-party functionality can be securely computed facing t ≥ n/2 corruptions (i.e., no honest majority) if and only if it is 1-dominated and can be securely computed with broadcast.
    It follows that, in case a third of the parties might be corrupted, broadcast is necessary for securely computing non-dominated functionalities (in which small subsets of the inputs cannot determine the output), including, as interesting special cases, the Boolean XOR and coin-flipping functionalities.
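    The domination notion is easy to check by brute force for small Boolean functionalities. The sketch below follows the abstract's wording (every k-size subset of inputs admits an assignment that fixes the output regardless of the remaining inputs); note that the paper's formal definition may additionally constrain the dominated output value, so treat this as an illustration of the idea rather than the paper's exact predicate.

```python
from itertools import combinations, product

def is_k_dominated(f, n, k):
    """Brute-force check of k-domination for a Boolean n-input
    functionality f, per the abstract's wording: every k-size subset
    of the input variables can be set so that the output is fixed
    no matter how the remaining inputs are chosen."""
    for subset in combinations(range(n), k):
        rest = [i for i in range(n) if i not in subset]
        dominated = False
        for fixed in product([0, 1], repeat=k):
            outputs = set()
            for free in product([0, 1], repeat=len(rest)):
                x = [0] * n
                for i, v in zip(subset, fixed):
                    x[i] = v
                for i, v in zip(rest, free):
                    x[i] = v
                outputs.add(f(tuple(x)))
            if len(outputs) == 1:  # this assignment forces the output
                dominated = True
                break
        if not dominated:
            return False
    return True

# Boolean OR is 1-dominated (any single input set to 1 forces the
# output), whereas XOR is non-dominated, matching the abstract.
print(is_k_dominated(lambda x: max(x), 3, 1))      # True
print(is_k_dominated(lambda x: sum(x) % 2, 3, 1))  # False
```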

    From Fairness to Full Security in Multiparty Computation

    In the setting of secure multiparty computation (MPC), a set of mutually distrusting parties wish to jointly compute a function, while guaranteeing the privacy of their inputs and the correctness of the output. An MPC protocol is called fully secure if no adversary can prevent the honest parties from obtaining their outputs. A protocol is called fair if an adversary can prematurely abort the computation, but only before learning any new information. We present highly efficient transformations from fair computations to fully secure computations, assuming the fraction of honest parties is constant (e.g., 1% of the parties are honest). Compared to previous transformations, which require a linear number (in the number of parties) of invocations of the fair computation, our transformations require only a super-logarithmic, and sometimes even super-constant, number of such invocations. The main idea is to delegate the computation to randomly chosen committees that invoke the fair computation. Apart from the benefit of uplifting security, the reduction in the number of parties is also useful, since only committee members are required to work, whereas the remaining parties simply listen to the computation over a broadcast channel. One application of these transformations is a new δ-bias coin-flipping protocol whose round complexity has a super-logarithmic dependency on the number of parties, improving over the protocol of Beimel, Omri, and Orlov (Crypto 2010), which has a linear dependency. A second application is a new fully secure protocol for computing the Boolean OR function, with a super-constant round complexity, improving over the protocol of Gordon and Katz (TCC 2009), whose round complexity is linear in the number of parties. Finally, we show that our positive results are in a sense optimal, by proving that for some functionalities, a super-constant number of (sequential) invocations of the fair computation is necessary for computing the functionality in a fully secure manner.
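    A quick calculation illustrates why small random committees help. The sketch below computes the exact probability that a uniformly sampled committee contains no honest party at all, under the abstract's example of a 1% honest fraction; the population size and committee sizes are illustrative assumptions, and the paper's protocols of course need stronger guarantees than this single event.

```python
from math import comb, log2

def p_all_corrupt(n, honest_frac, s):
    """Probability that a uniformly random s-member committee drawn
    from n parties contains no honest party, given that an
    honest_frac fraction of the parties are honest (exact,
    via the hypergeometric tail)."""
    h = int(honest_frac * n)
    return comb(n - h, s) / comb(n, s)

n = 100_000         # total number of parties (illustrative)
honest_frac = 0.01  # e.g., 1% honest, as in the abstract
# Failure probability shrinks as committee size grows past log(n).
for s in (int(log2(n)), int(log2(n) ** 1.5), int(log2(n) ** 2)):
    print(f"committee size {s}: P[no honest member] = "
          f"{p_all_corrupt(n, honest_frac, s):.4f}")
```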