
    Boosting with early stopping: Convergence and consistency

    Boosting is one of the most significant advances in machine learning for classification and regression. In its original and computationally flexible version, boosting seeks to minimize a loss function empirically in a greedy fashion. The resulting estimator takes an additive function form and is built iteratively by applying a base estimator (or learner) to updated samples depending on the previous iterations. An unusual regularization technique, early stopping, is employed based on cross-validation or a test set. This paper studies the numerical convergence, consistency and statistical rates of convergence of boosting with early stopping, when it is carried out over the linear span of a family of basis functions. For general loss functions, we prove the convergence of boosting's greedy optimization to the infimum of the loss function over the linear span. Using the numerical convergence result, we find early-stopping strategies under which boosting is shown to be consistent based on i.i.d. samples, and we obtain bounds on the rates of convergence for boosting estimators. Simulation studies are also presented to illustrate the relevance of our theoretical results for providing insights into practical aspects of boosting. As a side product, these results also reveal the importance of restricting the greedy search step-sizes, as known in practice through the work of Friedman and others. Moreover, our results lead to a rigorous proof that for a linearly separable problem, AdaBoost with step-size ε → 0 becomes an L^1-margin maximizer when left to run to convergence.
    Comment: Published at http://dx.doi.org/10.1214/009053605000000255 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
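    The greedy stagewise procedure with a small step size and validation-based early stopping can be sketched as follows. This is an illustrative sketch only, using squared loss and decision stumps as base learners; the function name, the L2 loss choice, and the patience-based stopping rule are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def boost_with_early_stopping(X, y, X_val, y_val, step=0.1, max_iters=200, patience=10):
    """Greedy L2 boosting over decision stumps, stopped by validation loss.
    Illustrative sketch: fits each stump to the current residuals (the
    negative gradient of squared loss) and adds it with a small step size."""
    n, d = X.shape
    pred = np.zeros(n)
    pred_val = np.zeros(len(y_val))
    stumps = []
    best_val, best_iter = np.inf, 0
    for t in range(max_iters):
        resid = y - pred  # negative gradient of the squared loss
        # Greedy search over all (feature, threshold) stumps for the best fit.
        best = None
        for j in range(d):
            for thr in np.unique(X[:, j]):
                mask = X[:, j] <= thr
                if mask.all() or not mask.any():
                    continue
                left, right = resid[mask].mean(), resid[~mask].mean()
                sse = ((resid - np.where(mask, left, right)) ** 2).sum()
                if best is None or sse < best[0]:
                    best = (sse, j, thr, left, right)
        _, j, thr, left, right = best
        stumps.append((j, thr, left, right))
        # Restricted step size, as emphasized in the abstract.
        pred += step * np.where(X[:, j] <= thr, left, right)
        pred_val += step * np.where(X_val[:, j] <= thr, left, right)
        val_loss = ((y_val - pred_val) ** 2).mean()
        if val_loss < best_val:
            best_val, best_iter = val_loss, t
        elif t - best_iter >= patience:
            break  # early stop: no validation improvement for `patience` rounds
    return stumps[: best_iter + 1], step
```

    The returned list of stumps defines the additive estimator truncated at the best validation iteration, which is the early-stopping regularization the paper analyzes.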

    A study of effectiveness of ground improvement for liquefaction mitigation

    Our ability to identify soil liquefaction potential is now better than our ability to mitigate it economically and effectively. According to a recent survey on liquefaction mitigation conducted by the Deep Foundations Institute (DFI), the majority of the U.S. geotechnical engineering community regards remedial design methods and the verification of their effectiveness as somewhat to highly non-uniform. Many recent reconnaissance reports also indicate that previous remedial designs of ground improvement for liquefaction mitigation are not as reliable as expected. Hence, the lack of a uniform framework for evaluating and comparing improvement effectiveness is an important factor behind insufficient or inefficient remedial design for liquefaction mitigation by ground improvement. An efficient, representative and comprehensive collection, evaluation and comparison of quantitative effectiveness data is a great challenge in liquefaction mitigation practice, and the key issue is the establishment of a uniform evaluation framework. These expectations and objectives comply well with the requirements of an evolving seismic design guideline termed Performance-Based Design (PBD). The process of establishing such an evaluation and comparison framework is also a process of re-evaluating improved performance and optimizing seismic design within the framework of PBD. To establish the uniform framework, a comprehensive numerical study is conducted to identify the failure mechanisms of a well-documented case history: an unimproved caisson quay wall in liquefiable soil reported after the 1995 Kobe earthquake. After calibration of the numerical model against this case study, in a second step, various remedial methods, including the stone column, vibro-compaction and deep soil mixing methods, are evaluated for improving the performance of this specific quay wall.
For each analyzed countermeasure, a comprehensive parametric study is conducted to optimize the remedial design by determining the optimum design parameters. Eventually, all of the improved performance data, termed Engineering Demand Parameters (EDPs) and expressed in terms of quay wall seismic deformation and performance grades, are plotted together to show the differences in improvement effectiveness achieved by the various examined cases, which differ in the remedial methods and/or design parameters used. The results are also used to rank the analyzed remedial methods and optimize their designs. In addition, as a stand-alone product of this study, a simplified chart method is proposed to estimate the improved deformation of caisson quay walls placed in liquefiable soil. With this method, the improved deformation of the walls after an earthquake can be reasonably estimated from the peak ground acceleration (PGA) of the ground motion, the improvement zone dimensions and the improved soil properties. The results of this study show that failure of the examined caisson quay wall is induced by deformation of the foundation and backfill soils. For all the analyzed remedial methods, improving the top 10 to 15 m of foundation soil under the wall and the first 20 to 25 m of backfill soil behind the quay wall shows the best improvement efficiency. The improved performance of the quay wall is estimated to be acceptable, requiring only reasonable restoration effort to fully repair the damage under an earthquake motion with a 10 percent probability of exceedance during the wall's life-span. Different remedial designs using the various methods are classified into three categories depending on their improved performance grades. Future research is recommended to include verification, implementation and updating of the proposed framework to advance the state-of-the-art of liquefaction mitigation using ground improvement.

    Clogging effects of portland cement pervious concrete

    Portland Cement Pervious Concrete (PCPC) is a unique and effective means of addressing important environmental issues and supporting green, sustainable growth by reducing stormwater runoff and providing treatment of the pollutants it contains. As a replacement for conventional impermeable pavement, PCPC has seen increasing use in recent years. Clogging of PCPC, which can lead to problems in serviceability, has been regarded as one of the primary drawbacks of PCPC systems. The clogging potential of pervious concrete with three void ratios was examined using three different soil types: sand, clayey silt and clayey silty sand. Cylindrical pervious concrete specimens were exposed to sediments mixed in water to simulate runoff with small and large loads of soil sediment. Pressure washing, vacuuming and a combination of the two were used as rehabilitation methods to clean the clogged specimens. The clogging tests were conducted in a falling-head permeability apparatus by allowing the dirty water to flow through the specimen. A clogging cycle included both a clogging and a cleaning procedure, and the permeability was determined during the clogging procedure and after the cleaning procedure in each cycle. Twenty clogging cycles were repeated on each sample to simulate 20 years of pavement service life. The results show that the magnitude and rate of permeability reduction, as well as the permeability recovery achieved by rehabilitation, are significantly affected by sediment type, specimen void ratio and the choice of rehabilitation method. The results provide a quantitative evaluation of the clogging effect of pervious concrete and a comparison of the tested rehabilitation methods in terms of permeability recovery.
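    The permeability in a falling-head test is conventionally computed from the standard relation k = (a·L)/(A·t)·ln(h1/h2). A minimal sketch of that computation follows; the function name and the choice of consistent units are assumptions for illustration, not details from the study itself.

```python
import math

def falling_head_k(a, A, L, t, h1, h2):
    """Coefficient of permeability from a falling-head test.

    a  -- cross-sectional area of the standpipe
    A  -- cross-sectional area of the specimen
    L  -- length of the specimen
    t  -- elapsed time for the head to fall from h1 to h2
    h1 -- initial hydraulic head, h2 -- final hydraulic head
    All lengths/areas/times in consistent units; k comes out in length/time.
    """
    return (a * L) / (A * t) * math.log(h1 / h2)
```

    Repeating this measurement during each clogging cycle, and again after each cleaning procedure, gives the permeability-reduction and permeability-recovery curves the abstract describes.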

    Distributed sensing coverage maintenance in sensor networks

    Sensing coverage is one of the key performance indicators of a large-scale sensor network. Sensing coverage holes may appear anywhere in the network field at any time due to random deployment, depletion of sensor battery power, or natural events in the deployment environment, such as strong wind blowing some sensors away. Discovering the exact boundaries of coverage holes is important because it enables fast and efficient patching of the holes. In this thesis, we propose a framework for sensing coverage maintenance in sensor networks. In our framework, a sensor network consists of stationary and mobile sensors, where the mobile sensors are used as patching hosts. We divide coverage maintenance into two components, coverage hole discovery and coverage hole patching, and propose new solutions to both. (1) We present two efficient distributed algorithms that periodically discover the precise boundaries of coverage holes. Our algorithms can handle the case in which the transmission range of a sensor is smaller than twice its sensing range, a case largely ignored by previous work. (2) We present an efficient hole patching algorithm, which runs in linear time, based on knowledge of the precise boundary of each coverage hole. We further propose new solutions for looking up available patching hosts and for movement planning. We present rigorous mathematical proofs of the correctness of the proposed hole discovery algorithms. We also establish, through solid mathematical analysis, the running time of our hole patching algorithm and its performance bound on the number of mobile sensors needed. Our simulation results show that our distributed discovery algorithms are much more efficient than their centralized counterparts in terms of network overhead and total discovery time, while achieving the same correctness in discovering the boundaries of coverage holes.
    Furthermore, our patching algorithm performs well in terms of the number of mobile sensors needed while running in linear time, and our hole patching scheme achieves fast hole patching when mobile sensors are moved in parallel.
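    The underlying coverage condition (a point is covered if it lies within the sensing range of at least one sensor) can be sketched with a brute-force grid check. Note that this centralized sketch only illustrates the coverage condition itself; the thesis's algorithms are distributed and recover exact hole boundaries rather than grid approximations. All names and parameters here are illustrative assumptions.

```python
def uncovered_points(sensors, sensing_range, width, height, grid_step=1.0):
    """Return grid points in a width x height field that lie outside the
    sensing disk of every sensor (a crude approximation of coverage holes).

    sensors       -- list of (x, y) sensor positions
    sensing_range -- sensing disk radius, same units as the field
    """
    holes = []
    r2 = sensing_range ** 2
    y = 0.0
    while y <= height:
        x = 0.0
        while x <= width:
            # Covered iff some sensor is within sensing_range (squared test
            # avoids the sqrt).
            if not any((x - sx) ** 2 + (y - sy) ** 2 <= r2 for sx, sy in sensors):
                holes.append((x, y))
            x += grid_step
        y += grid_step
    return holes
```

    A patching host would then be dispatched so that its sensing disk covers the reported uncovered region; the thesis's linear-time patching algorithm does this from the exact hole boundary instead of a point grid.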

    The Structure of O-antigen from Lipopolysaccharide of Rhizobium leguminosarum 128C53 and Its Nod-Fix-Mutant

    The LPS of R. leguminosarum 128C53 smr rifr (a streptomycin- and rifampicin-resistant strain of wild type 128C53) and its mutant ANU54 (nod-, fix-) were isolated from the bacterial pellet by hot phenol/water extraction followed by gel filtration chromatography. The O-antigen was released by mild-acid hydrolysis of the LPS and purified by gel filtration chromatography on Sephadex G-50 or G-25. The following results were the same for both the parent and the mutant. The composition and linkage of the O-antigen were determined by gas chromatography (GC), GC-mass spectrometry, and 1H and 13C nuclear magnetic resonance (NMR) spectroscopy. The data indicate that the O-antigens of the LPS from parent R. leguminosarum 128C53 and its mutant ANU54 are identical. The O-antigen contains a tetrasaccharide repeating unit whose backbone consists of one 1,3-linked rhamnose and two 1,3-linked fucose residues. A terminal mannose is linked to the 2-position of one of the two fucose residues. The 1H-NMR analysis indicates that all the glycosyl residues are alpha-linked. The exact position of the mannose residue is under further investigation.

    Comorbidity of cardiovascular disease, diabetes and chronic kidney disease in Australia

    This is the first report of a projected series on the comorbidity of cardiovascular disease (CVD), diabetes and chronic kidney disease (CKD) in Australia. Comorbidity refers to any two or more of these diseases occurring in one person at the same time. The questions to be answered in this report include: 1. How many Australians have comorbidity of CVD, diabetes and CKD? 2. What proportion of hospitalisations involve these comorbidities? 3. How much do these comorbidities contribute to deaths? 4. What is the magnitude of comorbidity in the context of each individual disease? 5. Are there differences in the distribution of these comorbidities among age groups and sexes?

    Spin-valley qubit in nanostructures of monolayer semiconductors: Optical control and hyperfine interaction

    We investigate the possibilities for optical control of a spin-valley qubit carried by a single electron localized in nanostructures of monolayer TMDs, including small quantum dots formed by lateral heterojunctions and by charged impurities. Quantum control is discussed both when the confinement induces valley hybridization and when valley hybridization is absent. We show that the bulk valley and spin optical selection rules can be inherited in different forms in the two scenarios, both of which allow the definition of a spin-valley qubit with the desired optical controllability. We also investigate nuclear-spin-induced decoherence and quantum control of electron-nuclear spin entanglement via the intervalley terms of the hyperfine interaction. Optically controlled two-qubit operations in a single quantum dot are discussed.
    Comment: 17 pages, 10 figures