
    Random Search Plus: A more effective random search for machine learning hyperparameters optimization

    Machine learning hyperparameter optimization has always been key to improving model performance. There are many hyperparameter optimization methods; popular ones include grid search, random search, manual search, Bayesian optimization, and population-based optimization. Random search requires less computation than grid search, but at a penalty in accuracy. This thesis proposes a more effective random search method based on traditional random search and hyperparameter-space separation, named random search plus, and empirically shows that it is more effective than random search. Case studies compare the two methods using four machine learning algorithms (K-NN, K-means, Neural Networks, and Support Vector Machine) as optimization objects, with three datasets of different sizes (Iris flower, Pima Indians diabetes, and the MNIST handwritten digits dataset). Compared to traditional random search, random search plus can find better hyperparameters, or achieve an equivalent optimization in less time, in most cases. With a suitable space-separation strategy, it may need only 10% of the runtime of random search to reach an equivalent optimization, or, in the same runtime as random search, increase the accuracy of supervised learning and the silhouette coefficient of unsupervised learning by 5%-30%. The distribution of the best hyperparameters found by the two methods in the hyperparameter space shows that random search plus searches more globally than random search. The thesis also discusses future work, such as the feasibility of using a genetic algorithm to improve the local optimization ability of random search plus, and space division for non-integer hyperparameters.
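The abstract does not give implementation details, but the core idea of combining random search with hyperparameter-space separation can be sketched as follows. The function names, the equal-width splitting of each range, and the even division of the trial budget across subspaces are illustrative assumptions, not the thesis's actual method:

```python
import random

def random_search(space, objective, n_trials, rng):
    """Plain random search: sample n_trials points uniformly from `space`
    (a dict mapping hyperparameter name to a (low, high) range) and keep
    the best-scoring point."""
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        score = objective(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

def split_space(space, k):
    """One possible separation strategy: cut every hyperparameter range
    into k equal-width sub-ranges, producing k subspaces."""
    return [
        {name: (lo + i * (hi - lo) / k, lo + (i + 1) * (hi - lo) / k)
         for name, (lo, hi) in space.items()}
        for i in range(k)
    ]

def random_search_plus(space, objective, n_trials, k, seed=0):
    """Run a short random search inside each subspace and return the
    overall best point, spreading the trial budget evenly."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for sub in split_space(space, k):
        params, score = random_search(sub, objective, n_trials // k, rng)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective with its maximum at lr = 0.37.
space = {"lr": (0.0, 1.0)}
objective = lambda p: -(p["lr"] - 0.37) ** 2
params, score = random_search_plus(space, objective, n_trials=200, k=4)
```

Because every subspace is guaranteed some of the trial budget, points far from the regions that plain random search happens to favour still get sampled, which matches the abstract's observation that the method behaves more globally.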

    A simplified search strategy for identifying randomised controlled trials for systematic reviews of health care interventions : a comparison with more exhaustive strategies

    Background It is generally believed that exhaustive searches of bibliographic databases are needed for systematic reviews of health care interventions. The CENTRAL database of randomised controlled trials (RCTs) has been built up by exhaustive searching. The CONSORT statement aims to encourage better reporting, and hence indexing, of RCTs. Our aim was to assess whether developments in the CENTRAL database, and the CONSORT statement, mean that a simplified search strategy now suffices for identifying RCTs for systematic reviews of health care interventions. Methods RCTs used in Cochrane reviews were identified. A brief RCT search strategy (BRSS), consisting of a search of CENTRAL, followed by a search for variants of the word random across all fields ("random$.af.") in MEDLINE and EMBASE, was devised and run. Any trials included in the meta-analyses, but missed by the BRSS, were identified. The meta-analyses were then re-run, with and without the missed RCTs, and the differences quantified. The proportion of trials with variants of the word random in the title or abstract was calculated for each year. The number of RCTs retrieved by searching with "random$.af." was compared to the highly sensitive search strategy (HSSS). Results The BRSS had a sensitivity of 94%. It found all journal RCTs in 47 of the 57 reviews. The missing RCTs made significant differences to a small proportion of the total outcomes in only five reviews, and no important differences in conclusions resulted.
In the post-CONSORT years, 1997–2003, the percentage of RCTs with random in the title or abstract was 85%, a mean increase of 17% compared to the seven years pre-CONSORT (95% CI, 8.3% to 25.9%). The search using "random$.af." reduced the MEDLINE retrieval by 84%, compared to the HSSS, thereby reducing the workload of checking retrievals. Conclusion A brief RCT search strategy is now sufficient to locate RCTs for systematic reviews in most cases. Exhaustive searching is no longer cost-effective, because in effect it has already been done for CENTRAL.

    Do Targeted Hiring Subsidies and Profiling Techniques Reduce Unemployment?

    To reduce unemployment, targeted hiring subsidies for the long-term unemployed are often recommended. To explore their effect on employment and wages, we devise a model with two types of unemployed workers and two methods of search: a public employment service (PES) and random search. The eligibility of a new match depends on the applicant's unemployment duration and on the method of search. The hiring subsidy raises job destruction and, contrary to Mortensen-Pissarides (1999, 2003), extends the duration of job search, so that equilibrium unemployment increases. Like the subsidy, organizational reforms that advance the search effectiveness of the PES crowd out active jobseekers and reduce overall employment as well as social welfare. Nevertheless, such reforms are a visible success for the PES and its target group, as they significantly increase the service's placement rate and lower the duration of a job search via the PES.
    Keywords: matching model, hiring subsidy, endogenous separation rate, active labour market policy, PES, random search