
    Eco-efficient process based on conventional machining as an alternative technology to chemical milling of aeronautical metal skin panels

    Chemical milling is a process designed to reduce the weight of metal skin panels. It has been used in the aerospace industry since the 1950s despite the environmental concerns associated with it. Among its advantages, chemical milling does not induce residual stresses and parts meet the strict tolerances of aerospace design. However, it is a polluting and costly technology that is being phased out. Thanks to recent advances in conventional milling, machining can achieve the required tolerances provided that vibration and part deflection are avoided; both problems are usually related to the cutting parameters and the workholding. This thesis analyses the causes of cutting instability and part deformation through a literature review covering analytical models, computational techniques and the industrial solutions currently under study. Analytical and computational approaches focus mainly on off-line prediction of chatter and part deflection, whereas industrial approaches focus on the design of workholdings, fixtures, actuator-based dampers, stiffening devices, adaptive control systems supported by simulation or statistical parameter selection, and CAM solutions combined with virtual-twin applications. Because the review found few works on thin skins and thin floors, the effect of the cutting parameters on their machining was also studied experimentally. These experiments confirmed that, even with a workholding that ensured part rigidity, the skins behaved differently from a rigid body in terms of machining forces at cutting speeds approaching high-speed machining. Roughness met the required tolerances for every set of tested parameters, and a proper selection of machining parameters reduced cutting forces and process tolerances by up to 20% and 40%, respectively. These results can be used industrially to simplify workholding or to increase process efficiency. The process can also be improved by extending tool life with cutting fluids, since proper lubrication can reduce process temperature and the residual stresses induced in the part. For this purpose, different water-based lubricants containing three types of ionic liquids (ILs) were developed and compared with dry contact and a commercial cutting fluid by studying the tribological behavior of the contact pair. Pin-on-disk tests with 1 wt% of the ILs show that the halogen-free IL significantly reduces wear and friction between the aluminum workpiece material and the tungsten carbide tool material, removing nearly all aluminum adhesion on the pin, which could considerably increase tool life.
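    The tribological comparison above relies on standard pin-on-disk metrics. As a minimal sketch of how such measurements are commonly reduced to a mean friction coefficient and an Archard-type specific wear rate, the following Python snippet uses placeholder numbers; the function names and values are illustrative assumptions, not data or code from the thesis.

```python
# Minimal sketch: reducing pin-on-disk measurements to a mean friction
# coefficient and a specific wear rate k = V / (F * s). All numbers below are
# placeholders for illustration, not results from this thesis.
import numpy as np

def friction_coefficient(tangential_force_N, normal_load_N):
    """Instantaneous friction coefficient mu = F_t / F_n."""
    return np.asarray(tangential_force_N) / normal_load_N

def specific_wear_rate(wear_volume_mm3, normal_load_N, sliding_distance_m):
    """Specific wear rate in mm^3 / (N * m)."""
    return wear_volume_mm3 / (normal_load_N * sliding_distance_m)

# Hypothetical dry vs. 1 wt% ionic-liquid lubricated tests.
normal_load = 10.0                      # applied normal load, N
distance = 500.0                        # total sliding distance, m
mu_dry = friction_coefficient([4.2, 4.0, 4.3], normal_load).mean()
mu_il = friction_coefficient([1.1, 1.0, 1.2], normal_load).mean()
k_dry = specific_wear_rate(0.030, normal_load, distance)
k_il = specific_wear_rate(0.004, normal_load, distance)
print(f"mu dry={mu_dry:.2f}, IL={mu_il:.2f}; k dry={k_dry:.1e}, IL={k_il:.1e} mm^3/(N*m)")
```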

    Fast Cross-Validation via Sequential Testing

    With the increasing size of today's data sets, finding the right parameter configuration in model selection via cross-validation can be an extremely time-consuming task. In this paper we propose an improved cross-validation procedure which uses nonparametric testing coupled with sequential analysis to determine the best parameter set on linearly increasing subsets of the data. By eliminating underperforming candidates quickly and keeping promising candidates as long as possible, the method speeds up the computation while preserving the capability of full cross-validation. Theoretical considerations underline the statistical power of our procedure. The experimental evaluation shows that our method reduces the computation time by a factor of up to 120 compared to a full cross-validation, with a negligible impact on accuracy.
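    As a rough illustration of the idea, the following Python sketch evaluates candidate configurations on linearly growing subsets and drops a candidate once a nonparametric test (here a one-sided Wilcoxon signed-rank test on paired fold scores) says it is significantly worse than the current best. The helper name, test choice and thresholds are simplifying assumptions, not the exact procedure of the paper.

```python
# Sketch of sequential-testing cross-validation on growing data subsets.
import numpy as np
from scipy.stats import wilcoxon
from sklearn.base import clone
from sklearn.model_selection import cross_val_score

def fast_cv(candidates, X, y, steps=10, cv=5, alpha=0.05):
    """candidates: dict name -> unfitted scikit-learn estimator."""
    n = len(y)
    alive = dict(candidates)
    history = {name: [] for name in candidates}      # accumulated fold scores
    for step in range(1, steps + 1):
        m = int(n * step / steps)                     # linearly increasing subset
        for name, est in alive.items():
            history[name].extend(cross_val_score(clone(est), X[:m], y[:m], cv=cv))
        best = max(alive, key=lambda name: np.mean(history[name]))
        for name in list(alive):
            if name == best:
                continue
            a, b = history[best], history[name]
            k = min(len(a), len(b))
            # Drop a candidate whose paired fold scores are significantly worse.
            if (k >= 5 and not np.allclose(a[:k], b[:k])
                    and wilcoxon(a[:k], b[:k], alternative="greater").pvalue < alpha):
                del alive[name]
    return max(alive, key=lambda name: np.mean(history[name]))
```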

    Massively-Parallel Feature Selection for Big Data

    We present the Parallel, Forward-Backward with Pruning (PFBP) algorithm for feature selection (FS) in Big Data settings (high dimensionality and/or sample size). To tackle the challenges of Big Data FS, PFBP partitions the data matrix both in terms of rows (samples, training examples) and columns (features). By employing the concepts of p-values of conditional independence tests and meta-analysis techniques, PFBP manages to rely only on computations local to a partition while minimizing communication costs. Then, it employs powerful and safe (asymptotically sound) heuristics to make early, approximate decisions, such as Early Dropping of features from consideration in subsequent iterations, Early Stopping of consideration of features within the same iteration, or Early Return of the winner in each iteration. PFBP provides asymptotic guarantees of optimality for data distributions faithfully representable by a causal network (Bayesian network or maximal ancestral graph). Our empirical analysis confirms a super-linear speedup of the algorithm with increasing sample size and linear scalability with respect to the number of features and processing cores, while dominating other competitive algorithms in its class.
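    The following Python sketch illustrates only the meta-analysis flavor of the approach: partition-local p-values are combined with Fisher's method and features with no combined evidence of association are dropped early. The univariate test, function names and thresholds are simplifying assumptions and not the PFBP algorithm itself.

```python
# Sketch: combine partition-local p-values and drop unpromising features early.
import numpy as np
from scipy import stats

def combine_pvalues_fisher(pvals):
    """Fisher's method: -2 * sum(log p) ~ chi^2 with 2k degrees of freedom."""
    pvals = np.clip(np.asarray(pvals, dtype=float), 1e-300, 1.0)
    statistic = -2.0 * np.log(pvals).sum()
    return stats.chi2.sf(statistic, df=2 * len(pvals))

def early_drop(partitions, y_parts, alpha_drop=0.5):
    """partitions: list of (n_i, p) feature blocks; y_parts: matching targets.
    Returns indices of features surviving the early-dropping filter."""
    n_features = partitions[0].shape[1]
    survivors = []
    for j in range(n_features):
        local_p = [stats.pearsonr(X[:, j], y)[1]          # local association test
                   for X, y in zip(partitions, y_parts)]
        if combine_pvalues_fisher(local_p) < alpha_drop:   # keep if some evidence
            survivors.append(j)
    return survivors
```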

    Search for non-relativistic Magnetic Monopoles with IceCube

    The IceCube Neutrino Observatory is a large Cherenkov detector instrumenting $1\,\mathrm{km}^3$ of Antarctic ice. The detector can be used to search for signatures of particle physics beyond the Standard Model. Here, we describe the search for non-relativistic magnetic monopoles as remnants of the GUT (Grand Unified Theory) era shortly after the Big Bang. These monopoles may catalyze the decay of nucleons via the Rubakov-Callan effect with a cross section suggested to be in the range of $10^{-27}\,\mathrm{cm^2}$ to $10^{-21}\,\mathrm{cm^2}$. In IceCube, the Cherenkov light from nucleon decays along the monopole trajectory would produce a characteristic hit pattern. This paper presents the results of an analysis of first data taken from May 2011 until May 2012 with a dedicated slow-particle trigger for DeepCore, a subdetector of IceCube. A second analysis provides better sensitivity for the brightest non-relativistic monopoles using data taken from May 2009 until May 2010. In both analyses no monopole signal was observed. For catalysis cross sections of $10^{-22}\,(10^{-24})\,\mathrm{cm^2}$ the flux of non-relativistic GUT monopoles is constrained up to a level of $\Phi_{90} \le 10^{-18}\,(10^{-17})\,\mathrm{cm^{-2}\,s^{-1}\,sr^{-1}}$ at a 90% confidence level, which is three orders of magnitude below the Parker bound. The limits assume a dominant decay of the proton into a positron and a neutral pion. These results improve the current best experimental limits by one to two orders of magnitude, for a wide range of assumed speeds and catalysis cross sections. Comment: 20 pages, 20 figures

    Forward Attention in Sequence-to-sequence Acoustic Modelling for Speech Synthesis

    This paper proposes a forward attention method for the sequence-to-sequence acoustic modeling of speech synthesis. This method is motivated by the nature of the monotonic alignment from phone sequences to acoustic sequences. Only the alignment paths that satisfy the monotonic condition are taken into consideration at each decoder timestep. The modified attention probabilities at each timestep are computed recursively using a forward algorithm. A transition agent for forward attention is further proposed, which helps the attention mechanism decide whether to move forward or stay at each decoder timestep. Experimental results show that the proposed forward attention method achieves faster convergence and higher stability than the baseline attention method. Besides, forward attention with a transition agent can also improve the naturalness of synthetic speech and effectively control its speed. Comment: 5 pages, 3 figures, 2 tables. Published in the IEEE International Conference on Acoustics, Speech and Signal Processing 2018 (ICASSP 2018).
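    A minimal NumPy sketch of a monotonic forward recursion of this kind is given below: at each decoder timestep only paths that stay on the same phone or advance by one are kept, and the base attention weights are reweighted by the accumulated path probabilities. Variable names and normalization details are illustrative assumptions rather than the paper's exact implementation.

```python
# Sketch of one forward-attention step over N phones.
import numpy as np

def forward_attention_step(alpha_prev, y_t):
    """alpha_prev: accumulated alignment probabilities at step t-1 (length N).
    y_t: attention probabilities from the base mechanism at step t (length N)."""
    shifted = np.concatenate(([0.0], alpha_prev[:-1]))    # alpha_{t-1}(n-1)
    alpha_hat = (alpha_prev + shifted) * y_t               # monotonic paths only
    return alpha_hat / (alpha_hat.sum() + 1e-8)            # renormalize

# Usage: initialize with all probability mass on the first phone.
N, T = 6, 4
alpha = np.eye(N)[0]                                       # alpha_0 = [1, 0, ..., 0]
rng = np.random.default_rng(0)
for t in range(T):
    y_t = rng.dirichlet(np.ones(N))                        # stand-in attention weights
    alpha = forward_attention_step(alpha, y_t)
    context_weights = alpha                                 # weights used for the context vector
```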

    Parallel Hybrid Trajectory Based Metaheuristics for Real-World Problems

    G. Luque, E. Alba, Parallel Hybrid Trajectory Based Metaheuristics for Real-World Problems, in Proceedings of Intelligent Networking and Collaborative Systems, pp. 184-191, 2-4 September 2015, Taipei, Taiwan, IEEE Press.
    This paper proposes a novel algorithm that combines path relinking with a set of cooperating trajectory-based parallel algorithms to yield a new metaheuristic with enhanced search features. Algorithms based on exploring the neighborhood of a single solution, such as simulated annealing (SA), have provided accurate results for a large number of real-world problems in the past. Because of their trajectory-based nature, some advanced models such as the cooperative one are competitive on academic problems but still show many limitations in addressing large-scale instances. In addition, the field of parallel models for trajectory methods has not yet been studied in depth (at least in comparison with parallel population-based models). In this work, we propose a new hybrid algorithm which improves cooperative single-solution techniques by using path relinking, which both reduces the global execution time and improves the efficacy of the method. We apply this new model to a large benchmark of instances of two real-world NP-hard problems, DNA fragment assembly and QAP, with competitive results.
    Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.
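    As an illustration of the path relinking ingredient mentioned above, the following Python sketch walks from one permutation solution toward a guiding solution (as in QAP-style encodings), fixing one differing position per step and keeping the best intermediate solution. The move operator, cost function and greedy step choice are illustrative assumptions, not the algorithm of the paper.

```python
# Sketch of path relinking between two permutation solutions.
import random

def path_relinking(start, guide, cost):
    """Walk from `start` toward `guide`, fixing one differing position per step
    and remembering the best intermediate solution found along the path."""
    current = list(start)
    best, best_cost = list(current), cost(current)
    while current != list(guide):
        diffs = [i for i, (a, b) in enumerate(zip(current, guide)) if a != b]
        candidates = []
        for i in diffs:
            neighbor = list(current)
            j = neighbor.index(guide[i])          # swap the guide's value into position i
            neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
            candidates.append((cost(neighbor), neighbor))
        step_cost, current = min(candidates, key=lambda c: c[0])
        if step_cost < best_cost:
            best, best_cost = list(current), step_cost
    return best, best_cost

# Toy usage with a made-up cost: number of mismatches against a hidden target.
target = [3, 1, 4, 0, 2]
cost = lambda p: sum(a != b for a, b in zip(p, target))
s1 = random.sample(range(5), 5)
s2 = random.sample(range(5), 5)
print(path_relinking(s1, s2, cost))
```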