Peacock's "History of Arithmetic", an Attempt to reconcile empiricism to universality
When the Whig Anglican algebraist Rev. George Peacock (1791-1858) conceived of his new abstract view of Symbolical Algebra in the 1830s, he had already written, back in the 1820s, an impressive but little-known "History of Arithmetic" for the Encyclopaedia Metropolitana, eventually published in 1845. This paper studies why this "History of Arithmetic" was conceived and how it reinforced Peacock's general view of algebra as a symbolizing process. As a fellow, tutor and lecturer at Trinity College since 1814, Peacock was already involved in the renewal of the mathematics curriculum and of mathematical research in Cambridge, as well as in the cultivation and diffusion of science. As a reformer, Peacock, along with his colleagues in Cambridge, faced the Industrial Revolution, its varied pressures on the country's academic institutions, and its concern with transformation processes. As early as the 1820s, Peacock sought a universal genesis from arithmetic to algebra, founded on the mathematical language of operations, and he launched his "History of Arithmetic" as a large inquiry into the vocabulary that all known tribes and nations used for elementary computations. In this way, he supported a moderately empiricist approach to science, deeply rooted in Locke's philosophy of human understanding. With a comparative and philological approach to numeral languages in hand, Peacock presented first arithmetic and then algebra as the progressive developments of abstract calculating languages, symbolising algorithmic processes. This view accounts for the special place he gave to Indian and Arabic arithmetic in his exposition of contemporary knowledge on numbers.
Lunar electric power systems utilizing the SP-100 reactor coupled to dynamic conversion systems
An integration study was performed by Rocketdyne under contract to NASA-LeRC. The study was concerned with coupling an SP-100 reactor to either a Brayton or a Stirling power conversion system. The application was a surface power system supplying the power requirements of a lunar base. A power level of 550 kWe was selected based on the NASA Space Exploration Initiative 90-day study. Reliability studies were initially performed to determine the optimum power conversion redundancy; this study resulted in selecting three operating engines and one stand-by unit. Integration design studies indicated that either the Brayton or the Stirling power conversion system could be integrated with the SP-100 reactor. The Stirling system had an integration advantage because of smaller piping size and fewer components. The Stirling engine, however, is more complex and heavier than the Brayton rotating unit, which tends to offset the Stirling integration advantage. From a performance standpoint, the Brayton had a 9 percent mass advantage, and the Stirling had a 50 percent radiator advantage.
Predicting Aircraft Descent Length with Machine Learning
Predicting aircraft trajectories is a key element in the detection and resolution of air traffic conflicts. In this paper, we focus on the ground-based prediction of final descents toward the destination airport. Several Machine Learning methods – ridge regression, neural networks, and gradient-boosting machine – are applied to the prediction of descents toward Toulouse airport (France), and compared with a baseline method relying on the Eurocontrol Base of Aircraft Data (BADA). Using a dataset of 15,802 Mode-S radar trajectories of 11 different aircraft types, we build models which predict the total descent length from the cruise altitude to a given final altitude. Our results show that the Machine Learning methods improve the root mean square error on the predicted descent length by at least 20 % for the ridge regression, and by up to 24 % for the gradient-boosting machine, when compared with the baseline BADA method.
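The regression setup the abstract describes can be illustrated with a minimal sketch. The data here is synthetic and the feature names (cruise altitude, speed, aircraft-type code) are illustrative assumptions, not the paper's actual feature set; the constant-mean baseline merely stands in for a physics-based predictor such as BADA.

```python
# Hypothetical sketch: gradient-boosting regression of descent length,
# compared against a trivial constant baseline. Synthetic data throughout.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
cruise_alt = rng.uniform(30000, 40000, n)   # assumed feature: cruise altitude (ft)
speed = rng.uniform(380, 480, n)            # assumed feature: ground speed (kt)
actype = rng.integers(0, 11, n)             # assumed feature: aircraft-type code
X = np.column_stack([cruise_alt, speed, actype])
# Synthetic "true" descent length (NM) with noise
y = 0.003 * cruise_alt + 0.05 * speed + 2.0 * actype + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

baseline = np.full_like(y_te, y_tr.mean())  # constant-mean baseline prediction
rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
print(rmse(model.predict(X_te), y_te), rmse(baseline, y_te))
```

On data with any learnable structure, the boosted model's RMSE should come in well below the baseline's, which is the kind of comparison the abstract reports.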
Machine Learning and Mass Estimation Methods for Ground-Based Aircraft Climb Prediction
In this paper, we apply Machine Learning methods to improve the aircraft climb prediction in the context of ground-based applications. Mass is a key parameter for climb prediction. As it is considered a competitive parameter by many airlines, it is currently not available to ground-based trajectory predictors. Consequently, most predictors today use a reference mass that may be different from the actual aircraft mass. In previous papers, we have introduced a least squares method to estimate the mass from past trajectory points, using the physical model of the aircraft. Another mass estimation method, based on an adaptive mechanism, has also been proposed by Schultz et al. We now introduce a new approach, where the mass is considered as the response variable of a prediction model that is learned from a set of example trajectories. This Machine Learning approach is compared with the results obtained when using the BADA (Base of Aircraft Data) reference mass or the two state-of-the-art mass estimation methods. In these experiments, 9 different aircraft types are considered. When compared with the baseline method (resp. the mass estimation methods), the Machine Learning approach reduces the RMSE (Root Mean Square Error) on the predicted altitude by at least 58 % (resp. 27 %) when assuming the speed profile to be known, and by at least 29 % (resp. 17 %) when using the BADA speed profile, except for the aircraft types E145 and F100. For these types, the observed speed profile is far from the BADA speed profile.
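The least-squares mass estimation mentioned above can be sketched in simplified form. Under a total-energy view, the excess power (T - D)·V equals m·(g·dh/dt + V·dV/dt), so the mass is the least-squares coefficient relating observed excess power to the per-kilogram specific power. The thrust/drag values and noise level below are synthetic assumptions, not BADA outputs.

```python
# Simplified least-squares mass estimate from past climb points, assuming
# (T - D) * V = m * (g*dh/dt + V*dV/dt). All data here is synthetic.
import numpy as np

rng = np.random.default_rng(1)
m_true = 62000.0                       # kg, hypothetical actual mass
g = 9.81
n = 50
V = rng.uniform(130, 180, n)           # true airspeed, m/s
roc = rng.uniform(8, 15, n)            # climb rate dh/dt, m/s
acc = rng.uniform(0.0, 0.3, n)         # acceleration dV/dt, m/s^2
specific_power = g * roc + V * acc     # energy rate per unit mass, W/kg
# "Observed" excess power (T - D) * V, with 2 % multiplicative noise
excess_power = m_true * specific_power * (1 + rng.normal(0, 0.02, n))

# Least squares: minimize ||excess_power - m * specific_power||^2 over m
m_hat = float(specific_power @ excess_power / (specific_power @ specific_power))
print(m_hat)
```

With clean data the estimate recovers the mass closely; in practice, unmodeled thrust setting and wind errors are what degrade it, which motivates treating mass as a learned response variable instead.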
Machine Learning Applied to Airspeed Prediction During Climb
In this paper, we apply Machine Learning methods to improve the aircraft climb prediction in the context of ground-based applications. Mass and speed intent are key parameters for climb prediction. As they are considered competitive parameters by many airlines, they are currently not available to ground-based trajectory predictors. Consequently, most predictors today use reference parameters that may be quite different from the actual ones. In our most recent paper ([1]), we have demonstrated that Machine Learning techniques provide a mass estimation significantly more precise than two state-of-the-art mass estimation methods. In this paper, we apply similar techniques to the speed intent. We first build a set of examples by adjusting a CAS/Mach speed profile to each climb trajectory in our database. Then, using the adjusted values (c_CAS, c_M) in this database, we learn a model able to predict the (c_CAS, c_M) values of a new trajectory, using its past points as input. We apply this technique to actual Mode-C radar data and we consider 9 different aircraft types. When compared with the reference speed profiles provided by BADA, the reduction of the speed RMSE ranges from 36 % to 79 %, depending on the aircraft type. Using the predicted mass and speed profile, BADA is used to compute the predicted future trajectory with a 10-minute horizon. When compared with BADA used with the reference parameters, the reduction of the future altitude RMSE ranges from 45 % to 87 %.
Energy rate prediction using an equivalent thrust setting profile
Ground-based aircraft trajectory prediction is a major concern in air traffic management. A safe and efficient prediction is a prerequisite for the implementation of automated tools that detect and solve conflicts between trajectories. This paper focuses on the climb phase, because predictions are less accurate in this phase. The Eurocontrol BADA model, as a total energy model, relies on the prediction of the energy rate. In a kinetic model, this energy rate comes from the power provided by the forces applied to the aircraft. Computing these forces requires knowledge of the aircraft state (mass, airspeed, etc.), atmospheric conditions (wind, temperature) and aircraft intent (maximum climb thrust or reduced climb thrust, for example). Some of this information, like the mass and thrust setting, is not available to ground-based systems. In this paper, we try to infer an equivalent weight and an equivalent thrust profile. These parameters are not meant to match the true values; rather, they are designed to improve the energy rate prediction. One common thrust setting profile for all the trajectories is built. This thrust profile is designed in such a way that the estimated equivalent weight provides a good energy rate prediction. We have compared the energy rate prediction using these equivalent parameters and BADA standard parameters.
Association Rules Mining with Auto-Encoders
Association rule mining is one of the most studied research fields of data
mining, with applications ranging from grocery basket problems to explainable
classification systems. Classical association rule mining algorithms have
several limitations, especially with regard to their high execution times and
the number of rules produced. Over the past decade, neural network solutions
have been used to solve various optimization problems, such as classification,
regression or clustering. However, there is still no efficient way to mine
association rules using neural networks. In this paper, we present an
auto-encoder solution to mine association rules, called ARM-AE. We compare our
algorithm to FP-Growth and NSGAII on three categorical datasets, and show that
our algorithm discovers high-support and high-confidence rule sets and has a
better execution time than classical methods while preserving the quality of
the rule set produced.
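The support and confidence measures that both ARM-AE and classical miners like FP-Growth evaluate can be stated in a few lines. The transaction database and the rule below are toy assumptions used purely to show the definitions.

```python
# Toy illustration of the support/confidence measures used in association
# rule mining. Transactions and items are made-up examples.
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

def support(itemset, db):
    """Fraction of transactions containing every item of the itemset."""
    return sum(itemset <= t for t in db) / len(db)

def confidence(antecedent, consequent, db):
    """Conditional frequency of the consequent given the antecedent."""
    return support(antecedent | consequent, db) / support(antecedent, db)

sup = support({"bread", "milk"}, transactions)        # 2 of 4 transactions
conf = confidence({"bread"}, {"milk"}, transactions)  # 2 of the 3 "bread" ones
print(sup, conf)
```

A miner's job, whether combinatorial or neural, is to return rules whose support and confidence exceed chosen thresholds without enumerating the exponential space of itemsets.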
Neural Bandits for Data Mining: Searching for Dangerous Polypharmacy
Polypharmacy, most often defined as the simultaneous consumption of five or
more drugs, is a prevalent phenomenon in the older population. Some of these
polypharmacies, deemed inappropriate, may be associated with adverse health
outcomes such as death or hospitalization. Considering the combinatorial
nature of the problem as well as the size of claims databases and the cost to
compute an exact association measure for a given drug combination, it is
impossible to investigate every possible combination of drugs. Therefore, we
propose to optimize the search for potentially inappropriate polypharmacies
(PIPs). To this end, we propose the OptimNeuralTS strategy, based on Neural
Thompson Sampling and differential evolution, to efficiently mine claims
datasets and build a predictive model of the association between drug
combinations and health outcomes. We benchmark our method using two datasets
generated by an internally developed simulator of polypharmacy data containing
500 drugs and 100 000 distinct combinations. Empirically, our method can
detect up to 33 % of PIPs while maintaining an average precision score of
99 % using 10 000 time steps.
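The exploration principle behind Neural Thompson Sampling can be shown with its much simpler Bernoulli ancestor. This toy version uses Beta posteriors over a handful of synthetic arms standing in for drug combinations; the paper's method replaces these closed-form posteriors with a neural reward model, which this sketch does not attempt.

```python
# Simplified Bernoulli Thompson Sampling on synthetic arms. The reward
# probabilities are made-up stand-ins for drug-combination risk scores.
import numpy as np

rng = np.random.default_rng(2)
p_true = np.array([0.1, 0.3, 0.8])       # hypothetical per-arm reward rates
alpha = np.ones(3)                        # Beta(1, 1) prior successes
beta = np.ones(3)                         # Beta(1, 1) prior failures

for _ in range(2000):
    samples = rng.beta(alpha, beta)       # one posterior draw per arm
    a = int(np.argmax(samples))           # play the arm with the highest draw
    r = rng.random() < p_true[a]          # observe a Bernoulli reward
    alpha[a] += r
    beta[a] += 1 - r

best = int(np.argmax(alpha / (alpha + beta)))
print(best)
```

After enough rounds the posterior means concentrate on the highest-reward arm, so the sampler spends almost all its budget on the most promising candidates, which is the behaviour the mining strategy exploits at scale.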