Machine Learning for Fluid Mechanics
The field of fluid mechanics is rapidly advancing, driven by unprecedented
volumes of data from field measurements, experiments and large-scale
simulations at multiple spatiotemporal scales. Machine learning offers a wealth
of techniques to extract information from data that could be translated into
knowledge about the underlying fluid mechanics. Moreover, machine learning
algorithms can augment domain knowledge and automate tasks related to flow
control and optimization. This article presents an overview of past history,
current developments, and emerging opportunities of machine learning for fluid
mechanics. It outlines fundamental machine learning methodologies and discusses
their uses for understanding, modeling, optimizing, and controlling fluid
flows. The strengths and limitations of these methods are addressed from the
perspective of scientific inquiry that considers data as an inherent part of
modeling, experimentation, and simulation. Machine learning provides a powerful
information processing framework that can enrich, and possibly even transform,
current lines of fluid mechanics research and industrial applications.
Comment: To appear in the Annual Reviews of Fluid Mechanics, 202
An Overview on Application of Machine Learning Techniques in Optical Networks
Today's telecommunication networks have become sources of enormous amounts of
widely heterogeneous data. This information can be retrieved from network
traffic traces, network alarms, signal quality indicators, users' behavioral
data, etc. Advanced mathematical tools are required to extract meaningful
information from these network-generated data and to make decisions
pertaining to the proper functioning of the networks. Among these
mathematical tools, Machine Learning (ML) is regarded as one of the most
promising methodological approaches to perform network-data analysis and enable
automated network self-configuration and fault management. The adoption of ML
techniques in the field of optical communication networks is motivated by the
unprecedented growth of network complexity faced by optical networks in the
last few years. Such complexity increase is due to the introduction of a huge
number of adjustable and interdependent system parameters (e.g., routing
configurations, modulation format, symbol rate, coding schemes, etc.) that are
enabled by the usage of coherent transmission/reception technologies, advanced
digital signal processing and compensation of nonlinear effects in optical
fiber propagation. In this paper we provide an overview of the application of
ML to optical communications and networking. We classify and survey relevant
literature dealing with the topic, and we also provide an introductory tutorial
on ML for researchers and practitioners interested in this field. Although a
good number of research papers have recently appeared, the application of ML to
optical networks is still in its infancy: to stimulate further work in this
area, we conclude the paper by proposing possible new research directions
Differential Privacy from Locally Adjustable Graph Algorithms: k-Core Decomposition, Low Out-Degree Ordering, and Densest Subgraphs
Differentially private algorithms allow large-scale data analytics while
preserving user privacy. Designing such algorithms for graph data is gaining
importance with the growth of large networks that model various (sensitive)
relationships between individuals. While there exists a rich history of
important literature in this space, to the best of our knowledge, no results
formalize a relationship between certain parallel and distributed graph
algorithms and differentially private graph analysis. In this paper, we define
\emph{locally adjustable} graph algorithms and show that algorithms of this
type can be transformed into differentially private algorithms.
Our formalization is motivated by a set of results that we present in the
central and local models of differential privacy for a number of problems,
including k-core decomposition, low out-degree ordering, and densest
subgraphs. First, we design an ε-edge differentially private (DP)
algorithm that returns a subset of nodes inducing a subgraph whose density
is within a constant factor of the density of the densest subgraph in the
input graph. This algorithm achieves a two-fold improvement on the
multiplicative approximation factor of the previously best-known private
densest subgraph algorithms while maintaining a near-linear runtime.
Then, we present an ε-locally edge differentially private (LEDP)
algorithm for k-core decompositions. Our LEDP algorithm approximates
the core numbers with multiplicative and additive error.
This is the first differentially private algorithm that outputs private
k-core decomposition statistics.
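The abstract above does not spell out its mechanisms, but the basic building block of ε-edge differential privacy can be illustrated with the standard Laplace mechanism: under edge-level neighboring, adding or removing a single edge changes the edge count by exactly 1, so Laplace noise of scale 1/ε suffices. The following sketch is a generic illustration of that principle, not the paper's algorithm:

```python
import math
import random

def laplace_noise(scale):
    # Sample Laplace(0, scale) via inverse-CDF sampling.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_edge_count(edges, epsilon):
    """Release the number of edges with epsilon-edge differential privacy.

    Adding or removing one edge changes the count by exactly 1, so the
    L1 sensitivity is 1 and Laplace noise of scale 1/epsilon suffices.
    """
    return len(edges) + laplace_noise(1.0 / epsilon)

edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
noisy = private_edge_count(edges, epsilon=0.5)
```

Smaller ε means stronger privacy and larger noise; statistics with higher sensitivity (such as core numbers) require correspondingly more careful noise calibration.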
An Assessment of PIER Electric Grid Research 2003-2014 White Paper
This white paper describes the circumstances in California around the turn of the 21st century that led the California Energy Commission (CEC) to direct additional Public Interest Energy Research funds to address critical electric grid issues, especially those arising from integrating high penetrations of variable renewable generation with the electric grid. It contains an assessment of the beneficial science and technology advances of the resultant portfolio of electric grid research projects administered under the direction of the CEC by a competitively selected contractor, the University of California’s California Institute for Energy and the Environment, from 2003-2014
A Factored Similarity Model with Trust and Social Influence for Top-N Recommendation
Many trust-aware recommendation systems have emerged to overcome the problem of data sparsity, which bottlenecks the performance of traditional Collaborative Filtering (CF) recommendation algorithms. However, most of these systems rely on binary social network information and fail to consider the varying strength of trust between users. To address this shortcoming, this paper designs a novel Top-N recommendation model based on trust and social influence, in which the most influential users are determined by the Improved Structural Holes (ISH) method. Specifically, the latent features in Matrix Factorization (MF) were initialized by deep learning rather than at random, since random initialization degrades the prediction of item ratings. In addition, a trust measurement model was created to quantify the strength of implicit trust. The experimental results show that our approach can mitigate the adverse impact of data sparsity and enhance recommendation accuracy
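The abstract does not give the model's equations, but the general shape of a trust-aware matrix factorization scorer can be sketched as follows. The blending weights, the random stand-in factors, and the trust matrix here are all hypothetical placeholders for illustration, not the paper's ISH-based formulation:

```python
import numpy as np

rng = np.random.default_rng(42)

n_users, n_items, k = 4, 5, 3
# Latent factors; the paper initializes these with deep learning,
# plain random values stand in here for illustration.
P = rng.normal(scale=0.1, size=(n_users, k))   # user factors
Q = rng.normal(scale=0.1, size=(n_items, k))   # item factors
T = rng.random((n_users, n_users))             # hypothetical trust strengths in [0, 1]

def predict(u, i):
    """Score item i for user u: the user's own preference blended with
    trust-weighted preferences of other users (a generic trust-aware
    MF form, not the paper's exact equation)."""
    own = P[u] @ Q[i]
    w = T[u] / (T[u].sum() + 1e-12)   # normalize trust into weights
    social = (w @ P) @ Q[i]           # trust-weighted aggregate preference
    return 0.5 * own + 0.5 * social

def top_n(u, n=3):
    # Rank all items by predicted score and keep the best n.
    scores = [(predict(u, i), i) for i in range(n_items)]
    return [i for _, i in sorted(scores, reverse=True)[:n]]
```

Replacing the binary adjacency usually found in trust-aware CF with graded trust strengths, as the paper proposes, amounts to making `T` real-valued rather than 0/1.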
Distributed Computing and Artificial Intelligence, 11th International Conference
The 11th International Symposium on Distributed Computing and Artificial Intelligence 2014 (DCAI 2014) is a forum to present applications of innovative techniques for studying and solving complex problems. The exchange of ideas between scientists and technicians from both the academic and industrial sectors is essential to facilitate the development of systems that can meet the ever-increasing demands of today's society. The present edition brings together past experience, current work and promising future trends associated with distributed computing, artificial intelligence and their application in order to provide efficient solutions to real problems. This year's technical program presents both high quality and diversity, with contributions from authors in Algeria, Brazil, China, Croatia, Czech Republic, Denmark, France, Germany, Ireland, Italy, Japan, Malaysia, Mexico, Poland, Portugal, Republic of Korea, Spain, Taiwan, Tunisia, Ukraine and the United Kingdom, representing a truly "wide area network" of research activity. DCAI'14 Special Sessions have been a very useful tool for complementing the regular program with new or emerging topics of particular interest to the participating community. Special Sessions emphasizing multi-disciplinary and transversal aspects, such as AI-driven methods for Multimodal Networks and Processes Modeling and Multi-Agents Macroeconomics, were especially encouraged and welcomed. This symposium is organized by the Bioinformatics, Intelligent System and Educational Technology Research Group of the University of Salamanca. The present edition was held in Salamanca, Spain, from 4th to 6th June 2014
Bio-inspired computation: where we stand and what's next
In recent years, the research community has witnessed an explosion of literature dealing with the adaptation of behavioral patterns and social phenomena observed in nature towards efficiently solving complex computational tasks. This trend has been especially dramatic in what relates to optimization problems, mainly due to the unprecedented complexity of problem instances, arising from a diverse spectrum of domains such as transportation, logistics, energy, climate, social networks, health and industry 4.0, among many others. Notwithstanding this upsurge of activity, research in this vibrant topic should be steered towards certain areas that, despite their eventual value and impact on the field of bio-inspired computation, still remain insufficiently explored to date. The main purpose of this paper is to outline the state of the art and to identify open challenges concerning the most relevant areas within bio-inspired optimization. An analysis and discussion are also carried out over the general trajectory followed in recent years by the community working in this field, thereby highlighting the need for reaching a consensus and joining forces towards achieving valuable insights into the understanding of this family of optimization techniques
Analysis of vehicle pedestrian crash severity using advanced machine learning techniques
In 2015, pedestrians accounted for over 17% of the people killed in vehicle crashes in Hong Kong; the share rose to 18% between 2017 and 2019 and is expected to reach 25% in the coming decade. In Hong Kong, buses and the metro are used for 89% of trips, and walking has traditionally been the primary way to access public transportation. This susceptibility of pedestrians to road crashes conflicts with sustainable transportation objectives. Most studies on crash severity have ignored the severity correlations between pedestrian-vehicle units involved in the same impact; estimates of factor effects are biased in models that do not account for these within-crash correlations. Pedestrians made up 17% of the 20,381 traffic fatalities, and 66% of the fatalities on highways were pedestrians. The motivation of this study is to examine the factors that contribute to pedestrian injuries on highways and to improve safety for these vulnerable road users. The ability of traditional statistical models, which are typically used to explain pedestrian injuries, to handle misfits, missing or noisy data, and strict distributional assumptions has been questioned. To overcome these constraints, this study used an advanced machine learning technique, a Bayesian neural network (BNN), which combines the benefits of neural networks and Bayesian theory. The best-performing model was selected from several candidate constructions. The BNN model was found to outperform other machine learning techniques such as K-Nearest Neighbors, a conventional neural network (NN), and a random forest (RF) model in terms of performance and predictions. The study also found that the time and circumstances of the accident and meteorological features were critical inputs that significantly enhanced model performance. To minimize the number of pedestrian fatalities due to traffic accidents, this research anticipates employing machine learning (ML) techniques.
Moreover, this study sets a framework for applying machine learning techniques to reduce the number of pedestrian fatalities caused by vehicle crashes
Optimización del diseño estructural de pavimentos asfálticos para calles y carreteras (Optimization of the structural design of asphalt pavements for streets and highways)
The construction of asphalt pavements in streets and highways is an activity that requires optimizing the consumption of significant economic and natural resources. Pavement design optimization balances objectives that conflict depending on the availability of resources and users' needs. This dissertation explores the application of metaheuristics to optimize the design of asphalt pavements using an incremental design based on the prediction of damage and vehicle operating costs (VOC). The costs are proportional to energy and resource consumption and polluting emissions. The evolution of asphalt pavement design and of metaheuristic optimization techniques for this problem was reviewed. Four computer programs were developed: (1) UNLEA, a program for the structural analysis of multilayer systems. (2) PSO-UNLEA, a program that uses the particle swarm optimization (PSO) metaheuristic for the backcalculation of pavement moduli. (3) UNPAVE, an incremental pavement design program based on the equations of the North American MEPDG that includes the computation of vehicle operating costs based on the IRI. (4) PSO-PAVE, a PSO program that searches for layer thicknesses that optimize the design considering construction and vehicle operating costs. The case studies show that the backcalculation and structural design of pavements can be optimized by PSO under restrictions on layer thicknesses and the selection of materials. Future developments should reduce the computational cost and calibrate the pavement performance and VOC models. (Doctoral dissertation, Doctor en Ingeniería - Ingeniería Automática, Universidad Nacional)
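The dissertation's PSO-PAVE program is not available here, but the particle swarm optimization loop it relies on can be sketched generically. The cost function below, the coefficient values, and the two-layer bounds are hypothetical stand-ins for illustration, not the dissertation's MEPDG-based formulation:

```python
import random

def pso_minimize(cost, bounds, n_particles=20, iters=100, seed=1):
    """Minimal PSO sketch: each particle tracks its personal best and
    the swarm tracks a global best that steers every particle."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and attraction coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                # Clamp each coordinate back into its feasible range.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Hypothetical two-layer thickness design: construction cost grows with
# thickness while a damage proxy shrinks, mirroring the trade-off between
# construction and vehicle operating costs described above.
def design_cost(t):
    construction = 3.0 * t[0] + 1.0 * t[1]
    damage = 500.0 / (t[0] + 0.5 * t[1])
    return construction + damage

best, best_cost = pso_minimize(design_cost, bounds=[(5.0, 30.0), (10.0, 60.0)])
```

The bound clamping plays the role of the thickness restrictions mentioned in the case studies; a real pavement design would replace `design_cost` with the incremental damage and VOC models.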