Using Fuzzy Linguistic Representations to Provide Explanatory Semantics for Data Warehouses
A data warehouse integrates large amounts of extracted and summarized data from multiple sources for direct querying and analysis. While it gives decision makers easy access to such historical and aggregate data, the real meaning of the data is often ignored. For example, whether a total sales amount of 1,000 items indicates good or bad sales performance remains unclear. From the decision makers' point of view, the semantics that convey the meaning of the data matter more than the raw numbers. In this paper, we explore the use of fuzzy technology to provide such semantics for the summarizations and aggregates developed in data warehousing systems. A three-layered data warehouse semantic model, consisting of quantitative (numerical) summarization, qualitative (categorical) summarization, and quantifier summarization, is proposed for capturing and explicating the semantics of warehoused data. Based on the model, several algebraic operators are defined. We also extend the SQL language to allow for flexible queries against such enhanced data warehouses.
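The qualitative (categorical) summarization layer described above can be sketched with fuzzy membership functions that translate a raw aggregate into linguistic labels. The breakpoints and label names below are illustrative assumptions for the sales example, not values taken from the paper.

```python
# Hypothetical sketch of qualitative summarization: map a numerical
# aggregate (total sales) to fuzzy linguistic labels with membership
# degrees. All breakpoints are assumed for illustration only.

def triangular(x, a, b, c):
    """Triangular fuzzy membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def linguistic_summary(total_sales):
    """Return the degree to which a sales total belongs to each
    linguistic category (assumed categories and ranges)."""
    return {
        "poor":    triangular(total_sales, -1, 0, 800),
        "average": triangular(total_sales, 500, 1000, 1500),
        "good":    triangular(total_sales, 1200, 2000, 3000),
    }

# The running example: is a total of 1,000 items good or bad?
degrees = linguistic_summary(1000)
```

Under these assumed membership functions, 1,000 items is fully "average" and not at all "good", which is exactly the kind of explanatory answer raw aggregates cannot give.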
Simulation modelling and visualisation: toolkits for building artificial worlds
Simulation users at all levels make heavy use of compute resources to drive computational simulations for greatly varying application areas of research using different simulation paradigms. Simulations are implemented in many software forms, ranging from highly standardised and general models that run in proprietary software packages to ad hoc hand-crafted simulation codes for very specific applications. Visualisation of the workings or results of a simulation is another highly valuable capability for simulation developers and practitioners. There are many different software libraries and methods available for creating a visualisation layer for simulations, and it is often a difficult and time-consuming process to assemble a toolkit of these libraries and other resources that best suits a particular simulation model. We present here a breakdown of the main simulation paradigms, and discuss the differing toolkits and approaches that researchers have taken to tackle coupled simulation and visualisation in each paradigm.
Inductive machine learning of optimal modular structures: Estimating solutions using support vector machines
Structural optimization is usually handled by iterative methods requiring repeated samples of a physics-based model, but this process can be computationally demanding. Given a set of previously optimized structures of the same topology, this paper uses inductive learning to replace the optimization process entirely by deriving a function that directly maps any given load to an optimal geometry. A support vector machine is trained to determine the optimal geometry of individual modules of a space frame structure given a specified load condition. Structures produced by learning are compared against those found by a standard gradient descent optimization, both as individual modules and then as a composite structure. The primary motivation for this is speed, and results show the process is highly efficient for cases in which similar optimizations must be performed repeatedly. The function learned by the algorithm can approximate the result of optimization very closely after sufficient training, and has also been found effective at generalizing the underlying optima to produce structures that perform better than those found by standard iterative methods.
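The core idea above, learning a direct load-to-geometry mapping from previously optimized designs, can be sketched as follows. The paper trains a support vector machine; here a small RBF kernel ridge regressor stands in so the example stays dependency-light, and the loads and "optimal" depths are synthetic placeholders, not data from the paper.

```python
# Sketch: fit a surrogate that maps a load vector straight to an
# optimized geometry parameter, so no iterative optimization is needed
# at query time. Kernel ridge regression is a stand-in for the paper's
# SVM; all training data below is synthetic and illustrative.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Pairwise RBF (Gaussian) kernel between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Training set: load cases and the module geometry (e.g. a depth
# parameter) that a slow iterative optimizer previously found for each.
loads = np.array([[0.0], [0.5], [1.0], [1.5], [2.0]])
optimal_depth = np.array([0.10, 0.22, 0.41, 0.58, 0.79])

# Fit by solving (K + lam*I) alpha = y.
K = rbf_kernel(loads, loads)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(loads)), optimal_depth)

def predict(new_loads):
    """Map new load cases directly to geometry, no optimization loop."""
    return rbf_kernel(np.asarray(new_loads, dtype=float), loads) @ alpha

pred = predict([[1.0]])  # should closely match the trained value 0.41
```

Once fitted, each new load costs only one kernel evaluation against the training set, which is the speed advantage the abstract emphasizes for repeated similar optimizations.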
DxNAT - Deep Neural Networks for Explaining Non-Recurring Traffic Congestion
Non-recurring traffic congestion is caused by temporary disruptions, such as accidents, sports games, adverse weather, etc. We use data related to real-time traffic speed, jam factors (a traffic congestion indicator), and events collected over a year from Nashville, TN to train a multi-layered deep neural network. The traffic dataset contains over 900 million data records. The network is thereafter used to classify the real-time data and identify anomalous operations. Compared with traditional approaches using statistical or machine learning techniques, our model reaches an accuracy of 98.73 percent when identifying traffic congestion caused by football games. Our approach first encodes the traffic across a region as a scaled image. After that, the image data from different timestamps is fused with event- and time-related data. Then a crossover operator is used as a data augmentation method to generate training datasets with more balanced classes. Finally, we use receiver operating characteristic (ROC) analysis to tune the sensitivity of the classifier. We present the analysis of the training time and the inference time separately.
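The crossover-style augmentation step mentioned above can be sketched as splicing two traffic "images" of the minority (congested) class to synthesize a new training sample. The image shape, the row-wise split, and the data are assumptions for illustration, not DxNAT's actual encoding or parameters.

```python
# Illustrative sketch of crossover-based data augmentation on traffic
# images: combine parts of two real minority-class samples into a new
# synthetic sample to rebalance the classes. Shapes and the row-wise
# split point are assumptions, not taken from the paper.
import numpy as np

def crossover(img_a, img_b, split_row):
    """Build a child sample from the top rows of img_a and the bottom
    rows of img_b."""
    child = img_a.copy()
    child[split_row:, :] = img_b[split_row:, :]
    return child

rng = np.random.default_rng(0)
# Two 8x8 "scaled images" of normalized road-segment speeds for a region.
sample_a = rng.random((8, 8))
sample_b = rng.random((8, 8))

augmented = crossover(sample_a, sample_b, split_row=4)
```

Repeating this over pairs of congested-class samples yields extra positive examples, which is how the augmentation step produces the more balanced training sets the abstract describes.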