Dynamically adaptive partition-based interest management in distributed simulation
Performance and scalability of distributed simulations depend primarily on the effectiveness of the employed interest management (IM) scheme, which aims at reducing the overall computational and messaging effort on the shared data to a necessary minimum. Existing IM approaches, which are based on variations or combinations of two principal data distribution techniques, namely region-based and grid-based techniques, perform poorly if the simulation develops an overloaded host. In order to facilitate distributing the processing load from overloaded areas of the shared data to less loaded hosts, a partition-based technique is introduced that allows for variable-size partitioning of the shared data. Based on this data distribution technique, an IM approach is sketched that dynamically adapts to the access latencies of simulation objects on the shared data as well as to the physical locations of the objects. Since this re-distribution is decided based on the messaging effort of the simulation objects for updating data partitions, any load-balanced constellation has the additional advantage of minimal overall messaging effort. Hence, the IM scheme dynamically resolves messaging overload as well as the overloading of hosts with simulation objects, and therefore facilitates dynamic system scalability.
Dynamically adaptive partition-based data distribution management
Workshop on Principles of Advanced and Distributed Simulation, PADS 2005; Monterey, CA; United States; 1 June 2005 through 3 June 2005. Performance and scalability of distributed simulations depend primarily on the effectiveness of the employed data distribution management (DDM) algorithm, which aims at reducing the overall computational and messaging effort on the shared data to a necessary minimum. Existing DDM approaches, which are variations and combinations of two basic techniques, namely region-based and grid-based techniques, perform poorly in the presence of load differences. We introduce the partition-based technique, which allows for variable-size partitioning of shared data. Based on this technique, a novel DDM algorithm is introduced that dynamically adapts to cluster formations in the shared data as well as in the physical locations of the simulation objects. Since the re-distribution is sensitive to inter-relationships between shared data and simulation objects, a balanced constellation has the additional advantage of minimal messaging effort. Furthermore, dynamic system scalability is facilitated, as bottlenecks are avoided.
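The core idea of variable-size partitioning can be illustrated with a minimal sketch (not the paper's actual algorithm): given per-cell update loads on a 1-D strip of shared data, grow each host's contiguous partition greedily until it reaches the average load, so that hotspots receive narrow partitions and quiet regions wide ones. The function name and greedy policy here are illustrative assumptions.

```python
def partition_by_load(cell_loads, num_hosts):
    """Greedy sketch: assign contiguous variable-size partitions of
    shared-data cells so each host gets roughly equal update load."""
    total = sum(cell_loads)
    target = total / num_hosts
    partitions, current, acc = [], [], 0.0
    for i, load in enumerate(cell_loads):
        current.append(i)
        acc += load
        # close the partition once the target load is reached, but keep
        # at least one cell available for every remaining host
        remaining_cells = len(cell_loads) - i - 1
        remaining_hosts = num_hosts - len(partitions) - 1
        if acc >= target and remaining_hosts > 0 and remaining_cells >= remaining_hosts:
            partitions.append(current)
            current, acc = [], 0.0
    partitions.append(current)
    return partitions

# a hotspot around cells 2-4 yields a narrow middle partition
print(partition_by_load([1, 1, 9, 9, 9, 1, 1, 1], 3))
# → [[0, 1, 2], [3, 4], [5, 6, 7]]
```

A fixed grid would split the same strip into equal-width thirds regardless of load; the variable-size split is what lets an overloaded region be handed to a dedicated host.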
Making accident data compatible with ITS-based traffic management: Turkish case
One of the most important reasons for the high rate of accidents is the ineffective data collection and evaluation process, since the necessary information cannot be obtained effectively from traffic accident reports (TAR). Discord and the handling of non-relevant data may appear at four levels: (1) country and cultural, (2) institutional and organizational, (3) data collection, (4) data analysis and evaluation. The case findings are consistent with the knowledge put forward in the literature: there is a transparency problem in coordination between the institutions, as well as inefficient TAR data that is open to manipulation; the problems of under-reporting and inappropriate data storage arise even before flawed statistical evaluation methods come into play. The old-fashioned data management structure causes incompatibility with novel technologies, preventing timely interventions for reducing accidents and alleviating fatalities. Transmission of the data to the interested agencies for evaluation and effective operation of ITS-based systems should be considered. The problem areas were explored through diagnoses at the institutional, data collection, and evaluation steps, and solutions were determined accordingly for the case city of Izmir. The Turkish Scientific and Technical Research Institute
Generating ontologies from relational data with fuzzy-syllogistic reasoning
Existing standards for crisp description logics facilitate information exchange between systems that reason with crisp ontologies. Applications with probabilistic or possibilistic extensions of ontologies and reasoners promise to capture more information, because they can deal with more uncertainty or vagueness of information. However, since there are no standards for either extension, information exchange between such applications is not generic. Fuzzy-syllogistic reasoning with the fuzzy-syllogistic system 4S provides 2048 possible fuzzy inference schemata for every possible triple concept relationship of an ontology. Since the inference schemata are the result of all possible set-theoretic relationships between three sets with three out of eight possible fuzzy quantifiers, the whole set of 2048 possible fuzzy inferences can be used as one generic fuzzy reasoner for quantified ontologies. In that sense, a fuzzy-syllogistic reasoner can be employed as a generic reasoner that combines possibilistic inferencing with probabilistic ontologies, thus facilitating knowledge exchange between ontology applications of different domains as well as information fusion over them.
Approximate reasoning with fuzzy-syllogistic systems
The well-known Aristotelian syllogistic system consists of 256 moods. We have found earlier that 136 moods are distinct in terms of equal truth ratios that range in τ ∈ [0,1]. The truth ratio of a particular mood is calculated by relating the numbers of true and false syllogistic cases the mood matches. A mood with a truth ratio is a fuzzy-syllogistic mood. The introduction of (n-1) fuzzy existential quantifiers extends the system to the fuzzy-syllogistic systems nS, 1<n, in which every fuzzy-syllogistic mood can be interpreted as a vague inference with a generic truth ratio that is determined by its syllogistic structure. We experimentally introduce the logic of a fuzzy-syllogistic ontology reasoner that is based on the fuzzy-syllogistic systems nS. We further introduce a new concept, the relative truth ratio rτ ∈ [0,1], which is calculated based on the cardinalities of the syllogistic cases.
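One plausible reading of the truth-ratio computation can be sketched by brute force: enumerate every way a small universe can be distributed over the eight regions of the M, P, S Venn diagram, and among the configurations where both premisses hold, count how often the conclusion holds. The subset semantics for "all" (vacuously true on empty sets) and the universe size are assumptions here; the paper's exact case counting may differ.

```python
from itertools import product

def forall(xs, ys):
    """'All X are Y' read as subset inclusion."""
    return xs <= ys

def truth_ratio(premise1, premise2, conclusion, n=3):
    """Fraction of premise-satisfying set configurations (over a
    universe of n elements) in which the conclusion also holds."""
    true_cases = total_cases = 0
    # each element independently lies in any of the 2^3 = 8 Venn regions
    for regions in product(range(8), repeat=n):
        M = {i for i, r in enumerate(regions) if r & 1}
        P = {i for i, r in enumerate(regions) if r & 2}
        S = {i for i, r in enumerate(regions) if r & 4}
        if premise1(M, P, S) and premise2(M, P, S):
            total_cases += 1
            if conclusion(M, P, S):
                true_cases += 1
    return true_cases / total_cases

# Barbara (AAA-1): All M are P, All S are M |- All S are P  (valid)
barbara = truth_ratio(lambda M, P, S: forall(M, P),
                      lambda M, P, S: forall(S, M),
                      lambda M, P, S: forall(S, P))
# AAA-2: All P are M, All S are M |- All S are P  (invalid)
aaa2 = truth_ratio(lambda M, P, S: forall(P, M),
                   lambda M, P, S: forall(S, M),
                   lambda M, P, S: forall(S, P))
print(barbara, aaa2)  # barbara is exactly 1.0; aaa2 falls strictly below 1.0
```

A classically valid mood such as Barbara comes out with ratio 1.0, while an invalid mood such as AAA-2 receives a fractional ratio — precisely the graded truth value that makes a mood "fuzzy-syllogistic".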
The fuzzy syllogistic system
9th Mexican International Conference on Artificial Intelligence, MICAI 2010; Pachuca; Mexico; 8 November 2010 through 13 November 2010. A categorical syllogism is a rule of inference consisting of two premisses and one conclusion. Every premiss and the conclusion consist of binary relationships between the objects M, P, S. Logicians usually use only true syllogisms for deductive reasoning. After predicate logic superseded syllogisms in the 19th century, interest in the syllogistic system vanished. We have analysed the syllogistic system, which consists of 256 syllogistic moods in total, algorithmically. We have discovered that the symmetric structure of syllogistic figure formation is inherited by the moods and their truth values, making the syllogistic system an inherently symmetric reasoning mechanism, consisting of 25 true, 100 unlikely, 6 uncertain, 100 likely and 25 false moods. In this contribution, we discuss the most significant statistical properties of the syllogistic system and define on top of that the fuzzy syllogistic system. The fuzzy syllogistic system allows for syllogistic approximate reasoning over inductively learned M, P, S relationships. 2009-İYTE-BAP-1
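The count of 256 moods follows directly from the combinatorics: each of the two premisses and the conclusion takes one of the four categorical quantifiers (A, E, I, O), and the terms can be arranged in one of four figures, giving 4³ × 4 = 256. A one-line enumeration confirms this:

```python
from itertools import product

quantifiers = "AEIO"     # all / no / some / some-not
figures = range(1, 5)    # the four arrangements of M, P, S in the premisses

# a mood is a quantifier triple (premiss 1, premiss 2, conclusion) plus a figure
moods = [(q1, q2, q3, f)
         for (q1, q2, q3), f in product(product(quantifiers, repeat=3), figures)]
print(len(moods))  # → 256
```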
Developing applications on-board of robots with becerik
7th International Conference on MEMS, NANO and Smart Systems, ICMENS 2011; Kuala Lumpur; Malaysia; 4 November 2011 through 6 November 2011. Robot applications are mostly developed first on a computer and thereafter loaded onto the robot. However, in many situations, developing applications directly on the robot may be more effective: for instance, children who have not yet learned to use a computer can develop their robot applications while playing, or the application must be developed in the robot's operating environment, where no computer is available. In this contribution, we present the properties of the software tool becerik for developing applications on-board a robot and for running them concurrently in multi-tasking mode. Furthermore, we introduce the programming language of the applications, which carries the same name becerik and consists of only 6 commands. © (2012) Trans Tech Publications, Switzerland
A survey of robotic agent architectures
2017 International Artificial Intelligence and Data Processing Symposium (IDAP); SEP 16-17, 2017; Malatya, TURKEY. Robotic agents consist of various compositions of properties that are found in their mechatronic, behavioural and cognitive architectures. Common properties of each architecture type serve as criteria for assessing the degree of intelligence of most embodied agent models. Although embodied intelligence has long been accepted for robotic agents, the literature is short on combined evaluations that discuss all properties of all architecture types in one framework. Here we provide a review of existing taxonomies for each type of architecture and attempt to combine them all into a single taxonomy for robotic agents.
A data coding and screening system for accident risk patterns: A learning system
17th International Conference on Urban Transport and the Environment - UT 2011; Pisa; Italy; 6 June 2011 through 8 June 2011. Accidents on urban roads can occur for many reasons, and the contributing factors together pose some complexity in the analysis of the casualties. In order to simplify the analysis and track changes from one accident to another for comparability, authentic data coding and category analysis methods are developed, leading to data mining rules. To deal with a huge number of parameters, first, most qualitative data are converted into categorical codes (alpha-numeric), so that computing capacity is also increased. Second, the whole data entry per accident is turned into an ID code, so that each crash is possibly unique in its attributes; such a code is called an 'accident combination', reducing the large number of similar-value accident records into smaller sets of data. This genetic-code-like technique allows us to learn accident types together with their solid attributes. The learning (output averages) provides a decision support mechanism for taking necessary precautions for similar combinations. The results can be analysed by inputs, outputs (attributes), time (years) and space (streets). According to the Izmir case results, sampled data and their accident combinations were obtained for 3 years (2005 - 2007) and their attributes were learned. © 2011 WIT Press. The Scientific and Technological Research Council of Turkey
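The coding idea can be sketched in a few lines: map each qualitative attribute to a short categorical code, concatenate the codes into an 'accident combination' ID, and aggregate records sharing the same ID. The code tables and attribute names below are hypothetical, not the study's actual coding scheme.

```python
from collections import Counter

# hypothetical code tables for three qualitative attributes
CODES = {
    "weather":  {"clear": "W1", "rain": "W2", "fog": "W3"},
    "lighting": {"day": "L1", "night": "L2"},
    "road":     {"dry": "R1", "wet": "R2"},
}

def combination_id(record):
    """Concatenate the categorical codes into one 'accident combination' ID."""
    return "-".join(CODES[field][record[field]]
                    for field in ("weather", "lighting", "road"))

records = [
    {"weather": "rain",  "lighting": "night", "road": "wet"},
    {"weather": "rain",  "lighting": "night", "road": "wet"},
    {"weather": "clear", "lighting": "day",   "road": "dry"},
]

# equal IDs collapse into one combination with a frequency count
counts = Counter(combination_id(r) for r in records)
print(counts)  # → Counter({'W2-L2-R2': 2, 'W1-L1-R1': 1})
```

Collapsing records into combinations is what reduces many similar accident rows to a small set of distinct patterns whose frequencies and outcome averages can then be tracked over time and space.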