
    A Perceptually Based Comparison of Image Similarity Metrics

    The assessment of how well one image matches another forms a critical component both of models of human visual processing and of many image analysis systems. Two of the most commonly used norms for quantifying image similarity are L1 and L2, which are specific instances of the Minkowski metric. However, there is often no principled reason for selecting one norm over the other. One way to address this problem is to examine whether one metric captures the perceptual notion of image similarity better than the other. This can be used to draw inferences about the similarity criteria the human visual system uses, as well as to evaluate and design metrics for image-analysis applications. With this goal, we examined perceptual preferences for images retrieved on the basis of the L1 versus the L2 norm. These images were either small fragments without recognizable content, or larger patterns with recognizable content created by vector quantization. In both conditions the participants showed a small but consistent preference for images matched with the L1 metric. These results suggest that, in the domain of natural images of the kind we have used, the L1 metric may better capture human notions of image similarity.
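    As a concrete illustration of the two norms under comparison, the sketch below (my own example, using hypothetical NumPy arrays rather than the study's stimuli) retrieves the best-matching image fragment under the L1 and L2 instances of the Minkowski metric.

```python
# Minimal sketch (not from the paper): retrieving the best-matching image
# fragment under the L1 versus the L2 norm, using hypothetical NumPy arrays.
import numpy as np

def minkowski_distance(a, b, p):
    """Minkowski distance between two equally sized image patches."""
    return np.sum(np.abs(a.astype(float) - b.astype(float)) ** p) ** (1.0 / p)

def best_match(query, candidates, p):
    """Index of the candidate patch closest to the query under the L_p norm."""
    dists = [minkowski_distance(query, c, p) for c in candidates]
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
query = rng.integers(0, 256, size=(8, 8))            # an 8x8 grayscale fragment
candidates = rng.integers(0, 256, size=(100, 8, 8))  # a pool of candidate fragments

print("L1 match:", best_match(query, candidates, p=1))
print("L2 match:", best_match(query, candidates, p=2))
```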

    Minimizing the probabilistic magnitude of active vision errors using genetic algorithm

    Spatial quantization errors result from digitization. These errors become serious when the pixel size is significant compared with the allowable tolerance on the object dimension in the image. When an active sensor is placed to perform inspection, displacement of the sensor in orientation and location is common. The difference between the dimensions observed by the displaced sensor and the actual dimensions is defined as the displacement error. The density functions of the quantization and displacement errors depend on the camera resolution and on the camera location and orientation. We use a genetic algorithm to minimize the probabilistic magnitude of these errors subject to the sensor constraints, such as the resolution, field-of-view, focus, and visibility constraints. Since the objective function and the constraint functions are both complicated and nonlinear, traditional nonlinear programming may not be efficient and may become trapped at a local minimum. Using crossover operations, mutation operations, and stochastic selection in the genetic algorithm, such trapping can be avoided.
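    A minimal sketch of this kind of optimization, assuming placeholder error and constraint functions rather than the paper's actual sensor model: a genetic algorithm with one-point crossover, Gaussian mutation, and stochastic (tournament) selection, where constraint violations are folded into the fitness as a penalty term.

```python
# Minimal sketch (assumptions, not the paper's implementation): a genetic
# algorithm minimizing a nonlinear error magnitude over sensor poses, with
# constraint violations handled through a simple penalty term.
import numpy as np

rng = np.random.default_rng(1)

def error_magnitude(pose):
    # Placeholder for the probabilistic error magnitude (quantization plus
    # displacement errors) as a function of camera location and orientation.
    return np.sum(np.sin(3 * pose) ** 2 + 0.1 * pose ** 2)

def constraint_penalty(pose):
    # Placeholder for resolution / field-of-view / focus / visibility limits,
    # here simply a box constraint on each pose parameter.
    return 100.0 * np.sum(np.maximum(np.abs(pose) - 2.0, 0.0))

def fitness(pose):
    return error_magnitude(pose) + constraint_penalty(pose)

pop = rng.uniform(-2, 2, size=(50, 6))            # 50 candidate sensor poses
for _ in range(200):
    scores = np.array([fitness(p) for p in pop])
    # Stochastic (tournament) selection of parents: keep the better of each random pair.
    idx = rng.integers(0, len(pop), size=(len(pop), 2))
    parents = pop[np.where(scores[idx[:, 0]] < scores[idx[:, 1]], idx[:, 0], idx[:, 1])]
    # One-point crossover between parents and shuffled partners, then Gaussian mutation.
    partners = parents[rng.permutation(len(parents))]
    cut = rng.integers(1, pop.shape[1], size=len(pop))
    mask = np.arange(pop.shape[1]) < cut[:, None]
    children = np.where(mask, parents, partners)
    children += rng.normal(0, 0.05, size=children.shape)
    pop = children

best = pop[np.argmin([fitness(p) for p in pop])]
print("best pose:", best, "error magnitude:", error_magnitude(best))
```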

    Integrated smoothed location model and data reduction approaches for multi variables classification

    The Smoothed Location Model is a classification rule that handles mixtures of continuous and binary variables simultaneously. The rule discriminates between groups in a parametric form using the conditional distribution of the continuous variables given each pattern of the binary variables. To conduct a practical classification analysis, the objects must first be sorted into the cells of a multinomial table generated from the binary variables, and the parameters in each cell are then estimated from the sorted objects. In many situations, however, the estimated parameters are poor when the number of binary variables is large relative to the sample size. A large number of binary variables creates many empty multinomial cells, leading to a severe sparsity problem and, ultimately, exceedingly poor performance of the constructed rule; in the worst case the rule cannot be constructed at all. To overcome these shortcomings, this study proposes new strategies to extract adequate variables that contribute to optimum performance of the rule. Combinations of two extraction techniques are introduced, namely 2PCA and PCA+MCA with new cut-points on the eigenvalues and the total variance explained, to determine adequate extracted variables that lead to a minimum misclassification rate. The outcomes of these extraction techniques are used to construct smoothed location models, producing two new classification approaches called 2PCALM and 2DLM. Numerical evidence from simulation studies shows no significant difference in misclassification rate between the extraction techniques for normal and non-normal data. Nevertheless, both proposed approaches are slightly affected by non-normal data and severely affected by highly overlapping groups. Investigations on several real data sets show that the two approaches are competitive with, and in some cases better than, other existing classification methods. Overall, both proposed approaches can be considered improvements to the location model and alternatives to other classification methods, particularly for handling mixed variables with a large number of binary variables.
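    A minimal sketch of the data-reduction step, assuming synthetic data and illustrative cut-points (an eigenvalue threshold of 1 and, say, 80% of the total variance explained) rather than the thesis's actual 2PCA/PCA+MCA procedure:

```python
# Minimal sketch (assumed data and thresholds): extracting principal components
# from the continuous variables using cut-points on the eigenvalues and on the
# cumulative total variance explained.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X_continuous = rng.normal(size=(200, 15))   # hypothetical continuous block

pca = PCA().fit(X_continuous)
eigenvalues = pca.explained_variance_
cum_var = np.cumsum(pca.explained_variance_ratio_)

# Retain components whose eigenvalue exceeds 1, or enough components to reach
# 80% cumulative variance, whichever keeps more.
k_eigen = int(np.sum(eigenvalues > 1.0))
k_var = int(np.searchsorted(cum_var, 0.80) + 1)
k = max(k_eigen, k_var)

X_extracted = pca.transform(X_continuous)[:, :k]
print("components retained:", k, "extracted shape:", X_extracted.shape)
```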

    Classifiers for modeling of mineral potential

    [Extract] Classification and allocation of land use is a major policy objective in most countries. Such an undertaking, however, in the face of competing demands from different stakeholders, requires reliable information on resource potential. This type of information enables policy decision-makers to estimate socio-economic benefits from different possible land-use types and then to allocate the most suitable land use. The potential for several types of resources occurring on the earth's surface (e.g., forest, soil, etc.) is generally easier to determine than that of resources occurring in the subsurface (e.g., mineral deposits, etc.). In many situations, therefore, information on the potential for subsurface resources is not among the inputs to land-use decision-making [85]. Consequently, many potentially mineralized lands are alienated from, say, further exploration and exploitation of mineral deposits. Areas with mineral potential are characterized by geological features associated genetically and spatially with the type of mineral deposits sought. The term 'mineral deposits' means accumulations or concentrations of one or more useful naturally occurring substances, which are otherwise usually distributed sparsely in the earth's crust. The term 'mineralization' refers to the collective geological processes that result in the formation of mineral deposits. The term 'mineral potential' describes the probability or favorability for the occurrence of mineral deposits or mineralization. The geological features characteristic of mineralized land, called recognition criteria, are spatial objects indicative of or produced by the individual geological processes that acted together to form mineral deposits. Recognition criteria are sometimes directly observable; more often, their presence is inferred from one or more geographically referenced (or spatial) datasets, which are processed and analyzed appropriately to enhance, extract, and represent the recognition criteria as spatial evidence or predictor maps. Mineral potential mapping then involves integration of the predictor maps in order to classify areas with unique combinations of spatial predictor patterns, called unique conditions [51], as either barren or mineralized with respect to the type of mineral deposit sought.
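    A minimal sketch of the integration step, assuming synthetic binary predictor maps and a logistic-regression classifier as one possible choice (the chapter itself surveys several classifiers): cells are labelled barren or mineralized from their combination of spatial evidence.

```python
# Minimal sketch (hypothetical data, not the chapter's method): a classifier
# over stacked predictor maps, labelling raster cells as barren (0) or
# mineralized (1) from binary spatial-evidence layers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_cells, n_predictors = 5000, 4
# Each row is one cell; each column is one predictor map (evidence present/absent).
evidence = rng.integers(0, 2, size=(n_cells, n_predictors))
# Synthetic labels: deposits are more likely where several recognition criteria coincide.
prob = 1 / (1 + np.exp(-(evidence.sum(axis=1) - 2.5)))
labels = rng.random(n_cells) < prob

model = LogisticRegression().fit(evidence, labels)
favorability = model.predict_proba(evidence)[:, 1]   # mineral potential per cell
print("mean favorability of cells showing all evidence layers:",
      favorability[evidence.sum(axis=1) == n_predictors].mean())
```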

    A Governance Perspective for System-of-Systems

    The operating landscape of 21st-century systems is characteristically ambiguous, emergent, and uncertain. These characteristics affect the capacity and performance of engineered systems and enterprises. In response, there are increasing calls for multidisciplinary approaches capable of confronting such ambiguous, emergent, and uncertain systems. System of Systems Engineering (SoSE) is an example of such an approach. A key aspect of SoSE is the coordination and integration of systems to enable ‘system-of-systems’ capabilities greater than the sum of the capabilities of the constituent systems. However, there is a lack of qualitative studies exploring how this coordination and integration are achieved. The objective of this research is to revisit the utility of SoSE as a multidisciplinary approach and to suggest ‘governance’ as the basis for enabling ‘system-of-systems’ coordination and integration. Here, ‘governance’ is concerned with the direction, oversight, and accountability of a ‘system-of-systems.’ ‘Complex System Governance’ is a novel basis for improving ‘system-of-systems’ performance through purposeful design, execution, and evolution of essential metasystem functions.

    Planning with Discrete Harmonic Potential Fields

    In this work a discrete counterpart to the continuous harmonic potential field (HPF) approach is suggested. The extension to the discrete case makes use of the strong relation HPF-based planning has to connectionist artificial intelligence (AI). Connectionist AI systems are networks of simple, interconnected processors running in parallel within the confines of the environment in which the planning action is to be synthesized. Such a paradigm naturally lends itself to planning on weighted graphs, where the processors may be seen as the vertices of the graph and the relations among them as its edges. Electrical networks are an effective realization of connectionist AI. The utility of the discrete HPF (DHPF) approach is demonstrated in three ways. First, its capability to generate new, abstract planning techniques is demonstrated by constructing a novel, efficient, optimal, discrete planning method called the M* algorithm. Second, its ability to augment the capabilities of existing planners is demonstrated by suggesting a generic solution to the lower-bound problem faced by the A* algorithm. Third, the DHPF approach is shown to be useful in solving specific planning problems in communication: it is demonstrated that the discrete HPF paradigm can support routing on the fly while the network is still in a transient state, and it is shown by simulation that if a path to the target always exists and the switching delays in the routers are negligible, a packet will reach its destination despite changes in the network that may take place while the packet is being routed.
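    A minimal sketch of a discrete harmonic potential field (my own illustration, not the paper's M* algorithm or routing scheme): the potential is relaxed over a grid with the goal clamped low and obstacles and walls clamped high, and a path is then traced by steepest descent, which cannot get stuck in a local minimum because the interior field is harmonic.

```python
# Minimal sketch (illustrative grid world): a discrete harmonic potential field
# obtained by Jacobi relaxation of the Laplace equation, followed by steepest
# descent from the start cell to the goal cell.
import numpy as np

H, W = 20, 20
potential = np.ones((H, W))
obstacle = np.zeros((H, W), dtype=bool)
obstacle[5:15, 10] = True                  # a vertical wall obstacle
goal, start = (18, 18), (1, 1)

for _ in range(5000):                      # Jacobi relaxation of the Laplace equation
    new = potential.copy()
    new[1:-1, 1:-1] = 0.25 * (potential[:-2, 1:-1] + potential[2:, 1:-1] +
                              potential[1:-1, :-2] + potential[1:-1, 2:])
    new[obstacle] = 1.0                    # obstacles held at the high potential
    new[goal] = 0.0                        # goal held at the low potential
    new[0, :] = new[-1, :] = new[:, 0] = new[:, -1] = 1.0   # boundary walls
    potential = new

path, cell = [start], start
while cell != goal and len(path) < H * W:
    r, c = cell
    neighbours = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                  if 0 <= r + dr < H and 0 <= c + dc < W]
    cell = min(neighbours, key=lambda p: potential[p])      # steepest descent
    path.append(cell)

print("path length:", len(path))
```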