12 research outputs found

    Blind restoration of images with penalty-based decision making: a consensus approach

    In this thesis we show a relationship between fuzzy decision making and image processing. Various applications of the consensus methodology to image noise reduction are introduced, and a new approach is presented to deal with non-stationary Gaussian noise and spatially non-stationary noise in MRI.
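The consensus idea can be illustrated with a toy sketch: several standard filters each produce a candidate restoration, and their outputs are fused per pixel. This is only a minimal illustration with common textbook filters, not the thesis's actual penalty-based decision-making scheme; all function names here are hypothetical.

```python
import statistics

def _window(img, i, j, r=1):
    # Collect the (2r+1) x (2r+1) neighborhood values, clipped at the borders.
    h, w = len(img), len(img[0])
    return [img[a][b]
            for a in range(max(0, i - r), min(h, i + r + 1))
            for b in range(max(0, j - r), min(w, j + r + 1))]

def mean_filter(img):
    return [[sum(_window(img, i, j)) / len(_window(img, i, j))
             for j in range(len(img[0]))] for i in range(len(img))]

def median_filter(img):
    return [[statistics.median(_window(img, i, j))
             for j in range(len(img[0]))] for i in range(len(img))]

def consensus_denoise(img):
    # Fuse the candidate restorations. A penalty-based scheme would weight
    # each filter by how much the other filters agree with it; a plain
    # average already illustrates the consensus step.
    f1, f2 = mean_filter(img), median_filter(img)
    return [[(f1[i][j] + f2[i][j]) / 2 for j in range(len(img[0]))]
            for i in range(len(img))]
```

On an impulse-noise pixel, the fused value sits between the mean filter (biased by the outlier) and the median filter (robust to it).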

    Trust networks for recommender systems

    Recommender systems use information about their users' profiles and relationships to suggest items that might be of interest to them. Recommenders that incorporate a social trust network among their users have the potential to make more personalized recommendations than traditional systems, provided they succeed in utilizing the additional (dis)trust information to their advantage. Such trust-enhanced recommenders consist of two main components: recommendation technologies and trust metrics (techniques that estimate the trust between two unknown users). We introduce a new bilattice-based model that considers trust and distrust as two different but dependent components, and study the accompanying trust metrics. Two of their key building blocks are trust propagation and aggregation. If user a wants to form an opinion about an unknown user x, a can contact one of his acquaintances, who can contact another one, and so on, until a user is reached who is connected with x (propagation). Since a will often contact several persons, one also needs a mechanism to combine the trust scores that result from several propagation paths (aggregation). We introduce new fuzzy logic propagation operators and focus on the potential of OWA strategies and the effect of knowledge defects. Our experiments demonstrate that propagators that actively incorporate distrust are more accurate than standard approaches, and that the new aggregators result in better predictions than purely bilattice-based operators. In the second part of the dissertation, we focus on the application of trust networks in recommender systems. After introducing a new detection measure for controversial items, we show that trust-based approaches are more effective than baselines. We also propose a new algorithm that achieves immediate high coverage while the accuracy remains adequate.
Furthermore, we provide the first experimental study on the potential of distrust in a memory-based collaborative filtering recommendation process. Finally, we study the user cold start problem: we propose to identify key figures in the network and to suggest them as possible connection points for newcomers. Our experiments show that it is much more beneficial for a new user to connect to an identified key figure than to make random connections.
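The propagation-and-aggregation pipeline described above can be sketched in a few lines. This is a generic illustration on [0, 1] trust scores, not the dissertation's bilattice-based trust/distrust model: multiplication along a path and a hand-picked OWA weight vector are common baseline choices, not the operators studied in the thesis.

```python
def propagate(path):
    # Baseline propagation: trust decays multiplicatively along a chain
    # of acquaintances a -> b -> ... -> x.
    score = 1.0
    for t in path:
        score *= t
    return score

def owa(scores, weights):
    # Ordered weighted averaging: sort scores descending, then take the
    # weighted sum; the weight vector (summing to 1) tunes how optimistic
    # the aggregation is.
    assert len(weights) == len(scores) and abs(sum(weights) - 1.0) < 1e-9
    ordered = sorted(scores, reverse=True)
    return sum(w * s for w, s in zip(weights, ordered))

# Two propagation paths from a to x, aggregated pessimistically
# (more weight on the lower path score).
paths = [[0.9, 0.8], [0.6, 0.5]]
scores = [propagate(p) for p in paths]   # [0.72, 0.30]
estimate = owa(scores, [0.3, 0.7])       # 0.3*0.72 + 0.7*0.30 = 0.426
```

Changing the weight vector to [0.7, 0.3] would make the same two paths yield a noticeably higher, more optimistic estimate.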

    A semantical and computational approach to covering-based rough sets


    Development of statistical methodologies applied to anthropometric data oriented towards the ergonomic design of products

    Ergonomics is the scientific discipline that studies the interactions between human beings and the elements of a system, and it has multiple applications in areas such as clothing and footwear design and both working and household environments. In each of these sectors, knowing the anthropometric dimensions of the current target population is fundamental to ensure that products suit most of the users who make up that population as well as possible. Anthropometry refers to the study of the measurements and dimensions of the human body, and it is considered a very important branch of Ergonomics because of its considerable influence on the ergonomic design of products. Human body measurements have usually been taken using rulers, calipers or measuring tapes. These procedures are simple and cheap to carry out. However, they have one major drawback: the body measurements obtained, and consequently the human shape information, are imprecise and inaccurate. Furthermore, they always require interaction with real subjects, which increases measurement time and complicates data collection. The development of new three-dimensional (3D) scanning techniques has represented a huge step forward in the way anthropometric data are obtained. This technology allows 3D images of the human shape to be captured and, at the same time, generates highly detailed and reproducible anthropometric measurements. The great potential of these new scanning systems for the digitization of the human body has contributed to promoting new anthropometric studies in several countries, such as the United Kingdom, Australia, Germany, France and the USA, in order to acquire accurate anthropometric data of their current populations. In this context, in 2006 the Spanish Ministry of Health commissioned a 3D anthropometric survey of the Spanish female population, following the agreement signed by the Ministry itself with the Spanish associations and companies of the manufacturing, distribution, fashion design and knitwear sectors.
A sample of 10415 Spanish females from 12 to 70 years old, randomly selected from the official Postcode Address File, was measured. The two main objectives of this study, which was conducted by the Biomechanics Institute of Valencia, were the following: on the one hand, to characterize the shape and body dimensions of the current Spanish female population in order to develop a standard sizing system that could be used by all clothing designers; on the other hand, to promote a healthy image of beauty through the representation of realistic mannequins. In order to tackle both objectives, Statistics plays an essential role. Thus, the statistical methodologies presented in this PhD work have been applied to the database obtained from the Spanish anthropometric study. Clothing sizing systems classify the population into homogeneous groups (size groups) based on some key anthropometric dimensions. All members of the same group are similar in body shape and size, so they can wear the same garment, while members of different groups differ substantially in their body dimensions. An efficient and optimal sizing system aims to accommodate as large a percentage of the population as possible in the optimum number of size groups that best describes the shape variability of the population. Besides, the garment fit for the accommodated individuals must be as good as possible. A very valuable reference on sizing systems is the book Sizing in Clothing: Developing Effective Sizing Systems for Ready-to-Wear Clothing, by Susan Ashdown. Each clothing size is defined from a person whose body measurements are located toward the central value of each of the dimensions considered in the analysis. This central person, considered the size representative (the size prototype), becomes the basic pattern from which the clothing line in the same size is designed.
Clustering is the statistical tool that divides a set of individuals into groups (clusters) in such a way that subjects in the same cluster are more similar to each other than to those in other groups. In addition, clustering describes each group by means of a representative individual. Therefore, the idea of using clustering to define an efficient sizing system arises naturally. Specifically, four of the methodologies presented in this PhD thesis, all aimed at segmenting the population into optimal sizes, use different clustering methods. The first one, called trimowa, has been published in Expert Systems with Applications. It is based on a specially defined distance to examine differences between women regarding their body measurements. The second and third ones (called biclustAnthropom and TDDclust, respectively) will soon be submitted for publication in a joint paper. BiclustAnthropom adapts to the field of Anthropometry a clustering method originally devised for the specific case of gene expression data, while TDDclust uses the concept of statistical depth to group observations according to the most central (deepest) observation in each size. As mentioned, current sizing systems are based on an appropriate set of anthropometric dimensions, so clustering is carried out in Euclidean space, and the three previous proposals work in this way. Instead, in the fourth and last approach, called kmeansProcrustes, a clustering procedure is proposed for grouping women according to their body shape, represented by a set of anatomical markers (landmarks). For this purpose, statistical shape analysis is fundamental. This contribution has been submitted for publication. A sizing system is intended to cover the so-called standard population, discarding the individuals with extreme sizes (both large and small). In mathematical language, these individuals can be considered outliers.
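The clustering idea underlying these sizing methodologies can be sketched with plain k-means on two hypothetical key dimensions (say, bust and hip girth in cm), where each centroid plays the role of a size prototype. The thesis's methods (trimowa, biclustAnthropom, TDDclust, kmeansProcrustes) use more refined distances and depth notions; this only shows the underlying principle.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    # Plain Lloyd's algorithm on tuples of measurements.
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # Assign each individual to the nearest prototype (squared
            # Euclidean distance).
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            groups[i].append(p)
        # Each prototype becomes the mean of its size group.
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*g)) if g
                   else centers[i] for i, g in enumerate(groups)]
    return centers, groups

# Six hypothetical women forming two clearly separated size groups.
sample = [(84, 90), (86, 92), (85, 91), (100, 110), (102, 112), (101, 111)]
prototypes, size_groups = kmeans(sample, k=2)
```

Each returned prototype is the "central person" of its size group; a real sizing system would also trim outliers and control group sizes.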
An outlier is an observation point that is distant from the other observations; in our case, a person with extreme anthropometric measurements would be considered a statistical outlier. Clothing companies usually design garments for the standard sizes so that their market share is optimal. Nevertheless, with their foreign expansion, many brands are broadening their collections and already have special-size sections. In recent years, Internet shopping has become an alternative for consumers with extreme sizes looking for clothes that follow trends. Custom-made fabrication is another possibility, with the advantage of producing garments according to the customers' preferences. The four aforementioned methodologies (trimowa, biclustAnthropom, TDDclust and kmeansProcrustes) have been adapted to accommodate only the standard population. Once a particular garment has been designed, the assessment and analysis of fit are performed using one or more fit models. The fit model represents the body dimensions selected by each company to define the proportional relationships needed to achieve the fit the company has determined. The definition of an efficient sizing system relies heavily on the accuracy and representativeness of the fit models with respect to the population to which it is addressed. In this PhD work, a statistical approach is proposed to identify representative fit models. It is based on another clustering method originally developed for grouping gene expression data. This method, called hipamAnthropom, has been published in Decision Support Systems. From well-defined fit models and prototypes, representative and accurate mannequins of the population can be made.
Unlike clothing design, where representative cases correspond to central individuals, in the design of working and household environments the variability of human shape is described by extreme individuals, i.e. those that have the largest or smallest values (or extreme combinations) of the dimensions involved in the study. This is often referred to as the accommodation problem. A very interesting reference in this area is the book entitled Guidelines for Using Anthropometric Data in Product Design, published by the Human Factors and Ergonomics Society. The idea behind this way of proceeding is that if a product fits the extreme observations, it will also fit the other (less extreme) ones. To that end, in this PhD thesis we propose two methodological contributions based on statistical archetypal analysis. An archetype in Statistics is an extreme individual that is obtained as a convex combination of other subjects of the sample. The first of these methodologies has been published in Computers and Industrial Engineering, whereas the second one has been submitted for publication. The outline of this PhD report is as follows: Chapter 1 reviews the state of the art of Ergonomics and Anthropometry and introduces the anthropometric survey of the Spanish female population. Chapter 2 presents the trimowa, biclustAnthropom and hipamAnthropom methodologies. Chapter 3 details the kmeansProcrustes proposal. The TDDclust methodology is explained in Chapter 4, and Chapter 5 presents the two methodologies related to archetypal analysis. Since all these contributions have been programmed in the statistical software R, Chapter 6 presents the Anthropometry R package, which brings together all the algorithms associated with each approach. In this way, Chapters 2 to 6 present all the methodologies and results included in this PhD thesis. Finally, Chapter 7 provides the most important conclusions.
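The accommodation idea, design for the extremes and the interior follows, can be illustrated with a rough two-dimensional stand-in: the convex hull of a sample, whose vertices are the "extreme individuals". Real archetypal analysis instead solves a constrained least-squares problem whose archetypes are convex combinations of the data, so this hull computation is only a geometric analogy, not one of the thesis's algorithms.

```python
def cross(o, a, b):
    # Z-component of (a - o) x (b - o); positive for a left turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    # Andrew's monotone chain: returns the hull vertices in order.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]
```

Any sample point not on the hull is a convex combination of hull vertices, so a design accommodating the vertices accommodates it too.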

    Facilitating Brownfield Redevelopment Projects: Evaluation, Negotiation, and Policy

    A risky project evaluation technique called fuzzy real options analysis is developed to evaluate brownfield redevelopment projects. Other decision-making techniques, such as multiple criteria analysis and conflict analysis, can be incorporated into fuzzy real options analysis to facilitate negotiations on brownfield redevelopment among decision makers (DMs). The value of managerial flexibility, which is important in negotiations and policy making for brownfield redevelopment, is overlooked when the traditional evaluation method, net present value (NPV), is employed. The findings of this thesis can be used to promote brownfield redevelopment, thereby helping to eliminate environmental threats and enhance regional sustainability. A brownfield is an abandoned or underutilized property that contains, or may contain, pollutants, hazardous substances, or contaminants from previous usage, typically industrial activity. Brownfields often appear when the local economy transitions from industrial to more profitable service-oriented activities. Governments actively promote brownfield redevelopment to eliminate public health threats, help economic transition, and enhance sustainability. However, developers are reluctant to participate in brownfield redevelopment because they often regard these projects as unprofitable when using classic evaluation techniques. On the other hand, case studies show that brownfield redevelopment projects can be good business opportunities for developers. An improved evaluation method is therefore developed in order to estimate the value of a brownfield more accurately. The main reason for the difference between estimated and "actual" values lies in the failure of deterministic project evaluation tools to price uncertainty, which motivates efforts to enhance decision making under uncertainty.
Real options modelling, which extends the ability of option pricing models to real asset evaluation, is employed in risky project evaluation because of its capacity to handle uncertainties. However, brownfield redevelopment projects contain uncertain factors that have no market price, violating the assumption of option pricing models that all risks are reflected in the market. This problem, called private risk, is addressed in this thesis by incorporating fuzzy numbers into real options, yielding what can be called fuzzy real options. Fuzzy real options are shown to generalize the original model to deal with additional kinds of uncertainty, making them more suitable for project evaluation. A numerical technique based on hybrid variables is developed to price fuzzy real options. We propose an extension of Least Squares Monte Carlo simulation (LSM) that produces numerical evaluations of options. A major advantage of this methodology lies in its ability to produce results regardless of whether or not an analytic solution exists. Tests show that the generalized LSM produces results similar to the analytic valuation of fuzzy real options, when the latter is possible. To facilitate parameter estimation for the fuzzy real options model, another numerical method is proposed to represent the likelihood of contamination of a brownfield using fuzzy boundaries. Linguistic quantifiers and ordered weighted averaging (OWA) techniques are utilized to determine the likelihood of pollution at sample locations based on multiple environmental indicators, acting as a fuzzy deduction rule to calculate the triangular membership functions of the fuzzy parameters. The risk preferences of DMs are expressed as different "orness" levels of the OWA operators, which affect the likelihood estimates. When the fuzzy boundaries of a brownfield are generated by interpolation of the sample points, the parameters of the fuzzy real options, drift rate and volatility, can be calculated as fuzzy numbers.
Hence, this proposed method can act as an intermediary between DMs and the fuzzy real options models, making the model much easier to apply. The values that DMs attach to a brownfield can be input to the graph model for conflict resolution (GMCR) to identify possible resolutions of a brownfield redevelopment negotiation among all possible states, or combinations of the DMs' choices. Major redevelopment policies are studied using a brownfield redevelopment case, the Ralgreen Community in Kitchener, Ontario, Canada. The fuzzy preference framework and a probability-based comparison method for ranking fuzzy variables are employed to integrate fuzzy real options and GMCR. Insights into this conflict and general policy suggestions are provided. A potential negotiation support system (NSS) implementing these numerical methods is discussed in the context of negotiating brownfield redevelopment projects. The NSS combines computational modules, decision support system (DSS) prototypes, geographic information systems (GIS), and messaging systems. A public-private partnership (PPP) can be enhanced through the information sharing, scenario generation, and conflict analysis provided by the NSS, encouraging more efficient brownfield redevelopment and leading to greater regional sustainability. The integrated usage of fuzzy real options, OWA, and GMCR takes advantage of both fuzziness and randomness, making a better evaluation technique available in a negotiation setting with multiple DMs. The decision techniques employed range from decision analysis and multiple criteria analysis to a game-theoretic approach, contributing to a broader picture of decision making under uncertainty. When these methods were used to study brownfield redevelopment, we found that creating better business opportunities, such as allowing land use change to raise net income, is more important in determining equilibria than remediation cost refunding. Better redevelopment policies can thus be proposed to aid negotiations among stakeholders.
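The role of "orness" in encoding a DM's risk attitude can be shown with a small sketch: the same pollution indicators aggregate to a higher or lower likelihood depending on the OWA weight vector. The indicator values and weight vectors below are made up for illustration; the thesis combines OWA with linguistic quantifiers and fuzzy membership functions, which this sketch omits.

```python
def owa(values, weights):
    # OWA: weights are applied to the values sorted in descending order.
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

def orness(weights):
    # Orness in [0, 1]: 1 reproduces max, 0 reproduces min,
    # uniform weights give 0.5 (the plain average).
    n = len(weights)
    return sum((n - 1 - i) * w for i, w in enumerate(weights)) / (n - 1)

indicators = [0.9, 0.4, 0.2]      # hypothetical normalized pollution indicators
optimistic = [0.6, 0.3, 0.1]      # orness 0.75: weight on high readings
pessimistic = [0.1, 0.3, 0.6]     # orness 0.25: weight on low readings

risk_seeking = owa(indicators, optimistic)    # 0.68
risk_averse = owa(indicators, pessimistic)    # 0.33
```

The spread between the two estimates (0.68 vs. 0.33) is exactly the kind of attitude-dependent likelihood that then feeds the fuzzy parameters.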

    Application of advanced techniques for the remote detection, modelling and spatial analysis of mesquite (Prosopis spp.) invasion in Western Australia

    Invasive plants pose serious threats to economic, social and environmental interests throughout the world. Developing strategies for their management requires a range of information that is often impractical to collect from ground-based surveys. In other cases, such as retrospective analyses of historical invasion rates and patterns, data are rarely, if ever, available from such surveys. Instead, historical archives of remotely sensed imagery provide one of the only existing records, and they are used in this research to determine invasion rates and reconstruct invasion patterns of a ca 70-year-old exotic mesquite population (Leguminosae: Prosopis spp.) in the Pilbara Region of Western Australia, thereby helping to identify ways to reduce spread and infill. A model was then developed using this, and other, information to predict which parts of the Pilbara are most at risk. This information can assist in identifying areas requiring the most vigilant intervention and pre-emptive measures. Precise information on the location and areal extent of an invasive species is also crucial for land managers and policy makers when crafting management strategies aimed at the control, confinement or eradication of some or all of the population. Therefore, the third component of this research was to develop and test high spectral and spatial resolution airborne imagery as a potential monitoring tool for tracking changes at various intervals and quantifying the effectiveness of the management strategies adopted. To this end, high spatial resolution digital multispectral imagery (DMSI; 4 channels, 1 m spatial resolution) and hyperspectral imagery (126 channels, 3 m spatial resolution) were acquired and compared for their potential to distinguish mesquite from coexisting species and land covers. These three modules of research are summarised hereafter.
To examine the rates and patterns of mesquite invasion through space and time, canopies were extracted from a temporal series of panchromatic aerial photography over an area of 450 ha using unsupervised classification. Non-mesquite trees and shrubs were not discernible from mesquite using this imagery (or technique) and so were masked out using an image acquired prior to invasion. The accuracy of the mesquite extractions was corroborated in the field and found to be high (R2 = 0.98, P36 m2 (66-94%) with both approaches and image types. However, both approaches used on the hyperspectral imagery were more reliable at capturing patches >36 m2 than the DMSI using either approach. The lowest omission and commission rates were obtained using pairwise separation on the hyperspectral imagery, which was significantly more accurate than DMSI using an overall separation approach (Z=2.78, P36 m2. However, hyperspectral imagery processed using pairwise separation appears to be superior, even though it is not statistically different from hyperspectral imagery processed using overall separation or DMSI processed using pairwise separation at the 95% confidence level. Mapping smaller patches may require the use of very high spatial resolution imagery, such as that achievable from unmanned airborne vehicles, coupled with a hyperspectral instrument. Alternatively, management may continue to rely on visual airborne surveys flown at low altitude and speed, which have proven capable of mapping small and isolated mesquite shrubs in the study area used in this research.
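The masking step described above, removing canopies that already existed before the invasion, so that only candidate mesquite pixels remain, amounts to a simple per-pixel set difference of binary maps. This toy sketch assumes both maps are already co-registered binary grids; the actual workflow operated on classified aerial photography.

```python
def mask_pre_invasion(classified, pre_invasion_canopy):
    # Both inputs are binary 2-D grids (1 = canopy pixel). A pixel is kept
    # as candidate mesquite only if it is canopy now and was NOT canopy
    # in the pre-invasion image.
    return [[1 if c and not m else 0
             for c, m in zip(c_row, m_row)]
            for c_row, m_row in zip(classified, pre_invasion_canopy)]
```

Summing the surviving pixels per acquisition date then gives the invasion-extent time series used to estimate rates of spread.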

    Advances and Applications of Dezert-Smarandache Theory (DSmT) for Information Fusion (Collected Works), Vol. 4

    The fourth volume on Advances and Applications of Dezert-Smarandache Theory (DSmT) for information fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics. The contributions (see the List of Articles published in this book, at the end of the volume) have been published or presented in international conferences, seminars, workshops and journals after the dissemination of the third volume (2009, http://fs.unm.edu/DSmT-book3.pdf). The first part of this book presents the theoretical advancement of DSmT, dealing with belief functions, conditioning and deconditioning, the Analytic Hierarchy Process, decision making, multi-criteria analysis, evidence theory, combination rules, evidence distance, conflicting belief, sources of evidence with different importances and reliabilities, importance of sources, the pignistic probability transformation, qualitative reasoning under uncertainty, imprecise belief structures, 2-tuple linguistic labels, the Electre Tri method, hierarchical proportional redistribution, basic belief assignment, subjective probability measures, Smarandache codification, neutrosophic logic, outranking methods, Dempster-Shafer Theory, the Bayes fusion rule, frequentist probability, mean square error, controlling factors, optimal assignment solutions, data association, the Transferable Belief Model, and others. More applications of DSmT have emerged in the past years since the appearance of the third DSmT book in 2009.
Accordingly, the second part of this volume covers applications of DSmT in connection with Electronic Support Measures, belief functions, sensor networks, ground moving target and multiple target tracking, vehicle-borne improvised explosive devices, the Belief Interacting Multiple Model filter, seismic and acoustic sensors, Support Vector Machines, alarm classification, the ability of the human visual system, the Uncertainty Representation and Reasoning Evaluation Framework, threat assessment, handwritten signature verification, automatic aircraft recognition, Dynamic Data-Driven Application Systems, adjustment of secure communication trust analysis, and so on. Finally, the third part presents a list of references related to DSmT, published or presented over the years since its inception in 2004, in chronological order.
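For context on the "combination rule" and "conflicting belief" vocabulary above: DSmT generalizes classic Dempster-Shafer theory. The sketch below implements the classic Dempster rule of combination on a two-hypothesis frame, with made-up masses; DSmT's proportional conflict redistribution (PCR) rules redistribute the conflict mass differently rather than normalizing it away.

```python
def dempster_combine(m1, m2):
    # Masses are dicts mapping frozenset hypotheses to belief mass.
    combined, conflict = {}, 0.0
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            inter = s1 & s2
            if inter:
                # Compatible evidence reinforces the intersection.
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                # Incompatible evidence accumulates as conflict mass.
                conflict += v1 * v2
    norm = 1.0 - conflict
    # Dempster's rule renormalizes by the non-conflicting mass.
    return {s: v / norm for s, v in combined.items()}, conflict

A, B = frozenset({"A"}), frozenset({"B"})
m1 = {A: 0.6, B: 0.1, A | B: 0.3}   # hypothetical source 1
m2 = {A: 0.5, B: 0.2, A | B: 0.3}   # hypothetical source 2
fused, k12 = dempster_combine(m1, m2)
```

With highly conflicting sources the normalization step becomes unstable, which is exactly the situation motivating the alternative rules studied in this volume.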

    Graph-Based Approaches to Protein Structure Comparison - From Local to Global Similarity

    The comparative analysis of protein structure data is a central aspect of structural bioinformatics. Drawing upon structural information allows the inference of function for unknown proteins even in cases where no apparent homology can be found at the sequence level. Regarding the function of an enzyme, the overall fold topology might be less important than the specific structural conformation of the catalytic site or the surface region of a protein, where the interaction with other molecules, such as binding partners, substrates and ligands, occurs. Thus, a comparison of these regions is especially interesting for functional inference, since the structural constraints imposed by the demands of the catalyzed biochemical function make them more likely to exhibit structural similarity. Moreover, the comparative analysis of protein binding sites is of special interest in pharmaceutical chemistry, in order to predict cross-reactivities and gain a deeper understanding of catalysis mechanisms. From an algorithmic point of view, the comparison of structured data, or, more generally, complex objects, can be attempted based on different methodological principles. Global methods aim at comparing structures as a whole, while local methods transfer the problem to multiple comparisons of local substructures. In the context of protein structure analysis, it is not a priori clear which strategy is more suitable. In this thesis, several conceptually different algorithmic approaches, based on local, global and semi-global strategies, have been developed for the task of comparing protein structure data, more specifically protein binding pockets. The use of graphs for modeling protein structure data has a long-standing tradition in structural bioinformatics. Recently, graphs have been used to model the geometric constraints of protein binding sites.
The algorithms developed in this thesis are based on this modeling concept; hence, from a computer scientist's point of view, they can also be regarded as global, local and semi-global approaches to graph comparison. The algorithms were mainly designed with the aim of allowing a more approximate comparison of protein binding sites, in order to account for the molecular flexibility of protein structures. A main motivation was to enable the detection of more remote similarities, which are not apparent to more rigid methods. Subsequently, the developed approaches were applied to different problems typically encountered in the field of structural bioinformatics, in order to assess and compare their performance and suitability for different tasks. Each of the approaches developed during this work was capable of improving upon the performance of existing methods in the field. Another major aspect of the experiments was the question of which methodological concept, local, global, or a combination of both, offers the most benefit for the specific task of protein binding site comparison, a question that is addressed throughout this thesis.
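To make the graph-comparison framing concrete, here is a deliberately crude "global" similarity on node-labeled graphs: each binding site becomes a set of labeled-edge fingerprints, compared by a Jaccard score. The labels and example pockets are hypothetical, and this is not one of the thesis's algorithms, which tolerate geometric flexibility rather than requiring exact fingerprint matches.

```python
def edge_fingerprints(labels, edges):
    # Each undirected edge becomes a sorted (label, label) pair, so the
    # graph is summarized by which label pairs are connected.
    return {tuple(sorted((labels[u], labels[v]))) for u, v in edges}

def global_similarity(g1, g2):
    # Jaccard overlap of the two fingerprint sets (1.0 = identical).
    f1, f2 = edge_fingerprints(*g1), edge_fingerprints(*g2)
    if not f1 and not f2:
        return 1.0
    return len(f1 & f2) / len(f1 | f2)

# Two small hypothetical pockets: nodes labeled by pharmacophore type.
site1 = ({0: "donor", 1: "acceptor", 2: "hydrophobic"}, [(0, 1), (1, 2)])
site2 = ({0: "donor", 1: "acceptor", 2: "aromatic"}, [(0, 1), (1, 2)])
```

A local strategy would instead enumerate and match small substructures; the thesis investigates which of the two regimes, or a semi-global mix, works best.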