
    Π5 – Technical report (literature review on combining artificial intelligence methods with multicriteria analysis)

    This literature review focuses on analyzing the relationship between multicriteria decision analysis (MCDA) techniques and methodologies from the field of artificial intelligence (AI), as well as the way robustness is treated in the two fields. This survey helps identify synergies that can arise from developing procedures that combine ideas, concepts, and principles from the fields of MCDA and AI for a better study of robustness in decision-making problems. The literature review is carried out in the context of decision-model development through the preference disaggregation approach (Jacquet-Lagrèze & Siskos, 2001) of MCDA, which, as will be discussed, shares significant common ground with techniques from the field of AI, and in particular with machine learning methodologies.

    Π7 – Technical report (experimental evaluation of robustness measures in sorting problems)

    As part of the present research project, a computational experimental evaluation of the methodologies (as well as of a new, original linear programming model) was carried out, aiming to investigate how their results relate to the concept of robustness. The analysis is performed on multicriteria sorting problems (Zopounidis and Doumpos, 2002) and is based on artificial data constructed with suitably chosen characteristics. The study examines linear and non-linear additive value functions, which are a particularly widespread form of multicriteria decision model. The results contribute to a better understanding of the characteristics of methodologies based on the preference disaggregation approach and of the way these connect to the concept of robustness.
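The additive value models examined above can be illustrated with a minimal sketch: a weighted sum of criterion values compared against value thresholds that separate ordered classes. The weights, thresholds, and three-class setup below are hypothetical, not the study's actual experimental configuration.

```python
import numpy as np

# Minimal sketch of a linear additive value model for multicriteria sorting.
# All numbers are illustrative assumptions.
weights = np.array([0.5, 0.3, 0.2])   # criteria weights, summing to 1
thresholds = [0.66, 0.33]             # value cut-offs separating 3 ordered classes

def global_value(x):
    """Linear additive value of an alternative x (criteria scaled to [0, 1])."""
    return float(weights @ x)

def sort_alternative(x):
    """Assign an alternative to an ordered class (1 = best) via the thresholds."""
    v = global_value(x)
    for k, t in enumerate(thresholds, start=1):
        if v >= t:
            return k
    return len(thresholds) + 1

print(sort_alternative(np.array([0.9, 0.8, 0.7])))  # prints 1 (high-value alternative)
```

In a non-linear additive model, each coordinate would first pass through a monotone marginal value function before the weighted aggregation.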

    Data-driven Preference Learning Methods for Multiple Criteria Sorting with Temporal Criteria

    The advent of predictive methodologies has catalyzed the emergence of data-driven decision support across various domains. However, developing models capable of effectively handling input time series data presents an enduring challenge. This study presents novel preference learning approaches to multiple criteria sorting (MCS) problems in the presence of temporal criteria. We first formulate a convex quadratic programming model characterized by fixed time discount factors, operating within a regularization framework. Additionally, we propose an ensemble learning algorithm designed to consolidate the outputs of multiple, potentially weaker, optimizers, a process executed efficiently through parallel computation. To enhance scalability and accommodate learnable time discount factors, we introduce a novel monotonic Recurrent Neural Network (mRNN). It is designed to capture the evolving dynamics of preferences over time while upholding critical properties inherent to MCS problems, including criteria monotonicity, preference independence, and the natural ordering of classes. The proposed mRNN can describe the preference dynamics by depicting marginal value functions and personalized time discount factors over time, effectively amalgamating the interpretability of traditional MCS methods with the predictive potential offered by deep preference learning models. Comprehensive assessments of the proposed models are conducted, encompassing synthetic data scenarios and a real-case study centered on classifying valuable users within a mobile gaming app based on their historical in-app behavioral sequences. Empirical findings underscore the notable performance improvements achieved by the proposed models when compared to a spectrum of baseline methods, spanning machine learning, deep learning, and conventional multiple criteria sorting approaches.
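One way to picture the fixed-discount setting is to collapse each temporal criterion into a single score by exponential time discounting before additive aggregation. The discounting form and every parameter below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Sketch of time-discounted aggregation for temporal criteria.
# Everything here is an illustrative assumption, not the paper's model.
def discounted_score(series, gamma):
    """Discount a time series (most recent observation last) with factor gamma."""
    T = len(series)
    discounts = gamma ** np.arange(T - 1, -1, -1)   # older points weigh less
    return float(discounts @ series) / discounts.sum()

def temporal_value(X, gammas, weights):
    """Additive value over temporal criteria: X has shape (criteria, time)."""
    scores = [discounted_score(row, g) for row, g in zip(X, gammas)]
    return float(np.dot(weights, scores))
```

Learning the discount factors, as the mRNN does, would amount to treating each `gamma` as a trainable parameter instead of a fixed constant.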

    Consumer load modeling and fair mechanisms in the efficient transactive energy market

    Doctor of Philosophy, Department of Electrical and Computer Engineering, Sanjoy Das. Two significant and closely related issues pertaining to the grid-constrained transactive distribution system market are investigated in this research. First, the problem of spatial fairness in the allocation of energy among energy consumers is addressed, where consumer agents located at large distances from the substation, in terms of grid layout, are charged at higher rates than those close to it. This phenomenon, arising from the grid's voltage and flow limits, is aggravated during demand peaks. Using Jain's index to quantify fairness, two auction mechanisms are proposed. Both approaches are bilevel, with aggregators acting as interface agents between the consumers and the upstream distribution system operator (DSO). Furthermore, although both mechanisms maximize social welfare, neither makes use of the agents' utility functions. The first mechanism is cost-setting, with the DSO determining unit costs; it incorporates Jain's index as a second term alongside the social welfare. Next, a power-setting auction mechanism is put forth in which the DSO's role is to allocate energy in response to market-equilibrium unit costs established at each aggregator through an iterative bidding process among its consumers. The Augmented Lagrangian Multigradient Approach (ALMA), which is based on vector gradient descent, is proposed in this research for implementation at the upper level. The mechanism's lower level comprises multiple auctions realized by the aggregators. The quasi-concavity of Jain's index is theoretically established, and it is shown that ALMA converges to the Pareto front representing tradeoffs between social welfare and fairness. The effectiveness of both mechanisms is established through simulations carried out using a modified IEEE 37-bus system platform.
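Jain's index, used above to quantify spatial fairness, has a simple closed form: J(x) = (Σx)² / (n·Σx²), equal to 1 for a perfectly equal allocation and 1/n when a single agent receives everything.

```python
import numpy as np

# Jain's fairness index for an allocation vector x:
#   J(x) = (sum x)^2 / (n * sum x^2)
def jains_index(x):
    x = np.asarray(x, dtype=float)
    return x.sum() ** 2 / (len(x) * (x ** 2).sum())

print(jains_index([1, 1, 1, 1]))   # prints 1.0 (perfectly equal allocation)
print(jains_index([4, 0, 0, 0]))   # prints 0.25 (one agent takes everything, 1/n)
```

Appending such a term to a social-welfare objective, as the first mechanism does, trades total welfare against equality of the allocation.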
The issue of extracting patterns of energy usage from time-series energy use profiles of individual consumers is the focus of the second phase of this research. Two novel approaches for non-intrusive load disaggregation, based on non-negative matrix factorization (NMF), are proposed. Both algorithms distinguish between fixed and shiftable load classes, with the latter characterized by binary OFF and ON cycles. Fixed loads are represented as linear combinations of a set of basis vectors that are learned by NMF. One approach imposes L0-norm constraints on each shiftable load using a new method called binary load decomposition. The other models shiftable loads as Gaussian mixture models (GMM), using expectation-maximization for unsupervised learning. This hybrid NMF-GMM algorithm enjoys the theoretical advantage of being interpretable as a maximum-likelihood procedure within a probabilistic framework. Numerical studies with real load profiles demonstrate that both algorithms can effectively disaggregate total loads into energy used by individual appliances. Using the disaggregated loads, a maximum-margin regression approach is proposed to derive more elaborate, temperature-dependent utility functions of the consumers. The research concludes by identifying the various ways in which gleaning such information can lead to more effective auction mechanisms for multi-period operation.
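The NMF step at the heart of both disaggregation algorithms can be sketched generically: factor a non-negative load matrix V into basis load shapes W and activations H. The plain Lee-Seung multiplicative updates below are a textbook illustration, not the dissertation's constrained binary-decomposition or NMF-GMM algorithms.

```python
import numpy as np

# Generic NMF via Lee-Seung multiplicative updates: V ≈ W @ H with
# V (time x meters), W (time x rank) basis load shapes, H (rank x meters)
# activations, all entries non-negative. Illustrative sketch only.
def nmf(V, rank, iters=500, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank))
    H = rng.random((rank, V.shape[1]))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis load shapes
    return W, H
```

The dissertation's algorithms add structure on top of this: L0 constraints forcing shiftable activations toward binary ON/OFF patterns, or a GMM prior fitted by expectation-maximization.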

    Strategic Freedom, Constraint, and Symmetry in One-period Markets with Cash and Credit Payment

    In order to explain in a systematic way why certain combinations of market, financial, and legal structures may be intrinsic to certain capabilities to exchange real goods, we introduce criteria for abstracting the qualitative functions of markets. The criteria involve the number of strategic freedoms the combined institutions, considered as formalized strategic games, present to traders, the constraints they impose, and the symmetry with which those constraints are applied to the traders. We pay particular attention to what is required to make these "strategic market games" well-defined, and to make various solutions computable by the agents within the bounds on information and control they are assumed to have. As an application of these criteria, we present a complete taxonomy of the minimal one-period exchange economies with symmetric information and inside money. A natural hierarchy of market forms is observed to emerge, in which institutionally simpler markets are often found to be more suitable to fewer and less-diversified traders, while the institutionally richer markets only become functional as the size and diversity of their users get large.
    Keywords: Strategic market games, Clearinghouses, Credit evaluation, Default

    Π18.1 – Report on the 1st Scientific Workshop

    This deliverable concerns the project's 1st Scientific Workshop, which took place in Athens on 12-14 September 2012. According to the project's implementation schedule, by the time the workshop was held, mainly the literature-review activities of the research programme had been completed.

    On the 3D electromagnetic quantitative inverse scattering problem: algorithms and regularization

    In this thesis, 3D quantitative microwave imaging algorithms are developed with emphasis on efficiency of the algorithms and quality of the reconstruction. First, a fast simulation tool has been implemented which makes use of a volume integral equation (VIE) to solve the forward scattering problem. The solution of the resulting linear system is done iteratively. To do this efficiently, two strategies are combined. First, the matrix-vector multiplications needed in every step of the iterative solution are accelerated using a combination of the Fast Fourier Transform (FFT) method and the Multilevel Fast Multipole Algorithm (MLFMA). It is shown that this hybrid MLFMA-FFT method is most suited for large, sparse scattering problems. Second, the number of iterations is reduced by using an extrapolation technique to determine suitable initial guesses, which are already close to the solution. This technique combines a marching-on-in-source-position scheme with a linear extrapolation over the permittivity in the form of a Born approximation. It is shown that this forward simulator indeed exhibits a better efficiency. The fast forward simulator is incorporated in an optimization technique which minimizes the discrepancy between measured data and simulated data by adjusting the permittivity profile. A Gauss-Newton optimization method with line search is employed in this dissertation to minimize a least squares data fit cost function with additional regularization. Two different regularization methods were developed in this research. The first regularization method penalizes strong fluctuations in the permittivity by imposing a smoothing constraint, which is a widely used approach in inverse scattering. However, in this thesis, this constraint is incorporated in a multiplicative way instead of in the usual additive way, i.e., its weight in the cost function is reduced with an improving data fit.
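The FFT acceleration of the matrix-vector products rests on the translation invariance of the integral-operator kernel: a matvec with a convolution (Toeplitz-structured) matrix costs O(n log n) via the FFT instead of O(n²). The 1D sketch below stands in for the block-Toeplitz structure of the actual 3D operator and is a generic illustration, not the thesis code.

```python
import numpy as np

# FFT-accelerated matvec with a (lower-triangular Toeplitz) convolution
# operator: y = conv(kernel, x), truncated to the first n samples.
# Zero-padding to length 2n avoids circular wrap-around.
def conv_matvec_fft(kernel, x):
    n = len(x)
    m = 2 * n                              # pad so circular conv = linear conv
    K = np.fft.rfft(kernel, m)
    X = np.fft.rfft(x, m)
    return np.fft.irfft(K * X, m)[:n]      # first n outputs of the linear conv
```

Checking against a direct `np.convolve` confirms the equivalence; for 3D VIE grids the same idea is applied along each axis of the block-Toeplitz operator.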
The second regularization method is Value Picking regularization, which is a new method proposed in this dissertation. This regularization is designed to reconstruct piecewise homogeneous permittivity profiles. Such profiles are hard to reconstruct since sharp interfaces between different permittivity regions have to be preserved, while other strong fluctuations need to be suppressed. Instead of operating on the spatial distribution of the permittivity, as certain existing methods for edge preservation do, it imposes the restriction that only a few different permittivity values should appear in the reconstruction. The permittivity values just mentioned do not have to be known in advance, however, and their number is also updated in a stepwise relaxed VP (SRVP) regularization scheme. Both regularization techniques have been incorporated in the Gauss-Newton optimization framework and yield significantly improved reconstruction quality. The efficiency of the minimization algorithm can also be improved. In every step of the iterative optimization, a linear Gauss-Newton update system has to be solved. This typically is a large system and therefore is solved iteratively. However, these systems are ill-conditioned as a result of the ill-posedness of the inverse scattering problem. Fortunately, the aforementioned regularization techniques allow for the use of a subspace preconditioned LSQR method to solve these systems efficiently, as is shown in this thesis. Finally, the incorporation of constraints on the permittivity through a modified line search path, helps to keep the forward problem well-posed and thus the number of forward iterations low. Another contribution of this thesis is the proposal of a new Consistency Inversion (CI) algorithm. 
    It is based on the same principles as another well-known reconstruction algorithm, the Contrast Source Inversion (CSI) method, which considers the contrast currents (equivalent currents that generate a field identical to the scattered field) as fundamental unknowns together with the permittivity. In the CI method, however, the permittivity variables are eliminated from the optimization and are only reconstructed in a final step. This avoids alternating updates of permittivity and contrast currents, which may result in a faster convergence. The CI method has also been supplemented with VP regularization, yielding the VPCI method. The quantitative electromagnetic imaging methods developed in this work have been validated on both synthetic and measured data, for both homogeneous and inhomogeneous objects, and yield a high reconstruction quality in all these cases. The successful, completely blind reconstruction of an unknown target from measured data, provided by the Institut Fresnel in Marseille, France, demonstrates at once the validity of the forward scattering code, the performance of the reconstruction algorithm, and the quality of the measurements. The reconstruction of a numerical MRI-based breast phantom is encouraging for the further development of biomedical microwave imaging and of microwave breast cancer screening in particular.
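The Gauss-Newton iteration at the core of the data-fit minimization can be sketched on a toy nonlinear least-squares problem: linearize the residual with the Jacobian, solve the linear update system, and step. The exponential model below is purely illustrative and omits the regularization, preconditioning, and line search discussed above.

```python
import numpy as np

# Generic Gauss-Newton iteration for min ||r(p)||^2 with a toy model
# y = exp(a*t) + b, parameters p = (a, b). Illustrative sketch only.
def gauss_newton(t, y, p0, iters=20):
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        a, b = p
        r = np.exp(a * t) + b - y                                   # residual
        J = np.column_stack([t * np.exp(a * t), np.ones_like(t)])   # Jacobian dr/dp
        dp = np.linalg.lstsq(J, -r, rcond=None)[0]   # linear GN update system
        p = p + dp
    return p
```

In the imaging setting this linear update system is large and ill-conditioned, which is why the thesis solves it iteratively with a subspace-preconditioned LSQR method.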