    Developing a model and a language to identify and specify the integrity constraints in spatial datacubes

    Data quality in spatial datacubes is important, since these data are used as a basis for decision making in large organizations. Indeed, poor data quality in these cubes could lead to poor decisions. Integrity constraints play a key role in improving the logical consistency of any database, one of the main elements of data quality. Different spatial datacube models have been proposed in recent years, but none explicitly includes integrity constraints. As a result, the integrity constraints of spatial datacubes are handled in a non-systematic, pragmatic way, which makes the process of verifying data consistency in spatial datacubes inefficient. This thesis provides a theoretical framework for identifying integrity constraints in spatial datacubes, as well as a formal language for specifying them. To do so, we first proposed a formal model for spatial datacubes that describes their different components. Based on this model, we then identified and categorized the different types of integrity constraints in spatial datacubes. Furthermore, since spatial datacubes typically contain both spatial and temporal data, we proposed a classification of integrity constraints for databases dealing with space and time. We then presented a formal language for specifying the integrity constraints of spatial datacubes. This language is based on a controlled natural language hybridized with pictograms. Several examples of spatial datacube integrity constraints are defined using this language. Spatial datacube designers (analysts) can use the proposed framework to identify integrity constraints and specify them at the design stage of spatial datacubes. Moreover, the proposed formal language for specifying integrity constraints is close to the way end users express their integrity constraints. Consequently, using this language, end users can verify and validate the integrity constraints defined by the analyst at the design stage.
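
    To make the constraint idea concrete, the following is a minimal sketch, assuming a hypothetical `SpatialMember` data model and the shapely library, of how one common spatial integrity constraint (a child member's geometry must lie within its parent's) could be checked programmatically. It is illustrative only, not the pictogram-based specification language the thesis proposes.

```python
# Minimal sketch of a spatial-hierarchy integrity constraint:
# every member of a spatial dimension level must lie within the
# geometry of its parent member (e.g., a city inside its region).
# The data model is a hypothetical illustration, not the thesis's
# formal model; geometry tests use the shapely library.
from dataclasses import dataclass
from shapely.geometry import Polygon

@dataclass
class SpatialMember:
    name: str
    geometry: Polygon
    parent: "SpatialMember | None" = None

def check_containment(member: SpatialMember, tolerance: float = 0.0) -> bool:
    """Integrity constraint: a child member's geometry must be
    covered by its parent's geometry (up to a small tolerance)."""
    if member.parent is None:
        return True
    return member.parent.geometry.buffer(tolerance).covers(member.geometry)

region = SpatialMember("Quebec", Polygon([(0, 0), (10, 0), (10, 10), (0, 10)]))
city = SpatialMember("Quebec City",
                     Polygon([(2, 2), (4, 2), (4, 4), (2, 4)]), parent=region)

assert check_containment(city)  # consistent: the city lies inside its region
```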

    Integritätsbedingungen für Geodaten (Integrity Constraints for Geodata)

    [no abstract]

    HybridMDSD: Multi-Domain Engineering with Model-Driven Software Development using Ontological Foundations

    Software development is a complex task. Executable applications comprise a multitude of diverse components that are developed with various frameworks, libraries, or communication platforms. The technical complexity in development ties up resources, hampers efficient problem solving, and thus increases the overall cost of software production. Another significant challenge in market-driven software engineering is the variety of customer needs. It necessitates a maximum of flexibility in software implementations to facilitate the deployment of different products that are based on one single core. To reduce technical complexity, the paradigm of Model-Driven Software Development (MDSD) facilitates the abstract specification of software based on modeling languages. Corresponding models are used to generate actual programming code without the need for creating manually written, error-prone assets. Modeling languages that are tailored towards a particular domain are called domain-specific languages (DSLs). Domain-specific modeling (DSM) brings technical solutions closer to the intentional problems and fosters the unfolding of specialized expertise. To cope with feature diversity in applications, the Software Product Line Engineering (SPLE) community provides means for the management of variability in software products, such as feature models and appropriate tools for mapping features to implementation assets. Model-driven development, domain-specific modeling, and the dedicated management of variability in SPLE are vital for the success of software enterprises. Yet, these paradigms exist in isolation and need to be integrated in order to exhaust the advantages of every single approach. In this thesis, we propose a way to do so. We introduce the paradigm of Multi-Domain Engineering (MDE), which means model-driven development with multiple domain-specific languages in variability-intensive scenarios. MDE strongly emphasizes the advantages of MDSD with multiple DSLs as a necessity for efficiency in software development, and treats the paradigm of SPLE as an indispensable means to achieve a maximum degree of reuse and flexibility. We present HybridMDSD as our solution approach to implement the MDE paradigm. The core idea of HybridMDSD is to capture the semantics of particular DSLs based on properly defined semantics for software models contained in a central upper ontology. The resulting semantic foundation can then be used to establish references between arbitrary domain-specific models (DSMs); sophisticated instance-level reasoning ensures integrity and allows particular change adaptation scenarios to be handled. Moreover, we present an approach to automatically generate composition code that integrates generated assets from separate DSLs. All necessary development tasks are arranged in a comprehensive development process. Finally, we validate the introduced approach with a profound prototypical implementation and an industrial-scale case study.
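
    The ontological grounding at the heart of this idea can be sketched briefly. The following toy example, assuming the rdflib library and invented namespaces and concept names, maps metamodel elements of two DSLs onto a shared upper ontology and validates a cross-DSL reference against that grounding; it is an illustration of the principle, not the HybridMDSD implementation.

```python
# Illustrative sketch (not the HybridMDSD implementation): elements
# from two domain-specific languages are mapped to concepts of a
# shared upper ontology, and a cross-DSL reference is accepted only
# if both ends are grounded in compatible ontology concepts.
from rdflib import Graph, Namespace, RDF, RDFS

UPPER = Namespace("http://example.org/upper#")   # hypothetical upper ontology
DSL_A = Namespace("http://example.org/dsl-a#")   # e.g., a data-modeling DSL
DSL_B = Namespace("http://example.org/dsl-b#")   # e.g., a UI DSL

g = Graph()
# Upper ontology: UI widgets may present entities.
g.add((UPPER.Entity, RDF.type, RDFS.Class))
g.add((UPPER.Widget, RDF.type, RDFS.Class))
# Semantic grounding of the DSL metamodel elements.
g.add((DSL_A.Customer, RDFS.subClassOf, UPPER.Entity))
g.add((DSL_B.CustomerForm, RDFS.subClassOf, UPPER.Widget))

def grounded_in(g: Graph, concept, upper_class) -> bool:
    """True if `concept` is (transitively) a subclass of `upper_class`."""
    for parent in g.objects(concept, RDFS.subClassOf):
        if parent == upper_class or grounded_in(g, parent, upper_class):
            return True
    return False

# A reference from a widget to an entity is valid only if both ends
# are grounded in the expected upper-ontology concepts.
ok = grounded_in(g, DSL_B.CustomerForm, UPPER.Widget) and \
     grounded_in(g, DSL_A.Customer, UPPER.Entity)
print("cross-DSL reference consistent:", ok)
```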

    High-performance online spatial and temporal aggregations on multi-core CPUs and many-core GPUs

    With the increasing availability of locating and navigation technologies on portable wireless devices, huge amounts of location data are being captured at ever growing rates. Spatial and temporal aggregations in an Online Analytical Processing (OLAP) setting for large-scale ubiquitous urban sensing data play an important role in understanding urban dynamics and facilitating decision making. Unfortunately, existing spatial, temporal and spatiotemporal OLAP techniques are mostly based on traditional computing frameworks, i.e., disk-resident systems on uniprocessors using serial algorithms, which makes them incapable of handling large-scale data on the parallel hardware architectures that already come with commodity computers. In this study, we report our designs, implementations and experiments on developing a data management platform and a set of parallel techniques to support high-performance online spatial and temporal aggregations on multi-core CPUs and many-core Graphics Processing Units (GPUs). Our experiment results show that we are able to spatially associate nearly 170 million taxi pickup location points with their nearest street segments among 147,011 candidates in about 5-25 s on both an Nvidia Quadro 6000 GPU device and dual Intel Xeon E5405 quad-core CPUs when their Vector Processing Units (VPUs) are utilized for computing-intensive tasks. After spatially associating points with road segments, spatial, temporal and spatiotemporal aggregations are reduced to relational aggregations and can be processed in the order of a fraction of a second on both GPUs and multi-core CPUs. In addition to demonstrating the feasibility of building a high-performance OLAP system for processing large-scale taxi trip data for real-time, interactive data explorations, our work also opens paths to achieving even higher OLAP query efficiency for large-scale applications through integrating domain-specific data management platforms, novel parallel data structures and algorithm designs, and hardware-architecture-friendly implementations.
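
    The two-stage pipeline described above can be sketched in a vectorized style. The following, assuming NumPy/pandas and synthetic data, with segment midpoints standing in for true point-to-segment distance, mirrors stage 1 (nearest-segment association, the part the paper offloads to GPUs/VPUs) and stage 2 (aggregation reduced to a relational group-by).

```python
# Sketch of the two stages in a data-parallel style with NumPy/pandas
# rather than CUDA: (1) associate each pickup point with its nearest
# street segment, (2) reduce the spatiotemporal aggregation to a
# relational group-by. All data is synthetic; distance is computed to
# segment midpoints for brevity, not to the true segment geometry.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
points = rng.uniform(0, 100, size=(10_000, 2))    # pickup locations
segments = rng.uniform(0, 100, size=(500, 2))     # segment midpoints
hours = rng.integers(0, 24, size=10_000)          # pickup hour of day

# Stage 1: brute-force nearest-segment search, vectorized over all
# point/segment pairs (a GPU implementation parallelizes this same loop).
d2 = ((points[:, None, :] - segments[None, :, :]) ** 2).sum(axis=2)
nearest = d2.argmin(axis=1)

# Stage 2: spatiotemporal aggregation is now a relational group-by.
trips = pd.DataFrame({"segment": nearest, "hour": hours})
counts = trips.groupby(["segment", "hour"]).size().rename("pickups")
print(counts.head())
```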

    A Design Theory for Secure Semantic E-Business Processes (SSEBP)

    This dissertation develops and evaluates a design theory. We follow the design science approach (Hevner et al., 2004) to answer the following research question: "How can we formulate a design theory to guide the analysis and design of Secure Semantic eBusiness Processes (SSeBP)?" Goals of the SSeBP design theory include: (i) unambiguously represent information and knowledge resources involved in eBusiness processes to solve semantic conflicts and integrate heterogeneous information systems; (ii) analyze and model business processes that include access control mechanisms to prevent unauthorized access to resources; and (iii) facilitate the coordination of eBusiness process activities and resources by modeling their dependencies. Business process modeling techniques such as Business Process Modeling Notation (BPMN) (BPMI, 2004) and UML Activity Diagrams (OMG, 2003) lack theoretical foundations and are difficult to verify for correctness and completeness (Soffer and Wand, 2007). The current literature on secure information systems design methods is theoretically underdeveloped and considers security as a non-functional requirement and an afterthought (Siponen et al., 2006; Mouratidis et al., 2005). The SSeBP design theory is one of the first attempts at providing theoretically grounded guidance for designing richer secure eBusiness processes for secure and coordinated seamless knowledge exchange among business partners in a value chain. The SSeBP design theory allows for the inclusion of non-repudiation mechanisms in the analysis and design of eBusiness processes, which lays the foundations for auditing and compliance with regulations such as Sarbanes-Oxley. The SSeBP design theory is evaluated through a rigorous multi-method evaluation approach including descriptive, observational, and experimental evaluation. First, the SSeBP design theory is validated by modeling the business processes of an industry standard, the Collaborative Planning, Forecasting, and Replenishment (CPFR) approach. Our model enhances CPFR by incorporating security requirements in the process model, which is critically lacking in the current CPFR technical guidelines. Second, we model the demand forecasting and capacity planning business processes of two large organizations to evaluate the efficacy and utility of the SSeBP design theory in capturing the realistic requirements and complex nuances of real inter-organizational business processes. Finally, we empirically evaluate SSeBP against enhanced Use Cases (Siponen et al., 2006) and UML activity diagrams for informational equivalence (Larkin and Simon, 1987) and for its utility in generating situational awareness (Endsley, 1995) of the security and coordination requirements of a business process. The specific contribution of this dissertation is a design theory (SSeBP) that presents a novel and holistic approach, contributing to the IS knowledge base by filling an existing research gap in the design of information systems to support secure and coordinated business processes. The proposed design theory provides practitioners with the meta-design and the design process, including the system components and principles, to guide the analysis and design of eBusiness processes that are secure and coordinated.
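
    As a loose illustration of goals (ii) and (iii) together with non-repudiation, here is a minimal sketch with invented roles, activity names, and a hash-chained audit log standing in for real cryptographic non-repudiation mechanisms; it is not the SSeBP meta-design itself.

```python
# Illustrative sketch (not the SSeBP meta-design): a process activity
# checks a role-based access rule before executing and appends a
# hash-chained audit record, a lightweight stand-in for the
# cryptographic non-repudiation mechanisms the theory calls for.
import hashlib, json, time
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    allowed_roles: set[str]
    audit_log: list[dict] = field(default_factory=list)

    def execute(self, actor: str, role: str) -> None:
        # Goal (ii): access control guards the activity.
        if role not in self.allowed_roles:
            raise PermissionError(f"{actor} ({role}) may not perform {self.name}")
        record = {"actor": actor, "activity": self.name, "ts": time.time()}
        # Non-repudiation flavor: each record's digest chains to the last.
        prev = self.audit_log[-1]["digest"] if self.audit_log else ""
        record["digest"] = hashlib.sha256(
            (prev + json.dumps(record, sort_keys=True)).encode()).hexdigest()
        self.audit_log.append(record)

forecast = Activity("submit_demand_forecast", {"planner"})
forecast.execute("alice", "planner")    # allowed, logged with chained digest
# forecast.execute("bob", "auditor")    # would raise PermissionError
```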

    Towards an Assembly Reference Ontology for Assembly Knowledge Sharing

    Information and Communication Technologies (ICT) have been increasingly used to support decision making in manufacturing organizations; however, they lack the ability to fully support the capture and sharing of specific domain knowledge across multiple domains. The ability of ICT-based systems to share knowledge is impeded by the semantic conflicts arising from loosely defined meanings and intents of the participating concepts. This research work exploits the concept of formal ontologies to rigorously define the semantics of domain concepts to support knowledge sharing within the assembly domain. In this thesis, a novel research framework has been proposed in the form of an assembly reference ontology which can provide a common semantic base to support knowledge sharing across the assembly design and assembly process planning domains. The framework consists of a set of key reference concepts identified to represent assembly domain knowledge. These concepts have been specialized from the most generic level to the most specialized level and have been formally defined to support the capture and sharing of assembly knowledge. The proposed framework also supports the creation of application-specific ontologies by providing them with a common semantic base. The research concept has been experimentally investigated using a selected set of assembly reference concepts to formally represent and relate assembly design and assembly process planning knowledge. The results of the experiments verify that the implemented ontology enables the system to understand the semantics of concepts and supports knowledge sharing across the assembly design and assembly process planning domains. The experimental results also show that the proposed framework can support the development of a range of application-specific ontologies.
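
    The reference-concept idea can be sketched in a few lines: both the design view and the process-planning view specialize a shared concept, so knowledge expressed in either view is anchored to the same semantic base. The concept names and attributes below are illustrative assumptions, not the thesis's actual reference ontology.

```python
# Minimal sketch of the reference-concept idea: both the assembly-design
# view and the process-planning view specialize one shared reference
# concept, so a joint query can traverse either view. The names are
# illustrative, not the thesis's actual ontology.
class AssemblyConcept:                      # most generic reference concept
    pass

class Joint(AssemblyConcept):               # shared semantic base
    def __init__(self, part_a: str, part_b: str):
        self.part_a, self.part_b = part_a, part_b

class DesignedJoint(Joint):                 # assembly-design specialization
    def __init__(self, part_a, part_b, fit_tolerance_mm: float):
        super().__init__(part_a, part_b)
        self.fit_tolerance_mm = fit_tolerance_mm

class PlannedJoint(Joint):                  # process-planning specialization
    def __init__(self, part_a, part_b, operation: str, station: str):
        super().__init__(part_a, part_b)
        self.operation, self.station = operation, station

design = DesignedJoint("bracket", "frame", fit_tolerance_mm=0.05)
plan = PlannedJoint("bracket", "frame", operation="bolting", station="S3")

# Knowledge sharing: both views agree they are talking about the same Joint.
assert isinstance(design, Joint) and isinstance(plan, Joint)
```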

    Pertanika Journal of Science & Technology

    Towards specification formalisms for data warehouse systems design

    Several studies have been conducted on formal methods; however, few of these studies have used formal methods in the data warehousing area, specifically in system development. This may be due to several reasons, such as the fact that few experts know how to use them. Formal methods apply mathematical notations to software development. Despite the advantages of using formal methods in software development, their application in the data warehousing area has been limited compared with the use of informal (natural language) and semi-formal notations. This research aims to determine the extent to which formal methods may mitigate the failures that most often occur in the development of data warehouse systems. As part of this research, an enhanced framework was proposed to facilitate the use of formal methods in the development of such systems. The enhanced framework focuses mainly on the requirements definition, the Unified Modelling Language (UML) constructs, the Star model, and formal specification. A medium-sized case study of a data mart was used to validate the enhanced framework. This dissertation also discusses the object-orientation paradigm and UML notations. The requirements specification of a data warehouse system is presented in both natural language and formal notation to show how a formal specification may be derived from natural language, through UML structures, and thereafter into a Z specification, using an established strategy as a guideline for constructing the Z specification.
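
    As a flavor of the final step of that strategy, here is a small hand-written Z schema in the LaTeX fuzz/zed style for a hypothetical data-mart requirement; the schema, its names, and its invariants are illustrative assumptions, not taken from the dissertation.

```latex
% Illustrative Z schema (fuzz/zed LaTeX style), not taken from the
% dissertation: a data mart's sales facts keyed by sale identifiers,
% with the invariant that every recorded sale refers to a known store.
\begin{schema}{SalesMart}
  sales : \power SALEID \\
  store\_of : SALEID \pfun STOREID \\
  stores : \power STOREID
\where
  \dom store\_of = sales \\
  \ran store\_of \subseteq stores
\end{schema}
```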

    Automating the multidimensional design of data warehouses

    Previous experiences in the data warehouse field have shown that the data warehouse multidimensional conceptual schema must be derived from a hybrid approach: i.e., by considering both the end-user requirements and the data sources as first-class citizens. As in any other system, requirements guarantee that the system devised meets the end-user necessities. In addition, since the data warehouse design task is a reengineering process, it must consider the underlying data sources of the organization: (i) to guarantee that the data warehouse can be populated from data available within the organization, and (ii) to allow the end-user to discover unknown additional analysis capabilities. Currently, several methods for supporting the data warehouse modeling task have been proposed. However, they suffer from some significant drawbacks. In short, requirement-driven approaches assume that requirements are exhaustive (and therefore do not consider that the data sources may contain alternative interesting evidence for analysis), whereas data-driven approaches (i.e., those leading the design task from a thorough analysis of the data sources) rely on discovering as much multidimensional knowledge as possible from the data sources. As a consequence, data-driven approaches generate too many results, which mislead the user. Furthermore, automation of the design task is essential in this scenario, as it removes the dependency on an expert's ability to properly apply the chosen method, and the need to analyze the data sources, which is a tedious and time-consuming task (and can be unfeasible when working with large databases). Current automatable methods follow a data-driven approach, whereas current requirement-driven approaches overlook process automation, since they tend to work with requirements at a high level of abstraction. Indeed, this situation repeats itself in the data-driven and requirement-driven stages of current hybrid approaches, which suffer from the same drawbacks as pure data-driven or requirement-driven approaches. In this thesis we introduce two different approaches for automating the multidimensional design of the data warehouse: MDBE (Multidimensional Design Based on Examples) and AMDO (Automating the Multidimensional Design from Ontologies). Both approaches were devised to overcome the limitations from which current approaches suffer. Importantly, our approaches start from opposite initial assumptions, but both consider the end-user requirements and the data sources as first-class citizens. 1. MDBE follows a classical approach, in which the end-user requirements are well known beforehand. This approach benefits from the knowledge captured in the data sources, but guides the design task according to the requirements; consequently, it is able to work with and handle semantically poorer data sources. In other words, given high-quality end-user requirements, we can guide the process from the knowledge they contain and overcome semantically poor data sources. 2. AMDO, as its counterpart, assumes a scenario in which the available data sources are semantically richer. Thus, the proposed approach is guided by a thorough analysis of the data sources, which is properly adapted to shape the output result according to the end-user requirements. In this context, having high-quality data sources, we can overcome the lack of expressive end-user requirements. Importantly, our methods establish a combined and comprehensive framework that can be used to decide, according to the inputs provided in each scenario, which is the best approach to follow. For example, we cannot follow the same approach in a scenario where the end-user requirements are clear and well known as in a scenario in which the end-user requirements are not evident or cannot be easily elicited (e.g., this may happen when the users are not aware of the analysis capabilities of their own sources). Interestingly, the need to have requirements beforehand is softened by having semantically rich data sources; lacking that, requirements gain relevance for extracting the multidimensional knowledge from the sources. Thus, we claim to provide two approaches whose combination is exhaustive with regard to the scenarios discussed in the literature.
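
    To give a feel for what automating the multidimensional design means in the simplest case, here is a toy heuristic, assuming invented table names and a plain foreign-key rule, that proposes fact and dimension candidates from source-schema metadata; MDBE's example-driven and AMDO's ontology-driven discovery are far richer than this sketch.

```python
# Toy heuristic, not MDBE or AMDO: given foreign-key relationships
# among source tables, tables that are only referenced (never
# referencing) are proposed as dimensions, and tables holding the
# references as facts. Table names and the rule are illustrative.
from collections import defaultdict

# Source schema: table -> tables it references via foreign keys.
foreign_keys = {
    "sales":    ["product", "store", "date"],
    "shipment": ["product", "store"],
    "product":  [],
    "store":    [],
    "date":     [],
}

referenced = defaultdict(int)
for table, targets in foreign_keys.items():
    for t in targets:
        referenced[t] += 1

facts = [t for t, fks in foreign_keys.items() if fks]
dimensions = [t for t in foreign_keys if not foreign_keys[t] and referenced[t]]

for fact in facts:
    print(f"candidate star: fact={fact}, dimensions={foreign_keys[fact]}")
```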