8,940 research outputs found

    Towards a Reliable Framework of Uncertainty-Based Group Decision Support System

    This study proposes a framework for an Uncertainty-based Group Decision Support System (UGDSS). It provides a platform for multiple criteria decision analysis in six aspects: (1) decision environment, (2) decision problem, (3) decision group, (4) decision conflict, (5) decision schemes and (6) group negotiation. Based on multiple artificial intelligence technologies, this framework provides reliable support for the comprehensive manipulation of applications and advanced decision approaches through the design of an integrated multi-agent architecture.
    Comment: Accepted paper in IEEE-ICDM2010; Print ISBN: 978-1-4244-9244-
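
    The abstract describes the architecture only at a high level, but the six-aspect decomposition suggests a natural multi-agent structure. Below is a minimal, hypothetical Python sketch of such a decomposition; the agent names and the coordinator protocol are illustrative assumptions, not the authors' design.

from abc import ABC, abstractmethod

class Agent(ABC):
    """Base class for a decision-support agent handling one aspect."""
    @abstractmethod
    def assess(self, problem: dict) -> dict:
        """Return this agent's contribution to the group decision."""

class EnvironmentAgent(Agent):
    def assess(self, problem):
        # Hypothetical: characterize uncertainty in the decision environment.
        return {"uncertainty": problem.get("noise", 0.5)}

class ConflictAgent(Agent):
    def assess(self, problem):
        # Hypothetical: flag conflicting stakeholder preferences.
        prefs = problem.get("preferences", [])
        return {"conflict": len(set(prefs)) > 1}

class Coordinator:
    """Integrates the agents' outputs into a single report."""
    def __init__(self, agents):
        self.agents = agents

    def decide(self, problem: dict) -> dict:
        report = {}
        for agent in self.agents:
            report.update(agent.assess(problem))
        return report

if __name__ == "__main__":
    coordinator = Coordinator([EnvironmentAgent(), ConflictAgent()])
    print(coordinator.decide({"noise": 0.3, "preferences": ["A", "B"]}))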

    Business Model Innovation For Potentially Disruptive Technologies: The Case Of Big Pharmaceutical Firms Accommodating Biotechnologies

    Potentially disruptive technologies are challenging to commercialize because they are associated with values new to established firms. Without fitting business model innovation, incumbent firms fail to bring new potentially disruptive technologies to the market. The burgeoning literature on disruptive innovation provides only limited recommendations on specific business model elements that can serve to accommodate potentially disruptive technologies.
To close this research gap, this thesis explores how big pharmaceutical firms accommodated biotechnologies in the design of their business model innovation, in order to discover successful business model design elements. A qualitative research approach consisting of three studies is adopted. First, following a systematic literature review on business model research in the pharmaceutical industry, 45 papers are selected and qualitatively analyzed. Second, qualitative semi-structured interviews are conducted with 16 experts in big pharmaceutical firms. The transcripts are analyzed using the qualitative content analysis method. Finally, a cluster analysis is conducted to identify the value proposed and delivered by all digital offers of big pharmaceutical firms. This thesis is the first to describe two business model designs of big pharmaceutical firms from before and after the accommodation of biotechnologies. This research argues that the business model designs recommended for the accommodation of potentially disruptive technologies are collaboration portfolios and digital servitization. First, established firms should devise a portfolio of collaboration formats by diversifying the breadth of partners (including competitors) and by covering all activities in their value chain. Second, incumbent firms should innovate in the value they offer and how they deliver it to mainstream and new customer segments through bundling their products with complementary services, especially those that are digitally enabled. Digital services serve to feed customers' needs back to the producer. Besides advancing theory on disruptive innovation, the recommended business model design elements can be directly used by top midsize pharmaceutical firms (e.g., Fresenius or Servier) and by firms from other industries to commercialize other potentially disruptive technologies. This research supports policy makers in devising strategies for the promotion of the commercialization of potentially disruptive innovations in their specific contexts.
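
    The third study's cluster analysis is only summarized above. As a rough illustration, the sketch below clusters hypothetical digital-offer descriptors with k-means; the features, values, and cluster interpretation are invented for illustration and do not come from the thesis.

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical descriptors of digital offers: columns might encode,
# e.g., degree of service bundling and breadth of partner collaboration.
offers = np.array([
    [0.9, 0.8],  # strong servitization, wide partnering
    [0.8, 0.7],
    [0.2, 0.1],  # mostly product-only offer
    [0.1, 0.2],
])

# Two clusters as a stand-in for distinct business model designs.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(offers)
print(kmeans.labels_)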

    Machine learning methods for discriminating natural targets in seabed imagery

    The research in this thesis concerns feature-based machine learning processes and methods for discriminating qualitative natural targets in seabed imagery. The applications considered typically involve time-consuming manual processing stages in an industrial setting. An aim of the research is to assist human analysts by expediting these tedious interpretative tasks using machine methods. Some novel approaches are devised and investigated for solving the application problems. These investigations are compartmentalised into four coherent case studies linked by common underlying technical themes and methods. The first study addresses pockmark discrimination in a digital bathymetry model. Manual identification and mapping of even a relatively small number of these landform objects is an expensive process. A novel, supervised machine learning approach to automating the task is presented. The process maps the boundaries of ≈ 2000 pockmarks in seconds, a task that would take days for a human analyst to complete. The second case study investigates different feature creation methods for automatically discriminating sidescan sonar image textures characteristic of Sabellaria spinulosa colonisation. Results from a comparison of several textural feature creation methods on sonar waterfall imagery show that Gabor filter banks yield some of the best results. A further empirical investigation into the filter bank features created on sonar mosaic imagery leads to the identification of a useful configuration and filter parameter ranges for discriminating the target textures in the imagery. Feature saliency estimation is a vital stage in the machine process. Case study three concerns distance measures for the evaluation and ranking of features on sonar imagery. Two novel consensus methods for creating a more robust ranking are proposed. Experimental results show that the consensus methods can improve robustness over a range of feature parameterisations and various seabed texture classification tasks. The final case study is more qualitative in nature and brings together a number of ideas, applied to the classification of target regions in real-world sonar mosaic imagery. A number of technical challenges arose, and these were surmounted by devising a novel, hybrid unsupervised method. This fully automated machine approach was compared with a supervised approach in an application to the problem of image-based sediment type discrimination. The hybrid unsupervised method produces a plausible class map in a few minutes of processing time. It is concluded that the versatile, novel process should be generalisable to the discrimination of other subjective natural targets in real-world seabed imagery, such as Sabellaria textures and pockmarks (with appropriate features and feature tuning). Further, the full automation of pockmark and Sabellaria discrimination is feasible within this framework.
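
    The Gabor filter bank features from the second case study can be sketched with standard tools. The following is a minimal illustration using scikit-image; the frequencies and orientations are illustrative defaults, not the configuration or parameter ranges identified in the thesis.

import numpy as np
from skimage.filters import gabor

def gabor_bank_features(image, frequencies=(0.1, 0.2, 0.4),
                        thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean and variance of Gabor response magnitudes over a small bank.

    The frequencies and orientations here are illustrative, not the
    useful parameter ranges reported in the thesis.
    """
    features = []
    for f in frequencies:
        for theta in thetas:
            real, imag = gabor(image, frequency=f, theta=theta)
            magnitude = np.hypot(real, imag)
            features.extend([magnitude.mean(), magnitude.var()])
    return np.array(features)

# Toy usage on a random patch standing in for a sonar texture sample.
patch = np.random.rand(64, 64)
print(gabor_bank_features(patch).shape)  # (24,) = 3 freqs x 4 thetas x 2 stats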

    Computational Trust in Web Content Quality: A Comparative Evaluation on the Wikipedia Project

    The problem of identifying useful and trustworthy information on the World Wide Web is becoming increasingly acute as new tools such as wikis and blogs simplify and democratize publication. It is not hard to predict that in the future direct reliance on this material will expand and the problem of evaluating the trustworthiness of this kind of content will become crucial. The Wikipedia project represents the most successful and most discussed example of such online resources. In this paper we present a method to predict the trustworthiness of Wikipedia articles based on computational trust techniques and a deep domain-specific analysis. Our assumption is that a deeper understanding of what in general defines high standards and expertise in domains related to Wikipedia, i.e. content quality in a collaborative environment, mapped onto Wikipedia elements, would lead to a complete set of mechanisms to sustain trust in the Wikipedia context. We present a series of experiments. The first is a case study of a specific category of articles; the second is an evaluation of 8,000 articles representing 65% of the overall Wikipedia editing activity. We report encouraging results on the automated evaluation of Wikipedia content using our domain-specific expertise method. Finally, in order to appraise the value added by using domain-specific expertise, we compare our results with those obtained with a pre-processed cluster analysis, where complex expertise is mostly replaced by training and automatic classification of common features.
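
    The paper's domain-specific trust method is described above only at a high level. As a hedged illustration of feature-based trust scoring, the sketch below combines a few plausible edit-history features into a score in [0, 1]; the feature set and weights are hypothetical, not those derived in the paper.

def trust_score(article: dict) -> float:
    """Combine simple edit-history features into a [0, 1] trust score.

    The features and weights are hypothetical illustrations; the paper
    derives its own set from a domain-specific analysis.
    """
    editors = article.get("distinct_editors", 0)
    edits = article.get("edit_count", 0)
    refs = article.get("reference_count", 0)
    # Saturating transforms keep each feature in [0, 1].
    f_editors = editors / (editors + 10)
    f_edits = edits / (edits + 50)
    f_refs = refs / (refs + 5)
    # Hypothetical weights; a real system would learn or calibrate these.
    return 0.4 * f_editors + 0.3 * f_edits + 0.3 * f_refs

example = {"distinct_editors": 42, "edit_count": 310, "reference_count": 18}
print(round(trust_score(example), 3))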