8,940 research outputs found
Towards a Reliable Framework of Uncertainty-Based Group Decision Support System
This study proposes a framework for an Uncertainty-based Group Decision Support System (UGDSS). It provides a platform for multiple-criteria decision analysis covering six aspects: (1) the decision environment, (2) the decision problem, (3) the decision group, (4) decision conflict, (5) decision schemes, and (6) group negotiation. Based on multiple artificial intelligence technologies, the framework provides reliable support for the comprehensive manipulation of applications and advanced decision approaches through the design of an integrated multi-agent architecture.
Comment: Accepted paper in IEEE-ICDM2010; Print ISBN: 978-1-4244-9244-
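The abstract names six aspects coordinated by a multi-agent architecture. A minimal sketch of that idea follows; the class names, message protocol, and per-agent behaviour are invented for illustration and are not the paper's actual design.

```python
# Illustrative sketch only: the agent names mirror the six aspects listed
# in the abstract; everything else (classes, methods) is an assumption.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str

    def handle(self, problem: str) -> str:
        # A real aspect agent would apply its own AI technique here.
        return f"{self.name}: assessed '{problem}'"

@dataclass
class UGDSSCoordinator:
    agents: list = field(default_factory=lambda: [
        Agent(a) for a in (
            "environment", "problem", "group",
            "conflict", "schemes", "negotiation")])

    def decide(self, problem: str) -> list:
        # Collect each aspect agent's assessment for group negotiation.
        return [agent.handle(problem) for agent in self.agents]

reports = UGDSSCoordinator().decide("site selection")
```

The point of the sketch is the dispatch pattern: one coordinator fans a decision problem out to specialised aspect agents and aggregates their assessments.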
Business Model Innovation For Potentially Disruptive Technologies: The Case Of Big Pharmaceutical Firms Accommodating Biotechnologies
Potentially disruptive technologies are challenging to commercialize because they are associated with values new to established firms. Without fitting business model innovation, incumbent firms fail to bring new potentially disruptive technologies to the market. The burgeoning literature on disruptive innovation provides only limited recommendations on the specific business model elements that can serve to accommodate potentially disruptive technologies. To close this research gap, this thesis explores how big pharmaceutical firms accommodated biotechnologies in the design of their business model innovation, in order to discover successful business model design elements.
A qualitative research approach consisting of three studies is adopted. First, following a systematic literature review on business model research in the pharmaceutical industry, 45 papers are selected and qualitatively analyzed. Second, qualitative semi-structured interviews are conducted with 16 experts in big pharmaceutical firms. The transcripts are analyzed using the qualitative content analysis method. Finally, a cluster analysis is conducted to identify the value proposed and delivered by all digital offers of big pharmaceutical firms.
This thesis is the first to describe two business model designs of big pharmaceutical firms from before and since the accommodation of biotechnologies. This research argues that the business model designs recommended for the accommodation of potentially disruptive technologies are collaboration portfolios and digital servitization. First, established firms should devise a portfolio of collaboration formats by diversifying the breadth of partners (including competitors) and by covering all activities in their value chain. Second, incumbent firms should innovate in the value they offer and in how they deliver it to mainstream and new customer segments, through bundling their products with complementary services, especially those that are digitally enabled. Digital services serve to couple customers' needs back to the producer.
Besides advancing theory on disruptive innovation, the recommended business model design elements can be directly used by top midsize pharmaceutical firms (e.g., Fresenius or Servier) and by firms from other industries to commercialize other potentially disruptive technologies. This research supports policy makers in devising strategies to promote the commercialization of potentially disruptive innovations in their specific contexts.
An Architecture for Multilevel Learning and Robotic Control based on Concept Generation
Robot and multi-robot systems are inherently complex systems, for which designing the programs to control their behaviours proves complicated. Moreover, control programs that have been successfully designed for a particular environment and task can become useless if either of these change. It is for this reason that this thesis investigates the use of machine learning within robot and multi-robot systems. It explores an architecture for machine learning, applied to autonomous mobile robots based on dividing the learning task into two individual but interleaved sub-tasks.
The first sub-task consists of finding an appropriate representation on which to base behaviour learning. The thesis explores the viability of using multidimensional classification techniques to generalise the original sensor and motor representations into abstract hierarchies of 'concepts'. To construct concepts, the research uses standard classification techniques and experiments with a novel method of multidimensional data classification based on 'Q-analysis'. Results suggest that this may be a powerful new approach to concept learning.
The second sub-task consists of using the previously acquired concepts as the representation for behaviour learning. The thesis explores whether it is possible to learn robotic behaviours represented using concepts. Results show that it is possible to learn low-level behaviours such as navigation and higher-level ones such as ball passing in robot football.
The thesis concludes that the proposed architecture is viable for robotic behaviour learning and control, and that incorporating Q-analysis-based classification results in a promising new approach to the control of robot and multi-robot systems.
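The two interleaved sub-tasks can be illustrated with a toy sketch: raw sensor readings are first abstracted into discrete 'concepts', and a behaviour is then learned over those concepts with tabular Q-learning. The thresholds, actions, and rewards below are invented for illustration; the thesis's Q-analysis classifier is not reproduced.

```python
# Toy sketch of the architecture's two sub-tasks, under invented
# assumptions: a one-sensor robot, two concepts, two actions.
import random
random.seed(0)

def concept(distance: float) -> str:
    # Sub-task 1: a crude sensor-to-concept abstraction (assumed threshold).
    return "near" if distance < 0.5 else "far"

ACTIONS = ["turn", "forward"]
Q = {(c, a): 0.0 for c in ("near", "far") for a in ACTIONS}

def reward(c: str, a: str) -> float:
    # Invented reward: turn away when near an obstacle, else go forward.
    return 1.0 if (c, a) in {("near", "turn"), ("far", "forward")} else -1.0

for _ in range(500):                   # Sub-task 2: behaviour learning.
    c = concept(random.random())       # simulated range reading
    a = random.choice(ACTIONS)         # exploratory policy
    Q[(c, a)] += 0.1 * (reward(c, a) - Q[(c, a)])

policy = {c: max(ACTIONS, key=lambda a: Q[(c, a)]) for c in ("near", "far")}
```

The learned policy maps each concept, rather than each raw sensor value, to an action, which is the representational point the thesis makes.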
Automatic Multilevel Feature Abstraction in Adaptable Machine Vision Systems
Vision is a complex task which can be accomplished with apparent ease by biological systems, but for which the design of artificial systems is difficult. Although machine vision systems can be successfully designed for a specific task, under certain conditions, they are likely to fail if circumstances change. This was the motivation for the research into ways in which systems can be self-designing and adaptable to new visual tasks. The research was conducted in three vital areas of concern for machine vision systems.
The first area is finding a suitable architecture for forming an appropriate representation for the current task. The research investigated the application of hypernetworks theory to building a multilevel, generally applicable representation through repeated application of a fundamental 'self-similarity' principle: that parts of objects assembled under a particular relation at one level form whole objects at the next. Results show that this is potentially a powerful approach for autonomously generating an adaptable system architecture suitable for multiple visual tasks.
The second area is the autonomous extraction of suitable low-level features, which the research investigated through random generation of minimally-constrained pixel-configurations and algorithmic generation of homogeneous and heterogeneous polygons. The results suggest that, despite the simplicity of the features making them vulnerable to image transformations, these are promising approaches worth developing further.
The third area is automatic feature selection. The research explored management of 'dimensionality' and of 'combinatorial explosion', as well as how to locate relevant features at multiple representation levels, in the context of 'emergence' of structure. Results indicate that this approach can find useful 'intermediate-level' constructs through analysis of the connectivity of the simplices representing objects at higher levels.
The research concludes that the proposed novel approaches to tackling the above issues, in particular the application of hypernetworks to the formation of multilevel representations and the resulting emergence of higher-level structure, are fruitful.
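The connectivity analysis mentioned in the third area rests on a simple notion from Q-analysis: two simplices are q-near when they share a face of dimension q, i.e. at least q+1 vertices, and chains of q-near simplices reveal higher-level structure. A minimal sketch, with invented feature sets standing in for low-level visual features:

```python
# Illustrative sketch of q-nearness between simplices; the 'objects'
# and their feature vertices are invented examples, not the thesis's data.
def q_near(s1: set, s2: set, q: int) -> bool:
    # A shared face of dimension >= q means at least q+1 common vertices.
    return len(s1 & s2) >= q + 1

# Three objects described by the low-level features (vertices) they contain.
simplices = {
    "wheel":  {"round", "rim", "spokes"},
    "clock":  {"round", "rim", "hands"},
    "ladder": {"rungs", "rails"},
}

# wheel and clock are 1-near (they share {"round", "rim"}); ladder is not
# 1-near to either, so it sits in its own connectivity component.
links = [(a, b) for a in simplices for b in simplices
         if a < b and q_near(simplices[a], simplices[b], 1)]
```

Tracing which objects chain together at each q is one way 'intermediate-level' constructs can emerge from the connectivity of higher-level simplices.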
Machine learning methods for discriminating natural targets in seabed imagery
The research in this thesis concerns feature-based machine learning processes and methods for discriminating qualitative natural targets in seabed imagery. The applications considered typically involve time-consuming manual processing stages in an industrial setting. An aim of the research is to assist human analysts by expediting the tedious interpretative tasks using machine methods. Some novel approaches are devised and investigated for solving the application problems.
These investigations are compartmentalised in four coherent case studies linked by common underlying technical themes and methods. The first study addresses pockmark discrimination in a digital bathymetry model. Manual identification and mapping of even a relatively small number of these landform objects is an expensive process. A novel, supervised machine learning approach to automating the task is presented. The process maps the boundaries of ≈ 2000 pockmarks in seconds - a task that would take days for a human analyst to complete. The second case study investigates different feature creation methods for automatically discriminating sidescan sonar image textures characteristic of Sabellaria spinulosa colonisation.
Results from a comparison of several textural feature creation methods on sonar waterfall imagery show that Gabor filter banks yield some of the best results. A further empirical investigation into the filter-bank features created on sonar mosaic imagery leads to the identification of a useful configuration and filter parameter ranges for discriminating the target textures in the imagery. Feature saliency estimation is a vital stage in the machine process. Case study three concerns distance measures for the evaluation and ranking of features on sonar imagery. Two novel consensus methods for creating a more robust ranking are proposed. Experimental results show that the consensus methods can improve robustness over a range of feature parameterisations and various seabed texture classification tasks. The final case study is more qualitative in nature and brings together a number of ideas, applied to the classification of target regions in real-world sonar mosaic imagery.
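The abstract does not specify its consensus methods, but one simple way to fuse several distance-measure rankings into a more robust one is a Borda-style count. The sketch below is an assumption in that spirit; the feature names are hypothetical Gabor-filter orientations.

```python
# Hedged sketch of consensus feature ranking via a Borda count; the
# rankings and feature names are invented, not the thesis's results.
def borda_consensus(rankings: list) -> list:
    # Each ranking lists feature names best-first; a feature earns
    # (n - position) points per ranking, and higher totals rank first.
    n = len(rankings[0])
    scores = {f: 0 for f in rankings[0]}
    for ranking in rankings:
        for pos, feat in enumerate(ranking):
            scores[feat] += n - pos
    return sorted(scores, key=lambda f: -scores[f])

# Three hypothetical saliency rankings of four Gabor-filter features.
rankings = [
    ["gabor_0", "gabor_45", "gabor_90", "gabor_135"],
    ["gabor_45", "gabor_0", "gabor_90", "gabor_135"],
    ["gabor_0", "gabor_90", "gabor_45", "gabor_135"],
]
consensus = borda_consensus(rankings)
```

A consensus of this kind is less sensitive to any single distance measure's quirks, which matches the robustness claim made across feature parameterisations.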
A number of technical challenges arose, and these were surmounted by devising a novel, hybrid unsupervised method. This fully automated machine approach was compared with a supervised approach in an application to the problem of image-based sediment type discrimination. The hybrid unsupervised method produces a plausible class map in a few minutes of processing time. It is concluded that the versatile, novel process should be generalisable to the discrimination of other subjective natural targets in real-world seabed imagery, such as Sabellaria textures and pockmarks (with appropriate features and feature tuning). Further, the full automation of pockmark and Sabellaria discrimination is feasible within this framework.
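The pockmark mapping of the first case study is supervised, but the core mapping step can be illustrated with a much cruder unsupervised sketch: flag cells of a bathymetry grid that are markedly deeper than the mean, then group flagged cells into candidate pockmarks by 4-connectivity. The toy grid and the depth threshold below are invented.

```python
# Minimal illustrative sketch of depression detection in a toy bathymetry
# grid; the thesis's actual supervised method is not reproduced.
grid = [
    [10, 10, 10, 10],
    [10, 14, 14, 10],
    [10, 14, 10, 10],
    [10, 10, 10, 13],
]  # depths in metres; the 14s form one depression, the 13 another

mean = sum(sum(row) for row in grid) / 16
deep = {(i, j) for i in range(4) for j in range(4) if grid[i][j] > mean + 2}

def components(cells: set) -> list:
    # Group flagged cells by 4-connectivity with an iterative flood fill.
    remaining, comps = set(cells), []
    while remaining:
        stack, comp = [remaining.pop()], set()
        while stack:
            i, j = stack.pop()
            comp.add((i, j))
            for n in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if n in remaining:
                    remaining.remove(n)
                    stack.append(n)
        comps.append(comp)
    return comps

pockmarks = components(deep)
```

Each component is one candidate pockmark whose cells delimit its boundary; this is the kind of per-object mapping that the automated process performs in seconds over thousands of landforms.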
Computational Trust in Web Content Quality: A Comparative Evaluation on the Wikipedia Project
The problem of identifying useful and trustworthy information on the World Wide Web is becoming increasingly acute as new tools such as wikis and blogs simplify and democratize publication. It is not hard to predict that in the future direct reliance on this material will expand, and the problem of evaluating the trustworthiness of this kind of content will become crucial. The Wikipedia project represents the most successful and discussed example of such online resources. In this paper we present a method to predict Wikipedia articles' trustworthiness based on computational trust techniques and a deep domain-specific analysis. Our assumption is that a deeper understanding of what in general defines high standards and expertise in domains related to Wikipedia – i.e. content quality in a collaborative environment – mapped onto Wikipedia elements would lead to a complete set of mechanisms to sustain trust in the Wikipedia context. We present a series of experiments. The first is a case study over a specific category of articles; the second is an evaluation over 8,000 articles representing 65% of the overall Wikipedia editing activity. We report encouraging results on the automated evaluation of Wikipedia content using our domain-specific expertise method. Finally, in order to appraise the value added by using domain-specific expertise, we compare our results with the ones obtained with a pre-processed cluster analysis, where complex expertise is mostly replaced by training and automatic classification of common features.
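A feature-based trust prediction of the kind the paper describes can be sketched in miniature: extract a few quality indicators from an article and combine them into a score. The feature set, weights, and example article below are invented for illustration and are not the authors' domain-specific expertise model.

```python
# Purely illustrative sketch of feature-based trust scoring; features,
# weights, and the example article are assumptions, not the paper's model.
def trust_features(text: str, n_editors: int, n_refs: int) -> dict:
    return {
        "length": len(text.split()),  # longer articles tend to be maturer
        "editors": n_editors,         # breadth of the editing community
        "references": n_refs,         # sourcing of the article's claims
    }

def trust_score(f: dict) -> float:
    # Invented linear weighting; a real system would learn these weights.
    return 0.001 * f["length"] + 0.05 * f["editors"] + 0.1 * f["references"]

article = "Alan Turing was a pioneering computer scientist. " * 40
score = trust_score(trust_features(article, n_editors=120, n_refs=30))
```

The contrast the paper draws is between hand-crafted, domain-informed features like these and features found automatically by clustering and classification.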