Two Case Studies of Subsystem Design for General-Purpose CSCW Software Architectures
This paper discusses subsystem design guidelines for the software architecture of general-purpose computer supported cooperative work systems, i.e., systems that are designed to be applicable in various application areas requiring explicit collaboration support. In our opinion, guidelines for subsystem-level design are rarely given; most guidelines currently available apply at the programming-language level. We extract guidelines from a case study of the redesign and extension of an advanced commercial workflow management system and place them into the context of existing software engineering research. The guidelines are then validated against the design decisions made in the construction of a widely used web-based groupware system. Our approach is based on the well-known distinction between essential (logical) and physical architectures. We show how essential architecture design can be based on a direct mapping of abstract functional concepts, as found in general-purpose systems, to modules in the essential architecture. The essential architecture is then mapped to a physical architecture by applying software clustering and replication to achieve the required distribution and performance characteristics.
Encapsulation of Soft Computing Approaches within Itemset Mining: A Survey
Data mining discovers patterns and trends by extracting knowledge from large databases. Soft computing techniques such as fuzzy logic, neural networks, genetic algorithms, and rough sets exploit tolerance for imprecision and uncertainty to achieve tractable, robust, and low-cost solutions. Fuzzy logic and rough sets are suitable for handling different types of uncertainty. Neural networks provide good learning and generalization. Genetic algorithms provide efficient search for selecting a model from mixed-media data. Data mining refers to information extraction, while soft computing is used for information processing; for effective knowledge discovery from large databases, the two can be merged. Association rule mining (ARM) and itemset mining focus on finding the most frequent itemsets and the corresponding association rules, and on extracting rare itemsets, including temporal and fuzzy concepts in the discovered patterns. This survey paper explores the usage of soft computing approaches in itemset utility mining.
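The frequent-itemset step underlying ARM can be illustrated with a minimal level-wise (Apriori-style) search. The transaction database, item names, and support threshold below are illustrative assumptions, not data from the survey:

```python
# Minimal Apriori-style frequent itemset mining sketch.
# The transactions and threshold are hypothetical toy data.
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
]

def frequent_itemsets(transactions, min_support=0.5):
    """Level-wise search: grow candidate itemsets one item at a time,
    keeping only those whose support meets the threshold."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})

    def support(s):
        return sum(s <= t for t in transactions) / n

    result = {}
    level = [frozenset([i]) for i in items]
    while level:
        frequent = [s for s in level if support(s) >= min_support]
        result.update({s: support(s) for s in frequent})
        # Join step: combine frequent k-itemsets into (k+1)-candidates.
        level = list({a | b for a in frequent for b in frequent
                      if len(a | b) == len(a) + 1})
    return result

for itemset, sup in sorted(frequent_itemsets(transactions).items(),
                           key=lambda kv: (len(kv[0]), sorted(kv[0]))):
    print(sorted(itemset), sup)
```

With this toy data, each single item and each pair reaches the 0.5 support threshold, while the triple {bread, butter, milk} occurs in only one of four baskets and is pruned.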
A micromechanical fracture analysis to investigate the effect of healing particles on the overall mechanical response of a self-healing particulate composite
A computational fracture analysis is conducted on a self-healing particulate composite employing a finite element model of an actual microstructure. The key objective is to quantify the effects of the actual morphology and the fracture properties of the healing particles on the overall mechanical behaviour of the (MoSi2) particle-dispersed Yttria Stabilised Zirconia (YSZ) composite. To simulate fracture, a cohesive zone approach is utilised whereby cohesive elements are embedded throughout the finite element mesh, allowing for arbitrary crack initiation and propagation in the microstructure. The fracture behaviour in terms of the composite strength and the percentage of fractured particles is reported as a function of the mismatch in fracture properties between the healing particles and the matrix, as well as a function of particle/matrix interface strength and fracture energy. The study can be used as a guiding tool for designing an extrinsic self-healing material and understanding the effect of the healing particles on the overall mechanical properties of the material.
From Earth to Orbit: An assessment of transportation options
The report assesses the requirements, benefits, technological feasibility, and roles of Earth-to-Orbit transportation systems and options that could be developed in support of future national space programs. Transportation requirements, including those for Mission-to-Planet Earth, Space Station Freedom assembly and operation, human exploration of space, space science missions, and other major civil space missions, are examined. These requirements are compared with existing, planned, and potential launch capabilities, including expendable launch vehicles (ELV's), the Space Shuttle, the National Launch System (NLS), and new launch options. In addition, the report examines propulsion systems in the context of various launch vehicles. These include the Advanced Solid Rocket Motor (ASRM), the Redesigned Solid Rocket Motor (RSRM), the Solid Rocket Motor Upgrade (SRMU), the Space Shuttle Main Engine (SSME), the Space Transportation Main Engine (STME), existing expendable launch vehicle engines, and liquid-oxygen/hydrocarbon engines. Consideration is given to systems that have been proposed to accomplish the national interests in relatively cost-effective ways, with the recognition that safety and reliability contribute to cost-effectiveness. Related resources, including technology, propulsion test facilities, and manufacturing capabilities, are also discussed.
Model-Driven Engineering in the Large: Refactoring Techniques for Models and Model Transformation Systems
Model-Driven Engineering (MDE) is a software engineering paradigm that
aims to increase the productivity of developers by raising the
abstraction level of software development. It envisions the use of
models as key artifacts during design, implementation and deployment.
From the recent arrival of MDE in large-scale industrial software
development, a trend we refer to as MDE in the large, a set of
challenges emerges: First, models are now developed at distributed
locations, by teams of teams. In such highly collaborative settings, the
presence of large monolithic models gives rise to certain issues, such
as their proneness to editing conflicts. Second, in large-scale system
development, models are created using various domain-specific modeling
languages. Combining these models in a disciplined manner calls for
adequate modularization mechanisms. Third, the development of models is
handled systematically by expressing the involved operations using model
transformation rules. Such rules are often created by cloning, a
practice that leads to performance and maintainability issues.
In this thesis, we contribute three refactoring techniques, each aiming
to tackle one of these challenges. First, we propose a technique to
split a large monolithic model into a set of sub-models. The aim of this
technique is to enable a separation of concerns within models, promoting
a concern-based collaboration style: Collaborators operate on the
submodels relevant for their task at hand. Second, we suggest a
technique to encapsulate model components by introducing modular
interfaces in a set of related models. The goal of this technique is to
establish modularity in these models. Third, we introduce a refactoring
to merge a set of model transformation rules exhibiting a high degree of
similarity. The aim of this technique is to improve maintainability and
performance by eliminating the drawbacks associated with cloning. The
refactoring creates variability-based rules, a novel type of rule
that captures variability through annotations.
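The idea of a variability-based rule, merging near-identical clone rules into one rule whose elements carry presence annotations, can be sketched as follows. The class, rule names, and annotation scheme are illustrative assumptions, not the thesis' actual metamodel:

```python
# Hypothetical sketch: two clone rules that differ in a single element
# are merged into one variability-based rule. Elements annotated BASE
# are shared by all variants; others belong to one named variant only.
BASE = "base"

class VBRule:
    """A transformation rule with presence annotations on its elements."""
    def __init__(self, name):
        self.name = name
        self.elements = []  # list of (element, annotation) pairs

    def add(self, element, annotation=BASE):
        self.elements.append((element, annotation))

    def flatten(self, variant):
        """Recover one original clone rule: keep the shared base
        elements plus those annotated with the requested variant."""
        return [e for e, a in self.elements if a in (BASE, variant)]

# Merge of the hypothetical clones "renameClass" / "renameInterface":
rule = VBRule("renameClassifier")
rule.add("match: Classifier c with c.name == old")
rule.add("update: c.name := new")
rule.add("check: c is a Class", annotation="class")
rule.add("check: c is an Interface", annotation="interface")

print(rule.flatten("class"))
```

Flattening for one variant reproduces the corresponding original clone, so the merged rule loses no behaviour while the shared elements exist only once, which is the maintainability gain the abstract describes.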
The refactoring techniques contributed in this work help to reduce the
manual effort during the refactoring of models and transformation rules
to a large extent. As indicated in a series of realistic case studies,
the output produced by the techniques is comparable to, or in the case
of transformation rules partly even preferable to, the result of manual
refactoring, yielding a promising outlook on the applicability in
real-world settings.
Recent Advances in Encapsulation, Protection, and Oral Delivery of Bioactive Proteins and Peptides using Colloidal Systems
There are many areas in medicine and industry where it would be advantageous to orally deliver bioactive proteins and peptides (BPPs), including ACE inhibitors, antimicrobials, antioxidants, hormones, enzymes, and vaccines. A major challenge in this area is that many BPPs degrade during storage of the product or during passage through the human gut, thereby losing their activity. Moreover, many BPPs have undesirable taste profiles (such as bitterness or astringency), which makes them unpleasant to consume. These challenges can often be overcome by encapsulating them within colloidal particles that protect them from any adverse conditions in their environment, but then release them at the desired site-of-action, which may be inside the gut or body. This article begins with a discussion of BPP characteristics and the hurdles involved in their delivery. It then highlights the characteristics of colloidal particles that can be manipulated to create effective BPP-delivery systems, including particle composition, size, and interfacial properties. The factors impacting the functional performance of colloidal delivery systems are then highlighted, including their loading capacity, encapsulation efficiency, protective properties, retention/release properties, and stability. Different kinds of colloidal delivery systems suitable for encapsulation of BPPs are then reviewed, such as microemulsions, emulsions, solid lipid particles, liposomes, and microgels. Finally, some examples of the use of colloidal delivery systems for delivery of specific BPPs are given, including hormones, enzymes, vaccines, antimicrobials, and ACE inhibitors. Emphasis is placed on the development of food-grade colloidal delivery systems, which could be used in functional or medical food applications. The knowledge presented should facilitate the design of more effective vehicles for the oral delivery of bioactive proteins and peptides.
Introducing mobile edge computing capabilities through distributed 5G Cloud Enabled Small Cells
Current trends in broadband mobile networks point towards placing different capabilities at the edge of the mobile network in a centralised way. On the one hand, the split of the eNB between baseband processing units and remote radio headers makes it possible to process some of the protocols in centralised premises, likely with virtualised resources. On the other hand, mobile edge computing makes use of processing and storage capabilities close to the air interface in order to deploy optimised services with minimum delay. The confluence of both trends is a hot topic in the definition of future 5G networks. The full centralisation of both technologies in cloud data centres imposes stringent requirements on the fronthaul connections in terms of throughput and latency. Therefore, all those cells with limited network access would not be able to offer these types of services. This paper proposes a solution for these cases, based on the placement of processing and storage capabilities close to the remote units, which is especially well suited for the deployment of clusters of small cells. The proposed cloud-enabled small cells include a highly efficient microserver with a limited set of virtualised resources offered to the cluster of small cells. As a result, a light data centre is created and commonly used for deploying centralised eNB and mobile edge computing functionalities. The paper covers the proposed architecture, with special focus on the integration of both aspects, and possible scenarios of application.