
    The readiness of systems engineering at a South African engineering organisation

    Abstract: The purpose of this study is to explore and gain a broad perspective on the systems engineering methods currently employed at a South African research council. The aim is to question whether these methods are ideal by comparing them with their alternatives. This paper focuses on the systems engineering methods used within the various competency areas of one of the engineering business units. The suitability of these methods for the nature of work being done in the respective competency areas is also explored. Systems Engineering Management Base Theory (SEMBASE) is used as a framework to identify the gaps in each competency area and to suggest possible improvements. A qualitative research method is used for this study: the data and information gathered from interviews are analysed for emerging patterns that confirm the theory. The findings show that the competency area focusing on Systems Engineering and Enterprise Architecture is adequately aware of the systems engineering processes and is well equipped with the associated tools. The research also reveals that the Technical Competency Areas are not always aware of all the systems engineering processes and lack certain systems engineering tools. Some competency areas use systems engineering indirectly without being aware of it. More training and awareness are required to fill these gaps in the Technical Competency Areas.

    Requirements elicitation through viewpoint control in a natural language environment

    While requirements engineering is about building a conceptual model of part of reality, requirements validation involves assessing that model for correctness, completeness, and consistency. Viewpoint resolution is the process of comparing different views of a given situation and reconciling different opinions. In his doctoral dissertation, Leite [72] proposes viewpoint resolution as a means for early validation of the requirements of large systems. Leite concentrates on the representation of two different views using a special language and on the identification of their syntactic differences. His method relies heavily on redundancy: two viewpoints (systems analysts) should consider the same topic, use the same vocabulary, and use the same rule-based language, which constrains how the rules should be expressed. The quality of the discrepancies that can be detected using his method therefore depends on the quality of the viewpoints. The hypothesis of this thesis is that, independently of the quality of the viewpoints, the number of viewpoints, the language, and the domain, it is possible to detect better-quality discrepancies and to point out problems earlier than Leite's method allows. In the first part of this study, viewpoint-oriented requirements engineering methods are classified into categories based on the kind of multiplicity the methods address: multiple human agents, multiple specification processes, or multiple representation schemes. The classification provides a framework for the comparison and evaluation of viewpoint-based methods. The study then focuses on a critical evaluation of Leite's method, both analytically and experimentally; counterexamples were designed to identify the situations the method cannot handle. The second part of the work concentrates on the development of a method for the very early validation of requirements that improves on Leite's method and pushes the boundaries of the validation process upstream towards fact-finding and downstream towards conflict resolution. The Viewpoint Control Method draws its principles from the fields of uncertainty management and natural language engineering. The basic principle of the method is that, in order to make sense of a domain, one must learn about the information sources and create models of their behaviour. These models are used to assess pieces of information, in natural language, received from the sources and to resolve conflicts between them. The models are then reassessed in the light of feedback from the results of the information evaluation and conflict resolution process. Among the implications of this approach are the very early detection of problems and the treatment of conflict resolution as an explicit and integral part of the requirements engineering process. The method is designed to operate within a large environment called LOLITA that supports relevant aspects of natural language engineering. In the third part of the study, the Viewpoint Control Method is applied and experimentally evaluated using examples and practical case studies. Comparing the proposed approach to Leite's shows that the Viewpoint Control Method has wider scope, detects problems earlier, and points out better-quality problems. The conclusions of the investigation support the view that it is naive to assume the competence or objectivity of any individual source of information.
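
    The abstract describes the core loop of the Viewpoint Control Method only at a high level (model the information sources, weigh their statements, resolve conflicts, then feed the outcome back into the source models). Purely as an illustration of that loop, and not of the thesis's actual algorithm or of LOLITA, the following minimal Python sketch uses a hypothetical reliability score per source and a reliability-weighted vote to choose between conflicting statements; all names and numbers are invented.

```python
from collections import defaultdict

class Source:
    """A viewpoint / information source with an evolving reliability estimate."""
    def __init__(self, name, reliability=0.5):
        self.name = name
        self.reliability = reliability   # belief that this source is correct

    def update(self, was_correct, rate=0.2):
        # Move the reliability estimate toward 1.0 or 0.0 based on feedback.
        target = 1.0 if was_correct else 0.0
        self.reliability += rate * (target - self.reliability)

def resolve(claims):
    """claims: list of (source, statement). Accept the statement with the
    highest summed source reliability; return it with supporters/dissenters."""
    support = defaultdict(float)
    for source, statement in claims:
        support[statement] += source.reliability
    accepted = max(support, key=support.get)
    supporters = [s for s, stmt in claims if stmt == accepted]
    dissenters = [s for s, stmt in claims if stmt != accepted]
    return accepted, supporters, dissenters

# Two analysts (viewpoints) make conflicting statements about the same topic.
a, b = Source("analyst_a", 0.7), Source("analyst_b", 0.4)
claims = [(a, "orders must be confirmed by email"),
          (b, "orders need no confirmation")]
accepted, supporters, dissenters = resolve(claims)
for s in supporters:                      # feedback step: reassess source models
    s.update(was_correct=True)
for s in dissenters:
    s.update(was_correct=False)
print(accepted, round(a.reliability, 2), round(b.reliability, 2))
```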

    Methods of Technical Prognostics Applicable to Embedded Systems

    The main aim of the thesis is to provide a comprehensive overview of technical prognostics, which is applied in condition-based maintenance: maintenance based on continuous monitoring of a device and estimation of its level of degradation or remaining useful life, especially in the field of complex equipment and machinery. Unlike technical diagnostics, which is fairly well mapped and deployed in real systems, technical prognostics is still an evolving discipline with a limited number of real applications, and not all of its methods are sufficiently accurate or applicable to embedded systems. The thesis provides an overview of the basic methods applicable to the prediction of remaining useful life, together with metrics that allow the individual approaches to be compared both in terms of accuracy and in terms of computational and deployment cost. One research core consists of recommendations and a guide for selecting an appropriate prognostic method with regard to prognostic criteria. A second research core presents the particle filtering framework suitable for model-based prognostics, verifies its implementation, and provides a comparison. The main research core is a case study on the highly topical subject of Li-Ion battery health monitoring and prognostics under continuous monitoring. The case study demonstrates the model-based prognostic process and compares possible approaches for estimating both the remaining runtime before the battery is discharged and the capacity fade, while also examining possible influences on battery degradation. The work includes a basic verification of the Li-Ion battery model and a design of the prognostic process, and the proposed methodology is verified on real measured data.
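
    As a rough illustration of particle-filter-based prognostics of the kind the thesis surveys (not its actual implementation), the sketch below tracks a normalised battery capacity with a made-up linear fade model, re-weights and resamples particles against capacity measurements, and reports remaining-useful-life percentiles. All model parameters, noise levels, and thresholds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative degradation model: capacity fades by a roughly constant but
# uncertain amount each cycle (state = [capacity, fade_rate]).
N = 1000                                   # number of particles
particles = np.column_stack([
    rng.normal(1.00, 0.01, N),             # initial capacity (normalised)
    rng.normal(0.0005, 0.0002, N),         # per-cycle fade rate (assumed)
])
weights = np.full(N, 1.0 / N)
EOL = 0.8                                  # end-of-life capacity threshold

def step(p):
    """Propagate particles one cycle with process noise."""
    cap = p[:, 0] - p[:, 1] + rng.normal(0, 0.002, len(p))
    rate = p[:, 1] + rng.normal(0, 5e-5, len(p))
    return np.column_stack([cap, rate])

def update(p, w, measured_capacity, sigma=0.01):
    """Re-weight particles by the likelihood of a capacity measurement,
    then resample when the effective sample size collapses."""
    lik = np.exp(-0.5 * ((p[:, 0] - measured_capacity) / sigma) ** 2)
    w = w * lik
    w /= w.sum()
    if 1.0 / np.sum(w ** 2) < len(w) / 2:
        idx = rng.choice(len(w), len(w), p=w)
        p, w = p[idx], np.full(len(w), 1.0 / len(w))
    return p, w

def remaining_useful_life(p, max_cycles=2000):
    """Roll each particle forward until it crosses EOL; report RUL percentiles
    (assumes roughly uniform weights, e.g. right after resampling)."""
    rul = np.full(len(p), max_cycles)
    for k in range(max_cycles):
        p = step(p)
        crossed = (p[:, 0] <= EOL) & (rul == max_cycles)
        rul[crossed] = k + 1
    return np.percentile(rul, [10, 50, 90])

# One filtering cycle with a synthetic measurement, then an RUL prediction.
particles = step(particles)
particles, weights = update(particles, weights, measured_capacity=0.995)
print(remaining_useful_life(particles))
```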

    Synergistic combination of systems for structural health monitoring and earthquake early warning for structural health prognosis and diagnosis

    Earthquake early warning (EEW) systems are currently operating nationwide in Japan and are in beta testing in California. Such a system detects the initiation of an earthquake using online signals from a seismic sensor network and broadcasts a warning of the predicted location and magnitude a few seconds to a minute or so before the earthquake hits a site. Such a system can be used synergistically with installed structural health monitoring (SHM) systems to enhance pre-event prognosis and post-event diagnosis of structural health. For pre-event prognosis, the EEW information can be used to make probabilistic predictions of the anticipated damage to a structure using seismic loss estimation methodologies from performance-based earthquake engineering. These predictions can support decision-making about activating appropriate mitigation systems, such as stopping traffic from entering a bridge that has a high predicted probability of damage. Since the time between the warning and the arrival of the strong shaking is very short, the probabilistic predictions must be calculated rapidly and the decision-making for the mitigation actions automated. For post-event diagnosis, the SHM sensor data can be used in a Bayesian updating of the probabilistic damage predictions, with the EEW predictions serving as the prior. Appropriate Bayesian methods for SHM have been published. In this paper, we use pre-trained surrogate models (or emulators) based on machine learning methods to make fast damage and loss predictions that are then used in a cost-benefit decision framework for activation of a mitigation measure. A simple illustrative example of an infrastructure application is presented.
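
    The decision logic described above can be illustrated with a small sketch: an EEW-derived prior over discrete damage states, a cost-benefit rule for activating a mitigation action, and a Bayesian update of the damage probabilities once SHM data arrive. The damage states, losses, and likelihoods below are hypothetical placeholders, not values from the paper.

```python
# Pre-event: prior damage probabilities implied by the EEW prediction (assumed).
damage_states = ["none", "moderate", "severe"]
prior = {"none": 0.80, "moderate": 0.15, "severe": 0.05}

# Expected loss (arbitrary monetary units) per damage state if traffic is
# allowed onto the bridge, versus the fixed cost of closing it (assumed).
loss_if_open = {"none": 0.0, "moderate": 2.0e6, "severe": 20.0e6}
cost_of_closure = 0.5e6

def expected_loss(prob):
    return sum(prob[s] * loss_if_open[s] for s in damage_states)

def decide(prob):
    """Activate the mitigation measure when the expected avoided loss
    exceeds the cost of acting."""
    return "close bridge" if expected_loss(prob) > cost_of_closure else "keep open"

def posterior(prior, likelihood):
    """Post-event diagnosis: Bayesian update of the damage probabilities
    using SHM evidence, with the EEW-based prediction as the prior."""
    unnorm = {s: prior[s] * likelihood[s] for s in damage_states}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

print(decide(prior))                                             # pre-event action
shm_likelihood = {"none": 0.2, "moderate": 0.5, "severe": 0.9}   # p(SHM data | state)
print(posterior(prior, shm_likelihood))                          # post-event update
```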

    Early Quantitative Assessment of Non-Functional Requirements

    Non-functional requirements (NFRs) of software systems are a well-known source of uncertainty in effort estimation. Yet, quantitatively approaching NFRs early in a project is hard. This paper takes a step towards reducing the impact of the uncertainty due to NFRs. It offers a solution that incorporates NFRs into the functional size quantification process. The merits of our solution are twofold: first, it lets us quantitatively assess the NFR modeling process early in the project, and second, it lets us generate test cases for NFR verification purposes. We chose the NFR framework as a vehicle to integrate NFRs into the requirements modeling process and to apply quantitative assessment procedures. Our solution proposal also rests on the functional size measurement method COSMIC-FFP, adopted in 2003 as the ISO/IEC 19761 standard. We extend its use for NFR testing purposes, which is an essential step for improving NFR development and testing effort estimates, and consequently for managing the scope of NFRs. We also discuss the advantages of our approach and the open questions related to its design.
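
    For readers unfamiliar with COSMIC-FFP (ISO/IEC 19761), functional size is obtained by counting data movements (Entry, Exit, Read, Write), each worth one COSMIC function point (CFP). The sketch below shows that counting rule and, in the spirit of the paper's proposal (though not its actual procedure), how data movements contributed by an NFR operationalisation would add to the size; the process and movement lists are invented examples.

```python
from collections import Counter

# COSMIC-FFP sizes a functional process by counting its data movements:
# Entry (E), Exit (X), Read (R), Write (W), each worth 1 CFP.
VALID = {"E", "X", "R", "W"}

def cosmic_size(process):
    """Return the functional size in CFP of one functional process,
    given its list of data movements."""
    movements = Counter(m for m in process if m in VALID)
    return sum(movements.values())

# Illustrative example: a 'submit order' process, plus the extra data movements
# contributed by an NFR operationalisation (e.g. an audit-logging requirement).
submit_order = ["E", "R", "W", "X"]               # functional requirement
audit_logging = ["W", "X"]                        # NFR-derived movements (assumed)

print(cosmic_size(submit_order))                  # 4 CFP
print(cosmic_size(submit_order + audit_logging))  # 6 CFP including the NFR
```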

    Finding Top-k Dominance on Incomplete Big Data Using Map-Reduce Framework

    Incomplete data is a major kind of multi-dimensional dataset in which missing values are randomly distributed across the dimensions. Retrieving information from this type of dataset becomes very difficult once it grows huge, and finding top-k dominant (TKD) values in it is a challenging procedure. Some algorithms exist to improve this process, but they are mostly efficient only when dealing with small incomplete datasets. One of the algorithms that makes the TKD query practical is the Bitmap Index Guided (BIG) algorithm. This algorithm strongly improves performance on incomplete data, but it is neither designed for nor capable of finding top-k dominant values in incomplete big data. Several other algorithms have been proposed to answer the TKD query, such as the Skyband Based and Upper Bound Based algorithms, but their performance is also questionable. These earlier algorithms were among the first attempts to apply the TKD query to incomplete data; however, they either performed poorly or were not compatible with incomplete data. This thesis proposes the MapReduced Enhanced Bitmap Index Guided Algorithm (MRBIG) to deal with these issues. MRBIG uses the MapReduce framework to speed up top-k dominance queries on huge incomplete datasets: the framework divides the tasks among several computing nodes that work independently and simultaneously to find the result. This method achieved up to two times faster processing in finding the TKD query result compared with previously presented algorithms.
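
    As a simplified, single-machine emulation of the map and reduce phases (it compares all pairs exhaustively rather than using the bitmap index of BIG/MRBIG, and it uses one common definition of dominance restricted to the dimensions both objects have observed), the following sketch shows how dominance counts can be emitted by mappers and summed by reducers to answer a top-k dominance query on a toy incomplete dataset.

```python
from collections import defaultdict
from itertools import combinations

# Toy incomplete dataset: rows with missing values (None); higher is better.
data = {
    "p1": (5, None, 3),
    "p2": (4, 7, None),
    "p3": (5, 6, 2),
    "p4": (None, 5, 1),
}

def dominates(a, b):
    """a dominates b on the dimensions observed in both: no worse everywhere,
    strictly better somewhere (simplified dominance for incomplete data)."""
    common = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
    return bool(common) and all(x >= y for x, y in common) \
                        and any(x > y for x, y in common)

def map_phase(pairs):
    """Each mapper examines a chunk of object pairs and emits
    (object_id, 1) whenever that object dominates the other."""
    for (ia, a), (ib, b) in pairs:
        if dominates(a, b):
            yield ia, 1
        if dominates(b, a):
            yield ib, 1

def reduce_phase(mapped):
    """Reducers sum the dominance counts per object."""
    scores = defaultdict(int)
    for key, value in mapped:
        scores[key] += value
    return scores

pairs = combinations(data.items(), 2)
scores = reduce_phase(map_phase(pairs))
k = 2
print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k])
```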

    Exemplar Based Deep Discriminative and Shareable Feature Learning for Scene Image Classification

    In order to encode class correlation and class-specific information in image representation, we propose a new local feature learning approach named Deep Discriminative and Shareable Feature Learning (DDSFL). DDSFL aims to hierarchically learn feature transformation filter banks that transform raw pixel image patches into features. The learned filter banks are expected to: (1) encode common visual patterns shared by a flexible number of categories; (2) encode discriminative information; and (3) hierarchically extract patterns at different visual levels. In particular, in each single layer of DDSFL, shareable filters are jointly learned for classes that share similar patterns. The discriminative power of the filters is achieved by enforcing features from the same category to be close while features from different categories are pushed far away from each other. Furthermore, we also propose two exemplar selection methods to iteratively select training data for more efficient and effective learning. Based on the experimental results, DDSFL achieves very promising performance and shows a strong complementary effect to the state-of-the-art Caffe features. Comment: Pattern Recognition, Elsevier, 201
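
    DDSFL's full objective also involves shareable filters, exemplar selection, and hierarchical layers; the sketch below illustrates only the discriminative idea stated in the abstract, i.e. a filter bank trained so that same-class features stay close while different-class features are pushed at least a margin apart. It uses NumPy, random toy data, and a crude numerical gradient, none of which come from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: raw patches x are mapped to features f = W @ x by a filter bank W.
patches = rng.normal(size=(40, 16))        # 40 patches, 16 raw pixel values each
labels = rng.integers(0, 2, size=40)       # two classes
W = rng.normal(scale=0.1, size=(8, 16))    # 8 filters (illustrative size)
margin = 1.0

def discriminative_loss(W, patches, labels):
    """Pull same-class feature pairs together; push different-class pairs
    at least `margin` apart (hinge on squared distance)."""
    feats = patches @ W.T
    loss = 0.0
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            d = np.sum((feats[i] - feats[j]) ** 2)
            if labels[i] == labels[j]:
                loss += d                      # same class: be close
            else:
                loss += max(0.0, margin - d)   # different class: be far
    return loss / (len(feats) * (len(feats) - 1) / 2)

def grad(W, eps=1e-4):
    """Forward-difference numerical gradient of the loss w.r.t. W."""
    g = np.zeros_like(W)
    base = discriminative_loss(W, patches, labels)
    for idx in np.ndindex(W.shape):
        Wp = W.copy()
        Wp[idx] += eps
        g[idx] = (discriminative_loss(Wp, patches, labels) - base) / eps
    return g

# A few gradient steps on the filter bank, printing the discriminative term.
for _ in range(3):
    W -= 0.05 * grad(W)
    print(discriminative_loss(W, patches, labels))
```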

    Advancing Alternative Analysis: Integration of Decision Science.

    Decision analysis, a systematic approach to solving complex problems, offers tools and frameworks to support decision making that are increasingly being applied to environmental challenges. Alternatives analysis is a method used in regulation and product design to identify, compare, and evaluate the safety and viability of potential substitutes for hazardous chemicals. The objective of this work was to assess whether decision science can assist the alternatives analysis decision maker in comparing alternatives across a range of metrics. A workshop was convened that included representatives from government, academia, business, and civil society, with experts in toxicology, decision science, alternatives assessment, engineering, and law and policy. Participants were divided into two groups and prompted with targeted questions; throughout the workshop, the groups periodically came together in plenary sessions to reflect on the other group's findings. We conclude that the further incorporation of decision science into alternatives analysis would advance the ability of companies and regulators to select alternatives to harmful ingredients, and would also advance the science of decision analysis. We advance four recommendations: (1) engaging in the systematic development and evaluation of decision approaches and tools; (2) using case studies to advance the integration of decision analysis into alternatives analysis; (3) supporting transdisciplinary research; and (4) supporting education and outreach efforts.

    A comparative evaluation of dynamic visualisation tools

    Despite their potential applications in software comprehension, it appears that dynamic visualisation tools are seldom used outside the research laboratory. This paper presents an empirical evaluation of five dynamic visualisation tools: AVID, Jinsight, jRMTool, Together ControlCenter diagrams, and the Together ControlCenter debugger. The tools were evaluated on a number of general software comprehension and specific reverse engineering tasks using the HotDraw object-oriented framework. The tasks considered typical comprehension issues, including identification of software structure and behaviour, design pattern extraction, extensibility potential, maintenance issues, functionality location, and runtime load. The results revealed that the level of abstraction employed by a tool affects its success in different tasks, and that the tools were more successful at specific reverse engineering tasks than at general software comprehension activities. No single tool performed well in all tasks, and some tasks were beyond the capabilities of all five tools. The paper concludes with suggestions for improving the efficacy of such tools.