
    Navigation of mobile robots using artificial intelligence technique.

    The ability to acquire a representation of the spatial environment and the ability to localize within it are essential for successful navigation in a priori unknown environments. This document presents a computer vision method and related algorithms for the navigation of a robot in a static environment. Our environment is a simple white-coloured area containing black obstacles and the robot, which carries an identification mark (an orange circle and rectangle) that indicates its heading. A camera captures this environment and sends the image to a desktop computer over a data cable. The image is converted from JPEG to binary format and then processed in MATLAB. The data produced by this program serve as input to a second program, which drives the robot's motors over a wireless link; the robot then attempts to reach its destination while avoiding obstacles in its path. The algorithm presented in this paper uses the distance transform methodology to generate paths for the robot to execute: it approximately finds the fastest route for a vehicle to travel from a starting point to a destination point on a digital plane map, avoiding obstacles along the way. In our experimental setup the camera used is a SONY HANDYCAM, which captures the image and locates both the robot (the starting point) and its destination in the plane. The destination in our experiments is a table tennis ball, but it could be any other entity, such as a single person, a combat unit, or a vehicle.
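The distance-transform planning described above can be illustrated with a small sketch. This is not the authors' MATLAB implementation; it is a minimal Python illustration assuming a 4-connected occupancy grid, a breadth-first distance transform seeded at the goal, and steepest-descent path extraction:

```python
from collections import deque

def distance_transform(grid, goal):
    """BFS from the goal cell: each reachable free cell gets its step
    distance to the goal. grid: 2D list, 0 = free, 1 = obstacle."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    dist[goal[0]][goal[1]] = 0
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and dist[nr][nc] is None):
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

def descend(dist, start):
    """Follow the steepest descent of the distance field from start
    until the goal (distance 0) is reached."""
    path = [start]
    r, c = start
    while dist[r][c] != 0:
        candidates = [(r + dr, c + dc)
                      for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        r, c = min((n for n in candidates
                    if 0 <= n[0] < len(dist) and 0 <= n[1] < len(dist[0])
                    and dist[n[0]][n[1]] is not None),
                   key=lambda n: dist[n[0]][n[1]])
        path.append((r, c))
    return path
```

Because every free cell reachable from the goal has a neighbour one step closer to it, the descent cannot get stuck and always terminates at the goal.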

    Management: A continuing bibliography with indexes

    This bibliography lists 344 reports, articles, and other documents introduced into the NASA scientific and technical information system in 1978

    Design for Support in the Initial Design of Naval Combatants

    The decline of defence budgets, coupled with the escalation of warship procurement costs, has contributed significantly to fleet downsizing in most major western navies despite little reduction in overall commitments, so that extra capability and reliability are required per ship. Moreover, the tendency of governments to focus on short-term strategies and expenditure has meant that aspects of naval ship design that are difficult to quantify, such as supportability, are often treated as secondary issues and allocated insufficient attention in Early Stage Design. Tackling this requires innovation both in the design process and in the development of individual ship designs, especially at the crucial early design stages. Such novelty is made possible by major developments in computer technology and by adopting an architecturally-orientated approach to early stage ship design. Existing technical solutions aimed at addressing supportability largely depend on highly detailed ship design information and thus fail to enable rational supportability assessments in the Concept Phase. This research therefore aimed to address the lack of a quantitative supportability evaluation approach applicable to early stage naval ship design. Utilising Decision Analysis, Effectiveness Analysis, and the Analytic Hierarchy Process, the proposed approach tackled the difficulty of quantifying certain aspects of supportability in initial ship design and provided a framework to address the inconsistent and often conflicting preferences of decision makers. Since a ship's supportability is considered to be significantly affected by its configuration, the proposed approach exploited the advantages of an architecturally-orientated early stage ship design approach and a new concept design tool developed at University College London.
The new tool was used to develop concept-level designs of a frigate-sized combatant and a number of variations of it, namely a configurational rearrangement with enhancement of certain supportability features and an alternative ship design style. The design cases were then used to demonstrate the proposed evaluation approach. The overall aim of proposing a quantitative supportability evaluation approach applicable to concept naval ship design was achieved, although several issues and limitations emerged during both the development and the implementation of the approach. Through identification of these limitations, areas for future work aimed at improving the approach have been identified.
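The Analytic Hierarchy Process mentioned above can be sketched briefly. This is a generic illustration only; the criteria and judgement values below are hypothetical, not taken from the thesis. AHP priority weights can be approximated from a pairwise comparison matrix using the row geometric mean method:

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights from a pairwise comparison matrix
    (Saaty 1-9 scale) using the row geometric mean method."""
    n = len(pairwise)
    geo = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(geo)
    return [g / total for g in geo]

# Hypothetical judgements for three supportability criteria,
# e.g. maintainability vs. accessibility vs. redundancy.
matrix = [[1,     3,     5],
          [1 / 3, 1,     2],
          [1 / 5, 1 / 2, 1]]
weights = ahp_weights(matrix)  # weights sum to 1; the first criterion dominates
```

In a full AHP study one would also check the consistency ratio of each judgement matrix before trusting the derived weights.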

    Improving cyber security in industrial control system environment.

    Integrating industrial control systems (ICSs) with information technology (IT) and internet technologies has made industrial control system environments (ICSEs) more vulnerable to cyber-attacks. Increased connectivity has brought increased security threats, vulnerabilities, and risks in both the technology and the human constituents of the ICSE. Despite existing security solutions, which are chiefly tailored towards technical dimensions, cyber-attacks on ICSEs continue to increase, with a proportionate level of consequences and impacts. These consequences include system failures or breakdowns, which likewise affect the operations of dependent systems. Impacts often include marring physical safety, triggering loss of lives, causing huge economic damage, and thwarting the vital missions of production and business. This thesis addresses uncharted solution paths to the above challenges by investigating both technical and human-factor security evaluations to improve cyber security in the ICSE. An ICS testbed, scenario-based approaches, and expert opinion are used to demonstrate and validate cyber-attack feasibility scenarios.
To improve the security of ICSs, the research provides: (i) an adaptive operational security metrics generation (OSMG) framework for generating suitable security metrics for security evaluations in ICSEs, together with a list of characteristics of a good security metrics methodology (scope-definitive, objective-oriented, reliable, simple, adaptable, and repeatable); (ii) a technical multi-attribute vulnerability (and impact) assessment (MAVCA) methodology that combines the dynamic metric (temporal and environmental) attributes of vulnerabilities with the functional dependency relationship attributes of the vulnerability host components, to achieve a better representation of exploitation impacts on ICSE networks; (iii) a quantitative human-factor security (capability and vulnerability) evaluation model based on human-agent security knowledge and skills, used to identify the most vulnerable human elements, identify the weakest security aspects of the general workforce, and prioritise security enhancement efforts; and (iv) a security risk reduction through critical impact point assessment (S2R-CIPA) process model that demonstrates the combination of technical and human-factor security evaluations to mitigate risks and achieve ICSE-wide security enhancements. The approaches and models of cyber-attack feasibility testing, adaptive security metrication, multi-attribute impact analysis, and workforce security capability evaluation can support security auditors, analysts, managers, and system owners of ICSs in creating security strategies and improving cyber incident response, and thus effectively reduce security risk.
PhD in Manufacturing
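The core idea behind MAVCA, combining a vulnerability's dynamic attributes with the dependency relationships of its host component, can be sketched as follows. This is an illustrative simplification, not the thesis's actual formula; the scores, temporal factor, and component weights below are hypothetical:

```python
def adjusted_impact(base_score, temporal_factor, dependents, weights):
    """Scale a CVSS-like base score (0-10) by a temporal factor (0-1],
    then amplify it by the criticality weights of components that
    functionally depend on the vulnerable host. Capped at 10.0."""
    dependency_factor = 1.0 + sum(weights[d] for d in dependents)
    return min(10.0, base_score * temporal_factor * dependency_factor)

# Hypothetical ICS component weights: a PLC outage matters more than an HMI one.
weights = {"hmi": 0.2, "plc": 0.5}
isolated = adjusted_impact(7.5, 0.9, [], weights)               # no dependents
cascading = adjusted_impact(7.5, 0.9, ["hmi", "plc"], weights)  # amplified by dependency
```

The point of the sketch is only that the same vulnerability scores higher when other components depend on its host, which is the intuition the MAVCA methodology formalises.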

    Assessing and improving the quality of model transformations

    Software is pervading our society more and more and is becoming increasingly complex. At the same time, software quality demands remain at the same high level. Model-driven engineering (MDE) is a software engineering paradigm that aims at dealing with this increasing software complexity and improving productivity and quality. Models play a pivotal role in MDE. The purpose of using models is to raise the level of abstraction at which software is developed to a level where concepts of the domain in which the software has to be applied, i.e., the target domain, can be expressed effectively. For that purpose, domain-specific languages (DSLs) are employed. A DSL is a language with a narrow focus, i.e., it is aimed at providing abstractions specific to the target domain. This means that the application of models developed using DSLs is typically restricted to describing concepts existing in that target domain. Reuse of models such that they can be applied for different purposes, e.g., analysis and code generation, is one of the challenges that should be solved by applying MDE. Therefore, model transformations are typically applied to transform domain-specific models into other (equivalent) models suitable for different purposes. A model transformation is a mapping from a set of source models to a set of target models, defined as a set of transformation rules. MDE is gradually being adopted by industry. Since MDE is becoming more and more important, model transformations are becoming more prominent as well. Model transformations are in many ways similar to traditional software artifacts and therefore need to adhere to similar quality standards. The central research question addressed in this thesis is therefore as follows: how can the quality of model transformations be assessed and improved, in particular with respect to development and maintenance? Recall that model transformations facilitate reuse of models in a software development process.
We have developed a model transformation that enables reuse of analysis models for code generation. The semantic domains of the source and target language of this model transformation are so far apart that straightforward transformation is impossible, i.e., a semantic gap has to be bridged. To deal with model transformations that have to bridge a semantic gap, the semantics of the source and target language as well as possible additional requirements should be well understood. When bridging a semantic gap is not straightforward, we recommend addressing a simplified version of the source metamodel first. Finally, the requirements on the transformation may, if possible, be relaxed to enable automated model transformation. Model transformations that need to transform between models in different semantic domains are expected to be more complex than those that merely transform syntax. The complexity of a model transformation has consequences for its quality. Quality, in general, is a subjective concept and can therefore be defined in different ways. We defined it in the context of model transformation. A model transformation can be considered either as a transformation definition or as the process of transforming a source model into a target model. Accordingly, model transformation quality can be defined in two different ways. The quality of the definition is referred to as its internal quality. The quality of the process of transforming a source model into a target model is referred to as its external quality. There are also two ways to assess the quality of a model transformation (both internal and external). It can be assessed directly, i.e., by performing measurements on the transformation definition, or indirectly, i.e., by performing measurements in the environment of the model transformation. We mainly focused on direct assessment of internal quality, but we also addressed external quality and indirect assessment.
Given this definition of quality in the context of model transformations, techniques can be developed to assess it. Software metrics have been proposed for measuring various kinds of software artifacts. However, hardly any research has been performed on applying metrics to assess the quality of model transformations. For four model transformation formalisms with different characteristics, viz., ASF+SDF, ATL, Xtend, and QVTO, we defined sets of metrics for measuring model transformations developed with these formalisms. While these metric sets can be used to indicate bad smells in the code of model transformations, they cannot yet be used for assessing quality. A relation has to be established between the metric sets and attributes of model transformation quality. For two of the aforementioned metric sets, viz., those for ASF+SDF and ATL, we conducted empirical studies aimed at establishing such a relation. From these empirical studies we learned which metrics serve as predictors for different quality attributes of model transformations. Metrics can be used to quickly acquire insights into the characteristics of a model transformation. These insights enable increasing the overall quality of model transformations and thereby also their maintainability. To support maintenance, and also development in a traditional software engineering process, visualization techniques are often employed. For model transformations this appears to be a feasible approach as well. Currently, however, there are few visualization techniques available tailored towards analyzing model transformations. One of the most time-consuming processes during software maintenance is acquiring an understanding of the software; we expect that this holds for model transformations as well. Therefore, we presented two complementary visualization techniques for facilitating model transformation comprehension.
The first technique is aimed at visualizing the dependencies between the components of a model transformation. The second technique is aimed at analyzing the coverage of the source and target metamodels by a model transformation. The development of the metric sets, and in particular the empirical studies, has led to insights concerning the development of model transformations. The proposed visualization techniques are likewise aimed at facilitating the development of model transformations. We applied the insights acquired from the development of the metric sets, as well as the visualization techniques, in the development of a chain of model transformations that bridges a number of semantic gaps. We chose to solve this transformational problem not with one model transformation but with a number of smaller model transformations, which are more understandable. The language on which the model transformations are defined was subject to evolution; in particular, the coverage visualization proved to be beneficial for the co-evolution of the model transformations. Summarizing, we defined quality in the context of model transformations and addressed the necessity for a methodology to assess it. We defined metric sets and performed empirical studies to validate whether they serve as predictors for model transformation quality. We also proposed a number of visualizations to increase model transformation comprehension. The insights acquired from developing the metric sets and the empirical studies, as well as the visualization tools, proved to be beneficial for developing model transformations.
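To make the metrics idea concrete, a crude sketch of size metrics over ATL-style transformation text might look like the following. This is purely illustrative; the metric sets in the work above are far richer and operate on parsed transformations, not raw text:

```python
import re

def transformation_metrics(source):
    """Toy size metrics for an ATL-style transformation, computed over raw
    text. A real metrics tool would traverse the parsed transformation model."""
    return {
        "rules": len(re.findall(r"\brule\s+\w+", source)),
        "helpers": len(re.findall(r"\bhelper\b", source)),
        "lines": len(source.splitlines()),
    }

sample = """module Families2Persons;
helper def : isFemale() : Boolean = false;
rule Member2Male { }
rule Member2Female { }"""
metrics = transformation_metrics(sample)  # {'rules': 2, 'helpers': 1, 'lines': 4}
```

Even such simple counts hint at how size and rule-count metrics can flag unusually large transformations for closer review.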

    Strategic and large scale government IT projects management: innovation report

    This research focuses on the implementation of IT systems in the public sector, and on national ID card projects in particular. Such projects have high expectations but low success rates. The study investigated the factors contributing to IT project failure through an extensive review of the existing literature. This was enriched and tested by close involvement with the UAE national ID card project, surveys and in-depth interviews with senior managers from other ID card projects, and presentations and attendance at over 50 conferences on the subject. Many of the factors leading to either success or failure identified in practical studies could be addressed through a well-designed project management methodology. Based on the literature, practical experience, observations, and feedback from practitioners, a project management methodology named PROMOTE (PROject Management Of Technology Endeavours) was developed and tested for planning and implementing large-scale IT projects, mainly in a government context. The US$200+ million national ID programme in the United Arab Emirates was the main test vehicle. The methodology's innovations include a hybrid systems development/project management customer-based philosophy, a number of new tools and techniques, and the introduction of a mentor for the project manager. To help assess the general applicability of the methodology, it was also tested in the Saudi Arabia, Oman, and Bahrain national ID initiatives. The methodology phases were refined several times (and further phases were added) to address the problems identified from the UAE project, the literature, the experiences reported at GCC committee meetings, and other large-scale implementations around the world (from conferences and study visits to other countries). From the testing conducted, the methodology is believed to make a significant contribution to the field of IT project implementation and to increasing the success chances of such projects.
Such success should have a profound impact on government services. The study also recognises that a better understanding of the new methodology and its contributions is only possible through further research and application in other large-scale IT projects. This should allow the extension of the applicability of this methodology to a much wider spectrum.

    Management: A continuing bibliography with indexes

    This bibliography lists 919 reports, articles, and other documents introduced into the NASA scientific and technical information system in 1981