
    CLARAty Functional-Layer Software

    Functional-layer software for the Coupled Layer Architecture for Robotics Autonomy (CLARAty) is being developed. [CLARAty was described in Coupled-Layer Architecture for Advanced Software for Robots (NPO-21218), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 48. To recapitulate: CLARAty was proposed to improve the modularity of robotic software while tightening the coupling between the planning/execution and control subsystems. Whereas prior robotic software architectures have typically contained three levels, the CLARAty architecture contains two layers: a decision layer and a functional layer.] Just as an operating system provides abstraction from computational hardware, the CLARAty functional-layer software provides abstraction for different robotic systems. The functional-layer software establishes interrelated, object-oriented hierarchies that contain active and passive objects representing the different levels of system abstraction and the system components. The functional-layer software is decomposed into a set of reusable core components and a set of extended components that adapt the reusable set to specific hardware implementations. The reusable components (a) provide behavior and interface definitions and implementations of basic functionality, (b) provide local executive capabilities, (c) manage local resources, and (d) support state and resource queries by the decision layer. Software for robotic systems can be built from these components.
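    The split between reusable core components (interface and behavior definitions) and extended components (hardware-specific adaptations) can be illustrated with a minimal, hypothetical sketch; the class and method names below are invented for illustration and are not CLARAty's actual API:

```python
from abc import ABC, abstractmethod

class Motor(ABC):
    """Reusable core component: defines the behavior and interface
    that any concrete motor implementation must provide."""

    @abstractmethod
    def set_speed(self, rad_per_sec: float) -> None: ...

    @abstractmethod
    def query_state(self) -> dict: ...

class SimulatedMotor(Motor):
    """Extended component: adapts the reusable interface to one
    specific (here, simulated) hardware implementation."""

    def __init__(self) -> None:
        self._speed = 0.0

    def set_speed(self, rad_per_sec: float) -> None:
        self._speed = rad_per_sec

    def query_state(self) -> dict:
        # State queries like this are what the decision layer
        # would issue against the functional layer.
        return {"speed": self._speed}

m: Motor = SimulatedMotor()
m.set_speed(1.5)
print(m.query_state())  # {'speed': 1.5}
```

    Code written against the abstract `Motor` interface remains unchanged when a different extended component (a real motor driver instead of the simulation) is substituted, which is the reuse the abstract describes.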

    Software for Automation of Real-Time Agents, Version 2

    Version 2 of Closed Loop Execution and Recovery (CLEaR) has been developed. CLEaR is an artificial-intelligence computer program for planning and executing the actions of autonomous agents, including, for example, Deep Space Network (DSN) antenna ground stations, robotic exploratory ground vehicles (rovers), robotic aircraft (UAVs), and robotic spacecraft. CLEaR automates generating and executing command sequences, monitoring sequence execution, and modifying the command sequence in response to execution deviations and failures as well as to new goals for the agent to achieve. The development of CLEaR has focused on unifying planning and execution to increase the ability of an autonomous agent to perform under tight resource and time constraints, coupled with uncertainty in how many resources and how much time will be required to perform a task. This unification is realized by extending the traditional three-tier robotic control architecture to increase the interaction between the software components that perform the deliberative and reactive functions. The increased interaction reduces the need to replan, enables earlier detection of the need to replan, and enables replanning to occur before an agent enters a failure state.
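    The monitor-and-replan-early idea can be sketched in a few lines. This is a toy illustration of the control loop described above, not CLEaR's actual algorithm: the "domain" is just counting up to a goal value, and a single lost command stands in for execution uncertainty.

```python
def plan_fn(goal, state):
    # Deliberative part: a "plan" is the list of increments still
    # needed to reach the goal from the current state.
    return [1] * (goal - state["value"])

flaky = {"count": 0}
def execute_step(step, state):
    # Reactive part with uncertainty: the very first command is lost.
    flaky["count"] += 1
    if flaky["count"] == 1:
        return dict(state)
    return {"value": state["value"] + step}

def execute_with_replanning(goal):
    plan = plan_fn(goal, {"value": 0})
    state = {"value": 0}
    while plan:
        step = plan.pop(0)
        expected = state["value"] + step
        state = execute_step(step, state)
        if state["value"] != expected:       # deviation detected at once...
            plan = plan_fn(goal, state)      # ...so replan before failing
    return state

result = execute_with_replanning(3)
print(result)  # {'value': 3}
```

    Because the monitor checks every step, the deviation is caught immediately and replanning happens before the remaining sequence is executed against a stale state, which is the benefit of coupling deliberation and reaction that the abstract highlights.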

    Personal radiofrequency electromagnetic field exposure of adolescents in the Greater London area in the SCAMP cohort and the association with restrictions on permitted use of mobile communication technologies at school and at home

    Personal measurements of radiofrequency electromagnetic fields (RF-EMF) have been used in several studies to characterise personal exposure in daily life, but few such data are available for adolescents, and none yet for the United Kingdom (UK). In this study, we aimed to characterise personal exposure to RF-EMF in adolescents and to study the association between exposure and rules applied at school and at home to restrict wireless communication use, rules likely implemented to reduce other effects of mobile technology (e.g. distraction). We measured exposure to RF-EMF for 16 common frequency bands (87.5 MHz–3.5 GHz), using portable measurement devices (ExpoM-RF), in a subsample of adolescents participating in the cohort Study of Cognition, Adolescents and Mobile Phones (SCAMP) from Greater London (UK) (n = 188). School and home rules were assessed by questionnaire and concerned the school's availability of WiFi and mobile phone policy, and parental restrictions on permitted mobile phone use. Adolescents recorded their activities in real time using a diary app on a study smartphone while the portable device characterised their personal RF-EMF exposure in daily life, during different activities and at different times of the day. Data analysis was done for 148 adolescents from 29 schools who recorded RF-EMF data for a median duration of 47 h. The majority (74%) of adolescents spent part of their time at school during the measurement period. Median total RF-EMF exposure was 40 μW/m² at home, 94 μW/m² at school, and 100 μW/m² overall. In general, restrictions at school or at home made little difference to adolescents' measured exposure to RF-EMF, except for uplink exposure from mobile phones while at school, which was significantly lower for adolescents attending schools that did not permit phone use at all than for adolescents attending schools that allowed mobile phone use during breaks. This difference was not statistically significant for total personal exposure. Total exposure to RF-EMF in adolescents living in Greater London tended to be higher than exposure levels reported in other European countries. This study suggests that school policies and parental restrictions are not associated with lower RF-EMF exposure in adolescents.

    A Learning-Based Approach to the Detection of SQL Attacks

    No full text
    Web-based systems are often a composition of infrastructure components, such as web servers and databases, and of application-specific code, such as HTML-embedded scripts and server-side applications. While the infrastructure components are usually developed by experienced programmers with solid security skills, the application-specific code is often developed under strict time constraints by programmers with little security training. As a result, vulnerable web applications are deployed and made available to the Internet at large, creating easily exploitable entry points for the compromise of entire networks. Web-based applications often rely on back-end database servers to manage application-specific persistent state. The data is usually extracted by performing queries that are assembled using input provided by the users of the applications. If user input is not sanitized correctly, it is possible to mount a variety of attacks that leverage web-based applications to compromise the security of back-end databases. Unfortunately, it is not always possible to identify these attacks using signature-based intrusion detection systems, because of the ad hoc nature of many web-based applications. Signatures are rarely written for this class of applications because of the substantial investment of time and expertise this would require. We have developed an anomaly-based system that learns the profiles of normal database access performed by web-based applications using a number of different models. These models allow for the detection of unknown attacks with reduced false positives and limited overhead. In addition, our solution represents an improvement over previous approaches because it reduces the possibility of executing SQL-based mimicry attacks.
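    The flavor of such learned profiles can be conveyed with a toy model (this is an illustration in the spirit of the approach, not the paper's actual models): for one query parameter, learn the character set and length range seen during normal operation, then flag values that deviate.

```python
class ParameterProfile:
    """Toy anomaly model for a single query parameter: during training,
    record the observed characters and lengths; during detection, flag
    values whose length or character set deviates from the profile."""

    def __init__(self) -> None:
        self.chars: set[str] = set()
        self.min_len = float("inf")
        self.max_len = 0

    def train(self, value: str) -> None:
        self.chars |= set(value)
        self.min_len = min(self.min_len, len(value))
        self.max_len = max(self.max_len, len(value))

    def is_anomalous(self, value: str) -> bool:
        # Length far outside the trained range is suspicious.
        if not self.min_len <= len(value) <= 2 * self.max_len:
            return True
        # So is any character never seen in normal traffic.
        return bool(set(value) - self.chars)

profile = ParameterProfile()
for normal in ["alice", "bob42", "carol"]:   # normal usernames seen in training
    profile.train(normal)

print(profile.is_anomalous("carla"))         # False: consistent with training data
print(profile.is_anomalous("' OR '1'='1"))   # True: quotes/spaces never seen before
```

    An SQL injection payload almost always introduces characters (quotes, spaces, comparison operators) absent from benign values of the same parameter, so even this crude character-set model catches the classic `' OR '1'='1` probe without any attack signature.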

    An Experience Developing an IDS Stimulator for the Black-Box Testing of Network Intrusion Detection Systems

    Signature-based intrusion detection systems use a set of attack descriptions to analyze event streams, looking for evidence of malicious behavior. If the signatures are expressed in a well-defined language, it is possible to analyze the attack signatures and automatically generate events or series of events that conform to the attack descriptions. This approach has been used in tools whose goal is to force intrusion detection systems to generate a large number of detection alerts. The resulting "alert storm" is used to de-sensitize intrusion detection system administrators and hide attacks in the event stream. We apply a similar technique to perform testing of intrusion detection systems. Signatures from one intrusion detection system are used as input to an event stream generator that produces randomized synthetic events that match the input signatures. The resulting event stream is then fed to a number of different intrusion detection systems and the results are analyzed. This paper presents the general testing approach and describes the first prototype of a tool, called Mucus, that automatically generates network traffic using the signatures of the Snort network-based intrusion detection system. The paper describes preliminary cross-testing experiments with both an open-source and a commercial tool and reports the results. An evasion attack that was discovered as a result of analyzing the test results is also presented.
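    The core idea, reading a signature and emitting a randomized synthetic event that matches it, can be sketched as follows. This toy generator handles only a drastically simplified Snort-like rule (protocol, destination port, and one `content` option); a real tool such as Mucus must cover the full rule language:

```python
import random
import re

def generate_event(signature: str) -> dict:
    """Parse a simplified Snort-style rule and emit one synthetic
    event that matches it: right protocol, right destination port,
    payload containing the rule's content string plus random filler."""
    header, _, options = signature.partition("(")
    fields = header.split()
    proto, dst_port = fields[1], fields[-1]
    match = re.search(r'content:"([^"]+)"', options)
    payload = match.group(1) if match else ""
    # Random filler makes repeated runs produce different events
    # while every event still matches the signature.
    filler = "".join(random.choice("abcdef") for _ in range(8))
    return {"proto": proto, "dst_port": int(dst_port), "payload": payload + filler}

rule = 'alert tcp any any -> any 80 (msg:"web attack"; content:"/etc/passwd";)'
event = generate_event(rule)
print(event["proto"], event["dst_port"])  # tcp 80
```

    Feeding such synthetic events to a second intrusion detection system and checking whether it alerts is the cross-testing step: a missed alert indicates either a coverage gap or a differing interpretation of the attack, which is how the evasion attack mentioned above can surface.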