
    A knowledge-based system design/information tool

    The objective of this effort was to develop a Knowledge Capture System (KCS) for the Integrated Test Facility (ITF) at the Dryden Flight Research Facility (DFRF), a NASA Ames Research Center (ARC) facility. The system was used to capture the design and implementation information for NASA's high angle-of-attack research vehicle (HARV), a modified F/A-18A; in particular, the KCS was used to capture specific characteristics of the design of the HARV fly-by-wire (FBW) flight control system (FCS). The KCS utilizes artificial intelligence (AI) knowledge-based system (KBS) technology and enables the user to capture the following characteristics of automated systems: the system design; the hardware (H/W) design and implementation; the software (S/W) design and implementation; and the utilities (electrical and hydraulic) design and implementation. A generic version of the KCS was also developed that can be used to capture the design information for any automated system. The deliverable items for this project consist of the prototype generic KCS and an application that captures selected design characteristics of the HARV FCS.
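The four captured characteristics named above (system, hardware, software, and utilities design) suggest a simple data model. The sketch below is a hypothetical Python illustration of that categorization, not the actual KCS schema; all class and field names are invented for this example.

```python
from dataclasses import dataclass, field

@dataclass
class DesignRecord:
    component: str      # e.g. "pitch-axis control law"
    description: str    # what was designed or implemented
    rationale: str      # why the design choice was made

@dataclass
class AutomatedSystemKnowledge:
    # One bucket per captured characteristic of an automated system.
    system_design: list = field(default_factory=list)
    hardware: list = field(default_factory=list)
    software: list = field(default_factory=list)
    utilities: list = field(default_factory=list)

    def capture(self, category: str, record: DesignRecord) -> None:
        """File a design record under one of the four captured categories."""
        getattr(self, category).append(record)

kb = AutomatedSystemKnowledge()
kb.capture("software", DesignRecord(
    component="FBW control law",
    description="Digital fly-by-wire pitch control implementation",
    rationale="Supports high angle-of-attack research manoeuvres",
))
```

A real knowledge-based system would add inference over these records; the point here is only the separation of design knowledge into the four categories the abstract lists.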

    Data-driven Soft Sensors in the Process Industry

    In the last two decades, Soft Sensors have established themselves as a valuable alternative to traditional means for the acquisition of critical process variables, process monitoring, and other tasks related to process control. This paper discusses characteristics of process industry data that are critical for the development of data-driven Soft Sensors. These characteristics are common to a large number of process industry fields, such as the chemical, bioprocess, and steel industries. The focus of this work is on data-driven Soft Sensors because of their growing popularity, demonstrated usefulness, and huge, though not yet fully realised, potential. The main contributions of this work are a comprehensive selection of case studies covering the three most important Soft Sensor application fields, a general introduction to the most popular Soft Sensor modelling techniques, and a discussion of open issues in Soft Sensor development and maintenance together with their possible solutions.
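A data-driven soft sensor infers a hard-to-measure process variable from cheap online measurements by fitting a model to historical data. The following is a minimal sketch of that idea using synthetic data and ordinary least squares; the variable names and the generating relation are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Synthetic "historical plant data": temperature and pressure are measured
# online; the quality variable is normally obtained by slow lab analysis.
rng = np.random.default_rng(0)
n = 200
temp = rng.uniform(300.0, 350.0, n)     # K
pres = rng.uniform(1.0, 5.0, n)         # bar
# Assumed true relation, used only to generate training data.
quality = 0.02 * temp - 0.5 * pres + rng.normal(0.0, 0.05, n)

# Fit a linear soft sensor by least squares: quality ≈ X @ w
X = np.column_stack([temp, pres, np.ones(n)])
w, *_ = np.linalg.lstsq(X, quality, rcond=None)

def soft_sensor(temperature, pressure):
    """Predict the quality variable from online measurements."""
    return w[0] * temperature + w[1] * pressure + w[2]
```

In practice the modelling step would use the techniques the paper surveys (e.g. PLS, neural networks) and would have to cope with the data issues it discusses, such as missing values, drift, and outliers.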

    Cybersecurity of Autonomous Systems in the Transportation Sector: An Examination of Regulatory and Private Law Approaches with Recommendations for Needed Reforms

    The past twenty-five years gave rise to increasing levels of automation within the transportation sector. From initial subsystems, like vessel satellite tracking and automobile chassis control, automation continues apace. The future promises fully autonomous devices such as unmanned aerial systems (“UAS”) and self-driving cars. These autonomous and automatic systems and devices (“AASD”) provide safety, efficiency, and productivity benefits. Yet AASD operate under continual threat of cyber-attack.

    Compromised AASD can produce dire consequences in the transportation sector. The possible consequences extend far beyond financial harms to severe bodily injury or even death. Given both the prevalence of cyber threats and their potentially deadly consequences, the public holds a legitimate interest in ensuring that incentives exist to address the cybersecurity of such systems.

    This paper examines both the private and public law mechanisms for influencing AASD cybersecurity behaviors in the transportation sector and undertakes the first comprehensive comparison of existing agency regulatory schemes. The findings presented herein propose: (1) additional legislation to promote sharing of cyber event data; and (2) transportation sector regulatory best practices that require mandatory submission and review of cybersecurity plans by OEMs and service providers when compromise of their products or services threatens safety of life or critical infrastructure. None of the recommendations advanced herein require regulators to direct the adoption of any specific technical solution or cybersecurity standard. Thus, industry participants can remain nimble in the face of evolving cyber threats, while public safety is ensured through needed regulatory oversight.

    Recovering Tech's Humanity


    Superstructure Optimization of Naphtha Processing System with Environmental Considerations

    The objective of this research project is to develop an optimization-based mathematical model in the form of a mixed-integer linear program (MILP) for determining the optimal configuration of a petroleum refinery. The scope of this project is to formulate a superstructure representation model for a refinery, focusing on the naphtha hydroprocessing subsystem, in order to select the most cost-efficient process route. The alternatives for all streams are evaluated, and the optimal configuration is proposed based on market demand by incorporating logical constraints and mass balances using the GAMS modeling language platform. Based on knowledge of the physics of the naphtha processing unit, we represent all possible processing alternatives in a superstructure. Carbon dioxide emission factors have also been considered, with the relevant data obtained using the carbon weighting tonne (CWT) method. Computational studies are conducted on a representative numerical example to illustrate the proposed modeling approach.
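The core of such a superstructure model is a set of binary selection variables with logical and resource constraints. The paper works in GAMS; as a hedged stand-in, here is a toy two-route selection MILP in Python via `scipy.optimize.milp`, with invented cost and emission numbers that are not from the paper.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical data: two candidate naphtha processing routes.
cost = np.array([5.0, 7.0])       # unit cost of route 1 and route 2
emission = np.array([3.0, 1.0])   # CO2 emission factor of each route
cap = 2.0                         # emission cap

res = milp(
    c=cost,                                             # minimise total cost
    constraints=[
        LinearConstraint(np.ones((1, 2)), 1, 1),        # logical: pick exactly one route
        LinearConstraint(emission[None, :], -np.inf, cap),  # CO2 emission cap
    ],
    integrality=np.ones(2),                             # binary route-selection variables
    bounds=Bounds(0, 1),
)
# Route 1 is cheaper but violates the emission cap, so route 2 is selected.
```

The full refinery model adds mass balances and demand constraints in the same pattern: continuous stream variables tied to the binary route choices.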

    Little Things and Big Challenges: Information Privacy and the Internet of Things

    The Internet of Things (IoT), the wireless connection of devices to ourselves, each other, and the Internet, has transformed our lives and our society in unimaginable ways. Today, billions of electronic devices and sensors collect, store, and analyze personal information from how fast we drive, to how fast our hearts beat, to how much and what we watch on TV. Even children provide billions of bits of personal information to the cloud through smart toys that capture images, recognize voices, and more. The unprecedented and unbridled new information flow generated from the little things of the IoT is creating big challenges for privacy regulators. Traditional regulators are armed with conventional tools not fully capable of handling the privacy challenges of the IoT. A critical review of recent Federal Trade Commission (FTC) enforcement decisions sheds light on a recommended path for the future regulation of the IoT. This Article first examines the pervasiveness of the IoT and the data it collects in order to clarify the challenges facing regulators. It also highlights traditional privacy laws, principles, and regulations and explains why those rules do not fit the novel challenges and issues resulting from the IoT. Then it presents an in-depth analysis of four key FTC enforcement decisions to highlight how the FTC has regulated and can regulate the IoT without undermining the innovation and benefits that this technology, and the data it provides, brings to our society. Specifically, the Article describes how the FTC, faced with the privacy challenge that accompanies the interconnected world of the IoT, has managed to apply traditional standards of unfairness and deceptive practices to protect private information. The FTC has been flexible and nimble with its interpretations of such standards and, in its most recent IoT case, FTC v. VIZIO, established a new tool in its toolkit for regulating IoT devices: an unfair tracking standard. As the de facto data protection authority in the United States, the FTC can use this new tool to work toward standardizing its treatment of IoT privacy issues instead of trying to fit those concerns neatly under the deception authority of section 5 of the FTC Act. However, this new tool also means that the FTC has the opportunity, and responsibility, to provide guidance on how it will wield that authority. To assure that innovation is not stifled and that this new rule is fairly applied (whether by the FTC or other agencies that may follow suit), it is imperative that the FTC diligently address concerns about the scope of this new rule and communicate that guidance to businesses, other regulators, and consumers alike. The new FTC administration should, as the primary regulator of information privacy and the IoT, continue the strong practice established by the previous administration, which is to provide guidance to businesses, consumers, and other regulators navigating the big challenges caused by the little things in the IoT.

    A Middleware-based Approach for Context-aware Computing

    Ubiquitous computing environments integrate a large number of heterogeneous devices, which introduces considerable complexity into the development of ubiquitous applications. One solution to this problem is a software abstraction layer, known as middleware, which encapsulates the underlying elements of the environment and offers unified and standardised access to applications that need to use the resources of the environment. Moreover, a middleware layer can also provide high-level built-in services, such as context management services.
    Ministerio de Educación y Ciencia TIN2006-15617-C0
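The abstraction-layer pattern described above can be sketched briefly: heterogeneous devices hide behind one interface, and applications obtain context through the middleware rather than through device-specific APIs. This is a generic illustration of the pattern, not the middleware proposed in the paper; all names and readings are invented.

```python
from abc import ABC, abstractmethod

class Device(ABC):
    """Unified interface that every heterogeneous device driver implements."""
    @abstractmethod
    def read(self) -> dict:
        """Return this device's current context data in a standard format."""

class ThermometerDriver(Device):
    def read(self) -> dict:
        return {"temperature_c": 21.5}   # stands in for real hardware access

class GPSDriver(Device):
    def read(self) -> dict:
        return {"lat": 37.4, "lon": -6.0}

class Middleware:
    """Standardised access point plus a built-in context management service."""
    def __init__(self):
        self._devices = []

    def register(self, device: Device) -> None:
        self._devices.append(device)

    def context(self) -> dict:
        """High-level context service: merge all device readings."""
        merged = {}
        for d in self._devices:
            merged.update(d.read())
        return merged

mw = Middleware()
mw.register(ThermometerDriver())
mw.register(GPSDriver())
snapshot = mw.context()   # the application sees one dict, not N device APIs
```

Adding a new device type then means writing one driver class; applications built against the middleware need no changes.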

    Managing the Medical Matrix: A DAIS for Artificial Intelligence in Health Care (and Beyond)

    AI offers “huge and wide-reaching potential” in health; the futures of health care and AI are deeply interconnected. Use of AI provides the field with never-before-imagined opportunities to streamline and delve more deeply into medical care, including disease identification, diagnosing conditions, and a simpler way to crowdsource and develop treatment plans. Its broad inclusion in the field has created a pressing need for more, and better, regulation. Improved regulation is especially critical because of the possibility that mismanaged AI will allow for incorrect diagnosis of patients or biased predictions and outcomes. In fact, numerous examples of such bias – and attempts to manage bias – already exist, which raises major ethical questions surrounding the use of AI and presents the issue of how to avoid health disparities in AI. In this Note, I argue that AI is not being adequately managed at the federal level. I further argue that the lack of management is largely due to a general failure to mandate standards for data sourcing, cleaning, and testing. The health care field is rife with examples of the effects of poor management, some of which have immediate and devastating impacts on patients; however, mismanagement of AI is not limited to health care alone. The potential problems that arise from lack of oversight span across industry lines. Thus, no single industry or existing federal agency can claim full ownership of, or expertise in, AI as a tool. I therefore propose that the best possible solution would be to form an entirely new top-level federal agency. This new agency would be tasked with creating federally mandated standards for ethical AI data sourcing, cleaning, and testing across industries. It would provide comprehensive management of AI datasets that do not fall under the umbrella of an existing agency such as the Food & Drug Administration (FDA). I further propose that the new regulatory body be named the “Department of Artificial Intelligence Standardization,” or DAIS.