
    Consumer perceptions and reactions concerning AI (Avian Influenza)

    This paper presents the results of several consumer surveys conducted between 2004 and 2006 on consumers' perceptions of and reactions to AI in Vietnam (mainly in Hanoi). The main results are as follows:
    - A high proportion of consumers consider AI to be a food-related risk. However, over time there has been a slight shift from a fear of consuming poultry to a fear of preparing (slaughtering) it.
    - AI has had a profound effect on poultry consumption, even outside peak crisis times, more in terms of the quantity consumed (approximately a third less in 2006) than in terms of the number of consumers (6% fewer).
    - Blood and internal organs are considered particularly risky, while eggs are viewed as safer. Poultry from industrial farms is considered riskier than poultry from small farms.
    - Purchasing practices have also been affected by AI: in Hanoi, consumers declare that they prefer to buy poultry directly from producers they know, or from supermarkets in the case of the wealthiest consumers. A high proportion still buy live poultry from market traders, but more consumers now ask sellers to slaughter it for them.
    With a view to lessening market shocks in the wake of the crisis while keeping consumer safety a priority, a number of measures should nevertheless be implemented:
    - Risk communication should not over-emphasize AI as a food-related risk.
    - Reliable, safe distribution channels should be promoted (with trustworthy quality signs and controls) in order to encourage safe production and poultry consumption. Otherwise, a market recovery will only benefit supermarkets and the large-scale farmers capable of supplying them.
    - As numerous live birds are still slaughtered in urban marketplaces, facilities should be provided for safe slaughter. At the same time, more attention should be paid to providing a real "cold chain" in order to promote the sale of slaughtered poultry.
    (Author's abstract)

    Superintelligence as a Cause or Cure for Risks of Astronomical Suffering

    Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are of a severity and probability comparable to risks of extinction. Preventing them is in the common interest of many different value systems. Furthermore, we argue that, just as superintelligent AI both contributes to existential risk and can help prevent it, superintelligent AI can both pose a suffering risk and help avoid one. Some types of work aimed at making superintelligent AI safe will also help prevent suffering risks, and there may also be a class of safeguards for AI that helps specifically against s-risks.

    Safety, Trust, and Ethics Considerations for Human-AI Teaming in Aerospace Control

    Designing a safe, trusted, and ethical AI may be practically impossible; however, designing AI with safe, trusted, and ethical use in mind is both possible and necessary in safety- and mission-critical domains like aerospace. The terms safe, trusted, and ethical use of AI are often used interchangeably; however, a system can be safely used but not trusted or ethical, have a trusted use that is not safe or ethical, or have an ethical use that is not safe or trusted. This manuscript serves as a primer to illuminate the nuanced differences between these concepts, with a specific focus on applications of Human-AI teaming in aerospace system control, where humans may be in, on, or out of the loop of decision-making.

    Artificial intelligence and the space station software support environment

    In a software system the size of the Space Station Software Support Environment (SSE), no single software development or implementation methodology is presently powerful enough to provide safe, reliable, maintainable, cost-effective real-time or near-real-time software. In an environment that must survive one of the harshest and longest lifetimes, software must be produced that will perform as predicted, from the first time it is executed to the last. Many of the software challenges that will be faced will require strategies borrowed from Artificial Intelligence (AI). AI is the only development area mentioned as an example of a legitimate reason for a waiver from the overall requirement to use the Ada programming language for software development. The limits of the applicability of the Ada language, the Ada Programming Support Environment (of which the SSE is a special case), and software engineering to AI solutions are defined by describing a scenario that involves many facets of AI methodologies.