7 research outputs found

    The Endosymbiotic Coral Algae Symbiodiniaceae Are Sensitive to a Sensory Pollutant: Artificial Light at Night, ALAN

    Artificial Light at Night (ALAN) is a major emerging issue in biodiversity conservation that can negatively impact both terrestrial and marine environments, and it should therefore be taken into serious consideration in strategic planning for urban development. While the lion's share of research has dealt with terrestrial organisms, only a handful of studies have focused on the marine milieu. To determine whether ALAN affects the coral reef symbiotic algae that are fundamental to sustainable coral reefs, we conducted a one-month experiment in which isolated Symbiodiniaceae cell cultures from the genera Cladocopium (formerly Clade C) and Durusdinium (formerly Clade D) were illuminated with LED light. Cell cultures were exposed nightly to ALAN levels of 0.15 μmol quanta m⁻² s⁻¹ (∼4–5 lux) under three light spectra: blue, yellow, and white. Our findings showed that even at very low levels of light at night, the algae's photo-physiological parameters, namely Electron Transport Rate (ETR), Non-Photochemical Quenching (NPQ), total chlorophyll, and meiotic index, showed significantly lower values under ALAN, primarily, but not exclusively, in Cladocopium cell cultures. The findings also showed that different Symbiodiniaceae types have different photo-physiological and photosynthetic performance under ALAN. We believe that our results sound an alarm for the probable detrimental effects of an increasing sensory pollutant, ALAN, on the eco-physiology of symbiotic corals. The results of this study also point to potential effects of ALAN on other organisms in the marine ecosystem, such as fish, zooplankton, and phytoplankton, whose biorhythms are entrained by natural light and dark cycles.

    Audio Guidance to Enable Vision-Impaired Individuals to Move Independently

    At present, when an individual who is blind or has low vision runs or walks for exercise, they might use a treadmill, rely on a guide dog, or use a tethered human guide. Independent and safe exercise, whether walking or running, is one way to increase personal agency and improve the quality of life for vision-impaired persons. This disclosure describes techniques that use on-device machine learning to enable a vision-impaired individual to independently walk or run, e.g., for exercise. A tape or guideline is painted along the running path. A mobile device camera detects the guideline. An app on the phone estimates the user's position to the left or to the right of the guideline. The app provides audio cues in stereo to direct the person to stay in close proximity to the guideline while walking or running.
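    The stereo-cue step described above can be sketched as a panning function: given the user's estimated lateral offset from the detected guideline, compute left/right channel gains that make the cue louder on the side the user should move toward. This is a minimal illustrative sketch, not the disclosure's implementation; the function name, the sign convention (negative offset = left of the line), and the `max_offset` parameter are assumptions.

```python
import math


def stereo_gains(offset: float, max_offset: float = 1.0) -> tuple[float, float]:
    """Map a lateral offset (meters; negative = left of the guideline)
    to (left_gain, right_gain), each in [0, 1].

    A user drifting left hears the cue louder in the right ear,
    steering them back toward the guideline, and vice versa.
    Sign convention and max_offset are illustrative assumptions.
    """
    # Normalize and clamp the offset to [-1, 1]; negate so that a
    # leftward drift pans the cue toward the right channel.
    pan = max(-1.0, min(1.0, -offset / max_offset))
    # Constant-power panning: the cue keeps equal perceived loudness
    # as it moves between channels (left^2 + right^2 == 1).
    angle = (pan + 1.0) * math.pi / 4.0  # 0 = full left, pi/2 = full right
    return (math.cos(angle), math.sin(angle))
```

    When the user is centered on the guideline, both channels carry equal gain; as the offset grows, the cue migrates fully into one ear, giving a continuous directional signal rather than discrete left/right commands.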

    SUGGESTING TITLES FOR AUDIO RECORDINGS

    Techniques of this disclosure may enable a computing device to suggest one or more titles based on the content of audio being recorded or audio that was previously recorded, and other data such as time and location. Rather than applying a general default title or audio file name, the computing device may request authorization from a user to analyze the contents of a recorded audio file and, after receiving explicit authorization from the user, analyze the audio, including speech, and automatically suggest titles that are indicative of the content of the audio and/or other data. The computing device may convert speech included in the audio into text and extract a plurality of terms from the text based on various factors, such as word classes (e.g., convert audio that includes "this meatball recipe adds parmesan cheese" into text and extract a plurality of nouns such as "meatball," "recipe," "parmesan," and "cheese" from the text). Based on various factors, such as term frequency in the text and the relative uniqueness of the terms in the spoken language, the computing device may identify a plurality of words from the plurality of terms to represent the overall content of the audio (e.g., identify "meatball" and "recipe" from "meatball," "recipe," "parmesan," and "cheese" based on term frequency in the text). The computing device may also classify non-speech audio (e.g., applause, dog barking, music) and use the classification, including metadata associated with the classified audio object, such as song titles, to identify a plurality of words to represent the overall content of the audio. The speech terms, non-speech audio classification, classified audio object metadata, and other data may be combined to identify a plurality of words to represent the overall content of the audio. The computing device may display the identified words as suggested words to be included in the title of the audio file. The user may select one or more of the identified words as the title or combine one or more of the identified words with one or more other words entered by the user. The computing device may use the selected and/or entered words as the title for the audio and/or for the name of the audio file.