10 research outputs found

    The multimodal edge of human aerobotic interaction

    No full text
    This paper presents the idea of multimodal human aerobotic interaction. An overview of the aerobotic system and its applications is given. The joystick-based controller interface and its limitations are discussed. Two techniques are suggested as emerging alternatives to the joystick-based controller interface used in human aerobotic interaction. The first technique is a multimodal combination of speech, gaze, gesture, and other non-verbal cues already used in regular human-human interaction. The second is telepathic interaction via brain-computer interfaces. The potential limitations of these alternatives are highlighted, and considerations for further work are presented.

    Decoupling of DNA methylation and activity of intergenic LINE-1 promoters in colorectal cancer

    Get PDF
    Hypomethylation of LINE-1 repeats in cancer has been proposed as the main mechanism behind their activation; this assumption, however, was based on findings from early studies that were biased toward young and transpositionally active elements. Here, we investigate the relationship between methylation of 2 intergenic, transpositionally inactive LINE-1 elements and expression of the LINE-1 chimeric transcripts LCT13 and LCT14 driven by their antisense promoters (L1-ASP). Our data from DNA modification, expression, and 5′RACE analyses suggest that in colorectal cancer, methylation in the regions analyzed is not always associated with LCT repression. Consistent with this, in HCT116 colorectal cancer cells lacking DNA methyltransferases DNMT1 or DNMT3B, LCT13 expression decreases, while cells lacking both DNMTs or treated with the DNMT inhibitor 5-azacytidine (5-aza) show no change in LCT13 expression. Interestingly, levels of the H4K20me3 histone modification are inversely associated with LCT13 and LCT14 expression. Moreover, at these LINE-1s, H4K20me3 levels rather than DNA methylation seem to be a good predictor of their sensitivity to 5-aza treatment. Therefore, by studying individual LINE-1 promoters we have shown that in some cases these promoters can be active without losing methylation; in addition, we provide evidence that other factors (e.g., H4K20me3 levels) play prominent roles in their regulation.

    Isoaspartate, Carbamoyl phosphate synthase-1, and carbonic anhydrase-III as biomarkers of liver injury

    Get PDF
    We had previously shown that alcohol consumption can induce cellular isoaspartate protein damage via an impairment of the activity of protein isoaspartyl methyltransferase (PIMT), an enzyme that triggers repair of isoaspartate protein damage. To further investigate the mechanism of isoaspartate accumulation, hepatocytes cultured from control or 4-week ethanol-fed rats were incubated in vitro with tubercidin or adenosine. Both these agents, known to elevate intracellular S-adenosylhomocysteine levels, increased cellular isoaspartate damage over that recorded following ethanol consumption in vivo. Increased isoaspartate damage was attenuated by treatment with betaine. To characterize isoaspartate-damaged proteins that accumulate after ethanol administration, rat liver cytosolic proteins were methylated using exogenous PIMT and 3H-S-adenosylmethionine, and the proteins were resolved by gel electrophoresis. Three major protein bands of ~75-80 kDa, ~95-100 kDa, and ~155-160 kDa were identified by autoradiography. Column chromatography used to enrich isoaspartate-damaged proteins indicated that damaged proteins from ethanol-fed rats were similar to those that accrued in the livers of PIMT knockout (KO) mice. Carbamoyl phosphate synthase-1 (CPS-1) was partially purified and identified as the ~160 kDa protein target of PIMT in ethanol-fed rats and in PIMT KO mice. Analysis of the liver proteome of 4-week ethanol-fed rats and PIMT KO mice demonstrated elevated cytosolic CPS-1 and betaine homocysteine S-methyltransferase-1 when compared to their respective controls, and a significant reduction of carbonic anhydrase-III (CA-III) evident only in ethanol-fed rats. Ethanol feeding of rats for 8 weeks resulted in a larger (~2.3-fold) increase in CPS-1 levels compared to 4-week ethanol feeding, indicating that CPS-1 accumulation correlated with the duration of ethanol consumption. Collectively, our results suggest that elevated isoaspartate and CPS-1, and reduced CA-III levels, could serve as biomarkers of hepatocellular injury.

    The performance and cognitive workload analysis of a multimodal speech and visual gesture (mSVG) UAV control interface

    No full text
    This paper compares the performance and cognitive workload of three UAV control interfaces on an nCA (navigation control autonomy) Tier 1-III flight navigation task. The first interface is the standard RC Joystick (RCJ) controller, the second is the multimodal speech and visual gesture (mSVG) interface, and the third is a modified version of the RCJ interface with altitude, attitude, and position (AAP) assist. The modified RCJ interface was achieved with the aid of the keyboard (KBD). A model of the mSVG interface previously designed and tested was used in this comparison. An experimental study was designed to measure the completion time and navigation accuracy of participants using each of the three interfaces on a purpose-developed test flight path (path_v02). Thirty-seven (37) participants volunteered. The NASA task load index (TLX) survey questionnaire was administered at the end of each interface experiment to assess the participants' experience and to estimate the interface cognitive workload. A commercial software package, the RealFlight Drone Simulator (RFDS), was used to estimate the RCJ skill level of the participants. From the results of the experiment, it was shown that flying hours, the number of months flying, and RFDS Level 4 challenge performance were good estimators of participants' RCJ flying skill level. A two-way result was obtained in the comparison of the RCJ and mSVG interfaces. It was concluded that, although the mSVG was better than the standard RCJ interface, the AAP-assisted RCJ was found to be as effective as (and in some cases better than) the mSVG interface. It was also shown, from the speech-gesture ratio result, that the participants had a preference for gesture over speech when using the mSVG interface. Further work, such as an outdoor field test and a performance comparison at higher nCA levels, was suggested.
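
    The entry does not include code, but as a rough illustration of how a NASA-TLX workload estimate of the kind used in this study is commonly computed, the sketch below derives both the raw TLX (the unweighted mean of the six subscale ratings) and the weighted score from pairwise-comparison tallies. The function names and the sample ratings are illustrative assumptions, not data from the paper.

        # Minimal sketch of a NASA-TLX workload calculation (not the authors' code).
        # Subscale ratings are on a 0-100 scale; weights come from the 15 pairwise
        # comparisons of the six subscales, so the tally sums to 15.

        SUBSCALES = ("mental", "physical", "temporal", "performance", "effort", "frustration")

        def raw_tlx(ratings):
            """Raw TLX (RTLX): unweighted mean of the six subscale ratings."""
            return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

        def weighted_tlx(ratings, tally):
            """Weighted TLX: each rating is scaled by how often its subscale was
            chosen in the 15 pairwise comparisons, then divided by 15."""
            assert sum(tally[s] for s in SUBSCALES) == 15
            return sum(ratings[s] * tally[s] for s in SUBSCALES) / 15.0

        # Example: one hypothetical participant's ratings after flying one interface
        ratings = {"mental": 55, "physical": 20, "temporal": 40,
                   "performance": 25, "effort": 45, "frustration": 30}
        tally = {"mental": 4, "physical": 1, "temporal": 3,
                 "performance": 3, "effort": 3, "frustration": 1}
        print(raw_tlx(ratings), weighted_tlx(ratings, tally))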

    Effects of varying noise levels and lighting levels on multimodal speech and visual gesture interaction with aerobots

    No full text
    This paper investigated the effects of varying noise levels and varying lighting levels on speech and gesture control command interfaces for aerobots. The aim was to determine the practical suitability of the multimodal combination of speech and visual gesture in human aerobotic interaction, by investigating the limits and feasibility of use of the individual components. In order to determine this, a custom multimodal speech and visual gesture interface was developed using the CMU (Carnegie Mellon University) Sphinx and OpenCV (Open source Computer Vision) libraries, respectively. An experimental study was designed to measure the individual effects of each of the two main components of speech and gesture, and 37 participants were recruited to take part. The ambient noise level was varied from 55 dB to 85 dB. The ambient lighting level was varied from 10 Lux to 1400 Lux, under different lighting colour temperature mixtures of yellow (3500 K) and white (5500 K), and different backgrounds for capturing the finger gestures. The results of the experiment, which consisted of around 3108 speech utterances and 999 gesture quality observations, were presented and discussed. It was observed that speech recognition accuracy/success rate falls as noise levels rise, with a 75 dB noise level being the aerobot's practical application limit, as speech control interaction becomes very unreliable beyond this due to poor recognition. It was concluded that multi-word speech commands were more reliable and effective than single-word speech commands. In addition, some speech command words (e.g., land) were more noise resistant than others (e.g., hover) at higher noise levels, due to their articulation. From the results of the gesture-lighting experiment, the effects of both the lighting conditions and the environment background on the quality of gesture recognition were almost insignificant (less than 0.5%). The implication of this is that other factors, such as the gesture capture system design and technology (camera and computer hardware), the type of gesture being captured (upper body, whole body, hand, fingers, or facial gestures), and the image processing technique (gesture classification algorithms), are more important in developing a successful gesture recognition system. Further work was suggested based on the conclusions drawn from these findings, including using alternative ASR (Automatic Speech Recognition) speech models and developing more robust gesture recognition algorithms.
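
    The authors' implementation is not published with this abstract; as a minimal sketch of the kind of CMU Sphinx-based speech command spotter described here, the snippet below uses the SpeechRecognition wrapper around PocketSphinx. Only "land" and "hover" come from the abstract; the rest of the command vocabulary, and the substring-matching step, are illustrative assumptions.

        # Minimal sketch (not the authors' code) of a PocketSphinx command spotter.
        # Requires the SpeechRecognition, pocketsphinx, and pyaudio packages.
        import speech_recognition as sr

        COMMANDS = {"take off", "land", "hover", "move forward", "move back"}

        recognizer = sr.Recognizer()
        with sr.Microphone() as source:
            recognizer.adjust_for_ambient_noise(source)   # calibrate to ambient noise
            print("Say a command...")
            audio = recognizer.listen(source)

        try:
            hypothesis = recognizer.recognize_sphinx(audio).lower()
            matched = [c for c in COMMANDS if c in hypothesis]
            print("heard:", hypothesis, "| matched commands:", matched or "none")
        except sr.UnknownValueError:
            # Typical failure mode at high ambient noise levels
            print("speech not recognized")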

    A practical mSVG interaction method for patrol, search, and rescue aerobots

    No full text
    This paper briefly presents the multimodal speech and visual gesture (mSVG) control for aerobots at higher nCA autonomy levels, using a patrol, search, and rescue application example. The developed mSVG control architecture is presented and briefly discussed. It was successfully tested using both MATLAB simulation and Python-based ROS Gazebo UAV simulations. Some limitations were identified, which formed the basis for the further work presented.

    Multimodal human aerobotic interaction

    No full text
    This chapter discusses HCI interfaces used in controlling aerial robotic systems (otherwise known as aerobots). The autonomy control levels of aerobots are also discussed. Due to the limitations of existing models, a novel classification model of autonomy, specifically designed for multirotor aerial robots and called the navigation control autonomy (nCA) model, is developed. Unlike existing models such as the AFRL and ONR models, this model is presented in tiers and has a two-dimensional pyramidal structure. The model is able to identify the control void existing beyond tier-one autonomy component modes and to map the upper and lower limits of control interfaces. Two solutions are suggested for dealing with the existing control void and the limitations of the RC joystick controller: the multimodal HHI-like interface and the unimodal BCI interface. In addition, some human-factors-based performance measurements are recommended, and plans for further work are presented.
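
    The chapter's actual tier names and boundaries are not given in this abstract, so the sketch below is only an illustration of the general idea it describes: mapping each control interface to the span of autonomy tiers it can serve, and reporting the tiers left uncovered (the "control void"). All tier numbers, labels, and function names are placeholders.

        # Illustrative sketch only; tier numbers and labels are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class InterfaceCoverage:
            name: str
            lowest_tier: int   # lowest autonomy tier the interface can control
            highest_tier: int  # highest autonomy tier the interface can control

        interfaces = [
            InterfaceCoverage("RC joystick", 1, 1),               # low-level manual control
            InterfaceCoverage("multimodal HHI-like (mSVG)", 2, 4),
            InterfaceCoverage("unimodal BCI", 2, 4),
        ]

        def control_void(interfaces, max_tier):
            """Return the tiers not covered by any of the given interfaces."""
            covered = set()
            for i in interfaces:
                covered.update(range(i.lowest_tier, i.highest_tier + 1))
            return sorted(set(range(1, max_tier + 1)) - covered)

        # Void left by the joystick alone, before adding the suggested interfaces
        print(control_void(interfaces[:1], max_tier=4))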

    Quantifying the effects of varying light-visibility and noise-sound levels in practical multimodal speech and visual gesture (mSVG) interaction with aerobots

    No full text
    This paper discusses the research work conducted to quantify the effective range of lighting levels and ambient noise levels, in order to inform the design and development of a multimodal speech and visual gesture (mSVG) control interface for the control of a UAV. Noise level variation from 55 dB to 85 dB is observed under controlled lab conditions to determine where speech commands for a UAV fail, to consider why, and to suggest a possible solution. Similarly, lighting levels are varied under the same controlled lab conditions to determine a range of effective visibility levels. The limitations of this work and some further work arising from it are also presented.
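
    As a hedged sketch of the kind of analysis such a quantification study implies, the snippet below bins logged trials by ambient noise level, computes the recognition success rate per level, and reports the highest level that still meets a chosen reliability threshold. The trial data and threshold are made up for illustration; they are not the paper's results.

        # Hypothetical analysis sketch (not the paper's code or data).
        from collections import defaultdict

        # (noise level in dB, command recognized correctly?) -- illustrative values
        trials = [(55, True), (55, True), (65, True), (65, False),
                  (75, True), (75, False), (85, False), (85, False)]

        def success_by_level(trials):
            counts = defaultdict(lambda: [0, 0])          # level -> [successes, total]
            for level, ok in trials:
                counts[level][0] += int(ok)
                counts[level][1] += 1
            return {level: s / n for level, (s, n) in sorted(counts.items())}

        def practical_limit(rates, threshold=0.5):
            """Highest noise level whose success rate still meets the threshold."""
            usable = [level for level, rate in rates.items() if rate >= threshold]
            return max(usable) if usable else None

        rates = success_by_level(trials)
        print(rates, "practical limit:", practical_limit(rates), "dB")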

    The multimodal speech and visual gesture (mSVG) control model for a practical patrol, search, and rescue aerobot

    No full text
    This paper describes a model of the multimodal speech and visual gesture (mSVG) control for aerobots operating at higher nCA autonomy levels, within the context of a patrol, search, and rescue application. The developed mSVG control architecture, its mathematical navigation model, and some high-level command operation models are discussed. The model was successfully tested using both MATLAB simulation and Python-based ROS Gazebo UAV simulations. Some limitations were identified, which formed the basis for the further work presented.
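
    The paper's actual navigation equations are not reproduced in this abstract, so the sketch below is only a generic kinematic interpretation of a high-level mSVG command of the form (direction, distance), mapped to a new waypoint from the UAV's current pose. The function name, supported directions, and frame convention are assumptions made for illustration.

        # Illustrative sketch only: generic command-to-waypoint mapping.
        import math

        def next_waypoint(x, y, z, yaw_rad, command):
            """Map a parsed high-level command to a target waypoint (x, y, z)."""
            direction, value = command            # e.g. ("forward", 2.0) metres
            if direction == "forward":
                return (x + value * math.cos(yaw_rad), y + value * math.sin(yaw_rad), z)
            if direction == "right":              # body-frame right = heading rotated -90 deg
                return (x + value * math.sin(yaw_rad), y - value * math.cos(yaw_rad), z)
            if direction == "up":
                return (x, y, z + value)
            raise ValueError(f"unknown direction: {direction}")

        # Example: from (0, 0, 1.5) with a 90-degree heading, "forward 2 metres"
        # moves the target roughly 2 m along the +y axis.
        print(next_waypoint(0.0, 0.0, 1.5, math.radians(90), ("forward", 2.0)))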