Reorganization of language centers in patients with brain tumors located in eloquent speech areas – A pre- and postoperative preliminary fMRI study
Introduction
The aim of this study was to determine, in pre- and postsurgical fMRI examinations, the rearrangement of Broca's and Wernicke's areas and the lateralization index for these areas in patients with brain tumors located near the speech centers. The impact of surgical treatment on brain plasticity was also evaluated.
Materials and methods
Pre- and postoperative fMRI examinations were performed in 10 patients with low-grade, left-sided glial brain tumors located close to Broca's (5 patients) or Wernicke's (5 patients) area. The BOLD signal was recorded in the regions of interest: Broca's and Wernicke's areas and their right-sided anatomic homologues.
Results
In the preoperative fMRI study, the left Broca's area was activated in all cases. The right Broca's area was activated in all patients with no speech disorders. In the postoperative fMRI, activation of both Broca's areas increased in two cases. In two other cases, activation of one Broca's area increased along with a decrease in the contralateral hemisphere.
In all patients with temporal lobe tumors, the right Wernicke's area was activated in both the pre- and postsurgical fMRI. After the operation, in two patients with speech disorders, activation of both Broca's areas decreased and activation of one of the Wernicke's areas increased.
Conclusions
In the cases of tumors localized near the left Broca's area, a transfer of function to the healthy hemisphere seems to take place. Resection of tumors located near Broca's or Wernicke's areas may lead to relocation of the brain's language centers.
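The lateralization index referred to above is conventionally computed from the amount of activation (e.g. activated-voxel counts) in homologous left and right regions of interest. A minimal sketch of that standard formula, with illustrative values that are not taken from the study:

```python
def lateralization_index(left_activation: float, right_activation: float) -> float:
    """Standard LI = (L - R) / (L + R); +1 means fully left-lateralized,
    -1 fully right-lateralized, 0 perfectly bilateral."""
    total = left_activation + right_activation
    if total == 0:
        raise ValueError("no activation in either ROI")
    return (left_activation - right_activation) / total

# Illustrative example: 120 activated voxels in the left Broca's ROI,
# 40 in its right-sided homologue -> left-dominant language.
li = lateralization_index(120, 40)  # 0.5
```

A postoperative drop in this index toward zero or negative values would reflect the transfer of function to the contralateral hemisphere described in the conclusions.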
Variable structure robot control systems: The RAPP approach
This paper presents a method of designing variable-structure control systems for robots. On-board robot computational resources are limited, yet in some cases the demands imposed on the robot by the user are virtually limitless; the solution is to produce a variable-structure system. Because the task governs the activities of the robot, the task-dependent part has to be exchangeable. Thus not only must some task-dependent modules be exchanged, but supervisory responsibilities must also be switched. Such control systems are necessary in the case of robot companions, whose owners may demand that they provide many services.
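The idea of a controller whose task-dependent part is exchanged at runtime, with the installed task then governing the robot's activities, can be sketched with a simple strategy-style structure. All names and behaviours below are illustrative inventions, not the RAPP framework's actual API:

```python
from typing import Protocol

class TaskBehaviour(Protocol):
    """Task-dependent part of the controller, exchangeable at runtime."""
    def step(self, state: dict) -> str: ...

class DeliverDrink:
    def step(self, state: dict) -> str:
        return "hand-over" if state.get("holding") else "navigate-to-kitchen"

class CleanTable:
    def step(self, state: dict) -> str:
        return "wipe"

class Controller:
    """Fixed part of the system; the exchanged task module takes over
    the supervisory role and decides the robot's next action."""
    def __init__(self, task: TaskBehaviour):
        self.task = task

    def switch_task(self, task: TaskBehaviour) -> None:
        self.task = task  # the variable-structure switch

    def decide(self, state: dict) -> str:
        return self.task.step(state)

ctrl = Controller(DeliverDrink())
a1 = ctrl.decide({"holding": False})  # first task governs the action
ctrl.switch_task(CleanTable())        # structure of the system changes
a2 = ctrl.decide({})
```

The point of the sketch is only the shape of the design: swapping the task module changes not just a parameter but which component supervises the robot.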
Mixing deep learning with classical vision for object recognition
Nowadays, when one needs a system for image recognition, it is mostly a matter of finding a pre-trained CNN and, sometimes, adding further training based on transferred knowledge. Accurate 6-DOF object localization in the image is a more laborious task and requires more complex training data. On the other hand, if the model of the object is known, it is straightforward to recover its pose from the image (RGB or RGB-D). In this paper, we try to show the advantages of mixing deep-learning object recognition/detection with classical 6-DOF pose-estimation algorithms, with a focus on applications in service robotics.
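The two-stage idea, where a learned detector proposes an image region and a classical geometric step recovers 3D information, can be illustrated in its simplest form: back-projecting the detected bounding-box centre through the pinhole camera model using the RGB-D depth channel. The detection coordinates and intrinsics below are made up for illustration, not taken from the paper:

```python
import numpy as np

def backproject(u: float, v: float, depth: float,
                fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Classical pinhole back-projection: pixel (u, v) observed at a
    given depth maps to a 3D point in the camera frame."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Hypothetical CNN detector output: bounding-box centre at (320, 240);
# depth read from the RGB-D frame: 0.8 m; example sensor intrinsics.
point = backproject(320, 240, 0.8, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```

Full 6-DOF pose recovery would additionally fit the known object model to the depth points inside the detected region, but the division of labour is the same: learning finds *where*, geometry finds *how posed*.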
ModReg: A Modular Framework for RGB-D Image Acquisition and 3D Object Model Registration
RGB-D sensors have become a standard in robotic applications requiring object recognition, such as object grasping and manipulation. A typical object recognition system relies on matching features extracted from RGB-D images retrieved from the robot's sensors with the features of the object models. In this paper we present ModReg: a system for the registration of 3D models of objects. The system consists of modular software associated with a multi-camera setup, supplemented with an additional pattern projector, used for the registration of high-resolution RGB-D images. The objects are placed on a fiducial board with two dot patterns enabling the extraction of masks of the placed objects and the estimation of their initial poses. The acquired dense point clouds, constituting subsequent object views, undergo pairwise registration and are finally optimized with a graph-based technique derived from SLAM. The combination of all these elements results in a system able to generate consistent 3D models of objects.
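The pairwise-registration step, given point correspondences between two views, reduces to the closed-form least-squares rigid alignment (the Kabsch algorithm). A minimal sketch of that classical building block, not the ModReg implementation itself:

```python
import numpy as np

def rigid_align(src: np.ndarray, dst: np.ndarray):
    """Closed-form least-squares rotation R and translation t such that
    R @ src + t best matches dst (Kabsch algorithm; points as 3xN columns)."""
    mu_s = src.mean(axis=1, keepdims=True)
    mu_d = dst.mean(axis=1, keepdims=True)
    H = (src - mu_s) @ (dst - mu_d).T              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Sanity check: recover a known 90-degree rotation about z plus a shift.
src = np.array([[0, 1, 0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1]], dtype=float)
Rz = np.array([[0, -1, 0],
               [1,  0, 0],
               [0,  0, 1]], dtype=float)
dst = Rz @ src + np.array([[1.0], [2.0], [3.0]])
R, t = rigid_align(src, dst)
```

In a full pipeline such pairwise estimates become edges of a pose graph, which the SLAM-derived global optimization then makes mutually consistent.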
Efficient generation of 3D surfel maps using RGB-D sensors
The article focuses on the problem of building dense 3D occupancy maps using commercial RGB-D sensors and the SLAM approach. In particular, it addresses the problem of 3D map representations, which must both store millions of points and offer efficient update mechanisms. The proposed solution consists of two key elements, visual odometry and surfel-based mapping, but it contains substantial improvements: storing the surfel maps in octree form and utilizing a frustum-culling-based method to accelerate the map update step. The performed experiments verify the usefulness and efficiency of the developed system.
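Frustum culling for map updates boils down to testing which stored points project inside the current camera's image bounds and depth range; only those need to be touched when a new frame arrives. A minimal point-in-frustum sketch with illustrative intrinsics (in the actual system the test would be applied per octree node rather than per point):

```python
import numpy as np

def in_frustum(points: np.ndarray, fx: float, fy: float, cx: float, cy: float,
               width: int, height: int, z_near: float, z_far: float) -> np.ndarray:
    """Boolean mask over Nx3 camera-frame points: True where the point
    projects inside the image and lies within the sensor's depth range."""
    z = points[:, 2]
    with np.errstate(divide="ignore", invalid="ignore"):
        u = fx * points[:, 0] / z + cx
        v = fy * points[:, 1] / z + cy
    return ((z > z_near) & (z < z_far) &
            (u >= 0) & (u < width) & (v >= 0) & (v < height))

pts = np.array([[0.0, 0.0, 1.0],    # straight ahead -> visible
                [0.0, 0.0, -1.0],   # behind the camera -> culled
                [5.0, 0.0, 1.0]])   # far off to the side -> culled
mask = in_frustum(pts, 525.0, 525.0, 319.5, 239.5, 640, 480, 0.3, 5.0)
```

Applied to octree nodes, a single rejected node skips all the surfels it contains, which is what makes the update step fast.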
Peripheral arterial response during haemodialysis – is two-dimensional speckle-tracking a useful arterial reactivity assessment tool?
The reaction of arteries to haemodialysis: can a change in the cross-sectional area be an important parameter in the assessment of the vessels' condition?
Purpose: The objectives of our study were to evaluate the changes in the cross-section area of the carotid and femoral arteries caused by fluid loss during haemodialysis (HD) and to determine the direction and magnitude of these changes. Material and methods: Seventy-four HD patients (28 women and 46 men) were studied. We performed ultrasound exams of the distal common carotid and proximal femoral arteries in each patient before and after an HD session. The recorded exams were analysed using EchoPac software, and arterial cross-section area values were acquired for further analysis. Results: We found a statistically significant decrease in arterial systolic cross-section area after HD sessions (carotid artery area: 0.6731 cm2 before vs. 0.6333 cm2 after HD, p = 0.00001; femoral artery area: 0.8263 cm2 before vs. 0.7635 cm2 after HD, p = 0.00001). The decrease in systolic carotid cross-section area correlated with the amount of fluid lost during the HD session (correlation coefficient 0.3122, p = 0.010) and with the percentage of body mass lost during HD (correlation coefficient 0.3577, p = 0.003). No statistically significant correlations were found for the femoral cross-section area. Conclusions: Our findings suggest that the arterial cross-section area may be used in the assessment of the response to body fluid loss. We were able to measure changes due to fluid loss during the HD session. The carotid cross-section values decreased after the procedure and correlated with the amount of fluid lost during the HD session.
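The reported associations are correlation coefficients between the per-session area decrease and the fluid removed; the Pearson coefficient itself is straightforward to reproduce. The values below are made-up illustrative data, not the study's measurements:

```python
import numpy as np

def pearson_r(x, y) -> float:
    """Pearson correlation coefficient between two equal-length samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Illustrative values only: fluid removed per session (litres) vs.
# decrease in carotid systolic cross-section area (cm2).
fluid_loss = [1.5, 2.0, 2.5, 3.0, 3.5]
area_drop = [0.02, 0.03, 0.03, 0.05, 0.06]
r = pearson_r(fluid_loss, area_drop)
```

With 74 patients, a coefficient around 0.31-0.36, as reported, is modest but reaches the quoted significance levels.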
Agent Structure of Multimodal User Interface to the National Cybersecurity Platform – Part 1
This two-part article presents an interface to the National Cybersecurity Platform (NPC). It uses gestures and voice commands to control the operation of the platform. This part of the article presents the structure of the interface and the way it operates, and discusses issues related to its implementation. The interface was specified using an embodied-agent-based approach, showing that this approach can be used not only to create robotic systems, for which it had been employed many times before. To adapt the approach to agents operating on the boundary between the physical environment and cyberspace, the monitor screen had to be treated as part of the environment, while windows and cursors were treated as elements of the agents. As a result, a very clear structure of the designed system was obtained. The second part of the article presents the algorithms used for speech, speaker, and gesture recognition, as well as the results of testing these algorithms.
This two-part paper presents an interface to the National Cybersecurity Platform utilising
gestures and voice commands as the means of interaction between the operator and the platform.
Cyberspace and its underlying infrastructure are vulnerable to a broad range of risks stemming from
diverse cyber-threats. The main role of this interface is to support security analysts and operators
controlling the visualisation of cyberspace events, such as incidents or cyber-attacks, especially when
manipulating graphical information. The main visualisation control modalities are gesture- and voice-based
commands. Thus the design of gesture recognition and speech-recognition modules is provided. The
speech module is also responsible for speaker identification in order to limit the access to trusted users
only, registered with the visualisation control system. This part of the paper focuses on the structure
and the activities of the interface, while the second part concentrates on the algorithms employed for
the recognition of gestures, voice commands, and speakers.