170 research outputs found

    Towards Developing Gripper to obtain Dexterous Manipulation

    Artificial hands or grippers are essential elements of many robotic systems: humanoid, industrial, social, space, and mobile robots, surgical platforms, and so on. As humans, we use our hands in many ways and can perform various maneuvers, such as writing or altering the posture of an object in-hand, without difficulty. Most of our daily activities depend on the prehensile and non-prehensile capabilities of our hands. The human hand is therefore the central motivation for grasping and manipulation research, and it has been studied from many perspectives: complex actuation design, synergies, the use of soft materials, sensing, and more. However, reproducing in a grasping device both the hand's adaptability to a plurality of objects and its in-hand manipulation capabilities is not easy, and no developed gripper has fully achieved it. Industrial researchers primarily use rigid materials and heavy actuators, designing for repeatability and reliability to meet dexterity, precision, and cycle-time requirements, so the flexibility needed to manipulate an object in-hand is typically absent. Anthropomorphic hands, on the other hand, are generally built from soft materials; however, they are rarely deployed for manipulation, mainly because their numerous sensors and the control complexity of under-actuated mechanisms significantly reduce the speed demanded by industry. Hence, developing artificial hands or grippers with prehensile capabilities and dexterity similar to the human hand is challenging, and it calls for combined contributions from multiple disciplines such as kinematics, dynamics, control, and machine learning. As a result, the capabilities of artificial hands have generally been constrained to specific tasks according to their target applications, such as grasping (in biomimetic hands) or speed and precision in pick-and-place (in industrial grippers). 
Robotic grippers developed during the last decades have mostly aimed to solve the grasping of various objects as their primary objective. However, growing industrial demands raise issues that remain unsolved, such as in-hand manipulation and placing an object with an appropriate posture. Operations like twisting or altering the orientation of an object within-hand require significant dexterity of the gripper, which must first be achieved through a compact mechanical design. Along with manipulation, speed is required in many robotic applications; for its available speed and design simplicity, nonprehensile or dynamic manipulation is widely exploited. The nonprehensile approach, however, generally does not ensure stable grasping, and it often exceeds the robot's kinematic workspace, which additionally requires high-speed feedback and robust control. Hence, these approaches are inapplicable when the requirements are grasp-oriented, such as precisely changing the posture of a payload in-hand and then placing it according to a strict final configuration. Moreover, for a critical payload such as an egg, the contacts between gripper and payload cannot be completely broken during manipulation. Theoretical analyses such as contact kinematics and grasp stability cannot predict nonholonomic behaviors, so uncertainties always remain that restrict a maneuver even when the gripper is capable of the task. From a technical point of view, in-hand manipulation, or within-hand dexterity, largely decouples grasping and manipulation skills from dependencies on contact type, a priori knowledge of the object model, initial and final configurations, and environmental constraints such as disturbances that may cause contacts between object and finger to break. 
Hence, in-hand manipulation is an important property for a gripper aiming at human-hand skill. In this research, these problems (obtaining speed, flexibility across a plurality of grasps, and within-hand dexterity in a single gripper) have been tackled in a novel way. A gripper platform named Dexclar (DEXterous reConfigurable moduLAR) has been developed to study in-hand manipulation, initially for a generic spherical payload. Dexclar is mechanism-centric: it exploits modularity and reconfigurability to achieve within-hand dexterity rather than relying on soft materials, so precision and speed are also achievable from the platform. The platform can perform several grasps (pinching, form closure, force closure) and addresses the important issue of releasing the payload with a desired final posture/configuration after manipulation. By exploiting 16 degrees of freedom (DoF), Dexclar can impart 6-DoF motion to a generic spherical or ellipsoidal payload. Since a properly synthesized mechanism is reliable and repeatable, precision and speed are obtainable as well, making Dexclar an ideal starting point for studying within-hand dexterity from a kinematic point of view. As the final aim is to develop specific grippers with the above capabilities by exploiting Dexclar, a highly dexterous but simply constructed reconfigurable platform named VARO-fi (VARiable Orientable fingers with translation) is proposed, which can serve as an industrial end-effector as well as an alternative to bio-inspired grippers in many robotic applications. The robust four-fingered VARO-fi addresses grasping, in-hand manipulation, and release (of the payload in a desired configuration) for a plurality of payloads, as demonstrated in this thesis. 
Last but not least, several tools and end-effectors have been constructed to study prehensile and non-prehensile manipulation, thanks to the Bayer Robotic Challenge 2017, where their feasibility and potential for use in an industrial environment were validated. The research described above opens a new dimension for designing grippers that are both dexterous and flexible, without explicit theoretical analysis or algorithms, which are difficult to implement and sometimes infeasible on real systems.

    PATTERN RECOGNITION IN CLASS IMBALANCED DATASETS

    Class-imbalanced datasets constitute a significant portion of machine learning problems of interest, where recognizing the ‘rare class’ is the primary objective for most applications. Traditional linear machine learning algorithms are often not effective at recognizing the rare class. In this research work, a specifically optimized feed-forward artificial neural network (ANN) is proposed and developed to train on moderately to highly imbalanced datasets. The proposed methodology addresses the classification task in multiple stages: optimizing the training dataset, modifying the kernel function used to generate the Gram matrix, and optimizing the NN structure. First, the training dataset is extracted from the available sample set through an iterative process of selective under-sampling. Then, the proposed ANN incorporates a kernel-function optimizer that enhances class boundaries for imbalanced datasets by conformally transforming the kernel functions. Finally, a single-hidden-layer weighted neural network structure is proposed to train models from the imbalanced dataset. The proposed NN architecture is derived to classify any binary dataset, even with a very high imbalance ratio, given appropriate parameter tuning and a sufficient number of processing elements. The effectiveness of the proposed method is tested on accuracy-based performance metrics, achieving close to and above 90% on several imbalanced datasets of generic nature, and is compared with state-of-the-art methods. The proposed model is also used to classify a 25 GB computed tomographic colonography database to test its applicability to big data. The effectiveness of under-sampling and of kernel optimization for training the NN model from the modified kernel Gram matrix representing the imbalanced data distribution is also analyzed experimentally. A computation-time analysis shows the feasibility of the system for practical purposes. 
This report concludes with a discussion of the prospects of the developed model and suggestions for further development work in this direction.
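The first stage above, balancing the training set by under-sampling the majority class, can be sketched as follows. This is a minimal, non-iterative stand-in for the thesis's selective under-sampling (the function name and the toy 100:10 dataset are illustrative, not from the work itself):

```python
import random

def undersample_majority(features, labels, seed=0):
    """Balance a binary dataset by randomly under-sampling the majority
    class down to the size of the rare class. A simplified sketch of the
    idea; the thesis uses an iterative, selective procedure instead."""
    rng = random.Random(seed)
    pos = [i for i, y in enumerate(labels) if y == 1]   # assume 1 = rare class
    neg = [i for i, y in enumerate(labels) if y == 0]
    if len(pos) > len(neg):                             # make `pos` the smaller class
        pos, neg = neg, pos
    kept = pos + rng.sample(neg, len(pos))              # keep all rare, sample majority
    rng.shuffle(kept)
    return [features[i] for i in kept], [labels[i] for i in kept]

# toy 100:10 imbalanced dataset reduced to a balanced 10:10 training set
X = [[float(i)] for i in range(110)]
y = [0] * 100 + [1] * 10
Xb, yb = undersample_majority(X, y)
```

The kernel stage would then build the Gram matrix from this balanced subset, conformally rescaling the kernel near the class boundary.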

    Analyzing the Fundamental Aspects and Developing a Forecasting Model to Enhance the Student Admission and Enrollment System of MSOM Program

    A forecasting model, associated with predictive analysis, is an elementary requirement for academic leaders to plan course requirements. The M.S. in Operations Management (MSOM) program at the University of Arkansas wants to understand future student enrollment more accurately. The available literature shows an absence of forecasting models based on combined quantitative, qualitative, and predictive analysis. This study develops a combined forecasting model focusing on three admission stages. The research uses simple regression, Delphi analysis, Analysis of Variance (ANOVA), and a classification tree to develop the models. It predicts that 272, 173, and 136 new students will apply, matriculate, and enroll in the MSOM program during Fall 2017, respectively. In addition, the predictive analysis reveals that 45% of applicants do not enroll in the program. The tuition fee of the program is negatively associated with student enrollment and significantly influences individuals’ decisions. Moreover, students’ enrollment in the program is distributed over six semesters after matriculation. The classification tree indicates that 61% of applicants with non-military status will join the program. Based on these outcomes, the study proposes a set of recommendations to improve the admission process.
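The simple-regression stage of such a model amounts to an ordinary least-squares trend fit over historical counts. The sketch below illustrates this with hypothetical application counts (the numbers are invented for the example, not the program's actual data):

```python
def linear_forecast(series, steps_ahead=1):
    """Fit y = a + b*t by ordinary least squares over t = 0..n-1
    and extrapolate `steps_ahead` periods past the last observation."""
    n = len(series)
    ts = list(range(n))
    mean_t = sum(ts) / n
    mean_y = sum(series) / n
    b = sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, series)) / \
        sum((t - mean_t) ** 2 for t in ts)
    a = mean_y - b * mean_t
    return a + b * (n - 1 + steps_ahead)

# hypothetical application counts for five prior fall terms
apps = [180, 200, 215, 240, 255]
next_fall = linear_forecast(apps)   # trend-based forecast for the next term
```

The study layers Delphi adjustments and a classification tree on top of a trend estimate like this one to reach its stage-by-stage predictions.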

    Application And Prospect Of Orthogonal On-Off Keying As An Error Control Coding In Laser Communication

    In this thesis work, a bandwidth-efficient coded modulation technique is proposed for laser-based communication. It offers reliable error correction and bandwidth optimization for free-space connectivity. Laser communication has been emerging as a potential alternative to Radio Frequency (RF) communication for long-haul connectivity, providing several advantages over RF systems: high-speed data transfer, higher bandwidth, robust security, immunity to electromagnetic interference, and lower implementation cost. However, like any other communication system, it requires an error-control system that detects and corrects errors in the data transmission. The proposed technique blends well with laser communication systems to provide good error-correction capability; for a decent code length it can correct around 24 percent of errors. The proposed method, named Orthogonal On-Off Keying of laser, uses a bi-orthogonal matrix to generate the encoded data for the laser. While decoding, it uses the cross-correlation between orthogonal codes to detect and correct any errors introduced during transmission. Orthogonal codes, also known as Walsh codes, are used for error correction in Code Division Multiple Access (CDMA) RF communication, but they have not been investigated for laser systems. Besides on-off keying, this code also has enormous potential in other modulation techniques. Here, the error-control performance has been verified using an experimental setup. Furthermore, a method for more bandwidth-efficient transmission of data using the laser is proposed and discussed: this bit-splitting method offers the bandwidth efficiency of a unity code rate. 
Altogether, the proposed technique is a promising solution for error detection and correction and at the same time a bandwidth-efficient system. Another advantage of orthogonal coding is its self-synchronization capability, since the modulated signals share an orthogonal space: every code in the orthogonal matrix is unique by its properties and can be identified separately. As a result, no synchronization bits are required during transmission, which reduces implementation complexity and thus yields economic savings. Like the orthogonal matrix, traditional block coding also operates on blocks of information bits: the data is segmented into blocks of k bits, and each block is transformed into a larger block of n bits by adding horizontal and vertical parity bits, denoted an (n,k) block code. The problem with this type of coding is that the added (n-k) bits carry no information and only help in error detection and correction, yet such a block can detect and correct only one transmission error and fails if more than one error occurs. In orthogonal coding, by contrast, the data is mapped using an orthogonal table whose codes have unique properties. No redundant bits need to be added, as the codes possess an inherent parity-generation property, and the distance property of orthogonal codes strengthens error detection. For codes of greater length, this approach is capable of correcting more than one error. The test bed is implemented using a laser transmitter and a hardware interface that includes a computer to capture and transmit the data. Operations such as data capture, modulation, coding, and injection of errors are carried out by software written in MATLAB®. On the receiver side, a high-speed photo-detector is connected through a hardware interface to another computer, which runs the other part of the program to receive the bits and decode them to extract the transmitted data. 
To test the error-control capability, errors are intentionally introduced by altering a number of bits in the modulated signal. Upon reception, the data is compared with the transmitted bits and evaluated. The tests cover 8-, 16-, 32-, and 64-bit orthogonally mapped data, several transmission speeds, and a range of error percentages. All the results were compared and investigated for prediction and error tolerance.
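The encode/inject-error/decode cycle described above can be sketched as follows. This is a minimal illustration of bi-orthogonal coding with correlation decoding, not the thesis's MATLAB implementation: it builds a Sylvester Hadamard matrix, forms the bi-orthogonal code (the Hadamard rows and their complements), maps ±1 chips to on-off keying, flips a chip, and decodes by picking the codeword with the highest cross-correlation. With 8-chip codewords the minimum distance is n/2 = 4, so one flipped chip per codeword is corrected:

```python
def hadamard(n):
    """Sylvester construction: n x n matrix of +/-1, n a power of two."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def biorthogonal_code(n):
    """Rows of H and -H: 2n codewords of length n, min distance n/2,
    so each codeword carries log2(2n) data bits."""
    H = hadamard(n)
    return H + [[-x for x in row] for row in H]

def encode(symbol, code):
    # map +/-1 chips to on-off keying (1 = laser on, 0 = laser off)
    return [(c + 1) // 2 for c in code[symbol]]

def decode(chips, code):
    # remap received 0/1 chips to +/-1 and correlate against every
    # codeword; the best match wins, correcting up to (n/2 - 1) // 2 flips
    r = [2 * c - 1 for c in chips]
    scores = [sum(a * b for a, b in zip(r, cw)) for cw in code]
    return scores.index(max(scores))

code = biorthogonal_code(8)     # 16 codewords -> 4 data bits per codeword
tx = encode(9, code)
rx = list(tx)
rx[3] ^= 1                      # inject a single chip error
recovered = decode(rx, code)    # correlation decoding recovers symbol 9
```

Longer codewords raise the minimum distance and hence the number of correctable chips per block, which is the behavior the thesis's 8- to 64-bit sweep measures.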

    Practical AI Value Alignment Using Stories

    As more machine learning agents interact with humans, there is an increasing prospect that an agent trained to perform a task optimally, using only a measure of task performance as feedback, can violate societal norms for acceptable behavior or cause harm. Consequently, it becomes necessary not only to prioritize task performance but also to ensure that AI actions do not have detrimental effects. Value alignment is a property of intelligent agents wherein they solely pursue goals and activities that are non-harmful and beneficial to humans. Current approaches to value alignment largely depend on imitation learning or learning-from-demonstration methods. However, the dynamic nature of values makes it difficult to learn them through imitation-learning-based approaches. To overcome these limitations, in this work we introduce a complementary technique in which a value-aligned prior is learned from naturally occurring stories that embody societal norms. This prior can detect normative and non-normative behavior as well as describe the underlying social norms associated with these behaviors. To train our models, we sourced data from the children’s educational comic strip Goofus & Gallant. Additionally, we built another dataset using a crowdsourcing platform, created specifically to identify the norms or principles exhibited in the actions depicted in the comic strips. To build a normative prior model, we trained multiple machine learning models to classify natural-language descriptions and visual demonstrations of situations found in the comic strip as either normative or non-normative, and into different social norms. Finally, to train a value-aligned agent, we introduce a reinforcement-learning-based method in which an agent is trained with two reward signals: a standard task-performance reward plus a normative-behavior reward. 
The test environment provides the standard task-performance reward, while the normative-behavior reward is derived from the value-aligned prior model. We show how variations on a policy-shaping technique can balance these two sources of reward and produce policies that are both effective and perceived as more normative. We test our value-alignment technique on different interactive text-based worlds; each world is designed to challenge the agent with a task as well as provide opportunities to deviate from the task to engage in normative and/or altruistic behavior.
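The two-signal scheme can be illustrated in miniature. Below, a tiny epsilon-greedy bandit stands in for the text-world agent: its feedback is the task reward plus a weighted normative reward, echoing the blending of the two sources. The reward numbers and the weight are hypothetical, and this averaging learner is far simpler than the policy-shaping method the work actually uses:

```python
import random

def train_bandit(task_r, norm_r, weight, episodes=500, eps=0.1, seed=0):
    """Epsilon-greedy value estimates trained on a combined reward:
    task reward plus `weight` times the normative reward."""
    rng = random.Random(seed)
    q = [0.0] * len(task_r)
    counts = [0] * len(task_r)
    for _ in range(episodes):
        a = rng.randrange(len(q)) if rng.random() < eps else q.index(max(q))
        r = task_r[a] + weight * norm_r[a]   # the two reward signals, blended
        counts[a] += 1
        q[a] += (r - q[a]) / counts[a]       # incremental mean update
    return q

task_reward = [1.0, 0.8]    # action 0 finishes the task slightly better...
norm_reward = [-1.0, 0.0]   # ...but violates a norm (hypothetical values)

q_task_only = train_bandit(task_reward, norm_reward, weight=0.0)
q_shaped    = train_bandit(task_reward, norm_reward, weight=0.5)
```

With no normative weight the agent prefers the norm-violating action; with a modest weight the combined signal tips the greedy choice to the normative one, which is the qualitative effect the policy-shaping experiments measure.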

    8-Year-Old Child with Cerebral Palsy Treated with Pelvic Osteotomies Using 3.5 mm Blade Plate Having Subsequent Bilateral Implant Aseptic Loosening: A Case Report

    Background: Cerebral palsy (CP) is a central problem of the brain, caused by neurological insult, that affects muscle posture, tone, and movement, resulting in poor motor control and dysfunctional muscle balance affecting the hip joints of the growing child. Surgical treatment of hip and, if present, acetabular dysplasia addresses the femoral neck-shaft angle, appropriate muscle lengthening, and any deficiency of acetabular coverage, as necessary. Surgeons perform proximal femoral osteotomies (PFOs) mostly with fixed angled blade plates (ABPs), with proven success; the ABP technique is common but requires detailed attention to perform and to teach. The case: An eight-year-old ambulatory patient with CP underwent bilateral proximal varus femoral derotational and pelvic osteotomies for a neuromuscular hip condition with a 3.5 mm Locking Cannulated Blade System (OP-LCP) by OrthoPediatrics Corp, instead of the conventional 4.5 mm ABP procedure, resulting in aseptic loosening. Conclusion: Due to the child’s underdeveloped posture, the surgeon utilized the 3.5 mm instrumentation as a child-size implant, which sufficed for the surgery but might not have loosened had a similar child-size blade-plate system with 4.5 mm screws been implanted. While both the ABP and OP-LCP systems are effective and safe for internal correction in PFOs, the OP-LCP system may help residents learn the procedure with greater confidence, fewer technical inaccuracies, and refined outcomes. Both systems are safe and viable for the treatment of neuromuscular hip conditions.

    Machine Tool Communication (MTComm) Method and Its Applications in a Cyber-Physical Manufacturing Cloud

    The integration of cyber-physical systems and cloud manufacturing has the potential to revolutionize existing manufacturing systems by enabling better accessibility, agility, and efficiency. To achieve this, it is necessary to establish a communication method for manufacturing services over the Internet, so that physical machines can be accessed and managed from cloud applications. Most existing industrial automation protocols rely on Ethernet-based Local Area Networks (LANs) and are not designed specifically for Internet-enabled data transmission. Recently, MTConnect has been gaining popularity as a standard for monitoring the status of machine tools through RESTful web services and an XML-based messaging structure, but it is designed only for data collection and interpretation and lacks remote operation capability. This dissertation presents the design, development, optimization, and applications of a service-oriented, Internet-scale communication method named Machine Tool Communication (MTComm) for exchanging manufacturing services in a Cyber-Physical Manufacturing Cloud (CPMC), enabling manufacturing with heterogeneous, physically connected machine tools from geographically distributed locations over the Internet. MTComm uses an agent-adapter-based architecture and a semantic ontology to provide both remote monitoring and operation capabilities through RESTful services and XML messages. MTComm was successfully used to develop and implement multi-purpose applications in a CPMC, including remote and collaborative manufacturing, active-testing-based and edge-based fault diagnosis and maintenance of machine tools, and cross-domain interoperability between Internet-of-Things (IoT) devices and supply chain robots. 
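On the monitoring side, a cloud client of such a method would fetch an XML status document over REST and parse out machine state. The sketch below shows only the parsing step with the standard library; the element names and attributes are invented for illustration and do not reflect MTComm's or MTConnect's actual schemas:

```python
import xml.etree.ElementTree as ET

# hypothetical status response a monitoring client might receive via a
# RESTful GET; real MTComm/MTConnect message structures differ
response = """\
<MachineStatus machine="mill-01">
  <Availability>AVAILABLE</Availability>
  <Execution>ACTIVE</Execution>
  <SpindleSpeed units="RPM">1200</SpindleSpeed>
</MachineStatus>"""

def read_status(xml_text):
    """Extract a few monitoring fields from an XML status document."""
    root = ET.fromstring(xml_text)
    return {
        "machine": root.get("machine"),
        "execution": root.findtext("Execution"),
        "spindle_rpm": int(root.findtext("SpindleSpeed")),
    }

status = read_status(response)
```

Remote operation, which MTComm adds beyond monitoring, would go the other way: the client POSTs an operation request that the agent-adapter layer translates for the target machine tool.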
To improve MTComm’s overall performance, efficiency, and acceptability in cyber manufacturing, the concept of MTComm’s edge-based middleware was introduced and three optimization strategies for data catching, transmission, and operation execution were developed and adopted at the edge. Finally, a hardware prototype of the middleware was implemented on a System-On-Chip based FPGA device to reduce computational and transmission latency. At every stage of its development, MTComm’s performance and feasibility were evaluated with experiments in a CPMC testbed with three different types of manufacturing machine tools. Experimental results demonstrated MTComm’s excellent feasibility for scalable cyber-physical manufacturing and superior performance over other existing approaches

    Deriving Value from End-To-End (E2E) Solutions in A Developing Country: A Study of Loan Processing in Bangladesh

    One of the key differences between developed and developing economies is that the ‘IT Productivity Paradox’ (the relatively slow growth of economic and firm productivity despite advances in IT) is still evident in developing economies. End-to-End (E2E) solutions are seen as a way of improving process and firm productivity by concurrently implementing IT systems and process improvements. They are of particular interest to companies in developing countries, as they have the potential to deliver the improvements in business processes that are hypothesized to drive firm, sectoral, and economic performance. This study aims to identify the factors (labeled ‘IT value conversion contingencies’) that impede firms in developing countries from realizing value from E2E solutions. The paper begins by developing a conceptual model based on the extant literature on IT value conversion contingencies, E2E solutions, and developing countries. This model is tested in the context of E2E loan-processing solutions in the banking sector in Bangladesh. Using survey responses from 30 of the 48 banks operating in Bangladesh, the study identifies the IT value conversion factors of E2E solutions and reveals how such factors affect the value banks derive from them.
