PbD: Visual Guidance for Robot-Arm Manipulation

Robot control requires comprehending the workspace and the limitations of a complex multi-joint robotic system. The structure of the visual guidance system accompanying the training scenario affects the efficiency of eye-hand coordination and of deciding how to avoid and compensate for crash situations when operators must directly move the robot arm to the desired position. We have demonstrated that visual guidance in the form of a semi-transparent virtual image (phantom) indicating the right robot-arm configuration enabled trainees to achieve significant improvements in the target acquisition task. Their cumulative motor experience increased 1.8-fold, compared with only 1.2-fold under the passive learning condition.


Introduction
Progress in learning new skills (eye-hand coordination and visual-motor tasks) is affected by two complementary factors, success and failure, as indicated by Leibowitch and colleagues (2010) and in other related research [1][2][3][4]. In general, it was demonstrated that the learning function of trainee performance exhibits sigmoid behavior as successful trials/events gradually overcome and come to dominate failed attempts. However, this is not the optimal strategy when learning new categories of complex functions (understanding, interpreting and performing) during interaction with a complex system.
It can be demonstrated that even manipulating a virtual robot arm in a specific task requires a very different behavioral pattern than regular use of a mouse. A regular computer user focuses on the specific location of the cursor or pointer. In contrast, an operator of a robot arm must distribute visual attention to efficiently control the spatial configuration of the robot while avoiding the known limitations of the system. Being unaware of how to resolve or compensate for a crash situation, a trainee could easily cause a malfunction, resulting in frustration.
How should visual guidance be presented to train unskilled persons to control a robot arm? Visual guidance systems (VGS) are widely used in various fields of human activity. For training operators whose visual channel is already overloaded, additional visual cues must prove their effectiveness [5].

Experimental setup and Procedure
A virtual model of a 4-DOF robot arm was implemented in the MS Visual Basic environment and can be manipulated with a regular mouse and keyboard. As can be seen from Figure 1, the experimental task consisted of acquiring the target object by placing joint 4 over the object and moving the object to the target location along the optimal trajectory, avoiding collisions with obstacles (Figure 3) while keeping within the assigned workspace. The range of joint movements was set to 90 degrees. The trainee manipulated the virtual robot arm by placing the mouse cursor within one of the four joints and, holding the mouse button down, changing the angles of the arm linkages to approach the object location. When joint 4 was placed over the object, the object was attached to the joint and locked. The trainee then had to manipulate the robot arm while avoiding collisions with obstacles and failures of the robot-arm configuration (such as 'out of workspace'), Figure 3.
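The kinematics behind such a virtual arm can be sketched as follows. This is a minimal illustration, not the authors' implementation: we assume a planar arm with hypothetical link lengths and interpret the 90-degree joint range as ±45 degrees around neutral; none of these specifics are given in the text.

```python
import math

# Hypothetical link lengths (pixels); the paper does not report actual values.
LINK_LENGTHS = [80.0, 70.0, 60.0, 40.0]
JOINT_RANGE_DEG = 90.0  # range of joint movements stated in the text


def forward_kinematics(angles_deg):
    """Return the (x, y) positions of joints 1-4 of a planar 4-DOF arm.

    Each angle is relative to the previous link; the base sits at the origin.
    """
    x, y, heading = 0.0, 0.0, 0.0
    positions = []
    for length, angle in zip(LINK_LENGTHS, angles_deg):
        heading += math.radians(angle)
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        positions.append((x, y))
    return positions


def within_joint_range(angles_deg, limit_deg=JOINT_RANGE_DEG):
    """Flag an 'out of workspace' configuration.

    Assumes the stated 90-degree range means +/-45 degrees per joint.
    """
    return all(abs(a) <= limit_deg / 2 for a in angles_deg)
```

Dragging a joint with the mouse would then map cursor displacement to a change in the corresponding angle, with `within_joint_range` deciding when to trigger the error feedback.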
Out-of-workspace events and collisions were indicated by highlighting the corresponding joint in yellow for 3 seconds. Committed errors were also accompanied by a negative sound. It was expected that the operator would be able to fix the problem immediately and avoid it in future activity.
To prevent delayed failure detection, an indication of the state of the system could be activated on demand (by pressing the X key on the keyboard) or automatically, using an AngleTolerance index (0.8-1). However, we did not use failure prediction during the test.
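One plausible reading of the AngleTolerance index is a warning that fires once a joint has used a given fraction of its allowed range. The sketch below assumes that interpretation and a hypothetical ±45-degree joint limit; the paper does not define the index formally.

```python
def near_limit(angle_deg, limit_deg=45.0, tolerance=0.8):
    """Return True when a joint angle enters the warning zone.

    With tolerance in [0.8, 1.0], a warning fires once the joint has
    consumed more than that fraction of its assumed +/-limit_deg range.
    """
    return abs(angle_deg) / limit_deg >= tolerance
```

Setting `tolerance=1.0` would suppress early warnings entirely, matching the test condition in which failure prediction was not used.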
After two minutes of learning how to control the virtual robot arm, the participants were given detailed instructions regarding the experimental procedure and how to behave when an error occurs. Before the actual test (the first block of trials), the template of an unused task was individually demonstrated and commented on. The subjects were allowed to practice the sequence of actions briefly. They started the test trials by pressing the spacebar on the keyboard.
The task (trial) was completed automatically, accompanied by sound feedback, when more than 80% of the object was placed over the target location. When trainees were ready to continue the test, they pressed the spacebar again.
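The 80% completion rule amounts to an overlap-fraction test. A minimal sketch, assuming axis-aligned rectangular object and target regions (the actual shapes are not specified in the text):

```python
def overlap_fraction(obj, target):
    """Fraction of the object's area covered by the target region.

    Both regions are axis-aligned rectangles given as (x, y, width, height).
    """
    ox, oy, ow, oh = obj
    tx, ty, tw, th = target
    ix = max(0.0, min(ox + ow, tx + tw) - max(ox, tx))  # intersection width
    iy = max(0.0, min(oy + oh, ty + th) - max(oy, ty))  # intersection height
    return (ix * iy) / (ow * oh)


def trial_complete(obj, target, threshold=0.8):
    """Trigger automatic trial completion when overlap exceeds the threshold."""
    return overlap_fraction(obj, target) > threshold
```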
Between trials, the participants had a short break and could ask questions and report problems if needed. General comments of the participants were taken into account, but none of the subjects had experience with more than one technique. Therefore, we did not use a post-test questionnaire, which could not have provided further statistically useful data.
Twenty subjects from the local university, unskilled in robot-arm control, voluntarily took part in our experiments. We expected successful performance in virtual robot-arm control to rely on the subjects' regular computer skills. The age of the participants ranged from 21 to 29 years, with a mean of 26.5 years. Five subjects were female and fifteen were male. All subjects had normal or corrected-to-normal (wore glasses) visual acuity, and none of them reported any sensory-motor problems. The subjects were regular computer users and used right-handed mouse settings. The formal consent of the participants for collecting and using the obtained data was gathered. No private or confidential information was collected or stored. The participants were given the opportunity to withdraw from the experiment at any point without explaining the reason, and they were told that such a refusal would not have any negative consequences for them or for the experiment.
To accomplish the test, the participants had to complete the same task ten times (1-10) according to the training protocol (Figure 2). Still, ten subjects performed ten different tasks requiring the use of different control patterns.
The ten trials were divided into three blocks. Each block started with a baseline trial without indication of the right configuration of the virtual robot through the grey robot-phantom (NoVisGuide). Then, one group (A) of participants performed a second NoVisGuide trial, while another group (B) performed the task with continuous demonstration of the right configuration of the virtual robot through the grey robot-phantom, as shown in Figure 3. The right configuration of the robot (the templates of the target acquisition tasks) was captured once during a test performed by a skilled person ("expert") and stored in a log file. The blocks were finalized with a skill-transfer trial (NoVisGuide). During the third block, the second group (B) of participants performed the VisGuide-Continuous task twice before the skill-transfer trial. Moreover, the interval between training blocks varied from 3 to 5 hours.
To evaluate the performance dynamics of the trainees, we recorded temporal factors and behavioral parameters of the human activity, such as test completion time, movement time, track lengths, the pattern of managing the robot and erroneous activity committed (collisions with obstacles and exceeding the range of joint movements). All data were collected and subjected to statistical analysis using the SPSS17 statistical package (SPSS Inc., Chicago, IL).

Statistical analysis of the results
During the test, each of the 20 participants performed only 10 trials of the same task. The tasks had slightly different indices of difficulty (track lengths and the number of situations that could create a failure configuration). Consequently, different amounts of effort were required to accomplish these tasks.
The main activity of the trainee consisted in coordinating the sequence of mouse movements to control the linkages of the robot arm in the right way. Indirectly, the time of observation and decision making could be found as the difference between the task completion (overall) time and the movement time, which the subjects spent directly on dynamically changing the robot-arm configuration.
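The residual-time computation described above is straightforward; a short sketch with illustrative (not actual) per-trial timings:

```python
def observation_decision_time(completion_times, movement_times):
    """Per-trial observation/decision time, estimated indirectly as the
    residual of total task completion time minus pure movement time."""
    return [total - moving for total, moving in zip(completion_times, movement_times)]


# Illustrative values only; the study's real timings appear in Table 1.
totals = [42.0, 35.5, 30.0]     # task completion times, seconds
movements = [25.0, 22.5, 20.0]  # movement times, seconds
cognitive = observation_decision_time(totals, movements)
```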
Comparative box plots for the temporal factors of the trainees' performance are presented in Figures 4-6. The mean and variation of these parameters are presented in Table 1. Positive dynamics were observed in all recorded temporal parameters.
However, we did not find a statistically significant difference (improvement) in total behavioral activity (i.e., how many times the trainees used different joints to control the robot configuration) when the patterns of managing the virtual robot arm were averaged over the different tasks. At least during the transfer trials, the recorded behavioral patterns were suboptimal and never matched the expert behavior (template).
As can be seen from Table 1, after the second block of training trials the paired-samples t-test revealed statistically significant differences between the temporal factors of the trainees' performance in the two experimental conditions.
During the entire training session (3 blocks) with the use of visual guidance indicating the right robot-arm configuration, the trainees achieved significant improvements in the target acquisition task. Their cumulative motor experience increased 1.8-fold, compared with only 1.2-fold under the passive learning condition. The average task completion time decreased by more than 40%, versus the 12% observed due to a passive learning effect.

Conclusions
When evaluated with a composite temporal parameter such as task completion time, the effect of the VGS on human performance may be diminished when learning takes place continuously. Analysis of human performance demonstrated that in the absence of visual guidance (VG), improvements occur mostly through optimization of the movement components, which may remain at a suboptimal level. Initially, and to a greater degree, the VG affects the cognitive components, which step by step improve the movements. Thus, the presented evaluation study showed that visual guidance can support skill acquisition in robotic tasks.

Figure 1. The robot-phantom accompanying input and indicating the right configuration of the robot arm. Joints 1-4 are controlled with the left mouse button.

Figure 2. The training protocol for evaluating the effects of visual feedback on human performance.

Figure 3. Animation of the operator activity. The template track is indicated as a green-blue trajectory; the sample track in dark blue and yellow.

Figure 5. The movement time averaged over 10 subjects and presented as in Figure 4.

Figure 6. The time of observation and decision making averaged over 10 subjects and presented as in Figure 4. The cognitive components have been related to different aspects of distributing attention to observe the robot workspace and to perform the target acquisition task while keeping the specific configuration of the robot, planning processes, and decision-making.

Table 1. Comparative statistics of the temporal factors of the trainees' performance averaged over 10 participants.