The present paper reviews a series of prehension experiments recently conducted at Simon Fraser University's Human Motor Systems Laboratory and attempts to place them in the larger context of multi-segmental control theory. Two related lines of experiments are reported: (a) experiments involving prehension during walking, and (b) experiments involving trunk-assisted reaching. Three-dimensional analyses of the movements were performed in both world- and body-centered coordinates. Our results support the idea that both types of task are carried out using task-specific synergies. Furthermore, we argue that the actions of these synergies comprise variable contributions from different movement systems and result in smooth, world-centered end-point trajectories. We present evidence that this “motor equivalence” emerges as the complexity of a given task increases. Finally, the implications of the present findings for prevailing motor control theory are discussed in terms of the theoretical mechanisms underlying the coordination of the transport and grasp components of prehension.
Ronald G. Marteniuk and Christopher P. Bertram
Smeets and Brenner have suggested that it may be time to abandon Jeannerod’s “classical approach” to studying human prehension, and have presented a mathematical model as an alternative. We argue that this model provides insufficient grounds for widespread acceptance, and we question whether such an approach furthers the science of motor control.
Ronald G. Marteniuk, Chris J. Ivens and Christopher P. Bertram
A pointing task was performed both while subjects stood beside and while they walked past targets of differing sizes that required differing movement amplitudes. Hand kinematics were considered relative both to a fixed frame of reference in the movement environment (end-effector kinematics) and to the subject's body (kinematics of the hand alone). In the former frame, there were few differences between the standing and walking versions of the task, indicating similar hand kinematics. When the hand was considered alone, however, marked differences in kinematics and spatial trajectories were observed between standing and walking. Furthermore, kinematic analyses of the trunk showed that subjects used differing amounts of both flexion-extension and rotation at the waist, depending on whether they were standing or walking as well as on the constraints imposed by target width and movement amplitude. The present results demonstrate the existence of motor equivalence in a combined upper- and lower-extremity task and show that this motor equivalence is a control strategy for coping with increasing task demands. Given the complexity involved in controlling the arm, the torso, and the legs (during locomotion), the movements in the present tasks appear to be planned and controlled by treating the whole body as a single unit.
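The distinction between end-effector kinematics (hand motion in a fixed laboratory frame) and kinematics of the hand relative to the body can be made concrete with a small coordinate transform. The following is a minimal illustrative sketch, not the authors' analysis pipeline: the function name, the choice of a trunk marker as the body origin, and the reduction of trunk orientation to a single yaw angle are all simplifying assumptions for exposition.

```python
import numpy as np

def world_to_body(hand_world, trunk_pos, trunk_yaw):
    """Express a hand marker position, recorded in world coordinates,
    in a body-centered frame that translates and rotates with the trunk.

    hand_world : (3,) hand position in the fixed laboratory frame
    trunk_pos  : (3,) trunk origin in the laboratory frame (e.g., a sternum marker)
    trunk_yaw  : trunk rotation about the vertical axis, in radians
    """
    c, s = np.cos(trunk_yaw), np.sin(trunk_yaw)
    # Rotation taking world-frame vectors into the body frame
    # (the inverse of the trunk's yaw rotation).
    R = np.array([[  c,   s, 0.0],
                  [ -s,   c, 0.0],
                  [0.0, 0.0, 1.0]])
    return R @ (np.asarray(hand_world) - np.asarray(trunk_pos))

# A hand 1 m ahead of the trunk along the world x-axis, with the trunk
# rotated 90 degrees: in body-centered coordinates the same hand position
# lies off to the side, even though nothing moved in the world frame.
recoded = world_to_body([1.0, 0.0, 0.0], [0.0, 0.0, 0.0], np.pi / 2)
```

Applying such a transform frame-by-frame to a walking trial is what makes the dissociation visible: a trajectory that looks smooth and invariant in world coordinates can look very different once trunk translation and rotation are factored out.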
Martin Lemay, Christopher P. Bertram and George E. Stelmach
Pointing to a visual target that disappears prior to movement requires the maintenance of a memory representation of the target's location. It has been shown that a target can be stored egocentrically, allocentrically, or in both frames of reference simultaneously. The main goal of the present study was to compare the accuracy and kinematics of a pointing movement to a remembered target when egocentric, allocentric, or combined egocentric and allocentric coding was possible. The task was to localize, memorize, and reach to a remembered target. Condition 1 was the “no-context” condition and involved presenting the target in a completely dark environment (egocentric condition). For 2 other conditions, the target was presented within a visual context provided by an illuminated square. Condition 2 was the “stationary-context” condition and involved keeping the context at the same position during the whole trial (egocentric and/or allocentric coding). Condition 3 was a “moved-context” condition that involved shifting the context to a different location during the recall delay (allocentric coding). Movement accuracy and kinematic results were strikingly similar for the moved-context and stationary-context conditions. These results suggest that when both allocentric and egocentric coding are possible, an allocentric strategy is used.
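The logic of the moved-context manipulation can be summarized with a toy model of the two coding schemes. This is a hypothetical sketch for exposition only (the function and its parameters are not from the study): the target is stored either as a vector relative to the body or as a vector relative to the context square, and the two schemes diverge only when the context is displaced during the delay.

```python
import numpy as np

def recall_target(target, context, body, context_shift, coding):
    """Predicted recall location under two hypothetical coding schemes.

    target, context, body : (2,) positions at encoding (world coordinates)
    context_shift         : (2,) displacement of the context during the delay
    coding                : 'egocentric' or 'allocentric'
    """
    target, context, body = map(np.asarray, (target, context, body))
    shift = np.asarray(context_shift)
    if coding == 'egocentric':
        # Target stored as a body-relative vector: unaffected by the context.
        return body + (target - body)
    # Target stored as a context-relative vector: recall follows the shift.
    return (context + shift) + (target - context)

# With a stationary context (zero shift) the two codings predict the same
# recall location; when the context moves, only allocentric recall moves
# with it, which is what makes the moved-context condition diagnostic.
ego = recall_target([2.0, 1.0], [1.0, 1.0], [0.0, 0.0], [0.5, 0.0], 'egocentric')
allo = recall_target([2.0, 1.0], [1.0, 1.0], [0.0, 0.0], [0.5, 0.0], 'allocentric')
```

On this toy account, the finding that moved-context and stationary-context performance were strikingly similar is what licenses the conclusion that subjects defaulted to the allocentric representation when both were available.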