Given this knowledge of the game simulation, let's outline our assumptions. A physics system takes care of applying the motor commands to the data structures representing each player. We'll suppose this is done with kinematics, in a physically plausible fashion. The physics system also resolves any conflicts that arise within the simulation. This is the locomotion layer, which falls outside the control of the AI.
We'll also assume that an animation system takes care of the low-level aspects of motion. Specifically, moving individual limbs and blending the appropriate animation cycles is handled transparently to the AI.
For the interface to be of any use to the animat, it's essential that the information returned be consistent with the physics of the environment. Consequently, we'll assume the backend implementations of these interfaces rely on the movement simulation itself.
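To make this arrangement concrete, here is a minimal sketch of how such an interface might be backed by the movement simulation. All names here (MotorCommand, PhysicsSimulation, MovementInterface) are illustrative assumptions, not part of any particular engine: the point is only that the animat's queries are answered by the same simulation that applies its motor commands, so the information it receives stays consistent with the physics.

```python
import math
from dataclasses import dataclass

@dataclass
class MotorCommand:
    """A hypothetical motor command issued by the animat."""
    forward: float  # desired forward speed
    turn: float     # desired turn rate (radians per second)

class PhysicsSimulation:
    """Applies motor commands kinematically to the player's state.
    Conflict resolution (collisions, etc.) would also live here."""
    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 0.0

    def step(self, cmd: MotorCommand, dt: float) -> None:
        # Simple kinematic integration: turn, then move along heading.
        self.heading += cmd.turn * dt
        self.x += math.cos(self.heading) * cmd.forward * dt
        self.y += math.sin(self.heading) * cmd.forward * dt

class MovementInterface:
    """The animat's view of locomotion. Both commands and queries go
    through the simulation, so returned data matches the physics."""
    def __init__(self, sim: PhysicsSimulation):
        self._sim = sim

    def move(self, cmd: MotorCommand, dt: float = 0.1) -> None:
        self._sim.step(cmd, dt)

    def get_position(self) -> tuple:
        return (self._sim.x, self._sim.y)
```

Because `get_position` reads directly from the simulation state, the animat can never observe a position the physics system did not produce, which is precisely the consistency property assumed above.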