The brain’s output is movement, and a primary function of the sensory systems, including vision, is to guide goal-directed movements. Upon encountering an object, the visual system must extract the features relevant for proper motor interaction with it. Although much is known about the abstract representation of object shape in the brain, the computations and neural mechanisms that support visual object processing for action are poorly understood. In this talk, I will provide a framework for studying the visual processing of objects for action. My studies examine the processing of object images as well as real objects and the complex body movements of other individuals. First, using evidence from neuroimaging, I will establish the presence of object representations in the human parietal cortex that may play a role in extracting action-relevant object features. Next, I will present the results of a series of behavioral experiments that combine motion tracking with machine learning techniques to study the visual processing of others’ body movements in the context of real-time social interactions. The research program sketched in this talk aims to bridge the study of visual processing and that of goal-directed movement.