Visual World Paradigm

An eye-tracking technique

Likan Zhan · 2018-02-07

1. A brief description

This paradigm builds on two seminal works, published by Cooper (1974) and by Tanenhaus, Spivey-Knowlton, Eberhard, & Sedivy (1995).

2. Recent Applications

  1. Groot, Huettig, & Olivers (2017)

On all trials, participants memorized a spoken word for a verbal recognition test at the end of the trial. During the retention period, they performed a visual search task. On the crucial trials, the search target was absent. In one such trial, for example, the word to remember was “banana”. Participants then saw four objects printed on the screen: an object that was semantically related to the word (such as a monkey), an object that was visually related (such as a canoe), and two objects that were unrelated (such as a hat and a tambourine). In the visual search stage, participants were asked to search for the banana (the template condition) or for a figurine (the accessory condition). The study found that participants’ eye movements differed significantly between the accessory condition and the template condition, suggesting that language-induced attentional biases are subject to task requirements.

  2. Saryazdi & Chambers (2018)

To explore the effects of the degree of image realism, the researchers conducted two eye-tracking studies using the visual world paradigm. Each test image consisted of four objects, such as a cigarette, a banana, an earring, and an apple. The test images included both photographs and clipart versions of the same objects. The two experiments differed in whether the test sentences were noun-biased (Experiment 1), such as “John will move the apple/banana”, or verb-biased (Experiment 2), such as “John will move/peel the apple/banana”. The researchers found a modest benefit for clipart stimuli during real-time processing, but only for noun-driven mappings; that is, the effect of realism was observed in Experiment 1 but not in Experiment 2.

3. References

Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6(1), 84–107. doi:10.1016/0010-0285(74)90005-x

Groot, F. de, Huettig, F., & Olivers, C. N. L. (2017). Language-induced visual and semantic biases in visual search are subject to task requirements. Visual Cognition, 25(1-3), 225–240. doi:10.1080/13506285.2017.1324934

Saryazdi, R., & Chambers, C. G. (2018). Mapping language to visual referents: Does the degree of image realism matter? Acta Psychologica, 182, 91–99. doi:10.1016/j.actpsy.2017.11.003

Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M., & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268(5217), 1632–1634. doi:10.1126/science.7777863