Relational information between objects is available to guide search.
Authors
Schmidt, Joseph C.
Issue Date
1-May-2012
Type
Dissertation
Language
en_US
Abstract
Objects in the real world exist relative to other objects, resulting in an intricate web of spatial relationships. Do we use this relational information when we search for objects? Current search theory suggests that object relationships can only be established using focal attention (Logan, 1994; 1995). If this is true, pre-attentive search guidance by relational information should be impossible. In a series of seven experiments, I demonstrate that search guidance by relational information is possible, even in the absence of real-world contextual constraints that may magnify relational guidance. Experiment 1 shows search guidance by relational information alone, i.e., in the absence of target feature guidance. Experiment 2 indicates that relational guidance is evident in highly heterogeneous displays as well. Experiment 3 demonstrates that relational guidance does not affect search when targets are cued using text labels referring to four object classes, suggesting that the effective coding of relational information may require highly specific target features. Experiment 4 shows that relational guidance is selectively not expressed when functional relationships between objects run contrary to real-world expectations (e.g., a hammer below a nail), suggesting that relational guidance is affected by object spatial associations in long-term memory. Experiment 5 further demonstrates that, with minimal practice, there is a small automatic contribution to relational guidance, although with continued practice relational guidance increases or disappears depending on task demands. Experiments 6 and 7 show that relational guidance is unaffected by various grouping cues, suggesting that object spatial relationships are not coded by low-level visual processes, but rather by higher-order pointers that code the categorical spatial relationships between objects (above, below, left, right).
Collectively, these experiments suggest that object spatial relationships are encoded into the guiding target template at preview, thereby making this relational information available to guide search and removing the need to assume a pre-attentive coding of relational information between peripherally viewed search objects.
Description
100 pages.
Publisher
The Graduate School, Stony Brook University: Stony Brook, NY.