It is an open question whether search for multiple targets is less efficient than search for a single target. Here we explored visual search guidance for multiple targets by tracking subjects' eye movements, with the broad goal of better understanding the close relationship between visual search and working memory. One series of experiments showed that search for multiple targets is less efficient and less guided than search for a single target. Using a retro-cue paradigm, we distinguished between two potential causes of this load effect: one related to capacity limitations on visual working memory during encoding, and the other to a mismatch between the features of multiple targets in working memory and the single target in the search display. We found that the proportion of initial fixations on the target, a conservative measure of search guidance, was influenced by feature mismatch but not by a memory encoding limitation: guidance was affected by the targets indicated by the retro cue (feature mismatch) but not by the number of targets shown at preview (memory encoding). We therefore conclude that multiple-target search is less guided than single-target search because the features of multiple targets held in visual working memory weaken guidance to the one target actually appearing in the search display. A second series of experiments explored how multiple targets are represented, in terms of features shared by two targets (common features) or features unique to each target (distinctive features), and how this representation changes with the conceptual relationship between the search targets. We found that two dissimilar targets are represented by distinctive features, whereas two similar targets are represented by common features, although in both cases search guidance improves with the use of distinctive features.