The COGVIM project (Cognitive Grounding of Visual Metaphor, URL: https://cogvim.org/) explored the differences between visual and linguistic metaphors, that is, metaphors expressed through still images (such as advertisements, political cartoons, and artworks) and metaphors expressed through words (for example in everyday conversation, in academic discourse, or in the news). An example of a visual metaphor is the political cartoon displayed here:
In this image, the author (cartoonist Angel Boligan) depicts an ATM embedded in a church confessional. The image prompts the viewer to associate, metaphorically, the ATM with the confessional, and to find possible similarities between the two entities. Conversely, an example of a linguistic metaphor is the statement “forget the fiscal cliff” (COCA corpus). Here, the word cliff is used metaphorically, in relation to the domain of the economic crisis.
We predicted that visual and linguistic metaphors would construct metaphorical comparisons relying on different aspects of the meaning of the concepts aligned in the metaphor. In particular, based on extensive literature reviews, we distinguished between three types of conceptual structures, which could be operationalized into three different types of semantic representation by means of vector spaces. Based on these three semantic representations, it was possible to investigate three different types of similarity between pairs of concepts that are aligned in visual or linguistic metaphors: 1) attributional similarity, that is, the similarity between metaphor concepts based on the entity-related semantic features that are shared by the two concepts; 2) relational similarity, that is, the similarity between metaphor concepts based on the shared contextual structures that characterize each pair of metaphor concepts; 3) language-based similarity, that is, the similarity between metaphor concepts based on the linguistic contexts that are shared by two metaphor concepts.
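In a vector-space setting, each of these three similarity types reduces to comparing two concept vectors built from the relevant source of information (semantic features, contextual structures, or linguistic co-occurrences). A minimal sketch, using invented toy vectors rather than the project's actual spaces, is the standard cosine measure commonly used in distributional semantics:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two concept vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical toy vectors for two concepts in one semantic space
# (the values are illustrative, not taken from the COGVIM data)
atm = [0.9, 0.1, 0.4]
confessional = [0.8, 0.2, 0.5]
sim = cosine_similarity(atm, confessional)
```

The same measure can be applied unchanged to each of the three spaces; what differs across them is how the concept vectors are constructed.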
The metaphor similarity modeled across the three distributional spaces in WP1, WP2, and WP3 shows different patterns for visual and for linguistic metaphors. The results therefore support our hypothesis that visual and linguistic metaphors are constructed and represented on the basis of different types of semantic information, which are exploited to cue metaphorical comparisons between two aligned concepts (Bolognesi 2016a; Bolognesi 2016b; Bolognesi, Aina 2017).
In particular, in WP1 we found that attributional similarity is significantly higher in visual metaphors than in linguistic metaphors. This similarity is based on shared features between metaphor terms that express entity-related properties, such as perceptual and systemic properties, or components of a given concept (e.g. CONFESSIONAL and ATM both have flat surfaces, while there are no reported entity-related properties shared between SUBJECT and AREA).
In WP2 we found that experience-based relational similarity is significantly higher in visual metaphors than in linguistic metaphors. This similarity is based on shared features between metaphor terms that express experience-based properties, such as the locations in which the concepts appear, or the objects and participants that populate these environments (e.g. CONFESSIONAL and ATM both typically appear together with human participants who interact closely with them, while there are no reported experience-based properties shared between SUBJECT and AREA).
Conversely, in WP3 we found that language-based similarity is significantly higher for linguistic metaphors than for visual ones. This similarity is based on shared linguistic contexts between metaphor terms, such as the grammatical and lexical structures in which two metaphor terms are typically used (e.g. for SUBJECT and AREA: a debated subject/area; subject/area of study; for CONFESSIONAL and ATM, by contrast, there are significantly fewer shared linguistic contexts in which the two words are used).
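The contrast between the CONFESSIONAL–ATM and SUBJECT–AREA pairs in the examples above can be illustrated by approximating feature-based similarity as the proportion of shared properties (a Jaccard index). The property sets below are invented for illustration and are not the project's actual property norms:

```python
def feature_overlap(features_a, features_b):
    """Jaccard index: shared properties divided by all properties of the pair."""
    return len(features_a & features_b) / len(features_a | features_b)

# Invented, illustrative property sets (not actual property norms)
atm = {"has_flat_surfaces", "is_a_box", "involves_a_human_user"}
confessional = {"has_flat_surfaces", "is_a_box", "found_in_churches"}
subject = {"is_abstract", "relates_to_topics"}
area = {"is_a_region", "has_boundaries"}

concrete_pair = feature_overlap(atm, confessional)  # shares some properties
abstract_pair = feature_overlap(subject, area)      # shares none in this toy set
```

On these toy sets the concrete pair scores higher than the abstract pair, mirroring (in miniature) the pattern reported for attributional similarity in WP1.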
A closer inspection of the data suggested, in addition, that visual metaphors tend to be on average more creative (i.e. less conventionalized) than linguistic metaphors, and tend to involve on average more concrete concepts, because these have to be graphically depicted. In particular, we observed that linguistic metaphors used in natural language tend to be highly conventionalized (e.g., “I see what you mean”, where see means ‘understand’), while visual metaphors are typically constructed as ad-hoc comparisons that serve a specific communicative goal within a given genre, and are therefore typically more creative (e.g., an anti-smoking campaign showing a cigarette in the shape of a coffin).
These two additional variables, Conventionality and Concreteness, were therefore investigated, to check whether they could also explain the different similarity patterns obtained in the three work packages. A series of regression analyses showed that only the independent variable ‘Modality of expression’ (visual or linguistic) could significantly predict the amount of similarity between metaphor terms across the three distributional semantic spaces, while ‘Concreteness’ and ‘Conventionality’ did not (Bolognesi, in preparation).
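The regression setup described above can be sketched in miniature: regress each similarity score on a binary modality predictor plus a covariate, and inspect the coefficients. The data below are invented toy values, not the COGVIM measurements, and a real analysis would of course also report significance tests via a statistics package:

```python
# Hypothetical toy dataset (illustrative values only): one row per metaphor.
# Columns: modality (1 = visual, 0 = linguistic), concreteness rating, similarity.
data = [
    (1, 4.5, 0.80), (1, 4.2, 0.75), (1, 4.8, 0.82),
    (0, 3.9, 0.40), (0, 4.4, 0.35), (0, 4.1, 0.42),
]

def ols_coefficients(rows):
    """Fit y ~ b0 + b1*modality + b2*concreteness by ordinary least squares,
    solving the centered 2x2 normal equations directly."""
    n = len(rows)
    mb = sum(m for m, _, _ in rows) / n
    cb = sum(c for _, c, _ in rows) / n
    yb = sum(y for _, _, y in rows) / n
    smm = sum((m - mb) ** 2 for m, _, _ in rows)
    scc = sum((c - cb) ** 2 for _, c, _ in rows)
    smc = sum((m - mb) * (c - cb) for m, c, _ in rows)
    smy = sum((m - mb) * (y - yb) for m, _, y in rows)
    scy = sum((c - cb) * (y - yb) for _, c, y in rows)
    det = smm * scc - smc ** 2
    b1 = (scc * smy - smc * scy) / det  # modality coefficient
    b2 = (smm * scy - smc * smy) / det  # concreteness coefficient
    b0 = yb - b1 * mb - b2 * cb         # intercept
    return b0, b1, b2

b0, modality_coef, concreteness_coef = ols_coefficients(data)
```

On these invented values the modality coefficient is large while the concreteness coefficient is near zero, mimicking the reported finding that modality, not concreteness, predicts similarity.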
Taken together, these findings support multimodal accounts of cognition, in which multiple modality-specific semantic representations contribute to shaping our conceptual system. This is a crucial topic within the cognitive linguistic framework and within the field of metaphor studies, where it is instead often assumed that there is a one-to-one correspondence between words and concepts, and that the linguistic system matches and exhausts the semantic information retrieved from perceptual experiences.
Finally, we observed that in order to achieve a full interpretation of metaphorical images, the viewer often needs to perform additional cognitive operations, besides identifying the shared features between two aligned concepts. These complex operations, which explain how abstract concepts can emerge from concrete, depicted instances, include combining metaphors with metonymies and projecting features from one domain onto the other, a mechanism used to create ad-hoc similarities as opposed to pre-existing ones (Bolognesi, Steen, in preparation).
These final observations about the emergence of abstract concepts from concrete categories led the COGVIM team to organize an international, interdisciplinary symposium on the structure, processing, and modeling of abstract concepts (https://abstractconceptsnet.wordpress.com/), which was attended by 100+ international delegates. The debate that emerged will be collected in a special issue of the journal Topics in Cognitive Science (Eds. Bolognesi, Steen, in preparation), while a book proposal for the book series Human Cognitive Processing (Benjamins Publishers), currently under review, will collect a selection of papers based on the best presentations delivered at the symposium.