- WHO: Researcher and project coordinator: Dr. Marianna Bolognesi.
- WHERE: University of Amsterdam (UvA), Metaphor Lab Amsterdam.
- WHEN: 2015-2017.
- HOW: COGVIM comprises three Work Packages and several activities.
Visual metaphors are highly structured images, commonly used in social campaigns, advertising, political cartoons, and art, in which two concepts are represented in a way that requires one concept to be interpreted in terms of the other.
According to Conceptual Metaphor Theory (CMT, Lakoff & Johnson 1980), the most influential theory of metaphor in contemporary research, metaphors are matters of thought: we understand one concept in terms of another, the latter generally being perceived as easier or more concrete. For example, when we say “I see what you mean”, we actually mean “I understand what you mean”. We thereby convey the concept of understanding (quite abstract and complex) through the concept of seeing (quite concrete, imageable, and easy to grasp).
According to CMT, because metaphors hold a conceptual status (they concern concepts rather than just words), we expect to find the same (conceptual) metaphors expressed in different modalities (images, sounds, gestures, etc.). However, while linguistic manifestations have been extensively investigated, other modalities still lack comprehensive, consistently analyzed empirical evidence that could support (or challenge) this theory.
Visual metaphors, a somewhat neglected modality of metaphor expression, may also hold an unexpected key to a better understanding of how abstract concepts are grounded in bodily experiences, rather than (just) in the linguistic system. Indeed, in visual metaphors abstract concepts are evoked by concrete things, which are depicted in the image.
The development of COGVIM entails the construction of a model of the cognitive grounding of visual metaphor, based on an innovative network approach aimed at identifying, retrieving, analyzing, and classifying the knowledge about the two concepts compared in a metaphor that comes into play when we interpret it. This objective will be achieved through three different methods: extensive semi-automated analyses of the pairs of concepts involved in metaphors, conducted across three large-scale electronic databases of semantic information:
- a set of semantic features collected from participants’ descriptions (e.g. McRae et al. 2005);
- a set of annotated images, formalized by metadata from Flickr (Bolognesi 2014);
- a semantically annotated corpus of texts (Baroni, Lenci 2010).
These three databases contain different types of semantic knowledge, and together they account for the richness of our conceptual representations.
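The kind of cross-database concept-pair comparison described above can be sketched in code. The snippet below is an illustrative toy example, not the project's actual pipeline: it contrasts a feature-overlap measure (in the spirit of McRae-style property norms) with a cosine similarity over distributional vectors (in the spirit of corpus-based models such as Distributional Memory) for the source and target of “understanding is seeing”. All feature sets, vectors, and weights are invented for illustration.

```python
import math

def feature_overlap(features_a, features_b):
    """Jaccard overlap between two McRae-style property sets."""
    shared = features_a & features_b
    return len(shared) / len(features_a | features_b)

def cosine(vec_a, vec_b):
    """Cosine similarity between two sparse distributional vectors
    (dicts mapping context words to association weights)."""
    dims = set(vec_a) | set(vec_b)
    dot = sum(vec_a.get(d, 0.0) * vec_b.get(d, 0.0) for d in dims)
    norm_a = math.sqrt(sum(v * v for v in vec_a.values()))
    norm_b = math.sqrt(sum(v * v for v in vec_b.values()))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)

# Toy data for the metaphor "understanding is seeing" (invented, not from
# the McRae norms or the Distributional Memory corpus).
see = {"uses_eyes", "involves_light", "gives_information"}
understand = {"gives_information", "is_mental", "involves_effort"}
print(feature_overlap(see, understand))  # one shared feature out of five: 0.2

see_vec = {"clearly": 0.8, "light": 0.6, "point": 0.4}
understand_vec = {"clearly": 0.7, "point": 0.5, "concept": 0.6}
print(cosine(see_vec, understand_vec))
```

The two measures can diverge: concepts with little property overlap may still occur in similar linguistic contexts, which is precisely the kind of contrast between knowledge types that motivates analyzing metaphoric concept pairs across several databases.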
In conclusion, the Research Questions that COGVIM will address are the following:
RQ1: What type of semantic information do we activate in our mind, and transfer from one concept to the other, when we understand a visual metaphor as opposed to a verbal metaphor?
RQ1a: How does the semantic information that we retrieve from mental simulations of the concepts involved in metaphors contribute to our understanding of visual and verbal metaphors?
RQ1b: How does the semantic information that we retrieve from processing concepts in experiential contexts contribute to our understanding of visual and verbal metaphors?
RQ1c: How does the semantic information that we activate when we use a concept in linguistic contexts contribute to our understanding of visual and verbal metaphors?
RQ2: Considering the results achieved from the above RQs, what type of similarity (or, in a broader sense, relatedness) characterizes the alignment (i.e. the comparison) between two concepts in a visual and in a verbal metaphor?
By providing answers to these questions, this project will accomplish the following scientific goals:
- integrate the contemporary theory of metaphor (still biased towards verbal expressions) into the grounded cognition framework, by providing empirical data from extensive quantitative analyses, never performed before, that compare and contrast visual and verbal metaphors through innovative computational methods;
- contribute to a better understanding of the grounding of abstract concepts in the human mind.
Baroni, M., & Lenci, A. (2010). Distributional Memory: A general framework for corpus-based semantics. Computational Linguistics, 36(4), 673-721.
Bolognesi, M. (2014). Distributional Semantics meets Embodied Cognition: Flickr® as a database of semantic features. Selected Papers from the 4th UK Cognitive Linguistics Conference, 18-35, London, UK.
Lakoff, G., & Johnson, M. (1980). Metaphors we live by. Chicago: University of Chicago Press.
McRae, K., Cree, G. S., Seidenberg, M. S., & McNorgan, C. (2005). Semantic feature production norms for a large set of living and nonliving things. Behavior Research Methods, 37, 547-559.