** Exact topics and schedule subject to change, based on student interests and course discussions. **

Date Topics Readings
1/20 Week 1: Course Introduction [slides]
  • Course syllabus and requirements
  • Definitions and challenges
  • None!
1/27 Week 2: Social Intelligence [synopsis]
  • How would you define social intelligence? What aspects of the proposed definitions most resonated with you? Will these definitions of social intelligence generalize to artificial social intelligence?
  • Should we always try to separate social intelligence from other types of intelligence, more specifically abstract intelligence (often referred to as g-factor or general intelligence)? What are the similarities and differences?
  • Given the ups and downs of social intelligence research, what method(s) would you support to measure social intelligence (e.g., self-reports, behavioral measures,...)? Will these measures generalize to a setup involving artificial social intelligence?
  • What are the main components (aka sub-constructs) of social intelligence? In other words, can we decompose social intelligence into distinct factors or sub-concepts? What are these components (aka sub-constructs, sub-concepts)? Can you start drafting a taxonomy of social intelligence sub-concepts?
  • How could the methods used in psychometric testing (as discussed in the reading papers) be applied to recent AI methods, which are mostly based on deep learning? Are these approaches compatible? How can we handle uncertainty both in defining and in measuring social intelligence?
2/3 Week 3: Social Skills and Competence [synopsis]
  • How would you define the concepts of social skills and social competence? How does social intelligence relate to these concepts? What aspects of the proposed definitions most resonated with you? Will these definitions generalize to artificial social intelligence?
  • Would it be best to study “artificial social skills and competence” instead of “artificial social intelligence”? Do you see both concepts working together? What are the main differences? Which one should we prioritize?
  • Can someone create a list of all social skills? What would this taxonomy look like and how can we validate it? Will the same taxonomy of social skills also apply to artificial social intelligence technologies?
  • Social skills and competence are often discussed in relation to human personality traits. Should we think of social skills as long-term traits, or as shorter-term, more contextualized states? Are they learned or innate?
  • Start thinking about how we can build a framework that integrates social intelligence, social skills, and social competence. How would we evaluate such a framework? How can we study the very large problem of artificial social intelligence in a systematic way, over many years? What would be your proposed research agenda?
2/10 Week 4: Emotional Intelligence [synopsis]
  • Are emotions the same as emotional intelligence? How would you define emotions? How would you define emotional intelligence? How do these two concepts relate to each other? Do we want AI systems to have emotions, emotional intelligence, or both?
  • What would be the equivalent of this emotions vs emotional intelligence comparison when talking about social intelligence? Should we compare social intelligence to social interactions? Social behaviors? Or social skills?
  • Should we use the term emotional intelligence or emotional competence? Should we talk about emotional skills, in a similar way to our discussion about social skills?
  • Is it important to differentiate social intelligence from emotional intelligence? Similarly, should we differentiate social skills from emotional skills, and social competence from emotional competence? Or should we integrate them, perhaps as the union of all these concepts?
  • Can you identify examples and use cases where a situation clearly refers to emotional intelligence and not social intelligence, or vice versa?
  • What skills and/or abilities would you expect an AI system to have so that we can call it emotionally intelligent? Is there a core set of skills/abilities that is particularly important? We could call them the “core skills/abilities”. What would be the core set of abilities and skills for social intelligence? For social competence?
2/17 Week 5: Artificial Social Agents Part 1: Measurements [synopsis]
  • With a view towards artificial social agents (robots, virtual humans,...), how should we measure the quality of social intelligence, social competence, and social skills? Should we apply the measures from human social intelligence directly to artificial social intelligence? What are the main differences?
  • Researchers in robotics, virtual humans, and artificial intelligence have started building questionnaires and benchmarks to measure social intelligence. How could we improve these proposed questionnaires and benchmarks, possibly using the knowledge gained from studying human social intelligence, competence, and skills?
  • The Tian & Oviatt paper studies socio-emotional competence with a slightly different perspective, looking instead to the errors happening during human-robot interactions? Should we focus more on the errors or on the competence? How are these two concepts related? Does competence mean lack of errors? Should we try to bring competence and errors in the same framework?
  • Many of the questionnaires for social robots and virtual humans are based on direct interaction with the AI system. After the interaction, the human subject is asked to fill out the questionnaire to evaluate and measure social intelligence. This is in contrast with a more “offline” approach, such as Social IQA, where written scenarios are shared with the AI system directly to evaluate its social intelligence (see the sketch after this list). Which approach should we prioritize? What are the pros and cons of both paradigms (evaluation after direct interaction vs. simulated written scenarios)?
  • What are the other evaluation paradigms you think would be helpful to measure social intelligence of artificial social agents?
  • Do you expect social intelligence to be measured differently for a physically embodied AI (e.g., a robot) versus a virtually embodied AI (e.g., a virtual human)? What about non-embodied agents or agents with different embodiments (e.g., HAL 9000)? How should we measure social intelligence in each case?
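  To make the two evaluation paradigms concrete, here is a minimal sketch (in Python) of the “offline” Social IQA-style protocol next to a post-interaction questionnaire score. The example item, function names, and scoring are hypothetical illustrations, not an actual benchmark implementation:

    # Hypothetical Social IQA-style item: the AI system reads a written
    # scenario and picks an answer; no live interaction is required.
    item = {
        "context": "Alex spilled coffee on Sam's laptop just before Sam's talk.",
        "question": "How will Sam most likely feel?",
        "choices": ["grateful", "upset", "indifferent"],
        "label": 1,
    }

    def offline_accuracy(answer_fn, items):
        """Score an agent by its answers to written scenarios."""
        correct = sum(
            answer_fn(it["context"], it["question"], it["choices"]) == it["label"]
            for it in items
        )
        return correct / len(items)

    # The interactive paradigm instead collects post-interaction ratings
    # from human subjects, e.g., averaging Likert-scale questionnaire items.
    def questionnaire_score(ratings):
        # ratings: list of 1-5 Likert answers from one subject
        return sum(ratings) / len(ratings)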
2/24 Week 6: Social Cognition and Social Interactions [synopsis]
  • How would you relate the theories from social cognition research with the concepts previously discussed in this course, including social intelligence, social skills and social competence? Can you draft an updated taxonomy (or theoretical framework) which integrates all these concepts? What are the differences between these different views?
  • “Theory of Mind” is a term that has evolved over the decades and has become popular in recent years among researchers in robotics and AI when discussing how technologies should interact with humans. From your readings, what is missing from Theory of Mind to achieve a complete artificial social intelligence?
  • When reflecting on how to build artificial social agents, it is helpful to think about the core abilities and skills needed for AI to successfully participate in social interactions. Based on your readings and insights, what are the core elements of social interactions for artificial social agents? Ligthart et al. (2021) suggest an initial list. What core elements are missing? Which elements are definitely key to social interactions?
  • Some researchers argue that social cognition research should include social interactions (aka interactionism theory). From your perspective, how much of social intelligence can be assessed offline, using tests such as the ones often used for Theory of Mind assessment? How much of social intelligence is also interactional (online) and requires different assessment methods?
  • Social perception is also an important aspect of social intelligence. Brunswik's Lens Model is a landmark model for studying human interpersonal communication (see the lens model equation after this list). Since signals are likely to be interpreted differently by different people, how can we model all these different perspectives of the same social interaction?
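  For reference during the discussion, the classical lens model equation (as commonly stated in the judgment literature) decomposes how well a perceiver's judgments match the underlying state:

    $r_a = G\,R_e\,R_s + C\sqrt{1 - R_e^2}\,\sqrt{1 - R_s^2}$

  where r_a is the achievement (the judgment-criterion correlation), R_e the predictability of the criterion from the observable cues, R_s the consistency of the judge's cue use, G the correlation between the two linear models' predictions, and C the correlation between their residuals. Modeling multiple perceivers amounts to fitting a separate judge-side model per person over the same cues.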
3/3 Week 7: Interpersonal Communication [synopsis]
  • A significant portion of many social interactions involves communication of verbal and nonverbal messages. This interpersonal communication between dyads and small groups is a relevant facet of social interactions. From your readings, what are the main functions of interpersonal communication? How do the functions of verbal communication compare with those of nonverbal communication?
  • How do these functions of interpersonal communication align with the abilities and skills discussed in previous weeks when studying social and emotional intelligence? Do these functions overlap? Are some of them unique to interpersonal communication, and some unique to social intelligence?
  • The interpersonal communication process is a dynamic multimodal process that involves many components (e.g., see Models of Interpersonal Communication). Based on your readings, what would you identify as the main components of the interpersonal communication process? (A small data-structure sketch follows this list.)
  • Are all components of interpersonal communication also important when assessing and modeling social intelligence? Would you see some components as “basic communication” that is not as relevant to social intelligence and social competence?
  • Interpersonal communication is a joint process between two or more people (e.g., as discussed in the “Cognitive pragmatics” paper). How does this common-ground and joint-action process relate to last week's readings on social cognition and theory of mind?
  • As we start thinking about building and evaluating artificial social intelligence, what aspects of interpersonal communication will be core to this endeavor? Can we build artificial social intelligence without first modeling interpersonal communication? What are the key components of interpersonal communication needed for artificial social intelligence?
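  As a discussion aid, here is a minimal sketch (Python) of one possible decomposition of a communicative exchange into components (sender, receiver, message, channel, context, feedback loop). The field names are hypothetical illustrations, not drawn from the readings:

    from dataclasses import dataclass, field

    @dataclass
    class Message:
        sender: str                  # who encodes the message
        receiver: str                # who decodes it
        verbal: str                  # the words themselves
        nonverbal: list = field(default_factory=list)  # e.g., gaze, gesture
        channel: str = "face-to-face"                  # medium carrying the message

    @dataclass
    class Exchange:
        context: str                 # physical and social setting
        turns: list = field(default_factory=list)      # messages + feedback over time

    greeting = Exchange(
        context="hallway, first meeting",
        turns=[Message("A", "B", "Hi, nice to meet you!", ["smile", "handshake"])],
    )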
3/10 Week 8: No classes – Spring break
  • None!
3/17 Week 9: Artificial Social Agents Part 2: Conversational AI [synopsis]
  • As a first step in reviewing research related to artificial social agents, we start with language-based interactions (aka non-embodied interactions, conversational agents). From our previous weeks discussing skills and abilities related to social intelligence and competence, which subset of these skills and abilities is primarily driven by language-based conversations? Is language central to all of them?
  • A big enabler of recent progress in this area of (language-based) conversational AI is large-scale language models (e.g., GPT, OPT, T5, TNLG) and their extensions to conversation modeling (e.g., DialoGPT, BlenderBot; see the sketch after this list). What aspects of social intelligence are particularly well suited to be learned by these large-scale models? What social intelligence skills and abilities may not be well modeled by them?
  • In parallel to this recent progress in large-scale models, researchers continue to pursue more theoretically inspired approaches to conversational modeling. What aspects of social intelligence are particularly well suited for these approaches, which are not based primarily on large-scale models? What social intelligence skills and abilities may not be well modeled by these approaches?
  • We are starting to see hybrid approaches that integrate theory-inspired methods with large-scale models. How would you propose to integrate these two lines of research? Should large-scale models be the foundation of these integrated approaches, or should they be seen as just one module of a larger system? Can you think of other integrative/hybrid approaches?
  • Evaluation of these conversational AI systems is still a big issue. Focusing on the social intelligence parts of the problem, how can we properly evaluate conversational AI systems? How should automatic metrics be used as part of the evaluation and creation of conversational AI systems? Similarly, how should we integrate human feedback?
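  To give this discussion a concrete anchor, here is a minimal sketch of generating a reply with DialoGPT through the Hugging Face transformers library; the model checkpoint and decoding settings below are just one plausible configuration, not a recommendation:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
    model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

    # Encode one user turn, appending the end-of-turn token the model expects.
    input_ids = tokenizer.encode("I failed my exam today." + tokenizer.eos_token,
                                 return_tensors="pt")

    # Generate a response; sampling settings are illustrative, not tuned.
    output_ids = model.generate(input_ids, max_length=100, do_sample=True,
                                top_p=0.9, pad_token_id=tokenizer.eos_token_id)

    # Decode only the newly generated tokens (the model's reply).
    reply = tokenizer.decode(output_ids[0, input_ids.shape[-1]:],
                             skip_special_tokens=True)
    print(reply)

  A useful exercise: probe such a model with socially loaded prompts and ask which skills from our taxonomy, if any, the replies actually demonstrate.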
3/24 Week 10: Artificial Social Agents Part 3: Embodiment and Nonverbal [synopsis]
  • Last week, we studied recent technologies to model written-text conversations. This week, we study spoken and embodied interactions, which also include nonverbal communication. What are the new technical challenges that emerge when building spoken and embodied conversational AI?
  • Large language models have shown impressive performance for text-based conversational AI. Why have we not seen the same improvements for embodied conversational AI? Is it just a question of more data? Better annotations? Or should nonverbal and multimodal interactions be modeled differently?
  • Take a moment to review the papers from week 5 which listed methods to evaluate social agents, including proposed taxonomies for social skills and social intelligence. Do current models and systems properly address the abilities and skills of social intelligence? How should we build new technologies for better embodied social agents?
  • As we start thinking beyond dyadic interactions and dialogues, what are the elements of social intelligence that are particularly important in small group settings? What are the new technical challenges to build social agents in these group settings? Can you think of concepts and technologies originally built for dyadic interactions which will not generalize to the group settings?
  • One extensively discussed problem is the type of embodiment: for example, should the social agent have a physical embodiment (e.g., a robot) or a virtual one (e.g., a virtual human)? Think about the impact of embodiment on the problem of building social agents, more specifically on their social skills and social intelligence. What are the differences? Which aspects will generalize across embodiments?
  • As a small extra exercise, imagine a world without transformers or large-scale language models (i.e., the Vaswani et al. paper was never published). How would you start addressing the problem of building embodied social agents if these tools (transformers, LLMs) were not available? In other words, take a fresh look at the problem of artificial social intelligence, without grounding it too much in current models and systems. How would you solve the problem? Where would you start?
3/31 Week 11: Multimodal Social Understanding [synopsis]
  • A popular trend is to learn large-scale pre-trained models, such as the MERLOT Reserve model. What social skills and abilities are likely to be well suited to be modeled and understood by large pre-trained models? Which ones will require a different approach?
  • A different approach is to explicitly represent and model intermediate stages of the social understanding process, including the detection of the unimodal and multimodal behaviors related to social interactions. One example is to explicitly represent the behaviors, interactions, and relationships between people using graphs or similar structures (see the graph sketch after this list). When should we use such an approach to better understand social interactions? What are its pros and cons?
  • Many large-scale pre-trained models are learned through masking (a minimal sketch of this objective also follows this list). In what situations do you expect masking to succeed, and as importantly, when will it fail? If not using masking, how should we learn models for social understanding? How can we integrate theories and knowledge of social intelligence into these large-scale pre-trained models? Do we need motion to understand social interactions, or can it be done from static images?
  • To succeed in social understanding, do agents need to experience and interact themselves? Can it be learned from observations only? Can it be learned just from text (e.g., the Social IQA dataset)? When is nonverbal and multimodal information helpful? What else would be needed to learn efficiently? What are the differences between social understanding from a second-person view (e.g., as a social agent) versus a third-person view (e.g., as an outside observer)?
  • Is question-answering the right way to evaluate multimodal social understanding? Are current benchmarks properly evaluating social understanding, or only a subset such as social perception? What would be an alternative to this benchmarking paradigm? What would it take for you to be convinced that an AI system is socially intelligent and can understand social interactions?
  • What is the most impactful project in social understanding you can imagine doing if you had 6 months to do it, given the current state-of-the-art? How would you approach it and what data would you need? What if you had 3 years and no limit on budget, what problem in social understanding would have the biggest impact?
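  To make the graph-based alternative concrete, here is a minimal sketch (using networkx) of encoding one interaction segment as a directed multigraph; the people, behaviors, and timestamps are hypothetical:

    import networkx as nx

    # Toy encoding: people are nodes; each observed behavior directed
    # from one person to another is a time-stamped edge.
    g = nx.MultiDiGraph()
    g.add_node("Ana", role="speaker")
    g.add_node("Ben", role="listener")
    g.add_edge("Ana", "Ben", behavior="gaze", modality="visual", t=(0.0, 2.5))
    g.add_edge("Ana", "Ben", behavior="question", modality="language", t=(0.4, 1.8))
    g.add_edge("Ben", "Ana", behavior="nod", modality="visual", t=(2.0, 2.4))

    # Structured queries become easy, e.g., all behaviors directed at Ben:
    for src, dst, attrs in g.in_edges("Ben", data=True):
        print(f"{src} -> {dst}: {attrs['behavior']} ({attrs['modality']})")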
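  And for the masking question, a minimal sketch of a masked-prediction objective in PyTorch; `model` is a placeholder encoder, and real multimodal systems such as MERLOT Reserve jointly mask and reconstruct across audio, video, and text rather than a single token stream:

    import torch
    import torch.nn.functional as F

    def masked_prediction_loss(model, tokens, mask_id, mask_prob=0.15):
        """Mask a random subset of tokens; train the model to recover them."""
        mask = torch.rand(tokens.shape) < mask_prob    # which positions to hide
        corrupted = tokens.clone()
        corrupted[mask] = mask_id                      # replace with [MASK] id
        logits = model(corrupted)                      # (batch, seq, vocab)
        # Loss is computed only on the masked positions.
        return F.cross_entropy(logits[mask], tokens[mask])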
4/7 Week 12: Relationships, Trust, and Ethics [synopsis]
  • During the past weeks, we studied technologies to enable artificial social agents. To help us understand the potential issues related to trust and ethics, it is good to first reflect on which applications are best suited for these technologies. What are the domains where you see artificial social agents having the most impact, in the short term and in the long term? In general, in which applications is it particularly important to model social intelligence?
  • As socially intelligent technologies are deployed in homes and workplaces, people will start interacting with them repeatedly, potentially over a long period of time, and maybe even forming relationships with them. What are all the dimensions to consider when thinking about human-agent relationships? Should we replicate human-human interactions and relationships? How should we define these relationships? Can we build a taxonomy of human-agent relationships?
  • How can we build social agents that are able to form long-term relationships? What key technologies are still missing to enable such relationships?
  • Another important facet is trust between humans and technology. What facets of trust are particularly important for social interactions with artificial social agents? Do they differ from human-human trust, or from trust in other AI technologies?
  • How can we build trust between artificial social agents and people? What design principles should we follow to ensure trustworthy technologies? How can we avoid deceiving users and properly manage expectations?
  • What are the potential ethical issues related to artificial social agents? From all the ethical issues related to AI technologies in general, which ones are particularly important in artificial social intelligence?
  • What design principles and research strategies should we consider when building artificial social intelligence? As academic researchers, how can we help design and implement these principles and strategies? How can we integrate these principles into our own research?
4/14 Week 13: No classes – CMU Carnival
  • None!
4/21 Week 14: Wrapup Discussion
4/28 Week 15: Project Presentations