15-382 - Fall 2025

Collective Intelligence

Key Information

Lectures: Sunday and Tuesday, 10:00am - 11:15am (Room 1199)

Recitation: Thursday, 10:00am - 11:15am (Room 1199)

Instructor: Yusuf Ansari

Units: 9.0

Grading: 15% Final, 10% Midterm,
20% Graded Recitation Assignments, 5% In-Class Quizzes + Participation,
25% Homework, 25% Final Project


Overview

The course focuses on the modeling and control of systems composed of multiple autonomous components or agents that interact dynamically with one another and with their environment, adapting through mutual, often non-linear, decentralized interactions. Such systems can be broadly referred to as multi-agent systems when the components can be meaningfully viewed as agents (e.g., decision-makers), or more generally as complex systems when the components are simpler entities such as molecules, oscillators, or neurons. In this sense, multi-agent systems can be seen as a subset of complex systems.

Examples of such agents/components include:

  • Biological systems
    • Ants foraging for food
    • Flocking birds
    • Pedestrians in crowds
  • Social and economic systems
    • Vehicles navigating city traffic
    • Firms competing in markets
  • Physical systems
    • Molecules in a laser synchronizing to produce coherent light
    • Coupled oscillators aligning their phases
  • Artificial systems
    • Robots in swarm or multi-robot systems
    • Artificial neurons in neural networks
    • LLMs (Large Language Models) interacting collaboratively to produce collective reasoning

Among complex systems, the most interesting cases are those that can, through decentralized interactions, produce useful system-level behaviors: displaying self-organized spatio-temporal patterns, and effectively performing computation, information dissemination, and decision-making. Loosely speaking, when this happens we say that the system displays a form of collective intelligence.

The course addresses a number of such systems, varying in the complexity and capabilities of the individual components/agents, their interaction modes, and their goals.

Objectives

The course aims to expose students to mathematical and computational models from the following fields and domains:

  • Complex Systems Theory
  • Continuous- and Discrete-Time Dynamical Systems
  • Lattice Models: Cellular Automata
  • Social Choice Theory
  • Game Theory
  • Distributed Consensus
  • Task Allocation
  • Swarm Intelligence
  • Interaction Networks
  • Pattern Formation
  • Energy-Based Neural Models
  • Collective Learning
  • Multi-LLM Systems

For each topic, the course introduces general models and examines their properties, focusing on:

  • Predicting system evolution (direct problems);

  • Designing local interaction rules to achieve specific tasks or behaviors (inverse problems);

  • Understanding self-organization and emergence;

  • Exploring real-world application domains.

Students will bridge theory and practice by implementing these models and exploring their properties in practical applications.

Students who successfully complete the course will have acquired a comprehensive understanding of multi-agent complex adaptive systems, including the challenges of modeling, prediction, and control, as well as hands-on experience implementing these systems in various domains. The interdisciplinary nature of the course exposes students to models and problems from biology, economics, robotics, and operations research.

Prerequisites

Due to the course's broad scope, students should have solid programming experience, a strong background in general CS, and a good understanding of calculus and probability theory. Accordingly, the formal prerequisites are: 21-120 (or 21-111 + 21-112) + 21-122 + 21-127 + 21-241 + 36-218 + 15-122.

Talk to the instructor if you are unsure whether your background is suitable for the course.


Course Layout

The course consists of two weekly lectures and one weekly recitation. Lectures present the core material, while recitations focus on working through examples, filling in background as needed, and reinforcing understanding of key concepts. Selected exercises from the recitations will be graded. Active participation in all classes is expected.

For each one of the different topics, the course will present relevant techniques, formal results, and practical applications.

Homework will include both written questions and programming assignments. Written work develops skills in analyzing algorithms, deriving results, and critically engaging with the material. Programming assignments emphasize learning by doing: implementing and testing algorithms in synthetic and real-world scenarios, and using the outcomes to deepen conceptual understanding. Programming will play a central role in the course.

Classes will feature short In-Class Quizzes, aimed at promoting attendance and engagement and at continually monitoring understanding of the presented concepts and techniques.

Grading: policies and schedules

Grading is based on the following distribution: 25% Homework, 25% Final Project, 10% Midterm Exam, 15% Final Exam, 20% Graded Recitation Assignments, 5% In-Class Quizzes + Participation.

The final project requires students to integrate concepts, theories, and algorithms from at least three different lecture topics and apply them to a practical problem. Students will select and review relevant scientific papers, implement and adapt key ideas to new scenarios, introduce novel mechanisms or analyses, conduct experiments, and summarize their work in a comprehensive report and an oral presentation.

For large classes, final projects can be done in groups of two or three.

The midterm exam will cover theoretical concepts, formalizations, and algorithmic analysis for defined sections of the course. The final exam will cover the whole course. During exams, students are allowed to consult a 2-page cheat sheet (written in any desired format).

Readings & Viewings

No single textbook covers the full range of topics. Lecture slides and supplementary materials (e.g., book excerpts, videos) will be provided to support different sections of the course.

A list of interesting, useful, technical, or simply enjoyable readings around the topics of the course, which can be used to clarify or expand the concepts, includes the following (start by watching the first three videos on the list; they are quite entertaining!):

Schedule of Classes (Subject to change)


Dates Topic Slides References
8/24 Introduction: General notions on collective intelligence and complex systems; examples of collective intelligence from biology, physics, social, and robotic sciences; properties of complex systems; emergence and self-organization; course organization, road map. pdf Interesting, useful, or just enjoyable readings to start with:
  • L. Fisher, The Perfect Swarm, Basic Books, 2011
  • T. Malone, M. Bernstein (eds.), Handbook of Collective Intelligence, MIT Press, 2015
  • M. Mitchell, Complexity: A Guided Tour, Oxford University Press, 2011
  • J. Ladyman, K. Wiesner, What Is a Complex System?, Yale University Press, 2020
  • M. Crichton, Prey, Harper, 2002
8/26 Dynamical systems 1: Approaches to study complex systems; dynamical systems; linear ODEs; properties and solutions; linear combination of exponentials; matrix exponential. pdf A classic textbook on ODEs, check chapters 2, 4, 10: W. Trench, Elementary Differential Equations with Boundary Value Problems, 2013
8/28 Dynamical systems 2: Study of phase portraits, flows; role of eigenvalues and eigenvectors in exponential modes; equilibrium and stability; types of equilibria; analysis of stability cases for linear ODEs. pdf

8/31 Dynamical systems 3: Perturbations and stability of linear systems; special cases for the eigenvalues; non-linear systems, properties and examples; linearization; derivatives, gradients, jacobians; linearized systems. pdf
9/2 Dynamical systems 4: Study of the phase space for non-linear systems; nullclines; limit cycles, conditions of existence; from 2D to 3D or higher dimensionality systems; strange attractors; deterministic chaos; fractal dimensions. pdf
9/4 Recitation: Dynamical systems

9/7 Discrete Dynamical Systems: From continuous to discrete dynamical systems; iterated maps, recurrence equations; examples of behaviors; cobweb diagrams; fixed points; stability of the points; logistic map; structural stability and transition to chaos; bifurcations; Lyapunov coefficients; analysis of the phase diagrams. pdf
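The logistic map covered in this lecture can be explored with a few lines of code; this is a minimal sketch (parameter values, seed state, and function name are illustrative) that discards a transient and then samples the attractor for several values of the growth parameter r:

```python
# Iterate the logistic map x_{t+1} = r * x_t * (1 - x_t) and sample the
# long-run behavior for a few values of the growth parameter r.
def logistic_orbit(r, x0=0.2, transient=500, keep=4):
    x = x0
    for _ in range(transient):          # discard transient iterations
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):               # sample points on the attractor
        x = r * x * (1 - x)
        orbit.append(round(x, 4))
    return orbit

for r in (2.8, 3.2, 3.5, 3.9):          # fixed point, 2-cycle, 4-cycle, chaos
    print(r, logistic_orbit(r))
```

Varying r traces the period-doubling route to chaos seen in the bifurcation diagram: a stable fixed point at r = 2.8, cycles at 3.2 and 3.5, and chaotic behavior near 3.9.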
9/9 Cellular Automata: From n-dimensional maps to Cellular Automata; spatially bounded lattice models; topology and neighborhoods; structure and parameters of a CA; formal definition; common lattices and boundaries; updating schemes; elementary CAs, Wolfram code; direct and inverse problems; Wolfram's classification of CA; dependence on initial conditions; propagation of information; importance of the neighborhood; a few remarkable rules: 110, 30, 184; examples of various applications. pdf
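An elementary CA with a given Wolfram rule number can be simulated directly; in this minimal sketch (ring topology, Rule 30, and lattice size are illustrative choices) each 3-cell neighborhood indexes one bit of the rule number:

```python
# Simulate an elementary cellular automaton given its Wolfram rule number,
# on a ring (periodic boundary) of binary cells.
def step(cells, rule):
    n = len(cells)
    out = []
    for i in range(n):
        # neighborhood (left, center, right) encodes a 3-bit index 0..7
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> idx) & 1)   # look up that bit in the rule number
    return out

# Rule 30: a single seed grows into an aperiodic, chaotic-looking triangle
cells = [0] * 15
cells[7] = 1
for _ in range(6):
    print("".join(".#"[c] for c in cells))
    cells = step(cells, 30)
```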
9/11 Recitation: Discrete Dynamical Systems, Cellular Automata

9/14 Break; No Classes
9/16 Networks 1: Importance of interconnection properties; topology, graphs, networks; overview of network science; multiple levels: node, edge, sub-graph, graph; key properties in graphs: degree distribution, path lengths, clustering coefficients, connected components; tools for measuring the key properties; case study with real networks, analysis of results. pdf
9/18 Networks 2: Network models: Erdos-Renyi (Random), Watts-Strogatz (Small world, High clustering), Barabasi-Albert (Power law, Scale-free); relationships with real-world networks along key network properties (degree distribution, path lengths, clustering, connected components); Milgram and small-world; hubs and power laws; introduction to the concept of network centrality. pdf
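One of these network models, the Erdos-Renyi G(n, p) random graph, can be sketched in a few lines; the values of n and p and the function name are illustrative, with the mean degree concentrating around p(n - 1):

```python
import random

# Generate an Erdos-Renyi G(n, p) random graph: each of the C(n, 2)
# possible edges is included independently with probability p.
def erdos_renyi(n, p, seed=0):
    rng = random.Random(seed)          # seeded for reproducibility
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

n, p = 200, 0.05
edges = erdos_renyi(n, p)
mean_degree = 2 * len(edges) / n       # each edge contributes degree 2
print(round(mean_degree, 2), "expected about", p * (n - 1))
```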

9/21 Networks 3: Node importance and the concept of centrality; centrality measures (degree, betweenness, closeness), self-consistent measures (eigenvector and alpha-centrality), message passing algorithms, discussion of alpha-centrality parameters, radius of centrality; PageRank algorithm, www, probabilistic eigenvector centrality, Markov chains, stationary distributions, fixed point equation, web page ranking. pdf
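The probabilistic eigenvector centrality behind PageRank can be sketched as a fixed-point iteration; the damping factor 0.85, the tiny three-page graph, and the function name are illustrative assumptions:

```python
# PageRank as the fixed point of a Markov chain: repeatedly redistribute
# rank along outgoing links, with damping factor d for random teleportation.
# Assumes every page has at least one out-link (no dangling nodes).
def pagerank(links, d=0.85, iters=100):
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - d) / n for u in nodes}    # teleportation mass
        for u in nodes:
            share = d * rank[u] / len(links[u])  # split rank over out-links
            for v in links[u]:
                new[v] += share
        rank = new
    return rank

# Tiny web graph: C is linked by both A and B, so it ranks highest
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
r = pagerank(graph)
print({k: round(v, 3) for k, v in r.items()})
```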
9/23 Swarm Intelligence 1, Particle Swarm Optimization (PSO): Concepts of Swarm Intelligence, inspiration from nature's MAS, agent-based models, mobility; different paradigms of swarm intelligence depending on communication modalities, networking, mobility; Particle Swarm Optimization (PSO), biological inspiration; notion of function optimization; elements of PSO's agents (personal best, local best, global best); impact of neighborhood definition; tweaking of various parameters; performance on benchmark functions. pdf Summary notes
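The PSO update described in this lecture (inertia plus attraction toward personal and global bests, fully connected swarm) can be sketched as follows; all parameter values (w = 0.7, c1 = c2 = 1.5, 20 particles) are illustrative choices, not the course's reference settings:

```python
import random

# Minimal PSO minimizing the sphere function f(x, y) = x^2 + y^2.
def pso(f, n=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    random.seed(0)                      # seeded for reproducibility
    pos = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]         # personal bests
    gbest = min(pbest, key=f)[:]        # global best (fully connected swarm)
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                # inertia + attraction to personal and global bests
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

sphere = lambda p: p[0] ** 2 + p[1] ** 2
best = pso(sphere)
print(best, sphere(best))
```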
9/25 Recitation: Networks, Swarm Intelligence (PSO)

9/28 Swarm Intelligence 2, PSO 2, Optimization 1: Examples of PSO's performance; PSO with no collaboration (individual agents); relationships with hill climbing agents; gradient-based agents: gradients, gradient descent (GD), step size, convex vs. non-convex functions, GD and loss functions, batch and stochastic GD variants, momentum gradient descent, Adam; Adam vs. PSO; local search concept; Hybrid PSO: PSO + local search (e.g., Adam), exploration + intensification. pdf Summary notes
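Momentum gradient descent, one of the GD variants listed here, can be sketched on a 1-D quadratic loss; the learning rate, momentum coefficient, and loss function are illustrative assumptions:

```python
# Gradient descent with momentum on the 1-D quadratic loss L(x) = (x - 3)^2.
def gd(grad, x0, lr=0.1, beta=0.9, steps=100):
    x, v = x0, 0.0
    for _ in range(steps):
        v = beta * v + grad(x)   # accumulate a velocity from past gradients
        x -= lr * v              # step along the smoothed direction
    return x

grad = lambda x: 2 * (x - 3)     # dL/dx for L(x) = (x - 3)^2
print(gd(grad, x0=0.0))          # converges toward the minimizer x = 3
```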
9/30 Swarm Intelligence 3, Optimization 2, ACO 1: Local search; properties and examples of neighborhoods; general local search metaheuristic; combinatorial optimization models; traveling salesman routing problems; k-exchange examples in local search (2-opt, 3-opt, 4-opt); iterative solution modification vs. solution construction; general solution construction metaheuristic; introduction to Ant Colony Optimization (ACO), notion of stigmergy. pdf Summary notes
10/2 Swarm Intelligence 4, ACO 2: ACO and stigmergy; stigmergic variables; diverging and converging stigmergy; pheromone laying-following in real ant colonies; shortest paths behavior; ACO metaheuristic, solution construction, Monte Carlo path sampling, sequential learning by generalized policy iteration; pheromone and heuristic arrays; states and decision nodes; ant-routing table; stochastic decision policy; pheromone updating strategies; examples of various ACO implementations and strategies for combinatorial optimization problems; design space; ACO for distributed and dynamic problems: routing in wired networks. pdf ACO book

10/5 Robot Swarms: From purely computational to embodied agent and multi-agent systems: Cyber-Physical Multi-Agent Systems (CP-MAS); taxonomy of CP-MAS; advantages and challenges; human-in-the-loop; robot swarms / multi-robot systems: properties, examples, design, role of communication and decentralization/distribution; examples of CP-MAS: adaptive routing in MANETs; drone flocking; swarm assembly; modular self-reconfigurable robot systems; stigmergy and swarm foraging / navigation; heterogeneous swarms; extra (optional) content on various aspects of human-in-the-loop swarm operations.
Recitation: Swarm Intelligence, Optimization
pdf
10/7 Task Allocation Models 1: Task allocation in multi-robot/agent systems (who does what, and when, where, how); division of labor and other forms of and role division/aggregation vs. task allocation; forms and measures of task allocation; a formal definition of MRTA; utility function; taxonomy of approaches; task taxonomies, (SR-MR, ST-MT, IA-TA, dependencies); objective functions and evaluation metrics; optimization approach; linear assignment model: various forms and algorithms, limitations. pdf
10/9 Task Allocation 2, Optimization Models: Optimization model (cont.); Linear assignment, special cases; generalized assignment (ST-SR-TA, MT-SR-IA); VRP models for MT-SR-TA; generalized assignment with dependencies; TSP, VRP, Orienteering models.
Recitation: Robot Swarms, Task Allocation Models
pdf Summary notes

10/12 Fall break
10/14 Fall break
10/16 Fall break

10/19 Task Allocation 3, Optimization Models: More optimization models: set covering, set partitioning, set packing and corresponding task allocation models; VRP as set partitioning; coalition formation and set models; other cases for the Gerkey-Mataric taxonomy; summary of mapping MRTA problem to combinatorial optimization model; centralized vs decentralized solution approaches. pdf
10/21 Task Allocation 4, Market-based Models: Market-based ideas for task allocation; auctions, various forms of auctions; auctions with human participants vs. auctions with robots; general auction mechanisms: single item, multi item, combinatorial; auctions in task allocation: parallel, combinatorial, sequential auctions; analysis of parallel, combinatorial, and sequential auctions, optimality, limitations; bounds. pdf
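The sequential-auction idea can be sketched as follows; the bidding rule (bid = current load plus task cost, a MiniMax-style marginal completion time) and the cost table are illustrative assumptions, not the course's reference formulation:

```python
# Sequential single-item auction for task allocation: tasks are auctioned
# one at a time, each robot bids its marginal cost, and the lowest bid wins.
def sequential_auction(costs, tasks):
    # costs[r][t] = cost for robot r to perform task t (hypothetical table)
    assignment = {}
    load = {r: 0.0 for r in costs}
    for t in tasks:
        bids = {r: load[r] + costs[r][t] for r in costs}  # marginal completion time
        winner = min(bids, key=bids.get)                  # lowest bid wins
        assignment[t] = winner
        load[winner] = bids[winner]
    return assignment

costs = {"r1": {"a": 1, "b": 5, "c": 2}, "r2": {"a": 4, "b": 1, "c": 2}}
print(sequential_auction(costs, ["a", "b", "c"]))
```

As the lecture's analysis shows, such greedy sequential allocations are cheap to compute and communicate but are in general not optimal for the team objective.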
10/23 Midterm exam

10/26 Task Allocation 5, Market-based Models, Ant-Based Models: Comparative analysis of parallel, combinatorial, and sequential auctions; bidding vs. optimality; bidding vs. team performance criteria; MinSum, MiniMax, MiniAverage criteria; time and communication complexity bounds; Emergent task allocation ideas; inspiration from ant colony behaviors for sorting and clustering tasks (cemetery organization, brood care); task allocation based on response threshold / (divergent) stigmergy; ant algorithm for single and multiple task allocation; adaptive task allocation and specialization through reinforcement learning; from ants to MAS. pdf
10/28 Task Allocation 6, Learning-based Models: Learning-based approaches for task allocation: reinforcement learning, clustering; examples; background on MDPs and RL.

Project discussion: Format, ideas, challenges.
10/30 Recitation: Task Allocation Models

11/2 Collective Decision-Making 1, Wisdom of Crowds: Collective decision-making: consensus vs. voting, continuous vs. discrete outcomes, independent vs. dependent agents; Wisdom of Crowds examples; experts vs. wisdom of crowds; crowd of experts vs. real crowds; diversity prediction theorem, role of bias and diversity; key condition for crowds' wisdom; madness of crowds; related frameworks. pdf
11/4 Collective Decision-Making 2, Distributed Consensus 1: Distributed consensus; general law, single integrator; continuous-time and discrete-time; graph Laplacian; properties of the graph Laplacian: Dirichlet energy and its interpretations; algebraic properties, relationship with the continuous Laplacian. See full set of slides in next lecture
11/6 Collective Decision-Making 3, Distributed Consensus 2: Continuous Laplacian; energetic, spectral, and graph-theoretic properties of the graph Laplacian; eigenvalues and connected components; examples of the impact of different Laplacians on consensus; rendezvous problem; discretization with Euler approximation, stability conditions; formation control using single integrator laws; examples of formation control; flocking and velocity alignment; from single to double integrator models; Kuramoto model of oscillator synchronization as a non-linear (yet linearizable) example. pdf (L23-L24)
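The discrete-time consensus law from these lectures (Euler discretization of xdot = -Lx with the graph Laplacian L) can be sketched in a few lines; the step size, ring topology, and initial values are illustrative, with eps chosen below 1/d_max for stability:

```python
# Discrete-time distributed consensus: each agent repeatedly updates
# x_i <- x_i + eps * sum_j (x_j - x_i) over its neighbors, which is the
# Euler discretization of xdot = -L x for the graph Laplacian L.
def consensus(x, neighbors, eps=0.2, steps=200):
    for _ in range(steps):
        x = [xi + eps * sum(x[j] - xi for j in neighbors[i])
             for i, xi in enumerate(x)]
    return x

# Ring of 4 agents; the values converge to the average of the initial states
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = consensus([4.0, 0.0, 2.0, 6.0], ring)
print([round(v, 3) for v in x])
```

On a connected graph the average is preserved at every step, so all agents converge to the initial mean (here 3.0), consistent with the Laplacian's zero eigenvalue and its constant eigenvector.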

11/9 Neural Models 1, General concepts: From neuroscience to AI; artificial neural networks (ANNs): definitions, units/neurons, topologies; feed-forward models, Multi-Layer Perceptrons (MLPs), Convolutional Neural Networks (CNNs); recurrent models. pdf
11/11 Neural Models 2, MLPs: ANNs as (non-linear) function approximators; how to train a NN, loss function, iterative gradient-based methods; types of units / activation functions; MLPs at work; hidden layers and feature maps; general properties of MLPs/ANNs; (recap) gradient descent methods; chain rule of differentiation; general concepts about backpropagation. pdf
11/13 Community day; No Classes

11/16 Neural Models 3, MLPs, CNNs: MLPs at work, step-by-step example; soft-max layer; cross-entropy loss for classification; parameters to learn; optimizers; epochs; learning curves and confusion matrix; learned feature maps; overfitting and backpropagation; from MLPs to CNNs: motivations and advantages; general concepts. pdf
11/18 Neural Models 4, CNNs: Linear filters and convolutions: formulation, examples, application; convolutions in 2D images: concepts and example applications for feature extraction; from convolution filters to CNNs; receptive fields; stride and padding; network weights and kernel coefficients; shared weights; feature maps; pooling layers; hierarchical feature extraction + flattening + MLP + softmax; CNN example architectures; collective intelligence (multi-agent) view of a CNN; computing the training parameters, optimization; transfer learning. pdf
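The 2D filtering operation at the heart of CNNs (implemented, as is common in deep learning, as cross-correlation) can be sketched directly; the tiny image, the Sobel-style vertical-edge kernel, and the function name are illustrative:

```python
# Apply a 3x3 linear filter to a tiny grayscale "image" (valid mode: the
# output shrinks by kernel_size - 1 in each dimension, no padding).
def conv2d(img, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(len(img) - kh + 1):
        row = []
        for c in range(len(img[0]) - kw + 1):
            # weighted sum over the receptive field at (r, c)
            row.append(sum(img[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

# Vertical-edge detector on an image with a dark-to-bright step
img = [[0, 0, 9, 9]] * 4
sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
print(conv2d(img, sobel_x))
```

The strong responses mark the dark-to-bright column boundary; in a CNN the kernel coefficients are not hand-designed like this but learned as shared weights.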
11/20 Neural Models 5, Graph Neural Networks: Data and networks so far; characteristics of data framed as/into networks/graphs; challenges of graph data; permutation invariance and equivariance; graph learning tasks; node embeddings; node classification problem; graph convolution: message passing, aggregation, updating; invariances of graph convolution; edge and graph level tasks. pdf

11/23 Recitation: Wisdom of Crowds, Consensus Models, Neural Models
11/25 Course Summary & Project Checkpoint
11/27 Proposal for Publication-oriented Collective Project

Federated Learning 1: Introduction to scenarios and motivations for Federated Learning (FL); paradigms of FL: cross-device, cross-silo, horizontal FL, vertical FL, federated transfer learning; architectures, strategies; generalities on aggregation techniques.
Unified pdf FL1+FL2

11/30 Federated Learning 2: Algorithms, properties, limitations and comparative analysis of aggregation techniques: Federated Averaging, Federated Stochastic Gradient Descent, Federated Distillation; impact of non-IID data, non-convex loss functions, number of local training epochs/updating; FL with LLMs as clients. Unified pdf FL1+FL2
12/2 Agentic AI, Multi-Agent(ic) Systems: Concepts, comparison with traditional GenAI; observe-reason-plan-act-evaluate loop; examples, relationships to regular AI Agents and robots; from single to multiple Agentic AI systems; collaboration and collective intelligence aspects; CrewAI as an example of multi-agent(ic) system, agent orchestration; key principles: role playing, focus, tools, cooperation, memory, guardrails. Intro AgenticAI pdf
CrewAI pdf
CrewAI example Notebook
12/4 Project Checkpoint, Q&A

Homework Assignments


Topic Due Dates
HW 1: Dynamical systems
HW 2: Cellular Automata
HW 3: Networks
HW 4: Swarm Intelligence
HW 5: Task Allocation
HW 6: Pattern formation, Social Decision-making and Consensus
HW 7: Game Theory
HW 8: Neural Models
HW 9: Collective Learning

Homework Policies

  • Homework is due by the posted deadline. Assignments submitted past the deadline will consume late days.

  • You have 3 late days in total, but you cannot use more than 2 late days on any single homework: no credit will be given for homework submitted more than 2 days after the due date. Once your 3 late days have been used, each additional late day incurs a 40% penalty.

  • You can discuss the exercises with your classmates, but you must write up your own solutions. If you find a solution in any source other than the material provided on the course website or the textbook, you must cite the source. You can work on the programming questions in pairs, but theoretical questions are always submitted individually. Make sure to include a README file with your Andrew ID and your collaborator's Andrew ID.

  • In general, for both assignments and exams, CMU's directives for academic integrity apply and must be duly followed. Students are required to refer to CMU's general policies on cheating and plagiarism: https://www.cmu.edu/policies/student-and-student-life/academic-integrity.html

Exam dates and policies

The class includes one midterm exam and one final exam. The midterm is set for TBD; the final is TBD.

During exams, students are allowed to consult a 4-page cheat sheet (written in any desired format). No other material is allowed, including textbooks, computers/smartphones, or copies of lecture handouts.

Students with special needs or disabilities are asked to discuss this with the instructor at the beginning of the semester and to make use of the assistance procedures provided by CMU: https://scotty.qatar.cmu.edu/qword/student-affairs/office-of-health-and-wellness/assistance-for-individuals-with-disabilities

Office Hours

Name Email Hours Location
Gianni Di Caro gdicaro@cmu.edu Thursdays 2:30-3:30pm + Drop in at my office any time ... M 1007