This course is about modeling and control of systems involving a large number of autonomous components that interact with each other, dynamically adapting to their changing environment as a result of mutual, possibly non-linear, interactions.
Examples of such components include ants foraging for food, cars in city traffic, pedestrians moving in crowds, birds flocking, firms competing in a market, robots in a swarm or multi-robot system.
Under certain conditions, such systems can produce useful system-level behaviors: they display self-organized spatio-temporal patterns and effectively perform computation, information dissemination, and decision-making. Loosely speaking, when this happens we say that the system displays a form of collective intelligence.
During the course we will study a number of such systems with different characteristics in terms of the complexity and capabilities of the individual components, their interaction modes, and their goals. In particular, Collective Intelligence will expose students to relevant mathematical and computational models from the following domains:
For each system, we will present the general model and study its general properties in terms of self-organization and emergence, the predictability of the system's evolution (the direct problem), and the design of local interaction rules aimed at attaining a specific task or behavior (the inverse problem).
Students who successfully complete the course will have acquired general knowledge of multi-component complex adaptive systems in terms of: (i) the challenges related to modeling, prediction, and control aspects, (ii) the practice of effectively implementing such systems.
Moreover, by studying systems related to different domains (biology, economy, robotics, operations research) the student will be exposed to a truly interdisciplinary background of mathematical models and application problems.
The only strict prerequisite is 15-122 (Principles of Imperative Programming). However, since the course will cover a number of different topics, students should have solid previous programming experience as well as a solid background in general CS, calculus, and the basics of probability theory.
Talk to the instructor if you are unsure whether your background is suitable for the course.
The course will be based on lectures. Recitations will be used to work out examples and cover aspects that may not be in the students' background. Students are expected to attend all classes and to actively participate with questions.
For each one of the different topics, the course will present relevant techniques, discuss formal results, and show the application to problems of practical and theoretical interest.
Homework will include papers to read and review, questions to be answered, and programming assignments. Written questions will involve working through different algorithms, deriving and proving mathematical results, and critically analyzing the material presented in class. Programming assignments will mainly involve writing code to implement and test algorithms in relevant scenarios. A strong emphasis will be placed on ``learning by doing''; therefore, the programming part will play a prominent role.
Grading is based on the following distribution: 55% homework, 30% final project, 15% midterm exam.
The final project will focus on at least two of the topics discussed during the course and will consist of: selecting and reading a few related papers; working out and implementing the main ideas, but in a (slightly) different scenario; adding something new in the "mechanisms" and/or in the analysis; and performing experiments. At the end, students write a final report on the work done, starting from a critical analysis of the selected papers from the literature.
During the last week, the student will make a presentation describing the work done and showing the achieved results.
Final projects can be done in groups. However, each student must write his/her own individual report, and the presentation has to be organized in a "collective" way (i.e., each participant has to present a substantial part of the work).
The midterm exam will include theory, concepts, and math, and is set for ... During the exam students are allowed to consult a one-page cheat sheet (written in any desired format).
In addition to the slides, the instructor will provide additional material during the course to cover specific parts of it.
|1/7||Introduction: General notions on complex systems; examples of collective intelligence from biology, physics, social, and robotic sciences; emergence and self-organization; course road map|| Interesting, useful, or just enjoyable readings:
|1/9||Continuous-time dynamical systems 1: Time, entropy, and self-organization; examples from physics and chemistry; modeling complex systems; introduction to dynamical systems (ODEs)|
|1/11||Continuous-time dynamical systems 2: Flows and phase portraits; separation of variables; population growth models and solutions||A (relatively old) survey on population growth|
|1/14||Continuous-time dynamical systems 3: Stability concepts; Lyapunov and structural stability; fundamental theorems on the solutions of linear ODEs; eigenvalues and eigenvectors in linear systems; introduction to the classification of equilibrium points||A classic textbook on ODEs, check chapter 10 (but also 2 and 4): W. Trench, Elementary Differential Equations with Boundary Value Problems, 2013|
|1/16||Continuous-time dynamical systems 4: Qualitative analysis of stability in linear systems; phase portraits vs. cases for eigenvalues and eigenvectors; classification of equilibrium points|
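The qualitative classification of equilibrium points for a planar linear system x' = Ax can be sketched in a few lines of Python using only the trace and determinant of A (a minimal illustration; the degenerate cases with det = 0 are omitted):

```python
# Sketch: classify the equilibrium of a planar linear system x' = Ax
# from the trace and determinant of A (degenerate det == 0 cases omitted).
def classify(A):
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det          # discriminant of the eigenvalue equation
    if det < 0:
        return "saddle"               # real eigenvalues of opposite sign
    if disc >= 0:                     # real eigenvalues of the same sign
        return "stable node" if tr < 0 else "unstable node"
    # complex eigenvalues: the sign of the real part (tr/2) decides
    if tr == 0:
        return "center"
    return "stable spiral" if tr < 0 else "unstable spiral"

print(classify([[-2, 0], [0, -3]]))   # stable node
print(classify([[0, 1], [-1, 0]]))    # center
print(classify([[1, 0], [0, -1]]))    # saddle
```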
|1/18||Continuous-time dynamical systems 5: Analysis of solutions and stability in non-linear systems; stability under perturbation of the vector field near equilibrium points; linearization techniques; derivatives and Jacobian matrix; damped pendulum case study; separatrices and basins of attraction; nullclines||Book extract on Lotka-Volterra prey-predator models studied using nullclines|
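As a companion to the Lotka-Volterra reading, a minimal forward-Euler integration of the prey-predator model can be sketched as follows (the parameter values are hypothetical, chosen only for illustration):

```python
# Sketch (hypothetical parameters): forward-Euler integration of the
# Lotka-Volterra model  dx/dt = a*x - b*x*y,  dy/dt = c*x*y - d*y.
def lotka_volterra(x, y, a=1.0, b=0.5, c=0.2, d=0.6, dt=0.01, steps=5000):
    """Return the trajectory as a list of (prey, predator) points."""
    traj = [(x, y)]
    for _ in range(steps):
        dx = a * x - b * x * y
        dy = c * x * y - d * y
        x, y = x + dx * dt, y + dy * dt
        traj.append((x, y))
    return traj

traj = lotka_volterra(2.0, 1.0)
# The coexistence equilibrium (where the nullclines cross) is (d/c, a/b) = (3, 2);
# trajectories orbit around it, so both populations remain positive.
assert all(x > 0 and y > 0 for x, y in traj)
```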
|1/21||Continuous-time dynamical systems 6: Discussion of Homework 1, python software for the analysis of dynamical systems, review of general concepts so far||See HW1|| Symbolic manipulation: SymPy
Scientific computing: SciPy Library
Plotting and visualization: Matplotlib
|1/23||Continuous-time dynamical systems 7: Global stability notions; basin of attraction; Lyapunov functions; geometry of attractors; limit cycles; existence of closed / periodic trajectories; Poincaré-Bendixson theorem; introduction to strange attractors and deterministic chaos in higher dimensions|
|1/25||Continuous-time dynamical systems 8: Attractors; strange attractors; Lorenz non-linear system; divergence and volume contraction in the phase space; bifurcations (saddle-node, transcritical, pitchfork, super- and sub-critical, Hopf); route to chaos; characterization of deterministic chaos|
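Sensitivity to initial conditions in the Lorenz system can be demonstrated with a few lines of Euler integration (a minimal sketch using the classic chaotic parameters sigma = 10, rho = 28, beta = 8/3):

```python
# Sketch: Euler integration of the Lorenz system in the chaotic regime.
def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + sigma * (y - x) * dt,
            y + (x * (rho - z) - y) * dt,
            z + (x * y - beta * z) * dt)

# Two trajectories starting 1e-6 apart separate exponentially fast:
# sensitivity to initial conditions, the hallmark of deterministic chaos.
a, b = (1.0, 1.0, 1.0), (1.0, 1.0, 1.0 + 1e-6)
for _ in range(20000):  # 20 time units at dt = 0.001
    a, b = lorenz_step(a), lorenz_step(b)
sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
assert sep > 1e-3  # the tiny perturbation has been amplified enormously
```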
|1/28||Discrete-time dynamical systems 1: Formalization of the notion of dynamical system; flows and maps; equilibrium points and stability; logistic map; analysis of the transition from order to chaos; bifurcations and period doubling phenomena; orbit diagrams|
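The transition from order to chaos in the logistic map can be observed directly by counting the distinct values on the long-run attractor for different values of r (a minimal sketch; the specific r values are standard illustrative choices):

```python
# Sketch: iterate the logistic map x_{n+1} = r * x * (1 - x) and count the
# distinct values visited on the long-run attractor.
def logistic_attractor(r, x0=0.2, transient=1000, keep=64):
    x = x0
    for _ in range(transient):       # discard the transient
        x = r * x * (1 - x)
    orbit = set()
    for _ in range(keep):
        x = r * x * (1 - x)
        orbit.add(round(x, 6))
    return len(orbit)

assert logistic_attractor(2.8) == 1    # stable fixed point
assert logistic_attractor(3.2) == 2    # period-2 cycle
assert logistic_attractor(3.5) == 4    # period-4 cycle (period doubling)
assert logistic_attractor(3.9) > 10    # chaotic regime: no short cycle
```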
|1/30||Discrete-time dynamical systems 2: Study of the occurrence of periodic windows in the orbit diagram of the logistic map; different types of bifurcations; sensitivity to initial conditions and Lyapunov exponents; Hénon map; from n-dimensional maps to Cellular Automata; spatially bounded lattice models; historical notes on CAs||S. Wolfram, A New Kind of Science, 2002. THE book on Cellular Automata and much more: a (perhaps controversial but certainly original) philosophical view on the computational universe, freely available after its 15th anniversary|
|2/1||Cellular Automata 1 / Discrete-time dynamical systems 3: CA as lattice models; structure and parameters of a CA; formal definition; lattices and boundaries; updating schemes; elementary CA; Wolfram code; direct and inverse problems; stability and Lyapunov exponents; diversity of behaviors|
|2/4||Cellular Automata 2 / Discrete-time dynamical systems 4 / Homework 2: Wolfram's classification of Cellular Automata; dependence on initial conditions; propagation of information; Lyapunov exponents; importance of the neighborhood; direct vs. inverse problem; a few remarkable rules: 110, 30, 184|
|2/6||Cellular Automata 3 / Discrete-time dynamical systems 5: Application case study: CAs as simulation models of car traffic; rule 184; density-dependent behavior and phase transitions; Nagel-Schreckenberg one-dimensional model; boundary conditions; Velocity-Dependent Randomization (VDR) model; metastability and lifetime of traffic jams; effect of traffic lights; Rickert-Nagel-Schreckenberg two-dimensional model with lane changes|
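Rule 184 as a minimal traffic model fits in a few lines: a car (1) advances one cell per step if the cell ahead is empty (0). A sketch on a ring road, showing car conservation and the free-flow regime below density 1/2:

```python
# Sketch: elementary CA rule 184 on a ring road.
# New cell = 1 if a car is there and blocked, or if a car moves in from behind.
def rule184_step(road):
    n = len(road)
    return [1 if (road[i] == 1 and road[(i + 1) % n] == 1)   # blocked car stays
            or (road[i] == 0 and road[(i - 1) % n] == 1)     # car moves in
            else 0
            for i in range(n)]

road = [1, 0, 0, 1, 1, 0, 0, 0]  # density 3/8 < 1/2: the jam dissolves
for _ in range(10):
    nxt = rule184_step(road)
    assert sum(nxt) == sum(road)  # cars are conserved
    road = nxt
# Free flow: every car now has an empty cell ahead of it
n = len(road)
assert all(not (road[i] == 1 and road[(i + 1) % n] == 1) for i in range(n))
```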
|2/8||Cellular Automata 4 / Discrete-time dynamical systems 6: Application of Cellular Automata for modeling complex dynamical systems; Conway's Game of Life: rules, life forms, properties, propagation of information; forest fire model; prey-predator model; bacterial diffusion model; quorum sensing; rock-paper-scissors CA; simple fluid simulation; generative music approaches|| Check the videos' links included in the slides!
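One synchronous update of Conway's Game of Life can be written compactly on a sparse set of live cells (a minimal sketch; a cell is alive next step with exactly 3 live neighbors, or 2 if it is already alive):

```python
# Sketch: one synchronous Game of Life step on a set of live (x, y) cells.
from itertools import product

def life_step(live):
    counts = {}
    for (x, y), (dx, dy) in product(live, product((-1, 0, 1), repeat=2)):
        if (dx, dy) != (0, 0):
            cell = (x + dx, y + dy)
            counts[cell] = counts.get(cell, 0) + 1   # live-neighbor count
    # Birth with 3 neighbors; survival with 2 or 3.
    return {c for c, k in counts.items() if k == 3 or (k == 2 and c in live)}

blinker = {(0, 0), (1, 0), (2, 0)}   # the classic period-2 oscillator
assert life_step(blinker) == {(1, -1), (1, 0), (1, 1)}
assert life_step(life_step(blinker)) == blinker
```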
Scientific articles on specific CA models:
|2/11||Swarm Intelligence 1 / PSO 1: Limitations of the Cellular Automata model; from CA to a more general agent-based modeling; basic notions on optimization problems; mathematical programming and black-box optimization; general structure of Particle Swarm Optimization (PSO); swarm intelligence: a definition and general concepts; swarm intelligence frameworks vs. communication and agent types|
|2/13||National sports day|
|2/15||Swarm Intelligence 2 / PSO 2: Reynolds' boids; NetLogo; Particle Swarm Optimization (PSO): general pseudo-code; interpretation of the different components; exploration-exploitation dilemma; properties of different neighborhoods; properties of acceleration coefficients; constriction coefficient; fully-informed PSO; variance of performance; theoretical properties||A. Engelbrecht, Computational Intelligence, ch. 16, Particle Swarm Optimization|
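The canonical gbest PSO velocity/position update can be sketched as follows (the coefficient values w = 0.7, c1 = c2 = 1.4 and the sphere objective are hypothetical illustration choices, not the course's reference settings):

```python
# Sketch of gbest PSO minimizing the sphere function f(x) = x1^2 + x2^2.
import random

random.seed(0)

def f(x):
    return sum(xi * xi for xi in x)

dim, n_particles, w, c1, c2 = 2, 20, 0.7, 1.4, 1.4
pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
vel = [[0.0] * dim for _ in range(n_particles)]
pbest = [p[:] for p in pos]          # personal best positions
gbest = min(pbest, key=f)[:]         # global (swarm) best position

for _ in range(200):
    for i in range(n_particles):
        for d in range(dim):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (w * vel[i][d]                           # inertia
                         + c1 * r1 * (pbest[i][d] - pos[i][d])   # cognitive pull
                         + c2 * r2 * (gbest[d] - pos[i][d]))     # social pull
            pos[i][d] += vel[i][d]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i][:]
            if f(pbest[i]) < f(gbest):
                gbest = pbest[i][:]

assert f(gbest) < 1e-3  # the swarm has converged close to the optimum (0, 0)
```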
|2/18||Swarm Intelligence 3 / Classic Optimization: Single agent PSO; properties of directional derivatives and gradient vectors; algorithm of gradient descent / ascent; effect of constant step-size; application example to the minimization of the loss function in supervised learning; stochastic gradient descent; adaptive step-size as one-dimensional optimization; analytic step-size determination using the 2nd order Taylor expansion; properties of the Hessian matrix; quadratic forms; geometry in the neighborhood of a local optimum; adaptive step-size without the Hessian||Summary of one-dimensional search methods|
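The effect of a constant step size in gradient descent can be shown on a simple quadratic (a hypothetical example; for f(x, y) = x^2 + 10 y^2 the gradient is (2x, 20y) and the curvature bound is L = 20):

```python
# Sketch: constant-step gradient descent on f(x, y) = x^2 + 10*y^2.
def gradient_descent(x, y, eta=0.05, steps=500):
    for _ in range(steps):
        x, y = x - eta * 2 * x, y - eta * 20 * y   # x <- x - eta * grad f
    return x, y

x, y = gradient_descent(3.0, 2.0)
assert abs(x) < 1e-6 and abs(y) < 1e-6   # converges to the minimum (0, 0)

# A step size above 2/L (here L = 20, so eta > 0.1) makes the iterates diverge
# along the high-curvature direction:
xd, yd = gradient_descent(3.0, 2.0, eta=0.11)
assert abs(yd) > 1e6
```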
|2/20||Swarm Intelligence 4 / Optimization meta-heuristics: Multiple restart gradient descent; black-box optimization approaches; iterative solution modification meta-heuristic; local search; hill-climbing; global search: basic ideas of iterated local search, tabu search, simulated annealing; from single-agent to multi-agent strategies; solution construction meta-heuristic; neighborhoods and information exchange in natural systems; swarm intelligence algorithm design; swarm intelligence frameworks; stigmergic communication; converging and diverging stigmergy; pheromone laying-following behavior in ant colonies and shortest path finding|
|2/22||Swarm Intelligence 5 / Ant Colony Optimization 1: From ant colonies to ant-like agents; path finding on decision graphs; pheromone and heuristic information for local decision-making; stochastic policy learning; exploration vs. exploitation; pheromone variables as parametric collective learning; Ant Colony Optimization (ACO) meta-heuristic in pseudo-code; notions on generalized policy iteration and Monte Carlo path sampling; TSP case study|
|2/25||Swarm Intelligence 6 / Ant Colony Optimization 2: Combining pheromone and heuristic information in Ant-routing table; functional forms for the ant stochastic decision policy; different strategies and timing for pheromone updating (i.e., policy updating); AntSystem, the first ACO instance, for TSP; issues in the AS strategies for decision policy and pheromone updating; ACS, an improved system, with global-best updating for exploitation and online step-by-step pheromone updating for exploration; ACS + Local Search / Daemon action (2-Opt or 3-Opt); MMAS for dealing with pheromone bounds and restart in case of stagnation; elitist ant selection for pheromone updating; choices to make when designing ACO algorithms; notes on theoretical results; issues in dynamic scenarios (data routing in networks)||ACO book|
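The pheromone laying-following feedback loop behind ACO can be sketched on the classic double-bridge setup (all parameter values here are hypothetical): ants choose a branch with probability proportional to its pheromone and deposit an amount inversely proportional to the branch length, so the shorter branch is progressively reinforced.

```python
# Sketch: stigmergic reinforcement on a two-branch (double-bridge) graph.
import random

random.seed(1)
lengths = {"short": 1.0, "long": 2.0}
tau = {"short": 1.0, "long": 1.0}    # initial pheromone on each branch
rho = 0.1                            # evaporation rate

for _ in range(200):
    total = tau["short"] + tau["long"]
    deposits = {"short": 0.0, "long": 0.0}
    for _ant in range(10):
        # Choose a branch proportionally to its pheromone level
        branch = "short" if random.random() < tau["short"] / total else "long"
        deposits[branch] += 1.0 / lengths[branch]   # deposit 1/length
    for b in tau:
        tau[b] = (1 - rho) * tau[b] + deposits[b]   # evaporate, then deposit

# Positive feedback concentrates pheromone on the shorter branch.
assert tau["short"] > 5 * tau["long"]
```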
|2/27||Swarm Intelligence 7 / Ant Colony Optimization 3: Types of problem scenarios; centralized, distributed, decentralized; stationary, dynamic; routing problems; data routing in networks; generalities on network routing protocols and algorithms; routing in dynamic networks: MANETs; ACO for routing in MANETs: AntHocNet; pheromone tables; reactive path setup; proactive path improvement; stochastic data routing; creation and maintenance of path bundles||AntHocNet article|
|3/1||Task Allocation 1: Generalities on task allocation, division of labor, role assignment, social organization; optimization formulation for multi-agent task allocation; problem taxonomy; optimization models of task allocation: linear assignment, iterated assignment, online assignment, generalized assignment; complexity and basic approaches||Gerkey and Matarić taxonomy of TA problems|
|3/11||Task Allocation 2: Optimization models of task allocation: VRP, TSP, Orienteering models, set covering, set packing, set partitioning, vehicle routing as set partitioning; coalition models: set partitioning and set covering; subset selection vs. sequence ordering in combinatorial optimization and task allocation|
|3/13||Review of topics studied so far|
|3/18||Task Allocation 3: Emergent task allocation: ant-inspired methods; market-based methods; auction models with humans and with robots; single-item, multiple-items, combinatorial auctions; types of auctions for task allocation|
|3/20||Task Allocation 4: Auction models for robot task allocation: parallel, combinatorial, and sequential auctions; multi-robot routing example; properties of parallel auctions; properties of combinatorial auctions; properties of sequential auctions; performance criteria and bidding rules; complexity of computing optimal bids (TSP); computational and communication complexity of the different auction mechanisms; optimal task allocation model for the multi-robot routing problem: VRP; worst-case deviation from optimality using bidding rules in sequential auctions|
|3/22||Correction of midterm, discussion of the new homework|
|3/25||Game Theory 1: Basic notions and definitions; classification of game scenarios; pure and randomized strategies; games in normal-form; prisoner's dilemma; dominant strategies; Pareto optimality vs. equilibria; Nash equilibrium|
|3/27||Game Theory 2: Properties of Nash equilibrium; coordination and competition games; mixed strategies and Nash equilibrium; Nash theorem; computation of mixed Nash equilibria|
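For 2x2 games, the mixed Nash equilibrium follows directly from the indifference condition. A minimal sketch, using matching pennies as the example:

```python
# Sketch: mixed Nash equilibrium of a 2x2 game via the indifference condition.
def mixed_ne_2x2(A):
    """A[i][j] = row player's payoff. Returns the column player's mix q
    (probability of her first action) that leaves the row player indifferent:
    q*a + (1-q)*b = q*c + (1-q)*d  =>  q = (d - b) / (a - b - c + d)."""
    (a, b), (c, d) = A
    return (d - b) / (a - b - c + d)

# Matching pennies: the unique equilibrium is the uniform 50/50 mix.
assert mixed_ne_2x2([[1, -1], [-1, 1]]) == 0.5
```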
|3/29||Game Theory 3: Finding mixed Nash equilibria in zero-sum symmetric games; game of chicken and correlated equilibrium; computation of correlated equilibrium; implementation of CE; relations between NE and CE; Stackelberg, leader-follower games; tractability of computing Stackelberg equilibrium; security games: modeling and basic properties|
|4/1||Evolutionary Game Theory 1: Obtaining cooperative / altruistic strategies; population-based game theory; biological motivations; payoffs, fitness, and descendants; types of games; core questions about equilibrium and stability; mutants and ESS||Slides grouped in lecture 4/5|
|4/3||Evolutionary Game Theory 2: ESS in the hawk-dove pairwise game; group selection vs. Darwinian selection; symmetric pairwise contest games; Nash equilibria vs. ESS: need for being resistant to mutant invasion; finding an ESS using Nash equilibrium||Slides grouped in lecture 4/5|
|4/5||Evolutionary Game Theory 3: geometrical representation of utility functions and Nash equilibria; mixed strategies and state space; game simplex; using calculus for finding NE; application in pairwise contests in monomorphic populations|
|4/8||Evolutionary Game Theory 4: games against the field; polymorphic populations and two-strategy pairwise contests; replicator dynamics; fixed points; Nash equilibria and fixed points||Slides grouped in lecture 4/12|
|4/10||Evolutionary Game Theory 5: ESS, evolutionary end points, and replicator dynamics; asymptotically stable fixed points; relations between ESS, Nash, asymptotically stable fixed points, and fixed points in two-actions population games; linearization methods||Slides grouped in lecture 4/12|
|4/12||Evolutionary Game Theory 6: relations between ESS, Nash, asymptotically stable fixed points, and fixed points in population games with more than two pure strategies; examples of different dynamics and equilibria; rock-paper-scissors population games|
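Convergence of the replicator dynamics to a mixed ESS can be sketched on the hawk-dove game (v = 2 and c = 4 are hypothetical payoff values; the mixed ESS then puts hawk share at v/c = 0.5):

```python
# Sketch: discrete-time replicator dynamics for the hawk-dove game.
def replicator_step(p, v=2.0, c=4.0, dt=0.01):
    """p = share of hawks. Payoffs: HH = (v-c)/2, HD = v, DH = 0, DD = v/2."""
    f_hawk = p * (v - c) / 2 + (1 - p) * v   # expected fitness of a hawk
    f_dove = (1 - p) * v / 2                 # expected fitness of a dove
    f_avg = p * f_hawk + (1 - p) * f_dove    # population average fitness
    return p + dt * p * (f_hawk - f_avg)     # replicator update

p = 0.1
for _ in range(20000):
    p = replicator_step(p)
assert abs(p - 0.5) < 1e-3  # the population converges to the mixed ESS v/c
```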
|4/15||Wisdom of the Crowd + Networks: wisdom of the crowd; experts vs. non-experts; prediction diversity theorem; role of independence and diversity; role of interconnection networks on the dynamics and behavior of a complex system; directed vs. undirected networks; adjacency matrix; number of paths and walks; diameter; average node degree; degree distribution; sparsity of real networks; power-law and scale-free property; small-world property; local clustering; real vs. random networks; Watts-Strogatz model|
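The prediction diversity theorem — crowd squared error equals the average individual squared error minus the variance (diversity) of the predictions — can be verified numerically on toy data (the guesses below are hypothetical):

```python
# Sketch: the diversity prediction theorem on hypothetical estimates.
estimates = [8.0, 12.0, 9.0, 15.0, 6.0]   # individual guesses
truth = 10.0
n = len(estimates)

crowd = sum(estimates) / n                               # crowd's prediction
crowd_err = (crowd - truth) ** 2                         # crowd squared error
avg_err = sum((e - truth) ** 2 for e in estimates) / n   # avg individual error
diversity = sum((e - crowd) ** 2 for e in estimates) / n # prediction variance

# crowd error = average error - diversity (holds as an algebraic identity)
assert abs(crowd_err - (avg_err - diversity)) < 1e-9
```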
|4/17||Interaction Networks: research paper on the role of interaction networks (how information spreads based on neighborhood structure and network topology) in the emergence of diversity of behaviors in complex systems, and, in particular, in Cellular Automata (evolved using Genetic Algorithms)|
|4/19||Summary + Q&A|
|Homework 1: Study of solutions and stability of non-linear dynamical models of competing populations||hw1.pdf||28 Jan|
|Homework 2: Modeling of complex systems, Stability analysis, Bifurcations, Iterated maps, Deterministic chaos||hw2.pdf||13 Feb|
|Homework 3: Cellular Automata, PSO||hw3.pdf||11 Mar|
|Homework 4: Optimization algorithms, ACO, Task allocation||hw4.pdf||5 Apr|
|Homework 5: Classical and Evolutionary Game Theory||hw5.pdf||19 Apr|
|Homework 6: Course summary, applications to related concepts in complex systems||hw-final.pdf|
Homework is due on Autolab by the posted deadline. Assignments submitted past the deadline will incur the use of late days.
You have 6 late days in total, but cannot use more than 2 late days per homework; no credit will be given for homework submitted more than 2 days after the due date. After your 6 late days have been used, you will receive a 20% deduction for each additional late day.
You can discuss the exercises with your classmates, but you must write up your own solutions. If you find a solution in any source other than the material provided on the course website or the textbook, you must cite the source. You can work on the programming questions in pairs, but theoretical questions are always submitted individually. Make sure that you include a README file with your Andrew ID and your collaborator's Andrew ID.
In general, for both assignments and exams, CMU's directives for academic integrity apply and must be duly followed.
|Gianni Di Caro||firstname.lastname@example.org||Thursdays 2:30-3:30pm + Drop in at my office any time ...||M 1007|