PsyNet: the online human behavior lab of the future

A scalable, open platform for high-powered, interactive, and cross-cultural behavioral experiments.

Example PsyNet results illustrating large-scale experimental paradigms enabled by the platform

PsyNet is an open, scalable experimental platform for running high-powered online behavioral studies—including large-scale interactive experiments that are difficult or impossible to run in traditional laboratory settings. It supports experiments with rich timelines, adaptive algorithms, and real-time interaction between participants, while also providing tooling for recruitment, monitoring, and data management.
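The "rich timelines" mentioned above can be pictured as an ordered sequence of pages plus control logic. The following is a minimal conceptual sketch, not PsyNet's actual API: the `Page` and `Timeline` names here are hypothetical stand-ins, and the lambdas play the role of a participant's browser responses.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Page:
    """One screen shown to the participant."""
    prompt: str
    get_response: Callable[[], str]  # in a real deployment, the browser supplies this

@dataclass
class Timeline:
    """An ordered sequence of pages. Real PsyNet timelines also support
    loops, conditionals, and adaptive branches (simplified away here)."""
    pages: list = field(default_factory=list)

    def run(self):
        responses = []
        for page in self.pages:
            responses.append((page.prompt, page.get_response()))
        return responses

# Toy run with canned responses standing in for a participant:
timeline = Timeline([
    Page("How old are you?", lambda: "29"),
    Page("Rate this melody from 1 to 7", lambda: "5"),
])
print(timeline.run())
```

The point of the abstraction is that experiment logic lives in one declarative object that the server can execute, checkpoint, and resume per participant.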

Over the past decade, online participant pools have enabled new classes of behavioral research: large social networks, cultural transmission, governance decisions, and high-dimensional perceptual modeling. PsyNet was designed to make these studies reusable, auditable, and easier to deploy, while maintaining strong standards around privacy, ethics, and reproducibility.


PsyNet prototype architecture: browser → Python server cluster → database, enabling scalable online experiments.

PsyNet builds on the open-source Dallinger framework and extends it with modular components for complex experimental structure, adaptive sampling, real-time computation, and a pipeline that supports replication from source code. It is intended for experiments spanning perception, learning, cultural evolution, collective intelligence, and human–AI interaction.

At full scale, PsyNet supports a set of core capabilities that together function as an “online human behavior lab”: reusable experiment components, automated recruitment and payment workflows, robust data storage, end-to-end reproducibility, privacy and security, support for diverse recruiting, automated data quality assurance, real-time monitoring, and integration with cloud computation.


PsyNet monitoring dashboard for live deployments (recruitment, progress, and experiment state).

This infrastructure makes it possible to run experiments that combine: (i) large-scale recruitment, (ii) structured multi-stage designs, (iii) interactive or networked participant dynamics, and (iv) computational models in the loop—without requiring bespoke engineering for each new study.
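Point (iv), computational models in the loop, is exemplified by paradigms such as Gibbs sampling with people (Harrison et al., 2020), where each trial asks a participant to adjust one stimulus dimension while the others are held fixed. The sketch below simulates that loop with a programmed stand-in for the human; the function names and the hidden preference point are illustrative assumptions, not part of PsyNet.

```python
import random

def simulated_participant(stimulus, dim, target=(0.2, 0.7, 0.5)):
    """Stand-in for a human: chooses the slider value on dimension `dim`
    that brings the stimulus closest to a hidden preferred point."""
    candidates = [i / 20 for i in range(21)]  # a 21-step slider
    def dist(v):
        s = list(stimulus)
        s[dim] = v
        return sum((a - b) ** 2 for a, b in zip(s, target))
    return min(candidates, key=dist)

def gibbs_with_people(n_iters=30, n_dims=3, seed=0):
    """Gibbs-style loop: each trial resamples one stimulus dimension,
    conditioning on the current values of the others."""
    rng = random.Random(seed)
    stimulus = [rng.random() for _ in range(n_dims)]
    for t in range(n_iters):
        dim = t % n_dims  # cycle through dimensions
        stimulus[dim] = simulated_participant(stimulus, dim)
    return stimulus

print(gibbs_with_people())  # settles on the hidden preference point
```

In a deployed study the `simulated_participant` call is replaced by a real trial served to a participant's browser, with the server carrying the chain state between trials.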


What PsyNet enables

  • Massive interactive experiments involving hundreds or thousands of participants.
  • Modular building blocks for multi-stage designs, networks, and feedback loops.
  • Integration of adaptive algorithms and model-based experimentation (e.g., sampling from subjective distributions).
  • Automated infrastructure for recruitment, monitoring, compensation, storage, and reproducible deployment.
  • Experiments that scale across languages, sites, and populations, including cross-cultural and developmental research.
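The cultural-transmission designs referenced above often take the form of transmission chains, where one participant's output becomes the next participant's input (as in serial-reproduction studies). A self-contained toy version, with noisy recall standing in for a human participant (all names here are illustrative, not PsyNet API):

```python
import random

def noisy_recall(items, rng, p_drop=0.2):
    """Stand-in participant: recalls each item with probability 1 - p_drop."""
    return [x for x in items if rng.random() > p_drop]

def transmission_chain(seed_story, generations=8, rng=None):
    """Iterated transmission: each generation's recall becomes the
    next generation's study material."""
    rng = rng or random.Random(0)
    history = [seed_story]
    for _ in range(generations):
        history.append(noisy_recall(history[-1], rng))
    return history

story = ["a", "b", "c", "d", "e", "f"]
chain = transmission_chain(story)
# Recall here can only drop items, so later generations are never longer:
# information is progressively lost along the chain.
print([len(g) for g in chain])
```

Platform support matters because each "generation" is a separate participant who must be recruited, matched into the right chain, and paid, which is exactly the bookkeeping PsyNet automates.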

Why it matters

PsyNet helps make behavioral science more scalable, interactive, reproducible, and globally inclusive—supporting research on perception, learning, cultural transmission, collective intelligence, governance, and human–AI systems in controlled, ethical, and computationally powerful online environments.

(Balietti et al., 2016; Carr et al., 2020; Centola & Baronchelli, 2015; Harrison et al., 2020; Henrich et al., 2010; Jayles et al., 2017; Lahdelma & Eerola, 2020; Lazer et al., 2018; Salganik et al., 2006; Sanborn & Griffiths, 2008; Shirado & Christakis, 2017; Wisdom et al., 2013)

Related Publications

2020

  1. Simplicity and informativeness in semantic category systems
    Jon W. Carr, Kenny Smith, Jennifer Culbertson, and 1 more author
    Cognition, 2020
  2. Gibbs sampling with people
    Peter M. C. Harrison, Raja Marjieh, Debora Adolfi, and 5 more authors
    In Advances in Neural Information Processing Systems, 2020
  3. Cultural familiarity and musical expertise impact the pleasantness of consonance/dissonance but not its perceived tension
    Imre Lahdelma and Tuomas Eerola
    Scientific Reports, 2020

2018

  1. The science of fake news
    David M. J. Lazer, Matthew A. Baum, Yochai Benkler, and 5 more authors
    Science, 2018

2017

  1. How social information can improve estimation accuracy in human groups
    Benjamin Jayles, Hyunju Kim, Robin Escobedo, and 5 more authors
    Proceedings of the National Academy of Sciences, 2017
  2. Locally noisy autonomous agents improve global human coordination in network experiments
    Hirokazu Shirado and Nicholas A. Christakis
    Nature, 2017

2016

  1. Peer review and competition in the Art Exhibition Game
    Stefano Balietti, Robert L. Goldstone, and Dirk Helbing
    Proceedings of the National Academy of Sciences, 2016

2015

  1. The spontaneous emergence of conventions: An experimental study of cultural evolution
    Damon Centola and Andrea Baronchelli
    Proceedings of the National Academy of Sciences, 2015

2013

  1. Social learning strategies in networked groups
    Thomas N. Wisdom, Xiaoqian Song, and Robert L. Goldstone
    Cognitive Science, 2013

2010

  1. Most people are not WEIRD
    Joseph Henrich, Steven J. Heine, and Ara Norenzayan
    Nature, 2010

2008

  1. Markov chain Monte Carlo with people
    Adam Sanborn and Thomas L. Griffiths
    In Advances in Neural Information Processing Systems, 2008

2006

  1. Experimental study of inequality and unpredictability in an artificial cultural market
    Matthew J. Salganik, Peter S. Dodds, and Duncan J. Watts
    Science, 2006