Project will build AI models to explain, predict and influence the social world

UChicago-led collaboration seeks to remove ‘black box’ from AI and machine learning

Data-driven models are increasingly used to simulate and make predictions about complex systems, from online shopping preferences and the performance of the stock market to the spread of disease and political unrest. But while powerful methods in machine learning and computational social science keep getting better at predicting the future, they often cannot explain why the predicted outcomes occur, rendering these models less helpful for shaping interventions and policy.

Social MIND, or Social Machine Intelligence for Novel Discovery, aims to reorient these models to emphasize not only prediction, but also explanation and intervention. With a $2 million grant from the Defense Advanced Research Projects Agency (DARPA) as part of its Ground Truth program, the collaboration between researchers at the University of Chicago and the Massachusetts Institute of Technology will build a “model of models” that combines computational approaches and pits them against one another to reveal the underlying factors driving social systems, as well as potential points of intervention.

“What we’re trying to do in social science is develop powerful explanations,” said James Evans, co-principal investigator of Social MIND and professor of sociology at UChicago. “Machine learning models generate predictions, most of which are not elegant or beautiful explanations. We need to tune, tame and refocus machine learning on the task of identifying the best explanations, those that allow us to understand and change the world.”

In general, machine learning involves running statistical analysis methods on historical data to make predictions, recognize patterns or generate realistic, simulated outputs such as text or speech. But while methods such as decision trees or neural networks can produce highly accurate results, it is often difficult to determine the reasoning behind model predictions or to disentangle cause from correlation.
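To make that tension concrete, here is a minimal sketch in Python (illustrative only, not the project’s code) in which a random forest predicts a synthetic outcome accurately, yet splits its feature importances between the true cause and a correlated proxy, leaving the question of what actually drives the outcome unanswered:

```python
# Minimal sketch (illustrative, not the project's code): an accurate
# predictor that cannot separate a true cause from a correlated proxy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
cause = rng.normal(size=n)                      # the variable that truly drives the outcome
proxy = cause + rng.normal(scale=0.1, size=n)   # a near-copy with no causal effect
y = (cause + rng.normal(scale=0.5, size=n) > 0).astype(int)

X = np.column_stack([cause, proxy])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
# Importance is split roughly evenly between cause and proxy; high accuracy
# tells us nothing about which variable an intervention should target.
print("feature importances:", model.feature_importances_)
```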

While this lack of causal insight may not matter for some applications, such as buying and selling stocks, it’s inadequate for users who need actionable insight. For example, a tech company simulating consumer response to a new product would want to know which features drive higher sales, and a health agency modeling an epidemic would need to find the key inflection points where an intervention would be most effective.
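The epidemic case can be illustrated with a toy SIR model; everything below (the model, its parameters, the halving of the transmission rate) is an assumption made for illustration, not the project’s method, but it shows how the timing of the same intervention changes the outbreak’s peak:

```python
# Toy SIR epidemic (assumed for illustration): halving transmission early
# flattens the peak; the same intervention after the peak changes nothing.
def sir_peak(beta=0.3, gamma=0.1, i0=1e-3, days=200,
             intervene_day=None, factor=0.5):
    s, i, peak = 1.0 - i0, i0, i0
    for day in range(days):
        b = beta * factor if intervene_day is not None and day >= intervene_day else beta
        new_inf = b * s * i        # new infections this step
        new_rec = gamma * i        # new recoveries this step
        s, i = s - new_inf, i + new_inf - new_rec
        peak = max(peak, i)
    return peak

print("no intervention, peak infected: %.3f" % sir_peak())
print("halve transmission at day 20:   %.3f" % sir_peak(intervene_day=20))
print("halve transmission at day 80:   %.3f" % sir_peak(intervene_day=80))
```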

“Often when we try to predict the future, we want to influence the future. To influence, we need to understand the right explanation for what happened,” Evans said. “That’s what tells us the levers to pull and push to get the outcomes we want. We’re trying to develop machine learning methods that identify the true explanation, not just the best prediction.”

To tackle this challenge, DARPA has engaged a stable of research teams to create realistic simulations of the social world, including the social dynamics of urban life, economies and trade, unstable politics, and disaster. Social MIND will use data produced by these “hidden” social models and seek to recover the true causal explanation, predict the systems’ future social states, and craft interventions that can help improve these modeled social worlds in desirable ways.

The research team will also test these models with a “Bayesian Occam’s Razor,” deconstructing the best performers into social modules that can be recombined into a richer “model of models” that provides powerful new simulations—and explanations—of complex social systems.
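As a rough illustration of the idea (not the project’s actual machinery), a Bayesian Occam’s razor can be approximated with the Bayesian information criterion, which trades goodness of fit against a penalty for extra parameters, so the simplest adequate model wins:

```python
# Sketch of a Bayesian Occam's razor via BIC (an approximation to the
# marginal likelihood); models and data here are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 60)
y = 1.5 * x + rng.normal(scale=0.2, size=x.size)    # the truth is linear

def bic(degree):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    sigma2 = resid.var()                            # MLE of the noise variance
    log_lik = -0.5 * x.size * (np.log(2 * np.pi * sigma2) + 1)
    k = degree + 1                                  # parameter count
    return k * np.log(x.size) - 2 * log_lik

for d in (1, 3, 7):
    print(f"degree {d}: BIC = {bic(d):.1f}")        # lower is better; degree 1 should win
```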

Researchers will then use these model ensembles to suggest experiments that can test their assertions, gather more simulated data, and further refine and improve explanations and predictions, generating adaptive models that grow alongside the social systems they seek to replicate and understand.
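One standard recipe for this kind of model-driven experiment selection is query-by-committee from active learning: run the next experiment where an ensemble’s members disagree most. The sketch below assumes that recipe purely for illustration; the project’s actual experiment-design methods are not described here.

```python
# Query-by-committee sketch (an assumed, standard active-learning recipe):
# propose the next experiment where ensemble members disagree most.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X_obs = rng.uniform(-3, 3, size=(20, 1))            # experiments run so far
y_obs = np.sin(X_obs[:, 0]) + rng.normal(scale=0.1, size=20)

# Fit a committee of models on bootstrap resamples of the observations.
committee = []
for seed in range(10):
    idx = rng.integers(0, len(X_obs), size=len(X_obs))
    committee.append(DecisionTreeRegressor(random_state=seed).fit(X_obs[idx], y_obs[idx]))

# Score candidate experiments by committee disagreement and pick the widest.
candidates = np.linspace(-3, 3, 200).reshape(-1, 1)
preds = np.stack([m.predict(candidates) for m in committee])
next_x = candidates[preds.std(axis=0).argmax(), 0]
print("run the next experiment at x = %.2f" % next_x)
```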

Accomplishing this task will require considerable computational power and infrastructure, which will be developed with researchers from the University of Chicago Department of Computer Science. CLIPPER and Deep Learning Hub, model-serving systems for running and comparing machine learning frameworks, will be adapted to estimate and select models with the desired qualities. The project will also create new frameworks for designing optimal experiments that yield improved models, explanations, predictions and interventions.

“Our aspiration for this project is to expand the reach of AI by supplementing the power of machine learning with high performance computing and causal modeling, to not only anticipate, but also directly understand and improve the world,” Evans said.