CS Colloquium Abstract: Transparency in predictive modeling is extremely important in many application domains. Domain experts tend not to prefer "black box" predictive models. They would like to understand how predictions are made and, ideally, prefer models that emulate the way a human expert might make a decision: with a few important variables and a clear, convincing reason for each particular prediction. I will discuss recent work on interpretable predictive modeling with decision lists. I will describe several approaches, including an algorithm where not only the predictions, but the whole algorithm itself, is interpretable to a human.

Collaborators: Dimitris Bertsimas, Allison Chang, Ben Letham, Tyler McCormick, David Madigan, and Shawn Qian.

Bio: Cynthia Rudin is an assistant professor at the
MIT Sloan School of Management in the operations research and statistics
group. She works on machine learning and knowledge discovery problems
relating to data-driven prioritization. Previously, Dr. Rudin was an associate
research scientist at the Center for Computational Learning Systems at
Columbia University, and prior to that, an NSF postdoctoral research fellow
at NYU. She holds an undergraduate degree from the University at Buffalo,
and received a PhD in applied and computational mathematics from Princeton
University in 2004. She was given an NSF CAREER award in 2011. Her work
has been featured in articles appearing in IEEE Computer, Businessweek,
ScienceNews, WIRED Science, U.S. News and World Report, Slashdot, Discovery
Channel / Discovery News, CIO magazine, and Energy Daily, and very recently,
on Boston Public Radio.
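To make the abstract's central idea concrete: a decision list is an ordered sequence of if-then rules, where the first rule whose condition matches determines the prediction and simultaneously serves as the human-readable reason for it. The sketch below is a minimal illustration of that structure only; the rule conditions, thresholds, and the toy medical-risk example are hypothetical assumptions, not taken from the talk.

```python
# A minimal sketch of a decision list: an ordered sequence of if-then rules.
# The rules and the toy patient record below are hypothetical illustrations.

def predict(decision_list, default, record):
    """Return (prediction, reason) for a record.

    decision_list is a list of (condition, prediction) pairs, checked in
    order; the first condition that holds determines the prediction, and
    that rule's docstring serves as the explanation.
    """
    for condition, prediction in decision_list:
        if condition(record):
            return prediction, condition.__doc__
    return default, "default rule"

# Hypothetical rules for a toy medical-risk example.
def rule_age_smoker(r):
    "age > 60 and smoker"
    return r["age"] > 60 and r["smoker"]

def rule_blood_pressure(r):
    "systolic blood pressure > 140"
    return r["systolic_bp"] > 140

rules = [(rule_age_smoker, "high risk"), (rule_blood_pressure, "medium risk")]

label, reason = predict(rules, "low risk",
                        {"age": 70, "smoker": True, "systolic_bp": 120})
print(label, "-", reason)  # the matching rule doubles as the explanation
```

Because the model is just an ordered rule list, a domain expert can read the entire model, not only its individual predictions, which is the sense in which the whole algorithm's output is interpretable.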