Tuesday, October 8 • 4:00pm - 4:50pm
PRO WORKSHOP (AI): Explaining Black Box Models


Complex machine learning models are often termed “black boxes”: while they deliver highly accurate predictions, they typically provide no insight into the factors driving those predictions. This can make it difficult to ensure that a model’s predictions are actionable. In churn prediction, for instance, explaining why a customer is predicted to churn is as important as the prediction itself, since it enables action to be taken to address the highlighted issues and prevent the churn.

In recent years, as operationalized ML solutions have become ever more complex and prevalent in every facet of life, model interpretability has become a critical consideration, and a growing number of papers and open source software libraries have appeared to help data scientists explain their predictions. In this presentation, we discuss recent approaches to interpreting the results of complex machine learning models, including the well-known LIME package. We present best practices for their use and deployment, along with cautionary examples and caveats associated with leveraging these techniques.

AI DevWorld 2019 Speakers

Lawrence Spracklen

Vice President of Engineering and Data Science, SupportLogic
Dr. Lawrence Spracklen leads engineering and data science at SupportLogic, heading a team applying AI to the enterprise technical support space. Prior to joining SupportLogic, Lawrence led engineering teams at two other ML startups, Alpine Data and Ayasdi. Before this, Lawrence spent over...


Tuesday October 8, 2019 4:00pm - 4:50pm PDT
AI DevWorld -- Workshop Stage 1