Highlights From ICML 2016

July 5, 2016

I just got back from the International Conference on Machine Learning (ICML) in New York City. For five days, machine learning researchers and practitioners presented and discussed the latest trends and cutting-edge algorithms in the field.

I was really excited to use my yearly professional development opportunity to attend this conference. We help a variety of clients collect detailed data about how their website is being used. However, after you collect all of this data, it’s important to know how to use it. Machine learning provides us with statistical tools and a logical framework for using data to make decisions.

As a result, it is having a huge impact on our field, shaping everything from how search results are returned, to A/B testing, to how companies allocate their marketing budgets. Attending this conference kept me up to date with the latest developments in machine learning and gave me new ideas about how we can help our clients make use of their Google Analytics data.

What Is Machine Learning?

You may have seen this term in job descriptions for data scientist and big data positions. Machine learning is a cross-disciplinary field that combines advanced mathematical and statistical theory with the power of modern computing to develop methods for making data-driven decisions about real-world problems. As the demand for data scientists has grown, the popularity of machine learning has exploded: there were over 3,000 attendees at this year's conference, more than double the number from 2015!

So, What Is the Latest News?

Right now the hot topic in machine learning is neural networks (“neural nets” for short). This was reflected at the conference by a large number of popular talks on the subject. A neural network is a fairly complex type of algorithm that was originally built to mimic how the human brain processes information. It has achieved stunning results in a number of applications, such as computer vision.

However, there were many exciting results in other areas as well. For example, there were several talks on bandits and reinforcement learning, which provide advanced tools for running online experiments (similar to A/B or multivariate testing) and are used in Google's content experiments. There were also results in online learning, which helps us update algorithms and statistical models in real time. Causal inference was also discussed; it can (sometimes) tell us whether certain actions were actually responsible for the results we observed, or whether other factors were at play.
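To make the bandit idea concrete, here is a minimal epsilon-greedy sketch (my own illustration, not an algorithm presented at the conference): two ad variants compete, most traffic goes to the variant with the best observed click rate, and a small fraction of traffic keeps exploring. The click rates and traffic numbers are made up.

```python
# A minimal epsilon-greedy bandit sketch (illustrative only).
import random

true_click_rates = [0.04, 0.06]   # unknown to the algorithm; used only to simulate clicks
clicks = [0, 0]                   # observed clicks per variant
shows = [0, 0]                    # observed impressions per variant
epsilon = 0.1                     # fraction of traffic reserved for exploration

for _ in range(10000):
    if random.random() < epsilon or 0 in shows:
        arm = random.randrange(2)                              # explore
    else:
        arm = max((0, 1), key=lambda a: clicks[a] / shows[a])  # exploit the best so far
    shows[arm] += 1
    clicks[arm] += random.random() < true_click_rates[arm]     # simulate a click

for a in (0, 1):
    print("variant %d: %d/%d = %.3f" % (a, clicks[a], shows[a], clicks[a] / shows[a]))
```

Unlike a fixed 50/50 A/B test, the bandit shifts traffic toward the better variant while the experiment is still running, which reduces the cost of showing users the weaker ad.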

In addition to the talks on the latest algorithms, four featured speakers gave longer talks on the impact and evolution of machine learning in several different applications. In the first, Susan Athey discussed how machine learning and online ecommerce are affecting the field of computational economics. If you are interested in the economics behind online advertising and internet search, you should take a look at her excellent research. The other featured talks covered fraud detection, computer vision, and the application of advanced linear algebra techniques to increase computational speed.

Workshops

The last two days of the conference were devoted to workshops. Each workshop consisted of a number of talks focused on a particular theme or application, with time for audience discussion and question-and-answer sessions with a panel of experts. I sat in on the workshops on Human Interpretability in Machine Learning and Online Advertising Systems.

The workshop on Human Interpretability in Machine Learning was popular among scientists, particularly social scientists (I guess some people consider them scientists), as well as medical researchers. There were several important questions discussed that have direct applications to web analytics:

  1. What is the real goal of our statistical model or analysis? For example, do we want to find the right ad for the right user? Or are we trying to validate ideas about how our users behave? Or are we exploring new ideas for marketing strategies? While we know that certain models work better for some goals than others, it can be hard to quantify the success of more intangible goals. In other words, prediction accuracy may not always be the best measure of how well an algorithm works.
  2. Even if our goal is prediction, we may still need to explain our model to other business stakeholders. How can we do that in a meaningful way? Do we need different explanations for different types of stakeholders, and how do we know that those explanations are understandable to a business user? Finally, are we willing to sacrifice some accuracy in exchange for a more transparent and understandable model? That trade-off is hard to reason about when we cannot precisely measure how well an explanation works, especially across multiple audiences. (A toy illustration of the accuracy trade-off appears after this list.)
  3. Machine learning algorithms are only as good as the data they are based on. This is particularly important when the data reflects discrimination that has occurred in the real world. As a result, our algorithms may discriminate against certain groups of people in unintended ways. One way to address this problem is to leave variables like race, gender, or religion out of our datasets. However, these traits may show up in unexpected places like a user’s zip code. A different approach is to create a model that includes these variables, find the source of the discrimination in our model, and then manually update the model to account for the discrimination. In either case, it is vitally important to be critical of our data and our models to ensure they are not acting in unintended ways.
  4. Can more interpretable models help us avoid mistakes on a small but important subset of users? Can interpretability help us anticipate a model's long-term negative effects, or its susceptibility to certain changes in the world?
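To make the trade-off in point 2 concrete, here is a small sketch (my own example on synthetic data, not code from the workshop): we fit a depth-3 decision tree, which we can print out and walk a stakeholder through, alongside a gradient-boosted model that is typically more accurate but much harder to explain.

```python
# Accuracy vs. interpretability on synthetic "user conversion" data (illustrative only).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic data: 20 behavioral features, binary converted/not-converted outcome.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable model: a shallow decision tree we can print and explain.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# Higher-capacity model: usually more accurate, much harder to explain.
boost = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("tree accuracy:  %.3f" % tree.score(X_test, y_test))
print("boost accuracy: %.3f" % boost.score(X_test, y_test))
print(export_text(tree))  # the entire decision logic fits on one screen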

Talks in the Online Advertising Systems workshop were given by employees at Google, Facebook, Microsoft, Criteo, Nanigans, Telecom ParisTech, RTB House, and Quantcast. (There was also a workshop on personalization, which I unfortunately could not attend because, well, I haven't figured out how to be in two places at once. But I hear it was b-a-n-a-n-a-s.) Below are some interesting questions that were discussed at the advertising workshop:

  1. Online experiments often optimize for short-term effects. For example, a clickbait news article might be promoted over a well-researched, in-depth analysis piece. How can we change our algorithms to focus on good long-term outcomes for our business? A few solutions were proposed, including bidding based on a customer's estimated lifetime value and changing success metrics from clicks to sales.
  2. How can we design models to deal with situations that appear rarely or not at all in our data, such as a small group of high-value users or new types of products?
  3. How can we make our models robust and relevant in a market that is constantly in flux? One approach is to build intuitive models that give us a better understanding of causal relationships and changing parameters, so that we can combine human intuition and creativity with the power of the model. Many experts suggested updating your model frequently, or even in real time, to keep up with changing trends online. Finally, continual exploration and experimentation through contextual bandit learning were also proposed.
  4. Can we identify users across different devices using cookie matching?
  5. How can we determine the correct attribution model, and how does this choice affect overall online profitability? Although no solutions (beyond testing a few out) were presented, this was discussed as a major issue in the field. (A toy comparison of two common attribution rules follows this list.)
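Returning to the attribution question, here is a toy comparison of two common rules, last-click and linear attribution (my own illustration; the workshop did not endorse any specific model). The channel names and conversion paths below are made up.

```python
# Last-click vs. linear attribution over toy conversion paths (illustrative only).
from collections import defaultdict

# Each "path" is the ordered list of channels a user touched before converting.
conversion_paths = [
    ["search", "email", "display"],
    ["display", "search"],
    ["email"],
]

def last_click(paths):
    """All credit for each conversion goes to the final touchpoint."""
    credit = defaultdict(float)
    for path in paths:
        credit[path[-1]] += 1.0
    return dict(credit)

def linear(paths):
    """Each touchpoint in a path shares the credit equally."""
    credit = defaultdict(float)
    for path in paths:
        for channel in path:
            credit[channel] += 1.0 / len(path)
    return dict(credit)

print("last-click:", last_click(conversion_paths))
print("linear:    ", linear(conversion_paths))
```

Even on these three toy paths the two rules rank the channels differently, which is exactly why the choice of attribution model can swing budget decisions.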

If you are interested in learning more, you can: