Thursday, November 27, 2008

Online Machine Learning Testing == Extreme Testing

Posted by Alek Icev, Test Engineering Manager

As you may know, our core vision is to build "the perfect search engine that would understand exactly what you mean and give back exactly what you want." To do that we learn from our data, we learn from the past, and we love Machine Learning. Every day we are trying to answer questions like the following:
  • Is this email spam?
  • Is this search result relevant?
  • What product category does that query belong to?
  • What is the ad that users are most likely to click on for the query "flowers"?
  • Is this click fraudulent?
  • Is this ad likely to result in a purchase (not merely a click)?
  • Is this image pornographic?
  • Does this page contain malware?
  • Should this query bring up a maps onebox?
Solving problems like these requires Machine Learning techniques. For all of them we can build prediction models that learn from the past and try to give the most precise answers to our users. We use a variety of Machine Learning algorithms at Google, and we experiment with numerous old and new advances in the field to find the most accurate, fast, and reliable solution for each of the problems we are attacking. Of course, one of the biggest challenges we face in the Test Engineering community is how we are going to test these algorithms. The amount of data that Google generates goes beyond the known boundaries of the environments in which current Machine Learning solutions were crafted and tested. We want to open a discussion around ideas on how to test different online machine learning algorithms. From time to time we will present an algorithm, offer some ideas on how to test it, and solicit feedback from the wider audience, i.e., try to build the wisdom of crowds around the testing ideas.

So let's look at the Stochastic Gradient Descent algorithm. For each training example, every weight is updated as:

wi ← wi + η (t − f(z)) xi

where X is the set of input values xi, and W is the set of importance factors (weights) wi, one for every value xi. A positive weight means that the risk factor increases the probability of the outcome, while a negative weight means that the risk factor decreases the probability of that outcome. t is the target output value, and η is the learning rate; its role is to control the degree to which the weights are modified at every iteration. f(z) is the output of a function that maps a large input domain to a small range of output values, in this case the logistic function:

f(z) = 1 / (1 + e^(−z))

z = x0w0 + x1w1 + x2w2 + ... + xkwk
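To make the update rule concrete, here is a minimal sketch in Python. The names logistic and sgd_update are our own, and we assume the log-loss form of the update, where the gradient for one example reduces to (t − f(z))·xi:

```python
import math

def logistic(z):
    # Squash any real-valued z into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def sgd_update(w, x, t, eta):
    # One stochastic gradient descent step on a single example:
    # w_i <- w_i + eta * (t - f(z)) * x_i, with z = x0*w0 + ... + xk*wk.
    z = sum(xi * wi for xi, wi in zip(x, w))
    error = t - logistic(z)
    return [wi + eta * error * xi for wi, xi in zip(w, x)]
```

Each incoming example nudges every weight in proportion to the prediction error and to the example's value on that factor, which is what lets the model keep learning online.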

The logistic function has nice characteristics: it can take any input and squash it into the range (0, 1), which makes it ideal for predicting probabilities of events that depend on multiple factors (xi), each with a different importance weight (wi). Stochastic Gradient Descent converges quickly toward weights that minimize the prediction error (E), and when there are multiple local minima, its stochastic nature gives it a good chance of escaping them on the way to the global minimum of the prediction error.

So let's go back to the real online world, where we want to give answers (predictions) to our users in milliseconds, and ask how we are going to design automated tests for the Stochastic Gradient Descent algorithm embedded in a live online prediction system. The environment is agile and dynamic: the code changes every hour, you want your tests to run 24/7, and you want to detect errors upstream in the development process, but you don't want to block development with tests that run for days. On the other hand, you want to release new features fast, yet the release process has to be as close to error-proof as possible (imagine the world with Google being down for 5 minutes; that is a global catastrophe, isn't it?!).
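Even before tackling model quality, there are fast, deterministic unit tests worth running on every build. For example, the naive logistic implementation sketched above will overflow for large negative z (math.exp overflows a double somewhere past 709). A sketch of a numerically stable variant and a test for it, with names of our own choosing:

```python
import math

def stable_logistic(z):
    # Keep the exponent non-positive so exp() can never overflow.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def test_logistic_stays_in_range():
    # Extreme inputs must still produce a valid probability, instantly.
    for z in (-1e6, -710.0, -1.0, 0.0, 1.0, 710.0, 1e6):
        p = stable_logistic(z)
        assert 0.0 <= p <= 1.0
```

Tests like this take microseconds, so they can run on every change without ever blocking development.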

So let's look at some of the test strategies:

Should we try to train the model (the set of importance factors) and then test it with a subset of the training data? What if that takes far more than hours, maybe days? Should we try to reduce the set of input factors (xi) and get convergence (E → 0) on the reduced model?
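One possible shape for such a test, reusing the sgd_update and logistic sketches above: train on a small, deterministic synthetic data set generated from known weights, and assert that the prediction error falls well below its starting point within a fixed budget. The sizes, tolerances, and names here are illustrative only:

```python
import random

def test_sgd_converges_on_reduced_model(num_factors=5, num_examples=200,
                                        eta=0.5, epochs=50):
    random.seed(42)  # deterministic, so the test never flakes
    true_w = [random.uniform(-1, 1) for _ in range(num_factors)]

    # Label each synthetic example by the sign of its true score.
    examples = []
    for _ in range(num_examples):
        x = [random.uniform(-1, 1) for _ in range(num_factors)]
        z = sum(xi * wi for xi, wi in zip(x, true_w))
        examples.append((x, 1.0 if z > 0 else 0.0))

    w = [0.0] * num_factors
    for _ in range(epochs):
        for x, t in examples:
            w = sgd_update(w, x, t, eta)

    def mean_abs_error(w):
        return sum(abs(t - logistic(sum(xi * wi for xi, wi in zip(x, w))))
                   for x, t in examples) / len(examples)

    # With zero weights every prediction is 0.5, so the baseline error
    # is 0.5; after training it should have dropped sharply.
    assert mean_abs_error(w) < 0.2
```

A test like this runs in seconds, which is exactly the trade-off in question: it verifies the learning machinery converges, but only on a model and data set far smaller than production.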

Should we try to reduce the training data set (the variety of values of X fed to the algorithm), keep the original model, and get convergence at any price? Should we be happy with reducing both the model size and the training set? Do we need to worry about over-fitting in the test environment? Given that the original data is online data and evolves fast, will we be satisfied with a fixed test data set, or should the input test data change frequently? What are the triggers that would make you do so? What else should we do?
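As one sketch of how the over-fitting worry might be made operational (the gap threshold and function names are our own invention, again building on the logistic sketch above): compare the model's error on the data it was trained on against its error on a held-out, fresher slice of online data, and treat a widening gap as both a release blocker and a trigger to refresh the test data.

```python
def prediction_gap(w, train, holdout):
    # A large gap between training error and held-out error suggests the
    # model memorized its (reduced) training set rather than generalizing.
    def mean_abs_error(data):
        return sum(abs(t - logistic(sum(xi * wi for xi, wi in zip(x, w))))
                   for x, t in data) / len(data)
    return mean_abs_error(holdout) - mean_abs_error(train)

def check_no_overfitting(w, train, holdout, gap_threshold=0.1):
    return prediction_gap(w, train, holdout) < gap_threshold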

Drop us a note; all ideas are more than welcome.


