How can we explain the predictions of a black-box model? On linear models and convolutional neural networks, we demonstrate that influence functions are useful for multiple purposes: understanding model behavior, debugging models, detecting dataset errors, and even creating visually-indistinguishable training-set attacks. We show that even on non-convex and non-differentiable models where the theory breaks down, approximations to influence functions can still provide valuable information. To scale up influence functions to modern machine learning settings, we develop a simple, efficient implementation that requires only oracle access to gradients and Hessian-vector products. (Pang Wei Koh and Percy Liang, Understanding Black-box Predictions via Influence Functions, ICML 2017; presented in the course by Haoping Xu, Zhihuan Yu, and Jingcheng Niu.) With the rapid adoption of machine learning systems in sensitive applications, there is an increasing need to make black-box models explainable. At the same time, the practical success of neural nets has outpaced our ability to understand how they work: in many cases, they have far more than enough parameters to memorize the data, so why do they generalize well? We'll see first how Bayesian inference can be implemented explicitly with parameter noise; this will naturally lead into next week's topic, which applies similar ideas to a different but related dynamical system. The reference implementation can be found here: link. The most barebones way of getting the code to run is shown below; here, config contains default values for the influence function calculation.
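A minimal sketch of that barebones usage follows; the import name pytorch_influence_functions and the exact return signature of calc_img_wise are assumptions pieced together from the fragments in this document, and the model and data loaders are supplied by you.

```python
import pytorch_influence_functions as ptif  # assumed package import name

# Supplied by the user: a trained torch.nn.Module and the usual PyTorch DataLoaders.
model = get_my_model()
trainloader, testloader = get_my_dataloaders()

config = ptif.get_default_config()  # default values for the influence function calculation
influences, harmful, helpful = ptif.calc_img_wise(config, model, trainloader, testloader)
# do something with influences/harmful/helpful
```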
Why neural nets generalize despite their enormous capacity is intimately tied to the dynamics of training. This course is not meant to be prescriptive; rather, the aim is to give you the conceptual tools you need to reason through the factors affecting training in any particular instance. Lectures will be delivered synchronously via Zoom, and recorded for asynchronous viewing by enrolled students. Later on, we'll mostly focus on minimax optimization, or zero-sum games. On the implementation side, the config described below is divided into parameters affecting the calculation and parameters affecting the output; and while each s_test vector is dependent on a test sample, grad_z on the other hand depends only on the training sample.
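Because grad_z depends only on a single training sample, it can be isolated as a small helper; the sketch below (function name and argument layout are illustrative, not the package's API) shows the quantity being referred to.

```python
import torch

def grad_z(x, y, model, loss_fn):
    # Gradient of the loss at one training example (x, y) with respect to all
    # trainable parameters; this is the vector later paired with s_test.
    model.eval()
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(x), y)
    return torch.autograd.grad(loss, params)
```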
In this paper, we use influence functions, a classic technique from robust statistics, to trace a model's prediction through the learning algorithm and back to its training data, thereby identifying training points most responsible for a given prediction. Background: this was an ICML 2017 best paper by Pang Wei Koh and Percy Liang (Stanford); see arXiv preprint arXiv:1703.04730 (2017). In a student presentation of the paper by Theo, Aditya, and Patrick, the outline covered (1) influence functions: definitions and theory, and (2) efficiently calculating influence functions, among other sections. On the course side, we motivate second-order optimization of neural nets from several perspectives: minimizing second-order Taylor approximations, preconditioning, invariance, and proximal optimization.
Influence functions help you to debug the results of your deep learning model by identifying the training samples most responsible for a given prediction, ordered by harmfulness or helpfulness. One would have expected the practical success of neural nets to require overcoming significant obstacles that had been theorized to exist. Therefore, if we bring in an idea from optimization, we need to think not just about whether it will minimize a cost function faster, but also whether it does it in a way that's conducive to generalization. For one thing, the study of optimization is often prescriptive, starting with information about the optimization problem and a well-defined goal such as fast convergence in a particular norm, and figuring out a plan that's guaranteed to achieve it. We'll also consider the two most common techniques for bilevel optimization: implicit differentiation and unrolling.
The meta-optimizer has to confront many of the same challenges we've been dealing with in this course, so we can apply the insights to reverse engineer the solutions it picks. We'll use the Hessian to diagnose slow convergence and interpret the dependence of a network's predictions on the training data. Despite its simplicity, linear regression provides a surprising amount of insight into neural net training, and systems often become easier to analyze in the limit. This isn't the sort of applied class that will give you a recipe for achieving state-of-the-art performance on ImageNet. All information about attending virtual lectures, tutorials, and office hours will be sent to enrolled students through Quercus. On the implementation side: once the package is installed, import it as a package; it automatically creates the outdir folder to prevent a runtime error. With multiple test images, the algorithm will then calculate the influence functions for all images, and harmfulness is ordered by average harmfulness to their prediction outcomes. Depending on the mode, grad_z values are either recomputed on the fly or saved to disk, trading off calculation time and memory requirements, and a config value sets the initial value of the Hessian estimate during the s_test calculation.
Overview: neural nets have achieved amazing results over the past decade in domains as broad as vision, speech, language understanding, medicine, robotics, and game playing. For this class, we'll use Python and the JAX deep learning framework. Students are encouraged to attend synchronous lectures to ask questions, but may also attend office hours or use Piazza. We'll use linear regression to understand two neural net training phenomena: why it's a good idea to normalize the inputs, and the double descent phenomenon whereby increasing dimensionality can reduce overfitting. So far, we've assumed gradient descent optimization, but we can get faster convergence by considering more general dynamics, in particular momentum; gradient descent on neural networks typically occurs on the edge of stability. Metrics give a local notion of distance on a manifold. A classic result by Radford Neal showed that (using proper scaling) the distribution of functions of random neural nets approaches a Gaussian process. Retraining a model with a point removed is expensive; fortunately, influence functions give us an efficient approximation. Often we also want to identify an influential group of training samples behind a particular test prediction. In one of the experiments, the model was ResNet-110.
Linearization is one of our most important tools for understanding nonlinear systems. Paper presentation: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, presented by Chris Zhang, Dami Choi, and Anqi (Joyce) Yang.
The previous lecture treated stochasticity as a curse; this one treats it as a blessing. Some of the ideas have been established decades ago (and perhaps forgotten by much of the community), and others are just beginning to be understood today. This is a tentative schedule, which will likely change as the course goes on. Data-trained predictive models see widespread use, but for the most part they are used as black boxes which output a prediction or score. This paper applies influence functions to ANNs, taking advantage of the accessibility of their gradients, and the approach works as long as you have a supervised learning problem. This is a PyTorch reimplementation of Influence Functions from the ICML 2017 best paper: Understanding Black-box Predictions via Influence Functions by Pang Wei Koh and Percy Liang. One calculation mode computes the grad_z values for all images first and saves them to disk; this avoids the case where grad_z has to be calculated twice for the same training sample. Each s_test vector is then combined with the grad_z vectors to calculate the influence. When testing for a single test image, you can then do something with that image's influences/harmful/helpful results directly. As a worked special case, consider logistic regression with $p(y|x) = \sigma(y\,\theta^\top x)$, where $\sigma$ is the sigmoid and $y \in \{-1, 1\}$.
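For that logistic regression model, the pieces entering the influence calculation have simple closed forms; the following is a standard derivation under the usual convention $L(z, \theta) = \log\bigl(1 + \exp(-y\,\theta^\top x)\bigr)$ (a textbook identity, not text quoted from the paper):

$$\nabla_\theta L(z, \theta) = -\,\sigma(-y\,\theta^\top x)\, y\, x, \qquad H_\theta = \frac{1}{n}\sum_{i=1}^{n} \sigma(\theta^\top x_i)\,\sigma(-\theta^\top x_i)\, x_i x_i^\top .$$

One immediate consequence: a training point the model classifies correctly with high confidence has a small $\sigma(-y\,\theta^\top x)$ factor, and therefore little influence on any test prediction.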
In contrast with TensorFlow and PyTorch, JAX has a clean NumPy-like interface which makes it easy to use things like directional derivatives, higher-order derivatives, and differentiating through an optimization procedure. Increasing the recursion depth uses more recursions when approximating the influence, improving the estimate at additional computational cost. Existing influence functions tackle the problem of attributing a prediction to training data by using first-order approximations of the effect of removing a sample from the training set on the model.
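To make the first-order approximation concrete, here is a standard restatement of the paper's setup (not quoted from this document): perturb the weight on a training point $z$ by $\epsilon$,

$$\hat\theta_{\epsilon,z} \;\stackrel{\text{def}}{=}\; \arg\min_{\theta}\; \frac{1}{n}\sum_{i=1}^{n} L(z_i,\theta) + \epsilon\, L(z,\theta).$$

Removing $z$ corresponds to $\epsilon = -\tfrac{1}{n}$, so to first order the retrained parameters satisfy

$$\hat\theta_{-z} - \hat\theta \;\approx\; \frac{1}{n}\, H_{\hat\theta}^{-1} \nabla_\theta L(z, \hat\theta), \qquad H_{\hat\theta} = \frac{1}{n}\sum_{i=1}^{n} \nabla_\theta^2 L(z_i, \hat\theta),$$

which is the quantity the implementation estimates without actually retraining the model.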
In the calc_img_wise mode, we throw away all grad_z values once a test sample has been processed and recompute them for the next one. TL;DR: the recommended way is using calc_img_wise unless you have a very large number of test samples to process. Either way, the point is to calculate which training images had the largest effect on the classification outcome. Dependencies: Numpy/Scipy/Scikit-learn/Pandas.
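The two strategies can be contrasted with a schematic sketch; calc_img_wise is the mode named above, while the name of the precompute-everything variant and the 1/n scaling convention are our own labels for illustration (train_data and test_data are lists of (id, sample) pairs, and grad_z/s_test are callables returning flat parameter-space vectors).

```python
import numpy as np

def influences_img_wise(train_data, test_data, grad_z, s_test):
    # calc_img_wise: for each test sample, compute s_test once, then grad_z for every
    # training sample on the fly; the grad_z vectors are thrown away afterwards.
    n = len(train_data)
    results = {}
    for test_id, z_test in test_data:
        s = s_test(z_test)
        results[test_id] = {tr_id: -np.dot(grad_z(z), s) / n for tr_id, z in train_data}
    return results

def influences_all_grad_then_test(train_data, test_data, grad_z, s_test):
    # Alternative mode: compute grad_z for every training sample first and cache it
    # (the package saves these to disk), then reuse the cache for every test sample.
    n = len(train_data)
    cache = {tr_id: grad_z(z) for tr_id, z in train_data}
    results = {}
    for test_id, z_test in test_data:
        s = s_test(z_test)
        results[test_id] = {tr_id: -np.dot(g, s) / n for tr_id, g in cache.items()}
    return results
```

The tradeoff is the one described above: the first variant recomputes grad_z for every test sample but needs almost no storage, while the second pays the full grad_z cost once and amortizes it over many test samples.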
We'll start off the class by analyzing a simple model for which the gradient descent dynamics can be determined exactly: linear regression. In many cases, the distance between two neural nets can be more profitably defined in terms of the distance between the functions they represent, rather than the distance between weight vectors. The paper's appendix derives the influence function I_up,params: for completeness, the authors provide a standard derivation in the context of loss minimization (M-estimation). Visualised, the output can look like this: the test image on the top left is the test image for which the influences were calculated, followed by the most helpful and the most harmful training images; a related figure compares variants built from the train loss term, the Hessian term, and train loss plus Hessian. Recall that the scaled-up implementation requires only oracle access to gradients and Hessian-vector products; the recursion sketched below is how s_test is typically estimated.
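The sketch below estimates s_test = H^{-1} grad L(z_test) using only Hessian-vector products, in the style of the paper's stochastic estimation; damp and scale play the role of the damping/scaling knobs in the config, and the final division by scale follows common implementations rather than anything stated in this document, so treat the details as illustrative.

```python
import torch
from itertools import cycle

def estimate_s_test(v, model, loss_fn, train_loader,
                    damp=0.01, scale=25.0, recursion_depth=1000):
    # v: list of per-parameter gradients of the test loss (e.g., grad_z at the test point).
    # Iterates  h <- v + (1 - damp) * h - (H_batch @ h) / scale  with one sampled batch
    # per step supplying the Hessian-vector product.
    params = [p for p in model.parameters() if p.requires_grad]
    h_estimate = [vi.clone() for vi in v]
    batches = cycle(train_loader)
    for _ in range(recursion_depth):
        x, y = next(batches)
        loss = loss_fn(model(x), y)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        # Hessian-vector product of the sampled loss with the current estimate.
        hv = torch.autograd.grad(
            sum((g * h).sum() for g, h in zip(grads, h_estimate)), params
        )
        h_estimate = [vi + (1 - damp) * hi - hvi / scale
                      for vi, hi, hvi in zip(v, h_estimate, hv)]
    # Common implementations divide by `scale` so the result approximates H^{-1} v.
    return [hi / scale for hi in h_estimate]
```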
Overwhelmed? You can get the default config by calling ptif.get_default_config(). For the Colab notebook and paper presentation, you will form a group of 2-3 and pick one paper from a list. The returned dict structure looks similar to the example below: harmful is a list of numbers, which are the IDs of the training data samples ordered by harmfulness.
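For instance, the saved results might look like the following; all IDs and values here are made up purely to show the shape, and the exact key names may differ in your version of the package.

```python
influences = {
    "0": {                                    # test-sample ID
        "influence": [-0.0012, 0.0034, ...],  # one value per training sample (truncated)
        "harmful":  [499, 12, 87, ...],       # training-sample IDs, most harmful first
        "helpful":  [3, 724, 56, ...],        # training-sample IDs, most helpful first
    },
    # one entry per processed test sample
}
```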
Approach: consider a prediction problem from some input space X (e.g., images) to an output space Y (e.g., labels). Here, we plot I_up,loss against variants that are missing the training-loss and inverse-Hessian terms, and show that these terms are necessary for picking up the truly influential training points. On the implementation side, the mode that precomputes grad_z is better suited to scoring the prediction outcomes of an entire dataset or even >1000 test samples.
Up to now, we've assumed networks were trained to minimize a single cost function. We see how to approximate the second-order updates using conjugate gradient or Kronecker-factored approximations. Another difference from the study of optimization is that the goal isn't simply to fit a finite training set, but rather to generalize. Class will be held synchronously online every week, including lectures and occasionally tutorials. A follow-up line of work considers second-order group influence functions for black-box predictions; the original paper is linked here. This option is a better choice if you want all the bells-and-whistles of a near-state-of-the-art model.
The config's default values can of course be changed; I recommend changing the following parameters to your liking.
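Continuing from the earlier snippet, a config might be adjusted as sketched below; the field names and defaults are indicative only (several, such as the recursion depth and the s_test damping/scaling, correspond to parameters mentioned elsewhere in this document), and the dict returned by ptif.get_default_config() is the authoritative source.

```python
config = ptif.get_default_config()
# Indicative fields; names/values are illustrative, inspect the returned dict for the real ones.
config.update({
    "outdir": "outdir",         # output folder, created automatically if missing
    "gpu": 0,                   # device index (assumed convention)
    "recursion_depth": 5000,    # iterations of the stochastic s_test estimation
    "r_averaging": 1,           # number of s_test estimates averaged per test sample
    "damp": 0.01,               # damping term in the s_test recursion
    "scale": 25,                # scaling term in the s_test recursion
    "calc_method": "img_wise",  # or the mode that precomputes all grad_z first
})
```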
Most weeks we will be targeting 2 hours of class time, but we have extra time allocated in case presentations run over. The paper presentation will also be done in groups of 2-3 (not necessarily the same groups as for the Colab notebook). Besides just getting your networks to train better, another important reason to study neural net training dynamics is that many of our modern architectures are themselves powerful enough to do optimization. On the implementation side, when there are multiple test images, the helpfulness is ordered by average helpfulness to the prediction outcome of the processed test samples. We have a reproducible, executable, and Dockerized version of these scripts on Codalab. Acknowledgements go to the authors of the conference paper 'Understanding Black-box Predictions via Influence Functions', Pang Wei Koh et al.
Helpful is a list of numbers, which are the IDs of the training data samples ordered by helpfulness.
From the paper (2017): for a point $z$ and parameters $\theta \in \Theta$, let $L(z; \theta)$ be the loss, and let $\frac{1}{n}\sum_{i=1}^{n} L(z_i; \theta)$ be the empirical risk.
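Completing the setup (standard statements from the paper, restated rather than quoted): write $\hat\theta = \arg\min_\theta \frac{1}{n}\sum_{i=1}^{n} L(z_i;\theta)$ for the empirical risk minimizer and $H_{\hat\theta} = \frac{1}{n}\sum_{i=1}^{n} \nabla_\theta^2 L(z_i;\hat\theta)$ for the (assumed positive-definite) empirical Hessian. The two influence functions used throughout are

$$\mathcal{I}_{\text{up,params}}(z) = -H_{\hat\theta}^{-1}\,\nabla_\theta L(z, \hat\theta), \qquad
\mathcal{I}_{\text{up,loss}}(z, z_{\text{test}}) = -\nabla_\theta L(z_{\text{test}}, \hat\theta)^{\top} H_{\hat\theta}^{-1}\, \nabla_\theta L(z, \hat\theta),$$

the first giving the effect of upweighting $z$ on the parameters and the second its effect on the loss at $z_{\text{test}}$.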
Full citation: Pang Wei Koh and Percy Liang. Understanding Black-box Predictions via Influence Functions. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, pages 1885-1894, 2017. https://dl.acm.org/doi/10.5555/3305381.3305576. The datasets for the experiments can also be found at the Codalab link. Applications, understanding model behavior: influence functions reveal insights about how models rely on and extrapolate from the training data. In the visualisations, the helpful images are the training samples that helped the prediction the most, whereas the harmful images were the most harmful, and the numbers above the images show the actual influence value which was calculated. In the output, influences is a dict/json containing the influences calculated for all training data; caching grad_z avoids repeating calculations, which could potentially number in the tens of thousands. Infinite Limits and Overparameterization [Slides]: in this lecture, we consider the behavior of neural nets in the infinite width limit; more details can be found in the project handout. Components of influence: (a) what is the effect of the training loss and $H_{\hat\theta}^{-1}$ terms in $\mathcal{I}_{\text{up,loss}}$?
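One way to make that question concrete, using the definitions given earlier (this is our reading of the comparison described above, not a formula quoted from the paper): dropping the $H_{\hat\theta}^{-1}$ term reduces the influence to a plain gradient dot product,

$$-\nabla_\theta L(z_{\text{test}}, \hat\theta)^{\top} \nabla_\theta L(z, \hat\theta),$$

which ignores how the remaining training data constrains the parameter change; dropping the dependence on $z_{\text{test}}$ and ranking points only by their training loss $L(z, \hat\theta)$ surfaces generically hard or mislabeled examples, but not the points responsible for this particular prediction. Plotting $\mathcal{I}_{\text{up,loss}}$ against these ablated variants is what the comparison above refers to.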