Wednesday, January 27, 2021

Stochastic Gradient Descent Algorithm With Python and NumPy

Stochastic gradient descent is an optimization algorithm often used in machine learning applications to find the model parameters that correspond to the best fit between predicted and actual outputs. It’s an inexact but powerful technique.

Stochastic gradient descent is widely used in machine learning applications. Combined with backpropagation, it’s dominant in neural network training applications.

In this tutorial, you’ll learn:

  • How gradient descent and stochastic gradient descent algorithms work
  • How to apply gradient descent and stochastic gradient descent to minimize the loss function in machine learning
  • What the learning rate is, why it’s important, and how it impacts results
  • How to write your own function for stochastic gradient descent

Basic Gradient Descent Algorithm

The gradient descent algorithm is an approximate and iterative method for mathematical optimization. You can use it to approach the minimum of any differentiable function.

Although gradient descent sometimes gets stuck in a local minimum or a saddle point instead of finding the global minimum, it’s widely used in practice. Data science and machine learning methods often apply it internally to optimize model parameters. For example, neural networks find weights and biases with gradient descent.
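
To see the idea in its simplest form, here’s a minimal sketch of gradient descent applied to a one-variable function, C(v) = v², whose gradient is 2v. The starting point, learning rate, and number of iterations below are arbitrary illustrative choices, not values prescribed by the algorithm:

def gradient(v):
    return 2 * v  # derivative of C(v) = v**2

v = 10.0              # arbitrary starting point
learning_rate = 0.2   # arbitrary step size (the learning rate is covered later)

for _ in range(50):
    v -= learning_rate * gradient(v)  # step against the gradient

print(v)  # very close to 0.0, the minimum of C(v) = v**2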

Cost Function: The Goal of Optimization

The cost function, or loss function, is the function to be minimized (or maximized) by varying the decision variables. Many machine learning methods solve optimization problems under the surface. They tend to minimize the difference between actual and predicted outputs by adjusting the model parameters (like weights and biases for neural networks, decision rules for random forest or gradient boosting, and so on).

In a regression problem, you typically have the vectors of input variables x = (x₁, …, xᵣ) and the actual outputs y. You want to find a model that maps x to a predicted response f(x) so that f(x) is as close as possible to y. For example, you might want to predict an output such as a person’s salary given inputs like the person’s number of years at the company or level of education.

Your goal is to minimize the difference between the prediction f(x) and the actual data y. This difference is called the residual.

In this type of problem, you want to minimize the sum of squared residuals (SSR), where SSR = Σᵢ(yᵢ − f(xᵢ))² for all observations i = 1, …, n, where n is the total number of observations. Alternatively, you could use the mean squared error (MSE = SSR / n) instead of SSR.

Both SSR and MSE use the square of the difference between the actual and predicted outputs. The lower the difference, the more accurate the prediction. A difference of zero indicates that the prediction is equal to the actual data.
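
As a quick check of these formulas, both quantities are easy to compute with NumPy; the arrays below are made-up values used only for illustration:

import numpy as np

# Made-up actual outputs and predictions, only to illustrate the formulas
y = np.array([5.0, 20.0, 14.0, 32.0, 22.0, 38.0])
y_pred = np.array([8.0, 18.0, 16.0, 30.0, 25.0, 35.0])

residuals = y - y_pred
ssr = np.sum(residuals ** 2)  # sum of squared residuals
mse = ssr / y.size            # mean squared error
print(ssr, mse)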

SSR or MSE is minimized by adjusting the model parameters. For example, in linear regression, you want to find the function f(x) = b₀ + b₁x₁ + ⋯ + bᵣxᵣ, so you need to determine the weights b₀, b₁, …, bᵣ that minimize SSR or MSE.
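
For linear regression, the gradient of MSE with respect to the weights has a simple closed form, so a gradient descent sketch fits in a few lines. The data, starting weights, learning rate, and iteration count below are all illustrative assumptions:

import numpy as np

# Made-up data: one input variable plus a column of ones for the intercept b₀
x = np.array([5.0, 15.0, 25.0, 35.0, 45.0, 55.0])
y = np.array([5.0, 20.0, 14.0, 32.0, 22.0, 38.0])
X = np.column_stack([np.ones_like(x), x])  # shape (n, 2)

b = np.zeros(2)         # weights b₀ and b₁, starting from zero
learning_rate = 0.0008  # illustrative value; too large a value makes the updates diverge

for _ in range(100_000):
    grad = -2 / y.size * X.T @ (y - X @ b)  # gradient of MSE with respect to b
    b -= learning_rate * grad

print(b)  # approximate intercept and slope that minimize MSE (and SSR)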

In a classification problem, the outputs y are categorical, often either 0 or 1. For example, you might try to predict whether an email is spam or not. In the case of binary outputs, it’s convenient to minimize the cross-entropy function that also depends on the actual outputs yᵢ and the corresponding predictions p(xᵢ):

C = −Σᵢ [yᵢ log(p(xᵢ)) + (1 − yᵢ) log(1 − p(xᵢ))]

In logistic regression, which is often used to solve classification problems, the functions p(x) and f(x) are defined as the following:

p(x) = 1 / (1 + exp(−f(x))), where f(x) = b₀ + b₁x₁ + ⋯ + bᵣxᵣ

Again, you need to find the weights b₀, b₁, …, bᵣ, but this time they should minimize the cross-entropy function.
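
A short sketch of these two functions and the cross-entropy loss, using made-up weights and data purely for illustration:

import numpy as np

def f(X, b):
    """Linear combination b₀ + b₁x₁ + ⋯ + bᵣxᵣ (X includes a column of ones)."""
    return X @ b

def p(X, b):
    """Sigmoid of f(x): predicted probabilities between 0 and 1."""
    return 1 / (1 + np.exp(-f(X, b)))

def cross_entropy(y, probs):
    """Cross-entropy between binary outputs y and predicted probabilities."""
    return -np.sum(y * np.log(probs) + (1 - y) * np.log(1 - probs))

# Made-up example: one input variable plus an intercept column, binary outputs
X = np.column_stack([np.ones(4), np.array([-2.0, -0.5, 0.5, 2.0])])
y = np.array([0.0, 0.0, 1.0, 1.0])
b = np.array([0.0, 1.0])  # arbitrary weights, not the minimizing ones

print(cross_entropy(y, p(X, b)))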

Gradient of a Function: Calculus Refresher

In calculus, the derivative of a function shows you how much its value changes when you modify its argument (or arguments). Derivatives are important for optimization because a zero derivative might indicate a minimum, maximum, or saddle point.

The gradient of a function C of several independent variables v₁, …, vᵣ is denoted with ∇C(v₁, …, vᵣ) and defined as the vector function of the partial derivatives of C with respect to each independent variable: ∇C = (∂C/∂v₁, …, ∂C/∂vᵣ). The symbol ∇ is called nabla.

The nonzero value of the gradient of a function C at a given point defines the direction and rate of the fastest increase of C. When working with gradient descent, you’re interested in the direction of the fastest decrease in the cost function. This direction is determined by the negative gradient, −∇C.
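
For example, for C(v₁, v₂) = v₁² + v₂², the gradient is ∇C = (2v₁, 2v₂). A quick numerical check (with an arbitrary point and step size) shows that moving along −∇C lowers the value of C:

import numpy as np

def C(v):
    return v[0] ** 2 + v[1] ** 2

def grad_C(v):
    return np.array([2 * v[0], 2 * v[1]])  # (∂C/∂v₁, ∂C/∂v₂)

v = np.array([3.0, -4.0])     # arbitrary point
step = 0.1                    # arbitrary small step size

v_new = v - step * grad_C(v)  # move in the direction of −∇C
print(C(v), C(v_new))         # 25.0 then 16.0: C decreases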

Read the full article at https://realpython.com/gradient-descent-algorithm-python/ »






