Classification code: 27.35
Field: Mathematics and statistics
Authors: Bùi Thị Thanh Xuân (1), Dương Thị Nhung, Hà Thị Thanh
Title: Về phương pháp gradient descent giải bài toán tối ưu trong học máy
Title (English): Gradient descent method solving optimization problem in machine learning
Journal: Khoa học và công nghệ (Đại học Thái Nguyên)
Year: 2016
Issue: 2
Pages: 95-99
ISSN: 1859-2171

Abstract: The gradient descent method was originally considered a direct method for solving linear equations, but its favorable properties as an iterative method were soon recognized, and it was later generalized to more general optimization problems. It provides a very effective way to optimize large deterministic systems. Stochastic gradient descent (SGD) is a gradient descent optimization method for minimizing an objective function that is written as a sum of differentiable functions. The convergence of stochastic gradient descent has been analyzed using the theories of convex minimization and stochastic approximation. Briefly, when the learning rates decrease at an appropriate rate, and subject to relatively mild assumptions, stochastic gradient descent converges almost surely to a global minimum when the objective function is convex or pseudo-convex, and otherwise converges almost surely to a local minimum. Stochastic gradient descent is a popular algorithm for training a wide range of models in machine learning, including (linear) support vector machines, logistic regression, and graphical models.
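To make the abstract's setting concrete, here is a minimal sketch (not taken from the paper, and assuming NumPy is available) of both full-batch gradient descent and SGD on a convex least-squares objective f(w) = (1/n) Σᵢ (xᵢ·w − yᵢ)², i.e., an objective written as a sum of differentiable per-sample terms. The objective, step sizes, and the η₀/(1 + decay·t) schedule are illustrative choices; the schedule is picked so that the learning rates decrease at a rate satisfying the standard Robbins-Monro conditions (Σ ηₜ = ∞, Σ ηₜ² < ∞), one common formalization of the "appropriate rate" condition in the convergence result above.

```python
import numpy as np

def gradient_descent(X, y, eta=0.1, iters=500):
    """Full-batch gradient descent with a fixed step size."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        grad = 2.0 / n * X.T @ (X @ w - y)   # gradient of f at w
        w -= eta * grad
    return w

def sgd(X, y, eta0=0.1, decay=1e-3, epochs=50, seed=0):
    """SGD with decreasing learning rate eta_t = eta0 / (1 + decay * t).

    Each step uses the gradient of a single randomly chosen term of the
    sum, and the decreasing schedule satisfies sum eta_t = inf,
    sum eta_t^2 < inf, under which SGD converges almost surely to the
    global minimum of a convex objective.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):                  # one random term per step
            grad = 2.0 * (X[i] @ w - y[i]) * X[i]     # gradient of the i-th term
            w -= eta0 / (1.0 + decay * t) * grad
            t += 1
    return w

# Usage: both methods recover a planted linear model from noisy data.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=200)
print(gradient_descent(X, y))   # approximately [1.0, -2.0, 0.5]
print(sgd(X, y))                # approximately [1.0, -2.0, 0.5]
```

On this synthetic convex problem both routines recover the planted coefficients; in practice the initial step size and decay constant would be tuned to the problem, and for non-convex objectives the same update only guarantees convergence to a local minimum, as the abstract notes.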

TTKHCNQG, CTv 178