### How to Compute the Gradient of the Analytically Unknown Value Function

**Malkhaz Shashiashvili**

*Kutaisi International University*

### Abstract

It is well known that the vast majority of real-world optimization problems cannot be solved analytically in closed form, since they are highly nonlinear by their intrinsic nature. The basic observation: the value function V(x) of the optimization problem is often convex or concave in the multidimensional argument x (or at least semiconvex or semiconcave). We should therefore exploit this convexity to construct convergent numerical approximations to grad V(x). Suppose for simplicity that V(x) is convex.
Our basic idea: replace the numerical approximation V(h,x) by some convex approximation C(h,x), in the hope that the latter will better imitate the shape of the unknown convex function V(x), so that the gradient grad C(h,x) can be put forward as a reasonable approximation to the unknown grad V(x).
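The abstract does not specify how the convex approximation C(h,x) is constructed. One natural one-dimensional instance of the idea is to take C(h,x) as the lower convex envelope of the sampled values of V(h,x); the sketch below is illustrative only (the function names, the test problem V(x) = x², and the noisy approximation V(h,x) = x² + 0.05·sin(10x) are assumptions, not from the source).

```python
import math

def lower_convex_envelope(points):
    """Lower convex hull of (x, y) samples sorted by x: the tightest
    convex piecewise-linear function lying below the sampled values."""
    hull = []
    for x, y in points:
        # Pop the last hull point while it lies on or above the chord
        # from the previous hull point to the incoming point.
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (y2 - y1) * (x - x1) >= (y - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append((x, y))
    return hull

def envelope_gradient(hull, x):
    """Slope of the piecewise-linear envelope at x, i.e. a subgradient
    of the convex approximation C(h, .) at x."""
    for (x1, y1), (x2, y2) in zip(hull, hull[1:]):
        if x1 <= x <= x2:
            return (y2 - y1) / (x2 - x1)
    raise ValueError("x lies outside the sampled range")

# Hypothetical demo: the unknown convex value function is V(x) = x**2
# (so grad V(x) = 2x), observed only through a noisy, non-convex
# numerical approximation V(h, x) = x**2 + 0.05*sin(10*x).
xs = [i / 20 - 1 for i in range(41)]                 # grid on [-1, 1]
samples = [(x, x * x + 0.05 * math.sin(10 * x)) for x in xs]
hull = lower_convex_envelope(samples)
g = envelope_gradient(hull, 0.48)                    # approximates 2*0.48
```

Convexifying the samples first discards the spurious local oscillations of V(h,x), so the slope of the envelope tracks grad V(x) even where the raw approximation is non-monotone.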

### Presentation