Abstract: [Purposes] Generative adversarial network models in machine learning are formulated as min-max optimization problems, which have attracted extensive attention from researchers. At present, most optimization algorithms for such problems are designed based on the standard gradient descent ascent algorithm. However, in some applications, the gradient information of the objective function is computationally expensive or difficult to obtain. [Methods] Therefore, for a class of convex-concave min-max optimization problems, a zeroth-order optimistic gradient descent ascent algorithm (ZO-OGDA) is proposed, which uses function values to approximate gradient information via a smoothing technique. The proposed ZO-OGDA algorithm extends the OGDA algorithm to the gradient-free setting. [Findings] Then, based on the convergence analysis theory of the proximal point algorithm with errors, the iteration complexity of the proposed ZO-OGDA algorithm for obtaining an ε-stationary point is shown to be of order O(ε⁻¹). [Conclusions] Finally, numerical experiments on the matrix game model are performed. The numerical results show that the performance of the proposed ZO-OGDA algorithm is comparable to that of the OGDA algorithm.
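To illustrate the idea described above, the following is a minimal sketch, not the paper's exact algorithm: a two-point zeroth-order gradient estimator based on Gaussian smoothing (using only function values) is plugged into an optimistic gradient descent ascent update on a bilinear matrix game f(x, y) = xᵀAy. All function names, step sizes, and sample counts here are illustrative assumptions.

```python
import numpy as np

def zo_grad(f, z, mu=1e-5, n_samples=50, rng=None):
    """Estimate grad f(z) from function values only (Gaussian smoothing,
    two-point differences). This is a generic sketch, not the paper's
    exact estimator."""
    if rng is None:
        rng = np.random.default_rng(0)
    d = z.size
    g = np.zeros(d)
    for _ in range(n_samples):
        u = rng.standard_normal(d)          # random smoothing direction
        g += (f(z + mu * u) - f(z - mu * u)) / (2 * mu) * u
    return g / n_samples

def zo_ogda(A, steps=2000, eta=0.05, seed=0):
    """Zeroth-order OGDA on f(x, y) = x^T A y (descent in x, ascent in y).
    Step size eta and iteration count are illustrative choices."""
    rng = np.random.default_rng(seed)
    n, m = A.shape
    # Stack the variables as z = (x, y) and work with the full gradient
    # estimate; the ascent on y is realized by flipping the sign of the
    # y-block of the update direction.
    f = lambda z: z[:n] @ A @ z[n:]
    z = np.concatenate([np.ones(n) / n, np.ones(m) / m])
    g_prev = zo_grad(f, z, rng=rng)
    for _ in range(steps):
        g = zo_grad(f, z, rng=rng)
        v = 2 * g - g_prev                  # optimistic (extrapolated) step
        v[n:] *= -1.0                       # ascent on the y-block
        z = z - eta * v
        g_prev = g
    return z[:n], z[n:]

# Small matrix game; the unique stationary point of the unconstrained
# bilinear problem with this invertible A is (x, y) = (0, 0).
A = np.array([[0.0, 1.0], [1.0, 0.0]])
x, y = zo_ogda(A)
```

The sketch reflects the abstract's two ingredients: the plain gradient calls of OGDA are replaced by smoothing-based estimates built purely from function evaluations, while the optimistic extrapolation `2*g - g_prev` is kept unchanged.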