QPLayer makes it possible to use a QP as a layer within standard learning architectures. More precisely, QPLayer differentiates over the primal and dual solutions of QPs of the form

$$
\begin{aligned}
\min_{x \in \mathbb{R}^n} \quad & \frac{1}{2} x^\top H x + g^\top x \\
\text{s.t.} \quad & Ax = b, \\
& l \le Cx \le u,
\end{aligned}
$$

where $x \in \mathbb{R}^n$ is the optimization variable. The objective function is defined by a positive semidefinite matrix $H \in \mathcal{S}^n_+$ and a vector $g \in \mathbb{R}^n$. The linear constraints are defined by the equality-constraint matrix $A \in \mathbb{R}^{n_e \times n}$, the inequality-constraint matrix $C \in \mathbb{R}^{n_i \times n}$, and the vectors $b \in \mathbb{R}^{n_e}$, $l \in \mathbb{R}^{n_i}$ and $u \in \mathbb{R}^{n_i}$, so that $Ax = b$ and $l \le Cx \le u$.
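As a purely numerical illustration of the QP form above (not of QPLayer's differentiation machinery), the equality-constrained special case, with the inequality constraints omitted, can be solved by its KKT system; the small instance below is an assumed toy example:

```python
import numpy as np

# Equality-constrained special case of the QP described above:
#   min_x 1/2 x^T H x + g^T x   s.t.  A x = b,
# solved via its KKT linear system
#   [H  A^T] [x]   [-g]
#   [A   0 ] [y] = [ b],
# where y is the dual variable attached to A x = b.

H = np.eye(2)                  # positive semidefinite objective matrix
g = np.array([1.0, 1.0])
A = np.array([[1.0, 1.0]])     # one equality constraint: x1 + x2 = 1
b = np.array([1.0])

n, n_eq = H.shape[0], A.shape[0]
K = np.block([[H, A.T], [A, np.zeros((n_eq, n_eq))]])
sol = np.linalg.solve(K, np.concatenate([-g, b]))
x, y = sol[:n], sol[n:]        # primal and dual solutions

print(x)
print(y)
```

QPLayer itself handles the full form (including $l \le Cx \le u$) and propagates gradients through both $x$ and the dual variables; the sketch above only shows what a primal-dual solution of such a QP is.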
We provide in the file `qplayer_sudoku.py` an example that trains an LP layer in two different settings: (i) we learn only the equality-constraint matrix $A$, or (ii) we learn $A$ and $b$ at the same time, with $b$ parametrized so that it is structurally in the range space of $A$. Procedure (i) is harder since, a priori, the fixed right-hand side $b$ does not guarantee that the QP is feasible. Yet this learning procedure is more structured, and for some problems it can produce better predictions more quickly (i.e., in fewer epochs).
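The feasibility argument behind setting (ii) can be sketched in a few lines. The names `A` and `z` below are illustrative placeholders, not the variables used in `qplayer_sudoku.py`:

```python
import numpy as np

# If the learned right-hand side is parametrized as b = A z, then b lies
# in the range space of A by construction, and x = z is itself a feasible
# point of the equality constraints A x = b, whatever the current value
# of A during training.

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))   # a "learned" equality-constraint matrix
z = rng.standard_normal(5)        # free parameter actually being learned
b = A @ z                         # b is structurally in range(A)

# x = z satisfies the equality constraints, so they are always consistent:
assert np.allclose(A @ z, b)

# By contrast, in setting (i) b is fixed while A changes during training,
# so A x = b may have no solution for some intermediate values of A.
```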