For the sequential quadratic programming (SQP) method, we show that, close to a solution satisfying the same assumptions required for its local quadratic convergence (namely, uniqueness of the Lagrange multipliers and the second-order sufficient optimality condition), the direction given by the SQP subproblem using the Hessian of the Lagrangian is a descent direction for the standard l1-penalty function. We emphasize that this property is by no means straightforward, because the Hessian of the Lagrangian need not be positive definite under these assumptions or, in fact, under any other reasonable set of assumptions. In particular, this descent property was not known previously under any assumptions, even including the stronger linear independence constraint qualification, strict complementarity, etc. We also verify the property in question numerically, applying a model algorithm to nonconvex problems from the Hock–Schittkowski test collection. While proposing a new and complete SQP algorithm is not our goal here, our experiments confirm that the descent condition, and a model method based on it, work as expected. This indicates that the theoretical findings reported here may be useful for full, practical SQP implementations that employ second derivatives and a linesearch for the l1-penalty function. In particular, our results imply that, in SQP methods where using subproblems without Hessian modifications is an option, this option has a solid theoretical justification, at least in late iterations.
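
For orientation, and in standard notation that may differ from the paper's own, the objects involved are the following. For the problem of minimizing f(x) subject to h(x) = 0 and g(x) <= 0, the SQP subproblem at a primal-dual point (x, lambda, mu) and the l1-penalty function are, respectively,

\[
\min_{d}\; \nabla f(x)^{\top} d + \tfrac{1}{2}\, d^{\top} \nabla^{2}_{xx} L(x,\lambda,\mu)\, d
\quad \text{s.t.} \quad h(x) + h'(x)\, d = 0, \;\; g(x) + g'(x)\, d \le 0,
\]
\[
\varphi_{c}(x) = f(x) + c \Bigl( \sum_{i} |h_{i}(x)| + \sum_{j} \max\{0,\, g_{j}(x)\} \Bigr),
\]

where L(x, lambda, mu) = f(x) + lambda^T h(x) + mu^T g(x) is the Lagrangian and c > 0 is the penalty parameter. The descent property discussed above states that, near a solution satisfying the stated assumptions and for an appropriate value of c, the solution d of the subproblem satisfies \(\varphi_{c}'(x; d) < 0\); the precise conditions on c and on the iterate are those given in the paper and are not reproduced here.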