Abstract:
© 2020, Pleiades Publishing, Ltd. We consider continuous and impulse control of a continuous-time Markov chain (MC) with a finite state set. Continuous control determines the intensities of transitions between MC states, while the transition times and their directions remain random. However, it is sometimes necessary to enforce a transition that changes the state of the MC instantaneously. Since such transitions require different actions and can produce different effects on the state of the MC, these controls can be interpreted as impulse controls. In this work, we use the martingale representation of a controlled MC and derive an optimality condition which, by the dynamic programming principle, is reduced to the form of a quasi-variational inequality. The solution of this inequality can be obtained in the form of a dynamic programming equation, which for an MC with a finite state set reduces to a system of ordinary differential equations with a single switching line. We prove a sufficient optimality condition and give examples of problems with deterministic and random impulse actions.
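For orientation, a quasi-variational inequality of the kind mentioned above typically takes the following form in combined continuous/impulse control of a finite-state MC. This is a minimal illustrative sketch under assumed notation (value function \varphi, controlled generator \Lambda(u), running cost f, terminal cost \Phi, intervention operator \mathcal{M} with impulse cost c); it is not the paper's exact statement.

% Illustrative sketch (assumed notation, not the paper's exact statement).
% \varphi(t,x): value function, x in a finite state set S, horizon [0,T];
% \Lambda(u) = (\lambda_{xy}(u)): controlled intensity (generator) matrix, rows summing to zero;
% f(x,u): running cost; c(x,y): cost of an impulse moving the chain from x to y.
\[
  \mathcal{M}\varphi(t,x) \;=\; \min_{y \neq x}\bigl[\varphi(t,y) + c(x,y)\bigr],
\]
\[
  \min\Bigl\{\,
    \frac{\partial \varphi}{\partial t}(t,x)
    + \min_{u \in U}\Bigl[\sum_{y \in S}\lambda_{xy}(u)\,\varphi(t,y) + f(x,u)\Bigr],\;
    \mathcal{M}\varphi(t,x) - \varphi(t,x)
  \Bigr\} \;=\; 0,
  \qquad \varphi(T,x) = \Phi(x).
\]

In the continuation region, where \varphi < \mathcal{M}\varphi, the first term vanishes and the relation becomes a system of ordinary differential equations in t, one equation per state; an impulse is applied on the set where \varphi = \mathcal{M}\varphi, which plays the role of the switching line.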