Switching state-space models are widely used in applications across science, engineering, economics, and medical
research. In this paper, we present a Monte Carlo Expectation Maximization
(MCEM) algorithm for learning the parameters and classifying the states of a
state-space model with Markov switching. A stochastic implementation based on
the Gibbs sampler is introduced in the expectation step of the MCEM algorithm.
We study the asymptotic properties of the proposed algorithm and describe
how a nesting approach and Rao-Blackwellised forms can be employed
to accelerate the convergence of the MCEM algorithm. Finally, the
performance and effectiveness of the proposed method are demonstrated by
applications to simulated and physiological experimental data.
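As a rough illustration of the algorithmic structure summarized above (and not the authors' implementation), the sketch below runs an MCEM loop whose E-step is approximated by Gibbs sampling of the switching sequence. To keep it short and self-contained, it substitutes a simplified two-regime Markov-switching autoregression for a full switching state-space model, so the continuous latent states, the nesting approach, and the Rao-Blackwellisation are omitted; all parameter values, function names, and initializations are illustrative assumptions.

```python
"""Illustrative sketch only: MCEM with a Gibbs-sampled E-step for a toy
two-regime Markov-switching autoregression (a simplification of the
switching state-space setting; not the paper's implementation)."""
import numpy as np


def simulate(T=300, a=(0.2, 0.9), sigma=0.5,
             P=((0.95, 0.05), (0.05, 0.95)), seed=0):
    """Generate y_t = a[s_t] * y_{t-1} + N(0, sigma^2), s_t a 2-state Markov chain."""
    rng = np.random.default_rng(seed)
    P = np.asarray(P)
    s, y = np.zeros(T, dtype=int), np.zeros(T)
    for t in range(1, T):
        s[t] = rng.choice(2, p=P[s[t - 1]])
        y[t] = a[s[t]] * y[t - 1] + sigma * rng.standard_normal()
    return y, s


def gibbs_sweep(s, y, a, sigma, P, rng):
    """One Gibbs sweep: resample each s_t from its full conditional."""
    T = len(y)
    for t in range(1, T):
        logp = np.empty(2)
        for k in range(2):
            ll = -0.5 * ((y[t] - a[k] * y[t - 1]) / sigma) ** 2  # likelihood term
            lp = np.log(P[s[t - 1], k])                          # prior from s_{t-1}
            if t + 1 < T:
                lp += np.log(P[k, s[t + 1]])                     # prior to s_{t+1}
            logp[k] = ll + lp
        p = np.exp(logp - logp.max())
        s[t] = rng.choice(2, p=p / p.sum())
    return s


def mcem(y, n_iter=30, n_gibbs=50, seed=1):
    """MCEM: Gibbs-sampled E-step, closed-form M-step for (a_k, sigma, P)."""
    rng = np.random.default_rng(seed)
    T = len(y)
    a = np.array([0.0, 0.5])                 # initial guesses (assumed)
    sigma, P = 1.0, np.full((2, 2), 0.5)
    s = rng.integers(0, 2, size=T)
    for _ in range(n_iter):
        # E-step: collect Gibbs samples of the switching sequence.
        samples = []
        for _ in range(n_gibbs):
            s = gibbs_sweep(s, y, a, sigma, P, rng)
            samples.append(s.copy())
        S = np.array(samples)                # shape (n_gibbs, T)
        # M-step: maximise the Monte Carlo estimate of the expected
        # complete-data log-likelihood.
        resid_sq, trans = 0.0, np.ones((2, 2))   # +1 smoothing avoids log(0)
        for k in range(2):
            w = (S[:, 1:] == k).astype(float)    # per-sample state indicators
            den = np.sum(w * y[:-1] ** 2)
            a[k] = np.sum(w * y[1:] * y[:-1]) / den if den > 0 else 0.0
            resid_sq += np.sum(w * (y[1:] - a[k] * y[:-1]) ** 2)
        sigma = np.sqrt(resid_sq / (S.shape[0] * (T - 1)))
        for i in range(2):
            for j in range(2):
                trans[i, j] += np.sum((S[:, :-1] == i) & (S[:, 1:] == j))
        P = trans / trans.sum(axis=1, keepdims=True)
    return a, sigma, P, S.mean(axis=0)       # mean of samples ~ state classification


if __name__ == "__main__":
    y, s_true = simulate()
    a_hat, sigma_hat, P_hat, s_prob = mcem(y)
    print("estimated AR coefficients:", np.round(a_hat, 2))
    print("estimated noise std:", round(sigma_hat, 2))
```

The returned per-time-point sample averages play the role of the state classification, while the M-step updates the regime-specific coefficients, the noise scale, and the transition matrix from the sampled switching paths.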