Hidden Markov model

A hidden Markov model (HMM) is a statistical model in which the system being modeled is assumed to be a Markov process with unknown parameters, and the challenge is to determine the hidden parameters from the observable parameters. The extracted model parameters can then be used to perform further analysis, for example for pattern recognition applications. An HMM can be considered the simplest dynamic Bayesian network.

In a regular Markov model, the state is directly visible to the observer, and therefore the state transition probabilities are the only parameters. In a hidden Markov model, the state is not directly visible, but variables influenced by the state are visible. Each state has a probability distribution over the possible output tokens. Therefore, the sequence of tokens generated by an HMM gives some information about the sequence of states. Hidden Markov models are especially known for their application in temporal pattern recognition, such as speech, handwriting, and gesture recognition, musical score following, partial discharges, and bioinformatics.
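To make this generative view concrete, here is a minimal sketch, in the same Python style as the example later in the article, of how an HMM emits a token sequence: the hidden state takes a random walk over the state space, and each step emits one visible token. The parameter dictionary format mirrors the concrete example below; the helper names sample and generate are illustrative assumptions, not part of the original article.

import random

def sample(dist):
    # Draw one outcome from a {outcome: probability} dictionary.
    outcomes = list(dist)
    return random.choices(outcomes, weights=[dist[o] for o in outcomes])[0]

def generate(start_p, trans_p, emit_p, length):
    # Walk the hidden Markov chain, emitting one visible token per step.
    # Only the second returned list would be available to an observer.
    state = sample(start_p)
    hidden, visible = [], []
    for _ in range(length):
        hidden.append(state)
        visible.append(sample(emit_p[state]))
        state = sample(trans_p[state])
    return hidden, visible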
Architecture of a hidden Markov model

In the general architecture of an instantiated HMM, conventionally drawn as a trellis diagram (not reproduced here), each oval shape represents a random variable that can adopt a number of values. The random variable x(t) is the hidden state at time t. The random variable y(t) is the observation at time t. The arrows in such a diagram denote conditional dependencies. The value of the hidden variable x(t) (at time t) depends only on the value of the hidden variable x(t − 1) (at time t − 1). This is called the Markov property. Similarly, the value of the observed variable y(t) depends only on the value of the hidden variable x(t) (both at time t).

Probability of an observed sequence

The probability of observing a sequence Y = y(0), y(1), \dots, y(L-1) of length L is given by

P(Y) = \sum_{X} P(Y \mid X)\, P(X),

where the sum runs over all possible hidden node sequences X = x(0), x(1), \dots, x(L-1). Brute-force calculation of P(Y) is intractable for most real-life problems, as the number of possible hidden node sequences is typically extremely high. The calculation can, however, be sped up enormously using an algorithm called the forward algorithm.[1]

Using hidden Markov models

There are three canonical problems associated with HMMs:

- Given the parameters of the model, compute the probability of a particular output sequence. This problem is solved by the forward algorithm.
- Given the parameters of the model, find the most likely sequence of hidden states that could have generated a given output sequence. This problem is solved by the Viterbi algorithm.
- Given an output sequence or a set of such sequences, find the most likely set of state transition and output probabilities. This problem is solved by the Baum-Welch algorithm.

A sketch of the forward algorithm appears after this list.
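The first of these problems is simple enough to sketch here. The following Python function is a minimal implementation of the forward algorithm under the assumption that the model parameters are stored as nested dictionaries, in the format of the concrete example below; the name forward and its argument names are illustrative, not from the original article.

def forward(observations, states, start_p, trans_p, emit_p):
    # alpha[s] = probability of having seen the observations so far
    # and being in state s at the current time step.
    alpha = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: emit_p[s][obs] * sum(alpha[r] * trans_p[r][s] for r in states)
                 for s in states}
    # P(Y) is the total probability over the final hidden state.
    return sum(alpha.values())

By accumulating the per-state probabilities alpha step by step, the function does in O(L · |states|²) time what the brute-force sum does in exponential time. With the parameters of the weather example below, forward(('walk', 'shop', 'clean'), states, start_probability, transition_probability, emission_probability) sums over all 2³ = 8 hidden weather sequences and returns approximately 0.0336.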
A concrete example

Assume you have a friend who lives far away and to whom you talk daily over the telephone about what he did that day. Your friend is interested in only three activities: walking in the park, shopping, and cleaning his apartment. The choice of what to do is determined exclusively by the weather on a given day. You have no definite information about the weather where your friend lives, but you know general trends. Based on what he tells you he did each day, you try to guess what the weather must have been like.

You believe that the weather operates as a discrete Markov chain. There are two states, "Rainy" and "Sunny", but you cannot observe them directly; that is, they are hidden from you. On each day, there is a certain chance that your friend will perform one of the following activities, depending on the weather: "walk", "shop", or "clean". Since your friend tells you about his activities, those are the observations. The entire system is that of a hidden Markov model (HMM).

You know the general weather trends in the area, and what your friend likes to do on average. In other words, the parameters of the HMM are known. You can write them down in the Python programming language:

states = ('Rainy', 'Sunny')

observations = ('walk', 'shop', 'clean')

start_probability = {'Rainy': 0.6, 'Sunny': 0.4}

transition_probability = {
   'Rainy' : {'Rainy': 0.7, 'Sunny': 0.3},
   'Sunny' : {'Rainy': 0.4, 'Sunny': 0.6},
   }

emission_probability = {
   'Rainy' : {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
   'Sunny' : {'walk': 0.6, 'shop': 0.3, 'clean': 0.1},
   }

In this piece of code, start_probability represents your uncertainty about which state the HMM is in when your friend first calls you, transition_probability represents the change of the weather in the underlying Markov chain, and emission_probability represents how likely your friend is to perform a certain activity on each day, given the weather. This example is further elaborated on the Viterbi algorithm page; a sketch of that algorithm is given below, after the applications section.

Applications of hidden Markov models

As noted in the introduction, hidden Markov models are applied in temporal pattern recognition tasks such as speech, handwriting, and gesture recognition, musical score following, and partial discharge analysis, as well as in bioinformatics, for example in the analysis of biological sequences.
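Since the article defers the elaboration to the Viterbi algorithm page, here is a minimal, self-contained sketch of that algorithm against the parameters above, answering the second canonical question: which weather sequence most likely produced a given sequence of activities. The function name viterbi and its return convention (probability plus state path) are assumptions for illustration.

def viterbi(obs_seq, states, start_p, trans_p, emit_p):
    # best[s] = (probability, path) of the most likely state sequence
    # that ends in state s and explains the observations seen so far.
    best = {s: (start_p[s] * emit_p[s][obs_seq[0]], [s]) for s in states}
    for o in obs_seq[1:]:
        best = {s: max(((p * trans_p[r][s] * emit_p[s][o], path + [s])
                        for r, (p, path) in best.items()),
                       key=lambda t: t[0])
                for s in states}
    return max(best.values(), key=lambda t: t[0])

prob, path = viterbi(('walk', 'shop', 'clean'), states, start_probability,
                     transition_probability, emission_probability)
print(path, prob)   # ['Sunny', 'Rainy', 'Rainy'] and probability ~0.01344

Reading the result: a friend who walked, then shopped, then cleaned was most likely enjoying a sunny day followed by two rainy ones, and that single best explanation carries probability of about 0.01344.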
History

Hidden Markov models were first described in a series of statistical papers by Leonard E. Baum and other authors in the second half of the 1960s. One of the first applications of HMMs was speech recognition, starting in the mid-1970s.[3] In the second half of the 1980s, HMMs began to be applied to the analysis of biological sequences, in particular DNA. Since then, they have become ubiquitous in the field of bioinformatics.[4]
This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Hidden_Markov_model". A list of authors is available in Wikipedia.