
Hierarchical Dirichlet Processes and Infinite HMMs. Preliminary Exam. Submitted to: Dr. Joseph Picone, Examining Committee Chair; Dr. Iyad Obeid, Committee Member, Dept. of Electrical and Computer Engineering; Dr. Marc Sobel, Committee Member, Dept. of Statistics; Dr. Chang-Hee Won, Committee Member, Dept. of Electrical and Computer Engineering; Dr. Slobodan Vucetic, Committee Member, Dept. of Computer and Information Sciences. March 6, 2012. Prepared by: Amir Harati, PhD Candidate. PhD Advisor: Dr. Joseph Picone, Professor and Chair, Department of Electrical and Computer Engineering, Temple University, College of Engineering, 1947 North 12th Street, Philadelphia, Pennsylvania 19122. Tel: 215-204-7597. Email: [email protected]

Motivation. Parametric models can capture only a bounded amount of information from the data. Real data is complex, so parametric assumptions are often wrong. Nonparametric models can lead to model selection/averaging solutions without paying the cost of those methods. In addition, Bayesian methods often provide a mathematically well-defined framework with better extensibility. (Figure from [1]: all possible data sets of size n.)

Motivation. Speech recognizer architecture: the performance of the system depends on the quality of the acoustic models. HMMs and mixture models are frequently used for acoustic modeling. The number of models and the degree of parameter sharing are among the most important model selection problems in a speech recognizer. Can hierarchical nonparametric Bayesian modeling help us? (Block diagram: Input Speech, Acoustic Front-end, Search using Acoustic Models P(A|W) and Language Model P(W), Recognized Utterance.)

Outline: Background. Hierarchical Dirichlet Process. Posterior Sampling in the CRF. Augmented Posterior Representation Sampler. HDP-HMM. Direct Assignment Sampler. Block Sampler. Sequential Sampler. Demonstrations. Future Work and Discussion.

Background. $(\Theta, \Sigma)$ is a measurable space, where $\Sigma$ is the sigma-algebra. A measure $\mu$ over $(\Theta, \Sigma)$ is a function from $\Sigma$ to $[0, \infty)$ such that $\mu(\emptyset) = 0$ and $\mu(\bigcup_i A_i) = \sum_i \mu(A_i)$ for disjoint $A_i$. For a probability measure, $\mu(\Theta) = 1$. A Dirichlet distribution is a distribution over the $K$-dimensional probability simplex. (Figure from [2]: examples of Dirichlet distributions.)

Background. A Dirichlet process (DP) is a random probability measure $G$ over $(\Theta, \Sigma)$ such that for any finite measurable partition $(A_1, \ldots, A_K)$ of $\Theta$ we have $(G(A_1), \ldots, G(A_K)) \sim \mathrm{Dir}(\alpha G_0(A_1), \ldots, \alpha G_0(A_K))$, and we write $G \sim \mathrm{DP}(\alpha, G_0)$. $G_0$ is the base distribution and acts like the mean of the DP; $\alpha$ is the concentration parameter and is proportional to the inverse of the variance. A DP is discrete with probability one. Stick-breaking construction: $v_k \sim \mathrm{Beta}(1, \alpha)$, $\beta_k = v_k \prod_{l=1}^{k-1}(1 - v_l)$, $\theta_k^* \sim G_0$, $G = \sum_{k=1}^{\infty} \beta_k \delta_{\theta_k^*}$. Polya urn scheme: $\theta_i \mid \theta_1, \ldots, \theta_{i-1}, \alpha, G_0 \sim \frac{1}{i-1+\alpha} \sum_{l=1}^{i-1} \delta_{\theta_l} + \frac{\alpha}{i-1+\alpha} G_0$. Chinese restaurant process (CRP): $\theta_i \mid \theta_1, \ldots, \theta_{i-1}, \alpha, G_0 \sim \sum_{k=1}^{K} \frac{m_k}{i-1+\alpha} \delta_{\theta_k^*} + \frac{\alpha}{i-1+\alpha} G_0$, where $m_k$ is the number of previous draws assigned to atom $\theta_k^*$.
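
To make these constructions concrete, here is a minimal NumPy sketch (my own illustration, not from the slides; the truncation level and function names are arbitrary) that draws an approximate DP sample by stick-breaking and simulates CRP table counts.

```python
import numpy as np

def stick_breaking(alpha, G0_sampler, truncation=1000, rng=None):
    """Approximate draw G ~ DP(alpha, G0) via truncated stick-breaking."""
    rng = np.random.default_rng(rng)
    v = rng.beta(1.0, alpha, size=truncation)            # stick proportions v_k ~ Beta(1, alpha)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    weights = v * remaining                              # beta_k = v_k * prod_{l<k}(1 - v_l)
    atoms = np.array([G0_sampler(rng) for _ in range(truncation)])
    return weights, atoms

def chinese_restaurant_process(alpha, n_customers, rng=None):
    """Simulate CRP seating; returns the table occupancy counts m_k."""
    rng = np.random.default_rng(rng)
    counts = []
    for i in range(n_customers):
        probs = np.array(counts + [alpha], dtype=float)  # existing tables, or alpha for a new one
        probs /= i + alpha                               # normalizer is (i - 1) + alpha, 0-indexed here
        table = rng.choice(len(probs), p=probs)
        if table == len(counts):
            counts.append(1)                             # open a new table
        else:
            counts[table] += 1
    return counts

# Example: weights of DP(alpha=1, G0=N(0,1)) and CRP table sizes for 100 customers.
w, th = stick_breaking(1.0, lambda r: r.normal(0.0, 1.0))
print(w[:5].round(3), chinese_restaurant_process(1.0, 100, rng=0))
```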

Hierarchical Dirichlet Process (HDP). Grouped-data clustering problem: consider topic modeling, where each document is a group and we want to model each document with a mixture while sharing mixture components across groups. Each group needs its own DP, and a hierarchical architecture shares clusters across groups: sharing of atoms is obtained by using a common DP as the base distribution for each group. Generative model: $G_0 \mid \gamma, H \sim \mathrm{DP}(\gamma, H)$; $G_j \mid \alpha_0, G_0 \sim \mathrm{DP}(\alpha_0, G_0)$ for each group $j \in J$; $\theta_{ji} \mid G_j \sim G_j$; $x_{ji} \mid \theta_{ji} \sim F(\theta_{ji})$. Equivalently, in stick-breaking form: $\beta \mid \gamma \sim \mathrm{GEM}(\gamma)$, $\pi_j \mid \alpha_0, \beta \sim \mathrm{DP}(\alpha_0, \beta)$, $\theta_k^{**} \mid H \sim H$, $z_{ji} \mid \pi_j \sim \pi_j$, $x_{ji} \mid z_{ji}, \{\theta_k^{**}\} \sim F(\theta_{z_{ji}}^{**})$. From [3,4]
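
As an illustration of the stick-breaking form (a weak-limit sketch of my own, with truncation level L, Gaussian atoms, and unit-variance emissions as assumptions), grouped data can be generated so that the global weights come from a symmetric Dirichlet approximation of GEM(gamma) and each group's weights are Dirichlet-distributed around them, so atoms are shared across groups.

```python
import numpy as np

def hdp_weak_limit(gamma, alpha0, group_sizes, L=50, seed=0):
    """Generate grouped data from a truncated (weak-limit) HDP mixture of 1-D Gaussians."""
    rng = np.random.default_rng(seed)
    beta = rng.dirichlet(np.full(L, gamma / L))          # global weights, approx. GEM(gamma)
    theta = rng.normal(0.0, 3.0, size=L)                 # shared atoms theta_k** ~ H = N(0, 9)
    data, labels = [], []
    for n_j in group_sizes:
        pi_j = rng.dirichlet(alpha0 * beta + 1e-10)      # group weights (jitter for numerical safety)
        z = rng.choice(L, size=n_j, p=pi_j)              # component assignments z_ji
        data.append(rng.normal(theta[z], 1.0))           # x_ji ~ N(theta_{z_ji}, 1)
        labels.append(z)
    return beta, theta, data, labels

beta, theta, data, labels = hdp_weak_limit(gamma=1.0, alpha0=2.0, group_sizes=[50, 80, 60])
print("components used per group:", [int(np.unique(z).size) for z in labels])
```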

HDP Stick-breaking construction From [5]

HDP Chinese Restaurant Franchise (CRF). Each group corresponds to a restaurant. There is a franchise-wide menu with an unbounded number of entries. The number of dishes grows logarithmically with the number of tables and doubly logarithmically with the number of data points. Reinforcement effect: new customers tend to sit at tables with many other customers and to choose dishes that are already chosen by many other tables. From [6]
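
A minimal prior simulation of the CRF (my own sketch; variable names are illustrative) makes the reinforcement effect and the shared menu visible: customers prefer crowded tables, and new tables prefer dishes already served at many tables across the franchise.

```python
import numpy as np

def simulate_crf(alpha, gamma, customers_per_restaurant, seed=0):
    """Simulate table/dish assignments from the Chinese restaurant franchise prior."""
    rng = np.random.default_rng(seed)
    dish_counts = []                                   # m_.k: tables franchise-wide serving dish k
    table_dishes_all = []
    for n_j in customers_per_restaurant:
        table_counts, table_dishes = [], []            # n_jt and k_jt for this restaurant
        for i in range(n_j):
            # customer chooses a table: proportional to occupancy, or alpha for a new table
            probs = np.array(table_counts + [alpha], dtype=float)
            t = rng.choice(len(probs), p=probs / probs.sum())
            if t == len(table_counts):                 # new table: pick its dish from the shared menu
                dprobs = np.array(dish_counts + [gamma], dtype=float)
                k = rng.choice(len(dprobs), p=dprobs / dprobs.sum())
                if k == len(dish_counts):
                    dish_counts.append(0)              # brand-new dish on the franchise menu
                dish_counts[k] += 1
                table_counts.append(0)
                table_dishes.append(k)
            table_counts[t] += 1
        table_dishes_all.append(table_dishes)
    return dish_counts, table_dishes_all

dishes, tables = simulate_crf(alpha=1.0, gamma=1.0, customers_per_restaurant=[100, 100, 100])
print("dishes:", len(dishes), "tables per restaurant:", [len(t) for t in tables])
```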

HDP Posterior Distribution. Conditioned on the table-level draws $\theta^{**}$, the posterior of the global measure is $G_0 \mid \theta^{**}, \gamma, H \sim \mathrm{DP}\!\left(\gamma + m_{\cdot\cdot}, \frac{\gamma H + \sum_{k=1}^{K} m_{\cdot k} \delta_{\theta_k^{**}}}{\gamma + m_{\cdot\cdot}}\right)$; equivalently $G_0 = \beta_0 \tilde{G}_0 + \sum_{k=1}^{K} \beta_k \delta_{\theta_k^{**}}$ with $(\beta_0, \beta_1, \ldots, \beta_K) \sim \mathrm{Dir}(\gamma, m_{\cdot 1}, \ldots, m_{\cdot K})$ and $\tilde{G}_0 \sim \mathrm{DP}(\gamma, H)$. Similarly, $G_j \mid G_0, \theta_j \sim \mathrm{DP}\!\left(\alpha_0 + n_{j\cdot\cdot}, \frac{\alpha_0 G_0 + \sum_{k=1}^{K} n_{j\cdot k} \delta_{\theta_k^{**}}}{\alpha_0 + n_{j\cdot\cdot}}\right)$; equivalently $G_j = \pi_{j0} \tilde{G}_j + \sum_{k=1}^{K} \pi_{jk} \delta_{\theta_k^{**}}$ with $(\pi_{j0}, \pi_{j1}, \ldots, \pi_{jK}) \sim \mathrm{Dir}(\alpha_0 \beta_0, \alpha_0 \beta_1 + n_{j\cdot 1}, \ldots, \alpha_0 \beta_K + n_{j\cdot K})$ and $\tilde{G}_j \sim \mathrm{DP}(\alpha_0 \beta_0, \tilde{G}_0)$. Interpretation: at the beginning $\beta_0$ is large, so $\pi_{j0}$ is large and $G_j$ is concentrated around $G_0$. After many tables become occupied, $\beta_0$ gets smaller, so $\pi_{j0}$ becomes smaller and $G_j$ is no longer concentrated around $G_0$: new draws are not likely, but when they happen they tend to be different from the average.

Posterior Sampling in the CRF. Sample table assignments (t): given the table and dish labels of all other customers, $p(t_{ji} = t \mid \mathbf{t}^{-ji}, \mathbf{k}) \propto n_{jt}^{-ji}\, f_{k_{jt}}^{-x_{ji}}(x_{ji})$ for a previously used table $t$, and $\propto \alpha_0\, p(x_{ji} \mid \mathbf{t}^{-ji}, t_{ji} = t^{new}, \mathbf{k})$ for a new table, where $p(x_{ji} \mid \mathbf{t}^{-ji}, t_{ji} = t^{new}, \mathbf{k}) = \sum_{k=1}^{K} \frac{m_{\cdot k}}{m_{\cdot\cdot} + \gamma} f_k^{-x_{ji}}(x_{ji}) + \frac{\gamma}{m_{\cdot\cdot} + \gamma} f_{k^{new}}^{-x_{ji}}(x_{ji})$. Here $f_k^{-x_{ji}}(x_{ji}) = \frac{\int f(x_{ji} \mid \theta_k) \prod_{j'i' \in D_k \setminus x_{ji}} f(x_{j'i'} \mid \theta_k)\, h(\theta_k)\, d\theta_k}{\int \prod_{j'i' \in D_k \setminus x_{ji}} f(x_{j'i'} \mid \theta_k)\, h(\theta_k)\, d\theta_k}$ is the predictive likelihood of $x_{ji}$ under component $k$ with $x_{ji}$ removed, and $f_{k^{new}}^{-x_{ji}}(x_{ji}) = \int f(x_{ji} \mid \theta)\, h(\theta)\, d\theta$. If a new table is selected, sample its dish: $p(k_{jt^{new}} = k \mid \mathbf{t}, \mathbf{k}) \propto m_{\cdot k}\, f_k^{-x_{ji}}(x_{ji})$ for a previously used dish $k$, and $\propto \gamma\, f_{k^{new}}^{-x_{ji}}(x_{ji})$ for a new dish. Sample dishes (k): $p(k_{jt} = k \mid \mathbf{t}, \mathbf{k}^{-jt}) \propto m_{\cdot k}^{-jt}\, f_k^{-\mathbf{x}_{jt}}(\mathbf{x}_{jt})$ for a previously used dish, and $\propto \gamma\, f_{k^{new}}^{-\mathbf{x}_{jt}}(\mathbf{x}_{jt})$ for a new dish, where $\mathbf{x}_{jt}$ denotes all customers seated at table $t$ of restaurant $j$. For exponential-family likelihoods we only need to update cached sufficient statistics (for Gaussian emissions, the sums $\sum_l x^{(l)}$ and $\sum_l x^{(l)} x^{(l)T}$); with a conjugate normal-inverse-Wishart prior the predictive densities $f_k^{-x_{ji}}(x_{ji})$ are multivariate Student-t.

Posterior Representation Sampler. Sample z: $p(z_{ji} = k \mid \mathbf{z}^{-ji}, \mathbf{x}, \boldsymbol{\pi}) \propto \pi_{jk}\, f_k^{-x_{ji}}(x_{ji})$ for a previously used $k$ and $\propto \pi_{j0}\, f_{k^{new}}^{-x_{ji}}(x_{ji})$ for $k = k^{new}$. If a new component is chosen, split the remaining weight: $v_0 \sim \mathrm{Beta}(1, \gamma)$, $\beta_{K+1} = \beta_0 v_0$, $\beta_0^{new} = \beta_0 (1 - v_0)$, and similarly $v_j \sim \mathrm{Beta}(1, \alpha_0 \beta_0)$, $\pi_{j,K+1} = \pi_{j0} v_j$, $\pi_{j0}^{new} = \pi_{j0} (1 - v_j)$. Sample m: Antoniak showed that if $\beta \sim \mathrm{GEM}(\alpha)$ and $z_i \sim \beta$, then the number of unique values $K$ among $N$ draws is distributed as $p(K \mid N, \alpha) = s(N, K)\, \alpha^K\, \frac{\Gamma(\alpha)}{\Gamma(\alpha + N)}$, where $s(N, K)$ is the unsigned Stirling number of the first kind; hence $p(m_{jk} = m \mid \mathbf{z}, \beta, \alpha_0) \propto s(n_{j\cdot k}, m)\, (\alpha_0 \beta_k)^m$. Alternatively, we can simulate a CRF: for each $j, k$ set $m_{jk} = 0$ and $n = 0$; for each customer in restaurant $j$ eating dish $k$, sample $x \sim \mathrm{Ber}\!\left(\frac{\alpha_0 \beta_k}{n + \alpha_0 \beta_k}\right)$, increment $n$, and if $x = 1$ increment $m_{jk}$ (see the sketch below). Sample the weights: $(\beta_0, \beta_1, \ldots, \beta_K) \sim \mathrm{Dir}(\gamma, m_{\cdot 1}, \ldots, m_{\cdot K})$ and $(\pi_{j0}, \pi_{j1}, \ldots, \pi_{jK}) \sim \mathrm{Dir}(\alpha_0 \beta_0, \alpha_0 \beta_1 + n_{j\cdot 1}, \ldots, \alpha_0 \beta_K + n_{j\cdot K})$.
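
The CRF simulation of the table counts referenced above can be written in a few lines; this is my own hedged sketch (function name mine), equivalent in distribution to sampling m_jk from the Antoniak form with Stirling numbers.

```python
import numpy as np

def sample_table_count(n_jk, alpha0_beta_k, rng=None):
    """Sample m_jk given n_jk customers in restaurant j eating dish k.

    Simulates the CRF: the i-th such customer opens a new table with
    probability alpha0*beta_k / (alpha0*beta_k + i), i = 0, 1, ..., n_jk - 1.
    """
    rng = np.random.default_rng(rng)
    m = 0
    for i in range(n_jk):
        if rng.random() < alpha0_beta_k / (alpha0_beta_k + i):
            m += 1                                   # this customer starts a new table
    return m

# Example: larger alpha0*beta_k yields more tables for the same number of customers.
print([sample_table_count(200, ab, rng=0) for ab in (0.1, 1.0, 10.0)])
```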

Topic Modeling [3,4]

Hidden Markov Models (HMMs). HMMs are a dynamic variant of mixture models. An HMM is characterized by its transition and emission distributions; the number of states and the number of mixture components must be specified a priori, and the topology is also fixed. Infinite HMMs: an HMM with an unbounded number of states and mixtures per state. Each state's transition distribution is replaced by a DP, and these DPs must be linked to make state sharing possible, so an HDP is used to tie the state transition distributions. Each state can independently use another DP to model an unbounded emission mixture. The original HDP-HMM suffers from a lack of state persistence; this problem is solved by adding a sticky parameter.

HDP-HMM Definition: $\beta \mid \gamma \sim \mathrm{GEM}(\gamma)$; $\pi_j \mid \alpha, \kappa, \beta \sim \mathrm{DP}\!\left(\alpha + \kappa, \frac{\alpha \beta + \kappa \delta_j}{\alpha + \kappa}\right)$; $\psi_j \mid \sigma \sim \mathrm{GEM}(\sigma)$; $\theta_{kj}^{**} \mid H \sim H$; $z_t \mid z_{t-1}, \{\pi_j\} \sim \pi_{z_{t-1}}$; $s_t \mid z_t, \{\psi_j\} \sim \psi_{z_t}$; $x_t \mid z_t, s_t, \{\theta_{kj}^{**}\} \sim F(\theta_{z_t s_t}^{**})$. CRF with loyal customers: each restaurant has a specialty dish which is also served in other restaurants. If a customer eats the specialty dish (the likely case), his children go to the same restaurant and likely eat the same dish. If the customer eats another dish, his children go to the restaurant indexed by that dish and more likely eat its specialty dish. From [7]
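
To see what the sticky parameter does, here is a small weak-limit generative sketch (my own, assuming a truncation level L and 1-D Gaussian emissions; it is not one of the samplers discussed on the following slides): kappa is added to the self-transition entry of each row's Dirichlet parameter, which lengthens state runs.

```python
import numpy as np

def sample_sticky_hdp_hmm(T, L=10, gamma=2.0, alpha=4.0, kappa=20.0, sigma=2.0, seed=0):
    """Generate a state/observation sequence from a truncated sticky HDP-HMM."""
    rng = np.random.default_rng(seed)
    beta = rng.dirichlet(np.full(L, gamma / L))                    # global transition weights
    pi = np.vstack([rng.dirichlet(alpha * beta + kappa * np.eye(L)[j] + 1e-10)
                    for j in range(L)])                            # sticky transition rows
    psi = rng.dirichlet(np.full(L, sigma / L), size=L)             # per-state mixture weights
    mu = rng.normal(0.0, 5.0, size=(L, L))                         # mu[k, j]: mean of mixture j in state k
    z = np.empty(T, dtype=int)
    x = np.empty(T)
    z[0] = rng.choice(L, p=beta)
    for t in range(T):
        if t > 0:
            z[t] = rng.choice(L, p=pi[z[t - 1]])                   # z_t ~ pi_{z_{t-1}}
        s = rng.choice(L, p=psi[z[t]])                             # s_t ~ psi_{z_t}
        x[t] = rng.normal(mu[z[t], s], 1.0)                        # x_t ~ N(mu_{z_t, s_t}, 1)
    return z, x

z, x = sample_sticky_hdp_hmm(T=500)
print("states used:", np.unique(z).size,
      "mean run length:", len(z) / (1 + np.sum(z[1:] != z[:-1])))
```

Setting kappa to zero in this sketch recovers the original (non-sticky) HDP-HMM and produces much shorter state runs.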

Direct Assignment Sampler. Sample the augmented state $(z_t, s_t)$ for each $t$ from its conditional given the remaining states, $\beta$, and the data: the probability of $(z_t = k, s_t = j)$ combines the transition into $k$ from $z_{t-1}$ (including the sticky bonus $\kappa$ when $k = z_{t-1}$), the transition out of $k$ into $z_{t+1}$, the mixture weight $\frac{n_{kj}^{-t}}{n_{k\cdot}^{-t} + \sigma}$, and the predictive likelihood $p(x_t \mid \mathbf{x}^{-t}, z = k, s = j)$; new states or components use the prior predictive instead. With conjugate normal-inverse-Wishart priors the predictive likelihoods are Student-t densities. Sample $\beta$ by first sampling auxiliary table counts, $p(m_{jk} = m \mid \mathbf{z}, \beta, \alpha, \kappa) \propto s(n_{jk}, m)\,(\alpha \beta_k + \kappa\, \delta(j, k))^m$ (alternatively, simulate a CRF); then sample the override variables $w_{j\cdot} \sim \mathrm{Binomial}\!\left(m_{jj}, \frac{\rho}{\rho + \beta_j (1 - \rho)}\right)$ with $\rho = \kappa/(\alpha + \kappa)$, which are performed to cancel the bias introduced by the sticky parameter; adjust the number of informative tables, $\bar{m}_{jk} = m_{jk}$ for $j \neq k$ and $\bar{m}_{jj} = m_{jj} - w_{j\cdot}$; and finally sample $\beta \sim \mathrm{Dir}(\gamma, \bar{m}_{\cdot 1}, \ldots, \bar{m}_{\cdot K})$.

Some Notes. Sampling the override variables cancels the bias introduced by the sticky parameter: the sticky parameter effectively overrides the dish that would have been assigned to a table, and an unbiased estimate must take this into account. The direct assignment sampler suffers from slow convergence. The parameters are integrated out; in other words, this sampler can only be used for inference, not learning. If we want to perform learning, we have to sample the parameters by simulation (more computation). We would also like to sample all the states at once, and we are interested in doing learning and inference at the same time.

Forward-Backward Probabilities. The joint probability of the state and mixture component can be written as $p(z_t, s_t \mid x_{1:T}, \pi, \psi, \theta) \propto p(z_t \mid z_{t-1}, \pi)\, p(s_t \mid z_t, \psi)\, f(x_t \mid \theta_{z_t s_t})\, p(x_{t+1:T} \mid z_t, \pi, \theta, \psi)$, where the forward part collects the transition, mixture, and emission terms (in this work the forward probabilities are approximated by conditioning on the previous state $z_{t-1}$) and the backward probability is $p(x_{t+1:T} \mid z_t, \pi, \theta, \psi)$. Writing the backward messages as $m_{t+1,t}(z_t) \propto p(x_{t+1:T} \mid z_t, \pi, \theta, \psi)$, they satisfy the recursion $m_{t+1,t}(k) \propto \sum_{i=1}^{L} \sum_{l=1}^{L} \pi_{ki}\, \psi_{il}\, f(x_{t+1} \mid \theta_{il})\, m_{t+2,t+1}(i)$ for $k = 1, \ldots, L$ and $t \leq T - 1$, with $m_{T+1,T}(k) = 1$. Finally, $p(z_t = k, s_t = j \mid x_{1:T}, z_{t-1}, \pi, \psi, \theta) \propto \pi_{z_{t-1} k}\, \psi_{kj}\, f(x_t \mid \theta_{kj})\, m_{t+1,t}(k)$, where for Gaussian emissions $f(x_t \mid \theta_{kj}) = \mathcal{N}(x_t; \mu_{kj}, \Sigma_{kj})$.
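
A direct NumPy translation of the backward recursion (my own sketch, assuming 1-D Gaussian emissions; pi, psi, mu, sigma are finite truncated parameter arrays) could look as follows; each message is renormalized to avoid underflow, which is harmless since it only enters up to proportionality.

```python
import numpy as np
from scipy.stats import norm

def backward_messages(x, pi, psi, mu, sigma):
    """b[t, k] ~ p(x_{t+1:T} | z_t = k) for a truncated (sticky) HDP-HMM with 1-D Gaussians.

    pi:  (L, L) state transition matrix; psi: (L, L) per-state mixture weights;
    mu, sigma: (L, L) emission means and standard deviations indexed by (state, mixture).
    """
    T, L = len(x), pi.shape[0]
    b = np.ones((T, L))                              # b[T-1] = 1 (no future observations)
    for t in range(T - 2, -1, -1):
        # p(x_{t+1} | z_{t+1} = i), marginalizing over the mixture component of each state i
        lik = np.sum(psi * norm.pdf(x[t + 1], loc=mu, scale=sigma), axis=1)
        msg = pi @ (lik * b[t + 1])                  # sum_i pi[k, i] * lik[i] * b[t+1, i]
        b[t] = msg / msg.sum()                       # renormalize for numerical stability
    return b
```

The final formula on this slide then reads, for each t: p(z_t = k, s_t = j | ...) is proportional to pi[z_{t-1}, k] * psi[k, j] * norm.pdf(x[t], mu[k, j], sigma[k, j]) * b[t, k].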

Block Sampler (weak-limit approximation with truncation level $L$). Compute the backward messages $m_{t,t-1}(k) \propto \sum_{i=1}^{L} \sum_{l=1}^{L} \pi_{ki}\, \psi_{il}\, \mathcal{N}(x_t; \mu_{il}, \Sigma_{il})\, m_{t+1,t}(i)$, with $m_{T+1,T}(k) = 1$. Sample the augmented state sequence forward in time: $p(z_t = k, s_t = j \mid z_{t-1}, x_{1:T}) \propto \pi_{z_{t-1} k}\, \psi_{kj}\, \mathcal{N}(x_t; \mu_{kj}, \Sigma_{kj})\, m_{t+1,t}(k)$. Update the caches, sample the override variables and adjust the table counts as in the previous algorithm, then sample $\beta \sim \mathrm{Dir}(\gamma/L + \bar{m}_{\cdot 1}, \ldots, \gamma/L + \bar{m}_{\cdot L})$, $\pi_k \sim \mathrm{Dir}(\alpha \beta_1 + n_{k1}, \ldots, \alpha \beta_k + \kappa + n_{kk}, \ldots, \alpha \beta_L + n_{kL})$, and $\psi_k \sim \mathrm{Dir}(\sigma/L + n'_{k1}, \ldots, \sigma/L + n'_{kL})$, where $n'_{kj}$ counts observations assigned to mixture component $j$ of state $k$. Sample the emission parameters $\theta_{kj} \sim p(\theta_{kj} \mid \{x_t : z_t = k, s_t = j\}, \lambda)$. Optionally, sample the hyperparameters.
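
Given those messages, the forward block-sampling step of the state sequence can be sketched as below (my own illustrative code, reusing backward_messages from the previous sketch; the uniform initial-state distribution is an assumption).

```python
import numpy as np
from scipy.stats import norm

def block_sample_states(x, pi, psi, mu, sigma, b, rng=None):
    """Jointly sample (z_t, s_t), t = 0..T-1, given backward messages b."""
    rng = np.random.default_rng(rng)
    T, L = len(x), pi.shape[0]
    z = np.empty(T, dtype=int)
    s = np.empty(T, dtype=int)
    for t in range(T):
        trans = pi[z[t - 1]] if t > 0 else np.full(L, 1.0 / L)    # initial state assumed uniform
        # joint weight over (k, j): transition * mixture weight * emission * backward message
        w = trans[:, None] * psi * norm.pdf(x[t], loc=mu, scale=sigma) * b[t][:, None]
        w = (w / w.sum()).ravel()
        k_j = rng.choice(L * L, p=w)
        z[t], s[t] = divmod(k_j, L)                               # row-major: index = k * L + j
    return z, s
```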

Particle Filter. Dynamic system: $z_t = f(z_{t-1}, \epsilon_t)$, $x_t = g(z_t, \nu_t)$. Update: $p(z_t \mid x_{1:t+1}) \propto p(x_{t+1} \mid z_t)\, p(z_t \mid x_{1:t})$, approximated by weighted particles $p_N(z_t \mid x_{1:t+1}) = \sum_{i=1}^{N} \omega_t^{(i)} \delta_{z_t^{(i)}}(z_t)$ with $\omega_t^{(i)} = p(x_{t+1} \mid z_t^{(i)}) \big/ \sum_{j=1}^{N} p(x_{t+1} \mid z_t^{(j)})$. Propagate: $p(z_{t+1} \mid x_{1:t+1}) = \int p(z_{t+1} \mid z_t, x_{t+1})\, p(z_t \mid x_{1:t+1})\, dz_t$.
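
For intuition, here is a generic bootstrap particle filter for a toy 1-D AR(1) state-space model (my own sketch; it follows the standard propagate-then-weight ordering rather than the exact factorization on this slide, and it is not the iHMM particle filter of the next slide).

```python
import numpy as np

def bootstrap_particle_filter(x_obs, n_particles=500, a=0.9, q=1.0, r=0.5, seed=0):
    """Bootstrap PF for z_t = a*z_{t-1} + N(0, q^2), x_t = z_t + N(0, r^2)."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, size=n_particles)    # initial particles z_0^(i)
    means = []
    for x_t in x_obs:
        particles = a * particles + rng.normal(0.0, q, size=n_particles)  # propagate via p(z_t | z_{t-1})
        w = np.exp(-0.5 * ((x_t - particles) / r) ** 2)                    # update: weight by p(x_t | z_t)
        w /= w.sum()
        means.append(np.sum(w * particles))                                # filtered mean E[z_t | x_{1:t}]
        particles = rng.choice(particles, size=n_particles, p=w)           # multinomial resampling
    return np.array(means)

# Toy data: AR(1) latent state observed in noise.
rng = np.random.default_rng(1)
z_true = np.zeros(100)
for t in range(1, 100):
    z_true[t] = 0.9 * z_true[t - 1] + rng.normal(0.0, 1.0)
x = z_true + rng.normal(0.0, 0.5, size=100)
print(np.round(bootstrap_particle_filter(x)[:5], 2))
```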

Sequential Learning and Inference [8]. For each particle $i$, calculate the importance weight $\omega_t^{(i)} = v_t^{(i)} / \sum_{j=1}^{N} v_t^{(j)}$, where $v_t^{(i)} = \sum_{l=1}^{L_t^{(i)}+1} q_l^{(i)}(Z_t^{(i)}, x_{t+1})$ is the predictive probability of the new observation under particle $i$, obtained by summing over candidate next states $l$ (including a new state $l = L_t^{(i)} + 1$), with each term combining the particle's transition counts out of the current state with the predictive emission density $f_l(x_{t+1})$. Propagate the particles: sample $z_{t+1}^{(i)}$ with probability proportional to $q_l^{(i)}(Z_t^{(i)}, x_{t+1})$, increment the transition count $n^{(i)}_{z_t^{(i)}, z_{t+1}^{(i)}}$, and update the cached sufficient statistics $S_{z_{t+1}^{(i)}, t+1} = S_{z_{t+1}^{(i)}, t} + s(x_{t+1})$. If a new state is initiated, increment $L_t^{(i)}$ and split the leftover stick-breaking mass $\beta_{0,t}^{(i)}$ between the new state and the remainder using a Beta draw. Resample the particles according to the weights, $p_N(Z_t \mid x_{1:t+1}) = \sum_{i=1}^{N} \omega_t^{(i)} \delta_{Z_t^{(i)}}$, update the hyperparameters, and resample the global weights $\beta_{t+1}^{(i)} \sim \mathrm{Dir}(m^{(i)}_{\cdot 1, t+1}, \ldots, m^{(i)}_{\cdot L^{(i)}, t+1}, \gamma^{(i)})$.

State persistence demo [7]

Fast switching Demo [7]

Comparison to Sparse Dirichlet Prior [7]

Speaker Diarization [7]

Speaker Diarization [7]

Alice in Wonderland [1,3]. Training over 1000 characters; testing over another 1000 characters. Output: characters (including space and punctuation).

Future Work. Can we use an approach similar to speaker diarization to discover a new set of acoustic units (instead of phonemes)? This problem fits particularly well in a nonparametric setting, since the number of units is not known a priori and should be estimated from the data; the main difficulty is forming a dictionary for the new units. How can we define a structured HDP-HMM (e.g., left-to-right) without violating the Bayesian framework (no heuristics)? From experiments, we know speaker-dependent models work significantly better than speaker-independent models; for example, a speech recognizer with gender-dependent models outperforms one with universal models for all speakers. The nonparametric Bayesian framework provides two important features that can facilitate speaker-dependent systems: (1) the number of speaker clusters is not known a priori and can grow as new data is obtained; (2) parameter sharing and model (and state) tying can be accomplished elegantly using proper hierarchies. Depending on the available training data, the system would have a different number of models for different acoustic units, with all acoustic units tied; moreover, each model can have a different number of states and a different number of mixture components per state.

References
1. Ghahramani, Z. (2010). Bayesian Hidden Markov Models and Extensions. Invited talk at CoNLL, Uppsala, Sweden.
2. Ghahramani, Z. (2005). Tutorial on Nonparametric Bayesian Methods. Talk at UAI.
3. Teh, Y., Jordan, M., Beal, M., & Blei, D. (2004). Hierarchical Dirichlet Processes. Technical Report 653, UC Berkeley.
4. Teh, Y., & Jordan, M. (2010). Hierarchical Bayesian Nonparametric Models with Applications. In N. Hjort, C. Holmes, P. Mueller, & S. Walker (Eds.), Bayesian Nonparametrics: Principles and Practice. Cambridge, UK: Cambridge University Press.
5. Teh, Y. W. (2009). Bayesian Nonparametrics. Talk at MLSS Cambridge.
6. Jordan, M. I. (2005). Dirichlet Processes, Chinese Restaurant Processes and All That. Tutorial presentation at the NIPS Conference.
7. Fox, E., Sudderth, E., Jordan, M., & Willsky, A. (2011). A Sticky HDP-HMM with Application to Speaker Diarization. The Annals of Applied Statistics, 5, 1020-1056.
8. Rodriguez, A. (2011, July). On-Line Learning for the Infinite Hidden Markov Model. Communications in Statistics: Simulation and Computation, 40(6), 879-893.

Thank You!
