Multimodal learning

Information in the real world usually comes in different modalities. For example, images are often accompanied by tags and text captions, and texts may include images to express their main ideas more clearly. Different modalities are characterized by very different statistical properties: images are typically represented as pixel intensities or outputs of feature extractors, while texts are represented as discrete word-count vectors. Because of these distinct statistical properties, it is important to discover the relationships between modalities. Multimodal learning addresses this by learning a joint representation of the modalities, and the resulting model can also fill in a missing modality given the observed ones. The model described here combines two deep Boltzmann machines, one per modality, with an additional hidden layer placed on top of the two machines to form the joint representation.

Motivation

Many models and algorithms have been implemented to retrieve and classify a single type of data, such as images or text. However, data usually come in multiple modalities that carry different information. For example, it is very common to caption an image to convey information not presented in the image itself; conversely, it is sometimes more straightforward to use an image to describe information that is not obvious from the text. As a result, if different words appear alongside similar images, those words are likely describing the same thing; conversely, if the same words appear with different images, those images may depict the same object. It is therefore important to find a model that can jointly represent the information so that it captures the correlation structure between modalities. Moreover, such a model should be able to recover missing modalities given observed ones, e.g. predicting a plausible image from a text description. The multimodal deep Boltzmann machine satisfies both purposes.

Background: Boltzmann machine

A Boltzmann machine is a type of stochastic neural network invented by Geoffrey Hinton and Terry Sejnowski in 1985. Boltzmann machines can be seen as the stochastic, generative counterpart of Hopfield networks. They are named after the Boltzmann distribution in statistical mechanics. The units in a Boltzmann machine are divided into two groups: visible units and hidden units. A general Boltzmann machine allows connections between any units; however, learning with general Boltzmann machines is impractical because the computational time grows exponentially with the size of the machine. A more efficient architecture, the restricted Boltzmann machine, allows connections only between hidden and visible units; it is described in the next section.

Restricted Boltzmann machine

A restricted Boltzmann machine (RBM) is an undirected graphical model with stochastic visible variables and stochastic hidden variables. Each visible variable is connected to each hidden variable. The energy function of the model is defined as

$$E(\mathbf{v}, \mathbf{h}; \theta) = -\sum_{i=1}^{D} \sum_{j=1}^{F} W_{ij} v_i h_j - \sum_{i=1}^{D} b_i v_i - \sum_{j=1}^{F} a_j h_j$$

where $\theta = \{W, \mathbf{a}, \mathbf{b}\}$ are the model parameters: $W_{ij}$ is the symmetric interaction term between visible unit $i$ and hidden unit $j$, and $b_i$ and $a_j$ are bias terms. The joint distribution of the system is defined as

$$P(\mathbf{v}; \theta) = \frac{1}{Z(\theta)} \sum_{\mathbf{h}} \exp\!\left(-E(\mathbf{v}, \mathbf{h}; \theta)\right)$$

where $Z(\theta)$ is a normalizing constant (the partition function). The conditional distributions over the hidden units $\mathbf{h}$ and visible units $\mathbf{v}$ factorize and can be derived as logistic functions of the model parameters:

$$P(\mathbf{h} \mid \mathbf{v}; \theta) = \prod_{j=1}^{F} p(h_j \mid \mathbf{v}), \qquad p(h_j = 1 \mid \mathbf{v}) = g\!\left(\sum_{i=1}^{D} W_{ij} v_i + a_j\right)$$

$$P(\mathbf{v} \mid \mathbf{h}; \theta) = \prod_{i=1}^{D} p(v_i \mid \mathbf{h}), \qquad p(v_i = 1 \mid \mathbf{h}) = g\!\left(\sum_{j=1}^{F} W_{ij} h_j + b_i\right)$$

where $g(x) = \frac{1}{1 + \exp(-x)}$ is the logistic function.
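A minimal NumPy sketch of these two conditionals follows; the dimensions, seed, and helper names such as `p_h_given_v` are illustrative assumptions, not taken from the article.

```python
import numpy as np

def sigmoid(x):
    """Logistic function g(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative dimensions: D visible units, F hidden units.
rng = np.random.default_rng(0)
D, F = 6, 4
W = 0.01 * rng.standard_normal((D, F))  # symmetric interaction terms W_ij
b = np.zeros(D)                         # visible biases b_i
a = np.zeros(F)                         # hidden biases a_j

def p_h_given_v(v):
    """p(h_j = 1 | v) = g(sum_i W_ij v_i + a_j), computed for all j at once."""
    return sigmoid(v @ W + a)

def p_v_given_h(h):
    """p(v_i = 1 | h) = g(sum_j W_ij h_j + b_i), computed for all i at once."""
    return sigmoid(h @ W.T + b)

v = rng.integers(0, 2, size=D).astype(float)  # a binary visible vector
h_prob = p_h_given_v(v)                       # factorized hidden activations
h = (rng.random(F) < h_prob).astype(float)    # one Gibbs sample of h
```

Because both conditionals factorize, a full layer can be sampled in one vectorized step, which is what makes block Gibbs sampling in RBMs efficient.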

The derivative of the log-likelihood with respect to the model parameters decomposes as the difference between a data-dependent expectation and the model's expectation.
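For the weights, this difference takes the standard RBM form

$$\frac{\partial \log P(\mathbf{v}; \theta)}{\partial W_{ij}} = \mathbb{E}_{P_{\text{data}}}\!\left[v_i h_j\right] - \mathbb{E}_{P_{\text{model}}}\!\left[v_i h_j\right]$$

where the data-dependent expectation is cheap to compute because $P(\mathbf{h} \mid \mathbf{v})$ factorizes, while the model expectation is typically approximated by Gibbs sampling, as in contrastive divergence.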

Gaussian-Bernoulli RBM

Gaussian-Bernoulli RBMs are a variant of the restricted Boltzmann machine used for modeling real-valued vectors such as pixel intensities, and are commonly used to model image data. The energy of the Gaussian-Bernoulli RBM is defined as

$$E(\mathbf{v}, \mathbf{h}; \theta) = \sum_{i=1}^{D} \frac{(v_i - b_i)^2}{2\sigma_i^2} - \sum_{i=1}^{D} \sum_{j=1}^{F} \frac{v_i}{\sigma_i} W_{ij} h_j - \sum_{j=1}^{F} a_j h_j$$

where $\theta = \{\mathbf{a}, \mathbf{b}, W, \sigma\}$ are the model parameters. The joint distribution is defined in the same way as for the restricted Boltzmann machine. The conditional distributions now become

$$P(\mathbf{h} \mid \mathbf{v}; \theta) = \prod_{j=1}^{F} p(h_j \mid \mathbf{v}), \qquad p(h_j = 1 \mid \mathbf{v}) = g\!\left(\sum_{i=1}^{D} W_{ij} \frac{v_i}{\sigma_i} + a_j\right)$$

$$P(\mathbf{v} \mid \mathbf{h}; \theta) = \prod_{i=1}^{D} p(v_i \mid \mathbf{h}), \qquad p(v_i \mid \mathbf{h}) \sim \mathcal{N}\!\left(\sigma_i \sum_{j=1}^{F} W_{ij} h_j + b_i,\; \sigma_i^2\right)$$

In the Gaussian-Bernoulli RBM, each visible unit conditioned on the hidden units is modeled as a Gaussian distribution.
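A short sketch of these conditionals, under the same illustrative naming assumptions as before (the per-unit standard deviations `sigma` and the helper names are not from the article):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative Gaussian-Bernoulli RBM parameters.
rng = np.random.default_rng(1)
D, F = 6, 4
W = 0.01 * rng.standard_normal((D, F))
b = np.zeros(D)            # visible (Gaussian mean) biases b_i
a = np.zeros(F)            # hidden biases a_j
sigma = np.ones(D)         # per-unit standard deviations sigma_i

def p_h_given_v(v):
    """p(h_j = 1 | v) = g(sum_i W_ij v_i / sigma_i + a_j)."""
    return sigmoid((v / sigma) @ W + a)

def sample_v_given_h(h):
    """v_i | h ~ N(sigma_i * sum_j W_ij h_j + b_i, sigma_i^2)."""
    mean = sigma * (h @ W.T) + b
    return mean + sigma * rng.standard_normal(D)

v = rng.standard_normal(D)                          # a real-valued visible vector
h = (rng.random(F) < p_h_given_v(v)).astype(float)  # binary hidden sample
v_recon = sample_v_given_h(h)                       # Gaussian reconstruction
```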

Replicated Softmax Model

The Replicated Softmax Model is another variant of the restricted Boltzmann machine, commonly used to model word-count vectors in a document. In a typical text-mining problem, let $K$ be the dictionary size and $M$ the number of words in the document. Let $\mathbf{V}$ be an $M \times K$ binary matrix with $v_{ik} = 1$ if and only if the $i$-th word in the document is the $k$-th word in the dictionary, and let $\hat{v}_k = \sum_{i=1}^{M} v_{ik}$ denote the count of the $k$-th dictionary word. The energy of the state $\{\mathbf{V}, \mathbf{h}\}$ for a document containing $M$ words is defined as

$$E(\mathbf{V}, \mathbf{h}) = -\sum_{j=1}^{F} \sum_{k=1}^{K} W_{jk} \hat{v}_k h_j - \sum_{k=1}^{K} b_k \hat{v}_k - M \sum_{j=1}^{F} a_j h_j$$

The conditional distributions are given by

$$p(h_j = 1 \mid \mathbf{V}) = g\!\left(M a_j + \sum_{k=1}^{K} \hat{v}_k W_{jk}\right)$$

$$p(v_{ik} = 1 \mid \mathbf{h}) = \frac{\exp\!\left(b_k + \sum_{j=1}^{F} h_j W_{jk}\right)}{\sum_{q=1}^{K} \exp\!\left(b_q + \sum_{j=1}^{F} h_j W_{jq}\right)}$$
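A minimal sketch of these two conditionals, with an illustrative dictionary size and word-count vector (all names and sizes are assumptions for the example):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

# Illustrative Replicated Softmax parameters.
rng = np.random.default_rng(2)
K, F = 10, 4                          # dictionary size K, hidden units F
W = 0.01 * rng.standard_normal((F, K))
b = np.zeros(K)                       # word biases b_k
a = np.zeros(F)                       # hidden biases a_j

v_hat = rng.integers(0, 3, size=K).astype(float)  # word counts v_hat_k
M = v_hat.sum()                                   # document length

def p_h_given_V(v_hat, M):
    """p(h_j = 1 | V) = g(M a_j + sum_k v_hat_k W_jk). Note the bias scaled by M."""
    return sigmoid(M * a + W @ v_hat)

def p_word_given_h(h):
    """Softmax over the dictionary: p(v_ik = 1 | h), shared for every word slot i."""
    return softmax(b + h @ W)

h = (rng.random(F) < p_h_given_V(v_hat, M)).astype(float)
word_probs = p_word_given_h(h)   # distribution over the K dictionary words
```

Scaling the hidden bias by the document length $M$ is what lets a single set of softmax weights be "replicated" across documents of different sizes.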

Deep Boltzmann machines

A deep Boltzmann machine (DBM) has a sequence of layers of hidden units. There are only connections between adjacent hidden layers, and between the visible units and the units in the first hidden layer. The energy function of the system adds layer-interaction terms to the energy function of the restricted Boltzmann machine; for a DBM with three hidden layers it is defined by

$$E(\mathbf{v}, \mathbf{h}; \theta) = -\sum_{i=1}^{D} \sum_{j=1}^{F_1} W_{ij}^{(1)} v_i h_j^{(1)} - \sum_{j=1}^{F_1} \sum_{l=1}^{F_2} W_{jl}^{(2)} h_j^{(1)} h_l^{(2)} - \sum_{l=1}^{F_2} \sum_{p=1}^{F_3} W_{lp}^{(3)} h_l^{(2)} h_p^{(3)} - \sum_{i=1}^{D} b_i v_i - \sum_{j=1}^{F_1} b_j^{(1)} h_j^{(1)} - \sum_{l=1}^{F_2} b_l^{(2)} h_l^{(2)} - \sum_{p=1}^{F_3} b_p^{(3)} h_p^{(3)}$$

The joint distribution is

$$P(\mathbf{v}; \theta) = \frac{1}{Z(\theta)} \sum_{\mathbf{h}^{(1)}, \mathbf{h}^{(2)}, \mathbf{h}^{(3)}} \exp\!\left(-E(\mathbf{v}, \mathbf{h}^{(1)}, \mathbf{h}^{(2)}, \mathbf{h}^{(3)}; \theta)\right)$$
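As a concrete reading of the energy function above, here is a minimal sketch that evaluates it for one configuration (array names, sizes, and the seed are illustrative assumptions):

```python
import numpy as np

def dbm_energy(v, h1, h2, h3, W1, W2, W3, b, b1, b2, b3):
    """E(v, h; theta): pairwise terms between adjacent layers plus bias terms."""
    return -(v @ W1 @ h1          # visible-to-first-hidden interactions
             + h1 @ W2 @ h2       # first-to-second hidden interactions
             + h2 @ W3 @ h3       # second-to-third hidden interactions
             + b @ v + b1 @ h1 + b2 @ h2 + b3 @ h3)

rng = np.random.default_rng(3)
D, F1, F2, F3 = 6, 5, 4, 3
W1 = 0.01 * rng.standard_normal((D, F1))
W2 = 0.01 * rng.standard_normal((F1, F2))
W3 = 0.01 * rng.standard_normal((F2, F3))
b, b1, b2, b3 = np.zeros(D), np.zeros(F1), np.zeros(F2), np.zeros(F3)

v = rng.integers(0, 2, D).astype(float)
h1 = rng.integers(0, 2, F1).astype(float)
h2 = rng.integers(0, 2, F2).astype(float)
h3 = rng.integers(0, 2, F3).astype(float)
print(dbm_energy(v, h1, h2, h3, W1, W2, W3, b, b1, b2, b3))
```

Note that only adjacent layers interact: there is no $W$ term coupling, say, $\mathbf{h}^{(1)}$ and $\mathbf{h}^{(3)}$ directly.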

Multimodal deep Boltzmann machines

A multimodal deep Boltzmann machine is an image-text bi-modal DBM in which the image pathway is modeled as a Gaussian-Bernoulli DBM and the text pathway as a Replicated Softmax DBM; each pathway has two hidden layers and one visible layer. The two pathways are joined by an additional hidden layer on top. The joint distribution over the multimodal inputs is defined as

$$P(\mathbf{v}^m, \mathbf{v}^t; \theta) = \sum_{\mathbf{h}^{(2m)}, \mathbf{h}^{(2t)}, \mathbf{h}^{(3)}} P(\mathbf{h}^{(2m)}, \mathbf{h}^{(2t)}, \mathbf{h}^{(3)}) \left( \sum_{\mathbf{h}^{(1m)}} P(\mathbf{v}^m, \mathbf{h}^{(1m)} \mid \mathbf{h}^{(2m)}) \right) \left( \sum_{\mathbf{h}^{(1t)}} P(\mathbf{v}^t, \mathbf{h}^{(1t)} \mid \mathbf{h}^{(2t)}) \right)$$

$$= \frac{1}{Z_M(\theta)} \sum_{\mathbf{h}} \exp\Bigg( \sum_{kj} W_{kj}^{(1t)} v_k^t h_j^{(1t)} + \sum_{jl} W_{jl}^{(2t)} h_j^{(1t)} h_l^{(2t)} + \sum_k b_k^t v_k^t + M \sum_j b_j^{(1t)} h_j^{(1t)} + \sum_l b_l^{(2t)} h_l^{(2t)} - \sum_i \frac{(v_i^m - b_i^m)^2}{2\sigma_i^2} + \sum_{ij} \frac{v_i^m}{\sigma_i} W_{ij}^{(1m)} h_j^{(1m)} + \sum_{jl} W_{jl}^{(2m)} h_j^{(1m)} h_l^{(2m)} + \sum_j b_j^{(1m)} h_j^{(1m)} + \sum_l b_l^{(2m)} h_l^{(2m)} + \sum_{lp} W_{lp}^{(3t)} h_l^{(2t)} h_p^{(3)} + \sum_{lp} W_{lp}^{(3m)} h_l^{(2m)} h_p^{(3)} + \sum_p b_p^{(3)} h_p^{(3)} \Bigg)$$

The conditional distributions over the visible and hidden units are

$$p(h_j^{(1m)} = 1 \mid \mathbf{v}^m, \mathbf{h}^{(2m)}) = g\!\left(\sum_{i=1}^{D} W_{ij}^{(1m)} \frac{v_i^m}{\sigma_i} + \sum_{l=1}^{F_2^m} W_{jl}^{(2m)} h_l^{(2m)} + b_j^{(1m)}\right)$$

$$p(h_l^{(2m)} = 1 \mid \mathbf{h}^{(1m)}, \mathbf{h}^{(3)}) = g\!\left(\sum_{j=1}^{F_1^m} W_{jl}^{(2m)} h_j^{(1m)} + \sum_{p=1}^{F_3} W_{lp}^{(3m)} h_p^{(3)} + b_l^{(2m)}\right)$$

$$p(h_j^{(1t)} = 1 \mid \mathbf{v}^t, \mathbf{h}^{(2t)}) = g\!\left(\sum_{k=1}^{K} W_{kj}^{(1t)} \hat{v}_k^t + \sum_{l=1}^{F_2^t} W_{jl}^{(2t)} h_l^{(2t)} + M b_j^{(1t)}\right)$$

$$p(h_l^{(2t)} = 1 \mid \mathbf{h}^{(1t)}, \mathbf{h}^{(3)}) = g\!\left(\sum_{j=1}^{F_1^t} W_{jl}^{(2t)} h_j^{(1t)} + \sum_{p=1}^{F_3} W_{lp}^{(3t)} h_p^{(3)} + b_l^{(2t)}\right)$$

$$p(h_p^{(3)} = 1 \mid \mathbf{h}^{(2)}) = g\!\left(\sum_{l=1}^{F_2^m} W_{lp}^{(3m)} h_l^{(2m)} + \sum_{l=1}^{F_2^t} W_{lp}^{(3t)} h_l^{(2t)} + b_p^{(3)}\right)$$

$$p(v_{ik}^t = 1 \mid \mathbf{h}^{(1t)}) = \frac{\exp\!\left(\sum_{j=1}^{F_1^t} h_j^{(1t)} W_{jk}^{(1t)} + b_k^t\right)}{\sum_{q=1}^{K} \exp\!\left(\sum_{j=1}^{F_1^t} h_j^{(1t)} W_{jq}^{(1t)} + b_q^t\right)}$$

$$p(v_i^m \mid \mathbf{h}^{(1m)}) \sim \mathcal{N}\!\left(\sigma_i \sum_{j=1}^{F_1^m} W_{ij}^{(1m)} h_j^{(1m)} + b_i^m,\; \sigma_i^2\right)$$
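The top-layer conditional is the point where the two pathways are fused. A small sketch of just that step (array names and sizes are illustrative assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sizes: F2m/F2t second-layer units per pathway, F3 joint units.
rng = np.random.default_rng(4)
F2m, F2t, F3 = 5, 6, 4
W3m = 0.01 * rng.standard_normal((F2m, F3))  # image-pathway-to-joint weights
W3t = 0.01 * rng.standard_normal((F2t, F3))  # text-pathway-to-joint weights
b3 = np.zeros(F3)

def p_h3_given_h2(h2m, h2t):
    """p(h_p^(3) = 1 | h^(2)): sums evidence from both modality pathways."""
    return sigmoid(h2m @ W3m + h2t @ W3t + b3)

h2m = rng.integers(0, 2, F2m).astype(float)  # image pathway, 2nd hidden layer
h2t = rng.integers(0, 2, F2t).astype(float)  # text pathway, 2nd hidden layer
joint = p_h3_given_h2(h2m, h2t)              # fused multimodal representation
```

When one modality is missing, its term simply drops out of this sum, which is what allows the model to infer the joint layer, and from it the missing modality, from the observed pathway alone.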

Inference and learning

Exact maximum likelihood learning in this model is intractable, but approximate learning of DBMs can be carried out by using a variational approach, where mean-field inference is used to estimate data-dependent expectations and an MCMC based stochastic approximation procedure is used to approximate the model’s expected sufficient statistics.
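As one concrete illustration of the variational step, a mean-field update for the top layers could look like the following fixed-point iteration. This is a heavily simplified sketch under assumed variable names, not the paper's exact procedure: it updates only the layers around the joint layer and omits bias and bottom-up terms for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(5)
F2m, F2t, F3 = 5, 6, 4
W3m = 0.01 * rng.standard_normal((F2m, F3))
W3t = 0.01 * rng.standard_normal((F2t, F3))
b3 = np.zeros(F3)

mu2m = np.full(F2m, 0.5)   # mean-field parameters for h^(2m)
mu2t = np.full(F2t, 0.5)   # mean-field parameters for h^(2t)
mu3 = np.full(F3, 0.5)     # mean-field parameters for h^(3)

for _ in range(50):
    # Update the joint layer given the current pathway estimates, then update
    # each pathway's top layer given the joint layer; a full implementation
    # would sweep over every layer of both DBMs until convergence.
    mu3 = sigmoid(mu2m @ W3m + mu2t @ W3t + b3)
    mu2m = sigmoid(W3m @ mu3)   # bottom-up and bias terms omitted for brevity
    mu2t = sigmoid(W3t @ mu3)
```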

Application

Multimodal deep Boltzmann machines have been successfully used in classification and missing-data retrieval. Their classification accuracy outperforms support vector machines, latent Dirichlet allocation, and deep belief networks, whether the models are tested on data with both image and text modalities or with a single modality. They can also predict a missing modality given the observed ones with reasonably good precision.
