VALSE Webinar

Visual Information Encoding and Decoding Based on Multi-View Deep Generative Models. 何晖光 (Huiguang He), 2017-5-24, Institute of Automation, Chinese Academy of Sciences; CAS Center for Excellence in Brain Science and Intelligence Technology

Transcript of VALSE webinar slides

Page 1: valser.org/webinar/slide/slides/20170524/VALSE-0524-何晖光.pdf

Visual Information Encoding and Decoding Based on Multi-View Deep Generative Models

何晖光 (Huiguang He)

2017-5-24, Institute of Automation, Chinese Academy of Sciences

CAS Center for Excellence in Brain Science and Intelligence Technology

Page 2:

Research Background

Domestic and International Research Status

Research Content and Results

Summary and Outlook

Page 3:

Research Background

It is important to study brain-inspired intelligence

Computational neuroscience; Brain-machine interfaces

Facebook; Elon Musk's Neuralink

Page 4:

Research Background

Studying the visual encoding and decoding mechanisms of the human brain with fMRI

Page 5:

Research Background

Visual encoding and decoding based on fMRI signals

Encoding f(R|S): predict the response R from the stimulus S. Decoding f(S|R): recover the stimulus S from the response R.

[Diagram: encoding runs stimulus → nonlinear mapping → feature space → linear optimization → predicted BOLD response; decoding runs observed BOLD response → linear optimization → feature space → nonlinear mapping → stimulus.]

Page 6:

Research Background

brain encoding

Page 7:

Research Background

brain decoding

Page 8:

Research Background

What is brain encoding and decoding?

Encoding model: a model that predicts brain activity from external stimulus

Decoding model: a model that predicts external stimulus from brain activity

Page 9:

Research Status

Summary of early work on visual information encoding and decoding

Page 10:

Object classification:

Haxby et al., 2001 (Science)

Research Status

Wang CM et al., 2012 (J. Neural Eng.)

Fusiform Face Area

Parahippocampal Place Area

Page 11:

120 images: 92%; 1,000 images: 82%

Object identification:

Kay et al., 2008 (Nature)

Visual information reconstruction:

Miyawaki, Y. et al., 2008 (Neuron)

Research Status

Page 12:

Visual information reconstruction:

Nishimoto et al., 2011 (Current Biology)

Research Status

Page 13:

Gallant CVPR 2015

Semantic reconstruction:

Research Status

Page 14:

Research Content: Convolutional Auto-Encoder for Visual Information Encoding and Decoding

Pathway for brain encoding

Page 15:

Research Content: Convolutional Auto-Encoder for Visual Information Encoding and Decoding

Pathway for brain encoding

Obj-Encoding = { reconstruction loss (image), regression loss (BOLD response), weight penalty (L1 or L2) }
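The composite objective can be sketched as a plain function; the trade-off weights `alpha` and `beta` below are hypothetical hyperparameters, not values stated on the slide:

```python
import numpy as np

def obj_encoding(x, x_rec, r, r_pred, w, alpha=1.0, beta=1e-3):
    """Obj-Encoding: reconstruction loss (image) + regression loss (BOLD) + weight penalty."""
    rec = np.mean((x - x_rec) ** 2)       # image reconstruction term
    reg = np.mean((r - r_pred) ** 2)      # BOLD-response regression term
    pen = beta * np.sum(w ** 2)           # L2 penalty; for L1 use np.sum(np.abs(w))
    return rec + alpha * reg + pen

x = np.ones((2, 4)); r = np.zeros((2, 3)); w = np.ones(5)
print(obj_encoding(x, x, r, r, w))        # perfect fit: only the penalty 1e-3 * 5 remains
```

The decoding objective on a later slide has the same shape, with the regression target swapped from the BOLD response to the feature map.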

Page 16:

Research Content: Convolutional Auto-Encoder for Visual Information Encoding and Decoding

Pathway for brain decoding

Page 17:

Research Content: Convolutional Auto-Encoder for Visual Information Encoding and Decoding

Pathway for brain decoding

Obj-Decoding = { reconstruction loss (image), regression loss (feature map), weight penalty (L1 or L2) }

Page 18:

Research Content: Reconstructing Images from Brain Signals

Main goal

Page 19:

Research Content: Reconstructing Images from Brain Signals

Linear or nonlinear regression model

Two-stage approach

Approach 1: Convolutional auto-encoder + Regression

Page 20:

Research Content: Reconstructing Images from Brain Signals

Linear or nonlinear model

But how can the similarity be kept?

Two-stage approach

Need to be similar!

Page 21:

Research Content: Reconstructing Images from Brain Signals

Unified training

Approach 2: Training convolutional auto-encoder and regression simultaneously

Obj = reconstruction loss (image) + regression loss (feature map) + weight penalty (L1 or L2)
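A toy illustration of the simultaneous training idea, using a linear "auto-encoder" stand-in: all three weight matrices are updated by gradient descent on the single summed objective, rather than fitting the auto-encoder first and the regression afterwards. The data, dimensions, and learning rate are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 16))             # flattened images (synthetic)
B = rng.normal(size=(200, 8))              # BOLD responses (synthetic)

We = 0.1 * rng.normal(size=(16, 4))        # encoder weights
Wd = 0.1 * rng.normal(size=(4, 16))        # decoder weights
Wr = 0.1 * rng.normal(size=(8, 4))         # regression: BOLD -> feature map
lam, lr = 1e-4, 0.05

def loss():
    Z = X @ We
    rec = np.mean((X - Z @ Wd) ** 2)       # reconstruction loss (image)
    reg = np.mean((Z - B @ Wr) ** 2)       # regression loss (feature map)
    pen = lam * sum(np.sum(W ** 2) for W in (We, Wd, Wr))
    return rec + reg + pen

hist = [loss()]
for _ in range(300):
    Z = X @ We
    E1 = Z @ Wd - X                        # reconstruction error
    E2 = Z - B @ Wr                        # regression error
    gWd = 2 * Z.T @ E1 / E1.size + 2 * lam * Wd
    gWe = 2 * (X.T @ (E1 @ Wd.T) / E1.size + X.T @ E2 / E2.size) + 2 * lam * We
    gWr = -2 * B.T @ E2 / E2.size + 2 * lam * Wr
    We -= lr * gWe; Wd -= lr * gWd; Wr -= lr * gWr
    hist.append(loss())
```

Because the reconstruction and regression terms share the feature map Z, the joint updates keep the two pathways consistent, which is exactly the similarity the previous slide asks for.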

Page 22:

Research Content

encoding

decoding

Traditional encoding/decoding approaches are unidirectional

Page 23:

Research Content

encoding

decoding

Would a bidirectional approach be better?

Page 24:

Deep correlation analysis under auto-encoder constraints

Research Content: Multi-View Representation Learning Model

Page 25:

Research Content: Multi-View Generative Auto-Encoder Model

Proposed method: Multi-View Deep Generative Model

Shared Representation

View 1

View 2

Key idea: learn the common latent features, then generate the stimuli and the BOLD responses simultaneously

How to train the model (learn the model parameters)?

Page 26:

Research Content: Multi-View Generative Auto-Encoder Model

Variational Auto-Encoder (VAE)

The VAE is an interesting generative model that combines ideas from deep learning with statistical inference.

It can be used to learn a low-dimensional representation Z of high-dimensional data X, such as images.

In contrast to standard auto-encoders, X and Z are random variables.

Kingma and Welling. "Auto-Encoding Variational Bayes." International Conference on Learning Representations (ICLR), 2014. arXiv:1312.6114 [stat.ML].
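A minimal numeric sketch of one VAE forward pass, assuming linear encoder/decoder maps (real VAEs use neural networks; all weights here are hypothetical). It shows what makes the VAE different from a plain auto-encoder: Z is sampled rather than deterministic, and the loss adds a KL term to the reconstruction term.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                  # mini-batch of data X

# Hypothetical linear encoder/decoder with a 2-D latent Z.
W_mu = 0.5 * rng.normal(size=(8, 2))
W_lv = 0.1 * rng.normal(size=(8, 2))
W_dec = 0.5 * rng.normal(size=(2, 8))

mu, logvar = x @ W_mu, x @ W_lv              # q(z|x) = N(mu, diag(exp(logvar)))
eps = rng.normal(size=mu.shape)
z = mu + np.exp(0.5 * logvar) * eps          # z is a random variable, unlike a plain AE code
x_rec = z @ W_dec                            # mean of p(x|z)

recon = np.mean(np.sum((x - x_rec) ** 2, axis=1))   # ~ -log p(x|z), up to constants
kl = np.mean(0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=1))
neg_elbo = recon + kl                        # quantity minimized during training
```

Training would repeat this pass over batches and descend the gradient of `neg_elbo` with respect to the encoder and decoder weights.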

Page 27:

Research Content: Multi-View Generative Auto-Encoder Model

Principal idea: the generative network

One example: we wish to learn θ from the N training observations x(i), i = 1, …, N

p(X|Z)

Page 28:

Research Content: Multi-View Generative Auto-Encoder Model

A model for the generative (decoder) network

Page 29:

Research Content: Multi-View Generative Auto-Encoder Model

Training: maximize the likelihood p(x) of the training data

Problem: p(x) cannot be calculated directly (the marginalization over z is intractable)

Solution:
• MCMC (too costly)
• Approximate p(z|x) with q(z|x)
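Writing the approximation out, the standard decomposition from Kingma and Welling is:

```latex
\log p_\theta(x)
  = \underbrace{\mathbb{E}_{q_\phi(z|x)}\!\left[\log p_\theta(x|z)\right]
    - \mathrm{KL}\!\left(q_\phi(z|x)\,\|\,p(z)\right)}_{\text{ELBO }\mathcal{L}(\theta,\phi;x)}
  + \mathrm{KL}\!\left(q_\phi(z|x)\,\|\,p_\theta(z|x)\right)
```

Since the last KL term is non-negative, maximizing the ELBO maximizes a lower bound on log p(x) while simultaneously pushing q(z|x) toward the intractable posterior p(z|x).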

Training the generative network

Page 30:

Research Content: Multi-View Generative Auto-Encoder Model

A model for the encoder network

Page 31:

Research Content: Multi-View Generative Auto-Encoder Model

How to build a multi-view VAE?

Learning the parameters φ and θ via backpropagation
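Backpropagation through the sampling step relies on the reparameterization trick:

```latex
z = \mu_\phi(x) + \sigma_\phi(x) \odot \varepsilon,
\qquad \varepsilon \sim \mathcal{N}(0, I)
```

For a given ε, z is a deterministic, differentiable function of φ, so gradients of the ELBO with respect to both φ and θ can flow through the sampled z.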

Page 32:

Research Content: Multi-View Generative Auto-Encoder Model

Multi-view VAE

Left part: multi-layer perceptrons (MLPs) or convolutional neural networks (CNNs)

Right part: DNNs or just a linear model (to avoid over-fitting and give better interpretability)

Page 33:

Deep generative multi-view model (DGMM)

Research Content: Multi-View Generative Auto-Encoder Model

Prior distribution of the latent variable Z

Likelihood of the image view X

Likelihood of the brain-signal view Y

Prior distribution of the auxiliary variables
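One plausible instantiation of the four ingredients above (prior on Z, likelihoods of the two views, prior on the auxiliary variables), in the spirit of BCCA-style models; the specific functional forms are assumptions for illustration, not read off the slide:

```latex
p(z)   = \mathcal{N}(z;\, 0,\, I)                        % prior on the latent variable Z
p(x|z) = \mathcal{N}\!\left(x;\, f_\theta(z),\, \sigma_x^2 I\right)   % image view X: deep nonlinear mean
p(y|z) = \mathcal{N}\!\left(y;\, B z,\, \sigma_y^2 I\right)           % brain view Y: linear mean
p(B)   = \textstyle\prod_j \mathcal{N}\!\left(b_j;\, 0,\, \eta^{-1} I\right)  % prior on auxiliary projection weights
```

The deep mean for the image view matches the "MLPs or CNNs" left part, while the linear mean for the brain view matches the interpretable right part of the previous slide.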

Page 34:

Deep generative multi-view model (DGMM)

Research Content: Multi-View Generative Auto-Encoder Model

Objective function, solved by maximum-likelihood estimation

Page 35:

Deep generative multi-view model (DGMM)

Research Content: Multi-View Generative Auto-Encoder Model

Page 36:

Deep generative multi-view model (DGMM)

Research Content: Multi-View Generative Auto-Encoder Model

Variational autoencoder

Variational autoencoder

Page 37:

Deep generative multi-view model (DGMM)

Research Content: Multi-View Generative Auto-Encoder Model

Brain encoding

Page 38:

Deep generative multi-view model (DGMM)

Research Content: Multi-View Generative Auto-Encoder Model

Brain decoding

Page 39:

Research Content: Experimental Datasets

Data set 1: ‘neuron’

fMRI: 797 brain voxels (V1)

Stimuli: 1,400 images (10 × 10 pixels)

Subject: 1

Page 40:

Research Content: Experimental Datasets

Data set 2: handwritten digits (6 & 9)

fMRI: 3,092 brain voxels (V1, V2, V3)

Stimuli: 100 images (28 × 28 pixels)

Subject: 1

Page 41:

Research Content: Experimental Datasets

Data set 3 : handwritten characters (B, R, A, I, N, S)

fMRI: 2,536 brain voxels (V1, V2)

Stimuli: 360 images (56 × 56 pixels)

Subjects: 3

Page 42:

Research Content: Comparison Methods

Miyawaki et al.: Visual image reconstruction from human brain activity using a combination of multiscale local image decoders. Neuron, 2008.

Bayesian CCA (BCCA) : Modular encoding and decoding models derived from Bayesian canonical correlation analysis. Neural Computation, 2013.

Deep Canonically Correlated Autoencoders (DCCAE) : On deep multi-view representation learning. ICML, 2015.

Deconvolutional Neural Network (De-CNN) : Neural encoding and decoding with deep learning for dynamic natural vision. arXiv:1608.03425v1, 2016.

Page 43:

• Experimental results (HBM 2017 Oral)

Research Results: Image Reconstruction

Page 44:

• Experimental results

Research Results: Image Reconstruction

Page 45:

• Experimental results

Research Results: Image Reconstruction

Page 46:

• Experimental results

Research Results: Image Reconstruction

Page 47:

Research Results: Quantitative Evaluation of Reconstruction

Experimental results on three fMRI datasets

Page 48:

Research Results: Extensibility of the Framework

Supervised encoding and decoding

Page 49:

Research Results: Extensibility of the Framework

Multi-subject encoding and decoding

Page 50:

• Proposed a bidirectional modeling framework based on a multi-view generative model

• Achieved strong performance on image reconstruction (information decoding)

• Because fMRI samples for natural-image stimuli are scarce, reconstruction of complex natural scenes is still unsatisfactory

• The current encoding/decoding is static; the next step is dynamic encoding/decoding, e.g. variational RNNs

• Encoding/decoding methods can borrow the dual-learning ideas from machine translation and image-to-image translation

• Try other types of deep generative models, such as GANs

• Combining GAN and VAE is also worth trying

Summary and Outlook

Page 51:

That’s interesting work with significant implications. The ability to reconstruct brain images is an important stepping stone in the work to create better brain-machine interfaces.

—— MIT Technology Review

Page 52:

References:

Changde Du, Changying Du, Huiguang He*. Sharing deep generative representation for perceived image reconstruction from human brain activity, 2017. https://arxiv.org/abs/1704.07575

About the first author:

杜长德 (Changde Du), Ph.D. candidate, Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences

[email protected]

Page 53:

Thank you!

[email protected]