FIAS . Impressum . Privacy

FIAS Bernstein Workshop I: Sensing and deciding in time

03 November 2010, FIAS lecture hall 0.100, Frankfurt, Germany

The workshop is meant as a day of tutorial-style talks and intense discussions, with the
goal of making progress on the target vision architecture within the current Bernstein
Focus Neurotechnology project. The focus is on theoretical models of sensing and
deciding across time in vision.



9:00-9:15
Christoph von der Malsburg: Welcome and Introduction


9:15-10:05
Christos Dimitrakakis: Statistical decision problems

10:05-10:25
Coffee Break

10:25-11:15
Constantin Rothkopf: Introduction to probabilistic models in time

11:15-12:05
Rudra Hota: Democratic Cue Integration Based Segmentation

12:05-13:15
Lunch Break

13:15-14:15
Jörg Lücke and Zhenwen Dai: Probabilistic Mixture Models and Their Applications to Object Learning

14:15-14:45
Coffee Break

14:45-15:35
Thomas Weisswange: Development of cue integration through reinforcement learning

15:35-16:25
Pramod Chandrashekhariah: Current usage of features in computer vision

16:25-17:00
Discussion



Presenter: Christos Dimitrakakis
Title: Statistical decision problems
Abstract: Many, though not all, decision-making problems can be
formulated in terms of statistical decision theory. Starting from the
fundamentals of statistical inference and utility theory, we shall
outline a number of decision problems and their optimal solutions in
a statistical framework. Our main focus will be the expression of state
and model estimation in Markov processes (such as hidden Markov models
and the Kalman filter) as special decision-theoretic problems and the
optimal combination of multiple sources of information to make decisions.
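As a toy illustration of that last point, independent Gaussian estimates combine optimally by precision (inverse-variance) weighting. The following sketch uses invented numbers, not material from the talk:

```python
import numpy as np

def fuse(mus, sigmas):
    """Precision-weighted combination of independent Gaussian estimates.

    Each source i reports mean mus[i] with standard deviation sigmas[i];
    the fused estimate weights each mean by its precision 1/sigma^2.
    """
    mus = np.asarray(mus, dtype=float)
    precisions = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    mu = np.sum(precisions * mus) / np.sum(precisions)
    sigma = np.sqrt(1.0 / np.sum(precisions))
    return mu, sigma

# fuse a reliable estimate (sigma=1) with a noisier one (sigma=2)
mu, sigma = fuse([1.0, 3.0], [1.0, 2.0])
```

The fused mean lands closer to the more reliable source, and the fused uncertainty is smaller than either source alone.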


Presenter: Constantin Rothkopf
Title: Introduction to probabilistic models in time
Abstract: We will present and derive basic probabilistic models in time
and consider fundamental inference procedures. Further models that have
been popular for applications in computer vision, robotics, and related
fields, such as input-output HMMs (ioHMM), hierarchical HMMs (hHMM), and
switching Kalman filters (sKF), are briefly presented. The common theme
will be the estimation of latent variables given observations over time
and the learning of the model's parameters. Furthermore, a probabilistic
view of Democratic Cue Integration will be discussed.
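As one concrete instance of estimating latent variables from observations over time, a minimal HMM forward filter can be sketched as follows; the transition, emission, and prior numbers are invented for illustration:

```python
import numpy as np

# Two hidden states, two observation symbols (numbers are illustrative).
A = np.array([[0.9, 0.1],       # transition matrix A[i, j] = P(j | i)
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],       # emission matrix B[state, symbol]
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])       # initial state distribution

def forward_filter(obs):
    """Return P(state_t | obs_1..t) for each step t (normalized forward pass)."""
    belief = pi * B[:, obs[0]]
    belief /= belief.sum()
    beliefs = [belief]
    for o in obs[1:]:
        belief = (A.T @ belief) * B[:, o]   # predict, then weight by evidence
        belief /= belief.sum()
        beliefs.append(belief)
    return np.array(beliefs)

beliefs = forward_filter([0, 0, 1, 1, 1])
```

Early observations of symbol 0 favor state 0; the later run of symbol 1 shifts the posterior toward state 1, with the sticky transitions smoothing the switch.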


Presenter: Rudra Hota
Title: Democratic Cue Integration Based Segmentation
Abstract: Figure-ground separation in a video sequence is a problem of
great interest to the vision community for many real-world applications.
It involves many challenges, such as changes in illumination, shadows,
and occlusion. Because the variability of natural scenes is very high,
no single cue can serve as a reliable basis for segmentation, and cue
integration is therefore a promising approach. In this presentation we
discuss a probabilistic approach to unifying bottom-up and top-down
cues for moving-object separation, the advantages and limitations of
the approach, the practical challenges faced during development, an
analysis of the results, and the scope for further work.
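One simple probabilistic way to unify several per-pixel cue maps (a generic naive-Bayes fusion in log-odds space, not necessarily the presenter's method) can be sketched as follows; the cue maps and weights are invented for illustration:

```python
import numpy as np

def fuse_cue_maps(maps, weights):
    """Weighted log-odds fusion of foreground-probability maps.

    maps: array of shape (n_cues, H, W) with per-pixel P(foreground);
    weights: per-cue reliability weights of shape (n_cues,).
    """
    logodds = np.log(maps / (1.0 - maps))
    fused = np.tensordot(weights, logodds, axes=1)   # (H, W)
    return 1.0 / (1.0 + np.exp(-fused))              # back to probabilities

# two hypothetical per-pixel cue maps on a tiny 2x2 image
motion = np.array([[0.9, 0.2],
                   [0.8, 0.1]])
color = np.array([[0.7, 0.3],
                  [0.9, 0.4]])
fused = fuse_cue_maps(np.stack([motion, color]), np.array([1.0, 1.0]))
mask = fused > 0.5          # figure-ground decision per pixel
```

Where the cues agree, the fused probability is more extreme than either cue alone, which is the basic appeal of combining independent evidence.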


Presenter: Jörg Lücke and Zhenwen Dai
Title: Probabilistic Mixture Models and Their Applications to Object Learning
Abstract: We introduce the basic concept of mixture models as an elementary
class of probabilistic generative models. We define and discuss a
mixture of Gaussians model as the most elementary mixture model and
show how a learning algorithm can be derived using Expectation
Maximization. We then discuss more elaborate mixture models that have
been used to learn objects from visual data. A crucial feature of such
models is the use of class and location variables. We point out the
limitations of existing models and discuss recent developments
including our own work. Finally, we discuss how mixture models can
be combined with models for sequential data (HMMs etc).
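The elementary case described above, a mixture of Gaussians fit with Expectation Maximization, can be sketched in one dimension as follows (the data are synthetic, and the means are initialized at the data extremes for simplicity):

```python
import numpy as np

def em_gmm_1d(x, iters=50):
    """EM for a two-component 1-D mixture of Gaussians (illustrative sketch)."""
    mu = np.array([x.min(), x.max()])       # simple deterministic init
    var = np.full(2, np.var(x))
    w = np.full(2, 0.5)
    for _ in range(iters):
        # E-step: responsibilities r[n, j] = P(component j | x_n)
        d = (x[:, None] - mu[None, :]) ** 2
        r = w * np.exp(-0.5 * d / var) / np.sqrt(2.0 * np.pi * var)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights, means, and variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / nk
    return w, mu, var

# synthetic data: two well-separated clusters
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3.0, 1.0, 300), rng.normal(3.0, 1.0, 300)])
w, mu, var = em_gmm_1d(x)
```

On this data EM recovers means near -3 and 3 and roughly equal mixing weights; the object-learning models discussed in the talk build on this same E/M alternation with richer latent variables.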


Presenter: Thomas Weisswange
Title: Development of cue integration through reinforcement learning
Abstract: Average human behavior in cue combination tasks is well
predicted by Bayesian inference models. As this capability is acquired over
developmental timescales, the question arises, how humans may be able to
learn cue integration. Here we investigated whether reward dependent
learning that is well established at the computational, behavioral, and
neuronal levels could contribute to such a development. It is shown that a
model-free reinforcement learning algorithm can indeed learn to do cue
integration, i.e. weight uncertain cues according to their respective
inverse uncertainties. Additionally, the model implicitly learns when to
integrate two cues, resembling inference of the causes underlying its
sensory inputs. Testing our model on more realistic data, from audio and
video recordings with a robot head, we could show that it can also improve
the estimation of e.g. depth from multiple audio or video cues compared
with standard techniques.
Thus, reward mediated learning could be the driving force for human
development of cue integration and causal inference, as well as a helpful
tool for improving the performance of e.g. robotic applications.
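The core finding can be illustrated with a much simpler learner than the model in the talk: plain gradient ascent on a squared-error reward already drives a mixing weight toward the inverse-variance (Bayes-optimal) cue weighting. All numbers below are invented for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma1, sigma2 = 1.0, 2.0   # assumed cue noise levels
# optimal weight on cue 1 is sigma2**2 / (sigma1**2 + sigma2**2) = 0.8
w = 0.5                     # start with equal weighting
lr = 0.005

for _ in range(40000):
    target = rng.uniform(-5.0, 5.0)
    c1 = target + rng.normal(0.0, sigma1)   # reliable cue
    c2 = target + rng.normal(0.0, sigma2)   # noisy cue
    estimate = w * c1 + (1.0 - w) * c2
    # reward = -(estimate - target)**2; step along its gradient w.r.t. w
    w += lr * (-2.0 * (estimate - target) * (c1 - c2))
    w = min(max(w, 0.0), 1.0)
```

After training, w sits near 0.8, i.e. the reliable cue receives weight proportional to its inverse variance, without the learner ever being told the noise levels.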


Presenter: Pramod Chandrashekhariah
Title: Current usage of features in computer vision
Abstract: Features play an important role in the field of vision for
characterising an image or an object in the scene. Choosing the right
set of features that accounts for variability in the environment is
always a challenge. In this talk we will discuss some of the popular
features, their formulations, significance, limitations, and
applications, along with examples (SIFT, Gabor-jet, HOG, Haar, Wavelet,
Edgelet).
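As one worked example from this family, Haar-like features are differences of box sums that can be evaluated in constant time from an integral image. A minimal sketch, using an invented toy image:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, y, x, h, w):
    """Sum of the h-by-w rectangle with top-left corner (y, x), in O(1)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, y, x, h, w):
    """Two-rectangle Haar feature: bottom half minus top half.

    Responds strongly to horizontal intensity edges.
    """
    return box_sum(ii, y + h // 2, x, h // 2, w) - box_sum(ii, y, x, h // 2, w)

# toy 8x8 image: dark upper half, bright lower half
img = np.zeros((8, 8))
img[4:, :] = 1.0
ii = integral_image(img)
resp = haar_two_rect(ii, 0, 0, 8, 8)
```

The four-lookup `box_sum` is what makes these features cheap enough for exhaustive sliding-window detection, the setting in which Haar features became popular.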
