Yiannis Aloimonos
Professor
4475 A.V. Williams Building
Phone: 301.405.1743
Fax: 301.314.9115

Research Interests 

Active vision: the study of the mechanisms responsible for recovering three-dimensional information from image sequences obtained by an active observer. Application of descriptions of visual space and space-time to analyzing and synthesizing visual data: eye and camera design, video editing and manipulation, graphics and virtualized reality, visualization, sensor networks, robots/navigation and the study of biological vision.

Background 

Professor Aloimonos holds a Ph.D. in Computer Science from the University of Rochester.

His research is devoted to the principles governing the design and analysis of real-time systems that possess perceptual capabilities, for the purpose of both explaining animal vision and designing seeing machines. These capabilities concern the system's ability to control its own motion and the motion of its parts using visual input (navigation and manipulation), and its ability to break its environment into a set of task-relevant categories and recognize them (categorization and recognition).

The work is being done in the framework of Active and Purposive Vision, a paradigm also known as Animate or Behavioral Vision. In simple terms, this approach holds that Vision has a purpose, a goal: action, whether theoretical, practical, or aesthetic. When Vision is considered in conjunction with action, it becomes easier, because the space-time descriptions the system needs to derive are not general-purpose but purposive. This means that these descriptions are good for restricted sets of tasks, such as tasks related to navigation, manipulation, and recognition.

If Vision is the process of deriving purposive space-time descriptions as opposed to general ones, one is faced with the difficult question of where to start: with which descriptions? Understanding moving images is a capability shared by all "seeing" biological systems. It was therefore decided to start with descriptions that involve time. Another reason for this is that motion problems are purely geometric, and understanding the geometry amounts to solving the problems. This led to a consideration of the problems of navigation. Within navigation, one again faces the same question: in which order should navigational capabilities be developed? This led to the development of a synthetic approach, according to which the order of development is related to the complexity of the underlying model. The appropriate starting point is the capability of understanding self-motion. By performing a geometric analysis of motion fields, global patterns of partial aspects of motion fields were found to be associated with particular 3D motions. This gave rise to a series of algorithms for recovering egomotion through pattern matching. The qualitative nature of the algorithms, in conjunction with the well-defined nature of the input (the input is the normal flow, i.e., the component of the flow along the gradient of the image), makes the solution stable against noise.
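
As an illustration of the normal-flow input just described, the following sketch recovers the normal flow field from two grayscale frames via the brightness-constancy constraint (Ix*u + Iy*v + It = 0), whose solution along the gradient direction is exactly the normal flow. The finite-difference derivatives and the function name are illustrative assumptions, not the algorithms referenced above.

    import numpy as np

    def normal_flow(frame0, frame1, eps=1e-6):
        """Normal flow: the component of image motion along the intensity
        gradient, the only component measurable at a single pixel (the
        aperture problem hides the rest). Illustrative sketch only."""
        I0 = frame0.astype(np.float64)
        I1 = frame1.astype(np.float64)
        Iy, Ix = np.gradient(I0)            # spatial derivatives (rows = y)
        It = I1 - I0                        # temporal derivative
        grad_mag = np.sqrt(Ix**2 + Iy**2)
        nf_mag = -It / (grad_mag + eps)     # signed magnitude along the gradient
        nx, ny = Ix / (grad_mag + eps), Iy / (grad_mag + eps)
        return nf_mag * nx, nf_mag * ny     # normal flow vector field (u_n, v_n)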

Other problems, higher in the hierarchy of navigation, are independent motion detection, estimation of ordinal depth, and the learning of space. To illustrate these topics, consider the case of ordinal depth. Traditionally, systems were expected to estimate metric depth, but such information is more than a system that merely needs to navigate successfully requires. Many tasks can be achieved using an ordinal depth representation, which can be extracted without knowledge of the exact image motion or displacement. Recent studies on visual space distortion have prompted a new framework for understanding visual shape, and a study of a spectrum of shape representations lying between the projective and Euclidean layers is currently underway.
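
To make the ordinal idea concrete: under pure camera translation, the magnitude of image motion at a pixel is proportional to its distance from the focus of expansion divided by the depth of the corresponding scene point, so depth order follows from flow magnitudes alone, with no metric reconstruction. The sketch below assumes pure forward translation and a known focus of expansion; it is an illustration, not the representation studied here.

    import numpy as np

    def ordinal_depth_order(points, flow_mags, foe):
        """Rank image points from nearest to farthest under pure translation.
        |flow(p)| ~ |p - foe| / Z(p), so |p - foe| / |flow(p)| is proportional
        to the depth Z. Assumes the focus of expansion (foe) is known, e.g.
        from the egomotion patterns above. Illustrative sketch only."""
        points = np.asarray(points, dtype=float)
        r = np.linalg.norm(points - np.asarray(foe, dtype=float), axis=1)
        depth_proxy = r / np.asarray(flow_mags, dtype=float)
        return np.argsort(depth_proxy)      # indices of points, nearest first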

The learning of space can be based on the principle of learning routes. A system knows the space around it if it can successfully visit a set of locations. With more memory available, relationships between the representations of different routes give rise to partial geocentric maps.
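
One concrete reading of this principle: store each learned route as a sequence of recognizable locations and merge routes into a graph, so that locations shared between routes stitch them into a partial geocentric map. The sketch below assumes such a discrete-location abstraction; it is not a description of a particular system.

    from collections import defaultdict

    class RouteMap:
        """Partial map assembled from learned routes: routes sharing a
        location become connected, yielding a geocentric graph. Sketch."""
        def __init__(self):
            self.adj = defaultdict(set)     # location -> adjacent locations

        def learn_route(self, route):
            for a, b in zip(route, route[1:]):
                self.adj[a].add(b)
                self.adj[b].add(a)          # assume routes run both ways

        def can_visit(self, start, goal):
            """The system 'knows' the space if it can reach goal from start."""
            seen, stack = {start}, [start]
            while stack:
                node = stack.pop()
                if node == goal:
                    return True
                unvisited = self.adj[node] - seen
                seen |= unvisited
                stack.extend(unvisited)
            return False

    # Two routes sharing a location combine into one partial map:
    m = RouteMap()
    m.learn_route(["door", "hall", "atrium"])
    m.learn_route(["atrium", "stairs", "lab"])
    assert m.can_visit("door", "lab")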

In hand-eye coordination, the concept of a perceptual kinematic map has been introduced. This is a map from the robot's joints to image features. Currently under investigation is the problem of creating a classification of the singularities of this map.
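
Such singularities can be probed numerically, as sketched below under assumptions of my own (a caller-supplied forward map f from joint angles to image features): estimate the Jacobian of the map by finite differences and flag configurations where it loses rank, i.e., where some joint motion produces no first-order feature motion.

    import numpy as np

    def jacobian(f, q, h=1e-5):
        """Finite-difference Jacobian of a perceptual kinematic map
        f: joint angles (n,) -> image features (m,). Illustrative."""
        q = np.asarray(q, dtype=float)
        f0 = np.asarray(f(q), dtype=float)
        J = np.zeros((f0.size, q.size))
        for i in range(q.size):
            dq = np.zeros_like(q)
            dq[i] = h
            J[:, i] = (np.asarray(f(q + dq), dtype=float) - f0) / h
        return J

    def is_singular(f, q, tol=1e-8):
        """A configuration q is singular when the Jacobian drops rank:
        some joint velocity is invisible in the image features."""
        J = jacobian(f, q)
        return np.linalg.matrix_rank(J, tol=tol) < min(J.shape)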

The work on active, anthropomorphic vision led to the study of fixation and the development of TALOS, a system that implements dynamic fixation. Since fixation is a principle of Active Vision, and fixating observers build representations relative to fixations, it is important to solve fixation in real time and demonstrate it in hardware. TALOS consists of a binocular head/eye system augmented with additional sensors, and is designed to perform fixation in real time while it is moving.
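
A minimal reading of dynamic fixation is a feedback loop that drives the image position of the fixated target toward the optical center while the head moves. The proportional controller below is a sketch with hypothetical detect/move interfaces; it is not TALOS's actual control law.

    import numpy as np

    def fixation_step(target_xy, gain=0.5):
        """One step of a proportional fixation controller. target_xy is the
        target's image position in pixels relative to the optical center;
        the returned (pan, tilt) correction turns the gaze toward the target,
        shrinking the error each frame. Hypothetical interface; sketch only."""
        err = np.asarray(target_xy, dtype=float)
        pan, tilt = gain * err              # turn toward the target
        return pan, tilt

    # Closed loop against hypothetical hardware:
    #     while tracking:
    #         pan, tilt = fixation_step(detect_target(frame))
    #         head.move(pan, tilt)          # binocular head nudges its gaze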

The ideas of Purposive Vision have led to the study of Intelligence as a purposive activity. A four-valued logic is being developed for handling reasoning in a system of interacting purposive agents.
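
The profile does not specify the logic, but a standard four-valued system for pooling possibly conflicting reports from multiple agents is Belnap's FOUR (true, false, both, neither). The sketch below implements its connectives as an assumed illustration, encoding each value as a pair of bits (evidence for, evidence against).

    # Belnap's four values as (evidence-for, evidence-against) bits.
    # Assumed illustration; the logic under development here may differ.
    TRUE, FALSE, BOTH, NEITHER = (1, 0), (0, 1), (1, 1), (0, 0)

    def neg(a):
        t, f = a
        return (f, t)                       # negation swaps the evidence roles

    def conj(a, b):
        # True needs support from both conjuncts; false from either.
        return (a[0] & b[0], a[1] | b[1])

    def disj(a, b):
        return (a[0] | b[0], a[1] & b[1])

    def combine(a, b):
        """Pool evidence from two agents: conflicting reports yield BOTH
        instead of collapsing the system into contradiction."""
        return (a[0] | b[0], a[1] | b[1])

    assert combine(TRUE, FALSE) == BOTH     # two agents disagree
    assert neg(BOTH) == BOTH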

Related News 

Martins, Gupta, Aloimonos speak at 'Fostering Excellence in Robotics'
Workshop at American Control Conference introduced high school students to robotics. June 20, 2013

Aloimonos interviewed by All Things Considered
Research answers the question, "Does an orchestra play better with a conductor?" December 2, 2012

Telluride newspaper writes about Neuromorphic Cognition Engineering Workshop
ISR faculty, staff, students key to the workshop's planning and organization. July 18, 2011

Ching Teo and Yezhou Yang win in Qualcomm Innovation Fellowship competition
Teo is advised by ISR affiliate faculty member Yiannis Aloimonos. May 24, 2011

ISR students Datta, Teo are part of finalist teams in Qualcomm competition
Qualcomm Innovation Fellowship competition recognizes outstanding Ph.D. students. March 30, 2011

Aloimonos receives NSF grant for robots with vision that find objects
The research will allow robots to detect and find objects with Active Vision. September 15, 2010

Yiannis Aloimonos becomes ISR affiliate faculty member
Professor's research interests are centered in active vision. August 7, 2009

Toshiba's Yosuke Okamoto begins six-month visit
Engineer will conduct computer vision, image processing research. October 18, 2007

Honda Visiting Scientist Morimichi Nishigaki gives final presentation
Engineer summarizes his research into 'Ego-Motion Estimation Using Fewer Image Feature Points.' July 1, 2004