Imaging & Visualization Mini-Conference
Bridging the Gap between University Research and Business Applications
Wednesday, May 11th, 2005
Room C198, CUNY Graduate Center
5th Ave and 34th St, New York City
Panel Discussions
Moderator: Michael Grossberg
Panelists:
Ying-Li Tian (IBM)
Michael Chan (GE)
Robin Bargar (City Tech)
Anurag Mittal (Siemens)
Jeff Young (Leica)
Diego Socolinsky (Equinox)
IBM Smart Video Surveillance System
Dr. Yingli Tian, Research Staff Member, IBM T.J. Watson Research Center
Abstract:
The increasing need for sophisticated surveillance systems and
the move to digital surveillance infrastructure have transformed
surveillance into a large-scale data analysis and management challenge.
Smart surveillance systems use automatic image understanding techniques
to extract information from the surveillance data. While the majority
of research and commercial systems have focused on the information
extraction aspect of the challenge, very few systems have explored the
use of the extracted information in the context of search, retrieval,
data management, and investigation. The IBM smart surveillance system is
one of the few advanced surveillance systems that provides not only the
capability to automatically monitor a scene but also the capability to
manage the surveillance data, perform event-based retrieval, receive
real-time event alerts through standard web infrastructure, and extract
long-term statistical patterns of activity.
Recognizing Temporal Events in Video
Dr. Michael T. Chan, GE Global Research
Abstract:
We describe ongoing work on the recognition of temporal events in
video, with applications to surveillance. We use a probabilistic
framework based on Hidden Markov Models (HMMs) to represent both the
spatio-temporal relations between interacting objects in a scene and
the uncertainty in visual observations; the observables are semantic
spatial primitives encoded using prior knowledge about the specific
event types of interest. Abstracting continuous trajectories into
semantic relations enables better generalization to other scenes for
which little training data may be available. We demonstrate the
effectiveness of the approach using aerial video data and simulated
data. At the end of the talk, I will also highlight some technically
related work with other applications in security and
human-computer interaction.
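As a purely illustrative aside (not part of the talk), the sketch below shows, under stated assumptions, how a discrete HMM over symbolic spatial relations can score an observation sequence against competing event models. The relation vocabulary, the two event models ("meeting" and "pass_by"), and every probability in the sketch are hypothetical placeholders, not parameters or events from Dr. Chan's work.

    # Illustrative sketch only: a discrete HMM evaluated with the scaled
    # forward algorithm, where observations are symbolic spatial relations
    # between two tracked objects. All symbols and probabilities below are
    # hypothetical placeholders.
    import numpy as np

    # Hypothetical vocabulary of semantic spatial primitives
    RELATIONS = ["far_apart", "approaching", "adjacent", "receding"]
    REL_INDEX = {r: i for i, r in enumerate(RELATIONS)}

    def forward_log_likelihood(pi, A, B, obs):
        """Scaled forward algorithm: log P(obs | model) for a discrete HMM.

        pi: (n_states,) initial state distribution
        A:  (n_states, n_states) state transition matrix
        B:  (n_states, n_symbols) emission matrix over relation symbols
        obs: sequence of relation-symbol indices
        """
        alpha = pi * B[:, obs[0]]
        scale = alpha.sum()
        alpha = alpha / scale
        log_lik = np.log(scale)
        for t in obs[1:]:
            alpha = (alpha @ A) * B[:, t]
            scale = alpha.sum()          # rescale to avoid numerical underflow
            alpha = alpha / scale
            log_lik += np.log(scale)
        return log_lik

    # Two hypothetical 3-state left-to-right event models over the same symbols.
    meeting = dict(                      # states: separate -> closing in -> together
        pi=np.array([1.0, 0.0, 0.0]),
        A=np.array([[0.7, 0.3, 0.0],
                    [0.0, 0.7, 0.3],
                    [0.0, 0.0, 1.0]]),
        B=np.array([[0.70, 0.20, 0.05, 0.05],
                    [0.10, 0.70, 0.15, 0.05],
                    [0.05, 0.10, 0.80, 0.05]]),
    )
    pass_by = dict(                      # states: separate -> near -> moving apart
        pi=np.array([1.0, 0.0, 0.0]),
        A=np.array([[0.7, 0.3, 0.0],
                    [0.0, 0.6, 0.4],
                    [0.0, 0.0, 1.0]]),
        B=np.array([[0.60, 0.30, 0.05, 0.05],
                    [0.10, 0.40, 0.45, 0.05],
                    [0.05, 0.05, 0.10, 0.80]]),
    )

    # A tracked pair that approaches and then stays adjacent scores higher
    # under the "meeting" model than under "pass_by".
    sequence = [REL_INDEX[r] for r in
                ["far_apart", "approaching", "approaching", "adjacent", "adjacent"]]
    scores = {name: forward_log_likelihood(obs=sequence, **model)
              for name, model in [("meeting", meeting), ("pass_by", pass_by)]}
    print(max(scores, key=scores.get), scores)

In this standard likelihood-based scheme, each candidate event type gets its own HMM, and a new relation sequence is assigned to whichever model gives it the highest likelihood.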
Biographical Sketch:
Dr. Chan received his bachelor's degree in Electrical Engineering from
Oxford University, and his M.S. and Ph.D. degrees in Computer and
Information Science from the University of Pennsylvania. He is
currently at GE Global Research, where he has served in roles
including computer scientist and project leader. Prior to GE, he was a
research scientist at Rockwell Scientific Company and a research
associate at USC. His research interests include probabilistic models
for visual event recognition, biometric fusion, audio-visual speech
recognition, visual tracking, human-computer interaction,
multimedia communication and medical imaging. Dr. Chan was an active
contributor to the Advanced Displays and Interactive Displays FedLab
funded by ARL, and was a program co-chair of the DARPA-funded
Multimodal Speech Recognition Workshop in June 2002.
Out of the CAVE: The Ascent of Visualization
Robin Bargar, Dean, School of Technology and Design
New York City College of Technology, CUNY
Abstract:
The 1990s ushered in a decade of "upscale visualization," both in
terms of computational prowess, advancing on previously intractable
problems, and in terms of large-scale dedicated systems. Both endeavors
bore fruit; some of it we are still trying to harvest as it ripens on the
vine. With the end of the 1990s, the "Supercomputing Meets Hollywood"
mentality has given way to a more diversified field of devices and
users. How shall we assess our institutional position in terms of
formal vs. informal training for students, as well as directions of
research and investments in display systems?
Real-time Vision & Modeling
Dr. Anurag Mittal and Dr. Visvanathan Ramesh, Siemens Corporate Research, Princeton, NJ
Rapid and Real-Time Mapping Technologies
Mr. Jeffrey M. Young, Regional Director, Leica Geosystems - Americas
Abstract:
Technologies available to first responders for rapid map
creation are undergoing significant advances. Multi-sensor, high-resolution
airborne pixel-based and lidar systems are becoming more
widely used as the long-standing constraints of cost, downlink delivery,
and spectral and spatial resolution are mitigated. This
discussion will include a description and system architecture for
multiple use cases, along with a prototype real-time mapping capability.
This capability is intended to support the first responder community in
the event of a variety of human-induced incidents and natural disasters,
as well as in response planning and post-incident analysis activities.
Biographical Sketch:
Mr. Young has over 27 years of sales, program, and project
experience, including more than 15 years in senior management of GIS
corporations. Mr. Young is employed as the Regional Director of Leica
Geosystems - Americas, located in Denver, Colorado, managing the activities
of regional salespersons and overseeing the development
and operation of a network of image processing and photogrammetry
software distributors. Previously, Mr. Young was Vice President, Global
Solutions Sales & Marketing at Space Imaging, LLC. He also served
as Executive Director, Global Solution Sales, and as Director,
North American Sales. Prior to working at Space Imaging LLC, he held
positions including Sales Director, Sales
Account Manager, GIS Sales Business Development Director, Business Unit
President-Criminal Justice and Public Safety, GIS-T Program Manager,
Program Manager, GIS System Consultant, Supervisor, and Office Manager.
As a Staff Scientist, Mr. Young was responsible for Geographic
Information System (GIS) solution design and applications development,
sales, business development, infrastructure management applications,
facility management applications, information system design, siting
studies, environmental monitoring, environmental constraints analysis,
land use analysis, terrain analysis, remote sensing, computer
cartography, geographic field techniques, and computer training. Mr.
Young resides in Centennial, CO with his wife and two children and
enjoys traveling throughout the western United States.
Equinox Multimodal Image Fusion Systems
Dr. Diego Socolinsky, Director, Research & Development, Equinox Corporation
Abstract:
The wider availability of sophisticated imaging
sensors in various regions of the electromagnetic spectrum creates an
opportunity for exploitation never before possible. Combinations of
visible, thermal, and other imaging modalities allow for unprecedented
capabilities in areas such as target detection and recognition,
biometric identification, and fire control systems, to name but a
few. Equinox has an active system development program in this
area, supporting a variety of mission requirements for military and
civilian customers. We will showcase some examples of image fusion
systems currently under development.
Questions for Our Panelists
Michael Grossberg, City College of New York
Imaging
(1) With digital imaging there has been an explosion in the collection
of images and video. We also know from our own experience how important
visual information is for many tasks. Yet the use of technology does
not always follow our intuition. Video phones have been available for
more than two decades but never caught on. Will the analysis of images and
video ever break out of niche applications and deliver on the promise
of a wide range of revolutionary applications, and if so, why?
Vision
(2) Computer vision is often explained in the popular press in terms
of object or even face recognition. Yet some have pointed out
that object recognition remains one of the least useful
algorithms. What do you see as the most promising aspects of image
understanding for applications in industry?
Visualization
(3) There was a great deal of excitement, and perhaps hype,
surrounding VRML and the potential for interactive 3D interfaces.
Yet outside of limited niches, the primary tools
businesses use for visualizing information remain 2D graphs, maps, and
tables. Where do you see visualization research having a
meaningful impact on our ability to process, understand, and access
information?
Display
(4) We see two conflicting trends: larger and larger
displays and smaller and smaller displays. It seems we want
enormous displays on our desktops and in our living rooms and control rooms,
showing vast amounts of information such as high-quality
DVD movies and the latest news. Conversely, we want small, unobtrusive
displays that fit on our cell phones and watches. What challenges and
opportunities does this create for visualization researchers?
Industry and Universities
(5) How does your company currently interact with university partners?
(6) What are your expectations from a partnership with researchers in a
university?
(7) If you have had past university interactions and partnerships,
can you highlight the kinds of frustrations you faced and what you
think your university partners should keep in mind?
Imaging and Visualization Future
(8) Over the medium term, in what research areas do you look to universities
to help you meet your research objectives?
(9) There is that famous line from the movie The Graduate in which
the family friend advises Dustin Hoffman's character to go into
"plastics" for success in business. If you were advising a first-year CS
graduate student choosing a field in imaging or visualization, what
field would you point to as having the greatest potential in the long
term?