- I'm excited to teach Computer Vision this semester!
- Thank you Adobe! Your generous gift to GWU will help our image matching research
to fight human trafficking.
- Welcome, Maya Shende, Hong Xuan, and Derek LeFever to the best Computer Vision research group in Foggy Bottom!
- Download our app TraffickCam, featured on KSDK news.
- Congratulations Ian
Schillebeeckx, new PhD!
- Best Paper award for
Glitter Imaging for 3D Point Tracking at the IEEE/CVPR Workshop on
Cameras and Displays.
- Published in Frontiers in Public Health:
Webcams, crowdsourcing, and enhanced crosswalks: Developing a novel method to analyze active transportation, Hipp, Manteiga, Burgess, Stylianou, Pless.
- NSF CAREER AWARD to my former student Nathan Jacobs.
- New DOE TERRA big data/agricultural robotics grant.
My research is in the area of computer vision with applications to
environmental science, medical imaging, robotics and virtual reality.
I am particularly interested in data-driven and geometric techniques
to more robustly understand images taken "in the wild". This research
exploits the fact that cameras are incredibly precise measurement
systems --- if they are calibrated properly, then the vast quantities
of visual data they collect can help us learn about, understand, and
manipulate the world around us. At a high level, the current themes
of research in my lab are:
- Understand visual change at scales from the sidewalk to the planet:
What can you do with a billion images? Webcams, iPhones, and flocks
of micro-satellites provide a visual depiction of the Earth at
unprecedented temporal and spatial coverage. Our aim is to organize
these disparate image sources to create coherent global imaging
systems that answer important questions facing our society and our
planet. Specifically, we work to understand the physical principles
that govern image formation in realistic environments, and we apply
those principles in application domains that include: understanding
patterns of tree growth at a continental scale, characterizing the use of public
spaces over time, and creating generalizable models to
learn how image appearance varies over time.
- Next generation imaging systems:
All Virtual Reality and Robotics applications require fast visual
reasoning systems to characterize the 3D world and pose of objects
within that world. The objective of this research effort is to
understand and overcome the fundamental limits of cameras and
materials that make that task difficult. Specifically, our work has
developed the theory of motion estimation from multi-perspective and
compressive sensing cameras, fundamental constraints for calibrating
cameras with other sensing systems, and the design of new light-field
modulating materials that make geometric inference especially easy.
- Democratizing Visual Analytics and Applications to Social Justice:
What can our community do to make visual reasoning more available to
more people? Our lab builds web implementations and
smartphone apps that support the broader public's ability to create
and use visual inference across many applications. This includes apps
to support Citizen Science through repeat photography, and the
geocalibration.org website, which allows the precise geo-location of
an image with respect to Google Maps and was used to find the lost
grave of a Jane Doe crime victim from 30 years ago.
Datasets, Apps, and Frequently Requested Code
- The Archive of Many Outdoor Scenes
- TraffickCam, an app to crowdsource the creation of a continually
updated index of the appearance of hotel rooms to support
investigations of sex trafficking.
- Citizen Science Repeat Photography App and Data
- Code (12 KB zip file) for
MATLAB and web visualizations of image manifolds, from the paper: "A
Survey of Manifold Learning for Images", Robert Pless and Richard
Souvenir, IPSJ Transactions on Computer Vision and Applications,
vol. 1 pp. 83-94, 2009.
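(The downloadable code above is in MATLAB; as a rough illustration of the idea, here is a minimal Python sketch, not the paper's code, that embeds a synthetic image sequence onto a low-dimensional manifold with Isomap. The data, parameters, and variable names are all invented for this example.)

```python
# Minimal sketch of manifold learning for images: a set of tiny 8x8
# "images" of a bright spot sliding horizontally lies on a
# one-dimensional manifold, which Isomap can recover.
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)

# Synthetic data: 200 frames, each an 8x8 image of a Gaussian bump
# whose horizontal position is the single underlying parameter.
shifts = rng.uniform(0, 7, size=200)
xs = np.arange(8)
rows = np.exp(-0.5 * (xs[None, :] - shifts[:, None]) ** 2)  # 200 x 8
images = np.repeat(rows[:, None, :], 8, axis=1)             # 200 x 8 x 8

# Flatten each image to a vector and embed into one dimension.
X = images.reshape(len(images), -1)
embedding = Isomap(n_components=1, n_neighbors=10).fit_transform(X)

# A good embedding orders the frames by the underlying shift.
corr = abs(np.corrcoef(embedding[:, 0], shifts)[0, 1])
print(f"|correlation| between embedding and true shift: {corr:.2f}")
```

For real image collections the same recipe applies: flatten (or otherwise featurize) each image, then embed with a neighborhood-graph method such as Isomap or LLE.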
- Code (2.3 MB zip file) from our
paper: "Extrinsic Calibration of a Camera and Laser Range Finder",
Qilong Zhang, Robert Pless, IROS 2004.