What are the tools that behavioral scientists need to measure the face?
- A multimedia database shared on the Internet that contains images, sounds, numeric data, text, and various tools relevant to facial expression and its understanding. The images should include:
- still and moving images of the face that reflect a number of important variables and parameters, separately and in combination,
- voluntary productions of facial movements, with descriptive tags, accompanied by data from other sensors,
- spontaneous movements carefully cataloged with associated physiology,
- animation exemplars for use in perception studies,
- compilations of extant findings with more fine-grained descriptions of facial behavior,
- large numbers of standardized facial images used to evaluate performance of alternative techniques of machine measurement and modeling,
- speech sounds and other vocalizations associated with facial messages.
- A survey of which images meeting the above criteria already exist and can be incorporated into the database, and a report on the images that still need to be collected.
- Specifications for video recordings and equipment that would enable sharing the productions of different laboratories and that anticipate the rapid developments in imaging and image compression technology.
- Standards for digitized images and sounds, data formats, and other elements shared in the database that are compatible with other national databases.
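To make the shared-format requirement concrete, here is a minimal sketch of what one standardized metadata record might look like, using JSON as a neutral interchange format. All field names below are hypothetical illustrations, not part of any existing standard.

```python
import json

# Hypothetical metadata record for one database item; the field names
# are illustrative only, not drawn from any existing standard.
record = {
    "item_id": "0001",
    "media_type": "video",          # "still", "video", "audio", ...
    "resolution": [640, 480],       # pixels, width x height
    "frame_rate_hz": 30,
    "coding_scheme": "FACS",        # facial measurement method used
    "tags": ["voluntary", "smile"],
    "access": "open",               # "open" or "privileged"
}

# Serialize to a portable, language-neutral text form for sharing.
serialized = json.dumps(record, sort_keys=True)
restored = json.loads(serialized)
```

A text-based record of this kind round-trips losslessly between laboratories regardless of the local software environment, which is the property a shared standard must guarantee.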
- A security system that protects the privileged nature of some items in the database while maximizing free access to open items.
- Analysis of database performance and design.
- Strategies and opportunities to share expensive equipment or complex software among laboratories.
Tools for Processing and Analyzing Faces and Related Data:
- Methods for detecting and tracking faces and heads in complex images.
- Programs to translate among different visual facial measurement methods.
- Automated facial measurements:
- detecting and tracking 3D head position and orientation,
- detecting and tracking eye movements, gaze direction, and eye closure,
- detecting and measuring lip movement and mouth opening,
- detecting and measuring facial muscular actions, including the following independent capabilities:
- a. detection of brow movements,
- b. detection of smiles,
- c. detection of actions that are neither smiles nor brow movements,
- d. techniques for temporally segmenting the flow of facial behavior,
- e. detection of onset, apex, and offset of facial muscle activity,
- f. detection of limited subsets of facial actions.
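Items (d) and (e) above reduce, in the simplest case, to finding threshold crossings and the peak within each episode of activity. A minimal sketch, assuming a one-dimensional per-frame intensity signal (e.g. tracked brow displacement) has already been extracted:

```python
def segment_activity(intensity, threshold=0.1):
    """Locate onset, apex, and offset of one muscle-activity episode.

    `intensity` is a per-frame activity measure; an episode is the span
    where it exceeds `threshold`. Returns (onset, apex, offset) frame
    indices, or None if the signal never crosses the threshold.
    """
    active = [i for i, v in enumerate(intensity) if v > threshold]
    if not active:
        return None
    onset, offset = active[0], active[-1]
    apex = max(range(onset, offset + 1), key=lambda i: intensity[i])
    return onset, apex, offset

# A synthetic rise-and-fall trace: activity begins at frame 2,
# peaks at frame 4, and subsides by frame 7.
trace = [0.0, 0.05, 0.2, 0.6, 0.9, 0.5, 0.15, 0.05, 0.0]
segment_activity(trace)  # → (2, 4, 6)
```

Real facial behavior overlaps and co-occurs, so a production system would need per-action signals and multi-episode handling, but the onset/apex/offset decomposition itself is no more than this.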
- Parametric and other models, including 3D, of the human face and head that enable accurate rendering of different expressions given a specific face.
- anatomically correct physical models of the head and face,
- complete image atlas of the head, including soft and hard tissue.
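The parametric-model requirement can be illustrated with the simplest possible scheme: a blendshape-style linear model in which an expression is the neutral geometry plus a weighted sum of displacement fields. The vertex data below are toy values, not an anatomical model.

```python
# Three toy 3D vertices standing in for a face mesh.
NEUTRAL = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.5, 1.0, 0.0)]

# Per-vertex displacements for two hypothetical expression parameters.
DELTAS = {
    "smile":      [(0.0, 0.0, 0.0), (0.1, 0.2, 0.0), (0.0, 0.0, 0.0)],
    "brow_raise": [(0.0, 0.1, 0.0), (0.0, 0.0, 0.0), (0.0, 0.3, 0.1)],
}

def render(weights):
    """Return vertex positions for a given expression-parameter setting."""
    verts = []
    for i, (x, y, z) in enumerate(NEUTRAL):
        for name, w in weights.items():
            dx, dy, dz = DELTAS[name][i]
            x, y, z = x + w * dx, y + w * dy, z + w * dz
        verts.append((round(x, 6), round(y, 6), round(z, 6)))
    return verts
```

With `weights={}` the model reproduces the neutral face; intermediate weights interpolate expressions continuously, which is what makes such models usable as stimuli in perception studies.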
- Algorithms for assembling discrete measurements into meaningful chunks for interpretation.
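One minimal form such an assembly step could take, assuming sorted frame-level detections of a single action are already available, is to merge detections separated by small gaps into coherent events:

```python
def group_into_events(frames, max_gap=2):
    """Assemble per-frame detections into events (chunks).

    `frames` is a sorted list of frame indices at which an action was
    detected; detections separated by at most `max_gap` frames are
    treated as one continuous event. Returns (start, end) pairs.
    """
    events = []
    for f in frames:
        if events and f - events[-1][1] <= max_gap:
            events[-1][1] = f          # extend the current event
        else:
            events.append([f, f])      # start a new event
    return [tuple(e) for e in events]

group_into_events([3, 4, 5, 9, 10, 20])
# → [(3, 5), (9, 10), (20, 20)]
```

Interpretation then operates on events rather than isolated frames; richer schemes would group across actions as well, but the gap-merging idea is the common core.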
- Programs for interpreting facial measurements in terms of emotion, cognitive processes, and other phenomena not open to direct observation.
- Programs for translating lip movements to speech.
- Automated gesture recognition.
- Programs for integrating and analyzing measurements from different modalities, such as video, speech, and EMG.
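Modalities are recorded at different rates, so a common first step in any such integration is resampling every signal onto a shared timebase. A minimal sketch using linear interpolation, with times in arbitrary integer units (e.g. milliseconds):

```python
def resample(times, values, query_times):
    """Linearly interpolate a signal at new time points.

    `times` must be sorted; queries outside the recorded range are
    clamped to the first or last value. Useful for aligning, say,
    EMG samples to video frame times.
    """
    out = []
    for t in query_times:
        if t <= times[0]:
            out.append(values[0])
        elif t >= times[-1]:
            out.append(values[-1])
        else:
            # find the pair of recorded samples that brackets t
            j = next(i for i in range(1, len(times)) if times[i] >= t)
            t0, t1 = times[j - 1], times[j]
            v0, v1 = values[j - 1], values[j]
            out.append(v0 + (v1 - v0) * (t - t0) / (t1 - t0))
    return out

# An EMG burst sampled at 0, 10, 20 ms, queried at two video frames.
resample([0, 10, 20], [0.0, 1.0, 0.0], [5, 15])  # → [0.5, 0.5]
```

Once every channel shares a timebase, cross-modal analyses (correlation, joint event detection) become straightforward array operations.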
- Pattern discovery and recognition in multiple physiological measures.
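The simplest screen for pattern across physiological channels is pairwise correlation; anything a richer discovery method finds should first survive this baseline. A minimal stdlib sketch:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length measurement series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two channels moving in lockstep give a coefficient close to 1.0.
pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
```

Genuine pattern discovery would go well beyond this (lags, nonlinearity, multivariate structure), but correlation matrices across channels are the standard starting point.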
- Further exploration of novel computer vision and image processing techniques in processing the face, such as the use of color and 3D.
- Development of "real-time" distance range sensors useful in constructing 3D head and face models.
- Development of interactive systems for facial analysis.
- Development and adaptation of parallel processing hardware to automated measurement.
- Video sensors and control equipment to enable "active vision" cameras that would free behavioral scientists from the need to keep subjects relatively stationary.
See the NSF Workshop report on our Web pages for more detailed answers to this question.