[Baby-gaze-coding] Follow up: working group on automated gaze coding for developmental psychology
Kat Adams
kat.adams at nyu.edu
Fri Sep 21 16:59:21 EDT 2018
Hi all,
Thanks to those of you who were able to join the launch of this working
group yesterday. It was great to hear from researchers in both
developmental psychology and machine learning/vision. Here
<https://docs.google.com/document/d/1FoOcp-lmcEX92u1uKFnSqkQA9A6TYrlNmPBE1QDzzR4/edit?usp=sharing>
is the meeting agenda with notes.
A take-away from the discussion is that there is a ton of interest in
developing automated eye gaze coding tools for developmental research, and
that the problem space is quite large: each experimental setup has its own
task-specific parameters and problems, and so may be best served by
different solutions rather than a "one size fits all" algorithm. In
addition, extracting useful features (e.g., gaze direction and head
position) from video appears to be the more straightforward step (or at
least one where we can use off-the-shelf tools and benefit from ongoing
development elsewhere), whereas translating those features into the outcome
measures the researcher actually cares about is the more challenging step.
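To make that split concrete, here's a minimal sketch (in Python, using
pandas) of what that second step can look like for one hypothetical setup.
It assumes a per-frame CSV of gaze angles, like what an off-the-shelf
extraction tool might produce; the file name, the column names (timestamp,
gaze_angle_x), the left/right/away codes, and the 0.2-radian threshold are
all placeholders we made up for illustration, not recommendations:

    import pandas as pd

    # Per-frame gaze features, e.g. from an off-the-shelf extraction tool.
    # The file and column names here are hypothetical placeholders.
    frames = pd.read_csv("infant_session_features.csv")

    # Setup-specific step: map a continuous horizontal gaze angle onto the
    # left/right/away codes a human coder would assign. The 0.2 rad
    # threshold is arbitrary and would differ for every setup.
    frames["code"] = pd.cut(
        frames["gaze_angle_x"],
        bins=[-float("inf"), -0.2, 0.2, float("inf")],
        labels=["left", "away", "right"],
    )

    # Outcome measure the researcher cares about: total looking time per
    # side, accumulated from per-frame codes and inter-frame durations.
    frames["dt"] = frames["timestamp"].diff().fillna(0)
    print(frames.groupby("code", observed=True)["dt"].sum())

Even in this toy version, the feature extraction is one line of someone
else's tool, while every number and label in the middle block is a decision
that depends on the particular experiment.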
This leads us to home in on Goal 1 for the project: identify and make
available an initial developmental dataset to serve as a standard for
testing eye gaze algorithms, and specify a standard format for the data
(video, metadata, & human coding) so that this dataset can grow to include
a wide variety of setups.
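For concreteness, the kind of metadata record we have in mind might look
something like the sketch below (written as a Python dict, but any
serialization would do). Every field name and value here is a placeholder
to seed discussion, not a proposal we've settled on:

    # Purely illustrative: all field names and values are placeholders.
    example_record = {
        "video": {"file": "session_0042.mp4", "fps": 30,
                  "resolution": [1280, 720]},
        "setup": {"camera_position": "above_screen",
                  "stimulus_layout": "two_sided",
                  "recording_context": "lab"},
        "participant": {"age_days": 245,
                        "consent_level": "share_with_researchers"},
        "human_coding": {"coder_file": "session_0042_coding.csv",
                         "codes": ["left", "right", "away"]},
    }

The point of agreeing on something like this up front is that videos from
very different setups can then sit side by side in one growing dataset.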
We propose to use some published Lookit data (https://osf.io/dqmcv/) as a
starting point, simply because it's publicly available and easy to get
going with. Over time, though, we'd like to add more video--especially from
varied lab setups--and to make sure there's plenty of "easy"-to-code video
available.
Here are some ways you can get involved in Goal 1:
Shorter-term:
- Draft/propose a standard for video metadata format or video coding
format
- Send the group an example of a video coding output file from your
lab, along with an explanation of how to interpret it, to help inform
choices about video coding standards (see the toy example just after this
list)
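For instance (a made-up toy example of the kind of file we mean, not a
format anyone has agreed on), a coding output file might boil down to
something like:

    trial,onset_ms,offset_ms,code
    1,0,1480,left
    1,1480,2120,away
    1,2120,5000,right

If your lab's files look nothing like this, that's exactly the kind of
variation we want to hear about.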
Longer-term, we could really use someone to step up in each of these roles
(pitch: what a great way for a grad student to make contacts in the field,
see a wide variety of data, ...! :) ):
- Volunteer to coordinate the legal/privacy side of data sharing (i.e.,
considering possible IRB arrangements & applying/helping others apply)
- Volunteer to coordinate the practical side of data sharing (e.g.,
keeping track of what types of video we need more of; be the person to talk
to if someone wants to contribute data but needs to convert the output of
their custom video coding script to the standard we're requesting) and/or
to help this person out with standardization as needed
In the meantime, Kim and Kat will continue to have one-on-one conversations
with vision researchers over the next month, and will send updates to this
listserv as the project moves forward. Please don't hesitate to reach out
to us if you have any questions or would like to be involved.
Best,
Kim and Kat
--
Doctoral Candidate
Developmental Psychology
NYU Steinhardt