Watch the traffic signal: a signal state extraction system for naturalistic driving videos
University of Wisconsin-Madison.
2016 (English). Conference paper (Refereed).
Abstract [en]

Naturalistic driving studies (NDS) are revolutionizing road safety research. Naturalistic driving data provide a new window into driver behavior that promises a deeper understanding than was ever possible with crash data, roadside observations, or driving simulator experiments. NDS collect extensive vehicle network data, vehicle internal videos (e.g., drivers and passengers), and vehicle external videos (e.g., forward roadway). The video record of the driver and the surrounding road situation often provides a more revealing account of driver behavior. Data volume, however, is a double-edged sword for most NDS: fully or partially automated procedures are needed for data reduction. Both intentional and unintentional violations of traffic signals can have severe consequences, such as fatal right-angle crashes. Knowing the signal state when the driver navigated through the intersection is the first step towards judging the driver’s compliance and analyzing deeper human factors issues. A system that codes traffic signal state from georeferenced front-view videos was developed for use with the Second Strategic Highway Research Program (SHRP2) NDS data. GPS coordinates and a free online map database are used to identify candidate frames from lengthy videos for computer vision processing. The computer vision algorithm uses color histograms and shape matching to detect traffic signals. Vehicle movement is used to select, from multiple detected signals, the one corresponding to the vehicle’s traffic lane. Temporal relationships between frames are then used to refine the detection results. Experiments were conducted on daytime videos and showed reasonable performance given the severe challenges posed by the SHRP2 data. Misclassifications were primarily due to other vehicles’ taillights, reddish-yellow signals, yellowish-red signals, green traffic signs, etc. Misses were due to distant frames with very few pixels corresponding to the signals and reduced color conspicuity relative to the background scene (e.g., green signals and the sky can look similar). The system is an insightful first step towards using computer vision techniques to support signal state coding of the large volumes of naturalistic driving video data. Challenges revealed by the experimental results are valuable knowledge for future improvement.
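The abstract's pipeline of per-frame color-histogram classification followed by temporal refinement can be illustrated with a minimal sketch. This is not the authors' implementation (the paper's actual thresholds, color spaces, and shape-matching step are not given here); the bin boundaries and window size below are illustrative assumptions.

```python
# Illustrative sketch, NOT the paper's implementation: vote each pixel of a
# candidate signal region into a coarse color bin (a crude color histogram),
# then smooth the per-frame labels with a sliding majority vote, echoing the
# "temporal relationships between frames" refinement the abstract describes.
# All thresholds are assumptions chosen for illustration only.

def classify_signal_color(pixels):
    """Classify a region given as an iterable of (r, g, b) tuples (0-255)."""
    votes = {"red": 0, "yellow": 0, "green": 0}
    for r, g, b in pixels:
        if r > 150 and g > 120 and b < 100:
            votes["yellow"] += 1   # high red AND green -> yellowish
        elif r > 150 and g < 100 and b < 100:
            votes["red"] += 1
        elif g > 150 and r < 100:
            votes["green"] += 1
        # other pixels (background, housing) cast no vote
    if not any(votes.values()):
        return "unknown"
    return max(votes, key=votes.get)

def smooth_labels(labels, window=3):
    """Majority vote over a sliding window of per-frame labels."""
    smoothed = []
    for i in range(len(labels)):
        lo, hi = max(0, i - window // 2), min(len(labels), i + window // 2 + 1)
        chunk = labels[lo:hi]
        smoothed.append(max(set(chunk), key=chunk.count))
    return smoothed

# A single spurious "green" frame (e.g., a green traffic sign) is corrected
# by the votes of its temporal neighbors.
per_frame = ["red", "red", "green", "red", "red"]
print(smooth_labels(per_frame))  # -> ['red', 'red', 'red', 'red', 'red']
```

The overlapping bin conditions also hint at the failure modes the paper reports: a reddish-yellow or yellowish-red region sits near a bin boundary, and distant signals contribute too few voting pixels to outvote the background.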

Place, publisher, year, edition, pages
Linköping: Statens väg- och transportforskningsinstitut, 2016.
National Category
Signal Processing
Research subject
X RSXC
Identifiers
URN: urn:nbn:se:vti:diva-10585
OAI: oai:DiVA.org:vti-10585
DiVA: diva2:926745
Conference
17th International Conference Road Safety On Five Continents (RS5C 2016), Rio de Janeiro, Brazil, 17-19 May 2016.
Available from: 2016-05-09. Created: 2016-05-09. Last updated: 2016-05-09. Bibliographically approved.

Open Access in DiVA

No full text

