Extraction of key segments from day-long sound data

Akinori Kasai, Sunao Hara, Masanobu Abe

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We propose a method to extract particular sound segments from the sound recorded during the course of a day in order to provide sound segments that can be used to facilitate memory. To extract important parts of the sound data, the proposed method utilizes human behavior based on a multisensing approach. To evaluate the performance of the proposed method, we conducted experiments using sound, acceleration, and global positioning system data collected by five participants for approximately two weeks. The experimental results are summarized as follows: (1) various sounds can be extracted by dividing a day into scenes using the acceleration data; (2) sound recorded in unusual places is preferable to sound recorded in usual places; and (3) speech is preferable to nonspeech sound.
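The pipeline described in the abstract — segmenting a day into scenes from acceleration data, then preferring scenes recorded in unusual places and scenes containing speech — can be illustrated with a minimal sketch. This is not the authors' implementation; the threshold, the scoring formula, and all function names are assumptions for illustration only.

```python
# Illustrative sketch (not the paper's actual method): split per-minute
# acceleration magnitudes into "scenes" at activity changes, then rank
# scenes so that rarer places and speech-containing scenes score higher.
from collections import Counter

def segment_scenes(accel, threshold=0.5):
    """Start a new scene whenever consecutive acceleration values
    differ by more than `threshold` (an assumed tuning parameter)."""
    scenes, start = [], 0
    for i in range(1, len(accel)):
        if abs(accel[i] - accel[i - 1]) > threshold:
            scenes.append((start, i))
            start = i
    scenes.append((start, len(accel)))
    return scenes

def rank_scenes(scenes, places, has_speech):
    """Score each (start, end) scene by average place rarity plus the
    fraction of minutes flagged as speech; return scenes best-first."""
    place_counts = Counter(places)
    scored = []
    for start, end in scenes:
        span = places[start:end]
        rarity = sum(1.0 / place_counts[p] for p in span) / len(span)
        speech = sum(has_speech[start:end]) / (end - start)
        scored.append(((start, end), rarity + speech))
    return sorted(scored, key=lambda s: -s[1])
```

For example, a quiet stretch at home, an active stretch at a rarely visited park with conversation, and another quiet stretch at home would yield three scenes, with the park scene ranked first.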

Original language: English
Title of host publication: Communications in Computer and Information Science
Publisher: Springer Verlag
Pages: 620-626
Number of pages: 7
Volume: 528
ISBN (Print): 978-3-319-21379-8
DOI: 10.1007/978-3-319-21380-4_105
Publication status: Published - 2015
Event: 17th International Conference on Human Computer Interaction, HCI 2015 - Los Angeles, United States
Duration: Aug 2, 2015 - Aug 7, 2015

Publication series

Name: Communications in Computer and Information Science
Volume: 528
ISSN (Print): 1865-0929

Keywords

  • Acceleration
  • GPS
  • Life-log
  • Multisensing
  • Sound
  • Syllable Count

ASJC Scopus subject areas

  • Computer Science (all)

Cite this

Kasai, A., Hara, S., & Abe, M. (2015). Extraction of key segments from day-long sound data. In Communications in Computer and Information Science (Vol. 528, pp. 620-626). (Communications in Computer and Information Science; Vol. 528). Springer Verlag. https://doi.org/10.1007/978-3-319-21380-4_105
