About me


Lars Schillingmann received the diploma degree in computer science from Bielefeld University, Germany, in 2007, with a thesis on integrating visual context into speech recognition. He then joined the research group for Applied Informatics (Angewandte Informatik) at Bielefeld University, where he worked on the BMBF joint project DESIRE and subsequently on the topic of Acoustic Packaging in the EU project iTalk, for which he received the Ph.D. degree in 2012. He then worked at the CoR-Lab on the EU project HUMAVIPS, which aimed at developing adequate robot behavior for interacting with a group of people. In April 2013, he joined the Emergent Robotics Lab., Graduate School of Engineering, Osaka University, Japan. In 2015, he returned to the Applied Informatics Group at Bielefeld University, Germany, as a postdoctoral researcher, working on speech processing, vision, and machine learning topics in human-robot interaction.
Since June 2018, he has been working as a Development Engineer at Robert Bosch GmbH.

Publications

L. Schröder, V. Buchholz, V. Helmich, L. Hindemith, B. Wrede, and L. Schillingmann, “A Multimodal Interactive Storytelling Agent Using the Anthropomorphic Robot Head Flobi,” in Proceedings of the 5th International Conference on Human Agent Interaction (HAI ’17), 2017, pp. 381–385.

M. Brandt, B. Wrede, F. Kummert, and L. Schillingmann, “Confirmation detection in human-agent interaction using non-lexical speech cues,” in Symposium on Natural Communication for Human-Robot Collaboration, 2017.

A.-L. Vollmer and L. Schillingmann, “On Studying Human Teaching Behavior with Robots: a Review,” Review of Philosophy and Psychology, 2017.

E. Wall, L. Schillingmann, and F. Kummert, “Online Nod Detection in Human-Robot Interaction,” in 26th IEEE International Symposium on Robot and Human Interactive Communication (IEEE RO-MAN 2017), 2017.

L. Schillingmann and Y. Nagai, “Yet another gaze detector: An embodied calibration free system for the iCub robot,” in 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), 2015, pp. 8–13.

O. Palinko, A. Sciutti, L. Schillingmann, F. Rea, Y. Nagai, and G. Sandini, “Gaze Contingency in Turn-Taking for Human Robot Interaction: Advantages and Drawbacks,” in 24th IEEE International Symposium on Robot and Human Interactive Communication (IEEE RO-MAN 2015), 2015.

L. Schillingmann, J. M. Burling, H. Yoshida, and Y. Nagai, “Gaze is not Enough: Computational Analysis of Infant’s Head Movement Measures the Developing Response to Social Interaction,” in Proceedings of the 37th Annual Meeting of the Cognitive Science Society, 2015.

L. Schillingmann, J. M. Burling, H. Yoshida, and Y. Nagai, “How do Infants Coordinate Head and Gaze?: Computational Analysis of Infant’s First Person View in Social Interactions,” Poster presented at the Biennial Meeting of the SRCD in Philadelphia, 2015.

L. Schillingmann, M. Rolf, S. Kumagaya, S. Ayaya, and Y. Nagai, “Assistance for Autistic People by Segmenting and Highlighting Cross-Modal Perceptual Information,” in International Sessions, the 31st Annual Conference of the Robotics Society of Japan, 2013.

M. Lohse, B. Wrede, and L. Schillingmann, “Enabling robots to make use of the structure of human actions - a user study employing Acoustic Packaging,” in 22nd IEEE International Symposium on Robot and Human Interactive Communication (IEEE RO-MAN 2013), 2013.

B. Wrede, L. Schillingmann, and K. J. Rohlfing, “Making Use of Multi-Modal Synchrony: A Model of Acoustic Packaging to Tie Words to Actions,” in Theoretical and Computational Models of Word Learning: Trends in Psychology and Artificial Intelligence, L. Gogate and G. Hollich, Eds. Hershey, PA, USA: IGI Global, 2013, pp. 224–240.

L. Schillingmann, P. Wagner, C. Munier, B. Wrede, and K. Rohlfing, “Acoustic Packaging and the Learning of Words,” Poster presented at the International Conference on Development and Learning, 2011.

L. Schillingmann, P. Wagner, C. Munier, B. Wrede, and K. Rohlfing, “Using Prominence Detection to Generate Acoustic Feedback in Tutoring Scenarios,” in Interspeech 2011, 2011.

I. Lütkebohle, J. Peltason, L. Schillingmann, B. Wrede, S. Wachsmuth, C. Elbrechter, and R. Haschke, “The Curious Robot – Structuring Interactive Robot Learning,” in International Conference on Robotics and Automation, 2009, pp. 2154–2160.

L. Schillingmann, B. Wrede, K. Rohlfing, and K. Fischer, “The Structure of Robot-Directed Interaction compared to Adult- and Infant-Directed Interaction using a Model for Acoustic Packaging,” in Spoken Dialogue and Human-Robot Interaction Workshop, 2009.

L. Schillingmann, B. Wrede, and K. J. Rohlfing, “A Computational Model of Acoustic Packaging,” IEEE Transactions on Autonomous Mental Development, vol. 1, no. 4, pp. 226–237, Dec. 2009.

L. Schillingmann, B. Wrede, and K. Rohlfing, “Towards a computational model of Acoustic Packaging,” in International Conference on Development and Learning, 2009. [ICDL 2009 Best Paper Award]

L. Schillingmann, S. Wachsmuth, and B. Wrede, “Corpus-Based Training of Action-Specific Language Models,” in Special Interest Group on Discourse and Dialogue, 2007.

T. Plötz, G. A. Fink, P. Husemann, S. Kanies, K. Lienemann, T. Marschall, M. Martin, L. Schillingmann, M. Steinrücken, and H. Sudek, “Automatic Detection of Song Changes in Music Mixes Using Stochastic Models,” in International Conference on Pattern Recognition (ICPR’06), 2006, vol. 3, pp. 665–668.