Erhan Öztop

I am a professor in the Computer Science Department at Ozyegin University, where I co-direct the OzU Robotics Laboratory and the OzU AI Laboratory.

Before joining OzU, I was conducting research at the Cognitive Mechanisms Laboratories of the Advanced Telecommunications Research Institute (ATR) as the vice head of the Communication and Cognitive Cybernetics (CCC) Department. 

My research interests include computational modeling of brain mechanisms of visuo-motor transformations, machine learning, robotics, and cognitive neuroscience. In general, I am interested in how humans and other biological systems process information and solve problems.


Please follow the links below for brief descriptions:

Convergent Human and Robot Learning for Effective Robot Skill Generation

Computational Modeling of Mirror Neurons

Sign-Representation of Boolean Functions

Dexterous Manual Manipulation

Previous Projects

Gifu Hand III

Human visuomotor learning for robot skill synthesis: Dexterous manipulation
This study explores how the human visuomotor learning ability can be exploited to obtain dexterous manipulation and movement capabilities on robots (see also item 4 below). For example, effortless ball manipulation via real-time control of the Gifu Hand can be seen here. A more challenging task is to rotate the so-called Chinese healing balls without dropping them. With training, the robot hand is integrated into the human ‘body schema’, allowing the subject to perform this task with the robot hand. Here is a movie (or this one) showing the skill obtained with this paradigm. This basic skill can then be tuned to improve performance (e.g., speed), as shown here.

Self-observation and auto-association as a route to simple imitation
In previous years, we explored the associative-memory hypothesis of imitation bootstrapping with the Gifu Hand. Click for a demo movie.

Application to Brain Machine Interface
Collaborating with Honda and neuroscientists at ATR/CNS, we employed the Gifu Hand in a brain-machine interface (BMI) project. Using fMRI, a human subject’s brain activity is mapped to one of the rock/scissors/paper hand postures, which is then replicated on the Gifu Hand in near real time. A web search on the project turns up further coverage.

Real-time full-body robot control of HOAP-II

Human visuomotor learning for robot skill synthesis: Reaching while keeping static balance
This is an extension of the ‘human visuomotor learning for robot skill synthesis’ paradigm to full-body humanoid robots, carried out in collaboration with Jan Babic at the Jozef Stefan Institute, Slovenia, and Joshua Hale at ATR, Japan. Here is the human control of the robot, where the subject was asked to keep the robot balanced while tracing a trajectory with his finger. The collected data are used to derive a balanced reaching skill. Here this skill is used to have the robot trace an elliptical trajectory.

Improving the human visuomotor learning for robot skill synthesis paradigm
This platform can carry a human. The idea is this: the subject controlling a humanoid robot ‘rides’ the platform and thereby ‘feels’ the dynamics of the robot’s center of mass. Here the force control of the platform can be seen.

The separation induced by a higher-order neuron (a polynomial) for a dichotomy of the corners of the 3-dimensional cube.

Representation of Boolean functions (dichotomies over the n-cube) using polynomials (higher order neurons) with a small number of monomials (fan-in).
Higher-order neurons, or sigma-pi units, are extensions of linear neuron models that capture the nonlinearity of an input-output mapping using products of input variables, called monomials. The net input to a higher-order unit is the sum of the monomials weighted by adjustable parameters; the output is obtained by applying a predefined activation function, usually a sigmoidal or threshold function, to the net input. Many aspects of this powerful model deserve attention. My main interest is the number of monomials that a higher-order neuron requires to solve a given classification problem. More generally, given a set of classification problems, what is the minimum number of monomials that can solve the whole set? Recently, I showed that any dichotomy of the n-cube can be realized with 0.75 × 2^n or fewer monomials, which is the best bound known so far. Here is the reprint that contains the proof of this claim.
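As an illustration, a threshold-activated sigma-pi unit can be sketched in a few lines. This is a minimal sketch, not the notation of the paper; the index-tuple encoding of monomials and the parity example are my own illustrative choices.

```python
from itertools import product
from math import prod

def sigma_pi(x, monomials, weights):
    """Threshold-activated higher-order (sigma-pi) unit.

    x         : input vector with entries in {-1, +1}
    monomials : list of index tuples; each tuple selects the inputs
                whose product forms one monomial (the empty tuple
                gives the constant monomial 1)
    weights   : one adjustable weight per monomial
    """
    net = sum(w * prod(x[i] for i in idx)
              for idx, w in zip(monomials, weights))
    return 1 if net > 0 else -1

# The parity dichotomy of the 3-cube is realized by a single monomial
# x0*x1*x2 -- far fewer than the general 0.75 * 2^n bound -- whereas a
# purely linear unit (degree-1 monomials only) cannot separate it.
for x in product((-1, 1), repeat=3):
    assert sigma_pi(x, [(0, 1, 2)], [1.0]) == x[0] * x[1] * x[2]
```

The question studied above is then: over all dichotomies of the n-cube, how long must the `monomials` list be in the worst case?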

DB, the robot used in human-robot interaction experiments

Motor interference: an objective tool to test the extent to which a robot is perceived as human-like
It is generally accepted that (humanoid) robots will become part of our daily lives, so it is important to understand how well they will be accepted as social partners. In this direction, we have adopted the motor interference effect observed in human-human interactions to study the human perception of robots as social partners. Motor interference refers to the differential effect of observing an action while performing a compatible or an incompatible action; an example of a compatible/incompatible movement pair is vertical versus lateral hand movements. We have recently shown that a humanoid robot (DB) moving similarly to a human elicits motor interference. We are now conducting experiments to tease apart the contributions of motion and form to this reaction. To get an idea of the experimental setup, click here.

Activity maps of the units that model the AIP neurons

Grasp Affordance Learning
Grasp affordance refers to the intrinsic features of an object that are relevant for grasping. For example, the color of a pen is, in general, not part of its grasp affordance because it does not guide grasping behavior. In macaque monkeys, the parietal area AIP appears to be involved in affordance extraction; together with the ventral premotor cortex (F5), AIP forms the core of the monkey grasping circuit. Recently, I developed a model of AIP neurons based on the hypothesis that the early grasping of infants (mediated by other mechanisms) provides the training data for the F5-AIP complex to learn a mapping from visual to motor representations. The critical test is then whether this visuomotor learning leads to the emergence of unit responses comparable to those of actual AIP neurons; the simulation results show that it does. The future research plan is to compare the modeled AIP unit activities with AIP neuron discharge profiles in a quantitative way.
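The learning scheme described above can be caricatured as supervised regression from visual object features to motor grasp parameters. The sketch below is purely illustrative: the features (size, orientation), the linear target mapping, and the delta-rule learner are my assumptions for exposition, not the model's actual architecture.

```python
import random

# Hypothetical visual->motor training pairs: early grasps supply
# examples mapping object features (size, orientation) to a grasp
# parameter (grip aperture). The target mapping is made up.
def make_pair():
    size = random.uniform(1.0, 5.0)
    orient = random.uniform(-1.0, 1.0)
    aperture = 1.2 * size + 0.3 * orient + 0.5
    return (size, orient), aperture

random.seed(0)
w = [0.0, 0.0]   # weights for (size, orientation)
b = 0.0          # bias
lr = 0.01        # learning rate

for _ in range(20000):
    (s, o), y = make_pair()
    err = y - (w[0] * s + w[1] * o + b)   # prediction error
    # Delta-rule update: nudge parameters to reduce the error.
    w[0] += lr * err * s
    w[1] += lr * err * o
    b += lr * err
```

After training, the learned parameters recover the underlying mapping; in the model, the analogous claim is that the learned units come to respond like AIP neurons.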

The cortical grasp planning and execution circuit of macaque monkeys.

Mirror Neurons and Imitation
High-level functions such as imitation, action understanding, and (precursors of) language are commonly attributed to mirror neurons. However, it is not clear how far the human mirror system has evolved to support imitation and language, if indeed there is a connection between these skills and mirror neurons. Furthermore, the number of studies that take a computational viewpoint on these hypotheses is limited. Recently, guided by my earlier modeling of mirror neurons and mental-state inference mechanisms, I conducted a meta-analysis of the computational models (which can be seen as models of mirror neurons) and current opinions about mirror neuron function. Here is the reprint.

Older projects and links 

PhD Related links



International Journal Publications

  1. Amirshirzad N, Kumru A, Oztop E (2019) Human Adaptation to Human-Robot Shared Control. IEEE Transactions on Human-Machine Systems, early access, doi: 10.1109/THMS.2018.2884719
  2. Imre M, Oztop E, Nagai Y, Ugur E (2019) Affordance-Based Altruistic Robotic Architecture for Human-Robot Collaboration. Adaptive Behavior, Early access doi:10.1177/1059712318824697
  3. Teramae T, Ishihara K, Babic J, Morimoto J, Oztop E (2018) Human-in-the-loop control and task learning for pneumatically actuated muscle based robots. Frontiers in Neurorobotics, doi: 10.3389/fnbot.2018.00071
  4. Taniguchi T, Ugur E, Hoffmann M, Jamone L, Nagai T, Rosman B, Matsuka T, Iwahashi N, Oztop E, Piater J, Wörgötter F (2018) Symbol Emergence in Cognitive Developmental Systems: a Survey. IEEE Transactions on Cognitive and Developmental Systems, early access, doi: 10.1109/TCDS.2018.2867772
  5. Ersen M, Oztop E, Sariel S (2017) Enabling Cognition for Robot Manipulation in Human Environments: Requirements, Recent Work and Open Problems. The IEEE Robotics and Automation Magazine 24 (3), pp. 108-122
  6. Babic J, Oztop E, Kawato M (2016) Human motor adaptation in full body movements: squat-to-stand under postural perturbations. Nature Scientific Reports 6: 32868 (doi:10.1038/srep32868)
  7. Sezener E, Oztop E (2015) Minimal sign representation of Boolean functions: algorithms and exact results for low dimensions. Neural Computation 27(8):1796-823
  8. Ugur E, Sahin E, Nagai Y, Oztop E (2015) Staged Development of Robot Skills: Behavior Formation, Affordance Learning and Imitation, IEEE Transactions on Autonomous Mental Development 7(2), pp. 119-139
  9. Ugur E, Nagai Y, Celikkanat H, Oztop E (2015) Parental scaffolding as a bootstrapping mechanism for learning grasp affordances and imitation skills. Robotica 33 (05): 1163-1180
  10. Peternel L, Petric T, Oztop E, Babic J (2014) Teaching robots to cooperate with humans in dynamic manipulation tasks based on multi-modal human-in-the-loop approach. Autonomous Robots. 36(1-2): 123-136
  11. Arbib M, Bonaiuto J, Bornkessel-Schlesewsky I, Kemmerer D, MacWhinney B, Nielsen F, Oztop E (2014) Action and Language Mechanisms in the Brain: Data, Models and Neuroinformatics. Neuroinformatics. 12: 209-225
  12. Oztop E, Kawato M, Arbib M (2013) Mirror Neurons: Functions, Mechanisms and Models. Neuroscience Letters 540: 43-55
  13. Kober J, Wilhelm A, Oztop E, Peters J (2012) Reinforcement Learning to Adjust Parameterized Motor Primitives to New Situations. Autonomous Robots 33(4): 361-379
  14. Moore B, Oztop E (2012) Robotic grasping and manipulation through human visuomotor learning. Robotics and Autonomous Systems 60: 441-451
  15. Gurbuz S, Oztop E, Inoue N (2012) Model free head pose estimation using stereovision. Pattern Recognition 45(1): 33-42
  16. Babic J, Hale JG, Oztop E (2011) Human sensorimotor learning for humanoid robot skill synthesis. Adaptive Behavior 19(4): 250-263
  17. Ugur E, Oztop E, Sahin E (2011) Goal emulation and planning in perceptual space using learned affordances. Robotics and Autonomous Systems 59, 580-595
  18. Oztop E (2009) Sign representation of Boolean functions using a small number of monomials. Neural Networks 22: 938-948
  19. Chaminade T, Oztop E, Gordon C, Kawato M (2008) From self-observation to imitation: visuomotor association on a robotic hand. Brain Research Bulletin 75(6):775-784
  20. Oztop E (2006) An upper bound on the minimum number of monomials required to separate dichotomies of {-1,1}^n. Neural Computation 18: 3119-3138
  21. Oztop E, Kawato M, Arbib M (2006) Mirror neurons and imitation: A computationally guided review. Neural Networks 19:254-271
  22. Oztop E, Imamizu H, Cheng G, Kawato M (2006) A computational model of anterior intraparietal (AIP) neurons. Neurocomputing 69: 1354-1361
  23. Oztop E, Franklin DW, Chaminade T, Cheng G (2005) Human-humanoid interaction: Is a humanoid robot perceived as a human? International Journal of Humanoid Robotics 2:(4) 537-559
  24. Oztop E, Wolpert D, Kawato M (2005) Mental state inference using visual control parameters. Cognitive Brain Research 22: 129-151
  25. Oztop E, Bradley NS, Arbib MA (2004) Infant grasp learning: a computational model, Exp Brain Res. 158:480-503
  26. Oztop E., Arbib MA (2002) Schema design and implementation of the grasp-related mirror neuron system. Biological Cybernetics 87: (2) 116-140
  27. Arbib MA, Billard A, Iacoboni M, Oztop E (2000) Synthetic brain imaging: Grasping, mirror neurons and imitation. Neural Networks 13: (8-9) 975-997
  28. Oztop E, Mulayim AY, Atalay V, Yarman-Vural F. (1999) Repulsive attractive network for baseline extraction on document images. Signal Processing 75 (1) 1-10

Invited Book Chapters

  1. Oztop E, Ugur E, Shimizu Y, Imamizu H (2014) Humanoid Brain Science. In: Cheng G (ed) Humanoid Robotics and Neuroscience: Science, Engineering and Society. Taylor & Francis
  2. Oztop E, Kawato M (2009) Models For The Control of Grasping In: Nowak D, Hermsdoerfer J (eds) Sensorimotor Control of Grasping: Physiology and Pathophysiology. Cambridge University Press
  3. Oztop E (2009) Mirror Neurons: Extraordinary or Ordinary? In: Minett JW, Wang W(eds) Language, Evolution, and the Brain. City University of Hong Kong Press
  4. Oztop E, Arbib M, Bradley N (2006) The Development of Grasping and the Mirror System. In: Arbib M (ed) Action to Language via the Mirror Neuron System. Cambridge University Press
  5. Crowley M, Marmol S, Oztop E. (2002) Crowley-Arbib Saccade Model. Chapter in The Neural Simulation Language, MIT Press, MA 2002

Domestic Journal Articles and Theses

  1. Oztop E (2007) Models of mirror system. Scholarpedia (online encyclopedia), 2(10):3276
  2. Oztop E, Kawato M (2005) Conceptual and Computational Models of Mirror Neurons. The Brain & Neural Networks (Journal of Japanese Neural Network Society) 12: 61-73
  3. Arbib MA, Oztop E, Zukow-Goldring P (2005) Language and the Mirror System: A Perception/Action Based Approach to Communicative Development. Cognitie, Creier, Comportament / Cognition, Brain, Behavior IX(3): 239-272
  4. Oztop E (2002) Modeling the Mirror: Grasp Learning and Action Recognition. Ph.D. thesis, University of Southern California
  5. Oztop E (1996) A New Content Addressable Memory Utilizing High Order Neurons. Master thesis, Middle East Technical University, Turkey

Invited Talks at International Workshop and Conferences

  1. Oztop E (2016.10) Shared Learning and Coadaptation, NII Shonan Meeting on Cognitive development and symbol emergence in humans and robots, Kanagawa, Japan
  2. Oztop E (2016.2) Action representation in F5 Mirror Neurons: A computational view, The 2nd International Symposium on Cognitive Neuroscience Robotics: Before and Beyond Mirror Neurons, Osaka, Japan
  3. Oztop E (2013.11) Adaptive Systems through Human Sensorimotor Learning, IEEE IROS Workshop on Cognitive Robotics Systems: Replicating Human Actions and Activities, Tokyo, Japan
  4. Oztop E (2013.06) Adaptive Systems through Human Sensorimotor Learning, Chist-Era Conference (Keynote), Brussels, Belgium
  5. Oztop E (2011.5) From robot skill synthesis to understanding human motor control, The fourth Symposium on Cognitive Neuroscience Robotics, Osaka, Japan
  6. Oztop E (2010.9) Human sensorimotor learning for robot skill synthesis (Keynote Speech), IEEE International Symposium in Robot and Human Interactive Communication (RO-MAN 2010), Viareggio, Italy
  7. Oztop E (2010.5) Can we learn from biology about object representation for grasping and manipulation? Workshop on Representations for object grasping and manipulation, IEEE International Conference on Robotics and Automation, Anchorage, Alaska, USA
  8. Oztop E (2009.9) Human Visuomotor Learning for Robot Skill Synthesis. First IEEE Workshop on Computer Vision for Humanoid Robots in Real Environments, IEEE International Conference on Computer Vision, Kyoto, Japan
  9. Oztop E, Hale J, Babic J, Kawato M (2008.9) Robots as complex tools for humans to control: Human visuo-motor learning for robot skill synthesis. Workshop on Grasp and Task Learning by Imitation, International Conference on Intelligent Robots and Systems, Nice, France
  10. Oztop E (2007.4) Modeling Mirror Neurons. International Seminar on Language, Evolution, and the Brain, International Institute for Advanced Studies, Kyoto, Japan
  11. Oztop E, Babic J, Hale J, Cheng G, Kawato M (2007.11) From Biologically Realistic Imitation to Robot Teaching via Human Motor Learning. In: 14th International Conference on Neural Information Processing, Kitakyushu, Japan
  12. Oztop E. (2004.9) Modeling the Mirror Neurons. IROS, International Workshop on Robotic Imitation, Sendai, Japan

International Peer Reviewed Conference Papers (selected)

  1. Peternel L, Oztop E, Babic J (2016.10) A Shared Control Method for Online Human-in-the-Loop Robot Learning Based on Locally Weighted Regression. IEEE/RSJ International Conference on Intelligent Robots and Systems
  2. Amirshirzad N, Oztop E (2016.9) Synergistic human-robot shared control via human goal estimation. Annual Conference of Society of Instrument and Control Engineers (SICE), Tsukuba
  3. Zamani MA, Oztop E (2015.7) Simultaneous Human-Robot Adaptation for Effective Skill Transfer. International Conference on Advanced Robotics (ICAR), Istanbul
  4. Ozturk N, Oztop E (2015.7) Cooperative Multi-Task Assignment for Heterogeneous UAVs. International Conference on Advanced Robotics (ICAR), Istanbul
  5. Kirtay M, Oztop E (2013.5) Emergent Emotion via Neural Computational Energy Conservation on a Humanoid Robot. IEEE-RAS Intl. Conf. on Humanoid Robots, Atlanta, Georgia, USA
  6. Ugur E, Sahin E, Oztop E (2012.9) Self-discovery of motor primitives and learning grasp affordances. IEEE/RSJ International Conference on Intelligent Robots and Systems, Algarve, Portugal
  7. Ugur E, Celikkanat H, Sahin E, Nagai Y, Oztop E (2011.10) Learning to Grasp with Parental Scaffolding, IEEE-RAS Intl. Conf. on Humanoid Robots, Bled, Slovenia
  8. Ugur E, Sahin E, Oztop E (2011.5) Unsupervised learning of object affordances for planning in a mobile manipulation platform, IEEE Intl. Conf. on Robotics and Automation, 4326-4332, Shanghai, China
  9. Ugur E, Oztop E, Sahin E (2011.5) Going beyond the perception of affordances: Learning how to actualize them through behavioral parameters, IEEE Intl. Conf. on Robotics and Automation pp. 4768-4773, Shanghai, China
  10. Kober J, Oztop E, Peters J (2010.06) Reinforcement Learning to Adjust Robot Movements to New Situations, Robotics: Science and Systems, Proc., Zaragoza, Spain
  11. Steffen J, Oztop E, Ritter H (2010.10) Structured Unsupervised Kernel Regression for Closed-loop Motion Control, IEEE International Conference on Robotics and Automation, Proc., Taipei, Taiwan
  12. Moore B, Oztop E (2010.8) Redundancy parameterization for flexible motion control, ASME IDETC 2010, Montreal, Canada
  13. Ugur E, Sahin E, Oztop E (2009.11) Affordance learning from range data for multistep planning. International Conference on Epigenetic Robotics: Modeling Cognitive Development in Robotic Systems, Proc., Venice, Italy
  14. Oztop E, Lin LH, Kawato M, Cheng G (2007.4) Extensive Human Training for Robot Skill Synthesis: Validation on a Robotic Hand. In: IEEE International Conference on Robotics and Automation, Roma, Italy
  15. Oztop E, Lin LH, Kawato M, Cheng G (2006.12) Dexterous Skills Transfer by Extending Human Body Schema to a Robotic Hand. IEEE-RAS/RSJ International Conference on Humanoid Robots, Proc., Genova, Italy
  16. Gumpp T, Azad P, Welke K, Oztop E, Dillmann R, Cheng G (2006.12) Unconstrained Real-time Markerless Hand Tracking for Humanoid Interaction. IEEE-RAS/RSJ International Conference on Humanoid Robots, Proc., Genova, Italy
  17. Welke K, Oztop E, Ude A, Dillmann R, Cheng G (2006.12) Learning feature representations for an object recognition system. IEEE-RAS/RSJ International Conference on Humanoid Robots, Proc., Genova, Italy
  18. Oztop E, Chaminade T, Cheng G, Kawato M (2005.12) Imitation Bootstrapping: Experiments on a Robotic Hand. IEEE-RAS/RSJ International Conference on Humanoid Robots, Proc., Tsukuba, Japan
  19. Oztop E, Imamizu H, Cheng G, Kawato M (2005.7) A Computational Model of Anterior Intraparietal (AIP) Neurons. 14th Annual Computational Neuroscience Meeting, Madison, Wisconsin, USA
  20. Oztop E, Franklin D, Chaminade T (2004.11) Human-Humanoid Interaction: Is a humanoid robot perceived as human? IEEE-RAS/RSJ International Conference on Humanoid Robots, Proc., Santa Monica, California, USA
  21. Oztop E, Wolpert D, Kawato M (2003) Mirror neurons: key for mental simulation? Proceedings of Computational Neuroscience Meeting, Alicante, Spain


Özyeğin University Robotics Laboratory 

Çekmeköy Campus, Nişantepe District,
Orman Street, 34794 Çekmeköy – İSTANBUL