June 14, 2017

Plenary & Invited Speakers

Plenary Speaker: Prof. David Hogg, School of Computing, University of Leeds, UK

“Learning about human behaviour for interactive robotics”
A robot can learn to interact naturally with people by observing human behaviour. Part of the challenge is to learn visual concepts and language constructs, such as the meaning of words and grammatical structure. We review recent work on acquiring visual and language models of human behaviour from online media such as TV shows, and from extended observation, using a mobile robot, of people engaged in everyday activities.

David Hogg is Professor of Artificial Intelligence at the University of Leeds. He is internationally recognized for his work on computer vision, particularly in the areas of video analysis and activity recognition. He works extensively across disciplinary boundaries, and over the past ten years has collaborated closely with researchers in design engineering and the performing arts. He has been a visiting professor at the MIT Media Lab, Pro-Vice-Chancellor for Research and Innovation at the University of Leeds, Chair of the ICT Strategic Advisory Team at the Engineering and Physical Sciences Research Council (EPSRC) in the UK, and most recently Chair of an international review panel for Robotics and Artificial Intelligence commissioned by EPSRC. He is currently Chair of the Academic Advisory Group of the Worldwide Universities Network (WUN), helping to promote collaborative research among more than 20 prominent research-intensive universities from around the globe. David is a Fellow of the European Association for Artificial Intelligence (EurAI), a Distinguished Fellow of the British Machine Vision Association, and a Fellow of the International Association for Pattern Recognition.


Plenary Speaker: Prof. Soo-Young Lee, Director of the Brain Science Research Center and Professor, School of Electrical Engineering, KAIST, Korea

“Intelligent Conversational Agent with Emotion and Personality”

Abstract & Biography: To be announced.

Invited Speaker: Prof. Takashi Kubota, ISAS/JAXA, Japan

“AI and Robotics Technology for Planetary Exploration”

JAXA has recently studied and developed a new roadmap for deep space exploration. ISAS research groups are earnestly studying future lunar and planetary exploration missions, including landers and rovers. Surface exploration by rovers and wide-area exploration by aircraft are also under study, and subsurface exploration by mole-type robots is under development. Small-body exploration missions have recently received worldwide attention. In small-body exploration especially, detailed in-situ surface exploration by tiny probes is an effective and fruitful approach that is expected to contribute strongly to scientific studies. JAXA is currently promoting the Hayabusa2 mission, which includes a sample-return attempt to and from a near-Earth asteroid. AI and robotics technology is essential for deep space exploration because of communication delays and the need to explore unknown environments. Prof. Kubota first briefly presents future lunar and planetary exploration plans, then discusses AI and robotics technology for deep space exploration. He describes the AI technology and exploration robots developed so far, shows some experimental results, and presents intelligent systems for navigation, path planning, sampling, and related tasks.
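As a loose illustration of the kind of onboard autonomy that communication delays demand (a sketch only, not JAXA flight software), the snippet below runs A* path planning over a small hazard grid of the sort a rover might build from its terrain sensors; the grid, unit step costs, and start/goal cells are invented for the example.

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid: 0 = traversable, 1 = hazard."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Manhattan distance: admissible for unit-cost 4-connected motion.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, None)]     # (f, g, cell, parent)
    best_g, parent_of = {start: 0}, {}
    while frontier:
        _, g, cell, parent = heapq.heappop(frontier)
        if cell in parent_of:                   # already expanded
            continue
        parent_of[cell] = parent
        if cell == goal:                        # rebuild the path backwards
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent_of[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc), cell))
    return None                                 # no traversable route

# Toy hazard map: 1s mark rocks or slopes the rover must avoid.
grid = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(astar(grid, (0, 0), (3, 3)))
```

Onboard, such a planner would replan as new terrain data arrive rather than wait minutes for a round-trip command from Earth.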

Takashi Kubota is a professor at the Institute of Space and Astronautical Science (ISAS), Japan Aerospace Exploration Agency (JAXA), Japan. He received his doctoral degree in electrical engineering from the University of Tokyo in 1991, where he is also a professor in the graduate school. He is currently a space science program director at ISAS/JAXA. He was a visiting scientist at the Jet Propulsion Laboratory in 1997 and 1998, and was in charge of guidance, navigation, and control for the asteroid exploration mission HAYABUSA. His research interests include exploration robots, AI in space, robotics, and image-based navigation.


Invited Speaker: Prof. Bela Stantic, Director of the “Big Data and Smart Analytics” Lab – IIIS, Griffith University, Australia

“Big Data Analytics and Robotics”

Abstract: To be announced.

Bela Stantic is internationally recognized in the field of efficient management of complex data structures, such as those found in Big Data and spatio-temporal data. He has been invited to give many keynotes and invited talks at highly ranked international conferences and prestigious institutions, and he serves in editorial roles and as a reviewer for many top-ranked journals. He has successfully applied his research across disciplines and has published more than 100 peer-reviewed journal and conference papers, which in turn have helped attract funding from different sources totalling more than a million dollars. He is the founder and Director of the “Big Data and Smart Analytics” Lab within the Institute for Integrated and Intelligent Systems at Griffith University. Professor Stantic is currently Head of the School of Information and Communication Technology.

Invited Speaker: Prof. Igor M. Verner, Technion – Israel Institute of Technology, Israel

“3D Modeling, IoT and Augmented Reality to Transparentize Robot Learning”

During interaction with learning robots, people often have difficulty understanding the robot's intent and its practical realization. To address this challenge, we propose a connected environment that integrates the robot and its digital twin, supports reinforcement learning of robot assistive tasks through digital experiments, and uses augmented reality tools to make the robot's learning process transparent. Our research investigates a scenario in which students learn by experiential inquiry into robot reinforcement learning of weightlifting. We used CAD to create a digital twin of the robot, IoT to provide connectivity, and virtual sensors to measure parameters of the robot's dynamics. We then developed an augmented reality (AR) tool that connects the physical robot and its digital twin through IoT and facilitates student learning by displaying the robot's dynamic parameters in real time. Our results indicate that providing controls for simultaneous interaction with a physical robot and its digital twin in a mixed-reality environment opens a path to a breakthrough experience of learning with learning robots.
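To make the digital-twin experiments concrete, here is a minimal, invented sketch (not the authors' system): tabular Q-learning on a toy one-dimensional “weightlifting” task standing in for the simulated robot. The state discretization, slip probability, rewards, and learning parameters are all illustrative assumptions.

```python
import random

# Toy task: the arm's lift angle is discretized into states 0..N_STATES;
# action 0 lowers the weight, action 1 raises it. Reaching N_STATES
# ("weight lifted") ends the episode with reward +1.
N_STATES, ACTIONS = 10, (0, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    """Invented digital-twin dynamics: raising occasionally slips back."""
    if action == 1:
        state = state + 1 if random.random() > 0.2 else max(state - 1, 0)
    else:
        state = max(state - 1, 0)
    done = state == N_STATES
    return state, (1.0 if done else -0.01), done   # small per-step penalty

q = [[0.0, 0.0] for _ in range(N_STATES + 1)]
for episode in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        a = random.choice(ACTIONS) if random.random() < EPSILON \
            else max(ACTIONS, key=lambda a: q[s][a])
        s2, r, done = step(s, a)
        # Standard Q-learning update.
        q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
        s = s2

print("greedy action per state:",
      [max(ACTIONS, key=lambda a: q[s][a]) for s in range(N_STATES)])
```

In the environment described above, the AR tool would overlay quantities like these learned values and the twin's measured dynamics on the physical robot, so students can watch the policy improve.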

Igor M. Verner is an associate professor, director of technology teacher education, and head of the Center for Robotics and Digital Technology Education at the Faculty of Education in Science and Technology, Technion – Israel Institute of Technology. Dr. Verner received an M.S. degree in mathematics, a Ph.D. in computer-aided design in manufacturing, and a teaching certificate in technology. For 25 years he has conducted research in educational robotics, on topics including learning through creating robotic models of biological systems, the didactics of robot competitions, spatial training in robotic environments, learning with learning robots, automation of school science laboratories, robotics in science museums, and learning by digital design and making. Dr. Verner is a coordinator of learning assessment for the MIT Beaver Works Summer Institute program and a visiting scholar at Teachers College, Columbia University.


Invited Speaker: Prof. Eric T. Matson, Purdue University, West Lafayette, IN, USA

“Realizing Applied, Useful Self-organizing Counter UAV Systems with the HARMS Integration Model”

The future in the enhancement of cyber-physical system and robotic functionalities lies not only in the mechanical and electronic improvement of robots' sensors, mobility, stability, and kinematics, but also, if not mostly, in their ability to connect to other actors: humans, agents, robots, machines, and sensors (HARMS). The capability to communicate openly, to coordinate goals, to optimize the division of labor, to share intelligence, to be fully aware of the entire situation, and thus to optimize fully coordinated actions will be necessary. Additionally, the ability for two actors to work together without preference for any specific type of actor, but simply from the necessity of capability, is provided by a requirement of indistinguishability, similar to the discernment feature of rough sets.

Once all of these actors can communicate effectively, they can take on group rational decision making, such as choosing which action optimizes the group's effectiveness or utility. Given group decision making, optimized capability-based organization can take place, enabling human-like organizational behavior. As in human organizations, artificial collectives with the capability to organize will exhibit emergent normative behavior. This session shows how these models are applied to real-world problems in security, first response, defense, and agriculture.
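As a rough sketch of the indistinguishability idea (hypothetical code, not the HARMS implementation), the snippet below matches a task to an actor purely by advertised capabilities; the actor's kind is recorded but never consulted, so a human, robot, machine, or sensor is chosen on capability alone.

```python
from dataclasses import dataclass, field

@dataclass
class Actor:
    name: str
    kind: str                                   # stored, but never used for matching
    capabilities: set = field(default_factory=set)

def select_actor(actors, required):
    """Return the first actor whose capabilities cover the task's needs."""
    for actor in actors:
        if required <= actor.capabilities:      # subset test: capability coverage
            return actor
    return None

actors = [
    Actor("scout-1", "robot",   {"fly", "detect_uav"}),
    Actor("op-7",    "human",   {"authorize", "detect_uav"}),
    Actor("jam-2",   "machine", {"jam_signal"}),
]

# Counter-UAV task: the selector neither knows nor cares which kind of
# actor responds, only that the capability requirement is met.
chosen = select_actor(actors, {"detect_uav"})
print(chosen.name, chosen.kind)                 # -> scout-1 robot
```

A fuller model would add group decision making on top of this matching step, but the type-blind selection is the core of the indistinguishability requirement described above.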

Eric T. Matson is an Associate Professor in the Department of Computer and Information Technology in the College of Technology at Purdue University, West Lafayette. Prof. Matson has been an International Faculty Scholar in the Department of Electrical Engineering at Kyung Hee University, Yongin City, Korea; a Visiting Professor at LISSI, University Paris-Est Créteil (Paris 12), France; and a Visiting Professor in the Department of Computer Science and Engineering at Dongguk University, Seoul, South Korea, and in the School of Informatics at Incheon National University, Incheon, South Korea. He is the Director of the Robotic Innovation, Commercialization and Education (RICE) Research Center, Director of the Korean Software Square at Purdue, and co-founder of the M2M Lab at Purdue University, which performs research in the areas of multiagent systems, cooperative robotics, and wireless communication, with applications focused on safety and security robotics and on agricultural robotics and systems. At Purdue he is a University Faculty Scholar, and he is a member of the Board on Army Science and Technology (BAST) of the National Academies of Sciences, Engineering, and Medicine.