June 14, 2017

Plenary & Invited Speakers

Plenary Speaker: Prof. David Hogg, School of Computing, University of Leeds, UK

“Learning about human behaviour for interactive robotics”

A robot can learn to interact in a natural way with people through observing human behaviour. Part of the challenge is to learn visual concepts and language constructs, such as the meaning of words and grammatical structure. We review recent work on the acquisition of visual and language models for human behaviour from on-line media such as TV shows, and from extended observation, using a mobile robot, of people involved in everyday activities.

David Hogg is Professor of Artificial Intelligence at the University of Leeds. He is internationally recognized for his work on computer vision, particularly in the areas of video analysis and activity recognition. He works extensively across disciplinary boundaries, and over the past ten years has collaborated closely with researchers in design engineering and the performing arts. He has been a visiting professor at the MIT Media Lab, Pro-Vice-Chancellor for Research and Innovation at the University of Leeds, Chair of the ICT Strategic Advisory Team at the Engineering and Physical Sciences Research Council (EPSRC) in the UK, and most recently Chair of an international review panel for Robotics and Artificial Intelligence commissioned by EPSRC. He is currently Chair of the Academic Advisory Group of the Worldwide Universities Network (WUN), helping to promote collaborative research among more than 20 prominent research-intensive universities from around the globe. David is a Fellow of the European Association for Artificial Intelligence (EurAI), a Distinguished Fellow of the British Machine Vision Association, and a Fellow of the International Association for Pattern Recognition.

 

Plenary Speaker: Prof. Soo-Young Lee, Director of Brain Science Research Center, School of Electrical Engineering, KAIST, Korea

“Intelligent Conversational Agent with Emotion and Personality”

For successful interaction between human and machine agents, the agents need to understand both explicitly presented human intention and the unpresented human mind. Although current intelligent agents mainly rely on the former, through keystrokes, speech, and gestures, the latter will play an important role for new and upcoming AI agents. In this talk we will start with a brief introduction to Deep Learning inspired by computational models of the human auditory and visual pathways. Then we will move to higher cognitive functions such as situation awareness and decision making, and present our continuing efforts to understand the unpresented human mind, which may reside in the internal states of neural networks in the human brain. Special emphasis will be given to emotion, human memory, trustworthiness, and sympathy to others during interactions. Human memory changes slowly over time, differs from person to person, and may be used to identify a person. On the other hand, sympathy and trustworthiness towards others have much shorter time constants, and may be identified within a few user interactions. Therefore, AI agents will be able to interact with humans appropriately using information on “who you are” and “what you think”. The brain’s internal states are currently estimated from brain-related signals such as fMRI (functional Magnetic Resonance Imaging), EEG (electroencephalography), and eye movements, which eventually will be used to provide near-ground-truth labels for simple audio-visual signals. Finally, as an example, we will summarize the Emotional Conversational Agent Project, a Korean Flagship AI Program.

Soo-Young Lee is a professor of Electrical Engineering at the Korea Advanced Institute of Science and Technology (KAIST). In 1997 he established the Brain Science Research Center at KAIST, and from 1998 to 2008 he led the Korean Brain Neuroinformatics Research Program with dual goals: understanding the brain's information-processing mechanisms and developing intelligent machines based on those mechanisms. He is now also Director of the KAIST Institute for Artificial Intelligence and leads the Emotional Conversational Agent Project, a Korean National Flagship AI Project. He is President of the Asia-Pacific Neural Network Society for 2017, and has received the Presidential Award from INNS and the Outstanding Achievement Award from APNNA. His research interests lie in artificial cognitive systems with human-like intelligent behavior based on biological brain information processing. He has worked on speech and image recognition, natural language processing, situation awareness, internal-state recognition, and human-like dialog systems. Among the many internal states, he is especially interested in emotion, sympathy, trust, and personality. His work combines computational models and cognitive neuroscience experiments. His group took first place in the 2015 Emotion Recognition in the Wild (EmotiW) challenge on emotion recognition from facial images.

 

Invited Speaker: Prof. Takashi Kubota, ISAS/JAXA, Japan

“AI and Robotics Technology for Planetary Exploration”

JAXA has recently studied and developed a new roadmap for deep space exploration. ISAS research groups have been earnestly studying future lunar and planetary exploration missions, including landers and rovers. Surface exploration rovers and wide-area exploration by airplanes are also under study, and subsurface exploration by mole-type robots is under development. Recently, small-body exploration missions have received a great deal of attention worldwide. In small-body exploration especially, detailed in-situ surface exploration by tiny probes is an effective and fruitful approach that is expected to make strong contributions to scientific studies. JAXA is currently promoting the Hayabusa-2 mission, which includes a sample-return attempt to and from a near-Earth asteroid. AI and robotics technology is certainly required for deep space exploration because of communication time delays and the need to explore unknown environments. Prof. Kubota will first present the future lunar and planetary exploration plans briefly. He will then talk about AI and robotics technology for deep space exploration, introduce the developed AI technology and exploration robots in detail, and show some experimental results. He will also present intelligent systems for navigation, path planning, sampling, and related tasks.
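The need for onboard autonomy follows directly from the communication delay: a distant rover cannot wait for ground commands at every obstacle. As a purely illustrative sketch, not a system developed at ISAS/JAXA, the following Python snippet shows the kind of grid-based A* path planning an onboard navigation stack might rely on; the map, costs, and heuristic are hypothetical.

```python
# Illustrative only: a minimal grid-based A* planner of the kind an onboard
# autonomy stack might use when round-trip communication delays rule out
# tele-operation. The grid, costs, and heuristic are hypothetical.
import heapq

def a_star(grid, start, goal):
    """Find a shortest 4-connected path on a 2D occupancy grid (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan distance
    open_set = [(heuristic(start), 0, start, [start])]               # (f, g, node, path)
    visited = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                neighbor = (nr, nc)
                heapq.heappush(open_set, (g + 1 + heuristic(neighbor), g + 1, neighbor, path + [neighbor]))
    return None  # no path found

# Example: plan around a small obstacle on a hypothetical 4x4 map.
grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
print(a_star(grid, (0, 0), (2, 3)))
```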

Takashi Kubota is a professor at the Institute of Space and Astronautical Science (ISAS), Japan Aerospace Exploration Agency (JAXA), Japan. He received his doctoral degree in electrical engineering from the University of Tokyo in 1991. He is also a professor at the graduate school of the University of Tokyo and is currently a space science program director of ISAS/JAXA. He was a visiting scientist at the Jet Propulsion Laboratory in 1997 and 1998, and was in charge of guidance, navigation, and control in the asteroid exploration mission HAYABUSA. His research interests include exploration robots, AI in space, robotics, and image-based navigation.

 

Invited Speaker: Prof. Bela Stantic, Director of “Big Data and Smart Analytics” Lab – IIIS, Griffith University, Australia

“Big Data Analytics and Robotics”

Big Data is becoming a crucial part of many areas of research and practice. Every day we witness new sources of data and diverse sensors generating valuable data, and a great deal of useful open data is also accessible. However, traditional methods for data processing and analytics cannot efficiently deal with such volumes of diverse, high-velocity data, so new methods and algorithms have to be proposed. These new Big Data methods also have the potential to enhance robotics and help robots understand and operate in more complex environments, enabling a qualitative leap in the performance and utilization of robots in a wide range of practical applications. This talk will highlight the Big Data approach and the benefits it can provide to robotics. It will also elaborate on several projects currently running within the Big Data Lab.

Bela Stantic is internationally recognized in the field of efficient management of complex data structures, such as those found in Big Data and spatio-temporal data. He has been invited to give keynotes and invited talks at highly ranked international conferences and prestigious institutions, and has carried out editorial duties and reviews for many top-ranked journals. He has successfully applied his research across disciplines and has published more than 100 peer-reviewed journal and conference publications, which in turn have helped attract funding from different sources totalling more than a million dollars. He is the founder and Director of the “Big Data and Smart Analytics” Lab within the Institute of Integrated and Intelligent Systems at Griffith University. Professor Stantic is currently Head of the School of Information and Communication Technology.

 

Invited Speaker: Prof. Igor M. Verner, Technion – Israel Institute of Technology, Israel

“3D Modeling, IoT and Augmented Reality to Transparentize Robot Learning”

During interaction with learning robots, people often experience difficulties understanding the robot's intent and its practical realization. To answer this challenge, we propose a connected environment which integrates the robot and its digital twin, supports reinforcement learning of robot assistive tasks through digital experiments, and uses augmented reality tools to make the robot learning process transparent. Our research investigates a scenario in which students learn by experiential inquiry into robot reinforcement learning of weightlifting. We utilized CAD to create a digital twin of the robot, IoT to provide connectivity, and virtual sensors to measure parameters of robot dynamics. We then developed an augmented reality (AR) tool which connects the physical robot and its digital twin through IoT and facilitates student learning by displaying the dynamic parameters of the robot in real time. Results of our research indicate that providing controls for simultaneous interaction with a physical robot and its digital twin in a mixed reality environment opens a path to the breakthrough experience of learning with learning robots.
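To make the reinforcement-learning side of the scenario concrete, here is a minimal, hypothetical sketch, not the authors' implementation: a tabular Q-learning agent learning a toy "lifting" task of the kind a digital twin can run, with a comment marking where the dynamic parameters could be streamed to an AR display. The states, actions, and reward are assumptions made for illustration.

```python
# Minimal sketch, not the authors' system: tabular Q-learning on a toy
# "lifting" task standing in for the digital-twin experiments described above.
# States, actions, reward, and the logged "dynamic parameters" are hypothetical.
import random

N_STATES = 11          # discretised lift angle: 0 (down) .. 10 (fully lifted)
ACTIONS = (-1, +1)     # lower or raise by one angle bin
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action index]

def step(state, action):
    """Toy dynamics of the digital twin: move one bin, reward only at the top."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else -0.01
    return nxt, reward, nxt == N_STATES - 1

for episode in range(200):
    state, done = 0, False
    while not done:
        a_idx = random.randrange(2) if random.random() < EPS else q[state].index(max(q[state]))
        nxt, reward, done = step(state, ACTIONS[a_idx])
        # Standard Q-learning update.
        q[state][a_idx] += ALPHA * (reward + GAMMA * max(q[nxt]) - q[state][a_idx])
        # In an environment like the one described above, quantities such as the
        # current state, reward, and Q-values could be streamed over IoT to the
        # AR display so students can watch the learning unfold.
        state = nxt

print("Learned action per state:", [ACTIONS[row.index(max(row))] for row in q])
```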

Igor M. Verner is an associate professor, director of technology teacher education, and head of the Center for Robotics and Digital Technology Education at the Faculty of Education in Science and Technology, Technion – Israel Institute of Technology. Dr. Verner received the M.S. degree in mathematics, the Ph.D. in computer-aided design in manufacturing, and a teaching certificate in technology. He has conducted research in educational robotics for 25 years. His research topics include learning through creating robotic models of biological systems, the didactics of robot competitions, spatial training in robotic environments, learning with learning robots, automation of school science laboratories, robotics in science museums, and learning by digital design and making. Dr. Verner is a coordinator of learning assessment for the MIT Beaver Works Summer Institute program and a visiting scholar at Teachers College, Columbia University.

 

Invited Speaker: Prof. Eric T. Matson, Purdue University, West Lafayette, IN, USA

“Realizing Applied, Useful Self-organizing Counter UAV Systems with the HARMS Integration Model”

The future in the enhancement of cyber-physical system and robotic functionalities lies not only in the mechanical and electronic improvement of the robots’ sensors, mobility, stability, and kinematics, but also, if not mostly, in their ability to connect to other actors (humans, agents, robots, machines, and sensors: HARMS). The capability to communicate openly, to coordinate their goals, to optimize the division of labor, to share their intelligence, to be fully aware of the entire situation, and thus to optimize their fully coordinated actions will be necessary. Additionally, the ability for two actors to work together without preference for any specific type of actor, but simply from necessity of capability, is provided by a requirement of indistinguishability, similar to the discernibility feature of rough sets.

Once all of these actors can effectively communicate, they can take on group rational decision making, such as choosing the action that optimizes a group's effectiveness or utility. Given group decision making, optimized capability-based organization can take place to enable human-like organizational behavior. As in human organizations, artificial collectives with the capability to organize will exhibit emergent normative behavior. In this session, we will show how these models are applied to real-world problems in security, first response, defense, and agriculture.
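As a rough, hypothetical illustration of this capability-based view, and not part of the HARMS model itself, the following Python sketch selects actors for a task purely by capability, ignoring whether they are humans, agents, robots, machines, or sensors, and includes a tiny rough-set-style indiscernibility check. The actor names and capabilities are invented for the example.

```python
# Illustrative sketch only: capability-based actor selection in the spirit of the
# HARMS idea that actor type should not matter. Actor names, capabilities, and
# the task are hypothetical, not taken from the speaker's published model.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Actor:
    name: str
    kind: str                      # "human", "agent", "robot", "machine", "sensor"
    capabilities: frozenset = field(default_factory=frozenset)

def candidates(actors, required):
    """Return every actor able to perform the task, regardless of its kind."""
    required = frozenset(required)
    return [a for a in actors if required <= a.capabilities]

def indiscernible(a, b, attributes):
    """Rough-set style check: a and b cannot be told apart by the given attributes."""
    return all(getattr(a, attr) == getattr(b, attr) for attr in attributes)

actors = [
    Actor("Alice", "human", frozenset({"inspect", "report"})),
    Actor("R2",    "robot", frozenset({"inspect", "lift"})),
    Actor("UAV-1", "robot", frozenset({"inspect", "survey"})),
    Actor("Cam-3", "sensor", frozenset({"inspect"})),
]

# Any actor that can "inspect" is an acceptable candidate; its type is ignored.
print([a.name for a in candidates(actors, {"inspect"})])

# Judged on their full capability sets, Alice and R2 are discernible (False below),
# but restricted to the task-relevant capability "inspect" they are interchangeable.
print(indiscernible(actors[0], actors[1], ["capabilities"]))
```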

Eric T. Matson is an Associate Professor in the Department of Computer and Information Technology in the College of Technology at Purdue University, West Lafayette. Prof. Matson has been an International Faculty Scholar in the Department of Electrical Engineering at Kyung Hee University, Yongin City, Korea; a Visiting Professor with the LISSI, University of Paris et Creteil (Paris 12), Paris, France; a Visiting Professor in the Department of Computer Science and Engineering, Dongguk University, Seoul, South Korea; and a Visiting Professor in the School of Informatics at Incheon National University in Incheon, South Korea. He is the Director of the Robotic Innovation, Commercialization and Education (RICE) Research Center, Director of the Korean Software Square at Purdue, and co-founder of the M2M Lab at Purdue University, which performs research in the areas of multiagent systems, cooperative robotics, and wireless communication. The application areas are focused on safety and security robotics and agricultural robotics and systems. At Purdue, he is a University Faculty Scholar and a member of the Board on Army Science and Technology (BAST) for the National Academies of Science, Engineering and Medicine (NAS).