Attending NeurIPS 2020


I was fortunate to attend this year’s virtual Conference on Neural Information Processing Systems (NeurIPS 2020) thanks to registration support from the NeurIPS 3rd Robot Learning Workshop. Although this is not a conference I’m regularly involved in, attending it was quite an experience, filled with serendipity. (Note: part of this post was published as an article, “What a cognitive psychologist learned from robotics,” on Medium.)

Part of my motivation to attend NeurIPS came from Prof. Emre Neftci’s course on neural networks and deep learning, where he mentioned that tickets to this conference sold out in 11 minutes in 2018! I was so impressed by how popular a conference could be, and I wondered what kind of magic made it so.

It was great to see researchers from both academia and industry working on the same problems at NeurIPS. In contrast to the academic conferences I regularly attend in cognitive psychology and neuroscience, the questions discussed at NeurIPS are closer to real-life scenarios. At the Expo workshops, I enjoyed learning how FAANG companies apply machine learning algorithms to improve user experience, for example, how data scientists at Netflix design metrics and algorithms around users’ dynamic streaming preferences in order to provide show recommendations tailored to each user. As someone who is into data science, learning about new methods and new variables that carry information always excites me.

One talk I really liked was the Breiman Lecture on causal learning, given by Prof. Marloes Maathuis, a statistician from ETH Zürich. The talk not only showed me how critically the choice of assumptions shapes the interpretation of data, it was also easy to digest and enjoyable; Prof. Maathuis is a great speaker. It is rare for me to have so much fun listening to a long statistics lecture.

The other half of my motivation to attend the conference was the day-long robot learning workshop. This fall quarter, I took a course on Cognitive Robotics taught by Professor Jeff Krichmar, who is both a neuroscientist and a roboticist. In his course, I learned to implement neural networks in robot simulations to test neuroscience theories. To a cognitive neuroscientist struggling with human subject recruitment during the pandemic, being able to create my own participants (virtual robots) and run experiments on them is purely fascinating. I was eager to learn more, so when I heard about the opportunity to attend the robot learning workshop at NeurIPS, I did not hesitate to apply.

The day-long robot learning workshop focused on the theme of Grounding Machine Learning Development in the Real World. The most fruitful part was the panel discussion, in which several roboticists debated some of the hard questions in the field.

Interestingly, just as the pandemic has pushed cognitive psychologists to move experiments online using remote testing platforms such as Amazon Mechanical Turk and Pavlovia, it has also pushed roboticists to move from physical real-world experiments to simulations on platforms such as Webots, RaiSim, and Isaac Gym. This brought up the big topic of the workshop: the sim2real problem. Here are some points I took away from the panel. On the one hand, simulators are never accurate enough to be applied directly to real life, even when grounded in real-world data. On the other hand, simulators are necessary and really useful. Just like Doctor Strange in the final Avengers movie, who played out millions of possible futures in his mind to find the one action that would defeat Thanos, simulators greatly reduce the trial and error of robot experiments in the physical world by producing well pre-trained networks. Possible solutions to the sim2real problem include 1) adding randomness to simulations (domain randomization) so that they better cover real-world variability, which is similar to counterbalancing in psychological experimental design, and 2) incorporating real-world data and iterating between sim2real and real2sim.

These are really good points, and I think this iterative ground-and-reground mindset can also apply to cognitive research, as a loop between human subject experiments and computational modeling. Something similar holds for research on humans: even when we test real people, it is hard to replicate real environments in the lab, so there are always trade-offs. From a human researcher’s perspective, robotics offers a new opportunity to verify how mechanisms learned from humans actually work, by implementing them in robots and observing the robots’ real-world behavior.
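To make the first of those two solutions concrete, here is a minimal, purely illustrative Python sketch of domain randomization. The simulator interface (make_env, rollout, update_policy) is a made-up placeholder, not the API of Webots, RaiSim, or Isaac Gym, and the parameter ranges are invented:

```python
import random

# Toy sketch of domain randomization: each training episode samples slightly
# different physics, so a policy trained in simulation cannot latch onto the
# quirks of one idealized world. All names and ranges here are hypothetical.

def sample_randomized_physics():
    """Sample simulator parameters from ranges meant to bracket the unknown real values."""
    return {
        "floor_friction": random.uniform(0.4, 1.0),     # real friction is unknown
        "payload_mass_kg": random.uniform(0.8, 1.2),    # manufacturing variation
        "sensor_noise_std": random.uniform(0.0, 0.05),  # imperfect sensing
    }

def train(num_episodes, make_env, update_policy):
    """make_env builds a simulated world; update_policy is any learning rule."""
    for _ in range(num_episodes):
        env = make_env(sample_randomized_physics())  # a freshly randomized world
        trajectory = env.rollout()                   # run the current policy in it
        update_policy(trajectory)                    # learn from the randomized rollout
```

The hope, as I understood the panel, is that a policy that works across many randomized simulated worlds is more likely to treat the real world as just one more variation.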

Dr. Carolina Parada from Google Robotics gave a good overview of the robotics landscape. Critically, she talked about two types of robots: robots used in industry and robots produced by industry. The former are used by programmers, while the latter are used by consumers. Often we think only of human-centered consumer robots, just as many people think only of social psychology or counseling when the topic of psychology comes up, even though those are just two branches of the much bigger tree of psychology.

I am a dog lover. My earliest impression of robotics came from Sony’s Aibo dog; later, I became a fan of Boston Dynamics’ Spot. Like the dream, living only in many people’s imagination of the future, that one day we could have robot servants at home to do all the boring chores, robotics has been a word that sounds cool yet feels quite distant from my life.

The first time I saw the ‘not so cool’ side of robotics was when I read the literature on path integration (simply put, how we keep track of our own position while moving), which notes that path integration tasks that are simple for humans, such as walking a closed loop (in technical terms, loop-closure tasks, or the more commonly used triangle-completion tasks), are really challenging for robots. That’s when I realized that, as a cognitive psychologist who studies humans, I had been taking so many human abilities (senses, memory, learning, etc.) for granted. Robots, by contrast, are like newborn babies: the world is so complicated and strange to them that they usually start by failing at everything, and the story behind a cool robot video is usually millions of trials and errors. I started to appreciate the delicately designed, stable human system that both I and my human subjects possess. Along with that appreciation for the robustness of a functioning system, learning about robotics gave me some new lenses for understanding humans, which I describe below.
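First, a brief aside on the triangle-completion example above. Here is a toy dead-reckoning sketch of my own (not taken from the path-integration literature or from any workshop talk) showing why small odometry errors make even this simple task hard for a robot; the noise levels are invented:

```python
import math
import random

# Toy triangle completion by dead reckoning: the robot walks two legs of a
# triangle while estimating its position from noisy heading and step readings,
# then heads straight back to where it *thinks* the start is. Small per-step
# errors accumulate, so it misses. All noise values here are made up.

def walk_leg(true_pos, est_pos, heading, distance, steps=100):
    """Walk one straight leg; the position estimate uses noisy odometry at every step."""
    step = distance / steps
    for _ in range(steps):
        true_pos[0] += step * math.cos(heading)
        true_pos[1] += step * math.sin(heading)
        noisy_heading = heading + random.gauss(0, 0.02)  # heading noise (radians)
        noisy_step = step * (1 + random.gauss(0, 0.05))  # step-length noise
        est_pos[0] += noisy_step * math.cos(noisy_heading)
        est_pos[1] += noisy_step * math.sin(noisy_heading)

true_pos, est_pos = [0.0, 0.0], [0.0, 0.0]
walk_leg(true_pos, est_pos, heading=0.0, distance=5.0)                # first leg
walk_leg(true_pos, est_pos, heading=math.radians(120), distance=5.0)  # second leg

# Homing vector computed from the *estimated* position, but executed in the real world.
landing_x = true_pos[0] - est_pos[0]
landing_y = true_pos[1] - est_pos[1]
print("The robot misses the start point by %.2f m" % math.hypot(landing_x, landing_y))
```

Humans solve essentially the same homing problem effortlessly on a walk around the block, which is exactly the kind of ability I had been taking for granted.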

Think developmentally. As I mentioned above, many robots need a learning process to gain an ability, which is similar to human development. As far as I know, most studies in cognitive psychology are cross-sectional. That is understandable, because longitudinal studies require a large investment of time and money. However, to understand (and, in robotics, to create) an ability, we really need to add a time dimension to it. Without understanding its developmental history, it is hard to say with confidence that we understand its underlying mechanisms.

Think holistically. Simply learning how to make a robot take even a baby step let me zoom out of the human head and start thinking about the human body functioning as a whole. I cast no doubt on the importance of the CNS (central nervous system) in controlling the human body, but to complete any task, we have to be multi-taskers. To give a simple example, to reach an apple on the desk, you have to see it (visual perception, visual recognition) and locate it (spatial localization), then walk toward it (motor control of the torso and legs), and finally extend one of your hands to grab it (a quick bout of action selection, plus proprioception from the joints of the selected hand). How many senses and how many brain areas are involved in this process? Many. Understanding only how the visual dorsal stream works (the famous ‘where’ pathway in the brain) cannot get us the apple without coordinated movement of different body parts. To clarify, ‘multi-tasker’ here refers to the separate bodily mechanisms that contribute to one task, not to different tasks competing for your attention; I agree that we had better focus on one task at a time to stay efficient.

Think cheaply. Unless a robot is charged 24/7, it can only do a limited amount of work before its energy runs out. That pushes roboticists to design robot systems smartly, or, one might say, ‘cheaply’. In many psychology studies, subjects are required to complete the same task hundreds of times in one experiment. From what I have heard about monkey research, a monkey may complete a task thousands of times in a single experiment, which for a human might require multiple sessions spread over weeks or even months. In both cases, the humans and monkeys are not kept ‘charged’ during the experiment. Of course, healthy humans and monkeys do not run out of energy that quickly, but is an extensive number of repeated trials really a good way to study the underlying mechanisms of cognitive abilities? I don’t know the answer. I may still test my human subjects in the traditional experimental psychology way, but how to do things ‘cheaply’ is definitely something I will keep thinking about.

Think tolerantly. Robots make mistakes for different reasons. Sometimes a robot fails for a legitimate reason related to the system design, and that can be fixed. Other times, it is hard to tell. If you have ever programmed, you may know that sometimes the way to ‘debug’ is simply to restart the software or the computer. Why that fixes things, someone else may be able to explain; I have no idea. Would it be similar in humans? Maybe not every phenomenon we occasionally observe in human behavior has an explanation, or even needs one. From a machine learning perspective, if we overfit human behavior with ‘branch’ explanations, they may overshadow the ‘trunk’ explanation and make it harder to transfer our understanding to other, similar behaviors. Therefore, keeping some tolerance in our observations of humans may prevent us from being trapped by unnecessary details.
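As a small illustration of that overfitting point, here is a toy Python example of my own (with made-up numbers, not anything presented at NeurIPS): a model with many free parameters can explain every quirk of the observed behavior yet typically predicts new behavior worse than a simpler one.

```python
import numpy as np

# Toy overfitting demo: the 'true' behavior is linear, the observations are noisy.
# A degree-9 polynomial (many 'branch' explanations) fits the training quirks
# almost perfectly but usually predicts held-out behavior worse than a line.
rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 12)
y_train = 2.0 * x_train + rng.normal(0.0, 0.2, size=x_train.shape)
x_test = np.linspace(0.0, 1.0, 200)
y_test = 2.0 * x_test  # noise-free behavior we would like to predict

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train error {train_err:.4f}, test error {test_err:.4f}")
```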

Although I may or may not go further down the robotics track in my future research, it is definitely one of the coolest research areas I have encountered during graduate school, and I will keep paying attention to it. It is also interesting how learning about another subject really helps me reflect on my own.

If you were brave enough to read all the way to this line, let me recommend an affordable, joyful robotic companion pet that you can give to your aging loved ones:

Image: Joy for All’s Orange Tabby Cat