Research Topics
With the arrival of the ubiquitous computing society, computerized, intelligent control systems can now be realized everywhere. Our laboratory works on system control, machine learning, signal processing, and interdisciplinary studies built on these topics. In what follows, after giving a general overview, we present our current research topics. Please note that these are just examples; we are always taking on new topics.
— Intelligent System Control: What is our target —
Since the term “control” is used in our daily life, the concept is familiar and basic. Control technology is widely used in industry, but at present only simple control laws are adopted in practice, and higher performance is strongly desired. Higher performance can be achieved by modeling the dynamics of the controlled object and analyzing and designing the system in an intelligent way. It is also desirable to build in a learning mechanism, just as humans improve with practice. Our target is therefore to construct highly intelligent control systems. Because of the high level of abstraction, this may not seem immediately practical, but the knowledge you learn here will surely be useful when you enter the real world.
Summaries of topics conducted by students are here!
Ongoing research topics (examples)
Robust/Adaptive learning control
We are studying advanced control technologies such as robust control and their applications. For example, we are currently studying Feedback Error Learning (FEL), a well-known biological model of motor learning, from a control-theoretic viewpoint. We are also interested in data-driven control systems that make full use of computers.
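As a toy illustration of the FEL idea (not the laboratory's actual formulation): a fixed feedback controller stabilizes the loop, and its output simultaneously serves as the training error for a linear feedforward (inverse) model, so the feedback effort fades as learning proceeds. The plant, gains, reference signal, and features below are all assumptions chosen for a minimal sketch.

```python
# Toy Feedback Error Learning (FEL): the feedback controller's output
# doubles as the training signal for a linear feedforward (inverse) model.
import math

a, b = 0.9, 0.5      # assumed linear plant: x[k+1] = a*x[k] + b*u[k]
Kp = 2.0             # proportional feedback gain
alpha = 0.1          # feedforward learning rate

def episode(w, steps=200, learn=True):
    """Track xd[k] = sin(0.5 k); return mean absolute tracking error."""
    x, total = 0.0, 0.0
    for k in range(steps):
        xd_now, xd_next = math.sin(0.5 * k), math.sin(0.5 * (k + 1))
        e = xd_now - x
        total += abs(e)
        u_fb = Kp * e                            # feedback effort
        phi = (xd_next, xd_now)                  # inverse-model features
        u_ff = w[0] * phi[0] + w[1] * phi[1]     # learned feedforward term
        x = a * x + b * (u_fb + u_ff)            # plant step
        if learn:                                # FEL rule: u_fb is the error
            w[0] += alpha * u_fb * phi[0]
            w[1] += alpha * u_fb * phi[1]
    return total / steps

err_fb = episode([0.0, 0.0], learn=False)   # feedback alone
w = [0.0, 0.0]
for _ in range(50):
    episode(w)                              # online FEL training
err_fel = episode(w, learn=False)           # feedback + learned feedforward
print(err_fb, err_fel, w)
```

For this plant the exact inverse model is w = [1/b, -a/b] = [2.0, -1.8]; the FEL update drives the weights toward it, and the residual tracking error (and hence the feedback effort) becomes negligible.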
Networked control/Switched system
We consider what control technology should look like in this IoT era, where everything is connected to the Internet. Specifically, we are proposing switched control systems that are robust against random signal-transmission delays (jitter) and aperiodic sampling (packet loss), and we verify their effectiveness through experiments with small vehicles such as drones.

Distributed optimization/agreement algorithm
To handle big data and privacy issues, we are designing distributed algorithms for solving optimization and agreement (consensus) problems. We are also conducting theoretical and numerical analyses of these algorithms.
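The simplest instance of an agreement problem is average consensus: each node repeatedly nudges its value toward those of its neighbors, and on a connected graph every value converges to the global mean without any central coordinator. The graph (a ring), step size, and initial values below are illustrative assumptions.

```python
# Distributed average consensus on a ring of 6 nodes: each node updates
# using only its neighbors' values, yet all converge to the global mean.
n = 6
x = [4.0, -1.0, 7.5, 0.0, 2.5, 3.0]                  # local initial values
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
eps = 0.3                                            # step size (stable: eps < 1/max_degree)

mean = sum(x) / n                                    # target; never used by the nodes
for _ in range(300):
    x = [xi + eps * sum(x[j] - xi for j in neighbors[i])
         for i, xi in enumerate(x)]
print(x)  # every entry approaches the global mean
```

Each update preserves the sum of all values, which is why the common limit is exactly the average; convergence speed is governed by the graph Laplacian's second-smallest eigenvalue.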
Network analysis
Networks now appear everywhere in society. We therefore conduct qualitative and quantitative analyses of network features using graph theory and constraint satisfaction problems, and we apply the results to power system analysis, infectious disease models, and information diffusion models on social media (SNS).
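One concrete link between graph structure and infectious disease models: in the standard SIS epidemic model on a network, the outbreak threshold is governed by the largest eigenvalue of the adjacency matrix (the infection roughly dies out when the infection-to-recovery ratio is below its reciprocal). The sketch below estimates that eigenvalue by power iteration on a small hypothetical graph; the choice of graph is an assumption for illustration.

```python
# Estimate the adjacency spectral radius by power iteration; in the SIS
# model, beta/gamma < 1/lambda_max roughly implies the epidemic dies out.
import math

def spectral_radius(adj, iters=200):
    n = len(adj)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        v = [wi / norm for wi in w]
    # Rayleigh quotient of the converged vector = dominant eigenvalue
    return sum(v[i] * sum(adj[i][j] * v[j] for j in range(n)) for i in range(n))

K4 = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
print(spectral_radius(K4))  # complete graph K_n has lambda_max = n - 1, here 3.0
```

Denser graphs have a larger spectral radius and thus a lower epidemic threshold, which quantifies the intuition that infections spread more easily on highly connected networks.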
Neural network
We are developing new neural networks, e.g., deep learning and reservoir computing, to attain the excellent performance of the brain.
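As a minimal illustration of reservoir computing (not the laboratory's specific models): in an echo state network, a fixed random recurrent reservoir is driven by the input, and only a linear readout is trained, here by least squares, to recall the input from five steps earlier, a simple memory task. The reservoir size, scalings, and task are assumptions for the sketch.

```python
# Minimal echo state network: fixed random reservoir, trained linear readout.
import numpy as np

rng = np.random.default_rng(0)
N, washout, T, delay = 50, 100, 1000, 5

# Fixed reservoir: recurrent weights rescaled to spectral radius 0.9,
# random input weights; only the readout weights are learned.
W = rng.standard_normal((N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
w_in = rng.standard_normal(N)

u = np.sin(0.2 * np.arange(T))            # input signal
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])      # reservoir state update
    states[t] = x

# Train the readout by least squares to output u[t - delay] (memory task),
# discarding the initial washout period.
X = states[washout:]
y = u[washout - delay:T - delay]
w_out, *_ = np.linalg.lstsq(X, y, rcond=None)
mse = np.mean((X @ w_out - y) ** 2)
print(mse)
```

Because the recurrent weights stay fixed, training reduces to one linear regression, which is the practical appeal of reservoir computing over fully trained recurrent networks.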
Biologically-inspired learning
We are studying new (reinforcement) learning structures inspired by animals, shifting away from structures that are merely mathematically convenient.
Human-robot interaction
We are developing technologies for next-generation robots that can physically interact with humans, aiming to support various human motions.
Multi-agent reinforcement learning
We are studying practical reinforcement learning algorithms for the large-scale, decentralized autonomous robot systems expected in a future age of robots and AI.
Research Equipment

Examples of topics (Neural network):
Learning of well-clustered latent space with mixture distribution for variational autoencoder
Robot control adaptive to human behaviors exploiting the latent space
Recurrent neural networks with super long-term memory capacity
Reinforcement learning with both feedback and feedforward policies
Tool-switching system based on normalizing flows model

Examples of topics (Biologically-inspired learning):
Design of curriculums for hierarchical and multi-objective task learning exploiting lifelong learning
Intrinsic motivation for truly autonomous robots to find their goals by themselves
Biologically-plausible optimistic reinforcement learning
Reinforcement learning with power-law-based future consideration
Reinforcement learning with both feedback and feedforward policies
Tool-switching system based on normalizing flows model

Examples of topics (Human-robot interaction):
Robot control adaptive to human behaviors exploiting the latent space
Walking control of humanoid robots explicitly handling contact forces with human
Model-based reinforcement learning to fully use compliant actuators
Human-robot systems as distributed multi-agent systems

Examples of topics (Multi-agent reinforcement learning):
Reward shaping algorithm to cooperate/compete with other agents
Safe reinforcement learning for careful physical interaction among agents
Optimization of communication network among agents for large-scale systems
Intrinsic motivation for truly autonomous robots to find their goals by themselves
Human-robot systems as distributed multi-agent systems

Control engineering and machine learning frequently use methods that demand expensive numerical computation, as well as numerical simulations for verification. Students are therefore lent desktop PCs equipped with the latest CPUs, ample RAM, and GPUs for deep learning as needed.

Robot manipulators for operation in space must be lightweight while covering a large workspace. Since the arms of such manipulators are generally made of flexible material, the vibration of the tip caused by link deflection must be suppressed for accurate position control. In this laboratory, we also try to obtain better responses through learning-based adaptive control.

This is a 1/10-scale model of an actual automobile, produced as a tool for developing automated driving technology. Equipped with various sensors, it is used in indoor experiments to evaluate performance by manipulating its velocity and steering angle.

This is a well-known experimental system in control engineering. The pendulum is controlled so that it stays inverted by moving the rotor. In this laboratory, these systems are used for experimental validation of control theory.

This is an educational programming kit developed by LEGO and MIT. A line-tracing car or an inverted pendulum robot can easily be constructed with it. In this laboratory, we take advantage of it for demonstrations of control theory.

This quadcopter, developed by Parrot, is linked to a motion capture system that measures its position and orientation. It is used for experimental verification of observers that are robust against occlusion and communication delays (or failures) in the measurements from the external system.

This is a 4-DOF robot arm developed by Dobot. A sliding rail beneath the robot lets it work precisely over a wide area.
Currently, it is used in experiments on learning characters via lifelong learning (also called continual learning).

This robotic arm has a modular variable stiffness actuator (VSA), developed by qbrobotics, at each of its four joints. By adjusting the joint stiffness appropriately, both dynamic movement and physical human-robot interaction become possible.
Currently, it is used in experiments on model-based reinforcement learning, which learns the dynamics as a model and exploits it.

This is a quadruped robot named Doggy, developed in our laboratory. Because each leg is a simple module made with a 3D printer, students can assemble it easily.
Currently, it is used in experiments on an algorithm that treats quadrupedal gait as a hierarchical, multi-objective problem and learns sequentially from simple subtasks according to an appropriate curriculum.

This hexapod was developed by Trossen Robotics. Thanks to its large support polygons and powerful actuators, it can generate a stable gait.
Currently, it is used in experiments on a method for autonomously generating the gait best suited to the situation at hand, as well as on an intuitive teleoperation method.

This is an automated driving system consisting of three RGB-D cameras mounted on an electric wheelchair developed by WHILL, Inc. It is used for research on shared autonomy, which covers fields of view the user overlooks, and for experimental verification of learning methods for predictive models that can store long-term memory.

This is a small 5-axis robot arm developed by Robotis. It is used in experiments on topics such as model learning during object manipulation and control based on tactile information.

This is a mobile robot using series elastic actuators, developed by HEBI Robotics. Because it can sense forces applied by humans, it is used in demonstrations such as guidance that takes physical interaction with the user into account.