My work mostly relates to the areas of Robotics, Virtual Reality, Embedded Systems, and Human-Robot Interaction. Beyond the research itself, I try to apply the results to real-world problems and provide services that can benefit society in the near future.
In teleoperation, I am specifically interested not just in telepresence, but in complex remote manipulation. Performing complex manipulation requires a robot with many degrees of freedom, controlled as if the machine were an extension of your own body. Furthermore, the ability to understand the remote environment through real-time audio, visual, and haptic sensation builds confidence and enhances the efficiency of remote manipulation.
While working as a researcher, I collaborate with many international companies to bring research findings to the public so that society can benefit.
In collaboration with ASICS Corporation, we demonstrated real-time 360° video and audio streaming from a marathon runner on February 28th at the ASICS Store Tokyo. The project was initiated together with former KMD Master's student Yusuke Mizushina, who currently works in the Wearable Device Business Operation Team, Corporate Strategy Department at ASICS Corporation.
Delivering ultra-low-latency 360° video and audio over a conventional LTE mobile network was powered by WebRTC technologies from NTT Communications' SkyWay service.
For more details, please refer to the Press Release (Japanese only).
The “HUG Project” with Ducklings Inc. demonstrated the world's first virtual wedding participation, helping a 90-year-old grandmother attend her grandson's wedding ceremony from a hospital miles away using telexistence technology.
The project allowed the remote participant to control a social humanoid robot, “Pepper”, using FOVE eye-tracking technology. The demonstration took place on October 10th between the Ritz-Carlton Hotel in Tokyo and a rural hospital in Aichi Prefecture, and attracted the attention of many local and international media outlets.
The HUG Project, a virtual-reality communication application, was awarded both the Grand Prize (最優秀賞) and the Best Care Award (ベスト介護福祉賞) at the Pepper App Challenge 2015 Winter, held on November 28th at BELLE SALLE Shibuya Garden.
Please check the HUG Project official website for more details.
This drone is a type of telexistence system whereby the user can experience the feeling of flight. Telexistence is a technology that enables users to synchronize their motions and emotions with robots, such that they can be at a place other than where they actually exist, while interacting with a faraway remote environment.
In the case of the drone, a camera is attached to it, and whatever the camera captures is synchronized with the user's Head-Mounted Display (HMD), so that the user experiences the flight of the drone. By integrating the flight unit with the user and thus crossing physical limitations such as height and physique, everyone can enjoy a whole new concept of ‘space.’
TELUBee is a telexistence gateway platform where a user can connect to one of many small avatar-like robots distributed ubiquitously around the world and experience the distant location as if remotely existing there.
The TELUBee system consists of distributed small-scale telexistence robots and a graphical user interface with a world map for selecting the geolocation you want to experience. By wearing an HMD, the user sees a 3D stereoscopic remote view and can interact with remote participants through binaural audio communication. The remote robot's head motion is synchronized with the user's head, and the combined audio-visual sensation lets the user feel as if physically present there.
The robots are small, low-cost, portable, and battery-powered, so they can be used anywhere an Internet connection is available. The TELUBee user interface can be a gateway for travelling between many places around the world without spending much time, and will be useful for sightseeing, attending remote meetings, face-to-face conversations, and so on.
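The head-motion synchronization described above can be sketched as a simple mapping from HMD orientation to the robot's pan/tilt servos. This is a minimal illustration only; the function names, servo ranges, and the one-to-one angle mapping are assumptions, not the actual TELUBee implementation.

```python
def clamp(value, lo, hi):
    """Limit a commanded angle to the servo's mechanical range."""
    return max(lo, min(hi, value))

def head_to_servo(yaw_deg, pitch_deg, pan_range=(-90, 90), tilt_range=(-45, 45)):
    """Map HMD head orientation (degrees) to pan/tilt servo commands.

    The robot head mirrors the user's head: yaw drives the pan servo,
    pitch drives the tilt servo, each clamped to the hardware range.
    """
    pan = clamp(yaw_deg, *pan_range)
    tilt = clamp(pitch_deg, *tilt_range)
    return pan, tilt
```

Clamping matters in practice: the user's neck can turn further than a small hobby servo, so out-of-range head poses must saturate rather than wrap around.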
TELESAR V embodies telexistence, a fundamental concept naming the general technology that enables human beings to experience a real-time sensation of being in a remote location and interacting with that remote environment.
Conventional teleoperated robots often provide high degrees of freedom to manipulate specialized tools with precision, but these movements are mediated by the human operator's natural movements, which sometimes generates confusing feedback. Such robots also require special training before the operator understands the body boundaries when performing tasks. TELESAR V is a dexterous anthropomorphic slave robot that duplicates the size and movements of an ordinary human and maps the user's spinal, neck, head, and arm movements. With this system, users can perform tasks dexterously and feel the robot's body boundaries through wide-angle high-definition vision, binaural stereo audio, and fingertip haptic sensation.
TELESAR V is a “telexistence” robot that provides a remote experience fusing vision, hearing, and touch, with the sensation of having become one with an avatar robot that acts as your alter ego. Equipped with haptic transmission technology based on the haptic primary colors principle, it can convey even fine tactile textures such as cloth.
For more details, please refer to my Ph.D. Thesis.
“pushPin” is a tangible programming interface for connecting everyday objects so that they perform a series of actions in an event-driven manner. The “pushPin programming” metaphor resembles the traditional method of connecting devices with wires.
We have made the wires wireless to reduce tangling and the complexity of wire meshes, pairing devices with coloured/iconed pins that represent the two ends of a cable. Each device sends its stimulus based on a user action, and the paired device waits for that stimulus to perform its designated action. Moreover, the system provides easy-to-understand real-time programming/reprogramming and debugging features, helping users program their everyday objects without any prior programming knowledge.
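The pin-pairing idea above amounts to an event-driven trigger/action binding. The colour-keyed event bus below is a hypothetical sketch for illustration, not the actual pushPin firmware or protocol.

```python
# Event bus keyed by pin colour: a stimulus from the device holding one
# end of a pin triggers the action on the device holding the other end.
bindings = {}

def push_pin(colour, action):
    """'Push' a pin into a device: register its action under that colour."""
    bindings.setdefault(colour, []).append(action)

def emit_stimulus(colour):
    """A device emits a stimulus; every device holding the matching
    coloured pin performs its designated action."""
    return [action() for action in bindings.get(colour, [])]

# Example pairing: a red pin links a doorbell button to a lamp.
push_pin("red", lambda: "lamp on")
```

Re-pairing at runtime is just moving a pin, i.e. re-registering an action under a different colour, which is what makes real-time reprogramming easy to grasp.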
For more details, please refer to my Master's Thesis.
We present an interactive graphical editing interface for giving instructions to a garment-folding robot. The interface allows a user to teach a robot to fold a garment by performing simple editing operations (clicking and dragging). The interface consists of a simple garment folding simulation mechanism for detecting actions that would be impossible for the robot to perform, and it returns visual feedback to the user. The robot performs an actual garment-folding task by completing the instructions set by the user. We conducted a user study to evaluate our interface by comparing it with two other methods: controlling the robot using a game-pad, and folding a real garment by hand (teaching by demonstration). The results show that our interface provides natural and intuitive instruction comparable to folding by hand.
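The feasibility check at the heart of the simulation can be sketched in miniature. The version below checks only one constraint, a maximum number of fabric layers the gripper can handle; this limit and the function names are assumptions for illustration, and the real simulator checks a richer set of conditions.

```python
MAX_LAYERS = 4  # assumed gripper limit; the actual constraint set is richer

def check_fold_sequence(n_folds, layers=1):
    """Walk through a user-edited fold sequence.  Each fold stacks the
    folded half on top, doubling the layer count.  Returns (ok, value):
    on success the final layer count, on failure the index of the first
    fold the robot could not perform, so the interface can highlight it.
    """
    for step in range(n_folds):
        layers *= 2
        if layers > MAX_LAYERS:
            return False, step  # visual feedback: this edit is impossible
    return True, layers
```

Catching the infeasible edit at editing time, rather than during execution on the real robot, is what lets the interface return immediate visual feedback to the user.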
“Petimo” is an interactive robotic toy designed to protect children from potential risks in social networks and the virtual world, helping them build a safely connected social-networking environment. It adds a new physical dimension to social computing: friends are authenticated by physically touching the robots together, and this process updates a centralized database. The extra safety of physical-touch authentication helps prevent malevolent adult strangers from being added as friends. Petimo can be connected to any social network, providing safety and security for children.
An accelerometer- and magnetic-field-sensor-based motion-tracking system that maps a user's 6-degrees-of-freedom hand movements to planar movements on a computer screen, together with control signals such as “Click” and “Right Click.” Free-space motion tracking appears in many applications, such as inertial navigation systems, computer games, toys, 3D graphics manipulation, and certain biomedical applications.
At the end of the project, a number of possible applications are introduced. Unlike the popular camera-tracking and image-processing approach, here accelerometers along three perpendicular axes and a magnetic field sensor are used. This brings advantages such as eliminating the need for cameras, independence from lighting conditions, freedom from a restricted operating area, and the ability to process on board owing to the absence of heavy image processing. Data from the sensors are digitized, filtered, and processed on board using the relevant mathematical equations. Special algorithms give the user the freedom to hold the device at any orientation, without being restricted to a specific grip. The control signals (Click, Right Click, and Double Click when used as a mouse) are implemented without separate buttons; they are derived by analyzing the acceleration profiles. The device communicates with the PC directly via Bluetooth.
The ultimate goal was to make the choice of input signals at the hardware level by imitating ordinary mouse movements and adhering to the mouse communication protocol, so that no additional driver implementation is necessary. The developed system is capable of detecting the three angles (roll, pitch, and yaw) with 1-degree accuracy, mapping the accelerations into two-dimensional displacements, and driving the cursor pointer. In 3D space it is possible to draw and visualize 3D lines. As a PC mouse, it can accurately identify the control signals from hand gestures.
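The roll/pitch/yaw estimation described above follows the standard tilt-compensated compass arithmetic, which can be sketched as below. This is a textbook formulation assuming the accelerometer measures gravity only and the common aerospace (x-forward, z-down) axis convention; the thesis hardware may use a different frame and filtering.

```python
import math

def orientation_from_sensors(ax, ay, az, mx, my, mz):
    """Estimate roll and pitch from the accelerometer (assumed to sense
    gravity only, i.e. the hand is not accelerating), then a
    tilt-compensated yaw from the magnetometer.  Angles in degrees."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    # Rotate the magnetometer reading back into the horizontal plane.
    mxh = (mx * math.cos(pitch)
           + my * math.sin(pitch) * math.sin(roll)
           + mz * math.sin(pitch) * math.cos(roll))
    myh = my * math.cos(roll) - mz * math.sin(roll)
    yaw = math.atan2(-myh, mxh)
    return tuple(math.degrees(a) for a in (roll, pitch, yaw))
```

The tilt compensation step is what frees the user from holding the device level: yaw stays correct even when the hand is rolled or pitched.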
For more details, please refer to my Undergraduate Thesis.
Age Invaders is a novel interactive intergenerational social-physical game that allows the elderly to play harmoniously together with children in physical space, while parents can participate in the game play in real time through the Internet. The game is based on the popular traditional Space Invaders [Dirk ] arcade game.
The concept of the Age Invaders game is shown in Figure 1: two children play with two grandparents in the interactive physical media space while two parents join the game via the Internet as virtual players, thus increasing the inter-generational interaction. Figure 2 shows the realization of the Age Invaders game.
The game offers adaptable parameters to suit simultaneous play by the elderly and the young. Adjusting game properties automatically compensates for potential disadvantages of the elderly, for example slower reaction time and slower movement: the rockets of the elderly are faster than those of the children, and vice versa. This property is important to sustain the players' interaction and interest in the game. Unlike standard computer games, Age Invaders requires and encourages physical body movement rather than constraining the user in front of a computer for many hours. It also incorporates puzzle-solving games that encourage cognitively stimulating activity, for the health benefit of both elderly and young players.
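The automatic compensation idea can be illustrated with a tiny balancing rule. The numbers and names below are hypothetical; the actual game tunes several parameters, not just rocket speed.

```python
BASE_ROCKET_SPEED = 1.0  # assumed baseline, arbitrary units

def rocket_speed(player_group, handicap=0.5):
    """Balance reaction-time differences: rockets launched by elderly
    players fly faster (harder for the young to dodge), while rockets
    launched by young players fly slower (easier for the elderly)."""
    if player_group == "elderly":
        return BASE_ROCKET_SPEED * (1 + handicap)
    return BASE_ROCKET_SPEED * (1 - handicap)
```

Because both sides remain competitive, neither generation loses interest, which is the stated design goal of the adaptable parameters.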
Age Invaders uses a floor display, an unconventional interface for mixed-reality entertainment. The key advantage of a floor display over an HMD or other wearable display is that it does not require the user to wear something bulky and perceive the world through the device, an important feature for the elderly participating in augmented-reality games. The floor display is very intuitive and gives users a direct connection to the virtual game world, with their whole body as the interface.
Players can move around the game platform as they would during normal activities, without having to adapt to a new display interface. In real time, as players move and shoot rockets or bombs, these appear to come physically out of their bodies, providing a real-time link between the real world and the virtual world. This immerses the players in the game and introduces high physicality [Price and Rogers 2004], which is important to sustain the players' interest in the game and encourage active collaboration.
The main goal of this project is to build an autonomous robot for terrain mapping. The robot explores an area according to a predefined algorithm, using ultrasonic ranging to detect obstacles in front of it and infrared ranging on its sides. The algorithm controlling the robot's movement is optimized to keep travel minimal and to ensure that the same area is never covered twice. While building the terrain map, the robot also transmits the data to a PC over a wireless link for a real-time display of the map-building process. The robot built for this mini-project contains these features but is constrained to operate in a controlled indoor environment to keep things manageable.
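A common way to hold the terrain map described above is a simple occupancy grid, sketched below. The grid size, cell symbols, and function names are assumptions for illustration, not the project's actual data format.

```python
GRID = 8  # assumed 8x8-cell arena; the real size depends on the environment

def make_map():
    """Start with every cell unexplored ('?')."""
    return [["?"] * GRID for _ in range(GRID)]

def record(grid, x, y, obstacle):
    """Record one range reading: mark the cell occupied ('#') or free ('.').
    This is also the update that would be sent over the wireless link."""
    grid[y][x] = "#" if obstacle else "."

def coverage(grid):
    """Fraction of cells already explored; the exploration algorithm can
    stop (or pick the nearest unexplored frontier) based on this."""
    seen = sum(cell != "?" for row in grid for cell in row)
    return seen / (GRID * GRID)
```

Tracking which cells are already known is also what prevents the robot from traversing the same area twice: the planner only targets cells still marked '?'.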