We propose “Layered Telepresence”, a novel method of experiencing simultaneous multi-presence. The user's eye gaze and perceptual awareness are blended with real-time audio-visual information received from multiple telepresence robots. The system arranges the audio-visual information received through the robots into a priority-driven layered stack. A weighted feature map is created for each layer, based on objects recognized using image-processing techniques, and the system pushes the most heavily weighted layer around the user's gaze into the foreground. All other layers are pushed back to the background, providing an artificial depth-of-field effect. The proposed method is not limited to robots: each layer could represent any audio-visual content, such as a video see-through HMD, a television screen, or even a PC screen, enabling true multitasking.
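The gaze-driven layer selection described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `Layer` class, the Gaussian gaze weighting, and all parameter values are assumptions introduced here for clarity.

```python
import math
from dataclasses import dataclass

@dataclass
class Layer:
    """One audio-visual stream (robot camera, TV, PC screen, ...)."""
    name: str
    # Recognized objects as (x, y, weight): normalized screen
    # position plus an importance weight from the feature map.
    objects: list

def layer_score(layer, gaze, sigma=0.2):
    """Sum object weights, emphasizing objects near the gaze point
    with a Gaussian falloff (sigma is a hypothetical tuning value)."""
    gx, gy = gaze
    score = 0.0
    for x, y, w in layer.objects:
        d2 = (x - gx) ** 2 + (y - gy) ** 2
        score += w * math.exp(-d2 / (2 * sigma ** 2))
    return score

def select_foreground(layers, gaze):
    """Split layers into the one foreground layer around the gaze
    and the remaining background layers (to be defocused)."""
    ranked = sorted(layers, key=lambda l: layer_score(l, gaze), reverse=True)
    return ranked[0], ranked[1:]
```

In a real system the background layers would then be blurred and attenuated to produce the artificial depth-of-field effect.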
In this paper, a spatially coherent remote driving system was designed and implemented to operate a telexistence backhoe over a wireless network. Accordingly, we developed a 6 degrees of freedom (DOF) slave robot that can mimic human upper-body movement; a cockpit with a motion tracking system and a Head Mounted Display (HMD), through which the operator was provided with HD 720p ultra-low-latency video and audio feedback from the remote backhoe; and a controller to manipulate the remote backhoe. Spatially coherent driving could allow operators to manipulate heavy machinery without any prior training and to perform operations as if they were operating the machinery locally. Moreover, construction work could proceed uninterrupted (24/7) with operators logging in remotely from all over the world. This paper describes the design requirements for developing the telexistence backhoe, followed by several field experiments carried out to verify the effectiveness of the spatially coherent remote driving experience on construction sites.
In this paper, a mobile telexistence system that provides mutual embodiment of the user's body in a remote place is discussed. A fully mobile slave robot was designed and developed to deliver visual and motion mapping of the user's head and body. The user can access the robot remotely using a Head Mounted Display (HMD) and a set of head trackers. The system addresses three main points: representing the user's body in a remote physical environment, preserving the user's body ownership during teleoperation, and presenting the user's body interactions and visuals on the remote side. These three points were addressed by virtually projecting the user's body into the egocentric local view and by projecting body visuals remotely. The system is intended for teleconferencing and remote social activities where no physical manipulation is required.
We propose an indirect-vision, video-see-through augmented reality (AR) cockpit that uses telexistence technology to provide an AR-enriched, virtually transparent view of the surroundings through monitors instead of windows. Such a virtual view has the potential to enhance driving performance and experience beyond conventional glass cockpits as well as head-up-display-equipped cockpits, by combining AR overlays with images obtained from future image sensors that are superior to human eyes. As a proof of concept, we replaced the front windshield of an experimental car with a large stereoscopic monitor. A robotic stereo camera pair that mimics the driver's head motions provides stereoscopic images with seamless motion parallax to the monitor. Initial driving tests at moderate speeds on roads within our research facility confirmed the illusion of transparency. We will conduct human factors evaluations after implementing AR functions, in order to show whether an overall benefit over conventional cockpits can be achieved in spite of possible conceptual issues such as latency, the shift of viewpoint, and the short distance between driver and display.
In this paper, a new sports genre, "Aerial Sports", is introduced, in which humans and robots collaborate to enjoy space as a whole new playing field. By integrating a flight unit with the user's voluntary motion, anyone can enjoy crossing physical limitations such as height and physique. The user can dive into the drone by wearing an HMD and experience the provided binocular stereoscopic visuals and the sensation of flight while using their limbs effectively. This paper explains the requirements and design steps for synchronizing visual information and physical motion in a flight system, mainly for the aerial sports experience. The requirements explained here can also be adapted to purposes such as search and rescue, or entertainment, wherever coupled body motion has advantages.
In this paper, a mutual body representation for telexistence robots that do not have physical arms is discussed. We propose a method of projecting the user's hands as a virtual superimposition that is visible not only to the user through an HMD, but also to the remote participants, by projecting virtual hand images into the remote environment with a small projector aligned with the robot's eyes. These virtual hands are produced by capturing the user's hands from the first-person view (FPV) and then segmenting them from the background. This method expands the physical body representation of the user and allows mutual body communication between the user and remote participants, while providing a better understanding of the user's hand motion and intended interactions in the remote place.
We propose an "Interactive Instant Replay" system with which the user can experience previously recorded sports play through 360-degree spherical images and haptic sensation. The user wears an HMD, holds a Haptic Racket, and experiences the first-person sports scene with his or her own coupled body motion. The proposed system could be integrated with existing television broadcast data at large sports events, such as the 2020 Olympics, to deliver the same sports play experience at home.
Telexistence [Tachi 2010] systems require physical limbs for remote object manipulation [Fernando et al. 2012]. Having arms and hands synchronized with voluntary movements allows the user to feel the robot's body as his own through visual and haptic sensation. Here, we introduce a novel technique that provides virtual arms for existing telexistence systems that do not have physical arms. Previous works [Mine et al. 1997; Poupyrev et al. 1998; Nedel et al. 2003] studied virtual representations of the user's hands for interaction in virtual environments. In this work, the virtual arms serve several interactions in a physical remote environment and, most importantly, provide the user a sense of existence in that remote environment. These superimposed virtual arms follow the user's real-time arm movements and react to the dynamic lighting of the real environment, providing photorealistic rendering adapted to the remote place's lighting. Thus, the user experiences embodied agency over the remote environment. Furthermore, the virtual arms can be extended to touch and feel unreachable remote objects, and to grab a functional virtual copy of a physical instance where device control is possible. This method not only allows the user to experience a non-existing arm in telexistence, but also gives the ability to act on the remote environment in various ways.
Our project is based on our proposed haptic primary color theory, and our aim is to construct an intelligent information environment that is both visible and tangible, and that integrates real-space communication, a human-machine interface, and media processing. We have succeeded in transmitting fine haptic sensations, such as material texture and temperature, from an avatar robot's fingers to a human user's fingers. The avatar robot is a telexistence anthropomorphic robot dubbed TELESAR V, with a body and limbs having 53 degrees of freedom. In addition to haptic sensations, this robot can transmit visual and auditory sensations of presence to human users. Other results of this research project include RePro3D, a full-parallax autostereoscopic 3D (three-dimensional) display with haptic feedback using RPT (retro-reflective projection technology); TECHTILE Toolkit, a prototyping tool for the design and education of haptic media; and Haptic Editor, an interactive editing system for creating haptic-enabled 3D content.
This paper describes a new type of robust control mechanism for a 15 DOF anthropomorphic robot hand in telexistence manipulations, using a flexible-fiber-based master glove so that the operator experiences the visual-kinesthetic sensation of his or her own hand in remote manipulations. Accordingly, a master-slave telexistence system was constructed with the following: a 14 DOF modified optical-fiber-based data glove for capturing the complex finger postures of the master operator without any mechanical constraints; a novel finger posture mapping algorithm that is independent of the effects of different finger sizes and digit ratios; and a 15 DOF anthropomorphic slave robot hand for reconstructing the operator's finger posture. The paper discusses the importance of feeling one's fingers in a telexistence manipulation and the control mechanism for accurate finger posture capture and reconstruction, whose effectiveness has been verified through a set of experiments and a subjective evaluation.
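One way such a mapping can be made independent of finger sizes and digit ratios is to calibrate each bend sensor per user and normalize readings before scaling into the slave hand's joint range. The sketch below illustrates this idea only; the calibration scheme and function names are assumptions, not the paper's published algorithm.

```python
def map_posture(raw, calib, joint_limits):
    """Map raw fiber-bend sensor readings to slave joint angles.

    raw:          one reading per sensor.
    calib:        (open_hand, closed_fist) reading pairs per sensor,
                  captured in a per-user calibration step.
    joint_limits: (min_deg, max_deg) per slave joint.

    Normalizing each reading to [0, 1] between the user's own open
    and closed poses makes the commanded posture independent of
    absolute finger size and digit ratio.
    """
    angles = []
    for r, (lo, hi), (jmin, jmax) in zip(raw, calib, joint_limits):
        t = (r - lo) / (hi - lo) if hi != lo else 0.0
        t = min(max(t, 0.0), 1.0)          # clamp to calibrated range
        angles.append(jmin + t * (jmax - jmin))
    return angles
```

A half-flexed sensor reading then commands the midpoint of the corresponding slave joint range regardless of the operator's hand dimensions.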
During teleoperated manipulation, synchronization between the user's behavior and a remote avatar is important for delivering the sensation of being in the remote place. Current telexistence technologies allow full upper-body posture synchronization through multi-DOF humanoid robot structures and allow the operator to control the remote body as his own. However, they do not preserve consistent visual feedback, such as human-like skin tones, the operator's hand shape, and the outfit worn during the operation. Thus, in this paper we propose a new method that reproduces the operator's body shape and complexion, with light correction, using real-time visuals taken from a see-through camera placed in the HMD and superimposed over the robot's vision. Using hand and arm trajectories from a kinematics solver, a virtual representation generates masking images that isolate the operator's local body appearance and superimpose it into the virtual environment. The local body appearance is captured via the see-through HMD. This paper describes the design and implementation of this technique and the basic results obtained.
In this paper, we propose a haptic transmission system for telexistence that improves the ability to sense the presence of remote objects. The system can transmit information about the existence and surface textures of objects in remote locations. It consists of a conjugated haptic sensor and display: the sensor on the robot's finger detects the pressure, vibration, and temperature of a remote object, and the display reproduces this information on the operator's finger. Based on this information, the operator can understand what he/she is touching and whether its surface is hard or soft, cold or hot, and smooth or rough. With our system, the operator can recognize the difference between materials such as silk and denim.
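The three channels the sensor transmits map naturally onto the hard/soft, rough/smooth, and hot/cold distinctions the operator perceives. The following sketch makes that mapping explicit; the data structure, thresholds, and labels are illustrative assumptions, not values from the paper.

```python
from dataclasses import dataclass

@dataclass
class HapticSample:
    """One reading from the robot-finger sensor."""
    pressure: float      # contact force, N (hypothetical unit)
    vibration: float     # high-frequency texture component, a.u.
    temperature: float   # degrees Celsius at the contact point

def classify_surface(s, hard_thresh=1.0, rough_thresh=0.3, warm_thresh=30.0):
    """Coarse interpretation of one sample, mirroring the perceptual
    distinctions delivered to the operator's finger."""
    return {
        "hardness": "hard" if s.pressure > hard_thresh else "soft",
        "texture": "rough" if s.vibration > rough_thresh else "smooth",
        "thermal": "warm" if s.temperature > warm_thresh else "cold",
    }
```

In the actual system the raw channels are rendered directly on the operator's fingertip rather than classified; the labels here only illustrate what the transmitted signals encode.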
In this paper, we focus on designing a customizable, module-based virtual platform for modeling, simulating, and testing telexistence applications, in which physical parameters are preserved in the virtual environment, both for the motor control of the robot's physical characteristics and for the sensory feedback of vision, audition, and haptics. We propose “Virtual Telesar”, which allows telexistence engineers to model a prototype system before manufacturing it and to experience how the final model will look and perform manipulations in the real world without building it. The platform provides three features: first, the user can define a robot using predefined modular components; second, the user can customize and tune parameters; third, the user can have an immersive experience of operating the robot with visual, auditory, and haptic sensation. We describe the design concept of the Virtual Telesar platform and report modeling results based on a physical robot, together with the results of an immersive operating experience with it.
This paper focuses on the design of a dexterous anthropomorphic robot with which the operator can perceive the transfer of bodily consciousness to the slave robot during teleoperation. Accordingly, we propose a telexistence surrogate anthropomorphic robot called “TELESAR V”, which was designed and constructed through the development of the following: a 52 DOF slave robot with a torso, upper limbs, hands, and head that models the operator's posture on all parts of the upper body and maintains 6 DOF accuracy at the arm endpoint; an HD head-mounted display with 6 DOF point-of-view accuracy for wide-angle stereovision; and a mechanism for sensing and reproducing fingertip haptic and thermal sensations. The paper describes the development of the TELESAR V system, whose effectiveness has been verified through functional experiments.
Telexistence technology enables a highly realistic sensation of existence in a remote place without any actual travel. The concept was originally proposed by the first author in 1980, and its feasibility has been demonstrated through the construction of alter-ego robot systems such as TELESAR & TELESAR V, which were developed under the large-scale national project on “Robots in Hazardous Environments” and the “CREST Haptic Telexistence Project.” A mutual telexistence system, such as TELESAR II & IV, capable of generating the sensation of being in a remote place in local space using a combination of an alter-ego robot and retro-reflective projection technology (RPT), has been developed, and the feasibility of mutual telexistence has been demonstrated. Thirty-two years of telexistence development are reviewed historically in this jubilee video.
Telexistence is fundamentally a concept that names the general technology enabling a human being to have a real-time sensation of being at a place other than where he actually exists, and to interact with that remote environment [Tachi et al. 1990]. Through previous versions of TELESAR, we achieved human-like neck movements for visually interacting with a remote object in three-dimensional space [Watanabe et al. 2007]. We introduce "TELESAR V", which maps the user's spine, neck, head, and arm movements onto a dexterous slave robot, allowing the operator to feel the robot's body as his own through visual, auditory, kinesthetic, and fingertip tactile [Sato et al. 2007] sensation. With TELESAR V, the operator can perform teleoperations confidently with no prior practice.
ImpAct is a haptic stylus that can change its length dynamically and measure its orientation changes in 3 degrees of freedom. Combined with a 3D simulated projection rendering mechanism, as shown in Figure 2, it can create the illusion of going through the display surface into the digital space below (Figure 1). Furthermore, once the user enters the digital space, more realistic interactions with digital objects are provided through kinesthetic haptic feedback, by applying force feedback to the scalable stem.
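The illusion rests on a simple geometric idea: from the on-screen contact point, the stem's extended length, and the stylus orientation, the system can place a virtual tip below the display plane. The sketch below is a hypothetical illustration of that geometry, with assumed angle conventions, not ImpAct's actual rendering code.

```python
import math

def virtual_tip(contact, yaw, pitch, length):
    """Position of the stylus tip 'below' the display surface.

    contact: (x, y) touch point on the screen plane (z = 0).
    yaw, pitch: stylus orientation in radians (assumed convention:
    positive pitch dives into the screen).
    length: stem extension beyond the surface.

    Returns an (x, y, z) point; z < 0 means inside the digital space.
    """
    dx = math.cos(pitch) * math.cos(yaw)
    dy = math.cos(pitch) * math.sin(yaw)
    dz = -math.sin(pitch)
    x0, y0 = contact
    return (x0 + length * dx, y0 + length * dy, length * dz)
```

Rendering the digital scene from this virtual tip, and resisting stem compression with force feedback when the tip meets a digital object, produces the through-the-surface sensation.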
In this paper, we describe a ball game, "Earthlings Attack!", that uses contact between users and an active ball device as an information channel into the game content. When the ball device with a built-in transmitter comes in contact with a user wearing the receiver, the system transmits information from the ball device to the receiver through the user's body via human body communication. With this method, we aim to augment the interaction by presenting information on the user's body according to the contact between each ball device and each user. The system can also be used over a wide area within the same network by collectively managing the contact information of both.
"pushPin" is a tangible programming interface for connecting everyday objects together to perform a series of actions in an event-driven manner. The "pushPin programming" metaphor resembles the traditional method of connecting devices using wires. We have made the wires wireless to reduce the tangling and complexity of wire meshes, pairing devices using colored/iconed pins that represent the ends of the cables. Each device sends its stimulus based on user action, and each end device waits for a response to perform its designated action. In this paper, we introduce the concept and vision of this everyday object pairing technique using stimulus-response coordination, and then describe the prototype implementation of the pushPin system and its applications. We also show the effectiveness of our approach by carrying out a user study.
Cuteness in interactive systems is a relatively new development, yet it has roots in the aesthetics of many historical and cultural elements. Symbols of cuteness abound in nature, as in creatures of neotenous proportions that draw out the care and concern of a parent or protector. We provide an in-depth look at the role of cuteness in interactive systems, beginning with a history. We particularly focus on the Japanese culture of Kawaii, which has made a large impact around the world, especially in entertainment, fashion, and animation. We then take the approach of defining cuteness in contemporary popular perception. User studies are presented, offering an in-depth understanding of the key perceptual elements identified as cute. This knowledge opens the possibility of creating a cute filter that can transform inputs and automatically create cuter outputs. The development of cute social computing and entertainment projects is discussed as well, providing insight into the next generation of interactive systems, which bring happiness and comfort to users of all ages and cultures through the soft power of cute.
Addressing a key issue in ubiquitous computing and power generation, this paper presents several novel techniques that use the human body itself as an energy resource for power generation and as a biological data network. Focusing particularly on power from human walking, we conducted comprehensive studies determining and comparing critical factors, such as the efficiencies and power levels of practical generation techniques. Additionally, we present a breakthrough development of low-cost miniature circuits with significantly improved efficiencies, ideal for such generation applications. The paper also describes novel ubiquitous personal area networks that exchange digital information through physical touch.
A tangible programming interface called "Push-pin" is proposed, which involves end users in designing smart-home programs for home appliance automation. Programming on the Push-pin system is based on a stimulus-response model, in which an appliance connected to another appliance via a network is activated when the other appliance is activated. To interconnect two appliances, the user puts a pin associated with an output appliance, such as a lamp or a robot cleaner, into an input module, such as a switch or a motion sensor. The input appliance then sends stimulus data with the ID of the pin as the destination address.
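The stimulus-response routing described above amounts to a small lookup: inserting a pin binds an input module to the pin's ID, and a stimulus from that input is delivered to whichever output appliance owns the pin. The toy router below illustrates this model; the class and method names are assumptions for illustration, not the Push-pin system's actual software.

```python
class PinNetwork:
    """Toy router for the Push-pin stimulus-response model."""

    def __init__(self):
        self.bindings = {}   # input module ID -> pin ID
        self.actions = {}    # pin ID -> output appliance action

    def register_pin(self, pin_id, action):
        """Associate a pin with its output appliance's action."""
        self.actions[pin_id] = action

    def insert_pin(self, input_id, pin_id):
        """Physically putting a pin into an input module binds them."""
        self.bindings[input_id] = pin_id

    def stimulus(self, input_id):
        """An input appliance fires; route to the pinned output, if any.
        The pin ID acts as the destination address of the stimulus."""
        pin_id = self.bindings.get(input_id)
        if pin_id in self.actions:
            self.actions[pin_id]()
```

For example, inserting a lamp's pin into a wall switch means that pressing the switch sends a stimulus addressed to the lamp, which then turns on.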
Tele-existence applications for robotic systems are becoming popular and widespread. They enable users to control a remote machine while experiencing a sense of being in the remote location. Initially, tele-existence was used for remote de-mining and mission-critical tasks in space, to avoid risking human life. Recently it has been applied in many entertainment and gaming applications, enabling a community to play together in one virtual environment and share the experience. However, existing tele-existence systems require a large-scale interface, a lot of processing power, and a large space for proper operation.
A novel interactive approach that helps children make friends in safe social networks and reassures parents that their children are protected.
As social networking spreads widely through the community, especially among the younger generation, its negative influence on children has become a serious social concern. "Petimo" is an interactive robotic toy designed to protect children from potential risks in social networks and the virtual world, helping them build a safely connected social networking environment. It adds a new form of security to social computing through parental authentication, providing extra safety in making friends by having children physically touch each other's robots, a natural means of making friends that children especially prefer. The Petimo concept can be extended to any social network, making it child-safe. As a proof of concept, a 3D virtual world called "Petimo-World" was developed, which includes all of the realizable basic features of traditional online social networks. With the system, children experience enhanced relationships with their friends through interactions in the real and virtual worlds, sending personal thoughts and feelings mediated by their robots with haptic, visual, and audible events.
This project proposes an interactive graphical editing interface with which an end user can instruct intelligent robots to complete a real-world object manipulation task. Natural language is often considered the ideal communication method for robots, but it is not intuitive for specifying tasks that require visual (geometric) information. Learning from demonstration can be useful, but it is not easy to generalize a provided example into a working program. Our approach is to provide a specialized graphical editor that abstracts the target task and to have the user specify how to complete the task by performing simple editing operations (clicking and dragging). We show the effectiveness of our approach by building and testing an example application based on this concept: a graphical editor for teaching garment folding to a robot. This example shows that our approach is particularly effective for letting an end user configure robot behavior to satisfy needs that cannot be covered by a single, pre-programmed solution for a general audience.
As social networking spreads widely through the community, especially among the younger generation, its negative influence on children has become a serious social concern. "Petimo" is an interactive robotic toy designed to protect children from potential risks in social networks and the virtual world, helping them build a safely connected social networking environment. It adds a new physical dimension to social computing by enabling a second authentication mode, providing extra safety in making friends through physically touching each other's robots. Petimo can be connected to any social network, providing safety and security for children. As a proof of concept, we have developed a 3D virtual world, "Petimo-World", which demonstrates all of the realizable basic features of traditional online social networks. Petimo-World stands out from other virtual worlds with its interesting and sophisticated interactions, such as visualizing a friend's relationships through spatial distribution in 3D space to clearly convey the closeness of each friendship, personalized avatars, and the sending of special gifts/emoticons.
In this paper, we describe an information transmission system that uses contact between users and ball devices as an information channel in interactive content. When a ball device with a built-in transmitter comes in contact with a user wearing the receiver, the system transmits information through the user's body via human body communication. With this method, we aim to augment the interaction by presenting information on the user's body according to the contact between each ball device and each user. The system can also be used over a wide area within the same network by collectively managing the contact information of both. By regarding the ball devices as an input interface, we realize application development that mixes the content environment of the real world with the virtual space. Moreover, we developed an interactive game for children using this system and conducted a user study.
This paper presents an accelerometer-based 3-D motion tracking system that captures human hand movement in 3-D and transforms it onto a 2-D plane, where it acts as a wireless wearable PC mouse. It is developed as a general-purpose device that could find applications in various capacities, such as computer games, graphic design, toys, and 3-D object manipulation, simply by redefining the communication protocols.
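One common way to reduce a 3-D accelerometer reading to a 2-D cursor motion is to estimate tilt from the gravity components and scale it into a cursor delta. The sketch below illustrates this general technique under assumed conventions (readings in g, a gain and dead zone chosen arbitrarily); it is not the paper's specific transform.

```python
import math

def tilt_to_cursor(ax, ay, az, gain=400.0, dead_zone=0.05):
    """Map a 3-axis accelerometer reading (in g) to a 2-D cursor delta.

    Tilt angles are estimated from the gravity components; a small
    dead zone (in radians) suppresses jitter when the hand is level.
    """
    pitch = math.atan2(ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, math.sqrt(ax * ax + az * az))
    dx = 0.0 if abs(roll) < dead_zone else gain * roll
    dy = 0.0 if abs(pitch) < dead_zone else gain * pitch
    return dx, dy
```

Because the device only reports tilt-derived deltas, the same protocol can be redefined for other consumers (a game, a 3-D manipulation tool) without changing the sensing hardware.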