VPAC (Visual Perception and Cognition Laboratory) investigates how our brain works to understand the external world around us through the visual, auditory, and tactile systems, using psychophysical experiments and brain activity measurements.
We are also developing technology involving the human visual system, such as the visualization of invisible information by infrared spectrum imaging, building on the knowledge obtained from our fundamental research.
About the lab:
All students are given a PC, a few displays, and their own seat. Our rooms occupy the 7th floor of the Soken Building, with a room assigned to each grade; visit the web page below for photos of the rooms. In addition, we have a number of rooms for conducting experiments, and the experimental equipment is also ample.
Active field in the future:
Major electronics companies and automobile-related companies are popular among students as employers after they obtain their degrees.
I investigate human perception and action with virtual reality (VR). Recent research topics cover virtual embodiment, human augmentation, and social cognition with non-human agents. We humans now live in a cyber world as well as the physical world, so it is necessary to understand how our bodily self-consciousness arises in various modified bodies, both in VR and in robotics, and how we communicate with others through modified bodies and with non-human agents such as artificial avatars and robots. We apply psychophysical and cognitive experiments using full-body motion capture systems and head-mounted displays to address these questions. The experiments are demanding but interesting, and necessary for scientific findings.
Prof. Aono has been conducting research on data science.
His research can be seen at the following URL.
A typical syllabus of Prof. Aono's classes can be seen in some detail at the following URL.
The corresponding IMLEX lecture will be
"Data Science and Analysis 2" in the spring term.
Please note that this lecture is taught alternately by Prof. Kuriyama and Prof. Aono, and its content differs from lecturer to lecturer.
We are researching novel technologies based on artificial intelligence (AI), such as machine learning and neural networks, for smartly generating, editing, and analyzing visual graphical content such as animations and illustrations. With our technologies, a novel virtual world can emerge in a richer manner, bridged with the real world through smart devices for illumination and fabrication. Various projects are ongoing, categorized as
- Machine learning for synthesizing human motion or artificial images
- Image conversions for fabrication or illustrations
- Optical color estimations for smart illuminations
Lecture: Data Science and Analysis 2
Students can learn various technologies related to visual data analysis using open-source software, for datasets such as multivariate data, networks, and Web documents.
This lecture will be held online and includes practical exercises using real datasets.
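As a small illustration of the kind of multivariate analysis such exercises involve, the sketch below computes a Pearson correlation between two variables with only the standard library; the dataset and variable names are invented for illustration and are not taken from the actual course materials.

```python
from math import sqrt
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical two-variable dataset: study hours vs. exam score.
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 70, 74]

print(f"correlation = {pearson(hours, scores):.3f}")
```

In practice the lecture uses open-source analysis tools on much larger datasets, but the statistic itself is the same.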
About the lab:
Our members consist of one doctoral student and 10 master's students as of 2021.
Active field in the future:
Visual media and game industries, artificial intelligence, software industry
My research area is robotics, with particular focus on mobile robots, robot vision, and human-robot interaction. A key to realizing intelligent systems such as robots is sophisticated algorithms for perception and planning. Robots now possess excellent sensing abilities thanks to recent advances in machine learning algorithms and high-quality sensors. The next step is to pursue how to translate the sensed information into planning appropriate (safe, efficient, and comfortable) actions. The multinational team of AISL (Active Intelligent Systems Laboratory) conducts a wide range of research projects, such as mobile service robots, human-robot collaboration, and applications of robotic technologies in industry and agriculture.
The Cognitive Neurotechnology Unit (CNt) aims to advance fundamental knowledge of human cognitive and emotional processing by measuring human cognitive behaviors and biological signals, such as EEG and pupillometry, and analyzing them with computational techniques. We also conduct applied research that builds on this knowledge for brain-machine interfaces (BMI) and neuromarketing. Our laboratory is equipped with EEG systems (Biosemi ActiveTwo) and eye-tracking systems (EyeLink 1000 and SMI RED500). Why not join us in measuring human beings to find out how various cognitive processes arise?
Welcome to the Natural Language Processing Laboratory.
Our research interests focus on natural language processing (NLP) and spoken language processing (SLP). In particular, we are currently working on (1) end-to-end NLP and SLP systems based on deep learning, including machine translation, speech recognition, and speech translation; (2) information access and retrieval, including spoken document retrieval, voice search, and automatic text summarization; and (3) NLP for medical applications, including automatic disease code assignment on medical health records and patient detection from users' posts on social networking services.
My lecture is also related to this research. You will look at statistical methods for natural language processing.
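As a small taste of such statistical methods, the sketch below estimates bigram probabilities from a toy corpus by maximum likelihood, the starting point of classical statistical language modeling; the corpus is invented for illustration and is not from the lecture itself.

```python
from collections import Counter

# Toy tokenized corpus, a hypothetical stand-in for real lecture data.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def bigram_prob(w1, w2):
    """Maximum-likelihood estimate P(w2 | w1) = count(w1 w2) / count(w1)."""
    return bigrams[(w1, w2)] / unigrams[w1]

print(bigram_prob("sat", "on"))   # both "sat" tokens are followed by "on" -> 1.0
print(bigram_prob("the", "cat"))  # "the" occurs 4 times, once before "cat" -> 0.25
```

Real systems add smoothing for unseen word pairs and, in the deep learning setting, replace these counts with learned neural models, but the probabilistic framing is the same.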
We, the Ubiquitous Systems Laboratory, aim to develop technologies that enhance human activities in the real world. Integrating a wide range of the latest technologies, such as IoT devices, system software, middleware, and AI, we develop novel sensors, embedded systems, networking systems, server-side systems, data processing systems, and human interface systems, as well as application development environments, in an application-oriented manner. Current projects focus on welfare environments, smart city services, and infrastructure health monitoring: for instance, a nursing and rehabilitation support system with wearable sensors, an urban economy and festival support system using smartphones, an analysis and advisory system for traffic issues, and a tension-monitoring system for bolted joints over an LPWA network. We hope to work with students who are eager to build a novel human life and society on solid technologies.
In our laboratory, we are conducting research on Computer Vision and Image Processing.
Computer Vision is a basic and important tool for obtaining visual information around us.
It is therefore applicable to various fields such as robotics, autonomous driving, and mixed/augmented reality. Current projects include 3-D reconstruction, 3-D image analysis for agriculture, image matching, human tracking, car detection, and image enhancement for dichromats.
In this course, we will introduce the basic theory and methods for 3-D reconstruction from images. To follow this course, you may need some knowledge of linear algebra, statistics, and basic geometry. The course also includes programming exercises on 3-D reconstruction.
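As one example of the geometry underlying this material, the sketch below projects a 3-D point onto the image plane with a pinhole camera model, the standard building block of reconstruction methods; the focal lengths and principal point are made-up values, not parameters from the course.

```python
def project(point, fx, fy, cx, cy):
    """Pinhole projection: u = fx * X/Z + cx,  v = fy * Y/Z + cy.

    fx, fy are focal lengths in pixels; (cx, cy) is the principal point.
    """
    X, Y, Z = point
    return (fx * X / Z + cx, fy * Y / Z + cy)

# A point half a meter right, a quarter meter down, two meters ahead,
# seen by a hypothetical camera with an 800-pixel focal length.
u, v = project((0.5, -0.25, 2.0), fx=800, fy=800, cx=320, cy=240)
print(u, v)  # -> 520.0 140.0
```

3-D reconstruction works in the opposite direction, recovering the point and the camera parameters from such pixel observations in several images, which is where the linear algebra and statistics come in.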
I teach half of Human Sensation and Perception. In the course, we learn how the eye, brain, and mind work, along with state-of-the-art measurement principles. My research is in the field of visual neuroscience, where I aim to understand the neural basis of color and object recognition through human psychological experiments and brain measurements in non-human primates. Understanding the physiological coding of visual information is not only the basis for image processing, image recognition, and VR; it also helps quantify human behavior to optimize applications, design, and art.
My research field is computer vision. We mainly study camera pose estimation from images for computer vision applications such as mixed reality and bullet-time vision.
In my lecture, you will study 3-D reconstruction from images and camera pose estimation techniques, and then develop a mixed reality application. Our research is expected to find use in entertainment and in daily life, augmenting the real 3-D space.
We will provide materials for students to understand the principles of virtual reality (VR), mixed reality (MR), augmented reality (AR), and X reality (cross reality: XR), on psychological, physiological, and functional levels.
In our laboratory, we are conducting experiments in psychophysics to build functional models of hearing. We can also say that we are conducting "virtual" auditory psychology experiments, by using simulations of changes in auditory function with aging, or by using a high-quality speech analysis-synthesis vocoder to generate unutterable speech sounds.
Our laboratory is only two years old. It is not a big lab, but the students help each other, learn new things, and broaden their research.
Advances in synthetic speech technology have accelerated the trend toward using sound as an interface. Graduates of our lab are expected to have a variety of career paths, from companies dealing with audio products to companies wanting to design new interfaces.
Naoki OHSHIMA has developed several communication robots that can take part in conversations with humans using verbal and non-verbal means of communication. Through collaborative research with the ICD-LAB at the Toyohashi University of Technology, he intends to construct a multi-party conversation system that incorporates three friendly conversational robots. His research revolves around constructing and evaluating social interactions of robots that can influence human behavior. His findings will help to further our understanding of the mechanisms of everyday human communication. Through his lecture, students will acquire the skills and tools necessary for building interactive and sociable future robots.