Abstract: In recent years, globalization has progressed and the number of international students in Japan has increased. However, many international students study while working part-time, and due to the impact of Covid-19, face-to-face conversation has become difficult. As a result, regular study time alone has become insufficient for practicing Japanese. Meanwhile, video distribution services have become popular in recent years, and we considered that they could be used to build a language learning support system. This research aims to develop a language learning support system that uses the subtitle function of a video distribution service to improve learning motivation and to address the lack of time to learn a foreign language (Japanese, in this case). This paper mainly reports on the development of the system using such video content.
Keywords: Language learning support system; Language Learning with Netflix; LLN extension; Python VLC module.
Abstract: Multi-valued attributes have always been difficult to handle in machine learning. Most models expect a fixed data format, so when an attribute is multi-valued, they cannot be applied directly. At the same time, a large number of multi-valued attributes are encountered when building medical models; these attributes often represent patients with multiple symptoms. Approaches to multi-valued attributes can be roughly divided into two categories: data preprocessing and algorithmic pattern matching. This paper handles medical multi-valued attributes mainly through preprocessing, from the perspective of multi-valued attribute representation and projection. The process is to represent the multi-valued data as a sparse matrix, convert it to a high-dimensional space, and then project it back to one dimension to complete the processing of such data.
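The representation-and-projection process described above can be sketched as follows. The symptom vocabulary, patient records, and random projection direction are illustrative assumptions, not the paper's data or its learned projection; the indicator matrix is kept dense here for brevity, whereas a sparse format such as `scipy.sparse.csr_matrix` would be used at scale.

```python
import numpy as np

# Toy patient records: each carries a variable-length set of symptom codes
# (hypothetical data, for illustration only).
records = [["fever", "cough"], ["fever"], ["cough", "fatigue", "nausea"]]
vocab = sorted({s for r in records for s in r})
index = {s: j for j, s in enumerate(vocab)}

# Step 1: lift the multi-valued attribute into a high-dimensional space as a
# binary indicator matrix (one row per patient, one column per symptom).
X = np.zeros((len(records), len(vocab)))
for i, r in enumerate(records):
    for s in r:
        X[i, index[s]] = 1.0

# Step 2: project back to one dimension. A fixed random direction stands in
# for whatever projection the paper actually uses.
rng = np.random.default_rng(0)
w = rng.normal(size=len(vocab))
one_dim = X @ w        # one scalar feature per patient
print(one_dim.shape)   # one value per record
```

The end result is a single numeric column that fixed-format models can consume in place of the variable-length symptom sets.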
Abstract: Traditional text similarity algorithms suffer from large amounts of text data and high complexity. Keywords are highly concentrated expressions of the thematic ideas in a text, and extracting them can reduce the complexity of text similarity calculation. Therefore, this paper proposes a Chinese text similarity calculation method that integrates an improved YAKE and a neural network (YANN). To address the problem that the Yet Another Keyword Extractor (YAKE) algorithm is not well suited to Chinese keyword extraction, the keyword candidate stage is redesigned. First, a new feature value for each word is calculated using word span, position, frequency, word context relevance, and the number of distinct sentences in which the word appears. Next, the keyword score of each candidate word is calculated by combining all the feature values, and the keywords are output in ascending order of score. Finally, the keyword set is input into a trained word2vec model for vectorization: the keyword vectors obtained from the trained word2vec model are summed and averaged, and the similarity between texts is calculated by cosine similarity. Experimental results show that the proposed method outperforms other algorithms in Chinese keyword extraction, and the similarity calculation results confirm the merit of the method.
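The final vectorization-and-similarity step can be sketched with toy embeddings; the three-dimensional vectors and the keywords below are illustrative placeholders, not the output of an actual trained word2vec model.

```python
import numpy as np

# Hypothetical keyword embeddings standing in for a trained word2vec model.
emb = {
    "economy": np.array([0.9, 0.1, 0.0]),
    "market":  np.array([0.8, 0.2, 0.1]),
    "soccer":  np.array([0.0, 0.9, 0.4]),
}

def text_vector(keywords):
    """Sum and average the keyword vectors, as the method describes."""
    return np.mean([emb[k] for k in keywords], axis=0)

def cosine(a, b):
    """Cosine similarity between two text vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

v1 = text_vector(["economy", "market"])
v2 = text_vector(["economy", "soccer"])
print(round(cosine(v1, v2), 3))
```

Each text is thus reduced to the centroid of its extracted keyword vectors, and similarity is a single cosine between centroids.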
Abstract: Convolutional Sparse Representation (CSR) approximates images with the convolutional sum of dictionary filters and corresponding sparse coefficients. To improve the classification accuracy of Convolutional Neural Networks (CNNs), this paper proposes using the dictionary filters generated by CSR as initial parameters of the CNN filters, since the CSR filters express features of the test images. Our method also estimates the error term of CSR with the L1 norm instead of the L2 norm to increase robustness against outliers in the training datasets. Experiments classifying CIFAR-10 show that the CNN using the initial parameters generated by the proposed method with the L1 error term achieves the highest classification accuracy for small numbers of training images, compared with two baselines: the proposed method with the L2 error term and Xavier initialization.
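The robustness argument for the L1 error term can be illustrated with a one-dimensional analogue (not the CSR optimization itself): when fitting a single constant to data, the L2 loss is minimized by the mean, which an outlier drags away, while the L1 loss is minimized by the median, which is barely affected.

```python
import numpy as np

# Fitting a constant c to residual-like data containing one outlier.
# argmin_c sum (x - c)^2  -> the mean; argmin_c sum |x - c|  -> the median.
data = np.array([1.0, 1.1, 0.9, 1.0, 10.0])  # one gross outlier

c_l2 = data.mean()      # pulled toward the outlier
c_l1 = np.median(data)  # barely affected

print(c_l2, c_l1)  # 2.8 1.0
```

The same effect, applied to the CSR error term, keeps outlying training images from distorting the learned dictionary filters.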
Abstract: During the Covid-19 pandemic, fever detection using infrared thermography became widespread. A person with a fever is detected based on facial skin temperature measured in a non-invasive, restraint-free manner. Recent studies have pointed out that the whole-face skin temperature, when measured immediately after entering a moderate environment from a cold one, is not practical for detecting persons with fever because it is greatly affected by the environmental temperature. However, the effects of cold and hot temperatures on individual regions of the face have not been evaluated. In this study, the effects of cold and hot environments, and of subsequent acclimation to a moderate temperature, on the whole-face skin temperature distribution were evaluated in detail. The results showed that the periorbital area and the side of the nose were least affected in the cold environment, and the side of the nose was least affected in the hot environment. These regions are therefore suggested to be suitable for core temperature estimation that takes the environmental temperature into account.
Abstract: This research aims to develop a flexible text input device for physically disabled persons using image processing. In the proposed method, a camera detects the user's arm and hand movements via image processing techniques in order to match these movements to what the user wants to convey. Once the intention is detected, the user can input the corresponding data through the device. The input method can be changed according to the user's disability level and is expected to have a positive impact on the user's rehabilitation.
Abstract: Ongoing global warming is causing an increased number of disasters around the globe and will have a severe impact on our society in the years to come. Thanks to the rapid development of technology, drones have emerged that make Search and Rescue (SAR) operations more effective and efficient. In this paper, we propose an assessment of multi-drone SAR operations. The proposed approach uses an appropriate city model with a diverse population density, in which multiple drones can be dispatched based on the city's geographic information for SAR operations during a disaster.
Keywords: Drone; Search and Rescue; Disaster; Operations
Abstract: People usually belong to multiple groups, and in such situations "interrole conflicts" occur. Studies on interrole conflicts have mainly targeted adults, although such conflicts do not occur only after the developmental stage of adulthood. In addition, limited studies have been conducted on interrole conflicts during adolescence, and simulation studies of interrole conflicts have rarely been conducted. The purpose of this study is to clarify the difference between adolescence and adulthood in how interrole conflict situations are handled. We propose an interrole conflict game (ICG) as a new game-theoretic framework for interrole conflicts and adopt reinforcement learning agents with the characteristics of adolescence or adulthood as players. Our multi-agent simulation (MAS) results suggest that a high learning rate and a low discount rate, which produce typical adolescent characteristics such as risk-taking, impulsivity, and novelty seeking, can play important roles in coping with an interrole conflict and in the emergence of equal cooperation among adolescents in interrole conflict situations.
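A single "adolescent-like" agent of the kind described can be sketched as a tabular Q-learner with a high learning rate and a low discount rate. The payoffs and the stateless setting below are toy placeholders, not the ICG from the paper.

```python
import random

random.seed(0)
ACTIONS = ["cooperate", "defect"]
alpha, gamma, eps = 0.9, 0.1, 0.2   # high learning rate, low discount rate
Q = {a: 0.0 for a in ACTIONS}

def reward(action):
    # Toy stateless payoff; the actual ICG payoff structure is not reproduced.
    return 3.0 if action == "cooperate" else 1.0

for _ in range(200):
    # Epsilon-greedy action selection.
    a = random.choice(ACTIONS) if random.random() < eps else max(Q, key=Q.get)
    r = reward(a)
    # Standard Q-learning update; with a single state, the bootstrapped
    # future value is simply max(Q.values()).
    Q[a] += alpha * (r + gamma * max(Q.values()) - Q[a])

print(max(Q, key=Q.get))
```

With the high alpha the agent commits to new estimates quickly and with the low gamma it discounts future payoff heavily, which is the parameter regime the abstract associates with adolescent behavior.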
Keywords: interrole conflict game (ICG), reinforcement learning, multi-agent simulation (MAS), characteristics of adolescence or adulthood, emergence of equal cooperation.
Abstract: The rapid development of drones has changed our lifestyle by helping us deal with many issues we were unable to solve before, such as scanning a large farm area to track boars using a drone-integrated CMOS camera. In this research, we built a map for a drone guidance system in order to detect a free parking space. Simultaneous Localization and Mapping (SLAM) is used as the control method. SLAM is the process of mapping an area while keeping track of the location of the device within that area; it enables autonomous flight even in environments where GPS cannot be used. In addition, we used ORB-SLAM (a Versatile and Accurate Monocular SLAM) for real-time operation and Large-Scale Direct Monocular SLAM (LSD-SLAM), both of which are typical visual SLAM methods. The experimental results show that although the generated map was difficult to visualize, the camera's self-position estimation gave a rough route of the path, enabling the drone to locate the free parking lots. Our future focus will be to implement an automatic flying system based on the generated map and to improve the ORB-SLAM features.
Keywords: SLAM; CMOS camera; ORB-SLAM; parking lots; localization; guidance system
Abstract: With the development of sensor technology, multi-sensor information fusion has become an important research direction in the sensor field. This paper reviews the development of multi-sensor information fusion technology, its concepts, the levels of fusion, and the main algorithms for information fusion. The development trends of multi-sensor information fusion algorithms and their characteristics, now and in the coming years, are also discussed.
Keywords: Multi-sensor; Information fusion; Fusion algorithm
Abstract: The robot working environment has changed: robots are no longer restricted to factories and have gradually spread to urban areas. In this work, we designed a line-following service robot that uses an Arduino and facial recognition to transport objects among offices. The robot proceeds by following a black path: it spots the path, holds objects, and detects and recognizes the face of the target person to whom the object belongs. The robot is based on an Arduino UNO, DC motors, and batteries, and, since it moves among offices, it is equipped with sensors, an ESP32, an IR sensor, a camera, and a buzzer. It can hold and transport objects such as documents and letters from a source to a destination by following a path and recognizing the target person who should receive the object. In addition, when the robot recognizes the target person, the buzzer sounds a specific alarm to notify him or her.
Keywords: Arduino, IR sensor, Esp32, Camera, Microcontroller, Line-following robot, Actuator, Facial recognition, Service robot, Buzzer.
Abstract: In Japan, measures against the deterioration of infrastructure built during the period of high economic growth face significant maintenance challenges. The development of nondestructive sensing and imaging technologies optimized for the materials and structures of buildings is underway to contribute to efficient and reliable infrastructure maintenance. However, owing to the large number of materials and structures used in buildings, as well as the variety of defects to be targeted, many basic studies have yet to reach practical use. In this study, we developed a magneto-optical (MO) sensor to visualize cracks in plant steel structures and automatically detected defects by applying deep learning to the obtained MO images. As preprocessing for AI-based defect detection, we focused on nondestructive inspection using MO imaging and applied a novel image filtering process. As a result of automatically evaluating several types of MO images with the AI, the accuracy of defect identification was improved.
Abstract: This study proposes and develops an edge-AI-based autonomous mobile robot built on open-source software. The robot is capable of voice and object recognition; it can detect and approach an object specified by the user's voice. Because the robot is controlled by voice commands, the user can control it intuitively. In the present study, we used a robot operating system to facilitate development. All functions, including voice recognition, object recognition, and motor control, were implemented on the edge AI computer using open-source software. We conducted preliminary experiments to verify the performance of the developed system.
Keywords: Mobile robot, Edge-AI, Open-source software, Image recognition
Abstract: Many systems that provide automatic evaluation of and feedback on electric guitar performance have been developed. However, they have a serious weakness: only "timing" and "pitch" are considered in the evaluation, whereas a wide range of factors are involved in human evaluation. Previous studies have proposed automatic evaluation methods to address this problem, but they cannot evaluate sounds produced with special electric guitar techniques. In this study, we propose a method for automatic proficiency estimation of vibrato on the electric guitar. We extract acoustic features focusing on the peaks of the mel-scaled fundamental frequency: the number of peaks and the mean and variance of peak width and height. We then regress the human evaluation values on the extracted acoustic features with a relevance vector machine (RVM). As a result, we obtained a regression with a coefficient of determination of 0.723. This result indicates that the extracted features are highly relevant to human evaluation values and allows a tentative evaluation of electric guitar vibrato.
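The peak-based features listed above can be sketched on a synthetic vibrato-like pitch contour; the contour, the local-maximum peak picker, and the width/height definitions below are illustrative stand-ins, not the paper's mel-scale F0 extraction.

```python
import numpy as np

# Synthetic 2-second pitch contour: 440 Hz tone with 5 Hz vibrato, 8 Hz depth.
t = np.linspace(0, 2, 400)
f0 = 440 + 8 * np.sin(2 * np.pi * 5 * t)

# Simple local-maximum peak picking (stand-in for a real peak detector).
peaks = np.where((f0[1:-1] > f0[:-2]) & (f0[1:-1] > f0[2:]))[0] + 1

heights = f0[peaks] - f0.mean()   # peak height above the mean contour
widths = np.diff(peaks)           # spacing between adjacent peaks, in samples

features = {
    "n_peaks": len(peaks),
    "height_mean": heights.mean(), "height_var": heights.var(),
    "width_mean": widths.mean(),   "width_var": widths.var(),
}
print(features["n_peaks"])  # 10 vibrato cycles -> 10 peaks
```

A regular, deep vibrato yields many evenly spaced peaks with low width/height variance, which is exactly what such a feature vector lets the regressor reward.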
Keywords: Electric Guitar, Vibrato, Proficiency Estimation, RVM, Mel Fundamental Frequency, Audio Signal Processing, Machine Learning, Music Information…
Abstract: Gastric cancer is one of the most serious cancers, affecting and killing many people around the world every year. Early treatment of gastric cancer dramatically improves the survival rate, and endoscopy has become an important tool for early detection. Since invasive gastric cancer, or the edge of an invasive gastric cancer, is difficult to find with conventional visible-light endoscopy, near-infrared imaging, which is bringing great progress to the medical field, has received attention in recent years. To apply near-infrared hyperspectral imaging (NIR-HSI) in real time, wavelength feature extraction is important because a large amount of data must be analyzed. The purpose of this study is to detect gastric cancer using NIR-HSI and to select suitable wavelengths in the near-infrared region (1000–1600 nm). NIR-HSI was used to acquire data from six gastric cancer specimens, and each pixel of the hyperspectral image was labeled as normal or tumor based on the histopathological diagnosis. Four wavelengths were extracted from the 95 wavelengths using the least absolute shrinkage and selection operator (LASSO) method. Supervised learning with a support vector machine was performed both using all 95 wavelengths and using the four selected wavelengths. In both cases, the approximate location of the tumor could be identified, indicating that an appropriate wavelength set could be selected.
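The LASSO wavelength-selection step can be sketched on synthetic spectra; the data sizes, the true coefficient pattern, and the hand-rolled coordinate-descent solver below are illustrative assumptions (in practice a library solver such as sklearn's `Lasso` would be used, and the subsequent SVM stage is omitted here).

```python
import numpy as np

# Synthetic "spectra": 200 pixels x 20 wavelengths (toy sizes, not the
# study's 95 NIR wavelengths). Only four wavelengths carry signal.
rng = np.random.default_rng(1)
n, p = 200, 20
X = rng.normal(size=(n, p))
true_w = np.zeros(p)
true_w[[2, 7, 11, 15]] = [1.5, -2.0, 1.0, 0.8]
y = X @ true_w + 0.1 * rng.normal(size=n)

def lasso_cd(X, y, lam, iters=200):
    """Coordinate descent for 0.5*||y - Xw||^2 + lam*||w||_1."""
    w = np.zeros(X.shape[1])
    col_norm2 = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(X.shape[1]):
            r = y - X @ w + X[:, j] * w[j]  # residual excluding feature j
            rho = X[:, j] @ r
            # Soft-thresholding: the L1 penalty zeroes out weak wavelengths.
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_norm2[j]
    return w

w = lasso_cd(X, y, lam=20.0)
selected = np.flatnonzero(np.abs(w) > 1e-6)
print(selected)  # wavelengths retained by the L1 penalty
```

The L1 penalty drives uninformative wavelength coefficients exactly to zero, leaving a small subset that a downstream classifier can use in real time.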