JICE Digital Products

 
Kendo Headgear Concussion Safety Evaluation

Authors: Megan Hsu, An-Phuc Ta, Satori Iwamoto, Harrison Chu, Heidi Spantzel, Arnold Spantzel, Hillary Chu, Hao Chen, Gary Chu
Institution: California Northstate University College of Medicine
Volume 9, Issue 1, pp. 533-540
ISSN: 2432-5422
Published: 29-12-2023

Abstract: Concussions present a potential risk in various contact sports, including the martial art of kendo. While regulations govern the production quality of shinai, the bamboo swords used in kendo, no active standards are enforced by an independent governing body for kendo armor to prevent head injuries such as concussions. Interestingly, two separate independent governing bodies exist to regulate and ensure the quality of commercially sold shinai, but none for armor. This study aimed to answer two questions: first, whether high-end kendo helmets offer superior protection against concussions compared to entry-level helmets, and second, whether adding extra padding inserts to the helmets enhances overall concussion protection. To conduct the study, we asked several kendo practitioners and one non-practitioner to deliver a series of head strikes with a shinai on a mannequin wearing different kendo helmets, both with and without additional protective padding. Our objective was to measure the force of each strike and assess the associated risk of concussion. The results indicated that both types of helmets sustained linear accelerations well below the concussion-risk threshold of 62.4 g. Notably, there were no statistically significant differences in the impact forces received between the helmets (p = 0.13). Interestingly, we found that commercial helmet padding inse…
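The comparison the abstract describes can be sketched as a small calculation: peak linear accelerations for two helmet groups are checked against the 62.4 g threshold, and Welch's t statistic quantifies the group difference. The sample values below are invented for illustration; they are not the study's measurements.

```python
import math

THRESHOLD_G = 62.4  # concussion-risk threshold cited in the abstract

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

entry_level = [21.5, 24.0, 19.8, 23.1, 22.4]  # hypothetical peak accelerations (g)
high_end = [20.9, 22.7, 21.3, 19.5, 21.8]

print(all(g < THRESHOLD_G for g in entry_level + high_end))  # True: all below threshold
print(round(welch_t(entry_level, high_end), 2))
```

Converting the t statistic to a p-value additionally requires the t distribution with Welch's degrees of freedom, which a statistics library would supply.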
The Development of a Smartphone Application based on Object Detection and Indoor Navigation to Assist Visually Impaired

Abstract: We present a mobile application designed to assist visually impaired persons (VIPs) in daily activities such as grocery shopping. The system uses the application for object and location recognition, enabling users to locate and identify items in a supermarket easily, and provides guidance through voice and vibration. We implemented a neural network, integrated Google's GPS API, and conducted simulations in a supermarket environment to demonstrate the application's effectiveness. The proposed mobile application has the potential to significantly improve the independence and quality of life of visually impaired individuals.

Keywords: Visual Impairment; Image Recognition; Indoor Navigation; Mobile Application

Development of a Japanese language learning support system for international students using video content

Authors: Tansuriyavong Suriyon, Kokoro Takamine, Zacharie Mbaitiga
Institution: National Institute of Technology, Okinawa College, Japan
Volume 8, Issue 2, pp. 537-539
ISSN: 2432-5465
Published: 30-12-2022

Abstract: In recent years, globalization has progressed and the number of international students in Japan has increased. However, many international students study while working part-time, and because of the impact of Covid-19, face-to-face conversation has become difficult, so regular study time alone is no longer sufficient for practicing Japanese. Meanwhile, video distribution services have become popular in recent years, and we considered that they could be used to build a language learning support system. This research aims to develop a language learning support system that uses the subtitle function of a video distribution service to improve learning motivation and to compensate for the lack of time to learn a foreign language (Japanese, in this case). This paper mainly reports on the development of the system using such video content.

Keywords: Language learning support system; Language Learning with Netflix; LLN extension; python VLC module.

Processing of Multi-valued Attributes Based on Sparse Matrix

Authors: Yunxiang Liu and Zigeng Wu
Institution: Shanghai Institute of Technology, Department of Computer Science and Information Engineering, China
Volume 8, Issue 2, pp. 533-536
ISSN: 2432-5465
Published: 30-01-2023

Abstract: Multi-valued attributes have always been difficult to handle in machine learning. Most models expect a fixed data format, so when an attribute is multi-valued they cannot be used directly. At the same time, a large number of multi-valued attributes are encountered in the construction of medical models; these attributes often represent patients with multiple symptoms. The processing methods for multi-valued attributes can be roughly divided into two categories: data preprocessing and algorithmic pattern matching. The solution to medical multi-valued attributes in this paper is mainly through preprocessing, from the perspective of multi-valued attribute representation and projection. The process is to represent the multi-valued data with a sparse matrix, convert it to a high-dimensional space, and then project it back to one dimension to complete the processing of such data.

Keywords: Multi-valued attributes; Sparse matrix; High-dimensional projection
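The preprocessing idea the abstract describes can be sketched minimally: a multi-valued "symptoms" attribute is expanded into a sparse binary vector in a high-dimensional space, then projected back to a single scalar feature. The vocabulary, weights, and projection rule below are illustrative assumptions, not the paper's actual method.

```python
VOCAB = ["fever", "cough", "fatigue", "nausea"]  # all observed symptom values

def to_sparse(symptoms):
    """Represent a multi-valued attribute as a sparse binary vector,
    stored as the set of active (one-hot) indices."""
    return {VOCAB.index(s) for s in symptoms if s in VOCAB}

def project(active, weights):
    """Project the high-dimensional sparse vector back to one dimension
    as a weighted sum over its active indices."""
    return sum(weights[i] for i in active)

weights = [0.5, 0.25, 0.15, 0.1]  # hypothetical per-symptom weights
x = to_sparse(["fever", "cough"])
print(sorted(x))             # active indices: [0, 1]
print(project(x, weights))   # scalar feature: 0.75
```

Storing only the active indices is what makes the representation sparse: a patient record touches a handful of entries of an otherwise huge symptom space.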

A Chinese text similarity algorithm based on Yake and neural network

Authors: Jiabao Lin, Wanjun Yu
Institution: Shanghai Institute of Technology, China
Volume 8, Issue 2, pp. 511-517
ISSN: 2432-5465
Published: 30-12-2022

Abstract: Traditional text similarity algorithms suffer from the large amount of text data and high computational complexity. Keywords are a high concentration of the thematic ideas in a text, and extracting them can reduce the complexity of similarity calculation. This paper therefore proposes a Chinese text similarity calculation method that integrates an improved YAKE with a neural network (YANN). To address the problem that the Yet Another Keyword Extractor (YAKE) algorithm is not well suited to Chinese keyword extraction, at the keyword-candidate stage a new feature value is first calculated for each word from its span, position, frequency, contextual relevance, and the number of distinct sentences in which it appears. Next, a keyword score is calculated for each candidate by combining these feature values, and the keywords are output in ascending order of score. Finally, the keyword set is fed into a trained word2vec model for vectorization; the keyword vectors are summed and averaged, and the similarity between texts is calculated by cosine similarity. The experimental results show that the proposed method outperforms other algorithms in Chinese text keyword extraction, and the similarity calculation results confirm the merit of the method.

Initial parameters of CNNs generated by Convolutional Sparse Representation with L1 error term

Authors: Satoshi Yoda, Yuto Tsukiashi, Yoshimitsu Kuroki
Institution: National Institute of Technology, Kurume College, Japan
Volume 8, Issue 2, pp. 518-520
ISSN: 2432-5465
Published: 30-12-2022

Abstract: Convolutional Sparse Representation (CSR) approximates images with the convolutional sum of dictionary filters and corresponding sparse coefficients. To improve the classification accuracy of Convolutional Neural Networks (CNNs), this paper proposes using the dictionary filters generated by CSR as initial parameters of the CNNs' filters, since the CSR filters express features of the test images. Our method also estimates the error term of CSR with the L1 norm instead of the L2 norm to increase robustness against outliers in the training datasets. Experiments classifying CIFAR-10 show that the CNN initialized by the proposed method with the L1 error term achieves the highest classification accuracy for small numbers of training images, compared with two alternatives: the proposed method with the L2 error term and Xavier initialization.

Keywords: Convolutional Sparse Representation, Convolutional Neural Network
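The robustness argument above has a simple one-dimensional analogue: the L2-optimal constant approximation of data is the mean, while the L1-optimal one is the median, and a single outlier drags the mean far more than the median. The toy data below is invented to illustrate this; it is not the paper's experiment.

```python
def l2_fit(xs):
    """Minimizer of sum((x - c)^2): the mean."""
    return sum(xs) / len(xs)

def l1_fit(xs):
    """Minimizer of sum(|x - c|): the median."""
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

clean = [1.0, 1.1, 0.9, 1.0, 1.05]
with_outlier = clean + [50.0]  # one corrupted sample

print(l2_fit(with_outlier))  # pulled far toward the outlier
print(l1_fit(with_outlier))  # barely moves
```

The same intuition carries over to CSR: an L1 error term keeps a few corrupted training samples from dominating the learned dictionary filters.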

Evaluation of the effects of cold and hot environmental temperatures on the distribution of whole facial skin temperature

Authors: Hiroaki Takahashi, Kosuke Oiwa and Akio Nozawa
Institution: Department of Science and Engineering, Aoyama Gakuin University
Volume 8, Issue 2, pp. 521-525
ISSN: 2432-5465
Published: 30-12-2022

Abstract: During the Covid-19 pandemic, fever detection using infrared thermography became widespread. A person with a fever is detected based on facial skin temperature, measured non-invasively and without restraint. Recent studies have pointed out that whole facial skin temperature, when measured immediately after entering a moderate environment from a cold one, is not practical for detecting persons with fever because it is greatly affected by the environmental temperature. However, the effects of cold and hot temperatures on the details of the entire face have not been evaluated. In this study, we evaluated in detail the effects of cold and hot environments on the distribution of whole facial skin temperature during acclimation to a moderate temperature. The results showed that the periorbital area and the side of the nose were least affected in the cold environment, and the side of the nose was least affected in the hot environment. These regions are therefore suggested to be suitable for core temperature estimation that takes environmental temperature into account.

Keywords: facial skin temperature, environmental temperature, facial…

Development of flexible Text Input Device Based on Image Processing for Each Level of Disability Person

Authors: Kinjo Hiroki, Zacharie Mbaitiga
Institution: National Institute of Technology, Okinawa College, Japan
Volume 8, Issue 2, pp. 526-528
ISSN: 2432-5465
Published: 30-12-2022

Abstract: This research aims to develop a flexible text input device for physically disabled persons using image processing. In the proposed method, a camera detects the user's arm and hand movements via image processing techniques in order to identify the movements that match the user's intention, i.e., what he or she wants to convey. Once the intention is detected, the user can input the corresponding text through the device. The input method can be changed according to the user's disability level and is expected to have a positive impact on the user's rehabilitation.

Keywords: Rehabilitation, Disability, Image processing, Input

Parallel Navigation of Multi-Drones using City Information for Search and Rescue Operations

Authors: Zacharie Mbaitiga, Tanaka Shosaku
Institution: National Institute of Technology, Okinawa College, Japan
Volume 8, Issue 2, pp. 529-532
ISSN: 2432-5465
Published: 30-12-2022

Abstract: Ongoing global warming is causing an increased number of disasters around the globe and will have a severe impact on our society in the years to come. Thanks to the rapid development of technology, drones have emerged, making Search and Rescue (SAR) operations more effective and efficient. In this paper, we propose an assessment of multi-drones for SAR operations. The proposed SAR approach uses an appropriate city model with a diverse population density, in which the multi-drones can be dispatched based on the city's geographic information during a disaster.

Keywords: Drone; Search and Rescue; Disaster; Operations

Emergence of Equal Cooperation Induced by Characteristics of Adolescence than Adulthood in an Interrole Conflict Game among Reinforcement Learning Agents

Authors: Takashi Sato
Institution: National Institute of Technology, Okinawa College, Japan
Volume 8, Issue 1, pp. 505-510
ISSN: 2432-5465
Published: 31-03-2022

Abstract: People usually belong to multiple groups, and in such situations "interrole conflicts" occur. Studies on interrole conflicts have mainly targeted subjects in adulthood, although such conflicts do not occur only at that developmental stage; only limited studies have addressed interrole conflicts during adolescence, and simulation studies on interrole conflicts have rarely been conducted. The purpose of this study is to clarify the difference between adolescence and adulthood in how interrole conflict situations are handled. We propose an interrole conflict game (ICG) as a new game-theoretic framework for dealing with interrole conflicts and adopt reinforcement learning agents with characteristics of adolescence or adulthood as players. Our multi-agent simulation (MAS) experiments suggest that a high learning rate and a low discount rate, which give rise to typical adolescent characteristics such as risk-taking, impulsivity, and novelty seeking, can play important roles in coping with an interrole conflict and in the emergence of equal cooperation among adolescents.

Keywords: interrole conflict game (ICG), reinforcement learning, multi-agent simulation (MAS), characteristics of adolescence or adulthood, emergence of equal cooperation.
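The learning rate and discount rate the abstract refers to appear in the standard tabular Q-learning update, sketched below with an "adolescent" profile (high learning rate, low discount rate) and an "adult" profile (the reverse). The profiles and state names are illustrative assumptions, not parameters taken from the paper's ICG setup.

```python
def q_update(q, state, action, reward, next_q_max, alpha, gamma):
    """One Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    key = (state, action)
    old = q.get(key, 0.0)
    q[key] = old + alpha * (reward + gamma * next_q_max - old)
    return q[key]

adolescent = {"alpha": 0.9, "gamma": 0.1}  # fast learning, short horizon
adult = {"alpha": 0.1, "gamma": 0.9}       # slow learning, long horizon

q_a, q_b = {}, {}
# identical experience: reward 1.0 for cooperating in state "s0"
va = q_update(q_a, "s0", "cooperate", 1.0, 0.0, **adolescent)
vb = q_update(q_b, "s0", "cooperate", 1.0, 0.0, **adult)
print(va, vb)  # the adolescent profile moves much faster toward the rewarded action
```

A high alpha with low gamma makes the agent react strongly to immediate payoffs, which is the mechanism the paper links to risk-taking and novelty seeking.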
Drone Guidance System based Simultaneous Localization and Mapping for Free Parking Space Localization

Authors: Tomei Naoto and Mbaitiga Zacharie
Institution: National Institute of Technology, Okinawa College, Japan
Volume 7, Issue 2, pp. 501-504
ISSN: 2432-5465
Published: 31-12-2021

Abstract: The rapid development of drones has changed our lifestyle by helping us deal with many issues we were unable to solve before, such as scanning a large farm area to track boars using a drone's integrated CMOS camera. In this research, we built a map for a drone guidance system in order to detect free parking spaces. Simultaneous Localization and Mapping (SLAM), the process of mapping an area while keeping track of the device's location within that area, is used as the control method; it enables autonomous flight even in environments where GPS cannot be used. In addition, we used ORB-SLAM (a versatile and accurate monocular SLAM) for real-time operation and Large-Scale Direct Monocular SLAM (LSD-SLAM), both of which are typical visual SLAM approaches. The experimental results show that, although the generated map was somewhat difficult to visualize, the camera's self-position estimation gave a rough route of the path, which enabled the drone to locate the free parking lots. Our future focus will be to implement an automatic flying system based on the generated map and to improve the ORB-SLAM features.

Keywords: SLAM; CMOS camera; ORB-SLAM; Parking lots; Localization; Guidance system

A Review of Multi-sensor Information Fusion Technology Research

Authors: Peng Lu, Fengzhi Dai
Institution: Tianjin University of Science and Technology, China
Volume 7, Issue 2, pp. 494-500
ISSN: 2432-5465
Published: 31-12-2021

Abstract: With the development of sensor technology, multi-sensor information fusion has become an important research direction in the sensor field. This paper reviews the development of multi-sensor information fusion technology, its concepts, the levels of fusion, and the main algorithms for information fusion. The development trends of multi-sensor information fusion algorithms and their characteristics, both now and in the future, are also discussed.

Keywords: Multi-sensor; Information fusion; Fusion algorithm
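One of the classic fusion algorithms such a review covers can be sketched in a few lines: inverse-variance weighting combines two noisy estimates of the same quantity into one estimate with lower variance (the static special case of a Kalman update). The sensor values below are invented for illustration.

```python
def fuse(m1, var1, m2, var2):
    """Inverse-variance weighted fusion of two sensor estimates.
    Returns the fused mean and its (reduced) variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    mean = (w1 * m1 + w2 * m2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return mean, var

# e.g. a radar range estimate (10.0 m, variance 1.0) and a camera
# range estimate (12.0 m, variance 4.0) of the same target
mean, var = fuse(10.0, 1.0, 12.0, 4.0)
print(mean, var)  # fused estimate lies nearer the more reliable sensor
```

Note that the fused variance is always smaller than either input variance, which is the core benefit fusion provides over any single sensor.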
Line-Following Service Robot Using Arduino with Facial Recognition for Offices

Authors: Nawin Najat Mohammed, Zana Ahmad Mohammed, and Ahmad Najam Faraj
Institution: Computer Department, College of Science, University of Sulaimani, KRG, Sulaimani, Iraq
Volume 7, Issue 2, pp. 444-447
ISSN: 2432-5465
Published: 31-12-2021

Abstract: The robot working environment has changed: robots are no longer restricted to factories and have gradually spread to urban areas. In this work, we designed a line-following service robot using an Arduino with facial recognition to transport objects among offices. The line-following robot proceeds by following a black path; it spots the path, holds objects, and detects and recognizes the picture of the target person to whom the object belongs. The robot is based on an Arduino UNO, DC motors, and batteries and is equipped with an ESP32, an IR sensor, a camera, and a buzzer, since it moves among offices. It can hold and transport objects, e.g., documents and letters, from a source to a destination by following a path and detecting and recognizing the target person who should receive the object. In addition, the robot's buzzer alerts the target person with a specific sound when the robot recognizes him or her.

Keywords: Arduino, IR sensor, Esp32, Camera, Microcontroller, Line-following robot, Actuator, Facial recognition, Service robot, Buzzer.
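The line-following behavior described above reduces to a simple two-sensor threshold rule. The actual implementation would be Arduino C++; this Python version only illustrates the control logic, and the sensor scale and threshold value are assumptions, not the paper's parameters.

```python
def steer(left_ir, right_ir, threshold=512):
    """Decide a motion command from two IR reflectance readings.
    A reading above `threshold` means the dark line is under that sensor."""
    on_left = left_ir > threshold
    on_right = right_ir > threshold
    if on_left and not on_right:
        return "turn_left"    # line drifted toward the left sensor: steer back
    if on_right and not on_left:
        return "turn_right"   # line drifted toward the right sensor
    return "forward"          # centered on the line (or line lost)

print(steer(800, 100))  # → turn_left
```

On the real robot, each returned command would map to a pair of DC-motor speed settings.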

Magneto-optical images for nondestructive inspection of plant steel structures using deep generative models

Authors: Ryosuke Hashimoto, Toshiya Itaya, Hiroki Kato, Junya Ito, Kyoma Nakagawa, Hitoshi Nishimura and Syunsuke Fukuchi
Institution: National Institute of Technology (Kosen), Suzuka College, Japan
Volume 7, Issue 2, pp. 448-454
ISSN: 2432-5465
Published: 31-12-2021

Abstract: Deterioration of infrastructure built during Japan's period of high economic growth poses significant maintenance challenges. The development of nondestructive sensing and imaging technologies optimized for the materials and structures of buildings is underway to enable efficient and reliable infrastructure maintenance. However, owing to the large number of materials and structures used in buildings, as well as the variety of target defects, many basic studies have yet to reach practical use. In this study, we developed a magneto-optical (MO) sensor to visualize cracks in plant steel structures and automatically detected defects by applying deep learning to the obtained MO images. As a preprocessing step for detecting defect anomalies with AI, we focused on nondestructive inspection using MO imaging and applied a novel image filtering process. As a result, by automatically evaluating several types of MO images with AI, the accuracy of defect identification was improved.

Keywords: Artificial intelligence; Variational autoencoder; Nondestructive inspection; Magneto-optical imaging

Object Searching Robot Controlled by Edge-AI

Authors: Ryosuke Miyata, Osamu Fukuda, Nobuhiko Yamaguchi and Hiroshi Okumura
Institution: Saga University, Graduate School of Science and Engineering, Japan
Volume 7, Issue 2, pp. 455-461
ISSN: 2432-5465
Published: 31-12-2021

Abstract: This study proposes and develops an edge-AI-based autonomous mobile robot built on open-source software. The robot is capable of voice and object recognition; it can detect and approach an object specified by the user's voice. Because the robot is controlled by voice commands, the user can operate it intuitively. We used a robot operating system to facilitate development, and all functions, including voice recognition, object recognition, and motor control, were implemented on the edge AI computer using open-source software. We conducted preliminary experiments to verify the performance of the developed system.

Keywords: Mobile robot, Edge-AI, Open-source software, Image recognition

 

© 2017 Applied Science and Computer Science Publications.