Audio-based Wearable Contexts Recognition System for Apnea Detection
Abstract: Apnea, or Sleep Apnea Syndrome, is a condition in which a person unconsciously stops breathing during sleep for longer than a certain duration. Long-term and repeated apnea events induce various impairments. However, apnea diagnosis in hospitals is an intensive and complicated procedure, which leaves the disease largely undiagnosed and poorly recognized. Existing wearable devices for apnea detection mostly use heartbeat signal patterns and SpO2 levels; however, since apnea is a respiratory impairment, breathing patterns are arguably the most direct signal for its detection. Several recent studies have reported that swallowing frequency during sleep can increase with apnea severity, yet very few wearable devices use swallowing to detect apnea. This study therefore proposes a wearable system that recognizes human contexts such as breathing, heartbeat patterns, and swallowing using an audio sensor. Experiments were conducted to compare and select the most suitable system parameters, such as window size, audio feature type, and classification algorithm. A prototype device was built and was able to detect breathing, swallowing, heartbeat, oral sounds, and body movement. The best accuracy was 76.9%, obtained with a 1 s window and Mel-Frequency Cepstral Coefficient (MFCC) features from contact-microphone data.
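As a rough illustration of the windowed MFCC-plus-classifier pipeline the abstract describes, the following Python sketch extracts MFCC features from 1 s audio windows. It assumes librosa and scikit-learn; the 16 kHz sampling rate, random-forest classifier, and label names are illustrative stand-ins, not the authors' exact setup.

```python
# Minimal sketch of the windowed MFCC feature pipeline described above.
# Sampling rate, classifier, and label set are assumptions, not the paper's setup.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

SR = 16000          # assumed contact-microphone sampling rate
WIN = SR            # 1-second analysis window, as in the abstract

def mfcc_features(window, sr=SR, n_mfcc=13):
    """Mean MFCC vector for one 1 s audio window."""
    m = librosa.feature.mfcc(y=window, sr=sr, n_mfcc=n_mfcc)
    return m.mean(axis=1)

def extract(signal):
    """Split a recording into non-overlapping 1 s windows and compute MFCC features."""
    windows = [signal[i:i + WIN] for i in range(0, len(signal) - WIN + 1, WIN)]
    return np.stack([mfcc_features(w) for w in windows])

# X_train: (n_windows, 13) MFCC features; y_train: context labels such as
# "breathing", "swallowing", "heartbeat", "oral", "movement" (hypothetical names).
# clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
# predictions = clf.predict(extract(test_signal))
```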
Evaluation of Markerless Gait Analysis Method Including Out of Camera Plane Rotate Motion During Gait
Abstract: An RGB-camera gait analysis system that requires no markers, large space, or preparation can provide valuable information for treatment decisions in clinical settings. In this paper, we propose a simple markerless gait analysis method that can take measurements even when the foot rotation angle changes. The proposed method combines OpenPose (OP) and IMU measurement data with a complementary filter as a sensor fusion method to improve the measurement accuracy of the ankle joint angle, which is expected to be less accurate for gait with a large foot rotation angle. Nine healthy adult males walked at a self-selected comfortable speed under two different foot-progression-angle gait conditions. Spatio-temporal parameters and lower-limb joint angles were measured under both conditions, and the mean absolute error (MAE) and the coefficient of cross-correlation (CCC) were calculated to evaluate accuracy. The spatio-temporal parameters measured by the proposed method had a lower MAE than those of a conventional markerless method. The hip and knee joint angle curves were very strongly correlated with those measured by a three-dimensional motion capture system (CCC > 0.7), and their MAE was smaller than that of a conventional markerless method. In particular, the proposed method improved the measurement accuracy of the ankle angle by using two IMUs. The experimental results suggest that the proposed method enables simple and accurate measurement even when the rotation angle of the foot changes. Although the method has some limitations, it has great potential as a simple and reliable gait analysis system in the clinical field.
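A minimal sketch of the complementary-filter sensor fusion described in the abstract, fusing a per-frame OpenPose joint-angle estimate with an IMU gyroscope rate. The filter coefficient, frame rate, and input arrays are assumptions, not the authors' parameters.

```python
# Complementary filter: gyro integration (high-frequency path) corrected by the
# camera-based angle (low-frequency path). Parameters below are illustrative.
import numpy as np

def complementary_filter(op_angle, gyro_rate, dt=1/60, alpha=0.98):
    """Fuse per-frame OpenPose joint angles (deg) with IMU angular rate (deg/s)."""
    op_angle = np.asarray(op_angle, dtype=float)
    gyro_rate = np.asarray(gyro_rate, dtype=float)
    fused = np.empty_like(op_angle)
    fused[0] = op_angle[0]                                   # start from the camera estimate
    for k in range(1, len(op_angle)):
        gyro_pred = fused[k - 1] + gyro_rate[k] * dt         # integrate the gyro
        fused[k] = alpha * gyro_pred + (1 - alpha) * op_angle[k]  # correct with OpenPose
    return fused
```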
Effect of the random forest with recursive feature elimination for breast cancer classification using a WDBC dataset
Abstract: Breast cancer is a leading cause of death among women aged 40–55, and computer-aided diagnosis systems are needed for breast cancer classification. In a previous study, the random forest, an ensemble learning method, was reported to be one of the most promising classifiers for breast cancer classification using the Wisconsin Diagnostic Breast Cancer (WDBC) dataset. This paper presents the effect of random forest with recursive feature elimination for breast cancer classification on the WDBC dataset, compared with state-of-the-art ensemble learning techniques such as XGBoost and LightGBM.
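The following sketch shows random forest with recursive feature elimination on the WDBC dataset using scikit-learn (whose load_breast_cancer is the WDBC data). The number of retained features and the hyperparameters are illustrative, not the paper's configuration.

```python
# Random forest + recursive feature elimination (RFE) on WDBC, illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)           # 569 samples, 30 features

rf = RandomForestClassifier(n_estimators=200, random_state=0)
selector = RFE(rf, n_features_to_select=10, step=1)  # keep the 10 strongest features
X_reduced = selector.fit_transform(X, y)

scores = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                         X_reduced, y, cv=5)
print("5-fold accuracy with RFE-selected features: %.3f" % scores.mean())
```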
The Comparison of Two-Classes Basic Emotion Classification Methods Using a Single Heart rate change Parameter
Abstract: Emotion is a multifaceted phenomenon that plays a critical role in enhancing quality of life by influencing motivation, perception, cognition, creativity, empathy, education, and decision-making. Negative emotions such as anger, shame, and anxiety are frequently triggered by stress and are often described as destructive and threatening. Research into emotion recognition therefore remains an important topic. This study enrolled fifteen male university students. Heart rate was determined using a fingertip photoplethysmograph (PPG), and the International Affective Picture System (IAPS) was used to elicit emotion changes. The Self-Assessment Manikin (SAM) was used to evaluate the subjects' emotions during the psychological assessment. For pre-processing, an FIR band-pass filter was applied, and a single parameter called heart rate change (HRC) was extracted from the PPG recording. Rather than employing complex classification techniques, we used binary classifiers such as logistic regression, Naïve Bayes, and Support Vector Machine (SVM) to distinguish between negative and positive emotions. We found that Naïve Bayes achieved accuracy and Area Under the Curve (AUC) above 50%, outperforming the other classifiers, for test sizes of 30%, 40%, and 50%, particularly for happiness (positive emotion) versus anger (negative emotion). We conclude that HRC as a single parameter can serve as a basic emotion classifier, though further research is necessary.
Keywords: Emotions; Binary Classifier; SAM; Photoplethysmograph
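A minimal sketch of the binary HRC-based classification compared in the abstract, using scikit-learn's logistic regression, Naïve Bayes, and SVM on a single-feature input. The hrc array and labels below are hypothetical placeholders, and the 30/40/50% splits mirror the test sizes mentioned.

```python
# Compare binary classifiers on a single HRC feature; data are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

hrc = np.random.randn(60, 1)          # placeholder HRC values (one feature per trial)
labels = np.random.randint(0, 2, 60)  # 0 = negative (anger), 1 = positive (happiness)

for test_size in (0.3, 0.4, 0.5):
    X_tr, X_te, y_tr, y_te = train_test_split(
        hrc, labels, test_size=test_size, stratify=labels, random_state=0)
    for name, clf in [("LogReg", LogisticRegression()),
                      ("NaiveBayes", GaussianNB()),
                      ("SVM", SVC(probability=True))]:
        clf.fit(X_tr, y_tr)
        proba = clf.predict_proba(X_te)[:, 1]
        print(name, test_size,
              "acc=%.2f" % accuracy_score(y_te, clf.predict(X_te)),
              "auc=%.2f" % roc_auc_score(y_te, proba))
```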
Classification of Breast Pathology based on Transfer Learning by MobileNet
Abstract: Breast cancer is the most common cancer among women worldwide. Artificial intelligence techniques can effectively improve the efficiency of cancer diagnosis. However, computer-aided diagnosis (CAD) faces problems such as long training times for high-resolution pathological images and a shortage of labeled training data. In this article, a transfer learning model for the pathological diagnosis of breast cancer is developed to overcome these problems. MobileNet was adopted to train on breast pathology images at four magnifications (40X, 100X, 200X, 400X), and a transfer learning framework was established to distinguish benign from malignant breast pathologies as well as eight subtypes. The accuracy of the two-class model at the best magnification (200X) reaches 91.24%, and the average accuracy is 89.31%. The multi-class model for the eight subtypes of pathological slices also achieved satisfactory results. This shows that the presented transfer learning framework has great potential for the CAD of breast cancer.
Keywords: Breast cancer; Pathological image; Computer aided diagnosis; Transfer learning.
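A minimal sketch of MobileNet transfer learning for the benign/malignant two-class task, using tf.keras. The input size, frozen backbone, and classification head are common defaults assumed here, not the paper's exact architecture.

```python
# MobileNet backbone with a frozen ImageNet-pretrained body and a new binary head.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                      # freeze the pretrained backbone

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(1, activation="sigmoid"),  # benign vs. malignant
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# train_ds / val_ds would be tf.data datasets of pathology image patches, e.g.
# built with tf.keras.utils.image_dataset_from_directory(...).
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```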
A Review of Multi-sensor Information Fusion Technology Research
Abstract: With the development of sensor technology, multi-sensor information fusion has become an important research direction in the sensor field. This paper reviews the development of multi-sensor information fusion technology, its basic concepts, the levels of fusion, and the main fusion algorithms. The characteristics of current multi-sensor fusion algorithms and their future development trends are also discussed.
Keywords: Multi-sensor; Information fusion; Fusion algorithm
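As a concrete illustration of the kind of fusion algorithm such a review covers, the sketch below shows inverse-variance weighted averaging, one of the simplest data-level fusion rules. The sensor readings and variances are made-up numbers.

```python
# Inverse-variance weighted fusion of independent estimates of the same quantity.
def fuse(readings, variances):
    """Return the minimum-variance linear combination of the readings."""
    weights = [1.0 / v for v in variances]
    return sum(w * r for w, r in zip(weights, readings)) / sum(weights)

# Two temperature sensors measuring the same target: the less noisy one dominates.
print(fuse([20.3, 19.8], [0.4, 0.1]))
```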
Line-Following Service Robot Using Arduino with Facial Recognition for Offices
Abstract: The robot working environment has changed: robots are no longer restricted to factories and have gradually spread to urban areas. In this research work, we designed a line-following service robot using an Arduino and facial recognition to transport objects between offices. The robot proceeds by following a black path; it detects the path, holds objects, and recognizes the target person to whom each object belongs. The service robot is based on an Arduino UNO, DC motors, and batteries, and is equipped with an ESP32, an IR sensor, a camera, and a buzzer for moving between offices. The robot can hold and transport objects such as documents and letters from a source to a destination by following a path and detecting and recognizing the target person who should receive the object. In addition, the buzzer sounds a specific alarm to notify the target person once the robot recognizes him or her.
Keywords: Arduino, IR sensor, Esp32, Camera, Microcontroller, Line-following robot, Actuator, Facial recognition, Service robot, Buzzer.
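An illustrative sketch of the face-detection and recognition step only, shown with OpenCV in Python. The abstract does not specify the on-robot implementation, so the Haar cascade, LBPH recognizer, model file, and camera index below are assumptions.

```python
# Detect faces in a camera frame and check whether the target person is present.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()   # requires opencv-contrib-python
recognizer.read("target_person_model.yml")          # hypothetical pre-trained model

cap = cv2.VideoCapture(0)                           # assumed camera index
ret, frame = cap.read()
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        label, confidence = recognizer.predict(gray[y:y + h, x:x + w])
        if confidence < 60:                         # lower = better match for LBPH
            print("Target person recognized: sound the buzzer")
cap.release()
```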
Magneto-optical images for nondestructive inspection of plant steel structures using deep generative models
Abstract: In Japan, measures against the deterioration of infrastructure built during the period of high economic growth face significant maintenance challenges. Nondestructive sensing and imaging technologies suited to the materials and structures of buildings are being developed to support efficient and reliable infrastructure maintenance. However, owing to the large number of building materials and structures, as well as the many types of defects to be targeted, much of this basic research has yet to reach practical use. In this study, we developed a magneto-optical (MO) sensor to visualize cracks in plant steel structures and automatically detected defects by applying deep learning to the acquired MO images. As preprocessing for AI-based anomaly detection, we applied a novel image filtering process to the MO inspection images. As a result of automatically evaluating several types of MO images with the AI, the accuracy of defect identification was improved.
Keywords: artificial intelligence; variational autoencoder; nondestructive inspection; magneto-optical imaging
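An illustrative sketch of reconstruction-error anomaly detection on MO images. The keywords point to a variational autoencoder; for brevity this uses a plain convolutional autoencoder trained on defect-free images in the same spirit, with image size and architecture assumed.

```python
# Train an autoencoder on defect-free MO images; defects then show up as
# regions with high reconstruction error. Architecture is an assumption.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_autoencoder(shape=(128, 128, 1)):
    inp = layers.Input(shape=shape)
    x = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

# normal_imgs: defect-free MO images scaled to [0, 1], shape (N, 128, 128, 1)
# ae = build_autoencoder()
# ae.fit(normal_imgs, normal_imgs, epochs=20, batch_size=16)
# score = np.mean((test_img - ae.predict(test_img[None])[0]) ** 2)  # high = likely defect
```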
Object Searching Robot Controlled by Edge-AI
Abstract: This study proposes and develops an edge-AI-based autonomous mobile robot built on open-source software. The robot is capable of voice and object recognition; it can detect and approach an object specified by a user's voice. Because the robot is controlled by voice commands, the user can operate it intuitively. In the present study, we used a robot operating system to facilitate development. All functions, including voice recognition, object recognition, and motor control, were implemented on the edge-AI computer using open-source software. We conducted preliminary experiments to verify the performance of the developed system.
Keywords: Mobile robot, Edge-AI, Open-source software, Image recognition
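An illustrative reduction of the "detect and approach" behavior to a proportional steering rule driven by a detected bounding box. The frame width, gains, and the detector itself are assumptions, since the abstract does not detail the control stack.

```python
# Turn toward the detected object and drive forward; rotate in place to search
# when nothing is detected. All constants are illustrative.
FRAME_WIDTH = 640
K_TURN = 0.005          # steering gain (rad/s per pixel of horizontal error)
FORWARD_SPEED = 0.2     # m/s while the target is in view

def approach_command(bbox):
    """bbox = (x, y, w, h) of the detected target in the camera frame, or None."""
    if bbox is None:
        return 0.0, 0.3                          # no target: stop and rotate to search
    x, _, w, _ = bbox
    error = (x + w / 2) - FRAME_WIDTH / 2        # horizontal offset from image center
    return FORWARD_SPEED, -K_TURN * error        # (linear velocity, angular velocity)

print(approach_command((400, 120, 80, 80)))      # target right of center -> turn right
```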
Proficiency Estimation Method of Vibrato in Electric Guitar
Abstract: Many systems that provide automatic evaluation of and feedback on electric guitar performance have been developed. However, they have a serious weakness: only timing and pitch are considered in the evaluation, whereas human evaluation involves a much wider range of factors. Previous studies have proposed automatic evaluation methods to address this problem, but they cannot evaluate sounds produced with special electric guitar techniques. In this study, we propose a method for automatically estimating vibrato proficiency on the electric guitar. The method extracts acoustic features focusing on the peaks of the Mel-scale fundamental frequency: the number of peaks and the averages and variances of their widths and heights. We regressed human evaluation scores on the extracted acoustic features using a relevance vector machine (RVM). As a result, the regression achieved a coefficient of determination of 0.723. This indicates that the extracted features are highly relevant to human evaluation scores and permits a tentative automatic evaluation of electric guitar vibrato.
Keywords: Electric Guitar, Vibrato, Proficiency Estimation, RVM, Mel Fundamental Frequency, Audio Signal Processing, Machine Learning, Music Information Processing, Music Education
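A minimal sketch of the peak-based vibrato features described above, computed from a pitch contour with scipy. The f0 track below is synthetic, and the RVM regression step (not available in scikit-learn) is only noted in a comment.

```python
# Extract peak count and width/height statistics from an F0 contour.
import numpy as np
from scipy.signal import find_peaks

f0 = 440 + 15 * np.sin(2 * np.pi * 6 * np.linspace(0, 2, 200))  # fake 6 Hz vibrato

peaks, props = find_peaks(f0, height=0, width=1)
features = {
    "num_peaks":   len(peaks),
    "width_mean":  float(np.mean(props["widths"])),
    "width_var":   float(np.var(props["widths"])),
    "height_mean": float(np.mean(props["peak_heights"])),
    "height_var":  float(np.var(props["peak_heights"])),
}
print(features)
# In the paper these features feed a relevance vector machine regressor; a
# stand-in such as sklearn.kernel_ridge.KernelRidge could be used for testing.
```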
Gastric Cancer Detection by Two-step Learning in Near-Infrared Hyperspectral Imaging
Abstract: Gastric cancer is one of the most serious cancers, affecting and killing many people worldwide every year. Early treatment dramatically improves the survival rate, and endoscopy has become an important tool for early detection. Since invasive gastric cancer, and especially its margins, is difficult to find with conventional visible-light endoscopy, near-infrared imaging, which is bringing great progress to the medical field, has attracted attention in recent years. To apply near-infrared hyperspectral imaging (NIR-HSI) in real time, wavelength feature extraction is important because a large amount of data must be analyzed. The purpose of this study is to detect gastric cancer using NIR-HSI and to select suitable wavelengths in the near-infrared region (1000–1600 nm). NIR-HSI data were acquired from six gastric cancer specimens, and each pixel of the hyperspectral image was labeled as normal or tumor based on the histopathological diagnosis. Four wavelengths were selected from 95 using the least absolute shrinkage and selection operator (LASSO). Supervised learning with a support vector machine was performed both with all 95 wavelengths and with the 4 selected wavelengths. In both cases, the approximate location of the tumor could be identified, indicating that appropriate wavelengths were selected. We further improved detection accuracy by creating new supervised data and adding a second learner. The resulting performance was 93.3% accuracy, 69.8% sensitivity, and 96.7% specificity. These results show that gastric cancer can be detected with only four wavelengths. Applying these results to an endoscope system suggests the possibility of constructing an NIR endoscope system for gastric cancer detection.
Keywords: Cancer detection; Gastric cancer; Near-infrared hyperspectral imaging; The least absolute shrinkage and selection operator; Support vector machine
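A minimal sketch of LASSO-based wavelength selection followed by SVM pixel classification, using scikit-learn. The spectra, labels, and hyperparameters are placeholders, and the paper's two-step learning stage is omitted.

```python
# Select 4 of 95 wavelengths with LASSO, then classify pixels with an SVM.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.svm import SVC

# X: per-pixel reflectance spectra (n_pixels, 95 wavelengths); y: 0 normal / 1 tumor.
X = np.random.rand(1000, 95)
y = np.random.randint(0, 2, 1000)

lasso = LassoCV(cv=5).fit(X, y)
selected = np.argsort(np.abs(lasso.coef_))[-4:]   # keep the 4 strongest wavelengths
print("selected wavelength indices:", sorted(selected))

svm = SVC(kernel="rbf").fit(X[:, selected], y)
# tumor_map = svm.predict(test_spectra[:, selected]) reshaped back to image coordinates
```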
Non-Predefined Life Signs Detection for Disaster Survivors Rescue
Abstract: No one can predict when or where a natural disaster such as an earthquake or tornado will occur, or the damage it may cause. The same is true of floods, in which a large amount of water overflows its normal limits and submerges a city, as we often see on the news after a strong typhoon brings heavy rain. Such disasters occur around the globe and cannot be prevented even with the best technology currently in our possession. However, saving lives after they occur is still possible, and a promising approach is the combination of AUVs and image processing. Image processing is a technology that can meet the needs of 21st-century society without compromising the ability of future generations to meet their own needs, provided it is used efficiently. This paper proposes non-predefined life-sign detection for rescuing disaster survivors after a disaster occurs, especially during a flood. In this research, we use matrix-based pairs of opposing pixels positioned directly around an observed point that belongs to the edge of the life-sign target. First, a one-dimensional matrix is used to store the RGB component values of the bitmap pixels. These RGB values are then copied from the bitmap into the matrix, with the number of bytes per row rounded up to the nearest multiple of four. As a result, the method clearly detects the edges of all life signs made by humans using any type of item around them, with 95% accuracy.
Keywords: Life signs; Image processing; Disaster; Edge detection; Rescue
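An illustrative sketch of the opposing-pixel-pair edge test on an RGB image, written with NumPy. The pair offsets and threshold are assumptions, since the abstract does not fully specify the operator.

```python
# Mark a pixel as an edge if any pair of opposing neighbors differs strongly in color.
import numpy as np

# Offsets of the four opposing pixel pairs around a point (horizontal, vertical, diagonals).
PAIRS = [((0, -1), (0, 1)), ((-1, 0), (1, 0)), ((-1, -1), (1, 1)), ((-1, 1), (1, -1))]

def is_edge(img, y, x, threshold=60):
    """Return True if (y, x) lies on an edge according to the opposing-pair test."""
    for (dy1, dx1), (dy2, dx2) in PAIRS:
        a = img[y + dy1, x + dx1].astype(int)
        b = img[y + dy2, x + dx2].astype(int)
        if np.abs(a - b).sum() > threshold:      # summed RGB difference across the pair
            return True
    return False

# BMP-style row padding mentioned in the abstract: each row is padded to a
# multiple of 4 bytes, e.g. stride = (width * 3 + 3) // 4 * 4 for 24-bit pixels.
img = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
print(is_edge(img, 50, 50))
```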