
YAGUCHI, Yuichi

Ph.D. (Computer Science and Engineering)
Senior Associate Professor

Information System Division and CAIST ARC ROBOT, University of Aizu
Tsuruga, Ikkimachi, Aizuwakamatsu, Fukushima, 965-8580 Japan.
Phone: (+81) 0242-37-2643 (intra 3369) 
Fax: (+81) 0242-37-2729
EMail: yaguchi@u-aizu.ac.jp


Education


  • Ph.D. (2011) Computer Science and Engineering
    Graduate School of Computer Science and Engineering, University of Aizu, Japan
    Thesis:
    Two-dimensional Algorithms for Pixel-wise Matching of Images and Their Applications 


  • MS (2008) Computer Science and Engineering
    Graduate School of Computer Science and Engineering, University of Aizu, Japan
    Thesis: 
    Multimedia Retrieval System for Web Video Repository


  • BS (2005) Computer Science and Engineering 
    School of Computer Science and Engineering, University of Aizu, Japan
    Thesis: 
    Song Wave Retrieval Based on Frame-wise Phoneme Recognition

Professional Experience

  • April 2022 - Current: Senior Associate Professor, Information System Division, University of Aizu

  • October 2021 - Current: ISO TC20/SC16/WG4 Expert from Japan (JUTM)

  • April 2021 - Current: WG1 Director, Japan UAS Traffic Management Association (JUTM)

  • April 2018 - Current: ARC Space, University of Aizu

  • April 2016 - Current: CAIST ARC Robot, University of Aizu

  • October 2012 - March 2022: Associate Professor, Information System Division, University of Aizu

  • April 2011 - November 2012: Assistant Lecturer, Information System Division, University of Aizu

  • April 2009 - March 2011: JSPS Research Fellow (DC2), Japan Society for the Promotion of Science

  • December 2002 - March 2009: Sound Composer and Programmer, GClue, Inc.

Fields of Interest


  • Advanced Air Mobility System: Conflict Management and Detect and Avoid

  • Unmanned Aircraft System: Blockchain UTM, Path Planning for UAV Fleet

  • LiDAR and Visual SLAM Fusion

  • Robot Vision: Sensor Fusion, Object Recognition, UAV Application

  • Motion Planning for Multiple Robots: Drone Formation Flight, Drone and Rover Collaboration

  • Spotting Recognition and Image Matching: Continuous DP, Fast Spotter, 2DCDP, Coarse-to-Fine, 3DCDP

  • Multimedia Information Retrieval (MIR): Image/Sound Processing, Search Algorithm

  • Visualization: Associated Keyword Space Family (ASKS, S-ASKS, Incremental ASKS), t-SNE, Spherical Mapping

Computer Skills


  • Computer Languages: C, C++, C#, Java, Python, MATLAB, PHP, Bash, MIDI, SMAF, ActionScript

  • Middleware and Tools: ROS, OpenRTM, OpenCV, OMPL, CUDA, DirectX, Enterprise Architect

  • Software: Visual Studio, MS Office, MATLAB, Eclipse, Adobe Software (Illustrator, Photoshop, Premiere), Final Cut Pro X

  • OS: macOS, Unix/Linux (Solaris, Ubuntu, Debian), MS-DOS, Windows

  • Server Application: Apache, MySQL, Tomcat, Sendmail, Hadoop, VMware


Teaching Experience

  • Programming C - 1st grade, University of Aizu (2011 - 2014)

  • Programming Java I - 2nd grade, University of Aizu (2013 - 2017)

  • Logic Circuit Design - 2nd grade, University of Aizu (2011 - 2012)

  • Algorithms and Data Structures II - 3rd grade, University of Aizu (2018 - present)

  • Natural Language Processing and Information Retrieval - 3rd grade, University of Aizu (2018 - present)

  • Image Processing - 4th grade, University of Aizu (2011 - present)

  • Pattern Recognition and Machine Learning - Graduate School, University of Aizu (2016 - 2017)

  • Modern Control Theory - Graduate School, University of Aizu (2017 - present)

  • Image Recognition and Understanding - Graduate School, University of Aizu (2013 - present)

  • SCCP: Android Application Development - University of Aizu

  • Factories for Experiencing Starting Up Ventures 7: Movie broadcast through Internet Media - University of Aizu


Candidate Teaching Topics


  • Pattern Matching Algorithms

  • Sound and Speech Recognition

  • Computer Vision and Image Processing

  • Document Retrieval and Recognition

Publications

Journals and Articles
  1. Y. Watanobe, Y. Yaguchi, K. Nakamura, T. Miyaji, R. Yamada, K. Naruse, “Architecture and Framework for Data Acquisition in Cloud Robotics,” Int. J. of Information Technology, Communications and Convergence, November 2020 (Accepted)

  2. B. T. G. S. Kumara, I. Paik and Y. Yaguchi, “Context-Aware Web Service Clustering and Visualization,” International Journal of Web Services Research (IJWSR) Vol. 17, No. 4, Pages: 23, 2020.

  3. R. Yamada and Y. Yaguchi, "Evaluation of calibration methods to construct a 3-D environmental map with good color projection using both camera images and laser scanning data," Artif. Life Robot, Springer, Vol. 25, pp. 434-439, March 2020, Online: https://link.springer.com/article/10.1007/s10015-020-00594-7

  4. Y. Yaguchi and K. Tamagawa, "A waypoint navigation method with collision avoidance using an artificial potential method on random priority," Artif. Life Robot, Springer, Vol. 25, pp. 278-285, March 2020, Online: https://link.springer.com/article/10.1007/s10015-020-00583-w [Evidence: ESCI]

  5. N. Kato, Y. Kawamoto, A. Aneha, Y. Yaguchi, R. Miura, H. Nakamura, M. Kobayashi, T. Henmi, O. Akimoto, Y. Kamisawa, and A. Kitashima, "Location Awareness System for Drones Flying Beyond Visual Line of Sight Exploiting the 400 MHz Frequency Band," IEEE Wireless Communications, Vol. 26, No. 6, pp. 149-155, December 2019, Online: https://ieeexplore.ieee.org/document/8869711

  6. I. Otani, Y. Yaguchi, K. Nakamura and K. Naruse,"Quantitative Evaluation of Streaming Image Quality for The Robot Teleoperations," Artif. Life Robot, Springer, Vol. 24, No. 2, pp. 230-238. May 2019, Online: https://link.springer.com/article/10.1007/s10015-018-0495-1 [Evidence: ESCI]

  7. R. Yamada, Y. Yaguchi, M. Yoshida and S. Kobayashi, "Towards a system for analyzing accidents of unmanned aerial vehicles," Artif. Life Robot, Springer, Vol. 24, No. 1, pp. 94-99. March 2019, Online: https://link.springer.com/article/10.1007/s10015-018-0460-z 

  8. W. Chen, Y. Yaguchi, K. Naruse, Y. Watanobe, K. Nakamura and J. Ogawa, "A Study of Robotic Cooperation in Cloud Robotics: Architecture and Challenges," IEEE Access, Vol.6, pp. 36662 – 36682, July 2018, Online: https://ieeexplore.ieee.org/document/8403209 [Evidence: SCI]

  9. W. Chen, Y. Yaguchi, K. Naruse, Y. Watanobe and K. Nakamura, "QoS-aware Robotic Streaming Workflow Allocation in Cloud Robotics Systems," IEEE Transactions on Services Computing, vol. PP, no. 99, pp. 1-1. February 2018, Online: https://ieeexplore.ieee.org/document/8283811 [Evidence: SCI]  

  10. Y. Niitsuma, S. Torii, Y. Yaguchi and R. Oka, "Time-segmentation- and position-free recognition from video of air-drawn gestures and characters", Multimedia Tools and Applications, Springer, pp. 1-25, May 2015, Online: https://link.springer.com/article/10.1007/s11042-015-2669-3 [Evidence: SCIE]

  11. S. Moriya and Y. Yaguchi, "Ultrasound tongue image denoising for comparison of first and second language tongue trajectories", The Journal of the Acoustical Society of America, Vol. 140, No. 4, p. 3114, November 2016, Online: https://asa.scitation.org/doi/10.1121/1.4969745 [Evidence: SCI]

  12. I. Paik, W. Chen, B. T. G. S. Kumara, T. Tanaka, Z. Li and Y. Yaguchi, "Linked Data-Based Service Publication for Service Clustering", Advances in Computer Science and its Applications, Springer, pp. 1429-1435, January 2014. Online: https://link.springer.com/chapter/10.1007/978-3-642-41674-3_199 [Evidence: ESCI]

  13. Y. Yaguchi and R. Oka, "Spherical Visualization of Image Data with Clustering", JACIII, Vol. 17. No. 4, pp. 573-580, 2013. Online: https://www.fujipress.jp/jaciii/jc/jacii001700040573/ [Evidence: ESCI]

  14. S. Moriya, Y. Yaguchi, N. Terunuma, T. Sato and I. Wilson, "Normalization and matching routine for comparing first and second language tongue trajectories", The Journal of the Acoustical Society of America, Vol. 134, No. 5, p. 4244, November 2013, DOI:10.1121/1.4831607. https://asa.scitation.org/doi/10.1121/1.4831607 [Evidence: SCI]

  15. J. Ma, L. Zheng, M. Dong, X. He, M. Guo, Y. Yaguchi, R. Oka, "A segmentation-free method for image classification based on pixel-wise matching", J. Comput. Syst. Sci. Vol. 79, No. 2, pp. 256-268, 2013, Online: https://dl.acm.org/doi/10.1016/j.jcss.2012.05.009 [Evidence: SCIE]

  16. K. Sano, Y. Yaguchi and I. Wilson, "Comparing L1 and L2 phoneme trajectories in a feature space of sound and midsagittal ultrasound tongue images", The Journal of the Acoustical Society of America, Vol. 132, No. 3, p. 1934, September 2012, https://asa.scitation.org/doi/abs/10.1121/1.4755107, DOI:10.1121/1.4755107 [Evidence: SCI]

  17. Y. Yaguchi, T. Wagatusma and R. Oka, "Spatial Clustering Technique for Data Mining", New Fundamental Technologies in Data Mining, January 2011, ISBN: 978-953-307-547-1, Online: https://www.intechopen.com/books/new-fundamental-technologies-in-data-mining/spatial-clustering-technique-for-data-mining [Evidence: Minor/Book Chapter]

  18. Y. Yaguchi, K. Iseki and R. Oka, "Full Pixel Matching between Images for Non-linear Registration of Objects", IPSJ Transactions on Computer Vision and Applications, Vol. 2, pp. 1-14, January 2010, DOI:10.2197/ipsjtcva.2.1, Online: https://www.jstage.jst.go.jp/article/ipsjtcva/2/0/2_0_1/_article/-char/ja/ [Evidence: Univ. of Aizu Criteria 2]

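Several entries above (e.g., item 4, on waypoint navigation with an artificial potential method) rest on the same basic idea: each vehicle is attracted toward its goal and repelled by vehicles that come within an influence radius. The sketch below is a minimal, illustrative version of that general scheme, not the published algorithm; all gains, radii, and function names here are invented for the example.

```python
import math

def potential_step(positions, goals, k_att=1.0, k_rep=0.5, d0=2.0,
                   v_max=2.0, dt=0.01):
    """One Euler step of a basic potential field: each agent is attracted
    to its goal and repelled by agents closer than the influence radius d0.
    The commanded velocity is clamped to v_max for numerical stability."""
    out = []
    for i, (x, y) in enumerate(positions):
        gx, gy = goals[i]
        fx, fy = k_att * (gx - x), k_att * (gy - y)          # attraction
        for j, (ox, oy) in enumerate(positions):
            if j == i:
                continue
            d = math.hypot(x - ox, y - oy)
            if 1e-9 < d < d0:
                # repulsion grows sharply as two agents approach each other
                mag = k_rep * (1.0 / d - 1.0 / d0) / d ** 2
                fx += mag * (x - ox) / d
                fy += mag * (y - oy) / d
        speed = math.hypot(fx, fy)
        if speed > v_max:                                    # velocity clamp
            fx, fy = fx * v_max / speed, fy * v_max / speed
        out.append((x + dt * fx, y + dt * fy))
    return out

def simulate(positions, goals, k_rep=0.5, steps=10000):
    """Run the two-agent scenario and track the minimum separation."""
    min_sep = float("inf")
    for _ in range(steps):
        positions = potential_step(positions, goals, k_rep=k_rep)
        min_sep = min(min_sep, math.hypot(positions[0][0] - positions[1][0],
                                          positions[0][1] - positions[1][1]))
    return positions, min_sep

# Two agents swap places; small lateral offsets break the head-on symmetry.
start = [(0.0, 0.1), (10.0, -0.1)]
goals = [(10.0, 0.0), (0.0, 0.0)]
```

With `k_rep = 0` the two agents fly straight at each other and nearly collide mid-route; with the repulsive term enabled they deflect around each other and still settle on their goals, which is the qualitative behavior the papers exploit.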

Conference Proceedings

  1. Y. Watanobe, Y. Yaguchi, T. Miyaji, R. Yamada and K. Naruse, "Data Acquisition Framework for Cloud Robotics," iCAST 2019, pp. 1-7, Oct. 2019. Online: https://ieeexplore.ieee.org/document/8923436 [Evidence: IEEE]

  2. Y. Yaguchi, Y. Sakaguchi and K. Tamagawa, "A Design of Server-less Distributed UTM System," ICIUS2019, China, ID:0056, Aug. 2019. [Evidence: IEEE SMC Beijing Chapter]

  3. Y. Yaguchi, M. Itaha, S. Nakano, K. Yamagishi, T. Iyobe and A. Sasaki, "A Mesoscale Meteorological Observation System Using Drone Fleet," ICIUS2019, China, ID:0058, Aug. 2019. [Evidence: IEEE SMC Beijing Chapter]

  4. Y. Yaguchi, Y. Inoue and K. Nakamura, "Collision Avoidance for Drone Fleets using Potential Methods," 2018 15th International Conference on Intelligent Unmanned Systems (ICIUS), Jeju, Korea, pid. 58, Aug. 2018. [Evidence: Minor]

  5. S. Kobayashi, Y. Yaguchi, K. Nakamura, K. Naruse and S. Maekawa, "Pre-accident Situation Analysis Based on Locality of Motion," 2018 9th International Conference on Awareness Science and Technology (iCAST), pp. 1-6, Fukuoka, Japan, Sep. 2018, Online: https://ieeexplore.ieee.org/document/8517229 [Evidence: IEEE]

  6. K. Tamagawa and Y. Yaguchi, "Waypoint Correction Method for Collision Avoidance with Artificial Potential Method on Random Priority," ISAROB 2019, pp. 462-467, Beppu, Japan, Jan. 2019. [Evidence: IEEE RAS Japan Chapter]

  7. R. Yamada and Y. Yaguchi, "The 3-D environmental map synthesized from camera images and laser scanning data," ISAROB 2019, pp. 602-607, Beppu, Japan, Jan. 2019. [Evidence: IEEE RAS Japan Chapter]

  8. Y. Yaguchi, M. Omura and T. Okumura, "Geometrical mapping of diseases with calculated similarity measure", in Proc. of BIBM 2017, Workshop on BHI, pp. [-], November 2017. Online: https://ieeexplore.ieee.org/document/8217816 [Evidence: IEEE]

  9. Y. Yaguchi, Y. Nitta, S. Ishizaka, T. Tannai, T. Mamiya, K. Naruse and S. Nakano, "Formation control for different maker drones from a game pad," 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Lisbon, 2017, pp. 1373-1378. Online: https://ieeexplore.ieee.org/document/8172483  [Evidence: IEEE RAS]

  10. Y. Yaguchi, K. Moriuchi and K. Anma, "Comparison of camera configuration for real-time drone route planning in 3D building maze," IEEE iCAST 2017, Taichung, 2017, pp. 244-249, Online: https://ieeexplore.ieee.org/document/8256455 [Evidence: IEEE] (Best Paper Award)

  11. R. Yamada, Y. Yaguchi, M. Yoshida, "Performances of 3D mapping and odometry tools, and of a visualization system for analyzing accidents of unmanned aerial vehicles," ISAROB2018, pp. 389-394, Beppu, Japan, Jan. 2018. [Evidence: IEEE RAS Japan Chapter]

  12. Abstract: Our target is to replace the accident conditions of the unmanned aerial vehicles (UAVs) using data obtained from the sensors and flight recorder loaded on the UAVs to analyze their causes. In this paper, we have first investigated the performances of three types of tools for 3D mapping and odometry to reproduce the surrounding environment and its orbit, and found that the tool using the LIDAR data are more accurate and can reproduce broader areas compared with methods that use monocular and stereo camera images. Second, we applied an optical flow method to images taken by a monocular camera rotating with 4 types of velocities, and found that imaging over 120 fps is required to analyze accurately the velocity field of the rotating and falling UAV. Finally, we have developed a visualization system that displays the reproduced situations of the UAV flights and accidents on a computer screen.

  13. I. Otani, Y. Yaguchi, K. Nakamura and K. Naruse, "Quantitative Evaluation of Streaming Image Quality for The Robot Teleoperation," ISAROB2018, pp. 230-235, Beppu, Japan, Jan. 2018. [Evidence: IEEE RAS Japan Chapter]

  14. Abstract: In this paper, we define a novel measure of streaming video quality for remotely operated robots. Controlling robots remotely is crucial for disaster response, and many attempts have been made to create such systems. Wireless communication, which is used in remote-control systems for unmanned vehicles, change dynamically and the streaming quality also changes to the quality of the network; however, wireless conditions are not typically measured in conventional robot systems. We are developing a quality measure for remote control using video proprieties such as delay and degrading of image quality as Quality of Control (QoC). In this paper, we introduce this QoC measure using delay and degrading of image quality curves in simulation environments, and we discuss the implications for robot system design.

  15. C. H. Pham, Y. Yaguchi and K. Naruse, "Feature Descriptors: A Review of Multiple Cues Approaches," Prof. of IEEE CIT 2016, Nadi, Fiji, Dec. 2016, pp. 310-315. Online: https://ieeexplore.ieee.org/document/7876353 [Evidence: IEEE],

  16. Abstract: Feature descriptors have been playing an important role in many computer vision problems, such as image matching and object recognition. While classic descriptors using texture or shape as a single cue of descriptive information have been proved to be successful, recently, several approaches have been proposed introducing the combination of multiple cues to increase descriptive power and robustness. In this paper, we review the methodology of the most recent and popular multiple cues descriptors, and evaluate them with respect to their application and robustness to the variance of conditions.

  17. Y. Yaguchi, Y. Hiroto, T. Mamiya, R. Oka, "A Coarse-to-Fine Strategy for Full Pixel Image-Matching in High-Resolution Images," The 16th International Symposium on Advanced Intelligent Systems (ISIS2015) [Evidence: Minor (KIIS, SOFT, TAAI, TFSA)]

  18. Abstract: Image registration is a key issue in many computer vision problems such as object recognition and 3D shape reconstruction of structure from motion, where there is a need to identify many precise trajectories in a set of pictures. Two-dimensional continuous dynamic programming (2DCDP) is a full pixel image-matching technique for obtaining a large set of corresponding points. In this paper, we improve the 2DCDP algorithm to enable its application to larger images, where the conventional method would involve excessive memory requirements. First, we reduce pixels to a suitable size to enable coarse matching. Second, we obtain corresponding blocks in the original images from the results of the coarse matching. Finally, we extract the actual corresponding points in the original images to apply to each segmented subimage. From our experimental results, the proposed method is more effective for extracting precisely corresponding points than are previous methods.

  19. S. Moriya, Y. Yaguchi, and I. Wilson, "Normalization and matching routine for comparison of native speaker and non-native speaker tongue trajectories", in Proc. of ISIS2015, Korea, November, 2015 [Evidence: Minor (KIIS, SOFT, TAAI, TFSA)] [Best Paper Award]

  20. Abstract: The main purpose of this research is to specify articulation difference between native and non-native speakers by digitizing tongue motions and analyzing the difference between utterances. Differences in tongue motion directly influence speaker’s pronunciation, therefore it may be possible to improve non-native speaker’s efficiency of pronunciation practice with the relevant feedback and visualization. It is necessary for comparison of native and non-native speakers’ tongue motions to that end, however, normalization is absolutely necessary to remove the influence of anything except tongue motion before comparison, because every person has a unique shape and size. In this paper, we use coronalcross section of the tongue taken by ultrasound scanner to carry out the following: first record the ultrasound of speaker’s tongue motion using the corpus “The Boy Who Cried Wolf.” Then, sample tongue motion by using a histogram of oriented gradients and Karhunen-Loeve expansion. Next, apply eight prepared normalizations to tongue motions. Finally, compare each tongue motion per frame via dynamic time warping and correlation coefficient. The experimental result allowed us to compare with speaker’s tongue motions in sentences which were recorded in different environments or by different speakers and to point out non-native speaker’s speaking errors.

  21. Y. Nitsuma, S. Torii, Y. Yaguchi and R. Oka, “Time-segmentation- and Position-free Recognition from Video of Air-drawn Gestures and Characters”. ICPRAM 2014, pp. 588-599, 2014, Online: https://dl.acm.org/doi/abs/10.5220/0004816805880599 [Evidence: ACM SIGAI]

  22. Abstract: We report on the recognition from a video of isolated alphabetic characters and connected cursive characters, such as Hiragana or Kanji characters, drawn in the air. This topic involves a number of difficult problems in computer vision such as the segmentation and recognition of complex motion from a video. We utilize an algorithm called time-space continuous dynamic programming (TSCDP) that can realize both time and location-free (spotting) recognition. Spotting means that prior segmentation of the input video is not required. Each of the reference (model) characters used is represented by a single stroke composed of pixels. We conducted two experiments involving the recognition of 26 isolated alphabetic characters and 23 Japanese Hiragana and Kanji air-drawn characters. Moreover we conducted gesture recognition based on TSCDP and showed that TSCDP was free from many restrictions required for conventional methods.

  23. B. T. G. S. Kumara, I. Paik, H. Ohashi, Y. Yaguchi and W. Chen, “Context-Aware Filtering and Visualization of Web Service Clusters”, ICWS 2014: pp. 89-96, 2014, Online: https://ieeexplore.ieee.org/document/6928885, [Evidence: IEEE]

  24. Abstract: Web service filtering is an efficient approach to address some big challenges in service computing, such as discovery, clustering and recommendation. The key operation of the filtering process is measuring the similarity of services. Several methods are used in current similarity calculation approaches such as string-based, corpus-based, knowledge-based and hybrid methods. These approaches do not consider domain-specific contexts in measuring similarity because they have failed to capture the semantic similarity of Web services in a given domain and this has affected their filtering performance. In this paper, we propose a context-aware similarity method that uses a support vector machine and a domain dataset from a context-specific search engine query. Our filtering approach uses a spherical associated keyword space algorithm that projects filtering results from a three-dimensional sphere to a two-dimensional (2D) spherical surface for 2D visualization. Experimental results show that our filtering approach works efficiently.

  25. N. Terunuma, Y. Yaguchi, Y. Watanobe, R. Oka, "Information flow clustering via similarity of a propagation tree", Soft Computing and Intelligent Systems (SCIS), 2014 Joint 7th International Conference on and Advanced Intelligent Systems (ISIS), 15th International Symposium on, Year: 2014 Pages: 765 - 768, Online: https://ieeexplore.ieee.org/document/7044773, [Evidence] IEEE]

  26. Abstract: Social network services (SNSs) serve numerous users with large amounts of information of different kinds. On an SNS, information will propagate on a user network, which is represented as a complex network in general but can be reformed as a tree by using the direction of propagation and allowing duplication. Our goal in this study was to show the propagation of a particular kind of information on an SNS, as well as the clustering of a similar propagation scheme for each user. For this goal, we used elastic tree pattern matching to calculate the similarity of two tree structures. A set of users are propagated from source to destination in the same or similar way, and these users are given information from a similar source. We also aimed to find the high-influence person who is at the start of the same or similar propagation, which will indicate that she/he is the moderator of a topic. We used tumblr data for the experiment. Findings indicated that the similar part of each information propagation tree on tumblr was too small for the clustering propagation pattern.

  27. K. Amma, S. Wada, K. Nakayama, Y. Akamatsu, Y. Yaguchi, K. Naruse, "Visualization of spread of topic words on Twitter using stream graphs and relational graphs", Soft Computing and Intelligent Systems (SCIS), 2014 Joint 7th International Conference on and Advanced Intelligent Systems (ISIS), 15th International Symposium on, Year: 2014, Pages: 761 - 764, DOI: 10.1109/SCIS-ISIS.2014.7044759, Online: https://ieeexplore.ieee.org/document/7044759 [Evidence: IEEE]

  28. Abstract: In this paper, we examine occurrences, cooccurrences, and characteristics for influence and meaning of words by visualizing large amounts of data from Twitter. We classified words using morphological analysis of tweets and developed a stream graph by finding the frequency of each word. We analyzed the co-occurrence of words using quantification methods of the fourth type to find relationships and showed distances between words in a similarity graph. We present examples of the relationships found by our analysis.

  29. B. T. G. S. Kumara, Y. Yaguchi, I. Paik, W. Chen "Clustering and Spherical Visualization of Web Services". IEEE SCC 2013: pp. 89-96, 2013, Online: https://ieeexplore.ieee.org/document/6649682 [Evidence: IEEE]

  30. Abstract: Web service clustering is one of a very efficient approach to discover Web services efficiently. Current clustering approaches use traditional clustering algorithms such as agglomerative as the clustering algorithm. The algorithms have not provided visualization of service clusters that gives inspiration for a specific domain from visual feedback and failed to achieve higher noise isolation. Furthermore iterative steps of algorithms consider about the similarity of limited number of services such as similarity of cluster centers. This leads to reduce the cluster performance. In this paper we apply a spatial clustering technique called the Associated Keyword Space(ASKS) which is effective for noisy data and projected clustering result from a three-dimensional (3D) sphere to a two dimensional(2D) spherical surface for 2D visualization. One main issue, which affects to the performance of ASKS algorithm is creating the affinity matrix. We use semantic similarity values between services as the affinity values. Most of the current clustering approaches use similarity distance measurement such as keyword, ontology and information-retrieval-based methods. These approaches have problem of short of high quality ontology and loss of semantic information. In this paper, we calculate the service similarity by using hybrid term similarity method which uses ontology learning and information retrieval. Experimental results show our clustering approach is able to plot similar services into same area and aid to search Web services by visualization of the service data on a spherical surface.

  31. S. Wada, Y. Yaguchi, R. Ogata, Y. Watanobe, K. Naruse, R. Oka, "Associated Keyword analysis for temporal data with spatial visualization", iCAST-UMEDIA 2013, pp. 243-249, DOI: 10.1109/ICAwST.2013.6765441, Online:  https://ieeexplore.ieee.org/document/6765441 [Evidence: IEEE]

  32. Abstract: To extract temporal variations in the relation between two or more words in a large time-series script, we propose three procedures for adoption by the existing Associated Keyword Space system, as follows. First, we begin the calculations from a previous state. Second, we add a random seed if a new object was present in the previous state. Thrid, we forget those object relations from the previous state that have no affinity with the selected term. We have experimented with this improved algorithm using a large time-series of tweets from Twitter. With this approach, it is possible to check on the volatility of topics.

  33. J. Tazawa, Y. Okuyama, Y. Yaguchi, T. Miyazaki, R. Oka and K. Kuroda, "Hardware Implementation of Accumulated Value Calculation for Two-Dimensional Continuous Dynamic Programming", MCSoC 2012, pp. 8-15, 2012., Online:  https://ieeexplore.ieee.org/document/6354672 [Evidence: IEEE]

  34. Abstract: We propose an efficient hardware accelerator for the calculation of accumulated values of two-dimensional continuous dynamic programming (2DCDP). The 2DCDP is a powerful optimal pixel-matching algorithm between input and reference images which can be applied to image processing, such as image recognition, image search, feature tracking, 3D reconstruction, and so on. However, it requires large computation time due to its time and space complexities of O(N 4 ). We analyze the computation flow of the 2DCDP algorithm and propose a high-performance architecture for a hardware accelerator. Parallelized accumulated minimum local distance calculators and a toggle memory structure are newly introduced to reduce the computation cost and memory. The proposed architecture is implemented into an FPGA, Stratix IV, EP4SE820. Its maximum operation frequency is 125.71 MHz. The preliminary evaluation reveals that the parallel processing by 32 PEs for the accumulated value calculation for 32x32 input and reference images can be sped up to 77 times at the maximum operation frequency of 100 MHz compared to the processing with a multi-core processor.

  35. Y. Yaguchi, N. Horiguchi and I. Wilson, “Finding phoneme trajectories in a feature space of sound and midsagittal ultrasound tongue images” Awareness Science and Technology (iCAST), 2012 , Online: https://ieeexplore.ieee.org/document/6469606, [Evidence: IEEE]

  36. Abstract: Supporting the development of a pronunciation learning system, this paper reports an inspection of the trajectory of speech sentences in a feature space that is constructed from midsagittal tongue images and frame-wise speech sounds. One objective of this research is to estimate tongue shape and position from speech sounds, so we focus on determining how best to construct and interpret a feature space we call MUTIS (midsagittal ultrasound tongue image space). Experimental results indicate that higher dimensions of MUTIS are most effective for separating people, and that primarily the lower dimensions of VSS (vocal sound space) data are most effective for separating phonemes. Also, the trajectories within only the VSS data indicate clear differences between first language and second language speakers, but they do not do so within only the MUTIS data. These results indicate that the ultrasound tongue image expresses individual oral cavity over a wide area, and specific tongue shape has a lower contribution in ultrasound tongue images.

  37. K. Naruse, E. Sato and Y Yaguchi, “Development of accommodation facility selection recommendation system” Awareness Science and Technology (iCAST), 2012 , Online: https://ieeexplore.ieee.org/document/6469615 [Evidence: IEEE]

  38. Abstract: A collaborative filtering recommends a list of items to a given user, to which he is expected to prefer to, considering a history of purchase items of the user and those of other users, and it is applied to many web shopping sites. On the other hand, although many people often reserve accommodation facilities from web pages, it is difficult to apply the collaborative filtering to accommodation selection because a smaller sizes of an item (an accommodation) history than others reduces the number of user and item to which the collaborative filtering can be applied. To solve it, we introduce a virtual user, who is assumed to rank all of accommodation facilities, to the collaborative filtering, which can increase the number of recommendable user-item pairs. We design the virtual user by analyzing an actual data set of accommodation. Numerical experiments show the virtual user can increase the number of change on user-item pairs which is recommendable without the virtual user.

  39. Y. Yaguchi and R. Oka, “Spherical visualization of image data with clustering”, Awareness Science and Technology (iCAST), 2012, Online: https://ieeexplore.ieee.org/document/6469614 [Evidence: IEEE]

  40. Abstract: This paper proposes to aid the search for images by visualization of the image data on a spherical surface. Many photographs were lost in the Tohoku tsunami, and those that were eventually found are now being scanned. However, the owners of the lost photographs are finding it difficult to search for their images within a large set of scanned images that contain no additional information. In this paper, we apply a spatial clustering technique called the Associated Keyword Space (ASKS) projected from a three-dimensional (3D) sphere to a two-dimensional (2D) spherical surface for 2D visualization. ASKS supports clustering, and therefore, we construct an image search system in which similar images are clustered. In this system, similar images are identified by color inspection and by having similar characteristics. In this way, the system is able to support the search for images from within a huge number of images.

  41. T. Sasaki, Y. Yaguchi, Y. Watanobe and R. Oka, “Extracting a spatial ontology from a large Flickr tag dataset”, Awareness Science and Technology (iCAST), 2012, Online: https://ieeexplore.ieee.org/document/6469595, [Evidence: IEEE] [Best Paper Award]

  42. Abstract: We propose an easy framework for automatically constructing spatial ontologies that locate related concepts together in a space. The conventional graph representation is strong in showing direct relationships between entities, but it is difficult to process its topology when extracting features from the network, because similarity between networks is not well determined. Spatial ontologies are easy to cluster and classify according to the similarities or relationships between entities. We propose a method for creating a spatial ontology called “Associated Keyword Space” and apply it to 0.4M tag words collected from more than 1M images in Flickr. Tags in Flickr have many unknown word tags, but the spatial ontology can explain the clusters of meaning including unknown word tags. The results show that these unknown word tags can be found from neighbor tags that have clear meanings. As a result, an “area ontology” can be explained from the spatial ontology.

  43. H. Aota, K. Ota, Y. Yaguchi and R. Oka, “Deformable Multi-object Tracking Using Full Pixel Matching of Image”, In Proc. of ICETE 2010, e-Business and Telecommunications, CCIS, Vol. 222, 337-349, Online: https://link.springer.com/chapter/10.1007/978-3-642-25206-8_22, [Evidence: Springer CCIS]

  44. Abstract: We propose a novel method for the segmentation of deformable objects and the extraction of motion features for tracking objects in video data. The method adopts an algorithm called two-dimensional continuous dynamic programming (2DCDP) for extracting pixel-wise trajectories. A clustering algorithm is applied to the set of pixel trajectories to determine the shapes of deformable objects, each of which corresponds to a trajectory cluster. We conduct experiments to compare our method with conventional methods such as the KLT tracker and SIFT. The experiments show that our method is more powerful than the conventional methods.

  45. T. Matsuzaki, Y. Yaguchi and R. Oka, “Occlusion Robust Recognition and Tracking of Motion Objects”, The 3rd TrakMark (ICPR 2012 Workshop), pp. 24-27,

    [Evidence: Major – the 2012 edition was not sponsored by IEEE, but ICPR is a high-ranked computer vision conference (impact score 4.29 at https://www.guide2research.com/topconf/computer-vision)]

  46. Y. Yaguchi, T. Matsuzaki, Y. Okuyama, K. Takahashi and R. Oka, “A Free-viewpoint TV System”, MVA 2011, pp. 116-119, Online: https://www.researchgate.net/publication/265807047_A_free-viewpoint_TV_system,

    [Evidence: Major – the 2011 edition was not sponsored by IEEE, but the current edition is, and MVA is a high-ranked computer vision conference (impact score 1.49 at https://www.guide2research.com/topconf/computer-vision)]

  47. Abstract: We propose an implementation of a model-based free-viewpoint TV (FTV) system using only three uncalibrated cameras. FTV is next-generation media that enables us to see a scene from any viewpoint. A model-based approach for realizing FTV requires real-time 3D object capture using multiple cameras. Here, we propose a system for reconstructing 3D object surfaces using the so-called 2D continuous dynamic programming (2DCDP) method with factorization. 2DCDP is a powerful technique for full-pixel optimal matching. It provides pixel correspondences between the images captured by the three cameras. The proposed system works well as a promising FTV system.

  48. S. Mizoe, Y. Yaguchi, K. Takahashi, K. Ota and R. Oka, “Reconstructing 3D Land Surface from a Sequence of Aerial Images,” MVA 2011, pp. 365-368, Online: https://www.researchgate.net/publication/289120719_Reconstructing_3D_land_surface_from_a_sequence_of_aerial_images [Evidence: Major – the 2011 edition was not sponsored by IEEE, but the current edition is, and MVA is a high-ranked computer vision conference (impact score 1.49 at https://www.guide2research.com/topconf/computer-vision)]

  49. Abstract: This paper proposes a method for reconstructing a 3D surface landscape from an aerial image sequence captured by a single noncalibrated camera. Reconstructing a 3D surface landscape is more difficult than constructing a landscape of buildings or objects in a room because of the lack of available information about camera parameters, the need for mosaicking of 3D surface elements, and the introduction of nonrigid objects. Therefore, conventional methods are not directly applicable. In order to solve these problems, we apply so-called 2-Dimensional Continuous Dynamic Programming (2DCDP) to obtain full pixel trajectories between successive image frames in a sequence of aerial images. Then we apply Tomasi-Kanade Factorization to the full pixel trajectories to reconstruct the 3D surface. We also develop a mosaicking technique for connecting all of the partially reconstructed surfaces. The experimental results show that our proposed method is very promising for reconstructing 3D surfaces, including a forest, a mountain, a lake and several houses. We conduct experiments to compare our method against a SIFT-based method using two sets of data, namely, artificial and real image sequence data.
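
The Tomasi-Kanade factorization step referred to in this abstract can be sketched in a few lines. The sketch below is illustrative only: `factorize` is a hypothetical helper performing the standard rank-3 SVD factorization under an affine camera model, and it omits the paper's 2DCDP trajectory extraction, metric upgrade, and mosaicking.

```python
import numpy as np

def factorize(W):
    """Rank-3 factorization of a measurement matrix W (2F x P: F frames,
    P tracked points). After centering each row, W factors as camera
    motion M (2F x 3) times shape S (3 x P) under affine projection;
    a truncated SVD gives one such factorization (up to an affine
    ambiguity, since the metric-upgrade step is omitted here)."""
    Wc = W - W.mean(axis=1, keepdims=True)   # center each row
    U, s, Vt = np.linalg.svd(Wc, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])            # motion (2F x 3)
    S = np.sqrt(s[:3])[:, None] * Vt[:3]     # shape (3 x P)
    return M, S
```

For a noise-free rank-3 measurement matrix, `M @ S` reproduces the centered measurements exactly; with noise, it is the best rank-3 approximation in the least-squares sense.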

  50. J. Ma, L. Zheng, Y. Yaguchi, M. Dong, R. Oka, “Object Recognition using Full Pixel Matching”. In Proc. CIT 2010: pp. 536-543, Online: https://ieeexplore.ieee.org/abstract/document/5578153/ [Evidence: IEEE]

  51. Abstract: We consider the optimization problem of object recognition for real-world images. Although several approaches have been proposed, this paper aims to improve the recognition rate with a novel method: full-pixel-matching-based object recognition in which no advance segmentation procedure is required during matching. Our method compares the similarity of two images at the pixel level rather than, as in previous work, at the level of regions, shapes, etc., so that the recognition rate can be improved. Furthermore, to implement our method, we present and analyze two algorithms: the Decision Space based Algorithm (DSA) and the Direction Pattern based Algorithm (DPA). In the experiments, the recognition performance of the two algorithms is evaluated on the Caltech 101 dataset. Compared with several conventional methods, our method improves object recognition in terms of recognition rate, robustness to variation of appearance and deformation in the images, and freedom from segmentation.

  52. J. Ma, L. Zheng, Y. Yaguchi, M. Dong and R. Oka, “Image classification based on segmentation-free object recognition”. In Proc. of ICIP 2010: pp. 2157-2160, Online: https://ieeexplore.ieee.org/document/5651227 [Evidence: IEEE]

  53. Abstract: This paper presents a new method for categorical classification. A method called two-dimensional continuous dynamic programming (2DCDP) is adopted to optimally capture the corresponding pixels within nonlinearly matched areas in an input image and a reference image representing an object without advance segmentation procedure. Then an image can be converted into a direction pattern which is made by matching pixels between a reference image and an input image. Finally, the category of the test image is deemed to be that which has the strongest correlation with the learning images. Experimental results show that the proposed method achieves a competitive performance on the Caltech 101 image dataset.

  54. H. Aota, K. Ota, Y. Yaguchi and R. Oka, “Extracting Objects by Clustering of Full Pixel Trajectories,” in Proc. of SIGMAP 2010 pp. 65-72, 2010. Online: https://ieeexplore.ieee.org/document/5742559 [Evidence: IEEE]

  55. Abstract: We propose a novel method for the segmentation of objects and the extraction of motion features for moving objects in video data. The method adopts an algorithm called two-dimensional continuous dynamic programming (2DCDP) for extracting pixel-wise trajectories. A clustering algorithm is applied to the set of pixel trajectories to determine objects, each of which corresponds to a trajectory cluster. We conduct experiments to compare our method with conventional methods such as the KLT tracker and SIFT. The experiments show that our method is more powerful than the conventional methods.
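
As a toy illustration of the clustering stage described in this abstract, trajectories that move together can be grouped by running k-means on their flattened coordinate sequences. `cluster_trajectories` and its plain k-means core are assumptions for this sketch; the paper clusters full 2DCDP pixel trajectories, not this simplified representation.

```python
import numpy as np

def cluster_trajectories(trajs, k=2, iters=20, seed=0):
    """Group pixel trajectories with k-means so that trajectories moving
    together (one per object) fall into the same cluster. Each trajectory
    is a list of (x, y) positions over frames, flattened into one vector."""
    X = np.array([np.ravel(t) for t in trajs], dtype=float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each trajectory to the nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean of its members.
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels
```

Two trajectories translating in step end up in one cluster, while stationary ones form another.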

  56. Y. Yoshida, K. Yamaguchi, Y. Yaguchi, Y. Okuyama, K. Kuroda and R. Oka, “Acceleration of Two-Dimensional Continuous Dynamic Programming by Memory Reduction and Parallel Processing,” IADIS International Conference Applied Computing, pp. 61-68, 2010. Online: http://www.iadisportal.org/digital-library/accelerate-two-dimensional-continuous-dynamic-programming-by-memory-reduction-and-parallel-processing [Evidence: Minor]

  57. Abstract: This paper contains a proposal for optimizing and accelerating the computation of two-dimensional continuous dynamic programming (2DCDP). 2DCDP processing is optimized by memory reduction and parallelization using OpenMP. We apply buffer resizing and utilize toggle-type buffers to reduce the required memory size. In addition, same-rank processes and pixel-correspondence calculation are parallelized by OpenMP instructions to reduce the computation cost/time of 2DCDP. For accumulation, we also apply a realignment of buffering addresses for SIMD on multi-core/multi-processor systems. The experimental results show that the computational time and memory usage were reduced to about 1/4 and 1/5 of the original, respectively. Moreover, the concurrency of the 2DCDP hot spot improved from 5.8 to 7.1 on a quad-core CPU with 8 threads.
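
The toggle-buffer idea mentioned in this abstract can be illustrated on a generic DP recurrence. The sketch uses edit distance rather than 2DCDP itself, and `edit_distance_two_rows` is a name invented for the example: only the previous and current rows are kept and swapped, instead of storing the whole table.

```python
def edit_distance_two_rows(a, b):
    """Edit distance with toggle (double) buffering: memory drops from
    O(len(a) * len(b)) for the full DP table to O(len(b)) for two rows."""
    prev = list(range(len(b) + 1))  # DP row for the empty prefix of a
    curr = [0] * (len(b) + 1)
    for i, ca in enumerate(a, 1):
        curr[0] = i                 # cost of deleting a[:i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr[j] = min(prev[j] + 1,         # delete ca
                          curr[j - 1] + 1,     # insert cb
                          prev[j - 1] + cost)  # substitute
        prev, curr = curr, prev                # toggle the two buffers
    return prev[len(b)]
```

The same two-buffer pattern applies to any DP whose recurrence only looks one row back, which is what makes the memory reduction in the paper possible.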

  58. Y. Oki and Y. Yaguchi, "How A Person Is Being Isolated from There? An Approach for Expressing Human Stress in Small-World Network," in Proc. of Humans and Computings 2009, Shizuoka, Japan. [Evidence: Minor]

  59. Y. Yaguchi, Y. Sakai, K. Yoshida, and R. Oka, "Web Video Data Clustering and Recognition using Histograms of Phoneme Symbols," in Proc. of CIT2009, vol. 2, Xiamen, China, October 2009, pp. 306-311., Online: https://ieeexplore.ieee.org/document/5329078 [Evidence: IEEE]

  60. Abstract: The clustering and recognition of Web video content play an important role in multimedia information retrieval. This paper proposes a method for both clustering and recognizing Web video content using a histogram of phoneme symbols (HoPS). HoPS contains information about speech and sound intervals. In this study, three experiments were conducted. The first experiment allocated the HoPS features of video intervals in a 3D space using PCA and quantification method IV (Q-IV). The second experiment applied the k-nearest neighbor (k-NN) method to analyze the difficulties in clustering. The third experiment recognized unknown video intervals by using the distance between the HoPS of the query and a category average. The recognition accuracies were 44.3% and 36.9% using the Mahalanobis distance and the correlation distance to the category average of the training data, respectively.
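
The HoPS feature and the distance-based recognition step can be sketched as follows. This is a toy: the nine-symbol phoneme set, the `hops`/`classify` helpers, and the use of plain correlation are assumptions for illustration, not the paper's actual feature set or distance measures.

```python
import math
from collections import Counter

PHONEMES = ["a", "i", "u", "e", "o", "k", "s", "t", "n"]  # toy label set

def hops(labels):
    """Histogram of phoneme symbols (HoPS): normalized counts of the
    frame-wise phoneme labels of one video interval."""
    counts = Counter(labels)
    return [counts[p] / len(labels) for p in PHONEMES]

def correlation(u, v):
    """Pearson correlation between two feature vectors."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    du = [x - mu for x in u]
    dv = [y - mv for y in v]
    den = math.sqrt(sum(x * x for x in du) * sum(y * y for y in dv))
    return sum(x * y for x, y in zip(du, dv)) / den if den else 0.0

def classify(query_labels, category_means):
    """Assign the query interval to the category whose average HoPS
    correlates most strongly with the query's HoPS."""
    q = hops(query_labels)
    return max(category_means, key=lambda c: correlation(q, category_means[c]))
```

A query interval dominated by vowel labels lands in the category whose average histogram is likewise vowel-heavy, mirroring the paper's category-average matching.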

  61. H. Aota, Y. Yaguchi and R. Oka, "Extracting Motion Feature of Object Based on Full Pixel Matching," in Proc. of CIT2009, vol. 2, Xiamen, China, October 2009, pp. 300-305. Online: https://ieeexplore.ieee.org/document/5329077 [Evidence: IEEE]

  62. Abstract: This paper proposes an approach to extract motion features from sequences of images of human behavior. A novel algorithm called two-dimensional continuous dynamic programming (2DCDP) is proposed, which can obtain a set of correspondence data for all pixels between sequential images. The 2DCDP algorithm performs segmentation-free detection of objects in an input image from representations in a reference image. The output of the 2DCDP algorithm describes a complete vector field that indicates the detailed motion of objects. Experiments were performed to show the precision of the motion feature extraction.

  63. T. Wagatsuma, Y. Yaguchi and R. Oka, "Cross Media Data Mining using Associated Keyword Space," in Proc. of CIT2009, vol. 2, Xiamen, China, October 2009, pp. 289-294., Online: https://ieeexplore.ieee.org/document/5329086 [Evidence: IEEE]

  64. Abstract: This paper proposes a method to analyze and determine the unified similarity of various data, such as movies, images, sound, and text. Items from these sources of data have various relative-similarity properties, which are specified by quantitative methods, and these similarities can be represented by physical distances in a multidimensional scaling space. In this study, we introduce an example of a 3-D multimedia space using the Associated Keyword Space (ASKS) and demonstrate similarity relationships between various sources of data in this space.

  65. J. Ma, K. Iseki, Y. Yaguchi and R. Oka, "Segmentation-free Object Recognition Using Full Pixel Matching," in Proc. of CIT2009, vol. 2, Xiamen, China, October 2009, pp. 283-288, Online: https://ieeexplore.ieee.org/document/5329085 [Evidence: IEEE]

  66. Abstract: We present a novel method for recognizing an object in an image using full pixel matching between a reference image and an input image without advance segmentation of the image. A method called two-dimensional continuous dynamic programming (2DCDP) is adopted to optimally calculate the accumulated local distances of all corresponding pixels in nonlinearly matched areas in an input image and a reference image representing an object. The object is recognized by using two parameters, the matching rate and the standard deviation of the amplitude of a vector of pixel displacement between matched pixels, so that images can be mapped into a two-dimensional space. Finally, a general decision space is proposed for nonlinear transformation in object recognition. Experimental results show that the proposed method performs well in recognizing objects.

  67. Y. Yaguchi, K. Iseki, N. T. Viet and R. Oka, "3D Object Reconstruction Using Full Pixel Matching," in Proc. of CAIP2009, Munster, Germany, September 2009, LNCS5702, pp. 873-880., Online: https://link.springer.com/chapter/10.1007/978-3-642-03767-2_106 [Evidence: Major (IAPR) - Higher Rank Conference: CORE2008 Rank A http://portal.core.edu.au/conf-ranks/955/ ]

  68. Abstract: This paper proposes an approach to reconstruct a 3D object from a sequence of 2D images using the 2D Continuous Dynamic Programming algorithm (2DCDP) as a full pixel matching technique. To avoid using both calibrated images and the fundamental matrix in reconstructing 3D objects, the study takes the same approach as Factorization but aims to demonstrate the effectiveness in pixel matching of 2DCDP compared with other conventional methods such as the Scale-Invariant Feature Transform (SIFT) or the Kanade-Lucas-Tomasi tracker (KLT). The experiments in this study use relatively few uncalibrated images but still obtain accurate 3D objects, suggesting that our method is promising and superior to conventional methods.

  69. Y. Yaguchi, K. Iseki, and R. Oka, "Optimal Pixel Matching between Images," in Proc. of PSIVT2009, Tokyo, Japan, January 2009, LNCS5414, pp. 597-610., Online: https://link.springer.com/chapter/10.1007/978-3-540-92957-4_52 [Evidence: ACM, IEEE Japan Chapter]

  70. Abstract: A two-dimensional continuous dynamic programming (2DCDP) method is proposed for two-dimensional spotting recognition of images. Spotting recognition is simultaneous segmentation and recognition of an image by optimal pixel matching between a reference and an input image. The proposed method performs optimal pixel-wise image matching and two-dimensional pixel alignment, which are not available in conventional algorithms. Experimental results show that 2DCDP precisely matches the pixels of non-linearly deformed images.

  71. Y. Yaguchi, K. Naruse and R. Oka, "Fast Spotter: An Approximation Algorithm for Continuous Dynamic Programming," in Proc. of CIT2008, Sydney, Australia, July 2008, pp. 583-588, Online: https://ieeexplore.ieee.org/document/4594740 [Evidence: IEEE]

  72. Abstract: Spotting recognition is the simultaneous realization of both recognition and segmentation. It is able to extract suitable information from an input dataset satisfying a query, and has developed into a research topic known as word spotting that uses dynamic programming or hidden Markov models. Continuous dynamic programming (CDP) is a promising method for spotting recognition applied to sequential patterns. However, the computational burden for conducting a retrieval task using CDP increases as O(JIP), where I is the input length, J is the reference length and P is the number of paths. This paper proposes a faster nonlinear spotting method like CDP, called fast spotter (FS). FS is regarded as an approximation of CDP using A* search. FS reduces the computational burden to O(IP log 2 J) in the best case and executes in around half the time with an experimental dataset, enabling it to realize a large-scale speech retrieval system.
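
The CDP recurrence that fast spotter approximates can be illustrated by a minimal 1D spotting sketch over symbol sequences (match cost 0/1, free start point). `spot` is a hypothetical toy, not the paper's FS algorithm; it shows the accumulation that FS prunes with A* search.

```python
def spot(reference, inputs, threshold):
    """Toy spotting in the spirit of continuous DP: the reference may
    start anywhere in the input (free start), and every input position
    reports the best accumulated distance for a match ending there."""
    J, I = len(reference), len(inputs)
    INF = float("inf")
    # D[j][i]: best accumulated distance matching reference[:j+1]
    # against some input segment ending at position i.
    D = [[INF] * I for _ in range(J)]
    for i in range(I):
        D[0][i] = 0 if reference[0] == inputs[i] else 1  # free start
    for j in range(1, J):
        for i in range(1, I):
            cost = 0 if reference[j] == inputs[i] else 1
            D[j][i] = cost + min(D[j - 1][i - 1],  # diagonal step
                                 D[j - 1][i],      # shrink the input
                                 D[j][i - 1])      # stretch the input
    # Report end positions whose per-symbol score is below threshold.
    return [i for i in range(I) if D[J - 1][i] / J <= threshold]

hits = spot(list("abc"), list("xxabcxxaabbccx"), threshold=0.1)
```

Both the exact occurrence of "abc" and the time-stretched occurrence "aabbcc" are spotted, which is the time-robustness that makes CDP attractive for speech retrieval.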

  73. H. Aota, Y. Yaguchi, and R. Oka, "Feature Detection of Electrical Feeder Lines with Galloping Motion," in Proc. of CIT2008, Sydney, Australia, July, 2008, pp. 327-332., Online: https://ieeexplore.ieee.org/document/4594696 [Evidence: IEEE]

  74. Abstract: To detect and observe the location and angle of electrical feeder lines in galloping movies, the observer must recognize the relation of two red balls acting as spacers for pairs of lines. This research proposes and develops a system to detect these balls using color recognition in image processing. Our experiments show that this system succeeds in finding feeder lines and recognizing the angles of the lines in almost all cases. Additionally, the system has the advantage that it can interpolate for missed frames during recognition. However, some movies have color problems.

  75. Y. Yaguchi, Y. Watanabe, K. Naruse and R. Oka, "Speech and Song Search on the Web: System Design and Implementation," in Proc. of CIT2007, Aizuwakamatsu, Japan, October 2007, pp. 270-275., Online: https://ieeexplore.ieee.org/document/4385093 [Evidence: IEEE]

  76. Abstract: This paper proposes a novel search system for speech and song segments. The amount of accumulated video data on the World Wide Web is expanding and its content is varied. Video content includes natural voices and singing voices, and these differ in their phoneme lengths. Our system uses frame-wise phoneme recognition and continuous dynamic programming (CDP). First, each target and query waveform is divided into fixed short-time frames; second, each frame of the waveform is used to estimate a phoneme label using Bayes estimation; third, the query sequences of phoneme labels are searched from target sequences by time-robust CDP; and, finally, the system obtains candidate answers. This method is robust along the time dimension, and thus has a great advantage for natural voice as well as song. This paper also introduces an implementation of this system, published on the Web as a secondary search engine for YouTube data.

  77. Y. Yaguchi and R. Oka, "Accompaniment Included Song Waveform Retrieval Based on Frame-wise Phoneme Recognition," in The Journal of the Acoustical Society of America, Vol. 120, No. 5, Pt. 2 of 2, November 2006, Honolulu, Hawaii, US, p. 3236. Online: https://www.researchgate.net/publication/272225917_Accompaniment_included_song_waveform_retrieval_based_on_framewise_phoneme_recognition [Evidence: Major – Impact Factor 1.883 in 2006, https://www.scijournal.org/impact-factor-of-j-acoust-soc-am.shtml ]

  78. Abstract: A novel approach is presented for a retrieval method that is useful for waveforms of songs with accompaniment. Audio signals of songs have some different acoustical characteristics from speech signals. Furthermore, the length per mora of signals is longer than that of speech. Therefore, the authors suggest a sound retrieval system for application to musical compositions, including songs, that extracts framewise acoustical characteristics and uses a retrieval method for absorbing phoneme length. First, the system prepares two sets of phoneme identification functions that have corresponding order, but for which phoneme sets belong to different environments of accompaniment‐included or accompaniment‐reduced. Next, musical compositions are put into database and the query song wave converts a waveform to a label sequence using framewise phoneme recognition derived by Bayesian estimation that applies each phoneme identification function according to whether it is accompaniment‐included or not. Finally, the system extracts an interval area, such as query data, from a database using spotting recognition that is derived using continuous dynamic programming (CDP). Retrieval method results agree well with earlier results [Y. Yaguchi and R. Oka, AIRS2005, LNCS3689, 503–509 (2005)] that applied the same musical composition set without accompaniment.

  79. Y. Yaguchi, H. Ohnishi, K. Yamaki, K. Naruse, R. Oka and S. D. Tripp, "Word Space: A New Approach to Describe Word Meanings," in Proc. of CIT2006, Seoul, Korea, September 2006, pp. 11 (1-6), Online: https://ieeexplore.ieee.org/abstract/document/4019836 [Evidence: IEEE]

  80. Abstract: The purpose of this research is to acquire new knowledge about the meanings of words by arranging the words in space. We allocated the words and displayed their relationship with each other in a three-dimensional space with a method called Associated Keyword Space (ASKS). In the experiment, the data obtained by ASKS were compared with Thesaurus.com and WordNet, conventional methods of meaning description. The result of the experiment showed two important points. First, the strength of each meaning of a verb with polysemy was expressed by the visual relationship with and distance to associated words. Second, polysemy of the verb was expressed by the extension of the synonyms.

  81. Y. Yaguchi, H. Ohnishi, S. Mori, K. Naruse, H. Takahashi and R. Oka, "A Mining Method for Linked Web Pages Using Associated Keyword Space," in Proc. of SAINT2006, Arizona, US, January 2006, pp. 268-276, Online: https://ieeexplore.ieee.org/document/1581343 [Evidence: IEEE]

  82. Abstract: We propose a novel method for mining knowledge from linked Web pages. Unlike most conventional methods for extracting knowledge from linked data, which are based on graph theory, the proposed method is based on our associated keyword space (ASKS), which is a nonlinear version of linear multidimensional scaling (MDS), such as quantification method type IV (Q-IV). We constructed a three-dimensional ASKS space using linked HTML data from the World Wide Web. Experimental results confirm that the performance of ASKS is superior to that of Q-IV for discriminating clusters in the space obtained. We also demonstrate a mining procedure realized by 1) finding subspaces obtained in terms of logical calculations between subspaces in an ASKS space and 2) detecting emerging spatial patterns with geometrical features.
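
The abstract positions ASKS as a nonlinear counterpart of linear MDS such as Q-IV. For orientation, the linear baseline (classical MDS from a distance matrix) can be sketched as follows; this is only the standard linear method, not ASKS itself, and `classical_mds` is a name chosen for the example.

```python
import numpy as np

def classical_mds(D, dims=3):
    """Classical (linear) MDS: embed n items in `dims` dimensions so that
    pairwise Euclidean distances approximate the given distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dims]      # keep the largest eigenvalues
    scale = np.sqrt(np.clip(w[idx], 0.0, None))
    return V[:, idx] * scale              # n x dims coordinates
```

For distances that are exactly Euclidean (e.g. points on a line), the embedding reproduces them; ASKS refines this kind of spatial layout nonlinearly.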

  83. Y. Yaguchi and R. Oka, "Song Wave Retrieval Based on Frame-Wise Phoneme Recognition," in Proc. of AIRS2005, Jeju-Island, Korea, October 2005, LNCS 3689, pp. 503-509., Online: https://link.springer.com/chapter/10.1007/11562382_41 [Evidence: LNCS]

  84. Abstract: We propose a song wave retrieval method. Both song wave data and a query wave for song wave data are transformed into phoneme sequences by frame-wise labeling of each frame feature. By applying a search algorithm called Continuous Dynamic Programming (CDP) to these phoneme sequences, we can detect a set of parts in a song database, each of which is similar to the query song wave. Song retrieval rates reach 78% for four clauses over the whole database. Differences between queries from song wave data and from speech wave data are investigated.

  85. Domestic Research Documents

  86. T. Yoshino, Y. Watanobe, Y. Yaguchi, K. Nakamura, J. Ogawa and K. Naruse, "A Pub/Sub Communication Interface in RT-Middleware Applying Bridges between Message Brokers," SI2017, 3B1-04, 2017. (in Japanese)

  87. Y. Yaguchi and K. Moriuchi, "Sequential 3D Maze Search by a Drone Using Localization and a Depth Camera," Robomech2017, 1P1-H08, 2017. (in Japanese)

  88. T. Yoshino, Y. Watanobe, Y. Yaguchi, K. Nakamura and K. Naruse, "Possibilities and Applications of a System Combining CORBA Communication with Broker-based Pub/Sub Messaging in OpenRTM-aist," Robomech2017, 2A2-J08, 2017. (in Japanese)

  89. Y. Yaguchi, Y. Nitta, S. Ishizaka, T. Tannai, T. Mamiya, K. Naruse and S. Nakano, "RT Components for Simultaneous Control of Multiple Heterogeneous Drones," Robomech2017, 2A2-J11, 2017. (in Japanese)

  90. K. Anma, Y. Yaguchi, Y. Watanobe and K. Naruse, "Cloud-based RTM Construction and Automatic Deployment to Raspberry Pi," Robomech2017, 2A2-K07, 2017. (in Japanese)

  91. I. Otani and Y. Yaguchi, "Easy Robot Prototyping by Turning FaBo Sensors into RT Components," Robomech2017, 2A2-K12, 2017. (in Japanese)

  92. M. Yoshida and Y. Yaguchi, "Reconstruction of the Surrounding Environment with a Monocular Camera for Drones," Robomech2017, 2P2-A04, 2017. (in Japanese)

  93. C. Inoue, Y. Yaguchi, K. Naruse, Y. Watanobe, K. Mineta, C. H. Pham, K. Hamaya, V. T. D. Pathberiyage, Y. Oyama, H. Nakazawa, T. Mamiya, T. Matsumoto, K. Anma, T. Yoshino and K. Nakamura, "Development of a Sensor Data Collection Platform Using RT Components," SI2016, 1N3-1, 2016. (in Japanese)

  94. K. Anma, Y. Yaguchi, Y. Watanobe and K. Naruse, "A Study of a Cloud Robotics Development Platform Using RT-Middleware," SI2016, 3G2-4, 2016. (in Japanese)

  95. T. Yoshino, K. Anma, K. Naruse, Y. Yaguchi, Y. Watanobe and K. Nakamura, "Implementation and Application of Pub/Sub Messaging for OpenRTM-aist Using Solace," SI2016, 3G3-3, 2016. (in Japanese)

  96. T. Mamiya, Y. Yaguchi, K. Naruse and Y. Nitta, "Development of a Drone Control Scheme Using OpenRTM-aist," ROBOMECH2016, 2P2-02b3, 2016. (in Japanese)

  97. Y. Yokokura, S. Torii, Y. Niitsuma, Y. Yaguchi and R. Oka, "Spotting Recognition of Performance Motions in Broadcast Figure-Skating Video," PRMU2013-106, pp. 159-164, Osaka, January 2014. (in Japanese)

  98. S. Moriya, Y. Yaguchi, N. Terunuma, S. Sato and I. Wilson, "Normalization and Matching Methods for Comparing Language Learners in a Tongue Feature Space," SP2013-80, pp. 53-57, Nara, November 2013. (in Japanese)

  99. Y. Niitsuma, S. Torii, Y. Yaguchi and R. Oka, "Motion Recognition by Spatiotemporal Continuous DP Using an Object-Independent Pixel Sequence Model," PRMU2013-30, pp. 65-70, Tokyo, November 2013. (in Japanese)

  100. S. Torii, Y. Niitsuma, Y. Yaguchi and R. Oka, "Spotting Recognition of Motions in Video from a Moving Camera," PRMU2013-31, pp. 71-76, Tokyo, November 2013. (in Japanese)

  101. Y. Hino, Y. Yaguchi and T. Todoroki, "Spherical Map Generation Using Similarity Based on Clothing Shape and Color," Visualization Society of Japan Conference (Aizu 2013), B109, 2013. (in Japanese)

  102. K. Furusawa, Y. Yaguchi and T. Todoroki, "Automatic Generation of Avatars Resembling a User's Face Image and Its Application to Hanasake! Avatar," Visualization Society of Japan Conference (Aizu 2013), B110, 2013. (in Japanese)

  103. Y. Yaguchi, T. Todoroki, K. Furusawa and Y. Hino, "Style Share: Generating a Clothing-Coordination Map on a Spherical Surface," Visualization Society of Japan Conference (Aizu 2013), B111, 2013. (in Japanese)

  104. Y. Hiroto, Y. Yaguchi, Y. Watanobe and R. Oka, "Generating a Similar-Algorithm Map for Aizu Online Judge Based on Source-Code Similarity," Visualization Society of Japan Conference (Aizu 2013), B113, 2013. (in Japanese)

  105. S. Wada, Y. Yaguchi, R. Ogata, Y. Watanobe, K. Naruse and R. Oka, "Associated-Word Analysis of Time-Series Data Using Spatial Visualization," Visualization Society of Japan Conference (Aizu 2013), D109, 2013. (in Japanese)

  106. Y. Yaguchi and H. Washiyama, "Image Clustering on a Spherical Surface to Help Return Photographs Lost in the Tsunami," Visualization Society of Japan Conference (Aizu 2013), E201, 2013. (in Japanese)

  107. T. Odashima, T. Matsumoto, Y. Yaguchi, K. Okudaira, Y. Kiyonaga, S. Sasaki, H. Yano, E. Imai, H. Demura, A. Yamagishi and the Tanpopo WG, "Recognition of Sample Penetration Tracks in Aerogel for the Tanpopo Mission on the International Space Station," Visualization Society of Japan Conference (Aizu 2013), E204, 2013. (in Japanese)

  108. H. Nakajima, Y. Mori, H. Demura, N. Hirata, K. Naruse and Y. Yaguchi, "A Validation Study of Image-based Positioning in Space Missions," Tohoku-Section Joint Convention of Institutes of Electrical and Information Engineers, 2A16, 2013.

  109. Y. Niitsuma, T. Matsuzaki, Y. Yaguchi and R. Oka, "Aerial Character Recognition from Video Using Spatiotemporal Continuous DP," PRMU2012-222, pp. 249-254, March 2013. (in Japanese)

  110. Y. Yaguchi, N. Horiguchi and I. Wilson, "Phoneme-Segment Mapping onto Ultrasound Tongue Images for Pronunciation Learning," PRMU2011-221/SP2011-136, pp. 149-154, February 2012. (in Japanese)

  111. J. Tazawa, Y. Yaguchi, T. Miyazaki and R. Oka, "Acceleration and Memory Reduction of Two-Dimensional Continuous DP (2DCDP) by Hardware Implementation," CAS2011-25/VLD2011-32/SIP2011-54/MSS2011-25, pp. 141-146, July 2011. (in Japanese)

  112. R. Oka, Y. Yaguchi and S. Mizoe, "On a General Scheme for Continuous DP: Optimal Full-Pixel Matching for Image Spotting," PRMU2010-87/IBISML2010-59, pp. 245-252, September 2010. (in Japanese)

  113. Yaguchi, Y., Aota, H. and Oka, R., "Pixel Tracker: Pixel-wise Tracking for Objects " in Proc. of MIRU2009, Shimane, Japan, July, 2009, pp. 1237-1224. (in Japanese)

  114. Ota, K., Yaguchi, Y. and Oka, R., "Recognition and Tracking of Moving Objects Using Mean-Shift and Pixel Tracking," in Proc. of MIRU2009, Shimane, Japan, July, 2009, pp. 1356-1363. (in Japanese)

  115. Yaguchi, Y., Ohshima, M., Kawai, T. and Oka, R., "Kansei Information Retrieval from Web Video Repository," in Technical Report on IEICE, PRMU2009-11, Gifu, Japan, May, 2009, pp. 59-64. (in Japanese)

  116. Aota, H., Yaguchi, Y., Iseki, K. and Oka, R., "Eigenvalue Trajectory Motion Feature Extraction via Pixel Transit Vector Pattern and Plane Distribution of Objects," in Proc. of DIA2009, Miyagi, Japan, March, 2009, pp. 194-198. (in Japanese)

  117. Iseki, K., Yaguchi, Y., Ota, K., Chiba, M. and Oka, R., "System for 3D Shape Reconstruction of Castellation from Moving Camera Images," in Technical Report on IEICE, PRMU2008-165, Kumamoto, December, 2008, pp. 105-110. (in Japanese)

  118. Iseki, K., Yaguchi, Y. and Oka, R., "3D Shape Reconstruction Using Optimal Pixel Matching Between Images," in Technical Report on IEICE, PRMU2008-126, Osaka, Japan, November, 2008, pp. 101-108. (in Japanese)

  119. Yaguchi, Y., Iseki, K. and Oka, R., "Two-Dimensional Continuous Dynamic Programming for Image Spotting Recognition," in Proc. of MIRU2008, Nagano, Japan, July, 2008, pp. 707-714. (in Japanese)

  120. Aota, H., Yaguchi, Y., Oka, R. and Mizoe, H., "Extraction of Motion Features for Monitoring Accidents of Electrical Feeder Lines," in Proc. of MIRU2008, Nagano, Japan, July, 2008, pp. 885-890.

  121. Ohnishi, H., Yaguchi, Y., Naruse, K. and Oka, R., "Web Data Mining using Associated Keyword Space with Textual Information and Link Structure," in Technical Report on IEICE, SIG-WI2, Oita, Japan, July, 2007, pp. 707-714.

  122. Yaguchi, Y., Watanabe, Y., Naruse, K. and Oka, R., "Speech and song wave search in the Web: System Design and Implementation," in Technical Report on IEICE, SP2007, Fukushima, Japan, June, 2007, pp. 19-24.

  123. Ohnishi, H., Yaguchi, Y., Yamaki, K., Oka, R. and Naruse, K., "Word Space: A New Approach to Describe Word Meanings," in Technical Report on IEICE, DE2006-75, Niigata, Japan, July, 2006, pp. 149-154.

  124. Yaguchi, Y. and Oka, R., "Song Wave Retrieval based on Frame-wise Phoneme Recognition," in IPSJ SIG Technical Reports, 2005-SLP-057, Hokkaido, Japan, July, 2005, pp. 135-140. (in Japanese)

  125. Yaguchi, Y. and Oka, R., "Song Wave Retrieval based on Frame-wise Phoneme Recognition," in Technical Report on IEICE, SP2004-50, Tokushima, Japan, August, 2004, pp. 19-24. (in Japanese)


  127. Patents  

  128. "Image Pattern Matching Device, Image Pattern Matching Method, and Image Pattern Matching Program," Japanese Patent No. 5247481.


  130. Projects  

  131. Tohoku Bureau of Telecommunications (Vice Chair: Y. Yaguchi), "

  132. Kikuchi Seisakusho (research subcontract: Y. Yaguchi), "Development of a Flight Recorder and Emergency Landing Technology for UAVs," NEDO "Next-Generation Artificial Intelligence and Robot Core Technology Development" Program, 2016-2019.

  133. Prefectural project: University of Aizu Robot Software Development Subsidy Program (co-investigator).

  134. SYNC et al. (co-investigator: Y. Yaguchi), "", Fukushima Medical and Welfare Development Project Subsidy,

  135. N. Hirata (co-investigator: Y. Yaguchi), "Analysis of Asteroid Geological Activity Based on High-Precision Shape Models," JSPS KAKENHI Grant-in-Aid for Scientific Research (B), 2013-2016.

  136. Y. Yaguchi, "Information Globe: Visualizing Relationships among Data by Spherical Clustering," JSPS KAKENHI Grant-in-Aid for Young Scientists (B), 2012-201

Groups and Activities

 

  • Fukushima Internet Television

    • Yae-Gourmet

    • Wakai-Chikara: Sekamimeshi Project

    • Google Impact Challenge: Safety Network for Suicide

  • Shalom Church Aizu, Every Nations Mission Paul Team
     

My Hobby

 

  • Composing Music

  • Playing Piano, Bass, Drums

  • Sports: Running, Soft Tennis, Soccer, Baseball

  • Reading Novels

 

 

 
