Keynote Speakers at ACIRS 2019
Kin-Choong Yow, University of Regina, Canada
Kin-Choong Yow obtained his B.Eng (Elect) with 1st Class Honours from the National University of Singapore in 1993, and his Ph.D. from Cambridge University, UK, in 1998. He joined the University of Regina in September 2018, where he is presently an Associate Professor in the Faculty of Engineering and Applied Science. Prior to joining the UofR, he was an Associate Professor at the Gwangju Institute of Science and Technology (GIST), Republic of Korea (2013-2018), a Professor at the Shenzhen Institutes of Advanced Technology (SIAT), P.R. China (2012-2013), and an Associate Professor at the Nanyang Technological University (NTU), Singapore (1998-2013). From 1999 to 2005 he served as the Sub-Dean of Computer Engineering at NTU, and from 2006 to 2008 as the Associate Dean of Admissions at NTU. Kin-Choong Yow's research interests are in Artificial General Intelligence and Smart Environments. Artificial General Intelligence (AGI) is a higher form of machine intelligence (or artificial intelligence) in which the intelligent agent (or machine) can successfully perform any intellectual task that a human being can. Kin-Choong Yow has published over 80 top-quality international journal and conference papers, and he has served as a reviewer for a number of premier journals and conferences, including IEEE Wireless Communications and the IEEE Transactions on Education. He has been invited to give presentations at various scientific meetings and workshops, such as the CNET Networks Event (2002) and the Microsoft Windows Server 2003 Launch (2003). He is also a member of the IEEE, the ACM, and the Singapore Computer Society (SCS).
Speech Title: Autonomous Vehicle Guidance using Independently Moving Cameras
Abstract: Autonomous vehicles (or self-driving cars) are one of the hottest topics in the world of tech right now, with many companies, such as Waymo and Tesla, having already released, or soon to release, self-driving features. Each vehicle is usually outfitted with a GPS unit, an inertial navigation system, and a range of sensors including laser rangefinders, radar, and video. The vehicle typically uses positional information from the GPS and inertial navigation system to localize itself, and sensor data to refine its position estimate as well as to detect obstacles. In this talk, I will discuss how we can use independently moving cameras to provide input to an autonomous vehicle that allows it to navigate through an environment to target locations without using global localization systems such as GPS. In addition, I will talk about how we can use moving cameras to recover the distance of an object that is moving simultaneously with (but independently from) the autonomous vehicle.
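The distance-recovery problem the abstract mentions ultimately rests on two-view triangulation: if two cameras with known poses each cast a viewing ray toward the same object at the same instant, the object's position (and hence its range) can be estimated from where the rays nearly intersect. A minimal sketch in NumPy, using the standard midpoint method; the camera poses and all numbers here are invented for illustration and are not the speaker's method:

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Estimate a 3-D point from two viewing rays (camera centre + direction)
    as the midpoint of the shortest segment between the rays."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for ray parameters t, s minimising |(c1 + t*d1) - (c2 + s*d2)|
    A = np.stack([d1, -d2], axis=1)            # 3x2 system
    b = c2 - c1
    (t, s), *_ = np.linalg.lstsq(A, b, rcond=None)
    p1 = c1 + t * d1                           # closest point on ray 1
    p2 = c2 + s * d2                           # closest point on ray 2
    return (p1 + p2) / 2.0

# Hypothetical setup: two cameras one metre apart both sight a target 10 m away
c1 = np.array([0.0, 0.0, 0.0])
c2 = np.array([1.0, 0.0, 0.0])
target = np.array([0.0, 0.0, 10.0])
est = triangulate_midpoint(c1, target - c1, c2, target - c2)
dist = np.linalg.norm(est - c1)                # recovered range from camera 1
```

With cameras that move independently, the same geometry applies frame by frame, but each camera's pose must itself be estimated before the rays can be cast, which is where the difficulty discussed in the talk arises.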
Prof. Minoru Sasaki, Gifu University, Japan
He received his M.Eng. and D.Eng. degrees in mechanical engineering from Tohoku University in 1983 and 1985, respectively. He was a research associate at Tohoku University in 1985, a lecturer at Miyagi National College of Technology, and a visiting professor at the University of California, Los Angeles. Since 1991 he has been with the Faculty of Engineering, Gifu University, where he is currently a professor.
Speech Title: Vibration Control of a Flexible Manipulator
Abstract: Despite the numerous advantages associated with flexible-link manipulators (low power consumption, use of small actuators, high speed, and low cost due to lower material requirements than their rigid counterparts), link vibrations stand in the way of reaping these benefits. This leads to time wasted waiting for vibrations to decay to safe operating levels, and to the possibility of mechanical failure due to vibration fatigue. This drawback has been addressed using techniques such as adaptive filters, adaptive strain feedback gains, and piezoelectric transducers, but what was gained in operating speed was lost in waiting for the vibrations to decay to levels safe for operation. The plant presented in this talk is a 3D, two-link flexible manipulator with a variable weight attached at the distal end. It has three rotary joints driven by DC servomotors, and the two flexible links are assumed to have damping of the Kelvin-Voigt type in both the lateral and the torsional senses. The control system consists of a computer with A/D and D/A converters interfaced to MATLAB and controlled from the dSPACE ControlDesk environment. Angular position and velocity are measured using encoders coupled to the servomotors, while link strain is measured by strain gauges positioned at the bottom of each link. This ongoing work seeks to contribute by developing high-speed control methods for a 3D two-link flexible manipulator. To this end, we have modelled the manipulator and validated the model against the actual manipulator in terms of joint angles, joint velocities, link strains, and power spectra in the frequency domain. The first control method involved the development of an inverse controller augmented with lowpass filters. Experimental results show that, with a careful choice of the filter order and time constant, this technique significantly reduces transient vibrations in comparison to a system without a controller.
A further extension of this technique yielded a two-degree-of-freedom controller design combining a filtered inverse controller with strain feedback, which addressed residual vibrations and resulted in a superior controller. Collocated controllers, though very effective in reducing the rise time, are poor at increasing the damping of the system. Non-collocated control of flexible manipulators is limited by the difficulty of estimating the position of the end-effector from the joint angle. Research has shown that non-collocated control in which a proportion of the root strain is applied as negative feedback increases system damping. However, too high a gain results in encoder noise being fed back, while too low a gain gives poor vibration suppression. Determining the right proportion of root strain to feed back, and its timing, is a research problem that has not yet been solved. Further experiments were conducted to tune the strain feedback gains using an artificial neural network trained with an online backpropagation algorithm. Results showed that the neural network could intelligently choose appropriate gains depending on the intensity of the link strain, yielding a system with minimal vibrations relative to a system with constant controller gains. Since loading the manipulator is accompanied by an increase in strain intensity and a reduction in vibration frequency, the technique can sense changes in loading and adjust the gains accordingly.
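The gain-scheduling idea described above — a strain-feedback gain that grows with strain intensity but is bounded so that it neither feeds back encoder noise (upper limit) nor becomes too weak to suppress vibration (lower limit) — can be caricatured in a few lines. This is a hypothetical RMS-based rule for illustration only, not the neural-network controller from the talk; the bounds and scale factor are invented:

```python
import numpy as np

def adapt_strain_gain(strain_window, k_min=0.05, k_max=0.5, scale=2.0):
    """Toy adaptive rule: scale the strain-feedback gain with the RMS strain
    intensity over a recent window of samples, clipped to [k_min, k_max] so
    the gain neither amplifies sensor noise nor under-damps the link."""
    rms = np.sqrt(np.mean(np.square(strain_window)))
    return float(np.clip(scale * rms, k_min, k_max))

# Heavy loading raises strain intensity, so the rule raises the gain;
# light vibration keeps the gain near its floor.
gain_loaded = adapt_strain_gain(np.full(100, 0.3))    # saturates at k_max
gain_quiet = adapt_strain_gain(np.full(100, 0.01))    # clipped at k_min
```

A trained network replaces this fixed schedule with a learned mapping from strain measurements to gains, which is what lets it react to load changes it was never explicitly programmed for.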
Prof. Kenji Suzuki, Tokyo Institute of Technology, Japan
Kenji Suzuki, Ph.D. (by Published Work; Nagoya University), worked at Hitachi Medical Corp., Japan, and at Aichi Prefectural University, Japan, as a faculty member, and in the Department of Radiology, University of Chicago, as an Assistant Professor. In 2014, he joined the Department of Electrical and Computer Engineering and the Medical Imaging Research Center, Illinois Institute of Technology, as a tenured Associate Professor. In 2017, he was jointly appointed to the World Research Hub Initiative (WRHI), Institute of Innovative Research (IIR), Tokyo Institute of Technology, Japan, as a Specially Appointed Professor (equivalent to Visiting Professor). He has published 330 papers (including 110 peer-reviewed journal papers), and has been actively studying deep learning in medical imaging and computer-aided diagnosis for the past 25 years. His papers have been cited more than 13,000 times, and his h-index is 47. He is an inventor on 30 patents (including some of the earliest deep-learning patents), which have been licensed to several companies and commercialized. He has published 11 books and 22 book chapters, and edited 13 journal special issues. He has been awarded a number of grants as PI, including NIH R01 and ACS grants. He has served as an editor of a number of leading international journals, including Pattern Recognition and Medical Physics. He has served as a referee for 91 international journals, such as Science Translational Medicine (IF: 16.8) and Nature Communications (IF: 12.4), as an organizer of 62 international conferences, and as a program committee member of 170 international conferences. He has given 120 invited talks and keynote speeches at international conferences. He has received 26 awards, including the Springer-Nature EANM Most Cited Journal Paper Award 2016 and the 2017 Albert Nelson Marquis Lifetime Achievement Award.
Speech Title: AI Doctor and Smart Medical Imaging with Deep Learning
Abstract: It is said that artificial intelligence driven by deep learning will bring about the 4th Industrial Revolution. Deep learning has become one of the most active areas of research in the computer vision, pattern recognition, and imaging fields, because “learning from examples or data” is crucial to handling the large amounts of data (“big data”) coming from informatics and imaging systems. Deep learning is a versatile, powerful framework that can acquire image-processing and analysis functions through training with image examples; it is an end-to-end machine-learning model that enables a direct mapping from raw input data to desired outputs, eliminating the need for the handcrafted features of conventional feature-based machine learning. I invented some of the earliest deep-learning models for image processing, semantic segmentation, object enhancement, and classification of patterns in medical imaging, and I have been actively studying deep learning in medical imaging for the past 23 years. In this talk, AI-aided diagnosis and smart medical imaging with deep learning are introduced, including 1) computer-aided diagnosis of lung cancer in CT, 2) distinction between benign and malignant lung nodules in CT, 3) polyp detection and classification in CT colonography, 4) separation of bones from soft tissue in chest radiographs, and 5) radiation dose reduction in CT and mammography.
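The "direct mapping from raw input to desired output" idea can be shown in miniature: instead of handcrafting a denoising filter, fit one from paired noisy/clean example images. The sketch below uses a single learned 3x3 linear filter in plain NumPy as a stand-in for a deep network; the data are synthetic and the whole setup is an illustrative assumption, not the speaker's models:

```python
import numpy as np

rng = np.random.default_rng(0)

def patches(img, k=3):
    """Collect all k x k patches of img as rows (valid region only)."""
    h, w = img.shape
    return np.array([img[i:i + k, j:j + k].ravel()
                     for i in range(h - k + 1) for j in range(w - k + 1)])

# Synthetic training pair: a clean image and its noisy observation.
clean = rng.random((32, 32))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

# "Training": least-squares fit of a 3x3 linear filter mapping each noisy
# patch to the clean value at the patch centre -- the same input->output
# supervision a convolutional network would receive, minus the depth.
X = patches(noisy)                       # (900, 9) patch matrix
y = clean[1:-1, 1:-1].ravel()            # (900,) centre pixels
w, *_ = np.linalg.lstsq(X, y, rcond=None)

denoised = (X @ w).reshape(30, 30)       # apply the learned filter
err_before = np.mean((noisy[1:-1, 1:-1] - clean[1:-1, 1:-1]) ** 2)
err_after = np.mean((denoised - clean[1:-1, 1:-1]) ** 2)
```

The learned filter reduces the mean squared error below that of the raw noisy image on its training pair; a deep network extends this from one linear filter to a stack of learned nonlinear ones, which is what makes applications such as bone suppression and dose reduction feasible.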