Presented: March 23, 2021, 10:00-11:00 AM Central Standard Time
Speaker: Dr. Ruwen Qin, Stony Brook University
Advancements in sensor, artificial intelligence (AI), and robotic technologies have laid a foundation for transforming traditional engineering systems into complex adaptive systems. This paradigm shift will bring exciting changes to civil infrastructure systems and their builders, operators, and managers. Funded by the INSPIRE University Transportation Center (UTC), Dr. Qin’s group investigated the holism of an AI-robot-inspector system for bridge inspection. Dr. Qin will discuss the need for close collaboration among the constituent components of the AI-robot-inspector system. In drone-based bridge inspection, the mobile robotic inspection platform rapidly collects large volumes of inspection video data that must be processed prior to element-level inspections. She will illustrate how human intelligence and artificial intelligence can collaborate to create an AI model both efficiently and effectively. Obtaining a large amount of expert-annotated data for model training is undesirable, if not unrealistic, in bridge inspection. This INSPIRE project addressed the annotation challenge by developing a semi-supervised self-training (S3T) algorithm that uses a small amount of inspectors’ time and guidance to help the model achieve excellent performance. The project also evaluated the improvement in job efficacy produced by the developed AI model. The presentation will conclude by introducing some of the ongoing work toward adapting AI models to new or revised bridge inspection tasks, as the National Bridge Inventory includes over 600,000 bridges of various material types, shapes, and ages.
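The annotation-saving idea in the abstract can be sketched, in highly simplified form, as a pseudo-labeling loop: train on the small expert-labeled set, let the model label the unlabeled samples it is confident about, and reserve the rest for the inspector. The toy example below (a hypothetical one-dimensional threshold classifier and made-up data, not the project's S3T implementation) illustrates the loop:

```python
# Toy illustration of semi-supervised self-training (pseudo-labeling).
# A hypothetical 1-D "image feature" is classified as crack (1) or
# no-crack (0) by a threshold fit on labeled data; confident predictions
# on unlabeled data become pseudo-labels and the threshold is refit.

def fit_threshold(xs, ys):
    """Fit a decision threshold as the midpoint of the two class means."""
    m0 = sum(x for x, y in zip(xs, ys) if y == 0) / ys.count(0)
    m1 = sum(x for x, y in zip(xs, ys) if y == 1) / ys.count(1)
    return (m0 + m1) / 2.0

def self_train(labeled_x, labeled_y, unlabeled_x, margin=0.4, rounds=3):
    xs, ys = list(labeled_x), list(labeled_y)
    pool = list(unlabeled_x)
    for _ in range(rounds):
        t = fit_threshold(xs, ys)
        keep = []
        for x in pool:
            if abs(x - t) >= margin:          # confident prediction
                xs.append(x)
                ys.append(1 if x > t else 0)  # promote to pseudo-label
            else:
                keep.append(x)                # left for the inspector
        pool = keep
    return fit_threshold(xs, ys), pool

threshold, ask_inspector = self_train(
    labeled_x=[0.1, 0.3, 1.8, 2.2],
    labeled_y=[0, 0, 1, 1],
    unlabeled_x=[0.2, 0.4, 1.6, 2.5, 1.05],
)
print(threshold, ask_inspector)
```

Only the ambiguous sample near the decision boundary remains for expert annotation, which is the efficiency the abstract describes.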
Dr. Ruwen Qin is an Associate Professor of Civil Engineering at Stony Brook University. She received her Ph.D. degree in Industrial Engineering and Operations Research from Pennsylvania State University - University Park. Her research focuses on creating analytics and systems methods for forming, operating, and coordinating complex adaptive systems such as cyber-physical-human systems, smart connected systems, and intelligent automation systems. Her research has been sponsored by the National Science Foundation, the U.S. Department of Transportation, the Department of Education, state departments of transportation, and industry. She is a member of IEEE, INFORMS, and ASEM.
Presented: June 16, 2021, 10:00-11:00 AM Central Standard Time
Speaker: Dr. Genda Chen, Missouri S&T
In this 50-minute lecture, the fundamental concepts of fiber optic sensors for both distributed and point corrosion measurements are reviewed. For the distributed monitoring of a linear bridge component such as a steel-reinforced girder, Brillouin scattering and fiber Bragg gratings (FBG) can be coupled to measure both temperature and radial strain as an indirect indicator of the corrosion process. For the point monitoring of steel structures, long period fiber gratings (LPFG) are specially designed for a direct measurement of mass loss or the loss of cross-sectional area of the component. In particular, an Fe-C coated LPFG sensor is introduced for corrosion-induced mass loss measurement when the Fe-C materials are comparable to the parent steel component to be monitored. The sensing system operates on the principle that an LPFG responds not only to thermal and mechanical deformation but also to changes in the refractive index of the medium surrounding the optical fiber. The fabrication process of the LPFG is demonstrated with a CO2 laser-aided fiber grating system. To enable mass loss measurement, a low pressure chemical vapor deposition (LPCVD) system is introduced to synthesize a graphene/silver nanowire composite film as a flexible transparent electrode for the electroplating of a thin Fe-C layer on the curved surface of an LPFG sensor. An integrated sensing package is illustrated for corrosion monitoring and simultaneous strain and temperature measurement. Two bare LPFGs and three Fe-C coated LPFG sensors are multiplexed and deployed inside three miniature, coaxial steel tubes to measure critical mass losses through the penetration of tube walls, and their corresponding corrosion rates, over the life cycle of an instrumented steel component. The integrated package can be utilized for in-situ deterioration detection in reinforced concrete and steel structures.
Assisted by a permanent magnet in pipeline monitoring, both FBG and LPFG sensors are combined with an extrinsic Fabry-Perot interferometer (EFPI) to measure both internal and external thickness reductions without impacting the operation of the pipeline.
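The point-sensing principle above can be illustrated with the first-order LPFG phase-matching condition: the resonant wavelength of cladding mode m is lambda_m = (n_core - n_clad_m) * period, so anything that perturbs the cladding-mode effective index (for example, thinning of the Fe-C coating as it corrodes) shifts the resonance. A minimal numeric sketch, with effective indices and grating period that are illustrative assumptions rather than values from the talk:

```python
# First-order LPFG phase-matching condition (illustrative numbers, not
# values from the talk): lambda_m = (n_core - n_clad_m) * period.
# Corrosion of the Fe-C coating perturbs the cladding-mode effective
# index and therefore shifts the resonant wavelength.

def lpfg_resonance_nm(n_core_eff, n_clad_eff, period_um):
    """Resonant wavelength (nm) of one cladding mode of an LPFG."""
    return (n_core_eff - n_clad_eff) * period_um * 1e3  # um -> nm

# Assumed effective indices and a 500 um grating period:
before = lpfg_resonance_nm(1.4500, 1.4468, period_um=500.0)
after = lpfg_resonance_nm(1.4500, 1.4470, period_um=500.0)
print(before, after)  # the resonance shifts as the cladding index rises
```

Tracking this wavelength shift over time is what lets the multiplexed sensors report mass loss continuously.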
Dr. Genda Chen is Professor and Robert W. Abbett Distinguished Chair in Civil Engineering, Director of the INSPIRE University Transportation Center, and Director of the Center for Intelligent Infrastructure at Missouri University of Science and Technology (S&T). He received his Ph.D. degree from the State University of New York at Buffalo in 1992 and joined Missouri S&T after over three years of bridge design, inspection, and construction practice. Since 1996, Dr. Chen has authored or co-authored over 400 technical publications in structural health monitoring (SHM), structural control, computational and experimental mechanics, multi-hazards assessment and mitigation, and transportation infrastructure preservation and resiliency, including over 180 journal papers, 5 book chapters, and 27 keynote and invited presentations at international conferences. He chaired the 9th International Conference on Structural Health Monitoring of Intelligent Infrastructure (SHMII-9), St. Louis, Missouri, August 4-7, 2019. He received the 2019 SHM Person of the Year Award, the 1998 National Science Foundation CAREER Award, the 2004 Academy of Civil Engineers Faculty Achievement Award, and the 2009, 2011, and 2013 Missouri S&T Faculty Research Awards. In 2016, he was inducted into the Academy of Civil Engineers at Missouri S&T and became an honorary member of Chi Epsilon. He is a Fellow of the American Society of Civil Engineers (ASCE), the Structural Engineering Institute (SEI), and the International Society for Structural Health Monitoring of Intelligent Infrastructure (ISHMII). He is a Section Editor of Intelligent Sensors, an Associate Editor of the Journal of Civil Structural Health Monitoring, an Associate Editor of Advances in Bridge Engineering, an Editorial Board Member of Advances in Structural Engineering, and Vice President of the U.S. Panel on Structural Control and Monitoring.
Presented: September 14, 2021, 10:00-11:00 AM Central Standard Time (US and Canada)
Speaker: Dr. Jian Zhang, Southeast University, China
Computer vision is an emerging technology that has been widely used in the field of structural health monitoring. This seminar covered the latest achievements of our research group in defect detection, health monitoring, and construction control using image-based technologies. For defect detection, advanced unmanned aerial vehicles (UAVs) have been developed to automatically detect surface cracks and other types of structural damage based on digital image processing techniques. An integrated navigation system combining a binocular camera and an inertial sensor enables automated route planning for the developed UAVs in case of GPS failure. Specifically, a wall-climbing UAV is designed to acquire detailed surface crack images with high accuracy, and a collision-tolerant UAV is proposed for the defect inspection of complex or internal spaces. In addition to UAVs, an unmanned ship has been developed for the exploration of inaccessible places such as sewers. For structural health monitoring, an online camera monitoring system has been developed for displacement measurement of long-span bridges. For construction control, a binocular vision-based method is investigated and applied to displacement measurement during the hoisting of prefabricated components. Experimental results show that, compared to conventional methods, the proposed approach is more intelligent, more convenient, and more reliable.
Dr. Jian Zhang is currently a Professor of Civil Engineering at Southeast University and Director of the Jiangsu Key Laboratory of Engineering Mechanics. His research interests include technologies and devices for intelligent inspection and non-contact measurement, mobile rapid impact testing, and approaches for long-term monitoring data processing and uncertainty analysis. He received the 2018 Aftab Mufti Best Paper Award from the International Society for Structural Health Monitoring of Intelligent Infrastructure (ISHMII) and the 2019 Takuji Kobori Prize from the International Association for Structural Control and Monitoring (IASCM). He served as the Co-Chair of the 8th International Conference on Experimental Vibration Analysis for Civil Engineering Structures (EVACES). He has served on the editorial boards of academic journals including Computer-Aided Civil and Infrastructure Engineering and Structural Control and Health Monitoring.
Presented: December 14, 2021, 10:00-11:00 AM Central Standard Time (US and Canada)
Speaker: Dr. Genda Chen, Missouri S&T
In this 50-minute lecture, the fundamental concept of hyperspectral imaging is reviewed. Each hyperspectral image represents a narrow, contiguous wavelength range of the electromagnetic spectrum, which can be indicative of a chemical substance. All the images in the spectral range of a hyperspectral camera are combined to form a three-dimensional hyperspectral data cube with two spatial dimensions and one spectral dimension. A hyperspectral cube can be sampled in four ways: spatial scanning, spectral scanning, snapshot imaging, and spatio-spectral scanning. This presentation focuses on spatial scanning with a line-scan camera. The feasibility of horizontal imaging with a synchronized hyperspectral camera and LiDAR scanner system is explored first. The hyperspectral images are then applied to several infrastructure inspection tasks: concrete roadway condition assessment, fresh mortar strength evaluation, chloride concentration determination in reinforced concrete, steel reinforcing bar and steel plate corrosion, and surface plant stress monitoring as an indicator of gas leakage from embedded pipelines. As an example, mortar samples with water-to-cement (w/c) ratios of 0.4-0.7 were cast and scanned during curing. Reflectance data in the wavelength range of 1920 nm to 1980 nm, associated with the O-H chemical bond, were averaged to classify different mortar types with a support vector machine (SVM) algorithm and to predict their compressive strength from a regression equation. After baseline and bias corrections, the reflectance intensity at the 2258 nm wavelength was extracted to characterize Friedel’s salt. The possibility of steel corrosion was experimentally shown to increase with the characteristic reflectance intensity, which in turn decreases linearly with the diffusion depth at a given corrosion state.
For each type of mortar cube with a constant w/c ratio, the characteristic reflectance intensity increases linearly with the chloride ion (Cl-) concentration up to 0.8 wt.%.
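The mortar example can be sketched numerically. The snippet below averages reflectance over the 1920-1980 nm O-H band per sample and fits an ordinary least-squares line to predict compressive strength; the reflectance and strength values are made up for illustration (not the measured data from the talk), and the SVM classification step is omitted:

```python
# Sketch of the band-averaging and regression step (made-up reflectance
# and strength values, not the measured data from the talk). Reflectance
# over the 1920-1980 nm O-H band is averaged per sample, and compressive
# strength is predicted with an ordinary least-squares line.

def band_average(spectrum, wavelengths, lo=1920.0, hi=1980.0):
    """Average the reflectance values whose wavelength lies in [lo, hi]."""
    vals = [r for w, r in zip(wavelengths, spectrum) if lo <= w <= hi]
    return sum(vals) / len(vals)

def fit_line(xs, ys):
    """Least-squares slope a and intercept b for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# One band-averaged reflectance per mortar mix (w/c 0.4..0.7) and an
# assumed compressive strength in MPa for each mix:
refl = [0.62, 0.55, 0.49, 0.41]
strength = [45.0, 38.0, 31.0, 24.0]
a, b = fit_line(refl, strength)
print(a * 0.50 + b)  # predicted strength at a reflectance of 0.50
```

The same band-averaged feature would feed the SVM classifier in the classification variant of the task.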
Dr. Chen received his Ph.D. degree from the State University of New York at Buffalo in 1992 and joined Missouri University of Science and Technology (S&T) in 1996 after over three years of bridge design, inspection, and construction practice with Steinman Consulting Engineers in New York City. Since 1996, Dr. Chen has authored or co-authored over 400 technical publications in structural health monitoring (SHM), structural control, structural and robotic dynamics, computational and experimental mechanics, life-cycle assessment and deterioration mitigation of infrastructure, multi-hazards assessment and mitigation, and transportation infrastructure preservation and resiliency, including over 180 journal papers, 5 book chapters, and 27 keynote and invited presentations at international conferences. He chaired the 9th International Conference on Structural Health Monitoring of Intelligent Infrastructure (SHMII-9), St. Louis, Missouri, August 4-7, 2019. He received one patent on distributed coaxial cable strain/crack sensors and two patents on enamel coating of steel reinforcing bars for corrosion protection and steel-concrete bond strength. He received the 2019 international SHM Person of the Year Award, the 1998 National Science Foundation CAREER Award, the 2004 Academy of Civil Engineers Faculty Achievement Award, and the 2009, 2011, and 2013 Missouri S&T Faculty Research Awards. In 2016, he was inducted into the Academy of Civil Engineers at Missouri S&T and became an honorary member of Chi Epsilon. He is a Fellow of the American Society of Civil Engineers (ASCE), the Structural Engineering Institute (SEI), and the International Society for Structural Health Monitoring of Intelligent Infrastructure (ISHMII).
He is a Section Editor of Intelligent Sensors, an Associate Editor of the Journal of Civil Structural Health Monitoring, an Associate Editor of Advances in Bridge Engineering, an Editorial Board Member of Advances in Structural Engineering, and Vice President of the U.S. Panel on Structural Control and Monitoring.
Q1: What is the minimum crack width that can be detected? Also, what's the impact of the image quality on the accuracy of the detection?
A: The minimum detectable crack width is 0.1 mm; finer cracks can be detected by switching to a higher-resolution camera. The factors affecting detection accuracy include image resolution, illumination conditions (low light, shadow occlusion), image transmission and storage problems (video transmitted by radio may contain noise), motion blur, etc. Because the deep learning-based detection method is more robust than traditional image processing methods, the influence of illumination conditions and image noise is very small. Low image resolution can be addressed by super-resolution enlargement, and image blur can be removed by image processing methods. In general, image quality degradation can be mitigated by corresponding methods.
Q2: Could you explain a bit more how the defect localization was conducted?
A: We proposed a method that combines a SLAM algorithm with an image matching algorithm: first, the position of each key-frame during inspection is calculated using mono-visual SLAM; then each frame containing defects is matched automatically with the key-frames, so the position of the matched key-frame gives the position of the defect.
Q3: Would you mind sharing some publications on climbing UAV that you’ve developed?
A: Shang, J., & Zhang, J. (2019). Real-time crack assessment using deep neural networks with wall-climbing unmanned aerial system. Computer-Aided Civil and Infrastructure Engineering, 35. doi:10.1111/mice.12519.
Q4: Could you please explain a little bit about your techniques to measure the crack width?
A: Please refer to our paper: Ni, F., Zhang, J., & Chen, Z. (2018). Zernike-moment measurement of thin-crack width in images enabled by dual-scale deep learning. Computer-Aided Civil and Infrastructure Engineering, 34. doi:10.1111/mice.12421.
Q5: What is the strongest wind condition where your UAV can work effectively?
A: The maximum wind speed we have encountered during bridge inspection is about 13 m/s, roughly a level 6 (strong breeze) wind.
Q6: Are the different proposed options supposed to be included in a particular drone, or offered as standalone software packages that can work with any drone platform an agency may be using? I am referring to the collision detection/allowing the drone to touch the structure, the bolt detection/classification, structure deformation, etc.
A: At present, our algorithms and software need to be used with the UAV we developed, because part of the real-time detection relies on our data transmission and radio hardware. In the future, we may open-source some of our early work.
Q7: How can you determine the distance between the camera and the targets? If you do not know the distance, how can you calculate the displacements?
A: The target is a specially made infrared target with known size and characteristics. A homography matrix can be used in the calculation, so there is no need to know the distance. Details can also be found in the answer to Question 10.
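The homography idea in this answer can be sketched as follows: once the pixel-to-plane homography has been estimated from the known geometry of the infrared target (in practice from at least four known points, such as its point light sources), pixel coordinates map back to target-plane coordinates, so displacement comes out directly in millimeters without a measured object distance. The matrix values below are hypothetical:

```python
# Applying a pixel-to-plane homography (hypothetical values; in practice
# the 3x3 matrix is estimated from at least four known points on the
# infrared target, such as its point light sources). Pixels mapped to
# target-plane coordinates give displacement in mm directly, without a
# measured object distance.

def apply_homography(H, x, y):
    """Map point (x, y) through a 3x3 homography given as nested lists."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w

# Assumed pixel-to-plane homography: here a pure scaling of 0.25 mm/pixel.
H = [[0.25, 0.0, 0.0],
     [0.0, 0.25, 0.0],
     [0.0, 0.0, 1.0]]

# The target feature moves from pixel (400, 200) to pixel (408, 200):
x0, y0 = apply_homography(H, 400.0, 200.0)
x1, y1 = apply_homography(H, 408.0, 200.0)
print(x1 - x0, y1 - y0)  # 2.0 mm horizontal displacement, 0.0 vertical
```

A general homography also absorbs perspective tilt, which is why no separate distance measurement is needed once the target geometry is known.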
Q8: Could you please explain the tolerance for crack width in the bridge inspection specifications? Do you have an evaluation index for bridge health condition or bridge construction quality?
A: Different specifications have different requirements for the allowable crack width of bridges, and the requirements also vary with bridge type and bridge part. Specifications generally require that the maximum crack in a key area of a bridge not exceed 0.4-0.8 mm. Our equipment has a crack detection accuracy of 0.1 mm, which meets the specification requirements. Methods are available that use crack information for structural condition assessment.
Q9: How do you scale images from floating UAVs for crack measurement?
A: Because our wall-climbing UAVs can cling to the structural surface during inspection, the camera can be kept facing the surface at a constant distance, so the scale of the captured image remains unchanged and tilt correction is not required. For a UAV that inspects while flying, we installed a laser ranging sensor on the camera to determine the image scale parameters from the object distance.
Q10: For tracking multiple infrared targets with one camera, it will cause different measuring accuracy for different infrared targets due to different object distances. How did you deal with this problem? Did you measure each object distance for each infrared target for calculating the scale factors?
A: One camera that we developed can track up to 10 infrared targets at the same time. However, considering clear imaging and measurement accuracy, we usually install multiple cameras and choose lenses with different focal lengths for targets at different distances. For example, we use a 200 mm lens for targets at object distances of 100-200 m and a 300 mm lens for targets at 200-300 m. We can ensure that our targets are within a clear imaging range, and the accuracy for targets at different object distances can still reach the millimeter level. In addition, we use both the object-distance method and the known-target-size method to calculate the scale factor. When the object distance is not convenient to measure, we calculate the scale factor from the known spacing between several small point light sources on our infrared target.
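The two scale-factor calculations mentioned in this answer can be sketched directly. The pixel pitch, focal length, and target spacing below are illustrative assumptions, not the actual hardware specifications:

```python
# Two ways of getting the image scale factor (mm per pixel): from the
# object distance via the pinhole model, or from a target of known
# physical size. All numeric values are illustrative assumptions.

def scale_from_distance(object_distance_m, focal_length_mm, pixel_pitch_um):
    """mm/pixel from the pinhole model: scale = Z * pitch / f."""
    return object_distance_m * 1e3 * (pixel_pitch_um * 1e-3) / focal_length_mm

def scale_from_known_size(target_size_mm, target_size_px):
    """mm/pixel from a target of known physical size."""
    return target_size_mm / target_size_px

# 150 m object distance, 200 mm lens, 5 um pixels:
s1 = scale_from_distance(150.0, 200.0, 5.0)
# A 100 mm spacing between target light sources spans 27 pixels:
s2 = scale_from_known_size(100.0, 27.0)
print(s1, s2)  # both a few mm per pixel at these assumed settings
```

With a scale of a few mm/pixel, sub-pixel centroid tracking of the infrared light sources is what brings the final displacement accuracy down to the millimeter level.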
Q11: What is the maximum expected deflection of the 300-meter railroad cable-stayed bridge under train load? Is it supporting light rail or heavy rail?
A: The maximum deflection of the 300-meter cable-stayed bridge under a moving train was 124 mm; it increased to 225 mm when the combination of moving train, temperature, and shrinkage and creep was applied to the bridge. The bridge is designed for high-speed railway vehicles.
Q12: Do you think it can be tailored for bridge inspection after fires, like comparing pictures of the bridge before and after a fire and tracking the damage?
A: For bridges after a fire, obvious stains will be left on the concrete or steel surface by flame and smoke, which brings challenges to the detection of surface defects. On the one hand, adding some post-fire defect images to the deep learning database will help; on the other hand, combining RGB images with short-range laser scanning can greatly reduce the impact of surface stains on inspection accuracy.
Q13: Would you please explain the detailed specifications of the LiDAR used in this research? The model and the manufacturer, if that is fine.
A: The LiDAR we use to scan structural three-dimensional point clouds is the VZ-400i from RIEGL. Its effective range is 800 m and its best accuracy is 3 mm.
Q14: Can you please tell us whether you used infrared images for this research?
A: Instead of using infrared images for displacement calculation, our system uses active infrared point light sources mounted on the main girder; the image sensor of the camera mounted on the pier captures these infrared light sources to achieve real-time tracking. Infrared light has strong penetration. In addition, we install filters on our lenses that pass only infrared light above 850 nm, eliminating the interference of natural stray light.
Q15: How many minutes does it take to detect defects without GPS, and for a bridge with how long a span?
A: It takes about 40 minutes to inspect the surface of a 150-meter-tall bridge tower, about 1.5 hours to inspect the bottom surface of a 300-meter-span bridge, and about 1 hour to inspect the surface bolts of a 120-meter steel truss bridge.
Q16: What are the processes that can be applied to the crack width data you obtain? Are there methods to make a global evaluation of the system dynamics using these data, or are they just used in local evaluations?
A: The first step is to capture crack images of a bridge with cameras; then a deep learning-enabled quantitative crack width measurement method can be employed to extract crack widths from the images. In the detection and mapping phase, dual-scale convolutional neural networks are designed to detect cracks in complex scene images with validated high accuracy. Subsequently, a novel crack width estimation method based on the Zernike moment operator is further developed for thin cracks. The effect of cracks on global structural dynamic characteristics is not obvious; cracks carry mostly local information for local evaluation. The fused evaluation of detection information (e.g., cracks) and monitoring data (e.g., acceleration) is a promising direction for future research.
Q17: Wondering whether you are working on any methods to detect subsurface defects. In other words, do you have methods to identify cracks/deterioration that are not visible from the outside?
A: For the detection of internal cracks or defects, acoustic wave-based methods are the mainstream approach. We are currently trying impact-echo and ultrasonic methods for such detection.
Q18: Can you please discuss what happens with microwave radar monitoring? What does the microwave radar look for and pick up?
A: The transmitter of the microwave radar generates a set of linear frequency-modulated continuous-wave signals. Most of the signal power goes to the transmitting antenna (the remainder goes to the receiver as the local-oscillator signal) and is radiated toward the measured target. The target reflects an echo signal, which the receiving antenna picks up; the receiver then mixes the echo signal with the local-oscillator signal. After the signal processor filters and samples the mixed signal, the resulting data contain the distance and vibration information of the measured target. The true vibration of the target is obtained by combining this vibration information with the elevation angle measured by a built-in gyroscope.
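For the range part, the chirp-and-mix chain described in this answer reduces to a simple relation: mixing the echo with the local-oscillator copy of a linear chirp yields a beat frequency proportional to target range. The sketch below uses illustrative radar parameters (assumptions, not the instrument's actual specifications); the vibration is recovered from the phase of the beat signal and is not sketched here:

```python
# FMCW range relation behind the mixing step (illustrative parameters,
# not the radar's actual specifications). Mixing the echo with the
# local-oscillator chirp yields a beat frequency f_b proportional to
# target range R: f_b = 2 * R * B / (c * T), with sweep bandwidth B and
# sweep time T.

C = 299_792_458.0  # speed of light in m/s

def beat_frequency_hz(range_m, bandwidth_hz, sweep_time_s):
    """Beat frequency produced by a target at the given range."""
    return 2.0 * range_m * bandwidth_hz / (C * sweep_time_s)

def range_from_beat_m(beat_hz, bandwidth_hz, sweep_time_s):
    """Invert the relation to recover the target range."""
    return beat_hz * C * sweep_time_s / (2.0 * bandwidth_hz)

# Assumed 150 MHz sweep over 1 ms, target at 100 m:
fb = beat_frequency_hz(100.0, 150e6, 1e-3)
print(fb, range_from_beat_m(fb, 150e6, 1e-3))  # round trip recovers 100 m
```

This is why filtering and sampling the mixed signal is enough to separate targets by distance before the vibration is extracted.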